Triangle structure diagrams for a single machine batching problem with identical jobs
Advances in Difference Equations volume 2014, Article number: 71 (2014)
Abstract
The problem of batching identical jobs on a single machine to minimize the sum of completion times is studied by employing the difference analysis technique. Constant processing times and batch setup times are assumed. We first establish the relation between the optimal solution and the first-order difference of the optimal objective function in terms of the number of jobs and investigate the properties of the first-order difference. Then we obtain the triangle structure diagram of the batching problem in $O(\sqrt{n})$ time at most by using the permutation of some numbers which describe the character of the first-order difference. The diagrams enable us to see clearly the specific expressions of optimal solutions for the $n$-jobs batching problem and any $m$-jobs batching problem simultaneously, where $m\le n$. Also, we show that the result proposed by Santos (MSc thesis, 1984) and Santos and Magazine (Oper. Res. Lett. 4:99-103, 1985) is a special case of our result.
MSC:90B30, 90B35.
1 Introduction
Consider the problem of scheduling identical jobs on a single machine, in which the jobs are processed in batches, with a setup time for each batch. The completion time of a job coincides with the completion time of the last scheduled job in its batch, and all jobs in this batch have the same completion time. For a given number of jobs, we want to choose batch sizes so as to minimize the sum of the completion times of the jobs. There is a trade-off between keeping the number of setups small, by having large batches, and keeping the time each job waits for its batch to finish small, by having small batches.
Formally, there is a set of n jobs with identical processing time, $J=\{{j}_{1},\dots ,{j}_{n}\}=\{1,\dots ,n\}$, to be processed on a single machine. In a given schedule, for each job $j\in J$, we denote by ${C}_{j}$ its completion time. The problem is, given job processing time p and setup time S, to find the number of batches k, and batch sizes ${b}_{i}$, such that ${\sum}_{i=1}^{k}{b}_{i}=n$, so as to minimize ${\sum}_{j=1}^{n}{C}_{j}={\sum}_{i=1}^{k}{b}_{i}{\sum}_{j=1}^{i}(S+{b}_{j}p)$. The problem is referred to as $1\mid {p}_{j}=p,S\text{-}\mathit{batch}\mid \sum {C}_{j}$.
There are several different approaches to solving the problem $1\mid {p}_{j}=p,S\text{-}\mathit{batch}\mid \sum {C}_{j}$. Coffman et al. [1] propose a backward dynamic programming algorithm, running in $O({n}^{2})$ time. A faster algorithm is given by Naddef and Santos [2], running in $O((p/S)n)$ time. Another is given by Coffman, Nozari and Yannakakis [3], running in $O(\sqrt{n})$ time. Shallcross [4] presents an optimal solution of the form ${b}_{i}=\{\begin{array}{ll}\lfloor (c-iS)/p\rfloor ,& 1\le i\le l,\\ \lfloor (c-iS-1)/p\rfloor ,& l<i\le k,\end{array}$ where c and l can be determined by an algorithm running in $O(\log p\log (np))$ time. However, Potts and Kovalyov [5] point out that the algorithm of Shallcross [4] is rather intricate. None of the optimal solutions mentioned above is given by a specific closed-form expression. Potts and Kovalyov [5] point out that a key to the development of a polynomial algorithm is provided by Santos [6] and Santos and Magazine [7], who analyze a continuous relaxation in which batch sizes are not constrained to be integer. Specifically, they show that the optimal number of batches is $k=\lceil \sqrt{1/4+2np/S}-1/2\rceil $ and the optimal batch sizes ${b}_{i}$ are $n/k+S(k+1)/(2p)-iS/p$ for $i=1,\dots ,k$.
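To make these two starting points concrete, here is a small sketch (our own illustration with hypothetical function names, not the authors' code) of the $O(n^2)$ dynamic program and the Santos-Magazine formula:

```python
import math

def dp_batching(n, p, S):
    """O(n^2) backward dynamic program of Coffman et al.:
    F(j) = min over 0 <= t < j of j*(S + (j - t)*p) + F(t)."""
    F = [0] * (n + 1)
    succ = [0] * (n + 1)
    for j in range(1, n + 1):
        F[j], succ[j] = min((j * (S + (j - t) * p) + F[t], t) for t in range(j))
    sizes, j = [], n
    while j > 0:                     # unwind the optimal successor chain
        sizes.append(j - succ[j])    # the first (largest) batch comes first
        j = succ[j]
    return F[n], sizes

def santos_sizes(n, p, S):
    """Santos-Magazine continuous relaxation: sizes n/k + S(k+1)/(2p) - i*S/p;
    an optimal integer solution only when all sizes happen to be integers."""
    k = math.ceil(math.sqrt(0.25 + 2 * n * p / S) - 0.5)
    return [n / k + S * (k + 1) / (2 * p) - i * S / p for i in range(1, k + 1)]
```

For $n=98$, $p=1$, $S=4$ the formula yields the integer sizes $26,22,18,14,10,6,2$ and agrees with the dynamic program; for $n=100$ it yields fractional sizes, which is exactly the gap discussed below.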
The results of Santos [6] and Santos and Magazine [7] are important because they give the optimal solutions in explicit form, which not only makes the problem much easier to solve than with other algorithms, but also supports complexity proofs for other scheduling problems. For example, Albers and Brucker [8] give NP-hardness proofs for two batching problems by using the expression of Santos [6] and Santos and Magazine [7]. Unfortunately, these explicit expressions are valid only for some of the problem instances. One would like to know whether there is a specific solution formula that applies to all problem instances; this is the motivation for the present study. In this paper, we first establish the relation between the optimal solution and the first-order difference of the optimal objective function in terms of the number of jobs, and we investigate the properties of the first-order difference. Then, we obtain the triangle structure diagram of the batching problem in $O(\sqrt{n})$ time at most by using the permutation of some numbers which describe the character of the first-order difference. The diagrams enable us to see clearly the specific expressions of optimal solutions for the $n$-jobs batching problem and any $m$-jobs batching problem simultaneously, where $m\le n$.
This paper is organized as follows. In Section 2 we establish the relation between the optimal solution and the first-order difference of the optimal objective function in terms of the number of jobs and investigate the properties of the first-order difference. The triangle structure diagrams are shown for the case $S=vp$ in Section 3 and for the case $S=vp+r$ in Section 4, respectively, where v and $0<r<p$ are integers. Section 5 contains a conclusion and a discussion of some possible extensions.
2 Properties of the first-order difference
In this section, we introduce the concept of differences of the optimal objective function in terms of the number of jobs, and we show that the first-order difference sequence is strictly monotone increasing and that each term in the second-order difference sequence, that is, each increment of the first-order difference terms, lies between p and 2p. Using these properties, we can reduce the batch sizing problem to finding the interval in which some fixed number falls, where the interval consists of two adjacent terms of the first-order difference sequence.
For ease of presentation, we sequence the jobs according to non-increasing indices. Any solution of problem $1\mid {p}_{j}=p,S\text{-}\mathit{batch}\mid \sum {C}_{j}$ is of the form
where k is the number of batches,
and the batch sizes are ${b}_{1}={n}_{k}-{n}_{k-1},\dots ,{b}_{k-1}={n}_{2}-{n}_{1},{b}_{k}={n}_{1}$. Every solution BS corresponds to an objective function value
where ${n}_{0}=0$.
In order to solve the batch sizing problem, we obviously have to find a constant k and a sequence of indices
such that the above objective function value is minimized. Clearly, problem $1\mid {p}_{j}=p,S\text{-}\mathit{batch}\mid \sum {C}_{j}$ is a trivial matter when $S\le p$. We have the following result.
Theorem 1 If $S\le p$, then for problem $1\mid {p}_{j}=p,S\text{-}\mathit{batch}\mid \sum {C}_{j}$ there exists an optimal solution in which $k=n$ and ${n}_{i}=i$ for $i=1,\dots ,n$, that is, ${b}_{1}=\cdots ={b}_{k}=1$.
By Theorem 1, we assume that $S>p$ hereafter.
Let $F(j)$ denote the minimum sum of completion times for the $j$-jobs batching problem containing jobs $1,\dots ,j$. Coffman et al. [1] propose a backward dynamic programming algorithm. The initialization is
and the recursion for $j=1,\dots ,n$ is
Under the most natural implementation, the algorithm requires $O({n}^{2})$ time. Below we examine the recursion formula further, so as to determine completely the optimal successor t of j, that is, an integer t with $0\le t\le j-1$ such that
holds.
For $0\le i\le j-1$, set
Thus, we have
Let ${\{{\mathrm{\Delta}}_{1}(i)\}}_{i=0}^{+\mathrm{\infty}}$ denote the first-order difference sequence of the optimal objective function in terms of the number of jobs, where ${\mathrm{\Delta}}_{1}(i)=F(i+1)-F(i)$ for $i=0,1,2,\dots $ . The relation $F(j,i+1)<(>)\phantom{\rule{0.25em}{0ex}}F(j,i)$, stating that $i+1$ is better (worse) than i as a successor of j, is equivalent to $${\mathrm{\Delta}}_{1}(i)<(>)\phantom{\rule{0.25em}{0ex}}jp.$$
To break ties when choosing the successor, we assume that if ${\mathrm{\Delta}}_{1}(i)=F(i+1)-F(i)=jp$, that is, $F(j,i+1)=F(j,i)$, then $i+1$ is better than i as a successor of j. Thus, if the sequence ${\{{\mathrm{\Delta}}_{1}(i)\}}_{i=0}^{+\mathrm{\infty}}$ is strictly monotone increasing, the problem of determining the optimal successor of j can be reduced to finding a nonnegative integer t such that ${\mathrm{\Delta}}_{1}(t-1)\le jp<{\mathrm{\Delta}}_{1}(t)$, where we define ${\mathrm{\Delta}}_{1}(-1)=0$. We denote by $\mathit{SUCC}(j)$ the optimal successor of j hereafter.
Proposition 1 Assume that the sequence ${\{{\mathrm{\Delta}}_{1}(i)\}}_{i=-1}^{+\mathrm{\infty}}$ is strictly monotone increasing and ${\mathrm{\Delta}}_{1}(t)-{\mathrm{\Delta}}_{1}(t-1)\ge p$ for $t=1,2,\dots $ . Then for each j there is a unique ${t}_{0}$ with $0\le {t}_{0}\le j-1$ such that ${\mathrm{\Delta}}_{1}({t}_{0}-1)\le jp<{\mathrm{\Delta}}_{1}({t}_{0})$, and ${t}_{0}=\mathit{SUCC}(j)$.
Proof We first prove the existence by induction. When $j=1$, we have $0={\mathrm{\Delta}}_{1}(-1)\le p<{\mathrm{\Delta}}_{1}(0)=F(1)-F(0)=S+p$ and $\mathit{SUCC}(1)=0$. Now suppose that the claim holds when $j\le h$ for some positive integer h, i.e. there is ${t}_{0}$ with $0\le {t}_{0}\le h-1$ such that ${\mathrm{\Delta}}_{1}({t}_{0}-1)\le hp<{\mathrm{\Delta}}_{1}({t}_{0})<\cdots <{\mathrm{\Delta}}_{1}(h-1)$. For $j=h+1$, by the proposition and induction assumptions, we have $0={\mathrm{\Delta}}_{1}(-1)\le (h+1)p=hp+p<{\mathrm{\Delta}}_{1}(h-1)+p\le {\mathrm{\Delta}}_{1}(h)$. Thus, the existence holds for $j=h+1$, and it follows that $({\mathrm{\Delta}}_{1}(-1),{\mathrm{\Delta}}_{1}(0),\dots ,{\mathrm{\Delta}}_{1}(h))$ is a partition of $[0,{\mathrm{\Delta}}_{1}(h))$.
Due to the fact that $({\mathrm{\Delta}}_{1}(-1),{\mathrm{\Delta}}_{1}(0),\dots ,{\mathrm{\Delta}}_{1}(j-1))$ is again a partition of $[0,{\mathrm{\Delta}}_{1}(j-1))$, the uniqueness holds.
Since the inequalities ${\mathrm{\Delta}}_{1}(-1)<{\mathrm{\Delta}}_{1}(0)<\cdots <{\mathrm{\Delta}}_{1}({t}_{0}-1)\le jp$ imply that ${t}_{0}$ is better than ${t}_{0}-1$, ${t}_{0}-1$ better than ${t}_{0}-2,\dots$, 1 better than 0 in order, and the inequalities $jp<{\mathrm{\Delta}}_{1}({t}_{0})<\cdots <{\mathrm{\Delta}}_{1}(j-1)$ imply that ${t}_{0}$ is better than ${t}_{0}+1$, ${t}_{0}+1$ better than ${t}_{0}+2,\dots$, $j-2$ better than $j-1$ in order as a successor of j, we have ${t}_{0}=\mathit{SUCC}(j)$. □
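Proposition 1 can be checked numerically. The sketch below (our own illustration with a hypothetical function name, not part of the paper) computes F by the recursion, forms the first-order differences, and confirms that the interval rule returns an optimal successor:

```python
def verify_interval_rule(n, p, S):
    """For S > p, check that the t with Delta1(t-1) <= j*p < Delta1(t)
    (with Delta1(-1) = 0) attains the minimum in the recursion for F(j)."""
    F = [0] * (n + 2)
    for j in range(1, n + 2):
        F[j] = min(j * (S + (j - t) * p) + F[t] for t in range(j))
    delta1 = [F[i + 1] - F[i] for i in range(n + 1)]  # delta1[i] = Delta1(i)
    for j in range(1, n + 1):
        # smallest t with j*p < Delta1(t); then Delta1(t-1) <= j*p automatically
        t = next(t for t in range(j) if j * p < delta1[t])
        assert j * (S + (j - t) * p) + F[t] == F[j]   # t is an optimal successor
    return True
```

For instance, `verify_interval_rule(60, 2, 7)` passes, with $S=7>p=2$ as the proposition requires.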
Now we give the key properties of the firstorder difference.
Proposition 2 If $S>p$, then the sequence ${\{{\mathrm{\Delta}}_{1}(i)\}}_{i=-1}^{+\mathrm{\infty}}$ is strictly monotone increasing and $p\le {\mathrm{\Delta}}_{1}(t)-{\mathrm{\Delta}}_{1}(t-1)\le 2p$ for $t=1,2,\dots $ .
Proof We prove this proposition by induction. Simple calculations yield
and
Now suppose that the claim holds when $j\le h$ for some positive integer h, i.e. ${\mathrm{\Delta}}_{1}(-1)<{\mathrm{\Delta}}_{1}(0)<\cdots <{\mathrm{\Delta}}_{1}(h)$ and $p\le {\mathrm{\Delta}}_{1}(t)-{\mathrm{\Delta}}_{1}(t-1)\le 2p$ for $t=1,2,\dots ,h$. We need to show that it holds when $j=h+1$.
By the induction assumption and Proposition 1, we may assume that $\mathit{SUCC}(h+1)={t}_{0}$, where $0\le {t}_{0}\le h$ and ${\mathrm{\Delta}}_{1}({t}_{0}-1)\le (h+1)p<{\mathrm{\Delta}}_{1}({t}_{0})$. Since $p\le {\mathrm{\Delta}}_{1}({t}_{0})-{\mathrm{\Delta}}_{1}({t}_{0}-1)$ and $2p<S+p={\mathrm{\Delta}}_{1}(0)-{\mathrm{\Delta}}_{1}(-1)$, we see that the optimal successor of h is either ${t}_{0}-1$ or ${t}_{0}$, with $0\le {t}_{0}-1\le h$. Noting the fact that $2p<S+p={\mathrm{\Delta}}_{1}(0)-{\mathrm{\Delta}}_{1}(-1)$ and ${\mathrm{\Delta}}_{1}(1)-{\mathrm{\Delta}}_{1}(0)=2p$, along with $p\le {\mathrm{\Delta}}_{1}(t)-{\mathrm{\Delta}}_{1}(t-1)$ for $t=2,\dots ,h$, we have ${\mathrm{\Delta}}_{1}(h)>(h+2)p$. Thus, the optimal successor of $h+2$ is between 0 and h. By similar arguments for h, we see that the optimal successor of $h+2$ is either ${t}_{0}$ or ${t}_{0}+1$, with $0\le {t}_{0}+1\le h$. There are four cases to consider: (a) $\mathit{SUCC}(h)=\mathit{SUCC}(h+1)=\mathit{SUCC}(h+2)={t}_{0}$, (b) $\mathit{SUCC}(h)={t}_{0}-1$, $\mathit{SUCC}(h+1)=\mathit{SUCC}(h+2)={t}_{0}$, (c) $\mathit{SUCC}(h)=\mathit{SUCC}(h+1)={t}_{0}$, $\mathit{SUCC}(h+2)={t}_{0}+1$, (d) $\mathit{SUCC}(h)={t}_{0}-1$, $\mathit{SUCC}(h+1)={t}_{0}$, $\mathit{SUCC}(h+2)={t}_{0}+1$.
Case (a) $\mathit{SUCC}(h)=\mathit{SUCC}(h+1)=\mathit{SUCC}(h+2)={t}_{0}$. In this case,
Thus, we have
Case (b) $\mathit{SUCC}(h)={t}_{0}-1$, $\mathit{SUCC}(h+1)=\mathit{SUCC}(h+2)={t}_{0}$. In this case,
Thus, we have
Since $\mathit{SUCC}(h+1)=\mathit{SUCC}(h+2)={t}_{0}$, the inequalities ${\mathrm{\Delta}}_{1}({t}_{0})>(h+2)p>(h+1)p\ge {\mathrm{\Delta}}_{1}({t}_{0}-1)$ hold. This implies ${\mathrm{\Delta}}_{1}(h+1)-{\mathrm{\Delta}}_{1}(h)=(h+2)p-{\mathrm{\Delta}}_{1}({t}_{0}-1)\ge p$. Noting the fact that $1\le {t}_{0}\le h$, by the induction assumption, we have ${\mathrm{\Delta}}_{1}(h+1)-{\mathrm{\Delta}}_{1}(h)=(h+2)p-{\mathrm{\Delta}}_{1}({t}_{0}-1)\le {\mathrm{\Delta}}_{1}({t}_{0})-{\mathrm{\Delta}}_{1}({t}_{0}-1)\le 2p$.
Case (c) $\mathit{SUCC}(h)=\mathit{SUCC}(h+1)={t}_{0}$, $\mathit{SUCC}(h+2)={t}_{0}+1$. In this case,
Thus, we have
Since $\mathit{SUCC}(h)=\mathit{SUCC}(h+1)={t}_{0}$, the inequalities ${\mathrm{\Delta}}_{1}({t}_{0})>(h+1)p>hp$ hold. This implies ${\mathrm{\Delta}}_{1}(h+1)-{\mathrm{\Delta}}_{1}(h)={\mathrm{\Delta}}_{1}({t}_{0})-hp\ge p$. Also, we have ${\mathrm{\Delta}}_{1}(h+1)-{\mathrm{\Delta}}_{1}(h)={\mathrm{\Delta}}_{1}({t}_{0})-hp\le 2p$. Otherwise, ${\mathrm{\Delta}}_{1}({t}_{0})-hp>2p$ would imply ${\mathrm{\Delta}}_{1}({t}_{0})>(h+2)p$, contradicting the fact that ${\mathrm{\Delta}}_{1}({t}_{0})\le (h+2)p<{\mathrm{\Delta}}_{1}({t}_{0}+1)$.
Case (d) $\mathit{SUCC}(h)={t}_{0}-1$, $\mathit{SUCC}(h+1)={t}_{0}$, $\mathit{SUCC}(h+2)={t}_{0}+1$. In this case,
Thus, we have
Since $1\le {t}_{0}\le h$, along with the induction assumption, the inequalities $p\le {\mathrm{\Delta}}_{1}(h+1)-{\mathrm{\Delta}}_{1}(h)\le 2p$ hold. □
These two propositions, as well as the four formulas obtained in the proof of Proposition 2, are very important for our batch sizing problem. Proposition 1 ensures that the problem can be reduced to finding an interval formed by the first-order difference sequence. Proposition 2 and the four formulas make it possible to determine the first-order difference sequence exactly. The inequality $p\le {\mathrm{\Delta}}_{1}(h+1)-{\mathrm{\Delta}}_{1}(h)\le 2p$ implies that each nonnegative integer must be the optimal successor of one or two positive integers, and by the monotonicity each positive integer has a unique nonnegative integer as its optimal successor. Based on the successor membership between these integers, we can obtain a partition of the set of positive integers. We define ${N}_{k}=\{j\mid \text{the optimal number of batches of the }j\text{-jobs batching problem is equal to }k\}$ for $k=1,2,\dots $ , called the $k$-batches case set of numbers of jobs, and the integers in ${N}_{k}$ are listed in increasing natural order. Then $({N}_{1},\dots ,{N}_{k},\dots )$ is a partition of the set of positive integers. Let ${C}_{1,i}=i$ for $i=1,\dots ,|{N}_{1}|$. Then ${N}_{1}=({C}_{1,1},\dots ,{C}_{1,|{N}_{1}|})$. We define ${C}_{k,i}=\{j\mid \mathit{SUCC}(j)\in {C}_{k-1,i}\}$ for $k=2,3,\dots $ and $i=1,\dots ,|{N}_{1}|$, called the ith periodic set of ${N}_{k}$, and the integers in ${C}_{k,i}$ are listed in increasing natural order. Then ${N}_{k}=({C}_{k,1},\dots ,{C}_{k,|{N}_{1}|})$ and $({C}_{1,1},\dots ,{C}_{1,|{N}_{1}|},\dots ,{C}_{k,1},\dots ,{C}_{k,|{N}_{1}|},\dots )$ is a partition of the set of positive integers. Based on the successor membership between these periodic sets, we now give a basic diagram of the batching problem as follows, where the nodes in the diagram consist of the integers in ${C}_{k,i}$ (see Figure 1).
In the following sections, we determine exactly the number of integers in ${C}_{k,i}$ and the optimal successor membership between the integers.
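The sets ${N}_{k}$ and the successor map can also be generated directly from the recursion. The following sketch (our own illustration with hypothetical names, not the authors' algorithm) implements the tie-breaking rule of Section 2, which prefers the larger successor, via the `-t` key:

```python
from collections import defaultdict

def batch_partition(n, p, S):
    """Group 1..n into the sets N_k by optimal batch count, breaking ties in
    the recursion toward the larger successor, as in the text."""
    F = [0] * (n + 1)
    succ = [0] * (n + 1)
    for j in range(1, n + 1):
        value, neg_t = min((j * (S + (j - t) * p) + F[t], -t) for t in range(j))
        F[j], succ[j] = value, -neg_t
    N = defaultdict(list)
    batches = [0] * (n + 1)          # optimal number of batches for j jobs
    for j in range(1, n + 1):
        batches[j] = batches[succ[j]] + 1
        N[batches[j]].append(j)
    return dict(N), succ
```

With $S=3p$, for example, this reproduces ${N}_{1}=\{1,2,3\}$ and the cardinalities $|{N}_{k}|=3k$ derived in Section 3.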
3 The triangle structure diagram in the case $S=vp$
In this section we present a specific diagram of the batching problem with $S=vp$, which can be obtained in constant time. Since the diagram consists of v triangles, we call it a triangle structure diagram. Using the optimal successor membership between the integers presented by the diagram, we give the specific expression of the optimal solution for the batching problem with $S=vp$ and any number of jobs, and show that the result proposed by Santos [6] and Santos and Magazine [7] is a special case of our result.
The diagram depends exactly on the first-order difference sequence. To compute the sequence, we can first obtain the values ${\mathrm{\Delta}}_{1}(0),{\mathrm{\Delta}}_{1}(1),\dots ,{\mathrm{\Delta}}_{1}(|{N}_{1}|)$ by simple calculations. Then, using the results of the previous section and the special properties of the first-order difference in the case $S=vp$, which will be given below, we can determine the first-order difference at the integers in ${N}_{2}$. Generally, we can compute the first-order difference at the integers in ${N}_{k}$ if those in ${N}_{k-1}$ are already known. The following two propositions describe the special properties of the first-order difference in the case $S=vp$.
Proposition 3 Assume $S=vp$. Then ${\mathrm{\Delta}}_{1}(1)-{\mathrm{\Delta}}_{1}(0)=2p$, and for $j=2,3,\dots $ , ${\mathrm{\Delta}}_{1}(j)-{\mathrm{\Delta}}_{1}(j-1)=2p$ if j and $j-1$ have the same optimal successor, and ${\mathrm{\Delta}}_{1}(j)-{\mathrm{\Delta}}_{1}(j-1)=p$ otherwise.
Proof Since ${\mathrm{\Delta}}_{1}(0)=F(1)-F(0)=S+p$ and ${\mathrm{\Delta}}_{1}(1)=F(2)-F(1)=S+3p$, we have ${\mathrm{\Delta}}_{1}(1)-{\mathrm{\Delta}}_{1}(0)=2p$. Suppose $\mathit{SUCC}(j)={t}_{0}$, where ${t}_{0}\ge 0$.
When $\mathit{SUCC}(j-1)=\mathit{SUCC}(j)=\mathit{SUCC}(j+1)={t}_{0}$, by (2), we have ${\mathrm{\Delta}}_{1}(j)-{\mathrm{\Delta}}_{1}(j-1)=2p$.
When $\mathit{SUCC}(j-1)=\mathit{SUCC}(j)={t}_{0}$ and $\mathit{SUCC}(j+1)={t}_{0}+1$, it follows from Proposition 1 that
Due to $S=vp$ and $F(i)=i(S+(i-t)p)+F(t)$, where $\mathit{SUCC}(i)=t$, $F(i)/p$ is an integer for $i=1,2,\dots $ . Thus, ${\mathrm{\Delta}}_{1}(i)/p$ is an integer for $i=0,1,2,\dots $ . By equation (6), ${\mathrm{\Delta}}_{1}({t}_{0})-(j-1)p=2p$. So, we have ${\mathrm{\Delta}}_{1}(j)-{\mathrm{\Delta}}_{1}(j-1)={\mathrm{\Delta}}_{1}({t}_{0})-(j-1)p=2p$, following from equation (4).
When $\mathit{SUCC}(j-1)={t}_{0}-1$, $\mathit{SUCC}(j)={t}_{0}$ and $\mathit{SUCC}(j+1)={t}_{0}+1$, it follows from Proposition 1 that
Since both ${\mathrm{\Delta}}_{1}({t}_{0}-1)/p$ and ${\mathrm{\Delta}}_{1}({t}_{0})/p$ are integers, along with equation (5), equation (7) implies ${\mathrm{\Delta}}_{1}(j)-{\mathrm{\Delta}}_{1}(j-1)={\mathrm{\Delta}}_{1}({t}_{0})-{\mathrm{\Delta}}_{1}({t}_{0}-1)=p$.
When $\mathit{SUCC}(j-1)={t}_{0}-1$ and $\mathit{SUCC}(j)=\mathit{SUCC}(j+1)={t}_{0}$, it follows from Proposition 1 that
Since both ${\mathrm{\Delta}}_{1}({t}_{0}-1)/p$ and ${\mathrm{\Delta}}_{1}({t}_{0})/p$ are integers, along with equation (3), equation (8) implies ${\mathrm{\Delta}}_{1}(j)-{\mathrm{\Delta}}_{1}(j-1)=(j+1)p-{\mathrm{\Delta}}_{1}({t}_{0}-1)=p$. □
Proposition 4 Assume $S=vp$. Then:

(i) $|{N}_{k}|=kv$ for $k=1,2,\dots $ .

(ii) $|{C}_{k,i}|=k$ for $i=1,\dots ,v$.

(iii) For $j\in {N}_{k}$, ${\mathrm{\Delta}}_{1}(j)-{\mathrm{\Delta}}_{1}(j-1)=2p$ if j is the last integer of ${C}_{k,i}$ for some $i\in \{1,\dots ,v\}$, and ${\mathrm{\Delta}}_{1}(j)-{\mathrm{\Delta}}_{1}(j-1)=p$ otherwise.
Proof We prove this proposition by induction. Clearly, all these results hold when $k=1$. Now suppose that they hold when $k=h$ for some positive integer h, i.e. $|{N}_{h}|=hv$, $|{C}_{h,i}|=h$ for $i=1,\dots ,v$, and for $j\in {N}_{h}$, ${\mathrm{\Delta}}_{1}(j)-{\mathrm{\Delta}}_{1}(j-1)=2p$ if j is the last integer of ${C}_{h,i}$ for some $i\in \{1,\dots ,v\}$, and ${\mathrm{\Delta}}_{1}(j)-{\mathrm{\Delta}}_{1}(j-1)=p$ otherwise. We need to show that they hold when $k=h+1$. For any $i\in \{1,\dots ,v\}$, by the induction assumption, we may assume ${C}_{h,i}=(a+1,\dots ,a+h)$, where a is a given positive integer, and we have
and
Due to Proposition 1, $j\in {C}_{h+1,i}$ if and only if
Noting the fact that ${\mathrm{\Delta}}_{1}(a)/p$ is a positive integer, the number of positive integers satisfying equation (9) must be $h+1$. Thus, we have $|{C}_{h+1,i}|=h+1$ and ${C}_{h+1,i}=(b,b+1,\dots ,b+h)$, where $b={\mathrm{\Delta}}_{1}(a)/p$. Since ${\mathrm{\Delta}}_{1}(a+1)-{\mathrm{\Delta}}_{1}(a)=\cdots ={\mathrm{\Delta}}_{1}(a+h-1)-{\mathrm{\Delta}}_{1}(a+h-2)=p$, we have $\mathit{SUCC}(j)\ne \mathit{SUCC}(j-1)$ for $j=b,b+1,\dots ,b+h-1$. Since ${\mathrm{\Delta}}_{1}(a+h)-{\mathrm{\Delta}}_{1}(a+h-1)=2p$, we have $\mathit{SUCC}(b+h)=\mathit{SUCC}(b+h-1)$. Thus,
and
which follows from Proposition 3. By the arbitrariness of i, results (i), (ii) and (iii) hold for $k=h+1$. □
We define $f(j)=({\mathrm{\Delta}}_{1}(j)-{\mathrm{\Delta}}_{1}(j-1))/p$ and label the node in Figure 1 by $f(j)$ instead of the integer j in ${C}_{k,i}$. Now we can give the triangle structure diagram in constant time in the case $S=vp$ (see Figure 2).
To use the information provided by the diagram, we denote by $(k,i,w)$ the wth integer in the ith periodic set ${C}_{k,i}$ of the kth batch case set ${N}_{k}$, i.e. $(k,i,w)={\sum}_{t=1}^{k-1}tv+(i-1)k+w$ under $S=vp$. For example, $(4,3,2)={\sum}_{t=1}^{4-1}tv+(3-1)\cdot 4+2=28$ if $v=3$. Since $({C}_{1,1},\dots ,{C}_{1,v},\dots ,{C}_{k,1},\dots ,{C}_{k,v},\dots )$ is a partition of the set of positive integers, we may write every positive integer $n={\sum}_{t=1}^{k-1}tv+(i-1)k+w$ as $(k,i,w)$, where k, i, and w are integers with $k\ge 1$, $1\le i\le v$ and $1\le w\le k$. We define $(0,0,0)=0$. Now we give one of our main results.
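The decomposition $n={\sum}_{t=1}^{k-1}tv+(i-1)k+w$ is easy to invert; a small sketch (the helper name `decode` is ours, not from the paper):

```python
def decode(n, v):
    """Invert n = sum_{t=1}^{k-1} t*v + (i-1)*k + w with 1 <= i <= v and
    1 <= w <= k (the case S = v*p), returning (k, i, w)."""
    k, q = 1, 0
    while q + k * v < n:    # q = sum_{t=1}^{k-1} t*v, the last integer of N_{k-1}
        q += k * v
        k += 1
    rem = n - q             # position of n inside N_k, between 1 and k*v
    i = (rem - 1) // k + 1
    w = rem - (i - 1) * k
    return k, i, w
```

For example, `decode(28, 3)` returns `(4, 3, 2)`, matching the example above; since $q$ grows quadratically in k, the loop runs $O(\sqrt{n/v})$ times.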
Theorem 2 Assume $S=vp$. Then for an arbitrary positive integer $(k,i,w)$, the following hold and can be decided in constant time:

(i)
$$\mathit{SUCC}(k,i,w)=\{\begin{array}{ll}0,& \mathit{\text{if }}k=1;\\ (k-1,i,w),& \mathit{\text{if }}w\le k-1;\\ (k-1,i,w-1),& \mathit{\text{if }}w=k,\end{array}$$

(ii) the optimal solution of the $(k,i,w)$-jobs batch sizing problem is
$${b}_{t}=\{\begin{array}{ll}(k-t)v+i-1,& \mathit{\text{for }}t=1,\dots ,k-w;\\ (k-t)v+i,& \mathit{\text{for }}t=k-w+1,\dots ,k,\end{array}$$(10)
where k, i, and w are integers with $k\ge 1$, $1\le i\le v$, and $1\le w\le k$.
Proof For any n, we can give another kind of expression $(k,i,w)$ in constant time. Also, the optimal successor and ${b}_{t}$ can be decided in constant time. Thus, the specific expression of the optimal solution is decided in constant time.
In the case $k=1$, $(k,i,w)=(1,i,1)=i<v+1$. This implies $(k,i,w)p<(v+1)p={\mathrm{\Delta}}_{1}(0)$. By Proposition 1, $\mathit{SUCC}(k,i,w)=0$. In the case $k>1$, result (i) clearly follows from Proposition 4 or Figure 2. Due to result (i),
Thus,
for $t=1,\dots ,k-w$,
for $t=k-w+1,\dots ,k-1$,
□
For example, consider an instance of the batch sizing problem with 100 jobs and $S=4p$. Here 100 can be written as $84+14+2={\sum}_{t=1}^{7-1}t\times 4+(3-1)\times 7+2=(7,3,2)$. By the triangle structure diagram in the case $S=4p$, we can obtain the successor relation between the numbers of jobs: $(7,3,2)\to (6,3,2)\to (5,3,2)\to (4,3,2)\to (3,3,2)\to (2,3,2)\to (1,3,1)\to (0,0,0)$. Thus, the optimal solution is $({b}_{1},\dots ,{b}_{7})=(26,22,18,14,10,7,3)$. Of course, we may also obtain the optimal solution directly from (10). For example, for a problem with $1\text{,}019$ jobs and $S=10p$, by $1\text{,}019=910+98+11={\sum}_{t=1}^{14-1}t\times 10+(8-1)\times 14+11=(14,8,11)$, the optimal solution is $({b}_{1},\dots ,{b}_{14})=(137,127,117,108,98,88,78,68,58,48,38,28,18,8)$, as follows from (10).
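Formula (10) is mechanical to evaluate; a sketch (the helper name `batch_sizes` is ours, not from the paper):

```python
def batch_sizes(k, i, w, v):
    """Optimal batch sizes from formula (10) for the (k,i,w)-jobs problem with
    S = v*p: b_t = (k-t)*v + i - 1 for t <= k-w, and (k-t)*v + i afterwards."""
    return [(k - t) * v + i - (1 if t <= k - w else 0) for t in range(1, k + 1)]
```

Here `batch_sizes(7, 3, 2, 4)` gives `[26, 22, 18, 14, 10, 7, 3]` for the 100-job instance, and `batch_sizes(14, 8, 11, 10)` the fourteen sizes of the $S=10p$ instance.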
We recall the solution formula of Santos [6] and Santos and Magazine [7]: ${b}_{i}=n/k+S(k+1)/(2p)-iS/p$ for $i=1,\dots ,k$, where $k=\lceil \sqrt{1/4+2np/S}-1/2\rceil $. The validity of the formula depends on whether $n/k+S(k+1)/(2p)-iS/p$, for $i=1,\dots ,k$, are integers. First, it is necessary that $S/p$ is an integer, i.e. $S=vp$, which is the case considered by us in this section. Noting that in the formula ${b}_{1}-{b}_{2}=\cdots ={b}_{k-1}-{b}_{k}=v$, it follows from the triangle structure diagram in the case $S=vp$ that $n=(k,i,1)$ and/or $n=(k,i,k)$. If $n=(k,i,1)$, then $n/k+S(k+1)/(2p)$ = $({\sum}_{t=1}^{k-1}tv+(i-1)k+1)/k+(k+1)v/2$ = $(k-1)v/2+i-1+1/k+(k+1)v/2$ = $kv+i-1+1/k$, which is not an integer when $k>1$; the case $k=1$ can be reduced to the case $n=(k,i,k)$. If $n=(k,i,k)$, then $n/k+S(k+1)/(2p)$ = $({\sum}_{t=1}^{k-1}tv+(i-1)k+k)/k+(k+1)v/2=(k-1)v/2+i-1+1+(k+1)v/2$ = $kv+i$, which is an integer. Thus, the formula of Santos [6] and Santos and Magazine [7] is appropriate for the problem with $S=vp$ and $n={\sum}_{t=1}^{k-1}tv+(i-1)k+k$, where k, v, and $i\le v$ are positive integers.
4 The triangle structure diagram in the case $S=vp+r$
In this section we present a specific diagram of the batching problem with $S=vp+r$ that can be obtained in $O(\sqrt{n})$ time, where $0<r<p$ and v are integers. Since the diagram consists of v identical triangles and has a triangle-like shape, we call it a triangle structure diagram. Using the optimal successor membership between integers presented by the diagram, we give the specific expression of the optimal solution for the batching problem with $S=vp+r$ and any number of jobs.
Noting the fact that ${\mathrm{\Delta}}_{1}(0)=S+p=(v+1)p+r$, we have ${N}_{1}=(1,\dots ,v,v+1)$. We rewrite ${N}_{k}$ as ${N}_{k}=({C}_{k,1},\dots ,{C}_{k,v},{A}_{k})$ for $k=1,2,\dots $ .
The diagram depends on the first-order difference sequence. Similarly to the process in the last section, we first explore the properties of the first-order difference and then give the triangle structure diagram. The proposition below shows that the increments of the first-order difference for the first kv integers in ${N}_{k}$ are periodic. Using this property, we can reduce the amount of computation needed to determine the first-order difference.
Proposition 5 Assume $S=vp+r$, where $0<r<p$ and v are integers, and let ${q}_{k-1}={\sum}_{t=1}^{k-1}|{N}_{t}|$. Then:

(i) $|{C}_{k,i}|=k$ for $i=1,\dots ,v$.

(ii) ${\mathrm{\Delta}}_{1}({q}_{k-1}+(i-1)k+k)-{\mathrm{\Delta}}_{1}({q}_{k-1}+(i-1)k)=(k+1)p$ for $i=1,\dots ,v$.

(iii) ${\mathrm{\Delta}}_{1}({q}_{k-1}+ik+w)-{\mathrm{\Delta}}_{1}({q}_{k-1}+ik+(w-1))$ = ${\mathrm{\Delta}}_{1}({q}_{k-1}+(i-1)k+w)-{\mathrm{\Delta}}_{1}({q}_{k-1}+(i-1)k+(w-1))$ for $i=1,\dots ,v-1$ and $w=1,\dots ,k$.
Proof We prove this proposition by induction. Clearly, all these results hold when $k=1$. Now suppose that they hold when $k=h$ for some positive integer h. We need to show that they hold when $k=h+1$. For any $i\in \{1,\dots ,v\}$, by result (ii) with $k=h$, ${\mathrm{\Delta}}_{1}({q}_{h-1}+(i-1)h+h)-{\mathrm{\Delta}}_{1}({q}_{h-1}+(i-1)h)=(h+1)p$, and it follows from Proposition 1 that result (i) holds when $k=h+1$.
Now we may write ${C}_{h+1,i}$ as $(b+1,\dots ,b+(h+1))$, where $b={q}_{h}+(i-1)(h+1)$. By Proposition 2, every integer in ${C}_{h,i}$ must be the optimal successor of some integer in ${C}_{h+1,i}$, and there is an integer, say $a+{t}_{0}$, where $a={q}_{h-1}+(i-1)h$, in ${C}_{h,i}$ such that it is the optimal successor of two integers in ${C}_{h+1,i}$. Thus, we have
By equation (5),
for $t=1,\dots ,{t}_{0}-1$.
By equation (3),
By equation (4),
By equation (5),
for $t={t}_{0}+2,\dots ,h+1$. Equations (11), (12), (13), and (14), along with the induction assumption, imply ${\mathrm{\Delta}}_{1}(b+h+1)-{\mathrm{\Delta}}_{1}(b)={\mathrm{\Delta}}_{1}(a+h)-{\mathrm{\Delta}}_{1}(a)+p=(h+2)p$, i.e. result (ii) holds when $k=h+1$.
By the induction assumption, we see that for $i=1,\dots ,v$, the position of the integer in each ${C}_{h,i}$ that is the optimal successor of two integers in ${C}_{h+1,i}$ is the same. Using a similar argument as in the proof of result (ii), we may obtain the respective formulas (11), (12), (13), and (14) for each $i\in \{1,\dots ,v\}$. The difference between them is that a and b differ between the formulas. By comparing those formulas, along with the induction assumption, we see that result (iii) holds when $k=h+1$. □
The proposition below gives the number of integers in ${N}_{k}$ and the first-order difference at the last integer in ${N}_{k}$. We define $Q(j)={\mathrm{\Delta}}_{1}(j)/p$. In view of the fact that ${\mathrm{\Delta}}_{1}(j)=pQ(j)$ and the fact that all results above relative to the first-order difference hold if we replace '${\mathrm{\Delta}}_{1}$' by 'Q' and remove 'p', we still call $Q(j)$ the first-order difference at j. For example, the sequence ${\{Q(i)\}}_{i=-1}^{+\mathrm{\infty}}$ is strictly monotone increasing, equation (3) becomes $Q(h+1)-Q(h)=(h+2)-Q({t}_{0}-1)$, and ${\mathrm{\Delta}}_{1}({q}_{k-1}+(i-1)k+k)-{\mathrm{\Delta}}_{1}({q}_{k-1}+(i-1)k)=(k+1)p$ implies $Q({q}_{k-1}+(i-1)k+k)-Q({q}_{k-1}+(i-1)k)=k+1$. We denote by $\lfloor x)$ the maximal integer less than the rational number x. Clearly, $0<x-\lfloor x)\le 1$; for example, $\lfloor 4.2)=4$ and $\lfloor 4)=3$.
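Since $\lfloor x)$ is nonstandard, note that it always equals $\lceil x\rceil -1$; a one-line sketch (the helper name is ours):

```python
import math

def lfloor(x):
    """Largest integer strictly less than x: lfloor(4.2) = 4, lfloor(4) = 3."""
    return math.ceil(x) - 1
```

In particular $\lfloor x)=\lfloor x\rfloor $ for non-integer x and $\lfloor x)=x-1$ at integers, so $0<x-\lfloor x)\le 1$ as stated.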
Proposition 6 Assume $S=vp+r$ and $\epsilon =r/p$, where $0<r<p$ and v are integers. Then for $k=1,2,\dots $ :

(i) $|{A}_{k}|=1+\lfloor k\epsilon )$,

(ii) ${q}_{k}=k+{\sum}_{t=1}^{k}(tv+\lfloor t\epsilon ))$,

(iii) $Q({q}_{k})={\sum}_{t=1}^{k}(tv+\lfloor t\epsilon ))+(k+1)+(k+1)v+(k+1)\epsilon $,

where ${q}_{k}={\sum}_{t=1}^{k}|{N}_{t}|$ is the last integer in ${N}_{k}$.
Proof We prove this proposition by induction. $|{A}_{1}|=1$ is known. Since $0<\epsilon <1$, we have $\lfloor \epsilon )=0$. This implies that results (i) and (ii) hold when $k=1$. By Proposition 5, $Q(v)=Q(0)+2v$. Since $\mathit{SUCC}(v)=\mathit{SUCC}(v+1)=0$, $\mathit{SUCC}(v+2)=1$ and $\lfloor \epsilon )=0$, by (4), we have $Q({q}_{1})=Q(v+1)=Q(v)+Q(0)-v=2v+2(v+1)+2\epsilon -v=v+2+2v+2\epsilon $. This implies that result (iii) holds when $k=1$. Now suppose that the results hold when $k=h$ for some positive integer h. We need to show that they hold when $k=h+1$. We know that ${A}_{h+1}=\{j\mid {\mathrm{\Delta}}_{1}({q}_{h}-|{A}_{h}|)\le jp<{\mathrm{\Delta}}_{1}({q}_{h})\}$ or ${A}_{h+1}=\{j\mid Q({q}_{h}-|{A}_{h}|)\le j<Q({q}_{h})\}$. By Proposition 5 and the induction assumption,
and
By $0<h\epsilon -\lfloor h\epsilon )\le 1$ and $0<(h+1)\epsilon -\lfloor (h+1)\epsilon )\le 1$, we see that the minimal positive integer greater than or equal to $Q({q}_{h}-|{A}_{h}|)$ is $h+(h+1)v+{\sum}_{t=1}^{h}(tv+\lfloor t\epsilon ))+1$, and the maximal positive integer less than $Q({q}_{h})$ is $(h+1)+{\sum}_{t=1}^{h+1}(tv+\lfloor t\epsilon ))$. Thus,
This implies result (i) holds. Due to Proposition 5, we have
This implies result (ii) holds. To prove result (iii) with $k=h+1$, we consider two cases: (a1) $\lfloor (h+1)\epsilon )=\lfloor h\epsilon )$, and (b1) $\lfloor (h+1)\epsilon )=\lfloor h\epsilon )+1$.
Case (a1) $\lfloor (h+1)\epsilon )=\lfloor h\epsilon )$. In this case, we have $|{A}_{h+1}|=|{A}_{h}|$. This implies that all the optimal successors of integers in ${A}_{h+1}$ are different. By equation (5), $Q({q}_{h+1})-Q({q}_{h+1}-\lfloor (h+1)\epsilon )-1)=Q({q}_{h})-Q({q}_{h}-\lfloor h\epsilon )-1)$. This, along with the induction assumption and Proposition 5, implies that
Case (b1) $\lfloor (h+1)\epsilon )=\lfloor h\epsilon )+1$. In this case, we have $|{A}_{h+1}|=|{A}_{h}|+1$. By similar arguments as used in the proof of Proposition 5, $Q({q}_{h+1})-Q({q}_{h+1}-\lfloor (h+1)\epsilon )-1)=Q({q}_{h})-Q({q}_{h}-\lfloor h\epsilon )-1)+1$. By the same argument as in Case (a1), we have
This ends the proof for result (iii). □
The proposition below shows that the optimal successor membership between numbers of jobs is exactly determined by the coefficients of ε in the expression of the first-order difference. To determine the optimal successor membership between the numbers of jobs, by Proposition 5, we assume $v=1$. We denote by $(k,w)$ the wth integer of ${C}_{k,1}$ and by $(k,k+w)$ the wth integer of ${A}_{k}$. We define $(k,0)=(k,1)-1$ and $(k,k+0)=(k,k)$, respectively. ${c}_{k,w}$ denotes the coefficient of ε in $Q(k,w)$ for $w=1,\dots ,k+\lfloor k\epsilon )+1$, or the coefficient of $(k,w)$ for short. We set ${r}_{k}=\lfloor k\epsilon )$.
Proposition 7 Assume $S=vp+r$ and $\epsilon =r/p$, where $0<r<p$ and v are integers. Then for $k=1,2,\dots $ :

(i)
$({c}_{k,1},\dots ,{c}_{k,k})$ is a permutation of $\{1,\dots ,k\}$.

(ii)
If ${c}_{k,{t}_{0}}=1$, then $\mathit{SUCC}(k,{t}_{0})=\mathit{SUCC}(k,{t}_{0}+1)=(k-1,{t}_{0})$; if ${c}_{k,k+{t}_{0}}=1$, then $\mathit{SUCC}(k,k+{t}_{0})=\mathit{SUCC}(k,k+{t}_{0}+1)=(k-1,(k-1)+{t}_{0})$.

(iii)
Triangle rule:
$${c}_{k,w}={c}_{k-1,{r}_{k-1}+w}\phantom{\rule{1em}{0ex}}\mathit{\text{for}}\ w=1,\dots ,k,$$(15)
$${c}_{k,k+w}={c}_{k,w}\phantom{\rule{1em}{0ex}}\mathit{\text{for}}\ w=1,\dots ,{r}_{k},$$(16)
$${c}_{k,k+{r}_{k}+1}=k+1.$$(17)
Proof Since $F(0)=0$, $F(1)/p=v+1+\epsilon $ and $F(j)/p=j(v+\epsilon +(j-\mathit{SUCC}(j)))+F(\mathit{SUCC}(j))/p$ for $j=1,2,\dots $ , $Q(j)=(F(j+1)-F(j))/p$ is a linear function of ε. Now we prove this proposition by induction. Simple calculations yield
if ${r}_{2}\le 1$ and
if ${r}_{2}>1$. This implies that all results hold when $k=1$ and $k=2$. Now suppose that they hold when $k=h$ for some positive integer $h\ge 2$. We need to show that they hold when $k=h+1$.
Now we prove result (i). There are two cases to consider: (a3) $\mathit{SUCC}(h+1,1)=\mathit{SUCC}(h+1,2)=(h,1)$, and (b3) $\mathit{SUCC}(h+1,1)\ne \mathit{SUCC}(h+1,2)$.
Case (a3) $\mathit{SUCC}(h+1,1)=\mathit{SUCC}(h+1,2)=(h,1)$. By equation (3), $Q(h+1,1)=Q(h,h+(1+{r}_{h}))+(h+1,2)-Q(h-1,(h-1)+(1+{r}_{h-1}))$. Due to the induction assumption, ${c}_{h,h+(1+{r}_{h})}=h+1$ and ${c}_{h-1,(h-1)+(1+{r}_{h-1})}=h$. Thus, we have ${c}_{h+1,1}=1$.
By equation (4), $Q(h+1,2)=Q(h+1,1)+Q(h,1)-(h+1,1)$. Thus, we have ${c}_{h+1,2}={c}_{h+1,1}+{c}_{h,1}={c}_{h,1}+1$.
By equation (5), $Q(h+1,3)=Q(h+1,2)+Q(h,2)-Q(h,1)$. Thus, we have ${c}_{h+1,3}={c}_{h+1,2}+{c}_{h,2}-{c}_{h,1}={c}_{h,1}+1+{c}_{h,2}-{c}_{h,1}={c}_{h,2}+1$. Similarly, we have ${c}_{h+1,w}={c}_{h,w-1}+1$ for $w=4,\dots ,h+1$. Thus,
This, along with the induction assumption, implies that $({c}_{h+1,1},\dots ,{c}_{h+1,h+1})$ is a permutation of $\{1,\dots ,h+1\}$. Thus, result (i) holds in the case.
Case (b3) $\mathit{SUCC}(h+1,1)\ne \mathit{SUCC}(h+1,2)$. We assume $\mathit{SUCC}(h+1,{t}_{0})=\mathit{SUCC}(h+1,{t}_{0}+1)=(h,{t}_{0})$, where $1<{t}_{0}\le h$. By equation (5), $Q(h+1,1)=Q(h,h+(1+{r}_{h}))+Q(h,1)-Q(h-1,(h-1)+(1+{r}_{h-1}))$. Due to the induction assumption, ${c}_{h+1,1}={c}_{h,h+(1+{r}_{h})}+{c}_{h,1}-{c}_{h-1,(h-1)+(1+{r}_{h-1})}=(h+1)+{c}_{h,1}-h={c}_{h,1}+1$. Similarly, we have ${c}_{h+1,w}={c}_{h,w}+1$ for $w=2,\dots ,{t}_{0}-1$.
By equation (3), $Q(h+1,{t}_{0})=Q(h+1,{t}_{0}-1)+(h+1,{t}_{0}+1)-Q(h,{t}_{0}-1)$. Thus, we have ${c}_{h+1,{t}_{0}}={c}_{h+1,{t}_{0}-1}-{c}_{h,{t}_{0}-1}=1$.
By equation (4), $Q(h+1,{t}_{0}+1)=Q(h+1,{t}_{0})+Q(h,{t}_{0})-(h+1,{t}_{0})$. Thus, we have ${c}_{h+1,{t}_{0}+1}={c}_{h+1,{t}_{0}}+{c}_{h,{t}_{0}}={c}_{h,{t}_{0}}+1$. Similarly, we have ${c}_{h+1,w}={c}_{h,w-1}+1$ for $w={t}_{0}+2,\dots ,h+1$. Thus,
This, along with the induction assumption, implies that $({c}_{h+1,1},\dots ,{c}_{h+1,h+1})$ is a permutation of $\{1,\dots ,h+1\}$. Thus, result (i) holds in the case.
By the same arguments as in the proof of result (i), we have
if ${r}_{h+1}={r}_{h}$;
if ${r}_{h+1}={r}_{h}+1$, where $1\le {t}_{0}\le ({r}_{h}+1)$. Due to equations (18), (19), (20), and (21), result (ii) holds when $k=h+1$.
Before other results are proved, we give an assertion, as follows.
Assume that ${c}_{j-1}$ and ${c}_{j}$ are the coefficients of ε corresponding to $Q(j-1)$ and $Q(j)$, respectively, where $j\in {N}_{k}$ and $k\ge 2$. Then
(∗) j is the optimal successor of two integers if ${c}_{j-1}\epsilon -\lfloor {c}_{j-1}\epsilon )>{c}_{j}\epsilon -\lfloor {c}_{j}\epsilon )$;
(∗∗) j is the optimal successor of one integer if ${c}_{j-1}\epsilon -\lfloor {c}_{j-1}\epsilon )\le {c}_{j}\epsilon -\lfloor {c}_{j}\epsilon )$.
We prove this assertion by contradiction. We write $Q(j-1)$ and $Q(j)$ as $a+{c}_{j-1}\epsilon $ and $b+{c}_{j}\epsilon $, respectively, where a and b are integers. Clearly, $a+\lfloor {c}_{j-1}\epsilon )+1$ is the minimal integer greater than or equal to $Q(j-1)$ and $b+\lfloor {c}_{j}\epsilon )$ the maximal positive integer less than $Q(j)$. Suppose that result (∗) does not hold. Then we have $b+\lfloor {c}_{j}\epsilon )-(a+\lfloor {c}_{j-1}\epsilon ))=1$. By ${c}_{j-1}\epsilon -\lfloor {c}_{j-1}\epsilon )>{c}_{j}\epsilon -\lfloor {c}_{j}\epsilon )$, we obtain $Q(j)=b+\lfloor {c}_{j}\epsilon )+({c}_{j}\epsilon -\lfloor {c}_{j}\epsilon ))<a+\lfloor {c}_{j-1}\epsilon )+({c}_{j-1}\epsilon -\lfloor {c}_{j-1}\epsilon ))+1=Q(j-1)+1$, i.e. $Q(j)-Q(j-1)<1$. This contradicts Proposition 2.
Suppose that result (∗∗) does not hold. Then $b+\lfloor {c}_{j}\epsilon )-(a+\lfloor {c}_{j-1}\epsilon ))=2$. If ${c}_{j-1}\epsilon -\lfloor {c}_{j-1}\epsilon )<{c}_{j}\epsilon -\lfloor {c}_{j}\epsilon )$, using a similar argument as in the proof of result (∗), we can deduce that $Q(j)-Q(j-1)>2$. This contradicts Proposition 2. If ${c}_{j-1}\epsilon -\lfloor {c}_{j-1}\epsilon )={c}_{j}\epsilon -\lfloor {c}_{j}\epsilon )$, then $Q(j)-Q(j-1)=2$. In the case $j\in {C}_{k,1}$, this, along with Proposition 5, implies $Q(i)-Q(i-1)=1$ for an arbitrary $i\in {C}_{k,1}$ with $i\ne j$. And then we have ${c}_{i-1}\epsilon -\lfloor {c}_{i-1}\epsilon )={c}_{i}\epsilon -\lfloor {c}_{i}\epsilon )$. Since $\{{c}_{j}\mid j\in {C}_{k,1}\}=\{1,2,\dots ,k\}$, we have $\epsilon -\lfloor \epsilon )=2\epsilon -\lfloor 2\epsilon )$. This contradicts $0<\epsilon <1$. In the case $j\in {A}_{k}$, $Q(j)-Q(j-1)=2$ contradicts the fact that $Q({q}_{k})-Q({q}_{k}-{r}_{k}-1)={r}_{k}+1+\epsilon $ and $Q(i)-Q(i-1)\ge 1$ for any integer i.
This assertion, along with the arguments used in the proof of result (i), implies that $({c}_{h+1,1},\dots ,{c}_{h+1,(h+1)+{r}_{h+1}+1})$ is exactly determined by $({c}_{h,1},\dots ,{c}_{h,h+{r}_{h}+1})$ and ${c}_{h-1,(h-1)+({r}_{h-1}+1)}$.
We proceed with our proof. Note the fact that the coefficient permutations of the optimal successors corresponding to $\{(h,{r}_{h}+1),\dots ,(h,h+{r}_{h}+1)\}$ and $\{(h+1,1),\dots ,(h+1,h+1)\}$ are $({c}_{h-1,{r}_{h-1}+1},\dots ,{c}_{h-1,(h-1)+({r}_{h-1}+1)})$ and $({c}_{h,1},\dots ,{c}_{h,h})$, respectively. By the induction assumption, we have $({c}_{h-1,{r}_{h-1}+1},\dots ,{c}_{h-1,(h-1)+({r}_{h-1}+1)})=({c}_{h,1},\dots ,{c}_{h,h})$ and both are the same permutation of $\{1,\dots ,h\}$. This, along with the assertion, implies that either
(a4) ${c}_{h+1,w}={c}_{h,{r}_{h}+w}={c}_{h,w-1}+1$ for $w=3,\dots ,h+1$
or
(b4) ${c}_{h+1,w}={c}_{h,{r}_{h}+w}={c}_{h,w}+1$ for $w=2,\dots ,{t}_{0}-1$, ${c}_{h+1,{t}_{0}}={c}_{h,{r}_{h}+{t}_{0}}=1$ and ${c}_{h+1,w}={c}_{h,{r}_{h}+w}={c}_{h,w-1}+1$ for $w={t}_{0}+1,\dots ,h+1$, where $2\le {t}_{0}\le h$ and ${c}_{h,{t}_{0}-1}\epsilon -\lfloor {c}_{h,{t}_{0}-1}\epsilon )>{c}_{h,{t}_{0}}\epsilon -\lfloor {c}_{h,{t}_{0}}\epsilon )$.
Since both $({c}_{h+1,1},\dots ,{c}_{h+1,h+1})$ and $({c}_{h,{r}_{h}+1},\dots ,{c}_{h,h+{r}_{h}+1})$ are permutations of $\{1,\dots ,h+1\}$, along with result (ii) and the fact that $({c}_{h,1},\dots ,{c}_{h,h})$ is a permutation of $\{1,\dots ,h\}$, we have ${c}_{h+1,1}={c}_{h,{r}_{h}+1}=1$, ${c}_{h+1,2}={c}_{h,{r}_{h}+2}={c}_{h,1}+1$ in the case (a4) and ${c}_{h+1,1}={c}_{h,{r}_{h}+1}={c}_{h,1}+1$ in the case (b4). Thus, equation (15) holds when $k=h+1$.
Now we prove equation (16). There are two cases to consider: (a5) ${r}_{h+1}={r}_{h}$ and (b5) ${r}_{h+1}={r}_{h}+1$.
Case (a5) ${r}_{h+1}={r}_{h}$. By equation (20), ${c}_{h+1,(h+1)+w}={c}_{h,h+w}+1$ for $w=1,\dots ,{r}_{h+1}$. By the induction assumption, ${c}_{h,h+w}={c}_{h,w}$ for $w=1,\dots ,{r}_{h}$. This, along with ${c}_{h,h}={c}_{h-1,(h-1)+{r}_{h-1}+1}=h$ and the assertion, implies that ${c}_{h+1,w}={c}_{h,h+w}+1={c}_{h+1,(h+1)+w}$ for $w=1,\dots ,{r}_{h+1}$. Thus, equation (16) holds in the case.
Case (b5) ${r}_{h+1}={r}_{h}+1$. By equation (21), ${c}_{h+1,(h+1)+w}={c}_{h,h+w}+1$ for $w=1,\dots ,{t}_{0}-1$, ${c}_{h+1,(h+1)+{t}_{0}}=1$ and ${c}_{h+1,(h+1)+w}={c}_{h,h+w-1}+1$ for $w={t}_{0}+1,\dots ,{r}_{h+1}$, where $1\le {t}_{0}\le {r}_{h}+1$. If $1\le {t}_{0}\le {r}_{h}$, using similar arguments as in Case (a5), we find that equation (16) holds in the case. If ${t}_{0}={r}_{h}+1$, then ${c}_{h+1,(h+1)+{r}_{h+1}}=1$. This, along with equation (15), implies that $({c}_{h+1,{r}_{h+1}+1},\dots ,{c}_{h+1,(h+1)+{r}_{h+1}+1})$ is a permutation of $\{1,\dots ,h+2\}$ and $1\notin \{{c}_{h+1,{r}_{h+1}+1},\dots ,{c}_{h+1,h+1}\}$. Using similar arguments as in Case (a5), we find that $1<{c}_{h+1,w}={c}_{h+1,(h+1)+w}={c}_{h,w}+1$ for $w=1,\dots ,{r}_{h+1}-1$. It follows from the fact that $({c}_{h+1,1},\dots ,{c}_{h+1,h+1})$ is a permutation of $\{1,\dots ,h+1\}$ that ${c}_{h+1,{r}_{h+1}}=1$. Thus, equation (16) holds in the case.
Equation (17) has been proven in Proposition 6. □
In the triangle structure diagram, we label each node of Figure 1 by the coefficient of ε corresponding to the integer j, instead of by the integer $j\in {C}_{k,i}$ itself. Using the triangle rule in Proposition 7, we may give the triangle structure diagram in the case $S=vp+r$.
We now show that the triangle structure diagram can be decided in $O(\sqrt{n})$ time. By the triangle rule in Proposition 7, given the permutation of the numbers in one row, the permutation of the numbers in the next row can be decided in constant time. In total, the permutations of k rows need to be decided, and the permutation of the first row is known. Because the number of batches in an optimal solution turns out to be $O(\sqrt{n})$, the triangle structure diagram can be decided in $O(\sqrt{n})$ time.
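As an illustration (a minimal sketch, not part of the paper's original presentation; the function names are ours), the row-by-row construction above can be coded directly from the triangle rule (15)–(17). Here ${r}_{k}=\lfloor k\epsilon )$ is computed in exact integer arithmetic as $(kr)//p$, which agrees with the floor whenever $k\epsilon $ is not an integer.

```python
# Sketch: build the coefficient rows of the triangle structure diagram
# from the triangle rule (15)-(17) of Proposition 7, for S = v*p + r.

def r_of(k, r, p):
    # r_k = floor(k * eps) with eps = r/p, in exact integer arithmetic
    return (k * r) // p

def triangle_rows(num_rows, r, p):
    rows = []
    for k in range(1, num_rows + 1):
        rk, rk1 = r_of(k, r, p), r_of(k - 1, r, p)
        if k == 1:
            head = [1]  # c_{1,1} = 1
        else:
            prev = rows[-1]
            # (15): c_{k,w} = c_{k-1, r_{k-1}+w} for w = 1, ..., k
            head = [prev[rk1 + w - 1] for w in range(1, k + 1)]
        # (16): c_{k,k+w} = c_{k,w} for w = 1, ..., r_k
        # (17): c_{k,k+r_k+1} = k + 1
        rows.append(head + head[:rk] + [k + 1])
    return rows

rows = triangle_rows(9, 41, 100)  # S = 2*100 + 41, eps = 0.41
print([row[:k].index(1) + 1 for k, row in enumerate(rows, 1)])
# -> [1, 1, 1, 3, 2, 5, 3, 1, 6]
```

For $S=241$, $p=100$ the positions of ‘1’ among the first k entries of row k come out as $1,1,1,3,2,5,3,1,6$ for $k=1,\dots ,9$, which agrees with the values $c(2),\dots ,c(9)$ used in the worked 105-job instance of this section.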
In order to better understand the character of the triangle structure diagram, we give an example. Given $S=241$ and $p=100$, we have $v=2$ and $\epsilon =0.41$, and the triangle structure diagram of the batching problem is given in Figure 3.
From the arguments above, we see clearly that the optimal successor membership between the numbers of jobs can be exactly determined by the position of ‘1’ in $({c}_{k,1},\dots ,{c}_{k,k})$ and $({c}_{k,k+1},\dots ,{c}_{k,k+{r}_{k}+1})$. We denote by $c(k)$ and $a(k)$ the positions in $({c}_{k,1},\dots ,{c}_{k,k})$ and $({c}_{k,k+1},\dots ,{c}_{k,k+{r}_{k}+1})$ where ‘1’ lies, respectively. We define $a(k)=+\mathrm{\infty}$ if there is no ‘1’ in $({c}_{k,k+1},\dots ,{c}_{k,k+{r}_{k}+1})$, and ${r}_{0}=0$. Using the triangle rule, a recursion for determining the position of ‘1’ may be formulated as follows:
$$c(k)=\{\begin{array}{ll}c(k-1)-{r}_{k-1}& \mathit{\text{if}}\ {r}_{k-1}={r}_{k-2};\\ (k-1)+c(k-1)-{r}_{k-1}& \mathit{\text{if}}\ {r}_{k-1}={r}_{k-2}+1\end{array}$$(22)
with $c(1)=1$, for $k=2,3,\dots $ ,
for $k=1,2,\dots $ .
Now we show that the recursion is correct. When ${r}_{k-1}={r}_{k-2}$, by equation (16), $c(k-1)>{r}_{k-1}$. By equation (15), $c(k)=c(k-1)-{r}_{k-1}$. When ${r}_{k-1}={r}_{k-2}+1$, by equation (16), $a(k-1)=c(k-1)$. By equation (15), $c(k)=k-(({r}_{k-1}+1)-c(k-1))=(k-1)+c(k-1)-{r}_{k-1}$. Thus, equation (22) holds for $k=2,3,\dots $ . Equation (23) follows from equation (16) for $k=1,2,\dots $ .
By the recursion above, the position of ‘1’ in each row can be decided in constant time. In total, the positions of ‘1’ in k rows need to be decided, and the position of ‘1’ in the first row is known. Thus, all of the positions of ‘1’ in the triangle structure diagram can be decided in $O(\sqrt{n})$ time.
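The recursion for $c(k)$ can be sketched in a few lines (a minimal sketch under the two cases established above: $c(k)=c(k-1)-{r}_{k-1}$ when ${r}_{k-1}={r}_{k-2}$, and $c(k)=(k-1)+c(k-1)-{r}_{k-1}$ when ${r}_{k-1}={r}_{k-2}+1$, with $c(1)=1$; the function name is ours):

```python
# Sketch of the constant-time-per-row recursion for the position c(k)
# of '1' in (c_{k,1}, ..., c_{k,k}); r_t = floor(t * eps), eps = r/p.
def one_positions(kmax, r, p):
    rk = lambda t: (t * r) // p      # r_t in exact integer arithmetic
    c = [0, 1]                       # c(1) = 1; index 0 is a dummy
    for k in range(2, kmax + 1):
        if rk(k - 1) == rk(k - 2):   # r_{k-1} = r_{k-2}
            c.append(c[k - 1] - rk(k - 1))
        else:                        # r_{k-1} = r_{k-2} + 1
            c.append((k - 1) + c[k - 1] - rk(k - 1))
    return c[1:]

print(one_positions(9, 41, 100))     # S = 241, p = 100
# -> [1, 1, 1, 3, 2, 5, 3, 1, 6]
```

For $S=241$, $p=100$ this reproduces the values $c(2),\dots ,c(9)=1,1,3,2,5,3,1,6$ used in the 105-job instance of this section.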
We denote by $(k,i,{w}_{k})$ and $(k,{w}_{k})$ the ${w}_{k}$th integer in the i-th periodic set ${C}_{k,i}$ of the k-th batch case set ${N}_{k}$, and the ${w}_{k}$th integer in ${A}_{k}$, respectively, i.e. $(k,i,{w}_{k})={\sum}_{t=1}^{k-1}|{N}_{t}|+(i-1)k+{w}_{k}$ and $(k,{w}_{k})={\sum}_{t=1}^{k-1}|{N}_{t}|+vk+{w}_{k}$. Now we give the last result as follows.
Theorem 3 Assume $S=vp+r$ with $0<r<p$. Then for arbitrary positive integers $(k,i,{w}_{k})$ and $(k,{w}_{k})$, the following hold and can be decided in $O(\sqrt{n})$ time:

(i)
$$\mathit{SUCC}(k,i,{w}_{k})=\{\begin{array}{ll}0& \mathit{\text{if}}\ k=1;\\ (k-1,i,{w}_{k})& \mathit{\text{if}}\ {w}_{k}\le c(k);\\ (k-1,i,{w}_{k}-1)& \mathit{\text{if}}\ {w}_{k}>c(k).\end{array}$$

(ii)
$$\mathit{SUCC}(k,{w}_{k})=\{\begin{array}{ll}0& \mathit{\text{if}}\ k=1;\\ (k-1,{w}_{k})& \mathit{\text{if}}\ {w}_{k}\le a(k);\\ (k-1,{w}_{k}-1)& \mathit{\text{if}}\ {w}_{k}>a(k).\end{array}$$

(iii)
The optimal solution of the $(k,i,{w}_{k})$jobs batch sizing problem is
$${b}_{j}=\{\begin{array}{ll}(k-j)v+{r}_{k-j}+i& \mathit{\text{if}}\ {w}_{k+1-j}\le c(k+1-j);\\ (k-j)v+{r}_{k-j}+i+1& \mathit{\text{if}}\ {w}_{k+1-j}>c(k+1-j)\end{array}$$(24)
for $j=1,\dots ,k$, where $c(j)$ is given by equation (22), and ${w}_{j-1}={w}_{j}$ if ${w}_{j}\le c(j)$ and ${w}_{j-1}={w}_{j}-1$ if ${w}_{j}>c(j)$ for $j=k,\dots ,2$.

(iv)
The optimal solution of $(k,{w}_{k})$jobs batch sizing problem is
$${b}_{j}=\{\begin{array}{ll}(k+1-j)v+{r}_{k-j}+1& \mathit{\text{if}}\ {w}_{k+1-j}\le a(k+1-j);\\ (k+1-j)v+{r}_{k-j}+2& \mathit{\text{if}}\ {w}_{k+1-j}>a(k+1-j)\end{array}$$(25)
for $j=1,\dots ,k$, where $a(j)$ is given by equation (23), and ${w}_{j-1}={w}_{j}$ if ${w}_{j}\le a(j)$ and ${w}_{j-1}={w}_{j}-1$ if ${w}_{j}>a(j)$ for $j=k,\dots ,2$.
Proof For any $j=1,\dots ,k$, given $c(j)$ and $a(j)$, we can decide its optimal successor and ${b}_{j}$ in constant time. Since $c(j)$ and $a(j)$ can be decided in $O(\sqrt{n})$ time, the specific expression of the optimal solution can be decided in $O(\sqrt{n})$ time.
In the case $k=1$, $(k,i,{w}_{k})=(1,i,1)=i<v+1+\epsilon =Q(0)$. By Proposition 1, $\mathit{SUCC}(k,i,{w}_{k})=0$. In the case $k>1$, result (i) clearly follows from Proposition 7 or Figure 3. Similarly, result (ii) holds.
Now we show result (iii) for the case with $j=1$. When ${w}_{k}\le c(k)$, due to result (i), $\mathit{SUCC}(k,i,{w}_{k})=(k1,i,{w}_{k})$. Thus,
When ${w}_{k}>c(k)$, due to result (i), $\mathit{SUCC}(k,i,{w}_{k})=(k1,i,{w}_{k}1)$. Thus,
The result for $j=2,\dots ,k1$ can be proved similarly. The result for $t=k$ is
This finishes the proof of result (iii).
Using similar arguments as in the proof of result (iii), result (iv) can be proved. □
For example, we consider the instance of a batch sizing problem with 105 jobs, $S=241$ and $p=100$. The number 105 can be written as $105=91+9+5={\sum}_{h=1}^{8}|{N}_{h}|+(2-1)\times 9+5=(9,2,5)$. By the triangle structure diagram in the case $S=vp+r=2\times 100+41$, we can obtain the successor relation between the numbers of jobs: $105\to 84\to 65\to 48\to 34\to 22\to 13\to 6\to 2\to 0$, i.e., $(9,2,5)\to (8,2,5)\to (7,2,4)\to (6,2,3)\to (5,2,3)\to (4,2,2)\to (3,2,2)\to (2,2,1)\to (1,2,1)\to (0,0,0)$. Thus, the optimal solution is $({b}_{1},\dots ,{b}_{9})=(21,19,17,14,12,9,7,4,2)$. Of course, we may also obtain the optimal solution by equation (24). However, before we solve the instance, we need to compute ${r}_{j}$, $c(j)$, $a(j)$ and ${\sum}_{h=1}^{j}|{N}_{h}|$, as follows:
Using equation (24), we have
by ${w}_{9}=5\le c(9)=6$, ${b}_{1}=(9-1)\times 2+3+2=21$ and ${w}_{8}={w}_{9}=5$;
by ${w}_{8}=5>c(8)=1$, ${b}_{2}=(9-2)\times 2+2+2+1=19$ and ${w}_{7}={w}_{8}-1=4$;
by ${w}_{7}=4>c(7)=3$, ${b}_{3}=(9-3)\times 2+2+2+1=17$ and ${w}_{6}={w}_{7}-1=3$;
by ${w}_{6}=3\le c(6)=5$, ${b}_{4}=(9-4)\times 2+2+2=14$ and ${w}_{5}={w}_{6}=3$;
by ${w}_{5}=3>c(5)=2$, ${b}_{5}=(9-5)\times 2+1+2+1=12$ and ${w}_{4}={w}_{5}-1=2$;
by ${w}_{4}=2\le c(4)=3$, ${b}_{6}=(9-6)\times 2+1+2=9$ and ${w}_{3}={w}_{4}=2$;
by ${w}_{3}=2>c(3)=1$, ${b}_{7}=(9-7)\times 2+0+2+1=7$ and ${w}_{2}={w}_{3}-1=1$;
by ${w}_{2}=1\le c(2)=1$, ${b}_{8}=(9-8)\times 2+0+2=4$;
${b}_{9}=2$.
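The whole computation of equation (24) for this instance can be sketched as follows (a minimal sketch; the function name is ours, and $c(j)$ is generated by its two-case recursion with $c(1)=1$):

```python
# Sketch: optimal batch sizes of the (k, i, w_k)-jobs problem via
# equation (24).  c(j) follows the recursion c(1) = 1;
# c(j) = c(j-1) - r_{j-1} if r_{j-1} = r_{j-2}, and
# c(j) = (j-1) + c(j-1) - r_{j-1} if r_{j-1} = r_{j-2} + 1.
def batch_sizes(k, i, wk, v, r, p):
    rk = lambda t: (t * r) // p            # r_t = floor(t * eps)
    c = [0, 1]                             # c(1) = 1; index 0 is a dummy
    for j in range(2, k + 1):
        shift = 0 if rk(j - 1) == rk(j - 2) else j - 1
        c.append(shift + c[j - 1] - rk(j - 1))
    b, w = [], wk                          # w runs through w_k, w_{k-1}, ...
    for j in range(1, k + 1):
        t = k + 1 - j
        if w <= c[t]:                      # first branch of (24)
            b.append((k - j) * v + rk(k - j) + i)
        else:                              # second branch; w decreases
            b.append((k - j) * v + rk(k - j) + i + 1)
            w -= 1
    return b

bs = batch_sizes(9, 2, 5, 2, 41, 100)      # the 105-job instance
print(bs, sum(bs))
# -> [21, 19, 17, 14, 12, 9, 7, 4, 2] 105
```

Running it for $(k,i,{w}_{k})=(9,2,5)$ with $v=2$, $r=41$, $p=100$ reproduces $({b}_{1},\dots ,{b}_{9})=(21,19,17,14,12,9,7,4,2)$ with total 105.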
If n is chosen to be $137=113+20+4={\sum}_{h=1}^{9}|{N}_{h}|+2\times 10+4=(10,4)$, then, using equation (25), we similarly obtain $({b}_{1},\dots ,{b}_{10})=(25,22,19,17,15,13,10,8,5,3)$.
5 Conclusions
In this paper the problem of batching identical jobs on a single machine to minimize the total completion time is studied by employing the difference analysis technique. We first establish the relation between the optimal solution and the first-order difference of the optimal objective function in terms of the number of jobs and investigate the properties of the first-order difference. Then, we obtain the triangle structure diagram of the batching problem in at most $O(\sqrt{n})$ time by using the permutation of some numbers which describe the character of the first-order difference. The diagrams enable us to see clearly the specific expressions of optimal solutions for the n-jobs batching problem and any m-jobs batching problem simultaneously, where $m\le n$. The difference analysis technique employed here may also be used for solving other batching problems. Analysis of many batching problems shows that, in some cases, the sequencing and batching of jobs can be decoupled. Once a sequence of jobs is known, dynamic programming is often adopted to solve the batching problem. In that case the difference expression of the optimal value function may be obtained, and it becomes possible to improve the solution algorithm by employing the difference analysis technique. The difference analysis technique promises to be a useful tool for solving discrete optimization problems.
References
 1.
Coffman EG Jr., Yannakakis M, Magazine MJ, Santos CA: Batch sizing and job sequencing on a single machine. Ann. Oper. Res. 1990, 26: 135–147. 10.1007/BF02248589
 2.
Naddef D, Santos C: One-pass batching algorithms for the one-machine problem. Discrete Appl. Math. 1988, 21: 133–145. 10.1016/0166-218X(88)90049-2
 3.
Coffman EG Jr., Nozari A, Yannakakis M: Optimal scheduling of products with two subassemblies on a single machine. Oper. Res. 1989, 37: 426–436. 10.1287/opre.37.3.426
 4.
Shallcross D: A polynomial algorithm for a one machine batching problem. Oper. Res. Lett. 1992, 11: 213–218. 10.1016/0167-6377(92)90027-Z
 5.
Potts CN, Kovalyov MY: Scheduling with batching: a review. Eur. J. Oper. Res. 2000, 120: 228–249. 10.1016/S0377-2217(99)00153-8
 6.
Santos, CA: Batching and sequencing decisions under lead time considerations for single machine problems. MSc thesis, Department of Management Sciences, University of Waterloo, Canada (1984)
 7.
Santos CA, Magazine M: Batching in single operation manufacturing systems. Oper. Res. Lett. 1985, 4: 99–103. 10.1016/0167-6377(85)90011-2
 8.
Albers S, Brucker P: The complexity of one-machine batching problems. Discrete Appl. Math. 1993, 47: 87–107. 10.1016/0166-218X(93)90085-3
Acknowledgements
This research was supported in part by the Zhejiang Natural Science Foundation of China Grant Y6110054, the Shanxi Scholarship Council of China, and the Hong Kong Polytechnic University Research Grant JBB7D and The Hong Kong CERG Research Fund 5515/10H.
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors contributed equally to the manuscript and typed, read, and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Keywords
 scheduling
 batching
 triangle structure
 single machine