1 Introduction

Scheduling problems have received considerable attention for many years (see Gonzalez and Sahni [1], Smutnicki [2], Hoogeveen and Kawaguchi [3], Koulamas and Kyparisis [4], Easwaran et al. [5], Pinedo [6]). Most research on deterministic scheduling assumes that the processing time of a job is independent of its position in the schedule. However, additional features such as learning effects of machines (workers) and the flow shop machine setting are of interest for increasing the efficiency of the manufacturing system. Under the “learning effect”, the production facility (a machine, a worker) improves continuously over time, so the production time of a given product is shorter if it is scheduled later (Biskup [7, 8], Wang et al. [9], Cheng et al. [10], Eren [11], Lee and Wu [12], Yin et al. [13], Yin et al. [14], Wu et al. [15], Yin et al. [16], and Zhao and Tang [17]).

Cheng et al. [10] considered single machine scheduling problems with sum-of-logarithm-processing-times-based learning effects. However, flow shop scheduling problems are common and important in manufacturing enterprises. Hence, in this paper we consider the same model as that of Cheng et al. [10], but in the flow shop setting. The objective functions are to minimize the total completion time, the makespan, the total weighted completion time, and the sum of the quadratic job completion times, respectively. We give a heuristic for each criterion and analyze its worst-case error bound. For studies of flow shop scheduling with learning effects, we refer the reader to Wang and Xia [18], Xu et al. [19], Wang and Wang [20], Li et al. [21], Sun et al. [22], Wang and Wang [23], Sun et al. [24], Wang et al. [25], Wang and Zhang [26], He [27], Bai et al. [28], and Wang et al. [29]. Wang and Xia [18] and Xu et al. [19] considered flow shop scheduling with learning effects in which the actual processing time \(p_{ijr}\) of job \(J_j\) on machine \(M_i\) is \( p_{ijr}=p_{ij}(\alpha -\beta r) \) or \( p_{ijr}=p_{ij} r^{\delta }, i=1,2,\ldots ,m;r,j=1,2,\ldots ,n, \) where \(p_{ij}\) is the normal processing time of job \(J_j\) on machine \(M_i\), \(\alpha>0, \beta \ge 0, \alpha -\beta (n+1)>0\), and \(\delta \le 0\) is the learning rate. Using the three-field notation scheme (Graham et al. [30]), Wang and Xia [18] proposed heuristic algorithms with worst-case error bounds for the problems \(Fm\left| p_{ijr}=p_{ij}(\alpha -\beta r)\right| \theta \) and \(Fm\left| p_{ijr}=p_{ij}r^{\delta }\right| \theta \), where \(\theta \in \{C_{\max },\sum _{j=1}^n C_j\}\); Xu et al. [19] proposed heuristic algorithms with worst-case error bounds for the same problems with \(\theta \in \{\sum _{j=1}^n w_j C_j, \sum _{j=1}^n C_j^2, \sum _{j=1}^n w_j (1-e^{-\gamma C_j})\}\), where \(C_j\) is the completion time of job \(J_j\), \(w_j>0\) is a weight associated with job \(J_j\), \(C_{\max }\) is the maximum completion time, and \(0<\gamma <1\). Wang and Wang [20] considered flow shop scheduling with a learning effect in which \( p_{ijr}=p_{ij}b^{r-1}, \) where \(b\) denotes the learning ratio with \(0<b\le 1\), and proposed heuristic algorithms with worst-case error bounds for \(Fm\left| p_{ijr}=p_{ij}b^{r-1}\right| \theta \), where \(\theta \in \{\sum _{j=1}^n C_j, \sum _{j=1}^n w_j C_j, \sum _{j=1}^n C_j^2, \sum _{j=1}^n w_j (1-e^{-\gamma C_j})\}\). Li et al. [21] considered flow shop scheduling with a learning effect in which \( p_{ijr}=p_{ij}\left( 1+\sum _{l=1}^{r-1} p_{i[l]}\right) ^a, \) where \(a\le 0\) denotes the learning index, and proposed heuristic algorithms with worst-case error bounds for \(Fm\left| p_{ijr}=p_{ij}\left( 1+\sum _{l=1}^{r-1} p_{i[l]}\right) ^a\right| \theta \), where \(\theta \in \{C_{\max }, \sum _{j=1}^n C_j, \sum _{j=1}^n w_j C_j, \sum _{j=1}^n C_j^2, \sum _{j=1}^n w_j (1-e^{-\gamma C_j})\}\). Sun et al. [22] proposed new heuristic algorithms with worst-case error bounds for \(Fm\left| A\right| \sum _{j=1}^n w_j C_j\), where \(A\in \{p_{ijr}=p_{ij}(\alpha -\beta r), p_{ijr}=p_{ij} r^{\delta }, p_{ijr}=p_{ij}b^{r-1}\}\). Wang and Wang [23] and Sun et al. [24] considered flow shop scheduling with a learning effect in which \( p_{ijr}=p_{ij}g(r), \) where \(g(r)\) is a non-increasing function of \(r\), and proposed heuristic algorithms with worst-case error bounds for \(Fm\left| p_{ijr}=p_{ij}g\left( r\right) \right| \theta \), where \(\theta \in \{C_{\max }, \sum _{j=1}^n C_j, \sum _{j=1}^n C_j^2, \sum _{j=1}^n w_j C_j, \sum _{j=1}^n w_j (1-e^{-\gamma C_j})\}\). Wang et al. [25] considered flow shop scheduling with a learning effect in which \( p_{ijr}=p_{ij}\max \{r^{\delta },\zeta \}, \) where \(\zeta \) is a truncation parameter with \(0\le \zeta \le 1\), and proposed heuristic algorithms with worst-case error bounds for \(Fm\left| p_{ijr}=p_{ij}\max \{r^{\delta },\zeta \}\right| \theta \), where \(\theta \in \{C_{\max }, \sum _{j=1}^n C_j, \sum _{j=1}^n C_j^2, \sum _{j=1}^n w_j C_j, \sum _{j=1}^n w_j (1-e^{-\gamma C_j})\}\). Wang and Zhang [26] considered the problem \(Fm\left| p_{ijr}=p_{ij}\left( 1+\sum _{l=1}^{r-1}\beta _l\ln p_{i[l]}\right) ^ar^c\right| \lambda C_{\max }+(1-\lambda ) \sum _{j=1}^n C_j\), where \(0<\beta _1\le \beta _2\le \cdots \le \beta _n\), \(c\le 0\), and \(0\le \lambda \le 1\). He [27] considered maximum lateness minimization in a flow shop with a general exponential learning effect and proposed a branch-and-bound algorithm, several heuristics, and a nested-partition-based solution approach for this problem. Bai et al. [28] considered the flow shop scheduling problems \(Fm\left| p_{ijr}=p_{ij}g\left( r\right) , r_j\right| \theta \), where \(\theta \in \{C_{\max }, \sum _{j=1}^n C_j, \sum _{j=1}^n C_j^2\}\) and \(r_j\) is the release date of job \(J_j\). Wang et al. [29] considered flow shop scheduling problems with truncated exponential sum-of-logarithm-processing-times-based and position-based learning effects; for the makespan and total weighted completion time minimizations, they proposed several heuristics and a branch-and-bound algorithm.

The remainder of this paper is organized as follows. In Sect. 2 we formulate the model. In Sect. 3, we propose a heuristic with a worst-case error bound for each of several regular objective functions. In Sect. 4, computational results are given. The last section summarizes the paper and suggests future research.

2 Problem formulation

There is a set of n jobs \(J=\{J_1, J_2,\ldots ,J_n\}\) to be processed on m machines \(M_1, M_2,\ldots , M_m\). Each job \(J_j\) must first be processed on \(M_1\), then on \(M_2\), and so on. As in Cheng et al. [10], in this paper we consider flow shop scheduling with sum-of-logarithm-processing-times-based learning effects, i.e., the actual processing time \(p_{ijr}\) of job \(J_j\) scheduled in position r on machine \(M_i\) is

$$\begin{aligned} p_{ijr}=p_{ij}\left( 1+\sum _{l=1}^{r-1}\ln p_{i[l]}\right) ^a, i=1,2,\ldots ,m;r,j=1,2,\ldots ,n, \end{aligned}$$
(1)

where \(p_{ij}\) is the normal processing time (i.e., the processing time without any learning effects) of job \(J_j\) on machine \(M_i\), \(a\le 0\) is the learning index, \(\ln p_{ij}\ge 1\) and \(\sum _{l=1}^{0}\ln p_{i[l]}=0\).
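A minimal Python sketch may make definition (1) concrete. The function name and arguments below are ours, for illustration only; the model itself is exactly (1), under the stated assumptions \(a\le 0\) and \(\ln p_{ij}\ge 1\).

```python
import math

def actual_time(p_ij, earlier_normal_times, a):
    """Actual processing time under model (1): the normal time p_ij
    is scaled by (1 + sum of ln p over the jobs already processed
    on the same machine) ** a, with a <= 0."""
    s = sum(math.log(p) for p in earlier_normal_times)
    return p_ij * (1.0 + s) ** a
```

With an empty history the sum is zero and the actual time equals the normal time; with \(a=0\) no learning takes place.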

For a given schedule \(\pi \), \(C_{ij}=C_{ij}(\pi )\) denotes the completion time of job \(J_j\) on machine \(M_i\), and \(C_j=C_{mj}\) is the completion time of job \(J_j\). Then \(\sum _{j=1}^n C_j\) is the total completion time, \(C_{\max }=\max \{C_j|j=1,2,\ldots ,n\}\) is the makespan, \(\sum _{j=1}^n w_j C_j\) is the total weighted completion time (where \(w_j>0\) is a weight associated with job \(J_j\)), and \(\sum _{j=1}^n C_j^2\) is the sum of the quadratic job completion times (Townsend [31]).
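For a permutation schedule, the completion times satisfy the standard flow shop recursion \(C_{ij}=\max \{C_{i-1,j},C_{i,j-1}\}+p_{ijr}\), and the objectives above can be evaluated directly. The following is our own illustrative sketch, assuming model (1); all identifiers are ours.

```python
import math

def completion_times(p, order, a):
    """p[i][j]: normal time of job j on machine i; order: a job
    permutation. Returns C[i][r], the completion time on machine i
    of the job in position r, under learning model (1)."""
    m, n = len(p), len(order)
    C = [[0.0] * n for _ in range(m)]
    for i in range(m):
        logsum = 0.0  # sum of ln p_{i[l]} over earlier positions
        for r, j in enumerate(order):
            actual = p[i][j] * (1.0 + logsum) ** a
            above = C[i - 1][r] if i > 0 else 0.0  # same job, previous machine
            left = C[i][r - 1] if r > 0 else 0.0   # previous job, same machine
            C[i][r] = max(above, left) + actual
            logsum += math.log(p[i][j])
    return C

def objectives(C, w=None):
    """Makespan, total (weighted) completion time, sum of squares."""
    last = C[-1]
    w = w or [1.0] * len(last)
    return {"Cmax": last[-1],
            "sumC": sum(last),
            "sumwC": sum(wj * c for wj, c in zip(w, last)),
            "sumC2": sum(c * c for c in last)}
```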

3 Main results

Lemma 1

(Cheng et al. [10]) For the problem \(1|p_{jr}=p_{j}\left( 1+\sum _{l=1}^{r-1}\ln p_{[l]}\right) ^a|\sum C_j\), an optimal schedule can be obtained by sequencing the jobs in non-decreasing order of \(p_j\) (i.e., the SPT rule).

Lemma 2

(Cheng et al. [10]) For the problem \(1|p_{jr}=p_{j}\left( 1+\sum _{l=1}^{r-1}\ln p_{[l]}\right) ^a|C_{\max },\) an optimal schedule can be obtained by sequencing the jobs in non-decreasing order of \(p_j\) (i.e., the SPT rule).

Lemma 3

(Smith [32]) For the problem \(1||\sum w_jC_j,\) an optimal schedule can be obtained by sequencing the jobs in non-decreasing order of \(\frac{p_j}{w_j }\) (i.e., the weighted shortest processing time first (WSPT) rule).

Lemma 4

(Townsend [31]) “For the problem \(1||\sum C_j^2,\) an optimal schedule can be obtained by sequencing the jobs in non-decreasing order of \(p_j\) (i.e., the SPT rule).”

Obviously, the problems \(Fm|{ prmu},p_{ijr}=p_{ij}\left( 1+\sum _{l=1}^{r-1}\ln p_{i[l]}\right) ^a|C_{\mathrm{max}}\ (m\ge 3)\), \(Fm|{ prmu},p_{ijr}=p_{ij}\left( 1+\sum _{l=1}^{r-1}\ln p_{i[l]}\right) ^a|\sum C_j\ (m\ge 2)\), \(Fm|{ prmu},p_{ijr}=p_{ij}\left( 1+\sum _{l=1}^{r-1}\ln p_{i[l]}\right) ^a|\sum w_j C_j\ (m\ge 2)\), and \(Fm|{ prmu},p_{ijr}=p_{ij}\left( 1+\sum _{l=1}^{r-1}\ln p_{i[l]}\right) ^a|\sum C_j^2\ (m\ge 2)\) are NP-hard, since each remains NP-hard even without learning effects (i.e., with \(a=0\)).

Let \({V_j}=\sum _{i=1}^mp_{ij}\). From Lemma 1, the SPT rule (sequencing the jobs in non-decreasing order of \({V_j}\)) can be used as an approximation algorithm for \(Fm|{ prmu},p_{ijr}=p_{ij}\left( 1+\sum _{l=1}^{r-1}\ln p_{i[l]}\right) ^a|\sum C_j\).
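The SPT heuristic therefore reduces to a single sort on the machine-summed normal times; a sketch (identifier names are ours):

```python
def spt_order(p):
    """SPT heuristic: order jobs by non-decreasing V_j = sum_i p[i][j].
    p[i][j] is the normal time of job j on machine i."""
    m, n = len(p), len(p[0])
    V = [sum(p[i][j] for i in range(m)) for j in range(n)]
    return sorted(range(n), key=lambda j: V[j])
```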

Theorem 1

Let \(S^*\) be an optimal schedule and S be an SPT schedule for the problem Fm|prmu, \(p_{ijr}=p_{ij}\left( 1+\sum _{l=1}^{r-1}\ln p_{i[l]}\right) ^a|\sum C_j\). Then \(\frac{\sum C_j(S)}{\sum C_j(S^*)}\le \frac{m}{\left( 1+\ln P_{\max }\right) ^a}\), and this bound is tight, where \(\ln P_{\max }=\max \{\sum _{l=1}^n\ln p_{il}-\ln p_{i\min }|i=1,2,\ldots ,m\}\) and \(p_{i\min }=\min \{ p_{ij}|j=1,2,\ldots ,n\}\).

Proof

Assume \(V_1\le V_2\le \cdots \le V_n\), i.e., \(S=(J_1,J_2,\ldots ,J_n)\) is an SPT schedule. We have

$$\begin{aligned} C_j(S)\le V_1+V_2\left( 1+\ell _1\right) ^a+V_3\left( 1+\ell _2\right) ^a+\cdots +V_j\left( 1+\ell _{j-1}\right) ^a, \end{aligned}$$
(2)

where \(\ell _{r}=\min \{\ln p_{i1}+\ln p_{i2}+\cdots +\ln p_{i,r}|i=1,2,\ldots ,m\}\) for \(r=1,2,\ldots ,j-1\). Since \(a\le 0\) and \(\ell _{r}\ge 0\), each factor \(\left( 1+\ell _{r}\right) ^a\le 1\), and so

$$\begin{aligned} \sum _{j=1}^n C_j(S)\le \sum _{j=1}^n\sum _{l=1}^jV_l. \end{aligned}$$
(3)

Let \(\bar{S}=(J_{[1]},J_{[2]},\ldots ,J_{[n]})\) be any schedule, where [j] denotes the job that occupies the jth position in \(\bar{S}\). We have

$$\begin{aligned} C_{1[j]}= & {} p_{1[1]}+p_{1[2]}\left( 1+\ln p_{1[1]}\right) ^a+\cdots +p_{1[j]}\\&\quad (1+\ln p_{1[1]}+\ln p_{1[2]}+\cdots +\ln p_{1[j-1]})^a\\ C_{2[j]}\ge & {} p_{2[1]}+p_{2[2]}\left( 1+\ln p_{2[1]}\right) ^a+\cdots +p_{2[j]}\\&\quad (1+\ln p_{2[1]}+\ln p_{2[2]}+\cdots +\ln p_{2[j-1]})^a\\&\ldots \\ C_{m[j]}\ge & {} p_{m[1]}+p_{m[2]}\left( 1+\ln p_{m[1]}\right) ^a+\cdots +\\&\quad p_{m[j]}\left( 1+\ln p_{m[1]}+\ln p_{m[2]}+\cdots +\ln p_{m[j-1]}\right) ^a, \end{aligned}$$

hence

$$\begin{aligned} C_{[j]}(\bar{S}) \ge \frac{1}{m}\sum _{l=1}^jV_{[l]}\left( 1+\ln P_{\max }\right) ^a, \end{aligned}$$
(4)

where \(\ln P_{\max }=\max \{\sum _{l=1}^n\ln p_{il}-\ln p_{i\min }|i=1,2,\ldots ,m\}\) and \(p_{i\min }=\min \{ p_{ij}|j=1,2,\ldots ,n\}\), and for the optimal schedule \(S^*\), we have

$$\begin{aligned} \sum _{j=1}^n C_{j}(S^*)\ge \frac{1}{m}\left( 1+\ln P_{\max }\right) ^a\sum _{i=1}^n \sum _{j=1}^iV_{[j]}\ge \frac{1}{m}\left( 1+\ln P_{\max }\right) ^a\sum _{i=1}^n\sum _{j=1}^iV_j, \end{aligned}$$
(5)

as the term \(\sum _{i=1}^n \sum _{j=1}^iV_{[j]}\) is minimized by the increasing order of \(V_j\) (Lemma 1). Consequently, from (3) and (5), we have that \(\frac{\sum C_j(S)}{\sum C_j(S^*)}\le \frac{m}{\left( 1+\ln P_{\max }\right) ^a}\).

We show that the bound \(\frac{m}{\left( 1+\ln P_{\max }\right) ^a}\) is tight. Consider the following instance. Learning takes place by the 100%-learning curve (a learning rate of 100% means that no learning is taking place), thus \(a=0\), i.e., the bound \(\frac{m}{\left( 1+\ln P_{\max }\right) ^a}=m\). The bound m of the SPT rule for \(Fm|{ prmu}|\sum C_j\) is tight as shown in Gonzalez and Sahni [1] and therefore the bound for the problem \(Fm|{ prmu},p_{ijr}=p_{ij}\left( 1+\sum _{l=1}^{r-1}\ln p_{i[l]}\right) ^a|\sum C_j\) is also tight. \(\square \)
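The worst-case ratio of Theorem 1 is easy to evaluate for a concrete instance; the following sketch (our own helper, with a hypothetical name) computes \(\ln P_{\max }\) and the bound \(m/\left( 1+\ln P_{\max }\right) ^a\):

```python
import math

def worst_case_ratio(p, a):
    """m / (1 + ln P_max)^a, where ln P_max is the maximum over
    machines i of (sum_l ln p_il) minus ln of that machine's
    smallest normal time. p[i][j] is the normal time of job j
    on machine i, and a <= 0."""
    m = len(p)
    ln_p_max = max(sum(math.log(x) for x in row) - math.log(min(row))
                   for row in p)
    return m / (1.0 + ln_p_max) ** a
```

With \(a=0\) the bound reduces to m, matching the classical flow shop result; for \(a<0\) the bound is larger than m.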

Gonzalez and Sahni [1] proposed the ARB (any busy schedule) rule to solve \(Fm|{ prmu}|\sum C_j\), hence, we can also use the ARB rule as an approximate algorithm to solve \(Fm|{ prmu},p_{ijr}=p_{ij}\left( 1+\sum _{l=1}^{r-1}\ln p_{i[l]}\right) ^a|\sum C_j\).

Theorem 2

Let \(S^*\) (S) be an optimal (ARB rule) schedule for the problem \(Fm|{ prmu},p_{ijr}=p_{ij}\left( 1+\sum _{l=1}^{r-1}\ln p_{i[l]}\right) ^a|\sum C_j\). Then \(\frac{\sum _{j=1}^n C_j(S)}{\sum _{j=1}^n C_j(S^*)}\le \frac{n}{{\left( 1+\ln P_{\max }\right) ^a}}\), and this bound is tight, where \(\ln P_{\max }=\max \{\sum _{l=1}^n\ln p_{il}-\ln p_{i\min }|i=1,2,\ldots ,m\}\) and \(p_{i\min }=\min \{ p_{ij}|j=1,2,\ldots ,n\}\).

Proof

Let \(\sum _{j=1}^nV_j=T\) and let \(S=(J_{1},J_{2},\ldots ,J_{n})\) be any busy schedule. As in the proof of Theorem 1, we have \(C_j(S)\le \sum _{l=1}^jV_l\) and so

$$\begin{aligned} \sum _{j=1}^n C_j(S)\le \sum _{j=1}^n\sum _{l=1}^jV_l\le nT. \end{aligned}$$

For the optimal schedule \(S^*\), from Theorem 1, we have \(C_{[j]}(S^*) {\ge } V_{[j]}\left( 1{+}\ln P_{\max }\right) ^a\) and so

$$\begin{aligned} \sum _{j=1}^n C_j(S^*) \ge \sum _{j=1}^n V_{[j]}{\left( 1+\ln P_{\max }\right) ^a} =T{\left( 1+\ln P_{\max }\right) ^a}. \end{aligned}$$

Consequently, we have

$$\begin{aligned} \frac{\sum _{j=1}^n C_j(S)}{\sum _{j=1}^n C_j(S^*)}\le \frac{n}{\left( 1+\ln P_{\max }\right) ^a}. \end{aligned}$$

We show that the bound \(\frac{n}{\left( 1+\ln P_{\max }\right) ^a}\) is tight. Consider the following instance. Learning takes place by the 100%-learning curve, i.e., \(a=0\) and the bound \(\frac{n}{\left( 1+\ln P_{\max }\right) ^a}=n\). The bound n of the ARB rule for \(Fm|{ prmu}|\sum C_j\) is tight as shown in Gonzalez and Sahni [1] and therefore the bound for the problem \(Fm|{ prmu},p_{ijr}=p_{ij}\left( 1+\sum _{l=1}^{r-1}\ln p_{i[l]}\right) ^a|\sum C_j\) is also tight. \(\square \)

Gonzalez and Sahni [1] proposed the ARB rule (any busy schedule) as an approximate algorithm to solve \(Fm|{ prmu}|C_{\max }\), hence, we can also use the ARB rule as an approximate algorithm to solve \(Fm|{ prmu},p_{ijr}=p_{ij}\left( 1+\sum _{l=1}^{r-1}\ln p_{i[l]}\right) ^a|C_{\max }\).

Theorem 3

Let \(S^*\) (S) be an optimal (ARB rule) schedule for the problem \(Fm|{ prmu},p_{ijr}=p_{ij}\left( 1+\sum _{l=1}^{r-1}\ln p_{i[l]}\right) ^a|C_{\mathrm{max}}\). Then \( C_{\mathrm{max}}(S)/C_\mathrm{max}(S^*)\le \frac{m}{\left( 1+\ln P_{\max }\right) ^a}\), and this bound is tight, where \(\ln P_{\max }=\max \{\sum _{l=1}^n\ln p_{il}-\ln p_{i\min }|i=1,2,\ldots ,m\}\) and \(p_{i\min }=\min \{p_{ij}|j=1,2,\ldots ,n\}\).

Proof

Similar to the proof of Theorem 2 except that: \(C_\mathrm{max}(S)\le \sum _{j=1}^nV_j=T\) and

$$\begin{aligned} C_{\mathrm{max}}(S^*)\ge \frac{1}{m}\sum _{l=1}^nV_{[l]}\left( 1+\ln P_{\max }\right) ^a= \frac{1}{m}\left( 1+\ln P_{\max }\right) ^aT, \end{aligned}$$

hence, we have \(C_{\mathrm{max}}(S)/C_{\mathrm{max}}(S^*)\le \frac{m}{\left( 1+\ln P_{\max }\right) ^a}\).

We show that the bound \(\frac{m}{\left( 1+\ln P_{\max }\right) ^a}\) is tight. Consider the following instance. Learning takes place by the 100%-learning curve, i.e., \(a=0\) and \(\frac{m}{\left( 1+\ln P_{\max }\right) ^a}=m\). The bound m of the ARB rule for \(Fm|{ prmu}|C_{\mathrm{max}}\) is tight as shown in Gonzalez and Sahni [1] and therefore the bound for the problem \(Fm|{ prmu},p_{ijr}=p_{ij}\left( 1+\sum _{l=1}^{r-1}\ln p_{i[l]}\right) ^a|C_{\mathrm{max}}\) is also tight. \(\square \)

Since the SPT rule generates an optimal solution for the problem \(1|p_{jr}=p_{j}\left( 1+\sum _{l=1}^{r-1}\ln p_{[l]}\right) ^a|C_{\mathrm{max}}\) (Lemma 2), we use the SPT rule as an approximation algorithm for \(Fm|{ prmu},p_{ijr}=p_{ij}\left( 1+\sum _{l=1}^{r-1}\ln p_{i[l]}\right) ^a|C_{\mathrm{max}}\).

Corollary 1

Let \(S^*\) (S) be an optimal (SPT) schedule for \(Fm|{ prmu},p_{ijr}=p_{ij}\left( 1+\sum _{l=1}^{r-1}\ln p_{i[l]}\right) ^a|C_{\mathrm{max}}\). Then \( C_{\mathrm{max}}(S)/C_\mathrm{max}(S^*)\le \frac{m}{\left( 1+\ln P_{\max }\right) ^a}\), and this bound is tight, where \(\ln P_{\max }=\max \{\sum _{l=1}^n\ln p_{il}-\ln p_{i\min }|i=1,2,\ldots ,m\}\) and \(p_{i\min }=\min \{p_{ij}|j=1,2,\ldots ,n\}\).

From Lemma 3, we can use the WSPT rule (sequencing the jobs in non-decreasing order of \(\frac{V_j}{w_j}\)) as an approximation algorithm for the problem \(Fm|{ prmu},p_{ijr}=p_{ij}\left( 1+\sum _{l=1}^{r-1}\ln p_{i[l]}\right) ^a|\sum w_j C_j\).
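Analogously to SPT, the WSPT heuristic sorts on the ratio \(V_j/w_j\); a sketch (identifier names are ours):

```python
def wspt_order(p, w):
    """WSPT heuristic: order jobs by non-decreasing V_j / w_j,
    where V_j = sum_i p[i][j] and w[j] > 0 is the weight of job j."""
    m, n = len(p), len(p[0])
    V = [sum(p[i][j] for i in range(m)) for j in range(n)]
    return sorted(range(n), key=lambda j: V[j] / w[j])
```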

Theorem 4

Let \(S^*\) (S) be an optimal (a WSPT) schedule for the problem \(Fm|{ prmu},p_{ijr}=p_{ij}\left( 1+\sum _{l=1}^{r-1}\ln p_{i[l]}\right) ^a|\sum w_j C_j\). Then \(\frac{\sum _{j=1}^n w_jC_j(S)}{\sum _{j=1}^n w_jC_j(S^*)}\le \frac{m}{\left( 1+\ln P_{\max }\right) ^a}\), and this bound is tight, where \(\ln P_{\max }=\max \{\sum _{l=1}^n\ln p_{il}-\ln p_{i\min }|i=1,2,\ldots ,m\}\) and \(p_{i\min }=\min \{p_{ij}|j=1,2,\ldots ,n\}\).

Proof

Without loss of generality we assume that \(\frac{V_1}{w_1}\le \frac{V_2}{w_2}\le \cdots \le \frac{V_n}{w_n}\). Let \(C_j(S)\) be the completion time of job \(J_j\) under the WSPT schedule S. Then, as in the proof of Theorem 1, we have \(C_j(S)\le \sum _{l=1}^jV_l\) and so

$$\begin{aligned} \sum _{j=1}^nw_j C_j(S)\le \sum _{j=1}^nw_j\sum _{l=1}^jV_l. \end{aligned}$$

Let \((J_{[1]},J_{[2]},\ldots ,J_{[n]})\) be the order in which jobs complete in the optimal schedule \(S^*\). Then from Theorem 1, we have \(C_{[j]}(S^*) \ge \frac{1}{m}\sum _{l=1}^jV_{[l]}\left( 1+\ln P_{\max }\right) ^a\) and so

$$\begin{aligned} \sum _{j=1}^nw_{j} C_j(S^*) \ge \frac{\left( 1+\ln P_{\max }\right) ^a}{m}\sum _{j=1}^nw_{[j]} \sum _{l=1}^jV_{[l]} \ge \frac{\left( 1+\ln P_{\max }\right) ^a}{m}\sum _{i=1}^nw_{i}\sum _{j=1}^iV_j, \end{aligned}$$

as the term \(\sum _{j=1}^nw_{[j]} \sum _{l=1}^jV_{[l]}\) is minimized by the increasing order of \(\frac{V_j}{w_j}\) (Lemma 3). Consequently, we have

$$\begin{aligned} \frac{\sum _{j=1}^n w_jC_j(S)}{\sum _{j=1}^n w_jC_j(S^*)}\le \frac{m}{\left( 1+\ln P_{\max }\right) ^a}. \end{aligned}$$

We show that the bound \(\frac{m}{\left( 1+\ln P_{\max }\right) ^a}\) is tight. Consider the following instance. Learning takes place by the 100%-learning curve, i.e., \(a=0\) and \(\frac{m}{\left( 1+\ln P_{\max }\right) ^a}=m\). The bound m of the WSPT rule for \(Fm|{ prmu}|\sum w_j C_j\) is tight as shown in Smutnicki [2] and therefore the bound for the problem \(Fm|{ prmu},p_{ijr}=p_{ij}\left( 1+\sum _{l=1}^{r-1}\ln p_{i[l]}\right) ^a|\sum w_j C_j\) is also tight. \(\square \)

Theorem 5

Let \(S^*\) (S) be an optimal (ARB) schedule for the \(Fm|{ prmu},p_{ijr}=p_{ij}\left( 1+\sum _{l=1}^{r-1}\ln p_{i[l]}\right) ^a|\)\(\sum w_j C_j\) problem. Then \(\frac{\sum _{j=1}^n w_j C_j(S)}{\sum _{j=1}^n w_jC_j(S^*)}\le \frac{1+(n-1)(\overline{w}/\underline{w})}{\left( 1+\ln P_{\max }\right) ^a}\), and this bound is tight, where \(\underline{w}=\min _{j\in J}w_j\) and \(\overline{w}=\max _{j\in J}w_j\).

Proof

Let \(\sum _{j=1}^nV_j=T\) and let \(S=(J_{1},J_{2},\ldots ,J_{n})\) be any busy schedule. We have \(C_i(S)\le \sum _{j=1}^iV_j\) and so

$$\begin{aligned} \sum _{i=1}^n w_iC_i(S)\le \sum _{i=1}^nw_i\sum _{j=1}^iV_j\le \sum _{i=1}^nw_iT\le T(\underline{w}+(n-1)\overline{w}), \end{aligned}$$

since at least one weight equals \(\underline{w}\) and each of the remaining \(n-1\) weights is at most \(\overline{w}\).

Let \(S^*=(J_{[1]},J_{[2]},\ldots ,J_{[n]})\) be an optimal schedule. For \(S^*\) we have \(C_{[i]}(S^*)\ge V_{[i]}\left( 1+\ln P_{\max }\right) ^a\) and so

$$\begin{aligned} \sum _{i=1}^n w_iC_i(S^*)\ge & {} \sum _{i=1}^nw_{[i]} V_{[i]}\left( 1+\ln P_{\max }\right) ^a \ge \underline{w} \sum _{i=1}^n V_{[i]}\left( 1+\ln P_{\max }\right) ^a \\= & {} \underline{w}T\left( 1+\ln P_{\max }\right) ^a. \end{aligned}$$

Consequently, we have

$$\begin{aligned} \frac{\sum _{j=1}^n w_jC_j(S)}{\sum _{j=1}^n w_jC_j(S^*)}\le \frac{1+(n-1)(\overline{w}/\underline{w})}{\left( 1+\ln P_{\max }\right) ^a}. \end{aligned}$$

We show that the bound \(\frac{1+(n-1)(\overline{w}/\underline{w})}{\left( 1+\ln P_{\max }\right) ^a}\) is tight. Consider the following instance. Learning takes place by the 100%-learning curve, i.e., \(a=0\) and \(\frac{1+(n-1)(\overline{w}/\underline{w})}{\left( 1+\ln P_{\max }\right) ^a}=1+(n-1)(\overline{w}/\underline{w})\). The bound \(1+(n-1)(\overline{w}/\underline{w})\) of any busy schedule for \(Fm|{ prmu}|\sum w_j C_j\) is tight as shown in Smutnicki [2] and therefore the bound for the problem \(Fm|{ prmu},p_{ijr}=p_{ij}\left( 1+\sum _{l=1}^{r-1}\ln p_{i[l]}\right) ^a|\sum w_jC_j\) is also tight. \(\square \)

Theorem 6

Let \(S^*\) (S) be an optimal (SPT) schedule for the \(Fm|{ prmu},p_{ijr}=p_{ij}\left( 1+\sum _{l=1}^{r-1}\ln p_{i[l]}\right) ^a| \sum C_j^2\) problem. Then \(\frac{\sum _{j=1}^n C_j^2(S)}{\sum _{j=1}^n C_j^2(S^*)}\le \left( \frac{m}{\left( 1+\ln P_{\max }\right) ^a}\right) ^2\), and this bound is tight, where \(\ln P_{\max }=\max \{\sum _{l=1}^n\ln p_{il}-\ln p_{i\min }|i=1,2,\ldots ,m\}\) and \(p_{i\min }=\min \{p_{ij}|j=1,2,\ldots ,n\}\).

Proof

As in the proof of Theorem 1, assume that \(V_1\le V_2\le \cdots \le V_n\). We have \(C_j(S)\le \sum _{l=1}^jV_l\) and so

$$\begin{aligned} \sum _{j=1}^n C_j^2(S)\le \sum _{j=1}^n\left( \sum _{l=1}^jV_l\right) ^2. \end{aligned}$$

Let \(S^*=(J_{[1]},J_{[2]},\ldots ,J_{[n]})\). We have \(C_{[j]}(S^*) \ge \frac{\left( 1+\ln P_{\max }\right) ^a}{m}\sum _{l=1}^jV_{[l]}\) and so

(Tables 1 and 2 report results for \(C_{\mathrm{max}}\) with \(m=3\) and \(m=5\); Tables 3 and 4 for \(\sum C_{j}\); Tables 5 and 6 for \(\sum w_jC_{j}\); Tables 7 and 8 for \(\sum C_{j}^2\).)
$$\begin{aligned} \sum _{j=1}^n C_j^2(S^*)\ge & {} \sum _{j=1}^n \left( \frac{\left( 1+\ln P_{\max }\right) ^a}{m}\sum _{l=1}^jV_{[l]}\right) ^2\\= & {} \left( \frac{\left( 1+\ln P_{\max }\right) ^a}{m}\right) ^2\sum _{j=1}^n \left( \sum _{l=1}^jV_{[l]}\right) ^2\\\ge & {} \left( \frac{\left( 1+\ln P_{\max }\right) ^a}{m}\right) ^2\sum _{j=1}^n \left( \sum _{l=1}^jV_{l}\right) ^2, \end{aligned}$$

as the term \(\sum _{j=1}^n \left( \sum _{l=1}^jV_{l}\right) ^2\) is minimized by the increasing order of \(V_j\) (Lemma 4). Consequently, we have

$$\begin{aligned} \frac{\sum _{j=1}^n C_j^2(S)}{\sum _{j=1}^n C_j^2(S^*)}\le \left( \frac{m}{\left( 1+\ln P_{\max }\right) ^a}\right) ^2. \end{aligned}$$

We show that the bound \(\left( \frac{m}{\left( 1+\ln P_{\max }\right) ^a}\right) ^2\) is tight. Consider the following instance. Learning takes place by the 100%-learning curve (a learning rate of 100% means that no learning is taking place), thus \(a=0\), that is, the bound \(\left( \frac{m}{\left( 1+\ln P_{\max }\right) ^a}\right) ^2=m^2\). The bound \(m^2\) of the SPT rule for \(Fm|{ prmu}|\sum C_j^2\) is tight as shown in Koulamas and Kyparisis [4] and therefore the bound for the problem \(Fm|{ prmu},p_{ijr}=p_{ij}\left( 1+\sum _{l=1}^{r-1}\ln p_{i[l]}\right) ^a|\sum C_j^2\) is also tight. \(\square \)

Theorem 7

Let \(S^*\) (S) be an optimal (ARB rule) schedule for the problem \(Fm|{ prmu},p_{ijr}=p_{ij}\left( 1+\sum _{l=1}^{r-1}\ln p_{i[l]}\right) ^a|\sum C_j^2\). Then \(\frac{\sum _{j=1}^n C_j^2(S)}{\sum _{j=1}^n C_j^2(S^*)}\le \left( \frac{n}{\left( 1+\ln P_{\max }\right) ^a}\right) ^2\), and this bound is tight, where \(\ln P_{\max }=\max \{\sum _{l=1}^n\ln p_{il}-\ln p_{i\min }|i=1,2,\ldots ,m\}\) and \(p_{i\min }=\min \{p_{ij}|j=1,2,\ldots ,n\}\).

Proof

Similar to the proof of Theorems 2 and 6. \(\square \)

4 Computational study

Computational experiments were carried out to evaluate the performance of the heuristic algorithms. The algorithms were coded in VC++ 6.0 and tested on a Pentium 4 personal computer with 2 GB of RAM. The parameters of the test problems were generated as follows:

  1. \(p_{ij}\) (\(w_{j}\)) were generated from a uniform distribution over [3, 100] ([1, 50]);

  2. for small-sized instances, \(n=8,9,10,11\) and \(m=3,5\);

  3. \(a=-0.15,-0.25,-0.35,-0.45\).

For small-sized instances of the studied problems, the relative error of the solution produced by a heuristic algorithm is calculated as \(\frac{F({Heur})}{F({ Opt})} \), where \(F\in \{C_{\max }, \sum _{j=1}^n C_j,\sum _{j=1}^n w_jC_j\), \( \sum _{j=1}^n C_j^2\}\), \(\text{ Heur }\in \{\text{ ARB, } \text{ SPT, } \text{ WSPT }\}\), \(F({ Heur})\) is the objective value of the solution generated by the heuristic \({ Heur}\), and \(F({ Opt})\) is the objective value of the optimal solution (obtained by an enumerative algorithm). For each problem size, 20 instances were randomly generated. The results are shown in Tables 1, 2, 3, 4, 5, 6, 7 and 8. From Tables 1, 2, 3, 4, 7 and 8, it can be seen that the SPT rule is more effective than the ARB rule for \(C_{\mathrm{max}}\), \(\sum C_j\) and \(\sum C_j^2\). From Tables 5 and 6, it can be seen that the WSPT rule is more effective than the ARB rule for \(\sum w_jC_j\).
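The evaluation scheme can be reproduced in outline as follows. This is our own sketch, not the authors' code: it uses the makespan objective, generates \(p_{ij}\) from the same uniform range, and obtains \(F({ Opt})\) by enumerating all \(n!\) permutations (feasible only for the small instance sizes used here).

```python
import itertools
import math
import random

def cmax(p, order, a):
    """Makespan of a permutation schedule under learning model (1)."""
    m, n = len(p), len(order)
    prev_row = [0.0] * n  # completion times on the previous machine
    for i in range(m):
        row, logsum = [], 0.0
        for r, j in enumerate(order):
            actual = p[i][j] * (1.0 + logsum) ** a
            start = max(prev_row[r], row[r - 1] if r > 0 else 0.0)
            row.append(start + actual)
            logsum += math.log(p[i][j])
        prev_row = row
    return prev_row[-1]

def relative_error_spt(p, a):
    """C_max(SPT) / C_max(Opt), with Opt found by full enumeration."""
    n = len(p[0])
    V = [sum(p[i][j] for i in range(len(p))) for j in range(n)]
    spt = sorted(range(n), key=lambda j: V[j])
    opt = min(cmax(p, list(perm), a)
              for perm in itertools.permutations(range(n)))
    return cmax(p, spt, a) / opt

random.seed(1)
p = [[random.uniform(3, 100) for _ in range(6)] for _ in range(3)]
print(relative_error_spt(p, -0.25))  # a ratio >= 1
```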

5 Conclusion

In this paper, we studied flow shop scheduling with sum-of-logarithm-processing-times-based learning effects. We developed heuristic algorithms with tight worst-case bounds for the flow shop scheduling problem under four regular objective functions, and numerical experiments demonstrated the effectiveness of the heuristic algorithms. Future research may focus on proposing more sophisticated and efficient heuristics, considering the two-stage assembly flow shop (Wu et al. [33], and Wu et al. [34]), studying other learning effect models (Wang et al. [35], Niu et al. [36], and Yin [37]), or addressing deterioration effects (Fan and Zhao [38], Li and Zhao [39], Wang et al. [40], Wang and Zhao [41], and Huang [42]).