1 Introduction

Scheduling models in which the actual processing times of jobs are not constant but are subject to various effects have recently generated a considerable volume of publications. Traditionally, in the literature on scheduling with changing processing times, two opposite effects are studied: deterioration and learning. Under a deterioration effect, the later a job starts, the more time is required to process it; a common rationale is that the processing conditions of a machine or tool worsen over time. Under a learning effect, by contrast, the actual processing times of jobs scheduled later become shorter, as illustrated by human operators who improve their skills in performing similar activities by gaining experience.

We are given the jobs of set \(N=\left\{ 1,2,\ldots ,n\right\} \) to be processed on a single machine. Each job \(j\in N\) is associated with an integer \(p_{j}\) that is called its “normal” processing time. This value can be understood as the actual processing duration of job j, provided that the machine is in a perfect condition.

In the scheduling literature, the effects that may affect the actual processing time of a job \(j\in N\) usually belong to one of the following types (or their combination):

  • Time-dependent effect: the actual processing time of job j depends on the start time of the job; see the book by Gawiejnowicz (2008) which gives a detailed exposition of scheduling models with this effect;

  • Positional effect: the actual processing time of job j depends on \(p_{j}\) and on the position of the job in the sequence; see a focused survey by Rustogi and Strusevich (2012b) and a discussion in Agnetis et al. (2014);

  • Cumulative effect: the actual processing time of job j depends on \(p_{j}\) and on an accumulated value of some parameter, typically on the sum of the normal processing times of all jobs sequenced earlier; see Kuo and Yang (2006a, 2006b), where a similar effect is introduced.

In this paper, we address several versions of a single machine scheduling problem to minimize the maximum completion time, provided that a generalized linear job-dependent cumulative effect is applied. Suppose that the jobs are processed on a single machine in accordance with the sequence \(\pi =\left( \pi \left( 1\right) ,\pi \left( 2\right) ,\ldots ,\pi \left( n\right) \right) \). Under the most studied cumulative effect, introduced by Kuo and Yang (2006a, 2006b), the actual processing time of job j scheduled in the r-th position of permutation \(\pi \) is defined by

$$\begin{aligned} p_{j}\left( \pi ; r\right) =p_{j}\left( 1+b\sum _{h=1}^{r-1}p_{\pi \left( h\right) }\right) ^{A}, \end{aligned}$$
(1)

where A is a given constant, and b is equal to 1 in the case of deterioration and to \(-1\) in the case of learning. Extensions and generalizations of this basic model can be found in Yin et al. (2009) and Huang and Wang (2015). A common drawback of papers on scheduling with a cumulative effect is that normally no convincing practical motivation of the model is given. In particular, it is not well justified why the actual processing time of a job should depend on the total normal processing time of previously scheduled jobs.

In this paper, we study a cumulative effect that arises when a job \(j\in N\) is associated not only with the normal processing time \(p_{j}\) but also with two additional parameters, \(b_{j}\) and \(q_{j}>0\). Here \(q_{j}\) is a quantity, not necessarily equal to the normal processing time, such that its accumulated value affects the actual processing time of later scheduled jobs. Formally, the actual processing time of job j scheduled in the r-th position of permutation \(\pi \) is defined by

$$\begin{aligned} p_{j}\left( \pi ; r\right) =p_{j}\left( 1+b_{j}\sum _{h=1}^{r-1}q_{\pi \left( h\right) }\right) , \end{aligned}$$
(2)

where \(b_{j}>0\) under a deterioration effect and \(b_{j}<0\) under a learning effect. Unlike (1), the effect (2) is represented not by a polynomial but by a linear function of the accumulated quantities. On the other hand, no explicit dependence on the normal time of previously scheduled jobs is assumed and the values of \(b_{j}\) can be understood as job-dependent rates that reflect how sensitive a particular job is to the previously scheduled jobs.
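For concreteness, the effect (2) can be sketched in a few lines of Python; the code below computes the actual processing times and the resulting makespan for a given permutation. All numerical instance data are hypothetical, chosen only for illustration.

```python
# Minimal sketch: makespan under the linear cumulative effect (2).
# All instance data below are hypothetical, chosen only for illustration.

def makespan(pi, p, b, q):
    """C_max for sequence pi: the job in position r takes
    p_j * (1 + b_j * (sum of q over the r-1 earlier jobs))."""
    acc = 0.0    # Q_r: accumulated q-values of previously scheduled jobs
    total = 0.0
    for j in pi:
        total += p[j] * (1 + b[j] * acc)  # actual processing time of job j
        acc += q[j]                       # job j adds q_j for later jobs
    return total

p = {1: 4.0, 2: 6.0}   # normal processing times
q = {1: 2.0, 2: 3.0}   # accumulated quantities
b = {1: 0.1, 2: 0.05}  # job-dependent deterioration rates (b_j > 0)
print(makespan([1, 2], p, b, q))  # 4 + 6*(1 + 0.05*2) = 10.6
```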

For illustration of our model, suppose that a floor sanding machine is used to treat floors in several rooms. The normal time \(p_{j}\) is the time requirement for sanding floors in room j, provided that a new sanding belt/disc is used. The value of \(q_{j}\) can be seen as the amount of generated saw dust or an appropriately measured wear of the sanding belt/disc, which does not necessarily depend on the time of treatment. For some rooms, the actual treatment time can be seriously affected by the quality of the equipment, and for some rooms, the effect may be less noticeable, and this job dependency is captured by the rate parameter \(b_{j}\). It is not difficult to identify a similar cumulative deterioration effect in other activities/industries.

To illustrate the effect (2) in a learning environment, consider the following situation. A computer programmer is supposed to write n software pieces for a particular project. These pieces can be developed in any order. Developing these pieces requires particular transferable technical skills (such as manipulating a certain data structure), which the programmer initially does not possess. In the beginning of the process, a software piece \(j\in N\) can be completed in \(p_{j}\) time units. Assume that after completing a particular software piece j, the technical skill of the programmer increases by \(q_{j}\) appropriately measured units, and that skill might help to speed up the creation of any piece to follow. Thus, the actual time needed to create a particular piece depends on the accumulated skills gained during the development of previously created pieces. Formally, the development time of a software piece decreases linearly with the technical skill of the programmer, so that the actual time taken to write a software piece \(j=\pi (r)\) is given by \(p_{j}\left( \pi ;r\right) =p_{j}-a_{j}\sum _{h=1}^{r-1}q_{\pi \left( h\right) }\), where the quantity \(a_{j}\) defines how sensitive the development time for software piece j is to the gained technical skills. This formulation can be written in terms of the effect (2) with \(b_{j}=-a_{j}/p_{j}, j \in N\).
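The final rewriting in this example is a one-line identity:

$$\begin{aligned} p_{j}-a_{j}\sum _{h=1}^{r-1}q_{\pi \left( h\right) }=p_{j}\left( 1-\frac{a_{j}}{p_{j}}\sum _{h=1}^{r-1}q_{\pi \left( h\right) }\right) =p_{j}\left( 1+b_{j}\sum _{h=1}^{r-1}q_{\pi \left( h\right) }\right) ,\quad b_{j}=-\frac{a_{j}}{p_{j}}. \end{aligned}$$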

Adopting standard scheduling notation, we denote the problem of minimizing the makespan \(C_{\max }\), i.e., the maximum completion time, under the effect (1) by \(1\left| p_{j}\left( \pi ;r\right) \right. \left. =p_{j}\left( 1+bP_{r}\right) ^{A}\right| C_{\max }\), where \(P_{r}\) stands for the sum of the normal processing times of the jobs scheduled prior to job \(\pi \left( r\right) \). A similar problem under the effect (2) is denoted by \(1\left| p_{j}\left( \pi ;r\right) =p_{j}\left( 1+b_{j}Q_{r}\right) \right| C_{\max }\), where \(Q_{r}\) represents the sum of the \(q_{j}\) values of the jobs scheduled prior to job \(\pi \left( r\right) \).

Apart from problem \(1\left| p_{j}\left( \pi ;r\right) =p_{j}\left( 1+b_{j}Q_{r}\right) \right| C_{\max }\) in which the jobs of set N are independent, we also study its version in which precedence constraints are imposed on the set of jobs, so that only those permutations of jobs which respect the constraints are feasible. These precedence constraints are given in the form of an acyclic directed graph, with the nodes representing the jobs and the arcs linking immediate predecessors and successors. Provided that the digraph that defines the precedence constraints is series-parallel, we denote the problems under effects (1) and (2) by \(1\left| p_{j}\left( \pi ;r\right) =p_{j}\left( 1+bP_{r}\right) ^{A}, \hbox {SP-prec}\right| C_{\max }\) and \(1\left| p_{j}\left( \pi ;r\right) =p_{j}\left( 1+b_{j}Q_{r}\right) , \hbox {SP-prec}\right| C_{\max }\), respectively. See Gordon et al. (2008) for a range of results on single machine scheduling with series-parallel precedence constraints and various effects (positional, time-dependent, and cumulative), including problem \(1\left| p_{j}\left( \pi ;r\right) =p_{j}\left( 1+P_{r}\right) ^{A}, \hbox {SP-prec}\right| C_{\max }\). Extending the floor sanding example given above, problem \(1\left| p_{j}\left( \pi ;r\right) =p_{j}\left( 1+b_{j}Q_{r}\right) , \hbox {SP-prec}\right| C_{\max }\) can arise if precedence constraints occur due to a particular physical layout of the building in which the rooms to be sanded are located.

For scheduling problems with a deterioration effect, the actual processing times grow. In order to prevent the processing times from becoming unacceptably large, a maintenance period (MP) can be introduced into a schedule. During an MP, no processing takes place, and after the MP, the machine is in a better processing state. The duration of an MP is either a constant or depends on its start time \(\tau \). See Rustogi and Strusevich (2012a, 2014, 2015) for studies of scheduling models with maintenance under positional effects, combined effects and time-dependent effects, respectively.

Kellerer et al. (2013) study problems \(1\left| p_{j}\left( \pi ;r\right) = p_{j}\right. \left. \left( 1+bP_{r}\right) ,\hbox {MP}\left( 0\right) \right| C_{\max }\) and \(1\left| p_{j}\left( \pi ; r\right) =p_{j}\left( 1+bP_{r}\right) ,\right. \left. \hbox {MP}\left( \lambda \right) \right| C_{\max }\) with exactly one MP introduced into a schedule. Here, \(\hbox {MP}\left( \lambda \right) \) means that the duration of the MP is a linear function of its start time \(\tau \), and is given by

$$\begin{aligned} \Delta \left( \tau \right) =\lambda \tau +\mu , \end{aligned}$$
(3)

where \(\mu \ge 0\) and \(\lambda \ge 0\) are known constants. In particular, the term \(\hbox {MP}\left( 0\right) \) corresponds to an MP of constant duration \(\mu \). In this paper, we address more general problems \(1\left| p_{j}\left( \pi ;r\right) =p_{j}\left( 1+b_{j}Q_{r}\right) ,\hbox {MP}\left( 0\right) \right| C_{\max }\) and \(1\left| p_{j}\left( \pi ;r\right) =p_{j}\left( 1+b_{j}Q_{r}\right) ,\hbox {MP}\left( \lambda \right) \right| C_{\max }\) with a single maintenance period. Notice that in the models studied by Kellerer et al. (2013), the MP is assumed to fully restore the processing conditions, so that after the maintenance the machine is “as good as new”. In this paper, we consider the MP as a rate-modifying activity, as introduced by Lee and Leon (2001), and assume that for a job \(j\in N\) scheduled after an MP, the normal processing time changes from \(p_{j}\) to \(\sigma p_{j}\), where \(\sigma \) is a given positive constant.

The problems with a single MP are NP-hard, and we focus on the design of fully polynomial-time approximation schemes (FPTAS). Recall that for a problem of minimizing a function \(\Phi (\mathbf {x})\), where \(\mathbf {x}\) is a collection of decision variables, a polynomial-time algorithm that finds a feasible solution \(\mathbf {x}^{H}\) such that \(\Phi (\mathbf {x}^{H})\) is at most \(\rho \ge 1\) times the optimal value \(\Phi (\mathbf {x}^{*})\) is called a \(\rho \)-approximation algorithm; the value of \(\rho \) is called a worst-case ratio bound. A family of \(\rho \)-approximation algorithms is called a fully polynomial-time approximation scheme (FPTAS) if \(\rho =1+\varepsilon \) for any \(\varepsilon >0\) and the running time is polynomial with respect to both the length of the problem input and \(1/\varepsilon \).

The remainder of this paper is organized as follows. In Sect. 2, problem \(1\left| p_{j}\left( \pi ;r\right) =p_{j}\left( 1+b_{j}Q_{r}\right) \right| C_{\max }\) is reduced to the classical scheduling problem \(1\left| ~\right| \sum w_{j}C_{j}\) to minimize the sum of the weighted completion times on a single machine and is therefore solvable in \(O\left( n\log n\right) \) time. Using the theory of minimizing priority-generating functions under series-parallel precedence constraints, in Sect. 3, we show that problem \(1\left| p_{j}\left( \pi ;r\right) =p_{j}\left( 1+b_{j}Q_{r}\right) ,\right. \left. \hbox {SP-prec}\right| C_{\max }\) is also solvable in \(O\left( n\log n\right) \) time. In Sect. 4, we present a fast FPTAS for problem \(1\left| p_{j}\left( \pi ;r\right) =p_{j}\right. \left. \left( 1+b_{j}Q_{r}\right) ,\hbox {MP}\left( \lambda \right) \right| C_{\max }\) with a single maintenance period. Some concluding remarks can be found in Sect. 5.

2 Minimization of makespan

For a scheduling problem to minimize a function \(\Phi \) over a set of permutations, an optimal solution can be found by applying a priority rule, i.e., by associating each job \(j\in N\) with a value \(\omega \left( j\right) \) and sorting the jobs in non-increasing order of \(\omega \left( j\right) \)’s. The values \(\omega \left( j\right) , j\in N\), are called 1-priorities. The most popular 1-priorities are \(\omega \left( j\right) =p_{j}, j\in N\), and \(\omega \left( j\right) =1/p_{j}, j\in N\), which correspond to the well-known LPT and SPT priority rules, respectively.

Problem \(1\left| p_{j}\left( \pi ;r\right) =p_{j}\left( 1+bP_{r}\right) ^{A}\right| C_{\max }\) is known to be solvable by the SPT rule if \(A<0\) (learning, see Kuo and Yang (2006b)) and if \(A>1\) (fast deterioration, see Gordon et al. (2008)). For the problem with \(A=1\), the objective function is sequence independent, i.e., any permutation is optimal; see Gordon et al. (2008).

Another well-known scheduling priority rule is the WSPT (or Smith’s) rule. This rule is based on 1-priorities \(\omega \left( j\right) =w_{j}/p_{j}, j\in N\), and finds an optimal permutation for problem \(1\left| ~\right| \sum w_{j}C_{j}\) of minimizing the sum of the weighted completion times.

Assume that in problem \(1\left| ~\right| \sum w_{j}C_{j}\) the processing time of a job \(j\in N\) is denoted by \(q_{j}\). Then the value of the objective function for a schedule associated with a permutation \(\pi =\left( \pi \left( 1\right) ,\pi \left( 2\right) ,\ldots ,\pi \left( n\right) \right) \) is given by

$$\begin{aligned} \sum _{r=1}^{n}w_{\pi \left( r\right) }C_{\pi \left( r\right) }=\sum _{r=1}^{n}w_{\pi \left( r\right) }\sum _{h=1}^{r}q_{\pi \left( h\right) }, \end{aligned}$$

and, as proved by Smith (1956), an optimal permutation can be found in \(O\left( n\log n\right) \) time by sorting the jobs in non-increasing order of the 1-priorities \(\omega \left( j\right) =w_{j}/q_{j}\).
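Smith's rule can be sketched in a few lines of Python; the instance data below are hypothetical.

```python
# Sketch of Smith's WSPT rule for 1 | | sum w_j C_j with processing times q_j:
# sequence the jobs in non-increasing order of the ratios w_j / q_j.

def wspt_order(w, q):
    """Jobs sorted in non-increasing order of w_j / q_j."""
    return sorted(w, key=lambda j: w[j] / q[j], reverse=True)

def weighted_completion(pi, w, q):
    """Objective sum_r w_{pi(r)} * C_{pi(r)}, where C accumulates q."""
    t, obj = 0.0, 0.0
    for j in pi:
        t += q[j]        # completion time of job j
        obj += w[j] * t
    return obj

w = {1: 3.0, 2: 1.0, 3: 2.0}  # hypothetical weights
q = {1: 2.0, 2: 1.0, 3: 4.0}  # hypothetical processing times
pi = wspt_order(w, q)         # ratios 1.5, 1.0, 0.5 -> order [1, 2, 3]
print(pi, weighted_completion(pi, w, q))  # [1, 2, 3] 23.0
```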

We use this result to solve problem \(1\left| p_{j}\left( \pi ;r\right) =\right. \left. p_{j}\left( 1+b_{j}Q_{r}\right) \right| C_{\max }\).

Theorem 1

For problem \(1\left| p_{j}\left( \pi ;r\right) =p_{j}\left( 1+b_{j}Q_{r}\right) \right| C_{\max }\), an optimal permutation can be found in \(O\left( n\log n\right) \) time by sorting the jobs in non-increasing order of the ratios \(\left( p_{j}b_{j}\right) /q_{j}, j\in N \).

Proof

We reduce the problem under consideration to problem \(1\left| ~\right| \sum w_{j}C_{j}\), with the processing times equal to \(q_{j}\) and the weights defined by

$$\begin{aligned} w_{j}=b_{j}p_{j},\quad ~j\in N. \end{aligned}$$
(4)

Given a permutation \(\pi =\left( \pi \left( 1\right) ,\pi \left( 2\right) ,\ldots ,\pi \left( n\right) \right) \) of jobs, let \(C_{\max }\left( \pi \right) \) denote the makespan for a schedule in which the jobs are processed according to permutation \(\pi \). Then for the original problem, we have

$$\begin{aligned} C_{\max }\left( \pi \right)= & {} p_{\pi \left( 1\right) }+\sum _{r=2}^{n}p_{\pi \left( r\right) }\left( 1+b_{\pi \left( r\right) }\sum _{h=1}^{r-1}q_{\pi \left( h\right) }\right) \\= & {} \sum _{r=1}^{n}p_{\pi \left( r\right) }+\sum _{r=2}^{n}b_{\pi \left( r\right) }p_{\pi \left( r\right) }\sum _{h=1}^{r-1}q_{\pi \left( h\right) } \\= & {} \sum _{r=1}^{n}p_{\pi \left( r\right) }+\sum _{r=1}^{n}b_{\pi \left( r\right) }p_{\pi \left( r\right) }\sum _{h=1}^{r-1}q_{\pi \left( h\right) }, \end{aligned}$$

where the last equality is due to \(\sum _{h=1}^{0}q_{\pi \left( h\right) }=0\).

Using (4), we further rewrite

$$\begin{aligned} C_{\max }\left( \pi \right)= & {} \sum _{r=1}^{n}p_{\pi \left( r\right) }+\sum _{r=1}^{n}w_{\pi \left( r\right) }\sum _{h=1}^{r-1}q_{\pi \left( h\right) } \\= & {} \sum _{r=1}^{n}p_{\pi \left( r\right) }+\sum _{r=1}^{n}w_{\pi \left( r\right) }\sum _{h=1}^{r}q_{\pi \left( h\right) }-\sum _{r=1}^{n}w_{\pi \left( r\right) }q_{\pi \left( r\right) } \\= & {} \sum _{r=1}^{n}w_{\pi \left( r\right) }\sum _{h=1}^{r}q_{\pi \left( h\right) }+\sum _{j=1}^{n}\left( p_{j}-w_{j}q_{j}\right) . \end{aligned}$$

Thus, \(C_{\max }\left( \pi \right) \) is minimized if the minimum of \(\sum _{r=1}^{n}w_{\pi \left( r\right) }\sum _{h=1}^{r}q_{\pi \left( h\right) }\) is attained. The latter expression is the objective function in problem \(1\left| ~\right| \sum w_{j}C_{j}\), so that the optimal permutation can be found by the WSPT rule. In terms of the original problem, an optimal permutation is obtained by sorting the jobs in non-increasing order of the ratios \(\left( p_{j}b_{j}\right) /q_{j}\). \(\square \)

Reformulating Theorem 1, we conclude that for the problem of minimizing the makespan under effect (2) the 1-priority is \(\omega \left( j\right) =b_{j}p_{j}/q_{j}, j\in N\). Notice that Theorem 1 holds irrespective of the sign of \(b_{j}\), i.e., for both deterioration and learning effects.
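Theorem 1 can be checked by exhaustive enumeration on a small hypothetical instance; the Python sketch below compares the priority order against all permutations, with mixed signs of \(b_{j}\).

```python
# Brute-force check of Theorem 1 on a small hypothetical instance:
# sequencing jobs in non-increasing order of (p_j * b_j) / q_j attains the
# minimum makespan among all permutations, for positive and negative b_j.
from itertools import permutations

def makespan(pi, p, b, q):
    acc, total = 0.0, 0.0
    for j in pi:
        total += p[j] * (1 + b[j] * acc)  # effect (2)
        acc += q[j]
    return total

p = {1: 3.0, 2: 5.0, 3: 2.0, 4: 4.0}
q = {1: 1.0, 2: 2.0, 3: 3.0, 4: 2.0}
b = {1: 0.2, 2: -0.1, 3: 0.3, 4: 0.05}  # mixed deterioration/learning

priority_order = sorted(p, key=lambda j: p[j] * b[j] / q[j], reverse=True)
best = min(makespan(list(s), p, b, q) for s in permutations(p))
assert abs(makespan(priority_order, p, b, q) - best) < 1e-9
print(priority_order)  # [1, 3, 4, 2]
```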

Theorem 1 can be applied to an effect that resembles (1) with \(A=1\).

Corollary 1

If effect (2) is applied with \(q_{j}=p_{j}\), for all \(j\in N\), then the resulting problem \(1\left| p_{j}\left( \pi ;r\right) =p_{j}\left( 1+b_{j}P_{r}\right) \right| C_{\max }\) is solvable in \(O\left( n\log n\right) \) time by sequencing jobs in non-increasing order of \(b_{j}\). Moreover, if \(b_{j}=1\), for all \(j\in N\), then an arbitrary permutation of jobs results in an optimal solution to the resulting problem \(1\left| p_{j}\left( \pi ;r\right) =p_{j}\left( 1+P_{r}\right) \right| C_{\max }\).

Notice that the latter part of Corollary 1 for problem \(1\left| p_{j}\left( \pi ;r\right) =p_{j}\left( 1+P_{r}\right) \right| C_{\max }\) is also proved in Gordon et al. (2008).

3 Minimization of makespan with precedence constraints

In this section, we study problem \(1\left| p_{j}\left( \pi ;r\right) =p_{j}\right. \left. \left( 1+b_{j}Q_{r}\right) ,\hbox {SP-prec}\right| C_{\max }\), in which precedence constraints are imposed on the set of jobs, and the graph that defines these constraints is series-parallel; see Valdes et al. (1982) and Tanaev et al. (1984) for definitions and notions related to this well-studied class of graphs.

Research on scheduling problems under series-parallel precedence constraints was initiated by Lawler (1978), who presented a polynomial-time algorithm for minimizing the weighted sum of the completion times on a single machine subject to series-parallel constraints. Soon after, it was discovered that many other scheduling problems can be solved by a similar approach, provided that their objective functions possess specific properties, related to an extended notion of a priority function that is defined for subsequences of jobs rather than just for individual jobs. The definition below can be found in Tanaev et al. (1984) and Monma and Sidney (1979).

Definition 1

Let \(\pi ^{\alpha \beta }=(\pi ^{\prime }\alpha \beta \pi ^{\prime \prime })\) and \(\pi ^{\beta \alpha }=(\pi ^{\prime }\beta \alpha \pi ^{\prime \prime })\) be two permutations of n jobs that differ only in the order of the subsequences \(\alpha \) and \(\beta \) (here subsequences \(\pi ^{\prime }\) and/or \(\pi ^{\prime \prime }\) can be dummy permutations with no elements). For a function \(\Phi (\pi )\) that depends on a permutation, suppose that there exists a function \(\omega (\pi )\) such that for any two permutations \(\pi ^{\alpha \beta }\) and \(\pi ^{\beta \alpha }\), the inequality \(\omega (\alpha )>\omega (\beta )\) implies that \(\Phi (\pi ^{\alpha \beta })\le \Phi (\pi ^{\beta \alpha })\), while the equality \(\omega (\alpha )=\omega (\beta )\) implies that \(\Phi (\pi ^{\alpha \beta })=\Phi (\pi ^{\beta \alpha })\). In this case, function \(\Phi \) is called a priority-generating function, while function \(\omega \) is called its priority function. For a (partial) permutation \(\pi \), the value of \(\omega (\pi )\) is called the priority of \(\pi \).

A priority function applied to a single job becomes a 1-priority for that job. Thus, for function \(\Phi (\pi )\) to be priority-generating, it is necessary that the problem of minimizing \(\Phi (\pi )\) admits 1-priorities. On the other hand, the existence of 1-priorities does not imply that they can be extended to a priority function. Intuitively, a priority function allows us to rank not only individual jobs but also partial permutations. The fastest known algorithm minimizes a priority-generating function under series-parallel precedence constraints in \(O(n\log n)\) time; see, e.g., Monma and Sidney (1979) and Tanaev et al. (1984).

Various single machine scheduling problems with time-changing effects and series-parallel precedence constraints have been studied in Gordon et al. (2008), Wang and Xia (2005) and Dolgui et al. (2012). In particular, Gordon et al. (2008) study problem \(1\left| p_{j}\left( \pi ;r\right) =p_{j}\left( 1+P_{r}\right) ^{A}, \hbox {SP-prec}\right| C_{\max }\) for a positive integer A. For \(A=1\), the objective function is sequence independent (see Corollary 1), and the problem is solvable in \(O\left( n\right) \) time under arbitrary precedence constraints since any feasible permutation is optimal. For \(A=2\), the objective function is proved to be priority-generating and the problem is therefore solvable in \(O\left( n\log n\right) \) time. On the other hand, for \(A=3\), the objective function is proved not to be priority-generating.

Below, we use Definition 1 to prove that for problem \(1\left| p_{j}\left( \pi ;r\right) =p_{j}\left( 1+b_{j}Q_{r}\right) ,\hbox {SP-prec}\right| C_{\max }\) the objective function is priority-generating.

Theorem 2

For the single machine problem to minimize the makespan under the cumulative effect (2), the objective function is priority-generating and

$$\begin{aligned} \omega (\pi )=\frac{\sum _{j=1}^{\left| \pi \right| }p_{\pi (j)}b_{\pi (j)}}{\sum _{j=1}^{\left| \pi \right| }q_{\pi (j)}} \end{aligned}$$
(5)

is its priority function, so that problem \(~1\left| p_{j}\left( \pi ;r\right) =p_{j}\right. \left. \left( 1+b_{j}Q_{r}\right) ,\hbox {SP-prec}\right| C_{\max }\) is solvable in \(O\left( n\log n\right) \) time.

Proof

For a (partial) permutation \(\pi \), we denote the length of \(\pi \), i.e., the number of jobs in \(\pi \), by \(\left| \pi \right| \). For a partial permutation \(\pi \), consider a schedule S such that \(\pi \) is contained as a subsequence in the full permutation that defines schedule S. Assume that for S the following holds: (i) the first job in \(\pi \) starts at time \(\tau \); and (ii) the sum of the \(q_{j}\) values of the jobs that precede the first job in \(\pi \), i.e., those completed by time \(\tau \), is equal to \(\zeta \). Under these assumptions, let \(C_{\max }(\pi ;\tau ;\zeta )\) denote the maximum completion time of the jobs in \(\pi \). By definition, for problem \(1\left| p_{j}\left( \pi ;r\right) =p_{j}\left( 1+b_{j}Q_{r}\right) , \hbox {SP-prec}\right| C_{\max }\) we deduce

$$\begin{aligned} C_{\max }(\pi ; \tau ; \zeta )= & {} \tau +C_{\max }(\pi ; 0; \zeta )\\= & {} \tau +\sum _{k=1}^{\left| \pi \right| }p_{\pi (k)}\left( 1+b_{\pi \left( k\right) }\left( \zeta +\sum _{i=1}^{k-1}q_{\pi (i)}\right) \right) . \end{aligned}$$

Let \(\pi ^{\alpha \beta }=(\pi _{1}\alpha \beta \pi _{2})\) and \(\pi ^{\beta \alpha }=(\pi _{1}\beta \alpha \pi _{2})\) be two permutations of all jobs that only differ in the order of the subsequences \(\alpha \) (containing u jobs) and \(\beta \) (containing v jobs). Define \(\Delta C=C_{\max }(\pi ^{\alpha \beta })-C_{\max }(\pi ^{\beta \alpha })\) and let \(\zeta ^{\prime }\) denote the total sum of the \(q_{j}\) values of the jobs in \(\pi _{1}\). Then \(\Delta {C}=C_{\max }(\alpha \beta \pi _{2};0;\zeta ^{\prime })-C_{\max }(\beta \alpha \pi _{2};0;\zeta ^{\prime })\). Furthermore,

$$\begin{aligned}&C_{\max }(\alpha \beta \pi _{2};0;\zeta ^{\prime })\\&\quad =C_{\max }(\alpha \beta ; 0; \zeta ^{\prime }) \\&\qquad +\sum _{k=1}^{\left| \pi _{2}\right| }p_{\pi _{2}(k)}\left( 1+b_{\pi _{2}(k)}\left( \zeta ^{\prime } +\sum _{i=1}^{u}q_{\alpha (i)}+\sum _{j=1}^{v}q_{\beta (j)}+\sum _{i=1}^{k-1}q_{\pi _{2}(i)}\right) \right) , \\&C_{\max }(\beta \alpha \pi _{2};0;\zeta ^{\prime })\\&\quad =C_{\max }(\beta \alpha ; 0; \zeta ^{\prime })\\&\qquad +\sum _{k=1}^{\left| \pi _{2}\right| }p_{\pi _{2}(k)}\left( 1+b_{\pi _{2}(k)}\left( \zeta ^{\prime }+\sum _{j=1}^{v}q_{\beta (j)}+\sum _{i=1}^{u}q_{\alpha (i)}+\sum _{i=1}^{k-1}q_{\pi _{2}(i)}\right) \right) , \end{aligned}$$

so that \(\Delta {C}=C_{\max }(\alpha \beta ; 0; \zeta ^{\prime })-C_{\max }(\beta \alpha ; 0; \zeta ^{\prime })\). To prove the theorem, we derive conditions under which \(\Delta C\le 0\).

Further, we deduce

$$\begin{aligned}&C_{\max }(\alpha \beta ; 0; \zeta ^{\prime })\\&\quad =C_{\max }(\alpha ;0;\zeta ^{\prime })\\&\qquad +\sum _{k=1}^{v}p_{\beta (k)}\left( 1 +b_{\beta \left( k\right) }\left( \zeta ^{\prime }+\sum _{i=1}^{u}q_{\alpha (i)}+\sum _{j=1}^{k-1}q_{\beta (j)}\right) \right) \\&\quad =\sum _{k=1}^{u}p_{\alpha (k)}\left( 1+b_{\alpha \left( k\right) }\left( \zeta ^{\prime }+\sum _{i=1}^{k-1}q_{\alpha (i)}\right) \right) \\&\qquad +\sum _{k=1}^{v}p_{\beta (k)}\left( 1+b_{\beta \left( k\right) }\left( \zeta ^{\prime }+\sum _{i=1}^{u}q_{\alpha (i)}+\sum _{j=1}^{k-1}q_{\beta (j)}\right) \right) , \\&C_{\max }(\beta \alpha ; 0; \zeta ^{\prime })\\&\quad =C_{\max }(\beta ;0;\zeta ^{\prime })\\&\qquad +\sum _{k=1}^{u}p_{\alpha (k)}\left( 1 +b_{\alpha \left( k\right) }\left( \zeta ^{\prime }+\sum _{j=1}^{v}q_{\beta (j)}+\sum _{i=1}^{k-1}q_{\alpha (i)}\right) \right) \\&\quad =\sum _{k=1}^{v}p_{\beta (k)}\left( 1+b_{\beta \left( k\right) }\left( \zeta ^{\prime }+\sum _{j=1}^{k-1}q_{\beta (j)}\right) \right) \\&\qquad +\sum _{k=1}^{u}p_{\alpha (k)}\left( 1+b_{\alpha \left( k\right) }\left( \zeta ^{\prime }+\sum _{j=1}^{v}q_{\beta (j)}+\sum _{i=1}^{k-1}q_{\alpha (i)}\right) \right) , \end{aligned}$$

so that for \(\Delta C\) we derive

$$\begin{aligned} \Delta C\!=\! & {} \sum _{k=1}^{u}p_{\alpha (k)}\left( \left( 1+b_{\alpha \left( k\right) }\left( \zeta ^{\prime }+\sum _{i=1}^{k-1}q_{\alpha (i)}\right) \right) \right. \\&\left. -\left( 1+b_{\alpha \left( k\right) }\left( \zeta ^{\prime }+\sum _{j=1}^{v}q_{\beta (j)}+\sum _{i=1}^{k-1}q_{\alpha (i)}\right) \right) \right) \\&+\sum _{k=1}^{v}p_{\beta (k)}\left( \left( 1+b_{\beta \left( k\right) }\left( \zeta ^{\prime }+\sum _{i=1}^{u}q_{\alpha (i)}+\sum _{j=1}^{k-1}q_{\beta (j)}\right) \right) \right. \\&\left. -\left( 1+b_{\beta \left( k\right) }\left( \zeta ^{\prime }+\sum _{j=1}^{k-1}q_{\beta (j)}\right) \right) \right) . \end{aligned}$$

Proceeding further, we obtain

$$\begin{aligned} \Delta C= & {} -\sum _{k=1}^{u}p_{\alpha (k)}b_{\alpha \left( k\right) }\sum _{j=1}^{v}q_{\beta (j)}\\&\quad +\sum _{k=1}^{v}p_{\beta (k)}b_{\beta \left( k\right) }\sum _{i=1}^{u}q_{\alpha (i)}. \end{aligned}$$

Dividing by \(\sum _{k=1}^{u}q_{\alpha (k)}\sum _{i=1}^{v}q_{\beta (i)}\), we deduce that \(\Delta C\le 0\), provided that

$$\begin{aligned} \frac{\sum _{k=1}^{u}p_{\alpha (k)}b_{\alpha \left( k\right) }}{\sum _{k=1}^{u}q_{\alpha (k)}}\ge \frac{\sum _{i=1}^{v}p_{\beta (i)}b_{\beta \left( i\right) }}{\sum _{i=1}^{v}q_{\beta (i)}}. \end{aligned}$$

For an arbitrary (partial) permutation \(\pi \), define the function \(\omega (\pi )\) by (5). It is easily verified that \(\omega (\alpha )>\omega (\beta )\) implies \(C_{\max }(\pi ^{\alpha \beta })\le C_{\max }(\pi ^{\beta \alpha })\), while \(\omega (\alpha )=\omega (\beta )\) implies \(C_{\max }(\pi ^{\alpha \beta })=C_{\max }(\pi ^{\beta \alpha })\), as required by Definition 1. \(\square \)

Theorem 2 holds irrespective of the sign of \(b_{j}, j\in N\). Observe that if (5) is applied to a single job j, i.e., to a permutation of length one, then the priority function becomes a 1-priority function \(\omega (j)=\left( p_{j}b_{j}\right) /q_{j}\), which is consistent with Theorem 1. Besides, if \(q_{j}=p_{j}\) and \(b_{j}=b\) for all \(j\in N\), then \(\omega (j)\) becomes constant, i.e., for problem \(1\left| p_{j}\left( \pi ; r\right) =p_{j}\left( 1+bP_{r}\right) \right| C_{\max }\) any permutation is optimal, which is consistent with Corollary 1 and Gordon et al. (2008).
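The exchange argument in the proof of Theorem 2 can be illustrated numerically. The Python sketch below, with hypothetical data, checks that the swap difference \(\Delta C\) equals its closed form and is non-positive when \(\omega (\alpha )>\omega (\beta )\).

```python
# Numerical illustration of the exchange argument in Theorem 2 (hypothetical
# data): Delta C = C_max(pi1 alpha beta) - C_max(pi1 beta alpha) reduces to
# -sum(p*b over alpha)*sum(q over beta) + sum(p*b over beta)*sum(q over alpha),
# so omega(alpha) > omega(beta), with omega as in (5), implies Delta C <= 0.

def makespan(seq, p, b, q):
    acc, total = 0.0, 0.0
    for j in seq:
        total += p[j] * (1 + b[j] * acc)  # effect (2)
        acc += q[j]
    return total

def omega(seq, p, b, q):
    """Priority (5) of a (partial) permutation."""
    return sum(p[j] * b[j] for j in seq) / sum(q[j] for j in seq)

p = {1: 2.0, 2: 3.0, 3: 1.0, 4: 4.0, 5: 2.0}
q = {1: 1.0, 2: 2.0, 3: 1.0, 4: 3.0, 5: 2.0}
b = {1: 0.3, 2: 0.1, 3: 0.2, 4: 0.05, 5: 0.15}
prefix, alpha, beta = [5], [1, 3], [2, 4]

dC = (makespan(prefix + alpha + beta, p, b, q)
      - makespan(prefix + beta + alpha, p, b, q))
closed_form = (-sum(p[j] * b[j] for j in alpha) * sum(q[j] for j in beta)
               + sum(p[j] * b[j] for j in beta) * sum(q[j] for j in alpha))
assert abs(dC - closed_form) < 1e-9          # matches the derivation
assert omega(alpha, p, b, q) > omega(beta, p, b, q) and dC <= 0
```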

4 Minimization of makespan with machine maintenance

In this section, we consider the effect (2) in its deterioration form, i.e., with \(b_{j}>0\). A single rate-modifying maintenance activity, which improves the processing state of the machine, is introduced into a schedule.

An instance of problem \(1\left| p_{j}\left( \pi ; r\right) =p_{j}\left( 1+b_{j}Q_{r}\right) , \hbox {MP}(\lambda )\right| C_{\max }\) is defined by the arrays of positive numbers \(p_{j}, q_{j}\) and \(b_{j}, j\in N\), by non-negative constants \(\lambda \) and \(\mu \), and by a positive constant \(\sigma \). The duration of the maintenance period (MP) is given by (3). For a job \(j\in N\) scheduled after the MP, the normal processing time changes from \(p_{j}\) to \(\sigma p_{j}\).

In a schedule with a single MP, the jobs are split into two groups: group 1 consists of the jobs scheduled before the maintenance and group 2 contains all other jobs. Let \(N_{i}\) be the set of jobs in group i and \(\left| N_{i}\right| =n_{i}\), for \(i\in \left\{ 1,2\right\} \). Due to Theorem 1, we may assume that the jobs in each group are sequenced in non-increasing order of the 1-priorities \(\left( p_{j}b_{j}\right) /q_{j}\). This is why throughout this section the jobs are renumbered so that

$$\begin{aligned} \frac{p_{1}b_{1}}{q_{1}}\ge \frac{p_{2}b_{2}}{q_{2}}\ge \cdots \ge \frac{p_{n}b_{n}}{q_{n}}. \end{aligned}$$
(6)

Let \(\mathbf {x}=\left( x_{1},x_{2},\ldots , x_{n}\right) \) denote a vector with 0–1 components. Problem \(1\left| p_{j}\left( \pi ;r\right) =p_{j}\left( 1+b_{j}Q_{r}\right) , \hbox {MP}(\lambda )\right| C_{\max }\) belongs to a range of scheduling problems that can be reduced to minimizing a function of the form

$$\begin{aligned} F\left( \mathbf {x}\right) =H\left( \mathbf {x}\right) +K, \end{aligned}$$
(7)

where

$$\begin{aligned} H\left( \mathbf {x}\right) =\sum _{1\le i<j\le n}u_{i}v_{j}x_{i}x_{j}-\sum _{j=1}^{n}h_{j}x_{j}, \end{aligned}$$
(8)

is known as the half-product function. The coefficients \(u_{j}\) and \(v_{j}\) are non-negative integers, while \(h_{j}\) is an integer that can be either negative or positive.
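The half-product function (8) can be illustrated as follows; this Python sketch evaluates \(H(\mathbf {x})\) and minimizes it by enumeration on a toy hypothetical instance (for realistic n such enumeration is impractical, which is why an FPTAS is sought).

```python
# Sketch: evaluating the half-product function (8) and minimizing it by
# enumeration on a toy instance (u_j, v_j non-negative integers, h_j
# integers; data are hypothetical). The problem is NP-hard, so for large n
# enumeration is hopeless and an FPTAS is of interest.
from itertools import product

def half_product(x, u, v, h):
    """H(x) = sum_{i<j} u_i v_j x_i x_j - sum_j h_j x_j for a 0-1 vector x."""
    n = len(x)
    quad = sum(u[i] * v[j] * x[i] * x[j]
               for i in range(n) for j in range(i + 1, n))
    return quad - sum(h[j] * x[j] for j in range(n))

u = [2, 1, 3]
v = [1, 2, 2]
h = [4, 1, 5]
best_val, best_x = min((half_product(x, u, v, h), x)
                       for x in product((0, 1), repeat=3))
print(best_val, best_x)  # -5 (0, 0, 1)
```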

Let \(\mathbf {x}^{*}=\left( x_{1}^{*},x_{2}^{*},\ldots ,x_{n}^{*}\right) \) denote a vector that is optimal for the problem of minimizing function (8), or equivalently, (7). Notice that we are only interested in instances of the problem of minimizing function (8) for which the optimal value \(H\left( \mathbf {x}^{*}\right) \) is strictly negative; otherwise, setting all decision variables to zero solves the problem. On the other hand, below, as in most known applications, it is assumed that the constant K is such that \(F\left( \mathbf {x}^{*}\right) >0\).

To proceed, we need to refine the definition of an FPTAS for the problem of minimizing function \(H(\mathbf {x})\) which takes both positive and negative values. For such a problem, an FPTAS delivers a solution vector \(\mathbf {x}^{\varepsilon }\) such that \(H(\mathbf {x}^{\varepsilon })-H(\mathbf {x}^{*})\le \varepsilon \left| H(\mathbf {x}^{*})\right| \). For the problem of minimizing a function of the form (7) with \(F\left( \mathbf {x}^{*}\right) >0\), an FPTAS outputs a solution vector \(\mathbf {x}^{\varepsilon }\) such that \(F(\mathbf {x}^{\varepsilon })\le \left( 1+\varepsilon \right) F(\mathbf {x}^{*})\).

Badics and Boros (1998) prove that the problem of minimizing function (8) is NP-hard. The first FPTAS for minimizing function (8) in strongly polynomial time is due to Erel and Ghosh (2008), with the running time of \(O\left( n^{2}/\varepsilon \right) \). This running time should be seen as the best possible, since just computing the value of the objective function for a given vector \(\mathbf {x}\) takes \(O\left( n^{2}\right) \) time. However, it is known that an FPTAS for minimizing the function \(H(\mathbf {x})\) does not necessarily behave as an FPTAS for minimizing the function \(F(\mathbf {x})\) of the form (7) with an additive constant. This is due to the fact that the optimal value of \(H(\mathbf {x})\) is negative and K can be positive; see Erel and Ghosh (2008) and Kellerer and Strusevich (2012, 2016) for discussion and examples.

For the problem of minimizing a function of the form (7), Erel and Ghosh (2008) outline a procedure that, under the conditions stated below, behaves as an FPTAS.

Theorem 3

For the problem of minimizing a function of the form (7), denote the lower and upper bounds on the value of \(F(\mathbf {x}^{*})\) by LB and UB, respectively, i.e., \(\hbox {LB}\le F(\mathbf {x}^{*})\le UB\). If the ratio UB/LB is bounded from above by some positive \(\gamma \), then there exists an algorithm that delivers a solution \(\mathbf {x}^{0}\) such that \(F(\mathbf {x}^{0})-LB\le \varepsilon LB\) in \(O(\gamma n^{2}/\varepsilon )\) time.

Theorem 3 is proved by Erel and Ghosh (2008). If the value of \(\gamma \) is bounded from above by a polynomial in the length of the input of the problem, then the algorithm from Theorem 3 behaves as an FPTAS. Moreover, if \(\gamma \) is a constant, then such an FPTAS requires the best possible running time of \(O\left( n^{2}/\varepsilon \right) \). In what follows, we refer to the algorithm from Theorem 3 as the \(\gamma \)-FPTAS.

Given problem \(1\left| p_{j}\left( \pi ; r\right) =p_{j}\left( 1+b_{j}Q_{r}\right) ,\hbox {MP}(\lambda )\right| C_{\max }\), introduce a Boolean variable \(x_{j}\) in such a way that

$$\begin{aligned} x_{j}=\left\{ \begin{array}{l} 1,\quad ~\mathrm {if~job~}j~\mathrm {is~scheduled~in~the~first~group} \\ 0,\quad ~\mathrm {otherwise}\end{array}\right. \end{aligned}$$

for each job \(j,~1\le j\le n\).

Taking the jobs in order of their numbering given by (6), if job \(j\in N\) is scheduled in the first group, then it completes at time

$$\begin{aligned} C_{j}=p_{j}x_{j}\left( 1+b_{j}\sum _{i=1}^{j-1}q_{i}x_{i}\right) , \end{aligned}$$

so that the MP starts at time \(\tau =\sum _{j=1}^{n}p_{j}x_{j}\left( 1+b_{j}\sum _{i=1}^{j-1} q_{i}x_{i}\right) \). If job j is scheduled in the second group, then its completion time is given by

$$\begin{aligned} C_{j}= & {} \tau +\left( \lambda \tau +\mu \right) +\sigma p_{j}(1-x_{j})\left( 1+b_{j}\sum _{i=1}^{j-1}q_{i}(1-x_{i})\right) \\= & {} \left( \lambda +1\right) \sum _{j=1}^{n}p_{j}x_{j}\left( 1+b_{j}\sum _{i=1}^{j-1}q_{i}x_{i}\right) \\&\quad +\,\sigma p_{j}(1-x_{j})\left( 1+b_{j}\sum _{i=1}^{j-1}q_{i}(1-x_{i})\right) +\mu . \end{aligned}$$

This implies that in order to solve problem \(1\left| p_{j}\left( \pi ;r\right) =p_{j}\left( 1+b_{j}Q_{r}\right) ,\hbox {MP}\left( \lambda \right) \right| C_{\max }\), we need to minimize the function

$$\begin{aligned} Z\left( \mathbf {x}\right)= & {} \left( \lambda +1\right) \sum _{j=1}^{n}p_{j}x_{j}\left( 1+b_{j}\sum _{i=1}^{j-1}q_{i}x_{i}\right) \\&+\,\sigma \sum _{j=1}^{n}p_{j}(1-x_{j})\left( 1+b_{j}\sum _{i=1}^{j-1}q_{i}(1-x_{i})\right) +\mu \\= & {} \sum _{j=1}^{n}\left( \lambda +1\right) b_{j}p_{j}x_{j}\left( \sum _{i=1}^{j-1}q_{i}x_{i}\right) \\&+\sum _{j=1}^{n}\sigma b_{j}p_{j}(1-x_{j})\left( \sum _{i=1}^{j-1}q_{i}(1-x_{i})\right) \\&+\left( \lambda +1\right) \sum _{j=1}^{n}p_{j}x_{j}+\sigma \sum _{j=1}^{n}p_{j}(1-x_{j})+\mu . \end{aligned}$$
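The derivation above can also be read algorithmically: the first group determines \(\tau \), the MP takes \(\lambda \tau +\mu \) time, and the second group is processed with normal times scaled by \(\sigma \). A hedged Python sketch (all names are our own assumptions) that computes \(Z(\mathbf {x})\) directly from this interpretation:

```python
def makespan(p, b, q, x, lam, mu, sigma):
    """Makespan Z(x) of the two-group schedule; jobs are assumed to be
    numbered according to (6), and x[j] = 1 places job j before the MP."""
    n = len(p)
    tau, acc = 0, 0  # acc accumulates q_i * x_i over jobs sequenced earlier
    for j in range(n):
        tau += p[j] * x[j] * (1 + b[j] * acc)
        acc += q[j] * x[j]
    # the MP starts at tau and lasts lam * tau + mu
    total, acc = (lam + 1) * tau + mu, 0
    for j in range(n):
        total += sigma * p[j] * (1 - x[j]) * (1 + b[j] * acc)
        acc += q[j] * (1 - x[j])
    return total
```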

We show that the above function admits a representation in the form (7). As in (4), define \(w_{j}=b_{j}p_{j}, j\in N\), and rewrite \(Z\left( \mathbf {x}\right) \) as

$$\begin{aligned} Z\left( \mathbf {x}\right)= & {} \sum _{1\le i<j\le n}\left( \lambda +1\right) q_{i}w_{j}x_{i}x_{j}\nonumber \\&+\,\sigma \sum _{1\le i<j\le n}q_{i}w_{j}(1-x_{i})(1-x_{j}) \nonumber \\&+\left( \lambda +1\right) \sum _{j=1}^{n}p_{j}x_{j}+\sigma \sum _{j=1}^{n}p_{j}(1-x_{j})+\mu . \end{aligned}$$
(9)

Function (9) is written in the form that appears as an objective function in the so-called symmetric quadratic knapsack problem; see Kellerer and Strusevich (2012, 2016) for reviews.

Since

$$\begin{aligned}&\sum _{1\le i<j\le n}q_{i}w_{j}(1-x_{i})(1-x_{j})\\&\quad =\sum _{1\le i<j\le n}q_{i}w_{j}x_{i}x_{j}-\sum _{j=1}^{n}\left( w_{j}\left( \sum _{i=1}^{j-1}q_{i}\right) \right. \\&\qquad \left. +\,q_{j}\left( \sum _{i=j+1}^{n}w_{i}\right) \right) x_{j} +\sum _{1\le i<j\le n}q_{i}w_{j}, \end{aligned}$$

and

$$\begin{aligned}&\left( \lambda +1\right) \sum _{j=1}^{n}p_{j}x_{j}+\sigma \sum _{j=1}^{n}p_{j}(1-x_{j})\\&\quad =\left( \lambda -\sigma +1\right) \sum _{j=1}^{n}p_{j}x_{j}+\sigma \sum _{j=1}^{n}p_{j}, \end{aligned}$$

function (9) derived above may be written as

$$\begin{aligned} Z\left( \mathbf {x}\right)= & {} \sum _{1\le i<j\le n}\left( \lambda +\sigma +1\right) q_{i}w_{j}x_{i}x_{j} \nonumber \\&+\sum _{j=1}^{n}\left( \left( \lambda -\sigma +1\right) p_{j}-\sigma \left( w_{j}\left( \sum _{i=1}^{j-1}q_{i}\right) \right. \right. \nonumber \\&\left. \left. +\,q_{j}\left( \sum _{i=j+1}^{n}w_{i}\right) \right) \right) x_{j} \nonumber \\&+\,\left( \mu +\sigma \left( \sum _{1\le i<j\le n}q_{i}w_{j}+\sum _{j=1}^{n}p_{j}\right) \right) . \end{aligned}$$
(10)

This is clearly a representation in the form (7) with

$$\begin{aligned} u_{j}= & {} \left( \lambda +\sigma +1\right) w_{j},~v_{j}=q_{j},~j\in N; \\ h_{j}= & {} \left( \lambda -\sigma +1\right) p_{j}-\sigma \left( w_{j}\left( \sum _{i=1}^{j-1}q_{i}\right) \right. \\&\left. +\,q_{j}\left( \sum _{i=j+1}^{n}w_{i}\right) \right) , ~j\in N; \\ K= & {} \mu +\sigma \left( \sum _{1\le i<j\le n}q_{i}w_{j}+\sum _{j=1}^{n}p_{j}\right) . \end{aligned}$$
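Since the equivalence of representations (9) and (10) is purely algebraic, it can be checked numerically by enumerating all Boolean vectors of a small instance. The sketch below (function names are illustrative assumptions) evaluates both forms, building the coefficients of (10) from \(w_{j}=b_{j}p_{j}\):

```python
from itertools import product

def z_form_10(p, b, q, x, lam, mu, sigma):
    """Evaluate Z(x) via representation (10): quadratic part, linear
    coefficients, and the additive constant K."""
    n = len(p)
    w = [b[j] * p[j] for j in range(n)]
    quad = (lam + sigma + 1) * sum(q[i] * w[j] * x[i] * x[j]
                                   for i in range(n) for j in range(i + 1, n))
    lin = sum(((lam - sigma + 1) * p[j]
               - sigma * (w[j] * sum(q[:j]) + q[j] * sum(w[j + 1:]))) * x[j]
              for j in range(n))
    K = mu + sigma * (sum(q[i] * w[j] for i in range(n)
                          for j in range(i + 1, n)) + sum(p))
    return quad + lin + K

def z_direct(p, b, q, x, lam, mu, sigma):
    """Z(x) computed straight from the two-group schedule, as in (9)."""
    n = len(p)
    tau, acc = 0, 0
    for j in range(n):
        tau += p[j] * x[j] * (1 + b[j] * acc)
        acc += q[j] * x[j]
    total, acc = (lam + 1) * tau + mu, 0
    for j in range(n):
        total += sigma * p[j] * (1 - x[j]) * (1 + b[j] * acc)
        acc += q[j] * (1 - x[j])
    return total
```

Enumerating all \(2^{n}\) vectors of a three-job instance confirms that the two evaluations coincide, whatever the numbering of the jobs.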

According to Theorem 3, in order to obtain a \(\gamma \)-FPTAS for the problem of minimizing a function of the form (7), we are required to find the bounds LB and UB on the value of \(Z(\mathbf {x}^{*})\) and to prove that the ratio UB/LB is bounded from above by a constant \(\gamma \). Notice that, since we aim at obtaining an FPTAS with the best possible running time of \(O\left( n^{2}/\varepsilon \right) \), we need to find the required lower and upper bounds in no more than \(O\left( n^{2}\right) \) time. This can be done as described below.

Assume that the integrality constraint of the decision variables \(x_{j}\) is relaxed, i.e., the condition \(x_{j}\in \left\{ 0,1\right\} \) is replaced by \(0\le x_{j}\le 1, j\in N\). If \(\mathbf {x}^{C}=(x_{1}^{C},\ldots , x_{n}^{C}), 0\le x_{j}^{C}\le 1\), is the corresponding solution vector and \(Z(\mathbf {x}^{C})\) denotes the optimal value of the function (10) for the continuous relaxation, then clearly \(Z(\mathbf {x}^{C})\le Z(\mathbf {x}^{*})\), i.e., we may set \(\hbox {LB}= Z(\mathbf {x}^{C})\).

As demonstrated in Kellerer and Strusevich (2010), the relaxation of the problem of minimizing a convex function of the form (7), even with an additional linear knapsack constraint, reduces to finding the minimum cost flow with a convex quadratic cost function in a special network. The latter problem is studied by Tamir (1993) who gives a solution algorithm that in the case under consideration requires \(O\left( n^{2}\right) \) time.

Notice that a function of the form (8) is convex, provided that the items are numbered in non-decreasing order of the ratios \(v_{j}/u_{j}, j\in N\); see Kellerer and Strusevich (2010). In our case, the required numbering is guaranteed by (6), so that the objective function \(Z(\mathbf {x})\) as given in (10) is convex and Tamir’s algorithm is applicable. Thus, a lower bound \(\hbox {LB}= Z(\mathbf {x}^{C})\) on the value \(Z(\mathbf {x}^{*})\) can be found in \(O\left( n^{2}\right) \) time.

To obtain an upper bound, we perform an appropriate rounding of the fractional components of vector \(\mathbf {x}^{C}\). A simple rounding algorithm is described below.

Algorithm Round

Step 1.

Given a vector \(\mathbf {x}^{C}=(x_{1}^{C},\ldots , x_{n}^{C}), 0\le x_{j}^{C}\le 1\), a solution to the continuous relaxation of the problem of minimizing (10), determine the sets \(I_{1}=\left\{ j\in N\mid x_{j}^{C}\le \frac{1}{2}\right\} \) and \(I_{2}=\left\{ j\in N\mid x_{j}^{C}>\frac{1}{2}\right\} \) and find vector \(\mathbf {x}^{H}=(x_{1}^{H},\ldots , x_{n}^{H})\) with components

$$\begin{aligned} x_{j}^{H}=\left\{ \begin{array}{lll} 0 &{}\quad \text {if} &{}\quad j\in I_{1} \\ 1 &{}\quad \text {if} &{}\quad j\in I_{2}\end{array}\right. . \end{aligned}$$
Step 2.

Output vector \(\mathbf {x}^{H}=(x_{1}^{H},\ldots ,x_{n}^{H})\) as a heuristic solution to the problem of minimizing function (10), and therefore, function (9).

The running time of Algorithm Round is \(O\left( n\right) \). Clearly, the inequalities \(Z(\mathbf {x}^{C})\le Z(\mathbf {x}^{*})\le Z(\mathbf {x}^{H})\) hold, i.e., we may take \(Z(\mathbf {x}^{H})\) as an upper bound UB on the optimal value \(Z(\mathbf {x}^{*})\). We now estimate the ratio \(\gamma =UB/LB=Z(\mathbf {x}^{H})/Z(\mathbf {x}^{C})\).
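Algorithm Round is a one-pass threshold rounding, which explains its \(O(n)\) running time. A minimal sketch (the function name is an assumption):

```python
def round_half(x_c):
    """Algorithm Round: components at most 1/2 are rounded down to 0,
    the remaining components are rounded up to 1."""
    return [0 if xc <= 0.5 else 1 for xc in x_c]
```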

Theorem 4

Let \(\mathbf {x}^{C}\) be an optimal solution of the continuous relaxation of the problem of minimizing function \(Z(\mathbf {x})\) of the form (10), and \(\mathbf {x}^{H}\) be a vector found by Algorithm Round. Then

$$\begin{aligned} \gamma =\frac{Z(\mathbf {x}^{H})}{Z(\mathbf {x}^{C})}\le 4. \end{aligned}$$

Proof

For a vector \(\mathbf {x}^{C}\), let \(I_{1}\) and \(I_{2}\) be the index sets found in Step 1 of Algorithm Round. For a vector \(\mathbf {x}=\left( x_{1},\ldots ,x_{n}\right) \), where \(0\le x_{j}\le 1\), using the representation (9) define

$$\begin{aligned} Z_{1}\left( \mathbf {x}\right)= & {} \left( \lambda +1\right) \sum _{\begin{array}{c} 1\le i<j\le n \\ i,j\in I_{1} \end{array}}q_{i}w_{j}x_{i}x_{j}\\&+\,\sigma \sum _{\begin{array}{c} 1\le i<j\le n\\ i,j\in I_{1} \end{array}}q_{i}w_{j}\left( 1-x_{i}\right) \left( 1-x_{j}\right) ; \\ Z_{2}\left( \mathbf {x}\right)= & {} \left( \lambda +1\right) \sum _{\begin{array}{c} 1\le i<j\le n \\ i\in I_{1},j\in I_{2} \end{array}}q_{i}w_{j}x_{i}x_{j}\\&+\,\sigma \sum _{\begin{array}{c} 1\le i<j\le n\\ i\in I_{1},j\in I_{2} \end{array}}q_{i}w_{j}\left( 1-x_{i}\right) \left( 1-x_{j}\right) ; \\ Z_{3}\left( \mathbf {x}\right)= & {} \left( \lambda +1\right) \sum _{\begin{array}{c} 1\le i<j\le n \\ i\in I_{2},j\in I_{1} \end{array}}q_{i}w_{j}x_{i}x_{j}\\&+\,\sigma \sum _{\begin{array}{c} 1\le i<j\le n\\ i\in I_{2},j\in I_{1} \end{array}}q_{i}w_{j}\left( 1-x_{i}\right) \left( 1-x_{j}\right) ; \\ Z_{4}\left( \mathbf {x}\right)= & {} \left( \lambda +1\right) \sum _{\begin{array}{c} 1\le i<j\le n \\ i,j\in I_{2} \end{array}}q_{i}w_{j}x_{i}x_{j}\\&+\,\sigma \sum _{\begin{array}{c} 1\le i<j\le n\\ i,j\in I_{2} \end{array}}q_{i}w_{j}\left( 1-x_{i}\right) \left( 1-x_{j}\right) ; \\ Z_{5}\left( \mathbf {x}\right)= & {} \left( \lambda +1\right) \sum _{j\in I_{1}}p_{j}x_{j}+\sigma \sum _{j\in I_{1}}p_{j}(1-x_{j}); \\ Z_{6}\left( \mathbf {x}\right)= & {} \left( \lambda +1\right) \sum _{j\in I_{2}}p_{j}x_{j}+\sigma \sum _{j\in I_{2}}p_{j}(1-x_{j}). \end{aligned}$$

By the rounding rule in Step 1 of Algorithm Round, we derive

$$\begin{aligned} Z_{2}(\mathbf {x}^{H})=Z_{3}(\mathbf {x}^{H})=0, \end{aligned}$$

while

$$\begin{aligned} \displaystyle Z_{1}(\mathbf {x}^{H})=\sigma \sum _{\begin{array}{c} 1\le i<j\le n \\ i,j\in I_{1} \end{array}}q_{i}w_{j}; \\ \displaystyle Z_{1}(\mathbf {x}^{C})\ge \frac{\sigma }{4}\sum _{\begin{array}{c} 1\le i<j\le n \\ i,j\in I_{1} \end{array}}q_{i}w_{j};\\ \displaystyle Z_{4}(\mathbf {x}^{H})=\left( \lambda +1\right) \sum _{\begin{array}{c} 1\le i<j\le n \\ i,j\in I_{2} \end{array}}q_{i}w_{j}; \\ \displaystyle Z_{4}(\mathbf {x}^{C})\ge \frac{\lambda +1}{4}\sum _{\begin{array}{c} 1\le i<j\le n \\ i,j\in I_{2} \end{array}}q_{i}w_{j}; \\ \displaystyle Z_{5}(\mathbf {x}^{H})=\sigma \sum _{j\in I_{1}}p_{j}; \\ \displaystyle Z_{5}(\mathbf {x}^{C})\ge \frac{\sigma }{2}\sum _{j\in I_{1}}p_{j}; \\ \displaystyle Z_{6}(\mathbf {x}^{H})=\left( \lambda +1\right) \sum _{j\in I_{2}}p_{j}; \\ \displaystyle Z_{6}(\mathbf {x}^{C})\ge \frac{\lambda +1}{2}\sum _{j\in I_{2}}p_{j}. \end{aligned}$$

Thus, we have that

$$\begin{aligned} Z(\mathbf {x}^{H})= & {} \sum _{k=1}^{6}Z_{k}(\mathbf {x}^{H})+\mu =Z_{1}(\mathbf {x}^{H})+Z_{4}(\mathbf {x}^{H})\\&+\,Z_{5}(\mathbf {x}^{H})+Z_{6}(\mathbf {x}^{H})+\mu \\\le & {} 4Z_{1}(\mathbf {x}^{C})+4Z_{4}(\mathbf {x}^{C})+2Z_{5}(\mathbf {x}^{C})+2Z_{6}(\mathbf {x}^{C})+\mu \\\le & {} 4\sum _{k=1}^{6}Z_{k}(\mathbf {x}^{C})+4\mu =4Z(\mathbf {x}^{C}), \end{aligned}$$

as required. \(\square \)

It follows immediately from Theorem 4 that for the problem of minimizing function (9) (or, equivalently, function (10)), Theorem 3 is applicable, i.e., the problem admits a \(\gamma \)-FPTAS with \(\gamma =4\). Hence, in terms of the original scheduling problem, we obtain the following statement.

Theorem 5

Problem \(1\left| p_{j}\left( \pi ;r\right) =p_{j}\left( 1+b_{j}Q_{r}\right) ,\hbox {MP}(\lambda )\right| C_{\max }\) admits an FPTAS that requires \(O\left( n^{2}/\varepsilon \right) \) time.

Notice that Theorem 5 cannot be improved for problem \(1\left| p_{j}\left( \pi ;r\right) =p_{j}\left( 1+b_{j}Q_{r}\right) ,\hbox {MP}(0)\right| C_{\max }\), i.e., for the case of a constant MP duration, since the underlying Boolean programming problem still remains that of minimizing a half-product function.

This is in contrast with the results obtained in Kellerer et al. (2013) for a similar, but simpler problem \(1\left| p_{j}\left( \pi ;r\right) =p_{j}\left( 1+bP_{r}\right) ,\hbox {MP}(\lambda )\right| C_{\max }\), in which it is additionally assumed that \(\sigma =1\), i.e., the MP fully restores the machine back to the default conditions. For problem \(1\left| p_{j}\left( \pi ;r\right) =p_{j}\left( 1+bP_{r}\right) ,\hbox {MP}(0)\right| C_{\max }\), an FPTAS requires only \(O\left( n/\varepsilon \right) \) time, since the underlying Boolean programming problem takes the form of a Subset-sum problem, with a linear objective function.

For the case of \(\lambda >0\), Kellerer et al. (2013) also rely on Theorem 3, but in order to demonstrate that problem \(1\left| p_{j}\left( \pi ;r\right) =p_{j}\left( 1+bP_{r}\right) ,\hbox {MP}(\lambda )\right| C_{\max }\) with \(\sigma =1\) admits a \(\gamma \)-FPTAS, an approximate solution to \(1\left| p_{j}\left( \pi ;r\right) =p_{j}\left( 1+bP_{r}\right) ,\hbox {MP}(0)\right| C_{\max }\) is used as a lower bound LB, and the ratio UB/LB is proved to be bounded by \(\gamma \), where \(\gamma \) is found as a linear function of \(\lambda \). To make Theorem 3 applicable, an additional assumption is made that \(\lambda \le 1\).

The approach described in this paper, based on Algorithm Round and Theorem 4, can also be applied to handle problem \(1\left| p_{j}\left( \pi ;r\right) =p_{j}\left( 1+bP_{r}\right) ,\hbox {MP}(\lambda )\right| C_{\max } \) with \(\lambda >0\) and \(\sigma =1\). It will lead to a \(\gamma \)-FPTAS with the running time of \(O\left( n^{2}/\varepsilon \right) \), as in Kellerer et al. (2013), but no additional assumptions regarding the value of \(\lambda \) are needed.

Notice that the results in this section can be extended to handle an enhanced model in which it is assumed that the normal processing time of a job \(j\in N\) scheduled after the MP changes from \(p_{j}\) to \(\sigma _{j}p_{j} \), with a job-dependent factor \(\sigma _{j}>0\), provided that these factors are such that for each pair of jobs i and j, the inequality

$$\begin{aligned} \frac{p_{i}b_{i}}{q_{i}}\le \frac{p_{j}b_{j}}{q_{j}} \end{aligned}$$

implies

$$\begin{aligned} \frac{\sigma _{i}p_{i}b_{i}}{q_{i}}\le \frac{\sigma _{j}p_{j}b_{j}}{q_{j}}. \end{aligned}$$

Similar assumptions are common in the literature on scheduling with rate-modifying maintenance, see, e.g., Lee and Leon (2001) who argue in favour of their practical relevance.

5 Conclusion

The paper introduces a rather general model for scheduling with changing processing times under a cumulative effect. For the problem of minimizing the makespan on a single machine, we adopt Smith’s rule to solve the problem in \(O\left( n\log n\right) \) time. We show that the problem with series-parallel precedence constraints can also be solved in \(O\left( n\log n\right) \) time since its objective function is priority-generating. The problem with a rate-modifying maintenance activity, which allows us to (partly) restore the processing conditions of the machine, is linked to a Boolean programming problem with a quadratic objective function, namely the half-product problem. Adapting the results previously known for that problem, we provide an FPTAS that takes \(O\left( n^{2}/\varepsilon \right) \) time to solve the problem of minimizing the makespan with a single maintenance period.

The next step in studying the models with cumulative deterioration could be a search for approximation algorithms or schemes that would allow us to handle multiple maintenance periods.