1 Introduction

Lawler (1973) considered the following single-machine scheduling problem. Each of \(n\) simultaneously available jobs is to be processed without preemption on a single machine, which can process at most one job at a time. Job \(j,\ j =1,\ldots ,n,\) has a processing time \(p_j \ge 0\). To each of the jobs \(j\) is assigned a non-decreasing cost function \(\varPhi _j(t)\) that specifies the cost incurred if job \(j\) is completed at time \(t\). There are precedence constraints represented by an acyclic directed graph \(G=(V,E)\) with \(n\) vertices, where each vertex \(v \in V\) corresponds to one of the jobs. Note that graph \(G\) contains all transitive arcs. If job \(j_1\) precedes job \(j_2\) (\(j_1 \rightarrow j_2\)), i.e., \((j_1,\ j_2) \in E\), then the processing of job \(j_2\) cannot start before the completion of processing job \(j_1\). The objective is to find a feasible schedule \(s\) that minimizes the maximum cost \(\varPhi _{\max }(s)=\max _{j=1,\ldots ,n}\varPhi _j(C_j(s)),\) where \(C_j(s)\) is the completion time of job \(j\) under schedule \(s\).

A feasible schedule may be represented by a permutation \(\pi =(\pi (1),\ldots , \pi (n))\) of the jobs, where \(\pi (k)\) is the job in position \(k\) of permutation \(\pi \). Feasibility of the permutation means that job \(j_1\) is placed to the left of job \(j_2\) whenever \(j_1 \rightarrow j_2\). Denote by \(C_j(\pi )=p_{\pi (1)}+\cdots + p_{\pi (k)}\) the completion time of job \(j=\pi (k)\) allocated in position \(k\) of \(\pi \). A feasible permutation that minimizes \(\varPhi _{\max }(\pi )\) is called optimal. The formulated problem \(1|\mathrm{prec}|\varPhi _{\max }\) is solved by Lawler’s minmax cost algorithm, which may be described as follows:

Algorithm 1

Lawler (1973)

  1. Set \(k = n\) and \(T=\sum _{j=1}^n p_j\).

  2. Let \(\varOmega \) be the set of terminal vertices of graph \(G=(V,E)\), i.e., vertices without successors.

  3. Find vertex \(u \in \varOmega \) such that \(\varPhi _u(T) = \min \{\varPhi _v(T)|v \in \varOmega \}\). Break ties arbitrarily.

  4. Set \(\pi (k) = u\), \(k = k - 1\), \(T=T-p_u\), and modify graph \(G\) by setting \(V = V \setminus \{u\}\) and removing all arcs incident to \(u\).

  5. If \(k > 0\), then go to Step 2. If \(k = 0\), then the optimal permutation is constructed.

The time complexity of the algorithm is \(O(n^2)\). Here it is supposed that graph \(G\) is represented by its adjacency matrix and that each value of \(\varPhi _j(t)\) can be calculated in \(O(1)\) time.
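
To make the procedure concrete, here is a minimal Python sketch of Algorithm 1 under the stated assumptions (adjacency-matrix representation of the transitively closed graph \(G\) and constant-time cost evaluation); the function name, the 0-based job indexing, and the cost callables are ours.

```python
def lawler_minmax_cost(p, cost, adj):
    """Sketch of Lawler's minmax cost algorithm (Algorithm 1).

    p    : processing times, p[j] >= 0
    cost : cost[j](t) gives the cost of job j completed at time t
           (non-decreasing, evaluated in O(1))
    adj  : adjacency matrix of the precedence graph; adj[i][j] is True
           iff job i must precede job j (adj[i][i] is False)
    Returns an optimal permutation as a 0-indexed list of jobs.
    """
    n = len(p)
    remaining = set(range(n))
    succ_cnt = [sum(adj[v]) for v in range(n)]   # unscheduled successors of v
    T = sum(p)                                   # Step 1
    pi = [None] * n
    for k in range(n - 1, -1, -1):               # fill positions n-1, ..., 0
        # Step 2: terminal vertices of the current graph
        omega = [v for v in remaining if succ_cnt[v] == 0]
        # Step 3: cheapest terminal job at time T (ties broken arbitrarily)
        u = min(omega, key=lambda v: cost[v](T))
        # Step 4: schedule u last among the remaining jobs
        pi[k] = u
        T -= p[u]
        remaining.remove(u)
        for w in remaining:                      # u is no longer a successor
            if adj[w][u]:
                succ_cnt[w] -= 1
    return pi
```

The successor counters keep the terminal-vertex test at \(O(n)\) per iteration, which matches the \(O(n^2)\) bound mentioned above.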

Lawler’s algorithm delivers one specific optimal solution while there can exist other optimal solutions. Lin and Wang (2007) derived necessary and sufficient conditions for the optimality of a schedule for a number of scheduling problems without precedence constraints, in particular, for problem \(1||L_{\max }\), where \(L_{\max }\) is the maximum lateness.

We present necessary and sufficient conditions for the optimality of a schedule for the general problem \(1 |\mathrm{prec}|\varPhi _{\max }\) with strictly increasing cost functions. For non-decreasing cost functions these conditions are sufficient.

Traditional scheduling models, in which the values of all numerical parameters are given in advance, fail in many practical applications. In numerous real-world situations, the values of some parameters (processing and transportation times, release and due dates, etc.) are not precisely known in advance; all that is known is the set of possible values for each parameter. These parameters are not controllable: each of them may take any value from the given set, and the particular value realized does not depend on the will of the decision maker. Including such uncertain parameters in the model yields an optimization problem under uncertainty.

For a problem under uncertainty, each possible choice of the parameter values generates a scenario. Once a scenario is given, we have to solve a deterministic problem. The main peculiarity of any problem under uncertainty is that we do not know in advance which particular scenario will be realized.

One of the most popular approaches to dealing with problems under uncertainty is the worst-case or robust approach, see Kouvelis and Yu (1997). Under this approach, the decision maker expects the realized scenario to be the worst possible for the problem at hand and constructs a solution that is the best one under this expected worst scenario. Intuitively, a robust solution is a solution that remains suitable whatever scenario finally occurs. There are different ways to define the worst scenario; the two main concepts are the worst objective function value and the maximum deviation of the objective function value from the optimum.

To simplify the formal description of the robust solutions, consider as an example a scheduling problem where the aim is to minimize an objective function. Then in the first concept, the decision maker’s aim is to minimize the objective function under maximally unsuitable values of the uncertain parameters. In the second concept, the decision maker tries to find a schedule that minimizes the maximum possible difference between the objective function values for the schedule obtained and the optimal schedule for a real scenario. Respectively, these approaches are called minmax and minmax regret approaches, see Kouvelis and Yu (1997).

To be more formal, denote the set of all possible scenarios by \(X\) and the set of all feasible schedules by \(S\). Denote the objective function by \(F(x,s)\). Here we point out that the objective function depends on schedule \(s \in S\) and on scenario \(x \in X\). Then the minmax approach is to find a schedule \(s\) that minimizes the function

$$\begin{aligned} \max \{F(x,s)|x \in X\}, \end{aligned}$$

whereas the minmax regret approach is to find a schedule \(s\) that minimizes the function

$$\begin{aligned} \max \{F(x,s)-F(x,s^*(x))|x \in X\}, \end{aligned}$$

where \(s^*(x)\) is an optimal schedule for scenario \(x\).

It appears that Wald (1950) was the first to propose the minmax approach for dealing with uncertainty, while Savage (1954) is the author of the minmax regret approach.

In Lawler (1973) the cost functions of jobs are arbitrary non-decreasing functions. In our problem, we assume decomposable non-decreasing functions that will be defined in Sect. 3. A processing time \(p_j\) and a cost function parameter \(\lambda _j\) are associated with each job \(j\). We suppose that the additional parameters \(\lambda _j\) are uncertain and the set of possible values of \(\lambda _j\) for each job \(j\) is an interval. In our case, the set of all possible scenarios is the Cartesian product of these intervals. We examine the two mentioned approaches: the minmax approach and the minmax regret approach. In both cases we actively use Lawler’s algorithm.

The only known result for problems of this type is presented in Kasperski (2005), where problem \(1|\mathrm{prec}|L_{\max }\) was considered under uncertainty of the processing times and due dates of the jobs; there, too, the set of possible values of each uncertain parameter is an interval. An \(O(n^4)\) algorithm was developed for constructing a schedule that is optimal with respect to the minmax regret criterion. In our model, we consider a more general objective function that includes \(L_{\max }\) as a special case, but we deal with a single uncertain parameter for each job. One of our results is an \(O(n^2)\) algorithm for constructing a schedule that is optimal with respect to the minmax regret criterion.

We denote by \(A_G(v)\) and \(B_G(v)\) the set of all successors of vertex \(v\) in graph \(G\) and the set of all predecessors of \(v\), respectively.

This paper is organized as follows: In Sect. 2 we consider a deterministic version of the main problem and formulate necessary and sufficient conditions for a feasible permutation to be optimal. In addition, we present some related results that are used in the subsequent sections. In Sect. 3 we deal with the main problem under uncertainty. We present \(O(n^2)\) algorithms for constructing schedules that are optimal with respect to the minmax criterion (Sect. 3.1) and to the minmax regret criterion (Sect. 3.2). In the last section, we summarize our results and outline possible directions for further research.

2 Conditions of optimality

For our problem \(1|\mathrm{prec}|\varPhi _{\max }\), we first require that all functions \(\varPhi _j(t)\) are strictly increasing. Compared to Lin and Wang (2007), where necessary and sufficient conditions for permutation optimality were derived for problem \(1||L_{\max }\), difficulties arise in connection with the precedence constraints and with the presence of zero-length jobs; in Lin and Wang (2007), all processing times are implicitly assumed to be strictly positive.

Let a feasible permutation \(\pi \) be given. We describe a job exchange, as Lawler did in his original paper, but in a slightly modified form. Suppose the job permutation \(\pi \) can be represented schematically as

$$\begin{aligned} \pi =(A,\ u,\ B,\ v,\ C) \end{aligned}$$

with segments \(A,\) \(B,\) and \(C\) and jobs \(v\) and \(u\) with \(p_u > 0\) and \(u \not \rightarrow v\), i.e., \(u \notin B_G(v)\). We consider only jobs \(u\) that are close to \(v\) in the sense that the set \(W=B \cap A_G(u)\) of all successor jobs of \(u\) in \(B\) contains only zero-length jobs, i.e., if \(j\in W\) then \(p_j=0\). Set \(U =(u,W)\) and move the subpermutation \(U\) to the right so that it immediately follows \(v\). Then we obtain the feasible permutation

$$\begin{aligned} \pi '=(A,\ B',\ v,\ U,\ C), \end{aligned}$$

where \(B' = B {\setminus } W\). Here it is understood that the inner order of the jobs in \(B'\) and \(W\) is the same as in \(\pi \). The following definition describes the transitions from \(\pi \) to \(\pi '\) that yield some kind of strict improvement, at least locally.

Definition 1

Consider a feasible permutation \(\pi \) and a job \(v\) in \(\pi \) with completion time \(t\). We say that job \(v\) has a local improvement in \(\pi \) if there exists job \(u\), as described above, which satisfies the conditions \(\varPhi _j(t) < \varPhi _v(t)\) for all \(j \in \{u\}\cup W\).

If job \(v\) has a local improvement in \(\pi \), then for the permutation \(\pi '\) above, we get the following properties:

  1. The positions of jobs \(j \in A \cup C\) are unchanged. Hence these jobs keep the same completion times and costs.

  2. Let \(R=\{u\}\cup \{v\}\cup B\). Then the maximum cost of the jobs in \(R\) is strictly reduced, i.e., \(\max _{j\in R}\varPhi _j(C_j(\pi ')) < \max _{j\in R} \varPhi _j(C_j(\pi ))\). The validity of this property follows from the conditions \(p_u>0\) and \(\varPhi _j(t) < \varPhi _v(t)\) for all \(j \in \{u\}\cup W\).

  3. We have \(\varPhi _{\max }(\pi ') \le \varPhi _{\max }(\pi )\).
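
As an illustration of this exchange, the following sketch applies it and checks the conditions of Definition 1 for a given pair \(u\), \(v\); the helper names are ours, jobs are 0-indexed, and `succ[j]` is assumed to be the set of all (transitive) successors of job \(j\).

```python
def local_improvement_exchange(pi, u, v, p, cost, succ):
    """If job u gives a local improvement for job v in permutation pi,
    return the improved permutation pi' = (A, B', v, U, C); otherwise
    return None.  (A sketch; succ[j] = all transitive successors of j.)
    """
    iu, iv = pi.index(u), pi.index(v)            # u must appear before v
    A, B, C = pi[:iu], pi[iu + 1:iv], pi[iv + 1:]
    W = [j for j in B if j in succ[u]]           # successors of u inside B
    t = sum(p[j] for j in pi[:iv + 1])           # completion time of v in pi
    if (p[u] > 0 and v not in succ[u]            # p_u > 0 and u -/-> v
            and all(p[j] == 0 for j in W)        # only zero-length jobs in W
            and all(cost[j](t) < cost[v](t) for j in [u] + W)):
        B_prime = [j for j in B if j not in succ[u]]   # B' = B \ W
        return A + B_prime + [v, u] + W + C
    return None
```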

Definition 2

Job \(v\) is called critical in permutation \(\pi \) if \(\varPhi _{\max }(\pi ) = \varPhi _{v}(C_{v}(\pi ))\).

Theorem 1

If all functions \(\varPhi _j(t)\) are strictly increasing, then a feasible permutation \(\pi \) is optimal for problem \(1|\mathrm{prec}|\varPhi _{\max }\) if and only if there exists a critical job in \(\pi \) without any local improvement.

Proof

Necessity. Let \(\pi \) be an optimal permutation. We prove the necessity by induction on the number of critical jobs.

Consider a situation with a single critical job \(v\). If job \(v\) has a local improvement in \(\pi \), then, in view of the above properties of the local improvement, there exists a feasible permutation \(\pi '\) such that \(\varPhi _{v}(C_{v}(\pi ')) < \varPhi _{v}(C_{v}(\pi ))\), and the values of the cost functions of all other jobs are less than \(\varPhi _{v}(C_{v}(\pi ))\). So \(\varPhi _{\max }(\pi ') < \varPhi _{\max }(\pi )\), which contradicts the optimality of \(\pi \).

Suppose that the theorem conditions are necessary for the optimality of any permutation with fewer than \(l\) critical jobs, where \(l \ge 2\).

Consider the case with \(l\) critical jobs. If each of the critical jobs has a local improvement, then consider the critical job \(v=\pi (k)\) in the left-most position. Acting as in the case of a single critical job, we can move job \(u\) (see the definition of a local improvement) along with its successors in positions \(j<k\) to positions to the right of \(v\). As a result, we obtain a feasible permutation \(\pi '\) in which job \(v\) is no longer critical and the total number of critical jobs equals \(l-1\). Moreover, \(\varPhi _{\max }(\pi ') \le \varPhi _{\max }(\pi )\), i.e., \(\pi '\) is an optimal permutation. All these \(l-1\) critical jobs occupy the same positions as in permutation \(\pi \). Since each of them has a local improvement in \(\pi \), each of them has a local improvement in permutation \(\pi '\) as well. Hence, for the optimal permutation \(\pi '\) with \(l-1\) critical jobs, the theorem conditions are violated, which contradicts the induction hypothesis.

Sufficiency. Suppose that in a feasible permutation \(\pi \) there exists a critical job \(v=\pi (k)\) without a local improvement. We have \(\varPhi _{\max }(\pi )= \varPhi _{v}(C_v(\pi )) \ge \varPhi _j(C_j(\pi ))\) for all \(j \in \{1, \ldots , n\}\). Let \(\pi '\) be an optimal permutation. If \(C_v(\pi ') \ge C_v(\pi )\), then \(\varPhi _{\max }(\pi ') \ge \varPhi _v(C_v(\pi ')) \ge \varPhi _v(C_v(\pi )) = \varPhi _{\max }(\pi )\), and \(\pi \) is an optimal permutation.

Denote \(L(v)=\{\pi (l) \notin B_G(v)|\ l<k,\ p_{\pi (l)}>0\}\), remembering that job \(v\) is in position \(k\) in \(\pi \).

Suppose that for an optimal permutation \(\pi '\), we have \(C_v(\pi ') < C_v(\pi )\). Then there exists at least one job \(\pi (l) \in L(v)\) on the right of \(v\) in \(\pi ',\) otherwise we would have \(C_v(\pi ') \ge C_v(\pi )\).

Let \(\pi (l)\) be the right-most job from \(L(v)\) in \(\pi ',\) then \(C_{\pi (l)}(\pi ') \ge C_{v}(\pi )\). Since \(v\) does not have a local improvement in \(\pi ,\) we have \(\varPhi _{\pi (l)}(C_v(\pi )) \ge \varPhi _v(C_v(\pi ))\). Therefore, \(\varPhi _{\pi (l)}(C_{\pi (l)}(\pi ')) \ge \varPhi _{\pi (l)}(C_v(\pi )) \ge \varPhi _v(C_v(\pi ))\) and \(\varPhi _{\max }(\pi ') \ge \varPhi _{\max }(\pi ),\) i.e., \(\pi \) is again an optimal permutation.\(\square \)

Corollary 1

If all \(\varPhi _j(t)\) are non-decreasing functions, then a feasible permutation \(\pi \) is optimal for problem \(1|\mathrm{prec}|\varPhi _{\max }\) if there exists a critical job in \(\pi \) without any local improvement.

Definition 3

Let a feasible permutation \(\pi \) and a job \(v\) in \(\pi \) with completion time \(t\) be given, let job \(u\) be to the left of \(v\) in \(\pi \), and let \(U\) be the set of jobs between \(u\) and \(v\) in \(\pi \). We say that job \(v\) has a weak improvement in \(\pi \) if \(\varPhi _u(t) < \varPhi _v(t)\) and none of the jobs in \(U \cup \{v\}\) is a successor of \(u\).

Lemma 1

If all functions \(\varPhi _j(t)\) are non-decreasing and job \(v\) has a weak improvement in a feasible permutation \(\pi \), then there exists a feasible permutation \(\pi '\) such that \(C_v(\pi ') \le C_v(\pi ),\) \(\varPhi _u(C_u(\pi ')) < \varPhi _v(C_v(\pi ))\), and \(\varPhi _{\max }(\pi ') \le \varPhi _{\max }(\pi )\).

Proof

To construct the mentioned permutation \(\pi '\), it is enough to move job \(u\) from its position to the position of job \(v\) in \(\pi \), simultaneously shifting all jobs of the set \(U \cup \{v\}\) one position to the left. \(\square \)

Evidently, if a job has no weak improvement in a feasible permutation \(\pi \), then it also has no local improvement in \(\pi \). Indeed, if job \(v\) has a local improvement via a job \(u\) with the set \(W\), then the right-most job of \(\{u\}\cup W\) in \(\pi \) provides a weak improvement for \(v\), since \(G\) contains all transitive arcs.

Corollary 2

If all functions \(\varPhi _j(t)\) are non-decreasing, then a feasible permutation \(\pi \) is optimal for problem \(1|\mathrm{prec}|\varPhi _{\max }\) if there exists a critical job in \(\pi \) without any weak improvement.

Corollary 3

If for problem \(1|\mathrm{prec}|\varPhi _{\max }\), permutation \(\pi \) is constructed by Algorithm 1, then none of the jobs has a weak improvement in \(\pi \).

Proof

Suppose that job \(\pi (k)\) has a weak improvement in \(\pi \). Consider the run of Step 3 of Algorithm 1 in which position \(k\) is filled. The definition of a weak improvement implies that in this run there is at least one job \(j\) in the current set \(\varOmega \) such that \(\varPhi _j(T)<\varPhi _{\pi (k)}(T)\). In accordance with the description of Step 3, job \(\pi (k)\) cannot be chosen for position \(k\) in this run, a contradiction.

\(\square \)

3 Decomposable cost functions with uncertainty

Now we formulate our main single-machine problem under uncertainty. As in the deterministic version considered so far, there are \(n\) non-preemptive jobs and precedence constraints represented by graph \(G\); below we describe the differences in detail. We restrict the cost functions of the jobs to be decomposable into a sum of two components: the first component is a monotone function of the job completion time, and the second component is an uncertain parameter.

Job \(j\) has a processing time \(p_j \ge 0\) and an additional real-valued parameter \(\lambda _j\). Each job \(j\) has an associated cost function \(\varPhi _j(t,\lambda _j)=\varphi (t)+\lambda _j\) that gives the cost incurred if the processing of job \(j\) is completed at time \(t\). Here \(\varphi (t)\) is a non-decreasing function. The aim is to find a job permutation \(\pi \) that is feasible with respect to the precedence constraints and minimizes the maximum cost

$$\begin{aligned} \varPhi _{\max }(\pi )=\max \{\varPhi _j(C_j(\pi ),\lambda _j)|j=1,\ldots ,n\}. \end{aligned}$$

The best known examples of functions \(\varPhi _j(t,\lambda _j)=\varphi (t)+\lambda _j\) are

  • lateness of job \(j:\) \(L_j(t,\ d_j)=t-d_j\), where \(d_j \ge 0\) is a due date of job \(j\); here \(\lambda _j=-d_j,\) and

  • delivery time of job \(j:\) \(D_j(t,\ q_j)=t+q_j\), where \(q_j \ge 0\) is the transportation time of job \(j\), i.e., after its processing is completed, job \(j\) is delivered to its end user, and the delivery takes \(q_j\) time units; here \(\lambda _j=q_j\).
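
As a small illustration (the helper and its name are ours, not from the paper), the cost callables used by the Lawler sketch above can be built directly from \(\varphi \) and the values \(\lambda _j\), covering both the lateness case (\(\lambda _j=-d_j\)) and the delivery-time case (\(\lambda _j=q_j\)):

```python
def make_costs(phi, lam):
    """Build cost[j](t) = phi(t) + lam[j] for decomposable cost functions."""
    return [lambda t, l=l: phi(t) + l for l in lam]

# Lateness L_j(t, d_j) = t - d_j corresponds to lambda_j = -d_j:
due = [4, 7, 2]
lateness_costs = make_costs(lambda t: t, [-d for d in due])
assert lateness_costs[0](10) == 6    # finished at 10, due at 4 -> lateness 6

# Delivery time D_j(t, q_j) = t + q_j corresponds to lambda_j = q_j:
q = [1, 0, 5]
delivery_costs = make_costs(lambda t: t, q)
assert delivery_costs[2](10) == 15   # finished at 10, delivered at 10 + 5
```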

We consider the problem above under uncertainty of the additional parameter \(\lambda _j\): for each job \(j\) there are given a lower bound \(\lambda ^-_j\) and an upper bound \(\lambda ^+_j\) of possible values of \(\lambda _j\). The actual value of \(\lambda _j\) is unknown; \(\lambda _j\) may take any value from the interval \([\lambda ^-_j, \lambda ^+_j]\) of real numbers.

We denote the uncertain version of the problem by

$$\begin{aligned} 1| \mathrm{prec};\, \lambda _j \in [\lambda ^-_j, \, \lambda ^+_j] | \varPhi _{\max }. \end{aligned}$$

Further, we suppose that each value of the function \(\varphi (t)\) can be calculated in constant time.

Suppose we are given particular values \(\lambda ^-_j, \lambda ^+_j\), and \(p_j\) for each \(j \in \{1,\ldots ,n\}\), a particular function \(\varphi (t)\), and a particular graph \(G\) representing the precedence constraints. If we choose the values of the uncertain parameters in any feasible way, i.e., \(\lambda _j \in [\lambda _j^-,\lambda _j^+]\) for each \(j \in \{1,\ldots ,n\}\), we obtain a scenario \(I\) for the problem \(1|\mathrm{prec};\, \lambda _j \in [\lambda ^-_j, \, \lambda ^+_j] | \varPhi _{\max }\) under uncertainty. We denote by \(\mathcal {I}\) the set of all scenarios. Clearly, the set \(\mathcal {I}\) is infinite if \(\lambda ^-_j \ne \lambda ^+_j\) for at least one \(j \in \{1,\ldots ,n\}\).

Given scenario \(I \in \mathcal {I}\), we have a deterministic version of the problem in question, and we can find an optimal permutation using Lawler’s algorithm.

For a given scenario \(I \in \mathcal {I}\) and a permutation \(\pi \), we write \(\varPhi _{\max }(I,\pi )\) to denote the corresponding value of the objective function, and we write \(\varPhi _{\max }^*(I)\) to denote the minimum value of the objective function for scenario \(I\). Let \(\pi ^*(I)\) be an optimal permutation for scenario \(I\).

In the following sections we apply the two robust approaches described earlier, the minmax criterion and the minmax regret criterion, to the problem \(1|\mathrm{prec};\, \lambda _j \in [\lambda ^-_j, \, \lambda ^+_j] | \varPhi _{\max }\) under uncertainty.

3.1 Minmax criterion

Since each function \(\varPhi _j(t,\lambda _j)=\varphi (t)+\lambda _j\) is strictly increasing in the parameter \(\lambda _j\), for any permutation \(\pi \) the value of the objective function \(\varPhi _{\max }(I,\pi )\) attains its maximum when each \(\lambda _j\) takes its maximum value.

Thus, to obtain a permutation that is optimal with respect to the minmax criterion, it is enough to solve the problem of minimizing the function \(\varPhi _{\max }(I^+,\pi )\), where \(I^+\) is the scenario with \(\lambda _j = \lambda ^+_j\) for all \(j= 1,\ldots , n\). Hence such a permutation may be constructed by Algorithm 1.
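
In code this is a one-liner on top of the earlier sketches; the fragment below reuses the hypothetical `lawler_minmax_cost` and `make_costs` helpers introduced above.

```python
def minmax_schedule(p, lam_hi, phi, adj):
    """Permutation that is optimal w.r.t. the minmax criterion: solve the
    deterministic problem for scenario I^+ (every lambda_j = lambda_j^+)."""
    return lawler_minmax_cost(p, make_costs(phi, lam_hi), adj)
```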

The following statement characterizes the quality of solutions obtained according to the minmax criterion.

Theorem 2

Let \(\pi ^{+}\) be an optimal permutation for scenario \(I^+\). Then for any scenario \(I^0\) the bound

$$\begin{aligned}&\varPhi _{\max }(I^0,\pi ^+)-\varPhi _{\max }(I^0,\pi ^*(I^0))\\&\quad \le \max \{\lambda _j^+ - \lambda _j^-|\ j=1,\ldots ,n\} \end{aligned}$$

holds, and this bound is tight.

Proof

Let scenario \(I^0\) be defined by the vector \(\ \lambda ^0=\) \((\lambda ^0_1,\ldots , \lambda ^0_n)\). For simplicity, denote permutation \(\pi ^*(I^0)\) by \(\pi ^0\). Without loss of generality, suppose that the jobs are numbered in accordance with permutation \(\pi ^{0}\). Let job \(l\) be a critical job in permutation \(\pi ^{0}\) for scenario \(I^+\), i.e., \(\varPhi _{\max }(I^+,\pi ^{0})=\varphi (\sum _{j=1}^l\ p_j)+\lambda _l^+\). Then \(\varphi (\sum _{j=1}^l\ p_j)+\lambda _l^+ \ge \varPhi _{\max }(I^+,\pi ^+)\).

For \(I^0\) we have \(\varPhi _{\max }(I^0,\pi ^0)\ \ge \ \varphi (\sum _{j=1}^l\ p_j)\ +\ \lambda _l^0\). Therefore, \(\varPhi _{\max }(I^+,\pi ^+)\ -\ \varPhi _{\max }(I^0,\pi ^0)\ \le \ \lambda ^+_l - \lambda ^0_l\).

Evidently, \(\varPhi _{\max }(I^+,\pi ^+)\ \ge \ \varPhi _{\max }(I^0,\pi ^+)\). Thus, \(\delta =\varPhi _{\max }(I^0,\pi ^+)\ -\ \varPhi _{\max }(I^0,\pi ^0)\ \le \ \varPhi _{\max }(I^+,\pi ^+)\ -\ \varPhi _{\max }(I^0,\pi ^0)\ \le \ \lambda ^+_l - \lambda ^0_l\ \le \ \max _{1\le j \le n}\{\lambda ^+_j - \lambda _j^-\}\).

Now we construct a simple example that shows the tightness of the bound. Given a number \(A>1\), let \(\varphi (t)=t;\ n=2;\ p_1=A,\ p_2=1;\) \(\lambda ^-_1=1+\varepsilon ,\ \lambda ^+_1=A+1+\varepsilon ;\ \lambda ^-_2=2,\ \lambda ^+_2=A+1,\) where \(\varepsilon >0\) is an arbitrarily small number. It is easy to see that \(\pi ^+=(1,2)\). Suppose that \(\lambda ^0_1=1+\varepsilon ,\ \lambda ^0_2=A+1\). Then \(\pi ^0=(2,1),\) \(\varPhi _{\max }(I^0,\pi ^+)=2A+2\), \(\varPhi _{\max }(I^0,\pi ^0)=A+2+\varepsilon \). Thus, \(\delta =A-\varepsilon \), and \(\delta \) tends to \(A\) as \(\varepsilon \) tends to 0. \(\square \)
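
The example can be checked numerically; the short script below is a sketch that reuses the hypothetical `lawler_minmax_cost`, `make_costs`, and `minmax_schedule` helpers from the earlier code fragments (jobs are 0-indexed, so the permutation \((1,2)\) appears as `[0, 1]`).

```python
A, eps = 10.0, 0.01                            # any A > 1 and small eps > 0
p = [A, 1.0]
lam_hi = [A + 1 + eps, A + 1]                  # scenario I^+
lam_0 = [1 + eps, A + 1]                       # scenario I^0 from the proof
adj = [[False, False], [False, False]]         # no precedence constraints
phi = lambda t: t

def phi_max(perm, lam):                        # Phi_max(I, pi) for phi(t) = t
    t, best = 0.0, float('-inf')
    for j in perm:
        t += p[j]
        best = max(best, t + lam[j])
    return best

pi_plus = minmax_schedule(p, lam_hi, phi, adj)                # -> [0, 1]
pi_0 = lawler_minmax_cost(p, make_costs(phi, lam_0), adj)     # -> [1, 0]
delta = phi_max(pi_plus, lam_0) - phi_max(pi_0, lam_0)
print(delta)   # A - eps = 9.99, approaching max_j (lambda_j^+ - lambda_j^-) = A
```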

It is interesting to note that in the case of the “optimistic” scenario \(I^-\) with \(\lambda ^-= (\lambda ^-_1, \ldots , \lambda ^-_n)\), we get the same bound.

Note 1

Let \(\pi ^{-}\) be an optimal permutation for scenario \(I^-\). Then for any scenario \(I^0\), the bound

$$\begin{aligned}&\varPhi _{\max }(I^0,\pi ^-)-\varPhi _{\max }(I^0,\pi ^*(I^0))\\&\quad \le \max \{\lambda _j^+ - \lambda _j^-|\ j=1,\ldots ,n\} \end{aligned}$$

holds, and this bound is tight.

The proof of this statement is similar to that of Theorem 2.

3.2 Minmax regret criterion

Let us define the minmax regret criterion for problem \(1|\mathrm{prec}; \lambda _j \in [\lambda ^-_j, \lambda ^+_j]|\varPhi _{\max }\).

Definition 4

For problem \(1|\mathrm{prec}; \lambda _j \in [\lambda ^-_j, \lambda ^+_j]\,|\varPhi _{\max }\), a permutation is called optimal with respect to the maximum regret if it minimizes the function

$$\begin{aligned} \Delta (\mathcal{{I}}, \pi ) = \max _{I \in \mathcal{{I}}} \{\varPhi _{\max }(I, \pi )-\varPhi _{\max }(I, \pi ^*(I))\}. \end{aligned}$$
(1)

The approach based on minimizing function (1) is also known as the robust deviation decision approach, see Kouvelis and Yu (1997).

Denote by \(\lambda '=(\lambda '_1, \ldots , \lambda '_n)\) the collection of parameter values that defines scenario \(I \in \mathcal {I}\), and rewrite the function \(\varPhi _{\max }(I, \pi )-\varPhi _{\max }(I, \pi ^*(I))\) from (1) in the following way:

$$\begin{aligned}&\varPhi _{\max }(I, \pi )\ -\varPhi _{\max }(I, \pi ^*(I))\\&\quad =\max _{j=1,\ldots ,n}\{\varphi (C_j(\pi )) + \lambda '_j\}- \varPhi _{\max }(I, \pi ^*(I))\\&\quad =\max _{j=1,\ldots ,n}\{\varphi (C_j(\pi ))\ -(\varPhi _{\max }(I, \pi ^*(I))\ -\ \lambda '_j)\}. \end{aligned}$$

Thus,

$$\begin{aligned} \varDelta (\mathcal{{I}}, \pi )&= \max _{I \in \mathcal{{I}}} \Bigg \{\max _{j=1,\ldots ,n}\{\varphi (C_j(\pi ))\\&\quad - (\varPhi _{\max }(I, \pi ^*(I))\ -\ \lambda '_j)\}\Bigg \}\\&= \max _{j=1,\ldots ,n}\Bigg \{\max _{I \in \mathcal{{I}}}\{\varphi (C_j(\pi ))\\&\quad -\, (\varPhi _{\max }(I, \pi ^*(I))\ -\ \lambda '_j)\}\Bigg \}\\&= \max _{j=1,\ldots ,n}\Bigg \{\varphi (C_j(\pi ))\\&\quad -\ \min _{I \in \mathcal{{I}}}\{\varPhi _{\max }(I, \pi ^*(I))\ -\ \lambda '_j\}\Bigg \}. \end{aligned}$$

Definition 5

For each \(h \in \{1,\ldots , n\}\) define scenario \(I_h \in \mathcal{{I}}\) by setting \(\lambda _h = \lambda ^+_h\) and \(\lambda _j = \lambda ^-_j\) for all \(j \ne h\).

Denote by \(\pi _h^*\) an optimal permutation for scenario \(I_h\). The following lemma shows the importance of scenarios \(I_h\).

Lemma 2

Given scenario \(I^0 \in \mathcal{{I}},\) let \(\pi ^0\) be an optimal permutation for \(I^0\). Then for any \(h \in \{1,\ldots , n\}\) the following inequality is valid:

$$\begin{aligned} \varPhi _{\max }(I_h, \pi ^*_h)\ -\ \lambda _h^+ \le \varPhi _{\max }(I^0, \pi ^0)\ -\ \lambda _h^0, \end{aligned}$$
(2)

where the collection \(\lambda ^0=(\lambda _1^0, \ldots , \lambda _n^0)\) defines scenario \(I^0\).

Proof

Construct scenario \(I'\) by replacing \(\lambda _j^0\) with \(\lambda _j^-\) in scenario \(I^0\) for all \(j \ne h\). Evidently, \(\varPhi _{\max }(I', \pi ^0) \le \varPhi _{\max }(I^0, \pi ^0)\). Since \(\varPhi _{\max }(I_h, \pi ^*_h) \le \varPhi _{\max }(I_h, \pi ^0)\), to prove the validity of (2), it is enough to show that the inequality

$$\begin{aligned} \varPhi _{\max }(I_h, \pi ^0)\ -\ \lambda _h^+ \le \varPhi _{\max }(I', \pi ^0)\ -\ \lambda _h^0 \end{aligned}$$

holds.

If \(h\) is not a critical job in \(\pi ^0\) for scenario \(I_h,\) then \(\varPhi _{\max }(I_h, \pi ^0) = \varPhi _{\max }(I',\pi ^0)\) and \(\varPhi _{\max }(I_h,\pi ^0)-\lambda ^+_h \le \varPhi _{\max }(I',\pi ^0)-\lambda ^0_h\).

If \(h\) is a critical job, then \(\varPhi _{\max }(I_h, \pi ^0)\ -\ \lambda ^+_h\ =\) \((\varphi (C_h(\pi ^0))+\lambda ^+_h) - \lambda ^+_h =\) \((\varphi (C_h(\pi ^0))+\lambda ^0_h) - \lambda ^0_h \le \) \(\varPhi _{\max }(I', \pi ^0)\ -\ \lambda _h^0\).\(\square \)

Corollary 4

The maximum regret is of the form

$$\begin{aligned} \Delta (\mathcal{{I}},\pi )= \max _{j=1,\ldots , n}\{\varphi (C_j(\pi ))+d_j\}, \end{aligned}$$

where \(d_j=-(\varPhi _{\max }(I_j, \pi _j^*) - \lambda ^+_j)\).

Proof

Lemma 2 implies that

$$\begin{aligned} \Delta (\mathcal{{I}},\pi )&= \max _{j=1,\ldots , n}\Bigg \{\varphi (C_j(\pi ))\\&\quad -\,\min _{I\in \mathcal{{I}}}\{\varPhi _{\max }(I, \pi ^*(I)) - \lambda '_j\}\Bigg \}\\&= \ \max _{j=1,\ldots , n}\{\varphi (C_j(\pi ))-(\varPhi _{\max }(I_j, \pi ^*_j) - \lambda ^+_j)\}. \end{aligned}$$

\(\square \)

Evidently, \(\Delta (\mathcal{{I}},\pi )\) belongs to the class of maximum cost functions (with decomposable costs). Therefore, Lawler’s algorithm may be used to minimize \(\Delta (\mathcal{{I}},\pi )\). The following two-step algorithm constructs a permutation that minimizes the maximum regret \(\Delta (\mathcal{{I}},\pi )\).

Algorithm 2

  1. For each scenario \(I_j \in \mathcal{{I}},\ j= 1, \ldots , n,\) calculate the value \(\varPhi _{\max }(I_j,\pi ^*_j)\) and set \(d_j=-\varPhi _{\max }(I_j,\pi ^*_j) + \lambda ^+_j\).

  2. Apply Algorithm 1 to problem \(1|\mathrm{prec}|\Delta _{\max }\) with cost function \(\Delta _j(t)=\varphi (t)+d_j\) to obtain an optimal permutation \(\pi ^*\) and the minimum value \(\Delta ^*=\Delta _{\max }(\pi ^*)\).

The validity of Algorithm 2 is immediate: by Corollary 4 and Definition 4, \(\pi ^*\) is an optimal permutation and \(\Delta ^* =\Delta (\mathcal{{I}},\pi ^*)\) is the minimum value of the maximum regret.

Below (see Algorithm 3) we show that the running time of Step 1 of Algorithm 2 is \(O(n^2)\) if we use a special procedure to implement this step. Thus, the running time of Algorithm 2 is \(O(n^2)\).
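
A straightforward Python sketch of Algorithm 2, again reusing the hypothetical `lawler_minmax_cost` and `make_costs` helpers above, may look as follows. Step 1 is implemented here by \(n\) independent calls of Algorithm 1, which gives \(O(n^3)\) overall; the dedicated procedure of Algorithm 3 below brings Step 1 down to \(O(n^2)\).

```python
def minmax_regret_schedule(p, lam_lo, lam_hi, phi, adj):
    """Sketch of Algorithm 2 (with a simple O(n^3) realization of Step 1)."""
    n = len(p)
    d = [0.0] * n
    # Step 1: d_j = lambda_j^+ - Phi_max(I_j, pi_j^*) for the scenarios I_j
    for j in range(n):
        lam = list(lam_lo)
        lam[j] = lam_hi[j]                               # scenario I_j
        pi_j = lawler_minmax_cost(p, make_costs(phi, lam), adj)
        t, opt_j = 0.0, float('-inf')
        for job in pi_j:
            t += p[job]
            opt_j = max(opt_j, phi(t) + lam[job])        # Phi_max(I_j, pi_j^*)
        d[j] = lam_hi[j] - opt_j
    # Step 2: apply Algorithm 1 with cost functions Delta_j(t) = phi(t) + d_j
    pi_star = lawler_minmax_cost(p, make_costs(phi, d), adj)
    t, delta_star = 0.0, float('-inf')
    for job in pi_star:
        t += p[job]
        delta_star = max(delta_star, phi(t) + d[job])    # Delta* = Delta_max(pi*)
    return pi_star, delta_star
```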

Note that the idea of using the scenarios \(I_h\) in the minmax regret approach is due to Averbakh (2000), where the single-machine sequencing problem with the maximum weighted tardiness criterion is considered. However, in that problem an uncertain weight is given for each job, and all the other parameters (i.e., the processing times and the due dates) are precisely known. In such a situation, the problem of minimizing the maximum regret criterion can be solved in \(O(n^3)\) time. As mentioned in Averbakh (2000), the results still hold if, instead of the weighted tardiness \(w_jT_j\), we consider any function \(w_jf_j\) with an uncertain weight \(w_j\), where \(f_j\) is non-decreasing in \(C_j\).

Now we describe a procedure that implements Step 1 of Algorithm 2. Given problem \(1|\mathrm{prec};\) \(\lambda _j \in [\lambda ^-_j, \lambda ^+_j]\,|\varPhi _{\max }\), the following algorithm calculates the values \(\varPhi _{\max }(I_h,\pi _h^*)\) for all \(n\) scenarios \(I_h \in \mathcal{{I}},\ h=1, \ldots , n,\) in \(O(n^2)\) time.

Algorithm 3

  1. Construct scenario \(I^- \in \mathcal{{I}}\) by setting \(\lambda _j=\lambda _j^-\) for all \(j=1,\ldots , n\). Apply Algorithm 1 to construct an optimal permutation \(\pi ^-\) for scenario \(I^-\). Compute the value \(\varPhi _{\max }(I^-,\pi ^-)\). Renumber the jobs according to permutation \(\pi ^-\) (i.e., \(\pi ^- = (1, \ldots , n)\) after the renumbering). Compute \(C_j(\pi ^-)\) for all \(j=1,\ldots , n\). Set \(h=n\).

  2. Construct scenario \(I_h\) by replacing \(\lambda _h^-\) with \(\lambda _h^+\) in \(I^-\). Set \(v=h-1,\ \pi _h^*=\pi ^-\) and \(J_h=(h)\).

  3. If \(\varphi (C_h(\pi ^-))+\lambda _h^+ \le \varPhi _{\max }(I^-,\pi ^-)\), then set \(\varPhi _{\max }(I_h,\pi _h^*)=\varPhi _{\max }(I^-,\pi ^-)\) and go to Step 7.

  4. If \(v=0\) or \(\lambda _h^+ \le \lambda _v^-,\) then go to Step 6; otherwise, go to Step 5.

  5. If job \(v\) is a predecessor of job \(h\), then set \(J_h=(v,J_h)\). Otherwise, modify permutation \(\pi _h^*\) by exchanging the positions of job \(v\) and the subpermutation \(J_h\). Set \(v=v-1\) and go to Step 4.

  6. Calculate \(\varPhi _{\max }(I_h,\pi _h^*)\) for the current permutation \(\pi _h^*\).

  7. Set \(h=h-1\). If \(h \ge 1,\) then go to Step 2; otherwise, the process of calculating the values \(\varPhi _{\max }(I_h,\pi _h^*)\) has been completed.

Theorem 3

Algorithm 3 finds permutations \(\pi _h^*\) correctly, \(h=1,\ldots ,\ n\), and its running time is \(O(n^2)\).

Proof

According to Corollary 3, none of the jobs in permutation \(\pi ^-\) has a weak improvement. After transforming scenario \(I^-\) into \(I_h,\) job \(h\) is the only job in \(\pi ^-\) that may have a weak improvement.

In Step 5 of Algorithm 3, we eliminate weak improvements if they exist. Indeed, if job \(h\) has a weak improvement in \(\pi ^-\), then there is a job \(l\) with \(l<h\) such that \(\lambda _l^- < \lambda _h^+\) and \(A_G(l)\cap \{\pi (l+1), \ldots , \pi (h)\} = \emptyset \). Thus, if the scan stops in Step 4 because \(\lambda _h^+ \le \lambda _v^-\), then job \(h\) has no weak improvement in the current permutation \(\pi _h^*\): otherwise there would be such a job \(l\) with \(l<v\), and then \(\lambda _l^- < \lambda _h^+ \le \lambda _v^-\) and \(v \notin A_G(l)\), i.e., job \(v\) would have a weak improvement in \(\pi ^-\), which contradicts Corollary 3.

Thus, if \(h\) really has a weak improvement in the current permutation \(\pi _h^*\), then there are only two possibilities for job \(v\): either \(v \rightarrow h\), or \(v \not \rightarrow h\) and \(\lambda _v^- < \lambda _h^+\). In the first case, we include \(v\) in the current subpermutation \(J_h\); in the second case, we interchange the positions of \(v\) and \(J_h\) and obtain a new feasible permutation \(\pi _h^*\) whose value \(\varPhi _{\max }(I_h,\pi _h^*)\) is not greater than that of the previous permutation \(\pi _h^*\) (see Lemma 1). In the new permutation, none of the jobs (except possibly job \(h\)) has a weak improvement.

If we have a situation with \(v=0\) or \(\lambda _h^+ \le \lambda _v^-,\) then in the current permutation \(\pi _h^*\), job \(h\) has no weak improvements. In view of Corollary 2, the final permutation \(\pi _h^*\) is optimal for scenario \(I_h\).

If the inequality \(\varphi (C_h(\pi ^-))+\lambda _h^+ \le \varPhi _{\max }(I^-,\pi ^-)\) holds in Step 3, then \(\pi ^-\) is an optimal permutation for scenario \(I_h\). Indeed, if \(\lambda _h^+>\lambda _h^-\), then the validity of the above inequality means that there exists a critical job \(j \ne h\) in \(\pi ^-\), and the optimality of \(\pi ^-\) follows from Corollary 2.

The running time of Algorithm 3 is \(O(n^2)\). Indeed, Step 1 takes \(O(n^2)\) time. For each \(h\), each run of Steps 2–5 and 7 takes \(O(1)\) time (for each pair \(v\) and \(h\), checking whether the relation \(v \rightarrow h\) holds takes \(O(1)\) time if graph \(G\) is represented by its adjacency matrix), and each of these steps is executed at most \(n\) times. Step 6 takes \(O(n)\) time and, for each \(h\), runs at most once. Thus, the overall complexity of Steps 2–7 is \(O(n^2)\).\(\square \)
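
The following Python sketch transcribes Algorithm 3 directly (the function name and helper conventions are ours). It assumes that the jobs have already been renumbered along \(\pi ^-=(0,1,\ldots ,n-1)\), e.g. by the Lawler sketch above, and that `pred[j]` is the set of all (transitive) predecessors of job \(j\). The list splicing used for the exchange in Step 5 costs \(O(n)\) per exchange, so this transcription is \(O(n^3)\) in the worst case; the \(O(n^2)\) bound of Theorem 3 requires the constant-time exchange discussed in the proof.

```python
from itertools import accumulate

def step1_values(p, lam_lo, lam_hi, phi, pred):
    """Sketch of Algorithm 3: compute Phi_max(I_h, pi_h^*) for every h.

    Jobs are assumed to be renumbered so that pi^- = (0, 1, ..., n-1) is an
    optimal permutation for scenario I^- (all lambda_j = lam_lo[j]).
    pred[j] is the set of all (transitive) predecessors of job j.
    """
    n = len(p)
    C = list(accumulate(p))                                  # C_j(pi^-)
    F_minus = max(phi(C[j]) + lam_lo[j] for j in range(n))   # Phi_max(I^-, pi^-)
    opt = [None] * n
    for h in range(n - 1, -1, -1):                           # Steps 2 and 7
        # Step 3: pi^- itself stays optimal for scenario I_h
        if phi(C[h]) + lam_hi[h] <= F_minus:
            opt[h] = F_minus
            continue
        perm = list(range(n))                                # current pi_h^*
        b, blen = h, 1                                       # block J_h = (h)
        v = h - 1
        # Steps 4-5: eliminate weak improvements of job h
        while v >= 0 and lam_hi[h] > lam_lo[v]:
            if v in pred[h]:
                b, blen = b - 1, blen + 1                    # J_h = (v, J_h)
            else:
                # exchange job v (at position b-1) with the subpermutation J_h
                block = perm[b:b + blen]
                perm[b - 1:b + blen] = block + [perm[b - 1]]
                b -= 1
            v -= 1
        # Step 6: evaluate the current permutation under scenario I_h
        lam = list(lam_lo)
        lam[h] = lam_hi[h]
        t, best = 0.0, float('-inf')
        for j in perm:
            t += p[j]
            best = max(best, phi(t) + lam[j])
        opt[h] = best
    return opt
```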

There are situations where there exists a permutation that is optimal for all scenarios simultaneously. Therefore, having an instance of a problem under uncertainty, it is reasonable to check the existence of such a permutation.

Definition 6

A permutation \(\pi \) is called globally optimal for problem \(1|\mathrm{prec}; \lambda _j \in [\lambda ^-_j, \lambda ^+_j]\,|\varPhi _{\max }\) if \(\pi \) is optimal for all scenarios \(I \in \mathcal{{I}}\) simultaneously.

We immediately obtain two characterizations of global optimality.

Theorem 4

A globally optimal permutation \(\pi \) is obtained if Algorithm 2 returns the value \(\Delta ^*=0\); if \(\Delta ^*>0\), then no globally optimal permutation exists. Equivalently, a permutation \(\pi \) is globally optimal if and only if \(\pi \) is optimal for all scenarios \(I_h,\ h=1,\ldots ,n\).

Consider a simple example illustrating the notion of global optimality.

Set \(\varphi (t)=t;\ n=3;\ p_1=1,\ p_2=3,\ p_3=0;\ \lambda _1^-=\lambda _1^+=1,\ \lambda _2^-=5,\ \lambda _2^+=7,\ \lambda _3^-=4,\ \lambda _3^+=8\). Graph \(G\) has a unique arc \((1,2)\).

For this instance there are only three feasible permutations: \((1,2,3),\ (1,3,2)\), and \((3,1,2)\). Algorithm 2 gives permutation \(\pi =(3, 1, 2)\), and it is a globally optimal permutation since \(\Delta (\mathcal{{I}},\pi )=0\). It is interesting to note that the minmax approach produces the same permutation, whereas the minmin approach (minimizing the objective function under scenario \(I^-\)) generates permutation \(\pi ^-=(1, 2, 3),\) which is not globally optimal (e.g., we have \(\varPhi _{\max }(I^+,\pi )=11\) and \(\varPhi _{\max }(I^+,\pi ^-)=12\)). There is one more globally optimal permutation, \((1,3,2)\), but this permutation cannot be constructed by the minmax or minmin approach.
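
The instance is small enough to verify these claims by brute force. The self-contained sketch below (names ours) checks, following Theorem 4, that a candidate permutation is optimal for every scenario \(I_h\).

```python
from itertools import permutations

p = {1: 1, 2: 3, 3: 0}
lam_lo = {1: 1, 2: 5, 3: 4}
lam_hi = {1: 1, 2: 7, 3: 8}
prec = {(1, 2)}                                   # job 1 precedes job 2

def feasible(perm):
    return all(perm.index(a) < perm.index(b) for a, b in prec)

def phi_max(perm, lam):                           # phi(t) = t in this example
    t, best = 0, float('-inf')
    for j in perm:
        t += p[j]
        best = max(best, t + lam[j])
    return best

perms = [pi for pi in permutations(p) if feasible(pi)]   # the 3 feasible ones

def globally_optimal(pi):
    # By Theorem 4 it suffices to check optimality for all scenarios I_h.
    for h in p:
        lam = dict(lam_lo); lam[h] = lam_hi[h]           # scenario I_h
        if phi_max(pi, lam) > min(phi_max(q, lam) for q in perms):
            return False
    return True

print([pi for pi in perms if globally_optimal(pi)])
# -> [(1, 3, 2), (3, 1, 2)]; (1, 2, 3) is not globally optimal.
```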

Consider the following example, which shows that a globally optimal permutation may exist in quite complicated situations.

Set \(\varphi (t)=t;\ n=8\). Values \(p_j,\ \lambda _j^-\) and \(\lambda _j^+\) are presented in Table 1.

Table 1 Values \(p_j,\ \lambda _j^-\) and \(\lambda _j^+\)

Graph \(G\) is given by the set of its arcs: \((4,5),\) \((3,6), \) \((6,7),\) \((6,2),\) \((5,2),\) \((8,5)\); here we do not list the transitive arcs of the graph. A globally optimal permutation is \(\pi =(8, 3, 6, 7, 4, 5, 1, 2)\).

4 Conclusion

Our two main results, both related to the well-known minmax cost algorithm of Lawler (1973), are necessary and sufficient conditions for permutation optimality and an \(O(n^2)\) algorithm for constructing a schedule that minimizes the maximum regret criterion for the problem under interval uncertainty with decomposable cost functions.

Possible directions for further research include considering more general cost functions (beyond decomposable ones) and addressing situations with more than one uncertain parameter per job. Another possibility is to consider the relative regret approach instead of the absolute regret approach discussed in this paper. There are also other criteria for dealing with uncertainty, such as the OWA operator (Ordered Weighted Averaging aggregation operator). OWA operators were introduced by Yager (1988) to work with multicriteria problems. Recently, OWA operators have been used in connection with discrete optimization problems under uncertainty (see Kasperski and Zieliński 2013) with a discrete scenario set containing a finite number of possible scenarios. In our case of interval uncertainty with an infinite set of scenarios, the direct use of OWA operators is impossible, but some special cases of the general OWA operator may be considered.

One should mention the considerable difference between problems under uncertainty with a discrete scenario set and those with interval uncertainty, as in our case: the former are often more difficult than the latter. For example, in the recent paper by Aissi et al. (2011), finding the minmax decision for the single-machine scheduling problem of minimizing the number of late unit-time jobs was shown to be strongly NP-hard under due date uncertainty with a discrete scenario set whose cardinality is part of the input. Earlier, the NP-hardness of finding the minmax decision for the same problem was established in Aloulou and Della Croce (2008) under processing time uncertainty with a discrete scenario set, even for the case of two scenarios. In contrast, finding the minmax decision for this problem is polynomially solvable in the case of interval-uncertain due dates and processing times: it is enough to set the minimum value for each due date and the maximum value for each processing time and to solve the resulting deterministic problem, using, for instance, the algorithm of Moore (1968).