1 Introduction

Scheduling problems are an inseparable part of every manufacturing and service system. As one of the most practical problems in today’s world, scheduling has a considerable impact on increasing productivity. Parallel machine scheduling problems are generally categorized into three classes: (1) identical machines (\({P}_{m}\)), where all machines require the same processing time for a given job; (2) uniform machines (\({Q}_{m}\)), where machines operate at different speeds but each machine processes all jobs at a constant rate; and (3) unrelated machines (\({R}_{m}\)), where each machine is able to operate at a different rate and each job can require a different processing time on each machine; i.e., the processing time of a job is both machine and job dependent [1]. This paper deals with the concept of sensitivity analysis. In this regard, the unrelated parallel machine scheduling problem with operation sequence– and machine-dependent setup times, where rework processes are allowed and machine eligibility restrictions and release times are considered, is investigated.

The remainder of this paper is organized as follows. In Sect. 2, the relevant literature is reviewed. In Sect. 3, the mathematical model is described. Section 4 explains the solution representation method employed in this study. The numerical examples used are given in Sect. 5. In Sect. 6, the sensitivity analysis procedure is discussed and the sensitivity of the model with regard to some parameters is analyzed. Eventually, conclusions and suggestions for future work are presented in Sect. 7.

2 Literature Review

Unrelated parallel machine scheduling has been a favorite field of research during the last decade and is widely applicable in manufacturing and service providing systems. In such a case, a set of \(n\) jobs is to be processed using a set of \(m\) parallel machines. Each job can only be processed on one machine and each machine can only process one job at a time. Machines are able to operate at different rates and the processing time of each job on each machine may vary; i.e., the processing time of a job is both machine and job dependent [1]. Among the recent studies in the field of unrelated parallel machines, the study by Pei et al. [2] can be mentioned. They provided a concise guide to scheduling problems with learning and deteriorating effects. They designed a new classification scheme based on levels of different domains, namely, effects, processing ways, processing time functions, and manufacturing environments. According to their proposed scheme, at the first stage, scheduling problems are divided into three categories: learning effects, deteriorating effects, and combined effects. In the next stage, models in each category are refined along three lines: general processing way, batch scheduling, and group scheduling. Then, they analyzed the evolution of the related scheduling models and conducted a critical analysis of the proposed algorithms by combining the attributes of actual processing time functions and manufacturing environments [2].

Many studies in the literature ignore setup time or include it in the processing time [1]. This simplification is reasonable when setup times are negligible compared to processing times or when setup times do not depend on the job sequence. Even though this assumption simplifies the analysis of the scheduling problem, it affects the solution quality negatively and makes the model inapplicable for real-life implementations, since many applications need explicit treatment of setup times [3]. Pei et al. addressed a serial-batching scheduling problem with arbitrary release times to minimize the makespan, where the position of the current job and the sum of the previous jobs’ processing times determine each job’s actual processing time. They took the effects of learning and deterioration into consideration at the same time. By combining the society and civilization (SC) algorithm with variable neighborhood search (VNS), they presented a novel hybrid SC-VNS algorithm to solve the studied problem. Computational experiments indicated the superiority of the proposed solution approach over the other algorithms examined [4].

Sequence- and machine-dependent setup times are dependent on both the job currently being scheduled and the last scheduled job. They usually occur while switching between different types of jobs on machines. In this situation, setup time depends on the type of job that is processed on the machine and also the type of previous job which has been already processed on the machine. This kind of setup time can be found in some manufacturing processes and service provider systems. Some of its industrial applications are drilling operations for printed circuit board fabrication, dicing operations for semiconductor wafer manufacturing, and some main operations in textile manufacturing like warp making, weaving, dyeing, and cloth cutting [5]. Despite the importance of unrelated parallel machine problem with sequence- and machine-dependent setup times, it has not been thoroughly investigated in the literature. Among those who dealt with this problem, Lin and Ying [5] can be mentioned. They proposed a hybrid artificial bee colony algorithm to solve the manufacturing unrelated parallel machine scheduling problem with machine-dependent and job sequence–dependent setup times trying to minimize the makespan. Manupati et al. [6] developed a multi-objective-based evolutionary artificial immune non-dominated sorting genetic algorithm (AI-NSGA-II) for handling unrelated parallel machine scheduling problem with release dates and sequence- and machine-dependent setup times to minimize the total completion time, the number of tardy jobs, the total flow time, and the machine load variation. The quality of their proposed method was ensured in comparison with two other evolutionary algorithms. Zhang et al. [7] proposed a combinatorial evolutionary algorithm to solve the unrelated parallel machine scheduling problem with worker resources, learning effect, and sequence- and machine-dependent setup times.

Lots of scheduling problems are dedicated to the case where all machines are allowed to process all jobs with no restriction, whereas the real-world conditions are in conflict with this assumption in most cases. Generally, a subset of machines is allowed to process an individual job, not all of them. This constraint is usually referred to as processing set or machine eligibility restriction in the literature of scheduling problems. Food processing plants, wireless communication networks, hospital operating room management, and steel production industries are some examples of practical applications of scheduling problems with machine eligibility restrictions in manufacturing environments. Bektur and Saraç [8] studied unrelated parallel machine scheduling problem with sequence-dependent setup times, common server, and machine eligibility. Tabu search (TS) and simulated annealing (SA) were used to minimize the total weighted tardiness. Attempting to minimize the makespan, Afzalirad and Rezaeian [9] addressed a scheduling problem of unrelated parallel machine in which resource constraints, machine eligibility, precedence constraints, release dates, and sequence-dependent setup times were taken into consideration. Genetic algorithm (GA) and artificial immune system (AIS) were developed to find optimal or near optimal solutions. Yunusoglu and Topaloglu Yildiz [10] investigated multi-resource-constrained unrelated parallel machine scheduling problem with the aim of minimizing maximum completion time and developed an exact solution approach based on constraint programming (CP). Sequence-dependent setup times, precedence relations, machine eligibility restrictions, and release dates were the restrictions involved. Foroutan et al. [11] investigated unrelated parallel machine scheduling problem to minimize costs of tardiness and hiring workers. Sequence- and machine-dependent setup times, precedence constraints, and machine eligibility were considered. 
Multi-objective simulated annealing (MOSA), multi-objective tabu search (MOTS), and a hybrid evolutionary algorithm were employed to deal with large size instances.

In manufacturing environments, many reasons such as machine incompetence, instability of manufacturing systems, inadequacy of preventative maintenance, and human errors make it reasonable to consider the possibility of producing defective jobs. Hence, rework processes are taken into account in order to retrieve imperfect jobs. Regarding rework procedures, semiconductor, steel, and glass are proper examples of process industries where many practical applications can be found [12]. The rework assumption in unrelated parallel machine scheduling problems has been considered by many researchers in recent years. Ramezanian and Saidi-Mehrabad [13] addressed a multi-product unrelated parallel machine scheduling problem considering the probability of producing imperfect jobs. They formulated a mixed-integer non-linear programming model with the aim of minimizing makespan. Considering random reworks, Wang et al. [14] dealt with unrelated parallel machine scheduling to minimize expected total weighted tardiness and total weighted completion time. Approximate methods based on priority rules, GA, and simulated annealing (SA) were proposed to handle medium- and large-scale problems. Rambod and Rezaeian [15] addressed an unrelated parallel machine scheduling problem with rework processes, setup times, and machine eligibility restrictions. They formulated the problem mathematically and presented metaheuristic algorithms including a genetic algorithm and two bee algorithms (BA1 and BA2) to find optimal/suboptimal solutions.

Every statistical model consists of a series of equations, input factors, parameters, and variables aiming to describe the process under investigation. The inputs are subject to a number of uncertainty sources such as absence of information, measurement errors, and poor or partial understanding of the driving forces and mechanisms. Such uncertainty limits the confidence in the output of the model. In addition, models may have to deal with the natural inherent variability of the system, such as the occurrence of stochastic events [16]. Sensitivity analysis is utilized to clarify how the output of any given model depends on its input parameters. Therefore, SA is a significant technique to test the quality of given models. It is also an effective tool to check the reliability and robustness of any solution [17]. In other words, sensitivity analysis is necessary to understand the effect of a given parameter variation on the output of a model. Nevertheless, there are few studies on sensitivity analysis of scheduling problems. Kolen et al. [18] studied the sensitivity of some dispatching rules to changes in the operation processing times in a parallel machine environment.

Penz et al. [19] studied the performance of static scheduling policies under the influence of online disturbances. They proved that, for independent tasks, the sensitivity does not exceed the square root of the magnitude of the perturbation. Hall and Posner [20] carried out a review of sensitivity analysis applied to scheduling problems. They tried to devise a new optimal schedule for problems with modified parameters by applying the optimal schedule of the original problem. To the authors’ knowledge, there have been no studies focusing on sensitivity analysis of unrelated parallel machines. Therefore, in this research, sensitivity analysis is performed on the scheduling problem of unrelated parallel machines considering the possibility of rework, sequence- and machine-dependent setup times, release times, and machine eligibility constraints with the aim of minimizing makespan. To do so, the mathematical model suggested by Rambod and Rezaeian [15] is employed. This analysis aims to provide a better understanding of the effect of parameter variations and their interactions on the output of the model. Injection molding departments and LCD manufacturing processes are two well-known applications of the model under investigation in production environments, where unrelated parallel machines are applied to process different components and sequence- and machine-dependent setup times are frequently observed [21].

3 Problem Formulation

There are \(n\) independent jobs to schedule on a set of \(m\) unrelated parallel machines. Each machine can process only one job at a time, and each job is allowed to be processed on only one machine. Not all jobs are ready to be processed at time zero. The possibility of imperfect items is assumed. Hence, a finite number of rework activities are employed to improve imperfect items until they meet an acceptable level of quality. For each job, the first operation is deterministic and referred to as the work process, whereas the next ones are probabilistic and referred to as rework processes. At any time, only one operation per job is permitted. Processing times, including inspection times, are considered deterministic. There are setup actions between successive operations such that setup times depend on both the operation sequence and the machine. Each machine can process only some of the jobs; therefore, for any given job, only a subset of the machines is capable of processing it. Machine breakdown is not considered, so all machines can operate at any time. Also, job preemption is not permitted (see Fig. 1).

Fig. 1
figure 1

The illustration of the proposed problem

3.1 Indices

\(n\) :

number of jobs

\(m\) :

number of machines

\(i\) :

index of machines \((i=1,\dots ,m)\)

\(j, h\) :

indices of jobs \((j, h = 1, \dots, n + m)\)

\(l, l'\) :

indices of process. \(l,\;l'=\left\{\begin{array}{lc}1&\mathrm{main}\;\mathrm{process}\\2,...,\;L_j&\mathrm{rework}\;\mathrm{process}\end{array}\right.\)

3.2 Parameters

\({p}_{ijl}\) :

Defective probability of operation \(l\) of job \(j\) on machine \(i\) \(({p}_{ij{L}_{j}}=0)\). Job \(j\) is permitted for processing (both main and rework processes) at most \({L}_{j}\) times

\({t}_{ij}\) :

main processing time of job \(j\) on machine \(i\)

\({\alpha }_{ijl}\) :

decreasing coefficient of processing time of rework process \(l\) \((0\le {\alpha }_{ijl}\le 1)\)

\(pt_{ijl}\) :

processing time of process \(l\) of job \(j\) on machine \(i\) \((pt_{ij1}=t_{ij},\;pt_{ijl}=pt_{ij(l-1)}\alpha_{ijl})\)

\(su_{ihl'jl}\) :

setup time required to switch from operation \(l'\) of job \(h\) to operation \(l\) of job \(j\) on machine \(i\)

\(Z_{ijl}=\left\{\begin{array}{cc}1&\mathrm{if}\;\mathrm{operation}\;l\;\mathrm{of}\;\mathrm{job}\;j\;\mathrm{can}\;\mathrm{be}\;\mathrm{processed}\;\mathrm{on}\;\mathrm{machine}\;i\\0&\mathrm{otherwise}\end{array}\right.\)

\({R}_{j}\) :

release time of job \(j\)

\(M\) :

an arbitrary large number

3.3 Decision Variables

\({S}_{jl}\) :

Start time of process \(l\) of job \(j\)

\({C}_{jl}\) :

Completion time of process \(l\) of job \(j\)

\({C}_{j}\) :

Completion time of job \(j\)

\(x_{ijl} = \left\{\begin{array}{ll}1\ & \begin{array}{ll}\mathrm{if}\;\mathrm{operation}\;l\;\mathrm{of}\;\mathrm{job}\;j\;\mathrm{is}\\\mathrm{processed}\;\mathrm{on}\;\mathrm{machine}\;i\end{array}\\0 & \mathrm{otherwise}\end{array}\right.\)  

\({y}_{h{l}^{^{\prime}}jl} = \left\{\begin{array}{ll}1\ & \begin{array}{ll}\mathrm{if}\;\mathrm{operation}\;l\;\mathrm{of}\;\mathrm{job}\;j\;\mathrm{is}\;\mathrm{processed}\;\mathrm{directly}\\\mathrm{after}\;\mathrm{operation}\;l\mathit'\;\mathrm{of}\;\mathrm{job}\;h\;\mathrm{on}\;\mathrm{machine}\;i\end{array}\\0&\mathrm{otherwise}\end{array}\right.\)

3.4 Problem Formulation

$$Min\ Z = {C}_{max}$$
(1)
$$\begin{array}{ll}{C}_{\mathrm{max}}\ge E\left({C}_{j}\right) & \forall j\end{array}$$
(2)
$$\begin{array}{ll}E\left(C_j\right)=\sum\limits_{i=1}^mC_{j1}.\left(1-p_{ij1}\right).\;x_{ij1}+\sum\limits_{l=2}^{L_j}(\sum\limits_{i=1}^mC_{jl}\;.\;\prod\limits_{l'=1}^{l-1}(\sum\limits_{i=1}^mp_{ijl'}.\;x_{ijl'}).\;\left(1-p_{ijl}\right).\;x_{ijl}) & \forall j\end{array}$$
(3)
$$\begin{array}{ll}\sum\limits_{i=1}^{m}{x}_{ijl}=1 & \forall l,\;j=m+1,\dots ,m+n\end{array}$$
(4)
$$\begin{array}{ll}{C}_{jl}={S}_{jl}+\sum\limits_{i=1}^{m}{pt}_{ijl}.{x}_{ijl} & \forall j,l \end{array}$$
(5)
$$\begin{array}{ll}{S}_{jl}\ge {C}_{j\left(l-1\right)}+\sum\limits_{i=1}^{m}{su}_{ih{l}^{^{\prime}}jl}.{x}_{ih{l}^{^{\prime}}}.{y}_{h{l}^{^{\prime}}jl} & \forall h,{l}^{^{\prime}}, j=m+1,\dots ,m+n , l=2,\dots ,{L}_{j}\end{array}$$
(6)
$$\begin{array}{ll}S_{jl}\geq C_{hl'}+\left[\sum\limits_{i=1}^msu_{ihl'jl}\;.\;x_{ihl'}-(1-y_{hl'jl}).\;M\right] & \forall j,\;h,\;l,\;l'\end{array}$$
(7)
$$\begin{array}{ll}{S}_{j1}\ge {R}_{j}+\sum\limits_{i=1}^{m}{su}_{ih{l}^{^{\prime}}jl}. {x}_{ih{l}^{^{\prime}}}.{y}_{h{l}^{^{\prime}}j1}&\forall h,{l}^{^{\prime}},j=m+1,\dots ,m+n\end{array}$$
(8)
$$\begin{array}{ll}\sum\limits_{h=1}^{m+n}\sum\limits_{{l}^{^{\prime}}=1}^{{L}_{j}}{y}_{h{l}^{^{\prime}}jl}=1&\forall l,j=m+1,\dots ,m+n\end{array}$$
(9)
$$\begin{array}{ll}\sum\limits_{j=1}^{m+n}\sum\limits_{l=1}^{L_j}y_{hl'jl}\leq1&\forall h,\;l'\end{array}$$
(10)
$$\begin{array}{ll}y_{hl'jl}\leq\sum\limits_{i=1}^{m}x_{ihl'\;}.\;x_{ijl}&\forall j,h,l,l'\end{array}$$
(11)
$$\begin{array}{ll}{x}_{ijl}\le {Z}_{ijl}&\forall i,j,l\end{array}$$
(12)
$$\begin{array}{ll}{x}_{jj1}=1&j=1,\dots ,m\end{array}$$
(13)
$$\begin{array}{ll}{S}_{j1}=0&j=1,\dots ,m\end{array}$$
(14)
$$\begin{array}{ll}x_{ijl},\;y_{hl'jl}\in\{0,1\}&\forall i,j,l,h,l'\end{array}$$
(15)

Equation (1) minimizes the expected maximum completion time. Constraint (2) calculates maximum completion time. Constraint (3) is the expected value function \(E({C}_{j})\) by which the expected completion time of each job \(j\) is determined. This constraint is formulated based on Bayes’ theorem as follows. \({O}_{ijl}\) represents the order of operations.

$$\begin{aligned}E\left({C}_{j}\right)&=\sum_{i=1}^{m}({C}_{j1}\left|\mathrm{Completed\;at\;}{O}_{ij1}\right).\left(1-{p}_{ij1}\right).{x}_{ij1}\\&\;+\sum_{i=1}^{m}({C}_{j2}\left|\mathrm{Completed\;at\;}{O}_{ij2}\right)\times\left[{p}_{ij1}.\left(1-{p}_{ij2}\right)\right].{x}_{ij2}+\dots\\&\;+\sum_{i=1}^{m}({C}_{j{L}_{j}}\left|\mathrm{Completed\;at\;}{O}_{ij{L}_{j}}\right)\times \prod_{l'=1}^{{L}_{j}-1}(\sum_{i=1}^{m}{p}_{ijl'}.{x}_{ijl'}).(1-{p}_{ij{L}_{j}}).{x}_{ij{L}_{j}}\end{aligned}$$
(16)

Constraint (4) guarantees that only one machine processes each operation of each job. Constraint (5) calculates the completion time of each operation of each job. Constraint (6) limits the start time of each operation of each job to be greater than the sum of the completion time of the previous operation of the same job and the setup time required to switch between the previous and current operations on the same machine. Constraint (7) assures that a successor operation is allowed to start only when its predecessor is completed. Constraints (6) and (7) thus enforce, respectively, the precedence among operations of the same job and the priority among operations of different jobs. Constraint (8) limits the start time of the first operation of each job to be greater than the sum of the job release time and the setup time required to switch from the previous operation to the current operation on the same machine. Constraints (9) and (10) state the predecessor/successor restrictions between two consecutive operations. Constraint (11) indicates that a valid sequence is one whose consecutive operations are processed on the same machine. Constraint (12) enforces the machine eligibility restrictions. Constraint (13) assigns a dummy job to each machine. Constraint (14) indicates that dummy jobs must start at time zero on each machine; i.e., dummy jobs are the first assigned job on each machine. Finally, constraint (15) defines the binary decision variables.

4 Solution Representation Description

In this study, the solution representation method proposed by Ramezanian and Saidi-Mehrabad [13] is applied to represent a feasible solution. The schematic structure of the representation contains two types of elements: the first row contains the operations and the second row their assigned machines, so the solution is entirely represented as a vector. All work and rework processes of a job share the same symbol in any given solution vector, and they are interpreted by their order of occurrence in the operation sequence. Therefore, \({L}_{j}\) is defined as the exact number of appearances of the \(j\)th job in the operation sequence, in which the first appearance is identified as the main process and the next ones are recognized as rework processes.

An example is described in the following. An unrelated parallel machine scheduling problem with two machines, three jobs, and two dummy jobs is considered. Three operations are required per job. For each job, the first operation is referred to as the main process and the next ones as rework processes, which are probabilistic. Therefore, to compute the completion time, an expected value approach is applied. It should be noted that the base sequence is generated in accordance with the minimization of the expected makespan. However, if some jobs successfully pass the test center, the related rework processes are deleted and the operation sequence is updated accordingly. The real makespan can be determined only when all jobs have passed the test center. Below are the basic operation sequences for the described instance, in which all main and rework processes appearing in the solution vector are depicted in Fig. 2. They can be converted to a list of the order of operations (\({O}_{ijl}\)) as follows:

Fig. 2
figure 2

The solution representation method

Machine 1: \({O}_{111}\to {O}_{131}\to {O}_{141}\to {O}_{142}\to {O}_{143}\to {O}_{132}\to {O}_{133}\)

Machine 2: \({O}_{221}\to {O}_{251}\to {O}_{252}\to {O}_{253}\)
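These per-machine sequences can be recovered from the two-row vector with a short decoding routine. The sketch below is a minimal illustration, assuming a hypothetical solution vector consistent with Fig. 2 (jobs 1 and 2 are the dummy jobs); the occurrence count of a job determines whether an entry is the main process (\(l=1\)) or a rework process (\(l>1\)):

```python
# Decode a two-row solution vector (operation row, machine row) into
# per-machine ordered lists of operations O_ijl, where l is the number
# of times job j has appeared so far (1 = main process, >1 = rework).
def decode(job_row, machine_row, m):
    occurrence = {}                              # appearances of each job so far
    schedule = {i: [] for i in range(1, m + 1)}
    for j, i in zip(job_row, machine_row):
        occurrence[j] = occurrence.get(j, 0) + 1
        schedule[i].append((i, j, occurrence[j]))   # the triple (i, j, l)
    return schedule

# A hypothetical vector consistent with the example (jobs 1 and 2 are dummies).
job_row     = [1, 2, 3, 5, 4, 5, 4, 5, 4, 3, 3]
machine_row = [1, 2, 1, 2, 1, 2, 1, 2, 1, 1, 1]
schedule = decode(job_row, machine_row, m=2)
# schedule[1] lists O_111, O_131, O_141, O_142, O_143, O_132, O_133 in order.
```

Any interleaving of the two rows that preserves the per-machine order decodes to the same machine sequences.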

5 Experiments and Results

In order to explain the scheduling problem, a two-machine example including three jobs and three operations (one main and two rework activities) is solved using the data given in Tables 1, 2, and 3 and illustrated in Figs. 3 and 4. The data described in [15] are used. For each problem, job processing times are generated from the uniform distribution on [10, 100]. In the same way, release times are drawn uniformly from [5, 20]. The probability of a job being defective is a random real number between 0.2 and 0.4, and the decreasing coefficients of rework processing times are between 0.25 and 0.35 [11, 13].

Table 1 Data generated for operation 1
Table 2 Data generated for operation 2
Table 3 Data generated for operation 3
Fig. 3
figure 3

Depiction of \(m=2\), \(n=3\) and \({L}_{j}=3\) when the unforced idleness is not allowed

Fig. 4
figure 4

Depiction of \(m=2\), \(n=3\) and \({L}_{j}=3\) when the unforced idleness is allowed

The first operations loaded on machines 1 and 2 are the dummy jobs \({j}_{1}\) and \({j}_{2}\), respectively. The setup, processing, and release times of dummy jobs are assumed to be zero. In fact, the only reason to consider dummy jobs is to determine the first operation in each machine’s sequence, so they have no real impact on the objective function. The sequences of operations on the available machines are illustrated schematically in Figs. 3 and 4, in which the number of dummy jobs equals the number of machines. To interpret the figures, an arbitrarily chosen operation sequence is described in the following. For instance, the first operation assigned to machine 1 is the dummy job \({j}_{1}\), which precedes the first real job \({j}_{3}\). The release time of \({j}_{3}\) is 7; therefore, the processing of \({j}_{3}\) cannot start earlier than this time, and the setup action preparing \({j}_{3}\) starts at time 7. The setup time between successive operations of the same job is assumed to be zero.

Figures 3 and 4 show the same problem in two different structures: (1) when unforced idleness is not allowed and (2) when unforced idleness is allowed. Since the objective function is to minimize the makespan \({(C}_{\mathrm{max}})\) (Eq. 1) and the makespan is greater than or equal to the maximum expected completion time \((\mathrm{max}\,E\left({C}_{j}\right), \forall j)\) (Eq. 2), the completion times other than the largest one can be delayed as long as they do not exceed \({C}_{\mathrm{max}}\), so the objective value remains unchanged. This analysis leads to a better understanding of the objective function by measuring the effect of unforced idleness as an input factor on the objective function as the output. In the first condition, machines work continuously and unforced idleness is not allowed. The expected completion times are as follows:

$$E\left({C}_{3}\right)=41.96\;;\;E\left({C}_{4}\right)=65.74\;;\; E\left({C}_{5}\right)=51.61$$

Consider job \(3\) as an example to clarify how these values are determined. Since \({x}_{131},\;{x}_{132},\;{x}_{133}=1\), Eq. (3) which is used to calculate the expected completion times becomes \({C}_{31}\times \left(1-{p}_{131}\right)+ {C}_{32}\times {p}_{131}\times \left(1-{p}_{132}\right)+{C}_{33}\times {p}_{131}\times {p}_{132}\times \left(1-{p}_{133}\right)\). By inserting the required values from Tables 1, 2, and 3 and Fig. 3 into this equation, \(E\left({C}_{3}\right)\) is determined as \(30\times \left(1-0.23\right)+81.8\times 0.23\times \left(1-0.43\right)+82.3\times 0.23\times 0.43\times (1-0)=41.96\). The same goes for \(E\left({C}_{4}\right)\) and \(E\left({C}_{5}\right)\). According to the expected completion times obtained from the first condition, it is clear that \({j}_{4}\) has the greatest expected value among jobs. Hence, the operations of \({j}_{3}\) and \({j}_{5}\) can be moved along the scheduling horizon by adding the unforced idleness while the expected completion times of \({j}_{3}\) and \({j}_{5}\) do not exceed the expected value of \({j}_{4} (E\left({C}_{3}\right),E\left({C}_{5}\right)\le E\left({C}_{4}\right))\). Thus, in the second condition, it is assumed that unforced idleness is allowed and the expected completion times are as follows:

$$E\left({C}_{3}\right)=E\left({C}_{4}\right)=E\left({C}_{5}\right)=65.74$$
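The worked calculation of \(E\left({C}_{3}\right)\) above can be reproduced with a few lines of code. This is a sketch of Eq. (3) for the special case where all operations of a job are assigned to a single machine, so every \(x_{ijl}\) term equals 1:

```python
# Expected completion time of a job (Eq. (3)), assuming all operations of
# the job run on one machine so that the x_ijl terms are all equal to 1.
def expected_completion(C, p):
    """C[l-1], p[l-1]: completion time and defect probability of operation l."""
    total, still_defective = 0.0, 1.0   # product of defect probs of prior ops
    for c, prob in zip(C, p):
        total += c * still_defective * (1 - prob)
        still_defective *= prob
    return total

# Job 3 of the example: C_31 = 30, C_32 = 81.8, C_33 = 82.3 and
# p_131 = 0.23, p_132 = 0.43, p_133 = 0.
print(round(expected_completion([30, 81.8, 82.3], [0.23, 0.43, 0.0]), 2))  # → 41.96
```

The same loop, evaluated with the corresponding completion times, yields \(E\left({C}_{4}\right)\) and \(E\left({C}_{5}\right)\).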

6 Sensitivity Analysis

Understanding how a model responds to input changes is of fundamental importance to ensure the correct use of the model. Sensitivity analysis (SA) is the study of how the uncertainty in the output of a mathematical model can be attributed to different sources of uncertainty in its inputs [22]; i.e., it is a method of systematically changing the variables of a model to determine the impact of such variations. In an overall view, SA addresses questions about how the model output responds when its input parameters change.

The system under study involves several parameters, and its complexity is attributable to the wide range of variability of those parameters. However, tracking the impacts of all the parameters along with their interactions is a hard task. Therefore, this paper focuses on the parameters identified as the most appropriate for sensitivity analysis. The outcome of the sensitivity analysis is shown in Table 4. In this data set, six different scenarios are assumed. In scenarios 1 to 6, the effects on the objective function value of an increase in rework processes, jobs, machines, machine eligibility restrictions, setup times, and decreasing coefficients are investigated, respectively. To reach exact optimal solutions for small-size problems, the exact solver LINGO was utilized, while large-size problems were solved with the genetic algorithm of [15].
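Procedurally, each scenario is a one-at-a-time (OAT) experiment: perturb a single input, re-solve, and record the change in the objective value. A minimal sketch of this loop follows; the `solve` stand-in is purely hypothetical and only mimics a makespan that grows with the number of jobs and shrinks with the number of machines:

```python
# One-at-a-time (OAT) sensitivity sweep: each scenario perturbs a single
# input, the model is re-solved, and the relative change in the objective
# value is recorded against the base case.
def oat_sensitivity(base_params, scenarios, solve):
    base = solve(base_params)
    return {name: (solve(perturb(dict(base_params))) - base) / base
            for name, perturb in scenarios.items()}

# Hypothetical stand-in for the LINGO/GA solver of the actual model.
solve = lambda p: p["n_jobs"] * p["mean_pt"] / p["n_machines"]

scenarios = {
    "more jobs":     lambda p: {**p, "n_jobs": p["n_jobs"] + 1},
    "more machines": lambda p: {**p, "n_machines": p["n_machines"] + 1},
}
result = oat_sensitivity({"n_jobs": 3, "mean_pt": 50, "n_machines": 2},
                         scenarios, solve)
# result maps each scenario to its relative makespan change.
```

Replacing the toy `solve` with a call to the actual optimization model turns this sketch into the scenario procedure of Table 4.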

Table 4 Sensitivity analysis results

6.1 Analysis of Scenario (1) Results

Increasing the number of rework processes can increase the objective function value, but the increase is not tangible, because the processing time of each rework operation is only a small fraction of that of the prior operation of the same job; the more rework processes there are, the shorter their processing times become. Therefore, increasing the number of rework processes has a negligible effect on the objective function value, as Fig. 5 clearly shows. However, when the size of the problem grows, the impact of increasing the number of rework processes on the objective value becomes significant, as Fig. 6 illustrates.
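This geometric shrinkage follows directly from the recursive definition \(pt_{ijl}=pt_{ij(l-1)}\alpha_{ijl}\). The short sketch below uses illustrative values (a constant \(\alpha=0.3\), in the middle of the sampled range 0.25 to 0.35) to show how little each additional rework contributes:

```python
# Rework processing times shrink geometrically: pt_l = pt_{l-1} * alpha.
def rework_times(t, alpha, L):
    """Processing times of the main process and L-1 rework processes."""
    times = [t]
    for _ in range(L - 1):
        times.append(times[-1] * alpha)
    return times

# Illustrative values: main processing time 60, alpha = 0.3, L_j = 4.
pts = rework_times(t=60.0, alpha=0.3, L=4)
# The fourth operation adds only about 1.6 time units versus 60 for the first.
```

With four operations, the total rework contribution is under half of one main processing time, which is why the makespan barely moves in small instances.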

Fig. 5
figure 5

The effect of increasing the number of rework processes on the objective value

Fig. 6
figure 6

The effect of increasing the number of rework processes on the objective value as the size of problem grows

6.2 Analysis of Scenario (2) Results

Increasing the number of jobs has a significant impact on the objective function value, because in this case, at least one job with one or more rework processes is allocated to the existing machines. A two-machine example with four jobs and two operations each (one main and one rework process) is illustrated in Fig. 7. The expected completion times are as follows:

Fig. 7
figure 7

Obtained result for \(m=2\), \(n=4\) and \({L}_{j}=2\)

$$E\left({C}_{3}\right)=E\left({C}_{4}\right)=E\left({C}_{5}\right)=76.51$$
$$E\left({C}_{6}\right)=72.18$$

Each job has a long processing time, while its rework processing time is only a small percentage of that of the previous operation of the job. The results of the cases of scenario (2) show that if the number of jobs increases, the objective value deteriorates drastically. This sharp increase in the objective value is shown in Fig. 8. However, when the size of the problem grows, the impact of increasing the number of jobs on the objective value becomes less intense, as Fig. 9 illustrates.

Fig. 8
figure 8

The impact of increasing the number of jobs on the objective value

Fig. 9
figure 9

The impact of increasing the number of jobs on the objective value as the size of problem grows

6.3 Analysis of Scenario (3) Results

In this scenario, the effect of increasing the number of machines on the objective value is examined. In order to clarify the results, a three-machine example including three jobs and three operations per job is illustrated in Fig. 10. The expected completion times are as follows:

Fig. 10
figure 10

Obtained result for \(m=3\), \(n=3\) and \({L}_{j}=3\)

$$E\left({C}_{4}\right)=39.56;\;E\left({C}_{5}\right)=49.66;\;E\left({C}_{6}\right)=51.61$$

As the results indicate, increasing the number of machines decreases the objective value significantly. The reason is that operations can be allocated among more machines with a better assignment, the operation sequence on each machine is shortened, and consequently the objective function value improves. It should be noted that increasing the number of machines improves the objective value only up to a specific threshold; adding machines beyond this threshold has no effect on the objective value. This analysis is shown graphically in Fig. 11.

Fig. 11
figure 11

The impact of increasing the number of machines on the objective value

6.4 Analysis of Scenario (4) Results

The majority of scheduling studies address the problems where all jobs are allowed to be processed on all machines with no restriction. This assumption is considered in case 1 where there is no machine eligibility restriction for jobs. In most cases, this assumption conflicts with the real-world environments. Hence, the machine eligibility restrictions are imposed in cases 2 and 3. Figure 12 illustrates case 2 of scenario (4) where the main process and rework activities for each job have to be processed on the same machine. The processing sets of operations are as follows:

Fig. 12
figure 12

Obtained result for \(m=2\), \(n=3\) and \({L}_{j}=3\) with machine eligibility restrictions

$${M}^{31},{M}^{32},{M}^{33},{M}^{51},{M}^{52},{M}^{53}=\left\{{M}_{1}\right\}$$
$${M}^{41},{M}^{42},{M}^{43}=\left\{{M}_{2}\right\}$$

Figure 13 illustrates case 3 of scenario (4) where the machine eligibility restrictions for all operations are imposed randomly. The processing sets of operations are as follows:

Fig. 13

Obtained result for \(m=2\), \(n=3\) and \({L}_{j}=3\) with machine eligibility restrictions

$${M}^{31},{M}^{33},{M}^{41},{M}^{43},{M}^{51},{M}^{52}=\left\{{M}_{1},{M}_{2}\right\}$$
$${M}^{32}=\left\{{M}_{1}\right\};\;{M}^{42}=\left\{{M}_{2}\right\};\;{M}^{53}=\left\{{M}_{1}\right\}$$

Clearly, in both cases the objective function value worsens. The results of the scenario (4) cases show that imposing machine eligibility restrictions cannot improve the objective value; in most cases, it clearly deteriorates it.
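Eligibility restrictions shrink the set of machines an operation may be assigned to, which is what removes flexibility from the schedule. A small sketch of this filtering, with hypothetical names mirroring case 2 of scenario (4):

```python
# Hypothetical eligibility sets in the spirit of case 2 of scenario (4):
# every operation of jobs 3 and 5 is restricted to machine M1, and every
# operation of job 4 to machine M2. Keys are (job, operation-level) pairs.
eligible = {
    ("j3", 1): {"M1"}, ("j3", 2): {"M1"}, ("j3", 3): {"M1"},
    ("j4", 1): {"M2"}, ("j4", 2): {"M2"}, ("j4", 3): {"M2"},
    ("j5", 1): {"M1"}, ("j5", 2): {"M1"}, ("j5", 3): {"M1"},
}

def feasible_machines(op, machines=("M1", "M2")):
    """Machines allowed for an operation; no entry means no restriction."""
    allowed = eligible.get(op, set(machines))
    return [m for m in machines if m in allowed]

print(feasible_machines(("j3", 1)))  # restricted → ['M1']
print(feasible_machines(("j9", 1)))  # unrestricted → ['M1', 'M2']
```

Any assignment decision then draws only from `feasible_machines`, which is why a restricted instance can never beat its unrestricted counterpart.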

6.5 Analysis of Scenario (5) Results

This section investigates the effect of increasing the setup times on the objective function value and the operation sequences. Note that setup times depend on both the operation sequence and the machine. Figure 14 illustrates case 2 of scenario (5), in which the setup time required to prepare j3 for processing on machine 1 is increased by 10 time units. As Fig. 14 shows, the operation sequence on machine 1 changes and the objective function value deteriorates. In case 3, adding 4 time units to all setup times increases the objective function value by more than in case 2, but the operation sequence on both machines is unchanged. The results indicate that increasing the setup times between two consecutive operations can worsen the objective value; under certain circumstances, the operation sequence can also change.
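How sequence- and machine-dependent setups enter the schedule length can be sketched as follows; the setup table and processing times are hypothetical, and `"-"` marks the initial empty state of a machine:

```python
# setup[m][(a, b)] is the time machine m needs to switch from operation a
# to operation b; all numbers here are illustrative only.
setup = {
    "M1": {("-", "o1"): 2, ("o1", "o2"): 3, ("o2", "o3"): 4},
}
proc = {"o1": 5, "o2": 6, "o3": 7}  # hypothetical processing times

def machine_completion(machine, sequence):
    """Completion time of the last operation in `sequence` on `machine`,
    accumulating a sequence-dependent setup before each operation."""
    t, prev = 0, "-"
    for op in sequence:
        t += setup[machine][(prev, op)] + proc[op]
        prev = op
    return t

print(machine_completion("M1", ["o1", "o2", "o3"]))  # 2+5+3+6+4+7 = 27
```

Enlarging any `setup` entry on the chosen sequence lengthens the completion time directly, which is why a solver may resequence operations to avoid the costly transition.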

Fig. 14

Obtained result for \(m=2\), \(n=3\) and \({L}_{j}=3\) in case 2 of scenario (5)

6.6 Analysis of Scenario (6) Results

The processing time of each rework activity is a fraction of the processing time of the preceding operation of the same job; this fraction is a decreasing coefficient denoted \({\alpha }_{ijl}\) (\(0\le {\alpha }_{ijl}\le 1\)) in this study. The cases of scenario (6) show the effect of increasing these coefficients on the objective function value and the operation sequences. In case 2, the decreasing coefficient of the second operation of \(j_{4}\) on machine 2 is increased by 0.15; in case 3, the corresponding coefficients of all operations on all machines are increased by 0.15. The schematic structures of cases 2 and 3 are illustrated in Figs. 15 and 16, respectively. As these figures show, increasing the coefficients increases the objective function value, as expected, and changes the operation sequences on the machines. Thus, increasing the decreasing coefficients can worsen the objective function value and alter the operation sequences on the machines.

Fig. 15

Obtained result for \(m=2\), \(n=3\) and \({L}_{j}=3\) in case 2 of scenario (6)

Fig. 16

Obtained result for \(m=2\), \(n=3\) and \({L}_{j}=3\) in case 3 of scenario (6)
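The chaining of decreasing coefficients can be sketched as below: each rework level is scaled by its own coefficient applied to the previous operation's time. The function name and all numbers are illustrative, not from the paper's instances:

```python
# Processing time of rework level l = alpha_l times the processing time of
# the preceding operation of the same job (0 <= alpha_l <= 1).
def rework_times(p_main, alphas):
    """Processing times of successive rework operations after a main
    operation of length p_main, each scaled by its own coefficient."""
    times, p = [], p_main
    for a in alphas:
        p = a * p
        times.append(p)
    return times

print(rework_times(10.0, [0.5, 0.4]))    # → [5.0, 2.0]
print(rework_times(10.0, [0.65, 0.55]))  # larger coefficients → longer reworks
```

Raising any coefficient lengthens that rework and every rework after it, which is the mechanism behind the deterioration observed in cases 2 and 3.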

As mentioned previously, tracking the interactions of all the parameters analyzed in this study is a hard task. For this reason, only the interactions between the number of rework processes, jobs, and machines are considered here.

6.7 Analysis of Interaction Between the Number of Rework Process and Job

Consider a two-machine example with three jobs and two operations per job, one main and one rework operation (\(m=2\), \(n=3\), \({L}_{j}=2\)). Adding one rework process to all jobs (\(m=2\), \(n=3\), \({L}_{j}=3\)) has no tangible effect on the objective function value. On the other hand, adding one job to the same problem (\(m=2\), \(n=4\), \({L}_{j}=2\)) can worsen the objective value considerably. The results show that increasing the number of jobs has a greater negative impact on the objective function value than increasing the number of rework processes.

6.8 Analysis of Interaction Between the Number of Rework Process and Machine

Note that the effects of increasing the number of rework processes and increasing the number of machines on the objective value act in opposite directions: more rework processes can deteriorate the objective value, whereas more machines can improve it. As stated in the previous section, the effect of increasing the number of rework processes on the objective value is negligible. In contrast, increasing the number of machines has a clear impact: adding one machine to the mentioned problem (\(m=3\), \(n=3\), \({L}_{j}=3\)) improves the objective value significantly. As a result, the model is more sensitive to the number of machines than to the number of rework processes.

6.9 Analysis of Interaction Between the Number of Job and Machine

In this section, the interaction between the number of jobs and machines is discussed. Consider a two-machine example with three jobs and three operations per job, one main and two rework operations (\(m=2\), \(n=3\), \({L}_{j}=3\)). By adding one job to this problem (\(m=2\), \(n=4\), \({L}_{j}=3\)), the objective function value deteriorates. To assess the impact of increasing the number of machines on the new problem, one machine is added (\(m=3\), \(n=4\), \({L}_{j}=3\)); the objective value then improves significantly. The positive impact of adding machines on the objective value thus appears greater than the negative effect of adding jobs. For a better understanding of this interaction, consider the computational results of cases 1 to 4 of scenario (2) and cases 9 to 12 of scenario (3). They indicate that the effect of increasing the number of machines on the objective value is more tangible than that of increasing the number of jobs. The results are shown in Fig. 17.

Fig. 17

The interaction between the number of jobs and machines

6.10 Practical Insights

Unrelated parallel machines with sequence- and machine-dependent setup times are widely used in production environments that process different components, e.g., injection molding departments or LCD manufacturing processes [21]. In such industries, a series of jobs must be completed, and the output is the final product delivered to the customer. Since the completion time of the last job determines the product delivery time, \({C}_{\mathrm{max}}\) is used in such cases. Sensitivity analysis supports the manufacturer's decision-making by evaluating the impact of input parameters on the objective function value. For example, according to the obtained results, if the intention is to keep the objective function constant while increasing the number of jobs, this can be achieved by increasing the number of machines at a ratio lower than the rate of increase in the number of jobs. Achieving the same goal by reducing the number of reworks instead requires extra effort, because increasing the number of jobs has a more negative effect on the objective function value than rework processes do. In another scenario, if the goal is to reduce \({C}_{\mathrm{max}}\) while increasing the number of jobs, this is attainable by combining an increase in the number of machines with a reduction in the number of reworks at appropriate rates.

7 Conclusions and Future Work

In this paper, an unrelated parallel machine scheduling problem was considered in which defective items may occur and machine eligibility restrictions and release times are assumed. A rework process is applied to improve imperfect items to an acceptable level of quality. Setup actions occur between successive operations, and setup times depend on both the operation sequence and the machine. To evaluate the presented model, the expected makespan was employed as the performance criterion. To understand the impact of parameter variation on the model output, sensitivity analysis was performed for the parameters identified as most relevant. Numerical experiments were conducted under six scenarios, each investigating the effect of changes in one of six essential parameters on the objective function value: the number of rework processes, the number of jobs, the number of machines, machine eligibility restrictions, setup times, and decreasing coefficients. The results of each scenario were discussed, and the interactions among three of these parameters, namely the number of rework processes, jobs, and machines, were also tracked.

For future research, assumptions such as precedence constraints, machine breakdowns, and transportation constraints can be incorporated to enrich the proposed model. Furthermore, instead of estimating the decreasing coefficients and defective probabilities of each operation from historical data, as was done in this study, a probability distribution can be employed.