
1 Introduction

In the twenty-first century, the scheduling research area has made extraordinary advances in the development of techniques that enable improved solutions to practical problems [1, 2]. Notwithstanding the strengths of current techniques, the problems addressed by current scheduling methods are generally NP-hard and solved only approximately [3]; there is scope for improvement in techniques for accommodating different classes of constraints and for optimizing under different sets of objective criteria [4-7].

The running time, or time complexity, of an algorithm expresses the total number of elementary operations, such as additions, multiplications, and comparisons, required for each possible problem instance as a function of the size of that instance. The input size of a typical scheduling problem is bounded by the number of jobs n, the number of machines m, and the number of bits needed to represent the largest integer (processing time, tardiness, due date, etc.). An algorithm is said to be polynomial, or a polynomial-time algorithm, if its running time is bounded by a polynomial in the input size [3-7]. Most real-world problems are difficult to solve to optimality [3, 7-9]. The notion of polynomial-time algorithms (PTA) was introduced by Cobham in 1964 for deterministic machine models, and Edmonds argued in 1965 that polynomial time captures efficient computation. An algorithm with rational input is said to run in polynomial time if there is an integer k such that it runs in \( O(n^{k}) \) time, where n is the input size, and all numbers in intermediate computations can be stored with \( O(n^{k}) \) bits. When k = 1, we call it a linear-time algorithm. PTAs are commonly called "efficient" or "good". Big O notation is used to classify algorithms by how their processing-time requirements respond to changes in the input size, and it is the standard tool for analyzing the efficiency of algorithms. The exact number of steps depends on the particulars of the machine model on which the algorithm runs, but different machine models typically differ by only a constant factor in the number of steps needed to execute an algorithm.

In parallel machine scheduling (PMS), where time and space are the criteria of complexity analysis, it is important to study both deterministic and non-deterministic problems [18]. Although traditional techniques such as complete enumeration, dynamic programming, integer programming, and branch and bound have been used to find optimal solutions for small- and medium-sized problems, they do not provide efficient solutions for large problems [10, 11] (Table 1).
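To make these growth rates concrete, the short sketch below (illustrative values only, not drawn from the cited references) contrasts a polynomial operation count \( O(n^{k}) \) with the exponential count \( m^{n} \) that a complete enumeration of job-to-machine assignments would incur.

```python
# A minimal, illustrative comparison of polynomial versus exponential growth
# for scheduling instances with n jobs and m machines (values are hypothetical).

def polynomial_steps(n: int, k: int = 2) -> int:
    """Upper bound on elementary operations for an O(n^k) algorithm."""
    return n ** k

def enumeration_steps(n: int, m: int = 3) -> int:
    """Number of job-to-machine assignments a complete enumeration inspects."""
    return m ** n

for n in (5, 10, 20, 30):
    print(f"n={n:2d}  O(n^2)={polynomial_steps(n):6d}  m^n={enumeration_steps(n):16d}")
```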

Table 1 The time complexity of different types of problem seen in the literature

2 Notation and Classification

Scheduling problems are described using the α|β|γ notation of Graham et al. [1, 2, 4, 8], where α is the machine environment, β is the set of restrictions, and γ is the objective function. For example, α = 1 denotes a single machine, while α = P denotes a parallel machine environment; for γ, \( \sum C_{j} \) is the total completion time objective. Parallel machines (PM) means that more than one machine performs the same function. Table 2 gives the general notation/parameters considered in any scheduling problem. The PM can be:

Table 2 Notation/parameters for scheduling
  • Identical: all machines have the same speed factors, and they can process all the jobs.

  • Uniform: a parallel machine system in which machines have different speed factors, and each job has a single operation.

  • Unrelated: there is no uniform relation between the machines; the processing time of a job depends on both the job and the machine to which it is assigned.

In a parallel machine environment, consider a simple case, \( Pm|r_{j}, M_{j}|\sum w_{j} T_{j} \), which denotes a system with m machines in parallel. Job j arrives at its release date rj and should be completed by its due date dj. Job j may be processed only on one of the machines belonging to the subset Mj. If job j is not completed in time, a penalty wjTj is incurred.
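As a small illustration of this notation, the sketch below (with hypothetical data, not taken from the cited references) evaluates the objective \( \sum w_{j} T_{j} \) for a fixed assignment of jobs to machines, respecting the release dates and the eligible machine sets.

```python
# A minimal sketch (hypothetical data) of evaluating the objective sum of w_j*T_j
# for a fixed schedule in a Pm | r_j, M_j | sum w_j T_j setting: each machine
# processes its assigned jobs in the given order, no job starts before its
# release date r_j, and tardiness is T_j = max(0, C_j - d_j).

jobs = {  # job: (p_j, r_j, d_j, w_j, eligible machines M_j)
    1: (4, 0, 6, 2, {0, 1}),
    2: (3, 2, 6, 1, {0}),
    3: (5, 0, 9, 3, {1}),
}
schedule = {0: [1, 2], 1: [3]}   # machine -> ordered list of assigned jobs

total_weighted_tardiness = 0
for machine, sequence in schedule.items():
    t = 0                                  # time at which the machine becomes free
    for j in sequence:
        p, r, d, w, eligible = jobs[j]
        assert machine in eligible         # respect the machine-eligibility set M_j
        start = max(t, r)                  # wait for the release date if necessary
        t = start + p                      # completion time C_j
        total_weighted_tardiness += w * max(0, t - d)

print("sum of w_j * T_j =", total_weighted_tardiness)   # 1 for this data
```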

A complexity hierarchy, ordered by the nature of the problems, may be as follows:

  1. 1||Cmax,

  2. P2||Cmax,

  3. F2||Cmax,

  4. Jm||Cmax,

  5. FFc||Cmax,

  6. 1||Lmax,

  7. 1|prmp|Lmax,

  8. 1|rj|Lmax,

  9. 1|rj, prmp|Lmax,

  10. Pm||Lmax.

One standard approach for designing polynomial-time approximation algorithms for a (difficult, NP-hard) optimization problem P is as follows (a small illustrative sketch is given after the list):

  (a) Relax some of the constraints of the hard problem P to obtain an easier problem P′ (the so-called relaxation).

  (b) Work out (in polynomial time) an optimal solution S′ for this easier relaxed problem P′.

  (c) Translate (in polynomial time) the solution S′ into an approximate solution S for the original problem P.

  (d) Analyze the quality of solution S for P by comparing its cost to the cost of solution S′ for P′.
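As an illustration of steps (a)-(d), the sketch below (an example of ours, not taken from the cited references) applies the recipe to \( P||C_{\max} \): the relaxation allows preemption, whose optimal value \( \max(\sum_{j} p_{j}/m, \max_{j} p_{j}) \) serves as the relaxed optimum, the translation step uses greedy list scheduling, and the analysis compares the two values.

```python
import heapq

def relaxed_optimum(p, m):
    # steps (a)+(b): relax by allowing preemption; the relaxed optimum is
    # max(sum(p)/m, max(p)), which is also a lower bound for the original problem
    return max(sum(p) / m, max(p))

def list_schedule(p, m):
    # step (c): translate into a non-preemptive schedule by always placing the
    # next job on the currently least-loaded machine (greedy list scheduling)
    heap = [(0.0, i) for i in range(m)]
    for pj in p:
        load, i = heapq.heappop(heap)
        heapq.heappush(heap, (load + pj, i))
    return max(load for load, _ in heap)

p, m = [7, 7, 6, 5, 4, 4, 3], 3                 # hypothetical instance
lb = relaxed_optimum(p, m)
cmax = list_schedule(p, m)
print(f"relaxed optimum {lb:.1f}, heuristic makespan {cmax:.1f}, ratio {cmax / lb:.3f}")  # step (d)
```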

3 Scheduling Algorithms

Scheduling theory is concerned with the optimal allocation of scarce resources to activities over a time horizon [4-6]. The practice of this field dates back to the first time two humans contended for a shared resource and developed a plan to share it without bloodshed. An algorithm may be defined as a succession of operations producing a solution to a problem through data manipulation. These data can be constants, variables, or both, and they can be arranged into data structures. Algorithms can be viewed as being of two types: precise (exact) and approximate. Precise analysis is quite tedious and at times unattainable to perform.

Thus, scheduling algorithms arise [10, 11]. They can be classified as follows:

  1. Basic

    (a) as soon as possible

    (b) as late as possible

  2. Time constrained

    (a) force directed

    (b) integer linear programming

    (c) iterative refinement

  3. Resource constrained

    (a) list based

    (b) static lists

  4. Miscellaneous

    (a) simulated annealing (SA)

    (b) path based

Fig. 2 gives details of the types of problems in a more elaborate way. Heuristics can further be classified into three types [12]:

  • Index development, based on dispatching rules, etc.

  • Solution construction, such as NEH.

  • Solution improvement (metaheuristics such as tabu search, SA, etc.).

Unfortunately, a simple, accurate, and time-invariant cost model for parallel machines does not exist. The LPT, MULTIFIT, COMBINE, and LISTFIT heuristics can also be applied in PMS for solving such problems [1, 2, 13-16, 19-34] (Fig. 1); a sketch of the LPT rule is given after Fig. 1.

Fig. 1

The classification of scheduling algorithms
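A minimal sketch of the LPT rule mentioned above (hypothetical data; MULTIFIT and LISTFIT refine this idea but are not shown):

```python
# LPT rule for P||Cmax: sort jobs by non-increasing processing time and
# repeatedly assign the next job to the machine that is currently least loaded.
import heapq

def lpt_makespan(processing_times, m):
    heap = [(0, i) for i in range(m)]            # (current load, machine index)
    assignment = {i: [] for i in range(m)}
    for p in sorted(processing_times, reverse=True):
        load, i = heapq.heappop(heap)
        assignment[i].append(p)
        heapq.heappush(heap, (load + p, i))
    return max(load for load, _ in heap), assignment

cmax, assignment = lpt_makespan([3, 3, 2, 2, 2], m=2)
print("LPT makespan:", cmax)        # 7 on this instance; the optimum is 6
print("assignment:", assignment)
```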

An exact solution can be found by diverse methods of reduced enumeration, typically by a branch-and-bound algorithm. It is doubtful that an exact solution can be found by a polynomial-time algorithm. An algorithm is called an approximation algorithm if it is possible to establish analytically how close the generated solution is to the optimum (either in the worst case or on average). The performance of a heuristic algorithm is usually analyzed experimentally, through a number of runs on either randomly generated instances or known benchmark instances. We define a \( \rho \)-approximation algorithm to be an algorithm that runs in polynomial time and delivers a solution of value at most \( \rho \) times the optimum for any instance of the problem, i.e., \( \frac{F(S_{A})}{F(S_{OPT})} \le \rho \), where OPT stands for the optimum value. The value of \( \rho \) is called a worst-case ratio bound (Fig. 2).
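For illustration (a standard worst-case instance for LPT, not taken from the surveyed papers): on the two-machine instance with processing times 3, 3, 2, 2, 2 used in the sketch above, LPT yields makespan 7 while the optimum is 6, so \( F(S_{A})/F(S_{OPT}) = 7/6 \); this matches the known worst-case bound \( \rho = 4/3 - 1/(3m) = 7/6 \) for LPT with m = 2.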

Fig. 2

Different types of problems as observed in scheduling

In Fig. 3, P denotes the class of problems solvable in polynomial time, NP is the class of problems solvable in non-deterministic polynomial time, and NP-hard problems are at least as hard as every problem in NP. NP-complete problems are the hardest problems in NP, and P is a subset of NP.

Fig. 3

A typical view of complexity classes and their relationships

Since PMS problems are regarded as hard optimization problems, finding the optimal solution remains too hard for the following reasons:

  • Even with the best programming language available.

  • Even with the fastest modern computer available.

  • Even with the best programmer in the world.

  • Even with the best and latest operating system.

  • Even many years into the future.

The time complexity of an algorithm is the number of steps performed by the algorithm. For instance, the complete-enumeration algorithm for \( P||C_{\max} \) has time complexity \( O(m^{n}) \), since it evaluates all \( m^{n} \) possible assignments of jobs to machines. In any identical parallel machine scheduling problem, denoted \( P||C_{\max} \), the following elements are specified (an enumeration sketch is given after the problem statement below).

Given data/information: for each job, its duration (processing time).

Constraint: all jobs must be performed.

Decision: assign jobs to machines.

Objective: end the last job as early as possible (minimize the makespan).
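A minimal sketch of this complete-enumeration approach (hypothetical instance; a practical implementation would add pruning, e.g., branch and bound) is shown below.

```python
# Brute-force enumeration for P||Cmax: every one of the m**n job-to-machine
# assignments is evaluated, which is exactly why the running time is O(m**n).
from itertools import product

def enumerate_optimum(processing_times, m):
    n = len(processing_times)
    best = float("inf")
    for assignment in product(range(m), repeat=n):    # all m**n assignments
        loads = [0] * m
        for job, machine in enumerate(assignment):
            loads[machine] += processing_times[job]
        best = min(best, max(loads))                   # makespan of this assignment
    return best

print(enumerate_optimum([3, 3, 2, 2, 2], m=2))         # optimal makespan: 6
```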

4 Literature Review

Classical PMS considers a set of identical machines with a number of jobs having diverse processing times [1, 8, 10-34]. It assumes that the jobs are ready at time zero and that the machines are continuously available during the whole scheduling horizon. The simplest makespan problem arises in classical PMS when jobs are sequence independent and preemption is allowed. When preemption is permitted, the processing of a job can be interrupted and the remaining processing completed later, possibly on a different machine. When preemption of the jobs is permitted on all machines, the minimum makespan is obtained by \( M = \max \left[ \sum\nolimits_{j = 1}^{n} p_{j} / m,\; \max_{j} \{ p_{j} \} \right] \), where n is the number of jobs, \( p_{j} \) is the processing time of job j, and m is the number of machines.
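The classical McNaughton wrap-around rule, which is not described in the text but is known to attain exactly this bound, can be sketched as follows (instance data is hypothetical).

```python
# McNaughton's wrap-around rule for P | prmp | Cmax: jobs are laid out
# consecutively and wrapped onto the next machine whenever the current one is
# filled up to M = max(sum(p)/m, max(p)); a job split at the boundary is
# preempted and resumed at time 0 on the next machine.
def mcnaughton(p, m):
    M = max(sum(p) / m, max(p))                  # optimal preemptive makespan
    schedule = [[] for _ in range(m)]            # per machine: (job, start, end)
    machine, t = 0, 0.0
    for j, pj in enumerate(p):
        remaining = float(pj)
        while remaining > 1e-9:
            piece = min(remaining, M - t)        # fill the current machine up to M
            schedule[machine].append((j, t, t + piece))
            t += piece
            remaining -= piece
            if t >= M - 1e-9:                    # wrap to the next machine
                machine, t = machine + 1, 0.0
    return M, schedule

M, sched = mcnaughton([4, 5, 3, 7, 2], m=2)      # hypothetical instance
print("preemptive makespan:", M)
for i, pieces in enumerate(sched):
    print("machine", i, pieces)
```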

It has been shown that the parallel machine makespan-minimization problem is NP-hard [1, 2, 6, 10], even for the two-machine scheduling problem. Moreover, the two-machine problem can be solved by a pseudo-polynomial algorithm, but solving problems with more than two machines is much harder and the problem remains NP-hard. Using heuristics to generate one or more near-optimal individuals in the initial step can improve the final solutions obtained by meta-heuristic algorithms. Different criteria can be used for evaluating the efficiency of scheduling algorithms, the most important of which are makespan and flowtime [23]. Many researchers have studied PMS problems in the past. Cheng and Sin [8] and later Mokotoff [1] surveyed PMS problems, and Allahverdi et al. [13, 31] presented a comprehensive review of setup-time research for scheduling problems, classifying it into batch, non-batch, sequence-independent, and sequence-dependent setups. Potts and Kovalyov [14] reviewed the literature on family scheduling models for single-machine, shop, and parallel machine problems. Bruno et al. [17] proved that, even for a two-machine system, minimizing the weighted sum of flow times with an unequally weighted set of jobs is NP-hard [19, 20, 27, 34]. A comparative analysis of PMS by Behera and Laha [18] indicates that LISTFIT performs better than all the other algorithms.
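To illustrate the pseudo-polynomial claim for the two-machine case, a standard subset-sum dynamic program (a sketch of the textbook construction, not taken from the cited papers) computes the optimal makespan of \( P2||C_{\max} \) in time proportional to n times the total processing time.

```python
# Pseudo-polynomial dynamic program for P2||Cmax: compute all loads reachable
# on machine 1 by a subset-sum recursion, then pick the most balanced split.
# The running time O(n * sum(p)) is polynomial in the numeric value of the
# input, not in its encoding length -- hence "pseudo-polynomial".
def p2_cmax_optimum(p):
    total = sum(p)
    reachable = {0}                          # loads achievable on machine 1
    for pj in p:
        reachable |= {load + pj for load in reachable}
    return min(max(load, total - load) for load in reachable)

print(p2_cmax_optimum([3, 3, 2, 2, 2]))      # optimal makespan: 6
```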

5 Conclusions and Future Research

From the extensive literature review presented here, it can be concluded that interest in the area of PMS is growing. More direct search methods need to be explored for their suitability to simulation optimization problems in PMS algorithms.

In this work, we have presented a comprehensive survey of PMS problems, one of the most common and thoroughly studied problem classes in the scheduling literature. The papers surveyed include exact as well as heuristic techniques for many different multi-objective approaches. In numerous papers, SA is compared with tabu search on scheduling problems and is observed to perform better than tabu search. However, SA forces the designer either to spend considerable computation time or to accept losses in solution quality on scheduling problems. Different methods, such as LPT, MULTIFIT, and LISTFIT, have been studied in the literature [1, 2, 8, 13, 18, 19, 31]. Research in PMS will continue; the area is promising, and there is scope for further improvement.