1 Introduction

The rapid growth in the number and size of computational problems is a strong motivation for designing modern parallel architectures. Computationally expensive applications make extensive use of parallel CPU architectures (multiprocessors) [4]. Task scheduling techniques play an effective role in the efficient utilization of such multiprocessor systems. Task scheduling refers to allocating N tasks onto M available processing units (CPUs) with the objective of enhancing system performance. However, the number of constraints imposed by both the applications and the hardware infrastructure makes task scheduling an NP-hard problem [2, 3]. Many other factors, such as task inter-dependency, the differing nature of tasks, uniformity/diversity in task execution times, and processor topology, make the scheduling problem even more complex. Because of such complexities, scheduling continues to be an important research area, with the possibility of applying various heuristics and meta-heuristics for a reasonably good solution [18].

Various heuristic and meta-heuristic techniques for the multiprocessor task scheduling problem are available in the literature [1, 2, 6–15, 25]. Jin et al. [2] present a comparative study of different heuristics: the Min–Min heuristic by Ibarra and Kim [6], chaining heuristics by Djordjevic and Tosic [7], A* search by Kcafil et al. [8], simulated annealing (a Monte Carlo technique) [9], tabu search by Tian [10] and Porto et al. [11], highest level first with estimated times (HLFET) by Adam et al. [12], the insertion scheduling heuristic (ISH) by Kruatrachue and Lewis [13, 14], the duplication scheduling heuristic (DSH) by Kruatrachue and Lewis [13, 14] and by Ahmad and Kwok [15], and the genetic algorithm (GA) by Hou et al. [25]. Jin et al. [2] present a performance study of a number of heuristics/meta-heuristics for homogeneous multiprocessor task scheduling, considering task precedence constraints and inter-task communication delays. They implemented the different heuristics on two well-known problems of linear algebra: LU decomposition and Gauss–Jordan elimination. Furthermore, Ahmad and Ali [1] proposed a multiprocessor task scheduling approach that uses particle swarm optimization (PSO) and evaluated its performance on the Gauss–Jordan elimination problem. They also compared its performance with the DSH and GA heuristics and concluded that PSO performs better than GA.

Blum et al. [17] observed that the complementary characteristics of different optimization heuristics can be exploited through hybridization. Inspired by this, an improved PSO and GA hybridization for the multiprocessor task scheduling problem is proposed in this paper. According to [1], PSO provides better solutions than other heuristics/meta-heuristics for the multiprocessor task scheduling problem. Also, the work of [16] infers that GA works effectively when used as a local search optimization technique within a hybrid meta-heuristic. Together, GA and PSO have been widely used to optimize solutions for different applications [26, 28–30, 33].

This paper hybridizes PSO and GA for the multiprocessor DAG scheduling problem, in which PSO works as a solution builder and GA as a solution refiner used as a local search optimization technique. The initial population is produced by PSO in the form of particles (solutions). A few good particles enter the new GA generation. The GA operators, crossover and mutation, generate newer and better particles after a few generations, which update the global best solution obtained using PSO. This procedure is repeated until the termination condition of the PSO is met. The newly generated particles in the GA component are expected to be better, as they are produced from the genetic characteristics of the best-fitness particles. The novelty of the proposal is that in each iteration the GA component builds new solutions using their parents' features, acting as a local search technique, while the PSO component constructs diverse solutions, providing a global search. Thus, the proposed PSO–GA hybrid meta-heuristic inculcates the advantages of both meta-heuristics with a positive feedback mechanism, which is not the case in other hybrid PSO–GA meta-heuristics [26, 28–30, 33]. Furthermore, to the best of the authors' knowledge, the hybrid PSO–GA has not been applied earlier to the multiprocessor DAG scheduling problem.

The outline of the paper is as follows. An illustration of the multiprocessor task scheduling problem is given in Sect. 2. The proposed meta-heuristic for multiprocessor task scheduling using PSO-GA is fully described in Sect. 3. The performance of the proposed meta-heuristic, with a comparative study against other heuristics, is given in Sect. 4. Finally, the work is concluded in Sect. 5.

2 Multiprocessor DAG scheduling problem

This section focuses on the multiprocessor task scheduling problem with precedence constraints and communication amongst the tasks [1, 18]. All parameters related to the problem are assumed to be deterministic. It is also assumed that there are M homogeneous processors in the multiprocessor system and N tasks. A job (set of tasks) can be represented in the form of a directed acyclic graph (DAG) G (V, E), where V and E denote the set of nodes and the set of directed edges, respectively. A node \(n_{i} \in V\) represents a task with weight w (n i ) depicting the processing time of task n i . A directed edge \((n_{i} ,n_{j} ) \in E\) represents the communication and the precedence between the two tasks n i and n j . The precedence (n i , n j ) indicates that node n j cannot start its execution before n i finishes. An edge (n i , n j ) is assigned a weight w (n i , n j ) which represents the communication cost between n i and n j . If tasks n i and n j are assigned to the same processor, the communication cost becomes zero, i.e., n j can start its execution as early as finish time (n i ). Otherwise, n j can start its execution on some other processor no earlier than finish time \((n_{i} ) + w\left( {n_{i} , n_{j} } \right)\). The objective is to assign the N DAG tasks onto the M homogeneous processors, under the given precedence and communication constraints, such that the makespan of the DAG is minimized.

Figure 1 shows a DAG with 9 nodes (tasks), each represented by an oval inside which the node number and processing time are shown. Each edge denotes the precedence relation between the nodes along with the communication cost. Figure 2 shows a feasible schedule, with respect to the DAG of Fig. 1, on two homogeneous processors that gives a makespan of 25 time units.

Fig. 1
figure 1

Directed acyclic graph

Fig. 2
figure 2

A feasible schedule corresponding to Fig. 1

3 The proposed meta-heuristic

In this section, the proposed hybrid meta-heuristic (PSO-GA) is demonstrated with a complete flowchart describing its various features. It begins by exploring discrete solutions using PSO, which are then immediately exploited using GA to update the global best (G best). The process is repeated until a termination condition of PSO (Sect. 3.2.1) is met.

3.1 Framework

The algorithmic flowchart of the proposed hybrid meta-heuristic (PSO-GA) for task scheduling in multiprocessor systems is presented in Fig. 3. It starts with a random initial population (particles). The smallest position value (SPV) rule [1, 5] is applied thereafter to produce a sequence vector corresponding to each particle. The fitness of each particle is calculated and the personal best (P best) and global best (G best) are updated. If at least one termination condition of PSO is met, the algorithm terminates; otherwise it continues. Consequently, all the parameters, including the inertia weight, velocity vector, position vector and Pbest of each particle, are updated. To update the G best, the t best particles (with minimum fitness) are extracted using a module min_fitness() and submitted to the GA part of the meta-heuristic as its initial population. The GA operators, crossover and mutation [18], are applied to exploit these submitted particles. Using the SPV rule, each solution is scored, and using a selection method [18] the t best solutions are again forwarded to the next GA generation. The best solution of the GA (B sol) among them is updated. For each PSO iteration, the same GA procedure is repeated until a termination condition of GA (Sect. 3.2.1) is met. When control exits from the GA part, G best is updated. The PSO–GA hybrid meta-heuristic continues until at least one predefined termination condition of PSO (Sect. 3.2.1) is met.

Fig. 3
figure 3

Flowchart of the proposed hybrid PSO–GA

3.2 PSO-related terms

This section introduces PSO and its related terms and operators embedded into the proposed PSO–GA hybrid meta-heuristic.

3.2.1 Basic terms

Kennedy et al. [19] proposed PSO as an optimization technique that mimics the food-searching behavior of social creatures, i.e., particles [1, 5]. In this technique, all particles search for food in a multidimensional search space based on two important characteristics: position (a suggested solution) and velocity (the rate of change of the particle's position). If any particle finds an optimal path to the food location, it attracts other particles to follow its path. The quality of a path is determined by the fitness function. All particles move toward the optimal solution, updating their personal best (Pbest) and global best (G best) solutions. Finally, all particles reach the destination following the most optimal path.

Position vector \(\vec{X}\) is a vector of length N, where N represents the number of dimensions, i.e., nodes in the task graph. It is represented by \(\vec{X} = \left[ {x_{1} \ldots x_{i} \ldots x_{N} } \right]\), where x i represents the position value in the ith dimension.

Velocity vector \(\vec{V}\) is a vector of length N, where N represents the number of dimensions, i.e., nodes in the task graph. It is represented by \(\overrightarrow { V} = \left[ {v_{1} \ldots v_{i} \ldots v_{N} } \right]\), where v i represents the velocity value in the ith dimension.

Inertia weight \(\omega^{k} ,\) an important parameter, is used to control the impact of the previous velocity on the current velocity (at the kth iteration).

Personal best \(P{\text{best}}_{i}^{k}\) represents the locally best fitness position of the ith particle until the kth iteration.

Global best \(G_{\text{best}}^{k}\) represents the globally best fitness position, achieved by the global best particle, until the kth iteration.

Termination condition Two different termination conditions are used: one is a given number of iterations and the other is convergence of the solution. In PSO, convergence is detected when \(G_{\text{best}}\) of the PSO component equals the CP length (critical path length, discussed in the next subsection). In GA, convergence is detected when B sol of the GA component (shown in Fig. 3) equals the CP length. The number of iterations differs for the PSO and GA components and is decided empirically.

3.2.2 Other related terms

A few other important related terms, which may not belong to general PSO but are specially used in this meta-heuristic, are as follows.

Smallest position value (SPV), a heuristic proposed by Tasgetiren et al. [5], is used to convert the continuous value vector of PSO into a discrete value vector so that PSO can be applied to sequencing-type problems. This concept is similar to the random keys concept proposed by Bean [31] for genetic algorithms. With this heuristic, it is easy to convert the continuous position vector of a wandering particle into a discrete activity vector. In short, the heuristic finds the discrete sequence vector \(\overrightarrow { S }\) by sorting the particle's continuous position vector \(\vec{X}\) in ascending order. The detailed description of SPV is given in [1, 5] and the pseudo-code is as follows.

$$\varvec{SPV}(\varvec{X}_{{\varvec{ij}}}^{\varvec{k}} )$$

{

Sort \(X_{ij }^{k}\) in ascending order

Enumerate \(S_{ij}^{k}\) with the dimension (task) indices of \(X_{ij}^{k}\) in the sorted order

}

A demonstration of the SPV rule is given in Table 1, in which the values of \(S_{ij}^{k}\) represent the ascending order of the activities of the ith particle in the jth dimension at the kth iteration, corresponding to their position values.
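As a sketch, the SPV rule amounts to an argsort of the position vector. The following Python fragment (the function name `spv` is ours, not from [1, 5]) mirrors Table 1, using 0-based task indices:

```python
def spv(position):
    """Smallest position value rule: map a particle's continuous
    position vector to a discrete task sequence by sorting the
    position values in ascending order (an argsort)."""
    return sorted(range(len(position)), key=lambda j: position[j])

# The task (dimension) with the smallest position value comes first.
sequence = spv([1.8, 0.99, 3.01, -0.72])  # -> [3, 1, 0, 2]
```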

Table 1 Particle encoding representation

Critical path length The CP length is the length of a source-to-sink path with the highest makespan [1, 5], as represented in Eq. 1. The purpose of the CP length is to provide a bound on the optimal solution [1, 20].

$${\text{CP length}} = \mathop \sum \nolimits W_{j}$$
(1)

where W j is the processing time of task j belonging to the critical path, \(1 \le j \le N\), and N is the number of tasks in the directed acyclic graph (DAG). To parallelize this DAG, at least M processors are required, where M is obtained using Eq. 2.

$$M = \frac{{\mathop \sum \nolimits W_{i} }}{{\text{CP length}}}\quad 1 \le i \le N$$
(2)

According to Eqs. 1 and 2, the CP length equals the optimal schedule length if the minimum number of processors M is available and the communication cost is negligible.
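As an illustration of Eqs. 1 and 2, the sketch below computes the CP length (the longest path by processing time, ignoring communication) and the processor bound for a small hypothetical diamond DAG; the helper names and the ceiling applied to Eq. 2 are our own assumptions, since Eq. 2 only gives the ratio:

```python
import functools
import math

def cp_length(w, succ):
    """Eq. 1: the highest total processing time along any
    source-to-sink path; communication costs are ignored.
    w: task -> processing time, succ: task -> successor tasks."""
    @functools.lru_cache(maxsize=None)
    def longest(t):
        return w[t] + max((longest(s) for s in succ.get(t, ())), default=0)
    return max(longest(t) for t in w)

def min_processors(w, succ):
    """Eq. 2: M = (sum of all processing times) / CP length,
    rounded up so that M is integral (our assumption)."""
    return math.ceil(sum(w.values()) / cp_length(w, succ))

# Hypothetical diamond DAG: 1 -> {2, 3} -> 4
w = {1: 2, 2: 3, 3: 2, 4: 2}
succ = {1: (2, 3), 2: (4,), 3: (4,)}
# cp_length = 2 + 3 + 2 = 7, min_processors = ceil(9 / 7) = 2
```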

3.2.3 Initial particle generation

Initially, position and velocity of the particles is generated randomly. \(X_{ij}^{0}\) depicts the position vector for the ith particle corresponding to jth dimension at 0th iteration and is generated using Eq. 3.

$$X_{ij}^{0} = X_{ \hbox{min} } + \left( {X_{ \hbox{max} } - X_{ \hbox{min} } } \right)*r$$
(3)

where X min and X max take the values 0.0 and 4.0, respectively, to make the procedure random, and r is a uniform random value between 0 and 1, as given in the literature [1, 5].

The velocity vector for ith particle corresponding to jth dimension at 0th iteration is generated using Eq. 4.

$$V_{ij}^{0} = V_{ \hbox{min} } + \left( {V_{ \hbox{max} } - V_{ \hbox{min} } } \right)*r$$
(4)

where V min and V max take the values −4.0 and 4.0, respectively, and r is a uniform random value between 0 and 1. These values are taken for randomization purposes, as given in [1, 5].

The continuous position vector of each particle i is converted into a discrete permutation (sequence) vector \(\overrightarrow {{S_{i}^{0} }}\) using the SPV rule. The fitness \(F_{i }\) of the ith particle is evaluated using the fitness function (Sect. 3.4). The personal best is initialized for each particle i as Pbest i  = X i . The global best is initialized as G best = X b , where b = argmin i {F i }.

A representation of the encoding of the ith particle with j dimensions, 1 ≤ j ≤ (N = 7), is shown in Table 1. The first row of the table represents the node numbers of the DAG. The second and third rows represent the position and velocity values generated randomly for the nodes of the DAG, respectively. The last row represents the sequence vector produced using the SPV rule. According to the SPV rule, the position vector \(\left( {X_{i,j}^{0} } \right)\) is sorted in ascending order of its values and the sequence \((S_{ij}^{0} )\) lists the nodes (tasks) by their sorted indices, as shown in Table 1. This sequence vector, as a solution, is evaluated using the fitness function (Sect. 3.4) considering all problem constraints. The values in \((S_{ij}^{0} )\) represent the node numbers of the DAG.
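A minimal sketch of this initialization in Python, assuming the bounds of Eqs. 3 and 4 (the helper name `init_particle` is ours):

```python
import random

X_MIN, X_MAX = 0.0, 4.0    # position bounds (Eq. 3)
V_MIN, V_MAX = -4.0, 4.0   # velocity bounds (Eq. 4)

def init_particle(n, rng=random):
    """Draw the initial position and velocity vectors of one
    particle uniformly at random, one value per dimension."""
    x = [X_MIN + (X_MAX - X_MIN) * rng.random() for _ in range(n)]
    v = [V_MIN + (V_MAX - V_MIN) * rng.random() for _ in range(n)]
    return x, v
```

The discrete sequence of Table 1 is then obtained by applying the SPV rule to `x`.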

3.2.4 PSO updating rules

After initialization, a predefined number of iterations is performed in which the particles evolve toward the optimal solution. According to the standard PSO procedure, updating rules for the inertia weight and for the velocity and position of the particles are required. The updating rules used in this work are as follows [1, 5].

Inertia weight updating rule: \(\omega^{k}\) at kth iteration is updated using Eq. 5 as follows.

$$\omega^{k} = \omega^{k - 1} *\alpha$$
(5)

where ω is initialized as \(\omega = 0.9\) and α is a decrementing factor randomly generated between 0 and 1.

Velocity vector updating rule: The velocity at kth iteration is updated using Eq. 6.

$$\overrightarrow {{V_{k} }} = \omega *\overrightarrow {{V_{k - 1} }} + c_{1} r_{1} \left( {\overrightarrow {{P{\text{best}}_{k - 1} }} - \overrightarrow {{X_{k - 1} }} } \right) + c_{2} r_{2} \left( {\overrightarrow {{G_{\text{best}} }} - \overrightarrow {{X_{k - 1} }} } \right)$$
(6)

where c 1 and c 2 are the self-recognition and social constants, respectively, and r 1, r 2 are uniform random numbers between 0 and 1.

Position vector updating rule: At kth iteration, position of the particle is updated using Eq. 7.

$$\overrightarrow {{{\text{X}}_{\text{k}} }} = \overrightarrow {{{\text{X}}_{{{\text{k}} - 1}} }} + \overrightarrow {{{\text{V}}_{\text{k}} }}$$
(7)
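Taken together, Eqs. 5–7 give one update step per particle. A sketch (function name ours; c 1 = c 2 = 2 as used later in Sect. 4.2.1):

```python
import random

def pso_step(x, v, pbest, gbest, omega, c1=2.0, c2=2.0, rng=random):
    """One velocity and position update (Eqs. 6 and 7). The inertia
    weight omega is assumed to have been decayed by the caller
    via Eq. 5 (omega_k = omega_{k-1} * alpha)."""
    r1, r2 = rng.random(), rng.random()
    new_v = [omega * vi + c1 * r1 * (pi - xi) + c2 * r2 * (gi - xi)
             for xi, vi, pi, gi in zip(x, v, pbest, gbest)]
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v
```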

3.3 GA-related terms and operators

This section briefly introduces GA, its related terms and operators specific to the proposed hybrid PSO-GA meta-heuristic.

GA is a well-known search technique applied to combinatorial problems in search of optimal solutions [18, 21]. It is based on the principles of evolution and natural genetics and works efficiently for large search spaces. It starts with a set of randomly generated solutions, called the initial population, consisting of chromosomes. Each chromosome represents a potential solution to the specified problem and is composed of a string of genes. Genes may be represented using binary {0, 1}, integer or real values, depending upon the application. Genetic operators such as crossover, mutation, and selection are iteratively applied to the population. Over a number of generations, the GA converges to a near-optimal solution.

Encoding The input to the GA component of the proposed PSO–GA meta-heuristic is the position vector (real values) of a PSO-generated solution. This encoding is shown in Table 2, which depicts a chromosome of 10 real values, giving the particle value at each position.

Table 2 GA encoding

Crossover operator This work uses a random crossover which generates new individuals by combining portions of the parents' (solutions') genetic material [18]. A random two-point crossover is used, and all positions of the string between the two randomly generated points of both parents are remapped. The overall procedure is shown in Fig. 4.

Fig. 4
figure 4

Random crossover procedure [18]

An illustration of the random crossover operator, which consists of three steps, is given in Fig. 5. In step 1, two cutting points are randomly selected, giving the substrings str1 and str2 of the two parents. Sorting priorities (SP) corresponding to the values of the substrings are generated in increasing order. The values of substring str1 are rearranged according to the sorting priority of str2, and vice versa, as shown in step 2. In step 3, the output offspring v1′ and v2′ are obtained.

Fig. 5
figure 5

An Illustration of random crossover operator
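One plausible reading of this three-step procedure in Python (the sorting-priority remapping is our interpretation of Fig. 5, and the names are ours):

```python
import random

def sorting_priority(seg):
    """Rank of each value within the segment (0 = smallest)."""
    order = sorted(range(len(seg)), key=lambda i: seg[i])
    ranks = [0] * len(seg)
    for r, i in enumerate(order):
        ranks[i] = r
    return ranks

def random_two_point_crossover(p1, p2, rng=random):
    """Step 1: pick two cut points; step 2: rearrange the values of
    each substring according to the sorting priority of the other;
    step 3: return the two offspring."""
    a, b = sorted(rng.sample(range(len(p1)), 2))
    str1, str2 = p1[a:b + 1], p2[a:b + 1]
    sp1, sp2 = sorting_priority(str1), sorting_priority(str2)
    s1, s2 = sorted(str1), sorted(str2)
    c1 = p1[:a] + [s1[r] for r in sp2] + p1[b + 1:]
    c2 = p2[:a] + [s2[r] for r in sp1] + p2[b + 1:]
    return c1, c2
```

Because only the position values inside the cut region are permuted, each offspring keeps the same multiset of values as its parent, so SPV decoding always yields a legal sequence.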

Fig. 6
figure 6

Swap mutation operator [18]

Mutation operator A simple swap mutation operator is used in this work. Two positions are randomly selected and their contents are swapped, as shown in Fig. 7. This procedure is guaranteed to generate legal offspring. The swap mutation operator can convert a less effective solution into a more effective one. The overall procedure is shown in Fig. 6.

Fig. 7
figure 7

Illustration of the swap mutation operator
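A sketch of the swap mutation in Python (helper name ours):

```python
import random

def swap_mutation(chromosome, rng=random):
    """Pick two distinct positions at random and swap their
    contents; the offspring is always a legal chromosome."""
    c = list(chromosome)
    i, j = rng.sample(range(len(c)), 2)
    c[i], c[j] = c[j], c[i]
    return c
```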

Selection Roulette wheel selection [18, 22] is used in this meta-heuristic. All solutions are placed on a roulette wheel, where a better solution occupies a larger portion of the wheel. This gives each individual a chance of becoming a parent in proportion to its fitness value. A parent is selected by spinning the roulette wheel, with slot sizes proportional to fitness; parents with larger fitness values (larger slots) therefore have a higher probability of being chosen.
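A sketch of roulette wheel selection; since the objective here is makespan minimization, we assume the selection weight is the inverse makespan (any decreasing mapping would do), and the function name is ours:

```python
import random

def roulette_select(population, makespans, rng=random):
    """Spin the wheel once: each individual occupies a slot
    proportional to 1/makespan (smaller makespan, bigger slot)."""
    weights = [1.0 / m for m in makespans]
    pick = rng.uniform(0.0, sum(weights))
    acc = 0.0
    for individual, wgt in zip(population, weights):
        acc += wgt
        if acc >= pick:
            return individual
    return population[-1]
```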

3.4 Fitness function

The objective of this work is to minimize the schedule length of the given problem, represented as a DAG. Hence, a solution can be scored by its makespan, which is the complete schedule length as defined in Eq. 8.

$${\text{makespan}} = { \hbox{max} }\left( {{\text{FT}}\left( n \right)_{i,j} } \right)$$
(8)

where FT(n) i,j represents the finish time of task n i on the best suitable processor p j , with 1 ≤ i ≤ N and 1 ≤ j ≤ M.

This fitness function evaluates the solution using the serial schedule scheme (SSS) [32]. The SSS is used when the generated solutions, i.e., particles or chromosomes, are invalid (do not follow the dependency constraints). Basically, the SSS schedules all the activities sequentially until a valid, feasible solution is achieved. A detailed description of the SSS procedure is given in [32].
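A simplified sketch of the makespan evaluation for a precedence-feasible sequence (as guaranteed by the SSS): each task in the SPV order is placed on the processor that gives it the earliest finish time, charging the edge weight only when a predecessor ran on a different processor. All names here are ours, not from [32]:

```python
def evaluate_makespan(sequence, M, w, comm, preds):
    """Greedy list scheduling of `sequence` (a precedence-feasible
    task order) on M homogeneous processors. w: task -> processing
    time, comm: (u, t) -> edge weight, preds: task -> predecessors."""
    finish, proc_of = {}, {}
    ready = [0] * M                      # next free time per processor
    for t in sequence:
        best = None
        for p in range(M):
            est = ready[p]
            for u in preds.get(t, ()):
                # zero communication cost on the same processor
                arrival = finish[u] if proc_of[u] == p else finish[u] + comm[(u, t)]
                est = max(est, arrival)
            ft = est + w[t]
            if best is None or ft < best[0]:
                best = (ft, p)
        ft, p = best
        finish[t], proc_of[t], ready[p] = ft, p, ft
    return max(finish.values())          # Eq. 8
```

For the chain 1 → 2 with a heavy edge, co-locating both tasks on one processor avoids the communication cost, which is exactly the trade-off described in Sect. 2.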

4 Experimental studies

This section details the performance of the PSO–GA hybrid meta-heuristic applied to task scheduling in multiprocessor systems. It has been compared with other heuristics proposed in the literature [2, 6–14, 25]. The nine compared heuristics/meta-heuristics are: Min–Min, Chaining, A* search, simulated annealing, tabu search, HLFET, ISH, GA and PSO. The DAGs for the jobs/tasks are generated for two standard linear algebra problems; some random DAGs are also used. The proposed PSO–GA meta-heuristic is simulated in Matlab.

4.1 Test bed

There are no commonly used, standard task-set benchmarks available to study the performance of heuristics/meta-heuristics on task scheduling problems [23]. Researchers have often used random task graphs with known optimal schedules [1, 23]. For an effective comparative study, two well-known standard problems of linear algebra, LU decomposition [2] and Gauss–Jordan elimination (GJE) [2, 24], have been used. These are tested against the nine heuristics as defined in [1, 2]. Figure 8a, b depict pictorial representations of the LU decomposition and GJE problems.

Fig. 8
figure 8

Pictorial representation of GJE, LU decomposition and random graphs

Further, to study the behavior of the proposed PSO–GA on large task graphs, a few experiments are performed on randomly generated task graphs. The random task graphs are pictorially represented in Fig. 8c.

4.2 Experimental analysis

For the performance analysis of the PSO–GA, two sets of experiments have been performed. In the first subsection, the PSO-GA is evaluated on the two well-known linear algebra problems, i.e., GJE and LU decomposition graphs, along with some known optimal solutions of different heuristics/meta-heuristics [1, 2]. The next subsection presents comparative results of the PSO–GA against PSO [1] and GA [25] on randomly generated graphs.

4.2.1 Benchmark task graphs experiments

This subsection presents comparative results of the proposed PSO–GA meta-heuristic on GJE and LU decomposition graphs against the other available heuristics. The parameters used to generate the task graphs for the GJE and LU decomposition problems and the input parameters of the PSO–GA are given in Table 3 and Table 4, respectively. All the parameters and numbers of iterations in both tables are taken from other contemporary meta-heuristics [1, 2] for the purpose of a fair comparative study. The inter-process communication cost on the edges of the GJE and LU decomposition DAGs is taken to be the same as in the models [1, 2, 23]. For the random task graph experiments (Sect. 4.2.2), it is randomly generated within certain ranges, as given in Table 5. The constants C 1 and C 2 (self-recognition and social constants, respectively) are taken as 2, against the suggested value of 2.05 in "standard" PSO [27]. The second and third rows of both tables detail the processing time (in time units) taken by each node of the task graph and the communication cost (in time units) of the edges, respectively.

Table 3 GJE task graphs experimental details
Table 4 LU decomposition task graphs experimental details
Table 5 Input values for random task graphs experiment

Comparative results of all the heuristics and the PSO–GA meta-heuristic are shown in Figs. 9 and 10 for the GJE and LU decomposition graphs, respectively.

Fig. 9
figure 9

Comparative study of 10 heuristics on GJE graphs

Fig. 10
figure 10

Comparative study of 10 heuristics on LU decomposition graphs

From the experimental results of Figs. 9 and 10, it is observed that the proposed PSO–GA meta-heuristic outperforms all the other heuristics/meta-heuristics for the GJE and LU decomposition problems. The experimental analysis also exhibits that GA and PSO are popular and among the most competitive heuristics for solving large and complex problems. Hence, for further experimentation, we select these two techniques for comparison purposes.

The experimental results shown in Figs. 9 and 10 are limited to four processors and the same problem sizes (i.e., task graphs). This is to make a fair comparative study with the existing state-of-the-art heuristics [1, 2], which have been applied with the same configuration. The results shown in Figs. 9 and 10 overlap because of the limited problem size and the high precedence constraints among the nodes of the DAGs. The results also cannot be further optimized by increasing the number of processors, because of the limited problem size and the precedence constraints among the nodes; the proposed PSO–GA has already improved the quality of the solution to its maximum under this configuration.

To further evaluate the performance of the PSO–GA in comparison to the dominating meta-heuristics, i.e., GA and PSO, experiments have been performed by varying the number of processors and using large GJE and LU decomposition graphs. The results are shown in Figs. 11, 12, 13 and 14. For this set of experiments, all the parameters except the initial population and the number of processors for the GJE and LU decomposition task graphs are taken from Tables 3 and 4, respectively. For all the experiments, the initial population is taken as 150 and the numbers of processors are shown in the respective figures.

Fig. 11
figure 11

Comparative study of three meta-heuristics using GJE taskgraph-465 with varied number of processors

Fig. 12
figure 12

Comparative study of three meta-heuristics using GJE taskgraph-820 with varied number of processors

Fig. 13
figure 13

Comparative study of three meta-heuristics using LU decomposition taskgraph-464 with varied number of processors

Fig. 14
figure 14

Comparative study of three meta-heuristics using LU decomposition taskgraph-819 with varied number of processors

Figures 11 and 12 show the performance of the PSO–GA in comparison to PSO and GA with a varied number of processors on GJE task graphs of 465 and 820 nodes, respectively. From the results, it is observed that when the number of processors is increased for large task graphs, the performance of the proposed PSO-GA improves rapidly in comparison to the others.

To further clarify the impact of an increasing number of processors on the proposed PSO–GA, an experimental analysis on large LU decomposition task graphs of 464 and 819 nodes is also performed; the results are shown in Figs. 13 and 14, respectively. The results indicate that the PSO–GA again outperforms the other heuristics for a varied number of processors on large LU decomposition task graphs.

Figures 13 and 14 exhibit a pattern similar to that of the small DAGs. To observe the performance of the model on random task graphs, the next section presents an experimental analysis of the PSO-GA model using random task graphs.

4.2.2 Random task graphs experiment

In this experiment, the performance of the PSO–GA is studied by comparing it with PSO [1] and GA [25] over large random DAGs. The PSO-GA meta-heuristic is also tested with a varying number of processors. Table 5 lists the input parameters for the random task graphs and the meta-heuristic techniques used in the random task graph experiments.

On the basis of the given input parameters (Table 5), six large random DAGs are generated, with 100, 200, 500, 800, 1,000, and 2,000 tasks. Most of the parameters conform to those of Tables 3 and 4, suitable for heuristic implementation and comparison with the various state-of-the-art techniques. For a fair comparison, all the parameters, numbers of iterations, fitness function, etc. are taken to be the same. The comparative results of GA, PSO and PSO–GA for the random task graphs are shown in Fig. 15.

Fig. 15
figure 15

Comparative Study of three meta-heuristics on random graphs

From Fig. 15, it is observed that the performance of the PSO–GA is far better than that of the other meta-heuristics. This set of experiments also shows that the proposed hybrid PSO–GA meta-heuristic effectively outperforms PSO [1] and GA [25] in all cases.

5 Conclusion

This paper proposes a hybrid PSO–GA meta-heuristic, combining the PSO and GA meta-heuristics, to solve the multiprocessor DAG scheduling problem on a homogeneous multiprocessor system. In the proposed PSO–GA meta-heuristic, PSO works as a solution builder and GA works as a solution refiner. The novelty of the proposal is the way GA is used inside the PSO, i.e., good PSO-generated solutions are further refined using GA. Moreover, at each iteration of the PSO, a few of the best particles, with their updated positions and velocities, are fed to the GA component of the model and refined. The global best solution of PSO is updated using the best solution suggested by the GA part of the model. The computational results of the PSO–GA are compared with other heuristics/meta-heuristics [1, 2] on two well-known linear algebra problems, GJE and LU decomposition, and on randomly generated graphs. Initially, the results are compared with nine other state-of-the-art techniques using GJE and LU decomposition task graphs and a few processors. From the results, it is observed that the PSO–GA outperforms the nine state-of-the-art heuristics. Further, to test the behavior of the PSO–GA on larger GJE and LU task graphs with a large number of processors, additional experiments are performed. The results are compared with two well-known and highly effective meta-heuristics for large search-space problems, i.e., GA and PSO, and it is found that the proposed PSO–GA meta-heuristic is effective and scalable for solving large problems as well. The performance of the PSO–GA is also tested on various large random task graphs with a varying number of processors. Again, the results prove the effectiveness of the proposed model.

Thus, it is concluded that the PSO–GA hybrid meta-heuristic is a promising candidate for the multiprocessor task scheduling problem. The proposed PSO–GA meta-heuristic can also be applied to effectively solve other large and complex combinatorial optimization problems.