1 Introduction

Solving complex optimization problems within a restricted time is a crucial subject in the field of engineering optimization. Conventional methods become tedious and time-consuming when applied to complex optimization problems, and they offer no guarantee of reaching the global optimal solution. Metaheuristic-based optimization methods were developed to overcome this issue; they are capable of reaching the global or a near-global optimum within a limited time. Some of the well-known metaheuristic optimization algorithms are: genetic algorithm (GA) and its variants (parallel GA, real-coded GA, hybrid interval GA, etc.), tabu search (TS), ant colony optimization (ACO), particle swarm optimization (PSO) and its variants (e.g., culture-based PSO, niching PSO, aging theory inspired PSO), differential evolution (DE) and its variants (e.g., DE with self-adapting control parameters, DE with multi-population ensemble, DE with optimal external archive), artificial bee colony (ABC) algorithm, imperialist competitive algorithm (ICA), biogeography-based optimization (BBO), gravitational search algorithm (GSA), firefly algorithm (FFA), cuckoo search (CS) and bat algorithm (BA) (Lau et al. 2016; Rao and Patel 2012; Irawan et al. 2016).

Several metaheuristic algorithms have been proposed in the last decade. Some prominent ones are: spiral optimization, differential search algorithm, teaching-learning-based optimization (TLBO), cuckoo search algorithm (CSA), colliding bodies optimization, whale optimization algorithm (WOA), centripetal accelerated particle swarm optimization, crisscross optimization, ant lion optimization (ALO), cat swarm optimization (CSO), bacterial foraging optimization (BFO), thermal exchange optimization algorithm (TEOA), gray wolf optimizer (GWO), chemotherapy science-based optimization algorithm (CSOA) and various hybrid algorithms (Mirjalili and Lewis 2016; Salmani and Eshghi 2017; Kohli and Arora 2017; Rao and Saroj 2017; Kaveh and Dadras 2017).

The advanced optimization algorithms have their individual merits, but they require tuning of their own algorithm-specific parameters. For example, GA requires proper tuning of the crossover probability, selection operator and mutation probability. The simulated annealing (SA) algorithm requires tuning of the cooling schedule and the initial annealing temperature. PSO requires the setting of the social and cognitive parameters and the inertia weight. The BBO algorithm needs the setting of the emigration rate, immigration rate, etc. Similarly, ICA, DE and the other algorithms mentioned above have their own algorithm-specific parameters that must be tuned for effective execution. These parameters are required to be set in addition to the common control parameters (i.e., number of iterations, population size and elite size). All existing population-based advanced optimization algorithms require values for the common control parameters, whereas the algorithm-specific parameters are particular to each algorithm and must be set as well.

The performance of metaheuristic algorithms is strongly influenced by their algorithm-specific parameters, so the appropriate setting of these parameters is essential. Improper tuning may increase the computational cost or drive the search toward a local optimum. To resolve the issue of setting algorithm-specific parameters, the teaching-learning-based optimization (TLBO) algorithm, which is free of algorithm-specific parameters, was developed (Rao 2016a, b). In view of the good performance of the TLBO algorithm, another algorithm-specific parameter-less algorithm, named the Jaya algorithm, was recently proposed (Rao 2016c).

Subpopulation-based advanced optimization methods have been developed to improve the diversity of the search process by dividing the whole population into subgroups and distributing them throughout the search space. Assigning subpopulations to various search areas, rather than concentrating on a single area, maintains the diversity of the search mechanism. In this approach, each subpopulation is associated with either exploring or exploiting the search space (Nguyen et al. 2012; Cruz et al. 2011). The subpopulations interact through a combine-and-split process whenever an effective improvement in the current global solution is observed. Subpopulation-based algorithms have been found to be more effective than single-population-based algorithms.

Branke et al. (2000) developed the self-organizing scouts multi-population evolutionary algorithm. Li and Yang (2008) developed the multi-swarm PSO algorithm. Yang and Li (2010) developed clustering-based PSO. Rao and Patel (2012) developed the multiple-teachers TLBO. A multi-population ABC algorithm was developed by Nseef et al. (2016). These subpopulation-based methods help to maintain population diversity. Subpopulation-based optimization algorithms are advantageous (Li et al. 2015) mainly because the overall diversity of the search mechanism can be maintained by distributing the whole population into groups and locating the subpopulations in different regions of the search space.

The algorithm’s performance is affected by the chosen number of subpopulations, which is related to the complexity of the problem. This number cannot be determined in advance for a given problem, and it may need to change during the search process. The number of solutions within each subpopulation may also be insufficient to maintain diversity. To address these issues, the present work proposes a self-adaptive multi-population elitist (SAMPE) Jaya algorithm. To monitor the problem landscape effectively, the SAMPE-Jaya algorithm adaptively changes the number of subpopulations based on the change strength of the solution. The objectives of the present study are:

  (a) To propose a SAMPE-Jaya algorithm that adapts the number of subpopulations based on the change strength of the problem.

  (b) To investigate the performance of the proposed method on standard benchmark problems.

  (c) To investigate the performance of the proposed method for an engineering application of a micro-channel heat sink design.

The basic difference between the island-model GA and the proposed SAMPE-Jaya algorithm is that the island-model GA uses only two groups (i.e., a master island and slave islands), whereas the proposed SAMPE-Jaya algorithm adapts the number of subpopulations. Multiple-population (or multiple-deme, or parallel) GAs are more sophisticated, as they consist of several subpopulations that occasionally exchange individuals. This exchange of individuals is called migration, and it is controlled by several parameters. However, the number of demes must be tuned for better performance, and tuning the number of subpopulations (number of demes) is a critical issue in parallel evolutionary algorithms (Cantu-Paz 1998; Mambrini and Sudholt 2014). The proposed SAMPE-Jaya algorithm resolves this issue by deciding the number of subpopulations adaptively.

The following section describes the proposed self-adaptive multi-population elitist-Jaya algorithm.

2 Self-adaptive multi-population elitist-Jaya algorithm

Rao and Saroj (2017) developed the self-adaptive multi-population (SAMP) Jaya algorithm; the proposed self-adaptive multi-population elitist (SAMPE) Jaya algorithm extends the SAMP-Jaya algorithm to improve its performance. Let Z(x) be the objective function to be optimized. Assume that at any iteration i, the number of design variables is ‘d’ (i.e., \(q=1, 2,\ldots ,d\)) and the population size is ‘P’ (i.e., \(r =1, 2,\ldots ,P\)). If \({X}_{q,r,i}\) is the value of the qth variable for the rth candidate during the ith iteration, then this value is modified according to Eq. (2.1).

$$\begin{aligned} X'_{q,r,i} = X_{q,r,i} + r_{1}\left( X_{q,best,i} - \vert X_{q,r,i}\vert \right) - r_{2}\left( X_{q,worst,i} - \vert X_{q,r,i}\vert \right) \end{aligned}$$
(2.1)

where \({X}_{q,best,i}\) is the value of the qth variable for the best solution and \({X}_{q,worst,i}\) is the value of the qth variable for the worst solution in the population. \({X}'_{q,r,i}\) is the new value of \(X_{q,r,i}\), and \({r}_{1}, {r}_{2}\) are random numbers in the range [0, 1]. The term \(r_{1}({X}_{q,best,i}-\vert {X}_{q,r,i}\vert )\) indicates that the solution tries to approach the best solution, and the term \(-r_{2}(X_{q,worst,i}-\vert X_{q,r,i}\vert )\) indicates that the solution tries to avoid the worst solution. \(X'_{q,r,i}\) is accepted if the function value it produces is better.
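In code, the update of Eq. (2.1) can be sketched as follows (Python is used purely for illustration and is vectorized over the whole population; clipping to the variable bounds is a common bound-handling choice that the text does not specify):

```python
import numpy as np

def jaya_update(X, best, worst, lo, hi, rng):
    """Apply Eq. (2.1) to every candidate row of X (shape: P x d)."""
    r1 = rng.random(X.shape)      # r1, r2 ~ uniform in [0, 1]
    r2 = rng.random(X.shape)
    X_new = X + r1 * (best - np.abs(X)) - r2 * (worst - np.abs(X))
    return np.clip(X_new, lo, hi)  # keep variables within their bounds
```

Greedy acceptance (keeping a new row only if its fitness improves) is applied after this move, as stated above.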

In SAMPE-Jaya algorithm, the following modifications are added to the basic Jaya algorithm:

  (a) The proposed algorithm divides the population into a number of subpopulations (groups) based on the quality of the solutions (fitness values). Furthermore, the worst solutions of the inferior group (solutions with poor fitness values) are replaced by the solutions of the superior group, i.e., solutions with better fitness values (elite solutions). Using a number of subpopulations distributes the solutions over the search space rather than concentrating them in a particular area; this helps the algorithm locate the optimum solution and track changes in the problem landscape.

  (b) During the search process, the SAMPE-Jaya algorithm modifies the number of subpopulations based on the change strength of the problem in order to monitor landscape changes; that is, the number of subpopulations is increased or decreased adaptively based on the strength of the solution change (e.g., improvement in the fitness value). This feature helps the search process trace the optimum solution and improves the exploration and diversification of the search. Furthermore, duplicate solutions are replaced by newly generated solutions to maintain diversity and enhance exploration.

The basic steps of the SAMPE-Jaya algorithm are as follows:

Fig. 1
figure 1

Flowchart of SAMPE-Jaya algorithm

Step 1 The algorithm starts with the setting of the number of design variables (d), the population size (P), the elite size (ES) and the termination criterion. (The termination criterion for the present work is the maximum number of function evaluations (\(\hbox {FE}_{\mathrm{max}}\)).)

Step 2 Generate the initial solutions and evaluate them using the fitness function defined for the given problem.

Step 3 Group the entire population into m groups based on the quality of the solutions (initially m = 2) and replace the worst solutions (ES of them) of the inferior group with the solutions of the superior group (elite solutions).

Step 4 Each subpopulation uses the Jaya algorithm to modify its solutions independently. A modified solution is kept if and only if it is better than the old solution.

Step 5 Combine all the subpopulations. Check whether Z(best_before) is better than Z(best_after).

Here, Z(best_before) is the previous best solution of the entire population and Z(best_after) is the current best solution of the entire population. If the value of Z(best_after) is better than the value of Z(best_before), m is increased by 1 (\({m}={m}+1\)) with the aim of increasing the exploration of the search process. Otherwise, m is decreased by 1 (\(m= m-1\)), as the algorithm then needs to be more exploitative than explorative.

Step 6 Check the stopping condition(s). If the search process has reached the maximum number of function evaluations, terminate the loop and report the best solution. Otherwise:

  • (a) Replace the duplicate solutions with randomly generated new solutions.

  • (b) Go to Step 3 to re-divide the population.
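The steps above can be sketched as follows. This is an illustrative Python outline, not the authors' MATLAB implementation: the duplicate-removal sub-step is omitted, the elitist replacement rule is simplified to copying the ES best rows over the ES worst rows, and the bounds imposed on m (between 1 and P) are assumptions.

```python
import numpy as np

def sampe_jaya(f, lo, hi, pop=40, es=2, fe_max=20000, seed=0):
    """Illustrative sketch of the SAMPE-Jaya steps; m (groups) adapts per cycle."""
    rng = np.random.default_rng(seed)
    d = len(lo)
    X = lo + rng.random((pop, d)) * (hi - lo)          # Step 2: Eq. (2.2)
    fit = np.array([f(x) for x in X])
    fe, m = pop, 2                                     # Step 3 starts with m = 2
    best_before = fit.min()
    while fe < fe_max:
        order = np.argsort(fit)                        # sort by solution quality
        X, fit = X[order], fit[order]
        X[-es:], fit[-es:] = X[:es], fit[:es]          # elitism: worst <- elite rows
        for grp in np.array_split(np.arange(pop), m):  # Step 4: Jaya move per group
            g_best, g_worst = X[grp[0]], X[grp[-1]]    # groups inherit the global sort
            r1 = rng.random((len(grp), d))
            r2 = rng.random((len(grp), d))
            X_new = np.clip(X[grp] + r1 * (g_best - np.abs(X[grp]))
                            - r2 * (g_worst - np.abs(X[grp])), lo, hi)
            f_new = np.array([f(x) for x in X_new])
            fe += len(grp)
            better = f_new < fit[grp]                  # greedy acceptance
            X[grp[better]], fit[grp[better]] = X_new[better], f_new[better]
        best_after = fit.min()                         # Steps 5-6: adapt m, loop
        m = min(m + 1, pop) if best_after < best_before else max(m - 1, 1)
        best_before = min(best_before, best_after)
    i = int(fit.argmin())
    return X[i], fit[i]
```

On a simple minimization problem such as the sphere function, this sketch reproduces the qualitative behavior described above: m grows while improvements are found (more exploration) and shrinks otherwise (more exploitation).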

The flowchart of the proposed SAMPE-Jaya algorithm is presented in Fig. 1.

All metaheuristic algorithms (except the TLBO and Jaya algorithms) require tuning of algorithm-specific parameters, in addition to the common control parameters (e.g., population size, elite size and number of iterations). The performance of metaheuristic algorithms is strongly affected by the tuning of these algorithm-specific parameters, and improper tuning may lead to a local optimal solution.

The Jaya algorithm is an algorithm-specific parameter-less algorithm similar to the teaching-learning-based optimization (TLBO) algorithm. However, it is simpler and more powerful than the TLBO algorithm, having a single phase rather than the two phases of TLBO (i.e., the teacher phase and the learner phase). Equation (2.1) is used to upgrade the solution quality during the search process. To ensure better exploration of the search space, two random numbers \(r_1\) and \(r_2\) are used. The absolute value of the candidate solution (\(\vert X_{q,r,i}\vert \)) in Eq. (2.1) helps the algorithm further increase its exploration ability. These features make the algorithm converge toward the global optimal solution rather than toward a local optimal solution.

The following equation is used for generating the initial solutions:

$$\begin{aligned} X_{\mathrm{initial}} = X_{\mathrm{min}} + rand (0,1)\times (X_{\mathrm{max}} - X_{\mathrm{min}}) \end{aligned}$$
(2.2)

where rand(0, 1) is a uniformly distributed random number between 0 and 1, and \(X_{\mathrm{min}}\) and \(X_{\mathrm{max}}\) are the minimum and maximum values of the design variables, respectively.
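Equation (2.2) can be implemented directly, for example (Python shown for illustration; the seed and variable bounds below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)                      # hypothetical seed, for reproducibility
x_min = np.array([-5.0, 0.0, 10.0])                  # hypothetical lower bounds
x_max = np.array([ 5.0, 1.0, 20.0])                  # hypothetical upper bounds
x_initial = x_min + rng.random(3) * (x_max - x_min)  # Eq. (2.2): uniform in [x_min, x_max]
```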

The random numbers between 0 and 1 in Eq. (2.2) distribute the population over the entire search space. Furthermore, \(r_1\) and \(r_2\) between 0 and 1 are used to explore the entire search space so as to reach the global optimal solution within the specified limits.

Now, consider changing the ranges of the random numbers used in producing the next solutions. If the ranges are increased beyond 1, the algorithm is allowed to search beyond the specified limits of the design variables; this may lead to infeasible solutions and increase the complexity of the search process. Conversely, if the ranges are reduced (0 to less than 1), the algorithm is not allowed to explore the entire search space, which may trap it in local optima.

Furthermore, random numbers with a normal distribution are sometimes used in metaheuristics for the search process. Normalizing the values of the random numbers in each iteration may not serve the purpose of the metaheuristic: it may increase the exploration rate but yield a poor exploitation rate, leading the algorithm toward local optima at a higher computational cost. Finally, an elitist version of the multi-population Jaya algorithm is proposed in this work.

To investigate the performance of the proposed SAMPE-Jaya algorithm, it is used for solving 30 unconstrained benchmark problems (including 15 problems of CEC 2015), three large-scale problems, six constrained benchmark problems and four well-known constrained mechanical design optimization problems taken from the literature (Ngo et al. 2016). The next section presents the analysis of the results obtained by the SAMPE-Jaya algorithm and their comparison with the other approaches.

3 Optimization results and discussion

The SAMPE-Jaya algorithm is coded in MATLAB R2009b and executed on an HP Pavilion g6 notebook PC with 4 GB RAM and a 1.9-GHz AMD A8-4500M APU. The performance of the SAMPE-Jaya algorithm is compared with that of various optimization algorithms: GA and its variants, PSO and its variants, DE, TLBO, etc. The performance is evaluated on thirty unconstrained problems (including the CEC 2015 functions), three large-scale problems, six constrained problems and four well-known constrained mechanical design problems taken from the work of Ngo et al. (2016). Furthermore, the proposed SAMPE-Jaya algorithm is applied to the design optimization of a micro-channel heat sink (MCHS). The performance analysis of the SAMPE-Jaya algorithm on unconstrained benchmark problems is presented in the following section.

Table 1 Comparative results for the unimodal and multimodal problems with maximum function evaluations of 50,000
Table 2 Comparative results for the unimodal and multimodal problems with maximum function evaluations of 200,000

3.1 Unconstrained benchmark problems

This section presents the performance analysis of the SAMPE-Jaya algorithm on fifteen unconstrained benchmark problems and fifteen computationally expensive unconstrained benchmark problems taken from CEC 2015 (Ngo et al. 2016). The optimum value of function \(\hbox {O}_{8}\) is − 418.9829 × dimension, and the global optimum value of each of the remaining problems is zero.

3.1.1 Analysis of results related to unimodal and multimodal problems

The results obtained by the SAMPE-Jaya algorithm for the unimodal and multimodal problems are compared with those of GA and its variants, PSO and its variants, and the SAMP-Jaya and Jaya algorithms. Tables 1 and 2 present the comparison of results. The results achieved by the SAMPE-Jaya algorithm for functions \(\hbox {O}_{1}\) to \(\hbox {O}_{15}\) with 50,000 and 200,000 function evaluations over 30 independent runs for the 30-dimension problems are presented in Tables 1 and 2, respectively. The results are compared with the Jaya algorithm (Rao and Saroj 2017), self-adaptive multi-population (SAMP) Jaya algorithm (Rao and Saroj 2017), extraordinariness particle swarm optimizer (EPSO) (Ngo et al. 2016), adaptive inertia weight PSO (AIWPSO) (Nickabadi et al. 2011), gravitational search algorithm (GSA) (Rashedi et al. 2009), Frankenstein’s PSO (F-PSO) (Oca and Stutzle 2009), real-coded genetic algorithm (RGA) (Haupt and Haupt 2004), comprehensive learning PSO (LPSO) (Liang and Qin 2006), cooperative PSO (CPSO) (Bergh and Engelbrecht 2004), PSO (Eberhart and Kennedy 1995) and fully informed particle swarm (FIPS) (Mendes et al. 2004).

It can be observed from Table 1 that the mean function value recorded by the SAMPE-Jaya algorithm is better than or equal to those of the other algorithms in 10 out of 15 cases. The SAMPE-Jaya algorithm is able to reach the global optimum values of functions \(\hbox {O}_{6}\) and \(\hbox {O}_{14}\). Similarly, the standard deviation (SD) values recorded by the SAMPE-Jaya algorithm are better or equal in 10 out of 15 cases. For the remaining cases (\(\hbox {O}_{4}, \hbox {O}_{8}, \hbox {O}_{12}\) and \(\hbox {O}_{13}\)), the performance of the proposed SAMPE-Jaya algorithm is competitive, except for objective \(\hbox {O}_{9}\). Based on these results, it can be concluded that the performance of the SAMPE-Jaya algorithm is better than that of the other algorithms.

Table 2 presents the performance comparison of the SAMPE-Jaya algorithm with the other optimization algorithms for a maximum of 200,000 function evaluations. It can be observed from Table 2 that the mean function value recorded by the SAMPE-Jaya algorithm is better or equal in 12 out of 15 cases in comparison with the other approaches. The proposed SAMPE-Jaya algorithm is able to reach the global optimum values of functions \(\hbox {O}_{3}, \hbox {O}_{6}\) and \(\hbox {O}_{14}\). Similarly, the SD values obtained by the proposed SAMPE-Jaya algorithm are better or equal in 11 out of 15 cases. For the remaining cases, the performance of the proposed SAMPE-Jaya algorithm is competitive, except for objective \(\hbox {O}_{9}\). Based on these results, it can be concluded that the performance of the SAMPE-Jaya algorithm is better than that of the other algorithms.

3.1.2 Analysis of the results related to CEC computationally expensive benchmark problems

The benchmark problems considered in this section are from CEC 2015; the objective in all cases is minimization. Problems 1–9 are shifted and rotated problems; problems 10–12 are hybrid functions; and problems 13–15 are composite functions. Detailed information about the CEC 2015 problems can be found in the literature (Ngo et al. 2016).

The computational experiments are carried out following the guidelines of CEC 2015. Maximum function evaluations (MFEs) of 500 and 1500 are considered as one of the termination criteria for the ten-dimension and thirty-dimension problems, respectively. The second stopping criterion is met when the error value (current optimum value minus global optimum value) is less than 1.00E−03. The minimum error value is averaged over 20 independent runs, and these averaged values are used for the performance comparison of the proposed SAMPE-Jaya algorithm with the other algorithms.

The computational results obtained by the proposed SAMPE-Jaya algorithm are compared with EPSO, DE, the (\(\mu +\lambda \))-evolutionary strategy (ES) and the specialized and generalized parameter experiments of the covariance matrix adaptation evolution strategy (CMAES-S and CMAES-G) (Andersson et al. 2015). Table 3 presents the comparison of the computational results. It can be observed from Table 3 that the results obtained by the SAMPE-Jaya algorithm are better in 12 cases for the 10-D problems and 9 cases for the 30-D problems, and are better than or competitive with those of the other algorithms overall. The bold values in Table 3 show the minimum mean error values obtained by the different algorithms for each function.

The computational complexity of the SAMPE-Jaya algorithm is evaluated for the 10-D and 30-D problems as per the guidelines of CEC 2015. Here, computational complexity is defined as the growth in computational time as the dimension of the problem increases. The test function provided by CEC 2015 is run on the same computer used for the optimization of the CEC 2015 problems; the time (\({T}_{0}\)) required for its complete execution was found to be 0.3447 s. The next step is to record the average processing time (\({T}_{1}\)) for each problem. The computational complexity of the algorithm (\({T}_{1}/{T}_{0}\)) is calculated and shown in Table 4. In Table 4, the values of \({T}_{\mathrm{B}}/{T}_{\mathrm{A}}\) disclose the complexity of the algorithm when the dimension of the problem changes; a value of \({T}_{\mathrm{B}}/{T}_{\mathrm{A}}\) equal to 1 means that there is no increase in computational time when the problem changes from 10-D to 30-D.
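The complexity measure can be reproduced with simple arithmetic. In the sketch below, only \(T_0 = 0.3447\) s comes from the text; the two \(T_1\) timings are hypothetical values chosen for illustration:

```python
T0 = 0.3447                    # reported execution time of the CEC 2015 test function (s)

def complexity(T1):
    """T1/T0 ratio reported in Table 4 (T1: average processing time for a problem)."""
    return T1 / T0

# Hypothetical average times for the same problem in 10-D (T_A) and 30-D (T_B):
TA = complexity(1.2)
TB = complexity(4.2)
scale = TB / TA                # a value near 3.5 means cost grew ~3.5x from 10-D to 30-D
```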

Values of \({T}_{\mathrm{B}}/{T}_{\mathrm{A}}\) greater than one disclose the growth in the computational time of the algorithm. Functions FCEC3 and FCEC5 have higher complexity in terms of computational time because of their multi-modality. Similarly, the hybrid functions FCEC11 and FCEC12 and the composite functions FCEC13–FCEC15 show higher complexity in terms of computational time. For the remaining problems, the value of \({T}_{\mathrm{B}}/{T}_{\mathrm{A}}\) is almost equal to 3.5, which reveals that the computational complexity of the present algorithm increases 3.5 times when the dimension of the problem changes from 10 to 30. The computational complexity of the SAMPE-Jaya algorithm is about 3 for problems FCEC1, FCEC2, FCEC4, FCEC6, FCEC7, FCEC8, FCEC9, FCEC10, FCEC13 and FCEC14, showing that it increases about three times when the dimension of the problem changes from 10 to 30.

The computational complexity of the SAMPE-Jaya algorithm is more than 4 for problems FCEC3, FCEC5, FCEC11, FCEC12 and FCEC15. This increase is due to the complexity and multi-modality of these problems. However, the computational complexity of the SAMPE-Jaya algorithm is lower than those of the SAMP-Jaya and EPSO algorithms. As the computational complexity of the other algorithms for the CEC 2015 problems is not available in the literature (except for EPSO), the computational complexity of the proposed SAMPE-Jaya algorithm cannot be compared with them. It can also be observed from Table 4 that the computational complexity (\({T}_{\mathrm{B}}/{T}_{\mathrm{A}}\)) of the present approach is lower than that of EPSO for all the problems of CEC 2015.

Table 3 Comparative results for mean error obtained by different approaches for CEC 2015 problems
Table 4 Computational complexity of the SAMP-Jaya algorithm

It can be observed from Tables 1, 2 and 3 that the performance of the SAMPE-Jaya algorithm is better than or competitive with that of the other algorithms. However, it is necessary to establish the significance of the proposed SAMPE-Jaya algorithm over the other algorithms with a statistical test. Therefore, the well-known ‘Friedman test’ (Joaquin et al. 2016) is used to compare the performance of the proposed SAMPE-Jaya algorithm with the other algorithms. The mean values of the fitness functions obtained by the different methods are considered for this test. The method first ranks the algorithms on each individual problem and then averages these ranks to obtain the final rank of each algorithm over the considered problems. The test was performed assuming a \(\chi ^{2}\) distribution with \(k-1\) degrees of freedom.
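The ranking underlying the Friedman test can be sketched as follows (Python for illustration; ties are broken by column order here, whereas a full implementation would assign average ranks to tied entries):

```python
import numpy as np

def friedman_mean_ranks(scores):
    """scores[i, j]: mean fitness of algorithm j on problem i (lower is better).
    Returns the average rank of each algorithm over all problems."""
    ranks = np.argsort(np.argsort(scores, axis=1), axis=1) + 1.0
    return ranks.mean(axis=0)

def friedman_statistic(scores):
    """Friedman chi-square statistic, to be compared against a chi^2
    distribution with k - 1 degrees of freedom (n problems, k algorithms)."""
    n, k = scores.shape
    R = friedman_mean_ranks(scores) * n              # per-algorithm rank sums
    return 12.0 / (n * k * (k + 1)) * np.sum(R ** 2) - 3.0 * n * (k + 1)
```

For example, with three algorithms on three problems the mean ranks immediately show which algorithm wins most often, and the statistic quantifies whether the ranking differences are significant.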

The mean ranks of the algorithms for the unimodal and multimodal problems with a maximum of 50,000 function evaluations are presented in Table 5. It can be observed from this table that the performance of the proposed SAMPE-Jaya algorithm is better than that of the other methods: it obtained a better rank than all the other algorithms used for these problems. The comparison is shown as a bar chart in Fig. 2. The mean ranks of the algorithms for the unimodal and multimodal problems with a maximum of 200,000 function evaluations are presented in Table 6. It can be observed from this table that the performance of the proposed SAMPE-Jaya algorithm is again better than that of the other algorithms, with the SAMPE-Jaya algorithm obtaining a better rank in this case also. Figure 3 presents the comparison of the ranks as a bar chart.

Table 5 Friedman rank test for unimodal and multimodal problems with 50,000 function evaluations
Fig. 2
figure 2

Friedman rank test for unimodal and multimodal problems with 50,000 function evaluations

Table 6 Friedman rank test for unimodal and multimodal problems with 200,000 function evaluations
Fig. 3
figure 3

Friedman rank test for unimodal and multimodal problems with 200,000 function evaluations

The performance comparison of the proposed SAMPE-Jaya algorithm for the CEC 2015 problems is shown in Tables 7 and 8 for the 10-D and 30-D problems, respectively. The average rank obtained by the SAMPE-Jaya algorithm for the 10-D problems is 1.6667, which is better than those of the other algorithms. Similarly, the average rank obtained by the SAMPE-Jaya algorithm for the 30-D problems is 2.2, which is better than those of the rest of the algorithms. Figures 4 and 5 present bar-chart comparisons of the average ranks for the 10-D and 30-D problems, respectively.

Thus, the Friedman rank test confirms that the performance of the proposed SAMPE-Jaya algorithm is better on the considered unconstrained benchmark problems: it obtained the best rank among all the algorithms used in the comparisons.

Table 7 Friedman rank test for CEC 2015 problems with 10 dimensions
Table 8 Friedman rank test for CEC 2015 problems with 30 dimensions
Fig. 4
figure 4

Friedman rank test for CEC 2015 problems with 10 dimensions

Fig. 5
figure 5

Friedman rank test for CEC 2015 problems with 30 dimensions

3.2 Analysis of results related to constrained benchmark problems

This section presents the performance comparison of the proposed SAMPE-Jaya algorithm on six constrained benchmark optimization problems taken from the literature (Ngo et al. 2016). In this study, the proposed SAMPE-Jaya algorithm is run 30 times for each case, and the performance comparison with the other algorithms is carried out based on the best function value over all runs (Best), the worst function value (Worst), the mean of the best function values over all runs (Mean), the standard deviation of the best solutions from the mean best solution (SD) and the mean function evaluations (MFEs) required for the computation. To handle the constraints imposed on these objectives, a static penalty function is used to penalize the fitness value of the objective function.
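A static penalty of the kind mentioned can be sketched as follows (Python for illustration; the penalty weight rho is a hypothetical choice, and constraints are assumed to be given in the form g(x) ≤ 0):

```python
def penalized(f, constraints, rho=1.0e6):
    """Wrap objective f with a static penalty: the fitness worsens by rho per
    unit of total constraint violation (constraints are g(x) <= 0)."""
    def fp(x):
        violation = sum(max(0.0, g(x)) for g in constraints)
        return f(x) + rho * violation
    return fp
```

A feasible point is left unchanged, while an infeasible one is penalized in proportion to its total violation, so the unconstrained SAMPE-Jaya search is steered back into the feasible region.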

The results obtained by the SAMPE-Jaya algorithm are compared with the other optimization algorithms, and these are: EPSO (Ngo et al. 2016), hybrid real-binary PSO (HPSO) (Jin and Rahmat-Samii 2010), \(\alpha \) constraint simplex method (\(\alpha \) simplex) (Takahama and Sakai 2005), improved stochastic ranking (ISR) (Runarsson and Xin 2005), GA hybrid Nelder–Mead simplex search and PSO (NM-PSO) (Zahara and Kao 2009), hybrid evolutionary algorithm and adaptive constraint handling technique (HEAA) (Wang et al. 2009), artificial bee colony (ABC) (Karaboga and Basturk 2007), cultural algorithms with evolutionary programming (CAEP) (Coello and Becerra 2004), differential evolution with dynamic stochastic selection (DEDS) (Zhang et al. 2008), stochastic ranking (SR) (Runarsson and Xin 2000), differential evolution (DE) (Lampinen 2002), particle swarm optimization with differential evolution (PSO-DE) (Liu et al. 2010), simple multi-membered evolution strategy (SMES) (Mezura-Montes and Coello 2006), cultured differential evolution (CULDE) (Becerra and Coello 2006), changing range genetic algorithm (CRGA) (Amirjanov 2006), self-adaptive penalty function (SAPF) (Tessema and Yen 2006), adaptive segregational constraint handling evolutionary algorithm (ASCHEA) (Hamida and Schoenauer 2002), homomorphous mappings (HM) (Koziel and Michalewicz 1999), chaotic gray wolf optimization (CGWO) algorithm (Kohli and Arora 2017), gravitational search algorithm (GSA), particle swarm optimization (PSO), whale optimization algorithm (WOA) (Mirjalili and Lewis 2016), thermal exchange optimization algorithm (TEOA) (Kaveh and Dadras 2017), SAMP-Jaya and Jaya algorithms.

The comparison of results for problem 1 is shown in Table 9. It can be observed from this table that the best value found by the SAMPE-Jaya algorithm is better than or competitive with those of the other algorithms. Similarly, the mean and worst function values obtained by the SAMPE-Jaya algorithm are better than those of the other algorithms. The proposed SAMPE-Jaya algorithm requires 96.52, 83.77, 91.13, 97.56, 94.92, 82.64, 86.84, 82.64, 59.43, 99.13, 94.59, 93.91, 87.84, 96.08, 99.18, 75.66, 7.64 and 20.26% fewer function evaluations in comparison with the SR, SMES, ISR, SAPF, ABC, CRGA, PSO, DE, HM, DEDS, HEAA, CULDE, \(\alpha \)-simplex, ASCHEA, EPSO, SAMP-Jaya and Jaya algorithms, respectively.

Table 9 Comparison of the statistical results for constrained problem 1

Table 10 presents the comparison of the results obtained by the SAMPE-Jaya algorithm for problem 2 with those of different algorithms. It can be observed from this table that the best function value achieved by the SAMPE-Jaya algorithm is the same as that of the other algorithms. The values of the worst, average and SD achieved by using the proposed SAMPE-Jaya algorithm are better than or competitive with those of the rest of the algorithms. The value of MFEs required by the SAMPE-Jaya algorithm is 98.48, 98.92, 99.83, 98.26, 97.57, 83.81, 34.37, 97.94, 98.26, 98.98, 98.98, 98.56, 98.88, 87.96, 99.17, 28.12 and 33.16% less as compared to the ISR, DEDS, ASCHEA, GA, PSO-DE, NM-PSO, CAEP, HPSO, DE, CRGA, SR, SAPF, SMES, CULDE, PSO, HEAA, \(\alpha \)-simplex, ABC, EPSO, SAMP-Jaya and Jaya algorithms, respectively.

Table 10 Comparison of the statistical result for constrained problem 2

The comparison of the statistical results for problem 3 is shown in Table 11. It can be observed from this table that the best function value achieved by the SAMPE-Jaya algorithm is the same as or better than those of the rest of the algorithms. The values of the worst, mean and SD obtained by the SAMPE-Jaya algorithm are better than or competitive with those of the rest of the algorithms. The value of MFEs required by the SAMPE-Jaya algorithm is 99.78, 99.77, 98.57, 97.71, 96.80, 78.67, 13.54, 97.28, 97.71, 98.66, 98.66, 98.66, 98.10, 99.36, 98.90, 5.32 and 12.63% less as compared to the ASCHEA, HM, DEDS, PSO-DE, CULDE, DE, CRGA, SR, PSO, ABC, SMES, ISR, SAPF, HEAA, EPSO, \(\alpha \)-simplex, EPSO, SAMP-Jaya and Jaya algorithms, respectively.

Table 11 Comparison of the statistical result for constrained problem 3

The comparison of the statistical results for problem 4 is presented in Table 12. It can be observed from this table that the best function value obtained by the SAMPE-Jaya algorithm is the same as or better than those of the other algorithms. The values of the worst, mean and SD obtained by the proposed SAMPE-Jaya algorithm are also better than or competitive with those of the other algorithms. The value of MFEs required by the SAMPE-Jaya algorithm is 98.05, 97.91, 91.66, 91.66, 99.08, 81.73, 87.84, 41.64, 94.14, 91.66, 85.41, 87.84, 70.85, 87.03, 89.24, 90.97, 91.66, 88.23, 87.84, 76.65, 3.95 and 4.83% less as compared to the ASCHEA, HM, GA2, GA1, GA, HS, SMES, CRGA, SAPF, SR, HEAA, DE, CULDE, PSO, DEDS, ISR, \(\alpha \)-simplex, PESO, CoDE, ABC, EPSO, SAMP-Jaya and Jaya algorithms, respectively.

Table 12 Comparison of the statistical result for constrained problem 4

The comparison of the statistical results for problem 5 is presented in Table 13. It can be observed from this table that the best function value recorded by using the SAMPE-Jaya algorithm is better than those of the other algorithms. The values of the worst, mean and SD obtained by the SAMPE-Jaya algorithm are also better than or competitive with those of the rest of the algorithms. The value of MFEs required by the proposed SAMPE-Jaya algorithm is 99.19, 99.25, 87.24, 77.51, 83.95, 86.11, 83.95, 88.86, 95.31, 82.69, 79.32, 97.75, 95.31, 95.31, 77.50, 95.00, 94.37, 94.14, 96.31, 77.50, 4.25 and 68.80% less as compared to the HM, ASCHEA, SR, CAEP, PSO, HPSO, PSO-DE, CULDE, DE, HS, CRGA, SAPF, SMES, ABC, DELC, DEDS, HEAA, ISR, \(\alpha \)-simplex, EPSO, SAMP-Jaya and Jaya algorithms, respectively.

Table 13 Comparison of the statistical result for constrained problem 5

The comparison of the statistical results for problem 6 is shown in Table 14. It can be observed from this table that the best function value achieved by using the SAMPE-Jaya algorithm is better than or competitive with those of the other algorithms. The values of the worst and mean obtained by the proposed SAMPE-Jaya algorithm are also better than or competitive with those of the rest of the algorithms. The value of SD obtained by the proposed SAMPE-Jaya algorithm is the minimum among the compared algorithms. The value of MFEs required by the proposed SAMPE-Jaya algorithm is 98.52, 77.57, 90.80, 90.80, 84.25, 90.80, 95.58, 93.10, 93.68, 90.36, 88.96, 88.96, 90.19, 99.72, 92.90, 93.69, 84.25, 77.93, 98.42, 4.74 and 19.11% less as compared to the ASCHEA, CULDE, ABC, PSO-DE, SMES, SAPF, GA, ISR, SR, HEAA, DELC, DEDS, DE, \(\alpha \)-simplex, PESO, ABC, EPSO, SAMP-Jaya and Jaya algorithms, respectively.

It can be observed from the results of Tables 9, 10, 11, 12, 13 and 14 that the proposed SAMPE-Jaya algorithm has performed better than or competitively with the rest of the algorithms. The SAMPE-Jaya algorithm requires fewer mean function evaluations than the other algorithms. It can be concluded from the above results that the proposed SAMPE-Jaya algorithm performs well on constrained benchmark problems in comparison with the rest of the algorithms.

Table 14 Comparison of the statistical result for constrained problem 6

3.3 Analysis of results related to constrained engineering design problems

Furthermore, the capability of the proposed SAMPE-Jaya algorithm is tested in this section by applying it to four constrained mechanical design benchmark problems taken from the literature. Descriptions of the problems can be found in the literature (Ngo et al. 2016). Problem 1 is the minimization of the welded beam design cost. Problem 2 is the minimization of the total cost of a pressure vessel. Problem 3 is the tension/compression spring design problem, with minimization of the weight of the spring as the objective. Problem 4 is the minimization problem of the design of a speed reducer.
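For constrained problems of this form (minimize f(x) subject to g\(_{i}\)(x) \(\le \) 0), a static-penalty fitness is a common way to hand the problem to an unconstrained optimizer such as Jaya. The paper does not spell out its constraint-handling code, so the following is only a minimal, hypothetical sketch; `objective`, `constraints` and the penalty weight are placeholders:

```python
def penalized_fitness(x, objective, constraints, penalty=1e6):
    """Static-penalty fitness for: minimize f(x) s.t. g_i(x) <= 0.

    Each violated constraint adds penalty * max(0, g_i(x)) to the
    objective, so infeasible candidates rank below feasible ones.
    """
    violation = sum(max(0.0, g(x)) for g in constraints)
    return objective(x) + penalty * violation
```

With a wrapper of this kind, the welded beam, pressure vessel, spring and speed reducer problems can all be evaluated by the same unconstrained search loop.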

Table 15 presents the statistical results over 30 runs of 22 algorithms for test problem 1. It can be observed from this table that, for the welded beam design problem, the proposed SAMPE-Jaya algorithm has obtained the same best and mean values as the elitist-TLBO (Rao and Waghmare 2014), EPSO, SAMP-Jaya, Jaya and DE-PSO algorithms, and better values than the rest of the algorithms. NM-PSO had obtained a better value of the objective function; however, the mean function value obtained by that approach is inferior to that of the proposed SAMPE-Jaya algorithm. The standard deviation (SD) and the worst solution obtained by the SAMPE-Jaya algorithm are superior to those of the rest of the algorithms considered for comparison. It requires fewer function evaluations for obtaining the best value in comparison with the remaining algorithms considered for the comparison. The SAMPE-Jaya algorithm is also superior in terms of robustness for the welded beam design problem. It requires approximately 98.06, 95.35, 97.73, 85.97, 94.19, 93.03, 94.27, 98.06, 90.72, 94.19, 99.48, 90.71, 95.35 and 20.06% fewer function evaluations in comparison with the GA3, GA4, CAEP, CPSO, HPSO, PSO-DE, MGA, SC, DE, UPSO, EDE, EPSO, ETLBO, WOA, PSO, GSA (Mirjalili and Lewis 2016) and Jaya algorithms, respectively. It can be concluded based on these results that, for the welded beam design problem, the SAMPE-Jaya algorithm performs better than the rest of the algorithms.

Table 15 Comparison of the statistical result for welded beam design problem

Table 16 presents the comparison of statistical results of 25 algorithms for test problem 2. The performance of the SAMPE-Jaya algorithm is superior to that of the rest of the algorithms for the pressure vessel design problem in terms of the mean, best and worst solutions. The CGWO (Kohli and Arora 2017) has reported better best and mean function values; however, this is due to treating the decision variables x\(_{1}\) (thickness of shell) and x\(_{2}\) (thickness of head) as continuous variables instead of discrete variables. Therefore, the results reported by the CGWO are not feasible. The SD value produced by the Jaya algorithm is better than those of the rest of the algorithms. The number of function evaluations required for this problem by the present method is lower than that of the remaining algorithms except elitist-TLBO. It can be concluded based on these results that the SAMPE-Jaya algorithm performs better for this problem as well in terms of quality of solution.

Table 16 Comparison of the statistical result for pressure vessel design problem
Table 17 Comparison of the statistical result for tension/compression spring problem

The comparison of the statistical results over 30 independent runs of 27 algorithms for test problem 3 is shown in Table 17. For the tension/compression spring problem, the SAMPE-Jaya algorithm is better than the other algorithms in terms of the best value, except NM-PSO. In terms of the mean and worst function values, NM-PSO is superior. The function evaluations required by the SAMPE-Jaya algorithm are fewer than those of the remaining algorithms except EPSO. The CGWO (Kohli and Arora 2017) has reported better best, worst and mean values for this problem; however, the second constraint (g\(_{2}\)) imposed on this problem is violated, which suggests that a lower penalty value was used. Hence, the results reported by using CGWO are infeasible.

The comparison of the statistical results over 30 independent runs of fourteen algorithms for test problem 4 is shown in Table 18. For the speed reducer problem, the performance of the SAMPE-Jaya algorithm is better than that of the rest of the algorithms in terms of all parameters considered for the comparison except SD. The Jaya algorithm has achieved a better SD value for this problem. However, the MFEs required for this problem using the SAMPE-Jaya algorithm are reduced by 13.61 and 51.86% in comparison with the SAMP-Jaya and Jaya algorithms, respectively. The present approach has produced improved results for this problem, and the best value of the objective function is improved by 8%.

Table 18 Comparison of the statistical result for speed reducer design problem

It can be concluded from Tables 15, 16, 17 and 18 that the performance of the proposed SAMPE-Jaya algorithm is better or competitive to the other algorithms for the standard mechanical design problems. It requires comparatively less function evaluations for achieving the optimal solutions.

Furthermore, the performance of the proposed SAMPE-Jaya algorithm is tested on three large-scale problems taken from the literature (Cheng and Jin 2015). The dimensions of the considered problems are 100, 500 and 1000. Table 19 presents the comparison of the proposed SAMPE-Jaya algorithm with the other algorithms. It can be observed from Table 19 that the performance of the SAMPE-Jaya algorithm is better for the Rosenbrock and Griewank functions and competitive for the Rastrigin function in comparison with the other algorithms on the considered large-scale problems. Hence, it can be concluded based on these results that the proposed SAMPE-Jaya algorithm performs satisfactorily for large-scale problems as well.
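For reference, the three test functions named above are sketched below in their standard unshifted forms; Cheng and Jin (2015) may use shifted or rotated variants, so these are only the baseline definitions:

```python
import math

def rosenbrock(x):
    """Rosenbrock valley; global minimum 0 at x = (1, ..., 1)."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))

def rastrigin(x):
    """Highly multimodal; global minimum 0 at the origin."""
    return 10.0 * len(x) + sum(xi ** 2 - 10.0 * math.cos(2.0 * math.pi * xi)
                               for xi in x)

def griewank(x):
    """Many regularly spaced local minima; global minimum 0 at the origin."""
    s = sum(xi ** 2 for xi in x) / 4000.0
    p = math.prod(math.cos(xi / math.sqrt(i + 1)) for i, xi in enumerate(x))
    return s - p + 1.0
```

Each function accepts a list of any length, so the 100-, 500- and 1000-dimensional instances use the same code.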

Table 19 Performance of SAMPE-Jaya with large-scale problems

After evaluating the performance of the SAMPE-Jaya algorithm on standard benchmark problems, a practical case study of micro-channel heat sink (MCHS) design optimization is considered. The motivation of the present work is to examine whether any improvement is possible in the design of the MCHS by using the SAMPE-Jaya algorithm.

The next section presents the application of the SAMPE-Jaya algorithm for the design optimization of MCHS.

4 Application of SAMPE-Jaya algorithm for the case study of a heat sink design

This case study was introduced by Husain and Kim (2010). A 10 mm \(\times \) 10 mm \(\times \) 0.42 mm silicon-based micro-channel heat sink (MCHS) considered by Husain and Kim (2010) is shown in Fig. 6. Water was used as the coolant; it flowed into the micro-channel and left at the outlet. The silicon substrate occupied the remaining portion of the heat sink. A no-slip condition was assumed at the inner walls of the channel, i.e., \(u=0\).

The thermal condition in the z-direction was given as:

$$\begin{aligned} -{k}_{\mathrm{s}}\frac{\partial {T}_{\mathrm{s}}}{\partial {x}_{\mathrm{i}}}= & {} q \,\hbox {at}\, z = 0 \,\hbox {and} \, {k}_{\mathrm{s}}\frac{\partial {T}_{\mathrm{s}}}{\partial {x}_{\mathrm{i}}} \nonumber \\&= 0 \,\hbox {at}\, z ={l}_{\mathrm{z}} \end{aligned}$$
(4.1)

The design variables considered by Husain and Kim (2010) were \(\alpha = {w}_{\mathrm{c}}/{h}_{\mathrm{c}}, \beta = {w}_{\mathrm{w}}/{h}_{\mathrm{c}}\), and \(\gamma = {w}_{\mathrm{b}}/{w}_{\mathrm{c}}\), where \({w}_{\mathrm{c}}\) is the micro-channel width at bottom; \({w}_{\mathrm{b}}\) is the micro-channel width at top; \({w}_{\mathrm{w}}\) is the fin width and \({h}_{\mathrm{c}}\) is the micro-channel depth. \({h}_{\mathrm{c}}\) is kept at \(400\,\mu \mathrm{m}\) during the whole optimization procedure.

Fig. 6
figure 6

Conventional diagram of trapezoidal MCHS (Husain and Kim 2010)

In this case study, two objective functions were considered and those were (i) thermal resistance associated with heat transfer performance and (ii) the pumping power to drive the coolant or to pass the coolant through the micro-channel. Table 20 shows design variables \(\alpha \), \(\beta \) and \(\gamma \), and their limits for both rectangular (\(w_\mathrm{b}/w_\mathrm{c}=1\)) and trapezoidal (\(0.5<{w_\mathrm{b}}/{w_\mathrm{c}}< 1\)) cross sections of MCHS.

Table 20 Design variables and their ranges for case study

The two objective functions considered are thermal resistance and pumping power. The thermal resistance is given by:

$$\begin{aligned} {R}_{\mathrm{TH}}=\Delta T_{\max }/({A}_{\mathrm{s}}\,q) \end{aligned}$$
(4.2)

where \({A}_{\mathrm{s}}\) is the area of the substrate subjected to heat flux and \(\Delta T_{\max }\) is the maximum temperature rise in the MCHS, which is given as:

$$\begin{aligned} \Delta T_{\max } = {T}_{\mathrm{s,o}}-{T}_{\mathrm{f,i}} \end{aligned}$$
(4.3)

The pumping power to move the coolant (water) through MCHS is calculated as:

$$\begin{aligned} {\bar{P}}=n\,{u}_{\mathrm{avg}}\,{A}_{\mathrm{c}}\,\Delta {p} \end{aligned}$$
(4.4)

where \(\Delta p\) is the pressure drop and \({u}_{\mathrm{avg}}\) is the mean velocity.

Pumping power and thermal resistance compete with each other because a decrease in pumping power contributes to an increase in thermal resistance. Husain and Kim (2010) calculated the objectives by using Navier–Stokes and heat conduction equations at specific design points. The response surface approximation (RSA) was then used to obtain the functional forms of the two objective functions. The polynomial responses are expressed as:

$$\begin{aligned} R_{\mathrm{TH}}= & {} 0.096 + 0.31\,\alpha - 0.019\,\beta - 0.003\,\gamma \nonumber \\&-\,0.007\,\alpha \beta + 0.031\,\alpha \gamma -0.039\,\beta \gamma \nonumber \\&+\, 0.008\,\alpha ^{2 }+ 0.027\,\beta ^{2}+ 0.029\,\gamma ^{2} \end{aligned}$$
(4.5)
$$\begin{aligned} {\bar{P}}= & {} 0.94 - 1.695\,\alpha - 0.387\,\beta - 0.649\,\gamma \nonumber \\&-\, 0.35\,\alpha \beta + 0.557\,\alpha \gamma - 0.132\,\beta \gamma \nonumber \\&+\,0.903\,\alpha ^{2 }+ 0.016\,\beta ^{2 }+ 0.135\,\gamma ^{2} \end{aligned}$$
(4.6)
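In code form, the two response surfaces can be sketched as below. The coefficients are taken directly from Eqs. (4.5) and (4.6); the first cross term of Eq. (4.5) is assumed to be an \(\alpha \beta \) term (some printings show \(\theta \beta \), read here as a typographical slip):

```python
def thermal_resistance(a, b, g):
    """R_TH surrogate of Eq. (4.5); a, b, g stand for alpha, beta, gamma."""
    return (0.096 + 0.31 * a - 0.019 * b - 0.003 * g
            - 0.007 * a * b + 0.031 * a * g - 0.039 * b * g
            + 0.008 * a ** 2 + 0.027 * b ** 2 + 0.029 * g ** 2)

def pumping_power(a, b, g):
    """P-bar surrogate of Eq. (4.6)."""
    return (0.94 - 1.695 * a - 0.387 * b - 0.649 * g
            - 0.35 * a * b + 0.557 * a * g - 0.132 * b * g
            + 0.903 * a ** 2 + 0.016 * b ** 2 + 0.135 * g ** 2)
```

These closed-form surrogates make each candidate evaluation trivially cheap, which is what allows the optimizer to spend its budget on the search itself rather than on CFD runs.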
Table 21 Design variables of objective functions by using TLBO, Jaya, SAMPE-Jaya and hybrid MOEA for case study 3
Table 22 Comparison of results of hybrid MOEA, numerical analysis, TLBO, Jaya and SAMPE-Jaya for case study 3

The design variables \(\alpha \), \(\beta \) and \(\gamma \) are the ratios of the micro-channel width at bottom to depth (i.e., \({w}_{\mathrm{c}}/{h}_{\mathrm{c}}\)), fin width to micro-channel depth (i.e., \({w}_{\mathrm{w}}/{h}_{\mathrm{c}}\)) and micro-channel width at top to width at bottom (\({w}_{\mathrm{b}}/{w}_{\mathrm{c}}\)), respectively. Solving Eqs. (4.5) and (4.6) for \(\alpha ,\beta \) and \(\gamma \) gives the optimum values of the dimensions of the micro-channel, i.e., \({w}_{\mathrm{c}}, {w}_{\mathrm{w}}, {w}_{\mathrm{b}}\) and \({h}_{\mathrm{c}}\). The three design variables \(\alpha ,\beta \) and \(\gamma \) have a significant effect on the thermal performance of the micro-channel heat sink. Design and manufacturing constraints can be handled in a better way, and Pareto optimal solutions can be spread over the whole range of variables. The Pareto optimal analysis provides information about the active design space and the relative sensitivity of the design variables to each objective function, which is helpful in comprehensive design optimization. Thus, Eqs. (4.5) and (4.6) have physical meaning. The design variables and their ranges are shown in Table 20.
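Recovering the physical channel dimensions from an optimal \((\alpha , \beta , \gamma )\) is a direct back-substitution; a minimal sketch, with \(h_{\mathrm{c}}\) fixed at \(400\,\mu \mathrm{m}\) as stated earlier:

```python
def channel_dimensions(alpha, beta, gamma, h_c=400.0):
    """Back out w_c, w_w, w_b (in micrometres) from the ratio variables.

    alpha = w_c / h_c, beta = w_w / h_c, gamma = w_b / w_c.
    """
    w_c = alpha * h_c   # micro-channel width at bottom
    w_w = beta * h_c    # fin width
    w_b = gamma * w_c   # micro-channel width at top
    return w_c, w_w, w_b
```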

Fig. 7
figure 7

Convergence of Jaya and SAMPE-Jaya algorithms for MCHS problem with equal weights of the objective functions

Fig. 8
figure 8

Pareto optimal curve for MCHS problem

The solution obtained by a priori approach depends on the weights assigned to various objective functions by designer or decision maker. By changing the weights of importance of different objective functions, a dense spread of the Pareto points can be obtained. Following a priori approach in the present work, the two objective functions are combined into a single objective function. The combined objective function Z is formed as:

$$\begin{aligned} \hbox {Minimize}\; Z= & {} w_{1} \left( \frac{Z1}{Z1_{\min }}\right) +w_2 \left( \frac{Z2}{Z2_{\min }}\right) ,\nonumber \\&\hbox {where}\; Z1 = R_{\mathrm{TH}} \, \hbox {and} \, Z2 ={\bar{P}} \end{aligned}$$
(4.7)
Table 23 Hypervolume for case study of heat sink
Table 24 Standard deviation of the solutions in each iteration using Jaya and SAMPE-Jaya algorithms

where \(w_{1}\) and \(w_{2}\) are the weights assigned to the objective functions Z1 and Z2, respectively, each between 0 and 1. These weights can be assigned to the objective functions according to the designer’s/decision maker’s priorities. Z1min and Z2min are the optimum values of Z1 and Z2, respectively, obtained by solving the optimization problem with only one objective considered at a time, ignoring the other. Eq. (4.7) can then be used to optimize both objectives simultaneously.
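As a sketch, the scalarization of Eq. (4.7) amounts to:

```python
def combined_objective(z1, z2, z1_min, z2_min, w1=0.5, w2=0.5):
    """Normalized weighted sum of the two objectives (a priori approach).

    z1_min and z2_min are the single-objective optima obtained beforehand;
    w1 and w2 are the designer-chosen weights.
    """
    return w1 * (z1 / z1_min) + w2 * (z2 / z2_min)
```

Sweeping w1 from 0 to 1 (with w2 = 1 - w1) and re-optimizing Z for each setting produces the spread of Pareto points discussed above.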

Husain and Kim (2010) used these surrogate models with a hybrid MOEA involving NSGA-II and sequential quadratic programming (SQP) to find the Pareto optimal solutions: NSGA-II was used to obtain Pareto optimal solutions, which were then refined by selecting local optimal solutions for each objective function using SQP with the NSGA-II solutions as initial solutions. The K-means clustering method was then used to group the global Pareto optimal solutions into five clusters. The whole procedure was termed a hybrid multi-objective evolutionary algorithm (MOEA).

Now, the model considered by Husain and Kim (2010) is attempted using the SAMPE-Jaya algorithm. Rao et al. (2016) used the TLBO and Jaya algorithms to obtain the Pareto optimal solutions for the same model.

The values of the design variables given by the SAMPE-Jaya algorithm, Jaya algorithm, TLBO algorithm, hybrid MOEA and numerical analysis are shown in Table 21. Table 22 shows the comparison of results of the SAMPE-Jaya algorithm, Jaya algorithm, TLBO algorithm, hybrid MOEA and numerical analysis. It can be observed from Table 22 that the SAMPE-Jaya algorithm has performed better than the hybrid MOEA, numerical analysis, TLBO and Jaya algorithms for different weights of the objective functions of the bi-objective optimization problem considered. The performance of the TLBO algorithm comes next to that of the Jaya algorithm. Figure 7 presents the convergence of the Jaya and SAMPE-Jaya algorithms for the MCHS problem with equal weights.

Figure 8 shows the Pareto fronts obtained by using the SAMPE-Jaya algorithm, Jaya algorithm, TLBO algorithm and hybrid MOEA, representing five clusters. It can also be observed that the SAMPE-Jaya algorithm has provided better results than the hybrid MOEA proposed by Husain and Kim (2010). Each extreme end of the Pareto curve represents a higher value of one objective and a lower value of the other.

In order to make a fair comparison between the performances of the algorithms for multi-objective optimization problems, a quality measure known as the hypervolume is calculated. The hypervolume is defined as the n-dimensional space that is enclosed by a set of points. It encapsulates in a single unary value a measure of the spread of the solutions along the Pareto front, as well as their closeness to the Pareto optimal front.
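For a two-objective minimization front, the hypervolume can be computed by a simple sweep. A minimal sketch (assuming the front points are mutually non-dominated and the reference point is dominated by every front point):

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a 2-D minimization front against reference point ref.

    front: list of (f1, f2) non-dominated points; ref: (r1, r2).
    Sorting by f1 makes f2 strictly decreasing along the front, so each
    point contributes one rectangle up to the reference point.
    """
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(front):
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv
```

A larger hypervolume indicates a front that is both closer to the true Pareto front and better spread, which is the basis of the comparison in Table 23.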

Table 23 presents the values of the hypervolume obtained by the various algorithms for this case study. It can be observed from Table 23 that the value of the hypervolume obtained by the SAMPE-Jaya algorithm for the heat sink design optimization is better than those of the hybrid MOEA, numerical method, TLBO and Jaya algorithms. Hence, it can be concluded that the performance of the SAMPE-Jaya algorithm is better than that of the hybrid MOEA, TLBO and Jaya algorithms.

Furthermore, for checking the diversity of the search process, the standard deviation of the objective function values is calculated and recorded after each iteration, as shown in Table 24. It can be observed from this table that the standard deviation is not zero after any iteration, and it also differs from iteration to iteration. This shows that the algorithm continuously explores the search space and does not get trapped in local minima. The early convergence seen in Fig. 7 shows that the algorithm reaches the global or near-global optimum value of the objective function in a few iterations.
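The diversity check described here can be sketched as one value per iteration:

```python
import statistics

def diversity_trace(history):
    """Population standard deviation of objective values per iteration.

    history: one list of objective values per iteration. A nonzero,
    changing SD suggests the population has not collapsed to a point.
    """
    return [statistics.pstdev(pop) for pop in history]
```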

The next section presents the conclusions of this work.

5 Conclusions

This study proposes an elitist-based self-adaptive multi-population Jaya algorithm. The performance of the proposed algorithm is examined on small- as well as large-scale unconstrained and constrained benchmark problems, in addition to the computationally expensive problems of CEC 2015. The Friedman rank test is used to find the average rank of the algorithms, and it is observed that the proposed algorithm is better than the other algorithms. Furthermore, the proposed method is applied to the design optimization problem of a micro-channel heat sink (MCHS). In the proposed method, a multi-population search scheme is used to enhance the search mechanism of the Jaya algorithm, dividing the population into a number of subpopulations adaptively. This subpopulation-based scheme can be easily integrated with other single-population-based advanced optimization algorithms. The results of the SAMPE-Jaya algorithm for the benchmark problems are found to be better than or competitive with those of the latest reported methods used for optimization of the same problems. In the case of the MCHS, the proposed SAMPE-Jaya algorithm has obtained better Pareto optimal solutions than those of the hybrid MOEA, numerical analysis, TLBO and Jaya algorithms.

The concept of the SAMPE-Jaya algorithm is simple, and it does not have any algorithm-specific parameters to be tuned. Therefore, it may be easily implemented on engineering problems, which are usually complicated, involve a number of design parameters and may have discontinuities in the objective function.