1 Introduction

Optimization is a problem-solving technique whose purpose is to find the parameter values that produce the optimum of a function. In other words, the goal in optimization is to find a global optimum for an objective function defined on a specific search space and under some constraints. Most real-world problems, such as scheduling, vehicle routing, bio-computing and engineering problems, are concerned with finding the maximum or minimum value of a function efficiently; such problems can therefore be called optimization problems. Optimization problems that include a large number of decision variables are known as large-scale optimization problems, and the term large-scale global optimization (LSGO) is used for this particular category of global optimization problems (Molina et al. 2018b).

Optimization problems must be modeled mathematically in order to be solved by computers. For large-scale problems, both the modeling and the solution process can become quite complicated depending on the number of variables and data types. Several factors make large-scale problems extremely difficult. As the problem size increases, the search space grows exponentially, the complexity and properties of some functions change, and the number of dependent variables increases (Li et al. 2013). Due to all these factors, the solution requires a very large computational cost. Therefore, more effective and efficient search techniques are needed to explore, within a given time, all promising regions of a search space that expands exponentially with the number of problem dimensions. As a result, researchers have proposed various algorithms and techniques to overcome these problems. These studies can be roughly divided into three classes: standard evolutionary algorithms, cooperative coevolution (CC)-based evolutionary algorithms, and memetic algorithms (Singh and Jana 2017). The method proposed in this paper is a novel local search method which can be classified under memetic algorithms.

With the development of technology, large-scale problems have emerged in many areas such as business, science and engineering. Therefore, large-scale optimization problems and solution techniques have become an important research area and have received more and more attention in recent decades. International competitions and special sessions attract researchers' attention to LSGO and play a key role in designing powerful, scalable and efficient algorithms in this area. The first special session on LSGO was held in 2008 at the IEEE Congress on Evolutionary Computation (CEC). Since then, the CEC has held a special session on LSGO every year, except 2009 and 2011. A detailed review and analysis of the CEC special sessions and competitions on LSGO can be found in Molina et al. (2018a).

Objectives of this work can be summarized as follows:

  1. To design a local search method which can be an effective option for LSGO.

  2. To analyze the effect of local search methods on a main framework.

  3. To analyze the performance of a memetic framework which uses the proposed local search method by comparing it with recently proposed algorithms.

In line with these research objectives, the remainder of the paper is organized as follows. In Sect. 2, related work on memetic algorithms and local search methods is given. Section 3 presents the novel local search method in detail. Numerical experiments and performance evaluation are given in Sect. 4. Finally, in Sect. 5, conclusions are drawn.

2 Background and related works

The optimization of large-scale problems is considered a challenging task, because the complexity introduced by the increase in size makes it impossible to examine every possible solution or combination when searching for the solution of a problem. For this reason, it is preferable to use heuristic methods that can find solutions of acceptable quality in reasonable time. Classic heuristic algorithms have difficulty in dealing with the problems caused by the increase in size and thus lose their advantages and effectiveness. The term "curse of dimensionality" refers to this difficulty of finding the optimum in a high-dimensional space using extensive search (Chen et al. 2015). Motivated by this challenge, many effective and robust metaheuristic algorithms have been proposed to obtain high-quality solutions at low computational cost.

One of the most effective methods used for LSGO problems is the memetic algorithm (MA), which is the combination of evolutionary algorithms (EAs) with local search (Moscato 1989). MAs are inspired by the combination of natural adaptation and lifetime learning of the individuals in a population. An MA is a kind of EA that applies separate processes to exploit the search space, for example, using extra local search methods to improve the fitness of individuals (Krasnogor and Smith 2005).

When considering related papers and the competitions held in the special sessions, it can be seen that memetic algorithms have an important and effective place in the LSGO research area. The first winner of the LSGO competitions was multiple trajectory search (MTS) (Tseng and Chen 2008), a combination of three local searches presented at CEC'2008. The results of MTS led many researchers to realize that local searches can significantly improve the results of EAs. Researchers then turned toward embedding local search techniques into existing optimization algorithms. The algorithms that adopt the memetic approach and have been successful in CEC competitions are briefly mentioned in chronological order below. MA-SW-Chains (Molina et al. 2010) is a memetic algorithm which combines a genetic algorithm with the Solis and Wets algorithm (Solis and Wets 1981). Likewise, self-adaptive differential evolution with multi-trajectory search (SaDE-MMTS) (Zhao et al. 2011) combines JADE with a modified version of MTS-LS1, the first of the local searches of the MTS algorithm. Multiple offspring sampling (MOS) (LaTorre et al. 2012) is a hybrid approach combining the Solis and Wets algorithm and the MTS-LS1 local search technique. In the improved version of MOS (LaTorre et al. 2013), MTS-LS1-Reduced, a modified version of MTS-LS1, and Solis and Wets are used as local search methods. The improved version of MOS remained the winning and unrivaled algorithm in the CEC LSGO competitions until 2018. Iterative hybridization of DE with local search (IHDELS) (Molina and Herrera 2015) is a memetic algorithm that combines the differential evolution algorithm with two local search methods [MTS-LS1 and L-BFGS-B (Morales and Nocedal 2011)] in an iterated way. Coral reefs optimization with substrate layers (CRO-SL) (Salcedo-Sanz et al. 2016) is a memetic algorithm which uses MTS-LS1 as its local search method. MLSHADE-SPA (Hadi et al. 2019) is a memetic framework that combines three population-based algorithms with a modified version of MTS-LS1 as local search. SHADE-ILS (Molina et al. 2018b) combines the exploration power of the SHADE algorithm with the exploitation ability of several local searches (MTS-LS1 and L-BFGS-B).

When the memetic algorithms used in the LSGO field are examined, three local search methods come into prominence: MTS-LS1, Solis and Wets, and L-BFGS-B.

MTS (Tseng and Chen 2008) uses three different local search methods to look for candidate solutions neighboring the current solution. Among the local search methods in this algorithm, MTS-LS1 is the most effective one for LSGO. MTS-LS1 evaluates each dimension one by one from the first dimension to the last. It is a local search method which is suitable for separable problems but sensitive to rotations (Molina et al. 2018b). MTS-LS1 and its variants have been used as local search methods in algorithms and frameworks such as SHADE-ILS (Molina et al. 2018b), MTS (Tseng and Chen 2008), SaDE-MMTS (Zhao et al. 2011), MOS (LaTorre et al. 2012, 2013), IHDELS (Molina and Herrera 2015), CRO-SL (Salcedo-Sanz et al. 2016) and MLSHADE-SPA (Hadi et al. 2019).

The Solis and Wets algorithm is a local search method which adapts the step size according to consecutive failures or successes of the movement from one solution to another. This method was used in MA-SW-Chains (Molina et al. 2010) and in both the 2012 (LaTorre et al. 2012) and 2013 (LaTorre et al. 2013) versions of MOS, which are pioneering algorithms in the LSGO field. More detailed information about the Solis and Wets search algorithm can be found in Solis and Wets (1981).

L-BFGS-B (Morales and Nocedal 2011) is a local search method that uses an approximation of the gradient to improve the search and is sensitive to rotation (Molina et al. 2018b). This method is used in IHDELS (Molina and Herrera 2015) and SHADE-ILS (Molina et al. 2018b).

As can be seen in these examples (Fig. 1), the combination of a population-based method and a local search has been used in many of the best-performing methods. This technique, which provides a good balance between exploration and exploitation, enables the design of effective and efficient algorithms to overcome the challenges of LSGO.

Fig. 1

Graphical representation of related works

The golden ratio (GR) is an irrational number with fractal properties, equal to 1.61803398874989…, and the inverse golden ratio (IGR) is 0.6180339887…; GR and IGR have the same digits after the decimal point. It defines a special proportion which can be observed in several natural phenomena, such as the geometry of crystals, the placement of stems in plants, and the proportions of parts of the human body. Nature, biology, computational science, the stock market, mathematics, engineering, industrial design, architecture and art can be given as examples of fields where the golden ratio is used (Ciucurel et al. 2018; Dabbour 2012; Lu 2003; Shekhawat 2015). More detailed information about the golden ratio can be obtained from Hemenway (2005), Iosa et al. (2018), Livio (2008) and Luttge and Souza (2019).

The golden ratio is also used in optimization algorithms as a coefficient, a search technique or a selection method.

The golden ratio is used as a coefficient in golden ratio particle swarm optimization (GRPSO) (Manikantan et al. 2012). In this algorithm, the c1 and c2 parameters in the basic PSO velocity equation are replaced by the inverse golden ratio (IGR) and the golden ratio (GR), respectively.
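Assuming the standard PSO velocity update, this substitution can be sketched as follows (an illustrative reading; the exact GRPSO formulation may differ):

$$ v_{i}^{t + 1} = wv_{i}^{t} + {\text{IGR}} \cdot r_{1} \left( {p_{i} - x_{i}^{t} } \right) + {\text{GR}} \cdot r_{2} \left( {g - x_{i}^{t} } \right) $$

where \( w \) is the inertia weight, \( r_{1} \) and \( r_{2} \) are random numbers in [0, 1], \( p_{i} \) is the personal best position and \( g \) is the global best position.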

Golden section search (GSS) is widely used as a local search method in the optimization of unimodal functions. In this method, the golden ratio is used to determine the new lower and upper bounds for the next step in order to divide the search space. GSS constantly narrows the search space while seeking the optimum solution (Aurasopon and Khamsen 2019; Bansal et al. 2013; Koupaei et al. 2016; Kumar et al. 2014; Neri and Tirronen 2009; Zhang 2019).
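The classical GSS procedure can be sketched in a few lines (shown here in Python as an illustration, not the implementation used in any of the cited works):

```python
import math

def golden_section_search(f, lo, hi, tol=1e-8):
    """Minimize a unimodal function f on [lo, hi] by golden section search."""
    inv_gr = (math.sqrt(5) - 1) / 2  # inverse golden ratio, ~0.618
    a, b = lo, hi
    # Two interior probe points placed at golden-ratio proportions.
    c = b - inv_gr * (b - a)
    d = a + inv_gr * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):   # minimum lies in [a, d]: shrink from the right
            b, d = d, c
            c = b - inv_gr * (b - a)
        else:             # minimum lies in [c, b]: shrink from the left
            a, c = c, d
            d = a + inv_gr * (b - a)
    return (a + b) / 2

# Example: minimum of (x - 2)^2 on [0, 5]
x_star = golden_section_search(lambda x: (x - 2) ** 2, 0.0, 5.0)
```

Each iteration shrinks the bracket by the factor 0.618, so convergence is linear and only one new function evaluation per step is needed if probe values are cached.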

In the use of the golden ratio as a selection method, the population is divided into subgroups determined by the golden ratio. Each subgroup contains a certain number of individuals which have similar quality. The probability of each individual being chosen depends on the group to which it belongs. This approach provides a better balance between elitism and diversity (Cuevas et al. 2018).

In GRGLS, the method proposed in this study, the golden ratio is used as a coefficient to adjust the adaptive step size. During the search process, the step size is increased or decreased according to the improvement in the individual's fitness value. This ensures a better balance between exploration and exploitation, so GRGLS has no limitations with respect to problem characteristics (unimodal, multimodal, etc.).

3 Proposed method

3.1 Improved MLSHADE-SPA (IMLSHADE-SPA)

MLSHADE-SPA (Hadi et al. 2019), a memetic framework, was chosen as the main framework for solving LSGO problems effectively. This memetic framework is a combination of population-based algorithms (LSHADE-SPA (Mohamed et al. 2017), EADE (Mohamed 2017) and ANDE (Mohamed and Almazyad 2017)) and a local search method (MMTS).

MLSHADE-SPA uses several strategies in the search process, such as sharing computational resources, divide and conquer, and population reduction. To share computational resources, \( { \hbox{max} }_{\text{FEs}} \) is divided into two main parts: the EAs consume half of \( { \hbox{max} }_{\text{FEs}} \), and the remainder is consumed by the local search method. The EAs use their computational resources in two stages. In the first stage, LSHADE-SPA works on all dimensions. In the second stage, all EAs work together by sharing dimensions (divide and conquer), and computational resources are allocated to each EA according to its improvement ratio. The distribution of computational resources is shown in Fig. 2.

Fig. 2

Distribution of computational resources of the MLSHADE-SPA

In this framework, linear population size reduction (LPSR) is applied until the population size reaches the minimum population size. The population size to be reduced to (\( {\text{Pop}}_{\text{Reduce}} \)) is calculated by Eq. (1). After \( {\text{Pop}}_{\text{Reduce}} \) is calculated, individuals are first sorted according to their fitness values. The individuals with the worst fitness values are then removed from the current population until the population size reaches \( {\text{Pop}}_{\text{Reduce}} \). With the effect of LPSR, the population size reaches the minimum population size within the first half of \( \max_{\text{FEs}} \).

$$ {\text{Pop}}_{\text{Reduce}} = {\text{round}}\left[ {\left( {\frac{{{\text{Pop}}_{ \hbox{min} } - {\text{Pop}}_{\text{init}} }}{{0.5*\max_{\text{FEs}} }} } \right)*c_{\text{FEs}} + {\text{Pop}}_{\text{init}} } \right] $$
(1)

where \( c_{\text{FEs}} \) is the current fitness evaluation number, \( \max_{\text{FEs}} \) is the maximum fitness evaluation number, \( {\text{Pop}}_{\text{init}} \) is the initial population size and \( {\text{Pop}}_{\hbox{min} } \) is the minimum population size.
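The LPSR schedule described above can be sketched as follows (an illustrative Python helper with a hypothetical name; note the negative slope, since the population must shrink linearly from the initial size to the minimum over the first half of the evaluation budget, and the clamp to the minimum size is an added safeguard):

```python
def lpsr_target_size(pop_init, pop_min, c_fes, max_fes):
    """Target population size after c_fes fitness evaluations under LPSR.

    The size decreases linearly from pop_init to pop_min over the
    first half of max_fes, then stays at pop_min.
    """
    size = round((pop_min - pop_init) / (0.5 * max_fes) * c_fes + pop_init)
    return max(size, pop_min)

# With the parameter values used in this paper (pop_init=250, pop_min=20,
# max_fes=3.0e6), the population reaches 20 at c_fes = 1.5e6.
```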

MLSHADE-SPA is improved by replacing its local search method (MMTS) with the proposed GRGLS local search method; the resulting framework is named Improved MLSHADE-SPA (IMLSHADE-SPA). Figure 3 illustrates the structure of the improved framework.

Fig. 3

Structure of the improved MLSHADE-SPA

3.2 A novel local search method: golden ratio guided local search with dynamic step size (GRGLS)

Evolutionary algorithms (EAs) usually use a combination of nature-inspired rules and randomness to reach the optimal solution of optimization problems. EAs perform two fundamental search phases: exploration and exploitation. Visiting completely new points in a search area is called exploration, and fine-tuning those points within the neighborhood of previously visited locations in order to improve them is called exploitation. The success of an evolutionary algorithm depends on its capacity to find a good balance between exploration and exploitation. The purpose of this balance is to avoid getting stuck in local solutions that are not the optimum and to detect refined solutions that may be closer to the target solution in the neighborhood of the current solution. In EAs, the search process usually begins with exploration and then gradually turns into exploitation (Črepinšek et al. 2013).

Memetic algorithms are the combination of population-based global search and heuristic local search. In the exploration phase, the population-based algorithm guides the search and finds candidate solutions by exploring the solution space. The exploitation phase is then triggered by calling the local search method. Local search usually starts from a candidate solution and iteratively moves to a neighboring solution, and can thus further improve a candidate solution by exploiting its neighborhood. This collaboration between global search and local search enables the solution space to be searched much more thoroughly and makes MAs more efficient searchers than EAs. As a result, memetic algorithms provide higher performance than either stand-alone population-based global search or stand-alone local search (Lin and Chen 2011).

In this paper, a novel local search method named golden ratio guided local search with dynamic step size (GRGLS) is proposed. This method can be used in memetic algorithms for the purpose of developing better solutions for large-scale optimization problems. In the proposed local search method, the golden ratio is used to change the step size dynamically.

When the local search methods used in memetic algorithms are considered, it is clear that searching along each dimension from the first to the last is an effective technique in the LSGO field. MTS-LS1 and its variants, which use this technique, are preferred by effective and efficient algorithms (Hadi et al. 2019; LaTorre et al. 2012, 2013; Molina and Herrera 2015; Molina et al. 2018b; Salcedo-Sanz et al. 2016; Tseng and Chen 2008; Zhao et al. 2011). Another effective operation, used in both the Solis and Wets algorithm and MTS-LS1, is to look at points in opposite directions along the current dimension. The proposed local search method GRGLS incorporates both techniques by searching for solutions in opposite directions from the current solution while changing the step size under the guidance of the golden ratio.

In golden ratio guided local search with dynamic step size (GRGLS), each dimension of the problem has its own step size. After evaluating experiments with different initial values for the step size, the optimum initial step size was determined to be 0.1 for each dimension. Local search is applied to the best individual, i.e., the one with the best fitness value in the population, along each dimension from the first to the last. In the following equations, \( x_{i} \) represents the current position, \( x_{i + 1} \) represents the next position, \( {\text{dss}}_{i} \) represents the step size of the corresponding dimension, \( {\text{GoldenRatio}} \) represents 1.618, and \( {\text{InverseGoldenRatio}} \) represents 0.618.

First, the point located one step size ahead of the best individual along the current dimension is evaluated in search of a better solution, as given by Eq. (2).

$$ x_{i}^{t + 1} = x_{i}^{t} + {\text{dss}}_{i}^{t} $$
(2)

The new candidate solution, formed by changing one of the parameters, is evaluated by the objective function. If the new solution is better than the best solution, it is assigned as the best individual and the step size of the changed dimension is increased by the golden ratio. If the new solution is worse than the best solution, the point located one step size behind the best individual along the current dimension is evaluated in search of a better solution, as given by Eq. (3).

$$ x_{i}^{t + 1} = x_{i}^{t} - {\text{dss}}_{i}^{t} $$
(3)

The new candidate solution, formed by changing one of the parameters, is evaluated by the objective function. If the new solution is better than the best solution \( \left( {f\left( {x_{\text{best}}^{t} } \right)} \right) \), it is assigned as the best individual and the step size of the changed dimension is increased by the golden ratio; otherwise, the step size of the changed dimension is decreased by the inverse golden ratio, as given by Eq. (4).

$$ {\text{dss}}_{i}^{t + 1} = \left\{ {\begin{array}{*{20}l} {{\text{dss}}_{i}^{t} *{\text{GoldenRatio}},\;f\left( {x_{i}^{t + 1} } \right) \le f\left( {x_{\text{best}}^{t} } \right)} \hfill \\ {{\text{dss}}_{i}^{t} *{\text{InverseGoldenRatio}},\;f\left( {x_{i}^{t + 1} } \right) > f\left( {x_{\text{best}}^{t} } \right)} \hfill \\ \end{array} } \right. $$
(4)

All these steps are repeated, moving to the next dimension until the last dimension of the problem is reached. The flowchart and pseudo-code of GRGLS are given in Figs. 4 and 5.
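One pass of the procedure described by Eqs. (2)–(4) can be sketched as follows (an illustrative Python reading of the text, not the authors' MATLAB implementation; function and variable names are ours):

```python
GOLDEN_RATIO = 1.618
INVERSE_GOLDEN_RATIO = 0.618

def grgls_pass(f, best, dss):
    """One GRGLS pass over all dimensions, minimizing f.

    best -- current best solution (list of floats), updated in place
    dss  -- per-dimension step sizes (list of floats), updated in place
    """
    f_best = f(best)
    for i in range(len(best)):
        # Step forward along dimension i (Eq. 2).
        candidate = best[:]
        candidate[i] = best[i] + dss[i]
        if f(candidate) <= f_best:
            best[:] = candidate
            f_best = f(best)
            dss[i] *= GOLDEN_RATIO          # success: grow step size (Eq. 4)
            continue
        # Forward move failed: step backward along dimension i (Eq. 3).
        candidate[i] = best[i] - dss[i]
        if f(candidate) <= f_best:
            best[:] = candidate
            f_best = f(best)
            dss[i] *= GOLDEN_RATIO          # success: grow step size (Eq. 4)
        else:
            dss[i] *= INVERSE_GOLDEN_RATIO  # failure: shrink step size (Eq. 4)
    return best, dss
```

Because successful moves multiply the step size by 1.618 and failures multiply it by 0.618, repeated successes grow the step geometrically while repeated failures shrink it toward a fine-grained local refinement.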

Fig. 4

Flowchart of the GRGLS

Fig. 5

Pseudo-code of the GRGLS

4 Performance evaluation

4.1 Experimental environment

Experiments were performed on the computer platform which is based on an Intel(R) Core(TM) i7-7700HQ 2.80 GHz processor, 16.0 GB of RAM, and the Microsoft Windows 10 Enterprise operating system.

All experimental work was conducted using MATLAB. The source code is available from Uymaz (2020). The code of MLSHADE-SPA, used as the main framework in our experiments, was downloaded from the authors' website (Mohamed 2019). The proposed method GRGLS and MTS-LS1 were also coded in MATLAB.

4.2 Benchmark set

Performance analysis experiments were carried out with the IEEE CEC 2013 benchmark set (Li et al. 2013), the most recent benchmark set defined for LSGO. This benchmark set is designed to resemble the features of real-world problems as closely as possible. To accomplish this, the characteristics included in the functions are "non-uniform subcomponent sizes, imbalance in the contribution of subcomponents, functions with overlapping subcomponents and new transformations to the base functions" (Li et al. 2013). The benchmark set includes 15 continuous optimization functions with different degrees of separability. All functions in the benchmark set are minimization problems and their optimum function values are zero. The dimension of every function in the benchmark set is 1000, except for F13 and F14, where D is 905. The main properties are given in Table 1; detailed information about the benchmark set can be found in Li et al. (2013).

Table 1 Functions list

4.3 Experimental results

In this study, two performance analysis experiments were performed to verify the efficiency of the proposed local search method GRGLS. First, the performance of GRGLS was compared with the MTS-LS1 and MMTS local search methods by placing each of them in the same main framework. Second, the IMLSHADE-SPA algorithm, formed by replacing the local search method of the MLSHADE-SPA algorithm with GRGLS, was compared with nine recently proposed algorithms.

4.3.1 Effect of proposed local search method on main framework’s performance

In the first phase of the experiments, the main framework was chosen, and the results obtained by using only one of GRGLS, MTS-LS1 and MMTS at a time in the same main framework were evaluated.

In MTS-LS1, the initial step value for each dimension is set to the difference between the upper bound and the lower bound. Local search is applied only to the best individual, along all dimensions, and the step size of each dimension can differ. Local search is applied by adding or subtracting the step size to or from the corresponding dimension. First, the step size is subtracted from the solution. If this does not reach a better result, half of the step size is added to the original value of the current dimension. If a better result is still not achieved after these two moves, the step size is halved and checked against the threshold 1.00E−15. If it falls below this value, the step size is reset by multiplying the difference between the upper and lower bounds by 0.4. More detailed information about this method can be obtained from Tseng and Chen (2008).
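The per-dimension move just described can be sketched as follows (an illustrative Python reading of the description above, not the original implementation; the helper name is ours):

```python
def mts_ls1_dimension_step(f, best, f_best, i, sr, lower, upper):
    """One MTS-LS1-style move on dimension i, minimizing f.

    Returns the updated (f_best, sr); `best` is updated in place on success.
    """
    candidate = best[:]
    candidate[i] = best[i] - sr            # first try: subtract the step size
    f_new = f(candidate)
    if f_new < f_best:
        best[:] = candidate
        return f_new, sr
    candidate[i] = best[i] + 0.5 * sr      # second try: add half the step size
    f_new = f(candidate)
    if f_new < f_best:
        best[:] = candidate
        return f_new, sr
    sr *= 0.5                              # both moves failed: halve the step size
    if sr < 1.0e-15:                       # reset when the step becomes too small
        sr = 0.4 * (upper - lower)
    return f_best, sr
```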

In MMTS, which is used in MLSHADE-SPA, the initial step value for each dimension is determined by multiplying a random number in the range [0, 1] by the difference between the current minimum and maximum values of the dimension; the smaller of the resulting candidate values is assigned as the initial step value for each dimension. When searching along the corresponding dimension, first the point located one step size ahead (+) of the best individual is evaluated in search of a better solution. If this reaches a better result, the search continues in the same direction by adding another step size until it fails to generate a better result or reaches the upper bound. In other words, the step size increases linearly as long as the moves are successful. The same procedure is then run backward (−) from the point at which the last successful result was reached. More detailed information about this method can be obtained from Hadi et al. (2019).
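The linear expansion search on a single dimension can be sketched as follows (an illustrative Python reading of the description above; the original MLSHADE-SPA code should be consulted for the exact details, and the helper name is ours):

```python
def mmts_dimension_search(f, best, f_best, i, step, lower, upper):
    """MMTS-style search on dimension i: keep stepping while results improve.

    Tries the forward (+) direction first, then the backward (-) direction
    from the last successful point. `best` is updated in place.
    Returns the updated f_best.
    """
    for direction in (+1.0, -1.0):
        while True:
            candidate = best[:]
            candidate[i] = best[i] + direction * step
            if candidate[i] > upper or candidate[i] < lower:
                break                      # stop at the bound
            f_new = f(candidate)
            if f_new < f_best:             # success: accept and take another step
                best[:] = candidate
                f_best = f_new
            else:
                break                      # failure: stop in this direction
    return f_best
```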

Briefly, GRGLS, MTS-LS1 and MMTS are three basically similar local search methods with some differences.

The similarities between these methods are:

  • Applying local search to the best individual, i.e., the one with the best fitness value in the population,

  • Searching along one dimension from the first dimension to the last dimension,

  • Looking at the opposite points of the current dimension while creating a new solution.

The differences between these methods are:

  • Each method follows a different path when assigning initial values to the array that holds the step sizes for each dimension.

  • Each method uses different coefficients when changing the step size.

  • The proposed method increases the step size exponentially when it finds a better solution, while the others keep the step size unchanged or increase it linearly in the same situation.

The experiments were carried out using the code obtained from the MLSHADE-SPA authors' website. First, the original code, which includes MMTS as the local search method, was run and the results were recorded. Then the same experiments were performed under equal conditions with MMTS replaced by MTS-LS1 (Tseng and Chen 2008), and the results were recorded. Finally, the same experiments were performed under equal conditions with MMTS replaced by GRGLS. The initial population size, minimum population size and iteration number parameters of the framework were set to 250, 20 and 50, respectively. In all experiments, the instructions suggested for CEC special sessions and competitions were followed. All reported results are the averages of 25 independent runs for each test problem. The maximum number of fitness evaluations (\( \max_{\text{FEs}} \)) was set to 3.0e6. In all these experiments, results were recorded at FEs = 1.2e5, 6.0e5 and 3.0e6 according to the CEC competition guidelines (Li et al. 2013); the results for FEs = 1.2e5 are listed in Table 2, those for FEs = 6.0e5 in Table 3 and those for FEs = 3.0e6 in Table 4.

Table 2 Results (mean values) for the 1.2e5 FEs milestone
Table 3 Results (mean values) for the 6.0e5 FEs milestone
Table 4 Results (mean values) for the 3.0e6 FEs milestone

In addition, the Wilcoxon signed-rank test was conducted for statistical analysis using the best values recorded at the end of each run. The results of these analyses between GRGLS and each of the other local search methods are shown for 1.2e5, 6.0e5 and 3.0e6 FEs in Tables 5, 6 and 7, respectively.

Table 5 The results of two-sided Wilcoxon signed-rank test for the 1.2e5 FEs milestone (α = 0.05)
Table 6 The results of two-sided Wilcoxon signed-rank test for the 6.0e5 FEs milestone (α = 0.05)
Table 7 The results of two-sided Wilcoxon signed-rank test for the 3.0e6 FEs milestone (α = 0.05)

Considering the results at 1.2e5 fitness evaluations, GRGLS is the method that obtains the best result in 8 of the 15 functions, and it is in second place in the general ranking. At 1.2e5 fitness evaluations, the proposed method's performance is lower than that of the other methods on the fully separable functions.

Considering the results at 6.0e5 fitness evaluations, GRGLS is the method that obtains the best result in 9 of the 15 functions. The proposed method improves on all functions compared with the results at 1.2e5 fitness evaluations. The improvement on F10 and F12, which are multimodal and belong to two different function groups, is remarkable. GRGLS is ranked first in the general ranking at 6.0e5 fitness evaluations.

Considering the results at 3.0e6 fitness evaluations, GRGLS is the method that obtains the best result in 10 of the 15 functions. Compared with the results at 6.0e5 fitness evaluations, only the ranks on F5 and F9 change for the proposed method, which still keeps first place in the general ranking.

The averages of the rankings given in Tables 2, 3 and 4 are shown in Fig. 6. The lowest ranking average represents the most successful algorithm. The results in Fig. 6 show that the proposed method is more successful than the others. Although MTS-LS1 is the best method at 1.2e5 fitness evaluations, from 6.0e5 fitness evaluations onward GRGLS takes first place and maintains its success by obtaining the best result for 10 of the 15 functions at 3.0e6 fitness evaluations.

Fig. 6

General performances of methods

The convergence curves of GRGLS, MTS-LS1 and MMTS for six selected problems (F1, F5, F7, F11, F13 and F15) are presented in Fig. 7. Each of these functions is selected from a different function group. The y axes represent the mean values of 25 independent runs of the related function. The x axes represent recording points taken every 2000 fitness evaluations; for example, a value of 300 on the x axis corresponds to 6.0e5 FEs.

Fig. 7

Convergence curves of sample functions

The convergence curves confirm that GRGLS is more successful than the other local search methods, in line with the results shown in Tables 2, 3 and 4. The three methods produce similar results, except on the fully separable functions, up to approximately 1.5e6 fitness evaluations. Beyond that point, the performance of the methods diverges. The proposed method shows a robust convergence characteristic from beginning to end, confirming that GRGLS is more successful than the other local search methods.

To analyze the problem-solving performance of the local search methods, the Wilcoxon signed-rank test was used in addition to the tables of average best values, the ranking averages and the convergence curves. The test was conducted using the global minimum values obtained from the 25 runs for problem-based pairwise comparison of the algorithms. In Tables 5, 6 and 7, statistical comparisons by the Wilcoxon signed-rank test between GRGLS and the other local search methods are demonstrated. The last rows of Tables 5, 6 and 7 show the total counts in the (+/=/−) format for the three statistical significance cases in the pairwise comparison. Tables 5, 6 and 7 show that GRGLS achieves statistically better results than the compared algorithms at a significance level of α = 0.05. In particular, the advantage of GRGLS grows as the number of fitness evaluations increases.

4.3.2 Comparison with other LSGO algorithms

MMTS, a modified version of MTS-LS1, was used as the local search method in the original version of MLSHADE-SPA. In this work, MMTS was replaced with the proposed method GRGLS in the MLSHADE-SPA framework, and thus the performance of MLSHADE-SPA was improved. This improved version of MLSHADE-SPA was named IMLSHADE-SPA.

In the second part of the experiments, the IMLSHADE-SPA algorithm was compared with some state-of-the-art algorithms. Brief information about the algorithms used in the comparison is given in Table 8.

Table 8 Summary Information of Compared Algorithms

Table 9 shows the comparison results of the proposed IMLSHADE-SPA and the other algorithms on the CEC 2013 LSGO benchmark set. The compared results are the means of 25 independent runs at 3.0e6 FEs. The results of the other LSGO algorithms were taken directly from the references listed in Table 8. The best result for each function is marked in bold in Table 9.

Table 9 Comparison mean values of IMLSHADE-SPA with other LSGO algorithms

When considering the best results per function in Table 9 and the ranking averages in Table 10, the following conclusions can be drawn:

Table 10 Ranks of algorithms on the CEC 2013 functions
  • In the fully separable functions (F1, F2, F3), the best results were obtained by the MOS, VGDE, LSGOjDE and VGDE algorithms, respectively. When the average rankings in Table 10 are evaluated by function group, VGDE is the most successful algorithm on the fully separable functions.

  • In the functions with a separable subcomponent (F4, F5, F6, F7) and the functions with no separable subcomponents (F8, F9, F10, F11), different algorithms achieved the best result for each function, but when the average ranking is evaluated, the results show that IMLSHADE-SPA is the most successful algorithm in both function groups.

  • All best results on the overlapping functions (F12, F13, F14) and the non-separable function (F15) were obtained by IMLSHADE-SPA, which is therefore the most successful algorithm for these function groups.

  • As a result of this comparison across all 15 functions, the proposed IMLSHADE-SPA takes first place, while MOS is second and VGDE is third.

The rank of each algorithm among all algorithms on each function is listed in Table 10.

The scores of the algorithms under two different metrics, Ranking and Formula One Score, on the CEC 2013 benchmark functions are listed in Table 11. For each metric, the algorithms are placed in order from best to worst. The average ranks in Table 11 show that IMLSHADE-SPA is the best of the ten algorithms.

Table 11 Scores of algorithms using ranking and formula one score

The Formula One Score is a metric inspired by the Formula One racing scoring system. According to this method, the algorithms are sorted by performance from best to worst. The best 10 algorithms then take 25, 18, 15, 12, 10, 8, 6, 4, 2 and 1 points, respectively, and the remaining algorithms take zero points. An algorithm's Formula One score is calculated by summing the points gained for each function. After all scores are calculated, the algorithm with the most points is selected as the best. This metric is also used for comparing the performance of LSGO algorithms (Hadi et al. 2019; Maučec and Brest 2019). According to the Formula One Score, IMLSHADE-SPA is the best algorithm with 268 points.

5 Conclusion

In this work, a local search method named Golden Ratio Guided Local Search with Dynamic Step Size (GRGLS) was introduced.

The CEC 2013 benchmark set was used to evaluate the performance of the proposed local search method. According to the experimental results, the proposed novel local search method GRGLS achieves good results across all functions in general, and a memetic framework that uses GRGLS as its local search method significantly outperforms many recently proposed algorithms. In addition, the superior performance of GRGLS on overlapping and non-separable functions is remarkable.

In conclusion, the analyses indicate that the proposed GRGLS can be used as an effective and efficient local search method in the LSGO research area.

In future work, the performance of GRGLS on fully separable functions could be improved by applying different techniques or by hybridizing it with other powerful local search methods. In addition, GRGLS can be used as a local search technique in different memetic frameworks.