1 Introduction

Optimization problems are a common issue in many engineering domains, where an optimal solution is searched for inside a complex space. When a problem is not solvable analytically, or is solvable but requires too much time, numerical approaches can help, although a globally optimal solution is not guaranteed. Meta-heuristic algorithms are among these numerical approaches and usually take their inspiration from nature [1]. Besides developing new meta-heuristic algorithms, another common approach is to hybridize different algorithms in order to merge their strengths and obtain a better algorithm.

The particle swarm optimization (PSO) algorithm is one of the most successful optimization algorithms frequently used in the literature. As reviewed below, many studies utilize the PSO algorithm to solve various optimization problems successfully owing to its simplicity, small number of parameters and fast convergence. Since the PSO algorithm is successful on a wide variety of problems from different domains, we select it as one of the algorithms in this hybridization study. When the PSO algorithm is examined, its exploitation ability is very high but its exploration ability is poor [2,3,4,5]. By exploitation, we mean that the algorithm is very successful at performing a local search. Exploration ability makes an algorithm better at finding good starting positions, possibly near the global minimum. A good optimization algorithm is required to balance exploitation and exploration [2]. This poor exploration ability makes the PSO algorithm prone to converging to a local minimum when its starting position is far away from the global minimum [2,3,4,5]. To reduce the chance of trapping into a local minimum, the PSO algorithm does not always accept new candidate positions directly. Instead, with a small probability, it moves some particles to random positions as the algorithm proceeds through its iterations. However, this approach can be risky, since a random position may coincide with a very poor point in the search space. Our aim in this study is to improve this downside of the PSO algorithm. For this purpose, we need an algorithm with a good exploration ability. At this point, we utilize the recently proposed grey wolf optimizer (GWO) algorithm [6], since it is reported to start with high exploration [7]. In our approach, the GWO algorithm supports the PSO algorithm by replacing some particles with new ones obtained by running the GWO algorithm for a small number of iterations. Hence, instead of jumping to a risky random position (which may result in trapping into a local minimum or slowing down the optimization before reaching the global minimum), we aim to find a better alternative by making use of the exploration ability of the GWO algorithm.

In the literature, there are many hybrid approaches using meta-heuristic optimization methods. Below, we first review the literature proposing hybrid approaches that combine various methods, and then the literature hybridizing the PSO algorithm with other methods.

Mahmood et al. developed a hybrid genetic algorithm (GA) method to optimize the real-time task allocation problem, aiming to minimize the power consumption of DVS-enabled multiprocessor systems [8]. They developed their hybrid method by combining the exploitation ability of the stochastic evolution algorithm with the exploration ability of the GA. They proposed a specialized crossover operator and also defined a perturb operator to replace the mutation operator of the GA. Liu et al. presented a hybrid approach by combining the ant colony optimization (ACO) algorithm and the GA [9]. They transferred the ants to the new generation using the crossover, mutation and selection operators of the GA, and tested their method on the optimal path selection problem. Torkaman et al. developed a hybrid approach using the GA and simulated annealing (SA) methods [10]. The population initialized with the GA was improved using the SA to achieve a high fitness value. Lam et al. used the GA with the gradient method [11]. Since the GA was sufficient in the global search but insufficient in the local search, they integrated the gradient method, which is successful in the local search. They started the search using the GA to narrow it down to a local search area, and then quickly reached the optimum position with the gradient method. In a similar approach, Yuce et al. developed a hybrid method to overcome the weakness of the bees algorithm (BA) in the global search by combining it with the GA [12]. The GA was forced to run the global search again at the best locations found by the BA. They tested their hybrid approach on the single machine scheduling problem. Vosoughi and Darabi developed a hybrid approach combining the conjugate gradient (CG) and GA methods [13]. First, they divided the problem space into evenly spaced layers. Then, they found the point of each layer with the highest fitness value using the CG method. Finally, they performed a curved line/plane matching process through the best locations of all layers. Based on this curved line/plane, the entire search space was divided into several sub-spaces. Each sub-space was ranked in ascending order of fitness values and the GA was activated. The GA ran the search process starting from the lowest sub-space and continued until the stopping criterion was met. Chen and Huang developed a hybrid approach by combining the extension matrix (EM) and the GA for feature selection [14]. They first established an integer programming model using the EM, and then applied the GA to find an optimal feature subset. They demonstrated their method by applying it to the intrusion detection problem, where optimal features were selected with their hybrid approach during the training phase of the detector.

Mafarja and Mirjalili developed two different hybrid optimization models by combining the SA method with the whale optimization algorithm (WOA) in two different ways [15]. In the first model, they used the SA method as an operator of the WOA; the best position was achieved by refining the WOA's random movements with the SA. In the second model, the position with the highest fitness value was first found by the WOA, and then a final local search was performed with the SA method to find the best location. They tested their hybrid approach on the feature selection process. Taheri et al. developed a hybrid artificial neural network (ANN)–artificial bee colony (ABC) algorithm that estimates ground vibration during mine blasting [16]. The ANN's weight and bias values were determined by the ABC algorithm. Xue et al. developed a hybrid approach aiming to improve the ABC algorithm [17]. They used the differential evolution (DE) algorithm to generate the new population, a step at which the ABC is weak, and improved the performance of the ABC algorithm using this candidate solution generation strategy.

Singh and Singh used the GWO and sine cosine algorithm (SCA) together [18]. They optimized the position changes of the wolves with the SCA by removing random movements of alpha wolves in the GWO algorithm. Pan et al. developed a hybrid algorithm using the GWO and flower pollination algorithm (FPA) [19]. They updated the position of the wolves in the GWO algorithm using the position updates of the FPA algorithm.

Javidrad and Nazari developed a hybrid approach using the PSO and SA algorithms [20]. They applied a turn-based switching approach to combine them. The optimization started with the PSO algorithm and switched to the SA algorithm when the PSO could not find a better result than in its previous iteration. While the SA algorithm continued the optimization, the turn passed back to the PSO algorithm when the SA was no longer able to find a better fitness value. This iterative switching between the two algorithms continued until the desired number of iterations was reached or a stopping criterion was met.

Kumar and Vidyarthi developed another hybrid approach by mixing the GA and the PSO methods [21]. They aimed to find the optimum location by starting the search process with the PSO method to reduce the search domain to a local region, and then proceeding with the GA method.

Garg proposed a hybrid approach by combining the strengths of the PSO and GA methods [22]. The GA aims to transfer good individuals to the next generation, whereas in the PSO all individuals remain in the population throughout the iterations. Garg fused the two methods such that, at each iteration, the GA prepared the next generation to be used by the PSO by applying crossover, mutation and selection operations.

In another study, Ali and Tawhid combined the PSO and GA methods in three phases [23]. First, the PSO algorithm was utilized to find a stable fitness value. Second, they divided the current population into sub-populations and increased the diversity within each sub-population using the GA's crossover operator; in this phase, the GA's mutation operator prevented falling into local minima. In the last phase, the optimization was switched back to the PSO algorithm, which was repeated until a maximum number of iterations was reached or a stopping criterion was met.

Kaur and Mahajan developed a hybrid method using ant colony algorithm (ACA) and the PSO algorithm [24]. In this method, the population was first developed with the ACA and then the optimization was continued by the PSO algorithm.

Ghasemi et al. combined the adaptive neuro-fuzzy inference system (ANFIS) and the PSO algorithm [25]. They optimized the ANFIS structure with the PSO to minimize the difference between the outputs obtained with the ANFIS training and the actual results.

Hasanipanah et al. used support vector regression (SVR) and the PSO method to predict air-overpressure caused by mine blasting [26]. They tested the SVR method with three different kernels, and the hyper-parameters of each kernel were determined by the PSO method.

Similar to our study, there are hybrid methods employing the PSO and GWO algorithms. In Chopra et al. and Kamboj, the PSO and GWO algorithms were used one after the other at each iteration [27, 28]. The population obtained with one algorithm was utilized by the other one in the next iteration. Here, the aim was to improve the population by exploiting the strengths of each algorithm separately at every iteration. The difference between these two studies is that in [27], the whole population was transferred from one method to the other at every iteration, whereas in [28], the population of one method was updated with the best individual the other method obtained in its last iteration. Singh and Singh conducted another hybrid study using the PSO and GWO algorithms [29]. Instead of running the algorithms one after the other at each iteration, they ran them in parallel using a mixture of the algorithms' governing equations.

When the method of Chopra et al. is examined, neither algorithm can focus on solving the problem alone due to the different characteristics of the algorithms. The method of Kamboj faces a similar situation: although the whole population is not transferred, the algorithms affect each other by directing the other algorithm towards the best solution obtained in their own turn. Updating the population of one method with the best individual of the other method may harm the stability of both methods, and hence instabilities may arise. Although the method of Singh and Singh runs both algorithms in parallel, it is similar to the method of Kamboj in that the governing equations of the PSO were updated to directly utilize the best three individuals of the GWO instead of the PSO's own global best particle. This approach may also result in instabilities. We have included these three hybrid methods in our experiments to compare their performance with our hybrid approach. For simplicity, in the rest of the paper, we refer to these three studies, namely [27], [28] and [29], as HC, HK and HS, respectively.

In our study, unlike the PSO and GWO hybrid approaches reported above, the optimization process is under the control of the PSO method. Some individuals of the PSO algorithm, selected with a small probability, are replaced by the best individual of the GWO algorithm. The GWO algorithm is run with a small population for a small number of iterations so that partially improved individuals are created. Since the GWO algorithm starts with a high exploration ability, these partially improved individuals have an important potential to avoid falling into a local minimum. With our hybrid approach, the stability of the PSO algorithm remains intact while it receives exploration support from the GWO algorithm.

The performance of an optimization algorithm should be tested on benchmark functions as well as on real-life problems to assess its effectiveness. For these purposes, we evaluated our hybrid approach on several benchmark functions taken from the literature [30], on the optimal nesting problem of two-dimensional patterns, which is encountered as a hard optimization problem in many sectors today, as well as on two other real-world problems: parameter estimation for frequency-modulated (FM) sound waves [31] and the process flowsheeting problem [32].

When we consider the optimal nesting problem of two-dimensional patterns, the leather, textile, paper, glass and furniture sectors are at the forefront. In the majority of these sectors, the desired patterns are cut out from a spool-wound material whose width is fixed and length is variable. However, unlike the others, the material has no specific shape in the leather sector. Leather materials have irregular shapes and may include low-quality, non-usable regions, again with irregular shapes. Therefore, the two-dimensional nesting problem is more difficult in the leather sector than in other sectors. In addition, since natural leather is a non-recyclable raw material, the nesting and cutting operations have to be done much more carefully than in other sectors.

The leather materials used in this study are natural leather and have irregular shapes. The aim is to place the two-dimensional irregular-shaped patterns on the two-dimensional irregular-shaped material in the most suitable way to minimize wastage. In the literature, this problem is also known as the two-dimensional irregular nesting problem [33] or the two-dimensional cutting stock problem [34, 35]. Cherri et al. developed two different models for this problem and evaluated them in a real-world scenario for the metal cutting industry. Their two models were based on direct trigonometry and on no-fit polygon covering, respectively. They reported that their no-fit polygon covering model outperformed their direct trigonometry model and another model based on the no-fit polygon (NFP). In [34] and [35], the leather nesting problem (LNP) in the automotive industry, for producing car seats, was studied. Optimal placement of the patterns without any overlap was solved with the help of the NFP technique. Before placement, the patterns were grouped according to their shape and placed by giving precedence to large and regular patterns. In [36], Crispin et al. aimed to place shoe patterns onto real leather using a GA while also utilizing the NFP technique. In another study [37], Yuping et al. studied the placement of irregular-shaped patterns onto leather by developing a fast algorithm based on simulated annealing. During the placement, they utilized a shifting operation to prevent overlaps between the patterns and overflow of the patterns outside the leather hide.

The main highlights of this study are as follows: (1) we have developed a new hybrid PSO–GWO optimization method. (2) We validated our method using five different benchmark functions taken from the literature. (3) We evaluated our method on three different real-world problems, namely parameter estimation for frequency-modulated sound waves [31], the process flowsheeting problem [32] and the leather nesting problem [34, 35]. (4) We compared the performance of our algorithm with the PSO, GWO, ABC [38] and social spider algorithm (SSA) [39], as well as with three recently proposed hybrid PSO–GWO algorithms [27,28,29].

The rest of the paper is organized as follows. Section 2 explains the employed optimization methods and our hybrid approach. In Sect. 3, we define the optimization problems utilized to evaluate our approach. Section 4 presents the experimental results along with the discussions. Finally, Sect. 5 concludes the paper.

2 Optimization methods

The hybrid algorithm proposed in this study utilizes the PSO and GWO meta-heuristic methods. The PSO is a well-known and widely used method [40,41,42,43]. The GWO, although relatively new in the literature, is another meta-heuristic method reported to give successful results like the PSO algorithm [6, 44, 45]. We introduce a hybrid approach using these two algorithms that yields successful results, and we describe our method in detail in the following sections.

2.1 Particle swarm optimization (PSO)

The PSO is a population-based meta-heuristic optimization method developed by Kennedy and Eberhart [40]. It is inspired by the social behavior of bird flocking and fish schooling when searching for food. In the PSO method, the population is initially generated randomly within the search domain. The best location of each particle and the position of the best particle in the swarm are constantly kept in memory. All particles in the swarm update their positions at each iteration using the following equations:

$$\begin{aligned} \overline{x}_{n+1}^{i}& = \overline{x}_{n}^{i}+\overline{v}_{n+1}^{i}, \end{aligned}$$
(1)
$$\begin{aligned} \overline{v}_{n+1}^{i}& = \omega \overline{v}_{n}^{i}+c_{1}r_{1}(\overline{p}_{n}^{i}-\overline{x}_{n}^{i})+c_{2}r_{2}(\overline{p}_{n}^{g}-\overline{x}_{n}^{i}). \end{aligned}$$
(2)

Here, i refers to the particle in the swarm, n is the current iteration, and \(r_{1}\) and \(r_{2}\) are random numbers in the range [0, 1]. \(\omega\) is the inertia weight parameter, and the coefficients \(c_1\) and \(c_2\) are the acceleration parameters. \(\overline{x}\) is the position vector, \(\overline{v}\) is the velocity vector, \(\overline{p}^{i}\) is the best position that the ith particle has achieved so far, and \(\overline{p}^{g}\) is the best position found by the swarm.

In the PSO algorithm, with a small probability, the new position and velocity of a particle are not accepted; instead, the particle is moved to a random position within the search space. The aim of this operation is to escape from local minima.

The search continues until an optimum result is achieved or a maximum predefined number of iterations is reached.
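As a concrete illustration, the following minimal sketch implements one PSO iteration according to Eqs. 1 and 2, together with the random-restart mechanism described above. The inertia weight and acceleration coefficients are the values used later in Sect. 4, whereas the restart probability p_restart and the clipping of positions to the search domain are illustrative assumptions of the sketch, not prescriptions of the original algorithm.

```python
import numpy as np

def pso_step(x, v, p_best, g_best, bounds,
             w=0.7298, c1=1.49445, c2=1.49445, p_restart=0.01):
    """One PSO iteration over a swarm x of shape (n_particles, dim), Eqs. 1-2."""
    rng = np.random.default_rng()
    n, dim = x.shape
    r1 = rng.random((n, 1))
    r2 = rng.random((n, 1))
    v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)  # Eq. 2
    x = x + v                                                    # Eq. 1
    # With a small probability, a particle is moved to a random position
    # inside the search domain to escape local minima.
    low, high = bounds
    restart = rng.random(n) < p_restart
    x[restart] = rng.uniform(low, high, size=(restart.sum(), dim))
    # Positions are kept inside the search domain for simplicity (assumption).
    return np.clip(x, low, high), v
```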

2.2 Grey wolf optimizer (GWO)

The GWO algorithm is inspired by the leadership hierarchy of grey wolves [6, 44, 45]. Grey wolves are at the top of the food chain. There are four types of grey wolves within the leadership hierarchy: alpha, beta, delta and omega wolves. In the GWO algorithm, the alpha wolf represents the solution with the best result, while the beta and delta wolves represent the second and third best solutions in the population. The omega wolves represent the remaining candidate solutions. The GWO algorithm assumes that hunting is performed by the alpha, beta and delta wolves, while the omega wolves follow them.

Grey wolves’ hunting includes the following three main parts: (1) Tracking, chasing, and approaching the prey. (2) Pursuing, encircling, and harassing the prey till it stops moving. (3) Attacking the prey. Encircling the prey is modeled mathematically in the following equations:

$$\begin{aligned} D& = |C \times X_{\text {p}}(t)-X(t)|, \end{aligned}$$
(3)
$$\begin{aligned} X(t+1)& = X_{\text {p}}(t)-A \times D. \end{aligned}$$
(4)

Here, t is the current iteration, \(X_{\text {p}}\) is the position of the prey, X is the position of a grey wolf, and A and C are coefficient vectors calculated as shown below:

$$\begin{aligned} A& = a \times (2 \times r_{1}-1), \end{aligned}$$
(5)
$$\begin{aligned} C& = 2 \times r_{2}. \end{aligned}$$
(6)

Here, a is linearly decreased from 2 to 0 over the course of the iterations. \(r_{1}\) and \(r_{2}\) are uniformly distributed random numbers in [0, 1].

Grey wolves are led by the alpha wolves to detect the location of the prey; sometimes the beta and delta wolves help the alpha wolf. The GWO algorithm assumes that the best solution is the alpha wolf, and the second and third best solutions are the beta and delta wolves, respectively. For this reason, the other wolves in the population move according to the positions of these three wolves. This is formulated in the following equations:

$$\begin{aligned} D_{\alpha }& = |C_{1} \times X_{\alpha }-X(t)|, \nonumber \\ D_{\beta }& = |C_{2} \times X_{\beta }-X(t)|, \nonumber \\ D_{\delta }& = |C_{3} \times X_{\delta }-X(t)|. \end{aligned}$$
(7)

The values \(X_{\alpha }\), \(X_{\beta }\) and \(X_{\delta }\) represent the best three wolves in each iteration, respectively.

$$\begin{aligned} X_{1}& = |X_{\alpha }-a_{1}D_{\alpha }|, \nonumber \\ X_{2}& = |X_{\beta }-a_{2}D_{\beta }|, \nonumber \\ X_{3}& = |X_{\delta }-a_{3}D_{\delta }|,\end{aligned}$$
(8)
$$\begin{aligned} X_{\text {p}}(t+1)& = \frac{X_{1}+X_{2}+X_{3}}{3}. \end{aligned}$$
(9)

Here, the new position, \(X_{\text {p}}(t+1)\), is expressed as the mean of the positions \(X_1\), \(X_2\) and \(X_3\) computed from the three best wolves in the population.

Grey wolves finish the hunt by attacking the prey. To attack, they must get close enough to it. When Eq. 5 is examined, A takes values between \(-a\) and a, while a decreases from 2 to 0. When |A| is greater than or equal to 1, the current prey is abandoned in order to find a better solution. When |A| is less than 1, the prey is assumed to be close enough and the grey wolves are forced to attack it. This mechanism helps prevent the wolves from getting stuck in a local minimum.

When the GWO algorithm reaches the desired number of iterations, the search is completed.
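The position update of Eqs. 3–9 can be sketched as follows. The sketch follows the standard GWO formulation, in which the coefficient A of Eq. 5 scales the distance terms; the array layout and the assumption of a minimization problem are our illustrative choices rather than part of the original description.

```python
import numpy as np

def gwo_step(wolves, fitness, a):
    """One GWO iteration; wolves has shape (n_wolves, dim) and a decreases 2 -> 0."""
    rng = np.random.default_rng()
    order = np.argsort(fitness)                   # minimization problem assumed
    x_alpha, x_beta, x_delta = wolves[order[:3]]  # three best wolves

    new_wolves = np.empty_like(wolves)
    for i, x in enumerate(wolves):
        guided = []
        for leader in (x_alpha, x_beta, x_delta):
            A = a * (2 * rng.random(x.shape) - 1)  # Eq. 5
            C = 2 * rng.random(x.shape)            # Eq. 6
            D = np.abs(C * leader - x)             # Eqs. 3 and 7
            guided.append(leader - A * D)          # Eq. 8 (standard form)
        new_wolves[i] = np.mean(guided, axis=0)    # Eq. 9
    return new_wolves
```

In the full algorithm, a is decreased linearly from 2 to 0 across the iterations and the fitness of the new positions is re-evaluated before the next update.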

2.3 Hybrid PSO–GWO (HPSGWO)

Our hybrid PSO–GWO algorithm (HPSGWO) has been developed without changing the general operation of the PSO and GWO algorithms. The PSO algorithm achieves successful results in many real-world problems; however, a solution is needed to reduce its chance of getting trapped in a local minimum. In our proposed method, the GWO algorithm is utilized to support the PSO algorithm for this purpose. As mentioned in Sect. 2.1, the PSO algorithm directs some particles to random positions with a small probability to avoid local minima. As explained in Sect. 1, these random jumps carry the risk of moving the search away from the global minimum. The exploration ability of the GWO algorithm is used to mitigate this risk by directing those particles to positions that have been partially improved by the GWO algorithm instead of purely random positions. The running time is extended, since the GWO algorithm is employed in addition to the PSO algorithm. However, as will be presented in Sect. 4, when the improvement in the results and the amount of extra time needed are taken into consideration, the extended time can be regarded as tolerable depending on the optimization problem being solved. The improved results can justify the additional time requirement in sectors such as leather, where wastage is much more critical, as long as the solution is obtained in a feasible time (minutes).

Algorithm 1

Fig. 1 Flowchart of the HPSGWO

The algorithm and the flowchart of our HPSGWO method are given in Algorithm 1 and Fig. 1, respectively.
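Because Algorithm 1 and Fig. 1 are given as figures, the following sketch outlines the hybrid loop described in the text, reusing the pso_step and gwo_step sketches from Sects. 2.1 and 2.2: the PSO drives the optimization and, with a small probability, a particle that would otherwise be restarted at a random position is replaced by the best individual of a short GWO run. The replacement probability p_replace and the GWO population size and iteration count are illustrative placeholders, not the exact settings of Algorithm 1.

```python
import numpy as np

def hpsgwo(objective, bounds, dim, n_particles=50, max_iters=150,
           p_replace=0.01, gwo_pop=10, gwo_iters=5):
    """Sketch of the hybrid loop: the PSO is in control, the GWO is the helper."""
    rng = np.random.default_rng()
    low, high = bounds
    x = rng.uniform(low, high, (n_particles, dim))
    v = np.zeros_like(x)
    p_best = x.copy()
    p_best_f = np.array([objective(p) for p in x])
    g_best = p_best[np.argmin(p_best_f)].copy()

    for _ in range(max_iters):
        # Plain PSO update (Eqs. 1-2); random restarts are disabled because
        # GWO-improved individuals take their place below.
        x, v = pso_step(x, v, p_best, g_best, bounds, p_restart=0.0)
        # With a small probability, replace a particle with the best individual
        # of a short GWO run instead of a purely random position.
        for i in np.flatnonzero(rng.random(n_particles) < p_replace):
            wolves = rng.uniform(low, high, (gwo_pop, dim))
            for t in range(gwo_iters):
                fit = np.array([objective(w) for w in wolves])
                a = 2.0 * (1.0 - t / gwo_iters)    # a decreases from 2 towards 0
                wolves = gwo_step(wolves, fit, a)
            fit = np.array([objective(w) for w in wolves])
            x[i] = wolves[np.argmin(fit)]
        # Standard PSO bookkeeping of personal and global bests.
        f = np.array([objective(p) for p in x])
        better = f < p_best_f
        p_best[better], p_best_f[better] = x[better], f[better]
        g_best = p_best[np.argmin(p_best_f)].copy()
    return g_best, p_best_f.min()
```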

3 Definition of optimization problems

We evaluate our hybrid approach on five different benchmark functions (\(F_1, F_2, F_3, F_4\) and \(F_5\)) and on three different real-world problems. Table 1 presents the definitions of the benchmark functions, which are taken from the Congress on Evolutionary Computation (CEC) 2005 benchmark set [30]. All of these functions have a dimension of 30. The global minimum values, \(F_{\min }\), of these functions are also given in Table 1. The real-world problems utilized are parameter estimation for frequency-modulated sound waves (PEFMSW) [31], the process flowsheeting problem (PFP) [32] and the leather nesting problem [34, 35]. The definitions of the PEFMSW and PFP are given in Table 2, together with their dimensions and \(F_{\min }\) values. We also refer to the PEFMSW and PFP as \(F_6\) and \(F_7\), respectively, in the rest of the paper. The following section defines the leather nesting problem.

Table 1 Definitions of the benchmark functions
Table 2 Definitions for two real-world problems, namely the PEFMSW and PFP

3.1 Definition of the leather nesting problem

The leather nesting problem (LNP) is an optimization problem in which desired patterns are placed on a leather material with the objective of minimizing the wasted material. When the material is imitation leather, the wastage can easily be recycled, as with wood, glass, textile and paper materials. However, when it is natural leather, the wastage should be reduced to a minimum, since natural leather is a non-recyclable raw material, and unnecessary wastage causes significant losses for large-scale companies. In addition to the non-recyclable nature of natural leather, some other issues make the problem even harder: (1) natural leather does not have a regular shape and can have tears and holes, and (2) the shoe patterns used in production also do not have regular shapes.

The LNP is also referred to in the literature as a two-dimensional irregular nesting problem [33,34,35].

The LNP aims to place shoe patterns \((P=\{P_{1}, P_{2}, P_{3},\ldots , P_{N}\})\) with irregular shapes onto an irregularly shaped leather material (M) with minimal waste. Rotation of the patterns is allowed. However, overlap, which means that at least two patterns intersect, and overflow, which means that at least one pattern exceeds the boundary of the leather, are not allowed. The following equation gives the definition of these two constraints:

$$\begin{aligned}&P_{i}\cap P_{j}=\emptyset \quad \quad 1\le i,j\le N, \; i\ne j\nonumber \\&\quad P_{i}\subseteq M \quad \quad 1\le i\le N. \end{aligned}$$
(10)

Here, N denotes the number of patterns that are placed. The problem to be minimized after defining the constraints is expressed as shown below:

$$\begin{aligned} \begin{array}{ll}&{\text {Minimize}} \sum \limits _{i=1}^{N}\sum \limits _{\begin{array}{c} j=1\\ j\ne i \end{array}}^{N}{\text {Overlap}}(P_{i}, P_{j})+\sum \limits _{i=1}^{N}\left[ {\text {Overflow}}(P_{i})+{\text {Cost}}(P_{i})\right] \\ &{} \quad \quad \quad \quad \quad \quad {\text {subject to}} \quad \quad P_{i}\cap P_{j}=\emptyset \quad \quad 1\le i,j\le N, \; i\ne j \\ &{} \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad P_{i}\subseteq M \quad \quad 1\le i\le N. \end{array} \end{aligned}$$
(11)

Acceptable solutions are those for which the sum of the overlap and overflow values is zero. The cost function, given in the following equation, determines the quality of the solution among the acceptable ones:

$$\begin{aligned} {\text {Cost}} = \sum \limits _{\begin{array}{c} i=1 \end{array}}^{C}{\text {cost}}\_{\text {vertex}}(i). \end{aligned}$$
(12)

Here, C is the number of vertexes of a polygon (representing a pattern). The \({\text {cost}}\_{\text {vertex}}\) function calculates the cost of a vertex as follows: first, the minimum distances from the vertex to the material boundary and to every other polygon are computed; then, the shortest of these distances gives the cost of that vertex. Since the polygons are placed one by one, the cost function forces each polygon to be placed closer to the previously placed polygons and to the bounds of the material, so that the wasted material is minimized. In Fig. 2, the calculation of the cost for one of the vertexes of a polygon is illustrated. See Appendix for a detailed explanation of how the minimum distances between polygons are calculated.

Fig. 2 Calculation of Cost for one of the vertexes of a U-shaped polygon. The minimum distances of the vertex point to the other polygons and to the bounds of the material are \(d_1=7\), \(d_2=3\) and \(d_3=6\sqrt{2}\). The cost of the vertex point equals \(\min (d_1,d_2,d_3) = d_2\), which is 3 units
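To make the cost computation concrete, the following sketch evaluates Eq. 12 for a polygon being placed. Representing the material and patterns as shapely polygons is our implementation choice for the sketch, not necessarily that of the actual system. Applied to the vertex in Fig. 2, cost_vertex would return \(\min (7, 3, 6\sqrt{2}) = 3\).

```python
from shapely.geometry import Point, Polygon

def cost_vertex(vertex, material: Polygon, placed: list) -> float:
    """Cost of one vertex: the shortest distance to the material boundary
    or to any previously placed polygon (see Fig. 2)."""
    p = Point(vertex)
    distances = [material.exterior.distance(p)]            # to the material bounds
    distances += [other.distance(p) for other in placed]   # to the other patterns
    return min(distances)

def cost(polygon: Polygon, material: Polygon, placed: list) -> float:
    """Eq. 12: sum of the vertex costs of the polygon being placed."""
    return sum(cost_vertex(v, material, placed)
               for v in list(polygon.exterior.coords)[:-1])  # drop repeated last vertex
```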

4 Experimental results and discussion

To evaluate the performance of our hybrid optimization method and to compare it with the PSO, GWO, ABC, SSA, HC, HK and HS methods, we conducted experiments with five different benchmark functions and three different real-world problems presented in Sect. 3.

In all of the experiments, the parameters of the PSO were selected as follows [46,47,48,49]: \(\omega =0.7298\) and \(c_{1}=c_{2}=1.49445\). Similarly, throughout the experiments, the parameters of the SSA were set as follows [50,51,52]: \(r_{\text {a}}=1.0\), \(p_{\text {c}}=0.7\) and \(p_{\text {m}}=0.1\).

Experiments on \(F_1, F_2, F_3, F_4\) and \(F_5\) were repeated 25 times for each method and each function. Experiments with \(F_6\) and \(F_7\) (PEFMSW and PFP) were conducted by applying each method 100 times to each problem. In these experiments, the methods are compared in terms of their best and worst results, along with the mean and standard deviation of their results. In the experiments performed with \(F_1\sim F_7\), each run was terminated when the algorithm reached a maximum of \(10^4\times D\) objective function evaluations, where D denotes the dimension of the corresponding function.

In the LNP experiments, we used four different shoe models with a total of 40 patterns and four different leather materials, all represented as polygons. Figure 3 shows these patterns, which belong to real shoe models, together with the leather materials extracted from real hides and used in the evaluations. We note that all of the patterns and materials have irregular shapes. During the evaluations, the patterns of each of the four shoe models were placed onto the four different materials separately. For each model and material pair, each optimization method was run five times, and each optimization method was given the same number of shoe patterns to place onto the current material. In these experiments, the population size was set to 50 and each method was run for 150 iterations in each test. However, since the HK and HS methods performed very poorly with these default values, we had to increase their population size and number of iterations to 150 and 450, respectively; only then were we able to obtain a comparable performance.

Fig. 3 The patterns of the four different shoe models and the leather materials used: a Model 1, b Model 2, c Model 3 and d Model 4 patterns; e Material 1, f Material 2, g Material 3 and h Material 4

Fig. 4 The nesting results of the GWO, PSO and HPSGWO methods for Model 2 and Material 4, illustrated to explain the definition of the used area needed to calculate the ME metric

To evaluate the results of the LNP experiments, two performance metrics are utilized: (1) mean efficiency (ME), calculated as the average percentage of the area occupied by the placed patterns at the end of the optimization over the used area, and (2) mean placement time (MPT), calculated by averaging the time required to place all patterns over the five runs. During the calculation of the ME, the used area is obtained from the whole area of the material by removing the unused portion beyond the boundary of the farthest pattern along the placement direction. For instance, the used areas for the three example placements shown in Fig. 4 are the areas below the straight lines. All MPT experiments were conducted on the same computer with an Intel Core i5-6500 3.2 GHz processor and 4 GB of RAM.
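The used-area definition can be made explicit with the following sketch; the placement direction is assumed to be along the +y axis and the geometry is again handled with shapely polygons, both of which are illustrative assumptions. The ME reported in Table 5 corresponds to this value averaged over the five runs of each model and material pair.

```python
from shapely.geometry import Polygon, box

def efficiency(material: Polygon, patterns: list) -> float:
    """Percentage of the used area occupied by the placed patterns."""
    # Boundary of the farthest pattern along the placement direction (assumed +y).
    y_far = max(p.bounds[3] for p in patterns)
    minx, miny, maxx, _ = material.bounds
    # Used area: the portion of the material up to that boundary; the part
    # beyond it is left untouched for later use.
    used_area = material.intersection(box(minx, miny, maxx, y_far)).area
    occupied = sum(p.area for p in patterns)
    return 100.0 * occupied / used_area
```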

Table 3 presents the results of the experiments performed with \(F_1, F_2, F_3, F_4\) and \(F_5\). When the results are examined, our approach, HPSGWO, stands out as the only method that, for every function, reaches best positions not reachable by the other methods. The HPSGWO finds the global minimum of \(F_2\), and its best position for \(F_5\) is very close to the global minimum. In all aspects, our approach outperforms the GWO, ABC, HC, HK and HS methods.

Table 3 Results of the experiments conducted with \(F_1, F_2, F_3, F_4\) and \(F_5\)

According to Table 3, the PSO gives better results than the GWO. Our hybrid approach performs better than the PSO on all functions except \(F_4\). Thus, by hybridizing the PSO with a less performing method, we obtain a method that is better than both the PSO and the GWO, which clearly shows the success of our hybridization strategy. From this result, we can deduce that we managed to improve the reportedly poor exploration ability of the PSO by merging it with the GWO.

There are only two cases where the HPSGWO falls behind the PSO or SSA methods, and only in terms of the mean value: the SSA gives a better mean result than the HPSGWO for \(F_2\), and the PSO for \(F_4\). However, when we consider the best values, our method reaches better positions than both of these methods, indicating the improved exploration ability of our approach.

Table 4 presents the results of the experiments conducted with \(F_6\) and \(F_7\). The HPSGWO and PSO are the only two methods able to reach the best values, which are also the global minimums. In this case, we can compare the two methods by looking at the mean values. In terms of mean values, the HPSGWO gives better results than the PSO on both problems. This indicates that, via the exploration contribution of the GWO, our approach obtains better results on average. Although the SSA achieves a better mean value for \(F_7\) than the HPSGWO, it is not able to find the global minimum, which is reached by our approach.

Table 4 Results of the experiments conducted with \(F_6\) and \(F_7\)

The results of the LNP experiments are given in Table 5. When the mean efficiency is considered, the HPSGWO algorithm is more successful than all of the other methods. When we calculate the average ME increases over all experiments, the HPSGWO performs better than the PSO, GWO, ABC, SSA, HC, HK and HS by 4.77%, 4.85%, 28.55%, 16.39%, 12.05%, 21.87% and 40.75%, respectively. As for the SSA, although it gives comparable results for \(F_1\sim F_7\), its ME results for the LNP are always behind our approach. The low performance of the HK and HS even with an increased population size and number of iterations indicates that they are not suitable for the LNP.

Table 5 Results obtained in the LNP experiments. Bold values indicate experiments in which the HPSGWO algorithm is better than all other methods. For ME, higher is better; for MPT, lower is better

If we examine the visual placement results in Fig. 4, given to better illustrate the calculation of our ME metric, the better performance of the HPSGWO can be seen by inspecting the difference between the straight lines. The HPSGWO managed to place all of the patterns into a smaller used area, hence leaving a larger remaining area for later use. The PSO and GWO methods produced results similar to each other. We can deduce from these placements that the PSO and GWO become trapped in local minima, while our approach achieves a better result.

When the mean placement times in Table 5 are considered, the results show that the HPSGWO performs better than the other hybrid methods but not better than the PSO, GWO, ABC and SSA. Given that our approach invokes the GWO whenever needed without modifying the governing equations of the PSO and GWO, this increased time complexity is inevitable. In the worst case 2.77 times more time is needed (for Material 1 and Model 1), in the best case 1.63 times (for Material 2 and Model 2), and on average 2.07 times. However, when we consider the improvement in the ME, the longer run time is tolerable as the placement time is on the order of minutes. Since natural leather is not recyclable, the decreased leather wastage is more critical for the leather industry.

Additionally, in Fig. 5, we present the average convergence of the PSO, GWO and HPSGWO methods to better illustrate the success of our hybrid approach. This convergence graph is obtained by running the three methods on Model 4 and Material 4. For each method and each iteration in Fig. 5, the cost over all patterns is averaged. We note that, in the experiments, each pattern is placed onto the material one by one, and the methods are iterated 150 times to place one pattern. The result clearly indicates that our hybrid approach converges to a lower cost value in fewer iterations than the PSO and GWO. Hence, we can conclude that our approach successfully enhances the exploration ability of the PSO.

Fig. 5 The convergence comparison graph for the placement of a single sample pattern

5 Conclusion

In this paper, a new hybrid algorithm using the PSO and GWO algorithms has been presented. Our approach aims to prevent the PSO algorithm from falling into local minima by utilizing the exploration ability of the GWO algorithm. For this purpose, some particles in the PSO algorithm are replaced, with a small probability, by a partially improved particle obtained via the GWO algorithm. We tested the performance of our algorithm and compared it with the PSO, GWO, ABC and SSA meta-heuristic methods, as well as with three different hybrid PSO–GWO approaches proposed in the literature. To evaluate our method comprehensively, we conducted experiments with five different benchmark functions and three different real-world problems. The results of the experiments indicate that our hybrid approach performs better than the conventional PSO and GWO algorithms, the ABC, the SSA and the three hybrid PSO–GWO approaches. Since our hybrid approach does not modify the governing equations of the methods and applies the GWO with a small probability within the main flow of the PSO, its time complexity is higher than that of the PSO and GWO. However, our average convergence experiment demonstrates that our approach converges to lower cost values within fewer iterations. The additional time complexity of our approach should be weighed against the problem it is applied to, since in problems such as the LNP, obtaining more optimal solutions is much more critical as long as the increased run time remains feasible.