Introduction

In recent years, optimization has become a fascinating area of research due to the increasing complexity and diversity of optimization problems across various fields such as engineering, wireless sensor networks (Gokulraj et al. 2021; Abdulai et al. 2023; Vahabi et al. 2022; Raja and Mookhambika 2022; Navin Dhinnesh and Sabapathi 2022; Dao et al. 2023; Jain et al. 2023; Thalagondapati and Singh 2023; Verma and Jain 2023; Boyineni et al. 2024), forecasting (Kim and Moon 2019; Roy et al. 2020; Dong et al. 2022; Nayak et al. 2023; Singh and Rizwan 2023; Wang et al. 2023; Danandeh Mehr et al. 2023), search engine optimization (Sethuraman et al. 2019), and science (Chakrabarti and Chakrabarty 2019; Gupta et al. 2020; Muruganantham and Gnanadass 2021; Avvari and Vinod Kumar 2022). This growing complexity has made it challenging for traditional optimization techniques to solve such problems effectively, since these techniques typically rely on gradient-based approaches or assume convexity in the problem space (Hariharan et al. 2023).

The limitations of traditional optimization have prompted researchers to explore alternative strategies capable of navigating the complexities of contemporary optimization problems. This exploration has led to the development and application of meta-heuristic algorithms. These algorithms do not require the problem to be convex or differentiable and are capable of searching large and complex spaces more efficiently (Rao 2019). They provide a framework for developing solution strategies that are adaptable, robust, and capable of finding satisfactory solutions with less computational effort.

These algorithms have been applied in diverse areas such as artificial neural networks (Movassagh et al. 2021), forecasting (Sengar and Liu 2020; Murali et al. 2020), malware detection (Alzubi et al. 2022a, b, 2023), the economic load dispatch problem (Padhi et al. 2020), optical wavelength division multiplexing (WDM) systems (Bansal et al. 2017; Bansal 2021) and wireless sensor networks (Halllafi et al. 2023; Vasanthi and Prabakaran 2023; Saranraj et al. 2022; Srinivas and Amgoth 2023; Khalifa et al. 2023; Dash 2023). The first and most well-known meta-heuristic algorithm is the Genetic Algorithm (GA) (Holland 1992; Goldberg 1989). GA is inspired by the process of natural selection in genetics and has applications in diverse fields including networking and scheduling (Shinde and Bichkar 2023).

Thereafter, many researchers proposed new and hybrid algorithms inspired by evolutionary processes, foraging behavior, and physical laws of the universe. Widely used methods include Particle Swarm Optimization (PSO) (Kennedy and Eberhart 1995), the Gravitational Search Algorithm (GSA) (Rashedi et al. 2009), Monkey Search (MS) (Sharma et al. 2016), Differential Evolution (DE) (Storn and Price 1997), Simulated Annealing (SA) (Zhang and Wang 1993), Artificial Bee Colony (ABC) (Garg 2014), and the Multi-Objective Generalized Teacher-Learning-Based-Optimization Algorithm (Ram et al. 2022).

Among these, PSO is particularly effective at optimizing problems with continuous variables and converges rapidly compared to earlier algorithms. However, its limited exploration capability means it may get stuck in local optima, particularly on functions with multiple local optima. Over the years, researchers have proposed various variants, improvements, and hybridizations of PSO to deal with these limitations (Houssein et al. 2021; Gad 2022). Variants such as binary, chaotic, and multi-objective PSO have been developed to enhance its performance.

Hybridization is another way to enhance an algorithm's performance by combining the best parts of two algorithms. The need for hybridization arises because the functions to be optimized range from simple benchmarks to complex real-world problems. Since a single algorithm is typically designed around a single logical strategy, it cannot optimize every type of function; different types of functions require different search strategies. This is where hybridization becomes relevant: by merging two distinct approaches, a hybrid algorithm can perform well on a broader range of functions than either approach alone.

One hybridization approach with PSO is PSOGSA, proposed by Mirjalili and Hashim (2010), in which the exploration ability of GSA is combined with the exploitation ability of PSO. Chopra et al. (2016) hybridized PSO with the Grey Wolf Optimization (GWO) algorithm, which imitates the grey wolves' leadership hierarchy and hunting mechanism, to solve the economic load dispatch problem. Yang et al. (2020) proposed three strategies to enhance the global optimization ability of the Butterfly Optimization Algorithm (BOA) (Arora and Singh 2019): initializing BOA using a chaotic cubic map, applying a nonlinear parameter control strategy to the power exponent, and hybridizing BOA with PSO. These strategies aim to address some of the limitations of the basic BOA and improve its ability to find the global optimum, although their effectiveness may vary with the specific optimization problem at hand. Such studies suggest that innovative modifications and hybridization with other algorithms can improve an algorithm's optimization capabilities.

This paper introduces a novel hybrid approach, the hybrid pelican-particle swarm optimization algorithm (HPPSO), which combines the search principle of the Pelican Optimization Algorithm (POA) with PSO to mitigate the stagnation effect of PSO. POA was proposed by Trojovsky and Dehghani (2022), taking inspiration from the foraging behavior of pelicans in search of food. It is highly effective at exploration and is particularly suitable for optimizing functions with a bowl-shaped structure, but there is no assurance that the solutions obtained by POA will always be the global optimum for all optimization problems. The hybridization in this paper differs from previous works in that it combines the exploration phase of POA with the exploitation phase of PSO to create a novel high-performing algorithm. The algorithm's search principle is explained in detail in Sect. 2. The main contributions of the paper are as follows:

  • A novel hybrid optimization algorithm, called the hybrid pelican-particle swarm optimization algorithm (HPPSO), has been proposed by combining two meta-heuristic algorithms, PSO and POA.

  • The proposed HPPSO algorithm has been validated by testing it on 33 benchmark mathematical functions in MATLAB (R2023a).

  • The results of the proposed HPPSO algorithm are compared with those of conventional PSO and POA along with several other PSO hybrids: PSOGSA, HFPSO, PSOBOA and PSOGWO.

  • The performance of the proposed HPPSO algorithm has been analyzed statistically through convergence curves, boxplots and the non-parametric Wilcoxon signed-rank test.

  • These analyses show that the proposed hybrid algorithm performs better than the other algorithms compared in the paper.

The subsequent sections of the paper are structured as follows: Sect. 1 explains the working mechanisms of the PSO and POA algorithms. Section 2 presents the proposed HPPSO algorithm. Section 3 contains the results, including the performance evaluations and statistical analysis. Finally, Sect. 4 presents the conclusion and the future scope.

1 Related work

This section provides a concise explanation of the working principles and basic parameters of the PSO and POA algorithms, which are the essential components of our proposed algorithm. Since POA is a recent algorithm and PSO is widely used, we provide only their basic concepts to facilitate a better understanding of our proposed algorithm.

1.1 Particle swarm optimization

Kennedy and Eberhart developed the PSO algorithm in 1995, drawing inspiration from the social behavior of a swarm of particles moving in a search space. In the algorithm, each member of the swarm is referred to as a particle and has two essential features, velocity and position, which are used in determining the optimal value. The algorithm starts by initializing a swarm of 'N' particles, each with a randomly generated position and velocity in a 'd'-dimensional search space, and evaluates the objective (fitness) function at each position.

Then, the velocity and position of each particle are updated using Eqs. (1) and (2):

$$\begin{aligned} v_{k}(t+1) = \omega \times v_{k}(t) + c_{1}\times r_{1} \times (pbest_{k} - x_{k}(t)) + c_{2}\times r_{2} \times (gbest - x_{k}(t)) \end{aligned}$$
(1)
$$\begin{aligned} x_{k}(t+1) = x_{k}(t) + v_{k}(t+1) \end{aligned}$$
(2)

where:

  • \(x_{k}(t)\): the current position of the \(k^{th}\) particle at time t

  • \(\omega\): the inertia weight

  • \(v_{k}(t)\): the current velocity of the \(k^{th}\) particle at time t

  • \(pbest_{k}\): the personal best position of the \(k^{th}\) particle

  • gbest: the global best position of any particle in the swarm

  • \(c_{1}\): the cognitive constant; \(c_{2}\): the social constant

  • \(r_{1}\) and \(r_{2}\) are two random numbers that take values between 0 and 1.

The inertia weight controls the impact of the particle’s previous velocity on its current velocity while \(c_{1}\), \(c_{2}\), \(r_{1}\) and \(r_{2}\) control the influence of the personal and global best positions on the particle’s movement.

At each iteration, the fitness of each particle is evaluated using the fitness function. If a particle's fitness is better than its personal best, its personal best is updated; if it is also better than the global best, the global best is updated. The algorithm terminates when a predefined stopping criterion is met, such as reaching a maximum number of iterations or finding a solution satisfying a specified fitness level. The final solution is the global best position found by any particle in the population.
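To make the update rules concrete, the following is a minimal Python sketch of PSO for a minimization problem with box bounds, following Eqs. (1) and (2). Note that the paper's own implementation is in MATLAB; the parameter defaults here (\(\omega = 0.7\), \(c_1 = c_2 = 2.0\)) are illustrative assumptions, not the settings of Table 1.

```python
import numpy as np

def pso(fitness, dim, bounds, n_particles=50, max_iter=1000,
        w=0.7, c1=2.0, c2=2.0):
    # Minimal PSO for minimization with box bounds, following Eqs. (1)-(2).
    lo, hi = bounds
    x = np.random.uniform(lo, hi, (n_particles, dim))   # positions
    v = np.zeros((n_particles, dim))                    # velocities
    pbest = x.copy()                                    # personal best positions
    pbest_f = np.array([fitness(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()              # global best position

    for _ in range(max_iter):
        r1 = np.random.rand(n_particles, dim)           # r1, r2 ~ U(0, 1)
        r2 = np.random.rand(n_particles, dim)
        # Eq. (1): inertia + cognitive + social terms
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)                      # Eq. (2), kept in bounds
        f = np.array([fitness(p) for p in x])
        better = f < pbest_f                            # update personal bests
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()          # update global best
    return gbest, pbest_f.min()

# Example: minimize the sphere function in 30 dimensions.
best_x, best_f = pso(lambda z: np.sum(z**2), dim=30, bounds=(-100, 100))
```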

Despite the successful implementation of PSO in various optimization problems, it still has some limitations. One major limitation of the PSO algorithm is the risk of premature convergence. This happens when the particles in the swarm converge to a suboptimal solution rather than exploring the entire search space, resulting in the inability to reach the global optimal solution.

1.2 Pelican optimization algorithm

The Pelican optimization algorithm (POA) is also a population-based algorithm that takes inspiration from the natural behaviors of pelicans. The algorithm is designed to mimic the strategies that pelicans exhibit during hunting. It was proposed by Trojovsky and Dehghani (2022), in which pelicans are considered the members of the population.

After the positions of the pelicans are initialized randomly in the search area, their hunting behavior is simulated to improve the candidate solutions in two phases, as follows:

Phase 1: Exploration phase (moving towards the food source): In this phase, the algorithm randomly generates a food source (prey) in the search space; once the prey is detected, the pelicans move towards it. The position of the \(i^{th}\) pelican candidate solution is updated in this phase using the following equation:

$$\begin{aligned} X^{New\_P1}_{i}=\left\{ \begin{array}{ll} X_{i} + rand.(L_{P}-I.X_{i}), &{} F_{p} < F_{i}\\ X_{i} + rand.(X_{i}-L_{P}), &{} \hbox {else} \end{array} \right. \end{aligned}$$
(3)

where:

  • \(X_{i}\): the current position of the \(i^{th}\) candidate solution

  • \(L_{P}\): the location of the prey

  • \(F_{p}\): the fitness value of the prey; \(F_{i}\): the fitness value of the \(i^{th}\) candidate solution

  • I: 1 or 2, selected randomly for each iteration

If the value of the fitness function at the new position is better than at the current position, the pelican's new position is accepted, as expressed by the following equation:

$$\begin{aligned} X_{i}=\left\{ \begin{array}{ll} X^{New\_P1}_{i}, &{} F^{P1}_{i} < F_{i}\\ X_{i}, &{} \hbox {else} \end{array} \right. \end{aligned}$$
(4)
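A minimal Python sketch of the Phase 1 update follows, assuming minimization and a prey location generated randomly in the search space. Variable names mirror Eq. (3); drawing the random factor per dimension is an implementation assumption.

```python
import numpy as np

def poa_phase1(x, f, fitness, prey, prey_f, lo, hi):
    # One Phase 1 pass over the population (Eqs. (3)-(4)), minimization.
    # x: (N, D) positions; f: (N,) fitness values; prey: (D,) prey location.
    n, d = x.shape
    for i in range(n):
        I = np.random.choice([1, 2])            # I is 1 or 2, chosen randomly
        r = np.random.rand(d)
        if prey_f < f[i]:                       # prey is better: move towards it
            x_new = x[i] + r * (prey - I * x[i])
        else:                                   # otherwise move away from it
            x_new = x[i] + r * (x[i] - prey)
        x_new = np.clip(x_new, lo, hi)
        f_new = fitness(x_new)
        if f_new < f[i]:                        # Eq. (4): greedy acceptance
            x[i], f[i] = x_new, f_new
    return x, f
```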

Phase 2: Exploitation phase (winging on the water surface): After the exploration phase, the algorithm enters the exploitation phase. In this phase, the pelicans spread their wings on the water's surface to drive the prey upwards, which enhances the local search ability. The position of the \(i^{th}\) pelican candidate solution is updated in phase 2 using the following equation:

$$\begin{aligned} X^{New\_P2}_{i} = X_{i} + R.(1-(c_{ite}/Max_{ite})).(2.rand-1).X_{i}, \end{aligned}$$
(5)

where:

  • \(c_{ite}\): the current iteration

  • \(Max_{ite}\): the maximum number of iterations

  • R: a constant equal to 0.2

The new pelican position is then accepted or rejected using the following equation:

$$\begin{aligned} X_{i}=\left\{ \begin{array}{ll} X^{New\_P2}_{i}, &{} F^{P2}_{i} < F_{i}\\ X_{i}, &{} \hbox {else} \end{array} \right. \end{aligned}$$
(6)
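Phase 2 can be sketched in the same style: the perturbation radius \(R\,(1 - c_{ite}/Max_{ite})\) shrinks linearly with the iteration count, so the local search becomes finer near the end of the run. Again, minimization and per-dimension random numbers are assumptions of this sketch.

```python
import numpy as np

def poa_phase2(x, f, fitness, t, max_iter, lo, hi, R=0.2):
    # One Phase 2 pass (Eqs. (5)-(6)): local perturbation whose radius
    # R * (1 - t/max_iter) shrinks as iteration t approaches max_iter.
    n, d = x.shape
    for i in range(n):
        rand = np.random.rand(d)
        x_new = x[i] + R * (1 - t / max_iter) * (2 * rand - 1) * x[i]  # Eq. (5)
        x_new = np.clip(x_new, lo, hi)
        f_new = fitness(x_new)
        if f_new < f[i]:                        # Eq. (6): greedy acceptance
            x[i], f[i] = x_new, f_new
    return x, f
```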

2 The proposed HPPSO algorithm

This section describes the reasoning behind the proposed HPPSO algorithm, its structure, and its basic working principles.

2.1 Basic idea

Achieving a global optimum requires an optimization algorithm to maintain a balance between exploring and exploiting solutions: exploration brings in variety, while exploitation provides intensity. As noted above, PSO's stagnation effect arises when its exploration and exploitation phases are out of balance. Therefore, hybridization with the excellent exploration capability of POA is used to overcome this effect. In the proposed algorithm, POA generates a working solution by initially exploring the search space, and PSO is then applied to optimize the solution by improving upon POA's output.

2.2 Implementation of the algorithm

In the proposed HPPSO algorithm, the initial positions of 'P' particles are randomly generated within the boundaries of a 'D'-dimensional search area. After initializing the particles' positions, the primary goal of the algorithm is to explore the search area thoroughly to identify the best possible solutions for a given problem. To achieve this, the algorithm uses the phase 1 mechanism of POA, which allows it to explore the search area efficiently. Once the exploration phase is complete, the algorithm moves into the exploitation phase, in which HPPSO passes the particles to the PSO technique as initial points (Fig. 1).

Algorithm 1 Hybrid pelican-particle swarm optimization algorithm (HPPSO)

Fig. 1 Flowchart of the proposed HPPSO algorithm

The PSO mechanism then utilizes the information gathered during the exploration phase to guide the particles towards the best-known solutions in the search space. This helps to maintain the diversity of the population and avoid getting stuck in local minima.
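Putting the pieces together, the following sketch reflects one plausible reading of the description above: in each iteration the population is first updated by POA's Phase 1 (exploration) and then passed through a PSO velocity-position update (exploitation). This is an illustration, not the authors' reference implementation; the exact coupling of the phases and the parameter values should be taken from Algorithm 1, Fig. 1, and Table 1. It reuses the `poa_phase1` sketch from Sect. 1.2.

```python
import numpy as np

def hppso(fitness, dim, bounds, n=50, max_iter=1000, w=0.7, c1=2.0, c2=2.0):
    lo, hi = bounds
    x = np.random.uniform(lo, hi, (n, dim))       # pelican/particle positions
    v = np.zeros((n, dim))                        # PSO velocities
    f = np.array([fitness(p) for p in x])
    pbest, pbest_f = x.copy(), f.copy()
    gbest = pbest[pbest_f.argmin()].copy()

    for t in range(max_iter):
        # Exploration: POA Phase 1 around a randomly generated prey.
        prey = np.random.uniform(lo, hi, dim)
        x, f = poa_phase1(x, f, fitness, prey, fitness(prey), lo, hi)
        better = f < pbest_f                      # record any improvements
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()

        # Exploitation: PSO update (Eqs. (1)-(2)) from the explored points.
        r1, r2 = np.random.rand(n, dim), np.random.rand(n, dim)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([fitness(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()
```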

3 Results and discussion

To validate the proposed HPPSO algorithm, 33 standard benchmark mathematical functions have been employed. The details of these benchmark functions and the parameter settings of the compared algorithms are listed in Sect. 3.1. In Sect. 3.2, a comparative analysis of the obtained results is carried out to evaluate the performance of the HPPSO algorithm against existing PSO hybrids as well as conventional PSO and POA. Section 3.3 presents the result analysis by the Wilcoxon signed-rank test, while Sect. 3.4 presents the boxplot analysis of the obtained results. These statistical analyses confirm the effectiveness of the HPPSO algorithm.

3.1 Experiment settings

The proposed HPPSO algorithm has been implemented in MATLAB R2023a on a computer system with a Core i5-11300H CPU @ 3.10 GHz and 16 GB of RAM.

3.1.1 Parameter settings

To ensure a fair test, the common parameters of the existing PSO hybrids, conventional PSO, and POA are kept the same in the simulation experiments. The experiments have been conducted using a population size of 50 and a maximum of 1000 iterations, with 20 independent runs for each function. Table 1 lists the control parameters of the compared algorithms.

Table 1 Parameter values of the compared algorithms
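As a sketch, this protocol (20 independent runs per function, summarized by mean, standard deviation, and best value) could be reproduced as follows; the `hppso` function is the sketch from Sect. 2, and the sphere function is only an illustrative objective.

```python
import numpy as np

def summarize(algorithm, fitness, dim, bounds, runs=20):
    # One row of a Table-3-style summary: mean, standard deviation, and
    # best of the final objective values over independent runs.
    finals = np.array([algorithm(fitness, dim, bounds)[1] for _ in range(runs)])
    return finals.mean(), finals.std(), finals.min()

# Example with the hppso sketch from Sect. 2 on the sphere function.
mean, std, best = summarize(hppso, lambda z: np.sum(z**2), dim=30,
                            bounds=(-100, 100))
```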

3.1.2 Different benchmark mathematical functions

The proposed HPPSO algorithm’s effectiveness is analyzed by testing its performance on three types of functions: Unimodal benchmark functions, Multimodal benchmark functions, and Fixed dimension multimodal functions. The detail description of these functions are listed in Table 2 (Digalakis and Margaritis 2001; Molga and Smutnicki 2005).

Table 2 Descriptions of 33 Standard benchmark functions
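For illustration, here are two classic functions of the unimodal and multimodal types in their standard forms; whether and where they appear in the paper's F1-F33 numbering is not assumed.

```python
import numpy as np

def sphere(x):
    # Unimodal, bowl-shaped: f(x) = sum(x_i^2); global minimum 0 at the origin.
    return np.sum(x**2)

def rastrigin(x):
    # Multimodal: many regularly spaced local minima; global minimum 0 at the origin.
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
```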

3.2 Statistical results and convergence curves analysis

This section discusses the results of applying the HPPSO algorithm to 33 well-known mathematical functions. The obtained results are compared with those of recent existing PSO hybrids as well as conventional PSO and POA. Table 3 presents a comparative analysis of the proposed HPPSO and the other algorithms based on their mean value, standard deviation and best value. From this table, certain conclusions can be drawn:

The proposed HPPSO algorithm performs better than all other algorithms in this study for functions F1-F5. Specifically, on these functions, the HPPSO algorithm achieves better optimization results and is the most stable, with the lowest standard deviation among the compared algorithms. For function F6, the results indicate that HPPSO reaches the global optimum solution. For function F7, the HPPSO algorithm outperforms the other algorithms with a low standard deviation, again indicating its stability. For function F8, the HPPSO algorithm performs better than all other compared algorithms and produces the best result of \(-\)10298.183.

The HPPSO algorithm obtains the global optimum solution for function F9. For F10, the HPPSO algorithm produces the third-best mean value, while PSOBOA performs best among the algorithms in terms of mean value; nevertheless, HPPSO attains the global optimum as its best value. For function F11, the HPPSO algorithm obtains the global optimum value.

Table 3 Statistical results on 33 standard benchmark functions
Fig. 2 Convergence curves of all compared algorithms on 33 standard benchmark functions

For function F12, the HPPSO algorithm produces the second-best optimal value, with the HFPSO algorithm performing best. For function F13, the HPPSO algorithm performs worse than the leading algorithms but better than PSOBOA and PSOGWO. For function F14, PSO and POA provide better results that are close to the global optimum. For function F15, the HPPSO algorithm produces the second-best optimal value, with the POA algorithm performing best. For functions F16-F20, HPPSO outperforms all other algorithms and reaches the global optimum. The HPPSO algorithm produces the second-best optimal value for functions F21 and F23 and the third-best optimal value for function F22.

For functions F24-F32, HPPSO performs better than the other compared algorithms, and it is second best for function F33. Thus, the HPPSO algorithm shows a balanced approach between exploring new possibilities and exploiting known solutions, as demonstrated by its performance on the 33 standard benchmark functions. This balance enables the algorithm to provide the best results on almost all functions, which no other compared algorithm achieves, making it highly efficient at reaching an optimal value. This efficiency stems from its POA search mechanism, which allows for better exploration and avoids local minima. Its performance demonstrates that it is effective on complex tasks that require a balance between exploration and exploitation, as well as the ability to avoid local minima.

Moreover, the convergence curves for the 33 benchmark functions optimized by the proposed HPPSO algorithm and the other algorithms are depicted in Fig. 2. These curves illustrate that the HPPSO algorithm exhibits superior convergence capabilities; this fast convergence suggests that the algorithm can find optimal solutions quickly. Specifically, for the unimodal functions (F1-F7), the HPPSO algorithm consistently identifies better solutions and converges steadily, whereas the conventional PSO and POA algorithms tend to get stuck in local optima and are outperformed by the HPPSO algorithm. Even for functions F24-F33, the convergence curves show that the HPPSO algorithm frequently identifies superior solutions and converges rapidly. However, the convergence of HPPSO is less effective on some multimodal functions.

3.3 Results analysis by Wilcoxon signed rank test

In Sect. 3.2, the performance of the HPPSO algorithm was evaluated in terms of mean and standard deviation on the 33 benchmark mathematical functions. However, these values alone are not enough to validate the results obtained by the algorithms. The Wilcoxon signed-rank test (Wilcoxon 1992) is therefore used to validate the results using the mean values obtained for the 33 standard benchmark functions. The test identifies statistically significant differences between paired algorithms in terms of their mean values, rather than ranking the algorithms' performance.

Table 4 p-values of the Wilcoxon signed-rank test

In this test, the probability value (p-value) is used to determine whether there are significant differences between the results. A lower p-value indicates a greater level of significance and stronger evidence for rejecting the null hypothesis, implying a statistically significant difference between the performance of the two compared algorithms. The results of the Wilcoxon signed-rank test are presented in Table 4 and suggest that the performance of the proposed HPPSO algorithm is significantly different from that of the other algorithms in this study at the 5% level of significance.
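As a sketch, such a test can be run with SciPy's `wilcoxon` on the paired per-function mean values of two algorithms; the numbers below are synthetic stand-ins, not the paper's data.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
hppso_means = rng.random(33)                        # synthetic per-function means
other_means = hppso_means + 0.1 * rng.random(33)    # a slightly worse competitor

stat, p_value = wilcoxon(hppso_means, other_means)  # paired, non-parametric test
print(f"p = {p_value:.2e}; significant at the 5% level: {p_value < 0.05}")
```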

3.4 Boxplot analysis

Boxplot analysis is used to evaluate and compare the performance of the algorithms. This graphical approach visualizes key statistics, including the minimum and maximum data points (whisker edges) and the interquartile range (the box). This analysis makes it possible to assess the dispersion, central tendency, and agreement of the algorithms' results. The boxplots of all compared algorithms on the 33 benchmark mathematical functions are presented in Fig. 3, and the results highlight the consistently superior performance of the HPPSO algorithm in comparison to the other algorithms.

Fig. 3 Boxplot of all compared algorithms on 33 standard benchmark functions
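A minimal sketch of such a boxplot with Matplotlib, using synthetic per-run results in place of the paper's data:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
# Synthetic final objective values from 20 runs of each algorithm on one function.
results = {"HPPSO": rng.normal(0.01, 0.005, 20),
           "PSO":   rng.normal(0.05, 0.020, 20),
           "POA":   rng.normal(0.03, 0.010, 20)}

plt.boxplot(list(results.values()))                 # box = IQR, whiskers = range
plt.xticks([1, 2, 3], results.keys())
plt.ylabel("Final objective value over 20 runs")
plt.show()
```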

4 Conclusion and future scope

In this paper, a novel Hybrid Pelican-Particle Swarm Optimization (HPPSO) algorithm has been presented, combining PSO and POA to efficiently solve complex optimization problems. HPPSO uses the good exploration capability of POA to overcome the stagnation effect of PSO: particles obtained through the exploration phase of POA are updated in the exploitation phase of PSO. The performance of HPPSO has been tested on 33 benchmark mathematical functions and compared with conventional PSO and POA along with several other PSO hybrids (PSOGSA, HFPSO, PSOBOA and PSOGWO). The statistical analysis of the HPPSO algorithm has been carried out through convergence curves, boxplots and the non-parametric Wilcoxon signed-rank test. These analyses indicate that HPPSO performs better than the other algorithms in terms of achieving better optima. HPPSO is thus an effective algorithm that can handle complex optimization problems and avoid local optima, making it a promising choice for optimization problems.

However, it is important to acknowledge that the HPPSO algorithm introduced in this paper has certain limitations, one of which is its inability to effectively optimize some multimodal functions (F10, F12-F14). In the future, several research directions can be explored to improve the proposed HPPSO algorithm and address the above-mentioned limitations. For instance, the robustness and accuracy of the proposed modification could be tested on different engineering, combinatorial optimization and real-world problems.