Abstract
Particle Swarm Optimization (PSO) has drawn attention due to its widespread use in scientific and engineering fields. However, it suffers from a major limitation: slow exploration, which can lead to stagnation. Various hybrid algorithms have been proposed to improve the exploration phase of PSO, but there is still room for improvement. With this in mind, this paper proposes a novel hybrid meta-heuristic algorithm, the Hybrid Pelican-Particle Swarm Optimization (HPPSO), for solving complex optimization problems. The hybridization is motivated by the excellent exploration capability of the Pelican Optimization Algorithm (POA). The performance of the proposed HPPSO has been tested on 33 standard benchmark functions in MATLAB (R2023a). For evaluation, the results of the proposed HPPSO algorithm are compared with those of conventional PSO and POA along with several other hybrid PSO algorithms (PSOGSA, HFPSO, PSOBOA, and PSOGWO). The results are analyzed statistically through convergence curves, boxplots, and a non-parametric Wilcoxon signed-rank test. These analyses show that the proposed HPPSO algorithm achieves better optima than the other algorithms considered in this paper.
Introduction
In recent years, optimization has become a fascinating area of research due to the increasing complexity and diversity of optimization problems across various fields such as engineering, wireless sensor networks (Gokulraj et al. 2021; Abdulai et al. 2023; Vahabi et al. 2022; Raja and Mookhambika 2022; Navin Dhinnesh and Sabapathi 2022; Dao et al. 2023; Jain et al. 2023; Thalagondapati and Singh 2023; Verma and Jain 2023; Boyineni et al. 2024), forecasting (Kim and Moon 2019; Roy et al. 2020; Dong et al. 2022; Nayak et al. 2023; Singh and Rizwan 2023; Wang et al. 2023; Danandeh Mehr et al. 2023), search engine optimization (Sethuraman et al. 2019), and science (Chakrabarti and Chakrabarty 2019; Gupta et al. 2020; Muruganantham and Gnanadass 2021; Avvari and Vinod Kumar 2022). The growing complexity of these problems has made it difficult for traditional optimization techniques to solve them effectively, since such techniques typically rely on gradient-based approaches or assume convexity of the problem space (Hariharan et al. 2023).
The limitations of traditional optimization have prompted researchers to explore alternative strategies capable of navigating the complexities of contemporary optimization problems. This exploration has led to the development and application of meta-heuristic algorithms. These algorithms do not require the problem to be convex or differentiable and are capable of searching large and complex spaces more efficiently (Rao 2019). They provide a framework for developing solution strategies that are adaptable, robust, and capable of finding satisfactory solutions with less computational effort.
Most of these algorithms are applied in different areas such as artificial neural networks (Movassagh et al. 2021), forecasting (Sengar and Liu 2020; Murali et al. 2020), malware detection (Alzubi et al. 2022a, b, 2023), the economic load dispatch problem (Padhi et al. 2020), optical wavelength division multiplexing (WDM) systems (Bansal et al. 2017; Bansal 2021) and wireless sensor networks (Hallafi et al. 2023; Vasanthi and Prabakaran 2023; Saranraj et al. 2022; Srinivas and Amgoth 2023; Khalifa et al. 2023; Dash 2023). The first and most well-known meta-heuristic algorithm is the Genetic Algorithm (GA) (Holland 1992; Goldberg 1989). The GA is inspired by the process of natural selection in genetics and has applications in diverse fields including networking and scheduling (Shinde and Bichkar 2023).
Thereafter, many researchers have proposed new and hybrid algorithms based on evolutionary processes, food-searching behavior, and physical principles of the universe. A few of the widely used methods include Particle Swarm Optimization (PSO) (Kennedy and Eberhart 1995), Gravitational Search Algorithm (GSA) (Rashedi et al. 2009), Monkey Search (MS) (Sharma et al. 2016), Differential Evolution (DE) (Storn and Price 1997), Simulated Annealing (SA) (Zhang and Wang 1993), Artificial Bee Colony (ABC) (Garg 2014), and the Multi-Objective Generalized Teacher-Learning-Based-Optimization Algorithm (Ram et al. 2022).
Among these, PSO is particularly effective at optimizing problems with continuous variables and has a rapid convergence rate compared to earlier algorithms. However, its exploration is limited, due to which it may get stuck in local optima, particularly on functions with many local optima. Over the years, researchers have proposed various variants, improvements and hybridizations of PSO to deal with these limitations (Houssein et al. 2021; Gad 2022). Different variants of PSO, such as binary, chaotic and multi-objective versions, have been developed to enhance its performance.
Hybridization of algorithms is another way to enhance performance by combining the best parts of each algorithm. The need for such hybridization arises from the fact that there are many different types of functions, ranging from simple problems to complex real-world ones, that need to be optimized. Since a single algorithm is typically designed around a single logical strategy, it cannot optimize every type of function, and different types of functions require different search strategies. This is where the concept of hybridization becomes relevant: by merging two distinct approaches, a hybrid can perform well on a wider range of functions than either individual approach.
One hybridization approach with PSO is PSOGSA, proposed by Mirjalili and Hashim (2010), in which the exploration ability of GSA is combined with the exploitation ability of PSO. Chopra et al. (2016) hybridized PSO with the grey wolf optimization (GWO) algorithm to solve the economic load dispatch problem by imitating the grey wolves’ leadership hierarchy and hunting mechanism. Yang et al. (2020) proposed three strategies to enhance the global optimization ability of the Butterfly Optimization Algorithm (BOA) (Arora and Singh 2019). The strategies include initializing BOA using a chaotic cubic map, applying a nonlinear parameter control strategy to the power exponent, and combining BOA with the PSO algorithm in a hybrid approach. The goal of these strategies is to address some of the limitations of the basic BOA and improve its ability to find the global optimum. However, it is important to note that the effectiveness of these strategies may vary depending on the specific optimization problem at hand. The study suggests that making innovative modifications and hybridizing with other algorithms can potentially improve the optimization capabilities of an algorithm. This paper introduces a novel hybrid approach called the hybrid pelican-particle swarm optimization algorithm (HPPSO), which combines the search principle of the Pelican Optimization Algorithm (POA) with PSO to eliminate the stagnation effect of PSO. POA was proposed by Trojovský and Dehghani (2022), taking inspiration from the foraging behavior of pelicans in search of food. It is highly effective at exploring and is particularly suitable for optimizing functions with a bowl-shaped structure. However, there is no assurance that the solutions obtained by POA will always be the global optimum for all optimization problems.
The hybridization process in this paper differs from previous works in that it combines the exploration phase of POA with the exploitation phase of PSO to create a novel high-performing algorithm. More detailed information about the algorithm’s search principle is given in Sect. 2 of the paper. The main contributions of the paper are as follows:
-
A novel hybrid optimization algorithm, called hybrid pelican-particle swarm optimization algorithm (HPPSO) has been proposed by combining two meta-heuristic algorithms PSO and POA.
-
To validate the proposed HPPSO algorithm, it has been tested on 33 benchmark mathematical functions in MATLAB (R2023a).
-
The obtained results of the proposed HPPSO algorithm are compared with conventional PSO and POA along with other numerous hybridized algorithms of PSO such as PSOGSA, HFPSO, PSOBOA and PSOGWO.
-
The performance of the proposed HPPSO algorithm has been analyzed statistically through convergence curves, boxplots and a non-parametric Wilcoxon signed-rank test.
-
These analyses show that the proposed hybrid algorithm performs better than the other compared algorithms used in the paper.
The subsequent sections of the paper are structured as follows: Sect. 1 explains the working mechanisms of the PSO and POA algorithms. Section 2 presents the proposed HPPSO algorithm. Section 3 contains the results, including the performance evaluations and statistical analysis. Finally, Sect. 4 presents the conclusion and the future scope.
1 Related work
This section provides a concise explanation of the working principles and basic parameters of the PSO and POA algorithms, which are the essential components of our proposed algorithm. Since POA is a recent algorithm and PSO is widely used, we provide only the basic concepts of these algorithms to facilitate a better understanding of our proposed algorithm.
1.1 Particle swarm optimization
Kennedy and Eberhart developed the PSO algorithm in 1995, drawing inspiration from the social behavior of a swarm of particles moving in a search space. In the algorithm, each member of the swarm is referred to as a particle and has two essential features: velocity and position. These features are utilized in determining the optimal value. The algorithm starts by initializing a swarm of ’N’ particles, each with a randomly generated position and velocity in a ’d’-dimensional search space, and evaluates the objective (fitness) function at each position.
Then, the velocity and position of each particle are updated using Eqs. (1) and (2):

\(v_{k}(t+1) = \omega v_{k}(t) + c_{1}r_{1}\left(pbest_{k} - x_{k}(t)\right) + c_{2}r_{2}\left(gbest - x_{k}(t)\right)\)  (1)

\(x_{k}(t+1) = x_{k}(t) + v_{k}(t+1)\)  (2)
where:
-
\(x_{k}(t)\): the current position of the \(k^{th}\) particle at time t
-
\(\omega\): the inertia weight
-
\(v_{k}(t)\): the current velocity of the \(k^{th}\) particle at time t
-
\(pbest_{k}\): the personal best position of the \(k^{th}\) particle
-
gbest: the global best position of any particle in the swarm
-
\(c_{1}\) and \(c_{2}\): the cognitive and social constants, respectively
-
\(r_{1}\) and \(r_{2}\) are two random numbers that take values between 0 and 1.
The inertia weight controls the impact of the particle’s previous velocity on its current velocity while \(c_{1}\), \(c_{2}\), \(r_{1}\) and \(r_{2}\) control the influence of the personal and global best positions on the particle’s movement.
At each iteration, the fitness of each particle is evaluated based on the fitness function. If a particle’s fitness is better than its personal best, its personal best is updated. If the particle’s fitness is better than the global best, the global best is updated. The algorithm terminates when a predefined stopping criterion is met, such as reaching a maximum number of iterations or finding a solution satisfying a specified fitness level. The final solution is the global best position found by any particle in the population.
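The velocity and position updates of Eqs. (1) and (2) can be sketched in Python (the paper’s experiments use MATLAB; the parameter values here are typical defaults, not the authors’ settings):

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=2.0, c2=2.0):
    """One PSO iteration: update the velocity and position of every particle."""
    for k in range(len(positions)):
        for d in range(len(positions[k])):
            r1, r2 = random.random(), random.random()
            # Eq. (1): inertia + cognitive pull + social pull
            velocities[k][d] = (w * velocities[k][d]
                                + c1 * r1 * (pbest[k][d] - positions[k][d])
                                + c2 * r2 * (gbest[d] - positions[k][d]))
            # Eq. (2): move the particle by its new velocity
            positions[k][d] += velocities[k][d]
    return positions, velocities
```

After each step, the caller re-evaluates the fitness function and updates `pbest` and `gbest` as described above.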
Despite the successful implementation of PSO in various optimization problems, it still has some limitations. One major limitation of the PSO algorithm is the risk of premature convergence. This happens when the particles in the swarm converge to a suboptimal solution rather than exploring the entire search space, resulting in the inability to reach the global optimal solution.
1.2 Pelican optimization algorithm
The Pelican optimization algorithm (POA) is also a population-based algorithm that takes inspiration from the natural behaviors of pelicans. The algorithm is designed to mimic the strategies and behavior that pelicans exhibit during hunting. It was proposed by Trojovský and Dehghani (2022), in which pelicans are considered the members of the population.
The approach of pelicans when they hunt for food is replicated to improve the candidate solution in two phases through simulation after initializing the position of pelicans randomly in the search area. These two phases are as follows:
Phase 1: Exploration phase (Moving towards food source): In this phase, the algorithm explores the search space for pelicans to find their food source (prey), whose location is generated randomly. Once the prey is detected, the pelicans move towards it. The position of the \(i^{th}\) pelican candidate solution is updated in this phase using Eq. (3):

\(X_{i}^{P1} = X_{i} + rand \cdot \left(L_{P} - I \cdot X_{i}\right)\), if \(F_{p}(L_{P}) < F_{p}(X_{i})\)  (3)

\(X_{i}^{P1} = X_{i} + rand \cdot \left(X_{i} - L_{P}\right)\), otherwise
where:
-
\(X_{i}\): the initial position of the candidate solution
-
\(L_{P}\): the location of the prey
-
\(F_{p}\): the fitness function
-
\(I\): 1 or 2, selected randomly for each iteration
If the value of the fitness function at the new position is better than the value at the current position, then the pelican’s new position is accepted using Eq. (4):

\(X_{i} = X_{i}^{P1}\), if \(F_{p}(X_{i}^{P1}) < F_{p}(X_{i})\); otherwise \(X_{i}\) remains unchanged.  (4)
Phase 2: Exploitation phase (Winging on the water surface): After the exploration phase, the algorithm enters the exploitation phase. In this phase, the pelicans use their wings to create a space over the water’s surface, allowing the prey to move upwards. This process enhances the local search ability. The position of the \(i^{th}\) pelican candidate solution is updated in phase 2 using Eq. (5):

\(X_{i}^{P2} = X_{i} + R \cdot \left(1 - \frac{c_{ite}}{Max_{ite}}\right) \cdot (2 \cdot rand - 1) \cdot X_{i}\)  (5)
Where:
-
\(c_{ite}\): the current iteration
-
\(Max_{ite}\): the maximum number of iterations
-
\(R\): a constant, set to 0.2
Then, the new pelican position is accepted or rejected using Eq. (6):

\(X_{i} = X_{i}^{P2}\), if \(F_{p}(X_{i}^{P2}) < F_{p}(X_{i})\); otherwise \(X_{i}\) remains unchanged.  (6)
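One full POA iteration over the two phases can be sketched as follows (an illustrative Python version over a caller-supplied fitness function; the paper’s implementation is in MATLAB and may differ in details):

```python
import random

def poa_step(pop, fitness, prey, t, max_iter, R=0.2):
    """One POA iteration: exploration toward the prey, then local 'winging'."""
    f_prey = fitness(prey)
    for i, x in enumerate(pop):
        # Phase 1: move toward the prey if it is fitter, otherwise away from it.
        I = random.choice([1, 2])
        if f_prey < fitness(x):
            cand = [xj + random.random() * (prey[j] - I * xj) for j, xj in enumerate(x)]
        else:
            cand = [xj + random.random() * (xj - prey[j]) for j, xj in enumerate(x)]
        if fitness(cand) < fitness(x):      # greedy acceptance
            pop[i] = x = cand
        # Phase 2: shrinking local search around the current position.
        radius = R * (1 - t / max_iter)
        cand = [xj + radius * (2 * random.random() - 1) * xj for j, xj in enumerate(x)]
        if fitness(cand) < fitness(x):      # greedy acceptance again
            pop[i] = cand
    return pop
```

Because both phases use greedy acceptance, the fitness of each member can only improve or stay the same within an iteration.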
2 The proposed HPPSO algorithm
This section describes the rationale behind the proposed HPPSO algorithm, its structure, and its basic working principles.
2.1 Basic idea
Achieving a global optimum requires an optimization algorithm to maintain a balance between exploring and exploiting solutions: exploration brings diversity, while exploitation brings intensity. As noted above, PSO’s stagnation effect arises when its exploration and exploitation phases are out of balance. Therefore, hybridization with the excellent exploration capability of POA is used to overcome it. In the proposed algorithm, POA generates a working solution by initially exploring the search space, and then PSO optimizes that solution by improving upon POA’s output.
2.2 Implementation of the algorithm
In the proposed HPPSO algorithm, the initial positions of ’P’ particles are randomly generated within the boundaries of a search area that has ’D’ dimensions. After initializing the positions of the particles, the primary goal of the algorithm is to explore the search area thoroughly to identify the best possible solutions for a given problem. To achieve this, the algorithm uses the phase 1 mechanism of POA, which enables it to explore the search area efficiently. Once the exploration phase is complete, the algorithm moves into the exploitation phase, in which the HPPSO algorithm passes the particles to the PSO technique as initial points (Fig. 1).
The PSO mechanism then utilizes the information gathered from the exploration phase to guide the particles towards the best-known solutions in the search space. It helps to improve the diversity of the population and avoid getting stuck in local minima.
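A minimal sketch of this hybrid loop follows, assuming the current best member acts as the prey and using the sphere function as a stand-in objective (an illustration under those assumptions, not the authors’ MATLAB code; parameter values are hypothetical):

```python
import random

def sphere(x):
    return sum(v * v for v in x)

def hppso(dim=5, pop_size=20, iters=200, w=0.7, c1=2.0, c2=2.0, lo=-5.0, hi=5.0):
    """Sketch of HPPSO: POA Phase-1 exploration feeding PSO exploitation."""
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    vel = [[0.0] * dim for _ in range(pop_size)]
    pbest = [p[:] for p in pop]
    gbest = min(pop, key=sphere)[:]
    for t in range(iters):
        prey = min(pop, key=sphere)          # assumed: best member acts as prey
        for i, x in enumerate(pop):
            # POA Phase 1: explore toward the prey, greedy acceptance.
            I = random.choice([1, 2])
            cand = [xj + random.random() * (prey[j] - I * xj) for j, xj in enumerate(x)]
            if sphere(cand) < sphere(x):
                pop[i] = x = cand
            # PSO: exploit around personal and global bests.
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - x[d])
                             + c2 * r2 * (gbest[d] - x[d]))
                x[d] = min(hi, max(lo, x[d] + vel[i][d]))  # clamp to bounds
            if sphere(x) < sphere(pbest[i]):
                pbest[i] = x[:]
            if sphere(x) < sphere(gbest):
                gbest = x[:]
    return gbest
```

The POA move injects diversity before every PSO update, which is the mechanism the paper credits for avoiding stagnation.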
3 Results and discussion
To validate the proposed HPPSO algorithm, 33 standard benchmark mathematical functions have been employed. The details of these benchmark functions and the parameter settings of the compared algorithms are listed in Sect. 3.1. In Sect. 3.2, a comparative analysis of the obtained results has been carried out to evaluate the performance of the HPPSO algorithm against other existing hybrid PSO algorithms as well as conventional PSO and POA. Section 3.3 presents the result analysis by the Wilcoxon signed-rank test, while Sect. 3.4 presents the boxplot analysis of the obtained results. These statistical analyses confirm the effectiveness of the HPPSO algorithm.
3.1 Experiment settings
The proposed HPPSO algorithm has been implemented in MATLAB R2023a on a computer system with a Core i5-11300H CPU @ 3.10 GHz and 16 GB of RAM.
3.1.1 Parameter settings
In order to ensure the fairness of the test experiment, the parameters of the existing hybrid PSO algorithms and of conventional PSO and POA are kept the same in the simulation experiments. The experiment has been conducted using a population size of 50 and a maximum of 1000 iterations, with 20 independent runs for each function. Table 1 provides information about the control parameters of the compared algorithms.
3.1.2 Different benchmark mathematical functions
The proposed HPPSO algorithm’s effectiveness is analyzed by testing its performance on three types of functions: unimodal benchmark functions, multimodal benchmark functions, and fixed-dimension multimodal functions. A detailed description of these functions is given in Table 2 (Digalakis and Margaritis 2001; Molga and Smutnicki 2005).
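As a hypothetical illustration of the first two categories (the paper’s exact function set is listed in Table 2, not reproduced here), the classic sphere (unimodal, bowl-shaped) and Rastrigin (multimodal, many local optima) functions look like:

```python
import math

def sphere(x):
    """Unimodal: a smooth bowl with a single minimum of 0 at the origin."""
    return sum(v * v for v in x)

def rastrigin(x):
    """Multimodal: a regular grid of local minima; global minimum 0 at the origin."""
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)
```

Unimodal functions test exploitation (convergence speed), while multimodal functions test exploration (escaping local optima).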
3.2 Statistical results and convergence curves analysis
This section discusses the results of implementing the HPPSO algorithm on 33 well-known mathematical functions. The obtained results are compared with recent existing hybridizing optimization algorithms of PSO including conventional PSO and POA. Table 3 demonstrates a comparative analysis of proposed HPPSO and other existing algorithms based on their mean value, standard deviation and best value. From this table, certain conclusions can be drawn:
The proposed HPPSO algorithm performs better than all the other algorithms used in this study for functions F1-F5. Specifically, when applied to these functions, the HPPSO algorithm achieves better optimization results and is the most stable, having the lowest standard deviation among the compared algorithms. For function F6, the results indicate that HPPSO reached the global optimum solution. For function F7, the HPPSO algorithm is superior to the other algorithms, with a low standard deviation that indicates its stability. For function F8, the HPPSO algorithm performs better than all the other compared algorithms and produces the best result, which is \(-\)10298.183.
The HPPSO algorithm obtained the global optimum solution for function F9. For F10, the HPPSO algorithm produces the third-best mean value, while PSOBOA performs best among all the algorithms in terms of mean value; nevertheless, the HPPSO algorithm attains the global optimum as its best value. For function F11, the HPPSO algorithm obtains the global optimum value.
For function F12, the HPPSO algorithm produces the second-best optimal value, while the HFPSO algorithm performs best. For function F13, the HPPSO algorithm performs worse than the leading algorithms but better than PSOBOA and PSOGWO. PSO and POA provide results close to the global optimum for function F14. For function F15, the HPPSO algorithm produces the second-best optimal value, while the POA algorithm performs best. For functions F16-F20, HPPSO outperforms all other algorithms and reaches the global optima. The HPPSO algorithm produces the second-best optimal value for functions F21 and F23 and the third-best optimal value for function F22.
For functions F24-F32, HPPSO is superior to the other compared algorithms, and it is second best for function F33. Thus, the HPPSO algorithm shows a balanced approach between exploring new possibilities and exploiting known solutions, as demonstrated by its performance on the 33 standard benchmark functions. This balance enables the algorithm to provide, for almost all functions, results that no other single algorithm could achieve, making it highly efficient at reaching an optimal value. The proposed HPPSO algorithm is therefore more efficient than the other algorithms at finding the optimal solution, because its POA search mechanism allows for better exploration and avoids local minima. Its performance demonstrates that it is effective in managing complex tasks that require a balance between exploration and exploitation, as well as the ability to avoid local minima.
Moreover, the convergence curves for the 33 benchmark functions optimized by the proposed HPPSO algorithm and the other algorithms are depicted in Fig. 2. These curves illustrate that the HPPSO algorithm exhibits superior convergence capabilities; this fast convergence suggests that the algorithm can find optimal solutions quickly. Specifically, for the unimodal functions (F1-F7), the HPPSO algorithm consistently identifies better solutions and converges steadily. In contrast, the conventional PSO and POA algorithms tend to get stuck in local optima and are outperformed by the HPPSO algorithm. Even for functions F24-F33, the convergence curves show that the HPPSO algorithm frequently identifies superior solutions and converges rapidly. However, the convergence of HPPSO is less effective on some multimodal functions.
3.3 Results analysis by Wilcoxon signed rank test
In Sect. 3.2, the performance of the HPPSO algorithm was evaluated in terms of mean and standard deviation on 33 benchmark mathematical functions. However, these values alone are not enough to validate the results obtained by the algorithms. A Wilcoxon signed-rank test (Wilcoxon 1992) is therefore used to validate the results, based on the mean values obtained for the 33 standard benchmark functions. The test identifies statistically significant differences between paired algorithms in terms of their mean values, rather than ranking the algorithms’ performance.
In this test, the probability value (p-value) is used to determine whether there are significant differences between the results. A lower p-value indicates a greater level of significance and stronger evidence for rejecting the null hypothesis, implying that the performance of the two compared algorithms differs in a statistically significant way. The results of the Wilcoxon signed-rank test are presented in Table 4 and suggest that the performance of the proposed HPPSO algorithm is significantly different from the other existing algorithms used in this study at a 5% level of significance.
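The mechanics of the test can be sketched in pure Python (in practice a statistics package would also supply the p-value; this hypothetical helper computes only the W statistic from paired results):

```python
def wilcoxon_signed_rank(a, b):
    """W statistic of the Wilcoxon signed-rank test for paired samples a, b."""
    # Paired differences; zero differences are discarded, as is standard.
    diffs = [x - y for x, y in zip(a, b) if x != y]
    # Rank the absolute differences, averaging ranks over ties.
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1            # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    # Sum the ranks of positive and negative differences; W is the smaller sum.
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus)
```

A small W (relative to the critical value for the sample size) leads to rejecting the null hypothesis that the two algorithms perform the same.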
3.4 Boxplot analysis
The boxplot analysis is used for evaluating and comparing the performance of the algorithms. This graphical approach visualizes key statistics, including the minimum and maximum data points (whisker edges) and the interquartile range (the box). It allows the study to assess the dispersion, central tendency, and agreement of the algorithms’ results. The boxplots of all compared algorithms on the 33 benchmark mathematical functions are presented in Fig. 3. The results of this analysis highlight the consistently superior performance of the HPPSO algorithm in comparison to the other algorithms.
4 Conclusion and future scope
In the present paper, a novel Hybrid Pelican-Particle Swarm Optimization (HPPSO) algorithm has been presented, combining PSO and POA to efficiently solve complex optimization problems. HPPSO uses the strong exploration capability of POA to overcome the stagnation effect of PSO: particles obtained through the exploration phase of POA are updated in the exploitation phase of PSO. The performance of HPPSO has been tested on 33 benchmark mathematical functions and compared with conventional PSO and POA along with several other hybrid PSO algorithms (PSOGSA, HFPSO, PSOBOA and PSOGWO). The statistical analysis of the HPPSO algorithm has been carried out through convergence curves, boxplots and a non-parametric Wilcoxon signed-rank test. These analyses indicate that HPPSO achieves better optima than the other algorithms. HPPSO is thus an effective algorithm that can handle complex optimization problems and avoid local optima, making it a promising choice for optimization problems.
However, it is important to acknowledge that the HPPSO algorithm introduced in this paper has certain limitations, one of which is its inability to effectively optimize some multimodal functions (F10, F12-F14). In the future, several research directions can be explored to improve the proposed HPPSO algorithm and address the above-mentioned limitations. For instance, the robustness and accuracy of the proposed modification could be tested on different engineering, combinatorial optimization, and real-world problems.
References
Abdulai JD, Adu-Manu KS, Katsriku FA, Engmann F (2023) A modified distance-based energy-aware (mDBEA) routing protocol in wireless sensor networks (WSNs). J Amb Intell Human Comput 14(8):10195–10217
Alzubi OA, Alzubi JA, Alazab M, Alrabea A, Awajan A, Qiqieh I (2022) Optimized machine learning-based intrusion detection system for fog and edge computing environment. Electronics 11(19):3007
Alzubi OA, Alzubi JA, Al-Zoubi AM, Hassonah MA, Kose U (2022) An efficient malware detection approach with feature weighting based on harris hawks optimization. Cluster Comput 25(4):2369–2387
Alzubi OA, Alzubi JA, Alzubi TM, Singh A (2023) Quantum mayfly optimization with encoder-decoder driven LSTM networks for malware detection and classification model. Mobile Netw Appl 28(2):795–807
Arora S, Singh S (2019) Butterfly optimization algorithm: a novel approach for global optimization. Soft Comput 23(3):715–734
Avvari RK, Vinod Kumar DM (2022) Multi-objective optimal power flow with efficient constraint handling using hybrid decomposition and local dominance method. J Inst Eng (India): Ser B 103(5):1643–1658
Aydilek IB (2018) A hybrid firefly and particle swarm optimization algorithm for computationally expensive numerical problems. Appl Soft Comput 66:232–249
Bansal S (2021) Nature-inspired hybrid multi-objective optimization algorithms in search of near-ogrs to eliminate fwm noise signals in optical wdm systems and their performance comparison. J Inst Eng (India): Ser B 102(4):743–769
Bansal S, Singh AK, Gupta N (2017) Optimal golomb ruler sequences generation for optical WDM systems: a novel parallel hybrid multi-objective bat algorithm. J Inst Eng (India): Ser B 98(1):43–64
Boyineni S, Kavitha K, Sreenivasulu M (2024) Rapidly-exploring random tree-based obstacle-aware mobile sink trajectory for data collection in wireless sensor networks. J Amb Intell Human Comput 15(1):607–621
Chakrabarti A, Chakrabarty K (2019) A proposal to adjust the time-keeping systems for savings in cycling operation and carbon emission. J Inst Eng (India) Ser B 100(6):541–550
Chopra N, Kumar G, Mehta S (2016) Hybrid GWO-PSO algorithm for solving convex economic load dispatch problem. Int J Res Adv Technol 4(6):37–41
Danandeh Mehr A, Rikhtehgar Ghiasi A, Yaseen ZM, Sorman AU, Abualigah L (2023) A novel intelligent deep learning predictive model for meteorological drought forecasting. J Amb Intell Human Comput 14(8):10441–10455
Dao TC, Tam NT, Quy NQ, Binh HTT (2023) An energy-efficient scheme for maximizing data aggregation tree lifetime in wireless sensor network. J Amb Intell Human Comput 14(9):12329–12344
Dash D (2023) Plane sweep algorithms for data collection for energy harvesting wireless sensor networks using mobile sink. J Amb Intell Human Comput 14(5):6307–6320
Derrac J, García S, Molina D, Herrera F (2011) A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evolut Computat 1(1):3–18
Digalakis JG, Margaritis KG (2001) On benchmarking functions for genetic algorithms. Int J Comput Math 77(4):481–506
Dong X, Deng S, Wang D (2022) A short-term power load forecasting method based on k-means and SVM. J Amb Intell Human Comput 13(11):5253–5267
Faramarzi A, Heidarinejad M, Mirjalili S, Gandomi AH (2020) Marine predators algorithm: A nature-inspired metaheuristic. Expert Syst Appl 152:113377
Gad AG (2022) Particle swarm optimization algorithm and its applications: a systematic review. Arch Computat Methods Eng 29(5):2531–2561
Garg H (2014) Solving structural engineering design optimization problems using an artificial bee colony algorithm. J Ind Manag Opt 10(3):777–794
Gokulraj J, Senthilkumar J, Suresh Y, Mohanraj V (2021) Data prediction approaches for efficient data transmission using optimized Leibler distance matrix-based data aggregation in wireless sensor network. J Amb Intell Human Comput 3(26):1–8
Goldberg DE (1989) Genetic algorithms in search, optimization and machine learning. Addison-Wesley, Reading
Gupta S, Garg R, Singh A (2020) ANFIS-based control of multi-objective grid connected inverter and energy management. J Inst Eng Ser B 101(1):1–14
Hallafi A, Barati A, Barati H (2023) A distributed energy-efficient coverage holes detection and recovery method in wireless sensor networks using the grasshopper optimization algorithm. J Amb Intell Human Comput 14(10):13697–13711
Hariharan U, Rajkumar K, Akilan T, Ponmalar A (2023) A multi-hop protocol using advanced multi-hop Dijkstras algorithm and tree based remote vector for wireless sensor network. J Amb Intell Human Comput 14:6877–6895
Holland JH (1992) Genetic algorithms. Sci Am 267(1):66–73
Houssein EH, Gad AG, Hussain K, Suganthan PN (2021) Major advances in particle swarm optimization: theory, analysis, and application. Swarm Evolut Computat 63:100868
Jain D, Shukla PK, Varma S (2023) Energy efficient architecture for mitigating the hot-spot problem in wireless sensor networks. J Amb Intell Human Comput 14(8):10587–10604
Kennedy J, Eberhart R (1995) Particle swarm optimization. Proc Int Conf Neural Netw 4:1942–1948
Khalifa B, Al Aghbari Z, Khedr AM (2023) CAPP: coverage aware topology adaptive path planning algorithm for data collection in wireless sensor networks. J Amb Intell Human Comput 14(4):4537–4549
Kim J, Moon N (2019) BiLSTM model based on multivariate time series data in multiple field for forecasting trading area. J Amb Intell Human Comput 13:1–10
Laskar NM, Guha K, Chatterjee I, Chanda S, Baishnab KL, Paul PK (2019) HWPSO: a new hybrid whale-particle swarm optimization algorithm and its application in electronic design optimization problems. Appl Intell 49(1):265–291
Mirjalili S, Hashim SZM (2010) A new hybrid PSOGSA algorithm for function optimization. In: proceeding of ICCIA’10- international conference on computer and information application, pp 374-377
Molga M, Smutnicki C (2005) Test functions for optimization needs. Test Funct Optimizat Needs 101(48):01–43
Movassagh AA, Alzubi JA, Gheisari M, Rahimi M, Mohan S, Abbasi AA, Nabipour N (2021) Artificial neural networks training algorithm integrating invasive weed optimization with differential evolutionary model. J Amb Intell Human Comput 14:6017–6025
Murali P, Revathy R, Balamurali S, Tayade AS (2020) Integration of RNN with GARCH refined by whale optimization algorithm for yield forecasting: a hybrid machine learning approach. J Amb Intell Human Comput 11:1–13
Muruganantham B, Gnanadass R (2021) Solar integrated time series load flow analysis for practical distribution system. J Inst Eng Ser B 102(4):829–841
Navin Dhinnesh ADC, Sabapathi T (2022) Probabilistic neural network based efficient bandwidth allocation in wireless sensor networks. J Amb Intell Human Comput 13(4):2001–2012
Nayak JR, Shaw B, Sahu BK (2023) A fuzzy adaptive symbiotic organism search based hybrid wavelet transform-extreme learning machine model for load forecasting of power system: a case study. J Amb Intell Human Comput 14(8):10833–10847
Padhi S, Panigrahi BP, Dash D (2020) Solving dynamic economic emission dispatch problem with uncertainty of wind and load using whale optimization algorithm. J Inst Eng (India) Ser B 101(1):65–78
Raja J, Mookhambika N (2022) A novel energy harvesting with middle-order weighted probability (EHMoWP) for performance improvement in wireless sensor network (WSN). J Amb Intell Human Comput 13(12):5465–5476
Ram SDK, Srivastava S, Mishra KK (2022) A multi-objective generalized teacher-learning-based-optimization algorithm. J Inst Eng (India) Ser B 103(5):1415–1430
Rao SS (2019) Engineering optimization: theory and practice. Wiley, NJ, p 10
Rashedi E, Nezamabadi-Pour H, Saryazdi S (2009) GSA: a gravitational search algorithm. Inf Sci 179(13):2232–2248
Roy SS, Samui P, Nagtode I, Jain H, Shivaramakrishnan V, Mohammadi-Ivatloo B (2020) Forecasting heating and cooling loads of buildings: a comparative performance analysis. J Amb Intell Human Comput 11:1253–1264
Saranraj G, Selvamani K, Malathi P (2022) A novel data aggregation using multi objective based male lion optimization algorithm (DA-MOMLOA) in wireless sensor network. J Amb Intell Human Comput 13(12):5645–5653
Sengar S, Liu X (2020) Ensemble approach for short term load forecasting in wind energy system using hybrid algorithm. J Amb Intell Human Comput 11(11):5297–5314
Sethuraman J, Alzubi JA, Manikandan R, Gheisari M, Kumar A (2019) Eccentric methodology with optimization to unearth hidden facts of search engine result pages. Recent Patents Comput Sci 12(2):110–119
Sharma A, Sharma A, Panigrahi BK, Kiran D, Kumar R (2016) Ageist spider monkey optimization algorithm. Swarm Evol Comput 28:58–77
Shinde AS, Bichkar RS (2023) Energy efficient active/sleep scheduling of sensor nodes in target based WSN using genetic algorithm with dither creeping mutation. J Amb Intell Human Comput 14(6):7649–7662
Singh U, Rizwan M (2023) Analysis of wind turbine dataset and machine learning based forecasting in SCADA-system. J Amb Intell Human Comput 14(6):8035–8044
Srinivas M, Amgoth T (2023) Delay-tolerant charging scheduling by multiple mobile chargers in wireless sensor network using hybrid GSFO. J Amb Intell Human Comput 14(12):16063–16079
Storn R, Price K (1997) Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces. J Glob Optim 11(4):341–359
Thalagondapati V, Singh MP (2023) A self-organized priority-based MAC protocol in wireless sensor networks based on SDR-RHSO optimal relay node selection and HL-ANN wake-up scheduling. J Amb Intell Human Comput 14(8):11093–11102
Trojovsky P, Dehghani M (2022) Pelican optimization algorithm: a novel nature-inspired algorithm for engineering applications. Sensors 22(3):855
Vahabi S, Mojab SP, Eslaminejad M, Dashti SE (2022) Eam: energy aware method for chain-based routing in wireless sensor network. J Amb Intell Human Comput 13(9):4265–4277
Vasanthi G, Prabakaran N (2023) Reliable network lifetime and energy-aware routing protocol for wireless sensor network using hybrid particle swarm-flower pollination search algorithm. J Amb Intell Human Comput 14(12):16183–16193
Verma RK, Jain S (2023) Energy and delay efficient data acquisition in wireless sensor networks by selecting optimal visiting points for mobile sink. J Amb Intell Human Comput 14(9):11671–11684
Wang X, Wang Y, Peng J, Zhang Z (2023) Multivariate long sequence time-series forecasting using dynamic graph learning. J Amb Intell Human Comput 14(6):7679–7693
Wilcoxon F (1945) Individual comparisons by ranking methods. Biometrics Bull 1(6):80–83
Zhang C, Wang HP (1993) Mixed-discrete nonlinear optimization with simulated annealing. Eng Optim 21(4):277–291
Zhang M, Long D, Qin T, Yang J (2020) A chaotic hybrid butterfly optimization algorithm with particle swarm optimization for high-dimensional optimization problems. Symmetry 12(11):1800
Funding
The first author is extremely grateful to the Council of Scientific & Industrial Research (CSIR) for providing a Junior Research Fellowship (JRF), file number 09/1152(0024)/2020-EMR-I, and for the encouragement that made this research possible.
Author information
Contributions
Amit Raj contributed to methodology development, manuscript writing, and results compilation. Parul Punia contributed to the preparation of the figures and tables used in the manuscript. Pawan Kumar contributed towards the data analysis, supervision, writing review, and editing of the manuscript. All the authors read and approved the final manuscript.
Ethics declarations
Conflict of interest
The authors have no conflicts of interest to disclose in relation to the current study.
Ethical approval
The authors did not conduct any studies involving human participants or animals for this article.
Informed consent
All the authors have approved the manuscript and agree with its submission to the International Journal of System Assurance Engineering and Management.
About this article
Cite this article
Raj, A., Punia, P. & Kumar, P. A novel hybrid pelican-particle swarm optimization algorithm (HPPSO) for global optimization problem. Int J Syst Assur Eng Manag 15, 3878–3893 (2024). https://doi.org/10.1007/s13198-024-02386-9