1 Introduction

Optimization is the task of minimizing or maximizing an objective function subject to constraints. In order to optimize real-world problems, they must first be formulated mathematically. Based on the derived cost function, all optimization problems can be divided into two types, i.e. linear and nonlinear. In the literature, three different approaches have been adopted to solve optimization problems, i.e. deterministic, analytical and stochastic (Elijah 2012). Deterministic approaches rely on a given initial condition and/or assumption, whereas analytical approaches have some pre-defined targets for a particular problem. Both of these approaches tend to diverge dramatically as the dimensionality of an optimization problem increases. Therefore, to resolve the issue of quick divergence, the third approach, i.e. the stochastic approach, is widely used to find near-optimum solutions to various real-world optimization problems.

Among stochastic approaches, a population-based technique known as PSO (particle swarm optimization) is widely used to solve optimization problems (Kenndy and Eberhart 1995; Eberhart and Kennedy 1995). PSO is a powerful and modern swarm intelligence technique for solving global optimization problems. The particles seek the objective of a real-world optimization problem by emulating the behaviour of fish schools and bird flocks. The population is referred to as a swarm, and each individual agent is a candidate solution referred to as a particle. Generally, two approaches are used to initialize the swarm, i.e. random initialization of the population and initialization according to the problem variables. Each particle i in a swarm is associated with two vectors, i.e. the velocity vector \(v_{i} = [v_{i}^1, v_{i}^2,\ldots , v_{i}^D]\) and the position vector \( x_{i} = [x_{i}^1, x_{i}^2, \ldots , x_{i}^D]\), where D represents the problem dimensionality. Velocities and positions are initialized randomly within the search space range. Throughout the evolutionary process, the velocity and position of the ith particle are updated on each dimension according to the following two equations:

$$\begin{aligned} v_{i}(t+1) = v_{i}(t)+c_{1}r_{1}(\hbox {pbest}_{i}(t)-x_{i}(t)) + c_{2}r_{2}(\hbox {gbest}_{i}(t)-x_{i}(t)) \end{aligned}$$
(1)
$$\begin{aligned} x_{i}(t+1) = x_{i}(t)+ v_{i}(t+1) \end{aligned}$$
(2)

where \(c_{1}\) and \(c_{2}\) represent acceleration coefficients (or weights) and are usually set to the constant value 2.0. Furthermore, \(r_{1}\) and \(r_{2}\) are two uniformly distributed random numbers in (0, 1).

The velocity update equation (i.e. Eq. 1) has three terms on the right-hand side: the first term is called the momentum or inertia term; the second is the cognitive part, which represents the personal influence of the particle; and the third is the social part, which denotes the social and collective influence on the particle. The main role of the acceleration coefficients is to balance exploitation and exploration of each particle in the search space during its movement. Moreover, \(V_{\max }\) is introduced to further constrain the movement of each particle within the boundary of the search space. The value of \(V_{\max }\) is set according to the problem's minimum and maximum boundaries. Small values of \(V_{\max }\) favour exploitation (local search), whereas large values favour exploration (global search). Each particle memorizes the best position it has found in its history. The best position found by each particle itself is called the personal best \(P_\mathrm{best}\) position, whereas the best position found by the whole swarm is called the global best or \(G_\mathrm{best}\). In the beginning, each particle moves with a random velocity in the search space. Subsequently, each particle dynamically fine-tunes its velocity according to its own experience and that of the other participants. The particle positions are updated iteratively until the final optimal solution is found or the maximum number of iterations is reached. Figure 1 describes the main process of the traditional PSO algorithm.
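The update rule of Eqs. 1 and 2, together with the \(V_{\max }\) clamp described above, can be sketched as follows. This is an illustrative sketch rather than the authors' implementation; the function name `pso_step` and its signature are our own.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, c1=2.0, c2=2.0, v_max=None):
    """One update of Eqs. 1-2 for a whole swarm; rows are particles."""
    r1 = np.random.rand(*x.shape)  # uniform random numbers in (0, 1)
    r2 = np.random.rand(*x.shape)
    v_new = v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    if v_max is not None:
        # V_max keeps each velocity component inside the search range
        v_new = np.clip(v_new, -v_max, v_max)
    return x + v_new, v_new
```

With a small `v_max` the particles take short, exploitative steps; with a large one they roam the search space, matching the exploitation/exploration trade-off described above.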

Like traditional evolutionary algorithms, PSO suffers from the tightly coupled issues of parameter sensitivity and premature convergence. The proposed model, a novel variant of the traditional PSO scheme, is introduced to resolve these issues and to enhance the search capacity. The proposed model focuses on how to: (i) control parameter modification (i.e. the inertia weight and acceleration coefficients); (ii) design the topological structures used for updating the velocity; and (iii) form a hybrid of PSO and evolutionary algorithms (Van den Bergh and Petrus Engelbrecht 2006; Eberhart and Shi 2001, 2004; Ozcan and Mohan 1999; Cheng and Jin 2015; Robinson et al. 2002; Juang 2004). Besides these, evolutionary state estimation (ESE)-based approaches, which divide the search space into several sub-stages, have also been suggested to improve PSO search efficiency and convergence speed (Zhan et al. 2009). However, the four states do not guarantee that such approaches remain suitable for large-scale and, in particular, high-dimensional optimization problems.

In this paper, we introduce a novel N state switching PSO (NS-SPSO) algorithm. The NS-SPSO algorithm aims to further improve the performance of NS-MJPSO (Rahman 2016) by excluding some complex steps, namely: (i) determining the jumping probability from domain knowledge, since in practice it is very difficult to have enough domain knowledge of the optimization problem; and (ii) the computational complexity and burden induced by the Markov chain process. In the proposed NS-SPSO algorithm, particle velocities are updated purely according to the evolutionary factor value. A particle switches from one state to another according to the assessment of its current evolutionary factor; the choice of each particle either to stay in its current state or to switch to another state is made by how large the value of the evolutionary factor is. First of all, the population distribution and the mean distance are determined using the Euclidean distance. The evolutionary factor is then derived from the population distribution and the mean distance of each particle from the global best. From the population distribution, we determine how far the particles are from each other and from their global optimum. The main states are described as exploration, exploitation, convergence and jumping-out states (Zhan et al. 2009). In the N states, however, we further divide the main four states into sub-states or stages. Each state is assigned a value in the range of (0, 1).

The algorithmic parameters, such as the inertia weight and acceleration coefficients, are assigned appropriate weights according to each sub-state or stage. In the N states, N acceleration coefficients are assigned, and the appropriate value is selected during the evaluation process according to the current state. Subsequently, the algorithm converges to the optimum in just a few iterations. For that reason, we have adopted the concept of a linearly time-decreasing inertia weight for the proposed NS-SPSO algorithm. Extensive simulations have been carried out to evaluate the performance of our proposed NS-SPSO algorithm by applying it to 12 widely used benchmark functions, consisting of six uni-modal and six multi-modal problems. The results produced by NS-SPSO are then compared with NS-MJPSO and other state-of-the-art algorithms (PSO variants). The average/best evaluation values are shown in the tables and further illustrated in the graphical figures. The proposed algorithm has consumed the shortest computation time and has produced the second best results (in terms of accuracy) in comparison with all other variants, with the notable exception of NS-MJPSO. Furthermore, the proposed algorithm has solved the problem of premature convergence to some extent through the concept of state switching. Each particle successfully learns from the population distribution about the neighbourhood and the global best position, and the particles then efficiently move towards the global optimum in a shorter time. Moreover, the classification of N states induces a balance between local and global search regions. The main contributions of our work are as follows:

  1. an N-state switching PSO is proposed that extends the four-state approach (Zhan et al. 2009) to N states for large-scale optimization problems;

  2. an additional parameter based on evolutionary switching is introduced;

  3. an existing mechanism for calculating the inertia weight parameter (Shi and Eberhart 1998b) is adopted; and

  4. the proposed approach is evaluated on various low- and high-dimensional (uni-modal and multi-modal) benchmark functions.

The rest of this paper is organized as follows. Section 2 summarizes the background literature, the structure of the basic PSO algorithm and its developments towards the proposed work. A brief description of the problem is presented in Sect. 3. In Sect. 4, the structure of the proposed algorithm is presented. In Sect. 5, the performance of the proposed NS-SPSO algorithm is thoroughly examined in comparison with other well-known PSO variants. In Sect. 6, we briefly summarize our proposed work along with several directions for further research.

2 Related work

PSO is a meta-heuristic, population-based algorithm that was first introduced by Kennedy and Eberhart in 1995 (Kenndy and Eberhart 1995; Eberhart and Kennedy 1995). The main concept is inspired by the swarm intelligence behaviour and choreography of bird flocking and fish schooling (Kennedy et al. 2001). In PSO, participant agents called particles move towards the optimum location in the search space. After the random initialization of the population, the particles compare their current positions to those of their neighbours and thus move to new positions. Basically, each particle is a candidate solution, and it keeps track of the best position it has found within its trial history, denoted symbolically as the personal best or \(P_\mathrm{best}\), whereas the best value ever found by the entire swarm is called the global best or \(G_\mathrm{best}\). All particles are collectively named a swarm. The PSO algorithm is based on two simple equations, denoted as the velocity update \(v_{i}\) and the position update \(x_{i}\). A constant value of 2 is used for the acceleration coefficients \(c_{1}\) and \(c_{2}\). Apart from that, two uniformly distributed random numbers denoted as \(\hbox {rand}_{1}\) and \(\hbox {rand}_{2}\) are also used. The simplified structure, good quality solutions, quick convergence and reliability are the main characteristics that have attracted researchers in various fields, and PSO has been applied to various real-world optimization problems (Eberhart and Shi 2001; Krohling and dos Santos Coelho 2006; Ho et al. 2008; Liu et al. 2007; Eberhart and Shi 2004; Wang et al. 2013; Hu et al. 2015) over the last two decades. Due to its tendency to get trapped in local optima and its excessive number of evaluations, the basic PSO has been further modified, and several variants have been developed with extra capabilities.

In modified PSO versions, the diversity of the swarm has been improved by introducing various topological structures in Suganthan (1999) and Kennedy (1999). Kennedy and Mendes (2002) proposed two different types of topologies, named the ring and Von Neumann topologies. A novel fully informed PSO (FIPS) was proposed in Mendes et al. (2004), in which the particles learn from the peers with the best fitness in their neighbourhood. Another variant, the comprehensive learning PSO (CLPSO), was developed in Liang et al. (2006); this algorithm has also contributed to the area of topological improvements of PSO. The performance of the above-mentioned algorithms is investigated on various uni-modal and multi-modal benchmark functions. A further variant of PSO, developed in Qu et al. (2013), derives the mean distance of particles in the local neighbourhood and has been used for problems with many local optima.

The PSO algorithm has been used in combination with other techniques such as evolutionary techniques (Zhan et al. 2009), genetic algorithms (Robinson et al. 2002; Valdez et al. 2014) and ant bee colony (Shelokar et al. 2007), and additional parameters have been introduced into the PSO algorithm. The concept of niching was incorporated into PSO in Brits et al. (2007), and Gaussian mutation was introduced by Higashi and Iba (2003). An adaptive particle swarm optimization (APSO) algorithm has been proposed (Zhan et al. 2009), in which evolutionary state information is used as an additional term added to the process of basic PSO. Four evolutionary states \(S_1\), \(S_2\), \(S_3\) and \(S_4\) (exploration, exploitation, convergence and jumping out) are introduced, and each state is assigned an appropriate value from the fuzzy membership interval of the particle's current state. The evolutionary factor \(E_\mathrm{f}\) is used to characterize the population distribution and to measure the mean distance between the global best and the other particles in the swarm. The four states are described by taking the population distribution information \(E_\mathrm{f}\) into account. A fuzzy classification method is used for classifying the states, which results in some limitations: excessive computation of the acceleration coefficients in each generation; swarm stagnation in local optima if the current global best is itself a local optimum; and the complicated implementation of the classification method. The authors state that dividing the search space into sub-stages increases search efficiency and convergence speed. However, the four states still do not guarantee avoidance of local optima and premature convergence for high-dimensional optimization problems.

Furthermore, in PSO performance studies, computation time has been considered the initial objective for improvement. Another aim has been solving the problem of local optima or premature convergence (Ho et al. 2008; Liu et al. 2007; Ciuprina et al. 2002; Liang et al. 2006). The resulting improved variants have been thoroughly investigated by applying them to numerous real-world problems. However, due to the nonlinear, multi-modal, high-dimensional and complex nature of real-world problems, there is still room for further enhancement of the PSO algorithm. In response, supplementary techniques have been merged to control the parameters of the PSO algorithm (Zhan et al. 2007, 2009; Tang et al. 2011), and topological structures have been improved to explore the search space, ensure the global optimum and avoid premature convergence (Liang et al. 2006).

A novel hybrid type, switching PSO (SPSO), has been proposed in Tang et al. (2011). Evolutionary state information is used to find the mean distance of all the particles. A Markov chain is then applied to randomly switch each particle among the four states according to certain transition probabilities. Furthermore, appropriate values of the acceleration coefficients are predefined for all states. The main consideration of switching PSO is to establish a balance between the local and global search regions and to converge quickly to the global optimum in a few iterations. The switching mechanism ensures that a particle changes its state according to a certain probability and does not get trapped in a local optimum prematurely. SPSO has shown the best performance on benchmark functions and a genetic regulatory network (GRN) application (Tang et al. 2011). Therefore, to make the existing SPSO more robust and accurate, we propose several modifications to the current SPSO algorithm.

In Weibo et al. (2018), a randomly occurring distributedly delayed PSO algorithm (RODDPSO) is suggested. In RODDPSO, the evolutionary state is computed using the evolutionary factor, and based on the evolutionary state, the particle switches from one state to another. To reduce the chance of local stagnation and to explore the search space, randomly occurring time delays, which reflect the historical \(P_\mathrm{best}\) and \(G_\mathrm{best}\) particles, are incorporated into the velocity update equation. Empirical evaluation suggests that RODDPSO outperforms some well-known PSO variants over eight benchmark functions. Previous studies show that PSO suffers from premature convergence, particularly in problems relating to data clustering, and RODDPSO has been evaluated for data clustering. Similarly, density-based PSO variants are presented for data clustering in Alswaitti et al. (2018) and Ling et al. (2016) and evaluated over various benchmark functions.

The proposed N state switching PSO algorithm (NS-SPSO) is a modified version of our previously developed algorithm NS-MJPSO described in Rahman (2016) and Rahman et al. (2020), where N is the number of possible states and can be any positive integer. In Rahman et al. (2020), a transition matrix based on the probability of each particle is used to predict the next state of the particle using a Markovian jumping mechanism. Basically, the main idea is similar to the four-state versions given in APSO and SPSO (Zhan et al. 2009; Tang et al. 2011). The NS-SPSO algorithm is based on evolutionary techniques, and the N states are visualized as sub-states or stages of the four states. Furthermore, the evolutionary states are described by calculating and then using the population distribution, the mean Euclidean distance, the maximum and minimum values, and the index of the global best particle in the population distribution information. The inertia weight \(\omega \) is an important parameter of the PSO algorithm, first introduced by Shi and Eberhart (1998a). Shi and Eberhart (1998b) proposed the concept of linearly decreased inertia weight (LDIW) to compute the value of \(\omega \), which iteratively decreases the value of \(\omega \) from 0.9 to 0.4 on the basis of Eq. 9. The idea behind this decrease is the state of the particle: in the initial stages, the particle is far away from the target; therefore, a larger value of \(\omega \) is preferred.

Besides these, PSO has been applied to a variety of optimization problems, including machine learning techniques to improve feature selection for text and document clustering; these fall under the category of PSO applications. For example, Abualigah and Khader (2017) and Abualigah et al. (2018) used PSO for feature selection, i.e. an unsupervised learning approach to choose a subset of the most informative text features in order to improve the performance of text clustering and minimize its computational time. The algorithm is known as FSPSOTC (Abualigah et al. 2018). The authors also proposed ‘H-FSPSOTC’, a hybrid PSO algorithm using genetic operators to improve its efficiency (Abualigah and Khader 2017). Their results show that the proposed feature selection method improved the text clustering outcomes by assisting k-means text clustering to form more similar groups.

In this work, we compute the inertia weight \(\omega \) by combining the evolutionary factor and the time-varying strategy (Shi and Eberhart 1999). The acceleration coefficients \(c_{1}\) and \(c_{2}\) each take N tuned values. The proposed algorithm is then applied to 12 commonly used uni-modal and multi-modal functions of various dimensions, and the results are compared with some well-known and most cited algorithms. The proposed algorithm has performed well in terms of the shortest computation time and the average/best evaluation values in comparison with all other variants, except NS-MJPSO in terms of accuracy, for most of the benchmark functions. However, for a few problems, some additional parameter tuning is required to improve the quality of the solution in terms of better evaluation values.

Fig. 1 The particle and parameter social learning behaviour (Rahman 2016) (this generalized figure denotes the movement of each particle, which could be in any state out of the N states, as shown in Fig. 2)

2.1 The basic framework of PSO algorithm

The PSO algorithm refers to the intelligent searching behaviour of all participants, named particles. The population of all particles is called a swarm of size n, where each individual particle i is a candidate solution in the problem space. Each particle i holds two vector quantities: the velocity of the ith particle in D dimensions at time t, represented as \(v_i(t)=(v_{i1}(t),v_{i2}(t),\ldots ,v_{iD}(t))\), and the position of the ith particle in D dimensions at time t, denoted as \(x_i(t)=(x_{i1}(t),x_{i2}(t),\ldots ,x_{iD}(t))\), where D represents the dimension of the solution search space. The swarm velocities and positions are initialized randomly within their respective boundaries \(x_\mathrm{in}(t)\in [x_{\min ,n},x_{\max ,n}]\) (\(1\le n \le D\)), with \(x_{\min ,n}\) and \(x_{\max ,n}\) the bounds of the search space; \(V_{\max }\) is the maximum velocity, set to \(20\% \) of the search space (Eberhart and Shi 2001). During the iterative evaluation of the algorithm, particle i is updated on each dimension d as follows.

$$\begin{aligned} v_{i}(t+1) = \omega v_{i}(t)+c_{1}\hbox {rand}_{1}(\hbox {pbest}_{i}(t)-x_{i}(t)) + c_{2}\hbox {rand}_{2}(\hbox {gbest}_{i}(t)-x_{i}(t)) \end{aligned}$$
(3)
$$\begin{aligned} x_{i}(t+1) = x_{i}(t)+ v_{i}(t+1) \end{aligned}$$
(4)

where \(\omega \) is called the inertia weight (Shi and Eberhart 1998a), and \(c_{1}\) and \(c_{2}\) are the acceleration coefficients (Eberhart and Kennedy 1995). Further, \(\hbox {rand}_{1}\) and \(\hbox {rand}_{2}\) are two uniformly distributed random numbers generated in [0, 1] (Kenndy and Eberhart 1995). The personal best, \(\hbox {pbest}_{i} = (\hbox {pb}_{i1}, \hbox {pb}_{i2}, \ldots , \hbox {pb}_{iD})\), is the position with the best fitness value found by the ith particle so far, and the global best, \(\hbox {gbest} = (\hbox {gb}_{1}, \hbox {gb}_{2}, \ldots , \hbox {gb}_{D})\), is the position with the best fitness value found by the entire swarm. Note that \(n_\mathrm{Best}\) is used for the best in the neighbourhood version, \(G_\mathrm{Best}\) for the global version, and \(L_\mathrm{Best}\) for the local version of PSO. The particle's personal experience and its social interaction iteratively determine the direction towards its best position. The movement of each particle in the search space and the influence of its parameters are shown in Fig. 1 (Weber and Van Noije 2012).
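Putting Eqs. 3 and 4 together with random initialization, pbest/gbest bookkeeping and the 20% \(V_{\max }\) rule yields the basic PSO loop below. This is a minimal illustrative sketch, not a reference implementation: it assumes a linearly decreasing \(\omega \) from 0.9 to 0.4 and \(c_1 = c_2 = 2\) as stated above, and all names (`basic_pso`, `bounds`, `seed`) are our own.

```python
import numpy as np

def basic_pso(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0),
              c1=2.0, c2=2.0, seed=0):
    """Minimal global-best PSO implementing Eqs. 3-4."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))         # random positions
    v_max = 0.2 * (hi - lo)                             # 20% of search range
    v = rng.uniform(-v_max, v_max, (n_particles, dim))  # random velocities
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pbest_val)].copy()
    for t in range(iters):
        omega = 0.9 - 0.5 * t / iters                   # LDIW: 0.9 -> 0.4
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = omega * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        v = np.clip(v, -v_max, v_max)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val                     # update personal bests
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()      # update global best
    return gbest, float(pbest_val.min())

# Example: minimize the 5-dimensional sphere function
best, val = basic_pso(lambda z: float(np.sum(z * z)), dim=5)
```

On a simple uni-modal function such as the sphere, this loop drives the best evaluation value close to zero well within the 200 iterations.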

3 Problem description

In traditional PSO, various issues, such as parameter sensitivity, getting stuck in local optima and weak robustness, affect performance, particularly for large-scale optimization problems. Therefore, a rich literature suggests various PSO variants, in different problem domains, in order to: (i) enhance the search performance; and (ii) address one or more of the aforementioned issues. Among these, our previously proposed NS-MJPSO algorithm (Rahman 2016) has been shown to be successful, based on the assumptions that: (i) we know how to determine the jumping probability according to prior knowledge; and (ii) we do not really care about the computational burden induced by the extra stage of the Markovian state jumping (Rahman 2016). In practice, however, we quite often have little domain knowledge about the optimization problem, and the computational burden is a concern. Therefore, it is essential to tackle this issue. Moreover, evolutionary state estimation and division of the search space into several sub-stages do not guarantee fast and mature convergence, in particular for large-scale, high-dimensional optimization problems. For this reason, we propose another novel PSO algorithm, the NS-SPSO algorithm. In NS-SPSO, we update the velocity purely based on the evolutionary factor, where the state switches from one to another according to the evaluation of the evolutionary factor. In other words, the possibility of state switching or staying is determined by how large the evolutionary factor is. Our proposed NS-SPSO algorithm is then examined through application to several benchmark functions.

4 The novel N state switching PSO

This section elaborates the development of the novel NS-SPSO for the enhancement of global search performance. A new switching parameter \(\delta (t)\) is introduced in the basic PSO velocity update Eq. (5). The value of N states, along with the other parameters, is initialized, where N represents the number of evolutionary states and is described as \(N \in \{4, 5, 6,\ldots , n-1, n\}\). The basic idea of state division is shown in Fig. 2. Note that the original four-state model of convergence, exploration, jumping out and exploitation (Tang et al. 2011) is divided into different sub-states. A single state may need to be divided into multiple states, for example, (i) in the exploitation state, where particles are stuck, or (ii) when, due to the use of a small \(c_1\) and a large \(c_2\), particles enter the premature convergence state; dividing each such state into at least two sub-states leads to \(N = 5\) or 6. Note that a particle being stuck in a particular stage means that the algorithm converges prematurely without further optimizing the objective; in other words, a suboptimal value is computed in the first few iterations and the same value is repeatedly computed until the end. The convergence state, by contrast, means that the particle has already achieved its target position. The accuracy and search ability of a PSO-based variant are significantly improved if a particular state is divided into multiple states, and the proposed sub-state mechanism reduces the probability of skipping a particle during the transition process. Although the sub-state mechanism resolves the aforementioned problem, a high computational overhead, i.e. in terms of the search space and the dimensionality of the problem, is the major issue associated with the small-state model. Note that the division of a single state into multiple stages may not be beneficial in all cases, as investigated in Sect. 5.3. Therefore, it is essential to evaluate and estimate an appropriate number of states for a particular problem with respect to its dimensionality. The novel N state switching PSO (NS-SPSO) is investigated by applying it to 12 widely used uni-modal and multi-modal benchmark functions (Liang et al. 2006; Suganthan et al. 2005), which are given in Sect. 5.

$$\begin{aligned} v_{i}(t+1) = \omega v_{i}(t)+c_{1}(\delta (t))r_{1}(t)(\hbox {pbest}_{i}(t)-x_{i}(t)) + c_{2}(\delta (t))r_{2}(t)(\hbox {gbest}_{i}(t)-x_{i}(t)), \end{aligned}$$
(5)
$$\begin{aligned} x_{i}(t+1) = x_{i}(t)+ v_{i}(t+1) \end{aligned}$$
(6)
Fig. 2 The concept of N states using the Markov chain process (Rahman et al. 2020)
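The modified velocity update of Eq. (5) differs from Eq. (3) only in that \(c_1\) and \(c_2\) are looked up by the current state index \(\delta (t)\). The sketch below is our own illustration of that idea; the per-state coefficient tables and the function name are hypothetical, not values from the paper.

```python
import numpy as np

def ns_spso_velocity(x, v, pbest, gbest, state, c1_table, c2_table,
                     omega, rng=None):
    """Eq. (5): the acceleration coefficients c1(delta(t)) and c2(delta(t))
    are selected from per-state tables keyed by the current state index."""
    if rng is None:
        rng = np.random.default_rng()
    c1, c2 = c1_table[state], c2_table[state]  # state-dependent coefficients
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    return omega * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
```

Because only the table lookup changes between states, switching costs a single dictionary access per update rather than any re-computation of the swarm.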

4.1 Prediction of evolutionary states

At the beginning of the population distribution, the particles are dispersed in the search space. As the evolutionary process proceeds, however, the particles iteratively group together in the later stages and find their local and global optimal positions in the search space. Extracting information from the population distribution and using it to describe the evolutionary state is an important research topic in PSO; hence, the population distribution information in each generation must be recorded. A clustering-based technique was introduced for evolutionary state estimation in Zhang et al. (2007) and Zhan et al. (2007), whereas a fuzzy classification method is used for calculating the four evolutionary states in Zhan et al. (2009).

In the first step of the population distribution, the mean distance of each particle i from the other particles in the search space is derived. Particles having a smaller mean distance than the global best are close to the convergence state and switch to other states according to the evolutionary factor, whereas particles located far away from the global best switch to another state with higher parameter values. The mean distance is calculated using the Euclidean metric as follows (Zhan et al. 2009; Tang et al. 2011):

$$\begin{aligned} P_{d}(i)= \frac{1}{N-1}\sum _{j=1,j\ne i}^{N}\sqrt{\sum _{k=1}^{D}(x_{i}(k)-x_{j}(k))^2} \end{aligned}$$
(7)

In Eq. (7), N represents the swarm size and D stands for the dimension of the problem. The evolutionary factor \(E_\mathrm{f}\) was introduced by Zhan et al. (2009) and further used by Tang et al. (2011). Note that the value of the N states is pre-determined. The value of \(E_\mathrm{f}\) is computed from the Euclidean distances, and comparing it with States(\(\delta \)) (as shown in Eq. 8) gives the current state among the N states. For example, if \(N=8\) and \(1/N \le E_\mathrm{f} < 2/N\), then the particle is in the second stage of the exploration state.

This paper presents a novel switching mechanism by extending the four states up to N states (Rahman 2016). It further divides the four states into possible sub-states. The sub-states smoothly describe the degree of association of a particle with a particular state and represent certain stages according to the value of N. By dividing into sub-states, each particle is expected to adopt more suitable values for its parameters. Hence, by increasing the number of states, the performance of the algorithm improves in terms of the accuracy of the evaluation results, although the computational burden increases slightly. An auxiliary parameter \(\delta \), as described in Sect. 4.3, is used in the new velocity update Eq. (5) in Sect. 4. Initially, \(c_{1}(\delta (0))\) and \(c_{2}(\delta (0))\), with \(\delta \) at state 0, are assigned the value 2. The appropriate values for \(c_1\) and \(c_2\) are then assigned automatically at runtime.

Here, we derive the mean distance \(P_{\mathrm{d}}(i)\) of every particle using Eq. (7) and find \(P_{\mathrm{dg}}\), the mean distance of the global best particle (which corresponds to the usual global best, gbest, of traditional PSO), \(P_{\mathrm{d}(\max )}\), the maximum mean distance, and \(P_{\mathrm{d}(\min )}\), the minimum mean distance. Using these values from the population distribution, the evolutionary factor is then computed with Eq. (8) (Fig. 3).

$$\begin{aligned} E_{\mathrm{f}}&= \frac{P_{\mathrm{dg}}-P_{\mathrm{d}(\min )}}{P_{\mathrm{d}(\max )}-P_{\mathrm{d}(\min )}}\in [0,1]\nonumber \\ \hbox {States} (\delta )&= {\left\{ \begin{array}{ll} 1, &{} 0 \le E_{\mathrm{f}}< \frac{1}{N}, \\ 2, &{} \frac{1}{N} \le E_{\mathrm{f}}< \frac{2}{N}, \\ 3, &{} \frac{2}{N} \le E_{\mathrm{f}}< \frac{3}{N}, \\ \vdots &{} \vdots \\ N, &{} \frac{N-1}{N} \le E_{\mathrm{f}} \le 1 \end{array}\right. } \end{aligned}$$
(8)
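Equations (7) and (8) can be computed directly from the particle positions. The sketch below is our own illustrative code, not the authors' implementation; `g_index` is assumed to be the index of the global best particle in the swarm.

```python
import numpy as np

def evolutionary_state(positions, g_index, n_states):
    """Mean pairwise distances (Eq. 7), evolutionary factor E_f and
    state index (Eq. 8) for an (n_particles, D) position array."""
    n = len(positions)
    diffs = positions[:, None, :] - positions[None, :, :]
    dist = np.sqrt((diffs ** 2).sum(axis=2))   # pairwise Euclidean distances
    p_d = dist.sum(axis=1) / (n - 1)           # Eq. 7 (diagonal terms are 0)
    e_f = (p_d[g_index] - p_d.min()) / (p_d.max() - p_d.min())
    # Eq. 8: map E_f in [0, 1] onto one of N equal-width states
    state = min(int(e_f * n_states) + 1, n_states)
    return e_f, state
```

For example, for three collinear particles at 0, 1 and 2, the middle particle has the smallest mean distance; if it is the global best, \(E_\mathrm{f} = 0\) and the swarm falls in state 1, the stage closest to convergence.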
Fig. 3 The switching parameters flow diagram

Dividing the solution space exploration approaches found in other population-based optimization algorithms, such as APSO (Zhan et al. 2009; Tang et al. 2011), into sub-stages improves their convergence as well as their search efficiency. This is due to the fact that the entire population is intelligently managed and controlled according to each particle's position. For example, if a particle is far away from its gbest, then the maximum acceleration weight is assigned to speed up its movement towards the target position. Similarly, if particles are too close to their target position, their speed is controlled accordingly.

4.2 Mechanism for inertia weight calculation

The inertia weight \(\omega \) makes a significant contribution to the control of the PSO algorithm, and has the main influence on the balance between global and local search. A small value of \(\omega \) encourages exploitation in the local search region, whereas a large \(\omega \) drags the swarm towards the global search region. In this study, \(\omega \) is computed by using the linearly decreasing inertia weight (LDIW) strategy proposed in Shi and Eberhart (1999). In the LDIW strategy, the value of \(\omega \) is iteratively decreased from 0.9 to 0.4, based on the idea that in the initial stages a particle needs larger values to control its movement towards gbest. The main reason for selecting this method is the suitability of the decreasing factor to the current state of the particle.

$$\begin{aligned} \omega =(\omega _{\max }- \omega _{\min })\times \frac{\text{ maxiter }-\text{ iter }}{\text{ maxiter }}+\omega _{\min } \end{aligned}$$
(9)
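The LDIW schedule can be sketched as follows, a Python illustration (the paper's code is MATLAB) of the linearly decreasing form described in the text, with \(\omega \) falling from \(\omega _{\max }\) at the first iteration to \(\omega _{\min }\) at the last:

```python
def inertia_weight(iteration, max_iterations, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight (LDIW), Shi and Eberhart (1999)."""
    return w_max - (w_max - w_min) * iteration / max_iterations

# Early iterations favour global search; late iterations favour local search.
w_start = inertia_weight(0, 1000)     # 0.9
w_mid = inertia_weight(500, 1000)     # 0.65
w_end = inertia_weight(1000, 1000)    # 0.4
```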

Here, we initialize \(\omega \) with a maximum value of 0.9 and a minimum value of 0.5. The value of the inertia weight \(\omega \) is linearly decremented with time, so that the velocity of each particle is adapted to its current position. Further developments and investigations of various inertia weight methods are briefly described in Rahman (2016). The complete structure of NS-SPSO is described by the flowchart in Fig. 4, and the steps are shown in Algorithm 1:

Fig. 4

N state switching PSO algorithm flowchart

4.3 Selection of acceleration coefficients

In the proposed NS-SPSO algorithm, the acceleration coefficients are selected and adjusted manually according to the problem; N values are required for each acceleration coefficient. For instance, if \(N = 4\) then we initialize four values for each acceleration coefficient, with \(C = [2, N]\). Each value is designated to a particular state, and the N acceleration coefficient values are pre-initialized. Initially, \(c_{1}(\delta (0))\) and \(c_{2}(\delta (0))\) are assigned the value 2. The appropriate value for each acceleration coefficient is then assigned automatically during program execution. The strategy for selecting the acceleration coefficients for each state, described in Tang et al. (2011), is as follows:

In the proposed NS-SPSO technique, a large value of \(E_{\mathrm{f}}\) indicates the jumping-out state. As the particle intends to jump from a local optimum towards the global optimum, a large value of the social learning factor, \(c_{2}(\delta (4)) = 2.2\), and a smaller value of the cognitive factor, \(c_{1}(\delta (4)) = 1.8\), are assigned. Consequently, the particle flies towards the global best region very quickly, and with this strategy the proposed algorithm converges to the global optimum in few iterations.

Similarly, a relatively small value of \(E_{\mathrm{f}}\) indicates the exploration state, for which a large value of \(c_{1}(\delta (3)) = 2.2\) and a smaller value of \(c_{2}(\delta (3)) = 1.8\) are assigned, letting the particle explore the search space under its personal influence.

Moreover, in the exploitation state, a large value of \(c_{1}(\delta (2)) = 2.1\) and a smaller value of \(c_{2}(\delta (2)) = 1.9\) are pre-initialized; these slight changes preserve the balance between local and global search performance. Subsequently, in the convergence state, equal values \(c_{1}(\delta (1)) = 2.0\) and \(c_{2}(\delta (1)) = 2.0\) are assigned, because all particles group together in this state. The steps of the proposed NS-SPSO algorithm are shown in Algorithm 1 and are explained as follows. In steps 1 to 5, all the required parameters are initialized. In steps 7 to 16, the mean (Euclidean) distance for each particle is computed. In steps 17 to 18, the evolutionary factor is computed. Next, in steps 19 to 23, the current state is determined from the value of \(E_\mathrm{f}\) and \(\hbox {States}(\delta )\). In step 24, the next state of each particle is predicted. Finally, in step 25, the particle is moved by updating its velocity and acceleration coefficients. Note that steps 7 to 25 are repeated for each particle until all iterations are completed.
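The four base assignments above can be collected into a small lookup table. The following Python sketch is for illustration only; the per-state values are those stated in the text, while the table and function names are our assumptions:

```python
# (c1, c2) per state, as stated in the text:
STATE_COEFFICIENTS = {
    1: (2.0, 2.0),  # convergence: equal weights, particles group together
    2: (2.1, 1.9),  # exploitation: slight bias towards the cognitive term
    3: (2.2, 1.8),  # exploration: strong personal (cognitive) influence
    4: (1.8, 2.2),  # jumping-out: strong social pull towards gbest
}

def accel_coefficients(delta):
    """Return the (c1, c2) pair for state delta."""
    return STATE_COEFFICIENTS[delta]

c1, c2 = accel_coefficients(4)  # jumping-out state: c1 = 1.8, c2 = 2.2
```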

Algorithm 1: The NS-SPSO algorithm

4.4 Computational complexity

The computational complexity of NS-SPSO depends on three parts: (a) fitness evaluations; (b) mean-based population distribution and \(E_\mathrm{f}\) calculation; and (c) learning of the swarm behaviour. In respect of (a), the time cost of a fitness evaluation depends on the problem size (dimension), which is beyond the scope of the current work. Therefore, we describe the time complexity of the proposed algorithm with respect to (b) and (c).

In respect of (b), the time required to calculate the evolutionary factor (\(E_\mathrm{f}\)) of each particle is constant and is achieved using the Euclidean distance. For m particles, the time required for computing the population distribution is given by:

$$\begin{aligned} T_m = \mathcal {O}(m) \end{aligned}$$
(10)

Moreover, behaviour learning is an essential process for updating the particle behaviour, and we assume it to be similar to other learning mechanisms in various PSO algorithms (Cheng and Jin 2015). As described in Cheng and Jin (2015), ‘the time complexity of behaviour learning process is indispensable in a population-based stochastic search algorithm’. Therefore, for an n-dimensional problem with m particles, the time complexity of the learning process can be obtained as follows:

$$\begin{aligned} T_l = \mathcal {O}(m \times n) \end{aligned}$$
(11)

Therefore, the total complexity T of the NS-SPSO algorithm can be described as:

$$\begin{aligned} T = \mathcal {O} (m(1 + n)) \end{aligned}$$
(12)

5 The experimental work

The proposed NS-SPSO has been evaluated over the 12 commonly used benchmark functions \(f_{1}\) to \(f_{12}\) given in Table 1, taken from Cheng and Jin (2015). Some of them are uni-modal (\(f_1\) to \(f_5\)) and the others are multi-modal (\(f_6\) to \(f_{12}\)), as described in Cheng and Jin (2015). Initially, the proposed NS-SPSO is applied to the 12 benchmark functions \(f_{1}\) to \(f_{12}\) in 30 dimensions using different values for N. The evaluation results are then compared with the published values of six state-of-the-art algorithms. All the variants were re-implemented and evaluated for 30 independent trials; the published results of all the variants are taken from Cheng and Jin (2015) for comparison. Additionally, the proposed algorithm was also tested for its scalability on high-dimensional functions in Sect. 5.4. These consist of various shifted or rotated functions \(f_{13}\) to \(f_{19}\), whose dimensionality was set to 100, 500 and 1000. All the experimental work was conducted on a PC with an Intel Core i5-3320M 2.6 GHz CPU running Microsoft Windows 10 Pro 64-bit. The benchmark test experiments of the 12 uni-modal and multi-modal problems for the proposed NS-SPSO and the other PSO variants in comparison are all coded in MATLAB version R2015a. It is also worth mentioning that, because of the evolutionary control and switching techniques, the algorithm converges to its optimum in the early stages of the function evaluations.

The mathematical benchmark functions \(f_1{-}f_{12}\) are also used for testing high-dimensional problems. Among these, \(f_1{-}f_5\) are uni-modal. Function \(f_6\) is a step function that has a single minimum and is discontinuous. Function \(f_7\) is a noisy quadratic function, where random [0, 1] denotes a uniformly distributed random variable in [0, 1]. Functions \(f_8{-}f_{12}\) are multi-modal functions, designed such that the number of local minima grows exponentially with the problem dimensionality (Yao et al. 1999; Törn and Žilinskas 1989). Such problems appear to be the most challenging class for evolutionary optimization algorithms. For uni-modal functions, the convergence rate of classical and fast evolutionary methods is of more interest than the final optimized result, as other methods are specialized for optimizing uni-modal functions. For multi-modal functions, however, the final optimization results are much more significant, as they reflect an algorithm's ability to escape from local optima and find a near-global optimum position.

Various metrics were used to evaluate the performance of the proposed algorithm. The optimization error denotes the difference between the true optimum value and the achieved value; the convergence rate indicates how quickly the smallest achievable value is reached; and the computational time is the wall-clock time an algorithm takes to converge. Moreover, the impact of the number of states (N) on the performance of the proposed NS-SPSO algorithm for high-dimensional optimization functions has been investigated through repeated experiments. The discussion is consistent with the scalability of the proposed approach.

Table 1 The 12 common standard functions for experimentation using 30 dimensions (Rahman 2016)
Table 2 Parameter coefficients of the PSO variants for comparison; m and R denote each swarm's population size and the regrouping period (Liang and Suganthan 2005), and \(\chi \) denotes the constriction coefficient parameter, as described in Mendes et al. (2004)
Table 3 The 12 standard test functions’ optimization errors—for each procedure and test function, the primary row characterizes the statistical mean or average value (\(\mu \)), while the subsequent row designates the statistical standard deviation (\(\sigma \))
Fig. 5

Various test functions and their convergence profiles (lower values denote the better approaches); \(f_1\) to \(f_6\), from left to right and top to bottom. The proposed NS-SPSO algorithm has revealed the best performance on ten out of twelve test functions (\(f_1\) to \(f_3\); \(f_5\); \(f_6\) and \(f_8\) to \(f_{12}\)), comprising 4 uni-modal and 6 multi-modal functions

Fig. 6

Various test functions and their convergence profiles (lower values denote the better approaches); \(f_7\) to \(f_{12}\), from left to right and top to bottom. The proposed NS-SPSO algorithm has revealed the best performance on ten out of twelve test functions (\(f_1\) to \(f_3\); \(f_5\); \(f_6\) and \(f_8\) to \(f_{12}\)), comprising 4 uni-modal and 6 multi-modal functions

5.1 Performance analysis of NS-SPSO in benchmark functions

To analyse the performance of the proposed NS-SPSO algorithm over the benchmark functions, we compare it to the newly developed NS-MJPSO (Rahman et al. 2020; Rahman 2016) and five other variants of the PSO algorithm. All the variants have been coded and re-implemented for comparison purposes; the published values produced by Cheng and Jin (2015) are used in the tables here. We compare NS-SPSO to our newly developed NS-MJPSO algorithm, as described in Rahman (2016), since we aim to further enhance its performance by reducing the computational burden. Our second comparative variant is the local-neighbourhood PSO (LPSO) (Kennedy and Mendes 2002), the third is the global best version (GPSO) (Shi and Eberhart 1999), the fourth is the dynamic multi-swarm version of PSO (DMS-PSO) (Liang and Suganthan 2005; Cheng et al. 2013), the fifth is the fully informed PSO (FIPS) (Mendes et al. 2004), and the sixth and last is the comprehensive-learning PSO (CLPSO) (Liang et al. 2006). The required parameters, along with their values, are given in Table 2.

The proposed NS-SPSO has shown outstanding performance on ten out of the 12 problems (i.e. \(f_{1}\) to \(f_{3}\), \(f_{5}\), \(f_{6}\) and \(f_{8}\) to \(f_{12}\)), covering both uni-modal and multi-modal problems. The performance of the proposed NS-SPSO algorithm is shown individually for each function in Table 3. Moreover, Figs. 5 and 6 visually show the convergence rates of the algorithms over the benchmark functions (the lower the curve, the better the result). We executed the algorithms for 30 independent trials due to the randomness of the results. In the empirical results, we store the mean, the smallest evaluation errors and the standard deviation for each function over all trials. In Table 3, the statistical results are given over 30 independent trials for the newly developed NS-SPSO and all other comparison algorithms. To summarize our findings, a ranking method is used to show their significance. Over the 30 runs, a statistical two-tailed t test was carried out with a 95% confidence interval, i.e. significance level \(\alpha = 0.05\) (Zakarya and Gillam 2019; Khan et al. 2020). As shown in Table 3, if the proposed NS-SPSO technique outperforms another algorithm, a plus sign ‘+’ is inserted in front of the corresponding results. Similarly, if no significant difference is observed between NS-SPSO and another algorithm, an equal sign ‘=’ is used; and where another algorithm outperforms the proposed NS-SPSO, a minus sign ‘–’ is used. At the bottom of Table 3, the total numbers of ‘+’, ‘=’ and ‘–’ are outlined. Moreover, the best results for each function are shown in boldface (Cheng and Jin 2015).
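The significance marking described above can be sketched as follows. This is an illustrative Python version using Welch's t statistic with a fixed two-tailed critical value (about 2.05 for roughly 29 degrees of freedom at \(\alpha = 0.05\)); the paper does not specify its exact test implementation, so the details here are assumptions:

```python
import statistics

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
    return (statistics.mean(sample_a) - statistics.mean(sample_b)) / \
           ((va / na + vb / nb) ** 0.5)

def mark(errors_nsspso, errors_other, t_crit=2.05):
    """Return '+', '-' or '=' following the convention of Table 3."""
    t = welch_t(errors_nsspso, errors_other)
    if abs(t) < t_crit:
        return '='                    # no significant difference
    return '+' if t < 0 else '-'      # lower error means NS-SPSO is better
```

Each argument would be the list of 30 optimization errors gathered over the independent trials of one algorithm on one function.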

The newly developed NS-SPSO produced the best values (global optima) in comparison with all other algorithms in terms of minimum values (lower curves), and is comparatively faster in execution than its closest rivals. Therefore, NS-SPSO shows promising performance for uni-modal problems. As shown in Fig. 5, NS-SPSO ranks first for functions \(f_{1}\), \(f_{2}\), \(f_{3}\), \(f_{4}\), \(f_{5}\) and \(f_{6}\), while NS-MJPSO ranks second for these functions; that is, NS-SPSO outperforms the NS-MJPSO algorithm. Moreover, the newly developed NS-SPSO is the fastest in execution time among all the compared algorithms. Overall, both the NS-SPSO and NS-MJPSO algorithms show promising performance for this kind of optimization problem.

Fig. 7

Average computation time of the proposed NS-SPSO in comparison

Table 4 Optimization errors using NS-SPSO for various numbers of states N on 10 basic test functions (\(f_1{-}f_{10}\)); the first value represents the mean value (\(\mu \)) and the second value denotes the standard deviation (\(\sigma \))

For the multi-modal function \(f_{7}\), as shown in Fig. 6, CLPSO and DMS-PSO achieve the minimum values, and our algorithm diverges quickly. The main reason is that the proposed algorithm is a gbest-based approach, which leads the population towards a single position; for multi-modal problems, lbest-based approaches would be more suitable (Chowdhury et al. 2014; Ghosh et al. 2012). We plan to develop an lbest-based version of the proposed NS-SPSO algorithm in the near future. In the same setting, the performance of NS-MJPSO is also poor for this particular function. Similarly, for the multi-modal function \(f_{8}\), algorithms such as NS-MJPSO, CLPSO, DMS-PSO and GPSO show the best performance. Based on our investigation, we found that the newly developed NS-SPSO needs further parameter adjustment to produce relatively optimal results for these multi-modal functions. For the other functions \(f_{9}\), \(f_{10}\), \(f_{11}\) and \(f_{12}\), the algorithms NS-SPSO, NS-MJPSO, CLPSO, DMS-PSO and GPSO show similar and almost comparable performance in terms of optimal evaluation values, with NS-SPSO and NS-MJPSO being relatively faster than the others. In short, due to its gbest nature, the proposed NS-SPSO algorithm performed better than its rivals for uni-modal functions, but performed worse on some multi-modal functions. For scalable optimization, we consider high-dimensional optimization functions and several other PSO variants, as described in Sect. 5.4. The evaluation is carried out using different settings for the dimensionality and different values for N, i.e. the number of states.

5.2 Computation time of the proposed NS-SPSO

In this section, we analyse the average computational time of the proposed NS-SPSO algorithm and the others. The computational time represents the CPU clock time, i.e. the absolute CPU time spent in evaluation (the lower it is, the faster the approach). All algorithms were experimentally evaluated with the same number of iterations, i.e. \(2 \times 10^5\), over the same 12 benchmark functions. The average computational times for all algorithms are shown in Fig. 7. The local PSO (LPSO) has the smallest average computational time and stands at rank 1, while the proposed NS-SPSO algorithm has the second shortest computational time and stands at rank 2. Note that NS-SPSO also scores second best with respect to accuracy, in terms of minimum values, for most of the uni-modal and multi-modal problems.

Table 5 Seven high-dimensional test functions and their optimization errors—for each algorithm and test function, the primary row characterizes the statistical average or mean value (\(\mu \)) while the subsequent row designates the statistical standard deviation (\(\sigma \))

5.3 Impact of number of states on results

As described earlier, the contribution of the proposed algorithm is twofold: (i) the utilization of N states rather than the four states used in the SPSO algorithm; and (ii) the adaptation of the linearly decreasing inertia weight (LDIW) (Shi and Eberhart 1998b). In respect of (i), the number of acceleration coefficients also varies. If \(N=8\), then there are eight values (pairs) for \(c_1\) and \(c_2\), where each pair (\(c_1\), \(c_2\)) relates to a particular sub-state of the standard four states, i.e. jumping-out, exploration, exploitation and convergence. In the same way, for \(N = 12\) there are twelve values (pairs), one for each of \(c_1\) and \(c_2\). As described earlier in Sect. 4, the values of \(c_1\) and \(c_2\) are pre-determined and tuned accordingly, because they help particles switch from one state to another; consequently, different values will produce variations in outcomes. To compute \(c_1\) and \(c_2\) automatically, we use a simple approach where, in every pair, the \(c_1\) value is decreased or increased by 0.05 and the related \(c_2\) value is increased or decreased by the same amount. For example, below we list the eight values for each of \(c_1\) and \(c_2\) when \(N=8\).

$$\begin{aligned} c_1, c_2 = {\left\{ \begin{array}{ll} 2,2.05,2.1,2.15,2.2,2.25,1.8,1.85 \\ 2,1.95,1.9,1.85,1.8,1.75,2.2,2.15 \\ \end{array}\right. } \end{aligned}$$
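The listed pairs obey a simple invariant: every \(c_2\) is the complement of its \(c_1\) with respect to 4.0, consistent with the \(\pm 0.05\) stepping described above. A small Python check (illustrative only; the pairing rule is our observation, not stated by the authors):

```python
# c1 values for N = 8 as listed above; c2 is assumed to be 4.0 - c1
c1_values = [2.0, 2.05, 2.1, 2.15, 2.2, 2.25, 1.8, 1.85]
c2_values = [round(4.0 - c1, 2) for c1 in c1_values]
print(c2_values)  # [2.0, 1.95, 1.9, 1.85, 1.8, 1.75, 2.2, 2.15]
```

The printed list reproduces the second row of the listing above exactly.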

We found that increasing the value of N for high-dimensional problems increases the convergence speed and accuracy, but at a small increase in computational time. Note that accuracy reflects the total number of positive hits for the least fitness evaluation test in 30 experimental runs. In addition, no further improvement is obtained by increasing the value of N for low-dimensional problems. For instance, the least function evaluation value for \(f_0\), with 30 dimensions, was achieved at \(N = 12\); thus, fixing \(N = 20\) merely increases the algorithm's computational time with no performance gains. Likewise, as the number of states N grows, the effort involved in choosing adequate acceleration coefficients also increases. Table 4 depicts the fitness values for ten test functions (\(f_1\) to \(f_{10}\)) when optimized using the suggested PSO variant (NS-SPSO) with five different values for the number of states N. Because we did not observe any considerable improvement (minimization or maximization) for the functions \(f_{11}\) and \(f_{12}\), they are not listed in Table 4.

5.4 Results for high-dimensional problems

Besides the above twelve benchmark functions, NS-SPSO is further tested on seven more functions (Tang et al. 2007). For lower-dimensional (30-D) optimization problems, NS-SPSO showed robust performance on the 12 benchmark functions in comparison with five representative PSO variants. However, in order to verify the scalability of the proposed NS-SPSO, we further investigate its performance on large-scale (high-dimensional) optimization problems, whose search dimensionality is normally larger than 100. For this purpose, NS-SPSO is tested on a large-scale optimization test set (denoted \(f_{13}\) to \(f_{19}\)), which was proposed in the CEC’08 special session on large-scale optimization, with the search dimensionality set to 100-D, 500-D and 1000-D, respectively.

Four algorithms, selected based on their performance on large-scale problems, were considered for this evaluation and comparison. Amongst the four, CCPSO2 (Li and Yao 2011) is a widely used state-of-the-art PSO variant for large-scale optimization. DMS-L-PSO is an enhanced version of DMS-PSO with a local search operator; DMS-PSO is described in Sect. 2. Moreover, NS-MJPSO (Rahman et al. 2020) is our own version of PSO. The last one is SL-PSO (Cheng and Jin 2015), which has performed strongly on high-dimensional problems. We varied the number of states (N) for the proposed NS-SPSO algorithm across runs, and the best results are summarized in Table 5.

The results in Table 5 show that, on average, NS-SPSO performs better than NS-MJPSO, DMS-PSO and CL-PSO. For example, for 100-dimensional functions, it outperformed all of its closest rivals. For 500-dimensional functions, its performance is comparable to that of the NS-MJPSO algorithm, and for 1000-dimensional functions, the proposed algorithm largely outperformed the others. However, DMS-L-PSO is always able to find the real global optimum, regardless of the dimension, although it does not perform so well on the other five test functions. Overall, the proposed algorithm has performance comparable to SL-PSO and DMS-L-PSO, while outperforming NS-MJPSO and CCPSO2. The reasons for this outperformance include: (i) the increased number of states produces more opportunities for state switching; (ii) adequate acceleration coefficients are assigned for particular states; and (iii) the large set of parameters controls the sensitivity of the algorithm.

In order to examine whether there are statistically significant differences amongst the results produced by the proposed algorithm and its closest rivals, particularly SL-PSO and DMS-L-PSO, a t test was performed over the data gathered in the various runs, with the confidence interval set to 95%. The t test result (p value) shows that there are no significant differences amongst these; hence, NS-SPSO is comparable to these two algorithms.

6 Conclusions and future work

In this paper, we have proposed a novel PSO variant called the N state switching particle swarm optimization (NS-SPSO) algorithm. The algorithm combines an evolutionary method for population distribution with the traditional PSO algorithm. The performance of the newly developed NS-SPSO algorithm is demonstrated through various metrics over 12 benchmark functions, which include several uni-modal, multi-modal and nonlinear problems. Furthermore, several variants of the classical PSO that are widely known for their capability to solve large-scale optimization problems were re-implemented, including GPSO, LPSO, FIPS, DMS-PSO and CL-PSO, and the newly developed NS-SPSO algorithm was evaluated on the same objective functions. The significance of the proposed algorithm was demonstrated through analysing the statistical results obtained on the benchmark functions. Our empirical evaluation suggests that the NS-SPSO algorithm has a short computational time, a fast convergence rate, and scores as the second best method in terms of accuracy.

It is notable that the presented NS-SPSO algorithm could be useful for a diversity of modern optimization problems, for example in power systems and healthcare informatics, where precision and accuracy are the key concerns. We intend to apply the proposed NS-SPSO algorithm to the well-known economic load dispatch (ELD) problem, with the purpose of minimizing the total power generation cost (Rahman 2016). Furthermore, in order to select suitable values of the control parameters, we will also examine the trade-off between the number of evolutionary states N and the total computational cost; as the total number of states N rises, the number of choices for the acceleration coefficients rises accordingly. In such situations, automated and adaptive estimation of the control parameters will be a top-priority task in forthcoming research. Similarly, in future research we will consider NS-SPSO for complex many-objective, high-dimensional optimization problems (Han et al. 2019), and further improve its computational time. Moreover, we observed through experimentation that the proposed algorithm does not perform well for multi-modal functions, due to its gbest-based implementation. In the near future, we look forward to preparing an lbest version of the NS-SPSO algorithm for multi-modal functions.