1 Introduction

In the late nineteenth century, classical mechanics ran into difficulties explaining the physical phenomena of microscopic particles with small masses and high velocities. In the 1920s, Bohr's atomic theory [1], Heisenberg's discovery of matrix mechanics [2] and Schrödinger's discovery of wave mechanics [3] shaped the conception of a new field, quantum mechanics. In 1982, Feynman [4] argued that quantum mechanical systems can be simulated efficiently by quantum computers, whereas classical computers require exponential time. Until then, quantum computing had been regarded as only a theoretical possibility, but over the last three decades research has evolved so as to make quantum computing applications a realistic prospect [5].

In the last two decades, the field of swarm intelligence has received an overwhelming response from the research community. Inspired by nature, it aims to build decentralized, self-organized systems through the collective behavior of individual agents interacting with one another and with their environment. The research foundation of swarm intelligence rests mainly on two families of optimization algorithms: ant colony optimization (Dorigo et al. [6] and Colorni et al. [7]) and particle swarm optimization (PSO) (Kennedy and Eberhart [8]). Originally, swarm intelligence was inspired by natural behaviors such as the flocking of birds and the swarming of ants.

In the mid-1990s, the particle swarm optimization technique was introduced for continuous optimization, motivated by the flocking of birds. PSO-based bio-inspired techniques have developed rapidly over the last two decades and have attracted attention from diverse fields such as inventory planning [9], power systems [10], manufacturing [11], communication networks [12], support vector machines [13], the estimation of binary inspiral signals [14], gravitational waves [15] and many more. Like the evolutionary genetic algorithm, PSO is inspired by the simulation of social behavior, where each individual is called a particle and a group of individuals is called a swarm. In a multi-dimensional search space, the position and velocity of each particle represent a probable solution. Particles fly around the search space seeking the potential solution. At each iteration, each particle adjusts its position according to its own goal and those of its neighbors, and shares information with the other particles in its neighborhood [16]. Each particle also keeps a record of the best solution it has experienced so far and uses it to update its position and adjust its velocity accordingly.
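The classical PSO loop described above can be sketched as follows. This is a minimal illustration, not the implementation used in the cited works; the inertia weight, search bounds and the Sphere test function in the usage line are our own assumptions.

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100, w=0.7, c1=2.0, c2=2.0):
    """Minimal classical PSO sketch: each particle tracks its personal best
    and is attracted toward it and toward the swarm's global best."""
    pos = [[random.uniform(-10, 10) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for j in range(dim):
                r1, r2 = random.random(), random.random()
                # velocity update: inertia + cognitive + social terms
                vel[i][j] = (w * vel[i][j]
                             + c1 * r1 * (pbest[i][j] - pos[i][j])
                             + c2 * r2 * (gbest[j] - pos[i][j]))
                pos[i][j] += vel[i][j]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Usage: minimize the Sphere function in 5 dimensions
best, val = pso_minimize(lambda x: sum(v * v for v in x), dim=5)
```

The personal best (pbest) and global best (gbest) records realize the information sharing within the neighborhood mentioned above.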

Fig. 1 Particle movement in the PSO and QPSO algorithms

Since the first PSO algorithm was proposed, numerous PSO variants have been introduced with a plethora of alterations. More recently, the combination of quantum computing, mathematics and computer science has inspired the creation of new optimization techniques. Narayanan and Moore [17] introduced the quantum-inspired genetic algorithm (QGA) in 1995. Later, Sun et al. [16] applied the laws of quantum mechanics to PSO and proposed quantum-behaved particle swarm optimization (QPSO). This marked the commencement of quantum-behaved optimization algorithms, which have since made a significant impact on the academic and research communities alike.

Recently, Yuanyuan and Xiyu [18] proposed a quantum evolutionary algorithm to discover communities in complex social networks; its applicability was tested on five real social networks, and the results were compared with classical algorithms. It has been shown that PSO tends to stagnate at local optima, i.e., it is difficult for PSO to escape once it is confined to a locally optimal region. QPSO with a mutation operator (QPSO-MO) was proposed to enhance diversity and escape from local optima during the search [19]. Protopopescu and Barhen [20] efficiently solved a set of global optimization problems using quantum algorithms. In the future, the proposed algorithm could be integrated with matrix product state-based quantum classifiers for supervised learning [21,22,23].

In this paper, we combine QPSO with a Cauchy mutation operator, which adds a long-jump ability for global search, and a natural selection mechanism for the elimination of particles. The results show that the hybrid has a strong tendency to escape local regions of the search space. The proposed hybrid QPSO therefore strengthens both local and global search ability and outperforms other variants of QPSO and PSO owing to its fast convergence.

The movement of particles in the PSO and QPSO algorithms is illustrated in Fig. 1. The large circle at the center denotes the particle at the global best position, and the other circles are the remaining particles. Particles located far from the global best position are lagged particles. The blue arrows signify the directions of the other particles, and the large red arrows point in the direction a particle moves with high probability. During the iterations in PSO, if a lagged particle cannot find a better position than the present global best, its impact on the other particles is null. In QPSO, by contrast, lagged particles move with higher probability toward the gbest position. Thus, lagged particles contribute more to the solution in QPSO than in PSO.

The rest of this paper is organized as follows: Sect. 2 is devoted to prior work. In Sect. 3, quantum particle swarm optimization is described. In Sect. 4, the proposed hybrid QPSO algorithm with Cauchy distribution and a natural selection mechanism is presented. In Sect. 5, experimental results are plotted for a set of benchmark problems and compared with several QPSO variants. The correctness and time complexity are analyzed in Sect. 6. QPSO-CD is applied to three constrained engineering design problems in Sect. 7. Finally, Sect. 8 concludes the paper.

2 Prior work

Since quantum-behaved particle swarm optimization was proposed, various revised variants have emerged. Initially, Sun et al. [16] applied the concept of quantum computing to PSO and developed a quantum Delta potential well model for classical PSO [24]. The convergence and performance of QPSO have been shown to be superior to those of classical PSO; the selection and control of its parameters, which can further improve performance, was posed as an open problem. Sun et al. [25] tested the performance of QPSO on constrained and unconstrained problems and claimed that QPSO is a promising optimization algorithm that performs better than classical PSO algorithms. In 2011, Sun et al. [26] proposed QPSO with a Gaussian distribution (GAQPSO) applied at the local attractor point and compared its results with several PSO and QPSO counterparts; GAQPSO proved efficient and stable, with superior quality and robustness of solutions.

Further, Coelho [27] applied Gaussian QPSO (GQPSO) to constrained engineering problems and showed that its simulation results are much closer to the optimal solution, with small standard deviation. Li et al. [28] presented a cooperative QPSO using the Monte Carlo method (CQPSO), in which particles cooperate with one another to enhance the performance of the original algorithm; implemented on several representative functions, it performed better than other QPSO algorithms in terms of computational cost and quality of solutions. Peng et al. [29] introduced QPSO with the Lévy probability distribution and claimed that the chances of becoming stuck in a local optimum are greatly reduced.

Researchers have applied PSO and QPSO to real-life problems and achieved optimal solutions compared with existing algorithms. Ali et al. [30] performed energy-efficient clustering in mobile ad hoc networks (MANETs) with PSO; a similar approach can be followed to analyze and execute mobility over MANETs with QPSO-CD [31]. Zhisheng [32] used QPSO for economic load dispatch in power systems and showed it to be superior to other existing PSO optimization algorithms. Sun et al. [33] applied QPSO to QoS multicast routing: the QoS multicast routing problem is first converted into a constrained integer problem and then effectively solved by QPSO with a loop deletion task; the performance, investigated on random network topologies, proved QPSO more powerful than PSO and the genetic algorithm. Geis and Middendorf [34] proposed a PSO with a helix structure for finding ribonucleic acid (RNA) secondary structures of similar structure and low energy. The QPSO-CD algorithm can be used with two-way quantum finite automata to model RNA secondary structures and chemical reactions [35,36,37]. Bagheri et al. [38] applied QPSO to tune the parameters of an adaptive network-based fuzzy inference system (ANFIS) for forecasting financial prices in futures markets. Davoodi et al. [39] introduced a hybrid improved QPSO with the Nelder–Mead simplex method (IQPSO-NM), where the NM method is used for fine-tuning solutions; applied to load flow problems in power systems, it achieved accurate convergence with efficient search ability. Omkar [40] proposed QPSO for multi-objective design problems and compared the results with PSO. Recently, Fatemeh et al. [41] proposed QPSO with shuffled complex evolution (SP-QPSO) and demonstrated its performance on five engineering design problems. Prithi and Sumathi [42] integrated classical PSO with deterministic finite automata for data transmission and intrusion detection.
The proposed algorithm QPSO-CD can be used with quantum computational models for wireless communication [43,44,45,46,47,48,49].

3 Quantum particle swarm optimization

Before we explain our hybrid QPSO-CD algorithm, mutated with a Cauchy operator and equipped with a natural selection method, it is useful to define the notion of quantum PSO. We assume the reader is familiar with classical PSO; otherwise, the reader may refer to the particle swarm optimization literature [50, 51]. The principle of quantum PSO is as follows:

In QPSO, the state of a particle is represented by a wave function \(\psi (x, t)\). The probability density function \(|\psi (x, t)|^2\) determines the probability of the particle appearing at position x at time t [16, 33]. The positions of the particles are updated according to the equations:

$$\begin{aligned} x_{i, j}(t+1) = p_{i, j}(t) \pm \alpha \cdot |mbest_{j}(t)-x_{i, j}(t)| \cdot \ln (1/u) \end{aligned}$$
(1)
$$\begin{aligned} p_{i, j}(t) = \phi \cdot P_{i, j}(t)+(1-\phi ) \cdot G_j(t), \quad (1 \le i \le N,\ 1 \le j \le M) \end{aligned}$$
(2)

where each particle converges to its local attractor \(p=(p_1, p_2,\ldots , p_D)\), with D the dimension of the search space; N and M are the numbers of particles and iterations, respectively; \(P_{i, j}\) and \(G_{j}\) denote the previous best and global best position vectors, respectively; \(\phi =c_1 r_1/(c_1 r_1+c_2 r_2)\), where \(c_1\), \(c_2\) are the acceleration coefficients and \(r_1\), \(r_2\) and u are uniformly distributed random numbers in (0, 1); \(\alpha \) is the contraction-expansion coefficient; and mbest is the mean of the best positions of the particles:

$$\begin{aligned} mbest(t)=\dfrac{1}{N} \sum _{i=1}^{N}P_{i}(t)=\left( \dfrac{1}{N}\sum _{i=1}^{N}P_{i, 1}(t), \dfrac{1}{N}\sum _{i=1}^{N}P_{i, 2}(t), \ldots , \dfrac{1}{N}\sum _{i=1}^{N}P_{i, D}(t) \right) \end{aligned}$$
(3)

In Eq. (1), \(\alpha \) denotes the contraction-expansion coefficient, which is set manually to control the speed of convergence; it can be decreased linearly or kept fixed. In QPSO, \(\alpha < 1.782\) ensures convergence of the particles. In QPSO-CD, the value of \(\alpha \) is determined by \(\alpha =1.0-0.5k/M\), i.e., it decreases linearly from 1.0 to 0.5 to attain good performance, where k is the present iteration and M is the maximum number of iterations.
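One iteration of the position update in Eqs. (1)–(3) can be sketched as follows. This is an illustrative sketch under our own naming assumptions; the random coin flip realizes the \(\pm\) sign in Eq. (1), and \(c_1=c_2=2\) follows the parameter settings reported later in the paper.

```python
import math
import random

def qpso_step(positions, pbest, gbest, alpha, c1=2.0, c2=2.0):
    """One QPSO position update following Eqs. (1)-(3): each coordinate moves
    around its local attractor p with a spread proportional to the particle's
    distance from the mean best position mbest."""
    n, dim = len(positions), len(positions[0])
    # mbest: mean of the personal best positions, Eq. (3)
    mbest = [sum(pbest[i][j] for i in range(n)) / n for j in range(dim)]
    new_positions = []
    for i in range(n):
        x = []
        for j in range(dim):
            r1, r2 = random.random(), random.random()
            phi = (c1 * r1) / (c1 * r1 + c2 * r2)          # weight in Eq. (2)
            p = phi * pbest[i][j] + (1 - phi) * gbest[j]   # local attractor, Eq. (2)
            u = 1.0 - random.random()                      # u in (0, 1]
            step = alpha * abs(mbest[j] - positions[i][j]) * math.log(1.0 / u)
            x.append(p + step if random.random() < 0.5 else p - step)  # Eq. (1)
        new_positions.append(x)
    return new_positions
```

The logarithmic step length gives QPSO its characteristic heavy-tailed search around the attractor.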

4 Hybrid particle swarm optimization

The hybrid quantum-behaved PSO algorithm with Cauchy distribution and natural selection strategy (QPSO-CD) is described as follows:

The QPSO-CD algorithm begins with standard QPSO, using Eqs. (1), (2) and (3). The position and velocity of a particle cannot be determined exactly owing to its varying dynamic behavior, so they can only be learned through a probability density function. Each particle can be mutated with a Gaussian or a Cauchy distribution; we mutate QPSO with the Cauchy operator because of its ability to make larger perturbations, so there is a higher probability of escaping a local optimum region with the Cauchy than with the Gaussian distribution. The QPSO algorithm is mutated with the Cauchy distribution to increase its diversity, where the mbest or global best position is mutated with a fixed mutation probability (Pr). The probability density function f(x) of the standard Cauchy distribution is:

$$\begin{aligned} f(x)=\dfrac{1}{\pi (1+x^2)}, \qquad -\infty< x < \infty \end{aligned}$$
(4)

It should be noted that the mutation operation is executed on each vector by independently adding a Cauchy-distributed random value \(D(\cdot)\) such that

$$\begin{aligned} x^{'}=x+\phi D(.) \end{aligned}$$
(5)
Algorithm 1 The hybrid QPSO-CD algorithm

where \(x^{'}\) is the new position after mutating x with the random value. Finally, the particles of the swarm are sorted by their fitness values after each iteration, the group of particles with the worst fitness values is substituted with the best ones, and the optimal solution is determined. The main objective of using the natural selection mechanism is to refine the capability and accuracy of the QPSO algorithm.
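The mutation of Eqs. (4)–(5) can be sketched as follows; standard Cauchy samples are drawn by inverse-CDF sampling, \(\tan(\pi(u-0.5))\). The function name and the default mutation probability are our own illustrative assumptions.

```python
import math
import random

def cauchy_mutate(position, phi=1.0, pr=0.2):
    """Mutate a position vector per Eq. (5): with probability pr, add an
    independent standard Cauchy random value (scaled by phi) to each
    coordinate. Inverse-CDF sampling: D = tan(pi * (u - 0.5))."""
    mutated = []
    for x in position:
        if random.random() < pr:
            d = math.tan(math.pi * (random.random() - 0.5))  # standard Cauchy
            mutated.append(x + phi * d)                      # Eq. (5)
        else:
            mutated.append(x)
    return mutated
```

Because the Cauchy density has heavy tails, the added value is occasionally very large, which produces the long jumps that help the swarm escape local optima.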

The natural selection method is used to enhance the convergence characteristics of the proposed QPSO-CD algorithm, where the fitter solutions are carried over to the next iteration. The selection procedure for N particles is as follows:

$$\begin{aligned} F(X(t))=\{F(x_1(t)), F(x_2(t)), \ldots , F(x_N(t))\} \end{aligned}$$
(6)

where X(t) is the position vector of the particles at time t and F(X(t)) is the fitness function of the swarm. The next step is to sort the particles by their fitness values from the best to the worst position such that

$$\begin{aligned} F(X'(t)) &= \{F(x_1'(t)), F(x_2'(t)), \ldots ,F(x_N'(t))\}\\ X'(t) &= \{x_1'(t), x_2'(t), \ldots , x_N'(t)\} \end{aligned}$$
(7)

In Algorithm 1, \(S_F\) and \(S_x\) are the sorting functions for fitness and position, respectively. On the basis of the natural selection parameter and the fitness values, the positions of the swarm particles are updated for the next iteration,

$$\begin{aligned} X''(t) &= \{x_1''(t), x_2''(t), \ldots , x_S''(t)\},\\ X_k''(t) &= \{x_1'(t), x_2'(t), \ldots , x_Z'(t)\} \end{aligned}$$
(8)

where \(1 \le k \le S\), S denotes the selection parameter, Z signifies the number of best positions selected according to fitness values such that \(S=N/Z\), and \(X''(t)\) is the updated position vector of the particles. The selection parameter S is generally set to 2 so as to replace the worst half of the positions with the best half. This improves the precision of the particles' directions, protects the global searching capability and speeds up convergence.
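The selection step of Eqs. (6)–(8) can be sketched as follows (an illustrative sketch; we assume minimization, so smaller fitness is better, and \(Z = N/S\) positions are replaced):

```python
def natural_selection(positions, fitness, S=2):
    """Selection step of Eqs. (6)-(8): sort the particles by fitness
    (smaller is better), then overwrite the worst N/S positions with
    copies of the best N/S positions. S = 2 swaps the worst half for
    the best half."""
    n = len(positions)
    z = n // S
    order = sorted(range(n), key=lambda i: fitness[i])   # best to worst
    ranked = [positions[i][:] for i in order]
    # replace the worst z positions with copies of the best z
    for k in range(z):
        ranked[n - z + k] = ranked[k][:]
    return ranked
```

Note that only positions are duplicated; each copied particle keeps evolving independently in the next iteration, so diversity is restored by the Cauchy mutation.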

5 Experimental results

The performance of the proposed QPSO-CD algorithm is investigated on the representative numerical benchmark functions detailed in Table 1. The results are compared with classical PSO, standard QPSO, QPSO with a delta potential well (QDPSO) and QPSO with a mutation operator (QPSO-MO).

Table 1 Details of benchmark functions
Fig. 2 Effectiveness of QPSO-CD for the Sphere function \(f_1\)

Fig. 3 Effectiveness of QPSO-CD for the Rosenbrock function \(f_2\)

Table 2 Comparison results of Sphere and Rosenbrock functions

The performance of QPSO has been widely tested on various test functions. We consider four representative benchmark functions to determine the reliability of the QPSO-CD algorithm. For all experiments, the population sizes are 20, 40 and 80 and the dimensions are 10, 20 and 30. The parameters of QPSO-CD are set as follows: the value of \(\alpha \) decreases linearly from 1.0 to 0.5, the natural selection parameter is S \(=\) 2, and the acceleration coefficients are set to \(c_1 = c_2 = 2\).

The mean best fitness values of PSO, QPSO, QDPSO, QPSO-MO and QPSO-CD are recorded for 1000, 1500 and 2000 generations of each function. Figures 2, 3, 4 and 5 depict the performance on functions \(f_1\) to \(f_4\) in terms of mean best fitness against the number of iterations. In Tables 2 and 3, P denotes the population, D the dimension and G the number of generations. The numerical results show that QPSO-CD reaches optimal solutions with fast convergence and high accuracy. On the Rosenbrock function, QPSO-CD performs better than its counterparts in some cases; when the population size is 20 and the dimension is 30, its results are not better than QPSO-MO, but it still outperforms PSO, QPSO and QDPSO. The performance of QPSO-CD is significantly better than its variants on the Griewank and Rastrigin functions; it outperformed the other algorithms and obtained a near-zero optimal solution for the Griewank function. In most cases, QPSO-CD is more efficient and outperforms the other algorithms.

Fig. 4 Effectiveness of QPSO-CD for the Griewank function \(f_3\)

Fig. 5 Effectiveness of QPSO-CD for the Rastrigin function \(f_4\)

Table 3 Comparison results of Griewank and Rastrigin functions

6 Correctness and time complexity analysis of the QPSO-CD algorithm

In this section, the correctness and time complexity of the proposed QPSO-CD algorithm are analyzed and compared with the classical PSO algorithm.

Theorem 1

The sequence of random variables \(\{S_n, n \ge 0\}\) generated by QPSO with Cauchy distribution converges to zero in probability as n approaches infinity.

Proof

Recall that the probability density function of the standard Cauchy distribution and its convergence probability [52] are given by

$$\begin{aligned}&f(s)=\dfrac{1}{\pi (1+s^2)} \quad \text {for} ~ -\infty< s < \infty ,\nonumber \\&P(x \le S_n \le y) = \dfrac{1}{\pi } \int _{x}^{y} \dfrac{\hbox {d}s}{1+s^2}, \quad \forall x \le y \end{aligned}$$
(9)

Consider a random variable \(Q_n\) defined as

$$\begin{aligned} Q_n = \alpha S_n, ~ \alpha =\dfrac{1}{n^\lambda } \end{aligned}$$

where \(\lambda \) denotes a fixed positive constant. Correspondingly, the distribution and density of \(Q_n\) can be calculated as

$$\begin{aligned} P(Q_n \le q)&= P(\alpha S_n \le q) = P\left( S_n \le \dfrac{q}{\alpha }\right) = \int _{-\infty }^{q/\alpha } f(s)\, \hbox {d}s,\\ \dfrac{\hbox {d}P(Q_n \le q)}{\hbox {d}q}&= \dfrac{1}{\alpha } f\left( \dfrac{q}{\alpha }\right) \end{aligned}$$

i.e., \(\frac{1}{\alpha }f(q/\alpha )\) is the probability density function of the random variable \(Q_n\). Furthermore,

$$\begin{aligned} P(|Q_n|>\xi )&= P(|S_n|>\xi n^\lambda )\\&= P(S_n> \xi n^\lambda )+ P(S_n < - \xi n ^\lambda )\\&= P(\xi n^\lambda< S_n< \infty )+ P(- \infty< S_n < - \xi n^\lambda ) \end{aligned}$$

Using Eq. (9), this probability becomes

$$\begin{aligned} P(|Q_n|>\xi )&= \dfrac{1}{\pi }\int _{\xi n^\lambda }^{\infty }\dfrac{\hbox {d}s}{1+s^2} + \dfrac{1}{\pi }\int _{-\infty }^{-\xi n^\lambda }\dfrac{\hbox {d}s}{1+s^2} \\&= 1-\dfrac{1}{\pi } \int _{-\xi n^\lambda }^{\xi n^\lambda }\dfrac{\hbox {d}s}{1+s^2} \rightarrow 0 ~ \text {as} ~ n \rightarrow \infty \end{aligned}$$

This completes the proof of the theorem. \(\square \)

Definition 1

Let \(\{S_n\}\) be a sequence of random variables. It converges to a random variable s with probability 1 if, for every \(\xi > 0\) and \(\lambda >0\), there exists \(n_1(\xi , \lambda )\) such that \(P(|S_n-s|< \xi )> 1-\lambda \) for all \(n > n_1\), i.e.,

$$\begin{aligned} P\left( \lim _{n \rightarrow \infty } |S_n-s|< \xi \right) =1 \end{aligned}$$
(10)

The efficiency of the QPSO-CD algorithm is evaluated by the number of steps needed to reach the optimal region \(R(\xi )\). The method is to evaluate the distribution of the number of steps needed to hit \(R(\xi )\) by comparing the expected value and moments of the distribution. The total number of steps to reach the optimal region is \(W(\xi )=\inf \{n \mid f_n \in R(\xi )\}\). The variance \(V(W(\xi ))\) and expected value \(E(W(\xi ))\) are given by

$$\begin{aligned} E(W(\xi ))= & {} \sum _{n=0}^{\infty }nx_n, \end{aligned}$$
(11)
$$\begin{aligned} V(W(\xi ))= & {} E(W^2(\xi ))-\{E(W(\xi ))\}^2\nonumber \\= & {} \sum _{n=0}^{\infty }n^2x_n-\left( \sum _{n=0}^{\infty }nx_n\right) ^2 \end{aligned}$$
(12)

Here \(x_n\) is the probability that the optimal region is first hit at step n, so \(E(W(\xi ))\) depends on the convergence of \(\sum _{n=0}^{\infty }nx_n\), and global convergence of QPSO-CD requires \(\sum _{n=0}^{\infty }x_n=1\). The number of objective function evaluations is used to measure time; the main benefit of this approach is that it exposes the relationship between processor time and measured time as the complexity of the objective function increases. We used the Sphere function \(f(x)=x^{\mathrm{T}}x\) with a linear constraint \(g(x)=\sum ^{n}_{j=0}x_j \ge 0\) to compute the time complexity; it attains its minimum value at 0. The optimal region is set to \(R(\xi )=R(10^{-4})\). To determine the time complexity, the PSO and QPSO-CD algorithms are executed 40 times on f(x) with initial scope \([-10, 10]^{N}\), where N denotes the dimension. We determine the mean number of objective function evaluations \(W(\xi )\), the variance \(V(W(\xi ))\), the standard deviation (SD) \(\sigma _{W(\xi )}\), the standard error (SE) \(\sigma _{W(\xi )}/ \sqrt{40}\) and the ratio of mean to dimension \(W(\xi )/N\). The contraction-expansion coefficient \(\alpha = 0.75\) is used for QPSO-CD, and the constriction coefficient \(\chi =0.73\) for PSO with acceleration factors \(c_1=c_2=2.25\).
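The statistics reported over the 40 runs can be computed as in the sketch below. We assume the population variance (divisor n) here, since the paper does not state whether the sample or population formula is used.

```python
import math

def run_statistics(evals, dim):
    """Statistics used in Section 6 for a sample of objective-function
    evaluation counts: mean W, variance V (population formula), standard
    deviation, standard error SD/sqrt(n), and the ratio W/N."""
    n = len(evals)
    mean = sum(evals) / n
    var = sum((e - mean) ** 2 for e in evals) / n   # population variance
    sd = math.sqrt(var)
    se = sd / math.sqrt(n)                          # standard error of the mean
    return {"mean": mean, "var": var, "sd": sd, "se": se,
            "mean_per_dim": mean / dim}
```

For the experiments in Tables 4 and 5, `evals` would hold the 40 evaluation counts and `dim` the dimension N of the test function.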

Table 4 Results of the time complexity for QPSO-CD algorithm
Table 5 Results of the time complexity for PSO algorithm
Fig. 6 Time complexity results for PSO and QPSO-CD

Fig. 7 Comparison of the correlation coefficients of PSO and QPSO-CD

Tables 4 and 5 show the statistical results of the time complexity test for the QPSO-CD and PSO algorithms, respectively. Figure 6 indicates that the time complexity of the proposed algorithm increases nonlinearly as the dimension increases, whereas that of the PSO algorithm increases approximately linearly; overall, the time complexity of QPSO-CD is lower than that of PSO. The Pearson correlation coefficient is used to quantify the relationship between the mean and the dimension [53]. In Fig. 7, QPSO-CD shows a strong correlation between \(W(\xi )\) and N, with correlation coefficient R \(=\) 0.9996; for PSO, the linear correlation coefficient is R \(=\) 0.9939, which is not as strong as that of QPSO-CD. The relationship between mean and dimension shows that the correlation coefficient is fairly stable for QPSO-CD compared with the PSO algorithm.

7 QPSO-CD for constraint engineering design problems

There exist several approaches for handling constrained optimization problems. The basic principle is to convert the constrained optimization problem into an unconstrained one by combining the objective function with a penalty function, and then to minimize the newly formed objective function with any unconstrained algorithm. In general, a constrained optimization problem can be described as in Eq. (13).

The objective is to minimize the objective function f(x) subject to equality constraints \(h_j(x)\) and inequality constraints \(g_i(x)\), where p(i) and q(i) denote the lower and upper bounds of the search space, respectively. Inequalities of the form \(g_i(x) \ge 0\) can be converted into \(-g_i(x)\le 0\), and equality constraints \(h_i(x)\) can be converted into the pair of inequality constraints \(h_i(x) \ge 0\) and \(h_i(x) \le 0\). Sun et al. [25] adopted a non-stationary penalty function to address nonlinear programming problems using QPSO. Coelho [27] used a penalty function with a positive constant set to 5000. We adopt the same approach but replace the constant with a dynamically allocated penalty value.

$$\begin{aligned}&\min \limits _{x}=f(x)\nonumber \\ \text {subject to}&\nonumber \\&g_i(x) \le 0, i=0, 1,\ldots n-1\nonumber \\&h_j(x)=0, j=1,2,\ldots r\nonumber \\&p(i)\le x_i \le q(i), 1 \le i \le m\nonumber \\&x=\{x_1,x_2, x_3,\ldots , x_m\} \end{aligned}$$
(13)

Usually, the procedure is to find a solution whose design variables lie within the lower and upper bound constraints of the search space, i.e., \(x_i \in [p(i), q(i)]\). If a solution violates either bound, the following rules are applied:

$$\begin{aligned}&x_i=x_i + \{p(x_i)-q(x_i)\}.~ \hbox {rand}[0,1]\nonumber \\&x_i=x_i - \{p(x_i)-q(x_i)\}.~ \hbox {rand}[0,1] \end{aligned}$$
(14)

where rand[0, 1] is a uniformly distributed random value between 0 and 1. Finally, the unconstrained optimization problem is solved using penalty values dynamically modified according to the inequality constraints \(g_i(x)\). Thus, the objective function is evaluated as

$$\begin{aligned} F(x)=\left\{ \begin{array}{ll} f(x)&{}\text {if}~ g_i(x)\le 0\\ f(x)+y(t). \sum \limits _{i=1}^{n} g_i(x)&{}\text {if}~ g_i(x)>0\\ \end{array} \right\} \end{aligned}$$
(15)

where f(x) is the main objective function of the optimization problem in Eq. (13), t is the iteration number and y(t) represents the dynamically allocated penalty value.
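The penalized evaluation of Eq. (15) can be sketched as follows. We read the sum in Eq. (15) as running over the violated constraints only (those with \(g_i(x)>0\)); the function name is our own.

```python
def penalized_objective(f, constraints, x, y_t):
    """Eq. (15): if every inequality constraint g_i(x) <= 0 is satisfied,
    return f(x); otherwise add the penalty y(t) times the sum of the
    violated constraint values."""
    g_vals = [g(x) for g in constraints]
    violation = sum(g for g in g_vals if g > 0)   # only violated constraints
    if violation == 0:
        return f(x)
    return f(x) + y_t * violation
```

Because y(t) grows with the iteration number, infeasible particles are tolerated early in the search and penalized heavily near convergence.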

In this section, QPSO-CD is tested on the three-bar truss, tension/compression spring and pressure vessel design problems, which involve different members and constraints. The performance of QPSO-CD is compared and analyzed against the results of the PSO, QPSO and SP-QPSO algorithms as reported in the literature.

7.1 Three-bar truss design problem

The three-bar truss is a constrained design optimization problem that has been widely used to test optimization methods. Its design variables are the cross-sectional areas of the three bars, \(x_1\) (\(=x_3\)) and \(x_2\). The aim is to minimize the weight of the truss subject to stress constraints on the bars. The structure is symmetric and subjected to two constant loadings \(P_1=P_2=P\), as shown in Fig. 8. The mathematical formulation with the two design variables (\(x_1\), \(x_2\)) and three constraint functions is:

$$\begin{aligned} \text {Min}\quad f(x)&= (2\sqrt{2}x_1+x_2)\, l\nonumber \\ \text {subject to:}&\nonumber \\ g_1(x)&= \dfrac{\sqrt{2}x_1+x_2}{\sqrt{2}x_1^{2}+2x_1x_2}P-\sigma \le 0,\nonumber \\ g_2(x)&= \dfrac{x_2}{\sqrt{2}x_1^{2}+2x_1x_2}P-\sigma \le 0,\nonumber \\ g_3(x)&= \dfrac{1}{x_1+\sqrt{2}x_2}P-\sigma \le 0 \end{aligned}$$
(16)
$$\begin{aligned} \text {where} ~ 0 \le x_1, x_2 \le 1,\quad l=100~\mathrm{cm},\quad P=\sigma =2~\mathrm{kN/cm^{2}} \end{aligned}$$
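The objective and constraints of Eq. (16) translate directly into a fitness sketch like the one below (illustrative only; function names are ours, constants follow Eq. (16)):

```python
import math

P = SIGMA = 2.0   # load and allowable stress, kN/cm^2, from Eq. (16)
L = 100.0         # bar length, cm

def truss_weight(x1, x2):
    """Objective of Eq. (16): weight of the symmetric three-bar truss."""
    return (2 * math.sqrt(2) * x1 + x2) * L

def truss_constraints(x1, x2):
    """Stress constraints g1-g3 of Eq. (16); all must be <= 0."""
    g1 = (math.sqrt(2) * x1 + x2) / (math.sqrt(2) * x1**2 + 2 * x1 * x2) * P - SIGMA
    g2 = x2 / (math.sqrt(2) * x1**2 + 2 * x1 * x2) * P - SIGMA
    g3 = 1.0 / (x1 + math.sqrt(2) * x2) * P - SIGMA
    return (g1, g2, g3)
```

Paired with the penalty scheme of Eq. (15), these two functions define the fitness that QPSO-CD minimizes for this problem.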
Fig. 8 Structure of the three-bar truss

Table 6 Comparison of optimal results for three-bar truss problem

The results obtained by QPSO-CD are compared with its counterparts in Table 6. For the three-bar truss problem, QPSO-CD is superior to the optimal solutions previously reported in the literature. The difference between the best solution obtained by QPSO-CD and those of the other algorithms is shown in Fig. 9.

Fig. 9 Optimal results of the PSO, QPSO, SP-QPSO and QPSO-CD algorithms for the three-bar truss problem

7.2 Tension/compression spring design problem

The main aim is to minimize the volume V of a spring under constant tension load, as shown in Fig. 10. Using the symmetry of the structure, there are three design variables (\(x_1, x_2, x_3\)): \(x_1\) is the wire diameter, \(x_2\) is the coil diameter and \(x_3\) is the number of active coils. The mathematical formulation is:

$$\begin{aligned} \text {Min}\quad f(x)&= (x_3+2)x_2x_1^2,\nonumber \\ \text {subject to:}&\nonumber \\ g_1(x)&= 1-\dfrac{x_2^{3}x_3}{71785x_1^{4}} \le 0,\nonumber \\ g_2(x)&= \dfrac{4x_2^{2}-x_1x_2}{12566(x_2x_1^{3}-x_1^{4})}+\dfrac{1}{5108x_1^{2}}-1 \le 0,\nonumber \\ g_3(x)&= 1-\dfrac{140.45x_1}{x_2^2x_3}\le 0,\nonumber \\ g_4(x)&= \dfrac{x_2+x_1}{1.5}-1\le 0,\nonumber \\ \text {where} ~ 0.05&\le x_1 \le 2,\quad 0.25 \le x_2 \le 1.3,\quad 2 \le x_3 \le 15 \end{aligned}$$
(17)
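The spring model of Eq. (17) can likewise be written as a small fitness sketch (function names are ours; the coefficients follow the standard formulation in Eq. (17)):

```python
def spring_volume(x1, x2, x3):
    """Objective of Eq. (17): spring volume (x3 + 2) * x2 * x1^2."""
    return (x3 + 2) * x2 * x1**2

def spring_constraints(x1, x2, x3):
    """Constraints g1-g4 of Eq. (17); all must be <= 0."""
    g1 = 1 - (x2**3 * x3) / (71785 * x1**4)
    g2 = ((4 * x2**2 - x1 * x2) / (12566 * (x2 * x1**3 - x1**4))
          + 1 / (5108 * x1**2) - 1)
    g3 = 1 - 140.45 * x1 / (x2**2 * x3)
    g4 = (x2 + x1) / 1.5 - 1
    return (g1, g2, g3, g4)
```

Any candidate produced by QPSO-CD is kept within the stated bounds by the repair rules of Eq. (14) and scored through the penalty function of Eq. (15).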
Fig. 10 Structure of the tension/compression spring

Table 7 Comparison of optimal results for tension spring design problem
Fig. 11 Results of the PSO, QPSO, SP-QPSO and QPSO-CD methods for the tension spring design problem

Table 7 shows that the QPSO algorithm with Cauchy distribution and the natural selection strategy is robust and obtains better solutions than PSO and QPSO. The differences between the best solution found by QPSO-CD (\(f(x)=0.00263\)) and those of the other algorithms for the tension spring design problem are reported in Fig. 11.

7.3 Pressure vessel design problem

Fig. 12 Design of the pressure vessel

Kannan and Kramer [54] first studied the pressure vessel design problem with the main aim of reducing the total fabrication cost. Pressure vessels can be of any shape, but for engineering purposes a cylindrical design capped by hemispherical heads at both ends is widely used [55]. Figure 12 shows the structure of the pressure vessel design problem. It has four design variables (\(x_1, x_2, x_3, x_4\)): \(x_1\) denotes the shell thickness \((T_\mathrm{s})\), \(x_2\) the head thickness (\(T_\mathrm{h}\)), \(x_3\) the inner radius (R) and \(x_4\) the length of the vessel (L). The objective function and constraint equations are:

$$\begin{aligned} \text {Min}\quad f(x)&= 0.6224x_1x_3x_4 + 1.7781x_2x_3^{2}+ 3.166x_1^{2}x_4 +19.84x_1^{2}x_3,\nonumber \\ \text {subject to:}&\nonumber \\ g_1(x)&= -x_1+0.0193x_3 \le 0,\nonumber \\ g_2(x)&= -x_2+0.00954x_3\le 0,\nonumber \\ g_3(x)&= -\pi x_3^{2}x_4-\dfrac{4}{3}\pi x_3^{3}+1296000 \le 0,\nonumber \\ g_4(x)&= x_4-240 \le 0,\nonumber \\ \text {where} ~ 1\times 0.0625&\le x_1, x_2 \le 99 \times 0.0625,\quad 10 \le x_3, x_4 \le 200 \end{aligned}$$
(18)
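The vessel model of Eq. (18) can be expressed as the following fitness sketch (function names are ours; coefficients and bounds follow Eq. (18)):

```python
import math

def vessel_cost(x1, x2, x3, x4):
    """Objective of Eq. (18): total fabrication cost of the vessel."""
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
            + 3.166 * x1**2 * x4 + 19.84 * x1**2 * x3)

def vessel_constraints(x1, x2, x3, x4):
    """Constraints g1-g4 of Eq. (18); all must be <= 0."""
    g1 = -x1 + 0.0193 * x3                                   # shell thickness
    g2 = -x2 + 0.00954 * x3                                  # head thickness
    g3 = -math.pi * x3**2 * x4 - (4.0 / 3.0) * math.pi * x3**3 + 1296000
    g4 = x4 - 240                                            # length limit
    return (g1, g2, g3, g4)
```

As with the other two problems, these functions plug into the penalty scheme of Eq. (15) to form the objective that QPSO-CD minimizes.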
Table 8 Comparison of optimal results for Pressure vessel design problem
Fig. 13 Optimal results of the PSO, QPSO, SP-QPSO and QPSO-CD techniques for the pressure vessel design problem

The optimal results of QPSO-CD are compared in Table 8 with the best SP-QPSO, QPSO and PSO results reported in previous work. The best solution obtained by QPSO-CD is better than those of the other algorithms, as shown in Fig. 13.

8 Conclusion

In this paper, a new hybrid quantum particle swarm optimization algorithm with a natural selection method and Cauchy distribution is proposed. The performance of the proposed algorithm is evaluated on four benchmark functions, and the optimal results are compared with existing algorithms. Further, QPSO-CD is applied to solve engineering design problems; its efficiency and superiority over preceding results are demonstrated on three such problems: the three-bar truss, tension/compression spring and pressure vessel. The efficiency of the QPSO-CD algorithm is evaluated by the number of steps needed to reach the optimal region, and the time complexity of the proposed algorithm is shown to be lower than that of classical PSO. In terms of convergence, the experimental outcomes show that QPSO-CD converges to results closer to the superior solution.