In Ref. [1], the authors proposed an improved version of particle swarm optimization (PSO) based on fractional-order calculus concepts, in which the fractional derivative is used to control the convergence of the algorithm. During the past several years, the fractional-order PSO algorithm has attracted the attention of several researchers [2–4], and many new studies on improving the PSO model have also been conducted [5–7].

However, there is an error in their fractional-order PSO model, which is described and corrected in the following.

In the original PSO algorithm, the movement of each particle is characterized by two vectors, namely the current position \(x\) and the velocity \(v\). At time \(t\), each particle updates its velocity by the following equation:

$$\begin{aligned} v_{t+1} -v_t =\phi _1 (b-x)+\phi _2 (g-x), \end{aligned}$$
(1)

where \(b\) denotes the best position found by the particle so far, and \(g\) denotes the global best position achieved by the whole swarm so far. \(\phi _{1}\) and \(\phi _{2}\) are coefficients generated randomly from a uniform distribution. For simplicity, the symbols and notation are used with the same meanings as in Ref. [1]. The fractional-order PSO algorithm extends the order of the velocity derivative from the classical integer-order case to the fractional-order case, yielding

$$\begin{aligned} D^{\alpha }[v_{t+1} ]=\phi _1(b-x)+\phi _2 (g-x). \end{aligned}$$
(2)

Thus, Pires et al. [1] derived the new velocity updating strategy shown below:

$$\begin{aligned}&v_{t+1}-\alpha v_t -\frac{1}{2}\alpha v_{t-1} -\frac{1}{6}\alpha (1-\alpha )v_{t-2} \nonumber \\&\quad -\frac{1}{24}\alpha (1-\alpha )(2-\alpha )v_{t-3} \nonumber \\&\quad =\phi _1 (b-x)+\phi _2 (g-x). \end{aligned}$$
(3)

According to Ref. [1], (1) is a special case of (3) when \(\alpha = 1\). However, (1) cannot be deduced from (3): substituting \(\alpha = 1\) into (3) leaves a spurious term, as shown below. Therefore, the key formula of the fractional-order PSO algorithm in Ref. [1] is wrong.
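
Indeed, setting \(\alpha = 1\) in (3), every term containing the factor \((1-\alpha )\) vanishes and the left-hand side reduces to

$$\begin{aligned} v_{t+1} -v_t -\frac{1}{2}v_{t-1} =\phi _1 (b-x)+\phi _2 (g-x), \end{aligned}$$

which differs from (1) by the extra term \(-\frac{1}{2}v_{t-1}\).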

To correct the formula, we substitute the following discrete-time approximation of the fractional derivative into (2):

$$\begin{aligned} D^{\alpha }[x(t)]=\frac{1}{T^{\alpha }}\sum _{k=0}^r {\frac{(-1)^{k}\Gamma (\alpha +1)x(t-kT)}{\Gamma (k+1)\Gamma (\alpha -k+1)}}, \end{aligned}$$
(4)

where \(T\) is the sampling period and \(r\) is the truncation order. In agreement with Ref. [1], \(r=4\) is used in this comment.
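
As a numerical illustration (ours, not part of Ref. [1]), the following Python sketch evaluates the coefficients of (4) for \(T = 1\) and \(r = 4\) through the equivalent recursion \(c_0 = 1\), \(c_k = c_{k-1}(k-1-\alpha )/k\), which avoids the poles of the Gamma function at non-positive integers:

```python
# Sketch (ours, not from Ref. [1]): coefficients of the truncated
# Grunwald-Letnikov derivative (4) for T = 1 and r = 4. The recursion
# c_0 = 1, c_k = c_{k-1} * (k - 1 - alpha) / k equals
# (-1)^k * Gamma(alpha+1) / (Gamma(k+1) * Gamma(alpha-k+1)).
def gl_coefficients(alpha, r=4):
    coeffs = [1.0]
    for k in range(1, r + 1):
        coeffs.append(coeffs[-1] * (k - 1 - alpha) / k)
    return coeffs

# Generic alpha: the coefficients match the factors in (5), i.e.
# 1, -a, -a(1-a)/2, -a(1-a)(2-a)/6, -a(1-a)(2-a)(3-a)/24.
print(gl_coefficients(0.5))  # [1.0, -0.5, -0.125, -0.0625, -0.0390625]

# alpha = 1: only the first difference survives, so (5) reduces to (1),
# whereas the published formula (3) does not.
print(gl_coefficients(1.0))  # [1.0, -1.0, -0.0, -0.0, -0.0]
```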

Hence, the fractional-order velocity update of PSO can be written as

$$\begin{aligned}&v_{t+1}\!-\!\alpha v_t \!-\!\frac{1}{2}\alpha (1\!-\!\alpha )v_{t-1} -\frac{1}{6}\alpha (1-\alpha )(2-\alpha )v_{t-2} \nonumber \\&\quad -\frac{1}{24}\alpha (1-\alpha )(2-\alpha )(3-\alpha )v_{t-3} \nonumber \\&\quad =\phi _1 (b-x)+\phi _2 (g-x) \end{aligned}$$
(5)

That is,

$$\begin{aligned}&v_{t+1} \!=\!\alpha v_t \!+\!\frac{1}{2}\alpha (1-\alpha )v_{t-1} \!+\!\frac{1}{6}\alpha (1-\alpha )(2-\alpha )v_{t-2} \nonumber \\&\quad +\frac{1}{24}\alpha (1-\alpha )(2-\alpha )(3-\alpha )v_{t-3} \nonumber \\&\quad +\phi _1 (b-x)+\phi _2 (g-x) \end{aligned}$$
(6)
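
For concreteness, a minimal Python sketch of the corrected update (6) for a single dimension is given below; the position update \(x \leftarrow x + v\) and the handling of the velocity history follow standard PSO practice and are our assumptions, not details taken from Ref. [1]:

```python
# Sketch of the corrected fractional-order velocity update (6) (FPSO-2).
# The position update x <- x + v and the velocity memory handling are
# standard PSO practice, assumed here for illustration.
import random

def fpso2_velocity(alpha, v_hist, x, b, g):
    """v_hist = [v_t, v_{t-1}, v_{t-2}, v_{t-3}] for one dimension."""
    phi1, phi2 = random.uniform(0.0, 1.0), random.uniform(0.0, 1.0)
    return (alpha * v_hist[0]
            + alpha * (1 - alpha) * v_hist[1] / 2
            + alpha * (1 - alpha) * (2 - alpha) * v_hist[2] / 6
            + alpha * (1 - alpha) * (2 - alpha) * (3 - alpha) * v_hist[3] / 24
            + phi1 * (b - x) + phi2 * (g - x))

# Example: one step with alpha = 0.6 and a short velocity history.
v_hist = [0.4, 0.2, 0.1, 0.05]
x, b, g = 1.0, 0.3, 0.0
v_next = fpso2_velocity(0.6, v_hist, x, b, g)
x_next = x + v_next              # standard PSO position update (assumed)
v_hist = [v_next] + v_hist[:3]   # shift the velocity memory window
print(v_next, x_next)
```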

In the following, we revalidate the performance of the fractional-order PSO algorithm using (6), and the results are compared with those obtained by Pires et al. in Ref. [1]. To distinguish the two, we denote the PSO algorithm described in this comment as FPSO-2, while the algorithm presented in Ref. [1] is denoted as FPSO-1. The test functions adopted herein are the five well-known functions Bohachevsky 1, Colville, Drop wave, Easom, and Rastrigin, with the same expressions as presented in Ref. [1]. The parameters of the FPSO algorithms also follow Ref. [1]: the population size is 10, the maximum number of iterations is 200, and \(\phi _{1}\) and \(\phi _{2}\) are generated randomly and uniformly in [0, 1]. Moreover, the value of \(\alpha \) decreases according to \(\alpha (t)= 0.9-0.6 t / 200\), \(t = 0, 1,{\ldots },200\) (see the sketch below). To reduce statistical error, each algorithm is run 201 times independently for every function, and the median results are used in the comparison. Figures 1, 2, 3, 4 and 5 show the iterative evolution for each test function. The correct results for the PSO with fractional-order velocity are indicated by black solid lines.
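
A brief sketch of this protocol is shown below; `fpso2_run` is a hypothetical stand-in for one independent FPSO-2 run returning the best fitness found:

```python
# Sketch of the experimental protocol described above; fpso2_run is a
# hypothetical stand-in for one independent FPSO-2 run on a test function.
def alpha_schedule(t, t_max=200):
    """alpha(t) = 0.9 - 0.6 * t / 200: alpha decays linearly from 0.9 to 0.3."""
    return 0.9 - 0.6 * t / t_max

print(alpha_schedule(0), alpha_schedule(100), alpha_schedule(200))  # 0.9 0.6 0.3 (up to rounding)

# Median of 201 independent runs, as used for the comparisons in Figs. 1-5:
# import statistics
# results = [fpso2_run(function, seed=s) for s in range(201)]
# print(statistics.median(results))
```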

Fig. 1 Evolution of the Bohachevsky 1 function using the FPSOs

Fig. 2 Evolution of the Colville function using the FPSOs

Fig. 3 Evolution of the Drop wave function using the FPSOs

Fig. 4 Evolution of the Easom function using the FPSOs

Fig. 5 Evolution of the Rastrigin function using the FPSOs

Moreover, the global minimum value of the Drop wave function is \(f^{*}(x) = -1.0\), rather than \(f^{*}(x) = 0.0\) as given in Ref. [1].
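
Assuming the standard two-dimensional Drop wave expression, which we take to be the one used in Ref. [1],

$$\begin{aligned} f(x_1 ,x_2 )=-\frac{1+\cos \left( 12\sqrt{x_1^2 +x_2^2 }\right) }{\frac{1}{2}\left( x_1^2 +x_2^2 \right) +2}, \end{aligned}$$

evaluating it at the global minimizer \(x_1 = x_2 = 0\) gives \(f^{*}=-(1+\cos 0)/2=-1.0\), in agreement with the value stated above.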