1 Introduction

Flood routing is a fundamental step in disaster management. Flood routing procedures can be classified into two types: hydrologic methods and hydraulic methods. Hydrologic methods employ the principle of continuity together with a relationship between the discharge and the temporary storage that holds the excess volume of water during a flood. Hydraulic methods, on the other hand, employ approximate solutions for gradually varied, unsteady flow in open channels, based on either the convection-diffusion equations or the one-dimensional Saint-Venant equations.

Compared with hydrologic techniques, hydraulic methods usually give a more accurate description of the flood wave profile, but they are restricted to particular applications because of their higher computational requirements. Hydrologic routing approaches, in contrast, are relatively easy to implement and give fairly precise results. Among the hydrologic models used for flood routing, the Muskingum model is the most widely used owing to its simplicity.

The Muskingum model was first developed by McCarthy for flood control and management in the Muskingum River basin, Ohio. Water resources engineers regard the Muskingum model as a very valuable tool for flood forecasting and disaster management, but they point out that the most difficult task in applying it is parameter estimation, because the parameters cannot be derived graphically from historical inflow and outflow hydrographs.

Over the past years, researchers have applied many optimization methods to estimate the parameters of the Muskingum model. In 1997, Mohan [1] used a genetic algorithm to estimate these parameters and verified its performance against the method proposed by Yoon and Padmanabhan [2]. Later, Geem [3] introduced the Broyden–Fletcher–Goldfarb–Shanno (BFGS) method, which searches the solution space using gradients, to estimate the Muskingum parameters. Another notable approach was proposed by Chen [4], who applied a Gray-encoded accelerating genetic algorithm (GAGA) to optimize these parameters. In 2009, Chu [5] applied PSO to the estimation problem. Afterward, Luo [6] utilized an immune clonal selection algorithm (ICSA) for the same task. In 2011, Barati [7] used the Nelder–Mead simplex method (NMSM) to estimate these parameters, and at about the same time, Xu et al. [8] adopted differential evolution. BFGS and NMSM are local optimization algorithms, whereas the remaining methods above are global optimization algorithms.

PSO is a swarm intelligence algorithm that imitates social interaction and communication in groups of creatures such as bird flocks and fish schools [9]. Like other swarm intelligence optimization algorithms, PSO has proved to be an effective approach capable of tackling a variety of difficult optimization problems. In [10], an improved cooperative PSO was applied to train feedforward neural networks. In 2012, Ghosh [11] proposed an inertia-adaptive PSO with a particle mobility factor for global optimization problems. Meanwhile, Wang [12] used a converging linear PSO to train support vector data descriptors. Jia [13] combined multi-objective PSO with Pareto-optimal solutions to solve a batch-processing problem. Another important application of PSO was proposed by Lee [14], who used a hybrid GA-PSO for network decomposition. In addition, PSO has been applied to the optimization of fuzzy controllers [15]. Recently, Altun [16] solved a cost optimization problem for mixed feeds with PSO.

In recent years, many excellent variants of PSO have been designed, e.g., comprehensive learning PSO [17], DMS-PSO [18–20], adaptive PSO [27], orthogonal learning PSO [28], PSO with an aging leader and challengers [29], cooperatively coevolving PSO [30], distance-based locally informed PSO [31], quantum-based PSO [32], and parallel hybrid PSO [33]. In this paper, we combine PSO with the NMSM to optimize the parameters of the Muskingum model.

The remainder of our work is organized as follows. We discuss the background of Muskingum models in Sect. 2. We then describe the main principles of the PSO and the NMSM and present a hybrid PSO in Sect. 3. Section 4 consists of experimental results and analyses. Section 5 provides the conclusions of the paper.

2 Muskingum models

In this section, we introduce the Muskingum model used to describe flood routing. Based on the continuity equation [4], the flow conditions at two locations on the same river can be written as follows:

$$\begin{aligned}&\frac{\hbox {d}W}{\hbox {d}t}=I-Q,\end{aligned}$$
(1)
$$\begin{aligned}&W=k(xI+(1-x)Q), \end{aligned}$$
(2)

In the two equations above, \(I\) denotes the inflow discharges and \(Q\) denotes the outflow discharges, \(W\) denotes the storage volume, \(k\) and \(x\) represent the model parameters, and \(t\) denotes the time.

To forecast a flood wave, the following routing equation (3) is obtained by combining Eqs. (1) and (2):

$$\begin{aligned}&Q(i)=c_{1}I(i)+c_{2}I(i-1)+c_{3}Q(i-1),\; i=2,\cdots ,n, \end{aligned}$$
(3)

where \(I(i)\) and \(Q(i)\) denote the observed inflow discharge and the observed outflow discharge, respectively, at time \(t_{i}\); \(n\) is the number of time points; and \(c_{1}\), \(c_{2}\), and \(c_{3}\) are constants that must satisfy the following three equations [34]:

$$\begin{aligned}&c_{1}=\frac{0.5\varDelta t-kx}{k-kx+0.5\varDelta t},\end{aligned}$$
(4)
$$\begin{aligned}&c_{2}=\frac{0.5\varDelta t+kx}{k-kx+0.5\varDelta t},\end{aligned}$$
(5)
$$\begin{aligned}&c_{3}=\frac{k-kx-0.5\varDelta t}{k-kx+0.5\varDelta t}, \end{aligned}$$
(6)

where \(\varDelta t\) is the time step.
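
To make the routing recursion concrete, the short Python sketch below implements Eqs. (3)–(6); it assumes an observed inflow hydrograph, an initial outflow value, and consistent units for \(k\) and \(\varDelta t\) (the function and variable names are illustrative, not from the original study).

```python
import numpy as np

def muskingum_route(inflow, k, x, dt, q0):
    """Route a flood wave through a reach using Eqs. (3)-(6).

    inflow : observed inflow hydrograph I(1..n)
    k, x   : Muskingum model parameters
    dt     : routing time step (same time unit as k)
    q0     : initial outflow Q(1)
    """
    inflow = np.asarray(inflow, dtype=float)
    denom = k - k * x + 0.5 * dt
    c1 = (0.5 * dt - k * x) / denom        # Eq. (4)
    c2 = (0.5 * dt + k * x) / denom        # Eq. (5)
    c3 = (k - k * x - 0.5 * dt) / denom    # Eq. (6)
    # the three coefficients sum to one, which gives a quick sanity check
    assert abs(c1 + c2 + c3 - 1.0) < 1e-9

    outflow = np.empty_like(inflow)
    outflow[0] = q0
    for i in range(1, inflow.size):        # Eq. (3)
        outflow[i] = c1 * inflow[i] + c2 * inflow[i - 1] + c3 * outflow[i - 1]
    return outflow
```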

From Eqs. (4)–(6), we can obtain Eqs. (7) and (8):

$$\begin{aligned}&k=\frac{c_{2}+c_{3}}{c_{1}+c_{2}}\varDelta t,\end{aligned}$$
(7)
$$\begin{aligned}&x=\frac{c_{1}+c_{2}}{2(c_{2}+c_{3})}+\frac{c_{1}}{c_{1}-1}, \end{aligned}$$
(8)

To estimate these parameters efficiently, we apply the following fitness function:

$$\begin{aligned} \hbox {min }\,\,f(k,x)&=\sum \limits _{i=1}^{n-1}|(kx-0.5\varDelta t)I(i+1)-(kx+0.5\varDelta t)I(i)+\nonumber \\ & \quad [k(1-x)+0.5\varDelta t]Q(i+1)+[-k(1-x)+0.5\varDelta t]Q(i)| \end{aligned}$$
(9)
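
As a minimal sketch of the objective in Eq. (9), the residual of the discretized storage balance obtained from Eqs. (1) and (2) can be evaluated in vectorized form; the function below assumes equal-length arrays of observed inflow and outflow, and the names are our own choices.

```python
import numpy as np

def muskingum_fitness(params, inflow, outflow, dt):
    """Sum of absolute residuals of the discretized storage equation, Eq. (9)."""
    k, x = params
    I = np.asarray(inflow, dtype=float)
    Q = np.asarray(outflow, dtype=float)
    # residual over each interval [t_i, t_{i+1}], from combining Eqs. (1) and (2)
    r = ((k * x - 0.5 * dt) * I[1:] - (k * x + 0.5 * dt) * I[:-1]
         + (k * (1 - x) + 0.5 * dt) * Q[1:] + (-k * (1 - x) + 0.5 * dt) * Q[:-1])
    return np.abs(r).sum()
```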

3 Hybrid particle swarm optimization algorithm

In this section, first, we introduce the standard PSO algorithm, followed by an explanation of the NMSM. Finally, our new hybrid PSO (HPSO) algorithm is presented.

3.1 Particle swarm optimization algorithm

3.1.1 The fundamental principle of PSO

PSO first randomly initializes the particle swarm within the feasible solution space and velocity space; that is, the initial position and velocity of each particle are determined, and the position is subsequently used to characterize a candidate solution. For instance, the position and velocity of the \(i\)-th particle in a \(d\)-dimensional search space are expressed as \(X_i=[x_{i,1},x_{i,2},\cdots ,x_{i,d}]\) and \(V_i=[v_{i,1},v_{i,2},\cdots ,v_{i,d}]\), respectively. The best position found so far by each particle at time \(t\) (\(pbest\)), \(P_i=[p_{i,1},p_{i,2},\cdots ,p_{i,d}]\), and the best known position of the swarm (\(gbest\)), \(P_g\), are determined by evaluating the objective function of each particle. The velocity and position of each particle are then updated using the following equations:

$$\begin{aligned} v_{i,j}(t+1)=\omega v_{i,j}(t)+c_1r_1[p_{i,j}-x_{i,j}(t)]+c_2r_2[p_{g,j}-x_{i,j}(t)] \end{aligned}$$
(10)
$$\begin{aligned} x_{i,j}(t+1)=x_{i,j}(t)+v_{i,j}(t+1),j=1,2,\cdots ,d \end{aligned}$$
(11)

Here, \(\omega\) is the inertia weight factor, which decreases linearly from 0.9 to 0.4; \(c_1\) and \(c_2\) are positive acceleration constants; and \(r_1\) and \(r_2\) are random numbers uniformly distributed between 0 and 1. In addition, the migration of particles can be appropriately limited by restricting the particles' velocities to a range \([v_{\rm min},v_{\rm max}]\) and their positions to an interval \([x_{\rm min},x_{\rm max}]\).
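
For illustration, one swarm-wide update step of Eqs. (10) and (11) can be written as follows; the array shapes, clipping bounds, and function name are our own assumptions for the sketch.

```python
import numpy as np

def pso_update(x, v, pbest, gbest, w, c1, c2, v_bounds, x_bounds):
    """One velocity/position update of the whole swarm per Eqs. (10) and (11).

    x, v   : (P, d) arrays of particle positions and velocities
    pbest  : (P, d) array of personal best positions
    gbest  : (d,) array, best position found by the swarm
    """
    P, d = x.shape
    r1, r2 = np.random.rand(P, d), np.random.rand(P, d)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # Eq. (10)
    v_new = np.clip(v_new, *v_bounds)      # limit particle migration to [v_min, v_max]
    x_new = np.clip(x + v_new, *x_bounds)  # Eq. (11), kept inside [x_min, x_max]
    return x_new, v_new
```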

As can be seen from the above equations, updating the velocity of particles has to go through three stages.

Firstly, the effect of the particle's current velocity is included. By taking the current status of the particles into consideration, the algorithm can strike a balance between global and local search;

Secondly, the effect of the cognitive component is included. This is the influence of the particle's own memory, which enables the particle to search globally and avoid local optima;

Thirdly, the effect of the social component is included; that is, the influence of swarm information reflects the information sharing among particles. Under the combined effects of these three stages, the particle uses this information-sharing mechanism, drawing on historical experience, and constantly adjusts its position in order to find a better solution [35].

This way of updating particle positions in every generation of the PSO algorithm is illustrated in Fig. 1.

Fig. 1 The process of position update with one particle

The PSO algorithm has two versions, a global version and a local version. In the global version, each particle tracks two extreme values: its own best position and the best position of the whole population. In the local version, instead of tracking the best position of the population, each particle tracks its own best position and the best position among the particles in its topological neighborhood; the velocity update equation then becomes:

$$\begin{aligned} v_{i,j}(t+1)=\omega v_{i,j}(t)+c_1r_1[p_{i,j}-x_{i,j}(t)]+c_2r_2[p_{l,j}-x_{i,j}(t)] \end{aligned}$$
(12)

Here, \(P_l=[p_{l,1},p_{l,2},\ldots ,p_{l,d}]\) is the best position in the local neighborhood. The global version of the PSO algorithm is adopted in the present paper.

3.1.2 The flow of PSO algorithm

The process of the basic PSO algorithm is as follows.

Step 1:

Initialize randomly the position and velocity of each particle in the population. If the search space is \(d\)-dimensional, each particle contains \(d\) variables.

Step 2:

Evaluate all the particles in the population and store each particle's current position and fitness value in its \(pbest\). The position and fitness value of the individual with the best fitness among all \(pbest\) are stored in \(gbest\).

Step 3:

Update the velocity and position of each particle in accordance with Eqs. (10) and (11), respectively.

Step 4:

Evaluate all the particles in the population.

Step 5:

Compare the current fitness value of each particle with the fitness value of its \(pbest\). If the current fitness value is better, update the particle's \(pbest\) with its current position and fitness value.

Step 6:

Compare the fitness values of all \(pbest\) with that of \(gbest\) and update \(gbest\) accordingly.

Step 7:

If the termination criterion is met, output \(gbest\) and its fitness value and stop; otherwise, go to Step 3.
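
Steps 1–7 can be condensed into a compact global-version PSO loop; the sketch below uses illustrative default parameter values and a linearly decreasing inertia weight, and is not the authors' exact implementation.

```python
import numpy as np

def pso(fitness, d, n_particles=20, iters=1000, w0=0.9, w1=0.4,
        c1=2.1, c2=2.1, x_bounds=(0.0, 5.0), v_max=0.5):
    """Minimal global-version PSO following Steps 1-7 (defaults are illustrative)."""
    lo, hi = x_bounds
    x = np.random.uniform(lo, hi, (n_particles, d))            # Step 1
    v = np.random.uniform(-v_max, v_max, (n_particles, d))
    pbest = x.copy()                                           # Step 2
    pbest_val = np.array([fitness(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for t in range(iters):
        w = w0 - (w0 - w1) * t / iters                         # linearly decreasing inertia
        r1 = np.random.rand(n_particles, d)
        r2 = np.random.rand(n_particles, d)
        v = np.clip(w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x), -v_max, v_max)
        x = np.clip(x + v, lo, hi)                             # Step 3
        vals = np.array([fitness(p) for p in x])               # Step 4
        improved = vals < pbest_val                            # Step 5
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()             # Step 6
    return gbest, pbest_val.min()                              # Step 7
```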

3.1.3 Features of PSO algorithm

To sum up, PSO algorithm has the following advantages.

1. The algorithm is widely applicable and does not rely on problem-specific information.

2. It uses a swarm-based search with memory, retaining the best information of individual particles and of the whole population.

3. Its principle is simple and easy to implement.

4. Through cooperative search, it uses both the local information of individuals and the global information of the swarm to guide the search.

Undeniably, the PSO algorithm also has some shortcomings:

1. Its local search ability is poor, with low accuracy.

2. The algorithm cannot guarantee that the global optimal solution will be found and is prone to falling into local optima.

3. The search performance of the algorithm is, to some extent, dependent on its parameters.

4. The theory behind the algorithm remains to be improved; in particular, a principle guiding the algorithm design is missing [36].

3.2 Nelder–Mead simplex method

3.2.1 Basic principle of NMSM

The Nelder–Mead simplex method (NMSM), also known as the variable polyhedron search method, is a classical direct unconstrained optimization algorithm. Its advantages are low computational complexity, fast search speed, and strong local search ability [37]. The basic idea of NMSM is as follows: in an \(N\)-dimensional space, a simplex (polyhedron) is formed by \(N+1\) vertices. The fitness value of each vertex is calculated, so that the vertices with the best, second-worst, and worst values can be identified. Then, through reflection, expansion, contraction, and shrink operations, a vertex with a better value is found to replace the worst vertex, generating a new simplex. Repeating this iteration yields an optimal or near-optimal solution. NMSM starts with a randomly generated initial simplex and changes its shape step by step: first, identify the vertex \(W\) with the largest objective value, the vertex \(B\) with the smallest value, and the vertex \(N_w\) with the second-largest value; second, compute the centroid \(C\) of all vertices except \(W\), and then the reflection point \(R\) of \(W\) through \(C\). If the objective value at \(R\) is smaller than that at \(B\), an outward expansion in the same direction is executed; otherwise, if \(f_R\) is greater than the objective value at \(N_w\), an inward contraction in that direction is executed. When a trough is detected, the method shrinks inward in all directions, which moves the whole simplex closer to the lowest point before proceeding to the next iteration [39].

3.2.2 Flow of NMSM

The NMSM [38] was proposed by John Nelder and Roger Mead (1965) to minimize a fitness function in a multidimensional space. NMSM has been widely applied to optimization problems because it can locate accurate minima of fitness functions. Let \(f(x)\) denote the fitness function and \(x_{i}\), \(i\in \{1,\ldots ,N+1\}\), denote the vertices of the current simplex. The NMSM procedure has the following steps:

Step 1: (Order):

The inequality group \(f(x_{1})\le f(x_{2})\le \cdots \le f(x_{i})\le \cdots \le f(x_{N+1})\) must be satisfied for the \(N+1\) vertices.

Step 2: (Reflect):

First, calculate the reflection point \(x_{R}\) of the simplex by \(x_{R}=\overline{x}+\alpha (\overline{x}-x_{N+1}),\;(\alpha >0)\), where \(\overline{x}=\frac{1}{N}\sum \limits _{i=1}^{N}x_{i}\) is the centroid of the \(N\) best points, and then compute \(f_{R}=f(x_{R})\). If \(f_{1}\le f_{R}<f_{N}\), accept the reflected point \(x_{R}\) and end the iteration.

Step 3: (Expand):

If \(f_{R}<f_{1}\), compute the expansion point \(x_{E}\) of the simplex by \(x_{E}=\overline{x}+\beta (x_{R}-\overline{x})\),(\(\beta >1\) and \(\beta >\alpha\)). Calculate \(f_{E}=f(x_{E})\). If \(f_{E}<f_{R}\), accept \(x_{E}\) and end the iteration; if \(f_{E}\ge f_{R}\), accept \(x_{R}\) and end the iteration.

Step 4: (Contract):

If \(f_{R}\ge f_{N}\), execute a contraction between \(\overline{x}\) and the better of \(x_{R}\) and \(x_{N+1}\). (a) Outside contraction: if \(f_{N}\le f_{R}<f_{N+1}\), compute the outside contraction point \(x_{C}\) by \(x_{C}=\overline{x}+\gamma (x_{R}-\overline{x})\), (\(0<\gamma <1\)), and calculate \(f_{C}=f(x_{C})\). If \(f_{C}<f_{R}\), accept \(x_{C}\) and end the iteration; otherwise, continue with Step 5. (b) Inside contraction: if \(f_{R}\ge f_{N+1}\), compute the inside contraction point \(x_{CC}\) by \(x_{CC}=\overline{x}-\gamma (\overline{x}-x_{N+1})\) and calculate \(f_{CC}=f(x_{CC})\). If \(f_{CC}<f_{R}\), accept \(x_{CC}\) and end the iteration; otherwise, continue with Step 5.

Step 5: (Shrink):

Compute the \(N\) new points \(v_{i}=x_{1}+\sigma (x_{i}-x_{1})\) (\(0<\sigma <1\)) and calculate \(f(v_{i})\), \(i=2,\cdots ,N+1\). The (unordered) vertices of the simplex at the next iteration consist of \(x_{1},v_{2},\cdots ,v_{N+1}\). Then go to Step 1.
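
Because Steps 1–5 are the classical Nelder–Mead procedure, an off-the-shelf implementation can also be used for local refinement; the sketch below relies on scipy.optimize.minimize with method="Nelder-Mead" (the iteration limit and tolerances are illustrative assumptions).

```python
from scipy.optimize import minimize

def nmsm_refine(fitness, x0, max_iter=200):
    """Local refinement of a candidate solution with the Nelder-Mead simplex method."""
    res = minimize(fitness, x0, method="Nelder-Mead",
                   options={"maxiter": max_iter, "xatol": 1e-6, "fatol": 1e-6})
    return res.x, res.fun
```

In the Muskingum setting, the fitness argument would be Eq. (9) with the observed hydrographs held fixed, e.g., via functools.partial.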

3.3 Hybrid particle swarm optimization (HPSO) algorithm

3.3.1 Feasibility analysis of hybrid algorithm

1. Organic integration of mechanisms: NMSM is a deterministic optimization method, whereas PSO is an algorithm based on random search. Their organic integration not only enriches the search behavior during optimization, but also improves the search ability and efficiency of HPSO in obtaining high-quality solutions [37].

2. Good combination of operations: compared with other swarm intelligence algorithms, PSO has high search efficiency and quickly outlines the shape of the objective function, which provides a good initial point for NMSM and gives full play to NMSM's powerful local search ability. Thereby, NMSM and PSO can be organically combined.

3. Wide applicability: neither PSO nor NMSM requires derivatives or other auxiliary information. Embedding NMSM in PSO does not limit but instead enhances the applicability of the HPSO algorithm.

4. Parallel computing characteristics: both PSO and NMSM lend themselves to parallel computation; hence, it is very suitable to combine the two methods.

3.3.2 Hybrid strategy

Within the PSO framework, NMSM is introduced into the HPSO algorithm to improve local fine-tuning and to increase the probability that the algorithm converges to the global optimum. In each iteration, PSO first performs the global search; the simplex method then conducts a local search around some elite particles of the swarm, in regions containing good solutions, to find better solutions [39]. Specifically,

1. PSO stage: since the PSO algorithm has a powerful global search ability, after the PSO operation the particle swarm easily concentrates in the region surrounding the global optimum. On this basis, NMSM is used to perform a local search so as to obtain an optimal solution with higher precision.

2. NMSM stage: the \(P\) (population size) particles optimized by PSO in each iteration are sorted according to their fitness values. The first \(S\) particles with the best fitness constitute a simplex with \(S\) vertices. The \(S-1\) vertices \(X_1,X_2,\cdots ,X_{S-1}\) with the best responses are selected, and their centroid \(X_c\) is computed. The remaining vertex \(X_S\) is reflected (and scaled) through the centroid \(X_c\) to generate a new vertex \(X_S^{'}\), which forms a new simplex. Repeating this operation generates \(S\) new particles. The fitness value of each updated particle is calculated, and the particle with the best response replaces the best individual of the original swarm, forming the next generation together with the remaining individuals. After repeated simplex iterations, the elite individuals of the population find an approximately optimal position, which increases the search accuracy of the algorithm and the probability of finding the optimal solution sooner. In general, the number of elite individuals \(S\) should not be too large; a ratio \(S/P\) between \(10\,\%\) and \(20\,\%\) is appropriate. Because PSO converges quickly in the early stage of evolution, NMSM is invoked with a small probability for local search, to reduce the amount of computation and improve efficiency. In the late stage of evolution, when the swarm enters the local-search (exploitation) phase, NMSM is invoked with a large probability. Following this idea, an adaptive strategy based on the evolution stage is given below [37] (a code sketch of this stage-dependent invocation follows Table 1). The evolutionary process is divided into three stages:

  • The first stage: \(\tau \in [0,T_1]\), \(T_1=aT\)

  • The second stage: \(\tau \in [T_1,T_2]\), \(T_2=(1-a)T\)

  • The third stage: \(\tau \in [T_2,T]\)

Here, \(T\) is the maximum number of generations, \(\tau\) is the current generation, and the value of \(a\) is usually set to 0.382. The probability \(p\) of invoking NMSM in each stage is given in Table 1.

Table 1 Probability of invoking NMSM in different stages
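
A simplified sketch of this stage-dependent invocation is given below: each of the best \(S\) particles is polished by a Nelder–Mead refinement (the nmsm_refine helper sketched in Sect. 3.2.2) with a probability that depends on the evolution stage. The stage probabilities, the ratio \(S/P\), and the per-elite refinement are illustrative assumptions consistent with Table 1, not the authors' exact settings.

```python
import numpy as np

def nmsm_probability(tau, T, a=0.382, p_stages=(0.1, 0.3, 0.6)):
    """Probability p of invoking NMSM in the current generation (values illustrative)."""
    if tau <= a * T:                 # first stage:  tau in [0, T1], T1 = a*T
        return p_stages[0]
    if tau <= (1.0 - a) * T:         # second stage: tau in [T1, T2], T2 = (1-a)*T
        return p_stages[1]
    return p_stages[2]               # third stage:  tau in [T2, T]

def refine_elites(swarm, fitness_vals, fitness, tau, T, s_ratio=0.15):
    """With stage-dependent probability, polish the best S particles by simplex search."""
    if np.random.rand() >= nmsm_probability(tau, T):
        return swarm, fitness_vals
    S = max(2, int(s_ratio * len(swarm)))          # S/P kept between 10 % and 20 %
    for idx in np.argsort(fitness_vals)[:S]:
        x_new, f_new = nmsm_refine(fitness, swarm[idx])
        if f_new < fitness_vals[idx]:              # keep only improvements
            swarm[idx], fitness_vals[idx] = x_new, f_new
    return swarm, fitness_vals
```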

3.3.3 Characteristics of hybrid algorithm

Based on the framework of PSO, the hybrid algorithm introduces NMSM to perform repeated simplex search and iteration on some elite particles in the swarm. The characteristics of this method are as follows:

1. Because of the inherent limitations of any single structure and mechanism, a search algorithm with a single mode, including PSO, generally finds it difficult to achieve highly efficient optimization. The HPSO algorithm has better optimization performance than PSO alone or another single optimization method.

2. PSO, an algorithm based on random search, has a strong global optimization ability, but its convergence slows down in the later stages. NMSM, a deterministic descent method, has a superior local search ability: using the reflection, expansion, contraction, and shrink operations on the simplex, it can quickly find a local optimum. The two algorithms complement each other, so their combination improves both global and local search ability and efficiency.

3. NMSM is simple, with low complexity and high speed [37]. The stage-dependent probability is used to perform simplex iterations and updates only on some elite particles of the swarm, so a limited number of particles is involved. Therefore, the hybrid of NMSM and PSO does not require much extra computation.

3.3.4 Flow of HPSO

Based on the previous work, the process of HPSO is described as follows:

Step 1:

assign values to the parameters of HPSO, including the population size of the particle swarm \(P\), the constriction factor \(K\), the acceleration factors \(\varphi _1,\varphi _2\), the probability \(p\) of invoking NMSM, and the maximum number of generations \(T\);

Step 2:

initialize the population, generate the initial velocities and positions of the particles, and calculate the particles' fitness values according to the evaluation function;

Step 3:

set the present position of each particle as its \(pbest\), and the best position in the initial swarm as \(gbest\);

Step 4:

update the velocity and position of each particle and evaluate the particles' fitness values;

Step 5:

sort the particle swarm according to fitness value; for the first \(S\) elite particles, apply NMSM with probability \(p\) to perform repeated iterations and updates on these selected individuals; calculate the fitness value of each updated particle; replace the best individuals of the original swarm with the particles having the best responses, and form the next generation together with the remaining individuals of the original swarm;

Step 6:

compare the fitness value of each new individual with that of its \(pbest\); if the new fitness value is better, update \(pbest\) with the new position;

Step 7:

compare the fitness value of each new individual with that of \(gbest\); if the new fitness value is better, update \(gbest\) with the new position;

Step 8:

if the evolution of the particle swarm reaches the maximum number of generations \(T\), or the evaluation value is less than the required accuracy, output the optimal result \(gbest\) and stop. Otherwise, set \(\tau =\tau +1\) and continue from Step 4.

From the above, we can see that PSO locates the approximate region of the optimal solution and then hands the solution over to NMSM for a deeper local search. The roles of the two algorithms are different; the key idea of HPSO is to strengthen this division of labor and cooperation by merging and recombining PSO and NMSM [39]. In short, PSO performs exploration while NMSM performs exploitation, enabling HPSO to find the optimal solution more precisely and more quickly.
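
Putting the pieces together, Steps 1–8 can be outlined compactly as below; the sketch reuses the illustrative refine_elites helper from Sect. 3.3.2 and generic PSO settings, and is a schematic outline rather than the authors' exact implementation.

```python
import numpy as np

def hpso(fitness, d, P=20, T=1000, w0=0.9, w1=0.4, c1=2.1, c2=2.1,
         x_bounds=(0.0, 5.0), v_max=0.5, tol=1e-8):
    """Schematic HPSO loop: global PSO search plus NMSM polishing of elite particles."""
    lo, hi = x_bounds
    x = np.random.uniform(lo, hi, (P, d))                        # Steps 1-2
    v = np.random.uniform(-v_max, v_max, (P, d))
    vals = np.array([fitness(p) for p in x])
    pbest, pbest_val = x.copy(), vals.copy()                     # Step 3
    gbest = pbest[np.argmin(pbest_val)].copy()
    for tau in range(T):
        w = w0 - (w0 - w1) * tau / T
        r1, r2 = np.random.rand(P, d), np.random.rand(P, d)
        v = np.clip(w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x), -v_max, v_max)
        x = np.clip(x + v, lo, hi)                               # Step 4
        vals = np.array([fitness(p) for p in x])
        x, vals = refine_elites(x, vals, fitness, tau, T)        # Step 5: NMSM on elites
        improved = vals < pbest_val                              # Steps 6-7
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
        if pbest_val.min() < tol:                                # Step 8: accuracy reached
            break
    return gbest, pbest_val.min()
```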

4 Experiments and result analyses

4.1 Simulation results of a real example

The Muskingum model investigated in this paper is taken from Ref. [40], where it describes a reach of the South Canal between the Chenggou River and the Linqing River in China, with a routing time interval \(\varDelta t\) = 12 h. Detailed information can be found in Ref. [41].

The model requires the parameters \(k\) and \(x\), whose roles are clear from Eqs. (4)–(6). In our study, \(k\) and \(x\) are optimized with respect to the same criterion, the sum of the least residual absolute values, using the fitness function given in Eq. (9).

To facilitate the experiments, we used MATLAB 2012a to implement the algorithms in an m-file on a PC with a 32-bit Windows 7 operating system, 2 GB of RAM, and a 2.7 GHz Pentium dual-core CPU. Standard errors were measured over 30 independent runs. For all the intelligent algorithms, two parameters are set as follows: population size \(N_p=20\) and maximum number of iterations \(Max_{IT}\) = 1,000. The parameters of HPSO, CLPSO, DMS-PSO, and PSO are set identically: acceleration constants \(c_1=2.1\), \(c_2=2.1\); the inertia weight factor \(\omega\) decreases linearly from 0.9 to 0.4. The parameters of SaDE, EPSDE, and DE are set identically: constriction factor \(F_c=0.5\) and crossover rate \(P_c=0.6\). The parameters of GAGA, GGA, and BGA are set identically: roulette-wheel selection, crossover rate \(P_c=0.8\), and mutation rate \(P_m=0.05\).
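
To make this configuration concrete, the run could be expressed with the illustrative helpers sketched in Sects. 2 and 3 (muskingum_fitness and hpso); here I_obs and Q_obs stand for the observed hydrograph arrays from Ref. [40], which are not reproduced in this paper, and the search bounds are an assumption.

```python
from functools import partial

# I_obs, Q_obs: observed inflow/outflow hydrographs (m^3/s) from Ref. [40]; not reproduced here
objective = partial(muskingum_fitness, inflow=I_obs, outflow=Q_obs, dt=12.0)  # dt = 12 h

best, f_best = hpso(objective, d=2, P=20, T=1000, w0=0.9, w1=0.4, c1=2.1, c2=2.1,
                    x_bounds=(0.0, 30.0))   # assumed search range for (k, x)
k_est, x_est = best
print(f"k = {k_est:.4f}, x = {x_est:.4f}, f(k, x) = {f_best:.4f}")
```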

Fig. 2 The comparison of fitness values between HPSO and other methods

Fig. 3 The comparison of convergence between HPSO and other methods

Fig. 4 The simulation results of HPSO for 1960 flood routing

Fig. 5 The simulation results of HPSO for 1961 flood routing

Fig. 6 The simulation results of HPSO for 1964 flood routing

Table 2 A comparison of flood routing between different methods
Table 3 Descriptive statistics
Table 4 A comparison of calculated outflow with different methods for 1960
Table 5 A comparison of calculated outflow with different methods for 1961
Table 6 A comparison of calculated outflow with different methods for 1964

We compared our newly proposed HPSO with nine other evolutionary algorithms: PSO [5], CLPSO [17], DMS-PSO [20], EPSDE [21, 22], SaDE [23, 24], DE [8], GAGA [4], GGA [25], and BGA [26]. The comparison results show the following: for HPSO, the average absolute error (AAE) is 6.8447 and the average relative error (ARE) is 2.0151. For CLPSO, DMS-PSO, and PSO, the AAE is 7.1628, 7.0358, and 7.2463, and the ARE is 2.0187, 2.0460, and 2.1093, respectively. For SaDE, EPSDE, and DE, the AAE is 7.2302, 7.0983, and 7.1317, and the ARE is 2.1375, 2.0761, and 2.0734, respectively. For GAGA, GGA, and BGA, the AAE is 7.3471, 7.2919, and 7.2390, and the ARE is 2.1275, 2.1136, and 2.1084, respectively. The experimental data for the model are listed in detail in Table 2. These results show that HPSO has higher precision than the other evolutionary algorithms.

On the other hand, we also performed the same test with four conventional methods, and the results are as follows: for NMSM [7], the AAE is 6.9878 and the ARE is 2.0570. For the nonlinear programming method (NPM) [42], the AAE is 7.1021 and the ARE is 2.0811. For the least residual square method (LRSM) [34], the AAE is 7.4119 and the ARE is 2.1941. For the test method (TM) [34], the AAE is 8.7407 and the ARE is 2.6305. These results also show that HPSO has higher precision than these conventional methods, namely the NMSM, the NPM, the LRSM, and the TM (Fig. 2).

As shown in Tables 2 and 3 and Fig. 2, the fitness value \(f\) for HPSO is 185.7115, the second best among the 14 methods. Its number of fitness-function evaluations (EN) is 2000, the second smallest among the 14 methods. From Fig. 3, we can see that HPSO is the second fastest of the 14 methods in terms of convergence speed, behind SaDE.

The experimental results of HPSO for this practical example are shown in Figs. 2 and 3 and Tables 2 and 3. In terms of the number of objective-function evaluations, SaDE is the best of the 14 methods, and HPSO ranks second, behind SaDE and ahead of the other methods. In terms of objective function value, CLPSO is the best of the 14 methods, and HPSO ranks second, behind CLPSO and ahead of the other methods. In terms of error and standard deviation, HPSO dominates all the other methods. We can conclude that the results obtained by the proposed HPSO are satisfactory in terms of precision and convergence.

Figure 4 gives the measured discharges and calculated discharges for the Muskingum model by the HPSO for 1960. Figure 5 gives the measured discharges and calculated ones for the Muskingum model by the HPSO for 1961. Figure 6 gives the measured discharges and calculated ones for the Muskingum model by the HPSO for 1964.

From Table 2 and Figs. 2, 4, 5, and 6, we can see clearly that the simulation results obtained with our HPSO are satisfactory in terms of accuracy. The HPSO has been proved to be an efficient method to minimize the fitness function for the Muskingum model.

Tables 4, 5, and 6 separately compare the best computed outflows obtained from the various techniques, i.e., TM, LRSM, NPM, NMSM, BGA, GGA, GAGA, DE, EPSDE, SaDE, PSO, DMS-PSO, CLPSO, and HPSO, for 1960, 1961, and 1964. Comparison of the results in these three tables shows that HPSO outperforms the other algorithms in all three periods.

As seen in the experimental results, HPSO ranks first, first, first, second, second, and second out of the 14 methods in terms of absolute error, relative error, standard deviation, number of function evaluations, fitness value, and convergence speed, respectively. In summary, the overall performance of HPSO is very good.

4.2 Statistical results of p value

The Wilcoxon signed rank test was proposed by F. Wilcoxon in 1945. When the differences between the paired data of two groups take concrete numerical values, the sign test uses only their signs (positive, \(R>0\), or negative, \(R<0\)); information about the magnitude of the differences is not used. The Wilcoxon signed ranks test takes into account not only the signs but also the magnitudes of the differences, so it is more efficient than the sign test [44].

The steps of this method are as follows:

Step 1:

calculate the difference \(d_i\) for every pair of data, sort the absolute values \(|d_i|\) in ascending order, and assign mean ranks to tied values.

Step 2:

restore the signs of the differences to their ranks, compute the positive rank sum \(R^+\) and the negative rank sum \(R^-\), and select the smaller of \(R^+\) and \(R^-\) as the statistic \(T\) of the Wilcoxon signed ranks test.

Step 3:

calculate the p value and draw conclusions.

In this experiment, each algorithm is run independently 30 times. The HPSO algorithm is compared with each of the other 13 algorithms to test their performance differences.

In this study, \(R^+\) and \(R^-\) are defined as follows:

$$\begin{aligned} R^{+}&= \sum _{d_i>0}rank(d_i)+\frac{1}{2}\sum _{d_i=0}rank(d_i)\end{aligned}$$
(13)
$$\begin{aligned} R^{-}&= \sum _{d_i<0}rank(d_i)+\frac{1}{2}\sum _{d_i=0}rank(d_i) \end{aligned}$$
(14)

Here, \(R^+\) sums the ranks of the runs in which HPSO performs better than the other algorithm, whereas \(R^-\) sums the ranks of the opposite case; \(d_i = 0\) indicates that the two algorithms have the same test result.

The p value is the probability, under the null hypothesis, of obtaining the observed sample results or more extreme ones. An extremely small p value means that the observed outcome would be very unlikely if the null hypothesis were true, so by the small-probability principle we have reason to reject the null hypothesis. The smaller the p value, the stronger the grounds for rejecting the null hypothesis and the more significant the result.
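
For reference, the test can also be run directly in Python with scipy.stats.wilcoxon (a reasonably recent SciPy is assumed for the alternative argument); the arrays below are placeholders for the 30 per-run errors of HPSO and of one competing algorithm, and the one-sided alternative is that HPSO's paired errors are smaller.

```python
import numpy as np
from scipy.stats import wilcoxon

# placeholder data: 30 per-run errors of HPSO and of one competing algorithm
rng = np.random.default_rng(0)
hpso_err = rng.normal(6.8, 0.3, 30)
other_err = rng.normal(7.2, 0.3, 30)

# one-sided Wilcoxon signed-rank test; H1: HPSO's paired errors are smaller
stat, p_value = wilcoxon(hpso_err, other_err, alternative="less")
print(f"statistic = {stat:.1f}, p = {p_value:.4g}")  # reject H0 when p < 0.05
```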

Table 7 Results of the Wilcoxon signed ranks test

In Table 7, when comparing HPSO with CLPSO, we set the null hypothesis H0 as "HPSO is better than CLPSO" and the alternative hypothesis H1 as "HPSO is not better than CLPSO". When comparing HPSO with each of the other algorithms, the hypotheses are reversed: the null hypothesis H0 states that HPSO is not better than the other algorithm, and the alternative hypothesis H1 states that HPSO outperforms it. Therefore, a one-sided test is used. In the analysis with the SPSS software, the one-sided significance value is used, with 0.05 chosen as the significance level in this study. According to Table 7, in the comparison of HPSO and CLPSO, the p value is far less than 0.05, so we reject H0 and accept H1 that CLPSO outperforms HPSO. Comparing HPSO with SaDE, the p value is higher than 0.05, so we accept H0, suggesting that HPSO is not better than SaDE. In the comparisons of HPSO with the other 11 algorithms, the p value is always far less than 0.05, so we accept H1 that HPSO is better than these 11 algorithms, and the differences are statistically significant. HPSO is thus shown to be very effective.

5 Conclusion

In this paper, we have developed a new hybrid PSO (HPSO) heuristic algorithm to estimate the parameters of the Muskingum model. HPSO has better processing capability than GAs and is much easier to implement. We have conducted extensive experiments to compare the performance of the proposed algorithm with other leading estimation methods. Compared with conventional methods, HPSO has the advantage of not requiring assumptions about the initial values of the model parameters. Furthermore, the results demonstrate that HPSO achieves higher accuracy and faster convergence in estimating the Muskingum model parameters, which leads to accurate predictions of outflow and guarantees the effectiveness of outflow forecasting. It is worth mentioning that no derivative is required by the HPSO method; moreover, HPSO produces better solutions by making full use of the advantages of both PSO and NMSM. In summary, HPSO is an effective and feasible parameter estimation method for the Muskingum model, and it can be widely applied in the fields of hydrology and hydraulics.