1 Introduction

Axial flow pumps are major consumers of energy in various industries. This type of bladed pump is characterized by high flow rate and low head. Because of their broad use in agriculture, irrigation, and large-scale water projects, many researchers have paid attention to axial flow pump design [1], and it is essential to improve the efficiency of such equipment through design optimization. Optimization of an axial flow pump is inherently a multi-objective problem rather than the single-objective problem that has mostly been considered in the literature. Luo et al. [14] investigated a multi-objective optimum design of hydraulic turbine guide vanes based on the NSGA-II algorithm, minimizing the total pressure loss while maximizing the minimum pressure in the guide vane; for the optimized guide vanes, the loss was reduced and the cavitation performance was improved. Zhang et al. [20] presented a multi-objective shape optimization of a helico-axial multiphase pump impeller based on NSGA-II and an artificial neural network, maximizing the pressure rise and the pump efficiency. After optimization with the NSGA-II multi-objective genetic algorithm, five stages of optimized compression cells were manufactured and tested experimentally; both the pressure rise and the efficiency increased, indicating that the method is feasible.

NPSHr and efficiency are important objective functions of axial flow pumps that must be optimized simultaneously in a real-world, complex multi-objective optimization problem. These objective functions are either obtained from experiments or computed using time-consuming and costly CFD approaches, which cannot be embedded directly in an iterative optimization loop unless a simple but effective meta-model is constructed over the response surface from numerical or experimental data [18]. Therefore, this paper models and optimizes the pump parameters using GMDH-type neural networks and a modified multi-objective Particle Swarm Optimization in order to minimize the NPSHr and maximize the efficiency of the pump simultaneously.

In this paper, the pump NPSHr and efficiency are first investigated numerically using the commercial software ANSYS. Based on the numerical results relating the geometrical parameters to the two objective functions, GMDH neural networks are then applied in the commercial software DTREG to obtain polynomial models of NPSHr and efficiency. These simple polynomial models are finally used in a Pareto-based modified multi-objective PSO approach to find the best possible combinations of NPSHr and efficiency, known as the Pareto front.

2 Definition of Variables and CFD Simulation of Axial Flow Pump

2.1 Definition of Objective Functions

As presented in Sect. 1, both the NPSHr and the efficiency of an axial flow pump are important objective functions to be optimized simultaneously. The efficiency of an axial flow pump is defined by

$$ \upeta = \frac{{{\text{P}}_{\text{e}} }}{\text{P}} $$
(1)

where Pe is the useful power transferred from the pump to the liquid and P is the shaft power. Pe is given by

$$ {\text{P}}_{\text{e}} =\uprho{\text{gQH}} $$
(2)

where ρ is the fluid density, g is the gravitational acceleration, Q is the flow rate, and H is the pump head.

NPSHr denotes the required net positive suction head, which characterizes the cavitation behaviour of an axial flow pump. It is the energy in the liquid required to overcome the friction losses from the suction nozzle to the eye of the impeller without causing vaporization [15]. NPSHr varies with design, size, and operating conditions [2]. An increase in NPSHr makes cavitation more likely, which can reduce or stop the fluid flow and damage the pump. Following the pump design handbook, NPSHr can be calculated with the following formula:

$$ {\text{NPSHr}} = \frac{{{\text{P}}_{\text{in}} - {\text{P}}_{ \hbox{min} } }}{{\uprho{\text{g}}}} + \frac{{{\text{v}}_{0}^{2} }}{{2{\text{g}}}} $$
(3)

where Pin is the pressure at the inlet, Pmin is the minimum pressure over the whole blade surface, which can be obtained from post-processing of the ANSYS Fluent simulation [6], \( \uprho \) is the density of the fluid, and v0 is the inlet velocity.
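For illustration, Eqs. (1)–(3) can be evaluated directly. The following Python sketch implements them; the numerical operating values in the example calls are illustrative placeholders, not data from this study:

```python
# Sketch of Eqs. (1)-(3). The operating values in the example calls are
# illustrative placeholders, not measurements from this study.
RHO = 998.0   # water density, kg/m^3 (assumed)
G = 9.81      # gravitational acceleration, m/s^2

def efficiency(q, h, shaft_power):
    """Eqs. (1)-(2): eta = Pe / P with Pe = rho*g*Q*H."""
    p_e = RHO * G * q * h          # useful hydraulic power, W
    return p_e / shaft_power

def npshr(p_in, p_min, v0):
    """Eq. (3): required net positive suction head, in metres."""
    return (p_in - p_min) / (RHO * G) + v0 ** 2 / (2.0 * G)

print(efficiency(q=0.3, h=4.0, shaft_power=14_000.0))  # ~0.84
print(npshr(p_in=101_325.0, p_min=40_000.0, v0=3.0))   # ~6.72 m
```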

2.2 Definition of Design Variables

The design variables in this paper are the hub angle βh, the chord angle βc, the cascade solidity σc (σc = l/t), and the maximum blade thickness H. Two sections are defined on the blades, one at the hub and another at the shroud, as shown in Fig. 1.

Fig. 1
figure 1

Design variables of impeller blades

There are thus four design variables: βh, βc, σc, and H. Various designs can be generated and evaluated in ANSYS Fluent by changing these independent geometrical parameters within the ranges shown in Table 1. Meta-models can then be constructed using GMDH-type neural networks in the commercial software DTREG and used in the multi-objective Pareto-based design of the axial flow pump with the modified PSO method. In this way, 81 different CFD analyses have been performed.
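Since 81 = 3^4, the run count is consistent with a three-level full factorial design over the four variables. The sketch below generates such a design, assuming the variable bounds of Eq. (18); Table 1 is taken to span the same ranges:

```python
import itertools

import numpy as np

# Three-level full factorial over the four design variables: 3**4 = 81
# combinations, matching the 81 CFD runs reported. Bounds are taken
# from the constraints in Eq. (18).
levels = {
    "beta_h": np.linspace(36.0, 54.0, 3),    # hub angle, deg
    "beta_c": np.linspace(21.0, 25.0, 3),    # chord angle, deg
    "sigma_c": np.linspace(0.75, 0.85, 3),   # cascade solidity l/t
    "H": np.linspace(7.0, 11.0, 3),          # max blade thickness
}

designs = list(itertools.product(*levels.values()))
print(len(designs))   # 81
print(designs[0])     # (36.0, 21.0, 0.75, 7.0)
```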

Table 1 Design variables and their range

2.3 Flow Analysis

For incompressible fluid flow, the continuity and momentum equations are given as

$$ \frac{{\partial {\text{V}}_{\text{i}} }}{{\partial {\text{x}}_{\text{i}} }} = 0 $$
(4)
$$ \frac{{{\text{DV}}_{\text{i}} }}{\text{Dt}} = - \frac{1}{\uprho}\frac{{\partial {\text{p}}}}{{\partial {\text{x}}_{\text{i}} }} +\upnu\frac{{\partial^{2} {\text{V}}_{\text{i}} }}{{\partial {\text{x}}_{\text{j}} \partial {\text{x}}_{\text{j}} }} - \frac{\partial }{{\partial {\text{x}}_{\text{j}} }}\overline{{{\text{u}}_{\text{i}} {\text{u}}_{\text{j}} }} $$
(5)

The physical model used in the solver is the Reynolds-Averaged Navier-Stokes (RANS) equations together with the k-ε turbulence model [17]. The k-ε equations are given as

$$ \frac{\text{Dk}}{\text{Dt}} = \frac{\partial }{{\partial {\text{x}}_{\text{j}} }}\left[ {\left( {{\text{C}}_{\text{k}} \frac{{{\text{k}}^{2} }}{\upvarepsilon} +\upnu} \right)\frac{{\partial {\text{k}}}}{{\partial {\text{x}}_{\text{i}} }}} \right] - \overline{{{\text{u}}_{\text{i}} {\text{u}}_{\text{j}} }} \frac{{\partial {\text{V}}_{\text{i}} }}{{\partial {\text{x}}_{\text{j}} }} $$
(6)
$$ \frac{{{\text{D}}\upvarepsilon}}{\text{Dt}} = \frac{\partial }{{\partial {\text{x}}_{\text{j}} }}\left[ {\left( {{\text{C}}_{\text{k}} \frac{{{\text{k}}^{2} }}{\upvarepsilon} +\upnu} \right)\frac{{\partial\upvarepsilon}}{{\partial {\text{x}}_{\text{j}} }}} \right] - {\text{C}}_{{\upvarepsilon1}} \frac{\upvarepsilon}{\text{k}}\overline{{{\text{u}}_{\text{i}} {\text{u}}_{\text{j}} }} \frac{{\partial {\text{V}}_{\text{i}} }}{{\partial {\text{x}}_{\text{j}} }} - {\text{C}}_{{\upvarepsilon2}} \frac{{\upvarepsilon^{2} }}{\text{k}} $$
(7)

For grid generation, a mixture of tetrahedral and hexahedral elements was used in ANSYS Gambit 2.4.6. The pump and the region around the impeller were meshed with tetrahedral elements, while the other areas were filled with hexahedral elements [9].

The boundary conditions are as follows: a no-slip condition was applied on all walls, a mass flow rate was prescribed at the pump inlet, and a static pressure boundary condition was used at the outlet. The simulation was continued until the solution converged with a total residual of less than 0.0001 [8].

The operating conditions are shown in Table 2.

Table 2 Operating conditions in simulation

The results of the numerical simulations using ANSYS Fluent are shown in Table 3; a path line plot and a total pressure contour from the simulation are shown in Figs. 2 and 3. In total, 81 CFD simulation results are used to build the response surfaces of both the efficiency and the NPSHr using GMDH-type neural networks. These meta-models are then used for the Pareto-based multi-objective optimization of the axial flow pump with the modified PSO method.

Table 3 Numerical results of CFD simulation
Fig. 2
figure 2

Path line of CFD simulation

Fig. 3
figure 3

Total pressure contour of CFD simulation

3 Meta-models Building Using GMDH-Type Neural Network

The Group Method of Data Handling (GMDH) polynomial neural network is a self-organizing approach in which models of gradually increasing complexity are generated and evaluated by their performance on a set of multi-input, single-output data pairs [12]. GMDH networks originated in 1968 with Prof. Alexey G. Ivakhnenko, who was working at the time on better prediction of fish populations in rivers at the Institute of Cybernetics in Kyiv, Ukraine. The algorithm can model a complex system without specific prior knowledge of the system. The main idea of GMDH is to build an analytical function in a feed-forward network based on a quadratic node transfer function [5] whose coefficients are obtained by regression.

3.1 Structure of a GMDH Network

Self-organizing means that the connections between neurons in the network are not fixed but are selected during training to optimize the network. The number of layers in the network is also selected automatically to produce maximum accuracy without overfitting.

As shown in Fig. 4, the first layer (at the left) contains one input for each predictor variable. Each neuron in the second layer draws its inputs from two of the input variables. The neurons in the third layer draw their inputs from two of the neurons in the previous layer, and this continues through each layer. The final layer (at the right) draws its two inputs from the previous layer and produces a single value, which is the output of the network [16].

Fig. 4
figure 4

The structure of a basic GMDH network

The formal identification problem is to find a function f that can be used in place of the actual function fa so as to predict, for a given input vector X = (x1, x2, x3, …, xn), an output y as close as possible to the actual output ya. For the given M observations of multi-input, single-output data pairs, we have

$$ {\text{y}}_{\text{ai}} = {\text{f}}_{\text{a}} ({\text{x}}_{\text{i1}} ,{\text{x}}_{\text{i2}} ,{\text{x}}_{\text{i3}} , \ldots ,{\text{x}}_{\text{in}} )\text{ }({\text{i}} = 1,2, \ldots ,M) $$
(8)

For any given input vector X = (x1, x2, x3, …, xn), the network output is

$$ {\text{y}}_{\text{i}} = {\text{f}}({\text{x}}_{\text{i1}} ,{\text{x}}_{\text{i2}} ,{\text{x}}_{\text{i3}} , \ldots ,{\text{x}}_{\text{in}} )\text{ }({\text{i}} = 1,2, \ldots ,{\text{M}}) $$
(9)

To train the GMDH neural network, the sum of squared differences between the actual outputs and the predicted ones is minimized:

$$ \sum\limits_{{{\text{i}} = 1}}^{\text{M}} {\left[ {{\text{f}}\left( {{\text{x}}_{{{\text{i1}},}} {\text{x}}_{{{\text{i2}},}} {\text{x}}_{\text{i3}} , \ldots ,{\text{x}}_{\text{in}} } \right) - {\text{y}}_{\text{ai}} } \right]}^{2} \to { \hbox{min} } $$
(10)

The most popular base function used in GMDH is the Volterra functional series in the form of

$$ {\text{y}}_{\text{a}} = {\text{a}}_{0} + \sum\limits_{{{\text{i}} = 1}}^{\text{n}} {{\text{a}}_{\text{i}} {\text{x}}_{\text{i}} } + \sum\limits_{{{\text{i}} = 1}}^{\text{n}} {\sum\limits_{{{\text{j}} = 1}}^{\text{n}} {{\text{a}}_{\text{ij}} {\text{x}}_{\text{i}} {\text{x}}_{\text{j}} } } + \sum\limits_{{{\text{i}} = 1}}^{\text{n}} {\sum\limits_{{{\text{j}} = 1}}^{\text{n}} {\sum\limits_{{{\text{k}} = 1}}^{\text{n}} {{\text{a}}_{\text{ijk}} {\text{x}}_{\text{i}} {\text{x}}_{\text{j}} {\text{x}}_{\text{k}} } } } + \ldots $$
(11)

where ya is known as the Kolmogorov-Gabor polynomial [5]. GMDH uses complete quadratic polynomials of two variables as the transfer functions in the neurons. These polynomials can be represented in the form shown below:

$$ {\text{y}} = {\text{a}}_{0} + {\text{a}}_{1} {\text{x}}_{\text{i}} + {\text{a}}_{2} {\text{x}}_{\text{j}} + {\text{a}}_{3} {\text{x}}_{\text{i}} {\text{x}}_{\text{j}} + {\text{a}}_{4} {\text{x}}_{\text{i}}^{2} + {\text{a}}_{5} {\text{x}}_{\text{j}}^{2} $$
(12)
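The coefficients a0–a5 of Eq. (12) for each neuron follow from ordinary least squares over the training pairs. DTREG performs this fit internally; the following is a minimal sketch of the idea on toy data (not the pump dataset):

```python
import numpy as np

def fit_gmdh_neuron(xi, xj, y):
    """Least-squares fit of the quadratic transfer function of Eq. (12):
    y = a0 + a1*xi + a2*xj + a3*xi*xj + a4*xi**2 + a5*xj**2."""
    A = np.column_stack([np.ones_like(xi), xi, xj, xi * xj, xi**2, xj**2])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

# Toy data: recover known coefficients (not the pump dataset).
rng = np.random.default_rng(0)
xi, xj = rng.uniform(0, 1, 200), rng.uniform(0, 1, 200)
y = 1.0 + 2.0 * xi - 3.0 * xj + 0.5 * xi * xj + 0.1 * xi**2
print(fit_gmdh_neuron(xi, xj, y).round(3))  # ~ [1, 2, -3, 0.5, 0.1, 0]
```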

3.2 Meta-models Building in DTREG

The input and output data used in the modelling involve two data tables obtained from the CFD simulations. Each table contains the four input variables βh, βc, σc, and H (as shown in Fig. 1) together with one of the two outputs, the efficiency η or the NPSHr. The 81 patterns are used to train and test the GMDH neural networks. The corresponding polynomial representation for NPSHr is as follows:

$$ {\text{N}}(3) = -0.6432 - 0.015\,\beta_{\text{h}} + 0.17\,\beta_{\text{c}} + 0.0325\,\beta_{\text{h}}^{2} - 0.0274\,\beta_{\text{c}}^{2} + 2.3\times 10^{-6}\,\beta_{\text{h}}\beta_{\text{c}} $$
$$ {\text{N}}(1) = 3.524 - 0.134\,\beta_{\text{h}} + 0.0245\,\sigma_{\text{c}} + 0.0418\,\beta_{\text{h}}^{2} + 2.056\times 10^{-12}\,\sigma_{\text{c}}^{2} + 2.01\times 10^{-5}\,\beta_{\text{h}}\sigma_{\text{c}} $$
$$ {\text{N}}(7) = 5.643 - 0.2134\,{\text{H}} - 0.02192\,\beta_{\text{h}} + 0.0021\,{\text{H}}^{2} + 0.0004\,\beta_{\text{h}}^{2} + 2.45\times 10^{-5}\,{\text{H}}\beta_{\text{h}} $$
$$ {\text{N}}(4) = -1.89 + 0.213\,\beta_{\text{c}} + 0.015\,\sigma_{\text{c}} - 0.004\,\beta_{\text{c}}^{2} + 1.19\times 10^{-11}\,\sigma_{\text{c}}^{2} + 1.24\times 10^{-5}\,\beta_{\text{c}}\sigma_{\text{c}} $$
$$ {\text{N}}(9) = 6.941 - 3.7845\,{\text{N}}(3) + 0.5418\,{\text{N}}(1) + 0.4723\,{\text{N}}(3)^{2} + 0.0245\,{\text{N}}(1)^{2} + 0.0475\,{\text{N}}(3){\text{N}}(1) $$
$$ {\text{N}}(6) = 6.147 - 2.1873\,{\text{N}}(7) - 0.7122\,{\text{N}}(4) + 0.218\,{\text{N}}(7)^{2} + 0.0947\,{\text{N}}(4)^{2} + 0.4234\,{\text{N}}(7){\text{N}}(4) $$
$$ {\text{NPSHr}} = -0.412 - 0.062\,{\text{N}}(9) + 1.148\,{\text{N}}(6) + 0.0812\,{\text{N}}(9)^{2} + 0.0412\,{\text{N}}(6)^{2} - 0.1271\,{\text{N}}(9){\text{N}}(6) $$

The corresponding polynomial representation for efficiency is as follows:

$$ {\text{N}}(4) = -0.5412 - 0.411\,\beta_{\text{h}} + 2.143\,\beta_{\text{c}} + 0.016\,\beta_{\text{h}}^{2} - 0.0124\,\beta_{\text{c}}^{2} + 0.00012\,\beta_{\text{h}}\beta_{\text{c}} $$
$$ {\text{N}}(6) = 16.895 + 1.2183\,{\text{H}} + 0.495\,\sigma_{\text{c}} - 0.008\,{\text{H}}^{2} - 0.0041\,\sigma_{\text{c}}^{2} + 0.0012\,{\text{H}}\sigma_{\text{c}} $$
$$ {\text{N}}(1) = -15.03 + 1.96\,\beta_{\text{c}} + 0.6181\,\sigma_{\text{c}} - 0.023\,\beta_{\text{c}}^{2} - 0.0041\,\sigma_{\text{c}}^{2} + 0.0012\,\beta_{\text{c}}\sigma_{\text{c}} $$
$$ {\text{N}}(7) = 34.107 + 1.254\,{\text{H}} - 0.49\,\beta_{\text{h}} - 0.0082\,{\text{H}}^{2} + 0.0092\,\beta_{\text{h}}^{2} + 0.00059\,{\text{H}}\beta_{\text{h}} $$
$$ {\text{N}}(9) = 54.324 - 0.5912\,{\text{N}}(4) - 1.0126\,{\text{N}}(6) + 0.00521\,{\text{N}}(4)^{2} + 0.0063\,{\text{N}}(6)^{2} + 0.0191\,{\text{N}}(4){\text{N}}(6) $$
$$ {\text{N}}(3) = 61.801 - 0.5012\,{\text{N}}(1) - 1.0125\,{\text{N}}(7) + 0.0043\,{\text{N}}(1)^{2} + 0.00751\,{\text{N}}(7)^{2} + 0.014\,{\text{N}}(1){\text{N}}(7) $$
$$ \upeta = 0.7241 - 3.0174\,{\text{N}}(9) + 5.0124\,{\text{N}}(3) - 2.3104\,{\text{N}}(9)^{2} - 2.5983\,{\text{N}}(3)^{2} + 5.362\,{\text{N}}(9){\text{N}}(3) $$
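Once transcribed, the two networks reduce to nested quadratic evaluations. The sketch below evaluates them directly, assuming the coefficients as printed above (including the βh symbol in the linear term of N(7) in the NPSHr model, inferred from the fact that the node pairs H with βh); each node takes its two inputs in the order printed:

```python
def node(u, v, a0, a1, a2, a3, a4, a5):
    """One GMDH node, with terms in the order printed above:
    a0 + a1*u + a2*v + a3*u**2 + a4*v**2 + a5*u*v."""
    return a0 + a1*u + a2*v + a3*u**2 + a4*v**2 + a5*u*v

def npshr_model(bh, bc, sc, H):
    """NPSHr metamodel, transcribed from the polynomials above."""
    n3 = node(bh, bc, -0.6432, -0.015, 0.17, 0.0325, -0.0274, 2.3e-6)
    n1 = node(bh, sc, 3.524, -0.134, 0.0245, 0.0418, 2.056e-12, 2.01e-5)
    n7 = node(H, bh, 5.643, -0.2134, -0.02192, 0.0021, 0.0004, 2.45e-5)
    n4 = node(bc, sc, -1.89, 0.213, 0.015, -0.004, 1.19e-11, 1.24e-5)
    n9 = node(n3, n1, 6.941, -3.7845, 0.5418, 0.4723, 0.0245, 0.0475)
    n6 = node(n7, n4, 6.147, -2.1873, -0.7122, 0.218, 0.0947, 0.4234)
    return node(n9, n6, -0.412, -0.062, 1.148, 0.0812, 0.0412, -0.1271)

def eta_model(bh, bc, sc, H):
    """Efficiency metamodel, transcribed from the polynomials above."""
    n4 = node(bh, bc, -0.5412, -0.411, 2.143, 0.016, -0.0124, 0.00012)
    n6 = node(H, sc, 16.895, 1.2183, 0.495, -0.008, -0.0041, 0.0012)
    n1 = node(bc, sc, -15.03, 1.96, 0.6181, -0.023, -0.0041, 0.0012)
    n7 = node(H, bh, 34.107, 1.254, -0.49, -0.0082, 0.0092, 0.00059)
    n9 = node(n4, n6, 54.324, -0.5912, -1.0126, 0.00521, 0.0063, 0.0191)
    n3 = node(n1, n7, 61.801, -0.5012, -1.0125, 0.0043, 0.00751, 0.014)
    return node(n9, n3, 0.7241, -3.0174, 5.0124, -2.3104, -2.5983, 5.362)
```

Such closed-form models evaluate essentially instantaneously, which is what makes the iterative PSO search practical compared with calling the CFD solver inside the loop.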

4 Apply Multi-objective Optimization by Using Modified PSO Method

Particle Swarm Optimization (PSO) is a computational method that optimizes a problem by iteratively trying to improve candidate solutions with respect to a given measure of quality. PSO is a population-based search algorithm originally developed by Kennedy and Eberhart [10]. It was first intended for simulating social behaviour, as a stylized representation of the movement of a bird flock or fish school. Although originally adopted for balancing weights in neural networks [4], PSO quickly became a popular global optimizer. Studies in the literature have extended PSO to multi-objective problems [3]. A dynamic neighbourhood particle swarm optimization (DNPSO) for multi-objective problems was presented in [7]; in that study, the particles of the swarm find new neighbours in each generation, and the best local particle in the new neighbourhood is chosen as gbest for each particle. A modified DNPSO finds the nearest n particles as neighbours of the current particle based on the distances between the current particle and the others [19].

Like other population-based algorithms, PSO proceeds according to the following update equations:

$$ {\text{v}}_{\text{ij}}^{{{\text{t}} + 1}} = {\text{wv}}_{\text{ij}}^{\text{t}} + {\text{c}}_{1} {\text{r}}_{1} ({\text{pbest}}_{\text{ij}}^{\text{t}} - {\text{x}}_{\text{ij}}^{\text{t}} ) + {\text{c}}_{2} {\text{r}}_{2} ({\text{gbest}}_{\text{ij}}^{\text{t}} - {\text{x}}_{\text{ij}}^{\text{t}} ) $$
(13)
$$ {\text{x}}_{\text{ij}}^{{{\text{t}} + 1}} = {\text{x}}_{\text{ij}}^{\text{t}} + {\text{v}}_{\text{ij}}^{{{\text{t}} + 1}} $$
(14)
$$ \begin{gathered} {\text{i}} = 1, \, 2, \ldots ,\;{\text{N}} \hfill \\ {\text{j}} = 1, \, 2, \ldots ,\,{\text{n}} \hfill \\ \end{gathered} $$

where x is the particle's current position, v is the particle's current velocity, t is the iteration (generation) index, w is the inertia weight, c1 and c2 are acceleration constants, r1 and r2 are random values in the range [0, 1], pbest is the personal best position of a given particle, and gbest is the position of the best particle of the entire swarm.
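A minimal sketch of this update, vectorized over a swarm of N particles in n dimensions (the values w = 0.7 and c1 = c2 = 1.5 are common defaults, not parameters reported in this paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One PSO update, Eqs. (13)-(14).
    x, v, pbest: arrays of shape (N, n); gbest: array of shape (n,)."""
    r1 = rng.uniform(size=x.shape)
    r2 = rng.uniform(size=x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new
```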

The algorithm developed by Kennedy and Eberhart was inspired by insect swarms (or fish schools and bird flocks) and their coordinated movements. It relies on the information carried by pbest and gbest but considers only the experience of these two particles, ignoring communication with the other particles. An improved particle swarm optimization method (IPSO) was therefore developed, with the equations shown below:

$$ {\text{v}}_{\text{ij}}^{\text{t + 1}} = {\text{wv}}_{\text{ij}}^{\text{t}} + {\text{c}}_{1} {\text{r}}_{1} ({\text{pbest}}_{\text{ij}}^{\text{t}} - x_{\text{ij}}^{\text{t}} ) + {\text{c}}_{2} {\text{r}}_{2} ({\text{gbest}}_{\text{ij}}^{\text{t}} - {\text{x}}_{\text{ij}}^{\text{t}} ) + {\text{c}}_{3} {\text{r}}_{3} {\text{CR}} $$
(15)
$$ {\text{CR}} = \left\{ {\begin{array}{*{20}c} {{\text{pbest}}_{\text{kj}}^{\text{t}} - {\text{x}}_{\text{ij}}^{\text{t}} } & {\begin{array}{*{20}c} {\text{if}} & {{\text{ran}} < {\text{cp}}} \\ \end{array} } \\ 0 & \quad{\text{other}} \\ \end{array} } \right. $$
(16)

where k denotes the kth particle with k ≠ i, cp is the communication probability, and ran is a random value in the range [0, 1]. In IPSO the communication between particles is taken into account and supplies additional information for the search for optimal solutions. Function tests indicated that IPSO improves the ability to find optimal solutions [13]. However, later in the evolution process, as swarm diversity disappears, the optimization is more easily trapped in a local optimum. To address this disadvantage of IPSO, a modified PSO (MPSO) was developed, inspired by the bacterial foraging algorithm.

Passino originally proposed the Bacterial Foraging Algorithm [11] in 2002. It abstracts and simulates the foraging of bacteria in the human intestinal tract. Three steps, chemotaxis, reproduction, and elimination-dispersal, guide the bacteria to nutrient-rich areas.

Elimination-dispersal happens when a bacterium receives an external stimulus and moves in the opposite direction. Chemotaxis is the process by which bacteria gather in nutrient-rich areas. For example, an E. coli bacterium can move in two different ways: it can run (swim for a period of time) or it can tumble, and it alternates between these two modes of operation for its entire lifetime (i.e., it is rare that the flagella stop rotating).

As introduced above, if the flagella rotate clockwise, each flagellum pulls on the cell, and the net effect is that each flagellum operates relatively independently of the others; the bacterium then has no set direction of movement and there is little displacement.

Bacteria exhibit the behaviour of climbing nutrient gradients. The motion patterns that a bacterium generates in the presence of chemical attractants and repellents are called chemotaxis. For E. coli, encounters with serine or aspartate result in attractant responses, whereas repellent responses result from the metal ions Ni and Co, changes in pH, amino acids like leucine, and organic acids like acetate [11]. Through this behaviour of seeking advantages and avoiding disadvantages, bacteria can find better food sources, increase their chance of survival, and enhance their adaptability to varying environments. When a harmful stimulus is detected during the MPSO process, the chemotactic operation makes it easy to escape from a local optimum, as shown below:

$$ {\text{v}}_{\text{ij}}^{{{\text{t}} + 1}} = {\text{wv}}_{\text{ij}}^{\text{t}} - {\text{c}}_{1} {\text{r}}_{1} ({\text{pbest}}_{\text{ij}}^{\text{t}} - {\text{x}}_{\text{ij}}^{\text{t}} ) - {\text{c}}_{2} {\text{r}}_{2} ({\text{gbest}}_{\text{ij}}^{\text{t}} - x_{\text{ij}}^{\text{t}} ) - {\text{c}}_{3} {\text{r}}_{3} {\text{CR}} $$
(17)
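A sketch of the MPSO velocity update combining Eqs. (15)–(17); how a "harmful stimulus" is detected (e.g., stagnation of gbest) is not specified in the text, so it enters here as a boolean flag, and the parameter values are placeholders rather than tuned values from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def mpso_velocity(x_i, v_i, pbest_i, pbest_k, gbest, harmful,
                  w=0.7, c1=1.5, c2=1.5, c3=1.0, cp=0.5):
    """Velocity update of Eqs. (15)-(17) for one particle i.
    pbest_k is the personal best of a randomly chosen particle k != i;
    'harmful' selects the sign-reversed chemotactic update of Eq. (17)."""
    r1, r2, r3 = rng.uniform(size=3)
    cr = (pbest_k - x_i) if rng.uniform() < cp else 0.0   # Eq. (16)
    social = (c1 * r1 * (pbest_i - x_i)
              + c2 * r2 * (gbest - x_i)
              + c3 * r3 * cr)
    return w * v_i - social if harmful else w * v_i + social
```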

The flow chart of MPSO is shown in Fig. 5.

Fig. 5
figure 5

Flow chart of MPSO

The polynomial neural network models obtained in Sect. 3 are now employed in a multi-objective optimization procedure using the modified PSO method in order to investigate the optimal performance of the axial flow pump. The two conflicting objectives, efficiency η and NPSHr, are optimized simultaneously with respect to the design variables βh, βc, σc, and H.

The design optimization problem, with its objective functions and constraints, is formulated as follows:

$$ \left\{ \begin{aligned} &\text{Maximize} && \upeta = {\text{f}}_{1}(\beta_{\text{h}}, \beta_{\text{c}}, \sigma_{\text{c}}, {\text{H}}) \\ &\text{Minimize} && {\text{NPSHr}} = {\text{f}}_{2}(\beta_{\text{h}}, \beta_{\text{c}}, \sigma_{\text{c}}, {\text{H}}) \\ &\text{Subject to:} && 36^{\circ} \le \beta_{\text{h}} \le 54^{\circ} \\ & && 21^{\circ} \le \beta_{\text{c}} \le 25^{\circ} \\ & && 0.75 \le \sigma_{\text{c}} \le 0.85 \\ & && 7 \le {\text{H}} \le 11 \end{aligned} \right. $$
(18)
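Since efficiency is maximized and NPSHr minimized, Pareto dominance can be checked on the pair (NPSHr, −η), treating both as minimization objectives. A minimal filter for extracting the non-dominated set from the swarm's evaluated designs (the example values are made up):

```python
def dominates(q, p):
    """q dominates p (both minimised) if q is no worse in every
    objective and strictly better in at least one."""
    return (all(qi <= pi for qi, pi in zip(q, p))
            and any(qi < pi for qi, pi in zip(q, p)))

def pareto_front(points):
    """Keep the non-dominated points; each point is (NPSHr, -eta)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Example with made-up objective values:
pts = [(5.0, -0.80), (4.5, -0.78), (4.8, -0.82), (5.2, -0.79)]
print(pareto_front(pts))  # [(4.5, -0.78), (4.8, -0.82)]
```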

The obtained non-dominated optimum solutions form the Pareto front shown in Fig. 6. These points demonstrate the trade-off between the objective functions NPSHr and efficiency. All the design points on the Pareto front are non-dominated and could be chosen as the optimum pump, but choosing a better value for one objective function causes a worse value for the other. The solutions shown in Fig. 6 are the best possible design points; any other choice of decision variables would give worse values of at least one objective.

Fig. 6
figure 6

Pareto front of NPSHr and efficiency

We use the mapping method to find a trade-off optimum design point that compromises between the two objective functions:

$$ Mapped{\text{ value}} = \frac{{f - f_{\hbox{min} } }}{{f_{\hbox{max} } - f_{\hbox{min} } }} $$
(19)

In the mapping method, the values of the objective functions of all non-dominated points are mapped into the interval [0, 1]. Summing these mapped values for each non-dominated point, the trade-off point is simply the one with the minimum sum. Consequently, the optimum design point A is the trade-off point obtained from the mapping method, as sketched below.
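A sketch of this selection rule over the Pareto front, applying Eq. (19) per objective and picking the point with the minimal sum of mapped values (the example points are made up):

```python
import numpy as np

def trade_off_index(front):
    """Mapping method of Eq. (19): normalise each objective over the
    front to [0, 1] and return the index of the point with the smallest
    sum of mapped values. 'front' has shape (n_points, 2) with columns
    (NPSHr, -eta)."""
    f = np.asarray(front, dtype=float)
    mapped = (f - f.min(axis=0)) / (f.max(axis=0) - f.min(axis=0))
    return int(mapped.sum(axis=1).argmin())

# Example with made-up Pareto points:
front = [(4.5, -0.78), (4.8, -0.82), (4.6, -0.80)]
print(trade_off_index(front))  # index of the trade-off point (here 2)
```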

Table 4 Comparison of the results for traditional method and MPSO method

Table 4 compares the obtained best compromise solution with the traditional solution. In this comparison, NPSHr was decreased by 11.68 % and efficiency was increased by 4.24 % simultaneously.

5 Conclusions

In this paper, a modified Particle Swarm Optimization (PSO) approach was used for the Pareto-based multi-objective optimization of an axial flow pump. Two polynomial relations, one for NPSHr and one for efficiency, were found by GMDH-type neural networks using experimentally validated CFD simulations. The obtained polynomial functions were used in a modified PSO optimization process to obtain the Pareto front of NPSHr and efficiency. After the mapping method was applied, an optimal solution for the axial flow pump impeller was obtained: NPSHr was decreased by 11.68 % and efficiency was increased by 4.24 % simultaneously. This shows that the method is feasible and can be applied in impeller design.