
1 Introduction

HPSO-MFO combines the best characteristics of both the Particle Swarm Optimization [1] and Moth-Flame Optimizer [2] algorithms. The results show that HPSO-MFO converges faster to a comparatively optimal solution for both unconstrained and constrained functions.

Population-based algorithms built on randomization rely on two main phases to obtain good results: exploration (of the unknown search space) and exploitation (of the best solution). In HPSO-MFO, MFO is applied for exploration because its logarithmic spiral path covers a large uncertain search space with little computational time, exploring possible solutions and driving the particles toward the optimum. The well-known PSO algorithm then provides the ability to attain a near-optimal solution while avoiding local solutions.

Contemporary works on hybridization include PBIL-KH [3], which couples population-based incremental learning (PBIL) with KH and applies a type of elitism to memorize the krill with the best fitness while searching for the best solution; KH-QPSO [4], intended to enhance the local search ability and increase individual diversity in the population; HS/FA [5], in which the exploration of HS and the exploitation of FA are fully exerted; CKH [6], which introduces chaos theory into the KH optimization process with the aim of accelerating its global convergence speed; and HS/BA [7], CSKH [8], DEKH [9], HS/CS [10], and HSBBO [11], which are used to speed up convergence, making these approaches feasible for a wider range of real-world applications.

A recent trend in optimization is to improve the performance of meta-heuristic algorithms [12] by integrating them with chaos theory, the Levy flight strategy, adaptive randomization techniques, evolutionary boundary handling schemes, and genetic operators such as crossover and mutation. Popular genetic operators are used in KH [13] to accelerate its global convergence speed, and an evolutionary constraint handling scheme is used in the Interior Search Algorithm (ISA) [14] to keep variables within their upper and lower limits.

The structure of the paper is as follows: the Introduction; a description of the participating algorithms; a comparative analysis of results for the unconstrained benchmark test problems and the constrained relay coordination problem; and finally the acknowledgement and the conclusions drawn from the results.

2 Standard PSO and Standard MFO

2.1 PSO (Particle Swarm Optimization)

The PSO (particle swarm optimization) algorithm was introduced by James Kennedy and Russell C. Eberhart in 1995 [1]. The algorithm is inspired by the simulated social behaviour of bird flocking and fish schooling. PSO uses two terms, Pbest (the particle's personal best position) and Gbest (the global best position). Position and velocity are updated over the course of the iterations by the following equations:

$$v_{ij}^{t + 1} = w\,v_{ij}^{t} + c_{1} R_{1} \left( P_{best}^{t} - X_{ij}^{t} \right) + c_{2} R_{2} \left( G_{best}^{t} - X_{ij}^{t} \right)$$
(1)
$$X_{ij}^{t + 1} = X_{ij}^{t} + v_{ij}^{t + 1}, \quad i = 1, 2, \ldots, \text{No. of particles}, \; j = 1, 2, \ldots, \text{No. of dimensions}$$
(2)

where,

$$w = w^{\max } - \frac{\left( w^{\max } - w^{\min } \right) \times \text{iteration}}{\text{maximum iteration}}$$
(3)

where wmax = 0.9 and wmin = 0.4.

\(v_{ij}^{t}\) and \(v_{ij}^{t + 1}\) are the velocities of the jth dimension of the ith particle at iterations t and t + 1, c1 and c2 are acceleration coefficients (usually c1 = c2 = 2), and R1 and R2 are random numbers in (0, 1).
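To make Eqs. (1)–(3) concrete, the following Python sketch performs one PSO iteration; the array shapes, function name, and default parameter values are illustrative assumptions and not part of the original formulation.

```python
import numpy as np

def pso_step(X, V, pbest, gbest, t, T, c1=2.0, c2=2.0, w_max=0.9, w_min=0.4):
    """One PSO iteration, Eqs. (1)-(3). X, V, pbest: (n_particles, n_dims); gbest: (n_dims,)."""
    w = w_max - (w_max - w_min) * t / T                          # linearly decreasing inertia weight, Eq. (3)
    R1, R2 = np.random.rand(*X.shape), np.random.rand(*X.shape)  # random numbers in (0, 1)
    V = w * V + c1 * R1 * (pbest - X) + c2 * R2 * (gbest - X)    # velocity update, Eq. (1)
    X = X + V                                                    # position update, Eq. (2)
    return X, V
```

In a complete run, pbest and gbest would be refreshed after evaluating the objective at the new positions.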

The flow chart of the PSO algorithm is shown in Fig. 1.

Fig. 1 Convergence characteristics of benchmark test functions

2.2 Moth-Flame Optimizer

The Moth-Flame Optimizer (MFO) was first introduced by Seyedali Mirjalili in 2015 [2]. MFO is a population-based meta-heuristic algorithm, defined as a three-tuple that approximates the global solution of optimization problems as follows:

$${\text{Moth}}\,{\text{Flame}}\,{\text{Optimizer}} = \left[ {{\text{I}},\;{\text{P}},\;{\text{T}}} \right],$$
(4)

Here I is the function that generates a random population of moths and the corresponding fitness values, P is the main function that moves the moths around the search space, and T is the function that returns true when the termination criterion is satisfied. Considering these points, a logarithmic spiral is defined for the MFO algorithm as follows:

$$S\left( {M_{i} ,F_{j} } \right) = D_{i} \,*\,e^{bt} \,\cos \left( {2\pi t} \right) + F_{j}$$
(5)

where \(D_{i}\) expresses the distance of the \(ith\) moth from the \(jth\) flame, b is a constant defining the shape of the logarithmic spiral, and t is a random number in [− 1, 1].

$$D_{i} = \left| {F_{j} - M_{i} } \right|$$
(6)

where \(M_{i}\) indicates the \(ith\) moth, \(F_{j}\) indicates the \(jth\) flame, and \(D_{i}\) expresses the distance of the \(ith\) moth from the \(jth\) flame.

The number of flames is adaptively decreased over the course of iterations. We use the following formula:

$$\text{no. of flames} = \text{round}\left( N - l \times \frac{N - 1}{T} \right)$$
(7)

where l indicates the current iteration number, N indicates the maximum number of flames, and T is the maximum number of iterations.
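A minimal Python sketch of the moth position update of Eqs. (5)–(7) follows; the spiral constant b = 1, the pairing of surplus moths with the last flame, and the array layout are assumptions made for illustration.

```python
import numpy as np

def mfo_step(moths, flames, l, T, b=1.0):
    """Move each moth along a logarithmic spiral around its flame, Eqs. (5)-(7).
    flames are assumed to be sorted by fitness (best first)."""
    N = flames.shape[0]
    n_flames = int(round(N - l * (N - 1) / T))        # adaptive flame count, Eq. (7)
    new_pos = np.empty_like(moths)
    for i in range(moths.shape[0]):
        j = min(i, n_flames - 1)                      # surplus moths spiral around the last flame
        D = np.abs(flames[j] - moths[i])              # distance to the flame, Eq. (6)
        t = np.random.uniform(-1, 1, size=moths.shape[1])
        new_pos[i] = D * np.exp(b * t) * np.cos(2 * np.pi * t) + flames[j]   # spiral move, Eq. (5)
    return new_pos
```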

2.3 The Hybrid PSO-MFO Algorithm

Hybrid PSO-MFO is a combination of the separate PSO and MFO algorithms. The drawback of PSO is its limited coverage of the search space when solving higher-order or complex design problems, owing to its constant inertia weight. This problem can be tackled with Hybrid PSO-MFO, which extracts the best characteristics of both PSO and MFO. The Moth-Flame Optimizer is used for the exploration phase because its logarithmic spiral function covers a broader area of the uncertain search space; since both algorithms are randomization techniques, the term "uncertain search space" is used throughout the computation, from the first to the last iteration. Exploration means the capability of the algorithm to try out a large number of possible solutions. The particle position responsible for finding the optimum of the complex nonlinear problem is replaced by the moth position, which plays the same role but is more efficient at moving the solution toward the optimum: MFO directs the particles toward the optimal value faster and reduces computational time. PSO, in turn, is well known for exploiting the best possible solution from the unknown search space. Combining these strengths (exploration with MFO and exploitation with PSO) helps obtain the best possible solution of the problem while avoiding local stagnation or local optima. Hybrid PSO-MFO thus merges the strength of PSO in exploitation with that of MFO in exploration toward the targeted optimum solution:

$$v_{ij}^{t + 1} = w\,v_{ij}^{t} + c_{1} R_{1} \left( Moth\_Pos^{t} - X_{ij}^{t} \right) + c_{2} R_{2} \left( G_{best}^{t} - X_{ij}^{t} \right)$$
(8)
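A sketch of the hybrid update of Eq. (8), in which the moth positions produced by the MFO spiral move take the place of Pbest in the PSO velocity update of Eq. (1); the names and default values are assumptions.

```python
import numpy as np

def hpso_mfo_step(X, V, moth_pos, gbest, t, T, c1=2.0, c2=2.0, w_max=0.9, w_min=0.4):
    """Hybrid PSO-MFO update, Eq. (8): moth positions from MFO replace Pbest of Eq. (1)."""
    w = w_max - (w_max - w_min) * t / T                          # inertia weight, Eq. (3)
    R1, R2 = np.random.rand(*X.shape), np.random.rand(*X.shape)
    V = w * V + c1 * R1 * (moth_pos - X) + c2 * R2 * (gbest - X) # Eq. (8)
    return X + V, V
```

In a complete run, the moth positions would be refreshed by the MFO spiral move of Eq. (5) before each velocity update, and Gbest by the best solution found so far.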

3 Simulation Results for Unconstrained Benchmark Test Functions

The unconstrained benchmark test functions are solved using the HPSO-MFO algorithm. Four benchmark test functions (F1–F4) are used to verify the HPSO-MFO algorithm in terms of exploration and exploitation. These test functions are shown in Table 1. The results are shown in Table 2; the HPSO-MFO algorithm gives more competitive results than the standard PSO and MFO algorithms. The convergence characteristics of HPSO-MFO are shown in Fig. 1. A population of 30 search agents and a maximum of 500 iterations are used for all unconstrained benchmark test functions.

Table 1 Unconstrained benchmark test functions
Table 2 Results for the unconstrained benchmark test functions

4 Overcurrent Relay Coordination with Common Configuration in Power Systems

Overcurrent relays are used for primary and backup protection in distribution power systems. To minimize the total operating time, the relays should be coordinated and set at optimum values [15, 16].

All relays used in this paper are identical and exhibit the normal IDMT (Inverse Definite Minimum Time) characteristic, expressed by the following equation:

$$t = \frac{0.14\,*\,(TMS)}{{PSM^{(0.02)} - 1}}$$
(9)

where t is the operating time of the relay, PSM is the plug setting multiplier, and TMS is the time multiplier setting.

$$PSM = \frac{{I_{relay} }}{PS}$$
(10)

where \(I_{relay}\) is the current seen by the relay and PS is the plug setting. For the linear problem, PSM is constant, so t reduces to

$$t = \alpha_{p} \,*\,(TMS)$$
(11)
$$\alpha_{p} = \frac{0.14}{{PSM^{(0.02)} - 1}}$$
(12)

The target is to minimize the objective function given by:

$$F_{\hbox{min} } = \sum\limits_{p = 1}^{n} {\alpha_{p} \,*\,(TMS)_{p} }$$
(13)
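The following Python sketch evaluates the objective of Eqs. (9)–(13); the PSM and TMS values in the example call are hypothetical placeholders, not data from the test system.

```python
import numpy as np

def alpha(psm):
    """Constant alpha_p of Eq. (12) for a relay with plug setting multiplier PSM."""
    return 0.14 / (psm ** 0.02 - 1.0)

def total_operating_time(tms, psm):
    """Objective of Eq. (13): sum of alpha_p * TMS_p over all relays."""
    tms, psm = np.asarray(tms, dtype=float), np.asarray(psm, dtype=float)
    return np.sum(alpha(psm) * tms)

# Illustrative call with hypothetical PSM values and TMS decision variables
print(total_operating_time(tms=[0.1, 0.2, 0.1], psm=[5.0, 4.0, 6.0]))
```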

The optimal results are given in Table 3. Figure 2 shows the convergence characteristics of overcurrent relay coordination for a parallel feeder fed from a single end. The constraints are taken from [15]. A population of 30 search agents and a maximum of 500 iterations are used to solve the overcurrent relay coordination problem.

Table 3 Values of TMS for parallel feeder system, fed from a single end
Fig. 2 Convergence characteristics of overcurrent relay coordination for parallel feeder, fed from a single end

Minimize

$$Z = 3.106X_{1} + 6.265X_{2} + 3.106X_{3} + 6.265X_{4} + 2.004X_{5}$$
(14)

subject to

$$6.265X_{4} - 3.106X_{2} \ge 0.2,$$
(15)
$$6.265X_{1} - 3.106X_{3} \ge 0.2,$$
(16)
$$4.341X_{1} - 2.004X_{5} \ge 0.2,$$
(17)
$$4.341X_{4} - 2.004X_{5} \ge 0.2,$$
(18)
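Since Eqs. (14)–(18) form a small linear program, the reported TMS values can be cross-checked with an off-the-shelf LP solver, as in the sketch below; the TMS bounds of 0.025–1.2 are an assumption, because the constraint data are taken from [15] and the variable limits are not restated here.

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([3.106, 6.265, 3.106, 6.265, 2.004])      # objective Z, Eq. (14)

# Coordination constraints, Eqs. (15)-(18), rewritten as A_ub @ x <= b_ub
A_ub = -np.array([
    [0.0,   -3.106,  0.0,    6.265,  0.0  ],   # 6.265*X4 - 3.106*X2 >= 0.2
    [6.265,  0.0,   -3.106,  0.0,    0.0  ],   # 6.265*X1 - 3.106*X3 >= 0.2
    [4.341,  0.0,    0.0,    0.0,   -2.004],   # 4.341*X1 - 2.004*X5 >= 0.2
    [0.0,    0.0,    0.0,    4.341, -2.004],   # 4.341*X4 - 2.004*X5 >= 0.2
])
b_ub = -0.2 * np.ones(4)

# Assumed TMS limits for every relay (not given in this section)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.025, 1.2)] * 5)
print(res.x, res.fun)
```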

5 Conclusions

The drawback of PSO is its limited coverage of the search space when solving higher-order or complex design problems, owing to its constant inertia weight. This problem can be tackled with Hybrid PSO-MFO, which extracts the best characteristics of both PSO and MFO. MFO is used for the exploration phase because its logarithmic spiral function covers a broader area of the uncertain search space; it directs the particles toward the optimal value faster and reduces computational time. HPSO-MFO is tested on four unconstrained problems and on one constrained overcurrent relay coordination problem. HPSO-MFO gives optimal results in most cases; although the results are inferior in a few cases, overall it demonstrates enhanced performance with respect to the original PSO and MFO.