1 Introduction

Many real-world problems require the optimization of more than one objective. In some cases, one objective must be minimized while another must be maximized (Sherinov et al. 2011). In this paper, we propose a multi-objective version of the imperialistic competitive algorithm (ICA) (Atashpaz-Gargari and Lucas 2007) for solving global multi-objective optimization problems based on imperialistic competition. The ICA is a global optimization strategy in which the initial population consists of two types of countries, i.e., imperialists and colonies. Imperialistic competition is the most important part of the algorithm and causes the colonies to converge to the global minimum of the objective function. Moving the colonies toward their relevant imperialist (assimilation) and generating new countries in each empire (revolution) are the other important parts of the algorithm.

A large number of multi-objective optimization algorithms have been proposed. Among these are multi-objective evolutionary algorithms (MOEAs) (Deb 2001; Fonseca and Fleming 1993; Horn et al. 1994; Srinivas and Deb 1995; Zitzler and Thiele 1998). One priority of multi-objective optimization algorithms is to find several Pareto-optimal solutions simultaneously; another is to optimize conflicting objectives, for example when one must be minimized while another must be maximized. Consequently, multi-objective optimization algorithms have gained popularity in the last two decades. The aim of this study is therefore to develop a multi-objective optimization algorithm based on imperialistic competition, namely the multi-objective imperialistic competitive algorithm (MOICA), which uses a population of countries of two types: imperialists and colonies. In every empire, there is an imperialist, which is considered the local best for that empire. Accordingly, the MOICA generates a local non-dominated solution (LNDS) set for each empire. It then computes the global non-dominated solution (GNDS) set from the LNDS sets of all empires, which constitutes the final set of non-dominated solutions.

In terms of search and competition, the ICA is similar to particle swarm optimization (PSO), which simulates the social behaviors of animals, such as bird flocking (Kennedy and Eberhart 1995). Additionally, the ICA has a local best in each empire (i.e., the imperialist country) and a global best, which is the strongest of the imperialists. By beginning with several empires, each with several colonies, competition can occur between empires. This competition leads to the development of powerful empires and the collapse of weaker empires (Atashpaz-Gargari and Lucas 2007). The ICA is one of the numerous algorithms used for solving optimization problems. Among these are search heuristics, such as genetic algorithms (GAs), which belong to the class of evolutionary algorithms. These algorithms generate solutions to optimization problems by imitating the process of natural evolution (Mitchell 1999). Another example is ant colony optimization (ACO), which is inspired by the behavior of ants foraging for food (Dorigo and Blum 2005). In contrast, simulated annealing is an example of a global optimization algorithm. It is a generic probabilistic metaheuristic that locates a good approximation to the global optimum of a given function in a large search space (Kirkpatrick et al. 1983).

Multiple applications of the ICA exist, particularly in engineering. In computer engineering, the ICA is applied to data clustering and image processing for solving problems such as skin color detection and template matching (Seyedmohsen and Abdullah 2014). For example, Duan et al. (2010) presented a template matching method based on a chaotic ICA that used a correlation function. They prevented the problem of falling into a local best solution by introducing chaotic behavior into the ICA, which improved its global convergence. Another example of the application of the ICA is the integrated product mix-outsourcing optimization problem (Nazari-Shirkouhi et al. 2010). Vedadi et al. (2015) applied the ICA in electrical engineering by presenting an ICA-based maximum power point tracking algorithm to rapidly and precisely find the global maximum power point of a power-voltage string under partial shading conditions. Goudarzi et al. (2013) used the ICA as a heuristic technique for optimizing the location of capacitors in radial distribution systems. Another example of the application of the ICA is in geoscience, where it is used to locate the critical failure surface and compute the safety factor in slope stability analysis based on the limit equilibrium approach (Kashani et al. 2014). Jordehi (2016) proposed a solution to flexible AC transmission systems (FACTS) allocation problems so that low overload and voltage deviation values result from line outage contingencies and demand growth. In that study, thyristor-controlled phase shifting transformers and thyristor-controlled series compensators were used as FACTS devices.

Variants of the ICA have been presented in the literature. Niknam et al. (2011) proposed an efficient hybrid algorithm based on the modified ICA (MICA) and k-means, referred to as the K-MICA, to optimally cluster n objects into k clusters. This approach was used to overcome local optima obstacles. The K-MICA was tested for robustness and compared favorably to several algorithms, including ACO, PSO, GA, tabu search (TS), honey bee mating optimization, and k-means. Razmjooy et al. (2013) proposed a hybrid algorithm by combining the ICA and an artificial neural network to solve skin classification problems. The authors used a multilayer perceptron network to manage problem constraints and the ICA to search for high-quality and minimum-cost solutions. Ebrahimzadeh et al. (2012) proposed a novel hybrid intelligent method for recognizing the common types of control chart patterns. The proposed method included two primary modules: a clustering module and a classifier module. The authors used a combination of the MICA and the k-means algorithm in the former module to cluster input data. In addition, a mutation operator was introduced into the proposed algorithm by changing the assimilation process.

In this paper, a new multi-objective ICA (MOICA) is proposed. The MOICA uses the idea of imperialism by incorporating competition between empires. Every empire has a set of imperialists and a set of colonies. The primary idea in this algorithm is to maintain an LNDS set for every empire. Therefore, all imperialists in an empire are considered non-dominated solutions, whereas all colonies are considered dominated solutions. Moreover, in addition to the LNDS sets, there is one GNDS set, which is created from the LNDS sets of all empires. The two main operators of the proposed algorithm, i.e., assimilation and revolution, use the GNDS and LNDS sets, respectively, when updating colonies. Another significant feature of the proposed algorithm is that no special parameter is used for diversity preservation, which spares the algorithm the extra computation needed to maintain the spread of solutions. With the assimilation and revolution operators, the proposed algorithm produces good results that are comparable with those of the state-of-the-art algorithms considered in this study.

The remainder of this paper is organized as follows: Section 2 presents a brief description of a single-objective ICA. In Sect. 3, the proposed MOICA is discussed in detail. Thereafter, in Sect. 4, experimental results and simulations are presented by comparing the proposed algorithm with other multi-objective optimization algorithms. Lastly, in Sect. 5, the conclusions of this study are presented.

2 ICA review

The primary idea of the ICA is the competition that occurs between empires, because the aim of each empire is to possess more colonies. This competition, together with assimilation (moving colonies toward their relevant imperialist) and revolution (abrupt changes in sociopolitical characteristics), enables the algorithm to reach the global optimum of a cost function. During competition among empires, it is possible for a colony to become better than the imperialist of its empire. In this case, the ICA switches the positions of the imperialist and the colony; thus, the colony becomes the imperialist and the former imperialist becomes a colony. The power of each empire is inversely proportional to its cost function; therefore, the lower the cost of the empire, the more powerful it is (Atashpaz-Gargari and Lucas 2007). When an empire has lost all of its colonies, it becomes powerless and, as a result, it collapses and is terminated. Consequently, the number of empires gradually decreases until only one of the most powerful imperialist states remains. However, the ICA's termination criterion is reaching the user-specified number of iterations, regardless of the state of the empires; the algorithm continues even if only one empire remains, because there is no guarantee that the optimum solution has been found at that point.

Algorithm – ICA (Atashpaz-Gargari and Lucas 2007)

1. Select a few random points on a function and initialize empires.
2. Move colonies toward their relevant imperialist (i.e., assimilation).
3. Randomly replace a few colonies with newly generated colonies (i.e., revolution).
4. Compute the costs of each imperialist and all colonies.
5. If there is a colony in an empire that has a lower cost than that of the imperialist, then exchange their positions.
6. Compute the total cost of each empire from the power of its imperialist and colonies.
7. Select the weakest colony (the colony with the highest cost) from the weakest empire (the empire with the highest cost) and give it to the empire that is most likely to possess it, thereby engendering imperialistic competition.
8. Eliminate powerless empires.
9. If the termination condition is not satisfied, return to Step 2.

After random initialization of the population, the objective function is evaluated and the individuals in the population are assigned their cost values. The N individuals with the minimum costs are selected as imperialists. The remaining individuals become colonies, which are distributed among the imperialists in proportion to the imperialists' power.
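This initialization step can be sketched as follows, assuming a list-based population and the proportional colony split described above (the function name `create_empires` and the dictionary layout are illustrative, not from the paper):

```python
def create_empires(population, costs, n_imp):
    """Split a population into empires: the n_imp lowest-cost individuals
    become imperialists; the rest become colonies, distributed in
    proportion to each imperialist's relative power."""
    order = sorted(range(len(population)), key=lambda i: costs[i])
    imp_idx, col_idx = order[:n_imp], order[n_imp:]

    # Power of an imperialist: shift costs so the worst imperialist
    # has power ~0, then normalize to shares that sum to 1.
    max_cost = max(costs[i] for i in imp_idx)
    powers = [max_cost - costs[i] for i in imp_idx]
    total = sum(powers) or 1.0
    shares = [p / total for p in powers]

    # Number of colonies per empire; any rounding remainder
    # goes to the strongest empire (index 0).
    counts = [round(s * len(col_idx)) for s in shares]
    counts[0] += len(col_idx) - sum(counts)

    empires, k = [], 0
    for imp, c in zip(imp_idx, counts):
        empires.append({"imperialist": imp, "colonies": col_idx[k:k + c]})
        k += c
    return empires
```

The power-shifting step mirrors the normalization used in the original ICA, where the empire with the lowest cost receives the largest share of colonies.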

2.1 Assimilation

Assimilation is the process of moving the colonies toward the imperialist within the same empire. This process is one of the most important parts of the ICA because it is related to the improvement of the colonies of a particular empire. Figure 1 describes the movement of a colony toward its imperialist in a randomly deviated direction to search for different points around the imperialist. As shown in the figure, the new position of the colony is x, which is a random variable with a uniform distribution. Thus, we have \(x~\sim ~U\)(0, \(\beta \times d\)), where \(\beta \) is a number between 1 and 2 and d is the distance between the colony and the imperialist (Atashpaz-Gargari and Lucas 2007). In addition, \(\theta \) is a random variable with a uniform distribution, \(\theta ~\sim ~U\) (\(-\gamma ,\gamma \)), where \(\gamma \) is a parameter that adjusts the deviation from the original direction (Atashpaz-Gargari and Lucas 2007).

Fig. 1 Moving colonies toward their relevant imperialist in a randomly deviated direction

The mathematical formulation of ICA assimilation may be demonstrated as follows: Let

$$\begin{aligned} Col\_Pos=[p_{1}, p_{2} , {\ldots }, p_{n}] \end{aligned}$$
(1)

be the vector containing the colony’s position and let

$$\begin{aligned} Imp\_Pos=[p'_{1}, p'_{2}, {\ldots }, p'_{n}] \end{aligned}$$
(2)

be the vector containing the imperialist’s position, where n is the dimension of the optimization problem. Now, let d be the vector containing the elementwise difference of (1) and (2) as follows:

$$\begin{aligned} d=[p_{1} -p'_{1}, p_{2} -p'_{2},{\ldots },p_{n} -p'_{n}]. \end{aligned}$$
(3)

Therefore, the calculation of the new colony’s position is

$$\begin{aligned} Col\_Pos\_New~=~Col\_Pos~+~\theta *\beta *r*d, \end{aligned}$$
(4)

where r is a random variable with a uniform distribution between 0 and 1. The value of \(\theta \) is selected to be in the range \((-\pi /4, \pi /4)\) radians, and \(\beta \) is selected to be approximately 2 (Atashpaz-Gargari and Lucas 2007).
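The update of Eqs. (1)-(4) can be sketched as follows (a hedged transcription: `d` is taken here as the elementwise vector from the colony toward the imperialist, and the name `assimilate` is illustrative):

```python
import math
import random

def assimilate(col_pos, imp_pos, beta=2.0, gamma=math.pi / 4):
    """One assimilation step (Eq. (4)): move a colony along the
    difference vector d, scaled by beta and a uniform r, and deviated
    by a random angle theta ~ U(-gamma, gamma)."""
    # d: elementwise difference of the two position vectors (Eqs. (1)-(3)),
    # taken here as pointing from the colony toward the imperialist.
    d = [p - c for c, p in zip(col_pos, imp_pos)]
    theta = random.uniform(-gamma, gamma)
    r = random.random()
    return [c + theta * beta * r * di for c, di in zip(col_pos, d)]
```

Because \(\theta \) is symmetric around zero, the step both advances toward and deviates around the imperialist, which is what produces the search behavior shown in Fig. 1.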

2.2 Revolution

Revolution is the process of generating new countries in the empire (Atashpaz-Gargari and Lucas 2007). This occurs owing to an abrupt change in sociopolitical characteristics. While generating new countries, a few of the current countries (colonies) are randomly replaced by the newly created countries.

2.3 Imperialistic competition

Imperialistic competition occurs after the assimilation and revolution operations are applied to the colonies during each iteration of the algorithm, as shown in Fig. 1. Imperialistic competition starts with the computation of the total cost of all empires. The total cost of an empire can be expressed as follows (Atashpaz-Gargari and Lucas 2007):

$$\begin{aligned} \mathrm{TC}_k = \mathrm{IC}_k + \varepsilon *\mathrm{mean}({\mathrm{CE}_k}), \end{aligned}$$
(5)

where \(\mathrm{TC}_k \) is the total cost of empire k, \(\mathrm{IC}_k \) is the imperialist cost of empire k, \(\mathrm{CE}_k \) is the set of colony costs of empire k, and \(\varepsilon \) is a small value of approximately 0.1, which makes the total cost of an empire depend mostly on the imperialist (a larger value of \(\varepsilon \) makes the total cost depend on both the imperialist and the colonies of the empire).

Competition among the empires is realized by excluding the weakest empire from the competition and allowing other empires to compete for the weakest colony in the excluded weakest empire. The following mathematical formulation describes the possession probabilities of the competing empires for the weakest colony (Atashpaz-Gargari and Lucas 2007):

$$\begin{aligned} p_k =\left| {\frac{\mathrm{NTC}_k }{\sum \nolimits _{i=1}^N \mathrm{NTC}_i }} \right| , \end{aligned}$$
(6)

where \(p_k \) is the possession probability of empire k, N is the number of imperialists, and \( \mathrm{NTC}_k \) is the normalized total cost, which is computed as

$$\begin{aligned} \mathrm{NTC}_k =\mathrm{TC}_k -\hbox {max}_i ({\mathrm{TC}_i}). \end{aligned}$$
(7)

The final step in the competition between imperialists is to obtain a vector containing the differences between the possession probabilities and uniformly distributed random values in (0, 1) as follows:

$$\begin{aligned} D=[{p_1 -r_1 , p_2 -r_2 ,\ldots ,p_N -r_N}] \end{aligned}$$
(8)

where N is the number of imperialists. The possessor of the weakest colony in the weakest empire is the one whose corresponding index in vector D contains the maximum value.
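Equations (5)-(8) can be combined into a short sketch (assuming the subtractive normalization \(\mathrm{NTC}_k =\mathrm{TC}_k -\max _i \mathrm{TC}_i\); the helper names `total_cost` and `pick_possessor` are illustrative):

```python
def total_cost(imp_cost, colony_costs, eps=0.1):
    """Eq. (5): empire cost, dominated by the imperialist for small eps."""
    mean = sum(colony_costs) / len(colony_costs) if colony_costs else 0.0
    return imp_cost + eps * mean

def pick_possessor(total_costs, rand_vals):
    """Eqs. (6)-(8): normalize the total costs, convert them to
    possession probabilities, and pick the empire maximizing p_k - r_k."""
    m = max(total_costs)
    ntc = [tc - m for tc in total_costs]   # subtractive normalization
    s = sum(ntc) or 1.0
    p = [abs(n / s) for n in ntc]          # cheapest empire gets largest p_k
    d = [pk - rk for pk, rk in zip(p, rand_vals)]
    return d.index(max(d))
```

With equal random draws, the cheapest (most powerful) empire wins the weakest colony, which matches the intended behavior of the competition.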

3 Proposed MOICA

3.1 Algorithm overview

The proposed MOICA implements the idea of imperialism by incorporating competition among empires. The primary concept of the MOICA is that each empire has its own set of non-dominated solutions, i.e., imperialists, and there is one GNDS set, which contains the best imperialists among all empires. All empires attempt to possess other empires' colonies based on their power. Therefore, all empires have the opportunity to assume control of one or more colonies of the weakest empire. In each iteration of the algorithm, the colonies of each empire update their positions and, consequently, their cost values. As previously mentioned, a colony C in an empire may become better than some members of the current set of imperialists. In such a case, the colony C with the better cost becomes a member of the empire's imperialists, that is, a member of the set of non-dominated solutions, and any imperialist I that is dominated by C becomes a colony.

The MOICA has an important yet simple feature in its implementation: it maintains several non-dominated solution sets, which distinguishes it from numerous other multi-objective optimization algorithms. Initially, there are N empires, and every empire possesses its own set of Pareto-optimal solutions, i.e., its LNDS set. Hence, the total number of LNDS sets is initially N. Moreover, there is one GNDS set, which is obtained from the N LNDS sets. Because the LNDS set of each empire is updated during the iterations, the GNDS set is updated accordingly. This implies that the algorithm maintains one GNDS set throughout its execution, whereas the number of LNDS sets gradually decreases as empires collapse during competition. Figure 2 illustrates an example of three empires (\(E_{1}\), \(E_{2}\), and \(E_{3}\)) with their colonies and LNDS sets, i.e., imperialists, which are shown in bold.

Fig. 2 GNDS and LNDS sets of three empires

The imperialists enclosed in the marked area in Fig. 2 are the best imperialists among all empires and form the GNDS set. It is possible that none of an empire's imperialists is included in the GNDS set; \(E_{1}\) in Fig. 2 is an example of this scenario. The use of the GNDS set in this algorithm is therefore very important: the colonies of all empires are assimilated toward randomly selected imperialists from the GNDS set, which enables the algorithm to avoid local optima. If we consider only one empire in Fig. 2, for example, \(E_{2}\), it is apparent that the circles in bold form its non-dominated solution set. The assimilation and revolution operations are detailed in the following sections.

Non-dominance in the proposed algorithm is calculated based on fronts: solutions assigned a value of 1 belong to the first front, solutions with a front value of 2 belong to the second front, and so on. As a result, the LNDS and GNDS sets contain only solutions that belong to the first front. Another significant feature of the proposed algorithm is that no special parameter is used for diversity preservation, which spares the algorithm the extra computation needed to maintain the spread of solutions. Even though a sharing parameter is not used in the MOICA, the solution spread in the results obtained from our simulations and experiments was excellent owing to the assimilation technique used in the algorithm. As described in the previous section, in the ICA all colonies of an empire move toward the single imperialist of that empire. In the proposed algorithm, however, the colonies of an empire move toward one of the imperialists, I, in the GNDS set; this imperialist is randomly selected in each iteration from the GNDS set. The sharing parameter can thus be omitted because of the multi-objective nature of the algorithm, in which every solution in a non-dominated solution set is equally valid, so there is no single best solution. For clarity, we first present the proposed algorithm and then detail each of its parts.
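The relationship between the LNDS sets and the GNDS set can be sketched as follows (a minimal illustration for minimization problems; `dominates` and `global_nds` are illustrative helper names):

```python
def dominates(a, b):
    """Pareto dominance for minimization: a is no worse in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def global_nds(lnds_sets):
    """Merge the LNDS sets of all empires and keep only the solutions
    that no other merged solution dominates (the GNDS set)."""
    pool = [s for lnds in lnds_sets for s in lnds]
    return [s for s in pool if not any(dominates(t, s) for t in pool)]
```

A solution that is locally non-dominated within its empire may still be dominated by an imperialist of another empire, which is exactly how an empire such as \(E_{1}\) in Fig. 2 can contribute nothing to the GNDS set.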

Algorithm – Primary procedure of the MOICA

1. Begin
2. Initialization
   a. Initialize problem parameters, such as the objective function name, number of variables, and the lower and upper bounds of decision variables.
   b. Initialize algorithm parameters, such as population size, number of initial empires, number of iterations, and other coefficients used in the assimilation and revolution operations.
3. Evaluate objective functions and assign cost values to each country.
4. Apply non-domination sorting (Deb et al. 2002).
5. Create initial empires.
6. For each iteration i do:
   a. For each empire j do:
      i. Obtain the GNDS.
      ii. Apply assimilation: move colonies toward one randomly selected imperialist in the GNDS set and apply economic changes with probability \(p_{e}\).
      iii. Apply revolution: generate new countries from the LNDS set according to probability \(p_{r}\) and revolution rate \(\alpha \).
      iv. Evaluate objective functions and assign cost values to all colonies.
      v. Update the LNDS for empire j.
      vi. Calculate the total power of empire j.
   b. End for
7. Unite similar empires.
8. Apply imperialistic competition and terminate powerless empires.
9. End for
10. Display results.

3.2 Non-domination sorting

Various methods have been proposed in the literature for determining non-dominance. In these methods, each solution in a search space is assigned a rank value, which indicates whether the solution is dominated by other solutions. In most cases, the lower the rank value, the less the solution is dominated by other solutions; for example, a rank value of one indicates that the solution is non-dominated. Another approach to ascertaining non-dominance is to divide the solutions into fronts instead of assigning each a rank value (Deb et al. 2002), which is the approach we use in this study. Figure 3 illustrates non-dominated solutions with fronts for a minimization problem.

Fig. 3 Non-dominance using fronts

The first front contains the non-dominated solutions, whereas the second front is the set of solutions dominated only by solutions from the first front; solutions with a front value of three are dominated only by solutions from the previous fronts, and so on. In the proposed algorithm, every empire has its own LNDS set, which comprises the imperialists of the empire; there is no single imperialist in an empire. All other solutions have front values larger than one, and these dominated solutions are considered the colonies of the empire.
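Front assignment as described above can be sketched with a simple, non-optimized procedure (`assign_fronts` is an illustrative name; the faster bookkeeping of Deb et al. (2002) is omitted for brevity):

```python
def dominates(a, b):
    """Pareto dominance for minimization."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def assign_fronts(solutions):
    """Assign front values: front 1 holds the non-dominated solutions;
    each later front holds solutions dominated only by earlier fronts."""
    remaining = list(range(len(solutions)))
    fronts = [0] * len(solutions)
    level = 1
    while remaining:
        # Solutions not dominated by anything still remaining form this front.
        current = [i for i in remaining
                   if not any(dominates(solutions[j], solutions[i])
                              for j in remaining)]
        for i in current:
            fronts[i] = level
        remaining = [i for i in remaining if i not in current]
        level += 1
    return fronts
```

In the MOICA, only the solutions with front value 1 enter an empire's LNDS set; the rest become its colonies.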

3.3 Assimilation

Assimilation, the movement of colonies toward imperialists, is implemented similarly to the procedure explained in Sect. 2. However, there are several imperialists in the GNDS set; thus, one imperialist must be selected to serve as the target for the movement of the colonies. The use of the GNDS set in assimilation instead of the LNDS sets enables the algorithm to escape local minima faster. The target imperialist is randomly selected for each empire in each iteration of the algorithm.

Figure 4 illustrates the assimilation procedure implemented in this algorithm. In the figure, the black circles and the red triangle indicate non-dominated solutions (the GNDS set), which are imperialists of the whole population. The red triangle is the randomly selected target imperialist toward which the colonies move. For simplicity, only one moving colony is shown in the figure, indicated by a blue circle. Parameters such as \(\theta \), d, and x are explained in Sect. 2; however, the values used for a few parameters are different, as discussed later. Owing to the randomized selection of the target imperialists and the deviation \(\theta \), diversity in the algorithm is preserved. Note that the deviation \(\theta \) is applied in the decision space, and the corresponding deviation in the objective space, shown in Fig. 4, may differ from it; nevertheless, even if the deviations in the two spaces differ, a deviation still exists in the objective space.

Fig. 4 Assimilation of a colony toward a randomly selected imperialist from the GNDS set

To improve the local search of the proposed algorithm, a new operation is added immediately after assimilation: the influence of economic changes on the empire, which is engaged with a certain probability, as described in the pseudocode below. The higher the value of \(p_{e}\), the lower the probability of performing the operation; in most cases, a value of 0.9 is used so that economic changes occur only occasionally. UpperBound and LowerBound are vectors that delimit the decision space of the decision variables for the given objective function, and rand() is a uniformly generated random value in (0, 1). The variables and parameters Col_Pos_New, Col_Pos, \(\theta \), \(\beta \), and r are the same as in Sect. 2. However, d is different in this operation because it contains the elementwise difference between a colony and a randomly selected imperialist from the GNDS set.

Procedure: Assimilation with local search (economic changes)

1. Randomly select an imperialist \(I_{G}\) from the GNDS.
2. for each colony in empire i do
3.     set d to the elementwise difference of the colony and \(I_{G}\)
4.     Col_Pos_New = Col_Pos + \(\theta * \beta *r*d\)
5. end for
6. if rand() > \(p_{e}\) do
7.     R = UpperBound − LowerBound;
8.     for each decision variable i in R do
9.         w(i) = (abs(UpperBound(i))*rand())\(^{rand()/R(i)}\) − (abs(LowerBound(i))*rand())\(^{rand()/R(i)}\);
10.    end for
11.    ColoniesOfEmpire = ColoniesOfEmpire .* w;
12. end if
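The economic-changes part of the procedure can be sketched in Python as follows (the minus sign between the two terms of w(i) is an assumption about the original formula, as is the elementwise colony scaling; `economic_changes` is an illustrative name):

```python
import random

def economic_changes(colonies, lower, upper, p_e=0.9):
    """Economic-changes local search: engaged only when rand() > p_e,
    so a higher p_e means a lower chance of applying it. Each decision
    variable of every colony is scaled by a bound-derived weight w(i)."""
    if random.random() <= p_e:
        return colonies  # no economic change this time
    r = [u - l for l, u in zip(lower, upper)]  # R = UpperBound - LowerBound
    w = [(abs(u) * random.random()) ** (random.random() / ri)
         - (abs(l) * random.random()) ** (random.random() / ri)  # assumed '-'
         for l, u, ri in zip(lower, upper, r)]
    # ColoniesOfEmpire .* w (elementwise scaling of every colony)
    return [[c * wi for c, wi in zip(col, w)] for col in colonies]
```

Because the weights depend on the variable bounds rather than on other solutions, the operation perturbs the whole empire without requiring extra fitness evaluations at this point.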

3.4 Revolution

The revolution operation in the proposed algorithm is completely different from that in the ICA, because new colonies are not generated purely at random. The new revolution operation has two parts, which are performed based on probability \(p_{r}\). The first part generates a new colony by randomly combining elements from two randomly selected imperialists in the LNDS set of one empire; if there is only one imperialist in the LNDS set, one more individual is randomly generated. In the second part, a few imperialists are perturbed, and the resulting countries replace randomly chosen colonies.

Procedure: Revolution

1. if rand() > \(p_{r}\)
2. for each colony in empire i do
   a. Select two imperialists \({{\varvec{I}}}_{1}\) and \({{\varvec{I}}}_{2}\) from the LNDS (if the set contains only one imperialist, randomly generate one more)
   b. Generate two random points \(P_{1}\) and \(P_{2}\) between 1 and the length of an individual
   c. Split each imperialist into three blocks using points \(P_{1}\) and \(P_{2}\)
   d. Generate a new colony C by combining the first and third blocks from \({{\varvec{I}}}_{1}\) with the second block from \({{\varvec{I}}}_{2}\)
   e. Replace colony i in the empire with C
3. end for
4. else
5. for \(i~=~1\) to RevolutionRate * NumberOfColoniesInEmpire
   a. Randomly select one imperialist \({{\varvec{I}}}_{i}\) from the LNDS
   b. Update \({{\varvec{I}}}_{i}\) by adding to each of its elements a random value in (0.001, 0.09) or (\(-0.09\), \(-0.001\))
6. end for
7. Replace randomly selected colonies with the newly generated countries
8. end if

The GNDS is used to select imperialists for updating colonies during assimilation. On the other hand, imperialists from the LNDS of the same empire are used in the revolution process. Therefore, both assimilation and revolution of colonies enable the algorithm to escape local minima and reach global optimal solutions.
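The crossover part of the revolution (steps 2a-2e of the procedure above) can be sketched as follows (`two_point_offspring` is an illustrative name; the perturbation branch is omitted):

```python
import random

def two_point_offspring(imp1, imp2):
    """Crossover part of the MOICA revolution: split two LNDS
    imperialists at two random points and build a new colony from the
    outer blocks of imp1 and the middle block of imp2."""
    n = len(imp1)
    if n > 2:
        p1, p2 = sorted(random.sample(range(1, n), 2))
    else:
        p1 = p2 = 1  # degenerate case for very short individuals
    return imp1[:p1] + imp2[p1:p2] + imp1[p2:]
```

Because both parents are locally non-dominated, the offspring recombines already good decision-variable blocks rather than being drawn at random, which is the main difference from the ICA's original revolution.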

3.5 Possessing an empire

Every empire is possessed by its set of imperialists, which is the non-dominated set of solutions within the empire itself, defined as the LNDS in this algorithm. It is possible, however, that all individuals of an empire belong to the LNDS, in which case there are no dominated solutions within the empire and, consequently, assimilation and revolution are not applicable. Therefore, one more parameter, \(\emptyset \), was added to this algorithm; it indicates the maximum percentage of imperialists that an empire can have. When obtaining the LNDS of an empire, it is determined whether the percentage of imperialists exceeds \(\emptyset \). If so, only the allowed maximum number of best imperialists is retained, and the others are moved to the set of colonies. The total power of an empire is equal to the number of non-dominated solutions in the empire's population. Although an empire with fewer non-dominated solutions may contain better solutions than one with more, the total power is still this cardinality measure, regardless of dominance.
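Capping the LNDS at the fraction \(\emptyset \) can be sketched as follows (the ranking criterion used to decide which imperialists are "best" is not specified in the text, so the `key` argument is an assumption; `trim_lnds` is an illustrative name):

```python
def trim_lnds(lnds, colonies, phi, key):
    """Cap the LNDS at a fraction phi of the empire's population;
    surplus imperialists (ranked by `key`, an assumed criterion) are
    demoted to colonies. The empire's total power is then len(lnds)."""
    cap = max(1, int(phi * (len(lnds) + len(colonies))))
    if len(lnds) <= cap:
        return lnds, colonies
    ranked = sorted(lnds, key=key)  # best first, per the assumed ranking
    return ranked[:cap], colonies + ranked[cap:]
```

With \(\emptyset = 0.3\), as used later in the experiments, at most 30% of an empire's population remains imperialist, leaving the other 70% available for assimilation and revolution.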

3.6 Uniting similar empires

The MOICA unites similar empires differently from the ICA. In the ICA, two empires are united when one empire's imperialist is very close to the other's. This is determined by calculating the distance between the positions of the two imperialists and comparing it with a distance-threshold parameter, originally set to 0.02. The distance threshold used here is not for diversity preservation; it only measures how close two empires are. If the distance is less than or equal to the specified threshold, the empires are united.

In the proposed algorithm, this approach cannot be applied because there are several imperialists in an empire. Thus, all imperialists must be considered when comparing empires’ similarity. Consequently, the empire similarity comparison is implemented using the generational distance metric (Van Veldhuizen and Lamont 1998), which enables the calculation of the generational distance between two or more sets of non-dominated solutions. The generational distance GD is defined as:

$$\begin{aligned} \mathrm{GD}=\frac{1}{|{S^{*}}|}\sum \nolimits _{r\in S^{*}} \mathrm{min}\left\{ {d_{rx} \hbox {|}x\in S_j } \right\} , \end{aligned}$$
(9)

where \(S^{*}\) is a reference solution set for the evaluation of the solution set \(S_{j}\) and \(d_{rx}\) is the distance between the current solution x and the reference solution r, given as

$$\begin{aligned} d_{rx} =\sqrt{\sum \nolimits _{i=1}^k ( {f_i ( r )-f_i ( x )} )^{2}}, \end{aligned}$$
(10)

where k is the number of objective functions to be optimized. Figure 5 illustrates an example computation of generational distance for two objective functions.

Fig. 5 Generational distance for uniting empires
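Equations (9)-(10) translate directly into a short helper (`generational_distance` is an illustrative name):

```python
import math

def generational_distance(s_star, s_j):
    """Eq. (9): mean, over the reference set S*, of the Euclidean
    distance (Eq. (10)) from each reference point to its nearest
    point in the evaluated set S_j."""
    def dist(r, x):
        return math.sqrt(sum((fr - fx) ** 2 for fr, fx in zip(r, x)))
    return sum(min(dist(r, x) for x in s_j) for r in s_star) / len(s_star)
```

Treating one empire's LNDS set as \(S^{*}\) and another's as \(S_{j}\) yields the similarity measure used to decide whether the two empires should be united.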

3.7 Imperialistic competition

Imperialistic competition plays an important role in this algorithm, gradually eliminating weak empires and strengthening powerful ones. The weakest empire in the proposed algorithm is the one with the smallest number of non-dominated individuals, whereas the strongest is the one with the largest number of non-dominated individuals. Imperialistic competition is constructed so that the stronger an empire is, the greater its chance of taking control of a weak colony in a weak empire. Weak empires slowly lose their colonies during this competition and are eventually terminated owing to their powerlessness, i.e., when they are left with no countries.

Procedure: Imperialistic competition

1. Construct a vector P of the total powers of all empires.
2. Select the weakest empire E, i.e., the empire with the lowest total power.
3. Construct a vector \(\mathbf{R}~\sim ~U(0,1)\) of random values of the same size as P.
4. Calculate D = P − R.
5. The empire with the maximum value in D possesses a randomly selected colony of empire E.
6. Terminate E if it has no colonies.
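One competition step can be sketched as follows (the `"lnds"`/`"colonies"` dictionary layout is an assumed representation; the winner is the empire maximizing \(p_k - r_k\), as in Sect. 2):

```python
import random

def imperialistic_competition(empires):
    """One MOICA competition step: power = |LNDS|; the weakest empire
    gives a random colony to the winning empire and is removed once it
    has no colonies left."""
    if len(empires) < 2:
        return empires
    powers = [len(e["lnds"]) for e in empires]
    weakest = min(range(len(empires)), key=lambda i: powers[i])
    total = sum(powers) or 1
    # p_k - r_k for every empire except the weakest one.
    d = [powers[i] / total - random.random() if i != weakest else float("-inf")
         for i in range(len(empires))]
    winner = max(range(len(empires)), key=lambda i: d[i])
    if empires[weakest]["colonies"]:
        idx = random.randrange(len(empires[weakest]["colonies"]))
        empires[winner]["colonies"].append(empires[weakest]["colonies"].pop(idx))
    if not empires[weakest]["colonies"]:
        del empires[weakest]
    return empires
```

Repeating this step each iteration produces the gradual collapse of weak empires described above.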

3.8 Computational complexity of MOICA

The time complexity of non-domination sorting in the MOICA is the same as that in NSGA-II, i.e., \(O(M(2N)^{2})\), where M is the number of objectives and N is the number of solutions, i.e., the population size. Considering the time complexities of the assimilation and revolution operations, in the worst case \(N-1\) colonies may undergo assimilation or revolution if there is only one dominating imperialist; therefore, in every iteration, the time complexity of both assimilation and revolution is O(N). Another consideration is the time complexity of uniting similar empires, which is \(O(K^{2})\) per iteration, where K is the number of empires in the population. Consequently, the overall time complexity of the MOICA is \(O(M(2N)^{2}+K^{2})\). Comparing the time complexities of the MOICA and NSGA-II, we conclude that they are almost the same, since \(K^{2}\) depends on the number of empires, which is usually very small in comparison with the population size N, and could even be omitted.

Table 1 Unconstrained test problems used in this study

4 Experimental results

This section details the experiments and simulations conducted in this study. To obtain the experimental results and verify the effectiveness of the proposed algorithm, several bi-objective and tri-objective optimization problems were selected from the literature as test problems. ZDT1, ZDT2, ZDT3, ZDT4, and ZDT6 were obtained from Zitzler et al. (2000), in addition to test problems from Kursawe (1990), Fonseca and Fleming (1998), and Schaffer (1987). Moreover, ten unconstrained test functions were employed from the Congress on Evolutionary Computation (CEC) 2009 Special Session and Competition (Zhang et al. 2009)—UF1, UF2, UF3, UF4, UF5, UF6, UF7, UF8, UF9, and UF10.

Table 1 details all the unconstrained test problems used in this study, except the CEC 2009 test functions, which can be found in Zhang et al. (2009). The performance metrics used to evaluate our results against the Pareto-optimal solutions are hypervolume (HV) (Zitzler and Thiele 1998), epsilon indicator (EI) (Zitzler et al. 2003), and inverted generational distance (IGD). The IGD metric used in this study is the jMetal version.
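For concreteness, IGD averages, over a reference Pareto front, each reference point's distance to its nearest obtained solution, so it rewards both convergence and spread. The sketch below uses the plain definition for illustration; the jMetal variant used in the paper may differ in normalization details.

```python
import math

def igd(reference_front, obtained_front):
    """Inverted generational distance: mean Euclidean distance from each
    reference Pareto point to the closest obtained solution (lower is better)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sum(min(dist(r, s) for s in obtained_front)
               for r in reference_front) / len(reference_front)
```

A front that covers the whole reference set scores 0; missing a region of the reference front inflates the distances for the uncovered points, which is why IGD is sensitive to the cardinality of the obtained set.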

In addition, this section compares the results of the proposed algorithm with those of state-of-the-art multi-objective optimization algorithms.

4.1 Discussion

All experimental results were obtained by executing each algorithm ten times. The maximum number of function evaluations was set to 25,000. For a few test functions, it was set to 5000 to verify the performance of the algorithms with higher and lower numbers of function evaluations. The population size was set to 100 for all algorithms, and the dimension of the individuals in the population was set to 30 for all test functions. The tables below report the HV, EI, and IGD values averaged over these executions of each algorithm.

The proposed algorithm used the following parameters. The initial number of empires was set to 8; several tests showed that performance degraded with far fewer or far more initial empires. The parameter \(\theta \) took a random value in (0, 1) and \(\beta \) took a random value in (0, 5). The parameter for the percentage of imperialists \(\emptyset \) was set to 0.3, so that at most 30% of an empire's population was considered imperialist. Thus, 70% of the space was left for colonies in an empire, allowing more assimilations and revolutions to be performed. The revolution rate \(\alpha \) was set to 0.3, and the parameter used in the revolution process \(p_{r}\) was set to 0.5. The optimal value of the parameter for applying economic changes \(p_{e}\) may differ across test functions. For example, for UF9, the result was best when \(p_{e}\) was set to around 0.2; nonetheless, in most cases, it was found to be between 0.8 and 1 based on a trial-and-error approach. The values of the above parameters were chosen as the most suitable values for the proposed algorithm after a number of experiments. Therefore, the parameters for the MOICA were tuned using a non-iterative algorithmic approach (Eiben and Smit 2011), such that the parameters were generated during initialization and were then tested.

Table 2 Hypervolume results for unconstrained test problems with 25,000 function evaluations
Table 3 Epsilon indicator results for unconstrained test problems with 25,000 function evaluations
Table 4 IGD results for unconstrained test problems with 25,000 function evaluations
Fig. 6
figure 6

Cardinality measure of MOICA, OMOPSO, NSGA-II, and SPEA2 on Schaffer

The first three test problems addressed in this section are Fonseca, Kursawe, and Schaffer. Then, the ZDT set of problems is discussed and the results of the set of unconstrained problems from CEC 2009 are described. Values in bold are the best results obtained. All algorithms performed well in terms of convergence and divergence for each of the problems below. The cardinality measure, i.e., the number of non-dominated solutions, is important, as having more candidate solutions means more chances for good convergence. One of the main features that distinguish the MOICA is the cardinality measure, which is very good for most problems. Supplementary materials related to the proposed method are available in Online Resource 1.

Table 1 presents the five real-valued ZDT problems; ZDT5 is omitted because it is binary-encoded, and for the same reason it has often been omitted from analysis elsewhere in the EA literature.

Fig. 7
figure 7

Non-dominated MOICA, OMOPSO, NSGA-II, and SPEA2 solutions on ZDT4

Fig. 8
figure 8

Non-dominated MOICA, OMOPSO, NSGA-II, and SPEA2 solutions on ZDT6

Tables 2, 3, and 4 contain the HV, EI, and IGD results of the MOICA, NSGA-II, SPEA2, and OMOPSO for the unconstrained test problems in Table 1. Table 4 also includes the MOEA/D-AWA algorithm (Qi et al. 2014). On average, the results for HV and EI are similar for all algorithms. However, the MOICA performs considerably better in terms of IGD.

As stated above, one of the features of the MOICA is its ability to produce many candidate solutions. The Schaffer test problem is an example that illustrates the cardinality measure of the MOICA. Figure 6 illustrates the Pareto fronts found by all algorithms for the Schaffer test problem. Although the HV and EI results are good for all algorithms, as shown in Tables 2 and 3, respectively, Fig. 6 shows that the MOICA and OMOPSO have considerably better cardinality measures than NSGA-II and SPEA2.

For ZDT1, ZDT2, and ZDT3, all algorithms performed equally well. However, on the ZDT4 test problem, the MOICA performed considerably better than all other algorithms in this study, demonstrating its power in terms of convergence and divergence. It was successful on this test problem and others because of the way it searches the available space: it establishes several empires at the beginning of the algorithm, whose LNDS sets are positioned in different parts of the search space. This enables the algorithm to search the whole search space and consequently to obtain good convergence and divergence. Figure 7 illustrates the Pareto fronts found by the four algorithms for the ZDT4 test problem.

Figure 7 demonstrates how the spread of solutions, convergence, and divergence are effectively preserved in the MOICA compared to the other algorithms. Figure 8 illustrates the ZDT6 test problem, which is another good example for illustrating the performance of the MOICA compared to the other algorithms. In ZDT6, both the MOICA and OMOPSO performed well compared to NSGA-II and SPEA2; however, NSGA-II performed better than SPEA2.

Table 5 Hypervolume results for the CEC 2009 unconstrained test problems with 25,000 function evaluations
Table 6 Epsilon indicator results for the CEC 2009 unconstrained test problems with 25,000 function evaluations

Figures of the Pareto fronts for the other test problems are not provided here because the algorithms produce nearly identical fronts on them.

Table 7 IGD results for the CEC 2009 unconstrained test problems with 25,000 function evaluations
Fig. 9
figure 9

Non-dominated MOICA, OMOPSO, NSGA-II, and SPEA2 solutions on UF10

Tables 5, 6, and 7 contain the hypervolume, epsilon indicator, and IGD results for the UF1-UF10 unconstrained test problems from CEC 2009 with 25,000 function evaluations, for which the MOICA, on average, again produces reasonably good results.

Table 8 Hypervolume results for the CEC 2009 unconstrained test problems with 5000 function evaluations
Table 9 Epsilon indicator results for the CEC 2009 unconstrained test problems with 5000 function evaluations
Table 10 IGD results for the CEC 2009 unconstrained test problems with 5000 function evaluations
Table 11 MOICA’s ranking compared to algorithms in CEC 2009

The MOICA produced competitive results on the test functions from CEC 2009 compared to the other algorithms. Harmony NSGA-II and Harmony MOEAD (Doush and Bataineh 2015) also performed well, whereas DMCMOABC (Xiang and Zhou 2015) performed the worst. Figure 9 presents the MOICA's results alongside those of the other algorithms, as well as the Pareto-optimal front for the UF10 unconstrained test function. Tables 5, 6, and 7 illustrate that the MOICA performs better than the other algorithms on UF10. In addition, it is clear from Fig. 9 that the MOICA lies within the objective space of the Pareto-optimal front, unlike the other algorithms.

Tables 8, 9, and 10 show the results for the UF1-UF7 test functions with a maximum of 5000 function evaluations. The MOICA's average performance is similar to or better than that of the other algorithms, even with so few function evaluations. This result likewise indicates that the MOICA converges quickly to globally optimal solutions.

Table 11 presents the MOICA’s ranking compared with the algorithms used in the CEC 2009 competition for unconstrained functions. The ranking is based on the average IGD metric.

4.2 Friedman aligned ranks test

To check the statistical similarity of our results to those of other algorithms and determine the MOICA's rank among its competitors, we implemented the Friedman aligned ranks test over all average IGD scores achieved by the 13 algorithms in the CEC 2009 MOO contest along with the proposed MOICA. Table 12 shows the average rank values of all algorithms and the p value of the test. The subscripted numbers for the best scores indicate the order of the corresponding algorithms. The average rank value of the MOICA is the smallest, which indicates that the MOICA is the best-performing algorithm among those analyzed. Meanwhile, the p value is very close to zero, indicating a statistically significant difference among the results of all algorithms, such that the MOICA is statistically different from its competitors. The Friedman aligned ranks test was also implemented over the IGD scores obtained with 25,000 function evaluations by the most popular MO algorithms given in Table 7. The results in Table 13 indicate that the MOICA again performs best among the competing algorithms.
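The aligned-ranking step behind these tables can be sketched as follows; this is an illustration of the ranking procedure only (the test statistic and p value computation are omitted, and ties are broken arbitrarily for brevity). Each problem's row mean is subtracted to align the scores across problems, all aligned values are then ranked jointly, and each algorithm's ranks are averaged; with IGD, a lower average aligned rank indicates a better algorithm.

```python
def friedman_aligned_ranks(scores):
    """scores[i][j]: metric value for problem i, algorithm j (lower = better).
    Returns the average aligned rank of each algorithm."""
    n_problems, n_algos = len(scores), len(scores[0])
    # Align: remove the per-problem location effect by subtracting the row mean.
    aligned = [(scores[i][j] - sum(scores[i]) / n_algos, j)
               for i in range(n_problems) for j in range(n_algos)]
    aligned.sort()                  # rank all n*k aligned values jointly
    rank_sums = [0.0] * n_algos
    for rank, (_, j) in enumerate(aligned, start=1):
        rank_sums[j] += rank
    return [rs / n_problems for rs in rank_sums]
```

Aligning by the row mean is what lets observations from different problems be ranked together, which distinguishes this test from the ordinary Friedman test, where ranking is done within each problem separately.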

Table 12 Friedman aligned ranks statistics and the corresponding p value over all algorithms in Table 11
Table 13 Friedman aligned ranks statistics and the corresponding p value over all algorithms in Table 7

5 Conclusion

In this paper, we propose the MOICA for solving global multi-objective optimization problems. The search mechanism used in this algorithm starts several empires whose LNDS sets are positioned around the search space. In addition, revolution operations that enable the MOICA to converge and diverge competitively were proposed, and the algorithm was compared to three existing algorithms. Experimental results on three metrics showed that, for most test functions, the MOICA was competitive with the baseline algorithms. The MOICA's success can be traced to its global non-dominated solution set and its approach of assimilating colonies toward these solutions, because assimilation, in which small deviations are utilized for better convergence and divergence, enables the MOICA to cover the entire search space.