1 Introduction

With the development of evolutionary computation, a number of representative evolutionary algorithms suitable only for either single- or multi-objective optimization problems have been proposed in the literature [16]. Nevertheless, in practice, because of different requirements and goals, many optimization problems, e.g., university timetabling, engineering design, flow-shop scheduling, and so on, frequently change their type of optimization; in other words, they may include only one objective within one time span but multiple conflicting objectives within another. This requires that a single search method be able to deal concurrently with single- and multi-objective optimization, so-called omni-optimization. Thus, advanced omni-optimization techniques are desired for concurrently finding the optimal solution(s) and Pareto-optimal solutions for such problems, but few achievements have been reported in the literature.

Although many researchers still focus on exploring evolutionary algorithms for single- or multi-objective optimization problems, research on omni-optimization is in progress. Deb et al. [7] made an encouraging attempt to develop the concept of omni-optimization. In their work, an omni-optimizer, based on the well-known NSGA-II [3], was reported. In this optimizer, the classical crowding distance method included in NSGA-II was modified to calculate distances between candidates both in the design space and in the objective space. The purpose of this modification is to guarantee that the optimizer is effective for multi-global optimization problems. In particular, when degenerated to solve single-objective problems, it is similar to the standard evolutionary algorithm (μ + λ)-ES. The experimental results in that work indicate that the optimizer is competitive for some multi-objective problems. Additionally, in order to deal with practical engineering design and simulation-based optimization problems, several researchers have also investigated omni-optimization with discrete variables by developing improved genetic algorithms [8–10]. For instance, in Klanac and Jelovica [8], constraints were converted into additional objectives, and a weighting strategy was adopted to design the fitness scheme. The optimizer is based on vectorization and a conventional genetic algorithm. However, it remains unanswered how to decide the weights of the objectives.

In the field of intelligent optimization, another outstanding computational paradigm, Artificial Immune Systems, has been studied exhaustively, especially immune optimization. Many researchers have contributed greatly to exploiting the inherent metaphors of the immune system in order to solve multi-modal optimization problems, and accordingly many immune genetic or immune optimization algorithms have been developed [11–15]. These algorithms are designed specially to deal only with either single- or multi-objective problems. Concerning the relation between bio-inspiration and existing immune-based optimization algorithms, one can readily see that the immune metaphors of density and vaccine are usually cited to improve evolutionary algorithms [16, 17], and that the three immune principles of negative selection, clonal selection and immune network are borrowed to explore immune optimization algorithms.

The negative selection principle and the concept of vaccine have been preliminarily taken into account for function optimization [18–20], especially constrained optimization. For instance, in Aragón [18], a T-cell model, based on the process of T-cell response, was presented in the context of constrained optimization. In this model, a dynamic tolerance factor was designed specially to identify the constraint-violation degree of an individual, while several mutation operators were developed to evolve different types of cells so that better cells could be found as fast as possible. In [20], the concept of vaccine was utilized skillfully to design an artificial immune system for multi-modal function optimization, in which, after the design space was divided into equal subspaces, a vaccine was randomly picked in each subspace. The authors claimed that it was able to achieve promising performance.

The clonal selection principle is an extremely useful bio-inspiration in artificial immune systems. A large number of immune characteristics and mechanisms such as learning, memory, diversity and affinity maturation have been widely cited to investigate immune optimization. The first optimization algorithm for multi-modal function optimization, the clonal selection algorithm originally proposed by de Castro and Von Zuben [21], originated from this principle. After this pioneering work, a series of immune-based optimization techniques for single- or multi-objective problems were established [22, 23]. In particular, some original artificial immune models were reported [24–32], among which a few constraint-handling techniques have been successfully applied to constrained multi-objective problems [31, 32]. More details can be found in [26, 27]. In addition, more recent studies indicate that multi-objective immune optimization is a competitive and active research topic [33–37].

In connection with the clonal selection principle and immune network theory, de Castro and Von Zuben [38] suggested an artificial immune network (aiNet) for data analysis and clustering. Thereafter, in order to handle multi-modal function optimization in static or dynamic environments, their group, together with Timmis, extended aiNet into two valuable versions (opt-aiNet and dopt-aiNet) with slight differences [39, 40]. Nevertheless, although immune optimization has been investigated intensively, few achievements useful for omni-optimization have been found in the literature. Coelho and Von Zuben [41] suggested an improved artificial immune network (omni-aiNet) for this kind of optimization, which extended opt-aiNet [39]. omni-aiNet and opt-aiNet share some main mechanisms, but there are some essential differences. In omni-aiNet, one reported mechanism, gene duplication, is introduced to enhance the ability of evolution. The fundamental experiments in the present study hint that this algorithm shows potential for single-objective problems with fewer constraints, but it also exposes some drawbacks when solving multi-objective problems.

In the present work, an artificial immune optimization system for omni-optimization with nonlinear constraints, written omni-AIOS for short, is developed. It is not an extension of previous work; in particular, it differs from omni-optimizer and omni-aiNet [7, 41]. More details can be found in Sect. 4.3. In omni-AIOS, in order that the current population can be divided rapidly into subclasses with different levels, an evaluation index is first designed specially to decide the importance of each individual in the population, which helps omni-AIOS enhance the speed of sorting. Second, after each subclass eliminates redundant individuals by means of a suppression radius index, the surviving individuals in each subclass proliferate their clones, where the clonal size of each individual is the number of individuals suppressed by it. Simulations illustrate that omni-AIOS is a competitive optimizer for omni-optimization with nonlinear constraints.

2 Preliminaries and basic theoretical results

2.1 Problem formulation and basic concepts

Consider the following constrained omni-optimization (P):

$$ \begin{array}{ll} \underset{{\bf x}\in \Omega}{\text{Min}}\ {\bf f}({\bf x})=\left(f_{1}({\bf x}),f_{2}({\bf x}),\ldots,f_{M}({\bf x})\right)\\ \text{s.t.}\ g_{i}({\bf x})\leq 0,\ h_{j}({\bf x})=0,\ 1\leq i\leq I,\ 1\leq j\leq J,\\ \end{array} $$

with M ≥ 1, where \(f_{i}({\bf x})\) is the i-th sub-objective function with 1 ≤ i ≤ M, and \(g_{i}({\bf x})\) and \(h_{j}({\bf x})\) are the nonlinear constraint functions with 1 ≤ i ≤ I and 1 ≤ j ≤ J; \(\Omega\) is a bounded and closed domain in \(R^{p}\). \({\bf x}\in \Omega\) is said to be a feasible solution if it satisfies the above constraints; otherwise, it is called an infeasible solution. For \({\bf x}, {\bf y}\in \Omega\), x is said to dominate y in the objective space, written \({\bf x}\prec_{f} {\bf y}\), if \(f_{i}({\bf x})\leq f_{i}({\bf y})\) for all 1 ≤ i ≤ M and there exists l with 1 ≤ l ≤ M such that \(f_{l}({\bf x})<f_{l}({\bf y})\); in particular, when M = 1, \({\bf x}\prec_{f} {\bf y}\) is equivalent to f(x) < f(y). Notice that it is easy to prove that the dominance relation between candidates is transitive. Next, let \(\Gamma({\bf x})\) be a penalty-based violation function which measures the magnitude of the constraint violations of candidate x. For simplicity, it is defined as follows in this paper.

$$ \Gamma({\bf x})=\sum_{i=1}^{I}\max\{g_{i}({\bf x}),0\}+\sum_{j=1}^{J}h_{j}^{2}({\bf x}). $$
(1)

Here, if \(\Gamma({\bf x})<\Gamma({\bf y})\), candidate x is said to be better than candidate y. The concept of constraint-dominance [42] is then cited to compare two candidates for problem (P).

Definition 2.1

Let \({\bf x},{\bf y}\in \Omega\). It is said that x constraint-dominates y (written \({\bf x}\prec_{c} {\bf y}\)) if one of the following conditions holds:

(a) x and y are feasible, and \({\bf x}\prec_{f}{\bf y}\);

(b) x is feasible but y is not;

(c) x and y are infeasible, and \(\Gamma({\bf x})<\Gamma({\bf y})\).

Based on the above definition, one can see that for arbitrary candidates x and y in \(\Omega\), exactly one of \({\bf x}\prec_{c}{\bf y}\), \({\bf y}\prec_{c}{\bf x}\) and \({\bf x}|_{c}{\bf y}\) holds, where \({\bf x}|_{c}{\bf y}\) denotes \({\bf x}\not\prec_{c}{\bf y}\) and \({\bf y}\not \prec_{c}{\bf x}\).
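As a concrete illustration, the violation function of Eq. 1 and the constraint-dominance test of Definition 2.1 can be sketched in Python as follows (the function names and the list-of-callables constraint representation are illustrative assumptions, not part of the original formulation):

```python
def violation(x, ineq, eq):
    """Penalty-based violation of Eq. 1: sum of the positive parts of the
    inequality constraints g_i(x) plus the squared equality constraints h_j(x)."""
    return (sum(max(g(x), 0.0) for g in ineq)
            + sum(h(x) ** 2 for h in eq))

def dominates_f(fx, fy):
    """Objective-space dominance: fx is no worse in every objective
    and strictly better in at least one."""
    return (all(a <= b for a, b in zip(fx, fy))
            and any(a < b for a, b in zip(fx, fy)))

def constrained_dominates(x, y, f, ineq, eq):
    """Definition 2.1: does x constraint-dominate y?"""
    gx, gy = violation(x, ineq, eq), violation(y, ineq, eq)
    x_feas, y_feas = gx == 0.0, gy == 0.0
    if x_feas and y_feas:            # case (a): both feasible
        return dominates_f(f(x), f(y))
    if x_feas != y_feas:             # case (b): exactly one feasible
        return x_feas
    return gx < gy                   # case (c): both infeasible
```

Note that exactly one of the three cases applies to any pair, mirroring the trichotomy remarked above.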

Definition 2.2

\({\bf x}\in \Omega\) is called an optimal solution (M = 1) or a Pareto-optimal solution (M > 1) if and only if there does not exist \({\bf y}\in \Omega\) such that \({\bf y}\prec_{c} {\bf x}\). Let X be a finite subset of \(\Omega\); \({\bf x}\in X\) is said to be a best or nondominated individual with respect to X if there is no \({\bf y}\in X\) such that \({\bf y}\prec_{c} {\bf x}\).

2.2 Evaluation index and basic properties

One can easily see from Definition 2.1 that the constraint-dominance relation \(\prec_{c}\) is transitive, i.e., if \({\bf x} \prec_{c}{\bf y}\) and \({\bf y} \prec_{c}{\bf z}\), then \({\bf x} \prec_{c}{\bf z}\). Further, let m(x) stand for the set of members of the finite subset X dominated by x in the sense of constraint-dominance; by transitivity, \(m({\bf y})\subset m({\bf x})\) if \({\bf x}\prec_{c}{\bf y}\). Also, let \(\Omega^{\prime}\) represent the set of feasible candidates in X. Based on these preliminaries, define a bivariate function \(B_{etter}({\bf x},{\bf y})\),

$$ B_{etter}({\bf x},{\bf y})=\left\{\begin{array}{ll} 1&\quad {\bf x}\in\Omega^{\prime},\ {\bf x}\prec_{c} {\bf y},\\ 0&\quad ({\bf x}\in\Omega^{\prime},\ {\bf x}\not\prec_{c} {\bf y}) \vee ({\bf x}\notin\Omega^{\prime},\ {\bf x}\prec_{c} {\bf y}),\\ -1&\quad {\bf x}\notin\Omega^{\prime},\ {\bf x}\not\prec_{c} {\bf y}. \end{array} \right. $$
(2)

In this way, an evaluation index function \(L_{evel}(\cdot): X\rightarrow R\), which decides the importance of each individual in X, is given by

$$ L_{evel}({\bf x})=\sum_{{\bf z}\in X}B_{etter}({\bf x},{\bf z}). $$
(3)

Especially, by Definition 2.1, one knows that Eq. 3 is equivalent to the following formula,

$$ L_{evel}({\bf x})=\left \{\begin{array}{ll} |m({\bf x})|,&\quad {\bf x}\ {\rm is\ feasible},\\ |m({\bf x})|-|X|,&\quad {\bf x}\ {\rm is\ infeasible}, \end{array}\right. $$
(4)

where |Z| represents the number of elements in set Z. Following the design of the evaluation index, the following properties are acquired, of which only the proof of Theorem 2.1 is given in "Appendix 1".
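Based on Eq. 4, the evaluation index can be sketched as a direct O(N²) computation over a finite population (a minimal sketch; the helper names cdom and feasible are assumptions, and the actual algorithm may organize this computation differently):

```python
def level_index(pop, cdom, feasible):
    """Evaluation index of Eq. 4: for each individual x,
    Level(x) = |m(x)| if x is feasible, and |m(x)| - |X| otherwise,
    where m(x) is the set of members constraint-dominated by x.

    pop      -- list of individuals (the finite set X)
    cdom     -- cdom(x, y) -> bool, constraint-dominance of Definition 2.1
    feasible -- feasible(x) -> bool
    """
    n = len(pop)
    levels = []
    for i, x in enumerate(pop):
        # |m(x)|: how many other members x constraint-dominates
        m = sum(1 for j, z in enumerate(pop) if j != i and cdom(x, z))
        levels.append(m if feasible(x) else m - n)
    return levels
```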

Theorem 2.1

Given \({\bf x},{\bf y}\in X\), if \(L_{evel}({\bf x})>L_{evel}({\bf y})\), then \({\bf y} \not\prec_{c}{\bf x}.\)

This theorem hints that x is not worse than y if \(L_{evel}({\bf x})>L_{evel}({\bf y})\). Further, one can conveniently decide whether a candidate in X is superior to another through the following corollary.

Corollary 2.1

Given \({\bf x},{\bf y}\in X,\) there are the following properties according to whether x and y are feasible:

(a) if x is feasible, then \(L_{evel}({\bf x})\geq 0\); otherwise, \(L_{evel}({\bf x})<0\);

(b) if x and y are feasible, then \({\bf x}\prec_{c}{\bf y} \Leftrightarrow L_{evel}({\bf x})>L_{evel}({\bf y})\) and \({\bf x}|_{c}{\bf y} \Leftrightarrow L_{evel}({\bf x})=L_{evel}({\bf y})\);

(c) if x and y are infeasible, then \(L_{evel}({\bf x})\leq L_{evel}({\bf y})\Rightarrow {\bf x} \not\prec_{c }{\bf y}.\)

3 Clonal selection principle

The natural immune system is one of the most important defense systems of living bodies. When an antigen invades the organism, the receptors of specific lymphocytes, e.g., B, T and memory cells, commence responding to the antigen, owing to their intrinsic function of immune protection. The repetitive process of recognition, proliferation, learning, memory and combat against such an antigen guarantees that a large number of plasma and memory cells are created. During this process of immune response, matured B cells emerge in that the activated ones undergo a repetitive process of affinity maturation after cloning. Thereafter, these matured B cells secrete antibodies capable of neutralizing the intruding antigen, among which some become memory cells which circulate between blood, lymph and tissues. If the living body encounters the same or similar antigens once again, the memory cells can rapidly destroy them. Such a response can be explained by the two well-known principles of clonal selection and immune network. The clonal selection principle formulates a process of immune evolution by which B cells react to the invading antigen. First, some B cells that best match the antigen are selected to proliferate by cloning. The clones then differentiate into plasma and memory cells; the memory cells enable the immune system to make a faster response. Second, the plasma cells undergo a process of affinity maturation. Subsequently, a suppression mechanism is applied to eliminate redundant cells.

The above immune response theory is a useful bio-inspiration that can be utilized to design omni-AIOS for omni-optimization. To this end, the suppression-based self-maintenance of diversity is devoted to finding diverse candidates, while the metaphors of antibody evolution above contribute to searching simultaneously for multiple high-quality candidates.

4 Formulation of omni-AIOS and designs of mechanisms

4.1 omni-AIOS’ formulation

Let antigen Ag stand for problem (P) as in Sect. 2.1, and let antibody Ab be viewed as a real-coded feasible or infeasible candidate. Memory cells represent the best antibodies without any constraint violation found up to the current population. omni-AIOS can be described in detail below.

In the above formulation, steps 5 and 6 aim at rapidly sorting the current population into subclasses, in which the elements of each subclass have the same importance. Step 10 eliminates identical or similar elements through the information on antibody distribution, while proliferating clones by means of the surviving antibodies. Steps 10 to 11 create new antibodies for each subclass. Steps 4 to 16 constitute a loop of immune evolution which finds the desired solutions.

4.2 Module illustration

In the case of multi-objective optimization as in problem (P), i.e., M > 1, Corollary 2.1 in Sect. 2.2 provides a guideline for rapidly dividing a finite population Z into subclasses; however, when M = 1, another segmentation method is needed to divide population Z. All of this can be achieved by the following module.

In the above module, after ranking the antibodies by means of Eq. 3, \(A_{n}\) is split into subclasses. Note that in the case of M > 1, if there is at least one feasible antibody in \(F_{1}\), all elements in this subclass are feasible. The notation d(x, y) represents the Euclidean distance between x and y. In addition, from the viewpoint of computational complexity, the maximal computational complexity is \(O(N\log N^{2})\), owing to the sorting method.

In Update(M emory , F 1), those elements in F 1 without any constraint violation are admitted into M emory . Further, identical or dominated elements are eliminated in terms of the above concept of constraint-dominance. Thereafter, if the size of the memory set exceeds M e with M e  ≤ N, the well-known crowding distance method is used to delete the redundant elements. Notice that the distance measure in this method is executed in the objective space, but in the design space when a multi-global optimization problem is solved.
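The crowding distance referred to above is the classical NSGA-II procedure [3]; a minimal sketch over a list of coordinate tuples (objective values, or design-space coordinates when a multi-global problem is solved) might look as follows (a hypothetical helper, not the authors' code):

```python
def crowding_distance(points):
    """Classical NSGA-II crowding distance over the supplied coordinates.
    Boundary points get infinite distance; interior points accumulate the
    normalized side-length of the cuboid formed by their neighbours."""
    n = len(points)
    if n == 0:
        return []
    m = len(points[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: points[i][k])
        lo, hi = points[order[0]][k], points[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if hi == lo:
            continue  # degenerate coordinate: no spread to normalize by
        for r in range(1, n - 1):
            dist[order[r]] += (points[order[r + 1]][k]
                               - points[order[r - 1]][k]) / (hi - lo)
    return dist
```

Truncation then keeps the individuals with the largest crowding distances, so boundary and sparsely surrounded candidates survive.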

Proliferate(F j ) keeps only the diverse antibodies in F j and decides their clonal sizes. To this end, when M > 1, a suppression radius ρ j (x) of \({\bf x}\in F_{j}\) is designed as follows,

$$ \rho_{j}({\bf x})=\frac{\sigma_{j}}{1+\sigma_{j}+d_{\rm min}({\bf x})}, $$
(5)

where d min(x) is the Euclidean distance between x and x's nearest element in F j ; σ j is the variance of the distances d min(x) with \({\bf x}\in F_{j}\), which reflects the diversity information of F j . Subsequently, in relation to Eq. 5, the set of antibodies in the neighborhood of x is defined as

$$ c_{j}({\bf x})=\{ {\bf y}\in F_{j}: d({\bf x},{\bf y})<\rho_{j}({\bf x})\}; $$
(6)

so |c j (x)| is viewed as the clonal size of x. Notice that once some antibody, e.g., y, belongs to the neighborhood of x, |c j (y)| is set to 0; in this case, y is said to be suppressed by x. After that, those antibodies in F j which are not suppressed by others proliferate their clones through

$$ F_{j}^{\prime}=\{({\bf x},|c_{j}({\bf x})|): {\bf x}\in F_{j}\}. $$
(7)

Besides, when M = 1, the best element in F j , e.g., x 0, is picked, and accordingly \(F_{j}^{\prime}=\{({\bf x}_{0},|F_{j}|)\}\).

In the above scheme, when M > 1, different antibodies are allocated different suppression radii, which effectively avoids the difficulty of deciding an adjustable population-wide suppression radius. Equations 5 and 6 hint at an important idea: the larger the subclass variance, the more antibodies are suppressed; conversely, the fewer antibodies are suppressed.
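Putting Eqs. 5 to 7 together, the proliferation step for one subclass can be sketched as follows. This is a sketch under stated assumptions: the population variance of the d_min values is used for σ_j, a surviving antibody's own neighborhood membership counts toward its clonal size, and when two antibodies mutually fall in each other's neighborhoods the earlier one in the list survives; none of these tie-breaking details are fixed by the text.

```python
import math

def proliferate(subclass):
    """Suppression-and-cloning sketch of Eqs. 5-7 for the case M > 1.
    subclass: list of design-space vectors (tuples).
    Returns [(x, clone_size)] for the surviving (unsuppressed) antibodies."""
    n = len(subclass)
    # d_min(x): distance from x to its nearest neighbour in the subclass
    d_min = [min(math.dist(x, y) for j, y in enumerate(subclass) if j != i)
             for i, x in enumerate(subclass)]
    mean = sum(d_min) / n
    sigma = sum((d - mean) ** 2 for d in d_min) / n   # variance of d_min (assumption: /n)

    # suppression radius of Eq. 5, one per antibody
    rho = [sigma / (1 + sigma + d) for d in d_min]

    # |c_j(x)| of Eq. 6: antibodies inside x's neighbourhood (self included)
    clone = [sum(1 for y in subclass if math.dist(x, y) < rho[i])
             for i, x in enumerate(subclass)]

    # an antibody inside a survivor's neighbourhood is suppressed (clones nothing);
    # earlier antibodies take precedence (an assumption)
    suppressed = set()
    for i, x in enumerate(subclass):
        if i in suppressed:
            continue
        for j, y in enumerate(subclass):
            if j != i and math.dist(x, y) < rho[i]:
                suppressed.add(j)
    return [(subclass[i], clone[i]) for i in range(n) if i not in suppressed]
```

With this bookkeeping, a survivor's clonal size equals one plus the number of antibodies it suppresses, so the total number of clones stays close to the subclass size.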

Mutate(F j ′) keeps only the mutated clones. In other words, all clones in F j ′ change their genes with the same mutation rate p m through the polynomial mutation [3], namely \({\bf x}'={\bf x}+\gamma_{j}\Delta_{\rm max}\), where γ j is defined as follows:

$$ \gamma_{j}=\left\{\begin{array}{ll} (2u)^{\frac{1}{\eta_{j}+1}}-1,&\quad {\rm if}\,u<0.5,\\ 1-[2(1-u)]^{\frac{1}{\eta_{j}+1}},&\quad {\rm if}\,u\geq 0.5,\\ \end{array}\right. $$
(8)

with a uniformly distributed random variable \(u\in (0,1)\). Notice that \(\eta_{j}=\mu(1+\frac{j-1}{L_{p}})\) with μ > 1, where L p is the number of subclasses obtained online in step 6 of omni-AIOS.
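A hedged sketch of this mutation scheme, assuming Δ_max is taken as the width of each variable's bounds (the text does not specify Δ_max here), is:

```python
import random

def polynomial_mutation(x, lower, upper, eta, p_m, rng=random):
    """Polynomial mutation of Eq. 8: each gene mutates with rate p_m as
    x' = x + gamma * delta_max, where delta_max is taken here as the width
    of the variable's bounds (an assumption)."""
    child = list(x)
    for k in range(len(child)):
        if rng.random() >= p_m:
            continue                       # this gene is left unchanged
        u = rng.random()
        if u < 0.5:
            gamma = (2 * u) ** (1.0 / (eta + 1)) - 1
        else:
            gamma = 1 - (2 * (1 - u)) ** (1.0 / (eta + 1))
        delta_max = upper[k] - lower[k]
        # perturb and clip back into the box constraints
        child[k] = min(max(x[k] + gamma * delta_max, lower[k]), upper[k])
    return child
```

Since γ lies in (−1, 1) and concentrates near 0 as η grows, larger distribution indices η_j (later subclasses) yield smaller, more local perturbations.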

Select\((F_{1}\cup C^{\ast})\) selects better and diverse antibodies from the combined population \(F_{1}\cup C^{\ast}\). Let F f (X) and F inf (X) be composed of the feasible and infeasible antibodies in X, respectively. The desired set is then decided through the following pseudo-code.

In the above module, F non (X 1) comprises the nondominated antibodies in X 1 if M > 1 and the distinct ones otherwise. Also, Crowding(X, m) keeps only m antibodies in X by means of the crowding distance method, in which the distance measure is the same as that in Update(M emory , F 1).

4.3 Comparative analysis

This section presents the main differences among omni-optimizer [7], omni-aiNet [41] and omni-AIOS. Besides their schemes of constraint-handling and mutation, these three approaches have apparently different characteristics and performances, owing to their bio-inspirations and design ideas. The major differences are listed below.

  • Bio-inspiration. omni-optimizer is based on the theory of evolution and genetics. omni-aiNet is related to the clonal selection principle, immune network theory and DNA-based gene duplication. omni-AIOS is an immune optimization algorithm based only on the clonal selection principle. Although omni-aiNet and omni-AIOS share the biological inspirations of cloning, hypermutation and selection, their operation schemes, except for the similar mutation, are greatly different, as can be seen below.

  • Parameter settings. omni-AIOS demands 6 parameters, i.e., population size, memory size, maximal iteration number, suppression magnitude, distribution index for mutation and mutation rate, whereas each of omni-optimizer and omni-aiNet has 7 parameters [41]. In particular, omni-AIOS and omni-optimizer have fixed population sizes, whereas omni-aiNet performs with a dynamic population size. Additionally, omni-optimizer consists of 6 operations: tournament, crossover, mutation, ranking, crowding distance and sorting; omni-aiNet is composed of 7 operations: cloning, hypermutation, gene duplication, ranking, grid procedure, binary tournament and random insertion; omni-AIOS includes only 5 operations: memory update, fast sorting, dynamic proliferation, mutation and selection.

  • Comparison between individuals. These approaches adopt the same penalty idea to handle the constraints. omni-AIOS compares individuals in terms of the concept of constraint-dominance, whereas omni-optimizer and omni-aiNet do so through constrained \(\varepsilon\)-dominance.

  • Population sorting. omni-AIOS utilizes the dominance-based evaluation index proposed in this work to decide the levels of the individuals in the current population with computational cost \(O(N\log N^{2})\), and individuals with the same level are then put into the same subclass; on the other hand, omni-optimizer and omni-aiNet use the well-known nondominated sorting method based on \(\varepsilon\)-dominance to divide the population, whose computational cost is \(O(MN^{2})\).

  • Evolution. In the process of evolution, although omni-optimizer, omni-aiNet and omni-AIOS share the same scheme of polynomial mutation, their distribution indices for mutation, η, are determined in different fashions. In omni-optimizer, η is a fixed parameter defined by the user; omni-aiNet determines it dynamically in the interval [5, 20]; omni-AIOS decides it dynamically in the interval [μ, 2μ). It should be pointed out that omni-aiNet and omni-AIOS involve completely different proliferation schemes. Namely, omni-aiNet demands that each individual in the current population of size N proliferate a clonal subset of fixed size N c , so that the sum of the sizes of the clonal subsets is NN c , whereas omni-AIOS utilizes the niching-like suppression radius function proposed in this work to delete redundant individuals from the current population; in omni-AIOS, the surviving individuals receive their respective clonal sets with different sizes, in which the number of clones for a given surviving individual is the number of individuals suppressed by it, and hence the total number of clones is N.

  • Selection. omni-optimizer picks N better and diverse individuals from a combined population of size 3N to constitute the next population, relying upon the crowding distance method in which Euclidean distances are calculated both in the design space and in the objective space. In omni-aiNet, the process of selection is carried out in a combined population of size N(1 + N c ), by means of the conventional nondominated sorting method and the grid procedure. However, omni-AIOS selects N better individuals from a combined population of size 2N through the evaluation index above and the crowding distance method in which Euclidean distances are taken in the objective space, or only in the design space when a multi-global optimization problem is solved.

4.4 Computational complexity and convergence

In this subsection, two theoretical results are given, whose proofs are presented in "Appendix 1". Following the description of omni-AIOS above, one can see that steps 5, 6, 7, 10 and 15, which determine omni-AIOS's computational complexity, need to compare antibodies or calculate their objectives and constraints.

Theorem 4.1

The maximal complexity of omni-AIOS is O(N(N(M + I + J) + p)), where M, I, J and p are as defined in Sect. 2.1.

In order to analyze omni-AIOS's convergence, let S be the antibody space (i.e., the design space), and suppose that S is a finite set. Let \(S^{N}\) represent a state space composed of antibody populations whose sizes do not exceed N. Each element of the state space is called a state. omni-AIOS can be formulated by the evolution chain,

$$ A_{n}\rightarrow (F_{1},F_{2},\ldots)\rightarrow (F^{\prime}_{1},F^{\prime}_{2},\ldots)\rightarrow C^{\ast} \rightarrow A_{n+1}. $$

Since its operations are independent of n, the above chain is a Markov chain. Further, let \(M_{f}(S,\prec_{c})\) be the set of the optimal or Pareto-optimal solution(s) of problem (P) as in Sect. 2.1. This way, the following conclusion can be acquired.

Theorem 4.2

omni-AIOS is convergent for any initial population distribution, i.e.,

$$ \lim_{n\rightarrow \infty}Pr\left\{A_{n}\cap M_{f}(S,\prec_{c})\neq\emptyset\right\}=1. $$
(9)

5 Performance criteria

Three criteria [1, 31], suitable for multi-objective optimization, are cited below. Assume that two algorithms A and B are each executed K times on a given multi-objective problem. Accordingly, two series of solution sets are acquired, \(\{A_{k}\}_{k=1}^{K}\) and \(\{B_{k}\}_{k=1}^{K}\).

  • Average coverage ratio (ACR). ACR is a criterion which measures the overall difference in optimized quality between algorithms A and B, defined as

    $$ {\rm ACR}(A,B)=\frac{1}{K^{2}}\sum^{K}_{i,j=1}C(A_i, B_j), $$
    (10)

    where

    $$ C(A_i, B_j)=\frac{|\{y\in B_j: \exists x\in A_i, s.t. x\prec_{c} y\}|}{|B_j|}. $$
    (11)

    Equation 10 means that algorithm A has globally better solution quality than algorithm B if ACR(A, B) > ACR(B, A).

  • Average density (AD) and average coverage scope (ACS). AD can be used to evaluate the average distribution performance of the solution sets found, given by

    $${\rm AD}=\frac{1}{K}\sum^{K}_{i=1} \sqrt{\frac{1}{|A_{i}|-1}\sum^{|A_{i}|}_{j=1}(\bar{d}_{i}-d_{ij})^{2}}, $$
    (12)

    where

    $$ d_{ij}=\underset{j\neq l,1\leq l\leq|A_i|}{\rm min}\{d(x_j,x_l)|x_j,x_l\in A_i\}, \bar{d}_{i}=\frac{1}{|A_i|}\sum_{j=1}^{|A_i|}d_{ij}. $$

    Equation 12 hints that the solution sets obtained by algorithm A have a good distribution if AD is small. The average coverage scope (ACS) is the average coverage width of all the solution sets obtained by the algorithm.
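The criteria of Eqs. 10 to 12 can be sketched directly as follows (cdom stands for the constraint-dominance test of Definition 2.1; the function names are illustrative, and math.dist supplies the Euclidean distance d(·,·)):

```python
import math

def coverage(A, B, cdom):
    """C(A, B) of Eq. 11: the fraction of B constraint-dominated by
    some member of A."""
    return sum(1 for y in B if any(cdom(x, y) for x in A)) / len(B)

def average_coverage_ratio(runs_A, runs_B, cdom):
    """ACR(A, B) of Eq. 10: the mean of C(A_i, B_j) over all K^2 pairs
    of solution sets from K runs of each algorithm."""
    K = len(runs_A)
    return sum(coverage(Ai, Bj, cdom)
               for Ai in runs_A for Bj in runs_B) / (K * K)

def average_density(runs):
    """AD of Eq. 12: for each run's solution set, the sample standard
    deviation of the nearest-neighbour distances d_ij, averaged over runs."""
    def ad_one(A):
        n = len(A)
        d = [min(math.dist(A[j], A[l]) for l in range(n) if l != j)
             for j in range(n)]
        mean = sum(d) / n
        return math.sqrt(sum((mean - dj) ** 2 for dj in d) / (n - 1))
    return sum(ad_one(A) for A in runs) / len(runs)
```

Evenly spaced solution sets give AD close to zero, which is why a small AD indicates a good distribution.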

6 Numerical experiments

This section examines omni-AIOS's performance, including its search effect, efficiency and computational complexity, relying upon sixteen single- and multi-objective, uni-global benchmark problems and two single- and multi-objective, multi-global test problems listed in detail in "Appendix 2". All experiments are executed on a personal computer with a 3.0 GHz CPU and 2.0 GB of RAM. Two representative omni-optimization techniques, i.e., omni-optimizer and omni-aiNet, proposed by Deb and Tiwari [7] and by Coelho and Von Zuben [41], respectively, are chosen for comparison against omni-AIOS. In order to list the statistical results, the memory set M emory as in Sect. 4.2 is also inserted into omni-optimizer and omni-aiNet; it is used to store and update the solutions found. In addition, omni-AIOS and omni-optimizer have the same population size of 60; in omni-aiNet, the initial population size is also set to 60, and the number of generations between suppressions is designated as 1, the purpose of which is to ensure that the three approaches have the same termination criterion. The same termination iteration for the three methods is specified according to the different kinds of optimization problems below. The remaining parameter settings of omni-optimizer and omni-aiNet are taken from the references [7, 41]. For omni-AIOS, through experimental tuning, reasonable value ranges of β, μ and p m were obtained for all the test problems, i.e., \(\beta \in (0.1,0.6),\,\mu\in [5, 10]\) and \(p_{m} \in [0.8,1]\). The values of these parameters differ according to the different kinds of test problems, as detailed in the following subsections.

6.1 Single-objective, uni-global constrained optimization

In this subsection, the three algorithms are tested on the eight benchmark problems g01 to g08 given in "Appendix 2"; in particular, the dimensions of g02 and g03 are taken as 20. Since the problems g01 to g04 are solved relatively easily, all the algorithms take a termination iteration of 2000 for these four problems and 5000 otherwise; the size of M emory is 1. In omni-AIOS, take β = 0.5, μ = 10 and p m  = 0.8. Following these preliminaries, the three algorithms are each executed for 100 runs on each test problem. The acquired statistical results are listed in Tables 1 and 2 below.

Table 1 Statistical results obtained by omni-AIOS with 100 runs

Tables 1 and 2 show that all the approaches can deal effectively with g01, g03 and g08. For g02 and g04, omni-aiNet and omni-AIOS acquire better results than omni-optimizer. Notice that although g05 has no feasible solution, omni-optimizer and omni-AIOS can both find suboptimal solutions, and omni-AIOS presents a relatively stable search effect as indicated by the values of Mean and SD; however, omni-aiNet can only acquire solutions with somewhat large constraint violations. When solving g06, omni-aiNet achieves the best effect, and omni-AIOS is second; when dealing with g07, either omni-optimizer or omni-aiNet can find better solutions than omni-AIOS, but the values of Best and Mean illustrate that omni-AIOS acquires a stable effect over the 100 runs. On the other hand, from the runtimes of the algorithms in Tables 1 and 2 for each test problem, one can see that the runtime of omni-AIOS is at most 63.29% of that required by omni-aiNet for any of g01, g02, g03 and g04, and 33.78% for each of the other test problems. It is also easy to see that omni-AIOS is globally faster than omni-optimizer when solving the test problems.

Table 2 Statistical results gotten by omni-optimizer and omni-aiNet with respectively 100 runs

In summary, omni-AIOS can obtain relatively satisfactory effects for all the test problems except g06, while taking less time to solve each problem. Of the two algorithms compared, omni-aiNet spends more time dealing with these problems except g05, while their performance effects have slight differences.

6.2 Multi-objective, uni-global constrained optimization

Depending on the three performance criteria in Sect. 5, the abilities of the three algorithms to solve constrained multi-objective problems are examined by means of a suite of difficult problems CTP1 to CTP7 and one difficult engineering optimization problem SPR, as in "Appendix 2". The size of M emory is set to 50. Further, each of the above algorithms is executed for 30 runs on each of the problems. In omni-AIOS, take β = 0.5, μ = 10 and p m  = 1. The dimensions of the theoretical test problems CTP1 to CTP7 are all designated as 100; the purpose of doing so is to observe whether the above three algorithms can efficiently solve high-dimensional optimization problems with constraints. In particular, take J = 30 in CTP1; namely, CTP1 includes 30 constraints. Thereafter, the statistical values are listed in Table 3 below, while the nondominated points found by the three algorithms, with only one execution each for each test problem, are drawn in Fig. 1 below. The following conclusions can be drawn from Table 3 and Fig. 1.

Table 3 Comparison of statistical results of nondominated sets found with 30 executions. ACR, AD and ACS are mentioned in Sect. 5
Fig. 1 Comparison of nondominated fronts for CTP1, CTP2, …, CTP7 and SPR

The experimental results show that omni-optimizer, omni-aiNet and omni-AIOS all find feasible solutions for each of the theoretical test problems CTP1 to CTP7, but their solution qualities differ significantly. The ACR values in Table 3 show that, for each theoretical test problem except CTP1 and CTP6, the nondominated set obtained by omni-AIOS covers almost 98% of that obtained by either of the other two algorithms; in turn, for each of these problems except CTP6, the nondominated sets found by omni-optimizer cover almost 92% of those obtained by omni-aiNet. These results indicate that omni-AIOS performs best on the above high-dimensional multi-objective optimization problems. Moreover, the AD and ACS values in Table 3 show that the nondominated points found by omni-AIOS in the objective space have a satisfactory distribution and a wide coverage scope; the nondominated points obtained by omni-AIOS and omni-aiNet are similarly distributed on some test problems. It should be pointed out that, although omni-optimizer appears to find better-distributed nondominated points than either omni-AIOS or omni-aiNet on some problems, many of the nondominated points it finds are in fact highly similar, as can be seen in all panels of Fig. 1 except Fig. 1(8).
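The distribution quality discussed above is typically assessed with a spread measure over the front. As one illustrative possibility, and not necessarily the AD criterion of Sect. 5, Schott's spacing metric takes the standard deviation of nearest-neighbour (Manhattan) distances on a front; smaller values indicate a more uniform distribution of nondominated points.

```python
import math

def spacing(front):
    """Schott's spacing: std-dev of each point's distance to its nearest
    neighbour on the front (Manhattan distance in objective space)."""
    d = []
    for i, p in enumerate(front):
        d.append(min(sum(abs(x - y) for x, y in zip(p, q))
                     for j, q in enumerate(front) if j != i))
    mean = sum(d) / len(d)
    return math.sqrt(sum((x - mean) ** 2 for x in d) / (len(d) - 1))
```

A perfectly evenly spaced front yields a spacing of zero; clusters of nearly identical points, such as those omni-optimizer produces here, inflate the apparent uniformity of other measures while this one exposes them through uneven neighbour distances.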

For the engineering optimization problem, all three algorithms find feasible solutions, but Table 3 shows that omni-AIOS achieves the best solution quality: each nondominated set it finds covers 100% of those obtained by omni-optimizer and omni-aiNet, as Fig. 1(8) also illustrates. Furthermore, the nondominated sets obtained by omni-AIOS have the widest coverage scope and a satisfactory distribution compared with those of the other two algorithms. For this problem, omni-aiNet obtains better solution quality than omni-optimizer, as each nondominated set found by the former covers 75% of that found by the latter. Thus, omni-AIOS performs best on this problem, with omni-aiNet second.

In addition, although the three algorithms all adopt the constraint-dominance concept to handle constraints, their efficiencies differ. Table 3 shows that omni-AIOS is faster than either of the other two algorithms in searching for Pareto-optimal solutions on every test problem; in particular, its average runtime is at most half of that required by omni-aiNet on CTP1, CTP6, CTP7 and SPR, while on the remaining problems the average runtimes of omni-aiNet and omni-AIOS are almost the same. Furthermore, omni-AIOS spends at most 1/12 of the runtime required by omni-optimizer on each test problem. The main reason is that the evaluation index proposed in this work greatly reduces omni-AIOS' computational complexity, whereas the experiments show that omni-optimizer has high complexity because it adopts the crowding distance method in both the design space and the objective space.
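The constraint-dominance concept shared by the three algorithms follows Deb's constraint-domination principle: a feasible solution beats an infeasible one, two infeasible solutions are compared by total constraint violation, and two feasible solutions are compared by ordinary Pareto dominance. A minimal sketch, with our own naming and minimization assumed:

```python
def constraint_dominates(a, b):
    """a and b are (objectives, total_violation) pairs.
    Deb's constraint-domination:
      1. feasible (violation == 0) beats infeasible;
      2. between two infeasible solutions, less violation wins;
      3. between two feasible solutions, Pareto dominance decides."""
    (fa, va), (fb, vb) = a, b
    if va == 0 and vb > 0:
        return True
    if va > 0 and vb == 0:
        return False
    if va > 0 and vb > 0:
        return va < vb
    # both feasible: Pareto dominance on the objective vectors
    return all(x <= y for x, y in zip(fa, fb)) and any(x < y for x, y in zip(fa, fb))
```

This rule needs no penalty coefficients, which is why all three algorithms can share it; the efficiency differences reported above come from the selection machinery around it, not from the rule itself.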

In summary, the three algorithms behave differently on the above multi-objective problems. omni-AIOS obtains more satisfactory results with higher efficiency than either omni-optimizer or omni-aiNet, while omni-optimizer achieves better results than omni-aiNet but at a high computational cost.

6.3 Single-objective, multi-global optimization

Here, the optimization problem G01, proposed by Deb and Tiwari [7], is listed in "Appendix 2". It has twenty optimal solutions \(2k-\frac{1}{2}\) with \(k=1,2,\ldots,20\). Each of the above algorithms is executed 20 times with the same termination criterion of 500 iterations. The size of Memory is set to 20. In omni-AIOS, take β = 0.3, μ = 6 and p_m = 0.3.

Although all three algorithms find the minimum in a single run, they differ slightly in the sizes of the solution sets found and in computational time. omni-optimizer, omni-aiNet and omni-AIOS find 15, 17 and 20 optimal solutions, respectively, as shown in Fig. 2(1) below; omni-aiNet finds more optimal solutions in some runs. On average, they spend 1.31, 1.28 and 1.29 s, respectively, to find the optimal solutions within one run. In summary, omni-AIOS is also competitive on this problem.
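Counting how many of the twenty optima a run has located amounts to checking, for each known optimum, whether some final solution lies within a small tolerance of it. A simple sketch of this bookkeeping follows; the tolerance value is our choice, not taken from the paper.

```python
def count_optima_found(solutions, optima, tol=1e-2):
    """Count how many known optima have at least one solution within tol."""
    return sum(1 for o in optima if any(abs(s - o) <= tol for s in solutions))

# G01's reported optima lie at 2k - 1/2 for k = 1, ..., 20
optima = [2 * k - 0.5 for k in range(1, 21)]
```

Two solutions converging to the same optimum count once, so this measures how many distinct basins the population has covered rather than raw population size.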

Fig. 2
figure 2

(1) Comparison of the sizes of the solution sets found for G01; the symbols +, x and o denote the minimal points obtained by omni-optimizer, omni-aiNet and omni-AIOS, respectively. (2) Nondominated fronts found for G02

6.4 Multi-objective, multi-global optimization

The bi-objective optimization problem G02 with dimension 10, proposed by Deb and Tiwari [7] and given in "Appendix 2", is used to examine the above algorithms with the same termination criterion of 500 iterations. The size of Memory is 50. In omni-AIOS, take β = 0.5, μ = 10 and p_m = 1. The problem is solved as in Sect. 6.2, and the resulting statistics are listed in Table 4 below; the Pareto-fronts acquired in a single execution of each algorithm are drawn in Fig. 2(2) below.

Table 4 Comparison of statistical results for G02

Table 4 indicates that omni-optimizer and omni-aiNet have more difficulty finding the Pareto-optimal solutions than omni-AIOS, as the nondominated points they find are far from the theoretical Pareto-front, which Fig. 2(2) makes clear. omni-AIOS achieves the best result on this problem, with omni-optimizer second. In addition, omni-optimizer spends more time on this problem than either omni-aiNet or omni-AIOS. All these results show that omni-AIOS is effective for multi-objective, multi-global optimization problems.

7 Conclusions

With the increasing need to solve real-world optimization problems, omni-optimization will become increasingly popular. This work therefore proposes an artificial immune optimization system (omni-AIOS) capable of handling omni-optimization, inspired by the dynamic characteristics and mechanisms of the immune system as well as the existing concept of constraint-dominance. In omni-AIOS, an efficient population division scheme, built on the evaluation index proposed in this work, enhances the speed of optimization, while the niching-like proliferation scheme strengthens population diversity as well as the abilities of exploration and exploitation. The theoretical results demonstrate that omni-AIOS is convergent with low computational complexity. Experimentally, it is compared against two existing omni-optimization algorithms. The experimental results demonstrate that: (1) omni-aiNet is more suitable for single-objective problems but has somewhat high computational complexity, while omni-optimizer is more useful for multi-objective problems but also has high computational complexity; and (2) compared with omni-optimizer and omni-aiNet, omni-AIOS has low computational complexity while effectively handling omni-optimization problems with nonlinear constraints and low or high dimensions.