1 Introduction

Mixed-integer nonlinear programming (MINLP) problems form an important class of nonlinear optimization problems. A MINLP is an optimization problem in which the objective function and constraints are nonlinear functions of the decision variables, with some of the decision variables restricted to integer values. If the objective function as well as the constraints are linear, the corresponding problem is called a mixed-integer linear programming (MILP) problem. A MINLP in which all the variables are integers is called an integer nonlinear programming (INLP) problem.

In the last few decades, several population-based heuristic algorithms have been designed. These algorithms attempt to find the global optimal solution of general nonconvex optimization problems, and several variants have been suggested to handle constraints and to solve mixed-integer optimization problems. Two of the major classes of algorithms on which research has focused are evolutionary algorithms and swarm-based algorithms. Evolutionary algorithms mimic the process of natural evolution; genetic algorithms, genetic programming, evolution strategies, and evolutionary programming belong to this class.

Swarm intelligence is based on the group behaviour of simple individuals (swarms): individuals may not show intelligence independently, but as a group they exhibit intelligent behaviour. It is the intelligent behaviour of a system that emerges from the cooperative interaction of its components towards a goal that may not be achievable by individual effort. Ant colony optimization, particle swarm optimization, and the artificial bee colony algorithm are among the most popular swarm intelligence-based techniques for solving optimization problems.

Evolutionary algorithms and swarm intelligence have been quite successful in solving many engineering applications involving highly nonlinear, nonconvex, nondifferentiable, and multimodal models, and a variety of modifications have been proposed to tackle MINLPs. Many EA-based algorithms have been used effectively to solve MINLP problems [2, 3].

Lin [4] designed a memetic algorithm combined with an evolutionary Lagrange method for solving MINLPs. Lin et al. [5] proposed a hybrid differential evolution method to solve MINLPs; it works in two phases, an accelerated phase and a migrating phase, used to balance exploration and exploitation. Later, a modified coevolutionary hybrid differential evolution for MINLPs was suggested by Lin et al. [6]. Yan et al. [7] introduced a memory-based lineup competition algorithm with cooperation and a bi-level competition mechanism for exploration and exploitation. Xiong et al. [8] introduced a hybrid genetic algorithm for finding a global compromise solution of a mixed-discrete fuzzy nonlinear program. Cheung et al. [9] developed a hybrid algorithm combining a genetic algorithm and grid search to solve MINLPs.

Cardoso et al. [10] presented a modified simulated annealing method (M-SIMPSA), which uses a combination of simulated annealing and the Nelder–Mead simplex method in the inner loop and the Metropolis algorithm [11,12,13] in the outer loop. Other population-based algorithms used to solve MINLPs include simulated annealing [10, 14], tabu search [15], multistart scatter search [16], and particle swarm optimization [17].

Among the above-discussed methods, genetic algorithms (GAs) [3, 9, 18, 19] are the most successful. A GA is an iterative process that works with a set of solutions (the population), which are modified by genetic operators to guide the search towards the optimum in the search space. Crossover and mutation are two of the essential operators of a GA: crossover helps to explore promising zones of the search space using information from the chromosomes (solutions), while mutation helps to avoid premature convergence by preserving sufficient diversity within the population. This work extends the recently developed RCGA BEXPM by Thakur et al. [1] to mixed-integer problems; the extension is named “MI-BEXPM”. The performance of MI-BEXPM is analysed and compared with other algorithms on twenty test problems as well as on real-life problems.

The rest of the paper is organized as follows: Sect. 2 presents the proposed algorithm for solving mixed-integer optimization problems. The experimental setup used in the current study is detailed in Sect. 3. Analysis and discussion of the results for twenty test problems are given in Sect. 4. The efficiency of MI-BEXPM in solving real-life mixed-integer optimization problems is analysed and compared with other algorithms in Sect. 5. Finally, conclusions from the comparative study are drawn in Sect. 6.

2 Proposed GA

A GA belongs to the class of population-based iterative algorithms that try to find a near-global optimal solution of an optimization problem. The search in a GA is governed by three main genetic operators, viz. selection, crossover, and mutation, which are applied iteratively to direct the search during the evolution of the population. The operators used in this study are discussed in the following subsections.

2.1 Selection

The selection operator works on the principle of survival of the fittest. It is used to discard inferior individuals from the population and to filter relatively fitter individuals into the evolution process. Applying the selection operator constructs an intermediate pool of individuals (the mating pool). Many selection techniques have been proposed in the literature; among the most widely used are ranking [20], roulette wheel [21], stochastic universal sampling (SUS) [22], and tournament [20] selection. We employ tournament selection in this work. Tournament selection randomly chooses a subset of the population and conducts a fitness-based competition among the chosen solutions; the cardinality of this subset is called the tournament size. The winner of the tournament enters the mating pool, and the process is repeated until the cardinality of the mating pool equals the population size.
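The tournament loop described above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name and the minimization convention (lower fitness wins) are our assumptions.

```python
import random

def tournament_selection(population, fitness, tour_size=3):
    """Fill a mating pool by repeated tournaments.

    population : list of candidate solutions
    fitness    : list of fitness values (lower is better, by assumption)
    tour_size  : tournament size (cardinality of the random subset)
    """
    pool = []
    n = len(population)
    while len(pool) < n:  # repeat until the mating pool equals the population size
        contestants = random.sample(range(n), tour_size)
        winner = min(contestants, key=lambda i: fitness[i])  # fitness-based competition
        pool.append(population[winner])
    return pool
```

With the tournament size equal to the population size, every tournament is won by the overall best individual, which illustrates how larger tournaments increase selection pressure.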

2.2 Crossover

The crossover operator in a GA mimics the process of chromosomal crossover in biology to produce recombinant chromosomes. Individuals from the mating pool are randomly chosen to participate in crossover with a certain probability, called the crossover probability (\(p_{c}\)). Here, we use the BEX crossover [1] to produce a pair of offspring solutions within the variable bounds from a pair of parent solutions lying within the variable bounds. BEX is a parent-centric operator with one scale parameter \(\lambda\): for small (large) values of \(\lambda\), the offspring are spread near (away from) the parents, and for a fixed value of \(\lambda\), the spread of the child chromosomes is proportional to that of the parent solutions. The steps to generate child chromosomes \(C_{1}\) and \(C_{2}\) from parent chromosomes \(P_{1}\) and \(P_{2}\) via the BEX crossover operator are as follows:

  1. Randomly choose two parent chromosomes \(P_{1}\) and \(P_{2}\) from the mating pool (the population after applying the selection operator).

  2. Randomly generate a uniform number \(r_{c} \in \left( {0,1} \right)\). If \(r_{c}\) is less than the prescribed crossover rate \(p_{c}\), crossover is applied to \(P_{1}\) and \(P_{2}\); otherwise they pass unchanged to mutation.

  3. If \(r_{c} < p_{c}\), then \(C_{1}\) and \(C_{2}\) are produced using Eqs. (1) and (2)

$$C_{1} = P_{1} + \gamma_{1} \left| {P_{2} - P_{1} } \right|$$
(1)
$$C_{2} = P_{2} + \gamma_{2} \left| {P_{2} - P_{1} } \right|$$
(2)

where

$$\gamma_{j} = \begin{cases} \lambda \ln \left\{ \exp \left( \dfrac{B_{l} - P_{j}}{\lambda \left| P_{2} - P_{1} \right|} \right) + u \left( 1 - \exp \left( \dfrac{B_{l} - P_{j}}{\lambda \left| P_{2} - P_{1} \right|} \right) \right) \right\} & \text{if } p \le 0.5 \\[2ex] -\lambda \ln \left\{ 1 - u \left( 1 - \exp \left( \dfrac{B_{u} - P_{j}}{\lambda \left| P_{2} - P_{1} \right|} \right) \right) \right\} & \text{if } p > 0.5 \end{cases}$$

for \(j \in \left\{ {1,2} \right\}\), where \(u, p \in \left( {0,1} \right)\) are uniformly distributed random numbers, \(\lambda > 0\) is a scaling parameter, and \(B_{l} = \{ B_{l}^{1} ,B_{l}^{2} , \ldots ,B_{l}^{n} \}\) and \(B_{u} = \{ B_{u}^{1} ,B_{u}^{2} , \ldots ,B_{u}^{n} \}\) are the lower and upper bounds of the decision variables.
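A direct transcription of Eqs. (1) and (2), together with the \(\gamma_{j}\) formula, for a single decision variable might look like the following sketch. The function name and default \(\lambda\) are our assumptions; consult [1] for the definitive operator.

```python
import math
import random

def bex_crossover(p1, p2, b_l, b_u, lam=0.5):
    """Sketch of BEX crossover for one decision variable.

    p1, p2 : parent values lying within [b_l, b_u]
    lam    : scale parameter (lambda); small values keep offspring
             close to the parents
    Transcribes Eqs. (1)-(2) and the gamma_j formula directly.
    """
    d = abs(p2 - p1)
    if d == 0.0:                       # identical parents: nothing to recombine
        return p1, p2

    def gamma(p_j):
        u, p = random.random(), random.random()
        if p <= 0.5:                   # first branch of the gamma_j formula
            e = math.exp((b_l - p_j) / (lam * d))
            return lam * math.log(e + u * (1.0 - e))
        e = math.exp((b_u - p_j) / (lam * d))  # second branch
        return -lam * math.log(1.0 - u * (1.0 - e))

    c1 = p1 + gamma(p1) * d            # Eq. (1)
    c2 = p2 + gamma(p2) * d            # Eq. (2)
    return c1, c2
```

Note that both logarithm arguments stay strictly positive for parents within the bounds, so the sketch cannot raise a domain error; the bounded argument of the first branch is what keeps the offspring above \(B_{l}\).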

2.3 Mutation

Mutation is inspired by biological mutation, in which a DNA sequence changes through alteration of one or more gene values of a chromosome. The mutation operator in a GA is applied to reduce the possibility of getting stuck at a local or suboptimal solution. It gives a small random perturbation to explore the neighbourhood of the current solution. Not all individuals go through the mutation phase: it is applied with a relatively small probability (\(p_{m}\)), called the probability of mutation, and tries to give the solution a random drift towards promising zones of the search space. Here, power mutation [23] is used for this purpose. The search power of the mutation is controlled by an index parameter (p): the larger (smaller) the value of p, the higher (lower) the perturbation introduced in the mutated solution. The probability of generating the mutated solution on either side of the current one is proportional to its relative position between the variable bounds. The mutated solution \((x_{j}^{k + 1})\) is produced from the current solution \((x_{j}^{k})\) as follows:

$$x_{j}^{k + 1} = \begin{cases} x_{j}^{k} - t_{j} \left( x_{j}^{k} - L_{j} \right) & \text{if } \dfrac{x_{j}^{k} - L_{j}}{U_{j} - L_{j}} \le s \\[2ex] x_{j}^{k} + t_{j} \left( U_{j} - x_{j}^{k} \right) & \text{otherwise} \end{cases}$$

Here, \(k\) refers to the current generation, \(s \in \left( {0,1} \right)\) is a uniformly distributed random number, the \(t_{j}\) (\(j \in \left\{ {1,2, \ldots ,n_{v} } \right\}\), where \(n_{v}\) is the number of decision variables) are random numbers following the power distribution, and \(L_{j}\) and \(U_{j}\) are the lower and upper bounds of \(x_{j}\).
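A one-variable sketch of this mutation step follows. We draw the power-distributed \(t\) by inverse-CDF sampling of the density \(f(t) = p\,t^{p-1}\) on \((0,1)\); this sampling choice, the function name, and the default index value are our assumptions.

```python
import random

def power_mutation(x, low, up, p_index=0.25):
    """Sketch of power mutation for one decision variable.

    t is drawn from the power distribution f(t) = p * t**(p-1)
    via inverse-CDF sampling: t = u**(1/p) for uniform u.
    s is uniform on (0, 1), as in the formula above.
    """
    t = random.random() ** (1.0 / p_index)   # power-distributed perturbation size
    s = random.random()
    r = (x - low) / (up - low)               # relative position within the bounds
    if r <= s:
        return x - t * (x - low)             # perturb towards the lower bound
    return x + t * (up - x)                  # perturb towards the upper bound
```

Because \(t \in (0,1)\), the mutated value always stays within \([L_{j}, U_{j}]\), so no repair step is needed after mutation.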

2.4 Truncation Technique

In this work, a truncation technique based on the floor and ceiling functions is applied to each \(x_{i} \in I\) (here, \(I\) refers to the set of variables having integer restrictions). Choosing randomly between the floor and the ceiling helps maintain randomness within the newly generated population and reduces the chance that real values lying between the same two successive integers are always mapped to the same integer [18, 24].

$$x_{i}^{k + 1} = \left\{ {\begin{array}{*{20}l} {\left\lfloor {x_{i}^{k} } \right\rfloor } \hfill & {{\kern 1pt} \;{\text{if}}\;\;p\; \le \;0.5\;{\kern 1pt} } \hfill \\ {\left\lceil {x_{i}^{k} } \right\rceil } \hfill & {{\kern 1pt} {\text{Otherwise}}{\kern 1pt} } \hfill \\ \end{array} } \right.$$

where \(p \in (0,1)\) is a uniform random number.
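The truncation rule amounts to a coin flip between the floor and ceiling of the real value (the function name is ours):

```python
import math
import random

def truncate_integer(x):
    """Randomly round an integer-restricted variable down or up,
    each with probability 0.5, per the truncation rule above."""
    return math.floor(x) if random.random() <= 0.5 else math.ceil(x)
```

Over repeated applications, a value such as 2.4 is mapped to both 2 and 3, which preserves diversity among offspring that share similar real-valued genes.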

2.5 Constraint Handling Technique

The parameter-free penalty (PFP) method suggested by Deb [25] is applied to handle constraints because, unlike other penalty-based constraint handling techniques, it requires no penalty parameter. This approach can be easily embedded and executed while evaluating the fitness function during the search. The fitness function using PFP is evaluated as follows

$$\varTheta \left( z \right) = \begin{cases} G\left( z \right) & \text{if } z \text{ is feasible} \\[1ex] G_{\max } + \sum\limits_{k = 1}^{r_{1}} \Gamma_{k} (z) + \sum\limits_{k = 1}^{r_{2}} \left| \zeta_{k} (z) \right| & \text{otherwise} \end{cases}$$

where \(G_{\max}\), \(\Gamma_{k}\), \(r_{1}\), \(\zeta_{k}\), and \(r_{2}\) are the worst feasible objective value, the \(k\)th inequality constraint violation, the number of inequality constraints, the \(k\)th equality constraint, and the number of equality constraints, respectively.
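A minimal sketch of the PFP fitness evaluation for a minimization problem follows. The \(g(z) \le 0\) feasibility convention and all names are our assumptions; the key property is that every infeasible point scores worse than every feasible one.

```python
def pfp_fitness(z, objective, g_ineq, h_eq, g_max):
    """Parameter-free penalty fitness (after Deb [25]) for minimization.

    g_ineq : list of inequality-constraint functions; g(z) <= 0 means
             satisfied, so the violation is max(0, g(z))  (our convention)
    h_eq   : list of equality-constraint functions; h(z) = 0 when satisfied
    g_max  : worst (largest) objective value among feasible solutions found
    """
    ineq_viol = [max(0.0, g(z)) for g in g_ineq]
    eq_viol = [abs(h(z)) for h in h_eq]
    if sum(ineq_viol) + sum(eq_viol) == 0.0:   # z is feasible
        return objective(z)
    # infeasible: worst feasible value plus the total constraint violation,
    # so infeasible points never beat feasible ones and the objective itself
    # is never evaluated for them
    return g_max + sum(ineq_viol) + sum(eq_viol)
```

For example, with objective \(z_{0}^{2}\) and the single constraint \(z_{0} \le 3\), a feasible point is scored by its objective alone, while an infeasible point is scored by \(G_{\max}\) plus how far it violates the constraint.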

3 Experimental Setup

The test bed selected for the comparative study consists of problems with real and/or integer decision variables and linear and/or nonlinear inequality constraints. The best/optimum solutions reported in the literature are summarized in Table 1.

Table 1 Problems considered for study

As discussed in [18], each problem is run 100 times with distinct initial populations. A successful run is one whose objective function value lies within 1% of the reported best/optimal solution. For each problem, the percentage of successful runs (ps) and the average number of function evaluations over successful runs (avg) are measured:

$$ps = \frac{{{\kern 1pt} T_{s} {\kern 1pt} * 100}}{{{\kern 1pt} T_{r} {\kern 1pt} }}$$
(3)
$${\text{avg}} = \frac{{{\kern 1pt} T_{f} {\kern 1pt} }}{{{\kern 1pt} T_{s} {\kern 1pt} }}$$
(4)

Here, \(T_{s}\) is the total number of successful runs, \(T_{r}\) the total number of runs, and \(T_{f}\) the sum of function evaluations over successful runs.
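Eqs. (3) and (4) can be computed from per-run records as in the following sketch. The names, data layout, and the relative-tolerance interpretation of the 1% criterion are our assumptions (and the sketch assumes a nonzero best-known value).

```python
def success_stats(run_values, run_evals, best_known, tol=0.01):
    """Compute ps (Eq. 3) and avg (Eq. 4) from per-run results.

    run_values : best objective value of each run
    run_evals  : function evaluations used by each run
    A run counts as successful if its value lies within tol (1%)
    of best_known, which must be nonzero here.
    """
    successes = [e for v, e in zip(run_values, run_evals)
                 if abs(v - best_known) <= tol * abs(best_known)]
    t_s, t_r = len(successes), len(run_values)   # Ts and Tr
    ps = t_s * 100.0 / t_r                       # Eq. (3)
    avg = sum(successes) / t_s if t_s else float("nan")  # Eq. (4), Tf / Ts
    return ps, avg
```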

The parameter settings of the MI-BEXPM algorithm used in this experiment are shown in Table 2. The population size is ten times the total number of decision variables for each problem, except for Problems 16, 17, and 18, for which it is three times.

Table 2 Parameter settings of MI-BEXPM

4 Results and Discussions

Results obtained using MI-BEXPM are compared with MILXPM, RST2ANU, and AXNUM on the basis of ps and avg for the twenty problems. Table 3 gives the percentage of successful runs of each algorithm on each of the twenty test problems. MI-BEXPM shows a \(100\%\) success rate on twelve problems, whereas MILXPM, RST2ANU, and AXNUM show a \(100\%\) success rate on only ten, eleven, and eight problems, respectively. Moreover, the success rate of MI-BEXPM is greater than \(50\%\) on every problem, while MILXPM, RST2ANU, and AXNUM have two, seven, and six problems, respectively, with success rates below \(50\%\) (RST2ANU fails to obtain the optimal solution in all 100 runs of Problem 4).

Table 3 Percentage of successful runs and average function evaluations of successful runs—twenty problems

MI-BEXPM, MILXPM, and AXNUM show equivalent success rates on Problem 1, but MI-BEXPM has the lowest average number of function evaluations (shown in Table 3). On Problems 4 and 7, MILXPM performs better in terms of success rate, but MI-BEXPM requires fewer function evaluations on average than the other algorithms. On Problem 11, MILXPM performs better than the other algorithms in terms of both \(ps\) and avg.

For an overall performance comparison of MI-BEXPM on the chosen test suite, the performance index (PI) suggested by Bharti [32] and used by several authors to compare algorithms [18, 23, 24, 33] is applied. It is based on the success rate, the average number of function evaluations, and the average execution time of successful runs. In the current study, the compared algorithms were not all run on the same machine, so execution time is not considered. Consequently,

$${\text{PI}} = \frac{1}{{P_{n} }}\sum\limits_{i = 1}^{{P_{n} }} {(k_{1} s_{1}^{i} + k_{2} s_{2}^{i} )}$$
(5)

where, for the \(i\)th problem, \(s_{1}^{i}\) and \(s_{2}^{i}\) are the ratio of \(T_{s}\) to \(T_{r}\) and the ratio of the minimum average function evaluations among the algorithms to the average function evaluations of the given algorithm, respectively. \(k_{j}\) is the weight corresponding to \(s_{j}\), \(j \in \{1,2\}\), such that \(\sum\nolimits_{j = 1}^{2} {k_{j} = 1}\); setting \(k_{1} = k\) gives \(k_{2} = 1 - k\). From Fig. 1, it can be observed that MI-BEXPM is superior to MILXPM, RST2ANU, and AXNUM in terms of \(PI\).
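Under these definitions, the PI of Eq. (5) for a set of algorithms can be sketched as follows (the dictionary layout and names are our assumptions):

```python
def performance_index(ps_table, avg_table, k=0.5):
    """Sketch of the performance index of Eq. (5).

    ps_table  : {algorithm: [Ts/Tr ratio per problem]}
    avg_table : {algorithm: [avg function evaluations per problem]}
    k weights success rate (s1); (1 - k) weights evaluation economy (s2).
    Returns {algorithm: PI}.
    """
    algs = list(ps_table)
    n_prob = len(next(iter(avg_table.values())))
    pi = {}
    for a in algs:
        total = 0.0
        for i in range(n_prob):
            s1 = ps_table[a][i]                        # Ts / Tr for problem i
            best_avg = min(avg_table[b][i] for b in algs)
            s2 = best_avg / avg_table[a][i]            # evaluation economy
            total += k * s1 + (1 - k) * s2
        pi[a] = total / n_prob                         # mean over Pn problems
    return pi
```

Both \(s_{1}^{i}\) and \(s_{2}^{i}\) lie in \([0,1]\), so PI also lies in \([0,1]\), and sweeping \(k\) from 0 to 1 traces curves like those in Fig. 1.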

Fig. 1

Performance index

5 Application of MI-BEXPM

In the previous section, we demonstrated the effectiveness of MI-BEXPM in solving benchmark test problems. In this section, we further analyse its performance on a set of mixed-integer real-life problems, chosen from among the popular problems available in the literature (stated in Table 1). The parameter settings of MI-BEXPM for these problems are the same as those discussed in Sect. 3.

The best results obtained by MI-BEXPM and those reported in the literature are shown in Table 4. From these results, it is observed that the result obtained by MI-BEXPM is better than those reported in [27, 34, 35]. In the cases where the results are similar to those of [31, 36,37,38,39], MI-BEXPM uses fewer function calls.

Table 4 Results of designing of gear train

Table 5 presents the results obtained by MI-BEXPM alongside those reported in the literature. From these results, it can easily be observed that the solutions obtained by MI-BEXPM and reported by Gandomi et al. [30] are superior to those of the remaining algorithms considered. Note that the best solution obtained by MI-BEXPM is the same as that stated in Gandomi et al. [30], but MI-BEXPM solves the problem with fewer function evaluations.

Table 5 Results of designing of reinforced concrete beam

Tables 6, 7 and 8 present the best results obtained by MI-BEXPM together with those available in the literature. The solutions obtained by MI-BEXPM for these problems are the best among the feasible solutions considered in this study.

Table 6 Results of designing of speed reducer
Table 7 Results of designing of welded beam (A)
Table 8 Results of designing of welded beam (B)

6 Conclusions

In the present study, the BEX-PM GA developed for continuous-variable constrained optimization problems is modified to solve mixed-integer constrained optimization problems. The new variant of the BEX-PM GA is named MI-BEXPM. The results obtained are compared on a set of twenty mixed-integer constrained optimization benchmark problems from the literature. It is observed that MI-BEXPM outperforms the other algorithms, MILXPM, RST2ANU, and AXNUM, individually on several problems. Its overall performance is found to be better than these algorithms on the basis of a performance index used extensively in the literature for such comparative studies.

Moreover, the performance of MI-BEXPM in solving real-life mixed-integer optimization problems has been analysed in comparison with other existing methods from the literature. The results show that MI-BEXPM performs well not only on benchmark test problems but also on real-life problems.

From the above study, it can be concluded that MI-BEXPM is a promising algorithm among the class of algorithms considered in this study for solving test problems as well as real-life problems.