1 Introduction

Optimization is the process of obtaining the best possible result for a given problem while satisfying certain constraints. Many requirements in fields such as science, mathematics, engineering and finance can be framed as optimization problems. Applications of optimization include training neural networks, tuning controllers and designing digital filters. Though many classical optimization algorithms exist, they are problem dependent and require gradient information to reach an optimum solution. Further, in some cases, classical methods may fail to attain the global optimum because they get stuck around local optima, making them unsuitable for that particular problem.

The present fast-paced engineering world is the result of continuous improvement over several centuries, achieved in particular through inspiration from the many intelligent processes that exist in nature. Indeed, understanding and modelling such processes has led to the development of many optimization techniques. These techniques have been a driving force in solving a large number of complex problems involving several variables.

Over the past few decades, metaheuristic algorithms have been introduced to solve many complex optimization problems, and such algorithms are gaining popularity for the following key reasons:

  • Simplicity Most metaheuristic algorithms are based on simple phenomena that can be represented and described by simple mathematical expressions and methods.

  • Independence from gradients Unlike traditional optimization methods such as gradient descent, metaheuristic algorithms do not use gradients in their implementation. This feature is very helpful when the function under consideration either has no gradient or its gradient is difficult to obtain.

  • Local optima avoidance Owing to their randomness and exploratory moves, metaheuristic algorithms have an inherent capability to avoid local optima.

  • Problem independence Most metaheuristic algorithms treat problems as black boxes and can therefore be applied as general-purpose algorithms.

Further, the two most desired features of an optimizer are exploitation and exploration, on which the overall capability of an optimization algorithm is highly dependent. Exploitation refers to rigorously searching a promising region of the search space for the global optimum, while exploration refers to searching for new promising regions. Improved exploitation leads to fast convergence towards the optimal solution, and improved exploration leads to avoidance of local optima. On the other hand, very high exploitation can cause convergence towards a local optimum before the solutions get near the global optimum, and very high exploration may slow convergence towards the global optimum. Therefore, a good optimization algorithm must balance exploitation and exploration.

2 Related works

As mentioned above, understanding and modelling many natural processes and phenomena has led to the creation of several optimization techniques which have been very helpful in solving complex scientific problems. These algorithms can be broadly divided into the following four categories:

Evolutionary algorithms This category is based on evolutionary processes present in nature. In this class of algorithms, first a random population is generated and its fitness is calculated. Following this, a new generation is evolved based on the stated rules of the evolutionary algorithm. The Genetic Algorithm (GA) (Holland 1992) is the most popular in this category. In this algorithm, a new population is generated through crossover, cloning and mutation. Many further algorithms have been developed in this category. Some of them are Evolutionary Strategy (François 1998), Genetic Programming (Koza 1994), Population-Based Incremental Learning (Baluja 1994), Fast Evolutionary Programming (Yao and Liu 1996), Differential Evolution (Storn and Price 1997), Grammatical Evolution (Ryan and Collins 1998), Enhanced GA (Coello and Montes 2002), Gene Expression Programming (Ferreira 2006), Co-Evolutionary Differential Evolution (CEDE) (Huang et al. 2007), Biogeography-Based Optimizer (Simons 2008), Asexual Reproduction Optimization (Farasat et al. 2010), States of Matter (Cuevas et al. 2014), Adaptive Dimensional Search (Hasançebi and Azad 2015), Stochastic Fractal Search (SFS) (Salimi 2015) and Multi-Verse Optimizer (MVO) (Mirjalili et al. 2016).

Swarm-based optimization algorithms This class of algorithms is based on the social behaviour of animal groups, collectively called swarms, and is inspired by how swarms interact with each other in order to find food. Particle Swarm Optimization (PSO) (Kennedy and Eberhart 1995) is the most popular algorithm in this category. In PSO, each particle changes its position based on its personal best, the global best, its previous velocity and inertia. Several further swarm-based algorithms have followed. Some examples are Ant Colony Optimization (Dorigo and Di Caro 1999), Marriage in Honey Bees Optimization Algorithm (Abbass 2002), Wasp Swarm Optimization (Pinto et al. 2005), Bees Algorithm (Pham et al. 2006), Cat Swarm Optimization (Chu et al. 2006), Co-Evolutionary Particle Swarm Optimization (CEPSO) (Krohling and dos Santos Coelho 2006), Glow-Worms Optimization (Krishnanand and Ghose 2006), Artificial Bee Colony (Karaboga and Basturk 2007), Monkey Search Algorithm (Zhao and Tang 2008), Bee Collecting Pollen (Lu and Zhou 2008), Dolphin Partner Optimization (Yang et al. 2009), Group Search Optimizer (He et al. 2009), Cuckoo Search Algorithm (CSA) (Yang 2009b), Termite Colony Optimization (Hedayatzadeh et al. 2010), Firefly Algorithm (Yang 2009a), Bat Algorithm (BA) (Yang 2010), Hunting Search (Oftadeh et al. 2010), Enhanced PSO (Gao and Hailu 2010), Krill Herd Algorithm (Gandomi and Alavi 2012), Migrating Birds Optimization (Duman et al. 2012), Fruit Fly Algorithm (Pan 2012), Flower Pollination Algorithm (Yang 2012), Enhanced CSA (Gandomi et al. 2013), Dolphin Echolocation Algorithm (Kaveh and Farhoudi 2013), Social Spider Optimization (Cuevas et al. 2013), Symbiotic Organisms Search (Cheng and Prayogo 2014), Grey Wolf Optimizer (Mirjalili et al. 2014), Bird Mating Optimizer (Askarzadeh 2014), Animal Migration Optimization (Li et al. 2014), Chicken Swarm Optimization (Meng et al. 2014), Firework Algorithm (Tan and Zhu 2015), Moth Flame Optimization (Mirjalili 2015a), Ant Lion Optimizer (Mirjalili 2015b), Elephant Herding Optimization (Wang et al. 2016a), Monarch Butterfly Optimization (Wang et al. 2016b), Dragon-Fly Algorithm (Mirjalili 2016a, b), Whale Optimization Algorithm (Mirjalili and Lewis 2016), Lion Optimization Algorithm (Yazdani and Jolai 2016), Spotted Hyena Optimizer (SHO) (Dhiman and Kumar 2017) and Salp Swarm Algorithm (Mirjalili et al. 2017).

Physics-inspired optimization These methods draw inspiration from physical processes present in nature. One of the oldest algorithms in this category is Simulated Annealing (SA) (Van Laarhoven and Aarts 1987). Annealing means heating a solid and then slowly letting it cool down; in SA, this annealing process is modelled mathematically to solve problems. Other examples of physics-based optimization algorithms are as follows: Harmony Search (Woo Geem et al. 2001), Big Bang–Big Crunch (Erol and Eksin 2006), Colonizing Weeds (Mehrabian and Lucas 2006), Gravitational Search Algorithm (Rashedi et al. 2009), Intelligent Water Drops (Hosseini 2009), Charged System Search (Kaveh and Talatahari 2010), Grenade Explosion Method (Ahrari and Atai 2010), Chemical-Reaction-Inspired Metaheuristic (Lam and Li 2010), Artificial Chemical Reaction Optimization Algorithm (Alatas 2011), Galaxy-Based Search Algorithm (Hosseini 2011), Curved Space Optimization (Moghaddam et al. 2012), Water Cycle Algorithm (Eskandar et al. 2012), Black Hole Algorithm (Hatamlou 2013), Mine Blast Algorithm (Sadollah et al. 2013), Colliding Bodies Optimization (Kaveh and Mahdavi 2014), Forest Optimization Algorithm (Ghaemi and Feizi-Derakhshi 2014), Optics Inspired Optimization (Husseinzadeh Kashan 2014), Ecogeographic-Based Optimization (Zheng et al. 2014), Ray Optimization Algorithm (Kaveh 2014b), Tree Seed Algorithm (Kiran 2015), Water Wave Optimization (Zheng 2015), Lightning Search Algorithm (Shareef et al. 2015), Ions Motion Algorithm (Hatamlou et al. 2015), Runner-Root Algorithm (Merrikh-Bayat 2015), Electromagnetic Field Optimization (Abedinpourshotorban et al. 2016), Water Evaporation Optimization (Kaveh and Bakhshpoori 2016), Vibrating Particles System (Kaveh and Ilchi Ghazaan 2017) and Thermal Exchange Optimization (Kaveh and Dadras 2017).

Human-based optimization This class draws inspiration from behaviours and activities performed by humans. Humans are the most intelligent species in this world, and this very fact offers good inspiration for developing optimization algorithms. One of the recent algorithms in this class is the Jaya algorithm (Venkata 2016), which takes inspiration from the human behaviour of following the best and avoiding the worst. Other examples of human-based algorithms are as follows: Tabu Search (Glover 1989), Seeker-Based Optimization (Dai et al. 2006), Imperialist Competitive Algorithm (Atashpaz-Gargari and Lucas 2007), Teaching Learning-Based Optimization (Rao et al. 2007), Interior Search (Gandomi 2014), Soccer League Competition Algorithm (Moosavian and Kasaee Roodsari 2014), Exchange Market Algorithm (Ghorbani and Babaei 2014), Group Counselling Optimization Algorithm (Eita and Fahmy 2014), Tug of War Optimization (Kaveh and Zolghadr 2016), Most Valuable Player Algorithm (Bouchekara 2017) and Volleyball Premier League Algorithm (Moghdani and Salimifard 2018).

In addition to the above major classes, several algorithms are inspired by mathematical concepts from geometry, algebra, etc. The Method of Moving Asymptotes (Svanberg 1987), Nonlinear Integer and Discrete Programming (NIDP) (Sandgren 1990), Generalized Convex Approximation (Chickermane and Gea 1996) and the Sine Cosine Algorithm (Mirjalili 2016b) are such algorithms.

From the above survey, one can infer that different metaheuristic algorithms have been developed to target different problems. Therefore, in the interest of technical development, there is always a need for new algorithms to be developed and evaluated on particular problems so as to obtain results superior to those of existing algorithms. The new algorithm introduced in this paper, the life choice-based optimizer (LCBO), falls under the category of human-based algorithms. It is based on how a person makes decisions in life to attain his/her goals. Generally, a person makes decisions based on different parameters, many of which depend on the people around him/her. This fact has been the key motivation of this work. Further, according to the no free lunch (NFL) theorem (Wolpert and Macready 1997), no algorithm performs best on all problems. Though several optimization algorithms, as mentioned above, already exist, NFL implies that no algorithm is uniformly superior, and therefore there is always a need to develop better methods. Superior techniques, developed and tested on different scientific problems, save the time and effort of the scientific community and thereby make a significant contribution to the domain.

The paper is organized into six sections. Following the introduction and survey in Sects. 1 and 2, respectively, Sect. 3 presents the inspiration, mathematical formulation and representation of LCBO. Sect. 4 provides the details of the 29 benchmark functions used, which consist of unimodal, multimodal and six CEC-2005 composite functions (Liang et al. 2005; Suganthan et al. 2005). In Sect. 5, LCBO is tested on the optimization of the benchmark functions, and a comparative study of the obtained results against other recently reported popular algorithms is presented; this section also includes scalability and convergence investigations for enhanced dimensions. LCBO is further investigated on two engineering benchmark problems, namely pressure vessel design (PVD) and cantilever beam design (CBD), and the comparative performance results are reported. Finally, Sect. 6 draws the conclusion and presents the future scope of research for LCBO. The mathematical descriptions of the investigated engineering problems are given in the “Appendix” section.

3 Life choice-based optimizer

This section presents the inspiration behind the LCBO algorithm and its mathematical model, highlighting all the background formulations and their expressions.

3.1 Inspiration

The LCBO algorithm is inspired by careful observation of the life cycle of a human being and his/her work ethic during active life, when a person is motivated and has several different aims and targets to achieve. Humans are truly the most intellectual species and are by far the smartest and most strategic. Humans have always taken inspiration from nature and thereby learnt new things; for example, certain Yogasanas such as Gomukhasana (Cow-Face Pose) and Simhasana (Lion Pose) are practised for a healthy lifestyle the world over. The ability to learn from fellow creatures and species has always been a crucial factor that has helped humans emerge as far superior to any other species. Humans have understood the significance of the food chain and life cycles that nature has enforced upon all species. They realize the significance of each species and the roles each plays in the sustenance of life, so instead of driving other species to extinction, they have considered animals and plants part of one big family and focused on mutual survival. Humans have built restricted zones for animals, created wildlife reserves and are highly resolute in protecting endangered species from extinction. They have tamed animals and adopted them as pets. Thus, humans have the capability to understand things better than any other species, which is why much focus and investment has gone into creating machines that can think and act like humans, for example the recently built humanoid robots. Recently, Sophia, a humanoid robot, became the first robot to receive citizenship of a country (Saudi Arabia) and was also named the United Nations Development Program Innovation Champion, the first humanoid to hold a United Nations title (https://www.hansonrobotics.com/sophia/).
Therefore, there exists a lot of scope to develop new and future technologies based on human behaviour, and thus the novel LCBO algorithm is likewise inspired by the choices and thinking patterns humans use to accomplish a target.

Inspiration from the Jaya optimization technique The proposed algorithm is also inspired by the established Jaya algorithm (Venkata 2016), which makes use of selective influence. In Jaya, only the best and worst search agents affect the current search agent. In the proposed optimizer, by contrast, Eq. 6, which is only one optional branch of the algorithm selected by random number generation, lets both the best and the next-better search agents (explained later in Sect. 3.2.2) affect the current search agent, resulting in better exploitation.

3.2 LCBO algorithm

The proposed LCBO can be completely described by three concepts, presented in the following subsections.

3.2.1 Learning from the common best group

Humans are always inspired by one thing or another, whether it is a senior, a celebrity or fellow mates. When a person has a target in sight, he/she ponders and studies how the best people in that field work, in order to create a strategy for achieving the target. He/she always tries to take something resourceful from the best in the field, deriving a pattern or parameter by observing the superior person’s efficiency and working on it, so as to develop the skills needed to achieve the target or solve the problem under consideration. For a given population \( X \) with sorted fitness values/cost functions, Eq. 1 represents the learning-from-the-best feature of the LCBO algorithm:

$$ X_{j}^{\prime} = \frac{1}{n}\sum\limits_{k = 1}^{n} \left[ rand\left( k \right) * X_{k} \right] $$
(1)

Here, in the summation, \( k \) varies from \( 1 \) to \( n \), where \( n \) is a parameter of the algorithm equal to the ceiling of the square root of the population size considered to solve the problem. \( X_{j} \) is the \( j{\text{th}} \) (current) search agent in process, and \( X_{j} \) is updated to \( X_{j}^{\prime} \) only if \( X_{j}^{\prime} \) has better fitness than \( X_{j} \). Figure 1 depicts this feature of the algorithm. The search agent in the centre of the circles represents the current search agent in process. It is affected only by the positions of the common best \( n \) search agents, and the level of influence is decided by the random numbers, as shown in Fig. 1 by arrows of variable lengths.

Fig. 1 Learning from the common best group
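The group-learning move of Eq. 1 can be sketched as a small Python helper (illustrative only, not the authors' code; the paper does not specify whether \( rand(k) \) is one scalar per agent or drawn per dimension, so a single scalar weight per agent is assumed here):

```python
import numpy as np

def learn_from_best_group(X_sorted, rng):
    """Candidate position from Eq. 1: a randomly weighted average of the
    n best agents, where n = ceil(sqrt(population size)).

    X_sorted : (pop_size, dim) array, rows sorted by fitness (best first).
    """
    pop_size, dim = X_sorted.shape
    n = int(np.ceil(np.sqrt(pop_size)))
    # One random weight per best agent, applied to its whole position vector
    w = rng.random(n)
    return (w[:, None] * X_sorted[:n]).sum(axis=0) / n
```

As the text notes, the candidate produced here replaces \( X_{j} \) only if it has better fitness.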

3.2.2 Knowing very next best

Everyone wants to achieve his/her targets, such as landing a dream job or purchasing a dream car, but accomplishing large targets takes a lot of time and perseverance. Instead of focusing entirely on massive targets, one must recognize one’s current position and the very nearest target in sight, and understand how to move from the current position to a better one. The current target should therefore also be prioritized. Hence, one must focus both on the final destination, as mentioned above, and on the very next destination to achieve future goals. This operation is implemented with the help of the following equations:

$$ f1 = 1 - \frac{currentChances - 1}{numberOfChances - 1} $$
(2)
$$ f2 = 1 - f1 $$
(3)
$$ bestDiff = f1 * r1 * \left( {X_{1} - X_{j} } \right) $$
(4)
$$ betterDiff = f2 * r1 * \left( {X_{j - 1} - X_{j} } \right) $$
(5)
$$ X_{j}^{\prime} = X_{j} + rand() * betterDiff + rand() * bestDiff $$
(6)

Here, per Eqs. 2 and 3, \( f1 \) and \( f2 \) vary linearly from 1 to 0 and from 0 to 1, respectively, as \( currentChances \) grows from 1 to \( numberOfChances \). The value of \( r1 \) is a constant, 2.35. \( X_{j - 1} \) refers to the position of the search agent whose fitness was just better than the current search agent’s up to the previous iteration, and \( X_{1} \) refers to the best position achieved by any search agent up to the previous iteration. The position of \( X_{j} \) is updated to \( X_{j}^{\prime} \) only if \( X_{j}^{\prime} \) has better fitness than \( X_{j} \). From Fig. 2, one can see that the current search agent is affected only by the agent with the best fitness value and the agent with the just-better fitness value, with the level of influence determined by Eqs. 2–6.

Fig. 2 Knowing presently best
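The update of Eqs. 2–6 can be sketched as follows (an illustrative helper with assumed names; `it` stands in for \( currentChances \) and `max_it` for \( numberOfChances \)):

```python
import numpy as np

def know_next_best(X_sorted, j, it, max_it, rng, r1=2.35):
    """Candidate position from Eqs. 2-6: pull agent j toward the global
    best (X_sorted[0]) and the next-better agent (X_sorted[j-1]).
    r1 = 2.35 is the constant quoted in the text."""
    f1 = 1.0 - (it - 1) / (max_it - 1)   # Eq. 2: decays linearly 1 -> 0
    f2 = 1.0 - f1                        # Eq. 3: grows linearly 0 -> 1
    best_diff = f1 * r1 * (X_sorted[0] - X_sorted[j])        # Eq. 4
    better_diff = f2 * r1 * (X_sorted[j - 1] - X_sorted[j])  # Eq. 5
    # Eq. 6: independent random step sizes for the two attraction terms
    return X_sorted[j] + rng.random() * better_diff + rng.random() * best_diff
```

Early in the run (\( f1 \approx 1 \)) the pull toward the global best dominates; late in the run (\( f2 \approx 1 \)) the pull toward the next-better agent takes over.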

3.2.3 Reviewing mistakes

If humans are stuck somewhere, or the technique they have been using to solve the problem under consideration is not working, they have the natural intelligence to review matters, properly analyse the technique in use and try alternative methods. They are also capable of working in reverse to evaluate and approach the problem in a completely different manner. In the algorithm, this increases exploration by looking at things from a completely different perspective.

$$ X_{j}^{{\prime }} = Xmax - (X_{j} - Xmin)* rand() $$
(7)

The technique described by Eq. 7 is named the Avi escape technique and is used as a generalized technique to increase the exploration of the algorithm. Here, \( Xmax \) and \( Xmin \) are the upper and lower bound values, respectively. It is similar to GA (Holland 1992) and CSA (Yang and Deb 2009), where new agents are created using the upper and lower bound values. \( X_{j} \) is the current agent in the process of evaluation.
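Eq. 7 can be sketched as a one-line helper (illustrative only; note that for an agent inside \( [Xmin, Xmax] \) the reflected candidate also stays inside the bounds):

```python
import numpy as np

def avi_escape(X_j, x_min, x_max, rng):
    """Eq. 7 (the 'Avi escape' move): reflect the current agent toward the
    opposite side of the search space to increase exploration."""
    return x_max - (X_j - x_min) * rng.random()
```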

As with most optimization methods, LCBO starts with a population size, lower and upper bounds and a number of iterations. For the first iteration, the population is generated, and the corresponding fitness values are evaluated and sorted along with the agents. The positions of the agents and their fitness values are then updated iteratively until the target fitness is obtained or the number of iterations is exhausted. Note that, depending on the value of a random number, only one of the three operations described above is executed to update an agent. The pseudocode presented in the next section describes the operations in order.

3.3 Pseudocode

The following is the pseudocode of the LCBO.

figure a
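Since the pseudocode figure is not reproduced in this excerpt, the following Python sketch assembles Eqs. 1–7 into one loop under stated assumptions: the random-number thresholds that select among the three operators are not quoted here, so equal one-third probabilities are assumed, and the best agent (j = 0) falls through to the escape move when the middle branch is drawn. Greedy acceptance (an agent moves only if the candidate is fitter) follows the text above.

```python
import numpy as np

def lcbo(obj, dim, x_min, x_max, pop_size=30, max_it=1000, seed=0):
    """Illustrative LCBO loop for a minimization objective `obj`."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(x_min, x_max, (pop_size, dim))
    fit = np.array([obj(x) for x in X])
    n = int(np.ceil(np.sqrt(pop_size)))          # best-group size (Eq. 1)
    r1 = 2.35                                    # constant from Eqs. 4-5
    for it in range(1, max_it + 1):
        order = np.argsort(fit)                  # sort agents, best first
        X, fit = X[order], fit[order]
        f1 = 1.0 - (it - 1) / max(max_it - 1, 1)  # Eq. 2: 1 -> 0
        f2 = 1.0 - f1                             # Eq. 3: 0 -> 1
        for j in range(pop_size):
            r = rng.random()                     # assumed equal-thirds choice
            if r < 1.0 / 3.0:                    # Eq. 1: learn from best group
                w = rng.random(n)
                cand = (w[:, None] * X[:n]).sum(axis=0) / n
            elif r < 2.0 / 3.0 and j > 0:        # Eqs. 4-6: best / next-better
                best_diff = f1 * r1 * (X[0] - X[j])
                better_diff = f2 * r1 * (X[j - 1] - X[j])
                cand = X[j] + rng.random() * better_diff \
                            + rng.random() * best_diff
            else:                                # Eq. 7: Avi escape
                cand = x_max - (X[j] - x_min) * rng.random()
            cand = np.clip(cand, x_min, x_max)
            c_fit = obj(cand)
            if c_fit < fit[j]:                   # greedy acceptance
                X[j], fit[j] = cand, c_fit
    best = int(np.argmin(fit))
    return X[best], float(fit[best])
```

For example, `lcbo(lambda x: float(np.sum(x ** 2)), dim=5, x_min=-5.0, x_max=5.0)` minimizes a simple quadratic over \([-5, 5]^5\).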

4 Details of the used benchmark functions

In order to determine the optimization efficiency of an algorithm, critical testing is required. The importance of exploration and exploitation has already been stated in the Introduction, and thus, for checking the overall performance of the algorithm, the benchmark test functions have been carefully chosen; they are presented in the following subsections. For systematic evaluation of the LCBO algorithm, the 29 chosen functions are divided into the following three parts.

4.1 Unimodal functions (functions 1 to 7)

Each of the chosen unimodal functions (1 to 7) has only a single optimum, which is therefore the global minimum of the respective function. These functions are used to test the exploitation affinity of an algorithm: algorithms that are able to optimize them have great exploitation ability. As there is only a single minimum value, the LCBO algorithm should be able to move quickly towards the global minimum. The details of these functions are presented in Table 1.

Table 1 Unimodal benchmark functions
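Table 1's entries are not reproduced in this excerpt; as an illustration of the kind of function it lists, the sphere function is the classic unimodal benchmark used in such suites (a single minimum at the origin, so it probes exploitation only):

```python
import numpy as np

def sphere(x):
    # f(x) = sum of x_i^2; single global minimum f(0) = 0
    return float(np.sum(np.asarray(x, dtype=float) ** 2))
```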

4.2 Multimodal benchmark functions (functions 8 to 23)

These functions contain many local optima and are therefore more difficult to solve than unimodal functions. Search agents sometimes get stuck in the local optima and are unable to escape. Functions 8 to 13 are variable-dimension and functions 14 to 23 are fixed-dimension multimodal benchmark functions. The mathematical details of these functions are tabulated in Tables 2 and 3. The difficulty level of these functions increases with the search area, the number of local optima and the number of dimensions. The ability to explore new search regions plays a vital role in optimizing these functions, and hence they are well suited to determining the exploration ability of an algorithm.

Table 2 Multimodal benchmark functions
Table 3 Fixed-dimension multimodal benchmark functions

4.3 CEC composite benchmark functions (functions 24 to 29)

These are six composite benchmark functions taken from CEC-2005. They are rotated and shifted variants of classical standard functions and are the most difficult of all the benchmark functions considered. They have a large number of local optima from which it is very difficult to escape. The functions are listed in Table 4, and their detailed equations and expressions are available in the CEC-2005 technical reports (Liang et al. 2005; Suganthan et al. 2005).

Table 4 Composite benchmark functions

5 Results and discussions

In this section, the experimental setup used to conduct the various tests, the details of their evaluation and the results obtained on the benchmark functions are presented. Subsection 5.1 describes the experimental arrangement, including function-testing details such as population, iterations and system software. Subsection 5.2 studies the results of LCBO and the other algorithms. Subsection 5.3 analyses the results of functions 1 to 13 at very high dimension (200), showing the adaptability of LCBO for dealing with high-complexity problems. Subsection 5.4 analyses the convergence curve patterns. Engineering problem solving is an important component of the testing of any proposed optimization method; therefore, two important design benchmark problems, namely PVD and CBD, are investigated for LCBO in subsection 5.5.

5.1 Experimental setup

For investigating the optimization capability of LCBO, each of the functions described earlier was optimized 30 times independently, and the results in terms of the average fitness value of the 30 runs, along with the standard deviation, were recorded for each function or application. The optimization technique offering the least average fitness and deviation is considered the winning technique. All investigations used MATLAB™ on Windows 10 with a 64-bit 7th-generation i5 processor (2.5 GHz) and 8 GB RAM.

5.2 Benchmark functions’ testing results

In this section, a comparative study of the LCBO algorithm with other popular and recent algorithms is presented. The presentation is organized into three subsections, Sects. 5.2.1, 5.2.2 and 5.2.3, for a detailed and complete analysis of the performance of the LCBO algorithm. For the comparative performance analysis, the optimization of the 29 benchmark functions by seven reported optimization techniques, namely SHO, GWO, PSO, MFO, MVO, SCA and GA, has been used. SHO, GWO, MFO, MVO and SCA are among the most recent, reported in 2017, 2014, 2015, 2016 and 2016, respectively. The results and parameters of these algorithms for the above-mentioned benchmark functions were already reported in SHO (Dhiman and Kumar 2017). The population and iteration counts used for benchmark function optimization in Dhiman and Kumar (2017) were 30 and 1000, respectively, for each algorithm; in order to offer a fair comparison, the same population and iteration values were chosen for the LCBO algorithm.

5.2.1 Functions 1–7 (unimodal)

Table 5 presents the obtained results in terms of the average fitness values and standard deviations for the optimized unimodal functions. As mentioned above, along with LCBO, the results of the seven other optimization methods as reported by Dhiman and Kumar (2017) are also presented. Based on the average fitness values and deviations obtained, one can infer that LCBO generally offered the least values: it gives the best results for functions 1 to 5 and the second-best results for functions 6 and 7. Therefore, for the optimization of unimodal benchmark functions, it is concluded that LCBO is a superior optimization method compared to the seven other methods.

Table 5 Result of unimodal benchmark functions

5.2.2 Functions 8–23 (multimodal)

In line with the unimodal function optimization, functions 8 to 23 were investigated under multimodal function optimization. Table 6 presents the obtained results wherein it can be inferred that for functions 8, 9, 10, 11, 13, 15, 18, 19, 20, 21 and 23, LCBO is the clear winner. On the other hand, for functions 12, 14, 16, 17 and 22, it is the second or third best. Therefore, for optimization of multimodal benchmark functions also, it can be concluded that LCBO is a superior optimization method as compared to the seven other methods.

Table 6 Result of multimodal benchmark functions

5.2.3 Functions 24–29 (composite CEC benchmark functions)

In order to further test the capability of LCBO, the next experiment was complex function optimization. For this purpose, six composite benchmark functions were taken from CEC-2005. These are rotated and shifted variants of classical standard functions and therefore offer the greatest complexity among all the benchmark functions; they have multiple local optima from which it is usually difficult to escape. From the results given in Table 7, one can clearly observe that the LCBO algorithm gives the best result for four of the six functions (24, 25, 26 and 28). This confirms LCBO’s ability to escape local minima and move towards the global minimum, and demonstrates the superior balance between exploration and exploitation exhibited by LCBO.

Table 7 Result of composite benchmark functions

5.3 Scalability test

The scalability test is an important component of the evaluation of any optimization technique. It assesses the ability of a technique to handle higher-dimensional test functions. The complexity of an optimization problem increases exponentially with the number of dimensions, and solving a problem with a large number of unknown variables is always a challenge. In this test, functions 1 to 13, defined in Tables 1 and 2, were used with the dimension increased from 30 to 200. For the scalability runs, population and iterations were kept at 30 and 1000, respectively. In line with the previous subsection, 30 independent trials were executed and the averages and standard deviations recorded. For comparison, the scalability results of seven techniques, namely ALO, PSO, SMS, BA, FPA, CSA and GA, as reported in Mirjalili (2015b), were used. Note that in that work the competing methods used population and iteration values of 100 and 5000, against 30 and 1000 for LCBO; this parameter setting is a real challenge for LCBO. The results of the scalability test are presented in Table 8, which confirms that the LCBO algorithm gives the best result for all functions from 1 to 13 except function 6. This demonstrates the strength of the LCBO algorithm on functions with large dimensions and clearly shows that LCBO is superior, as it obtains better results with a significantly lower number of function calls. The considerably poorer performance of the other algorithms also highlights how difficult problems with large dimensions are.

Table 8 Scalability results for 200 dimensions

5.4 Convergence analysis

Having dealt with the scalability analysis, the next activity was to study the convergence. Convergence pattern is useful for understanding the exploration and exploitation ability of an optimization algorithm. For the same, in this section, a total of 16 scalable functions were tested and their convergences were recorded for varying dimensions. The investigated dimensions were 30, 50, 80, 100 and 200. Figures 3 and 4 depict the convergence plots of LCBO for the 16 functions considered for varying dimensions. It may be noted that Figs. 3 and 4 represent the plots for unimodal and multimodal functions, respectively. As evident from these plots, LCBO survives the test of scalability and passes it with flying colours.

Fig. 3 Convergence plots of unimodal test functions under varying dimension

Fig. 4 Convergence plots of multimodal test functions under varying dimension

Figure 5 shows the convergence of the LCBO for CEC test functions. Again in all the presented cases, LCBO convergence can be clearly observed.

Fig. 5 Convergence plots of composite benchmark functions

Based on the above-presented results, the following remarks about LCBO can be inferred.

  • The exploitation ability of the LCBO algorithm is very impressive, as seen from the results of unimodal function optimization.

  • The exploration ability of the LCBO algorithm is also strong, as evidenced by its superior results relative to the other algorithms. It never produced an unsatisfactory result on the multimodal functions, and it ranked in the top three on about 95% of the benchmark functions.

  • On the multimodal composite CEC functions, which are extremely difficult to handle, it gave the best result on four of the six functions.

  • LCBO has a very good balance between exploration and exploitation and thus has a very wide scope for modification and future work.

Further, optimization techniques are usually applied to real-life problem solving, and they are expected to be a strong tool in this respect as well. To this end, two benchmark engineering problems have been taken up, as described in the following section.

5.5 Engineering applications

Engineering problems are generally constraint-based, and optimization algorithms must be modified accordingly before being applied to them. Different types of penalty functions are used for handling constraints. The basic idea behind these penalty functions is that when search agents go out of range or violate the given constraints, some form of penalty is imposed on the cost function so that these agents are steered back towards feasibility. The following are the popular types of penalty functions.

  • Static penalty This type of penalty function is completely independent of the iteration count; the penalty varies with the square of the magnitude of the constraint violation.

  • Dynamic penalty In this type of penalty function, the penalty value varies with time and may increase or decrease with the current iteration value. Usually, it increases with time.

  • Annealing penalty In this type of penalty function, the penalty coefficients are increased over the iterations (typically following an annealing schedule) whenever the algorithm gets stuck in a local optimum, and only the active constraints are considered in each iteration.

  • Death penalty Whenever a search agent violates any constraint, it is assigned zero fitness; there is no need to compute the extent of the constraint violation.

In this work, death penalty has been imposed for both the following design problems. This was done by assigning zero fitness to the search agents violating the constraints.
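The penalty schemes listed above can be illustrated with minimal sketches for a minimization problem in which each constraint is written as g(x) ≤ 0. Note that the "zero fitness" of the death penalty corresponds, for minimization, to assigning an infinite cost. The function names, coefficients and exponents below are illustrative assumptions, not expressions taken from the paper.

```python
import math

def death_penalty(cost, violations):
    """Death penalty: any constraint violation voids the solution outright
    (implemented as +inf cost for minimization); no violation magnitudes needed."""
    if any(g > 0 for g in violations):
        return math.inf
    return cost

def static_penalty(cost, violations, k=1e6):
    """Static penalty: iteration-independent, quadratic in the violation magnitude."""
    return cost + k * sum(max(0.0, g) ** 2 for g in violations)

def dynamic_penalty(cost, violations, iteration, c=0.5, alpha=2.0):
    """Dynamic penalty: the coefficient grows with the current iteration,
    so infeasible regions are tolerated early and punished harshly later."""
    return cost + (c * iteration) ** alpha * sum(max(0.0, g) ** 2 for g in violations)
```

An annealing penalty would follow the same shape as `dynamic_penalty`, but with the coefficient driven by an annealing schedule and the sum restricted to the currently active constraints.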

5.5.1 Pressure vessel design

In this problem, it is required to minimize the fabrication cost of the vessel. The mathematical description of the PVD problem has been taken from (Salimi 2015), and the mathematical expressions are provided in the “Appendix” section. There are four constraint conditions in addition to the cost function to be minimized. The population size and iteration count for optimizing PVD were kept at 30 and 400, respectively. The performance of the LCBO algorithm has been compared with other algorithms, namely GA, CEPSO, CEDE, PSO, NIDP and SFS, as reported in (Salimi 2015). Table 9 presents the best result obtained over 30 trials of LCBO and compares it with the results reported in (Salimi 2015). From Table 9, it can be concluded that LCBO attains the lowest cost function value and is therefore the most suitable technique for pressure vessel design. It satisfies all the given constraints while reaching the optimal solution.
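For orientation, the following sketch gives the PVD formulation as it commonly appears in the literature; the exact expressions used in this work are the ones in the “Appendix”. Here x = (Ts, Th, R, L) are the shell thickness, head thickness, inner radius and length of the cylindrical section, and a death-penalty fitness discards infeasible points, as done above.

```python
import math

def pvd_cost(x):
    """Fabrication cost of the vessel (standard literature formulation)."""
    x1, x2, x3, x4 = x  # Ts, Th, R, L
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)

def pvd_constraints(x):
    """Four inequality constraints; each is feasible when g(x) <= 0."""
    x1, x2, x3, x4 = x
    return [
        -x1 + 0.0193 * x3,                  # minimum shell thickness vs radius
        -x2 + 0.00954 * x3,                 # minimum head thickness vs radius
        -math.pi * x3 ** 2 * x4
            - (4.0 / 3.0) * math.pi * x3 ** 3
            + 1296000.0,                    # minimum enclosed volume
        x4 - 240.0,                         # maximum length of the section
    ]

def pvd_fitness(x):
    """Death-penalty fitness: infeasible points are discarded outright."""
    return math.inf if any(g > 0 for g in pvd_constraints(x)) else pvd_cost(x)
```

Any of the compared optimizers can minimize `pvd_fitness` directly over the four design variables within their bounds.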

Table 9 Comparison of the best solution for pressure vessel design with other algorithms

In addition to the above, a statistical analysis of the 30 cost function values was also performed, and the results are presented in Table 10. From Table 10, one can infer that the best, mean and worst cost function values of LCBO are the lowest among the investigated methods. The number of function evaluations (FE) indicates the computational load of a given method. As seen from Table 10, LCBO uses a very small number of FEs and is therefore a computationally light optimization technique.

Table 10 Statistical analysis of cost function values of various algorithms for pressure vessel design problem

5.5.2 Cantilever beam design

CBD is one of the most widely tackled engineering problems. The mathematical description of the CBD problem has been taken to be the same as in (Wolpert and Macready 1997), and brief expressions are given in the “Appendix” section. In this problem, one needs to find the optimum values of the five parameters of the beam within the given bounds, such that the cost function is minimized while the given constraint is obeyed.
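For orientation, the following sketch gives the standard five-variable CBD formulation commonly used in the literature; the exact expressions used in this work are the ones in the “Appendix”. The five variables are the side lengths of the hollow square blocks of the beam, and the single inequality constraint bounds the tip deflection.

```python
def cbd_cost(x):
    """Beam weight to be minimized; x holds the five block side lengths."""
    return 0.0624 * sum(x)

def cbd_constraint(x):
    """Single deflection constraint; feasible when g(x) <= 0
    (standard literature formulation)."""
    x1, x2, x3, x4, x5 = x
    return (61.0 / x1 ** 3 + 37.0 / x2 ** 3 + 19.0 / x3 ** 3
            + 7.0 / x4 ** 3 + 1.0 / x5 ** 3 - 1.0)

def cbd_fitness(x):
    """Death-penalty fitness, consistent with the scheme used in this work."""
    return float("inf") if cbd_constraint(x) > 0 else cbd_cost(x)
```

As with the PVD sketch, any of the compared optimizers can minimize `cbd_fitness` directly over the five variables within their bounds.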

The population size and iteration count for optimizing CBD were kept at 30 and 1000, respectively. The performance of the LCBO algorithm has been compared with other algorithms, namely MFO, MMA, GCA_1, GCA_2, CSA and SOS, the same set as considered in (Mirjalili 2015a). Table 11 presents the best result obtained over 30 trials of LCBO and compares it with the results reported in (Mirjalili 2015a). From Table 11, it can be concluded that LCBO attains the lowest cost function value and is hence the most suitable technique for CBD. It satisfies the given constraint while reaching the optimal solution.

Table 11 Comparison of the LCBO solution for cantilever beam design with other algorithms

Based on the detailed investigations presented in both the case studies reported above, it can be clearly concluded that LCBO has performed extremely well.

6 Conclusion

In this work, a life choice-based optimizer (LCBO) has been proposed and investigated. LCBO essentially makes use of the fundamental choices humans make in life to sort priorities and always move ahead to improve and achieve life objectives. The proposed LCBO algorithm has been described, and its performance has been assessed for exploration and exploitation on several benchmark functions. The functions used included varieties such as unimodal, multimodal and composite CEC-2005 benchmark functions. Detailed investigations on scalability and convergence were conducted and presented. Additionally, application of the LCBO algorithm on two important practical engineering problems was also investigated. The performance comparison between LCBO and other popular algorithms clearly revealed the superiority of LCBO over other algorithms in dealing with different optimization problems.

Overall, based on the presented investigations, it is concluded that LCBO is a competent algorithm that can compete with recent algorithms such as the Spotted Hyena Optimizer, the Moth Flame Optimizer, the Ant Lion Optimizer and the Grey Wolf Optimizer, as well as standard algorithms like Particle Swarm Optimization and the Genetic Algorithm. For future research, several development options are available, such as a multiobjective form of LCBO and a binary version of the algorithm; the application of the algorithm to various other fields involving optimization and parameter determination could also be looked into.