1 Introduction

Over the past few decades, numerical optimization problems have become increasingly complex and require highly efficient methods to solve. For example, design-cost problems in engineering and accuracy problems in data mining often demand methods that find the optimum among a large number of available solutions without wasting effort searching sub-optimal regions. Due to their complex nature and highly non-convex landscapes, the search spaces of these problems pose several challenges to optimization methods [1]. These include exceptionally complex modalities of the search environment, which grow with problem size, as well as increasing problem dimensionality [2]. Conventional deterministic methods, which follow simple calculus rules and perform exhaustive search, cannot produce solutions as efficiently as heuristic (trial-and-error) methods within limited computational resources. Non-deterministic methods, also called metaheuristic algorithms, have generated extraordinary results on several practical real-world optimization problems [3, 4]. Numerous applications of these methods can be found in the extensive literature on metaheuristic research. To name a few domains, drug design [5], feature selection [6], the motif discovery problem [7, 8], engineering, medicine, agriculture, finance, and economics are among the beneficiaries [9, 10].

Amongst the most successful metaheuristic algorithms are simulated annealing (SA) [11], particle swarm optimization (PSO) [12], genetic algorithms (GA) [13], ant colony optimization (ACO) [14], and artificial bee colony (ABC) [15]. Some recently introduced methods have also earned considerable attention among researchers due to their efficient problem-solving ability, such as grey wolf optimization (GWO) [16], Harris’ hawks optimizer (HHO) [4], bacterial foraging optimization (BFO) [17], moth-flame optimization (MFO) [18], the salp swarm algorithm (SSA) [19], and the whale optimization algorithm (WOA) [20].

Generally speaking, the algorithms that stand out are those that combine simplicity and ease of implementation with the ability to avoid local optima. This is why, despite the steady introduction of novel metaheuristic algorithms, only a few retain the interest of the research community while others often vanish. Among other features, the trade-off between exploration and exploitation is always critical for metaheuristic algorithms [21]. The algorithms that maintain this balance across a large variety of optimization problems are the successful ones. Nevertheless, the general procedure of any metaheuristic algorithm can be outlined as in Algorithm 1.

figure a
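To make the outline concrete, the general loop of Algorithm 1 can be sketched in Python. This is a minimal illustration using a Gaussian random search as a stand-in for any algorithm-specific update rule; it is not AOA itself, and all names here are ours:

```python
import random

random.seed(0)  # reproducible toy run

def metaheuristic(objective, lb, ub, dim=2, n_agents=20, max_iter=100):
    # 1. Initialize a random population within [lb, ub]
    pop = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(n_agents)]
    best = min(pop, key=objective)
    for _ in range(max_iter):
        # 2. Generate candidates (here a Gaussian perturbation, standing in
        #    for any algorithm-specific update rule) and select greedily
        for i, x in enumerate(pop):
            cand = [min(max(xi + random.gauss(0, 0.1), lb), ub) for xi in x]
            if objective(cand) < objective(x):
                pop[i] = cand
        # 3. Track the best solution found so far
        it_best = min(pop, key=objective)
        if objective(it_best) < objective(best):
            best = it_best
    return best

# Example: minimize the sphere function on [-5, 5]^2
best = metaheuristic(lambda x: sum(v * v for v in x), -5.0, 5.0)
```

The greedy replacement rule is one of many possible selection schemes; population-based metaheuristics differ mainly in step 2.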

Besides animals and insects, powerful phenomena from physics, chemistry, and mathematics have inspired novel optimization methods. Prominent methods inspired by the discipline of physics include: charged system search (CSS) [22], which follows Coulomb’s and Newton’s laws; the gravitational search algorithm (GSA) [23], based on gravitational theory; ray optimization (RO) [24], which observes ray theory; and Henry gas solubility optimization (HGSO), based on Henry’s law [25]. An optimization method based on chemistry is chemical reaction optimization (CRO) [26], which mimics molecular interactions in a chemical reaction. The sine-cosine algorithm (SCA) [27] utilizes trigonometric functions, and stochastic fractal search (SFS) [28] employs the mathematical concept of fractals.

Essentially, the laws of physics have already produced significant outcomes when formulated into optimization techniques. Examples include: thermal exchange optimization (TEO) [29], which utilizes Newton’s law of cooling; the lightning search algorithm (LSA) [30], inspired by the natural lightning phenomenon; atom search optimization (ASO) [31]; equilibrium optimization (EO) [32]; the magnetic optimization algorithm (MOA) [33], which borrows from the Biot-Savart law of electromagnetism; and electromagnetic field optimization (EFO) [34] and ions motion optimization (IMO) [35], which are based on the attraction-repulsion mechanisms of electromagnets and of anions versus cations, respectively. In this vein, this study is motivated to continue taking inspiration from the laws of physics. This time, we present the Archimedes optimization algorithm (AOA), which is based on the law of physics known as Archimedes’ principle.

The metaheuristic algorithms mentioned earlier include some that are simple yet effective and others that are complicated. Some of these algorithms have a proven track record of solving numerous optimization problems, while others are being effectively modified for improved search performance. Additionally, there is a constant influx of new ideas competing with classic methods like SA, PSO, ABC, and ACO on complex and highly non-linear optimization problems [36]. Due to the no-free-lunch theorem [37], which explains why no single algorithm can outperform all others on all optimization problems, there will always be room for improvement in existing methods and opportunities to introduce new ones, since an algorithm that is good on one type of problem may be poor on others. A successful metaheuristic algorithm, however, may be considered one that performs well, or at least produces acceptable solutions, on most problems. Given the wide spectrum of optimization problems, one cannot guarantee that an algorithm has been tried and tested rigorously; we can, however, validate a metaheuristic’s performance on a commonly agreed test suite. Based on this argument, this research also employs a commonly used test environment for experimenting with and evaluating the performance of the proposed AOA algorithm.

Archimedes’ principle explains the law of buoyancy. It states the relationship between an object immersed in a fluid (say, water) and the buoyant force applied to it. Accordingly, an immersed object is subject to an upward force equal to the weight of the fluid it displaces. If the object’s weight is greater than the weight of the displaced fluid, the object sinks; if the two weights are equal, the object is neutrally buoyant and floats. In AOA, the population individuals are objects immersed in a fluid. These objects have density, volume, and acceleration, which play important roles in an object’s buoyancy. The idea of AOA is to reach a point where the objects are neutrally buoyant, meaning that the net force of the fluid is equal to zero. We examined AOA’s performance on an extensive test-bed comprising unconstrained benchmark functions and constrained engineering design problems, and found that the proposed approach is efficient in its global search ability.

To sum up, a new population-based algorithm called the Archimedes optimization algorithm (AOA), based on the law of physics known as Archimedes’ principle, is proposed in this paper to compete with state-of-the-art and recent optimization algorithms, including other physics-inspired methods. It is worth mentioning that the presented algorithm maintains a balance between exploration and exploitation. This characteristic makes AOA suitable for solving complex optimization problems with many local optima, because it keeps a population of solutions and investigates a large area to find the best global solution. In summary, the main contributions of this research are as follows:

  1.

    We propose a new population-based algorithm, namely Archimedes optimization algorithm (AOA), which mimics the Archimedes’ principle.

  2.

    The statistical significance, convergence ability, exploration-exploitation ratio, and solution diversity of AOA are evaluated.

  3.

    A series of experiments, comprising the CEC’17 test suite and real-world engineering design problems, is performed to investigate the performance of the proposed algorithm on a challenging test suite from the metaheuristic literature.

  4.

    The search efficiency of AOA is validated against well-established algorithms like GA, PSO, L-SHADE, and LSHADE-EpSin, as well as recent additions like WOA, SCA, HHO, and EO.

This paper is organized as follows. Section 2 explains the basics of Archimedes’ principle and the design framework of the proposed AOA algorithm. The concepts of exploration and exploitation are given in Section 3, which also presents a way to measure these important features of an optimization algorithm. The empirical evaluation of AOA on an extensive test environment is given in Section 4, where a comprehensive analysis and comparison against the selected metaheuristic algorithms is also made. The final discussion and conclusions are presented in Section 5.

2 Design framework

The proposed Archimedes optimization algorithm (AOA) generally emulates what happens when objects of different weights and volumes are immersed in a fluid. It captures the related phenomenon explained by Archimedes’ principle which is described in the following subsection. Next, the implementation of this law of physics in terms of an optimization algorithm is explained.

2.1 Archimedes’ principle

Archimedes’ principle states that when an object is completely or partially immersed in a fluid, the fluid exerts an upward force on the object equal to the weight of the fluid displaced by the object. Figure 1 shows that an immersed object experiences an upward force, called the buoyant force, equal to the weight of the fluid it displaces [38].

Fig. 1
figure 1

a An object is immersed in a fluid, and b The volume of fluid displaced

2.1.1 Theory

Assume that many objects are immersed in the same fluid (Fig. 2) and each one tries to reach the equilibrium state. The immersed objects have different densities and volumes, which cause different accelerations.

Fig. 2
figure 2

Many objects immersed in the same fluid

The object will be in the equilibrium state if the buoyant force Fb is equal to the object’s weight Wo:

$$ \begin{array}{@{}rcl@{}} F_{b} &=& W_{o}, \\ p_{b} v_{b} a_{b}&=&p_{o} v_{o} a_{o} \end{array} $$
(1)

where p is the density, v is the volume, and a is the gravitational acceleration; subscripts b and o refer to the fluid and the immersed object, respectively. This equation can be rearranged as:

$$ a_{o} = \frac{p_{b}v_{b}a_{b}}{p_{o}v_{o}} $$
(2)

If another force acts on the object, such as a collision with a neighbouring object (r), the equilibrium condition becomes:

$$ \begin{array}{@{}rcl@{}} F_{b} &=& W_{o}, \\ W_{b}-W_{r}&=&W_{o}, \\ p_{b} v_{b} a_{b}-p_{r} v_{r} a_{r}&=&p_{o} v_{o} a_{o} \end{array} $$
(3)

2.2 Archimedes optimization algorithm (AOA)

AOA is a population-based algorithm in which the population individuals are the immersed objects. Like other population-based metaheuristic algorithms, AOA commences the search process with an initial population of objects (candidate solutions) with random volumes, densities, and accelerations; each object is also initialized with a random position in the fluid. After evaluating the fitness of the initial population, AOA iterates until the termination condition is met. In every iteration, AOA updates the density and volume of each object, and updates its acceleration based on whether it collides with a neighbouring object. The updated density, volume, and acceleration determine the new position of an object. The AOA steps are expressed mathematically below.

2.2.1 Algorithmic steps

In this section, we introduce the mathematical formulation of the AOA algorithm. Theoretically, AOA can be considered a global optimization algorithm, as it encompasses both exploration and exploitation processes. Algorithm 2 presents the pseudo-code of the proposed algorithm, including population initialization, population evaluation, and parameter updates. The steps of the proposed AOA are detailed as follows.

Step 1—Initialization

Initialize the positions of all objects using (4):

$$ O_{i} = lb_{i}+rand\times (ub_{i}-lb_{i}); i={1,2,...,N} $$
(4)

where Oi is the i th object in a population of N objects. lbi and ubi are the lower and upper bounds of the search-space, respectively.

Initialize volume (vol) and density (den) for each i th object using (5):

$$ \begin{array}{@{}rcl@{}} den_{i}&=&rand \\ vol_{i}&=&rand \end{array} $$
(5)

where rand is a D-dimensional vector of random numbers uniformly drawn from [0,1]. Finally, initialize the acceleration (acc) of the i th object using (6):

$$ acc_{i}=lb_{i}+rand\times(ub_{i} - lb_{i}) $$
(6)

In this step, evaluate the initial population and select the object with the best fitness value; assign xbest, denbest, volbest, and accbest.
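Under our reading of Eqs. (4)-(6), the initialization step can be sketched as follows; the scalar bounds and the sphere objective are illustrative assumptions, not part of the algorithm:

```python
import numpy as np

rng = np.random.default_rng(42)
N, D = 30, 10                     # population size and dimensionality
lb, ub = -100.0, 100.0            # scalar bounds, CEC'17-style (an assumption)

# Eq. (4): random positions within the search-space bounds
O = lb + rng.random((N, D)) * (ub - lb)
# Eq. (5): random densities and volumes in [0, 1]
den = rng.random((N, D))
vol = rng.random((N, D))
# Eq. (6): random accelerations within the bounds
acc = lb + rng.random((N, D)) * (ub - lb)

# Evaluate and keep the best object (sphere function as an illustrative f)
fitness = np.sum(O ** 2, axis=1)
best = int(np.argmin(fitness))
x_best, den_best, vol_best, acc_best = O[best], den[best], vol[best], acc[best]
```

For vector-valued bounds, `lb` and `ub` would simply be arrays of length D broadcast over the population.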

Step 2—Update densities, volumes

The density and volume of object i for iteration t + 1 are updated using (7):

$$ \begin{array}{@{}rcl@{}} den_{i}^{t+1}&=&de{n_{i}^{t}}+rand\times (den_{best}-de{n_{i}^{t}}) \\ vol_{i}^{t+1}&=&vo{l_{i}^{t}}+rand\times (vol_{best}-vo{l_{i}^{t}}) \end{array} $$
(7)

where volbest and denbest are the volume and density associated with the best object found so far, and rand is a uniformly distributed random number.
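A minimal sketch of the Eq. (7) update on a toy population (array shapes and the best-object values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 5, 3
den = rng.random((N, D))      # current densities of a toy population
vol = rng.random((N, D))      # current volumes
den_best = np.full(D, 0.5)    # density of the best object so far (toy value)
vol_best = np.full(D, 0.5)    # volume of the best object so far (toy value)

# Eq. (7): move each object's density and volume toward the best object's
den_new = den + rng.random((N, D)) * (den_best - den)
vol_new = vol + rng.random((N, D)) * (vol_best - vol)
```

Since the random factor lies in [0, 1], each updated value is a convex combination of the old value and the best object's value, so densities and volumes drift toward the best object over iterations.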

Step 3—Transfer operator and density factor

In the beginning, collisions between objects occur and, after a period of time, the objects try to reach an equilibrium state. This is implemented in AOA with the help of the transfer operator TF, which transitions the search from exploration to exploitation and is defined in (8):

$$ TF=\exp\left( \frac{t-t_{\max}}{t_{\max}}\right) $$
(8)

where the transfer operator TF increases gradually with time until reaching 1. Here t and \(t_{\max }\) are the iteration number and the maximum number of iterations, respectively. Similarly, the density decreasing factor d also assists AOA in moving from global to local search. It decreases with time using (9):

$$ d^{t+1}=\exp \left( \frac{t_{\max }-t}{{t}_{\max }} \right)-\left( \frac{t}{{t}_{\max}} \right) $$
(9)

where dt+ 1 decreases with time, which gives the algorithm the ability to converge within the already identified promising region. Note that proper handling of this variable ensures the balance between exploration and exploitation in AOA.
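Equations (8) and (9) can be checked numerically; the sketch below assumes t runs from 1 to t_max:

```python
import numpy as np

def transfer_operator(t, t_max):
    # Eq. (8): rises from about exp(-1) ~ 0.37 toward 1 as t approaches t_max
    return np.exp((t - t_max) / t_max)

def density_factor(t, t_max):
    # Eq. (9): shrinks over time, narrowing the search to promising regions
    return np.exp((t_max - t) / t_max) - t / t_max

t_max = 1000
tf_start, tf_end = transfer_operator(1, t_max), transfer_operator(t_max, t_max)
d_start, d_end = density_factor(1, t_max), density_factor(t_max, t_max)
```

At the final iteration TF equals exp(0) = 1 and d equals exp(0) − 1 = 0, so late iterations take very small steps around the identified region.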

Step 4.1—Exploration phase (collision between objects occurs)

If TF ≤ 0.5, collision between objects occurs; select a random material (mr) and update the object’s acceleration for iteration t + 1 using (10):

$$ ac{{c}_{i}}^{t+1}=\frac{de{{n}_{mr}}+vo{{l}_{mr}}\times ac{{c}_{mr}}}{de{{n}^{t+1}_{i}}\times vol_{i}^{t+1}} $$
(10)

where deni, voli, and acci are the density, volume, and acceleration of object i, and accmr, denmr, and volmr are the acceleration, density, and volume of the random material. It is important to mention that the condition TF ≤ 0.5 ensures exploration during roughly the first third of the iterations, since TF in (8) reaches 0.5 at t ≈ 0.3 tmax. Using a threshold other than 0.5 changes the exploration-exploitation behavior.

Step 4.2—Exploitation phase (no collision between objects)

If TF > 0.5, there is no collision between objects; update the object’s acceleration for iteration t + 1 using (11):

$$ ac{{c}^{t+1}_{i}}=\frac{de{{n}_{best}}+vo{{l}_{best}}\times ac{{c}_{best}}}{den^{t+1}_{i}\times vol^{t+1}_{i}} $$
(11)

where accbest is the acceleration of the best object.

Step 4.3—Normalize acceleration

Normalize acceleration to calculate the percentage of change using (12):

$$ acc^{t+1}_{i-norm}=u\times \frac{acc^{t+1}_{i}-\min (acc)}{\max(acc)-\min (acc)}+l $$
(12)

where u and l set the normalization range and are set to 0.9 and 0.1, respectively. The normalized acceleration \(acc^{t+1}_{i-norm}\) determines the percentage of the step that each agent takes. If object i is far from the global optimum, its acceleration value is high, meaning the object is in the exploration phase; otherwise it is in the exploitation phase. This illustrates how the search transitions from exploration to exploitation. In the normal case, the acceleration factor starts with a large value and decreases with time, which helps search agents move towards the globally best solution while moving away from local solutions. It is noteworthy, however, that a few search agents may need to stay in the exploration phase longer than the normal case. In this way, AOA achieves a balance between exploration and exploitation.
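Steps 4.1-4.3 can be sketched together as follows. Taking the min/max in Eq. (12) over the whole population is our assumption, and note that, as printed, the formula maps values into [l, u + l]:

```python
import numpy as np

rng = np.random.default_rng(1)
N, D = 6, 4
# den, vol stand in for den^{t+1}, vol^{t+1} after the Step-2 update
den, vol, acc = rng.random((N, D)), rng.random((N, D)), rng.random((N, D))
den_best, vol_best, acc_best = den[0], vol[0], acc[0]
TF = 0.4                                  # transfer operator at this iteration

if TF <= 0.5:
    # Eq. (10) - exploration: collide with a randomly chosen material mr
    mr = rng.integers(N)
    acc_new = (den[mr] + vol[mr] * acc[mr]) / (den * vol)
else:
    # Eq. (11) - exploitation: accelerate toward the best object
    acc_new = (den_best + vol_best * acc_best) / (den * vol)

# Eq. (12): normalize with u = 0.9, l = 0.1; as printed, this maps the
# population's accelerations into [l, u + l] = [0.1, 1.0]
u, l = 0.9, 0.1
acc_norm = u * (acc_new - acc_new.min()) / (acc_new.max() - acc_new.min()) + l
```

The division by den × vol means objects with small density and volume receive large raw accelerations, which the normalization then bounds.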

Step 5—Update position

If TF ≤ 0.5 (exploration phase), the i th object’s position for the next iteration t + 1 is updated using (13):

$$ x_{i}^{t+1}={x_{i}^{t}}+{C}_{1}\times rand\times acc^{t+1}_{i-norm}\times d\times (x_{rand}-{x_{i}^{t}}) $$
(13)

where C1 is a constant equal to 2. Otherwise, if TF > 0.5 (exploitation phase), the objects update their positions using (14):

$$ x^{t+1}_{i} = x^{t}_{best}+F\times C_{2}\times rand\times acc^{t+1}_{i-norm}\times d\times (T\times x_{best}-x^{t}_{i}) $$
(14)

where C2 is a constant equal to 6. T is directly proportional to the transfer operator and is defined as T = C3 × TF; it increases with time in the range \(\left[ C_{3}\times 0.3, 1 \right]\) and takes a certain percentage of the best position. It starts with a low percentage, which results in a large difference between the best position and the current position and, consequently, a large step size for the random walk. As the search proceeds, this percentage increases gradually to decrease the difference between the best and current positions. This leads to an appropriate balance between exploration and exploitation.

F is the flag to change the direction of motion using (15):

$$ F=\begin{cases} +1 & \text{if } P\le 0.5 \\ -1 & \text{if } P> 0.5 \end{cases} $$
(15)

where P = 2 × rand − C4.
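A sketch of the position update in Eqs. (13)-(15), using the parameter values recommended later in the sensitivity analysis; the toy vectors are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
D = 4
x = rng.uniform(-100, 100, D)       # current position of object i
x_best = np.zeros(D)                # best position found so far (toy value)
x_rand = rng.uniform(-100, 100, D)  # position of a randomly chosen object
acc_norm = rng.uniform(0.1, 1.0, D) # normalized acceleration from Eq. (12)
d = 1.5                             # density factor from Eq. (9)
C1, C2, C3, C4 = 2, 6, 2, 0.5       # values recommended by the sensitivity study
TF = 0.6

if TF <= 0.5:
    # Eq. (13) - exploration: random walk toward a random object
    x_new = x + C1 * rng.random(D) * acc_norm * d * (x_rand - x)
else:
    # Eqs. (14)-(15) - exploitation: move relative to the best position
    T = C3 * TF                     # grows with the transfer operator
    P = 2 * rng.random() - C4
    F = 1 if P <= 0.5 else -1       # direction flag
    x_new = x_best + F * C2 * rng.random(D) * acc_norm * d * (T * x_best - x)
```

The flag F randomly flips the direction of the exploitation move, which helps agents escape local optima near the current best.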

Step 6—Evaluation

Evaluate each object using the objective function f and remember the best solution found so far. Assign xbest, denbest, volbest, and accbest.

figure b
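Assembling the steps, the main loop of Algorithm 2 might look as follows. This is our Python reading of Eqs. (4)-(15), with boundary clipping and population-wide normalization added as assumptions; it is a sketch, not the authors' reference implementation:

```python
import numpy as np

def aoa(obj, lb, ub, D, N=30, t_max=500, seed=0):
    """Sketch of the AOA main loop; our reading of Eqs. (4)-(15)."""
    rng = np.random.default_rng(seed)
    C1, C2, C3, C4 = 2, 6, 2, 0.5
    u, l = 0.9, 0.1
    x = lb + rng.random((N, D)) * (ub - lb)              # Eq. (4)
    den, vol = rng.random((N, D)), rng.random((N, D))    # Eq. (5)
    acc = lb + rng.random((N, D)) * (ub - lb)            # Eq. (6)
    fit = np.array([obj(row) for row in x])
    b = int(np.argmin(fit))
    x_best, f_best = x[b].copy(), fit[b]
    den_b, vol_b, acc_b = den[b].copy(), vol[b].copy(), acc[b].copy()
    for t in range(1, t_max + 1):
        TF = np.exp((t - t_max) / t_max)                 # Eq. (8)
        d = np.exp((t_max - t) / t_max) - t / t_max      # Eq. (9)
        den += rng.random((N, D)) * (den_b - den)        # Eq. (7)
        vol += rng.random((N, D)) * (vol_b - vol)
        if TF <= 0.5:                                    # Eq. (10), exploration
            mr = rng.integers(N, size=N)
            acc = (den[mr] + vol[mr] * acc[mr]) / (den * vol)
        else:                                            # Eq. (11), exploitation
            acc = (den_b + vol_b * acc_b) / (den * vol)
        # Eq. (12): population-wide min-max normalization (our assumption)
        acc_n = u * (acc - acc.min()) / (acc.max() - acc.min() + 1e-30) + l
        if TF <= 0.5:                                    # Eq. (13)
            xr = x[rng.integers(N, size=N)]
            x = x + C1 * rng.random((N, D)) * acc_n * d * (xr - x)
        else:                                            # Eqs. (14)-(15)
            T = C3 * TF
            F = np.where(2 * rng.random((N, 1)) - C4 <= 0.5, 1.0, -1.0)
            x = x_best + F * C2 * rng.random((N, D)) * acc_n * d * (T * x_best - x)
        x = np.clip(x, lb, ub)                           # keep objects in bounds
        fit = np.array([obj(row) for row in x])
        b = int(np.argmin(fit))
        if fit[b] < f_best:                              # Step 6: track the best
            x_best, f_best = x[b].copy(), fit[b]
            den_b, vol_b, acc_b = den[b].copy(), vol[b].copy(), acc[b].copy()
    return x_best, f_best

x_best, f_best = aoa(lambda v: float(np.sum(v ** 2)), -100.0, 100.0, D=10)
```

Since the best solution is only replaced on improvement, the returned cost is monotonically non-increasing over iterations.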

2.2.2 Sensitivity analysis

Generally, full factorial and fractional factorial design techniques are applied for parameter sensitivity analysis of an algorithm; however, high computational cost is a major limitation in this regard. Moreover, because metaheuristic algorithms are stochastic in nature and generate different solutions on each execution, running them multiple times under a full factorial design that evaluates all parameter combinations on all test functions would be nearly infeasible for an initial study [39] such as this one. Therefore, in this subsection we provide general configuration guidance for the AOA control parameters C1 to C4. We perform sensitivity analysis with a limited full factorial design, using three functions selected from three different categories of the CEC’17 test suite: the 50-dimensional Shifted and Rotated Rastrigin’s function (f5), Hybrid Function 2 (N = 3) (f12), and Composite Function 6 (N = 5) (f26). The ranges of values for these parameters are: C1 ∈ {1, 2}, C2 ∈ {2, 4, 6}, C3 ∈ {1, 2}, and C4 ∈ {0.5, 1}; depending on the landscape and difficulty of the optimization problem, other values may also be tried. In our preliminary testing, based on the scenarios given in Table 1 and illustrated in Fig. 3, scenario number 23 clearly provides the best values for these parameters: with C1 = 2, C2 = 6, C3 = 2, and C4 = 0.5, AOA achieved the best cost function values.

Fig. 3
figure 3

Cost function values achieved for different scenarios presented in Table 1 pertaining to AOA parameters C1 to C4

Table 1 Sensitivity analysis for AOA parameters under different scenarios

3 Exploration and exploitation

The efficient search ability of a metaheuristic algorithm relies heavily on two essential foundations: exploration and exploitation [21, 40, 41]. Exploration refers to searching distant neighbourhoods to find the global optimum, whereas exploitation focuses the search on an already identified promising neighbourhood to converge to the optimum solution. Generally, a metaheuristic algorithm starts the search process with more exploration and less exploitation, and the ratio gradually inverts as the search progresses towards the end [44]. Accordingly, the population individuals need to spread all over the search space and gradually converge to the promising region.

It is crucial to maintain a trade-off between these two contradictory abilities, which is possible by measuring exploration and exploitation and adjusting the search mechanism accordingly. Empirical analysis of population diversity, using exploration and exploitation measurements, can be performed while solving a variety of optimization problems: unimodal problems are suitable for determining local search ability, while multimodal problems are considered for analysing global search ability. This study obtained exploration and exploitation measurements while solving a variety of optimization problems, using the dimension-wise diversity measurement presented by K. Hussain et al. [41]. According to this method, an increased average distance within a dimension indicates exploration, whereas a decreased distance indicates exploitation and suggests that the population individuals are close to each other in the search space. If the average diversity is insignificant over a certain number of iterations, the population is considered to have converged. Equation (16) explains the process of measuring dimension-wise diversity:

$$ \begin{array}{@{}rcl@{}} Div_{j} &=& \frac{1}{N}\sum\limits_{i=1}^{N}\text{median}(x^{j})-{x_{i}^{j}}; \\ Div^{t} &=& \frac{1}{Dim}\sum\limits_{j=1}^{Dim}Div_{j} \end{array} $$
(16)

In (16), \({x_{i}^{j}}\) denotes the j th dimension of the i th population individual and median(xj) the median value of the j th dimension of whole population with N individuals. Divj denotes the average diversity for dimension j and Divt the average population diversity for iteration t. When the diversity is measured for all iterations, the percentage of exploration and exploitation can be achieved using (17):

$$ \begin{array}{@{}rcl@{}} Exploration\,(\%) &=& \frac{Div^{t}}{Div_{\max}}\times 100; \\ Exploitation\,(\%) &=& \frac{|Div^{t}- Div_{\max}|}{Div_{\max}}\times 100,\\ && t={1,2,...,t_{\max}} \end{array} $$
(17)

where Divt is the population diversity of the t th iteration and Divmax is the maximum diversity found over all tmax iterations.
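A sketch of the diversity measure in Eqs. (16)-(17); note that we take the absolute distance to the median, as in the dimension-wise measure of [41], and the toy shrinking population is our own illustration:

```python
import numpy as np

def diversity(pop):
    # Eq. (16): mean distance to the dimension-wise median, averaged over
    # dimensions; we take the absolute distance, per the measure in [41]
    med = np.median(pop, axis=0)
    return float(np.mean(np.abs(med - pop)))

def xpl_xpt(div_history):
    # Eq. (17): exploration and exploitation percentages per iteration
    div = np.asarray(div_history, dtype=float)
    div_max = div.max()
    xpl = div / div_max * 100.0
    xpt = np.abs(div - div_max) / div_max * 100.0
    return xpl, xpt

rng = np.random.default_rng(3)
# Toy run: the population contracts over iterations, so diversity shrinks
history = [diversity(rng.uniform(-s, s, (30, 10))) for s in (100, 50, 10, 1)]
xpl, xpt = xpl_xpt(history)
```

By construction the two percentages sum to 100 at every iteration, so plotting either one fully characterizes the exploration-exploitation ratio.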

Having discussed the exploration-exploitation measurement based on population diversity, it is important to mention that population diversity does not guarantee that efficient exploration is being performed by the search agents; it does, however, confirm that the search is being performed at distant locations in the search space and that the candidate solutions under consideration are diversified rather than stagnant. Conversely, a population with reduced diversity indicates exploitation, yet it is essential that this search is performed around the global optimum region. Exploration-exploitation measurement via population diversity therefore has certain limitations. Nevertheless, in the absence of a direct exploration-exploitation measurement approach [42], this study follows the method proposed in [41], since most previous research adopts population diversity as a measure for controlling explorative and exploitative search strategies. To the best of the authors’ knowledge, only one study [43] proposes a direct measure of exploration-exploitation for evolutionary algorithms, called the ancestry tree-based approach; however, its restriction to evolutionary algorithms motivates this study to apply diversity-based exploration-exploitation measurement to the proposed method.

4 Experimental results

To investigate the effectiveness of the proposed AOA algorithm, we employed the 29 test functions of the CEC’17 test suite and four constrained engineering design problems, which are widely used in the empirical literature [16, 20, 23, 45]. All of these functions are minimization problems, useful for evaluating characteristics of an optimization method such as search efficiency and convergence rate.

4.1 Parameter settings

Since a metaheuristic algorithm is stochastic in nature, its results may vary each time the algorithm is executed. Therefore, we performed each experiment 30 times with the parameter settings listed in Table 2. As mentioned in the table, apart from AOA, several other algorithms, including well-established methods like GA, PSO, L-SHADE, and LSHADE-EpSin as well as newly introduced methods like WOA, SCA, HHO, and EO, were also applied to the same experimental suite for comparison. For a fair comparison, all algorithms were run for a maximum of 1000 iterations (30000 function evaluations). The algorithms were programmed in MATLAB 8.0.604 (2014b) 64-bit and executed on an Intel Core i7 CPU 2.00GHz, 2.5GHz, with 8GB RAM and a 64-bit operating system. Besides the parameters of the selected algorithms presented in Table 2, the common settings include a population size of N = 30, maximum iterations \(t_{\max }=1000\), and 30 independent runs for each optimization problem.

Table 2 Parameter settings of AOA and selected algorithms

4.2 CEC 2017 test suite analysis

This study employed the more complex optimization problems of the CEC’17 test suite [46] for a thorough performance evaluation of the AOA algorithm. The test-bed comprises 30 functions, of which function f2 is excluded; hence 29 functions of different modalities and complexities are employed for testing AOA and the other algorithms selected in this study. Functions f1 and f3 are unimodal, f4 to f10 are multimodal, f11 to f20 are hybrid, and f21 to f30 are composition functions. All of the test functions are defined on the hyperspace [− 100,100]D. Overall, the CEC’17 test suite is reasonably complex and dynamic, and can be used for an extensive study of the exploration and exploitation capabilities of a metaheuristic algorithm.

4.2.1 Statistical results

On the 29 test functions of the CEC’17 suite, the experimental study demonstrated the efficient search performance of AOA. Considering the results for the 30- and 50-dimensional CEC’17 functions (Tables 3 and 4), AOA outperformed the other selected metaheuristic algorithms on 19 and 16 functions, respectively. According to Table 3, EO achieved better optimum values than AOA on four functions (f6, f20, f22, and f23), while it performed equally well as AOA on f29 with 30 dimensions. Similarly, PSO performed better than AOA on two 30-dimensional CEC’17 functions (f12 and f18), while it generated solutions equally good as AOA’s on functions f11, f18, and f25. The least performers on these functions were GA, L-SHADE, LSHADE-EpSin, WOA, SCA, and HHO. On the 50-dimensional CEC’17 functions (Table 4), AOA produced superior results on 16 functions, while it underperformed EO on f6 and f22, L-SHADE on f7, f9, f17, f20, and f23, and PSO on f3, f11, f13, f15, f25, and f28. In this context, GA, LSHADE-EpSin, WOA, SCA, and HHO remained the least performers.

Table 3 Statistical results obtained for the CEC’17 functions with Dim = 30
Table 4 Statistical results obtained for the CEC’17 functions with Dim= 50

To statistically validate the results achieved by AOA, we performed the nonparametric Wilcoxon rank-sum test, as it produces a meaningful comparison between the proposed and alternative methods. The p-values for the test are given in Table 5, where △, ▽, and ≈ indicate that AOA is significantly better than, significantly inferior to, or insignificantly different from the competing method, respectively. According to Table 5, AOA’s results remained significantly better than GA, LSHADE-EpSin, and SCA on all CEC’17 test functions. AOA also generated significantly better solutions than WOA and HHO on the majority of the test suite, except for f22, where these methods remained insignificantly different. In the case of PSO and EO, AOA generated significantly inferior results on five and two functions, respectively, and remained insignificantly different on four functions; on the rest, AOA was significantly superior to PSO and EO.
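As a hedged illustration of this comparison procedure (the paper presumably used a standard statistical package), a rank-sum p-value can be computed via the normal approximation, without tie correction, on two hypothetical sets of 30 run results:

```python
import numpy as np
from math import erfc, sqrt

def ranksum_p(a, b):
    # Two-sided Wilcoxon rank-sum test via the normal approximation
    # (no tie correction - a sketch, not a full implementation)
    n1, n2 = len(a), len(b)
    ranks = np.empty(n1 + n2)
    ranks[np.concatenate([a, b]).argsort()] = np.arange(1, n1 + n2 + 1)
    R1 = ranks[:n1].sum()                  # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    return erfc(abs(R1 - mu) / sigma / sqrt(2))

rng = np.random.default_rng(4)
# Hypothetical best-cost samples from 30 independent runs of two algorithms
aoa_runs = rng.normal(10.0, 1.0, 30)
other_runs = rng.normal(12.0, 1.0, 30)
p = ranksum_p(aoa_runs, other_runs)
significant = p < 0.05
```

Because the test is rank-based, it makes no normality assumption about the run costs, which is why it is preferred over a t-test for stochastic optimizers.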

Table 5 Wilcoxon ranksum test results for AOA vs. other algorithms (CEC’17 functions with Dim = 50)

4.2.2 Convergence analysis

The convergence ability of AOA and the eight other algorithms is depicted in Figs. 4 and 5 for the CEC’17 test suite with 30 and 50 dimensions, respectively. These figures present the convergence curves for selected functions, since graphs for all 29 functions would enlarge the paper significantly. According to the convergence curves for the nine selected functions f5, f8, f12, f14, f16, f19, f21, f26, and f30, the proposed AOA algorithm showed faster convergence, because it searched effectively by maintaining the trade-off between exploration and exploitation. Especially on f18, f21, f26, and f30 with 30 and 50 dimensions, AOA not only converged faster than the other eight algorithms but also found comparatively much better optimum solutions.

Fig. 4
figure 4

Convergence curves of competitor algorithms on CEC’17 functions with 30 dimensions

Fig. 5
figure 5

Convergence curves of competitor algorithms on CEC’17 functions with 50 dimensions

4.2.3 Exploration-exploitation analysis

The statistical results of AOA on CEC’17 presented in this section demonstrate its efficiency. For further analysis, we also recorded exploration-exploitation ratios during the search process. To analyze the relevant search behaviors, Fig. 6 illustrates the exploration and exploitation maintained by AOA while solving some of the 50-dimensional CEC’17 problems.

Fig. 6
figure 6

Exploration and exploitation phases in AOA on the CEC’17 functions with 50 dimensions

As can be observed from the line graphs in Fig. 6, the AOA algorithm started with high exploration and low exploitation, but later switched to an exploitation strategy for most of the iterations. However, on f18 and f20 with 50 dimensions, AOA kept exploration higher than exploitation while searching for the global optimum (Fig. 6); similar behavior was noticed on f7, f18, and f20 with 50 dimensions (Fig. 6). Overall, AOA maintained a relatively dynamic exploration-exploitation behavior on the CEC’17 functions (Fig. 6).

4.3 Engineering design problems

We further tested AOA for solving four constrained engineering design problems: tension/compression spring design, welded beam design, pressure vessel design, and speed reducer design.

4.3.1 Welded beam design problem

The first problem is to minimize the cost of the welded beam design (Fig. 7). Coello [47] first proposed this problem, and it has since been used as a benchmark for performance evaluation of optimization methods. The cost is optimized subject to constraints on the shear stress (τ), the bending stress (σ) in the beam, the buckling load on the bar (Pb), the end deflection of the beam (δ), and side constraints. The four decision variables are h(x1), l(x2), t(x3), and b(x4). Appendix A provides the mathematical details of the problem.

Fig. 7
figure 7

Welded beam design problem

The results of AOA on the welded beam problem are presented in Tables 6 and 7. The best solutions obtained by AOA and the other algorithms are given in Table 6, and a statistical comparison is presented in Table 7. According to the comparative results, AOA achieved the optimum parameter values, resulting in the best cost function value of 1.72485 for this design problem. Moreover, AOA also showed better convergence than the eight other algorithms while solving this problem (Fig. 7b).

Table 6 Best solution obtained from competitor algorithms for the welded beam problem
Table 7 Results obtained from competitor algorithms for the welded beam problem

4.3.2 Tension/compression spring design problem

The second problem is to minimize the weight of a spring (Fig. 8) subject to certain constraints, such as outside diameter limits, surge frequency, minimum deflection, and shear stress. The problem is explained in detail in [48]. It consists of three independent variables: d or x1 (wire diameter), D or x2 (coil diameter), and P or x3 (number of active coils). The mathematical model of the tension/compression spring design problem is provided in Appendix B.
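In the standard formulation [48], the spring weight is proportional to the wire volume, (P + 2)·D·d². A minimal sketch, evaluated at a commonly reported near-optimal design (the design values are assumptions from the literature, not the paper's exact output):

```python
def spring_weight(d, D, p):
    # Standard tension/compression spring objective:
    # (number of active coils + 2) * coil diameter * wire diameter^2
    return (p + 2) * D * d**2

# Commonly reported near-optimal design (assumed values for illustration):
weight = spring_weight(0.051689, 0.356718, 11.288966)
```

This design yields a weight close to the 0.01268 that AOA reports in Table 9.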

Fig. 8
figure 8

Tension/compression spring design problem

On this problem, too, AOA generated the best parameter values among the competing algorithms (Table 8). Table 9 shows that the objective function value achieved by AOA for the tension/compression spring design (0.01268) is lower than those achieved by the eight other algorithms. The convergence behavior of AOA and the other algorithms is illustrated in Fig. 8b.

Table 8 Best solution obtained from competitor algorithms for the tension/compression spring problem
Table 9 Results obtained from competitor algorithms for the tension/compression spring problem

4.3.3 Speed reducer design problem

The third engineering design optimization problem taken up in this study is the speed reducer design problem (Fig. 9). Here, the task is to minimize the weight of the mechanical device subject to constraints associated with multiple components, such as gear teeth bending stress, surface stress, shaft stresses, and transverse deflections of the shafts [49]. The weight is calculated from seven variables: face width (b or x1), teeth module (m or x2), number of pinion teeth (z or x3), length of the first shaft between bearings (l1 or x4), length of the second shaft between bearings (l2 or x5), and the diameters of the first (d1 or x6) and second (d2 or x7) shafts. Appendix C provides the mathematical expression of the problem.
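The standard speed reducer (Golinski) objective combines gear, shaft, and bearing terms in the seven variables listed above. A minimal sketch, evaluated at a commonly reported near-optimal design (the design values are assumptions from the literature, not the paper's exact output):

```python
def speed_reducer_weight(b, m, z, l1, l2, d1, d2):
    # Standard speed reducer objective: gear term minus shaft-overlap
    # correction, plus shaft material and bearing-span terms.
    return (0.7854 * b * m**2 * (3.3333 * z**2 + 14.9334 * z - 43.0934)
            - 1.508 * b * (d1**2 + d2**2)
            + 7.4777 * (d1**3 + d2**3)
            + 0.7854 * (l1 * d1**2 + l2 * d2**2))

# Commonly reported near-optimal design (assumed values for illustration):
w = speed_reducer_weight(3.5, 0.7, 17, 7.3, 7.715, 3.3502, 5.2867)
```

This design gives a weight close to the 2.995E+03 reported for AOA in Table 11.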

Fig. 9
figure 9

Speed reducer design problem

The proposed AOA algorithm generated optimum values for the design parameters of the speed reducer problem compared to the rest of the applied methods (Table 10). This resulted in the best objective value of 2.995E+03 against the counterpart methods (Table 11). The convergence graph presented in Fig. 9b also shows that AOA converged to a better optimum location than the competing algorithms.

Table 10 Best solution obtained from the competitor algorithms for the speed reducer problem
Table 11 Results obtained from competitor algorithms for the speed reducer problem

4.3.4 Pressure vessel design problem

Lastly, the fourth engineering design problem considered in this study is the pressure vessel design problem (Fig. 10). The problem is to minimize the cost of the pressure vessel design [50]. The cost is optimized with the help of four design variables: Ts or x1 (shell thickness), Th or x2 (head thickness), R or x3 (inner radius), and L or x4 (cylinder length). The mathematical model of the pressure vessel design problem is described in Appendix D.
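The standard pressure vessel objective [50] combines welding, forming, and material costs of the shell and heads. A minimal sketch, evaluated at the widely cited near-optimal design with discrete plate thicknesses (these design values are assumptions from the literature; the 5.90E+03 reported for AOA below likely corresponds to a different, possibly continuous-variable, solution):

```python
def pressure_vessel_cost(Ts, Th, R, L):
    # Standard pressure vessel objective: shell welding/material cost,
    # head forming cost, and two shell material terms.
    return (0.6224 * Ts * R * L + 1.7781 * Th * R**2
            + 3.1661 * Ts**2 * L + 19.84 * Ts**2 * R)

# Widely cited near-optimal design with discrete thicknesses (assumed values):
cost = pressure_vessel_cost(0.8125, 0.4375, 42.0984, 176.6366)
```

This discrete-thickness design evaluates to roughly 6.06E+03, slightly above the value reported for AOA.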

Fig. 10
figure 10

Pressure vessel design problem

According to the results of the pressure vessel design problem (Table 12), the proposed AOA achieved the best parameter settings. The objective function value achieved by AOA was 5.90E+03, which is better than those of the other algorithms applied to this problem (Table 13). Figure 10b also suggests that AOA maintained efficient convergence on this problem as well.

Table 12 Best solution obtained from the competitor algorithms for the pressure vessel design problem
Table 13 Results obtained from competitor algorithms for the pressure vessel design problem

The statistical significance of the proposed method against the others on the engineering problems is reported in Table 14 via p-values generated using the nonparametric Wilcoxon rank-sum test. In Table 14, △, ▽, and ≈ indicate that AOA is significantly better than, significantly inferior to, or insignificantly different from the competing method, respectively. The p-values suggest that AOA produced significantly better results than GA, PSO, L-SHADE, LSHADE-EpSin, WOA, and HHO on all engineering design problems. When compared with SCA, AOA was significantly better on three of the engineering problems, the exception being the welded beam design problem. However, AOA remained insignificantly different from EO on all engineering design problems.
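The p-values behind such tables come from the Wilcoxon rank-sum test. A minimal stdlib-only sketch using the normal approximation without tie correction (the paper presumably used a statistics package; this is an illustrative implementation, suitable for sample sizes of roughly ten or more):

```python
import math

def ranksum_p(sample_a, sample_b):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation."""
    n1, n2 = len(sample_a), len(sample_b)
    combined = sorted((v, i) for i, v in enumerate(sample_a + sample_b))
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < len(combined):
        j = i
        # Extend over equal values so ties share an average rank.
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # 1-based average rank of the tie group
        for k in range(i, j + 1):
            ranks[combined[k][1]] = avg_rank
        i = j + 1
    w = sum(ranks[:n1])                      # rank sum of sample_a
    mu = n1 * (n1 + n2 + 1) / 2              # mean of W under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
```

Applied to two runs' objective values, a p-value below 0.05 would correspond to the △ or ▽ entries in Table 14, and a larger p-value to ≈.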

Table 14 Wilcoxon rank-sum test results for AOA vs. other algorithms (engineering design problems)

5 Conclusion and future works

Generally, the critical considerations regarding the design and performance of an optimization method are simplicity, robustness, and flexibility. By appropriately addressing these features, an optimization algorithm can become widely accepted in the research community. In this connection, recently introduced metaheuristic algorithms, especially those inspired by physics, have produced interesting results. This study proposes a new physics-based metaheuristic algorithm, called the Archimedes optimization algorithm (AOA), which mimics Archimedes' principle. In the design of AOA, the important criteria of simplicity, efficiency, adaptability, and flexibility are effectively ensured. AOA is not only simple, holding few control parameters, but also robust, as the proposed approach solves optimization problems by generating objective function values with minimum error. The AOA algorithm also adapts its pool of candidate solutions to avoid becoming trapped in suboptimal locations. The search efficacy of the proposed approach was tested on complex test functions and four constrained engineering optimization problems. Based on the results, it is evident that AOA not only produced high-quality solutions but also found the global optimum faster than several recently introduced counterparts selected in this study. The AOA algorithm outperformed well-established optimization methods such as GA and PSO, state-of-the-art methods such as L-SHADE and LSHADE-EpSin, and other recently introduced methods such as WOA, SCA, HHO, and EO. The proposed algorithm exhibited robust search efficiency by balancing its exploration and exploitation abilities: the promising solutions on multimodal and composite functions confirmed its exploration, whereas its exploitation was validated by the results on unimodal landscapes.

Future research directions for the proposed AOA include applying it to more real-world problems. This will help further validate the algorithm's flexibility in generating optimum solutions across a variety of optimization problems.