7.1 Introduction

Many real-life problems have been formulated as optimization problems, with applications in many areas such as finance, portfolio selection, medical science, data mining, etc. [1,2,3,4]. Global optimization constitutes an important part of optimization theory. It has many application areas in engineering, such as electrical power distribution and the design of space trajectories [5, 6]. Global optimization is a very active research area because the problems become more complicated from year to year due to the increasing number of variables and the structure of the problems (non-smoothness). Up to now, many new theories and algorithms have been presented to solve global optimization problems [7, 8]. There exist two different types of methods: those based on local searches (Monotonic Basin Hopping, Hill Climbing methods, etc. [9, 10]) and those not based on local searches (Branch and Bound, DIRECT, etc. [11, 12]). We consider the methods based on local searches. For local search based methods, the major difficulties in global optimization are listed below:

  a. Having found a local minimizer with a local solver, how to escape from this current local minimizer.

  b. How to ignore the local minimizers whose values are greater than the value of the current minimizer and find a lower minimizer of the objective function.

  c. How to assess convergence to the global minimizer and determine the stopping criteria.

From a different point of view, global optimization approaches can be classified into three main classes: stochastic methods, heuristic methods, and deterministic methods.

Stochastic methods are quite simple, very efficient on black-box problems, and robust with respect to growth in the dimension of the problem, but some of them find only a local solution instead of the global one. Well-known stochastic approaches are Random Search and Adaptive Search, Markovian algorithms, etc. [13, 14]. Population-based algorithms also belong to the stochastic methods, but we treat them under the heuristic methods.

The heuristic methods are based on the simulation of biological, physical, or chemical processes. These methods are easy to apply and converge to a solution rapidly. However, they can give different results when run repeatedly. Well-known methods are the Genetic Algorithm [15], the Simulated Annealing Algorithm [16,17,18], Particle Swarm Optimization [19, 20], and the Artificial Bee Colony Algorithm [21, 22]. In recent years, hybridizations of heuristic global optimization algorithms have come into prominence [23,24,25,26,27].

Convergence to the solution is guaranteed in deterministic approaches; this is their outstanding property. However, these methods converge to the solution quite slowly [28]. Important methods include Branch and Bound algorithms [11], Covering methods [12], Space Filling Curve methods [29, 30], and other methods [31, 32].

The auxiliary function approach is one of the most important classes of global optimization methods. These methods follow deterministic search strategies and construct an auxiliary function in order to escape from the current local minimizer to a better one; among such methods are the Tunneling Method [33], the Filled Function Method (FFM) [34,35,36], and the Global Descent Method [37].

The first auxiliary function method was introduced by Levy and Montalvo [33]. Cetin et al. developed the tunneling algorithm to solve constrained global optimization problems [38]. In addition, many important studies related to tunneling algorithms have been published in [39,40,41].

Among these methods, the FFM can be considered an effective approach for solving different global optimization problems, since it has several advantages over the others; for example, it finds a better local minimizer sequentially in a simpler way than other methods. The FFM was presented for the first time by Ge [34, 35] and improved in [42,43,44]. Many valuable studies have been presented in order to make the filled function applicable to different types of problems, such as non-smooth problems [45, 46], constrained optimization problems [47], systems of nonlinear equations [48], etc. [49, 50]. Recently, a next generation of filled function or auxiliary function approaches has been developed [51,52,53,54,55].

The FFM presents a good idea for solving global optimization problems. In general, the filled function mechanism is described as follows:

  1. Choose a random starting point and use a convenient local optimization method to find a local minimizer of the objective function.

  2. Construct a filled function based on the current minimizer of the objective function, and minimize the filled function starting from a point in the proximity of this current minimizer. The resulting local minimizer of the filled function lies in a basin of a better solution of the objective function.

  3. Use the minimizer of the filled function obtained in Step 2 as a starting point to find a lower minimizer of the objective function.

By repeating Steps 2 and 3, the number of remaining minimizers is reduced, and finally the global minimizer of the objective function is found. A minimal sketch of this loop is given below.
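The following Python fragment is only an illustrative sketch of this mechanism, not the authors' implementation: it assumes scipy's `minimize` as a stand-in local solver and a hypothetical constructor `make_filled(f, x_star)` that returns a filled function for `f` at the current minimizer `x_star`.

```python
import numpy as np
from scipy.optimize import minimize

def filled_function_loop(f, make_filled, x0, max_cycles=20, eps=1e-2):
    """Illustrative filled-function cycle: local search, fill, restart (hypothetical sketch)."""
    x_star = minimize(f, x0).x                       # Step 1: local minimizer of f
    for _ in range(max_cycles):
        F = make_filled(f, x_star)                   # Step 2: filled function at x_star
        y0 = x_star + eps * np.random.randn(len(x_star))
        y = minimize(F, y0).x                        # minimize the filled function near x_star
        z = minimize(f, y).x                         # Step 3: restart local search on f from y
        if f(z) < f(x_star):                         # accept only a strictly lower minimizer
            x_star = z
        else:
            break                                    # no improvement: stop (simplified criterion)
    return x_star
```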

Some of the existing filled functions have been constructed so that their surface resembles, to some extent, the surface of the objective function inside the basin of the better solution while \(f(x)\geq f(x^{*}_{1})\) (here \( x_1^*\) is the current minimizer of the objective function). This situation has drawbacks: it requires more time and more function evaluations.

In this study, in order to eliminate the drawbacks of some previous filled functions, we propose a new filled function. The new proposal is based on placing many stationary points in the lower basins; in fact, the filled function does not need to descend into the lower basin, it only needs to reach a stationary point in the lower basin, which can then be used as a starting point for minimizing the objective function to obtain a lower minimizer. This idea helps to reduce the running time and the number of function evaluations, both of which are very important for such methods.

This study is organized as follows: In Sect. 7.2, we give some preliminary information. In Sect. 7.3, we propose a new filled function with its properties. In Sect. 7.4, we introduce the filled function algorithm. In Sect. 7.5, we perform a numerical test and present the results obtained from the new method. Finally, Sect. 7.6 consists of conclusions.

7.2 Preliminaries

We consider the following class of unconstrained global optimization problems:

$$\displaystyle \begin{aligned} (P)\qquad \min_{x\in\mathbb{R}^n} f(x), \end{aligned} $$
(7.1)

where \(f : \mathbb{R}^n \to \mathbb{R}\) is a continuously differentiable function.

We assume that the function f(x) is globally convex, which means f(x) → + ∞ as ∥x∥→ + ∞. It follows that there exists a closed, bounded, box-shaped domain \( \varOmega =[l_b,u_b]=\{x:\ l_b\leq x \leq u_b,\ l_b,u_b\in \mathbb {R}^n \}\) that contains all the local minimizers of f(x). Moreover, the number of different values of the local minimizers of f(x) is finite.

Additionally, basic concepts and symbols used in this study are given as follows:

  • k: the number of local minimizers of f(x),

  • \(x^{*}_{k}\): the current local minimizer of f(x),

  • \(x^{*}\): the global minimizer of f(x),

  • \(B^{*}_{k}\): the basin of f(x) at the local minimizer \(x^*_k\).

We state the following definitions:

Definition 7.1 ([34])

Let \(\varOmega \subset \mathbb{R}^n\). A point \(x^{*} \in \varOmega\) is a global minimizer of the objective function f(x) if \(f(x^{*}) \leq f(x)\) for all x ∈ Ω.

Definition 7.2 ([34])

Let \( x_k^* \) be a local minimizer of the objective function f(x). The set of points \(B_k^* \subset \varOmega \) is called a basin of f(x) at the point \( x_k^* \) if any local solver starting from any point in \(B_k^*\) finds the local minimizer \(x_k^*\).

Definition 7.3 ([34])

The auxiliary function \(F(x,x^{*}_{k})\) is called a filled function of the objective function f(x) at a local minimizer \(x_{k}^{*}\) if the function \(F(x,x^{*}_{k})\) has the following properties:

  • \(x_{k}^{*} \) is a local maximizer of the function \(F(x,x^{*}_{k})\),

  • \( F(x,x^{*}_{k}) \) has no stationary points in \( A_1=\{x\in \varOmega |f(x)\geq f(x^{*}_{k}),x\neq x^*_k\}\),

  • if \( x^{*}_{k} \) is not a global minimizer of the function f(x), then the function \( F(x,x^{*}_{k}) \) has a stationary point in the region \( A_2=\{x|f(x)< f(x^{*}_{k}),x\in \varOmega \}\).

7.2.1 Overview of the Filled Functions

In 1987, Ge and Qin proposed a first filled function (we call it the G-function) [34] with two parameters to solve problem (P) at an isolated local minimizer \(x^*_k\); it is defined by

$$\displaystyle \begin{aligned} G(x,x^*_k,r,\rho)=-(\rho^2 \ln [r+f(x)]+\|x-x_k^*\|{}^2), \end{aligned} $$
(7.2)

and in 1990, Ge introduced another filled function (the P-function) [35], which has the following form:

$$\displaystyle \begin{aligned} P(x,x^*_k,r,\rho)=\frac{1}{r+f(x)}+\exp\Big(-\frac{\|x-x_k^*\|{}^2}{\rho^2}\Big), \end{aligned} $$
(7.3)

where r and ρ are parameters which need to be chosen suitably. In general, the G-function and the P-function of the objective function f(x) at the current minimizer \(x^*_k\) must satisfy Definition 7.3.
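For illustration only, the two classical constructions (7.2) and (7.3) can be transcribed directly into Python as below; this is a sketch, not code from the cited works. Here `f` denotes the objective, `x_star` the current minimizer, and `r`, `rho` user-chosen parameters (for the G-function, r must keep r + f(x) positive so that the logarithm is defined).

```python
import numpy as np

def g_function(x, f, x_star, r, rho):
    """G-function (7.2); assumes r + f(x) > 0 so the logarithm is defined."""
    return -(rho**2 * np.log(r + f(x)) + np.linalg.norm(x - x_star)**2)

def p_function(x, f, x_star, r, rho):
    """P-function (7.3)."""
    return 1.0 / (r + f(x)) + np.exp(-np.linalg.norm(x - x_star)**2 / rho**2)
```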

Many important studies have developed the FFM further in order to solve multi-modal global optimization problems. These studies can be classified into two categories depending on the number of adjustable parameters.

7.2.2 Filled Functions with Two Parameters

In [56], Wu et al. propose a filled function with two parameters to decrease the computational cost and overcome several disadvantages of earlier filled functions; it has the following form:

$$\displaystyle \begin{aligned} H_{q,r,x^*_k}(x) = q\exp\Big(-\frac{\|x-x_k^*\|{}^2}{q}\Big) g_r\big(f(x)- f(x_k^*)\big)+ f_r\big( f(x)-f (x_k^*)\big), \end{aligned} $$
(7.4)

where q, r > 0 are adjustable parameters and f r, g r are continuously differentiable functions.

In 2009, Zhang et al. [57] introduced a new definition for the filled function, which rectifies several drawbacks of the classic definition. Their filled function with two parameters is defined by

$$\displaystyle \begin{aligned} P(x,x^*_k,r,a) = \varphi(r+f(x))-a(\|x-x_k^*\|{}^2), \end{aligned} $$
(7.5)

where a > 0 and r are parameters and the function φ(t) is continuously differentiable. Wei et al. [58] propose a new filled function which is not sensitive to its parameters. This function has two parameters and the following formula:

$$\displaystyle \begin{aligned} P(x,x^*_k) =\frac{1}{(1+\|x-x_k^*\|{}^2)} g(f(x)-f(x^*_k)), \end{aligned} $$
(7.6)

and

$$\displaystyle \begin{aligned}g(t)=\left\{\begin{array}{cc} 0,& t\geq 0 ,\\ r\arctan(t^{\rho}),& t<0,\\ \end{array}\right. \end{aligned}$$

where r > 0, ρ > 1 are parameters.

7.2.3 Filled Functions with One Parameter

It is generally accepted that the presence of more than one adjustable parameter in the same filled function makes it difficult to control. The first filled function with only one parameter is the function Q, proposed in 1987 by Ge and Qin; it has the following formula:

$$\displaystyle \begin{aligned} Q(x,a) =-(f(x)-f(x^*_k))\exp(a\|x-x_k^*\|{}^2). \end{aligned} $$
(7.7)

The function Q has one adjustable parameter a; as this parameter becomes larger and larger, the rapidly increasing value of the exponential term negatively affects the computational results [34]. In order to tackle this drawback, the H-function was introduced by Liu in [36]; it is given by

$$\displaystyle \begin{aligned} H(x,a) =\frac{1}{\ln(1+f(x)-f(x^*_k))}-a\|x-x_k^*\|{}^2. \end{aligned} $$
(7.8)

The function H keeps the single-parameter feature of the function Q but avoids the exponential term. Shang et al. introduced a filled function with one adjustable parameter as follows:

$$\displaystyle \begin{aligned} F_q(x,x^*_k) =\frac{1}{(1+\|x-x_k^*\|)} \varphi_q(f(x)-f(x^*_k)+q), \end{aligned} $$
(7.9)

and

$$\displaystyle \begin{aligned}\varphi_q(t)=\left\{\begin{array}{cc} \exp(-\frac{q^3}{t}),& \quad t\neq 0 ,\\ 0,& \quad t=0, \end{array}\right. \end{aligned}$$

where q is a parameter subject to certain conditions [59]. Zhang and Xu constructed a filled function to solve non-smooth constrained global optimization problems [60]. This function was constructed to overcome several drawbacks of previous filled functions, and it has one parameter:

$$\displaystyle \begin{aligned} \begin{array}{rcl} P(x,x^*_k,q) &\displaystyle =&\displaystyle \exp(\|x-x_k^*\|) \ln(1+q(\max\{0,f(x)-f(x^*_k)+r\}\\ &\displaystyle &\displaystyle +\sum^m_{i=1}\max\{0, g_i(x)\})), \end{array} \end{aligned} $$

where q > 0 is the parameter, the g_i(x) are the constraint functions, and r is a prefixed constant.

In 2013, Wei and Wang proposed a new filled function for problem (P) with one adjustable parameter, which is not sensitive to this parameter [61]. The filled function has the following formula:

$$\displaystyle \begin{aligned} P(x,x^*_k) =-\|x-x_k^*\|{}^2 g(f(x)-f(x^*_k)), \end{aligned} $$
(7.10)

where

$$\displaystyle \begin{aligned}g(t)=\left\{\begin{array}{cc} \frac{\pi}{2},& t\geq 0 ,\\ r \arctan(t^2)+\frac{\pi}{2},& t<0,\\ \end{array}\right. \end{aligned}$$

and r is an adjustable weight parameter, taken as large as possible.

Wang et al. constructed a filled function for both smooth and non-smooth constrained problems in 2014 [62]. The constructed filled function is defined by

$$\displaystyle \begin{aligned} \begin{array}{rcl} P(x,x^*_k,q) &\displaystyle =&\displaystyle -\frac{1}{q}\big[f(x)-f(x^*_k)+\max\{0, g_i(x)\}\big]^2-\arctan\big(1+\|x-x_k^*\|{}^2\big)\\ &\displaystyle &\displaystyle +q\big[\min\big(0,\max\big(f(x)-f(x^*_k),\ g_i(x),\ i\in I\big)\big)\big]^3. \end{array} \end{aligned} $$

The filled function above has only one adjustable parameter, which is controlled during the process. A new definition and a new filled function were given in 2016 by Yuan et al. [63]. This filled function has one parameter and is given by

$$\displaystyle \begin{aligned} F(x,x^*_k,q) =V(\|x-x_k^*\|)W_q(f(x)-f(x^*_k)), \end{aligned} $$
(7.11)

where q > 0 is an adjustable parameter, and \(V(t) : \mathbb{R} \to \mathbb{R}\) and \(W_q(t) : \mathbb{R} \to \mathbb{R}\) are continuously differentiable functions satisfying certain properties.

7.3 A New Filled Function Method and Its Properties

We propose a new filled function with two parameters at a local minimizer \( x^{*}_{k} \) to solve problem (P) as follows:

$$\displaystyle \begin{aligned}F(x,x^{*}_{k})=\frac{1}{\alpha+\|x-x^{*}_{k}\|{}^2}h(f(x)-f(x^{*}_{k})),\end{aligned}$$

where

$$\displaystyle \begin{aligned}h(t)=\left\{\begin{array}{cc} 1,& t\geq 0 ,\\ \sin\Big(\mu t+\frac{\pi}{2}\Big),& t<0, \end{array}\right. \end{aligned}$$

and 0 < α ≤ 1 and μ > 1 are parameters.
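A direct transcription of this construction into Python reads as follows; this is only a minimal sketch, with `f` the objective, `x_star` the current local minimizer \( x^{*}_{k} \), and `alpha`, `mu` the two parameters.

```python
import numpy as np

def h(t, mu):
    """h(t): equal to 1 for t >= 0, oscillating as sin(mu*t + pi/2) for t < 0."""
    return 1.0 if t >= 0 else np.sin(mu * t + np.pi / 2)

def new_filled_function(x, f, x_star, alpha=1.0, mu=5.0):
    """Proposed filled function F(x, x_k*) = h(f(x) - f(x_k*)) / (alpha + ||x - x_k*||^2)."""
    return h(f(x) - f(x_star), mu) / (alpha + np.linalg.norm(x - x_star)**2)
```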

The new idea in this filled function is to place many stationary points in the lower basin \( A_2=\{x|f(x)< f(x^{*}_{k}),x\in \varOmega \} \). In fact, the filled function does not need to descend into the lower basin; it only needs to reach a stationary point in A_2, which can then be used as an initial point for minimizing the objective function to obtain a lower minimizer.

The above idea has many advantages; for example, it helps to reduce the running time and the number of function evaluations, which are very important for such methods. Furthermore, the parameter μ is used to increase or decrease the number of stationary points in the region A_2, so μ has to be chosen carefully: if it is small, there is a possibility of losing some of the lower minimizers at which the function value is close to the value at the current minimizer (see Figs. 7.1 and 7.2). The parameter 0 < α ≤ 1 in the term \(\frac {1}{\alpha +\|x-x^{*}_{k}\|{ }^2}\) controls the shape of the peak at \(x^*_k\) and is easy to adjust. The following theorems show that the function \(F(x,x^*_k) \) is a filled function in the sense of Definition 7.3.

Fig. 7.1 Some different values of the parameter μ and their effect on the function \( F(x,x^*_k)\)

Fig. 7.2 The graph of \( F(x,x^*_k)\) in two dimensions

Theorem 7.1

Assume that \( x^{*}_{k} \) is a local minimizer of the function f(x) and \( F(x,x^{*}_{k}) \) is the filled function defined above. Then \( x^{*}_{k} \) is a strict local maximizer of \( F(x,x^*_k) \).

Proof

Since \( x^{*}_{k} \) is a local minimizer of f(x), there exists a neighborhood \( N(x^{*}_{k},\epsilon )\subset A_1 \) of \( x^{*}_{k} \), for some 𝜖 > 0, such that \( f(x)\geq f( x^{*}_{k} ) \) for all \( x\in N(x^{*}_{k},\epsilon ) \). For any such x with \( x\ne x^{*}_{k}\) and any 0 < α ≤ 1, we have \(h(f(x)-f(x^{*}_{k}))=1\), hence

$$\displaystyle \begin{aligned}\frac{F(x,x^{*}_{k})}{F(x^{*}_{k},x^{*}_{k})}=\frac{\alpha+\|x^*_k-x^*_k\|{}^2}{\alpha+\|x-x^{*}_{k}\|{}^2}= \frac{\alpha}{\alpha+\|x-x^*_k\|{}^2}<1.\end{aligned}$$

That means \( x^{*}_{k} \) is a strict local maximizer of \( F(x,x^{*}_{k}) \).

Theorem 7.2

Assume that \(x^{*}_{k}\) is a local minimizer of f(x) and x is any point in the set A_1. Then x is not a stationary point of \( F(x,x^{*}_{k}) \) for any 0 < α ≤ 1.

Proof

Since x ∈ A_1, we have \( f(x)\ge f( x^{*}_{k} ) \) and \( x\ne x^{*}_{k} \). Then \( F( x,x^{*}_{k})=\frac {1}{\alpha +\|x-x^*_k\|{ }^2} \) and \( \nabla F( x,x^{*}_{k})=-2\frac {x-x^*_k}{(\alpha +\|x-x^*_k\|{ }^2)^2}\ne 0 \) for each 0 < α ≤ 1. This implies that the function \( F( x,x^{*}_{k}) \) has no stationary point in the set A_1.

Theorem 7.3

Assume that \(L=\min |f(x^*_i)-f(x^*_j)|\), i, j = 1, 2, …, m, \(f(x^*_i)\neq f(x^*_j)\), and that \(x^*_k\) is a local minimizer of f(x) but not the global one. Then there exists a point x′ ∈ A_2 such that x′ is a stationary point of the function \( F(x,x^{*}_{k}) \) when \(\mu = \frac {\pi }{2L}\), for each 0 < α ≤ 1.

Proof

Since the current local minimizer \(x^*_k\) is not a global minimizer of f(x), there exists a second minimizer \(x^*_{k+1} \in A_2\) such that \(f(x^*_{k+1})<f(x^*_k)\).

For any point y ∈ A_1 we have \(F(y,x^*_k)>0\), and, by the continuity of f(x) and the choice \(\mu = \frac {\pi }{2L}\), we obtain \(F(x^*_{k+1},x^*_k)<0\). Then, by the intermediate value theorem for continuous functions, there exists a point on the segment \([y,x^*_{k+1}]\), lying between y and \(x^*_{k+1}\), at which the filled function equals 0.

Let z be the point closest to \(x^*_{k+1}\) with \(F(z,x^*_k)=0\), and consider the segment \([z,x^*_{k+1}]\). Then z ∈ ∂A_2 and z lies on the boundary of the set \(B^*_{k+1}\), which is a closed region. By the continuity of the function \(F(x,x^*_k)\), there exists a point \(x^\prime \in B^*_{k+1}\) that is a local minimizer of \(F(x,x^*_k)\) with \(F(x^\prime ,x^*_k)<0\). Since \(F(x,x^*_k)\) is continuously differentiable, we obtain

$$\displaystyle \begin{aligned}\nabla F(x^\prime,x^*_k)=0.\end{aligned}$$

7.4 The Algorithm

Based on the preceding sections, we propose the following filled function algorithm:

  1. Set k = 1 and 𝜖 = 10^{-2}, choose an upper bound U_μ = 30 for μ and set μ = 5. Let N be the number of different directions d_i, i = 1, 2, 3, …, N. Choose an initial point x_int ∈ Ω and a value 0 < α ≤ 1, where n denotes the dimension of the problem.

  2. Minimize f(x) using x_int as a starting point to find a local minimizer \( x^*_k \).

  3. Construct the filled function at \( x^*_k \)

    $$\displaystyle \begin{aligned}F(x,x^{*}_{k})=\frac{1}{\alpha+\|x-x^*_k\|{}^2}h(f(x)-f(x^{*}_{k}))\end{aligned} $$

    and set i = 1.

  4. If i ≤ N, set \( x=x^*_k+\epsilon d_i \) and go to Step 5; otherwise go to Step 6.

  5. Starting from x, find a minimizer x_F of \( F(x,x^*_k) \). If x_F ∈ Ω, set x_int = x_F, k = k + 1 and go to Step 2; otherwise set i = i + 1 and go to Step 4.

  6. If μ ≤ U_μ, set μ = μ + 5 and go to Step 2; otherwise take \( x^*_k \) as a global minimizer of f(x) and stop the algorithm.
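The following Python sketch mirrors Steps 1–6 under simplifying assumptions: it uses scipy's `minimize` as a stand-in for the local solver (the authors used Matlab's fminunc), the hypothetical `new_filled_function` from the sketch in Sect. 7.3, and a precomputed list `directions` of vectors d_i whose construction is described after this sketch. As a slight simplification of Step 5, a new local minimizer is accepted only if it improves f.

```python
import numpy as np
from scipy.optimize import minimize

def filled_function_algorithm(f, x_init, lb, ub, directions,
                              alpha=1.0, mu0=5.0, mu_step=5.0, mu_max=30.0, eps=1e-2):
    """Illustrative sketch of Steps 1-6 (not the authors' Matlab implementation)."""
    in_box = lambda x: bool(np.all(x >= lb) and np.all(x <= ub))
    mu = mu0
    x_star = minimize(f, x_init).x                                  # Step 2
    while True:
        F = lambda x: new_filled_function(x, f, x_star, alpha, mu)  # Step 3
        improved = False
        for d in directions:                                        # Steps 4-5
            x_F = minimize(F, x_star + eps * np.asarray(d)).x
            if in_box(x_F):
                x_new = minimize(f, x_F).x                          # back to Step 2
                if f(x_new) < f(x_star):
                    x_star, improved = x_new, True
                    break
        if not improved:                                            # Step 6
            if mu <= mu_max:
                mu += mu_step
            else:
                return x_star
```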

The set of different directions d_i is constructed as follows: let θ_1, …, θ_J ∈ [0, 2π] and \( \vartheta _1,\ldots ,\vartheta _J \in [-\frac {\pi }{2},\frac {\pi }{2}]\) be uniformly distributed. If n = 2Q, the components of \( d^j=(y_1^j,y_2^j,\ldots ,y_{2Q}^j) \) are calculated as

$$\displaystyle \begin{aligned}y_{2l-1}^j=\frac{\sqrt{2}}{\sqrt{n}}\cos{}(\theta_j)\end{aligned}$$
$$\displaystyle \begin{aligned}y_{2l}^j=\frac{\sqrt{2}}{\sqrt{n}}\sin{}(\theta_j),\end{aligned} $$

for l = 1 ∼ Q. If n = 2Q + 1, the components of \( d^j=(y_1^j,y_2^j,\ldots ,y_{2Q+1}^j) \) are calculated as

$$\displaystyle \begin{aligned}y_1^j=\frac{\sqrt{2}}{\sqrt{n}}\cos{}(\vartheta_j)\cos{}(\theta_j)\end{aligned}$$
$$\displaystyle \begin{aligned}y_2^j=\frac{\sqrt{2}}{\sqrt{n}}\cos{}(\vartheta_j)\sin{}(\theta_j)\end{aligned}$$
$$\displaystyle \begin{aligned}y_3^j=\frac{\sqrt{2}}{\sqrt{n}}\sin{}(\vartheta_j)\end{aligned}$$
$$\displaystyle \begin{aligned}y_{2l}^j=\frac{\sqrt{2}}{\sqrt{n}}\cos{}(\theta_j)\end{aligned}$$
$$\displaystyle \begin{aligned}y_{2l+1}^j=\frac{\sqrt{2}}{\sqrt{n}}\sin{}(\theta_j)\end{aligned}$$

for l = 2 ∼ Q [64].
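A possible implementation of this direction-generation scheme is sketched below in Python; it follows the formulas above, drawing the angles uniformly at random, and assumes n ≥ 2 (n ≥ 3 in the odd case).

```python
import numpy as np

def generate_directions(n, N, rng=None):
    """Directions d^j built from uniformly distributed angles theta_j and vartheta_j (cf. [64])."""
    if rng is None:
        rng = np.random.default_rng()
    theta = rng.uniform(0.0, 2.0 * np.pi, N)
    vartheta = rng.uniform(-np.pi / 2.0, np.pi / 2.0, N)
    c = np.sqrt(2.0 / n)
    directions = []
    for j in range(N):
        d = np.empty(n)
        if n % 2 == 0:                          # n = 2Q
            d[0::2] = c * np.cos(theta[j])      # y_{2l-1}, l = 1..Q
            d[1::2] = c * np.sin(theta[j])      # y_{2l},   l = 1..Q
        else:                                   # n = 2Q + 1
            d[0] = c * np.cos(vartheta[j]) * np.cos(theta[j])
            d[1] = c * np.cos(vartheta[j]) * np.sin(theta[j])
            d[2] = c * np.sin(vartheta[j])
            d[3::2] = c * np.cos(theta[j])      # y_{2l},   l = 2..Q
            d[4::2] = c * np.sin(theta[j])      # y_{2l+1}, l = 2..Q
        directions.append(d)
    return directions
```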

7.5 Numerical Results

In this section, we test our algorithm numerically on the following test problems:

Problems 5.1–5.3 (Two-Dimensional Function)

$$\displaystyle \begin{aligned}\min f(x)=[1-2x_2+c\sin{}(4\pi x_2)-x_1]^2+[x_2-0.5\sin{}(2\pi x_1)]^2, \end{aligned}$$

for x 1, x 2 ∈ [−3, 3], where c = 0.05, 0.2, 0.5.

Problem 5.4 (Three-Hump Back Camel Function)

$$\displaystyle \begin{aligned}\min f(x)=2x_1^2-1.05x_1^4+\frac{1}{6}x_1^6-x_1x_2+x_2^2,\end{aligned} $$

for x 1, x 2 ∈ [−3, 3].

Problem 5.5 (Six-Hump Back Camel Function)

$$\displaystyle \begin{aligned}\min f(x)=4x_1^2-2.1x_1^4+\frac{1}{3}x_1^6-x_1x_2-4x_2^2+4x_2^4,\end{aligned} $$

for x 1, x 2 ∈ [−3, 3].

Problem 5.6 (Treccani Function)

$$\displaystyle \begin{aligned}\min f(x)=x_1^4+4x_1^3+4x_1^2+x_2^2,\end{aligned} $$

for x 1, x 2 ∈ [−3, 3].

Problem 5.7 (Goldstein and Price Function)

$$\displaystyle \begin{aligned} \min f(x)=g_1(x)g_2(x),\end{aligned} $$

where

$$\displaystyle \begin{aligned} g_1(x) = 1 + (x_1 + x_2 + 1)^2(19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1x_2 + 3x_2^2),\end{aligned} $$

and

$$\displaystyle \begin{aligned} g_2(x) = 30 + (2x_1 - 3x_2)^2(18 - 32x_1 + 12x^2_1 + 48x_2- 36x_1x_2 + 27x_2^2),\end{aligned} $$

for x 1, x 2 ∈ [−3, 3].

Problem 5.8 (Shubert Function)

$$\displaystyle \begin{aligned}\min f(x)=\Bigg\{\sum_{i=1}^5i\cos[(i + 1)x_1 + i ]\Bigg\}\Bigg\{\sum_{i=1}^5i\cos[(i + 1)x_2 + i ]\Bigg\},\end{aligned}$$

s. t. x 1, x 2 ∈ [−10, 10].

Problem 5.9 (Rastrigin Function)

$$\displaystyle \begin{aligned}\min f(x)=20+\sum_{i=1}^2\Bigg[x_i^2-10\cos (2\pi x_i)\Bigg],\end{aligned} $$

for x 1, x 2 ∈ [−5.12, 5.12].

Problem 5.10 (Branin Function)

$$\displaystyle \begin{aligned}\min f(x)=\Big(x_2-\frac{5.1}{4\pi ^2}x_1^2+\frac{5}{\pi}x_1-6\Big)^2+10\Big(1-\frac{1}{8\pi}\Big)\cos{}(x_1)+10,\end{aligned} $$

for x 1 ∈ [−5, 10], x 2 ∈ [0, 15].

Problems 5.11, 5.12, 5.13 (Shekel Function)

$$\displaystyle \begin{aligned}\min f(x)=-\sum_{i=1}^m\Bigg[\sum_{j=1}^4(x_j-a_{i,j})^2+b_i\Bigg]^{-1},\end{aligned} $$

where m = 5, 7, 10, the b_i are the components of an m-dimensional vector, the a_{i,j} are the entries of a 4 × m-dimensional matrix, and x_j ∈ [0, 10], j = 1, …, 4.

Problems 5.14–5.21 (n-Dimensional Function)

$$\displaystyle \begin{aligned}\min f(x)=\frac{\pi}{n}[10\sin^2\pi x_1+g(x)+(x_n-1)^2],\end{aligned} $$

where \(g(x) =\sum _{i=1}^{n-1}\Bigg [(x_i-1)^2(1+10\sin ^2\pi x_{i+1})\Bigg ]\) and x i ∈ [−10, 10], i = 1, 2, …, n.

Problems 5.22–5.29 (Levy Function)

$$\displaystyle \begin{aligned} \begin{array}{rcl} \min f(x)&\displaystyle =&\displaystyle \sin^2(\pi w_1)+\sum^{n-1}_{i=1}(w_i-1)^2\Bigg[1+10\sin^2(\pi w_i+1)\Bigg]\\ &\displaystyle &\displaystyle +(w_n-1)^2\Bigg[1+\sin^2(2\pi w_n)\Bigg], \end{array} \end{aligned} $$

where

$$\displaystyle \begin{aligned}w_i= 1+\frac{x_i-1}{4}, \quad \text{for all } i=1,\ldots,n,\end{aligned}$$

for x i ∈ [−10, 10], i = 1, 2, …, n.
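As a usage illustration only (not the authors' experimental setup), a test problem such as the Rastrigin function of Problem 5.9 could be passed to the hypothetical sketches above as follows.

```python
import numpy as np

def rastrigin(x):
    """Problem 5.9 (Rastrigin), n = 2, x_i in [-5.12, 5.12]; global minimum 0 at the origin."""
    return 20.0 + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

lb, ub = np.full(2, -5.12), np.full(2, 5.12)
dirs = generate_directions(n=2, N=8)
x0 = np.random.default_rng(0).uniform(lb, ub)
x_best = filled_function_algorithm(rastrigin, x0, lb, ub, dirs)
print(x_best, rastrigin(x_best))
```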

We rearrange the above problems, and the list of test problems is presented in Table 7.1. The algorithm is run 10 times independently from different starting points for each problem on a PC with Matlab R2016a. The "fminunc" function of Matlab is used as the local solver. The symbols used are the following:

  • No.: the number of the problem,

  • n: the dimension,

  • itr-mean: the mean iteration number of the 10 runs,

  • f-eval: the total number of function evaluations,

  • time: the mean of the aggregate running time over the 10 runs (in seconds),

  • f-mean: the mean of the function values in 10 runs,

  • f-best: the best function value in 10 runs,

  • S-R: the success rate among the 10 runs with different starting points.

Table 7.1 The list of test problems

The results of the algorithm are presented in Table 7.2. The table consists of seven columns: (No) the number of the problem, (n) the dimension of the problem, (itr-mean) the mean value of the total iteration number, (f-mean) the mean of the function values, (f-best) the best of the function values, (f-eval) the mean number of function evaluations, and (S-R) the success rate over ten trials with uniformly chosen initial points. As shown in this table, our algorithm is tested on 29 problems with dimensions up to 50, each problem being tested from ten different initial points. Our algorithm reaches a 10∕10 success rate for almost 40% of all test problems, and at least a 6∕10 success rate is obtained for every test problem, as stated in column (S-R). Over the total number of trials, a 94% success rate is obtained with our algorithm. Moreover, the f-mean and f-best values in Table 7.2 are very close to the known optimal function values given in Table 7.1.

Table 7.2 The numerical results of our algorithm on the list of problems

The comparison of our algorithm with the method in [65] is summarized in Table 7.3.

Table 7.3 The comparison of our algorithm with algorithm in [65]

In general, it can be seen from Table 7.3 that the algorithm presented in this paper has an advantage in several respects, especially in the columns for function evaluations and success rates, compared to the algorithm in [65]. Both methods (our method and the method in [65]) are sufficiently successful in terms of the "f-best" values. Our method completes the global optimization process with fewer function evaluations than the method in [65] on 85% of the test problems. Using our algorithm, 88% of the trials used in the comparison are successful, and at least a 7∕10 success rate is obtained for each test problem; using the algorithm in [65], 86% of the trials are successful, again with at least a 7∕10 success rate for each test problem. Moreover, our method is more effective than the method in [65] in terms of "f-eval" and "S-R" on the test problems of dimension 10 and higher. Thus, the algorithm introduced in this paper is more efficient than the algorithm introduced in [65].

7.6 Conclusions

In this chapter, a new filled function for unconstrained global optimization is presented and its useful properties are established. The proposed filled function contains two parameters which can easily be adjusted during the minimization process. The corresponding filled function algorithm is constructed. Furthermore, numerical experiments have been performed in order to demonstrate the effectiveness of the presented algorithm. It can be seen from the computational results that the presented method is promising.

This new algorithm is an effective approach for solving multi-modal global optimization problems. In the minimization process, our method saves time in finding a lower minimizer, and it is guaranteed that higher minimizers are not revisited during the minimization process, independently of the parameters. These two important properties make our method advantageous over other methods.

For future work, the algorithm can be applied to many real-life problems arising in areas such as data mining, chemical processes, and the aerospace industry.