1 Introduction

Global optimization has extensive applications in many fields, such as engineering, finance, management, and decision science. In recent years, many theoretical and computational contributions have been reported for solving global optimization problems. In general, existing approaches can be classified into two categories: deterministic methods and probabilistic methods. The former includes the auxiliary function method [1,2,3,4], the trajectory method [5, 6], and the covering method [7, 8], among others. Typical representatives of the latter are the genetic algorithm [10], the simulated annealing method [11], particle swarm optimization [12] and the differential evolution algorithm [13]. Although probabilistic methods have been effectively applied to many complex problems, some issues remain open, such as convergence analysis, the reduction of computational time, and the selection of parameters. Compared with probabilistic methods, deterministic methods are more mature in theory, but many of them depend on conditions specific to the problem at hand. Additionally, in practical problems, most objective functions have several local minimizers, which poses a great challenge for global optimization. The key to global optimization is finding a way to escape from the current local minimizer. Among the existing methods, the auxiliary function method appears to have several advantages over the others, mainly due to its relatively easy implementation as a process that aims at successively finding smaller local minimizers. This paper focuses on one representative auxiliary function method, the filled function method (FFM), whose main idea can be described as follows:

  1.

    An arbitrary point is taken as an initial point to minimize the objective function by using a local optimization method, and a minimizer of the objective function is obtained.

  2.

    Based on this minimizer of the objective function, construct a filled function and take some points around the minimizer as initial points to minimize the constructed filled function. A minimizer of the filled function falls into a better region, i.e., a region containing a minimizer with a smaller objective function value, and this better minimizer will then be found.

  3.

    Take the minimizer of the filled function obtained in the second step as an initial point to minimize the objective function and obtain a better minimizer of the objective function.

By repeating steps 2 and 3, a global minimizer is found at last.
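To fix ideas, this outer loop can be sketched as follows. This is a minimal Python sketch (not the paper's Matlab implementation); the names `filled_function_loop` and `make_filled` are illustrative, and SciPy's bound-constrained local solver stands in for any local optimization method.

```python
import numpy as np
from scipy.optimize import minimize

def filled_function_loop(f, make_filled, x0, bounds, max_cycles=20):
    """Generic FFM loop corresponding to steps 1-3 above (a sketch;
    make_filled is any constructor of a filled function at the
    current minimizer)."""
    x_star = minimize(f, x0, bounds=bounds).x          # step 1: local minimization
    for _ in range(max_cycles):
        F = make_filled(f, x_star)                     # step 2: build filled function
        start = x_star + 1e-3 * np.random.randn(len(x_star))
        x_trial = minimize(F, start, bounds=bounds).x  # step 2: minimize it
        x_new = minimize(f, x_trial, bounds=bounds).x  # step 3: refine on f
        if f(x_new) < f(x_star) - 1e-10:
            x_star = x_new       # a better minimizer was found; repeat
        else:
            break                # no further improvement
    return x_star
```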

The filled function method has been applied to solve various global optimization problems, e.g., bound-constrained and nonlinearly constrained optimization problems [2], integer programming [14], max-cut problems [15] and so on. Since the filled function method was proposed as one of the main methods for unconstrained global optimization, considerable attention has been paid to the construction of new filled functions [9, 16, 17]. Both theoretical and algorithmic studies demonstrate the potential of the filled function method, and many different filled functions have been proposed. Conventional filled functions, however, are either non-differentiable (even discontinuous) [5, 6], need more than one adjustable parameter [1, 3, 4], or contain ill-conditioned terms (exponential or logarithmic terms) [1, 2]. Additionally, as the general framework of the FFM shows, a minimizer of the filled function is generally not a minimizer of the objective function, so the original problem must still be solved to obtain a better local minimizer. Consequently, if the filled function shares local minimizers with the objective function, and these local minimizers are better than the current one, then a minimizer of the filled function is itself a better minimizer of the original problem, and the efficiency of the FFM is higher.

To overcome the shortcomings of conventional filled functions, a new filled function with only one easily adjustable parameter is proposed. It is a continuously differentiable function that contains neither exponential nor logarithmic terms. More satisfactorily, any local minimizer of the newly proposed function is a better local minimizer of the original problem.

The rest of this paper is organized as follows. Basic knowledge for the subsequent sections is given in Sect. 2. In Sect. 3, a new continuously differentiable filled function with only one parameter is proposed and its properties are analyzed. An algorithm with some practical considerations is given in Sect. 4, together with numerical results on a set of test problems. Finally, some concluding remarks are given in Sect. 5.

2 Preliminaries

In this paper, we consider the following unconstrained global optimization problem:

$$\begin{aligned} (P)\left\{ \begin{array}{ll} \min &{} f(x), \\ s.t.&{} x\in R^n. \end{array} \right. \end{aligned}$$

where \(f:R^{n} \rightarrow R\) is a continuously differentiable function.

Assumption 1

f(x) is a coercive function on \(R^{n}\), namely, \(f(x)\rightarrow +\infty \), as \(\Vert x\Vert \rightarrow +\infty \).

Assumption 1 implies that there exists a box \(\varOmega =\prod \nolimits _{i=1}^{n}[l_{i},u_{i}]\subset R^{n}\) whose interior contains all global minimizers of f(x). Thus the problem (P) is equivalent to the following problem:

$$\begin{aligned} \min \limits _{x\in \varOmega } f(x) \end{aligned}$$
(1)

In the following, we therefore consider only problem (1). In order to analyze the properties of the new filled function, the following further assumptions are needed.

Assumption 2

f(x) has only a finite number of minimizers in \(\varOmega \) and the set of the minimizers is denoted as

$$\begin{aligned} M^{l}=\left\{ x_{i}^{*}|i=1,2,\dots ,N\right\} \end{aligned}$$

where N is the number of minimizers of f(x).

Assumption 3

All local minimizers of f(x) fall into the interior of \(\varOmega \), and each point y on the boundary of \(\varOmega \) satisfies \(f(y)>c\), where \(c=\max \{f(x)|x\in M^{l}\}\).

Additionally, some useful symbols will be adopted in the following.

  • k: the iteration number;

  • \(x_{k}\): the initial point in the k-th iteration;

  • \(x_{k}^{*}\): the local minimizer of f(x) in the k-th iteration;

  • \(f_{k}^{*}\): the function value at \(x_{k}^{*}\);

  • \(S_{1}\): the set defined by \(S_{1}=\{ x| f(x)\ge f(x_{k}^{*}),x\in \varOmega \backslash \{x_{k}^{*}\}\}\);

  • \(S_{2}\): the set defined by \(S_{2}=\{x|f(x)<f(x_{k}^{*}),x\in \varOmega \}\);

  • m: a constant defined by \(m=\min \nolimits _{i,j\in \{1,2,\dots ,N\}, f(x_{i}^{*})\ne f(x_{j}^{*})}|f(x_{i}^{*})-f(x_{j}^{*})|\), i.e., the smallest gap between distinct local minimum values; for instance, if the local minimum values are 0, 0.4 and 1.0, then \(m=0.4\).

Based on the above symbols and assumptions, we recall that the FFM was first proposed by Ge [1, 2] and has since been updated through several generations. Examples of first-generation filled functions are the P-function [1] and the G-function [2], listed as follows:

$$\begin{aligned} P(x,r,\rho )= & {} \exp \left( -\Vert x-x_{k}^{*}\Vert ^{2}/\rho ^{2}\right) /(r+f(x)) \end{aligned}$$
(2)
$$\begin{aligned} G(x,r,\rho )= & {} -\left[ \rho ^{2}\ln (r+f(x))+\left\| x-x_{k}^{*}\right\| ^{p}\right] \end{aligned}$$
(3)

where p is a positive integer; for example, p can be taken as 2.

The P-function and G-function share a common drawback: there are two adjustable parameters, r and \(\rho \), which need to be appropriately adjusted and coordinated. Because of this limitation, second-generation filled functions were proposed. Among them, the best known is the Q-function [2], given by

$$\begin{aligned} Q(x,A)=-\left( f(x)-f\left( x_{k}^{*}\right) \right) \exp \left( A\left\| x-x_{k}^{*}\right\| ^{2}\right) . \end{aligned}$$
(4)

The Q-function has only one adjustable parameter A, so the algorithm is significantly more efficient than those of the first generation. However, the Q-function is vulnerable to its exponential term when applied to global optimization, since the magnitude of that term increases exponentially with the parameter A. The larger the value of A, which the filled function property requires to be large, the more likely the exponential term is to cause an overflow in computation. To overcome this shortcoming, the H-function was proposed in [18] as follows:

$$\begin{aligned} H(x,A)=1/\ln \left( 1+f(x)-f\left( x_{k}^{*}\right) \right) -A\left\| x-x_{k}^{*}\right\| ^2. \end{aligned}$$
(5)

The H-function keeps the advantage of the Q-function that it has only one adjustable parameter and, furthermore, it does not include an exponential term. The H-function can be regarded as an example of the third-generation filled functions. Nevertheless, it is discontinuous at points whose function value equals that at \(x_{k}^{*}\). Therefore, most local minimization algorithms used within the filled function method may lose efficiency there, and the FFM may fail to find a global minimizer of problem (1). This led to the fourth generation of the FFM, the C-function [19]. Following the idea of the C-function, a new filled function with the same properties as the C-function is constructed in this paper. The new filled function not only avoids several disadvantages of conventional filled functions, but also shares its local minimizers with the original problem.
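To make the overflow issue of the Q-function concrete, here is a tiny numerical illustration (Python with NumPy; a simple arithmetic check, not part of the paper's experiments):

```python
import numpy as np

A = 50.0      # a moderate value of the parameter A
d2 = 100.0    # ||x - x_k*||^2 for a point at distance 10 from x_k*
print(np.exp(A * d2))   # exp(5000) overflows to inf in double precision
```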

3 A new filled function and its properties

The definition of the filled function was first introduced by Ge in [1]. Since then, many variations of the definition have been given. In this paper, we adopt the following revised definition of the filled function:

Definition 1

Suppose \(x_{k}^{*}\) is a current local minimizer of f(x). A continuously differentiable function \({\mathcal {F}}(x)\) is said to be a filled function of f(x) at \(x_{k}^{*}\) if it satisfies the following properties:

  (1)

    \(x_{k}^{*}\) is a strict local maximizer of \({\mathcal {F}}(x)\);

  (2)

    \({\mathcal {F}}(x)\) has no stationary point in the set \(S_{1}\);

  (3)

    If \(x_{k}^{*}\) is not a global minimizer of f(x), namely \(S_{2}\) is not empty, then there exists a point \(x_{k}^{'}\in S_{2}\) that is a local minimizer of \({\mathcal {F}}(x)\) and also a better local minimizer of f(x).

In order to construct a new filled function, two univariate functions are first introduced:

$$\begin{aligned} h(t)= & {} \left\{ \begin{array}{ll} 1,&{} \quad t\ge 0,\\ 1-2t^3-3t^2,&{} \quad -1\le t<0,\\ 0 , &{}\quad t\le -1. \end{array} \right. \\ g(t)= & {} \left\{ \begin{array}{ll} 0,&{} \quad t\ge 0,\\ t^3,&{} \quad t<0. \end{array} \right. \end{aligned}$$

It can be seen that h(t) and g(t) are both continuously differentiable over R. Based on h(t) and g(t), we construct a filled function for solving problem (1) at a local minimizer \(x_{k}^{*}\) as follows:

$$\begin{aligned} {\mathcal {F}}\left( x,x_{k}^{*},P\right) =\frac{1}{1+\left\| x-x_{k}^{*}\right\| ^{2}}\times h\left( P\times \left( f(x)-f\left( x_{k}^{*}\right) \right) \right) +g\left( f(x)-f\left( x_{k}^{*}\right) \right) \nonumber \\ \end{aligned}$$
(6)

where \(P>0\) is a parameter. The function defined by (6) is continuously differentiable. The following theorems show that (6) is a filled function satisfying Definition 1.
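For concreteness, h(t), g(t) and the filled function (6) can be coded directly. The following Python sketch mirrors the notation above (the helper name `make_filled` is ours, not from the paper):

```python
import numpy as np

def h(t):
    """The C^1 switching function h(t) defined above."""
    if t >= 0:
        return 1.0
    if t >= -1:
        return 1.0 - 2.0 * t**3 - 3.0 * t**2
    return 0.0

def g(t):
    """g(t) = t^3 for t < 0 and 0 otherwise; also C^1 on R."""
    return t**3 if t < 0 else 0.0

def make_filled(f, x_star, P):
    """Build the filled function (6) at the current minimizer x_star."""
    x_star = np.asarray(x_star, dtype=float)
    f_star = f(x_star)
    def F(x):
        d2 = np.sum((np.asarray(x) - x_star)**2)   # ||x - x_k*||^2
        t = f(x) - f_star
        return h(P * t) / (1.0 + d2) + g(t)
    return F
```

Note that `make_filled(f, x_star, P)(x_star)` returns 1, in agreement with (7) in Theorem 1 below.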

Definition 2

A set \(N(x,\delta )=\{y|\Vert y-x\Vert <\delta \}\) is called a neighborhood of x.

Definition 3

A point y is said to be a local minimizer (maximizer) of f(x) on a set A if there exists \(\delta >0\) such that \(f(z)\geqslant (\leqslant )f(y)\) for all \(z\in N(y,\delta )\bigcap A\). If equality holds only at \(z=y\), the minimizer (maximizer) is called strict. If \(f(z)\geqslant (\leqslant )f(y)\) holds for all \(z\in A\), then y is called a global minimizer (maximizer) of f(x) on A.

Theorem 1

Suppose \(x_{k}^{*}\) is a local minimizer of f(x) and \({\mathcal {F}}(x,x_{k}^{*},P)\) is defined by (6), then \(x_{k}^{*}\) is a strict local maximizer of \({\mathcal {F}}(x,x_{k}^{*},P)\) on \(\varOmega \) for all \(P>0\).

Proof

Since \(x_{k}^{*}\) is a local minimizer of f(x), there exists \(\delta >0\) such that \(f(x)\geqslant f(x_{k}^{*})\) for any \(x\in \varOmega \bigcap N(x_{k}^{*},\delta )\). Then, for any \(x\in \varOmega \bigcap N(x_{k}^{*},\delta )\) with \(x\ne x_{k}^{*}\), one has

$$\begin{aligned} {\mathcal {F}}\left( x,x_{k}^{*},P\right) =\frac{1}{1+\Vert x-x_{k}^{*}\Vert ^2}<1={\mathcal {F}}\left( x_{k}^{*},x_{k}^{*},P\right) . \end{aligned}$$
(7)

Thus, \(x_{k}^{*}\) is a strict local maximizer of \({\mathcal {F}}(x,x_{k}^{*},P)\). \(\square \)

Theorem 2

Suppose \(x_{k}^{*}\) is a local minimizer of f(x). If \(x\in S_{1}\), then x is not a stationary point of \( {\mathcal {F}}(x,x_{k}^{*},P)\) for any \(P> 0\).

Proof

Since \(x\in S_{1}\), one has \(x\ne x_{k}^{*}\) and \(f(x)\ge f(x_{k}^{*})\), so that \(h(P\times (f(x)-f(x_{k}^{*})))=1\) and \(g(f(x)-f(x_{k}^{*}))=0\). Hence

$$\begin{aligned} {\mathcal {F}}\left( x,x_{k}^{*},P\right) =\frac{1}{1+\left\| x-x_{k}^{*}\right\| ^2},\quad \nabla {\mathcal {F}}\left( x,x_{k}^{*},P\right) =\frac{-2\left( x-x_{k}^{*}\right) }{\left( 1+\left\| x-x_{k}^{*}\right\| ^2\right) ^2}\ne \mathbf 0 \end{aligned}$$

That is, x is not a stationary point of \( {\mathcal {F}}(x,x_{k}^{*},P)\), where \(\nabla {\mathcal {F}}\) denotes the gradient of \({\mathcal {F}}\). \(\square \)

By the continuous differentiability of \({\mathcal {F}}(x,x_{k}^{*},P)\) and the definition of \(S_{1}\), we know that no point \(y\in S_{1}\) is a local minimizer of \({\mathcal {F}}(x,x_{k}^{*},P)\).

Theorem 3

Suppose \(x_{k}^{*}\) is a local but not global minimizer of f(x) on \(\varOmega \), i.e., \(S_{2}\) is not empty. Then there exists a point \(x^{'}\in S_{2}\) such that \(x^{'}\) is a local minimizer of \({\mathcal {F}}(x,x_{k}^{*},P)\) whenever \(P>\frac{1}{m}\).

Proof

Since \(x_{k}^{*}\) is a local but not global minimizer of f(x), there exists another local minimizer \(x^{*}\) of f(x) such that \(f(x^{*})<f(x_{k}^{*})\).

By the definition of m, \(f(x^{*})-f(x_{k}^{*})\le -m\). Hence, if \(P>\frac{1}{m}\), then \(P\times (f(x^{*})-f(x_{k}^{*}))\le -mP<-1\), so that \(h(P\times (f(x^{*})-f(x_{k}^{*})))=0\). Consequently,

$$\begin{aligned} {\mathcal {F}}\left( x^{*},x_{k}^{*},P\right) =\left( f\left( x^{*}\right) -f\left( x_{k}^{*}\right) \right) ^3<0 \end{aligned}$$
(8)

By Assumption 3, every \(y\in \partial \varOmega \) satisfies \(f(y)>f(x_{k}^{*})\), and thereby \({\mathcal {F}}(y,x_{k}^{*},P)=\frac{1}{1+\Vert y-x_{k}^{*}\Vert ^2}>0\). Thus, by the intermediate value theorem for continuous functions, there exists a point on the segment \([x^{*}, y]\) between \(x^{*}\) and y at which the value of \({\mathcal {F}}\) is 0. Let z be the point closest to \(x^{*}\) on this segment with \({\mathcal {F}}(z, x_{k}^{*}, P)=0\); we thereby obtain a segment \([x^{*}, z]\).

Let \(S(x^{*})\) be the union of all such segments \([x^{*}, z]\) as y ranges over \(\partial \varOmega \); it is a closed region. By the continuity of \({\mathcal {F}}(x, x_{k}^{*},P)\), there exists \(x^{'}\in S(x^{*})\) that is a minimizer of \({\mathcal {F}}(x, x_{k}^{*},P)\) on \(S(x^{*})\) and satisfies \({\mathcal {F}}(x^{'}, x_{k}^{*},P)<0\). Since \({\mathcal {F}}(x, x_{k}^{*},P)\) is continuously differentiable,

$$\begin{aligned} \nabla {\mathcal {F}}\left( x^{'}, x_{k}^{*},P\right) =\mathbf 0 \end{aligned}$$

\(\square \)

From Theorems 1, 2 and 3, we know that if \(x_{k}^{*}\) is not a global minimizer of f(x) on \(\varOmega \), then there exists a local minimizer \(x^{'}\) of \( {\mathcal {F}}(x,x_{k}^{*},P)\) on \(\varOmega \) satisfying \(x^{'}\in S_{2}\), provided the parameter P is taken sufficiently large. Meanwhile, if \(x_{k}^{*}\) is not a global minimizer of f(x), there exists another local minimizer \(x^{*}\) of f(x) with \(f(x^{*})<f(x_{k}^{*})\); then there exists \(N(x^{*},\delta )\) such that \(P\times (f(x)-f(x_{k}^{*}))<-1\) for all \(x\in N(x^{*},\delta )\), and hence \( {\mathcal {F}}(x,x_{k}^{*},P)=(f(x)-f(x_{k}^{*}))^{3}\) for all \(x\in N(x^{*},\delta )\). It is obvious that \(x^{*}\) is then a local minimizer of \( {\mathcal {F}}(x,x_{k}^{*},P)\). Thus, a local minimizer of \( {\mathcal {F}}(x,x_{k}^{*},P)\) is also a minimizer of f(x) whose function value is less than \(f(x_{k}^{*})\). Therefore, the filled function method only needs to minimize the filled function in order to find a global minimizer of problem (1).

4 Filled function algorithm and numerical experiments

4.1 Filled function algorithm and some explanations

Based on the theorems and discussions in the above section, a new filled function algorithm for finding a global minimizer of f(x) is given, together with some explanations. The algorithm is described first; a runnable sketch follows the step list.

The filled function algorithm

  Step 0:

    Choose an upper bound Ubp of P (e.g., \(10^{6}\)), a constant \(\rho >1\) (e.g., \(\rho =10\)) and an initial value of P (e.g., \(P=1\)). Directions \(e_{i}\), \(i=1,2,\dots ,K\), with \(K\ge 2n\), are given in advance, where n is the dimension of the optimization problem. Set \(k:=1\) and choose a point \(x_{k}\in \varOmega \).

  Step 1:

    Minimize f(x) starting from the initial point \(x_{k}\in \varOmega \) and obtain a minimizer \(x_{k}^{*}\) of f(x).

  Step 2:

    Construct

    $$\begin{aligned}\begin{array}{cl} {\mathcal {F}}\left( x,x_{k}^{*},P\right) =\frac{1}{1+\Vert x-x_{k}^{*}\Vert ^{2}}\times h\left( P\times \left( f(x)-f\left( x_{k}^{*}\right) \right) \right) +g\left( f(x)-f\left( x_{k}^{*}\right) \right) . \end{array}\end{aligned}$$

    Set \(i=1\).

  Step 3:

    If \(i\leqslant K\), then set \(x=x_{k}^{*}+\delta e_{i}\) and go to Step 4; otherwise, go to Step 5.

  Step 4:

    Use x as an initial point for the minimization of \({\mathcal {F}}(x,x_{k}^{*},P)\). If the minimization sequence of \({\mathcal {F}}(x,x_{k}^{*},P)\) goes out of \(\varOmega \), set \(i=i+1\) and go to Step 3; otherwise, a minimizer \(x^{'}\) is found by minimizing \({\mathcal {F}}(x,x_{k}^{*},P)\); set \(x_{k+1}^{*}=x^{'}\), \(k=k+1\) and go to Step 2.

  Step 5:

    If \(P<Ubp\), then increase P by setting \(P:=\rho \times P\) and go to Step 2; otherwise, the algorithm stops and \(x_{k}^{*}\) is taken as a global minimizer of problem (P).
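The following is a minimal runnable sketch of Steps 0–5 in Python. The paper's own implementation is in Matlab with an SQP local solver, for which SciPy's SLSQP stands in here; `make_filled` is the helper sketched in Sect. 3, and all other names are illustrative. Since a bound-constrained solver never leaves \(\varOmega \), the "goes out of \(\varOmega \)" test of Step 4 is replaced by an explicit improvement test on f:

```python
import numpy as np
from scipy.optimize import minimize

def local_min(func, x0, bounds):
    """Local solver; SLSQP plays the role of the paper's SQP method."""
    return minimize(func, x0, method='SLSQP', bounds=bounds).x

def nffm(f, x1, lower, upper, P0=1.0, Ubp=1e6, rho=10.0, delta=1e-3):
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    bounds = list(zip(lower, upper))
    n = len(x1)
    dirs = [s * e for e in np.eye(n) for s in (1.0, -1.0)]  # Step 0: K = 2n
    P = P0
    x_star = local_min(f, np.asarray(x1, float), bounds)    # Step 1
    while P < Ubp:
        F = make_filled(f, x_star, P)                       # Step 2: build (6)
        improved = False
        for e in dirs:                                      # Step 3
            x_new = local_min(F, x_star + delta * e, bounds)  # Step 4
            if f(x_new) < f(x_star) - 1e-10:
                x_star = x_new      # better minimizer found; back to Step 2
                improved = True
                break
        if not improved:
            P *= rho                                        # Step 5: enlarge P
    return x_star       # taken as a global minimizer of (P)
```

By Theorem 3, once P exceeds 1/m, any local minimizer of \({\mathcal {F}}\) found inside \(\varOmega \) already lies in \(S_{2}\), so the improvement test mirrors Step 4 of the algorithm.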

Some explanations about the above filled function algorithm are necessary.

  1.

    In Step 0, K directions need to be chosen. In general, take \(K=2n\), \(e_{i}=(0,\dots ,1,\dots ,0)^{T}\) with the 1 in the i-th position, \(i=1,2,\dots ,n\), and \(e_{i}=-e_{i-n}\), \(i=n+1,\dots ,2n\).

  2.

    For the minimization of f(x) and \({\mathcal {F}}(x,x_{k}^{*},P)\), a local optimization method must first be selected. In our algorithm, the SQP method is chosen.

  3.

    In Step 3, the value of \(\delta \) needs to be selected carefully. In our algorithm, \(\delta \) is selected to guarantee that \(\Vert \nabla {\mathcal {F}}(x,x_{k}^{*},P)\Vert \) is greater than a threshold (e.g., \(10^{-2}\)); a sketch of this selection rule is given after this list.

  4.

    Step 4 means that the local minimizer \(x^{'}\) of \({\mathcal {F}}(x,x_{k}^{*},P)\) satisfies \(x^{'}\in S_2\), i.e., it is a better local minimizer of f(x).

  5.

    We note that Assumption 3 is needed for analyzing the properties of \( {\mathcal {F}}(x,x_{k}^{*},P)\); in the implementation of the FFM, however, it is not required.
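The \(\delta \)-selection rule of explanation 3 can be sketched as follows (illustrative Python only; `grad_F` is assumed to be a callable returning \(\nabla {\mathcal {F}}\), e.g., a finite-difference gradient, and all names are ours):

```python
import numpy as np

def pick_delta(grad_F, x_star, e, delta0=1e-4, thresh=1e-2,
               grow=2.0, max_tries=40):
    """Enlarge delta until ||grad F(x_star + delta*e)|| exceeds the
    threshold, so the local search does not stall at its start point."""
    delta = delta0
    for _ in range(max_tries):
        if np.linalg.norm(grad_F(x_star + delta * e)) > thresh:
            break
        delta *= grow
    return delta
```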

4.2 Numerical experiments

In this section, the proposed algorithm is tested on the following ten commonly used benchmark problems.

Problem 1

(Two-dimensional function)

$$\begin{aligned} \begin{array}{ll} \min &{} f(x)=[1-2x_{2}+c\sin (4\pi x_{2})-x_{1}]^{2}+[x_{2}-0.5\sin (2\pi x_{1})]^2, \\ s.t. &{} 0\le x_{1}\le 10,\quad -10\le x_{2}\le 0, \end{array} \end{aligned}$$

where \(c=0.2, 0.5, 0.05\). The global minimum value is \(f(x^{*})=0\) for all c.

Problem 2

(Three-hump camel-back function)

$$\begin{aligned} \begin{array}{ll} \min &{} f(x)=2x_{1}^{2}-1.05x_{1}^{4}+\frac{1}{6}x_{1}^{6}-x_{1}x_{2}+x_{2}^{2}, \\ s.t. &{} -3\le x_{1}\le 3,\quad -3\le x_{2}\le 3. \end{array} \end{aligned}$$

The global minimizer is \(x^{*}=(0,0)^{T}\).

Problem 3

(Six-hump camel-back function)

$$\begin{aligned} \begin{array}{ll} \min &{} f(x)=4x_{1}^{2}-2.1x_{1}^{4}+\frac{1}{3}x_{1}^{6}-x_{1}x_{2}-4x_{2}^{2}+4x_{2}^{4}, \\ s.t. &{} -3\le x_{1}\le 3,\quad -3\le x_{2}\le 3. \end{array} \end{aligned}$$

The global minimizers are \(x^{*}=(-0.0898,-0.7127)^{T}\) and \(x^{*}=(0.0898,0.7127)^{T}\).

Problem 4

(Treccani function)

$$\begin{aligned} \begin{array}{ll} \min &{} f(x)=x_{1}^{4}+4x_{1}^{3}+4x_{1}^{2}+x_{2}^{2}, \\ s.t. &{} -3\le x_{1}\le 3,\quad -3\le x_{2}\le 3. \end{array} \end{aligned}$$

The global minimizers are \(x^{*}=(0,0)^{T}\) and \(x^{*}=(-2,0)^{T}\).

Problem 5

(Goldstein and Price function)

$$\begin{aligned} \begin{array}{ll} \min &{} f(x)=g(x)h(x), \\ s.t. &{} -3\le x_{1}\le 3,\quad -3\le x_{2}\le 3, \end{array} \end{aligned}$$

where \(g(x)=1+(x_{1}+x_{2}+1)^{2}(19-14x_{1}+3x_{1}^{2}-14x_{2}+6x_{1}x_{2}+3x_{2}^{2})\), and \(h(x)=30+(2x_{1}-3x_{2})^{2}(18-32x_{1}+12x_{1}^{2}+48x_{2}-36x_{1}x_{2}+27x_{2}^{2})\). The global minimizer is \(x^{*}=(0,-1)^{T}\).

Problem 6

(Rosenbrock function)

$$\begin{aligned} \begin{array}{ll} \min &{} f(x)=100(x_{1}^2-x_{2})^{2}+(x_{1}-1)^2, \\ s.t. &{} -10\le x_{1}\le 10,\quad -10\le x_{2}\le 10. \end{array} \end{aligned}$$

The global minimizer is \((1,1)^T\).

Problem 7

(Branin function)

$$\begin{aligned} \begin{array}{ll} \min &{} f(x)=\left( x_{2}-1.275\frac{x_{1}^{2}}{\pi ^2}+5\frac{x_{1}}{\pi }-6\right) ^2+10\left( 1-\frac{0.125}{\pi }\right) \cos {x_{1}}+10,\\ s.t. &{} -5\le x_{1}\le 10,\quad 0\le x_{2}\le 15. \end{array} \end{aligned}$$

The global minimizers are \((-3.142,12.275)^{T}\), \((3.142,2.275)^{T}\) and \((9.425,2.475)^{T}\).

Problem 8

(Two-dimensional Shubert function)

$$\begin{aligned} \begin{array}{ll} \min &{} f(x)=\left\{ \sum \limits _{i=1}^{5}i\cos [(i+1)x_{1}]+i\right\} \left\{ \sum \limits _{i=1}^{5}i\cos [(i+1)x_{2}]+i\right\} , \\ s.t. &{} -10\le x_{1}\le 10,\quad -10 \le x_{2}\le 10. \end{array} \end{aligned}$$

This problem has 760 local minimizers in total. The global minimum value is \(f(x^{*})=-186.7309\).

Problem 9

(Shekel’s function)

$$\begin{aligned} \begin{array}{ll} \min &{} f(x)=-\sum \limits _{i=1}^{5}\left[ \sum \limits _{j=1}^{4}(x_{j}-a_{i,j})^2+c_{i}\right] ^{-1}, \\ s.t. &{} 0\le x_{j}\le 10,\quad j=1,2,3,4, \end{array} \end{aligned}$$

where the coefficients \(a_{i,j}\), \(c_{i}\), \(i=1,2,\dots ,5\), \(j=1,2,3,4\), are given in Table 1.

Table 1 The coefficient for Problem 9

All local minimizers are approximately equal to \((a_{i,1},a_{i,2},a_{i,3},a_{i,4})^{T}\), with function value approximately \(-1/c_{i}\), \(i=1,2,\dots ,5\).

Problem 10

(n-dimensional Sine-square II function)

$$\begin{aligned} \begin{array}{ll} \min &{} f(x)=\frac{\pi }{n}\left\{ 10\sin ^{2}\pi x_{1}+g(x)+(x_{n}-1)^{2}\right\} , \\ s.t. &{} -10\le x_{i}\le 10,\quad i=1,2,\dots ,n, \end{array} \end{aligned}$$

where \(g(x)=\sum \nolimits _{i=1}^{n-1}[(x_{i}-1)^2(1+10\sin ^{2}\pi x_{i+1})]\). The global minimizer of this problem is \(x^{*}=(1,\dots ,1)^{T}\) for all n.
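To illustrate how these benchmarks plug into the algorithm sketch of Sect. 4.1 (Python; the function names are ours, not from the paper), Problems 3 and 8 can be coded as follows:

```python
import numpy as np

def six_hump(x):
    """Problem 3: six-hump camel-back, with the sign convention used above."""
    x1, x2 = x
    return (4*x1**2 - 2.1*x1**4 + x1**6/3.0
            - x1*x2 - 4*x2**2 + 4*x2**4)

def shubert2d(x):
    """Problem 8: two-dimensional Shubert function."""
    s1 = sum(i * np.cos((i + 1) * x[0]) + i for i in range(1, 6))
    s2 = sum(i * np.cos((i + 1) * x[1]) + i for i in range(1, 6))
    return s1 * s2

# e.g., x = nffm(six_hump, x1=[-2.0, 1.0], lower=[-3, -3], upper=[3, 3])
```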

The proposed algorithm was executed on the above ten test problems and its performance compared with that of the algorithms in [18,19,20]. In the related tables, the following symbols are adopted.

  • \(\textit{Prob}\): the problem number 1–10;

  • \(\textit{ObjEval}\): the number of the total function evaluations of the original objective;

  • \(\textit{FilledEval}\): the number of the total function evaluations of the filled function;

  • \(x_{0}\): an initial point;

  • \(\textit{NFFM}\): the algorithm proposed in this paper;

  • \(\textit{HFA}\): the algorithm proposed in [18];

  • \(\textit{CFA}\): the algorithm proposed in [19];

  • \(\textit{NFA}\): the algorithm proposed in [20];

  • \(\textit{Time}\): the CPU time for the algorithm to stop. In general, the CPU time depends on the tuning of the parameter P and on the performance of the computer.

For all problems, the initial value of P is taken as 1 and Ubp as \(10^{6}\); P is then tuned by the filled function algorithm itself. The proposed algorithm is programmed in Matlab 2014a and run on Windows 10 with an Intel(R) Core(TM) i5-3340M CPU and 4 GB RAM.

The proposed algorithm was executed on the above ten test problems; for Problems 1–5 and 8–10, its performance is compared with that of the algorithm in [20]. The minimizers obtained by the two algorithms are listed in Tables 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, and 16.

Table 2 Results for Problem 1 with \(c=0.2\) and \(x_{0}=(6,-2)^{T}\)
Table 3 Results for Problem 1 with \(c=0.5\) and \(x_{0}=(0,0)^{T}\)
Table 4 Results for Problem 1 with \(c=0.05\) and \(x_{0}=(10,-\,10)^{T}\)
Table 5 Results for Problem 2 with initial point \((-\,2,-\,1)^{T}\)
Table 6 Results for Problem 2 with initial point \((2,1)^{T}\)
Table 7 Results for Problem 3 with initial point \((-\,2,1)^{T}\)
Table 8 Results for Problem 3 with initial point \((2,-1)^{T}\)
Table 9 Results for Problem 3 with initial point \((-\,2,-\,1)^{T}\)
Table 10 Results for Problem 4 with initial point \((-\,1,0)^{T}\)
Table 11 Results for Problem 5 with initial point \((-\,1,-\,1)^{T}\)
Table 12 Results for Problem 8 with initial point \((1,1)^{T}\)
Table 13 Results for Problem 9 with initial point \((1,1,1,1)^{T}\)
Table 14 Results for Problem 9 with initial point \((6,6,6,6)^{T}\)
Table 15 Results for Problem 10 with \(n=7\) and initial point \((2,2,2,2,2,2,2)^{T}\)
Table 16 Results for Problem 10 with \(n=10\) and initial point \((6,6,6,6,6,6,6,6,6,6)^{T}\)
Table 17 The results obtained by \(\textit{NFFM}\) and \(\textit{NFA}\) on Problems 1–5 and 8–10
Table 18 The average total number of the evaluations of the objective function and the filled function

From Tables 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, and 17, we can see that, for all test problems, both the proposed algorithm (NFFM) and the algorithm proposed in [20] (NFA) find the global optimal solutions of the eight test problems. Moreover, the NFFM never needs more iterations than the NFA; in particular, for Problem 1 in all three cases, and for Problems 8 and 10, the NFFM needs fewer iterations to find the optimal solutions. For example, for Problem 1 with \(c=0.05\) and \(x_0=(10,-10)^{T}\), shown in Table 4, the NFFM needs only 7 iterations to find the global optimal solution, whereas the NFA needs 8. For Problem 8, which has 760 local minimizers, the NFFM needs only 4 iterations to find the global optimal solution, whereas the NFA needs 6. Thus, the proposed algorithm is more efficient than the algorithm in [20].

To further evaluate its effectiveness, the new filled function algorithm was used to find the global minimizers of Problems 2–8 and 9. In all tests, ten initial points were randomly generated. The experimental results are presented in Table 18, in which each entry is the average total number of evaluations of f(x) and \({\mathcal {F}}(x,x_{k}^{*},P)\) until the global minimizer was found. (The number of directions is taken as 2n; in Problem 6, \(\varOmega \) is taken as \([-\,10,10]\times [-\,10,10]\).)

The performance of HFA and CFA can be seen in reference [19]. In [19], four initial points were taken randomly, and the average number of evaluations of the objective function was computed excluding unsuccessful tests. The results are also listed in Table 18.

The entries in Table 18 are to be read as follows: when HFA was used to solve Problem 2, the numbers of evaluations of the objective function were 345, 109, 340 and 78, respectively, so the entry in the table is written as \((345+109+340+78)/4 \times 4\); when HFA was used to solve Problem 8, the numbers of evaluations were −, 259, 131 and 114 (where − denotes an unsuccessful test), so the entry is written as \((259+131+114)/3 \times 3\).

From Table 18, the total number of evaluations of f(x) and \({\mathcal {F}}(x,x_{k}^{*},P)\) required by the new FFM is smaller than those of HFA and CFA except for Problem 3. Therefore, the new algorithm outperforms HFA and CFA.

5 Concluding remarks

This paper presented a new filled function. It avoids several limitations of conventional filled functions, and its local minimizers are local minimizers of the original problem that are better than the current one. Therefore, the computational cost is greatly reduced, since minimizing the objective function and the filled function cyclically is avoided. We compared the proposed method with existing ones on several test optimization problems. The experimental results showed the effectiveness of our method on practical problems.