Abstract
A new filled function is proposed in this paper for finding a better minimizer of smooth unconstrained global optimization problems. First, a new definition of the filled function for unconstrained problems with a closed bounded domain is given under reasonable assumptions. The proposed filled function is continuously differentiable and contains two parameters. Then a new algorithm is proposed based on the theoretical and numerical properties of the proposed filled function. Applying the algorithm to different test problems gave acceptable results.
7.1 Introduction
Many real-life problems have been formulated as optimization problems, with applications in many areas of real life such as finance, portfolio selection, medical science, data mining, etc. [1,2,3,4]. Global optimization constitutes one important part of the theory of optimization. It has many application areas in engineering, such as electrical power distribution and the design of space trajectories [5, 6]. Global optimization is a very active research area because the problems become more and more complicated from year to year, due to the increasing number of variables and the structure of the problems (non-smoothness). Up to now, many new theories and algorithms have been presented to solve global optimization problems [7, 8]. There exist two different types of methods: those based on local searches (Monotonic Basin Hopping, Hill Climbing methods, etc. [9, 10]) and those not based on local searches (Branch and Bound, DIRECT, etc. [11, 12]). We consider the methods based on local searches. For local search based methods, the major difficulties in global optimization are listed below:
a. When a local minimizer has been found by a local solver, how to escape from this current local minimizer.
b. How to ignore the local minimizers whose values are greater than the value of the current minimizer and find a lower minimizer of the objective function.
c. How to verify convergence to the global minimizer and determine the stopping criteria.
From a different point of view, global optimization approaches can be classified into three main classes: stochastic methods, heuristic methods, and deterministic methods.
Stochastic methods are quite simple, very efficient on black-box problems, and robust with respect to increasing problem dimension, but some stochastic methods can find only a local solution instead of the global one. Well-known stochastic approaches include Random Search, Adaptive Search, Markovian algorithms, etc. [13, 14]. Population-based algorithms also belong to the stochastic methods, but we treat them under the heuristic methods.
The heuristic methods are based on the simulation of biological, physical, or chemical processes. These methods are easily applicable and converge to a solution rapidly. However, they can give different results when run again. Well-known methods are the Genetic Algorithm [15], the Simulated Annealing Algorithm [16,17,18], Particle Swarm Optimization [19, 20], and the Artificial Bee Colony Algorithm [21, 22]. In recent years, hybridizations of heuristic global optimization algorithms have come into prominence [23,24,25,26,27].
Convergence to the solution is guaranteed in deterministic approaches; this is their outstanding property. However, these methods converge to the solution quite slowly [28]. Important methods include Branch and Bound algorithms [11], covering methods [12], space-filling curve methods [29, 30], and others [31, 32].
The auxiliary function approach is one of the most important among the methods of global optimization. These methods follow deterministic search strategies, constructing an auxiliary function to escape from the current local minimizer to a better one; such methods include the Tunneling Method [33], the Filled Function Method (FFM) [34,35,36], and the Global Descent Method [37].
The first auxiliary function method was introduced by Levy and Montalvo [33]. Cetin et al. developed the tunneling algorithm to solve constrained global optimization problems [38]. Many other important studies on tunneling algorithms have been published in [39,40,41].
Among these methods, the FFM can be considered an effective approach for solving different global optimization problems, and it has several advantages over the others; for example, it finds better local minimizers sequentially in a simpler way than other methods. The FFM was first presented by Ge [34, 35] and improved in [42,43,44]. Many valuable studies have been presented to make the filled function applicable to different types of problems, such as non-smooth problems [45, 46], constrained optimization problems [47], systems of nonlinear equations [48], etc. [49, 50]. Recently, a next generation of filled function and auxiliary function approaches has been developed [51,52,53,54,55].
The FFM presents a good idea for solving global optimization problems. In general, the filled function mechanism is described as follows:
1. Choose a random starting point and use a convenient local optimization method to find a local minimizer of the objective function.
2. Construct a filled function based on the current minimizer of the objective function, and minimize it starting from a point in the proximity of this current minimizer. The resulting local minimizer of the filled function lies in a basin of a better solution of the objective function.
3. Use the minimizer of the filled function obtained in Step 2 as a starting point to find a minimizer of the objective function.
By repeating Steps 2 and 3, the number of remaining minimizers is reduced until, finally, the global minimizer of the objective function is found.
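The three-step mechanism above can be sketched in code. The sketch below is a generic illustration, not the specific algorithm of this chapter: the local solver (here `scipy.optimize.minimize`), the filled-function factory `make_filled`, the perturbation radius `eps`, and the cycle cap are all placeholder choices.

```python
import numpy as np
from scipy.optimize import minimize

def filled_function_method(f, make_filled, x0, lb, ub, eps=1e-2, max_cycles=20):
    """Generic filled-function loop: alternate between minimizing the
    objective f and a filled function built at the current minimizer.
    make_filled(x_star) must return a callable F(x)."""
    x_star = minimize(f, x0).x                      # Step 1: local minimizer of f
    for _ in range(max_cycles):
        F = make_filled(x_star)                     # Step 2: filled function at x_star
        x_pert = x_star + eps * np.random.randn(*x_star.shape)
        x_F = minimize(F, x_pert).x                 # minimize the filled function
        if np.any(x_F < lb) or np.any(x_F > ub):    # left the box: stop searching
            break
        x_new = minimize(f, x_F).x                  # Step 3: restart the local solver
        if f(x_new) < f(x_star) - 1e-12:            # keep only strictly lower minimizers
            x_star = x_new
        else:
            break
    return x_star
```

A usage example would supply a concrete filled function for `make_filled`; the loop itself only encodes the alternation between the two minimizations.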
Some of the existing filled functions have been constructed to have a surface somewhat like the surface of the objective function in the lower basin of the better solution (the region where the objective lies below \(f(x^{*}_{1})\), with \(x_1^*\) the current minimizer of the objective function). This has drawbacks: it requires more time and more function evaluations.
In this study, in order to eliminate the drawbacks of some previous filled functions, we propose a new filled function. The new proposal is based on placing many stationary points in the lower basins: the filled function does not need to descend in the lower basin; it only needs to reach a stationary point there, which can then be used as a starting point for minimizing the objective function to obtain a lower minimizer. This idea helps to reduce the time and the number of function evaluations, which are very important for such methods.
This study is organized as follows: In Sect. 7.2, we give some preliminary information. In Sect. 7.3, we propose a new filled function and present its properties. In Sect. 7.4, we introduce the filled function algorithm. In Sect. 7.5, we perform numerical tests and present the results obtained with the new method. Finally, Sect. 7.6 presents the conclusions.
7.2 Preliminaries
We consider the class of unconstrained global optimization problems
$$ (P)\qquad \min_{x\in \mathbb {R}^n} f(x), $$
where \(f:\mathbb {R}^n\to \mathbb {R}\) is a continuously differentiable function.
We assume that the function f(x) is globally convex, in the sense that f(x) → +∞ as ∥x∥→ +∞. Hence there exists a closed, bounded, box-shaped domain \( \varOmega =[l_b,u_b]=\{x:\ l_b\leq x \leq u_b,\ l_b,u_b\in \mathbb {R}^n \}\) that contains all the local minimizers of f(x). Moreover, the number of distinct local minimum values of f(x) is assumed to be finite.
Additionally, basic concepts and symbols used in this study are given as follows:
- k: the number of local minimizers of f(x),
- \(x^{*}_{k}\): the current local minimizer of f(x),
- x ∗: the global minimizer of f(x),
- \(B^{*}_{k}\): the basin of f(x) at the local minimizer \(x^*_k\).
We state the following definitions:
Definition 7.1 ([34])
Let Ω ⊂ R n. A point x ∗∈ Ω is a global minimizer of the objective function f(x) if f(x ∗) ≤ f(x) for all x ∈ Ω.
Definition 7.2 ([34])
Let \( x_k^* \) be a local minimizer of the objective function f(x). The set of points \(B_k^* \subset \varOmega \) is called a basin of f(x) at the point \( x_k^* \) if any local solver starting from any point in \(B_k^*\) finds the local minimizer \(x_k^*\).
Definition 7.3 ([34])
The auxiliary function \(F(x,x^{*}_{k})\) is called a filled function of the objective function f(x) at a local minimizer \(x_{k}^{*}\) if the function \(F(x,x^{*}_{k})\) has the following properties:
1. \(x_{k}^{*} \) is a local maximizer of the function \(F(x,x^{*}_{k})\);
2. \( F(x,x^{*}_{k}) \) has no stationary points in \( A_1=\{x\in \varOmega \mid f(x)\geq f(x^{*}_{k}),\,x\neq x^*_k\}\);
3. if \( x^{*}_{k} \) is not a global minimizer of the function f(x), then the function \( F(x,x^{*}_{k}) \) has a stationary point in the region \( A_2=\{x\in \varOmega \mid f(x)< f(x^{*}_{k})\}\).
7.2.1 Overview of the Filled Functions
In 1987, Ge and Qin proposed the first filled function (we call it the G-function) [34], with two parameters, to solve problem (P) at an isolated local minimizer \(x^*_k\); it is defined by
and, in 1990, Ge introduced another filled function (the P-function) [35], which has the following form:
where r and ρ are parameters that need to be chosen appropriately. In general, the G-function and the P-function of the objective function f(x) at the current minimizer \(x^*_k\) must satisfy Definition 7.3.
Many important studies have developed the FFM to solve multi-modal global optimization problems. These studies can be classified into two categories according to the number of adjustable parameters.
7.2.2 Filled Functions with Two Parameters
In [56], Wu et al. offered a filled function with two parameters to decrease the computational cost and overcome several disadvantages of earlier filled functions; it has the following form:
where \(q, r > 0\) are adjustable parameters and \(f_r\), \(g_r\) are continuously differentiable functions.
In 2009, Zhang et al. [57] introduced a new definition of the filled function, which rectifies several drawbacks of the classic definition, together with a new filled function with two parameters defined by
where a > 0 and r are parameters and the function φ(t) is continuously differentiable. Wei et al. [58] offered a new filled function that is not sensitive to its parameters. This function has two parameters and the following formula:
and
where r > 0, ρ > 1 are parameters.
7.2.3 Filled Functions with One Parameter
According to the general opinion, the existence of more than one adjustable parameter in the same filled function makes it difficult to control. The first filled function with only one parameter is the Q-function, proposed in 1987 by Ge and Qin, with the following formula:
The Q-function has one adjustable parameter a; if this parameter becomes larger and larger, the quickly increasing value of the exponential function negatively affects the computational results [34]. To tackle this drawback, the H-function was introduced by Liu in [36]; it is given by
The H-function keeps the feature of the Q-function, having only one adjustable parameter, but without an exponential function. Shang et al. introduced a filled function with one adjustable parameter, as follows:
and
where q is a parameter subject to certain conditions [59]. Zhang and Xu constructed a filled function to solve non-smooth constrained global optimization problems [60]. This function was constructed to overcome several drawbacks of the previous filled functions, and it has one parameter, as follows:
where q > 0 is the parameter, g i(x) > 0 are the constraint conditions, and r is a prefixed constant.
In 2013, Wei and Wang proposed a new filled function for problem (P) with one adjustable parameter, to which it is not sensitive [61]. The filled function has the following formula:
where
and r is an adjustable parameter, taken as large as possible and used as a weight parameter.
In 2014, Wang et al. constructed a filled function for both smooth and non-smooth constrained problems [62]. The constructed filled function is defined by
This filled function has only one adjustable parameter, which is controlled during the process. A new definition and a new filled function were given in 2016 by Yuan et al. [63]. This filled function has one parameter, given by
where q > 0 is an adjustable parameter and V (t) : R → R and W q(t) : R → R are continuously differentiable functions satisfying certain properties.
7.3 A New Filled Function Method and Its Properties
We propose a new filled function with two parameters at a local minimizer \( x^{*}_{k} \) to solve problem (P), as follows:
where
and 0 < α ≤ 1 and μ > 1 are parameters.
The new idea in this filled function is to place many stationary points in the lower basin \( A_2=\{x|f(x)< f(x^{*}_{k}),x\in \varOmega \} \). In fact, the filled function does not need to descend in the lower basin; it only needs to reach a stationary point in A 2, which can be used as an initial point for minimizing the objective function to obtain a lower minimizer.
The above idea has many advantages; for example, it helps to reduce the time and the number of function evaluations, which are very important in such methods. Furthermore, the parameter μ is used to increase or decrease the number of stationary points in A 2; therefore μ must be chosen carefully, because if it is small we may lose some of the lower minimizers at which the function value is close to the value at the first minimizer (see Figs. 7.1 and 7.2). The parameter 0 < α ≤ 1 in the term \(\frac {1}{\alpha +\|x-x^{*}_{k}\|{ }^2}\) is used to control the hat and is easy to adjust. The following theorems verify that \(F(x,x^*_k) \) is a filled function in the sense of Definition 7.3.
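Since the display equation defining \(h\) is not reproduced here, the sketch below uses one assumed \(C^1\) choice consistent with the stated properties: \(h(t)=1\) for \(t\ge 0\) (so no stationary points appear in \(A_1\)) and an oscillatory branch \(h(t)=\cos (\mu t)\) for \(t<0\) (so stationary points are planted in \(A_2\)). It illustrates the idea only and is not the authors' exact formula.

```python
import numpy as np

def make_filled(f, x_star, alpha=0.5, mu=5.0):
    """Illustrative filled function F(x) = h(f(x) - f(x*)) / (alpha + ||x - x*||^2).
    h is an ASSUMED C^1 surrogate: h(t) = 1 on t >= 0, and h(t) = cos(mu*t)
    on t < 0, which oscillates and so plants stationary points in the
    lower basin A_2; larger mu means more oscillations."""
    f_star = f(x_star)
    def h(t):
        return 1.0 if t >= 0 else float(np.cos(mu * t))
    def F(x):
        x = np.asarray(x, dtype=float)
        return h(f(x) - f_star) / (alpha + float(np.sum((x - x_star)**2)))
    return F
```

With this surrogate, F is positive on the upper region \(A_1\) and turns negative in the lower basin once \(\mu\,|f(x)-f(x^*_k)|\) exceeds \(\pi /2\), mirroring the role of \(\mu =\frac {\pi }{2L}\) in Theorem 7.3.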
Theorem 7.1
Assume that \( x^{*}_{k} \) is a local minimizer of the function f(x) and \( F(x,x^{*}_{k}) \) is the filled function defined above; then \( x^{*}_{k} \) is a strict local maximizer of \( F(x,x^*_k) \).
Proof
Since \( x^{*}_{k} \) is a local minimizer of f(x), there exists a neighborhood \( N(x^{*}_{k},\epsilon )\subset A_1 \) of \( x^{*}_{k} \), for some 𝜖 > 0, such that \( f(x)\geq f( x^{*}_{k} ) \) for all \( x\in N(x^{*}_{k},\epsilon ) \) with \( x\ne x^{*}_{k}\), and \(0<\alpha \leq 1\).
This means that \( x^{*}_{k} \) is a strict local maximizer of \( F(x,x^{*}_{k}) \).
Theorem 7.2
Assume that \(x^{*}_{k}\) is a local minimizer of f(x) and x is any point in the set A 1; then x is not a stationary point of \( F(x,x^{*}_{k}) \) for any 0 < α ≤ 1.
Proof
Since x ∈ A 1, we have \( f(x)\ge f( x^{*}_{k} ) \) and \( x\ne x^{*}_{k} \). Then \( F( x,x^{*}_{k})=\frac {1}{\alpha +\|x-x^*_k\|^2} \) and \( \nabla F( x,x^{*}_{k})=-\frac {2(x-x^*_k)}{(\alpha +\|x-x^*_k\|^2)^2}\ne 0 \) for each 0 < α ≤ 1. This implies that the function \( F( x,x^{*}_{k}) \) has no stationary point in the set A 1.
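The gradient formula used in this proof can be checked numerically. The sketch below compares the claimed closed form of \(\nabla F\) on \(A_1\) against central finite differences at a generic point; the point, α, and step size are arbitrary illustrative choices.

```python
import numpy as np

def F_upper(x, x_star, alpha):
    """F restricted to A_1, where F = 1 / (alpha + ||x - x*||^2)."""
    return 1.0 / (alpha + np.sum((x - x_star)**2))

def grad_F_upper(x, x_star, alpha):
    """Gradient claimed in the proof: -2(x - x*) / (alpha + ||x - x*||^2)^2."""
    return -2.0 * (x - x_star) / (alpha + np.sum((x - x_star)**2))**2

# Central finite differences agree with the closed form at a generic point.
x_star = np.array([0.3, -0.4])
x, alpha, eps = np.array([1.0, 2.0]), 0.7, 1e-6
num = np.array([(F_upper(x + eps*e, x_star, alpha)
                 - F_upper(x - eps*e, x_star, alpha)) / (2*eps)
                for e in np.eye(2)])
assert np.allclose(num, grad_F_upper(x, x_star, alpha), atol=1e-8)
```

Note that the closed form vanishes only at \(x=x^*_k\), which is excluded from \(A_1\), matching the theorem.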
Theorem 7.3
Assume that \(L=\min |f(x^*_i)-f(x^*_j)|\) over i, j = 1, 2, …, m with \(f(x^*_i)\neq f(x^*_j)\), and that \(x^*_k\) is a local but not a global minimizer of f(x). Then there exists a point x ′∈ A 2 that is a stationary point of the function \( F(x,x^{*}_{k}) \) when \(\mu = \frac {\pi }{2L}\), for each 0 < α ≤ 1.
Proof
Since the current local minimizer \(x^*_k\) is not a global minimizer of f(x), there exists a lower minimizer \(x^*_{k+1} \in A_2\) such that \(f(x^*_{k+1})<f(x^*_k)\).
For any point y ∈ A 1 we have \(F(y,x^*_k)>0\), and by the continuity of f(x), if \(\mu = \frac {\pi }{2L}\) then \(F(x^*_{k+1},x^*_k)<0\). Hence, by the intermediate value theorem for continuous functions, there exists a point on the segment \([y,x^*_{k+1}]\) at which the filled function equals 0.
Let z be the nearest such point to \(x^*_{k+1}\) with \(F(z,x^*_k)=0\), and consider the segment \([z,x^*_{k+1}]\). Then z ∈ ∂A 2 and z lies on the border of the set \(B^*_{k+1}\), which is a closed region. By the continuity of the function \(F(x,x^*_k)\), there exists a point \(x^\prime \in B^*_{k+1}\) that is a local minimizer of \(F(x,x^*_k)\) with \(F(x^\prime ,x^*_k)<0\); since the function \(F(x,x^*_k)\) is continuously differentiable, we obtain \(\nabla F(x^\prime ,x^*_k)=0\).
7.4 The Algorithm
Based on the results of the previous sections, we propose a new filled function algorithm as follows:
1. Set k = 1 and \( \epsilon =10^{-2} \); choose \( U_\mu =30 \) as an upper bound for μ and set μ = 5; set N, the number of different directions d i, i = 1, 2, …, N; choose an initial point x int ∈ Ω; and choose 0 < α ≤ 1, where n is the dimension of the problem.
2. Minimize f(x) starting from x int to find a local minimizer \( x^*_k \).
3. Construct the filled function at \( x^*_k \),
$$\displaystyle \begin{aligned}F(x,x^{*}_{k})=\frac{1}{\alpha+\|x-x^*_k\|{}^2}h(f(x)-f(x^{*}_{k}))\end{aligned} $$
and set i = 1.
4. If i ≤ N, set \( x=x^*_k+\epsilon d_i \) and go to Step 5; otherwise go to Step 6.
5. Starting from x, find a minimizer x F of \( F(x,x^*_k) \). If x F ∈ Ω, set x int = x F and k = k + 1, and go to Step 2; otherwise set i = i + 1 and go to Step 4.
6. If μ ≤ U μ, set μ = μ + 5 and go to Step 2; otherwise take \( x^*_k \) as a global minimizer of f(x) and stop the algorithm.
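Assuming the function \(h\) and the direction set \(d_i\) are supplied externally (their display formulas are not reproduced here), the six steps can be sketched as follows. A strict-descent guard and a cycle cap, not part of the original statement, are added so the sketch always terminates.

```python
import numpy as np
from scipy.optimize import minimize

def ffm_algorithm(f, make_filled, directions, x_init, lb, ub,
                  eps=1e-2, mu=5.0, mu_step=5.0, mu_max=30.0, max_cycles=100):
    """Sketch of Steps 1-6. make_filled(x_star, mu) returns the filled
    function F(.) at x_star; directions is the list d_1..d_N of Step 1."""
    x_star = minimize(f, x_init).x                      # Step 2
    for _ in range(max_cycles):
        F = make_filled(x_star, mu)                     # Step 3: filled function, current mu
        restarted = False
        for d in directions:                            # Steps 4-5: try each direction
            x_F = minimize(F, x_star + eps * np.asarray(d)).x
            if np.all(x_F >= lb) and np.all(x_F <= ub): # x_F lies in Omega
                x_new = minimize(f, x_F).x
                if f(x_new) < f(x_star):                # strict-descent guard (added here,
                    x_star, restarted = x_new, True     # to guarantee termination)
                    break
        if not restarted:                               # Step 6
            if mu <= mu_max:
                mu += mu_step                           # enlarge mu and retry
            else:
                break                                   # take x_star as global minimizer
    return x_star
```

The strict-descent guard replaces the unconditional return to Step 2 in the original Step 5; without it the sketch could cycle between the same minimizers.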
The set of different directions d i is generated as follows: let \( \theta _1,\ldots ,\theta _J \in [0,2\pi ] \) and \( \vartheta _1,\ldots ,\vartheta _J \in [-\frac {\pi }{2},\frac {\pi }{2}]\) be uniformly distributed. If n = 2Q, the components of \( d_i^j=(y_1^j,y_2^j,\ldots ,y_{2Q}^j) \) are calculated as
for l = 1, …, Q. If n = 2Q + 1, the components of \( d_l^j=(y_1^j,y_2^j,\ldots ,y_{2Q+1}^j) \) are calculated as
for l = 2, …, Q [64].
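The trigonometric construction of the directions from [64] relies on display formulas not reproduced here. As a commonly used substitute with the same uniform distribution on the unit sphere, one can normalize Gaussian samples:

```python
import numpy as np

def unit_directions(N, n, seed=0):
    """Generate N search directions uniformly distributed on the unit
    sphere in R^n. Normalized standard-Gaussian samples are rotation
    invariant, so they are uniform on the sphere; this replaces the
    angle-based formulas of [64], which are not reproduced here."""
    rng = np.random.default_rng(seed)
    d = rng.standard_normal((N, n))
    return d / np.linalg.norm(d, axis=1, keepdims=True)
```

Either construction can feed the direction set \(d_1,\ldots ,d_N\) used in Steps 4-5 of the algorithm.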
7.5 Numerical Results
In this section, we test our algorithm numerically on the test problems stated below:
Problem 5.1–5.3 (Two-Dimensional Function)
for x 1, x 2 ∈ [−3, 3], where c = 0.05, 0.2, 0.5.
Problem 5.4 (Three-Hump Back Camel Function)
for x 1, x 2 ∈ [−3, 3].
Problem 5.5 (Six-Hump Back Camel Function)
for x 1, x 2 ∈ [−3, 3].
Problem 5.6 (Treccani Function)
for x 1, x 2 ∈ [−3, 3].
Problem 5.7 (Goldstein and Price Function)
where
and
for x 1, x 2 ∈ [−3, 3].
Problem 5.8 (Shubert Function)
for x 1, x 2 ∈ [−10, 10].
Problem 5.9 (Rastrigin Function)
for x 1, x 2 ∈ [−5.12, 5.12].
Problem 5.10 (Branin Function)
for x 1 ∈ [−5, 10], x 2 ∈ [0, 15].
Problems 5.11, 5.12, 5.13 (Shekel Function)
where m = 5, 7, 10, b i is an m-dimensional vector, and a i,j is a 4 × m-dimensional matrix where
and x j ∈ [0, 10], j = 1, …, 4.
Problems 5.14–5.21 (n-Dimensional Function)
where \(g(x) =\sum _{i=1}^{n-1}\Bigg [(x_i-1)^2(1+10\sin ^2\pi x_{i+1})\Bigg ]\) and x i ∈ [−10, 10], i = 1, 2, …, n.
Problems 5.21–5.29 (Levy Function)
where
for x i ∈ [−10, 10], i = 1, 2, …, n.
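The display equations of the test problems are not reproduced here. For reference, the sketch below gives the standard literature forms of three of the benchmarks above (Three-Hump Camel, Six-Hump Camel, and n-dimensional Rastrigin); they may differ in scaling from the exact forms used in the chapter.

```python
import numpy as np

def three_hump_camel(x):
    """Standard form; global minimum f = 0 at (0, 0)."""
    x1, x2 = x
    return 2*x1**2 - 1.05*x1**4 + x1**6/6 + x1*x2 + x2**2

def six_hump_camel(x):
    """Standard form; global minimum f ~ -1.0316 at (0.0898, -0.7126)
    and its mirror point."""
    x1, x2 = x
    return (4 - 2.1*x1**2 + x1**4/3)*x1**2 + x1*x2 + (-4 + 4*x2**2)*x2**2

def rastrigin(x):
    """Standard n-dimensional form; global minimum f = 0 at the origin."""
    x = np.asarray(x, dtype=float)
    return 10*len(x) + float(np.sum(x**2 - 10*np.cos(2*np.pi*x)))
```

These functions are highly multi-modal, which is what makes them useful for exercising the escape mechanism of a filled function algorithm.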
We rearrange the above problems; the list of test problems is presented in Table 7.1. For each problem, the algorithm is run 10 times independently from different starting points, on a PC with Matlab R2016a. The "fminunc" function of Matlab is used as the local solver. The symbols used are the following:
- No.: the number of the problem,
- n: the dimension,
- itr-mean: the mean iteration number over the 10 runs,
- f-eval: the total number of function evaluations,
- time: the mean aggregate running time over the 10 runs (seconds),
- f-mean: the mean of the function values over the 10 runs,
- f-best: the best function value in the 10 runs,
- S-R: the success rate among the 10 runs with different starting points.
The results of the algorithm are presented in Table 7.2. This table consists of seven columns: (No) the number of the problem, (n) the dimension of the problem, (itr-mean) the mean total iteration number, (f-mean) the mean of the function values, (f-best) the best of the function values, (f-eval) the mean number of function evaluations, and (S-R) the success rate over ten trials with uniform initial points. As shown in this table, our algorithm is tested on 29 problems with dimensions up to 50, each from ten different initial points. Our algorithm reaches a 10∕10 success rate on almost 40% of all test problems, and at least a 6∕10 success rate on every test problem, as stated in column (S-R). Over the total number of trials, a 94% success rate is obtained. Moreover, the f-mean and f-best values in Table 7.2 are very close to the original function values given in Table 7.1.
The comparison of our algorithm with the method in [65] is summarized in Table 7.3.
In general, Table 7.3 shows that the algorithm presented in this paper has an advantage in several respects, especially in the columns dedicated to function evaluations and success rates, compared to the algorithm in [65]. Both methods are sufficiently successful in terms of the "f-best" values. Our method completes the global optimization process with fewer function evaluations than the method in [65] on 85% of all test problems. Over the trials used in the comparison, our algorithm attains 88% successful trials and at least a 7∕10 success rate on each test problem, whereas the algorithm in [65] attains 86% successful trials and at least a 7∕10 success rate on each test problem. Moreover, our method is more effective than the method in [65] in terms of "f-eval" and "S-R" on test problems of dimension 10 and higher. Thus, the algorithm introduced in this paper is more efficient than the algorithm introduced in [65].
7.6 Conclusions
In this chapter, a new filled function for unconstrained global optimization is presented and its useful properties are introduced. The proposed filled function contains two parameters that can be easily adjusted during the minimization process. The corresponding filled function algorithm is constructed, and numerical experiments are performed to demonstrate the effectiveness of the presented algorithm. The computational results show that the present method is promising.
This new algorithm is an effective approach for solving multi-modal global optimization problems. In the minimization process, our method saves time in finding lower minimizers, and it is guaranteed that upper minimizers are not taken into account at any stage of the minimization, independently of the parameters. These two important properties make our method advantageous among the other methods.
For future work, global optimization algorithms can be applied to many real-life problems in areas such as data mining, chemical processes, and the aerospace industry.
References
Savku, E., Weber, G-W.: A stochastic maximum principle for a Markov regime-switching jump-diffusion model with delay and an application to finance. J. Optim. Theory Appl. 179, 696–721 (2018)
Yazici, C., Yerlikaya-Ozkurt, F., Batmaz, I.: A computational approach to nonparametric regression: bootstrapping CMARS method. Mach. Learn. 101, 211–230 (2015)
Kara, G., Ozmen A., Weber, G-W.: Stability advances in robust portfolio optimization under parallelepiped uncertainty. Cent. Eur. J. Oper. Res. 27, 241–261 (2019)
Onak, O.N., Serinagaoglu-Dogrusoz, Y., Weber, G.-W.: Effects of a priori parameter selection in minimum relative entropy method on inverse electrocardiography problem. Inverse Prob. Sci. Eng. 26(6), 877–897 (2018)
Resener, M., Haffner, S., Pereira, L.A., Pardalos, P.A.: Optimization techniques applied to planning of electric power distribution systems: a bibliographic survey. Energy Syst. 9 473–509 (2018)
Addis, B., Cassioli, A., Locatelli, M., Schoen, F.: A global optimization method for the design of space trajectories. Comput. Optim. Appl. 48, 635–652 (2011)
Cassioli, A., Di Lorenzo, D., Locatelli, M., Schoen, F., Sciandrone, M.: Machine learning for global optimization. Comput. Optim. Appl. 51, 279–303 (2012)
Chen, T.Y., Huang, J.H.: Application of data mining in a global optimization. Adv. Eng. Softw. 66, 24–33 (2013)
Leary, R.H.: Global optimization on funneling landscapes. J. Glob. Optim. 18, 367–383 (2000)
Locatelli, M., Schoen, F.: Global optimization based on local searches. Ann. Oper. Res. 240, 251–270 (2016)
Land, A.H., Doig, A.G.: An automatic method of solving discrete programming problems. Econometrica 28, 497–520 (1960)
Jones, D.R. , Perttunen, C.D. , Stuckman, B.E.: Lipschitzian optimization without the Lipschitz constant. J. Optimiz. Theory Appl. 79, 157–181 (1993)
Zhigljavski, A., Zilinskas, J.: Stochastic Global Optimization. Springer, Berlin (2008)
Schaffler, S.: Global Optimization: A Stochastic Approach. Springer, New York (2012)
Storti, G.L., Paschero, M., Rizzi, A., Mascioli, F.M.: Comparison between time-constrained and time-unconstrained optimization for power losses minimization in smart grids using genetic algorithms. Neurocomputing 170, 353–367 (2015)
Suman, B., Kumar, P.: A survey of simulated annealing as a tool for single and multiobjective optimization. J. Oper. Res. Soc. 57, 1143–1160 (2006)
Ekren, O., Ekren, B.Y.: Size optimization of a PV/wind hybrid energy conversion system with battery storage using simulated annealing. Appl. Energ. 87, 592–598 (2010)
Samora, I., Franca, M.J., Schleiss, A.J., Ramos, H.M.: Simulated annealing in optimization of energy production in a water supply network. Water Resour. Manag. 30, 1533–1547 (2016)
Poli, R., Kennedy, J., Blackwell, T.: Particle swarm optimization. Swarm Intell-US. 1, 33–57 (2007)
Kennedy, J.: Particle swarm optimization. In: Encyclopedia of Machine Learning, pp. 760–766. Springer, Boston (2011)
Karaboga, D., Basturk, B.: A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm. J. Global Optim. 39, 459–471 (2007)
Akay, B., Karaboga, D.: A modified artificial bee colony algorithm for real-parameter optimization. Inform. Sciences. 192, 120–142 (2012)
Niknam, T., Amiri, B., Olamaei, J., Arefi, A.: An efficient hybrid evolutionary optimization algorithm based on PSO and SA for clustering. J. Zhejiang Univ-Sc. A. 10, 512–519 (2009)
Mahi, M., Baykan, O.K., Kodaz, H.: A new hybrid method based on particle swarm optimization, ant colony optimization and 3-opt algorithms for traveling salesman problem. Appl. Soft Comput. 30, 484–490 (2015)
Zheng, Y.J., Xu, X.L., Ling, H.F., Chen, S.Y.: A hybrid fireworks optimization method with differential evolution operators. Neurocomputing 148, 75–82 (2015)
Garg, H.: A hybrid PSO-GA algorithm for constrained optimization problems. Appl. Math. Comput. 274, 292–305 (2016)
Liu, J., Zhang, S., Wu, C., Liang, J., Wang, X., Teo, KL.: A hybrid approach to constrained global optimization. Appl. Soft. Comput. 47, 281–294 (2016)
Sergeyev, Y.D., Kvasov, D.E.: A deterministic global optimization using diagonal auxiliary functions. Commun. Nonlinear Sci. Numer. Simul. 21, 99–111 (2015)
Lera, D., Sergeyev, Y.D.: Deterministic global optimization using space filling curves and multiple estimates of Lipschitz and Hölder constants. Commun. Nonlinear Sci. Numer. Simul. 23, 328–342 (2015)
Ziadi, R., Bencherif-Madani, A., Ellaia, R.: Continuous global optimization through the generation of parametric curves. Appl. Math. Comput. 282, 65–83 (2016)
Basso, P.: Iterative methods for the localization of the global maximum. SIAM J. Numer. Anal. 19, 781–792 (1982)
Mladineo, R.H.: An algorithm for finding the global maximum of a multimodal, multivariate function. Math. Program. 34, 188–200 (1986)
Levy, A.V., Montalvo, A.: The tunneling algorithm for the global minimization of functions. SIAM J. Sci. Stat. Comput. 6, 15–29 (1985)
Ge, R.P., Qin, Y.F.: A class of filled functions for finding global minimizers of a function of several variables. J. Optimiz. Theory. App. 54, 241–252 (1987)
Ge, R.P.: A filled function method for finding a global minimizer of a function of several variables. Math. Program. 46, 191–204 (1990)
Liu, X.: Finding global minima with a computable filled function. J. Global. Optim. 19, 151–161 (2001)
Wu, Z.Y., Li, D., Zhang, L.S.: Global descent methods for unconstrained global optimization. J. Global. Optim. 50, 379–396 (2011)
Cetin, B.C., Barhen, J., Burdick, J.W.: Terminal repeller unconstrained subenergy tunneling (TRUST) for fast global optimization. J. Optim. Appl. 77, 97–126 (1993)
Groenen, P.J., Heiser, W.J.: The tunneling method for global optimization in multidimensional scaling. Psychometrika 61, 529–550 (1996)
Chowdhury, P.R., Singh, Y.P., Chansarkar, R.A.: Hybridization of gradient descent algorithms with dynamic tunneling methods for global optimization. IEEE T. Syst. Man Cy. A. 30, 384–390 (2000)
Xu, Y.T., Zhang, Y., Wang, S.G.: A modified tunneling function method for non-smooth global optimization and its application in artificial neural network. Appl. Math. Model. 39, 6438–6450 (2015)
Xu, Z., Huang, H.X., Pardalos, P.M., Xu, C.X.: Filled functions for unconstrained global optimization. J. Global. Optim. 20, 49–65 (2001)
Wu, Z.Y., Zhang, L.S., Teo, K.L., Bai, F.S.: New modified function method for global optimization. J. Optim. Theory App. 125, 181–203 (2005)
Wu, Z.Y., Bai, F.S., Lee, H.W., Yang, Y.J.: A filled function method for constrained global optimization. J. Global Optim. 39, 495–507 (2007)
Zhang, Y., Zhang, L., Xu, Y.: New filled functions for nonsmooth global optimization. Appl. Math. Model. 33, 3114–3129 (2009)
Sahiner, A., Gokkaya, H., Yigit, T.: A new filled function for nonsmooth global optimization. In: AIP Conference Proceedings, pp. 972–974. AIP (2012)
Wang, W., Zhang, X., Li, M.: A filled function method dominated by filter for nonlinearly global optimization. J. Appl. Math. (2015). doi: https://doi.org/10.1155/2015/245427
Yuan, L.Y., Wan, Z.P., Tang, Q.H., Zheng, Y.: A class of parameter-free filled functions for box-constrained system of nonlinear equations. Acta Math. Appl. Sin-E. 32, 355–64 (2016)
Wei, F., Wang, Y., Lin, H.: A new filled function method with two parameters for global optimization. J. Optim. Theory. App. 163, 510–527 (2014)
Lin, H., Gao, Y., Wang, Y.: A continuously differentiable filled function method for global optimization. Numer. Algorithms 66, 511–523 (2014)
Yilmaz, N., Sahiner, A.: New global optimization method for non-smooth unconstrained continuous optimization. In: AIP Conference Proceedings, pp. 250002. AIP (2017)
Sahiner, A., Yilmaz, N., Kapusuz, G.: A descent global optimization method based on smoothing techniques via Bezier curves. Carpathian J. Math. 33, 373–380 (2017)
Lin, H., Wang, Y., Gao, Y., Wang, X.: A filled function method for global optimization with inequality constraints. Comput. Appl. Math. 37, 1524–1536 (2018)
Liu, H., Wang, Y., Guan, S., Liu, X.: A new filled function method for unconstrained global optimization. Int. J. Comput. Math. 94, 2283–2296 (2017)
Sahiner, A., Ibrahem, S.A.: A new global optimization technique by auxiliary function method in a directional search. Optim. Lett. (2018). doi: https://doi.org/10.1007/s11590-018-1315-1
Wu, Z.Y., Lee, H.J., Zhang, L.S., Yang, X.M.: A novel filled function method and quasi-filled function method for global optimization. Comput. Optim. Appl. 34, 249–272 (2005)
Zhang, Y., Zhang, L., Xu, Y.: New filled functions for nonsmooth global optimization. Appl. Math. Model. 33, 3114–3129 (2009)
Wei, F., Wang, Y., Lin, H.: A new filled function method with two parameters for global optimization. J. Optim. Theory App. 163, 510–527 (2014)
Shang, Y.L., Pu, D.G., Jiang, A.P.: Finding global minimizer with one-parameter filled function on unconstrained global optimization. Appl. Math. Comput. 191, 176–182 (2007)
Zhang, Y., Xu, Y.T.: A one-parameter filled function method applied to nonsmooth constrained global optimization. Comput. Math. Appl. 58, 1230–1238 (2009)
Wei, F., Wang, Y.: A new filled function method with one parameter for global optimization. Math. Probl. Eng. (2013). doi: https://doi.org/10.1155/2013/532325
Wang, W.X., Shang, Y.L., Zhang, Y.: Global minimization of nonsmooth constrained global optimization with filled function. Math. Probl. Eng. (2014). doi: https://doi.org/10.1155/2014/563860
Yuan, L., Wan, Z., Tang, Q.: A criterion for an approximation global optimal solution based on the filled functions. J. Ind. Manag. Optim. 12, 375–387 (2016)
Wang, Y., Fan, L.: A smoothing evolutionary algorithm with circle search for global optimization. In: 4th IEEE International Conference, pp. 412–418 (2010)
Sahiner, A., Yilmaz, N., Kapusuz, G.: A novel modeling and smoothing technique in global optimization. J. Ind. Manag. Optim. (2018). doi: https://doi.org/10.3934/jimo.2018035
Sahiner, A., Ibrahem, S.A., Yilmaz, N. (2020). Increasing the Effects of Auxiliary Function by Multiple Extrema in Global Optimization. In: Machado, J., Özdemir, N., Baleanu, D. (eds) Numerical Solutions of Realistic Nonlinear Phenomena. Nonlinear Systems and Complexity, vol 31. Springer, Cham. https://doi.org/10.1007/978-3-030-37141-8_7