Abstract
The paper addresses the problem of constructing lower and upper bounding functions for univariate functions. This problem is of crucial importance in global optimization, where such bounds are used by deterministic methods to reduce the search area. It should be noted that bounding functions are expected to be relatively easy to construct and manipulate. We propose to use piecewise linear estimators for bounding univariate functions. The rules proposed in the paper enable automated synthesis of lower and upper bounds from a function's expression in algebraic form. Numerical examples presented in the paper demonstrate the high accuracy of the proposed bounds.
The reported study was funded by RFBR according to the research project 17-07-00510.
1 Introduction
This paper is devoted to constructing lower and upper bounding functions for univariate functions. A function \(\phi (x)\) is called a lower (upper) bounding function (or simply a bound) for a function f(x) over an interval [a, b] if \(f(x) \ge \phi (x)\) (respectively \(f(x) \le \phi (x)\)) for all \(x \in [a,b]\).
Lower and upper bounds for objective functions and constraints play an important role in global optimization. Indeed, suppose we know a lower bounding function \(\phi (x)\) for an objective function f(x). Then we can safely exclude from further search the set defined by the following inequality:

$$\phi (x) \ge f_r - \varepsilon , \qquad (1)$$

where \(f_r\) is the incumbent value (the best solution found along the search) and \(\varepsilon \) is a prescribed tolerance [6, 7].
Inequality (1) can be solved efficiently only when the function \(\phi (x)\) has a simple structure. In this work we study one such function type: piecewise linear functions (PWL functions for brevity). We propose a method to obtain PWL bounds from a function's algebraic representation (formula). The evaluation of bounds is driven by rules applied iteratively from the bottom of the expression tree to its top, similarly to computing function values, interval bounds or derivatives. We show that PWL bounds constructed with the proposed approach are generally much tighter than bounds computed with interval [10] or slope [20] arithmetic.
The paper is organized as follows. Sect. 2 outlines related work and compares our approach with existing ones. The definition and properties of PWL bounds are discussed in Sect. 3. Numerical examples comparing the accuracy of PWL bounds and other approaches are presented in Sect. 4.
2 Related Work
Deterministic univariate global optimization stems from the seminal works of Pijavskij [18] and Shubert [26]. In these papers the authors proposed to use the Lipschitzian property of a function to determine the precision of found solutions. They used simple Lipschitzian underestimations of the form

$$f(x) \ge f(c) - L\,|x - c|,$$

where L is the Lipschitz constant and c is an evaluation point.
These ideas were further developed in the works of Strongin and Sergeyev [23, 28], who established an elaborate theory (the "information-statistical approach") for estimating function bounds over given intervals.
Second-order Lipschitzian bounds were studied in [1, 3]. The authors proposed to use the following underestimation:

$$f(x) \ge f(c) + f'(c)(x - c) - \frac{L}{2}\,(x - c)^2, \qquad (2)$$

where L is the Lipschitz constant for the derivative. This underestimation was further improved by Sergeyev in [24], who introduced a smooth support function that is closer to the objective function than (2). In the paper [25], geometric and information frameworks are taken into consideration for constructing global optimization algorithms. Another paper [15] is devoted to the development of effective global optimization algorithms for Lipschitz functions with Lipschitz first derivatives.
Further progress in univariate global optimization came from the important observation that interval bounds on the derivative can replace a Lipschitz constant. In [4] the authors combine ideas borrowed from the Pijavskij method and interval approaches. Besides new bounds, the paper introduces powerful reduction rules that can significantly speed up the search process.
Another replacement for the Lipschitz constant is provided by slopes. A slope is defined as an interval \(S_f(c)\) that satisfies the following inclusion:

$$f(x) \in f(c) + S_f(c)\,(x - c) \quad \text{for all } x \in [\underline{x}, \overline{x}],$$

where c is a point within the interval \([\underline{x}, \overline{x}]\).
Clearly \(S_f(c) \subseteq [\min _{x \in [\underline{x}, \overline{x}]} f'(x), \max _{x \in [\underline{x}, \overline{x}]} f'(x)]\). However, this inclusion is often strict: slopes can provide much tighter bounds than derivative estimations. In [20, 21] efficient algorithms for evaluating slopes are proposed. Slopes are evaluated from an algebraic expression, driven by rules, similarly to automatic differentiation.
It is also worth noting powerful global optimization techniques [6, 7, 9, 17, 19, 27] for the multivariate case that can serve as a source of good ideas for univariate optimization. See [11] for a good survey of such approaches.
Interval bounds, Lipschitzian bounds and slopes can be considered as a simple form of linear underestimations. More elaborate PWL lower bounds called “kites” are considered in [29]. Kites combine the centered forms and the linear boundary value forms thereby achieving better approximation w.r.t. both forms used separately.
Concave PWL lower and convex PWL upper bounds consisting of exactly two line segments were considered in [5, 13]. The authors propose rules to evaluate these bounds automatically from an algebraic representation of an expression.
We should also mention that the approach suggested in our paper differs from convex envelopes, convex underestimators and other convexification techniques developed in [2, 8, 12, 14]. The main difference between the approaches outlined above and ours is that we consider generic PWL bounds with an arbitrary finite number of segments, not limited to convex or concave cases.
3 Piecewise Linear Bounds
3.1 Basic Properties of Piecewise Linear Functions
A piecewise linear function on an interval [a, b] is defined as a sequence of segments \(z_i\) connecting points \((x_i, y_i)\) and \((x_{i+1}, y_{i+1})\). More formally:

$$\mu (x) = y_i + \frac{y_{i+1} - y_i}{x_{i+1} - x_i}\,(x - x_i), \quad x \in [x_i, x_{i+1}], \; i = 1, \dots , n-1,$$

where \(n \ge 2\), \(a = x_1 \le \dots \le x_n = b\). In what follows we'll use the abbreviation PWL for "piecewise linear".
Where appropriate we will denote a PWL function by the sequence of its nodes enclosed in braces: \(\left\{ (x_1, y_1), \dots , (x_n, y_n)\right\} \).
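To make the node representation concrete, evaluating a PWL function stored as a node sequence can be sketched as follows (a minimal illustration of ours, not code from the paper; nodes are assumed strictly increasing in x):

```python
from bisect import bisect_right

def pwl_eval(nodes, x):
    """Evaluate the PWL function {(x1, y1), ..., (xn, yn)} at a point x
    by linear interpolation on the segment containing x.
    Nodes are assumed sorted with strictly increasing x-coordinates."""
    xs = [p[0] for p in nodes]
    ys = [p[1] for p in nodes]
    if not xs[0] <= x <= xs[-1]:
        raise ValueError("x lies outside [a, b]")
    i = min(bisect_right(xs, x), len(xs) - 1)  # right end of the segment
    t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + t * (ys[i] - ys[i - 1])
```

For example, `pwl_eval([(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)], 1.5)` returns 1.0.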
The set of PWL functions is closed under a set of basic algebraic operations, including superposition. More formally, this is stated in the two straightforward propositions below.
Proposition 1
Let \(\psi (x)\) and \(\phi (x)\) be PWL functions on an interval [a, b]. Then the expressions

$$\alpha \psi (x) + \beta \phi (x) \;\; (\alpha , \beta \in \mathbb {R}), \quad \min (\psi (x), \phi (x)), \quad \max (\psi (x), \phi (x))$$

are PWL functions on [a, b].
Proposition 2
Let \(\phi (x)\) and \(\psi (x)\) be PWL functions on the intervals [a, b] and [c, d] respectively, where \(c = \min _{x \in [a,b]} \phi (x)\), \(d = \max _{x \in [a,b]} \phi (x)\). Then \(\omega (x) = \psi (\phi (x))\) is a PWL function on [a, b].
A piecewise linear lower (upper) bound for a function f(x) on an interval [a, b] is a piecewise linear function \(\psi (x)\) such that \(f(x) \ge \psi (x)\) (\(f(x) \le \psi (x)\)) for all \(x \in [a,b]\).
A desirable feature in practice is the ability to automatically construct PWL bounds from the function's representation. Below we introduce a theory and rules for computing PWL bounds automatically, assuming that the symbolic representation of a function is known.
3.2 PWL Bounds for Elementary Functions
PWL bounds are computed from the tree representation of an expression from the bottom (leaves) to the top (root). The rules to compute bounds rely on PWL bounds for elementary functions.
By elementary functions we understand the following univariate functions: \(\sin (x),\;\cos (x),\;\arcsin (x),\;\arccos (x),\;\tan (x),\;\arctan (x),\;\cot (x),\;e^x,\) \(\;\ln (x),\;\frac{1}{x} \) (for \(x>0\)), \(x^{\alpha }\;(\alpha >0)\). Many elementary functions (such as \(e^x,\; \ln (x),\;\frac{1}{x}\)) are convex or concave on their whole domain of definition. For the remaining ones (such as \(\sin (x),\;\cos (x),\tan (x)\)) the interval on which bounds are computed can be subdivided into smaller intervals where the function is concave or convex. Thus w.l.o.g. we can limit our consideration to the case when a function is convex (the concave case is similar). Obviously, the list of elementary functions can easily be enlarged.
Proposition 3
Let f(x) be a convex function over an interval [a, b]. Consider \(n \ge 1\) points within this interval, \(a \le x_1< \dots < x_n \le b\). Define a function \(\mu (x)\) as follows:

$$\mu (x) = \max _{i = 1, \dots , n} \psi _i(x),$$

where \(\psi _i(x) = f(x_i) + d_i (x - x_i)\) and \(d_i\) is a subderivative of f(x) at the point \(x_i\) on the interval [a, b]. Then \(\mu (x)\) is a lower PWL bound for f(x) (Fig. 1).
Proposition 4
Let f(x) be a convex function over an interval [a, b]. Consider \(n \ge 2\) points within this interval, \(a = x_1< \dots < x_n = b\). Define a function \(\psi (x)\) as follows:

$$\psi (x) = \left\{ (x_1, y_1), \dots , (x_n, y_n)\right\} ,$$

where \(y_i = f(x_i)\), \(i=1,\dots ,n\). Then \(\psi (x)\) is an upper PWL bound for f(x) (Fig. 2).
Propositions 3 and 4 provide grounds for constructing lower and upper bounds for elementary functions. Clearly the precision of bounds grows with the number of points in a set \(x_1, \dots , x_n\).
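Propositions 3 and 4 translate directly into code. The sketch below is our illustration (the helper names and the derivative argument `df` are our own choices): it returns the node lists of the tangent-based lower bound and the chord-based upper bound of a convex function f.

```python
import math

def tangent_lower_nodes(f, df, pts, a, b):
    """Proposition 3 sketch: lower PWL bound for a convex f on [a, b] as the
    upper envelope of tangents drawn at the (sorted, distinct) points pts."""
    pts = sorted(pts)
    lines = [(df(x), f(x) - df(x) * x) for x in pts]   # (slope, intercept)
    nodes = [(a, lines[0][0] * a + lines[0][1])]
    for (s1, c1), (s2, c2) in zip(lines, lines[1:]):
        xi = (c1 - c2) / (s2 - s1)                     # adjacent tangents meet here
        nodes.append((xi, s1 * xi + c1))
    nodes.append((b, lines[-1][0] * b + lines[-1][1]))
    return nodes

def chord_upper_nodes(f, pts):
    """Proposition 4 sketch: upper PWL bound interpolating a convex f at pts,
    which must include both interval endpoints."""
    return [(x, f(x)) for x in sorted(pts)]
```

For \(f(x) = e^x\) on [0, 2], tangents at 0.5 and 1.5 and chords through 0, 1, 2 produce two node lists that sandwich \(e^x\) over the whole interval.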
3.3 Automated Synthesis of PWL Bounds
In the previous section we proposed an approach to constructing linear bounds for elementary functions. However, to compute bounds for a function defined by an expression, we need rules to calculate bounds for superpositions of functions and for the basic operators used to construct formulas.
We start with simple rules for linear combinations and the \(\min , \max \) operators.
Proposition 5
Let \(\underline{\mu _f}(x)\), \(\overline{\mu _f}(x)\) be PWL lower and upper bounds for a function f(x) on an interval [a, b]. Let \(\underline{\mu _g}(x)\), \(\overline{\mu _g}(x)\) be PWL lower and upper bounds for a function g(x) on an interval [a, b]. Then the following properties hold:
The obvious rules (3)–(5) allow constructing PWL bounds for linear combinations of elementary functions.
The situation with multiplication is more complex. If \(x \in [a,b], y \in [c,d]\) then the following inequalities [10] hold for multiplication:
where \(\phi _1(x) = \underline{\mu _f}(x)\,\cdot \underline{\mu _g}(x)\), \(\phi _2(x) = \underline{\mu _f}(x)\,\cdot \overline{\mu _g}(x)\), \(\phi _3(x) = \overline{\mu _f}(x)\,\cdot \underline{\mu _g}(x)\), \(\phi _4(x) = \overline{\mu _f}(x)\,\cdot \overline{\mu _g}(x)\). Observe that expressions \(\phi _i(x)\), \(i = 1,2,3,4\) are piecewise quadratic. Since quadratic functions are either concave or convex the PWL lower and upper bounds for them are readily constructed according to Propositions 3 and 4. The lower bounds for \(\min (\phi _1(x), \phi _2(x), \phi _3(x), \phi _4(x))\) and \(\max (\phi _1(x), \phi _2(x), \phi _3(x), \phi _4(x))\) are obtained from (7), (8).
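Pointwise, the multiplication inequalities say that f(x)g(x) always lies between the minimum and the maximum of the four products \(\phi _i(x)\). Below is a short sketch of ours; the concrete bound functions are our own crude choices for \(f(x) = \sin (x)\) and \(g(x) = x - x^2\) on [1, 3] (there \(0 \le \sin (x) \le 1\), and the concave g lies between its chord and its tangent at \(x = 2\)).

```python
import math

def product_bracket(lf, uf, lg, ug, x):
    """Given lower/upper bound functions for f and g, return the pointwise
    bracket [min, max] of the four products phi_1..phi_4 at x, which is
    guaranteed to contain f(x) * g(x)."""
    prods = (lf(x) * lg(x), lf(x) * ug(x), uf(x) * lg(x), uf(x) * ug(x))
    return min(prods), max(prods)

# Crude bounds (ours) for f(x) = sin(x) and g(x) = x - x**2 on [1, 3].
lf, uf = (lambda x: 0.0), (lambda x: 1.0)
lg, ug = (lambda x: -3.0 * x + 3.0), (lambda x: -3.0 * x + 4.0)
```

The bracket endpoints are piecewise quadratic in x, which is exactly why Propositions 3 and 4 are invoked afterwards to re-linearize them.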
The PWL bounds for the reciprocal \(1/x\) can be constructed following Propositions 3 and 4 on an interval [a, b] when either \(a > 0\) or \(b < 0\). The remaining case \(0 \in [a, b]\) is omitted in this paper and left for further study. Having bounds for the reciprocal, division reduces to multiplication in the standard way: \(x / y = x \cdot \frac{1}{y}\).
To evaluate bounds for complex expressions we need rules to process function superposition. Consider a function \(h(x) = f(g(x))\) on an interval [a, b]. Let \(\underline{\mu _g}(x)\) and \(\overline{\mu _g}(x)\) be PWL bounds for g(x) on the interval [a, b]:

$$\underline{\mu _g}(x) \le g(x) \le \overline{\mu _g}(x), \quad x \in [a,b]. \qquad (10)$$
Denote \(c = \min _{x \in [a,b]} \underline{\mu _g}(x)\), \(d = \max _{x \in [a,b]} \overline{\mu _g}(x)\). Let \(\underline{\mu _f}(x)\) and \(\overline{\mu _f}(x)\) be PWL bounds for a function f(x) on [c, d]:

$$\underline{\mu _f}(x) \le f(x) \le \overline{\mu _f}(x), \quad x \in [c,d]. \qquad (11)$$
Proposition 6
If the function \(\underline{\mu _f}(x)\) is non-decreasing monotonic on [c, d] then \(\underline{\mu _f}(\underline{\mu _g}(x))\) is a PWL lower bound for h(x) on [a, b].
Proof
According to Proposition 2, \(\underline{\mu _f}(\underline{\mu _g}(x))\) is a PWL function. It remains to prove that \(\underline{\mu _f}(\underline{\mu _g}(x)) \le h(x)\) on [a, b]. Consider \(x \in [a,b]\). From (10) we derive that \(\underline{\mu _g}(x) \le g(x)\). Since \(\underline{\mu _g}(x), g(x) \in [c, d]\) and \(\underline{\mu _f}(x)\) is non-decreasing monotonic on [c, d], we obtain \(\underline{\mu _f}(\underline{\mu _g}(x)) \le \underline{\mu _f}(g(x))\). From (11) it follows that \(\underline{\mu _f}(g(x)) \le f(g(x)) = h(x)\). Thus \(\underline{\mu _f}(\underline{\mu _g}(x)) \le h(x)\) for \(x \in [a,b]\).
Figure 3 shows the synthesis of a PWL lower bound for the composite function \(h(x)=\sin ^2(x)\) on the interval \([0, \pi ]\). Notice that \(\sin (x) \in [0,1]\) when \(x \in [0,\pi ]\). The outer function \(x^2\) as well as the PWL lower bound \(\underline{\mu _f}\) are non-decreasing monotonic on the interval [0, 1]. That is why it suffices to consider a PWL lower bound \(\underline{\mu _g}\) for the inner function \(\sin (x)\). According to Proposition 6, the composite function \(\underline{\mu _f}(\underline{\mu _g}(x))\) is a PWL lower bound for the function \(\sin ^2(x)\).
Proposition 7
If the function \(\underline{\mu _f}(x)\) is non-increasing monotonic on [c, d] then \(\underline{\mu _f}(\overline{\mu _g}(x))\) is a PWL lower bound for h(x) on [a, b].
Proof
According to Proposition 2 \(\underline{\mu _f}(\overline{\mu _g}(x))\) is a PWL function. It remains to prove that \(\underline{\mu _f}(\overline{\mu _g}(x)) \le h(x)\) on [a, b]. Consider \(x \in [a,b]\). From (10) we derive that \(g(x) \le \overline{\mu _g}(x)\). Since \(\overline{\mu _g}(x), g(x) \in [c, d]\) and \(\underline{\mu _f}(x)\) is non-increasing monotonic on [c, d] we obtain \( \underline{\mu _f}(\overline{\mu _g}(x)) \le \underline{\mu _f}(g(x))\). From (11) it follows that \(\underline{\mu _f}(g(x)) \le f(g(x)) = h(x)\). Thus \(\underline{\mu _f}(\overline{\mu _g}(x)) \le h(x)\) for \(x \in [a,b]\).
Figure 4 shows the synthesis of a PWL lower bound for the composite function \(h(x)=\sin ^2(x)\) on the interval \([\pi , 2 \pi ]\). Notice that \(\sin (x) \in [-1,0]\) when \(x \in [\pi , 2\pi ]\). Unlike the previous example, the outer function \(x^2\) as well as the PWL lower bound \(\underline{\mu _f}\) are non-increasing monotonic on the interval \([-1, 0]\). Therefore we take the PWL upper bound \(\overline{\mu _g}\) for the inner function \(\sin (x)\). According to Proposition 7, the composite function \(\underline{\mu _f}(\overline{\mu _g}(x))\) is a PWL lower bound for the function \(\sin ^2(x)\).
In the same way we can prove the following two propositions.
Proposition 8
If the function \(\overline{\mu _f}(x)\) is non-decreasing monotonic on [c, d] then \(\overline{\mu _f}(\overline{\mu _g}(x))\) is a PWL upper bound for h(x) on [a, b].
Proposition 9
If the function \(\overline{\mu _f}(x)\) is non-increasing monotonic on [c, d] then \(\overline{\mu _f}(\underline{\mu _g}(x))\) is a PWL upper bound for h(x) on [a, b].
If PWL bounds for an elementary function f(x) on [c, d] are monotonic, we can directly apply Propositions 6–9 to compute PWL bounds for a composite function f(g(x)). Otherwise we need to somehow obtain monotonic PWL bounds for f(x). Fortunately, this can easily be done for any PWL bound. We explain this for an upper bound (Fig. 5).
Suppose \(\mu (x)\) is a PWL function on an interval [c, d]. The Monotonize procedure constructs a PWL function \(\tilde{\mu }(x)\) that is non-decreasing monotonic and \(\mu (x) \le \tilde{\mu }(x)\) for all \(x \in [c,d]\).
Starting from the leftmost node \(x_1 = c\), the Monotonize algorithm compares successive nodes of a PWL function and, if monotonicity is violated, fixes it by "lifting" the lower node until the link between successive nodes is horizontal. If \(\mu (c) = \mu (x_1) > \mu (x_n) = \mu (d)\) it is usually better to construct a non-increasing PWL upper bound \(\tilde{\mu }(x)\) going in the opposite direction (from d to c) in the same manner. The case of a lower bound is treated similarly.
4 Numerical Examples
Below we consider two examples and compare the proposed approach with interval [10] and slope [20, 21] arithmetic. For clarity of presentation we use approximate computations with low precision. In practice the computations can be made as accurate as needed.
Example 1
Compute the PWL lower bound for the function \(h(x) = \sin (x)\,\cdot (-x^2+x)\) over the interval [1, 3].
According to rule (9), the lower bound \(\underline{\mu _h}(x)\) is computed as follows:
Observe that \(\underline{\mu _f}(x) \ge 0\) while \(\overline{\mu _g}(x)\le 0\) for \(x\in [1,3]\) (Fig. 6). Thus we can conclude that \(\underline{\mu _h}(x) = \overline{\mu _f}(x) \underline{\mu _g}(x)\). Two tangents \(0.54x+0.3\) and \(-0.99x+3.11\) forming the upper bound for \(\sin (x)\) intersect at \(x=1.83\). For simplicity we choose the same x for constructing the chords \(-1.83x + 1.83\) and \(-3.83x + 5.50\) constituting the lower bound for g(x).
Multiplying the obtained bounds we get
However, \(\phi (x)\) is not piecewise linear. Thus we need to construct a PWL bound for \(\phi (x)\). Figure 7 shows a PWL lower bound \(\left\{ (1.0, 0.0),(1.76,-1.76),\right. \) \(\left. (2.03,-2.8),(2.64, -2.8), (3, -0.91)\right\} \) for \(\phi (x)\) (and hence for h(x)). This lower bound gives a lower estimate for h(x) equal to \(-2.8\).
Interval analysis gives the following result:

$$\sin ([1,3]) \cdot \left( -[1,3]^2 + [1,3]\right) = [0.14, 1] \cdot [-8, 2] = [-8, 2],$$

yielding \(-8\) as a lower bound.
To estimate h(x) by means of slope arithmetic we compute the enclosure \(Y_s\) of \(s_h(c, A)\) for \(A = [1, 3]\) and \(c = 2\):
Thus the slopes arithmetic gives the following inclusion:
yielding \(-6\) as a lower bound.
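As a sanity check of Example 1 (ours, not part of the paper), a dense grid confirms that the true minimum of h on [1, 3] (roughly \(-2.27\), attained near \(x = 2.41\)) lies above all three reported lower estimates:

```python
import math

# Grid check: the minimum of h(x) = sin(x) * (-x**2 + x) on [1, 3] must lie
# above every reported lower estimate: -2.8 (PWL bounds), -6 (slope
# arithmetic), -8 (interval arithmetic).
h = lambda x: math.sin(x) * (-x * x + x)
h_min = min(h(1.0 + 2.0 * k / 10000) for k in range(10001))

for bound in (-2.8, -6.0, -8.0):
    assert h_min >= bound  # each reported lower bound is valid
```

The gap between \(-2.8\) and the grid minimum also shows how much tighter the PWL bound is than the interval and slope estimates.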
Example 2
Evaluate a lower bound for a composite function \(h(x)=-e^{x^3-x^2}\), \(x\in [0,2]\).
Notice that h(x) is neither convex nor concave on [0, 2]. The function is a composition of the outer function \(f(x)=-e^x\) and the inner function \(g(x)=x^3-x^2\). Figures 8 and 9 demonstrate the synthesis of a PWL lower bound.
Since \(-e^x\) and its lower PWL bound \(\underline{\mu _f}\) are non-increasing functions (Fig. 9), we can apply Proposition 7.
According to Proposition 7, we first need to compute the image of the inner function, which is a difference of the two elementary functions \(x^3\) and \(x^2\). To obtain the upper PWL bound, take the point \(x=1.5\) in [0, 2]. Then the upper PWL bound for \(x^3\) is \(\left\{ (0,0),(1.5,3.375),(2,8)\right\} \). It consists of two chords \(\overline{\mu _1}(x)=2.25 x\) and \(\overline{\mu _2}(x)=9.25 x-10.5\). Similarly, the upper PWL bound for \(x^2\) is \(\left\{ (0,0),(1.5,2.25),(2,4)\right\} \). It also consists of two chords \(\overline{\nu _1}(x)=1.5x\) and \(\overline{\nu _2}(x)=3.5x-3\).
In order to build lower PWL bounds for \(x^3\) and \(x^2\) we draw tangents at points \(x=0\) and \(x=2\). The resulting lower PWL bound for \(x^3\) is \(\left\{ (0,0),(\frac{4}{3},0),(2,8)\right\} \). It consists of two tangents \(\underline{\mu _1}(x)=0\) and \(\underline{\mu _2}(x)=12x-16\). The lower PWL bound for \(x^2\) obtained in a similar way is \(\left\{ (0,0),(1,0),(2,2)\right\} \). It also consists of two tangents \(\underline{\nu _1}(x)=0\) and \(\underline{\nu _2}(x)=4x-4\).
Now we are ready to build upper and lower PWL bounds for the inner function \(g(x)=x^3-x^2\). According to rule (6), the upper PWL bound is as follows:

$$\overline{\mu _g}(x) = \left\{ (0,0),(1,2.25),(1.5,1.375),(2,4)\right\} .$$
The similarly computed lower PWL bound looks as follows:

$$\underline{\mu _g}(x) = \left\{ (0,0),(\tfrac{4}{3},-2),(1.5,-0.25),(2,4)\right\} .$$
Using the obtained bounds we get the following estimation of the inner function image: \(g(x)\in [-2,4]\). According to Proposition 7, we should construct a lower PWL bound for the outer function \(f(x) = -e^x\) over \([-2,4]\). Choosing the point \(x=1.125 \in [-2,4]\) we obtain a PWL lower bound \(\underline{\mu _f}(x)\) with the following sequence of nodes: \(\left\{ (-2,-0.13),(1.125,-3.08),(4,-54.59)\right\} \). It consists of the following two chords: \(\underline{\phi _1}(x) = -0.95x-1.99\) for \(x\in [-2, 1.125]\) and \(\underline{\phi _2}(x) = -17.91x+17.06\) for \(x\in [1.125, 4]\), depicted in Fig. 9.
Finally, we obtain the PWL lower bound \(\underline{\mu _h}(x)\) for \(h(x)=-e^{x^3-x^2}\) over \(x\in [0,2]\) as the composite function \(\underline{\mu _f}(\overline{\mu _g}(x))\):

$$\underline{\mu _h}(x) = \left\{ (0,-1.99),(0.5, -3.08),(1, -23.23),(1.5,-7.56),(2,-54.59)\right\} .$$

Observe that the lower estimate \(-54.59\) of the function is precise at the right endpoint: \(h(2)=-e^{2^3-2^2} \approx -54.59\).
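The whole construction can also be verified numerically. The sketch below (our verification, not the paper's code) rebuilds \(\overline{\mu _g}\) from the chords and tangents above, takes \(\underline{\mu _f}\) with unrounded chord values of \(-e^x\) instead of the rounded nodes, and checks on a grid that the composition underestimates h:

```python
import math

def pwl(nodes, x):
    """Linear interpolation through a PWL node list (sorted by x)."""
    for (x0, y0), (x1, y1) in zip(nodes, nodes[1:]):
        if x0 <= x <= x1:
            return y0 + (x - x0) / (x1 - x0) * (y1 - y0)
    raise ValueError("x outside the node range")

# Upper PWL bound for g(x) = x**3 - x**2: difference of the chord bound for
# x**3 and the tangent bound for x**2 (breakpoints at 0, 1, 1.5, 2).
upper_g = [(0.0, 0.0), (1.0, 2.25), (1.5, 1.375), (2.0, 4.0)]
# Lower PWL bound for f(x) = -exp(x) on [-2, 4]: chords through the same
# nodes as in the paper, with unrounded values.
lower_f = [(-2.0, -math.exp(-2.0)), (1.125, -math.exp(1.125)), (4.0, -math.exp(4.0))]

h = lambda x: -math.exp(x**3 - x**2)
for k in range(201):
    x = 2.0 * k / 200
    assert pwl(lower_f, pwl(upper_g, x)) <= h(x) + 1e-9
```

The check passes because \(\overline{\mu _g} \ge g\), \(\underline{\mu _f} \le f\), and \(\underline{\mu _f}\) is non-increasing, which is exactly the premise of Proposition 7.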
The interval analysis gives the following result:

$$-e^{[0,2]^3 - [0,2]^2} = -e^{[0,8] - [0,4]} = -e^{[-4,8]} \approx [-2980.95, -0.02],$$

yielding the lower estimate \(-2980.95\).
The lower estimation of \(h(x) = -e^{x^3-x^2}\) by means of slope arithmetic is computed as follows. First compute the enclosure \(Y_s\) of \(s_h(c, A)\) for \(A = [0, 2]\) and \(c = 1\):
The slope based enclosure is computed as follows:
Table 1 summarizes the results obtained with the different approaches. The superiority of the proposed approach is clearly observed.
5 Conclusions
We proposed an approach to the automated construction of bounding functions for functions of one variable. The synthesis of bounds is driven by rules applied to an algebraic expression of a function. The proposed approach was experimentally compared with interval and slope arithmetic. The experiments demonstrated that for some functions the proposed method can significantly outperform standard approaches.
It should be noted that our approach can be helpful in separable programming problems [16, 22]. We plan to study this topic in the future.
References
Baritompa, W.: Accelerations for a variety of global optimization methods. J. Global Optim. 4(1), 37–45 (1994)
Bompadre, A., Mitsos, A.: Convergence rate of McCormick relaxations. J. Global Optim. 52(1), 1–28 (2012)
Breiman, L., Cutler, A.: A deterministic algorithm for global optimization. Math. Program. 58(1–3), 179–199 (1993)
Casado, L.G., Martínez, J.A., García, I., Sergeyev, Y.D.: New interval analysis support functions using gradient information in a global minimization algorithm. J. Global Optim. 25(4), 345–362 (2003)
Ershov, A., Khamisov, O.V.: Automatic global optimization. Diskretnyi Analiz i Issledovanie Operatsii 11(2), 45–68 (2004)
Evtushenko, Y.G.: A numerical method of search for the global extremum of functions (scan on a nonuniform net). Zhurnal Vychislitel’noi Matematiki i Matematicheskoi Fiziki 11(6), 1390–1403 (1971)
Evtushenko, Y., Posypkin, M.: A deterministic approach to global box-constrained optimization. Optimization Letters 7(4), 819–829 (2013)
Floudas, C., Gounaris, C.: A review of recent advances in global optimization. J. Global Optim. 45(1), 3–38 (2009)
Gergel, V., Grishagin, V., Israfilov, R.: Local tuning in nested scheme of global optimization. Procedia Comput. Sci. 51, 865–874 (2015)
Hansen, E., Walster, G.W.: Global Optimization Using Interval Analysis: Revised and Expanded. CRC Press, New York (2003)
Horst, R., Pardalos, P.M.: Handbook of Global Optimization, vol. 2. Springer Science & Business Media, Dordrecht (2013)
Khajavirad, A., Sahinidis, N.: Convex envelopes of products of convex and component-wise concave functions. J. Global Optim. 52(3), 391–409 (2012)
Khamisov, O.: Explicit univariate global optimization with piecewise linear support functions. In: Proceedings of DOOR 2016, CEUR-WS, vol. 1623, pp. 218–255 (2016). http://ceur-ws.org/Vol-1623/papermp19.pdf
Khamisov, O.: Optimization with quadratic support functions in nonconvex smooth optimization. AIP Conference Proceedings 1776, 050010 (2016). https://doi.org/10.1063/1.4965331
Lera, D., Sergeyev, Y.D.: Acceleration of univariate global optimization algorithms working with Lipschitz functions and Lipschitz first derivatives. SIAM J. Optim. 23(1), 508–529 (2013)
Pardalos, P.M., Rosen, J.: Reduction of nonlinear integer separable programming problems. Int. J. Comput. Math. 24(1), 55–64 (1988)
Paulavičius, R., Žilinskas, J.: Simplicial Global Optimization. Springer, New York (2014). https://doi.org/10.1007/978-1-4614-9093-7
Pijavskij, S.: An algorithm for finding the global extremum of function. Optimal Decisions 2, 13–24 (1967)
Pintér, J.D.: Global Optimization in Action: Continuous and Lipschitz Optimization: Algorithms, Implementations and Applications, vol. 6. Springer Science & Business Media, New York (2013)
Ratz, D.: An optimized interval slope arithmetic and its application. Inst. für Angewandte Mathematik (1996)
Ratz, D.: A nonsmooth global optimization technique using slopes: the one-dimensional case. J. Global Optim. 14(4), 365–393 (1999)
Rosen, J.B., Pardalos, P.M.: Global minimization of large-scale constrained concave quadratic problems by separable programming. Math. Program. 34(2), 163–174 (1986)
Sergeyev, Y.D.: An information global optimization algorithm with local tuning. SIAM J. Optim. 5(4), 858–870 (1995)
Sergeyev, Y.D.: Global one-dimensional optimization using smooth auxiliary functions. Math. Program. 81(1), 127–146 (1998)
Sergeyev, Y.D., Mukhametzhanov, M.S., Kvasov, D.E., Lera, D.: Derivative-free local tuning and local improvement techniques embedded in the univariate global optimization. J. Optim. Theory Appl. 171(1), 186–208 (2016)
Shubert, B.O.: A sequential method seeking the global maximum of a function. SIAM J. Numer. Anal. 9(3), 379–388 (1972)
Strekalovsky, A.S.: Global optimality conditions for nonconvex optimization. J. Global Optim. 12(4), 415–434 (1998)
Strongin, R.G., Sergeyev, Y.D.: Global Optimization with Non-convex Constraints: Sequential and Parallel Algorithms, vol. 45. Springer Science & Business Media, New York (2013)
Vinkó, T., Lagouanelle, J.L., Csendes, T.: A new inclusion function for optimization: Kite-the one dimensional case. J. Global Optim. 30(4), 435–456 (2004)
© 2019 Springer Nature Switzerland AG
Khamisov, O., Posypkin, M., Usov, A. (2019). Piecewise Linear Bounding Functions for Univariate Global Optimization. In: Evtushenko, Y., Jaćimović, M., Khachay, M., Kochetov, Y., Malkova, V., Posypkin, M. (eds) Optimization and Applications. OPTIMA 2018. Communications in Computer and Information Science, vol 974. Springer, Cham. https://doi.org/10.1007/978-3-030-10934-9_13