Abstract
In this paper, we study optimality conditions for approximate solutions of nonsmooth semi-infinite programming problems. Three new classes of functions, namely \(\varepsilon \)-pseudoconvex functions of type I and of type II and \(\varepsilon \)-quasiconvex functions, are introduced. By utilizing these new concepts, sufficient optimality conditions for approximate solutions of the nonsmooth semi-infinite programming problem are established. Some examples are also presented. The results obtained in this paper improve the corresponding results of Son et al. (J Optim Theory Appl 141:389–409, 2009).
1 Introduction
In this paper, we consider the following semi-infinite programming problem:
$$\begin{aligned} \mathrm{(SIP)}\qquad \min \; f(x)\quad \text{ subject } \text{ to }\quad f_t(x)\leqslant 0,\; t\in T,\; x\in A, \end{aligned}$$
where T is an arbitrary (not necessarily finite) index set, f, \(f_t:X\rightarrow {\mathbb {R}}\), \(t\in T\), are locally Lipschitz functions on a Banach space X, and A is a nonempty closed convex subset of X. It is well known that semi-infinite programming problems have become an active research topic in mathematical programming due to their extensive applications in many fields such as reverse Chebyshev approximation, robust optimization, minimax problems, design centering and disjunctive programming; see [1,2,3]. Recently, a great deal of results has appeared in the literature; see [4,5,6,7,8,9,10,11,12,13,14,15] and the references therein.
We note that approximate solutions of optimization problems are very important from both the theoretical and practical points of view because they exist under very mild hypotheses and many solution methods (for example, iterative or heuristic algorithms) produce this kind of solution. Thus, it is meaningful to consider various concepts of approximate solutions to optimization problems. The first concept of approximate solutions for optimization problems was introduced by Kutateladze [16]. In recent years, many authors have devoted their efforts to proposing new notions of approximate solutions in connection with optimization problems [17,18,19,20,21].
In 2009, Son et al. [22] obtained some necessary and sufficient conditions for approximate solutions of nonsmooth semi-infinite programming problems, where f is \(\varepsilon \)-semiconvex and each \(f_t\), \(t\in T\), is convex. The definition of \(\varepsilon \)-semiconvexity requires f to be regular. It is worth mentioning that the regularity of f and the convexity of \(f_t\) play an important role in deriving the sufficient optimality condition for approximate solutions of the nonsmooth semi-infinite programming problem in [22]. Thus, it is natural and interesting to remove the regularity requirement on the objective function and to weaken the convexity assumption on the constraint functions. The main purpose of this paper is to make an effort in this direction.
The rest of the paper is organized as follows. In Sect. 2 we recall some basic definitions and known results on Clarke subdifferentials. We also introduce three new classes of functions, namely \(\varepsilon \)-pseudoconvex functions of type I and of type II and \(\varepsilon \)-quasiconvex functions. Some examples illustrate these new functions. Section 3 contains the main results of this paper. Using these new classes of functions, sufficient optimality conditions for approximate solutions of the nonsmooth semi-infinite programming problem are derived. Examples are given to illustrate our results. It is important to note that, in our main results, the regularity assumption on the objective function and the convexity assumption on the constraint functions are removed. Our results improve the corresponding results of Son et al. [22].
2 Preliminaries
Throughout this paper, we assume that X is a Banach space and T is a compact topological space. Let \(X^*\) be the dual space of X, and denote the closed unit ball of \(X^*\) by \(B^*\).
Let us denote by \({\mathbb {R}}^{(T)}\) the following linear vector space [1]:
$$\begin{aligned} {\mathbb {R}}^{(T)}:=\{\lambda =(\lambda _t)_{t\in T}:\lambda _t=0\;\text{ for } \text{ all }\;t\in T\;\text{ except } \text{ for } \text{ finitely } \text{ many }\;\lambda _t\ne 0\}. \end{aligned}$$
The nonnegative cone of \({\mathbb {R}}^{(T)}\) is denoted by
$$\begin{aligned} {\mathbb {R}}^{(T)}_+:=\{\lambda \in {\mathbb {R}}^{(T)}:\lambda _t\geqslant 0,\;\forall \;t\in T\}. \end{aligned}$$
It is easy to see that \({\mathbb {R}}^{(T)}_+\) is a convex cone of \({\mathbb {R}}^{(T)}\). For \(\lambda \in {\mathbb {R}}_+^{(T)}\), the supporting set corresponding to \(\lambda \) is defined by \(T(\lambda ):=\{t\in T: \lambda _t>0\}\), which is a finite subset of T.
Let Z be a linear vector space. For \(\lambda \in {\mathbb {R}}^{(T)}\) and \(\{z_t\}_{t\in T}\subset Z\), we set
$$\begin{aligned} \sum _{t\in T}\lambda _t z_t:=\left\{ \begin{array}{ll} \sum \limits _{t\in T(\lambda )}\lambda _t z_t,&{}\quad \text{ if }\;T(\lambda )\ne \emptyset ,\\ 0,&{}\quad \text{ if }\;T(\lambda )=\emptyset . \end{array}\right. \end{aligned}$$
Let \(f:X\rightarrow {\mathbb {R}}\) be a real-valued function. The function f is said to be directionally differentiable at \(x\in X\) if, for every direction \(d\in X\), the usual one-sided directional derivative
$$\begin{aligned} f'(x;d):=\lim _{s\rightarrow 0^+}\frac{f(x+sd)-f(x)}{s} \end{aligned}$$
of f at x in the direction d exists and is finite.
Let \(f:X\rightarrow {\mathbb {R}}\) be a locally Lipschitz function and \(x\in X\). The Clarke generalized directional derivative [23] of f at x in the direction \(d\in X\) is defined by
$$\begin{aligned} f^\circ (x;d):=\limsup _{y\rightarrow x,\;s\rightarrow 0^+}\frac{f(y+sd)-f(y)}{s}, \end{aligned}$$
and the Clarke generalized gradient [23] of f at x is denoted by
$$\begin{aligned} \partial _C f(x):=\{\xi \in X^*:f^\circ (x;d)\geqslant \langle {\xi ,d}\rangle ,\;\forall \;d\in X\}. \end{aligned}$$
It is well known that
$$\begin{aligned} f^\circ (x;d)=\max \{\langle {\xi ,d}\rangle :\xi \in \partial _C f(x)\},\quad \forall \;d\in X. \end{aligned}$$
The function f is said to be regular [23] at \(x\in X\) if, for each \(d\in X\), the directional derivative \(f'(x;d)\) exists and coincides with \(f^\circ (x;d)\). When f is convex, \(\partial _C{f(x)}\) coincides with the subdifferential \(\partial {f(x)}\) in the sense of convex analysis.
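The relation \(f^\circ (x;d)=\max \{\langle {\xi ,d}\rangle :\xi \in \partial _C f(x)\}\) can be checked numerically in a simple one-dimensional case. The sketch below is our own illustration, not part of the paper: it approximates the Clarke generalized directional derivative of \(f(x)=|x|\) at \(x=0\), where \(\partial _C f(0)=[-1,1]\) and hence \(f^\circ (0;d)=|d|\).

```python
# Illustrative sketch (not from the paper): crude numerical approximation of the
# Clarke generalized directional derivative
#   f°(x; d) = limsup_{y -> x, s -> 0+} [f(y + s*d) - f(y)] / s
# for f(x) = |x| at x = 0, compared with max{xi * d : xi in [-1, 1]} = |d|.

def clarke_dd(f, x, d, h=1e-4, n=200):
    """Approximate f°(x; d) by maximizing the difference quotient over
    base points y in [x - h, x + h] and a few small step sizes s."""
    best = float("-inf")
    for i in range(-n, n + 1):
        y = x + i * h / n
        for s in (h, h / 2, h / 4):
            best = max(best, (f(y + s * d) - f(y)) / s)
    return best

approx_pos = clarke_dd(abs, 0.0, 1.0)   # should be close to f°(0; 1)  = 1
approx_neg = clarke_dd(abs, 0.0, -1.0)  # should be close to f°(0; -1) = 1
```

In both directions the grid attains the value 1, matching the maximum of \(\langle {\xi ,d}\rangle \) over \(\partial _C f(0)=[-1,1]\); note that the ordinary directional derivative \(f'(0;-1)=-1\) differs, which is exactly the failure of regularity discussed below.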
Let \(E\subseteq X\) be a nonempty subset and \(x\in E\). Denote by \(d_E\) the distance function of E, i.e., \(d_E(x):=\inf \{\parallel x-y\parallel :y\in E\}\). A vector v in X is tangent to E at x provided \(d^\circ _E(x;v)=0\). The set of all tangents to E at x is denoted \(T_C (x)\). The Clarke normal cone to E at x is defined by
$$\begin{aligned} N_C(E;x):=\{\xi \in X^*:\langle {\xi ,v}\rangle \leqslant 0,\;\forall \;v\in T_C (x)\}. \end{aligned}$$
Let \(E\subseteq X\) be a nonempty closed convex subset. The normal cone to E at \(x\in E\) is defined by
$$\begin{aligned} N(E;x):=\{\xi \in X^*:\langle {\xi ,y-x}\rangle \leqslant 0,\;\forall \;y\in E\}. \end{aligned}$$
Obviously, if E is a closed convex set, then \(N_C(E;x)=N(E;x)\).
The following lemmas will be used in the sequel.
Lemma 2.1
[23, Corollary, p. 52] Let \(E\subseteq X\) be a nonempty subset and \(x\in E\). Suppose that \(f:X\rightarrow {\mathbb {R}}\) is Lipschitz near x and attains a minimum over E at x. Then, \(0\in \partial _C f(x)+N_C(E;x)\).
Lemma 2.2
[23, Proposition 2.3.3] Let \(f_i:X\rightarrow {\mathbb {R}}\), \(i=1,2,\cdots ,n\), be locally Lipschitz functions. Then, for any \(x\in X\),
$$\begin{aligned} \partial _C\left( \sum _{i=1}^n f_i\right) (x)\subseteq \sum _{i=1}^n\partial _C f_i(x). \end{aligned}$$
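The inclusion in Lemma 2.2 can be strict. A minimal sketch of this (our own example, assuming \(f_1(x)=|x|\) and \(f_2(x)=-|x|\) on \({\mathbb {R}}\), whose Clarke gradients at 0 are both the interval \([-1,1]\)):

```python
# Illustrative sketch (our example, not from the paper): on R, represent a
# Clarke gradient as a closed interval (lo, hi) and check the sum rule
#   partial_C(f1 + f2)(x)  is contained in  partial_C f1(x) + partial_C f2(x),
# which holds strictly here: f1(x) = |x|, f2(x) = -|x|, so f1 + f2 = 0.

def minkowski_sum(a, b):
    """Minkowski sum of closed intervals given as (lo, hi) pairs."""
    return (a[0] + b[0], a[1] + b[1])

def contains(outer, inner):
    return outer[0] <= inner[0] and inner[1] <= outer[1]

grad_f1 = (-1.0, 1.0)    # partial_C |x|  at 0
grad_f2 = (-1.0, 1.0)    # partial_C -|x| at 0 (also [-1, 1])
grad_sum = (0.0, 0.0)    # f1 + f2 is identically 0, so its gradient is {0}

rhs = minkowski_sum(grad_f1, grad_f2)   # the interval [-2, 2]
```

Here `grad_sum` is `{0}` while the right-hand side is \([-2,2]\), so the inclusion is proper.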
Let \(E\subseteq X\) be a nonempty subset. A locally Lipschitz function \(f:X\rightarrow {\mathbb {R}}\) is said to be pseudoconvex at \(x\in E\) if, for all \(y\in E\),
$$\begin{aligned} \langle {\xi ,y-x}\rangle \geqslant 0,\;\exists \;\xi \in \partial _C f(x)\;\Rightarrow \;f(y)\geqslant f(x). \end{aligned}$$
The function f is said to be quasiconvex at \(x\in E\) if, for all \(y\in E\),
$$\begin{aligned} f(y)\leqslant f(x)\;\Rightarrow \;\langle {\xi ,y-x}\rangle \leqslant 0,\;\forall \;\xi \in \partial _C f(x). \end{aligned}$$
Son et al. [22] introduced the following notion of generalized convexity, which generalizes both convexity and semiconvexity.
Definition 2.1
Let \(E\subseteq X\) be a nonempty subset and \(\varepsilon \geqslant 0\). A locally Lipschitz function \(f:X\rightarrow {\mathbb {R}}\) is said to be \(\varepsilon \)-semiconvex at \(x\in E\) if f is regular at x and
$$\begin{aligned} f'(x;y-x)+\sqrt{\varepsilon }\Vert y-x\Vert \geqslant 0\;\Rightarrow \;f(y)+\sqrt{\varepsilon }\Vert y-x\Vert \geqslant f(x),\quad \forall \;y\in E. \end{aligned}$$
Remark 2.1
It is worth mentioning that the notion of \(\varepsilon \)-semiconvex function introduced by Son et al. [22] is different from the one introduced by Loridan [20]. We also note that, when \(\varepsilon =0\), the definition of an \(\varepsilon \)-semiconvex function coincides with that of a semiconvex function as defined by Mifflin [24].
In order to obtain our main results, we introduce the following generalized convex functions.
Definition 2.2
Let \(E\subseteq X\) be a nonempty subset and \(\varepsilon \geqslant 0\). A locally Lipschitz function \(f:X\rightarrow {\mathbb {R}}\) is said to be
(i)
\(\varepsilon \)-pseudoconvex of type I at \(x\in E\) if, for all \(y\in E\),
$$\begin{aligned} \langle {\xi ,y-x}\rangle +\sqrt{\varepsilon }\Vert y-x\Vert \geqslant 0,\;\exists \;\xi \in \partial _C f(x)\;\Rightarrow \;f(y)+\sqrt{\varepsilon }\Vert y-x\Vert \geqslant f(x). \end{aligned}$$
(ii)
\(\varepsilon \)-pseudoconvex of type II at \(x\in E\) if, for all \(y\in E\),
$$\begin{aligned} \langle {\xi ,y-x}\rangle \geqslant 0,\;\exists \;\xi \in \partial _C f(x)\;\Rightarrow \;f(y)+\sqrt{\varepsilon }\Vert y-x\Vert \geqslant f(x). \end{aligned}$$
(iii)
\(\varepsilon \)-quasiconvex at \(x\in E\) if, for all \(y\in E\),
$$\begin{aligned} f(y)\leqslant f(x)\;\Rightarrow \;\langle {\xi ,y-x}\rangle +\sqrt{\varepsilon }\Vert y-x\Vert \leqslant 0,\;\forall \;\xi \in \partial _C f(x). \end{aligned}$$
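For one-dimensional examples, the three conditions in Definition 2.2 can be verified on a finite grid. The sketch below is our own illustration (the function \(f(x)=|x|\), the point \(x=0\), the set \(E=[-1,1]\) and the sampled subdifferential \([-1,1]\) are choices made here, not data from the paper):

```python
import math

# Grid-based sanity checks (our illustration) of the three conditions in
# Definition 2.2 for f: R -> R at a point x, with partial_C f(x) approximated
# by a finite sample of subgradients.

def type1(f, x, subgrads, E, eps):
    """eps-pseudoconvexity of type I at x over the grid E."""
    r = math.sqrt(eps)
    return all(f(y) + r * abs(y - x) >= f(x)
               for y in E
               if any(xi * (y - x) + r * abs(y - x) >= 0 for xi in subgrads))

def type2(f, x, subgrads, E, eps):
    """eps-pseudoconvexity of type II at x over the grid E."""
    r = math.sqrt(eps)
    return all(f(y) + r * abs(y - x) >= f(x)
               for y in E
               if any(xi * (y - x) >= 0 for xi in subgrads))

def quasi(f, x, subgrads, E, eps):
    """eps-quasiconvexity at x over the grid E."""
    r = math.sqrt(eps)
    return all(xi * (y - x) + r * abs(y - x) <= 0
               for y in E if f(y) <= f(x)
               for xi in subgrads)

E = [-1 + k / 100 for k in range(201)]   # grid on [-1, 1]
sub = [-1 + k / 50 for k in range(101)]  # sample of partial_C |x| at 0 = [-1, 1]
eps = 0.25
# f(x) = |x| at x = 0 satisfies all three conditions for any eps >= 0.
ok = (type1(abs, 0.0, sub, E, eps),
      type2(abs, 0.0, sub, E, eps),
      quasi(abs, 0.0, sub, E, eps))
```

Such a grid check can of course only refute a condition, never prove it, but it is a convenient way to explore candidate examples.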
Remark 2.2
If f is \(\varepsilon \)-semiconvex at x, then it is also \(\varepsilon \)-pseudoconvex of type I at x. But the converse is not true in general.
Example 2.1
Let \(f:{\mathbb {R}}\rightarrow {\mathbb {R}}\) be defined by
By a simple computation, we have
Moreover, \(\partial _C f(0)=[1,\frac{3}{2}]\). Let \(\varepsilon \geqslant 0\). It is easy to see that f is \(\varepsilon \)-pseudoconvex of type I at \(x=0\). However, f is not \(\varepsilon \)-semiconvex at \(x=0\) because f is not regular at \(x=0\).
Remark 2.3
We would like to point out that the definition of \(\varepsilon \)-pseudoconvex of type I is different from the definition of approximate pseudoconvex introduced by Gupta et al. [25]; see Definition 6 in [25]. In Example 2.1, f is \(\varepsilon \)-pseudoconvex of type I but not approximate pseudoconvex at \(x=0\).
Remark 2.4
If f is \(\varepsilon \)-pseudoconvex of type I at x, then it is also \(\varepsilon \)-pseudoconvex of type II at x. But the converse is not true in general.
Example 2.2
Let \(f:{\mathbb {R}}\rightarrow {\mathbb {R}}\) be defined by
By simple computation, we have
and \(\partial _C f (0)=[-1,2]\). Let \(\varepsilon \geqslant 1\). Take \(\xi \in (0,2]\). It is easy to see that f is \(\varepsilon \)-pseudoconvex of type II but not \(\varepsilon \)-pseudoconvex of type I at \(x=0\).
Remark 2.5
We note that the definition of \(\varepsilon \)-pseudoconvex of type II is different from the definition of approximate pseudoconvex of type I introduced by Bhatia et al. [26]; see Definition 8 in [26]. In Example 2.2, f is \(\varepsilon \)-pseudoconvex of type II but not approximate pseudoconvex of type I at \(x=0\).
We now recall some definitions of approximate solutions. Denote the feasible set of SIP by K, i.e., \(K:=\{x\in A: f_t (x)\leqslant 0,\;\forall \;t\in T\}\).
Definition 2.3
[20] Let \(\varepsilon \geqslant 0\). A point \(x_0\in K\) is said to be
(i)
an \(\varepsilon \)-minimum for SIP if \(f(x_0)\leqslant f(x)+\varepsilon \) for all \(x\in K\).
(ii)
an \(\varepsilon \)-quasi-minimum for SIP if \(f(x_0)\leqslant f(x)+\sqrt{\varepsilon }\Vert x-x_0\Vert \) for all \(x\in K\).
(iii)
a regular \(\varepsilon \)-minimum for SIP if it is an \(\varepsilon \)-minimum and an \(\varepsilon \)-quasi-minimum for SIP.
Remark 2.6
When \(\varepsilon =0\), Definition 2.3 reduces to the usual notion of a minimum. Thus, the study of approximate minima in the case \(\varepsilon >0\) is of interest.
For SIP, denote by \(K_\varepsilon \) the \(\varepsilon \)-feasible set, where \(K_\varepsilon :=\{x\in A:f_t (x)\leqslant \sqrt{\varepsilon }\;\text{ for } \text{ all }\;t\in T\}\). It is easy to see that the set \(K_\varepsilon \) is nonempty and closed.
Definition 2.4
[20] Let \(\varepsilon \geqslant 0\). A point \(x_0\in X\) is said to be an almost \(\varepsilon \)-quasi-minimum for SIP if \(x_0\) satisfies the following conditions: (i) \(x_0\in K_\varepsilon \); (ii) \(f(x_0)\leqslant f(x)+\sqrt{\varepsilon }\Vert x-x_0\Vert \) for all \(x\in K\).
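The notions above can be made concrete in a simple one-dimensional instance (our own illustration, not from the paper): take \(f(x)=x\), \(K=[0,1]\) and \(\varepsilon =1\). For \(x\leqslant x_0\) one has \(x+\sqrt{\varepsilon }|x-x_0|=x_0\), so any \(x_0\in K\) is an \(\varepsilon \)-quasi-minimum; the grid check below verifies this for \(x_0=0.5\) together with the \(\varepsilon \)-minimum inequality, so \(x_0\) is a regular \(\varepsilon \)-minimum.

```python
import math

# Grid check (our illustration): f(x) = x on K = [0, 1], eps = 1, x0 = 0.5.
eps = 1.0
r = math.sqrt(eps)

def f(x):
    return x

x0 = 0.5
K = [k / 200 for k in range(201)]   # grid sample of [0, 1]

# eps-quasi-minimum:  f(x0) <= f(x) + sqrt(eps) * |x - x0| for all x in K
# (small tolerance added for floating-point round-off)
is_quasi = all(f(x0) <= f(x) + r * abs(x - x0) + 1e-12 for x in K)
# eps-minimum:        f(x0) <= f(x) + eps for all x in K
is_eps_min = all(f(x0) <= f(x) + eps for x in K)
# regular eps-minimum: both conditions at once
is_regular = is_quasi and is_eps_min
```

Note that \(x_0=0.5\) is far from the exact minimizer \(x=0\); this is precisely why approximate notions with \(\varepsilon >0\) are weaker than exact minimality.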
3 Optimality Conditions
In this section, we establish some necessary and sufficient optimality conditions of approximate solutions for nonsmooth semi-infinite programming problems.
To obtain the necessary optimality condition, we consider the following constraint qualification condition:
where \(B(x_0):=\{\lambda \in {\mathbb {R}}_+ ^{(T)}: \lambda _t f_t (x_0)=0,\;\forall \;t\in T\}\) and \(x_0\in K\).
First, we obtain the following necessary optimality condition for SIP.
Theorem 3.1
Let \(x_0\) be an \(\varepsilon \)-quasi-minimum for SIP and let the constraint qualification condition \((\textit{CQ})_{x_0}\) hold. Then, there exists \(\lambda \in {\mathbb {R}}_+ ^{(T)}\) such that
Proof
Since \(x_0\) is an \(\varepsilon \)-quasi-minimum for SIP, it is a minimum for the following optimization problem:
This fact together with the condition \((CQ)_{x_0}\) yields the conclusion. The proof is complete.
Remark 3.1
When the functions \(f_t\), \(t\in T\), are convex, the constraint qualification condition \((CQ)_{x_0}\) was considered by Dinh et al. [7]. They also gave a sufficient condition guaranteeing this condition.
We next formulate some sufficient conditions for an almost \(\varepsilon \)-quasi-minimum for SIP.
Theorem 3.2
Let \((x_0,\lambda )\in K_\varepsilon \times {\mathbb {R}}_+ ^{(T)}\) be such that
If f is \(\varepsilon \)-pseudoconvex of type I at \(x_0\) and \(f_t\), \(t\in T\) is quasiconvex at \(x_0\), then \(x_0\) is an almost \(\varepsilon \)-quasi-minimum for SIP.
Proof
Let \((x_0,\lambda )\in K_\varepsilon \times {\mathbb {R}}_+ ^{(T)}\) be such that (3.2) holds. Then, there exist \(u\in \partial _C f(x_0)\), \(v_t\in \partial _C f_t (x_0)\) for all \(t\in T\), \(w\in N(A;x_0)\) and \(b\in B^*\) such that \(f_t (x_0)\geqslant 0\) for all \(t\in T(\lambda )\) and
Since \(w\in N(A;x_0)\) and \(b\in B^*\), one has
This fact together with (3.3) yields
It follows that
or equivalently,
Note that \(f_t(x_0)\geqslant 0\) for all \(t\in T(\lambda )\) and \(f_t(x)\leqslant 0\) for all \(t\in T\), \(x\in K\). It follows that \(f_t(x)\leqslant f_t(x_0)\) for all \(x\in K\) and \(t\in T(\lambda )\). Since \(f_t\) is quasiconvex at \(x_0\) and \(v_t\in \partial _C f_t (x_0)\) for any \(t\in T\),
Combining (3.4) and (3.5) yields
As f is \(\varepsilon \)-pseudoconvex of type I at \(x_0\),
Therefore, \(x_0\) is an almost \(\varepsilon \)-quasi-minimum for SIP. The proof is complete.
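To see the theorem at work, consider the following sketch of a toy instance (our own construction, not Example 3.1 of the paper; the data \(f(x)=|x|\), \(f_t(x)=tx\) for \(t\in T=[0,1]\), \(A={\mathbb {R}}\), \(\varepsilon =0.25\), \(x_0=0\) and the multiplier \(\lambda \) with \(\lambda _1=1\), \(\lambda _t=0\) otherwise are assumptions made here for illustration). At \(x_0=0\), f is \(\varepsilon \)-pseudoconvex of type I and each \(f_t\) is quasiconvex, a multiplier identity of the form used in the proof holds, and the grid check confirms the conclusion that \(x_0\) is an almost \(\varepsilon \)-quasi-minimum.

```python
import math

# Toy instance (our illustration): f(x) = |x|, f_t(x) = t*x for t in T = [0, 1],
# A = R, eps = 0.25, x0 = 0.  Here K = (-inf, 0] and N(A; 0) = {0}.
eps = 0.25
r = math.sqrt(eps)
f = abs
x0 = 0.0

# A multiplier identity of the form used in the proof:
#   u + lambda_1 * v_1 + sqrt(eps) * b + w = 0,
# with u in partial_C f(0) = [-1, 1], v_1 in partial_C f_1(0) = {1},
# b in B* = [-1, 1], w in N(A; 0) = {0}.
u, lam1, v1, b, w = -1.0, 1.0, 1.0, 0.0, 0.0
identity_holds = (u + lam1 * v1 + r * b + w == 0.0)

# Conclusion of Theorem 3.2: x0 is an almost eps-quasi-minimum, i.e.
# x0 in K_eps and f(x0) <= f(x) + sqrt(eps) * |x - x0| for all x in K.
in_K_eps = all(t * x0 <= r for t in (0.0, 0.5, 1.0))
K = [-2.0 + k / 100 for k in range(201)]   # grid sample of [-2, 0] inside K
conclusion = all(f(x0) <= f(x) + r * abs(x - x0) + 1e-12 for x in K)
```

As in the paper's setting, no regularity of f is used anywhere: \(f(x)=|x|\) is regular, but the check relies only on the type I condition and quasiconvexity.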
Remark 3.2
Theorem 3.2 generalizes Theorem 4.3 of Son et al. [22], one of the main results of [22], from the following two aspects: (i) the convexity of \(f_t\), \(t\in T\) is relaxed to the quasiconvexity at \(x_0\); (ii) the \(\varepsilon \)-semiconvexity of f is relaxed to the \(\varepsilon \)-pseudoconvexity of type I at \(x_0\). It is worth mentioning that the regularity of f plays an important role in the proof of Theorem 4.3 in [22]. However, our proof does not require any regularity conditions.
Since an \(\varepsilon \)-semiconvex function is \(\varepsilon \)-pseudoconvex of type I (see Remark 2.2), we have the following corollary.
Corollary 3.1
Let \((x_0,\lambda )\in K_\varepsilon \times {\mathbb {R}}_+ ^{(T)}\) be such that (3.2) holds. If f is \(\varepsilon \)-semiconvex at \(x_0\) and \(f_t\), \(t\in T\) is quasiconvex at \(x_0\), then \(x_0\) is an almost \(\varepsilon \)-quasi-minimum for SIP.
Since a convex function is quasiconvex, we have the following corollary.
Corollary 3.2
[22, Theorem 4.3] Let \((x_0,\lambda )\in K_\varepsilon \times {\mathbb {R}}_+ ^{(T)}\) be such that (3.2) holds. If f is \(\varepsilon \)-semiconvex at \(x_0\) and \(f_t\), \(t\in T\) is convex at \(x_0\), then \(x_0\) is an almost \(\varepsilon \)-quasi-minimum for SIP.
The following example illustrates that Theorem 3.2 holds but Corollary 3.2 is not applicable.
Example 3.1
Consider the following problem:
where
and \(f_t (x):=tx\), for \(x\in {\mathbb {R}}\) and \(t\in T\). By simple calculations,
Let \(\varepsilon =1\). It is easy to check that f is \(\varepsilon \)-pseudoconvex of type I at \(x_0=0\), each \(f_t\), \(t\in T\), is quasiconvex at \(x_0=0\), and \(K=[-1,0]\). Moreover, \(\partial _C f (0)=[0,1]\), \(\partial _C f_t (0)=\{t\}\) for all \(t\in T\), and \(N(A;0)=\{0\}\). Let \(\lambda \) satisfy \(\lambda _0=1\) and \(\lambda _t=0\) for all \(t\in T\backslash \{0\}\). We can check that the optimality condition (3.2) corresponding to \((0,\lambda )\) holds. In this case, we obtain \(T(\lambda )=\{0\}\). By Theorem 3.2, \(x_0=0\) is an almost \(\varepsilon \)-quasi-minimum for SIP. However, Theorem 4.3 of Son et al. [22] is not applicable since f is not \(\varepsilon \)-semiconvex at \(x_0=0\). We also point out that \(x_0=0\) is not an exact minimum for this problem.
In the following theorem, we give another sufficient optimality condition of an almost \(\varepsilon \)-quasi-minimum for SIP.
Theorem 3.3
Let \((x_0,\lambda )\in K_\varepsilon \times {\mathbb {R}}_+ ^{(T)}\) be such that (3.2) holds. If f is \(\varepsilon \)-pseudoconvex of type II at \(x_0\) and each \(f_t\), \(t\in T\), is \(\varepsilon \)-quasiconvex at \(x_0\), then \(x_0\) is an almost \(\varepsilon \)-quasi-minimum for SIP.
Proof
Similarly to the proof of Theorem 3.2, there exist \(u\in \partial _C f(x_0)\), \(v_t\in \partial _C f_t (x_0)\), \(\forall \;t\in T\), \(w\in N(A;x_0)\) and \(b\in B^*\) such that \(f_t (x_0)\geqslant 0\) for all \(t\in T(\lambda )\) and
It is easy to see that \(f_t(x)\leqslant f_t(x_0)\) for all \(x\in K\) and \(t\in T(\lambda )\). By the \(\varepsilon \)-quasiconvexity of \(f_t\) at \(x_0\),
It follows that
This fact together with (3.6) yields
Since f is \(\varepsilon \)-pseudoconvex of type II at \(x_0\),
Therefore, the conclusion holds. The proof is complete.
Remark 3.3
When \(f_t (x_0)\geqslant 0\) is replaced by \(f_t (x_0)=0\) in (3.2), Theorems 3.2 and 3.3 still hold. Moreover, when X is a finite-dimensional space, the results obtained in this paper also hold.
References
Goberna, M.A., López, M.A.: Linear Semi-Infinite Optimization. Wiley, Chichester (1998)
Reemtsen, R., Rückmann, J.-J. (eds.): Semi-Infinite Programming. Kluwer, Boston (1998)
Stein, O.: How to solve a semi-infinite optimization problem. Eur. J. Oper. Res. 223, 312–320 (2012)
Cánovas, M.J., Kruger, A.Y., López, M.A., Parra, J., Théra, M.A.: Calmness modulus of linear semi-infinite programs. SIAM J. Optim. 24, 29–48 (2014)
Chuong, T.D., Huy, N.Q., Yao, J.C.: Subdifferentials of marginal functions in semi-infinite programming. SIAM J. Optim. 20, 1462–1477 (2009)
Dinh, N., Goberna, M.A., López, M.A., Son, T.Q.: New Farkas-type constraint qualifications in convex infinite programming. ESAIM Control Optim. Calc. Var. 13, 580–597 (2007)
Dinh, N., Mordukhovich, B.S., Nghia, T.T.A.: Subdifferentials of value functions and optimality conditions for DC and bilevel infinite and semi-infinite programs. Math. Program. 123, 101–138 (2010)
Huy, N.Q., Yao, J.C.: Semi-infinite optimization under convex function perturbations: Lipschitz stability. J. Optim. Theory Appl. 128, 237–256 (2011)
Kanzi, N.: Constraint qualifications in semi-infinite systems and their applications in nonsmooth semi-infinite programs with mixed constraints. SIAM J. Optim. 24, 559–572 (2014)
Kim, D.S., Son, T.Q.: Characterizations of solutions sets of a class of nonconvex semi-infinite programming problems. J. Nonlinear Convex Anal. 12, 429–440 (2011)
Li, C., Zhao, X.P., Hu, Y.H.: Quasi-slater and Farkas–Minkowski qualifications for semi-infinite programming with applications. SIAM J. Optim. 23, 2208–2230 (2013)
Long, X.J., Peng, Z.Y., Wang, X.F.: Characterizations of the solution set for nonconvex semi-infinite programming problems. J. Nonlinear Convex Anal. 17, 251–265 (2016)
Mishra, S.K., Jaiswal, M., Le Thi, H.A.: Nonsmooth semi-infinite programming problem using limiting subdifferentials. J. Glob. Optim. 53, 285–296 (2012)
Mordukhovich, B.S., Nghia, T.T.A.: Subdifferentials of nonconvex supremum functions and their applications to semi-infinite and infinite programs with Lipschitzian data. SIAM J. Optim. 23, 406–431 (2013)
Son, T.Q., Kim, D.S.: A new approach to characterize the solution set of a pseudoconvex programming problem. J. Comput. Appl. Math. 261, 333–340 (2014)
Kutateladze, S.S.: Convex \(\varepsilon \)-programming. Sov. Math. Dokl. 20, 391–393 (1979)
Durea, M., Dutta, J., Tammer, C.: Lagrange multipliers for \(\varepsilon \)-Pareto solutions in vector optimization with nonsolid cones in Banach spaces. J. Optim. Theory Appl. 145, 196–211 (2010)
Gao, Y., Hou, S.H., Yang, X.M.: Existence and optimality conditions for approximate solutions to vector optimization problems. J. Optim. Theory Appl. 152, 97–120 (2012)
Long, X.J., Li, X.B., Zeng, J.: Lagrangian conditions for approximate solutions on nonconvex set-valued optimization problems. Optim. Lett. 7, 1847–1856 (2013)
Loridan, P.: Necessary conditions for \(\varepsilon \)-optimality. Math. Program. Study 19, 140–152 (1982)
Strodiot, J.J., Nguyen, V.H., Heukemes, N.: \(\varepsilon \)-Optimal solutions in nondifferentiable convex programming and some related questions. Math. Program. 25, 307–328 (1983)
Son, T.Q., Strodiot, J.J., Nguyen, V.H.: \(\varepsilon \)-Optimality and \(\varepsilon \)-Lagrangian duality for a nonconvex problem with an infinite number of constraints. J. Optim. Theory Appl. 141, 389–409 (2009)
Clarke, F.H.: Optimization and Nonsmooth Analysis. Wiley, New York (1983)
Mifflin, R.: Semismooth and semiconvex functions in constrained optimization. SIAM J. Control Optim. 15, 959–972 (1977)
Gupta, A., Mehra, A., Bhatia, D.: Approximate convexity in vector optimisation. Bull. Aust. Math. Soc. 74, 207–218 (2006)
Bhatia, D., Gupta, A., Arora, P.: Optimality via generalized approximate convexity and quasiefficiency. Optim. Lett. 7, 127–135 (2013)
Additional information
This work was partially supported by the National Natural Science Foundation of China (Nos. 11471059 and 11671282), the Chongqing Research Program of Basic Research and Frontier Technology (Nos. cstc2014jcyjA00037, cstc2015jcyjB00001 and cstc2014jcyjA00033), the Education Committee Project Research Foundation of Chongqing (Nos. KJ1400618 and KJ1400630), the Program for University Innovation Team of Chongqing (No. CXTDX201601026) and the Education Committee Project Foundation of Bayu Scholar.
Cite this article
Long, XJ., Xiao, YB. & Huang, NJ. Optimality Conditions of Approximate Solutions for Nonsmooth Semi-infinite Programming Problems. J. Oper. Res. Soc. China 6, 289–299 (2018). https://doi.org/10.1007/s40305-017-0167-1
Keywords
- Nonsmooth semi-infinite programming problem
- Optimality condition
- Approximate solution
- Generalized pseudoconvexity