Abstract
By using the generalized Fermat rule, the Mordukhovich subdifferential for maximum functions, the fuzzy sum rule for Fréchet subdifferentials and the sum rule for Mordukhovich subdifferentials, we establish a necessary optimality condition for the local weak sharp efficient solution of a constrained multiobjective optimization problem. Moreover, by employing the approximate projection theorem and some appropriate convexity and affineness conditions, we also obtain sufficient optimality conditions for the local and global weak sharp efficient solutions of such a multiobjective optimization problem, respectively.
1 Introduction
In this paper, we consider the following multiobjective optimization problem with equality and inequality constraints:
\(\mathrm {(MOP)}\qquad \min \; f(x)\quad \mathrm {s.t.}\quad g_i(x)\le 0,\ i\in \mathcal {I},\quad h_j(x)=0,\ j\in \mathcal {J},\quad x\in \Omega ,\)
where \(f:\mathbb {R}^n\rightarrow \mathbb {R}^m\) with \(f(x)=(f_1(x),f_2(x),\ldots ,f_m(x))\) is a vector-valued function, \(f_k, k\in \mathcal {K}\), \(g_i, i\in \mathcal {I}\) and \(h_j, j\in \mathcal {J}\) are locally Lipschitz functions defined on \(\mathbb {R}^n\), \(\mathcal {K}=\{1,2,\ldots ,m\}\), \(\mathcal {I}=\{1,2,\ldots ,p\}\) and \(\mathcal {J}=\{1,2,\ldots ,q\}\) are index sets, and \(\Omega \) is a nonempty locally closed subset of \(\mathbb {R}^n\). For simplicity, we denote the feasible set of (MOP) by \(S:=\{x\in \mathbb {R}^n\mid g_i(x)\le 0,\ i\in \mathcal {I},\ h_j(x)=0,\ j\in \mathcal {J},\ x\in \Omega \}\). Let \(\mathbb {R}^m_+\) and \(0_{\mathbb {R}^m}\) denote respectively the nonnegative orthant and the origin of \(\mathbb {R}^m\). Recall that a point \(\hat{x}\in S\) is said to be a local efficient solution for (MOP) iff there exists a neighborhood U of \(\hat{x}\) such that
\(\big (f(\hat{x})-(\mathbb {R}^m_+\setminus \{0_{\mathbb {R}^m}\})\big )\cap f(S\cap U)=\varnothing ,\)
that is, there is no \(x\in S\cap U\) with \(f_k(x)\le f_k(\hat{x})\) for all \(k\in \mathcal {K}\) and \(f(x)\ne f(\hat{x})\).
Recall from [1–4] that a point \(\hat{x}\in S\) is said to be a local sharp efficient solution for (MOP) iff there exist a neighborhood U of \(\hat{x}\) and a real number \(\eta >0\) such that
\(\max _{1\le k\le m}\{f_k(x)-f_k(\hat{x})\}\ge \eta \Vert x-\hat{x}\Vert ,\quad \forall x\in S\cap U.\)
It is obvious that every local sharp efficient solution must also be a local efficient solution. In particular, when \(m=1\), that is, when (MOP) has only one objective, the local sharp efficient solution reduces to the standard local sharp minimum for scalar optimization problems. We refer to [5–7] for more details.
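To make the distinction concrete in the scalar case, consider the illustrative functions \(|x|\) and \(\max \{0,|x|-1\}\) (our own examples, not taken from the references): the first has a sharp minimum at 0, while the second attains its minimum on the whole interval \([-1,1]\), so its minimizers are weak sharp but not sharp. A small numerical sketch:

```python
# Sharp vs. weak sharp minima in the scalar case (m = 1).
# f_sharp(x) = |x| satisfies f(x) - f(0) >= eta*|x - 0| with eta = 1,
# so 0 is an isolated (sharp) minimizer.
# f_weak(x) = max(0, |x| - 1) is minimized on the whole set
# S_tilde = [-1, 1]: the sharp inequality fails at every minimizer,
# but f(x) - min f >= eta * dist(x, S_tilde) holds with eta = 1.

def f_sharp(x):
    return abs(x)

def f_weak(x):
    return max(0.0, abs(x) - 1.0)

def dist_to_interval(x, a, b):
    # distance from x to the interval [a, b]
    return max(a - x, 0.0, x - b)

eta = 1.0
grid = [i / 100.0 for i in range(-300, 301)]

# sharp minimum at 0: f(x) - f(0) >= eta * |x|
assert all(f_sharp(x) - f_sharp(0.0) >= eta * abs(x) - 1e-12 for x in grid)

# weak sharp minima on [-1, 1]: f(x) - 0 >= eta * d(x, [-1, 1])
assert all(f_weak(x) >= eta * dist_to_interval(x, -1.0, 1.0) - 1e-12
           for x in grid)

# the *sharp* inequality fails at the non-isolated minimizer x_hat = 1:
# for x = 0.5, f_weak(x) - f_weak(1) = 0 < eta * |x - 1|.
assert f_weak(0.5) - f_weak(1.0) < eta * abs(0.5 - 1.0)
print("sharp and weak sharp inequalities verified on the grid")
```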
In the past few decades, multiobjective optimization, or more generally vector optimization, which is devoted to making an optimal decision with more than one objective, has received extensive attention; see [8–11] and the references therein. Recently, the characterization of various efficient solutions for multiobjective optimization problems by means of the advanced tools of variational analysis and generalized differentiation has attracted great interest in academic and professional communities. Bao and Mordukhovich [12] studied the super efficiency for (MOP) introduced by Borwein and Zhuang [13] and derived verifiable necessary optimality conditions by using modern variational principles and techniques. Ginchev et al. [1] proposed the notion of local sharp efficient solution (equivalently, strict local minimizer or isolated minimizer) for (MOP) and established optimality conditions by virtue of the Dini derivative. Subsequently, by means of the Mordukhovich generalized differentiation and the normal cone, Chuong [3] obtained a nonsmooth version of the Fermat rule for the local sharp efficient solution of (MOP). For more details, we refer to [4, 14].
As is well known, the local sharp efficient solution is closely related to convergence and stability analysis in mathematical programming; see [15–18] and the references therein. However, every local sharp solution is isolated in the solution set. To overcome this difficulty, Burke and Ferris [19] proposed the concept of weak sharp minima for scalar optimization problems, which allows the solutions not to be isolated. Subsequently, many papers were devoted to characterizing weak sharp minima and studying their applications in other standard optimization topics, such as variational inequalities, error bounds and algorithm analysis; see [6, 7, 20, 21] and the references therein. Inspired by the aforementioned ideas, we focus on the following weaker notion of efficiency, called weak sharp efficiency, for (MOP).
Definition 1.1
A point \(\hat{x}\in S\) is said to be a local weak sharp efficient solution for \(\mathrm {(MOP)}\) iff there exist a neighborhood U of \(\hat{x}\) and a real number \(\eta >0\) such that
\(\max _{1\le k\le m}\{f_k(x)-f_k(\hat{x})\}\ge \eta \,d(x,\widetilde{S}),\quad \forall x\in S\cap U,\)
where \(\widetilde{S}:=\{x\in S\mid f(x)=f(\hat{x})\}=S\cap f^{-1}(f(\hat{x}))\). In particular, if \(U=\mathbb {R}^n\), then \(\hat{x}\) is said to be a global weak sharp efficient solution for \(\mathrm {(MOP)}\).
Comparing the local weak sharp efficient solution of Definition 1.1 with the local sharp efficient solution defined in [1–4], it is easy to verify that a local sharp efficient solution \(\hat{x}\in S\) is isolated in the solution set \(\widetilde{S}\), whereas a local weak sharp efficient solution \(\hat{x}\in S\) need not be isolated in \(\widetilde{S}\). Conversely, if \(\hat{x}\in S\) is a local weak sharp efficient solution for (MOP) and \(\hat{x}\) is isolated in \(\widetilde{S}\), then it is also a local sharp efficient solution. Thus, weak sharp efficiency not only removes the isolation requirement of sharp efficiency, which is restrictive in real applications, but also provides a more general and more easily accessible framework for our analysis. Moreover, characterizations of weak sharp efficiency for (MOP), similar to the scalar case, are very important and useful for studying the metric subregularity property and growth condition for vector-valued functions, as well as the stability analysis of the solution mapping of parametric vector optimization problems. For more details, we refer to [21–24] and the references therein.
The main purpose of the paper is to establish some generalized nonsmooth Fermat rules for weak sharp efficient solutions of (MOP) in terms of the advanced tools of variational analysis and generalized differentiation. We first establish a necessary optimality condition for the local weak sharp efficient solution of (MOP) by means of a nonsmooth version of the generalized Fermat rule, the Mordukhovich subdifferential for maximum functions, the fuzzy sum rule for Fréchet subdifferentials and the sum rule for Mordukhovich subdifferentials. Simultaneously, by using the isolation of sharp efficient solutions, we immediately obtain a necessary condition for the sharp efficient solution corresponding to [3, Theorem 3.5]. Moreover, we also obtain sufficient optimality conditions for the local and global weak sharp efficient solutions of (MOP), respectively, by employing the approximate projection theorem and some appropriate convexity and affineness conditions. Our results on the weak sharp efficiency for (MOP) are more general, and the methods used in this paper are different from those in [1–4].
The rest of the paper is organized as follows. In Sect. 2, we recall some basic concepts, properties and tools generally used in variational analysis. Section 3 contains the main results, including a nonsmooth Fermat rule for the local weak sharp efficient solution and some sufficient optimality conditions for the local and global weak sharp efficient solutions, respectively, under appropriate convexity and affineness assumptions.
2 Preliminaries
The main tools for our study in this paper are the Mordukhovich generalized differentiation notions which are generally used in variational analysis; see more details in [6, 25]. Throughout the paper, all spaces under consideration are finite-dimensional spaces denoted by \(\mathbb {R}^n, n\in \mathbb {N}\), and the inner product and the norm of \(\mathbb {R}^n\) are denoted respectively by \(\langle \bullet ,\bullet \rangle \) and \(\Vert \bullet \Vert \). In general, we denote by \(\mathcal {B}_{\mathbb {R}^n}\) the closed unit ball in \(\mathbb {R}^n\), and by \(\mathbb {B}(\hat{x},\delta )\) the open ball with center at \(\hat{x}\) and radius \(\delta >0\) for any \(\hat{x}\in \mathbb {R}^n\). For a nonempty subset \(S\subset \mathbb {R}^n\), we denote by \(\mathrm {cl}\,S\), \(\mathrm {bd}\,S\) and \(\mathrm {cone}\,S\) the closure, boundary and cone hull of S, respectively. As usual, the distance function \(d(\bullet ,S):\mathbb {R}^n\rightarrow \mathbb {R}\), the indicator function \(\psi (\bullet ,S):\mathbb {R}^n\rightarrow \mathbb {R}\cup \{+\infty \}\) and the support function \(\psi ^*(\bullet ,S):\mathbb {R}^n\rightarrow \mathbb {R}\cup \{+\infty \}\) of S are respectively defined by
\(d(x,S):=\inf _{y\in S}\Vert x-y\Vert ,\qquad \psi (x,S):=\begin{cases}0,&x\in S,\\ +\infty ,&x\notin S,\end{cases}\)
and
\(\psi ^*(x,S):=\sup _{y\in S}\langle x,y\rangle .\)
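As an illustrative sanity check (the box \(S=[-1,1]^2\) and the test points are our own choices, not from the paper), the three functions above can be evaluated directly for a simple convex set:

```python
# d(x, S), psi(x, S) and psi*(x, S) for the box S = [-1, 1] x [-1, 1],
# matching the definitions above.
import math

def box_proj(x):
    """Euclidean projection onto the box [-1, 1]^2 (componentwise clamp)."""
    return (min(max(x[0], -1.0), 1.0), min(max(x[1], -1.0), 1.0))

def d(x, S_proj):
    """Distance to S computed via a projection oracle."""
    p = S_proj(x)
    return math.hypot(x[0] - p[0], x[1] - p[1])

def psi(x):
    """Indicator function of the box."""
    return 0.0 if abs(x[0]) <= 1.0 and abs(x[1]) <= 1.0 else math.inf

def psi_star(x):
    """Support function: the sup over the box is attained at a vertex."""
    return abs(x[0]) + abs(x[1])

assert d((3.0, 0.0), box_proj) == 2.0     # projection is (1, 0)
assert psi((0.5, -0.5)) == 0.0 and psi((2.0, 0.0)) == math.inf
assert psi_star((2.0, -3.0)) == 5.0       # <(2,-3), (1,-1)> = 5
print("distance, indicator and support function values check out")
```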
Given a point \(\hat{x}\in S\), the set S is called closed around \(\hat{x}\) iff there is a neighborhood U of \(\hat{x}\) such that \(S\cap U\) is closed. Moreover, if S is closed around every \(\hat{x}\in S\), then S is said to be locally closed. Let S be closed around \(\hat{x}\). Recall that the contingent cone \(T(S,\hat{x})\) of S at \(\hat{x}\) is \(T(S,\hat{x}):=\{v\in \mathbb {R}^n\mid \exists v_n\rightarrow v, \exists t_n\downarrow 0\;\mathrm {s.t.}\;\hat{x}+t_nv_n\in S,\,\forall n\in \mathbb {N}\}\), and the Fréchet normal cone \(\widehat{N}(S,\hat{x})\) of S at \(\hat{x}\), which is a convex and closed subset of \(\mathbb {R}^n\) consisting of all the Fréchet normals, has the form
\(\widehat{N}(S,\hat{x}):=\Big \{x^*\in \mathbb {R}^n\;\Big |\;\limsup _{x\xrightarrow []{S}\hat{x}}\frac{\langle x^*,x-\hat{x}\rangle }{\Vert x-\hat{x}\Vert }\le 0\Big \},\)
where \(x\xrightarrow []{S}\hat{x}\) means \(x\rightarrow \hat{x}\) and \(x\in S\). In particular, if \(\hat{x}\notin S\), we set \(\widehat{N}(S,\hat{x})=\varnothing \). The Mordukhovich (or basic, limiting) normal cone \(N(S,\hat{x})\) of S at \(\hat{x}\) is defined by
\(N(S,\hat{x}):=\big \{x^*\in \mathbb {R}^n\;\big |\;\exists \,x_n\xrightarrow []{S}\hat{x},\ \exists \,x^*_n\in \widehat{N}(S,x_n)\ \mathrm {with}\ x^*_n\rightarrow x^*\big \}.\)
In particular, if S is convex, then it follows that \(T(S,\hat{x})=\mathrm {cl}\,\mathrm {cone}\,(S-\hat{x})\) and
\(N(S,\hat{x})=\widehat{N}(S,\hat{x})=\{x^*\in \mathbb {R}^n\mid \langle x^*,x-\hat{x}\rangle \le 0,\ \forall x\in S\}.\)
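For instance, for the box \(S=[-1,1]^2\) at the corner \((1,1)\), the convex normal cone is the nonnegative quadrant; the defining inequality can be tested by sampling S (an illustrative sketch, not part of the paper's development):

```python
# For convex S, N(S, x_hat) = { x* : <x*, x - x_hat> <= 0 for all x in S }.
# At the corner x_hat = (1, 1) of the box S = [-1, 1]^2 the normal cone
# is the nonnegative quadrant; we test membership by sampling S.
import itertools

def in_normal_cone(x_star, x_hat, samples):
    """Check <x*, s - x_hat> <= 0 over a finite sample of S."""
    return all(
        x_star[0] * (s[0] - x_hat[0]) + x_star[1] * (s[1] - x_hat[1]) <= 1e-12
        for s in samples
    )

samples = [(-1 + i / 10.0, -1 + j / 10.0)
           for i, j in itertools.product(range(21), repeat=2)]
x_hat = (1.0, 1.0)

assert in_normal_cone((1.0, 2.0), x_hat, samples)       # in R^2_+
assert not in_normal_cone((-1.0, 0.5), x_hat, samples)  # points into S
print("normal cone membership at the corner verified by sampling")
```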
Let \(h:\mathbb {R}^n\rightarrow \mathbb {R}\cup \{+\infty \}\) be an extended real-valued function and let \(\hat{x}\in \mathrm {dom}\;h\), where \(\mathrm {dom}\;h:=\{x\in \mathbb {R}^n\mid h(x)<+\infty \}\) denotes the domain of h. The analytic \(\varepsilon \)-subdifferential \(\widehat{\partial }_{a\varepsilon }h(\hat{x})\) of h at \(\hat{x}\) is defined by
\(\widehat{\partial }_{a\varepsilon }h(\hat{x}):=\Big \{x^*\in \mathbb {R}^n\;\Big |\;\liminf _{x\rightarrow \hat{x}}\frac{h(x)-h(\hat{x})-\langle x^*,x-\hat{x}\rangle }{\Vert x-\hat{x}\Vert }\ge -\varepsilon \Big \}.\)
In particular, when \(\varepsilon =0\), the analytic \(\varepsilon \)-subdifferential \(\widehat{\partial }_{a\varepsilon }h(\hat{x})\) of h at \(\hat{x}\) reduces to the Fréchet subdifferential, denoted by \(\widehat{\partial }h(\hat{x})\). The Mordukhovich (or basic, limiting) subdifferential \(\partial h(\hat{x})\) of h at \(\hat{x}\) is defined by
\(\partial h(\hat{x}):=\big \{x^*\in \mathbb {R}^n\;\big |\;\exists \,x_n\xrightarrow []{h}\hat{x},\ \exists \,x^*_n\in \widehat{\partial }h(x_n)\ \mathrm {with}\ x^*_n\rightarrow x^*\big \},\)
where \(x_n\xrightarrow []{h}\hat{x}\) means \(x_n\rightarrow \hat{x}\) and \(h(x_n)\rightarrow h(\hat{x})\). If \(\hat{x}\notin \mathrm {dom}\,h\), then we set \(\widehat{\partial } h(\hat{x})=\partial h(\hat{x})=\varnothing \). Obviously, \(\widehat{\partial } h(\hat{x})\subset \partial h(\hat{x})\) for all \(\hat{x}\in \mathbb {R}^n\). In particular, we get \(\widehat{\partial }\psi (\hat{x},S)=\widehat{N}(S,\hat{x})\) and \(\partial \psi (\hat{x},S)=N(S,\hat{x})\). Simultaneously, \(\widehat{\partial }d(\hat{x}, S)=\mathcal {B}_{\mathbb {R}^n}\cap \widehat{N}(S,\hat{x})\) and \(\partial d(\hat{x}, S)\subset \mathcal {B}_{\mathbb {R}^n}\cap N(S,\hat{x})\). Furthermore, if h is a convex function, then we have
\(\widehat{\partial }h(\hat{x})=\partial h(\hat{x})=\{x^*\in \mathbb {R}^n\mid h(x)\ge h(\hat{x})+\langle x^*,x-\hat{x}\rangle ,\ \forall x\in \mathbb {R}^n\}.\)
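The convex formula is just the subgradient inequality, which can be checked numerically; for example, it recovers \(\partial |\bullet |(0)=[-1,1]\) (the grid and the tolerance are our own choices):

```python
# For convex h, x* lies in the subdifferential of h at x_hat iff
#     h(x) >= h(x_hat) + <x*, x - x_hat>   for all x.
# For h(x) = |x| at x_hat = 0 this yields exactly the interval [-1, 1].

def is_subgradient(x_star, h, x_hat, grid):
    """Test the subgradient inequality on a grid of points."""
    return all(h(x) >= h(x_hat) + x_star * (x - x_hat) - 1e-12 for x in grid)

h = abs
grid = [i / 50.0 for i in range(-200, 201)]

assert is_subgradient(1.0, h, 0.0, grid)
assert is_subgradient(-0.3, h, 0.0, grid)
assert not is_subgradient(1.5, h, 0.0, grid)  # fails at x = 1: 1 < 1.5
print("subdifferential of |.| at 0 is [-1, 1], as the convex formula predicts")
```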
Next, we recall some definitions and propositions that will be useful in the sequel. First of all, the following necessary optimality condition, known as the generalized Fermat rule, for a function to attain a local minimum plays a key role in our analysis.
Theorem 2.1
[6, 25] Let \(h:\mathbb {R}^n\rightarrow \mathbb {R}\cup \{+\infty \}\) be a proper lower semicontinuous function. If h attains a local minimum at \(\hat{x}\in \mathbb {R}^n\), then \(0_{\mathbb {R}^n}\in \widehat{\partial }h(\hat{x})\), which implies \(0_{\mathbb {R}^n}\in \partial h(\hat{x})\).
We recall the following fuzzy sum rule for the Fréchet subdifferential and the sum rule for the Mordukhovich subdifferential, which are important in the sequel.
Theorem 2.2
[6, 25] Let \(f,h:\mathbb {R}^n\rightarrow \mathbb {R}\cup \{+\infty \}\) be proper lower semicontinuous around \(\hat{x}\in \mathrm {dom}\,f\cap \mathrm {dom}\,h\). If f is Lipschitz continuous around \(\hat{x}\), then
(i) for every \(x^*\in \widehat{\partial }(f+h)(\hat{x})\) and every \(\epsilon >0\), there exist \(x_1,x_2\in \mathbb {B}(\hat{x},\epsilon )\) such that \(|f(x_1)-f(\hat{x})|<\epsilon \), \(|h(x_2)-h(\hat{x})|<\epsilon \) and \(x^*\in \widehat{\partial }f(x_1)+\widehat{\partial }h(x_2)+\epsilon \mathcal {B}_{\mathbb {R}^n}\);

(ii) \(\partial (f+h)(\hat{x})\subset \partial f(\hat{x})+\partial h(\hat{x})\).
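In the convex case, where the Mordukhovich subdifferential coincides with the convex one, the inclusion in (ii) can be observed numerically; a sketch with the illustrative functions \(|x|\) and \(\max \{0,x\}\) (our own choices, with subgradients found by grid search):

```python
# Sum rule illustrated for convex f(x) = |x| and h(x) = max(0, x) at 0:
# subdiff(f+h)(0) = [-1, 2] is contained in
# subdiff f(0) + subdiff h(0) = [-1, 1] + [0, 1] = [-1, 2].

def subgradients(h, x_hat, grid, candidates):
    """Subgradients of a convex h at x_hat among candidate slopes,
    found by testing the subgradient inequality on a grid."""
    return [s for s in candidates
            if all(h(x) >= h(x_hat) + s * (x - x_hat) - 1e-12 for x in grid)]

grid = [i / 50.0 for i in range(-200, 201)]
cands = [i / 10.0 for i in range(-30, 31)]

f = abs
h = lambda x: max(0.0, x)
fh = lambda x: abs(x) + max(0.0, x)

sub_f = subgradients(f, 0.0, grid, cands)    # slopes in [-1, 1]
sub_h = subgradients(h, 0.0, grid, cands)    # slopes in [0, 1]
sub_fh = subgradients(fh, 0.0, grid, cands)  # slopes in [-1, 2]

# every subgradient of f + h splits as a sum of subgradients of f and h
sums = {round(a + b, 10) for a in sub_f for b in sub_h}
assert all(round(s, 10) in sums for s in sub_fh)
print("sum rule inclusion verified:", min(sub_fh), "to", max(sub_fh))
```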
Finally, we recall the following approximate projection theorem for the convex case, which will be important in the sequel.
Theorem 2.3
[26] Assume that \(S\subset \mathbb {R}^n\) is a closed, convex and nonempty subset. Let \(\theta \in (0,1)\). Then, for every \(x\notin S\), there exist \(\tilde{x}\in \mathrm {bd}\;S\) and \(x^*\in N(S,\tilde{x})\) with \(\Vert x^*\Vert =1\) such that
\(\theta \Vert x-\tilde{x}\Vert <\langle x^*,x-\tilde{x}\rangle .\)
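In \(\mathbb {R}^n\) the exact projection onto a closed convex set always exists, so the conclusion of Theorem 2.3 can be illustrated directly; a sketch for the closed unit disk (the set and the test point are our own choices):

```python
# Approximate projection onto the closed unit disk S in R^2: for x
# outside S, the exact projection x_tilde = x/||x|| lies on bd S, and the
# unit normal x* = (x - x_tilde)/||x - x_tilde|| satisfies
#     theta * ||x - x_tilde|| < <x*, x - x_tilde>   for every theta in (0,1).
import math

def project_disk(x):
    n = math.hypot(*x)
    return (x[0] / n, x[1] / n) if n > 1.0 else x

x = (3.0, 4.0)                       # outside the disk, ||x|| = 5
xt = project_disk(x)                 # = (0.6, 0.8), on the boundary
gap = (x[0] - xt[0], x[1] - xt[1])
dist = math.hypot(*gap)              # about 4
x_star = (gap[0] / dist, gap[1] / dist)   # unit normal at x_tilde

inner = x_star[0] * gap[0] + x_star[1] * gap[1]   # about ||x - x_tilde||
theta = 0.99
assert abs(math.hypot(*x_star) - 1.0) < 1e-12
assert theta * dist < inner
print("approximate projection inequality holds with theta =", theta)
```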
3 Main results
In this section, we focus our attention on establishing some optimality conditions for the local (global) weak sharp efficiency in multiobjective optimization problems in terms of the advanced tools of variational analysis and generalized differentiation. Concretely, by using the generalized Fermat rule, the Mordukhovich subdifferential for maximum functions, the fuzzy sum rule for Fréchet subdifferentials and the sum rule for Mordukhovich subdifferentials, we first establish a necessary condition for the local weak sharp efficient solution of (MOP). Moreover, by employing the approximate projection theorem and some appropriate convexity and affineness conditions, we also obtain sufficient conditions for the local and global weak sharp efficient solutions of (MOP), respectively.
Given arbitrary \(\tilde{x}\in \Omega \), we set
\(\mathcal {I}(\tilde{x}):=\{i\in \mathcal {I}\mid g_i(\tilde{x})=0\}\quad \mathrm {and}\quad \mathcal {J}(\tilde{x}):=\{j\in \mathcal {J}\mid h_j(\tilde{x})=0\}.\)
Definition 3.1
[3] Given arbitrary \(\tilde{x}\in \Omega \), the constraint qualification \(\mathrm {(CQ)}\) is said to be satisfied at \(\tilde{x}\) iff there do not exist \(\mu _i\ge 0, i\in \mathcal {I}(\tilde{x})\) and \(\gamma _j\ge 0, j\in \mathcal {J}(\tilde{x})\), such that \(\sum _{i\in \mathcal {I}(\tilde{x})}\mu _i+\sum _{j\in \mathcal {J}(\tilde{x})}\gamma _j\ne 0\) and
\(0_{\mathbb {R}^n}\in \sum _{i\in \mathcal {I}(\tilde{x})}\mu _i\partial g_i(\tilde{x})+\sum _{j\in \mathcal {J}(\tilde{x})}\gamma _j\big (\partial h_j(\tilde{x})\cup \partial (-h_j)(\tilde{x})\big )+N(\Omega ,\tilde{x}).\)
As shown in [3], the (CQ) in Definition 3.1 is guaranteed by the well-known Mangasarian–Fromovitz constraint qualification in the smooth setting when \(\tilde{x}\in S\) and \(\Omega =\mathbb {R}^n\). We refer to [3, 4, 6, 25] for more details.
Next, we establish the following necessary optimality condition for local weak sharp efficient solutions of (MOP) under the above-defined (CQ).
Theorem 3.1
Assume that the \(\mathrm {(CQ)}\), defined by Definition 3.1, is satisfied around \(\hat{x}\in S\) with respect to \(\widetilde{S}\), i.e., there exists a neighborhood U of \(\hat{x}\) such that the \(\mathrm {(CQ)}\) holds at any \(\tilde{x}\in \widetilde{S}\cap U\). If \(\hat{x}\) is a local weak sharp efficient solution for \(\mathrm {(MOP)}\), then there exist real numbers \(\eta ,\delta >0\) such that for every \(\tilde{x}\in \widetilde{S}\cap \mathbb {B}(\hat{x},\delta )\), one has
\(\eta \mathcal {B}_{\mathbb {R}^n}\cap \widehat{N}(\widetilde{S},\tilde{x})\subset \bigcup \Big \{\sum _{k\in \mathcal {K}}\lambda _k\partial f_k(\tilde{x})+\sum _{i\in \mathcal {I}}\mu _i\partial g_i(\tilde{x})+\sum _{j\in \mathcal {J}}\gamma _j\big (\partial h_j(\tilde{x})\cup \partial (-h_j)(\tilde{x})\big )+N(\Omega ,\tilde{x})\;\Big |\;\lambda _k\ge 0,\ \sum _{k\in \mathcal {K}}\lambda _k=1,\ \mu _i\ge 0,\ \mu _ig_i(\tilde{x})=0,\ \gamma _j\ge 0\Big \}.\qquad (3.1)\)
Proof
Since \(\hat{x}\in S\) is a local weak sharp efficient solution for (MOP), there exist real numbers \(\eta ,\delta _1>0\) such that
\(\max _{1\le k\le m}\{f_k(x)-f_k(\hat{x})\}\ge \eta \,d(x,\widetilde{S}),\quad \forall x\in S\cap \mathbb {B}(\hat{x},\delta _1).\qquad (3.2)\)
Moreover, the (CQ) is satisfied around \(\hat{x}\in S\) with respect to \(\widetilde{S}\). Thus, there exists some \(\delta _2>0\) such that the (CQ) holds at any \(\tilde{x}\in \widetilde{S}\cap \mathbb {B}(\hat{x},\delta _2)\). Let \(0<\delta <\min \{\frac{1}{2}\delta _1,\delta _2\}\) and take arbitrary \(\tilde{x}\in \widetilde{S}\cap \mathbb {B}(\hat{x},\delta )\). It follows from \(\widehat{\partial }d(\tilde{x},\widetilde{S})=\mathcal {B}_{\mathbb {R}^n}\cap \widehat{N}(\widetilde{S},\tilde{x})\) that for every \(x^*\in \mathcal {B}_{\mathbb {R}^n}\cap \widehat{N}(\widetilde{S},\tilde{x})\), we have \(x^*\in \widehat{\partial }d(\tilde{x},\widetilde{S})\). Then, for every \(\epsilon >0\), there exists \(0<\delta _3<\frac{1}{2}\delta _1\) such that
\(d(x,\widetilde{S})\ge \langle x^*,x-\tilde{x}\rangle -\epsilon \Vert x-\tilde{x}\Vert ,\quad \forall x\in \mathbb {B}(\tilde{x},\delta _3).\qquad (3.3)\)
Note that, for every \(x\in \mathbb {B}(\tilde{x},\delta _3)\), we get \(\Vert x-\hat{x}\Vert \le \Vert x-\tilde{x}\Vert +\Vert \tilde{x}-\hat{x}\Vert <\delta _3+\delta <\delta _1\), that is, \(x\in \mathbb {B}(\hat{x},\delta _1)\). Together with (3.2) and (3.3), it follows that
\(\max _{1\le k\le m}\{f_k(x)-f_k(\hat{x})\}\ge \eta \langle x^*,x-\tilde{x}\rangle -\eta \epsilon \Vert x-\tilde{x}\Vert ,\quad \forall x\in S\cap \mathbb {B}(\tilde{x},\delta _3).\qquad (3.4)\)
Furthermore, since \(\tilde{x}\in \widetilde{S}\), i.e., \(\tilde{x}\in S\) and \(f(\tilde{x})=f(\hat{x})\), (3.4) implies that \(\tilde{x}\) is a local minimum point of the following function \(\varphi :\mathbb {R}^n\rightarrow \mathbb {R}\cup \{+\infty \}\) defined by
\(\varphi (x):=\max _{1\le k\le m}\{f_k(x)-f_k(\hat{x})\}-\eta \langle x^*,x-\tilde{x}\rangle +\eta \epsilon \Vert x-\tilde{x}\Vert +\psi (x,S).\)
Applying Theorem 2.1 (generalized Fermat rule), we have \(0_{\mathbb {R}^n}\in \widehat{\partial }\varphi (\tilde{x})\). Since \(f_k, k\in \mathcal {K}\) are all Lipschitz continuous around \(\tilde{x}\), the function \(\phi :\mathbb {R}^n\rightarrow \mathbb {R}\), defined by \(\phi (x):=\max _{1\le k\le m}\{f_k(x)-f_k(\hat{x})\}\), is also Lipschitz continuous around \(\tilde{x}\). In particular, let \(\ell >0\) be the modulus. Moreover, since S is locally closed, the indicator function \(\psi (\bullet ,S)\) is lower semicontinuous around \(\tilde{x}\). Together with the global Lipschitz continuity of \(\Vert \bullet -\tilde{x}\Vert \) with modulus 1, we get from Theorem 2.2(i) (fuzzy sum rule for Fréchet subdifferentials) that for the preceding \(\epsilon >0\), there exist \(x^\epsilon _1, x^\epsilon _2, x^\epsilon _3\in \mathbb {B}(\tilde{x},\epsilon )\), such that
and
Since \(\psi (x^\epsilon _3,S)<\epsilon \), we have \(x^\epsilon _3\in S\) and \(\widehat{\partial }\psi (\bullet ,S)(x^\epsilon _3)=\widehat{N}(S,x^\epsilon _3)\). Note that \(\phi \) is Lipschitz continuous around \(\tilde{x}\) with modulus \(\ell \) and \(x^\epsilon _1\in \mathbb {B}(\tilde{x},\epsilon )\). Then it follows from [25, Proposition 1.85] with \(\varepsilon =0\) that for all sufficiently small \(\epsilon >0\), one has \(\widehat{\partial }\phi (x^\epsilon _1)\subset \ell \mathcal {B}_{\mathbb {R}^n}\). Similarly, we get \(\widehat{\partial }\Vert \bullet -\tilde{x}\Vert (x^\epsilon _2)\subset \mathcal {B}_{\mathbb {R}^n}\). Together with \(\mathcal {B}_{\mathbb {R}^n}\) being compact, and moreover, \(x^\epsilon _1, x^\epsilon _2, x^\epsilon _3\in \mathbb {B}(\tilde{x},\epsilon )\), we have
which imply
\(\eta x^*\in \partial \phi (\tilde{x})+N(S,\tilde{x})\qquad (3.5)\)
(taking \(\epsilon \downarrow 0\) and passing to a subsequence if necessary). On one hand, by using the Mordukhovich subdifferential of maximum functions [25, Theorem 3.46(ii)] and Theorem 2.2(ii) (sum rule for Mordukhovich subdifferentials), we get from \(\tilde{x}\in \widetilde{S}\) that \(f(\tilde{x})=f(\hat{x})\) and
\(\partial \phi (\tilde{x})\subset \Big \{\sum _{k\in \mathcal {K}}\lambda _k\partial f_k(\tilde{x})\;\Big |\;\lambda _k\ge 0,\ \sum _{k\in \mathcal {K}}\lambda _k=1\Big \}.\qquad (3.6)\)
On the other hand, let
\(\Lambda :=\{x\in \mathbb {R}^n\mid g_i(x)\le 0,\ i\in \mathcal {I},\ h_j(x)=0,\ j\in \mathcal {J}\}.\)
Then \(S=\Omega \cap \Lambda \). Since \(\delta <\delta _2\) and \(\tilde{x}\in \widetilde{S}\cap \mathbb {B}(\hat{x},\delta )\), the (CQ) holds at \(\tilde{x}\). Note that
always holds since \(0_{\mathbb {R}^n}\in N(\Omega ,\tilde{x})\). Thus, it follows from the (CQ) that there do not exist \(\mu _i\ge 0, i\in \mathcal {I}(\tilde{x})\) and \(\gamma _j\ge 0, j\in \mathcal {J}(\tilde{x})=\mathcal {J}\), such that \(\sum _{i\in \mathcal {I}(\tilde{x})}\mu _i+\sum _{j\in \mathcal {J}}\gamma _j\ne 0\) and
\(0_{\mathbb {R}^n}\in \sum _{i\in \mathcal {I}(\tilde{x})}\mu _i\partial g_i(\tilde{x})+\sum _{j\in \mathcal {J}}\gamma _j\big (\partial h_j(\tilde{x})\cup \partial (-h_j)(\tilde{x})\big ).\)
Applying [25, Corollary 4.36], we have
\(N(\Lambda ,\tilde{x})\subset \Big \{\sum _{i\in \mathcal {I}(\tilde{x})}\mu _i\partial g_i(\tilde{x})+\sum _{j\in \mathcal {J}}\gamma _j\big (\partial h_j(\tilde{x})\cup \partial (-h_j)(\tilde{x})\big )\;\Big |\;\mu _i\ge 0,\ \gamma _j\ge 0\Big \}.\qquad (3.7)\)
Moreover, it follows from (3.7) and [25, Corollary 3.37] that the (CQ) being satisfied at \(\tilde{x}\) implies
\(N(S,\tilde{x})=N(\Omega \cap \Lambda ,\tilde{x})\subset N(\Omega ,\tilde{x})+N(\Lambda ,\tilde{x}).\qquad (3.8)\)
Let \(\mu _i=0\) for every \(i\in \mathcal {I}\setminus \mathcal {I}(\tilde{x})\). Then, it follows from (3.7) and (3.8) that
\(N(S,\tilde{x})\subset \Big \{\sum _{i\in \mathcal {I}}\mu _i\partial g_i(\tilde{x})+\sum _{j\in \mathcal {J}}\gamma _j\big (\partial h_j(\tilde{x})\cup \partial (-h_j)(\tilde{x})\big )+N(\Omega ,\tilde{x})\;\Big |\;\mu _i\ge 0,\ \mu _ig_i(\tilde{x})=0,\ \gamma _j\ge 0\Big \}.\qquad (3.9)\)
Note that \(\tilde{x}\in \widetilde{S}\cap \mathbb {B}(\hat{x},\delta )\) and \(x^*\in \mathcal {B}_{\mathbb {R}^n}\cap \widehat{N}(\widetilde{S},\tilde{x})\) are arbitrary. Thus, combining (3.5), (3.6) and (3.9), we can conclude that (3.1) holds. This completes the proof. \(\square \)
The following example shows that the \(\mathrm {(CQ)}\) being satisfied around \(\hat{x}\in S\) with respect to \(\widetilde{S}\) is essential for Theorem 3.1.
Example 3.1
Let the vector-valued map \(f:\mathbb {R}\rightarrow \mathbb {R}^2\) be defined by \(f(x):=(f_1(x),f_2(x))\) with \(f_1(x)=f_2(x):=\min \{0,x\}\) for all \(x\in \mathbb {R}\). Moreover, let the functions \(g:\mathbb {R}\rightarrow \mathbb {R}\) and \(h:\mathbb {R}\rightarrow \mathbb {R}\) be respectively defined by \(g(x):=(x-1)^3\) and \(h(x):=\min \{0,x\}\) for all \(x\in \mathbb {R}\). Take \(\Omega :=[-2,2]\subset \mathbb {R}\) and consider the following constrained multiobjective optimization problem:
It is easy to verify that the feasible set \(S=[0,1]\). In particular, let \(\hat{x}=1\in S\). Then, \(f(\hat{x})=(0,0)\) and \(\widetilde{S}=S\cap f^{-1}(f(\hat{x}))=S\). Obviously, \(\hat{x}\) is not isolated in \(\widetilde{S}\). It follows from Definition 1.1 that \(\hat{x}\in S\) is a local weak sharp efficient solution. However, (3.1) does not hold for any \(\eta ,\delta >0\). In fact, by direct calculation, we have \(\partial f_1(\hat{x})=\partial f_2(\hat{x})=\{0\}\), \(\partial g(\hat{x})=\{0\}\), \(\partial h(\hat{x})=\{0\}\), \(\partial (-h)(\hat{x})=\{0\}\), \(N(\Omega ,\hat{x})=\{0\}\) and \(N(\widetilde{S},\hat{x})=[0, +\infty )\). Thus, we get \(\eta \mathcal {B}_{\mathbb {R}}\cap N(\widetilde{S},\hat{x})=[0,\eta ]\) and
\(\bigcup \Big \{\lambda _1\partial f_1(\hat{x})+\lambda _2\partial f_2(\hat{x})+\mu \partial g(\hat{x})+\gamma \big (\partial h(\hat{x})\cup \partial (-h)(\hat{x})\big )+N(\Omega ,\hat{x})\;\Big |\;\lambda _1,\lambda _2,\mu ,\gamma \ge 0,\ \lambda _1+\lambda _2=1\Big \}=\{0\},\)
which shows that (3.1) fails for every \(\eta , \delta >0\). Meanwhile, the (CQ) is also not satisfied around \(\hat{x}\in S\) with respect to \(\widetilde{S}\).
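The computations in Example 3.1 can be double-checked numerically; a sketch (the grid, step sizes and tolerances are our own choices):

```python
# Numerical check of Example 3.1: f1 = f2 = min(0, x), g = (x-1)^3,
# h = min(0, x), Omega = [-2, 2].  The feasible set is S = [0, 1],
# S_tilde = S, and x_hat = 1 is a (non-isolated) local weak sharp
# efficient solution because both sides of the inequality vanish on S.

def f1(x): return min(0.0, x)
def g(x):  return (x - 1.0) ** 3
def h(x):  return min(0.0, x)

grid = [i / 100.0 for i in range(-200, 201)]
S = [x for x in grid if g(x) <= 0 and h(x) == 0 and -2.0 <= x <= 2.0]
assert min(S) == 0.0 and max(S) == 1.0          # S = [0, 1]

def dist(x, pts): return min(abs(x - p) for p in pts)

x_hat, eta = 1.0, 1.0
# weak sharp inequality: max_k {f_k(x) - f_k(x_hat)} >= eta * d(x, S_tilde);
# here S_tilde = S, so both sides are 0 for every feasible x.
assert all(f1(x) - f1(x_hat) >= eta * dist(x, S) - 1e-12 for x in S)

# g'(1) = 0, so 0 lies in the subdifferential of g at 1 with a nonzero
# multiplier, consistent with the failure of the (CQ) at x_hat = 1.
assert abs((g(1.0 + 1e-6) - g(1.0 - 1e-6)) / 2e-6) < 1e-9
print("Example 3.1 verified: S = [0,1] and the weak sharp inequality holds")
```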
It is worth noting that if \(\hat{x}\in S\) is a local sharp efficient solution for (MOP), then \(\hat{x}\) is isolated in \(\widetilde{S}\). Thus, \(\widehat{N}(\widetilde{S},\hat{x})=\mathbb {R}^n\) and the (CQ) only needs to be satisfied at \(\hat{x}\). Together with Theorem 3.1, the following necessary condition for the local sharp efficiency, corresponding to [3, Theorem 3.5], immediately holds:
Corollary 3.1
Assume that the \(\mathrm {(CQ)}\), defined by Definition 3.1, is satisfied at \(\hat{x}\in S\). If \(\hat{x}\) is a local sharp efficient solution for \(\mathrm {(MOP)}\), then there exists a real number \(\eta >0\) such that
\(\eta \mathcal {B}_{\mathbb {R}^n}\subset \bigcup \Big \{\sum _{k\in \mathcal {K}}\lambda _k\partial f_k(\hat{x})+\sum _{i\in \mathcal {I}}\mu _i\partial g_i(\hat{x})+\sum _{j\in \mathcal {J}}\gamma _j\big (\partial h_j(\hat{x})\cup \partial (-h_j)(\hat{x})\big )+N(\Omega ,\hat{x})\;\Big |\;\lambda _k\ge 0,\ \sum _{k\in \mathcal {K}}\lambda _k=1,\ \mu _i\ge 0,\ \mu _ig_i(\hat{x})=0,\ \gamma _j\ge 0\Big \}.\)
Remark 3.1
Note that, in order to establish the necessary condition for the local sharp efficiency, Chuong [3, Theorem 3.5] employed the exact sum rule for Fréchet subdifferentials, which was established in [27, Theorem 3.1]; see also Chuong and Yao [4, Theorem 3.1] for semi-infinite vector optimization problems. Simultaneously, Zhou et al. [28, Theorem 4.1] also obtained some necessary conditions by virtue of the exact sum rule for Mordukhovich subdifferentials under appropriate regularity and differentiability assumptions. However, we use a different method, namely Theorem 2.2(i) (the fuzzy sum rule for Fréchet subdifferentials), to obtain a more general necessary condition for the local weak sharp efficiency in Theorem 3.1.
Now, by employing the approximate projection theorem, and some appropriate convexity and affineness conditions, we establish the following sufficient optimality conditions for the local and global weak sharp efficient solutions of (MOP), respectively.
Theorem 3.2
Given \(\hat{x}\in S\), let \(\Omega \) be a closed and convex set, and \(\widetilde{S}\) be convex. Suppose that \(f_k, k\in \mathcal {K}\) and \(g_i, i\in \mathcal {I}\) are convex functions, and \(h_j, j\in \mathcal {J}\) are affine functions. If there exist real numbers \(\eta ,\delta >0\) such that for every \(\tilde{x}\in \widetilde{S}\cap \mathbb {B}(\hat{x},\delta )\),
\(\eta \mathcal {B}_{\mathbb {R}^n}\cap N(\widetilde{S},\tilde{x})\subset \bigcup \Big \{\sum _{k\in \mathcal {K}}\lambda _k\partial f_k(\tilde{x})+\sum _{i\in \mathcal {I}}\mu _i\partial g_i(\tilde{x})+\sum _{j\in \mathcal {J}}\gamma _j\big (\partial h_j(\tilde{x})\cup \partial (-h_j)(\tilde{x})\big )+N(\Omega ,\tilde{x})\;\Big |\;\lambda _k\ge 0,\ \sum _{k\in \mathcal {K}}\lambda _k=1,\ \mu _i\ge 0,\ \mu _ig_i(\tilde{x})=0,\ \gamma _j\ge 0\Big \},\qquad (3.10)\)
then \(\hat{x}\) is a local weak sharp efficient solution for \(\mathrm {(MOP)}\). Specially, if (3.10) holds for \(\delta =+\infty \), then \(\hat{x}\) is a global weak sharp efficient solution for \(\mathrm {(MOP)}\).
Proof
Note that \(\Omega \) is a closed and convex set, \(g_i, i\in \mathcal {I}\) are convex functions, and \(h_j, j\in \mathcal {J}\) are affine functions. Thus, the feasible set S is closed and convex. Together with the local Lipschitz continuity of \(f_k, k\in \mathcal {K}\) and the convexity of \(\widetilde{S}\), it follows that \(\widetilde{S}=S\cap f^{-1}(f(\hat{x}))\) is a closed and convex set. Next, we consider the following two cases with \(0<\delta <+\infty \) and \(\delta =+\infty \), respectively.
(i) Assume that (3.10) holds for \(0<\delta <+\infty \). Let \(0<\delta _1<\frac{1}{2}\delta \). In what follows, we show
\(\max _{1\le k\le m}\{f_k(x)-f_k(\hat{x})\}\ge \eta \,d(x,\widetilde{S}),\quad \forall x\in S\cap \mathbb {B}(\hat{x},\delta _1).\qquad (3.11)\)
In fact, take arbitrary \(x\in S\cap \mathbb {B}(\hat{x},\delta _1)\). If \(x\in \widetilde{S}\), i.e., \(x\in S\) and \(f(x)=f(\hat{x})\), then (3.11) holds trivially. Otherwise, if \(x\notin \widetilde{S}\), then \(0<d(x,\widetilde{S})\le \Vert x-\hat{x}\Vert <\delta _1\) since \(\hat{x}\in \widetilde{S}\). Thus, we get \(0<\frac{1}{\delta _1}d(x,\widetilde{S})<1\). Moreover, for every \(\theta \in \Big (\frac{1}{\delta _1}d(x,\widetilde{S}), 1\Big )\), it follows from Theorem 2.3 (approximate projection theorem) that there exist \(\tilde{x}\in \widetilde{S}\) and
\(x^*\in N(\widetilde{S},\tilde{x})\quad \mathrm {with}\quad \Vert x^*\Vert =1\qquad (3.12)\)
such that
\(\theta \Vert x-\tilde{x}\Vert <\langle x^*,x-\tilde{x}\rangle .\qquad (3.13)\)
Thus, we have \(\Vert x-\tilde{x}\Vert <\frac{1}{\theta }d(x,\widetilde{S})<\delta _1\), and then, \(\Vert \tilde{x}-\hat{x}\Vert \le \Vert \tilde{x}-x\Vert +\Vert x-\hat{x}\Vert <\delta _1+\delta _1<\delta \), which implies \(\tilde{x}\in \widetilde{S}\cap \mathbb {B}(\hat{x},\delta )\). Together with (3.10) and (3.12), it follows that there exist \(\lambda _k\ge 0\) with \(\sum _{k\in \mathcal {K}}\lambda _k=1\), \(u^*_k\in \partial f_k(\tilde{x}), k\in \mathcal {K}\), \(\mu _i\ge 0\) with \(\mu _ig_i(\tilde{x})=0\), \(v^*_i\in \partial g_i(\tilde{x}), i\in \mathcal {I}\), \(\gamma _j\ge 0\), \(w^*_j\in \partial h_j(\tilde{x})\cup \partial (-h_j)(\tilde{x}), j\in \mathcal {J}\) and \(\omega ^*\in N(\Omega ,\tilde{x})\) such that
\(\eta x^*=\sum _{k\in \mathcal {K}}\lambda _ku^*_k+\sum _{i\in \mathcal {I}}\mu _iv^*_i+\sum _{j\in \mathcal {J}}\gamma _jw^*_j+\omega ^*.\qquad (3.14)\)
Note that \(x\in S\). Then \(x\in \Omega \). Since \(\Omega \) is convex, we get
\(\langle \omega ^*,x-\tilde{x}\rangle \le 0.\qquad (3.15)\)
Simultaneously, since \(f_k, k\in \mathcal {K}\) and \(g_i, i\in \mathcal {I}\) are convex functions, we have
\(\langle u^*_k,x-\tilde{x}\rangle \le f_k(x)-f_k(\tilde{x}),\quad \forall k\in \mathcal {K},\qquad (3.16)\)
and
\(\langle v^*_i,x-\tilde{x}\rangle \le g_i(x)-g_i(\tilde{x}),\quad \forall i\in \mathcal {I}.\qquad (3.17)\)
Furthermore, since \(h_j, j\in \mathcal {J}\) are affine functions, there exists \(\alpha _j\in \{-1,1\}\) such that
\(\langle w^*_j,x-\tilde{x}\rangle =\alpha _j\big (h_j(x)-h_j(\tilde{x})\big ),\quad \forall j\in \mathcal {J}.\qquad (3.18)\)
Since \(x\in S\), it follows that \(g_i(x)\le 0\) for all \(i\in \mathcal {I}\) and \(h_j(x)=0\) for all \(j\in \mathcal {J}\). Together with (3.14)–(3.18) and \(\tilde{x}\in \widetilde{S}\), we get \(\tilde{x}\in S\), \(f(\tilde{x})=f(\hat{x})\) and
\(\eta \langle x^*,x-\tilde{x}\rangle \le \max _{1\le k\le m}\{f_k(x)-f_k(\hat{x})\}.\qquad (3.19)\)
Since \(\tilde{x}\in \widetilde{S}\), we have \(d(x,\widetilde{S})\le \Vert x-\tilde{x}\Vert \). Together with (3.13) and (3.19), we get
\(\max _{1\le k\le m}\{f_k(x)-f_k(\hat{x})\}\ge \eta \langle x^*,x-\tilde{x}\rangle \ge \eta \theta \Vert x-\tilde{x}\Vert \ge \eta \theta \,d(x,\widetilde{S}).\)
Let \(\theta \uparrow 1\). Then (3.11) holds since \(x\in S\cap \mathbb {B}(\hat{x},\delta _1)\) is arbitrary. Thus, \(\hat{x}\) is a local weak sharp efficient solution for (MOP).
(ii) Let (3.10) hold for \(\delta =+\infty \). Take arbitrary \(\tilde{x}\in \widetilde{S}\). Since \(\widetilde{S}\) is a closed and convex set, we have \(N(\widetilde{S},\tilde{x})=T(\widetilde{S},\tilde{x})^\circ \). Moreover, it follows from [19, Theorem 3.1] that
and from [29, Theorem A.1-4] that
Thus, we can conclude that for every \(x\in S\),
\(d(x,\widetilde{S})=\sup \big \{\langle x^*,x-\tilde{x}\rangle \;\big |\;\tilde{x}\in \widetilde{S},\ x^*\in \mathcal {B}_{\mathbb {R}^n}\cap N(\widetilde{S},\tilde{x})\big \}.\qquad (3.20)\)
Take arbitrary \(\tilde{x}\in \widetilde{S}\) and \(x^*\in \mathcal {B}_{\mathbb {R}^n}\cap N(\widetilde{S},\tilde{x})\). Since (3.10) holds for \(\delta =+\infty \), by the same method as in the proof of (i), we have
\(\eta \langle x^*,x-\tilde{x}\rangle \le \max _{1\le k\le m}\{f_k(x)-f_k(\hat{x})\},\quad \forall x\in S.\)
Together with (3.20), we get
\(\max _{1\le k\le m}\{f_k(x)-f_k(\hat{x})\}\ge \eta \,d(x,\widetilde{S}),\quad \forall x\in S,\)
that is, \(\hat{x}\) is a global weak sharp efficient solution for (MOP). This completes the proof. \(\square \)
Remark 3.2
Compared with the method used in [3, Theorem 3.7], we apply the expressions of special distance functions, instead of the Hahn–Banach theorem, to obtain the sufficient condition for the global weak sharp efficiency in Theorem 3.2. However, a similar method is not applicable for establishing sufficient conditions for local weak sharp efficient solutions, since the reference point \(\hat{x}\in S\) is not isolated in \(\widetilde{S}\). In the proof of Theorem 3.2, we employ Theorem 2.3 (the approximate projection theorem) to overcome this difficulty.
To end this paper, we give the following example to illustrate Theorem 3.2.
Example 3.2
Let the vector-valued map \(f:\mathbb {R}^2\rightarrow \mathbb {R}^2\) be defined by \(f(x):=(f_1(x),f_2(x))\) with \(f_1(x):=\max \{0,-x_1\}\) and \(f_2(x):=\max \{0,x_2\}\) for all \(x=(x_1,x_2)\in \mathbb {R}^2\). Moreover, let the functions \(g:\mathbb {R}^2\rightarrow \mathbb {R}\) and \(h:\mathbb {R}^2\rightarrow \mathbb {R}\) be respectively defined by \(g(x):=|x_1|+x_2\) and \(h(x):=x_1+x_2\) for all \(x=(x_1,x_2)\in \mathbb {R}^2\). Take \(\Omega :=[-1,1]\times [-1,1]\subset \mathbb {R}^2\) and consider the following constrained multiobjective optimization problem:
Obviously, \(f_1, f_2\) and g are convex functions and h is an affine function defined on \(\mathbb {R}^2\). Furthermore, \(\Omega \) is a closed and convex subset and the feasible set \(S=\{x\in \mathbb {R}^2\mid 0\le x_1\le 1,\ x_1+x_2=0\}\). In particular, let \(\hat{x}=(\frac{1}{2},-\frac{1}{2})\in S\). Then \(f(\hat{x})=(0,0)\) and \(\widetilde{S}:=S\cap f^{-1}(f(\hat{x}))=S\), which implies that \(\widetilde{S}\) is also a closed and convex set. Take arbitrary \(\eta >0\) and \(\tilde{x}\in \widetilde{S}\). Next, we consider the following three cases for \(\tilde{x}\in \widetilde{S}\):
Case 1: \(\tilde{x}=(0,0)\). Then \(\partial f_1(\tilde{x})=[-1,0]\times \{0\}\), \(\partial f_2(\tilde{x})=\{0\}\times [0,1]\), \(\partial g(\tilde{x})=[-1,1]\times \{1\}\), \(\partial h(\tilde{x})=\{(1,1)\}\), \(\partial (-h)(\tilde{x})=\{(-1,-1)\}\), \(N(\Omega ,\tilde{x})=\{(0,0)\}\) and \(N(\widetilde{S},\tilde{x})=\{x\in \mathbb {R}^2\mid x_1\le x_2\}\). Thus, we have
and
which implies that (3.10) holds for all \(\eta >0\).
Case 2: \(\tilde{x}=(1,-1)\). Then \(\partial f_1(\tilde{x})=\partial f_2(\tilde{x})=\{(0,0)\}\), \(\partial g(\tilde{x})=\{(1,1)\}\), \(\partial h(\tilde{x})=\{(1,1)\}\), \(\partial (-h)(\tilde{x})=\{(-1,-1)\}\), \(N(\Omega ,\tilde{x})=\{x\in \mathbb {R}^2\mid x_2\le 0\le x_1\}\) and \(N(\widetilde{S},\tilde{x})=\{x\in \mathbb {R}^2\mid x_1\ge x_2\}\). Thus, we have
and
Clearly, (3.10) holds for all \(\eta >0\).
Case 3: \(\tilde{x}\ne (0,0)\) and \(\tilde{x}\ne (1,-1)\). Then \(\partial f_1(\tilde{x})=\partial f_2(\tilde{x})=\{(0,0)\}\), \(\partial g(\tilde{x})=\{(1,1)\}\), \(\partial h(\tilde{x})=\{(1,1)\}\), \(\partial (-h)(\tilde{x})=\{(-1,-1)\}\), \(N(\Omega ,\tilde{x})=\{(0,0)\}\) and \(N(\widetilde{S},\tilde{x})=\{x\in \mathbb {R}^2\mid x_1=x_2, x_1\in \mathbb {R}\}\). Thus, we have
and
Obviously, (3.10) still holds for all \(\eta >0\).
Therefore, we can conclude that (3.10) holds for all \(\eta >0\) and all \(\tilde{x}\in \widetilde{S}\). Moreover, it follows from Theorem 3.2 that \(\hat{x}\in S\) is a global weak sharp efficient solution, and simultaneously, \(\hat{x}\) is obviously not isolated in \(\widetilde{S}\).
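The trivial validity of the weak sharp inequality on \(\widetilde{S}=S\) in Example 3.2 can likewise be confirmed numerically (the sampling and tolerances are our own choices):

```python
# Numerical check of Example 3.2: on the feasible segment
# S = { (t, -t) : 0 <= t <= 1 } both objectives vanish, so S_tilde = S
# and the global weak sharp inequality holds trivially for every eta > 0.
import math

def f1(x): return max(0.0, -x[0])
def f2(x): return max(0.0, x[1])

def dist_to_S(x):
    # distance to the segment { (t, -t) : 0 <= t <= 1 }
    t = max(0.0, min(1.0, (x[0] - x[1]) / 2.0))   # projection parameter
    return math.hypot(x[0] - t, x[1] + t)

x_hat = (0.5, -0.5)
eta = 1.0
segment = [(t / 100.0, -t / 100.0) for t in range(101)]

for x in segment:
    lhs = max(f1(x) - f1(x_hat), f2(x) - f2(x_hat))
    assert lhs >= eta * dist_to_S(x) - 1e-12      # both sides are 0 on S
assert dist_to_S((0.5, 0.5)) == math.hypot(0.5, 0.5)  # a point off S
print("Example 3.2 verified: the weak sharp inequality holds on all of S")
```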
References
Ginchev, I., Guerraggio, A., Rocca, M.: Isolated minimizers and proper efficiency for \(C^{0,1}\) constrained vector optimization problems. J. Math. Anal. Appl. 309, 353–368 (2005)
Ginchev, I., Guerraggio, A., Rocca, M.: From scalar to vector optimization. Appl. Math. 51, 5–36 (2006)
Chuong, T.D.: Optimality and duality for proper and isolated efficiencies in multiobjective optimization. Nonlinear Anal. 76, 93–104 (2013)
Chuong, T.D., Yao, J.C.: Isolated and proper efficiencies in semi-infinite vector optimization problems. J. Optim. Theory Appl. 162, 447–462 (2014)
Studniarski, M.: Necessary and sufficient conditions for isolated local minima of nonsmooth functions. SIAM J. Control Optim. 24, 1044–1049 (1986)
Rockafellar, R.T., Wets, R.J.-B.: Variational Analysis. Springer, Berlin (1998)
Ward, D.E.: Characterizations of strict local minima and necessary conditions for weak sharp minima. J. Optim. Theory Appl. 80, 551–571 (1994)
Jahn, J.: Vector Optimization: Theory, Applications, and Extensions. Springer, Berlin (2004)
Chen, G.Y., Huang, X.X., Yang, X.Q.: Vector Optimization: Set-Valued and Variational Analysis. Lecture Notes in Economics and Mathematical Systems. Springer, Berlin (2005)
Luc, D.T.: Theory of Vector Optimization. Lecture Notes in Economics and Mathematical Systems. Springer, Berlin (1989)
Sawaragi, Y., Nakayama, H., Tanino, T.: Theory of Multiobjective Optimization. Mathematics in Science and Engineering. Academic Press, Orlando (1985)
Bao, T.Q., Mordukhovich, B.S.: Necessary conditions for super minimizers in constrained multiobjective optimization. J. Glob. Optim. 43, 533–552 (2009)
Borwein, J.M., Zhuang, D.M.: Super efficiency in vector optimization. Trans. Am. Math. Soc. 338, 105–122 (1993)
Bao, T.Q., Mordukhovich, B.S.: Relative Pareto minimizers in multiobjective optimization: existence and optimality conditions. Math. Program. 122, 301–347 (2010)
Cromme, L.: Strong uniqueness: a far reaching criterion for the convergence of iterative procedures. Numer. Math. 29, 179–193 (1978)
Womersley, R.S.: Local properties of algorithms for minimizing nonsmooth composite functions. Math. Program. 32, 69–89 (1985)
Burke, J.V., Ferris, M.C.: A Gauss–Newton method for convex composite optimization. Math. Program. 71, 179–194 (1995)
Auslender, A.: Stability in mathematical programming with nondifferentiable data. SIAM J. Control Optim. 22, 239–254 (1984)
Burke, J.V., Ferris, M.C.: Weak sharp minima in mathematical programming. SIAM J. Control Optim. 31, 1340–1359 (1993)
Marcotte, P., Zhu, D.L.: Weak sharp solutions of variational inequalities. SIAM J. Optim. 9, 179–189 (1998)
Zălinescu, C.: Convex Analysis in General Vector Spaces. World Scientific, Singapore (2002)
Bednarczuk, E.M.: Weak sharp efficiency and growth condition for vector-valued functions with applications. Optimization 53, 455–474 (2004)
Studniarski, M.: Weak sharp minima in multiobjective optimization. Control Cybern. 36, 925–937 (2007)
Deng, S., Yang, X.Q.: Weak sharp minima in multicriteria linear programming. SIAM J. Optim. 15, 456–460 (2004)
Mordukhovich, B.S.: Variational Analysis and Generalized Differentiation. I: Basic Theory. Springer, Berlin (2006)
Ng, K.F., Yang, W.H.: Error bounds for abstract linear inequality systems. SIAM J. Optim. 13, 24–43 (2002)
Mordukhovich, B.S., Nam, N.M., Yen, N.D.: Fréchet subdifferential calculus and optimality conditions in nondifferentiable programming. Optimization 55, 685–708 (2006)
Zhou, J.C., Mordukhovich, B.S., Xiu, N.H.: Complete characterizations of local weak sharp minima with applications to semi-infinite optimization and complementarity. Nonlinear Anal. 75, 1700–1718 (2012)
Burke, J.V., Deng, S.: Weak sharp minima revisited. Part I: basic theory. Control Cybern. 31, 439–469 (2002)
Acknowledgments
The author is grateful to the two anonymous referees for their valuable comments and suggestions, which helped to improve the paper. This research was supported by the Fundamental Research Funds for the Central Universities (Grant JBK150125) and the National Natural Science Foundation of China (Grant 11171362).
Zhu, S.K. Weak sharp efficiency in multiobjective optimization. Optim Lett 10, 1287–1301 (2016). https://doi.org/10.1007/s11590-015-0925-0