1 Introduction

In this paper, we consider the following multiobjective optimization problem with equality and inequality constraints:

$$\begin{aligned} (\mathrm {MOP})\quad \min f(x)\quad \mathrm {s.t.}\quad g_i(x)\le 0, i\in \mathcal {I},\, h_j(x)=0, j\in \mathcal {J},\, x\in \Omega , \end{aligned}$$

where \(f:\mathbb {R}^n\rightarrow \mathbb {R}^m\) with \(f(x)=(f_1(x),f_2(x),\ldots ,f_m(x))\) is a vector-valued function, \(f_k, k\in \mathcal {K}\), \(g_i, i\in \mathcal {I}\) and \(h_j, j\in \mathcal {J}\) are locally Lipschitz functions defined on \(\mathbb {R}^n\), \(\mathcal {K}=\{1,2,\ldots ,m\}\), \(\mathcal {I}=\{1,2,\ldots ,p\}\) and \(\mathcal {J}=\{1,2,\ldots ,q\}\) are index sets, and \(\Omega \) is a nonempty locally closed subset of \(\mathbb {R}^n\). For simplicity, we denote the feasible set of (MOP) by \(S:=\{x\in \mathbb {R}^n\mid g_i(x)\le 0, i\in \mathcal {I}, h_j(x)=0, j\in \mathcal {J}, x\in \Omega \}\). Let \(\mathbb {R}^m_+\) and \(0_{\mathbb {R}^m}\) denote respectively the nonnegative orthant and the origin of \(\mathbb {R}^m\). Recall that a point \(\hat{x}\in S\) is said to be a local efficient solution for (MOP) iff there exists a neighborhood U of \(\hat{x}\) such that

$$\begin{aligned} f(x)-f(\hat{x})\notin -\mathbb {R}^m_+\setminus \{0_{\mathbb {R}^m}\},\quad \forall x\in S\cap U. \end{aligned}$$

Recall from [14] that a point \(\hat{x}\in S\) is said to be a local sharp efficient solution for (MOP) iff there exists a neighborhood U of \(\hat{x}\) and a real number \(\eta >0\) such that

$$\begin{aligned} \max _{1\le k\le m}\{f_k(x)-f_k(\hat{x})\}\ge \eta \Vert x-\hat{x}\Vert ,\quad \forall x\in S\cap U. \end{aligned}$$

It is obvious that every local sharp efficient solution is also a local efficient solution. In particular, when \(m=1\), that is, when (MOP) has only one objective, the local sharp efficient solution reduces to the standard local sharp minimum for scalar optimization problems. We refer to [5–7] for more details.
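To illustrate the scalar case \(m=1\) (an illustrative sketch, not part of the original development): the function \(f(x)=|x|\) has \(\hat{x}=0\) as a standard sharp minimum with modulus \(\eta =1\), which can be checked numerically on sample points:

```python
# Sharp minimum check for f(x) = |x| at x_hat = 0:
# f(x) - f(x_hat) >= eta * ||x - x_hat|| holds with eta = 1 for every x.
eta = 1.0
x_hat = 0.0
f = abs

for x in [-1.0, -0.3, 0.0, 0.2, 0.75]:
    assert f(x) - f(x_hat) >= eta * abs(x - x_hat) - 1e-12
print("sharp minimum inequality holds at every sample point")
```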

In the past few decades, multiobjective optimization, or more generally vector optimization, which is devoted to making an optimal decision with respect to more than one objective, has received extensive attention; see [8–11] and the references therein. Recently, characterizing various efficient solutions of multiobjective optimization problems by means of the advanced tools of variational analysis and generalized differentiation has attracted great interest in academic and professional communities. Bao and Mordukhovich [12] studied the super efficiency for (MOP) introduced by Borwein and Zhuang [13], and derived verifiable necessary optimality conditions by using modern variational principles and variational techniques. Ginchev et al. [1] proposed the notion of local sharp efficient solution (equivalently, strict local minimizer or isolated minimizer) for (MOP) and established optimality conditions by virtue of the Dini derivative. Subsequently, by means of the Mordukhovich generalized differentiation and the normal cone, Chuong [3] obtained a nonsmooth version of the Fermat rule for the local sharp efficient solution of (MOP). For more details, we refer to [4, 14].

As is well known, the local sharp efficient solution is closely related to convergence and stability analysis in mathematical programming; see [15–18] and the references therein. However, every local sharp solution is isolated in the solution set. To overcome this difficulty, Burke and Ferris [19] proposed the concept of weak sharp minima for scalar optimization problems, which allows the solutions not to be isolated. Since then, many papers have been devoted to characterizing weak sharp minima and studying their applications in other standard optimization topics, such as variational inequalities, error bounds and algorithm analysis; see [6, 7, 20, 21] and the references therein. Inspired by the aforementioned ideas, we turn our attention to the following weaker notion of efficiency, called the weak sharp efficient solution, for (MOP).

Definition 1.1

A point \(\hat{x}\in S\) is said to be a local weak sharp efficient solution for \(\mathrm {(MOP)}\) iff there exist a neighborhood U of \(\hat{x}\) and a real number \(\eta >0\) such that

$$\begin{aligned} \max _{1\le k\le m}\{f_k(x)-f_k(\hat{x})\}\ge \eta d(x,\widetilde{S}),\quad \forall x\in S\cap U, \end{aligned}$$

where \(\widetilde{S}:=\{x\in S\mid f(x)=f(\hat{x})\}=S\cap f^{-1}(f(\hat{x}))\). In particular, if \(U=\mathbb {R}^n\), then \(\hat{x}\) is said to be a global weak sharp efficient solution for \(\mathrm {(MOP)}\).
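A simple illustration of Definition 1.1 with a non-isolated solution set (an assumed toy instance, not from the paper): take \(n=1\), \(m=2\), no constraints (so \(S=\mathbb {R}\)), and \(f_1(x)=f_2(x)=d(x,[0,1])\). Then \(\widetilde{S}=[0,1]\), and every \(\hat{x}\in [0,1]\) is a global weak sharp efficient solution with \(\eta =1\):

```python
def d(x, a=0.0, b=1.0):
    """Distance from x to the interval [a, b] = S_tilde."""
    return max(a - x, 0.0, x - b)

eta = 1.0
x_hat = 0.0  # any point of [0, 1] works; f(x_hat) = (0, 0)
for x in [-2.0, -0.5, 0.0, 0.3, 1.0, 1.7]:
    # f_1(x) = f_2(x) = d(x), so max_k (f_k(x) - f_k(x_hat)) = d(x)
    lhs = max(d(x) - d(x_hat), d(x) - d(x_hat))
    assert lhs >= eta * d(x) - 1e-12  # d(x, S_tilde) = d(x, [0, 1])
print("weak sharp inequality holds; S_tilde = [0, 1] is not a singleton")
```

Here \(\widetilde{S}\) is a whole interval, so no point of it is isolated, in contrast to the sharp efficiency of [14].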

Comparing the local weak sharp efficient solution of Definition 1.1 with the local sharp efficient solution defined in [14], it is easy to verify that a local sharp efficient solution \(\hat{x}\in S\) is isolated in the solution set \(\widetilde{S}\), whereas a local weak sharp efficient solution \(\hat{x}\in S\) need not be isolated in \(\widetilde{S}\). Conversely, if \(\hat{x}\in S\) is a local weak sharp efficient solution for (MOP) and \(\hat{x}\) is isolated in \(\widetilde{S}\), then it is also a local sharp efficient solution. Thus, weak sharp efficiency not only removes the isolation property of sharp efficiency, which is restrictive in real applications, but also provides a more general and more accessible framework for our analysis. Moreover, characterizations of weak sharp efficiency for (MOP), as in the scalar case, are very important and useful for studying the metric subregularity property and growth conditions of vector-valued functions, as well as the stability analysis of the solution set-valued mapping of parametric vector optimization problems. For more details, we refer to [21–24] and the references therein.

The main purpose of this paper is to establish generalized nonsmooth Fermat rules for weak sharp efficient solutions of (MOP) in terms of the advanced tools of variational analysis and generalized differentiation. We first establish a necessary optimality condition for the local weak sharp efficient solution of (MOP) by means of the generalized Fermat rule, the Mordukhovich subdifferential of maximum functions, the fuzzy sum rule for Fréchet subdifferentials and the sum rule for Mordukhovich subdifferentials. Then, by using the isolation of sharp efficient solutions, we immediately obtain a necessary condition for the sharp efficient solution corresponding to [3, Theorem 3.5]. Moreover, we also obtain sufficient optimality conditions for the local and global weak sharp efficient solutions of (MOP), respectively, by employing the approximate projection theorem together with appropriate convexity and affineness conditions. Our results on weak sharp efficiency for (MOP) are more general, and the methods used in this paper are different from those in [14].

The rest of the paper is organized as follows. In Sect. 2, we recall some basic concepts, properties and tools commonly used in variational analysis. Section 3 contains the main results, including a nonsmooth Fermat rule for the local weak sharp efficient solution and sufficient optimality conditions for the local and global weak sharp efficient solutions, respectively, under appropriate convexity and affineness assumptions.

2 Preliminaries

The main tools for our study in this paper are the Mordukhovich generalized differentiation notions commonly used in variational analysis; see [6, 25] for more details. Throughout the paper, all spaces under consideration are finite dimensional spaces denoted by \(\mathbb {R}^n, n\in \mathbb {N}\), and the inner product and the norm of \(\mathbb {R}^n\) are denoted respectively by \(\langle \bullet ,\bullet \rangle \) and \(\Vert \bullet \Vert \). In general, we denote by \(\mathcal {B}_{\mathbb {R}^n}\) the closed unit ball in \(\mathbb {R}^n\), and by \(\mathbb {B}(\hat{x},\delta )\) the open ball with center at \(\hat{x}\) and radius \(\delta >0\) for any \(\hat{x}\in \mathbb {R}^n\). For a nonempty subset \(S\subset \mathbb {R}^n\), we denote by \(\mathrm {cl}\,S\), \(\mathrm {bd}\,S\) and \(\mathrm {cone}\,S\) the closure, boundary and conic hull of S, respectively. As usual, the distance function \(d(\bullet ,S):\mathbb {R}^n\rightarrow \mathbb {R}\), the indicator function \(\psi (\bullet ,S):\mathbb {R}^n\rightarrow \mathbb {R}\cup \{+\infty \}\) and the support function \(\psi ^*(\bullet ,S):\mathbb {R}^n\rightarrow \mathbb {R}\cup \{+\infty \}\) of S are respectively defined by

$$\begin{aligned} d(x,S):=\inf _{y\in S}\Vert x-y\Vert ,\quad \forall x\in \mathbb {R}^n, \end{aligned}$$
$$\begin{aligned} \psi (x,S):=0~\mathrm {if}~x\in S\quad \mathrm {and}\quad \psi (x,S):=+\infty ,~\mathrm {if}~x\notin S, \end{aligned}$$

and

$$\begin{aligned} \psi ^*(x^*,S):=\sup _{x\in S}~\langle x^*,x\rangle ,\quad \forall x^*\in \mathbb {R}^n. \end{aligned}$$
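These three functions are easy to evaluate in simple cases; the following sketch (with the assumed set \(S=[-1,1]^2\), chosen only for illustration) computes each of them:

```python
import math

# Illustrative set (not from the paper): S = [-1, 1]^2 in R^2.
def proj_box(x):
    """Euclidean projection onto S, computed componentwise."""
    return [min(max(xi, -1.0), 1.0) for xi in x]

def dist(x):
    """d(x, S) = ||x - proj_S(x)|| for a convex set S."""
    p = proj_box(x)
    return math.hypot(x[0] - p[0], x[1] - p[1])

def indicator(x):
    """psi(x, S): 0 on S, +infinity outside."""
    return 0.0 if all(-1.0 <= xi <= 1.0 for xi in x) else math.inf

def support(x_star):
    """psi*(x*, S) = sup_{x in S} <x*, x> = |x*_1| + |x*_2| for this box."""
    return abs(x_star[0]) + abs(x_star[1])

assert dist([3.0, 1.0]) == 2.0
assert indicator([0.5, -0.5]) == 0.0 and indicator([2.0, 0.0]) == math.inf
assert support([2.0, -1.0]) == 3.0
```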

Given a point \(\hat{x}\in S\), the set S is called closed around \(\hat{x}\) iff there is a neighborhood U of \(\hat{x}\) such that \(S\cap U\) is closed. Moreover, if S is closed around every \(\hat{x}\in S\), then S is said to be locally closed. Let S be closed around \(\hat{x}\). Recall that the contingent cone \(T(S,\hat{x})\) of S at \(\hat{x}\) is \(T(S,\hat{x}):=\{v\in \mathbb {R}^n\mid \exists v_n\rightarrow v, \exists t_n\downarrow 0\;\mathrm {s.t.}\;\hat{x}+t_nv_n\in S,\,\forall n\in \mathbb {N}\}\), and the Fréchet normal cone \(\widehat{N}(S,\hat{x})\) of S at \(\hat{x}\), which is a convex and closed subset of \(\mathbb {R}^n\) consisting of all the Fréchet normals, has the form

$$\begin{aligned} \widehat{N}(S,\hat{x}):=\left\{ x^*\in \mathbb {R}^n\mid \limsup _{x\xrightarrow []{S}\hat{x}}\frac{\langle x^*, x-\hat{x}\rangle }{\Vert x-\hat{x}\Vert }\le 0\right\} , \end{aligned}$$

where \(x\xrightarrow []{S}\hat{x}\) means \(x\rightarrow \hat{x}\) and \(x\in S\). In particular, if \(\hat{x}\notin S\), we set \(\widehat{N}(S,\hat{x})=\varnothing \). The Mordukhovich (or basic, limiting) normal cone \(N(S,\hat{x})\) of S at \(\hat{x}\) is defined by

$$\begin{aligned} N(S,\hat{x}):=\left\{ x^*\in \mathbb {R}^n\mid \exists x_n\xrightarrow []{S}\hat{x}, \exists x^*_n\rightarrow x^*\;\mathrm {with}\;x^*_n\in \widehat{N}(S, x_n),\quad \forall n\in \mathbb {N}\right\} . \end{aligned}$$

In particular, if S is convex, then \(T(S,\hat{x})=\mathrm {cl}\,\mathrm {cone}\,(S-\hat{x})\) and

$$\begin{aligned} \widehat{N}(S,\hat{x})=N(S,\hat{x})=T(S,\hat{x})^\circ =\left\{ x^*\in \mathbb {R}^n\mid \langle x^*, x-\hat{x}\rangle \le 0,\quad \forall x\in S\right\} . \end{aligned}$$
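As an illustration of the convex case (assumed set, not from the paper): for \(S=[-1,1]^2\) and \(\hat{x}=(1,1)\), the normal cone is \(N(S,\hat{x})=\mathbb {R}^2_+\), and the defining inequality \(\langle x^*, x-\hat{x}\rangle \le 0\) can be sampled numerically:

```python
import itertools

# S = [-1, 1]^2, x_hat = (1, 1); N(S, x_hat) = R^2_+ (nonnegative quadrant).
x_hat = (1.0, 1.0)
normals = [(1.0, 0.0), (0.0, 1.0), (2.0, 3.0), (0.0, 0.0)]  # samples of R^2_+
grid = [-1.0, -0.5, 0.0, 0.5, 1.0]

for x_star in normals:
    for x in itertools.product(grid, grid):  # samples of S
        inner = x_star[0]*(x[0]-x_hat[0]) + x_star[1]*(x[1]-x_hat[1])
        assert inner <= 1e-12  # <x*, x - x_hat> <= 0 on S
print("normal cone inequality verified on all samples")
```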

Let \(h:\mathbb {R}^n\rightarrow \mathbb {R}\cup \{+\infty \}\) be an extended real-valued function and let \(\hat{x}\in \mathrm {dom}\;h\), where \(\mathrm {dom}\;h:=\{x\in \mathbb {R}^n\mid h(x)<+\infty \}\) denotes the domain of h. The analytic \(\varepsilon \)-subdifferential \(\widehat{\partial }_{a\varepsilon }h(\hat{x})\) of h at \(\hat{x}\) is defined by

$$\begin{aligned} \widehat{\partial }_{a\varepsilon }h(\hat{x}):=\left\{ x^*\in \mathbb {R}^n\mid \liminf _{x\rightarrow \hat{x},~x\ne \hat{x}}\frac{h(x)-h(\hat{x})-\langle x^*,x-\hat{x}\rangle }{\Vert x-\hat{x}\Vert }\ge -\varepsilon \right\} . \end{aligned}$$

In particular, when \(\varepsilon =0\), the analytic \(\varepsilon \)-subdifferential \(\widehat{\partial }_{a\varepsilon }h(\hat{x})\) of h at \(\hat{x}\) reduces to the Fréchet subdifferential, denoted by \(\widehat{\partial }h(\hat{x})\). The Mordukhovich (or basic, limiting) subdifferential \(\partial h(\hat{x})\) of h at \(\hat{x}\) is defined by

$$\begin{aligned} \partial h(\hat{x}):=\left\{ x^*\in \mathbb {R}^n\mid \exists x_n\xrightarrow []{h}\hat{x}, \exists x^*_n\rightarrow x^*~\mathrm {with}~x^*_n\in \widehat{\partial }h(x_n),\quad \forall n\in \mathbb {N}\right\} , \end{aligned}$$

where \(x_n\xrightarrow []{h}\hat{x}\) means \(x_n\rightarrow \hat{x}\) and \(h(x_n)\rightarrow h(\hat{x})\). If \(\hat{x}\notin \mathrm {dom}\,h\), then we set \(\widehat{\partial } h(\hat{x})=\partial h(\hat{x})=\varnothing \). Obviously, \(\widehat{\partial } h(\hat{x})\subset \partial h(\hat{x})\) for all \(\hat{x}\in \mathbb {R}^n\). In particular, we get \(\widehat{\partial }\psi (\hat{x},S)=\widehat{N}(S,\hat{x})\) and \(\partial \psi (\hat{x},S)=N(S,\hat{x})\). Simultaneously, \(\widehat{\partial }d(\hat{x}, S)=\mathcal {B}_{\mathbb {R}^n}\cap \widehat{N}(S,\hat{x})\) and \(\partial d(\hat{x}, S)\subset \mathcal {B}_{\mathbb {R}^n}\cap N(S,\hat{x})\). Furthermore, if h is a convex function, then we have

$$\begin{aligned} \widehat{\partial }h(\hat{x})=\partial h(\hat{x})=\left\{ x^*\in \mathbb {R}^n\mid \langle x^*,x-\hat{x}\rangle \le h(x)-h(\hat{x}),\quad \forall x\in \mathbb {R}^n\right\} . \end{aligned}$$
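For instance (illustrative, with the assumed convex function \(h(x)=|x|\)): \(\partial h(0)=[-1,1]\), and membership can be tested via the subgradient inequality \(\langle x^*,x-0\rangle \le h(x)-h(0)\) on sample points:

```python
def is_subgradient(x_star, samples):
    """Check the convex subgradient inequality x* * (x - 0) <= |x| - |0|."""
    return all(x_star * x <= abs(x) + 1e-12 for x in samples)

samples = [-2.0, -1.0, -0.1, 0.0, 0.1, 1.0, 2.0]
assert is_subgradient(0.5, samples) and is_subgradient(-1.0, samples)
assert not is_subgradient(1.5, samples)  # violated at x = 1.0: 1.5 > 1.0
print("subdifferential of |.| at 0 is [-1, 1]")
```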

Next, we recall some useful and important propositions and definitions for this paper. First of all, the following necessary optimality condition, called the generalized Fermat rule, for a function to attain a local minimum plays a key role in our analysis.

Theorem 2.1

[6, 25] Let \(h:\mathbb {R}^n\rightarrow \mathbb {R}\cup \{+\infty \}\) be a proper lower semicontinuous function. If h attains a local minimum at \(\hat{x}\in \mathbb {R}^n\), then \(0_{\mathbb {R}^n}\in \widehat{\partial }h(\hat{x})\), which implies \(0_{\mathbb {R}^n}\in \partial h(\hat{x})\).

We recall the following fuzzy sum rule for the Fréchet subdifferential and the sum rule for the Mordukhovich subdifferential, which are important in the sequel.

Theorem 2.2

[6, 25] Let \(f,h:\mathbb {R}^n\rightarrow \mathbb {R}\cup \{+\infty \}\) be proper lower semicontinuous around \(\hat{x}\in \mathrm {dom}\,f\cap \mathrm {dom}\,h\). If f is Lipschitz continuous around \(\hat{x}\), then

  1. (i)

    for every \(x^*\in \widehat{\partial }(f+h)(\hat{x})\) and every \(\epsilon >0\), there exist \(x_1,x_2\in \mathbb {B}(\hat{x},\epsilon )\) such that \(|f(x_1)-f(\hat{x})|<\epsilon \), \(|h(x_2)-h(\hat{x})|<\epsilon \) and \(x^*\in \widehat{\partial }f(x_1)+\widehat{\partial }h(x_2)+\epsilon \mathcal {B}_{\mathbb {R}^n}\).

  2. (ii)

    \(\partial (f+h)(\hat{x})\subset \partial f(\hat{x})+\partial h(\hat{x})\).

Finally in this section, we recall the following approximate projection theorem for closed convex sets, which is important for this paper.

Theorem 2.3

[26] Assume that \(S\subset \mathbb {R}^n\) is a closed, convex and nonempty subset. Let \(\theta \in (0,1)\). Then, for every \(x\notin S\), there exist \(\tilde{x}\in \mathrm {bd}\;S\) and \(x^*\in N(S,\tilde{x})\) with \(\Vert x^*\Vert =1\) such that

$$\begin{aligned} \theta \Vert x-\tilde{x}\Vert <\min \{d(x,S), \langle x^*,x-\tilde{x}\rangle \}. \end{aligned}$$
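A numerical sanity check of Theorem 2.3 (illustrative, with the assumed set \(S\) the closed unit disk in \(\mathbb {R}^2\)): taking \(\tilde{x}\) as the exact projection of \(x\) onto S and \(x^*\) as the outward unit normal at \(\tilde{x}\), the strict inequality holds for any \(\theta \in (0,1)\):

```python
import math

# S = closed unit disk in R^2; each test point x lies outside S (||x|| > 1).
theta = 0.9
for x in [(3.0, 4.0), (0.0, 2.0), (-1.5, -2.0)]:
    norm = math.hypot(*x)
    x_tilde = (x[0]/norm, x[1]/norm)  # projection of x onto bd S
    x_star = x_tilde                  # outward unit normal: ||x*|| = 1
    d = norm - 1.0                    # d(x, S)
    inner = x_star[0]*(x[0]-x_tilde[0]) + x_star[1]*(x[1]-x_tilde[1])
    assert abs(inner - d) < 1e-12     # here <x*, x - x_tilde> = d(x, S)
    gap = math.hypot(x[0]-x_tilde[0], x[1]-x_tilde[1])
    assert theta * gap < min(d, inner)
print("approximate projection inequality verified")
```

The theorem is stronger than this sketch suggests: it only asks for an approximate projection, which is what makes it applicable in the proof of Theorem 3.2 below without computing exact projections.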

3 Main results

In this section, we focus our attention on establishing some optimality conditions for the local (global) weak sharp efficiency in multiobjective optimization problems in terms of the advanced tools of variational analysis and generalized differentiation. Concretely, by using the generalized Fermat rule, the Mordukhovich subdifferential for maximum functions, the fuzzy sum rule for Fréchet subdifferentials and the sum rule for Mordukhovich subdifferentials, we first establish a necessary condition for the local weak sharp efficient solution of (MOP). Moreover, by employing the approximate projection theorem, and some appropriate convexity and affineness conditions, we also obtain some sufficient conditions for the local and global weak sharp efficient solutions of (MOP), respectively.

Given arbitrary \(\tilde{x}\in \Omega \), we set

$$\begin{aligned} \mathcal {I}(\tilde{x}):=\{ i\in \mathcal {I}\mid g_i(\tilde{x})=0\} \quad \mathrm {and}\quad \mathcal {J}(\tilde{x}):=\{j\in \mathcal {J}\mid h_j(\tilde{x})=0\}. \end{aligned}$$

Definition 3.1

[3] Given arbitrary \(\tilde{x}\in \Omega \), the constraint qualification \(\mathrm {(CQ)}\) is said to be satisfied at \(\tilde{x}\) iff there do not exist \(\mu _i\ge 0, i\in \mathcal {I}(\tilde{x})\) and \(\gamma _j\ge 0, j\in \mathcal {J}(\tilde{x})\), such that \(\sum _{i\in \mathcal {I}(\tilde{x})}\mu _i+\sum _{j\in \mathcal {J}(\tilde{x})}\gamma _j\ne 0\) and

$$\begin{aligned} 0_{\mathbb {R}^n}\in \sum _{i\in \mathcal {I}(\tilde{x})}\mu _i\partial g_i(\tilde{x})+\sum _{j\in \mathcal {J}(\tilde{x})}\gamma _j\Big (\partial h_j(\tilde{x})\cup \partial (-h_j)(\tilde{x})\Big )+N(\Omega ,\tilde{x}). \end{aligned}$$

As shown in [3], the (CQ) defined in Definition 3.1 is guaranteed by the well-known Mangasarian-Fromovitz constraint qualification in the smooth setting when \(\tilde{x}\in S\) and \(\Omega =\mathbb {R}^n\). We refer to [3, 4, 6, 25] for more details.

Next, we establish the following necessary optimality condition for local weak sharp efficient solutions of (MOP) under the above-defined (CQ).

Theorem 3.1

Assume that the \(\mathrm {(CQ)}\), defined by Definition 3.1, is satisfied around \(\hat{x}\in S\) with respect to \(\widetilde{S}\), i.e., there exists a neighborhood U of \(\hat{x}\) such that the \(\mathrm {(CQ)}\) holds at any \(\tilde{x}\in \widetilde{S}\cap U\). If \(\hat{x}\) is a local weak sharp efficient solution for \(\mathrm {(MOP)}\), then there exist real numbers \(\eta ,\delta >0\) such that for every \(\tilde{x}\in \widetilde{S}\cap \mathbb {B}(\hat{x},\delta )\), one has

$$\begin{aligned} \eta \mathcal {B}_{\mathbb {R}^n}\cap \widehat{N}(\widetilde{S},\tilde{x})\subset & {} \left\{ \sum _{k\in \mathcal {K}}\lambda _k\partial f_k(\tilde{x})+\sum _{i\in \mathcal {I}}\mu _i\partial g_i(\tilde{x})+\sum _{j\in \mathcal {J}}\gamma _j\Big (\partial h_j(\tilde{x})\cup \partial (-h_j)(\tilde{x})\Big )\mid \lambda _k\ge 0, k\in \mathcal {K}\right. ,\nonumber \\&\left. \sum _{k\in \mathcal {K}}\lambda _k=1, \mu _i\ge 0, \mu _ig_i(\tilde{x})=0, i\in \mathcal {I}, \gamma _j\ge 0, j\in \mathcal {J}\right\} +N(\Omega ,\tilde{x}). \end{aligned}$$
(3.1)

Proof

Since \(\hat{x}\in S\) is a local weak sharp efficient solution for (MOP), there exist real numbers \(\eta ,\delta _1>0\) such that

$$\begin{aligned} \max _{1\le k\le m}\{f_k(x)-f_k(\hat{x})\}\ge \eta d(x,\widetilde{S}),\quad \forall x\in S\cap \mathbb {B}(\hat{x},\delta _1). \end{aligned}$$
(3.2)

Moreover, the (CQ) is satisfied around \(\hat{x}\in S\) with respect to \(\widetilde{S}\). Thus, there exists some \(\delta _2>0\) such that the (CQ) holds at any \(\tilde{x}\in \widetilde{S}\cap \mathbb {B}(\hat{x},\delta _2)\). Let \(0<\delta <\min \{\frac{1}{2}\delta _1,\delta _2\}\) and take arbitrary \(\tilde{x}\in \widetilde{S}\cap \mathbb {B}(\hat{x},\delta )\). It follows from \(\widehat{\partial }d(\tilde{x},\widetilde{S})=\mathcal {B}_{\mathbb {R}^n}\cap \widehat{N}(\widetilde{S},\tilde{x})\) that for every \(x^*\in \mathcal {B}_{\mathbb {R}^n}\cap \widehat{N}(\widetilde{S},\tilde{x})\), we have \(x^*\in \widehat{\partial }d(\tilde{x},\widetilde{S})\). Then, for every \(\epsilon >0\), there exists \(0<\delta _3<\frac{1}{2}\delta _1\) such that

$$\begin{aligned} \langle x^*, x-\tilde{x}\rangle \le d(x,\widetilde{S})+\epsilon \Vert x-\tilde{x}\Vert ,\quad \forall x\in \mathbb {B}(\tilde{x},\delta _3). \end{aligned}$$
(3.3)

Note that, for every \(x\in \mathbb {B}(\tilde{x},\delta _3)\), we get \(\Vert x-\hat{x}\Vert \le \Vert x-\tilde{x}\Vert +\Vert \tilde{x}-\hat{x}\Vert <\delta _3+\delta <\delta _1\), that is, \(x\in \mathbb {B}(\hat{x},\delta _1)\). Together with (3.2) and (3.3), it follows

$$\begin{aligned} \eta \langle x^*, x-\tilde{x}\rangle \le \max _{1\le k\le m}\{f_k(x)-f_k(\hat{x})\}+\eta \epsilon \Vert x-\tilde{x}\Vert ,\quad \forall x\in S\cap \mathbb {B}(\tilde{x},\delta _3). \end{aligned}$$
(3.4)

Furthermore, since \(\tilde{x}\in \widetilde{S}\), i.e., \(\tilde{x}\in S\) and \(f(\tilde{x})=f(\hat{x})\), (3.4) implies that \(\tilde{x}\) is a local minimum point of the following function \(\varphi :\mathbb {R}^n\rightarrow \mathbb {R}\cup \{+\infty \}\) defined by

$$\begin{aligned} \varphi (x)\!:=\!-\eta \langle x^*, x-\tilde{x}\rangle +\max _{1\le k\le m}\{f_k(x)-f_k(\hat{x})\}\!+\!\eta \epsilon \Vert x-\tilde{x}\Vert +\psi (x,S),\quad \forall x\in \mathbb {R}^n. \end{aligned}$$

Applying Theorem 2.1 (generalized Fermat rule), we have \(0_{\mathbb {R}^n}\in \widehat{\partial }\varphi (\tilde{x})\). Since \(f_k, k\in \mathcal {K}\) are all Lipschitz continuous around \(\tilde{x}\), the function \(\phi :\mathbb {R}^n\rightarrow \mathbb {R}\), defined by \(\phi (x):=\max _{1\le k\le m}\{f_k(x)-f_k(\hat{x})\}\), is also Lipschitz continuous around \(\tilde{x}\); let \(\ell >0\) denote its Lipschitz modulus. Moreover, since S is locally closed, the indicator function \(\psi (\bullet ,S)\) is lower semicontinuous around \(\tilde{x}\). Together with the global Lipschitz continuity of \(\Vert \bullet -\tilde{x}\Vert \) with modulus 1, we get from Theorem 2.2(i) (fuzzy sum rule for Fréchet subdifferentials) that for the preceding \(\epsilon >0\), there exist \(x^\epsilon _1, x^\epsilon _2, x^\epsilon _3\in \mathbb {B}(\tilde{x},\epsilon )\), such that

$$\begin{aligned} |\phi (x^\epsilon _1)|<\epsilon ,\quad \eta \epsilon \Vert x^\epsilon _2-\tilde{x}\Vert <\epsilon ,\;\psi (x^\epsilon _3,S)<\epsilon \end{aligned}$$

and

$$\begin{aligned} \eta x^*\in \widehat{\partial }\phi (x^\epsilon _1)+\eta \epsilon \widehat{\partial }\Vert \bullet -\tilde{x}\Vert (x^\epsilon _2)+ \widehat{\partial }\psi (\bullet ,S)(x^\epsilon _3)+\epsilon \mathcal {B}_{\mathbb {R}^n}. \end{aligned}$$

Since \(\psi (x^\epsilon _3,S)<\epsilon \), we have \(x^\epsilon _3\in S\) and \(\widehat{\partial }\psi (\bullet ,S)(x^\epsilon _3)=\widehat{N}(S,x^\epsilon _3)\). Note that \(\phi \) is Lipschitz continuous around \(\tilde{x}\) with modulus \(\ell \) and \(x^\epsilon _1\in \mathbb {B}(\tilde{x},\epsilon )\). Then it follows from [25, Proposition 1.85] with \(\varepsilon =0\) that for all sufficiently small \(\epsilon >0\), one has \(\widehat{\partial }\phi (x^\epsilon _1)\subset \ell \mathcal {B}_{\mathbb {R}^n}\). Similarly, we get \(\widehat{\partial }\Vert \bullet -\tilde{x}\Vert (x^\epsilon _2)\subset \mathcal {B}_{\mathbb {R}^n}\). Together with \(\mathcal {B}_{\mathbb {R}^n}\) being compact, and moreover, \(x^\epsilon _1, x^\epsilon _2, x^\epsilon _3\in \mathbb {B}(\tilde{x},\epsilon )\), we have

$$\begin{aligned} x^\epsilon _1\xrightarrow []{\phi }\tilde{x},\quad x^\epsilon _2\xrightarrow []{\Vert \bullet -\tilde{x}\Vert }\tilde{x},\quad x^\epsilon _3\xrightarrow []{S}\tilde{x},\quad \text {as}~\epsilon \downarrow 0, \end{aligned}$$

which imply

$$\begin{aligned} \eta x^*\in \partial \phi (\tilde{x})+ N(S,\tilde{x}), \end{aligned}$$
(3.5)

(taking \(\epsilon \downarrow 0\) and passing to a subsequence if necessary). On the one hand, by using the formula for the Mordukhovich subdifferential of maximum functions [25, Theorem 3.46(ii)] and Theorem 2.2(ii) (sum rule for Mordukhovich subdifferentials), we get from \(\tilde{x}\in \widetilde{S}\) that \(f(\tilde{x})=f(\hat{x})\) and

$$\begin{aligned} \partial \phi (\tilde{x})\subset \left\{ \sum _{k\in \mathcal {K}}\lambda _k\partial f_k(\tilde{x})\mid \lambda _k\ge 0, k\in \mathcal {K},\quad \sum _{k\in \mathcal {K}}\lambda _k=1\right\} . \end{aligned}$$
(3.6)

On the other hand, let

$$\begin{aligned} \Lambda :=\left\{ x\in \mathbb {R}^n\mid g_i(x)\le 0, i\in \mathcal {I}, h_j(x)=0, j\in \mathcal {J}\right\} . \end{aligned}$$

Then \(S=\Omega \cap \Lambda \). Since \(\delta <\delta _2\) and \(\tilde{x}\in \widetilde{S}\cap \mathbb {B}(\hat{x},\delta )\), the (CQ) holds at \(\tilde{x}\). Note that

$$\begin{aligned}&\left\{ \sum _{i\in \mathcal {I}(\tilde{x})}\mu _i\partial g_i(\tilde{x})+\sum _{j\in \mathcal {J}(\tilde{x})}\gamma _j\Big (\partial h_j(\tilde{x})\cup \partial (-h_j)(\tilde{x})\Big )\mid \mu _i\ge 0, i\in \mathcal {I}(\tilde{x}), \gamma _j\ge 0, j\in \mathcal {J}(\tilde{x})\right\} \\\subset & {} \left\{ \sum _{i\in \mathcal {I}(\tilde{x})}\mu _i\partial g_i(\tilde{x})+\sum _{j\in \mathcal {J}(\tilde{x})}\gamma _j\Big (\partial h_j(\tilde{x})\cup \partial (-h_j)(\tilde{x})\Big )\mid \mu _i\ge 0, i\in \mathcal {I}(\tilde{x}), \gamma _j\ge 0, j\in \mathcal {J}(\tilde{x})\right\} +N(\Omega ,\tilde{x}) \end{aligned}$$

always holds since \(0_{\mathbb {R}^n}\in N(\Omega ,\tilde{x})\). Thus, it follows from the (CQ) that there do not exist \(\mu _i\ge 0, i\in \mathcal {I}(\tilde{x})\) and \(\gamma _j\ge 0, j\in \mathcal {J}(\tilde{x})=\mathcal {J}\), such that \(\sum _{i\in \mathcal {I}(\tilde{x})}\mu _i+\sum _{j\in \mathcal {J}}\gamma _j\ne 0\) and

$$\begin{aligned} 0_{\mathbb {R}^n}\in \sum _{i\in \mathcal {I}(\tilde{x})}\mu _i\partial g_i(\tilde{x})+\sum _{j\in \mathcal {J}}\gamma _j\Big (\partial h_j(\tilde{x})\cup \partial (-h_j)(\tilde{x})\Big ). \end{aligned}$$

Applying [25, Corollary 4.36], we have

$$\begin{aligned} N(\Lambda ,\tilde{x})\subset \left\{ \sum _{i\in \mathcal {I}(\tilde{x})}\mu _i\partial g_i(\tilde{x})+\sum _{j\in \mathcal {J}}\gamma _j\Big (\partial h_j(\tilde{x})\cup \partial (-h_j)(\tilde{x})\Big )\mid \mu _i\ge 0, i\in \mathcal {I}(\tilde{x}), \gamma _j\ge 0, j\in \mathcal {J}\right\} .\nonumber \\ \end{aligned}$$
(3.7)

Moreover, it follows from (3.7) and [25, Corollary 3.37] that the (CQ) being satisfied at \(\tilde{x}\) implies

$$\begin{aligned} N(S,\tilde{x})=N(\Omega \cap \Lambda ,\tilde{x})\subset N(\Omega ,\tilde{x})+N(\Lambda ,\tilde{x}). \end{aligned}$$
(3.8)

Let \(\mu _i=0\) for every \(i\in \mathcal {I}\setminus \mathcal {I}(\tilde{x})\). Then, it follows from (3.7) and (3.8) that

$$\begin{aligned} N(S,\tilde{x})\subset & {} \Big \{\sum _{i\in \mathcal {I}}\mu _i\partial g_i(\tilde{x})+\sum _{j\in \mathcal {J}}\gamma _j\Big (\partial h_j(\tilde{x})\cup \partial (-h_j)(\tilde{x})\Big )\mid \mu _i\ge 0, \mu _ig_i(\tilde{x})=0, i\in \mathcal {I},\nonumber \\&\gamma _j\ge 0, j\in \mathcal {J}\Big \}+N(\Omega ,\tilde{x}). \end{aligned}$$
(3.9)

Note that \(\tilde{x}\in \widetilde{S}\cap \mathbb {B}(\hat{x},\delta )\) and \(x^*\in \mathcal {B}_{\mathbb {R}^n}\cap \widehat{N}(\widetilde{S},\tilde{x})\) are arbitrary. Thus, combining (3.5), (3.6) and (3.9), we can conclude that (3.1) holds. This completes the proof. \(\square \)

The following example shows that the \(\mathrm {(CQ)}\) being satisfied around \(\hat{x}\in S\) with respect to \(\widetilde{S}\) is essential for Theorem 3.1.

Example 3.1

Let the vector-valued map \(f:\mathbb {R}\rightarrow \mathbb {R}^2\) be defined by \(f(x):=(f_1(x),f_2(x))\) with \(f_1(x)=f_2(x):=\min \{0,x\}\) for all \(x\in \mathbb {R}\). Moreover, let the functions \(g:\mathbb {R}\rightarrow \mathbb {R}\) and \(h:\mathbb {R}\rightarrow \mathbb {R}\) be respectively defined by \(g(x):=(x-1)^3\) and \(h(x):=\min \{0,x\}\) for all \(x\in \mathbb {R}\). Take \(\Omega :=[-2,2]\subset \mathbb {R}\) and consider the following constrained multiobjective optimization problem:

$$\begin{aligned} \min f(x)\quad \mathrm {s.t.}\quad g(x)\le 0, h(x)=0, x\in \Omega . \end{aligned}$$

It is easy to verify that the feasible set \(S=[0,1]\). In particular, let \(\hat{x}=1\in S\). Then, \(f(\hat{x})=(0,0)\) and \(\widetilde{S}=S\cap f^{-1}(f(\hat{x}))=S\). Obviously, \(\hat{x}\) is not isolated in \(\widetilde{S}\). It follows from Definition 1.1 that \(\hat{x}\in S\) is a local weak sharp efficient solution. However, (3.1) does not hold for any \(\eta , \delta >0\). In fact, by direct calculation, we have \(\partial f_1(\hat{x})=\partial f_2(\hat{x})=\{0\}\), \(\partial g(\hat{x})=\{0\}\), \(\partial h(\hat{x})=\{0\}\), \(\partial (-h)(\hat{x})=\{0\}\), \(N(\Omega ,\hat{x})=\{0\}\) and \(N(\widetilde{S},\hat{x})=[0, +\infty )\). Thus, we get \(\eta \mathcal {B}_{\mathbb {R}}\cap N(\widetilde{S},\hat{x})=[0,\eta ]\) and

$$\begin{aligned}&\left\{ \sum _{k\in \mathcal {K}}\lambda _k\partial f_k(\hat{x})+\sum _{i\in \mathcal {I}}\mu _i\partial g_i(\hat{x})+\sum _{j\in \mathcal {J}}\gamma _j\Big (\partial h_j(\hat{x})\cup \partial (-h_j)(\hat{x})\Big )\mid \lambda _k\ge 0, k\in \mathcal {K}\right. , \nonumber \\&\quad \left. \sum _{k\in \mathcal {K}}\lambda _k=1, \mu _i\ge 0, \mu _ig_i(\hat{x})=0, i\in \mathcal {I}, \gamma _j\ge 0, j\in \mathcal {J}\right\} +N(\Omega ,\hat{x})=\{0\}, \end{aligned}$$

which shows that (3.1) does not hold for any \(\eta , \delta >0\). Meanwhile, the (CQ) is also not satisfied around \(\hat{x}\in S\) with respect to \(\widetilde{S}\).
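The computations in Example 3.1 can be checked numerically; the following sketch (illustrative only) samples the data of the example and verifies that \(S=[0,1]\) and that the weak sharp inequality holds trivially, since \(f\) vanishes identically on \(S=\widetilde{S}\):

```python
# Data of Example 3.1: f_1(x) = f_2(x) = min(0, x), g(x) = (x - 1)^3,
# h(x) = min(0, x), Omega = [-2, 2], x_hat = 1.
f = lambda x: min(0.0, x)
g = lambda x: (x - 1.0)**3
h = lambda x: min(0.0, x)

# Feasible set S = {x in [-2, 2] : g(x) <= 0, h(x) = 0} = [0, 1] (sampled grid).
S = [x/100 for x in range(-200, 201)
     if g(x/100) <= 0 and h(x/100) == 0]
assert min(S) == 0.0 and max(S) == 1.0

# On S, f is identically (0, 0), so S_tilde = S and the weak sharp inequality
# max_k (f_k(x) - f_k(x_hat)) >= eta * d(x, S_tilde) holds with both sides zero.
for x in S:
    assert max(f(x) - f(1.0), f(x) - f(1.0)) == 0.0
print("Example 3.1 data verified on the sample grid")
```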

It is worth noting that if \(\hat{x}\in S\) is a local sharp efficient solution for (MOP), then \(\hat{x}\) is isolated in \(\widetilde{S}\). Thus, \(\widehat{N}(\widetilde{S},\hat{x})=\mathbb {R}^n\) and the (CQ) only needs to be satisfied at \(\hat{x}\). Together with Theorem 3.1, the following necessary condition for the local sharp efficiency, corresponding to [3, Theorem 3.5], immediately holds:

Corollary 3.1

Assume that the \(\mathrm {(CQ)}\), defined by Definition 3.1, is satisfied at \(\hat{x}\in S\). If \(\hat{x}\) is a local sharp efficient solution for \(\mathrm {(MOP)}\), then there exists a real number \(\eta >0\) such that

$$\begin{aligned} \eta \mathcal {B}_{\mathbb {R}^n}\subset & {} \left\{ \sum _{k\in \mathcal {K}}\lambda _k\partial f_k(\hat{x})+\sum _{i\in \mathcal {I}}\mu _i\partial g_i(\hat{x})+\sum _{j\in \mathcal {J}}\gamma _j\Big (\partial h_j(\hat{x})\cup \partial (-h_j)(\hat{x})\Big )\mid \lambda _k\ge 0, k\in \mathcal {K}\right. ,\nonumber \\&\left. \sum _{k\in \mathcal {K}}\lambda _k=1, \mu _i\ge 0, \mu _ig_i(\hat{x})=0, i\in \mathcal {I}, \gamma _j\ge 0, j\in \mathcal {J}\right\} +N(\Omega ,\hat{x}). \end{aligned}$$

Remark 3.1

Note that, in order to establish the necessary condition for the local sharp efficiency, Chuong [3, Theorem 3.5] employed the exact sum rule for Fréchet subdifferentials, which is exploited in [27, Theorem 3.1]; see also Chuong and Yao [4, Theorem 3.1] for the semi-infinite vector optimization problem. Simultaneously, Zhou et al. [28, Theorem 4.1] also obtained some necessary conditions by virtue of the exact sum rule for Mordukhovich subdifferentials under appropriate regularity and differentiability assumptions. However, we use a different method, i.e., Theorem 2.2(i) (fuzzy sum rule for Fréchet subdifferentials), to obtain a more general necessary condition for the local weak sharp efficiency in Theorem 3.1.

Now, by employing the approximate projection theorem, and some appropriate convexity and affineness conditions, we establish the following sufficient optimality conditions for the local and global weak sharp efficient solutions of (MOP), respectively.

Theorem 3.2

Given \(\hat{x}\in S\), let \(\Omega \) be a closed and convex set, and \(\widetilde{S}\) be convex. Suppose that \(f_k, k\in \mathcal {K}\) and \(g_i, i\in \mathcal {I}\) are convex functions, and \(h_j, j\in \mathcal {J}\) are affine functions. If there exist real numbers \(\eta ,\delta >0\) such that for every \(\tilde{x}\in \widetilde{S}\cap \mathbb {B}(\hat{x},\delta )\),

$$\begin{aligned} \eta \mathcal {B}_{\mathbb {R}^n}\!\cap \! N(\widetilde{S},\tilde{x})\subset & {} \left\{ \sum _{k\in \mathcal {K}}\lambda _k\partial f_k(\tilde{x})+\sum _{i\in \mathcal {I}}\mu _i\partial g_i(\tilde{x})+\sum _{j\in \mathcal {J}}\gamma _j\Big (\partial h_j(\tilde{x})\cup \partial (-h_j)(\tilde{x})\Big )\mid \lambda _k\ge 0, k\in \mathcal {K}\right. ,\nonumber \\&\left. \sum _{k\in \mathcal {K}}\lambda _k=1, \mu _i\ge 0, \mu _ig_i(\tilde{x})=0, i\in \mathcal {I}, \gamma _j\ge 0, j\in \mathcal {J}\right\} +N(\Omega ,\tilde{x}), \end{aligned}$$
(3.10)

then \(\hat{x}\) is a local weak sharp efficient solution for \(\mathrm {(MOP)}\). In particular, if (3.10) holds for \(\delta =+\infty \), then \(\hat{x}\) is a global weak sharp efficient solution for \(\mathrm {(MOP)}\).

Proof

Note that \(\Omega \) is a closed and convex set, \(g_i, i\in \mathcal {I}\) are convex functions, and \(h_j, j\in \mathcal {J}\) are affine functions. Thus, the feasible set S is closed and convex. Together with the local Lipschitz continuity of \(f_k, k\in \mathcal {K}\) and the convexity of \(\widetilde{S}\), it follows that \(\widetilde{S}=S\cap f^{-1}(f(\hat{x}))\) is a closed and convex set. Next, we consider the following two cases with \(0<\delta <+\infty \) and \(\delta =+\infty \), respectively.

(i) Assume that (3.10) holds for \(0<\delta <+\infty \). Let \(0<\delta _1<\frac{1}{2}\delta \). In what follows, we show

$$\begin{aligned} \eta d(x,\widetilde{S})\le \max _{1\le k\le m}\{f_k(x)-f_k(\hat{x})\},\quad \forall x\in S\cap \mathbb {B}(\hat{x},\delta _1). \end{aligned}$$
(3.11)

In fact, take arbitrary \(x\in S\cap \mathbb {B}(\hat{x},\delta _1)\). If \(x\in \widetilde{S}\), i.e., \(x\in S\) and \(f(x)=f(\hat{x})\), then (3.11) holds trivially. If instead \(x\notin \widetilde{S}\), then \(0<d(x,\widetilde{S})\le \Vert x-\hat{x}\Vert <\delta _1\) since \(\hat{x}\in \widetilde{S}\). Thus, we get \(0<\frac{1}{\delta _1}d(x,\widetilde{S})<1\). Moreover, for every \(\theta \in \Big (\frac{1}{\delta _1}d(x,\widetilde{S}), 1\Big )\), it follows from Theorem 2.3 (approximate projection theorem) that there exist \(\tilde{x}\in \widetilde{S}\) and

$$\begin{aligned} x^*\in \mathcal {B}_{\mathbb {R}^n}\cap N(\widetilde{S},\tilde{x}) \end{aligned}$$
(3.12)

such that

$$\begin{aligned} \theta \Vert x-\tilde{x}\Vert <\min \{d(x,\widetilde{S}), \langle x^*, x-\tilde{x}\rangle \}. \end{aligned}$$
(3.13)

Thus, we have \(\Vert x-\tilde{x}\Vert <\frac{1}{\theta }d(x,\widetilde{S})<\delta _1\), and then, \(\Vert \tilde{x}-\hat{x}\Vert \le \Vert \tilde{x}-x\Vert +\Vert x-\hat{x}\Vert <\delta _1+\delta _1<\delta \), which implies \(\tilde{x}\in \widetilde{S}\cap \mathbb {B}(\hat{x},\delta )\). Together with (3.10) and (3.12), it follows that there exist \(\lambda _k\ge 0\) with \(\sum _{k\in \mathcal {K}}\lambda _k=1\), \(u^*_k\in \partial f_k(\tilde{x}), k\in \mathcal {K}\), \(\mu _i\ge 0\) with \(\mu _ig_i(\tilde{x})=0\), \(v^*_i\in \partial g_i(\tilde{x}), i\in \mathcal {I}\), \(\gamma _j\ge 0\), \(w^*_j\in \partial h_j(\tilde{x})\cup \partial (-h_j)(\tilde{x}), j\in \mathcal {J}\) and \(\omega ^*\in N(\Omega ,\tilde{x})\) such that

$$\begin{aligned} \eta x^*=\sum _{k\in \mathcal {K}}\lambda _ku^*_k+\sum _{i\in \mathcal {I}}\mu _iv^*_i+\sum _{j\in \mathcal {J}}\gamma _jw^*_j+\omega ^*. \end{aligned}$$
(3.14)

Note that \(x\in S\) implies \(x\in \Omega \). Since \(\omega ^*\in N(\Omega ,\tilde{x})\) and \(\Omega \) is convex, we get

$$\begin{aligned} \langle \omega ^*, x-\tilde{x}\rangle \le 0. \end{aligned}$$
(3.15)

Simultaneously, since \(f_k, k\in \mathcal {K}\) and \(g_i, i\in \mathcal {I}\) are convex functions, we have

$$\begin{aligned} \langle u^*_k, x-\tilde{x}\rangle \le f_k(x)-f_k(\tilde{x}),\quad \forall k\in \mathcal {K} \end{aligned}$$
(3.16)

and

$$\begin{aligned} \langle v^*_i, x-\tilde{x}\rangle \le g_i(x)-g_i(\tilde{x}),\quad \forall i\in \mathcal {I}. \end{aligned}$$
(3.17)

Furthermore, since \(h_j, j\in \mathcal {J}\), are affine functions, for each \(j\in \mathcal {J}\) there exists \(\alpha _j\in \{-1,1\}\) such that

$$\begin{aligned} \langle w_j^*, x-\tilde{x}\rangle =\alpha _j(h_j(x)-h_j(\tilde{x})),\quad \forall j\in \mathcal {J}. \end{aligned}$$
(3.18)

Since \(x\in S\), it follows that \(g_i(x)\le 0\) for all \(i\in \mathcal {I}\) and \(h_j(x)=0\) for all \(j\in \mathcal {J}\). Moreover, \(\tilde{x}\in \widetilde{S}\) gives \(\tilde{x}\in S\) and \(f(\tilde{x})=f(\hat{x})\). Combining these with (3.14)–(3.18), we get

$$\begin{aligned} \langle \eta x^*, x-\tilde{x}\rangle&=\Big \langle \sum _{k\in \mathcal {K}}\lambda _ku^*_k+\sum _{i\in \mathcal {I}}\mu _iv^*_i+\sum _{j\in \mathcal {J}}\gamma _jw^*_j+\omega ^*,\, x-\tilde{x}\Big \rangle \nonumber \\&\le \sum _{k\in \mathcal {K}}\lambda _k\Big (f_k(x)-f_k(\tilde{x})\Big )+\sum _{i\in \mathcal {I}}\mu _i\Big (g_i(x)-g_i(\tilde{x})\Big )+\sum _{j\in \mathcal {J}}\gamma _j\alpha _j\Big (h_j(x)-h_j(\tilde{x})\Big )\nonumber \\&\le \sum _{k\in \mathcal {K}}\lambda _k\max _{k\in \mathcal {K}}\{f_k(x)-f_k(\tilde{x})\}+\sum _{i\in \mathcal {I}}\Big (\mu _ig_i(x)-\mu _ig_i(\tilde{x})\Big )\nonumber \\&\le \max _{k\in \mathcal {K}}\{f_k(x)-f_k(\hat{x})\}\sum _{k\in \mathcal {K}}\lambda _k=\max _{k\in \mathcal {K}}\{f_k(x)-f_k(\hat{x})\}. \end{aligned}$$
(3.19)

Since \(\tilde{x}\in \widetilde{S}\), we have \(d(x,\widetilde{S})\le \Vert x-\tilde{x}\Vert \). Together with (3.13) and (3.19), we get

$$\begin{aligned} \eta \theta d(x,\widetilde{S})\le \eta \theta \Vert x-\tilde{x}\Vert <\langle \eta x^*, x-\tilde{x}\rangle \le \max _{k\in \mathcal {K}}\{f_k(x)-f_k(\hat{x})\}. \end{aligned}$$

Letting \(\theta \uparrow 1\) and noting that \(x\in S\cap \mathbb {B}(\hat{x},\delta _1)\) is arbitrary, we obtain (3.11). Thus, \(\hat{x}\) is a local weak sharp efficient solution for (MOP).
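As a side remark, the conclusion of Theorem 2.3 used in step (i) can be checked numerically when \(\widetilde{S}\) is convex: the exact Euclidean projection already supplies a pair \((\tilde{x}, x^*)\) satisfying (3.13) for any \(\theta \in (0,1)\). A minimal Python sketch, taking a hypothetical segment for \(\widetilde{S}\):

```python
import math
import random

def proj_segment(x, p, q):
    # Euclidean projection of x onto the segment [p, q] in R^2
    vx, vy = q[0] - p[0], q[1] - p[1]
    t = ((x[0] - p[0]) * vx + (x[1] - p[1]) * vy) / (vx * vx + vy * vy)
    t = max(0.0, min(1.0, t))
    return (p[0] + t * vx, p[1] + t * vy)

p, q = (0.0, 0.0), (1.0, -1.0)  # hypothetical closed convex set S~ (a segment)
random.seed(0)
for _ in range(1000):
    x = (random.uniform(-3, 3), random.uniform(-3, 3))
    xt = proj_segment(x, p, q)                   # candidate tilde{x}
    d = math.hypot(x[0] - xt[0], x[1] - xt[1])   # equals d(x, S~) for convex S~
    if d < 1e-9:
        continue  # (3.13) only concerns x outside S~
    # x* = unit vector (x - tilde{x})/||x - tilde{x}||, which lies in B ∩ N(S~, tilde{x})
    xs = ((x[0] - xt[0]) / d, (x[1] - xt[1]) / d)
    inner = xs[0] * (x[0] - xt[0]) + xs[1] * (x[1] - xt[1])
    theta = 0.99
    # (3.13): theta * ||x - tilde{x}|| < min{ d(x, S~), <x*, x - tilde{x}> }
    assert theta * d < min(d, inner) + 1e-12
print("approximate projection inequality verified")
```

Here both \(d(x,\widetilde{S})\) and \(\langle x^*, x-\tilde{x}\rangle \) equal \(\Vert x-\tilde{x}\Vert \), so the strict inequality holds for every \(\theta <1\); the general theorem is needed precisely when \(\widetilde{S}\) is nonconvex and exact projections may fail to exist.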

(ii) Let (3.10) hold for \(\delta =+\infty \). Take an arbitrary \(\tilde{x}\in \widetilde{S}\). Since \(\widetilde{S}\) is a closed and convex set, we have \(N(\widetilde{S},\tilde{x})=T(\widetilde{S},\tilde{x})^\circ \). Moreover, it follows from [19, Theorem 3.1] that

$$\begin{aligned} d(x, T(\widetilde{S},\tilde{x}))=\psi ^*(x, \mathcal {B}_{\mathbb {R}^n}\cap T(\widetilde{S},\tilde{x})^\circ )=\sup _{x^*\in \mathcal {B}_{\mathbb {R}^n}\cap N(\widetilde{S},\tilde{x})}\langle x^*, x\rangle ,\quad \forall x\in \mathbb {R}^n, \end{aligned}$$

and from [29, Theorem A.1-4] that

$$\begin{aligned} d(x,\widetilde{S})=\sup _{\tilde{x}\in \widetilde{S}}d(x,\tilde{x}+T(\widetilde{S},\tilde{x})),\;\forall x\in \mathbb {R}^n. \end{aligned}$$

Thus, we can conclude that for every \(x\in S\),

$$\begin{aligned} \eta d(x,\widetilde{S})=\sup _{\tilde{x}\in \widetilde{S}}\eta d(x-\tilde{x}, T(\widetilde{S},\tilde{x}))=\sup _{\tilde{x}\in \widetilde{S}}\sup _{x^*\in \mathcal {B}_{\mathbb {R}^n}\cap N(\widetilde{S},\tilde{x})}\langle \eta x^*, x-\tilde{x}\rangle . \end{aligned}$$
(3.20)

Take an arbitrary \(\tilde{x}\in \widetilde{S}\) and \(x^*\in \mathcal {B}_{\mathbb {R}^n}\cap N(\widetilde{S},\tilde{x})\). Since (3.10) holds for \(\delta =+\infty \), by an argument similar to that in the proof of (i), we have

$$\begin{aligned} \langle \eta x^*, x-\tilde{x}\rangle \le \max _{k\in \mathcal {K}}\{f_k(x)-f_k(\hat{x})\},\quad \forall x\in S. \end{aligned}$$

Together with (3.20), we get

$$\begin{aligned} \eta d(x,\widetilde{S})\le \max _{k\in \mathcal {K}}\{f_k(x)-f_k(\hat{x})\},\quad \forall x\in S, \end{aligned}$$

that is, \(\hat{x}\) is a global weak sharp efficient solution for (MOP). This completes the proof. \(\square \)
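The distance representation used in step (ii), \(d(x,\widetilde{S})=\sup _{\tilde{x}\in \widetilde{S}}d(x,\tilde{x}+T(\widetilde{S},\tilde{x}))\), can likewise be verified numerically for a concrete convex set. A minimal sketch, assuming \(\widetilde{S}\) is a hypothetical segment in \(\mathbb {R}^2\), whose tangent cone is a ray at each endpoint and the full line at every interior point:

```python
import math
import random

p, q = (0.0, 0.0), (1.0, -1.0)      # hypothetical convex set S~: a segment
v = (q[0] - p[0], q[1] - p[1])

def dist_ray(x, a, d):
    # distance from x to the ray {a + t*d : t >= 0}
    t = max(0.0, ((x[0]-a[0])*d[0] + (x[1]-a[1])*d[1]) / (d[0]**2 + d[1]**2))
    return math.hypot(x[0] - a[0] - t*d[0], x[1] - a[1] - t*d[1])

def dist_line(x, a, d):
    # distance from x to the line {a + t*d : t in R}
    t = ((x[0]-a[0])*d[0] + (x[1]-a[1])*d[1]) / (d[0]**2 + d[1]**2)
    return math.hypot(x[0] - a[0] - t*d[0], x[1] - a[1] - t*d[1])

def dist_segment(x):
    # d(x, S~) computed directly
    t = max(0.0, min(1.0, ((x[0]-p[0])*v[0] + (x[1]-p[1])*v[1]) / (v[0]**2 + v[1]**2)))
    return math.hypot(x[0] - p[0] - t*v[0], x[1] - p[1] - t*v[1])

random.seed(1)
for _ in range(1000):
    x = (random.uniform(-3, 3), random.uniform(-3, 3))
    # tilde{x} + T(S~, tilde{x}) ranges over: the ray at p, the ray at q,
    # and the full line through p and q (for every interior tilde{x})
    sup = max(dist_ray(x, p, v), dist_ray(x, q, (-v[0], -v[1])), dist_line(x, p, v))
    assert abs(sup - dist_segment(x)) < 1e-9
print("distance identity verified")
```

The supremum is attained at \(\tilde{x}\) equal to the projection of x, in line with the convex-analytic argument of part (ii).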

Remark 3.2

Compared with the method used in [3, Theorem 3.7], we apply the expressions of special distance functions, instead of the Hahn–Banach theorem, to obtain the sufficient condition for global weak sharp efficiency in Theorem 3.2. However, a similar method is not applicable for establishing sufficient conditions for local weak sharp efficient solutions, since the reference point \(\hat{x}\in S\) is not isolated in \(\widetilde{S}\). In the proof of Theorem 3.2, we therefore employ Theorem 2.3 (approximate projection theorem) to overcome this difficulty.

To end this paper, we give the following example to illustrate Theorem 3.2.

Example 3.2

Let the vector-valued map \(f:\mathbb {R}^2\rightarrow \mathbb {R}^2\) be defined by \(f(x):=(f_1(x),f_2(x))\) with \(f_1(x):=\max \{0,-x_1\}\) and \(f_2(x):=\max \{0,x_2\}\) for all \(x=(x_1,x_2)\in \mathbb {R}^2\). Moreover, let the functions \(g:\mathbb {R}^2\rightarrow \mathbb {R}\) and \(h:\mathbb {R}^2\rightarrow \mathbb {R}\) be defined by \(g(x):=|x_1|+x_2\) and \(h(x):=x_1+x_2\), respectively, for all \(x=(x_1,x_2)\in \mathbb {R}^2\). Take \(\Omega :=[-1,1]\times [-1,1]\subset \mathbb {R}^2\) and consider the following constrained multiobjective optimization problem:

$$\begin{aligned} \min f(x)\quad \mathrm {s.t.}\quad g(x)\le 0, h(x)=0, x\in \Omega . \end{aligned}$$

Obviously, \(f_1\), \(f_2\) and g are convex functions and h is an affine function on \(\mathbb {R}^2\). Furthermore, \(\Omega \) is a closed and convex subset and the feasible set is \(S=\{x\in \mathbb {R}^2\mid 0\le x_1\le 1, x_1+x_2=0\}\). In particular, let \(\hat{x}=(\frac{1}{2},-\frac{1}{2})\in S\). Then \(f(\hat{x})=(0,0)\) and \(\widetilde{S}:=S\cap f^{-1}(f(\hat{x}))=S\), which implies that \(\widetilde{S}\) is also a closed and convex set. Take an arbitrary \(\eta >0\) and \(\tilde{x}\in \widetilde{S}\). Next, we consider the following three cases for \(\tilde{x}\in \widetilde{S}\):
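Before examining the cases, the claimed description of S (and the fact that f vanishes on it, so that \(\widetilde{S}=S\)) can be confirmed by a brute-force grid check over \(\Omega \). A sketch, where the grid resolution and tolerances are arbitrary choices:

```python
# data of Example 3.2
def f1(x): return max(0.0, -x[0])
def f2(x): return max(0.0, x[1])
def g(x):  return abs(x[0]) + x[1]
def h(x):  return x[0] + x[1]

N = 200
for i in range(N + 1):
    for j in range(N + 1):
        x = (-1.0 + 2.0 * i / N, -1.0 + 2.0 * j / N)  # grid over Omega = [-1,1]^2
        feasible = g(x) <= 1e-12 and abs(h(x)) <= 1e-12
        # claimed description: S = {0 <= x1 <= 1, x1 + x2 = 0}
        claimed = 0.0 <= x[0] <= 1.0 and abs(x[0] + x[1]) <= 1e-12
        assert feasible == claimed
        if feasible:
            # f vanishes on S, hence f(x) = f(xhat) and S~ = S
            assert max(f1(x), f2(x)) <= 1e-12
print("feasible set matches the claimed description")
```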

Case 1: \(\tilde{x}=(0,0)\). Then \(\partial f_1(\tilde{x})=[-1,0]\times \{0\}\), \(\partial f_2(\tilde{x})=\{0\}\times [0,1]\), \(\partial g(\tilde{x})=[-1,1]\times \{1\}\), \(\partial h(\tilde{x})=\{(1,1)\}\), \(\partial (-h)(\tilde{x})=\{(-1,-1)\}\), \(N(\Omega ,\tilde{x})=\{(0,0)\}\) and \(N(\widetilde{S},\tilde{x})=\{x\in \mathbb {R}^2\mid x_1\le x_2\}\). Thus, we have

$$\begin{aligned} \eta \mathcal {B}_{\mathbb {R}^2}\cap N(\widetilde{S},\tilde{x})=\{x\in \mathbb {R}^2\mid x_1\le x_2, x_1^2+x_2^2\le \eta ^2\} \end{aligned}$$

and

$$\begin{aligned}&\left\{ \sum _{k\in \mathcal {K}}\lambda _k\partial f_k(\tilde{x})+\sum _{i\in \mathcal {I}}\mu _i\partial g_i(\tilde{x})+\sum _{j\in \mathcal {J}}\gamma _j\Big (\partial h_j(\tilde{x})\cup \partial (-h_j)(\tilde{x})\Big )\mid \lambda _k\ge 0,\, k\in \mathcal {K},\right. \nonumber \\&\quad \left. \sum _{k\in \mathcal {K}}\lambda _k=1,\, \mu _i\ge 0,\, \mu _ig_i(\tilde{x})=0,\, i\in \mathcal {I},\, \gamma _j\ge 0,\, j\in \mathcal {J}\right\} +N(\Omega ,\tilde{x})\\&\quad = \left\{ x\in \mathbb {R}^2\mid x_2\le x_1+1,\, x_1\le 0\le x_2\right\} +\left\{ x\in \mathbb {R}^2\mid -x_2\le x_1\le x_2,\, x_2\ge 0\right\} \\&\qquad +\left\{ x\in \mathbb {R}^2\mid x_1=x_2\right\} \\&\quad = \left\{ x\in \mathbb {R}^2\mid x_1\le x_2\right\} , \end{aligned}$$

which implies that (3.10) holds for all \(\eta >0\).
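The set identity computed in Case 1 can be double-checked numerically: random sums of elements of the three summands always satisfy \(x_1\le x_2\), and conversely every point with \(x_1\le x_2\) admits an explicit decomposition. A sketch, with arbitrary tolerances:

```python
import random

# Case 1 summands at x~ = (0,0)
inA = lambda x: x[1] <= x[0] + 1.0 + 1e-9 and x[0] <= 1e-9 and x[1] >= -1e-9  # objective part
inB = lambda x: abs(x[0]) <= x[1] + 1e-9                                       # mu * dg(0,0)
inC = lambda x: abs(x[0] - x[1]) <= 1e-9                                       # gamma * (dh u d(-h))
claimed = lambda x: x[0] <= x[1] + 1e-9

random.seed(2)
# one direction: any sum of members of the three sets satisfies x1 <= x2
for _ in range(1000):
    l1 = random.random(); l2 = 1.0 - l1
    a = (-l1 * random.random(), l2 * random.random())      # lam1*(-a,0) + lam2*(0,b)
    mu, c = 2.0 * random.random(), random.uniform(-1.0, 1.0)
    b = (mu * c, mu)                                       # mu*(c,1), c in [-1,1]
    t = random.uniform(-2.0, 2.0)
    cpt = (t, t)                                           # gamma*(+-1,+-1)
    assert inA(a) and inB(b) and inC(cpt)
    s = (a[0] + b[0] + cpt[0], a[1] + b[1] + cpt[1])
    assert claimed(s)
# other direction: every x with x1 <= x2 decomposes explicitly
for _ in range(1000):
    x2v = random.uniform(-3.0, 3.0); x1v = x2v - 3.0 * random.random()
    d = x2v - x1v
    a, b, cpt = (0.0, 0.0), (-d / 2.0, d / 2.0), ((x1v + x2v) / 2.0,) * 2
    assert inA(a) and inB(b) and inC(cpt)
    assert abs(a[0]+b[0]+cpt[0] - x1v) < 1e-9 and abs(a[1]+b[1]+cpt[1] - x2v) < 1e-9
print("Case 1 sum identity verified")
```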

Case 2: \(\tilde{x}=(1,-1)\). Then \(\partial f_1(\tilde{x})=\partial f_2(\tilde{x})=\{(0,0)\}\), \(\partial g(\tilde{x})=\{(1,1)\}\), \(\partial h(\tilde{x})=\{(1,1)\}\), \(\partial (-h)(\tilde{x})=\{(-1,-1)\}\), \(N(\Omega ,\tilde{x})=\{x\in \mathbb {R}^2\mid x_2\le 0\le x_1\}\) and \(N(\widetilde{S},\tilde{x})=\{x\in \mathbb {R}^2\mid x_1\ge x_2\}\). Thus, we have

$$\begin{aligned} \eta \mathcal {B}_{\mathbb {R}^2}\cap N(\widetilde{S},\tilde{x})=\left\{ x\in \mathbb {R}^2\mid x_1\ge x_2, x_1^2+x_2^2\le \eta ^2\right\} \end{aligned}$$

and

$$\begin{aligned}&\left\{ \sum _{k\in \mathcal {K}}\lambda _k\partial f_k(\tilde{x})+\sum _{i\in \mathcal {I}}\mu _i\partial g_i(\tilde{x})+\sum _{j\in \mathcal {J}}\gamma _j\Big (\partial h_j(\tilde{x})\cup \partial (-h_j)(\tilde{x})\Big )\mid \lambda _k\ge 0,\, k\in \mathcal {K},\right. \nonumber \\&\quad \left. \sum _{k\in \mathcal {K}}\lambda _k=1,\, \mu _i\ge 0,\, \mu _ig_i(\tilde{x})=0,\, i\in \mathcal {I},\, \gamma _j\ge 0,\, j\in \mathcal {J}\right\} +N(\Omega ,\tilde{x})\\&\quad = \{x\in \mathbb {R}^2\mid x_1=x_2\ge 0\}+\{x\in \mathbb {R}^2\mid x_1=x_2\}+\{x\in \mathbb {R}^2\mid x_2\le 0\le x_1\}\\&\quad = \{x\in \mathbb {R}^2\mid x_1\ge x_2\}. \end{aligned}$$

Clearly, (3.10) holds for all \(\eta >0\).
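The Minkowski-sum identity of Case 2 can be checked in the same spirit: sums of the three components always satisfy \(x_1\ge x_2\), and any such point decomposes as \((0,0)+(x_2,x_2)+(x_1-x_2,0)\). A sketch, with arbitrary tolerances:

```python
import random

# Case 2 summands at x~ = (1,-1)
inP = lambda x: abs(x[0] - x[1]) <= 1e-9 and x[0] >= -1e-9  # mu*(1,1), mu >= 0
inL = lambda x: abs(x[0] - x[1]) <= 1e-9                     # gamma*(+-1,+-1)
inN = lambda x: x[0] >= -1e-9 and x[1] <= 1e-9               # N(Omega, (1,-1))
claimed = lambda x: x[0] >= x[1] - 1e-9

random.seed(3)
for _ in range(1000):
    # one direction: any sum lies in {x1 >= x2}
    ppt = (random.random(),) * 2
    lpt = (random.uniform(-2, 2),) * 2
    npt = (random.random(), -random.random())
    s = (ppt[0] + lpt[0] + npt[0], ppt[1] + lpt[1] + npt[1])
    assert claimed(s)
    # other direction: explicit decomposition of any x with x1 >= x2
    x1v = random.uniform(-3, 3); x2v = x1v - 3.0 * random.random()
    ppt, lpt, npt = (0.0, 0.0), (x2v, x2v), (x1v - x2v, 0.0)
    assert inP(ppt) and inL(lpt) and inN(npt)
    assert abs(ppt[0]+lpt[0]+npt[0] - x1v) < 1e-9 and abs(ppt[1]+lpt[1]+npt[1] - x2v) < 1e-9
print("Case 2 sum identity verified")
```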

Case 3: \(\tilde{x}\ne (0,0)\) and \(\tilde{x}\ne (1,-1)\). Then \(\partial f_1(\tilde{x})=\partial f_2(\tilde{x})=\{(0,0)\}\), \(\partial g(\tilde{x})=\{(1,1)\}\), \(\partial h(\tilde{x})=\{(1,1)\}\), \(\partial (-h)(\tilde{x})=\{(-1,-1)\}\), \(N(\Omega ,\tilde{x})=\{(0,0)\}\) and \(N(\widetilde{S},\tilde{x})=\{x\in \mathbb {R}^2\mid x_1=x_2\}\). Thus, we have

$$\begin{aligned} \eta \mathcal {B}_{\mathbb {R}^2}\cap N(\widetilde{S},\tilde{x})=\{x\in \mathbb {R}^2\mid x_1=x_2, x_1^2+x_2^2\le \eta ^2\} \end{aligned}$$

and

$$\begin{aligned}&\left\{ \sum _{k\in \mathcal {K}}\lambda _k\partial f_k(\tilde{x})+\sum _{i\in \mathcal {I}}\mu _i\partial g_i(\tilde{x})+\sum _{j\in \mathcal {J}}\gamma _j\Big (\partial h_j(\tilde{x})\cup \partial (-h_j)(\tilde{x})\Big )\mid \lambda _k\ge 0,\, k\in \mathcal {K},\right. \nonumber \\&\quad \left. \sum _{k\in \mathcal {K}}\lambda _k=1,\, \mu _i\ge 0,\, \mu _ig_i(\tilde{x})=0,\, i\in \mathcal {I},\, \gamma _j\ge 0,\, j\in \mathcal {J}\right\} +N(\Omega ,\tilde{x})\\&\quad = \{x\in \mathbb {R}^2\mid x_1=x_2\ge 0\}+\{x\in \mathbb {R}^2\mid x_1=x_2\}\\&\quad = \{x\in \mathbb {R}^2\mid x_1=x_2\}. \end{aligned}$$

Obviously, (3.10) still holds for all \(\eta >0\).

Therefore, we can conclude that (3.10) holds for all \(\eta >0\) and all \(\tilde{x}\in \widetilde{S}\). Hence, it follows from Theorem 3.2 that \(\hat{x}\in S\) is a global weak sharp efficient solution, while at the same time \(\hat{x}\) is obviously not isolated in \(\widetilde{S}\).
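The conclusion can also be illustrated directly: since \(\widetilde{S}=S\) and f vanishes on S, the weak sharp inequality \(\eta d(x,\widetilde{S})\le \max _{k\in \mathcal {K}}\{f_k(x)-f_k(\hat{x})\}\) holds trivially (both sides are zero) for every feasible x, while points of \(\widetilde{S}\) approach \(\hat{x}\) arbitrarily closely. A small sketch:

```python
import math

def f(x):
    # objectives from Example 3.2
    return (max(0.0, -x[0]), max(0.0, x[1]))

xhat = (0.5, -0.5)
# S~ = S is the segment {(t, -t) : 0 <= t <= 1}; f vanishes identically on it,
# so eta*d(x, S~) <= max_k {f_k(x) - f_k(xhat)} reads 0 <= 0 for all x in S
for i in range(101):
    t = i / 100.0
    assert f((t, -t)) == (0.0, 0.0)
# xhat is not isolated in S~: feasible points approach it arbitrarily closely
dists = [math.hypot(xhat[0] - (0.5 + eps), xhat[1] - (-0.5 - eps))
         for eps in (1e-1, 1e-3, 1e-6)]
assert all(d > 0 for d in dists) and dists == sorted(dists, reverse=True)
print("f vanishes on S~ and xhat is non-isolated")
```

This makes concrete the point of Remark 3.2: a sufficient condition for this example must cope with a reference point that is not isolated in \(\widetilde{S}\).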