1 Introduction

This paper connects three active research fields: quasiconvex optimization, multiobjective programming, and nonsmooth optimization. All three fields intersect strongly with global optimization, owing to the possible presence of nonsmooth nonconvex objective/constraint functions. We work with nonsmooth multiobjective constrained optimization problems under generalized convexity assumptions, essentially quasiconvexity.

Quasiconvex functions form an important class of generalized convex functions and are widely used in various theoretical and practical fields, including microeconomics, game theory, Multiple Criteria Decision Making (MCDM), industrial organization, probability, and general equilibrium theory. In economics (resp. MCDM), a quasiconcave utility (resp. value) function implies that the consumer (resp. Decision Maker (DM)) has convex preferences [28]. Quasiconvex programming problems form an important class of nonconvex optimization problems. Thanks to the wide range of their applications, these problems have been studied from various standpoints by many scholars; see e.g. [9, 15, 16, 20,21,22,23,24, 29, 30] among others.

Multiobjective optimization, which refers to minimizing/maximizing more than one conflicting objective function simultaneously, has received a considerable amount of attention. Indeed, in many real-world practical problems, arising in industry, economics, management sciences, biological sciences, etc., the DM, manager, or user must optimize more than one objective function. Such problems can be modeled as multiobjective programs. For instance, consider the portfolio selection problem in which the investor should maximize the expected return and minimize the investment risk simultaneously [26, Section 3]. As another example, consider minimizing the production cost and maximizing the quality, simultaneously, in a company.

On the other hand, in recent decades nonsmooth analysis has played a crucial role in solving problems arising in practice with nonsmooth data [2, 6, 18]. In this regard, various generalized (sub)differentials and gradients have been defined and investigated. Some of these generalized notions have been developed for general classes of functions while some other ones concentrate on special functions.

In the current paper, we consider nonsmooth quasiconvex multiobjective programming problems, and so we work with some subdifferentials devised for quasiconvex functions. We consider the Greenberg–Pierskalla [11], Penot [20, 21], Plastria [25], and Gutiérrez [12, 13] subdifferential sets, defined invoking the sublevel sets of quasiconvex functions. “It is likely that a specific tool will be more efficient than an all-purpose tool” [20]. A function is convex iff its epigraph is convex, and it is quasiconvex iff its sublevel sets are convex. In analysing convex functions from the differentiation standpoint, the main focus is on the properties of the epigraph, while in the quasiconvex case this role is given to the sublevel sets. The specific subdifferentials devised for quasiconvex functions are often insensitive to a scaling of the function [20]. Several of them satisfy a stability (or robustness, or closedness) property analogous to the one valid for the Fenchel subdifferential [20, Proposition]. The all-purpose subdifferentials are usually local, while the ones we use are of global nature [16]. The character of these subdifferentials is highlighted if one looks at them from a duality standpoint. Besides their links with duality, these subdifferentials are useful for algorithmic purposes [25], and their use in problems in which quasiconvexity properties occur seems sensible [16]. Owing to the nice properties of the Greenberg–Pierskalla, Penot, Plastria, and Gutiérrez subdifferentials, and also their dedication to quasiconvex functions (rather than all-purpose subdifferentials), in the present work we deal with these subdifferentials in the presence of nonsmooth quasiconvex functions. In addition, we apply a somewhat different subdifferential introduced by Suzuki and Kuroiwa [29, 30], based upon an idea credited to Penot and Volle [22]. Considering a nonsmooth quasiconvex multiobjective programming problem, some necessary and sufficient optimality conditions in terms of the aforementioned subdifferentials are derived, leading to a characterization of weakly efficient solutions. The presented outcomes can be used for numerical purposes and for deriving duality properties. Although there are many works devoted to the investigation of (weakly) efficient solutions of multiobjective programming problems in terms of all-purpose subdifferentials (see, e.g., [19] and the references therein) and optimal solutions of single-objective problems in terms of quasiconvex-devised subdifferentials (QDSs) (see, e.g., [15, 16, 20, 22, 25, 29, 30]), to the best of our knowledge, the current work is the first one which studies multiobjective problems in terms of the QDSs.

Section 2 addresses required preliminaries and Sect. 3 contains the main results.

2 Preliminaries

In this section, we briefly address some notations, basic definitions, and standard preliminaries which are used in the sequel. Given a nonempty set \(\varOmega \subseteq \mathbb {R}^n\), the four notations \(cl(\varOmega )\), \(int(\varOmega )\), \(conv(\varOmega )\), and \(cone(\varOmega )\) denote the closure of \(\varOmega \), the interior of \(\varOmega \), the convex hull of \(\varOmega \), and the cone generated by \(\varOmega \), respectively. Indeed, \(cone(\varOmega )=\cup _{\lambda \ge 0}\lambda \varOmega .\) The standard inner product of \(x,y\in {\mathbb {R}}^n\) is denoted by \(x^Ty\).

The (Fenchel) normal and the tangent cones to \(\varOmega \subseteq {\mathbb {R}}^n\) at \({\bar{x}}\in cl\,\varOmega \) are respectively defined as

$$\begin{aligned}&\displaystyle N(\varOmega ,{\bar{x}}):=\big \{d\in \mathbb {R}^n : d^T(x-{\bar{x}})\le 0,~~\forall x\in \varOmega \big \},\\&\displaystyle T(\varOmega ,{\bar{x}}):=\big \{d\in \mathbb {R}^n : \exists \{(\alpha _\nu ,d_\nu )\}_\nu \subseteq (0,\infty )\times {\mathbb {R}}^n;~\alpha _\nu \downarrow 0,~d_\nu \rightarrow d,~{\bar{x}}+\alpha _\nu d_\nu \in \varOmega ,\; \forall \nu \big \}. \end{aligned}$$

The non-positive polar cone corresponding to a set \(\varOmega \subseteq {\mathbb {R}}^n\) is defined as

$$\begin{aligned} \varOmega ^*:=\{d\in \mathbb {R}^n:d^Tx\le 0,~\forall x\in \varOmega \}. \end{aligned}$$

It is known that \((N(M,{\bar{x}}))^*=T(M,{\bar{x}})\) provided that M is convex. If \(\varOmega _1,\ldots ,\varOmega _k\) are nonempty closed convex sets in \(\mathbb {R}^n\) with \(\displaystyle \cap _{i=1}^k int(\varOmega _i) \ne \emptyset \), then by applying the intersection normal cone formula [14, p. 139], we have

$$\begin{aligned} N \big ({\mathop {\cap }\nolimits _{i=1}^{k}} \varOmega _{i},{\bar{x}}\big )=\sum _{i=1}^{k} N(\varOmega _{i},{\bar{x}}),~~\forall {\bar{x}} \in {\mathop {\cap }\nolimits _{i=1}^{k}} \varOmega _{i}. \end{aligned}$$
(1)
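
As a quick numerical sanity check of (1) (an illustration of ours, not part of the development), take the two half-planes \(\varOmega _1=\{x : x_1\le 0\}\) and \(\varOmega _2=\{x : x_2\le 0\}\) at \({\bar{x}}=0\), where \(N(\varOmega _i,0)=cone\{e_i\}\); the sketch below tests membership in the left-hand side of (1) on sampled points of the intersection and compares with the predicted sum of cones.

```python
import numpy as np

# Illustrative check of (1): Omega_1 = {x : x_1 <= 0}, Omega_2 = {x : x_2 <= 0},
# xbar = 0. Then N(Omega_i, 0) = cone{e_i}, and (1) predicts
# N(Omega_1 ∩ Omega_2, 0) = cone{e_1} + cone{e_2} = {d : d_1 >= 0, d_2 >= 0}.
rng = np.random.default_rng(0)
pts = -rng.random((4000, 2))          # samples of Omega_1 ∩ Omega_2 (both coordinates <= 0)

grid = np.arange(-10, 11) / 10.0      # candidate vectors d, component-wise grid
for d1 in grid:
    for d2 in grid:
        d = np.array([d1, d2])
        in_normal_cone = np.all(pts @ d <= 1e-12)  # d^T(x - 0) <= 0 on the samples
        predicted = (d1 >= 0) and (d2 >= 0)        # membership in cone{e_1} + cone{e_2}
        assert in_normal_cone == predicted, (d1, d2)
print("grid test passed: the two sides of (1) agree on this instance")
```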

Let \(\varphi :\mathbb {R}^n\rightarrow \mathbb {R}\) be a real-valued function. The sublevel set and the strict sublevel set of \(\varphi \) at \({\bar{x}}\in \mathbb {R}^n\) are, respectively, defined as

$$\begin{aligned} \begin{array}{c} S_{{\bar{x}}}(\varphi ):=\big \{x\in \mathbb {R}^n : \varphi (x)\le \varphi ({\bar{x}}) \big \},\\ S^{s}_{{\bar{x}}}(\varphi ):=\big \{x\in \mathbb {R}^n : \varphi (x)<\varphi ({\bar{x}}) \big \}. \end{array} \end{aligned}$$

Definition 2.1

[3] The function \(\varphi :\mathbb {R}^n\rightarrow \mathbb {R}\) is called

  1. i)

    quasiconvex if for each \(x,y\in \mathbb {R}^n\) and each \(\lambda \in (0,1)\) one has

    $$\begin{aligned} \varphi (\lambda x+(1-\lambda )y)\le \max \{\varphi (x),\varphi (y)\}. \end{aligned}$$
  2. ii)

    strongly quasiconvex if for each \(x,y\in \mathbb {R}^n\) with \(x\ne y\) and each \(\lambda \in (0,1)\) one has

    $$\begin{aligned} \varphi (\lambda x+(1-\lambda )y)<\max \{\varphi (x),\varphi (y)\}. \end{aligned}$$

It is easy to see that, if \(\varphi \) is quasiconvex, then \(\{x\in \mathbb {R}^n : \varphi (x)\le \theta \}\) and \(\{x\in \mathbb {R}^n : \varphi (x)<\theta \}\) are convex for any \(\theta \in {\mathbb {R}}.\) See, e.g., [2, 3, 7,8,9, 15, 28] for characterizations and applications of (strongly) quasiconvex functions.
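For intuition, quasiconvexity can be probed numerically straight from Definition 2.1 by sampling pairs of points and convex combinations. The helper below is an illustrative sketch of ours; a finite sample can only refute quasiconvexity, never certify it.

```python
import numpy as np

def looks_quasiconvex(phi, lo=-5.0, hi=5.0, pairs=2000, lams=21, seed=0):
    """Search for a violation of phi(l*x + (1-l)*y) <= max(phi(x), phi(y)).

    Returns False once a violating triple (x, y, lambda) is found; returns True
    if none is found in the sample, which suggests (but does not prove)
    quasiconvexity.
    """
    rng = np.random.default_rng(seed)
    for _ in range(pairs):
        x, y = rng.uniform(lo, hi, size=2)
        bound = max(phi(x), phi(y))
        for lam in np.linspace(0.0, 1.0, lams):
            if phi(lam * x + (1 - lam) * y) > bound + 1e-9:
                return False
    return True

print(looks_quasiconvex(abs))                  # True: |x| is quasiconvex
print(looks_quasiconvex(lambda x: x ** 3))     # True: nondecreasing, hence quasiconvex
print(looks_quasiconvex(lambda x: -x ** 2))    # False: -x^2 is not quasiconvex
```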

In the last three decades, various subdifferential sets for quasiconvex functions have been proposed; see e.g. [7,8,9, 11,12,13, 17, 20, 21, 25, 29, 30] and the references therein. In the current paper, we consider some subdifferential sets defined utilizing the sublevel sets.

Let \(\varphi : \mathbb {R}^n \rightarrow \mathbb {R}\) be a quasiconvex function and \({\bar{x}} \in \mathbb {R}^n\). The Greenberg–Pierskalla subdifferential [11] of \(\varphi \) at \({\bar{x}}\) is defined as

$$\begin{aligned} \partial ^{GP}\varphi ({\bar{x}}):=\Big \{d\in {\mathbb {R}}^n~ : ~\varphi (x)<\varphi ({\bar{x}})\Longrightarrow d^T(x-{\bar{x}})<0\Big \}. \end{aligned}$$

Another variant of Greenberg–Pierskalla’s subdifferential has been introduced by Penot [20]:

$$\begin{aligned} \partial ^{P}\varphi ({\bar{x}}):=\Big \{d\in {\mathbb {R}}^n~ : ~\varphi (x)\le \varphi ({\bar{x}})\Longrightarrow d^T(x-{\bar{x}})\le 0\Big \}=N\big (S_{{\bar{x}}}(\varphi ),{\bar{x}} \big ). \end{aligned}$$

The star subdifferential [20] of \(\varphi \) at \({\bar{x}}\) is defined as follows: If \({\bar{x}}\) is a minimizer of \(\varphi \), then \(\partial ^{\star }\varphi ({\bar{x}}):=\mathbb {R}^n\); otherwise

$$\begin{aligned} \partial ^{\star }\varphi ({\bar{x}}):=\Big \{d\in {\mathbb {R}}^n\setminus \{0\}~ : ~\varphi (x)<\varphi ({\bar{x}})\Rightarrow d^T(x-{\bar{x}})\le 0\Big \}. \end{aligned}$$

The Plastria subdifferential [25] is defined as: \(d\in \partial ^<\varphi ({\bar{x}})\) if and only if

$$\begin{aligned} \varphi (x)-\varphi ({\bar{x}})\ge d^T(x-{\bar{x}}),~~\forall x\in S^{s}_{{\bar{x}}}(\varphi ). \end{aligned}$$

The Gutiérrez subdifferential [12, 13] is defined as: \(d\in \partial ^{\le }\varphi ({\bar{x}})\) if and only if

$$\begin{aligned} \varphi (x)-\varphi ({\bar{x}})\ge d^T(x-{\bar{x}}),~~\forall x\in S_{{\bar{x}}}(\varphi ). \end{aligned}$$

The connections between these subdifferential sets, as well as their properties, can be found in [20, 21].
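
In one dimension, all five subdifferentials reduce to intervals and can be estimated by brute force directly from their defining implications and inequalities. The following sketch is our illustration; the test function \(\varphi (x)=|x|\) at \({\bar{x}}=1\) and all grids are assumptions of the example, and grid tests can only suggest membership, not prove it.

```python
import numpy as np

phi = np.abs                        # test function (quasiconvex); an illustrative choice
xbar = 1.0
xs = np.linspace(-6.0, 6.0, 4001)   # sampled points x
ds = np.linspace(-4.0, 4.0, 801)    # candidate subgradients d

fbar = phi(xbar)
S_strict = xs[phi(xs) < fbar]       # sampled strict sublevel set S^s_xbar(phi)
S = xs[phi(xs) <= fbar]             # sampled sublevel set S_xbar(phi)

def members(test):                  # all candidate d passing a membership test
    return np.array([d for d in ds if test(d)])

gp   = members(lambda d: np.all(d * (S_strict - xbar) < 0))                      # Greenberg-Pierskalla
pen  = members(lambda d: np.all(d * (S - xbar) <= 0))                            # Penot
star = members(lambda d: d != 0 and np.all(d * (S_strict - xbar) <= 0))          # star (xbar is not a minimizer)
pla  = members(lambda d: np.all(phi(S_strict) - fbar >= d * (S_strict - xbar)))  # Plastria
gut  = members(lambda d: np.all(phi(S) - fbar >= d * (S - xbar)))                # Gutierrez

for name, m in [("GP", gp), ("Penot", pen), ("star", star),
                ("Plastria", pla), ("Gutierrez", gut)]:
    print(name, "approx. interval:", (m.min(), m.max()) if m.size else "empty")
```

For this instance one can verify by hand that \(\partial ^{GP}\varphi (1)=\partial ^{\star }\varphi (1)=(0,\infty )\), \(\partial ^{P}\varphi (1)=[0,\infty )\), and \(\partial ^{<}\varphi (1)=\partial ^{\le }\varphi (1)=[1,\infty )\); the grid estimates reproduce these intervals up to truncation at the grid bounds.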

Assume that \(f_i :\mathbb {R}^n \rightarrow \mathbb {R},~i=1,\ldots ,m\) are real-valued functions, and \(F:\mathbb {R}^n \rightarrow \mathbb {R}^m\) is defined by

$$\begin{aligned} F(x):=\big (f_1(x),\ldots ,f_m(x) \big ). \end{aligned}$$

Definition 2.2

A vector \(\tilde{x}\in \mathbb {R}^n\) is called a weak minimizer of F if there is no \(x\in \mathbb {R}^n\) satisfying \(f_i(x)< f_i(\tilde{x}),~i=1,\ldots ,m\). The set of all weak minimizers of F is denoted by \(W^{F}\).

Definition 2.3

A vector \(\tilde{x}\in \mathbb {R}^n\) is called a locally weak minimizer of F if there exists some \(\varepsilon >0\) such that there is no \(x\in B(\tilde{x};\varepsilon )\) satisfying \(f_i(x)< f_i(\tilde{x}),~i=1,\ldots ,m\). The set of all locally weak minimizers of F is denoted by \(W^{lF}\).

We consider the following quasiconvex multiobjective programming problem (QCMOP):

$$\begin{aligned} \begin{array}{ll} \text {(QCMOP)}: ~~~~~~~ &{}\inf ~\big (f_1(x), \ldots , f_m(x)\big ) \\ &{}\text {s.t.}~~g_j(x)\le 0,~~~j=1,2,\ldots ,q,\\ &{}~~~~~~x \in X, \end{array} \end{aligned}$$

satisfying Assumption A.

Assumption A: Hereafter, in Problem (QCMOP), X is a nonempty closed convex set in \(\mathbb {R}^n\), and \(f_i,g_j\), for \(i=1,2,\ldots ,m,~j=1,2,\ldots ,q,\) are continuous quasiconvex functions from \(\mathbb {R}^n\) to \(\mathbb {R}\).

Remark 1

Although we consider Assumption A for the whole paper, some of our results hold under weaker conditions as well. For example, in some theorems lower/upper-semicontinuity instead of continuity is enough.

For each j, set

$$\begin{aligned} M_j:=\big \{ x\in \mathbb {R}^n : g_j(x) \le 0 \big \}. \end{aligned}$$

Furthermore, set

$$\begin{aligned} M:=X \cap \bigg (\displaystyle \cap _{j=1}^q M_j\bigg ). \end{aligned}$$

Indeed, M is the set of feasible solutions of (QCMOP). For \({\hat{x}}\in M\), set

$$\begin{aligned} J({\hat{x}}):=\big \{j\in \{1,2,\ldots ,q\} : g_j({\hat{x}})=0 \big \}. \end{aligned}$$

Definition 2.4

[10] A feasible solution \({\hat{x}} \in M\) is said to be a weakly efficient solution of (QCMOP) if there is no \(x\in M\) satisfying \(f_i(x)< f_i({\hat{x}}),~i=1,\ldots ,m\). The set of all weakly efficient solutions of (QCMOP) is denoted by \(W^{qc}\).

3 Main results

The first part of this section is devoted to establishing necessary conditions for weakly efficient solutions of (QCMOP) in terms of \(\partial ^{\star }\) and \(\partial ^P\). Deriving necessary optimality conditions in multiobjective programming (for numerical, theoretical, and practical purposes) is the main subject of many publications that have appeared in recent years; see, e.g., [19, Chapters 9 and 10] and the references therein.

The following Constraint Qualifications (CQs) help us in deriving optimality conditions.

Definition 3.1

We say

  1. (i)

    (QCMOP) satisfies CQ1 if there exists some \({\bar{x}} \in int\, X\) with \(g_j({\bar{x}})<0,~j=1,2,\ldots ,q\).

  2. (ii)

    (QCMOP) satisfies \(CQ2^P\) at \({\hat{x}}\in M\) if

    $$\begin{aligned} N(M,{\hat{x}})\subseteq conv\Big (\bigcup _{j\in J({\hat{x}})}\partial ^P g_j({\hat{x}})\Big ). \end{aligned}$$

CQ1 is a variant of the Slater CQ [4, 27]. Theorem 3.1 presents necessary optimality conditions in terms of \(\partial ^*\) and \(\partial ^P\).

Theorem 3.1

(Necessary conditions in terms of \(\partial ^*\) and \(\partial ^P\)) Consider Problem (QCMOP) satisfying Assumption A. Assume that \(f_1,f_2,\ldots ,f_m\) are strongly quasiconvex. Let \({\hat{x}} \in W^{qc}\) while \({\hat{x}} \notin W^{F}\). Under either (CQ1) or (\(CQ2^{P}\)) one has

  1. (i)

    \(\Big (- \sum _{i=1}^m \partial ^P f_i({\hat{x}})\Big ) \cap \Big ( \sum _{j\in J({\hat{x}})}\partial ^P g_j({\hat{x}}) + N(X,{\hat{x}})\Big ) \ne \{0\};\)

  2. (ii)

    \(0\in \sum _{i=1}^m \lambda _i\partial ^* f_i({\hat{x}})+\sum _{j\in J({\hat{x}})}\partial ^P g_j({\hat{x}}) + N(X,{\hat{x}})\) for some \(\lambda _1,\ldots ,\lambda _m\ge 0\) with \(\sum _{i=1}^m\lambda _i=1.\)

Proof

According to the strong quasiconvexity assumption, each locally weak minimizer of F is a weak minimizer of F. So, since \({\hat{x}} \notin W^{F}\), we get \({\hat{x}}\notin W^{lF}\). The two sets

$$\begin{aligned} A:=\cap _{i=1}^mcl\,S^s_{\hat{x}}(f_i)\end{aligned}$$

and M are convex (by quasiconvexity of the appearing functions, and convexity of X) and nonempty (by feasibility of \({\hat{x}}\), and \({\hat{x}} \notin W^{lF}\)).

We show that \(cl\,S^s_{\hat{x}}(f_i)=S_{\hat{x}}(f_i),~i=1,2,\ldots ,m.\) The inclusion \(cl\,S^s_{\hat{x}}(f_i)\subseteq S_{\hat{x}}(f_i)\) is trivial due to the continuity assumption. To prove the converse inclusion, let i be arbitrary and let \(x^0\in S_{\hat{x}}(f_i).\) If \(x^0={\hat{x}}\), since \({\hat{x}} \notin W^{lF}\), there exists a sequence \(\{x_\nu \}_\nu \) in \(S^s_{\hat{x}}(f_i)\) convergent to \({\hat{x}}\). So, \(x^0={\hat{x}}\in cl\,S^s_{\hat{x}}(f_i).\) If \(x^0\ne {\hat{x}}\), taking the strong quasiconvexity assumption into account,

$$\begin{aligned} f_i(\lambda x^0+(1-\lambda ){\hat{x}})<f_i({\hat{x}}), \end{aligned}$$

for each \(\lambda \in (0,1).\) So, \(\lambda x^0+(1-\lambda ){\hat{x}}\in S^s_{\hat{x}}(f_i)\), while \(\lambda x^0+(1-\lambda ){\hat{x}}\rightarrow x^0\) as \(\lambda \rightarrow 1^-.\) Hence, \(x^0\in cl\,S^s_{\hat{x}}(f_i).\) So far, we have proved

$$\begin{aligned} cl\,S^s_{\hat{x}}(f_i)=S_{\hat{x}}(f_i),~i=1,2,\ldots ,m. \end{aligned}$$

Now, we claim \(int(A)\cap M=\emptyset .\) Notice that \(int(A)\ne \emptyset \) because of \({\hat{x}}\notin W^{lF}\). By indirect proof, assume that there exists some \(x^*\in M\cap int(A).\) If \(x^*={\hat{x}}\), then \({\hat{x}}\pm d\in A\) for some nonzero \(d\in \mathbb {R}^n\), leading to \(f_i({\hat{x}}\pm d)\le f_i({\hat{x}}),~i=1,2,\ldots ,m\). So, according to strong quasiconvexity, we get

$$\begin{aligned} f_i({\hat{x}})=f_i(\frac{1}{2}({\hat{x}}+d)+\frac{1}{2}({\hat{x}}-d))<\max \{f_i({\hat{x}}+d),f_i({\hat{x}}-d)\}\le f_i({\hat{x}}),~i=1,2,\ldots ,m. \end{aligned}$$

This obvious contradiction implies \(x^*\ne {\hat{x}}.\) As \(x^*\in int(A)\), there exists some \(\epsilon >0\) such that

$$\begin{aligned} x\in A=\cap _{i=1}^mcl\,S^s_{\hat{x}}(f_i)=\cap _{i=1}^mS_{\hat{x}}(f_i),~~\forall x\in B(x^*;\epsilon ). \end{aligned}$$

Also, \(x^*\in A\) leads to \(f_i(x^*)\le f_i({\hat{x}})\) for each \(i=1,2,\ldots ,m\). If \(f_k(x^*)=f_k({\hat{x}})\) for some \(k\in \{1,2,\ldots ,m\}\), then by setting

$$\begin{aligned} y:=x^*+\frac{\epsilon }{2\Vert x^*-{\hat{x}}\Vert }(x^*-{\hat{x}}), \end{aligned}$$

we have \(y\in B(x^*;\epsilon )\), leading to \(f_k(y)\le f_k({\hat{x}})=f_k(x^*).\) On the other hand, strong quasiconvexity of \(f_k\) implies

$$\begin{aligned} f_k(x^*)<\max \{f_k(y),f_k({\hat{x}})\}=f_k(x^*) \end{aligned}$$

(notice that \(x^*\) is a strict convex combination of y and \({\hat{x}}\)). This obvious contradiction implies \(f_i(x^*)<f_i({\hat{x}})\) for each \(i=1,2,\ldots ,m\). This contradicts \({\hat{x}}\in W^{qc}.\) Therefore,

$$\begin{aligned} int(A)\cap M=\emptyset . \end{aligned}$$

Now, by applying a separation theorem (see corollaries of [3, Theorem 2.4.8]), there exist some \(\xi \in \mathbb {R}^n \backslash \{0\}\) and some \(r\in \mathbb {R}\) such that

$$\begin{aligned} \xi ^T(y-{\hat{x}})\ge r \ge \xi ^T(u-{\hat{x}}),~~\forall u\in A=\cap _{i=1}^mcl\,S^s_{\hat{x}}(f_i),\ \forall y\in M. \end{aligned}$$
(2)

Taking \(y={\hat{x}}\), we get \(r\le 0\). On the other hand, since \({\hat{x}} \notin W^{lF}\), there exists a sequence \(\{x_\nu \}_\nu \) in \(\cap _{i=1}^mS^s_{\hat{x}}(f_i)\) convergent to \({\hat{x}}\). This leads to \(r \ge 0\), according to (2). Therefore, \(r=0\), and (2) implies

$$\begin{aligned} 0\ne \xi \in N \Big ( \cap _{i=1}^m{cl\,}S^s_{\hat{x}}(f_i),{\hat{x}} \Big )\cap \Big (-N(M,{\hat{x}})\Big ). \end{aligned}$$
(3)

Now, taking \(\cap _{i=1}^mint(cl\,S^s_{\hat{x}}(f_i))\ne \emptyset \) (derived from \({\hat{x}}\notin W^{lF}\)) into account, invoking the normal intersection formula (1), we get

$$\begin{aligned} \xi \in N \Big ( \cap _{i=1}^m{cl\,}S^s_{\hat{x}}(f_i),{\hat{x}} \Big )=\sum _{i=1}^m N \big ({cl\,}S^s_{\hat{x}}(f_i),{\hat{x}} \big )=\sum _{i=1}^m N \big (S_{\hat{x}}(f_i),{\hat{x}} \big )=\sum _{i=1}^m \partial ^P f_i({\hat{x}}). \end{aligned}$$
(4)

If CQ1 is fulfilled, then there exists some \({\bar{x}} \in X\) satisfying

$$\begin{aligned} {\bar{x}} \in \big (int\, X\big ) \cap \Big (\cap _{j = 1}^q int\,M_j\Big ). \end{aligned}$$

Applying the normal cone intersection formula (1) again, and using the fact that \(N(M_j,{\hat{x}})=\{0\}\) for each \(j\in \{1,\ldots ,q\} \setminus J({\hat{x}})\) (since \({\hat{x}} \in int\,M_j\)), we deduce

$$\begin{aligned} N(M,{\hat{x}})= & {} N\bigg (X \cap \Big (\cap _{j = 1}^q M_j\Big ),{\hat{x}}\bigg )= N(X,{\hat{x}})+\sum _{j=1}^q N(M_j,{\hat{x}}) \\= & {} N(X,{\hat{x}})+\sum _{j\in J({\hat{x}})}N(M_j,{\hat{x}}). \end{aligned}$$

Since for each \(j\in J({\hat{x}})\), we have

$$\begin{aligned} M_j=\big \{x\in \mathbb {R}^n : g_j(x)\le 0 \big \}= \big \{x\in \mathbb {R}^n : g_j(x)\le g_j({\hat{x}}) \big \}=S_{{\hat{x}}}(g_j), \end{aligned}$$

we get

$$\begin{aligned} N(M,{\hat{x}})=N(X,{\hat{x}})+\sum _{j\in J({\hat{x}})}N\big (S_{{\hat{x}}}(g_j),{\hat{x}}\big )=N(X,{\hat{x}})+\sum _{j\in J({\hat{x}})}\partial ^P g_j({\hat{x}}). \end{aligned}$$
(5)

Now, combining (3), (4), and (5) yields the desired result of part (i) under CQ1.

If \(CQ2^P\) holds, then

$$\begin{aligned} -\xi \in N(M,{\hat{x}})\subseteq conv\Big (\bigcup _{j\in J({\hat{x}})}\partial ^Pg_j({\hat{x}})\Big )=\sum _{j\in J({\hat{x}})}\partial ^P g_j({\hat{x}}). \end{aligned}$$
(6)

Equations (3), (4), and (6) lead to the desired result of part (i) under \(CQ2^{P}\).

Part (ii) is derived from part (i) due to the relation \(\partial ^P\varphi (x)\setminus \{0\}\subset \partial ^*\varphi (x)\). \(\square \)
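
To see condition (ii) in action, the following sketch (an illustrative instance of ours, not taken from the paper) considers \(f_i(x)=\Vert x-c_i\Vert ^2\), which is strictly convex and hence strongly quasiconvex, one affine constraint, and \(X=\mathbb {R}^2\). For such data one has \(\nabla f_i({\hat{x}})\in \partial ^* f_i({\hat{x}})\) whenever \(\nabla f_i({\hat{x}})\ne 0\), and \(\partial ^P g_1({\hat{x}})=cone\{\nabla g_1({\hat{x}})\}\) for an affine constraint active at \({\hat{x}}\), so condition (ii) reduces to a classical KKT system that a small linear solve can verify.

```python
import numpy as np

# Illustrative data (assumptions of this sketch): two objectives, one constraint.
c1, c2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
f_grads = [lambda x: 2 * (x - c1), lambda x: 2 * (x - c2)]  # gradients of ||x - c_i||^2
a, b = np.array([1.0, 0.0]), 0.25                           # g_1(x) = a^T x - b <= 0
xhat = np.array([0.25, 0.5])                                # candidate weakly efficient point
assert abs(a @ xhat - b) < 1e-12                            # g_1 is active at xhat

# Condition (ii): 0 = lam1*xi1 + lam2*xi2 + mu*a, with xi_i = grad f_i(xhat),
# lam_i >= 0, lam1 + lam2 = 1, mu >= 0, and N(X, xhat) = {0} since X = R^2.
G = np.column_stack([g(xhat) for g in f_grads] + [a])       # columns: xi1, xi2, a
A = np.vstack([G, [1.0, 1.0, 0.0]])                         # append the equation sum(lam) = 1
rhs = np.array([0.0, 0.0, 1.0])
coef, *_ = np.linalg.lstsq(A, rhs, rcond=None)
lam1, lam2, mu = coef
ok = np.allclose(A @ coef, rhs) and min(lam1, lam2, mu) >= -1e-9
print(f"lam = ({lam1:.2f}, {lam2:.2f}), mu = {mu:.2f}; condition (ii) holds: {ok}")
# expected output: lam = (0.50, 0.50), mu = 0.50; condition (ii) holds: True
```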

Some points worth mentioning about the assumptions and results of Theorem 3.1 are as follows.

  1. (1)

    Under the assumptions of Theorem 3.1, each weakly efficient solution of (QCMOP) is a strictly efficient solution of this problem. Recall, from the literature, that a feasible solution \({\hat{x}} \in M\) is said to be a strictly efficient solution of (QCMOP) if there is no \(x\in M\setminus \{{\hat{x}}\}\) satisfying \(f_i(x)\le f_i({\hat{x}}),~i=1,\ldots ,m\). See [10].

  2. (2)

As each efficient [10]/robust efficient [31]/properly efficient [10] solution is a weakly efficient solution, the necessary conditions provided in the current paper are valid for the aforementioned solution notions as well.

  3. (3)

Theorem 3.1 may not hold if one replaces “strong quasiconvexity” with “quasiconvexity”; Examples 1 and 2 illustrate this.

  4. (4)

Necessary optimality conditions for the single-objective case, in terms of Plastria and Gutiérrez subdifferentials, can be found in [16, Propositions 4, 7, 10]. See also the proof of [16, Lemma 9] to gain better insight into CQ2. Furthermore, optimality conditions for multiobjective problems, with respect to all-purpose subdifferentials, have been derived in several publications. See, e.g., [18, 19] for some conditions in terms of Mordukhovich subdifferentials.

Example 1

Consider a (QCMOP) in \({\mathbb {R}}\) with \(m=q=2\), \(X={\mathbb {R}}\), and

$$\begin{aligned} f_1(x)=\left\{ \begin{array}{ll} x+2, &{} ~~x<-1 \\ 1, &{}~~x\in [-1,1] \\ x,&{}~~x>1 \\ \end{array}\right. ~~~~~~~~ f_2(x)=\left\{ \begin{array}{ll} x+1, &{} ~~x<-1 \\ 0, &{}~~x\in [-1,0] \\ x,&{}~~x>0 \\ \end{array}\right. \end{aligned}$$

Also, let \(g_1(x)=x-1\) and \(g_2(x)=-x-1.\) The above four functions are continuous and quasiconvex while \(f_1,f_2\) are not strongly quasiconvex. In the considered problem, \(M=[-1,1]\). Considering \({\hat{x}}=-1,\) we have \({\hat{x}}\in W^{qc}\) and \({\hat{x}}\notin W^{F}.\) Both CQ1 and \(CQ2^P\) are satisfied. Relation (i) in Theorem 3.1 does not hold here because \(\partial ^P f_1({\hat{x}})=\partial ^P f_2({\hat{x}})=\{0\}.\)

Example 2

In Example 1, let \({\hat{x}}=1.\) We have \({\hat{x}}\in W^{qc}\) and \({\hat{x}}\notin W^{F}.\) Both CQ1 and \(CQ2^P\) are satisfied. Relation (ii) in Theorem 3.1 does not hold here because

$$\begin{aligned} \partial ^* f_1({\hat{x}})=\partial ^* f_2({\hat{x}})=(0,+\infty ),~J({\hat{x}})=\{1\},~\partial ^P g_1({\hat{x}})=[0,+\infty ),~N(X,{\hat{x}})=\{0\}. \end{aligned}$$
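
The subdifferential computations in Examples 1 and 2 can be cross-checked by brute force with grid-based membership tests (our illustration; the grids are assumptions of the sketch). The snippet below reproduces \(\partial ^P f_1(-1)=\partial ^P f_2(-1)=\{0\}\) from Example 1 and the positivity of \(\partial ^* f_1(1)=\partial ^* f_2(1)=(0,+\infty )\) from Example 2, up to grid truncation.

```python
import numpy as np

def f1(x): return np.where(x < -1, x + 2, np.where(x <= 1, 1.0, x))
def f2(x): return np.where(x < -1, x + 1, np.where(x <= 0, 0.0, x))

xs = np.linspace(-10.0, 10.0, 8001)   # sampled points
ds = np.linspace(-5.0, 5.0, 1001)     # candidate subgradients

def penot(f, xbar):                   # grid estimate of the Penot set N(S_xbar(f), xbar)
    S = xs[f(xs) <= f(xbar)]
    return np.array([d for d in ds if np.all(d * (S - xbar) <= 1e-12)])

def star(f, xbar):                    # grid estimate of the star set (xbar not a minimizer)
    Ss = xs[f(xs) < f(xbar)]
    return np.array([d for d in ds if d != 0 and np.all(d * (Ss - xbar) <= 1e-12)])

# Example 1: both Penot subdifferentials collapse to {0} at xbar = -1 ...
print(penot(f1, -1.0), penot(f2, -1.0))          # [0.] [0.]
# ... Example 2: both star subdifferentials are (0, +inf) at xbar = 1 (grid-truncated).
print(star(f1, 1.0).min(), star(f2, 1.0).min())  # 0.01 0.01
```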

Theorem 3.2, derived from Theorem 3.1 and some results existing in the literature, provides necessary conditions for weakly efficient solutions of (QCMOP) with respect to other aforementioned subdifferentials.

Theorem 3.2

(KKT necessary conditions with respect to \(\partial ^P\), \(\partial ^<\) and \(\partial ^{\le }\)) Consider Problem (QCMOP) satisfying Assumption A. Let \({\hat{x}} \in W^{qc}\). Consider the following circumstances.

  1. (1a)

    \(f_i,~i=1,2,\ldots ,m\) are Lipschitz on \(S^{s}_{{\hat{x}}}(f_i)\).

  2. (2a)

    \(g_j,~j\in J({\hat{x}})\) are Lipschitz on \(S^{s}_{{\hat{x}}}(g_j)\).

  3. (3a)

There is no local minimizer of \(f_i,~i=1,2,\ldots ,m,\) in \(f_i^{-1}(f_i({\hat{x}}))\).

  4. (4a)

    There is no local minimizer of \(g_j,~j\in J({\hat{x}})\), in \(g_j^{-1}(g_j({\hat{x}}))\).

  5. (5a)

    Each local minimizer of \(g_j,~j\in J({\hat{x}})\), is a global minimizer.

Then

  1. (1b)

    Under the assumptions of Theorem 3.1, if (1a) holds, then there exist \(\lambda _1,\lambda _2,\ldots ,\lambda _m{\ge }0\) such that \({\sum _{i=1}^m\lambda _i=1}\) and

    $$\begin{aligned} 0\in \sum _{i=1}^m \lambda _i\partial ^{<} f_i({\hat{x}}) + \sum _{j\in J({\hat{x}})}\partial ^P g_j({\hat{x}}) + N(X,{\hat{x}}). \end{aligned}$$
  2. (2b)

    Under the assumptions of Theorem 3.1, if (1a), (2a), and (4a) hold, then there exist \(\lambda _1,\lambda _2,\ldots ,\lambda _m{\ge }0\) with \({\sum _{i=1}^m\lambda _i=1}\) and \(\mu _j\ge 0;~j\in J({\hat{x}})\) such that

    $$\begin{aligned} 0\in \sum _{i=1}^m \lambda _i\partial ^{<} f_i({\hat{x}}) + \sum _{j\in J({\hat{x}})}\mu _j\partial ^< g_j({\hat{x}}) + N(X,{\hat{x}}). \end{aligned}$$
  3. (3b)

    Under the assumptions of Theorem 3.1, if (1a)-(5a) hold, then there exist \(\lambda _1,\lambda _2,\ldots ,\lambda _m{\ge }0\) with \({\sum _{i=1}^m\lambda _i=1}\) and \(\mu _j\ge 0;~j\in J({\hat{x}})\) such that

    $$\begin{aligned} 0\in \sum _{i=1}^m \lambda _i\partial ^{\le } f_i({\hat{x}}) + \sum _{j\in J({\hat{x}})}\mu _j\partial ^{\le } g_j({\hat{x}}) + N(X,{\hat{x}}). \end{aligned}$$

Proof

Part (1b) is derived from [20, Proposition 12] and Theorem 3.1(ii) (Notice that we have continuity from Assumption A). Part (2b) is derived from [20, Propositions 8 and 12] and Theorem 3.1(ii). Part (3b) results from [20, Proposition 9] and the preceding part. \(\square \)

According to the results of [20], in the above theorem \(\partial ^<\) can be replaced by \(\partial ^{GP}\) as well (under appropriate assumptions). Furthermore, some remarks similar to those given after Theorem 3.1 can be provided about Theorem 3.2 as well.

Remark 2

As mentioned before, some necessary optimality conditions for the single-objective case, in terms of Plastria and Gutiérrez subdifferentials, have been provided by Linh and Penot [16]. They derived their results under conditions different from ours. Precisely speaking, they obtained \(0\in \partial ^<f({\bar{x}})+N(\varOmega ,{\bar{x}})\) (here, f is the single objective function) as a necessary condition for local optimality, provided that \({\bar{x}}\) is not a local minimizer of f and the objective function is quasiconvex and upper semi-continuous (usc) fulfilling \(N(S^{s}_{{\bar{x}}}(f),{\bar{x}})=[0,\infty )\partial ^<f({\bar{x}})\). A function satisfying the preceding equality is called a Plastria function. Linh and Penot [16] have established analogous results invoking the Gutiérrez subdifferential as well.

At first glance, it may seem that one can characterize weakly efficient solutions of multiobjective programs utilizing the results of [16], due to the connection between single-objective and multiobjective programming via the weighted sum scalarization tool [10]. Such a trick does not work here, because the sum of quasiconvex functions is not necessarily quasiconvex.
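
A one-line numerical counterexample illustrates the obstruction: \(x\mapsto x^3\) and \(x\mapsto -x\) are quasiconvex (being monotone), yet their sum \(x^3-x\) violates the defining inequality of quasiconvexity.

```python
# x**3 and -x are monotone, hence quasiconvex, but their sum s(x) = x**3 - x is not:
s = lambda x: x ** 3 - x
x, y, lam = -1.2, 0.5, 0.5
mid = lam * x + (1 - lam) * y                 # mid = -0.35
print(s(mid), ">", max(s(x), s(y)))           # 0.307... > -0.375
assert s(mid) > max(s(x), s(y))               # the quasiconvexity inequality fails for the sum
```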

In the following, we work with some subdifferentials addressed by Suzuki and Kuroiwa [29, 30], based on an interesting property which says that a lower semi-continuous quasiconvex function is the supremum of a family of lower semi-continuous quasiaffine functions [22].

Defining \(\overline{{\mathbb {R}}}:={\mathbb {R}}\cup \{\pm \infty \}\) and

$$\begin{aligned} \Sigma :=\Big \{h:{\mathbb {R}}\longrightarrow \overline{{\mathbb {R}}}~ : ~h\text { is lower semi-continuous (lsc) and non-decreasing}\Big \}, \end{aligned}$$

Penot and Volle [23] proved that \(\varphi :{\mathbb {R}}^n\longrightarrow {\mathbb {R}}\) is lsc and quasiconvex if and only if there exists a family

$$\begin{aligned} \Big \{(k_t,w_t)~ : ~t\in \digamma \Big \}\subseteq \Sigma \times {\mathbb {R}}^n \end{aligned}$$

such that

$$\begin{aligned} \varphi (x)=\sup _{t\in \digamma } k_t(w_t^Tx),~~x\in {\mathbb {R}}^n. \end{aligned}$$

Suzuki and Kuroiwa [29, 30] called the set \(\{(k_t,w_t)~ : ~t\in \digamma \}\) a generator of \(\varphi \), and defined another subdifferential for quasiconvex functions using the Penot and Volle result.

In the following definition, \(D_-h(\bar{a})\) stands for the lower left-hand Dini derivative of \(h:{\mathbb {R}}\longrightarrow {\mathbb {R}}\) at \(\bar{a}\) defined by

$$\begin{aligned} D_-h(\bar{a})=\liminf _{\varepsilon \longrightarrow 0^-}\frac{h(\bar{a}+\varepsilon )-h(\bar{a})}{\varepsilon }. \end{aligned}$$

It is clear that \(D_-h(\bar{a})\ge 0\) for each \(h\in \Sigma \). A function h is called lower left-hand Dini differentiable (LLD, in brief) if \(D_-h(a)\) is finite at each \(a\in {\mathbb {R}}.\) A generator \(\big \{(k_t,w_t)~ : ~t\in \digamma \big \}\) is called LLD if each \(k_t,~t\in \digamma ,\) is LLD.
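
Numerically, the lower left-hand Dini derivative can be approximated from its definition; the helper below is a crude illustrative sketch of ours (a finite sequence of step sizes only approximates the liminf, so the output is an estimate, not a certified value).

```python
def dini_lower_left(h, a, eps0=1e-2, shrink=0.5, steps=40):
    """Approximate D_- h(a) = liminf_{eps -> 0^-} (h(a + eps) - h(a)) / eps."""
    eps, quotients = -eps0, []
    for _ in range(steps):
        quotients.append((h(a + eps) - h(a)) / eps)
        eps *= shrink
    # the liminf is approximated by the smallest difference quotient in the tail
    return min(quotients[steps // 2:])

print(dini_lower_left(lambda t: t, 0.0))             # approx. 1.0
print(dini_lower_left(lambda t: max(t, 0.0), 0.0))   # approx. 0.0 (nondecreasing: D_- >= 0)
print(dini_lower_left(abs, 0.0))                     # approx. -1.0 (|.| is not in Sigma)
```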

Definition 3.2

[29, 30] Let \(\varphi :{\mathbb {R}}^n\longrightarrow {\mathbb {R}}\) be a lsc quasiconvex function with an LLD generator \(G:=\{(k_t,w_t)~ : ~t\in \digamma \}\subseteq \Sigma \times {\mathbb {R}}^n\). The G-subdifferential of \(\varphi \) at \({\bar{x}}\in {\mathbb {R}}^n\) is defined as follows:

$$\begin{aligned} \partial _G\varphi ({\bar{x}}):=cl\Big ( conv\Big (\Big \{D_-k_t(w_t^T{\bar{x}})w_t~ : ~t\in \Gamma _{{\bar{x}}}\Big \}\Big )\Big ), \end{aligned}$$

where

$$\begin{aligned} \Gamma _{{\bar{x}}}:=\{t\in \digamma ~ : ~\varphi ({\bar{x}})=k_t(w_t^T{\bar{x}})\}. \end{aligned}$$
(7)
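
For a finite generator, the objects appearing in Definition 3.2 can be computed directly. The following sketch is an illustrative example of ours: it assumes the finite LLD generator \(\{(k_1,w_1),(k_2,w_2)\}\) with \(k_1(a)=a\) and \(k_2(a)=2a\), so that \(\varphi (x)=\max \{w_1^Tx,2w_2^Tx\}\), identifies the active index set \(\Gamma _{x}\) of (7) within a tolerance, and returns the points whose closed convex hull is \(\partial _G\varphi (x)\).

```python
import numpy as np

# Assumed finite LLD generator (an illustrative choice): k_1(a) = a, k_2(a) = 2a,
# both lsc and nondecreasing, with D_- k_1 = 1 and D_- k_2 = 2.
ws = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
ks = [lambda a: a, lambda a: 2.0 * a]
dks = [lambda a: 1.0, lambda a: 2.0]        # lower left-hand Dini derivatives of k_t

def phi(x):
    return max(k(w @ x) for k, w in zip(ks, ws))   # phi(x) = sup_t k_t(w_t^T x)

def G_subdiff_points(x, tol=1e-9):
    """Points whose closed convex hull is the G-subdifferential at x (Definition 3.2)."""
    val = phi(x)
    Gamma = [t for t, (k, w) in enumerate(zip(ks, ws)) if abs(k(w @ x) - val) <= tol]  # (7)
    return [dks[t](ws[t] @ x) * ws[t] for t in Gamma]

print(G_subdiff_points(np.array([2.0, 1.0])))  # both indices active: [(1, 0), (0, 2)]
print(G_subdiff_points(np.array([3.0, 1.0])))  # only the first index active: [(1, 0)]
```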

Theorem 3.3 presents a necessary condition for weakly efficient solutions of (QCMOP) in terms of \(\partial _G\) and \(\partial ^P\).

Theorem 3.3

(KKT necessary condition with respect to \(\partial _G\) and \(\partial ^P\)) Consider Problem (QCMOP) satisfying Assumption A. Let \(G^i=\{(k_t^i,w_t^i) : t\in \digamma ^i\},~i=1,2,\ldots ,m,\) be an LLD generator of \(f_i,~i=1,2,\ldots ,m\). Assume that, for each i, the set \(\digamma ^i\) is compact, the mappings \(t\longrightarrow w_t^i\) and \((t,a)\longrightarrow D_-k_t^i(a)\) are continuous, and the mapping \((t,a)\longrightarrow k_t^i(a)\) is upper semi-continuous (usc). If \({\hat{x}}\in W^{qc}\) and CQ1 is fulfilled, then there exist scalars \(\lambda _1,\lambda _2,\ldots ,\lambda _m\ge 0\) such that \(\sum _{i=1}^m\lambda _i=1\) and

$$\begin{aligned} 0\in \sum _{i=1}^m\lambda _i \partial _Gf_i({\hat{x}})+\sum _{j\in J({\hat{x}})}\partial ^P g_j({\hat{x}}) + N(X,{\hat{x}}). \end{aligned}$$

Proof

According to Assumption A, the feasible set M is a closed convex set. Set

$$\begin{aligned} B:=\Big \{\sum _{i=1}^m\lambda _i d^i~ : ~\sum _{i=1}^m\lambda _i=1,~\lambda _i\ge 0,~d^i\in \partial _Gf_i({\hat{x}})\Big \}. \end{aligned}$$

Under the assumptions of the theorem, it can be shown that \(\partial _Gf_i({\hat{x}})\) is a compact set for each i; see [29, 30]. Hence, B is a compact set. We claim that the following system does not have any solution \(d\in {\mathbb {R}}^n\):

$$\begin{aligned} \left\{ \begin{array}{ll} d^Ty<0,&{}~\forall y\in B,\\ d^Tz\le 0,&{}~\forall z\in N(M,{\hat{x}}).\\ \end{array}\right. \end{aligned}$$
(8)

By contradiction, assume that a nonzero vector \(d_0\in {\mathbb {R}}^n\) is a solution of (8). We have \(d_0\in (N(M,{\hat{x}}))^*=T(M,{\hat{x}})\), and hence

$$\begin{aligned} \exists \alpha _\nu \downarrow 0,~ \exists \{d_\nu \}\subseteq {\mathbb {R}}^n~~s.t.~~ d_\nu \rightarrow d_0, ~~x_\nu :={\hat{x}}+\alpha _\nu d_\nu \in M,\;\forall \nu . \end{aligned}$$

Let \(i\in \{1,2,\ldots ,m\}\) be arbitrary. The set \(\digamma ^i\) is compact and, for each fixed x, the mapping \(t\longrightarrow k_t^i(w_t^{i^T}x)\) is usc; thus the supremum defining \(f_i(x_\nu )\) is attained and \(\Gamma ^i_{x_\nu }\ne \emptyset \) for each \(\nu \). Choosing \(t_\nu \in \Gamma ^i_{x_\nu }\) for each \(\nu \), the sequence \(\{t_\nu \}\) has a convergent subsequence, by the compactness of \(\digamma ^i\). We denote this subsequence by \(t_\nu \) for simplicity, and assume \(t_\nu \longrightarrow \hat{t}\in \digamma ^i.\) Since \(f_i\) is continuous and \((t,a)\longrightarrow k_t^i(a)\) is usc, we get \(f_i({\hat{x}})=k^i_{\hat{t}}(w_{\hat{t}}^{i^T}{\hat{x}}),\) and hence \(\hat{t}\in \Gamma ^{i}_{{\hat{x}}}.\)

On the other hand, \(D_-k_{\hat{t}}^i(w_{\hat{t}}^i{^T}{\hat{x}})>0\) and \(w_{\hat{t}}^i{^T}d_0<0\), because

$$\begin{aligned} \begin{array}{ll} \hat{t}\in \Gamma ^i_{{\hat{x}}}&{}\Longrightarrow D_-k_{\hat{t}}^i(w_{\hat{t}}^i{^T}{\hat{x}})w_{\hat{t}}^i\in \partial _Gf_i({\hat{x}})\subseteq B\\ &{}\Longrightarrow \underbrace{D_-k_{\hat{t}}^i(w_{\hat{t}}^i{^T}{\hat{x}})}_{>0}w_{\hat{t}}^i{^T}d_0<0. \end{array} \end{aligned}$$

Due to the continuity of \((t,a)\longrightarrow D_-k_t^i(a)\) and of \(t\longrightarrow w_t^i\), we have \(D_-k_{t_\nu }^i(w_{t_\nu }^{i^T}{\hat{x}})>0\) and \(w_{t_\nu }^{i^T}d_\nu <0\) for sufficiently large \(\nu \). Hence, noting that \(f_i(x_\nu )=k^i_{t_\nu }(w^{i^T}_{t_\nu }x_\nu )\), \(f_i({\hat{x}})\ge k^i_{t_\nu }(w^{i^T}_{t_\nu }{\hat{x}})\), and \(\alpha _\nu w^{i^T}_{t_\nu }d_\nu <0\), for sufficiently large \(\nu \) we have

$$\begin{aligned} \frac{f_i(x_\nu )-f_i({\hat{x}})}{\alpha _\nu w^{i^T}_{t_\nu }d_\nu }\ge & {} \frac{k^i_{t_\nu }\Big (w^{i^T}_{t_\nu }{\hat{x}}+\alpha _\nu w^{i^T}_{t_\nu }d_\nu \Big )-k^i_{t_\nu }\Big (w^{i^T}_{t_\nu }{\hat{x}}\Big )}{\alpha _\nu w^{i^T}_{t_\nu }d_\nu }\\\ge & {} \liminf _{\varepsilon \longrightarrow 0^-}\frac{k^i_{t_\nu }\Big (w^{i^T}_{t_\nu }{\hat{x}}+\varepsilon \Big )-k^i_{t_\nu }\Big (w^{i^T}_{t_\nu }{\hat{x}}\Big )}{\varepsilon }\\= & {} D_-k^i_{t_\nu }(w^{i^T}_{t_\nu }{\hat{x}})>0. \end{aligned}$$

This implies

$$\begin{aligned} \exists \nu _i\in \mathbb {N}~~s.t.~~f_i(x_\nu )<f_i({\hat{x}}),~~\forall \nu \ge \nu _i. \end{aligned}$$

Since i was arbitrary, choosing \(\nu \ge \max \{\nu _1,\ldots ,\nu _m\}\) gives \(f_i(x_\nu )<f_i({\hat{x}})\) for all i, with \(x_\nu \in M\); this contradicts the weak efficiency of \({\hat{x}}\). Hence, (8) does not have any solution \(d\in {\mathbb {R}}^n\). On the other hand, \(B+N(M,{\hat{x}})\) is closed. Now, by applying Theorem 6 in [1], we have

$$\begin{aligned} 0\in B+N(M,{\hat{x}}). \end{aligned}$$
(9)

According to CQ1 and similar to the proof of Theorem 3.1, we infer

$$\begin{aligned} N(M,{\hat{x}})=\sum _{j\in J({\hat{x}})}\partial ^P g_j({\hat{x}}) +N(X,{\hat{x}}). \end{aligned}$$
(10)

Now, the proof is completed according to (9) and (10). \(\square \)

Some points worth mentioning about Theorem 3.3 are as follows.

  1. (1)

    Theorem 3.3 is still valid if one replaces CQ1 with \(CQ2^P\).

  2. (2)

Some continuity assumptions considered in Theorem 3.3 are redundant when the sets \(\digamma ^i\) are finite.

  3. (3)

    Similar to Theorem 3.2, more necessary conditions in terms of \(\partial ^<\), \(\partial ^{\le }\), and \(\partial ^*\) (instead of \(\partial ^P\)) can also be obtained.

Remark 3

Under the assumptions of Theorem 3.3, if \(m=1\) (i.e., one considers the single-objective case), then all assumptions of [29, Theorem 1(ii)] are fulfilled, and hence by [29, Theorem 1], we get \(0\in \partial _Gf({\hat{x}})+N(M,{\hat{x}})\). If one furthermore assumes CQ1, then we will have

$$\begin{aligned} N(M,{\hat{x}})=\sum _{j\in J({\hat{x}})}\partial ^P g_j({\hat{x}}) +N(X,{\hat{x}}), \end{aligned}$$

leading to

$$\begin{aligned} 0\in \partial _Gf({\hat{x}})+\sum _{j\in J({\hat{x}})}\partial ^P g_j({\hat{x}}) +N(X,{\hat{x}}). \end{aligned}$$

So, Theorem 3.3 can be considered as an extension of [29, Theorem 1].

In Theorem 3.4 below, we investigate a sufficient optimality condition.

Theorem 3.4

(KKT sufficient condition) Consider Problem (QCMOP) satisfying Assumption A. If \({\hat{x}} \in M\) satisfies

$$\begin{aligned} 0\in \sum _{i=1}^m \partial ^{\star } f_i ({\hat{x}})+\sum _{j\in J({\hat{x}})}\partial ^P g_j({\hat{x}})+N(X,{\hat{x}}), \end{aligned}$$
(11)

then \({\hat{x}}\) is a weakly efficient solution of (QCMOP).

Proof

If \(0\in \partial ^{\star } f_i ({\hat{x}})\) for some i, then \({\hat{x}}\) is a minimizer of \(f_i\), and hence \({\hat{x}}\) is a weakly efficient solution of (QCMOP). So, we assume \(0\notin \partial ^{\star } f_i ({\hat{x}})\) for all i. By indirect proof, assume that there exists some \(x^* \in M\) such that

$$\begin{aligned} f_i(x^*)<f_i(\hat{x}),~~\forall i=1,\ldots ,m. \end{aligned}$$
(12)

Let \(J(\hat{x}):=\{j_1,\ldots ,j_k\}\). Due to (11), the equality

$$\begin{aligned} \xi _1+\ldots +\xi _m+\zeta _{j_1}+\ldots +\zeta _{j_k}+\eta =0 \end{aligned}$$
(13)

holds for some \(\xi _i \in \partial ^{\star } f_i (\hat{x}),\ i=1,\ldots ,m,\) and \(\zeta _{j_v} \in \partial ^P g_{j_v}(\hat{x}),\ v=1,2,\ldots ,k\), and \(\eta \in N(X,\hat{x})\). Because of \(g_{j_v}(x^*)\le 0=g_{j_v}(\hat{x})\) for \(v=1,2,\ldots ,k\), the inclusion \(x^* \in S_{\hat{x}}(g_{j_v})\) holds, and hence

$$\begin{aligned} \zeta _{j_v}^T(x^*-\hat{x})\le 0,~~\forall v=1,2,\ldots ,k. \end{aligned}$$
(14)

On the other hand, from (12) we deduce

$$\begin{aligned} \xi _i^T(x^*-\hat{x})\le 0,~~\forall i=1,\ldots ,m. \end{aligned}$$
(15)

Also, \(\xi _i \ne 0\), for all \(i=1,\ldots ,m\). We claim

$$\begin{aligned} \xi _i^T(x^*-\hat{x})<0,~~\forall i=1,\ldots ,m. \end{aligned}$$
(16)

If (16) does not hold, in view of (15) we obtain

$$\begin{aligned} \xi _{i_0}^T(x^*-\hat{x})=0, \end{aligned}$$

for some \(i_0 \in \{1,\ldots ,m\}\). The latter implies the existence of a sequence \(\{u_t\}_{t=1}^{\infty }\) converging to \(x^* -\hat{x}\) such that \(\xi _{i_0}^Tu_t>0\) for all \(t\in \mathbb {N}\) (take, e.g., \(u_t=(x^*-\hat{x})+\frac{1}{t}\xi _{i_0}\)). Due to

$$\begin{aligned} \xi _{i_0}^T((u_t+\hat{x})-\hat{x})>0 \end{aligned}$$

and the definition of \(\partial ^{\star } f_{i_0}(\hat{x})\), the inequality \(f_{i_0}(u_t+\hat{x}) \ge f_{i_0}(\hat{x})\) is fulfilled. Thus passing to the limit as \(t\rightarrow \infty \) and by continuity of \(f_{i_0}\), we obtain \(f_{i_0}(x^*) \ge f_{i_0}(\hat{x})\), which contradicts (12). Hence, (16) holds.

Adding inequalities (14), (16), and \(\eta ^T(x^*-\hat{x})\le 0\) we get

$$\begin{aligned} \Big (\xi _1+\ldots +\xi _m+\zeta _{j_1}+\ldots +\zeta _{j_k}+\eta \Big )^T(x^*-\hat{x})<0, \end{aligned}$$

which contradicts (13) and completes the proof. \(\square \)
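
To see the sufficient condition in use, one may revisit Example 1 at \({\hat{x}}=-1\) (the following computation is ours and is easily checked directly from the definitions). There, \(J({\hat{x}})=\{2\}\) and

$$\begin{aligned} \partial ^{\star } f_1({\hat{x}})=\partial ^{\star } f_2({\hat{x}})=(0,+\infty ),~~\partial ^P g_2({\hat{x}})=N\big ([-1,\infty ),-1\big )=(-\infty ,0],~~N(X,{\hat{x}})=\{0\}, \end{aligned}$$

so, e.g., \(0=1+1+(-2)\) belongs to the sum in (11), and Theorem 3.4 confirms \({\hat{x}}\in W^{qc}\), in accordance with Example 1.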

The following example shows that Theorem 3.4 may not be valid if one replaces \(\partial ^P\) with \(\partial ^*\) for constraint functions.

Example 3

Consider a (QCMOP) in \({\mathbb {R}}\) with \(m=2,~q=1,~X={\mathbb {R}},~f_1(x)=x,~f_2(x)=x^3,\) and

$$\begin{aligned}g_1(x)=\left\{ \begin{array}{ll} -1, &{} ~~~x\in [2,4] \\ 1-x, &{}~~~x\in [1,2]\\ x-5, &{}~~~x\in [4,5]\\ 0,&{} ~~~\text {otherwise}. \end{array}\right. \end{aligned}$$

Here, \(M=\mathbb {R}\). Considering \({\hat{x}}=-1\), we have \(J(-1)=\{1\}\), and

$$\begin{aligned} S_{-1}^s(g_1)= & {} (1,5)\Longrightarrow \partial ^*g_1(-1)=(-\infty ,0), \\ S_{-1}^s(f_1)= & {} S_{-1}^s(f_2)=(-\infty ,-1)\Longrightarrow \partial ^*f_1(-1)=\partial ^*f_2(-1)=(0,\infty ). \end{aligned}$$

Hence, \(0\in \partial ^*f_1(-1)+\partial ^*f_2(-1)+\partial ^*g_1(-1)+N(X,{\hat{x}})\) while \({\hat{x}}\) is not a weakly efficient solution to the considered problem.
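
Example 3 can also be verified by brute force with the same grid-based tests (our illustration; grids and tolerances are assumptions): the snippet below reproduces \(\partial ^*f_1(-1)=\partial ^*f_2(-1)=(0,+\infty )\) and \(\partial ^*g_1(-1)=(-\infty ,0)\) up to truncation, and confirms that \(x=-2\) strictly improves both objectives, so \({\hat{x}}=-1\) is not weakly efficient.

```python
import numpy as np

def g1(x):
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    out[(x >= 1) & (x <= 2)] = 1 - x[(x >= 1) & (x <= 2)]
    out[(x > 2) & (x < 4)] = -1.0
    out[(x >= 4) & (x <= 5)] = x[(x >= 4) & (x <= 5)] - 5
    return out

xs = np.linspace(-10.0, 10.0, 8001)
ds = np.linspace(-5.0, 5.0, 1001)
xbar = -1.0

def star_grid(fvals, fbar):           # grid estimate of the star set at xbar
    Ss = xs[fvals < fbar]             # sampled strict sublevel set
    return np.array([d for d in ds if d != 0 and np.all(d * (Ss - xbar) <= 1e-12)])

s_f1 = star_grid(xs, xbar)                        # f_1(x) = x
s_f2 = star_grid(xs ** 3, xbar ** 3)              # f_2(x) = x^3
s_g1 = star_grid(g1(xs), float(g1([xbar])[0]))    # g_1 as above
print(s_f1.min(), s_f2.min(), s_g1.max())         # 0.01, 0.01, -0.01 (grid-truncated)
# 0 = 1 + 1 + (-2) lies in the sum of the three sets, yet x = -2 dominates xbar = -1:
print(-2.0 < xbar and (-2.0) ** 3 < xbar ** 3)    # True: xbar is not weakly efficient
```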

Due to the proof of Theorem 3.1, under the assumptions of this theorem, one can get

$$\begin{aligned} \Big (- \sum _{i=1}^m \partial ^P f_i({\hat{x}})\Big ) \cap \Big ( N(M,{\hat{x}})\Big ) \ne \{0\} \end{aligned}$$

as a necessary condition for weak efficiency of \({\hat{x}}.\) We close the paper with a result which shows that the inward part

$$\begin{aligned} \Big (- \sum _{i=1}^m \partial ^P f_i( x)\Big ) \cap \Big ( N(M, x)\Big ) \end{aligned}$$

of the normal cone to M does not depend on the choice of feasible x in \(F^{-1}\big (F({\hat{x}})\big )\). This theorem extends the main results of [5].

Theorem 3.5

Consider Problem (QCMOP) satisfying Assumption A. Let \({\hat{x}}\in M\). Then for each \(\hat{y}\in M\cap \big (F^{-1}(F({\hat{x}}))\big )\) we have

$$\begin{aligned} \Big (- \sum _{i=1}^m \partial ^P f_i({\hat{x}})\Big ) \cap \Big ( N(M,{\hat{x}})\Big )=\Big (- \sum _{i=1}^m \partial ^P f_i(\hat{y})\Big ) \cap \Big ( N(M,\hat{y})\Big ). \end{aligned}$$

Proof

Considering \(\xi ^* \in \Big (- \sum _{i=1}^m \partial ^P f_i({\hat{x}})\Big ) \cap \Big ( N(M,{\hat{x}})\Big )\), we have \(\xi ^*=-(\xi _1+\ldots +\xi _m)\) for some \(\xi _i \in \partial ^P f_i({\hat{x}}),\ i=1,\ldots ,m\). Since \(F({\hat{x}})=F(\hat{y})\), we have \(\hat{y} \in \cap _{i=1}^mS_{{\hat{x}}}(f_i)\), and hence \(\xi _i^T(\hat{y}-{\hat{x}})\le 0\) for all \(i=1,\ldots ,m\). Thus \( \xi ^{*^T}(\hat{y}-{\hat{x}})=-\sum _{i=1}^m \xi _i^T(\hat{y}-{\hat{x}})\ge 0. \) On the other hand, since \(\hat{y} \in M\) and \(\xi ^* \in N(M,{\hat{x}})\), we have \(\xi ^{*^T}(\hat{y}-{\hat{x}})\le 0\), and so

$$\begin{aligned} \xi ^{*^T}(\hat{y}-{\hat{x}})= 0. \end{aligned}$$
(17)

Therefore, for each \(x\in M\) we have

$$\begin{aligned} \xi ^{*^T}(x-\hat{y})=\underbrace{\xi ^{*^T}(x-{\hat{x}})}_{\le 0}-\xi ^{*^T}(\hat{y}-{\hat{x}})\le 0. \end{aligned}$$

This means that

$$\begin{aligned} \xi ^* \in N(M,\hat{y}). \end{aligned}$$
(18)

As \(\xi _i^T(\hat{y}-{\hat{x}})\le 0\) for each i, by (17) we deduce

$$\begin{aligned} \xi _i^T(\hat{y}-{\hat{x}})=0,~~\forall i=1,\ldots ,m. \end{aligned}$$
(19)

For each \(x_i \in S_{\hat{y}}(f_i),\ i=1,\ldots ,m\), we have \(x_i \in S_{{\hat{x}}}(f_i),\ i=1,\ldots ,m\) (since \(S_{\hat{y}}(f_i)=S_{{\hat{x}}}(f_i)\)). So, from (19), we have

$$\begin{aligned} \xi _i^T(x_i-\hat{y})= \underbrace{\xi _i^T(x_i-{\hat{x}})}_{\le 0}-\xi _i^T(\hat{y}-{\hat{x}})\le 0,~~\forall x_i \in S_{\hat{y}}(f_i),~\forall i=1,\ldots ,m. \end{aligned}$$

This implies \(\xi _i \in \partial ^P f_i(\hat{y})\) for all \(i=1,\ldots ,m\), and hence

$$\begin{aligned} \xi ^* =-\sum _{i=1}^m \xi _i \in -\sum _{i=1}^m \partial ^P f_i(\hat{y}). \end{aligned}$$
(20)

Inclusions (18) and (20) yield \( \xi ^* \in \Big (- \sum _{i=1}^m \partial ^P f_i(\hat{y})\Big ) \cap \Big ( N(M,\hat{y})\Big )\). Therefore,

$$\begin{aligned} \Big (- \sum _{i=1}^m \partial ^P f_i({\hat{x}})\Big ) \cap \Big ( N(M,{\hat{x}})\Big ) \subseteq \Big (- \sum _{i=1}^m \partial ^P f_i(\hat{y})\Big ) \cap \Big ( N(M,\hat{y})\Big ). \end{aligned}$$

The inclusion \(\supseteq \) can be proved similarly. \(\square \)

As a final remark, although we considered Assumption A for the whole paper, some of our results hold under weaker conditions as well. For example, in the preceding theorem we did not use continuity.