1 Introduction

One of the important topics in optimization is analyzing the behavior and the sensitivity of the solutions of a mathematical programming problem under data perturbations. There are various approaches for dealing with and investigating perturbations in optimization; see, e.g., [1, 3] and the references therein. In recent decades, Lipschitz stability, a well-known concept in this field, has attracted considerable attention; see [3, 5, 6, 11, 14, 20, 34] among others. Surveying many references, Klatte and Kummer [14] and Dontchev and Rockafellar [5] presented overviews of Lipschitz stability definitions. In the following, we briefly review some of the works related to our problem.

To the best of our knowledge, Robinson [33] pioneered the study of the Lipschitz behavior of the optimal solutions of single-objective optimization problems. Shapiro and Bonnans [36] considered parametric optimization problems with cone constraints in Banach spaces and investigated the local behavior of the optimal solutions of these problems. One of the most important concepts in the Lipschitz stability framework is tilt-stability, introduced by Poliquin and Rockafellar [28]. This notion addresses the Lipschitz behavior of locally optimal solutions of a single-objective unconstrained optimization problem whose extended real-valued objective function is perturbed by adding small linear terms. Poliquin and Rockafellar applied positive-definiteness of the Hessian matrix in the smooth case and positive-definiteness of the generalized Hessian/the second-order subdifferential (in the sense of Mordukhovich [18]) in the nonsmooth case, and obtained necessary and sufficient conditions for tilt-stability. Since then, especially in recent years, tilt-stability has been characterized for various classes of optimization problems in many publications; see, e.g., [7, 8, 16, 21,22,23, 25, 27] and the references therein. Along the lines of [28], Levy et al. [15] introduced a new Lipschitz stability concept, called full-stability, as an extension of tilt-stability. It is obtained by working with a parametric objective function, thereby creating general parametric perturbations alongside the tilt perturbations considered by Poliquin and Rockafellar [28]. As with tilt-stability, many researchers have provided necessary and sufficient conditions characterizing full-stability based on variational analysis tools [15, 24, 26]. Interested readers can find further results on tilt and full stability in the aforementioned papers and the references therein.

The main aim of the current paper is to analyze the behavior of the efficient solutions of a multi-objective optimization problem, with extended real-valued objective functions, under small perturbations obtained by adding linear terms to the objective functions. We introduce a new Lipschitz stability notion in multi-objective programming, invoking the Aubin property (also called the pseudo-Lipschitz property) of the local strictly efficient solution map. This new notion, called Lipschitz-stable local efficiency, extends tilt-stability in the single-objective case. We prove that the defined stability concept ensures the Lipschitzian behavior of the (locally) efficient value mapping when the objective functions are locally Lipschitz. After that, we establish necessary conditions for Lipschitz-stable locally efficient solutions and tilt-stable local minimizers utilizing first-order subdifferentials. On the other hand, we show that full-/tilt-stable locally optimal/minimum solutions of the scalarized weighted sum problem are Lipschitz-stable locally efficient for the considered multi-objective programming problem. Furthermore, invoking some modern variational analysis tools, including the extended second-order subdifferential, second-order conditions, and partial strong metric regularity of the subdifferential mappings, we obtain various sufficient conditions for Lipschitz-stable local efficiency. Moreover, we derive a necessary condition in terms of the local metric regularity of the first-order subdifferentials of the objective functions.

The rest of the paper is organized as follows. Section 2 contains the required preliminaries. In Sect. 3, some basic variational analysis tools applied in the proofs of the main results are reviewed. Section 4 is devoted to defining Lipschitz-stable locally efficient solutions in multi-objective programming and establishing their properties. Some necessary conditions characterizing Lipschitz-stable locally efficient solutions and tilt-stable local minimum points are derived in this section. In Sect. 5, invoking the weighted sum method, the relationships between Lipschitz-stable local efficiency and full-stability are investigated. Various sufficient conditions, together with a necessary condition in terms of local metric regularity, are provided.

2 Preliminaries

In this section, we address some basic notions, necessary conventions and preliminaries. Given a nonempty set \(X\subseteq {\mathbb {R}}^n\), we denote the complement, the convex hull, and the convex conical hull (the generated convex cone) of X by \(X^c\), \(conv\,X\), and \(pos\,X\), respectively. We reserve the notation \(I_X\) for the indicator function corresponding to X, whose values are equal to zero on X and \(+\infty \) outside of X.

Given two vectors \(x,y\in {\mathbb {R}}^n\), \(x^T\) and \(\langle x,y\rangle \) stand for the transpose of x and the inner product of x and y, respectively. Throughout the paper, we use the Euclidean norm \(\Vert y\Vert =\sqrt{\langle y,y\rangle }\) for each \(y\in {\mathbb {R}}^n\) and the Frobenius norm \(\Vert C\Vert =\sqrt{\sum _{i,j} \vert c_{ij}\vert ^2}\) for each real \(p \times n\) matrix \(C = [c_{ij}] \in {\mathbb {R}}^{p\times n}\). Also, we define \({\mathbb {S}}_n := \{y \in {\mathbb {R}}^n: \Vert y\Vert = 1\},\) \({\mathbb {B}}_n := \{y \in {\mathbb {R}}^n: \Vert y\Vert < 1\}\), and \({\mathbb {B}}_{p\times n}:=\{C\in {\mathbb {R}}^{p\times n}: \Vert C\Vert < 1\}\). Furthermore, we set \({\mathbb {B}}({\bar{x}};\delta ) := \{y \in {\mathbb {R}}^n: \Vert y-{\bar{x}}\Vert < \delta \}\) and \({\mathbb {B}}[{\bar{x}};\delta ] := \{y \in {\mathbb {R}}^n: \Vert y-{\bar{x}}\Vert \le \delta \}\) for \({\bar{x}}\in {\mathbb {R}}^n\) and \(\delta >0\).

To compare two vectors \(x,y \in {\mathbb {R}}^p\), we apply the following component-wise orders:

  • \(x\leqq y\) iff \(x_i\le y_i\) for each \(i=1,2,\ldots ,p\),

  • \(x\le y\) iff \(x\leqq y\) and \(x\not =y\),

  • \(x< y\) iff \(x_i< y_i\) for each \(i=1,2,\ldots ,p\).

The inequalities \(\geqq ,~\ge \), and > are understood analogously. Corresponding to these three orders, we have three sets \({\mathbb {R}}^p_\geqq :=\{x\in {\mathbb {R}}^p :x\geqq 0\},\) \({\mathbb {R}}^p_\ge :=\{x\in {\mathbb {R}}^p :x\ge 0\},\) and \({\mathbb {R}}^p_>:=\{x\in {\mathbb {R}}^p :x>0\},\) respectively. Also, we define \(\Delta ^p_>:=\{x \in {\mathbb {R}}^p_>: \sum ^p_{j=1}x_j = 1\},\) \(\Delta ^p_\ge :=\{x \in {\mathbb {R}}^p_\ge : \sum ^p_{j=1}x_j = 1\}.\)

Consider the following Multi-Objective programming Problem (MOP):

$$\begin{aligned} \begin{array}{ll} &{}\min \ \ f(x):=(f_1(x),f_2(x),\ldots , f_p(x))\\ &{} \ s.t. \ \ x\in X, \end{array} \end{aligned}$$
(1)

where \(f_i:{\mathbb {R}}^n\longrightarrow {\mathbb {R}}\), \(i=1,2, \ldots ,p\), are the objective functions and the nonempty set \(X\subseteq {\mathbb {R}}^n\) is the feasible set. In the following, some solution concepts known in the multi-objective programming literature are recalled. See, e.g., [9] as a reference monograph.

Definition 1

A vector \({\bar{x}} \in X\) is called

  • an efficient solution of (1) if there is no \(x\in X\) with \(f(x)\le f({\bar{x}}).\)

  • a locally efficient solution of (1) if, for some \(\delta >0,\) there is no \(x\in X\cap {\mathbb {B}}[{\bar{x}};\delta ]\) with \(f(x)\le f({\bar{x}}).\)

  • a strictly efficient solution of (1) if there is no \(x\in X\setminus \{{{\bar{x}}}\}\) with \(f(x)\leqq f({\bar{x}}).\)

  • a local strictly efficient solution of (1) if, for some \(\delta >0,\) there is no \(x\in X\cap {\mathbb {B}}[{\bar{x}};\delta ]\setminus \{{{\bar{x}}}\}\) with \(f(x)\leqq f({\bar{x}}).\)

  • a properly efficient solution of (1) (in the sense of Geoffrion [12]) if there is a real number \(M>0\) such that for each \(i\in \{1,2,\ldots ,p\}\) and \(x\in X\) satisfying \(f_i(x) < f_i({\bar{x}})\) there exists some \(j\in \{1,2,\ldots ,p\}\) such that \( f_j(x)>f_j({\bar{x}})\) and \(\frac{f_i({\bar{x}})- f_i(x)}{f_j(x)-f_j({\bar{x}})} \le M\).

  • a local properly efficient solution of (1) if there are real numbers \(\delta ,M>0\) such that for each \(i\in \{1,2,\ldots ,p\}\) and \(x\in X\cap {\mathbb {B}}[{\bar{x}};\delta ]\) satisfying \(f_i(x) < f_i({\bar{x}})\) there exists some \(j\in \{1,2,\ldots ,p\}\) such that \( f_j(x)>f_j({\bar{x}})\) and \(\frac{f_i({\bar{x}})- f_i(x)}{f_j(x)-f_j({\bar{x}})} \le M\).

In local (strict/proper) efficiency defined above, the scalar \(\delta \) is called a local (strict/proper) efficiency radius.
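
To illustrate these notions, consider for instance \(p=2\), \(n=1\), \(f(x)=(x,x^2)\), and \(X=[-1,1]\). Every \({\bar{x}}\in [-1,0]\) is an efficient, and indeed strictly efficient, solution: \(f(x)\leqq f({\bar{x}})\) forces \(x\le {\bar{x}}\) and \(x^2\le {\bar{x}}^2\), and hence \(x={\bar{x}}\). Each \({\bar{x}}\in [-1,0)\) is moreover properly efficient (one may take \(M=\max \{2,\frac{1}{2\vert {\bar{x}}\vert }\}\)), whereas \({\bar{x}}=0\) is efficient but not properly efficient, since for \(x<0\) the trade-off ratio \(\frac{f_1(0)-f_1(x)}{f_2(x)-f_2(0)}=\frac{1}{\vert x\vert }\) is unbounded as \(x\rightarrow 0^-\).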

3 Some material from variational analysis

In the current section, some basic concepts in generalized differentiation and variational analysis, extracted from [18, 19, 26, 34], are briefly reviewed.

Let \(\varphi :{\mathbb {R}}^n\longrightarrow \overline{{\mathbb {R}}}:=(-\infty ,+\infty ]\) and \({\bar{x}}\in dom\,\varphi :=\{x\in {\mathbb {R}}^n:\varphi (x)<+\infty \}\). The regular subdifferential of \(\varphi \) at \({\bar{x}}\), denoted by \({\hat{\partial }} \varphi ({\bar{x}})\), is defined as

$$\begin{aligned} {\hat{\partial }} \varphi ({\bar{x}}):=\bigg \{v\in {\mathbb {R}}^n: \liminf _{x\longrightarrow {\bar{x}}} \dfrac{\varphi (x)-\varphi ({\bar{x}})-\langle v,x-{\bar{x}}\rangle }{\Vert x-{\bar{x}}\Vert }\ge 0\bigg \}. \end{aligned}$$

This is also known as the presubdifferential and as the Fréchet (or viscosity) subdifferential.

The Mordukhovich subdifferential (known as general, limiting, and basic first-order subdifferential) of \(\varphi \) at \({\bar{x}}\), denoted by \(\partial \varphi ({\bar{x}})\), is defined as the set of all \(v\in {\mathbb {R}}^n\) such that there are sequences \(\{x_\nu \}_{\nu }\) and \(\{v_\nu \}_{\nu }\) converging to \({\bar{x}}\) and v, respectively, with \(v_\nu \in {\hat{\partial }}\varphi (x_\nu )\) for each \(\nu \) and \(\varphi (x_\nu )\longrightarrow \varphi ({\bar{x}})\).

The horizon subdifferential of \(\varphi \) at \({\bar{x}}\), denoted by \(\partial ^\infty \varphi ({\bar{x}})\), is defined as the set of all \(v\in {\mathbb {R}}^n\) such that there are sequences \(\{x_\nu \}_{\nu }\) and \(\{v_\nu \}_{\nu }\) with \(v_\nu \in {\hat{\partial }}\varphi (x_\nu )\) such that \(x_\nu \rightarrow {\bar{x}}\), \(\varphi (x_\nu )\rightarrow \varphi ({{\bar{x}}})\), and \(\lambda _\nu v_\nu \rightarrow v\) for some scalar sequence \(\{\lambda _\nu \}_\nu \) with \(\lambda _\nu \downarrow 0\).
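
For instance, for \(\varphi (x)=\vert x\vert \) on \({\mathbb {R}}\) one has \({\hat{\partial }} \varphi (0)=\partial \varphi (0)=[-1,1]\), while for \(\varphi (x)=-\vert x\vert \) the regular subdifferential at the origin is empty and \(\partial \varphi (0)=\{-1,1\}\); this illustrates the limiting procedure in the definition of \(\partial \varphi \). Moreover, if \(\varphi \) is locally Lipschitz around \({\bar{x}}\), then \(\partial ^\infty \varphi ({\bar{x}})=\{0_n\}\).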

Given a nonempty set \(X\subseteq {\mathbb {R}}^n\) and \({\bar{x}}\in X\), the regular (resp. basic/limiting) normal cone to X at \({\bar{x}}\) is defined by \({\widehat{N}}(X;{\bar{x}}):={\hat{\partial }}I_X({\bar{x}})\) (resp. \(N(X;{\bar{x}}):=\partial I_X({\bar{x}})\)). If X is locally convex at \({\bar{x}}\), i.e., \(X\cap {\mathbb {B}}({\bar{x}};\delta )\) is convex for some \(\delta >0\), then

$$\begin{aligned} N(X;{\bar{x}})={\widehat{N}}(X;{\bar{x}})=\{v\in {\mathbb {R}}^n: \langle v,x-{\bar{x}}\rangle \le 0,~ \forall x\in X\cap {\mathbb {B}}({\bar{x}};\delta )\}. \end{aligned}$$

In the rest of the current section, \(F:{\mathbb {R}}^n\rightrightarrows {\mathbb {R}}^p\) is a set-valued mapping. The graph of F is \(graph\,F:=\{(x,y)\in {\mathbb {R}}^n\times {\mathbb {R}}^p: y\in F(x)\}.\)

The coderivative of F at \(({\bar{x}},{\bar{y}})\in graph\,F\) is a set-valued mapping \(D^*F({\bar{x}},{\bar{y}}):{\mathbb {R}}^p\rightrightarrows {\mathbb {R}}^n\) defined by

$$\begin{aligned} D^*F({\bar{x}},{\bar{y}})(u):=\{w\in {\mathbb {R}}^n: (w,-u)\in N\big (graph\,F;({\bar{x}},{\bar{y}})\big )\},~~u\in {\mathbb {R}}^p. \end{aligned}$$

If \(F:{\mathbb {R}}^n\longrightarrow {\mathbb {R}}^p\) is a single-valued mapping and \(F\in {\mathcal {C}}^1\), then the coderivative of F at \({{\bar{x}}}\) is a single-valued mapping \(D^*F({\bar{x}}):{\mathbb {R}}^p\longrightarrow {\mathbb {R}}^n\) with \(D^*F({\bar{x}})(u)=\{\nabla F({\bar{x}})^Tu\},~u\in {\mathbb {R}}^p\).

The set-valued mapping F is called inner semicontinuous at \(({\bar{x}},{\bar{y}})\in graph\,F\), if for each sequence \(\{x_\nu \}_\nu \subseteq {\mathbb {R}}^n\) converging to \({\bar{x}}\) with \(F(x_\nu )\ne \emptyset ,~\nu \in {\mathbb {N}}\), there exists a sequence \(\{y_\nu \}_\nu \subseteq {\mathbb {R}}^p\) converging to \({\bar{y}}\) with \(y_\nu \in F(x_\nu ),~\nu \in {\mathbb {N}}\).

Definition 2

[19] The set-valued mapping F is called locally Lipschitz-like around \(({\bar{x}},{\bar{y}})\in graph\,F\) with modulus \(L\ge 0\), if there are neighborhoods \({\mathcal {U}}\) of \({\bar{x}}\) and \({\mathcal {V}}\) of \({\bar{y}}\) such that for every \(x,z \in {\mathcal {U}}\) one has

$$\begin{aligned} F(x)\cap {\mathcal {V}} \subseteq F(z) + L\Vert x-z\Vert {\mathbb {B}}_p. \end{aligned}$$

The property addressed in Definition 2 is also known in the literature as the pseudo-Lipschitz property and the Aubin property; see, e.g., [17] for more references and discussions.
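
For a continuous single-valued mapping F, the local Lipschitz-like property around \(({\bar{x}},F({\bar{x}}))\) reduces to local Lipschitz continuity of F around \({\bar{x}}\). For example, \(F(x)=\{x^2\}\) is locally Lipschitz-like around every point of its graph, whereas \(F(x)=\{x^{1/3}\}\) fails this property around \((0,0)\).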

Definition 3

[19] Given nonempty subsets \(U\subseteq {\mathbb {R}}^n\) and \(V\subseteq {\mathbb {R}}^p\), the set-valued mapping F is called metrically regular on U relative to V with modulus \(\mu >0\) if

$$\begin{aligned} d(x;F^{-1}(y))\le \mu d(y;F(x)), \end{aligned}$$

for all \(x\in U\) and \(y\in V\) satisfying \(d(y;F(x))\ne \infty \).

The second-order subdifferential (known also as generalized Hessian) of \(\varphi \) at \({\bar{x}}\) relative to a subgradient \({\bar{v}} \in \partial \varphi ({\bar{x}})\) is defined by

$$\begin{aligned} \partial ^2\varphi ({\bar{x}},{\bar{v}})(u):=(D^*\partial \varphi )({\bar{x}},{\bar{v}})(u),~~u\in {\mathbb {R}}^n. \end{aligned}$$

If \(\varphi \in {\mathcal {C}}^2\) with the Hessian matrix \(\nabla ^2\varphi ({\bar{x}})\), then \(\partial ^2\varphi ({\bar{x}})(u)=\{\nabla ^2\varphi ({\bar{x}})u\},\) for each \(u\in {\mathbb {R}}^n\).
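
In the nonsmooth case, \(\partial ^2\varphi ({\bar{x}},{\bar{v}})\) is computed from the graph of \(\partial \varphi \). For instance, for \(\varphi (x)=\vert x\vert \) on \({\mathbb {R}}\) and a subgradient \({\bar{v}}\in (-1,1)\), the graph of \(\partial \varphi \) reduces, locally around \((0,{\bar{v}})\), to a vertical segment \(\{0\}\times ({\bar{v}}-\eta ,{\bar{v}}+\eta )\); hence \(N\big (graph\,\partial \varphi ;(0,{\bar{v}})\big )={\mathbb {R}}\times \{0\}\), so that \(\partial ^2\varphi (0,{\bar{v}})(u)={\mathbb {R}}\) for \(u=0\) and \(\partial ^2\varphi (0,{\bar{v}})(u)=\emptyset \) for \(u\ne 0\).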

Let \(h: {\mathbb {R}}^n\times {\mathbb {R}}^p\rightarrow \overline{{\mathbb {R}}}\) be an extended real-valued function of two variables \((x,\lambda ) \in {\mathbb {R}}^n\times {\mathbb {R}}^p\). The partial first-order subdifferential mapping \(\partial _xh:{\mathbb {R}}^n\times {\mathbb {R}}^p\rightrightarrows {\mathbb {R}}^n\) at \(({\bar{x}}, {\bar{\lambda }})\in dom\, h\) is defined by

$$\begin{aligned} \partial _xh({\bar{x}},{\bar{\lambda }}):= \partial h_{{\bar{\lambda }}}({\bar{x}}), \end{aligned}$$

where \(h_{{\bar{\lambda }}}:{\mathbb {R}}^n\longrightarrow \overline{{\mathbb {R}}}\) is defined by \(h_{{\bar{\lambda }}}(\cdot ):=h(\cdot ,{\bar{\lambda }})\). Also, the partial inverse of \(\partial h\) is defined by

$$\begin{aligned} S_h(\lambda , v):=\{x\in {\mathbb {R}}^n : v \in \partial _xh(x,\lambda )\}, ~~(\lambda , v)\in {\mathbb {R}}^p\times {\mathbb {R}}^n. \end{aligned}$$

Furthermore, the extended partial second-order subdifferential of h with respect to x at \(({\bar{x}}, {\bar{\lambda }})\in dom\, h\) relative to some \({\bar{v}}\in \partial _xh({\bar{x}},{\bar{\lambda }})\) is defined by

$$\begin{aligned} {\tilde{\partial }}^2_xh({\bar{x}},{\bar{\lambda }},{\bar{v}})(u):=(D^*\partial _x h)({\bar{x}},{\bar{\lambda }},{\bar{v}})(u), ~~u\in {\mathbb {R}}^n. \end{aligned}$$
(2)

If \(h\in {\mathcal {C}}^2\) with \({\bar{v}}= \nabla _xh({\bar{x}},{\bar{\lambda }})\), then for each \(u\in {\mathbb {R}}^n\),

$$\begin{aligned} {\tilde{\partial }}^2_xh({\bar{x}},{\bar{\lambda }})(u)=\{(\nabla ^2_{xx}h({\bar{x}},{\bar{\lambda }})u, \nabla ^2_{x\lambda }h({\bar{x}},{\bar{\lambda }})u)\}. \end{aligned}$$
(3)

Notice that the second-order subdifferential addressed in (2) is different from the standard partial second-order subdifferential defined as

$$\begin{aligned} \partial ^2_xh({\bar{x}},{\bar{\lambda }},{\bar{v}})(u):=(D^*\partial _x h_{{\bar{\lambda }}})({\bar{x}},{\bar{v}})(u) =\partial ^2_xh_{{\bar{\lambda }}}({\bar{x}},{\bar{v}})(u), ~~u\in {\mathbb {R}}^n. \end{aligned}$$
(4)

Even in the smooth case (i.e., \(h\in {\mathcal {C}}^2\)), we have \(\partial ^2_xh({\bar{x}},{\bar{\lambda }})(u)=\{\nabla ^2_{xx}h({\bar{x}},{\bar{\lambda }})u\}\). Comparing this equality with (3) makes the difference between the two above-mentioned second-order subdifferentials clear.
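
For example, let \(n=p=1\) and \(h(x,\lambda ):=\frac{1}{2}\lambda x^2\). Then \(\nabla ^2_{xx}h({\bar{x}},{\bar{\lambda }})={\bar{\lambda }}\) and \(\nabla ^2_{x\lambda }h({\bar{x}},{\bar{\lambda }})={\bar{x}}\), so \({\tilde{\partial }}^2_xh({\bar{x}},{\bar{\lambda }})(u)=\{({\bar{\lambda }}u,{\bar{x}}u)\}\) by (3), while \(\partial ^2_xh({\bar{x}},{\bar{\lambda }})(u)=\{{\bar{\lambda }}u\}\). The extended construction thus also records the sensitivity of \(\nabla _xh\) with respect to the parameter \(\lambda \).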

Definition 4

[26] Let \(h: {\mathbb {R}}^n\times {\mathbb {R}}^p\rightarrow \overline{{\mathbb {R}}}\). Given \(({\bar{x}},{\bar{\lambda }}) \in dom\, h\) and \({{\bar{v}}} \in \partial _x h({\bar{x}},{\bar{\lambda }})\), the partial subdifferential mapping \(\partial _x h:{\mathbb {R}}^n\times {\mathbb {R}}^p\rightrightarrows {\mathbb {R}}^n\) is said to enjoy partial strong metric regularity at \(({{\bar{x}}},{\bar{\lambda }},{{\bar{v}}})\) if its partial inverse, \(S_h\), admits a Lipschitzian single-valued localization around this point.

4 Lipschitz-stable locally efficient solutions

In the current section, after discussing some stability concepts addressed in the literature, we introduce and investigate a new stability notion in multi-objective programming.

A given constrained optimization problem, minimizing \(\varphi _0:{\mathbb {R}}^n\longrightarrow {\mathbb {R}}\) over the feasible set \(X\subset {\mathbb {R}}^n\), can be converted to an unconstrained problem: minimizing the extended real-valued function \(\varphi :=\varphi _0+I_X\) over the whole space \({\mathbb {R}}^n\), where \(\varphi \equiv \varphi _0\) on X and \(\varphi \equiv +\infty \) outside of X. So, we start our analysis with the single-objective unconstrained case.

Given \(\varphi :{\mathbb {R}}^n\longrightarrow \overline{{\mathbb {R}}}\), consider the minimization of \(\varphi \) over \({\mathbb {R}}^n\). Poliquin and Rockafellar [28] considered the following parametric unconstrained optimization problem, derived by a linear perturbation in \(\varphi \):

$$\begin{aligned} \displaystyle \min _{x\in {\mathbb {R}}^n}\varphi (x) -\langle v,x\rangle . \end{aligned}$$

Following Robinson [33] and motivated by some numerical algorithms, Poliquin and Rockafellar [28] investigated the Lipschitz behavior of the local minimizers of \(\varphi \) via the aforementioned perturbed problem and introduced the so-called “tilt stability” notion. Later on, Levy et al. [15] extended the tilt-stability concept by focusing on a parametric extended real-valued function \(h:{\mathbb {R}}^n\times {\mathbb {R}}^p\longrightarrow \overline{{\mathbb {R}}}\), of two variables \((x,\lambda )\in {\mathbb {R}}^n\times {\mathbb {R}}^p\), instead of \(\varphi \). They considered a family of optimization problems, \(\min \, h(x,\lambda )~ s.t.~x\in {\mathbb {R}}^n\), parameterized by \(\lambda \in {\mathbb {R}}^p\) and chose the problem

$$\begin{aligned} \overline{{\mathcal {P}}}:~~\displaystyle \min _{x\in {\mathbb {R}}^n} h(x,{\bar{\lambda }}), \end{aligned}$$

for a given \({\bar{\lambda }}\in {\mathbb {R}}^p\). After that, by varying \(\lambda \) near the associated parameter \({\bar{\lambda }}\) and adding a small linear perturbation term to the objective function, they arrived at the two-parametric unconstrained problem

$$\begin{aligned} {\mathcal {P}}(\lambda ,v):~~\displaystyle \min _{x\in {\mathbb {R}}^n} h(x,\lambda ) -\langle v,x\rangle , \end{aligned}$$

and introduced “full-stable local optimality” as a new notion related to Lipschitzian stability. In \({\mathcal {P}}(\lambda ,v)\), the general parametric perturbation created in \(\overline{{\mathcal {P}}}\) by \(\lambda \in {\mathbb {R}}^p\) and the linear perturbation added to the objective function by \(v\in {\mathbb {R}}^n\) are called the “basic” and “tilt” perturbations, respectively [15].

Definition 5

[15, 26, 28] Let \({{\bar{\lambda }}}\in {\mathbb {R}}^p\) and a feasible solution \({\bar{x}}\in {\mathbb {R}}^n\) of \(\overline{{\mathcal {P}}}\) (i.e., \(h({\bar{x}},{\bar{\lambda }})<+\infty \)) be given. For \(\delta >0\), consider the mappings \(m_\delta :{\mathbb {R}}^p\times {\mathbb {R}}^n\longrightarrow \overline{{\mathbb {R}}}\) and \(M_\delta :{\mathbb {R}}^p\times {\mathbb {R}}^n\rightrightarrows {\mathbb {R}}^n\) defined by

$$\begin{aligned} m_\delta (\lambda ,v):= & {} \inf _{\Vert x-{\bar{x}}\Vert \le \delta }\big \{h(x,\lambda ) -\langle v,x\rangle \big \},~~(\lambda ,v)\in {\mathbb {R}}^p\times {\mathbb {R}}^n\\ M_\delta (\lambda ,v):= & {} \displaystyle \arg \min _{\Vert x-{\bar{x}}\Vert \le \delta }\,\big \{h(x,\lambda )-\langle v,x\rangle \big \}, ~~(\lambda ,v)\in {\mathbb {R}}^p\times {\mathbb {R}}^n. \end{aligned}$$

The feasible solution \({\bar{x}}\) is called

  • a tilt-stable local minimum of \(h(\cdot ,{\bar{\lambda }})\), under the fixed parameter \(\lambda ={\bar{\lambda }}\), if there exist \(\delta >0\) and a neighborhood \({\mathcal {V}}\) of \(v=0_n\) such that the mapping \(v\mapsto M_\delta ({\bar{\lambda }},v)\) is single-valued and Lipschitz on \({\mathcal {V}}\), with \(M_{\delta }({\bar{\lambda }},0_n)=\{{\bar{x}}\}\).

  • a full-stable local optimal solution of \({\mathcal {P}}({{\bar{\lambda }}} ,{{\bar{v}}})\) for some \({{\bar{v}}} \in {\mathbb {R}}^n\) if there exist \(\delta >0\) and neighborhoods \(\Lambda \) of \({\bar{\lambda }}\) and \({\mathcal {V}}\) of \({\bar{v}}\) such that the mapping \((\lambda ,v)\mapsto M_\delta (\lambda ,v)\) is single-valued and Lipschitz on \(\Lambda \times {\mathcal {V}}\), with \(M_{\delta }({\bar{\lambda }},{\bar{v}})=\{{\bar{x}}\}\), and the function \((\lambda ,v)\mapsto m_\delta (\lambda ,v)\) is likewise Lipschitz on \(\Lambda \times {\mathcal {V}}\).

Tilt-stability defined in [28, Definition 1.1] can be derived from the above definition by setting \(h(\cdot ,{\bar{\lambda }}):=\varphi (\cdot )\) for some \({\bar{\lambda }}\in {\mathbb {R}}^p\).
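
For instance, \(\varphi (x)=x^2\) is tilt-stable at \({\bar{x}}=0\): for small \(\vert v\vert \), the minimizer of \(x^2-vx\) over a ball around the origin is \(x=v/2\), which depends Lipschitz continuously on v. In contrast, \(\varphi (x)=x^4\) has \({\bar{x}}=0\) as its unique minimizer but is not tilt-stable there, since the perturbed minimizer \(x=(v/4)^{1/3}\) fails to be Lipschitz in v at \(v=0\).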

In the current paper, motivated by the tilt-stability concept, we introduce a new Lipschitz stability notion for Multi-Objective programming Problems (MOPs). Consider MOP (1). By using the indicator function \(I_X\), this MOP can be reformulated as the unconstrained MOP

$$\begin{aligned} \displaystyle \min _{x\in {\mathbb {R}}^n} {\bar{f}}(x):=({\bar{f}}_1(x),{\bar{f}}_2(x),\ldots , {\bar{f}}_p(x)), \end{aligned}$$
(5)

where \({\bar{f}}:{\mathbb {R}}^n\longrightarrow \overline{{\mathbb {R}}}^p\) is an extended vector-valued mapping with \({\bar{f}}_i:=f_i+I_X,~i=1,2,\ldots ,p\). Establishing the equivalence of the two MOPs (1) and (5) is straightforward. So, hereafter, we concentrate on the MOP,

$$\begin{aligned} \displaystyle \min _{x\in {\mathbb {R}}^n} f(x):=(f_1(x),f_2(x),\ldots , f_p(x)), \end{aligned}$$
(6)

in which \(f_i:{\mathbb {R}}^n\longrightarrow \overline{{\mathbb {R}}}\), \(i=1,2,\ldots ,p\), are objective functions and

$$\begin{aligned} dom\,f:=\{x\in {\mathbb {R}}^n: f_i(x)<+\infty ,~i=1,2,\ldots ,p\}\ne \emptyset . \end{aligned}$$

Let \({\bar{x}}\in dom\, f\) and \(\delta >0\). We consider the set-valued mappings \(M_{\delta , f}:{\mathbb {R}}^{p\times n} \rightrightarrows {\mathbb {R}}^n\) and \(m_{\delta , f}:{\mathbb {R}}^{p\times n} \rightrightarrows {\mathbb {R}}^p\) such that \(M_{\delta ,f}(C)\) is the set of strictly efficient solutions of the perturbed MOP

$$\begin{aligned} (P_{C,\delta }):~~~\displaystyle \min _{\Vert x-{\bar{x}}\Vert \le \delta } f(x) -Cx, \end{aligned}$$
(7)

and \(m_{\delta ,f}(C):=\{f(x)-Cx: x\in M_{\delta ,f}(C)\}\), for \(C\in {\mathbb {R}}^{p\times n}\).

By definition, \(x\in M_{\delta ,f}(C)\) means that x is a strictly efficient solution of \((P_{C,\delta })\); note that the perturbation \(-Cx\) adds a (generally different) linear term to each objective function. For \({{\bar{x}}}\) itself, \({{\bar{x}}}\in M_{\delta ,f}(C)\) means the local strict efficiency of \({{\bar{x}}}\) with radius \(\delta \) for

$$\begin{aligned} \displaystyle \min _{x\in {\mathbb {R}}^n} f(x) -Cx. \end{aligned}$$
(8)

Now, we are ready to introduce Lipschitz-stable local efficiency in multi-objective programming. It is defined by utilizing the local Lipschitz-like property of set-valued mappings, which deals with sensitivity and perturbation.

Definition 6

A vector \({\bar{x}} \in dom\, f\) is called a Lipschitz-stable (briefly, L-stable) locally efficient solution of (6), if there exists \(\delta >0\) such that \(M_{\delta ,f}(\cdot )\ne \emptyset \) around \(0_{p\times n}\) with \({\bar{x}}\in M_{\delta ,f}(0_{p\times n})\), and the set-valued mapping \(M_{\delta ,f}\) is locally Lipschitz-like around \((0_{p\times n}, {\bar{x}})\in graph\,M_{\delta ,f}\).

L-stability in multi-objective programming, in the sense of Definition 6, analyses the Lipschitzian behavior of the solution set of the perturbed problem around the point in question. Indeed, a local strictly efficient solution \({{\bar{x}}}\) of (6) with radius \(\delta \) is an L-stable locally efficient solution if \(M_{\delta ,f}(\cdot )\ne \emptyset \) around \(0_{p\times n}\) and, furthermore, there are a scalar \(L>0\) and neighborhoods \({\mathcal {U}}\) of \({\bar{x}}\) and \({\mathcal {V}}\) of \(0_{p\times n}\) such that

$$\begin{aligned} M_{\delta ,f}(C)\cap {\mathcal {U}}\subseteq M_{\delta ,f}(D)+L\Vert C-D\Vert {\mathbb {B}}_n,~~\forall C,D\in {\mathcal {V}}. \end{aligned}$$
(9)

The last inclusion means that for each strictly efficient solution \({\hat{x}}\in {\mathcal {U}}\) of \((P_{C,\delta })\), there exists a strictly efficient solution \(x^0\) of \((P_{D,\delta })\) such that

$$\begin{aligned} \Vert {\hat{x}}-x^0\Vert \le L\Vert C-D\Vert . \end{aligned}$$

According to the above explanations, in the single-objective case (when \(p=1\)) L-stability is equivalent to tilt-stability. In this case, strict efficiency coincides with unique optimality, and \(M_{\delta ,f}(v)=\arg \displaystyle \min _x\{f(x)-\langle v,x\rangle : \Vert x-{\bar{x}}\Vert \le \delta \}\) and \(m_{\delta ,f}(v)=\displaystyle \inf _x\{f(x)-\langle v,x\rangle : \Vert x-{\bar{x}}\Vert \le \delta \}\) for each \(v\in {\mathbb {R}}^n\).

Theorem 1

If \({\bar{x}}\in dom\, f\) is an L-stable locally efficient solution of (6) with \(\delta >0\) and \(f_i\), \(i=1,2,\ldots ,p\), are continuous at \({\bar{x}}\), then \({\bar{x}}\) fulfills L-stability for each \(\gamma \in (0,\delta ]\).

Proof

Assume that \({\bar{x}}\) is an L-stable locally efficient solution of (6) with \(\delta \). Then \(M_{\delta ,f}(\cdot )\ne \emptyset \) around \(0_{p\times n}\) with \({\bar{x}}\in M_{\delta ,f}(0_{p\times n})\), and there are a scalar \(L>1\) (without loss of generality) and neighborhoods \({\mathcal {U}}\) of \({\bar{x}}\) and \({\mathcal {V}}\) of \(0_{p\times n}\) such that (9) holds. Let \(\gamma \in (0,\delta ]\) be arbitrary but fixed hereafter. It is easy to see that \(M_{\gamma ,f}(\cdot )\ne \emptyset \) around \(0_{p\times n}\) and \({\bar{x}}\in M_{\gamma ,f}(0_{p\times n})\). Now, we claim that there exists some \(\epsilon \in (0,\gamma )\) such that \(M_{\gamma ,f}(C)\cap {\mathbb {B}}({\bar{x}};\epsilon )=M_{\delta ,f}(C)\cap {\mathbb {B}}({\bar{x}};\epsilon )\) for any \(C\in \epsilon {\mathbb {B}}_{p\times n}\). The inclusion \(M_{\delta ,f}(C)\cap {\mathbb {B}}({\bar{x}};\epsilon )\subseteq M_{\gamma ,f}(C)\cap {\mathbb {B}}({\bar{x}};\epsilon )\) holds trivially (by choosing \(\epsilon \) less than \(\gamma \)). To prove the converse inclusion, arguing by contradiction, assume that there are sequences \(\epsilon _\nu \in {\mathbb {R}}_>\) with \(\epsilon _\nu \longrightarrow 0\), \(C_\nu \in {\mathbb {R}}^{p\times n}\) with \(\Vert C_\nu \Vert <\epsilon _\nu \), and \(x_\nu \in M_{\gamma ,f}(C_\nu )\cap {\mathbb {B}}({\bar{x}};\epsilon _\nu )\) such that \(x_\nu \not \in M_{\delta ,f}(C_\nu )\). Then, for any \(\nu \in {\mathbb {N}}\), there exists \(y_\nu \in {\mathbb {R}}^n\) such that \(\gamma <\Vert y_\nu -{\bar{x}}\Vert \le \delta \) and

$$\begin{aligned} f(y_\nu )-C_\nu y_\nu \leqq f(x_\nu )-C_\nu x_\nu . \end{aligned}$$

These imply \(y_\nu \longrightarrow {\bar{y}}\) for some \({\bar{y}} \in {\mathbb {B}}[{\bar{x}};\delta ]\) (by passing to a subsequence if necessary) with \({\bar{y}}\ne {\bar{x}}\) and \(f({\bar{y}})\leqq f({\bar{x}})\). This contradicts \({\bar{x}}\in M_{\delta ,f}(0_{p\times n})\), and the claim is proved.

Now, without loss of generality, consider \(\epsilon \in (0,\gamma )\) such that \({\mathbb {B}}({\bar{x}};\epsilon )\subseteq {\mathcal {U}}\), \(\epsilon {\mathbb {B}}_{p\times n}\subseteq {\mathcal {V}}\), and \(M_{\gamma ,f}(C)\cap {\mathbb {B}}({\bar{x}};\dfrac{\epsilon }{2})=M_{\delta ,f}(C)\cap {\mathbb {B}}({\bar{x}};\dfrac{\epsilon }{2})\) for any \(C\in \frac{\epsilon }{2}{\mathbb {B}}_{p\times n}\). Let \(C_0,D_0\in \dfrac{\epsilon }{4L}{\mathbb {B}}_{p\times n}\) be arbitrary. Assume \({\hat{x}}\in M_{\gamma ,f}(C_0)\cap {\mathbb {B}}({\bar{x}};\dfrac{\epsilon }{2})\). Then, \({\hat{x}}\in M_{\delta ,f}(C_0)\cap {\mathcal {U}}\). Taking \(D_0\in \dfrac{\epsilon }{2L}{\mathbb {B}}_{p\times n}\subseteq {\mathcal {V}}\) into account, by (9), there exists some \(x^0\in M_{\delta ,f}(D_0)\) such that

$$\begin{aligned} \Vert {\hat{x}}-x^0\Vert \le L\Vert C_0-D_0\Vert . \end{aligned}$$

Furthermore,

$$\begin{aligned} \Vert x^0-{\bar{x}}\Vert \le \Vert x^0-{{\hat{x}}}\Vert +\Vert {{\hat{x}}}-{{\bar{x}}}\Vert \le L(\Vert C_0\Vert +\Vert D_0\Vert )+\dfrac{\epsilon }{2}\le L(\dfrac{\epsilon }{4L}+\dfrac{\epsilon }{4L})+\dfrac{\epsilon }{2}=\epsilon <\gamma . \end{aligned}$$

This implies \(x^0\in {\mathbb {B}}({\bar{x}};\gamma )\), and hence \(x^0\in M_{\gamma ,f}(D_0).\)

Hence, by setting \({\mathcal {U}}_0:={\mathbb {B}}({\bar{x}};\dfrac{\epsilon }{2})\) and \({\mathcal {V}}_0:=\dfrac{\epsilon }{4L}{\mathbb {B}}_{p\times n}\), we have

$$\begin{aligned} M_{\gamma ,f}(C_0)\cap {\mathcal {U}}_0\subseteq M_{\gamma ,f}(D_0)+L\Vert C_0-D_0\Vert {\mathbb {B}}_n,~~\forall C_0,D_0\in {\mathcal {V}}_0. \end{aligned}$$

This completes the proof. \(\square \)

Now, we continue with a discussion about two stability notions existing in the literature.

A stability property for local strictly efficient solutions of MOPs has been defined in [35]: \({\bar{x}}\in dom\, f\) is said to be a stable locally efficient solution of (6) if there are scalars \(\delta ,\epsilon >0\) and \(\gamma ,\lambda \ge 0\) such that \({\bar{x}}\in M_{\delta ,f}(0_{p\times n})\) and

  (i) for any \(C\in \epsilon {\mathbb {B}}_{p\times n}\), there is \(x\in M_{\delta ,f}(C)\) such that \(\Vert x-{\bar{x}}\Vert \le \gamma \Vert C\Vert ;\)

  (ii) for any \(C, D\in \epsilon {\mathbb {B}}_{p\times n}\), there are \(x\in M_{\delta ,f}(C)\) and \(y \in M_{\delta ,f}(D)\) such that \(\Vert x-y\Vert \le \gamma \Vert C-D\Vert \);

  (iii) for any \(C, D\in \epsilon {\mathbb {B}}_{p\times n}, \,x\in M_{\delta ,f}(C)\), and \(y \in M_{\delta ,f}(D)\) with \(\Vert x-y\Vert \le \gamma \Vert C-D\Vert \), the inequality \(\Vert f(x)-f(y)\Vert \le \lambda \Vert C-D\Vert \) holds.

The authors of [35] investigated their stability concept under a Lipschitzness assumption. If \(f_i\), \(i=1,2,\ldots , p\), are Lipschitz on \({\mathbb {B}}[{\bar{x}};\delta ]\), then condition (iii) holds automatically (for some \(\lambda \ge 0\)), so it can be ignored. Therefore, under the Lipschitzness assumption, the above conditions are equivalent to

$$\begin{aligned} \left\{ \begin{array}{ll} {\bar{x}}\in M_{\delta ,f}(C)+\gamma \Vert C\Vert {\mathbb {B}}_n,~~~&{}\forall C\in \epsilon {\mathbb {B}}_{p\times n}, \\ 0_n\in M_{\delta ,f}(C) -M_{\delta ,f}(D) +\gamma \Vert C-D\Vert {\mathbb {B}}_n,~~~&{}\forall C, D\in \epsilon {\mathbb {B}}_{p\times n}. \end{array}\right. \end{aligned}$$

In the single-objective case (i.e., \(p=1\)), the above conditions imply the classic Lipschitz property for the single-valued function \(M_{\delta ,f}\). However, this is not necessarily the case in the multi-objective setting, \(p>1\), where \(M_{\delta ,f}\) is set-valued. In other words, the above conditions do not necessarily yield the Lipschitz-like property for \(M_{\delta ,f}\), although this is what one expects from a stability notion. According to the literature, the main idea behind defining stable solutions is the investigation of the Lipschitzian behavior of these solutions under perturbation.

A popular approach in the literature for dealing with sensitivity and perturbation is robustness. In recent decades, various robustness concepts have been introduced and investigated; see, e.g., [1, 2, 10] and the references therein. Georgiev et al. [13] and Zamani et al. [37] defined a robustness notion for linear and nonlinear MOPs, respectively. Rahimi and Soleimani-damaneh [30, 32] extended it to vector optimization. In these works, an efficient solution is said to be a robust efficient solution if it remains efficient under small perturbations of the objective functions. More precisely, according to [37, Definition 3.1], an efficient solution \({\bar{x}} \in X\) of (1) is called a norm-based robust efficient solution if there exists a scalar \(r>0\) such that \({\bar{x}}\) is an efficient solution of the following problem for any \(C \in r{\mathbb {B}}_{p\times n}\):

$$\begin{aligned} \begin{array}{ll} &{}\min f(x)+Cx \\ &{} \ s.t. \ x\in X. \end{array} \end{aligned}$$
(10)

Pourkarimi and Soleimani-damaneh [29, Theorem 3.9] showed that norm-based robust efficiency in the linear case is equivalent to strict efficiency. Furthermore, Rahimi and Soleimani-damaneh [31, Theorem 4.1 and Corollary 4.2] proved the coincidence of norm-based robust efficiency and isolated efficiency in the nonlinear case. Moreover, it is readily seen that \({\bar{x}}\) is stable locally efficient in the sense of [35] if \(f_i\), \(i=1,2,\ldots , p\), are locally Lipschitz at \({\bar{x}}\) and it is a norm-based robust efficient solution of (1) with \(X={\mathbb {B}}[{\bar{x}};\delta ]\). The following example shows that norm-based robust efficiency does not necessarily lead to L-stability.

Example 1

Consider the multi-objective optimization problem

$$\begin{aligned} \min \,f(x)=(\max \{x,0\},\max \{-x,0\})^T~~s.t.~~x\in [-1,1]. \end{aligned}$$
(11)

According to [32, Example 5.1], \({{\bar{x}}}:=0\) is a norm-based robust efficient solution of (11). Hence, \({\bar{x}}\) is a stable locally efficient solution in the sense of [35]. We now show that \({\bar{x}}\) is not an L-stable locally efficient solution. Arguing by contradiction, assume that there exist scalars \(1>\delta>\epsilon >0\) and \(L>1\) such that

$$\begin{aligned} \forall C, D \in \epsilon {\mathbb {B}}_{2\times 1} :~ M_{\delta ,f}(C)\cap {\mathbb {B}}({\bar{x}};\epsilon ) \subseteq M_{\delta ,f}(D) + L\Vert C-D\Vert {\mathbb {B}}_1. \end{aligned}$$
(12)

Consider two matrices \(C:=\dfrac{\epsilon }{3L}(-1,0)^T\) and \(D:=0_{2\times 1}\). We get \(M_{\delta ,f}(D)=\{{\bar{x}}\}\) and \(M_{\delta ,f}(C)=\{x: -\delta \le x\le 0\}\). Therefore, \(-\dfrac{\epsilon }{2} \in M_{\delta ,f}(C)\) and due to (12),

$$\begin{aligned} \vert -\dfrac{\epsilon }{2} -{\bar{x}}\vert \le L\Vert C-D\Vert \Longrightarrow \dfrac{\epsilon }{2} \le \dfrac{\epsilon }{3}. \end{aligned}$$

This contradiction implies that \({\bar{x}}\) is not an L-stable locally efficient solution.

Example 2 shows that L-stable local efficiency does not necessarily imply norm-based robust efficiency (or proper efficiency), even when the objective functions are convex and the feasible set is convex.

Example 2

Consider the multi-objective optimization problem

$$\begin{aligned} \min \,f(x):=(x, x^2)^T,~~ s.t.~~x\in {\mathbb {R}}. \end{aligned}$$
(13)

Let \({{\bar{x}}}:=0\), \(\delta :=2\), and \(\epsilon :=1\). Let \(C=(c_1,c_2)^T \in {\mathbb {B}}_{2\times 1}\). Then, \(M_{\delta ,f}(C)=[-2,\dfrac{c_2}{2}]\). It is easy to show that \(M_{\delta ,f}\) is locally Lipschitz-like around \((0_{2\times 1},{\bar{x}})\) with modulus \(L:=\sqrt{2}\). Therefore, \({{\bar{x}}}:=0\) is an L-stable locally efficient solution of (13). According to [37, Theorem 3.1], \({\bar{x}}=0\) is not a norm-based robust efficient solution of (13).
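
To verify the Lipschitz-like property claimed here, take \(C=(c_1,c_2)^T\) and \(D=(d_1,d_2)^T\) in \({\mathbb {B}}_{2\times 1}\) and \({\hat{x}}\in M_{\delta ,f}(C)=[-2,\frac{c_2}{2}]\). Choosing \(y:={\hat{x}}\) if \({\hat{x}}\le \frac{d_2}{2}\) and \(y:=\frac{d_2}{2}\) otherwise gives \(y\in M_{\delta ,f}(D)\) and \(\vert {\hat{x}}-y\vert \le \frac{1}{2}\vert c_2-d_2\vert \le \frac{1}{2}\Vert C-D\Vert \), which yields the Lipschitz-like property with the stated modulus.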

In the single-objective case, \(p=1\), if f is lower semicontinuous, then the optimal value function \(v\longmapsto \displaystyle \inf _x\{f(x)-\langle v,x\rangle : \Vert x-{\bar{x}}\Vert \le \delta \}\) is finite and concave (being an infimum of functions affine in v) on some neighborhood of \(v=0_n\). So, it is locally Lipschitz at \(v=0_n\). In the following theorem, we show that L-stability ensures the Lipschitzian behavior of the (locally) efficient value mapping of the perturbed problem.

Theorem 2

If \({\bar{x}} \in dom\,f\) is an L-stable locally efficient solution of (6) and \(f_i\), \(i=1,2,\ldots ,p\), are locally Lipschitz at \({\bar{x}}\), then there exists some \(\delta >0\) such that the set-valued mapping \(m_{\delta ,f}\) is locally Lipschitz-like around \((0_{p\times n}, f({\bar{x}})) \in graph\,m_{\delta ,f}\).

Proof

Assume that \({\bar{x}} \in dom\,f\) is an L-stable locally efficient solution of (6). Then, there exist scalars \(\delta ,L>0\) and neighborhoods \({\mathcal {U}}\) of \({\bar{x}}\) and \({\mathcal {V}}\) of \(0_{p\times n}\) such that

$$\begin{aligned} \forall C,D \in {\mathcal {V}} :~ M_{\delta ,f}(C)\cap {\mathcal {U}} \subseteq M_{\delta ,f}(D) + L\Vert C-D\Vert {\mathbb {B}}_n. \end{aligned}$$

Without loss of generality, assume that \(f_i\) is Lipschitz on \({\mathbb {B}}[{\bar{x}};\delta ]\) with constant \(K_i\). We claim that there exists some \(\epsilon >0\) such that \({\mathbb {B}}({\bar{x}};\epsilon )\subseteq {\mathcal {U}}\), \(~\epsilon {\mathbb {B}}_{p\times n}\subseteq {\mathcal {V}}\), and if \(f(x)-Cx \in m_{\delta ,f}(C) \cap {\mathbb {B}}(f({\bar{x}});\epsilon )\) with \(C\in \epsilon {\mathbb {B}}_{p\times n}\) and \(x\in M_{\delta ,f}(C)\), then \(x\in {\mathcal {U}}\).

If not, there are sequences \(\{C_\nu \}_\nu \subseteq {\mathbb {R}}^{p\times n}\) and \(\{x_\nu \}_\nu \subseteq {\mathbb {B}}[{\bar{x}};\delta ]\) such that \(C_\nu \rightarrow 0\), \(x_\nu \in M_{\delta ,f}(C_\nu )\), \(f(x_\nu )-C_\nu x_\nu \longrightarrow f({\bar{x}})\), and \(x_\nu \notin {\mathcal {U}}\) for all \(\nu \). Then, \(f(x_\nu )\longrightarrow f({\bar{x}})\). As the sequence \(\{x_\nu \}_\nu \) is bounded, without loss of generality, we assume \(x_\nu \longrightarrow {\hat{x}}\) for some \({\hat{x}} \in {\mathbb {B}}[{\bar{x}};\delta ]\) with \({\hat{x}}\ne {\bar{x}}\) (the latter holds because \(x_\nu \notin {\mathcal {U}}\) for all \(\nu \) and \({\mathcal {U}}\) is a neighborhood of \({\bar{x}}\)). Therefore, \(f(x_\nu )\longrightarrow f({\hat{x}})\) and so \(f({\hat{x}})=f({\bar{x}})\). This contradicts \({\bar{x}}\in M_{\delta ,f}(0_{p\times n})\), and the claim is proved.

Now, suppose that \(C, D\in \epsilon {\mathbb {B}}_{p\times n}\) are arbitrary but fixed. Consider any \(f(x)-Cx \in m_{\delta ,f}(C) \cap {\mathbb {B}}(f({\bar{x}});\epsilon )\) with \(x\in M_{\delta ,f}(C)\). Then, taking the aforementioned claim into account, \(x\in M_{\delta ,f}(C) \cap {\mathcal {U}}\) and so there exists some \(y\in M_{\delta ,f}(D)\) such that \(\Vert x-y\Vert \le L\Vert C-D\Vert \). Hence,

$$\begin{aligned} \begin{array}{ll} &{}\Vert f(x)-Cx -(f(y)-Dy)\Vert \\[4pt] &{}\quad \le \Vert f(x)-f(y)\Vert +\Vert Dy-Cx\Vert \\[4pt] &{}\quad \le \sum \limits ^p_{i=1}\vert f_i(x)-f_i(y)\vert +\Vert D(y-x) +(D-C)x\Vert \\[4pt] &{}\quad \le \sum \limits ^p_{i=1}K_i\Vert y-x\Vert +\Vert D\Vert \Vert y-x\Vert +\Vert D-C\Vert \Vert x\Vert \\[4pt] &{}\quad \le \left( L\sum \limits ^p_{i=1}K_i+\epsilon L +\delta +\Vert {\bar{x}}\Vert \right) \Vert C-D\Vert . \end{array} \end{aligned}$$

Therefore, by setting \(L':=L\sum ^p_{i=1}K_i+\epsilon L +\delta +\Vert {\bar{x}}\Vert \), we have

$$\begin{aligned} \forall C,D \in \epsilon {\mathbb {B}}_{p\times n} :~ m_{\delta ,f}(C)\cap {\mathbb {B}}(f({\bar{x}});\epsilon ) \subseteq m_{\delta ,f}(D) + L' \Vert C-D\Vert {\mathbb {B}}_p, \end{aligned}$$

and the proof is complete. \(\square \)

In Theorem 3 below, we provide a necessary condition for L-stability.

Theorem 3

Let \({\bar{x}}\in {\mathbb {R}}^n\). Assume that \(f_i\), \(i=1,2,\ldots , p\), are finite-valued and convex around \({{\bar{x}}}\). If \({\bar{x}}\) is an L-stable locally efficient solution of (6), then

  (i) for each \(\nu >0\) there exist scalars \(\delta ,\alpha ,\epsilon ,L>0\) such that

    $$\begin{aligned} \dfrac{\alpha }{\sqrt{p}}{\mathbb {B}}_n\subseteq \bigcup _{x\in {\mathbb {B}}({\bar{x}};\nu )} \bigg (conv\, \Big (\bigcup _{i=1}^p \partial f_i(x)\Big )\bigcap \Big (\dfrac{\Vert x-y\Vert }{L\sqrt{p}}{\mathbb {B}}_n\Big )^c\bigg ), \end{aligned}$$
    (14)

    for any \(y\in M_{\delta ,f}(0_{p\times n})\cap {\mathbb {B}}({\bar{x}};\epsilon )\).

  (ii) for each \(\nu >0\) there exist scalars \(\delta ,\epsilon >0\) such that

    $$\begin{aligned} \bigcup _{x\in {\mathbb {B}}(y;\nu )} pos\, \Big (\bigcup _{i=1}^p \partial f_i(x)\Big ) ={\mathbb {R}}^n, \end{aligned}$$
    (15)

    for any \(y\in M_{\delta ,f}(0_{p\times n})\cap {\mathbb {B}}({\bar{x}};\epsilon )\).

Proof

Let \(\nu >0\) be arbitrary but fixed. Assume that \({\bar{x}}\) is an L-stable locally efficient solution of (6). So, there exists some \(\delta >0\) such that \({\bar{x}}\in M_{\delta ,f}(0_{p\times n})\), and there are scalars \(L>1\), \(\epsilon \in (0,\dfrac{\delta }{2})\), and \(\alpha \in (0,\dfrac{\delta }{2L})\) such that

$$\begin{aligned} \forall C,D \in \alpha {\mathbb {B}}_{p\times n}:~~M_{\delta ,f}(C)\cap {\mathbb {B}}({\bar{x}};\epsilon ) \subseteq M_{\delta ,f}(D) + L\Vert C-D\Vert {\mathbb {B}}_n. \end{aligned}$$
(16)

Due to Theorem 1, without loss of generality, assume that \(\delta <\nu \) and \(f_i\), \(i=1,2,\ldots , p\), are finite-valued and convex on \({\mathbb {B}}[{{\bar{x}}};\delta ]\).

To prove (i), assume that \(y\in M_{\delta ,f}(0_{p\times n})\cap {\mathbb {B}}({\bar{x}};\epsilon )\) and \(d\in \dfrac{\alpha }{\sqrt{p}}{\mathbb {B}}_n\) are arbitrary. Consider the matrix \(D\in {\mathbb {R}}^{p\times n}\) each of whose rows is the vector \(d^T\). Then, \(D\in \alpha {\mathbb {B}}_{p\times n}\) and so, by (16), there exists some \(x^0\in M_{\delta ,f}(D)\) such that

$$\begin{aligned} \Vert y-x^0\Vert \le L\Vert D\Vert = L\sqrt{p}\Vert d\Vert . \end{aligned}$$

Furthermore,

$$\begin{aligned} \Vert x^0-{\bar{x}}\Vert \le \Vert x^0-y\Vert + \Vert y-{{\bar{x}}}\Vert<\alpha L+\epsilon< \dfrac{\delta }{2}+\dfrac{\delta }{2}=\delta <\nu . \end{aligned}$$

On the other hand, due to the weighted-sum scalarization method [9], \(x^0\in M_{\delta ,f}(D)\) implies the existence of a vector \(\lambda \in \Delta _\ge ^p\) such that \(x^0\) is an optimal solution of

$$\begin{aligned} \min \,\sum _{i=1}^p\lambda _if_i(x)-\langle d,x\rangle ~~s.t.~~x\in {\mathbb {B}}[{\bar{x}};\delta ]. \end{aligned}$$

Thus, according to [19, Proposition 1.114] and [4, Theorem 4.10], we get

$$\begin{aligned} d \in \sum _{i=1}^p\lambda _i\partial f_i(x^0). \end{aligned}$$

Hence,

$$\begin{aligned} d\in \bigcup _{x\in {\mathbb {B}}({\bar{x}};\nu )} \bigg (conv\, \Big (\bigcup _{i=1}^p \partial f_i(x)\Big )\bigcap \Big (\dfrac{\Vert x-y\Vert }{L\sqrt{p}}{\mathbb {B}}_n\Big )^c\bigg ), \end{aligned}$$

and (i) is proved.

To prove (ii), suppose that \(y\in M_{\delta ,f}(0_{p\times n})\cap {\mathbb {B}}({\bar{x}};\epsilon )\) and \(d\in {\mathbb {R}}^n\setminus \{0_n\}\) are arbitrary (for \(d=0_n\) the inclusion (15) is obvious). Consider the matrix \(D\in \alpha {\mathbb {B}}_{p\times n}\) each of whose rows is the vector \(\dfrac{\alpha }{L\sqrt{p}\Vert d\Vert }d^T\). Therefore, by (16), there exists some \(x^*\in M_{\delta ,f}(D)\) with

$$\begin{aligned} \Vert y-x^*\Vert \le L\Vert D\Vert =\alpha <\nu . \end{aligned}$$

Arguing as in the last part of the proof of part (i), \(\Vert x^*-{\bar{x}}\Vert<\delta <\nu \) and there exists a vector \(\lambda \in \Delta _\ge ^p\) such that \(x^*\) is an optimal solution of

$$\begin{aligned} \min \,\sum _{i=1}^p\lambda _if_i(x)-\dfrac{\alpha }{L\sqrt{p}\Vert d\Vert }\langle d,x\rangle ~~s.t.~~x\in {\mathbb {B}}[{\bar{x}};\delta ]. \end{aligned}$$

So, we get

$$\begin{aligned} \dfrac{\alpha }{L\sqrt{p}\Vert d\Vert }d \in \sum _{i=1}^p\lambda _i\partial f_i(x^*), \end{aligned}$$

which implies

$$\begin{aligned} d\in \bigcup _{x\in {\mathbb {B}}(y;\nu )} pos\, \Big (\bigcup _{i=1}^p \partial f_i(x)\Big ), \end{aligned}$$

and the proof is complete. \(\square \)

Corollary 1 is a direct consequence of Theorem 3.

Corollary 1

Let \({\bar{x}}\in {\mathbb {R}}^n\). Assume that \(f_i\), \(i=1,2,\ldots , p\), are finite-valued, convex, and continuously differentiable around \({{\bar{x}}}\). If \({\bar{x}}\) is an L-stable locally efficient solution of (6), then

  (i) for each \(\nu >0\) there exist scalars \(\delta ,\alpha ,\epsilon ,L>0\) such that

    $$\begin{aligned} \dfrac{\alpha }{\sqrt{p}}{\mathbb {B}}_n\subseteq \bigcup _{x\in {\mathbb {B}}({\bar{x}};\nu )} \bigg (conv\,\Big \{\nabla f_1(x),\nabla f_2(x),\ldots ,\nabla f_p(x)\Big \}\bigcap \Big (\dfrac{\Vert x-y\Vert }{L\sqrt{p}}{\mathbb {B}}_n\Big )^c\bigg ), \end{aligned}$$

    for any \(y\in M_{\delta ,f}(0_{p\times n})\cap {\mathbb {B}}({\bar{x}};\epsilon )\).

  (ii) for each \(\nu >0\) there exist scalars \(\delta ,\epsilon >0\) such that

    $$\begin{aligned} \bigcup _{x\in {\mathbb {B}}(y;\nu )} pos\,\Big \{\nabla f_1(x),\nabla f_2(x),\ldots ,\nabla f_p(x)\Big \} ={\mathbb {R}}^n, \end{aligned}$$

    for any \(y\in M_{\delta ,f}(0_{p\times n})\cap {\mathbb {B}}({\bar{x}};\epsilon )\).
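
For instance, in Example 2, where \(\nabla f_1(x)=1\) and \(\nabla f_2(x)=2x\), condition (ii) can be checked directly: every \(y\in M_{\delta ,f}(0_{2\times 1})\cap {\mathbb {B}}({\bar{x}};\epsilon )\) satisfies \(y\le 0\), so the ball \({\mathbb {B}}(y;\nu )\) contains points \(x<0\), for which \(pos\,\{\nabla f_1(x),\nabla f_2(x)\}=pos\,\{1,2x\}={\mathbb {R}}\); this is consistent with the L-stability of \({\bar{x}}=0\) established there.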

Theorem 4 provides a necessary condition for tilt-stability without convexity assumption.

Theorem 4

Let \({\bar{x}}\in {\mathbb {R}}^n\). Assume that \(p=1\) and f is finite-valued around \({\bar{x}}\). If \({\bar{x}}\) is a tilt-stable local minimum of f, then for each \(\nu >0\) there exist scalars \(\alpha >0\) and \(L>1\) such that

$$\begin{aligned} \alpha {\mathbb {B}}_n\subseteq \bigcup _{x\in {\mathbb {B}}({\bar{x}};\nu )} \bigg (\partial f(x)\bigcap \Big (\dfrac{\Vert x-{\bar{x}}\Vert }{L}{\mathbb {B}}_n\Big )^c\bigg ). \end{aligned}$$
(17)

Proof

Assume that \({\bar{x}}\) is a tilt-stable local minimum of f. Then, there exists some \(\delta >0\) such that \(M_{\delta ,f}(0_n)=\{{\bar{x}}\}\), and there exist a scalar \(L>1\) and a neighborhood \({\mathcal {V}}\) of \(v=0_n\) such that the mapping \(M_{\delta ,f}\) is single-valued on \({\mathcal {V}}\) and, for each \(u,v\in {\mathcal {V}},\)

$$\begin{aligned} \Vert M_{\delta ,f}(u) -M_{\delta ,f}(v)\Vert \le L\Vert u-v\Vert . \end{aligned}$$

Also, without loss of generality, assume that f is finite-valued on \({\mathbb {B}}[{\bar{x}};\delta ]\). If (17) holds for some \(\hat{\nu }>0\), then it holds for any \(\nu >{{\hat{\nu }}}\); so, it suffices to consider \(\nu \in (0,\delta )\). There exists a sufficiently small \(\alpha >0\) such that \(\alpha {\mathbb {B}}_n\subseteq {\mathcal {V}}\) and \(\alpha L<\nu \). Let \(d\in \alpha {\mathbb {B}}_n\) be arbitrary but fixed. Then, there exists \({\hat{x}}\in {\mathbb {B}}({\bar{x}};\delta )\) such that \(M_{\delta ,f}(d)=\{{\hat{x}}\}\) and

$$\begin{aligned} \Vert {\hat{x}}-{\bar{x}}\Vert \le L\Vert d\Vert<L\alpha <\nu . \end{aligned}$$

Therefore, \({\hat{x}}\in {\mathbb {B}}({\bar{x}};\nu )\) and \(d\not \in \dfrac{\Vert {\hat{x}}-{\bar{x}}\Vert }{L}{\mathbb {B}}_n\). On the other hand, \(M_{\delta ,f}(d)=\{{\hat{x}}\}\) and \(\Vert {{\hat{x}}}-{\bar{x}}\Vert <\delta \) yield \(d\in \partial f({\hat{x}})\), invoking [19, Proposition 1.114]. Hence,

$$\begin{aligned} d\in \bigcup _{x\in {\mathbb {B}}({\bar{x}};\nu )} \bigg (\partial f(x)\bigcap \Big (\dfrac{\Vert x-{\bar{x}}\Vert }{L}{\mathbb {B}}_n\Big )^c\bigg ), \end{aligned}$$

and the proof is complete. \(\square \)
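
For instance, for \(f(x)=x^2\), which is tilt-stable at \({\bar{x}}=0\), condition (17) is easily verified: given \(\nu >0\), take any \(L>1\) and \(\alpha \in (0,2\nu )\); for each \(d\in \alpha {\mathbb {B}}_1\), the point \(x:=d/2\in {\mathbb {B}}({\bar{x}};\nu )\) satisfies \(d\in \partial f(x)=\{2x\}\) and \(\vert d\vert =2\vert x\vert \ge \dfrac{\vert x-{\bar{x}}\vert }{L}\), that is, \(d\notin \dfrac{\vert x-{\bar{x}}\vert }{L}{\mathbb {B}}_1\).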

The following example shows that in Theorem 4, the tilt-stability assumption cannot be replaced by unique optimality.

Example 3

Consider the function \(f:{\mathbb {R}}\rightarrow {\mathbb {R}}\) defined by

$$\begin{aligned} f(x):=\left\{ \begin{array}{ll} x^3,~~~&{}x\ge 0, \\ -x^3,~~~&{}x<0 \end{array}\right. \end{aligned}$$

Here, \({\bar{x}}:=0\) is the unique minimizer of f, but it does not satisfy (17). More precisely, let \(\nu =1\). If (17) holds for some \(\alpha >0\) and \(L>1\), then, considering \(d\in (0,\min \{\alpha ,\frac{1}{3L^2}\})\), we have \(d\in \alpha {\mathbb {B}}_1\), and so \(d\in \partial f(x)\) for some \(x\in {\mathbb {R}}\) with \(\vert x\vert <1\); since \(d>0\), necessarily \(x>0\) and \(\partial f(x)=\{3x^2\}\). Thus \(3x^2=d<\frac{1}{3L^2}\), which gives \(9x^4<\frac{x^2}{L^2}\) and then \(3x^2<\frac{\vert x\vert }{L}=\frac{\vert x-{{\bar{x}}}\vert }{L}\). This implies \(d\in \frac{\vert x-{{\bar{x}}}\vert }{L}{\mathbb {B}}_1\), which contradicts (17).

5 Weighted sum scalarization and L-stable local efficiency

One of the most popular and important approaches for handling and solving multi-objective programming problems is scalarization. There are several scalarization techniques in the literature; the weighted sum method is one of the most practical and interpretable ones [9]. In the current section, we provide a bridge between L-stability and some corresponding notions defined for the single-objective case, by means of the weighted sum technique.

Given \(\lambda \in {\mathbb {R}}^p_\ge \), the single-objective weighted sum problem corresponding to (6) is as follows:

$$\begin{aligned} \begin{array}{ll} \min \,\displaystyle \sum _{i=1}^p\lambda _if_i(x) \\ ~s.t.~~x\in {\mathbb {R}}^n. \end{array} \end{aligned}$$

We define the extended real-valued function \(h:{\mathbb {R}}^n \times {\mathbb {R}}^p \longrightarrow \overline{{\mathbb {R}}}\) by

$$\begin{aligned} h(x,\lambda ):=\langle \lambda ,f\rangle (x):=\sum _{i=1}^p \lambda _i f_i(x),~~(x,\lambda )\in {\mathbb {R}}^n\times {\mathbb {R}}^p. \end{aligned}$$

Also, we consider the single-objective optimization problem

$$\begin{aligned} {\mathcal {P}}(\lambda ,v):~~ \min \,h(x,\lambda )-\langle v,x\rangle ~~\text {s.t.}~x\in {\mathbb {R}}^n, \end{aligned}$$

as a two-parametric unconstrained optimization problem with parameters \(\lambda \in {\mathbb {R}}^p\) and \(v\in {\mathbb {R}}^n\).
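
To see how the weighted sum problem traces out efficient solutions, consider again \(f(x)=(x,x^2)\) on \({\mathbb {R}}\) as in Example 2. For \(\lambda \in \Delta ^2_>\), the function \(h(x,\lambda )=\lambda _1x+\lambda _2x^2\) has the unique minimizer \(x=-\frac{\lambda _1}{2\lambda _2}\), and these minimizers sweep out \((-\infty ,0)\) as \(\lambda \) ranges over \(\Delta ^2_>\). The efficient solution \({\bar{x}}=0\) of (13) is not obtained for any \(\lambda \in \Delta ^2_>\), which reflects the fact that it is not properly efficient.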

Now, we present some characterizations of L-stable locally efficient solutions, utilizing the weighted sum problem. In the following, we show that, under appropriate assumptions, full-stability and tilt-stability for the weighted sum problem imply L-stability for MOP (6).

Theorem 5

Let \(f_i\), \(i=1,2, \ldots , p\), be locally convex at \({\bar{x}}\in dom\,f\). Assume that there exists a unique vector \({\bar{\lambda }} \in \Delta ^p_>\) such that \({\bar{x}}\) is a locally optimal solution of \({\mathcal {P}}({\bar{\lambda }},0_n)\). If \({\bar{x}}\) is a full-stable local optimal solution of \({\mathcal {P}}({\bar{\lambda }},0_n)\), then \({\bar{x}}\) is an L-stable locally efficient solution of (6).

Proof

Assume that \({\bar{x}}\in dom\,f\) is a full-stable local optimal solution of \({\mathcal {P}}({\bar{\lambda }},0_n)\). There exist scalars \(\delta >0\) and \(K>0\) and neighborhoods \(\Lambda \) of \({\bar{\lambda }}\) and \({\mathcal {V}}\) of \(0_n\) such that the mapping \(M_{\delta }(\cdot ,\cdot )\) is single-valued on \(\Lambda \times {\mathcal {V}}\) with \(M_{\delta }({\bar{\lambda }},0_n)=\{{\bar{x}}\}\) and

$$\begin{aligned} \Vert M_{\delta }(\lambda ,v)-M_{\delta }(\mu ,u) \Vert \le K\Vert (\lambda ,v)-(\mu ,u)\Vert , \end{aligned}$$
(18)

for any \((\lambda ,v),(\mu ,u) \in \Lambda \times {\mathcal {V}}\). Without loss of generality, we assume that \(f_i\), \(i=1,2,\ldots ,p\), are convex on \({\mathbb {B}}[{\bar{x}};\delta ]\). Due to \(M_{\delta }({\bar{\lambda }},0_n)=\{{\bar{x}}\}\), the vector \({\bar{x}}\) is the unique locally optimal solution of \({\mathcal {P}}({\bar{\lambda }},0_n)\) with radius \(\delta \). So, thanks to [9, Proposition 3.9], \({\bar{x}}\) is a local strictly efficient solution of (6) with radius \(\delta \). Hence, \({\bar{x}}\in M_{\delta ,f}(0_{p\times n})\).

We consider \(\gamma >0\) sufficiently small such that \({\mathbb {B}}({\bar{\lambda }};\gamma ) \subseteq \Lambda \cap {\mathbb {R}}^p_>\). Now, we claim that there exists \(\epsilon \in (0,\delta )\) such that \(\epsilon {\mathbb {B}}_{n}\subseteq {\mathcal {V}}\) and, for any \(C\in \dfrac{\epsilon }{\Vert {\bar{\lambda }}\Vert +\gamma } {\mathbb {B}}_{p\times n}\) and any \(x\in M_{\delta ,f}(C)\cap {\mathbb {B}}({\bar{x}};\epsilon )\), x is a locally optimal solution of \({\mathcal {P}}(\lambda ,\sum _{i=1}^p \lambda _i C_i)\) with radius \(\delta \) for some \(\lambda \in {\mathbb {B}}({\bar{\lambda }};\gamma )\). Hereafter, \(C_i\) stands for the i-th row of the matrix C.

Proof of the claim: Arguing by contradiction, assume that there are sequences \(\epsilon _{\nu }\longrightarrow 0\), \(C^{\nu }\longrightarrow 0_{p\times n}\), and \(x_{\nu }\in M_{\delta ,f}(C^{\nu })\cap {\mathbb {B}}({\bar{x}};\epsilon _{\nu })\) such that \(W^{\nu } \cap {\mathbb {B}}({\bar{\lambda }};\gamma ) =\emptyset \) for any \(\nu \in {\mathbb {N}}\), where \(W^{\nu }\) denotes the set of all \(\lambda \in \Delta ^p_\ge \) such that \(x_{\nu }\) is a locally optimal solution of \({\mathcal {P}}(\lambda ,\sum _{i=1}^p\lambda _i C^{\nu }_i)\) with radius \(\delta \). The local strict efficiency of the vector \(x_{\nu }\) on \({\mathbb {B}}[{\bar{x}};\delta ]\) and the local convexity assumption yield \(W^{\nu } \ne \emptyset \) for each \(\nu \in {\mathbb {N}}\) (due to [9, Proposition 3.10]). Consider \(\lambda ^\nu \in W^{\nu },~\nu \in {\mathbb {N}}\). Passing to a subsequence if necessary, \(\lambda ^\nu \longrightarrow {\tilde{\lambda }}\) for some \({\tilde{\lambda }} \in \Delta ^p_\ge \). It is clear that \({\tilde{\lambda }} \not \in {\mathbb {B}}({\bar{\lambda }};\gamma )\). Let \(y\in {\mathbb {B}}[{\bar{x}};\delta ]\) be arbitrary. We get

$$\begin{aligned} \displaystyle \sum _{i=1}^p \lambda ^{\nu }_i f_i(x_{\nu }) -\displaystyle \sum _{i=1}^p \lambda ^{\nu }_i C^{\nu }_ix_{\nu } \le \displaystyle \sum _{i=1}^p \lambda ^{\nu }_i f_i(y) -\displaystyle \sum _{i=1}^p \lambda ^{\nu }_i C^{\nu }_iy,~~\forall \nu \in {\mathbb {N}}, \end{aligned}$$

which yields

$$\begin{aligned} \displaystyle \sum _{i=1}^p {\tilde{\lambda }}_i f_i({\bar{x}})\le \displaystyle \sum _{i=1}^p {\tilde{\lambda }}_i f_i(y), \end{aligned}$$

as \(\nu \rightarrow \infty \). On the other hand, \(\sum _{i=1}^p {\bar{\lambda }}_i f_i({\bar{x}})\le \sum _{i=1}^p {\bar{\lambda }}_i f_i(y)\), due to the local optimality of \({\bar{x}}\) for \({\mathcal {P}}({\bar{\lambda }},0_n)\) with radius \(\delta \). Therefore,

$$\begin{aligned} \sum _{i=1}^p \dfrac{({\bar{\lambda }}+{\tilde{\lambda }})_i}{2} f_i({\bar{x}})\le \sum _{i=1}^p \dfrac{({\bar{\lambda }}+{\tilde{\lambda }})_i}{2} f_i(y). \end{aligned}$$

So, \({\bar{x}}\) is a locally optimal solution of \({\mathcal {P}}(\dfrac{{\bar{\lambda }}+{\tilde{\lambda }}}{2},0_n)\) with radius \(\delta \), where \(\dfrac{{\bar{\lambda }}+{\tilde{\lambda }}}{2}\in \Delta ^p_>\) and \(\dfrac{{\bar{\lambda }}+{\tilde{\lambda }}}{2}\ne {\bar{\lambda }}\) (because \({\tilde{\lambda }}\not \in {\mathbb {B}}({\bar{\lambda }};\gamma )\)). This contradicts the uniqueness of \({\bar{\lambda }}\). Hence, the claim is proved.

Now, let \({{\hat{C}}}, {{\hat{D}}}\in \dfrac{\epsilon }{\Vert {\bar{\lambda }}\Vert +\gamma } {\mathbb {B}}_{p\times n}\) and \({{\hat{x}}}\in M_{\delta ,f}({{\hat{C}}})\cap {\mathbb {B}}({\bar{x}};\epsilon )\). Then, there exists \({{\hat{\lambda }}} \in {\mathbb {B}}({\bar{\lambda }};\gamma )\) such that \({{\hat{x}}}\) is a locally optimal solution of \({\mathcal {P}}(\hat{\lambda },\sum _{i=1}^p{\hat{\lambda }}_i{\hat{C}}_i)\) with radius \(\delta \). As

$$\begin{aligned} \left\| \sum _{i=1}^p{\hat{\lambda }}_i{\hat{C}}_i\right\| \le \Vert {\hat{\lambda }}\Vert \Vert {\hat{C}}\Vert < (\Vert {\bar{\lambda }}\Vert +\gamma )\dfrac{\epsilon }{\Vert {\bar{\lambda }}\Vert +\gamma } =\epsilon , \end{aligned}$$

we have \(M_{\delta }({\hat{\lambda }},\sum _{i=1}^p{\hat{\lambda }}_i {\hat{C}}_i)=\{{\hat{x}}\}\). On the other hand, the problem \({\mathcal {P}}({{\hat{\lambda }}},\sum _{i=1}^p{\hat{\lambda }}_i{\hat{D}}_i)\) admits a locally optimal solution with radius \(\delta \). If \({\hat{z}}\) is a locally optimal solution of \({\mathcal {P}}(\hat{\lambda },\sum _{i=1}^p{\hat{\lambda }}_i{\hat{D}}_i)\) with radius \(\delta \), then, arguing as above, \(M_{\delta }({\hat{\lambda }},\sum _{i=1}^p{\hat{\lambda }}_i {\hat{D}}_i)=\{{\hat{z}}\}\) and so \({\hat{z}}\in M_{\delta ,f}({\hat{D}})\).

$$\begin{aligned} \begin{aligned} \Vert {\hat{x}}-{\hat{z}}\Vert&=\Vert M_{\delta }({\hat{\lambda }},\sum _{i=1}^p{\hat{\lambda }}_i {\hat{C}}_i)-M_{\delta }({\hat{\lambda }},\sum _{i=1}^p{\hat{\lambda }}_i {\hat{D}}_i)\Vert \\&\le K \Vert \sum _{i=1}^p {{\hat{\lambda }}}_i {{\hat{C}}}_i - \sum _{i=1}^p {{\hat{\lambda }}}_i {{\hat{D}}}_i\Vert ~~~~~~~~~\text {[By }(18)] \\&=K \Vert \sum _{i=1}^p {{\hat{\lambda }}}_i ({{\hat{C}}}_i - {{\hat{D}}}_i)\Vert \\&\le K\Vert {{\hat{\lambda }}}\Vert \Vert {{\hat{C}}}-{{\hat{D}}}\Vert \le K(\Vert {\bar{\lambda }}\Vert +\gamma )\Vert {{\hat{C}}}- {{\hat{D}}}\Vert . \end{aligned} \end{aligned}$$

Therefore, by setting \({\mathcal {U}}_0:={\mathbb {B}}({\bar{x}};\epsilon )\), \({\mathcal {V}}_0:=\dfrac{\epsilon }{\Vert {\bar{\lambda }}\Vert +\gamma } {\mathbb {B}}_{p\times n}\), and \(L:=K(\Vert {\bar{\lambda }}\Vert +\gamma )\), we have

$$\begin{aligned} M_{\delta ,f}(C_0)\cap {\mathcal {U}}_0 \subseteq M_{\delta ,f}(D_0) +L\Vert C_0-D_0\Vert {\mathbb {B}}_n,~~ \forall C_0,D_0 \in {\mathcal {V}}_0. \end{aligned}$$

So, \({\bar{x}}\) is an L-stable locally efficient solution of (6) and the proof is complete. \(\square \)

The following example shows that the converse of Theorem 5 is not necessarily valid.

Example 4

Consider the multi-objective optimization problem

$$\begin{aligned} \min \,f(x)=(f_1(x),f_2(x))~~s.t.~~x\in {\mathbb {R}}, \end{aligned}$$

where

$$\begin{aligned} f_1(x):=\left\{ \begin{array}{ll} x, ~~&{} x\ge -1,\\ -1, ~~&{} x<-1, \end{array}\right. ~~~~~~~~~~f_2(x):=\left\{ \begin{array}{ll} -1, ~~&{} x\ge 1,\\ -x, ~~&{} x<1. \end{array}\right. \end{aligned}$$

Let \({\bar{x}}:=0\), \(\delta :=1\), and \(\epsilon :=0.5\). The restriction of the objective function to \({\mathbb {B}}[{\bar{x}};\delta ]\) is \(g(x):=f_{\mid _{{\mathbb {B}}[{\bar{x}};\delta ]}}(x)=(x,-x)\), and \(M_{\delta ,f}(0)={\mathbb {B}}[{\bar{x}};\delta ]\). Also, for any \(C=(c_1,c_2)\) with \(\Vert C\Vert <\epsilon \), we get \(g(x)+Cx= \big ((1+c_1)x,(c_2-1)x\big )\), whose first component is strictly increasing and whose second component is strictly decreasing, so \(M_{\delta ,f}(C)={\mathbb {B}}[{\bar{x}};\delta ]\). Since \(M_{\delta ,f}(C)\) does not depend on such \(C\), the inclusion defining L-stability holds with any \(L>0\), and therefore \({\bar{x}}=0\) is an L-stable locally efficient solution of the above problem. On the other hand, \({\bar{x}}\) is a properly efficient solution of this problem: it is a locally optimal solution of \({\mathcal {P}}({\bar{\lambda }},0)\) for the unique vector \(\bar{\lambda }:=(0.5,0.5)\in \Delta ^2_>\), while it is not the unique locally optimal solution. In fact, \(M_{\nu }({\bar{\lambda }},0)={\mathbb {B}}[{\bar{x}};\nu ]\) for any \(\nu \le \delta \). Thus, \({\bar{x}}\) is not a full-stable local optimal solution for \({\mathcal {P}}({\bar{\lambda }},0)\).
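
Purely as an illustration of the computation above (our addition; the helper names are ours and the check is a grid approximation, not a proof), the following Python sketch samples a few tilts \(C\) with \(\Vert C\Vert <\epsilon \) and verifies that no point of a discretization of \({\mathbb {B}}[{\bar{x}};\delta ]=[-1,1]\) is dominated by another one for the tilted objective \(f(x)+Cx\).

```python
import numpy as np

# Grid check for Example 4 (a sketch, not a proof): for small tilts C = (c1, c2),
# every point of B[0;1] = [-1, 1] stays strictly efficient (relative to B[0;1])
# for the tilted objective f(x) + C x, matching M_{delta,f}(C) = [-1, 1].

def f(x):
    f1 = np.where(x >= -1.0, x, -1.0)   # f1(x) = x if x >= -1, else -1
    f2 = np.where(x >= 1.0, -1.0, -x)   # f2(x) = -1 if x >= 1, else -x
    return np.stack([f1, f2], axis=-1)

rng = np.random.default_rng(0)
grid = np.linspace(-1.0, 1.0, 401)      # discretization of B[xbar; delta]

for _ in range(5):
    C = rng.uniform(-0.35, 0.35, size=2)        # ||C||_2 <= 0.35*sqrt(2) < 0.5 = eps
    vals = f(grid) + grid[:, None] * C          # tilted values f(x) + C x
    for j, v in enumerate(vals):
        others = np.delete(vals, j, axis=0)
        # strict efficiency of grid[j]: no other point has both components <= those of grid[j]
        assert not np.any(np.all(others <= v, axis=1)), "a grid point lost efficiency"

print("all grid points of [-1, 1] remain strictly efficient under the sampled tilts")
```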

The following theorem provides sufficient conditions for L-stability in terms of the extended second-order subdifferential.

To state it precisely, we first define the set-valued mapping

$$\begin{aligned} S:{\mathbb {R}}^n\times {\mathbb {R}}^p\times {\mathbb {R}}^n\rightrightarrows \underbrace{{\mathbb {R}}^n\times {\mathbb {R}}^n\times \ldots \times {\mathbb {R}}^n}_{p~\text {copies}}, \end{aligned}$$

by

$$\begin{aligned} S(x,\lambda ,v):=\bigg \{(v_1,v_2,\ldots ,v_p): \sum _{i=1}^pv_i=v,~~v_i\in \lambda _i\partial f_i(x), ~i=1,2,\ldots ,p\bigg \}. \end{aligned}$$

Theorem 6

Let \(f_i\), \(i=1,2, \ldots , p\), be convex around \({\bar{x}}\in dom\,f\). Assume that there exists a unique vector \({\bar{\lambda }} \in \Delta ^p_>\) such that \(0_n\in \sum _{i=1}^p{\bar{\lambda }}_i\partial f_i({\bar{x}})\). Then \({\bar{x}}\) is an L-stable locally efficient solution of (6) if either (a) or (b) below holds.

  (a)

    $$\begin{aligned} \left\{ \begin{array}{ll} (0_n,\mu )\in {\tilde{\partial }}^2_x\langle \cdot ,f\rangle ({\bar{x}},{\bar{\lambda }},0_n)(0_n) \Longrightarrow \mu =0_p, \\[4pt] \big ((w,\mu )\in {\tilde{\partial }}^2_x\langle \cdot ,f\rangle ({\bar{x}},{\bar{\lambda }},0_n)(u) , u\ne 0_n\big ) \Longrightarrow \langle u,w\rangle >0. \end{array}\right. \end{aligned}$$
    (19)
  (b)

    There exist \({\bar{v}}_i\in {\bar{\lambda }}_i\partial f_i({\bar{x}})\), \(i=1,2,\ldots ,p\), with \(\sum _{i=1}^p{\bar{v}}_i=0_n\), such that S is inner semicontinuous at \(({\bar{x}},{\bar{\lambda }},0_n,{\bar{v}}_1,{\bar{v}}_2,\ldots ,{\bar{v}}_p)\). Furthermore, for each \(k=1,2,\ldots ,p-1\), the constraint qualification

    $$\begin{aligned} {\tilde{\partial }}^2_xh_k({\bar{x}},{\bar{\lambda }},{\bar{v}}_k)(0_n) \bigcap \big (-{\tilde{\partial }}^2_x\big [\sum _{i=k+1}^p h_i\big ]({\bar{x}},{\bar{\lambda }},\sum _{i=k+1}^p{\bar{v}}_i)(0_n)\big )=\{(0_n,0_p)\}, \end{aligned}$$
    (20)

    holds, and, for each \(i=1,2,\ldots ,p\), the second-order conditions

    $$\begin{aligned} \left\{ \begin{array}{ll} (0_n,\mu )\in \sum _{i=1}^p{\tilde{\partial }}^2_xh_i({\bar{x}},{\bar{\lambda }},{\bar{v}}_i)(0_n) \Longrightarrow \mu =0_p, \\[4pt] \big ((w,\mu )\in {\tilde{\partial }}^2_xh_i({\bar{x}},{\bar{\lambda }},{\bar{v}}_i)(u) , u\ne 0_n\big ) \Longrightarrow \langle u,w\rangle >0, \end{array}\right. \end{aligned}$$
    (21)

    are fulfilled, where \(h_i(x,\lambda ):=\lambda _i f_i(x)\).

Proof

Assume (a). The convexity assumption ensures the “prox-regularity” and “subdifferential continuity” of \(\langle \cdot ,f\rangle \) at \(({\bar{x}},{\bar{\lambda }},0_n)\) (defined in [34, Definitions 13.27 and 13.28, respectively]). Also, by [19, Corollary 1.81],

$$\begin{aligned} \partial ^\infty \langle \cdot ,f\rangle ({\bar{x}},{\bar{\lambda }})=\{(0_n,0_p)\}. \end{aligned}$$

On the other hand,

$$\begin{aligned} 0_n\in \sum _{i=1}^p{\bar{\lambda }}_i\partial f_i({\bar{x}})= \partial \left( \sum _{i=1}^p{\bar{\lambda }}_if_i\right) ({\bar{x}})=\partial _x\langle \cdot ,f\rangle ({\bar{x}},{\bar{\lambda }}). \end{aligned}$$

So, thanks to [26, Theorem 3.2] (or [15, Theorem 2.3]), \({\bar{x}}\) is a full-stable local optimal solution of \({\mathcal {P}}({\bar{\lambda }},0_n)\). Hence, according to Theorem 5, \({\bar{x}}\) is an L-stable locally efficient solution of (6).

Now, assume (b). Consider \({\bar{v}}_0:=0_n\). For each \(k=1,2,\ldots ,p-1\), we define the set-valued mapping \(S_k:{\mathbb {R}}^n\times {\mathbb {R}}^p\times {\mathbb {R}}^n\rightrightarrows {\mathbb {R}}^n\times {\mathbb {R}}^n\) by

$$\begin{aligned} S_k(x,\lambda ,v):=\bigg \{(v_1,v_2):~~v_1\in \lambda _k\partial f_k(x),~v_2\in \sum _{i=k+1}^p\lambda _i\partial f_i(x),~ v_1+v_2=v\bigg \}. \end{aligned}$$

It can be shown that the set-valued mappings \(S_k\) are inner semicontinuous at \(({\bar{x}},{\bar{\lambda }},-\sum _{i=0}^{k-1}{\bar{v}}_i,{\bar{v}}_k,\sum _{i=k+1}^p{\bar{v}}_i)\). Invoking the constraint qualification (20) and the definition of the extended second-order subdifferential mapping (2), by [19, Theorem 3.10], for each \(u\in {\mathbb {R}}^n\) we get

$$\begin{aligned} \begin{aligned} {\tilde{\partial }}^2_x\langle \cdot ,f\rangle ({\bar{x}},{\bar{\lambda }},0_n)(u)&\subseteq {\tilde{\partial }}^2_xh_1({\bar{x}},{\bar{\lambda }},{\bar{v}}_1)(u)+ {\tilde{\partial }}^2_x\big [\sum _{i=2}^p h_i\big ]({\bar{x}},{\bar{\lambda }},\sum _{i=2}^p{\bar{v}}_i)(u)\\&\subseteq \sum _{i=1}^2{\tilde{\partial }}^2_xh_i({\bar{x}},{\bar{\lambda }},{\bar{v}}_i)(u)+ {\tilde{\partial }}^2_x\big [\sum _{i=3}^p h_i\big ]({\bar{x}},{\bar{\lambda }},\sum _{i=3}^p{\bar{v}}_i)(u)\\&\vdots \\&\subseteq \sum _{i=1}^p{\tilde{\partial }}^2_xh_i({\bar{x}},{\bar{\lambda }},{\bar{v}}_i)(u). \end{aligned} \end{aligned}$$

Let \(u\in {\mathbb {R}}^n\) and \((w,\mu )\in {\tilde{\partial }}^2_x\langle \cdot ,f\rangle ({\bar{x}},{\bar{\lambda }},0_n)(u)\). Then, there exist \((w_i,\mu _i)\in {\tilde{\partial }}^2_xh_i({\bar{x}},{\bar{\lambda }},{\bar{v}}_i)(u),\) \(i=1,2,\ldots ,p\), such that

$$\begin{aligned} (w,\mu )=\sum _{i=1}^p(w_i,\mu _i)=\left( \sum _{i=1}^pw_i,\sum _{i=1}^p\mu _i\right) . \end{aligned}$$

Now, the second-order conditions (21) yield the following. If \(u=0_n\) and \(w=0_n\), then \((0_n,\mu )\in \sum _{i=1}^p{\tilde{\partial }}^2_xh_i({\bar{x}},{\bar{\lambda }},{\bar{v}}_i)(0_n)\), and the first condition in (21) gives \(\mu =0_p\). If \(u\ne 0_n\), then the second condition in (21) gives \(\langle u,w_i\rangle >0\) for each \(i=1,2,\ldots ,p\), which leads to

$$\begin{aligned} \langle u,w\rangle =\langle u,\sum _{i=1}^pw_i\rangle >0. \end{aligned}$$

Thus, the second-order conditions (19) hold and, by part (a), \({\bar{x}}\) is an L-stable locally efficient solution of (6). The proof is complete. \(\square \)
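
For orientation, we add the following remark (ours, not part of the original argument). Suppose that each \(f_i\) is twice continuously differentiable around \({\bar{x}}\) and that, as in [15, 26], \({\tilde{\partial }}^2_x\) in (2) is the coderivative of the partial subgradient mapping \((x,\lambda )\mapsto \partial _x\langle \cdot ,f\rangle (x,\lambda )=\{\sum _{i=1}^p\lambda _i\nabla f_i(x)\}\). This mapping is then single-valued and continuously differentiable, its coderivative reduces to the adjoint Jacobian, and

$$\begin{aligned} {\tilde{\partial }}^2_x\langle \cdot ,f\rangle ({\bar{x}},{\bar{\lambda }},0_n)(u)=\Big \{\Big (\sum _{i=1}^p{\bar{\lambda }}_i\nabla ^2 f_i({\bar{x}})u,~\big (\langle \nabla f_1({\bar{x}}),u\rangle ,\ldots ,\langle \nabla f_p({\bar{x}}),u\rangle \big )\Big )\Big \},~~u\in {\mathbb {R}}^n. \end{aligned}$$

Under this smoothness assumption, the first condition in (19) holds automatically, while the second one amounts to the positive definiteness of \(\sum _{i=1}^p{\bar{\lambda }}_i\nabla ^2 f_i({\bar{x}})\), recovering the familiar second-order sufficient condition of the smooth case.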

In Definition 5 (extracted from [15, 26, 28]), full stability is defined for a general function, while in our setting the function \(\varphi \) has a special structure inspired by the weighted sum scalarization. This leads us to define another stability notion, called \({{\bar{\lambda }}}\)-tilt-stability, which depends on a given weight vector \({\bar{\lambda }}\in {\mathbb {R}}^p_\ge \). This concept has a simpler structure compared with the general one.

Definition 7

Consider the function \(\varphi :{\mathbb {R}}^n \times {\mathbb {R}}^p_\ge \longrightarrow \overline{{\mathbb {R}}}\), of variables \((x,\lambda ) \in {\mathbb {R}}^n \times {\mathbb {R}}^p_\ge \), defined by \(\varphi (x,\lambda ):=\langle \lambda ,f\rangle (x)\). Given \({\bar{x}} \in {\mathbb {R}}^n\) and \({\bar{\lambda }}\in {\mathbb {R}}^p_\ge \), we say that \({\bar{x}}\) is a \({{\bar{\lambda }}}\)-tilt-stable locally optimal solution of \(\varphi \) (with respect to the variable x), if there are scalars \(\delta , L>0\) and neighborhoods \(\Lambda \) of \({\bar{\lambda }}\) and \({\mathcal {V}}\) of \(0_n\) such that the mapping

$$\begin{aligned} v\longmapsto M_{\delta }(\lambda ,v) \end{aligned}$$

is single-valued with \(M_{\delta }({{\bar{\lambda }}},0_n)=\{{\bar{x}}\}\) and Lipschitz on \({\mathcal {V}}\) with Lipschitz constant L, for any \(\lambda \in \Lambda \cap {\mathbb {R}}^p_\ge \).
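
To illustrate Definition 7 with a worked instance of our own (chosen purely for illustration), let \(n=1\), \(p=2\), \(f_1(x):=x^2\), \(f_2(x):=(x-1)^2\), \({\bar{x}}:=\tfrac{1}{2}\), and \({\bar{\lambda }}:=(\tfrac{1}{2},\tfrac{1}{2})\); here \(M_{\delta }(\lambda ,v)\) collects, as in the preceding results, the locally optimal solutions with radius \(\delta \) of the tilted scalarized problem of minimizing \(\langle \lambda ,f\rangle (x)-vx\). For \(\lambda =(\lambda _1,\lambda _2)\in \Delta ^2_\ge \),

$$\begin{aligned} \langle \lambda ,f\rangle (x)-vx=x^2-(2\lambda _2+v)x+\lambda _2, \end{aligned}$$

so this strictly convex quadratic has the unique (global, hence local with any radius) minimizer \(\lambda _2+\tfrac{v}{2}\), that is, \(M_{\delta }(\lambda ,v)=\{\lambda _2+\tfrac{v}{2}\}\). Hence \(v\longmapsto M_{\delta }(\lambda ,v)\) is single-valued and Lipschitz with constant \(L=\tfrac{1}{2}\) on a neighborhood of \(0\), uniformly in \(\lambda \) near \({\bar{\lambda }}\), and \(M_{\delta }({\bar{\lambda }},0)=\{{\bar{x}}\}\); so \({\bar{x}}\) is a \({\bar{\lambda }}\)-tilt-stable locally optimal solution of \(\varphi \).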

In Theorem 7 below, the function \(\varphi :{\mathbb {R}}^n \times {\mathbb {R}}^p_\ge \longrightarrow \overline{{\mathbb {R}}}\), of variables \((x,\lambda ) \in {\mathbb {R}}^n \times {\mathbb {R}}^p_\ge \), is defined by \(\varphi (x,\lambda ):=\langle \lambda ,f\rangle (x)=\displaystyle \sum _{i=1}^p\lambda _if_i(x)\).

Theorem 7

Let \(f_i\), \(i=1,2, \ldots , p\), be locally convex at \({\bar{x}}\in dom\,f\). Assume that there exists a unique vector \({\bar{\lambda }} \in \Delta ^p_\ge \) such that \({\bar{x}}\) is a locally optimal solution of \({\mathcal {P}}({\bar{\lambda }},0_n)\). If \({\bar{x}}\) is a \({{\bar{\lambda }}}\)-tilt-stable locally optimal solution of \(\varphi \) (with respect to the variable x), then \({\bar{x}}\) is an L-stable locally efficient solution of (6).

Proof

The proof is similar to that of Theorem 5 and is hence omitted. \(\square \)

Notice that in Theorem 7, \({\bar{\lambda }} \in {\mathbb {R}}^p_\ge \), while in Theorem 5, \({\bar{\lambda }} \in {\mathbb {R}}^p_>\). Indeed, under the local convexity assumptions, the existence of \({\bar{\lambda }} \in {\mathbb {R}}^p_>\) (resp. \({\bar{\lambda }} \in {\mathbb {R}}^p_\ge \)) such that \({\bar{x}}\) is a locally optimal solution of \({\mathcal {P}}({\bar{\lambda }},0_n)\), assumed in Theorem 5 (resp. Theorem 7), is equivalent to the local proper (resp. weak) efficiency of \({\bar{x}}\) for problem (6).

In the following, we present necessary and sufficient conditions for L-stability in terms of the metric regularity of the subgradient mapping. First, we derive a sufficient condition.

Theorem 8

Let \(f_i\), \(i=1,2, \ldots , p\), be locally convex at \({\bar{x}}\in dom\,f\). If \(0_n\in \partial _x h({\bar{x}},{\bar{\lambda }})\) for a unique vector \({\bar{\lambda }} \in \Delta ^p_>\), and \(\partial _x h\) enjoys the partial strong metric regularity property at \(({\bar{x}},{\bar{\lambda }},0_n)\), then \({\bar{x}}\) is an L-stable locally efficient solution of (6).

Proof

Apply Theorem 5 and [26, Theorem 3.4]. \(\square \)

We close the paper with a necessary condition for L-stability in terms of the metric regularity. In this regard, we need the set-valued mapping \(\Phi :{\mathbb {R}}^n\rightrightarrows {\mathbb {R}}^n\) defined by

$$\begin{aligned} \Phi (x):=conv\,\big (\bigcup _{i=1}^p\partial f_i(x)\big ),~~x\in {\mathbb {R}}^n. \end{aligned}$$

If \(f_i\), \(i=1,2,\ldots ,p\), are convex, then by [4, Proposition 10.15], we have

$$\begin{aligned} \Phi (x)=\displaystyle \bigcup _{\lambda \in \Delta ^p_\ge }\partial _x h(x,\lambda ), \end{aligned}$$

and

$$\begin{aligned} \Phi ^{-1}(u)=\pi _1(\partial _x h)^{-1}(u):=\{x\in {\mathbb {R}}^n: ~u\in \partial _x h(x,\lambda ) ~\text {for some}~\lambda \in \Delta ^p_\ge \}. \end{aligned}$$
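
As a concrete illustration of these two descriptions of \(\Phi \) (our addition; the instance \(f_1(x)=x^2\), \(f_2(x)=(x-1)^2\) is chosen by us), note that in this case \(\partial f_1(x)=\{2x\}\) and \(\partial f_2(x)=\{2(x-1)\}\), so \(\Phi (x)=[2(x-1),2x]\) and \(\Phi ^{-1}(u)=[\tfrac{u}{2},\tfrac{u}{2}+1]\). The following minimal Python sketch checks numerically that the convex-hull and union formulas for \(\Phi \) coincide on a few sample points.

```python
import numpy as np

# Sketch (not from the paper): for f1(x) = x^2 and f2(x) = (x-1)^2 the subdifferentials
# are singletons, so Phi(x) = conv{2(x-1), 2x} = [2(x-1), 2x].  We compare this with the
# union description Phi(x) = U_{lambda in Delta^2_>=} d_x h(x, lambda).

def phi_hull(x):
    lo, hi = sorted((2.0 * (x - 1.0), 2.0 * x))
    return lo, hi

def phi_union(x, num=1001):
    lam1 = np.linspace(0.0, 1.0, num)                       # lambda = (lam1, 1 - lam1)
    vals = 2.0 * lam1 * x + 2.0 * (1.0 - lam1) * (x - 1.0)  # d_x h(x, lambda) = 2*lam1*x + 2*lam2*(x - 1)
    return vals.min(), vals.max()

for x in np.linspace(-2.0, 3.0, 11):
    assert np.allclose(phi_hull(x), phi_union(x))

# Consequently Phi^{-1}(u) = {x : 2(x-1) <= u <= 2x} = [u/2, u/2 + 1]; in particular,
# 0 belongs to Phi(x) exactly for x in [0, 1], the efficient set of (f1, f2).
print("convex-hull and union descriptions of Phi agree on the sampled points")
```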

Theorem 9

Let \({\bar{x}} \in dom\,f\) be an L-stable locally efficient solution of (6) with \(\delta >0\). If \(f_i\), \(i=1,2, \ldots , p\), are strictly convex on \({\mathbb {B}}[{\bar{x}};\delta ]\), then the mapping \(\Phi \) is locally metrically regular at \(({\bar{x}},0_n) \in graph\,\Phi \).

Proof

Since \({\bar{x}} \in dom\,f\) is an L-stable locally efficient solution of (6), there exist scalars \(\delta >0\) and \(L>0\) and neighborhoods \({\mathcal {U}}\subseteq {\mathbb {B}}[{\bar{x}};\delta ]\) of \({\bar{x}}\) and \({\mathcal {V}}\) of \(0_{p\times n}\) such that \({\bar{x}}\) is a local strictly efficient solution of (6) with radius \(\delta \) and, for any \(C,D \in {\mathcal {V}}\),

$$\begin{aligned} M_{\delta ,f}(C)\cap {\mathcal {U}} \subseteq M_{\delta ,f}(D) + L\Vert C-D\Vert {\mathbb {B}}_n. \end{aligned}$$

The local strict efficiency of \({\bar{x}}\) for (6) and the strict convexity assumption lead to the existence of some \({\bar{\lambda }}\in \Delta ^p_\ge \) such that \({\bar{x}}\) is the unique locally optimal solution of \({\mathcal {P}}({\bar{\lambda }},0_n)\), due to [9, Proposition 3.10]. Thus,

$$\begin{aligned} 0_n\in \partial \Big (\sum _{i=1}^p {\bar{\lambda }}_i f_i\Big )({{\bar{x}}})=\partial _x h({\bar{x}},{\bar{\lambda }}) \subseteq \Phi ({\bar{x}}), \end{aligned}$$

due to [4, Proposition 4.12]. Consider \(\epsilon \in (0,\dfrac{\delta }{L})\) sufficiently small such that \(\epsilon {\mathbb {B}}_{p\times n}\subseteq {\mathcal {V}}\). Let \(u,v\in \dfrac{\epsilon }{\sqrt{p}}{\mathbb {B}}_n\) and \(x \in \Phi ^{-1}(u)\cap {\mathcal {U}}\) be arbitrary but fixed in what follows. Consider the matrices \(C,D\) whose rows are all equal to \(u^T\) and \(v^T\), respectively; then \(\Vert C\Vert \le \sqrt{p}\,\Vert u\Vert \le \epsilon \) and \(\Vert D\Vert \le \sqrt{p}\,\Vert v\Vert \le \epsilon \), so \(C,D\in \epsilon {\mathbb {B}}_{p\times n}\). Since \(x\in \Phi ^{-1}(u)\), we get \(u\in \partial _x h(x,\lambda )\) for some \(\lambda \in \Delta ^p_\ge \) and so, x is the unique locally optimal solution of \({\mathcal {P}}(\lambda ,u)\) with radius \(\delta \). Thus, \(x\in M_{\delta ,f}(C)\cap {\mathcal {U}}\). Now, since \(C,D\in {\mathcal {V}}\), there exists \(y\in M_{\delta ,f}(D)\) such that

$$\begin{aligned} \Vert x-y\Vert \le L\Vert C-D\Vert \le L\sqrt{p}\Vert u-v\Vert . \end{aligned}$$

On the other hand, \(y\in M_{\delta ,f}(D)\) implies, again due to [9, Proposition 3.10] and the strict convexity assumption, that y is the unique locally optimal solution of \({\mathcal {P}}(\mu ,v)\) with radius \(\delta \) for some \(\mu \in \Delta ^p_\ge \) and so, \(v\in \partial _x h(y,\mu )\). Therefore, \(y \in \Phi ^{-1}(v)\). Hence, since \(x\in \Phi ^{-1}(u)\cap {\mathcal {U}}\) was arbitrary, for any \(u,v\in \dfrac{\epsilon }{\sqrt{p}}{\mathbb {B}}_n\) we get

$$\begin{aligned} \Phi ^{-1}(u)\cap {\mathcal {U}} \subseteq \Phi ^{-1}(v) + L\sqrt{p}\Vert u-v\Vert {\mathbb {B}}_n. \end{aligned}$$

So, \(\Phi \) is locally metrically regular at \(({\bar{x}},0_n) \in graph\,\Phi \), due to [19, Theorem 1.49]. \(\square \)