Abstract
In this paper we give a systematization of global well-posedness in vector optimization. We investigate the links among global notions of well-posedness for a vector optimization problem (see e.g. Miglierina et al. in J Optim Theory Appl 126:391–409, 2005 for a detailed explanation of the difference between pointwise and global well-posedness in vector optimization). In particular we compare several notions of global well-posedness referring to efficient solutions, weakly efficient solutions and properly efficient solutions of a vector optimization problem. We also establish scalar characterizations of global vector well-posedness. Finally we study global well-posedness of vector cone-convex functions.
1 Introduction
In a very simplified setting, taking a decision involves a single objective that we wish to minimize or maximize. However, in real situations several conflicting objectives are involved in a decision and we have to choose the “best”, or at least a “good”, alternative among the available ones. This consideration is the foundation of multiobjective or vector optimization. From its first roots, laid by Pareto at the end of the 19th century, the discipline has prospered and grown, especially during the last decades. Today, many decision support systems incorporate methods to deal with conflicting objectives. The foundation for such systems is a mathematical theory of optimization under multiple objectives (see e.g. Ehrgott 2005).
When dealing with single objective and multiobjective optimization problems, the notion of well-posedness plays a crucial role. Generally speaking, the different notions of well-posedness of a given optimization problem can be divided into two classes. Notions in the first class are based on the behaviour of a prescribed class of sequences of approximate solutions. Notions in the second class are based on the continuous dependence of the (necessarily existing) solution on the problem data. For scalar optimization problems the first notion of the first type was introduced by Tykhonov (1966) and later took his name, while the second type of well-posedness notions was introduced by Hadamard (1902).
In this paper we focus on well-posedness notions in the first class. We recall that if A is a subset of a normed space X and \(f: X\rightarrow \mathbb {R}\), the scalar optimization problem
is well-posed in the sense of Tykhonov, when it has a unique minimizer \({\bar{x}} \in A\) and every sequence \(x^n \in A\) such that \(f(x^n) \rightarrow \inf _{x \in A}f(x)\) converges to \({\bar{x}}\). A sequence \(x^n\in A\) such that \(f(x^n) \rightarrow \inf _{x \in A}f(x)\) is usually called a minimizing sequence. The motivation for Tykhonov well-posedness is clearly practical. Usually, a numerical method solving a minimization problem iteratively produces some minimizing sequence \(x^n\), and one wants to be sure that the approximate solutions \(x^n\) are not far from the (unique) minimizer \({\bar{x}}\). In the context of single objective optimization, the relations between the two approaches and their applications to Mathematical Programming and Optimal Control have been widely studied (see e.g. Zolezzi, 2001). One can see e.g. (Dontchev & Zolezzi, 1993) for a detailed treatment of well-posedness in the single objective case.
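The practical picture above can be sketched in a few lines of code. The toy problem below is our own illustration, not taken from the paper: \(f(x)=x^2\) on \(A=[-1,1]\), minimized by plain gradient descent, produces a minimizing sequence and we check the two properties required by Tykhonov well-posedness.

```python
# Illustrative sketch (our own toy example): the scalar problem
# min f(x) over A = [-1, 1] with f(x) = x**2 has the unique minimizer
# x_bar = 0 and is Tykhonov well-posed.  An iterative method (here, plain
# gradient descent) produces a minimizing sequence x^n with
# f(x^n) -> inf f(A) = 0, and indeed x^n -> x_bar.

def f(x):
    return x * x

def minimizing_sequence(x0=1.0, step=0.25, n_iter=30):
    """Gradient-descent iterates for f(x) = x**2 starting from x0 in A."""
    xs = [x0]
    for _ in range(n_iter):
        xs.append(xs[-1] - step * 2 * xs[-1])  # x_{n+1} = x_n - step * f'(x_n)
    return xs

xs = minimizing_sequence()
assert f(xs[-1]) < 1e-12          # values approach inf f(A) = 0
assert abs(xs[-1] - 0.0) < 1e-6   # iterates approach the unique minimizer
```

Any other iterative scheme producing value-convergent iterates would serve equally well; the point is only that, for a Tykhonov well-posed problem, value convergence forces convergence of the iterates themselves.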
In the last decades, several extensions of the well-posedness notion to vector optimization appeared (see e.g. Bednarczuk, 1987, 1994a; Loridan, 1995; Dentcheva & Helbig, 1996; Huang, 2000, 2001; Miglierina et al., 2005 for well-posedness concepts in the vector case). In Anh et al. (2020) well-posedness of uncertain vector optimization problems is considered, while in Morgan (2005) vector well-posedness notions are applied to multicriteria games.
Generalizations of well-posedness to set optimization constitute an active field of research (see e.g. Crespi et al., 2014, 2018; Miholca, 2021).
In multiobjective optimization we generally have multiple solutions. For this reason, in Miglierina et al. (2005) a classification of vector well-posedness notions into two groups is given: pointwise and global notions. Definitions in the first group consider a fixed efficient point (or the image of an efficient point) and deal with well-posedness of the vector optimization problem at this point. For a survey and relationships among well-posedness notions in the first group one can see (Miglierina et al., 2005). This approach requires that the minimizing sequences related to the considered point be well-behaved.
Since in the vector case the solution set is typically not a singleton, there is also a class of definitions, the so-called global well-posedness notions, that involve the efficient frontier, the weakly efficient frontier or the properly efficient frontier as a whole. In this paper we survey the notions of global well-posedness and investigate the relationships among them. Then we characterize global well-posedness notions in terms of well-posedness of suitable scalarized optimization problems. Finally we give results concerning the well-posedness of cone-convex functions. The outline of the paper is the following. In Sect. 2 we give some preliminary concepts. In Sect. 3 we survey global well-posedness notions in vector optimization. In Sect. 4 we investigate some relationships among the various global well-posedness notions. In Sect. 5 we characterize global well-posedness notions in terms of well-posedness of suitable scalarized optimization problems. Section 6 is devoted to the well-posedness of cone-convex functions. Proofs of the results are given in the Appendix.
2 Preliminaries
2.1 Problem setting and notation
In this section we recall some basic notions on vector optimization. We assume that X, Y are normed spaces and Y is ordered by a closed, convex and pointed cone \(K\subseteq Y\), with nonempty interior. In this setting we have
Clearly, when \(Y=\mathbb {R}^m\) and \(K=\mathbb {R}^m_+\) we obtain the classical notion of Pareto order.
We denote by B the closed unit ball, both in X and in Y (from the context it will be clear to which space we refer).
Let \(f: X \rightarrow Y \) and let \(A \subseteq X\) be a closed subset of X. We consider the vector optimization problem (A, f) given by
A point \(\bar{x} \in A\) is an efficient solution of problem (A, f) when
We denote by Eff (A, f) the set of all efficient solutions of problem (A, f) and by Min (A, f) the image set of Eff (A, f) through the objective function f. The image of an efficient solution is called a minimal point.
We recall also that a point \(\bar{x} \in A\) is said to be a weakly efficient solution of problem (A, f) when
We denote respectively by WEff(A, f) the set of weakly efficient solutions and by WMin (A, f) the image set of WEff (A, f) through the objective function f, i.e. the set of weakly minimal points.
Clearly, when \(Y=\mathbb {R}^m\) and \(K= \mathbb {R}^m_+\) from the previous definitions we obtain the well-known concepts of Pareto efficiency. For our purposes it is worth mentioning the notion of proper efficiency. This notion dates back to Geoffrion (1968) in the case \(Y=\mathbb {R}^m\) and \(K=\mathbb {R}^m_+\). The interested reader can refer e.g. to Guerraggio et al. (1994) for an overview of the several notions of proper efficient solution and relationships among them.
According to the definition, a Pareto efficient solution of Problem (A, f) does not allow the improvement of one objective while retaining the same values of the others: an improvement in some criterion can only be obtained at the expense of a deterioration in at least one other criterion. The trade-offs among objectives can be measured by computing the increase in one objective per unit decrease in another objective. In some situations such trade-offs can be unbounded. At a properly efficient point an unbounded trade-off cannot happen.
In this paper we refer to the following general notion of proper efficiency. It can be proven that Geoffrion’s notion can be obtained as a particular case. A point \(\bar{x} \in A\) is a properly efficient solution of problem (A, f) when there exists a closed convex pointed cone \(K'\) with \(\textrm{int}\, K\subseteq K'\) such that \({\bar{x}}\) is an efficient solution of problem (A, f) with respect to the ordering cone \(K'\). We denote by \(\textrm{PEff}(A, f)\) the set of properly efficient solutions for problem (A, f) and by \(\textrm{PMin}(A, f)\) the image of \(\textrm{PEff}(A, f)\) through the function f. For details about the various notions of solution to a vector optimization problem one can see e.g. (Luc, 1989a).
The positive polar cone of K is denoted by
where \(Y^*\) denotes the topological dual of Y. We recall that if K is a closed convex pointed cone with nonempty interior, then \(K^+\) also has nonempty interior. When dealing with vector optimization problems, scalarization of the problem is a key issue. This is a tool that allows one to reduce a multiobjective optimization problem to a single objective (or scalar) one. The easiest way to scalarize a multiobjective optimization problem is the well-known linear scalarization. This means associating to problem (A, f) the family of problems
with \(\lambda \in K^+\).
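As a minimal numerical sketch of linear scalarization (the bi-objective function and the discretized feasible set are our own toy choices, not from the paper), sweeping the weight \(\lambda \in K^+\) over a simple bi-objective problem recovers (weakly) efficient solutions:

```python
# Hedged sketch: linear scalarization of a toy bi-objective problem
# min (f1, f2) over a discretized feasible set A, with K = R^2_+ so
# K^+ = R^2_+.  Each weight lambda in K^+ \ {0} yields a scalar problem
# whose minimizers are (weakly) efficient for the vector problem.

import numpy as np

def f(x):
    return np.array([x**2, (x - 1)**2])   # two conflicting objectives

A = np.linspace(-1.0, 2.0, 301)           # discretized feasible set

def linear_scalarization(lam):
    """Minimizer of <lam, f(x)> over the grid A."""
    values = [lam @ f(x) for x in A]
    return A[int(np.argmin(values))]

# sweeping lambda traces out efficient solutions, here lying in [0, 1]
solutions = [linear_scalarization(np.array([1 - t, t])) for t in (0.25, 0.5, 0.75)]
assert all(0.0 <= s <= 1.0 for s in solutions)
```

Each fixed weight yields one scalar problem; varying the weight traces out part of the efficient frontier, which is exactly why the technique needs convexity to reach every efficient point.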
This technique, however, relies heavily on convexity assumptions on the objective function (see e.g. Ehrgott, 2005; Luc, 1989b). When convexity of the objective function is dropped, we need to use nonlinear scalarization tools. We recall the notion of “oriented distance” from a point to a set, introduced by Hiriart-Urruty (1979), which is useful to introduce nonlinear scalarization techniques.
Definition 2.1
For a set \(M \subseteq Y\), the oriented distance function \(\Delta _{M}: Y \rightarrow \mathbb {R}\cup \{\pm \infty \}\) is defined as
where \(d(y, M)= \inf _{z \in M}\Vert y-z\Vert \) is the distance of the point y from the set M.
Function \(\Delta _{M}\) has been introduced in the framework of nonsmooth scalar optimization. Later it has been used to obtain a scalarization of a vector optimization problem, considering \(M=-K.\) The main properties of function \(\Delta _{M}\) are gathered in the following proposition (see e.g. Ginchev et al., 2006; Zaffaroni, 2003).
Proposition 2.1
-
(i)
If \(M \ne \emptyset \) and \(M \ne Y\) then \(\Delta _{M}\) is real valued;
-
(ii)
\(\Delta _{M}(y) < 0, \; \forall y \in \textrm{int}M, \; \Delta _{M}(y) = 0, \; \forall y \in \partial M\) and \(\Delta _{M}(y) > 0, \; \forall y \in \textrm{int}(M^{c});\)
-
(iii)
if M is closed, then \(M = \{y\in Y: \Delta _{M}(y) \le 0\};\)
-
(iv)
if M is convex, then \(\Delta _{M}\) is convex;
-
(v)
if M is a cone, then \(\Delta _{M}\) is positively homogeneous;
-
(vi)
if M is a closed, convex cone, then \(\Delta _{M}\) is nonincreasing with respect to the ordering induced on Y, that is if \(y_{1}, y_{2} \in Y\) then
$$\begin{aligned} y_{1} - y_{2} \in M \Rightarrow \Delta _{M}(y_{1}) \le \Delta _{M} (y_{2}). \end{aligned}$$ -
(vii)
If \(M=-K\) then it holds
$$\begin{aligned} \Delta _{-K}(y)=\max _{v\in K^+\cap S}\langle v, y\rangle = \max _{v\in K^+\cap B}\langle v, y\rangle \end{aligned}$$(3)
where S denotes the unit sphere in \(Y^*\).
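Formula (3) can be verified numerically in the concrete case \(Y=\mathbb {R}^2\) with the Euclidean norm and \(K=\mathbb {R}^2_+\) (this finite dimensional instance is our own illustration, not part of the paper): the oriented distance computed directly from Definition 2.1 agrees with the support-function expression, sampled over \(K^+\cap S\).

```python
# Hedged numerical check (toy case: Y = R^2, Euclidean norm, K = R^2_+).

import numpy as np

def delta_direct(y):
    """Delta_{-K}(y) = d(y, -K) - d(y, (-K)^c), computed from Definition 2.1."""
    y = np.asarray(y, dtype=float)
    pos = np.maximum(y, 0.0)
    if pos.any():                          # y outside -K: Delta = d(y, -K) = ||y_+||
        return float(np.linalg.norm(pos))
    return float(y.max())                  # y in -K: Delta = -d(y, (-K)^c) = max_i y_i

def delta_support(y, n_samples=20001):
    """Formula (3), sampling K^+ cap S = {(cos t, sin t): t in [0, pi/2]}."""
    t = np.linspace(0.0, np.pi / 2, n_samples)
    v = np.stack([np.cos(t), np.sin(t)])   # columns are unit vectors in K^+
    return float((np.asarray(y, dtype=float) @ v).max())

rng = np.random.default_rng(0)
for y in rng.uniform(-2.0, 2.0, size=(100, 2)):
    assert abs(delta_direct(y) - delta_support(y)) < 1e-4
```

The direct evaluation exploits that, for the nonnegative orthant, the projection onto \(-K\) simply clips the positive components; the support-function side is a plain maximization of \(\langle v, y\rangle\) over sampled unit vectors of \(K^+\).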
We summarize in the next result some characterizations of solutions of the vector problem (A, f) in terms of solutions of a suitable scalar minimization problem obtained through function \(\Delta _{-K}\) (see e.g. (Zaffaroni, 2003; Ginchev et al., 2006)).
Theorem 2.1
Let \(g(x)=\Delta _{-K}(f(x)-f({\bar{x}}))\). Then:
-
(i)
\({\bar{x}} \in \textrm{Eff}(A, f)\) if and only if \(g(x)>g({\bar{x}})=0\) for every \(x \in A\backslash f^{-1}(f({\bar{x}}))\).
-
(ii)
\({\bar{x}} \in \textrm{WEff}(A, f)\) if and only if \(g(x)\ge g({\bar{x}})= 0\) for every \(x \in A\).
Finally, we recall the notion of K-convex function.
Definition 2.2
Let A be a convex set. A function \(f: A\rightarrow Y\) is said to be K-convex when for every \(x, y \in A\) and \(t \in [0,1]\) it holds
3 Global well-posedness notions
We give here a summary of the various notions of global well-posedness. In order to show some links among them, we divide these notions in three groups. In the first one we consider those definitions that refer to efficient solutions, in the second group we consider those definitions which refer to weakly efficient solutions, while the third group contains a definition of well-posedness that refers to properly efficient solutions.
The concept of minimizing sequence, which is fundamental in defining well-posedness for scalar optimization problems (see e.g. Dontchev & Zolezzi, 1993), can be generalized in several ways to the vector case.
The following notions of minimizing sequence are needed in order to introduce the definitions of well-posedness in the first group. They extend to the vector case the concept of minimizing sequence in the sense of Tykhonov.
Definition 3.1
(Bednarczuk, 1994b; Miglierina & Molho, 2003; Miglierina et al., 2005)
-
(i)
A sequence \(x^n\in A\) is said to be B-minimizing when \( \forall n \in \mathbb {N}, \exists k^{n} \in K\), \(k^{n} \rightarrow 0\) and \(y^{n} \in \textrm{Min}(A, f)\) such that \(f(x^{n}) \le _{K} y^{n} + k^n\);
-
(ii)
A sequence \(x^n\in A\) is said to be MM-minimizing when \(d(f(x^{n}),\textrm{Min}(A,f)) \rightarrow 0\) as \(n \rightarrow +\infty \).
Now we introduce the well-posedness notions in the first group. These notions extend to the vector case the classical notion of Tykhonov well-posedness for scalar optimization problems. The solution of a vector optimization problem generally is not unique and these notions refer to the set of efficient points. Basically, according to the following notions, a vector optimization problem is well-posed when every minimizing sequence “approaches” in some way the set \({\mathrm{Eff\,}}(A, f)\).
Definition 3.2
(Bednarczuk, 1994b) Problem (A, f) is called B-well-posed (B-wp for short) if:
-
(i)
Min\((A,f) \ne \emptyset ,\)
-
(ii)
for every B-minimizing sequence \(x^n\in A\backslash \textrm{Eff}(A, f)\) there exists a subsequence converging to some element of Eff(A, f).
Definition 3.3
(Bednarczuk, 1994b) Problem (A, f) is called weakly B-well-posed (weakly B-wp for short) if:
-
(i)
Min\((A,f) \ne \emptyset ,\)
-
(ii)
for every B-minimizing sequence \(x^n\in A\) it holds \(d(x^{n},\textrm{Eff}(A,f)) \rightarrow 0\) as \(n \rightarrow +\infty .\)
Definition 3.4
(Miglierina & Molho, 2003) Problem (A, f) is called MM-well-posed (MM-wp for short) if
-
(i)
Min\((A,f) \ne \emptyset ,\)
-
(ii)
for every MM-minimizing sequence \(x^n\in A\) it holds \(d(x^{n},\textrm{Eff}(A,f)) \rightarrow 0\) as \(n \rightarrow +\infty .\)
The second group of well-posedness definitions gathers those notions that refer to weakly efficient points. We need the following notions of minimizing sequence.
Definition 3.5
(Huang, 2000; Crespi et al., 2007)
-
(i)
A sequence \(x^n\in A\) is said to be HCGR-minimizing when \( \exists k^0 \in \textrm{int}K, \alpha _{n} \ge 0, \alpha _{n} \rightarrow 0 \) such that \(f(x) - f(x^{n}) + \alpha _{n}k^0\not \in -\textrm{int}K, \forall x \in A\), \(\forall n \in \mathbb {N}\);
-
(ii)
A sequence \(x^n\in A\) is said to be H-minimizing when \(\exists k^0 \in \textrm{int}K, \; \alpha _{n} \ge 0, \; \alpha _{n} \rightarrow 0,\) and \(y^{n} \in \textrm{WMin}(A,f)\) such that \(f(x^{n}) \le _{K} y^{n} + \alpha _{n}k^0\), \(\forall n \in \mathbb {N}\);
-
(iii)
A sequence \(x^n\in A\) is said to be Hw-minimizing when \(d(f(x^{n}),\textrm{WMin}(A,f)) \rightarrow 0\) as \(n \rightarrow +\infty .\)
Remark 3.1
It can be shown (see e.g. Crespi et al., 2007) that Definitions 3.5 (i) and (ii) do not depend on the choice of \(k^0\in \textrm{int}\,K\), i.e. if \(\{x^n\}\) is HCGR-minimizing or H-minimizing with respect to some \(k^0\in \textrm{int}\,K\), then it is HCGR-minimizing or H-minimizing with respect to every \(k^0\in \textrm{int}\,K\).
Remark 3.2
In point (i) of Definition 3.5 we can substitute \(\textrm{int}\, K\) with K.
Well-posedness notions in the second group extend Tykhonov definition for scalar optimization problems basically requiring that every minimizing sequence “approaches” in some way the set \({\mathrm{WEff\,}}(A, f)\).
Definition 3.6
(Crespi et al., 2007) Problem (A, f) is called CGR-well-posed if:
-
(i)
WEff\((A,f) \ne \emptyset ,\)
-
(ii)
for every HCGR-minimizing sequence it holds \(d(x^{n},\textrm{WEff}(A,f)) \rightarrow 0\) as \(n \rightarrow +\infty .\)
Definition 3.7
(Huang, 2000) Problem (A, f) is called H-well-posed if:
-
(i)
WEff\((A,f) \ne \emptyset ,\)
-
(ii)
for every H-minimizing sequence there exists a subsequence converging to some element of \(\textrm{WEff}(A, f)\).
Definition 3.8
(Huang, 2000) Problem (A, f) is called Hs-well-posed if:
-
(i)
WEff\((A,f) \ne \emptyset ,\)
-
(ii)
for every HCGR-minimizing sequence there exists a subsequence converging to some element of \(\textrm{WMin}(A, f)\).
Definition 3.9
(Huang, 2000) Problem (A, f) is called Hw-well-posed if:
-
(i)
WEff\((A,f) \ne \emptyset ,\)
-
(ii)
for every Hw-minimizing sequence it holds \(d(x^{n},\textrm{WEff}(A,f)) \rightarrow 0\) as \(n \rightarrow +\infty .\)
Finally we recall a global well-posedness notion with respect to properly efficient solutions (Lalitha & Chatterjee, 2013).
Definition 3.10
(Lalitha & Chatterjee, 2013) A sequence \(x^n\in A\) is said to be an LC-minimizing sequence if, for every n, there exist \(y^n \in \textrm{PMin}(A, f )\) and \(k^n \in K\) with \(k^n\rightarrow 0\) such that \(f(x^n) \le _K y^n + k^n\).
Definition 3.11
(Lalitha & Chatterjee, 2013) Problem (A, f) is called LC-well-posed if:
-
(i)
PEff\((A,f) \ne \emptyset ,\)
-
(ii)
for every LC-minimizing sequence it holds \(d(x^{n},\textrm{PEff}(A,f)) \rightarrow 0\) as \(n \rightarrow +\infty .\)
In the next section we will investigate relationships among the various global well-posedness notions and we will provide several examples. We close this section with two examples that illustrate Definition 3.11.
Example 3.1
(Lalitha & Chatterjee, 2013) Consider problem (A, f) with \(f: \mathbb {R}^2 \rightarrow \mathbb {R}^2\) being the identity mapping and
It can be seen that for \(K= \mathbb {R}^2_+\) we have
and problem (A, f) is LC-well-posed.
Example 3.2
(Lalitha & Chatterjee, 2013) Consider problem (A, f) with \(f: \mathbb {R}\rightarrow \mathbb {R}^2\) defined as
\(A=\mathbb {R}\) and \(K= \mathbb {R}^2_+\). It holds
and \(\textrm{PEff}(A, f)= \{x: x\ge 0\}\). For the sequence \(\{x^n\}\) with \(x^n=-n\) we have \(f(x^n)= (-n, \sqrt{1+n^2})\). Hence, we observe that \(y^n-f(x^n)+k^n \in K\) where \(y^n =(-n, n)\in \textrm{PMin}(A, f)\) and \(k^n=(\frac{1}{n}, \sqrt{1+n^2}-n)\in K\), that is \(\{x^n\}\) is an LC-minimizing sequence. Clearly problem (A, f) is not LC-well-posed, since \(d(x^n, \textrm{PEff}(A, f)) \not \rightarrow 0\).
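The computations in Example 3.2 can be checked mechanically; the sketch below (our own sanity check) verifies, for a range of n, that \(y^n + k^n - f(x^n)\in K\) and that \(k^n\rightarrow 0\).

```python
# Sanity check of the relations reported in Example 3.2:
# f(x^n) = (-n, sqrt(1 + n^2)), y^n = (-n, n), k^n = (1/n, sqrt(1 + n^2) - n).

import math

for n in range(1, 50):
    s = math.sqrt(1 + n**2)
    fx = (-n, s)
    y = (-n, n)                        # y^n in PMin(A, f)
    k = (1.0 / n, s - n)               # k^n in K = R^2_+
    diff = (y[0] + k[0] - fx[0], y[1] + k[1] - fx[1])
    assert diff[0] >= 0 and diff[1] >= 0   # y^n + k^n - f(x^n) in K
    assert 0 <= k[1] <= 1.0 / n            # the perturbation k^n vanishes
# yet d(x^n, PEff(A, f)) = n does not tend to 0
```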
4 Relations among well-posedness notions
In this section we establish relations among the various global well-posedness notions. We need the following assumptions.
-
A1.
For every \(\varepsilon >0\) there exists \(\delta >0\) such that
$$\begin{aligned} \left( f(A)- \textrm{Min}(A, f)\right) \cap \left( \delta B-K \right) \subseteq \varepsilon B \end{aligned}$$(see e.g. (Miglierina et al., 2005)).
-
A2.
\(\textrm{Eff}(A, f)\) compact.
-
A3.
\(\textrm{WEff}(A, f)\) compact.
-
A4.
\(\textrm{Eff}(A, f)=\textrm{WEff}(A, f)\)
-
A5.
\(\textrm{Eff}(A, f)= \textrm{PEff}(A, f)\).
In the following we will use the notation \(\Longrightarrow _{{Ai}}\) when the implication holds under assumption Ai.
4.1 Relations among well-posedness notions in the first group
The following relations are known (see e.g. (Huang, 2000; Miglierina et al., 2005)).
Theorem 4.1
The next example shows that assumption A2 cannot be dropped if weakly B-well-posedness is to imply B-well-posedness.
Example 4.1
(Miglierina et al., 2005) Let \(X=Y=A=\mathbb {R}^2\) and \(K=\mathbb {R}^2_+\), let \(f: \mathbb {R}^2 \rightarrow \mathbb {R}^2\) be defined as \(f(x_1, x_2)=(x_1^2, x_1^2)\). We have \(\textrm{Eff}(A,f)=\{(0, x_2): x_2 \in \mathbb {R}\}\) and it is easily seen that problem (A, f) is weakly B-well-posed but not B-well-posed.
The next example shows that assumption A1 is crucial for MM-well-posedness to imply weakly B-well-posedness.
Example 4.2
(Miglierina et al., 2005) Let \(f:\mathbb {R}^2 \rightarrow \mathbb {R}^2\) be the identity function, let \(K=\mathbb {R}^2_+\) and
It is easily seen that assumption A1 does not hold and problem (A, f) is MM-well-posed but not weakly B-well-posed.
4.2 Relations among well-posedness notions in the second group
From the definitions we obtain the following chain of implications.
Theorem 4.2
The next example shows that assumption A3 cannot be dropped if CGR-well-posedness is to imply Hs-well-posedness.
Example 4.3
Consider the identity function \(f: A \subseteq \mathbb {R}^{2} \rightarrow \mathbb {R}^{2}\) where \(A= \{(x,y) \in \mathbb {R}^{2}: y \ge 0 \wedge y \ge -x \}.\) Problem (A, f) is CGR-well-posed, but not Hs-well-posed as, for example, the HCGR-minimizing sequence \(x^{n} = \left( -n, n+\frac{1}{n} \right) \) does not admit any subsequence converging to a weakly efficient point.
The next examples show that the implications among Hs-well-posedness, H-well-posedness and Hw-well-posedness cannot be reversed, even under assumption A3.
Example 4.4
Let \(A=\{ (x_1, x_2) \in \mathbb {R}^2: x_2 \ge -x_1e^{x_1} \}\) and let \(f: \mathbb {R}^2 \rightarrow \mathbb {R}^2\) be the identity function. We have \(\textrm{WEff}(A, f)=\textrm{Eff}(A, f)= \{ (x_1, x_2) \in \mathbb {R}^2: x_1 \ge 0, x_2= -x_1e^{x_1} \}\) and problem (A, f) is Hw-well-posed but not H-well-posed.
Example 4.5
Let \(A=\{(x_1, x_2) \in \mathbb {R}^2: (x_2 \ge -(x_1-1)e^{x_1}, x_1 <0)\vee \ (x_1=0, x_2 \in [-2, -1])\}\) and let \(f: \mathbb {R}^2 \rightarrow \mathbb {R}^2\) be the identity function. We have \(\textrm{WEff}(A,f)= \{(x_1, x_2): x_1 =0, \ x_2 \in [-2, -1]\}\) and (A, f) is H-well-posed. The sequence \((-n, (n+1)e^{-n})\) is HCGR-minimizing but not H-minimizing and does not converge to an element of \(\textrm{WEff}(A, f)\). Hence (A, f) is not Hs-well-posed.
4.3 Relations among well-posedness notions in different groups
Now we give relationships among well-posedness concepts referring to different solution concepts. We need the following result.
Theorem 4.3
A sequence \(x^n\in A\) is B-minimizing if and only if for every \(e \in \textrm{int}K\) there exist \(\alpha _n\rightarrow 0^+\) and \(y^n \in \textrm{Min}(A, f)\) such that
Theorem 4.4
It holds:
The next example shows that if \(\textrm{WEff}(A, f) \not = \textrm{Eff}(A, f)\), then H-well-posedness does not imply B-well-posedness.
Example 4.6
Let \(A=\{(x_1, x_2) \in \mathbb {R}^2: x_2 \ge g(x_1), x_1 \in [-2,1] \}\), where
and let \(f: \mathbb {R}^2 \rightarrow \mathbb {R}^2\) be the identity function, \(K=\mathbb {R}^2_+\). Then \(\textrm{Eff}(A, f)= \{(x_1, x_2): x_2= -x_1^2-x_1, x_1 \in (0, 1]\}\), \(\textrm{WEff}(A, f)= \textrm{Eff}(A, f)\cup \left( [-2, -1]\times \{0\}\right) \). It is easily seen that problem (A, f) is H-well-posed. The sequence \(\left( -1+\frac{1}{n}, -(-1+\frac{1}{n})^2+1-\frac{1}{n}\right) \) is B-minimizing and converges to \((-1, 0)\), which belongs to \(\textrm{WEff}(A, f)\) but not to \(\textrm{Eff}(A, f)\). Hence problem (A, f) is not B-well-posed.
If the compactness assumption does not hold in the previous theorem, B-well-posedness does not imply H-well-posedness as the following example shows.
Example 4.7
Let \(A= [-1, +\infty )\), \(f: A \rightarrow \mathbb {R}\), \(f(x)= {\left\{ \begin{array}{ll} -x^2+1, \ x\in [-1,1]\ \\ 0, \ x \ge 1\ \\ \end{array}\right. } \), \(K=\mathbb {R}_+\). \(\textrm{WEff}(A, f)=\textrm{Eff}(A, f)=\{-1\}\cup [1, +\infty )\) is not compact and we have that problem (A, f) is B-well-posed but not H-well-posed.
Theorems 4.1 and 4.4 entail that under assumptions A3 and A4 it holds:
The proof of the next result is straightforward from the definitions.
Theorem 4.5
If assumption A5 holds, then LC-wp is equivalent to weakly B-wp.
The relationships among the various global well-posedness notions are summarized below.
5 Global well-posedness and scalarization
When dealing with vector optimization, scalarization is a useful technique to reduce the problem under consideration to a single objective problem. In (Miglierina et al., 2005) the authors introduce a new approach to the study of well-posedness for vector optimization problems. They show that the various pointwise well-posedness notions for a vector optimization problem can be characterized as well-posedness properties of a suitable scalarized problem. Hence they show the existence of a parallelism between scalar and vector well-posedness. Among the various scalarization procedures known in the literature, the authors of (Miglierina et al., 2005) choose the one based on the oriented distance function. In this section, we apply a similar approach to characterize some of the global well-posedness notions in terms of well-posedness of a suitable scalarized problem.
Consider the scalar minimization problem (A, g)
where \(g: A \subseteq X \rightarrow \mathbb {R}\) and denote by \(\textrm{argmin}(A,g)\) the solution set of problem (A, g), which we assume to be nonempty. A sequence \( x^n \in A\) is minimizing for problem (A, g) when \(g(x^{n}) \rightarrow \inf g(A).\) Now, we recall some well-posedness notions for scalar optimization problems (see e.g. Dontchev & Zolezzi, 1993).
Definition 5.1
Problem (A, g) is well-posed in the generalized sense when \(\textrm{argmin}(A, g) \not = \emptyset \) and every minimizing sequence contains a subsequence converging to an element of \(\textrm{argmin}(A, g)\).
When \(\textrm{argmin} (A, g)\) is a singleton, the previous definition reduces to the classical notion of Tykhonov well-posedness.
We recall also a generalization of the above mentioned notion (see e.g. Bednarczuk & Penot, 1994).
Definition 5.2
Problem (A, g) is metrically well-set (m-well-set) if:
-
(i)
argmin\((A,g) \ne \emptyset ,\)
-
(ii)
for every minimizing sequence \(x^n \in A, \) \(d(x^{n},\textrm{argmin}(A,g)) \rightarrow 0.\)
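The gap between Definitions 5.1 and 5.2 can be seen on a toy problem of our own (not from the paper): \(g(x_1,x_2)=x_1^2\) on \(\mathbb {R}^2\) is metrically well-set, since \(d(x^n, \textrm{argmin}(A,g))=|x_1^n|\rightarrow 0\) along every minimizing sequence, yet it is not well-posed in the generalized sense, because a minimizing sequence can drift to infinity along the unbounded argmin set.

```python
# Toy scalar problem (our own illustration): g(x1, x2) = x1**2 on R^2,
# argmin(A, g) = {(0, t): t in R}, inf g = 0.

def g(x1, x2):
    return x1**2

def dist_to_argmin(x1, x2):
    # distance of (x1, x2) from the vertical line {(0, t): t in R}
    return abs(x1)

for n in range(1, 100):
    x = (1.0 / n, float(n))               # a minimizing sequence escaping to infinity
    assert g(*x) == (1.0 / n)**2          # g(x^n) -> inf g = 0
    assert dist_to_argmin(*x) == 1.0 / n  # d(x^n, argmin) -> 0: metrically well-set
# but the second coordinates diverge, so no subsequence of x^n converges:
# the problem is not well-posed in the generalized sense
```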
We consider the following scalar problem associated with the vector one (A, f):
where
In the sequel we denote this problem by (A, h). The next theorem characterizes the set of efficient points of the vector problem (A, f) in terms of solutions of the scalar problem (A, h).
Theorem 5.1
Assume \(\textrm{Eff}(A, f)\) is compact and f is continuous. It holds \(h(x) \ge 0, \ \forall x \in A\), and \(h(x)=0\) if and only if \(x \in \textrm{Eff}(A, f)\). Hence \(\textrm{Eff}(A, f)\) coincides with the set of solutions of problem (A, h).
The next results provide a characterization of B-well-posedness of vector problem (A, f) in terms of well-posedness of the scalar problem (A, h).
Theorem 5.2
A sequence \(x^n\in A\) is B-minimizing if and only if it is minimizing for problem (A, h).
Theorem 5.3
Let \(f:X \rightarrow Y\) be a continuous function and \(\textrm{Eff}(A,f)\) be a compact set. Then problem (A, f) is B-well-posed if and only if the scalar problem (A, h) is well-posed in the generalized sense.
We consider now the following scalar problem associated with the vector one (A, f):
where
In the sequel we denote this problem by \((A,h_1).\) Function \(h_1\) is always nonnegative; in fact, it suffices to observe that for \(z = x\) we have
We observe that function \(h_1\) can be rewritten as
and, recalling Proposition 2.1, we get
We show that the weak solutions of the vector problem (A, f) can be completely characterized by solutions of the scalar problem \((A,h_1)\).
Theorem 5.4
Let \(\bar{x} \in A.\) Then \(\bar{x} \in \textrm{WEff}(A,f)\) if and only if \(h_1(\bar{x}) = 0\) (and hence \(\bar{x} \in \textrm{argmin}\,(A,h_1)\)).
Theorem 5.5
A sequence is HCGR-minimizing if and only if it is minimizing for problem \((A, h_1)\).
The next result provides the equivalence between well-posedness of the vector problem (A, f) and of the related scalar problem \((A,h_1).\) The proof is straightforward combining Theorems 5.4 and 5.5.
Theorem 5.6
(A, f) is CGR-well-posed if and only if \((A,h_1)\) is m-well-set.
We close this section considering the remarkable case of linear scalarization.
Consider functions \(g_{\lambda }(x)= \langle \lambda ,f(x)\rangle \) with \(\lambda \in K^{+} \setminus \{0\}\) and the family of parametric scalar problems \((X,g_{\lambda })\) given by
where \(\lambda \in K^{+} \cap \partial B\).
Theorem 5.7
(Crespi et al., 2011) Let \(f:X \rightarrow Y \) be \( K-\)convex on the convex set A and assume \({\mathrm{WMin\,}}(A, f)\) is closed. If problems \((X,g_{\lambda })\) are metrically well-set for every \(\lambda \in K^{+} \cap \partial B\), then problem (A, f) is Hw-well-posed.
In general, the converse of the theorem is not true, as the following example shows.
Example 5.1
Let \(f:A\subseteq \mathbb {R}^{2} \rightarrow \mathbb {R}^{2}\) be defined as \(f(x_{1},x_{2}) = \left( \frac{x_{1}^{2}}{x_{2}},x_{1} \right) , \ K= \mathbb {R}^{2}_{+}\) and \(A= \{ (x_{1},x_{2}) \in \mathbb {R}^{2}: x_{1} \ge 0, \ x_{2} \ge 1 \}.\) The objective function is \(K\)-convex, \({\mathrm{WMin\,}}(A, f) = \{(0,0)\}\), \({\mathrm{WEff\,}}(A, f) = \{(0,x_{2}): x_{2} \ge 1 \}\). Problem (A, f) is Hw-well-posed but the scalar problem \((X,g_{\lambda })\) with \(\lambda = (1,0)\) is not metrically well-set.
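A quick numerical check of the last claim (our own sanity check of Example 5.1):

```python
# Scalarized objective of Example 5.1 with lambda = (1, 0):
# g(x1, x2) = x1**2 / x2 on A = {x1 >= 0, x2 >= 1}, inf g = 0,
# argmin = {(0, x2): x2 >= 1}.

import math

def g(x1, x2):
    return x1**2 / x2

def dist_to_argmin(x1, x2):
    # distance of (x1, x2) from {(0, t): t >= 1}
    return math.hypot(x1, max(1.0 - x2, 0.0))

for n in range(1, 100):
    x = (1.0, float(n))
    assert g(*x) == 1.0 / n           # (1, n) is a minimizing sequence
    assert dist_to_argmin(*x) == 1.0  # but it never approaches the argmin set
# hence the scalar problem is not metrically well-set,
# although the vector problem (A, f) is Hw-well-posed
```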
6 Well-posedness of cone-convex functions
A classical result in scalar optimization states that optimization problems involving convex functions defined on a finite dimensional space with a unique minimum point are Tykhonov well-posed. It is also well-known that this result cannot be extended to the case of functions defined on an infinite dimensional space (see e.g. Dontchev & Zolezzi, 1993). In this section we consider extensions of this result to vector optimization problems involving K-convex functions. We assume X and Y are finite dimensional spaces. The next result concerns B-well-posedness of K-convex functions.
Theorem 6.1
(Miglierina et al., 2005) Let f be K-convex and continuous on the convex set A. For every \({\bar{y}}\in \textrm{Min}( A, f)\), assume that \(f^{-1}( {\bar{y}})\) is compact and suppose that \(\textrm{Min} (A, f)\) is compact. Then, problem (A, f) is B-well-posed.
Remark 6.1
Since B-wp is the strongest notion among the well-posedness notions referring to the set of efficient points (see Sect. 4), the assumptions of the previous theorem ensure that a K-convex function is also weakly B-wp and MM-wp.
Now we give a result concerning CGR-well-posedness of K-convex functions.
Definition 6.1
A set \(A \subseteq X\) is said to be connected when there are no open subsets U and V of X such that:
The next result concerns CGR-well-posedness.
Theorem 6.2
Let \(f: X \rightarrow Y\) be a continuous K-convex function and assume \(\textrm{WEff}(A, f)\) is bounded. Then problem (A, f) is CGR-well-posed.
Since (under assumption A3) CGR-wp is the strongest notion among the well-posedness notions referring to the set of weakly efficient points (see Sect. 4), the assumptions of the previous Theorem ensure that a K-convex function is also Hs-wp, H-wp and Hw-wp.
Finally we recall the following result about LC-well-posedness of cone-convex functions.
Theorem 6.3
(Lalitha & Chatterjee, 2013) If in problem (A, f), A is a closed, convex subset of X, \(f: A \rightarrow Y\) is a continuous and K-convex map on A and \(\textrm{PEff}(A, f)\) is compact, then problem (A, f) is LC-well-posed.
In the previous results the boundedness assumption on \({\mathrm{WEff\,}}(A, f)\) and the compactness assumptions on \({\mathrm{Eff\,}}(A, f)\) and \(\textrm{PEff}(A, f)\) cannot be avoided, as the following example shows.
Example 6.1
Let \(A= \mathbb {R}\times [1, +\infty ) \subseteq \mathbb {R}^2\) and \(f: A \rightarrow \mathbb {R}^2\) be defined as \(f(x, y)=( \frac{x^2}{y}, \frac{x^2}{y})\). We have \({\mathrm{Eff\,}}(A, f)={\mathrm{WEff\,}}(A, f)=\textrm{PEff}(A, f)= \{(0, y): y \in [1, +\infty )\}\) and \(\textrm{Min} (A, f)= \textrm{WMin} (A, f)= \textrm{PMin}(A, f)=\{(0,0)\}\). The sequence \((n, n^3)\) is B-minimizing, HCGR-minimizing and LC-minimizing, but does not converge to an element of \({\mathrm{Eff\,}}(A, f)={\mathrm{WEff\,}}(A, f)=\textrm{PEff}(A, f)\).
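The claims of Example 6.1 are easy to verify numerically (our own sanity check): along \(x^n=(n, n^3)\) the values \(f(x^n)=(1/n, 1/n)\) converge to the unique minimal point (0, 0), while the sequence itself stays at distance n from the solution set.

```python
# Sanity check of Example 6.1: f(x, y) = (x^2/y, x^2/y) on A = R x [1, inf).

import math

def f(x, y):
    return (x**2 / y, x**2 / y)

for n in range(1, 200):
    nf = float(n)
    v = f(nf, nf**3)
    assert v[0] <= 1.0 / n and v[1] <= 1.0 / n   # f(x^n) = (1/n, 1/n) -> (0, 0)
    # but x^n stays at distance n from Eff(A, f) = {(0, y): y >= 1}
    assert math.hypot(nf, max(1.0 - nf**3, 0.0)) == nf
```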
References
Anh, L. Q., Duy, T. Q., & Hien, D. V. (2020). Well-posedness for the optimistic counterpart of uncertain vector optimization problems. Annals of Operations Research, 295, 517–533.
Bednarczuk, E. (1987). Well posedness of vector optimization problems. In J. Jahn & W. Krabs (Eds.), Recent advances and historical development of vector optimization problems. Lecture notes in economics and mathematical systems (Vol. 294, pp. 51–61).
Bednarczuk, E. (1994). An approach to well-posedness in vector optimization: Consequences to stability. Control and Cybernetics, 23, 107–122.
Bednarczuk, E., & Penot, J. P. (1994). Metrically well-set minimization problems. Applied Mathematics and Optimization, 26, 273–285.
Crespi, G. P., Dhingra, M., & Lalitha, C. S. (2018). Pointwise and global well-posedness in set optimization: a direct approach. Annals of Operations Research, 269, 149–166.
Crespi, G. P., Guerraggio, A., & Rocca, M. (2007). Well-posedness in vector optimization problems and vector variational inequalities. Journal of Optimization Theory and Applications, 132, 213–226.
Crespi, G. P., Kuroiwa, D., & Rocca, M. (2014). Convexity and global well-posedness in set-optimization. Taiwanese Journal of Mathematics, 18, 1897–1908.
Crespi, G. P., Papalia, M., & Rocca, M. (2011). Extended well-posedness of vector optimization problems: The convex case. Taiwanese Journal of Mathematics, 15, 1545–1559.
Dentcheva, D., & Helbig, S. (1996). On variational principles, level sets, well-posedness, and \(\epsilon \)-solutions in vector optimization. Journal of Optimization Theory and Applications, 89, 325–349.
Dontchev, A. L., & Zolezzi, T. (1993). Well-posed optimization problems. Lecture Notes in Mathematics (Vol. 1543). Springer-Verlag.
Ehrgott, M. (2005). Multicriteria optimization. Springer.
Geoffrion, A. M. (1968). Proper efficiency and the theory of vector maximization. Journal of Mathematical Analysis and Applications, 22(3), 618–630.
Ginchev, I., Guerraggio, A., & Rocca, M. (2006). From scalar to vector optimization. Applications of Mathematics, 51, 5–36.
Guerraggio, A., Molho, E., & Zaffaroni, A. (1994). On the notion of proper efficiency in vector optimization. Journal of Optimization Theory and Applications, 82, 1–21.
Hadamard, J. (1902). Sur les problèmes aux dérivées partielles et leur signification physique. Princeton University Bulletin, 13, 49–52.
Hiriart-Urruty, J.-B. (1979). New concepts in nondifferentiable programming. Bulletin de la Société Mathématique de France, 60, 57–85.
Huang, X. X. (2000). Extended well-posedness properties of vector optimization problems. Journal of Optimization Theory and Applications, 106, 165–182.
Huang, X. X. (2001). Extended and strongly extended well-posedness of set-valued optimization problems. Mathematical Methods of Operations Research, 53, 101–116.
Lalitha, C. S., & Chatterjee, P. (2013). Well-posedness and stability in vector optimization problems using Henig proper efficiency. Optimization, 62, 155–165.
Loridan, P. (1995). Well-posedness in vector optimization. In R. Lucchetti & J. Revalski (Eds.), Recent developments in well-posed variational problems. Mathematics and its applications (Vol. 331, pp. 171–192). Kluwer Academic Publishers.
Luc, D. T. (1989). Theory of vector optimization. Lecture Notes in Economics and Mathematical Systems. Springer-Verlag.
Lucchetti, R., & Miglierina, E. (2004). Stability for convex vector optimization problems. Optimization, 53, 517–528.
Miglierina, E., & Molho, E. (2003). Well-posedness and convexity in vector optimization. Mathematical Methods of Operations Research, 58, 375–385.
Miglierina, E., Molho, E., & Rocca, M. (2005). Well-posedness and scalarization in vector optimization. Journal of Optimization Theory and Applications, 126, 391–409.
Miholca, M. (2021). Global well-posedness in set optimization. Numerical Functional Analysis and Optimization, 42, 1700–1717.
Morgan, J. (2005). Approximations and well-posedness in multicriteria games. Annals of Operations Research, 137, 257–268.
Tykhonov, A. N. (1966). On the stability of the functional optimization problem. USSR Journal of Computational Mathematics and Mathematical Physics, 6, 631–634.
Zaffaroni, A. (2003). Degrees of efficiency and degrees of minimality. SIAM Journal on Control and Optimization, 42, 1071–1086.
Zolezzi, T. (2001). Well-posedness and optimization under perturbations. Annals of Operations Research, 101, 351–361.
Funding
Open access funding provided by Università degli Studi dell'Insubria within the CRUI-CARE Agreement.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix A: Proofs of the results
Proof of Theorem 4.2
Relations between CGR-wp and Hs-wp follow directly from the definitions. Relations among Hs-wp, H-wp and Hw-wp can be found in Huang (2000). \(\square \)
Proof of Theorem 4.3
Clearly if \(x^n\) satisfies (8) then it is B-minimizing. Conversely, assume \(x^n\) is B-minimizing, that is, \( \forall n \in \mathbb {N}, \exists k^{n} \in K\), \(k^{n} \rightarrow 0\) and \(y^{n} \in \textrm{Min}(A, f)\) such that \(f(x^{n}) \le _{K} y^{n} + k^n\). We show that for \(e \in \textrm{int}K, \; \exists \alpha _{n}>0, \; \alpha _{n} \rightarrow 0,\) such that
$$f(x^{n}) \le _{K} y^{n} + \alpha _{n}e,$$
which holds if for \(k^{n} \in K, \; k^{n} \rightarrow 0, \; \exists \alpha _{n} > 0, \alpha _{n} \rightarrow 0\) such that \(k^{n} - K \subseteq \alpha _{n}e - K\). The previous inclusion can be written as \(k^{n} - \alpha _{n}e -K \subseteq -K,\) and it is satisfied if \( \Delta _{-K} (k^{n} -\alpha _{n}e -z) \le 0, \; \forall z \in K\), since K is a closed, convex cone (see Proposition 2.1). Since \(\Delta _{-K}\) is subadditive, we have
$$\Delta _{-K}(k^{n} - \alpha _{n}e - z) \le \Delta _{-K}(k^{n}) + \Delta _{-K}(-\alpha _{n}e) + \Delta _{-K}(-z).$$
For the right-hand side of this inequality to be nonpositive it is enough to require \(\Delta _{-K}(k^{n}) + \Delta _{-K}(-\alpha _{n}e) \le 0\), since \(\Delta _{-K}(-z) \le 0\) for \(z \in K\).
By positive homogeneity of the oriented distance function, \(\Delta _{-K}(-\alpha _{n}e)= \alpha _{n}\Delta _{-K}(-e)\), so the inequality
$$\Delta _{-K}(k^{n}) + \alpha _{n}\Delta _{-K}(-e) \le 0$$
holds for
$$\alpha _{n} \ge -\frac{\Delta _{-K}(k^{n})}{\Delta _{-K}(-e)}$$
(observe that \(\Delta _{-K}(-e)<0\) since \(e \in \textrm{int}\, K\)). Choose \(\alpha _{n} = -\frac{\Delta _{-K}(k^{n})}{\Delta _{-K}(-e)}\) and observe that \(\alpha _n \rightarrow 0^+\) to get the desired conclusion. \(\square \)
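The closing step of this proof can be verified numerically. The sketch below is an illustration only, under the assumptions \(K = \mathbb {R}^2_+\) and \(e=(1,1)\): it implements the oriented distance \(\Delta _{-K}\) in its closed form for the orthant and checks that the choice \(\alpha _{n} = -\Delta _{-K}(k^{n})/\Delta _{-K}(-e)\) yields \(\Delta _{-K}(k^{n} - \alpha _{n}e - z) \le 0\) for sampled \(z \in K\).

```python
import numpy as np

def delta_minus_K(y):
    """Oriented distance to -K for K = R^2_+: positive outside -K
    (distance to -K), negative inside -K (minus distance to the boundary)."""
    y = np.asarray(y, dtype=float)
    pos = np.maximum(y, 0.0)
    if np.any(pos > 0):
        return float(np.linalg.norm(pos))
    return float(np.max(y))

e = np.array([1.0, 1.0])            # e in int K
k_n = np.array([0.3, 0.1])          # a sample point k^n in K
alpha_n = -delta_minus_K(k_n) / delta_minus_K(-e)

# k^n - alpha_n * e - z should stay in -K for every z in K,
# i.e. the oriented distance should be nonpositive
rng = np.random.default_rng(0)
for z in rng.uniform(0, 5, size=(100, 2)):   # sample z in K = R^2_+
    assert delta_minus_K(k_n - alpha_n * e - z) <= 1e-12
print("alpha_n =", alpha_n)
```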
Proof of Theorem 4.4
Since assumption A4 holds, Theorem 4.3 entails that a sequence is B-minimizing if and only if it is H-minimizing. Hence H-wp \(\Longrightarrow _\textrm{A4}\) B-wp. Now we show that under assumptions A3 and A4, B-well-posedness implies H-well-posedness. Let \(x^n\in A\) be an H-minimizing sequence, i.e. there exist \(k^0\in \textrm{int}K, \; \alpha _{n} > 0, \; \alpha _{n} \rightarrow 0\) and \(y^{n} \in \textrm{Min}(A,f) \) such that \(f(x^{n}) \le _{K} y^{n} + \alpha _{n}k^0.\) There are two possible cases:
(1) \( x^{n} \in A \setminus \textrm{Eff}(A,f)\). In this case \( x^{n} \) is a B-minimizing sequence, so there exists a subsequence converging to some element of Eff(A, f).
(2) \( x^{n} \in \textrm{Eff}(A,f)\). By compactness of Eff(A, f), there exists a subsequence converging to an efficient point.
In both cases a subsequence of \(x^n\) converges to an element of Eff(A, f), hence problem (A, f) is H-well-posed. \(\square \)
Proof of Theorem 5.1
It is known (see e.g. Zaffaroni, 2003) that \(z \in \textrm{Min}(A, f)\) if and only if z is the unique solution of the scalar minimization problem
$$\min _{y \in f(A)} \Delta _{-K}(y-z).$$
Hence for \(z \in \textrm{Min}(A, f)\) it follows that \(\Delta _{-K}(y-z) \ge 0\) and \(\Delta _{-K}(y-z)=0\) if and only if \(y=z\). It follows that \(\inf _{z \in \textrm{Eff}(A, f)}\Delta _{-K}(f(x)-f(z)) \ge 0\) for every \(x \in A\). Assume \(x \in \textrm{Eff}(A, f)\). Then \(\inf _{z \in \textrm{Eff}(A, f)}\Delta _{-K}(f(x)-f(z)) \le \Delta _{-K}(f(x)-f(x))=0\), which entails \(\inf _{z \in \textrm{Eff}(A, f)}\Delta _{-K}(f(x)-f(z))=0\). Conversely, assume \(\inf _{z \in \textrm{Eff}(A, f)}\Delta _{-K}(f(x)-f(z))=0\). Since f is continuous, Proposition 2.1 (vii) entails that \(\Delta _{-K}(f(x)-f(\cdot ))\) is also continuous, and since \(\textrm{Eff}(A, f)\) is compact the infimum is attained at some point \({\bar{z}} \in \textrm{Eff}(A, f)\), i.e. \( \Delta _{-K}(f(x)-f({\bar{z}}))=0\). It follows that \(x \in f^{-1}(f({\bar{z}}))\) and hence \(x \in \textrm{Eff}(A, f)\). \(\square \)
Proof of Theorem 5.2
Assume \(x^n\) is a minimizing sequence for problem (A, h). Then
$$h(x^n)=\inf _{z \in \textrm{Eff}(A, f)} \Delta _{-K}(f(x^n)-f(z)) \rightarrow 0.$$
This implies the existence of a sequence \(z^n \in \textrm{Eff}(A, f)\) such that \(\Delta _{-K}(f(x^n)-f(z^n))\rightarrow 0\). Let \(\Delta _{-K}(f(x^n)-f(z^n))= \alpha _n\rightarrow 0^+\) and let \(e \in \textrm{int}\, K\) with \(\Delta _{-K}(-e) \le -1\). By Proposition 2.1, we have
$$\Delta _{-K}(f(x^n)-f(z^n)-\alpha _n e) \le \Delta _{-K}(f(x^n)-f(z^n)) + \alpha _n\Delta _{-K}(-e) \le \alpha _n - \alpha _n =0,$$
which, according to Proposition 2.1, gives
$$f(x^n) \in f(z^n) + \alpha _n e - K,$$
that is, \(\{x^n\}\) is B-minimizing (see Theorem 4.3).
Conversely, assume \(\{x^n\}\) is B-minimizing. By Theorem 4.3, for \(e \in \textrm{int}\, K\) with \(\Delta _{-K}(e)=1\) there exist \(\alpha _n >0\), \(\alpha _n \rightarrow 0\) and \(z^n \in \textrm{Eff}(A, f)\) such that \(f(x^n)\in f(z^n)-K + \alpha _n e\). Then, by Proposition 2.1, we have
$$\Delta _{-K}(f(x^n)-f(z^n)) \le \Delta _{-K}(f(x^n)-f(z^n)-\alpha _n e) + \alpha _n\Delta _{-K}(e) \le \alpha _n.$$
This entails \(h(x^n)= \inf _{z \in \textrm{Eff}(A, f)} \Delta _{-K}(f(x^n)-f(z)) \rightarrow 0\), which completes the proof. \(\square \)
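Theorems 5.1 and 5.2 are easy to test on a toy instance. The following sketch is an illustration only, under assumptions not taken from the paper: \(K=\mathbb {R}^2_+\), f the identity on a small finite feasible set, and all names hypothetical. It computes \(h(x)=\inf _{z \in \textrm{Eff}(A, f)} \Delta _{-K}(f(x)-f(z))\) and checks that it is nonnegative and vanishes exactly on the efficient points.

```python
import numpy as np

def delta_minus_K(y):
    # oriented distance to -K for K = R^2_+ (closed form)
    y = np.asarray(y, dtype=float)
    pos = np.maximum(y, 0.0)
    return float(np.linalg.norm(pos)) if np.any(pos > 0) else float(np.max(y))

# finite feasible set, f = identity, order induced by K = R^2_+
A = [np.array(p, dtype=float) for p in
     [(0, 3), (1, 1), (3, 0), (2, 2), (3, 3)]]

def efficient(points):
    # a point is efficient iff no other point dominates it w.r.t. R^2_+
    return [p for p in points
            if not any(np.all(q <= p) and np.any(q < p) for q in points)]

Eff = efficient(A)                      # here: (0,3), (1,1), (3,0)
h = lambda x: min(delta_minus_K(x - z) for z in Eff)

for x in A:
    is_eff = any(np.array_equal(x, z) for z in Eff)
    assert h(x) >= -1e-12
    assert (abs(h(x)) < 1e-12) == is_eff
print([(tuple(x), round(h(x), 3)) for x in A])
```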
Proof of Theorem 5.3
The proof is immediate combining Theorems 5.1 and 5.2. \(\square \)
Proof of Theorem 5.4
Let \(\bar{x} \in \textrm{WEff}(A,f)\), i.e. \(f(z) - f(\bar{x}) \not \in -\textrm{int}K, \; \forall z \in A\). Hence
$$\Delta _{-K} (f(z) - f(\bar{x})) \ge 0, \quad \forall z \in A,$$
which entails \(h_1(\bar{x}) \le 0\). Since \(h_1(x)\ge 0\) \(\forall x \in A\), we get \(h_1(\bar{x}) = 0\), which means \(\bar{x} \in \textrm{argmin}(A,h_1).\)
Conversely, let \(h_1(\bar{x}) = 0.\) Then, clearly, \(\bar{x} \in \textrm{argmin}(A,h_1).\) From \(h_1(\bar{x})=0,\) we get \(\Delta _{-K} (f(z) - f(\bar{x})) \ge 0, \; \forall z \in A,\) which means \(f(z) - f(\bar{x}) \not \in - \textrm{int}K, \; \forall z \in A\) (see Proposition 2.1) and hence \(\bar{x} \in \textrm{WEff}(A,f).\) \(\square \)
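The characterization of weak efficiency above can also be checked on a finite instance. The sketch below is an illustration under two explicit assumptions not stated here: \(K=\mathbb {R}^2_+\) with f the identity, and the realization \(h_1(x) = -\inf _{z \in A}\Delta _{-K}(f(z)-f(x))\), which is consistent with the inequalities used in this proof (the precise definition of \(h_1\) appears in Sect. 5).

```python
import numpy as np

def delta_minus_K(y):
    # oriented distance to -K for K = R^2_+ (closed form)
    y = np.asarray(y, dtype=float)
    pos = np.maximum(y, 0.0)
    return float(np.linalg.norm(pos)) if np.any(pos > 0) else float(np.max(y))

A = [np.array(p, dtype=float) for p in
     [(0, 2), (1, 1), (2, 0), (1, 2), (2, 2)]]

# hypothetical realization of h_1 consistent with the proof above
h1 = lambda x: -min(delta_minus_K(z - x) for z in A)

def weakly_efficient(p, points):
    # p is weakly efficient iff no q is strictly smaller in every component
    return not any(np.all(q < p) for q in points)

for x in A:
    # h_1 vanishes exactly on the weakly efficient points
    assert (abs(h1(x)) < 1e-12) == weakly_efficient(x, A)
print([(tuple(x), round(h1(x), 3)) for x in A])
```

Note that (1, 2) is weakly efficient but not efficient (it is dominated by (1, 1) in one component), and indeed \(h_1\) still vanishes there.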
Proof of Theorem 5.5
Let \(x^{n} \in A\) be a HCGR-minimizing sequence, i.e. there exist \(k^0 \in \textrm{int}\, K\), \(\alpha _n >0\), \(\alpha _n \rightarrow 0\) such that
$$f(x) - f(x^{n}) + \alpha _{n}k^0 \not \in -\textrm{int}K, \quad \forall x \in A.$$
This entails \(\Delta _{-K} (f(x) - f(x^{n}) +\alpha _{n}k^0) \ge 0, \; \forall x \in A,\) and hence, by subadditivity of \(\Delta _{-K} (\cdot ),\)
$$\Delta _{-K} (f(x) - f(x^{n})) \ge -\alpha _{n}\Delta _{-K}(k^0), \quad \forall x \in A.$$
Put \(\gamma _{n}=\alpha _{n}\Delta _{-K}(k^0) \ge 0\). We have \(\gamma _{n} \rightarrow 0\) and
$$0 \le h_1(x^{n}) \le \gamma _{n},$$
hence \(h_1(x^{n}) \rightarrow 0,\) that is, \(x^{n}\) is minimizing for problem \((A,h_1).\)
Conversely, assume \(x^{n}\) is minimizing for problem \((A,h_1).\) Hence \(h_1(x^{n}) \rightarrow 0,\) which implies \(h_1(x^{n}) \le \beta _{n},\) for some sequence \(\beta _{n} \ge 0, \beta _{n} \rightarrow 0.\) It holds
$$h_1(x^{n})= -\inf _{z \in A} \Delta _{-K} (f(z) - f(x^{n})) \le \beta _{n},$$
that is, \( \inf _{z \in A} \Delta _{-K} (f(z) - f(x^{n})) \ge -\beta _{n}\), and this is equivalent to
$$\Delta _{-K} (f(z) - f(x^{n})) \ge -\beta _{n}, \quad \forall z \in A.$$
Since \(\Delta _{-K}(y) = \max _{\xi \in K^{+} \cap S} \langle \xi ,y \rangle \) (see Proposition 2.1), choosing a vector \(k^0 \in \textrm{int}K\) with \(\langle \xi , k^0 \rangle \ge 1, \; \forall \xi \in K^{+} \cap S, \) we obtain, \(\forall z \in A\):
$$\Delta _{-K}(f(z)- f(x^{n}) + \beta _{n}k^0) \ge \Delta _{-K}(f(z)- f(x^{n})) + \beta _{n} \ge -\beta _{n} + \beta _{n} = 0.$$
Hence \(\Delta _{-K}(f(z)- f(x^{n}) + \beta _{n}k^0) \ge 0, \; \forall z \in A,\) and this is equivalent to saying that \(f(z) - f(x^{n}) + \beta _{n}k^0 \not \in - \textrm{int}K, \; \forall z \in A.\)
Hence \(x^{n}\) is a HCGR-minimizing sequence for the vector problem (A, f). \(\square \)
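The max-representation \(\Delta _{-K}(y) = \max _{\xi \in K^{+} \cap S} \langle \xi ,y \rangle \) used in the last step can be verified numerically for \(K=\mathbb {R}^2_+\). The sketch below is illustrative only: the grid over \(K^{+}\cap S\) (the arc of unit vectors in the first quadrant) approximates the exact maximum.

```python
import numpy as np

def delta_minus_K(y):
    # closed form of the oriented distance to -K for K = R^2_+
    y = np.asarray(y, dtype=float)
    pos = np.maximum(y, 0.0)
    return float(np.linalg.norm(pos)) if np.any(pos > 0) else float(np.max(y))

# dual representation: Delta_{-K}(y) = max over unit xi in K^+ of <xi, y>;
# for K = R^2_+, K^+ cap S is the arc (cos t, sin t), t in [0, pi/2]
thetas = np.linspace(0.0, np.pi / 2, 100001)
xis = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)

for y in [(1.0, 2.0), (3.0, -1.0), (-2.0, -0.5), (0.0, 0.0)]:
    dual = float(np.max(xis @ np.asarray(y)))
    assert abs(dual - delta_minus_K(y)) < 1e-6
print("dual max-representation matches the closed form")
```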
Proof of Theorem 6.2
Let
$${\mathrm{WEff\,}}_{\varepsilon e}(A, f)=\{x \in A: f(z) - f(x) + \varepsilon e \not \in -{\mathrm{int\,}}K, \; \forall z \in A\}$$
denote the set of \(\varepsilon e\)-weakly efficient solutions.
Assume that problem (A, f) is not CGR-well-posed. Then we can find sequences \(\varepsilon _{n} \rightarrow 0^{+}, \; x^{n} \in {\mathrm{WEff\,}}_{\varepsilon _{n}e} \left( A, f\right) ,\) such that, for some \(\delta > 0 \) it holds \(x^n \not \in {\mathrm{WEff\,}}(A,f) + \delta B\), where B denotes the unit ball in Y.
We claim that for every n there exists a point \(z^{n} \in \partial [{\mathrm{WEff\,}}(A, f) + \delta B]\cap {\mathrm{WEff\,}}_{\varepsilon _{n}e}(A, f)\). If such a \(z^{n}\) does not exist, we would have for every n
$${\mathrm{WEff\,}}_{\varepsilon _{n}e}\left( A, f\right) \subseteq {\mathrm{int\,}}\left[ {\mathrm{WEff\,}}(A, f) + \delta B\right] \cup [{\mathrm{WEff\,}}(A, f)+\delta B]^c. \qquad (14)$$
Clearly \({\mathrm{WEff\,}}_{\varepsilon _{n}e}\left( A, f\right) \cap [{\mathrm{WEff\,}}(A, f)+\delta B]^c\not =\emptyset \). Further it holds
$${\mathrm{WEff\,}}_{\varepsilon _{n}e}\left( A, f\right) \cap {\mathrm{int\,}}\left[ {\mathrm{WEff\,}}(A, f)+\delta B\right] \not =\emptyset ,$$
since
$${\mathrm{WEff\,}}(A, f) \subseteq {\mathrm{WEff\,}}_{\varepsilon _{n}e}\left( A, f\right) .$$
We prove that \({{\mathrm{WEff\,}}}_{\varepsilon _{n}e}\left( A, f\right) \) is connected and hence (14) cannot hold, since
$${\mathrm{int\,}}\left[ {{\mathrm{WEff\,}}}(A, f) + \delta B\right] \cap [{\mathrm{WEff\,}}(A, f)+\delta B]^c = \emptyset $$
and both \({\mathrm{int\,}}\left[ {{\mathrm{WEff\,}}}(A, f) + \delta B\right] \) and \([{\mathrm{WEff\,}}(A, f)+\delta B]^c\) are open.
If \({\bar{y}} =f({\bar{x}})\), with \(\bar{x}\in {\mathrm{WEff\,}}(A, f)\), the level set
$${\textrm{Lev}}\left( f,{\bar{y}},A\right) =\{x \in A: f(x) \in {\bar{y}} - K\}$$
is clearly nonempty and further we have
$${\textrm{Lev}}\left( f,{\bar{y}},A\right) \subseteq {\mathrm{WEff\,}}(A, f).$$
Indeed, assume there exists a point \(x' \in {\textrm{Lev}}\left( f,{\bar{y}},A\right) \backslash {\mathrm{WEff\,}}(A, f)\). Hence \(f(x') \in f({\bar{x}})-K\) and we can find a point \(x'' \in A\) such that \(f(x'') \in f(x') - {\mathrm{int\,}}K\). This entails \(f(x'') \in f(x') - {\mathrm{int\,}}K \subseteq f({\bar{x}}) - {\mathrm{int\,}}K\), which contradicts \({\bar{x}} \in {\mathrm{WEff\,}}(A, f)\).
The inclusion \({\textrm{Lev}}\left( f,{\bar{y}},A\right) \subseteq {\mathrm{WEff\,}}(A,f)\) proves \({\textrm{Lev}}\left( f,{\bar{y}},A\right) \) is bounded. Since f is continuous, we conclude that \({\mathrm{WEff\,}}(A, f)\) is closed and hence \({\textrm{Lev}}\left( f,{\bar{y}},A\right) \) is compact. Further, the compactness of \({\textrm{Lev}}\left( f,{\bar{y}},A\right) \) and the K-convexity of f imply \({\textrm{Lev}}\left( f,y,A\right) \) is compact \(\forall y \in Y\) (when nonempty) (Lucchetti & Miglierina, 2004).
This implies that the set \({\mathrm{WEff\,}}_{\varepsilon _{n}e}(A, f)\) is connected, nonempty and closed (see Theorem 4.1 in Crespi et al., 2007), and hence (14) cannot hold. This proves the existence of a sequence \(z^{n} \in \partial \left[ {\mathrm{WEff\,}}\left( A,f\right) + \delta B\right] \cap {\mathrm{WEff\,}}_{\varepsilon _{n}e} \left( A, f\right) .\)
Since \({\mathrm{WEff\,}}(A, f)\) is compact, we can assume \(z^{n}\) converges to a point \(\bar{z} \in \partial \left[ {\mathrm{WEff\,}}\left( A,f\right) + \delta B\right] \). Since \(z^{n} \in {\mathrm{WEff\,}}_{\varepsilon _{n}e}\left( A, f\right) \) the continuity of f gives \(\bar{z} \in {\mathrm{WEff\,}}(A, f)\). \(\square \)
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Rocca, M. A systematization of global well-posedness in vector optimization. Ann Oper Res (2024). https://doi.org/10.1007/s10479-024-06089-z