1 Introduction

In a very simplified setting, taking a decision involves a single objective that we wish to minimize or maximize. In real situations, however, several conflicting objectives are involved in a decision and we have to choose the “best”, or at least a “good”, alternative among the available ones. This consideration is the foundation of multiobjective or vector optimization. From its first roots, laid by Pareto at the end of the 19th century, the discipline has prospered and grown, especially during the last decades. Today, many decision support systems incorporate methods to deal with conflicting objectives. The foundation for such systems is a mathematical theory of optimization under multiple objectives (see e.g. Ehrgott, 2005).

When dealing with single objective and multiobjective optimization problems, the notion of well-posedness plays a crucial role. Generally speaking, the various notions of well-posedness of a given optimization problem can be divided into two classes. Notions in the first class are based on the behaviour of a prescribed class of sequences of approximate solutions. Notions in the second class are based on the continuous dependence of the (necessarily existing) solution on the problem data. For single objective optimization problems, the first notion of the first type was introduced by Tykhonov (1966) and later took his name, while the second type of well-posedness notion goes back to Hadamard (1902).

In this paper we focus on well-posedness notions in the first class. We recall that if A is a subset of a normed space X and \(f: X\rightarrow \mathbb {R}\), the scalar optimization problem

$$\begin{aligned} \min _{x \in A} f(x) \end{aligned}$$
(1)

is well-posed in the sense of Tykhonov when it has a unique minimizer \({\bar{x}} \in A\) and every sequence \(x^n \in A\) such that \(f(x^n) \rightarrow \inf _{x \in A}f(x)\) converges to \({\bar{x}}\). A sequence \(x^n\in A\) such that \(f(x^n) \rightarrow \inf _{x \in A}f(x)\) is usually called a minimizing sequence. The motivation for Tykhonov well-posedness is clearly practical. Usually, a numerical method solving a minimization problem iteratively produces some minimizing sequence \(x^n\), and one wants to be sure that the approximate solutions \(x^n\) are not far from the (unique) minimizer \({\bar{x}}\). In the context of single objective optimization, the relations between the two approaches and their applications to Mathematical Programming and Optimal Control have been widely studied (see e.g. Zolezzi, 2001). For a detailed treatment of well-posedness in the single objective case one can see e.g. Dontchev and Zolezzi (1993).
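As a numerical illustration (not part of the formal development), the following Python sketch contrasts a Tykhonov well-posed scalar problem with one that fails the definition; the functions and tolerances are ours, chosen only for illustration.

```python
import math

def tail_near(xs, xbar, tol=1e-3):
    """Check whether the tail of the sequence stays within tol of xbar."""
    return all(abs(x - xbar) < tol for x in xs[-5:])

# Well-posed: f(x) = x^2 on A = R has the unique minimizer 0,
# and every minimizing sequence must converge to 0.
f = lambda x: x * x
minimizing = [1.0 / n for n in range(1, 10001)]   # f(x^n) -> inf f = 0
assert tail_near(minimizing, 0.0)

# Not Tykhonov well-posed: g(x) = x^2 * exp(-x) also has the unique
# minimizer 0, but x^n = n is minimizing (g(x^n) -> 0 = inf g) and diverges.
g = lambda x: x * x * math.exp(-x)
divergent = [float(n) for n in range(1, 10001)]
assert g(divergent[-1]) < 1e-6        # values approach the infimum ...
assert not tail_near(divergent, 0.0)  # ... yet the sequence leaves every neighbourhood of 0
```

The second function shows why uniqueness of the minimizer alone is not enough: the behaviour of all minimizing sequences matters.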

In the last decades, several extensions of the well-posedness notion to vector optimization have appeared (see e.g. Bednarczuk, 1987, 1994a; Loridan, 1995; Dentcheva & Helbig, 1996; Huang, 2000, 2001; Miglierina et al., 2005 for well-posedness concepts in the vector case). In Anh et al. (2020) well-posedness of uncertain vector optimization problems is considered, while in Morgan (2005) vector well-posedness notions are applied to multicriteria games.

Generalizations of well-posedness to set optimization constitute an active field of research (see e.g. Crespi et al., 2014, 2018; Miholca, 2021).

In multiobjective optimization we generally have multiple solutions. For this reason, in Miglierina et al. (2005) a classification of vector well-posedness notions into two groups is given: pointwise and global notions. Definitions in the first group consider a fixed efficient point (or the image of an efficient point) and deal with well-posedness of the vector optimization problem at this point. For a survey and relationships among well-posedness notions in the first group one can see Miglierina et al. (2005). This approach requires that the minimizing sequences related to the considered point be well-behaved.

Since in the vector case the solution set is typically not a singleton, there is also a class of definitions, the so-called global well-posedness notions, that involve the efficient frontier, the weakly efficient frontier or the properly efficient frontier as a whole. In this paper we survey the notions of global well-posedness and investigate the relationships among them. Then we characterize global well-posedness notions in terms of well-posedness of suitable scalarized optimization problems. Finally we give results concerning the well-posedness of cone-convex functions. The outline of the paper is the following. In Sect. 2 we give some preliminary concepts. In Sect. 3 we survey global well-posedness notions in vector optimization. In Sect. 4 we investigate some relationships among the various global well-posedness notions. In Sect. 5 we characterize global well-posedness notions in terms of well-posedness of suitable scalarized optimization problems. Section 6 is devoted to the well-posedness of cone-convex functions. Proofs of the results are given in the Appendix.

2 Preliminaries

2.1 Problem setting and notation

In this section we recall some basic notions on vector optimization. We assume that X and Y are normed spaces and that Y is ordered by a closed, convex and pointed cone \(K\subseteq Y\) with nonempty interior. In this setting we have

$$\begin{aligned} y_{1} \le _{K} y_{2} \; \iff \; y_{2} - y_{1} \in K, \\ y_{1} <_{K} y_{2} \; \iff \; y_{2} - y_{1} \in \textrm{int}\, K \end{aligned}$$

Clearly, when \(Y=\mathbb {R}^m\) and \(K=\mathbb {R}^m_+\) we obtain the classical notion of Pareto order.
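For \(K=\mathbb {R}^2_+\) the two order relations reduce to componentwise comparisons, which the following small Python sketch makes explicit (the function names are ours, for illustration only):

```python
def leq_K(y1, y2):
    """y1 <=_K y2 iff y2 - y1 in K, here K = R^2_+ (componentwise order)."""
    return all(b - a >= 0 for a, b in zip(y1, y2))

def lt_K(y1, y2):
    """y1 <_K y2 iff y2 - y1 in int K (componentwise strict)."""
    return all(b - a > 0 for a, b in zip(y1, y2))

assert leq_K((1, 2), (1, 3))      # (0, 1) lies in K
assert not lt_K((1, 2), (1, 3))   # (0, 1) is not in int K
assert lt_K((0, 0), (1, 1))
assert not leq_K((1, 2), (0, 3))  # incomparable: neither point dominates the other
assert not leq_K((0, 3), (1, 2))
```

The last two assertions illustrate that \(\le_K\) is only a partial order: two points may be incomparable, which is the source of the multiplicity of solutions in vector optimization.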

We denote by B the closed unit ball, both in X and in Y (from the context it will be clear to which space we refer).

Let \(f: X \rightarrow Y \) and let \(A \subseteq X\) be a closed subset of X. We consider the vector optimization problem (Af) given by

$$\begin{aligned} \min f(x), \;\; x \in A \end{aligned}$$

A point \(\bar{x} \in A\) is an efficient solution of problem (Af) when

$$\begin{aligned} (f(A)-f(\bar{x})) \cap (-K) = \{0\} \end{aligned}$$

We denote by \(\textrm{Eff}(A, f)\) the set of all efficient solutions of problem (Af) and by \(\textrm{Min}(A, f)\) the image set of \(\textrm{Eff}(A, f)\) through the objective function f. The image of an efficient solution is called a minimal point.

We recall also that a point \(\bar{x} \in A\) is said to be a weakly efficient solution of problem (Af) when

$$\begin{aligned} (f(A)-f(\bar{x})) \cap (-\textrm{int}K) = \emptyset . \end{aligned}$$

We denote respectively by \(\textrm{WEff}(A, f)\) the set of weakly efficient solutions and by \(\textrm{WMin}(A, f)\) the image set of \(\textrm{WEff}(A, f)\) through the objective function f, i.e. the set of weakly minimal points.
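On a finite image set the two solution concepts can be checked directly. The following sketch (with \(K=\mathbb {R}^2_+\) and illustrative data of our choosing) exhibits a point that is weakly efficient but not efficient:

```python
def is_efficient(y, Y):
    """No z in Y satisfies z <=_K y with z != y (K = R^2_+)."""
    return not any(z != y and all(zi <= yi for zi, yi in zip(z, y)) for z in Y)

def is_weakly_efficient(y, Y):
    """No z in Y satisfies z <_K y, i.e. z strictly smaller in every component."""
    return not any(all(zi < yi for zi, yi in zip(z, y)) for z in Y)

Y = [(0, 3), (1, 1), (3, 0), (2, 2), (1, 3)]
assert [y for y in Y if is_efficient(y, Y)] == [(0, 3), (1, 1), (3, 0)]
# (1, 3) is dominated by (1, 1), but not strictly in the first component:
assert is_weakly_efficient((1, 3), Y) and not is_efficient((1, 3), Y)
# (2, 2) is strictly dominated by (1, 1), hence not even weakly efficient:
assert not is_weakly_efficient((2, 2), Y)
```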

Clearly, when \(Y=\mathbb {R}^m\) and \(K= \mathbb {R}^m_+\) from the previous definitions we obtain the well-known concepts of Pareto efficiency. For our purposes it is worth mentioning the notion of proper efficiency. This notion dates back to Geoffrion (1968) in the case \(Y=\mathbb {R}^m\) and \(K=\mathbb {R}^m_+\). The interested reader can refer e.g. to Guerraggio et al. (1994) for an overview of the several notions of proper efficient solution and relationships among them.

According to the definition, a Pareto efficient solution of problem (Af) does not allow the improvement of one objective while retaining the same values of the others: an improvement of some criterion can only be obtained at the expense of the deterioration of at least one other criterion. The trade-offs among objectives can be measured by computing the increase in one objective per unit decrease of another. In some situations such trade-offs can be unbounded. At a properly efficient point an unbounded trade-off cannot occur.

In this paper we refer to the following general notion of proper efficiency. It can be proven that Geoffrion’s notion can be obtained as a particular case. A point \(\bar{x} \in A\) is a properly efficient solution of problem (Af) when there exists a closed convex pointed cone \(K'\) with \(K\setminus \{0\}\subseteq \textrm{int}\, K'\) such that \({\bar{x}}\) is an efficient solution of problem (Af) with respect to the ordering cone \(K'\). We denote by \(\textrm{PEff}(A, f)\) the set of properly efficient solutions of problem (Af) and by \(\textrm{PMin}(A, f)\) the image of \(\textrm{PEff}(A, f)\) through the function f. For details about the various notions of solution of a vector optimization problem one can see e.g. Luc (1989a).

The positive polar cone of K is denoted by

$$\begin{aligned} K^{+} = \{v \in Y^*: \langle v,y \rangle \ge 0, \; \forall y \in K \} \end{aligned}$$
(2)

where \(Y^*\) denotes the topological dual of Y. We recall that if K is a closed convex pointed cone with nonempty interior, then \(K^+\) has nonempty interior as well. When dealing with vector optimization problems, scalarization of the problem is a key issue. This is a tool that allows one to reduce a multiobjective optimization problem to a single objective (or scalar) one. The easiest way to scalarize a multiobjective optimization problem is the well-known linear scalarization. This means associating with problem (Af) the family of problems

$$\begin{aligned} \min g_{\lambda }(x)= \langle \lambda , f(x)\rangle \end{aligned}$$

with \(\lambda \in K^+\).
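On a finite image set, linear scalarization amounts to minimizing a weighted sum. The sketch below (the data and the weights are ours, chosen for illustration) shows how different \(\lambda \in K^+\) pick out different minimizers, each of which is efficient for \(K=\mathbb {R}^2_+\):

```python
# Image points f(A) of a hypothetical problem, K = R^2_+.
points = [(0.0, 1.0), (0.25, 0.6), (0.5, 0.5), (1.0, 0.0), (0.8, 0.9)]

def linear_scalarization(pts, lam):
    """Return a minimizer of <lam, y> over the finite image set pts."""
    return min(pts, key=lambda y: lam[0] * y[0] + lam[1] * y[1])

assert linear_scalarization(points, (1.0, 0.0)) == (0.0, 1.0)
assert linear_scalarization(points, (0.0, 1.0)) == (1.0, 0.0)
assert linear_scalarization(points, (1.0, 1.0)) == (0.25, 0.6)
# The dominated point (0.8, 0.9) is never selected, for any weight in K^+.
```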

This technique, however, relies heavily on convexity assumptions on the objective function (see e.g. Ehrgott, 2005; Luc, 1989b). When convexity of the objective function is dropped, we need to use nonlinear scalarization tools. We recall the notion of “oriented distance” from a point to a set, introduced by Hiriart-Urruty (1979), which is useful to introduce nonlinear scalarization techniques.

Definition 2.1

For a set \(M \subseteq Y\), the oriented distance function \(\Delta _{M}: Y \rightarrow \mathbb {R}\cup \{\pm \infty \}\) is defined as

$$\begin{aligned} \Delta _{M}(y) = d(y, M) - d(y, Y \setminus M) \end{aligned}$$

where \(d(y, M)= \inf _{z \in M}\Vert y-z\Vert \) is the distance of the point y from the set M.

Function \(\Delta _{M}\) was introduced in the framework of nonsmooth scalar optimization. Later it was used to obtain a scalarization of a vector optimization problem, considering \(M=-K.\) The main properties of function \(\Delta _{M}\) are gathered in the following proposition (see e.g. Ginchev et al., 2006; Zaffaroni, 2003).

Proposition 2.1

  1. (i)

    If \(M \ne \emptyset \) and \(M \ne Y\) then \(\Delta _{M}\) is real valued;

  2. (ii)

    \(\Delta _{M}(y) < 0, \; \forall y \in \textrm{int}\,M, \; \Delta _{M}(y) = 0, \; \forall y \in \partial M\) and \(\Delta _{M}(y) > 0, \; \forall y \in \textrm{int}(M^{c});\)

  3. (iii)

    if M is closed, then \(M = \{y\in Y: \Delta _{M}(y) \le 0\};\)

  4. (iv)

    if M is convex, then \(\Delta _{M}\) is convex;

  5. (v)

    if M is a cone, then \(\Delta _{M}\) is positively homogeneous;

  6. (vi)

    if M is a closed, convex cone, then \(\Delta _{M}\) is nonincreasing with respect to the ordering induced on Y, that is if \(y_{1}, y_{2} \in Y\) then

    $$\begin{aligned} y_{1} - y_{2} \in M \Rightarrow \Delta _{M}(y_{1}) \le \Delta _{M} (y_{2}). \end{aligned}$$
  7. (vii)

    If \(M=-K\) then it holds

    $$\begin{aligned} \Delta _{-K}(y)=\max _{v\in K^+\cap S}\langle v, y\rangle = \max _{v\in K^+\cap B}\langle v, y\rangle \end{aligned}$$
    (3)

where S denotes the unit sphere in \(Y^*\).
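For \(K=\mathbb {R}^2_+\) and the Euclidean norm, both sides of formula (3) can be computed directly. The sketch below (the discretization of \(K^+\cap S\) is ours, so the dual side is evaluated only approximately) checks the formula against Definition 2.1:

```python
import math

def delta_primal(y):
    """Definition 2.1 for M = -K, K = R^2_+: d(y, -K) - d(y, complement of -K)."""
    plus = [max(yi, 0.0) for yi in y]
    if any(p > 0 for p in plus):
        return math.hypot(plus[0], plus[1])   # outside -K: distance to -K
    return max(y)                             # inside -K: minus the distance to the boundary

def delta_dual(y, n=20000):
    """Formula (3): max of <v, y> over v in K^+ with ||v|| = 1, sampled on the arc."""
    return max(math.cos(t) * y[0] + math.sin(t) * y[1]
               for t in (math.pi / 2 * k / n for k in range(n + 1)))

for y in [(3.0, 4.0), (-1.0, 2.0), (-2.0, -1.0), (0.0, -1.0)]:
    assert abs(delta_primal(y) - delta_dual(y)) < 1e-4
assert delta_primal((-2.0, -1.0)) == -1.0   # negative on int(-K), as in Proposition 2.1 (ii)
assert delta_primal((0.0, -1.0)) == 0.0     # zero on the boundary of -K
```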

We summarize in the next result some characterizations of the solutions of the vector problem (Af) in terms of solutions of a suitable scalar minimization problem obtained through the function \(\Delta _{-K}\) (see e.g. Zaffaroni, 2003; Ginchev et al., 2006).

Theorem 2.1

Let \(g(x)=\Delta _{-K}(f(x)-f({\bar{x}}))\). Then:

  1. (i)

    \({\bar{x}} \in \textrm{Eff}(A, f)\) if and only if \(g(x)>g({\bar{x}})=0\) for every \(x \in A\backslash f^{-1}(f({\bar{x}}))\).

  2. (ii)

    \({\bar{x}} \in \textrm{WEff}(A, f)\) if and only if \(g(x)\ge g({\bar{x}})= 0\) for every \(x \in A\).

Finally, we recall the notion of K-convex function.

Definition 2.2

Let A be a convex set. A function \(f: A\rightarrow Y\) is said to be K-convex when for every \(x, y \in A\) and \(t \in [0,1]\) it holds

$$\begin{aligned} f(tx + (1-t)y) \in tf(x)+(1-t)f(y)-K \end{aligned}$$
(4)
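K-convexity can be tested numerically on a grid. In the sketch below (the grid, the test maps and the tolerance are ours, so this is a finite check rather than a proof) each component of the first map is convex, while the first component of the second is not:

```python
def violates_K_convexity(f, points, ts=(0.25, 0.5, 0.75)):
    """Search for x, y, t with f(tx + (1-t)y) not <=_K t f(x) + (1-t) f(y), K = R^2_+."""
    for x in points:
        for y in points:
            for t in ts:
                z = tuple(t * a + (1 - t) * b for a, b in zip(x, y))
                rhs = tuple(t * a + (1 - t) * b for a, b in zip(f(x), f(y)))
                if any(l > r + 1e-9 for l, r in zip(f(z), rhs)):
                    return True
    return False

grid = [(u / 5, v / 5) for u in range(-5, 6) for v in range(-5, 6)]
f_conv = lambda x: (x[0] ** 2, x[0] ** 2 + x[1] ** 2)   # both components convex
f_nonconv = lambda x: (x[0] ** 3, x[1])                 # x -> x^3 is not convex on [-1, 1]
assert not violates_K_convexity(f_conv, grid)
assert violates_K_convexity(f_nonconv, grid)
```

For \(K=\mathbb {R}^m_+\), K-convexity of f reduces to convexity of each component, which is what the check exploits.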

3 Global well-posedness notions

We give here a summary of the various notions of global well-posedness. In order to show some links among them, we divide these notions into three groups. The first group collects the definitions that refer to efficient solutions, the second group those that refer to weakly efficient solutions, while the third group contains a definition of well-posedness that refers to properly efficient solutions.

The concept of minimizing sequence, which is fundamental in defining well-posedness for scalar optimization problems (see e.g. Dontchev & Zolezzi, 1993), can be generalized in several ways to the vector case.

The following notions of minimizing sequence are needed in order to introduce the definitions of well-posedness in the first group. They extend to the vector case the concept of minimizing sequence in the sense of Tykhonov.

Definition 3.1

(Bednarczuk, 1994b; Miglierina & Molho, 2003; Miglierina et al., 2005)

  1. (i)

    A sequence \(x^n\in A\) is said to be B-minimizing when for every \(n \in \mathbb {N}\) there exist \(k^{n} \in K\), \(k^{n} \rightarrow 0\), and \(y^{n} \in \textrm{Min}(A,f)\) such that \(f(x^{n}) \le _{K} y^{n} + k^n\);

  2. (ii)

    A sequence \(x^n\in A\) is said to be MM-minimizing when \(d(f(x^{n}),\textrm{Min}(A,f)) \rightarrow 0\) as \(n \rightarrow +\infty \).

Now we introduce the well-posedness notions in the first group. These notions extend to the vector case the classical notion of Tykhonov well-posedness for scalar optimization problems. The solution of a vector optimization problem generally is not unique and these notions refer to the set of efficient points. Basically, according to the following notions, a vector optimization problem is well-posed when every minimizing sequence “approaches” in some way the set \({\mathrm{Eff\,}}(A, f)\).

Definition 3.2

(Bednarczuk, 1994b) Problem (Af) is called B-well-posed (B-wp for short) if:

  1. (i)

    Min\((A,f) \ne \emptyset ,\)

  2. (ii)

    for every B-minimizing sequence \(x^n\in A\backslash \textrm{Eff}(A, f)\) there exists a subsequence converging to some element of Eff(Af).

Definition 3.3

(Bednarczuk, 1994b) Problem (Af) is called weakly B-well-posed (weakly B-wp for short) if:

  1. (i)

    Min\((A,f) \ne \emptyset ,\)

  2. (ii)

    for every B-minimizing sequence \(x^n\in A\) it holds \(d(x^{n},\textrm{Eff}(A,f)) \rightarrow 0\) as \(n \rightarrow +\infty .\)

Definition 3.4

(Miglierina & Molho, 2003) Problem (Af) is called MM-well-posed (MM-wp for short) if

  1. (i)

    Min\((A,f) \ne \emptyset ,\)

  2. (ii)

    for every MM-minimizing sequence \(x^n\in A\) it holds \(d(x^{n},\textrm{Eff}(A,f)) \rightarrow 0\) as \(n \rightarrow +\infty .\)

The second group of well-posedness definitions gathers those notions that refer to weakly efficient points. We need the following notions of minimizing sequence.

Definition 3.5

(Huang, 2000; Crespi et al., 2007)

  1. (i)

    A sequence \(x^n\in A\) is HCGR-minimizing when \( \exists k^0 \in \textrm{int}K, \alpha _{n} \ge 0, \alpha _{n} \rightarrow 0 \) such that \(f(x) - f(x^{n}) + \alpha _{n}k^0\not \in -\textrm{int}K, \forall x \in A\), \(\forall n \in \mathbb {N}\)

  2. (ii)

    A sequence \(x^n\in A\) is H-minimizing when \(\exists k^0 \in \textrm{int}K, \; \alpha _{n} \ge 0, \; \alpha _{n} \rightarrow 0,\) and \(y^{n} \in \textrm{WMin}(A,f)\) such that \(f(x^{n}) \le _{K} y^{n} + \alpha _{n}k^0\), \(\forall n \in \mathbb {N}\),

  3. (iii)

    A sequence \(x^n\in A\) is Hw-minimizing when \(d(f(x^{n}),\textrm{WMin}(A,f)) \rightarrow 0\) as \(n \rightarrow +\infty .\)

Remark 3.1

It can be shown (see e.g. Crespi et al., 2007) that Definitions 3.5 (i) and (ii) do not depend on the choice of \(k^0\in \textrm{int}\,K\), i.e. if \(\{x^n\}\) is HCGR-minimizing or H-minimizing with respect to some \(k^0\in \textrm{int}\,K\), then it is HCGR-minimizing or H-minimizing with respect to every \(k^0\in \textrm{int}\,K\).

Remark 3.2

In point (i) of Definition 3.5 we can substitute \(\textrm{int}\, K\) with K.

Well-posedness notions in the second group extend the Tykhonov definition for scalar optimization problems, basically requiring that every minimizing sequence “approaches” in some way the set \({\mathrm{WEff\,}}(A, f)\).

Definition 3.6

(Crespi et al., 2007) Problem (Af) is called CGR-well-posed if:

  1. (i)

    WEff\((A,f) \ne \emptyset ,\)

  2. (ii)

    for every HCGR-minimizing sequence it holds \(d(x^{n},\textrm{WEff}(A,f)) \rightarrow 0\) as \(n \rightarrow +\infty .\)

Definition 3.7

(Huang, 2000) Problem (Af) is called H-well-posed, if:

  1. (i)

    WEff\((A,f) \ne \emptyset ,\)

  2. (ii)

    for every H-minimizing sequence there exists a subsequence converging to some element of \(\textrm{WEff}(A, f)\).

Definition 3.8

(Huang, 2000) Problem (Af) is called Hs-well-posed if:

  1. (i)

    WEff\((A,f) \ne \emptyset ,\)

  2. (ii)

    for every HCGR-minimizing sequence there exists a subsequence converging to some element of \(\textrm{WEff}(A, f)\).

Definition 3.9

(Huang, 2000) Problem (Af) is called Hw-well-posed if:

  1. (i)

    WEff\((A,f) \ne \emptyset ,\)

  2. (ii)

    for every Hw-minimizing sequence it holds \(d(x^{n},\textrm{WEff}(A,f)) \rightarrow 0\) as \(n \rightarrow +\infty .\)

Finally we recall a global well-posedness notion with respect to properly efficient solutions (Lalitha & Chatterjee, 2013).

Definition 3.10

(Lalitha & Chatterjee, 2013) A sequence \(x^n\in A\) is said to be an LC-minimizing sequence if, for any n, there exist \(y^n \in \textrm{PMin}(A, f )\) and \(k^n \in K\) with \(k^n\rightarrow 0\) such that \(f(x^n) \le _K y^n + k^n\).

Definition 3.11

(Lalitha & Chatterjee, 2013) Problem (Af) is called LC-well-posed if:

  1. (i)

    PEff\((A,f) \ne \emptyset ,\)

  2. (ii)

    for every LC-minimizing sequence it holds \(d(x^{n},\textrm{PEff}(A,f)) \rightarrow 0\) as \(n \rightarrow +\infty .\)

In the next section we will investigate relationships among the various global well-posedness notions and we will provide several examples. We close this section with two examples that illustrate Definition 3.11.

Example 3.1

(Lalitha & Chatterjee, 2013) Consider problem (Af) with \(f: \mathbb {R}^2 \rightarrow \mathbb {R}^2\) being the identity mapping and

$$\begin{aligned} A= \mathbb {R}^2_+ \cup \{(x_1, x_2): x_2 \ge -\sqrt{1-(x_1-1)^2}, \ 0 \le x_1 \le 2\} \end{aligned}$$
(5)

It can be seen that for \(K= \mathbb {R}^2_+\) we have

$$\begin{aligned} \textrm{PEff}(A, f)= \{(x_1, x_2): x_2 = -\sqrt{1-(x_1-1)^2}, \ 0< x_1< 1\} \end{aligned}$$
(6)

and problem (Af) is LC-well-posed.

Example 3.2

(Lalitha & Chatterjee, 2013) Consider problem (Af) with \(f: \mathbb {R}\rightarrow \mathbb {R}^2\) defined as

$$\begin{aligned} f(x)= {\left\{ \begin{array}{ll} (x, \sqrt{1+x^2}), \ x<0\ \\ (-x, x), \ x\ge 0 \end{array}\right. } \end{aligned}$$

\(A=\mathbb {R}\) and \(K= \mathbb {R}^2_+\). It holds

$$\begin{aligned} \textrm{PMin}(A, f)=\{(-x, x): x \ge 0\} \end{aligned}$$
(7)

and \(\textrm{PEff}(A, f)= \{x: x\ge 0\}\). For the sequence \(\{x^n\}\) with \(x^n=-n\) we have \(f(x^n)= (-n, \sqrt{1+n^2})\). Hence, we observe that \(y^n-f(x^n)+k^n \in K\) where \(y^n =(-n, n)\in \textrm{PMin}(A, f)\) and \(k^n=(\frac{1}{n}, \sqrt{1+n^2}-n)\in K\), that is, \(\{x^n\}\) is an LC-minimizing sequence. Clearly problem (Af) is not LC-well-posed since \(d(x^n, \textrm{PEff}(A, f)) \not \rightarrow 0\).
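The computations in Example 3.2 can be verified numerically; the following sketch (the tolerances are ours) checks componentwise that \(x^n=-n\) is LC-minimizing, while its distance from \(\textrm{PEff}(A, f)=[0,+\infty )\) diverges:

```python
import math

f = lambda x: (x, math.sqrt(1 + x * x)) if x < 0 else (-x, x)

for n in range(1, 100):
    xn = -n
    yn = (-n, n)                                  # a point of PMin(A, f)
    kn = (1.0 / n, math.sqrt(1 + n * n) - n)      # k^n in K with k^n -> 0
    # f(x^n) <=_K y^n + k^n componentwise:
    assert all(a <= b + c + 1e-12 for a, b, c in zip(f(xn), yn, kn))

assert math.sqrt(1 + 99 ** 2) - 99 < 0.01             # the second component of k^n vanishes
dist_to_PEff = [abs(-n - 0) for n in range(1, 100)]   # d(x^n, [0, +inf)) = n
assert dist_to_PEff[-1] == 99                         # the distance diverges: not LC-well-posed
```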

4 Relations among well-posedness notions

In this section we establish relations among the various global well-posedness notions. We need the following assumptions.

  1. A1.

    For every \(\varepsilon >0\) there exists \(\delta >0\) such that

    $$\begin{aligned} \left( f(A)- \textrm{Min}(A, f)\right) \cap \left( \delta B-K \right) \subseteq \varepsilon B \end{aligned}$$

    (see e.g. (Miglierina et al., 2005)).

  2. A2.

    \(\textrm{Eff}(A, f)\) compact.

  3. A3.

    \(\textrm{WEff}(A, f)\) compact.

  4. A4.

    \(\textrm{Eff}(A, f)=\textrm{WEff}(A, f)\)

  5. A5.

    \(\textrm{Eff}(A, f)= \textrm{PEff}(A, f)\).

In the following we will use the notation \(\Longrightarrow _{{Ai}}\) when the implication holds under assumption Ai.

4.1 Relations among well-posedness notions in the first group

The following relations are known (see e.g. (Huang, 2000; Miglierina et al., 2005)).

Theorem 4.1

$$\begin{aligned} \begin{array}{ccccc} &{} \Longrightarrow &{} &{} \Longrightarrow &{} \\ \text{B-wp} &{} &{} \text{weakly B-wp} &{} &{} \text{MM-wp} \\ &{} \Longleftarrow _{A2} &{} &{} \Longleftarrow _{A1} &{} \end{array} \end{aligned}$$

The next example shows that assumption A2 cannot be dropped if weak B-well-posedness is to imply B-well-posedness.

Example 4.1

(Miglierina et al., 2005) Let \(X=Y=A=\mathbb {R}^2\) and \(K=\mathbb {R}^2_+\), let \(f: \mathbb {R}^2 \rightarrow \mathbb {R}^2\) be defined as \(f(x_1, x_2)=(x_1^2, x_1^2)\). We have \(\textrm{Eff}(A,f)=\{(0, x_2): x_2 \in \mathbb {R}\}\) and it is easily seen that problem (Af) is weakly B-well-posed but not B-well-posed.

The next example shows that assumption A1 is crucial for MM-well-posedness to imply weak B-well-posedness.

Example 4.2

(Miglierina et al., 2005) Let \(f:\mathbb {R}^2 \rightarrow \mathbb {R}^2\) be the identity function, let \(K=\mathbb {R}^2_+\) and

$$\begin{aligned} A=\{(x_1, x_2) \in \mathbb {R}^2: x_1 \ge 0 \vee x_2 \ge 0 \vee x_2 \ge -x_1-1\} \end{aligned}$$

It is easily seen that assumption A1 does not hold and problem (Af) is MM-well-posed but not weakly B-well-posed.

4.2 Relations among well-posedness notions in the second group

From the definitions we obtain the following chain of implications.

Theorem 4.2

$$\begin{aligned} \begin{array}{ccccc} &{} \Longrightarrow _{A3} &{} &{} &{} \\ \text{CGR-wp} &{} &{} \text{Hs-wp} \Longrightarrow \text{H-wp} &{} \Longrightarrow &{} \text{Hw-wp} \\ &{} \Longleftarrow &{} &{} &{} \end{array} \end{aligned}$$

The next example shows that assumption A3 cannot be dropped if CGR-well-posedness is to imply Hs-well-posedness.

Example 4.3

Consider the identity function \(f: A \subseteq \mathbb {R}^{2} \rightarrow \mathbb {R}^{2}\) where \(A= \{(x,y) \in \mathbb {R}^{2}: y \ge 0 \wedge y \ge -x \}.\) Problem (Af) is CGR-well-posed, but not Hs-well-posed since, for example, the HCGR-minimizing sequence \(x^{n} = \left( -n, n+\frac{1}{n} \right) \) does not admit any subsequence converging to a weakly efficient point.

The next examples show that the implications among Hs-well-posedness, H-well-posedness and Hw-well-posedness cannot be reversed even under assumption A3.

Example 4.4

Let \(A=\{ (x_1, x_2) \in \mathbb {R}^2: x_2 \ge -x_1e^{x_1} \}\) and let \(f: \mathbb {R}^2 \rightarrow \mathbb {R}^2\) be the identity function. We have \(\textrm{WEff}(A, f)=\textrm{Eff}(A, f)= \{ (x_1, x_2) \in \mathbb {R}^2: x_1 \ge 0, x_2= -x_1e^{x_1} \}\) and problem (Af) is Hw-well posed but not H-well-posed.

Example 4.5

Let \(A=\{(x_1, x_2) \in \mathbb {R}^2: (x_2 \ge -(x_1-1)e^{x_1}, \ x_1 <0)\vee \ (x_1=0, \ x_2 \in [-2, -1])\}\) and let \(f: \mathbb {R}^2 \rightarrow \mathbb {R}^2\) be the identity function. We have \(\textrm{WEff}(A,f)= \{(x_1, x_2): x_1 =0, \ x_2 \in [-2, -1]\}\) and (Af) is H-well-posed. The sequence \((-n, (n+1)e^{-n})\) is HCGR-minimizing but not H-minimizing and does not converge to an element of \(\textrm{WEff}(A, f)\). Hence (Af) is not Hs-well-posed.

4.3 Relations among well-posedness notions in different groups

Now we give relationships among well-posedness concepts referring to different solution concepts. We need the following result.

Theorem 4.3

A sequence \(x^n\in A\) is B-minimizing if and only if for every \(e \in \textrm{int}K\) there exists \(\alpha _n\rightarrow 0^+\), \(y^n \in \textrm{Min}(A, f)\) such that

$$\begin{aligned} f(x^n) \in y^n -K+\alpha _n e \end{aligned}$$
(8)

Theorem 4.4

It holds:

$$\begin{aligned} \begin{array}{ccc} &{} \Longrightarrow _\textrm{A4} &{} \\ \text{H-wp} &{} &{} \text{B-wp} \\ &{} \Longleftarrow _\textrm{A3, A4} &{} \end{array} \end{aligned}$$

The next example shows that if \(\textrm{WEff}(A, f) \not = \textrm{Eff}(A, f)\), then H-well-posedness does not imply B-well-posedness.

Example 4.6

Let \(A=\{(x_1, x_2) \in \mathbb {R}^2: x_2 \ge g(x_1), x_1 \in [-2,1] \}\), where

$$\begin{aligned} g(x_1)={\left\{ \begin{array}{ll} -x_1^2-x_1, \ x_1\in [-1,1]\ \\ 0, \ x_1 \in [-2, -1]\ \\ \end{array}\right. } \end{aligned}$$

and let \(f: \mathbb {R}^2 \rightarrow \mathbb {R}^2\) be the identity function, \(K=\mathbb {R}^2_+\). Then \(\textrm{Eff}(A, f)= \{(x_1, x_2): x_2= -x_1^2-x_1, x_1 \in (0, 1]\}\), \(\textrm{WEff}(A, f)= \textrm{Eff}(A, f)\cup \{[-2, -1]\times \{0\}\}\). It is easily seen that problem (Af) is H-well-posed. The sequence \(\left( -1+\frac{1}{n}, -(-1+\frac{1}{n})^2+1-\frac{1}{n}\right) \) is B-minimizing and converges to \((-1, 0)\), which belongs to \(\textrm{WEff}(A, f)\) but not to \(\textrm{Eff}(A, f)\). Hence problem (Af) is not B-well-posed.

If the compactness assumption does not hold in the previous theorem, B-well-posedness does not imply H-well-posedness as the following example shows.

Example 4.7

Let \(A= [-1, +\infty )\), \(f: A \rightarrow \mathbb {R}\), \(f(x)= {\left\{ \begin{array}{ll} -x^2+1, \ x\in [-1,1]\ \\ 0, \ x \ge 1\ \\ \end{array}\right. } \), \(K=\mathbb {R}_+\). \(\textrm{WEff}(A, f)=\textrm{Eff}(A, f)=\{-1\}\cup [1, +\infty )\) is not compact and problem (Af) is B-well-posed but not H-well-posed.

Theorems 4.1 and 4.4 entail that under assumptions A3 and A4 it holds:

$$\begin{aligned} \text{B-wp} \Longleftrightarrow \text{weakly B-wp} \Longleftrightarrow \text{H-wp} \end{aligned}$$

The proof of the next result is straightforward from the definitions.

Theorem 4.5

If assumption A5 holds, then LC-wp is equivalent to weakly B-wp.

The relationships among the various global well-posedness notions are summarized below.

$$\begin{aligned} \begin{array}{ccccc} &{} \Longrightarrow &{} &{} \Longrightarrow &{} \\ \text{B-wp} &{} &{} \text{weakly B-wp} &{} &{} \text{MM-wp} \\ &{} \Longleftarrow _{A2} &{} &{} \Longleftarrow _{A1} &{} \\ &{} &{} &{} &{} \\ &{} \Longrightarrow _{A3} &{} &{} &{} \\ \text{CGR-wp} &{} &{} \text{Hs-wp} \Longrightarrow \text{H-wp} &{} \Longrightarrow &{} \text{Hw-wp} \\ &{} \Longleftarrow &{} &{} &{} \end{array} \end{aligned}$$
$$\begin{aligned} \begin{array}{ccc} &{} \Longrightarrow _\textrm{A4} &{} \\ \text{H-wp} &{} &{} \text{B-wp} \\ &{} \Longleftarrow _\textrm{A3, A4} &{} \end{array} \end{aligned}$$
$$\begin{aligned} \text{weakly B-wp} \; \Longleftrightarrow _{A5} \; \text{LC-wp} \end{aligned}$$

5 Global well-posedness and scalarization

When dealing with vector optimization, scalarization is a useful technique to reduce the problem under consideration to a single objective problem. In Miglierina et al. (2005) the authors introduce a new approach to the study of well-posedness for vector optimization problems: they show that the various pointwise well-posedness notions for a vector optimization problem can be characterized as well-posedness properties of a suitable scalarized problem, thereby establishing a parallelism between scalar and vector well-posedness. Among the various scalarization procedures known in the literature, the authors of Miglierina et al. (2005) choose the one based on the oriented distance function. In this section, we apply a similar approach to characterize some of the global well-posedness notions in terms of well-posedness of a suitable scalarized problem.

Consider the scalar minimization problem (Ag)

$$\begin{aligned} \min g(x), \; \; x \in A \end{aligned}$$

where \(g: A \subseteq X \rightarrow \mathbb {R}\) and denote by \(\textrm{argmin}(A,g)\) the solution set of problem (Ag), which we assume to be nonempty. A sequence \( x^n \in A\) is minimizing for problem (Ag) when \(g(x^{n}) \rightarrow \inf g(A).\) Now, we recall some well-posedness notions for scalar optimization problems (see e.g. Dontchev & Zolezzi, 1993).

Definition 5.1

Problem (Ag) is well-posed in the generalized sense when \(\textrm{argmin}(A, g) \not = \emptyset \) and every minimizing sequence contains a subsequence converging to an element of \(\textrm{argmin}(A, g)\).

When \(\textrm{argmin} (A, g)\) is a singleton, the previous definition reduces to the classical notion of Tykhonov well-posedness.

We recall also a generalization of the above-mentioned notion (see e.g. Bednarczuk & Penot, 1994).

Definition 5.2

Problem (Ag) is metrically well-set (m-well-set) if:

  1. (i)

    argmin\((A,g) \ne \emptyset ,\)

  2. (ii)

    for every minimizing sequence \(x^n \in A, \) \(d(x^{n},\textrm{argmin}(A,g)) \rightarrow 0.\)

We consider the following scalar problem associated with the vector one (Af):

$$\begin{aligned} \min h(x), \; \; x \in A, \end{aligned}$$

where

$$\begin{aligned} h(x)= \inf _{z \in \textrm{Eff}(A, f)}\Delta _{-K}(f(x)-f(z)) \end{aligned}$$
(9)

In the sequel we denote this problem by (Ah). The next theorem characterizes the set of efficient points of the vector problem (Af) in terms of the solutions of the scalar problem (Ah).

Theorem 5.1

Assume \(\textrm{Eff}(A, f)\) is compact and f is continuous. It holds \(h(x) \ge 0, \ \forall x \in A\), and \(h(x)=0\) if and only if \(x \in \textrm{Eff}(A, f)\). Hence \(\textrm{Eff}(A, f)\) coincides with the set of solutions of problem (Ah).

The next results provide a characterization of B-well-posedness of vector problem (Af) in terms of well-posedness of the scalar problem (Ah).

Theorem 5.2

A sequence \(x^n\in A\) is B-minimizing if and only if it is minimizing for problem (Ah).

Theorem 5.3

Let \(f:X \rightarrow Y\) be a continuous function and \(\textrm{Eff}(A,f)\) be a compact set. Then problem (Af) is B-well-posed if and only if the scalar problem (Ah) is well-posed in the generalized sense.

We consider now the following scalar problem associated with the vector one (Af):

$$\begin{aligned} \min h_1(x), \; \; x \in A, \end{aligned}$$

where

$$\begin{aligned} h_1(x) = - \inf _{z \in A} \Delta _{-K} (f(z) - f(x)) \end{aligned}$$

In the sequel we denote this problem by \((A,h_1).\) Function \(h_1\) is always nonnegative; indeed, it is enough to observe that for \(z = x\) we have

$$\begin{aligned} \Delta _{-K}(f(x)-f(x)) = \Delta _{-K}(0) = 0 \end{aligned}$$

We observe that function \(h_1\) can be rewritten as

$$\begin{aligned} h_1(x) = \sup _{z \in A} [-\Delta _{-K}(f(z)-f(x))] \end{aligned}$$

and, recalling Proposition 2.1, we get

$$\begin{aligned} h_1(x) = \sup _{z \in A} \min _{\xi \in K^{+} \cap S} \langle \xi , f(x) - f(z) \rangle . \end{aligned}$$

We show that the weak solutions of the vector problem (Af) can be completely characterized by solutions of the scalar problem \((A,h_1)\).

Theorem 5.4

Let \(\bar{x} \in A.\) Then \(\bar{x} \in \textrm{WEff}(A,f)\) if and only if \(h_1(\bar{x}) = 0\) (and hence \(\bar{x} \in \textrm{argmin}\,(A,h_1)\)).
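Theorem 5.4 can be illustrated on a finite image set. In the sketch below (the identity objective, the data and the discretization of \(K^+\cap S\) are ours, for illustration only) \(h_1\) vanishes exactly on the weakly efficient points:

```python
import math

THETAS = [math.pi / 2 * k / 2000 for k in range(2001)]  # sample of K^+ with norm 1, K = R^2_+

def h1(x, Y):
    """h1(x) = sup_z min over xi in K^+ with ||xi|| = 1 of <xi, x - z>, over a finite set Y."""
    def inner(z):
        return min(math.cos(t) * (x[0] - z[0]) + math.sin(t) * (x[1] - z[1])
                   for t in THETAS)
    return max(inner(z) for z in Y)

Y = [(0, 3), (1, 1), (3, 0), (2, 2), (1, 3)]   # image points, identity objective
weff = [y for y in Y if not any(z[0] < y[0] and z[1] < y[1] for z in Y)]
assert weff == [(0, 3), (1, 1), (3, 0), (1, 3)]
for y in Y:
    if y in weff:
        assert abs(h1(y, Y)) < 1e-6   # h1 = 0 on the weakly efficient points
    else:
        assert h1(y, Y) >= 0.999      # (2, 2) is strictly dominated by (1, 1)
```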

Theorem 5.5

A sequence is HCGR-minimizing if and only if it is minimizing for problem \((A, h_1)\).

The next result establishes the equivalence between well-posedness of the vector problem (Af) and of the related scalar problem \((A,h_1).\) The proof is straightforward, combining Theorems 5.4 and 5.5.

Theorem 5.6

(Af) is CGR-well-posed if and only if \((A,h_1)\) is m-well-set.

We close this section considering the remarkable case of linear scalarization.

Consider functions \(g_{\lambda }(x)= \langle \lambda ,f(x)\rangle \) with \(\lambda \in K^{+} \setminus \{0\}\) and the family of parametric scalar problems \((X,g_{\lambda })\) given by

$$\begin{aligned} \min g_{\lambda }(x), \ \ x \in X, \end{aligned}$$

where \(\lambda \in K^{+} \cap \partial B\).

Theorem 5.7

(Crespi et al., 2011) Let \(f:X \rightarrow Y \) be \( K-\)convex on the convex set A and assume \({\mathrm{WMin\,}}(A, f)\) is closed. If problems \((X,g_{\lambda })\) are metrically well-set for every \(\lambda \in K^{+} \cap \partial B\), then problem (Af) is Hw-well-posed.

In general, the converse of the theorem is not true, as the following example shows.

Example 5.1

Let \(f:A\subseteq \mathbb {R}^{2} \rightarrow \mathbb {R}^{2}\) be defined as \(f(x_{1},x_{2}) = \left( \frac{x_{1}^{2}}{x_{2}},x_{1} \right) \), \( K= \mathbb {R}^{2}_{+}\) and \(A= \{ (x_{1},x_{2}) \in \mathbb {R}^{2}: x_{1} \ge 0, \ x_{2} \ge 1 \}.\) The objective function is \(K-\)convex, \({\mathrm{WMin\,}}(A, f) = \{(0,0)\}\), \({\mathrm{WEff\,}}(A, f) = \{(0,x_{2}): x_{2} \ge 1 \}\). Problem (Af) is Hw-well-posed but the scalar problem \((X,g_{\lambda })\) with \(\lambda = (1,0)\) is not metrically well-set.
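Example 5.1 can be checked numerically; the sketch below (the illustrative sequence is ours) exhibits a minimizing sequence for \(g_{\lambda }\) with \(\lambda = (1,0)\) that stays at distance 1 from \(\textrm{argmin}(A, g_{\lambda })=\{(0, x_2): x_2 \ge 1\}\):

```python
g = lambda x: x[0] ** 2 / x[1]   # g_lambda(x) = <lambda, f(x)> with lambda = (1, 0)

seq = [(1.0, float(n)) for n in range(1, 1001)]
assert g(seq[-1]) < 1e-2                 # g(x^n) -> 0 = inf g over A: a minimizing sequence
assert all(x[0] == 1.0 for x in seq)     # but d(x^n, argmin(A, g)) = 1 for every n
```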

6 Well-posedness of cone-convex functions

A classical result in scalar optimization states that optimization problems involving convex functions defined on a finite dimensional space with a unique minimum point are Tykhonov well-posed. It is also well-known that this result cannot be extended to the case of functions defined on an infinite dimensional space (see e.g. Dontchev & Zolezzi, 1993). In this section we consider extensions of this result to vector optimization problems involving K-convex functions. We assume X and Y are finite dimensional spaces. The next result concerns B-well-posedness of K-convex functions.

Theorem 6.1

(Miglierina et al., 2005) Let f be K-convex and continuous on the convex set A. For every \({\bar{y}}\in \textrm{Min}( A, f)\), assume that \(f^{-1}( {\bar{y}})\) is compact and suppose that \(\textrm{Min} (A, f)\) is compact. Then, problem (Af) is B-well-posed.

Remark 6.1

Since B-wp is the strongest notion among the well-posedness notions referring to the set of efficient points (see Sect. 4), the assumptions of the previous theorem ensure that problem (Af) is also weakly B-wp and MM-wp.

Now we give a result concerning CGR-well-posedness of K-convex functions.

Definition 6.1

A set \(A \subseteq X\) is said to be connected when there are no open subsets U and V of X such that:

$$\begin{aligned} A \subseteq U\cup V, \ \ A \cap U \not =\emptyset , \ \ A \cap V \not = \emptyset \ \ \textrm{and}\ \ A \cap U \cap V=\emptyset . \end{aligned}$$

The next result concerns CGR-well-posedness.

Theorem 6.2

Let \(f: X \rightarrow Y\) be a continuous K-convex function and assume \(\textrm{WEff}(A, f)\) is bounded. Then problem (Af) is CGR-well-posed.

Since (under assumption A3) CGR-wp is the strongest notion among the well-posedness notions referring to the set of weakly efficient points (see Sect. 4), the assumptions of the previous theorem ensure that problem (Af) is also Hs-wp, H-wp and Hw-wp.

Finally we recall the following result about LC-well-posedness of cone-convex functions.

Theorem 6.3

(Lalitha & Chatterjee, 2013) If in problem (Af), A is a closed, convex subset of X, \(f: A \rightarrow Y\) is a continuous K-convex map on A and \(\textrm{PEff}(A, f)\) is compact, then problem (Af) is LC-well-posed.

In the previous results the boundedness assumption on \({\mathrm{WEff\,}}(A, f)\) and the compactness assumptions on \({\mathrm{Eff\,}}(A, f)\) and \(\textrm{PEff}(A, f)\) cannot be avoided, as the following example shows.

Example 6.1

Let \(A= \mathbb {R}\times [1, +\infty ) \subseteq \mathbb {R}^2\) and \(f: A \rightarrow \mathbb {R}^2\) be defined as \(f(x, y)=( \frac{x^2}{y}, \frac{x^2}{y})\). We have \({\mathrm{Eff\,}}(A, f)={\mathrm{WEff\,}}(A, f)=\textrm{PEff}(A, f)= \{(0, y): y \in [1, +\infty )\}\) and \(\textrm{Min} (A, f)= \textrm{WMin} (A, f)= \textrm{PMin}(A, f)=\{(0,0)\}\). The sequence \((n, n^3)\) is B-minimizing, HCGR-minimizing and LC-minimizing, but does not converge to an element of \({\mathrm{Eff\,}}(A, f)={\mathrm{WEff\,}}(A, f)=\textrm{PEff}(A, f)\).
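The sequence in Example 6.1 can be checked numerically; the following small sketch uses the stated data:

```python
f = lambda p: (p[0] ** 2 / p[1], p[0] ** 2 / p[1])

seq = [(n, n ** 3) for n in range(1, 101)]
# Image values converge to the unique minimal point (0, 0): f(n, n^3) = (1/n, 1/n).
assert max(f(seq[-1])) < 0.02
# Yet d((n, n^3), Eff(A, f)) = n, since Eff(A, f) = {0} x [1, +inf): the sequence diverges.
assert [p[0] for p in seq] == list(range(1, 101))
```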