1 Introduction

It is widely acknowledged that well-posedness plays an important role in the study of stability theory in optimization. Tykhonov well-posedness, a classic concept for global optimization problems, was initially developed by Tykhonov (1966). This concept was motivated by numerical algorithms that generate approximate solution sequences for a given optimization problem and it was based on two main requirements: the unique existence of an optimizer and the convergence of all optimizing sequences to this optimizer. Since then, many mathematicians have proposed generalized Tykhonov well-posedness concepts for optimization problems. Three main extensions of this concept can be listed as follows. The first one, Levitin–Polyak well-posedness, has been studied for families of constrained optimization problems; see, for instance, Huang and Yang (2006); Peng et al. (2012); Duy (2021) and the references therein. The second one, generalized well-posedness, has been defined for optimization problems with more than one optimizer, but requiring a certain convergence of some subsequences of every optimizing sequence to an optimizer (Miglierina et al. 2005; Amini-Harandi et al. 2016; Sofonea and Xiao 2019). The third one, known as extended well-posedness (also called well-posedness under perturbations), was first proposed by Zolezzi (1996) for scalar optimization problems. This last concept involves embedding the original problem into a class of perturbed problems and requiring convergence of each asymptotically optimizing sequence corresponding to slight perturbations to a solution of the original problem; see, for instance, Crespi et al. (2009); Anh and Duy (2016); Gutiérrez et al. (2016a).

On the other hand, set-valued optimization problems, which involve set-valued objective maps with values in ordered spaces, have attracted significant attention in recent years due to their real-world applications in various fields including finance, game theory and engineering; see Khan et al. (2016); Hamel et al. (2015) and the references therein. There are various approaches to define an optimal solution of a set-valued optimization problem. The vector approach involves solution notions inspired by well-known concepts like efficient, weak efficient and proper efficient solutions in vector optimization. The set approach compares values of a set-valued objective map by using appropriate order relations on the power set of the objective space. A set-valued optimization problem with the set approach is called a set optimization problem. Many aspects of set optimization theory have been extensively studied, such as existence results (Jahn and Ha 2011; Hernández and López 2019), optimality conditions (Ansari and Bao 2019), and stability (Gutiérrez et al. 2016b; Anh et al. 2020c, a; Preechasilp and Wangkeeree 2019; Duy 2023a, b). In the complete lattice approach, set order relations are used to create an image space where the inclusion of a subset or superset serves as a partial order; see, for instance, Hamel et al. (2015); Hamel and Löhne (2018); Crespi et al. (2021). Furthermore, it is well known that the class of semi-infinite optimization problems plays an important role in approximation theory, optimal design and various engineering problems. Semi-infinite optimization problems have recently become an active area of research in mathematical programming due to their broad range of applications; see Chuong and Yao (2014); Chuong (2018); Kim and Son (2018); Khanh and Tung (2020) and the references therein for more details and discussions.
However, the combination of semi-infinite optimization theory and set optimization problems has not been well researched so far.

The study of well-posedness in set optimization dates back to Zhang et al. (2009). Since then, various concepts of well-posedness, popularly divided into two classes, pointwise and global well-posedness, have been proposed and studied for set optimization problems. The former considers a given efficient solution and examines well-posedness at that point, while the latter deals with the entire efficient solution set; see Gutiérrez et al. (2012); Long and Peng (2013); Crespi et al. (2018); Vui et al. (2020); Han and Huang (2017); Han et al. (2020); Anh et al. (2020b). It is worth emphasizing that investigating sufficient conditions for global well-posedness for efficient solutions is not an easy task in set optimization. In our view, there are two popular approaches in the literature to reach this goal. The first one imposes compactness of, or related assumptions on, the efficient solution set; see, for instance, Zhang et al. (2009); Gutiérrez et al. (2012); Long and Peng (2013); Crespi et al. (2018); Vui et al. (2020). The second one relies on additional conditions ensuring the coincidence of the efficient and weakly efficient solution sets; see Han and Huang (2017); Han et al. (2020). Admittedly, both approaches to investigating sufficient criteria for global well-posedness for efficient solutions are rather restrictive and unnatural, so an alternative approach is needed to address these limitations.

Motivated by the above research stream, our aim in this work is to study two notions of well-posedness that concern the whole efficient solution set of a semi-infinite set optimization problem, without fixing a specific efficient solution. We first study the global well-posedness of a given problem, and then we propose a notion of well-posedness under perturbations through a sequence of approximate problems. We show that these notions can be characterized via qualitative properties of an approximately minimal solution map associated with the reference problem. The work also aims to establish sufficient conditions for these notions, even when the efficient and weakly efficient solution sets are different, without imposing any assumption on the efficient solution set.

The remainder of this work is organized as follows. Section 2 provides some necessary definitions and results. In Sect. 3, we introduce a notion of global well-posedness in the setting of a semi-infinite set optimization problem, and then we give its characterizations and sufficient conditions. In Sect. 4, we extend the study to a well-posedness under perturbations in terms of a sequence of perturbed problems.

2 Preliminaries

We begin this section by introducing some notation which will be used throughout this article unless otherwise specified. Let \( {\mathbb {X}} \) and \( {\mathbb {T}} \) be Banach spaces. For a nonempty subset A of a Banach space, we denote the interior, closure and boundary of A by \( {\text {int}} A, {\text {cl}} A \) and \( {\text {bd}}A, \) respectively. For a given \(i\in {\mathbb {N}}:=\{1,2,\ldots \}\), the metric on \({\mathbb {R}}^i\) is assumed to be generated by the Euclidean norm \(\Vert \cdot \Vert _i.\) We write \( {\mathbb {R}}^i_+ \) to denote the set of all vectors in \( {\mathbb {R}}^i\) with non-negative components; in particular, \( {\mathbb {R}}_+:= {\mathbb {R}}^1_+. \) For an element x in a metric space, the open (resp. closed) ball of radius \( r>0 \) centered at x is denoted by \( B(x,r) \) (resp. \( B[x,r] \)). For nonempty subsets A and B of \( {\mathbb {R}}^i, \) \(H(A,B)=\max \{h(A,B), h(B,A)\}\) is the Hausdorff distance between A and B, where \(h(A,B)=\sup _{a\in A}d(a,B)\) with \(d(a,B)=\inf _{b\in B}d(a,b)\).
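For finite subsets of \( {\mathbb {R}}^i, \) the quantities d, h and H above can be computed directly. The following Python sketch is an illustration only (the function names are ours, not part of the text):

```python
import math

def d(a, B):
    # d(a, B) = inf_{b in B} ||a - b|| for a point a and a finite set B
    return min(math.dist(a, b) for b in B)

def h(A, B):
    # one-sided excess: h(A, B) = sup_{a in A} d(a, B)
    return max(d(a, B) for a in A)

def H(A, B):
    # Hausdorff distance: H(A, B) = max{h(A, B), h(B, A)}
    return max(h(A, B), h(B, A))
```

Note that h is not symmetric, which is exactly why both one-sided excesses enter the definition of H.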

For \( l, m \in {\mathbb {N}},\) let \( C\subset {\mathbb {R}}^l\) and \(K\subset {\mathbb {R}}^m \) be proper, closed, convex, pointed and solid cones. We define the partial orders \( \le _K \) and \( <_K \) in \( {\mathbb {R}}^m \) associated with the cone K as follows: for \( x,y\in {\mathbb {R}}^m, \)

$$\begin{aligned} x\le _K y \ (x<_Ky ) \Longleftrightarrow y-x \in K \ (y-x \in {\text {int}}K). \end{aligned}$$

The following simple property will be used frequently in the sequel.

Lemma 1

Let Z be a Banach space and \( D\subset Z \) be a proper pointed convex cone with a nonempty interior. Then for any \( k\in {\text {int}}D, \) we have \( Z{\setminus }(k-D)+D= Z{\setminus }(k-D) \).

Proof

The inclusion \( Z{\setminus }(k-D)\subset Z{\setminus }(k-D)+D\) is obvious. It suffices to show that the opposite inclusion also holds. To this end, take an arbitrary \( x\in Z{\setminus }(k-D)+D.\) Then there are \( y\in Z{\setminus }(k-D) \) and \( d\in D \) such that \( x=y+d,\) or equivalently \( y=x-d. \) Suppose that \( x\in k-D.\) From \( y=x-d \) and the convexity of D, we obtain

$$\begin{aligned} y\in k-D -D \subset k-D, \end{aligned}$$

which is a contradiction. Therefore, \( Z{\setminus }(k-D)+D\subset Z{\setminus }(k-D). \) \(\square \)
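Lemma 1 can be sanity-checked numerically in the concrete case \( Z={\mathbb {R}}^2 \) and \( D={\mathbb {R}}^2_+ \): a point lies outside \( k-D \) exactly when some component of it exceeds the corresponding component of k, and adding \( d\in D \) can only increase components. The following Python sketch is our illustration of this special case, not part of the text:

```python
import random

def outside(z, k):
    # z ∈ Z \ (k - D) with D = R^2_+ iff some component of z exceeds k
    return any(zi > ki for zi, ki in zip(z, k))

def check_lemma1(k, trials=1000, seed=0):
    # randomized check that (Z \ (k - D)) + D ⊂ Z \ (k - D)
    rng = random.Random(seed)
    for _ in range(trials):
        z = (rng.uniform(-3, 3), rng.uniform(-3, 3))
        dd = (rng.uniform(0, 3), rng.uniform(0, 3))
        if outside(z, k) and not outside((z[0] + dd[0], z[1] + dd[1]), k):
            return False
    return True
```

Such a randomized check is, of course, no substitute for the proof; it merely illustrates why the complement of \( k-D \) is stable under adding elements of D.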

We recall some concepts of cone continuity for a vector-valued map, which are generalizations of the classical upper and lower semicontinuities of a scalar function.

Definition 1

(Khan et al. 2016, Definition 3.1.28) Let A be a nonempty subset of \( {\mathbb {X}} \) and \( {\bar{x}} \in A. \) A vector-valued map \(g:A \rightarrow {\mathbb {R}}^m\) is said to be

  1. (a)

    K-lower semicontinuous (K-lsc) at \({\bar{x}}\) if for any neighborhood V of \(g({\bar{x}})\), there exists a neighborhood U of \({\bar{x}}\) such that \(g(x) \in V + K, \quad \forall x\in U\cap A;\)

  2. (b)

    K-upper semicontinuous (K-usc) at \({\bar{x}}\) if \((-g)\) is K-lsc at \({\bar{x}};\)

  3. (c)

    K-continuous at \({\bar{x}}\) if it is both K-lsc and K-usc at \({\bar{x}}\).

We next recall a generalized cone convexity notion for a vector-valued map.

Definition 2

(Anh and Khanh 2010) Let A be a nonempty convex subset of \( {\mathbb {X}} \) and \( g: A\rightarrow {\mathbb {R}}^m \). We say that g is generalized K-quasiconvex on A if for each \( x_1,x_2 \in A\) and \( \lambda \in ]0,1[ ,\) one has

$$\begin{aligned} g( x_1)\in -K \text{ and } g(x_2)\in -{\text {int}}K \text{ imply } \text{ that } g(\lambda x_1+ (1-\lambda )x_2)\in -{\text {int}}K. \end{aligned}$$

Definition 3

(Khan et al. 2016, Definitions 3.1.1 and 3.1.12) Let A be a nonempty subset of \( {\mathbb {X}} \), \( G: A\rightrightarrows {\mathbb {R}}^l \) be a set-valued map and \( x_0\in {{\,\textrm{dom}\,}}G. \) The set-valued map G is

  1. (a)

    upper continuous (u.c.) at \(x_0\) if for each open subset \(U\subset {\mathbb {R}}^l\) with \(G(x_0)\subset U,\) there exists a neighborhood N of \(x_0\) such that for all \( x\in N\cap {\text {dom}}G, \) \(G(x)\subset U;\)

  2. (b)

    lower continuous (l.c.) at \(x_0\) if for each open subset \(U\subset {\mathbb {R}}^l\) with \( G(x_0)\cap U\ne \emptyset , \) there is a neighborhood N of \(x_0\) such that for all \( x\in N\cap {\text {dom}}G,\) \(G(x)\cap U\ne \emptyset ;\)

  3. (c)

    continuous at \( x_0 \) if it is both u.c. and l.c. at \( x_0;\)

  4. (d)

    upper Hausdorff continuous at \(x_0\) if for each neighborhood U of \( 0_{{\mathbb {R}}^l},\) there exists a neighborhood N of \(x_0\) such that for all \( x\in N\cap {\text {dom}}G, \) \(G(x)\subset G(x_0)+U;\)

  5. (e)

    closed at \(x_0\) if for all \(\{(x_n, y_n)\}\subset {{\,\textrm{gph}\,}}G, (x_n, y_n)\rightarrow (x_0,y_0)\), one has \(y_0\in G(x_0);\)

  6. (f)

    compact at \(x_0\) if for all \(\{(x_n, y_n)\}\subset {{\,\textrm{gph}\,}}G, x_n\rightarrow x_0\), one can extract a subsequence \( \{y_{n_k}\} \) of \( \{y_n\} \) such that \( y_{n_k}\rightarrow y_0\in G(x_0)\).

The following result is known as a characterization for the upper and lower continuity properties of a set-valued map.

Lemma 2

(Khan et al. 2016, Propositions 3.1.6 and 3.1.9) Let A be a nonempty subset of \( {\mathbb {X}} \), \(G:A\rightrightarrows {\mathbb {R}}^l \) be a set-valued map and \( x_0\in {{\,\textrm{dom}\,}}G. \) The following assertions hold true.

  1. (a)

    Suppose that \( G(x_0) \) is compact. The map G is u.c. at \( x_0 \) if and only if for all \(\{x_n\}\subset A, x_n\rightarrow x_0\), every sequence \(\{y_n\}, y_n\in G(x_n),\) has a subsequence converging to \(y_0\) for some \( y_0\in G(x_0)\).

  2. (b)

    G is l.c. at \(x_0\) if and only if for each \(y\in G(x_0)\) and for each sequence \(\{x_n\}, x_n\rightarrow x_0\), there is a sequence \(\{y_n\}\) such that \( y_n\rightarrow y\) and \(y_n\in G(x_n)\) for n large enough.

  3. (c)

    G is l.c. at \(x_0\) iff \( G(x_0)\subset \liminf _{x\rightarrow x_0}G(x) \), where

    $$\begin{aligned} \liminf _{x\rightarrow x_0}G(x)=\{ y\in {\mathbb {R}}^l: \lim _{x\rightarrow x_0}d(y,G(x)) =0 \}. \end{aligned}$$

Unless otherwise stated, let A and T be nonempty closed subsets of \( {\mathbb {X}} \) and \( {\mathbb {T}}, \) respectively. Let \( F:A \rightrightarrows {\mathbb {R}}^l\) be a set-valued map and \( f: A\times T \rightarrow {\mathbb {R}}^m \) be a vector-valued map. We consider the following semi-infinite set optimization problem:

$$\begin{aligned} \text{ Minimize } \ F(x) \ \text{ subject } \text{ to } \ x\in \Omega (f), \end{aligned}$$
(SISP)

where \( \Omega (f)=\{x\in A: f(x,t)\le _K 0_{{\mathbb {R}}^m}, \ \forall t\in T \}. \)

In this article, minimal solutions to the problem (SISP) are defined with respect to the upper type less order relation, which was introduced by Kuroiwa (1998). For nonempty sets \( B_1, B_2\subset {\mathbb {R}}^l, \) the upper type less order relation \( \curlyeqprec \) is defined by

$$\begin{aligned} B_1\curlyeqprec B_2 \Longleftrightarrow B_1\subset B_2 -C, \end{aligned}$$

and the strict upper type less order relation \( \prec \) is defined by

$$\begin{aligned} B_1\prec B_2 \Longleftrightarrow B_1\subset B_2 -{\text {int}}C. \end{aligned}$$

The relation \( \sim \) defined by

$$\begin{aligned} B_1\sim B_2 \Longleftrightarrow B_1\curlyeqprec B_2 \text{ and } B_2\curlyeqprec B_1 \end{aligned}$$

is an equivalence relation and we say that \( B_1 \) is equivalent to \( B_2 \) with respect to the order \( \curlyeqprec \). By \( [B_1],\) we denote the equivalence class (with respect to \( \curlyeqprec \)) of \( B_1\).
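For finite sets and the cone \( C={\mathbb {R}}^k_+, \) the relation \( \curlyeqprec \) reduces to a componentwise domination test: \( B_1\subset B_2-C \) means that every point of \( B_1 \) is dominated from above, componentwise, by some point of \( B_2 \). A small Python sketch under these assumptions (finite sets, nonnegative-orthant cone; an illustration only):

```python
def upper_less(B1, B2):
    # B1 ⋞ B2 for C = R^k_+ : every b1 in B1 satisfies b1 <= b2
    # componentwise for some b2 in B2, i.e. B1 ⊂ B2 - C
    return all(
        any(all(x <= y for x, y in zip(b1, b2)) for b2 in B2)
        for b1 in B1
    )

def equivalent(B1, B2):
    # B1 ~ B2 iff B1 ⋞ B2 and B2 ⋞ B1
    return upper_less(B1, B2) and upper_less(B2, B1)
```

For instance, \( \{(0,0)\}\curlyeqprec \{(1,1)\} \) but not conversely, so the two sets are not equivalent.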

Definition 4

A feasible solution \( {\bar{x}} \in \Omega (f) \) is

  1. (a)

    an efficient solution to \( ({\text {SISP}}) \) if

    $$\begin{aligned} \forall x\in \Omega (f), F(x)\curlyeqprec F({\bar{x}}) \text{ implies } F({\bar{x}})\sim F(x). \end{aligned}$$
  2. (b)

    a weakly efficient solution to \( ({\text {SISP}}) \) if

    $$\begin{aligned} \forall x\in \Omega (f), F(x)\prec F({\bar{x}}) \text{ implies } F({\bar{x}})\prec F(x). \end{aligned}$$

We denote the sets of all efficient and weakly efficient solutions to the problem \( ({\text {SISP}}) \) by \( {\text {Eff}}(F,\Omega (f))\) and \( {\text {WEff}}(F,\Omega (f))\), respectively. It is clear that \( {\text {Eff}}(F,\Omega (f))\subset {\text {WEff}}(F,\Omega (f)).\)

To end this section, we recall a concept of converse property for a set-valued map with respect to the order relation \( \curlyeqprec \).

Definition 5

(Anh et al. 2020a; Duy 2021) Let A be a nonempty subset of \( {\mathbb {X}} \). We say that \( F:A\rightrightarrows {\mathbb {R}}^l \) has the converse property at \( {\bar{x}}\in A \) with respect to \( {\hat{x}} \in A{\setminus }\{{\bar{x}}\} \) if either \( F({\hat{x}})\not \curlyeqprec F({\bar{x}}) \) or, for every sequence \( \{x_n\}\subset A \) converging to \( {\bar{x}}, \) there is \( n_0\in {\mathbb {N}}\) such that \( F({\hat{x}})\curlyeqprec F(x_{n_0}). \)

The stability of set-valued optimization problems can be studied by relying on the concept of converse property for set-valued maps, as explored in the literature. Xu and Li (2014, 2016); Preechasilp and Wangkeeree (2019) introduced concepts of converse property with respect to set order relations for objective set-valued maps perturbed by parameters, which enabled an examination of the continuity of solution maps in parametric set optimization problems. More recently, Anh et al. (2020a) proposed an alternative converse property based on the set-less order relation, which has been particularly useful in investigating the external stability of set optimization problems. In fact, Duy (2021) used this property to investigate various Levitin–Polyak well-posedness properties in set optimization problems.

3 Well-posedness of (SISP)

The aim of this section is to study a notion of global well-posedness for (SISP). Motivated by the notion of generalized well-posedness for abstract set optimization problems with respect to the lower type less order relation introduced by Zhang et al. (2009), we present notions of approximately minimizing sequence and global well-posedness for \( ({\text {SISP}}) \). Then we provide some characterizations via an approximately minimal solution map and sufficient criteria for the well-posedness.

From now on let \( e\in {\text {int}} C\) be a given vector. We introduce the approximately minimal solution map \( \Lambda _{e}:{\mathbb {R}}_+ \rightrightarrows A \) defined by

$$\begin{aligned} \Lambda _{e}(\varepsilon )=\{ x\in \Omega (f): F(x)\curlyeqprec F(s)+\varepsilon e \text{ for } \text{ some } s \in {\text {Eff}}(F,\Omega (f)) \}, \end{aligned}$$

for every \( \varepsilon \in {\mathbb {R}}_+. \) It is obvious that \( \Lambda _{e}(0)={\text {Eff}}(F,\Omega (f)) \) and \( \Lambda _{e}(\varepsilon _1)\subset \Lambda _{e}(\varepsilon _2) \) whenever \(0\le \varepsilon _1\le \varepsilon _2 \).
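The monotonicity of \( \Lambda _{e} \) is a direct consequence of \( e\in C \) and the fact that C is a convex cone: if \( x\in \Lambda _{e}(\varepsilon _1) \) with witness \( s\in {\text {Eff}}(F,\Omega (f)) \) and \( 0\le \varepsilon _1\le \varepsilon _2, \) then

$$\begin{aligned} F(x)\subset F(s)+\varepsilon _1 e-C=F(s)+\varepsilon _2 e-\left( (\varepsilon _2-\varepsilon _1)e+C\right) \subset F(s)+\varepsilon _2 e-C, \end{aligned}$$

since \( (\varepsilon _2-\varepsilon _1)e+C\subset C, \) and hence \( x\in \Lambda _{e}(\varepsilon _2). \)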

Definition 6

A sequence \(\{x_n\}\subset A\) is an approximately minimizing sequence with respect to \( e \in {\text {int}}C \) for \( ({\text {SISP}}) \) if there is a sequence \(\{\varepsilon _n\} \subset {\mathbb {R}}_+ \) converging to 0 such that \( x_n\in \Lambda _{e}(\varepsilon _n)\) for all n.

Remark 1

Definition 6 does not depend on the choice of vector \( e\in {\text {int}}C,\) that is, for any \( e_1\) and \( e_2 \) in \({\text {int}}C\), a sequence \( \{x_n\} \) is an approximately minimizing sequence with respect to \( e_1 \) for \( ({\text {SISP}}) \) if and only if it is an approximately minimizing sequence with respect to \( e_2 \). The proof of this statement can be carried out similarly to that of Proposition 5.2 from Gutiérrez et al. (2012). In the sequel, the term “with respect to e” is, therefore, omitted in claims about approximately minimizing sequences for \( ({\text {SISP}}) \). Note further that when \( \Omega (f)\equiv A \), Definition 6 collapses to Definition 3.3 introduced by Long and Peng (2013).

Definition 7

The problem \( ({\text {SISP}}) \) is generalized well-posed if every approximately minimizing sequence \( \{x_n\}\) admits a subsequence \( \{x_{n_k}\} \) converging to some solution in \( {\text {Eff}}(F,\Omega (f)). \)

Remark 2

When \( \Omega (f)\equiv A, \) the problem \( ({\text {SISP}})\) becomes the set optimization problem (SOP) considered by Long and Peng (2013); Han and Huang (2017). In these works, a concept of well-posedness in the sense of Bednarczuk is studied, named generalized B-well-posedness. We recall that (SOP) is generalized B-well-posed iff \( \Lambda _{e} \) is upper Hausdorff continuous at 0. According to Theorem 3.1 of Long and Peng (2013), we can see that if a set optimization problem is generalized well-posed in the sense of Definition 7, then it is generalized B-well-posed. However, the converse is not true in general. To see this, let us consider the following example.

Example 1

Let \( {\mathbb {X}}={\mathbb {R}}, A=[-1,1] \) and \( C={\mathbb {R}}_+^2. \) We consider the problem

$$\begin{aligned} \text{ Minimize } \ F(x) \ \text{ subject } \text{ to } \ x\in A, \end{aligned}$$
(SOP)

where \( F: A\rightrightarrows {\mathbb {R}}^2 \) is given by

$$\begin{aligned} F(x)={\left\{ \begin{array}{ll} \left\{ \lambda (x,x^2) +(1-\lambda )(0,1): \lambda \in [0,1] \right\} , &{} \text{ if } x\ne 0,\\ \{ 0 \}\times [1,2], &{} \text{ if } x= 0. \end{array}\right. } \end{aligned}$$

It can be seen that \( {\text {Eff}}(F,A)=[-1,0[ \) and (SOP) is generalized B-well-posed. However, it is not generalized well-posed for any \( e\in {\text {int}}C. \)

We now give some characterizations for the generalized well-posedness of the problem (SISP) via qualitative properties of the approximately minimal solution map \( \Lambda _{e}. \)

Proposition 3

The following assertions are equivalent.

  1. (a)

    The problem \( ({\text {SISP}}) \) is generalized well-posed.

  2. (b)

    \( \Lambda _{e} \) is u.c. at 0 and \( {\text {Eff}}(F,\Omega (f)) \) is compact.

  3. (c)

    \( H\left( \Lambda _{e}(\varepsilon ), {\text {Eff}}(F,\Omega (f)) \right) \rightarrow 0 \) as \(\varepsilon \rightarrow 0\) and \( {\text {Eff}}(F,\Omega (f)) \) is compact.

The proof of Proposition 3 is similar to that of Theorem 4.2 from Crespi et al. (2018) under suitable adjustments.

We are now in a position to establish sufficient conditions for the generalized well-posedness of (SISP).

Theorem 4

Assume that A is compact and

  1. (i)

    F is compact-valued and continuous on A;

  2. (ii)

    F admits the converse property at each \( x \in A \) with respect to every \( y \in A{\setminus }\{x\};\)

  3. (iii)

    \( f(\cdot ,t) \) is K-lsc on A for each \( t\in T.\)

Then \( ({\text {SISP}}) \) is generalized well-posed.

Proof

We divide the proof into the following three steps.

Step 1. We prove that \( \Omega (f) \) is a compact set. Let \( \{x_n\} \subset \Omega (f)\) be an arbitrary sequence converging to some x. Of course, \( x\in A \) as A is closed. By the definition of \( \Omega (f), \) one has \( f(x_n,t) \in -K \) for all \( t\in T. \) Let an arbitrary neighborhood N of \( 0_{{\mathbb {R}}^m} \). Then there exists a balanced neighborhood \( N_1 \) of \( 0_{{\mathbb {R}}^m} \) satisfying \( N_1\subset N \). Since \( f(\cdot ,t) \) is K-lsc at x,  for n sufficiently large, one has

$$\begin{aligned} f(x_n,t)\in f(x,t) +N_1 + K, \end{aligned}$$

or equivalently, \( f(x,t)\in f(x_n,t)+N_1-K \subset N_1 -K. \) Because N is an arbitrary neighborhood of the origin and K is closed, we get \( f(x,t)\in -K. \) Therefore, \( x\in \Omega (f),\) and hence \( \Omega (f) \) is closed. Being a closed subset of the compact set A, the set \( \Omega (f) \) is compact.

Step 2. We show that \( {\text {Eff}}(F,\Omega (f)) \) is closed. Let \( \{s_n\}\subset {\text {Eff}}(F, \Omega (f)) \) be a sequence converging to some \( s\in \Omega (f) \) and let \( x\in \Omega (f) \) be such that \( F(x)\curlyeqprec F(s). \) We will demonstrate that \( F(s)\curlyeqprec F(x). \) Indeed, there is nothing to prove if \( x=s \); so we assume that \( x\ne s. \) For any \( a\in F(s), \) we can find a sequence \( \{a_n\} \) converging to a with \( a_n\in F(s_n) \) for n large enough, inasmuch as F is l.c. at s. According to assumption (ii), there is a subsequence \( \{s_{n_k}\}\subset \{s_n\} \) satisfying \( F(x)\curlyeqprec F(s_{n_k})\) for every k. This together with \( s_{n_k}\in {\text {Eff}}(F,\Omega (f))\) gives \( F(s_{n_k})\curlyeqprec F(x) \). Consequently,

$$\begin{aligned} a_{n_k}\in F(s_{n_k})\subset F(x)-C. \end{aligned}$$

The compactness of F(x) ensures that \(F(x)-C \) is closed, and hence we conclude by passing to the limit as \( k\rightarrow \infty \) that \( a\in F(x)-C. \) As a result, \( F(s)\subset F(x)-C, \) or equivalently \( F(s)\curlyeqprec F(x) \). Therefore, \( s\in {\text {Eff}}(F,\Omega (f))\) which implies that \( {\text {Eff}}(F,\Omega (f)) \) is closed.

Step 3. Let \( \{x_n\} \) be an approximately minimizing sequence for \(({\text {SISP}})\) and take any \( e\in {\text {int}}C \) (see Remark 1). We get then sequences \(\{\varepsilon _n\} \subset {\mathbb {R}}_+ \) converging to 0 and \( \{s_n\}\subset {\text {Eff}}(F,\Omega (f)) \) such that

$$\begin{aligned} F(x_n)\curlyeqprec F(s_n)+\varepsilon _n e. \end{aligned}$$
(1)

Because \( {\text {Eff}}(F,\Omega (f)) \) is a closed subset of the compact set \( \Omega (f) \), we can assume, without loss of generality, that the sequences \( \{s_n\} \) and \( \{x_n\} \) converge to \( s\in {\text {Eff}}(F,\Omega (f))\) and \( {\bar{x}}\in \Omega (f), \) respectively. We claim that \( {\bar{x}}\in {\text {Eff}}(F,\Omega (f)). \) For any \( u\in F({\bar{x}}) \), since F is l.c. at \({\bar{x}}, \) we can find a sequence \( \{u_n\}, u_n\rightarrow u, \) with \( u_n\in F(x_n) \) for n sufficiently large. By (1), we have \( u_n\in F(s_n) +\varepsilon _n e -C.\) Consequently, for each n we can pick \( w_n\in F(s_n) \) such that

$$\begin{aligned} u_n\in w_n+\varepsilon _n e -C. \end{aligned}$$
(2)

By the upper continuity and compact-valuedness of F at s, we can assume that \( \{w_n\} \) converges to an element \( w\in F(s) \) (passing to a subsequence if necessary). From (2) and the closedness of C, we obtain, by taking the limit, that \( u\in w-C. \) Thus, \( F({\bar{x}})\subset F(s)-C \) due to the arbitrariness of u. Consequently, \( {\bar{x}} \) belongs to \( {\text {Eff}}(F,\Omega (f)).\) The proof is complete. \(\square \)

Remark 3

In the case when \( \Omega (f)\equiv A, \) the problem (SISP) collapses to the set optimization problem considered in many papers, for instance, Han and Huang (2017); Long and Peng (2013); Vui et al. (2020). Even in this special case, Theorem 4 differs from previous ones in the literature because of the following reasons. First, we do not impose assumptions requiring to know some information on efficient solution sets as in Long and Peng (2013); Vui et al. (2020). Second, we establish sufficient conditions for the generalized well-posedness of (SISP) where efficient solution sets and weakly efficient ones are distinct, and hence our approach is different from Han and Huang (2017). We give an example to demonstrate the applicability of Theorem 4 where an efficient solution set does not coincide with a weakly efficient one.

Example 2

Let \( {\mathbb {X}}= {\mathbb {T}}={\mathbb {R}}, C=K={\mathbb {R}}^2_+, T=[0,1]\) and \(A=[-3,3].\) Let \( f: A\times T\rightarrow {\mathbb {R}}^2 \) and \( F: A\rightrightarrows {\mathbb {R}}^2 \) be, respectively, defined by: for all \( x\in A \) and \( t\in T, \)

$$\begin{aligned} f(x,t) =(x^2-1-t,-x^2-t) \end{aligned}$$

and

$$\begin{aligned} F(x) =\left\{ \lambda (0,2) +(1-\lambda )(x, |x |): \lambda \in [0,1] \right\} . \end{aligned}$$

Obviously, all assumptions of Theorem 4 are satisfied, and hence the problem (SISP) is generalized well-posed. In fact, using direct calculations, we obtain that \( \Omega (f)=[-1,1]\) and \( {\text {Eff}}(F,\Omega (f))= [-1,0], \) while \( {\text {WEff}}(F,\Omega (f))= \Omega (f).\)
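The computation of \( \Omega (f) \) in this example can be cross-checked numerically by testing the constraint \( f(x,t)\le _K 0_{{\mathbb {R}}^2} \) on finite grids. The following Python sketch is our illustration (the grids are an assumption, not part of the text):

```python
def feasible(x, ts):
    # x ∈ Ω(f) iff x² - 1 - t ≤ 0 and -x² - t ≤ 0 for all t ∈ T = [0, 1];
    # on a grid we test every sampled t
    return all(x * x - 1 - t <= 0 and -x * x - t <= 0 for t in ts)

ts = [k / 100 for k in range(101)]          # sample of T = [0, 1]
xs = [k / 100 for k in range(-300, 301)]    # sample of A = [-3, 3]
omega = [x for x in xs if feasible(x, ts)]  # grid approximation of Ω(f)
```

Both constraints are hardest at \( t=0, \) where they reduce to \( x^2\le 1, \) which confirms \( \Omega (f)=[-1,1]. \)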

4 Sequential well-posedness of (SISP)

In this section, we assume that \( A\subset {\mathbb {X}}\) is a nonempty closed and convex set. Let \( {\mathcal {S}}\) be the set of all vector-valued maps from \( A\times T \) into \( {\mathbb {R}}^m \) and \( {\mathcal {M}}\) be the set of all set-valued maps from A into \( {\mathbb {R}}^l. \) Set \(\mathbb {P}:= {\mathcal {S}}\times {\mathcal {M}}. \) For each \(p=(f,F)\in \mathbb {P},\) the problem (SISP) is now denoted by \( ({\text {SISP}}_p)\). We consider the collection of semi-infinite set optimization problems \( \{({\text {SISP}}_p): p\in \mathbb {P} \} \) (we simply write (SISP)). Such a collection is a parametric semi-infinite set optimization problem with perturbation parameter \( p\in \mathbb {P}.\) We are interested in studying the problem under perturbations in terms of sequences \( \{p_n= (f_n,F_n)\} \) in \( \mathbb {P}, \) which means that our perturbed problem is

$$\begin{aligned} \text{ Minimize } \ F_n(x) \ \text{ subject } \text{ to } \ x\in \Omega (f_n), \end{aligned}$$
\( ({\text {SISP}}_{p_n}) \)

where \( \Omega (f_n)=\{x\in A: f_n(x,t)\le _K 0_{{\mathbb {R}}^m}, \forall t\in T\}. \)

For each \(p=(f,F)\in \mathbb {P},\) let E(p) stand for the efficient solution set of the problem \(({\text {SISP}}_p)\). Then \(p\mapsto E(p)\) is a set-valued map acting from \(\mathbb {P}\) into A. Furthermore, we say that p satisfies the Slater condition if there exists \( {\bar{x}}\in A \) such that

$$\begin{aligned} f({\bar{x}},t) <_K 0_{{\mathbb {R}}^m}, \ \forall t\in T. \end{aligned}$$

Let \( {\mathcal {E}}=\{ p\in \mathbb {P}: p \text{ satisfies } \text{ the } \text{ Slater } \text{ condition } \text{ and } E(p) \ne \emptyset \}. \)

For each \(p_1=(f_1,F_1)\) and \(p_2=(f_2,F_2)\) in \( \mathbb {P}\), we define the distance between \( p_1\) and \(p_2\) as:

$$\begin{aligned} {\mathfrak {d}}(p_1,p_2)=\sup _{(x,t)\in A\times T}d(f_1(x,t),f_2(x,t))+\sup _{x\in A} H(F_1(x),F_2(x)). \end{aligned}$$

It is easy to see that \((\mathbb {P},{\mathfrak {d}})\) is a pseudo-metric space; for more details about this space, we refer the reader to Bonsangue et al. (1998). In this way, we say that a sequence \( \{p_n\} \subset \mathbb {P}\) converges to \( p\in \mathbb {P}\) if \( {\mathfrak {d}}(p_n,p) \rightarrow 0\) as \( n\rightarrow \infty . \)
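When the maps \( f_1,f_2,F_1,F_2 \) are only available through samples, \( {\mathfrak {d}} \) can be approximated by replacing the suprema with maxima over finite grids. A Python sketch under that assumption (finite point sets for the values of \( F_1,F_2 \); the grids and names are our illustration):

```python
import math

def hausdorff(A, B):
    # H(A, B) for finite point sets A, B ⊂ R^l
    d = lambda a, S: min(math.dist(a, s) for s in S)
    return max(max(d(a, B) for a in A), max(d(b, A) for b in B))

def dist_P(f1, f2, F1, F2, xs, ts):
    # grid approximation of 𝔡(p1, p2):
    #   sup_{(x,t)} d(f1(x,t), f2(x,t)) + sup_x H(F1(x), F2(x))
    term_f = max(math.dist(f1(x, t), f2(x, t)) for x in xs for t in ts)
    term_F = max(hausdorff(F1(x), F2(x)) for x in xs)
    return term_f + term_F
```

Note that such a grid value only bounds \( {\mathfrak {d}} \) from below; the suprema in the definition range over all of \( A\times T \) and A.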

For a given \( e\in {\text {int}}C, \) each \( p=(f,F)\in \mathbb {P}\) and each \( \varepsilon \in {\mathbb {R}}_+, \) we define the \( \varepsilon \)-approximately minimal solution map \( \Xi _{e}:\mathbb {P} \times {\mathbb {R}}_+ \rightrightarrows A \) for \( ({\text {SISP}}_p) \) as:

$$\begin{aligned} \Xi _{e}(p,\varepsilon )=\{x\in \Omega (f): F(x)\curlyeqprec F(s)+\varepsilon e \text{ for } \text{ some } s \in E(p) \}. \end{aligned}$$

It is obvious that \( \Xi _{e}(p,0)=E(p). \)

Definition 8

For a given \( p\in \mathbb {P},\) let \( \{p_n=(f_n,F_n)\} \subset \mathbb {P}\) be a sequence converging to p and let \( e\in {\text {int}}C. \) A sequence \( \{x_n\}\subset A \) is said to be an approximately e-minimizing sequence for the problem \( ({\text {SISP}}_p) \) corresponding to \( \{ p_n \} \) if there exists a sequence \( \{\varepsilon _n \}\subset {\mathbb {R}}_+ \) converging to 0,  such that \( x_n\in \Xi _{e}(p_n,\varepsilon _n) \) for all n.

The following result shows that Definition 8 is independent of the choice of vector e.

Proposition 5

For a given \( p\in \mathbb {P},\) let \( \{p_n=(f_n,F_n)\} \subset \mathbb {P}\) be a sequence converging to p. A sequence \( \{x_n\} \) is an approximately e-minimizing sequence for the problem \( ({\text {SISP}}_p) \) corresponding to \( \{ p_n \} \) if and only if there exist a sequence \( \{c_n\}\subset C\) converging to \(0_{{\mathbb {R}}^l} \) and a sequence \( \{s_n\}, s_n\in E(p_n),\) such that \( F_n(x_n) \curlyeqprec F_n(s_n) +c_n. \)

Proof

Suppose that \( \{x_n\} \) is an approximately e-minimizing sequence for the problem \( ({\text {SISP}}_p) \) corresponding to \( \{ p_n \} \). Then the conclusion follows immediately from choosing \( c_n=\varepsilon _n e. \) Conversely, let sequences \( \{c_n\}\subset C,\) \( \{x_n\}\) and \( \{s_n\} \) with \( c_n \rightarrow 0_{ {\mathbb {R}}^l}, \) \( x_n \in \Omega (f_n) \) and \( s_n\in E(p_n)\) be such that \( F_n(x_n) \curlyeqprec F_n(s_n) +c_n. \) Then, one has

$$\begin{aligned} F_n(x_n)\subset F_n(s_n)+c_n-C. \end{aligned}$$
(3)

Note that \( (e-{\text {int}}C) \) is a neighborhood of \( 0_{ {\mathbb {R}}^l} \), so we can pick a positive real number r such that \( B[0_{ {\mathbb {R}}^l},r]\subset e-{\text {int}}C. \) Consequently, for each \( n\in {\mathbb {N}}, \) one has

$$\begin{aligned} c_n\in B\left[ 0_{ {\mathbb {R}}^l},\Vert c_n\Vert \right] \subset \frac{\Vert c_n\Vert }{r}(e-{\text {int}}C)\subset \frac{\Vert c_n\Vert }{r}e-C. \end{aligned}$$

This, together with (3) and the convexity of C, gives \( F_n(x_n)\subset F_n(s_n)+\varepsilon _n e-C \), where \( \varepsilon _n=r^{-1}\Vert c_n\Vert \) for each \(n\in {\mathbb {N}}. \) Of course, the sequence \( \{\varepsilon _n\} \) approaches 0, and therefore \( \{x_n\} \) is an approximately e-minimizing sequence for the problem \( ({\text {SISP}}_p)\) corresponding to \( \{ p_n \} \). \(\square \)

Due to Proposition 5, we will, in what follows, not indicate the vector e in statements regarding approximately minimizing sequences.

Definition 9

For a given \( p\in \mathbb {P},\) (SISP) is said to be sequentially generalized well-posed at p if for any sequence \( \{p_n\} \subset \mathbb {P}\) converging to p,  every approximately minimizing sequence for \( ({\text {SISP}}_p) \) corresponding to \( \{p_n\} \) admits a subsequence converging to some solution in E(p).

Remark 4

  1. (a)

    By treating the positive integer n as a parameter, the problem \( ({\text {SISP}}_{p_n}) \) can be interpreted as a parametric problem. Thus, Definitions 8 and 9 appear to be quite close to Zolezzi’s well-known definitions of parametric well-posedness. In many practical circumstances, however, the data of perturbed problems are collected incrementally, through methods such as measuring, examining, or observing; as a result, analytical formulations based on parameters are rarely available. Definitions 8 and 9 therefore provide a more useful approach for analyzing perturbed problems than other parameter-based formulations.

(b)

    When \( f_n(x,t)\equiv f(x,t) \) and \( F_n(x)\equiv F(x) \), Definitions 8 and 9 collapse, respectively, to Definitions 6 and 7 in Sect. 3. Moreover, by replacing the upper-type order relation with a lower-type one, Definitions 8 and 9 become the definitions of generalized minimizing sequence and generalized well-posedness in the sense of Loridan considered by Zhang et al. (2009); Crespi et al. (2018); Han and Huang (2017); Miholca (2021).

We now provide some characterizations for the sequentially generalized well-posedness of (SISP) via the map \( \Xi _{e}.\)

Theorem 6

(SISP) is sequentially generalized well-posed at \( {\bar{p}}\in \mathbb {P} \) if and only if \( \Xi _{e} \) is u.c. and compact-valued at \( (\bar{p},0). \)

Proof

(\( \Rightarrow \)) Let \( \{x_n\} \) be a sequence in \( \Xi _{e}({\bar{p}},0) \); it is then also an approximately minimizing sequence for \( ({\text {SISP}}_{{\bar{p}}}) \). Thus, it admits a subsequence converging to a solution of \( E({\bar{p}})=\Xi _{e}({\bar{p}},0),\) and so \( \Xi _{e}({\bar{p}},0) \) is compact. Next, suppose to the contrary that \( \Xi _{e} \) is not u.c. at \( ({\bar{p}},0) \): there are an open subset \( U \subset X \) with \( \Xi _{e}({\bar{p}},0)\subset U\) and a sequence \( \{(p_n,\varepsilon _n)\}\subset \mathbb {P}\times {\mathbb {R}}_+ \) with \((p_n,\varepsilon _n)\rightarrow ({\bar{p}},0) \) such that for every n we can find \( x_n \in \Xi _{e}(p_n,\varepsilon _n) {\setminus } U \). By the definition of \( \Xi _{e} \), we can pick a sequence \( \{s_n\} \) with \( s_n\in E(p_n) \) such that \( F_n(x_n)\curlyeqprec F_n(s_n)+\varepsilon _n e\) for all n. Consequently, \( \{x_n\} \) is an approximately minimizing sequence for \( ({\text {SISP}}_{{\bar{p}}}) \) corresponding to \( \{p_n\} \), and so it admits a subsequence converging to some solution \( x_0\in E({\bar{p}})=\Xi _{e}({\bar{p}},0), \) which contradicts the fact that \( x_n\notin U \) for all n. Therefore, \( \Xi _{e} \) is u.c. at \( ({\bar{p}},0). \)

\( (\Leftarrow ) \) For every sequence \( \{p_n\} \) with \( p_n=(f_n,F_n) \) converging to \( {\bar{p}} \), let \( \{x_n\} \) be an approximately minimizing sequence for \( ({\text {SISP}}_{{\bar{p}}}) \) corresponding to \( \{p_n\} \). Then there are a sequence \( \{\varepsilon _n\}\subset {\mathbb {R}}_+ \) converging to 0 and a sequence \( \{s_n\} \) with \(s_n\in E(p_n) \) satisfying \( F_n(x_n)\curlyeqprec F_n(s_n)+\varepsilon _n e, \) that is, \( x_n\in \Xi _{e}(p_n,\varepsilon _n). \) Because \( \Xi _{e} \) is u.c. and compact-valued at \( ({\bar{p}},0) \), thanks to Lemma 2a, we conclude that the sequence \( \{x_n\} \) has a subsequence converging to some solution in \( \Xi _{e}({\bar{p}},0)=E({\bar{p}}). \) Therefore, the problem (SISP) is sequentially generalized well-posed at \( {\bar{p}} \). \(\square \)

Theorem 7

(SISP) is sequentially generalized well-posed at \( {\bar{p}}\in \mathbb {P}\) if and only if \( E({\bar{p}}) \) is compact and \( H(\Upsilon _e(\bar{p},\delta ,\varepsilon ), E({\bar{p}}))\rightarrow 0 \) as \( (\delta ,\varepsilon )\rightarrow (0,0) \), where \( \Upsilon _e(\bar{p},\delta ,\varepsilon )=\bigcup _{p\in B(\bar{p},\delta )}\Xi _{e}(p,\varepsilon ). \)

Proof

Let (SISP) be sequentially generalized well-posed at \( \bar{p}\in \mathbb {P}\). Obviously, \( E({\bar{p}}) \) is a nonempty compact set. It follows from \( E({\bar{p}})\subset \Upsilon _e(\bar{p},\delta ,\varepsilon )\) that \( h(E({\bar{p}}),\Upsilon _e(\bar{p},\delta ,\varepsilon ))=0\). We only need to show that \( h(\Upsilon _e({\bar{p}},\delta ,\varepsilon ),E({\bar{p}}))\rightarrow 0 \) as \( (\delta ,\varepsilon )\rightarrow (0,0). \) Suppose on the contrary that there are sequences \( \{(\delta _n,\varepsilon _n)\}\subset {\mathbb {R}}^2_+, (\delta _n,\varepsilon _n)\rightarrow (0,0), \) and \( \{x_n\}, x_n\in \Upsilon _e({\bar{p}},\delta _n,\varepsilon _n),\) together with some \( r>0, \) such that \( d(x_n, E({\bar{p}}))\ge r \) for all n. Note that the sequence \( \{x_n\} \) is an approximately minimizing sequence for \( ({\text {SISP}}_{{\bar{p}}}) \) corresponding to some \( \{p_n\} \) with \( p_n\in B({\bar{p}}, \delta _n), \) so it admits a subsequence \( \{x_{n_i}\} \) converging to some \( {\bar{x}} \in E({\bar{p}}). \) Consequently, we have eventually that \( d(x_{n_i},{\bar{x}})<r, \) which is impossible as \( d(x_n, E({\bar{p}}))\ge r \) for all n.

Conversely, let \( \{x_n\} \) be an approximately minimizing sequence for \( ({\text {SISP}}_{{\bar{p}}}) \) corresponding to some \( \{p_n\} \) with \( p_n\rightarrow {\bar{p}}. \) Thus, there is a sequence \( \{\varepsilon _n\}\subset {\mathbb {R}}_+ \) with \(\varepsilon _n \rightarrow 0 \) satisfying \( x_n\in \Xi _{e}(p_n,\varepsilon _n) \) for all n. By setting \( \delta _n=d(p_n,{\bar{p}}) \), we see that \( x_n \in \Upsilon _e({\bar{p}},\delta _n,\varepsilon _n)\). Consequently,

$$\begin{aligned} d(x_n,E({\bar{p}}))\le H(\Upsilon _e({\bar{p}},\delta _n,\varepsilon _n), E({\bar{p}})) \rightarrow 0 \text{ as } n\rightarrow \infty , \end{aligned}$$

which gives us a sequence \( \{{\hat{x}}_n \} \) in \( E({\bar{p}}) \) satisfying \( d(x_n,{\hat{x}}_n)\rightarrow 0 \) as \( n\rightarrow \infty . \) By the compactness of \( E({\bar{p}}) \), \( \{{\hat{x}}_n\} \) admits a subsequence converging to some \( {\hat{x}} \in E({\bar{p}})\), and thus the sequence \( \{ x_n\} \) must have a corresponding subsequence tending to \( {\hat{x}} \). Therefore, (SISP) is sequentially generalized well-posed at \( {\bar{p}}. \) \(\square \)
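As a purely numerical aside (not part of the proof), the excess h and the Hausdorff distance H used in Theorem 7 can be illustrated on small finite point sets in the plane. All sets and helper names below are ours, chosen solely for illustration:

```python
import math

def excess(A, B):
    # h(A, B) = sup_{a in A} inf_{b in B} d(a, b): how far A sticks out of B.
    return max(min(math.dist(a, b) for b in B) for a in A)

def hausdorff(A, B):
    # H(A, B) = max(h(A, B), h(B, A)).
    return max(excess(A, B), excess(B, A))

A = [(0.0, 0.0), (1.0, 0.0)]
B = [(0.0, 0.0)]
assert excess(B, A) == 0.0   # B ⊂ A gives h(B, A) = 0, as with E(p̄) ⊂ Υ_e
assert excess(A, B) == 1.0   # the one-sided excess the proof drives to 0
assert hausdorff(A, B) == 1.0
```

The first assertion mirrors the observation \( h(E({\bar{p}}),\Upsilon _e({\bar{p}},\delta ,\varepsilon ))=0 \) drawn from \( E({\bar{p}})\subset \Upsilon _e({\bar{p}},\delta ,\varepsilon ) \): only the opposite excess needs to be controlled.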

Next, we provide sufficient conditions for the sequentially generalized well-posedness of (SISP). In this way, we generalize a concept concerning the converse property of a set-valued map.

Definition 10

We say that a map \( F\in {\mathcal {M}} \) has the strong converse property at \( {\bar{x}}\in A \) with respect to \( {\hat{x}} \in A{\setminus }\{{\bar{x}}\}\) if and only if either \( F({\hat{x}})\not \curlyeqprec F({\bar{x}}) \) or, for any sequences \( \{F_n\}\subset {\mathcal {M}} \) with \( \sup _{x\in A} H(F_n(x),F(x)) \rightarrow 0\) as \( n\rightarrow \infty \) and \( \{x_n\}, \{{\hat{x}}_n\}\subset A \) respectively converging to \({\bar{x}} \) and \( {\hat{x}}, \) one has \( F_{n_0}({\hat{x}}_{n_0})\curlyeqprec F_{n_0}(x_{n_0}) \) for some \( n_0\in {\mathbb {N}}. \)

We consider the following basic conditions:

(A1)

    for each \( t\in T, \) \( f(\cdot ,t)\) is K-lsc and generalized K-quasiconvex on A

(A2)

    for each \( x\in A, \) \( f(x,\cdot )\) is K-usc on T

(A3)

    F is compact-valued and continuous on A

(A4)

    F admits the strong converse property at each \( x \in A \) with respect to every \( y \in A{\setminus }\{x\}.\)

We consider the following subsets of \( {\mathcal {S}} \) and \( {\mathcal {M}}: \)

$$\begin{aligned} \hat{{\mathcal {S}}}&=\{ f\in {\mathcal {S}} : f \text{ satisfies } \text{(A1) } \text{ and } \text{(A2) } \}, \\ \hat{{\mathcal {M}}}&=\{ F\in {\mathcal {M}} : F \text{ satisfies } \text{(A3) } \text{ and } \text{(A4) } \}. \end{aligned}$$

Theorem 8

Assume that A and T are compact. Then (SISP) is sequentially generalized well-posed on \( {\mathcal {E}}\cap (\hat{{\mathcal {S}}}\times \hat{{\mathcal {M}}}). \)

Proof

For every \( p=(f,F)\in {\mathcal {E}}\cap (\hat{{\mathcal {S}}}\times \hat{{\mathcal {M}}}) \), let \( \{p_n=(f_n,F_n) \}\subset {\mathcal {S}}\times {\mathcal {M}} \) be a sequence converging to p. Let \( \{x_n\}, x_n\in \Omega (f_n) \), be an approximately minimizing sequence for the problem \( ({\text {SISP}}_p) \) corresponding to \( \{p_n\}. \) Then we can find sequences \( \{\varepsilon _n\}\subset {\mathbb {R}}_+, \varepsilon _n\rightarrow 0, \) and \( \{s_n\}, s_n \in E(p_n), \) such that

$$\begin{aligned} F_n(x_n) \curlyeqprec F_n(s_n) +\varepsilon _ne. \end{aligned}$$
(4)

Since A is compact, one can assume that the sequences \( \{x_n\} \) and \( \{s_n\} \) respectively converge to \( {\bar{x}} \) and \( {\bar{s}} \) in A.

Step 1. We prove that \( {\bar{s}} \in \Omega (f). \) Suppose to the contrary that there exists \( t\in T \) satisfying \( f(\bar{s},t)\notin -K. \) Note that we can choose a vector \( k\in {\text {int}}K \) such that \( f({\bar{s}},t)\notin k- K. \) Indeed, otherwise we would have a decreasing sequence \( \{k_n\}\subset {\text {int}}K \) converging to \( 0_{{\mathbb {R}}^m} \) such that \( f({\bar{s}},t)-k_n\in -K \); taking the limit as \( n\rightarrow \infty \) and using the closedness of K,  we arrive at \( f({\bar{s}},t)\in -K, \) which is a contradiction. Hence, \( f({\bar{s}},t)\) belongs to the open set \( {\mathbb {R}}^m\setminus (k- K)\) for some \( k\in {\text {int}}K\). Since \( f(\cdot ,t) \) is K-lsc at \( {\bar{s}} \) and \( {\mathbb {R}}^m{\setminus }(k- K)\) is a neighborhood of \( f(\bar{s},t) \), there exists a neighborhood U of \( {\bar{s}} \) such that \( f(s,t)\in {\mathbb {R}}^m{\setminus }(k- K)+K\) for all \( s\in U \cap A\). Because \( \{s_n\}\subset A \) converges to \( {\bar{s}} \), we have \( s_n\in U\cap A \) for n sufficiently large, which leads to \( f(s_n,t) \in {\mathbb {R}}^m{\setminus }(k- K)+K.\) Applying Lemma 1, we obtain

$$\begin{aligned} f(s_n,t) \in {\mathbb {R}}^m\setminus (k-K). \end{aligned}$$
(5)

Since \( k\in {\text {int}}K, \) the set \( k-K \) contains \( -K\) in its interior, and thus there exists \( r>0 \) such that \( \min _{u\in {\text {bd}}(k-K)} d(u,-K)=r \). This, together with (5), gives

$$\begin{aligned} d(f(s_n,t), -K)>r. \end{aligned}$$

On the other hand, \( f_n(s_n,t) \in -K\) for all \( t\in T \) inasmuch as \( s_n\in \Omega (f_n) \), and hence

$$\begin{aligned} d\left( f(s_n,t), f_n(s_n,t)\right) \ge d\left( f(s_n,t),-K \right) >r. \end{aligned}$$
(6)

It is worth noting that

$$\begin{aligned} 0\le d\left( f(s_n,t),f_n(s_n,t)\right) \le \sup _{(x,t)\in A\times T}d\left( f(x,t),f_n(x,t) \right) . \end{aligned}$$

Consequently, \( d\left( f(s_n,t), f_n(s_n,t)\right) \le {\mathfrak {d}}(p_n,p) \rightarrow 0,\) as \( n\rightarrow \infty ,\) which contradicts (6). Therefore, \( f({\bar{s}},t) \le _K 0_{{\mathbb {R}}^m} \) for all \( t\in T, \) that is, \( {\bar{s}}\in \Omega (f). \)

Step 2. We show that for any \( {\bar{y}}\in \Omega (f), \) there exists a sequence \( \{y_n\} \) with \( y_n\in \Omega (f_n) \) converging to \( {\bar{y}}. \) To this end, we first claim that

$$\begin{aligned} {\hat{\Omega }}(f)\subset \liminf {\hat{\Omega }} (f_n), \end{aligned}$$
(7)

where \( {\hat{\Omega }}(f) =\{ x\in A: f(x,t) <_K 0_{{\mathbb {R}}^m}, \forall t\in T \}\). Otherwise, there would be \( {\hat{u}}\in {\hat{\Omega }}(f) \) such that no sequence \( \{u_n\} \) with \( u_n\in {\hat{\Omega }}(f_n) \) converges to \( {\hat{u}}. \) Then we can assume, without loss of generality, that \( {\hat{u}} \notin {\hat{\Omega }}(f_n) \) for all n. Consequently, there is a sequence \( \{t_n\}\subset T \) satisfying

$$\begin{aligned} f_n(\hat{u},t_n) \notin -{\text {int}}K. \end{aligned}$$
(8)

We can assume, due to the compactness of T,  that \( \{t_n\} \) converges to some \( t\in T. \) Because \( \hat{u} \in {\hat{\Omega }}(f), \) we can pick \( k\in {\text {int}}K \) such that \( f({\hat{u}},t)\in -k-{\text {int}}K. \) Since \( f ({\hat{u}},\cdot )\) is K-usc at t,  we have, for n sufficiently large,

$$\begin{aligned} f(\hat{u},t_n)\in -k-{\text {int}}K -K\subset -k-K. \end{aligned}$$
(9)

Through (8) and (9), keeping in mind that \( \min _{u\in {\text {bd}}(-K)}d(u, -k-K)>r\) for some \( r>0, \) we see that

$$\begin{aligned} d(f_n(\hat{u},t_n),f(\hat{u},t_n))\ge d(f_n(\hat{u},t_n),-k-K)>r. \end{aligned}$$
(10)

Furthermore, since

$$\begin{aligned} 0\le d(f_n(\hat{u},t_n),f(\hat{u},t_n))\le \sup _{(x,t)\in A\times T}d(f_n(x,t),f(x,t))\le {\mathfrak {d}}(p_n,p), \end{aligned}$$

by passing to the limit as \( n\rightarrow \infty , \) we obtain \(d(f_n(\hat{u},t_n),f(\hat{u},t_n)) \rightarrow 0, \) which contradicts (10). Thus, the inclusion (7) holds true. We next verify that \( \bar{y}\in {\text {cl}}{\hat{\Omega }}(f) \). Since p satisfies the Slater condition, we can find a vector \( x_0 \in {\hat{\Omega }}(f). \) There is nothing to prove if \( {\bar{y}} =x_0, \) so we consider the case in which they are distinct. By setting \( x_\lambda =(1-\lambda ){\bar{y}}+\lambda x_0 \) for \( \lambda \in ]0,1[ \) and utilizing the generalized K-quasiconvexity of \( f(\cdot ,t) \), we arrive at \( x_\lambda \in {\hat{\Omega }}(f). \) Letting \( \lambda \rightarrow 0, \) we obtain \( {\bar{y}}\in {\text {cl}}{\hat{\Omega }} (f), \) and hence \( \Omega (f)\subset {\text {cl}}{\hat{\Omega }} (f)\) as \( {\bar{y}} \) is arbitrarily chosen. Combining this with (7) and taking into account the closedness of \( \liminf {\hat{\Omega }} (f_n) \), we arrive at

$$\begin{aligned} \Omega (f)\subset {\text {cl}}{\hat{\Omega }} (f) \subset \liminf {\hat{\Omega }} (f_n)\subset \liminf \Omega (f_n), \end{aligned}$$

which ensures the existence of a sequence \( \{y_n\} \) with \( y_n\in \Omega (f_n) \) converging to \( {\bar{y}}. \)

Step 3. We show that \( {\bar{s}} \in E(p). \) For each \( \bar{y}\in \Omega (f){\setminus } \{{\bar{s}}\}, \) as proved in Step 2, we can find a sequence \( \{y_n\}, y_n\in \Omega (f_n), \) converging to \( {\bar{y}}. \) Since \( s_n\in E(p_n)\) for all \( n\in {\mathbb {N}}, \) the disjunction of the following two propositions

$$\begin{aligned} \text {(i) } F_n(y_n)\not \curlyeqprec F_n(s_n) \qquad \text {(ii) } F_n(y_n)\in [F_n(s_n)] \end{aligned}$$

is true for all \( n\in {\mathbb {N}} \).

Case 1. The first proposition is true. Suppose that \(F({\bar{y}})\curlyeqprec F({\bar{s}})\). Since F satisfies (A4), we have \(F_{n_i}(y_{n_i})\curlyeqprec F_{n_i}(s_{n_i})\) for some \( n_i\in {\mathbb {N}},\) which is a contradiction. Thus, \( F({\bar{y}})\not \curlyeqprec F({\bar{s}}). \)

Case 2. The second proposition is true. We show that \( F({\bar{y}})\in [F({\bar{s}})], \) that is, \( F({\bar{s}})\subset F({\bar{y}})-C \) and \( F({\bar{y}})\subset F({\bar{s}})-C. \) For any \( {\bar{w}}\in F(\bar{s}), \) because F is l.c. at \( {\bar{s}} \), we get a sequence of elements \( w_n\in F(s_n) \) with \( w_n\rightarrow {\bar{w}}, \) and thus for any \( \varepsilon >0 \) there is \( n_1\in {\mathbb {N}} \) such that

$$\begin{aligned} d(w_n,{\bar{w}})\le \frac{\varepsilon }{2}, \forall n\ge n_1. \end{aligned}$$
(11)

Because

$$\begin{aligned} H\left( F_n(s_n),F(s_n) \right) \le \sup _{x\in A}H\left( F_n(x),F(x) \right) \rightarrow 0 \text{ as } n\rightarrow \infty , \end{aligned}$$

for \( \varepsilon >0 \) as above, there exists \( n_2\in {\mathbb {N}} \) such that

$$\begin{aligned} H\left( F_n(s_n),F(s_n) \right) \le \frac{\varepsilon }{2}, \forall n\ge n_2. \end{aligned}$$

Consequently, for each \( n\ge n_2 \) there exists \( {\hat{w}}_n \in F_n(s_n) \) satisfying

$$\begin{aligned} d({\hat{w}}_n, w_n)\le \frac{\varepsilon }{2}, \forall n \ge n_2. \end{aligned}$$
(12)

By choosing \( m=\max \{n_1,n_2\}, \) we have from (11) and (12) that

$$\begin{aligned} d({\hat{w}}_n,{\bar{w}})\le d({\hat{w}}_n, w_n)+d(w_n,{\bar{w}})\le \varepsilon , \forall n\ge m, \end{aligned}$$

that is,

$$\begin{aligned} {\hat{w}}_n\rightarrow {\bar{w}}, \text{ as } n\rightarrow \infty . \end{aligned}$$
(13)

Note that \( F_n(s_n)\subset F_n(y_n)-C \) inasmuch as \( F_n(y_n)\in [F_n(s_n)], \) so we can pick \( {\hat{z}}_n\in F_n(y_n) \) such that

$$\begin{aligned} {\hat{w}}_n\in {\hat{z}}_n-C. \end{aligned}$$
(14)

Since

$$\begin{aligned} H\left( F_n(y_n),F(y_n) \right) \le \sup _{x\in A}H\left( F_n(x),F(x) \right) \rightarrow 0 \text{ as } n\rightarrow \infty , \end{aligned}$$

for all \( \delta >0, \) there exists \( n_3\in {\mathbb {N}} \) such that

$$\begin{aligned} H\left( F_n(y_n),F(y_n) \right) \le \frac{\delta }{2}, \forall n\ge n_3. \end{aligned}$$

Hence, due to the compactness of \(F(y_n),\) there is \( z_n\in F(y_n) \) such that

$$\begin{aligned} d({\hat{z}}_n, z_n)\le \frac{\delta }{2}, \forall n\ge n_3. \end{aligned}$$
(15)

Because F is u.c. and compact-valued at \( {\bar{y}}, \) we can assume that \( z_n\rightarrow {\bar{z}} \) for some \( {\bar{z}} \in F({\bar{y}}). \) Thus, there is \( n_4\in {\mathbb {N}} \) such that \( d( z_n,{\bar{z}}) \le 2^{-1}\delta \) for all \( n\ge n_4. \) Utilizing this and (15), we get

$$\begin{aligned} d({\hat{z}}_n,{\bar{z}})\le d({\hat{z}}_n, z_n)+d( z_n,{\bar{z}}) \le \delta , \forall n\ge \max \{n_3,n_4\}, \end{aligned}$$

which implies that \( \{{\hat{z}}_n\} \) converges to \( {\bar{z}} \) as \( n\rightarrow \infty . \) Combining this with (13), (14) and taking into account the closedness of C, we arrive at \( {\bar{w}}\in {\bar{z}}-C.\) Consequently, \( F(\bar{s})\subset F({\bar{y}})-C\) as \( {\bar{w}} \) is arbitrary. Similarly, by interchanging the roles of \( {\bar{s}} \) and \( {\bar{y}} \) in the above arguments, we also get \( F({\bar{y}})\subset F({\bar{s}})-C\), which leads to \( F({\bar{y}})\in [F({\bar{s}})]. \)

From the conclusions of Cases 1 and 2, we obtain \( {\bar{s}} \in E(p).\)

Step 4. We finally show that \( {\bar{x}} \in E(p). \) Let a be an arbitrary element of \( F({\bar{x}}). \) Using the same arguments as in the first part of Case 2, we obtain a sequence \( \{a_n\} \) converging to a with \( a_n\in F_n(x_n) \) for n sufficiently large. From (4), we get \( a_n \in F_n(s_n)+\varepsilon _n e -C,\) and hence there exists \( b_n \in F_n(s_n)\) satisfying

$$\begin{aligned} a_n\in b_n +\varepsilon _n e-C. \end{aligned}$$
(16)

By repeating the argument in the second part of Case 2 after (14), we conclude that the sequence \( \{b_n\} \) (taking a subsequence if necessary) converges to some \( b\in F(\bar{s}) \). Passing to the limit as \( n\rightarrow \infty \) in (16), we obtain \( a\in b-C. \) Consequently, \( F(\bar{x})\subset F({\bar{s}}) -C\) as a is chosen arbitrarily, that is, \( F({\bar{x}})\curlyeqprec F({\bar{s}}). \) Since \( {\bar{x}}\in \Omega (f) \) (by the argument of Step 1 applied to \( \{x_n\} \)) and \( {\bar{s}}\in E(p), \) this yields \( F({\bar{x}})\in [F({\bar{s}})], \) and hence \( {\bar{x}}\in E(p). \) Therefore, (SISP) is sequentially generalized well-posed at p. The proof is complete. \(\square \)

To conclude this section, we give some examples justifying the importance of the assumptions in Theorem 8.

Example 3

Let \( {\mathbb {X}}={\mathbb {T}}={\mathbb {R}}, C=K={\mathbb {R}}^2_+, A=[0,2], T=[0,1].\) Let \( f: A\times T\rightarrow {\mathbb {R}}^2, \) and \( F: A\rightrightarrows {\mathbb {R}}^2 \) be, respectively, defined by: for all \( x\in A \) and \( t\in T, \)

$$\begin{aligned} f(x,t) = \left( t-x-1, t(x-x^2)\right) , \end{aligned}$$

and

$$\begin{aligned} F(x) =\left\{ (u_1,u_2)\in {\mathbb {R}}^2: (u_1-x)^2 +(u_2+x^2-2x)^2\le \left( \frac{x}{x+1}\right) ^2 \right\} . \end{aligned}$$

Let \( f_n: A\times T\rightarrow {\mathbb {R}}^2 \) and \( F_n: A\rightrightarrows {\mathbb {R}}^2 \) be, respectively, defined by:

$$\begin{aligned} f_n(x,t)= \left( t-x-1-\frac{1}{n}, t\left( x+\frac{1}{10n}\right) -t\left( x+\frac{1}{10n}\right) ^2 \right) \text{ and } F_n(x)=F(x). \end{aligned}$$

Obviously, the sequence \( \{p_n\} \) defined by \( p_n=(f_n,F_n) \) converges to \( p=(f,F) \) as \( n\rightarrow \infty .\) One can check that \( \Omega (f_n)=[1-(10n)^{-1}, 2]\) and \( \Omega (f)= [1,2] \cup \{0\}. \)

For all \(x\in \Omega (f)\), we have \(F(0)=\{(0,0)\}\subset F(x)-C\), i.e., \(F(0) \curlyeqprec F(x)\), hence \( 0\in E(p). \) Suppose that there exists \(x\in E(p)\) with \(x\ge 1.\) Then it follows from \(F(0) \curlyeqprec F(x)\) that \(F(x) \curlyeqprec F(0),\) or equivalently \(F(x)\subset -C,\) which is impossible (see Fig. 1). Therefore, \(E(p)=\{0\}.\)

Moreover, for all \(x\in \Omega (f_n)=[1-(10n)^{-1}, 2]\), from Fig. 1, we see that \(F(x)\not \subset F(2)-C\), which leads to \(2\in E(p_n) \). Then the sequence \(\{x_n=2\}\) is an approximately minimizing sequence for the problem \( ({\text {SISP}}_p) \) corresponding to \( \{p_n\}\), but \(\{x_n\}\) converges to \( 2\notin E(p), \) and hence (SISP) is not sequentially generalized well-posed at p. The reason is that \( f(\cdot , t) \) is not generalized K-quasiconvex on A for some \( t\in T \). Indeed, let \( t=0.5, \) \( x =0 \) and \( y =1.5 \); then we have \( f(x,t)=(-0.5,0)\in -K\) and \( f(y,t)=(-2,-0.375)\in -{\text {int}}K, \) but \( f(0.9x+0.1y,t)= (-0.65,0.06375) \notin -{\text {int}}K.\)
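The computations in Example 3 can be spot-checked numerically. The following sketch samples t on a finite grid (so it is an illustration, not a proof), and all helper names are ours:

```python
def f(x, t):
    # The map f of Example 3: f(x, t) = (t - x - 1, t(x - x^2)).
    return (t - x - 1, t * (x - x * x))

def in_neg_K(v):
    # Membership in -K = -R^2_+.
    return v[0] <= 0 and v[1] <= 0

def in_neg_intK(v):
    # Membership in -int K.
    return v[0] < 0 and v[1] < 0

# x in Omega(f) iff f(x, t) in -K for every t in T = [0, 1] (sampled grid).
ts = [i / 100 for i in range(101)]
def feasible(x):
    return all(in_neg_K(f(x, t)) for t in ts)

# Omega(f) = [1, 2] ∪ {0}: spot checks.
assert feasible(0) and feasible(1) and feasible(2)
assert not feasible(0.5)

# The generalized K-quasiconvexity counterexample from the text.
x, y, t = 0, 1.5, 0.5
z = 0.9 * x + 0.1 * y  # z = 0.15
assert in_neg_K(f(x, t))         # f(0, 0.5) = (-0.5, 0)
assert in_neg_intK(f(y, t))      # f(1.5, 0.5) = (-2, -0.375)
assert not in_neg_intK(f(z, t))  # f(0.15, 0.5) = (-0.65, 0.06375)
```

The last three assertions reproduce exactly the numerical counterexample given in the text.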

Fig. 1

Illustration of F(x) in Example 3

Example 4

Let \( {\mathbb {X}}, {\mathbb {T}}, C, K, A \) and T be as in Example 3. Let \( f: A\times T\rightarrow {\mathbb {R}}^2 \) and \( F: A\rightrightarrows {\mathbb {R}}^2 \) be, respectively, defined by: for all \( x\in A \) and \( t\in T, \)

$$\begin{aligned} f(x,t) = \left( -tx^2, x^2-t^2-1\right) \text{ and } F(x) =\{ (u_1,0)\in {\mathbb {R}}^2: 0\le u_1\le x+2 \}. \end{aligned}$$

It is clear that \( f\in \hat{{\mathcal {S}}} \) and F satisfies (A3). For \( p=(f,F),\) one can see that \( \Omega (f)=[0,1] \) and \( E(p)=\{0 \}. \) Now, let \( f_n: A\times T\rightarrow {\mathbb {R}}^2\) and \( F_n: A\rightrightarrows {\mathbb {R}}^2 \) be, respectively, defined as:

$$\begin{aligned} f_n(x,t)= \left( \frac{tx}{n}-tx^2, x^2-t^2-1-\frac{1}{n^2}\right) , \end{aligned}$$

and

$$\begin{aligned} F_n(x)=\left\{ (u_1,u_2)\in {\mathbb {R}}^2: 0\le u_1\le x+2, 0\le u_2\le \frac{3-x}{n} \right\} . \end{aligned}$$

Obviously, \( p_n=(f_n,F_n) \) converges to \( p=(f,F) \) as \( n\rightarrow \infty . \) By direct computations, we have

$$\begin{aligned} \Omega (f_n)=\left[ \frac{1}{n}, \frac{\sqrt{n^2+1}}{n}\right] \cup \{0\}. \end{aligned}$$

Let \( x_n = n^{-1}\sqrt{n^2+1}\); then one can check that \( \{x_n\} \) is an approximately minimizing sequence for the problem \( ({\text {SISP}}_p) \) corresponding to \( \{p_n\}\), but it converges to \( 1\notin E(p). \) Thus, (SISP) is not sequentially generalized well-posed at p; the reason is that F does not satisfy (A4). Indeed, for \( {\bar{x}} =1 \) and \( {\hat{x}} =0 \), we have \( F({\hat{x}}) \curlyeqprec F({\bar{x}}), \) but \( F_n({\hat{x}}_n)\not \curlyeqprec F_n({\bar{x}}_n) \) for all n,  where \( {\bar{x}}_n =1-(3n+1)^{-1}\) and \( {\hat{x}}_n =(3n+1)^{-1}. \)
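As a hedged numerical companion to Example 4 (t sampled on a grid, n = 10 chosen arbitrarily, endpoints nudged inward to dodge floating-point noise, and the relation \( F_n(a)\curlyeqprec F_n(b) \) encoded as \( F_n(a)\subset F_n(b)-C \) via componentwise upper bounds of the rectangles):

```python
import math

def f_n(n, x, t):
    # The perturbed map of Example 4.
    return (t * x / n - t * x * x, x * x - t * t - 1 - 1 / n ** 2)

ts = [i / 100 for i in range(101)]
def feasible(n, x):
    # x in Omega(f_n) iff f_n(x, t) in -K = -R^2_+ for all t in [0, 1] (sampled).
    return all(u <= 0 and v <= 0 for (u, v) in (f_n(n, x, t) for t in ts))

n = 10
lo, hi = 1 / n, math.sqrt(n * n + 1) / n
# Omega(f_n) = [1/n, sqrt(n^2+1)/n] ∪ {0}: spot checks around the endpoints.
assert feasible(n, 0) and feasible(n, lo + 1e-9) and feasible(n, hi - 1e-9)
assert not feasible(n, lo / 2) and not feasible(n, hi + 0.01)

def prec(n, a, b):
    # F_n(a) ⊂ F_n(b) - C for the rectangles F_n(x) = [0, x+2] x [0, (3-x)/n].
    return a + 2 <= b + 2 and (3 - a) / n <= (3 - b) / n

xbar, xhat = 1 - 1 / (3 * n + 1), 1 / (3 * n + 1)
assert not prec(n, xhat, xbar)  # the failure of (A4) exhibited in the text
```

The final assertion reflects that the second component of \( F_n({\hat{x}}_n) \) reaches \( (3-{\hat{x}}_n)/n > (3-{\bar{x}}_n)/n \), so the required inclusion fails for every n.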

5 Conclusions

In this article, we have studied global well-posedness concerning efficient solution sets for semi-infinite set optimization problems. We have also introduced and investigated a notion of well-posedness under perturbations in terms of convergent sequences of feasible and objective maps. The obtained results may be useful in situations where we are unable to obtain an analytic formulation of the reference problem depending on perturbation parameters but are instead left with a sequence of perturbed problems. To the best of our knowledge, this is the first attempt in the literature to study global well-posedness properties for a semi-infinite set optimization problem under such perturbations. We would like to underline that the analysis of well-posedness in the present work is carried out only in the decision space. It would be of interest to further investigate well-posedness properties by means of an image space approach in the future.