1 Introduction

Uncertainty is ubiquitous in the natural world, in engineering systems and in social life. In most real-world applications, the parameters of optimization problems (OPs) are not known exactly, and solutions to OPs can exhibit remarkable sensitivity to perturbations in these parameters, so it is important to study optimization under uncertainty. Robust optimization, one of the most effective approaches to dealing with uncertain OPs, has attracted increasing attention. The method focuses on minimizing the worst possible objective function value and seeks a solution that is immunized against the effect of data uncertainty, where the uncertain parameters belong to a known set. Robust OPs were first introduced by Soyster [1] and have been extensively studied in the literature. We refer to the two monographs [2, 3] for extensive collections of results and to [4, 5] for surveys on recent developments.

Many interesting and important results have been obtained on characterizing robust solutions of uncertain OPs. Under a variety of constraint qualifications and convexity assumptions, several authors derived robust optimality conditions for uncertain convex OPs; for more details, one can refer to [6,7,8,9,10] and the references therein. Furthermore, relying on convex uncertainty sets and concave constraint functions, a necessary optimality condition for nonsmooth robust OPs was established in [11]. By using a nonlinear scalarization technique, Klamroth et al. [12] unified many different concepts of robustness and stochastic programming. Subsequently, they developed a unified characterization of various robustness concepts from the vector optimization, set-valued optimization and scalarization points of view [13]. However, as far as we know, very few papers have concentrated on robust optimality conditions for general uncertain scalar OPs without any convexity or concavity assumptions. Besides, it is worth asking whether there exists another tool that yields a unified approach to characterizing a variety of robust solutions. These problems prompted us to make further explorations and constitute the motivation of this paper.

After a long gestation, traceable to the paper [14] as well as to a passage by Hestenes in his book [15], image space analysis (ISA, for short) was born with [16]. It has proven to be a very fruitful method for investigating several topics in optimization theory for different kinds of OPs, for example, optimality conditions, duality, variational principles, penalty methods, gap functions and error bounds; see [16,17,18,19,20,21,22,23,24,25,26]. One of the main features of the ISA consists in restating the given problem as the impossibility of a parametric system, which can be reduced to the disjunction of two suitable subsets of the image space. The disjunction can be established by showing that the two subsets lie in two disjoint level sets of a linear or nonlinear separation function. Separation is therefore vital to the implementation, and choosing an appropriate separation function is the key step in the ISA.

Very recently, by defining a suitable image, Wei et al. [27] established an equivalence relation between an uncertain OP and its image problem for strict robustness. Besides, based on separation, a unified characterization of several multiobjective robustness concepts [28] for uncertain multiobjective OPs was provided in [29, 30]. Some necessary and sufficient conditions for multiobjective robust efficiency on vectorization counterparts were derived in [31]. Under mild assumptions, various robust solutions for different kinds of robustness concepts were characterized in the framework of ISA [32], which leads to a unified approach to robustness for uncertain OPs. As a continuation, motivated greatly by the works [12, 13, 16, 17, 27, 29,30,31,32], this paper is dedicated to deriving characterizations of robust optimality conditions for uncertain OPs in the sense of separation and to exploring the relationships between the ISA approach and vector/set-valued optimization methods for robustness analysis. Specifically, by virtue of linear and nonlinear (regular) weak separation functions, some robust optimality conditions are achieved. Moreover, based on some notations of ISA presented in [32, Sect. 3], we introduce a collection of vector OPs and set-valued OPs in the context of the image space and discuss their relations to robustness concepts. This provides a brand-new perspective on concrete OPs with uncertainties and may be helpful for decision making.

The remainder of this paper is organized as follows. In Sect. 2, we recall some preliminaries and formulate the uncertain problem. Characterizations of robust optimality conditions for uncertain OPs are achieved by virtue of linear and nonlinear (regular) weak separation functions in Sect. 3. Section 4 reveals the relationships between the ISA, vector optimization and set-valued optimization approaches. In Sect. 5, an application to the shortest path problem is employed to verify the usefulness of the results obtained in this paper, and a short conclusion is given in Sect. 6.

2 Preliminaries

In this section, we recall some notations and definitions which will be used in the sequel. Let \(\mathcal {X}\subseteq \mathbb {R}^{n}\) be a nonempty subset and Y be a normed linear space. The topological interior, closure, boundary and complement of a set \(M\subseteq Y\) are denoted by \(\text{ int }\,M,\text{ cl }\,M,\text{ bd }\,M,M^{c}\), respectively. For the Euclidean space \(\mathbb {R}^{m}\), \(\langle \cdot ,\cdot \rangle \) represents the scalar product. A nonempty subset \(C\subseteq \mathbb {R}^{m}\) is said to be a cone if and only if \(tC\subseteq C\) for all \(t\ge 0\). A cone \(C\subseteq \mathbb {R}^{m}\) is said to be convex (resp. pointed) if and only if \(C+C\subseteq C\) (resp. \(C\cap (-C)=\{0_{\mathbb {R}^{m}}\}\)). The positive polar cone of a cone \(C\subseteq \mathbb {R}^{m}\) is given by \(C^{*}:= \{l\in \mathbb {R}^{m}: \langle l,z\rangle \ge 0,\;\forall \;z\in C\}\). The set \(\text{ cone }M:=\bigcup _{t\ge 0}tM\) is the cone generated by M. For a function \(h:\mathcal {X}\rightarrow \mathbb {R}\) and \(\alpha \in \mathbb {R}\), the set \(lev_{\ge (>)\alpha }h:= \{x\in \mathcal {X}: h(x)\ge (>)\alpha \}\) is called the (strict) upper-level set of h.

Now a general OP with uncertainties in both the objective and the constraints is considered. Assume that a nonempty set U in a finite-dimensional space is the uncertainty set (convex or nonconvex, compact or noncompact, discrete scenarios or a continuous interval), where \(\xi \in U\) is a parameter representing an uncertain real number, vector or scenario. Let \(f:\mathcal {X}\times U\rightarrow \mathbb {R}\) and \(F_{i}: \mathcal {X}\times U\rightarrow \mathbb {R},\, i=1,\ldots ,m\). Then, an uncertain scalar OP is defined as a parametric OP

$$\begin{aligned} (Q(\xi ),\; \xi \in U), \end{aligned}$$
(1)

where for a given \(\xi \in U\) the OP (\(Q(\xi )\)) is described by

$$\begin{aligned} \min f(x,\xi )\;\;\text{ s.t. }\;\;F_{i}(x,\xi )\le 0,\,i=1,\ldots ,m,\;\;x\in \mathcal {X}. \end{aligned}$$

We denote by \(\hat{\xi }\in U\) the nominal value, i.e., the value of \(\xi \) that we believe is true today. (\(Q(\hat{\xi })\)) is called the nominal problem.

Under mild assumptions, a unified approach to characterizing robust solutions for different kinds of robustness concepts was proposed in [32]; these concepts include strict robustness, optimistic robustness, reliable robustness, light robustness, \(\epsilon \)-constraint robustness and adjustable robustness (see [32, Theorems 3.1-3.6] for details). In this paper, we mainly focus on analyzing the concepts of strict robustness and optimistic robustness in the context of ISA. In what follows, two basic mathematical models and some notations used within the ISA approach are recalled from the works [27, 32].

For problem (1), the classical robustness concept is called strict robustness, which was originally proposed by Soyster [1] and developed by Ben-Tal et al. [2]. The idea is to minimize the worst possible objective function value and search for a solution that is good enough in the worst case. Strict robustness is a conservative concept and reveals the pessimistic attitude of a decision maker. Then, the strictly robust counterpart of problem (1) is given by

$$\begin{aligned} \min \,\sup _{\xi \in U}f(x,\xi )\;\;\text{ s.t. }\;\;F_{i}(x,\xi )\le 0,\;\forall \,\xi \in U,\;i=1,\ldots ,m,\;\;x\in \mathcal {X}. \end{aligned}$$
(2)

Set \(D^{1}:=-\mathbb {R}^{m}_{+}\) and \(F(x,\xi ):=(F_{1}(x,\xi ),\ldots ,F_{m}(x,\xi ))\) for a given \(\xi \in U\). The set of feasible solutions of problem (2) is denoted by \(X_{1}:=\{x\in \mathcal {X}:F(x,\xi )\in D^{1},\,\forall \xi \in U\}\). Following [2, pp. 9–10], a feasible solution \(x\in X_{1}\) that is optimal to problem (2) is called a strictly robust solution to problem (1). Assume that \(\sup _{\xi \in U}f(x,\xi )<\infty \) and \(\sup _{\xi \in U}F_{i}(x,\xi )<\infty ,i=1,\ldots ,m\) for all \(x\in \mathcal {X}\). Let \(\bar{x}\in \mathcal {X}\). Define the map

$$\begin{aligned} A^{1}_{\bar{x}}:\mathcal {X}\times U\rightarrow \mathbb {R}^{1+m},\quad A^{1}_{\bar{x}}(x,\xi ):=\bigg (f(\bar{x},\xi )-\sup _{\xi \in U}f(x,\xi ),\,F^{1}(x)\bigg ), \end{aligned}$$

where \(F^{1}(x):=\Big (\sup \limits _{\xi \in U}F_{1}(x,\xi ),\ldots ,\sup \limits _{\xi \in U}F_{m}(x,\xi )\Big )\), and consider the following sets

$$\begin{aligned} \mathcal {K}^{1}_{\bar{x}}:= & {} \{(u,v)\in \mathbb {R}^{1+m}: (u,v)=A^{1}_{\bar{x}}(x,\xi ),\;x\in \mathcal {X},\,\xi \in U\}, \\ \mathcal {H}^{1}:= & {} \{(u,v)\in \mathbb {R}^{1+m}: u>0,\,v\in D^{1}\}=\mathbb {R}_{++}\times (-\mathbb {R}^{m}_{+}). \end{aligned}$$

\(\mathcal {K}^{1}_{\bar{x}}\) is called the corrected image of problem (1), and \(\mathbb {R}^{1+m}\) is the image space. It follows from [27] that \(\bar{x}\in X_{1}\) is a strictly robust solution to problem (1), if and only if \(\mathcal {K}^{1}_{\bar{x}}\cap \mathcal {H}^{1}=\emptyset \).

While strict robustness concentrates on the worst case, optimistic robustness aims at minimizing the best possible objective function value over \(X_{2}\) and reflects an optimistic attitude of the decision maker, where \(X_{2}:=\{x\in \mathcal {X}:F(x,\xi )\in D^{1}\;\text{ for } \text{ some }\;\xi \in U\}\) stands for the set of optimistic feasible solutions. The optimistic counterpart was proposed by Beck and Ben-Tal [33] in the study of robust duality theory. The optimistic counterpart of problem (1) is given by

$$\begin{aligned} \min \,\inf _{\xi \in U}f(x,\xi )\;\;\text{ s.t. }\;\;F_{i}(x,\xi )\le 0,\;\text{ for } \text{ some }\;\xi \in U,\;i=1,\ldots ,m,\;\;x\in \mathcal {X}. \end{aligned}$$
(3)

If a feasible solution \(x\in X_{2}\) is optimal to problem (3), then it is an optimistic robust solution of problem (1). To introduce some notations associated with ISA, we assume that \(\inf _{\xi \in U}f(x,\xi )\) and \(\inf _{\xi \in U}F_{i}(x,\xi ),\,i=1,\ldots ,m\) are finite for all \(x\in \mathcal {X}\), and that the infima \(\inf _{\xi \in U}F_{i}(x,\xi ),\,i=1,\ldots ,m\) are attained for all \(x\in \mathcal {X}\). Let \(\bar{x}\in \mathcal {X}\) and \(F^{2}(x):=\Big (\min \limits _{\xi \in U}F_{1}(x,\xi ),\ldots ,\min \limits _{\xi \in U}F_{m}(x,\xi )\Big )\). Define the map

$$\begin{aligned} A^{2}_{\bar{x}}:\mathcal {X}\times U\rightarrow \mathbb {R}^{1+m},\quad A^{2}_{\bar{x}}(x,\xi ):=\bigg (\inf _{\xi \in U}f(\bar{x},\xi )-f(x,\xi ),\,F^{2}(x)\bigg ), \end{aligned}$$

and consider the following sets

$$\begin{aligned} \mathcal {K}^{2}_{\bar{x}}:=\{(u,v)\in \mathbb {R}^{1+m}: (u,v)=A^{2}_{\bar{x}}(x,\xi ),\;x\in \mathcal {X},\,\xi \in U\},\;\;\;\mathcal {H}^{2}:=\mathcal {H}^{1}. \end{aligned}$$

\(\mathcal {K}^{2}_{\bar{x}}\) is called the corrected image of problem (1) under the optimistic robust counterpart. It follows from [32, Theorem 3.2] that \(\bar{x}\in X_{2}\) is an optimistic robust solution to problem (1), if and only if \(\mathcal {K}^{2}_{\bar{x}}\cap \mathcal {H}^{2}=\emptyset \).
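When both \(\mathcal {X}\) and U are finite, the two image-space characterizations above can be checked by direct enumeration. The following Python sketch illustrates this on a small hypothetical instance (the data are ours, purely for illustration): it builds the image points \(A^{i}_{\bar{x}}(x,\xi )\) and tests whether any of them falls in \(\mathcal {H}^{1}=\mathbb {R}_{++}\times (-\mathbb {R}^{m}_{+})\).

```python
# Sketch: image-space robustness test over a finite decision set X and a
# finite uncertainty set U.  The instance is hypothetical, for illustration.

X = [0.0, 0.5, 1.0, 1.5]              # decision set (n = 1)
U = [1.0, 2.0]                        # two scenarios
m = 1                                 # number of constraints

f = lambda x, xi: xi * x              # objective f(x, xi)
F = lambda x, xi: [x - xi]            # constraint map F(x, xi)

def sup_f(x): return max(f(x, xi) for xi in U)
def inf_f(x): return min(f(x, xi) for xi in U)
def F1(x): return [max(F(x, xi)[i] for xi in U) for i in range(m)]
def F2(x): return [min(F(x, xi)[i] for xi in U) for i in range(m)]

def in_H(u, v):                       # H^1 = R_{++} x (-R^m_+)
    return u > 0 and all(vi <= 0 for vi in v)

def strictly_robust(xbar):            # is K^1_xbar disjoint from H^1?
    return not any(in_H(f(xbar, xi) - sup_f(x), F1(x))
                   for x in X for xi in U)

def optimistic_robust(xbar):          # is K^2_xbar disjoint from H^1?
    return not any(in_H(inf_f(xbar) - f(x, xi), F2(x))
                   for x in X for xi in U)

X1 = [x for x in X if all(c <= 0 for c in F1(x))]
X2 = [x for x in X if all(c <= 0 for c in F2(x))]
print([x for x in X1 if strictly_robust(x)])    # -> [0.0]
print([x for x in X2 if optimistic_robust(x)])  # -> [0.0]
```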

Let \(A,B\subseteq \mathbb {R}^{p}\) be nonempty subsets, and \(K\subseteq \mathbb {R}^{p}\) be a convex and pointed cone. The set of maximal elements of A with respect to K (see [34, Definition 4.1]) is denoted by

$$\begin{aligned} Max (A,K):=\{a\in A: A\cap (\{a\}+K)=\{a\} \}. \end{aligned}$$

Let \(C\subseteq \mathbb {R}^{p}\) be a proper, convex, closed and pointed cone. In set-valued OPs, in order to compare two sets A and B, we recall the upper (resp., lower) set less order relation \(\preceq _{C}^{u}\) (resp., \(\preceq _{C}^{l}\)) [28, Definition 2.6.9] defined by \(A\preceq _{C}^{u}B:\Leftrightarrow A\subseteq B-C\) (resp., \(A\preceq _{C}^{l}B:\Leftrightarrow B\subseteq A+C\)).

It is well known that the relationship between upper set less order relation and lower set less order relation can be described by \(A\preceq _{C}^{u}B\Leftrightarrow B\preceq _{-C}^{l}A\).
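For finite sets both notions are directly computable: \(a\in Max (A,K)\) exactly when no other \(b\in A\) satisfies \(b-a\in K\), and for nonempty compact subsets of \(\mathbb {R}\) with \(C=\mathbb {R}_{+}\) the relations \(\preceq _{C}^{u}\) and \(\preceq _{C}^{l}\) reduce to comparing maxima and minima, respectively. A minimal sketch (our own illustrative helpers, not code from the paper):

```python
# Sketch: maximal elements and set less order relations on finite data.

def Max(A, in_K):
    """Max(A, K) via the definition: a is maximal iff A ∩ ({a} + K) = {a},
    i.e. no other b in A has b - a in K (0 is in K, so a itself is covered)."""
    return [a for a in A
            if all(b == a or not in_K(tuple(bi - ai for bi, ai in zip(b, a)))
                   for b in A)]

in_K = lambda z: all(zi >= 0 for zi in z)       # K = R^2_+ (convex, pointed)
A = [(0, 0), (1, 0), (0, 1), (-1, 2)]
print(Max(A, in_K))                             # -> [(1, 0), (0, 1), (-1, 2)]

# For nonempty compact A, B ⊂ R and C = R_+:
#   A ⪯^u_C B  <=>  A ⊆ B - C  <=>  max A <= max B,
#   A ⪯^l_C B  <=>  B ⊆ A + C  <=>  min A <= min B.
u_less = lambda A, B: max(A) <= max(B)
l_less = lambda A, B: min(A) <= min(B)
print(u_less([0.0, 1.0], [0.5, 2.0]))           # -> True
print(l_less([0.0, 1.0], [0.5, 2.0]))           # -> True
```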

Next, we recall two well-known nonlinear scalarizing functions (the Gerstewitz (Tammer) function and the oriented distance function) and some of their properties; they will be employed to construct (regular) weak separation functions.

Definition 2.1

[35] Let \(C\subset Y\) be a proper, convex, closed and pointed cone with nonempty interior. Given a fixed point \(k\in -\text{ int }\,C\), the Gerstewitz (Tammer) function \(\varphi _{C,k}:Y\rightarrow \mathbb {R}\) has the form

$$\begin{aligned} \varphi _{C,k}(y):=\min \{t\in \mathbb {R}:y\in tk+C\},\quad y\in Y. \end{aligned}$$

Proposition 2.1

[28, 35] For any given \(k\in -int \,C\) and \(t\in \mathbb {R}\), it holds:

  1. (i)

    \(\varphi _{C,k}(y)<t\Leftrightarrow y\in tk+int \,C\);

  2. (ii)

    \(\varphi _{C,k}(y)\le t\Leftrightarrow y\in tk+C\);

  3. (iii)

    \(\varphi _{C,k}(y)=t\Leftrightarrow y\in tk+bd \,C\).
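For the polyhedral cone \(C=\mathbb {R}^{m}_{+}\) and \(k\in -int \,C\) (i.e., k with negative components), the minimum in Definition 2.1 admits the closed form \(\varphi _{C,k}(y)=\max _{i}y_{i}/k_{i}\), which makes Proposition 2.1 easy to check numerically. A minimal sketch under these assumptions (our own illustration):

```python
# Sketch: Gerstewitz (Tammer) function for C = R^m_+ and k in -int C.
# By Definition 2.1, y in t*k + R^m_+ means y_i >= t*k_i for all i;
# since k_i < 0 this reads t >= y_i / k_i, so the minimum is the largest ratio.

def gerstewitz(y, k):
    assert all(ki < 0 for ki in k), "requires k in -int C for C = R^m_+"
    return max(yi / ki for yi, ki in zip(y, k))

k = (-1.0, -2.0)
print(gerstewitz((2.0, 1.0), k))    # -0.5 < 0: (2,1) in int C (Prop. 2.1(i), t = 0)
print(gerstewitz((-3.0, 1.0), k))   #  3.0 > 0: (-3,1) lies outside C

# Numerical check of Proposition 2.1(ii): phi(y) <= t  <=>  y in t*k + C.
y, t = (2.0, 1.0), -0.5
print(gerstewitz(y, k) <= t,
      all(yi >= t * ki for yi, ki in zip(y, k)))   # -> True True
```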

Definition 2.2

[36] For a set \(A\subseteq Y\), let the oriented distance function \(\Delta _{A}:Y\rightarrow \mathbb {R}\cup \{\pm \infty \}\) be defined as \(\Delta _{A}(y):=d_{A}(y)-d_{Y\backslash A}(y),\) where \(d_{A}(y)=\inf _{a\in A}\parallel y-a\parallel \), \(d_{\emptyset }(y)=+\infty \), and \(\parallel y\parallel \) denotes the norm of y in Y.

This function is typically used to analyze the geometry of nonsmooth OPs and to derive necessary optimality conditions. Some properties of \(\Delta _{A}\) are gathered in the following proposition.

Proposition 2.2

[37] If the set A is nonempty and \(A\ne Y\), then

  1. (i)

    \(\Delta _{A}\) is real valued;

  2. (ii)

    if \(int \,A\ne \emptyset \), then \(\Delta _{A}(y)<0\) for every \(y\in int \,A\);

  3. (iii)

    \(\Delta _{A}(y)=0\) for every \(y\in bd \,A\);

  4. (iv)

    if \(int \,A^{c}\ne \emptyset \), then \(\Delta _{A}(y)>0\) for every \(y\in int \,A^{c}\).
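For the concrete set \(A=-\mathbb {R}^{m}_{+}\) with the Euclidean norm, the oriented distance has a simple closed form: \(\Delta _{A}(y)=\Vert y_{+}\Vert \) (the norm of the positive part of y) when \(y\notin A\), and \(\Delta _{A}(y)=\max _{i}y_{i}\le 0\) when \(y\in A\). A short sketch under these assumptions (our own illustration):

```python
# Sketch: oriented distance to A = -R^m_+ under the Euclidean norm.
# If y has a positive component, the nearest point of A zeroes it out, so
# d_A(y) = ||y_+||; if y <= 0, the distance to the complement A^c is the
# smallest |y_i|, hence Delta_A(y) = max_i y_i <= 0.
import math

def oriented_distance(y):
    if all(yi <= 0 for yi in y):
        return max(y)                                  # = -d_{A^c}(y)
    return math.hypot(*[max(yi, 0.0) for yi in y])     # = d_A(y)

print(oriented_distance((-1.0, -3.0)))  # -1.0 < 0: interior point (Prop. 2.2(ii))
print(oriented_distance((0.0, -2.0)))   #  0.0:     boundary point (Prop. 2.2(iii))
print(oriented_distance((2.0, -1.0)))   #  2.0 > 0: point of int A^c (Prop. 2.2(iv))
```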

3 Separation Functions and Robust Optimality Conditions

Based on the idea of ISA, equivalent characterizations of various robust solutions have been established in [32] by means of the impossibility of parametric systems (i.e., the separation of two suitable subsets in the image space), which leads to a unified framework for uncertain OPs. In order to derive robust optimality conditions for a variety of robustness concepts, we recall linear and nonlinear (regular) weak separation functions that make the sets obtained in [32] separable, and we analyze different kinds of robust counterpart problems.

Let \(\Pi \) denote a set of parameters to be specified case by case, and \(\mathcal {H}\subset \mathbb {R}^{1+m}\) be a nonempty set. We first recall the definitions of weak separation function and regular weak separation function.

Definition 3.1

[16] The class of all the functions \(w:\mathbb {R}^{1+m}\times \Pi \rightarrow \mathbb {R}\), such that

  1. (i)

    \(lev_{\ge 0}\,w(\cdot ;\pi )\supseteq \mathcal {H}\), \(\forall \pi \in \Pi \),

  2. (ii)

    \(\bigcap _{\pi \in \Pi }lev_{>0}\,w(\cdot ;\pi )\subseteq \mathcal {H}\),

is called the class of weak separation functions and is denoted by \(W(\Pi )\).

Definition 3.2

[16] The class of all the functions \(w:\mathbb {R}^{1+m}\times \Pi \rightarrow \mathbb {R}\), such that

$$\begin{aligned} \bigcap _{\pi \in \Pi }lev_{>0}\,w(\cdot ;\pi )=\mathcal {H}, \end{aligned}$$

is denoted by \(W_{R}(\Pi )\) and is called the class of regular weak separation functions.

Consider the following linear and nonlinear separation functions:

$$\begin{aligned} w_{1}(u,v;\theta ,\lambda ):= & {} \theta u+\langle \lambda ,v\rangle ,\quad (u,v)\in \mathbb {R}^{1+m}, \,(\theta ,\lambda )\in \Pi _{1}:=(\mathbb {R}_{+}\times D^{1*})\setminus \{0_{\mathbb {R}^{1+m}}\}; \\ w_{l}(u,v;\theta ,\lambda ):= & {} \theta u+\overline{\omega }_{l}(v;\lambda ),\quad (u,v)\in \mathbb {R}^{1+m}, \;(\theta ,\lambda )\in \Pi _{2}:= \mathbb {R}^{2}_{+}\setminus \{0_{\mathbb {R}^{2}}\},\;l=2,3,4, \end{aligned}$$

where \(\overline{\omega }_{l}:\mathbb {R}^{m}\times \mathbb {R}_{+}\rightarrow \mathbb {R}\) are given by

$$\begin{aligned} \overline{\omega }_{l}(v;\lambda ):=\left\{ \begin{array}{lll} -\lambda \,\Delta _{D^{1}}(v),\;&{}l=2,\\ -\lambda \,\varphi _{D^{1},k^{1}}(v),\;&{}l=3,\\ \sup _{z\in \{v\}-D^{1}}(-\lambda \,\varphi _{D^{1},k^{1}}(z) -r\,\sigma (z)),\;&{}l=4, \end{array}\right. \end{aligned}$$

and \(k^{1}\in -int \,D^{1}\) is given (as Definition 2.1 requires for \(C=D^{1}\)), \(r>0\) is a real constant and the augmenting function \(\sigma :\mathbb {R}^{m}\rightarrow \mathbb {R}\) is upper semicontinuous and satisfies

$$\begin{aligned} \arg \min _{z\in \mathbb {R}^{m}}\sigma (z)=\{0_{\mathbb {R}^{m}}\}, \;\;\sigma (0_{\mathbb {R}^{m}})=0. \end{aligned}$$

The linear separation function \(w_{1}\) has been employed to investigate various OPs [16, 19, 21], and optimality conditions as well as duality results have been derived from it. The nonlinear weak separation function \(w_{2}\) was introduced to study optimality conditions and error bounds for Ky Fan quasi-inequalities in [25]. By means of the Gerstewitz function, the nonlinear weak separation functions \(w_{3}\) and \(w_{4}\) were presented and used to analyze inverse variational inequalities [26]. If we replace \(\Pi _{1}\) and \(\Pi _{2}\) by \(\Pi _{3}:=\mathbb {R}_{++}\times D^{1*}\) and \(\Pi _{4}:=\mathbb {R}_{++}\times \mathbb {R}_{+}\), respectively, then \(w_{l}\), \(l=1,2,3,4\), are regular. By applying these linear and nonlinear separation functions, our aim is to derive robust optimality conditions for general uncertain OPs.
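With \(D^{1}=-\mathbb {R}^{m}_{+}\), all ingredients of \(w_{1}\), \(w_{2}\) and \(w_{3}\) are explicitly computable: \(D^{1*}=-\mathbb {R}^{m}_{+}\), \(\Delta _{D^{1}}\) has the closed form noted after Proposition 2.2, and \(\varphi _{D^{1},k^{1}}(v)=\max _{i}v_{i}/k^{1}_{i}\) for \(k^{1}\in -int \,D^{1}\). The sketch below (our own illustration; \(w_{4}\) is omitted since it needs an augmenting function \(\sigma \)) evaluates the three functions and spot-checks the weak separation property \(w\ge 0\) on \(\mathcal {H}^{1}\):

```python
# Sketch: the separation functions w_1, w_2, w_3 for D^1 = -R^m_+ (m = 2).
# For w_1, lam lies in D^{1*} = -R^m_+ (componentwise <= 0); for w_2, w_3,
# lam >= 0 is a scalar and k > 0 componentwise (k in -int D^1).
import math, random

def delta_D1(v):                        # oriented distance to D^1 = -R^m_+
    if all(vi <= 0 for vi in v):
        return max(v)
    return math.hypot(*[max(vi, 0.0) for vi in v])

def phi_D1(v, k=(1.0, 1.0)):            # Gerstewitz function w.r.t. D^1, k
    return max(vi / ki for vi, ki in zip(v, k))

w1 = lambda u, v, th, lam: th * u + sum(li * vi for li, vi in zip(lam, v))
w2 = lambda u, v, th, lam: th * u - lam * delta_D1(v)
w3 = lambda u, v, th, lam: th * u - lam * phi_D1(v)

# Spot-check Definition 3.1(i): w >= 0 on H^1 = R_{++} x (-R^m_+).
random.seed(0)
for _ in range(1000):
    u = random.uniform(1e-9, 10.0)                            # u > 0
    v = (-random.uniform(0, 10.0), -random.uniform(0, 10.0))  # v in -R^2_+
    th, lam = random.uniform(0, 5.0), random.uniform(0, 5.0)
    assert w1(u, v, th, (-lam, -lam)) >= 0
    assert w2(u, v, th, lam) >= 0 and w3(u, v, th, lam) >= 0
print("w_1, w_2, w_3 nonnegative on all sampled points of H^1")
```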

Remark 3.1

When the sets \(\text{ cone }(\mathcal {K}^{i}_{\bar{x}})\) and \(\mathcal {H}^{i}\) for \(i=1,2\) are convex, linear separation is guaranteed. Unfortunately, the converse implication usually does not hold. One can refer to [32, Example 3.2], in which \(\mathcal {K}^{1}_{0}\cap \mathcal {H}^{1}=\emptyset \) and \(\text{ cone }(\mathcal {K}^{1}_{0})\) is nonconvex for strict robustness, while the separation is linear (see [32, Fig. 4]). Thus, the convexity of the sets is only a sufficient condition for linear separation. Naturally, if a linear separation holds, then a nonlinear separation follows, but the reverse implication is in general not true. Moreover, another equivalent characterization of linear (regular) separation between \(\mathcal {K}^{i}_{\bar{x}}\) and \(\mathcal {H}^{i}\) has been given; one can refer to [19] and [24, Theorems 4.1 and 4.3].

Remark 3.2

The motivation for introducing the function \(w_{3}\) is twofold. First, like \(w_{2}\), it is convenient and efficient to evaluate for an appropriate choice of \(k^{1}\in -int \,D^{1}\). Second, \(w_{3}\) remains a valid nonlinear separation function when \(D^{1}\) is replaced by a general closed convex set, not necessarily a cone.

We now use the specific linear or nonlinear functions \(w_{l},\,l=1,2,3,4\), to derive Lagrangian-type sufficient robust optimality conditions. Before that, it is necessary to define linear and nonlinear regular separation.

Definition 3.3

The sets \(\mathcal {K}^{i}_{\bar{x}}\) and \(\mathcal {H}^{i}\) admit a

  1. (i)

    linear regular separation for \(i=1,2\), if and only if there exists \((\bar{\theta },\bar{\lambda })\in \Pi _{3}\), such that

    $$\begin{aligned} w_{1}(u,v;\bar{\theta },\bar{\lambda }):=\bar{\theta } u+\langle \bar{\lambda },v\rangle \le 0,\quad \forall \;(u,v)\in \mathcal {K}^{i}_{\bar{x}}; \end{aligned}$$
    (4)
  2. (ii)

    nonlinear regular separation for \(i=1,2\), if and only if there exists \((\bar{\theta },\bar{\lambda })\in \Pi _{4}\), such that

    $$\begin{aligned} w_{l}(u,v;\bar{\theta },\bar{\lambda }):=\bar{\theta } u+\overline{\omega }_{l}(v;\bar{\lambda })\le 0,\quad \forall \;(u,v)\in \mathcal {K}^{i}_{\bar{x}},\;l=2,3,4. \end{aligned}$$
    (5)

Proposition 3.1

If \(\mathcal {K}^{i}_{\bar{x}}\) and \(\mathcal {H}^{i}\) admit a (linear/nonlinear) regular separation for \(i=1,2\), then \(\bar{x}\in X_{1}\) (resp., \(X_{2}\)) is a strictly (resp., an optimistic) robust solution of problem (1).

Proof

We only verify the linear case, since the proof of the nonlinear case is similar. By Definition 3.3, if \(\mathcal {K}^{i}_{\bar{x}}\) and \(\mathcal {H}^{i}\) admit a linear regular separation for \(i=1,2\), then there exists \((\bar{\theta },\bar{\lambda })\in \Pi _{3}\) such that (4) holds. It follows from Definition 3.2 that

$$\begin{aligned} \bigcap _{(\theta ,\lambda )\in \Pi _{3}}lev_{>0}\,w_{1} (\cdot ;\theta ,\lambda )\supseteq \mathcal {H}^{i}. \end{aligned}$$
(6)

Combining (4) and (6), we conclude that \(\mathcal {K}^{i}_{\bar{x}}\cap \mathcal {H}^{i}=\emptyset \). Thus, \(\bar{x}\in X_{1}\) (resp., \(X_{2}\)) is a strictly (resp., an optimistic) robust solution of problem (1). \(\square \)

We now deduce Lagrangian-type robust optimality conditions for the scalar robust OP, mainly for the concepts of strict robustness and optimistic robustness. The corresponding generalized Lagrangian functions associated with inequalities (4) and (5) are defined as follows:

  1. (i)

    For strict robustness, the generalized Lagrangian functions \(\mathcal {L}^{1}_{1}:\mathcal {X}\times \Pi _{3}\rightarrow \mathbb {R}\) and \(\mathcal {L}^{1}_{l}:\mathcal {X}\times \Pi _{4}\rightarrow \mathbb {R}\), \(l=2,3,4\), are given by

    $$\begin{aligned} \mathcal {L}^{1}_{1}(x;\theta ,\lambda ):= & {} \theta \sup _{\xi \in U}f(x,\xi )-\langle \lambda ,F^{1}(x)\rangle \;\;\text{ and } \\ \mathcal {L}^{1}_{l}(x;\theta ,\lambda ):= & {} \theta \sup _{\xi \in U}f(x,\xi )-\overline{\omega }_{l}(F^{1}(x);\lambda ); \end{aligned}$$
  2. (ii)

    For optimistic robustness, the generalized Lagrangian functions \(\mathcal {L}^{2}_{1}:\mathcal {X}\times \Pi _{3}\rightarrow \mathbb {R}\), \(\mathcal {L}^{2}_{l}:\mathcal {X}\times \Pi _{4}\rightarrow \mathbb {R}\) are defined by

    $$\begin{aligned} \mathcal {L}^{2}_{1}(x;\theta ,\lambda ):= & {} \theta \inf _{\xi \in U}f(x,\xi )-\langle \lambda ,F^{2}(x)\rangle ,\;\;\; \\ \mathcal {L}^{2}_{l}(x;\theta ,\lambda ):= & {} \theta \inf _{\xi \in U}f(x,\xi )-\overline{\omega }_{l}(F^{2}(x);\lambda ). \end{aligned}$$

Under mild assumptions, the following theorems establish equivalences between separation and saddle points in the ISA.

Theorem 3.1

Let \(\bar{x}\in X_{1}\). Assume that \(\sup _{\xi \in U}f(\bar{x},\xi )\) is attained. The sets \(\mathcal {K}^{1}_{\bar{x}}\) and \(\mathcal {H}^{1}\) admit a

  1. (i)

    linear regular separation, if and only if there exists \((\bar{\theta },\bar{\lambda })\in \Pi _{3}\), such that \((\bar{x},\bar{\lambda })\) is a saddle point for the generalized Lagrangian function \(\mathcal {L}^{1}_{1}(x;\bar{\theta },\lambda )\) on \(\mathcal {X}\times D^{1*}\), i.e.,

    $$\begin{aligned} \mathcal {L}^{1}_{1}(\bar{x};\bar{\theta },\lambda )\le \mathcal {L}^{1}_{1} (\bar{x};\bar{\theta },\bar{\lambda })\le \mathcal {L}^{1}_{1}(x;\bar{\theta },\bar{\lambda }), \quad \forall \; x\in \mathcal {X},\;\forall \;\lambda \in D^{1*}; \end{aligned}$$
  2. (ii)

    nonlinear regular separation, if and only if there exists \((\bar{\theta },\bar{\lambda })\in \Pi _{4}\), such that \((\bar{x},\bar{\lambda })\) is a saddle point for the generalized Lagrangian functions \(\mathcal {L}^{1}_{l}(x;\bar{\theta },\lambda )\) on \(\mathcal {X}\times \mathbb {R}_{+}\) for \(l=2,3,4\).

Proof

The detailed proofs of parts (i) and (ii) can be found in [27] (see the proof of Theorem 3.4 therein), where the optimality conditions for strictly robust solutions have been investigated. \(\square \)

Theorem 3.2

Let \(\bar{x}\in X_{2}\). Assume that \(\inf _{\xi \in U}f(\bar{x},\xi )\) is attained. The sets \(\mathcal {K}^{2}_{\bar{x}}\) and \(\mathcal {H}^{1}\) admit a

  1. (i)

    linear regular separation, if and only if there exists \((\bar{\theta },\bar{\lambda })\in \Pi _{3}\), such that \((\bar{x},\bar{\lambda })\) is a saddle point for the generalized Lagrangian function \(\mathcal {L}^{2}_{1}(x;\bar{\theta },\lambda )\) on \(\mathcal {X}\times D^{1*}\), i.e.,

    $$\begin{aligned} \mathcal {L}^{2}_{1}(\bar{x};\bar{\theta },\lambda ) \le \mathcal {L}^{2}_{1}(\bar{x};\bar{\theta },\bar{\lambda })\le \mathcal {L}^{2}_{1}(x;\bar{\theta },\bar{\lambda }), \quad \forall \; x\in \mathcal {X},\;\forall \;\lambda \in D^{1*}; \end{aligned}$$
    (7)
  2. (ii)

    nonlinear regular separation, if and only if there exists \((\bar{\theta },\bar{\lambda })\in \Pi _{4}\), such that \((\bar{x},\bar{\lambda })\) is a saddle point for the generalized Lagrangian functions \(\mathcal {L}^{2}_{l}(x;\bar{\theta },\lambda )\) on \(\mathcal {X}\times \mathbb {R}_{+}\) for \(l=2,3,4\).

Proof

  1. (i)

    Necessity. Assume that \(\mathcal {K}^{2}_{\bar{x}}\) and \(\mathcal {H}^{1}\) admit a linear regular separation. It follows from Definition 3.3 that there exists \((\bar{\theta },\bar{\lambda })\in \Pi _{3}\) such that (4) holds, i.e.,

    $$\begin{aligned} \bar{\theta }\Big (\inf _{\xi \in U}f(\bar{x},\xi )-f(x,\xi )\Big )+\langle \bar{\lambda },F^{2}(x)\rangle \le 0,\quad \forall \; x\in \mathcal {X},\;\forall \;\xi \in U. \end{aligned}$$
    (8)

    Since \(\inf _{\xi \in U}f(\bar{x},\xi )\) is attained, there exists \(\bar{\xi }\in U\) such that \(f(\bar{x},\bar{\xi })=\inf _{\xi \in U}f(\bar{x},\xi )\). By taking \(x=\bar{x}\) and \(\xi =\bar{\xi }\), we obtain from (8) that \(\langle \bar{\lambda },F^{2}(\bar{x})\rangle \le 0\). Since \(\bar{x}\in X_{2}\) implies \(F^{2}(\bar{x})\in D^{1}\) and \(\bar{\lambda }\in D^{1*}\), it holds \(\langle \bar{\lambda },F^{2}(\bar{x})\rangle \ge 0\). Thus, we conclude \(\langle \bar{\lambda },F^{2}(\bar{x})\rangle =0\). One can then see that

    $$\begin{aligned} \mathcal {L}^{2}_{1}(\bar{x};\bar{\theta },\lambda )= & {} \bar{\theta }\inf \limits _{\xi \in U}f(\bar{x},\xi )-\langle \lambda ,F^{2}(\bar{x})\rangle \;\le \;\bar{\theta }\inf \limits _{\xi \in U}f(\bar{x},\xi )-\langle \bar{\lambda },F^{2}(\bar{x})\rangle \nonumber \\= & {} \mathcal {L}^{2}_{1}(\bar{x};\bar{\theta },\bar{\lambda }), \quad \forall \,\lambda \in D^{1*}. \end{aligned}$$
    (9)

    It follows from \(\langle \bar{\lambda },F^{2}(\bar{x})\rangle \ge 0\) and (8), for any \(x\in \mathcal {X}\), that

    $$\begin{aligned} \begin{aligned}&\bar{\theta }\Big (\inf \limits _{\xi \in U}f(\bar{x},\xi )-f(x,\xi )\Big )+\langle \bar{\lambda },F^{2}(x)\rangle \le 0 \quad \text{ for } \text{ all }\;\xi \in U\\ \Leftrightarrow \;\;&\bar{\theta }\inf \limits _{\xi \in U}f(\bar{x},\xi )\le \bar{\theta }f(x,\xi )-\langle \bar{\lambda },F^{2}(x)\rangle \quad \text{ for } \text{ all }\;\xi \in U\\ \Leftrightarrow \;\;&\bar{\theta }\inf \limits _{\xi \in U}f(\bar{x},\xi )\le \bar{\theta }\inf \limits _{\xi \in U}f(x,\xi )-\langle \bar{\lambda },F^{2}(x)\rangle \\ \Rightarrow \;\;&\bar{\theta }\inf \limits _{\xi \in U}f(\bar{x},\xi )-\langle \bar{\lambda },F^{2}(\bar{x})\rangle \le \bar{\theta }\inf \limits _{\xi \in U}f(x,\xi )-\langle \bar{\lambda },F^{2}(x)\rangle \\ \Leftrightarrow \;\;&\mathcal {L}^{2}_{1}(\bar{x};\bar{\theta },\bar{\lambda })\le \mathcal {L}^{2}_{1}(x;\bar{\theta },\bar{\lambda }). \end{aligned} \end{aligned}$$
    (10)

    We conclude from (9) and (10) that (7) holds. Hence, \((\bar{x},\bar{\lambda })\) is a saddle point. Sufficiency. Let \(\bar{x}\in X_{2}\), and suppose that \((\bar{x},\bar{\lambda })\) is a saddle point for \(\mathcal {L}^{2}_{1}(x;\bar{\theta },\lambda )\). Then, inequality (7) is true. From the first inequality of (7), one can see that \(0\le \langle \bar{\lambda },F^{2}(\bar{x})\rangle \le \langle \lambda ,F^{2}(\bar{x})\rangle \) for all \(\lambda \in D^{1*}\). Setting \(\lambda =0_{\mathbb {R}^{m}}\in D^{1*}\), we have \(\langle \lambda ,F^{2}(\bar{x})\rangle =0\), which implies \(\langle \bar{\lambda },F^{2}(\bar{x})\rangle =0\). Combining the second inequality of (7) with the reverse direction of (10), one can see that inequality (8) holds (note that the attainment of \(\inf _{\xi \in U}f(\bar{x},\xi )\) is unnecessary here). It follows from Definition 3.3 that \(\mathcal {K}^{2}_{\bar{x}}\) and \(\mathcal {H}^{1}\) admit a linear regular separation.

  2. (ii)

    If \(\bar{x}\in X_{2}\), it holds \(F^{2}(\bar{x})\in D^{1}\). Since \(k^{1}\in -int \,D^{1}\) is given, it follows from Propositions 2.1 (ii) and 2.2 (ii)-(iii) that \(\varphi _{D^{1},k^{1}}(F^{2}(\bar{x}))\le 0\) and \(\Delta _{D^{1}}(F^{2}(\bar{x}))\le 0\). Since \(0_{\mathbb {R}^{m}}\in \{F^{2}(\bar{x})\}-D^{1}\) and \(\sigma (0_{\mathbb {R}^{m}})=0\), one can conclude that \(\sup _{z\in \{F^{2}(\bar{x})\}-D^{1}}(-\lambda \,\varphi _{D^{1},k^{1}}(z) -r\,\sigma (z))\ge 0\). The properties essential to adapting the proof of (i) are \(\overline{\omega }_{l}(F^{2}(\bar{x});\lambda )\ge 0\) and \(\overline{\omega }_{l}(F^{2}(\bar{x});0)=0\) for all \(l=2,3,4\); the remaining details can be easily checked.

\(\square \)

Combining Proposition 3.1 with Theorems 3.1 and 3.2, the following corollaries can be reached; we state them without detailed proof.

Corollary 3.1

Let \(\bar{x}\in X_{1}\). If there exists \((\bar{\theta },\bar{\lambda })\in \Pi _{3}\) (resp., \(\Pi _{4}\)), such that \((\bar{x},\bar{\lambda })\) is a saddle point for the generalized Lagrangian function \(\mathcal {L}^{1}_{1}(x;\bar{\theta },\lambda )\) (resp., \(\mathcal {L}^{1}_{l}(x;\bar{\theta },\lambda )\) for some \(l\in \{2,3,4\}\)) on \(\mathcal {X}\times D^{1*}\) (resp., \(\mathcal {X}\times \mathbb {R}_{+}\)), then \(\bar{x}\) is a strictly robust solution of problem (1).

Corollary 3.2

Let \(\bar{x}\in X_{2}\). If there exists \((\bar{\theta },\bar{\lambda })\in \Pi _{3}\) (resp., \(\Pi _{4}\)), such that \((\bar{x},\bar{\lambda })\) is a saddle point for the generalized Lagrangian function \(\mathcal {L}^{2}_{1}(x;\bar{\theta },\lambda )\) (resp., \(\mathcal {L}^{2}_{l}(x;\bar{\theta },\lambda )\) for some \(l\in \{2,3,4\}\)) on \(\mathcal {X}\times D^{1*}\) (resp., \(\mathcal {X}\times \mathbb {R}_{+}\)), then \(\bar{x}\) is an optimistic robust solution of problem (1).

By means of linear and nonlinear regular weak separation functions, we provide necessary and sufficient robust optimality conditions for the scalar robust OP.

Theorem 3.3

  1. (i)

    Let \(\bar{\theta }\in \,]0,+\infty [\) and \(\bar{x}\in X_{1}\). Assume that \(\sup _{\xi \in U}f(\bar{x},\xi )\) is attained. Then, \(\bar{x}\) is a strictly robust solution of problem (1), if and only if

    $$\begin{aligned} \sup _{(x,\xi )\in \mathcal {X}\times U}\inf _{\lambda \,\in D^{1*}}\Big [\,\bar{\theta }\Big (f(\bar{x},\xi )-\sup _{\xi \in U}f(x,\xi )\Big )+\langle \lambda ,F^{1}(x)\rangle \,\Big ]=0, \end{aligned}$$
    (11)

    or

    $$\begin{aligned} \sup _{(x,\xi )\in \mathcal {X}\times U}\inf _{\lambda \,\in \mathbb {R}_+}\Big [\,\bar{\theta }\Big (f(\bar{x},\xi )-\sup _{\xi \in U}f(x,\xi )\Big )+\overline{\omega }_{l}(F^{1}(x);\lambda )\,\Big ]=0,\quad l=2,3,4. \end{aligned}$$
    (12)
  2. (ii)

    Let \(\bar{\theta }\in \,]0,+\infty [\) and \(\bar{x}\in X_{2}\). Assume that \(\inf _{\xi \in U}f(\bar{x},\xi )\) is attained. Then, \(\bar{x}\) is an optimistic robust solution of problem (1), if and only if

    $$\begin{aligned} \sup _{(x,\xi )\in \mathcal {X}\times U}\inf _{\lambda \,\in D^{1*}}\Big [\,\bar{\theta }\Big (\inf _{\xi \in U}f(\bar{x},\xi )-f(x,\xi )\Big )+\langle \lambda ,F^{2}(x)\rangle \,\Big ]=0, \end{aligned}$$
    (13)

    or

    $$\begin{aligned} \sup _{(x,\xi )\in \mathcal {X}\times U}\inf _{\lambda \,\in \mathbb {R}_+}\Big [\,\bar{\theta }\Big (\inf _{\xi \in U}f(\bar{x},\xi )-f(x,\xi )\Big )+\overline{\omega }_{l}(F^{2}(x);\lambda )\,\Big ]=0,\quad l=2,3,4. \end{aligned}$$
    (14)

Proof

The statement of part (i) is just [27, Theorem 3.3], and one can refer to the proof of that result. By a slight modification of the proof, the conclusions of part (ii) can be verified. It is noteworthy that the assumption of part (ii) (i.e., that \(\inf _{\xi \in U}f(\bar{x},\xi )\) is attained) is employed only to verify the necessity. \(\square \)
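When \(\mathcal {X}\) and U are finite, condition (11) can be evaluated by enumeration. Note that the inner infimum over \(\lambda \in D^{1*}=-\mathbb {R}^{m}_{+}\) equals 0 when \(F^{1}(x)\in D^{1}\) (attained at \(\lambda =0\)) and \(-\infty \) otherwise, so it merely screens out the infeasible x. The following sketch (a hypothetical instance of our own, with \(\bar{\theta }=1\)) evaluates the sup-inf of (11) and compares it with a direct worst-case minimization:

```python
# Sketch: checking condition (11) with theta_bar = 1 on a finite instance
# (hypothetical data, for illustration).  The inner infimum over
# lam in D^{1*} = -R^m_+ is 0 if F^1(x) <= 0 and -infinity otherwise,
# so the outer supremum may be restricted to the feasible x.

X = [-1.0, 0.0, 0.5, 1.5]
U = [1.0, 2.0]
f = lambda x, xi: xi * abs(x)            # worst-case value: 2|x|
F1 = lambda x: max(x - 1.0, x - 2.0)     # sup over U of the single constraint

def sup_inf(xbar):                       # left-hand side of (11)
    return max(f(xbar, xi) - max(f(x, s) for s in U)
               for x in X if F1(x) <= 0 for xi in U)

X1 = [x for x in X if F1(x) <= 0]
worst = lambda x: max(f(x, s) for s in U)
for xbar in X1:
    # Theorem 3.3(i): sup_inf(xbar) == 0 iff xbar is strictly robust;
    # the two printed booleans agree on every feasible point.
    print(xbar, sup_inf(xbar) == 0, worst(xbar) == min(worst(x) for x in X1))
```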

Remark 3.3

Based on the analysis of the equivalence relations between separation and saddle points for the above two robustness concepts (Theorems 3.1 and 3.2), we can divide the six robustness concepts discussed in [32, Sect. 3] into three cases: (i) for strict robustness, reliable robustness and adjustable robustness, we assume that the supremum of the objective function is attained for some \(\bar{x}\in X_{1}\); (ii) for optimistic robustness, it is required that the infimum of the objective function is attained for some \(\bar{x}\in X_{2}\); (iii) for light robustness and \(\epsilon \)-constraint robustness, no additional assumptions are needed for the corresponding counterpart problem, which can be seen as a special case of the constrained extremum problem (1.1.1) described in [16]. Therefore, the corresponding conclusions of Proposition 3.1, Theorems 3.1-3.3 and Corollaries 3.1-3.2 for reliable robustness (resp., light robustness, \(\epsilon \)-constraint robustness, adjustable robustness) can also be obtained. Thus, various sufficient and necessary robust optimality conditions can be reached.

4 Relationships to Vector/Set-Valued Optimization

In Sect. 3, robust optimality conditions were obtained by virtue of linear and nonlinear regular weak separation functions, in particular saddle point sufficient optimality conditions for different kinds of robustness concepts. In what follows, by means of some notations of ISA, a family of vector OPs and set-valued OPs is introduced, and we investigate their relations to robustness concepts. More precisely, Sects. 4.1 and 4.2 explore the relationships between the ISA approach and vector (resp., set-valued) optimization. Besides, close connections to the existing methods (vector optimization and set-valued optimization) presented in [13] are established.

4.1 Relationships to Vector Optimization

In this subsection, we introduce a family of vector OPs by virtue of the sets \(\mathcal {K}^{i}_{\bar{x}}\) and \(\mathcal {H}^{i}\) for \(i=1,2\) and investigate their relationships to strict robustness and optimistic robustness; the other robustness concepts can be treated analogously. For each \(\bar{x}\in X_{i},i=1,2\), let \(\mathcal {A}_{i}(\bar{x}):=\mathcal {K}^{i}_{\bar{x}}\cup \{0_{\mathbb {R}^{1+m}}\}\) and \(K:=\mathcal {H}^{1}\cup \{0_{\mathbb {R}^{1+m}}\}\) (recall that \(\mathcal {H}^{2}:=\mathcal {H}^{1}\)); obviously, K is a convex and pointed cone. Then, a family of vector OPs can be defined, i.e., compute (\(Max (\mathcal {A}_{i}(\bar{x}),K)\), \(\bar{x}\in X_{i}\)). By analysis, one can obtain the following conclusion.

Theorem 4.1

A feasible solution \(\bar{x}\in X_{1}\) (resp., \(X_{2}\)) is a strictly (resp., an optimistic) robust solution to problem (1), if and only if \(0_{\mathbb {R}^{1+m}}\in Max (\mathcal {A}_{1}(\bar{x}),K)\) (resp., \(Max (\mathcal {A}_{2}(\bar{x}),K)\)).

Proof

Since \(0_{\mathbb {R}^{1+m}}\not \in \mathcal {H}^{i}\) for \(i=1,2\), one has

$$\begin{aligned} \begin{aligned} 0_{\mathbb {R}^{1+m}}\in Max (\mathcal {A}_{i}(\bar{x}),K)&\Leftrightarrow \;0_{\mathbb {R}^{1+m}}\in Max (\mathcal {K}^{i}_{\bar{x}}\cup \{0_{\mathbb {R}^{1+m}}\},\mathcal {H}^{i}\cup \{0_{\mathbb {R}^{1+m}}\})\\&\Leftrightarrow \;(\mathcal {K}^{i}_{\bar{x}}\cup \{0_{\mathbb {R}^{1+m}}\})\cap (\{0_{\mathbb {R}^{1+m}}\}+ \mathcal {H}^{i}\cup \{0_{\mathbb {R}^{1+m}}\})=\{0_{\mathbb {R}^{1+m}}\}\\&\Leftrightarrow \;[\mathcal {K}^{i}_{\bar{x}}\cap (\mathcal {H}^{i}\cup \{0_{\mathbb {R}^{1+m}}\})]\cup \{0_{\mathbb {R}^{1+m}}\}=\{0_{\mathbb {R}^{1+m}}\}\\&\Leftrightarrow \;[(\mathcal {K}^{i}_{\bar{x}}\cap \mathcal {H}^{i})\cup (\mathcal {K}^{i}_{\bar{x}}\cap \{0_{\mathbb {R}^{1+m}}\})] \cup \{0_{\mathbb {R}^{1+m}}\}=\{0_{\mathbb {R}^{1+m}}\}\\&\Leftrightarrow \; \mathcal {K}^{i}_{\bar{x}}\cap \mathcal {H}^{i}=\emptyset \\&\Leftrightarrow \;\bar{x}\;\;\text{ is } \text{ a } \text{ strictly } \text{(resp., } \text{ an } \text{ optimistic) } \text{ robust } \text{ solution } \text{ to } \text{ problem } (1). \end{aligned} \end{aligned}$$

The proof is complete. \(\square \)

Remark 4.1

With the help of the notations of ISA, a family of vector OPs for strict robustness (resp., optimistic robustness) has been introduced, and the relations between these vector OPs and the two robustness concepts have been discussed in Theorem 4.1. Note that the corresponding conclusion of Theorem 4.1 for reliable robustness (resp., light robustness, \(\epsilon \)-constraint robustness, adjustable robustness) can also be derived by introducing the sets \(\mathcal {A}(\bar{x})\) and K appropriately. Then, an equivalent characterization of various robust solutions can be obtained from the vector optimization point of view. What is noteworthy is that the proof of Theorem 4.1 makes full use of the conclusions of [32, Theorems 3.1 and 3.2]; thus, the tool of vector optimization has close connections to the ISA approach. The following example visualizes the statement of Theorem 4.1 in the strict robustness case.

Example 4.1

For problem (1), take \(\mathcal {X}=\mathbb {R}\) and let \(U=\{\xi _{1},\xi _{2}\}\) be the uncertainty set. Let \(f(x,\xi _{1})=x\), \(f(x,\xi _{2})=-x\), \(F(x,\xi _{1})=x-1\) and \(F(x,\xi _{2})=x-2\). By simple calculation, one has \(\max _{\xi \in U}f(x,\xi )=|x|\), \(\max _{\xi \in U}F(x,\xi )=x-1\) and \(X_{1}=\,]-\infty ,1]\). For strict robustness, \(x^{*}=0\) is the unique strictly robust solution. Set \(\bar{x}=0\). It follows from [32, Example 3.1] that the corrected image set is \(\mathcal {K}^{1}_{0}=\{(u,v)\in \mathbb {R}^{2}:v=-u-1,\;u\le 0\}\cup \{(u,v)\in \mathbb {R}^{2}:v=u-1,\;u<0\}\). Due to \(\mathcal {A}_{1}(0):=\mathcal {K}^{1}_{0}\cup \{0_{\mathbb {R}^2}\}\) and \(K:=\mathcal {H}^{1}\cup \{0_{\mathbb {R}^2}\}=\{(u,v)\in \mathbb {R}^{2}:u>0,\;v\le 0\}\cup \{0_{\mathbb {R}^2}\}\), it is not difficult to verify the conclusion of Theorem 4.1, i.e., \(0_{\mathbb {R}^2}\in Max (\mathcal {A}_{1}(0),K)\).
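The maximality claim can also be spot-checked numerically: both rays of \(\mathcal {K}^{1}_{0}\) have \(u\le 0\), so no image point can fall in \(\mathcal {H}^{1}\). A small sketch (our own illustration of this example):

```python
# Sketch: numerical check of Example 4.1.  Here A^1_0(x, xi) =
# (f(0, xi) - max_xi' f(x, xi'), max_xi F(x, xi)) = (-|x|, x - 1),
# independent of xi, since f(0, .) = 0.

in_H1 = lambda u, v: u > 0 and v <= 0

grid = [i * 0.01 - 3.0 for i in range(601)]     # x in [-3, 3]
image = [(-abs(x), x - 1.0) for x in grid]      # sampled points of K^1_0
print(any(in_H1(u, v) for u, v in image))       # -> False: K^1_0 ∩ H^1 = ∅,
                                                #    hence 0 in Max(A_1(0), K)
```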

By defining two kinds of vector order relations, i.e., the sup-order relation (\(\alpha _{\sup }\)) and the inf-order relation (\(\alpha _{\inf }\)), Klamroth et al. [13] developed a unified characterization of five concepts of robust optimization in terms of vector optimization. We recall some basic notations associated with the vector optimization approach. For a given \(x\in \mathcal {X}\), let \(\mathcal {F}_{x}:U\rightarrow \mathbb {R}\) be the function defined by \(\mathcal {F}_{x}(\xi )=f(x,\xi )\), which collects the objective function values of x across the scenarios \(\xi \in U\). In order to compare two feasible solutions x and y, the order relations \(\alpha _{\sup }\) and \(\alpha _{\inf }\) in the real linear space \(\mathbb {R}^{U}\) of all mappings \(\mathcal {F}:U\rightarrow \mathbb {R}\) were considered. Given \(\mathcal {F}_{x},\mathcal {F}_{y}\in \mathbb {R}^{U}\), the relations \(\alpha _{\sup }\) and \(\alpha _{\inf }\) are defined by

$$\begin{aligned} \mathcal {F}_{x}\;\alpha _{\sup }\;\mathcal {F}_{y}:\Leftrightarrow \sup _{\xi \in U}\mathcal {F}_{x}(\xi ){\le }\sup _{\xi \in U}\mathcal {F}_{y}(\xi ) \;\;\;\text{ and }\;\;\;\mathcal {F}_{x}\;\alpha _{\inf }\;\mathcal {F}_{y}:\Leftrightarrow \inf _{\xi \in U}\mathcal {F}_{x}(\xi ){\le }\inf _{\xi \in U}\mathcal {F}_{y}(\xi ), \end{aligned}$$

respectively. Particularly, in the case of a finite uncertainty set \(U:=\{\xi _{1},\ldots ,\xi _{q}\}\), \(\alpha _{\sup }\) corresponds to the max-order relation in multiobjective optimization [38], where \(\mathcal {F}_{x}:=(f(x,\xi _{1}),\ldots ,f(x,\xi _{q}))\in \mathbb {R}^{q}\).

In what follows, taking into account strict robustness and optimistic robustness, the relationships between the ISA approach and the vector optimization method described in [13] are revealed on the basis of Theorem 4.1. Corollary 4.1 corresponds to [13, Theorems 1 and 4], and the detailed proof is omitted. For regret robustness, reliable robustness and adjustable robustness, the order relation used is also \(\alpha _{\sup }\); hence, these cases are analogous to that of strict robustness.

Proposition 4.1

Let \(\bar{x}\in X_{1}\) (resp., \(X_{2}\)). \(0_{\mathbb {R}^{1+m}}\in Max (\mathcal {A}_{1}(\bar{x}),K)\) (resp., \(Max (\mathcal {A}_{2}(\bar{x}),K)\)), if and only if \(\mathcal {F}_{\bar{x}}\;\alpha _{\sup }\;\mathcal {F}_{x}\) (resp., \(\mathcal {F}_{\bar{x}}\;\alpha _{\inf }\;\mathcal {F}_{x}\)) for all \(x\in X_{1}\) (resp., \(X_{2}\)).

Proof

Since \(\bar{x}\in X_{1}\), combining [32, Theorem 3.1] and Theorem 4.1, one obtains

$$\begin{aligned} \begin{aligned} 0_{\mathbb {R}^{1+m}}\in Max (\mathcal {A}_{1}(\bar{x}),K)&\Leftrightarrow \; \mathcal {K}^{1}_{\bar{x}}\cap \mathcal {H}^{1}=\emptyset \\&\Leftrightarrow \;f(\bar{x},\xi )-\sup _{\xi \in U}f(x,\xi )\le 0,\;\;\forall \;x\in X_{1},\;\forall \;\xi \in U\\&\Leftrightarrow \;\sup _{\xi \in U}f(\bar{x},\xi )\le \sup _{\xi \in U}f(x,\xi ),\;\;\forall \;x\in X_{1}\\&\Leftrightarrow \;\sup _{\xi \in U}\mathcal {F}_{\bar{x}}(\xi )\le \sup _{\xi \in U}\mathcal {F}_{x}(\xi ),\;\;\forall \;x\in X_{1}\\&\Leftrightarrow \;\mathcal {F}_{\bar{x}}\;\alpha _{\sup }\;\mathcal {F}_{x},\;\;\forall \;x\in X_{1}. \end{aligned} \end{aligned}$$

Analogously, if \(\bar{x}\in X_{2}\), one can see that

$$\begin{aligned} \begin{aligned} 0_{\mathbb {R}^{1+m}}\in Max (\mathcal {A}_{2}(\bar{x}),K)&\Leftrightarrow \; \mathcal {K}^{2}_{\bar{x}}\cap \mathcal {H}^{2}=\emptyset \\&\Leftrightarrow \;\inf _{\xi \in U}f(\bar{x},\xi )-f(x,\xi )\le 0,\;\;\forall \;x\in X_{2},\;\forall \;\xi \in U\\&\Leftrightarrow \;\inf _{\xi \in U}f(\bar{x},\xi )\le \inf _{\xi \in U}f(x,\xi ),\;\;\forall \;x\in X_{2}\\&\Leftrightarrow \;\inf _{\xi \in U}\mathcal {F}_{\bar{x}}(\xi )\le \inf _{\xi \in U}\mathcal {F}_{x}(\xi ),\;\;\forall \;x\in X_{2}\\&\Leftrightarrow \;\mathcal {F}_{\bar{x}}\;\alpha _{\inf }\;\mathcal {F}_{x},\;\;\forall \;x\in X_{2}. \end{aligned} \end{aligned}$$

The proof is complete. \(\square \)

Corollary 4.1

A feasible solution \(\bar{x}\in X_{1}\) (resp., \(X_{2}\)) is a strictly (resp., an optimistic) robust solution to problem (1), if and only if \(\mathcal {F}_{\bar{x}}\;\alpha _{\sup }\;\mathcal {F}_{x}\) (resp., \(\mathcal {F}_{\bar{x}}\;\alpha _{\inf }\;\mathcal {F}_{x}\)) for all \(x\in X_{1}\) (resp., \(X_{2}\)).

Remark 4.2

Theorem 4.1 and Corollary 4.1 provide equivalent characterizations of robust solutions from the vector optimization point of view. Importantly, they are deduced by using the separation of the ISA approach. The ISA has thus shown itself to be instrumental in unifying the fields of robust optimization and vector optimization and in finding new results. Similar phenomena hold between the ISA approach and the approach based on set-valued optimization (see Theorem 4.2 and Corollary 4.2 in the next subsection). For research on vector/set-valued optimization through ISA, one can refer to [39, 40].

4.2 Relationships to Set-Valued Optimization

This subsection concentrates on exploring the relationships between the ISA approach and set-valued optimization in dealing with robust OPs. First of all, a collection of set-valued OPs is introduced by means of \(\mathcal {K}^{i}_{\bar{x}}\) for \(i=1,2\). Moreover, we derive an equivalent characterization of strict robustness (resp., optimistic robustness) from the set-valued optimization point of view. Finally, some connections to the set-valued optimization method proposed in [13] are established.

For a given \(x\in \mathcal {X}\), let \(\mathcal {B}_{x}:=\{f(x,\xi ):\xi \in U\}\subset \mathbb {R}\) and \(C:=\mathbb {R}_{+}\). For each \(\bar{x}\in X_{1}\) (resp., \(X_{2}\)), let

$$\begin{aligned} \begin{aligned} u_{1}(\bar{x},x)&:=\{f(\bar{x},\xi )-\sup _{\xi \in U}f(x,\xi ):\xi \in U\}= \{f(\bar{x},\xi ):\xi \in U\}-\sup _{\xi \in U}f(x,\xi ) \\&=\mathcal {B}_{\bar{x}}-\sup \mathcal {B}_{x}\subset \mathbb {R},\\ \Big (\text{ resp., }\;\;u_{2}(\bar{x},x)&:=\{\inf _{\xi \in U}f(\bar{x},\xi )-f(x,\xi ):\xi \in U\} \\&=\inf _{\xi \in U}f(\bar{x},\xi )- \{f(x,\xi ):\xi \in U\} =\inf \mathcal {B}_{\bar{x}}-\mathcal {B}_{x}\subset \mathbb {R}\Big ) \end{aligned} \end{aligned}$$

then a collection of set-valued OPs can be defined by \(\Big (\max \limits _{x\in X_i}u_{i}(\bar{x},x),\bar{x}\in X_{i}\Big )\) for \(i=1,2\), respectively. For a given \(\bar{x}\in X_{i}\), the set-valued OP is \(\max \limits _{x\in X_i}u_{i}(\bar{x},x)\). According to [28, Definition 2.6.19], an element \(x^{*}\in X_i\) is a maximal solution of the set-valued OP with respect to \(\preceq _{C}^{u}\), if \(u_{i}(\bar{x},x^{*})\preceq _{C}^{u}u_{i}(\bar{x},x)\) for some \(x\in X_{i}\Rightarrow u_{i}(\bar{x},x)\preceq _{C}^{u}u_{i}(\bar{x},x^{*})\). Then, the relations between set-valued OPs and strict robustness (resp., optimistic robustness) can be revealed.

Theorem 4.2

A feasible solution \(\bar{x}\in X_{i}\) for \(i=1\) (resp., 2) is a strictly (resp., an optimistic) robust solution to problem (1), if and only if \(\max \limits _{x\in X_i}u_{i}(\bar{x},x)\preceq _{C}^{u}\{0\}\), i.e., \(\max \limits _{x\in X_i}u_{i}(\bar{x},x)\subseteq -C\).

Proof

For \(i=1\), since \(\bar{x}\in X_{1}\), it holds \(F^{1}(\bar{x})\in D^{1}\). Then, one has

$$\begin{aligned} \begin{aligned} \max \limits _{x\in X_1}u_{1}(\bar{x},x)\subseteq -C&\Leftrightarrow \;f(\bar{x},\xi )-\sup _{\xi \in U}f(x,\xi )\le 0,\;\;\forall \;x\in X_1,\;\;\forall \;\xi \in U\\&\Leftrightarrow \;\mathcal {K}^{1}_{\bar{x}}\cap \mathcal {H}^{1}=\emptyset \\&\Leftrightarrow \;\bar{x}\;\;\text{ is } \text{ a } \text{ strictly } \text{ robust } \text{ solution } \text{ to } \text{ problem } (1). \end{aligned} \end{aligned}$$

For the case of \(i=2\), since \(\bar{x}\in X_{2}\), it is obvious that \(F^{2}(\bar{x})\in D^{1}\). One can see that

$$\begin{aligned} \begin{aligned} \max \limits _{x\in X_2}u_{2}(\bar{x},x)\subseteq -C&\Leftrightarrow \;\inf _{\xi \in U}f(\bar{x},\xi )-f(x,\xi )\le 0,\;\;\forall \;x\in X_2,\;\;\forall \;\xi \in U\\&\Leftrightarrow \;\mathcal {K}^{2}_{\bar{x}}\cap \mathcal {H}^{2}=\emptyset \\&\Leftrightarrow \;\bar{x}\;\;\text{ is } \text{ an } \text{ optimistic } \text{ robust } \text{ solution } \text{ to } \text{ problem } (1). \end{aligned} \end{aligned}$$

The proof is complete. \(\square \)

Remark 4.3

A collection of set-valued OPs for strict robustness and optimistic robustness has been introduced by using the notations of ISA, and the relationships between these set-valued OPs and strict robustness (resp., optimistic robustness) have been discussed in Theorem 4.2. It is worth noting that the proof of Theorem 4.2 takes advantage of the results of [32, Theorems 3.1 and 3.2]; consequently, the tool of set-valued optimization has close connections to the ISA approach. Additionally, Theorem 4.2 also applies to reliable robustness and adjustable robustness once the corresponding set-valued OPs are defined. The following example illustrates the effectiveness of Theorem 4.2 in the strict robustness case.

Example 4.2

For problem (1), let \(\mathcal {X}=\mathbb {R}\), \(U=[1,2]\), \(f(x,\xi )=x+\xi \) and \(F(x,\xi )=x^{2}-\xi \). By simple calculation, \(\max _{\xi \in U}f(x,\xi )=x+2\), \(\max _{\xi \in U}F(x,\xi )=x^{2}-1\) and \(X_{1}=[-1,1]\). One can see that \(x^{*}=-1\) is the unique strictly robust solution. (i) Taking \(\bar{x}=-1\), since \(\max \limits _{x\in X_1}u_{1}(-1,x)=\{-1+\xi :\xi \in U\}+\max \limits _{x\in X_1}(-x-2)=\{-2+\xi :\xi \in U\}\), it holds

$$\begin{aligned} \max \limits _{x\in X_1}u_{1}(-1,x)=[-1,0]\subseteq -C, \;\;\text{ i.e. },\;\; \max \limits _{x\in X_1}u_{1}(-1,x)\preceq _{C}^{u}\{0\}. \end{aligned}$$

(ii) Set \(\bar{x}=1\). One has \(\max \limits _{x\in X_1}u_{1}(1,x)=\{1+\xi :\xi \in U\}+\max \limits _{x\in X_1}(-x-2)=\{\xi :\xi \in U\}=[1,2]\nsubseteq -C\), i.e., \(\max \limits _{x\in X_1}u_{1}(1,x)\npreceq _{C}^{u}\{0\}\).
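Since \(U=[1,2]\) is an interval, the sets \(\mathcal {B}_{x}\) and \(u_{1}(\bar{x},x)\) can be approximated by discretization. The following sketch (our own illustration) reproduces both cases of this example:

```python
# Sketch: discretized check of Example 4.2.  Here B_x = {x + xi : xi in U}
# and u_1(xbar, x) = B_xbar - sup B_x; maximizing over x in X_1 = [-1, 1]
# amounts to subtracting the smallest worst-case value min_x sup B_x.

U = [1.0 + i / 100 for i in range(101)]       # discretized U = [1, 2]
X1 = [-1.0 + i / 100 for i in range(201)]     # discretized X_1 = [-1, 1]

B = lambda x: [x + xi for xi in U]
sup_B = lambda x: max(B(x))                   # = x + 2

def max_u1(xbar):                             # max over x in X_1 of u_1(xbar, x)
    best = min(sup_B(x) for x in X1)          # attained at x = -1, value 1
    return [b - best for b in B(xbar)]

print(all(t <= 0 for t in max_u1(-1.0)))      # -> True:  [-1, 0] ⊆ -C
print(all(t <= 0 for t in max_u1(1.0)))       # -> False: [1, 2] ⊄ -C
```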

Based on the notation \(\mathcal {B}_{x}\) and set order relations, set-valued optimization has been developed in [13] as a unified approach to uncertain OPs. We recall the upper-type set-relation \(\beta ^{u}\) and the lower-type set-relation \(\beta ^{l}\) used in that work. For two feasible solutions x and y, the corresponding sets are \(\mathcal {B}_{x}\) and \(\mathcal {B}_{y}\). The relations \(\beta ^{u}\) and \(\beta ^{l}\) are given by

$$\begin{aligned} \mathcal {B}_{x}\;\beta ^{u}\;\mathcal {B}_{y}:\Leftrightarrow \sup \mathcal {B}_{x}\le \sup \mathcal {B}_{y} \;\;\;\text{ and }\;\;\;\mathcal {B}_{x}\;\beta ^{l}\;\mathcal {B}_{y}:\Leftrightarrow \inf \mathcal {B}_{x}\le \inf \mathcal {B}_{y}, \end{aligned}$$

respectively. In fact, the set order relations \(\beta ^{u}\) and \(\beta ^{l}\) here coincide with \(\preceq _{C}^{u}\) and \(\preceq _{C}^{l}\). Considering strict robustness and optimistic robustness, we now investigate the relationships between the ISA approach and the set-valued optimization method stated in [13] by virtue of Theorem 4.2. Corollary 4.2 is in accordance with [13, Theorems 2 and 5], and the detailed proof is omitted.

Proposition 4.2

Let \(\bar{x}\in X_{i}\) for \(i=1\) (resp., 2). \(\max \limits _{x\in X_i}u_{i}(\bar{x},x)\preceq _{C}^{u}\{0\}\), if and only if \(\mathcal {B}_{\bar{x}}\;\beta ^{u}\;\mathcal {B}_{x}\) (resp., \(\mathcal {B}_{\bar{x}}\;\beta ^{l}\;\mathcal {B}_{x}\)) for all \(x\in X_{1}\) (resp., \(X_{2}\)).

Proof

For \(i=1\), one can reach that

$$\begin{aligned} \begin{aligned} \max _{x\in X_1}u_{1}(\bar{x},x)\preceq _{C}^{u}\{0\}&\Leftrightarrow \;\max _{x\in X_1}(\mathcal {B}_{\bar{x}}-\sup \mathcal {B}_{x})\preceq _{C}^{u}\{0\}\\&\Leftrightarrow \;\mathcal {B}_{\bar{x}}-\sup \mathcal {B}_{x}\subseteq -C,\;\;\forall \;x\in X_1\\&\Leftrightarrow \;\mathcal {B}_{\bar{x}}\subseteq \sup \mathcal {B}_{x}-C,\;\;\forall \;x\in X_1\\&\Leftrightarrow \;\sup \mathcal {B}_{\bar{x}}\le \sup \mathcal {B}_{x},\;\;\forall \;x\in X_1\\&\Leftrightarrow \;\mathcal {B}_{\bar{x}}\;\beta ^{u}\;\mathcal {B}_{x},\;\;\forall \;x\in X_1. \end{aligned} \end{aligned}$$

For the case of \(i=2\), it holds

$$\begin{aligned} \begin{aligned} \max _{x\in X_2}u_{2}(\bar{x},x)\preceq _{C}^{u}\{0\}&\Leftrightarrow \;\max _{x\in X_2}(\inf \mathcal {B}_{\bar{x}}-\mathcal {B}_{x})\preceq _{C}^{u}\{0\}\\&\Leftrightarrow \;\inf \mathcal {B}_{\bar{x}}-\mathcal {B}_{x}\subseteq -C,\;\;\forall \;x\in X_2\\&\Leftrightarrow \;\mathcal {B}_{x}\subseteq \inf \mathcal {B}_{\bar{x}}+C,\;\;\forall \;x\in X_2\\&\Leftrightarrow \;\inf \mathcal {B}_{\bar{x}}\le \inf \mathcal {B}_{x},\;\;\forall \;x\in X_2\\&\Leftrightarrow \;\mathcal {B}_{\bar{x}}\;\beta ^{l}\;\mathcal {B}_{x},\;\;\forall \;x\in X_2. \end{aligned} \end{aligned}$$

The proof is complete. \(\square \)

Corollary 4.2

A feasible solution \(\bar{x}\in X_{i}\) for \(i=1\) (resp., 2) is a strictly (resp., an optimistic) robust solution to problem (1), if and only if \(\mathcal {B}_{\bar{x}}\;\beta ^{u}\;\mathcal {B}_{x}\) (resp., \(\mathcal {B}_{\bar{x}}\;\beta ^{l}\;\mathcal {B}_{x}\)) for all \(x\in X_{1}\) (resp., \(X_{2}\)).

Remark 4.4

Analogously to the case of \(\preceq _{C}^{u}\), a maximal solution of the set-valued OP with respect to \(\preceq _{C}^{l}\) can also be defined. Using \(\preceq _{C}^{l}\), if we replace \(\max \limits _{x\in X_i}u_{i}(\bar{x},x)\preceq _{C}^{u}\{0\}\) by \(\{0\}\preceq _{-C}^{l}\max \limits _{x\in X_i}u_{i}(\bar{x},x)\) for \(i=1,2\) in Theorem 4.2 and Proposition 4.2, the conclusions still hold, because \(\{0\}\preceq _{-C}^{l}\max \limits _{x\in X_i}u_{i}(\bar{x},x)\) is equivalent to \(\max \limits _{x\in X_i}u_{i}(\bar{x},x)\subseteq -C\).

5 A Specific Application: The Shortest Path Problem

Suppose that we plan to travel by car from A (origin) to B (destination) and that there are four feasible paths passing through city M, i.e., paths 1, 2, 3 and 4, with corresponding path lengths \(x_1\), \(x_2\), \(x_3\) and \(x_4\). Let \(\bar{v}\) be the average velocity from A to B on all paths. Generally, leaving other conditions aside, a shorter path yields a shorter traveling time and a lower cost. However, some unavoidable factors affect the decision maker's choice in the real world. For example, traffic jams are a common phenomenon and increase the traveling time; typically, the city center is the most crowded, so the cost is closely tied to the choice of path. Also, it is not known beforehand whether festival events will take place. If a festival is celebrated, its location strongly affects the nearby paths and leads to serious transportation problems. The location of festival events is thus vital to our decision.

Let \(U=\{\xi _{1},\xi _{2},\xi _{3},\xi _{4},\xi _{5}\}\) be the uncertainty set with five scenarios. The meaning of each scenario is as follows:

$$\begin{aligned} \begin{aligned}&\xi _{1}: \text{ no } \text{ festival };\quad \quad \quad \quad \quad \quad \;\;\xi _{2}: \text{ festival } \text{ in } \text{ the } \text{ center } \text{ of }\;M;\quad \quad \xi _{3}: \text{ festival } \text{ in } \text{ the } \text{ north } \text{ of }\;M;\\&\xi _{4}: \text{ festival } \text{ in } \text{ the } \text{ south } \text{ of }\;M;\quad \;\;\xi _{5}: \text{ festival } \text{ in } \text{ the } \text{ whole } \text{ city }. \end{aligned} \end{aligned}$$

Doubtlessly, the traveling time will be longer on paths near a festival. It is obvious that \(\xi _{1}\) and \(\xi _{5}\) are the best and worst scenarios, respectively. Assume that the costs are made up of fuel and tolls, that the fuel cost is proportional to the path length, and that the tolls depend on the scenario; if a festival takes place, the tolls on the nearby highway are waived. Let q, \(p(x,\xi )\), C and \(t(x,\xi )\) denote the fuel cost per unit length, the tolls per unit length, the maximum acceptable cost and the extra time produced by traffic jams or other emergencies, respectively. We are interested in a short traveling time at an acceptable cost, no matter which scenario is realized. Naturally, the setting of the maximum acceptable cost C plays a significant role in decision making. For an intuitive picture of this problem, one can refer to Fig. 1.

Fig. 1 Illustration of the shortest path problem

Table 1 The function \(p(x,\xi )\) concerning scenario \(\xi \) and path x
Table 2 The function \(t(x,\xi )\) concerning scenario \(\xi \) and path x

According to the above description, the following uncertain OP (\(Q(\xi )\)) can be formulated as:

$$\begin{aligned} \min \Big (\frac{x}{\bar{v}}+t(x,\xi )\Big )\;\;\text{ s.t. }\;\;x(p(x,\xi )+q)-C\le 0,\;\;x\in \mathcal {X}, \end{aligned}$$

where \(\mathcal {X}=\{x_1,x_2,x_3,x_4\}\). Let \(X_{1}=\{x\in \mathcal {X}: x(p(x,\xi )+q)-C\le 0,\;\forall \;\xi \in U\}\). For this uncertain problem, the following data are assumed to be known exactly:

  1. (i)

    The lengths of the four paths (in kilometers): \(x_1=50\), \(x_2=60\), \(x_3=80\) and \(x_4=100\);

  2. (ii)

    The fuel cost per unit length (dollars/kilometer) is \(q=0.5\), and the tolls per unit length are given in Table 1;

  3. (iii)

    The extra time produced by traffic jams or other emergencies (in hours) is summarized in Table 2.

Additionally, assume that the average velocity of the car is \(\bar{v}=100\) km/h and the maximum acceptable cost is \(C=90\) dollars. In what follows, we apply the results derived in Sect. 3 and [32, Sect. 3] to analyze this problem in the context of various robustness concepts and to provide reasonable choices for the decision maker; a computational sketch after the case analysis below reproduces the worst-case and best-case selections.

  1. (I)

    For strict robustness, since \(X_{1}=\Big \{x\in \mathcal {X}: x-90\le 0,\,\frac{1}{2}x-90\le 0\Big \}=\{x_1,x_2,x_3\}\), we only need to choose one path from \(X_{1}\). Due to \(\max _{\xi \in U}\Big (\frac{x}{100}+t(x,\xi )\Big )=\frac{x}{100}+t(x,\xi _5)\), one can reach that \(x^{*}_{sr}={\arg \min }_{x\in X_{1}}\max _{\xi \in U}\Big (\frac{x}{100}+t(x,\xi )\Big )=x_{2}\) and \(\frac{x^{*}_{sr}}{100}+t(x^{*}_{sr},\xi _5)=1.1\). Let \(\bar{x}=x_1\). If we take \(x^{0}=x_3\) and \(\xi =\xi _2\), then \(\mathcal {K}^{1}_{x_1}\cap \mathcal {H}^{1}\ne \emptyset \). At the same time, there does not exist \((\bar{\theta },\bar{\lambda })\in \Pi _{3}\) (resp. \(\Pi _{4}\)) such that inequality (4) (resp. (5)) and equality (11) (resp. (12)) hold. Setting \(\bar{x}=x_3\), \(\xi =\xi _{3}\) and \(x^{0}=x_{2}\), the same conclusions hold. Thus, neither \(x_1\) nor \(x_3\) is a strictly robust solution; see also, e.g., [32, Theorem 3.1]. Taking \(\bar{x}=x_2\) and \((\bar{\theta },\bar{\lambda })=(\frac{1}{2},0)\in \Pi _{3}\) (resp. \(\Pi _{4}\)), it is not difficult to see that \(\mathcal {K}^{1}_{x_2}\cap \mathcal {H}^{1}=\emptyset \) and that inequalities (4) and (5) hold. Meanwhile, \((x_2,0)\) is a saddle point for the generalized Lagrangian functions \(\mathcal {L}^{1}_{1}(x;\theta ,\lambda )\) (resp. \(\mathcal {L}^{1}_{l}(x;\theta ,\lambda ),l=2,3,4\)). For a given \(\bar{\theta }\in \,]0,+\infty [\), the equalities (11) and (12) are true. It follows from [32, Theorem 3.1], Theorem 3.3 (i), Definition 3.3, Proposition 3.1 and Corollary 3.1 that \(x_2\) is a strictly robust solution, which coincides with the solution \(x^{*}_{sr}\) obtained directly.

  2. (II)

    For optimistic robustness, since \(X_{2}=\{x\in \mathcal {X}: x(p(x,\xi )+q)-C\le 0,\;\text{ for } \text{ some }\;\xi \in U\}=\mathcal {X}\) and \(\xi _{1}\) is the best scenario, one can see that

    $$\begin{aligned} x^{*}_{or}={\arg \min }_{x\in X_{2}}\min _{\xi \in U}\Big (\frac{x}{100}+t(x,\xi )\Big )={\arg \min }_{x\in X_{2}}\Big (\frac{x}{100}+t(x,\xi _1)\Big )=x_{3} \end{aligned}$$

    and \(\frac{x^{*}_{or}}{100}+t(x^{*}_{or},\xi _1)=0.9\). Setting \(\bar{x}=x_1\), \(x^{0}=x_2\) and \(\xi =\xi _3\), it holds \(\mathcal {K}^{2}_{x_1}\cap \mathcal {H}^{2}\ne \emptyset \). Meanwhile, there do not exist \((\bar{\theta },\bar{\lambda })\in \Pi _{3}\) and \((\bar{\theta },\bar{\lambda })\in \Pi _{4}\) such that (4), (5), (13) and (14) hold. Taking \(\bar{x}=x_2\), \(x^{0}=x_3\) and \(\xi =\xi _1\) (resp. \(\bar{x}=x_4\), \(x^{0}=x_2\) and \(\xi =\xi _4\)), we reach the same conclusions. Hence, \(x_1\), \(x_2\) and \(x_4\) are not optimistic robust solutions; see also, e.g., [32, Theorem 3.2]. Let \(\bar{x}=x_3\) and \((\bar{\theta },\bar{\lambda })=(1,0)\in \Pi _{3}\) (resp. \(\Pi _{4}\)). One can verify that \(\mathcal {K}^{2}_{x_3}\cap \mathcal {H}^{2}=\emptyset \), that inequalities (4) and (5) are true, and that \((x_3,0)\) is a saddle point for the generalized Lagrangian functions \(\mathcal {L}^{2}_{1}(x;\theta ,\lambda )\) (resp. \(\mathcal {L}^{2}_{l}(x;\theta ,\lambda ),l=2,3,4\)). In addition, the equalities (13) and (14) can be verified for a given \(\bar{\theta }\in \,]0,+\infty [\). We conclude from [32, Theorem 3.2], Theorem 3.3 (ii), Definition 3.3, Proposition 3.1 and Corollary 3.2 that \(x_3\) is an optimistic robust solution.

  3. (III)

    When the maximum acceptable cost is reduced to \(C=49\) dollars, the set \(X_{1}\) is empty. In this case, we introduce some infeasibility tolerances for the constraints and analyze the problem with the help of reliable robustness. Set \(\delta =11\) and \(\hat{\xi }=\xi _2\). One can see that the reliable feasible set is \(X_3=\Big \{x\in \mathcal {X}: x-49\le 11,\,x(p(x,\xi _2)+q)-49\le 0\Big \}=\{x_1,x_2\}\). It is not difficult to see that \(x^{*}_{rr}={\arg \min }_{x\in X_{3}}\max _{\xi \in U}\Big (\frac{x}{100}+t(x,\xi )\Big )={\arg \min }_{x\in X_{3}}\Big (\frac{x}{100}+t(x,\xi _5)\Big )=x_{2}\) and \(\frac{x^{*}_{rr}}{100}+t(x^{*}_{rr},\xi _5)=1.1\). Taking \(\bar{x}=x_2\), one can verify that \(\mathcal {K}^{3}_{x_2}\cap \mathcal {H}^{3}=\emptyset \). It can be concluded from [32, Theorem 3.3] that \(x_2\) is a reliable robust solution.

  4. (IV)

    Considering \(\epsilon \)-constraint robustness, we choose \(\xi _{3}\) as the particular scenario and minimize the objective function \(\frac{x}{\bar{v}}+t(x,\xi _3)\). Let \(\epsilon _1=1.0,\epsilon _2=1.1,\epsilon _4=1.05\) and \(\epsilon _5=1.2\). Then, the feasible set is \(X_5=\Big \{x\in X_1:\frac{x}{100}+t(x,\xi _l)\le \epsilon _l,\;l=1,2,4,5\Big \}=\{x_2,x_3\}\). One can reach that \(x^{*}_{\epsilon cr}={\arg \min }_{x\in X_{5}}\Big (\frac{x}{100}+t(x,\xi _3)\Big )=x_{2}\) and \(\frac{x^{*}_{\epsilon cr}}{100}+t(x^{*}_{\epsilon cr},\xi _3)=0.95\). Setting \(\bar{x}=x_2\), it can be seen that \(\mathcal {K}^{5}_{x_2}\cap \mathcal {H}^{5}=\emptyset \). We conclude from [32, Theorem 3.5] that \(x_2\) is an \(\epsilon \)-constraint robust solution.
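The selections in (I) and (II) can be reproduced mechanically once Tables 1 and 2 are encoded. Since the tables are not restated here, the following sketch uses hypothetical entries of our own that are merely consistent with the values quoted above (e.g., \(t(x_2,\xi _5)=0.5\), \(t(x_3,\xi _1)=0.1\) and \(t(x_2,\xi _3)=0.35\)); only the procedure, not the data, is taken from the paper.

```python
# Sketch: strict vs. optimistic robust path selection for the model of
# Sect. 5.  The toll rule and the time table below are HYPOTHETICAL
# stand-ins for Tables 1-2 (only the entries quoted in the text, such as
# t(60, xi5) = 0.5 and t(80, xi1) = 0.1, are authentic).

paths = [50, 60, 80, 100]                 # x1..x4 (km)
scenarios = ["xi1", "xi2", "xi3", "xi4", "xi5"]
q, vbar, C = 0.5, 100.0, 90.0             # fuel $/km, speed km/h, budget $

# Hypothetical extra travel times t(x, xi) in hours:
t = {50:  {"xi1": 0.50, "xi2": 0.75, "xi3": 0.60, "xi4": 0.60, "xi5": 0.90},
     60:  {"xi1": 0.32, "xi2": 0.40, "xi3": 0.35, "xi4": 0.40, "xi5": 0.50},
     80:  {"xi1": 0.10, "xi2": 0.20, "xi3": 0.32, "xi4": 0.25, "xi5": 0.40},
     100: {"xi1": 0.05, "xi2": 0.10, "xi3": 0.30, "xi4": 0.20, "xi5": 0.35}}
# Hypothetical tolls: 0.5 $/km normally, waived city-wide under xi5.
p = lambda x, xi: 0.0 if xi == "xi5" else 0.5

f = lambda x, xi: x / vbar + t[x][xi]     # travel time objective
g = lambda x, xi: x * (p(x, xi) + q) - C  # cost constraint

X1 = [x for x in paths if all(g(x, xi) <= 0 for xi in scenarios)]
X2 = [x for x in paths if any(g(x, xi) <= 0 for xi in scenarios)]

x_sr = min(X1, key=lambda x: max(f(x, xi) for xi in scenarios))
x_or = min(X2, key=lambda x: min(f(x, xi) for xi in scenarios))
print("X1 =", X1, "strictly robust path length:", x_sr)    # [50, 60, 80], 60
print("X2 =", X2, "optimistic robust path length:", x_or)  # all paths, 80
```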

Based on the above discussion, we have found the corresponding robust solutions for several kinds of robustness concepts and verified the conclusions obtained in [32, Sect. 3] and Sect. 3. Which robustness concept to choose depends on the preferences of the decision maker. If he/she is conservative and risk averse, then path 2 is the best choice; if he/she is optimistic, path 3 is preferable. In particular, when the maximum acceptable cost is reduced so that \(X_1=\emptyset \), reliable robustness turns out to be more useful. If the decision maker's attitude has not been expressed, \(\epsilon \)-constraint robustness provides him/her with a wider range of options.

6 Conclusions

Within the framework of ISA, characterizations of robust optimality conditions for scalar robust OPs were derived. Based on some notations of ISA [32, Sect. 3], the corresponding vector OPs and set-valued OPs were introduced, and their relations to robustness concepts were discussed. These results have close connections to the approaches presented in [13], namely vector optimization and set-valued optimization. As described in Remark 3.1, linear separation is guaranteed when the sets in the image space are convex, but this condition is only sufficient; under what conditions a linear or nonlinear separation of the above sets exists remains an open question. Additionally, the relationship between the ISA approach and the scalarization techniques addressed in [13] seems to be of interest. Above all, how can one effectively use the optimality conditions derived by ISA, as well as by other approaches like those in [13], to design numerical algorithms for solving robust counterpart problems? It is noteworthy that this paper gives a partial answer to open question 3 posed in [32].