1 Introduction

Characterizations of the optimal solution sets of optimization problems are of great interest, as they are useful for developing solution methods for mathematical programming problems. Mangasarian [1] first presented several simple and elegant characterizations of the optimal solution set of a convex optimization problem with differentiable functions. Subsequently, Burke and Ferris [2] extended the results obtained in [1] to the nonsmooth case. Since then, many interesting results on characterizations of optimal solution sets of various types of optimization problems have been obtained; see, for example, pseudo-linear programs in [3], quasi-convex programming problems in [4, 5], nonconvex optimization problems with pseudo-invex functions in [6,7,8], nonconvex semi-infinite optimization problems in [9, 10] and cone-constrained convex vector optimization problems in [11].

Recently, optimization problems with data uncertainty in the objective function or the constraint functions have received much attention. It is known that the robust optimization approach [12, 13] (the worst-case approach) has emerged as a deterministic framework for studying mathematical programming problems with uncertain data, and many researchers have worked on the theory and applications of uncertain optimization problems by using robust optimization methodology. By using a new robust characteristic cone constraint qualification, some duality results are obtained in [14] for convex programming problems with data uncertainty. A robust conjugate duality theorem is given in [15] for support vector machines, an important class of convex optimization models for classifying two labeled data sets. By employing conjugate analysis, robust duality is investigated in [16] for a family of parametric convex optimization problems; as an application, a robust duality theorem is also obtained there for best approximation problems with constraints involving data uncertainty. Stable Lagrange dualities for conical programming problems with data uncertainty are considered in [17] within the framework of robust optimization. Under generalized convexity assumptions, portfolio optimization problems are investigated in [18] through the study of nonsmooth robust multiobjective optimization problems. By virtue of image space analysis, several necessary and sufficient robust optimality conditions are given in [19] for a general uncertain optimization problem without any convexity or concavity assumptions. Some robust approximate optimality conditions are established in [20] for an uncertain nonsmooth optimization problem with continuous inequality constraints.

More recently, the characterization of robust optimal solution sets for optimization problems with data uncertainty has attracted increasing interest. By using robust optimization, some properties and characterizations of the robust optimal solution set of an uncertain convex optimization problem are investigated in [21] under a robust Slater-type condition. Similar results are obtained in [22] by using a new robust Farkas–Minkowski constraint qualification, which is strictly weaker than the robust Slater-type condition. Subsequently, results on optimal solution sets are obtained in [23] for a pseudo-convex robust optimization problem with constraints involving data uncertainty; these results can be regarded as nonconvex extensions of the results obtained in [21, 22]. By introducing a robust-type subdifferential constraint qualification, some complete characterizations of the robust optimal solution set of a cone-constrained convex optimization problem with data uncertainty in both the objective and constraint functions are established in [24]. In [25], some characterizations of the robust solution set of an uncertain fractional optimization problem are provided by a method similar to that of [24]. Moreover, by employing Clarke subdifferentials and convex subdifferentials, robust solution sets are investigated in [26] for an uncertain convex optimization problem with locally Lipschitz inequality constraints.

Motivated and inspired by the work of [9, 21, 24], we aim to investigate some new characterizations of robust optimal solution sets for a class of uncertain optimization problems via a new dual approach, which differs from the approaches used in the papers mentioned above. More specifically, we first introduce a mixed-type robust dual problem of this class of uncertain optimization problems and explore robust strong duality relations. Then, we show that the Lagrangian-type function associated with a Lagrange multiplier is constant on a subset of the underlying locally convex vector space which is larger than the robust optimal solution set of this class of uncertain optimization problems. Based on the robust strong duality relations and this constancy property, we propose a new approach to characterize the robust optimal solution set of this class of uncertain optimization problems. Furthermore, several results on characterizations of robust optimal solution sets of uncertain optimization problems reported in the recent literature can be recovered by our approach.

The rest of the paper is organized as follows: In Sect. 2, we recall some notions and introduce a mixed-type robust dual problem of this class of uncertain optimization problems. Then, we give some auxiliary results which will be used in the following sections. In Sect. 3, we obtain some new properties of the Lagrange-type function associated with uncertain optimization problems. In Sect. 4, we propose a new dual approach to characterize the robust optimal solution set of this class of uncertain optimization problems via its duality relations.

2 Preliminaries and Auxiliary Results

Throughout this paper, let X and Y be two locally convex vector spaces with their topological dual spaces \(X^*\) and \(Y^*\), endowed with the weak\(^*\) topologies \(w(X^*,X)\) and \(w(Y^*,Y)\), respectively. Let \(K \subseteq Y\) be a nonempty, closed and convex cone. The dual cone of K is defined by

$$\begin{aligned} K^*=\{y^*\in Y^*:\langle y^*,y\rangle \ge 0,\quad \forall y\in K\}, \end{aligned}$$

where we use the notation \(\langle \cdot ,\cdot \rangle \) for the value of the continuous linear functional \(y^*\in Y^*\) at \(y\in Y\). Let D be a nonempty subset of X. The indicator function \(\delta _D: X\rightarrow \overline{{\mathbb {R}}}:={\mathbb {R}}\cup \{\pm \,\infty \}\) of D is defined by

$$\begin{aligned} \delta _D(x)= \left\{ \begin{array}{ll} 0,&{} \quad \text{ if } x\in D, \\ +\infty ,&{} \quad \text{ if } x\not \in D. \end{array} \right. \end{aligned}$$

For an extended real-valued function \(f:X\rightarrow {\mathbb {\overline{R}}}\), the effective domain and the epigraph are defined by

$$\begin{aligned} \text{ dom } f=\{x\in X:f(x)<+\infty \} \end{aligned}$$

and

$$\begin{aligned} \text{ epi } {f}=\{(x,r)\in X\times {\mathbb {R}}:f(x)\le r\}, \end{aligned}$$

respectively. We say that f is proper, iff \(f(x)>-\infty \) for all \(x\in X\) and \(\text{ dom } f\not =\emptyset \). We say that f is a convex function, iff for any \(x, y\in X\) and \( t\in [0,1]\),

$$\begin{aligned} f(t x+(1-t)y)\le t f(x)+(1-t)f(y). \end{aligned}$$

We say that f is concave, iff \(-f\) is convex. The subdifferential of f at \({\bar{x}}\in \text{ dom } f\), denoted by \(\partial f({\bar{x}})\), is

$$\begin{aligned} \partial f({\bar{x}})=\{x^*\in X^*:\langle x^*,x-{\bar{x}}\rangle \le f(x)- f({\bar{x}}), \quad \forall x\in X\}. \end{aligned}$$
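For instance, for \(X={\mathbb {R}}\) and \(f(x)=|x|\), this definition gives \(\partial f(0)=[-1,1]\), while \(\partial f({\bar{x}})=\{1\}\) for \({\bar{x}}>0\) and \(\partial f({\bar{x}})=\{-1\}\) for \({\bar{x}}<0\).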

Let \(g:X\rightarrow Y\cup \{\infty _Y\}\) be a vector-valued function, where \(\infty _Y\) is an abstract maximal element attached to Y. The domain and the K-epigraph of g are defined by

$$\begin{aligned} \text{ dom } g :=\{x\in X:g(x)\in Y\}, \end{aligned}$$

and

$$\begin{aligned} \text{ epi }_K ~ g := \{(x,y)\in X\times Y:y\in g(x)+K\}, \end{aligned}$$

respectively. The function g is said to be proper, iff \(\text{ dom } g \not =\emptyset \). The function g is said to be K-convex, iff for any \(x, y\in X\) and \( t\in [0,1]\),

$$\begin{aligned} g(t x+(1-t)y)-t g(x)-(1-t)g(y)\in -K. \end{aligned}$$

The function g is K-concave, iff \(-g\) is K-convex. Let \(\lambda \in K^*\). The function \((\lambda g): X\rightarrow \overline{{\mathbb {R}}}\) is defined by

$$\begin{aligned} (\lambda g)(x):= \left\{ \begin{array}{ll} \langle \lambda , g(x)\rangle ,&{} \quad \text{ if } x\in \mathrm {dom~} g , \\ +\infty ,&{} \quad \text{ otherwise }. \end{array} \right. \end{aligned}$$

Obviously, the function g is K-convex (resp. K-concave) if and only if, for each \(\lambda \in K^*\), \((\lambda g)\) is a convex (resp. concave) function. For more details, see [27].

Consider the following optimization problem with data uncertainty in both the objective and the constraints:

$$\begin{aligned} (\text{ UP }) ~~~~~~\inf _{x\in C}\Big \{ f(x,u) ~:~ g (x,v) \in -K\Big \}, \end{aligned}$$

where \(f :X\times {\bar{Z}}\rightarrow {\mathbb {R}}\) and \(g :X\times Z\rightarrow Y\) are given functions, Z and \({\bar{Z}}\) are two further locally convex vector spaces, \(C\subseteq X\) is a nonempty, closed and convex set, and u and v are uncertain parameters belonging to their respective convex and compact uncertainty sets \({\mathcal {U}}\subseteq {\bar{Z}}\) and \({\mathcal {V}}\subseteq Z\).

In this paper, the robust optimization (i.e., worst-case) approach is applied to \(\text{(UP) }\). We associate with \(\text{(UP) }\) its robust (worst-case) counterpart

$$\begin{aligned} (\text{ RUP }) ~~~~~~\inf _{x\in C}\Big \{ \max \limits _{u\in {\mathcal {U}}}f(x,u) ~:~ g (x,v) \in -K, \quad \forall v\in {\mathcal {V}}\Big \}, \end{aligned}$$

where the uncertain objective and constraint contain uncertain parameters from their respective uncertainty sets \({\mathcal {U}}\) and \({\mathcal {V}}\). In what follows, unless otherwise specified, we always assume that \(f(\cdot ,u)\) is a convex function for any \(u\in {\mathcal {U}}\), and \(g(\cdot ,v)\) is a K-convex function for any \(v\in {\mathcal {V}}\).
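To make the worst-case reading of \(\text{(RUP) }\) concrete, the following is a minimal numerical sketch, assuming finite (hence trivially compact) uncertainty sets and the hypothetical data \(f(x,u)=\langle u,x\rangle \), \(g(x,v)=x-v\) with \(K={\mathbb {R}}^2_+\); none of these data come from the paper.

```python
import numpy as np

# Hypothetical data (not from the paper): f(x, u) = <u, x>,
# g(x, v) = x - v, K = R^2_+, so g(x, v) in -K means x <= v componentwise.
U = [np.array([1.0, 1.0]), np.array([2.0, 0.5])]   # finite uncertainty set U
V = [np.array([0.0, 0.0]), np.array([1.0, -1.0])]  # finite uncertainty set V

def worst_case_objective(x):
    # The robust objective of (RUP): max_{u in U} f(x, u).
    return max(float(u @ x) for u in U)

def robust_feasible(x):
    # g(x, v) in -K for ALL v in V, i.e. x <= v componentwise for every v.
    return all(np.all(x - v <= 0.0) for v in V)

x = np.array([-0.5, -1.0])
print(robust_feasible(x))        # True
print(worst_case_objective(x))   # -1.5
```

For infinite compact uncertainty sets, the inner maximum would have to be computed analytically or by an inner solver; the finite case above merely illustrates the definitions.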

The aim of this paper is to obtain some dual characterizations of the robust optimal solution set of \(\text{(UP) }\) via its dual problem. So, we first give some important concepts and auxiliary results which will be used later in this paper.

Definition 2.1

[24] The robust feasible set of \(\text{(UP) }\) is defined by

$$\begin{aligned} {\mathcal {F}}^{\mathrm {(RUP)}} :=\Big \{x\in C:g(x,v)\in -K,\quad \forall v\in {\mathcal {V}}\Big \}. \end{aligned}$$

Definition 2.2

[24] We say that \({\bar{x}}\in {\mathcal {F}}^{\mathrm {(RUP)}}\) is a robust optimal solution of \(\text{(UP) }\), iff it is an optimal solution of \(\text{(RUP) }\). The robust optimal solution set of \(\text{(UP) }\), which consists of all robust optimal solutions of \(\text{(UP) }\), is given by

$$\begin{aligned} {\mathcal {S}}^{\mathrm {(RUP)}} :=\left\{ x\in {\mathcal {F}}^{\mathrm {(RUP)}} :\max _{u\in {\mathcal {U}}}f(x,u)\le \max _{u\in {\mathcal {U}}}f(y,u), \quad \forall y\in {\mathcal {F}}^{\mathrm {(RUP)}}\right\} . \end{aligned}$$

In order to derive some dual characterizations of \({\mathcal {S}}^{\mathrm {(RUP)}}\), we assume that \({\mathcal {S}}^{\mathrm {(RUP)}}\not =\emptyset \). It is shown in [24] that, under a robust-type subdifferential constraint qualification, \({\bar{x}}\in {\mathcal {S}}^{\mathrm {(RUP)}}\) if and only if there exist \({\bar{\lambda }}\in K^*\), \({\bar{u}}\in {\mathcal {U}}\), and \({\bar{v}}\in {\mathcal {V}}\), such that

$$\begin{aligned} 0\in \partial f(\cdot ,{\bar{u}})({\bar{x}})+\partial \left( \left( {\bar{\lambda }} g\right) (\cdot ,{\bar{v}})\right) ({\bar{x}})+\partial \delta _C({\bar{x}}), ~~\left( {\bar{\lambda }} g\right) ({\bar{x}},{\bar{v}})=0, \end{aligned}$$
(1)

and

$$\begin{aligned} f({\bar{x}},{\bar{u}}) =\max _{u\in {\mathcal {U}}}f({\bar{x}},{u}). \end{aligned}$$
(2)

As usual, the Lagrangian-type function associated with \(\text{(UP) }\) is defined by

$$\begin{aligned} {\mathcal {L}}\left( x, {\lambda }, {u}, {v}\right) :=\left\{ \begin{array}{ll} f(x,{u})+\left( {\lambda } g\right) (x, {v}) ,&{} \quad \text{ if } x\in C, {\lambda }\in K^*, {u}\in {\mathcal {U}}, {v}\in {\mathcal {V}}, \\ +\infty ,&{} \quad \text{ otherwise. } \end{array} \right. \end{aligned}$$
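As a complement, the sketch below transcribes the Lagrangian-type function directly, reusing the hypothetical data of the previous sketch; the \(+\infty \) branch is represented by `math.inf`.

```python
import math
import numpy as np

def lagrangian(x, lam, u, v, in_C):
    # L(x, lam, u, v) = f(x, u) + (lam g)(x, v) if x in C and lam in K*,
    # and +infinity otherwise; here f(x, u) = <u, x>, g(x, v) = x - v,
    # and K* = R^2_+, so lam in K* means lam >= 0 componentwise.
    if not (in_C(x) and np.all(lam >= 0.0)):
        return math.inf
    return float(u @ x) + float(lam @ (x - v))

in_C = lambda x: True  # C = R^2 in this toy setting
val = lagrangian(np.array([-0.5, -1.0]), np.array([1.0, 0.0]),
                 np.array([1.0, 1.0]), np.array([0.0, 0.0]), in_C)
print(val)  # f + (lam g) = -1.5 + (-0.5) = -2.0
```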

Then, \(\text{(UP) }\) has its conventional uncertain mixed-type dual problem with the same uncertain parameters \(u\in {\mathcal {U}}\) and \(v\in {\mathcal {V}}\):

$$\begin{aligned} (\mathrm {UD}) ~~~~~~\left\{ \begin{array}{ll} \max ~~~ {\mathcal {L}}\left( y, {\lambda }, {u}, {v}\right) \\ \text{ s.t. } ~~ 0\in \partial f( \cdot ,u)(y)+\partial \big (\left( ({\lambda }+\mu ) g\right) (\cdot ,v)\big )( y)+\partial \delta _C( y),\\ ~~~~~~~~(\mu g)(y,v)\ge 0,\\ ~~~~~~~~ \lambda \in K^*,\mu \in K^*, y\in C. \end{array} \right. \end{aligned}$$

The optimistic counterpart of \((\mathrm {UD})\), called the optimistic dual problem, is the deterministic maximization problem given by

$$\begin{aligned} (\mathrm {OUD}) ~~~~~~\left\{ \begin{array}{ll} \max ~~~ {\mathcal {L}}\left( y, {\lambda }, {u}, {v}\right) \\ \text{ s.t. } ~~ 0\in \partial f( \cdot ,u)(y)+\partial \big (\left( ({\lambda }+\mu ) g\right) (\cdot ,v)\big )( y)+\partial \delta _C( y),\\ ~~~~~~~~(\mu g)(y,v)\ge 0,\\ ~~~~~~~~ \lambda \in K^*,\mu \in K^*, y\in C, u\in {\mathcal {U}}, v\in {\mathcal {V}}. \end{array} \right. \end{aligned}$$

Here, the maximization is also over all the parameters \(u\in {\mathcal {U}}\) and \(v\in {\mathcal {V}}\).

Remark 2.1

Obviously, if \(\mu =0\), then \(\mathrm {(OUD)}\) collapses to the Wolfe-type optimistic dual problem \((\mathrm {OUD_{W}})\) given by

$$\begin{aligned} (\mathrm {OUD_{W}}) ~~~~~~\left\{ \begin{array}{ll} \max ~~~ {\mathcal {L}}\left( y, {\lambda }, {u}, {v}\right) := f(y,u)+(\lambda g)(y,v) \\ \text{ s.t. } ~~ 0\in \partial f( \cdot ,u)(y)+\partial \big (\left( {\lambda } g\right) (\cdot ,v)\big )( y)+\partial \delta _C( y),\\ ~~~~~~~~ \lambda \in K^*, y\in C,u\in {\mathcal {U}}, v\in {\mathcal {V}}. \end{array} \right. \end{aligned}$$

If \(\lambda =0\), then \(\mathrm {(OUD)}\) collapses to the Mond–Weir-type optimistic dual problem \((\mathrm {OUD_{M}})\) given by

$$\begin{aligned} (\mathrm {OUD_M} )~~~~~~\left\{ \begin{array}{ll} \max ~~~ f(y,u) \\ \text{ s.t. } ~~ 0\in \partial f( \cdot ,u)(y)+\partial \big (\left( \mu g\right) (\cdot ,v)\big )( y)+\partial \delta _C( y),\\ ~~~~~~~~(\mu g)(y,v)\ge 0,\\ ~~~~~~~~ \mu \in K^*, y\in C,u\in {\mathcal {U}}, v\in {\mathcal {V}}. \end{array} \right. \end{aligned}$$

Now, we present some robust strong duality properties between \(\mathrm {(UP)}\) and \(\mathrm {(UD)}\). Recall that robust strong duality between \((\mathrm {UP})\) and \((\mathrm {UD})\) refers to the situation where the optimal value \(val(\mathrm {RUP})\) of \((\mathrm {RUP})\) equals the optimal value \( val(\mathrm {OUD})\) of \((\mathrm {OUD})\), and both the minimum of \((\mathrm {RUP})\) and the maximum of \((\mathrm {OUD})\) are attained. In what follows, we denote by \( {\mathcal {F}}^{\mathrm {(OUD)}}\) the feasible set of \((\mathrm {OUD})\).

Lemma 2.1

If \({\bar{x}}\in {\mathcal {F}}^{\mathrm {(RUP)}}\) is a robust optimal solution of \(\mathrm {(UP)}\), and there exist \({\bar{\lambda }}\in K^*\), \({\bar{u}}\in {\mathcal {U}}\), and \({\bar{v}}\in {\mathcal {V}}\), such that conditions (1) and (2) hold, then \(\left( {\bar{x}},{\bar{\lambda }},0,{\bar{u}},{\bar{v}}\right) \) and \(\left( {\bar{x}},0,{\bar{\lambda }},{\bar{u}},{\bar{v}}\right) \) are optimal solutions of \((\mathrm {OUD})\), and the objective values of \(\mathrm {(RUP)}\) and \((\mathrm {OUD})\) are equal.

Proof

Let \({\bar{x}}\in {\mathcal {F}}^{\mathrm {(RUP)}}\) be a robust optimal solution of \(\mathrm {(UP)}\), and suppose that there exist \({\bar{\lambda }}\in K^*\), \({\bar{u}}\in {\mathcal {U}}\), and \({\bar{v}}\in {\mathcal {V}}\), such that conditions (1) and (2) hold. Obviously, \(\left( {\bar{x}},{\bar{\lambda }},0,{\bar{u}},{\bar{v}}\right) \) and \(\left( {\bar{x}},0,{\bar{\lambda }},{\bar{u}},{\bar{v}}\right) \) are feasible solutions of \((\mathrm {OUD})\). On the other hand, for any feasible solution \((y,\lambda ,\mu , u,v)\) of \((\mathrm {OUD})\), we have

$$\begin{aligned} 0\in \partial f( \cdot ,u)(y)+\partial \big (\left( (\lambda + \mu ) g\right) (\cdot ,v)\big )( y)+\partial \delta _C( y). \end{aligned}$$

It follows that

$$\begin{aligned} 0\in \partial {\mathcal {L}}( \cdot ,\lambda +\mu ,u,v)(y)+ \partial \delta _C( y). \end{aligned}$$

Then, there exist \(\eta \in \partial {\mathcal {L}}( \cdot ,\lambda +\mu ,u,v)(y)\) and \(\xi \in \partial \delta _C( y)\), such that \(\eta +\xi =0.\) By the definition of subdifferential, we obtain

$$\begin{aligned}&{\mathcal {L}}({\bar{x}},\lambda +\mu ,u,v )-{\mathcal {L}}(y,\lambda +\mu ,u,v )\ge \langle \eta ,{\bar{x}}-y\rangle , \\&\qquad \delta _C({\bar{x}})-\delta _C(y)\ge \langle \xi , {\bar{x}}-y\rangle . \end{aligned}$$

Adding these inequalities yields

$$\begin{aligned} {\mathcal {L}}({\bar{x}},\lambda +\mu ,u,v )\ge {\mathcal {L}}(y,\lambda +\mu ,u,v ). \end{aligned}$$
(3)

Moreover, since \({\bar{x}}\in {\mathcal {F}}^{\mathrm {(RUP)}}\) gives \( \left( ({\lambda }+\mu ) g\right) ({\bar{x}}, {v})\le 0\), combining this with \((\mu g)(y,v)\ge 0\) and (3), we get

$$\begin{aligned} f({\bar{x}}, {u}) \ge {\mathcal {L}}({\bar{x}},\lambda +\mu ,u,v )\ge {\mathcal {L}}(y,\lambda +\mu ,u,v )\ge {\mathcal {L}}(y,\lambda ,u,v ), \end{aligned}$$

which implies that

$$\begin{aligned} \max _{u\in {\mathcal {U}}}f({\bar{x}}, {u}) \ge {\mathcal {L}}(y,\lambda ,u,v ). \end{aligned}$$

Moreover, (1) gives \(({\bar{\lambda }} g)({\bar{x}},{\bar{v}})=0\) and (2) gives \(f({\bar{x}},{\bar{u}})=\max _{u\in {\mathcal {U}}}f({\bar{x}},{u})\), so that \({\mathcal {L}}({\bar{x}},{\bar{\lambda }} ,{\bar{u}},{\bar{v}})=\max _{u\in {\mathcal {U}}}f({\bar{x}},{u})\). Hence,

$$\begin{aligned} {\mathcal {L}}({\bar{x}},{\bar{\lambda }} ,{\bar{u}},{\bar{v}}) \ge {\mathcal {L}}(y,\lambda ,u,v ), \text{ for } \text{ any } (y,\lambda ,\mu , u,v)\in {\mathcal {F}}^{\mathrm {(OUD)}}. \end{aligned}$$

Thus, \(\left( {\bar{x}},{\bar{\lambda }},0,{\bar{u}},{\bar{v}}\right) \) is an optimal solution of \((\mathrm {OUD})\), and the objective values of \(\mathrm {(RUP)}\) and \((\mathrm {OUD})\) are equal.

On the other hand, by a similar argument as above, we can easily show that \(\left( {\bar{x}},0,{\bar{\lambda }},{\bar{u}},{\bar{v}}\right) \in {\mathcal {F}}^{\mathrm {(OUD)}}\) is also an optimal solution of \((\mathrm {OUD})\), and the objective values of \(\mathrm {(RUP)}\) and \((\mathrm {OUD})\) are equal. This completes the proof. \(\square \)

Similarly, we can easily obtain the following result.

Lemma 2.2

If \({\bar{x}}\in {\mathcal {F}}^{\mathrm {(RUP)}}\) is a robust optimal solution of \(\mathrm {(UP)}\), and there exist \({\bar{\lambda }}\in K^*\), \({\bar{u}}\in {\mathcal {U}}\), and \({\bar{v}}\in {\mathcal {V}}\), such that conditions (1) and (2) hold, then \(\left( {\bar{x}},{\bar{\lambda }}, {\bar{u}},{\bar{v}}\right) \) is an optimal solution of \((\mathrm {OUD_W})\) and \((\mathrm {OUD_M})\), and the objective values of \(\mathrm {(RUP)}\), \((\mathrm {OUD_W})\) and \((\mathrm {OUD_M})\) are equal.

3 Constant Lagrangian-Type Function Properties

In this section, we prove that the Lagrangian-type function associated with a Lagrange multiplier is constant on a subset of X which is larger than the robust optimal solution set \( {\mathcal {S}}^{\mathrm {(RUP)}}\).

Proposition 3.1

Let \(\left( {\bar{y}}, {\bar{\lambda }},{\bar{\mu }}, {\bar{u}},{\bar{v}}\right) \in {\mathcal {F}}^{\mathrm {(OUD)}}\) be an optimal solution of \((\mathrm {OUD})\). Then, \(({\bar{\mu }} g)({\bar{y}},{\bar{v}})=0\) and \({\mathcal {L}}( y,{\bar{\lambda }}+{\bar{\mu }} ,{\bar{u}},{\bar{v}})\) is constant on \({\mathcal {F}}_1^{\mathrm {(OUD)}}:=\left\{ y\in C:\left( {y}, {\bar{\lambda }},{\bar{\mu }}, {\bar{u}},{\bar{v}}\right) \in {\mathcal {F}}^{\mathrm {(OUD)}}\right\} \). Furthermore, if \(val(\mathrm {RUP})=val(\mathrm {OUD})\), then, for any \(y\in {\mathcal {S}}^{\mathrm {(RUP)}}\), \((({\bar{\lambda }}+{\bar{\mu }}) g)(y,{\bar{v}})=0\) and \(f(y,{\bar{u}})=\max _{u\in {\mathcal {U}}}f(y,{u})\); moreover, \({\mathcal {L}}( y,{\bar{\lambda }}+{\bar{\mu }} ,{\bar{u}},{\bar{v}}) \) is constant on \({\mathcal {S}}^{\mathrm {(RUP)}}\).

Proof

Let \(\left( {\bar{y}}, {\bar{\lambda }},{\bar{\mu }}, {\bar{u}},{\bar{v}}\right) \in {\mathcal {F}}^{\mathrm {(OUD)}}\) be an optimal solution of \((\mathrm {OUD})\). Then, \({\bar{y}}\in C\), \({\bar{\lambda }}\in K^*\), \({\bar{\mu }}\in K^*\), \({\bar{u}}\in {\mathcal {U}}\), \({\bar{v}}\in {\mathcal {V}}\), and

$$\begin{aligned} 0\in \partial f( \cdot ,{\bar{u}})({\bar{y}})+\partial \left( \left( ({\bar{\lambda }}+{\bar{\mu }}) g\right) (\cdot ,{\bar{v}})\right) ( {\bar{y}})+\partial \delta _C( {\bar{y}}), ~~~~~~~~({\bar{\mu }} g)({\bar{y}},{\bar{v}})\ge 0. \end{aligned}$$

By an argument similar to that given in the proof of Lemma 2.1, we can show that

$$\begin{aligned} {\mathcal {L}}({\bar{y}},{\bar{\lambda }}+{\bar{\mu }},{\bar{u}},{\bar{v}} )\le {\mathcal {L}}(y,{\bar{\lambda }}+{\bar{\mu }},{\bar{u}},{\bar{v}} ), \text{ for } \text{ any } y\in C. \end{aligned}$$
(4)

Hence, it follows from \(({\bar{\mu }} g)({\bar{y}},{\bar{v}})\ge 0\) that

$$\begin{aligned} {\mathcal {L}}({\bar{y}},{\bar{\lambda }} ,{\bar{u}},{\bar{v}} )\le {\mathcal {L}}(y,{\bar{\lambda }}+{\bar{\mu }},{\bar{u}},{\bar{v}} ), \text{ for } \text{ any } y\in {\mathcal {F}}_1^{\mathrm {(OUD)}}, \end{aligned}$$

which implies that

$$\begin{aligned} {\mathcal {L}}({\bar{y}},{\bar{\lambda }} ,{\bar{u}},{\bar{v}} )\le \inf _{y\in {\mathcal {F}}_1^{\mathrm {(OUD)}}}{\mathcal {L}}(y,{ \bar{\lambda }}+{\bar{\mu }},{\bar{u}},{\bar{v}} ). \end{aligned}$$

Moreover, for \({\bar{\lambda }}\) and \({\bar{\mu }}\) given above, we have

$$\begin{aligned} \inf _{y\in {\mathcal {F}}_1^{\mathrm {(OUD)}}}{\mathcal {L}}(y,{\bar{\lambda }}+ {\bar{\mu }},{\bar{u}},{\bar{v}} )\le & {} \sup _{y\in {\mathcal {F}}_1^{\mathrm {(OUD)}}}{\mathcal {L}}(y,{\bar{\lambda }}+{\bar{\mu }}, {\bar{u}},{\bar{v}} )\\\le & {} \sup _{\left( {y}, {\lambda }, {\mu }, {u}, {v}\right) \in {\mathcal {F}} ^{\mathrm {(OUD)}}}{\mathcal {L}}(y, \lambda , {u}, {v} )\\= & {} {\mathcal {L}}({\bar{y}},{\bar{\lambda }} ,{\bar{u}},{\bar{v}} ). \end{aligned}$$

So, for any \(y\in {\mathcal {F}}_1^{\mathrm {(OUD)}}\),

$$\begin{aligned} {\mathcal {L}}(y,{\bar{\lambda }}+{\bar{\mu }},{\bar{u}},{\bar{v}} )= {\mathcal {L}}({\bar{y}},{\bar{\lambda }} ,{\bar{u}},{\bar{v}} ), \end{aligned}$$
(5)

which is a constant. In addition, by taking \(y={\bar{y}}\) in (5), it follows that \(({\bar{\mu }} g)({\bar{y}},{\bar{v}})=0\).

Furthermore, if \(val(\mathrm {RUP})=val(\mathrm {OUD})\), then by \(({\bar{\mu }} g)({\bar{y}},{\bar{v}})=0\) and (4), we have

$$\begin{aligned} val(\mathrm {RUP})={\mathcal {L}}({\bar{y}},{\bar{\lambda }}+{\bar{\mu }} ,{\bar{u}},{\bar{v}} ) \le {\mathcal {L}}( {y},{\bar{\lambda }}+{\bar{\mu }} ,{\bar{u}},{\bar{v}} ), \text{ for } \text{ any } y\in {\mathcal {S}}^{\mathrm {(RUP)}}. \end{aligned}$$
(6)

So, combining (6) with \((({\bar{\lambda }}+{\bar{\mu }}) g)(y,{\bar{v}})\le 0\) for \(y\in {\mathcal {S}}^{\mathrm {(RUP)}}\), we obtain

$$\begin{aligned} val(\mathrm {RUP})\le & {} {\mathcal {L}}( {y},{\bar{\lambda }}+{\bar{\mu }} ,{\bar{u}},{\bar{v}} ) \le f(y,{\bar{u}})\le \max _{u\in {\mathcal {U}}}f(y,{u})\nonumber \\ {}= & {} val(\mathrm {RUP}), \text{ for } \text{ any } y\in {\mathcal {S}}^{\mathrm {(RUP)}}. \end{aligned}$$
(7)

Since \(({\bar{\mu }} g)({\bar{y}},{\bar{v}})=0\), it follows from (6) and (7) that for any \(y\in {\mathcal {S}}^{\mathrm {(RUP)}}\),

$$\begin{aligned} {\mathcal {L}}( {y},{\bar{\lambda }}+{\bar{\mu }} ,{\bar{u}},{\bar{v}} )={\mathcal {L}}( {\bar{y}},{\bar{\lambda }},{\bar{u}},{\bar{v}} ), \end{aligned}$$
(8)

which is a constant.

In addition, by (7), we get

$$\begin{aligned} {\mathcal {L}}( {y},{\bar{\lambda }}+{\bar{\mu }} ,{\bar{u}},{\bar{v}} ) = f(y,{\bar{u}})=\max _{u\in {\mathcal {U}}}f(y,{u}), \text{ for } \text{ any } y\in {\mathcal {S}}^{\mathrm {(RUP)}}. \end{aligned}$$

This implies that

$$\begin{aligned} \left( \left( {\bar{\lambda }}+{\bar{\mu }}\right) g\right) (y,{\bar{v}})=0, \text{ for } \text{ any } y\in {\mathcal {S}}^{\mathrm {(RUP)}}. \end{aligned}$$

This completes the proof. \(\square \)

By Lemma 2.1 and Proposition 3.1, the result in the corollary below can be easily established.

Corollary 3.1

Suppose that \({\bar{x}}\in {\mathcal {F}}^{\mathrm {(RUP)}}\) is a robust optimal solution of \(\mathrm {(UP)}\), and that there exist \({\bar{\lambda }}\in K^*\), \({\bar{u}}\in {\mathcal {U}}\), and \({\bar{v}}\in {\mathcal {V}}\), such that conditions (1) and (2) hold. Then, \({\mathcal {L}}( y,{\bar{\lambda }} ,{\bar{u}},{\bar{v}}) \) is constant on \({\mathcal {F}}_0^{\mathrm {(OUD)}}:=\left\{ y\in C:\left( {y}, {\bar{\lambda }},0, {\bar{u}},{\bar{v}}\right) \in {\mathcal {F}}^{\mathrm {(OUD)}}\right\} \), and, for any \(y\in {\mathcal {S}}^{\mathrm {(RUP)}}\), \(({\bar{\lambda }} g)(y,{\bar{v}})=0\) and \(f(y,{\bar{u}})=\max _{u\in {\mathcal {U}}}f(y,{u})\).

Remark 3.1

In [24, Proposition 3.1], it is observed that the Lagrangian-type function associated with \(\text{(UP) }\) is constant on the robust optimal solution set \( {\mathcal {S}}^{\mathrm {(RUP)}}\). In Corollary 3.1, however, the Lagrangian-type function associated with \(\text{(UP) }\) is shown to be constant on a subset of X which may be larger than the robust optimal solution set \( {\mathcal {S}}^{\mathrm {(RUP)}}\). Thus, the result obtained in Corollary 3.1 covers [24, Proposition 3.1].

Example 3.1

[22] Consider the following uncertain optimization problem

$$\begin{aligned} (\text{ UP }) ~~~~~~\inf _{(x_1,x_2)\in {\mathbb {R}}^2}\Big \{ u_1x_1+u_2x_2 ~:~ x_1+v_1x_2\le 0,-x_1+v_2\le 0\Big \}, \end{aligned}$$

where \((u_1,u_2)\in {\mathcal {U}}:=\left\{ (1+t_1,1+t_2)\in {\mathbb {R}}^2:\sqrt{t_1^2+t_2^2}\le 1\right\} \) and \((v_1,v_2)\in {\mathcal {V}}:=[0,1]\times [-1,0]\). It is easy to see that

$$\begin{aligned} {\mathcal {F}}^{\mathrm {(RUP)}}=\left\{ (x_1,x_2)\in {\mathbb {R}}^2:x_1\ge 0,x_2\le 0\right\} , \end{aligned}$$

and

$$\begin{aligned} {\mathcal {S}}^{\mathrm {(RUP)}}=\{(x_1,x_2)\in {\mathbb {R}}^2:x_1=0,x_2\le 0\}. \end{aligned}$$

Let \(y=(y_1,y_2)\in {\mathbb {R}}^2\), \(\lambda =(\lambda _1,\lambda _2)\in {\mathbb {R}}^2_+\), \(\mu =(\mu _1,\mu _2)\in {\mathbb {R}}^2_+\), \(u=(u_1,u_2)\in {\mathcal {U}}\), and \(v=(v_1,v_2)\in {\mathcal {V}}\). The optimistic dual problem of \((\text{ UP })\) is given as follows:

$$\begin{aligned} (\mathrm {OUD}) ~~~~~~\left\{ \begin{array}{ll} \max ~~~ u_1y_1+u_2y_2+\lambda _1(y_1+v_1y_2)+\lambda _2(-y_1+v_2) \\ \text{ s.t. } ~~(0, 0)=(u_1+\lambda _1+\mu _1-\lambda _2-\mu _2,u_2+\lambda _1v_1+\mu _1v_1), \\ ~~~~~~~~(\mu _1-\mu _2)y_1+\mu _1v_1y_2+\mu _2v_2\ge 0,\\ ~~~~~~~~ \lambda \in {\mathbb {R}}^2_+,\mu \in {\mathbb {R}}^2_+, y\in {\mathbb {R}}^2, u\in {\mathcal {U}}, v\in {\mathcal {V}}. \end{array} \right. \end{aligned}$$

Let \({\bar{x}}=(0,0)\in {\mathcal {F}}^{\mathrm {(RUP)}}.\) Clearly, \({\bar{x}} \in {\mathcal {S}}^{\mathrm {(RUP)}}\), and there exist \({\bar{\lambda }}=(0,1)\in {\mathbb {R}}^2_+\), \({\bar{u}}=(1,0)\in {\mathcal {U}}\), and \({\bar{v}}=(1,0)\in {\mathcal {V}}\), such that conditions (1) and (2) hold. Then, for any \(y\in {\mathcal {F}}_0^{\mathrm {(OUD)}}={\mathbb {R}}^2\), it is easy to show that \({\mathcal {L}}( y,{\bar{\lambda }} ,{\bar{u}},{\bar{v}}) =0\), which is a constant on \({\mathbb {R}}^2\).

Moreover, for any \(y\in {\mathcal {S}}^{\mathrm {(RUP)}}\), we can show that

$$\begin{aligned} ({\bar{\lambda }} g)(y,{\bar{v}})={\bar{\lambda }}_1( {y}_1+{\bar{v}}_1 {y}_2)+{\bar{\lambda }}_2(- {y}_1+{\bar{v}}_2)=0, \end{aligned}$$

and

$$\begin{aligned} f(y,{\bar{u}})=\max _{u\in {\mathcal {U}}}f(y,{u})=0. \end{aligned}$$

Thus, Corollary 3.1 is applicable.
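The computations in this example are also easy to confirm numerically. The following is a small verification sketch of ours (not part of the original example): it checks the stationarity condition in (1) and the constancy of \({\mathcal {L}}( y,{\bar{\lambda }} ,{\bar{u}},{\bar{v}})\) on \({\mathcal {F}}_0^{\mathrm {(OUD)}}={\mathbb {R}}^2\) for the data \({\bar{\lambda }}=(0,1)\), \({\bar{u}}=(1,0)\), \({\bar{v}}=(1,0)\).

```python
import numpy as np

# Data of Example 3.1: f(y, u) = u1*y1 + u2*y2,
# g(y, v) = (y1 + v1*y2, -y1 + v2), K* = R^2_+, C = R^2.
lam_bar = np.array([0.0, 1.0])
u_bar = np.array([1.0, 0.0])
v_bar = np.array([1.0, 0.0])

def f(y, u):
    return u @ y

def g(y, v):
    return np.array([y[0] + v[0] * y[1], -y[0] + v[1]])

def L(y, lam, u, v):
    return f(y, u) + lam @ g(y, v)

# Condition (1): both functions are linear, so the subdifferentials are
# singleton gradients; grad f(., u_bar) = (1, 0) and
# grad (lam_bar g)(., v_bar) = (lam1 - lam2, lam1*v1) = (-1, 0) sum to zero.
grad_f = u_bar
grad_lam_g = np.array([lam_bar[0] - lam_bar[1], lam_bar[0] * v_bar[0]])
print(grad_f + grad_lam_g)               # [0. 0.]

# Constancy of L(., lam_bar, u_bar, v_bar) on R^2:
rng = np.random.default_rng(0)
for _ in range(5):
    y = rng.normal(size=2)
    print(L(y, lam_bar, u_bar, v_bar))   # always 0.0
```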

Now, we give the following two corollaries by using Wolfe-type and Mond–Weir-type robust dual problems of \(\mathrm {(UP)}\), respectively. Their proofs follow readily as direct consequences of Proposition 3.1.

Corollary 3.2

Let \( {\mathcal {F}}^{\mathrm {(OUD_W)}}\) be the feasible set of \((\mathrm {OUD_W})\). If \(\left( {\bar{y}}, {\bar{\lambda }}, {\bar{u}},{\bar{v}}\right) \in {\mathcal {F}} ^{\mathrm {(OUD_W)}}\) is an optimal solution of \((\mathrm {OUD_W})\), then \({\mathcal {L}}( y,{\bar{\lambda }} ,{\bar{u}},{\bar{v}})\) is constant on \({\mathcal {F}}_1^{\mathrm {(OUD_W)}}:=\left\{ y\in C:\left( {y}, {\bar{\lambda }}, {\bar{u}},{\bar{v}}\right) \in {\mathcal {F}}^{\mathrm {(OUD_W)}}\right\} \). Furthermore, if \(val(\mathrm {RUP})=val(\mathrm {OUD_W})\), then, for any \(y\in {\mathcal {S}}^{\mathrm {(RUP)}}\), \(( {\bar{\lambda }} g)(y,{\bar{v}})=0\) and \(f(y,{\bar{u}})=\max _{u\in {\mathcal {U}}}f(y,{u})\); moreover, \({\mathcal {L}}( y,{\bar{\lambda }} ,{\bar{u}},{\bar{v}}) \) is constant on \({\mathcal {S}}^{\mathrm {(RUP)}}\).

Corollary 3.3

Let \( {\mathcal {F}}^{\mathrm {(OUD_M)}}\) be the feasible set of \((\mathrm {OUD_M})\). If \(\left( {\bar{y}}, {\bar{\mu }}, {\bar{u}},{\bar{v}}\right) \in {\mathcal {F}}^{\mathrm {(OUD_M)}}\) is an optimal solution of \((\mathrm {OUD_M})\), then \({\mathcal {L}}( y, {\bar{\mu }} ,{\bar{u}},{\bar{v}})\) is constant on \({\mathcal {F}}_1^{\mathrm {(OUD_M)}}:=\left\{ y\in C:\left( {y}, {\bar{\mu }}, {\bar{u}},{\bar{v}}\right) \in {\mathcal {F}}^{\mathrm {(OUD_M)}}\right\} \). Furthermore, if \(val(\mathrm {RUP})=val(\mathrm {OUD_M})\), then, for any \(y\in {\mathcal {S}}^{\mathrm {(RUP)}}\), \(( {\bar{\mu }} g)(y,{\bar{v}})=0\) and \(f(y,{\bar{u}})=\max _{u\in {\mathcal {U}}}f(y,{u})\); moreover, \({\mathcal {L}}( y, {\bar{\mu }} ,{\bar{u}},{\bar{v}}) \) is constant on \({\mathcal {S}}^{\mathrm {(RUP)}}\).

4 Characterizations of the Robust Optimal Solution Set

In this section, by using the above constancy properties of the Lagrangian-type function, we present a new approach to characterizing the robust optimal solution set of \(\mathrm {(UP)}\), under the assumptions that \((\mathrm {OUD})\) has an optimal solution and that the optimal values of \(\mathrm {(RUP)}\) and \((\mathrm {OUD})\) coincide.

Theorem 4.1

Let \(\left( {\bar{y}}, {\bar{\lambda }},{\bar{\mu }}, {\bar{u}},{\bar{v}}\right) \in {\mathcal {F}}^{\mathrm {(OUD)}}\) be an optimal solution of \((\mathrm {OUD})\). If \(val(\mathrm {RUP})=val(\mathrm {OUD})\), then, \({\mathcal {S}}^{\mathrm {(RUP)}}={\mathcal {S}},\) where

$$\begin{aligned}&{\mathcal {S}} := \Bigg \{x\in {\mathcal {F}}^{\mathrm {(RUP)}}:\left( \left( {\bar{\lambda }} +{\bar{\mu }}\right) g\right) ( {x},{\bar{v}})=0,~f({x},{\bar{u}}) =\max _{u\in {\mathcal {U}}}f({x},{u}),\\&\qquad \qquad \exists \eta \in \partial {\mathcal {L}}( \cdot ,{\bar{\lambda }}+{\bar{\mu }},{\bar{u}},{\bar{v}})({\bar{y}}) \cap \partial {\mathcal {L}}( \cdot ,{\bar{\lambda }}+{\bar{\mu }},{\bar{u}},{\bar{v}})(x),~\left\langle \eta , {\bar{y}} -x\right\rangle =0\Bigg \}. \end{aligned}$$

Proof

\((\supseteq ):\) Let \(x\in {\mathcal {S}}\). Then, \(x\in {\mathcal {F}}^{\mathrm {(RUP)}}\), \(\left( \left( {\bar{\lambda }} +{\bar{\mu }}\right) g\right) ( {x},{\bar{v}})=0\), \(f({x},{\bar{u}}) =\max _{u\in {\mathcal {U}}}f({x},{u}),\) and there exists \( \eta \in \partial {\mathcal {L}}( \cdot ,{\bar{\lambda }}+{\bar{\mu }},{\bar{u}},{\bar{v}})({\bar{y}}) \cap \partial {\mathcal {L}}( \cdot ,{\bar{\lambda }}+{\bar{\mu }},{\bar{u}},{\bar{v}})(x)\), such that \(\left\langle \eta , {\bar{y}} -x\right\rangle =0\). Since \(\eta \in \partial {\mathcal {L}}( \cdot ,{\bar{\lambda }}+{\bar{\mu }},{\bar{u}},{\bar{v}})(x),\) we have

$$\begin{aligned} 0=\left\langle \eta , {\bar{y}}-x \right\rangle \le {\mathcal {L}}( {\bar{y}},{\bar{\lambda }}+{\bar{\mu }},{\bar{u}},{\bar{v}}) - {\mathcal {L}}( x,{\bar{\lambda }}+{\bar{\mu }},{\bar{u}},{\bar{v}}). \end{aligned}$$

It follows that

$$\begin{aligned} f( {\bar{y}},{\bar{u}})+\left( \left( {\bar{\lambda }}+{\bar{\mu }}\right) g\right) ({\bar{y}},{\bar{v}}) \ge f( x,{\bar{u}})+\left( \left( {\bar{\lambda }}+ {\bar{\mu }}\right) g\right) (x,{\bar{v}}). \end{aligned}$$
(9)

By Proposition 3.1, it is clear that \(({\bar{\mu }} g)({\bar{y}},{\bar{v}})=0.\) Thus, together with \(\left( \left( {\bar{\lambda }} +{\bar{\mu }}\right) g\right) ({x},{\bar{v}})=0\), \(f({x},{\bar{u}}) =\max _{u\in {\mathcal {U}}}f({x},{u}),\) and (9), we obtain

$$\begin{aligned} f( {\bar{y}},{\bar{u}})+\left( {\bar{\lambda }} g\right) ({\bar{y}},{\bar{v}}) \ge \max _{u\in {\mathcal {U}}}f({x},{u}) . \end{aligned}$$

Moreover, since \(val(\mathrm {RUP})=val(\mathrm {OUD})\) and \(val(\mathrm {OUD})=f( {\bar{y}},{\bar{u}})+\left( {\bar{\lambda }} g\right) ({\bar{y}},{\bar{v}})\), it follows that

$$\begin{aligned} val(\mathrm {RUP})\ge \max _{u\in {\mathcal {U}}}f({x}, {u}). \end{aligned}$$

This, together with \(x\in {\mathcal {F}}^{\mathrm {(RUP)}}\), gives \(x\in {\mathcal {S}}^{\mathrm {(RUP)}}\).

\((\subseteq ):\) Let \(x\in {\mathcal {S}}^{\mathrm {(RUP)}}.\) Obviously, from Proposition 3.1, \(\left( \left( {\bar{\lambda }} +{\bar{\mu }}\right) g\right) ( {x},{\bar{v}})=0\) and \(f({x},{\bar{u}}) =\max _{u\in {\mathcal {U}}}f({x},{u})\). Moreover, since \(\left( {\bar{y}}, {\bar{\lambda }},{\bar{\mu }}, {\bar{u}},{\bar{v}}\right) \in {\mathcal {F}}^{\mathrm {(OUD)}}\), it is clear that \({\bar{y}}\in C\), \({\bar{\lambda }}\in K^*\), \({\bar{\mu }}\in K^*\), \({\bar{u}}\in {\mathcal {U}}\), \({\bar{v}}\in {\mathcal {V}}\), and

$$\begin{aligned} 0\in \partial f( \cdot ,{\bar{u}})({\bar{y}})+\partial \left( \left( ({\bar{\lambda }}+{\bar{\mu }}) g\right) (\cdot ,{\bar{v}})\right) ( {\bar{y}})+\partial \delta _C( {\bar{y}}). \end{aligned}$$

It follows that

$$\begin{aligned} 0\in \partial {\mathcal {L}}( \cdot ,{\bar{\lambda }}+{\bar{\mu }},{\bar{u}},{\bar{v}})({\bar{y}})+ \partial \delta _C( {\bar{y}}). \end{aligned}$$

Then, there exists \(\eta \in \partial {\mathcal {L}}( \cdot ,{\bar{\lambda }}+{\bar{\mu }},{\bar{u}},{\bar{v}})({\bar{y}})\), such that \(-\eta \in \partial \delta _C( {\bar{y}})\). Thus,

$$\begin{aligned} \langle -\eta ,x-{\bar{y}}\rangle \le \delta _C({x})-\delta _C({\bar{y}})=0. \end{aligned}$$
(10)

On the other hand, by \(\eta \in \partial {\mathcal {L}}( \cdot ,{\bar{\lambda }}+{\bar{\mu }},{\bar{u}},{\bar{v}})({\bar{y}})\) and Proposition 3.1, we obtain

$$\begin{aligned} \langle \eta ,x-{\bar{y}}\rangle \le {\mathcal {L}}( x,{\bar{\lambda }}+{\bar{\mu }},{\bar{u}},{\bar{v}})-{\mathcal {L}}( {\bar{y}},{\bar{\lambda }}+{\bar{\mu }},{\bar{u}},{\bar{v}})=0, \end{aligned}$$

where the equality follows from (8) and \( ({\bar{\mu }}g)({\bar{y}}, {\bar{v}})=0\). This, together with (10), implies that

$$\begin{aligned} \langle \eta ,x-{\bar{y}}\rangle =0. \end{aligned}$$
(11)

Now, it remains to show that \(\eta \in \partial {\mathcal {L}}( \cdot ,{\bar{\lambda }}+{\bar{\mu }},{\bar{u}},{\bar{v}})(x)\). In fact, for any \(y\in X,\)

$$\begin{aligned} \left\langle \eta ,y-{x}\right\rangle =\left\langle \eta ,y-{\bar{y}}\right\rangle +\left\langle \eta ,{\bar{y}}-{x}\right\rangle = \left\langle \eta ,y-{\bar{y}}\right\rangle \le {\mathcal {L}}( y,{\bar{\lambda }}+{\bar{\mu }},{\bar{u}},{\bar{v}})-{\mathcal {L}}( {\bar{y}},{\bar{\lambda }}+{\bar{\mu }},{\bar{u}},{\bar{v}}), \end{aligned}$$
(12)

where the second equality follows from (11) and the inequality follows from \(\eta \in \partial {\mathcal {L}}( \cdot ,{\bar{\lambda }}+{\bar{\mu }},{\bar{u}},{\bar{v}})({\bar{y}})\). Moreover, since \(x\in {\mathcal {S}}^{\mathrm {(RUP)}}\), it follows from Proposition 3.1 that

$$\begin{aligned} {\mathcal {L}}( x,{\bar{\lambda }}+{\bar{\mu }} ,{\bar{u}},{\bar{v}} )={\mathcal {L}}( {\bar{y}},{\bar{\lambda }},{\bar{u}},{\bar{v}} )={\mathcal {L}}( {\bar{y}},{\bar{\lambda }}+{\bar{\mu }},{\bar{u}},{\bar{v}} ), \end{aligned}$$
(13)

where the first equality follows from (8), and the second equality follows from \(({\bar{\mu }} g)({\bar{y}},{\bar{v}})=0\). So, by (12) and (13), we have

$$\begin{aligned} \left\langle \eta ,y-{x}\right\rangle \le {\mathcal {L}}( y,{\bar{\lambda }}+{\bar{\mu }},{\bar{u}},{\bar{v}})-{\mathcal {L}}( x,{\bar{\lambda }}+{\bar{\mu }},{\bar{u}},{\bar{v}}), \end{aligned}$$

which implies that \(\eta \in \partial {\mathcal {L}}( \cdot ,{\bar{\lambda }}+{\bar{\mu }},{\bar{u}},{\bar{v}})(x)\). Thus, \({\mathcal {S}}^{\mathrm {(RUP)}}\subseteq {\mathcal {S}},\) and the proof is complete. \(\square \)

Corollary 4.1

Suppose that \({\bar{x}}\in {\mathcal {F}}^{\mathrm {(RUP)}}\) is a robust optimal solution of \(\mathrm {(UP)}\), and there exist \({\bar{\lambda }}\in K^*\), \({\bar{u}}\in {\mathcal {U}}\), and \({\bar{v}}\in {\mathcal {V}}\), such that conditions (1) and (2) hold. Then, \({\mathcal {S}}^{\mathrm {(RUP)}}={\mathcal {S}}_0,\) where

$$\begin{aligned}&{\mathcal {S}}_0 := \Bigg \{x\in {\mathcal {F}}^{\mathrm {(RUP)}}:\left( {\bar{\lambda }} g\right) ( {x},{\bar{v}})=0,~f({x},{\bar{u}}) =\max _{u\in {\mathcal {U}}}f({x},{u}),\\&\qquad \qquad \exists \eta \in \partial {\mathcal {L}}( \cdot ,{\bar{\lambda }} ,{\bar{u}},{\bar{v}})({\bar{x}}) \cap \partial {\mathcal {L}}( \cdot ,{\bar{\lambda }} ,{\bar{u}},{\bar{v}})(x),~\left\langle \eta , {\bar{x}} -x\right\rangle =0\Bigg \}. \end{aligned}$$

Proof

Let \({\bar{x}}\in {\mathcal {F}}^{\mathrm {(RUP)}}\) be a robust optimal solution of \(\mathrm {(UP)}\), and suppose that there exist \({\bar{\lambda }}\in K^*\), \({\bar{u}}\in {\mathcal {U}}\), and \({\bar{v}}\in {\mathcal {V}}\), such that conditions (1) and (2) hold for \(\left( {\bar{x}},{\bar{\lambda }},{\bar{u}},{\bar{v}}\right) \). Then, it follows from Lemma 2.1 that \(\left( {\bar{x}}, {\bar{\lambda }},0, {\bar{u}},{\bar{v}}\right) \) is an optimal solution of \((\mathrm {OUD})\) and \(val(\mathrm {RUP})=val(\mathrm {OUD})\). Thus, the result follows readily from Theorem 4.1. \(\square \)

In the special cases \(\lambda =0\) and \(\mu =0\), we obtain characterizations of the robust optimal solution set of \(\mathrm {(UP)}\) via its dual problems \((\mathrm {OUD_M} )\) and \((\mathrm {OUD_W} )\), respectively.

Corollary 4.2

Let \({\mathcal {F}}^{\mathrm {(OUD_W)}}\) be the feasible set of \((\mathrm {OUD_W})\). Suppose that \(\left( {\bar{y}}, {\bar{\lambda }}, {\bar{u}},{\bar{v}}\right) \in {\mathcal {F}} ^{\mathrm {(OUD_W)}}\) is an optimal solution of \((\mathrm {OUD_W} )\). If \(val(\mathrm {RUP})=val(\mathrm {OUD_W})\), then \({\mathcal {S}}^{\mathrm {(RUP)}}={\mathcal {S}}_0\), where \({\mathcal {S}}_0\) is as in Corollary 4.1 with \({\bar{x}}\) replaced by \({\bar{y}}\).

Proof

Suppose that \(\left( {\bar{y}}, {\bar{\lambda }}, {\bar{u}},{\bar{v}}\right) \in {\mathcal {F}} ^{\mathrm {(OUD_W)}}\) is an optimal solution of \((\mathrm {OUD_W} )\). It is easy to see that \(\left( {\bar{y}}, {\bar{\lambda }}, 0,{\bar{u}},{\bar{v}}\right) \in {\mathcal {F}} ^{\mathrm {(OUD )}}\) is an optimal solution of \((\mathrm {OUD} )\). By Theorem 4.1, the desired result follows readily. \(\square \)

Corollary 4.3

Let \( {\mathcal {F}}^{\mathrm {(OUD_M)}}\) be the feasible set of \((\mathrm {OUD_M})\). Suppose that \(\left( {\bar{y}}, {\bar{\mu }}, {\bar{u}},{\bar{v}}\right) \in {\mathcal {F}}^{\mathrm {(OUD_M)}}\) is an optimal solution of \((\mathrm {OUD_M})\). If \(val(\mathrm {RUP})=val(\mathrm {OUD_M})\), then \({\mathcal {S}}^{\mathrm {(RUP)}}={\mathcal {S}}_0\), where \({\mathcal {S}}_0\) is as in Corollary 4.1 with \({\bar{x}}\) and \({\bar{\lambda }}\) replaced by \({\bar{y}}\) and \({\bar{\mu }}\), respectively.

Proof

Suppose that \(\left( {\bar{y}}, {\bar{\mu }}, {\bar{u}},{\bar{v}}\right) \in {\mathcal {F}} ^{\mathrm {(OUD_M)}}\) is an optimal solution of \((\mathrm {OUD_M} )\). It is easy to see that \(\left( {\bar{y}}, 0,{\bar{\mu }},{\bar{u}},{\bar{v}}\right) \in {\mathcal {F}} ^{\mathrm {(OUD )}}\) is an optimal solution of \((\mathrm {OUD} )\). By Theorem 4.1, we can obtain the desired result. \(\square \)

Remark 4.1

By Lemma 2.2, the results established in Corollaries  4.2 and 4.3 can be obtained from Corollary 4.1.

Now, we establish further characterizations of the robust optimal solution set of \((\mathrm {UP} )\).

Theorem 4.2

Let \( {\mathcal {F}}^{\mathrm {(OUD_M)}}\) be the feasible set of \((\mathrm {OUD_M})\). Suppose that \(\left( {\bar{y}}, {\bar{\mu }}, {\bar{u}},{\bar{v}}\right) \in {\mathcal {F}} ^{\mathrm {(OUD_M)}}\) is an optimal solution of \((\mathrm {OUD_M} )\). If \(val(\mathrm {RUP})=val(\mathrm {OUD_M} )\), then \({\mathcal {S}}^{\mathrm {(RUP)}}={\mathcal {S}}_1,\) where

$$\begin{aligned}&{\mathcal {S}}_1:= \Bigg \{x\in {\mathcal {F}}^{\mathrm {(RUP)}}:\left( {\bar{\mu }} g\right) ( {x},{\bar{v}})=0,~f({x},{\bar{u}}) =\max _{u\in {\mathcal {U}}}f({x},{u}),\\&\qquad \qquad \exists \eta \in \partial f( \cdot , {\bar{u}} )({\bar{y}}) \cap \partial f( \cdot , {\bar{u}} )(x),~\left\langle \eta , {\bar{y}} -x\right\rangle =0\Bigg \}. \end{aligned}$$

Proof

\((\supseteq ):\) Let \(x\in {\mathcal {S}}_1\). Then, \(x\in {\mathcal {F}}^{\mathrm {(RUP)}}\), \(\left( {\bar{\mu }} g\right) ( {x},{\bar{v}})=0\), \(f({x},{\bar{u}}) =\max _{u\in {\mathcal {U}}}f({x},{u}),\) and there exists \(\eta \in \partial f( \cdot , {\bar{u}} )({\bar{y}}) \cap \partial f( \cdot , {\bar{u}} )(x)\), such that \(\left\langle \eta , {\bar{y}} -x\right\rangle =0\). Thus,

$$\begin{aligned} f( {\bar{y}}, {\bar{u}} ) - f( x, {\bar{u}} )\ge \left\langle \eta , {\bar{y}}-x \right\rangle =0, \end{aligned}$$

where the inequality follows from \( \eta \in \partial f( \cdot , {\bar{u}} )(x)\). Then, by \(val(\mathrm {RUP})=val(\mathrm {OUD_M} )\), we have

$$\begin{aligned} val(\mathrm {RUP})=f( {\bar{y}},{\bar{u}}) \ge f( x,{\bar{u}})=\max _{u\in {\mathcal {U}}}f({x},{u}). \end{aligned}$$

This, together with \(x\in {\mathcal {F}}^{\mathrm {(RUP)}}\), gives \(x\in {\mathcal {S}}^{\mathrm {(RUP)}}\).

\((\subseteq ):\) Let \(x\in {\mathcal {S}}^{\mathrm {(RUP)}}.\) Obviously, from Corollary 3.3, \(\left( {\bar{\mu }} g\right) ( {x},{\bar{v}})=0\) and \(f({x},{\bar{u}}) =\max _{u\in {\mathcal {U}}}f({x},{u})\). Moreover, since \(\left( {\bar{y}}, {\bar{\mu }}, {\bar{u}},{\bar{v}}\right) \in {\mathcal {F}}^{\mathrm {(OUD_M)}}\), it follows that \({\bar{y}}\in C\), \({\bar{\mu }}\in K^*\), \({\bar{u}}\in {\mathcal {U}}\), \({\bar{v}}\in {\mathcal {V}}\), and

$$\begin{aligned} 0\in \partial f( \cdot ,{\bar{u}})({\bar{y}})+\partial \left( ({\bar{\mu }} g )(\cdot ,{\bar{v}})\right) ( {\bar{y}})+\partial \delta _C( {\bar{y}}), \text{ and } \left( {\bar{\mu }} g\right) ({\bar{y}},{\bar{v}})\ge 0. \end{aligned}$$

Then, there exist \(\eta \in \partial f(\cdot ,{\bar{u}})({\bar{y}})\), \(\xi \in \partial \left( \left( {\bar{\mu }} g\right) (\cdot ,{\bar{v}})\right) ({\bar{y}}),\) and \(\theta \in \partial \delta _C({\bar{y}})\) such that

$$\begin{aligned} \eta +\xi +\theta =0. \end{aligned}$$
(14)

Since \(\xi \in \partial \left( \left( {\bar{\mu }} g\right) (\cdot ,{\bar{v}})\right) ({\bar{y}}),\) and \(\theta \in \partial \delta _C({\bar{y}})\), we have

$$\begin{aligned} \left( {\bar{\mu }} g\right) (x,{\bar{v}} )-\left( {\bar{\mu }} g\right) ({\bar{y}},{\bar{v}} )\ge \left\langle \xi ,x-{\bar{y}}\right\rangle ,~~0\ge \left\langle \theta , x-{\bar{y}}\right\rangle , \end{aligned}$$

which implies that

$$\begin{aligned} \left( {\bar{\mu }} g\right) (x,{\bar{v}} )- \left( {\bar{\mu }} g\right) ({\bar{y}},{\bar{v}} )\ge \left\langle \theta + \xi ,x-{\bar{y}}\right\rangle . \end{aligned}$$

Moreover, noting that \( \left( {\bar{\mu }} g\right) (x,{\bar{v}} )=0\) and \(\left( {\bar{\mu }} g\right) ({\bar{y}},{\bar{v}} )\ge 0\), we have \(\left\langle \theta + \xi ,x-{\bar{y}}\right\rangle \le 0.\) Thus, it follows from (14) that

$$\begin{aligned} \left\langle \eta ,x-{\bar{y}}\right\rangle \ge 0. \end{aligned}$$
(15)

On the other hand, by \(\eta \in \partial f( \cdot , {\bar{u}} )({\bar{y}})\) and Corollary 3.3, we get

$$\begin{aligned} \langle \eta ,x-{\bar{y}}\rangle \le f( x, {\bar{u}} )-f( {\bar{y}}, {\bar{u}} )=0, \end{aligned}$$

where the equality follows from (8) for \({\bar{\lambda }}=0\) and \(({\bar{\mu }} g)(x,{\bar{v}})=0\). This, together with (15), implies that

$$\begin{aligned} \langle \eta ,x-{\bar{y}}\rangle =0. \end{aligned}$$
(16)

Now, it remains to show that \(\eta \in \partial f( \cdot , {\bar{u}} )(x)\). In fact, for any \(y\in X,\)

$$\begin{aligned} \left\langle \eta ,y-{x}\right\rangle = \left\langle \eta ,y-{\bar{y}}\right\rangle +\left\langle \eta ,{\bar{y}}-{x}\right\rangle = \left\langle \eta ,y-{\bar{y}}\right\rangle \le f( y, {\bar{u}} )-f( {\bar{y}}, {\bar{u}} ), \end{aligned}$$
(17)

where the second equality follows from (16) and the inequality follows from \(\eta \in \partial f( \cdot , {\bar{u}} )({\bar{y}})\). Moreover, since \(x\in {\mathcal {S}}^{\mathrm {(RUP)}}\), it follows from Corollary 3.3 that

$$\begin{aligned} f( x, {\bar{u}} )={\mathcal {L}}( x, {\bar{\mu }} ,{\bar{u}},{\bar{v}} )=f( {\bar{y}}, {\bar{u}} ) , \end{aligned}$$
(18)

where the first equality follows from \(({\bar{\mu }} g)(x,{\bar{v}})=0\), and the second equality follows from (8) for \({\bar{\lambda }}=0\). Thus, by (17) and (18), we have

$$\begin{aligned} \left\langle \eta ,y-{x}\right\rangle \le f( y, {\bar{u}} )-f( x, {\bar{u}} ), \end{aligned}$$

which means that \(\eta \in \partial f( \cdot , {\bar{u}} )(x)\). Thus, \({\mathcal {S}}^{\mathrm {(RUP)}}\subseteq {\mathcal {S}}_1,\) and the proof is complete. \(\square \)

Corollary 4.4

Suppose that \({\bar{y}}\in {\mathcal {F}}^{\mathrm {(RUP)}}\) is a robust optimal solution of \(\mathrm {(UP)}\), and there exist \({\bar{\mu }}\in K^*\), \({\bar{u}}\in {\mathcal {U}}\), and \({\bar{v}}\in {\mathcal {V}}\), such that conditions (1) and (2) hold. Then, \({\mathcal {S}}^{\mathrm {(RUP)}}={\mathcal {S}}_1.\)

Proof

Let \({\bar{y}}\in {\mathcal {F}}^{\mathrm {(RUP)}}\) be a robust optimal solution of \(\mathrm {(UP)}\), and suppose that there exist \({\bar{\mu }}\in K^*\), \({\bar{u}}\in {\mathcal {U}}\), and \({\bar{v}}\in {\mathcal {V}}\), such that conditions (1) and (2) hold. Then, it follows from Lemma 2.2 that \(\left( {\bar{y}}, {\bar{\mu }}, {\bar{u}},{\bar{v}}\right) \) is an optimal solution of \((\mathrm {OUD_M})\) and \(val(\mathrm {RUP})=val(\mathrm {OUD_M})\). Thus, the result follows from Theorem 4.2. \(\square \)

Remark 4.2

It is worth noting that the conclusion of Corollary 4.4 coincides with Theorem 3.2 in [24].

Example 4.1

Consider the uncertain optimization problem \(\mathrm {(UP)}\) introduced in Example 3.1. Let \({\bar{y}} =(0,0)\in {\mathcal {S}}^{\mathrm {(RUP)}}\), \({\bar{\mu }}=(0,1)\in {\mathbb {R}}^2_+\), \({\bar{u}}=(1,0)\in {\mathcal {U}}\), and \({\bar{v}}=(1,0)\in {\mathcal {V}}\). Then, it is easy to see that conditions (1) and (2) hold.

On the other hand, for any \(x\in {\mathcal {F}}^{\mathrm {(RUP)}}=\{(x_1,x_2)\in {\mathbb {R}}^2:x_1\ge 0,x_2\le 0\}\), we can show that \(({\bar{\mu }} g)(x,{\bar{v}})={\bar{\mu }}_1( {x}_1+{\bar{v}}_1 {x}_2)+{\bar{\mu }}_2(- {x}_1+{\bar{v}}_2)=-x_1\), \(f(x,{\bar{u}})=x_1\), \(\max _{u\in {\mathcal {U}}}f(x,{u})=x_1+x_2+\sqrt{x_1^2+x_2^2}\), and \(\partial f( \cdot , {\bar{u}} )({\bar{y}}) = \partial f( \cdot , {\bar{u}} )(x)=\{(1,0)\}\), so that \(\langle \eta , {\bar{y}}-x\rangle =-x_1\) for \(\eta =(1,0)\). Hence, the conditions defining \({\mathcal {S}}_1\) hold exactly when \(x_1=0\), and we obtain

$$\begin{aligned} {\mathcal {S}}_1= & {} \Bigg \{x\in {\mathcal {F}}^{\mathrm {(RUP)}}:\left( {\bar{\mu }} g\right) ( {x},{\bar{v}})=0,~f({x},{\bar{u}}) =\max _{u\in {\mathcal {U}}}f({x},{u}),\\&~~~\exists \eta \in \partial f( \cdot , {\bar{u}} )({\bar{y}}) \cap \partial f( \cdot , {\bar{u}} )(x),~\left\langle \eta , {\bar{y}} -x\right\rangle =0\Bigg \}\\= & {} \{(x_1,x_2)\in {\mathbb {R}}^2:x_1=0,x_2\le 0\}\\= & {} {\mathcal {S}}^{\mathrm {(RUP)}}. \end{aligned}$$
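As with Example 3.1, the membership test for \({\mathcal {S}}_1\) can be confirmed numerically. The sketch below is a verification script of ours (not part of the original example); it encodes the three defining conditions of \({\mathcal {S}}_1\) with \({\bar{y}}=(0,0)\), \(\eta =(1,0)\), \({\bar{\mu }}=(0,1)\), \({\bar{u}}=(1,0)\), and \({\bar{v}}=(1,0)\), and evaluates them at a few robust feasible points.

```python
import numpy as np

# Example 4.1 data: bar-mu = (0, 1), bar-u = (1, 0), bar-v = (1, 0),
# bar-y = (0, 0); eta = (1, 0) is the common subgradient of f(., bar-u).
mu_bar = np.array([0.0, 1.0])
u_bar = np.array([1.0, 0.0])
v_bar = np.array([1.0, 0.0])
y_bar = np.zeros(2)
eta = np.array([1.0, 0.0])

def max_f(x):
    # max_{u in U} f(x, u) over U = {(1+t1, 1+t2) : t1^2 + t2^2 <= 1},
    # which evaluates in closed form to x1 + x2 + ||x||.
    return x[0] + x[1] + np.linalg.norm(x)

def in_S1(x):
    mu_g = mu_bar @ np.array([x[0] + v_bar[0] * x[1], -x[0] + v_bar[1]])  # = -x1
    return (np.isclose(mu_g, 0.0)                    # (bar-mu g)(x, bar-v) = 0
            and np.isclose(u_bar @ x, max_f(x))      # f(x, bar-u) = max_u f(x, u)
            and np.isclose(eta @ (y_bar - x), 0.0))  # <eta, bar-y - x> = 0

for x in [np.array([0.0, -2.0]), np.array([1.0, -1.0]), np.array([0.0, 0.0])]:
    print(x, bool(in_S1(x)))   # True exactly when x1 = 0, matching S^(RUP)
```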

5 Conclusions

In this paper, following the framework of robust optimization, we give various characterizations of the robust optimal solution set of \(\mathrm {(UP)}\) in terms of subgradients and Lagrange multipliers. We show that the Lagrangian-type function corresponding to a Lagrange multiplier is constant on a subset of the locally convex vector space X which is larger than the robust optimal solution set of \(\mathrm {(UP)}\). Based on these constancy properties, we establish various simple Lagrange-multiplier-based characterizations of the robust optimal solution set of \(\mathrm {(UP)}\) via its dual problems. It is worth noting that the approach used in this paper is new, and that several existing results in the literature can be recovered by it.