Abstract
In this paper, we deal with robust optimal solution sets for a class of optimization problems with data uncertainty in both the objective and constraints. We first introduce a mixed-type robust dual problem of this class of uncertain optimization problems and explore robust strong duality relations between them. Then, we propose a new approach to characterize robust optimal solution sets of this class of uncertain optimization problems via its dual problem. Moreover, we show that several results on characterizations of robust optimal solution sets of uncertain optimization problems obtained in recent literature can be obtained using our approach.
1 Introduction
Characterizations of optimal solution sets of optimization problems are of great interest, as they are useful for understanding the development of solution methods for solving mathematical programming problems. Mangasarian [1] first presented several simple and elegant characterizations of optimal solution sets of a convex optimization problem with differentiable functions. Subsequently, Burke and Ferris [2] extended the results obtained in [1] to nonsmooth cases. Since then, many interesting results on characterizations of optimal solution sets of various types of optimization problems have been obtained; see, for example, pseudo-linear programs in [3], quasi-convex programming problems in [4, 5], nonconvex optimization problems with pseudo-invex functions in [6,7,8], nonconvex semi-infinite optimization problems in [9, 10] and cone-constrained convex vector optimization problems in [11].
Recently, optimization problems with data uncertainty in the objective function or the constraint function have received much attention. It is known that robust optimization approach [12, 13] (worst-case approach) has emerged as a deterministic framework for studying mathematical programming problems with uncertain data. Many researchers have been attracted to work on the theory and applications of uncertain optimization problems by using robust optimization methodology. By using a new robust characteristic cone constraint qualification, some duality results are obtained in [14] for convex programming problems with data uncertainty. A robust conjugate duality theorem is given in [15] for support vector machines which are a class of important convex optimization models for classifying two labeled data sets. By employing conjugate analysis, robust duality is investigated in [16] for a family of parametric convex optimization problems. In addition, as applications, a robust duality theorem is also obtained for the best approximation problems with constraint involving data uncertainty. Stable Lagrange dualities for conical programming problems with data uncertainty are considered in [17] within the framework of robust optimization. Using generalized convexity assumptions, portfolio optimization problems are investigated in [18] through the study of nonsmooth robust multiobjective optimization problems. By virtue of image space analysis, several necessary and sufficient robust optimality conditions are given in [19] for a general uncertain optimization problem without any convexity or concavity assumptions. Some robust approximate optimality conditions are established in [20] for an uncertain nonsmooth optimization problem with continuous inequality constraints.
More recently, characterization of robust optimal solution sets for optimization problems with data uncertainty has attracted an increasing interest by many researchers. By using robust optimization, some properties and characterizations of robust optimal solution sets of an uncertain convex optimization problem are investigated in [21] under a robust Slater-type condition. Similar results are also obtained in [22] by using a new robust Farkas–Minkowski constraint qualification which is strictly weaker than the robust Slater-type condition. Subsequently, results on optimal solution sets are obtained in [23] for a pseudo-convex robust optimization problem with constraint involving data uncertainty. These results can be regarded as nonconvex extensions of the results obtained in [21, 22]. By introducing a robust-type subdifferential constraint qualification, some complete characterizations of robust optimal solution sets of a cone-constrained convex optimization problem with data uncertainty in both the objective and constraint functions are established in [24]. In [25], some characterizations of robust solution sets of an uncertain fractional optimization problem are also provided by using the similar method considered in [24]. Moreover, by employing Clarke subdifferentials and convex subdifferentials, robust solution sets are also investigated in [26] for an uncertain convex optimization problem with locally Lipschitz inequality constraints.
Motivated and inspired by the work of [9, 21, 24], we aim to investigate some new characterizations of robust optimal solution sets for a class of uncertain optimization problems via a new dual approach, which is different from the approach used in the papers mentioned above. More specifically, we first introduce a mixed-type robust dual problem of this class of uncertain optimization problems and explore robust strong duality relations. Then, we show that the Lagrangian-type function associated with a Lagrange multiplier is constant on a subset of locally convex vector spaces which is wider than the robust optimal solution set of this class of uncertain optimization problems. On the other hand, by virtue of robust strong duality relations and the constancy of the Lagrangian-type function, we propose a new approach to characterize the robust optimal solution set of this class of uncertain optimization problems. Furthermore, several results on characterizations of robust optimal solution sets of uncertain optimization problems reported in recent literature can be obtained by the use of our approach.
The rest of the paper is organized as follows: In Sect. 2, we recall some notions and introduce a mixed-type robust dual problem of this class of uncertain optimization problems. Then, we give some auxiliary results which will be used in the following sections. In Sect. 3, we obtain some new properties of the Lagrange-type function associated with uncertain optimization problems. In Sect. 4, we propose a new dual approach to characterize the robust optimal solution set of this class of uncertain optimization problems via its duality relations.
2 Preliminaries and Auxiliary Results
Throughout this paper, let X and Y be two locally convex vector spaces with their topological dual spaces \(X^*\) and \(Y^*\), endowed with the weak\(^*\) topologies \(w(X^*,X)\) and \(w(Y^*,Y)\), respectively. Let \(K \subseteq Y\) be a nonempty, closed and convex cone. The dual cone of K is defined by
$$K^*:=\left\{ y^*\in Y^*: \langle y^*,y\rangle \ge 0,\ \forall y\in K\right\} ,$$
where we use the notation \(\langle \cdot ,\cdot \rangle \) for the value of the continuous linear functional \(y^*\in Y^*\) at \(y\in Y\). Let D be a nonempty subset of X. The indicator function \(\delta _D: X\rightarrow \overline{{\mathbb {R}}}:={\mathbb {R}}\cup \{\pm \infty \}\) of D is defined by
$$\delta _D(x):=\begin{cases} 0, &{} x\in D,\\ +\infty , &{} x\notin D.\end{cases}$$
For an extended real-valued function \(f:X\rightarrow \overline{{\mathbb {R}}}\), the effective domain and the epigraph are defined by
$$\text{ dom } f:=\left\{ x\in X: f(x)<+\infty \right\} $$
and
$$\text{ epi } f:=\left\{ (x,r)\in X\times {\mathbb {R}}: f(x)\le r\right\} ,$$
respectively. We say that f is a proper function, iff \(f(x)>-\infty \) for all \(x\in X\) and \(\text{ dom } f\not =\emptyset .\) We say that f is a convex function, iff for any \(x, y\in X\) and \( t\in [0,1]\),
$$f(tx+(1-t)y)\le tf(x)+(1-t)f(y).$$
We say that f is concave, iff \(-f\) is convex. The subdifferential of f at \({\bar{x}}\in \text{ dom } f\), denoted by \(\partial f({\bar{x}})\), is
$$\partial f({\bar{x}}):=\left\{ x^*\in X^*: f(x)-f({\bar{x}})\ge \langle x^*,x-{\bar{x}}\rangle ,\ \forall x\in X\right\} .$$
Let \(g:X\rightarrow Y\) be a vector-valued function. The domain and the K-epigraph of g are defined by
$$\text{ dom } g:=\left\{ x\in X: g(x)\in Y\right\} $$
and
$$\text{ epi}_K\, g:=\left\{ (x,y)\in X\times Y: y\in g(x)+K\right\} ,$$
respectively. The function g is said to be proper, iff \(\text{ dom } g \not =\emptyset \). The function g is said to be K-convex, iff for any \(x, y\in X\) and \( t\in [0,1]\),
$$tg(x)+(1-t)g(y)-g(tx+(1-t)y)\in K.$$
The function g is K-concave, iff \(-g\) is K-convex. Let \(\lambda \in K^*\). The function \((\lambda g): X\rightarrow \overline{{\mathbb {R}}}\) is defined by
$$(\lambda g)(x):=\langle \lambda , g(x)\rangle ,\ \forall x\in X.$$
Obviously, the function g is K-convex (resp. K-concave) if and only if, for each \(\lambda \in K^*\), \((\lambda g)\) is a convex (resp. concave) function. For more details, see [27].
Consider the following optimization problem with data uncertainty in both the objective and constraints:
$$\text{(UP) }\qquad \min _{x\in C}\ f(x,u)\quad \text{ s.t. }\quad g(x,v)\in -K,$$
where \(f :X\times {\bar{Z}}\rightarrow {\mathbb {R}}\) and \(g :X\times Z\rightarrow Y\) are given functions, Z and \({\bar{Z}}\) are two further locally convex vector spaces, \(C\subseteq X\) is a nonempty, closed and convex set, and u and v are uncertain parameters belonging to their respective convex and compact uncertainty sets \({\mathcal {U}}\subseteq {\bar{Z}}\) and \({\mathcal {V}}\subseteq Z\).
In this paper, the robust optimization (i.e., worst-case) approach is applied to \(\text{(UP) }\). Now, we associate with \(\text{(UP) }\) its robust (worst-case) counterpart
$$\text{(RUP) }\qquad \min _{x\in C}\ \max _{u\in {\mathcal {U}}}f(x,u)\quad \text{ s.t. }\quad g(x,v)\in -K,\ \forall v\in {\mathcal {V}},$$
where the uncertain objective and constraint contain uncertain parameters from their respective uncertainty sets \({\mathcal {U}}\) and \({\mathcal {V}}\). In what follows, unless otherwise specified, we always assume that \(f(\cdot ,u)\) is a convex function for any \(u\in {\mathcal {U}}\), and \(g(\cdot ,v)\) is a K-convex function for any \(v\in {\mathcal {V}}\).
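The worst-case reading of \(\text{(RUP) }\) can be made concrete on a small finite-dimensional toy instance. The instance below is our own illustrative construction (not part of the paper's development): \(X={\mathbb {R}}\), \(C={\mathbb {R}}\), \(K={\mathbb {R}}_+\), \(f(x,u)=ux^2\) with \({\mathcal {U}}=[1,2]\), and \(g(x,v)=v-x\) with \({\mathcal {V}}=[0,1]\), so the robust feasible set is \(\{x: x\ge 1\}\).

```python
# Toy instance (illustrative only, not the paper's (UP)):
#   f(x, u) = u * x**2,  u in U = [1, 2]
#   g(x, v) = v - x,     v in V = [0, 1]  ("g(x, v) <= 0 for all v" means x >= 1)
# The robust counterpart minimizes the worst-case objective over the robust feasible set.

U = [1.0 + i / 10 for i in range(11)]     # discretized uncertainty set U = [1, 2]
V = [i / 10 for i in range(11)]           # discretized uncertainty set V = [0, 1]

def worst_case_objective(x):
    # max_{u in U} f(x, u)
    return max(u * x ** 2 for u in U)

def robust_feasible(x):
    # the constraint must hold for every scenario v in V
    return all(v - x <= 1e-9 for v in V)

xs = [i / 100 for i in range(-200, 301)]  # candidate grid on [-2, 3]
candidates = [x for x in xs if robust_feasible(x)]
best_x = min(candidates, key=worst_case_objective)
# Robust optimum: x = 1 with worst-case value max_u u * 1**2 = 2.
```

The grid search confirms that the robust optimal solution is the smallest robustly feasible point, since the worst-case objective is increasing on the feasible set.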
The aim of this paper is to obtain some dual characterizations of the robust optimal solution set of \(\text{(UP) }\) via its dual problem. So, we first give some important concepts and auxiliary results which will be used later in this paper.
Definition 2.1
[24] The robust feasible set of \(\text{(UP) }\) is defined by
$${\mathcal {F}}^{\mathrm {(RUP)}}:=\left\{ x\in C: g(x,v)\in -K,\ \forall v\in {\mathcal {V}}\right\} .$$
Definition 2.2
[24] We say that \({\bar{x}}\in {\mathcal {F}}^{\mathrm {(RUP)}}\) is a robust optimal solution of \(\text{(UP) }\), iff it is an optimal solution of \(\text{(RUP) }\). The robust optimal solution set of \(\text{(UP) }\), which consists of all the robust optimal solutions of \(\text{(UP) }\), is given by
$${\mathcal {S}}^{\mathrm {(RUP)}}:=\left\{ {\bar{x}}\in {\mathcal {F}}^{\mathrm {(RUP)}}: \max _{u\in {\mathcal {U}}}f({\bar{x}},u)\le \max _{u\in {\mathcal {U}}}f(x,u),\ \forall x\in {\mathcal {F}}^{\mathrm {(RUP)}}\right\} .$$
In order to derive some dual characterizations of \({\mathcal {S}}^{\mathrm {(RUP)}} \), we assume that \({\mathcal {S}}^{\mathrm {(RUP)}} \not =\emptyset \). In [24], under a robust-type subdifferential constraint qualification condition, it is shown that \({\bar{x}}\in {\mathcal {S}}^{\mathrm {(RUP)}}\) if and only if there exist \({\bar{\lambda }}\in K^*\), \({\bar{u}}\in {\mathcal {U}}\), and \({\bar{v}}\in {\mathcal {V}}\), such that
$$0\in \partial f(\cdot ,{\bar{u}})({\bar{x}})+\partial ({\bar{\lambda }} g)(\cdot ,{\bar{v}})({\bar{x}})+\partial \delta _C({\bar{x}}) \qquad \qquad (1)$$
and
$$({\bar{\lambda }} g)({\bar{x}},{\bar{v}})=0,\qquad f({\bar{x}},{\bar{u}})=\max _{u\in {\mathcal {U}}}f({\bar{x}},u). \qquad \qquad (2)$$
As usual, the Lagrangian-type function associated with \(\text{(UP) }\) is defined by
$${\mathcal {L}}(x,\lambda ,u,v):=f(x,u)+(\lambda g)(x,v),\quad (x,\lambda ,u,v)\in X\times K^*\times {\mathcal {U}}\times {\mathcal {V}}.$$
Then, \(\text{(UP) }\) has its conventional uncertain mixed-type dual problem with the same uncertain parameters \(u\in {\mathcal {U}}\) and \(v\in {\mathcal {V}}\):
$$\mathrm {(UD)}\qquad \max \left\{ f(y,u)+(\lambda g)(y,v): 0\in \partial {\mathcal {L}}(\cdot ,\lambda +\mu ,u,v)(y)+\partial \delta _C(y),\ (\mu g)(y,v)\ge 0,\ y\in C,\ \lambda ,\mu \in K^*\right\} .$$
The optimistic counterpart of \((\mathrm {UD})\), called the optimistic dual optimization problem, is a deterministic maximization problem given by
$$\mathrm {(OUD)}\qquad \max \left\{ f(y,u)+(\lambda g)(y,v): 0\in \partial {\mathcal {L}}(\cdot ,\lambda +\mu ,u,v)(y)+\partial \delta _C(y),\ (\mu g)(y,v)\ge 0,\ y\in C,\ \lambda ,\mu \in K^*,\ u\in {\mathcal {U}},\ v\in {\mathcal {V}}\right\} .$$
Here, the maximization is also over all the parameters \(u\in {\mathcal {U}}\) and \(v\in {\mathcal {V}}\).
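The interplay between the robust primal problem and an optimistic dual can be checked numerically on a toy instance of our own (the same kind of one-dimensional data as before). For simplicity the sketch below uses a plain optimistic Lagrangian dual, \(\max _{\lambda \ge 0,\, u,\, v}\min _x \left[ f(x,u)+\lambda g(x,v)\right] \), rather than the mixed-type dual with its subdifferential feasibility conditions; weak duality says this value never exceeds the robust primal value.

```python
# Toy convex instance (illustrative only):
#   f(x, u) = u * x**2 with u in U = [1, 2],
#   g(x, v) = v - x    with v in V = [0, 1], constraint g(x, v) <= 0 for all v.
# Robust primal value: min_{x >= 1} max_u u * x**2 = 2, attained at x = 1.

xs = [i / 50 for i in range(-250, 251)]   # grid on [-5, 5]; contains x = 1.0
lams = [i / 10 for i in range(81)]        # multiplier grid on [0, 8]
us = [1.0, 1.25, 1.5, 1.75, 2.0]
vs = [0.0, 0.25, 0.5, 0.75, 1.0]

# Robust (worst-case) primal value over the robust feasible set x >= 1.
primal = min(max(u * x ** 2 for u in us) for x in xs if x >= 1.0)

# Optimistic Lagrangian dual: maximize over (lam, u, v) the inner minimum in x.
dual = float("-inf")
for lam in lams:
    for u in us:
        for v in vs:
            inner = min(u * x * x + lam * (v - x) for x in xs)
            dual = max(dual, inner)
```

On this instance strong duality holds (the dual maximum, attained at \(\lambda =4\), \(u=2\), \(v=1\), also equals 2), consistent with the convexity assumptions on f and g.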
Remark 2.1
Obviously, if \(\mu =0\), \(\mathrm {(OUD)}\) collapses to the Wolfe-type optimistic dual problem \((\mathrm {OUD_{W}})\) given by
$$\mathrm {(OUD_W)}\qquad \max \left\{ f(y,u)+(\lambda g)(y,v): 0\in \partial {\mathcal {L}}(\cdot ,\lambda ,u,v)(y)+\partial \delta _C(y),\ y\in C,\ \lambda \in K^*,\ u\in {\mathcal {U}},\ v\in {\mathcal {V}}\right\} .$$
If \(\lambda =0\), \(\mathrm {(OUD)}\) collapses to the Mond–Weir-type optimistic dual problem \((\mathrm {OUD_{M}})\) given by
$$\mathrm {(OUD_M)}\qquad \max \left\{ f(y,u): 0\in \partial {\mathcal {L}}(\cdot ,\mu ,u,v)(y)+\partial \delta _C(y),\ (\mu g)(y,v)\ge 0,\ y\in C,\ \mu \in K^*,\ u\in {\mathcal {U}},\ v\in {\mathcal {V}}\right\} .$$
Now, we shall present some robust strong duality properties between \(\mathrm {(UP)}\) and \(\mathrm {(UD)}\). Let us recall that robust strong duality between \((\mathrm {UP})\) and \((\mathrm {UD})\) refers to the situation in which the optimal value \(val(\mathrm {RUP})\) of \((\mathrm {RUP})\) equals the optimal value \( val(\mathrm {OUD})\) of \((\mathrm {OUD})\), and both the minimum of \((\mathrm {RUP})\) and the maximum of \((\mathrm {OUD})\) are attained. In what follows, we always denote by \( {\mathcal {F}}^{\mathrm {(OUD)}}\) the feasible set of \((\mathrm {OUD})\).
Lemma 2.1
If \({\bar{x}}\in {\mathcal {F}}^{\mathrm {(RUP)}}\) is a robust optimal solution of \(\mathrm {(UP)}\), and there exist \({\bar{\lambda }}\in K^*\), \({\bar{u}}\in {\mathcal {U}}\), and \({\bar{v}}\in {\mathcal {V}}\), such that conditions (1) and (2) hold, then \(\left( {\bar{x}},{\bar{\lambda }},0,{\bar{u}},{\bar{v}}\right) \) and \(\left( {\bar{x}},0,{\bar{\lambda }},{\bar{u}},{\bar{v}}\right) \) are optimal solutions of \((\mathrm {OUD})\), and the objective values of \(\mathrm {(RUP)}\) and \((\mathrm {OUD})\) are equal.
Proof
Let \({\bar{x}}\in {\mathcal {F}}^{\mathrm {(RUP)}}\) be a robust optimal solution of \(\mathrm {(UP)}\), and there exist \({\bar{\lambda }}\in K^*\), \({\bar{u}}\in {\mathcal {U}}\), and \({\bar{v}}\in {\mathcal {V}}\), such that conditions (1) and (2) hold. Obviously, \(\left( {\bar{x}},{\bar{\lambda }},0,{\bar{u}},{\bar{v}}\right) \) and \(\left( {\bar{x}},0,{\bar{\lambda }},{\bar{u}},{\bar{v}}\right) \) are feasible solutions of \((\mathrm {OUD})\). On the other hand, for any feasible solution \((y,\lambda ,\mu , u,v)\) of \((\mathrm {OUD})\), we have
It follows that
Then, there exist \(\eta \in \partial {\mathcal {L}}( \cdot ,\lambda +\mu ,u,v)(y)\) and \(\xi \in \partial \delta _C( y)\), such that \(\eta +\xi =0.\) By the definition of subdifferential, we obtain
Then, adding these inequalities yields
Moreover, together with \( \left( ({\lambda }+\mu ) g\right) ({\bar{x}}, {v})\le 0\), \((\mu g)(y,v)\ge 0\) and (3), we get
which implies that
Together with (1) and (2), we have
Thus, \(\left( {\bar{x}},{\bar{\lambda }},0,{\bar{u}},{\bar{v}}\right) \) is an optimal solution of \((\mathrm {OUD})\), and the objective values of \(\mathrm {(RUP)}\) and \((\mathrm {OUD})\) are equal.
On the other hand, by a similar argument as above, we can easily show that \(\left( {\bar{x}},0,{\bar{\lambda }},{\bar{u}},{\bar{v}}\right) \in {\mathcal {F}}^{\mathrm {(OUD)}}\) is also an optimal solution of \((\mathrm {OUD})\), and the objective values of \(\mathrm {(RUP)}\) and \((\mathrm {OUD})\) are equal. This completes the proof. \(\square \)
Similarly, we can easily obtain the following result.
Lemma 2.2
If \({\bar{x}}\in {\mathcal {F}}^{\mathrm {(RUP)}}\) is a robust optimal solution of \(\mathrm {(UP)}\), and there exist \({\bar{\lambda }}\in K^*\), \({\bar{u}}\in {\mathcal {U}}\), and \({\bar{v}}\in {\mathcal {V}}\), such that conditions (1) and (2) hold, then \(\left( {\bar{x}},{\bar{\lambda }}, {\bar{u}},{\bar{v}}\right) \) is an optimal solution of \((\mathrm {OUD_W})\) and \((\mathrm {OUD_M})\), and the objective values of \(\mathrm {(RUP)}\), \((\mathrm {OUD_W})\) and \((\mathrm {OUD_M})\) are equal.
3 Constant Lagrangian-Type Function Properties
In this section, we will prove that the Lagrangian-type function associated with a Lagrange multiplier is a constant on a subset of X which is wider than the robust optimal solution set \( {\mathcal {S}}^{\mathrm {(RUP)}}\).
Proposition 3.1
Let \(\left( {\bar{y}}, {\bar{\lambda }},{\bar{\mu }}, {\bar{u}},{\bar{v}}\right) \in {\mathcal {F}}^{\mathrm {(OUD)}}\) be an optimal solution of \((\mathrm {OUD})\). Then, \(({\bar{\mu }} g)({\bar{y}},{\bar{v}})=0\) and for any \(y\in {\mathcal {F}}_1^{\mathrm {(OUD)}}:=\left\{ y\in C:\left( {y}, {\bar{\lambda }},{\bar{\mu }}, {\bar{u}},{\bar{v}}\right) \in {\mathcal {F}}^{\mathrm {(OUD)}}\right\} \), \({\mathcal {L}}( y,{\bar{\lambda }}+{\bar{\mu }} ,{\bar{u}},{\bar{v}})\) is a constant on \({\mathcal {F}}_1^{\mathrm {(OUD)}}\). Furthermore, if \(val(\mathrm {RUP})=val(\mathrm {OUD})\), then, for any \(y\in {\mathcal {S}}^{\mathrm {(RUP)}}\), \((({\bar{\lambda }}+{\bar{\mu }}) g)(y,{\bar{v}})=0\), \(f(y,{\bar{u}})=\max _{u\in {\mathcal {U}}}f(y,{u})\), and \({\mathcal {L}}( y,{\bar{\lambda }}+{\bar{\mu }} ,{\bar{u}},{\bar{v}}) \) is a constant on \({\mathcal {S}}^{\mathrm {(RUP)}}\).
Proof
Let \(\left( {\bar{y}}, {\bar{\lambda }},{\bar{\mu }}, {\bar{u}},{\bar{v}}\right) \in {\mathcal {F}}^{\mathrm {(OUD)}}\) be an optimal solution of \((\mathrm {OUD})\). Then, \({\bar{y}}\in C\), \({\bar{\lambda }}\in K^*\), \({\bar{\mu }}\in K^*\), \({\bar{u}}\in {\mathcal {U}}\), \({\bar{v}}\in {\mathcal {V}}\), and
By an argument similar to that given in the proof of Lemma 2.1, we can show that
Hence, it follows from \(({\bar{\mu }} g)({\bar{y}},{\bar{v}})\ge 0\) that
which implies that
Moreover, for \({\bar{\lambda }}\) and \({\bar{\mu }}\) given above, we have
So, for any \(y\in {\mathcal {F}}_1^{\mathrm {(OUD)}}\),
which is a constant. In addition, by taking \(y={\bar{y}}\) in (5), it follows that \(({\bar{\mu }} g)({\bar{y}},{\bar{v}})=0\).
Furthermore, if \(val(\mathrm {RUP})=val(\mathrm {OUD})\), then by \(({\bar{\mu }} g)({\bar{y}},{\bar{v}})=0\) and (4), we have
So, together with (6) and \(({\bar{\lambda }}+{\bar{\mu }})g(y,{\bar{v}})\le 0\), \(y\in {\mathcal {S}}^{\mathrm {(RUP)}}\), we obtain
Since \(({\bar{\mu }} g)({\bar{y}},{\bar{v}})=0\), it follows from (6) and (7) that for any \(y\in {\mathcal {S}}^{\mathrm {(RUP)}}\),
which is a constant.
In addition, by (7), we get
This implies that
This completes the proof. \(\square \)
By Lemma 2.1 and Proposition 3.1, the result in the corollary below can be easily established.
Corollary 3.1
Suppose that \({\bar{x}}\in {\mathcal {F}}^{\mathrm {(RUP)}}\) is a robust optimal solution of \(\mathrm {(UP)}\), and there exist \({\bar{\lambda }}\in K^*\), \({\bar{u}}\in {\mathcal {U}}\), and \({\bar{v}}\in {\mathcal {V}}\), such that conditions (1) and (2) hold. Then, for any \(y\in {\mathcal {F}}_0^{\mathrm {(OUD)}}:=\left\{ y\in C:\left( {y}, {\bar{\lambda }},0, {\bar{u}},{\bar{v}}\right) \in {\mathcal {F}}^{\mathrm {(OUD)}}\right\} \), \({\mathcal {L}}( y,{\bar{\lambda }} ,{\bar{u}},{\bar{v}}) \) is a constant on \({\mathcal {F}}_0^{\mathrm {(OUD)}}\), and for any \(y\in {\mathcal {S}}^{\mathrm {(RUP)}}\), \(({\bar{\lambda }} g)(y,{\bar{v}})=0\), and \(f(y,{\bar{u}})=\max _{u\in {\mathcal {U}}}f(y,{u})\).
Remark 3.1
In [24, Proposition 3.1], it is observed that the Lagrangian-type function associated with \(\text{(UP) }\) is constant on the robust optimal solution set \( {\mathcal {S}}^{\mathrm {(RUP)}}\). In Corollary 3.1, however, the Lagrangian-type function associated with \(\text{(UP) }\) is shown to be constant on a subset of X which may be strictly larger than the robust optimal solution set \( {\mathcal {S}}^{\mathrm {(RUP)}}\). Thus, the result obtained in Corollary 3.1 covers [24, Proposition 3.1].
Example 3.1
[22] Consider the following uncertain optimization problem
where \((u_1,u_2)\in {\mathcal {U}}:=\left\{ (1+t_1,1+t_2)\in {\mathbb {R}}^2:\sqrt{t_1^2+t_2^2}\le 1\right\} \) and \((v_1,v_2)\in {\mathcal {V}}:=[0,1]\times [-1,0]\). It is easy to see that
and
Let \(y=(y_1,y_2)\in {\mathbb {R}}^2\), \(\lambda =(\lambda _1,\lambda _2)\in {\mathbb {R}}^2_+\), \(\mu =(\mu _1,\mu _2)\in {\mathbb {R}}^2_+\), \(u=(u_1,u_2)\in {\mathcal {U}}\), and \(v=(v_1,v_2)\in {\mathcal {V}}\). The optimistic dual optimization problem of \((\text{ UP })\) is defined as follows:
Let \({\bar{x}}=(0,0)\in {\mathcal {F}}^{\mathrm {(RUP)}}.\) Clearly, \({\bar{x}} \in {\mathcal {S}}^{\mathrm {(RUP)}}\), and there exist \({\bar{\lambda }}=(0,1)\in {\mathbb {R}}^2_+\), \({\bar{u}}=(1,0)\in {\mathcal {U}}\), and \({\bar{v}}=(1,0)\in {\mathcal {V}}\), such that conditions (1) and (2) hold. Then, for any \(y\in {\mathcal {F}}_0^{\mathrm {(OUD)}}={\mathbb {R}}^2\), it is easy to show that \({\mathcal {L}}( y,{\bar{\lambda }} ,{\bar{u}},{\bar{v}}) =0\), which is a constant on \({\mathbb {R}}^2\).
Moreover, for any \(y\in {\mathcal {S}}^{\mathrm {(RUP)}}\), we can show that
and
Thus, Corollary 3.1 is applicable.
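The constancy property in Corollary 3.1 can also be checked numerically. The instance below is our own toy construction (distinct from Example 3.1): its robust optimal solution set is the whole interval \([0,1]\), and the multipliers \({\bar{\lambda }}=0\), \({\bar{u}}=2\), \({\bar{v}}=0\) satisfy conditions of the type (1) and (2), so the Lagrangian-type function should be constant on the solution set.

```python
# Toy instance (our own, distinct from Example 3.1):
#   f(x, u) = u * max(|x| - 1, 0),  u in U = [1, 2]
#   g(x, v) = v - x,                v in V = [-1, 0]  (robust constraint: x >= 0)
#   C = R, K = R_+.
# Robust optimal solution set: S^(RUP) = [0, 1], optimal value 0.
# Corollary 3.1 predicts L(., lam_bar, u_bar, v_bar) is constant on S^(RUP).

def f(x, u):
    return u * max(abs(x) - 1.0, 0.0)

def g(x, v):
    return v - x

lam_bar, u_bar, v_bar = 0.0, 2.0, 0.0

def lagrangian(x):
    # L(x, lam_bar, u_bar, v_bar) = f(x, u_bar) + (lam_bar * g)(x, v_bar)
    return f(x, u_bar) + lam_bar * g(x, v_bar)

solution_set = [i / 50 for i in range(51)]     # grid on S^(RUP) = [0, 1]
values = [lagrangian(x) for x in solution_set]
# L is identically 0 on S^(RUP), and (lam_bar * g)(x, v_bar) = 0 there as well.
```

Both the constancy of the Lagrangian-type function and the vanishing of the complementarity term on the robust optimal solution set agree with Corollary 3.1 on this instance.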
Now, we give the following two corollaries by using Wolfe-type and Mond–Weir-type robust dual problems of \(\mathrm {(UP)}\), respectively. Their proofs follow readily as direct consequences of Proposition 3.1.
Corollary 3.2
Let \( {\mathcal {F}}^{\mathrm {(OUD_W)}}\) be the feasible set of \((\mathrm {OUD_W})\). If \(\left( {\bar{y}}, {\bar{\lambda }}, {\bar{u}},{\bar{v}}\right) \in {\mathcal {F}} ^{\mathrm {(OUD_W)}}\) is an optimal solution of \((\mathrm {OUD_W})\), then, for any \(y\in {\mathcal {F}}_1^{\mathrm {(OUD_W)}}:=\left\{ y\in C:\left( {y}, {\bar{\lambda }}, {\bar{u}},{\bar{v}}\right) \in {\mathcal {F}}^{\mathrm {(OUD_W)}}\right\} \), \({\mathcal {L}}( y,{\bar{\lambda }} ,{\bar{u}},{\bar{v}})\) is a constant on \({\mathcal {F}}_1^{\mathrm {(OUD_W)}}\). Furthermore, if \(val(\mathrm {RUP})=val(\mathrm {OUD_W})\), then, for any \(y\in {\mathcal {S}}^{\mathrm {(RUP)}}\), \(( {\bar{\lambda }} g)(y,{\bar{v}})=0\), \(f(y,{\bar{u}})=\max _{u\in {\mathcal {U}}}f(y,{u})\), and \({\mathcal {L}}( y,{\bar{\lambda }} ,{\bar{u}},{\bar{v}}) \) is a constant on \({\mathcal {S}}^{\mathrm {(RUP)}}\).
Corollary 3.3
Let \( {\mathcal {F}}^{\mathrm {(OUD_M)}}\) be the feasible set of \((\mathrm {OUD_M})\). If \(\left( {\bar{y}}, {\bar{\mu }}, {\bar{u}},{\bar{v}}\right) \in {\mathcal {F}}^{\mathrm {(OUD_M)}}\) is an optimal solution of \((\mathrm {OUD_M})\), then, for any \(y\in {\mathcal {F}}_1^{\mathrm {(OUD_M)}}:=\left\{ y\in C:\left( {y}, {\bar{\mu }}, {\bar{u}},{\bar{v}}\right) \in {\mathcal {F}}^{\mathrm {(OUD_M)}}\right\} \), \({\mathcal {L}}( y, {\bar{\mu }} ,{\bar{u}},{\bar{v}})\) is a constant on \({\mathcal {F}}_1^{\mathrm {(OUD_M)}}\). Furthermore, if \(val(\mathrm {RUP})=val(\mathrm {OUD_M})\), then, for any \(y\in {\mathcal {S}}^{\mathrm {(RUP)}}\), \(( {\bar{\mu }} g)(y,{\bar{v}})=0\), \(f(y,{\bar{u}})=\max _{u\in {\mathcal {U}}}f(y,{u})\), and \({\mathcal {L}}( y, {\bar{\mu }} ,{\bar{u}},{\bar{v}}) \) is a constant on \({\mathcal {S}}^{\mathrm {(RUP)}}\).
4 Characterizations of the Robust Optimal Solution Set
In this section, by using the Lagrangian-type function properties, we present a new approach to characterize the robust optimal solution set of \(\mathrm {(UP)}\) under the assumptions that an optimal solution of \((\mathrm {OUD})\) is given and that the optimal values of \(\mathrm {(RUP)}\) and \((\mathrm {OUD})\) are equal.
Theorem 4.1
Let \(\left( {\bar{y}}, {\bar{\lambda }},{\bar{\mu }}, {\bar{u}},{\bar{v}}\right) \in {\mathcal {F}}^{\mathrm {(OUD)}}\) be an optimal solution of \((\mathrm {OUD})\). If \(val(\mathrm {RUP})=val(\mathrm {OUD})\), then, \({\mathcal {S}}^{\mathrm {(RUP)}}={\mathcal {S}},\) where
Proof
\((\supseteq ):\) Let \(x\in {\mathcal {S}}\). Then, \(x\in {\mathcal {F}}^{\mathrm {(RUP)}}\), \(\left( \left( {\bar{\lambda }} +{\bar{\mu }}\right) g\right) ( {x},{\bar{v}})=0\), \(f({x},{\bar{u}}) =\max _{u\in {\mathcal {U}}}f({x},{u}),\) and there exists \( \eta \in \partial {\mathcal {L}}( \cdot ,{\bar{\lambda }}+{\bar{\mu }},{\bar{u}},{\bar{v}})({\bar{y}}) \cap \partial {\mathcal {L}}( \cdot ,{\bar{\lambda }}+{\bar{\mu }},{\bar{u}},{\bar{v}})(x)\), such that \(\left\langle \eta , {\bar{y}} -x\right\rangle =0\). Since \(\eta \in \partial {\mathcal {L}}( \cdot ,{\bar{\lambda }}+{\bar{\mu }},{\bar{u}},{\bar{v}})(x),\) we have
It follows that
By Proposition 3.1, it is clear that \(({\bar{\mu }} g)({\bar{y}},{\bar{v}})=0.\) Thus, together with \(\left( \left( {\bar{\lambda }} +{\bar{\mu }}\right) g\right) ({x},{\bar{v}})=0\), \(f({x},{\bar{u}}) =\max _{u\in {\mathcal {U}}}f({x},{u}),\) and (9), we obtain
Moreover, since \(val(\mathrm {RUP})=val(\mathrm {OUD})\) and \(val(\mathrm {OUD})=f( {\bar{y}},{\bar{u}})+\left( {\bar{\lambda }} g\right) ({\bar{y}},{\bar{v}})\), it follows that
This, together with \(x\in {\mathcal {F}}^{\mathrm {(RUP)}}\), gives \(x\in {\mathcal {S}}^{\mathrm {(RUP)}}\).
\((\subseteq ):\) Let \(x\in {\mathcal {S}}^{\mathrm {(RUP)}}.\) Obviously, from Proposition 3.1, \(\left( \left( {\bar{\lambda }} +{\bar{\mu }}\right) g\right) ( {x},{\bar{v}})=0\) and \(f({x},{\bar{u}}) =\max _{u\in {\mathcal {U}}}f({x},{u})\). Moreover, since \(\left( {\bar{y}}, {\bar{\lambda }},{\bar{\mu }}, {\bar{u}},{\bar{v}}\right) \in {\mathcal {F}}^{\mathrm {(OUD)}}\), it is clear that \({\bar{y}}\in C\), \({\bar{\lambda }}\in K^*\), \({\bar{\mu }}\in K^*\), \({\bar{u}}\in {\mathcal {U}}\), \({\bar{v}}\in {\mathcal {V}}\), and
It follows that
Then, there exists \(\eta \in \partial {\mathcal {L}}( \cdot ,{\bar{\lambda }}+{\bar{\mu }},{\bar{u}},{\bar{v}})({\bar{y}})\), such that \(-\eta \in \partial \delta _C( {\bar{y}})\). Thus,
On the other hand, by \(\eta \in \partial {\mathcal {L}}( \cdot ,{\bar{\lambda }}+{\bar{\mu }},{\bar{u}},{\bar{v}})({\bar{y}})\) and Proposition 3.1, we obtain
where the equality follows from (8) and \( ({\bar{\mu }}g)({\bar{y}}, {\bar{v}})=0\). This, together with (10), implies that
Now, it remains to show that \(\eta \in \partial {\mathcal {L}}( \cdot ,{\bar{\lambda }}+{\bar{\mu }},{\bar{u}},{\bar{v}})(x)\). In fact, for any \(y\in X,\)
where the second equality follows from (11), the inequality follows from \(\eta \in \partial {\mathcal {L}}( \cdot ,{\bar{\lambda }}+{\bar{\mu }},{\bar{u}},{\bar{v}})({\bar{y}})\). Moreover, since \(x\in {\mathcal {S}}^{\mathrm {(RUP)}}\), it follows from Proposition 3.1 that
where the first equality follows from (8), and the second equality follows from \(({\bar{\mu }} g)({\bar{y}},{\bar{v}})=0\). So, by (12) and (13), we have
which implies that \(\eta \in \partial {\mathcal {L}}( \cdot ,{\bar{\lambda }}+{\bar{\mu }},{\bar{u}},{\bar{v}})(x)\). Thus, \({\mathcal {S}}^{\mathrm {(RUP)}}\subseteq {\mathcal {S}},\) and the proof is complete. \(\square \)
Corollary 4.1
Suppose that \({\bar{x}}\in {\mathcal {F}}^{\mathrm {(RUP)}}\) is a robust optimal solution of \(\mathrm {(UP)}\), and there exist \({\bar{\lambda }}\in K^*\), \({\bar{u}}\in {\mathcal {U}}\), and \({\bar{v}}\in {\mathcal {V}}\), such that conditions (1) and (2) hold. Then, \({\mathcal {S}}^{\mathrm {(RUP)}}={\mathcal {S}}_0,\) where
Proof
Let \({\bar{x}}\in {\mathcal {F}}^{\mathrm {(RUP)}}\) be a robust optimal solution of \(\mathrm {(UP)}\), and there exist \({\bar{\lambda }}\in K^*\), \({\bar{u}}\in {\mathcal {U}}\), and \({\bar{v}}\in {\mathcal {V}}\), such that conditions (1) and (2) hold for \(\left( {\bar{x}},{\bar{\lambda }},{\bar{u}},{\bar{v}}\right) \). Then, it follows from Lemma 2.1 that \(\left( {\bar{x}}, {\bar{\lambda }},0, {\bar{u}},{\bar{v}}\right) \) is an optimal solution of \((\mathrm {OUD})\) and \(val(\mathrm {RUP})=val(\mathrm {OUD})\). Thus, the result follows readily from Theorem 4.1. \(\square \)
In special cases when \(\lambda =0\) and \(\mu =0\), we can obtain results on characterizations of the robust optimal solution set for the problem \(\mathrm {(UP)}\) via its dual problems \((\mathrm {OUD_M} )\) and \((\mathrm {OUD_W} )\), respectively.
Corollary 4.2
Let \({\mathcal {F}}^{\mathrm {(OUD_W)}}\) be the feasible set of \((\mathrm {OUD_W})\). Suppose that \(\left( {\bar{y}}, {\bar{\lambda }}, {\bar{u}},{\bar{v}}\right) \in {\mathcal {F}} ^{\mathrm {(OUD_W)}}\) is an optimal solution of \((\mathrm {OUD_W} )\). If \(val(\mathrm {RUP})=val(\mathrm {OUD_W})\), then, \({\mathcal {S}}^{\mathrm {(RUP)}}={\mathcal {S}}_0.\)
Proof
Suppose that \(\left( {\bar{y}}, {\bar{\lambda }}, {\bar{u}},{\bar{v}}\right) \in {\mathcal {F}} ^{\mathrm {(OUD_W)}}\) is an optimal solution of \((\mathrm {OUD_W} )\). It is easy to see that \(\left( {\bar{y}}, {\bar{\lambda }}, 0,{\bar{u}},{\bar{v}}\right) \in {\mathcal {F}} ^{\mathrm {(OUD )}}\) is an optimal solution of \((\mathrm {OUD} )\). By Theorem 4.1, the desired result follows readily. \(\square \)
Corollary 4.3
Let \( {\mathcal {F}}^{\mathrm {(OUD_M)}}\) be the feasible set of \((\mathrm {OUD_M})\). Suppose that \(\left( {\bar{y}}, {\bar{\mu }}, {\bar{u}},{\bar{v}}\right) \in {\mathcal {F}}^{\mathrm {(OUD_M)}}\) is an optimal solution of \((\mathrm {OUD_M})\). If \(val(\mathrm {RUP})=val(\mathrm {OUD_M})\), then, \({\mathcal {S}}^{\mathrm {(RUP)}}={\mathcal {S}}_0.\)
Proof
Suppose that \(\left( {\bar{y}}, {\bar{\mu }}, {\bar{u}},{\bar{v}}\right) \in {\mathcal {F}} ^{\mathrm {(OUD_M)}}\) is an optimal solution of \((\mathrm {OUD_M} )\). It is easy to see that \(\left( {\bar{y}}, 0,{\bar{\mu }},{\bar{u}},{\bar{v}}\right) \in {\mathcal {F}} ^{\mathrm {(OUD )}}\) is an optimal solution of \((\mathrm {OUD} )\). By Theorem 4.1, we can obtain the desired result. \(\square \)
Remark 4.1
By Lemma 2.2, the results established in Corollaries 4.2 and 4.3 can be obtained from Corollary 4.1.
Now, we establish another characterization of the robust optimal solution set of \((\mathrm {UP} )\).
Theorem 4.2
Let \( {\mathcal {F}}^{\mathrm {(OUD_M)}}\) be the feasible set of \((\mathrm {OUD_M})\). Suppose that \(\left( {\bar{y}}, {\bar{\mu }}, {\bar{u}},{\bar{v}}\right) \in {\mathcal {F}} ^{\mathrm {(OUD_M)}}\) is an optimal solution of \((\mathrm {OUD_M} )\). If \(val(\mathrm {RUP})=val(\mathrm {OUD_M} )\), then, \({\mathcal {S}}^{\mathrm {(RUP)}}={\mathcal {S}}_1,\) where
Proof
\((\supseteq ):\) Let \(x\in {\mathcal {S}}_1\). Then, \(x\in {\mathcal {F}}^{\mathrm {(RUP)}}\), \(\left( {\bar{\mu }} g\right) ( {x},{\bar{v}})=0\), \(f({x},{\bar{u}}) =\max _{u\in {\mathcal {U}}}f({x},{u}),\) and there exists \(\eta \in \partial f( \cdot , {\bar{u}} )({\bar{y}}) \cap \partial f( \cdot , {\bar{u}} )(x)\), such that \(\left\langle \eta , {\bar{y}} -x\right\rangle =0\). Thus,
where the inequality follows from \( \eta \in \partial f( \cdot , {\bar{u}} )(x)\). Then, by \(val(\mathrm {RUP})=val(\mathrm {OUD_M} )\), we have
This, together with \(x\in {\mathcal {F}}^{\mathrm {(RUP)}}\), gives \(x\in {\mathcal {S}}^{\mathrm {(RUP)}}\).
\((\subseteq ):\) Let \(x\in {\mathcal {S}}^{\mathrm {(RUP)}}.\) Obviously, from Corollary 3.3, \(\left( {\bar{\mu }} g\right) ( {x},{\bar{v}})=0\) and \(f({x},{\bar{u}}) =\max _{u\in {\mathcal {U}}}f({x},{u})\). Moreover, since \(\left( {\bar{y}}, {\bar{\mu }}, {\bar{u}},{\bar{v}}\right) \in {\mathcal {F}}^{\mathrm {(OUD_M)}}\), it follows that \({\bar{y}}\in C\), \({\bar{\mu }}\in K^*\), \({\bar{u}}\in {\mathcal {U}}\), \({\bar{v}}\in {\mathcal {V}}\), and
Then, there exist \(\eta \in \partial f(\cdot ,{\bar{u}})({\bar{y}})\), \(\xi \in \partial \left( \left( {\bar{\mu }} g\right) (\cdot ,{\bar{v}})\right) ({\bar{y}}),\) and \(\theta \in \partial \delta _C({\bar{y}})\) such that
Since \(\xi \in \partial \left( \left( {\bar{\mu }} g\right) (\cdot ,{\bar{v}})\right) ({\bar{y}}),\) and \(\theta \in \partial \delta _C({\bar{y}})\), we have
which implies that
Moreover, noting that \( \left( {\bar{\mu }} g\right) (x,{\bar{v}} )=0\) and \(\left( {\bar{\mu }} g\right) ({\bar{y}},{\bar{v}} )\ge 0\), we have \(\left\langle \theta + \xi ,x-{\bar{y}}\right\rangle \le 0.\) Thus, it follows from (14) that
On the other hand, by \(\eta \in \partial f( \cdot , {\bar{u}} )({\bar{y}})\) and Corollary 3.3, we get
where the equality follows from (8) for \({\bar{\lambda }}=0\) and \(({\bar{\mu }} g)(x,{\bar{v}})=0\). This, together with (15), implies that
Now, it remains to show that \(\eta \in \partial f( \cdot , {\bar{u}} )(x)\). In fact, for any \(y\in X,\)
where the second equality follows from (16), the inequality follows from \(\eta \in \partial f( \cdot , {\bar{u}} )({\bar{y}})\). Moreover, since \(x\in {\mathcal {S}}^{\mathrm {(RUP)}}\), it follows from Corollary 3.3 that
where the first equality follows from \(({\bar{\mu }} g)(x,{\bar{v}})=0\), and the second equality follows from (8) for \({\bar{\lambda }}=0\). Thus, by (17) and (18), we have
which means that \(\eta \in \partial f( \cdot , {\bar{u}} )(x)\). Thus, \({\mathcal {S}}^{\mathrm {(RUP)}}\subseteq {\mathcal {S}}_1,\) and the proof is complete. \(\square \)
Corollary 4.4
Suppose that \({\bar{y}}\in {\mathcal {F}}^{\mathrm {(RUP)}}\) is a robust optimal solution of \(\mathrm {(UP)}\), and there exist \({\bar{\mu }}\in K^*\), \({\bar{u}}\in {\mathcal {U}}\), and \({\bar{v}}\in {\mathcal {V}}\), such that conditions (1) and (2) hold. Then, \({\mathcal {S}}^{\mathrm {(RUP)}}={\mathcal {S}}_1.\)
Proof
Let \({\bar{y}}\in {\mathcal {F}}^{\mathrm {(RUP)}}\) be a robust optimal solution of \(\mathrm {(UP)}\), and there exist \({\bar{\mu }}\in K^*\), \({\bar{u}}\in {\mathcal {U}}\), and \({\bar{v}}\in {\mathcal {V}}\), such that conditions (1) and (2) hold. Then, it follows from Lemma 2.2 that \(\left( {\bar{y}}, {\bar{\mu }}, {\bar{u}},{\bar{v}}\right) \) is an optimal solution of \((\mathrm {OUD_M})\) and \(val(\mathrm {RUP})=val(\mathrm {OUD_M})\). Thus, the result follows from Theorem 4.2. \(\square \)
Remark 4.2
It is worth noting that the conclusion of Corollary 4.4 coincides with Theorem 3.2 in [24].
Example 4.1
Consider the uncertain optimization problem \(\mathrm {(UP)}\) introduced in Example 3.1. Let \({\bar{y}} =(0,0)\in {\mathcal {S}}^{\mathrm {(RUP)}}\), \({\bar{\mu }}=(0,1)\in {\mathbb {R}}^2_+\), \({\bar{u}}=(1,0)\in {\mathcal {U}}\), and \({\bar{v}}=(1,0)\in {\mathcal {V}}\). Then, it is easy to see that conditions (1) and (2) hold.
On the other hand, for any \(x\in {\mathcal {F}}^{\mathrm {(RUP)}}=\{(x_1,x_2)\in {\mathbb {R}}^2:x_1\ge 0,x_2\le 0\}\), direct computation gives \(({\bar{\mu }} g)(x,{\bar{v}})={\bar{\mu }}_1( {x}_1+{\bar{v}}_1 {x}_2)+{\bar{\mu }}_2(- {x}_1+{\bar{v}}_2)=-x_1\), \(f(x,{\bar{u}})=\max _{u\in {\mathcal {U}}}f(x,{u})=x_1\), \(\partial f( \cdot , {\bar{u}} )({\bar{y}}) = \partial f( \cdot , {\bar{u}} )(x)=\{(1,0)\}\), and, for \(\eta =(1,0)\), \(\langle \eta , {\bar{y}}-x\rangle =-x_1.\) Then, we can show that
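As a quick numerical sanity check, the closed-form quantities above can be verified on sample feasible points. Since Example 3.1 itself is not reproduced here, the linear forms of \(f(\cdot ,{\bar{u}})\) and \(({\bar{\mu }} g)(\cdot ,{\bar{v}})\) below are inferred from the displayed expressions and should be read as an illustrative sketch, not the authors' exact data:

```python
# Hedged sketch: the data below is inferred from the displayed formulas in
# Example 4.1, not from the full statement of Example 3.1.
mu = (0.0, 1.0)    # bar-mu in R^2_+
u = (1.0, 0.0)     # bar-u in U
v = (1.0, 0.0)     # bar-v in V
y_bar = (0.0, 0.0)
eta = (1.0, 0.0)   # the common subgradient of f(., bar-u) at y_bar and x

def mu_g(x):
    # (bar-mu g)(x, bar-v) = mu1*(x1 + v1*x2) + mu2*(-x1 + v2), as displayed
    return mu[0] * (x[0] + v[0] * x[1]) + mu[1] * (-x[0] + v[1])

def f(x):
    # Assumed linear form f(x, bar-u) = u1*x1 + u2*x2, whose worst case is x1
    return u[0] * x[0] + u[1] * x[1]

def inner(a, b):
    return a[0] * b[0] + a[1] * b[1]

# Check the three identities on a few points of F^(RUP) = {x1 >= 0, x2 <= 0}.
for x in [(0.0, 0.0), (2.0, -3.0), (1.5, 0.0)]:
    assert mu_g(x) == -x[0]                 # (bar-mu g)(x, bar-v) = -x1
    assert f(x) == x[0]                     # f(x, bar-u) = x1
    assert inner(eta, (y_bar[0] - x[0], y_bar[1] - x[1])) == -x[0]  # <eta, y-x> = -x1
print("all identities verified")
```
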
5 Conclusions
In this paper, following the framework of robust optimization, we give various characterizations of robust optimal solution sets for \(\mathrm {(UP)}\) in terms of subgradients and Lagrange multipliers. We show that a Lagrange-type function, corresponding to a Lagrange multiplier, is constant on a subset of the locally convex vector space X that is wider than the robust optimal solution set of \(\mathrm {(UP)}\). Based on these properties of the Lagrange-type function, we establish several simple Lagrange multiplier-based characterizations of the robust optimal solution set of \(\mathrm {(UP)}\) via its dual problems. It is worth noting that the approach used in this paper is new, and several existing results in the literature can be recovered by means of it.
References
Mangasarian, O.L.: A simple characterization of solution sets of convex programs. Oper. Res. Lett. 7, 21–26 (1988)
Burke, J.V., Ferris, M.C.: Characterization of solution sets of convex programs. Oper. Res. Lett. 10, 57–60 (1991)
Jeyakumar, V., Yang, X.Q.: Characterizing the solution sets of pseudo-linear programs. J. Optim. Theory Appl. 87, 747–755 (1995)
Penot, J.P.: Characterization of solution sets of quasiconvex programs. J. Optim. Theory Appl. 117, 627–636 (2003)
Suzuki, S., Kuroiwa, D.: Characterizations of the solution set for quasiconvex programming in terms of Greenberg–Pierskalla subdifferential. J. Glob. Optim. 62, 431–441 (2015)
Yang, X.M.: On characterizing the solution sets of pseudoinvex extremum problems. J. Optim. Theory Appl. 140, 537–542 (2009)
Zhao, K.Q., Yang, X.M.: Characterizations of the solution set for a class of nonsmooth optimization problems. Optim. Lett. 7, 685–694 (2013)
Liu, C.P., Yang, X.M.: Characterizations of the approximate solution sets of nonsmooth optimization problems and its applications. Optim. Lett. 9, 755–768 (2015)
Son, T.Q., Kim, D.S.: A new approach to characterize the solution set of a pseudoconvex programming problem. J. Comput. Appl. Math. 261, 333–340 (2014)
Long, X.J., Peng, Z.Y., Wang, X.F.: Characterizations of the solution set for nonconvex semi-infinite programming problems. J. Nonlinear Convex Anal. 17, 251–265 (2016)
Jeyakumar, V., Lee, G.M., Dinh, N.: Characterization of solution sets of convex vector minimization problems. Eur. J. Oper. Res. 174, 1380–1395 (2006)
Ben-Tal, A., Nemirovski, A.: Robust optimization – methodology and applications. Math. Program. Ser. B 92, 453–480 (2002)
Ben-Tal, A., Ghaoui, L.E., Nemirovski, A.: Robust Optimization. Princeton Series in Applied Mathematics. Princeton University Press, Princeton (2009)
Jeyakumar, V., Li, G.: Strong duality in robust convex programming: complete characterizations. SIAM J. Optim. 20, 3384–3407 (2010)
Li, G.Y., Jeyakumar, V., Lee, G.M.: Robust conjugate duality for convex optimization under uncertainty with application to data classification. Nonlinear Anal. 74, 2327–2341 (2011)
Boţ, R.I., Jeyakumar, V., Li, G.Y.: Robust duality in parametric convex optimization. Set-Valued Var. Anal. 21, 177–189 (2013)
Fang, D.H., Li, C., Yao, J.C.: Stable Lagrange dualities for robust conical programming. J. Nonlinear Convex Anal. 16, 2141–2158 (2015)
Fakhar, M., Mahyarinia, M.R., Zafarani, J.: On nonsmooth robust multiobjective optimization under generalized convexity with applications to portfolio optimization. Eur. J. Oper. Res. 265, 39–48 (2018)
Wei, H.Z., Chen, C.R., Li, S.J.: Characterizations for optimality conditions of general robust optimization problems. J. Optim. Theory Appl. 177, 835–856 (2018)
Sun, X.K., Fu, H.Y., Zeng, J.: Robust approximate optimality conditions for uncertain nonsmooth optimization with infinite number of constraints. Mathematics 7, 12 (2019)
Jeyakumar, V., Lee, G.M., Li, G.Y.: Characterizing robust solution sets of convex programs under data uncertainty. J. Optim. Theory Appl. 164, 407–435 (2015)
Li, X.B., Wang, S.: Characterizations of robust solution set of convex programs with uncertain data. Optim. Lett. 12, 1387–1402 (2018)
Lee, G.M., Yao, J.C.: On solution sets for robust optimization problems. J. Nonlinear Convex Anal. 17, 957–966 (2016)
Sun, X.K., Peng, Z.Y., Guo, X.L.: Some characterizations of robust optimal solutions for uncertain convex optimization problems. Optim. Lett. 10, 1463–1478 (2016)
Sun, X.K., Long, X.J., Fu, H.Y., Li, X.B.: Some characterizations of robust optimal solutions for uncertain fractional optimization and applications. J. Ind. Manag. Optim. 13, 803–824 (2017)
Sisarat, N., Wangkeeree, R., Lee, G.M.: Some characterizations of robust solution sets for uncertain convex optimization problems with locally Lipschitz inequality constraints. J. Ind. Manag. Optim. https://doi.org/10.3934/jimo.2018163
Rockafellar, R.T.: Convex Analysis. Princeton Univ. Press, Princeton (1970)
Acknowledgements
We would like to express our sincere thanks to the anonymous referees for many helpful comments and constructive suggestions which have contributed to the final preparation of this paper. This research was supported by the Basic and Advanced Research Project of Chongqing (cstc2017jcyjBX0032, cstc2016jcyjAX0178), the ARC Discovery Grant (DP190103361), the National Natural Science Foundation of China (11701057 and 11471059), and the Program for University Innovation Team of Chongqing (CXTDX201601026).
Communicated by Xinmin Yang.
Sun, X., Teo, K.L. & Tang, L. Dual Approaches to Characterize Robust Optimal Solution Sets for a Class of Uncertain Optimization Problems. J Optim Theory Appl 182, 984–1000 (2019). https://doi.org/10.1007/s10957-019-01496-w