Abstract
We present approximate solutions for the robust semi-infinite multi-objective convex symmetric cone programming problem. Using the robust optimization approach, we establish an approximate optimality theorem and approximate duality theorems for approximate solutions of a convex symmetric cone optimization problem involving infinitely many constraints and multiple objectives to be optimized simultaneously, under the robust characteristic cone constraint qualification. We also give an example illustrating the obtained results in an important special case, namely the robust semi-infinite multi-objective convex second-order cone program.
1 Introduction
Robust optimization [1,2,3,4] has become a very active methodology for dealing with optimization problems under uncertainty. Uncertainty here means that the input parameters of these problems are not known precisely at the time a solution must be determined. In recent years, many authors have established approximate optimality conditions and duality theorems for approximate solutions (\(\epsilon \)-solutions) for different classes of optimization problems [5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21]. More specifically, Jeyakumar and Li established strong duality for robust semidefinite linear programming [8]; Lee and Jiao established quasi approximate solutions for robust convex programming [9]; and Lee and Lee established approximate solutions for robust convex programming [11], robust fractional programming [12], robust convex semidefinite programming [14], and robust semi-infinite programming [15]. Lee and Lee also established optimality conditions and duality theorems for robust semi-infinite multi-objective programming [13].
Convex symmetric cone optimization [22,23,24,25,26] problems are a class of convex optimization problems in which we minimize a convex function over the intersection of an affine set with the Cartesian product of symmetric cones. Well-known examples of symmetric cones are the nonnegative orthant cone, the second-order cone, the cone of symmetric positive semidefinite matrices, the cone of complex hermitian positive semidefinite matrices, and the cone of quaternion Hermitian positive semidefinite matrices. Therefore, well-studied special cases of symmetric programming are linear programming, second-order cone programming [27,28,29,30] and semidefinite programming [31,32,33]. Other special cases are optimization problems over complex Hermitian positive semidefinite matrices, and optimization problems over quaternion Hermitian positive semidefinite matrices [22, 23, 25].
To illustrate the modeling potential of conic programming and the extensive applicability of symmetric cone programming, we mention that all convex programming problems can be formulated as conic programs [34], and that almost all real-world applications of conic programming are associated with symmetric cones [27, 28, 31, 32, 34]. There is a strong relationship between symmetric cone programming and Euclidean Jordan algebras. When we optimize over symmetric cones, the importance of Euclidean Jordan algebras stems from the fact that a cone is symmetric if and only if it is the cone of squares of some Euclidean Jordan algebra. Some Jordan algebraic notations are listed in Table 1; these notations will be used in the sequel. Readers who are unfamiliar with the theory of Jordan algebras are encouraged to read [22, Section 2].
Despite the genuine need for establishing an approximate optimality theorem and approximate duality theorems for convex symmetric cone programming problems, there are no symmetric conicity analogs of these approximate theorems. Inspired by this gap in the literature, in this paper, we establish \(\epsilon \)-solutions for the robust convex symmetric cone programming problem. Our setting is general in the sense that our convex symmetric cone optimization problem involves infinitely many constraints and multiple objective functions. That is, we establish an \(\epsilon \)-optimality theorem and \(\epsilon \)-duality theorems for robust semi-infinite multi-objective convex symmetric cone programming. We also apply our results to an important special case, namely the robust semi-infinite multi-objective convex second-order cone program.
The semi-infinite multi-objective symmetric programming problem is defined as

$$\text{SIMSP:}\qquad \min_{x\in \mathbb {R}^m}\; \big (f_1(x),\ldots ,f_K(x)\big ) \quad \text{subject to}\quad a_0^{(t)}+\sum _{i=1}^m x_ia_i^{(t)}\succeq 0,\quad t\in T,$$

where \(x\in \mathbb {R}^m\), \(f_k:\mathbb {R}^m\rightarrow \mathbb {R}\) for \(k=1,\ldots ,K\), \(a_i^{(t)} \in \mathcal {J}\) for \(i=0,1,\ldots ,m\) and \(t\in T\), \(\mathcal {J}\) is a Jordan algebra with dimension n and rank r, and T is an arbitrary index set that can be infinite.
The semi-infinite multi-objective symmetric programming problem with uncertain data in the constraints is defined as

$$\text{USIMSP:}\qquad \min_{x\in \mathbb {R}^m}\; \big (f_1(x),\ldots ,f_K(x)\big ) \quad \text{subject to}\quad a_0^{(t)}+\sum _{i=1}^m x_ia_i^{(t)}\succeq 0,\quad t\in T,$$
where, for each \(i=0,1,\ldots ,m\) and \(t\in T\), the element \(a_i^{(t)}\) belongs to an uncertainty set \(\mathcal {V}_i^{(t)} \subseteq \mathcal {J}\).
The robust counterpart of USIMSP is defined as

$$\text{RSIMSP:}\qquad \min_{x\in \mathbb {R}^m}\; \big (f_1(x),\ldots ,f_K(x)\big ) \quad \text{subject to}\quad a_0^{(t)}+\sum _{i=1}^m x_ia_i^{(t)}\succeq 0,\quad \forall \, a_i^{(t)}\in \mathcal {V}_i^{(t)},\; i=0,1,\ldots ,m,\; t\in T.$$
Hence, the robust feasible set \(\mathcal {F}_{\mathcal {P}}\) of RSIMSP is given as

$$\mathcal {F}_{\mathcal {P}} := \left\lbrace x\in \mathbb {R}^m : a_0^{(t)}+\sum _{i=1}^m x_ia_i^{(t)}\succeq 0,\ \forall \, a_i^{(t)}\in \mathcal {V}_i^{(t)},\; i\in I,\; t\in T \right\rbrace ,$$

where \(I:=\lbrace 0,1,2,\ldots ,m\rbrace \). We make the following assumptions throughout this paper.
Assumption 1.1
For each \(t\in T\) and \(i\in I\), \(\mathcal {V}_i^{(t)}\subseteq \mathcal {J}\) is compact and convex.
Assumption 1.2
The robust feasibility set \(\mathcal {F}_{\mathcal {P}}\) has a nonempty interior.
Assumption 1.1 is necessary to give a characterization and prove properties of the robust characteristic cone which will be given in the next section. Assumption 1.2 is called the Slater condition and is necessary to prove and apply the robust version of Farkas’ lemma.
We use \(\mathbb {R}_+^n:= \lbrace (x_1,\ldots ,x_n)\in \mathbb {R}^n: x_i \ge 0, i=1, \ldots , n\rbrace \) to denote the n-dimensional nonnegative orthant cone of \(\mathbb {R}^n\). Its interior, \(\text {int}\;\mathbb {R}^n_+:= \lbrace (x_1,\ldots ,x_n)\in \mathbb {R}^n: x_i > 0, i=1, \ldots , n\rbrace \), is denoted by \(\mathbb {R}_{++}^n\). Let \(\epsilon =(\epsilon _1,\ldots ,\epsilon _K)\in {{\mathbb {R}}}^K_+\). We say that \(\bar{x}\in \mathcal {F}_{\mathcal {P}}\) is an \(\epsilon \)-solution of RSIMSP if for any \(x\in \mathcal {F}_{\mathcal {P}}\) we have \(f_k(x)\ge f_k(\bar{x})-\epsilon _k\) for each \(k=1,\ldots ,K\). Our focus in this paper is to present approximate solutions (\(\epsilon \)-solutions) for RSIMSP.
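To make the \(\epsilon \)-solution notion concrete, the following sketch (a hypothetical two-objective toy instance of our own, not taken from the problem data above) checks the defining condition \(f_k(x)\ge f_k(\bar{x})-\epsilon _k\) over a discretized feasible set:

```python
import numpy as np

# Hypothetical toy instance: two convex objectives over the discretized
# feasible segment F = {(0, x2) : 1 <= x2 <= 6}.
objectives = [lambda x: x[1] ** 2, lambda x: (x[1] - 2.0) ** 2]
feasible = [np.array([0.0, 1.0 + 0.01 * j]) for j in range(501)]

def is_eps_solution(x_bar, eps):
    """x_bar is an eps-solution iff f_k(x) >= f_k(x_bar) - eps_k
    for every feasible x and every objective index k."""
    return all(
        min(fk(x) for x in feasible) >= fk(x_bar) - ek
        for fk, ek in zip(objectives, eps)
    )

x_bar = np.array([0.0, 1.0])                  # minimizes the first objective on F
exact = is_eps_solution(x_bar, (0.0, 0.0))    # fails: the second objective can still improve
relaxed = is_eps_solution(x_bar, (0.0, 1.0))  # holds once eps_2 absorbs the gap
```

The point of the sketch is that the tolerance vector \(\epsilon \) is applied componentwise: a point may fail to be exactly optimal for one objective yet qualify as an \(\epsilon \)-solution once that objective's tolerance covers the shortfall.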
The rest of the paper is structured as follows: Sect. 2 defines the robust characteristic cone and proves that it is closed and convex. The \(\epsilon \)-optimality theorem and the \(\epsilon \)-duality theorems for robust semi-infinite multi-objective symmetric programming are established in Sects. 3 and 4, respectively. In Sect. 5, we apply our results to an important special case, the robust semi-infinite multi-objective second-order cone program.
2 The robust characteristic cone
In this section, we define the robust characteristic cone for our setting and prove that it is closed and convex. First, we present some preliminaries.
Let \(\bar{\mathbb {R}}:=[-\infty ,+\infty ]\) and \(f: \mathbb {R}^n \rightarrow \bar{\mathbb {R}}\) be a function. We say f is proper if \(f(x) > -\infty \) for all \(x\in \mathbb {R}^n\) and there exists \(x_0 \in \mathbb {R}^n\) such that \(f(x_0)\in \mathbb {R}\). A proper function f is said to be convex if for all \(\mu \in [0,1]\), we have

$$f(\mu x+(1-\mu )y)\le \mu f(x)+(1-\mu )f(y)$$

for all \(x,y \in \mathbb {R}^n\). The domain of f is defined to be the set \(\text {dom}\, f:= \lbrace x \in \mathbb {R}^n : f(x) < +\infty \rbrace \). The epigraph of f is defined to be the set \(\text {epi}\, f:=\left\{ (x,r) \in \mathbb {R}^n\times \mathbb {R} : f(x)\le r \right\} \).
The subdifferential of f at \(x\in \mathbb {R}^n\) is defined as

$$\partial f(x):=\left\lbrace \xi \in \mathbb {R}^n : f(y)\ge f(x)+\langle \xi ,y-x\rangle \ \text {for all}\ y\in \mathbb {R}^n \right\rbrace .$$
In general, for any \(\epsilon \ge 0\), the \(\epsilon \)-subdifferential of f at \(x \in \mathbb {R}^n\) is defined by

$$\partial _\epsilon f(x):=\left\lbrace \xi \in \mathbb {R}^n : f(y)\ge f(x)+\langle \xi ,y-x\rangle -\epsilon \ \text {for all}\ y\in \mathbb {R}^n \right\rbrace .$$
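As a sanity check on this notion (our own illustration, not part of the development), recall that \(\xi \) is an \(\epsilon \)-subgradient of f at \(\bar{x}\) exactly when \(f(y)\ge f(\bar{x})+\langle \xi ,y-\bar{x}\rangle -\epsilon \) for all y. For \(f(x)=x^2\) at \(\bar{x}=0\), a direct computation shows this holds precisely when \(|\xi |\le 2\sqrt{\epsilon }\); the sketch below tests the inequality on a grid:

```python
import numpy as np

def in_eps_subdiff(xi, x_bar, eps, f, grid):
    # xi is an eps-subgradient of f at x_bar iff
    # f(y) >= f(x_bar) + xi*(y - x_bar) - eps for all y (here: all grid points).
    return all(f(y) >= f(x_bar) + xi * (y - x_bar) - eps for y in grid)

f = lambda x: x ** 2
grid = np.linspace(-10.0, 10.0, 4001)   # step 0.005

# For f(x) = x^2 at x_bar = 0: xi in the eps-subdifferential iff |xi| <= 2*sqrt(eps).
inside = in_eps_subdiff(1.9, 0.0, 1.0, f, grid)    # |1.9| <= 2*sqrt(1)
outside = in_eps_subdiff(2.5, 0.0, 1.0, f, grid)   # violated near y = 1.25
plain = in_eps_subdiff(0.5, 0.0, 0.0, f, grid)     # eps = 0: only xi = 0 works here
```

Note how the \(\epsilon \)-subdifferential at a minimizer inflates from the singleton \(\lbrace 0\rbrace \) to a whole interval as \(\epsilon \) grows, which is what makes it the right tool for approximate optimality conditions.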
A function f is said to be lower semi-continuous if \(\liminf _{y\rightarrow x} f(y)\ge f(x)\) for all \(x\in \mathbb {R}^n\). The conjugate function of any proper convex function g on \(\mathbb {R}^n\) is the function \(g^\star :\mathbb {R}^n\rightarrow \mathbb {R}\cup \lbrace +\infty \rbrace \) defined as

$$g^\star (x^\star ):=\sup \left\lbrace \langle x^\star ,x\rangle -g(x) : x\in \mathbb {R}^n \right\rbrace $$
for any \(x^\star \in \mathbb {R}^n\). The following proposition is due to Jeyakumar et al. [35].
Proposition 2.1
Let \(f,\;g : \mathbb {R}^n\rightarrow \mathbb {R}\cup \lbrace +\infty \rbrace \) be proper lower semicontinuous convex functions. If one of the functions f and g is continuous, then

$$\text {epi}\,(f+g)^\star =\text {epi}\, f^\star +\text {epi}\, g^\star .$$
For a given set \(A \subset \mathbb {R}^n \), we write \(\text {cl}\, A\) and \(\text {co}\, A\) to denote the closure of A and the convex hull generated by A, respectively. The indicator function \(\delta _A\) is defined as

$$\delta _A(x):={\left\{ \begin{array}{ll} 0, &{} x\in A,\\ +\infty , &{} x\notin A. \end{array}\right. }$$
In convex programming, a constrained minimization problem over a closed convex subset C of \(\mathbb {R}^n\) can be reformulated as an unconstrained minimization problem by replacing its objective function, say f, with the function \((f+\delta _C)(x)\). The following proposition is due to Hiriart-Urruty and Lemarechal [36].
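As a small numerical illustration of this reformulation (our own sketch with hypothetical data), take \(f(x)=(x-2)^2\) and \(C=[-1,1]\): minimizing \(f+\delta _C\) over the whole line recovers the constrained minimizer \(x=1\):

```python
import math

f = lambda x: (x - 2.0) ** 2
C = (-1.0, 1.0)  # a closed convex set (an interval here)

def delta_C(x):
    """Indicator function of C: 0 on C, +infinity outside."""
    return 0.0 if C[0] <= x <= C[1] else math.inf

# Unconstrained minimization of f + delta_C over a grid of the real line
# recovers the constrained minimizer x = 1 with value f(1) = 1.
grid = [round(-5.0 + 0.01 * i, 2) for i in range(1001)]
minimizer = min(grid, key=lambda x: f(x) + delta_C(x))
value = f(minimizer) + delta_C(minimizer)
```

The indicator penalizes any point outside C with value \(+\infty \), so the unconstrained minimum can never fall outside the constraint set.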
Proposition 2.2
Let \(f: \mathbb {R}^n\rightarrow \mathbb {R}\) be a convex function, C be a closed convex subset of \(\mathbb {R}^n\), and \(\epsilon \ge 0\). Then

$$\partial _\epsilon (f+\delta _C)(x)=\bigcup \left\lbrace \partial _{\epsilon _0}f(x)+\partial _{\epsilon _1}\delta _C(x) : \epsilon _0\ge 0,\ \epsilon _1\ge 0,\ \epsilon _0+\epsilon _1=\epsilon \right\rbrace .$$
We have also the following proposition [37, 38].
Proposition 2.3
Let I be an arbitrary index set, and let \(g_i: \mathbb {R}^n\rightarrow \mathbb {R}\cup \lbrace +\infty \rbrace \) be a proper lower semicontinuous convex function for each \(i \in I\). Assume that there exists \(x_0\in \mathbb {R}^n\) such that \(\sup _{i\in I} g_i(x_0) < +\infty \). Then

$$\text {epi}\left( \sup _{i\in I} g_i\right) ^\star =\text {cl}\,\text {co}\left( \bigcup _{i\in I}\text {epi}\, g_i^\star \right) .$$
Let \(\mathcal {D}\) be the robust characteristic cone defined as

$$\mathcal {D}:=\bigcup \left\lbrace \left( \sum _{t\in T} z^{(t)}\bullet a_1^{(t)},\ \ldots ,\ \sum _{t\in T} z^{(t)}\bullet a_m^{(t)},\ -\sum _{t\in T}\bigl (z^{(t)}\bullet a_0^{(t)}+r^{(t)}\bigr )\right) : a_i^{(t)}\in \mathcal {V}_i^{(t)},\ z^{(t)}\succeq 0,\ r\in {{\mathbb {R}}}^{(T)}_+ \right\rbrace .$$
The following lemma shows that \(\mathcal {D}\) is indeed a cone in \(\mathbb {R}^{m+1}\) under Assumption 1.1.
Lemma 2.1
The set \(\mathcal {D} \subset \mathbb {R}^{m+1}\) is a cone.
Proof
It is clear that \(0 \in \mathcal {D}\). To prove the desired result, we need to show that for each \(x \in \mathcal {D}\) and \(\lambda \in {{\mathbb {R}}}_{++}\), we have \(\lambda x \in \mathcal {D}\). Since \(x \in \mathcal {D}\), there exist \(a_i= (a_i^{(t)})_{t\in T}, i \in I\), \(z=(z^{(t)})_{t\in T}\), and \(r\in {{\mathbb {R}}}^{(T)}_+\), where \(a_i^{(t)} \in \mathcal {V}_i^{(t)}\) and \(z^{(t)} \succeq 0\), for all \(t\in T\) and \(i \in I\), such that \(x_i=\sum _{t \in T} (z^{(t)}\bullet a_i^{(t)})\) for \(i=1, \ldots , m\), and \(x_{m+1}=-\sum _{t \in T}(z^{(t)}\bullet a_0^{(t)}+r^{(t)})\). It follows that \(\lambda x_i= \sum _{t \in T} (\lambda z^{(t)}\bullet a_i^{(t)})\) and \(\lambda x_{m+1}=-\sum _{t \in T} (\lambda z^{(t)}\bullet a_0^{(t)} + \lambda r^{(t)})\). Note that \(\lambda r^{(t)} \ge 0\), and that \(\lambda z^{(t)}\succeq 0\) because \(\mathcal {K}_{\mathcal {J}}\) is a cone. Thus, \(\lambda x \in \mathcal {D}\). \(\square \)
The following proposition is due to [21, Lemma 4.2].
Proposition 2.4
Let \(\epsilon \in {{\mathbb {R}}}_+\). Then \(\bar{x}\) is an \(\epsilon \)-solution of RSIMSP if
for any \(x\in \mathcal {F}_{\mathcal {P}}\cap \lbrace x\in \mathbb {R}^m: f_k(x)\ge f_k(\bar{x}), k=1, \ldots , K\rbrace \).
We say that RSIMSP satisfies the convexity condition if for every \(t \in T\) we have
where \(U_i^{(t)}\) is a compact convex subset of \(\mathbb {R}^l\), \(a_{0}^{(t)} \in \mathcal {J}\) and \(a_{j}^{(t)}\succeq 0\), for each \(t\in T, i\in I\), and \(j=1,2,\ldots ,l\). We point out that if the above convexity condition holds, then the uncertainty sets \(\mathcal {V}_i^{(t)}\) include the box uncertainty sets of linear programming and the spectrahedral uncertainty sets of semidefinite programming as special cases.
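The usefulness of such affine-in-the-parameter uncertainty can be sketched numerically (a hypothetical instance of our own, not the paper's data): when the parameter set is a box, a robust constraint value that is affine in the parameter u attains its worst case at a vertex, so checking infinitely many scenarios reduces to checking finitely many:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical affine uncertainty map a(u) = a0 + sum_j u_j * a_j,
# with u ranging over the box [-1, 1]^3, paired with a fixed direction x.
a0 = rng.normal(size=4)
a_j = [rng.normal(size=4) for _ in range(3)]
x = rng.normal(size=4)

def value(u):
    """Constraint value <a(u), x>, affine in the parameter u."""
    a_u = a0 + sum(uj * aj for uj, aj in zip(u, a_j))
    return float(np.dot(a_u, x))

# Worst case over the whole box equals the worst case over its 2^3 vertices,
# so random interior samples can never undercut the vertex minimum.
vertex_min = min(value(v) for v in itertools.product([-1.0, 1.0], repeat=3))
sample_min = min(value(rng.uniform(-1.0, 1.0, size=3)) for _ in range(2000))
```

This vertex reduction is what makes box uncertainty computationally benign compared with general convex parameter sets.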
The closedness and convexity of the robust characteristic cone \(\mathcal {D}\) are necessary for the robust characteristic cone constraint qualification to hold. The following lemma proves the convexity of \(\mathcal {D}\) under the convexity condition of RSIMSP (see also [39, Proposition 1]).
Lemma 2.2
If RSIMSP satisfies the convexity condition, the robust characteristic cone \(\mathcal {D}\) is convex.
Proof
Let \(x,y\in \mathcal {D}\) and let \(\lambda \in [0,1]\). We want to show that \(\lambda x+(1-\lambda )y\in \mathcal {D}\). From the definition of \(\mathcal {D}\), there exist \(h_i= (h_i^{(t)})_{t\in T}, c_i= (c_i^{(t)})_{t\in T}, i \in I\), \(b=(b^{(t)})_{t\in T},n=(n^{(t)})_{t\in T}\), and \(r, s \in {{\mathbb {R}}}^{(T)}_+\), where \(h_i^{(t)},c_i^{(t)}\in \mathcal {V}_i^{(t)}\), and \(b^{(t)},n^{(t)}\succeq 0\), for \(i\in I, t\in T\), such that
Since \(h_i^{(t)},c_i^{(t)}\in \mathcal {V}_i^{(t)}\), there exist \((u_{i_1}^{(t)},u_{i_2}^{(t)},\ldots ,u_{i_l}^{(t)})\in U_i^{(t)}\) and \((v_{i_1}^{(t)},v_{i_2}^{(t)},\ldots ,v_{i_l}^{(t)}) \in U_i^{(t)}\) such that, for each \(i\in I\), we have
Fixing an \(i\in I-\lbrace 0\rbrace \), we have that
For \(i\in I, t \in T\) and \(j=1,2,\ldots ,l\), we define \(w_{i_j}^{(t)}\) as
By the convexity of \(U_i^{(t)}\), it is clear that \((w_{i_1}^{(t)},w_{i_2}^{(t)}, \ldots ,w_{i_l}^{(t)}) \in U_i^{(t)}\) for \(i\in I\). In addition, for each \(i\in I-\lbrace 0\rbrace ,\;t\in T\) and \(j=1,2,\ldots ,l\), we have that
Note that if \((\lambda b^{(t)}+(1-\lambda )n^{(t)})\bullet a_{j}^{(t)} \ne 0\), the equality in (1) follows trivially. If \((\lambda b^{(t)}+(1-\lambda )n^{(t)})\bullet a_{j}^{(t)} =0\), then \(\lambda b^{(t)}\bullet a_{j}^{(t)} =(1-\lambda )n^{(t)} \bullet a_{j}^{(t)}=0\) because \(b^{(t)},n^{(t)},a_{j}^{(t)}\succeq 0\), for all \(i\in I-\lbrace 0\rbrace , t \in T\) and \(j=1,2,\ldots ,l\), and hence the equality in (1) follows in this case as well. It follows immediately that
for \(i \in I - \{0\}\). Similarly, it is seen that
Note that \(\lambda r^{(t)} +(1-\lambda )s^{(t)} \ge 0\), \(\lambda b^{(t)}+(1-\lambda )n^{(t)}\succeq 0\), and \((w_{i_1}^{(t)},w_{i_2}^{(t)}, \ldots ,w_{i_l}^{(t)}) \in U_i^{(t)}\), for each \(i\in I\) and \(t \in T\). This implies that \(\lambda x+(1-\lambda )y\in \mathcal {D}\), and therefore \(\mathcal {D}\) is convex. The proof is complete. \(\square \)
The following lemma proves the closedness of \(\mathcal {D}\) under Assumptions 1.1 and 1.2 (see also [8, Corollary 2.1]).
Lemma 2.3
If the Slater condition holds, the robust characteristic cone \(\mathcal {D}\) is closed.
Proof
Let \(\lbrace x(k)\rbrace _{k=1}^\infty := \lbrace (x_1(k),x_2(k),\ldots ,x_{m+1}(k)) \rbrace _{k=1}^\infty \) be a sequence in \(\mathcal {D}\) that is convergent to the point \(x:=(x_1,x_2,\ldots ,x_{m+1}) \in {{\mathbb {R}}}^{m+1}\). To show that \(\mathcal {D}\) is closed, we want to show that \(x \in \mathcal {D}\). From the definition of \(\mathcal {D}\), there exist \(a_i(k)= (a_i^{(t)}(k))_{t\in T}, i \in I\), \(z(k)=(z^{(t)}(k))_{t\in T}\), and \(r(k)\in {{\mathbb {R}}}^{(T)}_+\), where \(a_i^{(t)}(k) \in \mathcal {V}_i^{(t)}\) and \(z^{(t)}(k) \succeq 0\), for all \(t\in T\) and \(i \in I\), such that

$$x_i(k)=\sum _{t\in T}\bigl (z^{(t)}(k)\bullet a_i^{(t)}(k)\bigr ),\quad i=1,\ldots ,m, \qquad (2)$$

and

$$x_{m+1}(k)=-\sum _{t\in T}\bigl (z^{(t)}(k)\bullet a_0^{(t)}(k)+r^{(t)}(k)\bigr ). \qquad (3)$$
Based on Assumption 1.1, the sequence \(\lbrace a_{i}^{(t)}(k) \rbrace _{k=1}^\infty \) has a convergent subsequence. Therefore, after passing to a convergent subsequence, if necessary, we may assume that \(a_{i}^{(t)}(k)\rightarrow a_i^{(t)}\in \mathcal {V}_i^{(t)}\). Now, we show that \(\lbrace \Vert z^{(t)}(k)\Vert \rbrace \) is a bounded sequence by contradiction. Suppose on the contrary that \(\Vert z^{(t)}(k)\Vert \rightarrow +\infty \). Then, \(z^{(t)}(k)/\Vert z^{(t)}(k)\Vert \rightarrow z^{(t)}\in \mathcal {K}_{\mathcal {J}}-\lbrace 0\rbrace \). Dividing both sides of (2) and (3) by \(\Vert z^{(t)}(k)\Vert \) and applying the limit, we obtain
Based on Assumption 1.2, there exists \(x^{(0)}\in {{\mathbb {R}}}^m\) such that \(a_0^{(t)} + \sum _{i=1}^m x^{(0)}_ia_i^{(t)} \succ 0\), for all \(a_i^{(t)}\in \mathcal {V}_i^{(t)}\), and
On the other hand, since \(z^{(t)}\in \mathcal {K}_{\mathcal {J}}- \lbrace 0\rbrace \), we have \(z^{(t)}\bullet (a_0^{(t)} +\sum _{i=1}^m x^{(0)}_ia_i^{(t)})>0\), which is a contradiction. Hence, the sequence \(\lbrace \Vert z^{(t)}(k)\Vert \rbrace _{k=1}^\infty \) is bounded. From (3), the sequence \(\lbrace r^{(t)}(k)\rbrace _{k=1}^\infty \) is also bounded. Therefore, after passing to a subsequence, if necessary, we can assume that \(z^{(t)}(k) \rightarrow \bar{z}^{(t)}\) and \(r^{(t)}(k)\rightarrow \bar{r}^{(t)}\). By applying the limit in (2) and (3), we have
Thus, \(x\in \mathcal {D}\). This completes the proof. \(\square \)
3 \(\epsilon \)-Optimality theorem
In this section, we establish the \(\epsilon \)-optimality theorem for our problem. First, we prove two intermediate lemmas. The next lemma is the robust version of Farkas’ lemma for our setting, and is based on Assumptions 1.2 and 1.1.
Lemma 3.1
Let \((c_k,\alpha _k) \in \mathbb {R}^m \times \mathbb {R}\) for \(k=1,\ldots ,K\), then
Proof
Assume that \(\mathcal {F}_{\mathcal {P}}\subset \lbrace x\in \mathbb {R}^m :\langle c_k,x\rangle \ge \alpha _k,\;k=1,\dots ,K\rbrace \), and let \(\phi _k(x)= \langle c_k,x\rangle - \alpha _k\) for \(k=1,2,\ldots ,K\). Then \(\mathcal {F}_{\mathcal {P}}\subset \lbrace x\in \mathbb {R}^m :\phi _k(x)\ge 0,\;k=1,\dots ,K\rbrace .\) It follows that \(\sum _{k=1}^K\phi _k(x) +\delta _{\mathcal {F}_{\mathcal {P}}}(x)\ge 0\) for all \(x\in \mathbb {R}^{m}\). Note that \(\phi _k\) is continuous for \(k=1,2,\ldots ,K\). Therefore, using Proposition 2.1, we have
Thus,
The desired result is obtained by showing that \(\sum _{k=1}^K(c_k,\alpha _k)\in \text {cl co}\; \mathcal {D}\). To prove this, in light of (4), it is enough to show that \(\text {epi}\;\delta _{\mathcal {F}_{\mathcal {P}}}^{\star } =-\text {cl co}\; \mathcal {D}\).
Note that, for each \(z^{(t)}\succeq 0, a_i^{(t)}\in \mathcal {V}_i^{(t)}\), \(i\in I,\;t\in T\) and \(\xi \in {{\mathbb {R}}}^m\), we have
Note also that, for any \(x\in {{\mathbb {R}}}^m\), we have
It follows that
where the second equality follows from Propositions 2.1 and 2.3 and the third equality follows from (5). The proof is complete. \(\square \)
The following lemma is also based on Assumption 1.1.
Lemma 3.2
Let \(\bar{x}\in \mathcal {F}_{\mathcal {P}}\) and \(\epsilon \ge 0\). Let also \(f_k:\mathbb {R}^m\rightarrow \mathbb {R}\) be a convex function for \(k=1, 2, \ldots , K\). Then \(\bar{x}\) is an \(\epsilon \)-solution of RSIMSP if and only if for each \(k=1,2, \ldots , K\) and \(t \in T\), there exist \(\epsilon _k, \epsilon _t\ge 0\) and \(\xi _k\in \partial _{\epsilon _k}f_k(\bar{x})\) such that \(\sum _{k=1}^K\epsilon _k +\sum _{t\in T}\epsilon _t=\epsilon \) and
Proof
Assume that \(\bar{x}\) is an \(\epsilon \)-solution of RSIMSP, where \(\epsilon \ge 0\). Then, for any \(x\in \mathcal {F}_{\mathcal {P}},\;\sum _{k=1}^{K} f_k(x)\ge \sum _{k=1}^{K}f_k(\bar{x})-\epsilon \). It follows that, for any \(x\in \mathbb {R}^m\), we have
Then, according to the definition of \(\epsilon \)-subdifferentiability, we deduce that \(0\in \partial _{\epsilon } (\sum _{k=1}^{K}f_k +\delta _{\mathcal {F}_{\mathcal {P}}})(\bar{x})\), which in view of Proposition 2.2 is equivalent to
Therefore, for each \(k=1, \ldots , K\) and \(t \in T\), there exist \(\epsilon _k, \epsilon _t \ge 0\), \(\xi _k\in \partial _{\epsilon _k}f_k(\bar{x})\), and \(-\xi _t\in \partial _{\epsilon _t} \delta _{\mathcal {F}_{\mathcal {P}}}(\bar{x})\) such that
Equivalently, for each \(k=1, \ldots , K\) and \(t \in T\), there exist \(\epsilon _k, \epsilon _t\ge 0\) and \(\xi _k\in \partial _{\epsilon _k}f_k(\bar{x})\) such that \(\langle \xi _k,x\rangle \ge \langle \xi _k,\bar{x}\rangle -\epsilon _t\) for any \(x \in \mathcal {F}_{\mathcal {P}}\) and \(t \in T\), and hence
for any \(x\in \mathcal {F}_{\mathcal {P}}\). By Lemma 3.1, we conclude that for each \(k=1, \ldots , K\) and \(t \in T\), there exist \(\epsilon _k,\epsilon _t\ge 0 \) and \(\xi _k\in \partial _{\epsilon _k}f_k(\bar{x})\) such that
The proof is complete. \(\square \)
In light of Lemmas 3.1 and 3.2, we can obtain the \(\epsilon \)-optimality theorem under the robust characteristic cone constraint qualification.
Theorem 3.1
(Approximate optimality theorem) Consider the RSIMSP problem, and let \(\bar{x}\in \mathcal {F}_{\mathcal {P}}\). Then \(\bar{x}\) is an \(\epsilon \)-solution of RSIMSP if and only if there exist \((\epsilon _t)_{t\in T} \in {{\mathbb {R}}}^{(T)}_+, \epsilon _k \in {{\mathbb {R}}}_+, \xi _k\in \partial _{\epsilon _k}f_k(\bar{x})\), \(k=1, 2, \ldots , K\), \(\bar{a}_i= (\bar{a}_i^{(t)})_{t\in T}, i \in I\), and \(\bar{z}=(\bar{z}^{(t)})_{t\in T}\), where \(\bar{a}_i^{(t)} \in \mathcal {V}_i^{(t)}\) and \(\bar{z}^{(t)} \succeq 0\), for all \(t\in T\) and \(i \in I\), such that \(\sum _{k=1}^K\epsilon _k+\sum _{t\in T}\epsilon _t=\epsilon \), and
Proof
Assume that \(\bar{x}\) is an \(\epsilon \)-solution of RSIMSP, where \(\epsilon \ge 0\). By Lemma 3.2, for each \(t \in T\) and \(k=1,2, \ldots , K\), there exist \(\epsilon _k,\epsilon _t\ge 0\) and \(\xi _k\in \partial _{\epsilon _k} f_k(\bar{x})\) such that \(\sum _{k=1}^K\epsilon _k+\sum _{t\in T}\epsilon _t= \epsilon \) and
It follows that
for some \((\bar{r}^{(t)})_{t\in T} \in {{\mathbb {R}}}^{(T)}_+\), \(\bar{a}_i= (\bar{a}_i^{(t)})_{t\in T}, i \in I\), and \(\bar{z}=(\bar{z}^{(t)})_{t\in T}\), with \(\bar{a}_i^{(t)} \in \mathcal {V}_i^{(t)}\) and \(\bar{z}^{(t)} \succeq 0\), for all \(t\in T\) and \(i \in I\).
By combining the last two relations, we get
This proves the first direction.
For the second, assume that there exist \((\epsilon _t)_{t\in T} \in {{\mathbb {R}}}^{(T)}_+, \epsilon _k \in {{\mathbb {R}}}_+, \xi _k\in \partial _{\epsilon _k}f_k(\bar{x})\), \(k=1, 2, \ldots , K\), \(\bar{a}_i= (\bar{a}_i^{(t)})_{t\in T}, i \in I\), and \(\bar{z}=(\bar{z}^{(t)})_{t\in T}\), where \(\bar{a}_i^{(t)} \in \mathcal {V}_i^{(t)}\) and \(\bar{z}^{(t)} \succeq 0\), for all \(t\in T\) and \(i \in I\), such that the equality \(\sum _{k=1}^K\epsilon _k+\sum _{t\in T}\epsilon _t=\epsilon \) holds, and that (6) and (7) hold. By the definition of the \(\epsilon \)-subdifferentiality of f, for any \(x\in \mathbb {R}^m\), we have that
where the first equality is obtained from (6), the second and third inequalities follow from (7). Therefore, \(\sum _{k=1}^{K}f_k(x) \ge \sum _{k=1}^{K}f_k(\bar{x}) -\epsilon \), for any \(x\in \mathcal {F}_{\mathcal {P}}\). Thus, from Proposition 2.4, \(\bar{x}\) is an \(\epsilon \)-solution of RSIMSP. This proves the second direction. The result is established. \(\square \)
In multi-objective optimization problems, a solution is called Pareto optimal if none of the objective values can be improved without degrading some of the other objective values. The following remark is an immediate corollary of Theorem 3.1.
Remark 3.1
If the vectors \((\epsilon _t)_{t\in T} \in {{\mathbb {R}}}^{(T)}_+\) and \((\epsilon _k)_{1\le k \le K} \in {{\mathbb {R}}}^{K}_+\) in Theorem 3.1 tend to the null vector, we obtain (exact) optimality conditions for Pareto solutions.
4 \(\epsilon \)-Duality theorems
The (Wolfe-type) dual problem associated with the RSIMSP problem is the problem
Note that RSIMSD has the feasibility set
Let \(\epsilon \ge 0 \). The point \((\bar{x},\bar{a}_0, \dots ,\bar{a}_m, \bar{z})\) is called an \(\epsilon \)-solution of RSIMSD if for any \((y,a_0,\dots ,a_m, z) \in \mathcal {F}_{\mathcal {D}}\) we have
Now, we are ready to establish the \(\epsilon \)-weak duality theorem, which holds between the RSIMSP problem and its dual, the RSIMSD problem.
Theorem 4.1
(Approximate weak duality theorem) For any feasible solution x of RSIMSP and any feasible solution \((y,a_0,a_1, \ldots ,a_m,z)\) of RSIMSD, we have
Proof
Let x and \((y,a_0,a_1,\ldots ,a_m,z)\) be feasible solutions of RSIMSP and RSIMSD, respectively. Then \(a_i= (a_i^{(t)})_{t\in T}\) and \(z=(z^{(t)})_{t\in T}\), where \(a_i^{(t)} \in \mathcal {V}_i^{(t)}\) and \(z^{(t)} \succeq 0\), for all \(t\in T\) and \(i \in I\), and the inequality \(\sum _{t\in T} (z^{(t)}\bullet (a_0^{(t)} + \sum _{i=1}^m x_i a_i^{(t)})) \ge 0\) holds. It follows that, for each \(k=1,2, \ldots , K\), there exist \(\epsilon _k \ge 0, \xi _k\in \partial _{\epsilon _k}f_k(x)\) such that \(\sum _{k=1}^K \xi _k = \sum _{t \in T} \left( z^{(t)}\bullet a_1^{(t)}, z^{(t)}\bullet a_2^{(t)}, \ldots ,z^{(t)}\bullet a_m^{(t)}\right) \) and \(\sum _{k=1}^K\epsilon _k \le \epsilon \). Then, by the definition of the \(\epsilon \)-subdifferentiability, we have that
The proof is complete. \(\square \)
Now, we state and prove the \(\epsilon \)-strong duality theorem, which holds between RSIMSP and RSIMSD under the robust characteristic cone constraint qualification.
Theorem 4.2
(Approximate strong duality theorem) Assume that the robust characteristic cone \(\mathcal {D}\) is closed and convex. If \(\bar{x}\) is an \(\epsilon \)-solution of RSIMSP, then there exist \(\bar{a}_i= (\bar{a}_i^{(t)})_{t\in T}, i \in I\), and \(\bar{z}=(\bar{z}^{(t)})_{t\in T}\), where \(\bar{a}_i^{(t)} \in \mathcal {V}_i^{(t)}\) and \(\bar{z}^{(t)} \succeq 0\), for all \(t\in T\) and \(i \in I\), such that \((\bar{x},\bar{a}_0,\bar{a}_1,\ldots ,\bar{a}_m,\bar{z})\) is a \(2\epsilon \)-solution of RSIMSD.
Proof
Let \(\bar{x}\) be an \(\epsilon \)-solution of RSIMSP, then by using Theorem 3.1, there exist \((\epsilon _t)_{t\in T} \in {{\mathbb {R}}}^{(T)}_+, \epsilon _k \in {{\mathbb {R}}}_+, \xi _k\in \partial _{\epsilon _k}f_k(\bar{x})\), \(k=1, 2, \ldots , K\), \(\bar{a}_i= (\bar{a}_i^{(t)})_{t\in T}, i \in I\), and \(\bar{z}=(\bar{z}^{(t)})_{t\in T}\), where \(\bar{a}_i^{(t)} \in \mathcal {V}_i^{(t)}\) and \(\bar{z}^{(t)} \succeq 0\), for all \(t\in T\) and \(i \in I\), such that
Therefore, the point (\(\bar{x},\bar{a}_0,\bar{a}_1,\ldots ,\bar{a}_m,\bar{z}\)) is a feasible solution for RSIMSD. Then, using Theorem 4.1, for any feasible solution \((y,a_0,a_1,\ldots ,a_m,z)\) of RSIMSD, we have that
This means that \((\bar{x},\bar{a}_0,\bar{a}_1, \ldots ,\bar{a}_m,\bar{z})\) is a \(2\epsilon \)-solution of RSIMSD. \(\square \)
Now, we give the \(\epsilon \)-strong duality between RSIMSP and RSIMSD under the Slater condition and the weakened robust characteristic cone constraint qualification.
Corollary 4.1
Assume that the robust Slater condition holds and that the robust characteristic cone \(\mathcal {D}\) is convex. If \(\bar{x}\) is an \(\epsilon \)-solution of RSIMSP, then there exist \(\bar{a}_i= (\bar{a}_i^{(t)})_{t\in T}, i \in I\), and \(\bar{z}=(\bar{z}^{(t)})_{t\in T}\), where \(\bar{a}_i^{(t)} \in \mathcal {V}_i^{(t)}\) and \(\bar{z}^{(t)} \succeq 0\), for all \(t\in T\) and \(i \in I\), such that \((\bar{x},\bar{a}_0,\bar{a}_1,\ldots ,\bar{a}_m,\bar{z})\) is a \(2\epsilon \)-solution of RSIMSD.
Proof
By Lemma 2.3, the robust characteristic cone \(\mathcal {D}\) is closed. The result immediately follows from Theorem 4.2. \(\square \)
In the remaining part of this paper, we shall demonstrate in an example that the approximate weak and strong duality of a second-order cone program hold true even though the Slater condition fails.
5 An illustrative example
Throughout this section, we use “,” for adjoining vectors and matrices in a row, and use “;” for adjoining them in a column. So, for example, if a and b are vectors, then \((a^{\textsf{T}}, b^{\textsf{T}})^{\textsf{T}} = (a; b)\).
Let \(\mathcal {E}^n\) be the n-dimensional real vector space \({{\mathbb {R}}}\times {{\mathbb {R}}}^{n-1}\) whose elements are indexed from 0. For each vector \(x \in \mathcal {E}^n\), we write \(\bar{x}\) for the sub-vector consisting of entries 1 through \(n-1\); therefore \(x = (x_0; \bar{x})\).
The n-dimensional second-order cone (also known as the quadratic or Lorentz cone) is defined as

$$\mathcal {E}^n_+:=\left\lbrace x=(x_0;\bar{x})\in \mathcal {E}^n : x_0\ge \left\Vert \bar{x} \right\Vert \right\rbrace ,$$
where \(\left\Vert \,\cdot \, \right\Vert \) denotes the Euclidean norm.
The cone \(\mathcal {E}^n_+\) is closed, pointed (i.e., it does not contain a pair of opposite nonzero vectors), and convex with nonempty interior in \({{\mathbb {R}}}^{n}\). It is also known that \(\mathcal {E}^n_+\) is self-dual (i.e., it equals its dual cone) and homogeneous (i.e., its automorphism group acts transitively on its interior). Therefore, the cone \(\mathcal {E}^n_+\) is symmetric [22, 27]. The accompanying figure shows the three-dimensional second-order cone \(\mathcal {E}^3_+\).
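Both the membership test \(x_0\ge \Vert \bar{x}\Vert \) and the self-duality property lend themselves to a quick numerical sanity check (our own illustration): members of a self-dual cone can never form an obtuse pair, since the cone is contained in its dual.

```python
import numpy as np

def in_soc(x, tol=1e-12):
    """Second-order cone membership: x0 >= ||(x1, ..., x_{n-1})||."""
    return x[0] >= np.linalg.norm(x[1:]) - tol

on_boundary = in_soc(np.array([5.0, 3.0, 4.0]))   # 5 = ||(3, 4)||, boundary point
outside = in_soc(np.array([1.0, 3.0, 4.0]))       # 1 < 5, not in the cone

rng = np.random.default_rng(1)
def sample_soc(n):
    """Draw a random cone member by lifting x0 above ||x_bar||."""
    v = rng.normal(size=n)
    v[0] = np.linalg.norm(v[1:]) + abs(rng.normal())
    return v

# Self-duality sanity check: sampled members always have nonnegative inner product,
# consistent with the cone being contained in its dual.
pairs_ok = all(np.dot(sample_soc(4), sample_soc(4)) >= 0.0 for _ in range(500))
```

The inner-product check follows from Cauchy–Schwarz: for cone members x and y, \(x_0y_0\ge \Vert \bar{x}\Vert \Vert \bar{y}\Vert \ge -\bar{x}^{\textsf{T}}\bar{y}\).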
Table 2 lists some notions from the Euclidean Jordan algebra associated with the second-order cone.
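As a concrete instance of the cone-of-squares characterization mentioned in the introduction, the following sketch (our own illustration, using the standard Jordan product \(x\circ y=(x^{\textsf{T}}y;\,x_0\bar{y}+y_0\bar{x})\) associated with the second-order cone) verifies numerically that every square \(x\circ x\) lands in \(\mathcal {E}^n_+\):

```python
import numpy as np

def jordan_product(x, y):
    """Jordan product of the second-order cone algebra:
    x o y = (x^T y ; x0 * y_bar + y0 * x_bar)."""
    return np.concatenate(([np.dot(x, y)], x[0] * y[1:] + y[0] * x[1:]))

def in_soc(x, tol=1e-9):
    return x[0] >= np.linalg.norm(x[1:]) - tol

rng = np.random.default_rng(2)
# x o x = (||x||^2 ; 2*x0*x_bar) lies in the cone because
# ||2*x0*x_bar|| = 2|x0|*||x_bar|| <= x0^2 + ||x_bar||^2 = ||x||^2.
squares_in_cone = all(
    in_soc(jordan_product(x, x)) for x in (rng.normal(size=5) for _ in range(500))
)
```

The comment spells out the one-line algebraic reason; the random check is only a sanity test of that identity.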
In this section, we consider the robust semi-infinite multi-objective convex second-order cone programming problem:
where \(\mathcal {V}_0^{(t)}, \mathcal {V}_1^{(t)}\) and \(\mathcal {V}_2^{(t)}\), \(t\in [0,1]\), are the uncertainty subsets:
Now, for any \(t\in [0,1]\), we have
One can see that \(\mathcal {F}_{\mathcal {P}}= \lbrace (x_1;x_2): x_1=0,\;x_2\ge 1\rbrace \) is the set of all robust feasible solutions of RSIMSOCP. Let \(\epsilon \ge 0\), then \(S_{\mathcal {F}_{\mathcal {P}}}=\lbrace (0;x_2): 1\le x_2\le \sqrt{1+\epsilon }\rbrace \) is the set of all \(\epsilon \)-solutions of RSIMSOCP.
The robust characteristic cone is
Note that \(z^{(t)}\in \mathcal {E}^3_+\) means that \(z_0^{(t)} \ge \left\Vert (z_1^{(t)};z_2^{(t)}) \right\Vert = ((z_1^{(t)})^2+(z_2^{(t)})^2)^{1/2}\). It follows that
which is the set \(\mathbb {R}\times \mathbb {R}_+\times \mathbb {R}\), hence \(\mathcal {D}\) is closed and convex.
It is clear that \(a_0^{(t)}+x_1a_1^{(t)}+x_2a_2^{(t)}\) is on the boundary of the second-order cone for any \((x_1;x_2) \in \mathcal {F}_{\mathcal {P}}\), hence the robust Slater condition fails.
We now formulate the Wolfe dual problem, RSIMSOCD, of RSIMSOCP as follows
Let \(\mathcal {U}= [-t,0]\times [-t,t]\times \lbrace t\rbrace \). Then the feasible set \(\mathcal {F}_{\mathcal {D}}\) is
Then, for any \((x_1,x_2)\in \mathcal {F}_{\mathcal {P}}\) and any \((y_1,y_2,a_0^{(t)},a_1^{(t)},a_2^{(t)},z^{(t)}) \in \mathcal {F}_{\mathcal {D}}\), we have, with \(x_1=0\) and \(x_2\ge y_2\), that
Also, for any \((x_1,x_2)\in \mathcal {F}_{\mathcal {P}}\) and any \((y_1,y_2,a_0^{(t)},a_1^{(t)},a_2^{(t)},z^{(t)}) \in \mathcal {F}_{\mathcal {D}}\), we have, with \(x_1=0\) and \(x_2< y_2\), that
This implies that for any \((x_1,x_2)\in \mathcal {F}_{\mathcal {P}}\) and any \((y_1,y_2,a_0^{(t)},a_1^{(t)},a_2^{(t)},z^{(t)}) \in \mathcal {F}_{\mathcal {D}}\), we have
Therefore, the approximate weak duality theorem (Theorem 4.1) holds.
For the strong duality, let \((\bar{x}_1,\bar{x}_2)=(0,\sqrt{1+\epsilon })\in S_{\mathcal {F}_{\mathcal {P}}}\) and let \(\epsilon _1,\epsilon _2\ge 0\) be such that \(\epsilon _1+\epsilon _2 =(\sqrt{1+\epsilon }-1)^2\). Let also
One can see that \((\bar{x}_1,\bar{x}_2,\bar{a}_0^{(t)},\bar{a}_1^{(t)},\bar{a}_2^{(t)},\bar{z}^{(t)}) \in \mathcal {F}_{\mathcal {D}}\) for \(t \in T\). Furthermore, for any \((y_1,y_2,a_0^{(t)},a_1^{(t)},a_2^{(t)},z^{(t)}) \in \mathcal {F}_{\mathcal {D}}\), we have that
where we used (8) to obtain the first inequality. Thus, the approximate strong duality theorem (Theorem 4.2) also holds.
References
Ben-Tal, A., Ghaoui, L.E., Nemirovski, A.: Robustness, in Handbook on Semidefinite Programming. Kluwer, New York (2000)
Ben-Tal, A., Ghaoui, L.E., Nemirovski, A.: Robust Optimization. Princeton University Press, Princeton (2009)
Bertsimas, D., Pachamanova, D., Sim, M.: Robust linear optimization under general norms. Oper. Res. Lett. 32, 510–516 (2004)
Goldfarb, D., Iyengar, G.: Robust convex quadratically constrained programs. Math. Program. 97, 495–515 (2003)
Govil, M.G., Mehra, A.: \(\epsilon \)-Optimality for multi-objective programming on a Banach space. Eur. J. Oper. Res. 157, 106–112 (2004)
Gutiérrez, C., Jiménez, B., Novo, V.: Multiplier rules and saddle-point theorems for Helbig’s approximate solutions in convex Pareto problems. J. Glob. Optim. 32, 367–383 (2005)
Hamel, A.: An \(\epsilon \)-Lagrange multiplier rule for a mathematical programming problem on Banach spaces. Optimization 49, 137–149 (2001)
Jeyakumar, V., Li, G.Y.: Strong duality in robust semi-definite linear programming under data uncertainty. Optimization 63, 713–733 (2014)
Lee, J.H., Jiao, L.G.: On quasi \(\epsilon \)-solution for robust convex optimization problems. Optim. Lett. 11, 1609–1622 (2017)
Lee, J.H., Lee, G.M.: \(\epsilon \)-Duality theorems for convex semidefinite optimization problems with conic constraints. J. Inequal. Appl. 2010, 363012 (2010)
Lee, J.H., Lee, G.M.: On \(\epsilon \)-solutions for convex optimization problems with uncertainty data. Positivity 16, 509–526 (2012)
Lee, J.H., Lee, G.M.: On \(\epsilon \)-solutions for robust fractional optimization problems. J. Inequal. Appl. 2014, 501 (2014)
Lee, J.H., Lee, G.M.: On optimality conditions and duality theorems for robust semi-infinite multi-objective optimization problems. Ann. Oper. Res. 269, 419–438 (2018)
Lee, J.H., Lee, G.M.: On approximate solutions for robust convex semidefinite optimization problems. Positivity 22, 419–438 (2018)
Lee, J.H., Lee, G.M.: On \(\epsilon \)-solutions for robust semi-infinite optimization problems. Positivity 23, 651–669 (2019)
Liu, J.C.: \(\epsilon \)-Duality theorem of nondifferentiable nonconvex multi-objective programming. J. Optim. Theory Appl. 69, 153–167 (1991)
Liu, J.C.: \(\epsilon \)-Pareto optimality for nondifferentiable multi-objective programming via penalty function. J. Math. Anal. Appl. 198, 248–261 (1996)
Arutyunov, A., Polyak, B.T., Mordukhovich, B.S.: Variational analysis and generalized differentiation I. Basic theory, II. Applications. Autom. Remote Control 70, 1086–1087 (2009)
Mordukhovich, B.S.: Variational Analysis and Generalized Differentiation, I: Basic Theory, II. Applications. Springer, Berlin (2006)
Strodiot, J.J., Nguyen, V.H., Heukemes, N.: \(\epsilon \)-Optimal solutions in nondifferentiable convex programming and some related questions. Math. Program. 25, 307–328 (1983)
Yokoyama, K.: Epsilon approximate solutions for multi-objective programming problems. J. Math. Anal. Appl. 203, 142–149 (1996)
Schmieta, S., Alizadeh, F.: Extension of primal-dual interior point algorithms to symmetric cones. Math. Program. Ser. A 96, 409–438 (2003)
Schmieta, S.H., Alizadeh, F.: Associative and Jordan algebras, and polynomial time interior point algorithms for symmetric cones. Math. Oper. Res. 26(3), 543–564 (2001)
Alzalg, B.: A primal-dual interior-point method based on various selections of displacement step for symmetric optimization. Comput. Optim. Appl. 72, 363–390 (2019)
Alzalg, B., Ariyawansa, K.A.: Logarithmic barrier decomposition-based interior point methods for stochastic symmetric programming. J. Math. Anal. Appl. 409, 973–995 (2014)
Alzalg, B.: Combinatorial and Algorithmic Mathematics: From Foundation to Optimization, 1st edn. Kindle Direct Publishing, Seattle, WA (2022)
Alizadeh, F., Goldfarb, D.: Second-order cone programming. Math. Program. Ser. B 95, 3–51 (2003)
Alzalg, B.: Stochastic second-order cone programming: application models. Appl. Math. Model. 36, 5122–5134 (2012)
Alzalg, B., Badarneh, K., Ababneh, A.: Infeasible interior-point algorithm for stochastic second-order cone optimization. J. Optim. Theory Appl. 181, 324–346 (2019)
Alzalg, B.: A logarithmic barrier interior-point method based on majorant functions for second-order cone programming. Optim. Lett. 14, 729–746 (2020)
Todd, M.J.: Semidefinite optimization. Acta Numer. 10, 515–560 (2001)
Vandenberghe, L., Boyd, S.: Semidefinite programming. SIAM Rev. 38, 49–95 (1996)
Ariyawansa, K., Zhu, Y.: A class of polynomial volumetric barrier decomposition algorithms for stochastic semidefinite programming. Math. Comput. 80, 1639–1661 (2011)
Nesterov, Yu.E., Nemirovskii, A.S.: Conic formulation of a convex programming problem and duality. Optim. Methods Softw. 1, 95–115 (1992)
Jeyakumar, V., Lee, G.M., Dinh, N.: Characterization of solution sets of convex vector minimization problems. Eur. J. Oper. Res. 174, 1380–1395 (2006)
Hiriart-Urruty, J.B., Lemaréchal, C.: Convex Analysis and Minimization Algorithms, vol. I and II. Springer, Berlin (1993)
Jeyakumar, V., Lee, G.M., Dinh, N.: New sequential Lagrange multiplier conditions characterizing optimality without constraint qualification for convex programs. SIAM J. Optim. 14, 534–547 (2003)
Li, C., Ng, K.F., Pong, T.K.: Constraint qualifications for convex inequality systems with applications in constrained optimization. SIAM J. Optim. 19, 163–187 (2008)
Goberna, M.A., Jeyakumar, V., Li, G., López, M.A.: Robust linear semi-infinite programming duality under uncertainty. Math. Program. 139(1–2), 185–203 (2013)
Acknowledgements
The authors thank Yifan Dou from the Ohio State University for reading the manuscript and pointing out misprints. The authors also thank the two anonymous expert referees for their valuable suggestions from which the paper has benefited.
Alzalg, B., Oulha, A.A. On approximate solutions for robust semi-infinite multi-objective convex symmetric cone optimization. Positivity 26, 86 (2022). https://doi.org/10.1007/s11117-022-00952-8
Keywords
- Robust symmetric cone optimization
- Semi-infinite programming
- Multi-objective programming
- Approximate optimality conditions
- Approximate duality theorems