Abstract
This paper studies a class of multiobjective convex polynomial problems in which both the constraint and objective functions involve uncertain parameters that reside in ellipsoidal uncertainty sets. Employing the robust deterministic approach, we provide necessary conditions and sufficient conditions, expressed in terms of second order cone conditions, for robust (weak) Pareto solutions of the uncertain multiobjective optimization problem. A dual multiobjective problem is proposed to examine robust converse, robust weak and robust strong duality relations between the primal and dual problems. Moreover, we establish robust solution relationships between the uncertain multiobjective optimization program and a (scalar) second order cone programming relaxation problem of a corresponding weighted-sum optimization problem. This in particular shows that we can find a robust (weak) Pareto solution of the uncertain multiobjective optimization problem by solving a second order cone programming relaxation.
1 Introduction
Since the seminal paper by Soyster [34], robust optimization has become a useful and efficient deterministic mathematical approach to handling decision making problems in the face of data uncertainty [5, 7]. Recently, the robust optimization approach has been applied to various domains in multiobjective/vector frameworks, with many further developments and high-potential techniques to solve various real-life decision making problems under data uncertainty (see e.g., [9, 12, 16,17,18, 20, 23, 24, 26, 36] and the references therein).
A recent research direction in robust multiobjective optimization has been devoted to investigating how to reformulate and solve a robust multiobjective optimization problem via semidefinite programming (SDP) techniques (see e.g., [4]) by exploiting special structures of the constraint and objective functions such as linear, quadratic or general polynomial forms; see e.g., [10, 11, 14, 25, 28]. For example, Magron et al. [28] used SDP relaxations to approximate Pareto curves of a bi-criteria polynomial program. The author of [11] established duality and linear matrix inequality conditions for a multiobjective SOS-convex optimization problem under constraint uncertainty, while Lee and Jiao [25] examined such a multiobjective problem by employing an approximate constraint scalarization approach to find robust efficient solutions via SDP duals. In a two-stage framework, the authors of [14] reformulated and solved an adjustable robust linear multiobjective optimization problem using SDP relaxations.
Remarkably, a new class of first-order scaled diagonally dominant sums-of-squares convex (1st-SDSOS-convex) polynomials has recently been introduced in [8]. This class is a subclass of SOS-convex polynomials [3, 21], and is a numerically tractable subclass of convex polynomials in the sense that checking whether a given polynomial is 1st-SDSOS-convex can be done by solving a feasibility problem of a second order cone programming (SOCP) problem [1, 2, 8]. The class of 1st-SDSOS-convex polynomials covers all separable convex quadratic functions and any polynomial that can be expressed as a sum of even powers with nonnegative coefficients [8]. The interested reader is referred to the more recent paper [13] for an application of the class of 1st-SDSOS-convex polynomials to duality and optimality conditions for weak efficient solutions of a multiobjective optimization problem.
In this paper, we consider an uncertain multiobjective optimization problem that is defined by
where \(x\in \mathbb {R}^n\) is the vector of decision variables, \(u_\zeta \in U_\zeta , \zeta =1,\ldots , m, \omega _l\in \Omega _l, l=1,\dots ,q,\) are uncertain parameters, \(U_\zeta , \zeta =1,\ldots ,m, \Omega _l, l=1,\ldots , q\) are uncertainty sets and \(f_\zeta : \mathbb {R}^n\times U_\zeta \rightarrow \mathbb {R}, \zeta =1,\dots ,m, g_l: \mathbb {R}^n\times \Omega _l\rightarrow \mathbb {R}, l=1,\dots ,q\) are bi-functions.
In what follows, the uncertainty sets \(U_\zeta , \zeta =1,\ldots ,m, \Omega _l, l=1,\ldots , q\) are assumed to be ellipsoids given by
where \(A_\zeta , \zeta =1,\ldots , m, B_l, l=1,\ldots , q\) are positive-definite symmetric matrices \((A_\zeta \succ 0, B_l\succ 0)\) and \(\rho _\zeta , \zeta =1,\ldots ,m, \tau _l, l=1,\ldots ,q\) are positive real numbers \((\rho _\zeta> 0, \tau _l> 0\)), while the bi-functions \(f_\zeta (\cdot ,\cdot ), \zeta =1,\dots ,m, g_l(\cdot ,\cdot ), l=1,\dots ,q\) are assumed to satisfy that \(f_\zeta (\cdot ,u_\zeta ), u_\zeta \in U_\zeta \) and \(g_l(\cdot ,\omega _l), \omega _l\in \Omega _l\) are 1st-SDSOS-convex polynomials (cf. Definition 2) and that \( f_\zeta (x,\cdot ), g_l(x,\cdot ), x\in \mathbb {R}^n\) are affine functions given by
where \(p_\zeta ^i, i=0,1,\ldots ,s, \zeta =1,\ldots ,m \) and \(h^j_l, j=0,1,\ldots ,r, l=1,\ldots ,q\) are given polynomials with degrees at most \(\eta \in \mathbb {N}_0:=\{0,1,2,\ldots \}\). Without loss of generality, we suppose that \(\eta \) is an even number because otherwise we can replace \(\eta \) by \(\eta +1.\)
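The displayed definitions of \(U_\zeta \) and \(\Omega _l\) are omitted above; a standard ellipsoidal model consistent with the data \((A_\zeta ,\rho _\zeta )\) and \((B_l,\tau _l)\) is \(\{u \mid u^{T}A u\le \rho \}\). The following minimal sketch shows how membership in such a set can be tested numerically; the specific matrix and radius are illustrative assumptions, not data from the paper.

```python
import numpy as np

def in_ellipsoid(u, A, rho):
    """Membership test for {u : u^T A u <= rho}, with A symmetric positive definite."""
    u = np.asarray(u, dtype=float)
    return bool(u @ A @ u <= rho + 1e-12)

# A small hypothetical 2-D instance: A = diag(2, 3), rho = 1.
A = np.diag([2.0, 3.0])
print(in_ellipsoid([0.5, 0.0], A, 1.0))   # 2*(0.5)^2 = 0.5 <= 1, so True
print(in_ellipsoid([1.0, 0.0], A, 1.0))   # 2*(1.0)^2 = 2.0 > 1, so False
```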
To deal with the uncertainty problem (U), we associate with it the following robust counterpart:
where the uncertain objective functions and uncertain constraint functions are enforced for all possible realizations within the corresponding uncertainty sets.
From now on, the feasible set of problem (R) is denoted by
We also use the notations \(F:=(F_1,\ldots ,F_m)\), where \(F_\zeta (x):=\max \limits _{u_\zeta \in U_\zeta }f_\zeta (x,u_\zeta ), \zeta =1,\ldots ,m\) for \(x\in \mathbb {R}^n\).
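Since each \(f_\zeta (x,\cdot )\) is affine and the maximum defining \(F_\zeta \) is taken over an ellipsoid, the worst case admits a closed form: assuming the standard form \(U=\{u\mid u^TAu\le \rho \}\), one has \(\max _{u\in U} c^Tu=\sqrt{\rho \,c^TA^{-1}c}\). The sketch below, on hypothetical data, checks this closed form against a dense sweep of the ellipsoid boundary.

```python
import numpy as np

def worst_case_value(p0, p, A, rho):
    """F(x) = max_{u^T A u <= rho} (p0 + p . u) = p0 + sqrt(rho * p^T A^{-1} p)."""
    p = np.asarray(p, dtype=float)
    return p0 + np.sqrt(rho * (p @ np.linalg.solve(A, p)))

# Hypothetical 2-D instance with diagonal A, so the boundary is easy to sweep.
A, rho = np.diag([2.0, 3.0]), 1.0
p0, p = 1.5, np.array([1.0, -2.0])

theta = np.linspace(0.0, 2.0 * np.pi, 100000)
# boundary points u with u^T A u = rho
U = np.sqrt(rho) * np.stack([np.cos(theta) / np.sqrt(2.0),
                             np.sin(theta) / np.sqrt(3.0)])
sampled = p0 + (p @ U).max()
exact = worst_case_value(p0, p, A, rho)
print(abs(exact - sampled) < 1e-6)   # True: closed form agrees with the sweep
```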
The forthcoming solution concepts are stated in terms of worst-case robust efficiency (see e.g., [16, 22]) for multiobjective/vector optimization (cf. [15, 19, 27, 29, 31, 35]).
Definition 1
(i) A point \({\bar{x}}\in C\) is called a robust Pareto solution of problem (U) if \({\bar{x}}\) is a Pareto solution of problem (R), i.e., for all \( x\in C,\)
where \(\mathbb {R}^m_+\) stands for the nonnegative orthant of \(\mathbb {R}^m\).

(ii) We say that \({\bar{x}}\in C\) is a robust weak Pareto solution of problem (U) if \({\bar{x}}\) is a weak Pareto solution of problem (R), i.e., for all \( x\in C,\)
where \(\mathrm{int\,} \mathbb {R}^m_+\) denotes the topological interior of \(\mathbb {R}^m_+\).
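On a finite list of attainable objective vectors, the two exclusion conditions of Definition 1 can be checked directly: \({\bar{x}}\) is (weak) Pareto when no candidate value lands in \(F({\bar{x}})-\mathbb {R}^m_+{\setminus }\{0\}\) (resp., \(F({\bar{x}})-\mathrm{int\,}\mathbb {R}^m_+\)). A minimal sketch with hypothetical objective values:

```python
import numpy as np

def is_weak_pareto(Fbar, Fvals, tol=1e-12):
    """True iff no value is strictly smaller than Fbar in EVERY component."""
    Fvals = np.asarray(Fvals, dtype=float)
    return not np.any(np.all(Fvals < np.asarray(Fbar) - tol, axis=1))

def is_pareto(Fbar, Fvals, tol=1e-12):
    """True iff no value is <= Fbar componentwise and strictly smaller somewhere."""
    Fbar, Fvals = np.asarray(Fbar, dtype=float), np.asarray(Fvals, dtype=float)
    dominated = np.all(Fvals <= Fbar + tol, axis=1) & np.any(Fvals < Fbar - tol, axis=1)
    return not np.any(dominated)

vals = [[1.0, 2.0], [2.0, 1.0], [1.0, 1.5]]
print(is_pareto([1.0, 2.0], vals))       # False: (1, 1.5) dominates (1, 2)
print(is_weak_pareto([1.0, 2.0], vals))  # True: nothing is strictly smaller in both
```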
The purpose of this paper is threefold: Firstly, we propose necessary and sufficient optimality conditions for robust (weak) Pareto solutions of the uncertain multiobjective optimization problem (U). The obtained optimality conditions are displayed by means of second order cone (SOC) conditions, which provide a numerically checkable certificate of the solvability of the robust optimality conditions. Interestingly, it is shown that the SOC conditions are equivalent to the robust Karush-Kuhn-Tucker (KKT) condition. This is done by employing the special structures of the 1st-SDSOS-convex polynomials, which enable us to express robust optimality conditions by way of SOC conditions. These results will be given in Sect. 2.
Secondly, we suggest a dual multiobjective problem in terms of SOC conditions to the robust multiobjective optimization problem (R) and explore robust strong, robust weak and robust converse duality relations between (R) and its dual. The obtained dual results characterize the SOC conditions as well as the fulfilment of the robust KKT condition. In particular, this shows how we can verify a Pareto solution of the robust multiobjective optimization problem (R) by solving the corresponding dual multiobjective problem, which is expressed in terms of SOC conditions. These results will be given in Sect. 3.
Thirdly, we establish robust solution relationships between the uncertain multiobjective optimization problem (U) and an SOCP relaxation of a corresponding weighted-sum optimization problem. These relationships show how one can find a robust (weak) Pareto solution of the uncertain multiobjective polynomial program (U) by solving an SOCP relaxation. In this way, we obtain strong SOCP duality and exact SOCP relaxation among the (scalar) dual, relaxation and weighted-sum optimization programs. These results will be presented in Sect. 4. The last section summarizes the main results and provides some perspectives on this research topic.
2 Second-order cone optimality conditions and solution relationships
To begin with, some notations and definitions are presented. The notation \(\mathbb {R}^n\) signifies the Euclidean space whose norm is denoted by \(\Vert \cdot \Vert \) for each \(n\in \mathbb {N}:=\{1,2,\ldots \}\). The inner product in \(\mathbb {R}^n\) is defined by \(\langle x,y\rangle :=x^T y\) for all \(x, y\in \mathbb {R}^n.\) Denote by \(\mathbb {R}[x]\) (or \(\mathbb {R}[x_1,\ldots ,x_n]\)) the ring of polynomials in \(x:=(x_1,\ldots ,x_n)\) with real coefficients. Consider a polynomial f with degree at most \(\eta \), where \(\eta \) is an even number, and let \(l:=\eta /2\). We denote the canonical basis of \(\mathbb {R}_\eta [x_1,\ldots ,x_n]\) by
where \(\mathbb {R}_\eta [x_1,\ldots ,x_n]\) is the space of all real polynomials on \(\mathbb {R}^n\) with degree at most \(\eta \). Let \(s(\eta ,n)\) be the dimension of \(\mathbb {R}_\eta [x_1,\ldots ,x_n]\) and for each \(1 \le \alpha \le s(\eta ,n)\), let \(i(\alpha )=(i_1(\alpha ),\ldots ,i_n(\alpha )) \in (\mathbb {N}_0)^n\) denote the multi-index satisfying
Let the monomials \(m_\alpha (x)=x^{(\eta )}_{\alpha }\) be the \(\alpha \)-th coordinate of \(x^{(\eta )}\), \(1 \le \alpha \le s(\eta ,n)\). Thus, we can write
Given \(u \in \mathbb {N}\) and \(y=(y_\alpha )\in \mathbb {R}^{s(u,n)},\) the Riesz functional \(L_y:\mathbb {R}_\eta [x]\rightarrow \mathbb {R}\) is defined by
and the moment matrix \(\textbf{M}_u(y)\) about \(x \in \mathbb {R}^n\) with degree u generated by \(y=(y_\alpha )\in \mathbb {R}^{s(2u,n)}\) is defined by
where \(M_\alpha ,\alpha =1,\ldots , s(2u,n),\) are the \((s(u,n)\times s(u,n))\) symmetric matrices such that
Definition 2
Let d be a polynomial on \(\mathbb {R}^n\). (i) The polynomial d with degree \(\eta =2\ell , \ell \in \mathbb {N}_0,\) is scaled diagonally dominant sum-of-squares (SDSOS) [1] if there exist nonnegative numbers \(\alpha _i\), \(\beta _{ij}^+,\beta _{ij}^-,\) \(\gamma _{ij}^+,\gamma ^-_{ij}\) and \(p \in \mathbb {N}\) with \(1 \le p \le s(\ell ,n)\) such that
where \(m_i, m_j\) are monomials in the variable x. We let \(\textbf{SDSOS}_\eta [x]\) denote the set of all SDSOS polynomials on \(\mathbb {R}^n\) with degree at most \(\eta \). (ii) For the polynomial d with degree \(\eta \in \mathbb {N}_0\), we consider a polynomial D defined as
where \(\nabla d(y)\) stands for the derivative of d at y. The polynomial d is called first-order scaled diagonally dominant sum-of-squares convex (1st-SDSOS-convex) [8] if D is an SDSOS polynomial in the variables (x, y).
It is worth mentioning here that all 1st-SDSOS-convex polynomials are convex, the sum of two 1st-SDSOS-convex polynomials is a 1st-SDSOS-convex polynomial, and the product of a 1st-SDSOS-convex polynomial and a nonnegative scalar is also a 1st-SDSOS-convex polynomial.
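As a concrete instance of Definition 2(i), the polynomial \(d(x,y)=2x^4+2x^2y^2+y^4\) is SDSOS via the decomposition \((x^2)^2+(x^2+y^2)^2\): a square of a monomial plus a square of a binomial in monomials. The sketch below confirms this decomposition numerically at random points (the example is ours, not from [8]):

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.standard_normal((1000, 2))
x, y = pts[:, 0], pts[:, 1]

d = 2 * x**4 + 2 * x**2 * y**2 + y**4    # candidate polynomial
sdsos = (x**2)**2 + (x**2 + y**2)**2     # square of a monomial + square of a binomial
print(np.allclose(d, sdsos))             # True: the decomposition certifies SDSOS
```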
The first theorem in this section provides necessary and sufficient optimality conditions for robust (weak) Pareto solutions of problem (U) in terms of second order cone (SOC) conditions. Let \(Q_\zeta , \zeta =1,\ldots ,m\) and \(E_l, l=1,\ldots ,q\) be square matrices such that \(A_\zeta =Q_\zeta ^T Q_\zeta \) and \(B_l=E_l^T E_l.\)
Theorem 1
(SOC conditions for robust (weak) efficiencies) Let \(\bar{x}\in \mathbb {R}^n\) be a robust feasible point of problem (U).
-
(i)
Suppose that the Slater condition is satisfied; i.e., there exists \({\hat{x}}\in \mathbb {R}^n\) such that for all \( \omega _l\in \Omega _l, l=1,\ldots ,q,\)
$$\begin{aligned} g_l({\hat{x}},\omega _l)<0. \end{aligned}$$
(7)

If \({\bar{x}}\) is a robust weak Pareto solution of problem (U), then there exist \(\mu ^0_l\in \mathbb {R}_+, \mu ^j_l\in \mathbb {R}, l=1,\ldots ,q, j=1,\ldots ,r\) and \((\alpha ^0_1,\ldots ,\alpha ^0_m)\in \mathbb {R}^m_+{\setminus }\{0\}, \alpha ^i_\zeta \in \mathbb {R}, \zeta =1,\ldots ,m, i=1,\ldots ,s\) such that
$$\begin{aligned}&\sum _{\zeta =1}^m\big (\alpha ^0_\zeta p^0_\zeta +\sum _{i=1}^s\alpha ^i_\zeta p^i_\zeta \big )+\sum _{l=1}^q\big (\mu ^0_lh^0_l+\sum _{j=1}^r\mu ^j_lh^j_l\big )- \sum _{\zeta =1}^m \alpha ^0_\zeta F_\zeta ({\bar{x}}) \in \textbf{SDSOS}_\eta [x],\end{aligned}$$
(8)
$$\begin{aligned}&\Vert Q_\zeta (\alpha _\zeta ^1,\ldots ,\alpha _\zeta ^s)\Vert \le \sqrt{\rho _\zeta }\alpha _\zeta ^0,\; \zeta =1,\ldots ,m,\end{aligned}$$
(9)
$$\begin{aligned}&\Vert E_l(\mu _l^1,\ldots ,\mu _l^r)\Vert \le \sqrt{\tau _l}\mu _l^0,\;l=1,\dots ,q, \end{aligned}$$
(10)

where \(F_\zeta ({\bar{x}}):=\max \limits _{u_\zeta \in U_\zeta }f_\zeta (\bar{x},u_\zeta )\) for \( \zeta =1,\ldots ,m\).
-
(ii)
If there exist \(\mu ^0_l\in \mathbb {R}_+, \mu ^j_l\in \mathbb {R}, l=1,\ldots ,q, j=1,\ldots ,r\) and \((\alpha ^0_1,\ldots ,\alpha ^0_m)\in \mathbb {R}^m_+{\setminus }\{0\}, \alpha ^i_\zeta \in \mathbb {R}, \zeta =1,\ldots ,m, i=1,\ldots ,s\) such that (8), (9) and (10) are satisfied, then \({\bar{x}}\) is a robust weak Pareto solution of problem (U).
-
(iii)
If there exist \(\mu ^0_l\in \mathbb {R}_+, \mu ^j_l\in \mathbb {R}, l=1,\ldots ,q, j=1,\ldots ,r\) and \((\alpha ^0_1,\ldots ,\alpha ^0_m)\in \mathrm{int\,} \mathbb {R}^m_+, \alpha ^i_\zeta \in \mathbb {R}, \zeta =1,\ldots ,m, i=1,\ldots ,s\) such that (8), (9) and (10) are satisfied, then \({\bar{x}}\) is a robust Pareto solution of problem (U).
Proof
(i) Suppose that the Slater condition (7) is satisfied and let \({\bar{x}}\) be a robust weak Pareto solution of problem (U). Put \(F_\zeta (x):=\max \limits _{u_\zeta \in U_\zeta }f_\zeta (x,u_\zeta ), \zeta =1,\ldots ,m\) and \( G_l(x):=\max \limits _{\omega _l\in \Omega _l}g_l(x,\omega _l), l=1,\ldots ,q\) for \(x\in \mathbb {R}^n\). Then, it holds that \(F_\zeta , \zeta =1,\ldots ,m\) and \(G_l,l=1,\ldots ,q\) are convex functions finite on \(\mathbb {R}^n\) because \(f_\zeta (\cdot ,u_\zeta ), u_\zeta \in U_\zeta \) and \(g_l(\cdot ,\omega _l), \omega _l\in \Omega _l\) are 1st-SDSOS-convex polynomials (hence, convex polynomials), \( f_\zeta (x,\cdot )\) and \( g_l(x,\cdot )\) are affine functions for each \(x\in \mathbb {R}^n,\) and \(U_\zeta \) and \(\Omega _l\) are compact sets. Note that the Slater condition (7) ensures that for each \(\;l=1,\ldots ,q,\)
Since \({\bar{x}}\) is a robust weak Pareto solution of problem (U), it follows that
Invoking a classical alternative theorem in convex analysis (see e.g., [32, Theorem 21.1]), we can find \(\lambda _\zeta \ge 0, \zeta =1,\ldots , m, \gamma _l\ge 0, l=1,\ldots , q\) such that \(\sum \limits _{\zeta =1}^m(\lambda _\zeta )^2+\sum \limits _{l=1}^q(\gamma _l)^2> 0\) and
We observe by (11) and (12) that \((\lambda _1,\ldots ,\lambda _m)\ne 0\), and that
Denoting \(W:=\prod \limits _{\zeta =1}^mU_\zeta \times \prod \limits _{l=1}^q\Omega _l\subset \mathbb {R}^{ms+qr}\), it is true that W is a convex compact set. Now, by recalling the definition of \(F_\zeta , \zeta =1,\ldots ,m\) and \(G_l, l=1,\ldots ,q,\) we get by (13) that
where \(w:=(u_1,\ldots ,u_m,\omega _1,\ldots ,\omega _q).\) Let \(H:\mathbb {R}^n\times \mathbb {R}^{ms+qr}\rightarrow \mathbb {R}\) be defined by \(H(x,w):=\sum \limits _{\zeta =1}^m\lambda _\zeta f_\zeta (x,u_\zeta )+\sum \limits _{l=1}^q\gamma _lg_l(x,\omega _l)\) for \(x\in \mathbb {R}^n, w:=(u_1,\ldots ,u_m,\omega _1,\ldots ,\omega _q)\in \mathbb {R}^{ms+qr}.\) Then, H is a convex function in variable x and affine function in variable w and so, we refer to the classical minimax theorem (see e.g., [33, Theorem 4.2]) to claim that
This together with (14) entails that
Hence, we can find \({\bar{u}}_\zeta :=({\bar{u}}^1_\zeta ,\ldots ,\bar{u}^{s}_\zeta )\in U_\zeta , \zeta =1,\ldots ,m\) and \( \bar{\omega }_l:=(\bar{\omega }^1_l,\ldots ,\bar{\omega }^r_l)\in \Omega _l, l=1,\ldots ,q\) such that
We now consider a polynomial on \(\mathbb {R}^n\) given by \(\sigma (x):=\sum \limits _{\zeta =1}^m\lambda _\zeta f_\zeta (x,{\bar{u}}_\zeta )+\sum \limits _{l=1}^q\gamma _lg_l(x,\bar{\omega }_l)-\sum \limits _{\zeta =1}^m \lambda _\zeta F_\zeta ({\bar{x}})\) for each \(x\in \mathbb {R}^n.\) By our assumption, it is easy to see that \(\sigma \) is a 1st-SDSOS-convex polynomial; moreover, \(\sigma (x)\ge 0\) for all \(x\in \mathbb {R}^n\) by virtue of (15). So, \(\sigma \in \textbf{SDSOS}_\eta [x]\) (cf. [8, Proposition 5.3]).
Now, the relations \({\bar{u}}_\zeta :=({\bar{u}}^1_\zeta ,\ldots ,\bar{u}^s_\zeta )\in U_\zeta , \zeta =1,\ldots ,m\) mean that
Setting \(\alpha ^0_\zeta :=\lambda _\zeta \ge 0, \zeta =1,\ldots ,m\) and \(\alpha ^i_\zeta :=\alpha ^0_\zeta {\bar{u}}^i_\zeta \in \mathbb {R}, i=1,\ldots , s, \zeta =1,\ldots ,m,\) we claim that (9) holds. To see this, let \(\zeta \in \{1,\ldots ,m\}.\) If \(\alpha _\zeta ^0= 0\), then \(\alpha _\zeta ^i=0\) for all \(i=1,\ldots ,s,\) and hence, (9) is trivially valid. If \(\alpha _\zeta ^0\ne 0,\) then it follows from (16) that
where \({\hat{u}}_\zeta :=(\alpha _\zeta ^1,\ldots ,\alpha _\zeta ^s).\) Since \(A_\zeta =Q_\zeta ^T Q_\zeta \), we arrive at the conclusion that \(\Vert Q_\zeta (\alpha _\zeta ^1,\ldots ,\alpha _\zeta ^s)\Vert \le \sqrt{\rho _\zeta }\alpha _\zeta ^0\), which shows that (9) holds, too. Similarly, by letting \(\mu ^0_l:=\gamma _l\ge 0, l=1,\ldots ,q\) and \(\mu ^j_l:=\mu ^0_l\bar{\omega }^j_l\in \mathbb {R}, j=1,\ldots , r, l=1,\ldots ,q\), we conclude from the relations \(\bar{\omega }_l:=(\bar{\omega }^1_l,\ldots ,\bar{\omega }^r_l)\in \Omega _l, l=1,\ldots ,q\) that (10) also holds.
Next, by noting the definitions of functions \(f_\zeta , \zeta =1,\ldots ,m\) and \(g_l, l=1,\ldots ,q\) in (2), we obtain that \(\lambda _\zeta f_\zeta (x,{\bar{u}}_\zeta )=\alpha ^0_\zeta p^0_\zeta (x)+\sum \limits _{i=1}^s\alpha ^i_\zeta p^i_\zeta (x)\) and \(\gamma _lg_l(x,\bar{\omega }_l)=\mu ^0_lh^0_l(x)+\sum \limits _{j=1}^r\mu ^j_lh^j_l(x),\;l=1,\ldots ,q,\) for all \(x\in \mathbb {R}^n.\) This together with the definition of \(\sigma \) yields
which shows that (8) is satisfied. Therefore, the proof of (i) is complete.
(ii) Assume that there exist \(\mu ^0_l\in \mathbb {R}_+, \mu ^j_l\in \mathbb {R}, l=1,\ldots ,q, j=1,\ldots ,r\) and \((\alpha ^0_1,\ldots ,\alpha ^0_m)\in \mathbb {R}^m_+{\setminus }\{0\}, \alpha ^i_\zeta \in \mathbb {R}, \zeta =1,\ldots ,m, i=1,\ldots ,s\) such that (8), (9) and (10) hold.
Consider any \(\zeta \in \{1,\ldots ,m\}\). We claim by (9) that if \(\alpha _\zeta ^0=0\), then \(\alpha _\zeta ^i=0\) for all \(i=1,\ldots ,s.\) Assume on the contrary that \(\alpha _\zeta ^0=0\) but there exists \(i_0\in \{1,\ldots ,s\}\) with \(\alpha _\zeta ^{i_0}\ne 0.\) In this case, (9) entails that \(\Vert Q_\zeta w_\zeta \Vert = 0\), where \(w_\zeta :=(\alpha ^1_\zeta ,\dots ,\alpha ^s_\zeta )\ne 0.\) This is a contradiction, since \( \Vert Q_\zeta w_\zeta \Vert ^2=w^T_\zeta A_\zeta w_\zeta >0\), where the strict inequality holds by virtue of \(A_\zeta \succ 0\). So, our claim is valid. Similarly, we assert by (10) that for each \(l\in \{1,\ldots ,q\}\), if \(\mu _l^0=0\), then \(\mu _l^j=0\) for all \(j=1,\ldots ,r.\)
Now, denote \({\hat{u}}_\zeta :=({\hat{u}}_\zeta ^1,\ldots ,{\hat{u}}_\zeta ^s), \zeta =1,\ldots , m\) and \(\hat{\omega }_l:=(\hat{\omega }_l^1,\ldots ,\hat{\omega }_l^r), l=1,\ldots ,q\), where
It follows from (9) and (10) that
which show that \({\hat{u}}_\zeta \in U_\zeta , \zeta =1,\ldots ,m\) and \(\hat{\omega }_l\in \Omega _l, l=1,\ldots ,q.\) Then, for any \(x\in \mathbb {R}^n,\)
where we note that for each \(\zeta \in \{1,\ldots ,m\}\) with \(\alpha _\zeta ^0=0\), we have \(\alpha _\zeta ^i=0\) for all \(i=1,\ldots ,s,\) as shown above. Similarly, \(\mu ^0_lh^0_l(x)+\sum \limits _{j=1}^r\mu ^j_lh^j_l(x)=\mu ^0_lg_l(x,\hat{\omega }_l),\;l=1,\ldots ,q\) for each \(x\in \mathbb {R}^n.\)
Therefore, we use (8) to find \(\sigma _0\in \textbf{SDSOS}_\eta [x]\) such that
Let \({\tilde{x}}\in \mathbb {R}^n\) be an arbitrary robust feasible point of problem (U). Note here that \(\sigma _0(\tilde{x})-\sum \limits _{l=1}^q\mu _l^0g_l({\tilde{x}},{\hat{\omega }}_l)\ge 0\) inasmuch as \(\sigma _0\) is an SDSOS polynomial, and hence \(\sigma _0({\tilde{x}})\ge 0\) and \(\mu _l^0\ge 0, g_l({\tilde{x}},\hat{\omega }_l)\le 0, l=1,\ldots ,q.\) Then, evaluating (17) at \({\tilde{x}}\) yields \(\sum \limits _{\zeta =1}^m \alpha ^0_\zeta f_\zeta ({\tilde{x}},{\hat{u}}_\zeta )\ge \sum \limits _{\zeta =1}^m \alpha ^0_\zeta F_\zeta ({\bar{x}})\) and thus,
due to the fact that \(\alpha _\zeta ^0\ge 0, F_\zeta ({\tilde{x}})\ge f_\zeta ({\tilde{x}},{\hat{u}}_\zeta )\) for \(\zeta =1,\ldots ,m.\)
Note by (18) that \(F({\tilde{x}})-F({\bar{x}})\notin -\mathrm{int\,} \mathbb {R}^m_+\) because of \((\alpha ^0_1,\ldots ,\alpha ^0_m)\in \mathbb {R}^m_+\setminus \{0\}.\) So, \(\bar{x}\) is a robust weak Pareto solution of problem (U).
(iii) Assume that there exist \(\mu ^0_l\in \mathbb {R}_+, \mu ^j_l\in \mathbb {R}, l=1,\ldots ,q, j=1,\ldots ,r\) and \((\alpha ^0_1,\ldots ,\alpha ^0_m)\in \mathrm{int\,}\mathbb {R}^m_+, \alpha ^i_\zeta \in \mathbb {R}, \zeta =1,\ldots ,m, i=1,\ldots ,s\) such that (8), (9) and (10) hold. Following the same lines as in the proof of (ii), we arrive at the assertion that
for each given robust feasible point \({\tilde{x}}\) of problem (U). We claim that \({\bar{x}}\) is a robust Pareto solution of problem (U). Otherwise, there exists a robust feasible point \(x^*\) of problem (U) such that \(F(x^*)-F(\bar{x})\in -\mathbb {R}^m_+{\setminus }\{0\}.\) Moreover, by \((\alpha ^0_1,\ldots ,\alpha ^0_m)\in \mathrm{int\,}\mathbb {R}^m_+,\) we see that \(\sum _{\zeta =1}^m \alpha ^0_\zeta F_\zeta (x^*)-\sum _{\zeta =1}^m \alpha ^0_\zeta F_\zeta ({\bar{x}})<0,\) which contradicts the assertion in (19). So, our claim is true and the proof is complete. \(\square \)
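Conditions (9) and (10) are ordinary second order cone constraints, so for given multipliers they can be checked by a single norm evaluation once a factor \(Q_\zeta \) with \(A_\zeta =Q_\zeta ^TQ_\zeta \) is available (e.g., the transpose of a Cholesky factor). A minimal sketch with hypothetical data:

```python
import numpy as np

def soc_ok(Q, alpha0, alpha, rho, tol=1e-10):
    """Check the second order cone condition ||Q @ alpha|| <= sqrt(rho) * alpha0."""
    return bool(np.linalg.norm(Q @ np.asarray(alpha, dtype=float))
                <= np.sqrt(rho) * alpha0 + tol)

# Hypothetical data: A = diag(2, 3) = Q^T Q with Q the transpose of chol(A), rho = 1.
A = np.diag([2.0, 3.0])
Q = np.linalg.cholesky(A).T
print(soc_ok(Q, 1.0, [0.5, 0.0], 1.0))   # ||Q(0.5, 0)|| = sqrt(2)/2 <= 1: True
print(soc_ok(Q, 0.0, [0.5, 0.0], 1.0))   # False: alpha0 = 0 forces alpha = 0
```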
Remark 1
The Slater condition (7) is important for obtaining necessary SOC conditions. The following example shows that the conclusion of (i) in Theorem 1 may fail if this Slater condition does not hold.
Example 1
(The importance of the Slater condition) Consider the robust multiobjective convex polynomial optimization problem
where \(U:=\{u:=(u^1,u^2)\in \mathbb {R}^2\mid \frac{(u^1)^2}{2}+\frac{(u^2)^2}{3}\le 1\}\) and \(\Omega :=\{\omega :=(\omega ^1,\omega ^2)\in \mathbb {R}^2\mid (\omega ^1)^{2}+(\omega ^2)^{2} \le 1\}.\) The problem (PE1) can be expressed in terms of problem (R), where the ellipsoids \(U_1:=U_2:=U\) and \(\Omega _1:=\Omega _2:=\Omega \) are described respectively by
\(\rho _1:=\rho _2:=\tau _1:=\tau _2:=1\) and the bi-functions \(f_\zeta , \zeta =1,2, g_l, l=1,2\) are given by \(f_\zeta (x,u):=p^0_\zeta (x)+\sum \limits ^{2}_{i=1}u^ip^i_\zeta (x)\) for \( u:=(u^1, u^2)\in \mathbb {R}^2\) with \(p^0_1(x):=x^8+3x^2-2x+1, p^1_1(x):=0, p^2_1(x):=-x^2, p^0_2(x):=2x^2-3x-1, p^1_2(x):=-x^2, p^2_2(x):=0\) for \(x\in \mathbb {R}\) and \( g_l(x,\omega ):=h^0_l(x)+\sum \limits ^{2}_{j=1}\omega ^jh^j_l(x)\) for \(\omega :=(\omega ^1,\omega ^2)\in \mathbb {R}^2\) with \( h^0_1(x):=x^8+2x^2, h^1_1(x):=h^2_1(x):=0, h^0_2(x):=x^8+x^2-1, h^1_2(x):=-x^2, h^2_2(x):=-x^8\) for \(x\in \mathbb {R}\).
It is clear that \( f_\zeta (\cdot ,u), u\in U, \zeta =1,2\) and \( g_l(\cdot ,\omega ), \omega \in \Omega , l=1,2\) are 1st-SDSOS-convex polynomials and that \({\bar{x}}:=0\) is a Pareto solution of the robust problem (PE1). However, we assert that the SOC condition given in (8) is not valid at \({\bar{x}}.\) To see this, suppose, by way of contradiction, that there exist \((\alpha ^0_1,\alpha ^0_2)\in \mathbb {R}^2_+{\setminus }\{0\}, \alpha ^i_\zeta \in \mathbb {R}, \zeta =1,2, i=1,2,\) \(\mu _l^0\in \mathbb {R}_+, \mu _l^j\in \mathbb {R}, l=1,2, j=1,2\) and \(\sigma \in \textbf{SDSOS}_8[x]\) such that
where \(F_1({\bar{x}}):=\max \limits _{u\in U}\{p^0_1(\bar{x})+\sum \limits _{i=1}^2u^ip^i_1({\bar{x}})\}=1\) and \(F_2(\bar{x}):=\max \limits _{u\in U}\{p^0_2(\bar{x})+\sum \limits _{i=1}^2u^ip^i_2({\bar{x}})\}=-1\). We rearrange (20) and obtain that
For each \(x\in \mathbb {R},\) since \(\mu ^0_2\ge 0\) and \(\sigma (x)\ge 0\), it follows that
Considering \(x_\zeta :=\frac{1}{\zeta }\), where \(\zeta \in \mathbb {N}\), we claim by (21) that
for \(\zeta \) large enough. This means that
for all \(\zeta \) large enough. Now, letting \(\zeta \rightarrow \infty \) in (22), we arrive at a contradiction. Consequently, the result of (i) in Theorem 1 is not true for this setting. The reason is that the Slater condition fails for this problem.
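The failure of the Slater condition in Example 1 can also be seen directly: the first constraint reads \(g_1(x,\omega )=h^0_1(x)=x^8+2x^2\) (since \(h^1_1=h^2_1=0\)), which is nonnegative for every \(x\), so no \({\hat{x}}\) with \(g_1({\hat{x}},\omega )<0\) exists. A quick numerical confirmation:

```python
import numpy as np

# First constraint of (PE1): g_1(x, omega) = h_1^0(x) = x^8 + 2x^2, which is
# omega-free because h_1^1 = h_1^2 = 0.  Slater would require g_1(x, omega) < 0
# for some x, but g_1 is a sum of even powers and never goes strictly negative.
def g1(x):
    return x**8 + 2 * x**2

xs = np.linspace(-2.0, 2.0, 4001)
print(g1(0.0) == 0.0)          # equality exactly at the only feasible point
print(bool(np.all(g1(xs) >= 0.0)))   # no Slater point exists on this grid
```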
As shown in [13, Example 2.7], the 1st-SDSOS-convexity assumption in our framework is also important for the validity of the SOC conditions. That is, the conclusion of (i) in Theorem 1 may fail for a robust multiobjective polynomial problem whose related functions are convex (but not 1st-SDSOS-convex) polynomials, even though the Slater qualification holds.
We now show that the SOC conditions in (8), (9) and (10) are equivalent to the robust Karush-Kuhn-Tucker (KKT) condition of problem (U). In doing so, we say that the robust KKT condition is valid at \({\bar{x}}\in C\) if there exist \( \bar{\omega }_l\in \Omega _l, l=1,\ldots ,q\) and \(\alpha :=(\alpha _1,\ldots ,\alpha _m)\in \mathbb {R}^m_+{\setminus }\{0\}, \lambda :=(\lambda _1,\ldots ,\lambda _q)\in \mathbb {R}^q_+\), \({\bar{u}}_\zeta \in U_\zeta , \zeta =1,\ldots ,m\) such that
where \(\nabla _1f_\zeta \) (resp., \(\nabla _1g_l\)) denotes the derivative of \(f_\zeta \) (resp., \(g_l\)) with respect to the first variable and \(F_\zeta ({\bar{x}}):=\max \limits _{u_\zeta \in U_\zeta }f_\zeta ({\bar{x}},u_\zeta )\) for \( \zeta =1,\ldots ,m\). Note that if \(f_\zeta (\cdot ,u_\zeta ):=p^0_\zeta \) for all \( u_\zeta \in U_\zeta , \zeta =1,\ldots ,m\) (i.e., there is no uncertainty in the objective functions), then the above robust KKT condition reduces to the one examined in [11] for a class of SOS-convex polynomials, and in [9] for a more general class of locally Lipschitz nonsmooth functions.
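The display (23) is not reproduced above; assuming it takes the usual stationarity-plus-complementarity form \(\sum _\zeta \alpha _\zeta \nabla _1f_\zeta ({\bar{x}},{\bar{u}}_\zeta )+\sum _l\lambda _l\nabla _1g_l({\bar{x}},\bar{\omega }_l)=0\) with \(\lambda _lg_l({\bar{x}},\bar{\omega }_l)=0\), the KKT residuals can be checked numerically. A toy sketch with one objective, one constraint and no uncertainty (all data hypothetical):

```python
import numpy as np

# Toy instance (m = q = 1, no uncertainty): minimize f(x) = (x - 2)^2
# subject to g(x) = x - 1 <= 0.  The minimizer is x_bar = 1 (active constraint).
x_bar = 1.0
df = lambda x: 2.0 * (x - 2.0)   # gradient of the (worst-case) objective
g  = lambda x: x - 1.0
dg = lambda x: 1.0

alpha, lam = 1.0, 2.0            # multipliers with alpha in R_+ \ {0}, lam >= 0
stationarity = alpha * df(x_bar) + lam * dg(x_bar)   # should vanish at a KKT point
complementarity = lam * g(x_bar)                     # should vanish as well
print(abs(stationarity) < 1e-12 and abs(complementarity) < 1e-12)   # True
```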
Theorem 2
(SOC and robust KKT conditions) Let \({\bar{x}}\in \mathbb {R}^n\) be a robust feasible point of problem (U). Then, the following conditions are equivalent:
-

(i)

The SOC conditions (8), (9) and (10) are satisfied.
-
(ii)
The robust KKT condition is valid at \({\bar{x}}.\)
-
(iii)
There exist \(\mu ^0_l\in \mathbb {R}_+, \mu ^j_l\in \mathbb {R}, l=1,\ldots ,q, j=1,\ldots ,r\) and \((\alpha ^0_1,\ldots ,\alpha ^0_m)\in \mathbb {R}^m_+{\setminus }\{0\}, \alpha ^i_\zeta \in \mathbb {R}, \zeta =1,\ldots ,m, i=1,\ldots ,s\) such that
where \(F_\zeta ({\bar{x}}):=\max \limits _{u_\zeta \in U_\zeta }f_\zeta (\bar{x},u_\zeta )\) for \( \zeta =1,\ldots ,m\).
Proof
[(i) \(\Longrightarrow \) (ii)] Suppose that the SOC conditions given in (8), (9) and (10) are valid. Following the proof of (ii) in Theorem 1, we observe by (9) and (10) that there exist \({\hat{u}}_\zeta \in U_\zeta , \zeta =1,\ldots ,m\) and \(\hat{\omega }_l\in \Omega _l, l=1,\ldots ,q\) such that
Now, we use (8) to find \(\sigma \in \textbf{SDSOS}_\eta [x]\) such that
where \(F_\zeta ({\bar{x}}):=\max \limits _{u_\zeta \in U_\zeta }f_\zeta (\bar{x},u_\zeta )\) for \( \zeta =1,\ldots ,m.\) Observe here that \(\sigma (\bar{x})-\sum \limits _{l=1}^q\mu _l^0g_l({\bar{x}},\hat{\omega }_l)\ge 0\) inasmuch as \(\sigma \) is an SDSOS polynomial and hence \(\sigma (\bar{x})\ge 0\), and \(\mu _l^0g_l({\bar{x}},\hat{\omega }_l)\le 0, l=1,\ldots ,q\) as \({\bar{x}}\) is a robust feasible point of problem (U). By evaluating (28) at \({\bar{x}}\), we obtain that \(\sum \limits _{\zeta =1}^m \alpha ^0_\zeta f_\zeta ({\bar{x}},{\hat{u}}_\zeta )\ge \sum \limits _{\zeta =1}^m \alpha ^0_\zeta F_\zeta ({\bar{x}})\) and thus,
due to the fact that \(\alpha _\zeta ^0\ge 0, F_\zeta ({\bar{x}})\ge f_\zeta ({\bar{x}},{\hat{u}}_\zeta )\) for \(\zeta =1,\ldots ,m.\) Substituting \(x:={\bar{x}}\) into (28) and taking (29) into account, we arrive at \(\sigma (\bar{x})-\sum \limits _{l=1}^q\mu _l^0g_l({\bar{x}},\hat{\omega }_l)=0\). This ensures that \(\sigma ({\bar{x}})=0\) and
Therefore, \(\sigma (x)\ge \sigma ({\bar{x}}), \forall x\in \mathbb {R}^n\), which means that \({\bar{x}}\) is a minimizer of \(\sigma \) on \(\mathbb {R}^n.\) It entails that \(\nabla \sigma ({\bar{x}})=0\), and so we get by (28) that
which together with (29) and (30) shows that the robust KKT condition is valid at \({\bar{x}}.\)
[(ii) \(\Longrightarrow \) (iii)] Suppose that the robust KKT condition is valid at \({\bar{x}}\). This means that there exist \(\alpha :=(\alpha _1,\ldots ,\alpha _m)\in \mathbb {R}^m_+{\setminus }\{0\}, \lambda :=(\lambda _1,\ldots ,\lambda _q)\in \mathbb {R}^q_+\) and \(\bar{u}_\zeta :=({\bar{u}}^1_\zeta ,\ldots ,{\bar{u}}^s_\zeta )\in U_\zeta , \zeta =1,\ldots ,m, \bar{\omega }_l:=(\bar{\omega }^1_l,\ldots ,\bar{\omega }^r_l)\in \Omega _l, l=1,\ldots ,q,\) such that (23) is valid. Let \(\alpha ^0_\zeta :=\alpha _\zeta \ge 0, \zeta =1,\ldots ,m, \alpha ^i_\zeta :=\alpha ^0_\zeta {\bar{u}}^i_\zeta \in \mathbb {R}, \zeta =1,\ldots ,m, i=1,\ldots , s\) and \(\mu ^0_l:=\lambda _l\ge 0, l=1,\ldots ,q, \mu ^j_l:=\mu ^0_l\bar{\omega }^j_l\in \mathbb {R}, l=1,\ldots ,q, j=1,\ldots , r\). Arguing similarly as in the proof of Theorem 1, we show that (25) and (26) hold and that
This together with (23) proves that (24) is valid.
[(iii) \(\Longrightarrow \) (i)] Let \(\mu ^0_l\in \mathbb {R}_+, \mu ^j_l\in \mathbb {R}, l=1,\ldots ,q, j=1,\ldots ,r\) and \((\alpha ^0_1,\ldots ,\alpha ^0_m)\in \mathbb {R}^m_+{\setminus }\{0\}, \alpha ^i_\zeta \in \mathbb {R}, \zeta =1,\ldots ,m, i=1,\ldots ,s\) be such that (24), (25) and (26) hold. As above, we use (25) and (26) to find \({\hat{u}}_\zeta \in U_\zeta , \zeta =1,\ldots ,m\) and \(\hat{\omega }_l\in \Omega _l, l=1,\ldots ,q\) such that
Consider a function \(d:\mathbb {R}^n\rightarrow \mathbb {R}\) defined by
As \( f_\zeta (\cdot ,{\hat{u}}_\zeta ), \zeta =1,\ldots ,m\) and \( g_l(\cdot ,\hat{\omega }_l), l=1,\ldots ,q\) are 1st-SDSOS-convex polynomials, d is 1st-SDSOS-convex and hence convex. We show by (24) and (31) that
Then, the convexity of d entails that \(d(x)\ge d({\bar{x}})=0\) for all \(x\in \mathbb {R}^n\). This together with the 1st-SDSOS-convexity of d guarantees that \(d\in \textbf{SDSOS}_\eta [x]\) (cf. [8, Proposition 5.3]). Taking (31) into account, we arrive at
Consequently, the assertion in (i) holds. The proof is complete. \(\square \)
Remark 2
-
(i)
The result in Theorem 2 provides new characterizations for the robust KKT condition via the proposed SOC conditions in the setting of 1st-SDSOS-convex polynomials. The interested reader is referred to [11, Proposition 2.10] for corresponding characterizations for the robust KKT condition in terms of the linear matrix inequality (LMI) conditions in the setting of SOS-convex polynomials.
-
(ii)
In the definition of robust KKT condition in (23), if \(\alpha \in \mathbb {R}^m_+\setminus \{0\}\) is replaced by \(\alpha \in \textrm{int}\mathbb {R}^m_+\), then the corresponding condition is called robust strong KKT condition. Using a concept of robust proper Pareto solution for our setting and proceeding similarly as in the proof of Theorem 2, we could obtain some characterizations or at least some sufficient conditions for the fulfillment of the robust strong KKT condition and thus for the validation of the hypothesis (iii) of Theorem 1. This would be an interesting topic for a further study.
3 Robust duality via second order cone conditions
In this section, we propose a dual multiobjective problem to the robust multiobjective optimization problem (R) and explore duality relations between them. More precisely, we justify that robust dual results characterize the SOC conditions as well as the robust KKT condition. In particular, this shows how to identify a Pareto solution of the primal robust multiobjective optimization problem (R) by solving its dual problem, which is exhibited in terms of SOC conditions.
In connection with (R), we consider a dual multiobjective problem as follows:
It is worth noticing here that a Pareto solution (resp., a weak Pareto solution) of a “max” multiobjective problem like the dual problem (\(\textrm{D}\)) is defined similarly as in Definition 1 by replacing \(-\mathbb {R}^m_+\setminus \{0\}\) (resp., \(-\mathrm{int\,} \mathbb {R}^m_+\)) by \(\mathbb {R}^m_+\setminus \{0\} \) (resp., \(\mathrm{int\,} \mathbb {R}^m_+\)). We also recall here the notation \(F:=(F_1,\ldots ,F_m)\) with \(F_\zeta (x):=\max \limits _{u_\zeta \in U_\zeta }f_\zeta (x,u_\zeta ), \zeta =1,\ldots ,m\) for \(x\in \mathbb {R}^n\).
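Since each \(f_\zeta (x,\cdot )\) is affine in the uncertain parameter (it has the form \(p^0_\zeta (x)+\sum _{i=1}^s u^i_\zeta p^i_\zeta (x)\), as recalled later in the proofs) and the uncertainty set is ellipsoidal, the worst-case value \(F_\zeta (x)\) at a fixed point admits a closed form via the Cauchy–Schwarz inequality. The following minimal Python sketch illustrates this for a hypothetical ellipsoid \(\{u\mid \Vert Qu\Vert \le \sqrt{\tau }\}\) with an invertible shape matrix \(Q\); the data \(Q\), \(c\) and \(\tau \) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
s, tau = 3, 2.0                      # dimension of u and ellipsoid radius (illustrative)
Q = np.eye(s) + 0.1 * rng.standard_normal((s, s))  # invertible shape matrix (assumption)
c = rng.standard_normal(s)           # stands in for (p^1(x), ..., p^s(x)) at a fixed x

# worst case of the affine term: max { c^T u : ||Q u|| <= sqrt(tau) } = sqrt(tau) * ||Q^{-T} c||
w = np.linalg.solve(Q.T, c)          # w = Q^{-T} c
closed_form = np.sqrt(tau) * np.linalg.norm(w)

# the maximizer u* = sqrt(tau) * Q^{-1} (w / ||w||) attains the closed-form value
u_star = np.sqrt(tau) * np.linalg.solve(Q, w / np.linalg.norm(w))
assert abs(c @ u_star - closed_form) < 1e-9                # value attained
assert np.linalg.norm(Q @ u_star) <= np.sqrt(tau) + 1e-9   # u* is feasible
```

This closed form is what makes the worst-case functions \(F_\zeta \) computable and, in turn, what the SOC reformulations exploit.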
The following theorem provides weak and strong duality relations between the dual problem (\(\textrm{D}\)) and the primal problem (R).
Theorem 3
(i) (Robust weak duality) Let \((t_\zeta ,\alpha ^0_\zeta ,\alpha ^i_\zeta ,\mu ^0_l,\mu ^j_l, \zeta =1,\ldots ,m, i=1,\ldots ,s, j=1,\ldots ,r, l=1,\ldots ,q)\) be a feasible point of problem (\(\textrm{D}\)) and let \({\hat{x}}\) be a feasible point of problem (R). We have
(ii) (Robust strong dual characterization) Let \({\bar{x}}\) be a weak Pareto solution of problem (R). The robust KKT condition is valid at \({\bar{x}}\) if and only if there exists a weak Pareto solution of problem (\(\textrm{D}\)), say \((\bar{t}_\zeta ,\bar{\alpha }^0_\zeta ,\bar{\alpha }^i_\zeta ,\bar{\mu }^0_l,\bar{\mu }^j_l, \zeta =1,\ldots ,m, i=1,\ldots ,s, j=1,\ldots ,r, l=1,\ldots ,q)\), such that
Proof
(i) As \((t_\zeta ,\alpha ^0_\zeta ,\alpha ^i_\zeta ,\mu ^0_l,\mu ^j_l, \zeta =1,\ldots ,m, i=1,\ldots ,s, j=1,\ldots ,r, l=1,\ldots ,q)\) is a feasible point of problem (\(\textrm{D}\)), we find \( \sigma \in \textbf{SDSOS}_\eta [x]\) such that
where \( (\alpha ^0_1,\ldots ,\alpha ^0_m)\in \mathbb {R}^m_+\setminus \{0\}\), \(\alpha ^i_\zeta \in \mathbb {R}, t_\zeta \in \mathbb {R}, \zeta =1,\ldots , m, i=1,\ldots , s\) and \( \mu ^0_l\in \mathbb {R}_+, \mu ^j_l\in \mathbb {R}, j=1,\ldots ,r, l=1,\ldots ,q \). Arguing as in the proof of Theorem 1(ii), we deduce from (34) and (35) that there exist \({\hat{u}}_\zeta \in U_\zeta , \zeta =1,\ldots ,m\) and \(\hat{\omega }_l\in \Omega _l, l=1,\ldots ,q\) such that
Note further that \(\sigma ({\hat{x}})\ge 0\) as \(\sigma \) is an SDSOS polynomial and that \(g_l({\hat{x}},\hat{\omega }_l)\le 0, l=1,\ldots ,q,\) as \({\hat{x}}\) is a feasible point of problem (R). Evaluating (33) at \({\hat{x}}\), we arrive at
where we recall that \(F_\zeta ({\hat{x}}):=\max \limits _{u_\zeta \in U_\zeta }f_\zeta ({\hat{x}},u_\zeta )\ge f_\zeta ({\hat{x}}, {\hat{u}}_\zeta )\) for all \( \zeta =1,\ldots ,m\). Since \( (\alpha ^0_1,\ldots ,\alpha ^0_m)\in \mathbb {R}^m_+\setminus \{0\}\), we conclude by (36) that \(F({\hat{x}})-(t_1,\ldots ,t_m)\notin -\mathrm{int\,} \mathbb {R}^m_+\), which shows that (32) holds.
(ii) Let \({\bar{x}}\) be a weak Pareto solution of problem (R). Suppose that the robust KKT condition is valid at \({\bar{x}}\). By Theorem 2, the SOC conditions hold, i.e., there exist \(\bar{\mu }^0_l\in \mathbb {R}_+, \bar{\mu }^j_l\in \mathbb {R}, l=1,\ldots ,q, j=1,\ldots ,r\) and \((\bar{\alpha }^0_1,\ldots ,\bar{\alpha }^0_m)\in \mathbb {R}^m_+{\setminus }\{0\}, \bar{\alpha }^i_\zeta \in \mathbb {R}, \zeta =1,\ldots ,m, i=1,\ldots ,s\) such that
where \(F_\zeta ({\bar{x}}):=\max \limits _{u_\zeta \in U_\zeta }f_\zeta (\bar{x},u_\zeta )\) for \( \zeta =1,\ldots ,m\). Letting \(\bar{t}_\zeta :=F_\zeta ({\bar{x}}), \zeta =1,\ldots ,m\), it holds that
and that \((\bar{t}_\zeta ,\bar{\alpha }^0_\zeta ,\bar{\alpha }^i_\zeta ,\bar{\mu }^0_l,\bar{\mu }^j_l, \zeta =1,\ldots ,m, i=1,\ldots ,s, j=1,\ldots ,r, l=1,\ldots ,q)\) is a feasible point of problem (\(\textrm{D}\)).
We assert further that \((\bar{t}_\zeta ,\bar{\alpha }^0_\zeta ,\bar{\alpha }^i_\zeta ,\bar{\mu }^0_l,\bar{\mu }^j_l, \zeta =1,\ldots ,m, i=1,\ldots ,s, j=1,\ldots ,r, l=1,\ldots ,q)\) is a weak Pareto solution of problem (\(\textrm{D}\)). To see this, assume on the contrary that there exists another feasible point of problem (\(\textrm{D}\)), say \((\tilde{t}_\zeta ,\tilde{\alpha }^0_\zeta ,\tilde{\alpha }^i_\zeta ,\tilde{\mu }^0_l,\tilde{\mu }^j_l, \zeta =1,\ldots ,m, i=1,\ldots ,s, j=1,\ldots ,r, l=1,\ldots ,q)\), such that \(({\tilde{t}}_1,\ldots ,{\tilde{t}}_m)-({\bar{t}}_1,\ldots ,{\bar{t}}_m)\in \textrm{int}\mathbb {R}^m_+,\) which amounts to the following relation:
This contradicts the robust weak duality of (i).
Conversely, let \((\bar{t}_\zeta ,\bar{\alpha }^0_\zeta ,\bar{\alpha }^i_\zeta ,\bar{\mu }^0_l,\bar{\mu }^j_l, \zeta =1,\ldots ,m, i=1,\ldots ,s, j=1,\ldots ,r, l=1,\ldots ,q)\) be a weak Pareto solution of problem (\(\textrm{D}\)) such that \(F({\bar{x}})=(\bar{t}_1,\ldots ,{\bar{t}}_m).\) Then, we obtain that
where \((\bar{\alpha }^0_1,\ldots ,\bar{\alpha }^0_m)\in \mathbb {R}^m_+{\setminus }\{0\}, \bar{\alpha }^i_\zeta \in \mathbb {R}, \zeta =1,\ldots ,m, i=1,\ldots ,s\) and \(\bar{\mu }^0_l\in \mathbb {R}_+, \bar{\mu }^j_l\in \mathbb {R}, l=1,\ldots ,q, j=1,\ldots ,r\). In other words, the SOC conditions given in Theorem 1 hold, and so, by Theorem 2, the robust KKT condition is valid at \({\bar{x}}\) as well. \(\square \)
In the forthcoming corollary, we derive a robust strong duality relation under the Slater condition, which is easy to verify in practice.
Corollary 1
(Robust strong duality with the Slater condition) Suppose that the Slater condition (7) is satisfied. Let \({\bar{x}}\) be a weak Pareto solution of problem (R). Then, there exists a weak Pareto solution of problem (\(\textrm{D}\)), say \((\bar{t}_\zeta ,\bar{\alpha }^0_\zeta ,\bar{\alpha }^i_\zeta ,\bar{\mu }^0_l,\bar{\mu }^j_l, \zeta =1,\ldots ,m, i=1,\ldots ,s, j=1,\ldots ,r, l=1,\ldots ,q)\), such that
where \(F:=(F_1,\ldots ,F_m)\) with \(F_\zeta ({\bar{x}}):=\max \limits _{u_\zeta \in U_\zeta }f_\zeta ({\bar{x}},u_\zeta ), \zeta =1,\ldots ,m\).
Proof
Let \({\bar{x}}\) be a weak Pareto solution of problem (R). Under the Slater condition (7), we employ Theorem 1(i) to conclude that the SOC conditions given in (8), (9) and (10) hold. In view of Theorem 2, this is equivalent to the robust KKT condition being valid at \({\bar{x}}\). To finish the proof, it remains to invoke Theorem 3(ii). \(\square \)
In the following theorem, we present a robust converse duality relation between the robust multiobjective problem (R) and its dual problem (\(\textrm{D}\)).
Theorem 4
(Robust converse duality) Let the feasible set C in (3) be compact and the Slater condition (7) hold. If \((\bar{t}_\zeta ,\bar{\alpha }^0_\zeta ,\bar{\alpha }^i_\zeta ,\bar{\mu }^0_l,\bar{\mu }^j_l, \zeta =1,\ldots ,m, i=1,\ldots ,s, j=1,\ldots ,r, l=1,\ldots ,q)\) is a weak Pareto solution of problem (\(\textrm{D}\)), then there exists a weak Pareto solution of problem (R), denoted by \({\bar{x}}\), such that
Proof
Let \((\bar{t}_\zeta ,\bar{\alpha }^0_\zeta ,\bar{\alpha }^i_\zeta ,\bar{\mu }^0_l,\bar{\mu }^j_l, \zeta =1,\ldots ,m, i=1,\ldots ,s, j=1,\ldots ,r, l=1,\ldots ,q)\) be a weak Pareto solution of problem (\(\textrm{D}\)). Then, it is clear that
where \((\bar{\alpha }^0_1,\ldots ,\bar{\alpha }^0_m)\in \mathbb {R}^m_+{\setminus }\{0\}, \bar{\alpha }^i_\zeta \in \mathbb {R}, \zeta =1,\ldots ,m, i=1,\ldots ,s\) and \(\bar{\mu }^0_l\in \mathbb {R}_+, \bar{\mu }^j_l\in \mathbb {R}, l=1,\ldots ,q, j=1,\ldots ,r\).
Put \(X:=\{F(x)+w\mid x\in C, w\in \mathbb {R}^m_+\},\) where \(F:=(F_1,\ldots ,F_m)\) with \(F_\zeta (x):=\max \limits _{u_\zeta \in U_\zeta }f_\zeta (x,u_\zeta ), \zeta =1,\ldots ,m\) for \(x\in \mathbb {R}^n\). Since \(F_\zeta , \zeta =1,\ldots ,m\) are convex functions finite on \(\mathbb {R}^n\), it holds that X is a closed convex set, and we claim that
Indeed, suppose on the contrary that \(({\bar{t}}_1,\ldots ,\bar{t}_m)\notin X.\) By a classical strong separation theorem in convex analysis (see e.g., [30, Theorem 2.2]), there exists \(\lambda :=(\lambda _1,\ldots , \lambda _m)\in \mathbb {R}^m_+\setminus \{0\}\) such that \(\sum _{\zeta =1}^m\lambda _\zeta {\bar{t}}_\zeta < \inf {\big \{\lambda ^Tv \mid v\in X\big \}}.\) This entails that
Observe that the (scalar) optimization problem on the right-hand side of (42) has an optimal solution, since the feasible set C is compact and \(F_\zeta , \zeta =1,\ldots ,m\) are convex functions finite on \(\mathbb {R}^n\) and thus continuous. Hence, there exists an optimal solution \(x_*\in C\) such that
and that
where \( G_l(x):=\max \limits _{\omega _l\in \Omega _l}g_l(x,\omega _l),\) for \(x\in \mathbb {R}^n, l=1,\ldots ,q.\) Invoking a classical alternative theorem in convex analysis (see e.g., [32, Theorem 21.1]), we find \(\lambda _0\ge 0, \gamma _l\ge 0, l=1,\ldots , q\), not all zero, such that
On account of the Slater condition (7), we assert by (44) that \(\lambda _0> 0\), and so there is no loss of generality in assuming that \(\lambda _0=1\). Consequently, we arrive at
Now, following the same lines (after (12)) as in the proof of Theorem 1(i), we can find \( \alpha ^{i*}_\zeta \in \mathbb {R}, \zeta =1,\ldots ,m, i=1,\ldots ,s\) and \(\mu ^{0*}_l\in \mathbb {R}_+, \mu ^{j*}_l\in \mathbb {R}, l=1,\ldots ,q, j=1,\ldots ,r\) such that
Letting \(\gamma :=\frac{1}{\sum _{\zeta =1}^m\lambda _\zeta }\Big (\sum \limits _{\zeta =1}^m\lambda _\zeta F_\zeta (x_*)-\sum \limits _{\zeta =1}^m\lambda _\zeta {\bar{t}}_\zeta \Big )\), we conclude by (43) that \(\gamma >0\) and, by (45), that
Therefore, \((\bar{t}_\zeta +\gamma ,\lambda _\zeta ,\alpha ^{i*}_\zeta ,\mu ^{0*}_l,\mu ^{j*}_l, \zeta =1,\ldots ,m, i=1,\ldots ,s, j=1,\ldots ,r, l=1,\ldots ,q)\) is a feasible point of problem (\(\textrm{D}\)), and
This contradicts the fact that \((\bar{t}_\zeta ,\bar{\alpha }^0_\zeta ,\bar{\alpha }^i_\zeta ,\bar{\mu }^0_l,\bar{\mu }^j_l, \zeta =1,\ldots ,m, i=1,\ldots ,s, j=1,\ldots ,r, l=1,\ldots ,q)\) is a weak Pareto solution of problem (\(\textrm{D}\)). Hence, our assertion in (41) is valid.
Granting this, we find \({\bar{x}}\in C\) and \({\bar{w}}:=(\bar{w}_1,\ldots ,{\bar{w}}_m)\in \mathbb {R}^m_+\) such that
which shows that (37) holds. By (38) and (46), we find \(\sigma \in \textbf{SDSOS}_\eta [x]\) such that
This, together with \(\sum _{\zeta =1}^m\bar{\alpha }^0_\zeta {\bar{w}}_\zeta \ge 0,\) guarantees that
In view of Theorem 1(ii), we get by (39), (40) and (47) that \({\bar{x}}\) is a weak Pareto solution of problem (R), and so the proof of the theorem is complete. \(\square \)
4 Finding robust efficient solutions via SOCP relaxations
In this section, we show how to find a robust (weak) Pareto solution of the uncertain multiobjective problem (U) by using a second-order cone programming (SOCP) relaxation. This is done by establishing robust solution relationships between the uncertain multiobjective problem (U) and an SOCP relaxation of a corresponding weighted-sum optimization program.
For a given \(\nu :=(\nu _1,\ldots ,\nu _m)\in \mathbb {R}^m_+\setminus \{0\},\) let us consider a corresponding weighted-sum optimization problem of (R) as follows:
where \(F_\zeta (x):=\max \limits _{u_\zeta \in U_\zeta }f_\zeta (x,u_\zeta ), \zeta =1,\ldots ,m\) for \(x\in \mathbb {R}^n\).
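The weighted-sum scalarization underlies the solution relationships below: a minimizer of \(\sum _{\zeta =1}^m\nu _\zeta F_\zeta \) over the feasible set with \(\nu \in \mathbb {R}^m_+\setminus \{0\}\) is a weak Pareto point. A minimal numerical sketch on a toy one-dimensional problem (the objectives, weights and grid are illustrative assumptions, not from the paper):

```python
import numpy as np

# two convex worst-case objectives on a toy feasible set C = [-2, 2]
F1 = lambda x: (x - 1.0) ** 2
F2 = lambda x: (x + 1.0) ** 2
C = np.linspace(-2.0, 2.0, 4001)     # discretized feasible set (illustration only)

nu = np.array([0.3, 0.7])            # weights in R^2_+ \ {0}
x_bar = C[np.argmin(nu[0] * F1(C) + nu[1] * F2(C))]

# no feasible point strictly improves both objectives, so x_bar is weakly Pareto
dominated = np.any((F1(C) < F1(x_bar) - 1e-12) & (F2(C) < F2(x_bar) - 1e-12))
assert not dominated
```

Varying \(\nu \) traces out different weak Pareto points, which is the mechanism Theorem 5 exploits via the SOCP relaxation of (\(\hbox {P}_{\nu }\)).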
A second-order cone programming (SOCP) dual program of problem (\(\hbox {P}_{\nu }\)) is defined by
where \(Q_\zeta , \zeta =1,\ldots ,m\) and \(E_l, l=1,\ldots ,q\) are square matrices such that \(A_\zeta =Q_\zeta ^T Q_\zeta \) and \(B_l=E_l^T E_l\) as previously.
We also consider an SOCP relaxation program of problem (\(\hbox {P}_{\nu }\)) as
where \(Q^i_\zeta , i=1,\ldots ,s\) are the columns of the matrix \(Q_\zeta \) for \(\zeta =1,\ldots ,m\) and \(E^j_l, j=1,\ldots ,r\) are the columns of the matrix \(E_l\) for \(l=1,\ldots ,q.\)
The following lemma is needed for our analysis in the sequel.
Lemma 1
(Jensen’s inequality with 1st-SDSOS-convex polynomials, cf. [8, Proposition 6.1]) Consider a 1st-SDSOS-convex polynomial d on \(\mathbb {R}^n\) with degree \(\eta :=2\ell \). Let \(y=(y_\alpha )\in \mathbb {R}^{s(\eta ,n)}\) with \(y_1=1\) be such that, for \(1 \le i,j \le s(\ell ,n),\)
Then, one has
where \(L_y\) is given as in (4) and \(x_i\) denotes the polynomial which maps a vector x in \(\mathbb {R}^n\) to its ith coordinate.
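Lemma 1 can be illustrated numerically in the special case where \(y\) collects the moments of a probability measure: then \(L_y(d)\) is the expectation of \(d\), \((L_y(x_1),\ldots ,L_y(x_n))\) is the mean, and the inequality reduces to the classical Jensen inequality \(\mathbb {E}[d(X)]\ge d(\mathbb {E}[X])\) for convex \(d\). A toy sketch (the two-point measure and the convex polynomial are illustrative assumptions):

```python
import numpy as np

# a two-point probability measure on R^2; y would collect its moments
pts = np.array([[1.0, 2.0], [3.0, -1.0]])
w = np.array([0.3, 0.7])                         # mixture weights, sum to 1

d = lambda x: x[0] ** 2 + x[1] ** 2 + x[0] * x[1] + x[0]  # a convex quadratic

L_y_d = w @ np.array([d(p) for p in pts])        # L_y(d) = E[d(X)]
mean = w @ pts                                   # (L_y(x_1), L_y(x_2)) = E[X]
assert L_y_d >= d(mean) - 1e-12                  # Jensen's inequality
```

In the proofs below, \(y\) is only required to satisfy the moment-matrix conditions rather than to come from an actual measure, which is exactly where the SDSOS structure of d is needed.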
We are now ready to establish robust solution relationships between the uncertain multiobjective optimization problem (U) and an SOCP relaxation (\(\hbox {D}_{\nu }^*\)) of the corresponding weighted-sum optimization problem (\(\hbox {P}_{\nu }\)). This result illustrates, in particular, that we can find robust (weak) Pareto solutions of program (U) by solving the (scalar) SOCP relaxation problem (\(\hbox {D}_{\nu }^*\)). In addition, we obtain a strong SOCP duality and exact SOCP relaxation result relating the dual problem (\(\hbox {D}_{\nu }\)), the relaxation (\(\hbox {D}_{\nu }^*\)) and the weighted-sum optimization problem (\(\hbox {P}_{\nu }\)), which might be of independent interest.
Theorem 5
Let the Slater qualification condition (7) be valid. Then, we have the following statements:
(i) If \({\bar{x}}:=({\bar{x}}_1,\ldots ,{\bar{x}}_n)\) is a robust weak Pareto solution of problem (U), then there exist \(\nu \in \mathbb {R}^m_+{\setminus }\{0\}\) and \(\xi _\zeta \in \mathbb {R}, \tilde{\xi }_l\in \mathbb {R}, z^\zeta \in \mathbb {R}^s, {\tilde{z}}^l\in \mathbb {R}^r, \zeta =1,\ldots ,m, l=1,\ldots ,q\) such that
$$\begin{aligned} \min {(P_{\nu })}=\max {(D_{\nu })}=\min {(D_{\nu }^*)} \end{aligned}$$
(49)
and \((\bar{y},\xi _1,\ldots ,\xi _m,\tilde{\xi }_1,\ldots ,\tilde{\xi }_q,z^1,\ldots ,z^m,\tilde{z}^1,\ldots ,{\tilde{z}}^q)\) is an optimal solution of problem (\(\hbox {D}_{\nu }^*\)), where \({\bar{y}}:=(1, {\bar{x}}_1,\ldots ,\bar{x}_n, {\bar{x}}_1^2,{\bar{x}}_1{\bar{x}}_2,\ldots ,{\bar{x}}_2^2,\ldots ,\bar{x}_n^2,\ldots ,{\bar{x}}_1^{\eta },\ldots ,{\bar{x}}_n^\eta )\).
(ii) Consider \(L_{{\bar{y}}}\) given as in (4) and \(x_i\) being the polynomial which maps a vector x in \(\mathbb {R}^n\) to its ith coordinate. Let \(\nu \in \mathbb {R}^m_+\setminus \{0\}\) be such that problem (\(\hbox {P}_{\nu }\)) admits an optimal solution. If \((\bar{y},\xi _1,\ldots ,\xi _m,\tilde{\xi }_1,\ldots ,\tilde{\xi }_q,z^1,\ldots ,z^m,\tilde{z}^1,\ldots ,{\tilde{z}}^q)\) with \({\bar{y}}:=({\bar{y}}_\alpha )\in \mathbb {R}^{s(\eta ,n)}\) is an optimal solution of problem (\(\hbox {D}_{\nu }^*\)), then \({\bar{x}}:=(L_{{\bar{y}}}(x_1),\ldots , L_{{\bar{y}}}(x_n))\in \mathbb {R}^n\) is a robust weak Pareto solution of problem (U). Moreover, if \(\nu \in \textrm{int}\mathbb {R}^m_+\), then \({\bar{x}}\) is a robust Pareto solution of problem (U).
Proof
(i) Let \({\bar{x}}:=({\bar{x}}_1,\ldots ,{\bar{x}}_n)\) be a robust weak Pareto solution of problem (U). In view of Theorem 1(i), there exist \(\mu ^0_l\in \mathbb {R}_+, \mu ^j_l\in \mathbb {R}, l=1,\ldots ,q, j=1,\ldots ,r\) and \(\nu :=(\nu _1,\ldots ,\nu _m)\in \mathbb {R}^m_+{\setminus }\{0\}, \lambda ^i_\zeta \in \mathbb {R}, \zeta =1,\ldots ,m, i=1,\ldots ,s\) such that
where \(F_\zeta ({\bar{x}}):=\max \limits _{u_\zeta \in U_\zeta }f_\zeta (\bar{x},u_\zeta )\) for \( \zeta =1,\ldots ,m\).
Step 1. Let us first prove that
By (50), we find \(\sigma \in \textbf{SDSOS}_\eta [x]\) such that
As in the proof of Theorem 1(ii), we deduce from (51) and (52) that there exist \({\hat{u}}_\zeta \in U_\zeta , \zeta =1,\ldots ,m\) and \(\hat{\omega }_l\in \Omega _l, l=1,\ldots ,q\) such that
Let \( {\hat{x}}\) be an arbitrary feasible point of problem (\(\hbox {P}_{\nu }\)). This entails that \(g_l({\hat{x}},\hat{\omega }_l)\le 0, l=1,\ldots ,q\). Note further that \(F_\zeta ({\hat{x}}):=\max \limits _{u_\zeta \in U_\zeta }f_\zeta ({\hat{x}},u_\zeta )\ge f_\zeta ({\hat{x}}, {\hat{u}}_\zeta )\) for all \( \zeta =1,\ldots ,m\) and that \(\sigma ({\hat{x}})\ge 0\) as \(\sigma \) is an SDSOS polynomial. Evaluating (54) at \({\hat{x}}\), we arrive at
This shows that \({\bar{x}}\) is an optimal solution of problem (\(\hbox {P}_{\nu }\)), and so \(\min {(P_{\nu })}=\sum _{\zeta =1}^m \nu _\zeta F_\zeta (\bar{x}).\)
To proceed, let
By putting \({\bar{t}}:=\sum _{\zeta =1}^m \nu _\zeta F_\zeta ({\bar{x}})\), we get by (50) that \(({\bar{t}},\lambda ^i_\zeta ,\mu ^0_l,\mu ^j_l, i=1,\ldots ,s, \zeta =1,\ldots ,m, j=1,\ldots ,r, l=1,\ldots ,q)\) is a feasible point of problem (\(\hbox {D}_{\nu }\)) and so, \({\bar{t}}\le \nu ^*\). Let us show that
To see this, assume that \((t,\lambda ^i_\zeta ,\mu ^0_l,\mu ^j_l, i=1,\ldots ,s, \zeta =1,\ldots ,m, j=1,\ldots ,r, l=1,\ldots ,q)\) is a feasible point of problem (\(\hbox {D}_{\nu }\)). Then, \(\mu ^0_l\in \mathbb {R}_+, \mu ^j_l\in \mathbb {R}, l=1,\ldots ,q, j=1,\ldots ,r\) and \( \lambda ^i_\zeta \in \mathbb {R}, \zeta =1,\ldots ,m, i=1,\ldots ,s\) satisfying
Arguing similarly as in (54) and (55), we conclude that \(\sum _{\zeta =1}^m\nu _\zeta F_\zeta ({\bar{x}}) \ge t\); since \(\min {(P_{\nu })}=\sum _{\zeta =1}^m \nu _\zeta F_\zeta (\bar{x}),\) this proves that (56) is valid. Consequently, (53) holds.
Step 2. Denote \({\bar{y}}:={\bar{x}}^{(\eta )}=(1, \bar{x}_1,\ldots ,{\bar{x}}_n, {\bar{x}}_1^2,{\bar{x}}_1{\bar{x}}_2,\ldots ,\bar{x}_2^2,\ldots ,{\bar{x}}_n^2,\ldots ,{\bar{x}}_1^{\eta },\ldots ,\bar{x}_n^\eta )\). We now prove that there exist \(\xi _\zeta \in \mathbb {R}, \tilde{\xi }_l\in \mathbb {R}, z^\zeta \in \mathbb {R}^s, {\tilde{z}}^l\in \mathbb {R}^r, \zeta =1,\ldots ,m, l=1,\ldots ,q\) such that
\((\bar{y},\xi _1,\ldots ,\xi _m,\tilde{\xi }_1,\ldots ,\tilde{\xi }_q,z^1,\ldots ,z^m,\tilde{z}^1,\ldots ,{\tilde{z}}^q)\) is an optimal solution of problem (\(\hbox {D}_{\nu }^*\)) and
Since \({\bar{x}}\) is a feasible point of problem (\(\hbox {P}_{\nu }\)), it is true that
This entails that
Letting \(\hat{\omega }_l:=0_r\in \mathbb {R}^r,\) we have \(\Vert E_l\hat{\omega }_l\Vert < \sqrt{\tau _l},\) i.e., the strict feasibility condition holds for (58). This allows us to employ strong duality in second-order cone programming (see e.g., [6, Theorem 7.1]) to assert that there exist \( {\tilde{z}}^l\in \mathbb {R}^r, \tilde{\xi }_l\in \mathbb {R}, l=1,\ldots ,q\) such that \(\Vert {\tilde{z}}^l\Vert \le \tilde{\xi }_l, l=1,\ldots ,q\) and
Similarly, by \(F_\zeta ({\bar{x}}):= \max \limits _{u_\zeta :=(u_\zeta ^1,\ldots ,u^s_\zeta )\in U_\zeta }\{p^0_\zeta (x)+\sum \limits ^{s}_{i=1}u^i_\zeta p^i_\zeta (x)\}, \zeta =1,\dots ,m\), we have
and so, we can find \(z^\zeta \in \mathbb {R}^s, \xi _\zeta \in \mathbb {R}, \zeta =1,\ldots ,m\) such that \(\Vert z^\zeta \Vert \le \xi _\zeta , \zeta =1,\ldots ,m\) and
As \({\bar{y}}:={\bar{x}}^{(\eta )}=(1, {\bar{x}}_1,\ldots ,{\bar{x}}_n, \bar{x}_1^2,{\bar{x}}_1{\bar{x}}_2,\ldots ,{\bar{x}}_2^2,\ldots ,\bar{x}_n^2,\ldots ,{\bar{x}}_1^{\eta },\ldots ,{\bar{x}}_n^\eta )\), it holds that \({\bar{y}}_1:=1\) and
where \(L_{y}\) is defined as in (4). Furthermore, by the definition of the moment matrix (cf. (5) and (6)), we see that
which ensures that for each \(1 \le i,j \le s(\frac{\eta }{2},n),\)
Consequently, \((\bar{y},\xi _1,\ldots ,\xi _m,\tilde{\xi }_1,\ldots ,\tilde{\xi }_q,z^1,\ldots ,z^m,\tilde{z}^1,\ldots ,{\tilde{z}}^q)\) is a feasible point of problem (\(\hbox {D}_{\nu }^*\)), which in turn implies that
where the second inequality holds by (59) and (60), while the equality holds by (53).
We now justify that \(\min {(P_{\nu })}\le \inf {(D_{\nu }^*)}.\) Let \((y,{\xi }_{1},\ldots ,{\xi }_{m},{\tilde{\xi }}_1,\ldots ,{\tilde{\xi }}_q,z^1,\ldots ,z^m,{\tilde{z}}^1,\ldots ,{\tilde{z}}^q)\) be a feasible point of problem (\(\hbox {D}_{\nu }^*\)), where \(y:=(y_\alpha )\in \mathbb {R}^{s(\eta ,n)}\), and denote \({\tilde{x}}:=(L_{y}(x_1),\ldots , L_{y}(x_n)).\) Then,
Considering any \(l\in \{1,\ldots ,q\}\), we verify that
where \({\tilde{x}}:=(L_{y}(x_1),\ldots , L_{y}(x_n)).\) It suffices to show that for any \(\omega _l\in \Omega _l,\) it holds that
Since \(\omega _l:=(\omega ^1_l,\ldots ,\omega ^r_l)\in \Omega _l\), it holds that \(||E_l(\omega ^1_l,\ldots ,\omega ^r_l)||\le \sqrt{\tau _l}.\) This together with (63) ensures that
Therefore, we have
where \(L_{y}\) is defined as in (4). Note that the polynomial \(h^0_l+\sum _{j=1}^{r}\omega ^j_lh^j_l\) is 1st-SDSOS-convex by our assumption. On account of (64) and (65), we invoke Jensen’s inequality given in Lemma 1 to claim that
which together with (68) shows that (67) holds. So, (66) is true. Granting this, we conclude from (62) that \({\tilde{x}}\) is a feasible point of problem (\(\hbox {P}_{\nu }\)) and hence,
Similarly, we can verify that
and so
Hence, by (69), it follows that
Granting this, we invoke (61) to conclude that
i.e., (57) is valid and \((\bar{y},\xi _1,\ldots ,\xi _m,\tilde{\xi }_1,\ldots ,\tilde{\xi }_q,z^1,\ldots ,z^m,\tilde{z}^1,\ldots ,{\tilde{z}}^q)\) is an optimal solution of problem (\(\hbox {D}_{\nu }^*\)).
(ii) Let \(\nu :=(\nu _1,\ldots ,\nu _m)\in \mathbb {R}^m_+\setminus \{0\}\) be such that the problem (\(\hbox {P}_{\nu }\)) admits an optimal solution. Suppose that \({\hat{x}}\) is an optimal solution of problem (\(\hbox {P}_{\nu }\)). Arguing similarly as in the proof of Step 2 of (i), we arrive at
Let \((\bar{y},\xi _1,\ldots ,\xi _m,\tilde{\xi }_1,\ldots ,\tilde{\xi }_q,z^1,\ldots ,z^m,\tilde{z}^1,\ldots ,{\tilde{z}}^q)\) with \({\bar{y}}:=({\bar{y}}_\alpha )\in \mathbb {R}^{s(\eta ,n)}\) be an optimal solution of problem (\(\hbox {D}_{\nu }^*\)). Then, we have \(\xi _\zeta \in \mathbb {R}, \tilde{\xi }_l\in \mathbb {R}, z^\zeta \in \mathbb {R}^s, {\tilde{z}}^l\in \mathbb {R}^r, \zeta =1,\ldots ,m, l=1,\ldots ,q\) and
Denote \({\bar{x}}:=(L_{{\bar{y}}}(x_1),\ldots , L_{\bar{y}}(x_n))\in \mathbb {R}^n\). As shown in (i), using (72), (74), (75) and (76), we can verify that \({\bar{x}}\) is a feasible point of problem (\(\hbox {P}_{\nu }\)), and so
Similarly, we derive from (73), (75) and (76) that
Combining this with (77), (71) and (70) shows that \(\min {(\hbox {P}_{\nu })}= \sum _{\zeta =1}^m\nu _\zeta F_\zeta (\bar{x})\), and so \({\bar{x}}\) is an optimal solution of problem (\(\hbox {P}_{\nu }\)). We conclude that \({\bar{x}}\) is a robust weak Pareto solution of problem (U). In the case \(\nu \in \textrm{int}\mathbb {R}^m_+\), we obtain that \({\bar{x}}\) is a robust Pareto solution of problem (U). The proof is complete. \(\square \)
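The lifting \({\bar{y}}:={\bar{x}}^{(\eta )}\) used in the proof evaluates every monomial of degree at most \(\eta \) at the point \({\bar{x}}\), and the point is recovered through the degree-one moments \(L_{{\bar{y}}}(x_i)\). A small Python sketch of this round trip; the graded monomial ordering below is one admissible choice, used here purely for illustration:

```python
import numpy as np
from itertools import combinations_with_replacement

def monomial_exponents(n, eta):
    """All exponent tuples of total degree <= eta, in graded order, constant first."""
    exps = []
    for deg in range(eta + 1):
        for combo in combinations_with_replacement(range(n), deg):
            e = [0] * n
            for i in combo:
                e[i] += 1
            exps.append(tuple(e))
    return exps

def moment_vector(x, eta):
    """y = x^(eta): evaluate every monomial of degree <= eta at the point x."""
    return np.array([np.prod(np.asarray(x, float) ** np.array(e))
                     for e in monomial_exponents(len(x), eta)])

x_bar = [0.5, -1.0, 2.0]
eta = 4
y = moment_vector(x_bar, eta)
exps = monomial_exponents(len(x_bar), eta)

assert y[0] == 1.0                  # the constant monomial gives y_1 = 1
# recovering the point: L_y(x_i) reads off the moment of the degree-one monomial x_i
recovered = [y[exps.index(tuple(int(j == i) for j in range(len(x_bar))))]
             for i in range(len(x_bar))]
assert np.allclose(recovered, x_bar)
```

The length of y here is \(s(\eta ,n)=\binom{n+\eta }{n}\), matching the dimension of the variable in problem (\(\hbox {D}_{\nu }^*\)).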
5 Conclusions
In this paper, we have presented necessary and sufficient optimality conditions in terms of second order cone conditions for robust (weak) Pareto solutions of an uncertain multiobjective optimization problem. We have also proposed a dual problem to the robust multiobjective optimization problem and examined robust converse, robust weak and robust strong duality relations between the primal and dual problems. Furthermore, robust solution relationships between the uncertain multiobjective optimization program and (scalar) second order cone programming (SOCP) dual and relaxation problems have been established by using the duality theory in second order cone programming. In particular, it has been shown that one can compute a robust (weak) Pareto solution of the uncertain multiobjective optimization problem by solving a related second order cone programming relaxation.
The results obtained in this paper are conceptual dual/relaxation schemes that open the way to numerical experiments for a subclass of robust convex multiobjective polynomial optimization problems. It would be of great interest to see how to develop associated algorithms for finding robust (weak) Pareto solutions of an uncertain multiobjective optimization problem via its SOCP dual or relaxation programs.
References
Ahmadi, A.A., Majumdar, A.: Some applications of polynomial optimization in operations research and real-time decision making. Optim. Lett. 10(4), 709–729 (2016)
Ahmadi, A.A., Majumdar, A.: DSOS and SDSOS optimization: more tractable alternatives to sum of squares and semidefinite optimization. SIAM J. Appl. Algebra Geom. 3, 193–230 (2019)
Ahmadi, A.A., Parrilo, P.A.: A complete characterization of the gap between convexity and sos-convexity. SIAM J. Optim. 23(2), 811–833 (2013)
Blekherman, G., Parrilo, P.A., Thomas, R.: Semidefinite Optimization and Convex Algebraic Geometry. SIAM Publications, Philadelphia (2012)
Ben-Tal, A., El Ghaoui, L., Nemirovski, A.: Robust Optimization. Princeton University Press, Princeton, NJ (2009)
Ben-Tal, A., den Hertog, D., Vial, J.-P.: Deriving robust counterparts of nonlinear uncertain inequalities. Math. Program. 149(1–2), 265–299 (2015)
Bertsimas, D., Brown, D.B., Caramanis, C.: Theory and applications of robust optimization. SIAM Rev. 53, 464–501 (2011)
Chuong, T.D., Jeyakumar, V., Li, G.: A new bounded degree hierarchy with SOCP relaxations for global polynomial optimization and conic convex semi-algebraic programs. J. Global Optim. 75, 885–919 (2019)
Chuong, T.D.: Optimality and duality for robust multiobjective optimization problems. Nonlinear Anal. 134, 127–143 (2016)
Chuong, T.D.: Robust alternative theorem for linear inequalities with applications to robust multiobjective optimization. Oper. Res. Lett. 45(6), 575–580 (2017)
Chuong, T.D.: Linear matrix inequality conditions and duality for a class of robust multiobjective convex polynomial programs. SIAM J. Optim. 28, 2466–2488 (2018)
Chuong, T.D.: Robust optimality and duality in multiobjective optimization problems under data uncertainty. SIAM J. Optim. 30, 1501–1526 (2020)
Chuong, T.D.: Second-order cone programming relaxations for a class of multiobjective convex polynomial problems. Ann. Oper. Res. 311, 1017–1033 (2022)
Chuong, T.D., Jeyakumar, V.: Adjustable robust multi-objective linear optimization: pareto optimal solutions via conic programming. Ann. Oper. Res. (2022). https://doi.org/10.1007/s10479-022-05104-5
Ehrgott, M.: Multicriteria Optimization. Springer, Berlin (2005)
Ehrgott, M., Ide, J., Schobel, A.: Minmax robustness for multi-objective optimization problems. Eur. J. Oper. Res. 239, 17–31 (2014)
Georgiev, P.G., Luc, D.T., Pardalos, P.M.: Robust aspects of solutions in deterministic multiple objective linear programming. Eur. J. Oper. Res. 229(1), 29–36 (2013)
Goberna, M.A., Jeyakumar, V., Li, G., Perez, J.-V.: Robust solutions of multi-objective linear semi-infinite programs under constraint data uncertainty. SIAM J. Optim. 24(3), 1402–1419 (2014)
Jahn, J.: Vector Optimization: Theory, Applications, and Extensions. Springer, Berlin (2004)
Gorissen, B.L., den Hertog, D.: Approximating the Pareto sets of multiobjective linear programs via robust optimization. Oper. Res. Lett. 40(5), 319–324 (2012)
Helton, J.W., Nie, J.: Semidefinite representation of convex sets. Math. Program. 122(1), 21–64 (2010)
Kuroiwa, D., Lee, G.M.: On robust multiobjective optimization. Vietnam J. Math. 40(2–3), 305–317 (2012)
La Torre, D., Mendivil, F.: Portfolio optimization under partial uncertainty and incomplete information: a probability multimeasure-based approach. Ann. Oper. Res. 267(1–2), 267–279 (2018)
Lee, G.M., Lee, J.H.: On nonsmooth optimality theorems for robust multiobjective optimization problems. J. Nonlinear Convex Anal. 16(10), 2039–2052 (2015)
Lee, J.H., Jiao, L.: Finding efficient solutions in robust multiple objective optimization with SOS-convex polynomial data. Ann. Oper. Res. 296, 803–820 (2021)
Lee, J.H., Lee, G.M.: On optimality conditions and duality theorems for robust semi-infinite multiobjective optimization problems. Ann. Oper. Res. 269(1–2), 419–438 (2018)
Luc, D.T.: Theory of Vector Optimization Lecture Notes in Economics and Mathematical Systems, vol. 319. Springer, Berlin (1989)
Magron, V., Henrion, D., Lasserre, J.-B.: Approximating Pareto curves using semidefinite relaxations. Oper. Res. Lett. 42(6–7), 432–437 (2014)
Miettinen, K.: Nonlinear Multiobjective Optimization, Vol. 12. Kluwer Academic Publishers (1999)
Mordukhovich, B.S., Nam, N.M.: An Easy Path to Convex Analysis and Applications. Synthesis Lectures on Mathematics and Statistics, vol. 14. Morgan & Claypool Publishers, Williston (2014)
Pardalos, P.M., Zilinskas, A., Zilinskas, J.: Non-convex Multi-objective Optimization. Springer Optimization and its Applications, vol. 123. Springer, Cham (2017)
Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton, NJ (1970)
Sion, M.: On general minimax theorems. Pacific J. Math. 8, 171–176 (1958)
Soyster, A.L.: Convex programming with set-inclusive constraints and applications to inexact linear programming. Oper. Res. 21, 1154–1157 (1973)
Steuer, R.E.: Multiple Criteria Optimization. Theory, Computation, and Application. Wiley Series in Probability and Mathematical Statistics: Applied Probability and Statistics. Wiley, New York (1986)
Zamani, M., Soleimani-damaneh, M., Kabgani, A.: Robustness in nonsmooth nonlinear multi-objective programming. Eur. J. Oper. Res. 247(2), 370–378 (2015)
Acknowledgements
The authors are grateful to a referee for the valuable comments and suggestions. This research is funded by Vietnam National University Ho Chi Minh City (VNU-HCM) under grant number T2024-26-01.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Tinh, C.T., Chuong, T.D. Robust second order cone conditions and duality for multiobjective problems under uncertainty data. J Glob Optim 88, 901–926 (2024). https://doi.org/10.1007/s10898-023-01335-3
Keywords
- Multiobjective optimization
- Second order cone programming
- Robust optimization
- Convex polynomial
- Optimality condition