Abstract
In this article, a mathematical programming problem under affinely parameterized uncertain data with multiple objective functions given by SOS-convex polynomials, denoted by (UMP), is considered; moreover, its robust counterpart, denoted by (RMP), is formulated by following the robust optimization (worst-case) approach. Then, by employing the well-known \(\epsilon \)-constraint method (a scalarization technique), we replace (RMP) by a class of scalar problems. Under suitable conditions, a zero duality gap result between each scalar problem and its relaxation problems is established; moreover, the relationship between their solutions is also discussed. As a consequence, we observe that finding robust efficient solutions to (UMP) is tractable by such a scalarization method. Finally, a nontrivial numerical example is designed to show how to find robust efficient solutions to (UMP) by applying our results.
1 Introduction
Consider the following mathematical programming problem with multiple objective functions:
where \(f_j: {{\mathbb {R}}}^n \rightarrow {{\mathbb {R}}},\) \(j=1,\ldots ,p,\) and \(g_i: {{\mathbb {R}}}^n \rightarrow {{\mathbb {R}}},\) \(i=1,\ldots ,m,\) are convex polynomials.
-
If \(p=1,\) the model problem \(\mathrm{(MP)}\) reduces to a scalar one, and admits a hierarchy of semidefinite programming (SDP) relaxations, the well-known Lasserre hierarchy of SDP relaxations. More broadly, the Lasserre hierarchy of SDP relaxations is often used to solve nonconvex polynomial optimization problems with compact feasible sets (Lasserre 2009a, b), and it has finite convergence generically, as shown in Helton and Nie (2010). In particular, if the functions involved are SOS-convex polynomials (see Definition 2.1), then \(\mathrm{(MP)}\) enjoys an exact SDP relaxation in the sense that the optimal values of \(\mathrm{(MP)}\) and its relaxation problem are equal; moreover, the relaxation problem attains its optimum under the Slater condition (Lasserre 2009b, Theorem 3.4).
-
If \(p\ge 2,\) and the functions involved are SOS-convex polynomials (we consider SOS-convex polynomials because deciding whether a polynomial is SOS-convex can be equivalently rewritten as a semidefinite feasibility problem, which can be checked efficiently), Lee and Jiao (2018) contributed an article on how to find efficient solutions of \(\mathrm{(MP)}\) by employing the \(\epsilon \)-constraint method, a scalarization technique: one of the objective functions is minimized while all the other objective functions are bounded from above by means of additional constraints.
On the other hand, the data of many real-world optimization problems are often uncertain (that is, they are not known exactly at the time of the decision) due to lack of information, estimation errors, or prediction errors. Recently, the robust optimization approach, which associates an uncertain mathematical program with its robust counterpart, has emerged as a powerful deterministic approach for studying mathematical programming with data uncertainty; see, for example, Beck and Ben-Tal (2009), Ben-Tal and Nemirovski (1998), Ben-Tal et al. (2009), Chuong (2016), Doolittle et al. (2018), Ide and Schöbel (2016), and Wiecek and Dranichak (2016) and the references therein.
Indeed, the robust optimization approach (also known as the worst-case approach) deals with optimization problems in which a certain measure of robustness is sought against uncertainty that can be represented as deterministic variability in the values of the parameters of the problem itself and/or its solution; see, for example, Crespi et al. (2018), Lee and Lee (2018), and Wiecek and Dranichak (2016). It is well known that a robust convex quadratic optimization problem under ellipsoidal data uncertainty enjoys exact SDP relaxation, as it can be equivalently reformulated as a semidefinite programming problem (see Ben-Tal and Nemirovski 1998). In the same vein, a robust convex quadratic optimization problem under restricted ellipsoidal data uncertainty can be equivalently reformulated as a second-order cone programming problem (see Goldfarb and Iyengar 2003).
In this paper, we are interested in finding robust efficient solutions to \(\mathrm{(MP)}\) in the face of data uncertainty, where the functions involved are still SOS-convex polynomials. This model problem under data uncertainty in the constraints can be captured by the following problem:
where \(u_i\) is an uncertain parameter and \(u_i \in {{\mathcal {U}}}_i\) for some nonempty compact and convex uncertain set \({{\mathcal {U}}}_i \subseteq {{\mathbb {R}}}^{q_i},\) \(q_i\in {\mathbb {N}},\) and the uncertain data is affinely parameterized in the sense that (Jeyakumar et al. 2015)
where \(g_i^r,\) \(r=0,\ldots ,t_i,\) are SOS-convex polynomials and \(g_i^r,\) \(r=t_i+1,\ldots ,q_i,\) are affine functions, for all \(i=1,\ldots ,m.\)
Following the robust optimization approach, the robust counterpart of \(\mathrm{(UMP)}\) is given by
-
For the case \(p=1,\) finding robust optimal solutions, that are immunized against data uncertainty, has become an important research topic in convex optimization and has recently been extensively studied in the literature; see, for example, Bertsimas et al. (2011), Chuong and Jeyakumar (2018), and Jeyakumar et al. (2013) and the references therein.
-
However, in the case \(p\ge 2\), as far as we know, there are not many results on finding efficient solutions that are immunized against data uncertainty, although some theoretical results have been investigated; see, for example, Chuong (2016), and Lee and Lee (2018) and the references therein. This motivates the present research paper.
In this paper, we make the following contributions to the robust multiobjective convex optimization problem (RMP):
-
(i)
By employing the \(\epsilon \)-constraint method (Chankong and Haimes 1983; Ehrgott 2005), we replace \(\mathrm{(RMP)}\) by a class of scalar problems, and each scalar problem enjoys a zero duality gap in the sense that the optimal values of the scalar objective problem and its associated relaxation problems are equal under the Slater condition and the strict feasibility condition (see Theorem 3.1).
-
(ii)
We study how to find efficient solutions of \(\mathrm{(RMP)}\) (see Theorems 3.3 and 3.4). A nontrivial numerical example is designed to show how to find efficient solutions of \(\mathrm{(RMP)}.\)
The paper is organized as follows. Section 2 presents notation and preliminaries. Section 3 states the main results of the paper on how to find efficient solutions of \(\mathrm{(RMP)}.\) Section 4 provides a numerically tractable example. Finally, conclusions are given in Sect. 5.
2 Notation and preliminaries
First of all, let us recall some notation and preliminaries. \({{\mathbb {R}}}^n\) denotes the n-dimensional Euclidean space. The non-negative orthant of \({{\mathbb {R}}}^n\) is denoted by \({{\mathbb {R}}}_{+}^n.\) Let \(S^n\) be the set of \(n\times n\) symmetric matrices. A matrix \(X\in S^n\) is positive semidefinite, denoted by \(X\succeq 0,\) if \(z^TXz\ge 0\) for all \(z\in {{\mathbb {R}}}^n.\) We denote by \(S_+^n\) the set of \(n\times n\) symmetric positive semidefinite matrices. For \(M, N\in S^n,\) \(\langle M, N\rangle \) stands for \(\mathrm{trace}(MN).\)
The space of all real polynomials on \({{\mathbb {R}}}^n\) is denoted by \({{\mathbb {R}}}[x];\) moreover, the space of all real polynomials on \({{\mathbb {R}}}^n\) with degree at most d is denoted by \({{\mathbb {R}}}[x]_d.\) The degree of a polynomial f is denoted by \(\deg f.\) We say that a real polynomial f is a sum of squares if there exist real polynomials \(q_\ell ,\) \(\ell = 1,\ldots ,r,\) such that \(f =\sum _{\ell =1}^{r}q_{\ell }^2.\) The set of all sums of squares of real polynomials with degree at most d is denoted by \({\Sigma }_{d}^2.\) For a multi-index \(\alpha \in {\mathbb {N}}^n,\) let \(|\alpha |:=\sum _{i=1}^n\alpha _i,\) and let \({\mathbb {N}}^n_d:= \{\alpha \in {\mathbb {N}}^n : |\alpha |\le d\}.\) \(x^\alpha \) denotes the monomial \(x_1^{\alpha _1}\cdots x_n^{\alpha _n}.\) The canonical basis of \({{\mathbb {R}}}[x]_d\) is denoted by
which has dimension \(s(n,d):=\left( \begin{array}{c} n+d \\ d \end{array}\right) .\) Given a polynomial \(f\in {{\mathbb {R}}}[x]\) with vector of coefficients \((f_\alpha ),\) let \(\mathrm{supp}(f) \subset {\mathbb {N}}^n\) denote the support of f, i.e., the set \(\{ \alpha \in {\mathbb {N}}^n : f_\alpha \ne 0\}.\) The Newton polytope of \(f\in {{\mathbb {R}}}[x]\) is the convex hull of \(\mathrm{supp}(f)\). Given an s(n, 2d)-vector \(y:= (y_\alpha )_{\alpha \in {\mathbb {N}}^n_{2d}}\) with \(y_\mathbf{0 }=1,\) let \(M_{d}(y)\) be the moment matrix of dimension s(n, d), with rows and columns labeled by (1). For example, for \(n=2\) and \(d=1,\)
In what follows, for convenience, we denote \(M_{d}(y):=\sum _{\alpha \in {\mathbb {N}}^n_{2d}}y_\alpha B_\alpha ,\) where \(y=(y_\alpha )_{\alpha \in {\mathbb {N}}^n_{2d}}\) with \(y_\mathbf{0 }=1\), and for each \(\alpha \in {\mathbb {N}}^n_d,\) \(B_\alpha \) is an \(s(n,d)\times s(n,d)\) symmetric matrix, which is understood from the definition of \(M_d(y).\)
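For illustration only, the moment matrix above can be assembled programmatically; the following Python sketch (the helper names `monomial_basis` and `moment_matrix` are ours, not notation from the paper) reproduces, for \(n=2\) and \(d=1,\) the \(3\times 3\) moment matrix \(M_1(y)\) with rows and columns labeled by the basis \((1,x_1,x_2).\)

```python
import numpy as np
from itertools import combinations_with_replacement

def monomial_basis(n, d):
    """Exponents alpha in N^n with |alpha| <= d: the canonical basis (1) of R[x]_d."""
    basis = []
    for total in range(d + 1):
        for combo in combinations_with_replacement(range(n), total):
            alpha = [0] * n
            for i in combo:
                alpha[i] += 1
            basis.append(tuple(alpha))
    return basis

def moment_matrix(y, n, d):
    """Moment matrix M_d(y): the entry indexed by (alpha, beta) equals y_{alpha+beta}."""
    basis = monomial_basis(n, d)
    size = len(basis)  # s(n, d) = binom(n + d, d)
    M = np.empty((size, size))
    for i, a in enumerate(basis):
        for j, b in enumerate(basis):
            M[i, j] = y[tuple(ai + bi for ai, bi in zip(a, b))]
    return M

# For n = 2, d = 1, take y to be the moments of a point mass at (2, 3), so that
# y_alpha = x^alpha and M_1(y) = v_1(x) v_1(x)^T with v_1(x) = (1, x_1, x_2)^T.
y = {(0, 0): 1.0, (1, 0): 2.0, (0, 1): 3.0, (2, 0): 4.0, (1, 1): 6.0, (0, 2): 9.0}
M = moment_matrix(y, 2, 1)
```

Here M equals the rank-one outer product \((1,2,3)(1,2,3)^T,\) consistent with \(y_\mathbf{0 }=1\) and \(M_d(y)\succeq 0.\)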
We now recall the notion of SOS-convex polynomials.
Definition 2.1
(Ahmadi and Parrilo 2012, 2013; Helton and Nie 2010) A real polynomial f on \({{\mathbb {R}}}^n\) is called SOS-convex if the polynomial
is a sum of squares polynomial in \({{\mathbb {R}}}[x;y]\) (with respect to variables x and y).
Observe that every SOS-convex polynomial is convex. However, the converse is not true: there exists a convex polynomial which is not SOS-convex (Ahmadi and Parrilo 2012, 2013). In addition, whether a polynomial is SOS-convex can be checked numerically by solving a semidefinite program.
The following proposition shows how to verify whether a polynomial can be written as a sum of squares via semidefinite optimization.
Proposition 2.1
(Lasserre 2009a) A polynomial \(f\in {{\mathbb {R}}}[x]_{2d}\) has a sum of squares decomposition if and only if there exists \(Q\in S^{s(n,d)}_+\) such that \(f(x)=\langle v_d(x)v_d(x)^T,Q\rangle \) for all \(x\in {{\mathbb {R}}}^n.\)
Consider a polynomial \(f\in {{\mathbb {R}}}[x]_{2d},\) and let \(v_d(x)v_d(x)^T:=\sum _{\alpha \in {\mathbb {N}}^n_{2d}}x^\alpha B_\alpha .\) Then \(f(x)=\sum _{\alpha \in {\mathbb {N}}^n_{2d}}f_\alpha x^\alpha \) is a sum of squares if and only if we can solve the following semidefinite feasibility problem (Lasserre 2009a):
3 Finding efficient solutions of \(\mathrm{(RMP)}\)
In this section, we focus on the study of finding efficient solutions of \(\mathrm{(RMP)}.\) Now, let us recall the robust multiobjective SOS-convex polynomial optimization problem
Let \(K:=\{x\in {{\mathbb {R}}}^n: g_i(x,u_i) \le 0, \ \forall u_i\in {{\mathcal {U}}}_i, \ i\in I\}\) be the set of feasible solutions of \(\mathrm{(RMP)},\) where \(I:=\{1,\ldots ,m\}.\) Let \(2d:=\max \{\mathrm{deg}f_1,\ldots ,\mathrm{deg}f_p,\) \( \mathrm{deg}g_1(\cdot ,u_1),\ldots ,\) \(\mathrm{deg}g_m(\cdot ,u_m)\}.\)
Hereafter, we consider the problem \(\mathrm{(RMP)}\) with restricted spectrahedron data uncertainty sets (Chieu et al. 2018; Chuong 2017, 2018), which are compact and convex sets given, for each \(i\in I,\) by
where \(A_i^r,\) \(r=0,1,\ldots ,q_i,\) are \(s_i\times s_i\) symmetric matrices with some \(s_i\in {\mathbb {N}}.\)
Now, we recall the notions of (robust) efficient solutions, which can be found in (Wiecek and Dranichak 2016, Section 4).
Definition 3.1
A point \({\bar{x}}\in K\) is said to be a robust efficient solution of \((\mathrm{UMP});\) equivalently, \({\bar{x}}\in K\) is an efficient solution of \((\mathrm{RMP}),\) if
In the present paper, we adopt the famous \(\epsilon \)-constraint method—one kind of scalarization methods—to study the problem (RMP). Indeed, the \(\epsilon \)-constraint method was minutely studied by Chankong and Haimes (1983) (see also Ehrgott 2005) and improved by Ehrgott and Ruzika (2008); moreover, it is shown to be an effective method to find efficient solutions of multicriteria optimization problem.
Now, we substitute \((\mathrm{RMP})\) by the scalar problems as follows:
where \(\epsilon \in {{\mathbb {R}}}^p.\) Note that the component \(\epsilon _j\) is irrelevant for \((\mathrm{RP}_j(\epsilon ));\) the convention of including it will be convenient for our later analysis. For each \(j=1,\ldots ,p,\) let \(K_j(\epsilon ):=\{x\in K : f_k(x)\le \epsilon _k, \ k\ne j\}\) be the feasible set of \((\mathrm{RP}_j(\epsilon )).\)
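To illustrate the scalarization alone (on a deterministic one-dimensional toy instance of our own devising, with no data uncertainty), take \(p=2,\) \(f_1(x)=x^2,\) and \(f_2(x)=(x-2)^2:\) then \((\mathrm{RP}_1(\epsilon ))\) minimizes \(f_1\) over \(\{x : f_2(x)\le \epsilon _2\}.\) The sketch below solves it by ternary search on the feasible interval; a genuine instance of \((\mathrm{RP}_j(\epsilon ))\) would instead be handled through the SDP relaxations developed below.

```python
import math

def f1(x):
    return x * x

def f2(x):
    return (x - 2.0) ** 2

def solve_rp1(eps2, iters=200):
    """epsilon-constraint scalar problem RP_1(eps): min f1 over {x : f2(x) <= eps2}.
    Here {f2 <= eps2} is the interval [2 - sqrt(eps2), 2 + sqrt(eps2)], and f1 is
    convex, so ternary search on that interval finds the minimizer."""
    a, b = 2.0 - math.sqrt(eps2), 2.0 + math.sqrt(eps2)
    for _ in range(iters):
        m1, m2 = a + (b - a) / 3.0, b - (b - a) / 3.0
        if f1(m1) <= f1(m2):
            b = m2
        else:
            a = m1
    return (a + b) / 2.0
```

Tightening \(\epsilon _2\) from 4 to 0 moves the minimizer from \(x=0\) to \(x=2,\) tracing the efficient frontier of the pair \((f_1,f_2).\)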
Let \(j\in \{1,\ldots ,p\}\) be arbitrarily fixed. Then, the corresponding sum of squares relaxation dual problem of \((\mathrm{RP}_j(\epsilon ))\) with degree 2d is the following problem:
Thanks to Proposition 2.1, \((\mathrm{RD}_j(\epsilon ))\) can be rewritten as the following semidefinite programming problem:
The dual problem of \((\mathrm{SDD}_j(\epsilon ))\) can be stated as follows:
Let \(v(\cdot )\) be the optimal value of the problem \((\cdot ),\) correspondingly. For example, \(v(\mathrm{RP}_j(\epsilon ))\) stands for the optimal value of the problem \((\mathrm{RP}_j(\epsilon )).\)
Definition 3.2
For each fixed \(j=1,\ldots ,p,\) we say that the Slater condition holds for \((\text {RP}_j(\epsilon )),\) if there exists \({\hat{x}}\in {{\mathbb {R}}}^n\) such that
Definition 3.3
We say a strict feasibility condition holds, if for each \(i\in I,\) there exists \({\hat{u}}_i\in {{\mathbb {R}}}^{t_i}_+\times {{\mathbb {R}}}^{q_i-t_i}\) such that \(A_i^0+\sum _{r=1}^{q_i}{\hat{u}}_i^rA_i^r\succ 0.\)
The following lemma, which guarantees the nonemptiness of the solution set of our problem and will play an important role in Theorem 3.1 (see below), was established by Belousov and Klatte (2002).
Lemma 3.1
(Belousov and Klatte 2002) Let \(f, g_1,\ldots , g_m\) be convex polynomials on \({{\mathbb {R}}}^n.\) Let \(F := \{x \in {{\mathbb {R}}}^n : g_i(x) \le 0, \ i = 1,\ldots ,m\}.\) Suppose that \(\inf _{x\in F}f(x) >-\infty .\) Then, \(\mathop {\mathrm{argmin}}\limits _{x\in F}f(x)\ne \emptyset .\)
It is also worth mentioning that, recently, Helton and Nie (2010) established a key property of SOS-convex polynomials, which also plays an important role in the proof of Theorem 3.1.
Lemma 3.2
(Helton and Nie 2010, Lemma 8) Let f be an SOS-convex polynomial. Suppose that there exists \(\bar{x}\in {\mathbb {R}}^n\) such that \(f({\bar{x}}) = 0\) and \(\nabla f({\bar{x}}) = 0.\) Then, f is a sum of squares polynomial.
Below, we give a zero duality gap result for \((\mathrm{RP}_j(\epsilon )),\) \((\mathrm{SDD}_j(\epsilon )),\) and \((\mathrm{SDP}_j(\epsilon )),\) under the Slater condition and the strict feasibility condition. Note that the proof is motivated by (Chieu et al. 2018, Theorem 4.3).
Theorem 3.1
For each fixed \(j=1,\ldots ,p,\) if the Slater condition and the strict feasibility condition for \((\mathrm{RP}_j(\epsilon ))\) hold, then
Proof
Let \(j\in \{1,\ldots ,p\}\) be arbitrarily fixed. We first claim that
Let \(\gamma _j:=v(\mathrm{RP}_j(\epsilon ))\in {{\mathbb {R}}}.\) Since the Slater condition for \((\mathrm{RP}_j(\epsilon ))\) holds, by the Lagrangian duality for convex optimization problems (Boyd and Vandenberghe 2004), we see that
where \(u=(u_1,\ldots ,u_m)\in {{\mathcal {U}}}:=\prod _{i=1}^m{{\mathcal {U}}}_i\) and
Note that \(g_i(\cdot , u_i)\) is convex for each fixed \(u_i\) and \(g_i (x, \cdot )\) is concave for each fixed x. Since \({{\mathcal {U}}}\) is compact and convex, by the convex-concave minimax theorem (Rockafellar 1970), we have for each \(\mu \in {{\mathbb {R}}}^{p-1}_+\) and \(\lambda \in {{\mathbb {R}}}^m_+,\)
This, together with (2), yields
So, there exist \({\tilde{\mu }}_k\ge 0,\) \(k\ne j,\) \({\tilde{\lambda }}_i\ge 0,\) and \({\tilde{u}}_i\in {{\mathcal {U}}}_i,\) \(i\in I,\) such that
Let \(\varPhi (x):=f_j(x)+\sum _{k\ne j}{\tilde{\mu }}_k( f_k(x)-\epsilon _k)+\sum _{i=1}^m{\tilde{\lambda }}_ig_i(x,{\tilde{u}}_i)-\gamma _j.\) Since \(f_\ell ,\) \(\ell =1,\ldots ,p,\) and \(g_i(\cdot ,{\tilde{u}}_i),\) \(i\in I,\) are SOS-convex polynomials, so is \(\varPhi .\) Moreover, since \(\inf _{x\in {{\mathbb {R}}}^n}\varPhi (x)=0,\) by Lemma 3.1, there exists \({\bar{x}}\in \mathop {\mathrm{argmin}}\limits _{x\in {{\mathbb {R}}}^n}\varPhi (x)\) such that \(\varPhi ({\bar{x}})=0,\) and so, \(\nabla \varPhi ({\bar{x}})=0.\) It follows from Lemma 3.2 that \(\varPhi \) is a sum of squares polynomial, i.e.,
On the other hand, for each \(i=1,\ldots , m,\) as \({\tilde{u}}_i=({\tilde{u}}_i^1,\ldots ,{\tilde{u}}_i^{q_i})\in {{\mathcal {U}}}_i,\)
Let \(\lambda _i^0:={\tilde{\lambda }}_i\) and \(\lambda _i^r:={\tilde{\lambda }}_i{\tilde{u}}_i^r,\) \(i\in I,\) \(r=1,\ldots ,q_i.\) It follows from (3) and (4) that
and
It means that \((\gamma _j,\mu ,\varvec{\lambda }_1,\ldots ,\varvec{\lambda }_m)\in {{\mathbb {R}}}\times {{\mathbb {R}}}_+^{p-1}\times ({{\mathbb {R}}}_+\times {{\mathbb {R}}}^{t_1}_+\times {{\mathbb {R}}}^{q_1-t_1})\times \cdots \times ({{\mathbb {R}}}_+\times {{\mathbb {R}}}^{t_m}_+\times {{\mathbb {R}}}^{q_m-t_m})\) is feasible for \((\mathrm{RD}_j(\epsilon )),\) where \(\varvec{\lambda }_i:=(\lambda _i^0,\lambda _i^1,\ldots ,\lambda _i^{q_i}),\) \(i\in I.\) So, we have
Moreover, \(v(\mathrm{RD}_j(\epsilon ))=v(\mathrm{SDD}_j(\epsilon ))\) obviously holds by the construction of \((\mathrm{RD}_j(\epsilon ))\) and \((\mathrm{SDD}_j(\epsilon )).\)
Note that \((\mathrm{SDD}_j(\epsilon ))\) and \((\mathrm{SDP}_j(\epsilon ))\) are dual problems to each other. So, by the usual weak duality for semidefinite programming (Vandenberghe and Boyd 1996), we see that \(v(\mathrm{SDD}_j(\epsilon ))\le v(\mathrm{SDP}_j(\epsilon )).\)
To finish the proof of this theorem, it remains to show that \(v(\mathrm{RP}_j(\epsilon ))\ge v(\mathrm{SDP}_j(\epsilon )).\) Let x be any feasible solution of \((\mathrm{RP}_j(\epsilon ))\) and let \(\gamma _j:=f_j(x).\) Then \(f_k(x)\le \epsilon _k,\) \(k\ne j,\) and
where for each \(i\in I,\) the compact and convex set \({{\mathcal {U}}}_i\subset {{\mathbb {R}}}^{q_i}\) is given by
It follows from the strict feasibility condition and the strong duality theorem for semidefinite programming (Vandenberghe and Boyd 1996) that there exist \(Z_i\succeq 0,\) \(i\in I,\) such that
Let \(y:=(1,x_1 ,\ldots , x_n, x_1^2, x_1x_2 ,\ldots , x_1^{2d},\ldots , x_n^{2d})^T.\) Then we have
and for each \(i\in I,\)
Moreover, \(yy^T = \sum _{\alpha \in {\mathbb {N}}_{2d}^n}y_\alpha B_\alpha \succeq 0\) with \(y_\mathbf{0 }=1.\) So, \((y, Z_1,\ldots , Z_m)\) is a feasible solution of \((\mathrm{SDP}_j(\epsilon )),\) and hence, we see that
It means that \(v(\mathrm{RP}_j(\epsilon ))\ge v(\mathrm{SDP}_j(\epsilon )).\) Thus, we obtain the desired result. \(\square \)
Remark 3.1
In Theorem 3.1, it is worth mentioning that \(v(\mathrm{RP}_j(\epsilon ))=v(\mathrm{RD}_j(\epsilon ))\) can be proved under the Slater condition for \((\mathrm{RP}_j(\epsilon ))\) alone (the strict feasibility condition is not necessary), owing to the weak duality relationship between \((\mathrm{RP}_j(\epsilon ))\) and \((\mathrm{RD}_j(\epsilon ));\) however, for simplicity of the proof of Theorem 3.1, we omit the details.
In what follows, we recall a result obtained by Lasserre (2009b), which we will use in the proof of Theorem 3.2; indeed, the result extends Jensen's inequality to a class of linear functionals that are not necessarily probability measures, when restricted to the class of SOS-convex polynomials.
Lemma 3.3
(Lasserre 2009b) Let \(f\in {{\mathbb {R}}}[x]_{2d}\) be SOS-convex, and let \(y=(y_\alpha )_{\alpha \in {\mathbb {N}}_{2d}^n}\) satisfy \(y_\mathbf{0 }=1\) and \(\sum _{\alpha \in {\mathbb {N}}_{2d}^n}y_\alpha B_\alpha \succeq 0.\) Let \(L_y : {{\mathbb {R}}}[x]\rightarrow {{\mathbb {R}}}\) be a linear functional defined by \(L_y(f):=\sum _{\alpha \in {\mathbb {N}}_{2d}^n} f_\alpha y_\alpha ,\) where \(f(x)=\sum _{\alpha \in {\mathbb {N}}_{2d}^n} f_\alpha x^\alpha .\) Then
where \(L_y(x):=(L_y(x_1),\ldots ,L_y(x_n)).\)
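A quick numerical sanity check of Lemma 3.3 (our own illustration, not part of the paper's argument): take \(y\) to be the moments of an empirical measure on \({\mathbb {R}}\) — so \(y_\mathbf{0 }=1\) and the moment matrix is automatically positive semidefinite — and \(f(x)=x^4,\) which is SOS-convex; the inequality \(L_y(f)\ge f(L_y(x))\) then reduces to Jensen's inequality.

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.normal(size=50)

# y_alpha = L_y(x^alpha): moments of the empirical measure on pts (n = 1, 2d = 4)
y = [float(np.mean(pts ** k)) for k in range(5)]

# L_y applied to f(x) = x^4 and to the coordinate x
L_f = y[4]   # L_y(f)
L_x = y[1]   # L_y(x)

# Lemma 3.3: L_y(f) >= f(L_y(x)) for SOS-convex f
assert L_f >= L_x ** 4

# the hypotheses hold: y_0 = 1 and M_2(y) = (y_{i+j})_{0<=i,j<=2} is PSD
M = np.array([[y[i + j] for j in range(3)] for i in range(3)])
assert abs(y[0] - 1.0) < 1e-12 and np.linalg.eigvalsh(M)[0] >= -1e-9
```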
The following theorem provides a way to find an optimal solution to problem \((\mathrm{RP}_j(\epsilon ))\) from an optimal solution of \((\mathrm{SDP}_j(\epsilon )).\)
Theorem 3.2
For each fixed \(j=1,\ldots ,p,\) assume that the Slater condition for \((\mathrm{RP}_j(\epsilon ))\) and the strict feasibility condition hold. If \(({\bar{y}}, {\bar{Z}}_1,\ldots ,{\bar{Z}}_m)\) is an optimal solution of \((\mathrm{SDP}_j(\epsilon )),\) then \({\bar{x}}:=(L_{{\bar{y}}}(x_1),\ldots ,L_{{\bar{y}}}(x_n))\) is an optimal solution of \((\mathrm{RP}_j(\epsilon )).\)
Proof
Let \(j\in \{1,\ldots ,p\}\) be arbitrarily fixed. Assume that \(({\bar{y}}, {\bar{Z}}_1,\ldots ,{\bar{Z}}_m)\) is an optimal solution of \((\mathrm{SDP}_j(\epsilon )).\) Then, we have
Let \((u_i^1,\ldots ,u_i^{q_i})\in {{\mathcal {U}}}_i,\) \(i\in I,\) be arbitrarily given. Then for each \(i\in I,\) \(A_i^0+\sum _{r=1}^{q_i}u_i^rA_i^r\succeq 0.\) It follows from (5) and (6) that
Note that for each \(i\in I,\) \(g_i^0+\sum _{r=1}^{q_i}u_i^rg_i^r\) is SOS-convex. Since \({\bar{y}}\) satisfies (7), by Lemma 3.3, we see that for each \(i\in I,\)
This, together with (8), yields that
Also, a similar argument as above shows that
So, from (9) and (10), \({\bar{x}}\) is a feasible solution of \((\mathrm{RP}_j(\epsilon )),\) since for each \(i\in I,\) \((u_i^1,\ldots ,u_i^{q_i})\in {{\mathcal {U}}}_i\) is arbitrary. Moreover, by an argument similar to (9), we see that \(\sum _{\alpha \in {\mathbb {N}}_{2d}^n}(f_j)_\alpha {\bar{y}}_\alpha \ge f_j({\bar{x}}).\) It follows from Theorem 3.1 that
Thus, \({\bar{x}}\) is an optimal solution of \((\mathrm{RP}_j(\epsilon )).\) \(\square \)
The following proposition, which underlies the \(\epsilon \)-constraint method and plays a key role in our main result, shows that an efficient solution of \(\mathrm{(RMP)}\) corresponds to an optimal solution of \((\mathrm{RP}_j({\bar{\epsilon }}))\) for all \(j=1,\ldots ,p\) with the same \({\bar{\epsilon }}.\)
Proposition 3.1
(Chankong and Haimes 1983; Ehrgott 2005) A feasible solution \({\bar{x}}\) is an efficient solution of \(\mathrm{(RMP)}\) if and only if there exists \({\bar{\epsilon }}\in {{\mathbb {R}}}^{p}\) such that \({\bar{x}}\) is an optimal solution of \((\mathrm{RP}_j({\bar{\epsilon }}))\) for all \(j = 1,\ldots , p.\)
Now, we give a theorem which shows how to find efficient solutions of \(\mathrm{(RMP)}\) (equivalently, robust efficient solutions of \(\mathrm{(UMP)}\)) by using the \(\epsilon \)-constraint method.
Theorem 3.3
Let \({\bar{x}}_{(0)}\in K\) be arbitrarily given. Assume that for \(j=1,\ldots ,p,\)
where \({\bar{\epsilon }}_{(j)}:=(f_1({\bar{x}}_{(j-1)}),\ldots ,f_p({\bar{x}}_{(j-1)}))\in {{\mathbb {R}}}^p.\) Then, \({\bar{x}}_{(p)}\) is a robust efficient solution of \(\mathrm{(UMP)}.\)
Proof
For \(j=1,\ldots ,p,\) let \({\bar{x}}_{(j)}\in \mathop {\mathrm{argmin}}\limits _{x\in K_j({\bar{\epsilon }}_{(j)})}f_j(x).\) Note that for each \(j=1,\ldots ,p,\) the feasible set of \((\mathrm{RP}_j({\bar{\epsilon }}_{(j)}))\) is as follows:
Since for each \(j=1,\ldots ,p,\) \({\bar{x}}_{(j)}\in K_j({\bar{\epsilon }}_{(j)}),\)
Moreover, since for each \(j=1,\ldots ,p,\) \({\bar{x}}_{(j)}\in \mathop {\mathrm{argmin}}\limits _{x\in K_j({\bar{\epsilon }}_{(j)})}f_j(x)\),
for any \(x\in K_j({\bar{\epsilon }}_{(j)}).\) Since for each \(j=1,\ldots ,p,\) \({\bar{x}}_{(j-1)}\in K_j({\bar{\epsilon }}_{(j)}),\) from (12), we see that
So, by (11) and (13), we obtain
Hence, we see that for each \(j=1,\ldots ,p,\) \({\bar{x}}_{(p)}\in K_j({\bar{\epsilon }}_{(p)})\subseteq K_j({\bar{\epsilon }}_{(j)}).\) So, we have for each \(j=1,\ldots ,p,\)
Since, by (14), for each \(j=1,\ldots ,p,\) \(f_j({\bar{x}}_{(p)})\le f_j({\bar{x}}_{(j)}),\) we have for each \(j=1,\ldots ,p,\)
It means that \({\bar{x}}_{(p)}\) is an optimal solution of \((\mathrm{RP}_j({\bar{\epsilon }}_{(p)}))\) for all \(j=1,\ldots ,p.\) Thus, by Proposition 3.1, we obtain the desired result. \(\square \)
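The p-step scheme of Theorem 3.3 can be sketched generically as follows; here `solve_rp` is a placeholder (the name and interface are ours) for whatever routine solves \((\mathrm{RP}_j(\epsilon ))\) — in our setting, the SDP relaxation of Theorem 3.2.

```python
def robust_efficient_point(x0, objectives, solve_rp):
    """Sequential epsilon-constraint scheme of Theorem 3.3 (a sketch).

    objectives: the callables f_1, ..., f_p
    solve_rp(j, eps): placeholder returning a minimizer of f_j over
        {x in K : f_k(x) <= eps[k], k != j}; in the paper this is obtained by
        solving (SDP_j(eps)) and setting x = (L_y(x_1), ..., L_y(x_n)).
    """
    x = x0
    for j in range(len(objectives)):
        # eps_(j) = (f_1(x_(j-1)), ..., f_p(x_(j-1))); component j is irrelevant
        eps = [f(x) for f in objectives]
        x = solve_rp(j, eps)
    return x  # x_(p) is robust efficient by Theorem 3.3
```

For instance, with the uncertainty-free toy objectives \(f_1(x)=x^2\) and \(f_2(x)=(x-2)^2\) on \({\mathbb {R}}\) and a closed-form `solve_rp`, the loop returns a point of the efficient set [0, 2].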
Immediately, according to Theorems 3.2 and 3.3, we obtain the following theorem, which shows that finding efficient solutions of \((\mathrm{RMP})\) is tractable under the Slater condition and the strict feasibility condition.
Theorem 3.4
Let \({\bar{x}}_{(0)}\in K\) be arbitrarily given. Assume that for \(j=1,\ldots ,p,\)
where for each \(j=1,\ldots ,p,\) \({\bar{\epsilon }}_{(j)}:=(f_1({\bar{x}}_{(j-1)}),\ldots ,f_p({\bar{x}}_{(j-1)}))\in {{\mathbb {R}}}^p\) and \({\bar{x}}_{(j)}:=(L_{{\bar{y}}_{(j)}}(x_1),\ldots ,L_{{\bar{y}}_{(j)}}(x_n)).\) Assume that for each \(j=1,\ldots ,p,\) the Slater condition for \((\mathrm{RP}_j({\bar{\epsilon }}_{(j)}))\) and the strict feasibility condition hold. Then \((L_{{\bar{y}}_{(p)}}(x_1),\ldots ,L_{{\bar{y}}_{(p)}}(x_n))\in K\) is a robust efficient solution of \(\mathrm{(UMP)}.\)
Proof
Assume that for \(j=1,\ldots ,p,\) \(\left( {\bar{y}}_{(j)}, ({\bar{Z}}_{(j)})_1,\ldots ,({\bar{Z}}_{(j)})_m\right) \) is an optimal solution of \((\mathrm{SDP}_j({\bar{\epsilon }}_{(j)})).\) Since for each \(j=1,\ldots ,p,\) the Slater condition and strict feasibility condition hold for \((\mathrm{RP}_j({\bar{\epsilon }}_{(j)})),\) by Theorem 3.2, for each \(j=1,\ldots ,p,\)
Thus, by Theorem 3.3, \({\bar{x}}_{(p)}=(L_{{\bar{y}}_{(p)}}(x_1),\ldots ,L_{{\bar{y}}_{(p)}}(x_n))\) is a robust efficient solution of \(\mathrm{(UMP)}.\) \(\square \)
Remark 3.2
Let \(j\in \{1,\ldots ,p\}\) be arbitrarily fixed, and let \(k\in \{1,\ldots ,p\}\) with \(k\ne j.\) Since the Slater condition for \((\text {RP}_j(\epsilon ))\) depends on the parameter \(\epsilon ,\) it often occurs that the Slater condition fails for some given \(\epsilon .\) In order to see this more clearly, let us first consider the following robust SOS-convex scalar optimization problem
It is worth mentioning that problem (15) enjoys a zero duality gap in the sense that the optimal values of problem (15) and its associated relaxation problems are equal, and an optimal solution of problem (15) can be found by solving an associated semidefinite programming problem under a suitable constraint qualification (Chieu et al. 2018). If \(\epsilon _k\) is chosen as the optimal value of problem (15), i.e., \(\epsilon _k=f_k(x^*),\) where \(x^*\in K\) is an optimal solution of problem (15), then the Slater condition for \((\text {RP}_j(\epsilon ))\) clearly does not hold, and so a zero duality gap between \((\text {RP}_j(\epsilon ))\) and its associated relaxation dual problems may fail. Despite this drawback, we still use the Slater condition, since its validity is usually easier to check than that of other, weaker regularity conditions, e.g., stability (see Geoffrion 1971). In turn, to overcome this drawback, one may impose more restrictive assumptions on the objective functions, e.g., strict convexity (see Lee and Jiao 2018).
4 A numerical example
In this section, we design an example to illustrate, with a detailed calculation process, how to find efficient solutions of a robust multicriteria SOS-convex polynomial optimization problem by using our results.
Example 4.1
Consider the following 2-dimensional robust multiobjective SOS-convex optimization problem:
where
Here, the uncertain sets \({{\mathcal {U}}}_1\) and \({{\mathcal {U}}}_2\) are given by
where \(A_1^0=\left( \begin{array}{ccc} 1 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 \\ 0 &{} 0 &{} 1 \\ \end{array}\right) ,\) \(A_1^1=\left( \begin{array}{ccc} 0 &{} 0 &{} 1 \\ 0 &{} 0 &{} 0 \\ 1 &{} 0 &{} 0 \\ \end{array}\right) ,\) \(A_1^2=\left( \begin{array}{ccc} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 1 \\ 0 &{} 1 &{} 0 \\ \end{array}\right) ,\) \(A_2^0=\left( \begin{array}{ccc} 1 &{} 2 &{} 0 \\ 2 &{} 1 &{} 0 \\ 0 &{} 0 &{} 1 \\ \end{array}\right) ,\) and \(A_2^1=\left( \begin{array}{rrr} 0 &{} -1 &{}0 \\ -1 &{} 0 &{}0 \\ 0 &{} 0 &{}0\\ \end{array}\right) .\) Then, by simple calculations, we see that
Let \(({\hat{u}}_1^1,{\hat{u}}_1^2)=(0,0)\in {{\mathcal {U}}}_1,\) and let \({\hat{u}}_2^1=2\in {{\mathcal {U}}}_2.\) Then, we see that the strict feasibility condition holds. In addition, \(f_1,\) \(f_2,\) \(g_1^0,\) \(g_2^0,\) and \(g_2^1\) are SOS-convex polynomials, and \(g_1^1\) and \(g_1^2\) are affine functions. Moreover, by simple calculations, the robust feasible set K can be found as follows:
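These choices can be verified numerically: the short sketch below (our own illustration; the helper `strictly_feasible` is not from the paper) checks the positive definiteness required by Definition 3.3 via the smallest eigenvalue.

```python
import numpy as np

# the data matrices of the uncertainty sets U_1 and U_2 above
A10 = np.eye(3)
A11 = np.array([[0., 0., 1.], [0., 0., 0.], [1., 0., 0.]])
A12 = np.array([[0., 0., 0.], [0., 0., 1.], [0., 1., 0.]])
A20 = np.array([[1., 2., 0.], [2., 1., 0.], [0., 0., 1.]])
A21 = np.array([[0., -1., 0.], [-1., 0., 0.], [0., 0., 0.]])

def strictly_feasible(A0, As, u):
    """Check A0 + sum_r u_r A_r > 0 (Definition 3.3) via its smallest eigenvalue."""
    M = A0 + sum(ur * Ar for ur, Ar in zip(u, As))
    return np.linalg.eigvalsh(M)[0] > 0

# \hat u_1 = (0, 0): the matrix is A_1^0 = I, which is positive definite
assert strictly_feasible(A10, [A11, A12], (0.0, 0.0))
# \hat u_2^1 = 2: A_2^0 + 2 A_2^1 = I, which is positive definite
assert strictly_feasible(A20, [A21], (2.0,))
```

With \(({\hat{u}}_1^1,{\hat{u}}_1^2)=(0,0)\) the matrix is \(A_1^0=I\succ 0,\) and with \({\hat{u}}_2^1=2\) we get \(A_2^0+2A_2^1=I\succ 0,\) confirming the strict feasibility condition.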
Now, we substitute \( ({\widetilde{\mathrm{RMP}}})\) by the \(\epsilon \)-constraint problems as follows:
where for each \(j=1,2,\) \({\bar{\epsilon }}_{(j)}\) is given by \({\bar{\epsilon }}_{(1)}:=f_2({\bar{x}}_{(0)})\) and \({\bar{\epsilon }}_{(2)}:=f_1({\bar{x}}_{(1)}),\) respectively. Here, \({\bar{x}}_{(0)}\in K\) is given, and \({\bar{x}}_{(1)}\in \mathop {\mathrm{argmin}}\limits _{x\in K_1({\bar{\epsilon }}_{(1)})}f_1(x),\) where \(K_1({\bar{\epsilon }}_{(1)})=\{x\in K : f_2(x)\le f_2({\bar{x}}_{(0)})\}.\)
We now consider the following sum of squares relaxation dual problem of \((\mathrm{RP}_j({\bar{\epsilon }}_{(j)}))\):
Since \(f_j+\sum _{k\ne j}\mu _k(f_k-\epsilon _k)+\sum _{r=0}^{2}\lambda _1^rg_1^r+\sum _{r=0}^{1}\lambda _2^rg_2^r-\gamma _j \in {\Sigma }_{8}^2,\) there exist real polynomials \(q_\ell ,\) \(\ell = 1,\ldots ,r,\) such that
Note that, by Proposition 2.1, there exists \(X\in S_+^{s(2,4)}(=S_+^{15})\) such that
However, the Newton polytope of the polynomial (17) is easily seen to be the convex hull of the points (8, 0), (0, 2), and (0, 0). It follows from (Reznick 1978, Theorem 1) that the Newton polytope of each \(q_\ell \) is contained in the convex hull of the points (4, 0), (0, 1), and (0, 0), so the polynomials \(q_\ell \) in the sum of squares decomposition of (16) have at most 6 distinct monomials, i.e., \(v_4(x)=(1,x_1,x_2,x_1^2,x_1^3,x_1^4)^T\) in (18). Consequently, the full sum of squares decomposition can be computed by solving a much smaller semidefinite programming problem. With this fact, we formulate the semidefinite programming problems for \((\mathrm{RD}_j({\bar{\epsilon }}_{(j)})),\) \(j=1,2,\) as follows:
Solving the above semidefinite programming problems using the MATLAB optimization package CVX (Grant and Boyd 2013) together with the SDP-solver SDPT3 (Toh et al. 1999), we can find optimal solutions for the dual problem of \((\mathrm{SDD}_j({\bar{\epsilon }}_{(j)})),\) \(j=1,2.\) For example, letting \({\bar{x}}_{(0)}=(1,0)\in K\) and \({\bar{\epsilon }}_{(1)}=f_2({\bar{x}}_{(0)})=1,\) we obtain an optimal solution \(({\bar{y}},{\bar{Z}}_1,{\bar{Z}}_2)\) of the dual problem of \((\mathrm{SDD}_1({\bar{\epsilon }}_{(1)}))\), where
It follows from Theorem 3.2 that \({\bar{x}}_{(1)}=(L_{{\bar{y}}}(x_1),L_{{\bar{y}}}(x_2))=(0, 0)\) is an optimal solution of \((\mathrm{RP}_1({\bar{\epsilon }}_{(1)})).\) Now, let \({\bar{\epsilon }}_{(2)}=f_1({\bar{x}}_{(1)})=0.\) Note that
i.e., \({\bar{x}}_{(2)}=(0, 0)\) is an optimal solution of \((\mathrm{RP}_2({\bar{\epsilon }}_{(2)})).\) It means that \({\bar{x}}:={\bar{x}}_{(2)}=(0,0)\) is an efficient solution of \( ({\widetilde{\mathrm{RMP}}}).\) In order to find more efficient solutions of \( ({\widetilde{\mathrm{RMP}}}),\) we choose 25 points \({\bar{x}}_{(0)}\) on the boundary of K, and then we obtain the efficient solutions of \( ({\widetilde{\mathrm{RMP}}})\) listed in Table 1. An illustration of the found efficient solutions of \( ({\widetilde{\mathrm{RMP}}})\) is given in Fig. 1a. Moreover, we choose 1000 points \({\bar{x}}_{(0)}\) in K; the efficient solutions of \(({\widetilde{\mathrm{RMP}}})\) for these points are depicted in Fig. 1b.
5 Conclusions
In this paper, we proved that finding efficient solutions in robust multiobjective optimization with SOS-convex polynomials is tractable by using the \(\epsilon \)-constraint method and techniques for SOS-convex polynomials. Although the uncertain set \({{\mathcal {U}}}\) is restricted to an affinely parameterized one, it would be very interesting, yet difficult, to investigate the case of more general uncertain sets \({{\mathcal {U}}}.\)
References
Ahmadi, A. A., & Parrilo, P. A. (2012). A convex polynomial that is not SOS-convex. Mathematical Programming, 135(1–2), 275–292.
Ahmadi, A. A., & Parrilo, P. A. (2013). A complete characterization of the gap between convexity and SOS-convexity. SIAM Journal on Optimization, 23(2), 811–833.
Beck, A., & Ben-Tal, A. (2009). Duality in robust optimization: Primal worst equals dual best. Operations Research Letters, 37(1), 1–6.
Belousov, E. G., & Klatte, D. (2002). A Frank–Wolfe type theorem for convex polynomial programs. Computational Optimization and Applications, 22(1), 37–48.
Ben-Tal, A., Ghaoui, L. E., & Nemirovski, A. (2009). Robust optimization. Princeton, NJ and Oxford: Princeton University Press.
Ben-Tal, A., & Nemirovski, A. (1998). Robust convex optimization. Mathematics of Operations Research, 23(4), 769–805.
Bertsimas, D., Brown, D., & Caramanis, C. (2011). Theory and applications of robust optimization. SIAM Review, 53(3), 464–501.
Boyd, S., & Vandenberghe, L. (2004). Convex optimization. Cambridge: Cambridge University Press.
Chankong, V., & Haimes, Y. Y. (1983). Multiobjective decision making: Theory and methodology. Amsterdam: North-Holland.
Chieu, N. H., Feng, J. W., Gao, W., Li, G., & Wu, D. (2018). SOS-convex semialgebraic programs and its applications to robust optimization: A tractable class of nonsmooth convex optimization. Set-Valued and Variational Analysis, 26(2), 305–326.
Chuong, T. D. (2016). Optimality and duality for robust multiobjective optimization problems. Nonlinear Analysis, 134, 127–143.
Chuong, T. D. (2017). Robust alternative theorem for linear inequalities with applications to robust multiobjective optimization. Operations Research Letters, 45, 575–580.
Chuong, T. D. (2018). Linear matrix inequality conditions and duality for a class of robust multiobjective convex polynomial programs. SIAM Journal on Optimization, 28(3), 2466–2488.
Chuong, T. D., & Jeyakumar, V. (2018). Tight SDP relaxations for a class of robust SOS-convex polynomial programs without the Slater condition. Journal of Convex Analysis, 25(4), 1159–1182.
Crespi, G. P., Kuroiwa, D., & Rocca, M. (2018). Quasiconvexity of set-valued maps assures well-posedness of robust vector optimization. Annals of Operations Research, 251(1–2), 89–104.
Doolittle, E. K., Kerivin, H. L. M., & Wiecek, M. M. (2018). Robust multiobjective optimization problem with application to internet routing. Annals of Operations Research, 271(2), 487–525.
Ehrgott, M. (2005). Multicriteria optimization (2nd ed.). New York: Springer.
Ehrgott, M., & Ruzika, S. (2008). Improved \(\epsilon \)-constraint method for multiobjective programming. Journal of Optimization Theory and Applications, 138(3), 375–396.
Geoffrion, A. M. (1971). Duality in nonlinear programming: A simplified applications-oriented development. SIAM Review, 13, 1–37.
Goldfarb, D., & Iyengar, G. (2003). Robust convex quadratically constrained programs. Mathematical Programming, 97(3), 495–515.
Grant, M. C., & Boyd, S. P. (2013). The CVX user’s guide, release 2.0. User manual. Available at http://cvxr.com/cvx.
Helton, J. W., & Nie, J. W. (2010). Semidefinite representation of convex sets. Mathematical Programming, 122(1), 21–64.
Ide, J., & Schöbel, A. (2016). Robustness for uncertain multi-objective optimization: A survey and analysis of different concepts. OR Spectrum, 38(1), 235–271.
Jeyakumar, V., Li, G., & Vicente-Pérez, J. (2015). Robust SOS-convex polynomial optimization problems: Exact SDP relaxations. Optimization Letters, 9(1), 1–18.
Jeyakumar, V., Li, G., & Wang, J. H. (2013). Some robust convex programs without a duality gap. Journal of Convex Analysis, 20(2), 377–394.
Lasserre, J. B. (2009a). Moments, positive polynomials and their applications. London: Imperial College Press.
Lasserre, J. B. (2009b). Convexity in semialgebraic geometry and polynomial optimization. SIAM Journal on Optimization, 19(4), 1995–2014.
Lee, J. H., & Jiao, L. G. (2018). Solving fractional multicriteria optimization problems with sum of squares convex polynomial data. Journal of Optimization Theory and Applications, 176(2), 428–455.
Lee, J. H., & Lee, G. M. (2018). On optimality conditions and duality theorems for robust semi-infinite multiobjective optimization problems. Annals of Operations Research, 269(1–2), 419–438.
Reznick, B. (1978). Extremal PSD forms with few terms. Duke Mathematical Journal, 45(2), 363–374.
Rockafellar, R. T. (1970). Convex analysis. Princeton, NJ: Princeton University Press.
Toh, K. C., Todd, M. J., & Tütüncü, R. H. (1999). SDPT3–A Matlab software package for semidefinite programming. Optimization Methods and Software, 11(1–4), 545–581.
Vandenberghe, L., & Boyd, S. (1996). Semidefinite programming. SIAM Review, 38(1), 49–95.
Wiecek, M. M., & Dranichak, G. M. (2016). Robust multiobjective optimization for decision making under uncertainty and conflict. In A. Gupta, & A. Capponi (Eds.), TutORials in operations research, optimization challenges in complex, networked, and risky systems (pp. 84–114). INFORMS.
Acknowledgements
The authors would like to express their sincere thanks to the anonymous referees for their very helpful and valuable suggestions and comments on the paper.
The first author was supported by the National Research Foundation of Korea (NRF) Grant funded by the Korea government (MSIP) (NRF-2017R1A5A1015722). The second author was supported by the National Research Foundation of Korea (NRF) Grant funded by the Korea government (MSIP) (NRF-2018R1C1B6001842).
Cite this article
Jiao, L., Lee, J.H. Finding efficient solutions in robust multiple objective optimization with SOS-convex polynomial data. Ann Oper Res 296, 803–820 (2021). https://doi.org/10.1007/s10479-019-03216-z
Keywords
- Multiobjective optimization
- Robust optimization
- Semidefinite programming relaxations
- SOS-convex polynomials