Abstract
This paper deals with an SOS-convex (sum of squares convex) polynomial optimization problem with spectrahedral uncertain data in both the objective and constraints. By using a robust-type characteristic cone constraint qualification, we first obtain necessary and sufficient conditions for robust weakly efficient solutions of this uncertain SOS-convex polynomial optimization problem in terms of sum of squares conditions and linear matrix inequalities. Then, we propose a relaxation dual problem for this uncertain SOS-convex polynomial optimization problem and explore weak and strong duality properties between them. Moreover, we give a numerical example to show that the relaxation dual problem can be reformulated as a semidefinite linear programming problem.
1 Introduction
Convex polynomial optimization is the class of convex optimization problems in which a convex polynomial objective function is minimized over a feasible region described by convex polynomial inequality constraints. It has become an active research area, not only because of its surprising structural aspects, but also due to abundant applications in a wide range of disciplines, such as automatic control systems [11, 30], engineering design [5, 19] and transportation [1, 30]. In recent years, a wide variety of works have been devoted to the investigation of convex polynomial optimization and its generalizations from different points of view; see, e.g., [1, 7, 11, 16, 30, 33,34,35] and the references therein.
Recently, SOS-convex polynomial optimization [3, 22], as a numerically tractable subclass of convex polynomial optimization problems, has been proposed and has attracted the interest of many researchers. One of the most important reasons to work with SOS-convex polynomials is that the resulting optimization problems can be reformulated as semidefinite linear programming problems, which can be handled by efficient interior-point methods [22]. Furthermore, SOS-convex polynomials cover a rich class of convex polynomials, such as convex separable polynomials and convex quadratic functions [2, 22]. In particular, some complete characterizations of the gap between SOS-convex polynomials and convex polynomials are obtained in [3]. Moreover, SOS-convex polynomial optimization enjoys an exact SDP relaxation and zero duality gap between the primal problem and its dual problem; see, e.g., [26, 29, 31]. Interested readers are referred to [12, 14, 26, 29, 31] for some recent works on SOS-convex polynomial optimization problems and their applications.
On the other hand, due to prediction errors or lack of information, the data are usually not known precisely in many practical optimization problems involving convex polynomials, such as lot-sizing problems with uncertain demands [14], support vector machine classifiers with uncertain knowledge sets [24] and mean–variance portfolio selection problems with uncertain returns [21]. Therefore, the study of convex polynomial optimization problems with inexact or uncertain data has become a very interesting topic. Recently, robust optimization [4, 6], as one of the effective ways of dealing with uncertain optimization problems, has attracted increasing attention from researchers in different disciplines; see, for example, [9, 10, 18, 20, 23, 27, 39,40,41, 43,44,45,46]. However, in contrast to the deterministic case, only a few papers have been devoted to the investigation of uncertain SOS-convex polynomial optimization problems with special structures of the objective and constraint functions. For example, sum of squares polynomial representations that characterize robust solutions and exact SDP relaxations for robust SOS-convex polynomial optimization problems with various commonly used uncertain sets are investigated in [25]. By using sum of squares conditions and linear matrix inequalities, optimality conditions and duality results are obtained in [13] for a class of uncertain multiobjective SOS-convex polynomial optimization problems. By means of a scalarization technique, a method to find robust efficient solutions is given in [28] for an uncertain multiobjective optimization problem whose objective and constraint functions are SOS-convex polynomials. Optimality conditions for robust (weakly) Pareto efficient solutions of a convex quadratic multiobjective optimization problem are explored in [15] under a robust-type closed convex cone constraint qualification.
We observe that there are no results characterizing uncertain vector SOS-convex polynomial optimization problems in which uncertain data appear in both the objective and constraints and belong to general spectrahedral uncertain sets [37, 42]. Note that the spectrahedral uncertain set contains a wide range of commonly used uncertain sets, such as ellipsoids, polyhedra and boxes, encountered frequently in robust optimization problems [4, 5]. The study of such structured polynomial optimization problems is usually complicated due to the challenges of dealing with uncertain data in the objective and constraints. Therefore, the search for a tractable equivalent optimization problem via robust duality for such structured polynomial optimization problems is of high importance, and this constitutes the main motivation of this paper.
Motivated by the works [13, 14, 25] mentioned above, we provide in this paper some new characterizations of vector SOS-convex polynomial optimization problems in which both the objective and constraint functions involve spectrahedral uncertain data. More precisely, we first give the concept of robust weakly efficient solutions of this uncertain SOS-convex polynomial optimization problem. Then, by using a robust-type characteristic cone constraint qualification, we establish necessary and sufficient optimality conditions for a robust weakly efficient solution of this problem based on sum of squares conditions and linear matrix inequalities. The obtained results correspond to the results in [13]. We note that the sum of squares characterizations can be checked, for spectrahedral uncertain sets, by solving a semidefinite programming problem. Furthermore, we introduce a relaxation dual problem for this uncertain SOS-convex polynomial optimization problem and investigate weak and strong duality properties between them. The main advantage of studying this relaxation dual problem is that it can be reformulated as a semidefinite linear programming problem, which can be solved in polynomial time.
The rest of this paper is organized as follows. In Sect. 2, we give some basic definitions and preliminary results. In Sect. 3, we obtain necessary and sufficient conditions for robust weakly efficient solutions to the uncertain SOS-convex polynomial optimization problem based on sum of squares conditions and linear matrix inequalities. In Sect. 4, we establish robust duality properties between this uncertain SOS-convex polynomial optimization problem and its relaxation dual problem.
2 Preliminaries
In this section, we recall some notation and preliminary results from [30, 38] which will be used later in this paper. Let \({\mathbb {R}}^n\) be the n-dimensional Euclidean space equipped with the usual Euclidean norm \(\Vert \cdot \Vert \). We denote by \({\mathbb {R}}^n_+\) the nonnegative orthant of \(\mathbb {R}^{n}\) and by \(\mathrm{{int}}\mathbb {R}^{n}_+\) the topological interior of \(\mathbb {R}^{n}_+\). Given a set \(A\subset \mathbb {R}^n\), \(\textrm{int}A\) \((\mathrm{{resp}}.~\textrm{cl}A,~\textrm{co}A)\) denotes the interior (resp. closure, convex hull) of A, while \(\textrm{coneco}A:=\mathbb {R}_+\textrm{co}A\) stands for the convex conical hull of \(A\cup \{0\}\). The symbol \(I_n\in \mathbb {R}^{n\times n}\) stands for the identity matrix. Let \(S^n\) be the space of all symmetric \(n\times n\) matrices. A matrix \(M\in S^n\) is said to be positive semidefinite, denoted by \(M\succeq 0\), if \(x^\top Mx\ge 0\) for any \(x\in \mathbb {R}^n\). Let \(\varphi :\mathbb {R}^{n}\rightarrow \mathbb {R}\cup \{+\infty \}\) be a given extended real-valued function. The effective domain and the epigraph of \(\varphi \) are defined, respectively, by
The conjugate function of \(\varphi \), denoted by \(\varphi ^{*}: \mathbb {R}^n\rightarrow \mathbb {R}\cup \{+\infty \}\), is defined by
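To make the definition concrete, recall that \(\varphi ^{*}(v)=\sup _{x\in \mathbb {R}^n}\{\langle v,x\rangle -\varphi (x)\}\). The following small numerical sketch (our own illustration, not part of the paper) approximates the conjugate of \(\varphi (x)=\tfrac{1}{2}x^2\) on a grid and checks it against the known closed form \(\varphi ^{*}(v)=\tfrac{1}{2}v^2\):

```python
import numpy as np

# Grid approximation of the conjugate phi*(v) = sup_x { v*x - phi(x) }
# for phi(x) = 0.5*x**2, whose conjugate is known to be 0.5*v**2.
x = np.linspace(-10.0, 10.0, 200001)
phi = 0.5 * x**2

def conjugate(v):
    # supremum over the grid of the linear term minus phi
    return np.max(v * x - phi)

for v in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    assert abs(conjugate(v) - 0.5 * v**2) < 1e-6
print("conjugate of 0.5*x^2 matches 0.5*v^2 on test points")
```

The grid supremum is attained near \(x=v\), which is exactly where the closed-form conjugate is achieved.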
The following important properties will be used in the sequel.
Lemma 2.1
[8, 17] Let \(\varphi _{1},\varphi _{2}:\mathbb {R}^{n}\rightarrow \mathbb {R}\cup \{+\infty \}\) be proper convex functions such that \(\textrm{dom}\varphi _{1}\cap \textrm{dom}\varphi _{2}\ne \emptyset \).
(i) If \(\varphi _{1}\) and \(\varphi _{2}\) are lower semicontinuous, then
$$\begin{aligned} \textrm{epi}(\varphi _{1}+\varphi _{2})^{*}=\textrm{cl}(\textrm{epi}\varphi _{1}^{*}+\textrm{epi}\varphi _{2}^{*}). \end{aligned}$$
(ii) If one of \(\varphi _{1}\) and \(\varphi _{2}\) is continuous at some \(\bar{x}\in \textrm{dom}\varphi _{1}\cap \textrm{dom}\varphi _{2}\), then
$$\begin{aligned} \textrm{epi}(\varphi _{1}+\varphi _{2})^{*}=\textrm{epi}\varphi _{1}^{*}+\textrm{epi}\varphi _{2}^{*}. \end{aligned}$$
Now, we recall the following basic concepts associated with polynomials. The space of all real polynomials on \(\mathbb {R}^n\) is denoted by \(\mathbb {R}[x]\). We denote by \(\mathbb {R}[x]_d\) the space of all real polynomials on \(\mathbb {R}^n\) with degree at most d. We also denote by \(\textrm{deg}f\) the degree of a polynomial f. A real polynomial f is said to be a sum of squares polynomial, if there exist real polynomials \(f_i\), \(i=1,\dots ,p\), such that \(f=\sum ^p_{i=1} f^2_i\). The set of all sum of squares polynomials in \(x\in \mathbb {R}^n\) with degree at most d is denoted by \({\Sigma }^2_d[x]\).
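As a concrete illustration (our own hedged example; the polynomial and Gram matrix below are not from the paper), a sum of squares certificate amounts to writing \(f=m(x)^\top Q\,m(x)\) for a vector of monomials \(m(x)\) and a positive semidefinite matrix \(Q\). For \(f(x)=x^4+2x^2+1\) with \(m(x)=(1,x,x^2)^\top \):

```python
import numpy as np

# Gram-matrix certificate: f(x) = x^4 + 2x^2 + 1 = m(x)^T Q m(x)
# with monomial basis m(x) = (1, x, x^2) and Q positive semidefinite.
Q = np.array([[1.0, 0.0, 1.0],
              [0.0, 0.0, 0.0],
              [1.0, 0.0, 1.0]])

# Q >= 0, so f is a sum of squares; here Q = v v^T with v = (1, 0, 1),
# which recovers the decomposition f = (x^2 + 1)^2.
assert np.linalg.eigvalsh(Q).min() >= -1e-12

for x in np.linspace(-2.0, 2.0, 41):
    m = np.array([1.0, x, x**2])
    f_gram = m @ Q @ m
    assert abs(f_gram - (x**4 + 2 * x**2 + 1)) < 1e-9
    assert abs(f_gram - (x**2 + 1)**2) < 1e-9
print("Gram-matrix certificate verified: f = (x^2 + 1)^2")
```

Searching over all valid Gram matrices \(Q\succeq 0\) is precisely the semidefinite feasibility problem that makes SOS conditions computationally tractable.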
Definition 2.1
[2, 3, 22] A real polynomial f on \(\mathbb {R}^n\) is called SOS-convex iff the polynomial
is a sum of squares polynomial in \(\mathbb {R}[x;y]\) (with respect to variables x and y).
Remark 2.1
Obviously, every SOS-convex polynomial is a convex polynomial, but the converse is not true; see [2, 3]. Moreover, convex quadratic functions and convex separable polynomials are SOS-convex polynomials [2]. On the other hand, an SOS-convex polynomial need be neither quadratic nor separable. For example, \(x^8_1+x^2_1+x_1x_2+x^2_2\) is an SOS-convex polynomial which is neither quadratic nor separable; see [22].
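For the polynomial \(x^8_1+x^2_1+x_1x_2+x^2_2\) cited in the remark, convexity can at least be spot-checked numerically: its Hessian \(H(x)=\begin{pmatrix}56x_1^6+2 & 1\\ 1 & 2\end{pmatrix}\) has positive trace and determinant \(112x_1^6+3>0\), hence is positive definite everywhere. A sketch of the sampling check (illustrative only):

```python
import numpy as np

# Numerical evidence for convexity of f(x1, x2) = x1^8 + x1^2 + x1*x2 + x2^2:
# its Hessian
#     H(x) = [[56*x1^6 + 2, 1],
#             [1,            2]]
# has trace > 0 and det = 112*x1^6 + 3 > 0, so it is positive definite
# for every x.  We spot-check the eigenvalues at random sample points.
rng = np.random.default_rng(0)
for _ in range(1000):
    x1, x2 = rng.uniform(-5.0, 5.0, size=2)
    H = np.array([[56.0 * x1**6 + 2.0, 1.0],
                  [1.0, 2.0]])
    assert np.linalg.eigvalsh(H).min() > 0.0
print("Hessian positive definite at all sampled points")
```

Such sampling only gives evidence of convexity; the SOS-convexity certificate of [22] is what makes the property verifiable by a single semidefinite program.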
The following property of SOS-convex polynomials plays an important role in obtaining our results.
Lemma 2.2
[22, Lemma 8] Let f be an SOS-convex polynomial. Suppose that there exists \(\bar{x}\in \mathbb {R}^n\) such that \(f(\bar{x})=0\) and \(\nabla f(\bar{x})=0\). Then, f is a sum of squares polynomial.
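A minimal illustration of the lemma (our own example, not from the paper): \(f(x)=(x-1)^4\) is a convex univariate polynomial (hence SOS-convex), vanishes together with its gradient at \(\bar{x}=1\), and, as the lemma predicts, is itself a sum of squares, namely \(((x-1)^2)^2\):

```python
import numpy as np

# Lemma 2.2 illustration: f(x) = (x - 1)^4, expanded below, satisfies
# f(1) = 0 and f'(1) = 0, and is a sum of squares: f = ((x - 1)^2)^2.
def f(x):
    return x**4 - 4*x**3 + 6*x**2 - 4*x + 1   # expanded (x - 1)^4

def df(x):
    return 4*x**3 - 12*x**2 + 12*x - 4        # f'(x) = 4*(x - 1)^3

x_bar = 1.0
assert abs(f(x_bar)) < 1e-12 and abs(df(x_bar)) < 1e-12

for x in np.linspace(-3.0, 3.0, 61):
    assert abs(f(x) - ((x - 1)**2)**2) < 1e-9  # SOS decomposition
    assert f(x) >= -1e-9                        # hence f is nonnegative
print("f = ((x-1)^2)^2 is a sum of squares")
```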
Now, suppose that \(A^r_i\), \(r=0,1,\dots ,s\), and \(B^l_j\), \(l=0,1,\dots ,k\), are given symmetric matrices. The sets \(\mathcal {U}_i\), \(i=1,\dots ,p\), and \(\mathcal {V}_j\), \(j=1,\dots ,m\), are assumed to be spectrahedra [37, 42] described by
and
respectively.
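Since the descriptions of \(\mathcal {U}_i\) and \(\mathcal {V}_j\) above are linear matrix inequalities, membership in a spectrahedron \(\{u\mid A^0+\sum _{r=1}^{s}u_rA^r\succeq 0\}\) reduces to an eigenvalue computation. The following sketch uses illustrative diagonal data (not the paper's matrices) that realize the box \([-1,1]^2\) as a spectrahedron:

```python
import numpy as np

# Illustrative spectrahedron {u : A0 + u1*A1 + u2*A2 >= 0}.  With the
# diagonal data below, the LMI pencil is diag(1+u1, 1-u1, 1+u2, 1-u2),
# so the set is exactly the box [-1, 1]^2.
A0 = np.diag([1.0, 1.0, 1.0, 1.0])
A1 = np.diag([1.0, -1.0, 0.0, 0.0])
A2 = np.diag([0.0, 0.0, 1.0, -1.0])

def in_spectrahedron(u):
    # membership test: smallest eigenvalue of the LMI pencil must be
    # nonnegative (up to a small numerical tolerance)
    M = A0 + u[0] * A1 + u[1] * A2
    return np.linalg.eigvalsh(M).min() >= -1e-12

assert in_spectrahedron((0.5, -0.5))     # interior point
assert in_spectrahedron((1.0, 1.0))      # boundary point
assert not in_spectrahedron((2.0, 0.0))  # outside the box
print("box [-1,1]^2 realized as a spectrahedron")
```

Ellipsoidal and polyhedral uncertain sets admit analogous LMI descriptions, which is why the spectrahedral model covers the commonly used uncertainty sets mentioned in the introduction.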
In this paper, we consider the following uncertain vector SOS-convex polynomial optimization problem
where \(u_i\), \(i=1,\dots ,p\), and \(v_j\), \(j=1,\dots ,m\), are uncertain parameters and they belong to their respective spectrahedral uncertain sets \(\mathcal {U}_i\subseteq \mathbb {R}^s\) and \(\mathcal {V}_j\subseteq \mathbb {R}^k\). \(f_i:\mathbb {R}^n\times \mathcal {U}_i\rightarrow \mathbb {R}\), \(i=1,\dots ,p\), and \(g_j:\mathbb {R}^n\times \mathcal {V}_j\rightarrow \mathbb {R}\), \(j=1,\dots ,m\), are given functions. In what follows, we assume that, for each \(u_i\in \mathcal {U}_i\) and \(v_j\in \mathcal {V}_j\), \(f_i(\cdot ,u_i)\) and \(g_j(\cdot ,v_j)\) are SOS-convex polynomials, and for each \(x\in \mathbb {R}^n\), \(f_i(x,\cdot )\) and \(g_j(x,\cdot )\) are affine functions, i.e.,
and
Here \(f^r_i:\mathbb {R}^n\rightarrow \mathbb {R}\), \(i=1,\dots ,p\), \(r=0,1,\dots ,s\), and \(g^l_j:\mathbb {R}^n\rightarrow \mathbb {R}\), \(j=1,\dots ,m\), \(l=0,1,\dots ,k\), are given polynomials.
Problem \(\mathrm {(UP)}\) is usually associated with the following minimax-type robust counterpart
This paper is devoted to obtaining some characterizations of the robust efficient solutions of \(\mathrm {(UP)}\). We first recall the following important concepts which will be used later in this paper.
Definition 2.2
[4, 6] The robust feasible set of \(\mathrm {(UP)}\) is defined by
Definition 2.3
[27] A point \(\bar{x}\in \mathcal {F}\) is said to be a robust weakly efficient solution of \(\mathrm {(UP)}\) iff it is a weakly efficient solution of \(\mathrm {(RP)}\), i.e., there is no point \(x\in \mathcal {F}\) such that
Remark 2.2
Note that in this paper, we only deal with robust weakly efficient solutions of \(\mathrm {(UP)}\). Other kinds of robust efficient solutions of \(\mathrm {(UP)}\) can be handled in a similar way.
3 Robust Optimality Conditions
This section is devoted to the development of necessary and sufficient optimality conditions for robust weakly efficient solutions of \(\mathrm {(UP)}\) based on sum of squares conditions and linear matrix inequalities. To this end, we first give the following Farkas-type lemma, which plays a key role in the derivation of our results. For similar results, see [23, 39] for more details.
Lemma 3.1
Let \(\gamma \in \mathbb {R}\) and \(\mathcal {W}_j\subseteq \mathbb {R}^k\), \(j=1,\dots ,m\). Let \(\phi :\mathbb {R}^n\rightarrow \mathbb {R}\) be a continuous convex function and let \(h_j:\mathbb {R}^n\times \mathcal {W}_j\rightarrow \mathbb {R}\), \(j=1,\dots ,m\), be continuous functions such that for each \(w_j\in \mathcal {W}_j\), \(h_j(\cdot ,w_j)\) is a convex function. Then, the following two statements are equivalent:
(i) \(\{x\in \mathbb {R}^n\mid h_j(x,w_j)\le 0,~\forall w_j\in \mathcal {W}_j,~j=1,\dots ,m\}\subseteq \{x\in \mathbb {R}^n\mid \phi (x)\ge \gamma \}\).
(ii) \((0,-\gamma )\in \mathrm{{epi}}\phi ^*+\mathrm{{clco}}\bigcup \limits _{\lambda ^0_j\ge 0,w_j\in \mathcal {W}_j} \mathrm{{epi}}\left( \sum \limits _{j=1}^{m}\lambda ^0_jh_j(\cdot ,w_j)\right) ^*\).
Proof
The proof is similar to the proof given for [23, Theorem 2.4]. \(\square \)
The following constraint qualification is a key assumption in the investigation of robust weakly efficient solutions of \(\mathrm{{(UP)}}\).
Definition 3.1
[23] We say that the robust-type characteristic cone constraint qualification \(\mathrm{{(RCCCQ)}}\) holds iff
Remark 3.1
(i) It is worth noting that \(\mathrm{{(RCCCQ)}}\) was first introduced in [23] to characterize robust strong duality for uncertain convex programming problems. Some complete characterizations of \(\mathrm{{(RCCCQ)}}\) are also given in [23, Propositions 2.2, 2.3 and 3.2]. The \(\mathrm{{(RCCCQ)}}\) has also been used in [26, 31, 41] and the relevant references cited therein to study optimality conditions, duality results and SDP relaxations for polynomial optimization problems.
(ii) Note that for each \(v_j\in \mathcal {V}_j\), \(g_j(\cdot ,v_j)\) is an SOS-convex polynomial, and for each \(x\in \mathbb {R}^n\), \(g_j(x,\cdot )\) is an affine function. Thus, the following Slater-type condition, used in [13],
$$\begin{aligned} \left\{ x\in \mathbb {R}^n\mid g_j(x,v_j)<0,\ \forall v_j\in \mathcal {V}_j,\ j=1,\dots ,m\right\} \ne \emptyset \end{aligned}$$
(5)
is a sufficient condition for \(\mathrm{{(RCCCQ)}}\). See [23, Proposition 3.2] for details.
Now, by using the \(\mathrm {(RCCCQ)}\), we give the following necessary and sufficient optimality conditions for robust weakly efficient solutions of \(\mathrm {(UP)}\) based on the sum of squares conditions and linear matrix inequalities.
Theorem 3.1
Assume that the \(\mathrm {(RCCCQ)}\) holds. Then, \(\bar{x}\in \mathcal {F}\) is a robust weakly efficient solution of \(\mathrm {(UP)}\) if and only if there exist \(\alpha ^0_i\in \mathbb {R}_+\) with \(\sum \limits _{i=1}^{p}\alpha ^0_i=1\), \(\alpha ^r_i\in \mathbb {R}\), \(i=1,\dots ,p\), \(r=1,\dots ,s\), and \(\lambda ^0_j\in \mathbb {R}_+\), \(\lambda ^l_j\in \mathbb {R}\), \(j=1,\dots ,m\), \(l=1,\dots ,k\), such that
and
where \(d\ge \max \left\{ \max \limits _{1\le i\le p}\left( \deg f^0_i\right) ,\max \limits _{1\le i\le p}\left( \deg f^r_i\right) ,\!\max \limits _{1\le j\le m}\left( \deg g^0_j\right) ,\max \limits _{1\le j\le m}\left( \deg g^l_j\right) \!\right\} \) and \(F_i(\bar{x}):=\max \limits _{u_i \in \mathcal {U}_i}f_i(\bar{x},u_i)\) for \(i=1,\dots ,p\).
Proof
\((\Rightarrow )\) Suppose that \(\bar{x}\in \mathcal {F}\) is a robust weakly efficient solution of \(\mathrm {(UP)}\). Then, there is no point \(x\in \mathcal {F}\) such that
where \(F_i(x):=\max \limits _{u_i\in \mathcal {U}_i}f_i(x,u_i)\), \(i=1,\dots ,p\). By [32, Proposition 8.2], there exist \(\alpha ^0_i\in \mathbb {R}_+\), \(i=1,\dots ,p\), with \(\sum \limits _{i=1}^{p}\alpha ^0_i=1\), such that
This means that
By Lemma 3.1, we have
Since \(f_i(\cdot ,u_i)\) is an SOS-convex polynomial and \(f_i(x,\cdot )\) is an affine function, it follows from the equality (4) of [39] that
Note that the \(\mathrm{{(RCCCQ)}}\) holds. Then, from (8) and (9), we deduce that
This means that there exist \(u_i\in \mathcal {U}_i\) with \((\xi ^*,\eta )\in \mathrm{{epi}}\left( \sum \limits _{i=1}^{p}\alpha ^0_if_i(\cdot ,u_i)\right) ^*\), \(v_j\in \mathcal {V}_j\) with \((\omega ^*,\gamma )\in \mathrm{{epi}}\left( \sum \limits _{j=1}^{m}\lambda ^0_jg_j(\cdot ,v_j)\right) ^*\) and \(\lambda _j^0\ge 0\), such that
Therefore, for any \(x\in \mathbb {R}^n\),
For any \(x\in \mathbb {R}^n\), set
Clearly, \(\sigma (x)\ge 0\). This, together with \(F_i(\bar{x})\ge f_i(\bar{x},u_i)\), gives \(\sum \limits _{j=1}^{m}\lambda ^0_jg_j(\bar{x},v_j)\ge 0\). Note that \(\sum \limits _{j=1}^{m}\lambda ^0_jg_j(\bar{x},v_j)\le 0\) due to \(\bar{x}\in \mathcal {F}\). Thus, \(\sum \limits _{j=1}^{m}\lambda ^0_jg_j(\bar{x},v_j)=0\). Together with \(F_i(\bar{x})\ge f_i(\bar{x},u_i)\), we have \(\sigma (\bar{x})\le 0\). Consequently,
Obviously, \(\nabla \sigma (\bar{x})=0\). Since \(f_i(\cdot ,u_i)\), \(i=1,\dots ,p\), and \(g_j(\cdot ,v_j)\), \(j=1,\dots ,m\), are SOS-convex polynomials, we deduce that \(\sigma \) is an SOS-convex polynomial. Hence, it follows from Lemma 2.2 that \(\sigma \) is a sum of squares polynomial. Note that the degree of \(\sigma \) is not larger than d. Then, \(\sigma \in {\Sigma }^2_d[x]\). Thus, from (3), (4) and (10), we have
Let \(\alpha ^r_i:=\alpha ^0_i u^r_i\) and \(\lambda ^l_j:=\lambda ^0_j v^l_j\). It follows that (6) holds.
On the other hand, by \(\alpha ^0_i\ge 0\), \(\alpha ^r_i=\alpha ^0_i u^r_i\) and (1), we deduce that
Similarly, by \(\lambda ^0_j\ge 0\), \(\lambda ^l_j=\lambda ^0_j v^l_j\) and (2), we have
Thus, (7) holds.
\((\Leftarrow )\) Suppose that there exist \(\alpha ^0_i\in \mathbb {R}_+\) with \(\sum \limits _{i=1}^{p}\alpha ^0_i=1\), \(\alpha ^r_i\in \mathbb {R}\), \(i=1,\dots ,p\), \(r=1,\dots ,s\), and \(\lambda ^0_j\in \mathbb {R}_+\), \(\lambda ^l_j\in \mathbb {R}\), \(j=1,\dots ,m\), \(l=1,\dots ,k\), such that (6) and (7) hold.
Consider any \(i=1,\dots ,p\), and \(j=1,\dots ,m\). By a similar argument as in [13, Theorem 2.3], we can show that if \(\alpha ^0_i=0\), then, \(\alpha ^r_i=0\) for all \(r=1,\dots ,s\). If \(\lambda ^0_j=0\), then, \(\lambda ^l_j=0\) for all \(l=1,\dots ,k\).
Now, let \(\hat{u}_i:=(\hat{u}^1_i,\dots ,\hat{u}^s_i)\in \mathcal {U}_i\). Set \(\tilde{u}_i:=(\tilde{u}^1_i,\dots ,\tilde{u}^s_i)\) with
Obviously, \(\tilde{u}_i\in \mathcal {U}_i\), and for any \(x\in \mathbb {R}^n\),
Here, note that if \(\alpha ^0_i=0\), then, \(\alpha ^r_i=0\) for any \(r=1,\dots ,s\).
Similarly, let \(\hat{v}_j:=(\hat{v}^1_j,\dots ,\hat{v}^k_j)\in \mathcal {V}_j\). Set \(\tilde{v}_j:=(\tilde{v}^1_j,\dots ,\tilde{v}^k_j)\) with
Obviously, \(\tilde{v}_j\in \mathcal {V}_j\), and for any \(x\in \mathbb {R}^n\),
By (6), (12) and (13), we obtain
Note that \(g_j(x,\tilde{v}_j)\le 0\), \(\forall x\in \mathcal {F}\). It follows from (14) that
Taking \(F_i(x)\ge f_i(x,\tilde{u}_i)\) into account, we have
Note that \(\alpha ^0_i\in \mathbb {R}_+\) with \(\sum \limits _{i=1}^{p}\alpha ^0_i=1\). Then, (15) yields that there is no point \(x\in \mathcal {F}\) such that
Thus, \(\bar{x}\) is a robust weakly efficient solution of \(\mathrm {(UP)}\). This completes the proof. \(\square \)
In the special case when \(\mathcal {U}_i\), \(i=1,\dots ,p\), are singletons, \(\mathrm {(UP)}\) reduces to the following polynomial optimization problem
Here, \(f_i:\mathbb {R}^n\rightarrow \mathbb {R}\), \(i=1,\dots ,p\), are SOS-convex polynomials, and \(g_j:\mathbb {R}^n\times \mathcal {V}_j\rightarrow \mathbb {R}\), \(j=1,\dots ,m\), are defined as in (4).
The following corollary gives a characterization of robust weakly efficient solutions of \(\mathrm {(UP)_0}\) based on \(\mathrm{{(RCCCQ)}}\).
Corollary 3.1
Assume that the \(\mathrm{{(RCCCQ)}}\) holds. Then, \(\bar{x}\in \mathcal {F}\) is a robust weakly efficient solution of \(\mathrm {(UP)_0}\) if and only if there exist \(\alpha ^0_i\in \mathbb {R}_+\) with \(\sum \limits _{i=1}^{p}\alpha ^0_i=1\), \(i=1,\dots ,p\), and \(\lambda ^0_j\in \mathbb {R}_+\), \(\lambda ^l_j\in \mathbb {R}\), \(j=1,\dots ,m\), \(l=1,\dots ,k\), such that
and
where \(d\ge \max \left\{ \max \limits _{1\le i\le p}\left( \deg f_i\right) ,\max \limits _{1\le j\le m}\left( \deg g^0_j\right) ,\max \limits _{1\le j\le m}\left( \deg g^l_j\right) \right\} \).
Remark 3.2
In [13, Theorem 2.3], under the Slater-type condition (5), optimality conditions similar to the one in Corollary 3.1 have been investigated. Moreover, as mentioned above, \(\mathrm{{(RCCCQ)}}\) is weaker than the Slater-type condition (5). Thus, our results cover the corresponding results in [13].
Now, we give an example in which, for \(\mathrm {(UP)}\), the \(\mathrm{{(RCCCQ)}}\) holds and the optimality conditions obtained in Theorem 3.1 are satisfied, whereas the Slater-type condition (5) fails.
Example 3.1
Consider problem \((\textrm{UP})\) with \(m=n=1\) and \(p=s=k=2\). The uncertain sets \(\mathcal {U}_1 \subseteq \mathbb {R}^2,\) \(\mathcal {U}_2 \subseteq \mathbb {R}^2\) and \(\mathcal {V}_1 \subseteq \mathbb {R}^2\) are defined, respectively, by
and
Let the polynomials \(f_1:\mathbb {R}\times \mathcal {U}_1\rightarrow \mathbb {R}\), \(f_2:\mathbb {R}\times \mathcal {U}_2\rightarrow \mathbb {R}\) and \(g_1:\mathbb {R}\times \mathcal {V}_1\rightarrow \mathbb {R}\) be defined, respectively, by
Obviously, \(f^0_1(x)=x^4\), \(f^1_1(x)=-x\), \(f^2_1(x)=1\), \(f^0_2(x)=x^4\), \(f^1_2(x)=1\), \(f^2_2(x)=-2x\), \(g^0_1(x)=x^2\), \(g^1_1(x)=x\) and \(g^2_1(x)=x\). Moreover, by (1) and (2), we have
Clearly, the Slater-type condition fails for this problem. On the other hand, it is easy to show that
which is a closed and convex cone. Thus, \(\mathrm{{(RCCCQ)}}\) holds. Moreover, it can be checked that \(\bar{x}:=0\) is a robust weakly efficient solution of \((\textrm{UP})\). Now, we assert that the conditions (6) and (7) hold at \(\bar{x}\). In fact, we only need to show that there exist \(\alpha ^0_i\in \mathbb {R}_+\) with \(\sum \limits _{i=1}^{p}\alpha ^0_i=1\), \(\alpha ^r_i\in \mathbb {R}\), \(i=1,2\), \(r=1,2\), and \(\lambda ^0_1\in \mathbb {R}_+\), \(\lambda ^l_1\in \mathbb {R}\), \(l=1,2\), such that
and
For instance, let \(\alpha ^0_1=\alpha ^0_2:=\frac{1}{2}\), \(\alpha ^1_1:=0\), \(\alpha ^2_1:=1\), \(\alpha ^1_2:=\sqrt{2}\), \(\alpha ^2_2:=0\), \(\lambda ^0_1:=1\) and \(\lambda ^1_1=\lambda ^2_1:=0\). Then, (16) and (17) hold. Thus, Theorem 3.1 is applicable.
The following example shows that the conclusion of Theorem 3.1 may fail if the \(\mathrm{{(RCCCQ)}}\) does not hold.
Example 3.2
Consider problem \((\textrm{UP})\) with \(m=1\) and \(p=s=k=n=2\). The uncertain sets \(\mathcal {U}_1 \subseteq \mathbb {R}^2,\) \(\mathcal {U}_2 \subseteq \mathbb {R}^2\) and \(\mathcal {V}_1 \subseteq \mathbb {R}^2\) are defined, respectively, by
and
Let the polynomials \(f_1:\mathbb {R}^2\times \mathcal {U}_1\rightarrow \mathbb {R}\), \(f_2:\mathbb {R}^2\times \mathcal {U}_2\rightarrow \mathbb {R}\) and \(g_1:\mathbb {R}^2\times \mathcal {V}_1\rightarrow \mathbb {R}\) be defined, respectively, by
Obviously, \(f^0_1(x)=x^4_1+2x_2\), \(f^1_1(x)=-1\), \(f^2_1(x)=0\), \(f^0_2(x)=x^4_1-x_2\), \(f^1_2(x)=0\), \(f^2_2(x)=1\), \(g^0_1(x)=x_1\), \(g^1_1(x)=x_1\) and \(g^2_1(x)=x_2\). Moreover, by (1) and (2), we have
Clearly, \(\bar{x}:=(0,0)\) is a robust weakly efficient solution of \(\mathrm{{(UP)}}\) and
which is not closed. Indeed, take a sequence \(s_n:= (\frac{1}{n},1,0) \), \(\forall n\in \mathbb {N}\). Obviously,
Note that \(s_n\rightarrow (0,1,0)\) as \(n\rightarrow \infty \). However, \((0,1,0)\notin \bigcup \limits _{\lambda ^0_j\ge 0,v_j\in \mathcal {V}_j} \mathrm{{epi}}\left( \sum \limits _{j=1}^{m}\lambda ^0_jg_j(\cdot ,v_j)\right) ^* \). Thus, \(\mathrm{{(RCCCQ)}}\) fails.
Now, we assert that the conclusion of Theorem 3.1 is not satisfied at \(\bar{x}\). Otherwise, there exist \(\alpha ^0_i\in \mathbb {R}_+\) with \(\sum \limits _{i=1}^{2}\alpha ^0_i=1\), \(\alpha ^r_i\in \mathbb {R}\), \(i=1,2\), \(r=1,2\), and \(\lambda ^0_1\in \mathbb {R}_+\), \(\lambda ^l_1\in \mathbb {R}\), \(l=1,2\), such that (6) and (7) hold. Note that \(F_1(\bar{x})=\sqrt{2}\) and \(F_2(\bar{x})=\sqrt{5}\). For any \( x:=(x_1,x_2)\in \mathbb {R}^2\), it follows from (6) that
Then,
and
Furthermore, from (7), we have
Then, \( -\alpha ^1_1-\sqrt{2}\alpha ^0_1\le |\alpha ^1_1|-\sqrt{2}\alpha ^0_1\le 0 \) and \(\alpha ^2_2-\sqrt{5}\alpha ^0_2\le |\alpha ^2_2|-\sqrt{5}\alpha ^0_2\le 0. \) Consequently,
From (19), we have
and
Then, by (20) and (22), we have
Moreover, from (18), it holds that \(\lambda ^0_1+\lambda ^1_1=0\). This, together with \(~(\lambda ^1_1)^2+(\lambda ^2_1)^2\le (\lambda ^0_1)^2\) and (21), yields that
Thus, we arrive at a contradiction due to \(\alpha ^0_1, \alpha ^0_2\in \mathbb {R}_+\) with \(\alpha ^0_1+\alpha ^0_2=1\). This means that Theorem 3.1 is not applicable.
Next, we consider the problem \(\mathrm{{(UP)}}\) with the functions \(f_i\) and \(g_j\) being convex quadratic functions defined by
and
where \(Q_i\succeq 0\), \(\xi _i\in \mathbb {R}^n\), \(\xi ^r_i\in \mathbb {R}^n\), \(\beta _i\in \mathbb {R}\), \(\beta ^r_i\in \mathbb {R}\), \(r=1,\dots ,s\), and \(M_j\succeq 0\), \(\theta _j\in \mathbb {R}^n\), \(\theta ^l_j\in \mathbb {R}^n\), \(\gamma _j\in \mathbb {R}\), \(\gamma ^l_j\in \mathbb {R}\), \(l=1,\dots ,k\).
In this case, we obtain the following optimality conditions for robust weakly efficient solutions of \(\mathrm {(UP)}\), which have been considered in [15, Theorem 3.1].
Corollary 3.2
Consider the problem \(\mathrm{{(UP)}}\) with the functions \(f_i\) and \(g_j\) given by (23) and (24). Assume that the \(\mathrm {(RCCCQ)}\) holds. Then, \(\bar{x}\in \mathcal {F}\) is a robust weakly efficient solution of \(\mathrm {(UP)}\) if and only if there exist \(\alpha ^0_i\in \mathbb {R}_+\) with \(\sum \limits _{i=1}^{p}\alpha ^0_i=1\), \(\alpha ^r_i\in \mathbb {R}\), \(i=1,\dots ,p\), \(r=1,\dots ,s\), and \(\lambda ^0_j\in \mathbb {R}_+\), \(\lambda ^l_j\in \mathbb {R}\), \(j=1,\dots ,m\), \(l=1,\dots ,k\), such that
and
where \(F_i(\bar{x}):=\max \limits _{u_i \in \mathcal {U}_i}f_i(\bar{x},u_i)\) for \(i=1,\dots ,p\).
Proof
By using a similar approach as that given to establish Theorem 3.1, it follows that
and
Then, by (25), we have, for any \(x\in \mathbb {R}^n\),
This, together with the Simple Lemma in [5, p. 163], leads to the desired conclusion. The proof is complete. \(\square \)
Remark 3.3
With minor modifications of the proof given for Proposition 2.2 in [41], we can show that
Then, in Corollary 3.2, the \(\mathrm{{(RCCCQ)}}\) can be replaced by the closedness of \(\textrm{coneco}\{(0,1)\cup \textrm{epi}\, g^*_j(\cdot ,v_j)\mid v_j\in \mathcal {V}_j,~j=1,\dots ,m\}\). Clearly, the result obtained in Corollary 3.2 coincides with [15, Theorem 3.1].
At the end of this section, we show how the optimality conditions obtained in (6) and (7) relate to a robust Karush–Kuhn–Tucker \((\textrm{KKT})\) condition. Note that the robust \(\textrm{KKT}\) condition of \(\mathrm {(UP)}\) holds at \(\bar{x}\in \mathcal {F}\) iff there exist \(\alpha ^0_i\in \mathbb {R}_+\) with \(\sum \limits _{i=1}^{p}\alpha ^0_i=1\), \(\bar{u}_i\in \mathcal {U}_i\), \(i=1,\dots ,p\), and \(\lambda ^0_j\in \mathbb {R}_+\), \(\bar{v}_j\in \mathcal {V}_j\), \(j=1,\dots ,m\), such that
and
Here \(\nabla _1f_i\) and \(\nabla _1g_j\) denote the gradients of \(f_i\) and \(g_j\) with respect to the first variable, respectively.
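To fix ideas, the following toy sketch (our own deterministic instance with a single objective and no uncertainty, so far simpler than \(\mathrm {(UP)}\)) verifies a KKT system of this shape numerically: stationarity of the weighted gradients, complementary slackness, and primal/dual feasibility.

```python
import numpy as np

# Toy deterministic KKT check (illustration only, not the paper's problem):
# minimize f(x) = (x - 2)^2 subject to g(x) = x - 1 <= 0.
# The constrained minimizer is x_bar = 1, and the KKT system
#   alpha*grad f(x_bar) + lam*grad g(x_bar) = 0,   lam*g(x_bar) = 0
# holds with alpha = 1 and lam = 2.
x_bar, alpha, lam = 1.0, 1.0, 2.0
grad_f = 2.0 * (x_bar - 2.0)   # = -2
grad_g = 1.0

assert abs(alpha * grad_f + lam * grad_g) < 1e-12   # stationarity
assert abs(lam * (x_bar - 1.0)) < 1e-12             # complementary slackness
assert lam >= 0.0 and x_bar - 1.0 <= 0.0            # dual/primal feasibility

# sanity check: x_bar is optimal among feasible grid points
xs = np.linspace(-4.0, 1.0, 5001)
assert ((xs - 2.0)**2).min() >= (x_bar - 2.0)**2 - 1e-12
print("KKT conditions verified at x_bar = 1")
```

In the robust setting of Proposition 3.1, the same structure appears with worst-case parameters \(\bar{u}_i\), \(\bar{v}_j\) selected from the spectrahedral uncertain sets.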
Now, we give the following proposition, which describes the relation between the optimality conditions obtained in (6) and (7) and the robust \(\textrm{KKT}\) condition. Although the proof of Proposition 3.1 is similar to the proofs of [13, Proposition 2.10] and [41, Proposition 3.4], it is included here for the sake of completeness.
Proposition 3.1
Let \(\bar{x}\in \mathcal {F}\). Then, the following statements are equivalent:
\(\mathrm{{(i)}}\) The robust KKT condition holds at \(\bar{x}\in \mathcal {F}\).
\(\mathrm{{(ii)}}\) There exist \(\alpha ^0_i\in \mathbb {R}_+\) with \(\sum \limits _{i=1}^{p}\alpha ^0_i=1\), \(\alpha ^r_i\in \mathbb {R}\), \(i=1,\dots ,p\), \(r=1,\dots ,s\), and \(\lambda ^0_j\in \mathbb {R}_+\), \(\lambda ^l_j\in \mathbb {R}\), \(j=1,\dots ,m\), \(l=1,\dots ,k\), such that
\(\mathrm{{(iii)}}\) The optimality conditions given by (6) and (7) hold.
Proof
(i)\(\Rightarrow \)(ii) Assume that the robust KKT condition of \(\mathrm {(UP)}\) holds at \(\bar{x}\in \mathcal {F}\). Then, there exist \(\alpha ^0_i\in \mathbb {R}_+\) with \(\sum \limits _{i=1}^{p}\alpha ^0_i=1\), \(\bar{u}_i\in \mathcal {U}_i\), \(i=1,\dots ,p\) and \(\lambda ^0_j\in \mathbb {R}_+\), \(\bar{v}_j\in \mathcal {V}_j\), \(j=1,\dots ,m\) such that (26) and (27) hold. By \(\bar{u}_i=(\bar{u}^1_i,\dots ,\bar{u}^s_i)\in \mathcal {U}_i\), we get
Let \(\alpha ^r_i:=\alpha ^0_i \bar{u}^r_i\), \(i=1,\dots ,p\), \(r=1,\dots ,s\). Then, for any \(i=1,\dots ,p\),
and
Similarly, let \(\lambda ^l_j:=\lambda ^0_j \bar{v}^l_j\), \(j=1,\dots ,m\), \(l=1,\dots ,k\). Then, for any \(j=1,\dots ,m\), we have
and
Together with (26), (27), (32), (33), (34) and (35), it follows that (ii) holds.
(ii)\(\Rightarrow \)(iii) Assume that there exist \(\alpha ^0_i\in \mathbb {R}_+\) with \(\sum \limits _{i=1}^{p}\alpha ^0_i=1\), \(\alpha ^r_i\in \mathbb {R}\), \(i=1,\dots ,p\), \(r=1,\dots ,s\), and \(\lambda ^0_j\in \mathbb {R}_+\), \(\lambda ^l_j\in \mathbb {R}\), \(j=1,\dots ,m\), \(l=1,\dots ,k\), such that (28), (29), (30) and (31) hold. Using similar arguments as those given for (12) and (13), it is easy to show that there exist \(\tilde{u}_i\in \mathcal {U}_i\), \(i=1,\dots ,p\), and \(\tilde{v}_j\in \mathcal {V}_j\), \(j=1,\dots ,m\), such that for any \(x\in \mathbb {R}^n\),
and
Let
By (28), (36) and (37), we deduce that
Moreover, from (29) and (37), we get
This, together with (30) and (36), gives
On the other hand, \(\phi \) is an SOS-convex polynomial. Therefore, from Lemma 2.2, we have \(\phi \in {\Sigma }^2_d[x]\). This means that
So, (iii) holds.
(iii)\(\Rightarrow \)(i) Assume that there exist \(\tilde{u}_i\in \mathcal {U}_i\), \(i=1,\dots ,p\), and \(\tilde{v}_j\in \mathcal {V}_j\), \(j=1,\dots ,m\), such that (6) and (7) hold. By (6), we have
Then, for \(\bar{x}\in \mathcal {F}\),
Note that \(\lambda ^0_j g_j(\bar{x},\tilde{v}_j)\le 0\) due to \(\bar{x}\in \mathcal {F}\). Hence,
Combining (39) and (40), we obtain
Clearly, \(\phi (\bar{x})=0\). Consequently, from (38), we have \(\phi (x)\ge \phi (\bar{x})\), \(\forall x\in \mathbb {R}^n\). Thus, \(\nabla \phi (\bar{x})=0\), which means that
Thus, the robust \(\textrm{KKT}\) condition holds at \(\bar{x}\). The proof is complete. \(\square \)
Remark 3.4
Proposition 3.1 encompasses [15, Proposition 3.4], where the objective and constraint functions are convex quadratic functions. Proposition 3.1 also improves [13, Proposition 2.10], where there were no uncertain data in the objective functions \(f_i\), \(i=1,\dots ,p,\) of \(\mathrm {(UP)}\).
4 SDP Relaxation Dual Problem
In this section, we first introduce the sum of squares relaxation dual problem for \(\mathrm {(UP)}\) from the perspective of sum of squares conditions and linear matrix inequalities and then discuss weak and strong duality properties between them. Moreover, we give a numerical example to illustrate that the relaxation problem can be formulated as a semidefinite linear programming problem.
Let \(\omega :=( {\omega }_1,\dots , {\omega }_p)\in \mathbb {R}^p\), \(\alpha :=(\alpha _1^0,\dots ,\alpha ^0_p,\alpha _1^1,\dots ,\alpha ^1_p, \alpha _1^2,\dots ,\alpha ^2_p,\dots , \alpha _1^s,\dots ,\alpha ^s_p)\in \mathbb {R}_+^p\times \mathbb {R}^{ps}\), and \(\lambda :=( \lambda _1^0,\dots ,\lambda ^0_m,\lambda _1^1,\dots ,\lambda ^1_m, \lambda _1^2,\dots ,\lambda ^2_m,\dots , \lambda _1^k,\dots ,\lambda ^k_m)\in \mathbb {R}_+^m\times \mathbb {R}^{mk}\). Now, we propose the sum of squares relaxation dual problem of \(\mathrm {(UP)}\) as follows:
Here the feasible set of \(\mathrm {(RD)}\) is denoted by \(\mathcal {F}_D\).
Remark 4.1
In the special case when there is no uncertainty in the objective functions, \(\mathrm {(UP)}\) becomes \(\mathrm {(UP)_0}\), and \(\mathrm {(RD)}\) collapses to
Note that related results on the characterization of sum of squares relaxation problem for different kinds of polynomial optimization problems can be found in [12,13,14, 25] and the references therein.
Now, similar to the concept of robust weakly efficient solutions of \(\mathrm {(UP)}\) in Definition 2.3, we give the definition of weakly efficient solutions of \(\mathrm {(RD)}\).
Definition 4.1
A point \(( \bar{\omega }, \bar{\alpha }, \bar{\lambda })\in \mathcal {F}_D\) is said to be a weakly efficient solution of \(\mathrm {(RD)}\) iff there is no point \(({\omega }, {\alpha }, {\lambda })\in \mathcal {F}_D\) such that
The following theorem gives a weak duality relation between \(\mathrm {(UP)}\) and \(\mathrm {(RD)}\).
Theorem 4.1
For any \(x\in \mathcal {F}\) and \(({\omega }, {\alpha }, {\lambda })\in \mathcal {F}_D\), we have
where \(F_i(x):=\max \limits _{u_i \in \mathcal {U}_i}f_i(x,u_i)\), \(i=1,\dots ,p\).
Proof
Since \(({\omega }, {\alpha }, {\lambda })\in \mathcal {F}_D\), we have \(\omega _i\in \mathbb {R}\), \(\alpha ^0_i\in \mathbb {R}_+\) with \(\sum \limits _{i=1}^{p}\alpha ^0_i=1\), \(\alpha ^r_i\in \mathbb {R}\), \(i=1,\dots ,p\), \(r=1,\dots ,s\), \(\lambda ^0_j\in \mathbb {R}_+\), \(\lambda ^l_j\in \mathbb {R}\), \(j=1,\dots ,m\), \(l=1,\dots ,k\), and
This means that there exists \(\sigma \in {\Sigma }^2_d[x]\) such that
Using an argument similar to that given in the proof of Theorem 3.1, we can show that there exist \(\tilde{u}_i\in \mathcal {U}_i\), \(i=1,\dots ,p\), and \(\tilde{v}_j\in \mathcal {V}_j\), \(j=1,\dots ,m\), such that for any \(x\in \mathbb {R}^n\),
and
Hence, (41) can be equivalently written as
Note that for any \(x\in \mathcal {F}\), we have \(g_j(x,\tilde{v}_j)\le 0\), \(j=1,\dots ,m\). This, together with (42) and \(\sigma (x)\ge 0\), gives
Note that \(F_i(x)\ge f_i(x,\tilde{u}_i)\), \(i=1,\dots ,p\). Thus, it follows from (43), \(\alpha ^0_i\in \mathbb {R}_+\) and \(\sum \limits _{i=1}^{p}\alpha ^0_i=1\) that
The proof is complete. \(\square \)
The next theorem describes strong duality properties for weakly efficient solutions between \(\mathrm {(UP)}\) and \(\mathrm {(RD)}\).
Theorem 4.2
Assume that the \(\mathrm{{(RCCCQ)}}\) holds. If \(\bar{x}\in \mathcal {F}\) is a robust weakly efficient solution of \(\mathrm {(UP)}\), then there exist \(\bar{\omega }_i\in \mathbb {R}\), \(\bar{\alpha }^0_i\in \mathbb {R}_+\) with \(\sum \limits _{i=1}^{p}\bar{\alpha }^0_i=1\), \(\bar{\alpha }^r_i\in \mathbb {R}\), \(i=1,\dots ,p\), \(r=1,\dots ,s\), and \(\bar{\lambda }^0_j\in \mathbb {R}_+\), \(\bar{\lambda }^l_j\in \mathbb {R}\), \(j=1,\dots ,m\), \(l=1,\dots ,k\), such that \((\bar{\omega }, \bar{\alpha },\bar{\lambda } )\) is a weakly efficient solution of \(\mathrm{{(RD)}}\), where \(\bar{\omega }_i:=\max \limits _{u_i \in \mathcal {U}_i}f_i(\bar{x},u_i)\), \(i=1,\dots ,p\).
Proof
Let \(\bar{x}\in \mathcal { F}\) be a robust weakly efficient solution of \(\mathrm {(UP)}\). By Theorem 3.1, there exist \(\bar{\alpha }^0_i\in \mathbb {R}_+\) with \(\sum \limits _{i=1}^{p}\bar{\alpha }^0_i=1\), \(\bar{\alpha }^r_i\in \mathbb {R}\), \(i=1,\dots ,p\), \(r=1,\dots ,s\), and \(\bar{\lambda }^0_j\in \mathbb {R}_+\), \(\bar{\lambda }^l_j\in \mathbb {R}\), \(j=1,\dots ,m\), \(l=1,\dots ,k\), such that
and
where \(\bar{\omega }_i:=\max \limits _{u_i \in \mathcal {U}_i}f_i(\bar{x},u_i)\). Clearly,
We claim that \((\bar{\omega }, \bar{\alpha },\bar{\lambda } )\in \mathcal {F}_D\) is a weakly efficient solution of \(\mathrm {(RD)}\). Suppose, on the contrary, that there exists \((\tilde{\omega }, \tilde{\alpha },\tilde{\lambda } )\in \mathcal {F}_D\) such that
This, together with \(\bar{\omega }_i=\max \limits _{u_i \in \mathcal {U}_i}f_i(\bar{x},u_i)\), \(i=1,\dots ,p\), implies that
which contradicts Theorem 4.1. The proof is complete. \(\square \)
Note that the sum of squares conditions can be equivalently expressed as linear matrix inequalities. We now give a numerical example to show that \(\mathrm {(RD)}\) can be formulated as a semidefinite linear programming problem.
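In the univariate setting of the example below, this correspondence can be sketched as follows (a standard Gram-matrix formulation in the spirit of [36, Lemma 3.33]; the degree bookkeeping is kept informal here, with the length of \(X\) matching the half-degree of \(\sigma \)):

```latex
% Sketch: a univariate polynomial sigma of even degree is a sum of squares
% if and only if it admits a positive semidefinite Gram matrix C.
\sigma \in \Sigma ^2_d[x]
\quad \Longleftrightarrow \quad
\exists \, C = C^\top \succeq 0 \ \text{such that}\
\sigma (x) = X^\top C X, \qquad X := (1, x, x^2, \dots , x^d)^\top .
```

The constraint \(C \succeq 0\) is a linear matrix inequality in the entries of \(C\), while matching the coefficients of \(\sigma \) with those of \(X^\top C X\) gives linear equations, so the sum of squares condition becomes semidefinite-representable.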
Example 4.1
For problem \((\textrm{UP})\), let \(m=n=1\) and \(p=s=k=2\). The uncertain sets \(\mathcal {U}_1 \subseteq \mathbb {R}^2,\) \(\mathcal {U}_2 \subseteq \mathbb {R}^2\) and \(\mathcal {V}_1 \subseteq \mathbb {R}^2\) are defined, respectively, by
and
Let the polynomials \(f_1:\mathbb {R}\times \mathcal {U}_1\rightarrow \mathbb {R}\), \(f_2:\mathbb {R}\times \mathcal {U}_2\rightarrow \mathbb {R}\) and \(g_1:\mathbb {R}\times \mathcal {V}_1\rightarrow \mathbb {R}\) be defined, respectively, by
Obviously, \(f^0_1(x)=x^4\), \(f^1_1(x)=x\), \(f^2_1(x)=1\), \(f^0_2(x)=x^4\), \(f^1_2(x)=3\), \(f^2_2(x)=-2x\), \(g^0_1(x)=-1\), \(g^1_1(x)=x\) and \(g^2_1(x)=x\). Moreover, by (1) and (2), we have
The sum of squares relaxation dual problem of \(\mathrm {(UP)}\) becomes
Let
Clearly,
By \(\sigma \in {\Sigma }^2_d[x]\) and [36, Lemma 3.33], there exists a symmetric \((3\times 3)\) matrix C such that
where \(X:=(1,x,x^2)\). Let
By (44), we have \( W_1=\alpha ^2_1+3\alpha ^1_2-\alpha ^0_1\omega _1-\alpha ^0_2\omega _2,\) \(2W_2=\alpha ^1_1-2\alpha ^2_2+\lambda ^1_1+\lambda ^2_1,\) \(2W_3+W_4=0\), \(W_5=0\) and \(W_6=\alpha ^0_1+\alpha ^0_2\). Let \(W_3:=\mu \in \mathbb {R}\). Then, \(W_4=-2\mu \). Thus, the relaxation dual problem becomes the following semidefinite programming problem
Therefore, \(\mathrm{{(RD)}}\) can be solved as a semidefinite linear programming problem.
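To make the Gram-matrix step used above concrete in isolation, the following minimal sketch (independent of the specific data of Example 4.1; the polynomial \(x^4-2x^2+1=(x^2-1)^2\) is our own illustrative choice) verifies a sum of squares certificate by expanding \(X^\top C X\) with \(X=(1,x,x^2)\) for a rank-one, hence positive semidefinite, matrix \(C\):

```python
# Sketch: certifying sigma(x) = x^4 - 2x^2 + 1 = (x^2 - 1)^2 as a sum of
# squares via a Gram matrix C with sigma(x) = X^T C X, X = (1, x, x^2).

def gram_to_coeffs(C):
    """Expand X^T C X, X = (1, x, ..., x^(n-1)), into coefficients of 1, x, ..., x^(2n-2)."""
    n = len(C)
    coeffs = [0] * (2 * n - 1)
    for i in range(n):
        for j in range(n):
            coeffs[i + j] += C[i][j]  # x^i * x^j contributes to x^(i+j)
    return coeffs

# C = v v^T is rank one, hence positive semidefinite by construction,
# and X^T C X = (v . X)^2 = (x^2 - 1)^2.
v = [-1, 0, 1]
C = [[vi * vj for vj in v] for vi in v]

# Coefficient matching: x^4 - 2x^2 + 1 has coefficients [1, 0, -2, 0, 1].
assert gram_to_coeffs(C) == [1, 0, -2, 0, 1]
```

In the general multivariate setting, the coefficient-matching equations stay linear in \(C\) and the condition \(C \succeq 0\) is imposed on an SDP solver as a linear matrix inequality, which is exactly how \(\mathrm {(RD)}\) is handled above.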
5 Conclusions
In this paper, a class of SOS-convex polynomial optimization problems under uncertain data in both the objective and constraints is considered. By using a new robust-type characteristic cone constraint qualification, we obtain optimality conditions for robust weakly efficient solutions to this uncertain SOS-convex optimization problem based on the sum of squares conditions and linear matrix inequalities. We also introduce a relaxation dual problem for this uncertain SOS-convex optimization problem and establish its robust duality properties. Furthermore, we give a numerical example to illustrate that the relaxation problem can be formulated as a semidefinite programming problem.
Although some new results on robust weakly efficient solutions have been established for SOS-convex polynomial optimization problems with uncertain data, some questions remain to be addressed in the future. For instance, similar to [16, 41], one could investigate whether second-order cone programming dual relaxations can be obtained for more general classes of convex polynomial optimization problems. Two-stage SOS-convex polynomial optimization is an interesting model for polynomial optimization problems. It is also of importance to consider how the proposed approach can be extended to handle affinely adjustable robust two-stage SOS-convex polynomial optimization problems.
References
Ahmadi, A.A., Majumdar, A.: Some applications of polynomial optimization in operations research and real-time decision making. Optim. Lett. 10, 709–729 (2016)
Ahmadi, A.A., Parrilo, P.A.: A convex polynomial that is not SOS-convex. Math. Program. 135, 275–292 (2012)
Ahmadi, A.A., Parrilo, P.A.: A complete characterization of the gap between convexity and SOS-convexity. SIAM J. Optim. 23, 811–833 (2013)
Ben-Tal, A., El Ghaoui, L., Nemirovski, A.: Robust Optimization. Princeton University Press, Princeton (2009)
Ben-Tal, A., Nemirovski, A.: Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications. SIAM, Philadelphia (2001)
Ben-Tal, A., Nemirovski, A.: Robust optimization-methodology and applications. Math. Program. 92, 453–480 (2002)
Blekherman, G., Parrilo, P.A., Thomas, R.: Semidefinite Optimization and Convex Algebraic Geometry. SIAM, Philadelphia (2012)
Boţ, R.I.: Conjugate Duality in Convex Optimization. Springer, Berlin (2010)
Chen, J.W., Köbis, E., Yao, J.C.: Optimality conditions and duality for robust nonsmooth multiobjective optimization problems with constraints. J. Optim. Theory Appl. 181, 411–436 (2019)
Chen, J.W., Li, J., Li, X.B., Lv, Y., Yao, J.C.: Radius of robust feasibility of system of convex inequalities with uncertain data. J. Optim. Theory Appl. 184, 384–399 (2020)
Chesi, G.: LMI techniques for optimization over polynomials in control: a survey. IEEE Trans. Autom. Control. 55, 2500–2510 (2010)
Chieu, N.H., Feng, J.W., Gao, W., Li, G., Wu, D.: SOS-convex semialgebraic programs and its applications to robust optimization: a tractable class of nonsmooth convex optimization. Set-Valued Var. Anal. 26, 305–326 (2018)
Chuong, T.D.: Linear matrix inequality conditions and duality for a class of robust multiobjective convex polynomial programs. SIAM J. Optim. 28, 2466–2488 (2018)
Chuong, T.D., Jeyakumar, V., Li, G., Woolnough, D.: Exact dual semi-definite programs for affinely adjustable robust SOS-convex polynomial optimization problems. Optimization 71, 3539–3569 (2022)
Chuong, T.D., Mak-Hau, V.H., Yearwood, J., Dazeley, R., Nguyen, M.-T., Cao, T.: Robust Pareto solutions for convex quadratic multiobjective optimization problems under data uncertainty. Ann. Oper. Res. 319, 1533–1564 (2022)
Chuong, T.D.: Second-order cone programming relaxations for a class of multiobjective convex polynomial problems. Ann. Oper. Res. 311, 1017–1033 (2022)
Fang, D.H., Li, C., Ng, K.F.: Constraint qualification for extended Farkas’s lemmas and Lagrangian dualities in convex infinite programming. SIAM J. Optim. 20, 1311–1332 (2009)
Fang, D.H., Li, C., Yao, J.C.: Stable Lagrange dualities for robust conical programming. J. Nonlinear Convex Anal. 16, 2141–2158 (2015)
Feng, J., Liu, L., Wu, D., Li, G., Beer, M.: Dynamic reliability analysis using the extended support vector regression (X-SVR). Mech. Syst. Signal Proc. 126, 368–391 (2019)
Fliege, J., Werner, R.: Robust multiobjective optimization and applications in portfolio optimization. Eur. J. Oper. Res. 234, 422–433 (2013)
Goldfarb, D., Iyengar, G.: Robust convex quadratically constrained programs. Math. Program. 97, 495–515 (2003)
Helton, J.W., Nie, J.: Semidefinite representation of convex sets. Math. Program. 122, 21–64 (2010)
Jeyakumar, V., Li, G.: Strong duality in robust convex programming: complete characterizations. SIAM J. Optim. 20, 3384–3407 (2010)
Jeyakumar, V., Li, G., Suthaharan, S.: Support vector machine classifiers with uncertain knowledge sets via robust optimization. Optimization 63, 1099–1116 (2014)
Jeyakumar, V., Li, G., Vicente-Pérez, J.: Robust SOS-convex polynomial optimization problems: exact SDP relaxations. Optim. Lett. 9, 1–18 (2015)
Jiao, L., Lee, J.H., Zhou, Y.: A hybrid approach for finding efficient solutions in vector optimization with SOS-convex polynomials. Oper. Res. Lett. 48, 188–194 (2020)
Kuroiwa, D., Lee, G.M.: On robust multiobjective optimization. Vietnam J. Math. 40, 305–317 (2012)
Jiao, L., Lee, J.H.: Finding efficient solutions in robust multiple objective optimization with SOS-convex polynomial data. Ann. Oper. Res. 296, 803–820 (2021)
Lasserre, J.B.: Convexity in semialgebraic geometry and polynomial optimization. SIAM J. Optim. 19, 1995–2014 (2009)
Lasserre, J.B.: Moments. Positive Polynomials and Their Applications. Imperial College Press, London (2009)
Lee, J.H., Jiao, L.: Finding efficient solutions for multicriteria optimization problems with SOS-convex polynomials. Taiwan. J. Math. 23, 1535–1550 (2019)
Lee, G.M., Kim, G.S., Dinh, N.: Optimality conditions for approximate solutions of convex semi-infinite vector optimization problems. In: Ansari, Q.H., Yao, J.C. (eds.) Recent Developments in Vector Optimization, Vector Optimization, vol. 1, pp. 275–295. Springer, Berlin (2012)
Li, X.B., Al-Homidan, S., Ansari, Q.H., Yao, J.C.: A sufficient condition for asymptotically well behaved property of convex polynomials. Oper. Res. Lett. 49, 548–552 (2021)
Ngai, H.V.: Global error bounds for systems of convex polynomial over polyhedral constraints. SIAM J. Optim. 25, 521–539 (2015)
Nie, J.W.: Polynomial matrix inequality and semidefinite representation. Math. Oper. Res. 36, 398–415 (2011)
Parrilo, P.A.: Polynomial optimization, sums of squares, and application. In: Semidefinite Optimization and Convex Algebraic Geometry. MOS-SIAM Ser. Optim., vol. 13, pp. 251–291. SIAM, Philadelphia (2013)
Ramana, M., Goldman, A.J.: Some geometric results in semidefinite programming. J. Global Optim. 7, 33–50 (1995)
Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1970)
Sun, X.K., Teo, K.L., Zeng, J., Guo, X.L.: On approximate solutions and saddle point theorems for robust convex optimization. Optim. Lett. 14, 1711–1730 (2020)
Sun, X.K., Teo, K.L., Long, X.J.: Some characterizations of approximate solutions for robust semi-infinite optimization problems. J. Optim. Theory Appl. 191, 281–310 (2021)
Tinh, A.T., Chuong, T.D.: Conic linear programming duals for classes of quadratic semi-infinite programs with applications. J. Optim. Theory Appl. 194, 570–596 (2022)
Vinzant, C.: What is a spectrahedron? Notices Am. Math. Soc. 61, 492–494 (2014)
Wang, J., Li, S.J., Chen, C.R.: Generalized robust duality in constrained nonconvex optimization. Optimization 70, 591–612 (2021)
Wei, H.Z., Chen, C.R., Li, S.J.: Characterizations for optimality conditions of general robust optimization problems. J. Optim. Theory Appl. 177, 835–856 (2018)
Wei, H.Z., Chen, C.R., Li, S.J.: A unified approach through image space analysis to robustness in uncertain optimization problems. J. Optim. Theory Appl. 184, 466–493 (2020)
Yu, H., Liu, H.M.: Robust multiple objective game theory. J. Optim. Theory Appl. 159, 272–280 (2013)
Funding
This research was supported by the Natural Science Foundation of Chongqing (cstc2020jcyj-msxmX0016), the Science and Technology Research Program of Chongqing Municipal Education Commission (KJZDK202100803), the ARC Discovery Grant (DP190103361), the Education Committee Project Foundation of Chongqing for Bayu Young Scholar, and the Innovation Project of CTBU (yjscxx2022-112-71).
Communicated by Alper Yildirim.
Sun, X., Tan, W. & Teo, K.L. Characterizing a Class of Robust Vector Polynomial Optimization via Sum of Squares Conditions. J Optim Theory Appl 197, 737–764 (2023). https://doi.org/10.1007/s10957-023-02184-6