1 Introduction

Convex polynomial optimization refers to the class of convex optimization problems in which a convex polynomial objective function is minimized subject to convex polynomial inequality constraints. It has become an active research area, not only because of its surprising structural aspects, but also due to abundant applications in a wide range of disciplines, such as automatic control systems [11, 30], engineering design [5, 19] and transportation [1, 30]. In recent years, a wide variety of works have been devoted to the investigation of convex polynomial optimization and its generalizations from different points of view; see, e.g., [1, 7, 11, 16, 30, 33, 34, 35] and the references therein.

Recently, SOS-convex polynomial optimization [3, 22], a numerically tractable subclass of convex polynomial optimization problems, has been proposed and has attracted the interest of many researchers. One of the most important reasons to work with SOS-convex polynomials is that an SOS-convex polynomial optimization problem can be reformulated as a semidefinite linear programming problem, which can be handled by efficient interior-point methods [22]. Furthermore, SOS-convex polynomials cover a rich class of convex polynomials, including convex separable polynomials and convex quadratic functions [2, 22]. In particular, some complete characterizations of the gap between SOS-convex polynomials and convex polynomials are obtained in [3]. Moreover, SOS-convex polynomial optimization enjoys an exact SDP relaxation and zero duality gap between the primal problem and its dual problem; see, e.g., [26, 29, 31]. Interested readers are referred to [12, 14, 26, 29, 31] for some recent works on SOS-convex polynomial optimization problems and their applications.

On the other hand, due to prediction errors or lack of information, the data are usually not known precisely in many practical optimization problems involving convex polynomials, such as lot-sizing problems with uncertain demands [14], support vector machine classifiers with uncertain knowledge sets [24] and mean–variance portfolio selection problems with uncertain returns [21]. Therefore, the study of convex polynomial optimization problems with inexact or uncertain data has become a very interesting topic. Recently, robust optimization [4, 6], as one of the effective ways of dealing with uncertain optimization problems, has attracted increasing attention from researchers in different disciplines; see, for example, [9, 10, 18, 20, 23, 27, 39, 40, 41, 43, 44, 45, 46]. However, in contrast to the deterministic case, there exist only a few papers devoted to the investigation of uncertain SOS-convex polynomial optimization problems with special structures of the objective and constraint functions. For example, sum of squares polynomial representations that characterize robust solutions and exact SDP relaxations for robust SOS-convex polynomial optimization problems with various commonly used uncertain sets are investigated in [25]. By using sum of squares conditions and linear matrix inequalities, optimality conditions and duality results are obtained in [13] for a class of uncertain multiobjective SOS-convex polynomial optimization problems. By means of a scalarization technique, a method for finding robust efficient solutions is given in [28] for an uncertain multiobjective optimization problem in which the objective and constraint functions are SOS-convex polynomials. Optimality conditions for robust (weakly) Pareto efficient solutions of a convex quadratic multiobjective optimization problem are explored in [15] under a robust-type closed convex cone constraint qualification.

We observe that there are no results characterizing uncertain vector SOS-convex polynomial optimization problems in which uncertain data are involved in both the objective and the constraints and belong to general spectrahedral uncertain sets [37, 42]. Note that the class of spectrahedral uncertain sets includes a wide range of commonly used uncertain sets, such as ellipsoids, polyhedra, and boxes, which are encountered frequently in robust optimization problems [4, 5]. The study of such structured polynomial optimization problems is usually complicated due to the challenges of dealing with uncertain data in both the objective and the constraints. Therefore, deriving a tractable equivalent optimization problem via robust duality for such structured polynomial optimization problems is of great importance, which constitutes the main motivation of this paper.

Motivated by the works [13, 14, 25], in this paper we provide some new characterizations of vector SOS-convex polynomial optimization problems in which both the objective and constraint functions involve spectrahedral uncertain data. More precisely, we first give the concept of robust weakly efficient solutions to this uncertain SOS-convex polynomial optimization problem. Then, by using a robust-type characteristic cone constraint qualification, we establish necessary and sufficient optimality conditions for a robust weakly efficient solution of this uncertain SOS-convex polynomial optimization problem based on sum of squares conditions and linear matrix inequalities. The obtained results extend the corresponding results in [13]. We note that, for spectrahedral uncertain sets, the sum of squares characterizations can be checked by solving a semidefinite programming problem. Furthermore, we introduce a relaxation dual problem for this uncertain SOS-convex polynomial optimization problem and investigate weak and strong duality properties between the two problems. The main advantage of studying this relaxation dual problem is that it can be reformulated as a semidefinite linear programming problem, which can be solved in polynomial time.

The rest of this paper is organized as follows. In Sect. 2, we give some basic definitions and preliminary results. In Sect. 3, we obtain necessary and sufficient conditions for robust weakly efficient solutions to the uncertain SOS-convex polynomial optimization problem based on sum of squares conditions and linear matrix inequalities. In Sect. 4, we establish robust duality properties between this uncertain SOS-convex polynomial optimization problem and its relaxation dual problem.

2 Preliminaries

In this section, we recall some notations and preliminary results from [30, 38] which will be used later in this paper. Let \({\mathbb {R}}^n\) be the n-dimensional Euclidean space equipped with the usual Euclidean norm \(\Vert \cdot \Vert \). We denote by \({\mathbb {R}}^n_+\) the nonnegative orthant of \(\mathbb {R}^{n}\). We also denote by \(\mathrm{{int}}\mathbb {R}^{n}_+\) the topological interior of \(\mathbb {R}^{n}_+\). Given a set \(A\subset \mathbb {R}^n\), \(\textrm{int}A\) \((\mathrm{{resp}}.~\textrm{cl}A,~\textrm{co}A)\) denotes the interior (resp. closure, convex hull) of A, while \(\textrm{coneco}A:=\mathbb {R}_+\textrm{co}A\) stands for the convex conical hull of \(A\cup \{0\}\). The symbol \(I_n\in \mathbb {R}^{n\times n}\) stands for the identity matrix. Let \(S^n\) be the space of all symmetric \(n\times n\) matrices. \(M\in S^n\) is said to be positive semidefinite, denoted by \(M\succeq 0\), if \(x^\top Mx\ge 0\) for any \(x\in \mathbb {R}^n\). Let \(\varphi :\mathbb {R}^{n}\rightarrow \mathbb {R}\cup \{+\infty \}\) be a given extended real-valued function; the effective domain and the epigraph of \(\varphi \) are defined, respectively, by

$$\begin{aligned} \textrm{dom}\varphi :=\{x\in \mathbb {R}^{n}\mid \varphi (x)<+\infty \} \text{ and } \textrm{epi}\varphi :=\{(x,r)\in \mathbb {R}^{n}\times \mathbb {R}~|~\varphi (x)\le r\}. \end{aligned}$$

The conjugate function of \(\varphi \), denoted by \(\varphi ^{*}: \mathbb {R}^n\rightarrow \mathbb {R}\cup \{+\infty \}\), is defined by

$$\begin{aligned} \varphi ^{*}(x^{*}):=\mathop {\text{ sup }}\{\langle x^{*},x\rangle -\varphi (x)\mid x\in \textrm{dom}\varphi \},~x^* \in \mathbb {R}^n. \end{aligned}$$
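For concreteness, the supremum defining \(\varphi ^{*}\) can be approximated numerically on a grid. Below is a minimal sketch, assuming only numpy and using the quadratic \(\varphi (x)=x^{2}/2\) on \(\mathbb {R}\) as illustrative data, for which \(\varphi ^{*}(x^{*})=(x^{*})^{2}/2\).

```python
import numpy as np

# Grid approximation of phi*(x*) = sup_x { x* * x - phi(x) } for phi(x) = 0.5 * x**2.
x = np.linspace(-10.0, 10.0, 200001)
xstar = 3.0
print(np.max(xstar * x - 0.5 * x**2))  # approximately 4.5 = 0.5 * xstar**2
```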

The following important properties will be used in the sequel.

Lemma 2.1

[8, 17] Let \(\varphi _{1},\varphi _{2}:\mathbb {R}^{n}\rightarrow \mathbb {R}\cup \{+\infty \}\) be proper convex functions such that \(\textrm{dom}\varphi _{1}\cap \textrm{dom}\varphi _{2}\ne \emptyset \).

  1. (i)

    If \(\varphi _{1}\) and \(\varphi _{2}\) are lower semicontinuous, then,

    $$\begin{aligned} \textrm{epi}(\varphi _{1}+\varphi _{2})^{*}=\textrm{cl}(\textrm{epi}\varphi _{1}^{*}+\textrm{epi}\varphi _{2}^{*}). \end{aligned}$$
  2. (ii)

    If one of \(\varphi _{1}\) and \(\varphi _{2}\) is continuous at some \(\bar{x}\in \textrm{dom}\varphi _{1}\cap \textrm{dom}\varphi _{2}\), then,

    $$\begin{aligned} \textrm{epi}(\varphi _{1}+\varphi _{2})^{*}=\textrm{epi}\varphi _{1}^{*}+\textrm{epi}\varphi _{2}^{*}. \end{aligned}$$

Now, we recall the following basic concepts associated with polynomials. The space of all real polynomials on \(\mathbb {R}^n\) is denoted by \(\mathbb {R}[x]\). We denote by \(\mathbb {R}[x]_d\) the space of all real polynomials on \(\mathbb {R}^n\) with degree at most d. We also denote by \(\textrm{deg}f\) the degree of a polynomial f. A real polynomial f is said to be a sum of squares polynomial if there exist real polynomials \(f_i\), \(i=1,\dots ,q\), such that \(f=\sum ^q_{i=1} f^2_i\). The set of all sum of squares polynomials in \(x\in \mathbb {R}^n\) with degree at most d is denoted by \({\Sigma }^2_d[x]\).
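Checking whether a given polynomial belongs to \({\Sigma }^2_d[x]\) amounts to a semidefinite feasibility problem: f is a sum of squares iff \(f(x)=z(x)^\top Qz(x)\) for some positive semidefinite Gram matrix Q, where \(z(x)\) is a vector of monomials of degree at most \(\deg f/2\). Below is a minimal sketch, assuming cvxpy is available, for the illustrative univariate polynomial \(f(x)=x^4+2x^2+1=(x^2+1)^2\); the coefficient-matching constraints are written by hand for this toy case.

```python
import cvxpy as cp

# Is f(x) = x^4 + 2x^2 + 1 a sum of squares?  Write f(x) = z(x)^T Q z(x)
# with z(x) = (1, x, x^2) and Q PSD; matching coefficients of 1, x, ..., x^4
# gives linear constraints on the entries of Q.
Q = cp.Variable((3, 3), PSD=True)
constraints = [
    Q[0, 0] == 1,                   # constant term
    2 * Q[0, 1] == 0,               # coefficient of x
    2 * Q[0, 2] + Q[1, 1] == 2,     # coefficient of x^2
    2 * Q[1, 2] == 0,               # coefficient of x^3
    Q[2, 2] == 1,                   # coefficient of x^4
]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)  # "optimal" means a PSD Gram matrix (an SOS certificate) exists
```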

Definition 2.1

[2, 3, 22] A real polynomial f on \(\mathbb {R}^n\) is called SOS-convex iff the polynomial

$$\begin{aligned} f(x)-f(y)-\nabla f(y)^\top (x-y) \end{aligned}$$

is a sum of squares polynomial in \(\mathbb {R}[x;y]\) (with respect to variables x and y).

Remark 2.1

Obviously, an SOS-convex polynomial is a convex polynomial, but the converse is not true; see [2, 3]. Moreover, convex quadratic functions and convex separable polynomials are SOS-convex polynomials [2]. Besides, an SOS-convex polynomial need not be quadratic or separable. For example, \(x^8_1+x^2_1+x_1x_2+x^2_2\) is an SOS-convex polynomial which is neither quadratic nor separable, see [22].

The following property of SOS-convex polynomials plays an important role in obtaining our results.

Lemma 2.2

[22, Lemma 8] Let f be an SOS-convex polynomial. Suppose that there exists \(\bar{x}\in \mathbb {R}^n\) such that \(f(\bar{x})=0\) and \(\nabla f(\bar{x})=0\). Then, f is a sum of squares polynomial.

Now, suppose that \(A^r_i\), \(i=1,\dots ,p\), \(r=0,1,\dots ,s\), and \(B^l_j\), \(j=1,\dots ,m\), \(l=0,1,\dots ,k\), are given symmetric matrices. The sets \(\mathcal {U}_i\), \(i=1,\dots ,p\), and \(\mathcal {V}_j\), \(j=1,\dots ,m\), are assumed to be spectrahedra [37, 42] described by

$$\begin{aligned} \mathcal {U}_i:=\left\{ u_i:= \left( u^1_i,\dots ,u^s_i \right) \in \mathbb {R}^s \mid A^0_i+\sum \limits _{r=1}^{s}{u^r_i}A^r_i\succeq 0\right\} \end{aligned}$$
(1)

and

$$\begin{aligned} \mathcal {V}_j:=\left\{ v_j:= \left( v^1_j,\dots ,v^k_j \right) \in \mathbb {R}^k \mid B^0_j+\sum \limits _{l=1}^{k}{v^l_j}B^l_j\succeq 0\right\} , \end{aligned}$$
(2)

respectively.
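Membership in a spectrahedron is a single positive semidefiniteness test on the corresponding matrix pencil. Below is a minimal sketch, assuming only numpy; the matrices are illustrative data giving the LMI representation of the Euclidean unit ball in \(\mathbb {R}^2\), the same representation used for \(\mathcal {V}_1\) in the examples of Sect. 3.

```python
import numpy as np

def in_spectrahedron(u, A0, A_list, tol=1e-9):
    """Return True if A0 + sum_r u[r] * A_list[r] is positive semidefinite."""
    M = A0 + sum(ur * Ar for ur, Ar in zip(u, A_list))
    return float(np.min(np.linalg.eigvalsh(M))) >= -tol

# LMI representation of the unit ball {u in R^2 : u_1^2 + u_2^2 <= 1}.
A0 = np.eye(3)
A1 = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
A2 = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, 1.0, 0.0]])

print(in_spectrahedron([0.6, 0.8], A0, [A1, A2]))  # True  (on the boundary of the ball)
print(in_spectrahedron([0.9, 0.9], A0, [A1, A2]))  # False (outside the ball)
```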

In this paper, we consider the following uncertain vector SOS-convex polynomial optimization problem

$$\begin{aligned} (\textrm{UP}) ~~~~~~ \begin{array}{ll} \mathop {\text{ min }}\limits _{x\in {\mathbb {R}^n}} ~ \left\{ \left( f_{1}(x,u_1), \dots ,f_{p}(x,u_p)\right) ~|~g_j(x,v_j)\le 0,j=1,\dots ,m\right\} , \end{array} \end{aligned}$$

where \(u_i\), \(i=1,\dots ,p\), and \(v_j\), \(j=1,\dots ,m\), are uncertain parameters and they belong to their respective spectrahedral uncertain sets \(\mathcal {U}_i\subseteq \mathbb {R}^s\) and \(\mathcal {V}_j\subseteq \mathbb {R}^k\). \(f_i:\mathbb {R}^n\times \mathcal {U}_i\rightarrow \mathbb {R}\), \(i=1,\dots ,p\), and \(g_j:\mathbb {R}^n\times \mathcal {V}_j\rightarrow \mathbb {R}\), \(j=1,\dots ,m\), are given functions. In what follows, we assume that, for each \(u_i\in \mathcal {U}_i\) and \(v_j\in \mathcal {V}_j\), \(f_i(\cdot ,u_i)\) and \(g_j(\cdot ,v_j)\) are SOS-convex polynomials, and for each \(x\in \mathbb {R}^n\), \(f_i(x,\cdot )\) and \(g_j(x,\cdot )\) are affine functions, i.e.,

$$\begin{aligned} f_i(x,u_i):=f^0_i(x)+\sum \limits _{r=1}^{s}{u^r_i}f^r_i(x),~u_i:= \left( u^1_i,\dots ,u^s_i \right) \in \mathcal {U}_i, \end{aligned}$$
(3)

and

$$\begin{aligned} g_j(x,v_j):=g^0_j(x)+\sum \limits _{l=1}^{k}{v^l_j}g^l_j(x),~v_j:= \left( v^1_j,\dots ,v^k_j \right) \in \mathcal {V}_j. \end{aligned}$$
(4)

Here \(f^r_i:\mathbb {R}^n\rightarrow \mathbb {R}\), \(i=1,\dots ,p\), \(r=0,1,\dots ,s\), and \(g^l_j:\mathbb {R}^n\rightarrow \mathbb {R}\), \(j=1,\dots ,m\), \(l=0,1,\dots ,k\), are given polynomials.

The problem \(\mathrm {(UP)}\) is usually associated with the so-called minimax-type robust counterpart

$$\begin{aligned} (\textrm{RP}) ~~{} & {} \mathop {\text{ min }}\limits _{x\in {\mathbb {R}^n}} ~ \left\{ \left( \max \limits _{u_1\in \mathcal {U}_1}f_{1}(x,u_1),\dots ,\max \limits _{u_p\in \mathcal {U}_p}f_{p}(x,u_p)\right) ~|~g_j(x,v_j)\right. \\{} & {} \left. \le 0,\forall v_j\in \mathcal {V}_j,j=1,\dots ,m\right\} . \end{aligned}$$

This paper is devoted to obtaining characterizations of the robust efficient solutions of \(\mathrm {(UP)}\). We first recall the following important concepts, which will be used later in this paper.

Definition 2.2

[4, 6] The robust feasible set of \(\mathrm {(UP)}\) is defined by

$$\begin{aligned} \mathcal {F}:=\{x\in \mathbb {R}^n\mid g_j(x,v_j)\le 0,\forall v_j\in \mathcal {V}_j,j=1,\dots ,m\}. \end{aligned}$$

Definition 2.3

[27] A point \(\bar{x}\in \mathcal {F}\) is said to be a robust weakly efficient solution of \(\mathrm {(UP)}\) iff it is a weakly efficient solution of \(\mathrm {(RP)}\), i.e., there is no point \(x\in \mathcal {F}\) such that

$$\begin{aligned} \max \limits _{u_i\in \mathcal {U}_i}f_i(x,u_i)<\max \limits _{u_i\in \mathcal {U}_i}f_i(\bar{x},u_i),~i=1,\dots ,p. \end{aligned}$$
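Since \(f_i(x,\cdot )\) is affine and \(\mathcal {U}_i\) is a spectrahedron, each value \(\max \limits _{u_i\in \mathcal {U}_i}f_i(x,u_i)\) appearing above is, for fixed x, the optimal value of a small semidefinite program. Below is a minimal sketch, assuming cvxpy is available, using as illustrative data the set \(\mathcal {U}_1\) and the polynomial \(f_1\) from Example 3.1 below, evaluated at \(\bar{x}=0\), where the value is 2.

```python
import numpy as np
import cvxpy as cp

# Data of U_1 from Example 3.1: {u in R^2 : u_1^2/3 + u_2^2/4 <= 1}.
A0 = np.diag([3.0, 4.0, 1.0])
A1 = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
A2 = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, 1.0, 0.0]])

xbar = 0.0
u = cp.Variable(2)
# f_1(x, u_1) = x^4 - u_1^1 * x + u_1^2 is affine in u_1 for fixed x.
objective = cp.Maximize(xbar**4 - u[0] * xbar + u[1])
constraints = [A0 + u[0] * A1 + u[1] * A2 >> 0]
print(cp.Problem(objective, constraints).solve())  # approximately 2.0
```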

Remark 2.2

Note that in this paper, we only deal with robust weakly efficient solutions of \(\mathrm {(UP)}\). Other kinds of robust efficient solutions of \(\mathrm {(UP)}\) can be handled in a similar way.

3 Robust Optimality Conditions

This section is devoted to the development of necessary and sufficient optimality conditions for robust weakly efficient solutions of \(\mathrm {(UP)}\) based on sum of squares conditions and linear matrix inequalities. To do this, we first give the following Farkas-type lemma, which plays a key role in the derivation of our results; for similar results, see [23, 39].

Lemma 3.1

Let \(\gamma \in \mathbb {R}\) and \(\mathcal {W}_j\subseteq \mathbb {R}^k\), \(j=1,\dots ,m\). Let \(\phi :\mathbb {R}^n\rightarrow \mathbb {R}\) be a continuous convex function and let \(h_j:\mathbb {R}^n\times \mathcal {W}_j\rightarrow \mathbb {R}\), \(j=1,\dots ,m\), be continuous functions such that for each \(w_j\in \mathcal {W}_j\), \(h_j(\cdot ,w_j)\) is a convex function. Then, the following two statements are equivalent:

  1. (i)

    \(\{x\in \mathbb {R}^n\mid h_j(x,w_j)\le 0,~\forall w_j\in \mathcal {W}_j,~j=1,\dots ,m\}\subseteq \{x\in \mathbb {R}^n\mid \phi (x)\ge \gamma \}\).

  2. (ii)

    \((0,-\gamma )\in \mathrm{{epi}}\phi ^*+\mathrm{{clco}}\bigcup \limits _{\lambda ^0_j\ge 0,w_j\in \mathcal {W}_j} \mathrm{{epi}}\left( \sum \limits _{j=1}^{m}\lambda ^0_jh_j(\cdot ,w_j)\right) ^*\).

Proof

The proof is similar to the proof given for [23, Theorem 2.4]. \(\square \)

The following constraint qualification is a key assumption in the investigation of robust weakly efficient solutions of \(\mathrm{{(UP)}}\).

Definition 3.1

[23] We say that the robust-type characteristic cone constraint qualification \(\mathrm{{(RCCCQ)}}\) holds iff

$$\begin{aligned} \bigcup \limits _{\lambda ^0_j\ge 0,v_j\in \mathcal {V}_j} \mathrm{{epi}}\left( \sum \limits _{j=1}^{m}\lambda ^0_jg_j(\cdot ,v_j)\right) ^* ~\mathrm{{is~a~closed~convex~cone}}. \end{aligned}$$

Remark 3.1

  1. (i)

    It is worth noting that \(\mathrm{{(RCCCQ)}}\) was first introduced in [23] to characterize robust strong duality for uncertain convex programming problems. Some complete characterizations of \(\mathrm{{(RCCCQ)}}\) are also given in [23, Propositions 2.2, 2.3 and 3.2]. The \(\mathrm{{(RCCCQ)}}\) has also been used in [26, 31, 41] and the relevant references cited therein to study optimality conditions, duality results and SDP relaxations for polynomial optimization problems.

  2. (ii)

    Note that for each \(v_j\in \mathcal {V}_j\), \(g_j(\cdot ,v_j)\) is an SOS-convex polynomial, and for each \(x\in \mathbb {R}^n\), \(g_j(x,\cdot )\) is an affine function. Thus, the following Slater-type condition, used in [13],

    $$\begin{aligned} \left\{ x\in \mathbb {R}^n\mid g_j(x,v_j)<0,\forall v_j\in \mathcal {V}_j,j=1,\dots ,m\right\} \ne \emptyset \end{aligned}$$
    (5)

    is a sufficient condition for \(\mathrm{{(RCCCQ)}}\). See [23, Proposition 3.2] for details.

Now, by using the \(\mathrm {(RCCCQ)}\), we give the following necessary and sufficient optimality conditions for robust weakly efficient solutions of \(\mathrm {(UP)}\) based on the sum of squares conditions and linear matrix inequalities.

Theorem 3.1

Assume that the \(\mathrm {(RCCCQ)}\) holds. Then, \(\bar{x}\in \mathcal {F}\) is a robust weakly efficient solution of \(\mathrm {(UP)}\) if and only if there exist \(\alpha ^0_i\in \mathbb {R}_+\) with \(\sum \limits _{i=1}^{p}\alpha ^0_i=1\), \(\alpha ^r_i\in \mathbb {R}\), \(i=1,\dots ,p\), \(r=1,\dots ,s\), and \(\lambda ^0_j\in \mathbb {R}_+\), \(\lambda ^l_j\in \mathbb {R}\), \(j=1,\dots ,m\), \(l=1,\dots ,k\), such that

$$\begin{aligned} \sum \limits _{i=1}^{p}\left( \alpha ^0_i f^0_i+\sum \limits _{r=1}^{s}\alpha ^r_i f^r_i\right) +\sum \limits _{j=1}^{m}\left( \lambda ^0_j g^0_j+\sum \limits _{l=1}^{k}\lambda ^l_j g^l_j\right) -\sum \limits _{i=1}^{p}\alpha ^0_i F_i(\bar{x})\in \Sigma ^2_d[x] \end{aligned}$$
(6)

and

$$\begin{aligned} \alpha ^0_i A^0_i+\sum \limits _{r=1}^{s}\alpha ^r_i A^r_i\succeq 0,~\lambda ^0_j B^0_j+\sum \limits _{l=1}^{k}\lambda ^l_j B^l_j\succeq 0,~i=1,\dots ,p,~j=1,\dots ,m, \end{aligned}$$
(7)

where \(d\ge \max \left\{ \max \limits _{1\le i\le p}\left( \deg f^0_i\right) ,\max \limits _{1\le i\le p}\left( \deg f^r_i\right) ,\!\max \limits _{1\le j\le m}\left( \deg g^0_j\right) ,\max \limits _{1\le j\le m}\left( \deg g^l_j\right) \!\right\} \) and \(F_i(\bar{x}):=\max \limits _{u_i \in \mathcal {U}_i}f_i(\bar{x},u_i)\) for \(i=1,\dots ,p\).

Proof

\((\Rightarrow )\) Suppose that \(\bar{x}\in \mathcal {F}\) is a robust weakly efficient solution of \(\mathrm {(UP)}\). Then, there is no point \(x\in \mathcal {F}\) such that

$$\begin{aligned} F_i(x)<F_i(\bar{x}),~i=1,\dots ,p, \end{aligned}$$

where \(F_i(x):=\max \limits _{u_i\in \mathcal {U}_i}f_i(x,u_i)\), \(i=1,\dots ,p\). By [32, Proposition 8.2], there exist \(\alpha ^0_i\in \mathbb {R}_+\), \(i=1,\dots ,p\), with \(\sum \limits _{i=1}^{p}\alpha ^0_i=1\), such that

$$\begin{aligned} \sum \limits _{i=1}^{p}\alpha ^0_i F_i(x)\ge \sum \limits _{i=1}^{p}\alpha ^0_i F_i(\bar{x}),~\forall x\in \mathcal {F}. \end{aligned}$$

This means that

$$\begin{aligned}{} & {} \left\{ x\in \mathbb {R}^n\mid g_j(x,v_j)\le 0,~\forall v_j\in \mathcal {V}_j,~j=1,\dots ,m\right\} \\{} & {} \subseteq \left\{ x\in \mathbb {R}^n~\Big | ~\sum \limits _{i=1}^{p}\alpha ^0_i F_i(x)\ge \sum \limits _{i=1}^{p}\alpha ^0_i F_i(\bar{x})\right\} . \end{aligned}$$

By Lemma 3.1, we have

$$\begin{aligned} \left( 0,-\sum \limits _{i=1}^{p}\alpha ^0_i F_i(\bar{x})\right) \in \mathrm{{epi}}\left( \sum \limits _{i=1}^{p}\alpha ^0_i F_i\right) ^*+\mathrm{{clco}}\bigcup \limits _{\lambda ^0_j\ge 0,v_j\in \mathcal {V}_j} \mathrm{{epi}}\left( \sum \limits _{j=1}^{m}\lambda ^0_jg_j(\cdot ,v_j)\right) ^*\!.\nonumber \\ \end{aligned}$$
(8)

Since \(f_i(\cdot ,u_i)\) is an SOS-convex polynomial and \(f_i(x,\cdot )\) is an affine function, it follows from the equality (4) of [39] that

$$\begin{aligned} \mathrm{{epi}}\left( \sum \limits _{i=1}^{p}\alpha ^0_i F_i\right) ^*=\bigcup \limits _{u_i\in \mathcal {U}_i}\mathrm{{epi}}\left( \sum \limits _{i=1}^{p}\alpha ^0_if_i(\cdot ,u_i)\right) ^*. \end{aligned}$$
(9)

Note that the \(\mathrm{{(RCCCQ)}}\) holds. Then, from (8) and (9), we deduce that

$$\begin{aligned}{} & {} \left( 0,-\sum \limits _{i=1}^{p}\alpha ^0_i F_i(\bar{x})\right) \in \bigcup \limits _{u_i\in \mathcal {U}_i}\mathrm{{epi}} \left( \sum \limits _{i=1}^{p}\alpha ^0_if_i(\cdot ,u_i)\right) ^*\\{} & {} +\bigcup \limits _{\lambda ^0_j\ge 0,v_j\in \mathcal {V}_j} \mathrm{{epi}}\left( \sum \limits _{j=1}^{m}\lambda ^0_jg_j(\cdot ,v_j)\right) ^*. \end{aligned}$$

This means that there exist \(u_i\in \mathcal {U}_i\) with \((\xi ^*,\eta )\in \mathrm{{epi}}\left( \sum \limits _{i=1}^{p}\alpha ^0_if_i(\cdot ,u_i)\right) ^*\), \(v_j\in \mathcal {V}_j\) with \((\omega ^*,\gamma )\in \mathrm{{epi}}\left( \sum \limits _{j=1}^{m}\lambda ^0_jg_j(\cdot ,v_j)\right) ^*\) and \(\lambda _j^0\ge 0\), such that

$$\begin{aligned} \left( \xi ^*,\eta \right) +\left( \omega ^*,\gamma \right) =\left( 0,-\sum \limits _{i=1}^{p}\alpha ^0_i F_i(\bar{x})\right) . \end{aligned}$$

Therefore, for any \(x\in \mathbb {R}^n\),

$$\begin{aligned}{} & {} -\sum \limits _{i=1}^{p}\alpha ^0_if_i(x,u_i)-\sum \limits _{j=1}^{m}\lambda ^0_jg_j(x,v_j)\\{} & {} =\langle \xi ^*,x\rangle -\sum \limits _{i=1}^{p}\alpha ^0_if_i(x,u_i)+\langle \omega ^*,x\rangle -\sum \limits _{j=1}^{m}\lambda ^0_jg_j(x,v_j) \\{} & {} \le \left( \sum \limits _{i=1}^{p}\alpha ^0_if_i(\cdot ,u_i)\right) ^*(\xi ^*)+\left( \sum \limits _{j=1}^{m}\lambda ^0_jg_j(\cdot ,v_j)\right) ^*(\omega ^*)\\{} & {} \le \eta +\gamma \\{} & {} =-\sum \limits _{i=1}^{p}\alpha ^0_i F_i(\bar{x}). \end{aligned}$$

For any \(x\in \mathbb {R}^n\), set

$$\begin{aligned} \sigma (x):=\sum \limits _{i=1}^{p}\alpha ^0_i f_i(x,u_i)+\sum \limits _{j=1}^{m}\lambda ^0_jg_j(x,v_j)-\sum \limits _{i=1}^{p}\alpha ^0_i F_i(\bar{x}). \end{aligned}$$
(10)

Clearly, \(\sigma (x)\ge 0\). This, together with \(F_i(\bar{x})\ge f_i(\bar{x},u_i)\), gives \(\sum \limits _{j=1}^{m}\lambda ^0_jg_j(\bar{x},v_j)\ge 0\). Note that \(\sum \limits _{j=1}^{m}\lambda ^0_jg_j(\bar{x},v_j)\le 0\) due to \(\bar{x}\in \mathcal {F}\). Thus, \(\sum \limits _{j=1}^{m}\lambda ^0_jg_j(\bar{x},v_j)=0\). Together with \(F_i(\bar{x})\ge f_i(\bar{x},u_i)\), we have \(\sigma (\bar{x})\le 0\). Consequently,

$$\begin{aligned} \sigma (\bar{x})=0=\inf _{x\in \mathbb {R}^n}\sigma (x). \end{aligned}$$
(11)

Obviously, \(\nabla \sigma (\bar{x})=0\). Since \(f_i(\cdot ,u_i)\), \(i=1,\dots ,p\), and \(g_j(\cdot ,v_j)\), \(j=1,\dots ,m\), are SOS-convex polynomials, we see that \(\sigma \) is an SOS-convex polynomial. Hence, it follows from Lemma 2.2 that \(\sigma \) is a sum of squares polynomial. Note that the degree of \(\sigma \) is not larger than d. Then, \(\sigma \in {\Sigma }^2_d[x]\). Thus, from (3), (4) and (10), we have

$$\begin{aligned} \sum \limits _{i=1}^{p}\alpha ^0_i \left( f^0_i+\sum \limits _{r=1}^{s}u^r_i f^r_i\right) +\sum \limits _{j=1}^{m}\lambda ^0_j\left( g^0_j+\sum \limits _{l=1}^{k}v^l_j g^l_j\right) -\sum \limits _{i=1}^{p}\alpha ^0_i F_i(\bar{x})\in {\Sigma }^2_d[x]. \end{aligned}$$

Let \(\alpha ^r_i:=\alpha ^0_i u^r_i\) and \(\lambda ^l_j:=\lambda ^0_j v^l_j\). It follows that (6) holds.

On the other hand, by \(\alpha ^0_i\ge 0\), \(\alpha ^r_i=\alpha ^0_i u^r_i\) and (1), we deduce that

$$\begin{aligned} \alpha ^0_i A^0_i +\sum \limits _{r=1}^{s}\alpha ^r_i A^r_i=\alpha ^0_i\left( A^0_i+\sum \limits _{r=1}^{s}u^r_i A^r_i\right) \succeq 0,~i=1,\dots ,p. \end{aligned}$$

Similarly, by \(\lambda ^0_j\ge 0\), \(\lambda ^l_j=\lambda ^0_j v^l_j\) and (2), we have

$$\begin{aligned} \lambda ^0_j B^0_j+\sum \limits _{l=1}^{k}\lambda ^l_j B^l_j=\lambda ^0_j\left( B^0_j+\sum \limits _{l=1}^{k}v^l_j B^l_j\right) \succeq 0,~j=1,\dots ,m. \end{aligned}$$

Thus, (7) holds.

\((\Leftarrow )\) Suppose that there exist \(\alpha ^0_i\in \mathbb {R}_+\) with \(\sum \limits _{i=1}^{p}\alpha ^0_i=1\), \(\alpha ^r_i\in \mathbb {R}\), \(i=1,\dots ,p\), \(r=1,\dots ,s\), and \(\lambda ^0_j\in \mathbb {R}_+\), \(\lambda ^l_j\in \mathbb {R}\), \(j=1,\dots ,m\), \(l=1,\dots ,k\), such that (6) and (7) hold.

Consider any \(i=1,\dots ,p\), and \(j=1,\dots ,m\). By a similar argument as in [13, Theorem 2.3], we can show that if \(\alpha ^0_i=0\), then, \(\alpha ^r_i=0\) for all \(r=1,\dots ,s\). If \(\lambda ^0_j=0\), then, \(\lambda ^l_j=0\) for all \(l=1,\dots ,k\).

Now, let \(\hat{u}_i:=(\hat{u}^1_i,\dots ,\hat{u}^s_i)\in \mathcal {U}_i\). Set \(\tilde{u}_i:=(\tilde{u}^1_i,\dots ,\tilde{u}^s_i)\) with

$$\begin{aligned} \tilde{u}^r_i:=\left\{ \begin{aligned} \hat{u}^r_i~~~~~~&\mathrm{{if}}~\alpha ^0_i=0,\\ \frac{\alpha ^r_i}{\alpha ^0_i}~~~~~~&\mathrm{{if}}~\alpha ^0_i\not =0, \end{aligned} \right. ~r=1,\dots ,s. \end{aligned}$$

Obviously, \(\tilde{u}_i\in \mathcal {U}_i\), and for any \(x\in \mathbb {R}^n\),

$$\begin{aligned} \alpha ^0_i f^0_i(x)+\sum \limits _{r=1}^{s}\alpha ^r_i f^r_i(x)= & {} \alpha ^0_i \left( f^0_i(x)+\sum \limits _{r=1}^{s} \tilde{u}^r_i f^r_i(x)\right) \nonumber \\= & {} \alpha ^0_i f_i(x,\tilde{u}_i),~i=1,\dots ,p. \end{aligned}$$
(12)

Here, note that if \(\alpha ^0_i=0\), then, \(\alpha ^r_i=0\) for any \(r=1,\dots ,s\).

Similarly, let \(\hat{v}_j:=(\hat{v}^1_j,\dots ,\hat{v}^k_j)\in \mathcal {V}_j\). Set \(\tilde{v}_j:=(\tilde{v}^1_j,\dots ,\tilde{v}^k_j)\) with

$$\begin{aligned} \tilde{v}^l_j:=\left\{ \begin{aligned} \hat{v}^l_j~~~~~~&\mathrm{{if}}~\lambda ^0_j=0, \\ \frac{\lambda ^l_j}{\lambda ^0_j}~~~~~~&\mathrm{{if}}~\lambda ^0_j\not =0, \end{aligned} \right. ~l=1,\dots ,k. \end{aligned}$$

Obviously, \(\tilde{v}_j\in \mathcal {V}_j\), and for any \(x\in \mathbb {R}^n\),

$$\begin{aligned} \lambda ^0_j g^0_j(x)+\sum \limits _{l=1}^{k}\lambda ^l_j g^l_j(x)= & {} \lambda ^0_j\left( g^0_j(x)+\sum \limits _{l=1}^{k}\tilde{v}^l_j g^l_j(x)\right) \nonumber \\= & {} \lambda ^0_j g_j(x,\tilde{v}_j),~j=1,\dots ,m. \end{aligned}$$
(13)

By (6), (12) and (13), we obtain

$$\begin{aligned} \sum \limits _{i=1}^{p}\alpha ^0_i f_i(x,\tilde{u}_i)+\sum \limits _{j=1}^{m}\lambda ^0_j g_j(x,\tilde{v}_j)-\sum \limits _{i=1}^{p}\alpha ^0_i F_i(\bar{x})\ge 0,~\forall x\in \mathbb {R}^n. \end{aligned}$$
(14)

Note that \(g_j(x,\tilde{v}_j)\le 0\), \(\forall x\in \mathcal {F}\). It follows from (14) that

$$\begin{aligned} \sum \limits _{i=1}^{p}\alpha ^0_i f_i(x,\tilde{u}_i)\ge \sum \limits _{i=1}^{p}\alpha ^0_i F_i(\bar{x}),\forall x\in \mathcal {F}. \end{aligned}$$

Taking \(F_i(x)\ge f_i(x,\tilde{u}_i)\) into account, we have

$$\begin{aligned} \sum \limits _{i=1}^{p}\alpha ^0_i F_i(x)\ge \sum \limits _{i=1}^{p}\alpha ^0_i F_i(\bar{x}),~\forall x\in \mathcal {F}. \end{aligned}$$
(15)

Note that \(\alpha ^0_i\in \mathbb {R}_+\) with \(\sum \limits _{i=1}^{p}\alpha ^0_i=1\). Then, (15) yields that there is no point \(x\in \mathcal {F}\) such that

$$\begin{aligned} F_i(x)<F_i(\bar{x}),~i=1,\dots ,p. \end{aligned}$$

Thus, \(\bar{x}\) is a robust weakly efficient solution of \(\mathrm {(UP)}\). This completes the proof. \(\square \)

In the special case when \(\mathcal {U}_i\), \(i=1,\dots ,p\), are singletons, \(\mathrm {(UP)}\) reduces to the following polynomial optimization problem

$$\begin{aligned} (\textrm{UP})_0 ~~~~~~ \begin{array}{ll} \mathop {\text{ min }}\limits _{x\in {\mathbb {R}^n}} ~ \left\{ \left( f_{1}(x),\dots ,f_{p}(x)\right) ~|~g_j(x,v_j)\le 0,j=1,\dots ,m\right\} . \end{array} \end{aligned}$$

Here, \(f_i:\mathbb {R}^n\rightarrow \mathbb {R}\), \(i=1,\dots ,p\), are SOS-convex polynomials, and \(g_j:\mathbb {R}^n\times \mathcal {V}_j\rightarrow \mathbb {R}\), \(j=1,\dots ,m\), are defined as in (4).

The following corollary gives a characterization of robust weakly efficient solutions of \(\mathrm {(UP)_0}\) based on \(\mathrm{{(RCCCQ)}}\).

Corollary 3.1

Assume that the \(\mathrm{{(RCCCQ)}}\) holds. Then, \(\bar{x}\in \mathcal {F}\) is a robust weakly efficient solution of \(\mathrm {(UP)_0}\) if and only if there exist \(\alpha ^0_i\in \mathbb {R}_+\) with \(\sum \limits _{i=1}^{p}\alpha ^0_i=1\), \(i=1,\dots ,p\), and \(\lambda ^0_j\in \mathbb {R}_+\), \(\lambda ^l_j\in \mathbb {R}\), \(j=1,\dots ,m\), \(l=1,\dots ,k\), such that

$$\begin{aligned} \sum \limits _{i=1}^{p}\alpha ^0_if_i+\sum \limits _{j=1}^{m}\left( \lambda ^0_j g^0_j+\sum \limits _{l=1}^{k}\lambda ^l_j g^l_j\right) -\sum \limits _{i=1}^{p}\alpha ^0_if_i(\bar{x})\in {\Sigma }^2_d[x] \end{aligned}$$

and

$$\begin{aligned} \lambda ^0_j B^0_j+\sum \limits _{l=1}^{k}\lambda ^l_j B^l_j\succeq 0,~j=1,\dots ,m, \end{aligned}$$

where \(d\ge \max \left\{ \max \limits _{1\le i\le p}\left( \deg f_i\right) ,\max \limits _{1\le j\le m}\left( \deg g^0_j\right) ,\max \limits _{1\le j\le m}\left( \deg g^l_j\right) \right\} \).

Remark 3.2

In [13, Theorem 2.3], under the Slater-type condition (5), optimality conditions similar to the one in Corollary 3.1 have been investigated. Moreover, as mentioned above, \(\mathrm{{(RCCCQ)}}\) is weaker than the Slater-type condition (5). Thus, our results cover the corresponding results in [13].

Now, we give an example in which, for \(\mathrm {(UP)}\), the \(\mathrm{{(RCCCQ)}}\) holds and the optimality conditions obtained in Theorem 3.1 are satisfied, whereas the Slater-type condition (5) fails.

Example 3.1

Consider problem \((\textrm{UP})\) with \(m=n=1\) and \(p=s=k=2\). The uncertain sets \(\mathcal {U}_1 \subseteq \mathbb {R}^2,\) \(\mathcal {U}_2 \subseteq \mathbb {R}^2\) and \(\mathcal {V}_1 \subseteq \mathbb {R}^2\) are defined, respectively, by

$$\begin{aligned} \mathcal {U}_1:= & {} \left\{ u_1:= \left( u^1_1,u^2_1 \right) \in \mathbb {R}^2 \mid \frac{ \left( u^1_1 \right) ^2}{3}+ \frac{\left( u^2_1 \right) ^2}{4}\le 1\right\} ,\\ \mathcal {U}_2:= & {} \left\{ u_2:= \left( u^1_2,u^2_2 \right) \in \mathbb {R}^2 \mid \frac{ \left( u^1_2 \right) ^2}{8}+\frac{ \left( u^2_2 \right) ^2}{10}\le 1\right\} \end{aligned}$$

and

$$\begin{aligned} \mathcal {V}_1:=\left\{ v_1:= \left( v^1_1,v^2_1 \right) \in \mathbb {R}^2 \mid \left( v^1_1 \right) ^2+ \left( v^2_1 \right) ^2\le 1\right\} . \end{aligned}$$

Let the polynomials \(f_1:\mathbb {R}\times \mathcal {U}_1\rightarrow \mathbb {R}\), \(f_2:\mathbb {R}\times \mathcal {U}_2\rightarrow \mathbb {R}\) and \(g_1:\mathbb {R}\times \mathcal {V}_1\rightarrow \mathbb {R}\) be defined, respectively, by

$$\begin{aligned} f_1(x,u_1)= & {} x^4-u^1_1x+u^2_1,~~f_2(x,u_2)=x^4-2u^2_2x+u^1_2,~ \text{ and } \\ g_1(x,v_1)= & {} x^2+v^1_1x+v^2_1x. \end{aligned}$$

Obviously, \(f^0_1(x)=x^4\), \(f^1_1(x)=-x\), \(f^2_1(x)=1\), \(f^0_2(x)=x^4\), \(f^1_2(x)=1\), \(f^2_2(x)=-2x\), \(g^0_1(x)=x^2\), \(g^1_1(x)=x\) and \(g^2_1(x)=x\). Moreover, by (1) and (2), we have

$$\begin{aligned}&A^0_1= \begin{pmatrix} 3&{}\quad 0&{}\quad 0\\ 0&{}\quad 4&{}\quad 0\\ 0&{}\quad 0&{}\quad 1\\ \end{pmatrix}, ~A^1_1= \begin{pmatrix} 0&{}\quad 0&{}\quad 1\\ 0&{}\quad 0&{}\quad 0\\ 1&{}\quad 0&{}\quad 0\\ \end{pmatrix}, ~A^2_1= \begin{pmatrix} 0&{}\quad 0&{}\quad 0\\ 0&{}\quad 0&{}\quad 1\\ 0&{}\quad 1&{}\quad 0\\ \end{pmatrix},\\&A^0_2= \begin{pmatrix} 8&{}\quad 0&{}\quad 0\\ 0&{}\quad 10&{}\quad 0\\ 0&{}\quad 0&{}\quad 1\\ \end{pmatrix}, A^1_2= \begin{pmatrix} 0&{}\quad 0&{}\quad 1\\ 0&{}\quad 0&{}\quad 0\\ 1&{}\quad 0&{}\quad 0\\ \end{pmatrix}, ~A^2_2= \begin{pmatrix} 0&{}\quad 0&{}\quad 0\\ 0&{}\quad 0&{}\quad 1\\ 0&{}\quad 1&{}\quad 0\\ \end{pmatrix},\\&B^0_1= \begin{pmatrix} 1&{}\quad 0&{}\quad 0\\ 0&{}\quad 1&{}\quad 0\\ 0&{}\quad 0&{}\quad 1\\ \end{pmatrix}, ~B^1_1= \begin{pmatrix} 0&{}\quad 0&{}\quad 1\\ 0&{}\quad 0&{}\quad 0\\ 1&{}\quad 0&{}\quad 0\\ \end{pmatrix}, ~B^2_1= \begin{pmatrix} 0&{}\quad 0&{}\quad 0\\ 0&{}\quad 0&{}\quad 1\\ 0&{}\quad 1&{}\quad 0\\ \end{pmatrix}. \end{aligned}$$

Clearly, the Slater-type condition fails for this problem. On the other hand, it is easy to show that

$$\begin{aligned}&\bigcup \limits _{\lambda ^0_j\ge 0,v_j\in \mathcal {V}_j} \mathrm{{epi}}\left( \sum \limits _{j=1}^{m}\lambda ^0_jg_j(\cdot ,v_j)\right) ^*&\\ {}&\quad =\bigcup \limits _{\lambda ^0_1>0,v_1\in \mathcal {V}_1}\mathrm{{epi}}\left( \lambda ^0_1 g_1(\cdot ,v_1)\right) ^*\cup \left( \{0\}\times [0,+\infty )\right) \\&\quad =\bigcup \limits _{\lambda ^0_1>0,v_1\in \mathcal {V}_1}\lambda ^0_1\left\{ (x^*,r)\in \mathbb {R}\times \mathbb {R}~|~r\ge \frac{(x^*-v^1_1-v^2_1)^2}{4}\right\} \cup \left( \{0\}\times [0,+\infty )\right) \\&\quad =\mathbb {R}\times [0,+\infty ), \end{aligned}$$

which is a closed and convex cone. Thus, \(\mathrm{{(RCCCQ)}}\) holds. Moreover, it can be checked that \(\bar{x}:=0\) is a robust weakly efficient solution of \((\textrm{UP})\). Now, we assert that the conditions (6) and (7) hold at \(\bar{x}\). In fact, we only need to show that there exist \(\alpha ^0_i\in \mathbb {R}_+\) with \(\sum \limits _{i=1}^{p}\alpha ^0_i=1\), \(\alpha ^r_i\in \mathbb {R}\), \(i=1,2\), \(r=1,2\), and \(\lambda ^0_1\in \mathbb {R}_+\), \(\lambda ^l_1\in \mathbb {R}\), \(l=1,2\), such that

$$\begin{aligned}{} & {} \left( \alpha ^0_1+\alpha ^0_2 \right) x^4+\lambda ^0_1x^2- \left( 2\alpha ^2_2+ \alpha ^1_1-\lambda ^1_1-\lambda ^2_1 \right) x+\alpha ^1_2\nonumber \\{} & {} \qquad +\alpha ^2_1-2\alpha ^0_1-2\sqrt{2}\alpha ^0_2\in {\Sigma }^2_d[x] \end{aligned}$$
(16)

and

$$\begin{aligned} \frac{ \left( \alpha ^1_1 \right) ^2}{3}+\frac{ \left( \alpha ^2_1 \right) ^2}{4}\le \left( \alpha ^0_1 \right) ^2,~\frac{ \left( \alpha ^1_2 \right) ^2}{8}+\frac{ \left( \alpha ^2_2 \right) ^2}{10} \le \left( \alpha ^0_2 \right) ^2,~ \left( \lambda ^1_1 \right) ^2+ \left( \lambda ^2_1 \right) ^2\le \left( \lambda ^0_1 \right) ^2.\nonumber \\ \end{aligned}$$
(17)

For instance, let \(\alpha ^0_1=\alpha ^0_2:=\frac{1}{2}\), \(\alpha ^1_1:=0\), \(\alpha ^2_1:=1\), \(\alpha ^1_2:=\sqrt{2}\), \(\alpha ^2_2:=0\), \(\lambda ^0_1:=1\) and \(\lambda ^1_1=\lambda ^2_1:=0\). Then, (16) and (17) hold. Thus, Theorem 3.1 is applicable.
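The verification of (17), and hence of the linear matrix inequalities in (7), for the multipliers chosen above is a finite collection of positive semidefiniteness checks, while the polynomial in (16) reduces to \(x^4+x^2\), which is plainly a sum of squares. Below is a minimal numerical sketch of the LMI checks, assuming only numpy and using exactly the values selected at the end of the example.

```python
import numpy as np

def is_psd(M, tol=1e-9):
    return float(np.min(np.linalg.eigvalsh(M))) >= -tol

E13 = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
E23 = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
A01, A02, B01 = np.diag([3.0, 4.0, 1.0]), np.diag([8.0, 10.0, 1.0]), np.eye(3)

a01, a11, a21 = 0.5, 0.0, 1.0            # alpha_1^0, alpha_1^1, alpha_1^2
a02, a12, a22 = 0.5, np.sqrt(2.0), 0.0   # alpha_2^0, alpha_2^1, alpha_2^2
l01, l11, l21 = 1.0, 0.0, 0.0            # lambda_1^0, lambda_1^1, lambda_1^2

print(is_psd(a01 * A01 + a11 * E13 + a21 * E23))  # True, first LMI in (7)
print(is_psd(a02 * A02 + a12 * E13 + a22 * E23))  # True, second LMI in (7)
print(is_psd(l01 * B01 + l11 * E13 + l21 * E23))  # True, third LMI in (7)
```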

The following example shows that the conclusion of Theorem 3.1 may fail if the \(\mathrm{{(RCCCQ)}}\) is not satisfied.

Example 3.2

Consider problem \((\textrm{UP})\) with \(m=1\) and \(p=s=k=n=2\). The uncertain sets \(\mathcal {U}_1 \subseteq \mathbb {R}^2,\) \(\mathcal {U}_2 \subseteq \mathbb {R}^2\) and \(\mathcal {V}_1 \subseteq \mathbb {R}^2\) are defined, respectively, by

$$\begin{aligned} \mathcal {U}_1:= & {} \left\{ u_1:= \left( u^1_1,u^2_1 \right) \in \mathbb {R}^2 \mid \frac{ \left( u^1_1 \right) ^2}{2}+\frac{\left( u^2_1 \right) ^2}{3}\le 1\right\} ,\\ \mathcal {U}_2:= & {} \left\{ u_2:= \left( u^1_2,u^2_2 \right) \in \mathbb {R}^2 \mid \frac{\left( u^1_2 \right) ^2}{4}+\frac{\left( u^2_2 \right) ^2}{5}\le 1\right\} \end{aligned}$$

and

$$\begin{aligned} \mathcal {V}_1:=\left\{ v_1:= \left( v^1_1,v^2_1 \right) \in \mathbb {R}^2 \mid \left( v^1_1 \right) ^2+ \left( v^2_1 \right) ^2\le 1\right\} . \end{aligned}$$

Let the polynomials \(f_1:\mathbb {R}^2\times \mathcal {U}_1\rightarrow \mathbb {R}\), \(f_2:\mathbb {R}^2\times \mathcal {U}_2\rightarrow \mathbb {R}\) and \(g_1:\mathbb {R}^2\times \mathcal {V}_1\rightarrow \mathbb {R}\) be defined, respectively, by

$$\begin{aligned} f_1(x,u_1)= & {} x^4_1+2x_2-u^1_1,~~f_2(x,u_2)=x^4_1-x_2+u^2_2~ \text{ and } \\ g_1(x,v_1)= & {} (v^1_1+1)x_1+v^2_1x_2. \end{aligned}$$

Obviously, \(f^0_1(x)=x^4_1+2x_2\), \(f^1_1(x)=-1\), \(f^2_1(x)=0\), \(f^0_2(x)=x^4_1-x_2\), \(f^1_2(x)=0\), \(f^2_2(x)=1\), \(g^0_1(x)=x_1\), \(g^1_1(x)=x_1\) and \(g^2_1(x)=x_2\). Moreover, by (1) and (2), we have

$$\begin{aligned}&A^0_1= \begin{pmatrix} 2&{}\quad 0&{}\quad 0\\ 0&{}\quad 3&{}\quad 0\\ 0&{}\quad 0&{}\quad 1\\ \end{pmatrix}, ~A^1_1= \begin{pmatrix} 0&{}\quad 0&{}\quad 1\\ 0&{}\quad 0&{}\quad 0\\ 1&{}\quad 0&{}\quad 0\\ \end{pmatrix}, ~A^2_1= \begin{pmatrix} 0&{}\quad 0&{}\quad 0\\ 0&{}\quad 0&{}\quad 1\\ 0&{}\quad 1&{}\quad 0\\ \end{pmatrix},\\&A^0_2= \begin{pmatrix} 4&{}\quad 0&{}\quad 0\\ 0&{}\quad 5&{}\quad 0\\ 0&{}\quad 0&{}\quad 1\\ \end{pmatrix}, ~A^1_2= \begin{pmatrix} 0&{}\quad 0&{}\quad 1\\ 0&{}\quad 0&{}\quad 0\\ 1&{}\quad 0&{}\quad 0\\ \end{pmatrix}, ~A^2_2= \begin{pmatrix} 0&{}\quad 0&{}\quad 0\\ 0&{}\quad 0&{}\quad 1\\ 0&{}\quad 1&{}\quad 0\\ \end{pmatrix},\\&B^0_1= \begin{pmatrix} 1&{}\quad 0&{}\quad 0\\ 0&{}\quad 1&{}\quad 0\\ 0&{}\quad 0&{}\quad 1\\ \end{pmatrix}, ~B^1_1= \begin{pmatrix} 0&{}\quad 0&{}\quad 1\\ 0&{}\quad 0&{}\quad 0\\ 1&{}\quad 0&{}\quad 0\\ \end{pmatrix}, ~B^2_1= \begin{pmatrix} 0&{}\quad 0&{}\quad 0\\ 0&{}\quad 0&{}\quad 1\\ 0&{}\quad 1&{}\quad 0\\ \end{pmatrix}. \end{aligned}$$

Clearly, \(\bar{x}:=(0,0)\) is a robust weakly efficient solution of \(\mathrm{{(UP)}}\) and

$$\begin{aligned}&\bigcup \limits _{\lambda ^0_j\ge 0,v_j\in \mathcal {V}_j} \mathrm{{epi}}\left( \sum \limits _{j=1}^{m}\lambda ^0_jg_j(\cdot ,v_j)\right) ^*&\\ {}&\quad =\bigcup \limits _{\lambda ^0_1>0,v_1\in \mathcal {V}_1}\mathrm{{epi}}\left( \lambda ^0_1 g_1(\cdot ,v_1)\right) ^*\cup \left( \{(0,0)\}\times [0,+\infty )\right) \\&\quad =\bigcup \limits _{\lambda ^0_1>0}\lambda ^0_1\left\{ (x^*_1,x^*_2, r)\in \mathbb {R}^2\times \mathbb {R}~|~(x^*_1-1)^2+(x^*_2)^2\le 1,r\ge 0\right\} \\&\qquad \cup \left( \{(0,0)\}\times [0,+\infty )\right) , \end{aligned}$$

which is not closed. Indeed, take a sequence \(s_n:= (\frac{1}{n},1,0) \), \(\forall n\in \mathbb {N}\). Obviously,

$$\begin{aligned} s_n=n \left( \frac{1}{n^2},\frac{1}{n},0 \right) \in \bigcup \limits _{\lambda ^0_j\ge 0,v_j\in \mathcal {V}_j} \mathrm{{epi}}\left( \sum \limits _{j=1}^{m}\lambda ^0_jg_j(\cdot ,v_j)\right) ^*, \forall n\in \mathbb {N}. \end{aligned}$$

Note that \(s_n\rightarrow (0,1,0)\) as \(n\rightarrow \infty \). However, \((0,1,0)\notin \bigcup \limits _{\lambda ^0_j\ge 0,v_j\in \mathcal {V}_j} \mathrm{{epi}}\left( \sum \limits _{j=1}^{m}\lambda ^0_jg_j(\cdot ,v_j)\right) ^* \). Thus, \(\mathrm{{(RCCCQ)}}\) fails.

Now, we assert that the conclusion of Theorem 3.1 is not satisfied at \(\bar{x}\). Otherwise, there exist \(\alpha ^0_i\in \mathbb {R}_+\) with \(\sum \limits _{i=1}^{2}\alpha ^0_i=1\), \(\alpha ^r_i\in \mathbb {R}\), \(i=1,2\), \(r=1,2\), and \(\lambda ^0_1\in \mathbb {R}_+\), \(\lambda ^l_1\in \mathbb {R}\), \(l=1,2\), such that (6) and (7) hold. Note that \(F_1(\bar{x})=\sqrt{2}\) and \(F_2(\bar{x})=\sqrt{5}\). For any \( x:=(x_1,x_2)\in \mathbb {R}^2\), it follows from (6) that

$$\begin{aligned} \left( \alpha ^0_1+\alpha ^0_2 \right) x^4_1+\left( \lambda ^0_1+\lambda ^1_1 \right) x_1+ \left( 2\alpha ^0_1 -\alpha ^0_2+\lambda ^2_1 \right) x_2-\alpha ^1_1+\alpha ^2_2-\sqrt{2}\alpha ^0_1-\sqrt{5}\alpha ^0_2\ge 0. \end{aligned}$$

Then,

$$\begin{aligned} \left( \alpha ^0_1+\alpha ^0_2 \right) x^4_1+ \left( \lambda ^0_1+\lambda ^1_1 \right) x_1-\alpha ^1_1+ \alpha ^2_2-\sqrt{2}\alpha ^0_1-\sqrt{5}\alpha ^0_2\ge 0, \forall x_1\in \mathbb {R}, \end{aligned}$$
(18)

and

$$\begin{aligned} \left( 2\alpha ^0_1-\alpha ^0_2+\lambda ^2_1 \right) x_2-\alpha ^1_1+\alpha ^2_2-\sqrt{2}\alpha ^0_1-\sqrt{5}\alpha ^0_2\ge 0, \forall x_2\in \mathbb {R}. \end{aligned}$$
(19)

Furthermore, from (7), we have

$$\begin{aligned} \frac{\left( \alpha ^1_1 \right) ^2}{2}+\frac{\left( \alpha ^2_1 \right) ^2}{3}\le \left( \alpha ^0_1 \right) ^2,~ \frac{\left( \alpha ^1_2 \right) ^2}{4}+\frac{\left( \alpha ^2_2 \right) ^2}{5}\le \left( \alpha ^0_2 \right) ^2~ \text{ and } \left( \lambda ^1_1 \right) ^2+ \left( \lambda ^2_1 \right) ^2\!\le \left( \lambda ^0_1 \right) ^2. \end{aligned}$$

Then, \( -\alpha ^1_1-\sqrt{2}\alpha ^0_1\le |\alpha ^1_1|-\sqrt{2}\alpha ^0_1\le 0 \) and \(\alpha ^2_2-\sqrt{5}\alpha ^0_2\le |\alpha ^2_2|-\sqrt{5}\alpha ^0_2\le 0. \) Consequently,

$$\begin{aligned} -\alpha ^1_1+\alpha ^2_2-\sqrt{2}\alpha ^0_1-\sqrt{5}\alpha ^0_2\le 0. \end{aligned}$$
(20)

From (19), we have

$$\begin{aligned} 2\alpha ^0_1-\alpha ^0_2+\lambda ^2_1=0, \end{aligned}$$
(21)

and

$$\begin{aligned} -\alpha ^1_1+\alpha ^2_2-\sqrt{2}\alpha ^0_1-\sqrt{5}\alpha ^0_2\ge 0. \end{aligned}$$
(22)

Then, by (20) and (22), we have

$$\begin{aligned} -\alpha ^1_1+\alpha ^2_2-\sqrt{2}\alpha ^0_1-\sqrt{5}\alpha ^0_2=0. \end{aligned}$$

Moreover, from (18), it holds that \(\lambda ^0_1+\lambda ^1_1=0\). This, together with \(~(\lambda ^1_1)^2+(\lambda ^2_1)^2\le (\lambda ^0_1)^2\) and (21), yields that

$$\begin{aligned} \left( 2\alpha ^0_1-\alpha ^0_2 \right) ^2\le 0. \end{aligned}$$

Thus, we arrive at a contradiction due to \(\alpha ^0_1, \alpha ^0_2\in \mathbb {R}_+\) with \(\alpha ^0_1+\alpha ^0_2=1\). This means that Theorem 3.1 is not applicable.

Next, we consider the problem \(\mathrm{{(UP)}}\) with the functions \(f_i\) and \(g_j\) being convex quadratic functions defined by

$$\begin{aligned} f_i(x,u_i):=x^\top Q_i x+(\xi _i)^ \top x+\beta _i+\sum \limits _{r=1}^{s}u^r_i\left( (\xi ^r_i)^\top x+\beta ^r_i\right) ,~x\in \mathbb {R}^n, \end{aligned}$$
(23)

and

$$\begin{aligned} g_j(x,v_j):=x^\top M_jx+(\theta _j)^\top x+\gamma _j+\sum \limits _{l=1}^{k}v^l_j\left( \left( \theta ^l_j \right) ^\top x+\gamma ^l_j\right) ,~x\in \mathbb {R}^n. \end{aligned}$$
(24)

Here, \(Q_i\succeq 0\), \(\xi _i\in \mathbb {R}^n\), \(\xi ^r_i\in \mathbb {R}^n\), \(\beta _i\in \mathbb {R}\), \(\beta ^r_i\in \mathbb {R}\), \(r=1,\dots ,s\), and \(M_j\succeq 0\), \(\theta _j\in \mathbb {R}^n\), \(\theta ^l_j\in \mathbb {R}^n\), \(\gamma _j\in \mathbb {R}\), \(\gamma ^l_j\in \mathbb {R}\), \(l=1,\dots ,k\).

In this case, we obtain the following optimality conditions for robust weakly efficient solutions of \(\mathrm {(UP)}\), which have been considered in [15, Theorem 3.1].

Corollary 3.2

Consider the problem \(\mathrm{{(UP)}}\) with the functions \(f_i\) and \(g_j\) given by (23) and (24). Assume that the \(\mathrm {(RCCCQ)}\) holds. Then, \(\bar{x}\in \mathcal {F}\) is a robust weakly efficient solution of \(\mathrm {(UP)}\) if and only if there exist \(\alpha ^0_i\in \mathbb {R}_+\) with \(\sum \limits _{i=1}^{p}\alpha ^0_i=1\), \(\alpha ^r_i\in \mathbb {R}\), \(i=1,\dots ,p\), \(r=1,\dots ,s\), and \(\lambda ^0_j\in \mathbb {R}_+\), \(\lambda ^l_j\in \mathbb {R}\), \(j=1,\dots ,m\), \(l=1,\dots ,k\), such that

$$\begin{aligned} \begin{pmatrix} \sum \limits _{i=1}^{p}\alpha ^0_iQ_i+\sum \limits _{j=1}^{m}\lambda ^0_jM_j &{} \frac{1}{2}\left( \sum \limits _{i=1}^{p}\left( \alpha ^0_i\xi _i+\sum \limits _{r=1}^{s} \alpha ^r_i\xi ^r_i\right) +\sum \limits _{j=1}^{m}\left( \lambda ^0_j\theta _j+ \sum \limits _{l=1}^{k}\lambda ^l_j\theta ^l_j\right) \right) \\ \frac{1}{2}\left( \sum \limits _{i=1}^{p}\left( \alpha ^0_i\xi _i+\sum \limits _{r=1}^{s} \alpha ^r_i\xi ^r_i\right) +\sum \limits _{j=1}^{m}\left( \lambda ^0_j\theta _j+ \sum \limits _{l=1}^{k}\lambda ^l_j\theta ^l_j\right) \right) ^\top &{} \sum \limits _{i=1}^{p}\left( \alpha ^0_i\beta _i+\sum \limits _{r=1}^{s} \alpha ^r_i\beta ^r_i\right) +\sum \limits _{j=1}^{m}\left( \lambda ^0_j\gamma _j+ \sum \limits _{l=1}^{k}\lambda ^l_j\gamma ^l_j\right) -\sum \limits _{i=1}^{p}\alpha ^0_i F_i\left( \bar{x}\right) \end{pmatrix} \succeq 0 \end{aligned}$$

and

$$\begin{aligned} \alpha ^0_i A^0_i+\sum \limits _{r=1}^{s}\alpha ^r_i A^r_i\succeq 0,~\lambda ^0_j B^0_j+\sum \limits _{l=1}^{k}\lambda ^l_j B^l_j\succeq 0,~i=1,\dots ,p,~j=1,\dots ,m, \end{aligned}$$

where \(F_i(\bar{x}):=\max \limits _{u_i \in \mathcal {U}_i}f_i(\bar{x},u_i)\) for \(i=1,\dots ,p\).

Proof

By using an approach similar to that used to establish Theorem 3.1, it follows that

$$\begin{aligned} \sum \limits _{i=1}^{p}\alpha ^0_i f_i(x,u_i)+\sum \limits _{j=1}^{m}\lambda ^0_jg_j(x,v_j)-\sum \limits _{i=1}^{p}\alpha ^0_i F_i(\bar{x})\ge 0 \end{aligned}$$
(25)

and

$$\begin{aligned} \alpha ^0_i A^0_i+\sum \limits _{r=1}^{s}\alpha ^r_i A^r_i\succeq 0,~\lambda ^0_j B^0_j+\sum \limits _{l=1}^{k}\lambda ^l_j B^l_j\succeq 0,~i=1,\dots ,p,~j=1,\dots ,m. \end{aligned}$$

Then, by (25), we have, for any \(x\in \mathbb {R}^n\),

$$\begin{aligned}&x^\top \left( \sum \limits _{i=1}^{p}\alpha ^0_iQ_i+\sum \limits _{j=1}^{m} \lambda ^0_jM_j\right) x+\left( \sum \limits _{i=1}^{p}\left( \alpha ^0_i\xi _i +\sum \limits _{r=1}^{s}\alpha ^r_i\xi ^r_i\right) \right. \\&\left. \quad +\sum \limits _{j=1}^{m} \left( \lambda ^0_j\theta _j+\sum \limits _{l=1}^{k}\lambda ^l_j\theta ^l_j\right) \right) ^\top x +\sum \limits _{i=1}^{p}\left( \alpha ^0_i\beta _i+\sum \limits _{r=1}^{s} \alpha ^r_i\beta ^r_i\right) \\&\quad +\sum \limits _{j=1}^{m}\left( \lambda ^0_j\gamma _j +\sum \limits _{l=1}^{k}\lambda ^l_j\gamma ^l_j\right) -\sum \limits _{i=1}^{p}\alpha ^0_i F_i(\bar{x})\ge 0. \end{aligned}$$

This, together with the Simple Lemma in [5, p. 163], leads to the desired conclusion. The proof is complete. \(\square \)
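The Simple Lemma of [5, p. 163] invoked in the last step states that a quadratic function \(x\mapsto x^\top Qx+b^\top x+c\) is nonnegative on \(\mathbb {R}^n\) if and only if the \((n+1)\times (n+1)\) block matrix with blocks Q, b/2, \(b^\top /2\) and c is positive semidefinite, which is exactly the matrix appearing in Corollary 3.2. A minimal numerical illustration, assuming only numpy and with Q, b, c as illustrative data, is given below.

```python
import numpy as np

# q(x) = x^T Q x + b^T x + c is nonnegative on R^n
# iff the block matrix [[Q, b/2], [b^T/2, c]] is positive semidefinite.
Q = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([2.0, 0.0])
c = 0.5
M = np.block([[Q, b.reshape(-1, 1) / 2.0],
              [b.reshape(1, -1) / 2.0, np.array([[c]])]])
# Here q(x) = 2*(x_1 + 0.5)^2 + x_2^2 >= 0, so the test returns True.
print(float(np.min(np.linalg.eigvalsh(M))) >= -1e-9)
```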

Remark 3.3

With minor modifications of the proof given for Proposition 2.2 in [41], we can show that

$$\begin{aligned}{} & {} \textrm{coneco}\left\{ (0,1)\cup \textrm{epi}g^*_j(\cdot ,v_j)\mid \forall v_j\in \mathcal {V}_j,j=1,\dots ,m\right\} \\{} & {} = \bigcup _{\lambda ^0_j\ge 0,v_j\in \mathcal {V}_j} \mathrm{{epi}}\left( \sum ^{m}_{j=1}\lambda ^0_jg_j(\cdot ,v_j)\right) ^*. \end{aligned}$$

Then, in Corollary 3.2, the \(\mathrm{{(RCCCQ)}}\) can be replaced by the closedness of \(\textrm{coneco}\{(0,1)\cup \) \(\textrm{epi} g^*_j(\cdot ,v_j)\mid \forall v_j\in \mathcal {V}_j,j=1,\dots ,m\}\). Clearly, the result obtained in Corollary 3.2 coincides with [15, Theorem 3.1].

At the end of this section, we show how the optimality conditions obtained in (6) and (7) can be characterized in terms of a robust Karush–Kuhn–Tucker \((\textrm{KKT})\) condition. Note that the robust \(\textrm{KKT}\) condition of \(\mathrm {(UP)}\) holds at \(\bar{x}\in \mathcal {F}\) iff there exist \(\alpha ^0_i\in \mathbb {R}_+\) with \(\sum \limits _{i=1}^{p}\alpha ^0_i=1\), \(\bar{u}_i\in \mathcal {U}_i\), \(i=1,\dots ,p\), and \(\lambda ^0_j\in \mathbb {R}_+\), \(\bar{v}_j\in \mathcal {V}_j\), \(j=1,\dots ,m\), such that

$$\begin{aligned} \sum \limits _{i=1}^{p}\alpha ^0_i \nabla _1 f_i(\bar{x},\bar{u}_i)+\sum \limits _{j=1}^{m}\lambda ^0_j \nabla _1g_j(\bar{x},\bar{v}_j)=0,~\lambda ^0_jg_j(\bar{x},\bar{v}_j)=0,~j=1,\dots ,m,\nonumber \\ \end{aligned}$$
(26)

and

$$\begin{aligned} \sum \limits _{i=1}^{p}\alpha ^0_if_i(\bar{x},\bar{u}_i)=\sum \limits _{i=1}^{p}\alpha ^0_iF_i(\bar{x}). \end{aligned}$$
(27)

Here \(\nabla _1f_i\) and \(\nabla _1g_j\) denote the gradients of \(f_i\) and \(g_j\) with respect to the first variable x, respectively.

Now, we give the following proposition, which describes the relation between the optimality conditions obtained in (6) and (7) and the robust \(\textrm{KKT}\) condition. The proof of Proposition 3.1, which is similar to the proofs of [13, Proposition 2.10] and [41, Proposition 3.4], is included here for the sake of completeness.

Proposition 3.1

Let \(\bar{x}\in \mathcal {F}\). Then, the following statements are equivalent:

\(\mathrm{{(i)}}\) The robust KKT condition holds at \(\bar{x}\in \mathcal {F}\).

\(\mathrm{{(ii)}}\) There exist \(\alpha ^0_i\in \mathbb {R}_+\) with \(\sum \limits _{i=1}^{p}\alpha ^0_i=1\), \(\alpha ^r_i\in \mathbb {R}\), \(i=1,\dots ,p\), \(r=1,\dots ,s\), and \(\lambda ^0_j\in \mathbb {R}_+\), \(\lambda ^l_j\in \mathbb {R}\), \(j=1,\dots ,m\), \(l=1,\dots ,k\), such that

$$\begin{aligned}&\sum \limits _{i=1}^{p}\left( \alpha ^0_i \nabla f^0_i(\bar{x})+\sum \limits _{r=1}^{s}\alpha ^r_i \nabla f^r_i(\bar{x})\right) +\sum \limits _{j=1}^{m}\left( \lambda ^0_j\nabla g^0_j(\bar{x})+ \sum \limits _{l=1}^{k}\lambda ^l_j\nabla g^l_j(\bar{x})\right) =0, \end{aligned}$$
(28)
$$\begin{aligned}&\lambda ^0_jg^0_j(\bar{x})+\sum \limits _{l=1}^{k}\lambda ^l_jg^l_j(\bar{x})=0, \end{aligned}$$
(29)
$$\begin{aligned}&\sum \limits _{i=1}^{p}\left( \alpha ^0_if^0_i(\bar{x})+\sum \limits _{r=1}^{s}\alpha ^r_if^r_i(\bar{x})\right) = \max \limits _{u_i\in \mathcal {U}_i}\sum \limits _{i=1}^{p}\alpha ^0_i\left( f^0_i(\bar{x})+\sum \limits _{r=1}^{s} u^r_if^r_i(\bar{x})\right) , \end{aligned}$$
(30)
$$\begin{aligned}&\alpha ^0_i A^0_i+\sum \limits _{r=1}^{s}\alpha ^r_i A^r_i\succeq 0,~\lambda ^0_j B^0_j+\sum \limits _{l=1}^{k}\lambda ^l_j B^l_j\succeq 0. \end{aligned}$$
(31)

\(\mathrm{{(iii)}}\) The optimality conditions given by (6) and (7) hold.

Proof

(i)\(\Rightarrow \)(ii) Assume that the robust KKT condition of \(\mathrm {(UP)}\) holds at \(\bar{x}\in \mathcal {F}\). Then, there exist \(\alpha ^0_i\in \mathbb {R}_+\) with \(\sum \limits _{i=1}^{p}\alpha ^0_i=1\), \(\bar{u}_i\in \mathcal {U}_i\), \(i=1,\dots ,p\) and \(\lambda ^0_j\in \mathbb {R}_+\), \(\bar{v}_j\in \mathcal {V}_j\), \(j=1,\dots ,m\) such that (26) and (27) hold. By \(\bar{u}_i=(\bar{u}^1_i,\dots ,\bar{u}^s_i)\in \mathcal {U}_i\), we get

$$\begin{aligned} A^0_i+\sum \limits _{r=1}^{s}\bar{u}^r_i A^r_i\succeq 0, i=1,\dots ,p. \end{aligned}$$

Let \(\alpha ^r_i:=\alpha ^0_i \bar{u}^r_i\), \(i=1,\dots ,p\), \(r=1,\dots ,s\). Then, for any \(i=1,\dots ,p\),

$$\begin{aligned} \alpha ^0_i A^0_i+\sum \limits _{r=1}^{s}\alpha ^r_i A^r_i=\alpha ^0_i\left( A^0_i+\sum \limits _{r=1}^{s}\bar{u}^r_i A^r_i\right) \succeq 0 \end{aligned}$$
(32)

and

$$\begin{aligned} \alpha ^0_if_i(\bar{x},\bar{u}_i)=\alpha ^0_i f^0_i(\bar{x})+\sum \limits _{r=1}^{s}\alpha ^r_if^r_i(\bar{x}). \end{aligned}$$
(33)

Similarly, let \(\lambda ^l_j:=\lambda ^0_j \bar{v}^l_j\), \(j=1,\dots ,m\), \(l=1,\dots ,k\). Then, for any \(j=1,\dots ,m\), we have

$$\begin{aligned} \lambda ^0_j B^0_j+\sum \limits _{l=1}^{k}\lambda ^l_j B^l_j=\lambda ^0_j\left( B^0_j+\sum \limits _{l=1}^{k}\bar{v}^l_j B^l_j\right) \succeq 0 \end{aligned}$$
(34)

and

$$\begin{aligned} \lambda ^0_j g_j(\bar{x},\bar{v}_j)=\lambda ^0_jg^0_j(\bar{x})+\sum \limits _{l=1}^{k}\lambda ^l_jg^l_j(\bar{x}). \end{aligned}$$
(35)

Together with (26), (27), (32), (33), (34) and (35), it follows that (ii) holds.

(ii)\(\Rightarrow \)(iii) Assume that there exist \(\alpha ^0_i\in \mathbb {R}_+\) with \(\sum \limits _{i=1}^{p}\alpha ^0_i=1\), \(\alpha ^r_i\in \mathbb {R}\), \(i=1,\dots ,p\), \(r=1,\dots ,s\), and \(\lambda ^0_j\in \mathbb {R}_+\), \(\lambda ^l_j\in \mathbb {R}\), \(j=1,\dots ,m\), \(l=1,\dots ,k\), such that (28), (29), (30) and (31) hold. Using arguments similar to those given for (12) and (13), it is easy to show that there exist \(\tilde{u}_i\in \mathcal {U}_i\), \(i=1,\dots ,p\), and \(\tilde{v}_j\in \mathcal {V}_j\), \(j=1,\dots ,m\), such that for any \(x\in \mathbb {R}^n\),

$$\begin{aligned} \alpha ^0_if^0_i(x)+\sum \limits _{r=1}^{s}\alpha ^r_if^r_i(x)=\alpha ^0_if_i(x,\tilde{u}_i) \end{aligned}$$
(36)

and

$$\begin{aligned} \lambda ^0_jg^0_j(x)+\sum \limits _{l=1}^{k}\lambda ^l_jg^l_j(x)=\lambda ^0_jg_j(x,\tilde{v}_j). \end{aligned}$$
(37)

Let

$$\begin{aligned} \phi (x):=\sum \limits _{i=1}^{p}\alpha ^0_i f_i(x,\tilde{u}_i)+\sum \limits _{j=1}^{m}\lambda ^0_j g_j(x,\tilde{v}_j)-\sum \limits _{i=1}^{p}\alpha ^0_i F_i(\bar{x}),~\forall x\in \mathbb {R}^n. \end{aligned}$$

By (28), (36) and (37), we deduce that

$$\begin{aligned} \nabla \phi (\bar{x})=0. \end{aligned}$$

Moreover, from (29) and (37), we get

$$\begin{aligned} \lambda ^0_jg_j(\bar{x},\tilde{v}_j)=0,~ j=1,\dots ,m. \end{aligned}$$

This, together with (30) and (36), gives

$$\begin{aligned} \phi (\bar{x})= 0. \end{aligned}$$

On the other hand, \(\phi \) is an SOS-convex polynomial. Therefore, from Lemma 2.2, we have \(\phi \in {\Sigma }^2_d[x]\). This means that

$$\begin{aligned} \sum \limits _{i=1}^{p}\left( \alpha ^0_i f^0_i+\sum \limits _{r=1}^{s}\alpha ^r_i f^r_i\right) +\sum \limits _{j=1}^{m}\left( \lambda ^0_j g^0_j+\sum \limits _{l=1}^{k}\lambda ^l_j g^l_j\right) -\sum \limits _{i=1}^{p}\alpha ^0_i F_i(\bar{x})\in {\Sigma }^2_d[x]. \end{aligned}$$

So, (iii) holds.

(iii)\(\Rightarrow \)(i) Assume that (6) and (7) hold. Then, by (7) and the constructions in (12) and (13), there exist \(\tilde{u}_i\in \mathcal {U}_i\), \(i=1,\dots ,p\), and \(\tilde{v}_j\in \mathcal {V}_j\), \(j=1,\dots ,m\), such that, in view of (6),

$$\begin{aligned} \phi (x):=\sum \limits _{i=1}^{p}\alpha ^0_i f_i(x,\tilde{u}_i)+\sum \limits _{j=1}^{m}\lambda ^0_j g_j(x,\tilde{v}_j)-\sum \limits _{i=1}^{p}\alpha ^0_i F_i(\bar{x})\ge 0,~\forall x\in \mathbb {R}^n. \end{aligned}$$
(38)

Then, for \(\bar{x}\in \mathcal {F}\),

$$\begin{aligned} \sum \limits _{i=1}^{p}\alpha ^0_i f_i(\bar{x},\tilde{u}_i)+\sum \limits _{j=1}^{m}\lambda ^0_j g_j(\bar{x},\tilde{v}_j)\ge \sum \limits _{i=1}^{p}\alpha ^0_i F_i(\bar{x}). \end{aligned}$$
(39)

Note that \(\lambda ^0_j g_j(\bar{x},\tilde{v}_j)\le 0\) due to \(\bar{x}\in \mathcal {F}\). Hence,

$$\begin{aligned} \sum \limits _{i=1}^{p}\alpha ^0_i f_i(\bar{x},\tilde{u}_i)+\sum \limits _{j=1}^{m}\lambda ^0_j g_j(\bar{x},\tilde{v}_j)\le \sum \limits _{i=1}^{p}\alpha ^0_i f_i(\bar{x},\tilde{u}_i)\le \sum \limits _{i=1}^{p}\alpha ^0_i F_i(\bar{x}). \end{aligned}$$
(40)

Combining (39) and (40), we obtain

$$\begin{aligned} \sum \limits _{i=1}^{p}\alpha ^0_i f_i(\bar{x},\tilde{u}_i)=\sum \limits _{i=1}^{p}\alpha ^0_i F_i(\bar{x})~\text{ and }~\sum \limits _{j=1}^{m}\lambda ^0_j g_j(\bar{x},\tilde{v}_j)=0. \end{aligned}$$

Clearly, \(\phi (\bar{x})=0\). Moreover, since each term \(\lambda ^0_j g_j(\bar{x},\tilde{v}_j)\le 0\) and their sum is zero, we have \(\lambda ^0_jg_j(\bar{x},\tilde{v}_j)=0\) for every \(j=1,\dots ,m\). Consequently, from (38), we have \(\phi (x)\ge \phi (\bar{x})\), \(\forall x\in \mathbb {R}^n\). Thus, \(\nabla \phi (\bar{x})=0\), which means that

$$\begin{aligned} \sum \limits _{i=1}^{p}\alpha ^0_i \nabla _1 f_i(\bar{x},\tilde{u}_i)+\sum \limits _{j=1}^{m}\lambda ^0_j \nabla _1g_j(\bar{x},\tilde{v}_j)=0. \end{aligned}$$

Thus, the robust \(\textrm{KKT}\) condition holds at \(\bar{x}\). The proof is complete. \(\square \)

Remark 3.4

Proposition 3.1 encompasses [15, Proposition 3.4], where the objective and constraint functions are convex quadratic functions. Proposition 3.1 also improves [13, Proposition 2.10], in which there are no uncertain data in the objective functions \(f_i\), \(i=1,\dots ,p,\) of \(\mathrm {(UP)}\).

4 SDP Relaxation Dual Problem

In this section, we first introduce the sum of squares relaxation dual problem for \(\mathrm {(UP)}\) from the perspective of sum of squares conditions and linear matrix inequalities, and then discuss weak and strong duality properties between \(\mathrm {(UP)}\) and this dual problem. Moreover, we give a numerical example to illustrate that the relaxation problem can be formulated as a semidefinite linear programming problem.

Let \(\omega :=( {\omega }_1,\dots , {\omega }_p)\in \mathbb {R}^p\), \(\alpha :=(\alpha _1^0,\dots ,\alpha ^0_p,\alpha _1^1,\dots ,\alpha ^1_p, \alpha _1^2,\dots ,\alpha ^2_p,\dots , \alpha _1^s,\dots ,\alpha ^s_p)\in \mathbb {R}_+^p\times \mathbb {R}^{ps}\), and \(\lambda :=( \lambda _1^0,\dots ,\lambda ^0_m,\lambda _1^1,\dots ,\lambda ^1_m, \lambda _1^2,\dots ,\lambda ^2_m,\dots , \lambda _1^k,\dots ,\lambda ^k_m)\in \mathbb {R}_+^m\times \mathbb {R}^{mk}\). Now, we propose the sum of squares relaxation dual problem of \(\mathrm {(UP)}\) as follows:

$$\begin{aligned} \mathrm{{(RD)}}\left\{ \begin{array}{ll} {\mathop {\mathop {\max }\limits _{\omega \in \mathbb {R}^p, \alpha \in \mathbb {R}_+^p\times \mathbb {R}^{ps},}} \limits _{\lambda \in \mathbb {R}_+^m\times \mathbb {R}^{mk}}}~~~~~ (\omega _1,\dots ,\omega _p) \\ ~~~~~~s.t. \quad \sum \limits _{i=1}^{p}\left( \alpha ^0_i f^0_i+\sum \limits _{r=1}^{s}\alpha ^r_i f^r_i\right) +\sum \limits _{j=1}^{m}\left( \lambda ^0_j g^0_j+\sum \limits _{l=1}^{k}\lambda ^l_j g^l_j\right) \\ \qquad \qquad \quad -\sum \limits _{i=1}^{p}\alpha ^0_i \omega _i\in {\Sigma }^2_d[x],\\ \qquad ~~~~~~~ \alpha ^0_i A^0_i+\sum \limits _{r=1}^{s}\alpha ^r_i A^r_i\succeq 0,~\lambda ^0_j B^0_j+\sum \limits _{l=1}^{k}\lambda ^l_j B^l_j\succeq 0,\\ \qquad ~~~~~~~i=1,\dots ,p,~j=1,\dots ,m, \sum \limits _{i=1}^{p}\alpha ^0_i=1. \end{array} \right. \end{aligned}$$

Here the feasible set of \(\mathrm {(RD)}\) is denoted by \(\mathcal {F}_D\).

Remark 4.1

In the special case when there is no uncertainty in the objective functions, \(\mathrm {(UP)}\) becomes \(\mathrm {(UP)_0}\), and \(\mathrm {(RD)}\) collapses to

$$\begin{aligned} \begin{array}{ll} \max \limits _{(\omega _i,\alpha _i,\lambda ^0_j,\lambda ^l_j)} (\omega _1,\dots ,\omega _p) \\ ~~~~~~s.t. \quad \sum \limits _{i=1}^{p}\alpha _i f_i+\sum \limits _{j=1}^{m}\left( \lambda ^0_j g^0_j+\sum \limits _{l=1}^{k}\lambda ^l_j g^l_j\right) -\sum \limits _{i=1}^{p}\alpha _i \omega _i\in {\Sigma }^2_d[x],\\ \qquad ~~~~~~~ \lambda ^0_j B^0_j+\sum \limits _{l=1}^{k}\lambda ^l_j B^l_j\succeq 0,\\ \qquad ~~~~~~~ \alpha _i\in \mathbb {R}_+,~ \sum \limits _{i=1}^{p}\alpha _i=1,~\lambda ^0_j\in \mathbb {R}_+,~\lambda ^l_j\in \mathbb {R},~\omega _i\in \mathbb {R},\\ \qquad ~~~~~~~i=1,\dots ,p,~j=1,\dots ,m,~l=1,\dots ,k. \end{array} \end{aligned}$$

Note that related results on the characterization of sum of squares relaxation problem for different kinds of polynomial optimization problems can be found in [12,13,14, 25] and the references therein.
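To make the semidefinite reformulation concrete, consider \(\mathrm {(RD)}\) for the data of Example 3.1. Below is a minimal sketch, assuming cvxpy is available; as an assumption of the sketch, the weights \(\alpha ^0_1=\alpha ^0_2=1/2\) are fixed in advance (so that the products \(\alpha ^0_i\omega _i\), which are bilinear in general, become linear) and the vector objective is scalarized by the sum \(\omega _1+\omega _2\). The sum of squares constraint is imposed through a positive semidefinite Gram matrix with monomial basis \((1,x,x^2)\). The resulting optimal value is approximately \(2+2\sqrt{2}=F_1(0)+F_2(0)\), in line with the duality results below.

```python
import numpy as np
import cvxpy as cp

# Data of Example 3.1 (n = 1, p = 2, m = 1, s = k = 2, d = 4).
A01, A02, B01 = np.diag([3.0, 4.0, 1.0]), np.diag([8.0, 10.0, 1.0]), np.eye(3)
E13 = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
E23 = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
a0 = [0.5, 0.5]                               # fixed weights alpha_1^0, alpha_2^0

w = cp.Variable(2)                            # omega_1, omega_2
a11, a21, a12, a22 = (cp.Variable() for _ in range(4))   # alpha_i^r
l0 = cp.Variable(nonneg=True)                 # lambda_1^0
l1, l2 = cp.Variable(), cp.Variable()         # lambda_1^1, lambda_1^2
Q = cp.Variable((3, 3), PSD=True)             # Gram matrix for z(x) = (1, x, x^2)

# For this data the polynomial in the SOS constraint of (RD) is
#   x^4 + l0*x^2 - (2*a22 + a11 - l1 - l2)*x + a12 + a21 - 0.5*(w1 + w2),
# and it lies in Sigma^2_4[x] iff it equals z(x)^T Q z(x) for some PSD Q.
constraints = [
    Q[2, 2] == 1,                                  # coefficient of x^4
    2 * Q[1, 2] == 0,                              # coefficient of x^3
    2 * Q[0, 2] + Q[1, 1] == l0,                   # coefficient of x^2
    2 * Q[0, 1] == -(2 * a22 + a11 - l1 - l2),     # coefficient of x
    Q[0, 0] == a12 + a21 - 0.5 * (w[0] + w[1]),    # constant term
    a0[0] * A01 + a11 * E13 + a21 * E23 >> 0,      # linear matrix inequalities
    a0[1] * A02 + a12 * E13 + a22 * E23 >> 0,
    l0 * B01 + l1 * E13 + l2 * E23 >> 0,
]
prob = cp.Problem(cp.Maximize(w[0] + w[1]), constraints)
prob.solve()
print(prob.value)   # approximately 2 + 2*sqrt(2)
```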

Now, similar to the concept of robust weakly efficient solutions of \(\mathrm {(UP)}\) in Definition 2.3, we give the definition of weakly efficient solutions of \(\mathrm {(RD)}\).

Definition 4.1

A point \(( \bar{\omega }, \bar{\alpha }, \bar{\lambda })\in \mathcal {F}_D\) is said to be a weakly efficient solution of \(\mathrm {(RD)}\) iff there is no point \(({\omega }, {\alpha }, {\lambda })\in \mathcal {F}_D\) such that

$$\begin{aligned} ( {\omega }_1,\dots , {\omega }_p)-(\bar{\omega }_1,\dots ,\bar{\omega }_p)\in \mathrm{{int}}\mathbb {R}^p_+. \end{aligned}$$

The following theorem gives a weak duality relation between \(\mathrm {(UP)}\) and \(\mathrm {(RD)}\).

Theorem 4.1

For any \(x\in \mathcal {F}\) and any \(({\omega }, {\alpha }, {\lambda })\in \mathcal {F}_D\), we have

$$\begin{aligned} (F_1(x),\dots ,F_p(x))-(\omega _1,\dots ,\omega _p)\notin -\mathrm{{int}}\mathbb {R}^p_+, \end{aligned}$$

where \(F_i(x):=\max \limits _{u_i \in \mathcal {U}_i}f_i(x,u_i)\), \(i=1,\dots ,p\).

Proof

Since \(({\omega }, {\alpha }, {\lambda })\in \mathcal {F}_D\), we have \(\omega _i\in \mathbb {R}\), \(\alpha ^0_i\in \mathbb {R}_+\) with \(\sum \limits _{i=1}^{p}\alpha ^0_i=1\), \(\alpha ^r_i\in \mathbb {R}\), \(i=1,\dots ,p\), \(r=1,\dots ,s\), \(\lambda ^0_j\in \mathbb {R}_+\), \(\lambda ^l_j\in \mathbb {R}\), \(j=1,\dots ,m\), \(l=1,\dots ,k\), and

$$\begin{aligned} \sum \limits _{i=1}^{p}\left( \alpha ^0_i f^0_i+\sum \limits _{r=1}^{s}\alpha ^r_i f^r_i\right) +\sum \limits _{j=1}^{m}\left( \lambda ^0_j g^0_j+\sum \limits _{l=1}^{k}\lambda ^l_j g^l_j\right) -\sum \limits _{i=1}^{p}\alpha ^0_i \omega _i \in {\Sigma }^2_d[x]. \end{aligned}$$

This means that there exists \(\sigma \in {\Sigma }^2_d[x]\) such that

$$\begin{aligned} \sum \limits _{i=1}^{p}\left( \alpha ^0_i f^0_i+\sum \limits _{r=1}^{s}\alpha ^r_i f^r_i\right) +\sum \limits _{j=1}^{m}\left( \lambda ^0_j g^0_j+\sum \limits _{l=1}^{k}\lambda ^l_j g^l_j\right) -\sum \limits _{i=1}^{p}\alpha ^0_i \omega _i =\sigma . \end{aligned}$$
(41)

Using an argument similar to the one given in the proof of Theorem 3.1, we can show that there exist \(\tilde{u}_i\in \mathcal {U}_i\), \(i=1,\dots ,p\), and \(\tilde{v}_j\in \mathcal {V}_j\), \(j=1,\dots ,m\), such that for any \(x\in \mathbb {R}^n\),

$$\begin{aligned} \alpha ^0_i f^0_i(x)+\sum \limits _{r=1}^{s}\alpha ^r_i f^r_i(x)=\alpha ^0_i f_i(x,\tilde{u}_i) \end{aligned}$$

and

$$\begin{aligned} \lambda ^0_j g^0_j(x)+\sum \limits _{l=1}^{k}\lambda ^l_j g^l_j(x)=\lambda ^0_j g_j(x,\tilde{v}_j). \end{aligned}$$

Hence, (41) can be equivalently written as

$$\begin{aligned} \sum \limits _{i=1}^{p}\alpha ^0_i f_i(x,\tilde{u}_i)=\sigma (x)-\sum \limits _{j=1}^{m}\lambda ^0_j g_j(x,\tilde{v}_j)+\sum \limits _{i=1}^{p}\alpha ^0_i\omega _i,~\forall x\in \mathbb {R}^n. \end{aligned}$$
(42)

Note that for any \(x\in \mathcal {F}\), we have \(g_j(x,\tilde{v}_j)\le 0\), \(j=1,\dots ,m\). This, together with (42) and \(\sigma (x)\ge 0\), gives

$$\begin{aligned} \sum \limits _{i=1}^{p}\alpha ^0_i f_i(x,\tilde{u}_i)\ge \sum \limits _{i=1}^{p}\alpha ^0_i \omega _i, \forall x\in \mathcal {F}. \end{aligned}$$
(43)

Note that \(F_i(x)\ge f_i(x,\tilde{u}_i)\), \(i=1,\dots ,p\). Thus, it follows from (43), \(\alpha ^0_i\in \mathbb {R}_+\) and \(\sum \limits _{i=1}^{p}\alpha ^0_i=1\) that

$$\begin{aligned} (F_1(x),\dots ,F_p(x))-(\omega _1,\dots ,\omega _p)\notin -\mathrm{{int}}\mathbb {R}^p_+. \end{aligned}$$

Indeed, if this were not the case, then \(F_i(x)<\omega _i\), and hence \(f_i(x,\tilde{u}_i)<\omega _i\), for all \(i=1,\dots ,p\); since \(\alpha ^0_i\ge 0\) and \(\sum \limits _{i=1}^{p}\alpha ^0_i=1\) guarantee that at least one \(\alpha ^0_i\) is positive, multiplying by \(\alpha ^0_i\) and summing would give \(\sum \limits _{i=1}^{p}\alpha ^0_i f_i(x,\tilde{u}_i)<\sum \limits _{i=1}^{p}\alpha ^0_i \omega _i\), contradicting (43). The proof is complete. \(\square \)

The next theorem establishes a strong duality property, in terms of weakly efficient solutions, between \(\mathrm {(UP)}\) and \(\mathrm {(RD)}\).

Theorem 4.2

Assume that the \(\mathrm{{(RCCCQ)}}\) holds. If \(\bar{x}\in \mathcal {F}\) is a robust weakly efficient solution of \(\mathrm {(UP)}\), then, there exist \(\bar{\omega }_i\in \mathbb {R}\), \(\bar{\alpha }^0_i\in \mathbb {R}_+\) with \(\sum \limits _{i=1}^{p}\bar{\alpha }^0_i=1\), \(\bar{\alpha }^r_i\in \mathbb {R}\), \(i=1,\dots ,p\), \(r=1,\dots ,s\), and \(\bar{\lambda }^0_j\in \mathbb {R}_+\), \(\bar{\lambda }^l_j\in \mathbb {R}\), \(j=1,\dots ,m\), \(l=1,\dots ,k\), such that \((\bar{\omega }, \bar{\alpha },\bar{\lambda } )\) is a weakly efficient solution of \(\mathrm{{(RD)}}\), where \(\bar{\omega }_i:=\max \limits _{u_i \in \mathcal {U}_i}f_i(\bar{x},u_i)\), \(i=1,\dots ,p\).

Proof

Let \(\bar{x}\in \mathcal { F}\) be a robust weakly efficient solution of \(\mathrm {(UP)}\). By Theorem 3.1, there exist \(\bar{\alpha }^0_i\in \mathbb {R}_+\) with \(\sum \limits _{i=1}^{p}\bar{\alpha }^0_i=1\), \(\bar{\alpha }^r_i\in \mathbb {R}\), \(i=1,\dots ,p\), \(r=1,\dots ,s\), and \(\bar{\lambda }^0_j\in \mathbb {R}_+\), \(\bar{\lambda }^l_j\in \mathbb {R}\), \(j=1,\dots ,m\), \(l=1,\dots ,k\), such that

$$\begin{aligned} \sum \limits _{i=1}^{p}\left( \bar{\alpha }^0_i f^0_i+\sum \limits _{r=1}^{s}\bar{\alpha }^r_i f^r_i\right) +\sum \limits _{j=1}^{m}\left( \bar{\lambda }^0_j g^0_j+\sum \limits _{l=1}^{k}\bar{\lambda }^l_j g^l_j\right) -\sum \limits _{i=1}^{p}\bar{\alpha }^0_i \bar{\omega }_i\in {\Sigma }^2_d[x] \end{aligned}$$

and

$$\begin{aligned} \bar{\alpha }^0_i A^0_i+\sum \limits _{r=1}^{s}\bar{\alpha }^r_i A^r_i\succeq 0,~\bar{\lambda }^0_j B^0_j+\sum \limits _{l=1}^{k}\bar{\lambda }^l_j B^l_j\succeq 0,~i=1,\dots ,p,~j=1,\dots ,m, \end{aligned}$$

where \(\bar{\omega }_i:=\max \limits _{u_i \in \mathcal {U}_i}f_i(\bar{x},u_i)\). Clearly,

$$\begin{aligned} (\bar{\omega }, \bar{\alpha },\bar{\lambda } )\in \mathcal {F}_D. \end{aligned}$$

We claim that \((\bar{\omega }, \bar{\alpha },\bar{\lambda } )\) is a weakly efficient solution of \(\mathrm {(RD)}\). Suppose, on the contrary, that there exists \((\tilde{\omega }, \tilde{\alpha },\tilde{\lambda } )\in \mathcal {F}_D\) such that

$$\begin{aligned} (\tilde{\omega }_1,\dots ,\tilde{\omega }_p)-(\bar{\omega }_1,\dots ,\bar{\omega }_p)\in \mathrm{{int}}\mathbb {R}^p_+. \end{aligned}$$

This, together with \(\bar{\omega }_i=\max \limits _{u_i \in \mathcal {U}_i}f_i(\bar{x},u_i)\), \(i=1,\dots ,p\), implies that

$$\begin{aligned} (F_1(\bar{x}),\dots ,F_p(\bar{x}))-(\tilde{\omega }_1,\dots ,\tilde{\omega }_p)\in -\mathrm{{int}}\mathbb {R}^p_+, \end{aligned}$$

which contradicts Theorem 4.1. The proof is complete. \(\square \)

Note that sum of squares conditions can be equivalently expressed as linear matrix inequalities. We now give a numerical example to show that \(\mathrm {(RD)}\) can be formulated as a semidefinite linear programming problem.
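Concretely, this equivalence rests on the standard Gram-matrix characterization of sums of squares: for a suitable vector of monomials \(X(x)\),

$$\begin{aligned} \sigma \in {\Sigma }^2_d[x]~\Longleftrightarrow ~\sigma (x)=X(x)^\top WX(x)~ \text{ for } \text{ some } \text{ symmetric } \text{ matrix } W\succeq 0, \end{aligned}$$

so checking the sum of squares condition amounts to finding a positive semidefinite matrix \(W\) whose entries are linked to the coefficients of \(\sigma \) by linear equations. This is the reformulation invoked via [36, Lemma 3.33] in Example 4.1 below.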

Example 4.1

For problem \((\textrm{UP})\), let \(m=n=1\) and \(p=s=k=2\). The uncertain sets \(\mathcal {U}_1 \subseteq \mathbb {R}^2,\) \(\mathcal {U}_2 \subseteq \mathbb {R}^2\) and \(\mathcal {V}_1 \subseteq \mathbb {R}^2\) are defined, respectively, by

$$\begin{aligned} \mathcal {U}_1:= & {} \left\{ u_1:= \left( u^1_1,u^2_1 \right) \in \mathbb {R}^2 \mid \frac{ \left( u^1_1 \right) ^2}{2}+\frac{ \left( u^2_1 \right) ^2}{3}\le 1\right\} ,\\ \mathcal {U}_2:= & {} \left\{ u_2:= \left( u^1_2,u^2_2 \right) \in \mathbb {R}^2 \mid \frac{ \left( u^1_2 \right) ^2}{4}+\frac{\left( u^2_2 \right) ^2}{5}\le 1\right\} \end{aligned}$$

and

$$\begin{aligned} \mathcal {V}_1:=\left\{ v_1:=(v^1_1,v^2_1)\in \mathbb {R}^2 \mid (v^1_1)^2+(v^2_1)^2\le 1\right\} . \end{aligned}$$

Let the polynomials \(f_1:\mathbb {R}\times \mathcal {U}_1\rightarrow \mathbb {R}\), \(f_2:\mathbb {R}\times \mathcal {U}_2\rightarrow \mathbb {R}\) and \(g_1:\mathbb {R}\times \mathcal {V}_1\rightarrow \mathbb {R}\) be defined, respectively, by

$$\begin{aligned} f_1(x,u_1)= & {} x^4+u^1_1x+u^2_1,~f_2(x,u_2)=x^4-2u^2_2x+3u^1_2,~ \text{ and } \\ g_1(x,v_1)= & {} v^1_1x+v^2_1x-1. \end{aligned}$$

Obviously, \(f^0_1(x)=x^4\), \(f^1_1(x)=x\), \(f^2_1(x)=1\), \(f^0_2(x)=x^4\), \(f^1_2(x)=3\), \(f^2_2(x)=-2x\), \(g^0_1(x)=-1\), \(g^1_1(x)=x\) and \(g^2_1(x)=x\). Moreover, by (1) and (2), we have

$$\begin{aligned}&A^0_1= \begin{pmatrix} 2&{}\quad 0&{}\quad 0\\ 0&{}\quad 3&{}\quad 0\\ 0&{}\quad 0&{}\quad 1\\ \end{pmatrix}, ~A^0_2= \begin{pmatrix} 4&{}\quad 0&{}\quad 0\\ 0&{}\quad 5&{}\quad 0\\ 0&{}\quad 0&{}\quad 1\\ \end{pmatrix}, ~A^1_1=A^1_2= \begin{pmatrix} 0&{}\quad 0&{}\quad 1\\ 0&{}\quad 0&{}\quad 0\\ 1&{}\quad 0&{}\quad 0\\ \end{pmatrix},\\&A^2_1=A^2_2= \begin{pmatrix} 0&{}\quad 0&{}\quad 0\\ 0&{}\quad 0&{}\quad 1\\ 0&{}\quad 1&{}\quad 0\\ \end{pmatrix},\\&B^0_1= \begin{pmatrix} 1&{}\quad 0&{}\quad 0\\ 0&{}\quad 1&{}\quad 0\\ 0&{}\quad 0&{}\quad 1\\ \end{pmatrix}, B^1_1= \begin{pmatrix} 0&{}\quad 0&{}\quad 1\\ 0&{}\quad 0&{}\quad 0\\ 1&{}\quad 0&{}\quad 0\\ \end{pmatrix}, B^2_1= \begin{pmatrix} 0&{}\quad 0&{}\quad 0\\ 0&{}\quad 0&{}\quad 1\\ 0&{}\quad 1&{}\quad 0\\ \end{pmatrix}. \end{aligned}$$

The sum of squares relaxation dual problem of \(\mathrm {(UP)}\) becomes

$$\begin{aligned} \begin{array}{llll} {\mathop {\mathop {\max }\limits _{\omega \in \mathbb {R}^2, \alpha \in \mathbb {R}_+^2 \times \mathbb {R}^{4},}} \limits _{\lambda \in \mathbb {R}_+\times \mathbb {R}^{2}}} (\omega _1,\omega _2) \\ ~~~~~~s.t. \quad \sum \limits _{i=1}^{2}\left( \alpha ^0_i f^0_i+\sum \limits _{r=1}^{2} \alpha ^r_i f^r_i\right) +\left( \lambda ^0_1 g^0_1+\sum \limits _{l=1}^{2} \lambda ^l_1 g^l_1\right) -\sum \limits _{i=1}^{2}\alpha ^0_i \omega _i\in \mathrm{{\Sigma }}^2_d[x],\\ \qquad ~~~~~~~ \alpha ^0_i A^0_i+\sum \limits _{r=1}^{2}\alpha ^r_i A^r_i \succeq 0,~i=1,2, ~\lambda ^0_1 B^0_1+\sum \limits _{l=1}^{2}\lambda ^l_1 B^l_1\succeq 0,\\ \qquad ~~~~~~~ ~\sum \limits _{i=1}^{2}\alpha ^0_i=1. \end{array} \end{aligned}$$

Let

$$\begin{aligned} \sigma :&=\sum \limits _{i=1}^{2}\left( \alpha ^0_i f^0_i+\sum \limits _{r=1}^{2}\alpha ^r_i f^r_i\right) +\left( \lambda ^0_1 g^0_1+\sum \limits _{l=1}^{2}\lambda ^l_1 g^l_1\right) -\sum \limits _{i=1}^{2}\alpha ^0_i \omega _i. \end{aligned}$$

Clearly,

$$\begin{aligned} \sigma (x)= \left( \alpha ^0_1+\alpha ^0_2 \right) x^4+ \left( \alpha ^1_1-2\alpha ^2_2+\lambda ^1_1 +\lambda ^2_1 \right) x+\alpha ^2_1+3\alpha ^1_2-\lambda ^0_1-\alpha ^0_1\omega _1-\alpha ^0_2\omega _2. \end{aligned}$$

Since \(\sigma \in {\Sigma }^2_d[x]\), it follows from [36, Lemma 3.33] that there exists a symmetric \((3\times 3)\) matrix \(W\) such that

$$\begin{aligned} \sigma =X^\top WX ~\text{ and } ~W\succeq 0, \end{aligned}$$
(44)

where \(X:=(1,x,x^2)\). Let

$$\begin{aligned} W:= \begin{pmatrix} W_1&{}\quad W_2&{}\quad W_3\\ W_2&{}\quad W_4&{}\quad W_5\\ W_3&{}\quad W_5&{}\quad W_6 \end{pmatrix}. \end{aligned}$$
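For clarity, expanding the quadratic form in (44) with \(X=(1,x,x^2)\) gives

$$\begin{aligned} X^\top WX=W_1+2W_2x+\left( 2W_3+W_4\right) x^2+2W_5x^3+W_6x^4, \end{aligned}$$

and comparing coefficients of like powers of \(x\) with \(\sigma (x)\) yields the identities stated next.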

By (44), we have \( W_1=\alpha ^2_1+3\alpha ^1_2-\lambda ^0_1-\alpha ^0_1\omega _1-\alpha ^0_2\omega _2,\) \(2W_2=\alpha ^1_1-2\alpha ^2_2+\lambda ^1_1+\lambda ^2_1,\) \(2W_3+W_4=0\), \(W_5=0\) and \(W_6=\alpha ^0_1+\alpha ^0_2\). Let \(W_3:=\mu \in \mathbb {R}\). Then, \(W_4=-2\mu \). Thus, the relaxation dual problem becomes the following semidefinite programming problem

$$\begin{aligned} \begin{array}{ll} {\mathop {\mathop {\max }\limits _{\omega \in \mathbb {R}^2, \mu \in \mathbb {R}, \alpha \in \mathbb {R}_+^2 \times \mathbb {R}^{4},}} \limits _{\lambda \in \mathbb {R}_+\times \mathbb {R}^{2}}} (\omega _1,\omega _2) \\ ~~~~~~s.t. \quad \begin{pmatrix} \alpha ^2_1+3\alpha ^1_2-\lambda ^0_1-\alpha ^0_1\omega _1-\alpha ^0_2\omega _2&{}\frac{\alpha ^1_1}{2}- \alpha ^2_2+\frac{\lambda ^1_1}{2}+\frac{\lambda ^2_1}{2}&{}\mu \\ \frac{\alpha ^1_1}{2}-\alpha ^2_2+\frac{\lambda ^1_1}{2}+\frac{\lambda ^2_1}{2}&{}-2\mu &{}0\\ \mu &{}0&{}\alpha ^0_1+\alpha ^0_2 \end{pmatrix} \succeq 0,\\ \qquad ~~~~~~~\begin{pmatrix} 2\alpha ^0_1&{}\quad 0&{}\quad \alpha ^1_1\\ 0&{}\quad 3\alpha ^0_1&{}\quad \alpha ^2_1\\ \alpha ^1_1&{}\quad \alpha ^2_1&{}\quad \alpha ^0_1\\ \end{pmatrix} \succeq 0, \begin{pmatrix} 4\alpha ^0_2&{}\quad 0&{}\quad \alpha ^1_2\\ 0&{}\quad 5\alpha ^0_2&{}\quad \alpha ^2_2\\ \alpha ^1_2&{}\quad \alpha ^2_2&{}\quad \alpha ^0_2\\ \end{pmatrix} \succeq 0, \begin{pmatrix} \lambda ^0_1&{}\quad 0&{}\quad \lambda ^1_1\\ 0&{}\quad \lambda ^0_1&{}\quad \lambda ^2_1\\ \lambda ^1_1&{}\quad \lambda ^2_1&{}\quad \lambda ^0_1\\ \end{pmatrix} \succeq 0,\\ \qquad ~~~~~~~~~~\sum \limits _{i=1}^{2}\alpha ^0_i=1. \end{array} \end{aligned}$$

Therefore, \(\mathrm{{(RD)}}\) can be solved by means of a semidefinite linear programming problem.
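To make the reformulation concrete, the following sketch (not part of the original development) solves an equal-weight scalarization of the above semidefinite program with the modelling package cvxpy, under two simplifying assumptions: the weights \(\alpha ^0_1=\alpha ^0_2=1/2\) are fixed in advance, so that \(\sum _{i=1}^{2}\alpha ^0_i=1\) holds automatically and every constraint is linear in the remaining variables, and the scalar objective \(\omega _1+\omega _2\) replaces the bi-objective \((\omega _1,\omega _2)\). All variable names are illustrative, and the SCS solver is assumed to be installed.

```python
# Minimal sketch: equal-weight scalarization of the SDP in Example 4.1.
# Assumptions: alpha^0_1 = alpha^0_2 = 1/2 are fixed, objective is w1 + w2.
import cvxpy as cp

a0 = [0.5, 0.5]                              # fixed alpha^0_1, alpha^0_2 (sum to 1)
w1, w2 = cp.Variable(), cp.Variable()        # omega_1, omega_2
mu = cp.Variable()                           # Gram-matrix entry W_3
a11, a21 = cp.Variable(), cp.Variable()      # alpha^1_1, alpha^2_1
a12, a22 = cp.Variable(), cp.Variable()      # alpha^1_2, alpha^2_2
l01 = cp.Variable(nonneg=True)               # lambda^0_1 >= 0
l11, l21 = cp.Variable(), cp.Variable()      # lambda^1_1, lambda^2_1

off = a11 / 2 - a22 + l11 / 2 + l21 / 2      # repeated off-diagonal entry

# Gram-matrix constraint encoding sigma in Sigma^2_d[x]
M_sos = cp.bmat([
    [a21 + 3 * a12 - l01 - a0[0] * w1 - a0[1] * w2, off, mu],
    [off, -2 * mu, 0],
    [mu, 0, a0[0] + a0[1]],
])
# Linear matrix inequalities coming from the uncertainty sets U_1, U_2, V_1
M_u1 = cp.bmat([[2 * a0[0], 0, a11], [0, 3 * a0[0], a21], [a11, a21, a0[0]]])
M_u2 = cp.bmat([[4 * a0[1], 0, a12], [0, 5 * a0[1], a22], [a12, a22, a0[1]]])
M_v1 = cp.bmat([[l01, 0, l11], [0, l01, l21], [l11, l21, l01]])

constraints = [M_sos >> 0, M_u1 >> 0, M_u2 >> 0, M_v1 >> 0]

prob = cp.Problem(cp.Maximize(w1 + w2), constraints)
prob.solve(solver=cp.SCS)
print(prob.status, w1.value, w2.value)
```

Solving the scalarized problem for different fixed weights \(\alpha ^0\) traces out different candidate values of \((\omega _1,\omega _2)\); the sketch is only meant to illustrate that each such instance is a standard linear semidefinite program, not to solve the full bi-objective problem \(\mathrm{{(RD)}}\).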

5 Conclusions

In this paper, a class of SOS-convex polynomial optimization problems with uncertain data in both the objective and constraint functions is considered. By using a new robust-type characteristic cone constraint qualification, we obtain optimality conditions for robust weakly efficient solutions of this uncertain SOS-convex polynomial optimization problem in terms of sum of squares conditions and linear matrix inequalities. We also introduce a sum of squares relaxation dual problem for this uncertain problem and establish weak and strong duality properties between the primal and dual problems. Furthermore, we give a numerical example to illustrate that the relaxation dual problem can be formulated as a semidefinite linear programming problem.

Although some new results on robust weakly efficient solutions have been established for SOS-convex polynomial optimization problems with uncertain data, several questions remain to be addressed in the future. For instance, similarly to [16, 41], it would be interesting to investigate second-order cone programming dual relaxations for more general classes of convex polynomial optimization problems. Two-stage SOS-convex polynomial optimization is another interesting model; in particular, it is of importance to consider how the proposed approach can be extended to handle affinely adjustable robust two-stage SOS-convex polynomial optimization problems.