1 Introduction

Consider the following mathematical programming problem with multiple objective functions:

where \(f_j: {{\mathbb {R}}}^n \rightarrow {{\mathbb {R}}},\) \(j=1,\ldots ,p,\) and \(g_i: {{\mathbb {R}}}^n \rightarrow {{\mathbb {R}}},\) \(i=1,\ldots ,m,\) are convex polynomials.

  • If \(p=1,\) the model problem \(\mathrm{(MP)}\) reduces to a scalar one and admits a hierarchy of semidefinite programming (SDP) relaxations, namely the well-known Lasserre hierarchy of SDP relaxations. More broadly, the Lasserre hierarchy of SDP relaxations is often used to solve nonconvex polynomial optimization problems with compact feasible sets (Lasserre 2009a, b), and it has generically finite convergence, as shown in Helton and Nie (2010). In particular, if the functions involved are SOS-convex polynomials (see Definition 2.1), then \(\mathrm{(MP)}\) enjoys an exact SDP relaxation in the sense that the optimal values of \(\mathrm{(MP)}\) and its relaxation problem are equal; moreover, the relaxation problem attains its optimum under the Slater condition (Lasserre 2009b, Theorem 3.4).

  • If \(p\ge 2\) and the functions involved are SOS-convex polynomials (we restrict attention to SOS-convex polynomials because deciding whether a polynomial is SOS-convex can be equivalently rewritten as a feasibility problem of a semidefinite program, which can be verified efficiently), Lee and Jiao (2018) showed how to find efficient solutions of \(\mathrm{(MP)}\) by employing the \(\epsilon \)-constraint method, a scalarization technique: one of the objective functions is minimized while all the other objective functions are bounded from above by means of additional constraints.

On the other hand, the data of many real-world optimization problems are often uncertain (that is, they are not known exactly at the time of the decision) due to lack of information, estimation errors or prediction errors. Recently, the robust optimization approach, which associates an uncertain mathematical program with its robust counterpart, has emerged as a powerful deterministic approach for studying mathematical programming with data uncertainty; see, for example, Beck and Ben-Tal (2009), Ben-Tal and Nemirovski (1998), Ben-Tal et al. (2009), Chuong (2016), Doolittle et al. (2018), Ide and Schöbel (2016), and Wiecek and Dranichak (2016) and the references therein.

Indeed, the robust optimization approach (also known as the worst-case approach) deals with optimization problems in which a certain measure of robustness is sought against uncertainty that can be represented as deterministic variability in the values of the parameters of the problem and/or of its solution; see, for example, Crespi et al. (2018), Lee and Lee (2018), and Wiecek and Dranichak (2016). It is well known that a robust convex quadratic optimization problem under ellipsoidal data uncertainty enjoys exact SDP relaxation, as it can be equivalently reformulated as a semidefinite programming problem (see Ben-Tal and Nemirovski 1998). In the same vein, a robust convex quadratic optimization problem under restricted ellipsoidal data uncertainty can be equivalently reformulated as a second-order cone programming problem (see Goldfarb and Iyengar 2003).

In this paper, we are interested in finding robust efficient solutions to \(\mathrm{(MP)}\) in the face of data uncertainty, where the functions involved are still SOS-convex polynomials. This model problem under data uncertainty in the constraints can be captured by the following problem:

where \(u_i\) is an uncertain parameter and \(u_i \in {{\mathcal {U}}}_i\) for some nonempty compact and convex uncertain set \({{\mathcal {U}}}_i \subseteq {{\mathbb {R}}}^{q_i},\) \(q_i\in {\mathbb {N}},\) and the uncertain data is affinely parameterized in the sense that (Jeyakumar et al. 2015)

$$\begin{aligned} g_i(\cdot ,u_i):= & {} g_i^{0}(\cdot )+\sum _{r=1}^{q_i}u_i^rg_i^r(\cdot ),\\ u_i:= & {} (u_i^{1},\ldots ,u_i^{t_i},u_i^{t_i+1},\ldots ,u_i^{q_i}) \in {{\mathcal {U}}}_i\subseteq {{\mathbb {R}}}^{t_i}_+\times {{\mathbb {R}}}^{q_i-t_i}, \end{aligned}$$

where \(g_i^r,\) \(r=0,\ldots ,t_i,\) are SOS-convex polynomials and \(g_i^r,\) \(r=t_i+1,\ldots ,q_i,\) are affine functions, for all \(i=1,\ldots ,m.\)

Following the robust optimization approach, the robust counterpart of \(\mathrm{(UMP)}\) is given by

  • For the case \(p=1,\) finding robust optimal solutions, that is, solutions that are immunized against data uncertainty, has become an important research topic in convex optimization and has recently been extensively studied in the literature; see, for example, Bertsimas et al. (2011), Chuong and Jeyakumar (2018), and Jeyakumar et al. (2013) and the references therein.

  • However, in the case \(p\ge 2,\) to the best of our knowledge, there are still relatively few results on finding efficient solutions that are immunized against data uncertainty, although some theoretical results have been established; see, for example, Chuong (2016), and Lee and Lee (2018) and the references therein. This motivates the present paper.

In this paper, we make the following contributions to the robust multiobjective convex optimization problem \(\mathrm{(RMP)}\):

  (i) By employing the \(\epsilon \)-constraint method (Chankong and Haimes 1983; Ehrgott 2005), we reduce \(\mathrm{(RMP)}\) to a class of scalar problems, each of which enjoys a zero duality gap in the sense that the optimal values of the scalar problem and its associated relaxation problems are equal under the Slater condition and the strict feasibility condition (see Theorem 3.1).

  (ii) We show how to find efficient solutions of \(\mathrm{(RMP)}\) (see Theorems 3.3 and 3.4). A nontrivial numerical example is designed to show how to find efficient solutions of \(\mathrm{(RMP)}.\)

The paper is organized as follows. Section 2 presents some notation and preliminaries. Section 3 states the main results of the paper on how to find efficient solutions of \(\mathrm{(RMP)}.\) Section 4 provides a numerically tractable example. Finally, conclusions are given in Sect. 5.

2 Notation and preliminaries

First of all, let us recall some notation and preliminaries. \({{\mathbb {R}}}^n\) denotes the Euclidean space of dimension n. The non-negative orthant of \({{\mathbb {R}}}^n\) is denoted by \({{\mathbb {R}}}_{+}^n.\) Let \(S^n\) be the set of \(n\times n\) symmetric matrices. A matrix \(X\in S^n\) is positive semidefinite, denoted by \(X\succeq 0,\) if \(z^TXz\ge 0\) for all \(z\in {{\mathbb {R}}}^n.\) We denote by \(S_+^n\) the set of \(n\times n\) symmetric positive semidefinite matrices. For \(M, N\in S^n,\) \(\langle M, N\rangle \) stands for \(\mathrm{trace}(MN).\)

The space of all real polynomials on \({{\mathbb {R}}}^n\) is denoted by \({{\mathbb {R}}}[x];\) moreover, the space of all real polynomials on \({{\mathbb {R}}}^n\) with degree at most d is denoted by \({{\mathbb {R}}}[x]_d.\) The degree of a polynomial f is denoted by \(\deg f.\) We say that a real polynomial f is a sum of squares if there exist real polynomials \(q_\ell ,\) \(\ell = 1,\ldots ,r,\) such that \(f =\sum _{\ell =1}^{r}q_{\ell }^2.\) The set of all sum-of-squares real polynomials with degree at most d is denoted by \({\Sigma }_{d}^2.\) For a multi-index \(\alpha \in {\mathbb {N}}^n,\) let \(|\alpha |:=\sum _{i=1}^n\alpha _i,\) and let \({\mathbb {N}}^n_d:= \{\alpha \in {\mathbb {N}}^n : |\alpha |\le d\}.\) \(x^\alpha \) denotes the monomial \(x_1^{\alpha _1}\cdots x_n^{\alpha _n}.\) The canonical basis of \({{\mathbb {R}}}[x]_d\) is denoted by

$$\begin{aligned} v_d(x):=(x^\alpha )_{\alpha \in {\mathbb {N}}^n_{d}}=(1,x_1,\ldots ,x_n,x_1^2,x_1x_2, \ldots ,x_n^2,\ldots ,x_1^d,\ldots ,x_n^d)^T, \end{aligned}$$
(1)

which has dimension \(s(n,d):=\left( \begin{array}{c} n+d \\ d \end{array}\right) .\) Given a polynomial \(f\in {{\mathbb {R}}}[x]\) with vector of coefficients \((f_\alpha ),\) let \(\mathrm{supp}(f) \subset {\mathbb {N}}^n\) denote the support of f, i.e., the set \(\{ \alpha \in {\mathbb {N}}^n : f_\alpha \ne 0\}.\) The Newton polytope of \(f\in {{\mathbb {R}}}[x]\) is the convex hull of \(\mathrm{supp}(f)\). Given an s(n, 2d)-vector \(y:= (y_\alpha )_{\alpha \in {\mathbb {N}}^n_{2d}}\) with \(y_\mathbf{0 }=1,\) let \(M_{d}(y)\) be the moment matrix of dimension \(s(n,d),\) with rows and columns labeled by (1). For example, for \(n=2\) and \(d=1,\)

$$\begin{aligned} y=(y_\alpha )_{\alpha \in {\mathbb {N}}^2_{2}}=(1,y_{1,0},y_{0,1},y_{2,0},y_{1,1},y_{0,2})^T \ \text { and } \ M_1(y)=\left( \begin{array}{ccc} 1 &{} y_{1,0} &{} y_{0,1} \\ y_{1,0} &{} y_{2,0} &{} y_{1,1} \\ y_{0,1} &{} y_{1,1} &{} y_{0,2} \\ \end{array} \right) . \end{aligned}$$

In what follows, for convenience, we write \(M_{d}(y):=\sum _{\alpha \in {\mathbb {N}}^n_{2d}}y_\alpha B_\alpha ,\) where \(y=(y_\alpha )_{\alpha \in {\mathbb {N}}^n_{2d}}\) with \(y_\mathbf{0 }=1\), and for each \(\alpha \in {\mathbb {N}}^n_{2d},\) \(B_\alpha \) is an \(s(n,d)\times s(n,d)\) symmetric matrix determined by the definition of \(M_d(y).\)
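
For concreteness, the moment matrix can be assembled directly from a given moment vector. The following short Python sketch is ours (it is not part of the original presentation, and the function names are hypothetical); it reproduces the \(3\times 3\) matrix \(M_1(y)\) above for the moments of the point mass at \((x_1,x_2)=(1,2).\)

```python
import itertools
import numpy as np

def monomials(n, d):
    """Exponents alpha in N^n with |alpha| <= d, ordered as in the basis (1)."""
    return [a for t in range(d + 1)
            for a in itertools.product(range(t, -1, -1), repeat=n) if sum(a) == t]

def moment_matrix(y, n, d):
    """M_d(y) with entries y_{alpha + beta}, rows and columns indexed by v_d(x)."""
    basis = monomials(n, d)
    M = np.zeros((len(basis), len(basis)))
    for i, a in enumerate(basis):
        for j, b in enumerate(basis):
            M[i, j] = y[tuple(p + q for p, q in zip(a, b))]
    return M

# Moments of the point mass at (x1, x2) = (1, 2): y_alpha = 1**a1 * 2**a2.
y = {a: 1 ** a[0] * 2 ** a[1] for a in monomials(2, 2)}
print(moment_matrix(y, 2, 1))   # [[1. 1. 2.] [1. 1. 2.] [2. 2. 4.]], PSD of rank one
```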

We now recall the notion of SOS-convex polynomials.

Definition 2.1

(Ahmadi and Parrilo 2012, 2013; Helton and Nie 2010) A real polynomial f on \({{\mathbb {R}}}^n\) is called SOS-convex if the polynomial

$$\begin{aligned} f(x)-f(y)-\nabla f(y)^T(x-y) \end{aligned}$$

is a sum of squares polynomial in \({{\mathbb {R}}}[x;y]\) (with respect to variables x and y).

Observe that an SOS-convex polynomial is convex. However, the converse is not true; that is, there exists a convex polynomial which is not SOS-convex (Ahmadi and Parrilo 2012, 2013). In addition, whether a given polynomial is SOS-convex can be checked numerically by solving a semidefinite program.
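
As a small illustration (our own sketch, not taken from the paper), the polynomial whose SOS-ness certifies SOS-convexity can be formed symbolically; the example polynomial \(f(x_1,x_2)=x_1^4+x_2^2\) is chosen by us. Its SOS-ness can then be tested with the semidefinite feasibility sketch given after Proposition 2.1 below.

```python
# A sketch of our own (not from the paper): form the polynomial h(x, y) of
# Definition 2.1 for the example f(x1, x2) = x1^4 + x2^2, which is SOS-convex.
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')
f = x1**4 + x2**2
fy = f.subs({x1: y1, x2: y2})
grad_y = [sp.diff(f, v).subs({x1: y1, x2: y2}) for v in (x1, x2)]

# h(x, y) = f(x) - f(y) - grad f(y)^T (x - y); f is SOS-convex iff h is SOS in (x, y).
h = sp.expand(f - fy - (grad_y[0] * (x1 - y1) + grad_y[1] * (x2 - y2)))
print(h)
# h = x1**4 - 4*x1*y1**3 + 3*y1**4 + x2**2 - 2*x2*y2 + y2**2, and indeed
# h = (x1**2 - y1**2)**2 + 2*(x1*y1 - y1**2)**2 + (x2 - y2)**2 is a sum of squares.
```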

The following proposition shows how to verify, via semidefinite optimization, whether a polynomial can be written as a sum of squares.

Proposition 2.1

(Lasserre 2009a) A polynomial \(f\in {{\mathbb {R}}}[x]_{2d}\) has a sum of squares decomposition if and only if there exists \(Q\in S^{s(n,d)}_+\) such that \(f(x)=\langle v_d(x)v_d(x)^T,Q\rangle \) for all \(x\in {{\mathbb {R}}}^n.\)

Consider a polynomial \(f\in {{\mathbb {R}}}[x]_{2d},\) and let \(v_d(x)v_d(x)^T:=\sum _{\alpha \in {\mathbb {N}}^n_{2d}}x^\alpha B_\alpha .\) Then \(f(x)=\sum _{\alpha \in {\mathbb {N}}^n_{2d}}f_\alpha x^\alpha \) is a sum of squares if and only if we can solve the following semidefinite feasibility problem (Lasserre 2009a):

$$\begin{aligned} \text {Find } X\in S^{s(n,d)}_+ \text { such that } \langle B_\alpha ,X\rangle =f_\alpha , \ \forall \alpha \in {\mathbb {N}}_{2d}^n. \end{aligned}$$
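
As a hedged illustration of this feasibility problem (a sketch of our own, not code from the paper), one can set it up with, e.g., the Python package CVXPY; the test polynomial \(2x_1^4+2x_1^3x_2-x_1^2x_2^2+5x_2^4\) below is a standard textbook example of a sum-of-squares polynomial and is not taken from the paper.

```python
# A sketch of our own (not the paper's code): the SOS feasibility test above in
# CVXPY, for the classical SOS polynomial f = 2*x1^4 + 2*x1^3*x2 - x1^2*x2^2 + 5*x2^4.
import itertools
import cvxpy as cp

n, d = 2, 2
basis = [a for t in range(d + 1)
         for a in itertools.product(range(t, -1, -1), repeat=n) if sum(a) == t]
f = {(4, 0): 2.0, (3, 1): 2.0, (2, 2): -1.0, (0, 4): 5.0}   # coefficients f_alpha

Q = cp.Variable((len(basis), len(basis)), symmetric=True)   # the Gram matrix X
constraints = [Q >> 0]
# Coefficient matching: the sum of Q[i, j] over pairs with alpha_i + alpha_j = alpha
# must equal f_alpha for every alpha with |alpha| <= 2d.
alphas = {tuple(p + q for p, q in zip(a, b)) for a in basis for b in basis}
for alpha in alphas:
    lhs = sum(Q[i, j] for i, a in enumerate(basis) for j, b in enumerate(basis)
              if tuple(p + q for p, q in zip(a, b)) == alpha)
    constraints.append(lhs == f.get(alpha, 0.0))

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()                 # needs an SDP-capable solver, e.g. SCS
print(prob.status)           # "optimal" certifies that f is a sum of squares
```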

3 Finding efficient solutions of \(\mathrm{(RMP)}\)

In this section, we focus on the study of finding efficient solutions of \(\mathrm{(RMP)}.\) Now, let us recall the robust multiobjective SOS-convex polynomial optimization problem

Let \(K:=\{x\in {{\mathbb {R}}}^n: g_i(x,u_i) \le 0, \ \forall u_i\in {{\mathcal {U}}}_i, \ i\in I\}\) be the set of feasible solutions of \(\mathrm{(RMP)},\) where \(I:=\{1,\ldots ,m\}.\) Let \(2d:=\max \{\mathrm{deg}f_1,\ldots ,\mathrm{deg}f_p,\) \( \mathrm{deg}g_1(\cdot ,u_1),\ldots ,\) \(\mathrm{deg}g_m(\cdot ,u_m)\}.\)

Hereafter, we consider the problem \(\mathrm{(RMP)}\) with restricted spectrahedron data uncertainty sets (Chieu et al. 2018; Chuong 2017, 2018), which are compact and convex sets given, for each \(i\in I,\) by

where \(A_i^r,\) \(r=0,1,\ldots ,q_i,\) are \(s_i\times s_i\) symmetric matrices with some \(s_i\in {\mathbb {N}}.\)

Now, we recall the notions of (robust) efficient solutions, which can be found in (Wiecek and Dranichak 2016, Section 4).

Definition 3.1

A point \({\bar{x}}\in K\) is said to be a robust efficient solution of \((\mathrm{UMP});\) equivalently, \({\bar{x}}\in K\) is an efficient solution of \((\mathrm{RMP}),\) if

$$\begin{aligned} f(x)-f({\bar{x}})\not \in -{{\mathbb {R}}}^p_+\backslash \{\mathbf{0 }\} \ \text { for all } x\in K, \end{aligned}$$

where \(f:=(f_1,\ldots ,f_p).\)

In the present paper, we adopt the well-known \(\epsilon \)-constraint method, a kind of scalarization method, to study the problem (RMP). The \(\epsilon \)-constraint method was studied in detail by Chankong and Haimes (1983) (see also Ehrgott 2005) and improved by Ehrgott and Ruzika (2008); moreover, it has been shown to be an effective method for finding efficient solutions of multicriteria optimization problems.

Now, we replace \((\mathrm{RMP})\) by the following scalar problems:

where \(\epsilon \in {{\mathbb {R}}}^p.\) Note that the component \(\epsilon _j\) is irrelevant for \((\mathrm{RP}_j(\epsilon ))\); the convention of including it here will be convenient for our later analysis. For each \(j=1,\ldots ,p,\) let \(K_j(\epsilon ):=\{x\in K : f_k(x)\le \epsilon _k, \ k\ne j\}\) be the feasible set of \((\mathrm{RP}_j(\epsilon )).\)

Let \(j\in \{1,\ldots ,p\}\) be arbitrarily fixed. Then, the corresponding sum of squares relaxation dual problem of \((\mathrm{RP}_j(\epsilon ))\) with degree 2d is the following problem:

Thanks to Proposition 2.1, \((\mathrm{RD}_j(\epsilon ))\) can be rewritten as the following semidefinite programming problem:

The dual problem of \((\mathrm{SDD}_j(\epsilon ))\) can be stated as follows:

For a problem \((\cdot ),\) let \(v(\cdot )\) denote its optimal value; for example, \(v(\mathrm{RP}_j(\epsilon ))\) stands for the optimal value of the problem \((\mathrm{RP}_j(\epsilon )).\)

Definition 3.2

For each fixed \(j=1,\ldots ,p,\) we say that the Slater condition holds for \((\text {RP}_j(\epsilon )),\) if there exists \({\hat{x}}\in {{\mathbb {R}}}^n\) such that

$$\begin{aligned} f_k({\hat{x}})-\epsilon _k< 0, \ k\ne j, \ \text { and } \ g_i({\hat{x}},u_i) <0, \ \forall u_i\in {{\mathcal {U}}}_i, \ i\in I. \end{aligned}$$

Definition 3.3

We say a strict feasibility condition holds, if for each \(i\in I,\) there exists \({\hat{u}}_i\in {{\mathbb {R}}}^{t_i}_+\times {{\mathbb {R}}}^{q_i-t_i}\) such that \(A_i^0+\sum _{r=1}^{q_i}{\hat{u}}_i^rA_i^r\succ 0.\)

The following lemma, which guarantees the nonemptiness of the solution set of a convex polynomial program and will play an important role in Theorem 3.1 below, is due to Belousov and Klatte (2002).

Lemma 3.1

(Belousov and Klatte 2002) Let \(f, g_1,\ldots , g_m\) be convex polynomials on \({{\mathbb {R}}}^n.\) Let \(F := \{x \in {{\mathbb {R}}}^n : g_i(x) \le 0, \ i = 1,\ldots ,m\}.\) Suppose that \(\inf _{x\in F}f(x) >-\infty .\) Then,  \(\mathop {\mathrm{argmin}}\limits _{x\in F}f(x)\ne \emptyset .\)

It is also worth mentioning that Helton and Nie (2010) recently gave an important property of SOS-convex polynomials, which plays a key role in the proof of Theorem 3.1.

Lemma 3.2

(Helton and Nie 2010, Lemma 8) Let f be an SOS-convex polynomial. Suppose that there exists \(\bar{x}\in {\mathbb {R}}^n\) such that \(f({\bar{x}}) = 0\) and \(\nabla f({\bar{x}}) = 0.\) Then,  f is a sum of squares polynomial.

Below, we give a zero duality gap result for \((\mathrm{RP}_j(\epsilon )),\) \((\mathrm{SDD}_j(\epsilon )),\) and \((\mathrm{SDP}_j(\epsilon )),\) under the Slater condition and the strict feasibility condition. Note that the proof is motivated by (Chieu et al. 2018, Theorem 4.3).

Theorem 3.1

For each fixed \(j=1,\ldots ,p,\) if the Slater condition and the strict feasibility condition for \((\mathrm{RP}_j(\epsilon ))\) hold,  then

$$\begin{aligned} v(\mathrm{RP}_j(\epsilon ))=v(\mathrm{SDD}_j(\epsilon ))=v(\mathrm{SDP}_j(\epsilon )). \end{aligned}$$

Proof

Let \(j\in \{1,\ldots ,p\}\) be arbitrarily fixed. We first claim that

$$\begin{aligned} v(\mathrm{RP}_j(\epsilon ))\le v(\mathrm{RD}_j(\epsilon )). \end{aligned}$$

Let \(\gamma _j:=v(\mathrm{RP}_j(\epsilon ))\in {{\mathbb {R}}}.\) Since the Slater condition for \((\mathrm{RP}_j(\epsilon ))\) holds, by the Lagrangian duality for convex optimization problems (Boyd and Vandenberghe 2004), we see that

$$\begin{aligned} \gamma _j= & {} \inf _{x\in {{\mathbb {R}}}^n}\{f_j(x): f_k(x)\le \epsilon _k, \ k\ne j, \ g_i(x,u_i) \le 0, \forall u_i\in {{\mathcal {U}}}_i, \ i\in I\}\nonumber \\= & {} \inf _{x\in {{\mathbb {R}}}^n}\{f_j(x): f_k(x)\le \epsilon _k, \ k\ne j, \ \max _{u_i\in {{\mathcal {U}}}_i}g_i(x,u_i) \le 0, \ i\in I\}\nonumber \\= & {} \max _{\mu \in {{\mathbb {R}}}^{p-1}_+,\lambda \in {{\mathbb {R}}}^m_+ }\inf _{x\in {{\mathbb {R}}}^n}\max _{u\in {{\mathcal {U}}}}\varPsi (x,u,\mu ,\lambda ), \end{aligned}$$
(2)

where \(u=(u_1,\ldots ,u_m)\in {{\mathcal {U}}}:=\prod _{i=1}^m{{\mathcal {U}}}_i\) and

$$\begin{aligned} \varPsi (x,u,\mu ,\lambda ):=\left\{ f_j(x)+\sum _{k\ne j}\mu _k( f_k(x)-\epsilon _k)+\sum _{i=1}^m\lambda _ig_i(x,u_i)\right\} . \end{aligned}$$

Note that \(g_i(\cdot , u_i)\) is convex for each fixed \(u_i\) and \(g_i (x, \cdot )\) is concave for each fixed x. Since \({{\mathcal {U}}}\) is compact and convex, by the convex-concave minimax theorem (Rockafellar 1970), we have for each \(\mu \in {{\mathbb {R}}}^{p-1}_+\) and \(\lambda \in {{\mathbb {R}}}^m_+,\)

$$\begin{aligned} \inf _{x\in {{\mathbb {R}}}^n}\max _{u\in {{\mathcal {U}}}}\varPsi (x,u,\mu ,\lambda )=\max _{u\in {{\mathcal {U}}}} \inf _{x\in {{\mathbb {R}}}^n}\varPsi (x,u,\mu ,\lambda ). \end{aligned}$$

This, together with (2), yields

$$\begin{aligned} \gamma _j= \max _{\begin{array}{c} \mu \in {{\mathbb {R}}}^{p-1}_+,\lambda \in {{\mathbb {R}}}^m_+,u\in {{\mathcal {U}}} \end{array}} \inf \limits _{x\in {{\mathbb {R}}}^n}\varPsi (x,u,\mu ,\lambda ). \end{aligned}$$

So, there exist \({\tilde{\mu }}_k\ge 0,\) \(k\ne j,\) \({\tilde{\lambda }}_i\ge 0,\) and \({\tilde{u}}_i\in {{\mathcal {U}}}_i,\) \(i\in I,\) such that

$$\begin{aligned} \gamma _j=\inf _{x\in {{\mathbb {R}}}^n}\left\{ f_j(x)+\sum _{k\ne j}{\tilde{\mu }}_k( f_k(x)-\epsilon _k)+\sum _{i=1}^m{\tilde{\lambda }}_ig_i(x,{\tilde{u}}_i)\right\} . \end{aligned}$$

Let \(\varPhi (x):=f_j(x)+\sum _{k\ne j}{\tilde{\mu }}_k( f_k(x)-\epsilon _k)+\sum _{i=1}^m{\tilde{\lambda }}_ig_i(x,{\tilde{u}}_i)-\gamma _j.\) Since \(f_\ell ,\) \(\ell =1,\ldots ,p,\) and \(g_i(\cdot ,{\tilde{u}}_i),\) \(i\in I,\) are SOS-convex polynomials, so is \(\varPhi .\) Moreover, since \(\inf _{x\in {{\mathbb {R}}}^n}\varPhi (x)=0,\) by Lemma 3.1, there exists \({\bar{x}}\in \mathop {\mathrm{argmin}}\limits _{x\in {{\mathbb {R}}}^n}\varPhi (x)\) such that \(\varPhi ({\bar{x}})=0,\) and so, \(\nabla \varPhi ({\bar{x}})=0.\) It follows from Lemma 3.2 that \(\varPhi \) is a sum of squares polynomial, i.e.,

$$\begin{aligned} f_j+\sum _{k\ne j}{\tilde{\mu }}_k( f_k-\epsilon _k)+\sum _{i=1}^m{\tilde{\lambda }}_ig_i(\cdot ,{\tilde{u}}_i)- \gamma _j\in {\Sigma }^2_{2d}. \end{aligned}$$
(3)

On the other hand, for each \(i=1,\ldots , m,\) as \({\tilde{u}}_i=({\tilde{u}}_i^1,\ldots ,{\tilde{u}}_i^{q_i})\in {{\mathcal {U}}}_i,\)

$$\begin{aligned} A_i^0+\sum _{r=1}^{q_i}{\tilde{u}}_i^rA_i^r\succeq 0, \ i\in I. \end{aligned}$$
(4)

Let \(\lambda _i^0:={\tilde{\lambda }}_i\) and \(\lambda _i^r:={\tilde{\lambda }}_i{\tilde{u}}_i^r,\) \(i\in I,\) \(r=1,\ldots ,q_i.\) It follows from (3) and (4) that

$$\begin{aligned}&f_j+\sum _{k\ne j}{\tilde{\mu }}_k( f_k-\epsilon _k)+\sum _{i=1}^m{\tilde{\lambda }}_ig_i(\cdot ,{\tilde{u}}_i)-\gamma _j \\= & {} f_j+\sum _{k\ne j}{\tilde{\mu }}_k( f_k-\epsilon _k)+\sum _{i=1}^m({\tilde{\lambda }}_ig_i^{0}+ \sum _{r=1}^{q_i}{\tilde{\lambda }}_i{\tilde{u}}_i^rg_i^r)-\gamma _j\\= & {} f_j+\sum _{k\ne j}{\tilde{\mu }}_k( f_k-\epsilon _k)+\sum _{i=1}^m\sum _{r=0}^{q_i}\lambda _i^rg_i^r- \gamma _j\in {\Sigma }^2_{2d} \end{aligned}$$

and

$$\begin{aligned} {\tilde{\lambda }}_i\left( A_i^0+\sum _{r=1}^{q_i}{\tilde{u}}_i^rA_i^r\right) ={ \tilde{\lambda }}_iA_i^0+\sum _{r=1}^{q_i}{\tilde{\lambda }}_i{\tilde{u}}_i^rA_i^r= \lambda _i^0A_i^0+\sum _{r=1}^{q_i}\lambda _i^rA_i^r\succeq 0. \end{aligned}$$

It means that \((\gamma _j,{\tilde{\mu }},\varvec{\lambda }_1,\ldots ,\varvec{\lambda }_m)\in {{\mathbb {R}}}\times {{\mathbb {R}}}_+^{p-1}\times ({{\mathbb {R}}}_+\times {{\mathbb {R}}}^{t_1}_+\times {{\mathbb {R}}}^{q_1-t_1})\times \cdots \times ({{\mathbb {R}}}_+\times {{\mathbb {R}}}^{t_m}_+\times {{\mathbb {R}}}^{q_m-t_m})\) is feasible for \((\mathrm{RD}_j(\epsilon )),\) where \({\tilde{\mu }}:=({\tilde{\mu }}_k)_{k\ne j}\) and \(\varvec{\lambda }_i:=(\lambda _i^0,\lambda _i^1,\ldots ,\lambda _i^{q_i}),\) \(i\in I.\) So, we have

$$\begin{aligned} v(\mathrm{RP}_j(\epsilon ))=\gamma _j\le v(\mathrm{RD}_j(\epsilon )). \end{aligned}$$

Moreover, \(v(\mathrm{RD}_j(\epsilon ))=v(\mathrm{SDD}_j(\epsilon ))\) obviously holds by the construction of \((\mathrm{RD}_j(\epsilon ))\) and \((\mathrm{SDD}_j(\epsilon )).\)

Note that \((\mathrm{SDD}_j(\epsilon ))\) and \((\mathrm{SDP}_j(\epsilon ))\) are dual problems to each other. So, by the usual weak duality for semidefinite programming (Vandenberghe and Boyd 1996), we see that \(v(\mathrm{SDD}_j(\epsilon ))\le v(\mathrm{SDP}_j(\epsilon )).\)

To finish the proof of this theorem, it remains to show that \(v(\mathrm{RP}_j(\epsilon ))\ge v(\mathrm{SDP}_j(\epsilon )).\) Let x be any feasible solution of \((\mathrm{RP}_j(\epsilon ))\) and let \(\gamma _j:=f_j(x).\) Then \(f_k(x)\le \epsilon _k,\) \(k\ne j,\) and

$$\begin{aligned} g_i(x,u_i) = g_i^0(x) +\sum _{r=1}^{q_i}u_i^rg_i^r(x)\le 0, \ \forall u_i\in {{\mathcal {U}}}_i, \ i\in I, \end{aligned}$$

where for each \(i\in I,\) the compact and convex set \({{\mathcal {U}}}_i\subset {{\mathbb {R}}}^{q_i}\) is given by

$$\begin{aligned} {{\mathcal {U}}}_i=\left\{ u_i\in {{\mathbb {R}}}^{q_i} : A_i^0+\sum \limits _{r=1}^{q_i}u_i^rA_i^r\succeq 0,\, (u^{1}_i,\ldots ,u^{t_i}_i,u^{t_i+1}_i,\ldots ,u^{q_i}_i)\in {{\mathbb {R}}}^{t_i}_+\times {{\mathbb {R}}}^{q_i-t_i}\right\} . \end{aligned}$$

It follows from the strict feasibility condition and the strong duality theorem for semidefinite programming (Vandenberghe and Boyd 1996) that there exist \(Z_i\succeq 0,\) \(i\in I,\) such that

$$\begin{aligned}&g_i^r(x)+\langle A_i^r,Z_i\rangle \le 0, \ i\in I, \ r=0,1,\ldots ,t_i,\\&g_i^r(x)+\langle A_i^r,Z_i\rangle =0, \ i\in I, \ r=t_i+1,\ldots ,q_i. \end{aligned}$$

Let \(y:=(1,x_1 ,\ldots , x_n, x_1^2, x_1x_2 ,\ldots , x_1^{2d},\ldots , x_n^{2d})^T.\) Then we have

$$\begin{aligned}&\displaystyle f_j(x)=\sum \limits _{\alpha \in {\mathbb {N}}_{2d}^n} (f_j)_\alpha y_\alpha =\gamma _j,\\&\displaystyle f_k(x)-\epsilon _k=\sum \limits _{\alpha \in {\mathbb {N}}_{2d}^n} (f_k)_\alpha y_\alpha -\epsilon _k\le 0, \ k\ne j, \end{aligned}$$

and for each \(i\in I,\)

$$\begin{aligned} \left\{ \begin{array}{ll} \displaystyle g_i^r(x)+\langle A_i^r,Z_i\rangle =\sum \limits _{\alpha \in {\mathbb {N}}_{2d}^n} (g_i^r)_\alpha y_\alpha +\langle A_i^r,Z_i\rangle \le 0,&{} r=0,1,\ldots ,t_i,\\ \displaystyle g_i^r(x)+\langle A_i^r,Z_i\rangle =\sum \limits _{\alpha \in {\mathbb {N}}_{2d}^n} (g_i^r)_\alpha y_\alpha +\langle A_i^r,Z_i\rangle =0, &{} r=t_i+1,\ldots ,q_i. \end{array}\right. \end{aligned}$$

Moreover, \(yy^T = \sum _{\alpha \in {\mathbb {N}}_{2d}^n}y_\alpha B_\alpha \succeq 0\) with \(y_\mathbf{0 }=1.\) So, \((y, Z_1,\ldots , Z_m)\) is a feasible solution of \((\mathrm{SDP}_j(\epsilon )),\) and hence, we see that

$$\begin{aligned} \gamma _j=f_j(x)=\sum \limits _{\alpha \in {\mathbb {N}}_{2d}^n} (f_j)_\alpha y_\alpha \ge v(\mathrm{SDP}_j(\epsilon )). \end{aligned}$$

It means that \(v(\mathrm{RP}_j(\epsilon ))\ge v(\mathrm{SDP}_j(\epsilon )).\) Thus, we obtain the desired result. \(\square \)

Remark 3.1

In Theorem 3.1, it is worth mentioning that \(v(\mathrm{RP}_j(\epsilon ))=v(\mathrm{RD}_j(\epsilon ))\) can be proved under the Slater condition for \((\mathrm{RP}_j(\epsilon ))\) alone (the strict feasibility condition is not necessary), by combining the argument above with the weak duality relationship between \((\mathrm{RP}_j(\epsilon ))\) and \((\mathrm{RD}_j(\epsilon ));\) however, for simplicity of the presentation of Theorem 3.1, we omit the detailed proof.

In what follows, we recall a result obtained by Lasserre (2009b) which will be used in the proof of Theorem 3.2; indeed, the result is an extension of Jensen's inequality to a class of linear functionals that are not necessarily induced by probability measures, when its application is restricted to the class of SOS-convex polynomials.

Lemma 3.3

(Lasserre 2009b) Let \(f\in {{\mathbb {R}}}[x]_{2d}\) be SOS-convex,  and let \(y=(y_\alpha )_{\alpha \in {\mathbb {N}}_{2d}^n}\) satisfy \(y_\mathbf{0 }=1\) and \(\sum _{\alpha \in {\mathbb {N}}_{2d}^n}y_\alpha B_\alpha \succeq 0.\) Let \(L_y : {{\mathbb {R}}}[x]\rightarrow {{\mathbb {R}}}\) be a linear functional defined by \(L_y(f):=\sum _{\alpha \in {\mathbb {N}}_{2d}^n} f_\alpha y_\alpha ,\) where \(f(x)=\sum _{\alpha \in {\mathbb {N}}_{2d}^n} f_\alpha x^\alpha .\) Then

$$\begin{aligned} L_y(f)\ge f(L_y(x)), \end{aligned}$$

where \(L_y(x):=(L_y(x_1),\ldots ,L_y(x_n)).\)
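
The inequality of Lemma 3.3 can be checked numerically on a small example; the following sketch (ours, with an averaged-point moment vector chosen purely for illustration) also previews how Theorem 3.2 below extracts a point from a moment vector, namely by reading off the first-order moments \(L_y(x_i).\)

```python
# Small numerical check of Lemma 3.3 (our own sketch): y collects the moments of
# the measure (delta_{(0,0)} + delta_{(1,1)})/2 and f(x) = x1^2 + x2^2 is SOS-convex.
import itertools

def monomials(n, d):
    return [a for t in range(d + 1)
            for a in itertools.product(range(t, -1, -1), repeat=n) if sum(a) == t]

points = [(0.0, 0.0), (1.0, 1.0)]
y = {a: sum(p[0] ** a[0] * p[1] ** a[1] for p in points) / len(points)
     for a in monomials(2, 2)}                      # in particular y_0 = 1

f = {(2, 0): 1.0, (0, 2): 1.0}                      # f(x) = x1^2 + x2^2
L_y_f = sum(c * y[a] for a, c in f.items())         # L_y(f) = 1.0
x_bar = (y[(1, 0)], y[(0, 1)])                      # (L_y(x1), L_y(x2)) = (0.5, 0.5)
print(L_y_f, x_bar[0] ** 2 + x_bar[1] ** 2)         # 1.0 >= 0.5, as Lemma 3.3 asserts
```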

The following theorem provides a way to find an optimal solution to problem \((\mathrm{RP}_j(\epsilon ))\) from an optimal solution of \((\mathrm{SDP}_j(\epsilon )).\)

Theorem 3.2

For each fixed \(j=1,\ldots ,p,\) assume that the Slater condition for \((\mathrm{RP}_j(\epsilon ))\) and the strict feasibility condition hold. If \(({\bar{y}}, {\bar{Z}}_1,\ldots ,{\bar{Z}}_m)\) is an optimal solution of \((\mathrm{SDP}_j(\epsilon )),\) then \({\bar{x}}:=(L_{{\bar{y}}}(x_1),\ldots ,L_{{\bar{y}}}(x_n))\) is an optimal solution of \((\mathrm{RP}_j(\epsilon )).\)

Proof

Let \(j\in \{1,\ldots ,p\}\) be arbitrarily fixed. Assume that \(({\bar{y}}, {\bar{Z}}_1,\ldots ,{\bar{Z}}_m)\) is an optimal solution of \((\mathrm{SDP}_j(\epsilon )).\) Then, we have

$$\begin{aligned}&\displaystyle \sum _{\alpha \in {\mathbb {N}}_{2d}^n} (f_k)_\alpha {\bar{y}}_\alpha -\epsilon _k\le 0, \ k\ne j, \nonumber \\&\displaystyle \sum _{\alpha \in {\mathbb {N}}_{2d}^n} (g_i^r)_\alpha {\bar{y}}_\alpha +\langle A_i^r,{\bar{Z}}_i\rangle \le 0, \ i\in I, \ r=0,1,\ldots ,t_i, \end{aligned}$$
(5)
$$\begin{aligned}&\displaystyle \sum _{\alpha \in {\mathbb {N}}_{2d}^n} (g_i^r)_\alpha {\bar{y}}_\alpha +\langle A_i^r,{\bar{Z}}_i\rangle =0, \ i\in I, \ r=t_i+1,\ldots ,q_i, \end{aligned}$$
(6)
$$\begin{aligned}&\displaystyle \sum _{\alpha \in {\mathbb {N}}_{2d}^n}{\bar{y}}_\alpha B_\alpha \succeq 0, \ {\bar{y}}_\mathbf{0 }=1. \end{aligned}$$
(7)

Let \((u_i^1,\ldots ,u_i^{q_i})\in {{\mathcal {U}}}_i,\) \(i\in I,\) be arbitrarily given. Then for each \(i\in I,\) \(A_i^0+\sum _{r=1}^{q_i}u_i^rA_i^r\succeq 0.\) It follows from (5) and (6) that

$$\begin{aligned} \begin{aligned} 0\ge \sum _{\alpha \in {\mathbb {N}}_{2d}^n} (g_i^0)_\alpha {\bar{y}}_\alpha +\langle A_i^0,{\bar{Z}}_i\rangle&\ge \sum _{\alpha \in {\mathbb {N}}_{2d}^n} (g_i^0)_\alpha {\bar{y}}_\alpha -\left\langle \sum _{r=1}^{q_i}u_i^rA_i^r,{\bar{Z}}_i\right\rangle \\&\ge \sum _{\alpha \in {\mathbb {N}}_{2d}^n} (g_i^0)_\alpha {\bar{y}}_\alpha +\sum _{r=1}^{q_i}u_i^r\sum _{\alpha \in {\mathbb {N}}_{2d}^n}(g_i^r)_\alpha {\bar{y}}_\alpha \\&=L_{{\bar{y}}}(g_i^0+\sum _{r=1}^{q_i}u_i^rg_i^r). \end{aligned} \end{aligned}$$
(8)

Note that for each \(i\in I,\) \(g_i^0+\sum _{r=1}^{q_i}u_i^rg_i^r\) is SOS-convex. Since \({\bar{y}}\) satisfies (7), by Lemma 3.3, we see that for each \(i\in I,\)

$$\begin{aligned} L_{{\bar{y}}}\left( g_i^0+\sum _{r=1}^{q_i}u_i^rg_i^r\right) \ge \left( g_i^0+\sum _{r=1}^{q_i}u_i^rg_i^r\right) (L_{{\bar{y}}}(x_1), \ldots ,L_{{\bar{y}}}(x_n))=g_i^0({\bar{x}})+\sum _{r=1}^{q_i}u_i^rg_i^r({\bar{x}}). \end{aligned}$$

This, together with (8), yields that

$$\begin{aligned} g_i^0({\bar{x}})+\sum _{r=1}^{q_i}u_i^rg_i^r({\bar{x}})\le 0, \ i\in I. \end{aligned}$$
(9)

Also, by a similar argument as above, we see that

$$\begin{aligned} f_k({\bar{x}})-\epsilon _k\le 0, \ k\ne j. \end{aligned}$$
(10)

So, from (9) and (10), \({\bar{x}}\) is a feasible solution of \((\mathrm{RP}_j(\epsilon )),\) since for each \(i\in I,\) \((u_i^1,\ldots ,u_i^{q_i})\in {{\mathcal {U}}}_i\) is arbitrary. Moreover, by a similar argument as in (9), we see that \(\sum _{\alpha \in {\mathbb {N}}_{2d}^n}(f_j)_\alpha {\bar{y}}_\alpha \ge f_j({\bar{x}}).\) It follows from Theorem 3.1 that

$$\begin{aligned} \sum _{\alpha \in {\mathbb {N}}_{2d}^n}(f_j)_\alpha {\bar{y}}_\alpha =v(\mathrm{SDP}_j(\epsilon ))=v(\mathrm{RP}_j(\epsilon ))\le f_j({\bar{x}}) \le \sum _{\alpha \in {\mathbb {N}}_{2d}^n}(f_j)_\alpha {\bar{y}}_\alpha . \end{aligned}$$

Thus, \({\bar{x}}\) is an optimal solution of \((\mathrm{RP}_j(\epsilon )).\) \(\square \)

The following proposition, which underlies the \(\epsilon \)-constraint method and plays a key role in our main results, shows that an efficient solution of \(\mathrm{(RMP)}\) is precisely a point that is optimal for \((\mathrm{RP}_j({\bar{\epsilon }}))\) for all \(j=1,\ldots ,p\) with the same \({\bar{\epsilon }}.\)

Proposition 3.1

(Chankong and Haimes 1983; Ehrgott 2005) A feasible solution \({\bar{x}}\) is an efficient solution of \(\mathrm{(RMP)}\) if and only if there exists \({\bar{\epsilon }}\in {{\mathbb {R}}}^{p}\) such that \({\bar{x}}\) is an optimal solution of \((\mathrm{RP}_j({\bar{\epsilon }}))\) for all \(j = 1,\ldots , p.\)

Now, we give a theorem which shows how we can find efficient solutions of \(\mathrm{(RMP)},\) also known as robust efficient solutions of \(\mathrm{(UMP)},\) by using the \(\epsilon \)-constraint method.

Theorem 3.3

Let \({\bar{x}}_{(0)}\in K\) be arbitrarily given. Assume that for \(j=1,\ldots ,p,\)

$$\begin{aligned} {\bar{x}}_{(j)}\in \mathop {\mathrm{argmin}}\limits _{x\in K_j({\bar{\epsilon }}_{(j)})}f_j(x)\ne \emptyset , \end{aligned}$$

where \({\bar{\epsilon }}_{(j)}:=(f_1({\bar{x}}_{(j-1)}),\ldots ,f_p({\bar{x}}_{(j-1)}))\in {{\mathbb {R}}}^p.\) Then,  \({\bar{x}}_{(p)}\) is a robust efficient solution of \(\mathrm{(UMP)}.\)

Proof

For \(j=1,\ldots ,p,\) let \({\bar{x}}_{(j)}\in \mathop {\mathrm{argmin}}\limits _{x\in K_j({\bar{\epsilon }}_{(j)})}f_j(x).\) Note that for each \(j=1,\ldots ,p,\) the feasible set of \((\mathrm{RP}_j({\bar{\epsilon }}_{(j)}))\) is as follows:

$$\begin{aligned} K_j({\bar{\epsilon }}_{(j)}):=\{x\in K : f_k(x)\le f_k({\bar{x}}_{(j-1)}), \ k\ne j\}. \end{aligned}$$

Since for each \(j=1,\ldots ,p,\) \({\bar{x}}_{(j)}\in K_j({\bar{\epsilon }}_{(j)}),\)

$$\begin{aligned} f_k({\bar{x}}_{(j)})\le f_k({\bar{x}}_{(j-1)}), \ k\ne j. \end{aligned}$$
(11)

Moreover, since for each \(j=1,\ldots ,p,\) \({\bar{x}}_{(j)}\in \mathop {\mathrm{argmin}}\limits _{x\in K_j({\bar{\epsilon }}_{(j)})}f_j(x)\),

$$\begin{aligned} f_j({\bar{x}}_{(j)})\le f_j(x) \end{aligned}$$
(12)

for any \(x\in K_j({\bar{\epsilon }}_{(j)}).\) Since for each \(j=1,\ldots ,p,\) \({\bar{x}}_{(j-1)}\in K_j({\bar{\epsilon }}_{(j)}),\) from (12), we see that

$$\begin{aligned} f_j({\bar{x}}_{(j)})\le f_j({\bar{x}}_{(j-1)}), \ j=1,\ldots ,p. \end{aligned}$$
(13)

So, by (11) and (13), we obtain

$$\begin{aligned} f_j({\bar{x}}_{(p)})\le f_j({\bar{x}}_{(p-1)})\le \cdots \le f_j({\bar{x}}_{(1)}) \le f_j({\bar{x}}_{(0)}), \ j=1,\ldots ,p. \end{aligned}$$
(14)

Hence, we see that for each \(j=1,\ldots ,p,\) \({\bar{x}}_{(p)}\in K_j({\bar{\epsilon }}_{(p)})\subseteq K_j({\bar{\epsilon }}_{(j)}).\) So, we have for each \(j=1,\ldots ,p,\)

$$\begin{aligned} f_j({\bar{x}}_{(p)})\ge \min _{x\in K_j({\bar{\epsilon }}_{(p)})}f_j(x)\ge \min _{x\in K_j({\bar{\epsilon }}_{(j)})}f_j(x)=f_j({\bar{x}}_{(j)}). \end{aligned}$$

Since, by (14), for each \(j=1,\ldots ,p,\) \(f_j({\bar{x}}_{(p)})\le f_j({\bar{x}}_{(j)}),\) we have for each \(j=1,\ldots ,p,\)

$$\begin{aligned} f_j({\bar{x}}_{(p)})=f_j({\bar{x}}_{(j)}) \ \text { and } \ {\bar{x}}_{(p)}\in \mathop {\mathrm{argmin}}\limits _{x\in K_j({\bar{\epsilon }}_{(p)})}f_j(x). \end{aligned}$$

It means that \({\bar{x}}_{(p)}\) is an optimal solution of \((\mathrm{RP}_j({\bar{\epsilon }}_{(p)}))\) for all \(j=1,\ldots ,p.\) Thus, by Proposition 3.1, we obtain the desired result. \(\square \)
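
The procedure of Theorem 3.3 is easy to implement once each scalar subproblem \((\mathrm{RP}_j({\bar{\epsilon }}_{(j)}))\) can be solved. The following is a minimal sketch on a toy bi-objective instance of our own (not the example of Sect. 4, and not the paper's code), where each subproblem is solved directly with the Python package CVXPY rather than through the SDP relaxation of Theorem 3.2.

```python
# A sketch of our own of the scheme in Theorem 3.3 on a toy bi-objective instance
# (not the example of Sect. 4): each scalar subproblem is solved directly with CVXPY.
import cvxpy as cp
import numpy as np

x = cp.Variable(2)
f = [cp.sum_squares(x), cp.sum_squares(x - np.array([1.0, 1.0]))]   # f_1, f_2
K = [cp.sum_squares(x) <= 4.0]                                      # feasible set K (toy)

def evaluate(point):
    x.value = point
    return [float(fj.value) for fj in f]

x_bar = np.array([1.5, 0.0])       # a starting point x_bar_(0) in K
for j in range(len(f)):            # solve (RP_j(eps_(j))) for j = 1, ..., p
    eps = evaluate(x_bar)          # eps_(j) = (f_1(x_bar_(j-1)), ..., f_p(x_bar_(j-1)))
    cons = K + [f[k] <= eps[k] for k in range(len(f)) if k != j]
    cp.Problem(cp.Minimize(f[j]), cons).solve()
    x_bar = x.value.copy()         # x_bar_(j)

print(x_bar)   # approximately (0.209, 0.209), an efficient point of the toy problem
```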

Combining Theorems 3.2 and 3.3, we immediately obtain the following theorem, which shows that finding efficient solutions of \((\mathrm{RMP})\) is tractable under the Slater condition and the strict feasibility condition.

Theorem 3.4

Let \({\bar{x}}_{(0)}\in K\) be arbitrarily given. Assume that for \(j=1,\ldots ,p,\)

$$\begin{aligned} \left( {\bar{y}}_{(j)}, ({\bar{Z}}_{(j)})_1,\ldots ,({\bar{Z}}_{(j)})_m\right) \in \mathop {\mathrm{argmin}}\limits \{ (\mathrm{SDP}_j({\bar{\epsilon }}_{(j)}))\}\ne \emptyset , \end{aligned}$$

where for each \(j=1,\ldots ,p,\) \({\bar{\epsilon }}_{(j)}:=(f_1({\bar{x}}_{(j-1)}),\ldots ,f_p({\bar{x}}_{(j-1)}))\in {{\mathbb {R}}}^p\) and \({\bar{x}}_{(j)}:=(L_{{\bar{y}}_{(j)}}(x_1),\ldots ,L_{{\bar{y}}_{(j)}}(x_n)).\) Assume that for each \(j=1,\ldots ,p,\) the Slater condition for \((\mathrm{RP}_j({\bar{\epsilon }}_{(j)}))\) and the strict feasibility condition hold. Then \((L_{{\bar{y}}_{(p)}}(x_1),\ldots ,L_{{\bar{y}}_{(p)}}(x_n))\in K\) is a robust efficient solution of \(\mathrm{(UMP)}.\)

Proof

Assume that for \(j=1,\ldots ,p,\) \(\left( {\bar{y}}_{(j)}, ({\bar{Z}}_{(j)})_1,\ldots ,({\bar{Z}}_{(j)})_m\right) \) is an optimal solution of \((\mathrm{SDP}_j({\bar{\epsilon }}_{(j)})).\) Since for each \(j=1,\ldots ,p,\) the Slater condition and strict feasibility condition hold for \((\mathrm{RP}_j({\bar{\epsilon }}_{(j)})),\) by Theorem 3.2, for each \(j=1,\ldots ,p,\)

$$\begin{aligned} {\bar{x}}_{(j)}:=(L_{{\bar{y}}_{(j)}}(x_1),\ldots ,L_{{\bar{y}}_{(j)}}(x_n))\in \mathop {\mathrm{argmin}}\limits _{x\in K_j({\bar{\epsilon }}_{(j)})}f_j(x). \end{aligned}$$

Thus, by Theorem 3.3, \({\bar{x}}_{(p)}=(L_{{\bar{y}}_{(p)}}(x_1),\ldots ,L_{{\bar{y}}_{(p)}}(x_n))\) is a robust efficient solution of \(\mathrm{(UMP)}.\) \(\square \)

Remark 3.2

Let \(j\in \{1,\ldots ,p\}\) be arbitrarily fixed, and let \(k\in \{1,\ldots ,p\}\) with \(k\ne j.\) Since the Slater condition for \((\text {RP}_j(\epsilon ))\) depends on the parameter \(\epsilon ,\) it often occurs that the Slater condition fails for some given \(\epsilon .\) In order to see this more clearly, let us first consider the following robust SOS-convex scalar optimization problem

$$\begin{aligned} \begin{aligned} \min \limits _{x\in {{\mathbb {R}}}^n}&\ f_k(x)\\ \text{ s.t. }&\ g_i^{0}(x)+\sum _{r=1}^{q_i}u_i^rg_i^r(x)\le 0, \ \forall u_i\in {{\mathcal {U}}}_i, \ i=1,\ldots ,m. \end{aligned} \end{aligned}$$
(15)

It is worth mentioning that the problem (15) enjoys a zero duality gap in the sense that the optimal values of the problem (15) and its associated relaxation problems are equal, and an optimal solution of the problem (15) can be found by solving an associated semidefinite programming problem under a suitable constraint qualification (Chieu et al. 2018). If \(\epsilon _k\) is chosen as the optimal value of the problem (15), i.e., \(\epsilon _k=f_k(x^*),\) where \(x^*\in K\) is an optimal solution of the problem (15), then the Slater condition for \((\text {RP}_j(\epsilon ))\) clearly does not hold, and so a zero duality gap between the problem \((\text {RP}_j(\epsilon ))\) and its associated relaxation dual problems may fail. In spite of this drawback, we still use the Slater condition, since its validity is usually easier to check than that of other, weaker regularity conditions, e.g., stability (see Geoffrion 1971). Alternatively, to overcome this drawback, one may impose more restrictive assumptions on the objective functions, e.g., strict convexity (see Lee and Jiao 2018).

4 A numerical example

In this section, we give an example, with a detailed calculation process, to illustrate how to find efficient solutions of a robust multiobjective SOS-convex polynomial optimization problem by using our results.

Example 4.1

Consider the following 2-dimensional robust multiobjective SOS-convex optimization problem:

where

$$\begin{aligned}&f_1(x_1,x_2)=x_1^8+2x_1^2-2x_1x_2+x_2^2, \ f_2(x_1,x_2)=x_1^4-x_2,\\&g_1^0(x_1,x_2)=x_1^2+x_2^2-2, \ g_1^1(x_1,x_2)=x_1, \ g_1^2(x_1,x_2)=x_2,\\&g_2^0(x_1,x_2)=3x_1^2-6x_1, \text { and } \ g_2^1(x_1,x_2)=x_2^2. \end{aligned}$$

Here, the uncertain sets \({{\mathcal {U}}}_1\) and \({{\mathcal {U}}}_2\) are given by

$$\begin{aligned} {{\mathcal {U}}}_1=\left\{ (u_1^1,u_1^2)\in {{\mathbb {R}}}^2 : A_1^0+\sum _{r=1}^2u_1^rA_1^r\succeq 0 \right\} ,\ {{\mathcal {U}}}_2=\Big \{u_2^1\in {{\mathbb {R}}}_+ : A_2^0+u_2^1A_2^1\succeq 0\Big \}, \end{aligned}$$

where \(A_1^0=\left( \begin{array}{ccc} 1 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 \\ 0 &{} 0 &{} 1 \\ \end{array}\right) ,\) \(A_1^1=\left( \begin{array}{ccc} 0 &{} 0 &{} 1 \\ 0 &{} 0 &{} 0 \\ 1 &{} 0 &{} 0 \\ \end{array}\right) ,\) \(A_1^2=\left( \begin{array}{ccc} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 1 \\ 0 &{} 1 &{} 0 \\ \end{array}\right) ,\) \(A_2^0=\left( \begin{array}{ccc} 1 &{} 2 &{} 0 \\ 2 &{} 1 &{} 0 \\ 0 &{} 0 &{} 1 \\ \end{array}\right) ,\) and \(A_2^1=\left( \begin{array}{rrr} 0 &{} -1 &{}0 \\ -1 &{} 0 &{}0 \\ 0 &{} 0 &{}0\\ \end{array}\right) .\) Then, by simple calculations, we see that

$$\begin{aligned} {{\mathcal {U}}}_1=\{(u_1^1,u_1^2)\in {{\mathbb {R}}}^2 : (u_1^1)^2+(u_1^2)^2\le 1 \} \, \text { and } \, {{\mathcal {U}}}_2=\{u_2^1\in {{\mathbb {R}}}:1\le u_2^1\le 3\}. \end{aligned}$$
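
These descriptions can be double-checked numerically; the following small sketch (ours, not part of the paper) verifies at a few sample points that the LMI defining \({{\mathcal {U}}}_1\) holds exactly on the closed unit disk and that the LMI defining \({{\mathcal {U}}}_2\) holds exactly on the interval \([1,3].\)

```python
# A quick numerical confirmation of these descriptions (our own check, not in the paper).
import numpy as np

A10, A11, A12 = np.eye(3), np.zeros((3, 3)), np.zeros((3, 3))
A11[0, 2] = A11[2, 0] = 1.0
A12[1, 2] = A12[2, 1] = 1.0
A20 = np.array([[1.0, 2.0, 0.0], [2.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
A21 = np.array([[0.0, -1.0, 0.0], [-1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])

def psd(M, tol=1e-9):
    return np.linalg.eigvalsh(M).min() >= -tol

for u1, u2 in [(0.0, 0.0), (0.6, 0.7), (0.8, 0.8)]:
    print((u1, u2), psd(A10 + u1 * A11 + u2 * A12), u1**2 + u2**2 <= 1.0)   # flags agree

for u in [0.5, 2.0, 3.5]:
    print(u, psd(A20 + u * A21), 1.0 <= u <= 3.0)                           # flags agree
```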

Let \(({\hat{u}}_1^1,{\hat{u}}_1^2)=(0,0)\in {{\mathcal {U}}}_1,\) and let \({\hat{u}}_2^1=2\in {{\mathcal {U}}}_2.\) Then, we see that the strict feasibility condition holds. In addition, \(f_1,\) \(f_2,\) \(g_1^0,\) \(g_2^0,\) and \(g_2^1\) are SOS-convex polynomials, and \(g_1^1\) and \(g_1^2\) are affine functions. Moreover, by simple calculations, the robust feasible set K can be found as follows:

$$\begin{aligned} K:= & {} \{x\in {{\mathbb {R}}}^2 : g_1(x,u_1)\le 0, \ \forall u_1\in {{\mathcal {U}}}_1,\ g_2(x,u_2)\le 0, \ \forall u_2\in {{\mathcal {U}}}_2\}\\= & {} \left\{ (x_1,x_2)\in {{\mathbb {R}}}^2 : x_1^2+x_2^2-2+\max _{(u_1^1,u_1^2)\in {{\mathcal {U}}}_1} \left\{ u_1^1x_1+u_1^2x_2\right\} \le 0,\right. \\&\left. \qquad \qquad \qquad \qquad \ 3x_1^2-6x_1+\max _{u_2^1\in {{\mathcal {U}}}_2}\left\{ u_2^1x_2^2\right\} \le 0\right\} \\= & {} \{(x_1,x_2)\in {{\mathbb {R}}}^2 : x_1^2+x_2^2\le 1, \ (x_1-1)^2+x_2^2\le 1\}. \end{aligned}$$

Now, we replace \( ({\widetilde{\mathrm{RMP}}})\) by the \(\epsilon \)-constraint problems as follows:

$$\begin{aligned} (\mathrm{RP}_j({\bar{\epsilon }}_{(j)}))&\min \limits _{(x_1,x_2)\in K}&f_j(x_1,x_2)\\&\mathrm{s.t. }&f_k(x_1,x_2)\le {\bar{\epsilon }}_{(j)}, \ k\ne j, \end{aligned}$$

where for each \(j=1,2,\) \({\bar{\epsilon }}_{(j)}\) is given by \({\bar{\epsilon }}_{(1)}:=f_2({\bar{x}}_{(0)})\) and \({\bar{\epsilon }}_{(2)}:=f_1({\bar{x}}_{(1)}),\) respectively. Here, \({\bar{x}}_{(0)}\in K\) is given, and \({\bar{x}}_{(1)}\in \mathop {\mathrm{argmin}}\limits _{x\in K_1({\bar{\epsilon }}_{(1)})}f_1(x),\) where \(K_1({\bar{\epsilon }}_{(1)})=\{x\in K : f_2(x)\le f_2({\bar{x}}_{(0)})\}.\)

We now consider the following sum of squares relaxation dual problem of \((\mathrm{RP}_j({\bar{\epsilon }}_{(j)}))\):

Since \(f_j+\sum _{k\ne j}\mu _k(f_k-\epsilon _k)+\sum _{r=0}^{2}\lambda _1^rg_1^r+\sum _{r=0}^{1}\lambda _2^rg_2^r-\gamma _j \in {\Sigma }_{8}^2,\) there exist real polynomials \(q_\ell ,\) \(\ell = 1,\ldots ,r,\) such that

$$\begin{aligned} f_j(x)+\sum _{k\ne j}\mu _k(f_k(x)-\epsilon _k)+\sum _{r=0}^{2}\lambda _1^rg_1^r(x)+\sum _{r=0}^{1}\lambda _2^rg_2^r(x)-\gamma _j=\sum _{\ell =1}^rq_\ell (x)^2. \end{aligned}$$
(16)

Note that, by Proposition 2.1, there exists \(X\in S_+^{s(2,4)}(=S_+^{15})\) such that

$$\begin{aligned}&f_j(x)+\sum _{k\ne j}\mu _k(f_k(x)-\epsilon _k)+\sum _{r=0}^{2}\lambda _1^rg_1^r(x)+\sum _{r=0}^{1}\lambda _2^rg_2^r(x)-\gamma _j \end{aligned}$$
(17)
$$\begin{aligned}&=\langle v_4(x)v_4(x)^T,X\rangle \ \forall x\in {{\mathbb {R}}}^2. \end{aligned}$$
(18)
Table 1 Efficient solutions of \(({\widetilde{\mathrm{RMP}}})\) for 25 given points \({\bar{x}}_{(0)}\in K.\)

Fig. 1 a Efficient solutions of \(({\widetilde{\mathrm{RMP}}})\) listed in Table 1. b Efficient solutions of \(({\widetilde{\mathrm{RMP}}})\) for 1000 given points.

Note that the Newton polytope of the polynomial (17) is easily seen to be the convex hull of the points (8, 0), (0, 2), (0, 0). It follows from (Reznick 1978, Theorem 1) that the Newton polytope of each \(q_\ell \) is contained in the convex hull of the points (4, 0), (0, 1), (0, 0), and so the polynomials \(q_\ell \) in the sum of squares decomposition of (16) involve at most 6 distinct monomials, i.e., \(v_4(x)=(1,x_1,x_2,x_1^2,x_1^3,x_1^4)^T\) in (18); consequently, the full sum of squares decomposition can be computed by solving a much smaller semidefinite programming problem. With this fact, we formulate the semidefinite programming problems for \((\mathrm{RD}_j({\bar{\epsilon }}_{(j)})),\) \(j=1,2,\) as follows:

Solving the above semidefinite programming problems using the MATLAB optimization package CVX (Grant and Boyd 2013) together with the SDP-solver SDPT3 (Toh et al. 1999), we can find optimal solutions for the dual problem of \((\mathrm{SDD}_j({\bar{\epsilon }}_{(j)})),\) \(j=1,2.\) For example, letting \({\bar{x}}_{(0)}=(1,0)\in K\) and \({\bar{\epsilon }}_{(1)}=f_2({\bar{x}}_{(0)})=1,\) we obtain an optimal solution \(({\bar{y}},{\bar{Z}}_1,{\bar{Z}}_2)\) of the dual problem of \((\mathrm{SDD}_1({\bar{\epsilon }}_{(1)}))\), where

$$\begin{aligned}&{\bar{y}}=(1,0,0,0,0,0,0,0,0,0,0,0,0,0,0), \\&{\bar{Z}}_1\approx \left( \begin{array}{rrr} 0.6672 &{} -0.0003 &{} -0.0000 \\ -0.0003 &{} 0.06665 &{} -0.0000 \\ -0.0000 &{} -0.0000 &{} 0.6663 \\ \end{array} \right) , \, \text { and } \ {\bar{Z}}_2\approx \left( \begin{array}{rrr} 0.0000 &{} 0.0000 &{} 0.0000 \\ 0.0000 &{} 0.0000 &{} 0.0000 \\ 0.0000 &{} 0.0000 &{} 0.0000 \\ \end{array} \right) . \end{aligned}$$

It follows from Theorem 3.2 that \({\bar{x}}_{(1)}=(L_{{\bar{y}}}(x_1),L_{{\bar{y}}}(x_2))=(0, 0)\) is an optimal solution of \((\mathrm{RP}_1({\bar{\epsilon }}_{(1)})).\) Now, let \({\bar{\epsilon }}_{(2)}=f_1({\bar{x}}_{(1)})=0.\) Note that

$$\begin{aligned} K_2({\bar{\epsilon }}_{(2)})=\{x\in K : f_1(x)\le f_1({\bar{x}}_{(1)})\}=\{(0,0)\}, \end{aligned}$$

i.e., \({\bar{x}}_{(2)}=(0, 0)\) is an optimal solution of \((\mathrm{RP}_2({\bar{\epsilon }}_{(2)})).\) It means that \({\bar{x}}:={\bar{x}}_{(2)}=(0,0)\) is an efficient solution of \( ({\widetilde{\mathrm{RMP}}}).\) In order to find more efficient solutions of \( ({\widetilde{\mathrm{RMP}}}),\) we take 25 points \({\bar{x}}_{(0)}\) on the boundary of K, and then we obtain the efficient solutions of \( ({\widetilde{\mathrm{RMP}}})\) listed in Table 1. An illustration of the efficient solutions of \( ({\widetilde{\mathrm{RMP}}})\) found in this way is given in Fig. 1a. Moreover, we take 1000 points \({\bar{x}}_{(0)}\) in K; the efficient solutions of \(({\widetilde{\mathrm{RMP}}})\) for these points \({\bar{x}}_{(0)}\) are depicted in Fig. 1b.
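
As an independent sanity check (our own CVXPY sketch, not the CVX/SDPT3 computation above), one can also solve the deterministic reformulation of \((\mathrm{RP}_1({\bar{\epsilon }}_{(1)}))\) directly over the explicit set K computed earlier; it returns the same solution \({\bar{x}}_{(1)}=(0,0).\)

```python
# An independent sanity check (our own CVXPY sketch, not the CVX/SDPT3 computation
# above): solve (RP_1(eps_(1))) with eps_(1) = 1 directly over the explicit set K.
import cvxpy as cp

x1, x2 = cp.Variable(), cp.Variable()
# f_1 = x1^8 + 2*x1^2 - 2*x1*x2 + x2^2, written as x1^8 + x1^2 + (x1 - x2)^2 for DCP.
f1 = cp.square(cp.square(cp.square(x1))) + cp.square(x1) + cp.square(x1 - x2)
f2 = cp.square(cp.square(x1)) - x2                                   # f_2 = x1^4 - x2
K = [cp.square(x1) + cp.square(x2) <= 1.0,
     cp.square(x1 - 1.0) + cp.square(x2) <= 1.0]

prob = cp.Problem(cp.Minimize(f1), K + [f2 <= 1.0])   # eps_(1) = f_2(1, 0) = 1
prob.solve()
print(prob.value, x1.value, x2.value)   # approximately 0, 0, 0: recovers x_bar_(1) = (0, 0)
```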

5 Conclusions

In this paper, we proved that finding efficient solutions in robust multiobjective optimization with SOS-convex polynomials is tractable by using the \(\epsilon \)-constraint method and techniques for SOS-convex polynomials. Although the uncertainty set \({{\mathcal {U}}}\) is restricted to an affinely parameterized one, it would be very interesting, yet difficult, to investigate the case of a more general uncertainty set \({{\mathcal {U}}}.\)