1 Introduction

Since the seminal paper by Soyster [34], robust optimization has become a useful and efficient deterministic mathematical approach to handling decision-making problems in the face of data uncertainty [5, 7]. Recently, the robust optimization approach has been applied in multiobjective/vector frameworks across various domains, with many further developments and high-potential techniques for solving real-life decision-making problems under data uncertainty (see e.g., [9, 12, 16,17,18, 20, 23, 24, 26, 36] and the references therein).

A recent research direction in robust multiobjective optimization has been devoted to investigating how to reformulate and solve a robust multiobjective optimization problem via semidefinite programming (SDP) techniques (see e.g., [4]) by exploiting special structures of the constraint and objective functions such as linear, quadratic or polynomial structure; see e.g., [10, 11, 14, 25, 28]. For example, Magron et al. [28] used SDP relaxations to approximate Pareto curves of a bi-criteria polynomial program. The author of [11] established duality and linear matrix inequality conditions for a multiobjective SOS-convex optimization problem under constraint uncertainty, while Lee and Jiao [25] examined such a multiobjective problem by employing an approximate constraint scalarization approach to find robust efficient solutions via SDP duals. In a two-stage framework, the authors of [14] reformulated and solved an adjustable robust linear multiobjective optimization problem using SDP relaxations.

Remarkably, a new class of first-order scaled diagonally dominant sums-of-squares convex (1st-SDSOS-convex) polynomials has recently been introduced in [8]. This class is a subclass of the SOS-convex polynomials [3, 21], and is a numerically tractable subclass of convex polynomials in the sense that checking whether a given polynomial is 1st-SDSOS-convex can be done by solving the feasibility problem of a second order cone programming (SOCP) problem [1, 2, 8]. The class of 1st-SDSOS-convex polynomials covers all separable convex quadratic functions and every polynomial that can be expressed as a sum of even powers with nonnegative coefficients [8]. The interested reader is referred to the more recent paper [13] for an application of the class of 1st-SDSOS-convex polynomials to duality and optimality conditions for weakly efficient solutions of a multiobjective optimization problem.

In this paper, we consider an uncertain multiobjective optimization problem that is defined by

$$\begin{aligned} \min _{x\in \mathbb {R}^n}\big \{\big (f_1(x,u_1),\ldots ,f_m(x,u_m)\big )\,|\,g_l(x,\omega _l)\le 0,\; l=1,\dots ,q\big \}, \end{aligned}$$
(U)

where \(x\in \mathbb {R}^n\) is the vector of decision variables, \(u_\zeta \in U_\zeta , \zeta =1,\ldots , m, \omega _l\in \Omega _l, l=1,\dots ,q,\) are uncertain parameters, \(U_\zeta , \zeta =1,\ldots ,m, \Omega _l, l=1,\ldots , q\) are uncertainty sets and \(f_\zeta : \mathbb {R}^n\times U_\zeta \rightarrow \mathbb {R}, \zeta =1,\dots ,m, g_l: \mathbb {R}^n\times \Omega _l\rightarrow \mathbb {R}, l=1,\dots ,q\) are bi-functions.

In what follows, the uncertainty sets \(U_\zeta , \zeta =1,\ldots ,m, \Omega _l, l=1,\ldots , q\) are assumed to be ellipsoids given by

$$\begin{aligned} U_\zeta :=\big \{u_\zeta \in \mathbb {R}^{s}\mid u_\zeta ^T A_\zeta u_\zeta \le \rho _\zeta \big \},\; \;\;\Omega _l:=\big \{\omega _l\in \mathbb {R}^{r}\mid \omega _l^T B_l\omega _l\le \tau _l\big \}, \end{aligned}$$
(1)

where \(A_\zeta , \zeta =1,\ldots , m, B_l, l=1,\ldots , q\) are positive-definite symmetric matrices \((A_\zeta \succ 0, B_l\succ 0)\) and \(\rho _\zeta , \zeta =1,\ldots ,m, \tau _l, l=1,\ldots ,q\) are positive real numbers \((\rho _\zeta> 0, \tau _l> 0\)), while the bi-functions \(f_\zeta (\cdot ,\cdot ), \zeta =1,\dots ,m, g_l(\cdot ,\cdot ), l=1,\dots ,q\) are assumed to satisfy that \(f_\zeta (\cdot ,u_\zeta ), u_\zeta \in U_\zeta \) and \(g_l(\cdot ,\omega _l), \omega _l\in \Omega _l\) are 1st-SDSOS-convex polynomials (cf. Definition 2) and that \( f_\zeta (x,\cdot ), g_l(x,\cdot ), x\in \mathbb {R}^n\) are affine functions given by

$$\begin{aligned} f_\zeta (x,u_\zeta ):&=p^0_\zeta (x)+\sum \limits ^{s}_{i=1}u^i_\zeta p^i_\zeta (x)\; \text{ for } \; u_\zeta :=(u^1_\zeta ,\ldots , u^s_\zeta )\in \mathbb {R}^s, \nonumber \\ g_l(x,\omega _l):&=h^0_l(x)+\sum \limits ^{r}_{j=1}\omega ^j_lh^j_l(x)\; \text{ for } \; \omega _l:=(\omega ^1_l,\ldots , \omega ^r_l)\in \mathbb {R}^r, \end{aligned}$$
(2)

where \(p_\zeta ^i, i=0,1,\ldots ,s, \zeta =1,\ldots ,m \) and \(h^j_l, j=0,1,\ldots ,r, l=1,\ldots ,q\) are given polynomials with degrees at most \(\eta \in \mathbb {N}_0:=\{0,1,2,\ldots \}\). Without loss of generality, we suppose that \(\eta \) is an even number because otherwise we can replace \(\eta \) by \(\eta +1.\)

To deal with the uncertainty problem (U), we associate with it the following robust counterpart:

$$\begin{aligned} \min _{x\in \mathbb {R}^n}\big \{\big (\max \limits _{u_1\in U_1}f_1(x,u_1),\ldots ,\max \limits _{u_m\in U_m}f_m(x,u_m)\big )\,|\, g_l(x,\omega _l)\le 0,\;\forall \omega _l\in \Omega _l, \; l=1,\dots ,q\big \}, \end{aligned}$$
(R)

where the objective functions are evaluated at their worst case and the constraints are enforced for all possible realizations of the uncertain parameters within the corresponding uncertainty sets.

From now on, the feasible set of problem (R) is denoted by

$$\begin{aligned} C:=\big \{x\in \mathbb {R}^n\mid \; g_l(x,\omega _l)\le 0,\;\forall \omega _l\in \Omega _l,\, l=1,\dots ,q\big \}. \end{aligned}$$
(3)

We also use the notation \(F:=(F_1,\ldots ,F_m)\), where \(F_\zeta (x):=\max \limits _{u_\zeta \in U_\zeta }f_\zeta (x,u_\zeta ), \zeta =1,\ldots ,m\) for \(x\in \mathbb {R}^n\).
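Since each \(f_\zeta (x,\cdot )\) is affine and \(U_\zeta \) is an ellipsoid, the inner maximum \(F_\zeta (x)\) admits a closed form: writing \(A_\zeta =Q_\zeta ^TQ_\zeta \) (note \(Q_\zeta \) is invertible as \(A_\zeta \succ 0\)) and substituting \(v=Q_\zeta u_\zeta \), one maximizes a linear function over the ball \(\Vert v\Vert \le \sqrt{\rho _\zeta }\), giving \(F_\zeta (x)=p^0_\zeta (x)+\sqrt{\rho _\zeta }\,\Vert Q_\zeta ^{-T}(p^1_\zeta (x),\ldots ,p^s_\zeta (x))\Vert \). A minimal numeric sketch, with toy data of our own choosing (not an instance from the paper):

```python
import numpy as np

def worst_case(p0, p, A, rho, x):
    """F(x) = max of p0(x) + <u, (p1(x),...,ps(x))> over u^T A u <= rho.
    With A = Q^T Q and v = Q u, the objective is linear in v over the
    ball ||v|| <= sqrt(rho), so the max is sqrt(rho)*||Q^{-T} p(x)||."""
    Q = np.linalg.cholesky(A).T                  # A = Q^T Q
    px = np.array([pi(x) for pi in p])
    return p0(x) + np.sqrt(rho) * np.linalg.norm(np.linalg.solve(Q.T, px))

# Toy data (for illustration only): s = 2, n = 2.
A, rho = np.diag([2.0, 3.0]), 1.0
p0 = lambda x: x[0] ** 2 + x[1] ** 2
p = [lambda x: x[0], lambda x: x[1]]
x = np.array([1.0, -1.0])
F = worst_case(p0, p, A, rho, x)

# Cross-check by sampling points u = Q^{-1} v with ||v|| = sqrt(rho),
# which lie on the boundary of the ellipsoid U.
Q = np.linalg.cholesky(A).T
vs = np.random.default_rng(0).normal(size=(4000, 2))
vs = np.sqrt(rho) * vs / np.linalg.norm(vs, axis=1, keepdims=True)
sampled = max(p0(x) + u @ np.array([q(x) for q in p])
              for u in np.linalg.solve(Q, vs.T).T)
assert sampled <= F + 1e-9 and F - sampled < 1e-2
```

The sampled maximum never exceeds, and closely approaches, the closed-form value.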

The forthcoming solution concepts are stated in terms of worst-case robust efficiency (see e.g., [16, 22]) for multiobjective/vector optimization (cf. [15, 19, 27, 29, 31, 35]).

Definition 1

(i) A point \({\bar{x}}\in C\) is called a robust Pareto solution of problem (U) if \({\bar{x}}\) is a Pareto solution of problem (R), i.e., for all \( x\in C,\)

$$\begin{aligned} \quad F(x)-F(\bar{x})\notin -\mathbb {R}^m_+\setminus \{0\}, \end{aligned}$$

where \(\mathbb {R}^m_+\) stands for the nonnegative orthant of \(\mathbb {R}^m\).

(ii) We say that \({\bar{x}}\in C\) is a robust weak Pareto solution of problem (U) if \({\bar{x}}\) is a weak Pareto solution of problem (R), i.e., for all \( x\in C,\)

$$\begin{aligned} \quad F(x)-F({\bar{x}})\notin -\mathrm{int\,} \mathbb {R}^m_+, \end{aligned}$$

where \(\mathrm{int\,} \mathbb {R}^m_+\) denotes the topological interior of \(\mathbb {R}^m_+\).
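On finitely many candidate points, the conditions of Definition 1 amount to componentwise dominance checks; a small sketch with hypothetical objective values (ours, for illustration):

```python
import numpy as np

def is_weak_pareto(F_bar, F_values):
    """x_bar is weak Pareto iff no feasible x satisfies F(x) < F(x_bar)
    componentwise, i.e. F(x) - F(x_bar) is never in -int R^m_+."""
    return not any(np.all(Fx < F_bar) for Fx in F_values)

def is_pareto(F_bar, F_values):
    """x_bar is Pareto iff no feasible x satisfies F(x) <= F(x_bar)
    componentwise with at least one strict inequality."""
    return not any(np.all(Fx <= F_bar) and np.any(Fx < F_bar) for Fx in F_values)

# Hypothetical objective values F(x) over a few feasible points.
F_values = [np.array([1.0, 2.0]), np.array([2.0, 1.0]), np.array([2.0, 2.0])]
assert is_pareto(np.array([1.0, 2.0]), F_values)       # nothing dominates it
assert is_weak_pareto(np.array([2.0, 2.0]), F_values)  # no strict dominator
assert not is_pareto(np.array([2.0, 2.0]), F_values)   # (1,2) dominates it
```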

The purpose of this paper is threefold. Firstly, we propose necessary and sufficient optimality conditions for robust (weak) Pareto solutions of the uncertain multiobjective optimization problem (U). The obtained optimality conditions are expressed as second order cone (SOC) conditions, which provide a numerically checkable certificate of the solvability of the robust optimality conditions. Interestingly, it is shown that the SOC conditions are equivalent to the robust Karush-Kuhn-Tucker (KKT) condition. This is achieved by exploiting the special structure of 1st-SDSOS-convex polynomials, which enables us to state robust optimality conditions by way of SOC conditions. These results will be given in Sect. 2.

Secondly, we suggest a dual multiobjective problem in terms of SOC conditions to the robust multiobjective optimization problem (R) and explore robust strong, robust weak and robust converse duality relations between (R) and its dual. The obtained dual results characterize the SOC conditions as well as the fulfilment of the robust KKT condition. In particular, this shows how we can verify a Pareto solution of the robust multiobjective optimization problem (R) by solving the corresponding dual multiobjective problem, which is expressed in terms of SOC conditions. These results will be given in Sect. 3.

Thirdly, we establish robust solution relationships between the uncertain multiobjective optimization problem (U) and an SOCP relaxation of a corresponding weighted-sum optimization problem. These relationships show how one can find a robust (weak) Pareto solution of the uncertain multiobjective polynomial program (U) by solving an SOCP relaxation. In this way, we obtain strong SOCP duality and exact SOCP relaxation among the (scalar) dual, relaxation and weighted-sum optimization programs. These results will be presented in Sect. 4. The last section summarizes the main results and provides some perspectives on this research topic.

2 Second-order cone optimality conditions and solution relationships

To begin with, some notations and definitions are presented. For each \(n\in \mathbb {N}:=\{1,2,\ldots \}\), the notation \(\mathbb {R}^n\) signifies the n-dimensional Euclidean space, whose norm is denoted by \(\Vert \cdot \Vert \). The inner product in \(\mathbb {R}^n\) is defined by \(\langle x,y\rangle :=x^T y\) for all \(x, y\in \mathbb {R}^n.\) Denote by \(\mathbb {R}[x]\) (or \(\mathbb {R}[x_1,\ldots ,x_n]\)) the ring of polynomials in \(x:=(x_1,\ldots ,x_n)\) with real coefficients. Consider a polynomial f with degree at most \(\eta \), where \(\eta \) is an even number, and let \(\ell :=\eta /2\). We write the canonical basis of \(\mathbb {R}_\eta [x_1,\ldots ,x_n]\) as

$$\begin{aligned} x^{(\eta )}:=(1,x_1,x_2,\ldots ,x_n,x_1^2,x_1x_2,\ldots ,x_2^2,\ldots ,x_n^2,\ldots ,x_1^{\eta },\ldots ,x_n^\eta )^T, \end{aligned}$$

where \(\mathbb {R}_\eta [x_1,\ldots ,x_n]\) is the space of all real polynomials on \(\mathbb {R}^n\) with degree at most \(\eta \). Let \(s(\eta ,n)\) be the dimension of \(\mathbb {R}_\eta [x_1,\ldots ,x_n]\) and for each \(1 \le \alpha \le s(\eta ,n)\), let \(i(\alpha )=(i_1(\alpha ),\ldots ,i_n(\alpha )) \in (\mathbb {N}_0)^n\) denote the multi-index satisfying

$$\begin{aligned} x^{(\eta )}_{\alpha }=x^{i(\alpha )}:=x_1^{i_1(\alpha )}\ldots x_n^{i_n(\alpha )}. \end{aligned}$$

Let the monomials \(m_\alpha (x)=x^{(\eta )}_{\alpha }\) be the \(\alpha \)-th coordinate of \(x^{(\eta )}\), \(1 \le \alpha \le s(\eta ,n)\). Thus, we can write

$$\begin{aligned} f(x) = \sum \limits _{\alpha =1}^{s(\eta ,n)} f_{\alpha }m_{\alpha }(x)=\sum \limits _{\alpha =1}^{s(\eta ,n)} f_{\alpha }x^{(\eta )}_{\alpha }. \end{aligned}$$

Given \(u \in \mathbb {N}\) and \(y=(y_\alpha )\in \mathbb {R}^{s(u,n)},\) the Riesz functional \(L_y:\mathbb {R}_u[x]\rightarrow \mathbb {R}\) is defined by

$$\begin{aligned} L_y(f)=\sum _{\alpha =1}^{s(u,n)}f_\alpha y_\alpha \text{ for } f(x)=\sum _{\alpha =1}^{s(u,n)}f_\alpha x^{(u)}_\alpha , \end{aligned}$$
(4)

and the moment matrix \(\textbf{M}_u(y)\) of order u generated by \(y=(y_\alpha )\in \mathbb {R}^{s(2u,n)}\) is defined by

$$\begin{aligned} \textbf{M}_u(y)=\sum _{\alpha =1}^{ s(2u,n)} y_\alpha M_\alpha , \end{aligned}$$
(5)

where \(M_\alpha ,\alpha =1,\ldots , s(2u,n),\) are the \((s(u,n)\times s(u,n))\) symmetric matrices such that

$$\begin{aligned} x^{(u)}(x^{(u)})^T= \sum _{\alpha =1}^{ s(2u,n)} x^{(u)}_\alpha M_\alpha . \end{aligned}$$
(6)
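For instance, when \(n=1\) and \(u=1\) we have \(x^{(1)}=(1,x)^T\), \(s(1,1)=2\) and \(s(2,1)=3\), and the matrices \(M_\alpha \) in (6) can be written down explicitly; a minimal sketch:

```python
import numpy as np

# For n = 1, u = 1: x^(1) = (1, x)^T and
# x^(1) (x^(1))^T = [[1, x], [x, x^2]] = 1*M_1 + x*M_2 + x^2*M_3,
# which identifies the symmetric matrices M_alpha of (6).
M = [np.array([[1.0, 0.0], [0.0, 0.0]]),   # coefficient of the monomial 1
     np.array([[0.0, 1.0], [1.0, 0.0]]),   # coefficient of x
     np.array([[0.0, 0.0], [0.0, 1.0]])]   # coefficient of x^2

def moment_matrix(y):
    """M_u(y) = sum_alpha y_alpha M_alpha, as in (5)."""
    return sum(ya * Ma for ya, Ma in zip(y, M))

# The moments y = (1, t, t^2) of a Dirac measure at t yield the rank-one
# positive semidefinite matrix (1, t)(1, t)^T.
t = 2.0
My = moment_matrix([1.0, t, t ** 2])
assert np.allclose(My, np.outer([1.0, t], [1.0, t]))
assert np.all(np.linalg.eigvalsh(My) >= -1e-12)
```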

Definition 2

Let d be a polynomial on \(\mathbb {R}^n\).

(i) The polynomial d with degree \(\eta =2\ell , \ell \in \mathbb {N}_0,\) is scaled diagonally dominant sum-of-squares (SDSOS) [1] if there exist nonnegative numbers \(\alpha _i\), \(\beta _{ij}^+,\beta _{ij}^-\), \(\gamma _{ij}^+,\gamma ^-_{ij}\) and \(p \in \mathbb {N}\) with \(1 \le p \le s(\ell ,n)\) such that

$$\begin{aligned} d(x)=\sum _{i=1}^{p} \alpha _i m_i^2(x)+\sum _{i,j=1, i \ne j}^{p} (\beta _{ij}^+ m_i(x)+\gamma _{ij}^+ m_j(x))^2+\sum _{i,j=1, i \ne j}^{p} (\beta _{ij}^- m_i(x)-\gamma _{ij}^- m_j(x))^2, \end{aligned}$$

where \(m_i, m_j\) are monomials in the variable x. We let \(\textbf{SDSOS}_\eta [x]\) denote the set of all SDSOS polynomials on \(\mathbb {R}^n\) with degree at most \(\eta \).

(ii) For the polynomial d with degree \(\eta \in \mathbb {N}_0\), we consider the polynomial D defined as

$$\begin{aligned} D(x,y):= d(x)-d(y)- \nabla d(y)^T(x-y), \end{aligned}$$

where \(\nabla d(y)\) stands for the derivative of d at y. The polynomial d is called first-order scaled diagonally dominant sum-of-squares convex (1st-SDSOS-convex) [8] if D is an SDSOS polynomial in the variables \((x,y)\).

It is worth mentioning here that all 1st-SDSOS-convex polynomials are convex, the sum of two 1st-SDSOS-convex polynomials is a 1st-SDSOS-convex polynomial, and the product of a 1st-SDSOS-convex polynomial and a nonnegative scalar is also a 1st-SDSOS-convex polynomial.
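As a toy instance of Definition 2(i) (our own example), the polynomial \(d(x)=2x^4+3x^2\) — a sum of even powers with nonnegative coefficients, hence 1st-SDSOS-convex by [8] — admits the SDSOS certificate \(d=3m_1^2+2m_2^2\) with \(m_1(x)=x\), \(m_2(x)=x^2\) and no cross terms, which can be checked numerically:

```python
import numpy as np

def d(x):
    """A separable sum of even powers with nonnegative coefficients."""
    return 2.0 * x ** 4 + 3.0 * x ** 2

def sdsos_certificate(x):
    """Definition 2(i) with m1(x) = x, m2(x) = x^2, alpha1 = 3, alpha2 = 2
    and no cross terms: d = alpha1*m1^2 + alpha2*m2^2."""
    return 3.0 * x ** 2 + 2.0 * (x ** 2) ** 2

xs = np.linspace(-3.0, 3.0, 201)
assert np.allclose(d(xs), sdsos_certificate(xs))  # the decomposition is exact
assert np.all(sdsos_certificate(xs) >= 0.0)       # hence d >= 0 on the grid
```

In general, finding such a certificate is the SOCP feasibility problem mentioned in the introduction; here the decomposition is visible by inspection.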

The first theorem in this section provides necessary and sufficient optimality conditions for robust (weak) Pareto solutions of problem (U) in terms of second order cone (SOC) conditions. Let \(Q_\zeta , \zeta =1,\ldots ,m\) and \(E_l, l=1,\ldots ,q\) be square matrices such that \(A_\zeta =Q_\zeta ^T Q_\zeta \) and \(B_l=E_l^T E_l\); such factorizations exist (e.g., via the Cholesky decomposition) because \(A_\zeta \succ 0\) and \(B_l\succ 0.\)

Theorem 1

(SOC conditions for robust (weak) efficiencies) Let \(\bar{x}\in \mathbb {R}^n\) be a robust feasible point of problem (U).

  1. (i)

    Suppose that the Slater condition is satisfied; i.e., there exists \({\hat{x}}\in \mathbb {R}^n\) such that for all \( \omega _l\in \Omega _l, l=1,\ldots ,q,\)

    $$\begin{aligned} g_l({\hat{x}},\omega _l)<0. \end{aligned}$$
    (7)

    If \({\bar{x}}\) is a robust weak Pareto solution of problem (U), then there exist \(\mu ^0_l\in \mathbb {R}_+, \mu ^j_l\in \mathbb {R}, l=1,\ldots ,q, j=1,\ldots ,r\) and \((\alpha ^0_1,\ldots ,\alpha ^0_m)\in \mathbb {R}^m_+{\setminus }\{0\}, \alpha ^i_\zeta \in \mathbb {R}, \zeta =1,\ldots ,m, i=1,\ldots ,s\) such that

    $$\begin{aligned}&\sum _{\zeta =1}^m\big (\alpha ^0_\zeta p^0_\zeta +\sum _{i=1}^s\alpha ^i_\zeta p^i_\zeta \big )+\sum _{l=1}^q\big (\mu ^0_lh^0_l+\sum _{j=1}^r\mu ^j_lh^j_l\big )- \sum _{\zeta =1}^m \alpha ^0_\zeta F_\zeta ({\bar{x}}) \in \textbf{SDSOS}_\eta [x],\end{aligned}$$
    (8)
    $$\begin{aligned}&\Vert Q_\zeta (\alpha _\zeta ^1,\ldots ,\alpha _\zeta ^s)\Vert \le \sqrt{\rho _\zeta }\alpha _\zeta ^0,\; \zeta =1,\ldots ,m,\end{aligned}$$
    (9)
    $$\begin{aligned}&\Vert E_l(\mu _l^1,\ldots ,\mu _l^r)\Vert \le \sqrt{\tau _l}\mu _l^0,\;l=1,\dots ,q, \end{aligned}$$
    (10)

    where \(F_\zeta ({\bar{x}}):=\max \limits _{u_\zeta \in U_\zeta }f_\zeta (\bar{x},u_\zeta )\) for \( \zeta =1,\ldots ,m\).

  2. (ii)

    If there exist \(\mu ^0_l\in \mathbb {R}_+, \mu ^j_l\in \mathbb {R}, l=1,\ldots ,q, j=1,\ldots ,r\) and \((\alpha ^0_1,\ldots ,\alpha ^0_m)\in \mathbb {R}^m_+{\setminus }\{0\}, \alpha ^i_\zeta \in \mathbb {R}, \zeta =1,\ldots ,m, i=1,\ldots ,s\) such that (8), (9) and (10) are satisfied, then \({\bar{x}}\) is a robust weak Pareto solution of problem (U).

  3. (iii)

    If there exist \(\mu ^0_l\in \mathbb {R}_+, \mu ^j_l\in \mathbb {R}, l=1,\ldots ,q, j=1,\ldots ,r\) and \((\alpha ^0_1,\ldots ,\alpha ^0_m)\in \mathrm{int\,} \mathbb {R}^m_+, \alpha ^i_\zeta \in \mathbb {R}, \zeta =1,\ldots ,m, i=1,\ldots ,s\) such that (8), (9) and (10) are satisfied, then \({\bar{x}}\) is a robust Pareto solution of problem (U).

Proof

(i) Suppose that the Slater condition (7) is satisfied and let \({\bar{x}}\) be a robust weak Pareto solution of problem (U). Put \(F_\zeta (x):=\max \limits _{u_\zeta \in U_\zeta }f_\zeta (x,u_\zeta ), \zeta =1,\ldots ,m\) and \( G_l(x):=\max \limits _{\omega _l\in \Omega _l}g_l(x,\omega _l), l=1,\ldots ,q\) for \(x\in \mathbb {R}^n\). Then, it holds that \(F_\zeta , \zeta =1,\ldots ,m\) and \(G_l,l=1,\ldots ,q\) are finite-valued convex functions on \(\mathbb {R}^n\) because \(f_\zeta (\cdot ,u_\zeta ), u_\zeta \in U_\zeta \) and \(g_l(\cdot ,\omega _l), \omega _l\in \Omega _l\) are 1st-SDSOS-convex polynomials (hence, convex polynomials), \( f_\zeta (x,\cdot )\) and \( g_l(x,\cdot )\) are affine functions for each \(x\in \mathbb {R}^n,\) and \(U_\zeta \) and \(\Omega _l\) are compact sets. Note that the Slater condition (7) ensures that for each \(\;l=1,\ldots ,q,\)

$$\begin{aligned} G_l({\hat{x}})<0. \end{aligned}$$
(11)

Since \({\bar{x}}\) is a robust weak Pareto solution of problem (U), it follows that

$$\begin{aligned} \{x\in \mathbb {R}^n\mid F_\zeta (x)-F_\zeta ({\bar{x}})<0,\; \zeta =1,\ldots , m,\; G_l(x)<0,\; l=1,\ldots ,q\}=\emptyset . \end{aligned}$$

Invoking a classical alternative theorem in convex analysis (see e.g., [32, Theorem 21.1]), we can find \(\lambda _\zeta \ge 0, \zeta =1,\ldots , m, \gamma _l\ge 0, l=1,\ldots , q\) such that \(\sum \limits _{\zeta =1}^m(\lambda _\zeta )^2+\sum \limits _{l=1}^q(\gamma _l)^2> 0\) and

$$\begin{aligned} \sum \limits _{\zeta =1}^m\lambda _\zeta \big (F_\zeta (x)-F_\zeta (\bar{x})\big )+\sum \limits _{l=1}^q\gamma _lG_l(x)\ge 0,\quad \forall x\in \mathbb {R}^n. \end{aligned}$$
(12)

We observe by (11) and (12) that \((\lambda _1,\ldots ,\lambda _m)\ne 0\), and that

$$\begin{aligned} \inf \limits _{x\in \mathbb {R}^n}{\Big \{\sum \limits _{\zeta =1}^m\lambda _\zeta F_\zeta (x)+\sum \limits _{l=1}^q\gamma _lG_l(x)\Big \}}\ge \sum \limits _{\zeta =1}^m \lambda _\zeta F_\zeta ({\bar{x}}). \end{aligned}$$
(13)

Denote \(W:=\prod \limits _{\zeta =1}^mU_\zeta \times \prod \limits _{l=1}^q\Omega _l\subset \mathbb {R}^{ms+qr}\); then W is a convex and compact set. Now, by recalling the definitions of \(F_\zeta , \zeta =1,\ldots ,m\) and \(G_l, l=1,\ldots ,q,\) we get by (13) that

$$\begin{aligned} \inf _{x\in \mathbb {R}^n}{\max \limits _{w\in W}{{\Big \{\sum \limits _{\zeta =1}^m\lambda _\zeta f_\zeta (x,u_\zeta )+\sum \limits _{l=1}^q\gamma _lg_l(x,\omega _l)\Big \}}}}\ge \sum \limits _{\zeta =1}^m \lambda _\zeta F_\zeta ({\bar{x}}), \end{aligned}$$
(14)

where \(w:=(u_1,\ldots ,u_m,\omega _1,\ldots ,\omega _q).\) Let \(H:\mathbb {R}^n\times \mathbb {R}^{ms+qr}\rightarrow \mathbb {R}\) be defined by \(H(x,w):=\sum \limits _{\zeta =1}^m\lambda _\zeta f_\zeta (x,u_\zeta )+\sum \limits _{l=1}^q\gamma _lg_l(x,\omega _l)\) for \(x\in \mathbb {R}^n, w:=(u_1,\ldots ,u_m,\omega _1,\ldots ,\omega _q)\in \mathbb {R}^{ms+qr}.\) Then, H is convex in the variable x and affine in the variable w, and so we can invoke the classical minimax theorem (see e.g., [33, Theorem 4.2]) to claim that

$$\begin{aligned} \inf _{x\in \mathbb {R}^n}{\max \limits _{w\in W}{{H(x,w)}}}=\max _{w\in W}{\inf \limits _{x\in \mathbb {R}^n}{{H(x,w)}}}. \end{aligned}$$

This together with (14) entails that

$$\begin{aligned} \max _{w\in W}{\inf \limits _{x\in \mathbb {R}^n}{{\left\{ \sum \limits _{\zeta =1}^m\lambda _\zeta f_\zeta (x,u_\zeta )+\sum \limits _{l=1}^q\gamma _lg_l(x,\omega _l)\right\} }}}\ge \sum \limits _{\zeta =1}^m \lambda _\zeta F_\zeta ({\bar{x}}). \end{aligned}$$

Hence, we can find \({\bar{u}}_\zeta :=({\bar{u}}^1_\zeta ,\ldots ,\bar{u}^{s}_\zeta )\in U_\zeta , \zeta =1,\ldots ,m\) and \( \bar{\omega }_l:=(\bar{\omega }^1_l,\ldots ,\bar{\omega }^r_l)\in \Omega _l, l=1,\ldots ,q\) such that

$$\begin{aligned} \sum \limits _{\zeta =1}^m\lambda _\zeta f_\zeta (x,{\bar{u}}_\zeta )+\sum \limits _{l=1}^q\gamma _lg_l(x,\bar{\omega }_l)\ge \sum \limits _{\zeta =1}^m \lambda _\zeta F_\zeta (\bar{x}),\quad \forall x\in \mathbb {R}^n. \end{aligned}$$
(15)

We now consider the polynomial \(\sigma \) on \(\mathbb {R}^n\) given by \(\sigma (x):=\sum \limits _{\zeta =1}^m\lambda _\zeta f_\zeta (x,{\bar{u}}_\zeta )+\sum \limits _{l=1}^q\gamma _lg_l(x,\bar{\omega }_l)-\sum \limits _{\zeta =1}^m \lambda _\zeta F_\zeta ({\bar{x}})\) for each \(x\in \mathbb {R}^n.\) By our assumptions, it is easy to see that \(\sigma \) is a 1st-SDSOS-convex polynomial; moreover, \(\sigma (x)\ge 0\) for all \( x\in \mathbb {R}^n\) by virtue of (15). So, \(\sigma \in \textbf{SDSOS}_\eta [x]\) (cf. [8, Proposition 5.3]).

Now, the relations \({\bar{u}}_\zeta :=({\bar{u}}^1_\zeta ,\ldots ,\bar{u}^s_\zeta )\in U_\zeta , \zeta =1,\ldots ,m\) mean that

$$\begin{aligned} {\bar{u}}_\zeta ^T A_\zeta {\bar{u}}_\zeta \le \rho _\zeta , \; \zeta =1,\ldots ,m. \end{aligned}$$
(16)

Setting \(\alpha ^0_\zeta :=\lambda _\zeta \ge 0, \zeta =1,\ldots ,m\) and \(\alpha ^i_\zeta :=\alpha ^0_\zeta {\bar{u}}^i_\zeta \in \mathbb {R}, i=1,\ldots , s, \zeta =1,\ldots ,m,\) we claim that (9) holds. To see this, let \(\zeta \in \{1,\ldots ,m\}.\) If \(\alpha _\zeta ^0= 0\), then \(\alpha _\zeta ^i=0\) for all \(i=1,\ldots ,s,\) and hence, (9) is trivially valid. If \(\alpha _\zeta ^0\ne 0,\) then it follows from (16) that

$$\begin{aligned} {\hat{u}}_\zeta ^T A_\zeta {\hat{u}}_\zeta \le \rho _\zeta (\alpha _\zeta ^0)^2, \end{aligned}$$

where \({\hat{u}}_\zeta :=(\alpha _\zeta ^1,\ldots ,\alpha _\zeta ^s).\) Since \(A_\zeta =Q_\zeta ^T Q_\zeta \), we arrive at the conclusion that \(\Vert Q_\zeta (\alpha _\zeta ^1,\ldots ,\alpha _\zeta ^s)\Vert \le \sqrt{\rho _\zeta }\alpha _\zeta ^0\), which shows that (9) holds, too. Similarly, by letting \(\mu ^0_l:=\gamma _l\ge 0, l=1,\ldots ,q\) and \(\mu ^j_l:=\mu ^0_l\bar{\omega }^j_l\in \mathbb {R}, j=1,\ldots , r, l=1,\ldots ,q\), we conclude from the relations \(\bar{\omega }_l:=(\bar{\omega }^1_l,\ldots ,\bar{\omega }^r_l)\in \Omega _l, l=1,\ldots ,q\) that (10) also holds.
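The scaling step above, which passes from a multiplier \(\lambda _\zeta \) and a point \({\bar{u}}_\zeta \in U_\zeta \) to multipliers satisfying (9), can be sketched numerically (the data below are toy values of our own choosing):

```python
import numpy as np

def soc_condition(Q, alpha0, alpha, rho):
    """The SOC condition (9): ||Q (alpha^1,...,alpha^s)|| <= sqrt(rho)*alpha^0.
    Condition (10) has the same shape with Q -> E_l and rho -> tau_l."""
    return np.linalg.norm(Q @ alpha) <= np.sqrt(rho) * alpha0 + 1e-12

# Toy data: an ellipsoid point u_bar and a multiplier lambda.
A, rho = np.array([[2.0, 0.0], [0.0, 3.0]]), 1.0
Q = np.linalg.cholesky(A).T          # A = Q^T Q
u_bar = np.array([0.5, 0.2])         # u_bar^T A u_bar = 0.62 <= rho
lam = 0.7
alpha0, alpha = lam, lam * u_bar     # alpha^i := alpha^0 * u_bar^i
assert soc_condition(Q, alpha0, alpha, rho)

lam = 0.0                            # the degenerate case alpha^0 = 0
assert soc_condition(Q, lam, lam * u_bar, rho)
```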

Next, by noting the definitions of functions \(f_\zeta , \zeta =1,\ldots ,m\) and \(g_l, l=1,\ldots ,q\) in (2), we obtain that \(\lambda _\zeta f_\zeta (x,{\bar{u}}_\zeta )=\alpha ^0_\zeta p^0_\zeta (x)+\sum \limits _{i=1}^s\alpha ^i_\zeta p^i_\zeta (x)\) and \(\gamma _lg_l(x,\bar{\omega }_l)=\mu ^0_lh^0_l(x)+\sum \limits _{j=1}^r\mu ^j_lh^j_l(x),\;l=1,\ldots ,q,\) for all \(x\in \mathbb {R}^n.\) This together with the definition of \(\sigma \) yields

$$\begin{aligned} \sum _{\zeta =1}^m\big (\alpha ^0_\zeta p^0_\zeta +\sum _{i=1}^s\alpha ^i_\zeta p^i_\zeta \big )+\sum _{l=1}^q\big (\mu ^0_lh^0_l+\sum _{j=1}^r\mu ^j_lh^j_l\big )-\sum \limits _{\zeta =1}^m \alpha ^0_\zeta F_\zeta ({\bar{x}})=\sigma , \end{aligned}$$

which shows that (8) is satisfied. Therefore, the proof of (i) is complete.

(ii) Assume that there exist \(\mu ^0_l\in \mathbb {R}_+, \mu ^j_l\in \mathbb {R}, l=1,\ldots ,q, j=1,\ldots ,r\) and \((\alpha ^0_1,\ldots ,\alpha ^0_m)\in \mathbb {R}^m_+{\setminus }\{0\}, \alpha ^i_\zeta \in \mathbb {R}, \zeta =1,\ldots ,m, i=1,\ldots ,s\) such that (8), (9) and (10) hold.

Consider any \(\zeta \in \{1,\ldots ,m\}\). We claim by (9) that if \(\alpha _\zeta ^0=0\), then \(\alpha _\zeta ^i=0\) for all \(i=1,\ldots ,s.\) Assume on the contrary that \(\alpha _\zeta ^0=0\) but there exists \(i_0\in \{1,\ldots ,s\}\) with \(\alpha _\zeta ^{i_0}\ne 0.\) In this case, (9) entails that \(\Vert Q_\zeta w_\zeta \Vert = 0\), where \(w_\zeta :=(\alpha ^1_\zeta ,\dots ,\alpha ^s_\zeta )\ne 0.\) This yields a contradiction inasmuch as \( \Vert Q_\zeta w_\zeta \Vert ^2=w^T_\zeta A_\zeta w_\zeta >0\), where the strict inequality holds by virtue of \(A_\zeta \succ 0\). So, our claim is valid. Similarly, we assert by (10) that if \(\mu _l^0=0\) for some \(l\in \{1,\ldots ,q\}\), then \(\mu _l^j=0\) for all \(j=1,\ldots ,r.\)

Now, denote \({\hat{u}}_\zeta :=({\hat{u}}_\zeta ^1,\ldots ,{\hat{u}}_\zeta ^s), \zeta =1,\ldots , m\) and \(\hat{\omega }_l:=(\hat{\omega }_l^1,\ldots ,\hat{\omega }_l^r), l=1,\ldots ,q\), where

$$\begin{aligned} {\hat{u}}^i_\zeta :={\left\{ \begin{array}{ll} 0 &{} \text{ if } \alpha _\zeta ^0=0,\\ \frac{\alpha _\zeta ^i}{\alpha _\zeta ^0} &{} \text{ if } \alpha _\zeta ^0\ne 0,\end{array}\right. }\; i=1,\ldots , s,\; \hat{\omega }^j_l:={\left\{ \begin{array}{ll}0 &{} \text{ if } \mu _l^0=0,\\ \frac{\mu _l^j}{\mu _l^0} &{} \text{ if } \mu _l^0\ne 0,\end{array}\right. }\;\; j=1,\ldots , r. \end{aligned}$$

It follows from (9) and (10) that

$$\begin{aligned} {\hat{u}}_\zeta ^T A_\zeta {\hat{u}}_\zeta \le \rho _\zeta , \; \zeta =1,\ldots ,m,\; \hat{\omega }_l^T B_l\hat{\omega }_l\le \tau _l,\; l=1,\ldots ,q, \end{aligned}$$

which show that \({\hat{u}}_\zeta \in U_\zeta , \zeta =1,\ldots ,m\) and \(\hat{\omega }_l\in \Omega _l, l=1,\ldots ,q.\) Then, for any \(x\in \mathbb {R}^n,\)

$$\begin{aligned} \alpha ^0_\zeta p^0_\zeta (x)+\sum _{i=1}^s\alpha ^i_\zeta p^i_\zeta (x)=&\alpha ^0_\zeta p_\zeta ^0(x)+\sum \limits _{i=1}^s(\alpha _\zeta ^0{\hat{u}}^i_\zeta )p^i_\zeta (x)\\ =&\alpha _\zeta ^0\big [p^0_\zeta (x)+\sum \limits _{i=1}^s{\hat{u}}^i_\zeta p^i_\zeta (x)\big ]=\alpha _\zeta ^0f_\zeta (x,{\hat{u}}_\zeta ),\;\zeta =1,\ldots ,m, \end{aligned}$$

where we note that if \(\alpha _\zeta ^0=0\) for some \(\zeta \in \{1,\ldots ,m\}\), then \(\alpha _\zeta ^i=0\) for all \(i=1,\ldots ,s,\) as shown above. Similarly, \(\mu ^0_lh^0_l(x)+\sum \limits _{j=1}^r\mu ^j_lh^j_l(x)=\mu ^0_lg_l(x,\hat{\omega }_l),\;l=1,\ldots ,q\) for each \(x\in \mathbb {R}^n.\)
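Conversely to the scaling step in the proof of (i), the recovery of \({\hat{u}}_\zeta \) from multipliers satisfying (9), used above, can be sketched as follows (again with toy data of our own choosing):

```python
import numpy as np

def recover(alpha0, alpha, A, rho):
    """u_hat^i = 0 if alpha^0 = 0, else alpha^i / alpha^0; when (9) holds,
    u_hat lies in the ellipsoid {u : u^T A u <= rho}."""
    u_hat = np.zeros_like(alpha) if alpha0 == 0.0 else alpha / alpha0
    assert u_hat @ A @ u_hat <= rho + 1e-12
    return u_hat

A, rho = np.array([[2.0, 0.0], [0.0, 3.0]]), 1.0
Q = np.linalg.cholesky(A).T                     # A = Q^T Q
alpha0, alpha = 0.7, np.array([0.35, 0.14])
assert np.linalg.norm(Q @ alpha) <= np.sqrt(rho) * alpha0   # (9) holds
u_hat = recover(alpha0, alpha, A, rho)
assert np.allclose(u_hat, [0.5, 0.2])           # u_hat^T A u_hat = 0.62 <= 1
```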

Therefore, we use (8) to find \(\sigma _0\in \textbf{SDSOS}_\eta [x]\) such that

$$\begin{aligned} \sum _{\zeta =1}^m \alpha ^0_\zeta f_\zeta (x,{\hat{u}}_\zeta )=\sigma _0(x)-\sum \limits _{l=1}^q\mu _l^0g_l(x,\hat{\omega }_l)+\sum _{\zeta =1}^m \alpha ^0_\zeta F_\zeta (\bar{x}),\quad \forall x\in \mathbb {R}^n. \end{aligned}$$
(17)

Let \({\tilde{x}}\in \mathbb {R}^n\) be an arbitrary robust feasible point of problem (U). Note here that \(\sigma _0(\tilde{x})-\sum \limits _{l=1}^q\mu _l^0g_l({\tilde{x}},{\hat{\omega }}_l)\ge 0\) inasmuch as \(\sigma _0\) is an SDSOS polynomial, so \(\sigma _0({\tilde{x}})\ge 0\), while \(\mu _l^0\ge 0\) and \( g_l({\tilde{x}},\hat{\omega }_l)\le 0, l=1,\ldots ,q.\) Then, we evaluate (17) at \({\tilde{x}}\) to obtain that \(\sum \limits _{\zeta =1}^m \alpha ^0_\zeta f_\zeta ({\tilde{x}},{\hat{u}}_\zeta )\ge \sum \limits _{\zeta =1}^m \alpha ^0_\zeta F_\zeta ({\bar{x}})\) and thus,

$$\begin{aligned} \sum _{\zeta =1}^m \alpha ^0_\zeta F_\zeta ({\tilde{x}})\ge \sum _{\zeta =1}^m \alpha ^0_\zeta F_\zeta ({\bar{x}}), \end{aligned}$$
(18)

due to the fact that \(\alpha _\zeta ^0\ge 0, F_\zeta ({\tilde{x}})\ge f_\zeta ({\tilde{x}},{\hat{u}}_\zeta )\) for \(\zeta =1,\ldots ,m.\)

Note by (18) that \(F({\tilde{x}})-F({\bar{x}})\notin -\mathrm{int\,} \mathbb {R}^m_+\) because of \((\alpha ^0_1,\ldots ,\alpha ^0_m)\in \mathbb {R}^m_+\setminus \{0\}.\) So, \(\bar{x}\) is a robust weak Pareto solution of problem (U).

(iii) Assume that there exist \(\mu ^0_l\in \mathbb {R}_+, \mu ^j_l\in \mathbb {R}, l=1,\ldots ,q, j=1,\ldots ,r\) and \((\alpha ^0_1,\ldots ,\alpha ^0_m)\in \mathrm{int\,}\mathbb {R}^m_+, \alpha ^i_\zeta \in \mathbb {R}, \zeta =1,\ldots ,m, i=1,\ldots ,s\) such that (8), (9) and (10) hold. Following the same lines as in the proof of (ii), we arrive at the assertion that

$$\begin{aligned} \sum _{\zeta =1}^m \alpha ^0_\zeta F_\zeta ({\tilde{x}})\ge \sum _{\zeta =1}^m \alpha ^0_\zeta F_\zeta ({\bar{x}}) \end{aligned}$$
(19)

for each given robust feasible point \({\tilde{x}}\) of problem (U). We claim that \({\bar{x}}\) is a robust Pareto solution of problem (U). Otherwise, there exists a robust feasible point \(x^*\) of problem (U) such that \(F(x^*)-F(\bar{x})\in -\mathbb {R}^m_+{\setminus }\{0\}.\) Moreover, by \((\alpha ^0_1,\ldots ,\alpha ^0_m)\in \mathrm{int\,}\mathbb {R}^m_+,\) we see that \(\sum _{\zeta =1}^m \alpha ^0_\zeta F_\zeta (x^*)-\sum _{\zeta =1}^m \alpha ^0_\zeta F_\zeta ({\bar{x}})<0,\) which contradicts the assertion in (19). So, our claim is true and the proof is complete. \(\square \)

Remark 1

The Slater condition (7) is important for obtaining necessary SOC conditions. The following example shows that the conclusion of (i) in Theorem 1 may fail if the Slater condition does not hold.

Example 1

(The importance of the Slater condition) Consider the robust multiobjective convex polynomial optimization problem

$$\begin{aligned} \min _{x\in \mathbb {R}}\big \{\big (\max \limits _{u\in U}\{&x^8+(3-u^2)x^2-2x+1\},\max \limits _{u\in U}\{(2-u^1)x^2-3x-1\}\big )\mid x^8+2x^2\le 0,\\&(1-\omega ^2)x^8+(1-\omega ^1)x^2 -1\le 0, \forall \omega :=(\omega ^1,\omega ^2)\in \Omega \big \}, \end{aligned}$$
(PE1)

where \(U:=\{u:=(u^1,u^2)\in \mathbb {R}^2\mid \frac{(u^1)^2}{2}+\frac{(u^2)^2}{3}\le 1\}\) and \(\Omega :=\{\omega :=(\omega ^1,\omega ^2)\in \mathbb {R}^2\mid (\omega ^1)^{2}+(\omega ^2)^{2} \le 1\}.\) The problem (PE1) can be expressed in terms of problem (R), where the ellipsoids \(U_1:=U_2:=U\) and \(\Omega _1:=\Omega _2:=\Omega \) are described respectively by

$$\begin{aligned} A_1:=A_2:= \left( \begin{array}{cc} \frac{1}{2} &{} 0 \\ 0&{}\frac{1}{3}\\ \end{array} \right) ,\quad B_1:=B_2:= \left( \begin{array}{cc } 1 &{} 0 \\ 0&{} 1\\ \end{array} \right) , \end{aligned}$$

\(\rho _1:=\rho _2:=\tau _1:=\tau _2:=1\) and the bi-functions \(f_\zeta , \zeta =1,2, g_l, l=1,2\) are given by \(f_\zeta (x,u):=p^0_\zeta (x)+\sum \limits ^{2}_{i=1}u^ip^i_\zeta (x)\) for \( u:=(u^1, u^2)\in \mathbb {R}^2\) with \(p^0_1(x):=x^8+3x^2-2x+1, p^1_1(x):=0, p^2_1(x):=-x^2, p^0_2(x):=2x^2-3x-1, p^1_2(x):=-x^2, p^2_2(x):=0\) for \(x\in \mathbb {R}\) and \( g_l(x,\omega ):=h^0_l(x)+\sum \limits ^{2}_{j=1}\omega ^jh^j_l(x)\) for \(\omega :=(\omega ^1,\omega ^2)\in \mathbb {R}^2\) with \( h^0_1(x):=x^8+2x^2, h^1_1(x):=h^2_1(x):=0, h^0_2(x):=x^8+x^2-1, h^1_2(x):=-x^2, h^2_2(x):=-x^8\) for \(x\in \mathbb {R}\).
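With this data, both objectives lose their dependence on \(u\) at \({\bar{x}}=0\), since every \(p^i_\zeta \) with \(i\ge 1\) vanishes there; a quick numeric sanity check that the worst-case values at \({\bar{x}}=0\) are 1 and \(-1\):

```python
import numpy as np

def f1(x, u):          # x^8 + (3 - u2) x^2 - 2 x + 1
    return x ** 8 + (3.0 - u[1]) * x ** 2 - 2.0 * x + 1.0

def f2(x, u):          # (2 - u1) x^2 - 3 x - 1
    return (2.0 - u[0]) * x ** 2 - 3.0 * x - 1.0

# Sample the ellipsoid U = {u : (u1)^2/2 + (u2)^2/3 <= 1} and confirm
# that at xbar = 0 the worst-case objective values are 1 and -1.
rng = np.random.default_rng(1)
U = rng.uniform(-2.0, 2.0, size=(5000, 2))
U = U[U[:, 0] ** 2 / 2.0 + U[:, 1] ** 2 / 3.0 <= 1.0]
assert np.isclose(max(f1(0.0, u) for u in U), 1.0)
assert np.isclose(max(f2(0.0, u) for u in U), -1.0)
```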

It is clear that \( f_\zeta (\cdot ,u), u\in U, \zeta =1,2\) and \( g_l(\cdot ,\omega ), \omega \in \Omega , l=1,2\) are 1st-SDSOS-convex polynomials and that \({\bar{x}}:=0\) is a Pareto solution of the robust problem (PE1). However, we assert that the SOC condition given in (8) is not valid at \({\bar{x}}.\) To see this, suppose on the contrary that there exist \((\alpha ^0_1,\alpha ^0_2)\in \mathbb {R}^2_+{\setminus }\{0\}, \alpha ^i_\zeta \in \mathbb {R}, \zeta =1,2, i=1,2,\) \(\mu _l^0\in \mathbb {R}_+, \mu _l^j\in \mathbb {R}, l=1,2, j=1,2\) and \(\sigma \in \textbf{SDSOS}_8[x]\) such that

$$\begin{aligned}&\sum _{\zeta =1}^2(\alpha ^0_\zeta p^0_\zeta +\sum _{i=1}^2\alpha ^i_\zeta p^i_\zeta )+\sum _{l=1}^2( \mu ^0_lh^0_l+\sum _{j=1}^2\mu ^j_lh^j_l) -\sum _{\zeta =1}^2\alpha ^0_\zeta F_\zeta ({\bar{x}})=\sigma , \end{aligned}$$
(20)

where \(F_1({\bar{x}}):=\max \limits _{u\in U}\{p^0_1(\bar{x})+\sum \limits _{i=1}^2u^ip^i_1({\bar{x}})\}=1\) and \(F_2(\bar{x}):=\max \limits _{u\in U}\{p^0_2(\bar{x})+\sum \limits _{i=1}^2u^ip^i_2({\bar{x}})\}=-1\). We rearrange (20) and obtain that

$$\begin{aligned}&(\alpha ^0_1+ \mu ^0_1+\mu ^0_2-\mu ^2_2)x^8+(3\alpha ^0_1+2\alpha ^0_2-\alpha ^2_1-\alpha ^1_2+2\mu ^0_1+\mu ^0_2-\mu ^1_2)x^2-(2\alpha ^0_1+3\alpha ^0_2)x\\&=\mu ^0_2+\sigma (x)\; \text{ for } \text{ all } \; x\in \mathbb {R}. \end{aligned}$$

For each \(x\in \mathbb {R},\) since \(\mu ^0_2\ge 0\) and \(\sigma (x)\ge 0\), it follows that

$$\begin{aligned} (\alpha ^0_1+ \mu ^0_1+\mu ^0_2-\mu ^2_2)x^8\ge (2\alpha ^0_1+3\alpha ^0_2)x-(3\alpha ^0_1+2\alpha ^0_2-\alpha ^2_1-\alpha ^1_2+2\mu ^0_1+\mu ^0_2-\mu ^1_2)x^2. \end{aligned}$$
(21)

Considering \(x_\zeta :=\frac{1}{\zeta }\), where \(\zeta \in \mathbb {N}\), we deduce from (21) that

$$\begin{aligned} \alpha ^0_1+ \mu ^0_1+\mu ^0_2-\mu ^2_2&\ge \zeta ^6[(2\alpha ^0_1+3\alpha ^0_2)\zeta -(3\alpha ^0_1+2\alpha ^0_2-\alpha ^2_1-\alpha ^1_2+2\mu ^0_1+\mu ^0_2-\mu ^1_2)]\\&\ge (2\alpha ^0_1+3\alpha ^0_2)\zeta -(3\alpha ^0_1+2\alpha ^0_2-\alpha ^2_1-\alpha ^1_2+2\mu ^0_1+\mu ^0_2-\mu ^1_2) \end{aligned}$$

for \(\zeta \) large enough. This means that

$$\begin{aligned} \alpha ^0_1+ \mu ^0_1+\mu ^0_2-\mu ^2_2\ge (2\alpha ^0_1+3\alpha ^0_2)\zeta -(3\alpha ^0_1+2\alpha ^0_2-\alpha ^2_1-\alpha ^1_2+2\mu ^0_1+\mu ^0_2-\mu ^1_2) \end{aligned}$$
(22)

for all \(\zeta \) large enough. Now, letting \(\zeta \rightarrow \infty \) in (22), we arrive at a contradiction. Consequently, the result of (i) in Theorem 1 is not true in this setting. The reason is that the Slater condition fails for this problem: the first constraint function satisfies \(g_1(x,\omega )=x^8+2x^2\ge 0\) for all \(x\in \mathbb {R}\), so no Slater point exists.

As shown in [13, Example 2.7], the 1st-SDSOS-convexity assumption in our framework is also important for the validity of the SOC conditions. That is, the conclusion of (i) in Theorem 1 may fail for a robust multiobjective polynomial problem whose related functions are convex (but not 1st-SDSOS-convex) polynomials, even though the Slater condition holds.

We now show that the SOC conditions in (8), (9) and (10) are equivalent to the robust Karush-Kuhn-Tucker (KKT) condition of problem (U). To this end, we say that the robust KKT condition is valid at \({\bar{x}}\in C\) if there exist \( \bar{\omega }_l\in \Omega _l, l=1,\ldots ,q\) and \(\alpha :=(\alpha _1,\ldots ,\alpha _m)\in \mathbb {R}^m_+{\setminus }\{0\}, \lambda :=(\lambda _1,\ldots ,\lambda _q)\in \mathbb {R}^q_+\), \({\bar{u}}_\zeta \in U_\zeta , \zeta =1,\ldots ,m\) such that

$$\begin{aligned}&\sum \limits _{\zeta =1}^m\alpha _\zeta \nabla _1 f_\zeta ({\bar{x}},\bar{u}_\zeta )+\sum \limits _{l=1}^q\lambda _l\nabla _1g_l({\bar{x}},\bar{\omega }_l)=0,\nonumber \\&\sum \limits _{\zeta =1}^m \alpha _\zeta f_\zeta ({\bar{x}},{\bar{u}}_\zeta )- \sum \limits _{\zeta =1}^m \alpha _\zeta F_\zeta ({\bar{x}})=0,\; \lambda _l g_l({\bar{x}},\bar{\omega }_l)=0,\; l=1,\ldots ,q, \end{aligned}$$
(23)

where \(\nabla _1f_\zeta \) (resp., \(\nabla _1g_l\)) denotes the derivative of \(f_\zeta \) (resp., \(g_l\)) with respect to the first variable and \(F_\zeta ({\bar{x}}):=\max \limits _{u_\zeta \in U_\zeta }f_\zeta ({\bar{x}},u_\zeta )\) for \( \zeta =1,\ldots ,m\). Note that if \(f_\zeta (\cdot ,u_\zeta ):=p^0_\zeta \) for all \( u_\zeta \in U_\zeta , \zeta =1,\ldots ,m\) (i.e., there is no uncertainty on the objective functions), then the above robust KKT condition reduces to the one examined in [11] for a class of SOS-convex polynomials, and in [9] for a more general class of locally Lipschitz nonsmooth functions.
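For illustration only, the robust KKT system (23) can be checked numerically on a toy instance. The sketch below uses hypothetical data (a single objective with interval uncertainty and a single uncertain constraint), not the paper's setting.

```python
# Hedged numerical illustration of the robust KKT system (23) for a toy
# single-objective instance (m = q = 1); all data below are hypothetical.
# f(x, u) = x^2 + u*x with u in U = [-1, 1]      (objective uncertainty)
# g(x, w) = -x + w  with w in Omega = [-0.5, 0.5] (constraint uncertainty)
# Robust feasible set: x >= 0.5;  F(x) = max_u f(x, u) = x^2 + |x|.

def f(x, u):  return x**2 + u * x
def g(x, w):  return -x + w
def d1_f(x, u):  return 2 * x + u      # derivative in the first variable
def d1_g(x, w):  return -1.0
def F(x):  return x**2 + abs(x)        # worst-case objective

x_bar = 0.5                 # candidate robust minimizer
u_bar, w_bar = 1.0, 0.5     # worst-case objective / active constraint scenarios
alpha, lam = 1.0, 2.0       # multipliers (alpha > 0, lam >= 0)

stationarity = alpha * d1_f(x_bar, u_bar) + lam * d1_g(x_bar, w_bar)
worst_case_match = alpha * f(x_bar, u_bar) - alpha * F(x_bar)
complementarity = lam * g(x_bar, w_bar)

assert abs(stationarity) < 1e-12
assert abs(worst_case_match) < 1e-12
assert abs(complementarity) < 1e-12
print("robust KKT conditions (23) hold at x_bar =", x_bar)
```

Here the worst-case scenario \(\bar{u}=1\) attains \(F(\bar{x})\) and the constraint scenario \(\bar{\omega }=0.5\) makes the constraint active, as (23) requires.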

Theorem 2

(SOC and robust KKT conditions) Let \({\bar{x}}\in \mathbb {R}^n\) be a robust feasible point of problem (U). Then, the following conditions are equivalent:

  1. (i)

    The SOC conditions given in (8), (9) and (10) hold.

  2. (ii)

    The robust KKT condition is valid at \({\bar{x}}.\)

  3. (iii)

    There exist \(\mu ^0_l\in \mathbb {R}_+, \mu ^j_l\in \mathbb {R}, l=1,\ldots ,q, j=1,\ldots ,r\) and \((\alpha ^0_1,\ldots ,\alpha ^0_m)\in \mathbb {R}^m_+{\setminus }\{0\}, \alpha ^i_\zeta \in \mathbb {R}, \zeta =1,\ldots ,m, i=1,\ldots ,s\) such that

$$\begin{aligned}&\sum _{\zeta =1}^m\big (\alpha ^0_\zeta \nabla p^0_\zeta (\bar{x})+\sum _{i=1}^s\alpha ^i_\zeta \nabla p^i_\zeta (\bar{x})\big )+\sum _{l=1}^q\big (\mu ^0_l\nabla h^0_l(\bar{x})+\sum _{j=1}^r\mu ^j_l\nabla h^j_l({\bar{x}})\big )=0, \nonumber \\&\sum _{\zeta =1}^m\big (\alpha ^0_\zeta p^0_\zeta (\bar{x})+\sum _{i=1}^s\alpha ^i_\zeta p^i_\zeta ({\bar{x}})\big )- \sum \limits _{\zeta =1}^m \alpha ^0_\zeta F_\zeta ({\bar{x}})=0,\; \mu ^0_lh^0_l({\bar{x}})+\sum _{j=1}^r\mu ^j_lh^j_l({\bar{x}})=0,\; l=1,\ldots ,q, \end{aligned}$$
(24)
$$\begin{aligned}&\Vert Q_\zeta (\alpha _\zeta ^1,\ldots ,\alpha _\zeta ^s)\Vert \le \sqrt{\rho _\zeta }\alpha _\zeta ^0,\; \zeta =1,\ldots ,m, \end{aligned}$$
(25)
$$\begin{aligned}&\Vert E_l(\mu _l^1,\ldots ,\mu _l^r)\Vert \le \sqrt{\tau _l}\mu _l^0,\;l=1,\dots ,q, \end{aligned}$$
(26)

where \(F_\zeta ({\bar{x}}):=\max \limits _{u_\zeta \in U_\zeta }f_\zeta (\bar{x},u_\zeta )\) for \( \zeta =1,\ldots ,m\).

Proof

[(i) \(\Longrightarrow \) (ii)] Suppose that the SOC conditions given in (8), (9) and (10) are valid. Following the proof of (ii) in Theorem 1, we observe by (9) and (10) that there exist \({\hat{u}}_\zeta \in U_\zeta , \zeta =1,\ldots ,m\) and \(\hat{\omega }_l\in \Omega _l, l=1,\ldots ,q\) such that

$$\begin{aligned}&\alpha ^0_\zeta p^0_\zeta (x)+\sum _{i=1}^s\alpha ^i_\zeta p^i_\zeta (x)=\alpha _\zeta ^0f_\zeta (x,{\hat{u}}_\zeta ),\;\forall x\in \mathbb {R}^n, \;\zeta =1,\ldots ,m,\nonumber \\&\mu ^0_lh^0_l(x)+\sum \limits _{j=1}^r\mu ^j_lh^j_l(x)=\mu ^0_lg_l(x,\hat{\omega }_l),\; \forall x\in \mathbb {R}^n,\;l=1,\ldots ,q. \end{aligned}$$
(27)

Now, we use (8) to find \(\sigma \in \textbf{SDSOS}_\eta [x]\) such that

$$\begin{aligned} \sum _{\zeta =1}^m \alpha ^0_\zeta f_\zeta (x,{\hat{u}}_\zeta )-\sum _{\zeta =1}^m \alpha ^0_\zeta F_\zeta (\bar{x})=\sigma (x)-\sum \limits _{l=1}^q\mu _l^0g_l(x,\hat{\omega }_l),\quad \forall x\in \mathbb {R}^n, \end{aligned}$$
(28)

where \(F_\zeta ({\bar{x}}):=\max \limits _{u_\zeta \in U_\zeta }f_\zeta (\bar{x},u_\zeta )\) for \( \zeta =1,\ldots ,m.\) Observe that \(\sigma (\bar{x})-\sum \limits _{l=1}^q\mu _l^0g_l({\bar{x}},\hat{\omega }_l)\ge 0\), since \(\sigma \) is an SDSOS polynomial and hence \(\sigma (\bar{x})\ge 0\), while \(\mu _l^0g_l({\bar{x}},\hat{\omega }_l)\le 0, l=1,\ldots ,q,\) as \({\bar{x}}\) is a robust feasible point of problem (U). Evaluating (28) at \({\bar{x}}\), we obtain that \(\sum \limits _{\zeta =1}^m \alpha ^0_\zeta f_\zeta ({\bar{x}},{\hat{u}}_\zeta )\ge \sum \limits _{\zeta =1}^m \alpha ^0_\zeta F_\zeta ({\bar{x}})\) and thus,

$$\begin{aligned} \sum \limits _{\zeta =1}^m \alpha ^0_\zeta f_\zeta ({\bar{x}},{\hat{u}}_\zeta )= \sum _{\zeta =1}^m \alpha ^0_\zeta F_\zeta ({\bar{x}}) \end{aligned}$$
(29)

due to the fact that \(\alpha _\zeta ^0\ge 0, F_\zeta ({\bar{x}})\ge f_\zeta ({\bar{x}},{\hat{u}}_\zeta )\) for \(\zeta =1,\ldots ,m.\) Substituting \(x:={\bar{x}}\) into (28) and taking (29) into account, we arrive at \(\sigma (\bar{x})-\sum \limits _{l=1}^q\mu _l^0g_l({\bar{x}},\hat{\omega }_l)=0\). This ensures that \(\sigma ({\bar{x}})=0\) and

$$\begin{aligned} \mu _l^0g_l({\bar{x}},\hat{\omega }_l)= 0, \; l=1,\ldots ,q. \end{aligned}$$
(30)

Therefore, \(\sigma (x)\ge \sigma ({\bar{x}}), \forall x\in \mathbb {R}^n\), which means that \({\bar{x}}\) is a minimizer of \(\sigma \) on \(\mathbb {R}^n.\) It follows that \(\nabla \sigma ({\bar{x}})=0\), and so we get by (28) that

$$\begin{aligned} \sum _{\zeta =1}^m \alpha ^0_\zeta \nabla _1 f_\zeta ({\bar{x}},{\hat{u}}_\zeta )+\sum \limits _{l=1}^q\mu ^0_l\nabla _1g_l({\bar{x}},\hat{\omega }_l)=0 \end{aligned}$$

which together with (29) and (30) shows that the robust KKT condition is valid at \({\bar{x}}.\)

[(ii) \(\Longrightarrow \) (iii)] Suppose that the robust KKT condition is valid at \({\bar{x}}\). This means that there exist \(\alpha :=(\alpha _1,\ldots ,\alpha _m)\in \mathbb {R}^m_+{\setminus }\{0\}, \lambda :=(\lambda _1,\ldots ,\lambda _q)\in \mathbb {R}^q_+\) and \(\bar{u}_\zeta :=({\bar{u}}^1_\zeta ,\ldots ,{\bar{u}}^s_\zeta )\in U_\zeta , \zeta =1,\ldots ,m, \bar{\omega }_l:=(\bar{\omega }^1_l,\ldots ,\bar{\omega }^r_l)\in \Omega _l, l=1,\ldots ,q,\) such that (23) is valid. Let \(\alpha ^0_\zeta :=\alpha _\zeta \ge 0, \zeta =1,\ldots ,m, \alpha ^i_\zeta :=\alpha ^0_\zeta {\bar{u}}^i_\zeta \in \mathbb {R}, \zeta =1,\ldots ,m, i=1,\ldots , s\) and \(\mu ^0_l:=\lambda _l\ge 0, l=1,\ldots ,q, \mu ^j_l:=\mu ^0_l\bar{\omega }^j_l\in \mathbb {R}, l=1,\ldots ,q, j=1,\ldots , r\). Arguing as in the proof of Theorem 1, we show that (25) and (26) hold and that

$$\begin{aligned}&\alpha _\zeta f_\zeta (x,\bar{u}_\zeta )=\alpha ^0_\zeta p^0_\zeta (x)+\sum \limits _{i=1}^s\alpha ^i_\zeta p^i_\zeta (x),\;\forall x\in \mathbb {R}^n,\;\zeta =1,\ldots ,m,\\&\lambda _lg_l(x,\bar{\omega }_l)=\mu ^0_lh^0_l(x)+\sum \limits _{j=1}^r\mu ^j_lh^j_l(x),\;\forall x\in \mathbb {R}^n, \;l=1,\ldots ,q. \end{aligned}$$

This together with (23) proves that (24) is valid.

[(iii) \(\Longrightarrow \) (i)] Let \(\mu ^0_l\in \mathbb {R}_+, \mu ^j_l\in \mathbb {R}, l=1,\ldots ,q, j=1,\ldots ,r\) and \((\alpha ^0_1,\ldots ,\alpha ^0_m)\in \mathbb {R}^m_+{\setminus }\{0\}, \alpha ^i_\zeta \in \mathbb {R}, \zeta =1,\ldots ,m, i=1,\ldots ,s\) be such that (24), (25) and (26) hold. As above, we use (25) and (26) to find \({\hat{u}}_\zeta \in U_\zeta , \zeta =1,\ldots ,m\) and \(\hat{\omega }_l\in \Omega _l, l=1,\ldots ,q\) such that

$$\begin{aligned}&\alpha ^0_\zeta p^0_\zeta (x)+\sum _{i=1}^s\alpha ^i_\zeta p^i_\zeta (x)=\alpha _\zeta ^0f_\zeta (x,{\hat{u}}_\zeta ),\;\forall x\in \mathbb {R}^n, \;\zeta =1,\ldots ,m,\nonumber \\&\mu ^0_lh^0_l(x)+\sum \limits _{j=1}^r\mu ^j_lh^j_l(x)=\mu ^0_lg_l(x,\hat{\omega }_l),\; \forall x\in \mathbb {R}^n,\;l=1,\ldots ,q. \end{aligned}$$
(31)

Consider a function \(d:\mathbb {R}^n\rightarrow \mathbb {R}\) defined by

$$\begin{aligned} d(x):=\sum \limits _{\zeta =1}^m\alpha ^0_\zeta f_\zeta (x,{\hat{u}}_\zeta )+\sum \limits _{l=1}^q\mu ^0_lg_l(x,\hat{\omega }_l)-\sum \limits _{\zeta =1}^m \alpha ^0_\zeta F_\zeta (\bar{x}),\quad x\in \mathbb {R}^n. \end{aligned}$$

As \( f_\zeta (\cdot ,{\hat{u}}_\zeta ), \zeta =1,\ldots ,m\) and \( g_l(\cdot ,\hat{\omega }_l), l=1,\ldots ,q\) are 1st-SDSOS-convex polynomials, d is 1st-SDSOS-convex and hence convex. We deduce from (24) and (31) that

$$\begin{aligned} \nabla d({\bar{x}})=0,\; \sum \limits _{\zeta =1}^m \alpha ^0_\zeta f_\zeta ({\bar{x}},{\hat{u}}_\zeta )- \sum \limits _{\zeta =1}^m \alpha ^0_\zeta F_\zeta ({\bar{x}})=0,\; \mu ^0_lg_l({\bar{x}},\hat{\omega }_l)=0,\; l=1,\ldots , q. \end{aligned}$$

Then, the convexity of d entails that \(d(x)\ge d({\bar{x}})=0\) for all \(x\in \mathbb {R}^n\). This together with the 1st-SDSOS-convexity of d guarantees that \(d\in \textbf{SDSOS}_\eta [x]\) (cf. [8, Proposition 5.3]). Taking (31) into account, we arrive at

$$\begin{aligned} \sum _{\zeta =1}^m\big (\alpha ^0_\zeta p^0_\zeta +\sum _{i=1}^s\alpha ^i_\zeta p^i_\zeta \big )+\sum _{l=1}^q\big (\mu ^0_lh^0_l+\sum _{j=1}^r\mu ^j_lh^j_l\big )- \sum _{\zeta =1}^m \alpha ^0_\zeta F_\zeta ({\bar{x}}) \in \textbf{SDSOS}_\eta [x]. \end{aligned}$$

Consequently, the assertion in (i) holds. The proof is complete. \(\square \)

Remark 2

  1. (i)

    The result in Theorem 2 provides new characterizations for the robust KKT condition via the proposed SOC conditions in the setting of 1st-SDSOS-convex polynomials. The interested reader is referred to [11, Proposition 2.10] for corresponding characterizations for the robust KKT condition in terms of the linear matrix inequality (LMI) conditions in the setting of SOS-convex polynomials.

  2. (ii)

    In the definition of robust KKT condition in (23), if \(\alpha \in \mathbb {R}^m_+\setminus \{0\}\) is replaced by \(\alpha \in \textrm{int}\mathbb {R}^m_+\), then the corresponding condition is called robust strong KKT condition. Using a concept of robust proper Pareto solution for our setting and proceeding similarly as in the proof of Theorem 2, we could obtain some characterizations or at least some sufficient conditions for the fulfillment of the robust strong KKT condition and thus for the validation of the hypothesis (iii) of Theorem 1. This would be an interesting topic for a further study.
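Conditions such as (25) and (26) are plain second-order cone membership tests and can be verified directly. The following minimal sketch uses made-up data (a hypothetical matrix \(Q\) and multipliers), and also recovers the associated worst-case scenario \(\hat{u}\) used in the proofs above.

```python
# A sketch (with made-up data) of checking an SOC condition of the form (25):
#   ||Q (alpha^1,...,alpha^s)|| <= sqrt(rho) * alpha^0.
# When alpha^0 > 0, this certifies that u_hat := (alpha^1,...,alpha^s)/alpha^0
# lies in the ellipsoidal uncertainty set U = {u : u^T A u <= rho}, A = Q^T Q.
import numpy as np

def in_soc(Q, alphas, alpha0, rho):
    """Membership test for the SOC condition ||Q a|| <= sqrt(rho) * a0."""
    return np.linalg.norm(Q @ alphas) <= np.sqrt(rho) * alpha0

Q = np.array([[2.0, 0.0], [0.0, 1.0]])   # A = Q^T Q = diag(4, 1)
rho = 1.0
alpha0 = 1.0
alphas = np.array([0.3, 0.4])

assert in_soc(Q, alphas, alpha0, rho)
u_hat = alphas / alpha0                  # recovered worst-case scenario
assert u_hat @ (Q.T @ Q) @ u_hat <= rho  # i.e., u_hat lies in U
```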

3 Robust duality via second order cone conditions

In this section, we propose a dual multiobjective problem to the robust multiobjective optimization problem (R) and explore duality relations between them. More precisely, we show that robust duality results characterize the SOC conditions as well as the robust KKT condition. In particular, a (weak) Pareto solution of the primal robust multiobjective optimization problem (R) can be found by solving its dual problem, which is formulated in terms of SOC conditions.

In connection with (R), we consider a dual multiobjective problem as follows:

$$\begin{aligned} (\textrm{D})\qquad&\max \; (t_1,\ldots ,t_m)\\ \text{ s.t. }\;&\sum _{\zeta =1}^m\big (\alpha ^0_\zeta p^0_\zeta +\sum _{i=1}^s\alpha ^i_\zeta p^i_\zeta \big )+\sum _{l=1}^q\big (\mu ^0_lh^0_l+\sum _{j=1}^r\mu ^j_lh^j_l\big )-\sum _{\zeta =1}^m \alpha ^0_\zeta t_\zeta \in \textbf{SDSOS}_\eta [x],\\&\Vert Q_\zeta (\alpha _\zeta ^1,\ldots ,\alpha _\zeta ^s)\Vert \le \sqrt{\rho _\zeta }\alpha _\zeta ^0,\; \zeta =1,\ldots ,m,\\&\Vert E_l(\mu _l^1,\ldots ,\mu _l^r)\Vert \le \sqrt{\tau _l}\mu _l^0,\;l=1,\dots ,q,\\&t_\zeta \in \mathbb {R},\; (\alpha ^0_1,\ldots ,\alpha ^0_m)\in \mathbb {R}^m_+{\setminus }\{0\},\; \alpha ^i_\zeta \in \mathbb {R},\; \mu ^0_l\in \mathbb {R}_+,\; \mu ^j_l\in \mathbb {R}. \end{aligned}$$

It is worth noticing here that a Pareto solution (resp., a weak Pareto solution) of a “max” multiobjective problem like the dual problem (\(\textrm{D}\)) is defined similarly as in Definition 1 by replacing \(-\mathbb {R}^m_+\setminus \{0\}\) (resp., \(-\mathrm{int\,} \mathbb {R}^m_+\)) by \(\mathbb {R}^m_+\setminus \{0\} \) (resp., \(\mathrm{int\,} \mathbb {R}^m_+\)). We also recall here the notation \(F:=(F_1,\ldots ,F_m)\) with \(F_\zeta (x):=\max \limits _{u_\zeta \in U_\zeta }f_\zeta (x,u_\zeta ), \zeta =1,\ldots ,m\) for \(x\in \mathbb {R}^n\).

The following theorem provides weak and strong duality relations between the dual problem (\(\textrm{D}\)) and the primal problem (R).

Theorem 3

(i) (Robust weak duality) Let \((t_\zeta ,\alpha ^0_\zeta ,\alpha ^i_\zeta ,\mu ^0_l,\mu ^j_l, \zeta =1,\ldots ,m, i=1,\ldots ,s, j=1,\ldots ,r, l=1,\ldots ,q)\) be a feasible point of problem (\(\textrm{D}\)) and let \({\hat{x}}\) be a feasible point of problem (R). We have

$$\begin{aligned} F({\hat{x}})-(t_1,\ldots ,t_m)\notin -\textrm{int}\mathbb {R}^m_+. \end{aligned}$$
(32)

(ii) (Robust strong dual characterization) Let \({\bar{x}}\) be a weak Pareto solution of problem (R). The robust KKT condition is valid at \({\bar{x}}\) if and only if there exists a weak Pareto solution of problem (\(\textrm{D}\)), say \((\bar{t}_\zeta ,\bar{\alpha }^0_\zeta ,\bar{\alpha }^i_\zeta ,\bar{\mu }^0_l,\bar{\mu }^j_l, \zeta =1,\ldots ,m, i=1,\ldots ,s, j=1,\ldots ,r, l=1,\ldots ,q)\), such that

$$\begin{aligned} F({\bar{x}})=({\bar{t}}_1,\ldots ,{\bar{t}}_m). \end{aligned}$$

Proof

(i) As \((t_\zeta ,\alpha ^0_\zeta ,\alpha ^i_\zeta ,\mu ^0_l,\mu ^j_l, \zeta =1,\ldots ,m, i=1,\ldots ,s, j=1,\ldots ,r, l=1,\ldots ,q)\) is a feasible point of problem (\(\textrm{D}\)), we find \( \sigma \in \textbf{SDSOS}_\eta [x]\) such that

$$\begin{aligned}&\sum _{\zeta =1}^m\big (\alpha ^0_\zeta p^0_\zeta (x)+\sum _{i=1}^s\alpha ^i_\zeta p^i_\zeta (x)\big )+\sum _{l=1}^q\big (\mu ^0_lh^0_l(x)+\sum _{j=1}^r\mu ^j_lh^j_l(x)\big ) -\sum _{\zeta =1}^m \alpha ^0_\zeta t_\zeta =\sigma (x), \quad \forall x\in \mathbb {R}^n,\end{aligned}$$
(33)
$$\begin{aligned}&\Vert Q_\zeta (\alpha _\zeta ^1,\ldots ,\alpha _\zeta ^s)\Vert \le \sqrt{\rho _\zeta }\alpha _\zeta ^0,\; \zeta =1,\ldots ,m, \end{aligned}$$
(34)
$$\begin{aligned}&\Vert E_l(\mu _l^1,\ldots ,\mu _l^r)\Vert \le \sqrt{\tau _l}\mu _l^0,\;l=1,\dots ,q, \end{aligned}$$
(35)

where \( (\alpha ^0_1,\ldots ,\alpha ^0_m)\in \mathbb {R}^m_+\setminus \{0\},\) \(\alpha ^i_\zeta \in \mathbb {R}, t_\zeta \in \mathbb {R}, \zeta =1,\ldots , m, i=1,\ldots , s\) and \( \mu ^0_l\in \mathbb {R}_+, \mu ^j_l\in \mathbb {R}, j=1,\ldots ,r, l=1,\ldots ,q \). As in the proof of Theorem 1(ii), we deduce from (34) and (35) that there exist \({\hat{u}}_\zeta \in U_\zeta , \zeta =1,\ldots ,m\) and \(\hat{\omega }_l\in \Omega _l, l=1,\ldots ,q\) such that

$$\begin{aligned}&\alpha ^0_\zeta p^0_\zeta (x)+\sum _{i=1}^s\alpha ^i_\zeta p^i_\zeta (x)=\alpha _\zeta ^0f_\zeta (x,{\hat{u}}_\zeta ),\;\forall x\in \mathbb {R}^n, \;\zeta =1,\ldots ,m,\\&\mu ^0_lh^0_l(x)+\sum \limits _{j=1}^r\mu ^j_lh^j_l(x)=\mu ^0_lg_l(x,\hat{\omega }_l),\; \forall x\in \mathbb {R}^n,\;l=1,\ldots ,q. \end{aligned}$$

Note further that \(\sigma ({\hat{x}})\ge 0\) as \(\sigma \) is an SDSOS polynomial and that \(g_l({\hat{x}},\hat{\omega }_l)\le 0, l=1,\ldots ,q,\) as \({\hat{x}}\) is a feasible point of problem (R). We now evaluate (33) at \({\hat{x}}\) to arrive at

$$\begin{aligned} \sum _{\zeta =1}^m \alpha ^0_\zeta F_\zeta ({\hat{x}})\ge \sum _{\zeta =1}^m \alpha ^0_\zeta f_\zeta ({\hat{x}}, {\hat{u}}_\zeta )\ge \sum _{\zeta =1}^m \alpha ^0_\zeta t_\zeta , \end{aligned}$$
(36)

where we recall that \(F_\zeta ({\hat{x}}):=\max \limits _{u_\zeta \in U_\zeta }f_\zeta ({\hat{x}},u_\zeta )\ge f_\zeta ({\hat{x}}, {\hat{u}}_\zeta )\) for all \( \zeta =1,\ldots ,m\). Since \( (\alpha ^0_1,\ldots ,\alpha ^0_m)\in \mathbb {R}^m_+\setminus \{0\}\), we conclude by (36) that \(F({\hat{x}})-(t_1,\ldots ,t_m)\notin -\mathrm{int\,} \mathbb {R}^m_+\), i.e., (32) holds.

(ii) Let \({\bar{x}}\) be a weak Pareto solution of problem (R). Suppose that the robust KKT condition is valid at \({\bar{x}}\). By Theorem 2, the SOC conditions hold, i.e., there exist \(\bar{\mu }^0_l\in \mathbb {R}_+, \bar{\mu }^j_l\in \mathbb {R}, l=1,\ldots ,q, j=1,\ldots ,r\) and \((\bar{\alpha }^0_1,\ldots ,\bar{\alpha }^0_m)\in \mathbb {R}^m_+{\setminus }\{0\}, \bar{\alpha }^i_\zeta \in \mathbb {R}, \zeta =1,\ldots ,m, i=1,\ldots ,s\) such that

$$\begin{aligned}&\sum _{\zeta =1}^m\big (\bar{\alpha }^0_\zeta p^0_\zeta +\sum _{i=1}^s\bar{\alpha }^i_\zeta p^i_\zeta \big )+\sum _{l=1}^q\big (\bar{\mu }^0_lh^0_l+\sum _{j=1}^r\bar{\mu }^j_lh^j_l\big )- \sum _{\zeta =1}^m \bar{\alpha }^0_\zeta F_\zeta ({\bar{x}}) \in \textbf{SDSOS}_\eta [x],\\&\Vert Q_\zeta (\bar{\alpha }_\zeta ^1,\ldots ,\bar{\alpha }_\zeta ^s)\Vert \le \sqrt{\rho _\zeta }\bar{\alpha }_\zeta ^0,\; \zeta =1,\ldots ,m,\\&\Vert E_l(\bar{\mu }_l^1,\ldots ,\bar{\mu }_l^r)\Vert \le \sqrt{\tau _l}\bar{\mu }_l^0,\;l=1,\dots ,q, \end{aligned}$$

where \(F_\zeta ({\bar{x}}):=\max \limits _{u_\zeta \in U_\zeta }f_\zeta (\bar{x},u_\zeta )\) for \( \zeta =1,\ldots ,m\). Letting \(\bar{t}_\zeta :=F_\zeta ({\bar{x}}), \zeta =1,\ldots ,m\), it holds that

$$\begin{aligned} F({\bar{x}})=({\bar{t}}_1,\ldots ,{\bar{t}}_m) \end{aligned}$$

and that \((\bar{t}_\zeta ,\bar{\alpha }^0_\zeta ,\bar{\alpha }^i_\zeta ,\bar{\mu }^0_l,\bar{\mu }^j_l, \zeta =1,\ldots ,m, i=1,\ldots ,s, j=1,\ldots ,r, l=1,\ldots ,q)\) is a feasible point of problem (\(\textrm{D}\)).

We assert further that \((\bar{t}_\zeta ,\bar{\alpha }^0_\zeta ,\bar{\alpha }^i_\zeta ,\bar{\mu }^0_l,\bar{\mu }^j_l, \zeta =1,\ldots ,m, i=1,\ldots ,s, j=1,\ldots ,r, l=1,\ldots ,q)\) is a weak Pareto solution of problem (\(\textrm{D}\)). To see this, assume on the contrary that there exists another feasible point of problem (\(\textrm{D}\)), say \((\tilde{t}_\zeta ,\tilde{\alpha }^0_\zeta ,\tilde{\alpha }^i_\zeta ,\tilde{\mu }^0_l,\tilde{\mu }^j_l, \zeta =1,\ldots ,m, i=1,\ldots ,s, j=1,\ldots ,r, l=1,\ldots ,q)\), such that \(({\tilde{t}}_1,\ldots ,{\tilde{t}}_m)-({\bar{t}}_1,\ldots ,{\bar{t}}_m)\in \textrm{int}\mathbb {R}^m_+,\) which amounts to the following relation:

$$\begin{aligned} F(\bar{x})-({\tilde{t}}_1,\ldots ,{\tilde{t}}_m)\in -\textrm{int}\mathbb {R}^m_+. \end{aligned}$$

This clearly shows a contradiction to the robust weak duality of (i).

Conversely, let \((\bar{t}_\zeta ,\bar{\alpha }^0_\zeta ,\bar{\alpha }^i_\zeta ,\bar{\mu }^0_l,\bar{\mu }^j_l, \zeta =1,\ldots ,m, i=1,\ldots ,s, j=1,\ldots ,r, l=1,\ldots ,q)\) be a weak Pareto solution of problem (\(\textrm{D}\)) such that \(F({\bar{x}})=(\bar{t}_1,\ldots ,{\bar{t}}_m).\) Then, we obtain that

$$\begin{aligned}&\sum _{\zeta =1}^m\big (\bar{\alpha }^0_\zeta p^0_\zeta +\sum _{i=1}^s\bar{\alpha }^i_\zeta p^i_\zeta \big )+\sum _{l=1}^q\big (\bar{\mu }^0_lh^0_l+\sum _{j=1}^r\bar{\mu }^j_lh^j_l\big )- \sum _{\zeta =1}^m \bar{\alpha }^0_\zeta F_\zeta ({\bar{x}}) \in \textbf{SDSOS}_\eta [x],\\&\Vert Q_\zeta (\bar{\alpha }_\zeta ^1,\ldots ,\bar{\alpha }_\zeta ^s)\Vert \le \sqrt{\rho _\zeta }\bar{\alpha }_\zeta ^0,\; \zeta =1,\ldots ,m,\\&\Vert E_l(\bar{\mu }_l^1,\ldots ,\bar{\mu }_l^r)\Vert \le \sqrt{\tau _l}\bar{\mu }_l^0,\;l=1,\dots ,q, \end{aligned}$$

where \((\bar{\alpha }^0_1,\ldots ,\bar{\alpha }^0_m)\in \mathbb {R}^m_+{\setminus }\{0\}, \bar{\alpha }^i_\zeta \in \mathbb {R}, \zeta =1,\ldots ,m, i=1,\ldots ,s\) and \(\bar{\mu }^0_l\in \mathbb {R}_+, \bar{\mu }^j_l\in \mathbb {R}, l=1,\ldots ,q, j=1,\ldots ,r\). In other words, the SOC conditions given in Theorem 1 hold, and so, by Theorem 2, the robust KKT condition is valid at \({\bar{x}}\) as well. \(\square \)

In the forthcoming corollary, we derive a robust strong duality relation under the Slater condition, which is easy to verify in practice.

Corollary 1

(Robust strong duality with the Slater condition) Suppose that the Slater condition (7) is satisfied. Let \({\bar{x}}\) be a weak Pareto solution of problem (R). Then, there exists a weak Pareto solution of problem (\(\textrm{D}\)), say \((\bar{t}_\zeta ,\bar{\alpha }^0_\zeta ,\bar{\alpha }^i_\zeta ,\bar{\mu }^0_l,\bar{\mu }^j_l, \zeta =1,\ldots ,m, i=1,\ldots ,s, j=1,\ldots ,r, l=1,\ldots ,q)\), such that

$$\begin{aligned} F({\bar{x}})=({\bar{t}}_1,\ldots ,{\bar{t}}_m), \end{aligned}$$

where \(F:=(F_1,\ldots ,F_m)\) with \(F_\zeta ({\bar{x}}):=\max \limits _{u_\zeta \in U_\zeta }f_\zeta ({\bar{x}},u_\zeta ), \zeta =1,\ldots ,m\).

Proof

Let \({\bar{x}}\) be a weak Pareto solution of problem (R). Under the Slater condition (7), we employ Theorem 1(i) to conclude that the SOC conditions given in (8), (9) and (10) hold. In view of Theorem 2, this is equivalent to saying that the robust KKT condition is valid at \({\bar{x}}\). To finish the proof, it remains to invoke Theorem 3(ii). \(\square \)

In the following theorem, we present a robust converse duality relation between the robust multiobjective problem (R) and its dual problem (\(\textrm{D}\)).

Theorem 4

(Robust converse duality) Let the feasible set C in (3) be compact and the Slater condition (7) hold. If \((\bar{t}_\zeta ,\bar{\alpha }^0_\zeta ,\bar{\alpha }^i_\zeta ,\bar{\mu }^0_l,\bar{\mu }^j_l, \zeta =1,\ldots ,m, i=1,\ldots ,s, j=1,\ldots ,r, l=1,\ldots ,q)\) is a weak Pareto solution of problem (\(\textrm{D}\)), then there exists a weak Pareto solution of problem (R), denoted by \({\bar{x}}\), such that

$$\begin{aligned} F({\bar{x}})- ({\bar{t}}_1,\ldots ,\bar{t}_m)\in -\mathbb {R}^m_+. \end{aligned}$$
(37)

Proof

Let \((\bar{t}_\zeta ,\bar{\alpha }^0_\zeta ,\bar{\alpha }^i_\zeta ,\bar{\mu }^0_l,\bar{\mu }^j_l, \zeta =1,\ldots ,m, i=1,\ldots ,s, j=1,\ldots ,r, l=1,\ldots ,q)\) be a weak Pareto solution of problem (\(\textrm{D}\)). Then, it is clear that

$$\begin{aligned}&\sum _{\zeta =1}^m\big (\bar{\alpha }^0_\zeta p^0_\zeta +\sum _{i=1}^s\bar{\alpha }^i_\zeta p^i_\zeta \big )+\sum _{l=1}^q\big (\bar{\mu }^0_lh^0_l+\sum _{j=1}^r\bar{\mu }^j_lh^j_l\big )- \sum _{\zeta =1}^m \bar{\alpha }^0_\zeta {\bar{t}}_\zeta \in \textbf{SDSOS}_\eta [x],\end{aligned}$$
(38)
$$\begin{aligned}&\Vert Q_\zeta (\bar{\alpha }_\zeta ^1,\ldots ,\bar{\alpha }_\zeta ^s)\Vert \le \sqrt{\rho _\zeta }\bar{\alpha }_\zeta ^0,\; \zeta =1,\ldots ,m, \end{aligned}$$
(39)
$$\begin{aligned}&\Vert E_l(\bar{\mu }_l^1,\ldots ,\bar{\mu }_l^r)\Vert \le \sqrt{\tau _l}\bar{\mu }_l^0,\;l=1,\dots ,q, \end{aligned}$$
(40)

where \((\bar{\alpha }^0_1,\ldots ,\bar{\alpha }^0_m)\in \mathbb {R}^m_+{\setminus }\{0\}, \bar{\alpha }^i_\zeta \in \mathbb {R}, \zeta =1,\ldots ,m, i=1,\ldots ,s\) and \(\bar{\mu }^0_l\in \mathbb {R}_+, \bar{\mu }^j_l\in \mathbb {R}, l=1,\ldots ,q, j=1,\ldots ,r\).

Put \(X:=\{F(x)+w\mid x\in C, w\in \mathbb {R}^m_+\},\) where \(F:=(F_1,\ldots ,F_m)\) with \(F_\zeta (x):=\max \limits _{u_\zeta \in U_\zeta }f_\zeta (x,u_\zeta ), \zeta =1,\ldots ,m\) for \(x\in \mathbb {R}^n\). Since the set C is convex and compact and \(F_\zeta , \zeta =1,\ldots ,m\) are convex functions finite on \(\mathbb {R}^n\) (and hence continuous), X is a closed convex set, and we claim that

$$\begin{aligned} ({\bar{t}}_1,\ldots ,{\bar{t}}_m)\in X. \end{aligned}$$
(41)

Indeed, suppose on the contrary that \(({\bar{t}}_1,\ldots ,\bar{t}_m)\notin X.\) By a classical strong separation theorem in convex analysis (see e.g., [30, Theorem 2.2]), there exists \(\lambda :=(\lambda _1,\ldots , \lambda _m)\in \mathbb {R}^m_+\setminus \{0\}\) such that \(\sum _{\zeta =1}^m\lambda _\zeta {\bar{t}}_\zeta < \inf {\big \{\lambda ^Tv \mid v\in X\big \}}.\) This entails that

$$\begin{aligned} \sum _{\zeta =1}^m\lambda _\zeta {\bar{t}}_\zeta < \inf _{}\big \{\sum _{\zeta =1}^m\lambda _\zeta F_\zeta (x)\mid x\in C\big \}. \end{aligned}$$
(42)

Observe that the (scalar) optimization problem on the right-hand side of (42) has an optimal solution, as the feasible set C is compact and \(F_\zeta , \zeta =1,\ldots ,m\) are convex functions finite on \(\mathbb {R}^n\) and thus continuous. Hence, there exists an optimal solution \(x_*\in C\) such that

$$\begin{aligned} \sum \limits _{\zeta =1}^m\lambda _\zeta \bar{t}_\zeta < \sum \limits _{\zeta =1}^m\lambda _\zeta F_\zeta (x_*) \end{aligned}$$
(43)

and that

$$\begin{aligned} \big \{x\in \mathbb {R}^n\mid \sum _{\zeta =1}^m\lambda _\zeta F_\zeta (x)-\sum _{\zeta =1}^m\lambda _\zeta F_\zeta (x_*)<0,\; G_l(x)<0,\; l=1,\ldots ,q\big \}=\emptyset , \end{aligned}$$

where \( G_l(x):=\max \limits _{\omega _l\in \Omega _l}g_l(x,\omega _l),\) for \(x\in \mathbb {R}^n, l=1,\ldots ,q.\) Invoking a classical alternative theorem in convex analysis (see e.g., [32, Theorem 21.1]), we find \(\lambda _0\ge 0, \gamma _l\ge 0, l=1,\ldots , q\), not all zero, such that

$$\begin{aligned} \lambda _0\big (\sum _{\zeta =1}^m\lambda _\zeta F_\zeta (x)-\sum _{\zeta =1}^m\lambda _\zeta F_\zeta (x_*)\big )+\sum \limits _{l=1}^q\gamma _lG_l(x)\ge 0,\quad \forall x\in \mathbb {R}^n. \end{aligned}$$
(44)

On account of the Slater condition (7), we assert by (44) that \(\lambda _0> 0\), and so there is no loss of generality in assuming that \(\lambda _0=1\). Consequently, we arrive at

$$\begin{aligned} \sum _{\zeta =1}^m\lambda _\zeta F_\zeta (x)-\sum _{\zeta =1}^m\lambda _\zeta F_\zeta (x_*)+\sum \limits _{l=1}^q\gamma _lG_l(x)\ge 0,\quad \forall x\in \mathbb {R}^n. \end{aligned}$$

Now, following the same lines (after (12)) as in the proof of Theorem 1(i), we can find \( \alpha ^{i*}_\zeta \in \mathbb {R}, \zeta =1,\ldots ,m, i=1,\ldots ,s\) and \(\mu ^{0*}_l\in \mathbb {R}_+, \mu ^{j*}_l\in \mathbb {R}, l=1,\ldots ,q, j=1,\ldots ,r\) such that

$$\begin{aligned}&\sum _{\zeta =1}^m\big (\lambda _\zeta p^0_\zeta +\sum _{i=1}^s\alpha ^{i*}_\zeta p^i_\zeta \big )+\sum _{l=1}^q\big (\mu ^{0*}_lh^0_l+\sum _{j=1}^r\mu ^{j*}_lh^j_l\big )- \sum _{\zeta =1}^m \lambda _\zeta F_\zeta (x_*) \in \textbf{SDSOS}_\eta [x],\nonumber \\&\Vert Q_\zeta (\alpha _\zeta ^{1*},\ldots ,\alpha _\zeta ^{s*})\Vert \le \sqrt{\rho _\zeta }\lambda _\zeta ,\; \zeta =1,\ldots ,m,\nonumber \\&\Vert E_l(\mu _l^{1*},\ldots ,\mu _l^{r*})\Vert \le \sqrt{\tau _l}\mu _l^{0*},\;l=1,\dots ,q. \end{aligned}$$
(45)

Letting \(\gamma :=\frac{1}{\sum _{\zeta =1}^m\lambda _\zeta }\Big (\sum \limits _{\zeta =1}^m\lambda _\zeta F_\zeta (x_*)-\sum \limits _{\zeta =1}^m\lambda _\zeta {\bar{t}}_\zeta \Big )\), we conclude by (43) that \(\gamma >0\) and, by (45), that

$$\begin{aligned} \sum _{\zeta =1}^m\big (\lambda _\zeta p^0_\zeta +\sum _{i=1}^s\alpha ^{i*}_\zeta p^i_\zeta \big )+\sum _{l=1}^q\big (\mu ^{0*}_lh^0_l+\sum _{j=1}^r\mu ^{j*}_lh^j_l\big )- \sum _{\zeta =1}^m \lambda _\zeta ({\bar{t}}_\zeta +\gamma ) \in \textbf{SDSOS}_\eta [x]. \end{aligned}$$

Therefore, \((\bar{t}_\zeta +\gamma ,\lambda _\zeta ,\alpha ^{i*}_\zeta ,\mu ^{0*}_l,\mu ^{j*}_l, \zeta =1,\ldots ,m, i=1,\ldots ,s, j=1,\ldots ,r, l=1,\ldots ,q)\) is a feasible point of problem (\(\textrm{D}\)), and

$$\begin{aligned} ({\bar{t}}_1+\gamma ,\ldots ,{\bar{t}}_m+\gamma )-({\bar{t}}_1,\ldots ,\bar{t}_m)=(\gamma ,\ldots ,\gamma )\in \textrm{int}\mathbb {R}^m_+. \end{aligned}$$

This contradicts the fact that \((\bar{t}_\zeta ,\bar{\alpha }^0_\zeta ,\bar{\alpha }^i_\zeta ,\bar{\mu }^0_l,\bar{\mu }^j_l, \zeta =1,\ldots ,m, i=1,\ldots ,s, j=1,\ldots ,r, l=1,\ldots ,q)\) is a weak Pareto solution of problem (\(\textrm{D}\)). Hence, our assertion in (41) is valid.

Granting this, we find \({\bar{x}}\in C\) and \({\bar{w}}:=(\bar{w}_1,\ldots ,{\bar{w}}_m)\in \mathbb {R}^m_+\) such that

$$\begin{aligned} ({\bar{t}}_1,\ldots ,{\bar{t}}_m)=F({\bar{x}})+\bar{w}, \end{aligned}$$
(46)

which concludes that (37) holds. By (38) and (46), we find \(\sigma \in \textbf{SDSOS}_\eta [x]\) such that

$$\begin{aligned} \sum _{\zeta =1}^m\big (\bar{\alpha }^0_\zeta p^0_\zeta +\sum _{i=1}^s\bar{\alpha }^i_\zeta p^i_\zeta \big )+\sum _{l=1}^q\big (\bar{\mu }^0_lh^0_l+\sum _{j=1}^r\bar{\mu }^j_lh^j_l\big )- \sum _{\zeta =1}^m \bar{\alpha }^0_\zeta \big (F_\zeta ({\bar{x}})+\bar{w}_\zeta \big )=\sigma . \end{aligned}$$

This, together with \(\sum _{\zeta =1}^m\bar{\alpha }^0_\zeta {\bar{w}}_\zeta \ge 0,\) guarantees that

$$\begin{aligned}&\sum _{\zeta =1}^m\big (\bar{\alpha }^0_\zeta p^0_\zeta +\sum _{i=1}^s\bar{\alpha }^i_\zeta p^i_\zeta \big )+\sum _{l=1}^q\big (\bar{\mu }^0_lh^0_l+\sum _{j=1}^r\bar{\mu }^j_lh^j_l\big )- \sum _{\zeta =1}^m \bar{\alpha }^0_\zeta F_\zeta (\bar{x})\nonumber \\&=\sum _{\zeta =1}^m \bar{\alpha }^0_\zeta {\bar{w}}_\zeta +\sigma \in \textbf{SDSOS}_\eta [x]. \end{aligned}$$
(47)

In view of Theorem 1(ii), we get by (39), (40) and (47) that \({\bar{x}}\) is a weak Pareto solution of problem (R), and so the proof of the theorem is complete. \(\square \)

4 Finding robust efficient solutions via SOCP relaxations

In this section, we show how to find a robust (weak) Pareto solution of the uncertain multiobjective problem (U) by using a second-order cone programming (SOCP) relaxation. This is done by establishing robust solution relationships between the uncertain multiobjective problem (U) and an SOCP relaxation of a corresponding weighted-sum optimization program.

For a given \(\nu :=(\nu _1,\ldots ,\nu _m)\in \mathbb {R}^m_+\setminus \{0\},\) let us consider a corresponding weighted-sum optimization problem of (R) as follows:

$$\begin{aligned} (\mathrm{P}_{\nu })\qquad \min \limits _{x\in C}\; \sum \limits _{\zeta =1}^m\nu _\zeta F_\zeta (x), \end{aligned}$$

where \(F_\zeta (x):=\max \limits _{u_\zeta \in U_\zeta }f_\zeta (x,u_\zeta ), \zeta =1,\ldots ,m\) for \(x\in \mathbb {R}^n\).
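For the simplest ellipsoidal uncertainty sets, the worst-case objectives \(F_\zeta \), and hence the weighted-sum objective of (\(\hbox {P}_{\nu }\)), admit a closed form via the support function of the ellipsoid. The sketch below is illustrative only: it assumes unit-ball uncertainty (\(A_\zeta =I\)) and hypothetical polynomials not taken from the paper.

```python
# A hedged sketch of evaluating the weighted-sum robust objective
#   sum_zeta nu_zeta * F_zeta(x),  F_zeta(x) = max_{u in U_zeta} f_zeta(x, u),
# for the simplest ellipsoids U_zeta = {u : ||u||^2 <= rho_zeta} (A_zeta = I).
# By the Cauchy-Schwarz support-function formula,
#   F_zeta(x) = p0_zeta(x) + sqrt(rho_zeta) * ||(p1_zeta(x),...,ps_zeta(x))||.
import numpy as np

def robust_weighted_sum(x, nu, p0_list, pi_list, rho_list):
    total = 0.0
    for nu_z, p0, pis, rho in zip(nu, p0_list, pi_list, rho_list):
        coeff = np.array([pi(x) for pi in pis])     # (p^1(x),...,p^s(x))
        F_z = p0(x) + np.sqrt(rho) * np.linalg.norm(coeff)
        total += nu_z * F_z
    return total

# Two illustrative objectives in one variable x:
#   f_1(x, u) = x^2 + u1 * x,  f_2(x, u) = x^4 + u1 * x^2, with ||u|| <= 1.
p0_list = [lambda x: x**2, lambda x: x**4]
pi_list = [[lambda x: x], [lambda x: x**2]]
val = robust_weighted_sum(2.0, nu=[1.0, 1.0], p0_list=p0_list,
                          pi_list=pi_list, rho_list=[1.0, 1.0])
# F_1(2) = 4 + 2 = 6 and F_2(2) = 16 + 4 = 20, so the weighted sum is 26.
assert abs(val - 26.0) < 1e-12
```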

A second-order cone programming (SOCP) dual program of problem (\(\hbox {P}_{\nu }\)) is defined by

$$\begin{aligned} (\mathrm{D}_{\nu })\qquad&\max \; t\\ \text{ s.t. }\;&\sum _{\zeta =1}^m\big (\nu _\zeta p^0_\zeta +\sum _{i=1}^s\lambda ^i_\zeta p^i_\zeta \big )+\sum _{l=1}^q\big (\mu ^0_lh^0_l+\sum _{j=1}^r\mu ^j_lh^j_l\big )-t \in \textbf{SDSOS}_\eta [x],\\&\Vert Q_\zeta (\lambda _\zeta ^1,\ldots ,\lambda _\zeta ^s)\Vert \le \sqrt{\rho _\zeta }\nu _\zeta ,\; \zeta =1,\ldots ,m,\\&\Vert E_l(\mu _l^1,\ldots ,\mu _l^r)\Vert \le \sqrt{\tau _l}\mu _l^0,\;l=1,\dots ,q,\\&t\in \mathbb {R},\; \lambda ^i_\zeta \in \mathbb {R},\; \mu ^0_l\in \mathbb {R}_+,\; \mu ^j_l\in \mathbb {R}, \end{aligned}$$

where \(Q_\zeta , \zeta =1,\ldots ,m\) and \(E_l, l=1,\ldots ,q\) are square matrices such that \(A_\zeta =Q_\zeta ^T Q_\zeta \) and \(B_l=E_l^T E_l\) as previously.
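Given a positive definite ellipsoid matrix \(A_\zeta \), one admissible square factor \(Q_\zeta \) with \(A_\zeta =Q_\zeta ^TQ_\zeta \) can be computed from a Cholesky factorization. A minimal sketch with a hypothetical matrix:

```python
# A small sketch of obtaining a square factor Q with A = Q^T Q for a
# positive definite ellipsoid matrix A (hypothetical data), via Cholesky:
# numpy returns L with A = L @ L.T, so Q := L.T satisfies A = Q.T @ Q.
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 2.0]])  # ellipsoid shape matrix (PD)
L = np.linalg.cholesky(A)               # A = L @ L.T, L lower triangular
Q = L.T                                 # square factor with A = Q.T @ Q
assert np.allclose(Q.T @ Q, A)
```

Any other square factor (e.g., the symmetric square root \(A^{1/2}\)) would serve equally well, since only \(Q^TQ=A\) is used.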

We also consider an SOCP relaxation program of problem (\(\hbox {P}_{\nu }\)) as

figure d

where \(Q^i_\zeta , i=1,\ldots ,s\) are the columns of the matrix \(Q_\zeta \) for \(\zeta =1,\ldots ,m\) and \(E^j_l, j=1,\ldots ,r\) are the columns of the matrix \(E_l\) for \(l=1,\ldots ,q.\)

The following lemma is needed for our analysis in the sequel.

Lemma 1

(Jensen’s inequality with 1st-SDSOS-convex polynomials, cf. [8, Proposition 6.1]) Let d be a 1st-SDSOS-convex polynomial on \(\mathbb {R}^n\) with degree \(\eta :=2\ell \), and let \(y=(y_\alpha )\in \mathbb {R}^{s(\eta ,n)}\) be such that \(y_1=1\) and, for all \(1 \le i,j \le s(\ell ,n),\)

$$\begin{aligned} \Vert \bigg (\begin{array}{c} 2(\textbf{M}_{\ell }(y))_{ij}\\ (\textbf{M}_{\ell }(y))_{ii}-(\textbf{M}_{\ell }(y))_{jj} \end{array}\bigg )\Vert \le (\textbf{M}_\ell (y))_{ii}+(\textbf{M}_\ell (y))_{jj}. \end{aligned}$$
(48)

Then, one has

$$\begin{aligned} L_y(d)\ge d\big (L_y(x_1),\ldots , L_y(x_n)\big ), \end{aligned}$$

where \(L_y\) is given as in (4) and \(x_i\) denotes the polynomial which maps a vector x in \(\mathbb {R}^n\) to its ith coordinate.
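The hypotheses and conclusion of Lemma 1 can be checked numerically for a small moment vector. The sketch below uses \(n=1, \ell =1\) and a made-up probability measure; note that \(d(x)=x^2\) is 1st-SDSOS-convex, being a sum of even powers with nonnegative coefficients.

```python
# Hedged illustration of Lemma 1 for n = 1, ell = 1, with a made-up
# moment vector y = (y0, y1, y2) of a probability measure (here: uniform
# on {0, 2}), so M_1(y) = [[y0, y1], [y1, y2]] and y0 = 1.
import numpy as np

y0, y1, y2 = 1.0, 1.0, 2.0             # moments E[1], E[x], E[x^2]
M = np.array([[y0, y1], [y1, y2]])     # moment matrix M_1(y)

# Condition (48): ||(2 M_ij, M_ii - M_jj)|| <= M_ii + M_jj for all i, j,
# which is equivalent to M_ij^2 <= M_ii * M_jj (2x2 principal PSD blocks).
for i in range(2):
    for j in range(2):
        lhs = np.hypot(2 * M[i, j], M[i, i] - M[j, j])
        assert lhs <= M[i, i] + M[j, j] + 1e-12

# Jensen's conclusion L_y(d) >= d(L_y(x)) for the 1st-SDSOS-convex d(x) = x^2:
L_y_d = y2               # L_y applied to the polynomial x^2
assert L_y_d >= y1**2    # d(L_y(x)) = (L_y(x))^2 = y1^2
```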

Robust solution relationships between the uncertain multiobjective optimization problem (U) and the SOCP relaxation (\(\hbox {D}_{\nu }^*\)) of the corresponding weighted-sum optimization problem (\(\hbox {P}_{\nu }\)) are now ready to be established. This result illustrates, in particular, that we can find robust (weak) Pareto solutions of problem (U) by solving the (scalar) SOCP relaxation problem (\(\hbox {D}_{\nu }^*\)). In addition, we obtain strong SOCP duality and exact SOCP relaxation relations among the dual problem (\(\hbox {D}_{\nu }\)), the relaxation (\(\hbox {D}_{\nu }^*\)) and the weighted-sum optimization problem (\(\hbox {P}_{\nu }\)), which might be of independent interest.

Theorem 5

Let the Slater qualification condition (7) be valid. Then, we have the following statements:

  1. (i)

    If \({\bar{x}}:=({\bar{x}}_1,\ldots ,{\bar{x}}_n)\) is a robust weak Pareto solution of problem (U), then there exist \(\nu \in \mathbb {R}^m_+{\setminus }\{0\}\) and \(\xi _\zeta \in \mathbb {R}, \tilde{\xi }_l\in \mathbb {R}, z^\zeta \in \mathbb {R}^s, {\tilde{z}}^l\in \mathbb {R}^r, \zeta =1,\ldots ,m, l=1,\ldots ,q\) such that

    $$\begin{aligned} \min {(P_{\nu })}=\max {(D_{\nu })}=\min {(D^*_{\nu })} \end{aligned}$$
    (49)

    and \((\bar{y},\xi _1,\ldots ,\xi _m,\tilde{\xi }_1,\ldots ,\tilde{\xi }_q,z^1,\ldots ,z^m,\tilde{z}^1,\ldots ,{\tilde{z}}^q)\) is an optimal solution of problem (\(\hbox {D}_{\nu }^*\)), where \({\bar{y}}:=(1, {\bar{x}}_1,\ldots ,\bar{x}_n, {\bar{x}}_1^2,{\bar{x}}_1{\bar{x}}_2,\ldots ,{\bar{x}}_2^2,\ldots ,\bar{x}_n^2,\ldots ,{\bar{x}}_1^{\eta },\ldots ,{\bar{x}}_n^\eta )\).

  2. (ii)

    Consider \(L_{{\bar{y}}}\) given as in (4) and \(x_i\) being the polynomial which maps a vector x in \(\mathbb {R}^n\) to its ith coordinate. Let \(\nu \in \mathbb {R}^m_+\setminus \{0\}\) be such that the problem (\(\hbox {P}_{\nu }\)) admits an optimal solution. If \((\bar{y},\xi _1,\ldots ,\xi _m,\tilde{\xi }_1,\ldots ,\tilde{\xi }_q,z^1,\ldots ,z^m,\tilde{z}^1,\ldots ,{\tilde{z}}^q)\) with \({\bar{y}}:=({\bar{y}}_\alpha )\in \mathbb {R}^{s(\eta ,n)}\) is an optimal solution of problem (\(\hbox {D}_{\nu }^*\)), then \({\bar{x}}:=(L_{{\bar{y}}}(x_1),\ldots , L_{{\bar{y}}}(x_n))\in \mathbb {R}^n\) is a robust weak Pareto solution of problem (U). Moreover, if \(\nu \in \textrm{int}\mathbb {R}^m_+\), then \({\bar{x}}\) is a robust Pareto solution of problem (U).

Proof

(i) Let \({\bar{x}}:=({\bar{x}}_1,\ldots ,{\bar{x}}_n)\) be a robust weak Pareto solution of problem (U). In view of Theorem 1(i), there exist \(\mu ^0_l\in \mathbb {R}_+, \mu ^j_l\in \mathbb {R}, l=1,\ldots ,q, j=1,\ldots ,r\) and \(\nu :=(\nu _1,\ldots ,\nu _m)\in \mathbb {R}^m_+{\setminus }\{0\}, \lambda ^i_\zeta \in \mathbb {R}, \zeta =1,\ldots ,m, i=1,\ldots ,s\) such that

$$\begin{aligned}&\sum _{\zeta =1}^m\big (\nu _\zeta p^0_\zeta +\sum _{i=1}^s\lambda ^i_\zeta p^i_\zeta \big )+\sum _{l=1}^q\big (\mu ^0_lh^0_l+\sum _{j=1}^r\mu ^j_lh^j_l\big )- \sum _{\zeta =1}^m \nu _\zeta F_\zeta ({\bar{x}}) \in \textbf{SDSOS}_\eta [x],\end{aligned}$$
(50)
$$\begin{aligned}&\Vert Q_\zeta (\lambda _\zeta ^1,\ldots ,\lambda _\zeta ^s)\Vert \le \sqrt{\rho _\zeta }\nu _\zeta ,\; \zeta =1,\ldots ,m, \end{aligned}$$
(51)
$$\begin{aligned}&\Vert E_l(\mu _l^1,\ldots ,\mu _l^r)\Vert \le \sqrt{\tau _l}\mu _l^0,\;l=1,\dots ,q, \end{aligned}$$
(52)

where \(F_\zeta ({\bar{x}}):=\max \limits _{u_\zeta \in U_\zeta }f_\zeta (\bar{x},u_\zeta )\) for \( \zeta =1,\ldots ,m\).

Step 1. Let us first prove that

$$\begin{aligned} \min {(P_{\nu })}=\sum _{\zeta =1}^m \nu _\zeta F_\zeta ({\bar{x}})=\max {(D_{\nu })}. \end{aligned}$$
(53)

By (50), we find \(\sigma \in \textbf{SDSOS}_\eta [x]\) such that

$$\begin{aligned} \sum _{\zeta =1}^m\big (\nu _\zeta p^0_\zeta (x)+\sum _{i=1}^s\lambda ^i_\zeta p^i_\zeta (x)\big )=\sigma (x)-\sum _{l=1}^q\big (\mu ^0_lh^0_l(x)+\sum _{j=1}^r\mu ^j_lh^j_l(x)\big )+ \sum _{\zeta =1}^m \nu _\zeta F_\zeta ({\bar{x}}),\;\forall x\in \mathbb {R}^n. \end{aligned}$$
(54)

Arguing as in the proof of (ii) in Theorem 1, we deduce from (51) and (52) that there exist \({\hat{u}}_\zeta \in U_\zeta , \zeta =1,\ldots ,m\) and \(\hat{\omega }_l\in \Omega _l, l=1,\ldots ,q\) such that

$$\begin{aligned}&\nu _\zeta p^0_\zeta (x)+\sum _{i=1}^s\lambda ^i_\zeta p^i_\zeta (x)=\nu _\zeta f_\zeta (x,{\hat{u}}_\zeta ),\;\forall x\in \mathbb {R}^n, \;\zeta =1,\ldots ,m,\\&\mu ^0_lh^0_l(x)+\sum \limits _{j=1}^r\mu ^j_lh^j_l(x)=\mu ^0_lg_l(x,\hat{\omega }_l),\; \forall x\in \mathbb {R}^n,\;l=1,\ldots ,q. \end{aligned}$$

Let \( {\hat{x}}\) be an arbitrary feasible point of problem (\(\hbox {P}_{\nu }\)). This entails that \(g_l({\hat{x}},\hat{\omega }_l)\le 0, l=1,\ldots ,q\). Note further that \(F_\zeta ({\hat{x}}):=\max \limits _{u_\zeta \in U_\zeta }f_\zeta ({\hat{x}},u_\zeta )\ge f_\zeta ({\hat{x}}, {\hat{u}}_\zeta )\) for all \( \zeta =1,\ldots ,m\) and that \(\sigma ({\hat{x}})\ge 0\) as \(\sigma \) is an SDSOS polynomial. Evaluating (54) at \({\hat{x}}\), we arrive at

$$\begin{aligned} \sum _{\zeta =1}^m \nu _\zeta F_\zeta ({\hat{x}})\ge \sum _{\zeta =1}^m \nu _\zeta f_\zeta ({\hat{x}}, {\hat{u}}_\zeta )\ge \sum _{\zeta =1}^m \nu _\zeta F_\zeta ({\bar{x}}). \end{aligned}$$
(55)

Since \({\hat{x}}\) was an arbitrary feasible point and \({\bar{x}}\) is itself feasible for problem (\(\hbox {P}_{\nu }\)), we conclude that \({\bar{x}}\) is an optimal solution of problem (\(\hbox {P}_{\nu }\)), and so \(\min {(P_{\nu })}=\sum _{\zeta =1}^m \nu _\zeta F_\zeta ({\bar{x}}).\)

To proceed, let

$$\begin{aligned} \nu ^*:=\sup _{(t,\lambda ^i_\zeta ,\mu ^0_l,\mu ^j_l)}{}\big \{&t \mid \sum _{\zeta =1}^m\big (\nu _\zeta p^0_\zeta +\sum _{i=1}^s\lambda ^i_\zeta p^i_\zeta \big )+\sum _{l=1}^q\big (\mu ^0_lh^0_l+\sum _{j=1}^r\mu ^j_lh^j_l\big ) -t\in \textbf{SDSOS}_\eta [x], t\in \mathbb {R},\\&\Vert Q_\zeta (\lambda _\zeta ^1,\ldots ,\lambda _\zeta ^s)\Vert \le \sqrt{\rho _\zeta }\nu _\zeta ,\; \lambda ^i_\zeta \in \mathbb {R}, i=1,\ldots ,s, \zeta =1,\ldots ,m, \\&\Vert E_l(\mu _l^1,\ldots ,\mu _l^r)\Vert \le \sqrt{\tau _l}\mu _l^0, \mu ^0_l\in \mathbb {R}_+, \mu ^j_l\in \mathbb {R}, j=1,\ldots ,r, l=1,\ldots ,q\big \}. \end{aligned}$$

By putting \({\bar{t}}:=\sum _{\zeta =1}^m \nu _\zeta F_\zeta ({\bar{x}})\), we get by (50) that \(({\bar{t}},\lambda ^i_\zeta ,\mu ^0_l,\mu ^j_l, i=1,\ldots ,s, \zeta =1,\ldots ,m, j=1,\ldots ,r, l=1,\ldots ,q)\) is a feasible point of problem (\(\hbox {D}_{\nu }\)) and so, \({\bar{t}}\le \nu ^*\). Let us show that

$$\begin{aligned} \nu ^*\le \min {(P_{\nu })}. \end{aligned}$$
(56)

To see this, assume that \((t,\lambda ^i_\zeta ,\mu ^0_l,\mu ^j_l, i=1,\ldots ,s, \zeta =1,\ldots ,m, j=1,\ldots ,r, l=1,\ldots ,q)\) is a feasible point of problem (\(\hbox {D}_{\nu }\)). Then \(\mu ^0_l\in \mathbb {R}_+, \mu ^j_l\in \mathbb {R}, l=1,\ldots ,q, j=1,\ldots ,r\) and \( \lambda ^i_\zeta \in \mathbb {R}, \zeta =1,\ldots ,m, i=1,\ldots ,s\) satisfy

$$\begin{aligned}&\sum _{\zeta =1}^m\big (\nu _\zeta p^0_\zeta +\sum _{i=1}^s\lambda ^i_\zeta p^i_\zeta \big )+\sum _{l=1}^q\big (\mu ^0_lh^0_l+\sum _{j=1}^r\mu ^j_lh^j_l\big )- t \in \textbf{SDSOS}_\eta [x],\\&\Vert Q_\zeta (\lambda _\zeta ^1,\ldots ,\lambda _\zeta ^s)\Vert \le \sqrt{\rho _\zeta }\nu _\zeta ,\; \zeta =1,\ldots ,m,\\&\Vert E_l(\mu _l^1,\ldots ,\mu _l^r)\Vert \le \sqrt{\tau _l}\mu _l^0,\;l=1,\dots ,q. \end{aligned}$$

By arguments similar to those from (54) to (55), we conclude that \(\sum _{\zeta =1}^m\nu _\zeta F_\zeta ({\bar{x}}) \ge t\); since \(\min {(P_{\nu })}=\sum _{\zeta =1}^m \nu _\zeta F_\zeta ({\bar{x}})\), this proves that (56) is valid. Consequently, (53) holds.

Step 2. Denote \({\bar{y}}:={\bar{x}}^{(\eta )}=(1, \bar{x}_1,\ldots ,{\bar{x}}_n, {\bar{x}}_1^2,{\bar{x}}_1{\bar{x}}_2,\ldots ,\bar{x}_2^2,\ldots ,{\bar{x}}_n^2,\ldots ,{\bar{x}}_1^{\eta },\ldots ,\bar{x}_n^\eta )\). We now prove that there exist \(\xi _\zeta \in \mathbb {R}, \tilde{\xi }_l\in \mathbb {R}, z^\zeta \in \mathbb {R}^s, {\tilde{z}}^l\in \mathbb {R}^r, \zeta =1,\ldots ,m, l=1,\ldots ,q\) such that

\((\bar{y},\xi _1,\ldots ,\xi _m,\tilde{\xi }_1,\ldots ,\tilde{\xi }_q,z^1,\ldots ,z^m,\tilde{z}^1,\ldots ,{\tilde{z}}^q)\) is an optimal solution of problem (\(\hbox {D}_{\nu }^*\)) and

$$\begin{aligned} \min {(P_{\nu })}=\min {(D_{\nu }^*)}=\sum _{\alpha =1}^{s(\eta ,n)}(\sum _{\zeta =1}^m\nu _\zeta p^0_\zeta )_\alpha \bar{y}_\alpha +\sum _{\zeta =1}^m\nu _\zeta \sqrt{\rho _\zeta }\xi _\zeta . \end{aligned}$$
(57)

Since \({\bar{x}}\) is a feasible point of problem (\(\hbox {P}_{\nu }\)), it is true that

$$\begin{aligned}&\max \limits _{\omega _l:=(\omega ^1_l,\ldots , \omega ^r_l)\in \Omega _l}{\{h^0_l(\bar{x})+\sum \limits ^{r}_{j=1}\omega ^j_lh^j_l({\bar{x}})\}}\le 0, \; l=1,\ldots , q. \end{aligned}$$

This entails that

$$\begin{aligned}&[\omega _l:=(\omega ^1_l,\ldots , \omega ^r_l)\in \mathbb {R}^{r}, \Vert E_l\omega _l\Vert \le \sqrt{\tau _l}]\nonumber \\&\Rightarrow h^0_l(\bar{x})+\sum \limits ^{r}_{j=1}\omega ^j_lh^j_l({\bar{x}}) \le 0,\; l=1,\ldots ,q. \end{aligned}$$
(58)

Letting \(\hat{\omega }_l:=0_r\in \mathbb {R}^r,\) we have \(\Vert E_l\hat{\omega }_l\Vert < \sqrt{\tau _l},\) i.e., the strict feasibility condition holds for (58). This allows us to employ the strong duality in second-order cone programming (see e.g., [6, Theorem 7.1]) to assert that there exist \( {\tilde{z}}^l\in \mathbb {R}^r, \tilde{\xi }_l\in \mathbb {R}, l=1,\ldots ,q\) such that \(\Vert {\tilde{z}}^l\Vert \le \tilde{\xi }_l, l=1,\ldots ,q\) and

$$\begin{aligned}&h^0_l({\bar{x}}) + \sqrt{\tau _l} \tilde{\xi }_l\le 0 ,\;l=1,\ldots ,q,\\&h_l^j({\bar{x}}) +(E^j_l)^T{\tilde{z}}^l= 0, j=1,\ldots ,r, l=1,\ldots ,q. \end{aligned}$$
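For intuition, in the special case \(E_l = I\) this SOC duality step has a closed form: the worst case of \(h^0_l({\bar{x}})+\sum _{j}\omega ^j_l h^j_l({\bar{x}})\) over \(\Vert \omega _l\Vert \le \sqrt{\tau _l}\) equals \(h^0_l({\bar{x}})+\sqrt{\tau _l}\,\Vert (h^1_l({\bar{x}}),\ldots ,h^r_l({\bar{x}}))\Vert \), certified by \({\tilde{z}}^l=-(h^1_l({\bar{x}}),\ldots ,h^r_l({\bar{x}}))\) and \(\tilde{\xi }_l=\Vert {\tilde{z}}^l\Vert \). A numerical sanity check with illustrative data (not from the paper):

```python
import numpy as np

# Toy check with E_l = identity and made-up coefficient values:
# max over ||w|| <= sqrt(tau) of h0 + h^T w equals h0 + sqrt(tau)*||h||,
# and the certificate (z, xi) = (-h, ||h||) satisfies the dual conditions
#   ||z|| <= xi  and  h_j + z_j = 0,
# with  h0 + sqrt(tau)*xi <= 0  exactly when the worst-case value is <= 0.
rng = np.random.default_rng(0)
h0, h, tau = -5.0, rng.normal(size=3), 2.0

worst = h0 + np.sqrt(tau) * np.linalg.norm(h)   # closed-form robust value
z, xi = -h, np.linalg.norm(h)                   # dual certificate

assert np.linalg.norm(z) <= xi + 1e-12
assert np.allclose(h + z, 0.0)
assert bool(worst <= 0) == bool(h0 + np.sqrt(tau) * xi <= 0)
```

For a general invertible \(E_l\) one substitutes \(v = E_l\omega _l\) and obtains the analogous formula with the transformed coefficient vector, which is the content of the strong duality theorem invoked above.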

Similarly, by \(F_\zeta ({\bar{x}}):= \max \limits _{u_\zeta :=(u_\zeta ^1,\ldots ,u^s_\zeta )\in U_\zeta }\{p^0_\zeta ({\bar{x}})+\sum \limits ^{s}_{i=1}u^i_\zeta p^i_\zeta ({\bar{x}})\}, \zeta =1,\dots ,m\), we have

$$\begin{aligned}&[u_\zeta :=(u^1_\zeta ,\ldots ,u^s_\zeta )\in \mathbb {R}^{s}, \Vert Q_\zeta u_\zeta \Vert \le \sqrt{\rho _\zeta }]\\&\Rightarrow p^0_\zeta (\bar{x})+\sum \limits ^{s}_{i=1}u^i_\zeta p^i_\zeta ({\bar{x}}) \le F_\zeta ({\bar{x}}),\; \zeta =1,\ldots ,m, \end{aligned}$$

and so, we can find \(z^\zeta \in \mathbb {R}^s, \xi _\zeta \in \mathbb {R}, \zeta =1,\ldots ,m\) such that \(\Vert z^\zeta \Vert \le \xi _\zeta , \zeta =1,\ldots ,m\) and

$$\begin{aligned}&p^0_\zeta ({\bar{x}}) +\sqrt{\rho _\zeta }\xi _\zeta \le F_\zeta ({\bar{x}}), \zeta =1,\ldots ,m, \nonumber \\&p^i_\zeta ({\bar{x}})+(Q^i_\zeta )^Tz^\zeta = 0, i=1,\ldots ,s, \zeta =1,\ldots ,m. \end{aligned}$$
(59)

As \({\bar{y}}:={\bar{x}}^{(\eta )}=(1, {\bar{x}}_1,\ldots ,{\bar{x}}_n, \bar{x}_1^2,{\bar{x}}_1{\bar{x}}_2,\ldots ,{\bar{x}}_2^2,\ldots ,\bar{x}_n^2,\ldots ,{\bar{x}}_1^{\eta },\ldots ,{\bar{x}}_n^\eta )\), it holds that \({\bar{y}}_1:=1\) and

$$\begin{aligned}&L_{\bar{y}}(p^i_\zeta )=\sum _{\alpha =1}^{s(\eta ,n)}(p^i_\zeta )_{\alpha } \bar{y}_\alpha =\sum _{\alpha =1}^{s(\eta ,n)}(p^i_\zeta )_{\alpha } \bar{x}^{(\eta )}_\alpha =p^i_\zeta ({\bar{x}}), i=0,1,\ldots ,s, \zeta =1,\ldots , m,\nonumber \\&L_{{\bar{y}}}(h^j_l)=\sum _{\alpha =1}^{s(\eta ,n)}(h^j_l)_{\alpha } \bar{y}_\alpha =\sum _{\alpha =1}^{s(\eta ,n)}(h^j_l)_{\alpha } \bar{x}^{(\eta )}_\alpha =h^j_l({\bar{x}}), j=0,1,\ldots ,r, l=1,\ldots , q,\end{aligned}$$
(60)

where \(L_{y}\) is defined as in (4). Furthermore, by the definition of the moment matrix (cf. (5) and (6)), we see that

$$\begin{aligned} \textbf{M}_{\frac{\eta }{2}}({\bar{y}})=\sum _{\alpha =1}^{ s(\eta ,n)} \bar{y}_\alpha M_\alpha = \sum _{\alpha =1}^{ s(\eta ,n)} \bar{x}^{(\eta )}_\alpha M_\alpha ={\bar{x}}^{(\frac{\eta }{2})}(\bar{x}^{(\frac{\eta }{2})})^T\succeq 0, \end{aligned}$$

which ensures that for each \(1 \le i,j \le s(\frac{\eta }{2},n),\)

$$\begin{aligned} \Vert \bigg (\begin{array}{c} 2(\textbf{M}_{\frac{\eta }{2}}({\bar{y}}))_{ij}\\ (\textbf{M}_{\frac{\eta }{2}}({\bar{y}}))_{ii}-(\textbf{M}_{\frac{\eta }{2}}({\bar{y}}))_{jj} \end{array}\bigg )\Vert \le (\textbf{M}_{\frac{\eta }{2}}({\bar{y}}))_{ii}+(\textbf{M}_{\frac{\eta }{2}}({\bar{y}}))_{jj}. \end{aligned}$$
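This norm inequality is the second-order-cone way of saying that the \(2\times 2\) principal submatrix indexed by \(i,j\) is positive semidefinite (squaring both sides reduces it to \((\textbf{M}_{\frac{\eta }{2}}({\bar{y}}))_{ij}^2\le (\textbf{M}_{\frac{\eta }{2}}({\bar{y}}))_{ii}(\textbf{M}_{\frac{\eta }{2}}({\bar{y}}))_{jj}\)), and for a rank-one matrix \(vv^T\) it holds with equality. A quick numerical check on a stand-in vector of our own choosing:

```python
import numpy as np

# For M = v v^T (the moment matrix of a genuine moment vector), every
# 2x2 principal submatrix [[M_ii, M_ij], [M_ij, M_jj]] is PSD, so the
# SOC inequality ||(2*M_ij, M_ii - M_jj)|| <= M_ii + M_jj holds for all
# index pairs -- here it even holds with equality, since M_ij^2 = M_ii*M_jj.
v = np.array([1.0, 0.5, -2.0, 0.25])   # stand-in for x_bar^(eta/2)
M = np.outer(v, v)

for i in range(len(v)):
    for j in range(len(v)):
        lhs = np.hypot(2.0 * M[i, j], M[i, i] - M[j, j])
        assert lhs <= M[i, i] + M[j, j] + 1e-12
```

This is precisely why the feasibility of \({\bar{y}}\) for the scaled-diagonally-dominant constraints of (\(\hbox {D}_{\nu }^*\)) comes for free from the rank-one structure.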

Consequently, \((\bar{y},\xi _1,\ldots ,\xi _m,\tilde{\xi }_1,\ldots ,\tilde{\xi }_q,z^1,\ldots ,z^m,\tilde{z}^1,\ldots ,{\tilde{z}}^q)\) is a feasible point of problem (\(\hbox {D}_{\nu }^*\)), which in turn implies that

$$\begin{aligned} \inf {(\hbox {D}_{\nu }^*)}\le \sum _{\alpha =1}^{s(\eta ,n)}(\sum _{\zeta =1}^m\nu _\zeta p^0_\zeta )_\alpha \bar{y}_\alpha +\sum _{\zeta =1}^m\nu _\zeta \sqrt{\rho _\zeta }\xi _\zeta \le \sum _{\zeta =1}^m\nu _\zeta F_\zeta ({\bar{x}}) = \min {(P_{\nu })}, \end{aligned}$$
(61)

where the second inequality holds by (59) and (60), while the equality holds by (53).

We now justify that \(\min {(P_{\nu })}\le \inf {(D_{\nu }^*)}.\) Let \((y,{\xi }_{1},\ldots ,{\xi }_{m},{\tilde{\xi }}_1,\ldots ,{\tilde{\xi }}_q,z^1,\ldots ,z^m,{\tilde{z}}^1,\ldots ,{\tilde{z}}^q)\) be a feasible point of problem (\(\hbox {D}_{\nu }^*\)), where \(y:=(y_\alpha )\in \mathbb {R}^{s(\eta ,n)}\), and denote \({\tilde{x}}:=(L_{y}(x_1),\ldots , L_{y}(x_n)).\) Then,

$$\begin{aligned}&\sum _{\alpha =1}^{s(\eta ,n)}(h^0_l)_\alpha y_\alpha +\sqrt{\tau _l}\tilde{\xi }_l\le 0, l=1,\ldots ,q, \end{aligned}$$
(62)
$$\begin{aligned}&\sum _{\alpha =1}^{s(\eta ,n)}(p^i_\zeta )_\alpha y_\alpha +(Q_\zeta ^{i})^Tz^\zeta = 0, \Vert z^\zeta \Vert \le \xi _\zeta , i=1,\ldots , s, \zeta =1,\ldots ,m,\nonumber \\&\sum _{\alpha =1}^{s(\eta ,n)}(h^j_l)_\alpha y_\alpha +(E^{j}_l)^T {\tilde{z}}^l= 0, \Vert {\tilde{z}}^l\Vert \le \tilde{\xi }_l, j=1,\ldots , r, l=1,\ldots ,q, \end{aligned}$$
(63)
$$\begin{aligned}&y_1=1, \end{aligned}$$
(64)
$$\begin{aligned}&\Vert \bigg (\begin{array}{c} 2(\textbf{M}_{\frac{\eta }{2}}(y))_{ij}\\ (\textbf{M}_{\frac{\eta }{2}}(y))_{ii}-(\textbf{M}_{\frac{\eta }{2}}(y))_{jj} \end{array}\bigg )\Vert \le (\textbf{M}_{\frac{\eta }{2}}(y))_{ii}+(\textbf{M}_{\frac{\eta }{2}}(y))_{jj}, \; 1 \le i,j \le s(\frac{\eta }{2},n). \end{aligned}$$
(65)

Considering any \(l\in \{1,\ldots ,q\}\), we verify that

$$\begin{aligned} \sum _{\alpha =1}^{s(\eta ,n)}(h_l^0)_\alpha y_\alpha +\sqrt{\tau _l}\tilde{\xi }_l\ge \max \limits _{\omega _l:=(\omega ^1_l,\ldots ,\omega ^r_l)\in \Omega _l}{\{h_l^0(\tilde{x})+\sum _{j=1}^{r}\omega _l^jh_l^j({\tilde{x}})\}}, \end{aligned}$$
(66)

where \({\tilde{x}}:=(L_{y}(x_1),\ldots , L_{y}(x_n)).\) It suffices to show that for any \(\omega _l\in \Omega _l,\) it holds that

$$\begin{aligned} \sum _{\alpha =1}^{s(\eta ,n)}(h^0_l)_\alpha y_\alpha +\sqrt{\tau _l}\tilde{\xi }_l\ge h_l^0(\tilde{x})+\sum _{j=1}^{r}\omega _l^jh_l^j({\tilde{x}}). \end{aligned}$$
(67)

Since \(\omega _l:=(\omega ^1_l,\ldots ,\omega ^r_l)\in \Omega _l\), it holds that \(||E_l(\omega ^1_l,\ldots ,\omega ^r_l)||\le \sqrt{\tau _l}.\) This together with (63) ensures that

$$\begin{aligned}&\sqrt{\tau _l}\tilde{\xi }_l\ge \Vert {\tilde{z}}^l\Vert \Vert E_l(\omega ^1_l,\ldots ,\omega ^r_l)\Vert \ge -({\tilde{z}}^{l})^{T}E_l(\omega ^1_l,\ldots ,\omega ^r_l)\\&=-\sum _{j=1}^r\omega ^j_l(E_l^j)^T{\tilde{z}}^l=\sum _{j=1}^r\omega ^j_l\big (\sum _{\alpha =1}^{s(\eta ,n)}(h^j_l)_\alpha y_\alpha \big ). \end{aligned}$$

Therefore, we have

$$\begin{aligned} \sum _{\alpha =1}^{s(\eta ,n)}(h^0_l)_\alpha y_\alpha +\sqrt{\tau _l}\tilde{\xi }_l\ge \sum _{\alpha =1}^{s(\eta ,n)}(h^0_l)_\alpha y_\alpha +\sum _{j=1}^r\omega ^j_l\big (\sum _{\alpha =1}^{s(\eta ,n)}(h^j_l)_\alpha y_\alpha \big )=L_{y}\big (h^0_l+\sum _{j=1}^{r}\omega ^j_lh^j_l\big ), \end{aligned}$$
(68)

where \(L_{y}\) is defined as in (4). Note that the polynomial \(h^0_l+\sum _{j=1}^{r}\omega ^j_lh^j_l\) is 1st-SDSOS-convex by our assumption. On account of (64) and (65), we invoke Jensen's inequality from Lemma 1 to claim that

$$\begin{aligned} L_{y}\big (h^0_l+\sum _{j=1}^{r}\omega ^j_lh^j_l\big )\ge \big (h^0_l+\sum _{j=1}^{r}\omega ^j_lh^j_l\big )(L_{y}(x_1),\ldots , L_{y}(x_n))=h^0_l({\tilde{x}})+\sum _{j=1}^{r}\omega ^j_lh^j_l(\tilde{x}),\end{aligned}$$

which together with (68) shows that (67) holds. So, (66) is true. Granting this, we conclude from (62) that \({\tilde{x}}\) is a feasible point of problem (\(\hbox {P}_{\nu }\)) and hence,

$$\begin{aligned} \min {(P_{\nu })}\le \sum _{\zeta =1}^m\nu _\zeta F_\zeta ({\tilde{x}}). \end{aligned}$$
(69)
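The Jensen-type step above is the familiar fact that, when \(y\) behaves like the moment vector of a probability measure, \(L_{y}(f)=\mathbb {E}[f(X)]\ge f(\mathbb {E}[X])=f(L_{y}(x_1),\ldots ,L_{y}(x_n))\) for convex \(f\). A numerical illustration with a toy measure and a convex polynomial of our own choosing (a sum of even powers with nonnegative coefficients, hence 1st-SDSOS-convex):

```python
import numpy as np

# Toy measure: uniform weights on three points in R^2. Its moment
# functional satisfies L_y(f) = E[f(X)] and L_y(x_i) = E[X_i], so for a
# convex f (here f(x) = x1^4 + 2*x2^2) Jensen gives L_y(f) >= f(E[X]).
pts = np.array([[0.0, 1.0], [2.0, -1.0], [-1.0, 0.5]])
w = np.full(3, 1.0 / 3.0)

f = lambda x: x[0] ** 4 + 2.0 * x[1] ** 2        # convex polynomial

L_f = sum(wk * f(p) for wk, p in zip(w, pts))    # L_y(f) = E[f(X)]
mean = w @ pts                                   # (L_y(x_1), L_y(x_2))
assert L_f >= f(mean)
```

In the proof, Lemma 1 supplies exactly this inequality for 1st-SDSOS-convex polynomials directly from the moment-type constraints (64) and (65), without reference to an underlying measure.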

Similarly, we can verify that

$$\begin{aligned} \sum _{\alpha =1}^{s(\eta ,n)}(p^0_\zeta )_\alpha y_\alpha +\sqrt{\rho _\zeta } \xi _\zeta \ge \max \limits _{u_\zeta \in U_\zeta }{\{p^0_\zeta ({\tilde{x}})+\sum _{i=1}^{s}u^i_\zeta p^i_\zeta ({\tilde{x}})\}},\; \zeta =1,\ldots ,m, \end{aligned}$$

and so

$$\begin{aligned} \sum _{\alpha =1}^{s(\eta ,n)}(\sum _{\zeta =1}^m\nu _\zeta p^0_\zeta )_\alpha y_\alpha + \sum _{\zeta =1}^m \nu _\zeta \sqrt{\rho _\zeta } \xi _\zeta \ge \sum _{\zeta =1}^m\nu _\zeta F_\zeta ({\tilde{x}}). \end{aligned}$$

Hence, by (69), it follows that

$$\begin{aligned} \inf {(D_{\nu }^*)}\ge \min {(P_{\nu })}. \end{aligned}$$

Combining this with (61), we conclude that

$$\begin{aligned} \min {(P_{\nu })}=\min {(D_{\nu }^*)}= \sum _{\alpha =1}^{s(\eta ,n)}(\sum _{\zeta =1}^m\nu _\zeta p^0_\zeta )_\alpha \bar{y}_\alpha +\sum _{\zeta =1}^m\nu _\zeta \sqrt{\rho _\zeta }\xi _\zeta , \end{aligned}$$

i.e., (57) is valid and \((\bar{y},\xi _1,\ldots ,\xi _m,\tilde{\xi }_1,\ldots ,\tilde{\xi }_q,z^1,\ldots ,z^m,\tilde{z}^1,\ldots ,{\tilde{z}}^q)\) is an optimal solution of problem (\(\hbox {D}_{\nu }^*\)).

(ii) Let \(\nu :=(\nu _1,\ldots ,\nu _m)\in \mathbb {R}^m_+\setminus \{0\}\) be such that problem (\(\hbox {P}_{\nu }\)) admits an optimal solution, say \({\hat{x}}\). Arguing as in Step 2 of the proof of (i), we arrive at

$$\begin{aligned} \min {(P_{\nu })}=\min {(D_{\nu }^*)}. \end{aligned}$$
(70)

Let \((\bar{y},\xi _1,\ldots ,\xi _m,\tilde{\xi }_1,\ldots ,\tilde{\xi }_q,z^1,\ldots ,z^m,\tilde{z}^1,\ldots ,{\tilde{z}}^q)\) with \({\bar{y}}:=({\bar{y}}_\alpha )\in \mathbb {R}^{s(\eta ,n)}\) be an optimal solution of problem (\(\hbox {D}_{\nu }^*\)). Then, we have \(\xi _\zeta \in \mathbb {R}, \tilde{\xi }_l\in \mathbb {R}, z^\zeta \in \mathbb {R}^s, {\tilde{z}}^l\in \mathbb {R}^r, \zeta =1,\ldots ,m, l=1,\ldots ,q\) and

$$\begin{aligned}&\min {(D_{\nu }^*)}=\sum _{\alpha =1}^{s(\eta ,n)}(\sum _{\zeta =1}^m\nu _\zeta p^0_\zeta )_\alpha {\bar{y}}_\alpha +\sum _{\zeta =1}^m\nu _\zeta \sqrt{\rho _\zeta }\xi _\zeta , \end{aligned}$$
(71)
$$\begin{aligned}&\sum _{\alpha =1}^{s(\eta ,n)}(h^0_l)_\alpha {\bar{y}}_\alpha +\sqrt{\tau _l}\tilde{\xi }_l\le 0, l=1,\ldots ,q,\end{aligned}$$
(72)
$$\begin{aligned}&\sum _{\alpha =1}^{s(\eta ,n)}(p^i_\zeta )_\alpha {\bar{y}}_\alpha +(Q_\zeta ^{i})^Tz^\zeta = 0, \Vert z^\zeta \Vert \le \xi _\zeta , i=1,\ldots , s, \zeta =1,\ldots ,m, \end{aligned}$$
(73)
$$\begin{aligned}&\sum _{\alpha =1}^{s(\eta ,n)}(h^j_l)_\alpha {\bar{y}}_\alpha +(E^{j}_l)^T {\tilde{z}}^l= 0, \Vert {\tilde{z}}^l\Vert \le \tilde{\xi }_l, j=1,\ldots , r, l=1,\ldots ,q,\end{aligned}$$
(74)
$$\begin{aligned}&{\bar{y}}_1=1,\end{aligned}$$
(75)
$$\begin{aligned}&\Vert \bigg (\begin{array}{c} 2(\textbf{M}_{\frac{\eta }{2}}({\bar{y}}))_{ij}\\ (\textbf{M}_{\frac{\eta }{2}}({\bar{y}}))_{ii}-(\textbf{M}_{\frac{\eta }{2}}({\bar{y}}))_{jj} \end{array}\bigg )\Vert \le (\textbf{M}_{\frac{\eta }{2}}(\bar{y}))_{ii}+(\textbf{M}_{\frac{\eta }{2}}({\bar{y}}))_{jj}, \; 1 \le i,j \le s(\frac{\eta }{2},n). \end{aligned}$$
(76)

Denote \({\bar{x}}:=(L_{{\bar{y}}}(x_1),\ldots , L_{\bar{y}}(x_n))\in \mathbb {R}^n\). As shown in (i), conditions (72), (74), (75) and (76) imply that \({\bar{x}}\) is a feasible point of problem (\(\hbox {P}_{\nu }\)), and so

$$\begin{aligned} \min {(\hbox {P}_{\nu })}\le \sum _{\zeta =1}^m\nu _\zeta F_\zeta ({\bar{x}}). \end{aligned}$$
(77)

Similarly, we derive from (73), (75) and (76) that

$$\begin{aligned} \sum _{\alpha =1}^{s(\eta ,n)}(\sum _{\zeta =1}^m\nu _\zeta p^0_\zeta )_\alpha {\bar{y}}_\alpha + \sum _{\zeta =1}^m \nu _\zeta \sqrt{\rho _\zeta } \xi _\zeta \ge \sum _{\zeta =1}^m\nu _\zeta F_\zeta ({\bar{x}}). \end{aligned}$$

Combining this with (77), (71) and (70) shows that \(\min {(\hbox {P}_{\nu })}= \sum _{\zeta =1}^m\nu _\zeta F_\zeta ({\bar{x}})\), and so \({\bar{x}}\) is an optimal solution of problem (\(\hbox {P}_{\nu }\)). We conclude that \({\bar{x}}\) is a robust weak Pareto solution of problem (U). In the case of \(\nu \in \textrm{int}\mathbb {R}^m_+\), we obtain that \({\bar{x}}\) is a robust Pareto solution of problem (U). The proof is complete. \(\square \)

5 Conclusions

In this paper, we have presented necessary and sufficient optimality conditions, in terms of second-order cone conditions, for robust (weak) Pareto solutions of an uncertain multiobjective optimization problem. We have also introduced a dual problem for the robust multiobjective optimization problem and examined robust converse, robust weak and robust strong duality relations between the primal and dual problems. Furthermore, robust solution relationships between the uncertain multiobjective optimization program and (scalar) second-order cone programming (SOCP) dual and relaxation problems have been established by using SOCP duality theory. In particular, it has been shown that one can compute a robust (weak) Pareto solution of the uncertain multiobjective optimization problem by solving a related SOCP relaxation.

The results obtained in this paper are conceptual dual/relaxation schemes that lend themselves to numerical experiments for a subclass of robust convex multiobjective polynomial optimization problems. It would be of great interest to develop associated algorithms that find robust (weak) Pareto solutions of an uncertain multiobjective optimization problem via its SOCP dual or relaxation programs.