1 Introduction

Multiobjective (or vector/multi-criteria) optimization problems are mathematical models of real-world optimization problems and have been applied in various areas of human life and work such as economics, engineering and industry (Ehrgott 2005; Jahn 2004; Sawaragi et al. 1985; Steuer 1986; Luc 1989). A multiobjective optimization model typically involves several conflicting objectives to be optimized over a set of feasible choices, and as a result, finding efficient solutions or establishing optimality criteria for a multiobjective optimization problem is generally hard by its nature (Chuong 2016; Boţ et al. 2009; Chuong and Kim 2014; Ehrgott et al. 2014; Lee and Lee 2015, 2018; Zamani et al. 2015; Chinchuluun and Pardalos 2007; Chuong 2019, 2020).

Considerable recent attention has been devoted to developing semidefinite programming (SDP) approaches (see e.g., Blekherman et al. (2012)) for treating multiobjective convex and nonconvex programming problems by exploiting special structures of the objective and constraint functions, such as linearity or polynomial structure; see, e.g., (Georgiev et al. 2013; Chuong 2017, 2018; Chuong and Jeyakumar 2017; Goberna et al. 2014; Gorissen and den Hertog 2012; Lee and Jiao 2019a, b; Magron et al. 2014; Chieu et al. 2018). For example, the authors in Goberna et al. (2014) and in Chuong (2017) studied robust solutions of multiobjective (semi-infinite) linear optimization problems under constraint data uncertainty. By employing an approximate scalarization technique, the authors in Lee and Jiao (2019a, b) proposed a method for finding efficient solutions of (robust) multiobjective optimization problems in which the objective and constraint functions are SOS-convex polynomials. Recall from Ahmadi and Parrilo (2013); Helton and Nie (2010) that a polynomial \(f:\mathbb {R}^n\rightarrow \mathbb R\) is called SOS-convex if the polynomial \(h_f:\mathbb {R}^n\times \mathbb R^n\rightarrow \mathbb R\) given by \(h_f(x,y):= f(x)-f(y)- \nabla f(y)^T(x-y)\) is a sums-of-squares polynomial in the variables \((x,y)\), where \(\nabla f(y)\) denotes the derivative of f at y. The interested reader is referred to (Chuong 2018) for optimality conditions and duality in terms of linear matrix inequalities for a class of (robust) multiobjective SOS-convex polynomial programs.

It is worth mentioning that the class of first-order scaled diagonally dominant sums-of-squares convex (1st-SDSOS-convex) polynomials (see Definition 2.2 below), a subclass of the SOS-convex polynomials, is a numerically tractable class of convex polynomials: checking whether a given real polynomial is 1st-SDSOS-convex can be equivalently reformulated as the feasibility problem of a second-order cone programming (SOCP) problem (Ahmadi and Majumdar 2016, 2019; Chuong et al. 2019). In particular, it is shown in Chuong et al. (2019) that the class of (scalar) optimization problems involving 1st-SDSOS-convex polynomials enjoys strong SOCP duality in the sense that the optimal values of the primal and dual programs are equal under a constraint qualification. This result allows us to find the optimal value of a 1st-SDSOS-convex polynomial program by solving an SOCP dual problem.

In this paper, inspired by Chuong et al. (2019), we consider a multiobjective convex polynomial optimization problem of the form:

$$\begin{aligned} \min _{x\in \mathbb R^{n}}{\big \{\big (f_1(x),\ldots ,f_p(x)\big ) \mid g_l(x)\le 0,\; l=1,\ldots ,q\big \}}, \end{aligned}$$
(P)

where \(f_k:\mathbb R^n\rightarrow \mathbb R, k=1,\ldots , p\) and \( g_l:\mathbb R^n\rightarrow \mathbb R, l=1,\ldots ,q\) are 1st-SDSOS-convex polynomials.

The first main aim of the paper is to provide optimality characterizations for weak efficient solutions of the multiobjective optimization problem (P) under a constraint qualification, where the optimality characterizations are expressed in terms of second-order cone (SOC) or Karush-Kuhn-Tucker (KKT) conditions. We also establish solution relationships between the multiobjective program (P) and an SOCP dual problem or an SOCP relaxation of a given weighted-sum optimization problem. This result provides us with a way to find (weak) efficient solutions of the multiobjective polynomial program (P) by solving the (scalar) SOCP relaxation problem of the given weighted-sum optimization problem.

The second main aim of the paper is to propose a dual multiobjective problem, formulated in terms of SOC conditions, for the multiobjective optimization problem (P) and to explore weak, strong and converse duality relations between them. Interestingly, one of the duality results shows that the existence of a weak efficient solution of the dual problem is equivalent to the validity of the KKT condition at a given weak efficient solution of the primal problem and, moreover, that the efficient values of both problems coincide. In this way, we are able to find a weak efficient solution of the dual problem by alternatively verifying the KKT condition at a given weak efficient solution of the primal problem.

The outline of the paper is as follows. In Sect. 2, we first present necessary and sufficient optimality criteria for weak efficiency of problem (P) and then establish solution relationships between the multiobjective program (P) and an SOCP dual problem or an SOCP relaxation of a given weighted-sum optimization problem. Section 3 addresses a dual multiobjective problem to the multiobjective optimization problem (P) and explores weak, strong and converse duality relations. Section 4 summarises the obtained results and outlines some directions for further research.

2 Second-order cone optimality conditions and solution relationships

Let us start by fixing some notation and definitions. For each \(n\in \mathbb N:=\{1,2,\ldots \}\), the notation \(\mathbb R^n\) signifies the n-dimensional Euclidean space whose norm is denoted by \(\Vert \cdot \Vert \). The inner product in \(\mathbb R^n\) is defined by \(\langle x,y\rangle :=x^T y\) for all \(x, y\in \mathbb R^n.\) Let \(\mathbb R^n_+\) be the nonnegative orthant of \(\mathbb R^n\) and \(\mathrm{int\,} \mathbb R^n_+\) be the topological interior of \(\mathbb R^n_+\). Denote by \(\mathbb R[x]\) (or \(\mathbb {R}[x_1,\ldots ,x_n]\)) the ring of polynomials in \(x:=(x_1,\ldots ,x_n)\) with real coefficients. Let \(\mathbb {R}_d[x_1,\ldots ,x_n]\) be the space consisting of all real polynomials on \(\mathbb {R}^n\) with degree at most \(d\in \mathbb {N}_0:= \mathbb {N} \cup \{0\}\) and let \(s(d,n)\) be the dimension of \(\mathbb {R}_d[x_1,\ldots ,x_n]\). Write the vector of monomials forming the canonical basis of \(\mathbb {R}_d[x_1,\ldots ,x_n]\) as

$$\begin{aligned} x^{(d)}:=(1,x_1,x_2,\ldots ,x_n,x_1^2,x_1x_2,\ldots ,x_2^2,\ldots ,x_n^2,\ldots ,x_1^{d},\ldots ,x_n^d)^T. \end{aligned}$$

For each \(1 \le \alpha \le s(d,n)\), we denote by \(i(\alpha )=(i_1(\alpha ),\ldots ,i_n(\alpha )) \in (\mathbb {N}_0)^n\) the multi-index such that

$$\begin{aligned} x^{(d)}_{\alpha }=x^{i(\alpha )}:=x_1^{i_1(\alpha )}\ldots x_n^{i_n(\alpha )}. \end{aligned}$$

Let the monomial \(m_\alpha (x):=x^{(d)}_{\alpha }\) be the \(\alpha \)th coordinate of \(x^{(d)}\), \(1 \le \alpha \le s(d,n)\). Thus, for any \(f\in \mathbb {R}_d[x_1,\ldots ,x_n]\), we can write

$$\begin{aligned} f(x) = \sum \limits _{\alpha =1}^{s(d,n)} f_{\alpha }m_{\alpha }(x)=\sum \limits _{\alpha =1}^{s(d,n)} f_{\alpha }x^{(d)}_{\alpha }, \end{aligned}$$

where \(f_\alpha \) is the corresponding coefficient of f at the \(\alpha \)th coordinate in the canonical basis of monomials of \(\mathbb {R}_d[x_1,\ldots ,x_n]\).

Recall that (see e.g., Lasserre (2009)) for a given \(y:=(y_\alpha )\in \mathbb {R}^{s(d,n)},\) \(L_y:\mathbb {R}_d[x]\rightarrow \mathbb {R}\) is the Riesz functional given by

$$\begin{aligned} L_y(f):=\sum _{\alpha =1}^{s(d,n)}f_\alpha y_\alpha \; \text{ for } \; f(x):=\sum _{\alpha =1}^{s(d,n)}f_\alpha x^{(d)}_\alpha . \end{aligned}$$
(2.1)

The notation \(\mathbf{M}_d(y)\) stands for the moment matrix of degree d generated by \( y:=(y_\alpha )\in \mathbb {R}^{s(2d,n)}\), which is defined by

$$\begin{aligned} \mathbf{M}_d(y):=\sum _{\alpha =1}^{ s(2d,n)} y_\alpha M_\alpha , \end{aligned}$$
(2.2)

where \(M_\alpha ,\alpha =1,\ldots , s(2d,n),\) are \(s(d,n)\times s(d,n)\) symmetric matrices such that

$$\begin{aligned} x^{(d)}(x^{(d)})^T= \sum _{\alpha =1}^{ s(2d,n)} x^{(2d)}_\alpha M_\alpha . \end{aligned}$$
(2.3)
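
To illustrate (2.1)-(2.3) in the simplest case (an elementary computation), take \(n=1\) and \(d=1\), so that \(x^{(1)}=(1,x)^T\), \(s(1,1)=2\) and \(s(2,1)=3\). Then

$$\begin{aligned} x^{(1)}(x^{(1)})^T=\bigg (\begin{array}{cc} 1 & x\\ x & x^2 \end{array}\bigg )=1\cdot \bigg (\begin{array}{cc} 1 & 0\\ 0 & 0 \end{array}\bigg )+x\cdot \bigg (\begin{array}{cc} 0 & 1\\ 1 & 0 \end{array}\bigg )+x^2\cdot \bigg (\begin{array}{cc} 0 & 0\\ 0 & 1 \end{array}\bigg ), \end{aligned}$$

so that, for \(y=(y_1,y_2,y_3)\in \mathbb {R}^{s(2,1)}=\mathbb R^3\), the moment matrix is \(\mathbf{M}_1(y)=\Big (\begin{array}{cc} y_1 & y_2\\ y_2 & y_3 \end{array}\Big )\), while the Riesz functional of, say, \(f(x):=2+3x+x^2\) is \(L_y(f)=2y_1+3y_2+y_3\).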

Definition 2.1

(SDSOS polynomials) (Ahmadi and Majumdar 2016, 2019) We say that a polynomial f on \(\mathbb R^n\) with degree \(d:=2\ell \) is scaled diagonally dominant sums-of-squares (SDSOS) if there exist \(k \in \mathbb {N}\) with \(1 \le k \le s(\ell ,n)\) and scalars \(\alpha _i\), \(\beta _{ij}^+, \gamma _{ij}^+, \beta _{ij}^-, \gamma ^-_{ij}\) with \(\alpha _i\ge 0\) such that

$$\begin{aligned} f(x)=\sum _{i=1}^{k} \alpha _i m_i^2(x)+\sum _{i,j=1}^{k} (\beta _{ij}^+ m_i(x)+\gamma _{ij}^+ m_j(x))^2+\sum _{i,j=1}^{k} (\beta _{ij}^- m_i(x)-\gamma _{ij}^- m_j(x))^2, \end{aligned}$$

where \(m_i\) and \(m_j\) are monomials in the variable x. We denote by \(\mathbf{SDSOS}_d[x]\) the set of all scaled diagonally dominant sums-of-squares polynomials on \(\mathbb R^n\) with degree at most d.
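
For instance, the bivariate polynomial \(x^4-4x^2y^2+5y^4\) is SDSOS, as seen from the decomposition

$$\begin{aligned} x^4-4x^2y^2+5y^4=(x^2-2y^2)^2+(y^2)^2, \end{aligned}$$

in which the first summand is of the form \((\beta ^-_{ij}m_i-\gamma ^-_{ij}m_j)^2\) with monomials \(m_i:=x^2, m_j:=y^2\), and the second is of the form \(\alpha _im_i^2\) with \(\alpha _i=1\).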

It is obvious from the definition that any scaled diagonally dominant sums-of-squares polynomial is nonnegative, but the converse is not true in general (Ahmadi and Majumdar 2016, 2019; see also Chuong et al. 2019, Example 5.4 for a particular example). The next definition describes a subclass of convex polynomials whose membership can be verified by solving a second-order cone programming (SOCP) problem.

Definition 2.2

(1st-SDSOS-convex polynomials) (Chuong et al. 2019) Let f be a polynomial on \(\mathbb {R}^n\) and let \(h_f\) be a polynomial on \(\mathbb {R}^n\times \mathbb R^n\) given by

$$\begin{aligned} h_f(x,y):= f(x)-f(y)- \nabla f(y)^T(x-y), \end{aligned}$$

where \(\nabla f(y)\) denotes the derivative of f at y. We say that f is first-order scaled diagonally dominant sums-of-squares convex (1st-SDSOS-convex) whenever \(h_f\) is an SDSOS polynomial in the variables \((x,y)\).

Note from the definition that any 1st-SDSOS-convex polynomial is convex, and that for any 1st-SDSOS-convex polynomials \(f, g\) and any \(\gamma \ge 0\), the polynomials \(f+g\) and \(\gamma f\) are also 1st-SDSOS-convex. The class of 1st-SDSOS-convex polynomials is sufficiently large, including separable convex quadratic functions, any polynomial expressible as a sum of even powers with nonnegative coefficients, and many other functions important in polynomial optimization (cf. Chuong et al. (2019)). A simple instance is worked out below.
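
As a simple illustration, take \(f(x):=x^4\) on \(\mathbb R\), which belongs to the sum-of-even-powers class just mentioned. A direct computation gives

$$\begin{aligned} h_f(x,y)=x^4-y^4-4y^3(x-y)=x^4-4xy^3+3y^4=(x^2-y^2)^2+2(xy-y^2)^2, \end{aligned}$$

so \(h_f\) is an SDSOS polynomial in the variables \((x,y)\) (each summand has the form \((\beta ^-_{ij}m_i-\gamma ^-_{ij}m_j)^2\) with monomials \(m_i,m_j\in \{x^2,xy,y^2\}\)), and hence f is indeed 1st-SDSOS-convex.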

From now on, we assume that \(f_k:\mathbb R^n\rightarrow \mathbb R, k=1,\ldots , p\) and \( g_l:\mathbb R^n\rightarrow \mathbb R, l=1,\ldots ,q\) are 1st-SDSOS-convex polynomials with degree at most \(d\in \mathbb N_0\). For convenience, we assume that d is even, since otherwise we may replace d by \(d+1.\) In what follows, we sometimes use the notation \(f:=(f_1,\ldots ,f_p)\) for brevity and denote the set of feasible points of problem (P) by

$$\begin{aligned} C:=\{x\in \mathbb R^n\mid g_l(x)\le 0,\; l=1,\ldots ,q\}. \end{aligned}$$
(2.4)

Definition 2.3

  1. (i)

    We say that \({\bar{x}}\in C\) is an efficient solution of problem (P) if there is no \(x\in C\) such that \( f_k(x)\le f_k({\bar{x}})\) for all \(k=1,\ldots , p\) and \(f(x)\ne f({\bar{x}})\).

  2. (ii)

    A point \({\bar{x}}\in C\) is called a weak efficient solution of problem (P) if there is no \(x\in C\) such that \(f_k(x)<f_k({\bar{x}})\) for all \(k=1,\ldots , p\).

The following theorem establishes necessary and sufficient optimality criteria for weak efficiency of problem (P) in terms of an SOC condition and a KKT condition. We say that the Slater condition holds if there exists \({\hat{x}}\in \mathbb R^n\) such that

$$\begin{aligned} g_l({\hat{x}})<0,\;l=1,\ldots ,q. \end{aligned}$$
(2.5)

Theorem 2.4

(Characterizations for weak efficiency) Let \({\bar{x}}\in \mathbb R^n\) be a feasible point of problem (P) and assume that the Slater condition (2.5) holds. The following conditions are equivalent:

  1. (i)

    (Efficiency)   \({\bar{x}}\) is a weak efficient solution of problem (P).

  2. (ii)

    (SOC condition) There exist \(\alpha :=(\alpha _1,\ldots ,\alpha _p)\in \mathbb R^p_+\setminus \{0\}\) and \( \mu :=(\mu _1,\ldots ,\mu _q)\in \mathbb R_+^q\) such that

    $$\begin{aligned}&\sum _{k=1}^p\alpha _kf_k+\sum _{l=1}^q \mu _lg_l-\sum \limits _{k=1}^p\alpha _kf_k({\bar{x}})\in \mathbf{SDSOS}_d[x]. \end{aligned}$$
    (2.6)
  3. (iii)

    (KKT condition) There exist \(\alpha :=(\alpha _1,\ldots ,\alpha _p)\in \mathbb R^p_+\setminus \{0\}\) and \(\mu :=(\mu _1,\ldots ,\mu _q)\in \mathbb R^q_+\) such that

    $$\begin{aligned} \sum \limits _{k=1}^p\alpha _k\nabla f_k({\bar{x}})+\sum \limits _{l=1}^q\mu _l\nabla g_l({\bar{x}})=0,\; \mu _l g_l({\bar{x}})=0,\; l=1,\ldots ,q. \end{aligned}$$
    (2.7)

Proof

\([\mathbf{(i)}\ \Longrightarrow \ \mathbf{(ii)}]\) Let \({\bar{x}}\) be a weak efficient solution of problem (P). Then, it holds that

$$\begin{aligned} \{x\in \mathbb R^n\mid f_k(x)-f_k({\bar{x}})<0,\; k=1,\ldots , p,\; g_l(x)<0,\; l=1,\ldots ,q\}=\emptyset .\ \end{aligned}$$

Since \(f_k, k=1,\ldots ,p \) and \(g_l, l=1,\ldots ,q\) are convex polynomials, invoking the classical alternative theorem in convex analysis (cf. (Rockafellar 1970, Theorem 21.1)), we find \(\alpha _k\ge 0, k=1,\ldots , p, \mu _l\ge 0, l=1,\ldots , q\), not all zero, such that

$$\begin{aligned} \sum \limits _{k=1}^p\alpha _k\big (f_k(x)-f_k({\bar{x}})\big )+\sum \limits _{l=1}^q\mu _lg_l(x)\ge 0,\quad \forall x\in \mathbb R^n. \end{aligned}$$

This together with the Slater condition (2.5) ensures that \(\alpha :=(\alpha _1,\ldots ,\alpha _p)\ne 0\). Let \(h:\mathbb R^n\rightarrow \mathbb R\) be given by \(h(x):=\sum \nolimits _{k=1}^p\alpha _kf_k(x)+\sum \nolimits _{l=1}^q\mu _lg_l(x)-\sum \nolimits _{k=1}^p\alpha _kf_k({\bar{x}})\) for \( x\in \mathbb R^n.\) Under our assumption, h is a 1st-SDSOS-convex polynomial on \(\mathbb R^n\), and thus \(h\in \mathbf{SDSOS}_d[x]\) because \(h(x)\ge 0\) for all \(x\in \mathbb R^n\) (see Chuong et al. 2019, Proposition 5.3). So, (2.6) holds.

[(ii) \(\Longrightarrow \) (iii)] Assume that there exist \(\alpha :=(\alpha _1,\ldots ,\alpha _p)\in \mathbb R^p_+\setminus \{0\}\) and \( \mu :=(\mu _1,\ldots ,\mu _q)\in \mathbb R_+^q\) such that

$$\begin{aligned} \sum _{k=1}^p\alpha _kf_k+\sum _{l=1}^q \mu _lg_l-\sum \limits _{k=1}^p\alpha _kf_k({\bar{x}})\in \mathbf{SDSOS}_d[x]. \end{aligned}$$

Then, we find \(\sigma \in \mathbf{SDSOS}_d[x]\) such that

$$\begin{aligned} \sum _{k=1}^p \alpha _kf_k(x)=\sigma (x)-\sum \limits _{l=1}^q\mu _lg_l(x)+\sum _{k=1}^p \alpha _kf_k({\bar{x}}),\quad \forall x\in \mathbb R^n. \end{aligned}$$
(2.8)

Substituting \(x:={\bar{x}}\) into (2.8), we arrive at \(\sigma ({\bar{x}})-\sum \nolimits _{l=1}^q\mu _lg_l({\bar{x}})=0\). Note in addition that \(\mu _lg_l({\bar{x}})\le 0, l=1,\ldots ,q\), inasmuch as \({\bar{x}}\) is a feasible point of problem (P), and that \(\sigma ({\bar{x}})\ge 0\) as \(\sigma \) is an SDSOS polynomial and thus nonnegative. Therefore, it follows that

$$\begin{aligned} \mu _lg_l({\bar{x}})= 0, \; l=1,\ldots ,q \end{aligned}$$
(2.9)

and that \(\sigma ({\bar{x}})=0\). So, \(\sigma (x)\ge \sigma ({\bar{x}})\) for all \(x\in \mathbb R^n;\) i.e., \({\bar{x}}\) is a minimizer of \(\sigma \) on \(\mathbb R^n.\) So, \(\nabla \sigma ({\bar{x}})=0\), and moreover, by (2.8) and (2.9), we have

$$\begin{aligned} \sum _{k=1}^p \alpha _k\nabla f_k({\bar{x}})+\sum \limits _{l=1}^q\mu _l\nabla g_l({\bar{x}})=0,\; \mu _l g_l({\bar{x}})=0,\; l=1,\ldots ,q, \end{aligned}$$

which shows that the KKT condition in (2.7) holds.

[(iii) \(\Longrightarrow \) (i)] Assume that there exist \(\alpha :=(\alpha _1,\ldots ,\alpha _p)\in \mathbb R^p_+\setminus \{0\}\) and \(\mu :=(\mu _1,\ldots ,\mu _q)\in \mathbb R^q_+\) such that

$$\begin{aligned} \sum \limits _{k=1}^p\alpha _k\nabla f_k({\bar{x}})+\sum \limits _{l=1}^q\mu _l\nabla g_l({\bar{x}})=0,\; \mu _l g_l({\bar{x}})=0,\; l=1,\ldots ,q. \end{aligned}$$
(2.10)

Consider as above the function \(h:\mathbb R^n\rightarrow \mathbb R\) given by \(h(x):=\sum \limits _{k=1}^p\alpha _kf_k(x)+\sum \limits _{l=1}^q\mu _lg_l(x)-\sum \limits _{k=1}^p\alpha _kf_k({\bar{x}})\) for \( x\in \mathbb R^n.\) We assert by (2.10) that \(\nabla h({\bar{x}})=0\) and then, the convexity of h entails that

$$\begin{aligned} h(x)\ge h({\bar{x}})=0,\quad \forall x\in \mathbb R^n. \end{aligned}$$
(2.11)

Now, let \({\tilde{x}}\in \mathbb R^n\) be an arbitrary feasible point of problem (P). By evaluating (2.11) at \({\tilde{x}}\), we arrive at

$$\begin{aligned} \sum \limits _{k=1}^p\alpha _kf_k({\tilde{x}})\ge \sum \limits _{k=1}^p\alpha _kf_k({\tilde{x}})+\sum \limits _{l=1}^q\mu _lg_l({\tilde{x}})\ge \sum \limits _{k=1}^p\alpha _kf_k({\bar{x}}), \end{aligned}$$

where the first inequality holds due to the fact that \(\mu _lg_l({\tilde{x}})\le 0, l=1,\ldots ,q\). Then,

$$\begin{aligned} f({\tilde{x}})-f({\bar{x}})\notin -\mathrm{int\,} \mathbb R^p_+ \end{aligned}$$

by virtue of \(\alpha \in \mathbb R^p_+\setminus \{0\}.\) So, \({\bar{x}}\) is a weak efficient solution of problem (P).\(\Box \)

Remark 2.5

(SOC condition & KKT condition) A closer inspection of the proof of Theorem 2.4 reveals that the equivalence between (ii) and (iii) holds independently of the Slater condition (2.5). The following example shows that the equivalence between (i) and (ii) may fail if the Slater condition (2.5) does not hold.

Example 2.6

(The importance of the Slater condition) Consider the multiobjective convex polynomial optimization problem

$$\begin{aligned} \min _{x\in \mathbb R}{\big \{(x^8-2x+1,x^2-3x-1)} \mid x^8-1\le 0,\; x^8+x^2\le 0,\; 2x^8-1\le 0\big \}. \end{aligned}$$
(EP1)

The problem (EP1) is in the form of problem (P), where the objective functions \(f_1(x):=x^8-2x+1, f_2(x):=x^2-3x-1, x\in \mathbb R,\) and the constraint functions \( g_1(x):= x^8-1, g_2(x):= x^8+x^2, g_3(x):=2x^8-1, x\in \mathbb R,\) are 1st-SDSOS-convex polynomials.

It can be verified that \({\bar{x}}:=0\) is an efficient solution of problem (EP1), and that the Slater condition fails. We claim that the SOC condition (2.6) does not hold for this problem. To see this, assume on the contrary that there exist \(\alpha :=(\alpha _1,\alpha _2)\in \mathbb R^2_+\setminus \{0\}, \mu :=(\mu _1,\mu _2,\mu _3)\in \mathbb R^3_+\) and \(\sigma \in \mathbf{SDSOS}_8[x],\) such that

$$\begin{aligned}&\sum _{k=1}^2\alpha _kf_k+\sum _{l=1}^3 \mu _lg_l -\sum _{k=1}^2\alpha _kf_k({\bar{x}})=\sigma . \end{aligned}$$

By rearranging this, we obtain that

$$\begin{aligned} (\alpha _1+\mu _1+\mu _2+2\mu _3)x^8+(\alpha _2+\mu _2)x^2-(2\alpha _1+3\alpha _2)x-\mu _1-\mu _3 =\sigma (x)\ge 0 \end{aligned}$$

for each \(x\in \mathbb R.\) It follows that, for each \(x\in \mathbb R,\)

$$\begin{aligned} (\alpha _1+ \mu _1+\mu _2+2\mu _3)x^8\ge (2\alpha _1+3\alpha _2)x-(\alpha _2+\mu _2)x^2. \end{aligned}$$
(2.12)

Considering \(x_k:=\frac{1}{k}\), where \(k\in \mathbb N\), and multiplying both sides of (2.12) by \(k^8\), we deduce that

$$\begin{aligned} \alpha _1+ \mu _1+\mu _2+2\mu _3\ge k^6[(2\alpha _1+3\alpha _2)k-(\alpha _2+\mu _2)]\ge (2\alpha _1+3\alpha _2)k-(\alpha _2+\mu _2) \end{aligned}$$

for k large enough. Thus,

$$\begin{aligned} \alpha _1+ \mu _1+\mu _2+2\mu _3\ge (2\alpha _1+3\alpha _2)k-(\alpha _2+\mu _2) \end{aligned}$$
(2.13)

for such large k. Now, since \(2\alpha _1+3\alpha _2>0\) (recall that \(\alpha \in \mathbb R^2_+\setminus \{0\}\)), letting \(k\rightarrow \infty \) in (2.13) yields a contradiction. Consequently, the implication [(i) \(\Longrightarrow \) (ii)] in Theorem 2.4 is no longer valid in this setting.

The next example illustrates that the equivalence between (i) and (ii) in Theorem 2.4 may fail for a multiobjective convex (but not 1st-SDSOS-convex) polynomial optimization problem even when the Slater condition holds.

Example 2.7

(The importance of the 1st-SDSOS-convexity) Consider a multiobjective convex polynomial optimization problem

$$\begin{aligned} \min _{x\in \mathbb R^2}{\big \{\big ((x_1+x_2-1)^2+1,(x_1+x_2-1)^2+2\big )} \mid&x_1\le 2,\; x_2\le 1\big \}. \end{aligned}$$
(EP2)

The problem (EP2) has the form of problem (P) except that the objective functions \(f_1(x):=(x_1+x_2-1)^2+1, f_2(x):=(x_1+x_2-1)^2+2, x\in \mathbb R^2,\) are convex but not 1st-SDSOS-convex polynomials (Chuong et al. 2019, Example 5.4); the constraint functions \( g_1(x):=x_1-2, g_2(x):=x_2-1, x\in \mathbb R^2,\) are linear (and hence 1st-SDSOS-convex) polynomials.

It is easy to see that the Slater condition holds, and we can verify that \({\bar{x}}:=(1,0)\) is a weak efficient solution of problem (EP2). We now show that the SOC condition (2.6) does not hold for this problem. Assume on the contrary that there exist \(\alpha :=(\alpha _1,\alpha _2)\in \mathbb R^2_+\setminus \{0\}, \mu :=(\mu _1,\mu _2)\in \mathbb R^2_+ \) and \(\sigma \in \mathbf{SDSOS}_2[x],\) such that

$$\begin{aligned} \sum _{k=1}^2\alpha _kf_k+\sum _{l=1}^2\mu _lg_l-\sum _{k=1}^2\alpha _kf_k({\bar{x}})=\sigma . \end{aligned}$$

By rearranging this, we obtain that

$$\begin{aligned} \big (\alpha _1+\alpha _2\big )(x_1+x_2-1)^2+\mu _1(x_1-2)+\mu _2(x_2-1)=\sigma (x)\ge 0\; \text{ for } \text{ all } \; x\in \mathbb R^2. \end{aligned}$$
(2.14)

This entails that

$$\begin{aligned} \big (\alpha _1+\alpha _2\big )(x_1+x_2-1)^2\ge -\mu _1(x_1-2)-\mu _2(x_2-1) \; \text{ for } \text{ all } \; x\in \mathbb R^2, \end{aligned}$$
(2.15)

which ensures that \(\mu _1=\mu _2=0\). Indeed, assume on the contrary that \(\mu _1\ne 0\) or \(\mu _2\ne 0\). Taking \(x_1:=1-t, x_2:=t\), where \(t\in \mathbb R\), we get by (2.15) that

$$\begin{aligned} 0\ge (\mu _1-\mu _2)t+\mu _1+\mu _2\; \text{ for } \text{ all } \; t\in \mathbb R, \end{aligned}$$

which results in a contradiction. So, \(\mu _1=\mu _2=0\). Then, we deduce from (2.14) that \(h=\frac{1}{\alpha _1+\alpha _2}\sigma \in \mathbf{SDSOS}_2[x]\), where \(h(x):=(x_1+x_2-1)^2, x\in \mathbb R^2.\) This is impossible because h is not an SDSOS polynomial, as noted in (Chuong et al. 2019, Example 5.4). In conclusion, the equivalence between (i) and (ii) in Theorem 2.4 fails in this setting; the reason is that \(f_k, k=1,2,\) are not 1st-SDSOS-convex polynomials.

Let \(\lambda :=(\lambda _1,\ldots ,\lambda _p)\in \mathbb R^p_+\setminus \{0\},\) and consider the corresponding weighted-sum optimization problem of (P) as follows:

$$\begin{aligned} \min _{x\in C}{\,\sum _{k=1}^p\lambda _kf_k(x)}, \end{aligned}$$
(\({\hbox {P}_{\lambda }}\))

where \(C:=\{x\in \mathbb R^n\mid g_l(x)\le 0, \; l=1,\ldots ,q\}\) is defined as in (2.4).

A second-order cone programming (SOCP) dual for the weighted-sum optimization problem (\({\hbox {P}_{\lambda }}\)) is given by

$$\begin{aligned} \sup _{(\mu ,\mu _1,\ldots ,\mu _q)}{}\left\{ \mu \mid \sum _{k=1}^p\lambda _kf_k+\sum _{l=1}^{q}\mu _lg_l -\mu \in \mathbf{SDSOS}_d[x],\; \mu \in \mathbb R,\; \mu _l\in \mathbb R_+,\; l=1,\ldots ,q\right\} . \end{aligned}$$
(\({\hbox {D}_{\lambda }}\))

We also consider a second-order cone programming (SOCP) relaxation for the weighted-sum optimization problem (\({\hbox {P}_{\lambda }}\)) that is defined as follows:

$$\begin{aligned} \min _{y\in \mathbb {R}^{s(d,n)}}\bigg \{&\sum _{\alpha =1}^{s(d,n)}\Big (\sum _{k=1}^p\lambda _kf_k\Big )_\alpha y_\alpha \mid \sum _{\alpha =1}^{s(d,n)}(g_l)_\alpha y_\alpha \le 0,\; l=1,\ldots ,q,\;\; y_1=1,\nonumber \\&\Vert \bigg (\begin{array}{c} 2(\mathbf{M}_{\frac{d}{2}}(y))_{ij}\\ (\mathbf{M}_{\frac{d}{2}}(y))_{ii}-(\mathbf{M}_{\frac{d}{2}}(y))_{jj} \end{array}\bigg )\Vert \le (\mathbf{M}_{\frac{d}{2}}(y))_{ii}+(\mathbf{M}_{\frac{d}{2}}(y))_{jj}, \; 1 \le i,j \le s\Big (\frac{d}{2},n\Big )\bigg \}. \end{aligned}$$
(\({\hbox {R}_{\lambda }}\))

To prove the next theorem, we recall from (Chuong et al. 2019, Proposition 6.1) a Jensen-type inequality for 1st-SDSOS-convex polynomials.

Lemma 2.8

(Jensen’s inequality) Let f be a 1st-SDSOS-convex polynomial on \(\mathbb {R}^n\) with degree \(d:=2\ell , \ell \in \mathbb N_0\). Let \(y:=(y_\alpha )\in \mathbb {R}^{s(d,n)}\) with \(y_1=1\) and

$$\begin{aligned} \Vert \bigg (\begin{array}{c} 2(\mathbf{M}_{\ell }(y))_{ij}\\ (\mathbf{M}_{\ell }(y))_{ii}-(\mathbf{M}_{\ell }(y))_{jj} \end{array}\bigg )\Vert \le (\mathbf{M}_\ell (y))_{ii}+(\mathbf{M}_\ell (y))_{jj}, \; 1 \le i,j \le s(\ell ,n). \end{aligned}$$
(2.16)

Then, we have

$$\begin{aligned} L_y(f)\ge f\big (L_y(x_1),\ldots , L_y(x_n)\big ), \end{aligned}$$

where \(L_y\) is given as in (2.1) and \(x_i\) denotes the polynomial which maps a vector x in \(\mathbb {R}^n\) to its ith coordinate.
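
In the univariate case Lemma 2.8 is easy to visualize (an elementary check). Take \(n=1\), \(\ell =1\) and \(y=(1,y_2,y_3)\in \mathbb {R}^{s(2,1)}\), so that \(\mathbf{M}_1(y)=\Big (\begin{array}{cc} 1 & y_2\\ y_2 & y_3 \end{array}\Big )\); condition (2.16) for \(i=1, j=2\) reads \(\sqrt{4y_2^2+(1-y_3)^2}\le 1+y_3\), i.e., \(y_2^2\le y_3\). Hence, for the 1st-SDSOS-convex polynomial \(f(x):=x^2\),

$$\begin{aligned} L_y(f)=y_3\ge y_2^2=\big (L_y(x)\big )^2=f\big (L_y(x)\big ), \end{aligned}$$

which is exactly the conclusion of the lemma.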

The following theorem establishes solution relationships among the multiobjective program (P), the SOCP dual problem (\(\hbox {D}_{\lambda }\)) and the SOCP relaxation problem (\(\hbox {R}_{\lambda }\)) of the corresponding weighted-sum optimization problem (\({\hbox {P}_{\lambda }}\)). This result provides us with a way to find (weak) efficient solutions of the multiobjective program (P) by solving the SOCP relaxation problem (\(\hbox {R}_{\lambda }\)); a small computational illustration is given after the proof below.

Theorem 2.9

Assume that the Slater condition (2.5) holds. Then, the following assertions are valid:

  1. (i)

    If \({\bar{x}}:=({\bar{x}}_1,\ldots ,{\bar{x}}_n)\) is a weak efficient solution of problem (P), then there exists \(\lambda \in \mathbb R^p_+\setminus \{0\}\) such that

    $$\begin{aligned} \min (\mathrm{P}_{\lambda })= \max {(\mathrm{D}_{\lambda })} =\min (\mathrm{R}_{\lambda }) \end{aligned}$$
    (2.17)

    and \({\bar{y}}:=(1, {\bar{x}}_1,\ldots ,{\bar{x}}_n, {\bar{x}}_1^2,{\bar{x}}_1{\bar{x}}_2,\ldots ,{\bar{x}}_2^2,\ldots ,{\bar{x}}_n^2,\ldots ,{\bar{x}}_1^{d},\ldots ,{\bar{x}}_n^d)\) is an optimal solution of problem (\(\hbox {R}_{\lambda }\)).

  2. (ii)

    Let \(\lambda \in \mathbb R^p_+\setminus \{0\}\) be such that the problem (\({\hbox {P}_{\lambda }}\)) admits an optimal solution. If \({\hat{y}}\in \mathbb {R}^{s(d,n)}\) is an optimal solution of problem (\(\hbox {R}_{\lambda }\)), then \({\hat{x}}:=(L_{{\hat{y}}}(x_1),\ldots , L_{{\hat{y}}}(x_n))\in \mathbb {R}^n\) is a weak efficient solution of problem (P), where \(L_{{\hat{y}}}\) is given as in (2.1) and \(x_i\) denotes the polynomial which maps a vector x in \(\mathbb {R}^n\) to its ith coordinate. Moreover, if \(\lambda \in \mathrm{int}\mathbb R^p_+\), then \({\hat{x}}\) is an efficient solution of problem (P).

Proof

(i) Let \({\bar{x}}:=({\bar{x}}_1,\ldots ,{\bar{x}}_n)\) be a weak efficient solution of problem (P). By Theorem 2.4, we find \(\lambda :=(\lambda _1,\ldots ,\lambda _p)\in \mathbb R^p_+\setminus \{0\}\) and \( \mu :=(\mu _1,\ldots ,\mu _q)\in \mathbb R_+^q\) such that

$$\begin{aligned} \sum _{k=1}^p\lambda _kf_k+\sum _{l=1}^q \mu _lg_l-\sum \limits _{k=1}^p\lambda _kf_k({\bar{x}})\in \mathbf{SDSOS}_d[x]. \end{aligned}$$
(2.18)

Thus, there exists \(\sigma \in \mathbf{SDSOS}_d[x]\) such that

$$\begin{aligned} \sum _{k=1}^p \lambda _kf_k(x)=\sigma (x)-\sum \limits _{l=1}^q\mu _lg_l(x)+\sum _{k=1}^p \lambda _kf_k({\bar{x}}),\quad \forall x\in \mathbb R^n. \end{aligned}$$
(2.19)

For any \(x\in C:=\{x\in \mathbb R^n\mid g_l(x)\le 0, \; l=1,\ldots ,q\}\), we get by (2.19) that

$$\begin{aligned} \sum _{k=1}^p \lambda _kf_k(x)\ge \sum _{k=1}^p \lambda _kf_k({\bar{x}}) \end{aligned}$$

due to the fact that \(\mu _lg_l(x)\le 0, l=1,\ldots ,q\) and \(\sigma (x)\ge 0\). So, \({\bar{x}}\) is an optimal solution of problem (\({\hbox {P}_{\lambda }}\)) and then,

$$\begin{aligned} \min ({\hbox {P}_{\lambda }} )=\sum _{k=1}^{p} \lambda _kf_k({\bar{x}}). \end{aligned}$$

Now, denote by \(\lambda ^*\) the optimal value of the dual problem (\(\hbox {D}_{\lambda }\)), i.e.,

$$\begin{aligned} \lambda ^*:=\sup _{(\mu , \mu _1,\ldots ,\mu _q)}{}\left\{ \mu \mid \sum _{k=1}^p\lambda _kf_k+\sum _{l=1}^{q}\mu _lg_l -\mu \in \mathbf{SDSOS}_d[x], \mu \in \mathbb R,\mu _l\in \mathbb R_+, l=1,\ldots ,q\right\} . \end{aligned}$$

By letting \({\bar{\mu }}:=\sum _{k=1}^p \lambda _kf_k({\bar{x}})\), we observe first by (2.18) that \(({\bar{\mu }}, \mu _1,\ldots ,\mu _q)\) is a feasible point of problem (\(\hbox {D}_{\lambda }\)) and thus, \({\bar{\mu }}\le \lambda ^*\). We assert that

$$\begin{aligned} \lambda ^*\le \min ({\hbox {P}_{\lambda }}). \end{aligned}$$
(2.20)

To see this, let \((\mu , \mu _1,\ldots ,\mu _q)\) be an arbitrary feasible point of problem (\(\hbox {D}_{\lambda }\)). Then, we have \(\mu \in \mathbb R,\mu _l\in \mathbb R_+, l=1,\ldots ,q\) and

$$\begin{aligned} \sum _{k=1}^p\lambda _kf_k+\sum _{l=1}^{q}\mu _lg_l -\mu \in \mathbf{SDSOS}_d[x]. \end{aligned}$$

Arguing similarly as in (2.19), we arrive at \(\sum _{k=1}^p\lambda _kf_k({\bar{x}}) \ge \mu \), which proves that (2.20) is valid. Consequently,

$$\begin{aligned} \min {({\hbox {P}_{\lambda }})}= \max {({\hbox {D}}_{\lambda })}. \end{aligned}$$
(2.21)

Next, we show that \(\min {({\hbox {P}_{\lambda }})}\le \inf {({\hbox {R}}_{\lambda })}.\) To see this, let \(y:=(y_\alpha )\in \mathbb {R}^{s(d,n)}\) be a feasible point of problem (\(\hbox {R}_{\lambda }\)) and denote \({\tilde{x}}:=(L_{y}(x_1),\ldots , L_{y}(x_n)).\) Then,

$$\begin{aligned}&\sum _{\alpha =1}^{s(d,n)}(g_l)_\alpha y_\alpha \le 0, l=1,\ldots , q, \end{aligned}$$
(2.22)
$$\begin{aligned}&y_1=1, \end{aligned}$$
(2.23)
$$\begin{aligned}&\Vert \bigg (\begin{array}{c} 2(\mathbf{M}_{\frac{d}{2}}(y))_{ij}\\ (\mathbf{M}_{\frac{d}{2}}(y))_{ii}-(\mathbf{M}_{\frac{d}{2}}(y))_{jj} \end{array}\bigg )\Vert \le (\mathbf{M}_{\frac{d}{2}}(y))_{ii}+ \left( \mathbf{M}_{\frac{d}{2}}(y)\right) _{jj}, \; 1 \le i,j \le s\left( \frac{d}{2},n\right) . \end{aligned}$$
(2.24)

Note that \(g_l, l=1,\ldots ,q\), are 1st-SDSOS-convex polynomials. Since (2.23) and (2.24) hold, applying Jensen’s inequality for 1st-SDSOS-convex polynomials (see Lemma 2.8), we get by (2.22) that

$$\begin{aligned} 0\ge L_{y}(g_l)\ge g_l(L_{y}(x_1),\ldots , L_{y}(x_n))=g_l({\tilde{x}}), l=1,\ldots , q, \end{aligned}$$

which shows that \({\tilde{x}}\) is a feasible point of problem (\({\hbox {P}_{\lambda }}\)) and thus,

$$\begin{aligned} \min {({\hbox {P}_{\lambda }})} \le \sum _{k=1}^p\lambda _kf_k({\tilde{x}}). \end{aligned}$$
(2.25)

Similarly, since \(f_k, k=1,\ldots ,p\), are 1st-SDSOS-convex polynomials, we derive from Jensen’s inequality that

$$\begin{aligned} \sum _{\alpha =1}^{s(d,n)}(\sum _{k=1}^p\lambda _kf_k)_\alpha y_\alpha = \sum _{k=1}^p\lambda _kL_{y}(f_k)\ge \sum _{k=1}^p\lambda _k f_k(L_{y}(x_1),\ldots , L_{y}(x_n))=\sum _{k=1}^p\lambda _k f_k({\tilde{x}}), \end{aligned}$$

which together with (2.25) guarantees that

$$\begin{aligned} \inf {({\hbox {R}}_{\lambda })}\ge \min {({\hbox {P}_{\lambda }})}. \end{aligned}$$
(2.26)

Let us denote \({\bar{y}}:={\bar{x}}^{(d)}=(1, {\bar{x}}_1,\ldots ,{\bar{x}}_n, {\bar{x}}_1^2,{\bar{x}}_1{\bar{x}}_2,\ldots ,{\bar{x}}_2^2,\ldots ,{\bar{x}}_n^2,\ldots ,{\bar{x}}_1^{d},\ldots ,{\bar{x}}_n^d)\). Then, \({\bar{y}}_1=1\) and

$$\begin{aligned} L_{{\bar{y}}}(g_l)=\sum _{\alpha =1}^{s(d,n)}(g_{l})_{\alpha } {\bar{y}}_\alpha =\sum _{\alpha =1}^{s(d,n)}(g_{l})_{\alpha } {\bar{x}}^{(d)}_\alpha =g_l({\bar{x}})\le 0, l=1,\ldots , q. \end{aligned}$$

Moreover, from the definition of the moment matrix (see (2.2) and (2.3)), we have

$$\begin{aligned} \mathbf{M}_{\frac{d}{2}}({\bar{y}})=\sum _{\alpha =1}^{ s(d,n)} {\bar{y}}_\alpha M_\alpha = \sum _{\alpha =1}^{ s(d,n)} {\bar{x}}^{(d)}_\alpha M_\alpha ={\bar{x}}^{(\frac{d}{2})}({\bar{x}}^{(\frac{d}{2})})^T\succeq 0, \end{aligned}$$

which guarantees that

$$\begin{aligned} \Vert \bigg (\begin{array}{c} 2(\mathbf{M}_{\frac{d}{2}}({\bar{y}}))_{ij}\\ (\mathbf{M}_{\frac{d}{2}}({\bar{y}}))_{ii}-(\mathbf{M}_{\frac{d}{2}}({\bar{y}}))_{jj} \end{array}\bigg )\Vert \le (\mathbf{M}_{\frac{d}{2}}({\bar{y}}))_{ii}+(\mathbf{M}_{\frac{d}{2}}({\bar{y}}))_{jj}, \; 1 \le i,j \le s \left( \frac{d}{2},n \right) . \end{aligned}$$

Therefore, \({\bar{y}}\) is a feasible point of problem (\(\hbox {R}_{\lambda }\)) and it in turn entails that

$$\begin{aligned} \inf {({\hbox {R}}_{\lambda })}&\le \sum _{\alpha =1}^{s(d,n)}(\sum _{k=1}^p\lambda _kf_k)_\alpha {\bar{y}}_\alpha = \sum _{k=1}^p\lambda _kL_{{\bar{y}}}(f_k)=\sum _{k=1}^p\lambda _k(\sum _{\alpha =1}^{s(d,n)}(f_k)_{\alpha } {\bar{y}}_\alpha )\\&=\sum _{k=1}^p\lambda _k(\sum _{\alpha =1}^{s(d,n)}(f_k)_{\alpha } {\bar{x}}^{(d)}_\alpha )=\sum _{k=1}^p\lambda _kf_k({\bar{x}})=\min {({\hbox {P}_{\lambda }})}. \end{aligned}$$

This together with (2.26) entails that

$$\begin{aligned} \min {({\hbox {P}_{\lambda }})}=\min {({\hbox {R}}_{\lambda })}= \sum _{\alpha =1}^{s(d,n)}(\sum _{k=1}^p\lambda _kf_k)_\alpha {\bar{y}}_\alpha , \end{aligned}$$

which shows, in particular, that \({\bar{y}} \) is an optimal solution of problem (\(\hbox {R}_{\lambda }\)).

(ii) Let \(\lambda \in \mathbb R^p_+\setminus \{0\}\) be such that the problem (\({\hbox {P}_{\lambda }}\)) admits an optimal solution and assume that \({\bar{x}}\) is an optimal solution of problem (\({\hbox {P}_{\lambda }}\)). Then, \({\bar{x}}\) is a weak efficient solution of problem (P). In view of (i), we obtain that

$$\begin{aligned} \min {({\hbox {P}_{\lambda }})}=\max {({\hbox {D}}_{\lambda })}=\min {({\hbox {R}}_{\lambda })}. \end{aligned}$$
(2.27)

Now, let \({\hat{y}}:=({\hat{y}}_\alpha )\in \mathbb {R}^{s(d,n)}\) be an optimal solution of problem (\(\hbox {R}_{\lambda }\)). Then,

$$\begin{aligned}&\min {({\hbox {R}}_{\lambda })}=\sum _{\alpha =1}^{s(d,n)}(\sum _{k=1}^p\lambda _kf_k)_\alpha {\hat{y}}_\alpha , \end{aligned}$$
(2.28)
$$\begin{aligned}&\sum _{\alpha =1}^{s(d,n)}(g_l)_\alpha {\hat{y}}_\alpha \le 0, l=1,\ldots , q, \end{aligned}$$
(2.29)
$$\begin{aligned}&{\hat{y}}_1=1, \end{aligned}$$
(2.30)
$$\begin{aligned}&\Vert \bigg (\begin{array}{c} 2(\mathbf{M}_{\frac{d}{2}}({\hat{y}}))_{ij}\\ (\mathbf{M}_{\frac{d}{2}}({\hat{y}}))_{ii}-(\mathbf{M}_{\frac{d}{2}}({\hat{y}}))_{jj} \end{array}\bigg )\Vert \le (\mathbf{M}_{\frac{d}{2}}({\hat{y}}))_{ii}+(\mathbf{M}_{\frac{d}{2}}({\hat{y}}))_{jj}, \; 1 \le i,j \le s \left( \frac{d}{2},n \right) . \end{aligned}$$
(2.31)

As above, since (2.30) and (2.31) hold, applying Jensen’s inequality for 1st-SDSOS-convex polynomials, we obtain from (2.29) that

$$\begin{aligned} 0\ge L_{{\hat{y}}}(g_l)\ge g_l(L_{{\hat{y}}}(x_1),\ldots , L_{{\hat{y}}}(x_n))=g_l({\hat{x}}), l=1,\ldots , q, \end{aligned}$$

which shows that \({\hat{x}}:=(L_{{\hat{y}}}(x_1),\ldots , L_{{\hat{y}}}(x_n))\) is a feasible point of problem (\({\hbox {P}_{\lambda }}\)) and thus,

$$\begin{aligned} \min {({\hbox {P}_{\lambda }})}\le \sum _{k=1}^p\lambda _kf_k({\hat{x}}). \end{aligned}$$

Similarly, we derive from (2.27), (2.28) and Jensen’s inequality that

$$\begin{aligned} \min {({\hbox {P}_{\lambda }})}= \sum _{k=1}^p\lambda _k L_{{\hat{y}}}(f_k)\ge \sum _{k=1}^p\lambda _k f_k(L_{{\hat{y}}}(x_1),\ldots , L_{{\hat{y}}}(x_n))=\sum _{k=1}^p\lambda _kf_k({\hat{x}}). \end{aligned}$$

Therefore, \(\min {({\hbox {P}_{\lambda }})}= \sum _{k=1}^p\lambda _kf_k({\hat{x}})\) and \({\hat{x}}\) is an optimal solution of problem (\(\hbox {P}_{\lambda }\)). The latter assertion entails that \({\hat{x}}\) is a weak efficient solution of problem (P). Assume in addition that \(\lambda \in \mathrm{int}\mathbb R^p_+\). Then, \({\hat{x}}\) is an efficient solution of problem (P), which completes the proof of the theorem.\(\Box \)
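
To give a feel for how Theorem 2.9(ii) can be used in computations, the following minimal sketch (ours, not part of the paper) solves the relaxation (\(\hbox {R}_{\lambda }\)) for the toy data \(n=1\), \(d=2\), \(f_1(x):=x^2\), \(f_2(x):=(x-1)^2\), \(g_1(x):=x^2-1\) and \(\lambda :=(1/2,1/2)\), assuming the Python package cvxpy is available. Here \(y=(y_1,y_2,y_3)\) stands for \((1,x,x^2)\), \(\mathbf{M}_1(y)\) is the \(2\times 2\) moment matrix, and the recovered point is \({\hat{x}}=L_{{\hat{y}}}(x)=y_2\).

```python
import cvxpy as cp

# Moment vector y = (y1, y2, y3) standing for (1, x, x^2); here n = 1, d = 2.
y = cp.Variable(3)
y1, y2, y3 = y[0], y[1], y[2]

constraints = [
    y1 == 1,            # normalization  y_1 = 1
    -y1 + y3 <= 0,      # L_y(g_1) <= 0  for  g_1(x) = x^2 - 1
    # SOC constraints (2.24) on the moment matrix M_1(y) = [[y1, y2], [y2, y3]]
    cp.SOC(y1 + y3, cp.hstack([2 * y2, y1 - y3])),  # off-diagonal pair (i, j) = (1, 2)
    y1 >= 0,            # diagonal pair (1, 1)
    y3 >= 0,            # diagonal pair (2, 2)
]

# Objective: L_y(0.5*f1 + 0.5*f2) = L_y(x^2 - x + 0.5) = 0.5*y1 - y2 + y3.
prob = cp.Problem(cp.Minimize(0.5 * y1 - y2 + y3), constraints)
prob.solve()

x_hat = y2.value  # recovered point  x_hat = L_y(x)
print(prob.value, x_hat)  # expect about 0.25 and 0.5
```

In this instance the weighted-sum problem \(\min _{x\in [-1,1]}\{x^2-x+\tfrac{1}{2}\}\) has the unique solution \(x=\tfrac{1}{2}\) with optimal value \(\tfrac{1}{4}\), which the relaxation recovers exactly, in line with Theorem 2.9.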

3 Duality via second-order cone conditions

This section is devoted to the study of duality for the multiobjective optimization problem (P). As we will see below, the duality results characterize the fulfilment of the KKT condition and allow us to obtain efficient values of the primal multiobjective optimization problem (P) by solving a corresponding dual multiobjective problem, which is formulated in terms of second-order cone (SOC) conditions.

We consider a dual multiobjective problem to the multiobjective optimization problem (P) as follows:

$$\begin{aligned} \max _{(t_k,\lambda _k,\mu _l)}{}\bigg \{&(t_1,\ldots , t_p)\in \mathbb R^p\mid \sum _{k=1}^p\lambda _kf_k+\sum _{l=1}^{q}\mu _lg_l -\sum _{k=1}^p\lambda _kt_k\in \mathbf{SDSOS}_d[x],\nonumber \\&t_k\in \mathbb R, k=1,\ldots ,p, \mu _l\in \mathbb R_+, l=1,\ldots ,q, (\lambda _1,\ldots ,\lambda _p)\in \mathbb R^p_+\setminus \{0\}\bigg \}. \end{aligned}$$
(D)

Note here that an efficient solution (resp., a weak efficient solution) of a “max” problem like the dual problem (D) is defined similarly as in Definition 2.3 by replacing \(\le \) (resp., <) by \(\ge \) (resp., >).

The first theorem in this section describes weak and strong duality relations between the primal problem (P) and the dual problem (D).

Theorem 3.1

  1. (i)

    (Weak duality) Let \({\tilde{x}}\) be a feasible point of problem (P) and let \((t_1,\ldots ,t_p,\lambda _1,\ldots ,\lambda _p,\mu _1,\ldots ,\mu _q)\) be a feasible point of problem (D). It holds that

    $$\begin{aligned} f({\tilde{x}})-(t_1,\ldots ,t_p)\notin -\mathrm{int}\mathbb R^p_+, \end{aligned}$$
    (3.1)

    where \(f:=(f_1,\ldots ,f_p).\)

  2. (ii)

    (Strong dual characterization) Let \({\bar{x}}\) be a weak efficient solution of problem (P). Then, the KKT condition holds at \({\bar{x}}\) if and only if there exists a weak efficient solution of problem (D), say \(({\bar{t}}_1,\ldots ,{\bar{t}}_p,{\bar{\lambda }}_1,\ldots ,{\bar{\lambda }}_p,{\bar{\mu }}_1,\ldots ,{\bar{\mu }}_q)\), such that

    $$\begin{aligned} f({\bar{x}})=({\bar{t}}_1,\ldots ,{\bar{t}}_p). \end{aligned}$$

Proof

  1. (i)

    Since \((t_1,\ldots ,t_p,\lambda _1,\ldots ,\lambda _p,\mu _1,\ldots ,\mu _q)\) is a feasible point of problem (D), there exists \( \sigma \in \mathbf{SDSOS}_d[x]\) such that

    $$\begin{aligned} \sum _{k=1}^p\lambda _kf_k(x)+\sum _{l=1}^{q}\mu _lg_l(x) -\sum _{k=1}^p\lambda _kt_k=\sigma (x), \quad \forall x\in \mathbb R^n,\end{aligned}$$
    (3.2)

where \(t_k\in \mathbb R, k=1,\ldots , p\), \(\mu _l \in \mathbb R_+, l=1,\ldots , q\), and \( (\lambda _1,\ldots ,\lambda _p)\in \mathbb R^p_+\setminus \{0\}\). Keeping in mind the nonnegativity of SDSOS polynomials and evaluating (3.2) at \({\tilde{x}}\), we obtain that

    $$\begin{aligned} \sum _{k=1}^p \lambda _kf_k({\tilde{x}})\ge \sum _{k=1}^p \lambda _kt_k, \end{aligned}$$
    (3.3)

    where we recall that \(g_l({\tilde{x}})\le 0, l=1,\ldots ,q\) inasmuch as \({\tilde{x}}\) is a feasible point of problem (P). Since \( (\lambda _1,\ldots ,\lambda _p)\in \mathbb R^p_+\setminus \{0\}\), (3.3) guarantees that

    $$\begin{aligned} f({\tilde{x}})-(t_1,\ldots ,t_p)\notin -\mathrm{int\,} \mathbb R^p_+, \end{aligned}$$

    which shows that (3.1) is valid.

  2. (ii)

    Let \({\bar{x}}\) be a weak efficient solution of problem (P). Assume that the KKT condition holds at \({\bar{x}}\). By Theorem 2.4 and Remark 2.5, the SOC condition holds; i.e., there exist \(({\bar{\lambda }}_1,\ldots ,{\bar{\lambda }}_p)\in \mathbb R^p_+\setminus \{0\}\) and \(({\bar{\mu }}_1,\ldots ,{\bar{\mu }}_q)\in \mathbb R^q_+,\) such that

    $$\begin{aligned} \sum _{k=1}^p{\bar{\lambda }}_kf_k+\sum _{l=1}^q {\bar{\mu }}_lg_l -\sum _{k=1}^p{\bar{\lambda }}_k{\bar{t}}_k\in \mathbf{SDSOS}_d[x], \end{aligned}$$
    (3.4)

    where \({\bar{t}}_k:=f_k({\bar{x}}), k=1,\ldots ,p\). It shows that \(f({\bar{x}})=({\bar{t}}_1,\ldots ,{\bar{t}}_p)\) and that \(({\bar{t}}_1,\ldots ,{\bar{t}}_p,{\bar{\lambda }}_1,\ldots ,{\bar{\lambda }}_p,{\bar{\mu }}_1,\ldots ,{\bar{\mu }}_q)\) is a feasible point of problem (D). We assert that \(({\bar{t}}_1,\ldots ,{\bar{t}}_p,{\bar{\lambda }}_1,\ldots ,{\bar{\lambda }}_p,{\bar{\mu }}_1,\ldots ,{\bar{\mu }}_q)\) is a weak efficient solution of problem (D). Indeed, if this is not the case, then there exists another feasible point of problem (D), say \(({\hat{t}}_1,\ldots ,{\hat{t}}_p,\hat{\lambda }_1,\ldots ,\hat{\lambda }_p,\hat{\mu }_1,\ldots ,\hat{\mu }_q)\), such that \(({\hat{t}}_1,\ldots ,{\hat{t}}_p)-({\bar{t}}_1,\ldots ,{\bar{t}}_p)\in \mathrm{int}\mathbb R^p_+,\) or equivalently,

    $$\begin{aligned} f({\bar{x}})-({\hat{t}}_1,\ldots ,{\hat{t}}_p)\in -\mathrm{int}\mathbb R^p_+, \end{aligned}$$

    which contradicts the weak duality relation given in (i).

Conversely, assume that there exists a weak efficient solution of problem (D), say \(({\bar{t}}_1,\ldots ,{\bar{t}}_p,{\bar{\lambda }}_1,\ldots ,{\bar{\lambda }}_p,{\bar{\mu }}_1,\ldots ,{\bar{\mu }}_q)\), such that \(f({\bar{x}})=({\bar{t}}_1,\ldots ,{\bar{t}}_p).\) Then, we have

$$\begin{aligned} \sum _{k=1}^p{\bar{\lambda }}_kf_k+\sum _{l=1}^q {\bar{\mu }}_lg_l -\sum _{k=1}^p{\bar{\lambda }}_k f_k({\bar{x}})\in \mathbf{SDSOS}_d[x], \end{aligned}$$

where \(({\bar{\lambda }}_1,\ldots ,{\bar{\lambda }}_p)\in \mathbb R^p_+\setminus \{0\}\) and \(({\bar{\mu }}_1,\ldots ,{\bar{\mu }}_q)\in \mathbb R^q_+\). It means that the SOC condition given in Theorem 2.4 holds, and so the KKT condition holds at \({\bar{x}}\) due to Remark 2.5. The proof of the theorem is complete.\(\Box \)

The following corollary provides a strong duality result under the fulfilment of the Slater condition.

Corollary 3.2

(Strong duality under the Slater condition) Assume that the Slater condition (2.5) holds. Let \({\bar{x}}\) be a weak efficient solution of problem (P). Then, there exists a weak efficient solution of problem (D), say \(({\bar{t}}_1,\ldots ,{\bar{t}}_p,{\bar{\lambda }}_1,\ldots ,{\bar{\lambda }}_p,{\bar{\mu }}_1,\ldots ,{\bar{\mu }}_q)\), such that

$$\begin{aligned} f({\bar{x}})=({\bar{t}}_1,\ldots ,{\bar{t}}_p). \end{aligned}$$

Proof

Since the Slater condition (2.5) holds and \({\bar{x}}\) is a weak efficient solution of problem (P), we conclude by Theorem 2.4 that the KKT condition holds at \({\bar{x}}\). So, the conclusion follows from Theorem 3.1(ii). \(\Box \)

The next theorem establishes a converse duality relation between the primal problem (P) and the dual problem (D).

Theorem 3.3

(Converse duality relation) Assume that the Slater condition (2.5) holds and that the feasible set C in (2.4) is compact. If \(({\bar{t}}_1,\ldots ,{\bar{t}}_p,{\bar{\lambda }}_1,\ldots ,{\bar{\lambda }}_p,{\bar{\mu }}_1,\ldots ,{\bar{\mu }}_q)\) is a weak efficient solution of problem (D), then there exists a weak efficient solution of problem (P), say \({\bar{x}}\), such that

$$\begin{aligned} f({\bar{x}})- ({\bar{t}}_1,\ldots ,{\bar{t}}_p)\in -\mathbb R^p_+. \end{aligned}$$
(3.5)

Proof

Let \(({\bar{t}}_1,\ldots ,{\bar{t}}_p,{\bar{\lambda }}_1,\ldots ,{\bar{\lambda }}_p,{\bar{\mu }}_1,\ldots ,{\bar{\mu }}_q)\) be a weak efficient solution of problem (D), and let \(W:=\{f(x)+v\mid x\in C, v\in \mathbb R^p_+\}.\) It is easy to see that W is a closed convex set, and we assert that

$$\begin{aligned} ({\bar{t}}_1,\ldots ,{\bar{t}}_p)\in W. \end{aligned}$$
(3.6)

To see this, assume on the contrary that \(({\bar{t}}_1,\ldots ,{\bar{t}}_p)\notin W.\) Invoking the classical strong separation theorem in convex analysis (see e.g., Mordukhovich and Nam 2014, Theorem 2.2), one can find \(\alpha :=(\alpha _1,\ldots , \alpha _p)\in \mathbb R^p_+\setminus \{0\}\) (the separating vector can be taken in \(\mathbb R^p_+\) because \(W+\mathbb R^p_+\subset W\)) such that

$$\begin{aligned} \sum _{k=1}^p\alpha _k{\bar{t}}_k< \inf {\Big \{\langle \alpha ,\omega \rangle \mid \omega \in W\Big \}}. \end{aligned}$$

This guarantees that

$$\begin{aligned} \sum _{k=1}^p\alpha _k{\bar{t}}_k< \min _{x\in C}{\sum _{k=1}^p\alpha _kf_k(x)}.\end{aligned}$$
(3.7)

Since the feasible set C is compact, the (scalar) optimization problem on the right-hand side of (3.7) admits an optimal solution. Namely, there exists an optimal solution \(x^0\in C\) such that \(\sum \nolimits _{k=1}^p\alpha _k{\bar{t}}_k< \sum \nolimits _{k=1}^p\alpha _kf_k(x^0).\) Under the fulfilment of the Slater condition (2.5), applying Theorem 2.4 to the (scalar) optimization problem on the right-hand side of (3.7) (i.e., with the single objective \(\sum _{k=1}^p\alpha _kf_k\)), we find \(\mu _l\in \mathbb R_+, l=1,\ldots ,q, \) such that

$$\begin{aligned} \sum _{k=1}^p\alpha _kf_k+\sum _{l=1}^q\mu _lg_l -\sum _{k=1}^p\alpha _kf_k(x^0)\in \mathbf{SDSOS}_d[x].\end{aligned}$$
(3.8)

Denoting \(\epsilon :=\frac{1}{\sum _{k=1}^p\alpha _k}\Big (\sum \nolimits _{k=1}^p\alpha _kf_k(x^0)-\sum \nolimits _{k=1}^p\alpha _k{\bar{t}}_k\Big )\), it holds that \(\epsilon >0\) and, by (3.8),

$$\begin{aligned} \sum _{k=1}^p\alpha _kf_k+\sum _{l=1}^q\mu _lg_l -\sum _{k=1}^p\alpha _k\big ({\bar{t}}_k+\epsilon \big )\in \mathbf{SDSOS}_d[x]. \end{aligned}$$

Then, we conclude that \(({\bar{t}}_1+\epsilon ,\ldots ,{\bar{t}}_p+\epsilon , \alpha _1,\ldots , \alpha _p, \mu _1,\ldots , \mu _q)\) is a feasible point of problem (D), and

$$\begin{aligned} ({\bar{t}}_1+\epsilon ,\ldots ,{\bar{t}}_p+\epsilon )-({\bar{t}}_1,\ldots ,{\bar{t}}_p)=(\epsilon ,\ldots ,\epsilon )\in \mathrm{int}\mathbb R^p_+, \end{aligned}$$

which contradicts the fact that \(({\bar{t}}_1,\ldots ,{\bar{t}}_p,{\bar{\lambda }}_1,\ldots ,{\bar{\lambda }}_p,{\bar{\mu }}_1,\ldots ,{\bar{\mu }}_q)\) is a weak efficient solution of problem (D). Consequently, (3.6) holds.

Now, we find \({\bar{x}}\in C\) and \({\bar{v}}:=({\bar{v}}_1,\ldots ,{\bar{v}}_p)\in \mathbb R^p_+\) such that

$$\begin{aligned} ({\bar{t}}_1,\ldots ,{\bar{t}}_p)=f({\bar{x}})+{\bar{v}}, \end{aligned}$$
(3.9)

which shows that (3.5) is valid. Since \(({\bar{t}}_1,\ldots ,{\bar{t}}_p,{\bar{\lambda }}_1,\ldots ,{\bar{\lambda }}_p,{\bar{\mu }}_1,\ldots ,{\bar{\mu }}_q)\) is a feasible point of problem (D), we have \(({\bar{\lambda }}_1,\ldots ,{\bar{\lambda }}_p)\in \mathbb R^p_+\setminus \{0\}\) and \(({\bar{\mu }}_1,\ldots ,{\bar{\mu }}_q)\in \mathbb R^q_+\) such that

$$\begin{aligned} \sum _{k=1}^p{\bar{\lambda }}_kf_k+\sum _{l=1}^q {\bar{\mu }}_lg_l-\sum _{k=1}^p{\bar{\lambda }}_k{\bar{t}}_k\in \mathbf{SDSOS}_d[x]. \end{aligned}$$
(3.10)

From (3.9) and (3.10), we find \(\sigma _0\in \mathbf{SDSOS}_d[x]\) such that

$$\begin{aligned} \sum _{k=1}^p{\bar{\lambda }}_kf_k+\sum _{l=1}^q {\bar{\mu }}_lg_l-\sum _{k=1}^p{\bar{\lambda }}_k\big (f_k({\bar{x}})+{\bar{v}}_k\big )=\sigma _0, \end{aligned}$$

and, due to \(\sum _{k=1}^p{\bar{\lambda }}_k{\bar{v}}_k\ge 0,\) we obtain

$$\begin{aligned} \sum _{k=1}^p{\bar{\lambda }}_kf_k+\sum _{l=1}^q {\bar{\mu }}_lg_l-\sum _{k=1}^p{\bar{\lambda }}_kf_k({\bar{x}})=\sum _{k=1}^p{\bar{\lambda }}_k{\bar{v}}_k+\sigma _0\in \mathbf{SDSOS}_d[x]. \end{aligned}$$

So, by Theorem 2.4, we conclude that \({\bar{x}}\) is a weak efficient solution of problem (P), which completes the proof of the theorem. \(\Box \)

We finish this section with an example that illustrates the main results of Theorems 2.4 and 3.1.

Example 3.4

Consider a multiobjective convex polynomial optimization problem

$$\begin{aligned} \min _{x\in \mathbb R^2}{\big \{\big (x_1^6+x_2^4+1,x_1^4+2x_2^2-1\big )} \mid&x_1+x_2^4-2\le 0,\; x_1^4+x_2\le 0\big \}. \end{aligned}$$
(EP3)

The problem (EP3) is in the form of problem (P), where the objective functions \(f_1(x):=x_1^6+x_2^4+1, f_2(x):=x_1^4+2x_2^2-1, x\in \mathbb R^2,\) and the constraint functions \( g_1(x):= x_1+x_2^4-2, g_2(x):=x_1^4+x_2, x\in \mathbb R^2,\) are 1st-SDSOS-convex polynomials.

Observe first that the Slater condition holds (take, e.g., \({\hat{x}}:=(0,-1)\)), and we can verify directly that \({\bar{x}}:=(0,0)\) is an efficient solution (and hence, a weak efficient solution) of problem (EP3); indeed, \(f_1(x)\ge 1=f_1({\bar{x}})\) and \(f_2(x)\ge -1=f_2({\bar{x}})\) for all \(x\in \mathbb R^2\). Let us now show that the SOC condition (2.6) holds for this problem. To see this, just take \(\alpha :=(\alpha _1,\alpha _2)\in \mathbb R^2_+\setminus \{0\}\) and \(\mu :=(\mu _1,\mu _2):=(0,0).\) Then, it holds that

$$\begin{aligned}&\sum _{k=1}^2\alpha _kf_k+\sum _{l=1}^2\mu _lg_l-\sum _{k=1}^2\alpha _kf_k({\bar{x}})=\alpha _1(x_1^3)^2\\&\quad +\alpha _1(x_2^2)^2+\alpha _2(x_1^2)^2+2\alpha _2x_2^2\in \mathbf{SDSOS}_6[x], \end{aligned}$$

showing that (2.6) is valid. Moreover, we see that

$$\begin{aligned} \sum _{k=1}^2\alpha _k\nabla f_k({\bar{x}})+\sum _{l=1}^2\mu _l\nabla g_l({\bar{x}})=0,\; \mu _lg_l({\bar{x}})=0,\, l=1,2, \end{aligned}$$
(3.11)

which means that the KKT condition (2.7) is also valid. So, the conclusion of Theorem 2.4 holds.
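
As a quick numerical sanity check (ours, and not the SOCP machinery developed above), one may also solve the weighted-sum scalarization of (EP3) with \(\lambda :=(1,1)\) by a general-purpose local solver; the sketch below assumes the Python packages numpy and scipy and should return a point close to \({\bar{x}}=(0,0)\).

```python
import numpy as np
from scipy.optimize import minimize

# Weighted-sum scalarization of (EP3) with lambda = (1, 1):
# minimize f1(x) + f2(x) over the feasible set C of (EP3).
f = lambda x: (x[0]**6 + x[1]**4 + 1) + (x[0]**4 + 2 * x[1]**2 - 1)
cons = [
    {"type": "ineq", "fun": lambda x: -(x[0] + x[1]**4 - 2)},  # g1(x) <= 0
    {"type": "ineq", "fun": lambda x: -(x[0]**4 + x[1])},      # g2(x) <= 0
]
res = minimize(f, x0=np.array([0.5, -0.5]), method="SLSQP", constraints=cons)
print(res.x, res.fun)  # expect a point near (0, 0) with value near 0
```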

Next, we consider a dual multiobjective problem to the multiobjective optimization problem (EP3) as follows:

$$\begin{aligned} \max _{(t_k,\lambda _k,\mu _l)}{}\bigg \{(t_1,t_2)\in \mathbb R^2\mid&\sum _{k=1}^2\lambda _kf_k+\sum _{l=1}^{2}\mu _lg_l -\sum _{k=1}^2\lambda _kt_k\in \mathbf{SDSOS}_6[x], \nonumber \\&t_k\in \mathbb R, k=1,2, \mu _l\in \mathbb R_+, l=1,2, (\lambda _1, \lambda _2)\in \mathbb R^2_+\setminus \{0\}\bigg \}. \end{aligned}$$
(DE3)

Let \({\tilde{x}}:=({\tilde{x}}_1,{\tilde{x}}_2)\) be a feasible point of problem (EP3) and let \((t_1, t_2,\lambda _1, \lambda _2,\mu _1, \mu _2)\) be a feasible point of problem (DE3). Then, there exists \( \sigma \in \mathbf{SDSOS}_6[x]\) such that

$$\begin{aligned}&\sum _{k=1}^2\lambda _kf_k({\tilde{x}})+\sum _{l=1}^{2}\mu _lg_l({\tilde{x}}) -\sum _{k=1}^2\lambda _kt_k=\sigma ({\tilde{x}}),\\&g_l({\tilde{x}})\le 0, l=1,2, \end{aligned}$$

which implies that \(\sum \limits _{k=1}^2\lambda _kf_k({\tilde{x}})\ge \sum \limits _{k=1}^2\lambda _kt_k\). By \( (\lambda _1, \lambda _2)\in \mathbb R^2_+\setminus \{0\},\) we arrive at

$$\begin{aligned} f({\tilde{x}})-(t_1, t_2)\notin -\mathrm{int\,} \mathbb R^2_+, \end{aligned}$$

where \(f:=(f_1,f_2)\). This means that the weak duality in Theorem 3.1 is valid. Moreover, we can verify that the KKT condition holds at \({\bar{x}}\) (cf. (3.11)) if and only if there exists a weak efficient solution of problem (DE3), say \(({\bar{t}}_1,{\bar{t}}_2,{\bar{\lambda }}_1, {\bar{\lambda }}_2,{\bar{\mu }}_1, {\bar{\mu }}_2)\), such that

$$\begin{aligned} ({\bar{t}}_1, {\bar{t}}_2)=f({\bar{x}})=(1,-1). \end{aligned}$$

So, the conclusion of Theorem 3.1 holds.

4 Conclusion and further work

In this paper, we have presented necessary and sufficient optimality conditions in terms of second-order cone conditions for a subclass of multiobjective convex optimization problems, where the objective and constraint functions are first-order scaled diagonally dominant sums-of-squares convex (1st-SDSOS-convex) polynomials. It has been shown that the second-order cone condition is equivalent to the Karush-Kuhn-Tucker (KKT) condition at a given weak efficient solution of the underlying multiobjective program. We have also addressed a dual multiobjective problem to the considered multiobjective convex polynomial optimization problem and examined weak, strong and converse duality relations between them. In this way, we have obtained a strong dual characterization via the KKT condition.

The obtained results are conceptual schemes for solving multiobjective optimization problems involving 1st-SDSOS-convex polynomials. Relationships between these results and some other existing multiobjective algorithms (see e.g., Ehrgott 2005; Gorissen and den Hertog 2012; Magron et al. 2014; Jahn 2004; Steuer 1986) need to be investigated in order to produce efficient methods and to perform numerical tests on some concrete multiobjective optimization problems with 1st-SDSOS-convex polynomial data. Furthermore, it would be of interest to see how the proposed approach can be extended to multiobjective optimization problems in which the objective and constraint functions are diagonally dominant sums-of-squares convex (DSOS-convex) polynomials (cf. Ahmadi and Majumdar (2016, 2019)). This would form an interesting topic and will be carried out in a forthcoming paper.