Abstract
This paper is concerned with a multiobjective convex polynomial problem, where the objective and constraint functions are first-order scaled diagonally dominant sums-of-squares convex polynomials. We first establish necessary and sufficient optimality criteria in terms of second-order cone (SOC) conditions for (weak) efficiencies of the underlying multiobjective optimization problem. We then show that the obtained result provides a way to find (weak) efficient solutions of the multiobjective program by solving a scalar second-order cone programming relaxation of a given weighted-sum optimization problem. In addition, we propose a dual multiobjective problem by means of SOC conditions for the multiobjective optimization problem and examine weak, strong and converse duality relations.
1 Introduction
Multiobjective (or vector/multi-criteria) optimization problems are mathematical models that can handle real-world optimization problems and have been applied in various areas of human life and work, such as economics, engineering and industry (Ehrgott 2005; Jahn 2004; Sawaragi et al. 1985; Steuer 1986; Luc 1989). A multiobjective optimization model typically balances several conflicting objectives over a set of feasible choices, and as a result, finding efficient solutions or establishing optimality criteria for a multiobjective optimization problem is generally hard by its nature (Chuong 2016; Boţ et al. 2009; Chuong and Kim 2014; Ehrgott et al. 2014; Lee and Lee 2015, 2018; Zamani et al. 2015; Chinchuluun and Pardalos 2007; Chuong 2019, 2020).
Considerable recent attention has been devoted to developing semidefinite programming (SDP) approaches (see, e.g., Blekherman et al. (2012)) for treating multiobjective convex and nonconvex programming problems by exploiting special structures of the objective and constraint functions, such as linearity or polynomiality; see, e.g., (Georgiev et al. 2013; Chuong 2017, 2018; Chuong and Jeyakumar 2017; Goberna et al. 2014; Gorissen and den Hertog 2012; Lee and Jiao 2019a, b; Magron et al. 2014; Chieu et al. 2018). For example, the authors in Goberna et al. (2014) and in Chuong (2017) studied robust solutions of multiobjective (semi-infinite) linear optimization problems under constraint data uncertainty. By employing an approximate scalarization technique, the authors in Lee and Jiao (2019a, b) proposed a method for finding efficient solutions of (robust) multiobjective optimization problems where the objective and constraint functions are SOS-convex polynomials. Recall from Ahmadi and Parrilo (2013) and Helton and Nie (2010) that a polynomial \(f:\mathbb {R}^n\rightarrow \mathbb R\) is called SOS-convex if the polynomial \(h_f:\mathbb {R}^n\times \mathbb R^n\rightarrow \mathbb R\) given by \(h_f(x,y):= f(x)-f(y)- \nabla f(y)^T(x-y)\) is a sums-of-squares polynomial in the variables (x, y), where \(\nabla f(y)\) denotes the derivative of f at y. The interested reader is referred to (Chuong 2018) for optimality conditions and duality in terms of linear matrix inequalities for a class of (robust) multiobjective SOS-convex polynomial programs.
It is worth mentioning that the class of first-order scaled diagonally dominant sums-of-squares convex (1st-SDSOS-convex) polynomials (see Definition 2.2 below), which is a subclass of SOS-convex polynomials, is a numerically tractable subclass of convex polynomials, because checking whether a given real polynomial is 1st-SDSOS-convex can be equivalently reformulated as a feasibility problem of a second-order cone programming (SOCP) problem (Ahmadi and Majumdar 2016, 2019; Chuong et al. 2019). In particular, it is shown in Chuong et al. (2019) that the class of (scalar) optimization problems involving 1st-SDSOS-convex polynomials enjoys strong SOCP duality in the sense that the optimal values of the primal and dual programs are equal under a constraint qualification. This result allows us to find the optimal value of a 1st-SDSOS-convex polynomial program by solving an SOCP dual problem.
In this paper, inspired by Chuong et al. (2019), we consider a multiobjective convex polynomial optimization problem of the form:
$$\begin{aligned} \mathrm{(P)}\qquad \min _{x\in \mathbb R^n}\ \big (f_1(x),\ldots ,f_p(x)\big ) \quad \text { subject to }\ g_l(x)\le 0,\ l=1,\ldots ,q, \end{aligned}$$
where \(f_k:\mathbb R^n\rightarrow \mathbb R, k=1,\ldots , p\) and \( g_l:\mathbb R^n\rightarrow \mathbb R, l=1,\ldots ,q\) are 1st-SDSOS-convex polynomials.
The first main aim of the paper is to provide optimality characterizations for weak efficient solutions of the multiobjective optimization problem (P) under a constraint qualification, where the characterizations are expressed in terms of second-order cone (SOC) or Karush-Kuhn-Tucker (KKT) conditions. We also establish solution relationships between the multiobjective program (P) and an SOCP dual problem or an SOCP relaxation of a given weighted-sum optimization problem. This result provides a way to find (weak) efficient solutions of the multiobjective polynomial program (P) by solving the (scalar) SOCP relaxation problem of the given weighted-sum optimization problem.
The second main aim of the paper is to propose a dual multiobjective problem in terms of SOC conditions to the multiobjective optimization problem (P) and explore weak, strong and converse duality relations between them. Interestingly, one of the dual results shows that the existence of a weak efficient solution of the dual problem is equivalent to the validity of the KKT condition at a given weak efficient solution of the primal problem; moreover, the efficient values of both problems coincide. In this way, we are able to find a weak efficient solution of the dual problem by verifying the KKT condition at a given weak efficient solution of the primal problem.
The outline of the paper is as follows. In Sect. 2, we first present necessary and sufficient optimality criteria for weak efficiencies of problem (P) and then establish solution relationships between the multiobjective program (P) and an SOCP dual problem or an SOCP relaxation of a given weighted-sum optimization problem. Section 3 addresses a dual multiobjective problem to the multiobjective optimization problem (P) and explores weak, strong and converse duality relations. Section 4 summarises the obtained results and outlines some prospective studies.
2 Second-order cone optimality conditions and solution relationships
Let us start by fixing some notation and definitions. The notation \(\mathbb R^n\) signifies the Euclidean space whose norm is denoted by \(\Vert \cdot \Vert \) for each \(n\in \mathbb N:=\{1,2,\ldots \}\). The inner product in \(\mathbb R^n\) is defined by \(\langle x,y\rangle :=x^T y\) for all \(x, y\in \mathbb R^n.\) Let \(\mathbb R^n_+\) be the nonnegative orthant of \(\mathbb R^n\) and \(\mathrm{int\,} \mathbb R^n_+\) be the topological interior of \(\mathbb R^n_+\). Denote by \(\mathbb R[x]\) (or \(\mathbb {R}[x_1,\ldots ,x_n]\)) the ring of polynomials in \(x:=(x_1,\ldots ,x_n)\) with real coefficients. Let \(\mathbb {R}_d[x_1,\ldots ,x_n]\) be the space consisting of all real polynomials on \(\mathbb {R}^n\) with degree at most \(d\in \mathbb {N}_0:= \mathbb {N} \cup \{0\}\) and let s(d, n) be the dimension of \(\mathbb {R}_d[x_1,\ldots ,x_n]\). Write the canonical basis of \(\mathbb {R}_d[x_1,\ldots ,x_n]\) as
$$\begin{aligned} x^{(d)}:=\big (1,\, x_1,\ldots ,x_n,\, x_1^2, x_1x_2,\ldots ,x_2^2,\ldots ,x_n^2,\ldots ,x_1^{d},\ldots ,x_n^{d}\big )^T. \end{aligned}$$
For each \(1 \le \alpha \le s(d,n)\), we denote \(i(\alpha )=(i_1(\alpha ),\ldots ,i_n(\alpha )) \in (\mathbb {N}_0)^n\) to be the multi-index such that
Let the monomial \(m_\alpha (x)=x^{(d)}_{\alpha }\) be the \(\alpha \)th coordinate of \(x^{(d)}\), \(1 \le \alpha \le s(d,n)\). Thus, we can write each \(f\in \mathbb {R}_d[x_1,\ldots ,x_n]\) as
$$\begin{aligned} f=\sum _{\alpha =1}^{s(d,n)}f_\alpha m_\alpha , \end{aligned}$$
where \(f_\alpha \) is the corresponding coefficient of f at the \(\alpha \)th coordinate in the canonical basis of monomials of \(\mathbb {R}_d[x_1,\ldots ,x_n]\).
Recall that (see e.g., Lasserre (2009)), for a given \(y:=(y_\alpha )\in \mathbb {R}^{s(d,n)},\) the Riesz functional \(L_y:\mathbb {R}_d[x]\rightarrow \mathbb {R}\) is given by
$$\begin{aligned} L_y(f):=\sum _{\alpha =1}^{s(d,n)}f_\alpha y_\alpha \quad \text {for } f=\sum _{\alpha =1}^{s(d,n)}f_\alpha m_\alpha \in \mathbb {R}_d[x]. \end{aligned}$$(2.1)
The notation \(\mathbf{M}_d(y)\) stands for the moment matrix with degree d generated by \( y:=(y_\alpha )\in \mathbb {R}^{s(2d,n)}\), which is defined by
$$\begin{aligned} \mathbf{M}_d(y):=\sum _{\alpha =1}^{s(2d,n)}y_\alpha M_\alpha , \end{aligned}$$(2.2)
where \(M_\alpha ,\alpha =1,\ldots , s(2d,n),\) are \(s(d,n)\times s(d,n)\) symmetric matrices such that
$$\begin{aligned} x^{(d)}\big (x^{(d)}\big )^T=\sum _{\alpha =1}^{s(2d,n)}M_\alpha x^{(2d)}_\alpha . \end{aligned}$$(2.3)
Definition 2.1
(SDSOS polynomials; Ahmadi and Majumdar 2016, 2019) We say that a polynomial f on \(\mathbb R^n\) with degree \(d:=2\ell \) is scaled diagonally dominant sums-of-squares (SDSOS) if there exist \(k \in \mathbb {N}\) with \(1 \le k \le s(\ell ,n)\) and scalars \(\alpha _i\), \(\beta _{ij}^+, \gamma _{ij}^+, \beta _{ij}^-, \gamma ^-_{ij}\) with \(\alpha _i\ge 0\) such that
$$\begin{aligned} f=\sum _{i=1}^{k}\alpha _im_i^2+\sum _{1\le i<j\le k}\Big [\big (\beta ^+_{ij}m_i+\gamma ^+_{ij}m_j\big )^2+\big (\beta ^-_{ij}m_i-\gamma ^-_{ij}m_j\big )^2\Big ], \end{aligned}$$
where \(m_i\) and \(m_j\) are monomials in the variable x. We denote by \(\mathbf{SDSOS}_d[x]\) the set of all scaled diagonally dominant sums-of-squares polynomials on \(\mathbb R^n\) with degree at most d.
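To make the decomposition concrete, the following sketch (a toy instance of our own, not taken from the paper) verifies numerically that the bivariate quadratic \(f(x,y)=4x^2+4xy+4y^2\) admits an SDSOS decomposition with monomials \(m_1=x\), \(m_2=y\), and is therefore nonnegative:

```python
import math

# Toy instance (ours, not from the paper): certify nonnegativity of
#   f(x, y) = 4x^2 + 4xy + 4y^2
# via the SDSOS decomposition with monomials m1 = x, m2 = y:
#   f = 2*m1^2 + 2*m2^2 + (sqrt(2)*m1 + sqrt(2)*m2)^2.
def f(x, y):
    return 4 * x**2 + 4 * x * y + 4 * y**2

def sdsos_certificate(x, y):
    m1, m2 = x, y                       # monomials entering the decomposition
    alpha1, alpha2 = 2.0, 2.0           # nonnegative diagonal weights
    beta, gamma = math.sqrt(2.0), math.sqrt(2.0)
    return alpha1 * m1**2 + alpha2 * m2**2 + (beta * m1 + gamma * m2) ** 2

# The decomposition reproduces f exactly, so f is a (scaled) sum of squares
# of binomials of monomials -- hence SDSOS and, in particular, nonnegative.
for x, y in [(0.0, 0.0), (1.0, -1.0), (-2.5, 0.3), (0.7, 1.9)]:
    assert abs(f(x, y) - sdsos_certificate(x, y)) < 1e-9
    assert f(x, y) >= 0.0
```

In practice, such a decomposition would be found by an SOCP solver rather than by hand; the point here is only that its existence certifies nonnegativity.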
It is obvious by definition that any scaled diagonally dominant sums-of-squares polynomial is nonnegative, but the converse is not true in general (Ahmadi and Majumdar 2016, 2019; see also Chuong et al. 2019, Example 5.4 for a particular example). The next definition identifies a subclass of convex polynomials whose membership can be verified by solving a second-order cone programming (SOCP) problem.
Definition 2.2
(1st-SDSOS-convex polynomials; Chuong et al. 2019) Let f be a polynomial on \(\mathbb {R}^n\) and let \(h_f\) be a polynomial on \(\mathbb {R}^n\times \mathbb R^n\) given by
$$\begin{aligned} h_f(x,y):= f(x)-f(y)- \nabla f(y)^T(x-y), \quad (x,y)\in \mathbb {R}^n\times \mathbb R^n, \end{aligned}$$
where \(\nabla f(y)\) denotes the derivative of f at y. We say that f is first-order scaled diagonally dominant sums-of-squares convex (1st-SDSOS-convex) whenever \(h_f\) is an SDSOS polynomial in the variables (x, y).
Note from the definition that any 1st-SDSOS-convex polynomial is convex, and for any 1st-SDSOS-convex polynomials f, g and \(\gamma \ge 0\), \(f+g\) and \(\gamma f\) are also 1st-SDSOS-convex. The class of 1st-SDSOS-convex polynomials is quite broad: it includes separable convex quadratic functions, any polynomial expressed as a sum of even powers with nonnegative coefficients, and many other functions important in polynomial optimization (cf. Chuong et al. (2019)).
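As a concrete illustration (our own toy instance, assuming nothing beyond Definition 2.2), the univariate polynomial \(f(x)=x^4\) is 1st-SDSOS-convex: its certificate \(h_f(x,y)=x^4-y^4-4y^3(x-y)\) satisfies the exact algebraic identity \(h_f(x,y)=(x^2-y^2)^2+2(xy-y^2)^2\), an SDSOS decomposition in the monomials \(x^2\), \(y^2\) and xy. The sketch below checks this identity numerically:

```python
import random

# Our own illustrative instance: f(x) = x^4 is 1st-SDSOS-convex. Its
# convexity certificate h_f(x, y) = f(x) - f(y) - f'(y)*(x - y) obeys the
# exact algebraic identity
#   h_f(x, y) = (x^2 - y^2)^2 + 2*(x*y - y^2)^2,
# an SDSOS decomposition in the monomials x^2, y^2 and x*y.
def h_f(x, y):
    return x**4 - y**4 - 4 * y**3 * (x - y)

def sdsos_decomposition(x, y):
    return (x**2 - y**2) ** 2 + 2 * (x * y - y**2) ** 2

random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-5.0, 5.0), random.uniform(-5.0, 5.0)
    assert abs(h_f(x, y) - sdsos_decomposition(x, y)) < 1e-6
    assert h_f(x, y) >= -1e-9           # nonnegativity of h_f = convexity of f
```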
From now on, we assume that \(f_k:\mathbb R^n\rightarrow \mathbb R, k=1,\ldots , p\) and \( g_l:\mathbb R^n\rightarrow \mathbb R, l=1,\ldots ,q\) are 1st-SDSOS-convex polynomials with degree at most \(d\in \mathbb N_0\). For convenience, we assume that d is an even number, because otherwise we can replace d by \(d+1.\) In what follows, we sometimes use the notation \(f:=(f_1,\ldots ,f_p)\) for brevity and denote the set of feasible points of problem (P) by
Definition 2.3
-
(i)
We say that \({\bar{x}}\in C\) is an efficient solution of problem (P) if there is no \(x\in C\) such that \( f_k(x)\le f_k({\bar{x}})\) for all \(k=1,\ldots , p\) and \(f(x)\ne f({\bar{x}})\).
-
(ii)
A point \({\bar{x}}\in C\) is called a weak efficient solution of problem (P) if there is no \(x\in C\) such that \(f_k(x)<f_k({\bar{x}})\) for all \(k=1,\ldots , p\).
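For finitely many candidate points, Definition 2.3 can be transcribed directly into code. The following illustrative helper (our own, with made-up objective vectors) filters a finite list of values according to parts (i) and (ii):

```python
# Illustrative helper (ours, with made-up objective vectors): Definition 2.3
# on a finite candidate set. A value is efficient if no other candidate is
# componentwise <= it with a different objective vector; weakly efficient if
# no candidate is strictly smaller in every component.
def is_efficient(fx, values):
    return not any(
        all(v[k] <= fx[k] for k in range(len(fx))) and v != fx for v in values
    )

def is_weakly_efficient(fx, values):
    return not any(all(v[k] < fx[k] for k in range(len(fx))) for v in values)

values = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0), (2.0, 2.0)]
efficient = [v for v in values if is_efficient(v, values)]

# (3, 3) is dominated by (2, 2); the duplicate (2, 2) entries do not dominate
# each other because their objective vectors coincide.
assert (3.0, 3.0) not in efficient
assert is_weakly_efficient((3.0, 3.0), values) is False
assert is_weakly_efficient((2.0, 2.0), values) is True
```

Every efficient solution is weakly efficient, but not conversely, as the example values show.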
The following theorem establishes necessary and sufficient optimality criteria for weak efficiency of problem (P) in terms of an SOC condition and a KKT condition. We say that the Slater condition holds if there exists \({\hat{x}}\in \mathbb R^n\) such that
$$\begin{aligned} g_l({\hat{x}})<0, \quad l=1,\ldots ,q. \end{aligned}$$(2.5)
Theorem 2.4
(Characterizations for weak efficiency) Let \({\bar{x}}\in \mathbb R^n\) be a feasible point of problem (P) and assume that the Slater condition (2.5) holds. The following conditions are equivalent:
-
(i)
(Efficiency) \({\bar{x}}\) is a weak efficient solution of problem (P).
-
(ii)
(SOC condition) There exist \(\alpha :=(\alpha _1,\ldots ,\alpha _p)\in \mathbb R^p_+\setminus \{0\}\) and \( \mu :=(\mu _1,\ldots ,\mu _q)\in \mathbb R_+^q\) such that
$$\begin{aligned}&\sum _{k=1}^p\alpha _kf_k+\sum _{l=1}^q \mu _lg_l-\sum \limits _{k=1}^p\alpha _kf_k({\bar{x}})\in \mathbf{SDSOS}_d[x]. \end{aligned}$$(2.6)
-
(iii)
(KKT condition) There exist \(\alpha :=(\alpha _1,\ldots ,\alpha _p)\in \mathbb R^p_+\setminus \{0\}\) and \(\mu :=(\mu _1,\ldots ,\mu _q)\in \mathbb R^q_+\) such that
$$\begin{aligned} \sum \limits _{k=1}^p\alpha _k\nabla f_k({\bar{x}})+\sum \limits _{l=1}^q\mu _l\nabla g_l({\bar{x}})=0,\; \mu _l g_l({\bar{x}})=0,\; l=1,\ldots ,q. \end{aligned}$$(2.7)
Proof
\([\mathbf{(i)}\ \Longrightarrow \ \mathbf{(ii)}]\) Let \({\bar{x}}\) be a weak efficient solution of problem (P). Then, it holds that
Since \(f_k, k=1,\ldots ,p \) and \(g_l, l=1,\ldots ,q\) are convex polynomials, invoking the classical alternative theorem in convex analysis (cf. (Rockafellar 1970, Theorem 21.1)), we find \(\alpha _k\ge 0, k=1,\ldots , p, \mu _l\ge 0, l=1,\ldots , q\), not all zero, such that
This together with the Slater condition (2.5) ensures that \(\alpha :=(\alpha _1,\ldots ,\alpha _p)\ne 0\). Let \(h:\mathbb R^n\rightarrow \mathbb R\) be given by \(h(x):=\sum \nolimits _{k=1}^p\alpha _kf_k(x)+\sum \nolimits _{l=1}^q\mu _lg_l(x)-\sum \nolimits _{k=1}^p\alpha _kf_k({\bar{x}})\) for \( x\in \mathbb R^n.\) Under our assumption, h is a 1st-SDSOS-convex polynomial on \(\mathbb R^n\), and thus, \(h\in \mathbf{SDSOS}_d[x]\) since \(h(x)\ge 0\) for all \(x\in \mathbb R^n\) (see Chuong et al. 2019, Proposition 5.3). So, (2.6) holds.
[(ii) \(\Longrightarrow \) (iii)] Assume that there exist \(\alpha :=(\alpha _1,\ldots ,\alpha _p)\in \mathbb R^p_+\setminus \{0\}\) and \( \mu :=(\mu _1,\ldots ,\mu _q)\in \mathbb R_+^q\) such that
Then, we find \(\sigma \in \mathbf{SDSOS}_d[x]\) such that
Substituting \(x:={\bar{x}}\) into (2.8), we arrive at \(\sigma ({\bar{x}})-\sum \nolimits _{l=1}^q\mu _lg_l({\bar{x}})=0\). Note in addition that \(\mu _lg_l({\bar{x}})\le 0, l=1,\ldots ,q\) inasmuch as \({\bar{x}}\) is a feasible point of problem (P), and that \(\sigma ({\bar{x}})\ge 0\) as \(\sigma \) is an SDSOS polynomial and thus nonnegative. Therefore, it follows that
and that \(\sigma ({\bar{x}})=0\). So, \(\sigma (x)\ge \sigma ({\bar{x}})\) for all \(x\in \mathbb R^n;\) i.e., \({\bar{x}}\) is a minimizer of \(\sigma \) on \(\mathbb R^n.\) So, \(\nabla \sigma ({\bar{x}})=0\), and moreover, by (2.8) and (2.9), we have
which shows that the KKT condition in (2.7) holds.
[(iii) \(\Longrightarrow \) (i)] Assume that there exist \(\alpha :=(\alpha _1,\ldots ,\alpha _p)\in \mathbb R^p_+\setminus \{0\}\) and \(\mu :=(\mu _1,\ldots ,\mu _q)\in \mathbb R^q_+\) such that
Consider as above the function \(h:\mathbb R^n\rightarrow \mathbb R\) given by \(h(x):=\sum \limits _{k=1}^p\alpha _kf_k(x)+\sum \limits _{l=1}^q\mu _lg_l(x)-\sum \limits _{k=1}^p\alpha _kf_k({\bar{x}})\) for \( x\in \mathbb R^n.\) We assert by (2.10) that \(\nabla h({\bar{x}})=0\) and then, the convexity of h entails that
Now, let \({\tilde{x}}\in \mathbb R^n\) be an arbitrary feasible point of problem (P). By evaluating (2.11) at \({\tilde{x}}\), we arrive at
where the first inequality holds due to the fact that \(\mu _lg_l({\tilde{x}})\le 0, l=1,\ldots ,q\). Then,
by virtue of \(\alpha \in \mathbb R^p_+\setminus \{0\}.\) So, \({\bar{x}}\) is a weak efficient solution of problem (P).\(\Box \)
Remark 2.5
(SOC condition & KKT condition) A closer inspection of the proof of Theorem 2.4 reveals that the equivalence between (ii) and (iii) holds independently of the Slater condition (2.5). The following example shows that the equivalence between (i) and (ii) may go awry if the Slater condition (2.5) fails.
Example 2.6
(The importance of the Slater condition) Consider the multiobjective convex polynomial optimization problem
The problem (EP1) is in the form of problem (P), where the objective functions \(f_1(x):=x^8-2x+1, f_2(x):=x^2-3x-1, x\in \mathbb R,\) and the constraint functions \( g_1(x):= x^8-1, g_2(x):= x^8+x^2, g_3(x):=2x^8-1, x\in \mathbb R,\) are 1st-SDSOS-convex polynomials.
It can be verified that \({\bar{x}}:=0\) is an efficient solution of problem (EP1), and that the Slater condition fails. We claim that the SOC condition (2.6) does not hold for this problem. To see this, assume on the contrary that there exist \(\alpha :=(\alpha _1,\alpha _2)\in \mathbb R^2_+\setminus \{0\}, \mu :=(\mu _1,\mu _2,\mu _3)\in \mathbb R^3_+\) and \(\sigma \in \mathbf{SDSOS}_8[x],\) such that
By rearranging this, we obtain that
for each \(x\in \mathbb R.\) It follows that, for each \(x\in \mathbb R,\)
Considering \(x_k:=\frac{1}{k}\), where \(k\in \mathbb N\), we assert by (2.12) that
for k large enough. Thus,
for such large k. Now, letting \(k\rightarrow \infty \) in (2.13), we arrive at a contradiction. Consequently, the implication of [(i) \(\Longrightarrow \) (ii)] in Theorem 2.4 is no longer valid for this setting.
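A quick numerical sanity check (our own illustration) confirms why the Slater condition fails for (EP1): the constraint \(g_2(x)=x^8+x^2\le 0\) is satisfied only at \(x=0\), so the feasible set is the singleton \(\{0\}\) and no strictly feasible point exists:

```python
# Quick numerical sanity check (ours): for (EP1) the constraint
# g2(x) = x^8 + x^2 <= 0 holds only at x = 0, so the feasible set of (EP1)
# is the singleton {0} and no strictly feasible point exists -- i.e. the
# Slater condition must fail.
def g2(x):
    return x**8 + x**2

assert g2(0.0) == 0.0                    # x = 0 is feasible (with equality)
for x in [1e-3, -1e-3, 0.5, -0.5, 1.0, -2.0]:
    assert g2(x) > 0.0                   # every x != 0 is infeasible
```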
The next example illustrates that the equivalence between (i) and (ii) in Theorem 2.4 may fail for a multiobjective convex (but not 1st-SDSOS-convex) polynomial optimization problem under the validation of the Slater condition.
Example 2.7
(The importance of the 1st-SDSOS-convexity) Consider a multiobjective convex polynomial optimization problem
The problem (EP2) is a type of problem (P), where the objective functions \(f_1(x):=(x_1+x_2-1)^2+1, f_2(x):=(x_1+x_2-1)^2+2, x\in \mathbb R^2,\) are convex (but not 1st-SDSOS-convex) polynomials (Chuong et al. 2019, Example 5.4) and the constraint functions \( g_1(x):=x_1-2, g_2(x):=x_2-1, x\in \mathbb R^2,\) are linear polynomials.
It is easy to see that the Slater condition holds, and we can verify that \({\bar{x}}:=(1,0)\) is a weak efficient solution of problem (EP2). We now show that the SOC condition (2.6) does not hold for this problem. Assume on the contrary that there exist \(\alpha :=(\alpha _1,\alpha _2)\in \mathbb R^2_+\setminus \{0\}, \mu :=(\mu _1,\mu _2)\in \mathbb R^2_+ \) and \(\sigma \in \mathbf{SDSOS}_2[x],\) such that
By rearranging this, we obtain that
This entails that
which ensures that \(\mu _1=\mu _2=0\). Indeed, assume on the contrary that \(\mu _1\ne 0\) or \(\mu _2\ne 0\). By taking \(x_1:=1-t, x_2:=t\), where \(t\in \mathbb R\), we get by (2.15) that
which results in a contradiction. So, \(\mu _1=\mu _2=0\). Then, we deduce from (2.14) that \(h=\frac{1}{\alpha _1+\alpha _2}\sigma \in \mathbf{SDSOS}_2[x]\), where \(h(x):=(x_1+x_2-1)^2, x\in \mathbb R^2.\) This is impossible because h is not an SDSOS polynomial as said in (Chuong et al. 2019, Example 5.4). In conclusion, the equivalence between (i) and (ii) in Theorem 2.4 fails for this setting. The reason is that \(f_k, k=1,2\) are not 1st-SDSOS-convex polynomials.
Let \(\lambda :=(\lambda _1,\ldots ,\lambda _p)\in \mathbb R^p_+\setminus \{0\},\) and consider the corresponding weighted-sum optimization problem of (P) as follows:
where \(C:=\{x\in \mathbb R^n\mid g_l(x)\le 0, \; l=1,\ldots ,q\}\) is defined as in (2.4).
A second-order cone programming (SOCP) dual for the weighted-sum optimization problem (\({\hbox {P}_{\lambda }}\)) is given by
We also consider a second-order cone programming (SOCP) relaxation for the weighted-sum optimization problem (\({\hbox {P}_{\lambda }}\)) that is defined as follows:
To prove the next theorem, we recall from (Chuong et al. 2019, Proposition 6.1) a Jensen-type inequality for 1st-SDSOS-convex polynomials.
Lemma 2.8
(Jensen’s inequality) Let f be a 1st-SDSOS-convex polynomial on \(\mathbb {R}^n\) with degree \(d:=2\ell , \ell \in \mathbb N_0\). Let \(y:=(y_\alpha )\in \mathbb {R}^{s(d,n)}\) with \(y_1=1\) and
Then, we have
where \(L_y\) is given as in (2.1) and \(x_i\) denotes the polynomial which maps a vector x in \(\mathbb {R}^n\) to its ith coordinate.
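In the special case where y is a convex combination of moment vectors of points \(x_i\) (so that \(L_y(f)=\sum _i\theta _if(x_i)\) and \(L_y(x_1)=\sum _i\theta _ix_i\)), the inequality of Lemma 2.8 reduces to the classical Jensen inequality for a convex function. The sketch below (our own toy instance with the 1st-SDSOS-convex polynomial \(f(x)=x^4+2x^2\)) checks this special case numerically:

```python
import random

# Sanity check (our own toy instance) of the Jensen-type inequality in the
# special case where y is a convex combination of moment vectors of points
# x_i: then L_y(f) = sum_i theta_i * f(x_i) and L_y(x_1) = sum_i theta_i * x_i,
# so (for the 1st-SDSOS-convex polynomial f(x) = x^4 + 2x^2) Lemma 2.8
# reduces to the classical Jensen inequality f(L_y(x_1)) <= L_y(f).
def f(x):
    return x**4 + 2 * x**2

random.seed(1)
for _ in range(200):
    points = [random.uniform(-3.0, 3.0) for _ in range(4)]
    weights = [random.uniform(0.0, 1.0) for _ in range(4)]
    total = sum(weights)
    theta = [w / total for w in weights]                  # theta_i >= 0, sum 1
    L_y_f = sum(t * f(x) for t, x in zip(theta, points))  # L_y(f)
    L_y_x = sum(t * x for t, x in zip(theta, points))     # L_y(x_1)
    assert f(L_y_x) <= L_y_f + 1e-9
```

The lemma itself is more general: it only requires the moment-type constraints on y, not that y come from actual points.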
The following theorem establishes solution relationships among the multiobjective program (P), the SOCP dual problem (\(\hbox {D}_{\lambda }\)) and the SOCP relaxation problem (\(\hbox {R}_{\lambda }\)) of the corresponding weighted-sum optimization problem (\({\hbox {P}_{\lambda }}\)). This result provides a way to find (weak) efficient solutions of the multiobjective program (P) by solving the SOCP relaxation problem (\(\hbox {R}_{\lambda }\)).
Theorem 2.9
Assume that the Slater condition (2.5) holds. Then, the following assertions are valid:
-
(i)
If \({\bar{x}}:=({\bar{x}}_1,\ldots ,{\bar{x}}_n)\) is a weak efficient solution of problem (P), then there exists \(\lambda \in \mathbb R^p_+\setminus \{0\}\) such that
$$\begin{aligned} \min (\mathrm{P}_{\lambda })= \max {(\mathrm{D}_{\lambda })} =\min (\mathrm{R}_{\lambda }) \end{aligned}$$(2.17)and \({\bar{y}}:=(1, {\bar{x}}_1,\ldots ,{\bar{x}}_n, {\bar{x}}_1^2,{\bar{x}}_1{\bar{x}}_2,\ldots ,{\bar{x}}_2^2,\ldots ,{\bar{x}}_n^2,\ldots ,{\bar{x}}_1^{d},\ldots ,{\bar{x}}_n^d)\) is an optimal solution of problem (\(\hbox {R}_{\lambda }\)).
-
(ii)
Let \(\lambda \in \mathbb R^p_+\setminus \{0\}\) be such that the problem (\({\hbox {P}_{\lambda }}\)) admits an optimal solution. If \({\hat{y}}\in \mathbb {R}^{s(d,n)}\) is an optimal solution of problem (\(\hbox {R}_{\lambda }\)), then \({\hat{x}}:=(L_{{\hat{y}}}(x_1),\ldots , L_{{\hat{y}}}(x_n))\in \mathbb {R}^n\) is a weak efficient solution of problem (P), where \(L_{{\hat{y}}}\) is given as in (2.1) and \(x_i\) denotes the polynomial which maps a vector x in \(\mathbb {R}^n\) to its ith coordinate. Moreover, if \(\lambda \in \mathrm{int}\mathbb R^p_+\), then \({\hat{x}}\) is an efficient solution of problem (P).
Proof
(i) Let \({\bar{x}}:=({\bar{x}}_1,\ldots ,{\bar{x}}_n)\) be a weak efficient solution of problem (P). By Theorem 2.4, we find \(\lambda :=(\lambda _1,\ldots ,\lambda _p)\in \mathbb R^p_+\setminus \{0\}\) and \( \mu :=(\mu _1,\ldots ,\mu _q)\in \mathbb R_+^q\) such that
Thus, there exists \(\sigma \in \mathbf{SDSOS}_d[x]\) such that
For any \(x\in C:=\{x\in \mathbb R^n\mid g_l(x)\le 0, \; l=1,\ldots ,q\}\), we get by (2.19) that
due to the fact that \(\mu _lg_l(x)\le 0, l=1,\ldots ,q\) and \(\sigma (x)\ge 0\). So, \({\bar{x}}\) is an optimal solution of problem (\({\hbox {P}_{\lambda }}\)) and then,
Now, denote
By letting \({\bar{\mu }}:=\sum _{k=1}^p \lambda _kf_k({\bar{x}})\), we observe first by (2.18) that \(({\bar{\mu }}, \mu _1,\ldots ,\mu _q)\) is a feasible point of problem (\(\hbox {D}_{\lambda }\)) and thus, \({\bar{\mu }}\le \lambda ^*\). We assert that
To see this, let \((\mu , \mu _1,\ldots ,\mu _q)\) be an arbitrary feasible point of problem (\(\hbox {D}_{\lambda }\)). Then, we have \(\mu \in \mathbb R,\mu _l\in \mathbb R_+, l=1,\ldots ,q\) and
Arguing similarly as in (2.19), we arrive at \(\sum _{k=1}^p\lambda _kf_k({\bar{x}}) \ge \mu \), which proves that (2.20) is valid. Consequently,
Next, we show that \(\min {({\hbox {P}_{\lambda }})}\le \inf {({\hbox {R}}_{\lambda })}.\) To see this, let \(y:=(y_\alpha )\in \mathbb {R}^{s(d,n)}\) be a feasible point of problem (\(\hbox {R}_{\lambda }\)) and denote \({\tilde{x}}:=(L_{y}(x_1),\ldots , L_{y}(x_n)).\) Then,
Note that \(g_l, l=1,\ldots ,q\) are 1st-SDSOS-convex polynomials. Under the validation of (2.23) and (2.24), applying Jensen’s inequality for 1st-SDSOS-convex polynomials (see Lemma 2.8), we get by (2.22) that
which shows that \({\tilde{x}}\) is a feasible point of problem (\({\hbox {P}_{\lambda }}\)) and thus,
Similarly, since \(f_k, k=1,\ldots ,p\) are 1st-SDSOS-convex polynomials, we derive from Jensen's inequality that
which together with (2.25) guarantees that
Let us denote \({\bar{y}}:={\bar{x}}^{(d)}=(1, {\bar{x}}_1,\ldots ,{\bar{x}}_n, {\bar{x}}_1^2,{\bar{x}}_1{\bar{x}}_2,\ldots ,{\bar{x}}_2^2,\ldots ,{\bar{x}}_n^2,\ldots ,{\bar{x}}_1^{d},\ldots ,{\bar{x}}_n^d)\). Then, \({\bar{y}}_1:=1\) and
Moreover, from the definition of the moment matrix (see (2.2) and (2.3)), we have
which guarantees that
Therefore, \({\bar{y}}\) is a feasible point of problem (\(\hbox {R}_{\lambda }\)) and it in turn entails that
This together with (2.26) entails that
which shows, in particular, that \({\bar{y}} \) is an optimal solution of problem (\(\hbox {R}_{\lambda }\)).
(ii) Let \(\lambda \in \mathbb R^p_+\setminus \{0\}\) be such that the problem (\({\hbox {P}_{\lambda }}\)) admits an optimal solution and assume that \({\bar{x}}\) is an optimal solution of problem (\({\hbox {P}_{\lambda }}\)). Then, \({\bar{x}}\) is a weak efficient solution of problem (P). In view of (i), we obtain that
Now, let \({\hat{y}}:=({\hat{y}}_\alpha )\in \mathbb {R}^{s(d,n)}\) be an optimal solution of problem (\(\hbox {R}_{\lambda }\)). Then,
As above, under the validation of (2.30) and (2.31), applying Jensen’s inequality for 1st-SDSOS-convex polynomials, we obtain from (2.28) that
which shows that \({\hat{x}}:=(L_{{\hat{y}}}(x_1),\ldots , L_{{\hat{y}}}(x_n))\) is a feasible point of problem (\({\hbox {P}_{\lambda }}\)) and thus,
Similarly, we derive from (2.27), (2.29) and Jensen's inequality that
Therefore, \(\min {({\hbox {P}_{\lambda }})}= \sum _{k=1}^p\lambda _kf_k({\hat{x}})\) and \({\hat{x}}\) is an optimal solution of problem (\(\hbox {P}_{\lambda }\)). The latter assertion entails that \({\hat{x}}\) is a weak efficient solution of problem (P). Assume in addition that \(\lambda \in \mathrm{int}\mathbb R^p_+\). Then, \({\hat{x}}\) is an efficient solution of problem (P), which completes the proof of the theorem.\(\Box \)
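The scalarization route of Theorem 2.9 can be illustrated on a toy instance of our own; here a dense grid search stands in for solving the actual SOCP relaxation (\(\hbox {R}_{\lambda }\)), which would require an SOCP solver. Minimizing the weighted sum \(\lambda _1f_1+\lambda _2f_2\) over the feasible set yields a weak efficient solution of the underlying bi-objective problem:

```python
# Toy illustration (ours) of the scalarization route in Theorem 2.9; a dense
# grid search stands in for solving the SOCP relaxation (R_lambda), which
# would require an SOCP solver. Problem:
#   min (f1, f2) = (x^2, (x - 1)^2)  subject to  g(x) = x^2 - 4 <= 0,
# where f1, f2 and g are 1st-SDSOS-convex polynomials.
f1 = lambda x: x**2
f2 = lambda x: (x - 1) ** 2
g = lambda x: x**2 - 4

grid = [i / 1000 for i in range(-3000, 3001)]
feasible = [x for x in grid if g(x) <= 0]

lam = (1.0, 1.0)                         # weights lambda in R^2_+ \ {0}
x_hat = min(feasible, key=lambda x: lam[0] * f1(x) + lam[1] * f2(x))

# The weighted-sum minimizer is x_hat = 1/2, and no feasible grid point
# strictly improves both objectives at once, i.e. x_hat is weakly efficient.
assert abs(x_hat - 0.5) < 1e-12
assert not any(f1(x) < f1(x_hat) and f2(x) < f2(x_hat) for x in feasible)
```

Since here \(\lambda \in \mathrm{int}\,\mathbb R^2_+\), Theorem 2.9(ii) in fact yields efficiency, not just weak efficiency.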
3 Duality via second-order cone conditions
This section is devoted to the study of duality for the multiobjective optimization problem (P). As we see below, the dual results certify the fulfilment of the KKT condition and yield efficient values of the primal multiobjective optimization problem (P) by solving a corresponding dual multiobjective problem, which is formulated in terms of second-order cone (SOC) conditions.
We consider a dual multiobjective problem to the multiobjective optimization problem (P) as follows:
Note here that an efficient solution (resp., a weak efficient solution) of a “max” problem like the dual problem (D) is defined similarly as in Definition 2.3 by replacing \(\le \) (resp., <) by \(\ge \) (resp., >).
The first theorem in this section describes weak and strong duality relations between the primal problem (P) and the dual problem (D).
Theorem 3.1
-
(i)
(Weak duality) Let \({\tilde{x}}\) be a feasible point of problem (P) and let \((t_1,\ldots ,t_p,\lambda _1,\ldots ,\lambda _p,\mu _1,\ldots ,\mu _q)\) be a feasible point of problem (D). It holds that
$$\begin{aligned} f({\tilde{x}})-(t_1,\ldots ,t_p)\notin -\mathrm{int}\mathbb R^p_+, \end{aligned}$$(3.1)where \(f:=(f_1,\ldots ,f_p).\)
-
(ii)
(Strong dual characterization) Let \({\bar{x}}\) be a weak efficient solution of problem (P). Then, the KKT condition holds at \({\bar{x}}\) if and only if there exists a weak efficient solution of problem (D), say \(({\bar{t}}_1,\ldots ,{\bar{t}}_p,{\bar{\lambda }}_1,\ldots ,{\bar{\lambda }}_p,{\bar{\mu }}_1,\ldots ,{\bar{\mu }}_q)\), such that
$$\begin{aligned} f({\bar{x}})=({\bar{t}}_1,\ldots ,{\bar{t}}_p). \end{aligned}$$
Proof
-
(i)
Since \((t_1,\ldots ,t_p,\lambda _1,\ldots ,\lambda _p,\mu _1,\ldots ,\mu _q)\) is a feasible point of problem (D), there exists \( \sigma \in \mathbf{SDSOS}_d[x]\) such that
$$\begin{aligned} \sum _{k=1}^p\lambda _kf_k(x)+\sum _{l=1}^{q}\mu _lg_l(x) -\sum _{k=1}^p\lambda _kt_k=\sigma (x), \quad \forall x\in \mathbb R^n,\end{aligned}$$(3.2)where \(t_k\in \mathbb R, k=1,\ldots , p, \mu _l \in \mathbb R_+, l=1,\ldots , q \) and \( (\lambda _1,\ldots ,\lambda _p)\in \mathbb R^p_+\setminus \{0\}\). Keeping in mind the nonnegativity of SDSOS polynomials, estimating (3.2) at \({\tilde{x}}\), we obtain that
$$\begin{aligned} \sum _{k=1}^p \lambda _kf_k({\tilde{x}})\ge \sum _{k=1}^p \lambda _kt_k, \end{aligned}$$(3.3)where we recall that \(g_l({\tilde{x}})\le 0, l=1,\ldots ,q\) inasmuch as \({\tilde{x}}\) is a feasible point of problem (P). Since \( (\lambda _1,\ldots ,\lambda _p)\in \mathbb R^p_+\setminus \{0\}\), (3.3) guarantees that
$$\begin{aligned} f({\tilde{x}})-(t_1,\ldots ,t_p)\notin -\mathrm{int\,} \mathbb R^p_+, \end{aligned}$$which shows that (3.1) is valid.
-
(ii)
Let \({\bar{x}}\) be a weak efficient solution of problem (P). Assume that the KKT condition holds at \({\bar{x}}\). By Theorem 2.4 and Remark 2.5, the SOC condition holds; i.e., there exist \(({\bar{\lambda }}_1,\ldots ,{\bar{\lambda }}_p)\in \mathbb R^p_+\setminus \{0\}\) and \(({\bar{\mu }}_1,\ldots ,{\bar{\mu }}_q)\in \mathbb R^q_+,\) such that
$$\begin{aligned} \sum _{k=1}^p{\bar{\lambda }}_kf_k+\sum _{l=1}^q {\bar{\mu }}_lg_l -\sum _{k=1}^p{\bar{\lambda }}_k{\bar{t}}_k\in \mathbf{SDSOS}_d[x], \end{aligned}$$(3.4)where \({\bar{t}}_k:=f_k({\bar{x}}), k=1,\ldots ,p\). It shows that \(f({\bar{x}})=({\bar{t}}_1,\ldots ,{\bar{t}}_p)\) and that \(({\bar{t}}_1,\ldots ,{\bar{t}}_p,{\bar{\lambda }}_1,\ldots ,{\bar{\lambda }}_p,{\bar{\mu }}_1,\ldots ,{\bar{\mu }}_q)\) is a feasible point of problem (D). We assert that \(({\bar{t}}_1,\ldots ,{\bar{t}}_p,{\bar{\lambda }}_1,\ldots ,{\bar{\lambda }}_p,{\bar{\mu }}_1,\ldots ,{\bar{\mu }}_q)\) is a weak efficient solution of problem (D). Indeed, if this is not the case, then there exists another feasible point of problem (D), say \(({\hat{t}}_1,\ldots ,{\hat{t}}_p,\hat{\lambda }_1,\ldots ,\hat{\lambda }_p,\hat{\mu }_1,\ldots ,\hat{\mu }_q)\), such that \(({\hat{t}}_1,\ldots ,{\hat{t}}_p)-({\bar{t}}_1,\ldots ,{\bar{t}}_p)\in \mathrm{int}\mathbb R^p_+,\) or equivalently,
$$\begin{aligned} f({\bar{x}})-({\hat{t}}_1,\ldots ,{\hat{t}}_p)\in -\mathrm{int}\mathbb R^p_+, \end{aligned}$$which contradicts the weak duality relation given in (i).
Conversely, assume that there exists a weak efficient solution of problem (D), say \(({\bar{t}}_1,\ldots ,{\bar{t}}_p,{\bar{\lambda }}_1,\ldots ,{\bar{\lambda }}_p,{\bar{\mu }}_1,\ldots ,{\bar{\mu }}_q)\), such that \(f({\bar{x}})=({\bar{t}}_1,\ldots ,{\bar{t}}_p).\) Then, we have
where \(({\bar{\lambda }}_1,\ldots ,{\bar{\lambda }}_p)\in \mathbb R^p_+\setminus \{0\}\) and \(({\bar{\mu }}_1,\ldots ,{\bar{\mu }}_q)\in \mathbb R^q_+\). It means that the SOC condition given in Theorem 2.4 holds, and so the KKT condition holds at \({\bar{x}}\) due to Remark 2.5. The proof of the theorem is complete.\(\Box \)
The following corollary provides a strong duality result under the fulfilment of the Slater condition.
Corollary 3.2
(Strong duality under the Slater condition) Assume that the Slater condition (2.5) holds. Let \({\bar{x}}\) be a weak efficient solution of problem (P). Then, there exists a weak efficient solution of problem (D), say \(({\bar{t}}_1,\ldots ,{\bar{t}}_p,{\bar{\lambda }}_1,\ldots ,{\bar{\lambda }}_p,{\bar{\mu }}_1,\ldots ,{\bar{\mu }}_q)\), such that
Proof
Since the Slater condition (2.5) holds and \({\bar{x}}\) is a weak efficient solution of problem (P), we conclude by Theorem 2.4 that the KKT condition holds at \({\bar{x}}\). So, the conclusion now follows from Theorem 3.1(ii). \(\Box \)
The next theorem establishes a converse duality relation between the primal problem (P) and the dual problem (D).
Theorem 3.3
(Converse duality relation) Assume that the Slater condition (2.5) holds and that the feasible set C in (2.4) is compact. If \(({\bar{t}}_1,\ldots ,{\bar{t}}_p,{\bar{\lambda }}_1,\ldots ,{\bar{\lambda }}_p,{\bar{\mu }}_1,\ldots ,{\bar{\mu }}_q)\) is a weak efficient solution of problem (D), then there exists a weak efficient solution of problem (P), say \({\bar{x}}\), such that
Proof
Let \(({\bar{t}}_1,\ldots ,{\bar{t}}_p,{\bar{\lambda }}_1,\ldots ,{\bar{\lambda }}_p,{\bar{\mu }}_1,\ldots ,{\bar{\mu }}_q)\) be a weak efficient solution of problem (D), and let \(W:=\{f(x)+v\mid x\in C, v\in \mathbb R^p_+\}.\) It is easy to see that W is a closed convex set, and we assert that
To see this, assume on the contrary that \(({\bar{t}}_1,\ldots ,{\bar{t}}_p)\notin W.\) Invoking the classical strong separation theorem in convex analysis (see e.g., Mordukhovich and Nam 2014, Theorem 2.2), one can find \(\alpha :=(\alpha _1,\ldots , \alpha _p)\in \mathbb R^p_+\setminus \{0\}\) such that
This guarantees that
Since the feasible set C is compact, the (scalar) optimization problem on the right hand-side of (3.7) admits an optimal solution. Namely, there exists an optimal solution \(x^0\in C\) such that \(\sum \nolimits _{k=1}^p\alpha _k{\bar{t}}_k< \sum \nolimits _{k=1}^p\alpha _kf_k(x^0).\) Under the fulfilment of the Slater condition (2.5), applying Theorem 2.4 to the (scalar) optimization problem on the right hand-side of (3.7) (i.e., \(f:=\sum _{k=1}^p\alpha _kf_k\)), we find \(\mu _l\in \mathbb R_+, l=1,\ldots ,q, \) such that
Denoting \(\epsilon :=\frac{1}{\sum _{k=1}^p\alpha _k}\Big (\sum \nolimits _{k=1}^p\alpha _kf_k(x^0)-\sum \nolimits _{k=1}^p\alpha _k{\bar{t}}_k\Big )\), it holds that \(\epsilon >0\) and, by (3.8),
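To make the role of \(\epsilon \) explicit, the following elementary rearrangement (valid since \(\sum _{k=1}^p\alpha _k>0\) for \(\alpha \in \mathbb R^p_+\setminus \{0\}\)) shows that the shifted values \({\bar{t}}_k+\epsilon \) attain the weighted-sum value at \(x^0\) exactly:

```latex
% Multiplying the definition of \epsilon by \sum_{k=1}^p \alpha_k > 0 gives
\epsilon \sum_{k=1}^{p} \alpha_k
   \;=\; \sum_{k=1}^{p} \alpha_k f_k(x^0) \;-\; \sum_{k=1}^{p} \alpha_k \bar{t}_k,
\qquad\text{and hence}\qquad
\sum_{k=1}^{p} \alpha_k \bigl(\bar{t}_k + \epsilon\bigr)
   \;=\; \sum_{k=1}^{p} \alpha_k f_k(x^0).
```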
Then, we conclude that \(({\bar{t}}_1+\epsilon ,\ldots ,{\bar{t}}_p+\epsilon , \alpha _1,\ldots , \alpha _p, \mu _1,\ldots , \mu _q)\) is a feasible point of problem (D), and
which contradicts the fact that \(({\bar{t}}_1,\ldots ,{\bar{t}}_p,{\bar{\lambda }}_1,\ldots ,{\bar{\lambda }}_p,{\bar{\mu }}_1,\ldots ,{\bar{\mu }}_q)\) is a weak efficient solution of problem (D). Consequently, (3.6) holds.
Now, by (3.6), there exist \({\bar{x}}\in C\) and \({\bar{v}}:=({\bar{v}}_1,\ldots ,{\bar{v}}_p)\in \mathbb R^p_+\) such that
which shows that (3.5) is valid. Since \(({\bar{t}}_1,\ldots ,{\bar{t}}_p,{\bar{\lambda }}_1,\ldots ,{\bar{\lambda }}_p,{\bar{\mu }}_1,\ldots ,{\bar{\mu }}_q)\) is a feasible point of problem (D), we have \(({\bar{\lambda }}_1,\ldots ,{\bar{\lambda }}_p)\in \mathbb R^p_+\setminus \{0\}\) and \(({\bar{\mu }}_1,\ldots ,{\bar{\mu }}_q)\in \mathbb R^q_+\) such that
From (3.9) and (3.10), we find \(\sigma _0\in \mathbf{SDSOS}_d[x]\) such that
and, due to \(\sum _{k=1}^p{\bar{\lambda }}_k{\bar{v}}_k\ge 0,\) we obtain
So, by Theorem 2.4, we conclude that \({\bar{x}}\) is a weak efficient solution of problem (P), which completes the proof of the theorem. \(\Box \)
We finish this section with an example that illustrates the main results of Theorems 2.4 and 3.1.
Example 3.4
Consider a multiobjective convex polynomial optimization problem
The problem (EP3) is in the form of problem (P), where the objective functions \(f_1(x):=x_1^6+x_2^4+1, f_2(x):=x_1^4+2x_2^2-1, x\in \mathbb R^2,\) and the constraint functions \( g_1(x):= x_1+x_2^4-2, g_2(x):=x_1^4+x_2, x\in \mathbb R^2,\) are 1st-SDSOS-convex polynomials.
Observe first that the Slater condition holds, and one can verify directly that \({\bar{x}}:=(0,0)\) is an efficient solution (and hence a weak efficient solution) of problem (EP3). Let us now show that the SOC condition (2.6) holds for this problem. To see this, take \(\alpha :=(\alpha _1,\alpha _2)\in \mathbb R^2_+\setminus \{0\}\) and \(\mu :=(\mu _1,\mu _2):=(0,0).\) Then, it holds that
showing that (2.6) is valid. Moreover, we see that
which means that the KKT condition (2.7) is also valid. So, the conclusion of Theorem 2.4 holds.
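As a numerical sanity check (an illustration only, not a proof), the following sketch verifies for (EP3) that \({\bar{x}}=(0,0)\) is feasible, that the gradients of both objectives vanish there so the KKT condition holds with multipliers \(\mu =(0,0)\), and that no sampled feasible point dominates \({\bar{x}}\); the grid and step size are our own choices.

```python
# Sanity check for Example (EP3): verify feasibility and KKT at x̄ = (0, 0),
# and that no sampled feasible point dominates x̄.
import itertools

f1 = lambda x1, x2: x1**6 + x2**4 + 1
f2 = lambda x1, x2: x1**4 + 2 * x2**2 - 1
g1 = lambda x1, x2: x1 + x2**4 - 2
g2 = lambda x1, x2: x1**4 + x2

# Gradients computed by hand from the polynomial expressions.
grad_f1 = lambda x1, x2: (6 * x1**5, 4 * x2**3)
grad_f2 = lambda x1, x2: (4 * x1**3, 4 * x2)

# x̄ = (0, 0) is feasible: g1(0, 0) = -2 <= 0 and g2(0, 0) = 0 <= 0.
assert g1(0, 0) <= 0 and g2(0, 0) <= 0

# Both objective gradients vanish at the origin, so the KKT condition
# λ1∇f1(x̄) + λ2∇f2(x̄) + μ1∇g1(x̄) + μ2∇g2(x̄) = 0 holds with μ = (0, 0).
assert grad_f1(0, 0) == (0, 0) and grad_f2(0, 0) == (0, 0)

# No feasible grid point strictly improves both objectives over
# f1(0, 0) = 1 and f2(0, 0) = -1 (indeed f1 >= 1 and f2 >= -1 everywhere).
grid = [i / 10 for i in range(-20, 21)]
for x1, x2 in itertools.product(grid, grid):
    if g1(x1, x2) <= 0 and g2(x1, x2) <= 0:
        assert not (f1(x1, x2) < 1 and f2(x1, x2) < -1)
print("x_bar = (0, 0) passes the efficiency sanity check")
```

Since \(f_1(x)=x_1^6+x_2^4+1\ge 1\) and \(f_2(x)=x_1^4+2x_2^2-1\ge -1\) for all \(x\), with equality only at the origin, the grid check merely illustrates what holds globally.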
Next, we consider a dual multiobjective problem to the multiobjective optimization problem (EP3) as follows:
Let \({\tilde{x}}:=({\tilde{x}}_1,{\tilde{x}}_2)\) be a feasible point of problem (EP3) and let \((t_1, t_2,\lambda _1, \lambda _2,\mu _1, \mu _2)\) be a feasible point of problem (DE3). Then, there exists \( \sigma \in \mathbf{SDSOS}_6[x]\) such that
which implies that \(\sum \limits _{k=1}^2\lambda _kf_k({\tilde{x}})\ge \sum \limits _{k=1}^2\lambda _kt_k\). By \( (\lambda _1, \lambda _2)\in \mathbb R^2_+\setminus \{0\},\) we arrive at
where \(f:=(f_1,f_2)\). This means that the weak duality in Theorem 3.1 is valid. Moreover, we can verify that the KKT condition holds at \({\bar{x}}\) (cf. (3.11)) if and only if there exists a weak efficient solution of problem (DE3), say \(({\bar{t}}_1,{\bar{t}}_2,{\bar{\lambda }}_1, {\bar{\lambda }}_2,{\bar{\mu }}_1, {\bar{\mu }}_2)\), such that
So, the conclusion of Theorem 3.1 holds.
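For a concrete illustration of the weak duality just verified, consider the particular point \((t_1,t_2,\lambda _1,\lambda _2,\mu _1,\mu _2):=(1,-1,1,1,0,0)\) (our own choice, introduced here for illustration, not taken from the text): the certificate \(\lambda _1(f_1(x)-t_1)+\lambda _2(f_2(x)-t_2)=x_1^6+x_1^4+x_2^4+2x_2^2\) is a sum of squares of monomials with nonnegative coefficients, hence SDSOS. The sketch below checks on a grid that no feasible point of (EP3) strictly dominates \((t_1,t_2)\), as weak duality predicts.

```python
# Weak duality check for (EP3) with the hypothetical dual point
# (t1, t2, λ1, λ2, μ1, μ2) = (1, -1, 1, 1, 0, 0): the certificate
#   λ1(f1 - t1) + λ2(f2 - t2) = x1^6 + x1^4 + x2^4 + 2*x2^2
# is pointwise nonnegative, which rules out f(x̃) < (t1, t2) componentwise.
import itertools

f1 = lambda x1, x2: x1**6 + x2**4 + 1
f2 = lambda x1, x2: x1**4 + 2 * x2**2 - 1
g1 = lambda x1, x2: x1 + x2**4 - 2
g2 = lambda x1, x2: x1**4 + x2

t1, t2 = 1.0, -1.0
grid = [i / 10 for i in range(-20, 21)]
for x1, x2 in itertools.product(grid, grid):
    if g1(x1, x2) <= 0 and g2(x1, x2) <= 0:   # x̃ feasible for (EP3)
        # the SDSOS certificate is nonnegative at every point ...
        assert f1(x1, x2) - t1 + f2(x1, x2) - t2 >= 0
        # ... so f1(x̃) < t1 and f2(x̃) < t2 cannot hold simultaneously
        assert not (f1(x1, x2) < t1 and f2(x1, x2) < t2)
print("weak duality holds on the sample grid")
```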
4 Conclusion and further work
In this paper, we have presented necessary and sufficient optimality conditions in terms of second-order cone conditions for a subclass of multiobjective convex optimization problems, where the objective and constraint functions are first-order scaled diagonally dominant sums-of-squares convex (1st-SDSOS-convex) polynomials. It has been shown that the second-order cone condition is equivalent to the Karush-Kuhn-Tucker (KKT) condition at a given weak efficient solution of the underlying multiobjective program. We have also addressed a dual multiobjective problem to the considered multiobjective convex polynomial optimization problem and examined weak, strong and converse duality relations between them. In this way, we have obtained a strong dual characterization via the KKT condition.
The results obtained here provide conceptual schemes for solving multiobjective optimization problems involving 1st-SDSOS-convex polynomials. Relationships between these results and other existing multiobjective algorithms (see e.g., Ehrgott 2005; Gorissen and den Hertog 2012; Magron et al. 2014; Jahn 2004; Steuer 1986) should be investigated in order to develop efficient methods and to perform numerical tests on concrete multiobjective optimization problems with 1st-SDSOS-convex polynomial data. Furthermore, it would be of interest to see how the proposed approach can be extended to multiobjective optimization problems where the objective and constraint functions are diagonally dominant sums-of-squares convex (DSOS-convex) polynomials (cf. Ahmadi and Majumdar (2016, 2019)). This forms an interesting topic for a forthcoming paper.
References
Ahmadi, A. A., & Majumdar, A. (2016). Some applications of polynomial optimization in operations research and real-time decision making. Optimization Letters, 10(4), 709–729.
Ahmadi, A. A., & Majumdar, A. (2019). DSOS and SDSOS optimization: More tractable alternatives to sum of squares and semidefinite optimization. SIAM Journal on Applied Algebra and Geometry, 3, 193–230.
Ahmadi, A. A., & Parrilo, P. A. (2013). A complete characterization of the gap between convexity and SOS-convexity. SIAM Journal on Optimization, 23(2), 811–833.
Blekherman, G., Parrilo, P. A., & Thomas, R. (2012). Semidefinite optimization and convex algebraic geometry. Philadelphia: SIAM Publications.
Boţ, R. I., Grad, S. M., & Wanka, G. (2009). Duality in vector optimization. Vector Optimization. Berlin: Springer.
Chieu, N. H., Feng, J. W., Gao, W., Li, G., & Wu, D. (2018). SOS-convex semialgebraic programs and its applications to robust optimization: a tractable class of nonsmooth convex optimization. Set-Valued and Variational Analysis, 26(2), 305–326.
Chinchuluun, A., & Pardalos, P. M. (2007). A survey of recent developments in multiobjective optimization. Annals of Operations Research, 154, 29–50.
Chuong, T. D. (2016). Optimality and duality for robust multiobjective optimization problems. Nonlinear Analysis: Theory, Methods & Applications, 134, 127–143.
Chuong, T. D. (2017). Robust alternative theorem for linear inequalities with applications to robust multiobjective optimization. Operations Research Letters, 45(6), 575–580.
Chuong, T. D. (2018). Linear matrix inequality conditions and duality for a class of robust multiobjective convex polynomial programs. SIAM Journal on Optimization, 28, 2466–2488.
Chuong, T. D. (2019). Optimality and duality in nonsmooth composite vector optimization and applications. Annals of Operations Research,. https://doi.org/10.1007/s10479-019-03349-1.
Chuong, T. D. (2020). Optimality conditions for nonsmooth multiobjective bilevel optimization problems. Annals of Operations Research, 287, 617–642.
Chuong, T. D., & Jeyakumar, V. (2017). Convergent hierarchy of SDP relaxations for a class of semi-infinite convex polynomial programs and applications. Applied Mathematics and Computation, 315, 381–399.
Chuong, T. D., Jeyakumar, V., & Li, G. (2019). A new bounded degree hierarchy with SOCP relaxations for global polynomial optimization and conic convex semi-algebraic programs. Journal of Global Optimization, 75, 885–919.
Chuong, T. D., & Kim, D. S. (2014). Optimality conditions and duality in nonsmooth multiobjective optimization problems. Annals of Operations Research, 217, 117–136.
Ehrgott, M. (2005). Multicriteria optimization. Berlin: Springer.
Ehrgott, M., Ide, J., & Schöbel, A. (2014). Minmax robustness for multi-objective optimization problems. European Journal of Operational Research, 239, 17–31.
Georgiev, P. G., Luc, D. T., & Pardalos, P. M. (2013). Robust aspects of solutions in deterministic multiple objective linear programming. European Journal of Operational Research, 229(1), 29–36.
Goberna, M. A., Jeyakumar, V., Li, G., & Perez, J.-V. (2014). Robust solutions of multi-objective linear semi-infinite programs under constraint data uncertainty. SIAM Journal on Optimization, 24(3), 1402–1419.
Jahn, J. (2004). Vector optimization: Theory, applications, and extensions. Berlin: Springer.
Gorissen, B. L., & den Hertog, D. (2012). Approximating the Pareto sets of multiobjective linear programs via robust optimization. Operations Research Letters, 40(5), 319–324.
Helton, J. W., & Nie, J. (2010). Semidefinite representation of convex sets. Mathematical Programming, Ser. A, 122(1), 21–64.
Lasserre, J. B. (2009). Moments, positive polynomials and their applications. London: Imperial College Press.
Lee, G. M., & Lee, J. H. (2015). On nonsmooth optimality theorems for robust multiobjective optimization problems. Journal of Nonlinear and Convex Analysis, 16(10), 2039–2052.
Lee, J. H., & Jiao, L. (2019). Finding efficient solutions in robust multiple objective optimization with SOS-convex polynomial data. Annals of Operations Research,. https://doi.org/10.1007/s10479-019-03216-z.
Lee, J. H., & Jiao, L. (2019). Finding efficient solutions for multicriteria optimization problems with SOS-convex polynomials. Taiwanese Journal of Mathematics, 23(6), 1535–1550.
Lee, J. H., & Lee, G. M. (2018). On optimality conditions and duality theorems for robust semi-infinite multiobjective optimization problems. Annals of Operations Research, 269(1–2), 419–438.
Lobo, M. S., Vandenberghe, L., Boyd, S., & Lebret, H. (1998). Applications of second-order cone programming. Linear Algebra and its Applications, 284, 193–228.
Luc, D. T. (1989). Theory of vector optimization. Lecture Notes in Economics and Mathematical Systems, Vol. 319. Berlin: Springer.
Magron, V., Henrion, D., & Lasserre, J.-B. (2014). Approximating Pareto curves using semidefinite relaxations. Operations Research Letters, 42(6–7), 432–437.
Marshall, M. (2008). Positive polynomials and sums of squares. Mathematical Surveys and Monographs, Vol. 146. Providence, RI: American Mathematical Society.
Mordukhovich, B. S., & Nam, N. M. (2014). An easy path to convex analysis and applications. Synthesis Lectures on Mathematics and Statistics, Vol. 14. Williston: Morgan & Claypool Publishers.
Rockafellar, R. T. (1970). Convex analysis. Princeton: Princeton University Press.
Sawaragi, Y., Nakayama, H., & Tanino, T. (1985). Theory of multiobjective optimization. Mathematics in Science and Engineering, Vol. 176. Orlando: Academic Press.
Steuer, R. E. (1986). Multiple criteria optimization: Theory, computation, and application. Wiley Series in Probability and Mathematical Statistics. New York: Wiley.
Zamani, M., Soleimani-damaneh, M., & Kabgani, A. (2015). Robustness in nonsmooth nonlinear multi-objective programming. European Journal of Operational Research, 247(2), 370–378.
Acknowledgements
The author is grateful to the referees for the valuable comments and suggestions.
Chuong, T.D. Second-order cone programming relaxations for a class of multiobjective convex polynomial problems. Ann Oper Res 311, 1017–1033 (2022). https://doi.org/10.1007/s10479-020-03577-w
Keywords
- Multiobjective optimization
- Duality
- 1st-SDSOS-convex polynomial
- Second-order cone condition
- Slater condition