Abstract
In this paper, we present a unified approach for studying convex composite multiobjective optimization problems via asymptotic analysis. We characterize the nonemptiness and compactness of the weak Pareto optimal solution sets for a convex composite multiobjective optimization problem. Then, we employ the obtained results to propose a class of proximal-type methods for solving the convex composite multiobjective optimization problem, and carry out their convergence analysis under some mild conditions.
1 Introduction
It is known that the scalar-valued composite optimization model is very important in both theory and methodology: it provides a unified framework for studying the convergence behaviour of various algorithms and Lagrangian optimality conditions. The study of the scalar-valued composite optimization model has recently received a great deal of attention in the literature; see, e.g., [4, 10, 15, 17, 25, 27] and the references therein.
However, we are rarely asked to make decisions based on only one criterion; most often, decisions are based on several conflicting criteria. The multiobjective optimization model provides the mathematical framework to deal with these situations, and it is undoubtedly a powerful tool in decision analysis. Moreover, it has found many significant applications in other fields such as economics, management science and engineering design. Many papers have been published on optimality conditions, duality theory and topological properties of the solution sets of multiobjective optimization problems (see, e.g., [5–7, 9, 12, 14, 19, 24]).
The composite multiobjective optimization model is broad and flexible enough to cover many common types of multiobjective optimization problems seen in the literature. Moreover, the model obviously includes the wide class of scalar-valued composite optimization problems, which is now recognized as fundamental for theory and computation in scalar nonsmooth optimization. Recently, some investigations of composite multiobjective optimization models have appeared in the following papers: Jeyakumar and Yang [16–18, 26] investigated first- and second-order optimality conditions for both nonsmooth and smooth convex composite multiobjective optimization problems, and also obtained duality results for these problems even when the objective functions are not cone-convex. Reddy and Mukherjee [21] studied first-order optimality conditions for a class of composite multiobjective optimization problems with \(V\)-\(\rho \)-invexity. Bot et al. [3] obtained conjugate duality results for multiobjective composed optimization problems. It is worth noticing that there are relatively few results for composite multiobjective optimization problems, owing to the complexity of the objective functions and the variety of solution sets. Furthermore, to the best of our knowledge, no numerical method has been designed for solving composite multiobjective optimization problems, not even a conceptual one.
In this paper, we consider the following extended-valued composite multiobjective optimization problem:
where \(S\subset R^n\) is closed and convex. The outer function \(F: R^l\rightarrow R^m\cup \{+\infty _C\}\) is a vector-valued function, \(F_i\) is the \(i\)th component of \(F\), and we denote by \(domF_i\) the effective domain of \(F_i\), i.e. \(domF_i=\{x\in R^l| F_i(x)<+\infty \}\). The inner function \(A: S\rightarrow R^l\) is given by an \(l\times n\) matrix such that \(A(S)\subset \cap _{i=1}^m dom{F}_i\). Denote by \(r(A)\) and \(A^\top \) the rank and the transpose of the matrix \(A\), respectively. To illustrate the nature of the model \((CMOP)\), let us look at an example.
Example 1.1
Consider the vector approximation (model) problem:
where \(S\subset R^n\) is closed and convex, \(\Vert \cdot \Vert _i, i=1, 2,\ldots , m\), is a norm on \(R^l\), and for each \(i=1, 2,\ldots , m\), \(A_i\) is an \(l\times n\) matrix. Various examples of vector approximation problems of this type that arise in simultaneous approximation are given in [13, 14].
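To see a problem of this type numerically, the following sketch uses a purely hypothetical instance (not taken from [13, 14]): squared Euclidean norms stand in for the general norms \(\Vert \cdot \Vert _i\), each \(A_i\) is the identity, and the anchor points \(a, b\) are illustrative. Sweeping the scalarization weight traces the weak Pareto set.

```python
import numpy as np

# Hypothetical two-objective approximation instance: minimize
# (||x - a||^2, ||x - b||^2) over x in R^2; anchor points chosen for illustration.
a = np.array([0.0, 0.0])
b = np.array([1.0, 0.0])

def scalarized_solution(lam):
    # The weighted problem  min lam*||x-a||^2 + (1-lam)*||x-b||^2  has the
    # closed-form solution lam*a + (1-lam)*b, a point on the segment [a, b].
    return lam * a + (1 - lam) * b

# Sweeping the weight traces out the weak Pareto set: the segment [a, b].
pareto_points = [scalarized_solution(lam) for lam in np.linspace(0.0, 1.0, 5)]
```

For this instance the weak Pareto set is exactly the segment between the two anchor points, which the swept weights sample uniformly.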
The idea is that, by studying the composite model problem \((CMOP)\), a unified framework can be given for the treatment of many questions of theoretical and computational interest in multiobjective optimization. The motivation of this paper is to design an iterative algorithm for solving the model \((CMOP)\) via asymptotic analysis. Although the inner function \(A(x)\) is linear, the composite structure \(F(A(x))\) captures some essential features of composite optimization. On the other hand, there are technical difficulties in computing the asymptotic function and the subdifferential of a vector-valued composite function when the inner function is not linear.
The paper is organized as follows. In Sect. 2, we present some concepts, basic assumptions and preliminary results. In Sect. 3, we characterize the nonemptiness and compactness of the weak Pareto optimal solution set of the problem (CMOP). In Sect. 4, we employ the obtained results to construct a class of proximal-type methods for solving the problem (CMOP), and carry out convergence analysis under some mild conditions. In Sect. 5, we draw some conclusions.
2 Preliminaries
In this section, we introduce various notions of Pareto optimal solutions and present some preliminary results that will be used throughout this paper.
Let \(C=R^m_+\subset R^m\) and \(C_1=\{x\in R^m_+| ~ \Vert x\Vert =1\}\). We define, for any \(y_1, y_2\in R^m\),
The extended space of \(R^m \) is \(\bar{R}^m= R^m\cup \{-\infty _C, +\infty _C\}\), where \(-\infty _C\) is an imaginary point each of whose coordinates is \(-\infty \), and the imaginary point \(+\infty _C\) is understood analogously (with the conventions \(\infty _C+\infty _C=\infty _C\) and \(\mu (+\infty _C)=+\infty _C\) for each positive number \(\mu \)). The point \(y\in R^m\) is a column vector and its transpose is denoted by \(y^\top \). The inner product in \(R^m\) is denoted by \(\langle \cdot , \cdot \rangle \).
It is worth noticing that the binary relation \(\not \le _{intC}\) is closed in the sense that if \(x_k\rightarrow x^*\) as \(k \rightarrow \infty \) and \(x_k \not \le _{intC} 0 \), then \(x^* \not \le _{intC} 0\). This follows from the closedness of the set \(W:= R^m\setminus (-intC)\).
Definition 2.1
[5] Let \(K\subset R^n\) be convex. A map \(F: K\rightarrow R^m\cup \{+\infty _C\}\) is said to be C-convex if
for any \(x, y\in K\) and \(\lambda \in [0, 1]. F\) is said to be strictly C-convex if
for any \(x, y\in K\) with \(x\not =y\) and \(\lambda \in (0, 1)\).
Definition 2.2
[11] A map \(F: K\subset R^n\rightarrow R^m\cup \{+\infty _C\}\) is said to be C-lsc at \(x_0\in K\) if, for any neighborhood \(V\) of \(F(x_0)\) in \(R^m\), there exists a neighborhood \(U\) of \(x_0\) in \(R^n\) such that \(F(U\cap K)\subseteq V+C\). The map \(F: K\subset R^n\rightarrow R^m\cup \{+\infty _C\}\) is said to be C-lsc on \(K\) if it is C-lsc at every point \(x_0\in K\).
Remark 2.1
In fact, when \(C=R^m_+\), the \(R^m_+\)-lower semicontinuity of \(F=(F_1,\ldots , F_m)\) is equivalent to the (usual) lower semicontinuity of each \(F_i\).
Definition 2.3
[5] Let \(K\subset R^n\) be convex and \(F: K\rightarrow R^m\cup \{+\infty _C\}\) be a vector-valued function. \(x^*\in K\) is said to be a Pareto optimal solution of \(F\) on K if
\(x^*\in K\) is said to be a weak Pareto optimal solution of \(F\) on \(K\) if
\(x^*\in K\) is said to be an ideal optimal solution of \(F\) on \(K\) if
Lemma 2.1
[11] Let \(K\subset R^n\) be a closed set, and suppose that \(W\subset R^m\) is a closed set such that \(W+C\subseteq W\). Assume that \(F: K\rightarrow R^m\cup \{+\infty _C\}\) is C-lsc. Then, the set \(P=\{x\in K |~F(x)-\lambda \in -W\}\) is closed for all \(\lambda \in R^m\).
Definition 2.4
[1] Let K be a nonempty set in \(R^n\). Then the asymptotic cone of the set \(K\), denoted by \(K^\infty \), is the set of all vectors \(d\in R^n\) that arise as limits of sequences \(\{x_k/t_k\}\) with \(\{x_k\}\subset K\) and \(t_k\rightarrow +\infty \), namely
When \(K\) is convex and closed, for any \( x_0\in K\),
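For the reader's convenience, these two standard representations (restated here following [1]; the displayed form is our rendering, not a new result) read:

```latex
K^\infty \;=\; \Big\{ d \in R^n \;\Big|\; \exists\, t_k \to +\infty,\ \exists\, \{x_k\} \subset K
\ \text{with}\ \lim_{k\to\infty} \frac{x_k}{t_k} = d \Big\},
\qquad
K^\infty \;=\; \{ d \in R^n \mid x_0 + t d \in K,\ \forall\, t > 0 \},
```

the second identity holding when \(K\) is closed and convex, for any \(x_0\in K\).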
Lemma 2.2
[1] A set \(K\subset R^n\) is bounded if and only if its asymptotic cone is just the zero cone: \(K^\infty =\{0\}\).
Definition 2.5
[1] For any given function \(f: R^n\rightarrow R\cup \{+\infty \}\), the asymptotic function of \(f\) is defined as the function \(f^\infty \) such that \(epi f^\infty =(epi f)^\infty \), where \(epi f=\{(x, t)\in R^n\times R | f(x)\le t\}\) is the epigraph of \(f\). Consequently, we can give the analytic representation of the asymptotic function \(f^\infty \):
When \(f\) is a proper convex and lower semi-continuous (lsc in short) function, we have
or equivalently
and
For the indicator function \(\delta _K\), we have that \(\delta ^\infty _K=\delta _{K^\infty }\), where \(K\subset R^n\) is a nonempty set.
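For the reader's convenience, the standard analytic representations of \(f^\infty \) (restated here following [1]; our rendering) are:

```latex
f^\infty(d) \;=\; \liminf_{\substack{t \to +\infty \\ d' \to d}} \frac{f(t d')}{t},
\qquad
f^\infty(d) \;=\; \sup_{t>0} \frac{f(x+td) - f(x)}{t}
\;=\; \lim_{t \to +\infty} \frac{f(x+td) - f(x)}{t},
```

where the second pair of formulas is valid for \(f\) proper, convex and lsc, for any \(x\in dom f\).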
Definition 2.6
[23] The function \(f: R^n\rightarrow R\cup \{+\infty \}\) is said to be coercive if its asymptotic function satisfies \(f^\infty (d)>0\) for all \(d\in R^n\setminus \{0\}\), and counter-coercive if \(f^\infty (d)=-\infty \) for some \(d\in R^n\setminus \{0\}\).
Lemma 2.3
[1] Let \(f: R^n\rightarrow R\cup \{+\infty \}\) be proper convex and lower semicontinuous; then the following three statements are equivalent:
-
(a)
f is coercive;
-
(b)
the optimal set \(\{x\in R^n |~ f(x)=\inf f\}\) is nonempty and compact;
-
(c)
\(\liminf \limits _{\Vert x\Vert \rightarrow +\infty }\frac{f(x)}{\Vert x\Vert }>0.\)
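The equivalences of Lemma 2.3 can be illustrated numerically on one-dimensional examples of our own choosing: approximate \(f^\infty (d)\) along a fixed direction by \(f(td)/t\) for large \(t\), which is valid for proper convex lsc \(f\).

```python
import math

# Approximate f^inf(d) by f(t*d)/t for large t, along a fixed direction d
# (illustrative functions only; both are proper, convex and lsc).
def asymptotic_value(f, d, t=1e8):
    return f(t * d) / t

f = lambda x: math.sqrt(1.0 + x * x)  # f^inf(d) = |d| > 0 for d != 0: coercive
g = math.exp                          # g^inf(-1) = 0: not coercive

coercive_dir = asymptotic_value(f, 1.0)   # close to 1.0 > 0
flat_dir = asymptotic_value(g, -1.0)      # close to 0.0

# Consistent with Lemma 2.3: argmin f = {0} is nonempty and compact,
# while inf g = 0 is not attained.
```

The first function is coercive and attains its infimum on the compact set \(\{0\}\); the second has a direction with zero asymptotic slope and an unattained infimum.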
Definition 2.7
[19] A cone \(C_2\subseteq R^m\) is called \(Daniell\) if any decreasing sequence in \(R^m\) having a lower bound converges to its infimum. For example, the cone \(C=R^m_+\) has the Daniell property.
Definition 2.8
[24] A set \(S\subset R^m\) is said to have the domination property with respect to \(C\), if there exists \(s\in R^m\) such that \(S\subseteq s+C\).
Let \(H: R^n\rightarrow R^m\cup \{+\infty _C\}\) be a vector-valued function, and denote by \(EpiH\) the epigraph of \(H\), i.e.
Similarly, we define the asymptotic function of a vector-valued function.
Definition 2.9
For any given vector-valued function \(H: R^n\rightarrow R^m\cup \{+\infty _C\}\), the asymptotic function of H is defined as the function \(H^\infty : R^n\rightarrow R^m\cup \{+\infty _C\}\) such that
Proposition 2.1
Let \(H: R^n\rightarrow R^m\cup \{+\infty _C\}\) be a proper, C-lsc and C-convex vector-valued function. One has
where x is any vector in \(domH\).
Proof
From the \(C\)-convexity of \(H\), we know that the set \(EpiH\) is also convex in \(R^n\times R^m\). By the definition of the asymptotic cones of \(EpiH\), one has that for any \(x\in domH\),
That is, \((d,u)\in (EpiH)^\infty \) if and only if, for any \(x\in domH\), we have
From the inequality (2.11), we define a new function \(T: R^n\rightarrow R^m\cup \{+\infty _C\}\):
and hence
On the other hand, for any fixed \(x,d\in R^n\) and \(i\in \{1,\ldots ,m\}\), the function \(\frac{H_i(x+td)-H_i(x)}{t} \) is nondecreasing in \(t>0\). Thus we obtain that
and
That is
The proof is complete.\(\square \)
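The monotonicity of the difference quotient invoked in the proof of Proposition 2.1 is the standard convexity estimate: for \(0<s<t\),

```latex
x + s d \;=\; \Big(1-\tfrac{s}{t}\Big)x + \tfrac{s}{t}\,(x+td)
\;\Longrightarrow\;
H_i(x+sd) \;\le\; \Big(1-\tfrac{s}{t}\Big)H_i(x) + \tfrac{s}{t}\,H_i(x+td),
```

which rearranges to \(\frac{H_i(x+sd)-H_i(x)}{s}\le \frac{H_i(x+td)-H_i(x)}{t}\).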
Remark 2.2
From the statement of Proposition 2.1 and formula (2.7), we have
Proposition 2.2
Let \(F: R^l\rightarrow R^m\cup \{+\infty _C\}\) be a proper, \(C\)-lsc and \(C\)-convex function, let \(A\) be a linear map from \(R^n\) to \(R^l\) with \(A(R^n)\subset domF\), and let \(G(x)=F(Ax)\) be the resulting proper composite function. Then \(G\) is \(C\)-lsc and \(C\)-convex.
Proof
From Remark 2.1, we know that \(G\) is \(C\)-lsc if and only if \(G_i\) is lsc for every \(i\in \{1,\ldots ,m\}\). Since \(A\) is a linear map and \(F_i\) is lsc for each fixed \(i\in \{1,\ldots ,m\}\), \(G_i\) is lsc for every \(i\in \{1,\ldots ,m\}\); hence \(G\) is \(C\)-lsc.
On the other hand, let \(x_1, x_2\in S\) and \(\lambda \in [0,1]\). From the assumptions, we obtain
and
By the \(C\)-convexity of \(F\), we derive
That is, \(G\) is \(C\)-convex. The proof is complete. \(\square \)
Proposition 2.3
Let the assumptions of Proposition 2.2 hold. Then,
Proof
The proof of Proposition 2.3 is straightforward, so we omit it.
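For the record, our reading of the identity asserted in Proposition 2.3, obtained by combining Proposition 2.1 with the linearity of \(A\) (the displayed form is our reconstruction, not quoted from the original), is:

```latex
G^\infty(d) \;=\; \lim_{t\to +\infty} \frac{F\big(A(x+td)\big) - F(Ax)}{t}
\;=\; \lim_{t\to +\infty} \frac{F(Ax + tAd) - F(Ax)}{t}
\;=\; F^\infty(Ad), \qquad \forall\, d \in R^n,
```

for any \(x\) with \(Ax\in domF\).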
Throughout this paper, we denote by \(\bar{X}\) the weak Pareto optimal solution set and by \(X^*\) the ideal solution set of problem (CMOP), respectively.
3 Characterizations of weak Pareto optimal solution sets
We denote by
and
Theorem 3.1
In problem (CMOP), suppose that \(F\) is proper, \(C\)-lsc and \(C\)-convex. If \(\bar{X}\ne \emptyset \), then
Furthermore, if the ideal solution set \(X^*\) is nonempty, then \(S_1=\bar{X}^\infty \).
Proof
(1) Take any \(u\in \bar{X}^\infty \). From the definition of the asymptotic cone, there exist sequences \(\{x_k\}\subset \bar{X}\) and \(\{t_k\}\) with \(t_k\rightarrow +\infty \) such that \(\mathop {\text{ lim}}\limits _{k\rightarrow +\infty } \frac{x_k}{ t_k}= u\). By the fact that \(x_k\in \bar{X}\), one has
For each \(y\in S\), we have
By virtue of Proposition 2.2, for any fixed \(\lambda >0\), we have
when \(t_k\) is sufficiently large. That is
From the inequality (2.8), one has
and by the formula (3.3), we derive
Hence, combining (3.4), (3.5) with (3.6), we obtain
Taking the limit in (3.7) as \(k\rightarrow +\infty \), we derive
That is \(u\in S_1\).
(2). Taking any \(d\in S_1\), which means that for any \(x\in S\), we have
Without loss of generality, we assume \(x_k=x+\lambda _kd\), where \(\lambda _k\rightarrow +\infty \). From the inclusion (3.8), we have
Choosing \(t_k=\lambda _k\), we obtain
That is \(d\in S_2\).
(3). Taking any \(d\in S_1\), by the assumption that \(X^*\) is nonempty, we have
where \(\bar{x}\in X^*\) is fixed and \(t_k\rightarrow +\infty \). Taking any \(y\in S\), it is easy to check that
From the definition of \(X^*\), we have
and by the definition of \(S_1\), we obtain
Combining (3.12) with (3.13), we have
The formula (3.14) means that \(\bar{x}+t_kd\in \bar{X}\). We denote by \(x_k=\bar{x}+t_kd\) and it follows that
Hence, we conclude \(d\in \bar{X}^\infty \). The proof is complete.\(\square \)
Remark 3.1
When \(A=I\) is the identity mapping, some corresponding results have been obtained in [8, 11].
Next, let us consider some necessary and sufficient conditions for the nonemptiness and compactness of the weak Pareto optimal solution set of the problem (CMOP).
Lemma 3.1
In the problem (CMOP), we assume \(F\) is proper, C-lsc and C-convex. Then \(\bar{X}\) is nonempty and compact if and only if
Proof
Denote by \(argmin_SF_j\) the solution set of the following scalar-valued optimization problem:
where \(j\in \{1,\ldots ,m\}\). By virtue of Theorem 2.1 of [8], one has that \(\bar{X}\) is nonempty and compact if and only if \(argmin_SF_j\) is nonempty and compact for every \(j\in \{1,\ldots ,m\}\). We observe that the nonemptiness and compactness of \(argmin_SF_j\) is equivalent to
for each \(j\in \{1,\ldots ,m\}\); see Theorem 27.1 of [22]. Thus, formula (3.15) is equivalent to the nonemptiness and compactness of \(\bar{X}\). The proof is complete.\(\square \)
Remark 3.2
The linearity of \(A(x)\) makes it possible to obtain the analytical expression of the asymptotic function of the vector-valued function (Proposition 2.3). By virtue of the analytical expression of the asymptotic function, Lemma 3.1 generalizes some corresponding results of [8].
4 Proximal-type method for convex composite multiobjective optimization problem
It is known that the following constrained multiobjective optimization problem
is equivalent to the unconstrained multiobjective optimization problem
in the sense that they have the same sets of Pareto optimal solutions and the same sets of weak Pareto optimal solutions, where
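One common way to realize this equivalence, which we state as an assumption on our part since it is the standard penalization device, is to take a fixed direction \(e\in intC\) and set

```latex
F(x) \;=\; F_0(Ax) + \delta_S(x)\, e,
\qquad \text{i.e.}\quad
F(x) \;=\; \begin{cases} F_0(Ax), & x \in S,\\[2pt] +\infty_C, & x \notin S, \end{cases}
```

so that infeasible points can never be (weak) Pareto optimal for the unconstrained problem.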
Lemma 4.1
If \(K\subset R^n\) is a convex set and \(F_1: K\rightarrow R^m\cup \{+\infty _C\}\) is a proper \(C\)-convex map, then
where \(C-ARGMIN_w \{F_1(x)\mid x\in K\}\) is the weak Pareto optimal solution set of \(F_1\) on \(K\).
This follows immediately from Theorem 2.1 in [2].
Now we make the following assumption:
(A) the set \(\bar{X}\) is nonempty and compact.
Here we propose the following vector-valued proximal-type method (VPM, in short):
-
Step (1): Take any \(x_0\in R^n\);
-
Step (2): Given \(x_k\), if \(x_k\in \bar{X}\), then \(x_{k+p}=x_k\) for all \(p\ge 1\) and the algorithm stops; otherwise, go to Step (3).
-
Step (3): If \(x_k\notin \bar{X}\), then compute \(x_{k+1}\) satisfying
where \(\theta _k:=\{x\in R^n| F_0(Ax)\le _CF_0(Ax_k)\}\), \(\varepsilon _k\in (0, \varepsilon ]\), \(\varepsilon >0\), and go to Step (2).
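The steps above can be sketched on a hypothetical instance (all data below are our own assumptions: \(F_0(y)=(\Vert y-a\Vert ^2, \Vert y-b\Vert ^2)\), \(A=I\), \(S=R^n\), a fixed scalarization weight \(\lambda \), and constant \(e_k\), \(\varepsilon _k\)). As in the proof of Theorem 4.1, each subproblem (4.1) is handled through its scalarization; the starting point is chosen so that every step decreases both objective components, hence the constraint set \(\theta _k\) stays inactive and can be dropped from the sketch.

```python
import numpy as np

# Sketch of the method (VPM) on a hypothetical bi-objective quadratic instance.
a, b = np.array([0.0, 0.0]), np.array([1.0, 1.0])
lam = np.array([0.5, 0.5])            # fixed weights (in C_1 w.r.t. the 1-norm)
e, eps = np.array([1.0, 1.0]), 1.0    # e_k and eps_k held constant
w = eps * e.dot(lam)                  # regularization weight eps_k * <e_k, lam>

def vpm_step(x):
    # Closed-form minimizer of the scalarized prox subproblem
    #   lam[0]*||x-a||^2 + lam[1]*||x-b||^2 + (w/2)*||x - x_k||^2.
    return (2 * lam[0] * a + 2 * lam[1] * b + w * x) / (2 * lam.sum() + w)

x = np.array([5.0, -4.0])
for _ in range(60):
    x = vpm_step(x)
# x converges to (a + b)/2, a weak Pareto point of this bi-objective instance.
```

Each step contracts toward the weighted anchor \((\lambda _1 a+\lambda _2 b)/(\lambda _1+\lambda _2)\) with factor \(w/(2\sum _i \lambda _i+w)\), so the whole sequence converges, consistent with Theorem 4.3.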
Next we will establish the main results in this section.
Theorem 4.1
In the problem \((MOP)\), let \(F_0: R^l\rightarrow R^m\cup \{+\infty _C\}\) be a proper, \(C\)-convex and \(C\)-lower semicontinuous mapping. Further suppose that \(r(A)=n\). Under the assumption (A), any sequence \(\{x_k\}\) generated by the method (VPM) is well-defined and bounded.
Proof
Let \(x_0\in R^n\) be an initial point and assume the algorithm has reached step \(k\). We will show that the next iterate \(x_{k+1}\) exists. Define a new function \(T_k: R^n\rightarrow R\cup \{+\infty \}\) by
where \(\lambda \in C_1\) and \(\delta _{\theta _k}(x)\) is the indicator function of set \(\theta _k\). Denote by \(\bar{X}_k\) the solution set of the following scalar-valued optimization problem:
It is clear that \(\theta _k\) is a nonempty and convex set by its definition. From the assumptions that \(F_0\) is \(C\)-lsc and \(C\)-convex, we know that for any \(\lambda \in C_1\), the function \(\langle F_0(Ax), \lambda \rangle +\delta _{\theta _k}(x)\) is convex and lsc with respect to \(x\). That is, \(T_k\) is lsc and convex. By the assumption (A) and by virtue of Lemma 2.3, we have
From the assumption that \(r(A)=n\), we have the following inequality
From the definition of an indicator function, we know that
Thus, combining (4.3) with (4.4), we obtain that
On the other hand, by the fact that \(\{e_k\}\subset R^m_+\) and the definition of \(\lambda \), we have \(\langle e_k, \lambda \rangle >0\). By the formula (2.8), we obtain
From Proposition 2.6.1 of [1], we derive that
Combining (4.4), (4.5) with (4.6), we obtain that
By virtue of Lemma 2.3, we conclude that the set \(\bar{X}_k\) is nonempty and compact for every \(k\in N\). From Lemma 4.1, a minimizer of \(T_k\) satisfies (4.1) and can be taken as \(x_{k+1}\).
Next we show that the sequence \(\{x_k\}\) is bounded. Suppose to the contrary that \(\Vert x_k\Vert \rightarrow +\infty \) as \(k\rightarrow +\infty \). From the formula (4.2), we know that the function \(\langle F_0(Ax), \lambda \rangle \) is coercive. From statement (c) of Lemma 2.3, we have
However by the definition of the method (VPM), we know
which contradicts (4.8). Thus the sequence \(\{x_k\}\) is bounded. The proof is complete.\(\square \)
Remark 4.1
The main statements of Theorem 4.1 concern the existence and the boundedness of the iterates. Compared with some corresponding results in [2], our contribution is a quite different method of proving the existence of iterates and the boundedness of sequences via asymptotic analysis. It is worth noticing that when the regularization term in (4.1) is not quadratic, the traditional method cannot handle such cases, whereas the method in this paper still works.
Lemma 4.2
Let the assumptions in Theorem 4.1 hold and suppose that \(F_0(A(R^n))\) has the domination property. Then, we have
Proof
From the method (VPM), we know that if the sequence stops at some iteration, then \(x_k\) is a constant vector thereafter. Now we assume that the sequence \(\{x_k\}\) does not terminate finitely. Define \(E\subset R^n\) as follows
By the assumption that \(F_0(A(R^n))\) has the domination property, it follows from the Daniell property of \(R^m_+\) that \(E\) is nonempty. Since \(x_{k+1}\) is a weak Pareto optimal solution of problem (4.1), there exists a \(\lambda _k\in C_1\) such that \(x_{k+1}\) is a solution of the following problem \((MOP_{\lambda _k})\):
where \(T_k(x)=\langle F_0(Ax), \lambda _k\rangle +\frac{\varepsilon _k}{2} \Vert x-x_k \Vert ^2\langle e_k, \lambda _k\rangle +\delta _{\theta _k}(x)\) and \(\delta _{\theta _k}(x)\) is the scalar-valued indicator function. Thus \(x_{k+1}\) satisfies the first-order necessary optimality condition of problem \((MOP_{\lambda _k})\). It follows from Theorem 3.23 of [20] that there exist \(\mu _k\in \partial \langle F_0(\cdot ), \lambda _k\rangle (Ax_{k+1})\) and \(\nu _k\in \partial \delta _{\theta _k}(x_{k+1})\) such that
Denote \(\alpha _k={\varepsilon _k}\langle e_k, \lambda _k\rangle \); obviously \(\alpha _k>0\) for all \(k\in N\). By the fact that \(\langle \nu _k, x-x_{k+1}\rangle \le 0\) for any \(x\in \theta _k\), we have that
Let \(x^*\in E\). It is obvious that \(x^*\in \theta _k\) for all \(k\in N\) and we deduce that
By the definition of subgradient of \(\langle F_0(Ax_{k+1}), \lambda _k\rangle \), we have that
From the fact that \(x^*\in \theta _k\) for all \(k\in N\), we have \(\langle F_0(Ax^*)-F_0(Ax_{k+1}), \lambda _k\rangle \le 0\). It follows that
and
Combining inequality (4.13) with the fact that \(\alpha _k>0\), we obtain that
From the statements of Theorem 4.1, the sequence \(\{\Vert x_k-x^*\Vert ^2\}\) is bounded; furthermore, by the inequality (4.14), \(\{\Vert x_k-x^*\Vert ^2\}\) is a nonnegative and nonincreasing sequence, and hence convergent. We conclude that
The proof is complete.\(\square \)
Theorem 4.2
Let the assumptions in Theorem 4.1 and Lemma 4.2 hold. Then any cluster point of \(\{x_k\}\) belongs to \(\bar{X}\).
Proof
If there exists \(k_0 \ge 1\) such that \(x_{k_0+p}=x_{k_0}\) for all \(p\ge 1\), then it is obvious that \(x_{k_0}\) is a cluster point of \(\{ x_k\}\) and it is also a weak Pareto optimal solution of problem (MOP). Now suppose that the algorithm does not terminate finitely. Then, by Theorem 4.1, \(\{x_k\}\) is bounded and has some cluster points. Next we show that all cluster points are weak Pareto optimal solutions of problem (MOP). Let \(\hat{x}\) be a cluster point of \(\{x_k\}\) and \(\{x_{k_j}\}\) be a subsequence of \(\{x_k\}\) converging to \(\hat{x}\). Let \(\lambda \in C_1\). We define a function \(\psi _\lambda : S\rightarrow R\cup \{+\infty \}\) by \(\psi _\lambda (x)= \langle F_0(Ax), \lambda \rangle \). Since \(F_0\) is C-lsc and \(C\)-convex, \(\psi _\lambda \) is also lsc and convex; it follows that \(\psi _\lambda (\hat{x})\le \liminf \limits _{j\rightarrow +\infty } \psi _\lambda (x_{k_j})\). By the fact that \(x_{k+1}\in \theta _k\), we can see that \(F_0(Ax_{k+1})\le _C F_0(Ax_k)\) for all \(k\in N\). Thus, \(\psi _\lambda (x_{k+1})\le \psi _\lambda (x_{k})\). Therefore,
Hence, we have that \(\psi _\lambda (\hat{x})\le \psi _\lambda (x_{k})\), which implies \(F_0(A\hat{x})\le _C F_0(Ax_k)\). Assume that \(\hat{x}\) is not a weak Pareto optimal solution of problem (MOP); then there exists \(x^*\in R^n\) such that \(F_0(Ax^*)\le _{intC}F_0(A\hat{x})\). Taking \(\lambda _{k_j}\in R^m_+\) as a subsequence of \(\{\lambda _k\}\), there obviously exists \(\bar{\lambda }\in R^m_+\) such that \(\bar{\lambda }\) is a cluster point of \(\{\lambda _{k_j}\}\). Without loss of generality, we assume that
Thus we have that
Let \(T_k(x)\) be the function defined in the proof of Theorem 4.1. There exist some \(\xi _{k_j}\in \partial \psi _{k_j}(x_{k_{j}+1})\) and \(\rho _{k_j}\in \partial T_{k_j}(x_{k_j+1})\) such that
It follows that
From the definition of the method (VPM), we have
That is
Similarly denote by \(\alpha _{k_j}=\varepsilon _{k_j}\langle e_{k_j}, \lambda _{k_j}\rangle \), obviously \(\alpha _{k_j}>0\). From the inequality (4.18), we deduce that
Note that \(\{x_k\}\) is bounded, so that \(\{\Vert x^*-x_{k_{j}+1}\Vert \}\) is also bounded. By virtue of Lemma 4.2, we have \(\lim \limits _{j\rightarrow +\infty }\Vert x_{k_{j}+1}-x_{k_j}\Vert =0\). We conclude that the rightmost expression in (4.19) vanishes as \(j\rightarrow +\infty \). Thus, taking the limit in (4.16) we obtain
where \(\bar{\lambda }\) is the cluster point of \(\{\lambda _{k_j}\}\). Then (4.16) contradicts the facts that \(\bar{\lambda }\in C_1\) and the assumption \(F_0(A x^*)\le _{intC} F_0(A\hat{x})\); thus \(\hat{x}\) is a weak Pareto optimal solution of problem (MOP). The proof is complete. \(\square \)
Theorem 4.3
Assume the same assumptions as in Theorem 4.2. Then the whole sequence \(\{x_k\}\) converges to a weak Pareto optimal solution of problem (MOP).
Proof
Suppose to the contrary that \(\hat{x}\) and \(\tilde{x}\) are two distinct cluster points of \(\{x_k\}\) and
From Theorem 4.2, we know that \(\hat{x}\) and \(\tilde{x}\) are also weak Pareto optimal solutions of problem (CMOP), and \(\Vert \tilde{x}-x_k\Vert \) and \(\Vert \hat{x}-x_k\Vert \) are convergent. So there exist \(\tilde{\alpha }, \hat{\alpha }\in R\) such that
Since
and
The left-hand side of (4.23) vanishes since \(\hat{x}\) is a cluster point of \(\{x_k\}\); it follows that
Repeating the argument with the roles of \(\hat{x}\) and \(\tilde{x}\) exchanged, we have that
Combining (4.24) with (4.25), it is obvious that
which contradicts the assumption that \(\hat{x}\ne \tilde{x}\). We conclude that \(\{x_k\}\) converges to a weak Pareto optimal solution of the problem (CMOP), and the proof is complete. \(\square \)
5 Conclusion
In this paper, we defined the asymptotic function of a vector-valued function, obtained the analytical expression of the asymptotic function for a class of cone-convex vector-valued functions, and characterized the nonemptiness and compactness of the weak Pareto optimal solution set of a composite multiobjective optimization problem. We then applied the obtained results to construct a proximal-type method for solving the composite multiobjective optimization problem. Under some conditions, we proved that any sequence generated by this method converges to a weak Pareto optimal solution of the problem.
References
Auslender, A., Teboulle, M.: Asymptotic Cones and Functions in Optimization and Variational Inequalities. Springer, Berlin (2003)
Bonnel, H., Iusem, A.N., Svaiter, B.F.: Proximal methods in vector optimization. SIAM J. Optim. 15(4), 953–970 (2005)
Bot, R., Vargyas, E., Wanka, G.: Conjugate duality for multiobjective composed optimization problems. Acta Mathematica Hungarica 116, 177–196 (2009)
Burke, J.V., Ferris, M.C.: A Gauss–Newton method for convex composite optimization. Math. Program. 71, 179–194 (1995)
Chen, G.Y., Huang, X.X., Yang, X.Q.: Vector Optimization: Set-Valued and Variational Analysis, Lecture Notes in Economics and Mathematical Systems, 541. Springer, Berlin (2005)
Chen, Z., Huang, X.X., Yang, X.Q.: Generalized proximal point algorithms for multiobjective optimization problems. Appl. Anal. 90, 935–949 (2011)
Chen, Z.: Generalized viscosity approximation methods in multiobjective optimization problems. Comput. Optim. Appl. 49, 179–192 (2011)
Deng, S.: Characterizations of nonemptiness and compactness of solution sets in convex optimization. J. Optim. Theory Appl. 96, 123–131 (1998)
Deng, S.: Characterizations of the nonemptiness and boundedness of weakly efficient solution sets of convex vector optimization problems in real reflexive Banach spaces. J. Optim. Theory Appl. 140, 1–7 (2009)
Fletcher, R.: Practical Methods of Optimization. Wiley, New York (1987)
Flores-Bazan, F.: Ideal, weakly efficient solutions for vector problems. Math. Program. 93, 453–475 (2002)
Huang, X.X., Yang, X.Q.: Characterizations of nonemptiness and compactness of the set of weakly efficient solutions for convex vector optimization and applications. J. Math. Anal. Appl. 264, 270–287 (2001)
Jahn, J.: Scalarization in multi-objective optimization. Math. Program. 29, 203–219 (1984)
Jahn, J.: Vector Optimization: Theory, Applications, and Extensions. Springer, Berlin (2004)
Jeyakumar, V.: Composite nonsmooth programming with Gateaux differentiability. SIAM J. Optim. 1, 31–40 (1991)
Jeyakumar, V., Yang, X.Q.: Convex composite multi-objective nonsmooth programming. Math. Program. 59, 325–343 (1993)
Jeyakumar, V., Yang, X.Q.: Convex composite minimization with C1,1 functions. J. Optim. Theory Appl. 86, 631–648 (1995)
Jeyakumar, V., Luc, D.T., Tinh, P.N.: Convex composite non-Lipschitz programming. Math. Program. 92, 177–195 (2002)
Luc, D.T.: Theory of Vector Optimization, Lecture Notes in Economics and Mathematical Systems, 319. Springer, Berlin (1989)
Phelps, R.R.: Convex Functions, Monotone Operators and Differentiability, Lecture Notes in Mathematics, 1364. Springer, Berlin (1988)
Reddy, L.V., Mukherjee, R.N.: Composite nonsmooth multiobjective programs with V-\(\rho \)-invexity. J. Math. Anal. Appl. 235, 567–577 (1999)
Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1970)
Rockafellar, R.T., Wets, R.J.-B.: Variational Analysis. Springer, Berlin (1998)
Sawaragi, Y., Nakayama, H., Tanino, T.: The Theory of Multiobjective Optimization. Academic Press, New York (1985)
Womersley, R.S.: Local properties of algorithms for minimizing nonsmooth composite functions. Math. Program. 32, 69–89 (1985)
Yang, X.Q., Jeyakumar, V.: First and second-order optimality conditions for convex composite multi-objective optimization. J. Optim. Theory Appl. 95, 209–224 (1997)
Yang, X.Q.: Second-order global optimality conditions for convex composite optimization. Math. Program. 81, 327–347 (1998)
Acknowledgments
The author thanks the two anonymous referees for carefully reading the original submission and supplying many helpful suggestions, which greatly improved the paper. The author thanks Prof. X. Q. Yang, The Hong Kong Polytechnic University, for his teaching, and Prof. J. Chen, Tsinghua University, for some useful suggestions on the revised versions. This work is supported by the National Science Foundation of China (11001289), the Key Project of Chinese Ministry of Education (211151), the National Science Foundation of Chongqing Science and Technology Commission (CSTC, 2011BB0118) and a Research Grant of the Education Committee of Chongqing (KJ110633).
Chen, Z. Asymptotic analysis in convex composite multiobjective optimization problems. J Glob Optim 55, 507–520 (2013). https://doi.org/10.1007/s10898-012-0032-z
Keywords
- Convex composite multiobjective optimization
- Asymptotic analysis
- Proximal-type method
- Nonemptiness and compactness
- Weak Pareto optimal solution