Abstract
We introduce a (possibly infinite) collection of mutually dual nonconvex optimization problems, which share a common optimal value, and give a characterization of their global optimal solutions. As immediate consequences of our general multiduality principle, we obtain the Toland–Singer duality theorem as well as an analogous result involving generalized perspective functions. Based on our duality theory, we propose an extension of an existing algorithm for the minimization of d.c. functions, which exploits Toland–Singer duality, to a more general class of nonconvex optimization problems.
1 Introduction
One of the main tools in optimization is duality theory, which associates to a given (primal) problem (\(\mathcal {P}\)) another (dual) problem (\(\mathcal {D}\)), in such a way that the relation between the two problems provides useful information about (\(\mathcal {P}\)). In the case of convex optimization problems, under suitable assumptions, the optimal values of (\(\mathcal {P}\)) and (\(\mathcal {D}\)) coincide and the primal optimal solutions can be recovered from the dual optimal solutions; this is particularly useful when (\(\mathcal {D}\)) happens to be easier to solve than (\(\mathcal {P}\)). The essential tools of convex duality theory are convex conjugation and the notion of subgradient; we refer to the classical book by Rockafellar [1] for a detailed treatment of classical convex duality. In the last decades, there has also been very active research in nonconvex programming, motivated by the fact that many real-life optimization problems are nonconvex. In this paper, we propose a generalization of one of the most classical nonconvex duality theories, namely, Toland–Singer duality for d.c. (difference of convex) functions [2,3,4], in the form of a multiduality principle involving a (possibly infinite) collection of mutually dual problems. Our simple multiduality result does not require any convexity assumption, and one of its consequences is a new duality theorem involving just two mutually dual problems, expressed in terms of classical Fenchel conjugates. This theorem generalizes the well-known Toland–Singer duality theorem, and we use it to characterize (approximate) global optimal solutions, thus generalizing a well-known necessary and sufficient global optimality condition due to Hiriart-Urruty [5]. Another application of the new theorem yields a duality result involving the generalized perspective functions introduced by Maréchal in [6].
There are some numerical algorithms based on Toland–Singer duality, such as the DC algorithm [7] and branch-and-bound/cutting-plane type algorithms [8, 9]. Following the approach of Tao and El Bernoussi [7], we propose an algorithm for finding local optimal solutions of nonconvex problems of the form considered in our duality theorem.
The paper consists of three more sections. In Sect. 2, we state the new multiduality principle, which is naturally formulated in the framework of generalized conjugation theory. Section 3 considers the special case when the collection of mutually dual problems consists of just two problems; for simplicity, this section is presented in the setting of classical convex conjugation. In Sect. 4, we propose an extension of the DC algorithm to the broader class of nonconvex problems considered in Sect. 3.
2 Multiduality
Let \(\left( X_{i}\right) _{i\in I}\) be a family of nonempty sets, indexed by \(I\ne \emptyset \). We denote \(X:=\prod _{i\in I}X_{i}\) and \(X_{-i}:=\prod _{j\in I{\setminus } \{i\}}X_{j}\) for \(i\in I.\) For \(y\in X_{-i}\) and \(z\in X_{i},\) we define \(x_{(-i,y),(i,z)}\in X\) by
$$\begin{aligned} \left( x_{(-i,y),(i,z)}\right) _{j}:=y_{j}\ \text { for }j\in I{\setminus }\{i\},\qquad \left( x_{(-i,y),(i,z)}\right) _{i}:=z. \end{aligned}$$
Given a function \(c:X\rightarrow \overline{\mathbb {R}}\), for each \(i\in I\) we consider the coupling function \(c_{i}:X_{-i}\times X_{i}\rightarrow \overline{\mathbb {R}}\) defined by
$$\begin{aligned} c_{i}(y,z):=c\left( x_{(-i,y),(i,z)}\right) . \end{aligned}$$
Notice that, for \(x\in X,\) one has \(c_{i}(x_{-i},x_{i})=c(x)\); here and throughout we denote by \(x_{-i}\) the projection of \(x\in X\) onto \(X_{-i}\), that is,
$$\begin{aligned} x_{-i}:=\left( x_{j}\right) _{j\in I{\setminus }\{i\}}. \end{aligned}$$
Following the generalized conjugation scheme of [10], we define a new coupling function \(c_{i}^{\prime }:X_{i}\times X_{-i}\rightarrow \overline{\mathbb {R}}\) by
$$\begin{aligned} c_{i}^{\prime }(z,y):=c_{i}(y,z), \end{aligned}$$
and consider the conjugation operators associated with \(c_{i}\) and \(c_{i}^{\prime },\) namely the \(c_{i}\)-conjugate of \(g:X_{-i}\rightarrow \mathbb {\overline{R}}\) is the function \(g^{c_{i}}:X_{i}\rightarrow \mathbb {\overline{R}}\) defined by
$$\begin{aligned} g^{c_{i}}(z):=\sup _{y\in X_{-i}}\left\{ c_{i}(y,z)-g(y)\right\} , \end{aligned}$$
and, analogously, the \(c_{i}^{\prime }\)-conjugate of \(h:X_{i}\rightarrow \mathbb {\overline{R}}\) is \(h^{c_{i}^{\prime }}:X_{-i}\rightarrow \mathbb {\overline{R}},\) defined by
$$\begin{aligned} h^{c_{i}^{\prime }}(y):=\sup _{z\in X_{i}}\left\{ c_{i}^{\prime }(z,y)-h(z)\right\} ; \end{aligned}$$
here and in the sequel we adopt the conventions
$$\begin{aligned} (+\infty )-(+\infty )=(-\infty )-(-\infty ):=-\infty . \end{aligned}$$
For \(h:X_{i}\rightarrow \mathbb {\overline{R}},\) \(\overline{z}\in h^{-1}\left( {\mathbb {R}}\right) \) and \(\epsilon \ge 0,\) we set
$$\begin{aligned} \partial _{\epsilon }h(\overline{z}):=\left\{ y\in X_{-i}:h(\overline{z})+h^{c_{i}^{\prime }}(y)\le c_{i}^{\prime }(\overline{z},y)+\epsilon \right\} . \end{aligned}$$
One can easily check that
Given \(f:X\rightarrow \overline{\mathbb {R}}\), for \(i\in I\) and \({z}\in X_{i}\) we define \(f_{i,z}:X_{-i}\rightarrow \overline{\mathbb {R}}\) by
$$\begin{aligned} f_{i,z}(y):=f\left( x_{(-i,y),(i,z)}\right) . \end{aligned}$$
Notice that, for \(x\in X,\) one has \(f_{i,x_{i}}(x_{-i})=f(x)\).
Theorem 2.1
Let \(f:X\rightarrow \overline{\mathbb {R}}.\) For every \(i\in I\) one has
$$\begin{aligned} \sup _{z\in X_{i}}f_{i,z}^{c_{i}}(z)=\sup _{x\in X}\left\{ c(x)-f(x)\right\} . \end{aligned}$$
Hence, the optimal value of problem
$$\begin{aligned} (\mathcal {P}_{i})\qquad \text {maximize } f_{i,z}^{c_{i}}(z),\quad z\in X_{i}, \end{aligned}$$
does not depend on i.
Proof
We have
$$\begin{aligned} \sup _{z\in X_{i}}f_{i,z}^{c_{i}}(z)=\sup _{z\in X_{i}}\ \sup _{y\in X_{-i}}\left\{ c_{i}(y,z)-f_{i,z}(y)\right\} =\sup _{x\in X}\left\{ c(x)-f(x)\right\} . \end{aligned}$$
\(\square \)
Corollary 2.1
Let \(G\subseteq X\) and, for \(i\in I\), denote by \(C_{i}:X_{i}\rightrightarrows X_{-i}\) the set-valued mapping defined by
$$\begin{aligned} C_{i}(z):=\left\{ y\in X_{-i}:x_{(-i,y),(i,z)}\in G\right\} . \end{aligned}$$
Then
$$\begin{aligned} \sup _{z\in X_{i}}\ \sup _{y\in C_{i}(z)}c_{i}(y,z)=\sup _{x\in G}c(x)\qquad \text {for every }i\in I. \end{aligned}$$
Proof
Apply Theorem 2.1 with \(f:=\delta _{G}\). \(\square \)
Corollary 2.2
Let \(G_{i}\subseteq X_{i}\) for every \(i\in I.\) Then
$$\begin{aligned} \sup _{z\in G_{i}}\ \sup _{y\in \prod _{j\in I{\setminus }\{i\}}G_{j}}c_{i}(y,z)=\sup _{x\in \prod _{j\in I}G_{j}}c(x)\qquad \text {for every }i\in I. \end{aligned}$$
Proof
Apply Corollary 2.1 with \(G:=\prod _{i\in I}G_{i}.\) \(\square \)
Corollary 2.3
Let \(f_{i}:X_{i}\rightarrow \mathbb {R}\cup \{+\infty \}\) \((i=1,\ldots ,n)\), and define
Then, for every \(i\in I,\) one has
Theorem 2.2
Assume that the optimal value of \((\mathcal {P}_{i})\) is finite, and let \(\epsilon ,\eta \ge 0.\) If \(\bar{z}\) is an \(\epsilon \)-optimal solution to \((\mathcal {P}_{i})\) such that \(f_{i,\bar{z} }^{c_{i}c_{i}^{\prime }}=f_{i,\bar{z}}\), then, for every \(\bar{y}\in \partial _{\eta }f_{i,\bar{z}}^{c_{i}}(\bar{z})\) and \(j\in I{\setminus } \{i\},\) the point \(\bar{y}_{j}\) is an \((\epsilon +\eta )\)-optimal solution to \( \left( \mathcal {P}_{j}\right) \).
Proof
We have
the last equality being an immediate consequence of Theorem 2.1. \(\square \)
3 Nonconvex Duality
3.1 A Generalization of Toland–Singer Duality
Let \((X_{1},X_{2},\langle \cdot ,\cdot \rangle )\) be a dual pair of locally convex spaces and \(f:X_{1}\times X_{2}\rightarrow \mathbb {\overline{R}}\). In this section, we will apply the general results of Sect. 2 to the special case when \(I:=\left\{ 1,2\right\} \) and c is the duality pairing \(\langle \cdot ,\cdot \rangle \).
We recall that the Fenchel conjugates of \(g:X_{2}\rightarrow \mathbb {\overline{R}}\) and \(h:X_{1}\rightarrow \mathbb {\overline{R}}\) are the functions \(g^{*}:X_{1}\rightarrow \mathbb {\overline{R}}\) and \(h^{*}:X_{2}\rightarrow \mathbb {\overline{R}},\) respectively, defined by
$$\begin{aligned} g^{*}(z):=\sup _{y\in X_{2}}\left\{ \langle z,y\rangle -g(y)\right\} ,\qquad h^{*}(y):=\sup _{z\in X_{1}}\left\{ \langle z,y\rangle -h(z)\right\} . \end{aligned}$$
It is easy to see that, for the coupling function c considered in this section, one has \(g^{c_{1}}=g^{*}\) and \(h^{c_{2}}=h^{*}.\) Thus, Theorem 2.1 and Corollaries 2.1 and 2.2 yield:
Corollary 3.1
Let \(f:X_{1}\times X_{2}\rightarrow \mathbb {\overline{R}}.\) Then
$$\begin{aligned} \sup _{z\in X_{1}}f(z,\cdot )^{*}(z)=\sup _{y\in X_{2}}f(\cdot ,y)^{*}(y)=\sup _{(z,y)\in X_{1}\times X_{2}}\left\{ \langle z,y\rangle -f(z,y)\right\} . \end{aligned}$$
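As a quick sanity check (our own toy instance, not from the original text), take \(X_{1}=X_{2}=\mathbb {R}\) and \(f(z,y):=\frac{1}{2}z^{2}+\frac{1}{2}y^{2}\). Then
$$\begin{aligned} f(z,\cdot )^{*}(z)=\sup _{y}\left\{ zy-\tfrac{1}{2}z^{2}-\tfrac{1}{2}y^{2}\right\} =\tfrac{1}{2}z^{2}-\tfrac{1}{2}z^{2}=0, \end{aligned}$$
and, symmetrically, \(f(\cdot ,y)^{*}(y)=0\), while \(\sup _{(z,y)}\left\{ zy-f(z,y)\right\} =\sup _{(z,y)}\left\{ -\tfrac{1}{2}(z-y)^{2}\right\} =0\); the three suprema indeed coincide, here at the value 0.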
Corollary 3.2
Let \(T:X_1\rightrightarrows X_2.\) Then
$$\begin{aligned} \sup _{z\in X_{1}}\ \sup _{y\in T(z)}\langle z,y\rangle =\sup _{y\in X_{2}}\ \sup _{z\in T^{-1}(y)}\langle z,y\rangle , \end{aligned}$$
where \(T^{-1}:X_{2}\rightrightarrows X_{1}\) denotes the inverse mapping of T, that is, \(T^{-1}(y):=\left\{ z\in X_{1}:y\in T(z)\right\} .\)
Proof
Apply Corollary 2.1 with \(G:=\mathrm {Graph}\,T.\) \(\square \)
Corollary 3.3
Let \(Z\subseteq X_{1}\) and \(Y\subseteq X_{2}.\) Then
$$\begin{aligned} \sup _{z\in Z}\delta _{Y}^{*}(z)=\sup _{y\in Y}\delta _{Z}^{*}(y). \end{aligned}$$
Similarly, from Corollary 2.3 or, alternatively, Corollary 3.1 (by setting \(f\left( z,y\right) :=h\left( z\right) +g\left( y\right) \)), one obtains the following version of the classical Toland–Singer duality theorem [2, 4]:
Corollary 3.4
Let \(g:X_{2}\rightarrow \mathbb {\overline{R}}\) and \(h:X_{1}\rightarrow \mathbb {\overline{R}}\). Then one has
$$\begin{aligned} \sup _{z\in X_{1}}\left\{ g^{*}(z)-h(z)\right\} =\sup _{y\in X_{2}}\left\{ h^{*}(y)-g(y)\right\} . \end{aligned}$$
From the preceding corollary, by taking \(g:=k^{*}\), with \(k:X_{1}\rightarrow \mathbb {\overline{R}}\) such that \(k^{**}=k\), one immediately obtains the standard Toland–Singer formula.
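For the reader's convenience, we spell this step out (our own one-line computation): substituting \(g:=k^{*}\) into Corollary 3.4 and using \(g^{*}=k^{**}=k\) gives
$$\begin{aligned} \sup _{z\in X_{1}}\left\{ k(z)-h(z)\right\} =\sup _{y\in X_{2}}\left\{ h^{*}(y)-k^{*}(y)\right\} , \end{aligned}$$
which, after changing signs, is the standard Toland–Singer formula
$$\begin{aligned} \inf _{z\in X_{1}}\left\{ h(z)-k(z)\right\} =\inf _{y\in X_{2}}\left\{ k^{*}(y)-h^{*}(y)\right\} . \end{aligned}$$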
We recall that, for \(\eta \ge 0,\) the \(\eta \)-subdifferential of \(h:X_{1}\rightarrow \mathbb {\overline{R}}\) at \(\bar{z}\in h^{-1}\left( \mathbb {R}\right) \) is the set
$$\begin{aligned} \partial _{\eta }h(\bar{z}):=\left\{ y\in X_{2}:h(z)\ge h(\bar{z})+\langle z-\bar{z},y\rangle -\eta \ \text { for all }z\in X_{1}\right\} . \end{aligned}$$
From Theorem 2.2, one gets the following result on approximate optimal solutions of the pair of dual problems
$$\begin{aligned} (\mathcal {P})\qquad \text {maximize } f(z,\cdot )^{*}(z),\quad z\in X_{1}, \end{aligned}$$
and
$$\begin{aligned} (\mathcal {D})\qquad \text {maximize } f(\cdot ,y)^{*}(y),\quad y\in X_{2}, \end{aligned}$$
of Corollary 3.1. A related result, showing how to obtain optimal dual solutions from optimal primal solutions of nonconvex problems, can be found in [3, Theorem 2.4].
Corollary 3.5
Assume that the optimal value of \((\mathcal {P})\) is finite, and let \(\epsilon ,\eta \ge 0.\) If \(\bar{z}\) is an \(\epsilon \)-optimal solution to \((\mathcal {P})\) such that \(f\left( \overline{z},\cdot \right) \) is convex and l.s.c., then, for every \(\bar{y}\in \partial _{\eta }f\left( \overline{z},\cdot \right) ^{*}(\bar{z}),\) the point \(\bar{y}\) is an \((\epsilon +\eta )\)-optimal solution to \(\left( \mathcal {D}\right) \).
The following characterization of approximate global optimal solutions generalizes a well-known necessary and sufficient global optimality condition due to Hiriart-Urruty [5].
Theorem 3.1
Assume that the optimal value of \((\mathcal {P})\) is finite, and let \(\bar{z}\in X_{1}\) and \(\epsilon \ge 0.\) Then \(\bar{z}\) is an \(\epsilon \)-optimal solution to \((\mathcal {P})\) if and only if for every \(\eta \ge 0\) and every \(y\in f(\bar{z},\cdot )^{-1}\left( \mathbb {R}\right) \) such that \(\bar{z}\in \partial _{\eta }f(\bar{z},\cdot )({y})\) one has
$$\begin{aligned} f(\cdot ,y)^{*}(y)\le \left\langle \bar{z},y\right\rangle -f(\bar{z},y)+\epsilon +\eta . \end{aligned}$$(2)
Proof
Let \(\bar{z}\) be an \(\epsilon \)-optimal solution to \((\mathcal {P}),\) and \(\eta \ge 0\) and \(y\in f(\bar{z},\cdot )^{-1}\left( \mathbb {R}\right) \) be such that \(\bar{z}\in \partial _{\eta }f(\bar{z},\cdot )({y}).\) Using Corollary 3.1, we obtain
$$\begin{aligned} f(\cdot ,y)^{*}(y)\le \sup _{y^{\prime }\in X_{2}}f(\cdot ,y^{\prime })^{*}(y^{\prime })=\sup _{z\in X_{1}}f(z,\cdot )^{*}(z)\le f(\bar{z},\cdot )^{*}(\bar{z})+\epsilon \le \left\langle \bar{z},y\right\rangle -f(\bar{z},y)+\epsilon +\eta , \end{aligned}$$
which proves (2).
Conversely, assume that the condition stated after “if and only if” holds, let \(y\in f(\bar{z},\cdot )^{-1}\left( \mathbb {R}\right) ,\) and define \(\eta :=f(\bar{z},y)+f(\bar{z},\cdot )^{*}(\bar{z})-\left\langle \bar{z},y\right\rangle \). Then \(\eta \ge 0\) and \(\bar{z}\in \partial _{\eta }f(\bar{z},\cdot )({y}),\) and therefore (2) holds. Hence
$$\begin{aligned} f(\cdot ,y)^{*}(y)\le \left\langle \bar{z},y\right\rangle -f(\bar{z},y)+\epsilon +\eta =f(\bar{z},\cdot )^{*}(\bar{z})+\epsilon , \end{aligned}$$
which shows that \(\bar{z}\) is an \(\epsilon \)-optimal solution to \((\mathcal {P}).\) \(\square \)
To apply the duality theory presented in this section to a given optimization problem, one has to be able to recognize whether the objective function of the problem under consideration can be written in the form \(f\left( z,\cdot \right) ^{*}\left( z\right) \) for some function \(f:X_{1}\times X_{2}\rightarrow \mathbb {\overline{R}}\). The following theorem provides a useful criterion for this purpose.
Theorem 3.2
For every \(f:X_{1}\times X_{2}\rightarrow \mathbb {\overline{R}},\) the function \(\varphi :X_{1}^{2}\rightarrow \overline{\mathbb {R}}\) defined by \( \varphi (z,z^{\prime }):=f\left( z,\cdot \right) ^{*}\left( z^{\prime }\right) \) is convex, proper (accepting as proper the constant functions \( +\infty \) and \(-\infty \)) and l.s.c. in its second argument. Conversely, for every function \(\varphi :X_{1}^{2}\rightarrow \overline{\mathbb {R}}\) with these properties there exists \(f:X_{1}\times X_{2}\rightarrow \mathbb { \overline{R}}\) such that \(f\left( z,\cdot \right) ^{*}\left( z\right) =\varphi (z,z)\) for every \(z\in X_{1}\).
Proof
The first part of the statement is an immediate consequence of well-known properties of conjugate functions. To prove the converse, define \(f:X_{1}\times X_{2}\rightarrow \overline{\mathbb {R}}\) by \(f\left( z,y\right) :=\varphi \left( z,\cdot \right) ^{*}\left( y\right) ;\) then, since \(\varphi \left( z,\cdot \right) \) is convex, proper and l.s.c., from the equality \(f\left( z,\cdot \right) =\varphi \left( z,\cdot \right) ^{*}\) it follows that \(\varphi \left( z,\cdot \right) =f\left( z,\cdot \right) ^{*},\) and therefore \(\varphi \left( z,z\right) =f\left( z,\cdot \right) ^{*}\left( z\right) \). \(\square \)
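As a simple illustration of the criterion (our own example, consistent with the d.c. setting of Corollary 3.4): take \(f(z,y):=h(z)+g(y)\) with h finite-valued and g proper. Then
$$\begin{aligned} \varphi (z,z^{\prime })=f(z,\cdot )^{*}(z^{\prime })=g^{*}(z^{\prime })-h(z), \end{aligned}$$
which is indeed convex, proper and l.s.c. in its second argument, as \(g^{*}\) always is.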
According to the preceding theorem, the class of problems
to which our duality theory applies coincides with the class of problems of the form
the function \(\varphi :X_{1}^{2}\rightarrow \overline{\mathbb {R}}\) being as in the statement. Since, in view of the proof, the objective functions of these two problems are linked by the relation \(f\left( z,\cdot \right) =\varphi \left( z,\cdot \right) ^{*},\) a straightforward computation yields the dual objective function in terms of \(\varphi :\)
Hence, if the primal problem is stated as (3), the formulation of the dual problem is
To illustrate that solving the dual problem may be advantageous, we present the following academic example, in which the primal problem consists in maximizing a nonconcave function, whereas the dual objective function is concave.
Example 3.1
Let us consider the primal problem
where \(\alpha \) is a nonconcave function defined on a normed space \(X_{1}\). Let us define \(\varphi :X_{1}^{2}\rightarrow \overline{\mathbb {R}}\) by
here B is the closed unit ball in \(X_{1}\) and \(N>0.\) Clearly, \(\varphi (z,z)=\alpha (z),\) and therefore \(\left( \mathcal {P}\right) \) can be rewritten as (3). Since
according to (4) we get the following expression for the dual objective function:
which, in view of [11, p. 200], shows that it is the smallest N-Lipschitz majorant of \(\alpha \). This N-Lipschitz envelope is concave in some cases; for instance, let us consider the case when \(X_{1}:=\mathbb {R},\)
and \(N:=10\). Then one can easily prove that
with r being the smallest positive real number satisfying \(\alpha ^{\prime }(-r)=10\).
Figure 1 depicts the graphs of both \(\alpha \) and \(y\mapsto f\left( \cdot ,y\right) ^{*}\left( y\right) \) and shows that the latter function is concave.
3.2 Example: Generalized Perspective Functions
Let \(g:X_{2}\rightarrow \left] 0,+\infty \right[ \) and \(h:X_{1}\rightarrow \left] 0,+\infty \right[ ,\) and define \(f:X_{1}\times X_{2}\rightarrow \left] 0,+\infty \right[ \) by \(f(z,y):=h(z)g(y)\). For this particular function f, the objective functions of problems \((\mathcal {P})\) and \((\mathcal {D})\) can be expressed by means of the operation \(\Delta \) introduced in [6], which constitutes a generalization of the so called perspective function of Convex Analysis. Indeed, for \(z\in X_{1}\) one has \(f(z,\cdot )=h(z)g,\) and hence
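By the standard rule for conjugates of positively scaled functions, \((\lambda g)^{*}=\lambda \,g^{*}(\cdot /\lambda )\) for \(\lambda >0\) (our own completion of this computation; this perspective-type expression is presumably what the \(\Delta \) operation of [6] encodes), one obtains
$$\begin{aligned} f(z,\cdot )^{*}(z^{\prime })=\left( h(z)\,g\right) ^{*}(z^{\prime })=h(z)\,g^{*}\!\left( \frac{z^{\prime }}{h(z)}\right) , \end{aligned}$$
so the primal objective is \(f(z,\cdot )^{*}(z)=h(z)\,g^{*}(z/h(z))\).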
Similarly, for \(y\in X_{2}\) one has
Thus, Theorem 3.1 yields:
Corollary 3.6
Let \(g:X_{2}\rightarrow \left] 0,+\infty \right[ \) and \(h:X_{1}\rightarrow ] 0,+\infty [\). Then
4 An Algorithm
We retain the setting of the preceding section and assume, in addition, that \(X_{1}\) and \(X_{2}\) are the Euclidean space \(\mathbb {R}^{n}\) and \(\langle \cdot ,\cdot \rangle \) is the standard Euclidean product. Let \(f:\mathbb {R}^{n}\times \mathbb {R}^{n}\rightarrow \mathbb {\overline{R}},\) and consider the sets
$$\begin{aligned} \mathcal {P}_{l}:=\left\{ \bar{z}\in \mathbb {R}^{n}:y\in \partial f(\cdot ,y)(\bar{z})\text { for every }y\text { such that }\bar{z}\in \partial f(\bar{z},\cdot )(y)\right\} \end{aligned}$$
and
$$\begin{aligned} \mathcal {D}_{l}:=\left\{ \bar{y}\in \mathbb {R}^{n}:z\in \partial f(z,\cdot )(\bar{y})\text { for every }z\text { such that }\bar{y}\in \partial f(\cdot ,\bar{y})(z)\right\} . \end{aligned}$$
According to Theorem 3.1, the set \(\mathcal {P}_{l}\) contains all the optimal solutions to \((\mathcal {P})\) and the set \(\mathcal {D }_{l}\) contains all the optimal solutions to \((\mathcal {D})\).
For \({\bar{z}}\in \mathbb {R}^{n}\) and \({\bar{y}}\in \mathbb {R}^{n},\) we consider the auxiliary problems
$$\begin{aligned} {S}{(\bar{z})}\qquad \text {maximize } f(\cdot ,y)^{*}(y),\quad y\in \partial f(\bar{z},\cdot )^{*}(\bar{z}), \end{aligned}$$
and
$$\begin{aligned} {T}{(\bar{y})}\qquad \text {maximize } f(z,\cdot )^{*}(z),\quad z\in \partial f(\cdot ,\bar{y})^{*}(\bar{y}). \end{aligned}$$
We will denote by \(\mathcal {O}_{S(\bar{z})}\) and \(\mathcal {O}_{T(\bar{y})}\) the sets of optimal solutions to problems \({S}{(\bar{z})}\) and \({T}{(\bar{y}) }\), respectively.
Proposition 4.1
(1) If \(\bar{z}\in \mathcal {P}_{l}\) and \(f(\bar{z},\cdot )\) is convex, proper and l.s.c., then the function \(y\mapsto f(\cdot ,y)^{*}(y)\) is constant on \(\partial f(\bar{z},\cdot )^{*}(\bar{z})\).
(2) If \(\bar{y}\in \mathcal {D}_{l}\) and \(f(\cdot ,\bar{y})\) is convex, proper and l.s.c., then the function \(z\mapsto f(z,\cdot )^{*}(z)\) is constant on \(\partial f(\cdot ,\bar{y})^{*}(\bar{y})\).
Proof
By symmetry, we only need to prove (1). Let \(\bar{z}\in \mathcal {P}_{l}.\) If \(y\in \partial f(\bar{z},\cdot )^{*}(\bar{z}),\) then
and, since \(y\in \partial f(\cdot ,y)(\bar{z})\) (as \(\bar{z}\in \mathcal {P} _{l}),\)
From (6) and (5) it immediately follows that
\(\square \)
Starting with an initial point \(z^{0}\in \mathbb {R}^{n},\) we construct two sequences \(z^{k}\) and \(y^{k}\) as follows:
$$\begin{aligned} y^{k}\in \mathcal {O}_{S(z^{k})},\qquad z^{k+1}\in \mathcal {O}_{T(y^{k})},\qquad k=0,1,2,\ldots \end{aligned}$$(8)
This algorithm requires the auxiliary problems occurring at each iteration to be solvable, and from now on we will assume that this is the case. The following proposition states sufficient conditions for the solvability of these problems. We recall that a function is called co-finite if its conjugate is finite-valued everywhere.
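To build intuition before turning to solvability, consider the d.c. special case \(f(z,y):=h(z)+g(y)\) (our own illustration, not part of the original text): then \(\partial f(\bar{z},\cdot )^{*}(\bar{z})=\partial g^{*}(\bar{z})\) and \(\partial f(\cdot ,\bar{y})^{*}(\bar{y})=\partial h^{*}(\bar{y})\), and the iteration reduces to the classical DC algorithm \(y^{k}\in \partial g^{*}(z^{k})\), \(z^{k+1}\in \partial h^{*}(y^{k})\). A minimal numerical sketch, with the hypothetical choices \(h(z):=z^{2}/2\) and \(g:=\delta _{[-1,1]}\), so that \(g^{*}(z)=|z|\), \(h^{*}(y)=y^{2}/2\), and the primal problem is to maximize \(|z|-z^{2}/2\):

```python
# DC-algorithm sketch for maximizing g*(z) - h(z) with
#   h(z) = z^2/2                 (so h*(y) = y^2/2, grad h*(y) = y)
#   g    = indicator of [-1, 1]  (so g*(z) = |z|)
# Iteration: y^k in dg*(z^k), z^{k+1} in dh*(y^k).

def subgrad_g_star(z):
    """A subgradient of g*(z) = |z| (arbitrary selection at z = 0)."""
    return 1.0 if z >= 0 else -1.0

def grad_h_star(y):
    """Gradient of h*(y) = y^2 / 2."""
    return y

def primal_value(z):
    """Primal objective g*(z) - h(z) = |z| - z^2/2."""
    return abs(z) - 0.5 * z * z

z = 0.3  # initial point z^0
for _ in range(20):
    y = subgrad_g_star(z)   # step S(z^k): pick y^k in dg*(z^k)
    z = grad_h_star(y)      # step T(y^k): pick z^{k+1} in dh*(y^k)

print(z, primal_value(z))   # prints: 1.0 0.5
```

Here the scheme reaches the global maximizer \(z=1\) (value 1/2) in one step; in general, Theorem 4.2 below only guarantees that cluster points belong to \(\mathcal {P}_{l}\) under the stated assumptions.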
Proposition 4.2
Let \({\bar{z}}\in \mathbb {R}^{n}\) and \({\bar{y}}\in \mathbb {R}^{n},\) and assume that f is continuous. If \(f(\bar{z},\cdot )\) is co-finite and the mapping \(y\mapsto \mathrm {dom}\,f(\cdot ,y)\) is compact-valued and continuous, then \(\mathcal {O}_{S(\bar{z})}\ne \emptyset \). Analogously, if \(f(\cdot ,\bar{y})\) is co-finite and the mapping \(z\mapsto \mathrm {dom}\,f(z,\cdot )\) is compact-valued and continuous, then \(\mathcal {O}_{T(\bar{y})}\ne \emptyset \).
Proof
The co-finiteness of \(f(\bar{z},\cdot )\) guarantees that the feasible set \(\partial f(\bar{z},\cdot )^{*}(\bar{z})\) of \({S}{(\bar{z})}\) is nonempty and compact. On the other hand, continuity of f together with compact-valuedness and continuity of the mapping \(y\mapsto \mathrm {dom}\,f(\cdot ,y)\) implies, by Berge’s maximum theorem, continuity of the objective function \(y\mapsto f(\cdot ,y)^{*}(y).\) It thus suffices to apply Weierstrass’ extreme value theorem to conclude that \(\mathcal {O}_{S(\bar{z})}\ne \emptyset \). The proof of the nonemptiness of \(\mathcal {O}_{T(\bar{y})}\) is the same, mutatis mutandis. \(\square \)
Theorem 4.1
Let \(f:\mathbb {R}^{n}\times \mathbb {R} ^{n}\rightarrow \mathbb {R}\cup \{+\infty \}\) and let the sequences \(z^{k}\) and \(y^{k}\) be constructed according to (8). Then
The first inequality holds with the equal sign if and only if \(z^{k+1}\in \partial f(z^{k+1},\cdot )(y^{k}),\) in which case, assuming that the function \(f(\cdot ,y^{k})\) is convex, proper and l.s.c., we have \(y^{k}\in \mathcal {D}_{l}\). The second inequality holds with the equal sign if and only if \(y^{k}\in \partial f(\cdot ,y^{k})(z^{k}),\) in which case, assuming that the function \(f(z^{k},\cdot )\) is convex, proper and l.s.c., we have \(z^{k}\in \mathcal {P}_{l}\).
Proof
Because of the Fenchel inequality and the relations \(z^{k+1}\in \partial f(\cdot ,y^{k})^{*}(y^{k})\) and \(y^{k}\in \partial f(z^{k},\cdot )^{*}(z^{k})\), we obtain
The “if and only if” assertions follow from the above chain of inequalities, combined with the well-known characterization of subgradients as those elements that satisfy the Fenchel inequality with the equal sign.
Let us assume that the first inequality holds with the equal sign, and let \( z\in \mathbb {R}^{n}\) be such that \(y^{k}\in \partial f(\cdot ,y^{k})(z)\). Since we have \(z^{k+1}\in \mathcal {O}_{T(y^{k})}\) and \(z^{k+1}\in \partial f(z^{k+1},\cdot )(y^{k}),\) using that \(f(\cdot ,y^{k})\) is convex, proper and l.s.c., we obtain
therefore \(f(z,\cdot )^{*}(z)=\left\langle z,y^{k}\right\rangle -f(z,y^{k}),\) that is, \(z\in \partial f(z,\cdot )(y^{k}).\) This shows that y \(^{k}\in \mathcal {D}_{l}\).
Let us assume that the second inequality holds with the equal sign, and let \( y\in \mathbb {R}^{n}\) be such that \(z^{k}\in \partial f(z^{k},\cdot )(y)\). Since we have \(y^{k}\in \mathcal {O}_{S(z^{k})}\) and \(y^{k}\in \partial f(\cdot ,y^{k})(z^{k}),\) using that \(f(z^{k},\cdot )\) is convex, proper and l.s.c. and the fact that \(z^{k}\in \partial f(z^{k},\cdot )^{**}(y),\) we obtain
therefore \(f(\cdot ,y)^{*}(y)=\left\langle z^{k},y\right\rangle -f(z^{k},y),\) that is, \(y\in \partial f(\cdot ,y)(z^{k}).\) This shows that \( z^{k}\in \mathcal {P}_{l}\). \(\square \)
Corollary 4.1
(1) Let f, \(z^{k}\) and \(y^{k}\) be as in Theorem 4.1. Then
$$\begin{aligned} \lim _{k\rightarrow \infty }f\left( z^{k},\cdot \right) ^{*}\left( z^{k}\right) =\lim _{k\rightarrow \infty }f\left( \cdot ,y^{k}\right) ^{*}\left( y^{k}\right) . \end{aligned}$$(10)
(2) If f is continuous, \(\overline{z}\) is a cluster point of the sequence \(z^{k},\) and the sequence \(y^{k}\) is bounded, then
$$\begin{aligned} f(\overline{z},\cdot )^{*}(\overline{z})=\lim _{k\rightarrow \infty }f\left( z^{k},\cdot \right) ^{*}\left( z^{k}\right) . \end{aligned}$$(11)
(3) If f is continuous, \(\overline{y}\) is a cluster point of the sequence \(y^{k},\) and the sequence \(z^{k}\) is bounded, then
$$\begin{aligned} f(\cdot ,\overline{y})^{*}(\overline{y})=\lim _{k\rightarrow \infty }f\left( \cdot ,y^{k}\right) ^{*}\left( y^{k}\right) . \end{aligned}$$
Proof
(1) The equality (10) is an immediate consequence of (9).
(2) If f is continuous, then the function \(z\mapsto f(z,\cdot )^{*}(z)\) is l.s.c. If \(\overline{z}\) is a cluster point of \(z^{k}\), then
for some subsequence \(z^{k_{j}}.\) Since the sequence \(y^{k}\) is bounded, we can suppose (extracting again subsequences if necessary) that the sequence \( y^{k_{j}}\) converges to a point \(\overline{y}\). Using the chain of inequalities at the beginning of the proof of Theorem 4.1, the continuity of f, and the lower semicontinuity of the function \(z\mapsto f(z,\cdot )^{*}(z),\) we obtain
which proves (11).
The proof of (3) is similar to that of (2). \(\square \)
Theorem 4.2
Let f be continuous.
(1) If f is convex and proper in its first argument, \(\overline{z}\) is a cluster point of \(z^{k}\), and the sequence \(y^{k}\) is bounded, then \(\overline{z}\in \mathcal {P}_{l}\).
(2) If f is convex and proper in its second argument, \(\overline{y}\) is a cluster point of \(y^{k}\), and the sequence \(z^{k}\) is bounded, then \(\overline{y}\in \mathcal {D}_{l}\).
Proof
(1) Let \(y\in \mathbb {R}^{n}\) be such that \(\overline{z}\in \partial f(\overline{z},\cdot )(y).\) Since \(\overline{z}\) is a cluster point of \(z^{k}\), we have
for some subsequence \(z^{k_{j}}.\) We can suppose (extracting subsequences if necessary) that the sequence \(y^{k_{j}-1}\) converges to a point \(\overline{y} \in \mathbb {R}^{n}\). Given that \(z^{k_{j}}\in \mathcal {O}_{T(y^{k_{j}-1})}\) and \(y^{k_{j}}\in \mathcal {O}_{S(z^{k_{j}})},\) we have \(z^{k_{j}}\in \partial f(\cdot ,y^{k_{j}-1})^{*}(y^{k_{j}-1})\) and \(y^{k_{j}}\in \partial f(z^{k_{j}},\cdot )^{*}(z^{k_{j}})\). Hence, using the properties of f and Corollary 4.1, we get
From (12) and Fenchel inequality, we obtain \({y}\in \partial f(\cdot ,{y})(\overline{z})\). This shows that \(\overline{z}\in \mathcal {P} _{l}\).
The proof of (2) is similar. \(\square \)
References
Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton (1970)
Toland, J.F.: A duality principle for non-convex optimisation and the calculus of variations. Arch. Ration. Mech. Anal. 71, 41–61 (1979)
Toland, J.F.: Duality in nonconvex optimization. J. Math. Anal. Appl. 66, 399–415 (1978)
Singer, I.: A Fenchel–Rockafellar type duality theorem for maximization. Bull. Aust. Math. Soc. 20, 193–198 (1979)
Hiriart-Urruty, J.-B.: From convex to nonconvex optimization. Necessary and sufficient conditions for global optimality. In: Clarke, F.H., et al. (eds.) Non-smooth Optimization and Related Topics, pp. 219–240. Plenum Press, New York (1989)
Maréchal, P.: On a functional operation generating convex functions. I. Duality. J. Optim. Theory Appl. 126(1), 175–189 (2005)
Tao, P.D., El Bernoussi, S.: Duality in D.C. (Difference of Convex Functions) Optimization. Subgradient Methods. International Series of Numerical Mathematics, pp. 277–293. Birkhäuser, Basel (1988)
Horst, R., Thoai, N.V.: DC programming: overview. J. Optim. Theory Appl. 103(1), 1–43 (1999)
Sun, W., Sampaio, R.J.-B., Candido, M.A.B.: Proximal point algorithm for minimization of DC function. J. Comput. Math. 21(4), 451–462 (2003)
Martínez-Legaz, J.E.: Generalized Convex Duality and Its Economic Applications. Handbook of Generalized Convexity and Generalized Monotonicity. Nonconvex Optimization and Its Applications, vol. 76, pp. 237–292. Springer, New York (2005)
Martínez-Legaz, J.E.: On Lower Subdifferentiable Functions. Trends in Mathematical Optimization, pp. 197–232. Birkhäuser, Basel (1988)
Acknowledgements
The research of the corresponding author was supported by the MINECO of Spain, Grant MTM2014-59179-C2-2-P, and the Severo Ochoa Programme for Centres of Excellence in R&D [SEV-2015-0563]. He is affiliated to MOVE (Markets, Organizations and Votes in Economics). We are very grateful to Elisabetta Allevi for her valuable help on the elaboration of Example 3.1. We also thank the Associate Editor and an anonymous reviewer for their careful reading of our manuscript, their corrections, and their helpful suggestions to improve the presentation.
Communicated by Radu Ioan Boţ.
Bonenti, F., Martínez-Legaz, J.E. & Riccardi, R. A General Nonconvex Multiduality Principle. J Optim Theory Appl 176, 527–540 (2018). https://doi.org/10.1007/s10957-018-1245-1