Abstract
A class of non-convex optimization problems with DC objective function is studied, where DC stands for being representable as the difference \(f=g-h\) of two convex functions g and h. In particular, we deal with the special case where one of the two convex functions g or h is polyhedral. In case g is polyhedral, we show that a solution of the DC program can be obtained from a solution of an associated polyhedral projection problem. In case h is polyhedral, we prove that a solution of the DC program can be obtained by solving a polyhedral projection problem and finitely many convex programs. Since polyhedral projection is equivalent to multiple objective linear programming (MOLP), a MOLP solver (in the second case together with a convex programming solver) can be used to solve instances of DC programs with polyhedral component. Numerical examples are provided, among them an application to locational analysis.
1 Introduction
A function \(f: \mathbb {R}^n \rightarrow \mathbb {R}\cup {\left\{ \infty \right\} }\) is said to be polyhedral convex if its epigraph \(\hbox {epi}\,f:={\left\{ (x,r)\in \mathbb {R}^n\times \mathbb {R}|\;r \ge f(x)\right\} }\) is a polyhedral convex set. We consider the following DC optimization problem
$$\begin{aligned} \min _{x \in \mathbb {R}^n}\; f(x) = g(x) - h(x), \end{aligned} \qquad \mathrm{(P)}$$
where we assume that \(g: \mathbb {R}^n \rightarrow \mathbb {R}\cup {\left\{ \infty \right\} }\) and \(h: \mathbb {R}^n \rightarrow \mathbb {R}\cup {\left\{ \infty \right\} }\) are convex and at least one of the functions g and h is polyhedral convex. DC optimization problems are investigated for instance in [1, 10, 16, 20, 23, 26, 32, 33].
The methods presented in this article are based on computing the vertices of \(\hbox {epi}\,g\) in case g is polyhedral and the vertices of \(\hbox {epi}\,h^*\) in case h is polyhedral, where \(h^*\) is the conjugate of h. It is shown that a solver for multiple objective linear programs (MOLP) can be used for this task. The aim of this paper is to show that global solutions of the considered class of DC programs can be obtained by utilizing a MOLP solver. The main advantage of this procedure is that the algorithms are very easy to implement. Nevertheless, the algorithms can be applied to certain non-convex problems in location theory, discussed below. We will show that they are competitive with some algorithms from the recent literature. It should be noted that the presented methods can be combined with other global optimization techniques in order to avoid computing all vertices of the epigraphs. But then a modification of the MOLP solver is required, which would make the implementation more difficult.
The computation of vertices of an epigraph is related to the problem of projecting a polyhedron into a subspace. A polyhedral convex set P in \(\mathbb {R}^n\) (or polyhedron for short) is defined as the solution set of a system of finitely many linear inequalities, that is,
$$\begin{aligned} P = {\left\{ x \in \mathbb {R}^n |\;B x \ge c\right\} } \end{aligned} \qquad (1)$$
for a matrix \(B \in \mathbb {R}^{m \times n}\) and a vector \(c \in \mathbb {R}^m\). The pair \((B,\; c)\) is said to be an H-representation of P. The well-known Weyl-Minkowski theorem states that every polyhedron can be expressed as the generalized convex hull of finitely many points \(v^i \in \mathbb {R}^n\); \(i=1,\dots , r\); \(r \ge 1\) and finitely many directions \(d^j \in \mathbb {R}^n\!\setminus \!\{0\}\); \(j=1,\dots ,s\); \(s \ge 0\), that is,
$$\begin{aligned} P = {\left\{ V\lambda + D\mu |\;\lambda \in \mathbb {R}^r_+,\; e^T\lambda = 1,\; \mu \in \mathbb {R}^s_+\right\} } \end{aligned} \qquad (2)$$
where \(v^i\) and \(d^j\) are the columns of \(V\in \mathbb {R}^{n\times r}\) and \(D \in \mathbb {R}^{n\times s}\), respectively. Further we denote by \(e:=(1,\dots ,1)^T\) the all-one vector. We say that (V, D) is a V-representation of the polyhedron P. Both the H-representation and the V-representation are special cases of the projection representation or P-representation (B, C, c), that is,
$$\begin{aligned} P = {\left\{ x \in \mathbb {R}^n |\;\exists u \in \mathbb {R}^k:\; Bx + Cu \ge c\right\} } \end{aligned} \qquad (3)$$
for matrices \(B \in \mathbb {R}^{m\times n}\), \(C\in \mathbb {R}^{m\times k}\) and a vector \(c\in \mathbb {R}^{m}\). In Sects. 2 and 7 we will see that in many situations only a P-representation of a polyhedron is known. An H-representation can be obtained from a P-representation by Fourier-Motzkin elimination, and a V-representation can be obtained from an H-representation by vertex enumeration, for instance, by the double description method of Motzkin et al. [34]. However, the first part of this procedure (Fourier-Motzkin elimination) is not tractable if k is large. It has been shown in [29] that both an H-representation and a V-representation can be obtained from a P-representation by solving a multiple objective linear program (MOLP). This follows from the equivalence between the polyhedral projection problem (roughly speaking, the problem of obtaining a V-representation from a P-representation (3)) and multiple objective linear programming, as shown in [29]. By equivalence the authors of [29] mean that a solution of one problem can be “easily” (for details see [29] or Sect. 6 below) obtained from a solution of the other. In order to compute a V-representation of (3), a multiple objective linear program with \(n+1\) objective functions needs to be solved.
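To make the step from an H-representation to a V-representation concrete, the following Python sketch enumerates vertices by brute force (our own illustration with toy data; the name `enumerate_vertices` and the unit-square instance are not from the paper, and practical implementations use the double description method [34] or the MOLP approach of [29] instead):

```python
import itertools
import numpy as np

def enumerate_vertices(B, c, tol=1e-9):
    """Brute-force vertex enumeration for P = {x : B x >= c}.

    Illustration only: it inspects every n-element subset of the m
    inequalities and is therefore exponential in m.
    """
    m, n = B.shape
    vertices = []
    for rows in itertools.combinations(range(m), n):
        Bs, cs = B[list(rows)], c[list(rows)]
        if abs(np.linalg.det(Bs)) < tol:
            continue  # the chosen inequalities are linearly dependent
        x = np.linalg.solve(Bs, cs)   # candidate: n active constraints
        if np.all(B @ x >= c - tol):  # keep it only if feasible for P
            if not any(np.allclose(x, v, atol=1e-7) for v in vertices):
                vertices.append(x)
    return vertices

# H-representation of the unit square {x : x >= 0, -x >= -e}:
B = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
c = np.array([0.0, 0.0, -1.0, -1.0])
print(len(enumerate_vertices(B, c)), "vertices found")
```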
Multiple objective linear programs can be solved by Benson’s algorithm [2]. Numerical examples with up to 10 objective functions are provided in [5] and [30]. A more direct way to solve the polyhedral projection problem is the convex hull method by Lassez and Lassez [25]. This method, however, is closely related to the dual variant of Benson’s algorithm for multiple objective linear programs [12], which was developed independently.
We close this section with some notation. We denote by \(v_i\) the components of a vector v and by \(M^i\) the rows of a matrix M. As mentioned above, we set \(e=(1,\dots ,1)^T\). Given a convex function \(g: \mathbb {R}^n \rightarrow \mathbb {R}\cup {\left\{ \infty \right\} }\) we denote by \(\hbox {dom}\,g:= {\left\{ x\in \mathbb {R}^n|\;g(x) \ne \infty \right\} }\) the domain of g, and by \(g^*:\mathbb {R}^n\rightarrow \mathbb {R}\cup {\left\{ \infty \right\} }\), \(g^*(x^*):=\sup _{x\in \mathrm{dom}~g} [{x^*}^Tx - g(x)]\) the conjugate of g. If \(\hbox {dom}\,g\ne \emptyset \) then the function g is called proper. The recession cone of a polyhedron (1) is the polyhedral convex cone \(0^+P := {\left\{ x \in \mathbb {R}^n |\;B x \ge 0\right\} }\). The closure, interior, boundary of a set A is denoted, respectively, by \(\hbox {cl}\,A\), \(\hbox {int}\,A\), \(\hbox {bd}\,A\).
2 Representing polyhedral convex functions
Polyhedral convex functions can be represented by their epigraphs. The following definition is based on a P-representation of the epigraph.
Definition 1
A matrix \(A \in \mathbb {R}^{m \times (n+1+k+1)}\) is called a representation of a polyhedral convex function \(f: \mathbb {R}^n \rightarrow \mathbb {R}\cup {\left\{ \infty \right\} }\) if
If the matrix A is partitioned as \(A=(B,b,C,c)\), where \(B \in \mathbb {R}^{m\times n}\), \(b \in \mathbb {R}^m\), \(C \in \mathbb {R}^{m \times k}\) and \(c \in \mathbb {R}^m\), we obtain
The following examples motivate the definition of a representation. We show that for many (classes of) polyhedral convex functions a representation can be obtained without any essential computational effort.
Example 2
Consider finitely many affine functions \(f_i(x)={D^i} x + d_i\) \((i=1,\dots ,m)\), a matrix \(P \in \mathbb {R}^{k\times n}\) and a vector \(p \in \mathbb {R}^k\). Then
is a polyhedral convex function with representation
where \(D^1,\ldots ,D^m\) are the rows of \(D\in \mathbb {R}^{m\times n}\) and \(d:=(d_1,\ldots ,d_m)\in \mathbb {R}^m\).
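The class of functions in Example 2 is easy to evaluate directly. The following small Python sketch (a toy instance of our own choosing; the name `f_max_affine` is ours) realizes f(x) = max(x, −x) = |x| on the polyhedral domain [−2, 2]:

```python
import numpy as np

# f(x) = max_i (D^i x + d_i) on the polyhedral domain {x : P x >= p};
# outside the domain f takes the value +inf.
def f_max_affine(x, D, d, P, p):
    if not np.all(P @ x >= p - 1e-12):
        return np.inf
    return float(np.max(D @ x + d))

# Toy instance: f(x) = max(x, -x) = |x| on [-2, 2].
D = np.array([[1.0], [-1.0]]); d = np.array([0.0, 0.0])
P = np.array([[1.0], [-1.0]]); p = np.array([-2.0, -2.0])
print(f_max_affine(np.array([1.5]), D, d, P, p))   # 1.5
print(f_max_affine(np.array([3.0]), D, d, P, p))   # inf
```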
Example 3
Let f be a polyhedral convex function and let (V, D) be a V-representation of \(\hbox {epi}\,f\), that is, \(V \in \mathbb {R}^{(n+1)\times r}\) with \(r \ge 1\) and \(D \in \mathbb {R}^{(n+1)\times s}\) where \(s \ge 0\),
Then
is a representation of f, where the symbol ± indicates that each equation is expressed by a pair of opposite inequalities.
Example 4
Let \(g,h: \mathbb {R}^n \rightarrow \mathbb {R}\cup {\left\{ \infty \right\} }\) be polyhedral convex functions with representations
The infimal convolution of g and h is the polyhedral convex function f with
Thus, its representation is \(A_f = \begin{pmatrix} B_f&b_f&C_f&c_f\end{pmatrix}\) with
Example 5
Let a representation \(A = \begin{pmatrix} B&b&C&c\end{pmatrix}\) of a polyhedral convex function \(f :\mathbb {R}^n \rightarrow \mathbb {R}\cup {\left\{ \infty \right\} }\) be given. Then
is a representation of the conjugate \(f^* :\mathbb {R}^n \rightarrow \mathbb {R}\cup {\left\{ \infty \right\} }\) of f. The details are shown in the following proposition.
Proposition 6
For a polyhedral convex function \(f :\mathbb {R}^n \rightarrow \mathbb {R}\cup {\left\{ \infty \right\} }\), the following two statements are equivalent:
Proof
Assume that (6) is satisfied. Let \((x^*,r^*) \in \hbox {epi}\,f^*\), i.e.,
This is equivalent to the implication
By (6), this can be expressed as
By Farkas’ lemma (see e.g. [18, Theorem 7.20]) we obtain the existence of \(v \in \mathbb {R}^m_+\) such that
i.e., (7) holds. The opposite implication follows likewise by taking into account that \(f=f^{**}\) for a polyhedral convex function f. \(\square \)
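For the reader's convenience, the affine variant of Farkas' lemma used above can be stated as follows (our paraphrase of the standard result, cf. [18, Theorem 7.20]; here A, a, q and \(\beta\) are generic data, not the quantities of the proof):

```latex
\begin{aligned}
&\text{Suppose } \{y \mid Ay \ge a\} \ne \emptyset.
\text{ Then } q^T y \le \beta \text{ holds for all } y \text{ with } Ay \ge a\\
&\text{if and only if there exists } v \ge 0
\text{ such that } A^T v = -q \text{ and } a^T v \ge -\beta.
\end{aligned}
```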
The list of examples could be extended. Note that, in general, it is much more expensive to compute an H-representation or a V-representation of the epigraph of a polyhedral convex function. This problem, however, is important for our studies and will be discussed in Sect. 6.
3 Maximizing a convex function over a polyhedron
A well-known result on maximizing a convex function over a polytope P states that the maximum is attained in at least one vertex of P. In this short section we discuss the case of unbounded polyhedra.
Let \(f:\mathbb {R}^n \rightarrow \mathbb {R}\cup {\left\{ \infty \right\} }\) be convex and let P be a polyhedron. Assume that P has a vertex. We consider the problem
Proposition 7
If Problem (C) has an optimal solution, then some vertex \(x^*\) of P is an optimal solution of (C).
Proof
We denote by \(v^1,\dots ,v^k\) the vertices of P and set \(B:= \hbox {conv}\,{\left\{ v^1,\dots ,v^k\right\} }\). Convexity of f implies that there is a vertex \(v^j\) of B which is an optimal solution of
Assume there is \(x \in P {\setminus } B\) with \(f(x) > f(v^j)\). The point x can be represented as the sum \(x=b+d\) of a point \(b \in B\) and a direction \(d \in (0^+ P) \!\setminus \!\{0\}\). Then, \(f(b+d) > f(v^j) \ge f(b)\). Convexity of f implies that \(f(b+nd) \rightarrow \infty \) as \(n \rightarrow \infty \), which contradicts the existence of an optimal solution. \(\square \)
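Proposition 7 can be checked numerically: sampling convex combinations of the generating points of a polytope never produces a value of a convex function above the best generating point. The following Python snippet is a toy self-check with random data of our own construction:

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.normal(size=(6, 2))       # generators of a random 2D polytope
f = lambda x: float(np.sum(x**2))      # a convex function

best_generator_value = max(f(v) for v in points)

# No convex combination of the generators can exceed the best generator,
# since f(sum_i w_i v^i) <= sum_i w_i f(v^i) <= max_i f(v^i) by convexity.
for _ in range(1000):
    w = rng.random(6)
    w /= w.sum()
    assert f(w @ points) <= best_generator_value + 1e-9
print("maximum over samples never exceeded", round(best_generator_value, 4))
```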
The next statement characterizes the existence of a solution.
Proposition 8
Problem (C) is unbounded if and only if there exists a vertex v of P, an extreme direction d of P and some \(\beta > 0\) with
Proof
Let (C) be unbounded. There is a sequence \((x_\nu )\) in P with \(f(x_{\nu }) \rightarrow \infty \) as \(\nu \rightarrow \infty \). For all \(\nu \in \mathbb {N}\), we have
where \(v^i\) are the vertices and \(d^i\) the extreme directions of P. Setting
we obtain
By Proposition 7, we have \(\beta ^\nu \ne 0\) for \(\nu \) being sufficiently large. Convexity implies that
tends to infinity. Hence \(f(v^i + \beta ^\nu d^j) \rightarrow \infty \) for at least one i and j, which proves the first part of the statement.
The converse implication follows from convexity of f along the ray \({\left\{ v+\beta d |\;\beta \ge 0\right\} }\), which belongs to P. \(\square \)
4 The case of g being polyhedral
The algorithm considered in this section is based on an enumeration of the vertices of \(\hbox {epi}\,g\). We start with a reformulation of the DC program (P).
Proposition 9
Problem (P) can be expressed equivalently as
Proof
Since \(g:\mathbb {R}^n \rightarrow \mathbb {R}\cup {\left\{ \infty \right\} }\) (i.e. the value \(-\infty \) is not allowed), for the graph \(\hbox {gr}\,g:={\left\{ (x,g(x)) |\;x \in \hbox {dom}\,g\right\} }\) of g, we have
Let \(\bar{x}\) be an optimal solution of (P). For all \((x,r) \in \hbox {epi}\,g\) (and in particular for all \((x,r) \in \hbox {bd}\,\hbox {epi}\,g\)) we have \(g(\bar{x}) - h(\bar{x}) \le g(x)-h(x) \le r - h(x)\). Since \((\bar{x},g(\bar{x})) \in \hbox {bd}\,\hbox {epi}\,g\), we conclude that \((\bar{x},g(\bar{x}))\) solves (8).
Vice versa, let \((\bar{x},\bar{r}) \in \hbox {bd}\,\hbox {epi}\,g\) be a solution of (8), then
We get \(\bar{r}=g(\bar{x})\) and see that \(\bar{x} \in \hbox {dom}\,g\) solves (P). \(\square \)
Assume now that \(g:\mathbb {R}^n \rightarrow \mathbb {R}\cup {\left\{ \infty \right\} }\) is polyhedral convex. Let \({\left\{ F_i|\;i=1,\dots ,k\right\} }\) be the (finite) set of all facets of \(\hbox {epi}\,g\). Then, it is evident that
The following statement is now obvious.
Corollary 10
For \(i=1,\dots ,k\), let \((x^i,r^i)\) be an optimal solution of
Let \(j \in \hbox {argmin}\,{\left\{ r^i-h(x^i)|\;i=1,\dots ,k\right\} }\). Then, \((x^j,r^j)\) is an optimal solution of (P). Problem (P) has an optimal solution if and only if, for all \(i \in {\left\{ 1,\dots ,k\right\} }\), (\(\hbox {P}_i\)) has an optimal solution.
Assume that (P) has an optimal solution. Then, for every \(i \in {\left\{ 1,\dots ,k\right\} }\), Problem (\(\hbox {P}_i\)) has an optimal solution which is, by Proposition 7, a vertex of the facet \(F_i\) of \(\hbox {epi}\,g\). Since a vertex of a facet F of \(\hbox {epi}\,g\) is also a vertex of \(\hbox {epi}\,g\), an optimal solution of (P) can always be found among the vertices of \(\hbox {epi}\,g\), denoted \(\hbox {vert}\,\hbox {epi}\,g\).
Corollary 11
Assume that (P) has an optimal solution. Every optimal solution \((\bar{x},\bar{r})\) of
is also an optimal solution of (P).
If a representation A of the polyhedral convex function g is given, the vertices of \(\hbox {epi}\,g\) can be computed by solving a polyhedral projection problem, see Sect. 6. If \(\hbox {vert}\,\hbox {epi}\,g\) has been computed, it remains to determine the minimum over finitely many points in problem (9). We obtain the following solution procedure for the DC optimization problem (P).
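Once \(\hbox {vert}\,\hbox {epi}\,g\) is available, the remaining work in (9) is a finite minimization. The following minimal Python sketch of this step uses a toy instance of our own choosing (in the actual method the vertex list comes from the MOLP/projection solver):

```python
# Sketch of the primal algorithm's final step: given the vertices
# (x^i, r^i) of epi g, minimize r - h(x) over these vertices.
# Toy instance (ours, not from the paper):
#   g(x) = |x| on [-1, 1] (+inf outside),  h(x) = 2 x^2,
# so epi g has the vertices (-1, 1), (0, 0), (1, 1).
vertices_epi_g = [(-1.0, 1.0), (0.0, 0.0), (1.0, 1.0)]
h = lambda x: 2.0 * x**2

x_best, r_best = min(vertices_epi_g, key=lambda v: v[1] - h(v[0]))
print("optimal x:", x_best, "optimal value:", r_best - h(x_best))
```

Here the optimal value \(-1\) is attained at \(x=\pm 1\), which agrees with minimizing \(|x|-2x^2\) over \([-1,1]\) directly.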
Assumption 12
-
(i)
(P) has an optimal solution,
-
(ii)
g is polyhedral convex,
-
(iii)
\(\hbox {epi}\,g\) has a vertex,
-
(iv)
a representation A of g is known.
Algorithm 13
Assume that Assumptions 12 (i)–(iv) are satisfied.
Corollary 14
Algorithm 13 works correctly if Assumptions 12 (i)–(iv) are satisfied.
Proof
This follows from the considerations above. \(\square \)
Remark 15
Of course one can add linear constraints to Problem (P) in case g is polyhedral. These constraints can be included into a representation of g, see e.g. Example 25.
5 The case of h being polyhedral
We assume now that the function h in Problem (P) is polyhedral. We consider the Toland–Singer dual problem of (P), see [37, 38], that is,
$$\begin{aligned} \min _{y \in \mathbb {R}^n}\; h^*(y) - g^*(y), \end{aligned} \qquad \mathrm{(D)}$$
where \(h^*(y) := \sup _{x \in \mathbb {R}^n} [{y}^Tx - h(x)]\) is the conjugate of h and likewise for g. Due to the duality theory by Toland and Singer, the optimal objective values of (P) and (D) coincide in case of h being closed. This condition is satisfied as we assume h to be a polyhedral convex function.
Since \(g^*\) is convex and \(h^*\) is polyhedral convex, Problem (D) can be solved by applying Algorithm 13 of Sect. 4. As pointed out below, a solution of (P) can be computed from a solution of (D). We impose the following assumptions, all related to the given problem (P):
Assumption 16
-
(i)
(P) has an optimal solution,
-
(ii)
h is polyhedral convex,
-
(iii)
\(\dim \hbox {epi}\,h = n+1\),
-
(iv)
a representation A of h is known,
-
(v)
g is closed,
-
(vi)
A solution of the following problem exists whenever the problem is bounded for the parameter \(y \in \mathbb {R}^n\):
$$\begin{aligned} \min _{x \in \mathbb {R}^n} [g(x) - {y}^T x]. \end{aligned} \qquad (10)$$
The following result relates solutions of (P) and solutions of (D) to each other. We denote by \(\partial f(\bar{x}):= {\left\{ y \in \mathbb {R}^n|\;\forall x \in \mathbb {R}^n: f(x) \ge f(\bar{x}) + {y}^T(x-\bar{x})\right\} }\) the subdifferential of a convex function \(f:\mathbb {R}^n \rightarrow \mathbb {R}\cup {\left\{ \infty \right\} }\) at \(\bar{x} \in \hbox {dom}\,f\). We set \(\partial f(\bar{x}) := \emptyset \) for \(\bar{x} \not \in \hbox {dom}\,f\). For convenience of the reader and with regard to Remark 18 below, we provide a short proof of the following statement.
Proposition 17
(e.g. [24, Proposition 4.7] or [39, Proposition 3.20]) Let \(g:\mathbb {R}^n \rightarrow \mathbb {R}\cup {\left\{ \infty \right\} }\) and \(h:\mathbb {R}^n \rightarrow \mathbb {R}\cup {\left\{ \infty \right\} }\) be proper convex functions. Then the following holds true.
-
(i)
If x is an optimal solution of (P), then each \(y \in \partial h(x)\) is an optimal solution of (D).
If, in addition, g and h are closed, a dual statement holds:
-
(ii)
If y is an optimal solution of (D), then each \(x \in \partial g^*(y)\) is an optimal solution of (P).
Proof
Let \(y\in \hbox {dom}\,h^*\) be optimal for (D) and \(x\in \partial g^*(y)\). To prove statement (ii), we note that we have
where the latter equation requires g being closed, compare [36, Theorem 23.5]. Moreover we use Fenchel’s inequality, that is, \(g^*(y^*) + g({z}) \ge {y^*}^T{z}\) for all \({z}\in \mathbb {R}^n\) and all \(y^* \in \mathbb {R}^n\) (and likewise for h). For all \(y^* \in \hbox {dom}\,h^*\) and all \(x \in \hbox {dom}\,g\) we get
Taking the infimum over \(y^* \in \hbox {dom}\,h^*\), we get
The claim follows as \(h^{**}({z}) = h({z})\). Here we use that h is assumed to be closed, compare [36, Theorem 12.2]. The proof of (i) is analogous but neither g nor h needs to be closed, compare [36, Theorem 23.5] and take into account the fact that no biconjugation argument is necessary. \(\square \)
Remark 18
In [24, Proposition 4.7] the assumption of g and h being closed for statement (ii) of Proposition 17 is missing. Moreover in [39, Proposition 3.20] closedness of g needs to be assumed. Indeed, consider the following example. Let
Then f is not closed and we have
Set \(g=f\) and \(h=\hbox {cl}\,f\). Then \(y=0\) solves (D) and \(0 \in \partial g^*(0)\). But \(x=0\) is not a solution of (P). This shows the failure when g is not closed.
Set \(g=\hbox {cl}\,f\) and \(h=f\). Then \(y=0\) solves (D) and \(1 \in \partial g^*(0)\). But \(x=1\) is not a solution of (P). This shows the failure when h is not closed.
Proposition 19
(e.g. [36, Theorem 23.5]) Let \(g:\mathbb {R}^n \rightarrow \mathbb {R}\cup {\left\{ \infty \right\} }\) be a proper closed convex function. Then \(\bar{x} \in \partial g^*(y)\) if and only if \(\bar{x}\) is an optimal solution of (10).
The next result provides a consequence of Assumption 16 (iii).
Proposition 20
Let \(h:\mathbb {R}^n\rightarrow \mathbb {R}\cup {\left\{ \infty \right\} }\) be a polyhedral convex function such that \(\dim \hbox {epi}\,h = n+1\). Then \(\hbox {epi}\,h^*\) has a vertex.
Proof
The assumption implies that there is a point \(x \in \hbox {int}\,\hbox {dom}\,h\), such that h is affine in a neighborhood of x. Whence \(\partial h(x)\) is a singleton set \({\left\{ \bar{y}\right\} }={\left\{ \nabla h(x)\right\} }\). The Fenchel-Young inequality implies
Using [36, Theorem 23.5], we see that equality holds if and only if \(z=\bar{y}\). This means that
is a supporting hyperplane of \(\hbox {epi}\,h^*\) such that \(\hbox {epi}\,h^* \cap H = {\left\{ (\bar{y},h^*(\bar{y}))\right\} }\), i.e., \((\bar{y},h^*(\bar{y}))\) is a vertex of \(\hbox {epi}\,h^*\). \(\square \)
We obtain the following algorithm:
Algorithm 21
Assume that Assumptions 16 (i)–(vi) are satisfied.
Theorem 22
Let the Assumptions 16 (i)–(vi) be satisfied. Then Algorithm 21 works correctly.
Proof
Note first that (P) has a solution \(\bar{x}\) by Assumption 16 (i). A solution \(\bar{x}\) of (P) is by definition a point in \(\hbox {dom}\,g \cap \hbox {dom}\,h\). Since h is polyhedral convex and \(\bar{x} \in \hbox {dom}\,h\), we have \(\partial h(\bar{x})\ne \emptyset \), see e.g. [36, Theorem 23.10]. By Proposition 17 (i), a point \(\bar{y} \in \partial h(\bar{x})\) is a solution to (D). Whence a solution of (D) exists. In Steps (1) and (2) of the algorithm, a solution \(\bar{y}\) of (D) is computed. To see this, we check Assumptions 12 (i)–(iv) of Algorithm 13 in Sect. 4, reformulated for problem (D):
-
(D) has an optimal solution, as shown above.
-
\(h^*\) is polyhedral convex since h is.
-
\(\hbox {epi}\,h^*\) has a vertex. This follows from Proposition 20.
-
A representation A of \(h^*\) is known, as it can be obtained from a representation of h, see Example 5 in Sect. 2.
Since \(\bar{y}\) computed in Steps (1) and (2) is a solution to (D), we have \(\bar{y} \in \hbox {dom}\,g^*\). Hence Problem (10) is bounded (by the definition of the conjugate). In this case, by Assumption 16 (vi), an optimal solution \(\bar{x}\) of (10) exists. By Proposition 19, we get \(\bar{x} \in \partial g^*(\bar{y})\). Proposition 17 (ii) implies that \(\bar{x}\) is an optimal solution of (P). \(\square \)
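The steps of the proof mirror the dual algorithm itself. Below is a minimal Python sketch on a toy instance of our own choosing, where \(h^*\) is the indicator of \([-1,1]\) and problem (10) is solvable in closed form:

```python
# Toy instance (ours):  g(x) = x^2,  h(x) = |x|  (h polyhedral).
# Then h*(y) is the indicator of [-1, 1], so epi h* has the
# vertices (-1, 0) and (1, 0), and g*(y) = y^2 / 4 with problem (10)
# solved in closed form by x = y / 2.
vertices_epi_h_star = [(-1.0, 0.0), (1.0, 0.0)]
g_star = lambda y: y**2 / 4.0

# Step (2): pick the vertex minimizing s - g*(y), the objective of (D).
y_best, s_best = min(vertices_epi_h_star, key=lambda v: v[1] - g_star(v[0]))

# Step (3): recover a primal solution from (10): x in argmin [g(x) - y x].
x_best = y_best / 2.0
f = lambda x: x**2 - abs(x)
print("dual value:", s_best - g_star(y_best),
      "primal x:", x_best, "f(x):", f(x_best))
```

As Toland–Singer duality predicts, the dual value \(-1/4\) coincides with the primal optimal value \(f(\pm 1/2) = -1/4\).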
Remark 23
One can add convex constraints \(f_i(x)\le 0\) \((i=1,\dots ,m)\) to Problem (P) in case h is polyhedral. Setting \(g(x) = \infty \) whenever one constraint is violated, we maintain convexity of g. If g and \(f_i\) \((i=1,\dots ,m)\) are closed, so is the modified g. The only difference in the algorithm is that the constraints have to be added to Problem (10).
Remark 24
An alternative approach for minimizing a DC function \(f=g-h\) with h polyhedral is the following: The domain of h can be “partitioned” into finitely many convex polyhedral sections \(S_i\) (such that the interiors of \(S_i\) and \(S_j\) are disjoint for \(i\ne j\)) on each of which h is affine. Minimizing \(g-h\) over such a section \(S_i\) is a convex program with linear constraints. Solving such a convex program for each \(S_i\) and choosing the one with smallest optimal value, we obtain an optimal solution of the original problem. To compute the sections \(S_i\) one needs to solve a polyhedral projection problem (or similar) involving the epigraph of h, which is not easier than Step (1) of Algorithm 21. The advantage of our algorithm is as follows: First, in case the conjugate of g is known, we have to solve only one convex program. Second, only unconstrained convex programs need to be solved in our algorithm.
6 Polyhedral projection via multiple objective linear programming
For matrices \(B \in \mathbb {R}^{m \times n}\), \(C\in \mathbb {R}^{m \times k}\) and a vector \(c \in \mathbb {R}^m\), we consider the problem
$$\begin{aligned} Y = {\left\{ x \in \mathbb {R}^n |\;\exists u \in \mathbb {R}^k:\; Bx + Cu \ge c\right\} }. \end{aligned} \qquad \mathrm{(PP)}$$
Solving (PP) essentially means finding a V-representation of Y, see [29] for details. Using the terminology of Sect. 2, (PP) provides a P-representation of Y. The polyhedron Y can be seen as the projection of the polyhedron
onto \(\mathbb {R}^n\).
In [29], the equivalence between the projection problem (PP) and multiple objective linear programming (MOLP) is shown. In particular, [29, Theorem 3] yields that a V-representation of Y can be obtained by solving the multiple objective linear program
where \(e=(1,\dots ,1)^T\).
A MOLP-solver such as Bensolve [28, 30] computes not only a solution to (MOLP) in the sense of [21, 27] but also a V-representation of the so-called upper image. The upper image of (MOLP) is the set
whereas the image of (MOLP), in the literature also called feasible set in the objective space, is
Clearly, we have
and
Thus, a V-representation of Q can be obtained from a V-representation of \(\mathcal {P}\) by omitting all points and directions x with \(e^T z \ne 0\). Evidently, a V-representation of Y can be obtained from a V-representation of Q by omitting the last component of each vector.
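The very last step, passing to a V-representation of Y by omitting the last component of each vector, is trivial to implement. A toy Python illustration (the cube instance is our own choice; in general redundant points may remain after projection):

```python
import numpy as np

# Projection step in miniature: once a V-representation of a lifted
# polyhedron is known, a V-representation of its projection is obtained
# by dropping components (possibly keeping redundant points).
# Toy instance: the unit cube in R^3 projected onto the first two
# coordinates, which yields the unit square.
cube_vertices = np.array(
    [[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
    dtype=float)
projected = np.unique(cube_vertices[:, :2], axis=0)  # drop last component
print(projected)
```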
7 Application and numerical results
First, we solve an instance of a location problem, then we compare our results with two algorithms from the literature, and finally we present two new examples. We used the VLP solver Bensolve, version 2.0.1 [28, 30], to solve the polyhedral projection problems. VLP stands for vector linear programming, which is a generalization of MOLP. The remaining part was implemented in Matlab\(^{\textregistered }\) R2015b. In particular, we use the linprog command if problem (10) needs to be solved, which reduces to an LP here. All computations were run on a computer with an Intel\(^{\textregistered }\) Core\(^\mathrm{TM}\) i5 CPU at 3.3 GHz.
As both g and h are polyhedral convex functions in our first two examples, we obtain two alternatives to solve the problems. Algorithm 13 developed in Sect. 4 (case g polyhedral) is referred to as the primal algorithm, whereas Algorithm 21 of Sect. 5 (case h polyhedral) is called the dual algorithm. As we will see, both algorithms are justified.
Example 25
Classical location theory is dedicated to the problem of locating a new facility such that all relevant distances, e.g. to customers, are minimized [7, 8, 11, 13, 19, 35, 42]. However, since some facilities also cause negative effects like noise, stench or even danger, such facilities need to be established as far away as possible from nature reserves and residential areas. Examples include an airport, railway station, industrial plant, dump site, power plant or wireless station.
The problem of locating such a so-called semi-obnoxious facility consists in minimizing the weighted sum of distances to the so-called attraction points (for economic reasons) while maximizing the weighted sum of distances to the so-called repulsive points (in order to avoid negative effects).
First attempts to solve location problems with such undesirable facilities appeared in the 1970’s [4, 6, 17]. Algorithms based on a DC formulation are presented for instance in [3, 9, 31, 40].
It is reasonable to establish the new facility in a given area, city or state, which we assume to be a bounded polyhedral set \(\mathcal {P}=\left\{ x\in \mathbb {R}^n|\,Px\ge p\right\} \subseteq \hbox {dom}\,h\). The DC location problem can be formulated as
with functions \(g,h:\;\mathbb {R}^n\rightarrow \mathbb {R}_+\) defined as
where \(\mathbb {I}_{\mathcal {P}}\) denotes the indicator function that attains the value 0, if \(x\in \mathcal {P}\), and \(+\infty \) otherwise. The parameters \(\overline{a}^1,\ldots ,\overline{a}^{\overline{M}}\in \mathbb {R}^n\), \((\overline{M}\ge 1)\) denote the attracting points with weights \(\overline{w}_1,\ldots ,\overline{w}_{\overline{M}}>0\), and \(\underline{a}^1,\ldots ,\underline{a}^{\underline{M}}\in \mathbb {R}^n\), \((\underline{M}\ge 1)\) denote the repulsive points with weights \(\underline{w}_1,\ldots ,\underline{w}_{\underline{M}}>0\). The polyhedral unit balls \(\overline{B}_1,\ldots ,\overline{B}_{\overline{M}}\) and \(\underline{B}_1,\ldots ,\underline{B}_{\underline{M}}\) induce gauge distances that we associate with \(\overline{a}^1,\ldots ,\overline{a}^{\overline{M}}\) and \(\underline{a}^1,\ldots ,\underline{a}^{\underline{M}}\), respectively.
For a compact polyhedral set \(B\subseteq \mathbb {R}^n\), which contains the origin in its interior, the polyhedral gauge distance \(\gamma _B:\;\mathbb {R}^n\rightarrow \mathbb {R}\) from \(a\in \mathbb {R}^n\) to \(x\in \mathbb {R}^n\) is defined as \(\gamma _B(x-a):=\min \left\{ \lambda \ge 0|\,x-a\in \lambda B\right\} \), see [11, 36].
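For a polyhedral unit ball given as \(B=\{x\,|\,\hat{B}x\le e\}\), the gauge admits a closed form: \(\gamma _B(z)=\min \{\lambda \ge 0\,|\,\hat{B}z\le \lambda e\}=\max (0,\max _i \hat{B}^iz)\). A small Python sketch follows (the function name `gauge` and the \(\ell _1\)-ball instance are our own illustration, not data from the paper):

```python
import numpy as np

# Polyhedral gauge distance: for B = {x : Bhat x <= e} compact with
# 0 in its interior, gamma_B(z) = min{lam >= 0 : Bhat z <= lam e}
# reduces to the closed form max(0, max_i Bhat^i z).
def gauge(Bhat, z):
    return max(0.0, float(np.max(Bhat @ z)))

# The l1 unit ball in R^2 as a polyhedral unit ball:
Bhat = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])
a = np.array([1.0, 0.0])     # an "attraction point"
x = np.array([3.0, 1.0])
print(gauge(Bhat, x - a))    # 3.0, the l1 distance |3-1| + |1-0|
```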
The epigraph of g is given by
where \(\hat{\overline{B}}_1,\ldots ,\hat{\overline{B}}_{\overline{M}}\) are the matrices defining the polyhedral unit balls \(\overline{B}_1,\ldots ,\overline{B}_{\overline{M}}\), respectively, i.e., \(x\in \overline{B}_i\) if and only if \(\hat{\overline{B}}_ix\le e\) for \( (i=1,\ldots ,\overline{M})\).
A representation \(\overline{A}=(\overline{B},\overline{b},\overline{C},\overline{c})\) of g is given by
A representation of h is given likewise.
The epigraphs \(\hbox {epi}\,g^*\) and \(\hbox {epi}\,h^*\) and corresponding representations can easily be obtained by applying Example 5 and Proposition 6. Moreover, Assumptions 12 (i)–(iv) in Sect. 4 (case g polyhedral) and 16 (i)–(vi) in Sect. 5 (case h polyhedral) are satisfied and hence, Algorithms 13 and 21 can be applied in order to solve the location problem (L).
We note further that the conjugates of g and h can be written as
[41], where \(B^*:=\left\{ y\in \mathbb {R}^n|\,\forall x\in B:\;x^Ty\le 1\right\} \) defines the dual unit ball of B and \(\sigma \) denotes the support function. If B is polyhedral then so is \(B^*\).
We solve several instances of (L) in the plane. For varying numbers of attracting and repulsive points, Tables 1 and 2 show the computational results of the primal and the dual algorithm, respectively. In most cases, the primal algorithm performs better.
Remark 26
In Step (2) of Algorithm 21, \(g^*(y^i)\) is calculated for all vertices \((y^i, s^i)\), \(i=1,\ldots ,k\), of \(\hbox {epi}\,h^*\), obtained in Step (1). In general, these objective values cannot be calculated directly, see for example the representation (15), which constitutes a linear program. In the general setting, the convex program (10) needs to be solved for each vertex. On the other hand, in Step (2) of Algorithm 13, we need to calculate \(h(x^i)\) for all vertices \((x^i, r^i)\), \(i=1,\ldots ,k\), of \(\hbox {epi}\,g\), obtained in Step (1). If (in case both g and h are polyhedral) only a representation of h (i.e. a P-representation of \(\hbox {epi}\,h\)) is given, a general procedure to get the values \(h(x^i)\) is to solve the linear program \(\min r\) s.t. \((x^i,r) \in \hbox {epi}\,h\). However, in the present example (and often in practice) one can calculate these objective values directly, i.e. there is no additional linear program to be solved. This is an advantage of the primal algorithm for our example from locational analysis.
Nevertheless, we also observe for the primal algorithm that an increasing number of attracting points influences the running time much more than an increasing number of repulsive points. It is vice versa for the dual algorithm, which leads to settings where the dual algorithm performs even better, such as \(\overline{M}=100\), \(\underline{M}=20\). Thus, both algorithms have their justification.
Example 27
Ferrer, Bagirov and Beliakov [15] introduce the DC extended cutting angle method (DCECAM) and compare it with the DC prismatic algorithm (DCPA) described in [14]. We compare Algorithms 13 and 21 with these two methods by means of Example 10 in [15]: Consider a DC programming problem (P) with
and
In Table 3 we present numerical results for the primal and dual algorithms developed in this article and compare them with the results of DCECAM and DCPA obtained in [15]. We see that both of our algorithms perform better than DCECAM and DCPA, where it should be noted that the latter algorithms are designed for a more general problem class (non-polyhedral DC problems). In particular, the dual method is preferable for this example. Note that for \(n=10\) we need to solve a multiple objective linear program with \(n+2=12\) objectives.
Example 28
Let \(Q \in \mathbb {R}^{n \times n}\) be a positive semi-definite symmetric matrix. The problem to maximize the convex quadratic function \(h:\mathbb {R}^n \rightarrow \mathbb {R}\), \(h(x) = x^T Q x\) over a polyhedral convex set \(S \subseteq \mathbb {R}^n\) can be reformulated as a DC-program by letting \(g=\mathbb {I}_S\), i.e. the indicator function of S, and minimizing \(f=g-h\). For example, let \(S:= {\left\{ x \in \mathbb {R}^n |\;-e \le x \le e\right\} }\) be an n-dimensional hypercube. As h is convex, Q has only non-negative eigenvalues. We assume that Q has at most \(m \le n\) positive eigenvalues. Equivalently, Q can be expressed as \(Q=P^T P\) for some \(P \in \mathbb {R}^{m \times n}\). For example, let the entries of P be defined by
where \(\lfloor x\rfloor := \max {\left\{ z \in \mathbb {Z}| z \le x\right\} }\). Setting \(y:=Px\) and \(T = {\left\{ y \in \mathbb {R}^m | \exists x \in S: y = Px\right\} }\), we obtain the equivalent problem of maximizing \(\bar{h}:\mathbb {R}^m \rightarrow \mathbb {R}\), \(\bar{h}(y) = y^T y\), over the polyhedron \(T\subseteq \mathbb {R}^m\). For \(n=10\) and \(m=4\) we have
and the optimal solution obtained by Algorithm 13 applied to the reformulated problem is
In Table 4, further numerical results for this example can be found.
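The dimension reduction behind Example 28 can be checked numerically. The sketch below uses an arbitrary randomly generated matrix \(P\) in place of the paper's floor-based definition (which is not reproduced here); it verifies that \(h(x) = x^TQx = \Vert Px\Vert ^2\) with \(Q = P^TP\), and finds the maximum of the convex function h over the hypercube by enumerating its vertices, which is valid because a convex function attains its maximum over a polytope at a vertex.

```python
import itertools
import numpy as np

# Sketch of the reformulation in Example 28 with an arbitrary
# illustrative matrix P (the paper's floor-based definition of P
# is not reproduced here).
n, m = 6, 3
rng = np.random.default_rng(0)
P = rng.integers(-2, 3, size=(m, n)).astype(float)
Q = P.T @ P  # positive semi-definite, at most m positive eigenvalues

# h(x) = x^T Q x = ||Px||^2, so maximizing h over the hypercube
# S = [-1, 1]^n is the same as maximizing ||y||^2 over T = P(S).
def h(x):
    return x @ Q @ x

# A convex function attains its maximum over a polytope at a vertex,
# so enumerating the 2^n hypercube vertices gives the global maximum
# for this small instance.
best = max(h(np.array(v))
           for v in itertools.product([-1.0, 1.0], repeat=n))
print(best)
```

For larger n this enumeration is of course infeasible, which is where the MOLP-based Algorithm 13, applied in the reduced dimension m, comes in.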
Example 29
Let \(g:\mathbb {R}^n\rightarrow \mathbb {R}\), \(g(x) = x^T Q x\) for some positive definite symmetric matrix Q. Then the conjugate is \(g^*(y) = \frac{1}{4} y^T Q^{-1} y\) and the optimal solution of (10) is \(x=\frac{1}{2}Q^{-1} y\). For example, let \(Q=P^T P\), where \(P \in \mathbb {R}^{n \times n}\) is defined by \(P_{ij} = 1\) for \(i \ge j\) and \(P_{ij}=0\) otherwise; then Q is positive definite. Further, we set
Some numerical results of minimizing \(f = g - h\) using Algorithm 21 are shown in Table 5.
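The conjugate formula used in Example 29 is easy to verify numerically: \(g^*(y) = \sup _x {\left\{ y^Tx - x^TQx\right\} }\), the first-order condition \(y - 2Qx = 0\) gives the maximizer \(x = \frac{1}{2}Q^{-1}y\), and substituting back yields \(\frac{1}{4}y^TQ^{-1}y\). A minimal check for the matrix Q of this example (the test point y is an arbitrary choice of ours):

```python
import numpy as np

# Numerical check of the conjugate formula in Example 29:
# for g(x) = x^T Q x with Q positive definite,
# g*(y) = sup_x { y^T x - x^T Q x } = (1/4) y^T Q^{-1} y,
# attained at x = (1/2) Q^{-1} y.
n = 5
P = np.tril(np.ones((n, n)))   # P_ij = 1 for i >= j, else 0
Q = P.T @ P                    # positive definite

y = np.arange(1.0, n + 1.0)    # an arbitrary test point
x_star = 0.5 * np.linalg.solve(Q, y)

# First-order condition: the gradient y - 2 Q x vanishes at x_star.
grad = y - 2.0 * Q @ x_star
value = y @ x_star - x_star @ Q @ x_star      # objective at x_star
closed_form = 0.25 * y @ np.linalg.solve(Q, y)
print(grad, value, closed_form)
```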
8 Conclusion and final remarks
It is demonstrated that MOLP solvers provide a very easy way to solve DC programs in which at least one of the two functions g, h in the objective \(f=g-h\) is polyhedral. In case both g and h are polyhedral, the investigation yields two different algorithms, called the primal and the dual algorithm. The running time of the primal algorithm depends mainly on the effort required to compute the vertices of \(\hbox {epi}\,g\) by the MOLP solver; likewise, the running time of the dual algorithm depends mainly on the vertices of \(\hbox {epi}\,h^*\). The proposed methods work well if the dimension of \(\hbox {epi}\,g\) in Algorithm 13 or of \(\hbox {epi}\,h^*\) in Algorithm 21 is not too high (up to 11 in the examples above).
In this study we consider functions decomposed as \(f = g-h\) where g (resp. h) is polyhedral. The examination of such a decomposition is motivated by applications such as the problem of locating semi-obnoxious facilities, see Example 25. Nevertheless, one may wish to characterize the class of all functions that admit such a decomposition and to formulate an algorithm that computes it. It is known that twice continuously differentiable functions are DC [20, 22]. A DC function \(f=g-h\) with g polyhedral (resp. h polyhedral) is piecewise concave (resp. piecewise convex). Here "piecewise" means that the domain of f can be decomposed into finitely many convex polyhedra \(P_i\) with pairwise disjoint interiors such that f is concave (resp. convex) on each \(P_i\). Moreover, f is locally Lipschitz on the interior of its domain, since every concave (or convex) function is. An interesting question is whether the converse holds: can every piecewise concave (resp. convex) function f which is locally Lipschitz on the interior of its domain be decomposed as \(f = g-h\) where g (resp. h) is polyhedral? Answering this question and providing an appropriate algorithm is left as an open problem.
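The piecewise-concavity observation can be illustrated on a hypothetical one-dimensional instance not taken from the paper: with the polyhedral function \(g(x) = |x| = \max {\left\{ x, -x\right\} }\) and \(h(x) = x^2\), the function \(f = g - h\) is affine minus convex, hence concave, on each region where a single affine piece of g is active (\(x \le 0\) and \(x \ge 0\)), while f is not concave on all of \(\mathbb {R}\) because of the kink at 0.

```python
import numpy as np

# f = g - h with polyhedral g(x) = |x| and convex h(x) = x**2.
# On each piece where one affine piece of g is active, f is
# (affine) - (convex), hence concave.
f = lambda x: abs(x) - x**2

def midpoint_concave_on(grid, fun, tol=1e-12):
    """Check the midpoint concavity inequality
    fun((a+b)/2) >= (fun(a)+fun(b))/2 on all grid pairs."""
    for a in grid:
        for b in grid:
            if fun(0.5 * (a + b)) + tol < 0.5 * (fun(a) + fun(b)):
                return False
    return True

left, right = np.linspace(-2, 0, 41), np.linspace(0, 2, 41)
print(midpoint_concave_on(left, f), midpoint_concave_on(right, f))
```

On each half-line the check succeeds, whereas for the pair \(a=-0.5\), \(b=0.5\) the midpoint inequality fails at 0, confirming that concavity holds only piecewise.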
References
Aleksandrov, A.D.: On the surfaces representable as difference of convex functions. (Russian) Izvestiya Akad. Nauk Kazah. SSR 3, 3–20 (1949)
Benson, H.: An outer approximation algorithm for generating all efficient extreme points in the outcome set of a multiple objective linear programming problem. J. Glob. Optim. 13, 1–24 (1998)
Chen, P.C., Hansen, P., Jaumard, B., Tuy, H.: Weber’s problem with attraction and repulsion. J. Reg. Sci. 32, 467–486 (1992)
Church, R.L., Garfinkel, R.S.: Locating an obnoxious facility on a network. Transp. Sci. 12(2), 107–118 (1978)
Csirmaz, L.: Using multiobjective optimization to map the entropy region. Comput. Optim. Appl. 63(1), 45–67 (2016)
Dasarathy, B., White, L.J.: A maxmin location problem. Oper. Res. 28(6), 1385–1401 (1980)
Drezner, Z. (ed.): Facility Location: A Survey of Applications and Methods, Springer Series in Operations Research. Springer, New York (1995)
Drezner, Z., Klamroth, K., Schöbel, A., Wesolowsky, G.O.: The Weber problem. In: Drezner, Z., Hamacher, H. (eds.) Facility Location—Applications and Theory. Springer, New York (2002)
Drezner, Z., Wesolowsky, G.O.: The Weber problem on the plane with some negative weights. INFOR 29(2), 87–99 (1991)
Dür, M.: Conditions characterizing minima of the difference of functions. Monatshefte für Mathematik 134(4), 295–303 (2002)
Durier, R., Michelot, C.: Geometrical properties of the Fermat–Weber problem. Eur. J. Oper. Res. 20(3), 332–343 (1985)
Ehrgott, M., Löhne, A., Shao, L.: A dual variant of Benson’s outer approximation algorithm. J. Glob. Optim. 52(4), 757–778 (2012)
Eiselt, H.A., Laporte, G.: Objectives in location problems. In: Drezner, Z. (ed.) Facility Location: A Survey of Applications and Methods, Springer Series in Operations Research. Springer, New York (1995)
Ferrer, A.: Applying global optimization to a problem in short-term hydrothermal scheduling. In: Generalized Convexity, Generalized Monotonicity and Applications. Proceedings of the 7th international Symposium on Generalized Convexity and Generalized Monotonicity, Hanoi, Vietnam, August 27–31, 2002, pp. 63–285. Springer, New York (2005)
Ferrer, A., Bagirov, A., Beliakov, G.: Solving DC programs using the cutting angle method. J. Glob. Optim. 61(1), 71–89 (2014)
Flores-Bazán, F.: On minima of the difference of functions. J. Optim. Theory Appl. 93(3), 525–531 (1997)
Goldman, A.J., Dearing, P.M.: Concepts of optimal location for partially noxious facilities. Bull. Oper. Res. Soc. Am. 23(1), B-31 (1975)
Güler, O.: Foundations of Optimization. Springer, New York (2010)
Hamacher, H.W.: Mathematische Lösungsverfahren für planare Standortprobleme. Vieweg, Wiesbaden (1995)
Hartman, P.: On functions representable as a difference of convex functions. Pac. J. Math. 9(3), 707–713 (1959)
Heyde, F., Löhne, A.: Solution concepts in vector optimization: a fresh look at an old story. Optimization 60(12), 1421–1440 (2011)
Hiriart-Urruty, J.-B.: Generalized differentiability, duality and optimization for problems dealing with differences of convex functions. In: Convexity and Duality in Optimization (Groningen, 1984), Volume 256 of Lecture Notes in Economics and Mathematical Systems, pp. 37–70. Springer, Berlin (1985)
Hiriart-Urruty, J.-B.: A general formula on the conjugate of the difference of functions. Can. Math. Bull. 29(4), 482–485 (1986)
Horst, R., Thoai, N.V.: DC programming: overview. J. Optim. Theory Appl. 103(1), 1–43 (1999)
Lassez, C., Lassez, J.: Quantifier Elimination for Conjunctions of Linear Constraints via a Convex Hull Algorithm. IBM Thomas J. Watson Research Division, Yorktown Heights (1990)
Lemaire, B., Volle, M.: Duality in DC programming. In: Generalized Convexity, Generalized Monotonicity: Recent Results (Luminy, 1996), Volume 27 of Nonconvex Optimization and Its Applications, pp. 331–345. Kluwer, Dordrecht (1998)
Löhne, A.: Vector Optimization with Infimum and Supremum. Vector Optimization. Springer, Berlin (2011)
Löhne, A., Weißing, B.: Bensolve - VLP solver, version 2.0.1. www.bensolve.org
Löhne, A., Weißing, B.: Equivalence between polyhedral projection, multiple objective linear programming and vector linear programming. Math. Methods Oper. Res. 84(2), 411–426 (2016)
Löhne, A., Weißing, B.: The vector linear program solver Bensolve: notes on theoretical background. Eur. J. Oper. Res. (2016). doi:10.1016/j.ejor.2016.02.039
Maranas, C.D., Floudas, C.: A global optimization method for Weber’s problem with attraction and repulsion. In: Hager, W., Hearn, D., Pardalos, P. (eds.) Large Scale Optimization, pp. 259–285. Springer, New York (1994)
Martínez-Legaz, J.E., Seeger, A.: A formula on the approximate subdifferential of the difference of convex functions. Bull. Aust. Math. Soc. 45(1), 37–41 (1992)
Martínez-Legaz, J.E., Singer, I.: An extension of DC duality theory, with an appendix on \(*\)-subdifferentials. Optimization 42(1), 9–37 (1997)
Motzkin, T., Raiffa, H., Thompson, G., Thrall, R.: The double description method. In: Contributions to the Theory of Games II, Annals of Mathematics Studies 28, pp. 51–73 (1953)
Nickel, S.: Location Theory: A Unified Approach. Springer, New York (2005)
Rockafellar, R.: Convex Analysis. Princeton University Press, Princeton (1972)
Singer, I.: A Fenchel–Rockafellar type duality theorem for maximization. Bull. Aust. Math. Soc. 20(2), 193–198 (1979)
Toland, J.F.: Duality in nonconvex optimization. J. Math. Anal. Appl. 66(2), 399–415 (1978)
Tuy, H.: Convex Analysis and Global Optimization, Nonconvex Optimization and its Applications, vol. 22. Kluwer, Dordrecht (1998)
Tuy, H., Al-Khayyal, F., Zhou, F.: A DC optimization method for single facility location problems. J. Glob. Optim. 7(2), 209–227 (1995)
Wagner, A., Martinez-Legaz, J.E., Tammer, C.: Locating a semi-obnoxious facility: a Toland–Singer duality based approach. J. Convex Anal. 23(4), 1185–1204 (2016)
Weber, A.: Über den Standort der Industrien. Erster Teil: Reine Theorie des Standortes. Mohr, Tübingen (1909)
Acknowledgements
We thank both anonymous reviewers for their useful comments, which inspired Remarks 23 and 24 and the discussion about the class of DC functions whose decomposition contains a polyhedral component in the last paragraph of the conclusions. This research was funded by Deutsche Forschungsgemeinschaft (LO 1379/7-1).
Löhne, A., Wagner, A. Solving DC programs with a polyhedral component utilizing a multiple objective linear programming solver. J Glob Optim 69, 369–385 (2017). https://doi.org/10.1007/s10898-017-0519-8
Keywords
- DC programming
- Global optimization
- Polyhedral projection
- Multiple objective linear programming
- Linear vector optimization