1 Introduction

A function \(f: \mathbb {R}^n \rightarrow \mathbb {R}\cup {\left\{ \infty \right\} }\) is said to be polyhedral convex if its epigraph \(\hbox {epi}\,f:={\left\{ (x,r)\in \mathbb {R}^n\times \mathbb {R}|\;r \ge f(x)\right\} }\) is a polyhedral convex set. We consider the following DC optimization problem

$$\begin{aligned} \min \, [g(x) - h(x) ] \; \text { subject to }\; x \in \hbox {dom}\,g, \end{aligned}$$
(P)

where we assume that \(g: \mathbb {R}^n \rightarrow \mathbb {R}\cup {\left\{ \infty \right\} }\) and \(h: \mathbb {R}^n \rightarrow \mathbb {R}\cup {\left\{ \infty \right\} }\) are convex and at least one of the functions g and h is polyhedral convex. DC optimization problems are investigated for instance in [1, 10, 16, 20, 23, 26, 32, 33].

The methods presented in this article are based on computing the vertices of \(\hbox {epi}\,g\) in case g is polyhedral and the vertices of \(\hbox {epi}\,h^*\) in case h is polyhedral, where \(h^*\) is the conjugate of h. It is shown that a solver for multiple objective linear programs (MOLP) can be used for this task. The aim of this paper is to show that global solutions of the considered class of DC-programs can be obtained by utilizing a MOLP-solver. The main advantage of this procedure is that the algorithms are very easy to implement. Despite this simplicity, the algorithms can be applied to certain non-convex problems in location theory, discussed below, and we will show that they are competitive with some algorithms from the recent literature. It should be noted that the presented methods can be combined with other global optimization techniques in order to avoid computing all vertices of the epigraphs. This, however, requires a modification of the MOLP-solver, which would make the implementation more difficult.

The computation of the vertices of an epigraph is related to the problem of projecting a polyhedron onto a subspace. A polyhedral convex set P in \(\mathbb {R}^n\) (or polyhedron for short) is defined as the solution set of a system of finitely many linear inequalities, that is,

$$\begin{aligned} P = {\left\{ x \in \mathbb {R}^n |\;B x \ge c\right\} } \end{aligned}$$
(1)

for a matrix \(B \in \mathbb {R}^{m \times n}\) and a vector \(c \in \mathbb {R}^m\). The pair \((B,\; c)\) is said to be an H-representation of P. The well-known Weyl-Minkowski theorem states that every polyhedron can be expressed as the generalized convex hull of finitely many points \(v^i \in \mathbb {R}^n\); \(i=1,\dots , r\); \(r \ge 1\), and finitely many directions \(d^j \in \mathbb {R}^n\!\setminus \!\{0\}\); \(j=1,\dots ,s\); \(s \ge 0\), that is

$$\begin{aligned} P={\left\{ x\in \mathbb {R}^n|\;x=V\lambda + D\mu ,\; e^T \lambda = 1,\; \lambda \ge 0,\; \mu \ge 0\right\} }, \end{aligned}$$
(2)

where \(v^i\) and \(d^j\) are the columns of \(V\in \mathbb {R}^{n\times r}\) and \(D \in \mathbb {R}^{n\times s}\), respectively. Further we denote by \(e:=(1,\dots ,1)^T\) the all-one vector. We say that (V, D) is a V-representation of the polyhedron P. Both the H-representation and the V-representation are special cases of the projection representation or P-representation (B, C, c), that is

$$\begin{aligned} P = {\left\{ x\in \mathbb {R}^n |\;\exists u \in \mathbb {R}^k: B x + C u \ge c\right\} }, \end{aligned}$$
(3)

for matrices \(B \in \mathbb {R}^{m\times n}\), \(C\in \mathbb {R}^{m\times k}\) and a vector \(c\in \mathbb {R}^{m}\). In Sects. 2 and 7 we will see that in many situations only a P-representation of a polyhedron is known. An H-representation can be obtained from a P-representation by Fourier-Motzkin elimination, and a V-representation can be obtained from an H-representation by vertex enumeration, for instance, by the double description method of Motzkin et al. [34]. However, the first part of this procedure (Fourier-Motzkin elimination) is not tractable if k is large. It has been shown in [29] that both an H-representation and a V-representation can be obtained from a P-representation by solving a multiple objective linear program (MOLP). This follows from the equivalence between a polyhedral projection problem (which is, roughly speaking, the problem of obtaining a V-representation from a P-representation (3)) and multiple objective linear programming, as shown in [29]. By equivalence the authors of [29] understand that a solution of the one problem can be “easily” (for details see [29] or Sect. 6 below) obtained from a solution of the other problem. In order to compute a V-representation of (3), a multiple objective linear program with \(n+1\) objective functions needs to be solved.
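For readers who have not seen it in action, the following Python sketch performs a single Fourier-Motzkin step, eliminating the last variable from a system of inequalities (function name and data layout are our own illustration, not taken from any of the cited implementations). It also makes the intractability visible: each eliminated variable can roughly square the number of inequalities.

```python
def fm_eliminate_last(rows):
    """One Fourier-Motzkin step: eliminate the last variable u from a
    system of inequalities given as (coeffs, rhs) pairs, each encoding
    coeffs[0]*x0 + ... + coeffs[-1]*u >= rhs.  Returns an equivalent
    system in the remaining variables."""
    lower, upper, rest = [], [], []
    for coeffs, rhs in rows:
        cu = coeffs[-1]
        if cu > 0:
            lower.append((coeffs, rhs))      # yields a lower bound on u
        elif cu < 0:
            upper.append((coeffs, rhs))      # yields an upper bound on u
        else:
            rest.append((coeffs[:-1], rhs))  # u does not occur
    # combine every lower bound with every upper bound -> quadratic growth
    out = list(rest)
    for lc, lr in lower:
        for uc, ur in upper:
            a, b = lc[-1], -uc[-1]           # both positive
            coeffs = [b * li + a * ui for li, ui in zip(lc[:-1], uc[:-1])]
            out.append((coeffs, b * lr + a * ur))
    return out
```

For instance, eliminating u from \(\{-x + u \ge 0,\; -u \ge -1\}\) (i.e. \(x \le u \le 1\)) yields the single inequality \(-x \ge -1\), i.e. \(x \le 1\).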

Multiple objective linear programs can be solved by Benson’s algorithm [2]. Numerical examples for up to 10 objective functions are provided in [5] and [30]. A more direct way to solve the polyhedral projection problem is the convex hull method by Lassez and Lassez [25]. This method, however, is closely related to the dual variant of Benson’s algorithm for multiple objective linear programs [12], which has been developed independently.

We close this section with some notation. We denote by \(v_i\) the components of a vector v and by \(M^i\) the rows of a matrix M. As mentioned above, we set \(e=(1,\dots ,1)^T\). Given a convex function \(g: \mathbb {R}^n \rightarrow \mathbb {R}\cup {\left\{ \infty \right\} }\) we denote by \(\hbox {dom}\,g:= {\left\{ x\in \mathbb {R}^n|\;g(x) \ne \infty \right\} }\) the domain of g, and by \(g^*:\mathbb {R}^n\rightarrow \mathbb {R}\cup {\left\{ \infty \right\} }\), \(g^*(x^*):=\sup _{x\in \mathrm{dom}~g} [{x^*}^Tx - g(x)]\) the conjugate of g. If \(\hbox {dom}\,g\ne \emptyset \) then the function g is called proper. The recession cone of a polyhedron (1) is the polyhedral convex cone \(0^+P := {\left\{ x \in \mathbb {R}^n |\;B x \ge 0\right\} }\). The closure, interior, boundary of a set A is denoted, respectively, by \(\hbox {cl}\,A\), \(\hbox {int}\,A\), \(\hbox {bd}\,A\).

2 Representing polyhedral convex functions

Polyhedral convex functions can be represented by their epigraphs. The following definition is based on a P-representation of the epigraph.

Definition 1

A matrix \(A \in \mathbb {R}^{m \times (n+1+k+1)}\) is called a representation of a polyhedral convex function \(f: \mathbb {R}^n \rightarrow \mathbb {R}\cup {\left\{ \infty \right\} }\) if

$$\begin{aligned} \text {epi}\, f = \left\{ \left( \begin{array}{c}x \\ r \\ \end{array}\right) \in \mathbb {R}^{n+1} \bigg | \exists u \in \mathbb {R}^k: A \left( \begin{array}{c} x\\ r\\ u\\ -1\\ \end{array}\right) \ge 0\right\} . \end{aligned}$$
(4)

If the matrix A is partitioned as \(A=(B,b,C,c)\), where \(B \in \mathbb {R}^{m\times n}\), \(b \in \mathbb {R}^m\), \(C \in \mathbb {R}^{m \times k}\) and \(c \in \mathbb {R}^m\), we obtain

$$\begin{aligned} \hbox {epi}\,f = \left\{ \left( \begin{array}{c} x\\ r\end{array}\right) \in \mathbb {R}^{n+1} \bigg |\; \exists u \in \mathbb {R}^k: B x + b r + C u \ge c\right\} . \end{aligned}$$
(5)

The following examples motivate the definition of a representation. We show that for many (classes of) polyhedral convex functions a representation can be obtained without any essential computational effort.

Example 2

Consider finitely many affine functions \(f_i(x)={D^i} x + d_i\) \((i=1,\dots ,m)\), a matrix \(P \in \mathbb {R}^{k\times n}\) and a vector \(p \in \mathbb {R}^k\). Then

$$\begin{aligned} f(x):= \left\{ \begin{array}{ll} \displaystyle \max _{i\in {\left\{ 1,\dots ,m\right\} } }f_i(x) &{} \text { if } P x \ge p\\ + \infty &{} \text { otherwise} \end{array}\right. \end{aligned}$$

is a polyhedral convex function with representation

$$\begin{aligned} A = \left( \begin{array}{cccc} -D &{}\quad e &{}\quad 0 &{}\quad d \\ P &{}\quad 0 &{}\quad 0 &{}\quad p\end{array}\right) , \end{aligned}$$

where \(D^1,\ldots ,D^m\) are the rows of \(D\in \mathbb {R}^{m\times n}\) and \(d:=(d_1,\ldots ,d_m)^T\in \mathbb {R}^m\).
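As an illustration (not part of the original text), the following Python sketch assembles the representation of Example 2 for the toy data \(f(x)=\max (x,-x)=|x|\) with the box constraint \(-2\le x\le 2\); there are no auxiliary variables here (\(k=0\)), so the C-block is omitted, and all function names are hypothetical.

```python
def representation_max_affine(D, d, P, p):
    """Assemble the representation A = [[-D, e, d], [P, 0, p]] from
    Example 2 (no auxiliary variables, k = 0).  A row (Bi, bi, ci)
    encodes the inequality Bi.x + bi*r >= ci for (x, r) in epi f."""
    rows = []
    for Di, di in zip(D, d):
        rows.append(([-v for v in Di], 1, di))   # r >= Di.x + d_i
    for Pi, pi in zip(P, p):
        rows.append((list(Pi), 0, pi))           # P x >= p
    return rows

def in_epigraph(rows, x, r):
    """Check A(x, r, -1) >= 0 row by row."""
    dot = lambda a, b: sum(ai * bi for ai, bi in zip(a, b))
    return all(dot(Bi, x) + bi * r >= ci for Bi, bi, ci in rows)

# f(x) = max(x, -x) = |x| on the box  x >= -2, -x >= -2
A = representation_max_affine(D=[[1], [-1]], d=[0, 0],
                              P=[[1], [-1]], p=[-2, -2])
```

For example, \((x,r)=(1,1.5)\) lies in \(\hbox {epi}\,f\), while \((1,0.5)\) violates \(r\ge |x|\) and \((3,4)\) violates the box constraint.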

Example 3

Let f be a polyhedral convex function and let (V, D) be a V-representation of \(\hbox {epi}\,f\), that is, \(V \in \mathbb {R}^{(n+1)\times r}\) with \(r \ge 1\) and \(D \in \mathbb {R}^{(n+1)\times s}\) where \(s \ge 0\),

$$\begin{aligned} \hbox {epi}\,f := {\left\{ z \in \mathbb {R}^{n+1} |\;\exists \lambda \in \mathbb {R}^r, \mu \in \mathbb {R}^s:\; z = V \lambda + D \mu ,\; e^T \lambda = 1,\; \lambda \ge 0,\; \mu \ge 0\right\} }. \end{aligned}$$

Then

$$\begin{aligned} A = \left( \begin{array}{ccc} (B,b)&\quad C&\quad c \end{array}\right) = \left( \begin{array}{ccc} \mp I &{}\quad (\pm V, \pm D) &{}\quad 0 \\ 0 &{}\quad I &{}\quad 0\\ 0 &{}\quad (\pm e^T, 0) &{}\quad \pm 1 \end{array}\right) \end{aligned}$$

is a representation of f, where \(\pm \) and \(\mp \) are used to express the corresponding equations by pairs of inequalities.

Example 4

Let \(g,h: \mathbb {R}^n \rightarrow \mathbb {R}\cup {\left\{ \infty \right\} }\) be polyhedral convex functions with representations

$$\begin{aligned} A_g = \left( \begin{array}{cccc} B_g&\quad b_g&\quad C_g&\quad c_g\end{array}\right) \quad \text {and} \quad A_h = \left( \begin{array}{cccc} B_h&\quad b_h&\quad C_h&\quad c_h\end{array}\right) . \end{aligned}$$

The infimal convolution of g and h is the polyhedral convex function f with

$$\begin{aligned} \hbox {epi}\,f = \hbox {epi}\,g + \hbox {epi}\,h. \end{aligned}$$

Thus, its representation is \(A_f = \begin{pmatrix} B_f&b_f&C_f&c_f\end{pmatrix}\) with

$$\begin{aligned} (B_f\;b_f) = \begin{pmatrix} \pm I \\ 0 \\ 0 \end{pmatrix}, \quad C_f= \begin{pmatrix} \mp I &{} 0 &{} \mp I &{} 0 \\ (B_g\; b_g) &{} C_g &{} 0 &{} 0 \\ 0 &{} 0 &{} (B_h\; b_h) &{} C_h \end{pmatrix}, \quad c_f = \begin{pmatrix} 0 \\ c_g \\ c_h \end{pmatrix}. \end{aligned}$$
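The epigraph identity \(\hbox {epi}\,f = \hbox {epi}\,g + \hbox {epi}\,h\) can be sanity-checked numerically. The following Python sketch evaluates a discretized infimal convolution on a grid (a crude numerical check of our own, not the polyhedral construction of Example 4); for \(g=|\cdot |\) and \(h=2\,|\cdot |\) the infimal convolution equals \(|\cdot |\), since \(|z| + 2|x-z|\) is minimal at \(z=x\).

```python
def inf_convolution(g, h, grid):
    """Discretized infimal convolution  (g [] h)(x) = inf_z [g(z) + h(x - z)],
    with the infimum approximated over a symmetric grid."""
    return {x: min(g(z) + h(x - z) for z in grid) for x in grid}

grid = [i / 4 for i in range(-8, 9)]          # -2, -1.75, ..., 2
f = inf_convolution(abs, lambda t: 2 * abs(t), grid)
```

On this grid the computed values coincide exactly with \(|x|\).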

Example 5

Let a representation \(A = \begin{pmatrix} B&b&C&c\end{pmatrix}\) of a polyhedral convex function \(f :\mathbb {R}^n \rightarrow \mathbb {R}\cup {\left\{ \infty \right\} }\) be given, then

$$\begin{aligned} A^* = \begin{pmatrix} \pm I &{}\quad 0 &{}\quad \pm B^T &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad \pm C^T &{}\quad 0 \\ 0 &{} \quad 0 &{}\quad \pm b^T &{}\quad \pm 1 \\ 0 &{} \quad 1&{}\quad c^T &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad I &{}\quad 0 \end{pmatrix} \end{aligned}$$

is a representation of the conjugate \(f^* :\mathbb {R}^n \rightarrow \mathbb {R}\cup {\left\{ \infty \right\} }\) of f. The details are shown in the following proposition.

Proposition 6

For a polyhedral convex function \(f :\mathbb {R}^n \rightarrow \mathbb {R}\cup {\left\{ \infty \right\} }\), the following two statements are equivalent:

$$\begin{aligned}&\hbox {epi}\,f = {\left\{ \begin{pmatrix}x\\ r\end{pmatrix} \in \mathbb {R}^{n+1} \bigg |\; \exists u \in \mathbb {R}^k: B x + b r + C u \ge c\right\} }; \end{aligned}$$
(6)
$$\begin{aligned}&\hbox {epi}\,f^* = {\left\{ \begin{pmatrix}x^*\\ r^*\end{pmatrix} \in \mathbb {R}^{n+1} \bigg | \exists v \in \mathbb {R}^m_+: B^T v + x^* = 0,\; b^T v = 1, \; C^T v = 0, \; c^T v + r^* \ge 0\right\} }.\nonumber \\ \end{aligned}$$
(7)

Proof

Assume that (6) is satisfied. Let \((x^*,r^*) \in \hbox {epi}\,f^*\), i.e.,

$$\begin{aligned} r^* \ge f^*(x^*) = \sup _{x \in \mathbb {R}^n} \left( {x^*}^T x - f(x)\right) = \sup _{(x,r)\in \mathrm{epi}~f} \left( {x^*}^T x - r\right) . \end{aligned}$$

This is equivalent to the implication

$$\begin{aligned} (x,r)\in \hbox {epi}\,f \implies r^* \ge {x^*}^T x - r. \end{aligned}$$

By (6), this can be expressed as

$$\begin{aligned} B x + b r + C u \ge c \implies r^* \ge {x^*}^T x - r. \end{aligned}$$

By Farkas’ lemma (see e.g. [18, Theorem 7.20]) we obtain the existence of \(v \in \mathbb {R}^m_+\) such that

$$\begin{aligned} B^T v + x^* = 0,\; b^T v = 1, \; C^T v = 0, \; c^T v + r^* \ge 0, \end{aligned}$$

i.e., (7) holds. The opposite implication follows likewise by taking into account that \(f=f^{**}\) for a polyhedral convex function f. \(\square \)
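For a concrete instance of Proposition 6, take \(f(x)=|x|\), so that \(\hbox {epi}\,f\) is described by \(-x+r\ge 0\) and \(x+r\ge 0\), i.e. \(B=(-1,1)^T\), \(b=(1,1)^T\), \(c=0\) and no C-block. Condition (7) then reduces to a two-variable system solvable by hand and yields \(\hbox {epi}\,f^* = [-1,1]\times \mathbb {R}_+\), as expected for the conjugate of the absolute value. A small Python sketch of this membership test (our illustration):

```python
def in_epi_conjugate(x_star, r_star):
    """Membership test for epi f* via condition (7), for f(x) = |x| with
    B = (-1, 1)^T, b = (1, 1)^T, c = (0, 0)^T and no C-block.
    The system  -v1 + v2 + x* = 0,  v1 + v2 = 1  has the unique candidate
    v1 = (1 + x*)/2,  v2 = (1 - x*)/2;  feasibility then requires
    v >= 0 and c^T v + r* = r* >= 0."""
    v1, v2 = (1 + x_star) / 2, (1 - x_star) / 2
    return v1 >= 0 and v2 >= 0 and r_star >= 0
```

The test accepts exactly the points with \(|x^*|\le 1\) and \(r^*\ge 0\).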

The list of examples could be extended. Note that, in general, it is much more expensive to compute an H-representation or a V-representation of the epigraph of a polyhedral convex function. This problem, however, is important for our studies and will be discussed in Sect. 6.

3 Maximizing a convex function over a polyhedron

A well-known result on maximizing a convex function over a polytope P states that the maximum is attained in at least one vertex of P. In this short section we discuss the case of unbounded polyhedra.

Let \(f:\mathbb {R}^n \rightarrow \mathbb {R}\cup {\left\{ \infty \right\} }\) be convex and let P be a polyhedron. Assume that P has a vertex. We consider the problem

$$\begin{aligned} \max f(x) \quad \text { subject to } \quad x \in P. \end{aligned}$$
(C)

Proposition 7

If Problem (C) has an optimal solution, then some vertex \(x^*\) of P is an optimal solution of (C).

Proof

We denote by \(v^1,\dots ,v^k\) the vertices of P and set \(B:= \hbox {conv}\,{\left\{ v^1,\dots ,v^k\right\} }\). Convexity of f implies that there is a vertex \(v^j\) of B which is an optimal solution of

$$\begin{aligned} \max f(x) \quad \text { subject to }\quad x \in B. \end{aligned}$$

Assume there is \(x \in P {\setminus } B\) with \(f(x) > f(v^j)\). The point x can be represented as the sum \(x=b+d\) of a point \(b \in B\) and a direction \(d \in (0^+ P) \!\setminus \!\{0\}\). Then, \(f(b+d) > f(v^j) \ge f(b)\). Convexity of f implies that \(f(b+nd) \rightarrow \infty \) as \(n \rightarrow \infty \), which contradicts the existence of an optimal solution. \(\square \)

The next statement characterizes the existence of a solution.

Proposition 8

Problem (C) is unbounded if and only if there exists a vertex v of P, an extreme direction d of P and some \(\beta > 0\) with

$$\begin{aligned} f(v)<f(v+ \beta d). \end{aligned}$$

Proof

Let (C) be unbounded. There is a sequence \((x_\nu )\) in P with \((f(x_{\nu })) \rightarrow \infty \) as \(\nu \rightarrow \infty \). For all \(\nu \in \mathbb {N}\), we have

$$\begin{aligned} x_{\nu } = \sum _{i=1}^r \lambda ^\nu _i v^i + \sum _{j=1}^s \mu ^\nu _j d^j,\qquad \sum _{i=1}^r \lambda ^\nu _i = 1, \qquad \lambda ^\nu _i \ge 0,\quad \mu ^\nu _j \ge 0, \end{aligned}$$

where \(v^i\) are the vertices and \(d^j\) the extreme directions of P. Setting

$$\begin{aligned} \alpha ^\nu _{ij} := \frac{\lambda ^\nu _i \mu ^\nu _j}{\beta ^\nu } \quad \text { and } \quad \beta ^\nu := \sum _{k=1}^s \mu ^\nu _k, \end{aligned}$$

we obtain

$$\begin{aligned} x_{\nu } = \sum _{i=1}^r\sum _{j=1}^s \alpha ^\nu _{ij}(v^i + \beta ^\nu d^j), \qquad \sum _{i=1}^r\sum _{j=1}^s \alpha ^\nu _{ij} = 1,\qquad \alpha ^\nu _{ij}\ge 0. \end{aligned}$$

By Proposition 7, we have \(\beta ^\nu \ne 0\) for all sufficiently large \(\nu \). Convexity implies that

$$\begin{aligned} \sum _{i=1}^r\sum _{j=1}^s \alpha ^\nu _{ij} f(v^i + \beta ^\nu d^j) \end{aligned}$$

tends to infinity. Hence \(f(v^i + \beta ^\nu d^j) \rightarrow \infty \) along a subsequence for at least one pair of indices i and j, which proves the first part of the statement.

The converse implication follows from convexity of f along the ray \({\left\{ v+\beta d |\;\beta \ge 0\right\} }\), which belongs to P. \(\square \)

4 The case of g being polyhedral

The algorithm considered in this section is based on an enumeration of the vertices of \(\hbox {epi}\,g\). We start with a reformulation of the DC program (P).

Proposition 9

Problem (P) can be expressed equivalently as

$$\begin{aligned} \min [r - h(x)] \quad \text { subject to }\quad (x,r) \in \hbox {bd}\,\hbox {epi}\,g. \end{aligned}$$
(8)

Proof

Since \(g:\mathbb {R}^n \rightarrow \mathbb {R}\cup {\left\{ \infty \right\} }\) (i.e. the value \(-\infty \) is not allowed), for the graph \(\hbox {gr}\,g:={\left\{ (x,g(x)) |\;x \in \hbox {dom}\,g\right\} }\) of g, we have

$$\begin{aligned} \hbox {gr}\,g \subseteq \hbox {bd}\,\hbox {epi}\,g \subseteq \hbox {epi}\,g = \hbox {gr}\,g + ({\left\{ 0\right\} }\times \mathbb {R}_+). \end{aligned}$$

Let \(\bar{x}\) be an optimal solution of (P). For all \((x,r) \in \hbox {epi}\,g\) (and in particular for all \((x,r) \in \hbox {bd}\,\hbox {epi}\,g\)) we have \(g(\bar{x}) - h(\bar{x}) \le g(x)-h(x) \le r - h(x)\). Since \((\bar{x},g(\bar{x})) \in \hbox {bd}\,\hbox {epi}\,g\), we conclude that \((\bar{x},g(\bar{x}))\) solves (8).

Vice versa, let \((\bar{x},\bar{r}) \in \hbox {bd}\,\hbox {epi}\,g\) be a solution of (8), then

$$\begin{aligned} \forall (x,r) \in \hbox {gr}\,g \subseteq \hbox {bd}\,\hbox {epi}\,g: \quad \bar{r} - h(\bar{x}) \le r - h(x). \end{aligned}$$

We get \(\bar{r}=g(\bar{x})\) and see that \(\bar{x} \in \hbox {dom}\,g\) solves (P). \(\square \)

Assume now that \(g:\mathbb {R}^n \rightarrow \mathbb {R}\cup {\left\{ \infty \right\} }\) is polyhedral convex. Let \({\left\{ F_i|\;i=1,\dots ,k\right\} }\) be the (finite) set of all facets of \(\hbox {epi}\,g\). Then, it is evident that

$$\begin{aligned} \hbox {bd}\,\hbox {epi}\,g = \bigcup _{i=1,\dots ,k} F_i. \end{aligned}$$

The following statement is now obvious.

Corollary 10

For \(i=1,\dots ,k\), let \((x^i,r^i)\) be an optimal solution of

$$\begin{aligned} \min [r - h(x)] \quad \text { subject to }\quad (x,r) \in F_i. \end{aligned}$$
(\(\hbox {P}_i\))

Let \(j \in \hbox {argmin}\,{\left\{ r^i-h(x^i)|\;i=1,\dots ,k\right\} }\). Then, \((x^j,r^j)\) is an optimal solution of (P). Problem (P) has an optimal solution if and only if, for all \(i \in {\left\{ 1,\dots ,k\right\} }\), (\(\hbox {P}_i\)) has an optimal solution.

Assume that (P) has an optimal solution. Then, for every \(i \in {\left\{ 1,\dots ,k\right\} }\), Problem (\(\hbox {P}_i\)) has an optimal solution which is, by Proposition 7, a vertex of the facet \(F_i\) of \(\hbox {epi}\,g\). Since a vertex of a facet F of \(\hbox {epi}\,g\) is also a vertex of \(\hbox {epi}\,g\), an optimal solution of (P) can always be found among the vertices of \(\hbox {epi}\,g\), denoted \(\hbox {vert}\,\hbox {epi}\,g\).

Corollary 11

Assume that (P) has an optimal solution. Every optimal solution \((\bar{x},\bar{r})\) of

$$\begin{aligned} \min [r - h(x)] \quad \text { subject to }\quad (x,r) \in \hbox {vert}\,\hbox {epi}\,g \end{aligned}$$
(9)

is also an optimal solution of (P).

If a representation A of the polyhedral convex function g is given, the vertices of \(\hbox {epi}\,g\) can be computed by solving a polyhedral projection problem, see Sect. 6. If \(\hbox {vert}\,\hbox {epi}\,g\) has been computed, it remains to determine the minimum over finitely many points in problem (9). We obtain the following solution procedure for the DC optimization problem (P).
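The final minimization is elementary once the vertices are available. The following Python sketch (a toy illustration of ours; the vertices of \(\hbox {epi}\,g\) are listed by hand rather than computed by a MOLP-solver) carries it out for \(g(x)=|x|\) with domain \([-2,2]\) and \(h(x)=x^2\):

```python
def primal_step(vertices, h):
    """Final step of the primal approach: minimize r - h(x) over the
    (precomputed) vertices of epi g, cf. Problem (9)."""
    return min(vertices, key=lambda xr: xr[1] - h(xr[0]))

# g(x) = |x| with domain [-2, 2]:  vert epi g = {(-2, 2), (0, 0), (2, 2)};
# h(x) = x^2 is convex, so Corollary 11 applies.
x_bar, r_bar = primal_step([(-2, 2), (0, 0), (2, 2)], h=lambda x: x * x)
```

The minimum of \(|x|-x^2\) over \([-2,2]\) is \(-2\), attained at \(x=\pm 2\), and the sketch indeed returns one of these vertices.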

Assumption 12

  1. (i)

    (P) has an optimal solution,

  2. (ii)

    g is polyhedral convex,

  3. (iii)

    \(\hbox {epi}\,g\) has a vertex,

  4. (iv)

    a representation A of g is known.

Algorithm 13

Assume that Assumptions 12 (i)–(iv) are satisfied.

Step 1. Compute \(\hbox {vert}\,\hbox {epi}\,g\) from the representation A of g by solving the associated polyhedral projection problem, see Sect. 6.

Step 2. Compute an optimal solution \((\bar{x},\bar{r})\) of Problem (9) by enumeration and return \(\bar{x}\).

Corollary 14

Algorithm 13 works correctly if Assumptions 12 (i)–(iv) are satisfied.

Proof

This follows from the considerations above. \(\square \)

Remark 15

Of course one can add linear constraints to Problem (P) in case g is polyhedral. These constraints can be included into a representation of g, see e.g. Example 25.

5 The case of h being polyhedral

We assume now that the function h in Problem (P) is polyhedral. We consider the Toland–Singer dual problem of (P), see [37, 38], that is,

$$\begin{aligned} \min [h^*(y) - g^*(y)] \quad \hbox {subject to } \ {y \in \hbox {dom}\,h^*} \end{aligned}$$
(D)

where \(h^*(y) := \sup _{x \in \mathbb {R}^n} [{y}^Tx - h(x)]\) is the conjugate of h, and likewise for g. Due to the duality theory of Toland and Singer, the optimal objective values of (P) and (D) coincide in case h is closed. This condition is satisfied here, as we assume h to be a polyhedral convex function.
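The coincidence of the optimal values can be checked numerically on a small instance. In the following Python sketch (illustrative; conjugates are approximated by discrete suprema over a grid, so the check is only up to grid accuracy) we take \(g(x)=x^2\) and the closed polyhedral function \(h(x)=|x|\), for which both (P) and (D) have optimal value \(-1/4\):

```python
def conjugate(f, grid):
    """Discrete conjugate  f*(y) = sup_x [y*x - f(x)],  with the sup
    approximated by a maximum over the grid."""
    return lambda y: max(y * x - f(x) for x in grid)

grid = [i / 100 for i in range(-300, 301)]   # [-3, 3], step 0.01
g = lambda x: x * x                          # g*(y) = y^2/4
h = abs                                      # polyhedral convex, closed

primal = min(g(x) - h(x) for x in grid)      # value of (P) on the grid
g_star, h_star = conjugate(g, grid), conjugate(h, grid)
# value of (D); dom h* = [-1, 1] for h = |.|
dual = min(h_star(y) - g_star(y) for y in [i / 100 for i in range(-100, 101)])
```

Both values come out as \(-1/4\), attained at \(x=\pm 1/2\) and \(y=\pm 1\), respectively.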

Since \(g^*\) is convex and \(h^*\) is polyhedral convex, Problem (D) can be solved by applying Algorithm 13 from Sect. 4. As pointed out below, a solution of (P) can then be computed from a solution of (D). We impose the following assumptions, all of which refer to the given problem (P):

Assumption 16

  1. (i)

    (P) has an optimal solution,

  2. (ii)

    h is polyhedral convex,

  3. (iii)

    \(\dim \hbox {epi}\,h = n+1\),

  4. (iv)

    a representation A of h is known,

  5. (v)

    g is closed.

  6. (vi)

    A solution of the following problem exists whenever the problem is bounded for the parameter \(y \in \mathbb {R}^n\):

    $$\begin{aligned} \min _{x \in \mathbb {R}^n} [g(x) - {y}^T x]. \end{aligned}$$
    (10)

The following result relates solutions of (P) and solutions of (D) to each other. We denote by \(\partial f(\bar{x}):= {\left\{ y \in \mathbb {R}^n|\;\forall x \in \mathbb {R}^n: f(x) \ge f(\bar{x}) + {y}^T(x-\bar{x})\right\} }\) the subdifferential of a convex function \(f:\mathbb {R}^n \rightarrow \mathbb {R}\cup {\left\{ \infty \right\} }\) at \(\bar{x} \in \hbox {dom}\,f\). We set \(\partial f(\bar{x}) := \emptyset \) for \(\bar{x} \not \in \hbox {dom}\,f\). For convenience of the reader and with regard to Remark 18 below, we provide a short proof of the following statement.

Proposition 17

(e.g. [24, Proposition 4.7] or [39, Proposition 3.20]) Let \(g:\mathbb {R}^n \rightarrow \mathbb {R}\cup {\left\{ \infty \right\} }\) and \(h:\mathbb {R}^n \rightarrow \mathbb {R}\cup {\left\{ \infty \right\} }\) be proper convex functions. Then the following holds true.

  1. (i)

    If x is an optimal solution of (P), then each \(y \in \partial h(x)\) is an optimal solution of (D).

If, in addition, g and h are closed, a dual statement holds:

  1. (ii)

    If y is an optimal solution of (D), then each \(x \in \partial g^*(y)\) is an optimal solution of (P).

Proof

Let \(y\in \hbox {dom}\,h^*\) be optimal for (D) and \(x\in \partial g^*(y)\). To prove statement (ii), we note that

$$\begin{aligned} \forall y^* \in \hbox {dom}\,h^*: \; h^*(y^*)-g^*(y^*)\ge & {} h^*(y)-g^*(y), \end{aligned}$$
(11)
$$\begin{aligned} g(x) + g^*(y)= & {} {y}^Tx, \end{aligned}$$
(12)

where the latter equation requires g to be closed, compare [36, Theorem 23.5]. Moreover we use Fenchel’s inequality, that is, \(g^*(y^*) + g({z}) \ge {y^*}^T{z}\) for all \({z}\in \mathbb {R}^n\) and all \(y^* \in \mathbb {R}^n\) (and likewise for h). For all \(y^* \in \hbox {dom}\,h^*\) and all \({z} \in \mathbb {R}^n\) we get

$$\begin{aligned} g({z}) + h^*(y^*)- {y^*}^T{z}&\ge h^*(y^*)-g^*(y^*) \mathop {\ge }\limits ^{(11)} h^*(y)- g^*(y)\\&\ge {y}^Tx - h(x) - g^*(y) \mathop {=}\limits ^{(12)} g(x) - h(x). \end{aligned}$$

Taking the infimum over \(y^* \in \hbox {dom}\,h^*\), we get

$$\begin{aligned} \forall {z} \in \hbox {dom}\,g:\; g({z}) - h^{**}({z}) \ge g(x) - h(x). \end{aligned}$$

The claim follows as \(h^{**}({z}) = h({z})\). Here we use that h is assumed to be closed, compare [36, Theorem 12.2]. The proof of (i) is analogous but neither g nor h needs to be closed, compare [36, Theorem 23.5] and take into account the fact that no biconjugation argument is necessary. \(\square \)

Remark 18

In [24, Proposition 4.7] the assumption of g and h being closed for statement (ii) of Proposition 17 is missing. Moreover in [39, Proposition 3.20] closedness of g needs to be assumed. Indeed, consider the following example. Let

$$\begin{aligned} f:\mathbb {R}\rightarrow \mathbb {R}\cup {\left\{ \infty \right\} },\quad f(x):=\left\{ \begin{array}{rcl} \infty &{} \text { if } &{} x < 0\\ 1 &{} \text { if } &{} x=0\\ 0 &{} \text { if } &{} x>0. \end{array}\right. \end{aligned}$$

Then f is not closed and we have

$$\begin{aligned} f^*:\mathbb {R}\rightarrow \mathbb {R}\cup {\left\{ \infty \right\} },\quad f^*(y):=\left\{ \begin{array}{rcl} 0 &{} \text { if } &{} y \le 0\\ \infty &{} \text { if } &{} y>0. \end{array}\right. \end{aligned}$$

Set \(g=f\) and \(h=\hbox {cl}\,f\). Then \(y=0\) solves (D) and \(0 \in \partial g^*(0)\). But \(x=0\) is not a solution of (P). This shows the failure when g is not closed.

Set \(g=\hbox {cl}\,f\) and \(h=f\). Then \(y=0\) solves (D) and \(1 \in \partial g^*(0)\). But \(x=1\) is not a solution of (P). This shows the failure when h is not closed.
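The conjugate \(f^*\) in this remark can be reproduced numerically. The following Python sketch (our illustration) approximates the supremum over \(x>0\) by small positive grid points; for \(y\le 0\) the values stay near 0, while for \(y>0\) they grow without bound as the grid extends:

```python
import math

def f(x):
    """The non-closed convex function from Remark 18."""
    if x < 0:
        return math.inf
    return 1.0 if x == 0 else 0.0

def f_star(y, grid):
    """Discrete conjugate sup_x [y*x - f(x)]; the open condition x > 0
    is approximated by small positive grid points."""
    return max(y * x - f(x) for x in grid)

grid = [i / 1000 for i in range(0, 2001)]    # [0, 2], step 0.001
```

Note that at \(x=0\) the candidate value is \(y\cdot 0 - 1 = -1\), so the supremum for \(y\le 0\) is approached only by points \(x>0\), exactly as the closed-form \(f^*\) above indicates.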

Proposition 19

(e.g. [36, Theorem 23.5]) Let \(g:\mathbb {R}^n \rightarrow \mathbb {R}\cup {\left\{ \infty \right\} }\) be a proper closed convex function. Then \(\bar{x} \in \partial g^*(y)\) if and only if \(\bar{x}\) is an optimal solution of (10).

The next result provides a consequence of Assumption 16 (iii).

Proposition 20

Let \(h:\mathbb {R}^n\rightarrow \mathbb {R}\cup {\left\{ \infty \right\} }\) be a polyhedral convex function such that \(\dim \hbox {epi}\,h = n+1\). Then \(\hbox {epi}\,h^*\) has a vertex.

Proof

The assumption implies that there is a point \(x \in \hbox {int}\,\hbox {dom}\,h\), such that h is affine in a neighborhood of x. Whence \(\partial h(x)\) is a singleton set \({\left\{ \bar{y}\right\} }={\left\{ \nabla h(x)\right\} }\). The Fenchel-Young inequality implies

$$\begin{aligned} \forall z \in \mathbb {R}^n :\quad h^*(z) \ge {z}^Tx - h(x). \end{aligned}$$

Using [36, Theorem 23.5], we see that equality holds if and only if \(z=\bar{y}\). This means that

$$\begin{aligned} H={\left\{ (z,r^*) \in \mathbb {R}^{n+1} |\;r^* = {z}^Tx - h(x)\right\} } \end{aligned}$$

is a supporting hyperplane of \(\hbox {epi}\,h^*\) such that \(\hbox {epi}\,h^* \cap H = {\left\{ (\bar{y},h^*(\bar{y}))\right\} }\), i.e., \((\bar{y},h^*(\bar{y}))\) is a vertex of \(\hbox {epi}\,h^*\). \(\square \)

We obtain the following algorithm:

Algorithm 21

Assume that Assumptions 16 (i)–(vi) are satisfied.

Step 1. Compute a representation \(A^*\) of \(h^*\) from the representation A of h, see Example 5, and compute \(\hbox {vert}\,\hbox {epi}\,h^*\) by solving the associated polyhedral projection problem, see Sect. 6.

Step 2. Compute a minimizer \((\bar{y},\bar{r}^*)\) of \(r^* - g^*(y)\) over \(\hbox {vert}\,\hbox {epi}\,h^*\); then \(\bar{y}\) is an optimal solution of (D).

Step 3. Compute an optimal solution \(\bar{x}\) of (10) for the parameter \(y=\bar{y}\) and return \(\bar{x}\).

Theorem 22

Let the Assumptions 16 (i)–(vi) be satisfied. Then Algorithm 21 works correctly.

Proof

Note first that (P) has a solution \(\bar{x}\) by Assumption 16 (i). A solution \(\bar{x}\) of (P) is by definition a point in \(\hbox {dom}\,g \cap \hbox {dom}\,h\). Since h is polyhedral convex and \(\bar{x} \in \hbox {dom}\,h\), we have \(\partial h(\bar{x})\ne \emptyset \), see e.g. [36, Theorem 23.10]. By Proposition 17 (i), a point \(\bar{y} \in \partial h(\bar{x})\) is a solution to (D). Whence a solution of (D) exists. In Steps (1) and (2) of the algorithm, a solution \(\bar{y}\) of (D) is computed. To see this, we need to check Assumptions 12 (i)–(iv) of Algorithm 13 in Sect. 4, reformulated for problem (D):

  • (D) has an optimal solution, as shown above.

  • \(h^*\) is polyhedral convex, since h is.

  • \(\hbox {epi}\,h^*\) has a vertex. This follows from Proposition 20.

  • A representation A of \(h^*\) is known, as it can be obtained from a representation of h, see Example 5 in Sect. 2.

Since \(\bar{y}\) computed in Steps (1) and (2) is a solution to (D), we have \(\bar{y} \in \hbox {dom}\,g^*\). Hence Problem (10) is bounded (by the definition of the conjugate). In this case, by Assumption 16 (vi), an optimal solution \(\bar{x}\) of (10) exists. By Proposition 19, we get \(\bar{x} \in \partial g^*(\bar{y})\). Proposition 17 (ii) implies that \(\bar{x}\) is an optimal solution of (P). \(\square \)
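A toy run of the dual approach can be written in a few lines of Python (all names are our own, and the vertices of \(\hbox {epi}\,h^*\) are listed by hand rather than computed by a solver). For \(g(x)=x^2\) and \(h(x)=|x|\) we have \(g^*(y)=y^2/4\), Problem (10) is solved by \(x=y/2\), and \(\hbox {vert}\,\hbox {epi}\,h^* = \{(-1,0),(1,0)\}\):

```python
def dual_algorithm(vert_epi_h_star, g_star, solve_10):
    """Toy sketch of the dual approach: minimize r* - g*(y) over the
    (precomputed) vertices of epi h*, then recover a primal solution
    by solving (10) for the dual solution y_bar."""
    y_bar, r_bar = min(vert_epi_h_star, key=lambda yr: yr[1] - g_star(yr[0]))
    return solve_10(y_bar)

# g(x) = x^2  =>  g*(y) = y^2/4 and (10) is solved by x = y/2;
# h(x) = |x|  =>  vert epi h* = {(-1, 0), (1, 0)}
x_bar = dual_algorithm([(-1, 0), (1, 0)],
                       g_star=lambda y: y * y / 4,
                       solve_10=lambda y: y / 2)
```

The recovered primal solution is \(\bar{x}=\pm 1/2\) with optimal value \(g(\bar{x})-h(\bar{x})=-1/4\), matching the dual optimal value.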

Remark 23

One can add convex constraints \(f_i(x)\le 0\) \((i=1,\dots ,m)\) to Problem (P) in case h is polyhedral. Setting \(g(x) = \infty \) whenever one constraint is violated, we maintain convexity of g. If g and \(f_i\) \((i=1,\dots ,m)\) are closed, so is the modified g. The only difference in the algorithm is that the constraints have to be added to Problem (10).

Remark 24

An alternative approach for minimizing a DC function \(f=g-h\) where h is polyhedral is the following: The domain of h can be “partitioned” into finitely many convex polyhedral sections \(S_i\) (such that the interiors of any two sections \(S_i\) and \(S_j\), \(i\ne j\), do not intersect) on each of which h is affine. Minimizing \(g-h\) over such a section \(S_i\) is a convex program with linear constraints. Solving such a convex program for each \(S_i\) and choosing the one with smallest optimal value, we obtain an optimal solution of the original problem. To compute the sections \(S_i\), one needs to solve a polyhedral projection problem (or something similar) involving the epigraph of h, which is not easier than Step 1 of Algorithm 21. The advantages of our algorithm are as follows: First, in case the conjugate of g is known, we have to solve only one convex program. Second, only unconstrained convex programs need to be solved in our algorithm.

6 Polyhedral projection via multiple objective linear programming

For matrices \(B \in \mathbb {R}^{m \times n}\), \(C\in \mathbb {R}^{m \times k}\) and a vector \(c \in \mathbb {R}^m\), we consider the problem

$$\begin{aligned} \text {compute }\quad Y = {\left\{ x \in \mathbb {R}^n |\;\exists u \in \mathbb {R}^k: B x + C u \ge c\right\} }. \end{aligned}$$
(PP)

Solving (PP) essentially means finding a V-representation of Y, see [29] for details. Using the terminology of Sect. 2, (PP) provides a P-representation of Y. The polyhedron Y can be seen as the projection of the polyhedron

$$\begin{aligned} X = {\left\{ (x,u) \in \mathbb {R}^n \times \mathbb {R}^k |\;B x + C u \ge c\right\} } \end{aligned}$$
(13)

onto \(\mathbb {R}^n\).

In [29], the equivalence between the projection problem (PP) and multiple objective linear programming (MOLP) is shown. In particular, [29, Theorem 3] yields that a V-representation of Y can be obtained by solving the multiple objective linear program

$$\begin{aligned} \min _{x\in \mathbb {R}^n,u\in \mathbb {R}^k} \begin{pmatrix} x \\ -e^T x\end{pmatrix} \quad \text { subject to }\quad B x + C u \ge c, \end{aligned}$$
(MOLP)

where \(e=(1,\dots ,1)^T\).

A MOLP-solver such as Bensolve [28, 30] computes not only a solution to (MOLP) in the sense of [21, 27] but also a V-representation of the so-called upper image. The upper image of (MOLP) is the set

$$\begin{aligned} \mathcal {P}:= {\left\{ z \in \mathbb {R}^{n+1} \bigg |\; \exists u \in \mathbb {R}^k,\, \exists x \in \mathbb {R}^n:\; z \ge \begin{pmatrix} x \\ -e^T x\end{pmatrix}, \; Bx + Cu \ge c\right\} }, \end{aligned}$$

whereas the image of (MOLP), in the literature also called feasible set in the objective space, is

$$\begin{aligned} Q := {\left\{ z \in \mathbb {R}^{n+1} \bigg |\; \exists u \in \mathbb {R}^k,\, \exists x \in \mathbb {R}^n:\; z = \begin{pmatrix} x \\ -e^T x\end{pmatrix}, \; Bx + Cu \ge c\right\} }. \end{aligned}$$

Clearly, we have

$$\begin{aligned} \mathcal {P}= Q + \mathbb {R}^{n+1}_+ \end{aligned}$$

and

$$\begin{aligned} Q \subseteq {\left\{ z \in \mathbb {R}^{n+1} |\; e^T z = 0\right\} }. \end{aligned}$$

Thus, a V-representation of Q can be obtained from a V-representation of \(\mathcal {P}\) by omitting all points and directions z with \(e^T z \ne 0\). Evidently, a V-representation of Y can be obtained from a V-representation of Q by omitting the last component of each vector.
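This post-processing is a simple filter. A Python sketch (our illustration, with a numerical tolerance in place of exact arithmetic):

```python
def project_vrep(points, directions, tol=1e-9):
    """From a V-representation of the upper image P of (MOLP), extract a
    V-representation of Y: keep only vectors z with e^T z = 0 (up to a
    tolerance) and drop the last component of each kept vector."""
    keep = lambda zs: [z[:-1] for z in zs if abs(sum(z)) <= tol]
    return keep(points), keep(directions)
```

For instance, the point \((1,0,-1)\) (component sum 0) survives as \((1,0)\), while \((0,2,1)\) and the ordering-cone direction \((0,0,1)\) are discarded.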

7 Application and numerical results

First, we solve an instance of a location problem, then we compare our results with two algorithms from the literature and finally we present two new examples. We used the VLP-solver Bensolve, version 2.0.1 [28, 30], to solve the polyhedral projection problems. VLP stands for vector linear programming, which is a generalization of MOLP. The remaining part was implemented in Matlab\(^{\textregistered }\) R2015b. In particular, we use the linprog command whenever problem (10) needs to be solved, which reduces to a linear program here. All computations were run on a computer with an Intel\(^{\textregistered }\) Core\(^\mathrm{TM}\) i5 CPU at 3.3 GHz.

As both g and h are polyhedral convex functions in our first two examples, we obtain two alternatives to solve the problems. Algorithm 13 developed in Sect. 4 (case g polyhedral) is referred to as the primal algorithm, whereas Algorithm 21 of Sect. 5 (case h polyhedral) is called the dual algorithm. As we will see, both algorithms are justified.

Example 25

Classical location theory is dedicated to the problem of locating a new facility such that all relevant distances, e.g. to customers, are minimized [7, 8, 11, 13, 19, 35, 42]. Nevertheless, since some facilities also cause negative effects such as noise, stench or even danger, such facilities need to be established as far away as possible from nature reserves and residential areas. Examples include an airport, railway station, industrial plant, dump site, power plant or wireless station.

The problem of locating such a so-called semi-obnoxious facility consists in minimizing the weighted sum of distances to the so-called attraction points (for economic reasons) while maximizing the weighted sum of distances to the so-called repulsive points (in order to avoid negative effects).

First attempts to solve location problems with such undesirable facilities appeared in the 1970s [4, 6, 17]. Algorithms based on a DC formulation are presented, for instance, in [3, 9, 31, 40].

It is reasonable to establish the new facility in a given area, city or state, which we assume to be a bounded polyhedral set \(\mathcal {P}=\left\{ x\in \mathbb {R}^n|\,Px\ge p\right\} \subseteq \hbox {dom}\,h\). The DC location problem can be formulated as

$$\begin{aligned} \min \, [g(x) - h(x) ] \; \text { subject to }\; x \in \hbox {dom}\,g, \end{aligned}$$
(L)

with functions \(g,h:\;\mathbb {R}^n\rightarrow \mathbb {R}_+\) defined as

$$\begin{aligned} g(x):=\mathbb {I}_{\mathcal {P}}(x)+\sum _{i=1}^{\overline{M}}\overline{w}_i\gamma _{\overline{B}_i}(x-\overline{a}^{i}),&h(x):=\sum _{i=1}^{\underline{M}}\underline{w}_i\gamma _{\underline{B}_i}(x-\underline{a}^{i}), \end{aligned}$$
(14)

where \(\mathbb {I}_{\mathcal {P}}\) denotes the indicator function that attains the value 0, if \(x\in \mathcal {P}\), and \(+\infty \) otherwise. The parameters \(\overline{a}^1,\ldots ,\overline{a}^{\overline{M}}\in \mathbb {R}^n\), \((\overline{M}\ge 1)\) denote the attracting points with weights \(\overline{w}_1,\ldots ,\overline{w}_{\overline{M}}>0\), and \(\underline{a}^1,\ldots ,\underline{a}^{\underline{M}}\in \mathbb {R}^n\), \((\underline{M}\ge 1)\) denote the repulsive points with weights \(\underline{w}_1,\ldots ,\underline{w}_{\underline{M}}>0\). The polyhedral unit balls \(\overline{B}_1,\ldots ,\overline{B}_{\overline{M}}\) and \(\underline{B}_1,\ldots ,\underline{B}_{\underline{M}}\) induce gauge distances that we associate with \(\overline{a}^1,\ldots ,\overline{a}^{\overline{M}}\) and \(\underline{a}^1,\ldots ,\underline{a}^{\underline{M}}\), respectively.

For a compact polyhedral set \(B\subseteq \mathbb {R}^n\), which contains the origin in its interior, the polyhedral gauge distance \(\gamma _B:\;\mathbb {R}^n\rightarrow \mathbb {R}\) from \(a\in \mathbb {R}^n\) to \(x\in \mathbb {R}^n\) is defined as \(\gamma _B(x-a):=\min \left\{ \lambda \ge 0|\,x-a\in \lambda B\right\} \), see [11, 36].
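For a ball given in the H-form \(B={\left\{ x\,|\; \hat{B}x\le e\right\} }\) used below, the gauge admits a simple closed form: \(x-a\in \lambda B\) holds iff \(\hat{B}(x-a)\le \lambda e\), so the minimal \(\lambda \ge 0\) is the largest entry of \(\hat{B}(x-a)\), clipped at zero. A small Python sketch (the unit-square example is our own illustration):

```python
import numpy as np

def gauge(B_hat, z):
    """gamma_B(z) for B = {x : B_hat @ x <= e}: z lies in lambda*B iff
    B_hat @ z <= lambda * e, so the least lambda >= 0 is max(0, max_i (B_hat z)_i)."""
    return max(0.0, float(np.max(B_hat @ np.asarray(z, float))))

# The unit square [-1,1]^2 in H-form; its gauge is the maximum norm
B_hat = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
```

For this choice of B, the gauge coincides with the maximum norm \(\Vert z\Vert _\infty \).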

The epigraph of g is given by

$$\begin{aligned} \hbox {epi}\,g&=\left\{ (x,r)\in \mathcal {P}\times \mathbb {R}|\; r\ge \sum _{i=1}^{\overline{M}}\overline{w}_i\gamma _{\overline{B}_i}(x-\overline{a}^{i})\right\} \\&=\left\{ (x,r)\in \mathcal {P}\times \mathbb {R}|\; r\ge \sum _{i=1}^{\overline{M}}\overline{w}_i\min \left\{ \lambda _i\ge 0|\;x-\overline{a}^i\in \lambda _i\overline{B}_i \right\} \right\} \\&=\left\{ (x,r)\in \mathcal {P}\times \mathbb {R}|\;\exists \lambda \in \mathbb {R}^{\overline{M}}_+:\; r\ge \sum _{i=1}^{\overline{M}}\lambda _i\overline{w}_i,\;x-\overline{a}^i\in \lambda _i\overline{B}_i \right\} \\&=\left\{ (x,r)\in \mathbb {R}^{n+1}|\;\exists \lambda \in \mathbb {R}^{\overline{M}}:\; r\ge \sum _{i=1}^{\overline{M}}\lambda _i\overline{w}_i,\;\hat{\overline{B}}_i(x-\overline{a}^i)\le \lambda _ie;\; Px\ge p;\;\lambda \ge 0 \right\} , \end{aligned}$$

where \(\hat{\overline{B}}_1,\ldots ,\hat{\overline{B}}_{\overline{M}}\) are the matrices defining the polyhedral unit balls \(\overline{B}_1,\ldots ,\overline{B}_{\overline{M}}\), respectively, i.e., \(x\in \overline{B}_i\) if and only if \(\hat{\overline{B}}_ix\le e\) for \(i=1,\ldots ,\overline{M}\).
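The last line of the derivation can be assembled into a single inequality system \(A z \le b\) in the lifted variable \(z = (x, r, \lambda )\). The following Python sketch (function name and the small test instance are our own) builds this system:

```python
import numpy as np

def epi_g_hrep(B_hats, a_bars, w_bars, P, p):
    """Assemble A, b such that epi g = {(x,r) : exists lam with A @ (x,r,lam) <= b},
    following the last line of the derivation: r >= sum_i w_i lam_i,
    B_i (x - a^i) <= lam_i e, P x >= p, lam >= 0."""
    n, M = P.shape[1], len(B_hats)
    rows_A, rows_b = [], []
    # -r + sum_i w_i lam_i <= 0
    rows_A.append(np.concatenate([np.zeros(n), [-1.0], np.asarray(w_bars, float)]))
    rows_b.append([0.0])
    for i, (Bh, a) in enumerate(zip(B_hats, a_bars)):
        k = Bh.shape[0]
        lam_cols = np.zeros((k, M)); lam_cols[:, i] = -1.0
        rows_A.append(np.hstack([Bh, np.zeros((k, 1)), lam_cols]))  # Bh x - lam_i e <= Bh a^i
        rows_b.append(Bh @ a)
    rows_A.append(np.hstack([-P, np.zeros((P.shape[0], 1 + M))]))   # P x >= p
    rows_b.append(-np.asarray(p, float))
    rows_A.append(np.hstack([np.zeros((M, n + 1)), -np.eye(M)]))    # lam >= 0
    rows_b.append(np.zeros(M))
    A = np.vstack([np.atleast_2d(r) for r in rows_A])
    b = np.concatenate([np.atleast_1d(r) for r in rows_b])
    return A, b
```

A point \((x,r)\) belongs to \(\hbox {epi}\,g\) iff some \(\lambda \) makes \((x,r,\lambda )\) feasible for this system.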

A representation \(\overline{A}=(\overline{B},\overline{b},\overline{C},\overline{c})\) of g is given by

figure e

A representation of h is given likewise.

The epigraphs \(\hbox {epi}\,g^*\) and \(\hbox {epi}\,h^*\) and corresponding representations can easily be obtained by applying Example 5 and Proposition 6. Moreover, Assumptions 12 (i)–(iv) in Sect. 4 (case g polyhedral) and 16 (i)–(vi) in Sect. 5 (case h polyhedral) are satisfied and hence, Algorithms 13 and 21 can be applied in order to solve the location problem (L).

We note further that the conjugates of g and h can be written as

$$\begin{aligned} g^*(y)&=\min \biggl \{\sum _{i=1}^{\overline{M}}\left[ {\overline{a}^i}^Ty^i+\mathbb {I}_{\overline{w}_i\overline{B}_i^*}(y^i)\right] +\sigma _{\mathcal {P}}(y^0)\bigg \vert \; \sum _{i=0}^{\overline{M}} y^{i}=y\biggr \} \end{aligned}$$
(15)
$$\begin{aligned} h^*(y)&=\min \biggl \{\sum _{i=1}^{\underline{M}}\left[ {\underline{a}^i}^Ty^i+\mathbb {I}_{\underline{w}_i\underline{B}_i^*}(y^i)\right] \bigg \vert \; \sum _{i=1}^{\underline{M}} y^{i}=y\biggr \} \end{aligned}$$
(16)

see [41], where \(B^*:=\left\{ y\in \mathbb {R}^n|\,\forall x\in B:\;x^Ty\le 1\right\} \) denotes the dual unit ball of B and \(\sigma \) denotes the support function. If B is polyhedral, then so is \(B^*\).
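Since the supremum of \(x^Ty\) over a polytope is attained at a vertex, an H-representation of \(B^*\) can be read off directly from the vertices of B. A brief sketch (the unit-square example is our own; its dual ball is the 1-norm unit ball):

```python
import numpy as np

def dual_ball_hrep(vertices):
    """For a polytope B = conv(vertices) with 0 in its interior, the dual ball
    B* = {y : x^T y <= 1 for all x in B} equals {y : v^T y <= 1 for each vertex v},
    because the maximum of x^T y over B is attained at a vertex."""
    V = np.asarray(vertices, float)
    return V, np.ones(V.shape[0])          # B* = {y : V @ y <= ones}

# Dual of the maximum-norm unit square is the cross-polytope {y : |y_1|+|y_2| <= 1}
V, ones = dual_ball_hrep([(1, 1), (1, -1), (-1, 1), (-1, -1)])
```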

We solve several instances of (L) in the plane. Tables 1 and 2 show the computational results of the primal and dual algorithm, respectively, for varying numbers of attraction and repulsive points. In most cases, the primal algorithm performs better.

Table 1 Computational results (running time in seconds) for Example 25 using the primal algorithm (case g polyhedral)
Table 2 Computational results (running time in seconds) for Example 25 using the dual algorithm (case h polyhedral)

Remark 26

In Step (2) of Algorithm 21, \(g^*(y^i)\) is calculated for all vertices \((y^i, s^i)\), \(i=1,\ldots ,k\), of \(\hbox {epi}\,h^*\) obtained in Step (1). In general, we cannot directly calculate these objective values; see for example the representation (15), which constitutes a linear program. In the general setting, the convex program (10) needs to be solved for each vertex. On the other hand, in Step (2) of Algorithm 13, we need to calculate \(h(x^i)\) for all vertices \((x^i, r^i)\), \(i=1,\ldots ,k\), of \(\hbox {epi}\,g\) obtained in Step (1). If (in case both g and h are polyhedral) only a representation of h (i.e. a P-representation of \(\hbox {epi}\,h\)) is given, a general procedure to obtain the values \(h(x^i)\) is to solve the linear program \(\min r\) s.t. \((x^i,r) \in \hbox {epi}\,h\). However, in the present example (and often in practice) these objective values can be calculated directly, i.e. no additional linear program needs to be solved. This is an advantage of the primal algorithm for our example from locational analysis.
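For the general procedure just mentioned, the LP \(\min r\) s.t. \((x^i,r)\in \hbox {epi}\,h\) becomes one-dimensional once \(x^i\) is fixed, so it can even be solved by inspecting the constraint rows. A Python sketch (function name and test instance are our own):

```python
import numpy as np

def h_of(C, c, x):
    """Evaluate h(x) from a P-representation epi h = {(x,r) : C @ (x,r) <= c}
    by solving  min r  s.t.  (x, r) in epi h.  With x fixed, each row with a
    negative coefficient on r yields a lower bound on r, so the LP reduces to
    taking the largest of these bounds; rows without r only affect feasibility."""
    Cx, cr = C[:, :-1], C[:, -1]           # split coefficients of x and of r
    rhs = c - Cx @ x
    if np.any(rhs[cr == 0] < 0):
        return np.inf                      # x lies outside dom h
    neg = cr < 0
    return float(np.max(rhs[neg] / cr[neg]))

# Sketch: h(x) = |x| on R, i.e. epi h = {(x,r) : x - r <= 0, -x - r <= 0}
C = np.array([[1.0, -1.0], [-1.0, -1.0]])
c = np.zeros(2)
```

For a general H-representation one would instead call an LP solver, as in the Matlab implementation.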

Nevertheless, we also observe that for the primal algorithm an increasing number of attraction points influences the running time much more than an increasing number of repulsive points. It is vice versa for the dual algorithm, which leads to settings, such as \(\overline{M}=100\), \(\underline{M}=20\), in which the dual algorithm performs even better. Thus, both algorithms have their justification.

Example 27

Ferrer, Bagirov and Beliakov [15] introduce the DC extended cutting angle method (DCECAM) and compare it with the DC prismatic algorithm (DCPA) described in [14]. We compare Algorithms 13 and 21 with these two methods by means of Example 10 in [15]: consider a DC programming problem (P) with

$$\begin{aligned} g(x) = |x_1-1|+200 \sum _{i=2}^n \max {\left\{ 0,|x_{i-1}|-x_i\right\} } \end{aligned}$$

and

$$\begin{aligned} h(x) = 100 \sum _{i=2}^n \left( |x_{i-1}|-x_i\right) . \end{aligned}$$

In Table 3 we present numerical results for the primal and dual algorithm developed in this article and compare them with the results of DCECAM and DCPA obtained in [15]. We see that both of our algorithms perform better than DCECAM and DCPA, though it should be noted that the latter algorithms are designed for a more general problem class (non-polyhedral DC problems). In particular, the dual method is preferable for this example. Note that for \(n=10\) we need to solve a multiple objective linear program with \(n+2=12\) objectives.

Table 3 Computational results for Example 10 in [15]
Table 4 Running times in seconds for Example 28
Table 5 Numerical results for Example 29

Example 28

Let \(Q \in \mathbb {R}^{n \times n}\) be a positive semi-definite symmetric matrix. The problem of maximizing the convex quadratic function \(h:\mathbb {R}^n \rightarrow \mathbb {R}\), \(h(x) = x^T Q x\), over a polyhedral convex set \(S \subseteq \mathbb {R}^n\) can be reformulated as a DC program by letting \(g=\mathbb {I}_S\), i.e. the indicator function of S, and minimizing \(f=g-h\). For example, let \(S:= {\left\{ x \in \mathbb {R}^n |\;-e \le x \le e\right\} }\) be an n-dimensional hypercube. As h is convex, Q has only non-negative eigenvalues. We assume that Q has at most \(m \le n\) positive eigenvalues; equivalently, Q can be expressed as \(Q=P^T P\) for some \(P \in \mathbb {R}^{m \times n}\). For example, let the entries of P be defined by

$$\begin{aligned} P_{ij} = \lfloor m \cdot \sin ((j-1)\cdot m+i)\rfloor , \end{aligned}$$

where \(\lfloor x\rfloor := \max {\left\{ z \in \mathbb {Z}| z \le x\right\} }\). Setting \(y:=Px\) and \(T = {\left\{ y \in \mathbb {R}^m | \exists x \in S: y = Px\right\} }\), we obtain the equivalent problem of maximizing \(\bar{h}:\mathbb {R}^m \rightarrow \mathbb {R}\), \(\bar{h}(y) = y^T y\), over the polyhedron \(T\subseteq \mathbb {R}^m\). For \(n=10\) and \(m=4\) we have

$$\begin{aligned} P=\left( \begin{array}{rrrrrrrrrr} 3&{}\quad -4&{}\quad 1&{}\quad 1&{}\quad -4&{}\quad 3&{}\quad -1&{}\quad -3&{}\quad 3&{}\quad -3\\ 3&{}\quad -2&{}\quad -3&{}\quad 3&{}\quad -4&{}\quad -1&{}\quad 3&{}\quad -4&{}\quad 2&{}\quad 1\\ 0&{}\quad 2&{}\quad -4&{}\quad 2&{}\quad 0&{}\quad -4&{}\quad 3&{}\quad -2&{}\quad -2&{}\quad 3\\ -4&{}\quad 3&{}\quad -3&{}\quad -2&{}\quad 3&{}\quad -4&{}\quad 1&{}\quad 2&{}\quad -4&{}\quad 2\\ \end{array}\right) \end{aligned}$$
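The matrix P can be reproduced from the defining formula, and the objective \(h(x)=x^TQx=\Vert Px\Vert ^2\) can be evaluated at the reported solution. A short Python sketch (function and variable names are our own):

```python
import math

def make_P(m, n):
    """Entries P_ij = floor(m * sin((j-1)*m + i)) with indices i, j starting at 1."""
    return [[math.floor(m * math.sin((j - 1) * m + i)) for j in range(1, n + 1)]
            for i in range(1, m + 1)]

P = make_P(4, 10)                      # reproduces the 4 x 10 matrix above

# Objective value x^T Q x = ||P x||^2 at the reported optimal solution
x = [1, -1, 1, 1, -1, 1, -1, -1, 1, -1]
val = sum(sum(Pi[j] * x[j] for j in range(len(x))) ** 2 for Pi in P)
```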

and the optimal solution obtained by Algorithm 13 applied to the reformulated problem is

$$\begin{aligned} x = ( 1, -1, 1, 1, -1, 1, -1, -1, 1, -1)^T. \end{aligned}$$

In Table 4, further numerical results for this example can be found.

Example 29

Let \(g:\mathbb {R}^n\rightarrow \mathbb {R}\), \(g(x) = x^T Q x\) for some positive definite symmetric matrix Q. Then the conjugate is \(g^*(y) = \frac{1}{4} y^T Q^{-1} y\) and the optimal solution of (10) is \(x=\frac{1}{2}Q^{-1} y\). For example, let \(Q=P^T P\), where \(P \in \mathbb {R}^{n \times n}\) is defined by \(P_{ij} = 1\) for \(i \ge j\) and \(P_{ij}=0\) otherwise; then Q is positive definite. Further, we set
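The stated conjugate can be verified directly from the definition \(g^*(y) = \sup _x [y^Tx - g(x)]\): since Q is positive definite, the concave function \(x \mapsto y^Tx - x^TQx\) attains its supremum where the gradient \(y - 2Qx\) vanishes, i.e. at \(x=\frac{1}{2}Q^{-1}y\), which is precisely the optimal solution of (10) and yields

$$\begin{aligned} g^*(y) = y^T\left( \tfrac{1}{2}Q^{-1}y\right) - \left( \tfrac{1}{2}Q^{-1}y\right) ^T Q \left( \tfrac{1}{2}Q^{-1}y\right) = \tfrac{1}{2}\,y^TQ^{-1}y - \tfrac{1}{4}\,y^TQ^{-1}y = \tfrac{1}{4}\,y^TQ^{-1}y. \end{aligned}$$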

$$\begin{aligned} h(x) = \sum _{i=2}^n \left( |x_{i-1}|-x_i\right) . \end{aligned}$$

Some numerical results of minimizing \(f = g - h\) using Algorithm 21 are shown in Table 5.

8 Conclusion and final remarks

It has been demonstrated that MOLP solvers provide a very easy way to solve DC programs in which at least one of the two functions g, h in the objective \(f=g-h\) is polyhedral. In case both g and h are polyhedral, the investigations result in two different algorithms, called the primal and the dual algorithm. The running time of the primal algorithm depends mainly on the effort to compute the vertices of \(\hbox {epi}\,g\) by the MOLP solver; likewise, the dual algorithm depends on the vertices of \(\hbox {epi}\,h^*\). The proposed methods work well if the dimension of \(\hbox {epi}\,g\) in Algorithm 13 or of \(\hbox {epi}\,h^*\) in Algorithm 21 is not too high (up to 11 in the examples above).

In this study we consider functions decomposed as \(f = g-h\), where g (resp. h) is polyhedral. The examination of such a decomposition is motivated by applications such as the problem of locating semi-obnoxious facilities, see Example 25. Nevertheless, one may wish to characterize the class of all functions that admit such a structure and to formulate an algorithm that computes it. It is known that twice continuously differentiable functions are DC [20, 22]. A DC function \(f=g-h\) with g polyhedral (resp. h polyhedral) is piecewise concave (resp. piecewise convex). Here “piecewise” means that the domain of f can be decomposed into finitely many convex polyhedra \(P_i\) with pairwise disjoint interiors such that f is concave (resp. convex) on each \(P_i\). Moreover, f is locally Lipschitz on the interior of its domain, as is every concave (or convex) function. An interesting question is whether the opposite implication holds: Can every piecewise concave (resp. convex) function f that is locally Lipschitz on the interior of its domain be decomposed as \(f = g-h\), where g (resp. h) is polyhedral? Answering this question and providing an appropriate algorithm are left as open problems.