
1 Introduction

Fractional optimization is quite challenging and arises in various economic and non-economic applications, whenever one or several ratios are to be optimized. Let us mention a few examples, mainly following the surveys by Schaible [8, 19, 20], where numerous other applications can be found. Numerators and denominators in the ratios may represent cost, capital, profit, risk, time, etc. Fractional programming is closely related to the associated multiple-objective optimization problem, in which a number of ratios are to be maximized simultaneously. Thus, the objective function of a fractional program can be viewed as a utility function expressing a compromise between the different objective functions of the multiple-objective problem. Other applications include a multistage stochastic shipping problem [1, 7], profit maximization under fixed cost [4], various models in cluster analysis [17], multi-objective bond portfolio selection [13], and queuing location problems [5].

It is known that, without additional assumptions, the sum-of-ratios program is NP-complete [9]. Surveys on methods for solving this problem can be found in [8, 20, 28, 29]. According to these surveys, the majority of the methods make restrictive assumptions on either the concavity or the linearity of the ratios. When the ratios are nonlinear, the most popular techniques are based on the Branch and Bound approach, see, e.g., [2, 6].

We propose reducing a fractional problem to an optimization problem with nonconvex constraints [10, 23], as was mentioned in [6], with a subsequent application of the Global Search Theory for solving this class of nonconvex problems [21–23, 25, 26].

The outline of the paper is as follows. In Sect. 2 we substantiate the reduction of the sum-of-ratios fractional problem to an optimization problem with nonconvex constraints. In Sect. 3 we recall the local search method from [21], which is based on linearization, at the current point, of the functions defining the basic nonconvexity of the problem with d.c. constraints. In Sect. 4 we show how to explicitly represent the nonconvex functions describing the constraints of the problem in question as differences of two convex functions (the d.c. representation). The final section offers computational testing of the local search method on fractional program instances with a small number of variables and terms in the sum. We use examples found in the literature as well as randomly generated problems of higher dimension.

2 Reduction to the Problem with Nonconvex Constraints

Consider the following fractional optimization problem [3, 20]

$$f(x):=\sum \limits _{i=1}^m\frac{\psi _i(x)}{\varphi _i(x)}\downarrow \min \limits _x,\;\;x\in S,\qquad \qquad (\mathcal{P}_{0}) $$

where \(\psi _i:{I\!R}^n \rightarrow {I\!R},\;\;\varphi _i:{I\!R}^n \rightarrow {I\!R},\;\;\varphi _i(x)> 0,\;\forall x\in S,\;i=1,\ldots , m\), and \(S\subset {I\!R}^n\) is a convex set.
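As a simple illustration, the sketch below (Python, with hypothetical affine \(\psi _i\), \(\varphi _i\) of our own choosing, not taken from the paper) evaluates the sum-of-ratios objective of Problem \((\mathcal{P}_0)\).

```python
import numpy as np

# Hypothetical affine data (our own example): psi_i(x) = <a^i, x> + b_i,
# phi_i(x) = <c^i, x> + d_i, chosen so that phi_i(x) > 0 on S = [0, 1]^2.
a = np.array([[1.0, 2.0], [0.5, -1.0]])
b = np.array([3.0, 4.0])
c = np.array([[1.0, 0.0], [0.0, 1.0]])
d = np.array([2.0, 2.0])

def f(x):
    """Sum-of-ratios objective of Problem (P0)."""
    psi = a @ x + b
    phi = c @ x + d
    return float(np.sum(psi / phi))

print(f(np.array([0.5, 0.5])))  # objective value at an interior point of S
```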

Proposition 1

(i) Let the pair \((x_*,\alpha _*)\in {I\!R}^n \times {I\!R}^m\) be a solution to the following problem:

$$\sum \limits _{i=1}^m \alpha _i\downarrow \min \limits _{(x, \alpha )},\;\;x\in S,\;\; \displaystyle \frac{\psi _i(x)}{\varphi _i(x)}=\alpha _i,\;\;i=1,\ldots , m. \qquad \qquad (\mathcal{P}_{1}) $$

Then \(x_*\) is the solution to Problem \((\mathcal{P}_0)\) and \(f(x_*)=\sum \limits _{i=1}^m \alpha _{*i}\).

(ii) Conversely, if \(x_*\) is the solution to Problem \((\mathcal{P}_0)\) (\(x_*\in Sol(\mathcal{P}_0)\)), then the vector \(\alpha _*=(\alpha _{*1},\ldots ,\alpha _{*m})^T\in {I\!R}^m\) defined as \(\alpha _{*i}=\displaystyle \frac{\psi _i(x_*)}{\varphi _i(x_*)},\;\;i=1,\ldots , m,\) is part of the solution \((x_*,\alpha _*)\) to Problem \((\mathcal{P}_1)\).

Proof

(i) Let \((x_*,\alpha _*)\in Sol(\mathcal{P}_1)\), i.e. \(\alpha _{*i}=\displaystyle \frac{\psi _i(x_*)}{\varphi _i(x_*)},\) \(i=1,\ldots , m,\;x_*\in S,\) and \(\sum \limits _{i=1}^m \alpha _{*i}\le \sum \limits _{i=1}^m \alpha _i\) for all \(\alpha _i:\;\;\exists x\in S,\;\;\displaystyle \frac{\psi _i(x)}{\varphi _i(x)}=\alpha _i,\) \(i=1,\ldots , m,\)

$$f(x_*)=\sum \limits _{i=1}^m\frac{\psi _i(x_*)}{\varphi _i(x_*)}=\sum \limits _{i=1}^m \alpha _{*i}\le \sum \limits _{i=1}^m \alpha _i=\sum \limits _{i=1}^m\frac{\psi _i(x)}{\varphi _i(x)}=f(x)\;\;\;\;\forall x\in S. $$

Therefore, \(x_*\) is the solution to Problem \((\mathcal{P}_0)\).

(ii) Now let \(x_*\in Sol(\mathcal{P}_0)\). Then

$$\begin{aligned} f(x_*)=\sum \limits _{i=1}^m\frac{\psi _i(x_*)}{\varphi _i(x_*)}\le \sum \limits _{i=1}^m\frac{\psi _i(x)}{\varphi _i(x)}=f(x)\;\;\;\;\forall x\in S. \end{aligned}$$
(1)

Define \(\alpha _{*i}:=\displaystyle \frac{\psi _i(x_*)}{\varphi _i(x_*)},\;i=1,\ldots , m,\) and consider the set

$$D_\alpha =\left\{ \alpha \in {I\!R}^m\;:\;\exists x\in S,\;\;\alpha _i=\displaystyle \frac{\psi _i(x)}{\varphi _i(x)}\right\} .$$

Then (1) implies \(\sum \limits _{i=1}^m \alpha _{*i}=f(x_*)\le f(x)=\sum \limits _{i=1}^m \alpha _i\) for every \(\alpha \in D_\alpha \) (with \(x\in S\) generating \(\alpha \)). Therefore \((x_*,\alpha _*)\in Sol(\mathcal{P}_1)\).    \(\square \)

Proposition 2

Let the pair \((x_*,\alpha _*)\in {I\!R}^n \times {I\!R}^m\) be a solution to the following problem:

$$ \sum \limits _{i=1}^m \alpha _i\downarrow \min \limits _{(x, \alpha )},\;\;x\in S,\;\; \displaystyle \frac{\psi _i(x)}{\varphi _i(x)}\le \alpha _i,\;\;i=1,\ldots , m.\qquad \qquad (\mathcal{P}_{2}) $$

Then

$$\begin{aligned} \displaystyle \frac{\psi _i(x_*)}{\varphi _i(x_*)}=\alpha _{*i},\;\;i=1,\ldots , m. \end{aligned}$$
(2)

Proof

Let \((x_*,\alpha _*)\in Sol(\mathcal{P}_2)\). Suppose that

$$\begin{aligned} \exists j\in \{1,2,\ldots ,m\}:\; \displaystyle \frac{\psi _j(x_*)}{\varphi _j(x_*)}<\alpha _{*j}, \end{aligned}$$
(3)

and construct \( \hat{\alpha }: \;\hat{\alpha _i}=\alpha _{*i}\;\;\forall i\ne j,\;\;\hat{\alpha _j}=\displaystyle \frac{\psi _j(x_*)}{\varphi _j(x_*)}\). It can be readily seen that \((x_*,\hat{\alpha })\) is a feasible pair to Problem \((\mathcal{P}_2)\). The assumption (3) implies

$$\sum \limits _{i=1}^m \alpha _{*i}= \sum \limits _{i\ne j} \alpha _{*i}+ \alpha _{*j} >\sum \limits _{i=1}^m \hat{\alpha _{i}}.$$

This means that, in Problem \((\mathcal{P}_2)\), the pair \((x_*,\hat{\alpha })\) is better than the pair \((x_*,\alpha _*)\); hence \((x_*,\alpha _*)\notin Sol(\mathcal{P}_2)\). This contradiction proves the equality (2).   \(\square \)

Corollary 1

Any solution \((x_*,\alpha _*)\in {I\!R}^n \times {I\!R}^m\) to Problem \((\mathcal{P}_2)\) will be a solution to Problem \((\mathcal{P}_1)\), and, therefore, will be a solution to Problem \((\mathcal{P}_0)\).

Remark 1

The inequality constraints in Problem \((\mathcal{P}_2)\) can be replaced by the equivalent constraints \({\psi _i(x)} - \alpha _{i}{\varphi _i(x)}\le 0,\;\;i=1,\ldots , m,\) since \(\varphi _i(x) > 0\;\;\forall x\in S\). This leads to the following problem with m nonconvex constraints:

$$\;\;\;\;\;\; \sum \limits _{i=1}^m \alpha _i\downarrow \min \limits _{(x, \alpha )},\;\;x\in S,\;\; \psi _i(x)- \alpha _{i}\varphi _i(x)\le 0,\;\;i=1,\ldots , m. \;\;\;\;\;\;\;(\mathcal{P}) $$

It is easy to see that Problem \((\mathcal{P})\) is a global optimization problem with a nonconvex feasible set (see, e.g., [10, 23]), so we can apply the Global Search Theory for solving this class of nonconvex problems [21–23, 25, 26]. The theory allows one to construct an algorithm for solving problems with nonconvex constraints. The algorithm comprises two principal stages: (a) a local search, which provides an approximately critical point; (b) procedures for escaping from critical points.
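The reduction can be illustrated computationally: given a point x feasible for \((\mathcal{P}_0)\), setting \(\alpha _i=\psi _i(x)/\varphi _i(x)\) yields a pair \((x,\alpha )\) feasible for \((\mathcal{P})\) whose objective value \(\sum _i\alpha _i\) equals f(x), as Proposition 1 and Remark 1 state. Below is a minimal sketch with hypothetical data of our own choosing.

```python
import numpy as np

def lift_to_P(x, psi, phi):
    """Given x feasible for (P0), return (x, alpha) feasible for Problem (P);
    psi(x), phi(x) are the m-vectors of numerator/denominator values, phi(x) > 0."""
    alpha = psi(x) / phi(x)
    # constraints of (P): psi_i(x) - alpha_i * phi_i(x) <= 0 (here: equalities)
    assert np.all(psi(x) - alpha * phi(x) <= 1e-12)
    return x, alpha

# toy data (hypothetical)
psi = lambda x: np.array([x[0] + 1.0, 2.0 * x[1] + 3.0])
phi = lambda x: np.array([x[0] + 2.0, x[1] + 4.0])
x0 = np.array([0.2, 0.7])
_, alpha0 = lift_to_P(x0, psi, phi)
print(np.sum(alpha0), np.sum(psi(x0) / phi(x0)))  # the two values coincide
```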

In the next section, we shall consider a local search method.

3 Local Search for Problem with D.C. Constraints

Local search methods (LSMs) play an important role in the search for a global solution to nonconvex problems, since they provide so-called critical (stationary) points that might be considerably better than a simple feasible point. Moreover, if a starting point happens to be rather close to the global solution, an LSM may even deliver the global solution.

In order to find a local solution to Problem \((\mathcal{P})\), we apply a special LSM [21].

Let us consider the following problem with d.c. constraints:

$$\begin{aligned} \left. \begin{array}{c} f_0(x) \downarrow \min \limits _x,\quad x\in S,\\ f_i(x):=g_i(x)-h_i(x)\le 0,\;i\in I\triangleq \{1,\dots ,m\}, \end{array} \right\} \end{aligned}$$
(4)

where the functions \(f_0\) and \(g_i,h_i\), \(i\in I\), as well as the set \(S\subset {I\!R}^n\), are convex. Further, suppose that the feasible set \(D:=\{\, x\in S \mid f_i(x) \le 0,\;\; i\in I \,\} \) of the problem (4) is nonempty and that the optimal value \(\mathcal{V}(4):= \inf \limits _x \{ f_0(x) \mid x\in D \}\) of the problem (4) is finite: \(\mathcal{V}(4) > -\infty .\)

Furthermore, assume that a feasible starting point \(x^0\in D\) is given and that, after several iterations, the method has produced the current iterate \(x^s\in D\), \(s\in \mathbb {Z}_+=\{0,1,2,\ldots \}\).

In order to propose a local search method for the problem (4), we apply the classical idea of linearization with respect to the basic nonconvexity of the problem (i.e., with respect to \(h_i(\cdot ),\;i\in I\)) at the point \(x^s\) [21]. Thus, we obtain the following linearized problem:

$$ \left. \begin{array}{c} f_0(x) \downarrow \min \limits _x,\quad x\in S,\\ \varphi _{is}(x):=g_i(x)-\langle \nabla h_i(x^s),x-x^s\rangle -h_i(x^s)\le 0,\quad i\in I. \end{array} \right\} \qquad \qquad (\mathcal {PL}_s) $$

Suppose the point \(x^{s+1}\) is provided by solving Problem \((\mathcal {PL}_s)\), so that

$$x^{s+1}\in D_s= \{ x\in S \mid g_i(x) - \langle \nabla h_i(x^s), x-x^s\rangle - h_i(x^s) \le 0,\quad i\in I\}$$

and inequality \( f_{0} (x^{s+1}) \le \mathcal {V} (\mathcal {PL}_s) + \delta _s \) holds. Here \(\mathcal {V}(\mathcal {PL}_s)\) is the optimal value to Problem \((\mathcal {PL}_s)\):

$$ \mathcal {V}_s :=\mathcal {V}(\mathcal {PL}_s)\triangleq \inf \limits _x \{f_{0}(x) \mid x\in S,\; \varphi _{is}(x) \le 0,\;\; i\in I \}, $$

and the sequence \(\{\delta _s\}\) satisfies the condition \(\sum \limits _{s = 0}^{\infty }\delta _{s} < + \infty \). It can be easily seen that \(D_s \subset D\), so \(x^{s+1}\) turns out to be feasible in the problem (4). Indeed, since the functions \(h_i(\cdot )\), \(i\in I\), are convex, the following inequalities hold:

$$ \begin{array}{c} 0\ge g_i(x^{s+1})-\langle \nabla h_i(x^s),x^{s+1}-x^s\rangle - h_i(x^s) = \varphi _{is}(x^{s+1}) \ge \\ \ge g_i(x^{s+1}) - h_i(x^{s+1}) = f_i(x^{s+1}),\;\; i\in I. \end{array} $$

Therefore, the LSM generates the sequence \(\{x^s\}\), \(x^s\in D_s\), \(s=0,1,2,\ldots \), of solutions to Problems \((\mathcal {PL}_s)\). As was proven in [21], any cluster point \(x_*\in D_*\) of the sequence \(\{x^s\}\) is a solution to the linearized Problem \((\mathcal {PL}_*)\) (which is Problem \((\mathcal {PL}_s)\) with \(x_*\) in place of \(x^s\)), and \(x_*\) can be called a critical point with respect to the LSM. Thus, the algorithm constructed in this way provides critical points, for any given accuracy \(\tau \), by employing suitable convex optimization methods [15]. The following inequalities:

$$ f_0(x^s) - f_0(x^{s+1}) \le \frac{\tau }{2},\quad \delta _s\le \frac{\tau }{2}, $$

can be chosen as a stopping criterion for the LSM [21].
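For illustration only, the scheme above can be sketched in a few lines of Python. The sketch assumes that the d.c. data \(g_i\), \(h_i\), \(\nabla h_i\) and a box set S are supplied as callables; the linearized convex subproblems \((\mathcal {PL}_s)\) are handed to a general-purpose NLP routine (SciPy's SLSQP) merely as a stand-in for the convex optimization methods [15] and the CPLEX solver used in Sect. 5.

```python
import numpy as np
from scipy.optimize import minimize

def local_search(f0, g_list, h_list, grad_h_list, bounds, x0,
                 tau=1e-6, delta0=0.1, max_iter=100):
    """Sketch of the LSM for problem (4): min f0(x), x in S, g_i(x) - h_i(x) <= 0.

    At each step the concave parts -h_i are linearized at x^s and the convex
    subproblem (PL_s) is solved approximately with tolerance delta_s."""
    x, delta = np.asarray(x0, dtype=float), delta0
    for _ in range(max_iter):
        xs = x.copy()
        # linearized constraints: g_i(x) - <grad h_i(xs), x - xs> - h_i(xs) <= 0
        cons = [{'type': 'ineq',
                 'fun': (lambda x, g=g, h=h, dh=dh:
                         -(g(x) - dh(xs) @ (x - xs) - h(xs)))}
                for g, h, dh in zip(g_list, h_list, grad_h_list)]
        res = minimize(f0, xs, method='SLSQP', bounds=bounds,
                       constraints=cons, options={'ftol': delta})
        x = res.x
        if f0(xs) - f0(x) <= tau / 2 and delta <= tau / 2:  # stopping criterion above
            return x
        delta *= 0.5  # delta_{s+1} = 0.5 * delta_s, so that the series of delta_s converges
    return x
```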

In order to implement the LSM, we need an explicit d.c. representation of \(f_i(\cdot )\), i.e. \(f_i(\cdot )=g_i(\cdot )-h_i(\cdot )\), \(i\in I\).

4 D.C. Representation of the Constraints

The first stage of any algorithm developed according to the Global Search Theory is the decomposition of a nonconvex function into a difference of two convex functions. Such a decomposition can be constructed in several different ways, depending on the functions \(\psi _i(\cdot )\) and \(\varphi _i(\cdot )\).

4.1 Affine Functions

Let the functions \(\psi _i (\cdot )\) and \(\varphi _i (\cdot )\) be defined by means of the vectors \(a^i,c^i\in {I\!R}^n\) and the numbers \(b_i,d_i\in {I\!R}\):

$$\psi _i(x)=\langle a^i,x\rangle +b_i,\;\;\varphi _i(x)=\langle c^i,x\rangle +d_i >0,\;\;i\in I.$$

In this case, the basic nonconvexity of Problem \((\mathcal{P})\) is the bilinear term \(\alpha _{i}\varphi _i(x)= \langle \alpha _{i}c^i,x\rangle +\alpha _{i}d_i \) in each constraint \((i\in I)\). Then, the bilinear function can be represented as a difference of two convex functions in the following way [27]:

$$\begin{aligned} \langle \alpha _{i}c^i,x\rangle =\frac{1}{4} \parallel \alpha _{i}c^i + x\parallel ^2 -\frac{1}{4} \parallel \alpha _{i}c^i - x\parallel ^2,\;\;i\in I. \end{aligned}$$
(5)
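The identity (5) is elementary; nevertheless, the following short check (Python, random data, purely illustrative) confirms it numerically.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
c, x = rng.normal(size=n), rng.normal(size=n)
alpha = rng.normal()

lhs = alpha * (c @ x)                                    # bilinear term <alpha c^i, x>
rhs = 0.25 * np.linalg.norm(alpha * c + x) ** 2 \
      - 0.25 * np.linalg.norm(alpha * c - x) ** 2        # right-hand side of (5)
assert np.isclose(lhs, rhs)
```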

Hence, the functions \(f_i(\cdot )\) have the form \(f_i(x,\alpha _{i})=g_i(x,\alpha _{i})-h_i(x,\alpha _{i}),\) where

$$\begin{aligned} \begin{array}{c} g_i(x,\alpha _{i})=\displaystyle \frac{1}{4} \parallel \alpha _{i}c^i - x\parallel ^2-\alpha _{i}d_i+\langle a^i,x\rangle +b_i,\\ h_i(x,\alpha _{i})=\displaystyle \frac{1}{4} \parallel \alpha _{i}c^i + x\parallel ^2. \end{array} \end{aligned}$$
(6)

Taking into account the d.c. representation (6), the linearized Problem \((\mathcal {PL}_s)\) has the following form

$$\begin{aligned} \left. \begin{array}{c} \sum \limits _{i=1}^m \alpha _i\downarrow \min \limits _{(x, \alpha )},\quad x\in S,\\ \displaystyle \frac{1}{4}\! \parallel \alpha _{i}c^i - x\parallel ^2+\langle a^i,x\rangle -\alpha _{i}d_i-\langle \nabla h_i(x^s,\alpha _{i}^s),(x,\alpha _{i})\rangle +\mathcal{C}_{is}\le 0,\; i\in I, \end{array} \right\} \end{aligned}$$
(7)

where \(\;\;\mathcal{C}_{is}=b_i+\langle \nabla h_i(x^s,\alpha _{i}^s),(x^s,\alpha ^s_{i})\rangle -\displaystyle \frac{1}{4} \parallel \alpha ^s_{i}c^i + x^s\parallel ^2\),

$$\begin{aligned} \begin{array}{c} \nabla h_i(x^s,\alpha ^s_{i})=(\nabla h_{ix},\nabla h_{i\alpha })^T,\\ \;\nabla h_{ix}= \displaystyle \frac{1}{2} ( \alpha ^s_{i}c^i + x^s), \;\; \nabla h_{i\alpha }= \displaystyle \frac{1}{2} ( \alpha ^s_{i}\parallel c^i\parallel ^2 + \langle c^i,x^s\rangle ) . \end{array} \end{aligned}$$
(8)

The problem (7) is a convex optimization problem and it can be solved by an appropriate convex optimization method [15] at a given accuracy: \(\delta _s >0,\) \(s=0,1,\ldots \).
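As a sanity check of the gradient formula (8), the following sketch (Python, random hypothetical data) compares \(\nabla h_i\) with a central finite-difference approximation of \(h_i(x,\alpha _{i})=\frac{1}{4}\parallel \alpha _{i}c^i + x\parallel ^2\).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
c = rng.normal(size=n)
x, alpha = rng.normal(size=n), rng.normal()

h = lambda x, a: 0.25 * np.linalg.norm(a * c + x) ** 2   # h_i(x, alpha_i)

# gradient according to (8)
grad_x = 0.5 * (alpha * c + x)
grad_a = 0.5 * (alpha * (c @ c) + c @ x)

# central finite differences
eps = 1e-6
fd_x = np.array([(h(x + eps * e, alpha) - h(x - eps * e, alpha)) / (2 * eps)
                 for e in np.eye(n)])
fd_a = (h(x, alpha + eps) - h(x, alpha - eps)) / (2 * eps)

assert np.allclose(grad_x, fd_x, atol=1e-5) and np.isclose(fd_a, grad_a, atol=1e-5)
```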

Further, we will consider Problem \((\mathcal{P})\) where \(\psi _i(\cdot )\) are convex quadratic functions and \(\varphi _i(\cdot )\) are affine functions, \(i\in I\).

4.2 Quadratic/Affine Functions

Suppose we are given symmetric positive definite matrices \(A^i\) \((n\times n)\), vectors \(p^i,c^i\in {I\!R}^n,\) and scalars \(q_i, d_i\in {I\!R},\)

$$\psi _i(x)=\langle x,A^ix\rangle +\langle p^i,x\rangle +q_i,\;\;\varphi _i(x)=\langle c^i,x\rangle +d_i >0,\;\;i\in I.$$

As was done in Subsect. 4.1, we represent the bilinear term \(\alpha _{i}\varphi _i(x)\) as the difference of two convex functions, which yields the d.c. representations \(f_i(x,\alpha _{i})=g_i(x,\alpha _{i})-h_i(x,\alpha _{i}),\;\;i\in I,\) where

$$\begin{aligned} \begin{array}{c} g_i(x,\alpha _{i})=\langle x,A^ix\rangle +\langle p^i,x\rangle +q_i+\displaystyle \frac{1}{4} \parallel \alpha _{i}c^i - x\parallel ^2-\alpha _{i}d_i,\\ h_i(x,\alpha _{i})=\displaystyle \frac{1}{4} \parallel \alpha _{i}c^i + x\parallel ^2. \end{array} \end{aligned}$$
(9)

Taking into account the d.c. representation (9), the linearized Problem \((\mathcal {PL}_s)\) takes the following form

$$\begin{aligned} \left. \begin{array}{c} \sum \limits _{i=1}^m \alpha _i\downarrow \min \limits _{(x, \alpha )},\quad x\in S,\\ \langle x,A^ix\rangle +\langle p^i,x\rangle +\displaystyle \frac{1}{4}\! \parallel \alpha _{i}c^i - x\parallel ^2\! -\alpha _{i}d_i\\ -\langle \nabla h_i(x^s,\alpha _{i}^s),(x,\alpha _{i})\rangle +\mathcal{C}_{is}\le 0, \;i\in I, \end{array} \right\} \end{aligned}$$
(10)

where the gradient \(\nabla h_i(x^s,\alpha ^s_{i})\) is calculated by the formula (8), and \(\mathcal{C}_{is}=q_i+\langle \nabla h_i(x^s,\alpha _{i}^s),(x^s,\alpha ^s_{i})\rangle -\displaystyle \frac{1}{4} \parallel \alpha ^s_{i}c^i + x^s\parallel ^2\).

The problem (10), as well as (7), can be solved by a suitable convex optimization method [15].

Remark 2

If the symmetric matrices \(A^i\) in the quadratic functions \(\psi _i(\cdot )\) are indefinite, then one can represent \(A^i\) as the difference of two symmetric positive definite matrices \(A^i = A^i_1 - A^i_2,\; A^i_1, A^i_2 > 0,\) using, for example, a simple method from [24]. Afterwards, it is possible to construct functions \(g_i(\cdot )\) and \(h_i(\cdot )\) as follows: for all \(i\in I\) add the convex part with the matrix \(A^i_1\) to the function \(g_i(\cdot )\) and the nonconvex part with the matrix \(A^i_2\) to \(h_i(\cdot )\).
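One elementary way to perform such a split, shown below as an assumption of ours (not necessarily the method of [24]), is to shift \(A^i\) by a multiple of the identity matrix.

```python
import numpy as np

def dc_split(A, margin=1e-6):
    """Split a symmetric (possibly indefinite) A as A1 - A2 with A1, A2 positive definite
    by shifting with a multiple of the identity (illustrative choice, not necessarily [24])."""
    A = 0.5 * (A + A.T)
    shift = max(0.0, -np.linalg.eigvalsh(A).min()) + margin
    A2 = shift * np.eye(A.shape[0])
    return A + A2, A2

rng = np.random.default_rng(2)
M = rng.normal(size=(4, 4))
A = 0.5 * (M + M.T)                      # random indefinite symmetric matrix
A1, A2 = dc_split(A)
assert np.allclose(A, A1 - A2)
assert np.linalg.eigvalsh(A1).min() > 0 and np.linalg.eigvalsh(A2).min() > 0
```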

In what follows, we will examine the case where \(\psi _i(\cdot )\) and \(\varphi _i(\cdot )\) are convex quadratic functions, \(i\in I\).

4.3 Quadratic Functions

Now let us consider the following functions:

$$\psi _i(x)=\langle x,A^ix\rangle +\langle p^i,x\rangle +q_i,\;\;\varphi _i(x)=\langle x,B^ix\rangle +\langle c^i,x\rangle +d_i >0,$$

where \(A^i\) and \(B^i\) are positive definite \((n\times n)\) matrices, \(p^i,c^i\in {I\!R}^n,\;q_i,d_i\in {I\!R},\) \(i\in I.\) Therefore, Problem \((\mathcal{P})\) contains the following term

$$\begin{aligned} \alpha _{i}\varphi _i(x)= \alpha _{i}\langle x,B^ix\rangle +\alpha _{i}\langle c^i,x\rangle +\alpha _{i}d_i, \end{aligned}$$
(11)

which generates nonconvexity in every constraint (\(i\in I\)).

The term \(\alpha _{i}\langle c^i,x\rangle \) in (11) can be presented in the d.c. form by the formula (5).

Further, let us denote \(r_i:=\langle x,B^ix\rangle \). Then the product \(\alpha _{i}r_i\) can be expressed by formula (5) as follows

$$ \begin{array}{c} \alpha _ir_i=\displaystyle \frac{1}{4} (\alpha _i + r_i)^2 -\displaystyle \frac{1}{4} ( \alpha _i - r_i)^2 \\ =\displaystyle \frac{1}{4} \left( \alpha _i + \langle x,B^ix\rangle \right) ^2 -\displaystyle \frac{1}{4} \left( \alpha _i - \langle x,B^ix\rangle \right) ^2,\;\;i\in I. \end{array} $$

If \(B^i,\; i\in I,\) are positive definite matrices and the following conditions hold

$$\begin{aligned} \alpha _i + \langle x,B^ix\rangle \ge 0, \;\;\alpha _i - \langle x,B^ix\rangle \ge 0\;\;\forall x\in S,\;\;i\in I, \end{aligned}$$
(12)

then

$$ \begin{array}{c} g_i(x,\alpha _{i})=\displaystyle \frac{1}{4} \left( \alpha _i - \langle x,B^ix\rangle \right) ^2+\displaystyle \frac{1}{4} \parallel \alpha _{i}c^i - x\parallel ^2-\alpha _{i}d_i+\psi _i(x),\\ h_i(x,\alpha _{i})=\displaystyle \frac{1}{4} \left( \alpha _i + \langle x,B^ix\rangle \right) ^2+\displaystyle \frac{1}{4} \parallel \alpha _{i}c^i + x\parallel ^2 \end{array} $$

are convex functions. Hence, we obtain the following d.c. representation:

$$\begin{aligned} f_i(x,\alpha _{i})=g_i(x,\alpha _{i})-h_i(x,\alpha _{i}), \;\;i\in I, \end{aligned}$$
(13)

and the following linearized Problem \((\mathcal {PL}_s)\)

$$\begin{aligned} \left. \begin{array}{c} \sum \limits _{i=1}^m \alpha _i\downarrow \min \limits _{(x, \alpha )},\quad x\in S,\\ \langle x,A^ix\rangle +\langle p^i,x\rangle +\displaystyle \frac{1}{4} \left( \alpha _i - \langle x,B^ix\rangle \right) ^2+\displaystyle \frac{1}{4} \parallel \alpha _{i}c^i - x\parallel ^2-\alpha _{i}d_i\\ -\langle \nabla h_i(x^s,\alpha _{i}^s),(x,\alpha _{i})\rangle +\mathcal{C}_{is}\le 0, \;\;\;i\in I, \end{array} \right\} \end{aligned}$$
(14)

where \(\mathcal{C}_{is}=q_i+\langle \nabla h_i(x^s,\alpha _{i}^s),(x^s,\alpha ^s_{i})\rangle -\displaystyle \frac{1}{4} \left( \alpha ^s_i + \langle x^s,B^ix^s\rangle \right) ^2-\displaystyle \frac{1}{4} \parallel \alpha ^s_{i}c^i + x^s\parallel ^2\), and \(\nabla h_i(x^s,\alpha ^s_{i})=(\nabla h_{ix},\nabla h_{i\alpha })^T\) is the gradient of the function \(h_i(\cdot )\):

$$ \begin{array}{c} \nabla h_{ix}= \left( \alpha ^s_i + \langle x^s,B^ix^s\rangle \right) B^ix^s+\displaystyle \frac{1}{2} ( \alpha ^s_{i}c^i + x^s),\\ \nabla h_{i\alpha }= \displaystyle \frac{1}{2}\left( \alpha ^s_i + \langle x^s,B^ix^s\rangle \right) +\displaystyle \frac{1}{2} \left( \alpha ^s_{i}\parallel c^i\parallel ^2 + \langle c^i,x^s\rangle \right) . \end{array} $$
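In the same spirit as the check of (8), the gradient formulas above can be verified against finite differences; the sketch below does this for \(h_i(x,\alpha _{i})=\frac{1}{4}(\alpha _i+\langle x,B^ix\rangle )^2+\frac{1}{4}\parallel \alpha _{i}c^i + x\parallel ^2\) with random (hypothetical) positive definite \(B^i\).

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
M = rng.normal(size=(n, n))
B = M @ M.T + n * np.eye(n)              # random positive definite B^i
c = rng.normal(size=n)
x, alpha = rng.normal(size=n), abs(rng.normal()) + 1.0

h = lambda x, a: 0.25 * (a + x @ B @ x) ** 2 + 0.25 * np.linalg.norm(a * c + x) ** 2

# gradient according to the formulas above
grad_x = (alpha + x @ B @ x) * (B @ x) + 0.5 * (alpha * c + x)
grad_a = 0.5 * (alpha + x @ B @ x) + 0.5 * (alpha * (c @ c) + c @ x)

eps = 1e-6
fd_x = np.array([(h(x + eps * e, alpha) - h(x - eps * e, alpha)) / (2 * eps)
                 for e in np.eye(n)])
fd_a = (h(x, alpha + eps) - h(x, alpha - eps)) / (2 * eps)
assert np.allclose(grad_x, fd_x, atol=1e-4) and np.isclose(fd_a, grad_a, atol=1e-4)
```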

If the conditions (12) are not satisfied, one can construct the d.c. representation (13) by decomposition of the trilinear term \( \alpha _{i}\langle x,B^ix\rangle = \sum \limits _{l=1}^n \sum \limits _{j=1}^n b^i_{lj}x_lx_j\alpha _i \) using the following equality holding for the product of three variables (for example, uvw):

$$\begin{array}{c} uvw=\displaystyle \frac{1}{8} \hat{g}(u,v,w)-\displaystyle \frac{1}{8} \hat{h}(u,v,w),\\ \\ \hat{g}(u,v,w)=((u+v)^2+(1+w)^2)^2+(1+w)^4+(u^2+w^2)^2 \\ \;\;\;\;\;\;\;\;+ (v^2+w^2)^2+2(u^2+v^2),\\ \\ \hat{h}(u,v,w)=w^4+((u+v)^2+w^2)^2+(u^2+(1+w)^2)^2\\ \;\;\;\;\;\;\;\;+ (v^2+(1+w)^2)^2+2(u+v)^2. \end{array} $$

Therefore, we get (13), where

$$ \begin{array}{c} g_i(x,\alpha _{i})=\psi _i(x)+\displaystyle \frac{1}{4} \parallel \alpha _{i}c^i - x\parallel ^2-\alpha _{i}d_i+\displaystyle \frac{1}{8} \sum \limits _{l=1}^n \sum \limits _{j=1}^n b^i_{lj}\hat{g}(x_l,x_j,\alpha _i),\\ \\ h_i(x,\alpha _{i})=\displaystyle \frac{1}{4} \parallel \alpha _{i}c^i + x\parallel ^2+ \displaystyle \frac{1}{8} \sum \limits _{l=1}^n \sum \limits _{j=1}^n b^i_{lj}\hat{h}(x_l,x_j,\alpha _i). \end{array} $$

Obviously, in this case the linearized Problem \((\mathcal {PL}_s)\) is a problem of minimizing a linear function over a convex feasible set given by more complicated nonlinear functions \(\varphi _{is}(x,\alpha _{i})\) than in the problems (7), (10), or even (14). At the same time, the linearized problems remain convex and therefore can be solved by a suitable convex optimization method [15].
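The three-variable identity used above is easy to validate numerically; the following sketch (Python, random triples) checks that \(uvw=\frac{1}{8}(\hat{g}(u,v,w)-\hat{h}(u,v,w))\).

```python
import numpy as np

def g_hat(u, v, w):
    return (((u + v) ** 2 + (1 + w) ** 2) ** 2 + (1 + w) ** 4
            + (u ** 2 + w ** 2) ** 2 + (v ** 2 + w ** 2) ** 2
            + 2 * (u ** 2 + v ** 2))

def h_hat(u, v, w):
    return (w ** 4 + ((u + v) ** 2 + w ** 2) ** 2
            + (u ** 2 + (1 + w) ** 2) ** 2 + (v ** 2 + (1 + w) ** 2) ** 2
            + 2 * (u + v) ** 2)

rng = np.random.default_rng(3)
for u, v, w in rng.normal(size=(100, 3)):
    assert np.isclose(u * v * w, (g_hat(u, v, w) - h_hat(u, v, w)) / 8.0)
```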

Remark 3

If the symmetric matrices \(A^i\) and \(B^i\) in the functions \(\psi _i(\cdot )\) and \(\varphi _i(\cdot )\), respectively, are indefinite, then this case is handled as described above in Remark 2.

5 Computational Simulations

The local search method (LSM) from Sect. 3 was coded in C++ and tested with various starting points. All computational experiments were performed on an Intel Core i7-4790K CPU (4.0 GHz).

At each iteration of the LSM, the convex Problem \((\mathcal{PL}_s)\) was solved by the software package IBM ILOG CPLEX 12.6.2 [11]. The accuracy of the LSM was \(\tau =10^{-6} \). The accuracy of the solution to the linearized problems \((\mathcal{PL}_s)\) was gradually increased during the LSM. Thus, we solved \((\mathcal{PL}_s)\) with low accuracy at the first steps; then the tolerance \(\delta _s\) was gradually decreased \((\delta _s \downarrow 0)\), i.e., \(\delta _0 = 0.1,\) \(\delta _{s+1} = 0.5\delta _s,\) until the condition \(\delta _s\le \frac{\tau }{2}\) was fulfilled for the given accuracy \(\tau > 0\).

At the first stage, we numerically solved several instances of fractional programming problems from [2, 3, 7, 14, 16, 18] with a small number of variables.

5.1 Low-Dimensional Fractional Program with Affine Functions

Tables 1 and 2 present the results of the computational testing of the LSM and employ the following notation:

Table 1. Low-dimensional fractional program. Minimization.

name is the name of the test example;

n is the number of variables (problem’s dimension);

m is the number of terms in the sum;

\(f_0(x^0)\) is the value of the objective function of Problem \((\mathcal{P})\) at the starting point;

\(f_0(z)\) is the value of the function at the critical point provided by the LSM;

it is the number of linearized problems solved (iterations of the LSM);

Time stands for the CPU time (in seconds);

\(x^0\) stands for the starting point chosen in the test problem;

z is the critical point provided by the LSM.

Note that in the problems “Prob6 [2]” and “Prob1 [16]” in Table 2, local solutions derived by the LSM are not global (shown in bold).

In all the remaining instances, known global solutions were found by the local search alone, which confirms the computational effectiveness of the LSM. All test problems were successfully solved.

Table 2. Low-dimensional fractional program. Maximization.

Further, we study whether the LSM performance is affected by an increase in the dimension of the variable x and in the number of terms in the sum.

5.2 Randomly Generated Problems with Affine and Quadratic Functions

In this subsection, we will report computational results of testing the LSM on randomly generated problems of the form

$$\begin{aligned} \begin{array}{c} f_0(x):=\sum \limits _{i=1}^m\displaystyle \frac{\langle a^i,x\rangle +b_i}{\langle c^i,x\rangle +d_i}\uparrow \max \limits _x,\;\; \langle \bar{A},x\rangle \le \bar{b},\;\;x\ge 0. \end{array}\end{aligned}$$
(15)

The data \(a^i_j,\;c^i_j,\;\bar{A}_{lj}\in [0, 10]\) were uniformly distributed random numbers, and \(b_i=d_i=10,\) \(\bar{b}_l=10,\; i=1,\ldots ,m,\;j=1,\ldots ,n,\; l=1,\ldots ,L\).
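For illustration, the following sketch generates one such instance under the stated distributions; the number of linear constraints L is left as a free parameter, since it is not fixed above.

```python
import numpy as np

def generate_affine_instance(n, m, L, seed=0):
    """Random data for problem (15): a^i_j, c^i_j, Abar_lj uniform on [0, 10],
    b_i = d_i = 10, bbar_l = 10. L (number of rows of Abar) is a free parameter."""
    rng = np.random.default_rng(seed)
    a = rng.uniform(0.0, 10.0, size=(m, n))
    c = rng.uniform(0.0, 10.0, size=(m, n))
    A_bar = rng.uniform(0.0, 10.0, size=(L, n))
    b = np.full(m, 10.0)
    d = np.full(m, 10.0)
    b_bar = np.full(L, 10.0)
    return a, b, c, d, A_bar, b_bar

a, b, c, d, A_bar, b_bar = generate_affine_instance(n=100, m=100, L=5)
```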

The results of the computational testing of the LSM on fractional problems (15) with up to 100 variables and 100 terms in the sum are listed in Table 3. The notation in Table 3 is the same as in Tables 1 and 2.

Table 3. Randomly generated problems (15) with affine functions

Moreover, we tested the LSM on fractional problems with quadratic functions in the numerators of the ratios. We generated problems according to [12] with up to 200 variables and 200 terms in the sum:

$$\begin{aligned} \begin{array}{c} f_0(x):=\sum \limits _{i=1}^m\displaystyle \frac{\frac{1}{2} \langle x,A^ix\rangle +\langle p^i,x\rangle }{\langle c^i,x\rangle }\downarrow \min \limits _x,\;\; \langle \bar{A},x\rangle \le \bar{b},\;\;x\in [1,5]^n, \end{array}\end{aligned}$$
(16)

where \(A^i=U_iD^iU^T_i,\;U_i=Q_1Q_2Q_3,\;i=1,\ldots , m,\;Q_j=I\!-\!2\frac{w_jw_j^T}{\parallel w_j\parallel ^2},j=1,2,3\) and \(w_1=-i+rand(n,1),\;w_2=-2i+rand(n,1),\;w_3=-3i+rand(n,1),\) \( D^i=rand(n,n),\;c^i=i-i\cdot rand(n,1),\; p^i=i+ i\cdot rand(n,1),\; i=1,\ldots , m,\) \(\bar{A}=-1+2\cdot rand(5,n),\;\bar{b}=2+3\cdot rand(5,1)\) [12]. (We denote by \(rand(k_1,k_2)\) the random matrix with \(k_1\) rows, \(k_2\) columns and elements generated randomly on [0, 1].)

Table 4. Randomly generated problems (16) with quadratic functions

As shown in Table 4, the number of iterations (it) of the LSM is almost independent of the number of variables (n), but approximately proportional to the number of terms in the sum (m). The run time increased roughly in proportion to n and m.

The computational simulations confirm the efficiency of the developed LSM, whose performance naturally depends on the choice of the method or software package (IBM ILOG CPLEX) employed to solve the auxiliary problems.

Thus, the LSM can be applied in future implementations of the global search algorithm for solving sum-of-ratios fractional problems via problems with d.c. constraints.

6 Conclusions

In this paper, we considered the fractional programming problem as an optimization problem with d.c. constraints. To this end, we constructed explicit representations of the nonconvex functions as differences of two convex functions and applied a local search algorithm based on linearization of the functions defining the basic nonconvexity of the problem under study.

We investigated the effectiveness of the local search method for solving problems with d.c. constraints that generate the nonconvexity of the feasible set.

The numerical experiments demonstrated that the local search algorithm can globally solve low-dimensional sum-of-ratios test problems. Moreover, the local search algorithm developed in this paper turned out to be rather efficient at finding critical points in randomly generated fractional programming problems of high dimension.

Therefore, the method developed can be applied within the global search procedures for fractional programming problems.