
1 Introduction

The global minimization of the sum of fractional functions has attracted the interest of researchers and practitioners for many years, because such problems have numerous important real-life applications. From the theoretical viewpoint, solving these problems poses significant challenges because, in general, fractional programs are nonconvex, i.e., they typically have many (often a huge number of) local solutions that are not globally optimal [18]. It has been proven that the sum-of-ratios program is NP-complete [6]. Various specialized methods and algorithms have been proposed for solving these problems globally (see, for example, the surveys in [3, 18]), but the development of new efficient methods for the following sum-of-ratios problem [3, 7, 18]

$$\begin{aligned} {(\mathcal{FP}):} \qquad \qquad \quad f(x):=\sum \limits _{i=1}^m\frac{\psi _i(x)}{\varphi _i(x)}\downarrow \min \limits _x,\;\;x\in S, \end{aligned}$$

where \(S\subset I\!\! R^n\) is a closed convex set and \(\psi _i, \varphi _i:I\!\! R^n \rightarrow I\!\! R\) are continuous functions such that

$$ (\mathcal{H}_0):\;\;\;\;\;\;\;\;\;\;\;\;\;\psi _i(x)> 0,\;\varphi _i(x)> 0\;\;\forall x\in S,\;\;i=1,\ldots , m;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; $$

still remains an important field of research in mathematical optimization. If the functions \(\psi _i(\cdot ),\;\varphi _i(\cdot ),\;i=1,\ldots , m,\) are quadratic, then we classify Problem \((\mathcal{FP})\) as a quadratic fractional program.
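To fix ideas, here is a minimal Python sketch of the objective of a quadratic fractional program, in the notation introduced later in Sect. 4 (all data arrays are illustrative placeholders, not part of the original formulation):

```python
import numpy as np

def sum_of_ratios(x, A, a, xi, B, b, gamma):
    """f(x) = sum_i psi_i(x)/phi_i(x) with quadratic
    psi_i(x) = <x, A[i] x> + <a[i], x> + xi[i] and
    phi_i(x) = <x, B[i] x> + <b[i], x> + gamma[i];
    assumption (H0) guarantees both are positive on S."""
    m = len(A)
    psi = np.array([x @ A[i] @ x + a[i] @ x + xi[i] for i in range(m)])
    phi = np.array([x @ B[i] @ x + b[i] @ x + gamma[i] for i in range(m)])
    return float(np.sum(psi / phi))
```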

On the other hand, the following general problem of d.c. optimization

$$\begin{aligned} {(\mathcal{P}):} \qquad \quad \quad \left\{ \begin{array}{c} f_0(x):=g_0(x)-h_0(x)\downarrow \min \limits _{x},\quad x\in S,\\ f_j(x):=g_j(x)-h_j(x)\le 0,\;\; j=1,\ldots ,m, \end{array} \right. \end{aligned}$$

where the functions \(g_j(\cdot )\), \(h_j(\cdot )\), \(j=0,1,\ldots ,m,\) are convex on \(I\!\! R^n\) and \(S\subset I\!\! R^n\) is a closed convex set, has remained over recent years one of the most attractive objects in nonconvex optimization [13,14,15, 19, 27, 28].

It is worth noting that any continuous optimization problem can be approximated by a d.c. problem with any prescribed accuracy [13, 14, 28]. In addition, the space of d.c. functions is a linear space that is closed under most of the operations usually considered in optimization (see, e.g., [19, 28]). Moreover, any twice differentiable function belongs to the space of d.c. functions [13,14,15, 19, 27, 28]. The convexity of the two components g and h of a d.c. function f is widely used to develop appropriate theoretical and algorithmic tools.

We develop a new efficient global search method for fractional optimization problems, based on the following two ideas [9, 11, 12]. First, generalizing Dinkelbach's approach [4], we propose to reduce the sum-of-ratios problem with d.c. functions to solving an equation involving the optimal value function of an auxiliary parametric problem whose vector parameter satisfies the nonnegativity assumption. Second, we also use the reduction of the fractional program to a problem of type \((\mathcal{P})\) where \(f_0(x)\) is a linear function, i.e., an optimization problem over a nonconvex feasible set given by d.c. inequality constraints.

Furthermore, based on the Global Search Theory for d.c. optimization [19, 22] and on the solution of these two particular cases of the general d.c. optimization problem \((\mathcal{P})\), we develop a two-method technology for solving a sum-of-ratios problem and verify it on test problems with nonconvex quadratic functions in the numerators and denominators of the ratios.

Most of the approaches and techniques for solving fractional programs are designed for problems with affine functions in the numerators and denominators of the ratios [1, 5, 16, 17]. Problems with nonconvex quadratic (d.c.) functions are considerably harder as far as finding a global solution is concerned.

The outline of the paper is as follows. In Sects. 2 and 3, we recall two approaches to solving Problem \((\mathcal{FP})\): via auxiliary d.c. minimization problems and via problems with d.c. inequality constraints, respectively. In Sect. 4, we show how to represent explicitly the nonconvex functions describing the goal function and the constraints of the auxiliary problems as differences of two convex functions (d.c. representations). Further, in Sect. 5, we propose an algorithm that combines the two approaches to solving the fractional program via d.c. optimization. The final section offers computational testing of the developed algorithm on fractional program instances with up to 100 variables or 1000 terms in the sum, generated by the approach from [2].

2 Reduction to the D.C. Minimization Problem

Consider the following auxiliary problem

$$ {(\mathcal{P}_\alpha ):} \qquad \qquad \varPhi _\alpha (x):= \sum \limits _{i=1}^m [\psi _i(x) -\alpha _i\varphi _i(x)]\downarrow \min \limits _x,\;\;x\in S, $$

where \(\alpha =(\alpha _1,\ldots , \alpha _m)^\top \in I\!\! R^m_+\) is the vector parameter and the set \(S\subset I\!\! R^n\) is, as above, closed and convex.

First, let us recall some results from [9] on the relations between Problems \((\mathcal{FP})\) and \((\mathcal{P}_{\alpha })\). Further, introduce the optimal value function \(\mathcal{V}(\alpha )\) of Problem \((\mathcal{P}_\alpha )\):

$$\begin{aligned} \mathcal{V}(\alpha ):=\inf \limits _x \{\varPhi _\alpha (x)\;\mid \; x\in S\}=\inf \limits _x \left\{ \sum \limits _{i=1}^m [\psi _i(x) -\alpha _i\varphi _i(x)] \;:\; x\in S\right\} . \end{aligned}$$
(1)
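Since \((\mathcal{P}_\alpha )\) is itself nonconvex, computing \(\mathcal{V}(\alpha )\) requires a global method (Stages I–II below); the following Python sketch merely illustrates the construction of \(\varPhi _\alpha \) and a multistart local estimate of \(\mathcal{V}(\alpha )\), with S modeled as a box (solver choice and all names are illustrative, not the authors' implementation):

```python
import numpy as np
from scipy.optimize import minimize

def Phi(x, alpha, psi_list, phi_list):
    # Phi_alpha(x) = sum_i [psi_i(x) - alpha_i * phi_i(x)]
    return sum(p(x) - ai * q(x) for p, q, ai in zip(psi_list, phi_list, alpha))

def V_estimate(alpha, psi_list, phi_list, starts, bounds):
    """Multistart local estimate of V(alpha) = inf_{x in S} Phi_alpha(x).
    S is modeled here as a box via `bounds`; a global method (Sect. 2)
    is needed for a guaranteed value."""
    best = np.inf
    for x0 in starts:
        res = minimize(Phi, x0, args=(alpha, psi_list, phi_list),
                       bounds=bounds, method="L-BFGS-B")
        best = min(best, res.fun)
    return best
```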

In addition, suppose that the following assumptions are fulfilled:

$$ (\mathcal{H}_1):\;\;\left\{ \begin{array}{l} (a)\;\; \mathcal{V}(\alpha ) > -\infty \;\;\forall \alpha \in \mathcal{K}, \text{ where } \mathcal{K} \text{ is } \text{ a } \text{ convex } \text{ set } \text{ from } I\!\! R^m; \\ (b)\;\; \forall \alpha \in \mathcal{K}\subset I\!\! R^m\;\;\text{ there } \text{ exists } \text{ a } \text{ solution } z=z(\alpha ) \text{ to } \text{ Problem } (\mathcal{P}_\alpha )\text{. } \end{array}\;\; \right. $$

In what follows, we say that a given parameter vector \(\alpha =(\alpha _1,\ldots , \alpha _m)^\top \in I\!\! R^m\) satisfies “the nonnegativity condition” in Problem \((\mathcal{FP})\), if the following inequalities hold

$$(\mathcal{H}(\alpha )):\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \psi _i(x) -\alpha _i\varphi _i(x)\ge 0\;\;\forall x\in S,\;\;i=1,\ldots ,m.\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; $$

Theorem 1

[9] Suppose that in Problem \((\mathcal{FP})\) the assumptions \((\mathcal{H}_0)\), \((\mathcal{H}_1)\) are satisfied. In addition, let there exist a vector \(\alpha _0=(\alpha _{01},\ldots , \alpha _{0m})^\top \in \mathcal{K}\subset I\!\! R^m\) at which “the nonnegativity condition” \((\mathcal{H}(\alpha _0))\) holds.

Finally, suppose that in Problem \((\mathcal{P}_{\alpha _0})\) the following equality takes place:

$$\begin{aligned} \mathcal{V}(\alpha _0) {\mathop {=}\limits ^{\triangle }}\min \limits _x \left\{ \sum \limits _{i=1}^m [\psi _i(x) -\alpha _{0i}\varphi _i(x)] \;:\; x\in S\right\} =0. \end{aligned}$$
(2)

Then, any solution \(z=z(\alpha _0)\) to Problem \((\mathcal{P}_{\alpha _0})\) is a solution to Problem \((\mathcal{FP})\), so that \( z\in Sol(\mathcal{P}_{\alpha _0})\subset Sol (\mathcal{FP}). \)
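For \(m=1\), Theorem 1 recovers the classical Dinkelbach observation: under \((\mathcal{H}_0)\) and \((\mathcal{H}_1)\) (this one-line check is ours, not quoted from [9]),

$$\mathcal{V}(\alpha )=\min \limits _{x\in S}\,[\psi _1(x)-\alpha \varphi _1(x)]=0 \;\Longleftrightarrow \; \alpha =\min \limits _{x\in S}\frac{\psi _1(x)}{\varphi _1(x)},$$

and the minimizers of the two problems coincide, since one may divide by \(\varphi _1(x)>0\).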

Hence, in order to verify the equality (2), we should be able to solve Problem \((\mathcal{P}_\alpha )\) globally at a current \(\alpha \in I\!\! R^m_+\). Since \(\psi _i(\cdot ),\;\varphi _i(\cdot ),\) \(i=1,\ldots , m,\) are convex or d.c. functions, it can readily be seen that Problem \((\mathcal{P}_\alpha )\) is a parametric d.c. minimization problem. As a consequence, in order to solve the auxiliary Problem \((\mathcal{P}_\alpha )\), we can apply the global search strategy for d.c. minimization problems [19, 22]. Its theoretical foundation is provided by the following Global Optimality Condition, written here in terms of Problem \((\mathcal{P}_\alpha )\) under the assumption that \(\psi _i(\cdot )\), \(\varphi _i(\cdot ),\;i=1,\ldots ,m,\) are convex functions, so that \(g(x)=\sum \limits _{i=1}^m \psi _i(x),\;\;h_\alpha (x)=\sum \limits _{i=1}^m \alpha _i \varphi _i(x),\)

$$ \varPhi _\alpha (x)= \sum \limits _{i=1}^m (\psi _i(x)- \alpha _i \varphi _i(x))=g(x)-h_\alpha (x).$$

If \(\psi _i(\cdot )\), \(\varphi _i(\cdot ),\;i=1,\ldots ,m,\) are d.c. functions, it is easy to see that the goal function of \((\mathcal{P}_\alpha )\) is d.c. as well, albeit with a different d.c. decomposition.

Theorem 2

[19, 22] Suppose \(z(\alpha )\) is a global solution to \((\mathcal{P}_\alpha )\), and let \(\zeta :=\varPhi _\alpha (z(\alpha )).\) Then,

$$\begin{aligned} (\mathcal{E}):\;\;\;\;\;\;\;\; \;\;\;\;\;\;\;\; \;\;\;\;\; \left\{ \begin{array}{c} (a)\;\;\;\forall (y,\beta )\in I\!\! R^n\times I\!\!R:\;\;\; h_\alpha (y)=\beta -\zeta ,\\ (b)\;\;\;g(x)-\beta \ge \langle \nabla h_\alpha (y),x-y\rangle \;\;\forall x\in S. \end{array} \right. \;\;\;\;\;\;\;\;\;\;\; \end{aligned}$$
(3)

The meaning of Theorem 2 lies in the fact that by selecting the “perturbation parameters” \((y,\beta )\) (satisfying (3a)) and solving the linearized problem

$$ (\mathcal{P}_\alpha \mathcal{L}):\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; g(x)-\langle \nabla h_\alpha (y),x\rangle \downarrow \min \limits _x,\;\;\;x\in S,\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; $$

we try to violate the principal inequality (3b) (where \(y\in I\!\!R^n \) is not necessarily feasible).
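For the quadratic d.c. decompositions of Sect. 4, \((\mathcal{P}_\alpha \mathcal{L})\) is an ordinary convex quadratic program; a possible CVXPY sketch (with a box set S and placeholder data, not the authors' implementation) is:

```python
import cvxpy as cp
import numpy as np

def solve_linearized(G, c, grad_h_y, lb, ub):
    """Solve g(x) - <grad h_alpha(y), x> -> min over a box S = [lb, ub],
    with g(x) = <x, G x> + <c, x> and G symmetric positive semidefinite.
    The point y enters only through grad_h_y = grad h_alpha(y)."""
    n = G.shape[0]
    x = cp.Variable(n)
    objective = cp.Minimize(cp.quad_form(x, G) + (c - grad_h_y) @ x)
    problem = cp.Problem(objective, [x >= lb, x <= ub])
    problem.solve()
    return x.value
```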

Furthermore, according to Theorem 1, we are able to avoid the direct solution of Problem \((\mathcal{FP})\) and address instead the parametrized problem \((\mathcal{P}_\alpha )\) with \(\alpha \in I\!\! R^m_+\). Hence, we propose to combine the solution of Problem \((\mathcal{P}_{\alpha })\) with a search for the parameter \(\alpha \in I\!\! R^m_+\) in order to find \(\alpha _0\in I\!\! R^m_+\) such that \(\mathcal{V}(\alpha _0)=0\). This idea can be implemented by the following algorithm. Let \([\alpha _-^0,\alpha _+^0]\) be an initial segment for varying \(\alpha \), and let \(x_0\in S\) stand for the starting point.

Algorithm 1 for solving the fractional problem via d.c. minimization

Stage 0. (Initialization) \(k:=0.\) Set \(x^k:=x_0\), \(\alpha _-^k:=0,\;\alpha _+^k:=\alpha _+^0,\;\alpha ^k:=\frac{\alpha _+^0}{2}\in [\alpha _-^0, \alpha _+^0].\)

Stage I. (Local search) Starting at \(x^k\), find a critical point of Problem \((\mathcal{P}_{k}):=(\mathcal{P}_{\alpha ^k})\) using the special local search method for d.c. minimization [19].

Stage II. (Global search) Find a global solution \(z(\alpha ^k)\) to Problem \((\mathcal{P}_{k})\) using the global search scheme for parametric d.c. minimization [19, 22].

Stage III. (Stopping criterion) If \(\mathcal{V}(\alpha ^k)=0\) and \(\min \limits _i \{\psi _i(z(\alpha ^k)) - \alpha ^k_i \varphi _i(z(\alpha ^k))\} \ge 0\), then STOP: \(z(\alpha ^k)\in Sol(\mathcal{FP})\).

Stage IV. (Parameter variation) Find new parameters \(\alpha ^{k+1},\;\alpha _-^{k+1}\) and \(\alpha _+^{k+1}\); set \(k:=k+1\) and go to Stage I. (A sketch of the whole loop in Python is given below.)
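For orientation, the overall loop of Algorithm 1 can be organized as in the following Python skeleton; `local_search`, `global_search`, `V` and `update_alpha` stand for Stages I, II, the optimal value function (1), and Stage IV (sketched after the parameter variation procedure below), respectively, and the exact-zero test of Stage III is replaced by a tolerance:

```python
import numpy as np

def algorithm1(x0, alpha_plus0, V, local_search, global_search,
               psi, phi, update_alpha, tol=1e-6, max_iter=100):
    """Skeleton of Algorithm 1: global search in (P_alpha) combined with
    a search for alpha such that V(alpha) = 0 (Theorem 1).
    psi(z), phi(z) return the vectors (psi_i(z)), (phi_i(z))."""
    alpha_minus, alpha_plus = np.zeros_like(alpha_plus0), alpha_plus0.copy()
    alpha = alpha_plus0 / 2.0
    x, z = x0, x0
    for k in range(max_iter):
        x = local_search(x, alpha)            # Stage I: critical point of (P_k)
        z = global_search(x, alpha)           # Stage II: global solution of (P_k)
        Vk = V(alpha)                         # optimal value of (P_alpha)
        margins = psi(z) - alpha * phi(z)     # componentwise psi_i - alpha_i*phi_i
        if abs(Vk) <= tol and margins.min() >= -tol:
            return z, alpha                   # Stage III: z solves (FP)
        alpha, alpha_minus, alpha_plus = update_alpha(   # Stage IV
            Vk, z, alpha, alpha_minus, alpha_plus)
    return z, alpha
```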

Remark 1

The algorithm for solving the fractional program \((\mathcal{FP})\) consists of three basic stages: (a) a local search and (b) a global search in Problem \((\mathcal{P}_\alpha )\) with a fixed vector parameter \(\alpha \) (Stages I, II), and (c) a method for finding the vector \(\alpha \) (Stage IV).

Let \([\alpha _-, \alpha _+]\) be a segment for varying \(\alpha \). In addition, assume that we are able to compute the value \(\mathcal{V}:=\mathcal{V}(\alpha )\) and let a solution \(z(\alpha )\) to Problem \((\mathcal{P}_\alpha )\) be given.

In order to calculate, at the current iteration k on Stage IV of Algorithm 1, a new parameter \(\alpha ^{new}\) and new boundaries \(\alpha _-^{new}\) and \(\alpha _+^{new}\) of the segment for varying \(\alpha \), one can use the following procedure, where \(\alpha :=\alpha ^k\) is taken from Stage III of Algorithm 1.

Stage IV. Parameter variation algorithm

Step 1. If \(\mathcal{V}>0\), then set \(\alpha _-^{new}:=\alpha \), \(\alpha ^{new}:=\frac{1}{2}(\alpha _+ +\alpha )\), \(\alpha _+^{new}:=\alpha _+\).

Step 2. If \(\mathcal{V}<0\), then set \(\alpha _+^{new}:=\alpha \), \(\alpha ^{new}:=\frac{1}{2}(\alpha _- +\alpha )\), \(\alpha _-^{new}:=\alpha _-\).

Step 3. If \(\mathcal{V}=0\) and \(\min \limits _i \varPhi _{i}(z(\alpha ),\alpha )< 0\), where \(\varPhi _{i}(x,\alpha ):=\psi _i(x)-\alpha _i\varphi _i(x)\), then set

$$\begin{array}{c} \alpha ^{new}_i:= \displaystyle \frac{\psi _{i}(z(\alpha ))}{\varphi _i(z(\alpha ))} \;\;\forall i:\psi _i(z(\alpha ))- \alpha _i \varphi _i(z(\alpha ))<0;\\ \alpha ^{new}_i:= \alpha _i\;\;\;\forall i:\psi _i(z(\alpha ))- \alpha _i \varphi _i(z(\alpha ))\ge 0. \end{array} $$

In addition, set \(\alpha ^{new}_-:=0\), \(\alpha _+^{new}:=t\alpha ^{new}\), where \(t=\displaystyle \frac{\max \{\alpha ^0_{+1};\ldots ;\alpha ^0_{+m}\}}{\max \{\alpha _1;\ldots ;\alpha _m\}}\).

Step 4. Set \(\alpha ^{k+1}:=\alpha ^{new},\;\alpha _-^{k+1}:=\alpha _-^{new}\) and \(\alpha _+^{k+1}:=\alpha _+^{new}\); \(k:=k+1\) and go to Stage I of Algorithm 1. (A Python transcription is given below.)
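A direct Python transcription of this procedure might look as follows (vector operations are componentwise; `alpha_plus0` is the initial upper boundary from Remark 2 below, and the exact-zero tests are replaced by a tolerance):

```python
import numpy as np

def update_alpha(V, z, alpha, alpha_minus, alpha_plus,
                 psi, phi, alpha_plus0, tol=1e-8):
    """Stage IV: bisection-type update of alpha on [alpha_minus, alpha_plus],
    with a Dinkelbach-like reset when V = 0 but some ratio is violated.
    Returns (alpha_new, alpha_minus_new, alpha_plus_new)."""
    if V > tol:                                   # Step 1
        return 0.5 * (alpha_plus + alpha), alpha, alpha_plus
    if V < -tol:                                  # Step 2
        return 0.5 * (alpha_minus + alpha), alpha_minus, alpha
    psi_z, phi_z = psi(z), phi(z)                 # Step 3: V = 0
    violated = psi_z - alpha * phi_z < 0
    alpha_new = np.where(violated, psi_z / phi_z, alpha)
    t = alpha_plus0.max() / alpha.max()
    return alpha_new, np.zeros_like(alpha), t * alpha_new

# Bind the extra data to match the driver sketched after Algorithm 1, e.g.:
# functools.partial(update_alpha, psi=psi, phi=phi, alpha_plus0=alpha_plus0)
```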

Remark 2

To choose an initial segment \([\alpha _-^0,\alpha _+^0]\) for varying \(\alpha \), we should take into account the following considerations. According to “the nonnegativity condition” \((\mathcal{H}(\alpha ))\) and the assumption \((\mathcal{H}_0)\), we have

$$\alpha _i\le \displaystyle \frac{\psi _i(x)}{\varphi _i(x)} \le \sum \limits _{j=1}^m\displaystyle \frac{\psi _j(x)}{\varphi _j(x)}\;\;\forall x\in S,\;\;\forall i=1,\ldots , m,$$

therefore one can choose \(\alpha _{+\;i}^0:=\displaystyle \frac{\psi _i(x^0)}{\varphi _i(x^0)},\;\;\alpha _{-\;i}^0=0,\;\;i=1,\ldots , m.\)

Computational experiments [8, 9, 11] showed that solving the fractional problem via d.c. minimization may take a large number of iterations, caused by inefficient behavior of Stage IV. Therefore, it is very important to choose a suitable parameter \(\alpha \) in order to reduce the total number of iterations and, consequently, the run-time of Algorithm 1.

Using the reduction of the sum-of-ratios problem to an optimization problem with nonconvex constraints, we will look not only for a starting value of the parameter \(\alpha \) for Problem \((\mathcal{P}_\alpha )\), but also for a better starting vector for Problem \((\mathcal{FP})\).

3 Reduction to the Problem with D.C. Constraints

In this section we consider the following optimization problem

$$ (\mathcal{DCC}):\;\;\;\;\;\;\;\;\;\; \left\{ \begin{array}{c} \sum \limits _{i=1}^m \alpha _i\downarrow \min \limits _{(x, \alpha )},\;\;x\in S,\\ \psi _i(x)- \alpha _{i}\varphi _i(x)\le 0,\;\;i\in I=\{1,\ldots , m\}. \end{array}\right. \;\;\;\;\;\;\;\;\; $$

The relations between Problems \((\mathcal{FP})\) and \((\mathcal{DCC})\) are as follows.

Proposition 1

[10] For any global solution \((x_*,\alpha _*)\in I\!\! R^n \times I\!\! R^m\) to Problem \((\mathcal{DCC})\), the point \(x_*\) is a global solution to Problem \((\mathcal{FP})\), and \(\alpha _{*i}=\displaystyle \frac{\psi _i(x_*)}{\varphi _i(x_*)},\;i\in I.\)

Remark 3

It is clear that, in contrast to Theorem 1 from Sect. 2, the vector \(\alpha _*=(\alpha _{*1},\ldots ,\alpha _{*m})^\top \in I\!\! R^m\) must be found simultaneously with the solution vector \(x_*\).

It is easy to see that Problem \((\mathcal{DCC})\) is a nonconvex optimization problem with a linear goal function and a nonconvex feasible set (see, e.g., [14, 25]). Thus, we can solve Problem \((\mathcal{DCC})\) using the exact penalization approach for d.c. optimization developed in [20, 23, 24].

However, computational experiments [11, 12] showed that solving the fractional program via the problem with d.c. constraints \((\mathcal{DCC})\) takes more run-time than using the parametric d.c. minimization Problem \((\mathcal{P}_\alpha )\).

Nevertheless, in low-dimensional fractional programming test problems, the known global solutions were found by the local search method (LSM) [21] for Problem \((\mathcal{DCC})\) alone (see [10]). Therefore, we apply here only the LSM, based on the classical idea of linearization with respect to the basic nonconvexity of the problem [19, 21, 26].

The LSM for Problem \((\mathcal{DCC})\) is based on the consecutive solutions of the following partially linearized (at the point \((x^s,\alpha ^s)\)) problem [10, 21, 26]:

$$ (\mathcal {DCCL}_s):\left\{ \begin{array}{c} \sum \limits _{i=1}^m \alpha _i\downarrow \min \limits _{(x, \alpha )},\;\;x\in S,\\ g_i(x, \alpha _i)-\langle \nabla h_i(x^s,\alpha _i^s),(x, \alpha _i)-(x^s,\alpha _i^s)\rangle -h_i(x^s,\alpha _i^s)\le 0,\;\; i\in I, \end{array} \right. $$

where the functions \(g_i(\cdot )\) and \(h_i(\cdot )\), \(i\in I\), are the convex functions obtained by the d.c. representation of the constraint functions \(\psi _i(x)- \alpha _{i}\varphi _i(x),\;\;i\in I\) (see, for example, Sect. 4 for quadratic \(\psi _i(\cdot )\) and \(\varphi _i(\cdot )\)) and where \(\nabla h_i(x,\alpha _i)=\) \((\nabla _{x_1} h_i(x,\alpha _i),\ldots ,\nabla _{x_n} h_i(x,\alpha _i),\nabla _{\alpha _i} h_i(x,\alpha _i))^\top \in I\!\! R^{n+1}\).

By consecutively solving the linearized convex problems \((\mathcal {DCCL}_s)\) starting at the point \((x^0,\alpha ^0)\), we generate the sequence \(\{(x^s,\alpha ^s)\}\): \((x^{s+1},\alpha ^{s+1})\in Sol (\mathcal {DCCL}_s)\). As proven in [21], any cluster point \((x_*,\alpha _*)\) of the sequence \(\{(x^s,\alpha ^s)\}\) generated by the LSM belongs to the feasible set \(\{(x,\alpha )\;\mid \; x\in S,\;\; g_i(x, \alpha _i)-\langle \nabla h_i(x_*,\alpha _{*i}),(x, \alpha _i)-(x_*,\alpha _{*i})\rangle -h_i(x_*,\alpha _{*i}) \le 0,\;\; i\in I \}\) and is a solution to the linearized Problem \((\mathcal {DCCL}_*)\) (which is Problem \((\mathcal {DCCL}_s)\) with \((x_*,\alpha _*)\) instead of \((x^s,\alpha ^s)\)), i.e., a critical point of Problem \((\mathcal{DCC})\): \((x_*,\alpha _*)\in Sol(\mathcal {DCCL}_*)\).
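As an illustration, a possible Python/CVXPY sketch of this LSM is given below; it assumes that S is a box, that each \(g_i\) can be written as a CVXPY-expressible convex expression, and that values and gradients of the \(h_i\) are supplied by the caller (the stopping rule here, stabilization of the objective, is a simplification of the criterion in [21]):

```python
import cvxpy as cp
import numpy as np

def lsm_dcc(x0, alpha0, g_exprs, h_vals, h_grads, lb, ub,
            tol=1e-6, max_iter=50):
    """Local search for (DCC): solve linearized problems (DCCL_s) until
    the objective sum(alpha) stabilizes.

    g_exprs[i](x, ai)  -- cvxpy expression for the convex g_i(x, alpha_i);
    h_vals[i](xs, ai), h_grads[i](xs, ai) -- value (float) and gradient
    (ndarray of length n+1, in (x, alpha_i)) of the convex h_i.
    All names are illustrative; the box S = [lb, ub] is a placeholder."""
    n, m = len(x0), len(alpha0)
    xs, als = np.asarray(x0, float), np.asarray(alpha0, float)
    prev = np.inf
    for s in range(max_iter):
        x, a = cp.Variable(n), cp.Variable(m)
        cons = [x >= lb, x <= ub]
        for i in range(m):
            hv = h_vals[i](xs, als[i])
            hg = h_grads[i](xs, als[i])
            lin = hg[:n] @ (x - xs) + hg[n] * (a[i] - als[i]) + hv
            cons.append(g_exprs[i](x, a[i]) - lin <= 0)
        prob = cp.Problem(cp.Minimize(cp.sum(a)), cons)
        prob.solve()
        xs, als = x.value, a.value
        if abs(prev - prob.value) <= tol:
            break
        prev = prob.value
    return xs, als
```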

In order to implement the LSM [21, 26] and the global search scheme for d.c. minimization [19, 22] (Stage II of Algorithm 1), we need an explicit d.c. representation of the functions \(\psi _i(x) -\alpha _i\varphi _i(x)\), \(i\in I\). As is well known, the d.c. decomposition of a function is not unique. The next section presents several possible d.c. representations of the goal function of Problem \((\mathcal{P}_\alpha )\) and of the constraint functions of Problem \((\mathcal{DCC})\) (in the case of quadratic \(\psi _i(\cdot )\) and \(\varphi _i(\cdot )\)).

4 D.C. Representations of the Goal Function and the Constraint Functions

Consider the following quadratic functions (\(\psi _i(x)>0,\;\;\varphi _i(x)>0\;\;\forall x\in S\)):

$$\psi _i(x)=\langle x,A^ix\rangle +\langle a^i,x\rangle +\xi _i,\;\;\varphi _i(x)=\langle x,B^ix\rangle +\langle b^i,x\rangle +\gamma _i,$$

where the matrices \(A^i\) and \(B^i\) are \((n\times n)\) positive definite, \(a^i,b^i\in I\!\!R^n,\;\xi _i,\gamma _i\in I\!\!R,\) \(i\in I.\) Therefore, the functions \(\psi _i(\cdot )\) and \(\varphi _i(\cdot )\) are convex functions, \(i\in I.\)

In this case, the d.c. representation of the goal function \(\varPhi _\alpha (x)\) of Problem \((\mathcal{P}_\alpha )\) (where \(\alpha =(\alpha _1,\ldots , \alpha _m)\in I\!\! R^m_+\) is the vector parameter) can be rather simple: \( \varPhi _\alpha (x)=g(x)-h_\alpha (x),\) where \( g(x)=\sum \limits _{i=1}^m \psi _i(x),\;\;h_{\alpha }(x):=\sum \limits _{i=1}^m\alpha _i\varphi _i(x), \) or, in another way, \( g^1_{\alpha }(x)=\sum \limits _{i=1}^m \left[ \langle x,A^ix\rangle +\langle (a^i-\alpha _ib^i),x\rangle \right] + \sum \limits _{i=1}^m (\xi _i-\alpha _i\gamma _i),\) \(h^1_{\alpha }(x)=\sum \limits _{i=1}^m\alpha _i\langle x,B^ix\rangle . \)
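In Python, the first of these decompositions reads as follows (an illustrative sketch; both components are convex since all \(A^i\), \(B^i\) are positive definite and \(\alpha \ge 0\)):

```python
import numpy as np

def make_dc_parts(A, a, xi, B, b, gamma, alpha):
    """Return the convex components g and h_alpha of Phi_alpha = g - h_alpha,
    where psi_i = <x,A[i]x> + <a[i],x> + xi[i] and
          phi_i = <x,B[i]x> + <b[i],x> + gamma[i]."""
    def g(x):
        return sum(x @ A[i] @ x + a[i] @ x + xi[i] for i in range(len(A)))
    def h(x):
        return sum(alpha[i] * (x @ B[i] @ x + b[i] @ x + gamma[i])
                   for i in range(len(B)))
    return g, h
```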

Remark 4

If the symmetric matrices \(A^i\) and/or \(B^i\) are indefinite, then they can be represented as differences of two symmetric positive definite matrices, \(A^i = A^i_1 - A^i_2,\; A^i_1, A^i_2 > 0,\) \(B^i = B^i_1 - B^i_2,\; B^i_1, B^i_2 > 0,\) using, for example, the method from [19]. After that, it is possible to construct the functions \(g(\cdot )\) and \(h(\cdot )\) by collecting, for all \(i\in I\), the terms with the matrices \(A^i_1\) and \(B^i_2\) into \(g(\cdot )\) and the terms with the matrices \(A^i_2\) and \(B^i_1\) into \(h(\cdot )\) (the minus sign in front of \(\alpha _i\varphi _i\) interchanges the roles of \(B^i_1\) and \(B^i_2\)), i.e.

$$\begin{aligned}&g^2_{\alpha }(x)=\sum \limits _{i=1}^m \left[ \langle x,A^i_1x\rangle +\alpha _i\langle x,B^i_2x\rangle +\langle (a^i-\alpha _ib^i),x\rangle \right] + \sum \limits _{i=1}^m (\xi _i-\alpha _i\gamma _i),\\&\qquad \qquad \qquad \quad \; h^2_{\alpha }(x)=\sum \limits _{i=1}^m \left[ \langle x,A^i_2x\rangle +\alpha _i\langle x,B^i_1x\rangle \right] . \end{aligned}$$
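One elementary way to produce such a splitting, not necessarily the method of [19], is via the spectral decomposition, with both parts shifted by a small \(\delta >0\) to make them positive definite:

```python
import numpy as np

def split_pd(A, delta=1e-6):
    """Represent a symmetric matrix A as A1 - A2 with A1, A2 positive definite.
    A1 collects the nonnegative spectral part, A2 the negative part; both are
    shifted by delta*I, so A1 - A2 = A exactly."""
    w, Q = np.linalg.eigh(A)
    A1 = (Q * np.maximum(w, 0.0)) @ Q.T + delta * np.eye(A.shape[0])
    A2 = (Q * np.maximum(-w, 0.0)) @ Q.T + delta * np.eye(A.shape[0])
    return A1, A2
```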

A more complicated d.c. representation is needed in Problem \((\mathcal{DCC})\), because there \(\alpha =(\alpha _1,\ldots , \alpha _m)\in I\!\! R^m_+\) is a variable, so the problem contains the following nonconvex term

$$\begin{aligned} \alpha _{i}\varphi _i(x)= \alpha _{i}\langle x,B^ix\rangle +\alpha _{i}\langle b^i,x\rangle +\alpha _{i}\gamma _i, \end{aligned}$$
(4)

which generates the bilinearity, and as a consequence, the nonconvexity in every constraint (\(i\in I\)).

The term \(\alpha _{i}\langle b^i,x\rangle \) in (4) can be represented in the d.c. form as follows

$$\begin{aligned} \langle \alpha _{i}b^i,x\rangle =\frac{1}{4} \parallel \alpha _{i}b^i + x\parallel ^2 -\frac{1}{4} \parallel \alpha _{i}b^i - x\parallel ^2,\;\;i\in I. \end{aligned}$$
(5)

Further, the product \(\alpha _{i}\langle x,B^ix\rangle \) can be expressed analogously to formula (5):

$$ \begin{array}{c} \alpha _i\langle x,B^ix\rangle =\displaystyle \frac{1}{4} \left( \alpha _i + \langle x,B^ix\rangle \right) ^2 -\displaystyle \frac{1}{4} \left( \alpha _i - \langle x,B^ix\rangle \right) ^2,\;\;i\in I, \end{array} $$

if \(B^i,\; i\in I,\) are positive definite matrices and the following conditions hold

$$\begin{aligned} \alpha _i + \langle x,B^ix\rangle \ge 0, \;\;\alpha _i - \langle x,B^ix\rangle \ge 0\;\;\forall x\in S,\;\;i\in I. \end{aligned}$$
(6)
Then the functions
$$ \begin{array}{c} g_i(x,\alpha _{i})=\psi _i(x)+\displaystyle \frac{1}{4} \left( \alpha _i - \langle x,B^ix\rangle \right) ^2+\displaystyle \frac{1}{4} \parallel \alpha _{i}b^i - x\parallel ^2-\,\alpha _{i}\gamma _i, \\ h_i(x,\alpha _{i})=\displaystyle \frac{1}{4} \left( \alpha _i + \langle x,B^ix\rangle \right) ^2+\displaystyle \frac{1}{4} \parallel \alpha _{i}b^i + x\parallel ^2 \end{array} $$

are convex functions, and their difference represents the constraints of Problem \((\mathcal{DCC})\) in the following d.c. form:

$$\begin{aligned} \psi _i(x)- \alpha _{i}\varphi _i(x)=g_i(x,\alpha _{i})-h_i(x,\alpha _{i})\le 0, \;\;i\in I. \end{aligned}$$
(7)
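Both representations are instances of the polarization identity \(uv=\frac{1}{4}(u+v)^2-\frac{1}{4}(u-v)^2\); the following Python snippet verifies (5) and the bilinear-term formula numerically on random data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
x = rng.standard_normal(n)
b = rng.standard_normal(n)
alpha = abs(rng.standard_normal())
M = rng.standard_normal((n, n))
B = M @ M.T + np.eye(n)                      # positive definite B

# Identity (5): <alpha*b, x> = 1/4||alpha*b + x||^2 - 1/4||alpha*b - x||^2
lhs = alpha * b @ x
rhs = (0.25 * np.linalg.norm(alpha * b + x) ** 2
       - 0.25 * np.linalg.norm(alpha * b - x) ** 2)
assert np.isclose(lhs, rhs)

# Bilinear term: alpha*<x,Bx> = 1/4(alpha + <x,Bx>)^2 - 1/4(alpha - <x,Bx>)^2
s = x @ B @ x
assert np.isclose(alpha * s, 0.25 * (alpha + s) ** 2 - 0.25 * (alpha - s) ** 2)
```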

5 Global Search Scheme for Solving the Sum-of-Ratios Problems

Previous computational experiments [9, 11, 12] have demonstrated that Algorithm 1, developed for solving fractional programs via d.c. minimization (see Sect. 2), is quite efficient when applied to problems with affine functions in the ratios. The algorithm has also shown its effectiveness for problems with quadratic numerators and linear denominators [9, 11, 12]. However, the algorithm spends a lot of run-time on finding the vector parameter \(\alpha \) at which the optimal value of Problem \((\mathcal{P}_\alpha )\) equals zero. The shortcoming of reducing the sum-of-ratios problem \(( \mathcal{FP})\) to the problem with d.c. constraints \(( \mathcal{DCC})\) is, in turn, the considerable run-time spent on solving Problem \(( \mathcal{DCC})\).

The results of these computational experiments suggest combining the two approaches to solving fractional programs. Thus, we propose to use the local search for Problem \(( \mathcal{DCC})\) to find a starting value \(\alpha ^0\) of the parameter \(\alpha \), so that less time is needed to drive the optimal value function of Problem \(( \mathcal{P}_\alpha )\) to zero. This idea can be implemented by a method consisting of three basic parts: the local search in Problem \((\mathcal{DCC})\) with d.c. inequality constraints; the global search in the d.c. minimization Problem \((\mathcal{P}_\alpha )\) with a fixed vector parameter \(\alpha \) (found by the LSM for Problem \((\mathcal{DCC})\)); and the method for finding the vector \(\alpha \) at which the optimal value of Problem \((\mathcal{P}_\alpha )\) is equal to zero.

We denote \(g ^k(\cdot )=g_{\alpha ^k}(\cdot ),\;\;h ^k(\cdot )=h_{\alpha ^k}(\cdot ),\;\; \varPhi _{k}(\cdot )=\varPhi _{\alpha ^k}(\cdot )\).

Let an initial point \(x_0\in S\), a vector \(\alpha _0=(\alpha _{01},\ldots ,\alpha _{0m})\in I\!\! R^m_+\), and an initial segment \([\alpha ^0_-, \alpha ^0_+]\) for varying \(\alpha \) be given. In addition, let there be given number sequences \(\{\tau _s\}\), \(\{ \delta _s\}\), \(\{ \varepsilon _s\}\) such that \(\tau _s,\;\delta _s,\; \varepsilon _s > 0\), \(s=0,1,2,\ldots ,\) and \(\tau _s\downarrow 0,\;\delta _s\downarrow 0,\; \varepsilon _s\downarrow 0\;\;(s\rightarrow \infty ).\)

Global search scheme for fractional program (F-GSS)

Stage 0. (Initialization) \(k:=0,\;\;\vartheta ^k:=(x_0,\alpha _0),\;\;\alpha _-^k:=0,\;\;\alpha _+^k:=\alpha _+^0.\)

Stage I. (Local search) Starting from the point \(\vartheta ^k\), find by the LSM from [21] a critical point \((x(\alpha ^k),\alpha ^k)\) of the d.c. constrained Problem \((\mathcal{DCC})\).

Stage II. (Global search) Starting from the point \(x(\alpha ^k)\), find a solution \(z(\alpha ^k)\) to Problem \((\mathcal{P}_{k})\) with the help of the global search scheme for d.c. minimization [19, 22], which consists of the following steps (a condensed Python sketch of this stage is given after the scheme):

Step 0. \(s:=0, \;\;z^s:=x(\alpha ^k),\;\;\zeta _s:=\varPhi _{k}(z^s)\).

Step 1. Choose a number \(\beta \in [\beta _-,\beta _+]\), where the numbers \(\beta _-=\inf (g^k,S)\) and \(\beta _+=\sup (g ^k,S)\) can be approximated by rather rough estimates. Set \(\beta _0:=g ^k(z^0)\).

Step 2. Construct an approximation \( \mathcal{A} (\beta ) = \{y^1, \dots , y^N \mid h ^k(y^j)=\beta - \zeta _s,\;j=1,\ldots , N=N(\beta )\}\) of the level surface \(\{y\in I\!\! R^n \mid h ^k(y)=\beta - \zeta _s\}\) of the function \(h ^k(\cdot )\) and, according to Theorem 2, form the collection of indices \(J_s=J_s(\beta )=\{j\in \{1,\ldots ,N(\beta )\}\;|\;g ^k(y^j)\le \beta \}.\)

Step 3. If \(J_s=\emptyset ,\) then set \(\beta :=\beta +\varDelta \beta \in [\beta _-,\beta _+]\) and loop to Step 2.

Step 4. For every \(j\in J_s\), find a global \(2\delta _s\)-solution \(\overline{u}^j\in S\) to the linearized convex problem \( g ^k(x)-\langle \nabla h ^k(y^{j}),x\rangle \downarrow \min \limits _x,\;\;x\in S, \) and after that, starting at \(\overline{u}^j\in S\), apply the LSM from [19] to produce a \(2\tau _s\)-critical vector \(u^j\in S\), so that \( g ^k({u}^j)-\langle \nabla h ^k(u^j),u^j\rangle -2\tau _s \le \inf \limits _x\{g ^k(x)-\langle \nabla h ^k(u^j),x\rangle \;|\; x\in S\}. \)

Step 5. For every \(j\in J_s\), find a global \(2\delta _s\)-solution \(v^j\): \(h ^k(v^j)=\beta -\zeta _s\), to the level problem (see the Global Optimality Condition (\(\mathcal{E}\))) \(\langle \nabla h ^k(v),u^j-v\rangle \uparrow \max \limits _v,\;\;h ^k(v)=\beta -\zeta _s.\) Note that for a quadratic function \(h ^k(\cdot )\) this problem can be solved analytically.

Step 6. Compute the number \(\eta _s(\beta ):=\eta ^0_s(\beta )-\beta ,\) where \(\eta ^0_s (\beta ):=g ^k(u^p)- \langle \nabla h ^k(v^p),u^p-v^p\rangle : =\min \limits _{j\in J_s} \{ g ^k(u^j)-\langle \nabla h ^k(v^j),u^j-v^j\rangle \}. \)

Step 7. If \(\eta _s(\beta )<0,\) then set \(s:=s+1\), \(z^s:=u^p\), and loop to Step 2.

Step 8. Otherwise, set \(\beta :=\beta +\varDelta \beta \in [\beta _-,\beta _+] \) and go to Step 2. If \(\eta _s(\beta )\ge 0\;\forall \beta \in [\beta _-,\beta _+ ]\) (i.e., the one-dimensional search over \(\beta \) is finished) and \(\delta _s\le \delta _*,\;\tau _s\le \tau _*,\; \varepsilon _s\le \varepsilon _*\), where \(\delta _*>0,\;\tau _*>0,\;\varepsilon _*>0\) are the fixed accuracies of the corresponding computations, then the global search (Stage II) terminates: \(z(\alpha ^k):=z^s\).

Stage III. (Stopping criterion) If \(\mathcal{V}(\alpha ^k)=0\) and \(\min \limits _i \{\psi _i(z(\alpha ^k)) - \alpha ^k_i \varphi _i(z(\alpha ^k))\} \ge 0\), then STOP: \(z(\alpha ^k)\in Sol(\mathcal{FP})\).

Stage IV. (Parameter variation) Apply the parameter variation procedure from Sect. 2 to find new parameters \(\alpha ^{k+1},\;\alpha _-^{k+1}\) and \(\alpha _+^{k+1}\). Set \(\vartheta ^{k+1}:=(z(\alpha ^k),\alpha ^{k+1})\), \(k:=k+1\), and go to Stage I.
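For the pure quadratic decomposition \(h^1_\alpha (x)=\sum _i\alpha _i\langle x,B^ix\rangle =\langle x,Cx\rangle \) of Sect. 4, Stage II admits the following condensed Python sketch. It is only an illustration of Steps 0–8 under simplifying assumptions (a fixed grid of \(\beta \) values, a fixed set of sampling directions for \(\mathcal{A}(\beta )\), the first improving point accepted instead of the minimum over \(J_s\), all sampled points nonzero); `solve_linearized` and `local_search` stand for a convex solver and the LSM of [19], and the level problem of Step 5 is solved by its closed form \(v=\sqrt{r/\langle u,Cu\rangle }\,u\):

```python
import numpy as np

def stage2_global_search(z0, g, C, solve_linearized, local_search,
                         beta_grid, directions, tol=1e-6, max_rounds=20):
    """Condensed sketch of Stage II for Phi(x) = g(x) - <x, C x>,
    C positive definite (the decomposition h^1 of Sect. 4).

    solve_linearized(w): approx. argmin_{x in S} g(x) - <w, x>  (convex);
    local_search(x):     the LSM of [19] started at x;
    directions:          nonzero vectors d^j used to sample A(beta)."""
    z = local_search(z0)
    for _ in range(max_rounds):
        zeta = g(z) - z @ C @ z                   # current record value
        improved = False
        for beta in beta_grid:                    # one-dimensional search on beta
            r = beta - zeta
            if r <= 0:                            # level surface h = r is empty
                continue
            for d in directions:                  # approximation A(beta)
                y = np.sqrt(r / (d @ C @ d)) * d  # h(y) = beta - zeta
                if g(y) > beta:                   # index j not in J_s
                    continue
                u = local_search(solve_linearized(2.0 * C @ y))  # Step 4
                v = np.sqrt(r / (u @ C @ u)) * u  # Step 5, closed form
                eta = g(u) - (2.0 * (u @ C @ v) - 2.0 * r) - beta  # Step 6
                if eta < -tol:                    # condition (E) violated
                    z, improved = u, True         # Step 7: better point found
                    break
            if improved:
                break
        if not improved:                          # Step 8: no beta improves z
            return z
    return z
```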

6 Computational Simulations

The algorithm developed on the basis of the global search scheme F-GSS from Sect. 5, which combines two approaches to solving fractional programs \((\mathcal{FP})\) via d.c. optimization problems, was coded in C++ and applied to sum-of-ratios problems with quadratic functions in the ratios.

The set of test examples was generated by the technique from [2]. The generation method is based on Calamai and Vicente's idea [29] of constructing a nonconvex quadratic problem with known local and global solutions, and on the reduction from Theorem 1, to obtain fractional problems with quadratic functions in the numerators and denominators of the ratios.

All computational experiments were performed on an Intel Core i7-4790K CPU (4.0 GHz). All auxiliary convex (linearized) problems arising during the implementation of the LSM for the d.c. constrained problem (Stage I of the F-GSS), as well as during the global search procedures for the d.c. minimization problem (Stage II of the F-GSS), were solved by the software package IBM ILOG CPLEX 12.6.2.

Table 1 shows the results of the computational testing of the F-GSS and employs the following notation: n is the number of variables (problem dimension); m is the number of terms in the sum; \(f(x_0)\) is the value of the goal function of Problem \((\mathcal{FP})\) at the starting point \(x_0\); \(f_{glob}\) stands for the value of the goal function at the solution provided by the F-GSS; St is the number of critical points passed by the algorithm; it-\(\alpha \) is the number of variations of the parameter \(\alpha \) in the F-GSS; PL is the number of auxiliary (linearized) problems solved; T stands for the CPU time; \(f_{M} \) is the value of the goal function of Problem \((\mathcal{FP})\) provided by the fmincon solver of MATLAB.

Table 1. Results of computational testing of the F-GSS

The test problems (with known global solutions), constructed with up to 100 variables and 1000 terms in the sum, were successfully solved (see Table 1). We obtained the global solution in all tests. At the same time, the fmincon solver of MATLAB failed to find the global solution in every test problem (the starting point was the same for both algorithms); the gap ranged from 1% (\(n=100,\;m=70\)) to 4.3% (\(n=10,\;m=10\)), averaging 1.8%.

Thus, we conclude that the new computational results on solving sum-of-ratios problems with quadratic functions in the ratios are rather promising. The computational experiments showed that the algorithm developed for solving fractional programs is quite efficient.

7 Conclusions

In this paper, we showed how fractional programs with nonconvex quadratic functions can be solved by applying the Global Search Theory for d.c. optimization problems. Instead of considering a sum-of-ratios problem directly, we developed an efficient global search algorithm based on two approaches. The first one reduces the fractional minimization problem to solving an equation involving the optimal value function of a d.c. minimization problem with a vector parameter. The second is based on the reduction of the sum-of-ratios problem to an optimization problem with nonconvex constraints.

The global search algorithm developed for the fractional program was tested on examples with nonconvex quadratic functions in the numerators and denominators of the ratios. The numerical experiments demonstrated that the algorithm performs well on rather complicated sum-of-ratios problems with up to 100 variables or 1000 terms in the sum.