Abstract
We consider the problem of optimizing the sum of several rational functions via reduction to a problem with d.c. constraints. We propose a method of finding a local solution to the fractional program, which can subsequently be used in the global search method based on the global optimality conditions for a problem with nonconvex (d.c.) constraints [21–23]. According to the theory, we construct explicit representations of the constraints in the form of differences of two convex functions and perform a local search method that takes into account the structure of the problem in question. This algorithm was verified on a set of low-dimensional test problems taken from the literature, as well as on randomly generated problems with up to 200 variables and 200 terms in the sum.
1 Introduction
Fractional optimization is quite challenging and arises in various economic and non-economic applications, whenever one or several ratios are required to be optimized. Let us mention a few examples, mainly following the surveys by Schaible [8, 19, 20], where numerous other applications can be found. Numerators and denominators in the ratios may represent cost, capital, profit, risk, time, etc. Fractional programming is closely related to the associated multiple-objective optimization problem, in which a number of ratios are to be maximized simultaneously. Thus, the objective function in a fractional program can be considered as a utility function expressing a compromise between the different objective functions of the multiple-objective problem. Other applications include a multistage stochastic shipping problem [1, 7], profit maximization under fixed cost [4], various models in cluster analysis [17], multi-objective bond portfolio selection [13], and queuing-location problems [5].
It is known that, without additional assumptions, the sum-of-ratios program is NP-complete [9]. Surveys of methods for solving this problem can be found in [8, 20, 28, 29]. According to these surveys, the majority of the methods make restrictive assumptions on either the concavity or the linearity of the ratios. When the ratios are nonlinear, the most popular techniques are based on the branch-and-bound approach, see e.g. [2, 6].
We propose reducing the fractional problem to an optimization problem with nonconvex constraints [10, 23], as was mentioned in [6], with a subsequent application of the Global Search Theory for solving this class of nonconvex problems [21–23, 25, 26].
The outline of the paper is as follows. In Sect. 2 we substantiate the reduction of the sum-of-ratios fractional problem to an optimization problem with nonconvex constraints. Then in Sect. 3 we recall the local search method from [21], which linearizes, at the current point, the functions defining the basic nonconvexity of the problem with d.c. constraints. In Sect. 4 we show how to explicitly represent the nonconvex functions describing the constraints of the problem in question as differences of two convex functions (the d.c. representation). The final section offers computational testing of the local search method on fractional program instances with a small number of variables and terms in the sum. We use examples found in the literature as well as randomly generated problems of higher dimension.
2 Reduction to the Problem with Nonconvex Constraints
Now consider the following fractional optimization problem [3, 20]:

\( (\mathcal{P}_0):\qquad f(x)=\sum\limits_{i=1}^m \displaystyle\frac{\psi_i(x)}{\varphi_i(x)} \downarrow \min\limits_x,\quad x\in S, \)
where \(\psi _i:{I\!R}^n \rightarrow {I\!R},\;\;\varphi _i:{I\!R}^n \rightarrow {I\!R},\;\;\varphi _i(x)> 0,\;\forall x\in S,\;i=1,\ldots , m\), and \(S\subset {I\!R}^n\) is a convex set.
Proposition 1
(i) Let the pair \((x_*,\alpha _*)\in {I\!R}^n \times {I\!R}^m\) be a solution to the following problem:

\( (\mathcal{P}_1):\qquad \sum\limits_{i=1}^m \alpha_i \downarrow \min\limits_{(x,\alpha)},\quad x\in S,\quad \displaystyle\frac{\psi_i(x)}{\varphi_i(x)}=\alpha_i,\;\; i=1,\ldots,m. \)
Then \(x_*\) is the solution to Problem \((\mathcal{P}_0)\) and \(f(x_*)=\sum \limits _{i=1}^m \alpha _{*i}\).
(ii) Conversely, if \(x_*\) is the solution to Problem \((\mathcal{P}_0)\) (\(x_*\in Sol(\mathcal{P}_0)\)), then the vector \(\alpha _*=(\alpha _{*1},\ldots ,\alpha _{*m})^T\in {I\!R}^m\) defined as \(\alpha _{*i}=\displaystyle \frac{\psi _i(x_*)}{\varphi _i(x_*)},\;\;i=1,\ldots , m,\) is part of the solution \((x_*,\alpha _*)\) to Problem \((\mathcal{P}_1)\).
Proof
(i) Let \((x_*,\alpha _*)\in Sol(\mathcal{P}_1)\), i.e. \(\alpha _{*i}=\displaystyle \frac{\psi _i(x_*)}{\varphi _i(x_*)},\) \(i=1,\ldots , m,\;x_*\in S,\) and \(\sum \limits _{i=1}^m \alpha _{*i}\le \sum \limits _{i=1}^m \alpha _i\) for all \(\alpha \) such that \(\exists x\in S:\;\displaystyle \frac{\psi _i(x)}{\varphi _i(x)}=\alpha _i,\) \(i=1,\ldots , m\). Hence, \(f(x_*)=\sum \limits _{i=1}^m \alpha _{*i}\le \sum \limits _{i=1}^m \alpha _i=f(x)\) for every \(x\in S\).
Therefore, \(x_*\) is the solution to Problem \((\mathcal{P}_0)\).
(ii) Now let \(x_*\in Sol(\mathcal{P}_0)\). Then

\( f(x_*)\le f(x)\quad \forall x\in S. \;\;(1) \)
Define \(\alpha _{*i}:=\displaystyle \frac{\psi _i(x_*)}{\varphi _i(x_*)},\;i=1,\ldots , m,\) and consider the set

\( D_\alpha :=\Bigl\{\alpha \in {I\!R}^m \;\Bigm|\; \exists x\in S:\; \alpha_i=\displaystyle\frac{\psi_i(x)}{\varphi_i(x)},\;\; i=1,\ldots,m \Bigr\}. \)
Then (1) implies \(\sum \limits _{i=1}^m \alpha _{*i}=f(x_*)\le f(x)=\sum \limits _{i=1}^m \alpha _i\;\;\forall \alpha \in D_\alpha \) and the corresponding \(x\in S\). Therefore \((x_*,\alpha _*)\in Sol(\mathcal{P}_1)\). \(\square \)
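Proposition 1 can be checked numerically on a toy instance. The sketch below uses our own illustrative data (not an example from the paper): it minimizes a sum of two ratios over an interval by a grid search and verifies that \(f(x_*)=\sum_{i} \alpha_{*i}\) for \(\alpha_{*i}=\psi_i(x_*)/\varphi_i(x_*)\):

```python
# Brute-force check of Proposition 1 on a toy instance: minimize
#   f(x) = psi_1(x)/phi_1(x) + psi_2(x)/phi_2(x)  over S = [0, 2].
import numpy as np

psi = [lambda x: x**2 + 1.0, lambda x: 2.0 * x + 3.0]
phi = [lambda x: x + 1.0,    lambda x: x + 2.0]      # phi_i(x) > 0 on S

xs = np.linspace(0.0, 2.0, 20001)
f = sum(p(xs) / q(xs) for p, q in zip(psi, phi))     # f on the grid
x_star = xs[np.argmin(f)]

# alpha_* from part (ii): alpha_i = psi_i(x_*) / phi_i(x_*)
alpha = [p(x_star) / q(x_star) for p, q in zip(psi, phi)]
assert abs(sum(alpha) - f.min()) < 1e-12             # f(x_*) = sum of alpha_*i
```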
Proposition 2
Let the pair \((x_*,\alpha _*)\in {I\!R}^n \times {I\!R}^m\) be a solution to the following problem:

\( (\mathcal{P}_2):\qquad \sum\limits_{i=1}^m \alpha_i \downarrow \min\limits_{(x,\alpha)},\quad x\in S,\quad \displaystyle\frac{\psi_i(x)}{\varphi_i(x)}\le \alpha_i,\;\; i=1,\ldots,m. \)
Then

\( \alpha_{*i}=\displaystyle\frac{\psi_i(x_*)}{\varphi_i(x_*)},\quad i=1,\ldots,m. \;\;(2) \)
Proof
Let \((x_*,\alpha _*)\in Sol(\mathcal{P}_2)\). Suppose that, on the contrary,

\( \exists j\in \{1,\ldots,m\}:\quad \alpha_{*j} > \displaystyle\frac{\psi_j(x_*)}{\varphi_j(x_*)}, \;\;(3) \)
and construct \( \hat{\alpha }: \;\hat{\alpha }_i=\alpha _{*i}\;\;\forall i\ne j,\;\;\hat{\alpha }_j=\displaystyle \frac{\psi _j(x_*)}{\varphi _j(x_*)}\). It can be readily seen that \((x_*,\hat{\alpha })\) is a feasible pair for Problem \((\mathcal{P}_2)\). The assumption (3) implies

\( \sum\limits_{i=1}^m \hat{\alpha}_i < \sum\limits_{i=1}^m \alpha_{*i}. \)
It means that in Problem \((\mathcal{P}_2)\) the pair \((x_*,\hat{\alpha })\) is better than the pair \((x_*,\alpha _*)\), so, \((x_*,\alpha _*)\notin Sol(\mathcal{P}_2)\). This contradiction proves the equality (2). \(\square \)
Corollary 1
Any solution \((x_*,\alpha _*)\in {I\!R}^n \times {I\!R}^m\) to Problem \((\mathcal{P}_2)\) will be a solution to Problem \((\mathcal{P}_1)\), and, therefore, will be a solution to Problem \((\mathcal{P}_0)\).
Remark 1
The inequality constraints in Problem \((\mathcal{P}_2)\) can be replaced by the equivalent constraints \({\psi _i(x)} - \alpha _{i}{\varphi _i(x)}\le 0,\;\;i=1,\ldots , m,\) since \(\varphi _i(x) > 0\;\;\forall x\in S\). This leads us to the following problem with m nonconvex constraints:

\( (\mathcal{P}):\qquad \sum\limits_{i=1}^m \alpha_i \downarrow \min\limits_{(x,\alpha)},\quad x\in S,\quad \psi_i(x)-\alpha_i\varphi_i(x)\le 0,\;\; i=1,\ldots,m. \)
It is easy to see that Problem \((\mathcal{P})\) is a global optimization problem with the nonconvex feasible set (see, e.g., [10, 23]), and we can apply the Global Search Theory for solving this class of nonconvex problems [21–23, 25, 26]. The Theory allows one to construct an algorithm for solving problems with nonconvex constraints. The algorithm contains two principal stages: (a) local search, which provides an approximately critical point; (b) procedures of escaping from critical points.
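The constraint rewrite from Remark 1 is easy to confirm numerically. The sketch below uses synthetic data of our own choosing: for a strictly positive denominator, both constraint forms select the same set of points (the check is restricted to points away from the constraint boundary, where floating-point comparisons are reliable):

```python
# Sanity check: for phi(x) > 0, the constraints
#   psi(x)/phi(x) <= a   and   psi(x) - a*phi(x) <= 0
# describe the same feasible set.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-5.0, 5.0, size=10000)
a = rng.uniform(-5.0, 5.0, size=10000)
psi = x**2 - 3.0 * x            # an arbitrary continuous numerator
phi = x**2 + 1.0                # strictly positive denominator

# compare only points bounded away from the boundary psi - a*phi = 0
mask = np.abs(psi - a * phi) > 1e-6
assert np.array_equal((psi / phi <= a)[mask], (psi - a * phi <= 0)[mask])
```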
In the next section, we shall consider a local search method.
3 Local Search for Problem with D.C. Constraints
Local search methods (LSMs) play an important role in searching for a global solution to nonconvex problems, since they provide so-called critical (stationary) points that may be considerably better than a simple feasible point. Moreover, if a starting point happens to be rather close to the global solution, then an LSM can provide the global solution.
In order to find a local solution to Problem \((\mathcal{P})\), we apply a special LSM [21].
Let us consider the following problem with d.c. constraints:

\( f_0(x) \downarrow \min\limits_x,\quad x\in S,\quad f_i(x):=g_i(x)-h_i(x)\le 0,\;\; i\in I=\{1,\ldots,m\}, \;\;(4) \)
where the functions \(f_0\) and \(g_i,h_i\), \(i\in I\), as well as the set \(S\subset {I\!R}^n\), are convex. Further, suppose that the feasible set \(D:=\{\, x\in S \mid f_i(x) \le 0,\;\; i\in I \,\} \) of problem (4) is not empty and the optimal value \(\mathcal{V}(4):= \inf \limits _x \{ f_0(x) \mid x\in D \}\) of problem (4) is finite: \(\mathcal{V}(4) > -\infty .\)
Furthermore, assume that a feasible starting point \(x^0\in D\) is given and that, after several iterations, the method has produced the current iterate \(x^s\in D\), \(s\in \mathbb {Z}_+=\{0,1,2,\ldots \}\).
In order to propose a local search method for the problem (4), we apply the classical idea of linearization with respect to the basic nonconvexity of the problem (i.e. with respect to \(h_i(\cdot ),\;i\in I\)) at the point \(x^s\) [21]. Thus, we obtain the following linearized problem:

\( (\mathcal{PL}_s):\qquad f_0(x) \downarrow \min\limits_x,\quad x\in S,\quad g_i(x)-\langle \nabla h_i(x^s),x-x^s\rangle - h_i(x^s)\le 0,\;\; i\in I. \)
Suppose the point \(x^{s+1}\) is provided by solving Problem \((\mathcal {PL}_s)\), so that \(x^{s+1}\in D_s:=\{\, x\in S \mid g_i(x)-\langle \nabla h_i(x^s),x-x^s\rangle - h_i(x^s)\le 0,\;\; i\in I \,\}\)
and the inequality \( f_{0} (x^{s+1}) \le \mathcal {V} (\mathcal {PL}_s) + \delta _s \) holds. Here \(\mathcal {V}(\mathcal {PL}_s)\) is the optimal value of Problem \((\mathcal {PL}_s)\): \(\mathcal{V}(\mathcal{PL}_s):=\inf\limits_x\{\, f_0(x) \mid x\in D_s \,\},\)
and the sequence \(\{\delta _s\}\) satisfies the condition \(\sum \limits _{s = 0}^{\infty }\delta _{s} < + \infty \). It can easily be seen that \(D_s \subset D\), so \(x^{s+1}\) turns out to be feasible in problem (4). Indeed, since the functions \(h_i(\cdot )\), \(i\in I\), are convex, the following inequalities hold for all \(x\in D_s\):

\( f_i(x)=g_i(x)-h_i(x)\le g_i(x)-h_i(x^s)-\langle \nabla h_i(x^s),x-x^s\rangle \le 0. \)
Therefore, the LSM generates a sequence \(\{x^s\}\), \(x^s\in D_s\), \(s=0,1,2,\ldots \), of solutions to Problems \((\mathcal {PL}_s)\). As was proven in [21], any cluster point \(x_*\) of the sequence \(\{x^s\}\) is a solution to the linearized Problem \((\mathcal {PL}_*)\) (which is Problem \((\mathcal {PL}_s)\) with \(x_*\) in place of \(x^s\)), and \(x_*\) can be called a critical point with respect to the LSM. Thus, the algorithm constructed in this way provides critical points by employing suitable convex optimization methods [15] for any given accuracy \(\tau \). The inequality

\( f_0(x^s)-f_0(x^{s+1})\le \displaystyle\frac{\tau}{2} \qquad \Bigl(\text{together with } \delta_s\le \displaystyle\frac{\tau}{2}\Bigr) \)

can be chosen as a stopping criterion for the LSM [21].
In order to implement the LSM, we need an explicit d.c. representation of \(f_i(\cdot )\), i.e. \(f_i(\cdot )=g_i(\cdot )-h_i(\cdot )\), \(i\in I\).
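The scheme above can be sketched in a few lines. The toy problem below is our own illustration (not an instance from the paper): minimize \(f_0(x)=\Vert x-c\Vert^2\) subject to the d.c. constraint \(f_1(x)=1-\Vert x\Vert^2\le 0\), i.e. \(g_1\equiv 1\) (convex) and \(h_1(x)=\Vert x\Vert^2\) (convex), so the feasible set is the nonconvex region outside the unit ball. Each linearized subproblem is solved here with SciPy's general-purpose SLSQP solver rather than CPLEX, and, for simplicity, to a fixed high accuracy instead of the \(\delta_s\) schedule:

```python
# A minimal sketch of the linearization-based local search (LSM) for a
# d.c.-constrained toy problem.  At the iterate x^s the concave part -h1
# is linearized, giving the convex subproblem constraint
#   1 - h1(x^s) - <grad h1(x^s), x - x^s> <= 0.
import numpy as np
from scipy.optimize import minimize

c = np.array([0.2, 0.0])
f0 = lambda x: float(np.sum((x - c) ** 2))

x = np.array([2.0, 0.5])              # feasible start: outside the unit ball
tau = 1e-6
for s in range(200):
    xs = x.copy()
    lin = {"type": "ineq",            # scipy convention: fun(x) >= 0 is feasible
           "fun": lambda x, xs=xs: -(1.0 - xs @ xs - 2.0 * xs @ (x - xs))}
    x = minimize(f0, xs, constraints=[lin], tol=1e-10).x
    if f0(xs) - f0(x) <= tau / 2:     # stopping test in the spirit of Sect. 3
        break

assert abs(np.linalg.norm(x) - 1.0) < 1e-3   # the critical point lies on the sphere
assert abs(f0(x) - 0.64) < 1e-3              # here it is also the global solution
```

Note that every iterate stays feasible (\(D_s\subset D\)): the iterates slide along the boundary of the ball toward the point nearest to \(c\).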
4 D.C. Representation of the Constraints
The first stage of any algorithm developed according to the Global Search Theory is the decomposition of a nonconvex function into a difference of two convex functions. Such a decomposition can be constructed in several different ways, depending on the functions \(\psi _i(\cdot )\) and \(\varphi _i(\cdot )\).
4.1 Affine Functions
Let the functions \(\psi _i (\cdot )\) and \(\varphi _i (\cdot )\) be constructed by means of the vectors \(a^i,c^i\in {I\!R}^n\) and the numbers \(b_i,d_i\in {I\!R}\):

\( \psi_i(x)=\langle a^i,x\rangle + b_i,\qquad \varphi_i(x)=\langle c^i,x\rangle + d_i,\quad i\in I. \)
In this case, the basic nonconvexity of Problem \((\mathcal{P})\) is the bilinear term \(\alpha _{i}\varphi _i(x)= \langle \alpha _{i}c^i,x\rangle +\alpha _{i}d_i \) in each constraint \((i\in I)\). The bilinear function can be represented as a difference of two convex functions in the following way [27]:

\( \alpha_i\langle c^i,x\rangle = \displaystyle\frac{1}{4}\parallel \alpha_i c^i + x\parallel^2 - \displaystyle\frac{1}{4}\parallel \alpha_i c^i - x\parallel^2. \;\;(5) \)
Hence, the functions \(f_i(\cdot )\) have the form \(f_i(x,\alpha _{i})=g_i(x,\alpha _{i})-h_i(x,\alpha _{i}),\) where

\( g_i(x,\alpha_i)=\langle a^i,x\rangle + b_i - \alpha_i d_i + \displaystyle\frac{1}{4}\parallel \alpha_i c^i - x\parallel^2,\qquad h_i(x,\alpha_i)=\displaystyle\frac{1}{4}\parallel \alpha_i c^i + x\parallel^2. \;\;(6) \)
Taking into account the d.c. representation (6), the linearized Problem \((\mathcal {PL}_s)\) has the following form:

\( \sum\limits_{i=1}^m \alpha_i \downarrow \min\limits_{(x,\alpha)},\quad x\in S,\quad \langle a^i,x\rangle - \alpha_i d_i + \displaystyle\frac{1}{4}\parallel \alpha_i c^i - x\parallel^2 - \langle \nabla h_i(x^s,\alpha_i^s),(x,\alpha_i)\rangle + \mathcal{C}_{is} \le 0,\;\; i\in I, \;\;(7) \)
where \(\;\;\mathcal{C}_{is}=b_i+\langle \nabla h_i(x^s,\alpha _{i}^s),(x^s,\alpha ^s_{i})\rangle -\displaystyle \frac{1}{4} \parallel \alpha ^s_{i}c^i + x^s\parallel ^2\), and the gradient of \(h_i(\cdot)\) is given by

\( \nabla h_i(x^s,\alpha^s_i)=\Bigl(\displaystyle\frac{1}{2}\bigl(\alpha^s_i c^i + x^s\bigr),\;\displaystyle\frac{1}{2}\langle c^i,\alpha^s_i c^i + x^s\rangle\Bigr)^T. \;\;(8) \)
The problem (7) is a convex optimization problem, and it can be solved by an appropriate convex optimization method [15] to a given accuracy \(\delta _s >0,\) \(s=0,1,\ldots \).
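The d.c. splitting of the bilinear term rests on the polarization identity \(\langle y,z\rangle=\frac14\Vert y+z\Vert^2-\frac14\Vert y-z\Vert^2\), which is quick to verify numerically (here with \(y=\alpha_i c^i\) and \(z=x\), as in the constraint terms above):

```python
# Numerical check of the polarization identity behind the d.c. splitting:
#   <y, z> = (1/4)||y + z||^2 - (1/4)||y - z||^2.
import numpy as np

rng = np.random.default_rng(1)
y, z = rng.normal(size=50), rng.normal(size=50)
lhs = y @ z
rhs = 0.25 * np.sum((y + z) ** 2) - 0.25 * np.sum((y - z) ** 2)
assert abs(lhs - rhs) < 1e-10

# With y = alpha_i * c^i and z = x, this writes the bilinear term
# alpha_i * <c^i, x> as a difference of two convex quadratics in (x, alpha_i).
alpha, ci, x = 2.5, rng.normal(size=5), rng.normal(size=5)
bilinear = alpha * (ci @ x)
dc_form = (0.25 * np.sum((alpha * ci + x) ** 2)
           - 0.25 * np.sum((alpha * ci - x) ** 2))
assert abs(bilinear - dc_form) < 1e-10
```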
Further, we will consider Problem \((\mathcal{P})\) where \(\psi _i(\cdot )\) are convex quadratic functions and \(\varphi _i(\cdot )\) are affine functions, \(i\in I\).
4.2 Quadratic/Affine Functions
Suppose we are given symmetric positive definite \((n\times n)\) matrices \(A^i\), vectors \(p^i,c^i\in {I\!R}^n,\) and scalars \(q_i, d_i\in {I\!R}\):

\( \psi_i(x)=\langle x,A^ix\rangle + \langle p^i,x\rangle + q_i,\qquad \varphi_i(x)=\langle c^i,x\rangle + d_i,\quad i\in I. \)
As has been done in Subsect. 4.1, we represent the bilinear term \(\alpha _{i}\varphi _i(x)\) as a difference of two convex functions, which yields the d.c. representations \(f_i(x,\alpha _{i})=g_i(x,\alpha _{i})-h_i(x,\alpha _{i}),\;\;i\in I,\) where

\( g_i(x,\alpha_i)=\langle x,A^ix\rangle + \langle p^i,x\rangle + q_i - \alpha_i d_i + \displaystyle\frac{1}{4}\parallel \alpha_i c^i - x\parallel^2,\qquad h_i(x,\alpha_i)=\displaystyle\frac{1}{4}\parallel \alpha_i c^i + x\parallel^2. \;\;(9) \)
Taking into account the d.c. representation (9), the linearized Problem \((\mathcal {PL}_s)\) takes the following form:

\( \sum\limits_{i=1}^m \alpha_i \downarrow \min\limits_{(x,\alpha)},\quad x\in S,\quad \langle x,A^ix\rangle + \langle p^i,x\rangle - \alpha_i d_i + \displaystyle\frac{1}{4}\parallel \alpha_i c^i - x\parallel^2 - \langle \nabla h_i(x^s,\alpha_i^s),(x,\alpha_i)\rangle + \mathcal{C}_{is} \le 0,\;\; i\in I, \;\;(10) \)
where the gradient \(\nabla h_i(x^s,\alpha ^s_{i})\) is calculated by the formula (8), and \(\mathcal{C}_{is}=q_i+\langle \nabla h_i(x^s,\alpha _{i}^s),(x^s,\alpha ^s_{i})\rangle -\displaystyle \frac{1}{4} \parallel \alpha ^s_{i}c^i + x^s\parallel ^2\).
The problem (10), as well as (7), can be solved by a suitable convex optimization method [15].
Remark 2
If the symmetric matrices \(A^i\) in the quadratic functions \(\psi _i(\cdot )\) are indefinite, then one can represent \(A^i\) as the difference of two symmetric positive definite matrices \(A^i = A^i_1 - A^i_2,\; A^i_1, A^i_2 > 0,\) using, for example, a simple method from [24]. Afterwards, it is possible to construct functions \(g_i(\cdot )\) and \(h_i(\cdot )\) as follows: for all \(i\in I\) add the convex part with the matrix \(A^i_1\) to the function \(g_i(\cdot )\) and the nonconvex part with the matrix \(A^i_2\) to \(h_i(\cdot )\).
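One simple way to obtain such a splitting (an eigenvalue shift; the method of [24] may use a different construction) is to set \(A^i_1 = A^i + \mu I\), \(A^i_2 = \mu I\) with \(\mu > \max(0, -\lambda_{\min}(A^i))\):

```python
# Split an indefinite symmetric matrix A as A = A1 - A2 with A1, A2
# positive definite, via a spectral shift.
import numpy as np

rng = np.random.default_rng(2)
M = rng.normal(size=(4, 4))
A = (M + M.T) / 2.0                       # symmetric, generically indefinite

mu = max(0.0, -np.linalg.eigvalsh(A).min()) + 1.0
A1, A2 = A + mu * np.eye(4), mu * np.eye(4)

assert np.allclose(A, A1 - A2)
assert np.linalg.eigvalsh(A1).min() > 0   # A1 is positive definite
assert np.linalg.eigvalsh(A2).min() > 0   # A2 is positive definite
```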
In what follows, we will examine the case where \(\psi _i(\cdot )\) and \(\varphi _i(\cdot )\) are convex quadratic functions, \(i\in I\).
4.3 Quadratic Functions
Now let us consider the following functions:

\( \psi_i(x)=\langle x,A^ix\rangle + \langle p^i,x\rangle + q_i,\qquad \varphi_i(x)=\langle x,B^ix\rangle + \langle c^i,x\rangle + d_i, \)

where \(A^i\) and \(B^i\) are positive definite \((n\times n)\) matrices, \(p^i,c^i\in {I\!R}^n,\;q_i,d_i\in {I\!R},\) \(i\in I.\) Therefore, each constraint of Problem \((\mathcal{P})\) contains the term

\( \alpha_i\varphi_i(x)=\alpha_i\langle x,B^ix\rangle + \alpha_i\langle c^i,x\rangle + \alpha_i d_i, \;\;(11) \)

which generates its nonconvexity (\(i\in I\)).
The term \(\alpha _{i}\langle c^i,x\rangle \) in (11) can be presented in the d.c. form by the formula (5).
Further, let us denote \(r_i:=\langle x,B^ix\rangle \). Then the product \(\alpha _{i}r_i\) can be expressed by formula (5) as follows:

\( \alpha_i r_i = \displaystyle\frac{1}{4}\left(\alpha_i + r_i\right)^2 - \displaystyle\frac{1}{4}\left(\alpha_i - r_i\right)^2. \)
If \(B^i,\; i\in I,\) are positive definite matrices and the following conditions hold:

\( \alpha_i\ge 0,\qquad \alpha_i-\langle x,B^ix\rangle \le 0\quad \forall x\in S,\;\; i\in I, \;\;(12) \)

then the functions

\( g_i(x,\alpha_i)=\psi_i(x) - \alpha_i d_i + \displaystyle\frac{1}{4}\left(\alpha_i - \langle x,B^ix\rangle\right)^2 + \displaystyle\frac{1}{4}\parallel \alpha_i c^i - x\parallel^2, \)

\( h_i(x,\alpha_i)=\displaystyle\frac{1}{4}\left(\alpha_i + \langle x,B^ix\rangle\right)^2 + \displaystyle\frac{1}{4}\parallel \alpha_i c^i + x\parallel^2 \)

are convex functions. Hence, we obtain the following d.c. representation:

\( f_i(x,\alpha_i)=g_i(x,\alpha_i)-h_i(x,\alpha_i),\quad i\in I, \;\;(13) \)
and the following linearized Problem \((\mathcal {PL}_s)\):

\( \sum\limits_{i=1}^m \alpha_i \downarrow \min\limits_{(x,\alpha)},\quad x\in S,\quad \langle x,A^ix\rangle + \langle p^i,x\rangle - \alpha_i d_i + \displaystyle\frac{1}{4}\left(\alpha_i - \langle x,B^ix\rangle\right)^2 + \displaystyle\frac{1}{4}\parallel \alpha_i c^i - x\parallel^2 - \langle \nabla h_i(x^s,\alpha_i^s),(x,\alpha_i)\rangle + \mathcal{C}_{is} \le 0,\;\; i\in I, \;\;(14) \)
where \(\mathcal{C}_{is}=q_i+\langle \nabla h_i(x^s,\alpha _{i}^s),(x^s,\alpha ^s_{i})\rangle -\displaystyle \frac{1}{4} \left( \alpha ^s_i + \langle x^s,B^ix^s\rangle \right) ^2-\displaystyle \frac{1}{4} \parallel \alpha ^s_{i}c^i + x^s\parallel ^2\), and \(\nabla h_i(x^s,\alpha ^s_{i})=(\nabla h_{ix},\nabla h_{i\alpha })^T\) is the gradient of the function \(h_i(\cdot )\):

\( \nabla h_{ix}=\left(\alpha^s_i + \langle x^s,B^ix^s\rangle\right)B^ix^s + \displaystyle\frac{1}{2}\left(\alpha^s_i c^i + x^s\right),\qquad \nabla h_{i\alpha}=\displaystyle\frac{1}{2}\left(\alpha^s_i + \langle x^s,B^ix^s\rangle\right) + \displaystyle\frac{1}{2}\langle c^i,\alpha^s_i c^i + x^s\rangle. \)
If the conditions (12) are not satisfied, one can construct the d.c. representation (13) by decomposition of the trilinear term \( \alpha _{i}\langle x,B^ix\rangle = \sum \limits _{l=1}^n \sum \limits _{j=1}^n b^i_{lj}x_lx_j\alpha _i \) using the following equality holding for the product of three variables (for example, u, v, w):
Therefore, we get (13), where
Obviously, in this case the linearized Problem \((\mathcal {PL}_s)\) is a problem of minimizing a linear function over a convex feasible set given by nonlinear functions \(\varphi _{ik}(x,\alpha _{i})\) that are more complicated than those in the problems (7), (10), or even (14). At the same time, the linearized problems remain convex and can therefore be solved by a suitable convex optimization method [15].
Remark 3
If the symmetric matrices \(A^i\) and \(B^i\) in the functions \(\psi _i(\cdot )\) and \(\varphi _i(\cdot )\), respectively, are indefinite, then this case is already described above in Remark 2.
5 Computational Simulations
The algorithm of the local search method (LSM) from Sect. 3 was implemented in C++ and tested with various starting points. All computational experiments were performed on an Intel Core i7-4790K CPU at 4.0 GHz.
At each iteration of the LSM, the convex Problem \((\mathcal{PL}_s)\) was solved with the software package IBM ILOG CPLEX 12.6.2 [11]. The accuracy of the LSM was \(\tau =10^{-6} \). The accuracy of the solutions to the linearized problems \((\mathcal{PL}_s)\) increased during the run of the LSM: we solved \((\mathcal{PL}_s)\) at low accuracy at the first steps, and then gradually improved the accuracy \(\delta _s\) \((\delta _s \downarrow 0)\), i.e., \(\delta _0 = 0.1,\) \(\delta _{s+1} = 0.5\delta _s,\) until the condition \(\delta _s\le \frac{\tau }{2}\) was fulfilled for the given accuracy \(\tau > 0\).
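With these values the accuracy schedule takes a fixed, easily verified number of halvings before reaching the threshold \(\tau/2\):

```python
# The subproblem-accuracy schedule described above:
# delta_0 = 0.1, delta_{s+1} = 0.5 * delta_s, until delta_s <= tau / 2.
tau = 1e-6
delta, halvings = 0.1, 0
while delta > tau / 2:
    delta *= 0.5
    halvings += 1

assert delta <= tau / 2 < 2 * delta   # the first delta_s below the threshold
assert halvings == 18                 # 0.1 * 0.5**18 ~= 3.8e-7 <= 5e-7
```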
At the first stage, we numerically solved several instances of fractional programming problems from [2, 3, 7, 14, 16, 18] with a small number of variables.
5.1 Low-Dimensional Fractional Program with Affine Functions
Tables 1 and 2 represent the results of the computational testing of the LSM and employ the following designations:
name is the name of the test example;
n is the number of variables (problem’s dimension);
m is the number of terms in the sum;
\(f_0(x^0)\) is the value of the goal function to Problem \((\mathcal{P})\) at the starting point;
\(f_0(z)\) is the value of the function at the critical point provided by the LSM;
it is the number of linearized problems solved (iterations of the LSM);
Time stands for the CPU time of computing (seconds);
\(x^0\) stands for the starting point chosen in the test problem;
z is the critical point provided by the LSM.
Note that in the problems “Prob6 [2]” and “Prob1 [16]” in Table 2, local solutions derived by the LSM are not global (shown in bold).
Known global solutions were found by the local search alone in almost all problem instances, which confirms the computational effectiveness of the LSM. All test problems were successfully solved.
Further, we study whether the LSM performance is affected by an increase in the dimension of the variable x and in the number of terms in the sum.
5.2 Randomly Generated Problems with Affine and Quadratic Functions
In this subsection, we report computational results of testing the LSM on randomly generated problems of the form

\( \sum\limits_{i=1}^m \displaystyle\frac{\langle a^i,x\rangle + b_i}{\langle c^i,x\rangle + d_i} \downarrow \min\limits_x,\quad x\in S=\{\, x\in {I\!R}^n \mid \bar{A}x\le \bar{b},\;\; x\ge 0 \,\}. \;\;(15) \)
The data \(a^i_j,\;c^i_j,\;\bar{A}_{lj}\in [0, 10]\) were uniformly distributed random numbers, and \(b_i=d_i=10,\) \(\bar{b}_l=10,\; i=1,\ldots ,m,\;j=1,\ldots ,n,\; l=1,\ldots ,L\).
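A generator for such random affine instances might look as follows (a sketch under our assumptions: the feasible-set shape \(S=\{x\ge 0 \mid \bar{A}x\le \bar{b}\}\) and the helper name `generate_instance` are ours, not from the paper):

```python
# Random affine instances: uniform coefficients in [0, 10],
# b_i = d_i = 10, bar_b_l = 10, as described in the text.
import numpy as np

def generate_instance(n, m, L, seed=0):
    rng = np.random.default_rng(seed)
    a = rng.uniform(0.0, 10.0, size=(m, n))       # numerators:   <a^i, x> + b_i
    c = rng.uniform(0.0, 10.0, size=(m, n))       # denominators: <c^i, x> + d_i
    A_bar = rng.uniform(0.0, 10.0, size=(L, n))   # constraints:  A_bar x <= b_bar
    b = np.full(m, 10.0)
    d = np.full(m, 10.0)
    b_bar = np.full(L, 10.0)
    return a, b, c, d, A_bar, b_bar

a, b, c, d, A_bar, b_bar = generate_instance(n=5, m=3, L=2)
x0 = np.zeros(5)                             # x = 0 is feasible: A_bar @ 0 <= 10
f0 = np.sum((a @ x0 + b) / (c @ x0 + d))     # objective value at the start
assert abs(f0 - 3.0) < 1e-12                 # m ratios, each equal to 10/10 = 1
```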
Results of the computational testing of the LSM on fractional problems (15) with up to 100 variables and 100 terms in the sum are listed in Table 3. The designations in Table 3 are the same as in Tables 1 and 2.
Moreover, we have tested the LSM on fractional problems with quadratic functions in the numerators of the ratios. Following [12], we generated problems with up to 200 variables and 200 terms in the sum:
where \(A^i=U_iD^iU^T_i,\;U_i=Q_1Q_2Q_3,\;i=1,\ldots , m,\;Q_j=I\!-\!2\frac{w_jw_j^T}{\parallel w_j\parallel ^2},j=1,2,3\) and \(w_1=-i+rand(n,1),\;w_2=-2i+rand(n,1),\;w_3=-3i+rand(n,1),\) \( D^i=rand(n,n),\;c^i=i-i\cdot rand(n,1),\; p^i=i+ i\cdot rand(n,1),\; i=1,\ldots , m,\) \(\bar{A}=-1+2\cdot rand(5,n),\;\bar{b}=2+3\cdot rand(5,1)\) [12]. (We denote by \(rand(k_1,k_2)\) the random matrix with \(k_1\) rows, \(k_2\) columns and elements generated randomly on [0, 1].)
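The matrix construction above can be sketched as follows. We take \(D^i\) diagonal (an assumption on our part, since `rand(n, n)` as written would make \(A^i\) nonsymmetric), so that each \(A^i=U_iD^iU_i^T\) is symmetric positive semidefinite with spectrum given by the diagonal of \(D^i\):

```python
# Householder-based construction of A^i = U_i D^i U_i^T, following the
# recipe in the text: Q_j = I - 2 w_j w_j^T / ||w_j||^2, U_i = Q1 Q2 Q3.
import numpy as np

def householder(w):
    # Q = I - 2 w w^T / ||w||^2 is symmetric and orthogonal
    w = np.asarray(w, dtype=float)
    return np.eye(w.size) - 2.0 * np.outer(w, w) / np.dot(w, w)

def make_A(n, i, rng):
    w1 = -1 * i + rng.random(n)
    w2 = -2 * i + rng.random(n)
    w3 = -3 * i + rng.random(n)
    U = householder(w1) @ householder(w2) @ householder(w3)
    D = np.diag(rng.random(n))        # assumed diagonal spectrum in [0, 1)
    return U @ D @ U.T

rng = np.random.default_rng(3)
A = make_A(6, i=1, rng=rng)
assert np.allclose(A, A.T)                         # symmetric
assert np.all(np.linalg.eigvalsh(A) >= -1e-12)     # positive semidefinite
```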
As shown in Table 4, the number of iterations (it) of the LSM is almost independent of the number of variables (n) but approximately proportional to the number of terms in the sum (m). The run time grew proportionally to n and m.
Computational simulations confirm the efficiency of the LSM developed; its performance naturally depends on the choice of the method or software package (IBM ILOG CPLEX) employed to solve the auxiliary problems.
Thus, the LSM can be applied in future implementations of the global search algorithm for solving sum-of-ratios fractional problems via problems with d.c. constraints.
6 Conclusions
In this paper, we considered the fractional programming problem as an optimization problem with d.c. constraints. To this end, we carried out the explicit representation of nonconvex functions as differences of two convex functions and applied the local search algorithm based on linearization of the functions defining the basic non-convexity of the problem under study.
We investigated the effectiveness of the local search method for solving problems with d.c. constraints that generate the nonconvexity of the feasible set.
The numerical experiments demonstrated that the local search algorithm can globally solve low-dimensional sum-of-ratios test problems. Moreover, the local search algorithm developed in this paper turned out to be rather efficient at finding critical points in randomly generated fractional programming problems of high dimension.
Therefore, the method developed can be applied within the global search procedures for fractional programming problems.
References
Almogy, Y., Levin, O.: Parametric analysis of a multistage stochastic shipping problem. In: Lawrence, J. (ed.) Operational Research, vol. 69, pp. 359–370. Tavistock Publications, London (1970)
Benson, H.P.: Global optimization algorithm for the nonlinear sum of ratios problem. J. Optim. Theory Appl. 112(1), 1–29 (2002)
Bugarin, F., Henrion, D., Lasserre, J.-B.: Minimizing the sum of many rational functions. Math. Prog. Comp. 8, 83–111 (2016)
Colantoni, C.S., Manes, R.P., Whinston, A.: Programming, profit rates and pricing decisions. Account. Rev. 13, 467–481 (1969)
Drezner, Z., Schaible, S., Simchi-Levi, D.: Queuing-location problems on the plane. Naval Res. Log. 37, 929–935 (1990)
Dür, M., Horst, R., Thoai, N.V.: Solving sum-of-ratios fractional programs using efficient points. Optimization 49, 447–466 (2001)
Falk, J.E., Palocsay, S.W.: Optimizing the sum of linear fractional functions. In: Floudas, C.A., Pardalos, P.M. (eds.) Recent Advances in Global Optimization, Princeton Series in Computer Science, pp. 221–257. Princeton University Press, Princeton (1992)
Frenk, J.B.G., Schaible, S.: Fractional programming. In: Hadjisavvas, N., Komlósi, S., Schaible, S. (eds.) Handbook of Generalized Convexity and Generalized Monotonicity, Series Nonconvex Optimization and Its Applications, vol. 76, pp. 335–386. Springer, Heidelberg (2005)
Freund, R.W., Jarre, F.: Solving the sum-of-ratios problem by an interior-point method. J. Global Optim. 19(1), 83–102 (2001)
Horst, R., Tuy, H.: Global Optimization: Deterministic Approaches. Springer-Verlag, Berlin (1993)
IBM ILOG CPLEX: High-performance mathematical programming solver for linear programming, mixed integer programming, and quadratic programming. http://www-03.ibm.com/software/products/en/ibmilogcpleoptistud
Jong, Y.-Ch.: An efficient global optimization algorithm for nonlinear sum-of-ratios problems. Optim. Online. http://www.optimization-online.org/DB_HTML/2012/08/3586.html
Konno, H., Watanabe, H.: Bond portfolio optimization problems and their applications to index tracking. J. Oper. Res. Soc. Japan 39, 295–306 (1996)
Ma, B., Geng, L., Yin, J., Fan, L.: An effective algorithm for globally solving a class of linear fractional programming problem. J. Softw. 8(1), 118–125 (2013)
Nocedal, J., Wright, S.J.: Numerical Optimization, 2nd edn. Springer, New York (2006)
Kuno, T.: A branch-and-bound algorithm for maximizing the sum of several linear ratios. J. Global Optim. 22, 155–174 (2002)
Rao, M.R.: Cluster analysis and mathematical programming. J. Amer. Statist. Assoc. 66, 622–626 (1971)
Raouf, O.A., Hezam, I.M.: Solving fractional programming problems based on swarm intelligence. J. Ind. Eng. Int. 10, 56–66 (2014)
Schaible, S.: Fractional programming. In: Horst, R., Pardalos, P.M. (eds.) Handbook of Global Optimization, pp. 495–608. Kluwer Academic Publishers, Dordrecht (1995)
Schaible, S., Shi, J.: Fractional programming: the sum-of-ratios case. Optim. Methods Softw. 18, 219–229 (2003)
Strekalovsky, A.S.: On local search in d.c. optimization problems. Appl. Math. Comput. 255, 73–83 (2015)
Strekalovsky, A.S.: On solving optimization problems with hidden nonconvex structures. In: Rassias, T.M., Floudas, C.A., Butenko, S. (eds.) Optimization in Science and Engineering, pp. 465–502. Springer, New York (2014)
Strekalovsky, A.S.: Minimizing sequences in problems with D.C. constraints. Comput. Math. Math. Phys. 45(3), 418–429 (2005)
Strekalovsky, A.S.: Elements of nonconvex optimization [in Russian]. Nauka, Novosibirsk (2003)
Strekalovsky, A.S., Gruzdeva, T.V.: Local search in problems with nonconvex constraints. Comput. Math. Math. Phys. 47(3), 381–396 (2007)
Strekalovsky, A.S., Gruzdeva, T.V., Ulianova, N.Y.: Optimization Problems with Nonconvex Constraints [in Russian]. Irk. State University, Irkutsk (2013)
Strekalovsky, A.S., Orlov, A.V.: Bimatrix games and bilinear programming [in Russian]. FizMatLit, Moscow (2007)
Strekalovsky, A.S., Yakovleva, T.V.: On a local and global search involved in nonconvex optimization problems. Autom. Remote Control 65, 375–387 (2004)
Wu, W.-Y., Sheu, R.-L., Birbil, I.S.: Solving the sum-of-ratios problem by a stochastic search algorithm. J. Global Optim. 42(1), 91–109 (2008)
Acknowledgments
This work has been supported by the Russian Science Foundation, Project N 15-11-20015.
© 2016 Springer International Publishing Switzerland
Gruzdeva, T., Strekalovsky, A. (2016). An Approach to Fractional Programming via D.C. Constraints Problem: Local Search. In: Kochetov, Y., Khachay, M., Beresnev, V., Nurminski, E., Pardalos, P. (eds) Discrete Optimization and Operations Research. DOOR 2016. Lecture Notes in Computer Science(), vol 9869. Springer, Cham. https://doi.org/10.1007/978-3-319-44914-2_32
Print ISBN: 978-3-319-44913-5
Online ISBN: 978-3-319-44914-2