Abstract
This article presents a self-adjustable branch-and-bound algorithm for globally solving a class of linear multiplicative programming problems (LMP). The algorithm introduces a self-adjustable branching rule that, unlike the standard bisection rule, continuously updates the upper bound for the optimal value of LMP by selecting a suitable branching point under certain conditions. The proposed algorithm integrates the linear relaxation program with this self-adjustable branching rule. Its dependability and robustness are demonstrated by establishing global convergence, and its computational complexity is estimated. Finally, numerical results validate the effectiveness of the self-adjustable branching rule and demonstrate the feasibility of the proposed algorithm.
1 Introduction
Consider the following linear multiplicative problem:
where \(\varvec{c}_{i},\varvec{d}_{i}\in {\mathbb {R}}^{n}\), \(c_{0i},d_{0i}\in {\mathbb {R}}\), \(\varvec{A}\in {\mathbb {R}}^{m\times n}\), \(\varvec{b}\in {\mathbb {R}}^{m}\), and \({\mathbb {Y}}\in {\mathbb {R}}^{n}\) is a nonempty and bounded set.
In recent decades, LMP, as a specialized multiplicative program, has garnered significant attention from researchers. The prominence of LMP stems from its wide-ranging applications in financial optimization [1,2,3], microeconomics [4], plant layout design [5], data mining/pattern recognition [6], VLSI chip design [7], robust optimization [8], and various special linear or nonlinear programming problems [9,10,11,12,13,14,15] that can be converted to the form of LMP. Another key aspect is that LMP is known to be NP-hard [16] and typically exhibits multiple locally optimal solutions that are not globally optimal [17].
Various practical algorithms have been proposed for solving LMP and its special cases. Based on their distinctive characteristics, these algorithms can be classified into branch-and-bound algorithms [18,19,20,21], optimal level solution methods [22, 23], monotonic optimization approaches [24], outcome-space approaches [25, 26], approximate algorithms [27], level set algorithms [28], etc. In particular, methods that incorporate other techniques into the branch-and-bound framework have attracted much attention. For instance, Wang et al. [29, 30] presented a practicable branch-and-bound method and a novel convex relaxation-strategy-based algorithm by using a new linear relaxation technique and a convex approximation approach, respectively. Zhao et al. [31] developed a simple global optimization algorithm based on a new convex relaxation method, the branch-and-bound scheme and some accelerating techniques. Yin et al. [32] proposed a new outer space rectangle branch-and-bound algorithm by employing an affine approximation technique to successively refine the initial outer space rectangle. Shen et al. [33,34,35,36,37] designed a series of global algorithms by reconsidering the linear relaxation technique, second-order cone relaxation and convex quadratic relaxation, respectively. However, the above branch-and-bound algorithms are usually combined with the standard bisection branching rule, which uses the midpoint of the longest edge of the rectangle as the branching point. As a result, the optimal solution of a divided rectangle may remain optimal over a sub-rectangle, i.e., the upper bound for the optimal value of LMP over the sub-rectangle is not updated during the execution of the algorithm.
Hence, a self-adjustable branch-and-bound (SABB) algorithm is presented to solve LMP. Compared with the existing literature [29,30,31,32,33,34,35, 38], the features of the SABB algorithm are summarized as follows: (i) Although the linear relaxation program is similar to the existing methods in [29, 32, 39], the objective function in our algorithm is processed prior to the implementation of the equivalent transformation, which differs from the algorithms in [29, 32]. Furthermore, this algorithm requires fewer auxiliary variables for solving LMP than those used in previous studies [29, 32]. (ii) The self-adjustable branching rule introduced in the proposed algorithm enables the upper bound for the optimal value of LMP to be updated continually by selecting an appropriate branching point in the direction of better relaxation approximation of the rectangle. Numerical results validate the effectiveness of the self-adjustable branching rule by comparing it with the standard bisection branching rule. (iii) The branching operation of the proposed algorithm is performed in the image space \({\mathbb {R}}^{p}\) rather than in the decision variable space \({\mathbb {R}}^{n}\) as in [29, 30, 34]. This means that the proposed algorithm can economize on the required computation when \(p\ll n\). (iv) The global convergence and computational complexity of the proposed algorithm are analyzed, whereas the complexity of the algorithms in [29, 32, 34, 38] is not specified.
The rest of this paper is organized as follows. In Sect. 2, we describe the equivalent transformation for LMP and establish its linear relaxation program. A self-adjustable branching rule is introduced in Sect. 3. In Sect. 4, we summarize the SABB algorithm, establish its global convergence, and estimate its complexity. We test some numerical examples and report numerical results in Sect. 5. Finally, some conclusions are given in Sect. 6.
2 Equivalent Problem and Its Linear Relaxation Programming
To globally solve LMP, an equivalent problem (EP) is established by an equivalent transformation. Subsequently, we propose a linear relaxation programming approach for EP, which provides an upper bound to the optimal value of LMP.
In regard to the objective function \(f(\varvec{y})\) of LMP, we convert it into the following form:
By introducing 2p auxiliary variables, we convert LMP into its EP. Let
and define an initial rectangle \(Z^{0}\), given by
where \({l_{i}}^{0}=\min \nolimits _{\varvec{y}\in {\mathbb {Y}}}\varvec{c}_{i}^{T}\varvec{y},\) \({u_{i}}^{0}=\max \nolimits _{\varvec{y}\in {\mathbb {Y}}}\varvec{c}_{i}^{T}\varvec{y}, i=1, \ldots , p.\) Meanwhile, we also calculate \({L_{i}}=\min \nolimits _{\varvec{y}\in {\mathbb {Y}}}\varvec{d}_{i}^{T}\varvec{y},\) \({U_{i}}=\max \nolimits _{\varvec{y}\in {\mathbb {Y}}}\varvec{d}_{i}^{T}\varvec{y}, i=1,\dots , p.\)
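In general, each of the bounds \({l_{i}}^{0}, {u_{i}}^{0}, L_{i}, U_{i}\) is obtained by solving a linear program over \({\mathbb {Y}}\). As a minimal illustration in Python (the paper's experiments use MATLAB), the following hypothetical helper computes such a range under the simplifying assumption that \({\mathbb {Y}}\) is a box, in which case each bound has a closed form:

```python
def affine_range(c, lo, hi):
    """Range (min, max) of c^T y over the box lo <= y <= hi.

    Illustrative only: in the paper, l_i^0 = min_{y in Y} c_i^T y and
    u_i^0 = max_{y in Y} c_i^T y are linear programs over a general
    polytope Y; a box-shaped Y is assumed here so each bound reduces
    to a coordinatewise choice of endpoint.
    """
    lo_val = sum(ci * (l if ci >= 0 else h) for ci, l, h in zip(c, lo, hi))
    hi_val = sum(ci * (h if ci >= 0 else l) for ci, l, h in zip(c, lo, hi))
    return lo_val, hi_val
```

For example, with \(c=(1,-2)\) over the box \([0,1]^{2}\), the range is \([-2,1]\).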
Hence, LMP can be reformulated as EP over \(Z^{0}\) as follows:
The following theorem establishes the equivalence between LMP and EP.
Theorem 1
\(\varvec{y}^{*}\) is an optimal solution to LMP if and only if \((\varvec{y}^{*},\varvec{z}^{*},\varvec{h}^{*})\) is an optimal solution to EP, in which \(z_{i}^{*}=\varvec{c}_{i}^{T}\varvec{y}^{*}, h_{i}^{*}=\varvec{d}_{i}^{T}\varvec{y}^{*}z_{i}^{*}, i=1, 2, \ldots , p\).
Proof
This proof bears resemblance to Theorem 1 in Shen et al. [33], so it is omitted from this paper. \(\square \)
According to Theorem 1, LMP and EP share the same global optimal value. Subsequently, our focus will be on addressing the solution to EP.
For convenience, assume that
Thus, for any \(Z\subseteq Z^{0}\), the corresponding EP over Z can be rewritten as follows:
From the structure of EP(Z), its non-convexity is manifested in constraints \(h_{i}=\varvec{d}_{i}^{T}\varvec{y}z_{i}, i=1, \ldots ,p\). Inspired by [29, 32, 39], for any \(Z\subseteq Z^{0}\) and \(\varvec{d}_{i}^{T}\varvec{y}\in [L_{i},U_{i}], i=1, \ldots ,p,\) by using the concave envelope of bilinear function, it can be derived that the following relationships hold
Then, the linear relaxation problem of EP(Z) is formulated.
Apparently, the optimal value of LRP(\( {Z}\)) can provide an upper bound for that of EP(\( {Z}\)).
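The relaxation above overestimates each bilinear term with its concave envelope over the rectangle \([L_{i},U_{i}]\times [l_{i},u_{i}]\). A small Python sketch of this overestimator (the function name is illustrative), which coincides with the product at the rectangle's corners and overestimates it elsewhere:

```python
def envelope_upper(w, z, L, U, l, u):
    """Concave-envelope overestimator of the bilinear term h = w*z over
    [L, U] x [l, u], matching the two linear constraints of LRP(Z):
        h <= l*w + U*z - l*U   and   h <= u*w + L*z - u*L.
    """
    return min(l * w + U * z - l * U,
               u * w + L * z - u * L)
```

At a corner such as \((w,z)=(L,l)\) the envelope equals \(Lz\) exactly, while at interior points it lies above \(wz\), so minimizing the gap motivates the rectangle subdivision that follows.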
Remark 1
\(\min \{l_{i}\varvec{d}_{i}^{T}\varvec{y}+U_{i}z_{i}-l_{i}U_{i},\ u_{i}\varvec{d}_{i}^{T}\varvec{y}+L_{i}z_{i}-u_{i}L_{i}\}-\varvec{d}_{i}^{T}\varvec{y}z_{i}\rightarrow 0\), \(i=1, \ldots ,p,\) when \((u_{i}-l_{i})\rightarrow 0\) for \(i=1, \ldots ,p.\) The proof is similar to that in [31], in which the detail is presented in Theorem 1.
3 The Self-Adjustable Branching Rule
In this section, a self-adjustable branching rule is introduced into the algorithm in order to expedite the attainment of an optimal solution of LMP. In [29, 30, 32,33,34,35, 38], the standard bisection branching rule is used, and it can be described as follows:
(i) Let \(q=\arg \max \{u_{i}-l_{i}|i=1,2,\ldots ,p\}\).

(ii) Z is subdivided into two p-dimensional sub-rectangles \(Z_{1}\) and \(Z_{2}\), i.e.,
$$\begin{aligned} Z_{1}=\left\{ \varvec{z}\in {\mathbb {R}}^{p}| l_{i}\le z_{i} \le u_{i}, i=1,2,\ldots ,p,i\ne q, l_{q}\le z_{q} \le \dfrac{(l_{q}+u_{q})}{2}\right\} ,\\ Z_{2}=\left\{ \varvec{z}\in {\mathbb {R}}^{p}| l_{i}\le z_{i} \le u_{i}, i=1,2,\ldots ,p,i\ne q, \dfrac{(l_{q}+u_{q})}{2}\le z_{q} \le u_{q}\right\} . \end{aligned}$$
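The bisection rule above can be sketched in a few lines of Python (an illustrative sketch; the paper's implementation is in MATLAB):

```python
def bisect(lower, upper):
    """Standard bisection branching rule: split the rectangle
    Z = prod_i [lower[i], upper[i]] along its longest edge q at the
    midpoint, yielding sub-rectangles Z1 and Z2."""
    q = max(range(len(lower)), key=lambda i: upper[i] - lower[i])
    mid = 0.5 * (lower[q] + upper[q])
    upper1 = list(upper); upper1[q] = mid   # Z1 keeps the lower half on edge q
    lower2 = list(lower); lower2[q] = mid   # Z2 keeps the upper half on edge q
    return (list(lower), upper1), (lower2, list(upper))
```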
Using the standard bisection branching rule, the optimal solution of LRP(\( {Z}\)) may still be the optimal solution to LRP(\( {Z}_{1}\)) or LRP(\( {Z}_{2}\)), so that the upper bound of LMP(\( {Z}\)) is not updated during the execution of the algorithm. Our focus is therefore on devising a branching strategy that ensures a continuous update of the upper bound for the optimal value of LMP.
Lemma 1
For any \(Z\subseteq Z^{0}\), assume that \((\varvec{y}^{*}, \varvec{z}^{*}, \varvec{h}^{*})\) is an optimal solution of LRP(Z). For any i, if \(z_{i}^{*}=l_{i}\) or \(z_{i}^{*}=u_{i}\) or \(u_{i}-l_{i}=0\) or \(\varvec{d}_{i}^{T}\varvec{y}^{*}=L_{i}\) or \(\varvec{d}_{i}^{T}\varvec{y}^{*}=U_{i}\) or \(U_{i}-L_{i}=0\), we have \(h_{i}^{*}=\varvec{d}_{i}^{T}\varvec{y}^{*}z_{i}^{*}.\)
Proof
If \(z_{i}^{*}=l_{i}\), then \(h_{i}^{*}\le l_{i}\varvec{d}_{i}^{T}\varvec{y}^{*}+U_{i}z_{i}^{*}-l_{i}U_{i}=\varvec{d}_{i}^{T}\varvec{y}^{*}z_{i}^{*}.\)
If \(z_{i}^{*}=u_{i}\), then \(h_{i}^{*}\le u_{i}\varvec{d}_{i}^{T}\varvec{y}^{*}+L_{i}z_{i}^{*}-u_{i}L_{i}=\varvec{d}_{i}^{T}\varvec{y}^{*}z_{i}^{*}.\)
If \(u_{i}-l_{i}=0\), assume that \(u_{i}=l_{i}=z_{i}^{*}\); then \(h_{i}^{*}\le l_{i}\varvec{d}_{i}^{T}\varvec{y}^{*}+U_{i}z_{i}^{*}-l_{i}U_{i}=\varvec{d}_{i}^{T}\varvec{y}^{*}z_{i}^{*}\) and \(h_{i}^{*}\le u_{i}\varvec{d}_{i}^{T}\varvec{y}^{*}+L_{i}z_{i}^{*}-u_{i}L_{i}=\varvec{d}_{i}^{T}\varvec{y}^{*}z_{i}^{*}.\)
In the same way, if \(\varvec{d}_{i}^{T}\varvec{y}^{*}=L_{i}\) or \(\varvec{d}_{i}^{T}\varvec{y}^{*}=U_{i}\) or \(U_{i}-L_{i}=0\), then \(h_{i}^{*}=\varvec{d}_{i}^{T}\varvec{y}^{*}z_{i}^{*}.\) \(\square \)
For any \(Z\subseteq Z^{0}\), assume that \((\varvec{y}^{*}, \varvec{z}^{*}, \varvec{h}^{*})\) is an optimal solution of LRP(\( {Z}\)). We choose \(\xi =\arg \max \{\min \{l_{i}\varvec{d}_{i}^{T}\varvec{y}^{*}+U_{i}z_{i}^{*}-l_{i}U_{i}, \ u_{i}\varvec{d}_{i}^{T}\varvec{y}^{*}+L_{i}z_{i}^{*}-u_{i}L_{i}\}-\varvec{d}_{i}^{T}\varvec{y}^{*}z_{i}^{*} \ | \ i=1,2,\ldots ,p\}\) as the branching direction. According to Lemma 1, for any \(i=1, \ldots , p\), we get \(\min \{l_{i}\varvec{d}_{i}^{T}\varvec{y}^{*}+U_{i}z_{i}^{*}-l_{i}U_{i},\ u_{i}\varvec{d}_{i}^{T}\varvec{y}^{*}+L_{i}z_{i}^{*}-u_{i}L_{i}\}-\varvec{d}_{i}^{T}\varvec{y}^{*}z_{i}^{*}=0\) whenever \(z_{i}^{*}=l_{i}\), \(z_{i}^{*}=u_{i}\), \(u_{i}-l_{i}=0\), \(\varvec{d}_{i}^{T}\varvec{y}^{*}=L_{i}\), \(\varvec{d}_{i}^{T}\varvec{y}^{*}=U_{i}\), or \(U_{i}-L_{i}=0\); in these cases the i-th edge will not be chosen for further division. Thus, Z is divided into \(Z^{1}\) and \(Z^{2}\) along \([l_{\xi }, u_{\xi }]\) at point r, as follows:
in which \(r\in (l_{\xi }, u_{\xi })\) is denoted as the branching point. Note that the inequality constraint of two sub-rectangles corresponding to the \(\xi \) direction is expressed as
Based on the above discussion, suppose the optimal solution \((\varvec{y}^{*}, \varvec{z}^{*}, \varvec{h}^{*})\) of LRP(\( {Z}\)) is cut off by dividing along \([l_{\xi }, u_{\xi }]\) at point r, i.e., \(h_{\xi }^{*}> r\varvec{d}_{\xi }^{T}\varvec{y}^{*}+L_{\xi }z_{\xi }^{*}-rL_{\xi }\) and \( h_{\xi }^{*}> r\varvec{d}_{\xi }^{T}\varvec{y}^{*}+U_{\xi }z_{\xi }^{*}-rU_{\xi }\) hold; then we calculate and obtain
Combining \(l_{\xi }\), \(u_{\xi }\), \(r_{1}\) and \(r_{2}\), the selection of r is determined by the following Theorem 2.
Theorem 2
For any \(Z\subseteq Z^{0}\), assume that \((\varvec{y}^{*}, \varvec{z}^{*}, \varvec{h}^{*})\) is the optimal solution of LRP(Z). If \({z}^{*}_{\xi }\in (l_{\xi }, u_{\xi })\), \((\varvec{y}^{*}, \varvec{z}^{*}, \varvec{h}^{*})\) is not the optimal solution of LRP\((Z^{1})\) and LRP\((Z^{2} )\), where \(Z^{1}\) and \(Z^{2}\) are given in (3.1) and (3.2), respectively, and r in \(Z^{1}\) and \(Z^{2}\) is given as follows:
with (3.7), (3.8), \(0<{\bar{\varepsilon }}<r_{1}-r_{2}\) and \(w=\frac{1}{2}(u_{\xi }-l_{\xi })\). Otherwise, let \(r=z_{\xi }^{*}\); then \(h_{\xi }^{*}=\varvec{d}_{\xi }^{T}\varvec{y}^{*}z_{\xi }^{*}\), and the \(\xi \)-th edge of \(Z^{s}\) \((s=1, 2)\) will not be chosen for further division.
Proof
where the above inequalities follow from \(L_{\xi }< \varvec{d}_{\xi }^{T}\varvec{y}< U_{\xi }\), (3.6) and (3.3), respectively. Thus, we obtain
For \(l_{\xi }< z_{\xi }^{*}< u_{\xi }\), we have the following two cases:
(i) If \(r_{2}\le w\le r_{1}\), then \(r=w.\)
Since \(r>r_{2}\), it follows that
This contradicts (3.5), i.e., \((\varvec{y}^{*}, \varvec{z}^{*}, \varvec{h}^{*})\) is not the optimal solution of LRP(\(Z^{2}\)).
Similarly, since \(r<r_{1}\), it follows that
This contradicts (3.4), i.e., \((\varvec{y}^{*}, \varvec{z}^{*}, \varvec{h}^{*})\) is not the optimal solution of LRP(\(Z^{1}\)).
(ii) If \(w\notin (r_{2},r_{1})\), then \(w\le r_{2}\) or \(w\ge r_{1}\).
If \(w\le r_{2}\), then \(r=r_{2}+{\bar{\varepsilon }}\). Since \(r_{2}\le r=r_{2}+{\bar{\varepsilon }}\le r_{1}\), \((\varvec{y}^{*}, \varvec{z}^{*}, \varvec{h}^{*})\) is not the optimal solution of LRP(\(Z^{2}\)) by (3.10).
Similarly, if \(w\ge r_{1}\), then \(r=r_{1}-{\bar{\varepsilon }}\). Thus, \((\varvec{y}^{*}, \varvec{z}^{*}, \varvec{h}^{*})\) is not the optimal solution of LRP(\(Z^{1}\)) by (3.11).
If \(z_{\xi }^{*}=l_{\xi }\) or \(z_{\xi }^{*}=u_{\xi }\), then \(r=z_{\xi }^{*}\), such that \((\varvec{y}^{*}, \varvec{z}^{*}, \varvec{h}^{*})\) is not the optimal solution of LRP(\(Z^{1}\)) and LRP(\(Z^{2}\)). In addition,
i.e., the \(\xi \)-th edge of \(Z^{s}\) \((s=1, 2)\) will not be chosen for further division. \(\square \)
By Theorem 2, the closer r is to w, the smaller the approximation error between LRP and LMP over \(Z^{1}\) and \(Z^{2}\). The reasons are given below: let
The region between \(h(z_{\xi })\) and \(h^{11}(z_{\xi }), h^{12}(z_{\xi }), h^{21}(z_{\xi }), h^{22}(z_{\xi })\) is determined by
The minimum of S(r) is attained at \(r=\frac{1}{2}(u_{\xi }-l_{\xi })\), which motivates the choice of r in (3.9). On the basis of the above discussions, the self-adjustable branching rule is summarized as follows:
The self-adjustable branching rule
\(\mathbf {\varvec{Step \ 1:}}\) Let \(\xi =\arg \max \{\min \{l_{i}\varvec{d}_{i}^{T}\varvec{y}^{*}+U_{i}z_{i}^{*}-l_{i}U_{i}, \ u_{i}\varvec{d}_{i}^{T}\varvec{y}^{*}+L_{i}z_{i}^{*}-u_{i}L_{i}\}-\varvec{d}_{i}^{T}\varvec{y}^{*}z_{i}^{*} \ | \ i=1,2,\ldots ,p\}.\)
\(\mathbf {\varvec{Step \ 2:}}\) If \(l_{\xi }<z_{\xi }^{*}<u_{\xi }\), then choose r by (3.9). Otherwise, let \(r=z_{\xi }^{*}\).
\(\mathbf {\varvec{Step \ 3 :}}\) Z is subdivided into two p-dimensional sub-rectangles \(Z^{1}\) and \(Z^{2}\), which are given by (3.1) and (3.2), respectively.
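The branching-point selection of Step 2 can be sketched in Python as follows. This is a hypothetical illustration: \(r_{1}\) and \(r_{2}\) are reconstructed here from the cut-off inequalities \(h_{\xi }^{*}> r\varvec{d}_{\xi }^{T}\varvec{y}^{*}+L_{\xi }z_{\xi }^{*}-rL_{\xi }\) and \(h_{\xi }^{*}> r\varvec{d}_{\xi }^{T}\varvec{y}^{*}+U_{\xi }z_{\xi }^{*}-rU_{\xi }\) (the paper's (3.7) and (3.8) are not reproduced above), and \(w\) is taken as the midpoint of \([l_{\xi },u_{\xi }]\):

```python
def branching_point(l, u, z_star, dy_star, h_star, L, U, eps_bar=1e-4):
    """Self-adjustable branching-point selection in the spirit of
    Theorem 2.  dy_star = d_xi^T y* and h_star = h*_xi at the LRP(Z)
    optimum.  r1 and r2 bound the interval of r values that cut off the
    current relaxation optimum (a reconstructed, illustrative formula)."""
    if not (l < z_star < u):
        return z_star                            # Step 2, "otherwise" branch
    r1 = (h_star - L * z_star) / (dy_star - L)   # from the L-constraint
    r2 = (h_star - U * z_star) / (dy_star - U)   # from the U-constraint
    w = 0.5 * (l + u)                            # center of [l_xi, u_xi]
    if r2 <= w <= r1:
        return w
    return r2 + eps_bar if w <= r2 else r1 - eps_bar
```

Any \(r\in (r_{2},r_{1})\) cuts off the relaxation optimum; choosing r as close to w as the interval allows keeps the two children balanced.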
Based on Theorem 2, the optimal solution of LRP(\( {Z}\)) can be cut off by the self-adjustable branching rule. The implication is that each iteration of the algorithm may improve the upper bound for the optimal value of LMP. In contrast, under the standard bisection branching rule, the optimal solution of LRP over the divided rectangle may still be the optimal solution over a sub-rectangle, thereby increasing the computational cost.
4 Algorithm, Convergence and Complexity
In this section, based on LRP and branching rule, we present a self-adjustable branch-and-bound (SABB) algorithm for solving LMP. By subsequently subdividing the initial image space rectangle and solving a series of linear relaxation problems, we establish the global convergence of the algorithm and estimate its complexity.
4.1 SABB Algorithm
The global optimal solution of LMP is achieved through the introduction of SABB algorithm based on the linear relaxation program, self-adjustable branching rule and branch-and-bound framework.
The proposed branching process is conducted in the image space using the self-adjustable branching rule, which differs from branching methods based on the original decision variables, such as the algorithms in [29, 30, 34]. Specifically, the branching process takes place in the image space \({\mathbb {R}}^{p}\) of the affine functions \(\varvec{c}_{i}^{T}\varvec{y}, i=1, \ldots ,p.\) This distinction implies that the proposed algorithm can economize on the required computation when \(p\ll n.\)
At stage k of SABB algorithm, assume that
Thus, \(Z^{k}\) is divided into two rectangles \(Z^{k1}\) and \(Z^{k2}\) by the self-adjustable branching rule such that \(Z^{k}=Z^{k1}\cup Z^{k2}\).
Based on the above discussions, the basic steps of SABB algorithm for globally solving LMP are summarized as follows.
SABB algorithm statement:
Step 0: Given the error tolerance \(\varepsilon >0\), and an initial rectangle \(Z^{0}.\) Solve LRP\((Z^{0})\) to obtain its optimal solution \((\varvec{y}^{0}, \varvec{z}^{0}, \varvec{h}^{0})\) and optimal value \(\varphi (\varvec{y}^{0}, \varvec{z}^{0}, \varvec{h}^{0})\). Set \(UB_{0}=\varphi (\varvec{y}^{0}, \varvec{z}^{0}, \varvec{h}^{0})\), \(\bar{h_{i}}^{0}=\varvec{d}_{i}^{T}\varvec{y}^{0}z_{i}^{0}\), \(i=1,2,\ldots ,p\), \(LB_{0}=\phi (\varvec{y}^{0}, \varvec{z}^{0}, \bar{\varvec{h}}^{0})\). If \(UB_{0}-LB_{0}\le \varepsilon \), then the algorithm stops, and \(\varvec{y}^{0}\) is a global \(\varepsilon \)-optimal solution for LMP. Otherwise, let \(X=\{(\varvec{y}^{0}, \varvec{z}^{0}, \bar{\varvec{h}}^{0})\}, k=0,\) set \(T_{0}=\{Z^{0}\}\).
Step 1: Using the self-adjustable branching rule to subdivide \(Z^{k}\) into two new sub-rectangles \(Z^{k1},Z^{k2}\) and set \(T=\{Z^{k1},Z^{k2}\}\).
Step 2: For each \(Z^{ks}(s=1,2)\), solve LRP(\(Z^{ks}\)) to obtain the optimal solution \((\varvec{y}^{ks},\varvec{z}^{ks},\varvec{h}^{ks})\) and optimal value \(\varphi (\varvec{y}^{ks},\varvec{z}^{ks},\varvec{h}^{ks})\). Set \(UB(Z^{ks})=\varphi (\varvec{y}^{ks},\varvec{z}^{ks},\varvec{h}^{ks})\), let \(\bar{h}_{i}^{ks}=\varvec{d}_{i}^{T}\varvec{y}^{ks}{z}_{i}^{ks}\), \(i=1,2,\ldots ,p\), \(X=X\bigcup \{(\varvec{y}^{ks}, \varvec{z}^{ks},\bar{\varvec{h}}^{ks})\}.\) If \(LB_{k}>UB(Z^{ks})\), set \(T_{k}=T_{k}\setminus {Z^{ks}}\). Let \(T_{k}=(T_{k}{\setminus }{Z^{k}})\bigcup T.\) Update lower bound \(LB_{k}=\max _{(\varvec{y,z,h})\in X}\phi (\varvec{y,z,h})\), and set \((\varvec{y}^{k}, \varvec{z}^{k},\varvec{h}^{k})=\arg \max _{(\varvec{y,z,h})\in X} \phi (\varvec{y,z,h})\).
Step 3: Set \(T_{k+1}=\{Z\ |\ UB(Z)-LB_{k}>\varepsilon , Z\in T_{k}\}\). If \(T_{k+1}=\emptyset \), then terminate: \(\varvec{y}^{k}\) is a global \(\varepsilon \)-optimal solution for LMP. Otherwise, select the rectangle \(Z^{k+1}\) such that \(Z^{k+1}=\arg \max _{Z\in T_{k+1}}UB(Z)\), set \(k=k+1\), and return to Step 1.
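Steps 0-3 follow a best-first branch-and-bound pattern: solve the relaxation, keep a list of boxes with upper bounds, prune boxes that cannot beat the incumbent, and always subdivide the box with the largest bound. The following toy Python skeleton mirrors that pattern on a one-dimensional product of affine functions, \(f(z)=z(4-z)\), with a simple interval bound standing in for LRP; it is a hypothetical sketch, not the paper's MATLAB implementation:

```python
def branch_and_bound(lo, hi, f, ub_box, eps=1e-6, max_iter=10000):
    """Best-first rectangle branch-and-bound skeleton (Steps 0-3).
    ub_box(l, u) must overestimate max f on [l, u]; feasible evaluations
    of f supply the incumbent lower bound."""
    best_z = 0.5 * (lo + hi)
    best_val = f(best_z)
    active = [(ub_box(lo, hi), lo, hi)]
    for _ in range(max_iter):
        # Step 3: prune boxes whose upper bound cannot beat the incumbent
        active = [b for b in active if b[0] - best_val > eps]
        if not active:
            break
        box = max(active)                   # box with the largest upper bound
        active.remove(box)
        _, l, u = box
        m = 0.5 * (l + u)                   # simple bisection branching point
        for a, b in ((l, m), (m, u)):       # Steps 1-2: split and re-bound
            z = 0.5 * (a + b)
            if f(z) > best_val:             # update incumbent (lower bound)
                best_val, best_z = f(z), z
            active.append((ub_box(a, b), a, b))
    return best_z, best_val

# Toy demo: maximize f(z) = z(4 - z) over [0, 4]; the maximum is 4 at z = 2.
f = lambda z: z * (4.0 - z)
ub = lambda l, u: max(a * b for a in (l, u) for b in (4.0 - l, 4.0 - u))
z_best, v_best = branch_and_bound(0.0, 4.0, f, ub)
```

In the paper itself, `ub_box` is replaced by solving LRP over each sub-rectangle, and bisection is replaced by the self-adjustable rule.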
4.2 Convergence of SABB Algorithm
In this subsection, we discuss the global convergence of SABB algorithm.
Theorem 3
Given \(\varepsilon \ge 0\), if SABB algorithm terminates finitely, then it returns a global \(\varepsilon \)-optimal solution of LMP; otherwise, an infinite sequence \(\{\varvec{y}^{k}\}\) is generated, and every accumulation point of this sequence is a global optimal solution for LMP.
Proof
If the presented algorithm terminates finitely, without loss of generality, suppose it terminates at the k-th iteration. Then we have
According to Step 2 of the algorithm,
By (4.2) and (4.3), we can get
Let \(f^*\) be the global optimal value of LMP. Combining (4.2)–(4.4), we have
So, \(\varvec{y}^{k}\) is a global \(\varepsilon \)-optimal solution of LMP.
If the algorithm is infinite, it generates an infinitely nested sequence of rectangles \(\{Z^{k}\}\), such that \(u_{i}^{k}-l_{i}^{k}\rightarrow 0\) as \(k\rightarrow \infty \) for \(i=1, 2,\ldots , p\). It also generates the optimal solution sequence \(\{(\varvec{y}^{k}, \varvec{z}^{k}, \varvec{h}^{k})\}\) of LRP\((Z^{k})\). Let \(\bar{h}_{i}^{k}=\varvec{d}_{i}^{T}\varvec{y}^{k}z_{i}^{k}\), \(i=1, \ldots ,p;\) then \(\{(\varvec{y}^{k}, \varvec{z}^{k}, \varvec{\bar{h}}^{k})\}\) is a feasible solution sequence for EP\((Z^{k})\). Since the feasible region of EP\((Z^{k})\) is bounded, without loss of generality, assume that \(\lim \nolimits _{k\rightarrow \infty }\varvec{y}^{k}=\varvec{y}^{*}\); then we have that
According to the definition of \(LB_{k}\) and the continuity of the function \(\phi \), we have
From the above results and \(l_{i}^{k}\le z^k_i=\varvec{c}_{i}^{T}\varvec{y}^{k}\le u_{i}^{k}\), it follows that
Additionally, from the inequality constraints \(h_{i}^{k}\le l_{i}^{k}\varvec{d}_{i}^{T}\varvec{y}^{k}+U_{i}z_{i}^{k}-l_{i}^{k}U_{i}\) and \(h_{i}^{k}\le u_{i}^{k}\varvec{d}_{i}^{T}\varvec{y}^{k}+L_{i}z_{i}^{k}-u_{i}^{k}L_{i},\) for any i, we have
This implies that
Further, combining (4.5) with \(LB_{k}\le UB_{k}\), we have
Based upon the structure of SABB algorithm, then it must have
Since \((\varvec{y}^{*}, \varvec{z}^{*}, \varvec{{h}}^{*})\) is a feasible solution of EP\((Z^{k})\), according to (4.8), \((\varvec{y}^{*},\varvec{z}^{*}, \varvec{{h}}^{*})\) is an optimal solution of EP\((Z^{k})\). Furthermore, by Theorems 1 and 2, it follows that \(\varvec{y}^{*}\) is a global optimal solution of LMP, i.e., every accumulation point of the sequence \(\{\varvec{y}^{k}\}\) is a global optimal solution for LMP. The proof is completed. \(\square \)
4.3 Computational Complexity of SABB Algorithm
In order to estimate the maximum iterations of SABB algorithm, we analyze its computational complexity. To this end, we define the size \(\delta (Z)\) for the rectangle (2.1) as
and for convenience, we define
Lemma 2
Given \(\varepsilon \ge 0\), for any \(Z\subseteq Z^{0}\) and any feasible solution \((\varvec{y}, \varvec{z}, \varvec{h})\) of LRP(Z), if \(\delta {(Z)}\le \varepsilon /p\mu ,\) then we have
in which \(\bar{h}_{i}=\varvec{d}_{i}^{T}\varvec{y}{z}_{i}, i=1,\ldots , p\).
Proof
For any feasible solution \((\varvec{y}, \varvec{z}, \varvec{h})\) of LRP(Z), let \(\bar{h}_{i}=\varvec{d}_{i}^{T}\varvec{y}{z}_{i}, i=1,\ldots , p\); it is clear that \((\varvec{y}, \varvec{z}, \varvec{\bar{h}})\) is a feasible solution of EP over Z. From Theorems 1 and 2, we have \(\phi (\varvec{y}, \varvec{z},\varvec{\bar{h}})=f(\varvec{y})\). If \(\delta {(Z)}\le \varepsilon /p\mu \), we can obtain
\(\square \)
Theorem 4
Given the convergence tolerance \(\varepsilon \in (0,1),\) SABB algorithm can find a global \(\varepsilon \)-optimal solution to LMP in at most
iterations, where \(\delta (Z^{0})\) and \(\mu \) are given by (4.9) and (4.10), respectively.
Proof
Without loss of generality, assume that the convergence tolerance \(\varepsilon \in (0,1)\) is given at the initialization step and that the sub-rectangle Z is selected for partitioning in Step 1 of SABB algorithm at every iteration. After \(kp\) iterations, we have
From Lemma 2, if
i.e.,
we can obtain \(|\varphi (\varvec{y}, \varvec{z}, \varvec{h})-\phi (\varvec{y}, \varvec{z}, \varvec{\bar{h}})|\le \varepsilon .\) Therefore, after at most
iterations, we can follow that
where \((\varvec{y}^{*}, \varvec{z}^{*}, \varvec{{h}^{*}})\) is the optimal solution of EP. Therefore, \(\varvec{y}^{*}\) is the optimal solution of LMP. In Step 2 of SABB algorithm, \((\varvec{y}^{k}, \varvec{z}^{k}, \varvec{\bar{h}^{k}})\) is the best currently known feasible solution, and we also note that \(\{\phi (\varvec{y}^{k}, \varvec{z}^{k}, \varvec{\bar{h}^{k}})\}\) is an increasing sequence satisfying
Therefore, we have
which implies that \(f(\varvec{y}^{*})-f(\varvec{y}^{k})\le \varepsilon \). When SABB algorithm terminates, \(\varvec{y}^{k}\) is a global \(\varepsilon \)-optimal solution to LMP, and the proof is completed. \(\square \)
5 Numerical Experiments
In this section, the effectiveness of SABB algorithm is validated by numerical computation. All algorithms are coded in MATLAB (2018b) and run on a computer with an Intel(R) Core(TM) i9-13900HX CPU (2.20 GHz). The linear and quadratic subproblems in the algorithms are solved by linprog and quadprog in MATLAB, respectively.
For each (m, n, p), ten random instances are generated and their average results are reported. The error tolerances \(\varepsilon \) and \({\bar{\varepsilon }}\) are both set to \(10^{-4}\), and the maximum CPU time is limited to 3600 s or 10 s according to the computational requirement. The notations used in the computational results are listed in Table 1.
In computational experiments, we consider the following LMP:
where \({c}_{ij}\), \({d}_{ij}\), \({c}_{0i}\) and \({d}_{0i}\), are randomly generated in \([-5,5]\), and all elements of \(\varvec{A}\in {\mathbb {R}}^{m\times n}\) and \( \varvec{b}\in {\mathbb {R}}^{m}\) are randomly generated in [0.1, 1].
5.1 Numerical Comparison of Relaxation Methods
To evaluate the performance of the proposed linear relaxation method (LRP), the comparisons of LRP and the existing relaxation methods in [21, 29, 30, 32,33,34,35, 37] are performed.
Since the relaxation methods in [21], denoted as LRP1 and LRP2, are similar to LRP, their performances are compared first. As shown in Table 2, the upper bound values of LRP are smaller than those of LRP1 and LRP2, and the lower bound values of LRP are larger than those of LRP1 and LRP2. The CPU time spent by LRP is longer than those of LRP1 and LRP2 in most cases. These results mean that LRP provides a tighter bound than LRP1 and LRP2. Additionally, the upper bound obtained by LRP is smaller than those obtained by the relaxation methods in [29, 30, 32,33,34,35, 37] for all randomly generated instances, as shown in Table 3. These results suggest that LRP is superior to the relaxation methods in [29, 30, 32,33,34,35, 37].
5.2 Numerical Comparison of Branching Rules
In order to explore the influence of branching rules on the algorithm, the comparisons of the self-adjustable branching rule (SABB algorithm) and the combination rules (BRBB and CRBB algorithms) are performed.
Let \(Z=\Pi ^{p}_{i=1}[l_{i},u_{i}]\) denote the divided rectangle, \(z^{optx}\) the corresponding optimal solution of the relaxation problem over Z, \(\xi \) the chosen branching direction, \(z_{\xi }^{*}\) the chosen branching point of the interval \([l_{\xi },u_{\xi }]\), and \(z^{M}_{\xi }\) the midpoint of the interval \([l_{\xi },u_{\xi }]\), respectively. The bisection rule in [29, 30, 32,33,34,35] chooses \(z_{\xi }^{*}=z^{M}_{\xi }\) as the branching point. The combination rule in [21] chooses a linear combination of \(z^{optx}\) and \(z^{M}_{\xi }\) as its branching point, i.e., \(z_{\xi }^{*}=\alpha z^{optx}+(1-\alpha )z^{M}_{\xi }\), \(\alpha \in [0,1]\). Note that the bisection rule is the combination rule at \(\alpha =0\), denoted as BRBB algorithm.
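For reference, the combination rule's branching point amounts to a one-line convex combination; a Python sketch with a hypothetical helper name:

```python
def combination_point(z_opt, l, u, alpha=0.5):
    """Branching point of the combination rule from [21]: a convex
    combination of the relaxation optimum z^optx and the midpoint of
    [l, u].  alpha = 0 recovers the standard bisection rule."""
    return alpha * z_opt + (1.0 - alpha) * 0.5 * (l + u)
```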
Table 4 shows that both the CPU time and the number of iterations of SABB algorithm are lower than those of BRBB algorithm. When (m, p) is fixed, both Gap.CPU and Gap.Iter grow as n grows. In contrast, both Gap\((\%)\).CPU and Gap\((\%)\).Iter decrease as n increases. On the whole, the self-adjustable branching rule performs better within the branch-and-bound algorithm than the bisection branching rule (the combination rule at \(\alpha =0\)), which may be because the former keeps updating the upper bound of LMP.
We further compare the performance of SABB algorithm and CRBB algorithms at \(\alpha =0.3, 0.5,\) 0.7 and 1.0, as shown in Table 5. When \(n\le 50\), the CPU time spent by CRBB algorithm at \(\alpha =1.0\) is less than those of SABB algorithm and the other CRBB settings. However, SABB algorithm performs better than CRBB algorithms when \(n> 100\), which suggests that the self-adjustable branching rule has a better effect on the algorithm than the combination rules for large-scale test instances.
5.3 Numerical Comparison of the Branching Direction
In this subsection, we test randomly generated problem P to demonstrate the impact of the chosen direction for self-adjustable branching rule on the algorithm.
Table 6 shows that both SABB algorithm and SABB-L algorithm find the same optimal values for problem P when \((m,p)=(50,2)\). However, the CPU time cost by SABB algorithm is reduced by at least \(57.18\%\) compared to SABB-L algorithm, and the number of iterations required by SABB algorithm is significantly lower than that of SABB-L algorithm. Note that SABB algorithm terminates and returns the optimal solution when p takes the values 3, 4 and 5, while SABB-L algorithm does not terminate and keeps running after obtaining the same optimal values. These results indicate that the branching-direction choice of SABB algorithm yields higher computational efficiency compared with the general way of choosing directions.
5.4 Numerical Comparison of Algorithms
In this subsection, the comparisons of SABB algorithm and the algorithms in [29, 32] for solving P are performed.
As shown in Table 7, SABB algorithm and the algorithm in [32] find the same optimal values for all random test instances, whereas the CPU time cost by SABB algorithm is less than that of the algorithm in [32]. Moreover, the CPU time spent by SABB algorithm grows slowly with the increase of n compared to the other two algorithms. By contrast, the algorithm in [29] finds the global optimal values for only a few random instances, and it costs more CPU time than the other two algorithms. These results reveal that the self-adjustable branching rule helps improve the efficiency of SABB algorithm.
6 Conclusions
In this paper, we investigate a linear multiplicative program (LMP), which has important applications in various domains such as financial optimization and robust optimization. Firstly, by employing appropriate variable substitution techniques, LMP is transformed into an equivalent problem (EP). Subsequently, EP is relaxed into a series of linear relaxation programs through the application of affine approximations. Then we propose a self-adjustable branch-and-bound algorithm by integrating the self-adjustable branching rule into the branch-and-bound framework. The proposed algorithm is proven to converge to the global optimal solution of the initial LMP. Additionally, we analyze the computational complexity of the algorithm. Finally, numerical results demonstrate its feasibility and high efficiency. A future research direction is to investigate whether more effective relaxation methods, alternative branching rules or reduction strategies exist for addressing general linear multiplicative problems.
Availability of Data and Materials
Not applicable.
References
Kahl, F., Agarwal, S., Chandraker, M.K., Kriegman, D., Belongie, S.: Practical global optimization for multiview geometry. Int. J. Comput. Vis. 79(3), 271–284 (2008)
Qu, S., Zhou, Y., Zhang, Y., Wahab, M.I.M., Zhang, G., Ye, Y.: Optimal strategy for a green supply chain considering shipping policy and default risk. Comput. Ind. Eng. 131, 172–186 (2019)
Konno, H., Shirakawa, H., Yamazaki, H.: A mean-absolute deviation-skewness portfolio optimization model. Ann. Oper. Res. 45, 205–220 (1993)
Konno, H., Kuno, T.: Generalized linear multiplicative and fractional programming. Ann. Oper. Res. 25, 147–162 (1990)
Quesada, I., Grossmann, I.E.: Alternative bounding applications for the global optimization of various engineering design problems. In: Grossmann, I.E. (ed.) Global Optimization in Engineering Design. Nonconvex Optimization and Its Applications, vol. 9, pp. 309–331. Springer, Berlin (1996)
Bennett, K., Mangasarian, O.: Bilinear separation of two sets in n-space. Comput. Optim. Appl. 2, 207–227 (1994)
Dorneich, M., Sahinidis, N.: Global optimization algorithms for chip design and compaction. Eng. Optim. 25(2), 131–154 (1995)
Mulvey, J., Vanderbei, R., Zenios, S.: Robust optimization of large-scale systems. Oper. Res. 43, 264–281 (1995)
Tuy, H.: Convex Analysis and Global Optimization, 2nd edn. Kluwer Academic, Dordrecht (2016)
Benson, H.: Global maximization of a generalized concave multiplicative function. J. Optim. Theory Appl. 137, 105–120 (2008)
Zhao, Y., Liu, S.: Global optimization algorithm for mixed integer quadratically constrained quadratic program. J. Comput. Appl. Math. 319, 159–169 (2017)
Lu, C., Deng, Z., Jin, Q.: An eigenvalue decomposition based branch-and-bound algorithm for non-convex quadratic programming problems with convex quadratic constraints. J. Global Optim. 67(3), 475–493 (2017)
Luo, H., Chen, S., Wu, H.: A new branch-and-cut algorithm for non-convex quadratic programming via alternative direction method and semidefinite relaxation. Numer. Algorithms 88, 993–1024 (2021)
Konno, H., Kuno, T., Yajima, Y.: Parametric simplex algorithms for a class of NP-complete problems whose average number of steps is polynomial. Comput. Optim. Appl. 1, 227–239 (1992)
Raghavachari, M.: On connections between zero-one integer programming and concave programming under linear constraints. Oper. Res. 17, 680–684 (1969)
Matsui, T.: NP-hardness of linear multiplicative programming and related problems. J. Global Optim. 9(2), 113–119 (1996)
Konno, H., Kuno, T.: Linear multiplicative programming. Math. Program. 56, 51–64 (1992)
Ryoo, H.S., Sahinidis, N.V.: Global optimization of multiplicative programs. J. Global Optim. 26, 387–418 (2003)
Gao, Y., Xu, C., Yang, Y.: Outcome-space branch and bound algorithm for solving linear multiplicative programming. Comput. Intell. Secur. 3801, 675–681 (2005)
Zhou, X., Cao, B., Wu, K.: Global optimization method for linear multiplicative programming. Acta Math. Appl. Sin. 31(2), 325–334 (2015)
Cambini, R., Riccardi, R., Scopelliti, D.: Solving linear multiplicative programs via branch-and-bound: a computational experience. Comput. Manag. Sci. 20(1), 38 (2023)
Cambini, R., Sodini, C.: Global optimization of a rank-two nonconvex program. Math. Methods Oper. Res. 71(1), 165–180 (2010)
Cambini, R., Sodini, C.: On the minimization of a class of generalized linear functions on a flow polytope. Optimization 63(10), 1449–1464 (2014)
Yang, L., Shen, P., Pei, Y.: A global optimization approach for solving generalized nonlinear multiplicative programming problem. Abstr. Appl. Anal. 2014(1), 641909 (2014)
Gao, Y., Xu, C., Yang, Y.: An outcome-space finite algorithm for solving linear multiplicative programming. Appl. Math. Comput. 179(2), 494–505 (2006)
Oliveira, R.M., Ferreira, P.A.V.: An outcome space approach for generalized convex multiplicative programs. J. Global Optim. 47(1), 107–118 (2010)
Shen, P., Huang, B., Wang, L.: Range division and linearization algorithm for a class of linear ratios optimization problems. J. Comput. Appl. Math. 350, 324–342 (2019)
Liu, S., Zhao, Y.: An efficient algorithm for globally solving generalized linear multiplicative programming. J. Comput. Appl. Math. 296, 840–847 (2016)
Wang, C., Bai, Y., Shen, P.: A practicable branch-and-bound algorithm for globally solving multiplicative programming. Optimization 66(3), 397–405 (2017)
Wang, C., Deng, Y., Shen, P.: A novel convex relaxation-strategy-based algorithm for solving linear multiplicative problems. J. Comput. Appl. Math. 407, 114080 (2022)
Zhao, Y., Zhao, T.: Global optimization for generalized linear multiplicative programming using convex relaxation. Math. Probl. Eng. 2018, 9146309 (2018)
Yin, J., Jiao, H., Shang, Y.: Global algorithm for generalized affine multiplicative programming problem. IEEE Access 7, 162245–162253 (2019)
Shen, P., Wang, K., Lu, T.: Outer space branch and bound algorithm for solving linear multiplicative programming problems. J. Global Optim. 78, 453–482 (2020)
Shen, P., Huang, B.: Global algorithm for solving linear multiplicative programming problems. Optim. Lett. 14, 693–710 (2020)
Shen, P., Wang, K., Lu, T.: Global optimization algorithm for solving linear multiplicative programming problems. Optimization 71(6), 1421–1441 (2022)
Shen, P., Wu, D., Wang, F.: An efficient spatial branch-and-bound algorithm using an adaptive branching rule for linear multiplicative programming. J. Comput. Appl. Math. 426, 115100 (2023)
Shen, P., Wu, D., Wang, K.: Globally minimizing a class of linear multiplicative forms via simplicial branch-and-bound. J. Global Optim. 86, 303–321 (2023)
Jiao, H., Wang, W., Chen, R., et al.: An efficient outer space algorithm for generalized linear multiplicative programming problem. IEEE Access 99, 1–1 (2020)
McCormick, G.P.: Computability of global solutions to factorable nonconvex programs: Part I. Convex underestimating problems. Math. Program. 10(1), 147–175 (1976)
Acknowledgements
The authors are grateful to the responsible editor and the anonymous referees for their valuable comments and suggestions, which have helped to substantially improve the presentation of this work.
Funding
Not applicable.
Author information
Contributions
The whole work has been carried out by the author.
Ethics declarations
Conflict of interest
No potential conflict of interest was reported by the author.
Ethics Approval
Not applicable.
Additional information
Communicated by Anton Abdulbasah Kamil.
Cite this article
Zhang, Y. A Self-Adjustable Branch-and-Bound Algorithm for Solving Linear Multiplicative Programming. Bull. Malays. Math. Sci. Soc. 47, 137 (2024). https://doi.org/10.1007/s40840-024-01730-3