1 Introduction

Consider the following linear multiplicative problem:

$$\begin{aligned} \mathbf{\mathrm {(LMP)}:}\left\{ \begin{array}{ll} \max &{}f(\varvec{y})=\sum \limits _{i=1}^{p}(\varvec{c}_{i}^{T}\varvec{y}+c_{0i})(\varvec{d}_{i}^{T}\varvec{y}+d_{0i})\\ \mathrm {s.t.}&{} \varvec{y}\in {\mathbb {Y}}=\{\varvec{y}\in {\mathbb {R}}^{n}\ |\ \varvec{A}\varvec{y}\le \varvec{b}\}, \end{array} \right. \end{aligned}$$

where \(\varvec{c}_{i},\varvec{d}_{i}\in {\mathbb {R}}^{n}\), \(c_{0i},d_{0i}\in {\mathbb {R}}\), \(\varvec{A}\in {\mathbb {R}}^{m\times n}\), \(\varvec{b}\in {\mathbb {R}}^{m}\), and \({\mathbb {Y}}\subseteq {\mathbb {R}}^{n}\) is a nonempty and bounded set.

In recent decades, LMP, as a specialized multiplicative program, has garnered significant attention from researchers. The prominence of LMP stems from its wide-ranging applications in financial optimization [1,2,3], microeconomics [4], plant layout design [5], data mining/pattern recognition [6], VLSI chip design [7], robust optimization [8], and various special linear or nonlinear programming problems [9,10,11,12,13,14,15] that can be converted into the form of LMP. Another key aspect is that LMP is known to be NP-hard [16] and typically exhibits multiple locally optimal solutions that are not globally optimal [17].

Various practical algorithms have been proposed for solving LMP and its special cases. Based on their distinctive characteristics, these algorithms can be classified into branch-and-bound algorithms [18,19,20,21], optimal level solution methods [22, 23], monotonic optimization approaches [24], outcome-space approaches [25, 26], approximate algorithms [27], level set algorithms [28], etc. In particular, methods that incorporate other techniques into the branch-and-bound framework have attracted considerable attention. For instance, Wang et al. [29, 30] presented a practicable branch-and-bound method and a novel convex relaxation-strategy-based algorithm by using a new linear relaxation technique and a convex approximation approach, respectively. Zhao et al. [31] developed a simple global optimization algorithm based on a new convex relaxation method, the branch-and-bound scheme and some accelerating techniques. Yin et al. [32] proposed a new outer space rectangle branch-and-bound algorithm by employing an affine approximation technique to successively refine the initial outer space rectangle. Shen et al. [33,34,35,36,37] designed a series of global algorithms by revisiting the linear relaxation technique, the second-order cone relaxation and the convex quadratic relaxation, respectively. However, the above branch-and-bound algorithms are usually combined with the standard bisection branching rule, which uses the midpoint of the longest edge of the rectangle as the branching point. As a result, the optimal solution of the divided rectangle may still be the optimal solution over a sub-rectangle, i.e., the upper bound for the optimal value of LMP over the sub-rectangle is not updated during the execution of the algorithm.

Hence, a self-adjustable branch-and-bound (SABB) algorithm is presented to solve LMP. Compared with the existing literature [29,30,31,32,33,34,35, 38], the features of the SABB algorithm are summarized as follows: (i) Although the linear relaxation program is similar to the existing methods in [29, 32, 39], the objective function in our algorithm is processed prior to the equivalent transformation, which differs from the algorithms in [29, 32]. Furthermore, this algorithm requires fewer auxiliary variables for solving LMP than those used in previous studies [29, 32]. (ii) The self-adjustable branching rule introduced in the proposed algorithm continually updates the upper bound for the optimal value of LMP by selecting an appropriate branching point in the direction where the relaxation approximation over the rectangle is poorest. Numerical results validate the effectiveness of the self-adjustable branching rule by comparing it with the standard bisection branching rule. (iii) The branching operation of the proposed algorithm is performed in the image space \({\mathbb {R}}^{p}\) rather than in the decision variable space \({\mathbb {R}}^{n}\) as in [29, 30, 34]. This means that the proposed algorithm economizes on the required computation when \(p\ll n\). (iv) The global convergence and computational complexity of the proposed algorithm are analyzed, whereas the complexity of the algorithms in [29, 32, 34, 38] is not specified.

The rest of this paper is organized as follows. In Sect. 2, we describe the equivalent transformation for LMP and establish its linear relaxation program. The self-adjustable branching rule is introduced in Sect. 3. In Sect. 4, we summarize the SABB algorithm, establish its global convergence, and estimate its complexity. We test some numerical examples and report the numerical results in Sect. 5. Finally, some conclusions are given in Sect. 6.

2 Equivalent Problem and Its Linear Relaxation Programming

To globally solve LMP, an equivalent problem (EP) is established by an equivalent transformation. Subsequently, we propose a linear relaxation programming approach for EP, which provides an upper bound to the optimal value of LMP.

In regard to the objective function \(f(\varvec{y})\) of LMP, we convert it into the following form:

$$\begin{aligned} f(\varvec{y})= & {} \sum \limits _{i=1}^{p}\left( \varvec{c}_{i}^{T}\varvec{y}+c_{0i}\right) \left( \varvec{d}_{i}^{T}\varvec{y}+d_{0i}\right) \\= & {} \sum \limits _{i=1}^{p}\left[ \left( \varvec{c}_{i}^{T}\varvec{y}\right) \left( \varvec{d}_{i}^{T}\varvec{y}\right) +\left( d_{0i}\varvec{c}_{i}+c_{0i}\varvec{d}_{i}\right) ^{T}\varvec{y}+c_{0i}d_{0i}\right] . \end{aligned}$$

By introducing 2p auxiliary variables, we convert LMP into its EP. Let

$$\begin{aligned} z_{i}=\varvec{c}_{i}^{T}\varvec{y},\ h_{i}=\varvec{d}_{i}^{T}\varvec{y}z_{i}, i=1,\ldots , p, \end{aligned}$$

and define an initial rectangle \(Z^{0}\), given by

$$\begin{aligned} Z^{0}=\left\{ \varvec{z}\in {\mathbb {R}}^{p}\ | l_{i}^{0}\le z_{i} \le u_{i}^{0}, i=1,\ldots ,p\right\} , \end{aligned}$$

where \({l_{i}}^{0}=\min \nolimits _{\varvec{y}\in {\mathbb {Y}}}\varvec{c}_{i}^{T}\varvec{y},\) \({u_{i}}^{0}=\max \nolimits _{\varvec{y}\in {\mathbb {Y}}}\varvec{c}_{i}^{T}\varvec{y}, i=1, \ldots , p.\) Meanwhile, we also calculate \({L_{i}}=\min \nolimits _{\varvec{y}\in {\mathbb {Y}}}\varvec{d}_{i}^{T}\varvec{y},\) \({U_{i}}=\max \nolimits _{\varvec{y}\in {\mathbb {Y}}}\varvec{d}_{i}^{T}\varvec{y}, i=1,\dots , p.\)
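In practice, the initial bounds above are obtained by solving \(2p\) pairs of linear programs over \({\mathbb {Y}}\). A minimal, purely illustrative Python sketch: since a linear function attains its extrema at the vertices of a bounded polytope, we evaluate it over an assumed vertex list (a practical implementation would instead call an LP solver such as `scipy.optimize.linprog`).

```python
def linear_bounds(coeff, vertices):
    # Min/max of coeff^T y over a bounded polytope Y, evaluated at its
    # vertices; a linear function attains both extrema at vertices.
    values = [sum(c * v for c, v in zip(coeff, vert)) for vert in vertices]
    return min(values), max(values)

# Hypothetical example: Y is the unit square, c_1 = (1, 2), d_1 = (3, -1)
square = [(0, 0), (1, 0), (0, 1), (1, 1)]
l0, u0 = linear_bounds((1, 2), square)   # bounds l_1^0, u_1^0 for z_1
L1, U1 = linear_bounds((3, -1), square)  # bounds L_1, U_1 for d_1^T y
```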

Hence, LMP can be reformulated as EP over \(Z^{0}\) as follows:

$$\begin{aligned} \mathbf{\mathrm{{(EP( {Z}^{0}))}}:}\left\{ \begin{array}{ll} \max &{}\phi (\varvec{y},\varvec{z},\varvec{h})=\sum \limits _{i=1}^{p}[h_{i}+(d_{0i}\varvec{c}_{i}+c_{0i}\varvec{d}_{i})^{T}\varvec{y}+c_{0i}d_{0i}]\\ \mathrm {s.t.}&{} h_{i}=\varvec{d}_{i}^{T}\varvec{y}z_{i}, i=1, \ldots ,p, \\ &{}z_{i}=\varvec{c}_{i}^{T}\varvec{y}, i=1, \ldots ,p, \\ &{} \varvec{z}\in Z^{0},\\ &{} \varvec{d}_{i}^{T}\varvec{y}\in [L_{i},U_{i}], i=1, \ldots ,p,\\ &{} \varvec{y}\in {\mathbb {Y}}. \end{array} \right. \end{aligned}$$

The following theorem establishes the equivalence between LMP and EP.

Theorem 1

\(\varvec{y}^{*}\) is an optimal solution to LMP if and only if \((\varvec{y}^{*},\varvec{z}^{*},\varvec{h}^{*})\) is an optimal solution to EP, in which \(z_{i}^{*}=\varvec{c}_{i}^{T}\varvec{y}^{*}, h_{i}^{*}=\varvec{d}_{i}^{T}\varvec{y}^{*}z_{i}^{*}, i=1, 2, \ldots , p\).

Proof

This proof bears resemblance to Theorem 1 in Shen et al. [33], so it is omitted from this paper. \(\square \)

According to Theorem 1, LMP and EP share the same global optimal value. Hence, we focus on solving EP in what follows.

For convenience, assume that

$$\begin{aligned} Z=\{\varvec{z}\in {\mathbb {R}}^{p}\ | l_{i}\le z_{i} \le u_{i}, i=1,\ldots ,p\}\subseteq Z^{0}. \end{aligned}$$
(2.1)

Thus, for any \(Z\subseteq Z^{0}\), the corresponding EP over Z can be rewritten as follows:

$$\begin{aligned} \mathbf{\mathrm{{(EP( {Z}))}}:}\left\{ \begin{array}{ll} \max &{}\phi (\varvec{y},\varvec{z},\varvec{h})=\sum \limits _{i=1}^{p}[h_{i}+(d_{0i}\varvec{c}_{i}+c_{0i}\varvec{d}_{i})^{T}\varvec{y}+c_{0i}d_{0i}]\\ \mathrm {s.t.}&{} h_{i}=\varvec{d}_{i}^{T}\varvec{y}z_{i}, i=1, \ldots ,p, \\ &{}z_{i}=\varvec{c}_{i}^{T}\varvec{y}, i=1, \ldots ,p, \\ &{} \varvec{z}\in Z,\\ &{} \varvec{d}_{i}^{T}\varvec{y}\in [L_{i},U_{i}], i=1, \ldots ,p,\\ &{} \varvec{y}\in {\mathbb {Y}}. \end{array} \right. \end{aligned}$$

From the structure of EP(Z), its non-convexity is confined to the constraints \(h_{i}=\varvec{d}_{i}^{T}\varvec{y}z_{i}, i=1, \ldots ,p\). Inspired by [29, 32, 39], for any \(Z\subseteq Z^{0}\) and \(\varvec{d}_{i}^{T}\varvec{y}\in [L_{i},U_{i}], i=1, \ldots ,p,\) by using the concave envelope of the bilinear function, the following relationships hold:

$$\begin{aligned} h_{i}\le l_{i}\varvec{d}_{i}^{T}\varvec{y}+U_{i}z_{i}-l_{i}U_{i}, h_{i}\le u_{i}\varvec{d}_{i}^{T}\varvec{y}+L_{i}z_{i}-u_{i}L_{i}, i=1, \ldots ,p. \end{aligned}$$

Then, the linear relaxation problem of EP(Z) is formulated.

$$\begin{aligned} \mathbf{\mathrm{{(LRP( {Z}))}}:}\left\{ \begin{array}{ll} \max &{}\varphi (\varvec{y},\varvec{z},\varvec{h})=\sum \limits _{i=1}^{p}[h_{i}+(d_{0i}\varvec{c}_{i}+c_{0i}\varvec{d}_{i})^{T}\varvec{y}+c_{0i}d_{0i}]\\ \mathrm {s.t.}&{} h_{i}\le l_{i}\varvec{d}_{i}^{T}\varvec{y}+U_{i}z_{i}-l_{i}U_{i}, i=1, \ldots ,p, \\ &{}h_{i}\le u_{i}\varvec{d}_{i}^{T}\varvec{y}+L_{i}z_{i}-u_{i}L_{i}, i=1, \ldots ,p, \\ &{}z_{i}=\varvec{c}_{i}^{T}\varvec{y}, i=1, \ldots ,p, \\ &{} \varvec{z}\in Z,\\ &{} \varvec{d}_{i}^{T}\varvec{y}\in [L_{i},U_{i}], i=1, \ldots ,p,\\ &{} \varvec{y}\in {\mathbb {Y}}. \end{array} \right. \end{aligned}$$

Apparently, the optimal value of LRP(\( {Z}\)) can provide an upper bound for that of EP(\( {Z}\)).
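The validity of the relaxation can also be checked numerically: the concave envelope used above never falls below the bilinear term on the box and is exact on its boundary. A small illustrative sketch (all numeric values are hypothetical):

```python
def envelope(t, z, l, u, L, U):
    # Concave envelope of the bilinear term t*z over t in [L, U], z in [l, u]:
    # the pointwise minimum of the two linear overestimators used in LRP(Z).
    return min(l * t + U * z - l * U, u * t + L * z - u * L)

# Sample the box on a grid: the envelope never falls below t*z, and it
# coincides with t*z when z (or t) hits an interval endpoint (cf. Lemma 1).
l, u, L, U = 0.0, 2.0, 1.0, 3.0
gap_at_boundary = envelope(2.0, l, l, u, L, U) - 2.0 * l  # exact at z = l
```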

Remark 1

\(\min \{l_{i}\varvec{d}_{i}^{T}\varvec{y}+U_{i}z_{i}-l_{i}U_{i},\ u_{i}\varvec{d}_{i}^{T}\varvec{y}+L_{i}z_{i}-u_{i}L_{i}\}-\varvec{d}_{i}^{T}\varvec{y}z_{i}\rightarrow 0\), \(i=1, \ldots ,p,\) when \((u_{i}-l_{i})\rightarrow 0\) for \(i=1, \ldots ,p.\) The proof is similar to that of Theorem 1 in Yin et al. [32], where the details are presented.

3 The Self-Adjustable Branching Rule

In this section, we equip the algorithm with a self-adjustable branching rule in order to expedite the attainment of an optimal solution of LMP. In [29, 30, 32,33,34,35, 38], the standard bisection branching rule is used, and it can be described as follows:

  1. (i)

    Let \(q=\arg \max \{u_{i}-l_{i}|i=1,2,\ldots ,p\}\).

  2. (ii)

    Z is subdivided into two p-dimensional sub-rectangles \(Z_{1}\) and \(Z_{2}\), i.e.,

    $$\begin{aligned} Z_{1}=\left\{ \varvec{z}\in {\mathbb {R}}^{p}| l_{i}\le z_{i} \le u_{i}, i=1,2,\ldots ,p,i\ne q, l_{q}\le z_{q} \le \dfrac{(l_{q}+u_{q})}{2}\right\} ,\\ Z_{2}=\left\{ \varvec{z}\in {\mathbb {R}}^{p}| l_{i}\le z_{i} \le u_{i}, i=1,2,\ldots ,p,i\ne q, \dfrac{(l_{q}+u_{q})}{2}\le z_{q} \le u_{q}\right\} . \end{aligned}$$
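For comparison with the rule proposed below, the standard bisection rule can be sketched as follows (boxes are represented as lists of \([l_i, u_i]\) pairs; an illustrative sketch, not code from the cited works):

```python
def bisect_longest_edge(box):
    # Standard bisection: split the box at the midpoint of its longest edge.
    q = max(range(len(box)), key=lambda i: box[i][1] - box[i][0])
    mid = 0.5 * (box[q][0] + box[q][1])
    z1 = [list(e) for e in box]
    z2 = [list(e) for e in box]
    z1[q][1] = mid   # Z_1: l_q <= z_q <= (l_q + u_q)/2
    z2[q][0] = mid   # Z_2: (l_q + u_q)/2 <= z_q <= u_q
    return q, z1, z2
```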

Under the standard bisection branching rule, the optimal solution of LRP(\( {Z}\)) may still be the optimal solution to LRP(\( {Z}_{1}\)) or LRP(\( {Z}_{2}\)), so that the upper bound for the optimal value of LMP over Z is not updated during the execution of the algorithm. Our focus is therefore on devising a branching strategy that ensures a continuous update of the upper bound for the optimal value of LMP.

Lemma 1

For any \(Z\subseteq Z^{0}\), assume that \((\varvec{y}^{*}, \varvec{z}^{*}, \varvec{h}^{*})\) is an optimal solution of LRP(Z). For any i, if \(z_{i}^{*}=l_{i}\) or \(z_{i}^{*}=u_{i}\) or \(u_{i}-l_{i}=0\) or \(\varvec{d}_{i}^{T}\varvec{y}^{*}=L_{i}\) or \(\varvec{d}_{i}^{T}\varvec{y}^{*}=U_{i}\) or \(U_{i}-L_{i}=0\), we have \(h_{i}^{*}=\varvec{d}_{i}^{T}\varvec{y}^{*}z_{i}^{*}.\)

Proof

If \(z_{i}^{*}=l_{i}\), then \(h_{i}^{*}\le l_{i}\varvec{d}_{i}^{T}\varvec{y}^{*}+U_{i}z_{i}^{*}-l_{i}U_{i}=\varvec{d}_{i}^{T}\varvec{y}^{*}z_{i}^{*}.\)

If \(z_{i}^{*}=u_{i}\), then \(h_{i}^{*}\le u_{i}\varvec{d}_{i}^{T}\varvec{y}^{*}+L_{i}z_{i}^{*}-u_{i}L_{i}=\varvec{d}_{i}^{T}\varvec{y}^{*}z_{i}^{*}.\)

If \(u_{i}-l_{i}=0\), then \(u_{i}=l_{i}=z_{i}^{*}\), so \(h_{i}^{*}\le l_{i}\varvec{d}_{i}^{T}\varvec{y}^{*}+U_{i}z_{i}^{*}-l_{i}U_{i}=\varvec{d}_{i}^{T}\varvec{y}^{*}z_{i}^{*}\) and \(h_{i}^{*}\le u_{i}\varvec{d}_{i}^{T}\varvec{y}^{*}+L_{i}z_{i}^{*}-u_{i}L_{i}=\varvec{d}_{i}^{T}\varvec{y}^{*}z_{i}^{*}.\)

In the same way, if \(\varvec{d}_{i}^{T}\varvec{y}^{*}=L_{i}\) or \(\varvec{d}_{i}^{T}\varvec{y}^{*}=U_{i}\) or \(U_{i}-L_{i}=0\), then \(h_{i}^{*}\le \varvec{d}_{i}^{T}\varvec{y}^{*}z_{i}^{*}.\) In each case, since the objective \(\varphi \) is increasing in each \(h_{i}\) and \((\varvec{y}^{*}, \varvec{z}^{*}, \varvec{h}^{*})\) is optimal, \(h_{i}^{*}\) attains this upper bound, i.e., \(h_{i}^{*}=\varvec{d}_{i}^{T}\varvec{y}^{*}z_{i}^{*}.\) \(\square \)

For any \(Z\subseteq Z^{0}\), assume that \((\varvec{y}^{*}, \varvec{z}^{*}, \varvec{h}^{*})\) is an optimal solution of LRP(\( {Z}\)). We choose \(\xi =\arg \max \{\min \{l_{i}\varvec{d}_{i}^{T}\varvec{y}^{*}+U_{i}z_{i}^{*}-l_{i}U_{i}, \ u_{i}\varvec{d}_{i}^{T}\varvec{y}^{*}+L_{i}z_{i}^{*}-u_{i}L_{i}\}-\varvec{d}_{i}^{T}\varvec{y}^{*}z_{i}^{*} \ | \ i=1,2,\ldots ,p\}\) as the branching direction. According to Lemma 1, for any \(i=1, \ldots , p\), we have \(\min \{l_{i}\varvec{d}_{i}^{T}\varvec{y}^{*}+U_{i}z_{i}^{*}-l_{i}U_{i},\ u_{i}\varvec{d}_{i}^{T}\varvec{y}^{*}+L_{i}z_{i}^{*}-u_{i}L_{i}\}-\varvec{d}_{i}^{T}\varvec{y}^{*}z_{i}^{*}=0\) whenever \(z_{i}^{*}=l_{i}\), \(z_{i}^{*}=u_{i}\), \(u_{i}-l_{i}=0\), \(\varvec{d}_{i}^{T}\varvec{y}^{*}=L_{i}\), \(\varvec{d}_{i}^{T}\varvec{y}^{*}=U_{i}\) or \(U_{i}-L_{i}=0\); such directions will not be chosen for further division. Thus, Z is divided into \(Z^{1}\) and \(Z^{2}\) along \([l_{\xi }, u_{\xi }]\) at point r, as follows:

$$\begin{aligned} Z^{1}=\{\varvec{z}\in {\mathbb {R}}^{p}| l_{i}\le z_{i} \le u_{i}, i=1,2,\ldots ,p,i\ne \xi , l_{\xi }\le z_{\xi } \le r\}, \end{aligned}$$
(3.1)
$$\begin{aligned} Z^{2}=\{\varvec{z}\in {\mathbb {R}}^{p}| l_{i}\le z_{i} \le u_{i}, i=1,2,\ldots ,p,i\ne \xi , r\le z_{\xi } \le u_{\xi }\}, \end{aligned}$$
(3.2)

in which \(r\in (l_{\xi }, u_{\xi })\) is called the branching point. Note that the inequality constraints of the two sub-rectangles in the \(\xi \) direction are expressed as

$$\begin{aligned} h_{\xi }^{*}\le l_{\xi }\varvec{d}_{\xi }^{T}\varvec{y}^{*}+U_{\xi }z_{\xi }^{*}-l_{\xi }U_{\xi },\end{aligned}$$
(3.3)
$$\begin{aligned} h_{\xi }^{*}\le r\varvec{d}_{\xi }^{T}\varvec{y}^{*}+L_{\xi }z_{\xi }^{*}-rL_{\xi },\end{aligned}$$
(3.4)
$$\begin{aligned} h_{\xi }^{*}\le r\varvec{d}_{\xi }^{T}\varvec{y}^{*}+U_{\xi }z_{\xi }^{*}-rU_{\xi },\end{aligned}$$
(3.5)
$$\begin{aligned} h_{\xi }^{*}\le u_{\xi }\varvec{d}_{\xi }^{T}\varvec{y}^{*}+L_{\xi }z_{\xi }^{*}-u_{\xi }L_{\xi }. \end{aligned}$$
(3.6)

Based on the above discussions, the optimal solution \((\varvec{y}^{*}, \varvec{z}^{*}, \varvec{h}^{*})\) of LRP(\( {Z}\)) is cut off by dividing along \([l_{\xi }, u_{\xi }]\) at point r if \(h_{\xi }^{*}> r\varvec{d}_{\xi }^{T}\varvec{y}^{*}+L_{\xi }z_{\xi }^{*}-rL_{\xi }\) and \( h_{\xi }^{*}> r\varvec{d}_{\xi }^{T}\varvec{y}^{*}+U_{\xi }z_{\xi }^{*}-rU_{\xi }\) hold. Solving these two inequalities for r, we obtain

$$\begin{aligned} r<\frac{h_{\xi }^{*}-L_{\xi }z_{\xi }^{*}}{\varvec{d}_{\xi }^{T}\varvec{y}^{*}-L_{\xi }}\triangleq r1, \end{aligned}$$
(3.7)
$$\begin{aligned} r>\frac{h_{\xi }^{*}-U_{\xi }z_{\xi }^{*}}{\varvec{d}_{\xi }^{T}\varvec{y}^{*}-U_{\xi }}\triangleq r2. \end{aligned}$$
(3.8)

Combining \(l_{\xi }, u_{\xi }, r1\) and r2, the selection of r is determined by the following Theorem 2.

Theorem 2

For any \(Z\subseteq Z^{0}\), assume that \((\varvec{y}^{*}, \varvec{z}^{*}, \varvec{h}^{*})\) is the optimal solution of LRP(Z). If \({z}^{*}_{\xi }\in (l_{\xi }, u_{\xi })\), then \((\varvec{y}^{*}, \varvec{z}^{*}, \varvec{h}^{*})\) is not an optimal solution of either LRP\((Z^{1})\) or LRP\((Z^{2})\), where \(Z^{1}\) and \(Z^{2}\) are given in (3.1) and (3.2), respectively, and r in \(Z^{1}\) and \(Z^{2}\) is given as follows:

$$\begin{aligned} r={\left\{ \begin{array}{ll}w, \qquad \quad if \ w\in (r2, r1),\\ r2+{\bar{\varepsilon }}, \qquad if \ w\le r2, \\ r1-{\bar{\varepsilon }},\qquad if \ w\ge r1{,} \end{array}\right. } \end{aligned}$$
(3.9)

with (3.7), (3.8), \(0<{\bar{\varepsilon }}<r1-r2\) and \(w=\frac{1}{2}(l_{\xi }+u_{\xi })\). Otherwise, let \(r=z_{\xi }^{*}\); then \(h_{\xi }^{*}=\varvec{d}_{\xi }^{T}\varvec{y}^{*}z_{\xi }^{*}\), and the \(\xi \)-th edge of \(Z^{s}\) \((s=1, 2)\) will not be chosen for further division.

Proof

From (3.7) and (3.8), we have

$$\begin{aligned} r1-r2= & {} \frac{h_{\xi }^{*}-L_{\xi }z_{\xi }^{*}}{\varvec{d}_{\xi }^{T}\varvec{y}^{*}-L_{\xi }} -\frac{h_{\xi }^{*}-U_{\xi }z_{\xi }^{*}}{\varvec{d}_{\xi }^{T}\varvec{y}^{*}-U_{\xi }} \ge 0, \\ r1-u_{\xi }= & {} \frac{h_{\xi }^{*}-L_{\xi }z_{\xi }^{*}}{\varvec{d}_{\xi }^{T}\varvec{y}^{*}-L_{\xi }}-u_{\xi } =\frac{h_{\xi }^{*}-L_{\xi }z_{\xi }^{*}-u_{\xi }\varvec{d}_{\xi }^{T}\varvec{y}^{*}+u_{\xi }L_{\xi }}{\varvec{d}_{\xi }^{T}\varvec{y}^{*}-L_{\xi }} \le 0, \\ r2-l_{\xi }= & {} \frac{h_{\xi }^{*}-U_{\xi }z_{\xi }^{*}}{\varvec{d}_{\xi }^{T}\varvec{y}^{*}-U_{\xi }}-l_{\xi } =\frac{h_{\xi }^{*}-U_{\xi }z_{\xi }^{*}-l_{\xi }\varvec{d}_{\xi }^{T}\varvec{y}^{*}+l_{\xi }U_{\xi }}{\varvec{d}_{\xi }^{T}\varvec{y}^{*}-U_{\xi }} \ge 0, \end{aligned}$$

where the above inequalities follow from \(L_{\xi }< \varvec{d}_{\xi }^{T}\varvec{y}< U_{\xi }\), (3.6) and (3.3), respectively. Thus, we obtain

$$\begin{aligned} l_{\xi }\le r2< r1\le u_{\xi }. \end{aligned}$$

For \(l_{\xi }< z_{\xi }^{*}< u_{\xi }\), we have the following two cases:

(i). If \(r2\le w\le r1\), then \(r=w.\)

Since \(r>r2\), it follows that

$$\begin{aligned} h_{\xi }^{*}-r\varvec{d}_{\xi }^{T}\varvec{y}^{*}-U_{\xi }z_{\xi }^{*}+rU_{\xi } > h_{\xi }^{*}-U_{\xi }z_{\xi }^{*}- r2(\varvec{d}_{\xi }^{T}\varvec{y}^{*}-U_{\xi })=0. \end{aligned}$$
(3.10)

This contradicts (3.5), i.e., \((\varvec{y}^{*}, \varvec{z}^{*}, \varvec{h}^{*})\) is not the optimal solution of LRP(\(Z^{2}\)).

Similarly, since \(r<r1\), it follows that

$$\begin{aligned} h_{\xi }^{*}- r\varvec{d}_{\xi }^{T}\varvec{y}^{*}-L_{\xi }z_{\xi }^{*}+rL_{\xi } > h_{\xi }^{*}-L_{\xi }z_{\xi }^{*}- r1(\varvec{d}_{\xi }^{T}\varvec{y}^{*}-L_{\xi })=0. \end{aligned}$$
(3.11)

This contradicts (3.4), i.e., \((\varvec{y}^{*}, \varvec{z}^{*}, \varvec{h}^{*})\) is not the optimal solution of LRP(\(Z^{1}\)).

(ii). If \(w\notin (r2,r1)\), then \(w\le r2\) or \(w\ge r1\).

If \(w\le r2\), then \(r=r2+{\bar{\varepsilon }}\). Since \(r2< r=r2+{\bar{\varepsilon }}< r1\), \((\varvec{y}^{*}, \varvec{z}^{*}, \varvec{h}^{*})\) is not the optimal solution of LRP(\(Z^{2}\)) by (3.10).

Similarly, if \(w\ge r1\), then \(r=r1-{\bar{\varepsilon }}\). Thus, \((\varvec{y}^{*}, \varvec{z}^{*}, \varvec{h}^{*})\) is not the optimal solution of LRP(\(Z^{1}\)) by (3.11).

If \(z_{\xi }^{*}=l_{\xi }\) or \(z_{\xi }^{*}=u_{\xi }\), let \(r=z_{\xi }^{*}\); then, by Lemma 1, \(h_{\xi }^{*}=\varvec{d}_{\xi }^{T}\varvec{y}^{*}z_{\xi }^{*}\). In addition,

$$\begin{aligned}&l_{\xi }\varvec{d}_{\xi }^{T}\varvec{y}^{*}+U_{\xi }z_{\xi }^{*}-l_{\xi }U_{\xi }-h_{\xi }^{*}=0,\ r\varvec{d}_{\xi }^{T}\varvec{y}^{*}+L_{\xi }z_{\xi }^{*}-rL_{\xi }-h_{\xi }^{*}=0,\\&r\varvec{d}_{\xi }^{T}\varvec{y}^{*}+U_{\xi }z_{\xi }^{*}-rU_{\xi }-h_{\xi }^{*}=0,\ u_{\xi }\varvec{d}_{\xi }^{T}\varvec{y}^{*}+L_{\xi }z_{\xi }^{*}-u_{\xi }L_{\xi }-h_{\xi }^{*}=0, \end{aligned}$$

i.e., the \(\xi \)-th edge of \(Z^{s}\) \((s=1, 2)\) will not be chosen for further division. \(\square \)
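The branching-point selection (3.9) can be sketched as follows, taking \(w\) as the midpoint of \([l_{\xi }, u_{\xi }]\) (an illustrative sketch; the argument names are ours):

```python
def choose_branch_point(l_xi, u_xi, r1, r2, eps_bar):
    # Pick r per (3.9): prefer the midpoint w, but if w falls outside
    # (r2, r1), shift it just inside so the current LRP solution is cut off.
    # Assumes l_xi <= r2 < r1 <= u_xi and 0 < eps_bar < r1 - r2.
    w = 0.5 * (l_xi + u_xi)
    if r2 < w < r1:
        return w
    if w <= r2:
        return r2 + eps_bar
    return r1 - eps_bar
```

The midpoint is preferred because it minimizes the envelope-gap area discussed after Theorem 2; the shifts by \({\bar{\varepsilon }}\) keep r strictly inside \((r2, r1)\) so that both (3.10) and (3.11) apply.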

By Theorem 2, the closer r is to w, the smaller the approximation error between LRP and LMP over \(Z^{1}\) and \(Z^{2}\). The reason is as follows: let

$$\begin{aligned}{} & {} h(z_{\xi })=\varvec{d}_{\xi }^{T}\varvec{y}z_{\xi }, \\{} & {} h^{11}(z_{\xi })=l_{\xi }\varvec{d}_{\xi }^{T}\varvec{y}+U_{\xi }z_{\xi }-l_{\xi }U_{\xi }, h^{12}(z_{\xi })=r\varvec{d}_{\xi }^{T}\varvec{y}+L_{\xi }z_{\xi }-rL_{\xi }, \\{} & {} h^{21}(z_{\xi })=r\varvec{d}_{\xi }^{T}\varvec{y}+U_{\xi }z_{\xi }-rU_{\xi }, h^{22}(z_{\xi })=u_{\xi }\varvec{d}_{\xi }^{T}\varvec{y}+L_{\xi }z_{\xi }-u_{\xi }L_{\xi }. \end{aligned}$$

The area between \(h(z_{\xi })\) and the overestimators \(h^{11}(z_{\xi }), h^{12}(z_{\xi })\) over \([l_{\xi }, r]\) and \(h^{21}(z_{\xi }), h^{22}(z_{\xi })\) over \([r, u_{\xi }]\) is given by

$$\begin{aligned} S(r)&=\int ^{r}_{l_{\xi }}[h^{11}(z_{\xi })-h(z_{\xi })]dz_{\xi } +\int ^{r}_{l_{\xi }}[h^{12}(z_{\xi })-h(z_{\xi })]dz_{\xi }+ \int ^{u_{\xi }}_{r}[h^{21}(z_{\xi })-h(z_{\xi })]dz_{\xi }\\&\quad +\int ^{u_{\xi }}_{r}[h^{22}(z_{\xi })-h(z_{\xi })]dz_{\xi }\\&=\frac{U_{\xi }-L_{\xi }}{2}\left[ (r-l_{\xi })^{2}+(u_{\xi }-r)^{2}\right] . \end{aligned}$$

S(r) attains its minimum at \(r=\frac{1}{2}(l_{\xi }+u_{\xi })\), which motivates the choice of r given in (3.9). On the basis of the above discussions, the self-adjustable branching rule is summarized as follows:

The self-adjustable branching rule

\(\mathbf {\varvec{Step \ 1:}}\) Let \(\xi =\arg \max \{\min \{l_{i}\varvec{d}_{i}^{T}\varvec{y}^{*}+U_{i}z_{i}^{*}-l_{i}U_{i}, \ u_{i}\varvec{d}_{i}^{T}\varvec{y}^{*}+L_{i}z_{i}^{*}-u_{i}L_{i}\}-\varvec{d}_{i}^{T}\varvec{y}^{*}z_{i}^{*} \ | \ i=1,2,\ldots ,p\}.\)

\(\mathbf {\varvec{Step \ 2:}}\) If \(l_{\xi }<z_{\xi }^{*}<u_{\xi }\), then choose r by (3.9); otherwise, let \(r=z_{\xi }^{*}\).

\(\mathbf {\varvec{Step \ 3 :}}\) Z is subdivided into two p-dimensional sub-rectangles \(Z^{1}\) and \(Z^{2}\), which are given by (3.1) and (3.2), respectively.
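Steps 1–3 above can be sketched as follows (illustrative only; \(\varvec{y}^{*}\) enters solely through the fixed values \(t_{i}=\varvec{d}_{i}^{T}\varvec{y}^{*}\)):

```python
def envelope_gap(l, u, L, U, t, z):
    # Gap between the concave envelope of t*z and t*z itself at the current
    # LRP solution; zero at interval endpoints (Lemma 1), positive where
    # the relaxation is loose.
    return min(l * t + U * z - l * U, u * t + L * z - u * L) - t * z

def select_direction(box, LU, t, z):
    # Step 1: branch along the coordinate with the largest envelope gap.
    p = len(box)
    gaps = [envelope_gap(box[i][0], box[i][1], LU[i][0], LU[i][1], t[i], z[i])
            for i in range(p)]
    return max(range(p), key=gaps.__getitem__)

def split_at(box, xi, r):
    # Step 3: subdivide the box along edge xi at the branching point r.
    z1 = [list(e) for e in box]
    z2 = [list(e) for e in box]
    z1[xi][1] = r
    z2[xi][0] = r
    return z1, z2
```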

Based on Theorem 2, the optimal solution of LRP(\( {Z}\)) can be cut off by the self-adjustable branching rule. The implication is that each iteration of the algorithm may improve the upper bound for the optimal value of LMP. In contrast, under the standard bisection branching rule, the optimal solution of LRP over the divided rectangle may still be the optimal solution over a sub-rectangle, thereby increasing the computational cost.
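As a numeric sanity check of the area argument in this section, assume \(S(r)\) reduces to \(\frac{U_{\xi }-L_{\xi }}{2}[(r-l_{\xi })^{2}+(u_{\xi }-r)^{2}]\) after integrating the four envelope gaps; a grid search then confirms that the minimizer is the midpoint of \([l_{\xi }, u_{\xi }]\):

```python
def envelope_area(r, l, u, L, U):
    # Total area between h(z) = t*z and the four overestimators of the two
    # sub-boxes obtained by splitting [l, u] at r (the t-terms cancel).
    return 0.5 * (U - L) * ((r - l) ** 2 + (u - r) ** 2)

# Grid search over candidate branching points on (l, u)
l, u, L, U = 0.0, 4.0, 1.0, 3.0
grid = [l + (u - l) * k / 400 for k in range(1, 400)]
r_best = min(grid, key=lambda r: envelope_area(r, l, u, L, U))
```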

4 Algorithm, Convergence and Complexity

In this section, based on the linear relaxation problem and the branching rule, we present a self-adjustable branch-and-bound (SABB) algorithm for solving LMP, which successively subdivides the initial image-space rectangle and solves a series of linear relaxation problems. We establish the global convergence of the algorithm and estimate its complexity.

4.1 SABB Algorithm

The global optimal solution of LMP is obtained by the SABB algorithm, which combines the linear relaxation program, the self-adjustable branching rule and the branch-and-bound framework.

The proposed branching process is conducted in the image space using the self-adjustable branching rule, which differs from branching methods based on the original decision variables, such as the algorithms in [29, 30, 34]. Specifically, the branching process takes place in the image space \({\mathbb {R}}^{p}\) of the affine functions \(\varvec{c}_{i}^{T}\varvec{y}, i=1, \ldots ,p.\) This implies that the proposed algorithm can economize on the required computations when \(p\ll n.\)

At stage k of SABB algorithm, assume that

$$\begin{aligned} Z^{k}=\left\{ \varvec{z}\in {\mathbb {R}}^{p}|l_{i}^{k}\le z_{i} \le u_{i}^{k}, i=1,\ldots ,p\right\} \subseteq Z^{0}. \end{aligned}$$
(4.1)

Thus, \(Z^{k}\) is divided into two rectangles \(Z^{k1}\) and \(Z^{k2}\) by the self-adjustable branching rule such that \(Z^{k}=Z^{k1}\cup Z^{k2}\).

Based on the above discussions, the basic steps of SABB algorithm for globally solving LMP are summarized as follows.

SABB algorithm statement:

Step 0: Given the error tolerance \(\varepsilon >0\) and an initial rectangle \(Z^{0}\), solve LRP\((Z^{0})\) to obtain its optimal solution \((\varvec{y}^{0}, \varvec{z}^{0}, \varvec{h}^{0})\) and optimal value \(\varphi (\varvec{y}^{0}, \varvec{z}^{0}, \varvec{h}^{0})\). Set \(UB_{0}=\varphi (\varvec{y}^{0}, \varvec{z}^{0}, \varvec{h}^{0})\), \(\bar{h}_{i}^{0}=\varvec{d}_{i}^{T}\varvec{y}^{0}z_{i}^{0}\), \(i=1,2,\ldots ,p\), and \(LB_{0}=\phi (\varvec{y}^{0}, \varvec{z}^{0}, \bar{\varvec{h}}^{0})\). If \(UB_{0}-LB_{0}\le \varepsilon \), then the algorithm stops and \(\varvec{y}^{0}\) is a global \(\varepsilon \)-optimal solution for LMP. Otherwise, let \(X=\{(\varvec{y}^{0}, \varvec{z}^{0}, \bar{\varvec{h}}^{0})\}\), \(k=0\), and set \(T_{0}=\{Z^{0}\}\).

Step 1: Use the self-adjustable branching rule to subdivide \(Z^{k}\) into two new sub-rectangles \(Z^{k1},Z^{k2}\) and set \(T=\{Z^{k1},Z^{k2}\}\).

Step 2: For each \(Z^{ks}(s=1,2)\), solve LRP(\(Z^{ks}\)) to obtain the optimal solution \((\varvec{y}^{ks},\varvec{z}^{ks},\varvec{h}^{ks})\) and optimal value \(\varphi (\varvec{y}^{ks},\varvec{z}^{ks},\varvec{h}^{ks})\). Set \(UB(Z^{ks})=\varphi (\varvec{y}^{ks},\varvec{z}^{ks},\varvec{h}^{ks})\), let \(\bar{h}_{i}^{ks}=\varvec{d}_{i}^{T}\varvec{y}^{ks}{z}_{i}^{ks}\), \(i=1,2,\ldots ,p\), and \(X=X\bigcup \{(\varvec{y}^{ks}, \varvec{z}^{ks},\bar{\varvec{h}}^{ks})\}.\) If \(LB_{k}>UB(Z^{ks})\), set \(T=T{\setminus }\{Z^{ks}\}\). Let \(T_{k}=(T_{k}{\setminus }\{Z^{k}\})\bigcup T.\) Update the lower bound \(LB_{k}=\max _{(\varvec{y,z,h})\in X}\phi (\varvec{y,z,h})\) and set \((\varvec{y}^{k}, \varvec{z}^{k},\varvec{h}^{k})=\arg \max _{(\varvec{y,z,h})\in X} \phi (\varvec{y,z,h})\).

Step 3: Set \(T_{k+1}=\{Z\ |\ UB(Z)-LB_{k}>\varepsilon , Z\in T_{k}\}\). If \(T_{k+1}=\emptyset \), then terminate: \(\varvec{y}^{k}\) is a global \(\varepsilon \)-optimal solution for LMP. Otherwise, select the rectangle \(Z^{k+1}=\arg \max _{Z\in T_{k+1}}UB(Z)\), set \(k=k+1\), and return to Step 1.
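The bookkeeping in Step 3 prunes boxes whose upper bound is within \(\varepsilon \) of the incumbent lower bound and selects the surviving box with the largest upper bound; a minimal sketch (box labels are placeholders):

```python
def prune_and_select(regions, lb, eps):
    # Step 3: keep only boxes whose upper bound exceeds the incumbent lower
    # bound by more than eps; if none survive, the incumbent is eps-optimal.
    survivors = [(box, ub) for box, ub in regions if ub - lb > eps]
    if not survivors:
        return None, []                       # terminate
    next_box = max(survivors, key=lambda item: item[1])[0]
    return next_box, survivors
```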

4.2 Convergence of SABB Algorithm

In this subsection, we discuss the global convergence of SABB algorithm.

Theorem 3

Given \(\varepsilon \ge 0\), if SABB algorithm terminates finitely, then it returns a global \(\varepsilon \)-optimal solution of LMP; otherwise, it generates an infinite sequence \(\{\varvec{y}^{k}\}\), every accumulation point of which is a global optimal solution for LMP.

Proof

If the presented algorithm terminates finitely, without loss of generality suppose that it terminates at the kth iteration. Then we have

$$\begin{aligned} UB_{k}-LB_{k}\le \varepsilon . \end{aligned}$$
(4.2)

According to Step 2 of the algorithm,

$$\begin{aligned} LB_{k}=\phi (\varvec{y}^{k}, \varvec{z}^{k}, \bar{\varvec{h}}^{k})=f(\varvec{y}^{k}), \end{aligned}$$
(4.3)

By (4.2) and (4.3), we can get

$$\begin{aligned} UB_{k}-f(\varvec{y}^{k})\le \varepsilon . \end{aligned}$$
(4.4)

Let \(f^*\) be the global optimal value of LMP. Since \(UB_{k}\ge f^{*}\), combining this with (4.2) and (4.3), we have

$$\begin{aligned} f^*-\varepsilon \le UB_{k}-\varepsilon \le LB_{k}=f(\varvec{y}^{k}). \end{aligned}$$

So, \(\varvec{y}^{k}\) is a global \(\varepsilon \)-optimal solution of LMP.

If the algorithm is infinite, it generates an infinitely nested sequence of rectangles \(\{Z^{k}\}\) such that \(u_{i}^{k}-l_{i}^{k}\rightarrow 0\) as \(k\rightarrow \infty \) for \(i=1, 2,\ldots , p\), together with the sequence of optimal solutions \(\{(\varvec{y}^{k}, \varvec{z}^{k}, \varvec{h}^{k})\}\) to LRP\((Z^{k})\). Let \(\bar{h}_{i}^{k}=\varvec{d}_{i}^{T}\varvec{y}^{k}z_{i}^{k}\), \(i=1, \ldots ,p\); then \(\{(\varvec{y}^{k}, \varvec{z}^{k}, \varvec{\bar{h}}^{k})\}\) is a sequence of feasible solutions for EP\((Z^{k})\). Since the feasible region of EP\((Z^{k})\) is bounded, without loss of generality, assume that \(\lim \nolimits _{k\rightarrow \infty }\varvec{y}^{k}=\varvec{y}^{*}\); then we have

$$\begin{aligned}{} & {} \lim \limits _{k\rightarrow \infty }z_{i}^{k}=\lim \limits _{k\rightarrow \infty }\varvec{c}_{i}^{T}\varvec{y}^{k}=\varvec{c}_{i}^{T}\varvec{y}^{*}\triangleq z_{i}^{*}, i=1, 2,\ldots , p \\{} & {} \lim \limits _{k\rightarrow \infty }\bar{h}_{i}^{k}=\lim \limits _{k\rightarrow \infty }\varvec{d}_{i}^{T}\varvec{y}^{k}z_{i}^{k}=\varvec{d}_{i}^{T}\varvec{y}^{*}z_{i}^{*}\triangleq h_{i}^{*}, i=1, 2,\ldots , p. \end{aligned}$$

According to the definition of \(LB_{k}\) and the continuity of the function \(\phi \), we have

$$\begin{aligned} \lim \limits _{k\rightarrow \infty }LB_{k}=\lim \limits _{k\rightarrow \infty }f(\varvec{y}^{k}) =\lim \limits _{k\rightarrow \infty }\phi (\varvec{y}^{k}, \varvec{z}^{k}, \varvec{\bar{h}}^{k})=\phi (\varvec{y}^{*},\varvec{z}^{*},\varvec{h}^{*})=f(\varvec{y}^{*}). \end{aligned}$$
(4.5)

From the above results and \(l_{i}^{k}\le z^k_i=\varvec{c}_{i}^{T}\varvec{y}^{k}\le u_{i}^{k}\), it follows that

$$\begin{aligned} \lim \limits _{k\rightarrow \infty }l_{i}^{k}=\lim \limits _{k\rightarrow \infty } z_{i}^{k}=\lim \limits _{k\rightarrow \infty }u_{i}^{k}= z_{i}^{*}. \end{aligned}$$

Additionally, from the inequality constraints \(h_{i}^{k}\le l_{i}^{k}\varvec{d}_{i}^{T}\varvec{y}^{k}+U_{i}z_{i}^{k}-l_{i}^{k}U_{i}\) and \(h_{i}^{k}\le u_{i}^{k}\varvec{d}_{i}^{T}\varvec{y}^{k}+L_{i}z_{i}^{k}-u_{i}^{k}L_{i},\) for any i,  we have

$$\begin{aligned} \lim \limits _{k\rightarrow \infty }h_{i}^{k}\le \lim \limits _{k\rightarrow \infty }(l_{i}^{k}\varvec{d}_{i}^{T}\varvec{y}^{k}+U_{i}z_{i}^{k}-l_{i}^{k}U_{i}) =z_{i}^{*}\varvec{d}_{i}^{T}\varvec{y}^{*}+U_{i}z_{i}^{*}-z_{i}^{*}U_{i}=z_{i}^{*}\varvec{d}_{i}^{T}\varvec{y}^{*}, \\ \lim \limits _{k\rightarrow \infty }h_{i}^{k}\le \lim \limits _{k\rightarrow \infty } (u_{i}^{k}\varvec{d}_{i}^{T}\varvec{y}^{k}+L_{i}z_{i}^{k}-u_{i}^{k}L_{i})=z_{i}^{*}\varvec{d}_{i}^{T}\varvec{y}^{*} +L_{i}z_{i}^{*}-z_{i}^{*}L_{i}=z_{i}^{*}\varvec{d}_{i}^{T}\varvec{y}^{*}. \end{aligned}$$

This implies that

$$\begin{aligned} \lim \limits _{k\rightarrow \infty }UB_{k}&=\lim \limits _{k\rightarrow \infty } \varphi (\varvec{y}^{k}, \varvec{z}^{k}, \varvec{h}^{k})\\&=\lim \limits _{k\rightarrow \infty }\sum \limits _{i=1}^{p}(h_{i}^{k}+(d_{0i}\varvec{c_{i}}+c_{0i}\varvec{d_{i}})^{T}\varvec{y}^{k}+c_{0i}d_{0i})\\&\le \sum \limits _{i=1}^{p}(\varvec{d}_{i}^{T}\varvec{y}^{*}{z}_{i}^{*}+(d_{0i}\varvec{c}_{i}+c_{0i}\varvec{d}_{i})^{T}\varvec{y}^{*} +c_{0i}d_{0i})\\&=\sum \limits _{i=1}^{p}({h}_{i}^{*}+(d_{0i}\varvec{c}_{i}+c_{0i}\varvec{d}_{i})^{T}\varvec{y}^{*} +c_{0i}d_{0i})\\&=\phi (\varvec{y}^{*}, \varvec{z}^{*}, \varvec{{h}}^{*}). \end{aligned}$$

Further, combining (4.5) with \(LB_{k}\le UB_{k}\), we have

$$\begin{aligned} \lim \limits _{k\rightarrow \infty }LB_{k}=\lim \limits _{k\rightarrow \infty }UB_{k}. \end{aligned}$$
(4.6)

Based on the structure of SABB algorithm, we must have

$$\begin{aligned} LB_{k}\le f^{*}\le UB_{k}, \quad k=0,1,2,\ldots . \end{aligned}$$
(4.7)

From (4.5)-(4.7), we can get

$$\begin{aligned} \lim \limits _{k\rightarrow \infty }LB_{k}=\lim \limits _{k\rightarrow \infty }UB_{k} =f^{*}=f(\varvec{y}^{*}). \end{aligned}$$
(4.8)

Since \((\varvec{y}^{*}, \varvec{z}^{*}, \varvec{{h}}^{*})\) is a feasible solution of EP\((Z^{k})\), according to (4.8), \((\varvec{y}^{*},\varvec{z}^{*}, \varvec{{h}}^{*})\) is an optimal solution of EP\((Z^{k})\). Furthermore, by Theorem 1 and Theorem 2, it follows that \(\varvec{y}^{*}\) is a global optimal solution of LMP, i.e., every accumulation point of the sequence \(\{\varvec{y}^{k}\}\) is a global optimal solution for LMP. The proof is complete. \(\square \)

4.3 Computational Complexity of SABB Algorithm

In order to estimate the maximum number of iterations of SABB algorithm, we analyze its computational complexity. To this end, we define the size \(\delta (Z)\) of the rectangle (2.1) as

$$\begin{aligned} \delta {(Z)}=\max \{u_{j}-l_{j}|\ j=1,\ldots , p\}, \end{aligned}$$
(4.9)

and for convenience, we define

$$\begin{aligned} \mu =\max \{U_{j}-L_{j}|\ j=1,\ldots , p\}. \end{aligned}$$
(4.10)

Lemma 2

Given \(\varepsilon \ge 0\), for any \(Z\subseteq Z^{0}\) and any feasible solution \((\varvec{y}, \varvec{z}, \varvec{h})\) of LRP(Z), if \(\delta {(Z)}\le \varepsilon /(p\mu ),\) then we have

$$\begin{aligned} |\varphi (\varvec{y}, \varvec{z}, \varvec{h})-\phi (\varvec{y}, \varvec{z}, \varvec{\bar{h}})|\le \varepsilon , \end{aligned}$$

in which \(\bar{h}_{i}=\varvec{d}_{i}^{T}\varvec{y}{z}_{i}, i=1,\ldots , p\).

Proof

For any feasible solution \((\varvec{y}, \varvec{z}, \varvec{h})\) of LRP(Z), let \(\bar{h}_{i}=\varvec{d}_{i}^{T}\varvec{y}{z}_{i}, i=1,\ldots , p\); then \((\varvec{y}, \varvec{z}, \varvec{\bar{h}})\) is clearly a feasible solution of EP over Z. From Theorems 1 and 2, we have \(\phi (\varvec{y}, \varvec{z},\varvec{\bar{h}})=f(\varvec{y})\). If \(\delta {(Z)}\le \varepsilon /(p\mu )\) for a sufficiently small positive number \(\varepsilon \), we obtain

$$\begin{aligned} \displaystyle \begin{array}{ll} |\varphi (\varvec{y}, \varvec{z}, \varvec{h})-f(\varvec{y})| &{}=|\varphi (\varvec{y}, \varvec{z}, \varvec{h})-\phi (\varvec{y}, \varvec{z}, \varvec{\bar{h}})|\\ &{}=\sum \limits _{i=1}^{p}(h_{i}-\bar{h}_{i})\\ &{}\le \sum \limits _{i=1}^{p}\left[ \min \{{l}_{i}\varvec{d}_{i}^{T}\varvec{y}+{U}_{i}{z}_{i}-{l}_{i}{U}_{i}, {u}_{i}\varvec{d}_{i}^{T}\varvec{y}+{L}_{i}{z}_{i}-{u}_{i}{L}_{i}\}-\varvec{d}_{i}^{T}\varvec{y}{z}_{i}\right] \\ &{}=\sum \limits _{i=1}^{p} \min \{(z_{i}-{l}_{i})(U_{i}-\varvec{d}_{i}^{T}\varvec{y}), (u_{i}-{z}_{i})(\varvec{d}_{i}^{T}\varvec{y}-{L}_{i})\}\\ &{}\le p\mu \delta (Z)\\ &{}\le \varepsilon . \end{array} \end{aligned}$$

\(\square \)
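The key inequality in the chain above, namely that each term \(\min \{(z_{i}-l_{i})(U_{i}-\varvec{d}_{i}^{T}\varvec{y}), (u_{i}-z_{i})(\varvec{d}_{i}^{T}\varvec{y}-L_{i})\}\) is at most \(\mu \delta (Z)\), can be checked numerically. The sketch below (all interval data randomly generated and hypothetical) verifies the summed bound \(p\mu \delta (Z)\).

```python
import random

def relaxation_gap(z, l, u, t, L, U):
    """Sum over i of min{(z_i - l_i)(U_i - t_i), (u_i - z_i)(t_i - L_i)},
    where t_i stands for d_i^T y."""
    return sum(min((zi - li) * (Ui - ti), (ui - zi) * (ti - Li))
               for zi, li, ui, ti, Li, Ui in zip(z, l, u, t, L, U))

random.seed(0)
p = 4
for _ in range(1000):
    # random sub-rectangle [l, u] and outer bounds [L, U]
    l = [random.uniform(-3.0, 0.0) for _ in range(p)]
    u = [li + random.uniform(0.0, 2.0) for li in l]
    L = [random.uniform(-5.0, -3.0) for _ in range(p)]
    U = [Li + random.uniform(0.0, 6.0) for Li in L]
    z = [random.uniform(li, ui) for li, ui in zip(l, u)]   # z_i in [l_i, u_i]
    t = [random.uniform(Li, Ui) for Li, Ui in zip(L, U)]   # d_i^T y in [L_i, U_i]
    delta = max(ui - li for li, ui in zip(l, u))   # delta(Z) as in (4.9)
    mu = max(Ui - Li for Li, Ui in zip(L, U))      # mu as in (4.10)
    assert relaxation_gap(z, l, u, t, L, U) <= p * mu * delta + 1e-12
```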

Theorem 4

Given the convergence tolerance \(\varepsilon \in (0,1),\) SABB algorithm can find a global \(\varepsilon \)-optimal solution to LMP in at most

$$\begin{aligned} p\cdot \left\lceil {\log _{2}\dfrac{p \mu \delta (Z^{0})}{\varepsilon }}\right\rceil \end{aligned}$$

iterations, where \(\delta (Z^{0})\) and \(\mu \) are given by (4.9) and (4.10), respectively.

Proof

Without loss of generality, assume that the convergence tolerance is \(\varepsilon \in (0,1)\) at the initializing step and that the sub-rectangle Z is selected for partitioning in step 1 of SABB algorithm at every iteration. After \(kp\) iterations, each of the p edges of Z has been bisected k times, so we have

$$\begin{aligned} \delta (Z)\le {\dfrac{1}{2^{k}}}\delta (Z^{0}). \end{aligned}$$

From Lemma 2, if

$$\begin{aligned} {\dfrac{1}{2^{k}}}\delta (Z^{0})\le \dfrac{\varepsilon }{p \mu }, \end{aligned}$$

i.e.,

$$\begin{aligned} k\ge {\log _{2}\dfrac{p \mu \delta (Z^{0})}{\varepsilon }}, \end{aligned}$$

we can obtain \(|\varphi (\varvec{y}, \varvec{z}, \varvec{h})-\phi (\varvec{y}, \varvec{z}, \varvec{\bar{h}})|\le \varepsilon .\) Therefore, after at most

$$\begin{aligned} p\cdot \left\lceil {\log _{2}\dfrac{p \mu \delta (Z^{0})}{\varepsilon }} \right\rceil \end{aligned}$$

iterations, it follows that

$$\begin{aligned} \displaystyle \begin{array}{ll} 0&{}\le \phi (\varvec{y}^{*}, \varvec{z}^{*}, \varvec{{h}^{*}})-\phi (\varvec{y}, \varvec{z}, \varvec{\bar{h}})\\ &{}\le |\varphi (\varvec{y}, \varvec{z}, \varvec{h})-\phi (\varvec{y}, \varvec{z}, \varvec{\bar{h}})|\\ &{}\le \varepsilon , \end{array} \end{aligned}$$

where \((\varvec{y}^{*}, \varvec{z}^{*}, \varvec{{h}^{*}})\) is the optimal solution of EP. Therefore, \(\varvec{y}^{*}\) is the optimal solution of LMP. In step 2 of SABB algorithm, \((\varvec{y}^{k}, \varvec{z}^{k}, \varvec{\bar{h}^{k}})\) is the best currently known feasible solution, and we also note that \(\{\phi (\varvec{y}^{k}, \varvec{z}^{k}, \varvec{\bar{h}^{k}})\}\) is an increasing sequence satisfying

$$\begin{aligned} \phi (\varvec{y}^{k}, \varvec{z}^{k}, \varvec{\bar{h}^{k}})\ge \phi (\varvec{y}, \varvec{z}, \varvec{\bar{h}}). \end{aligned}$$

Therefore, we have

$$\begin{aligned} \phi (\varvec{y}^{*}, \varvec{z}^{*}, \varvec{{h}^{*}})-\phi (\varvec{y}^{k}, \varvec{z}^{k}, \varvec{\bar{h}^{k}})\le \phi (\varvec{y}^{*}, \varvec{z}^{*}, \varvec{{h}^{*}})-\phi (\varvec{y}, \varvec{z}, \varvec{\bar{h}})\le \varepsilon , \end{aligned}$$

which implies that \(f(\varvec{y}^{*})-f(\varvec{y}^{k})\le \varepsilon \). Hence, when SABB algorithm terminates, \(\varvec{y}^{k}\) is a global \(\varepsilon \)-optimal solution to LMP. The proof is complete. \(\square \)
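The worst-case iteration count of Theorem 4 can be evaluated directly; the sketch below uses hypothetical instance data (p, \(\mu \), \(\delta (Z^{0})\)) solely to illustrate the bound.

```python
import math

def max_iterations(p, mu, delta0, eps):
    """Worst-case iteration bound of Theorem 4: p * ceil(log2(p * mu * delta0 / eps))."""
    return p * math.ceil(math.log2(p * mu * delta0 / eps))

# hypothetical data: p = 3 terms, mu = 7, delta(Z^0) = 4, tolerance eps = 1e-4
bound = max_iterations(3, 7.0, 4.0, 1e-4)  # ceil(log2(840000)) = 20, so bound = 60
```

The bound grows only logarithmically in \(1/\varepsilon \), which is consistent with the halving of \(\delta (Z)\) every \(p\) iterations in the proof.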

5 Numerical Experiments

In this section, the effectiveness of SABB algorithm is validated by numerical computation. All algorithms are coded in MATLAB R2018b and run on a computer with an Intel(R) Core(TM) i9-13900HX CPU (2.20 GHz). The linear and quadratic subproblems in the algorithms are solved by linprog and quadprog in MATLAB, respectively.

For each \((m, n, p)\), ten random instances are generated and their average results are reported. The error tolerances \(\varepsilon \) and \({\bar{\varepsilon }}\) are both set to \(10^{-4}\), and the maximum CPU time is limited to 3600 s or 10 s, depending on the computational requirement. The notations used in the computational results are listed in Table 1.

Table 1 Notations in numerical instances

In computational experiments, we consider the following LMP:

$$\begin{aligned} \mathrm{{(P)}:}\left\{ \begin{array}{ll} \max &{}\sum \limits _{i=1}^{p}\left( \sum \limits _{j=1}^{n}{c}_{ij}{x_j}+c_{0i}\right) \left( \sum \limits _{j=1}^{n}{d}_{ij}{x_j}+d_{0i}\right) \\ \mathrm {s.t.}&{} \varvec{A\varvec{x}}\le \varvec{b}, \end{array} \right. \end{aligned}$$

where \({c}_{ij}\), \({d}_{ij}\), \({c}_{0i}\) and \({d}_{0i}\) are randomly generated in \([-5,5]\), and all elements of \(\varvec{A}\in {\mathbb {R}}^{m\times n}\) and \( \varvec{b}\in {\mathbb {R}}^{m}\) are randomly generated in [0.1, 1].
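A minimal sketch of this instance generation follows. The paper's experiments are run in MATLAB (with linprog and quadprog solving the subproblems); the Python functions below, whose names are our own, only illustrate how random instances of (P) and the objective \(f\) are constructed.

```python
import random

def generate_instance(m, n, p, rng=random):
    """Random instance of problem (P): c, d, c0, d0 uniform in [-5, 5];
    A, b uniform in [0.1, 1], as described in Section 5."""
    c = [[rng.uniform(-5, 5) for _ in range(n)] for _ in range(p)]
    d = [[rng.uniform(-5, 5) for _ in range(n)] for _ in range(p)]
    c0 = [rng.uniform(-5, 5) for _ in range(p)]
    d0 = [rng.uniform(-5, 5) for _ in range(p)]
    A = [[rng.uniform(0.1, 1.0) for _ in range(n)] for _ in range(m)]
    b = [rng.uniform(0.1, 1.0) for _ in range(m)]
    return c, d, c0, d0, A, b

def objective(x, c, d, c0, d0):
    """f(x) = sum_i (c_i^T x + c0_i)(d_i^T x + d0_i)."""
    dot = lambda v, w: sum(vi * wi for vi, wi in zip(v, w))
    return sum((dot(ci, x) + c0i) * (dot(di, x) + d0i)
               for ci, di, c0i, d0i in zip(c, d, c0, d0))
```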

5.1 Numerical Comparison of Relaxation Methods

To evaluate the performance of the proposed linear relaxation method (LRP), we compare LRP with the existing relaxation methods in [21, 29, 30, 32,33,34,35, 37].

Table 2 Comparison results of LRP, LRP1 and LRP2 for solving P

Since the relaxation methods in [21], denoted as LRP1 and LRP2, are similar to LRP, their performances are compared first. As shown in Table 2, the upper bound values of LRP are smaller than those of LRP1 and LRP2, and the lower bound values of LRP are larger than those of LRP1 and LRP2, although the CPU time spent by LRP is longer than those of LRP1 and LRP2 in most cases. These results mean that LRP provides tighter bounds than LRP1 and LRP2. Additionally, the upper bound obtained by LRP is smaller than those obtained by the relaxation methods in [29, 30, 32,33,34,35, 37] for all randomly generated instances, as shown in Table 3. These results suggest that LRP is superior to the relaxation methods in [29, 30, 32,33,34,35, 37].

Table 3 Comparison results of LRP and the relaxation methods in [29, 30, 32,33,34,35, 37] for solving P

5.2 Numerical Comparison of Branching Rules

In order to explore the influence of branching rules on the algorithm, we compare the self-adjustable branching rule (SABB algorithm) with the combination rules (BRBB and CRBB algorithms).

Let \(Z=\Pi ^{p}_{i=1}[l_{i},u_{i}]\) denote the divided rectangle, \(z^{optx}\) the corresponding optimal solution of the relaxation problem over Z, \(\xi \) the chosen branching direction, \(z_{\xi }^{*}\) the chosen branching point of the interval \([l_{\xi },u_{\xi }]\), and \(z^{M}_{\xi }\) the midpoint of the interval \([l_{\xi },u_{\xi }]\), respectively. The bisection rule in [29, 30, 32,33,34,35] chooses \(z_{\xi }^{*}=z^{M}_{\xi }\) as the branching point. The combination rule in [21] chooses a linear combination of \(z^{optx}\) and \(z^{M}_{\xi }\) as its branching point, i.e., \(z_{\xi }^{*}=\alpha z^{optx}+(1-\alpha )z^{M}_{\xi }\), \(\alpha \in [0,1]\). Note that the bisection rule is the combination rule at \(\alpha =0\), denoted as BRBB algorithm.
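The combination rule of [21] can be sketched as follows (variable names are our own); setting \(\alpha =0\) recovers the standard bisection rule used in [29, 30, 32,33,34,35].

```python
def combination_branching_point(l_xi, u_xi, z_opt_xi, alpha):
    """z*_xi = alpha * z^optx_xi + (1 - alpha) * z^M_xi, where z^M_xi is the
    midpoint of [l_xi, u_xi]; alpha = 0 gives the bisection (midpoint) rule."""
    z_mid = 0.5 * (l_xi + u_xi)
    return alpha * z_opt_xi + (1.0 - alpha) * z_mid
```

For example, on the interval [1, 5] with relaxation point 4.2, \(\alpha =0\) yields the midpoint 3.0 and \(\alpha =1\) yields 4.2 itself.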

Table 4 shows that both the CPU time and the number of iterations of SABB algorithm are lower than those of BRBB algorithm. When \((m, p)\) is fixed, the values of Gap.CPU and Gap.Iter grow as n grows. In contrast, the values of Gap\((\%)\).CPU and Gap\((\%)\).Iter decrease as n increases. On the whole, the self-adjustable branching rule in the branch-and-bound algorithm outperforms the bisection branching rule (the combination rule at \(\alpha =0\)), which may be because the former keeps updating the upper bound of LMP.

Table 4 Comparison results of SABB and BRBB algorithms for solving P
Table 5 Comparison results of SABB and CRBB algorithms for solving P

We further compare the performance of SABB algorithm and CRBB algorithms at \(\alpha =0.3, 0.5,\) 0.7 and 1.0, as shown in Table 5. When \(n\le 50\), the CPU time spent by CRBB algorithm at \(\alpha =1.0\) is less than that of SABB algorithm and those of the other CRBB algorithms. However, SABB algorithm outperforms CRBB algorithms when \(n> 100\), which suggests that the self-adjustable branching rule has a better effect on the algorithm than the combination rules for large-scale test instances.

5.3 Numerical Comparison of the Branching Direction

In this subsection, we test randomly generated instances of problem P to demonstrate the impact of the direction chosen by the self-adjustable branching rule on the algorithm.

Table 6 Comparison results of the branching direction for solving P

Table 6 shows that both SABB algorithm and SABB-L algorithm find the same optimal values for problem P when \((m,p)=(50,2)\). However, the CPU time cost by SABB algorithm is reduced by at least \(57.18\%\) compared to SABB-L algorithm, and the number of iterations required by SABB algorithm is significantly lower than that of SABB-L algorithm. Note that SABB algorithm terminates and returns the optimal solution when p takes the values of 3, 4 and 5, whereas SABB-L algorithm does not terminate and keeps running after obtaining the same optimal values. These results indicate that the way SABB algorithm chooses the branching direction yields higher computational efficiency than the general way of choosing directions.

5.4 Numerical Comparison of Algorithms

In this subsection, we compare SABB algorithm with the algorithms in [29, 32] for solving P.

As shown in Table 7, SABB algorithm and the algorithm in [32] find the same optimal values for all random test instances, whereas the CPU time cost by SABB algorithm is less than that of the algorithm in [32]. Moreover, the CPU time spent by SABB algorithm grows slowly with the increase of n compared to the other two algorithms. By contrast, the algorithm in [29] finds the global optimal values for only a few random instances, and it costs more CPU time than the other two algorithms. These results reveal that the self-adjustable branching rule helps to improve the efficiency of SABB algorithm.

Table 7 Comparison results of SABB algorithm and the algorithms in [29, 32] for solving P

6 Conclusions

In this paper, we investigate the linear multiplicative program (LMP), which has important applications in various domains such as financial optimization and robust optimization. Firstly, by employing appropriate variable substitution techniques, LMP is transformed into an equivalent problem (EP). Subsequently, EP is relaxed into a series of linear relaxation programs through the application of affine approximations. Then we propose a self-adjustable branch-and-bound algorithm by integrating the self-adjustable branching rule into the branch-and-bound framework. The proposed algorithm is proven to converge to the global optimal solution of the initial LMP. Additionally, we analyze the computational complexity of the algorithm. Finally, numerical results demonstrate its feasibility and high efficiency. A future research direction is to investigate whether there exist more effective relaxation methods, alternative branching rules or reduction strategies for addressing general linear multiplicative problems.