1 Introduction

Throughout the paper, we consider a fractional linear programming problem in the form

$$\begin{aligned} \min \ \frac{p^Tx+q_1^Ty+c}{p^Tx+q_2^Ty+d}, {\ \ \text{ s.t. }\ \ }(x,y)\in M(A,B,b), \end{aligned}$$
(1)

where M(A,B,b) is a convex polyhedral set described by linear constraints with constraint matrices A and B and right-hand-side vector b. This is a general model, since M(A,B,b) can be characterized by linear equations, inequalities, or both; in the subsequent sections, we will consider particular cases. We investigate the effects of independent and simultaneous variations of (possibly all) coefficients within prescribed interval domains on the optimal value. In particular, we are interested in determining the best and the worst optimal values.

State-of-the-art This problem is well studied in linear programming (Černý and Hladík 2016; Chinneck and Ramadan 2000; Fiedler et al. 2006; Hladík 2012, 2014; Mostafaee et al. 2016), but there are considerably fewer results for the nonlinear case. A general approach to solving interval-valued nonlinear programs was proposed in Hladík (2011), and particular quadratic problems were treated, e.g., in Li et al. (2015, 2016). A specific case of generalized fractional linear programming with variable coefficients was investigated in Hladík (2010), and the case of intervals in the objective function in Borza et al. (2012) and Effati and Pakdaman (2012). Some duality theorems in fractional programming with (not only) interval uncertainty were stated in Jeyakumar et al. (2013). The related problem of inverse fractional linear programming was studied, e.g., in Jain and Arya (2013). In a fuzzy environment, fractional linear programming was discussed by many authors, including Chinnadurai and Muthukumar (2016) and Sakawa et al. (2001), for instance.

Notation An interval matrix is defined as

$$\begin{aligned} {\varvec{A}}:=\{A\in {\mathbb {R}}^{m\times k}: \underline{A}\le A \le \overline{A}\}, \end{aligned}$$

where \( \underline{A},\overline{A}\in {\mathbb {R}}^{m\times k}\), \( \underline{A}\le \overline{A}\), are given lower and upper bounds, and the inequality is understood componentwise. The set of all m-by-k interval matrices is denoted by \( \mathbb {I}\mathbb {R}^{m\times k}\). Interval vectors and real intervals are viewed as special cases of interval matrices. For \({\varvec{A}}\in \mathbb {I}\mathbb {R}^{m\times k}\), \(z\in \{\pm 1\}^m\) and \(s\in \{\pm 1\}^k\), we define in accordance with Fiedler et al. (2006) the matrix \(A_{zs}\in {\varvec{A}}\) as follows

$$\begin{aligned} (A_{zs})_{ij}:= {\left\{ \begin{array}{ll} \underline{A}_{ij}&{}\text{ if } z_i=s_j,\\ \overline{A}_{ij}&{}\text{ if } z_i\not =s_j, \end{array}\right. } \end{aligned}$$

For \({\varvec{b}}\in \mathbb {I}\mathbb {R}^m\) and \(z\in \{\pm 1\}^m\), we define the vector \(b_{z}\in {\varvec{b}}\) as follows

$$\begin{aligned} (b_{z})_{i}:= {\left\{ \begin{array}{ll} \underline{b}_{i}&{}\text{ if } z_i=-1,\\ \overline{b}_{i}&{}\text{ if } z_i=1. \end{array}\right. } \end{aligned}$$

Eventually, \(e=(1,\dots ,1)^T\) denotes the vector of ones.
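The extremal matrices \(A_{zs}\) and vectors \(b_z\) are straightforward to construct componentwise; a minimal Python/NumPy sketch (the function names are ours, not from the paper):

```python
import numpy as np

def matrix_zs(A_lo, A_hi, z, s):
    """Build A_zs: (A_zs)_ij = lower bound if z_i == s_j, upper bound otherwise."""
    Z = np.equal.outer(np.asarray(z), np.asarray(s))  # True where z_i == s_j
    return np.where(Z, A_lo, A_hi)

def vector_z(b_lo, b_hi, z):
    """Build b_z: (b_z)_i = lower bound if z_i == -1, upper bound if z_i == 1."""
    z = np.asarray(z)
    return np.where(z == -1, b_lo, b_hi)
```

These two constructions are all that the enumeration formulas in the following sections require.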

An interval fractional linear programming problem Let interval matrices \({\varvec{A}}\in \mathbb {I}\mathbb {R}^{m\times k}\), \({\varvec{B}}\in \mathbb {I}\mathbb {R}^{m\times (n-k)}\) and interval vectors \({\varvec{b}}\in \mathbb {I}\mathbb {R}^m\), \({\varvec{p}}\in \mathbb {I}\mathbb {R}^k\), \(\varvec{q}_{\varvec{1}},\varvec{q}_{\varvec{2}}\in \mathbb {I}\mathbb {R}^{n-k},\) and interval scalars \({\varvec{c}},{\varvec{d}}\in \mathbb {I}\mathbb {R}\) be given. By an interval fractional linear program we understand the family of problems (1), where \(A\in {\varvec{A}}\), \(B\in {\varvec{B}}\), \(b\in {\varvec{b}}\), \(p\in {\varvec{p}}\), \(q_1\in \varvec{q}_{\varvec{1}}\), \(q_2\in \varvec{q}_{\varvec{2}}\), \(c\in {\varvec{c}}\), and \(d\in {\varvec{d}}\). For a concrete setting of interval parameters, (1) is called an instance of the interval program.

Notice that there is a common sub-vector p in the numerator and the denominator of the objective function in (1). This is important in the interval context, since most interval programming models assume independent interval parameters. Here, however, the double occurrence of p induces a correlation between the objective function coefficients, which makes the model more general and also a little more difficult. In general, handling dependencies between interval parameters is a hard and challenging problem. Notice that treating both occurrences of p as independent parameters makes the problem easier in the sense that we can rewrite it as a classical interval linear programming problem using the Charnes–Cooper transformation and employ the standard techniques from interval linear programming (Hladík 2012). On the other hand, this relaxation of the dependencies causes unnecessary overestimation. That is why we investigate the problem with dependencies in this paper.

The problem formulation Let us denote by

$$\begin{aligned} f(A,B,b,p,q_1,q_2,c,d):= \inf \left\{ \frac{p^Tx+q_1^Ty+c}{p^Tx+q_2^Ty+d}: (x,y)\in M(A,B,b)\right\} \end{aligned}$$

the optimal value of the fractional linear program (1); infinite values are allowed, too. The lower bound and the upper bound of the optimal value range \([\underline{f},\overline{f}]\) are defined naturally as

$$\begin{aligned} \underline{f}&:=\inf \ f(A,B,b,p,q_1,q_2,c,d) \\&\quad {\ \ \text{ s.t. }\ \ }A\in {\varvec{A}}, B\in {\varvec{B}},b\in {\varvec{b}},p\in {\varvec{p}},\ q_1\in \varvec{q}_{\varvec{1}}, q_2\in \varvec{q}_{\varvec{2}}, c\in {\varvec{c}}, d\in {\varvec{d}}, \\ \overline{f}&:=\sup \ f(A,B,b,p,q_1,q_2,c,d) \\&\quad {\ \ \text{ s.t. }\ \ }A\in {\varvec{A}}, B\in {\varvec{B}},b\in {\varvec{b}},p\in {\varvec{p}},\ q_1\in \varvec{q}_{\varvec{1}}, q_2\in \varvec{q}_{\varvec{2}}, c\in {\varvec{c}}, d\in {\varvec{d}}. \end{aligned}$$

The key problem studied in this paper is to compute the optimal value range \([\underline{f},\overline{f}]\).

Content In the following Sects. 2–4, we will consider two basic situations, when M(A,B,b) is described respectively by linear equations and by linear inequalities (with nonnegative variables). Section 5 is devoted to an application of the proposed techniques in revenue efficiency measurement.

2 Equality constrained feasible set

In this section, we suppose that the interval fractional linear program (1) has equality constraints

$$\begin{aligned} \min \ \frac{p^Tx+q_1^Ty+c}{p^Tx+q_2^Ty+d}, {\ \ \text{ s.t. }\ \ }Ax+By=b,\ x,y\ge 0. \end{aligned}$$
(2)

In order to derive our results, we have to state two natural assumptions on the interval program.

  1. (A1)

    For each instance of the interval parameters, the feasible set is nonempty.

  2. (A2)

    For each instance of the interval parameters and for each feasible solution \((x,y)\), the denominator satisfies \(p^Tx+q_2^Ty+d>0\).

The property from Assumption (A1) is called strong feasibility of the feasible set. It is known that checking this property is a co-NP-hard problem (Rohn 1998). Strong feasibility can be verified either by an exponential method (Fiedler et al. 2006; Rohn 1981), or we can utilize a sufficient condition from Hladík (2013).

Concerning Assumption (A2), we have the following result.

Proposition 1

Assumption (A2) holds true if and only if \(\underline{p}^Tx+\underline{q_2}^Ty+\underline{d}>0\) for each \((x,y)\) such that

$$\begin{aligned} \underline{A}x+\underline{B}y\le \overline{b},\ \overline{A}x+\overline{B}y \ge \underline{b},\quad x,y\ge 0. \end{aligned}$$
(3)

Proof

By Fiedler et al. (2006) and Hladík (2013), the system (3) describes the union of all feasible sets of (2) over all instances of interval parameters. From this the “only if” part directly follows. The “if” part follows from the fact that

$$\begin{aligned} \underline{p}^Tx+\underline{q_2}^Ty+\underline{d}\le p^Tx+q_2^Ty+d \end{aligned}$$

for each \(p\in {\varvec{p}}\), \(q_2\in \varvec{q}_{\varvec{2}}\) and \(d\in {\varvec{d}}\). \(\square \)

By the above proposition, Assumption (A2) is satisfied if and only if

$$\begin{aligned} 0<\min \underline{p}^Tx+\underline{q_2}^Ty+\underline{d} {\ \ \text{ s.t. }\ \ }\underline{A}x+\underline{B}y\le \overline{b},\ \overline{A}x+\overline{B}y \ge \underline{b},\ x,y\ge 0, \end{aligned}$$

which requires just solving one linear programming problem.
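The check above translates into a single call to an LP solver; a sketch using `scipy.optimize.linprog` (our own helper, not code from the paper):

```python
import numpy as np
from scipy.optimize import linprog

def check_A2(A_lo, A_hi, B_lo, B_hi, b_lo, b_hi, p_lo, q2_lo, d_lo):
    """Verify Assumption (A2): min p_lo^T x + q2_lo^T y + d_lo > 0 over the
    relaxed system  A_lo x + B_lo y <= b_hi,  A_hi x + B_hi y >= b_lo, x,y >= 0."""
    c = np.concatenate([p_lo, q2_lo])
    A_ub = np.vstack([np.hstack([A_lo, B_lo]),
                      -np.hstack([A_hi, B_hi])])   # flip >= b_lo into <= -b_lo
    b_ub = np.concatenate([b_hi, -b_lo])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
    # status 0 = optimum found; unbounded/infeasible relaxed systems report False
    return bool(res.status == 0 and res.fun + d_lo > 0)
```

If the minimum is unbounded below, the denominator cannot stay positive, so reporting failure is consistent; an infeasible relaxed system is a degenerate case that should be caught by the (A1) check.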

The following theorem gives explicit formulae for computing the bounds of the range of optimal values. Note that the dependency between the coefficients in the objective function makes the problem more difficult to deal with.

Theorem 1

We have

$$\begin{aligned} \underline{f}&=\inf \{f_1(\underline{p}),f_1(\overline{p}),f_2(\underline{p}),f_2(\overline{p})\}, \end{aligned}$$
(4)
$$\begin{aligned} \overline{f}&=\sup _{z\in \{\pm 1\}^m}\{f_3(z),f_4(z),f_5(z),f_6(z)\}, \end{aligned}$$
(5)

where

$$\begin{aligned} {f}_1(p)&=\inf \left\{ \frac{p^T x+\underline{q_1}^Ty+\underline{c}}{p^T x+\overline{q_2}^Ty+\overline{d}}:\ \underline{A}x+\underline{B}y\le \overline{b},\ \overline{A}x+\overline{B}y \ge \underline{b},\ x,y\ge 0\right\} ,\\ {f}_2(p)&=\inf \left\{ \frac{p^T x+\underline{q_1}^Ty+\underline{c}}{p^T x+\underline{q_2}^Ty+\underline{d}}:\ \underline{A}x+\underline{B}y\le \overline{b},\ \overline{A}x+\overline{B}y \ge \underline{b},\ x,y\ge 0\right\} , \end{aligned}$$

and

$$\begin{aligned} f_3(z)&= \sup \big \{ \theta :\ \underline{p}(\theta -1) +(A_{ze})^Tw\le 0,\ \underline{q_2}\theta +(B_{ze})^Tw\le \overline{q_1},\\&\qquad \qquad \underline{d}\theta -b_z^Tw\le \overline{c},\ \theta \ge 1,\ (\theta ,w \text{ free })\big \},\\ f_4(z)&= \sup \big \{ \theta :\ \overline{p}(\theta -1) +(A_{ze})^Tw\le 0,\ \underline{q_2}\theta +(B_{ze})^Tw\le \overline{q_1},\\&\qquad \qquad \underline{d}\theta -b_z^Tw\le \overline{c},\ \theta \le 1,\ (\theta ,w \text{ free })\big \},\\ f_5(z)&= \sup \big \{ \theta :\ \underline{p}(\theta -1) +(A_{ze})^Tw\le 0,\ \overline{q_2}\theta +(B_{ze})^Tw\le \overline{q_1},\\&\qquad \qquad \overline{d}\theta -b_z^Tw\le \overline{c},\ \theta \ge 1,\ (\theta ,w \text{ free })\big \},\\ f_6(z)&= \sup \big \{ \theta :\ \overline{p}(\theta -1) +(A_{ze})^Tw\le 0,\ \overline{q_2}\theta +(B_{ze})^Tw\le \overline{q_1},\\&\qquad \qquad \overline{d}\theta -b_z^Tw\le \overline{c},\ \theta \le 1,\ (\theta ,w \text{ free })\big \}. \end{aligned}$$

Proof

  1. (i)

    To show (4), we first suppose that an optimal solution lies in the half-space \(p^T x+\underline{q_1}^Ty \ge -\underline{c}\). Then for each \(q_1\in \varvec{q}_{\varvec{1}}\), \(q_2\in \varvec{q}_{\varvec{2}}\), \(c\in {\varvec{c}}\) and \(d\in {\varvec{d}}\) we have

    $$\begin{aligned} \frac{p^Tx+ q_1^Ty+ c}{p^T x+ q_2^Ty+ d}\ge \frac{p^Tx+\underline{q_1}^Ty+\underline{c}}{p^T x+\overline{q_2}^Ty+\overline{d}}. \end{aligned}$$

    Based on Theorem 3.2 in Fiedler et al. (2006), we get

    $$\begin{aligned} \underline{f}=\inf _{p\in {{\varvec{\scriptstyle p}}}}\, \inf \left\{ \frac{p^Tx+\underline{q_1}^Ty+\underline{c}}{p^T x+\overline{q_2}^Ty+\overline{d}}:\ \underline{A}x+\underline{B}y\le \overline{b},\ \overline{A}x+\overline{B}y \ge \underline{b},\ x, y\ge 0 \right\} . \end{aligned}$$
    (6)

    For each \(p\in {\varvec{p}}\) and \(x\ge 0\), we have \(\underline{p}^Tx\le p^Tx\le \overline{p}^Tx\). Since the objective function of (6) is monotone with respect to \(z:=p^Tx\), the infimum is attained either for \(z=\underline{p}^Tx\), or for \(z=\overline{p}^Tx\). This justifies the function \(f_1(p)\).

    If an optimal solution lies in the latter half-space \(p^T x+\underline{q_1}^Ty \le -\underline{c}\), we proceed analogously and obtain \(f_2(p)\).

  2. (ii)

    As in the previous part, to show (5) we consider two subcases. First, suppose that an optimal solution lies in the half-space \(p^T x+\overline{q_1}^Ty \ge -\overline{c}\). We derive

    $$\begin{aligned} \overline{f}&=\sup _{A\in {{\varvec{\scriptstyle A}}},\, B\in {{\varvec{\scriptstyle B}}},\, b\in {{\varvec{\scriptstyle b}}},\, p\in {{\varvec{\scriptstyle p}}},\, q_1\in {{\varvec{\scriptstyle q}}}_1,\,q_2\in {{\varvec{\scriptstyle q}}}_2,\, c\in {{\varvec{\scriptstyle c}}},\, d\in {{\varvec{\scriptstyle d}}}}\ \inf \left\{ \frac{p^Tx+q_1^Ty+c}{p^Tx+q_2^Ty+d}:\ Ax+By=b,\ x,y\ge 0\right\} \\&=\sup _{p\in {{\varvec{\scriptstyle p}}}}\ \sup _{A\in {{\varvec{\scriptstyle A}}},\, B\in {{\varvec{\scriptstyle B}}},\, b\in {{\varvec{\scriptstyle b}}}}\ \inf \left\{ \frac{p^Tx+\overline{q_1}^Ty+\overline{c}}{p^Tx+\underline{q_2}^Ty+\underline{d}}:\ Ax+By=b,\ x,y\ge 0\right\} . \end{aligned}$$

Now, we apply the Charnes and Cooper transformation (Charnes and Cooper 1962) to convert the fractional programming problem into an equivalent linear programming problem, and obtain

$$\begin{aligned}&\overline{f} =\sup _{p\in {{\varvec{\scriptstyle p}}}}\ \sup _{A\in {{\varvec{\scriptstyle A}}},\, B\in {{\varvec{\scriptstyle B}}},\, b\in {{\varvec{\scriptstyle b}}}}\ \inf \big \{ p^Tx+\overline{q_1}^Ty+\overline{c}f:\\&\begin{aligned}&p^Tx+\underline{q_2}^Ty+\underline{d}f=1,\\&Ax+By=fb,\ x,y,f\ge 0\big \}. \end{aligned} \end{aligned}$$

Based on Theorem 3.2 in Fiedler et al. (2006), we have

$$\begin{aligned} \overline{f} =&\sup _{p\in {{\varvec{\scriptstyle p}}}}\ \sup _{z\in \{\pm 1\}^m}\ \inf \big \{ p^Tx+\overline{q_1}^Ty+\overline{c}f: \\&\begin{aligned}&p^Tx+\underline{q_2}^Ty+\underline{d}f=1,\\&A_{ze}x+B_{ze}y=fb_z,\ x,y,f\ge 0\big \}. \end{aligned} \end{aligned}$$

This model is equivalent to

$$\begin{aligned} \overline{f} =&\sup _{z\in \{\pm 1\}^m}\ \sup _{p\in {{\varvec{\scriptstyle p}}}}\ \inf \big \{ p^Tx+\overline{q_1}^Ty+\overline{c}f: \\&\begin{aligned}&p^Tx+\underline{q_2}^Ty+\underline{d}f=1,\\&A_{ze}x+B_{ze}y=fb_z,\ x,y,f\ge 0\big \}. \end{aligned} \end{aligned}$$

The inner and outer programs optimize in opposite directions. So we replace the inner program by its dual (by Assumption (A1), the optimal values of the primal and dual problems coincide), and obtain

$$\begin{aligned} \overline{f} =&\sup _{z\in \{\pm 1\}^m}\ \sup _{p\in {{\varvec{\scriptstyle p}}}}\ \sup \big \{ \theta : \\&\begin{aligned}&p\theta +(A_{ze})^Tw\le p,\ \underline{q_2}\theta +(B_{ze})^Tw\le \overline{q_1},\\&\underline{d}\theta -b_z^Tw\le \overline{c},\ (\theta ,w \text{ free }).\big \}. \end{aligned} \end{aligned}$$

Equivalently, we have

$$\begin{aligned} \overline{f} =&\sup _{z\in \{\pm 1\}^m}\ \sup _{p\in {{\varvec{\scriptstyle p}}}}\ \sup \big \{ \theta : \\&\begin{aligned}&p(\theta -1) +(A_{ze})^Tw\le 0,\ \underline{q_2}\theta +(B_{ze})^Tw\le \overline{q_1},\\&\underline{d}\theta -b_z^Tw\le \overline{c},\ (\theta ,w \text{ free }).\big \}. \end{aligned} \end{aligned}$$

Now, we have to consider two sub-models according to the sign of \(\theta -1\). If we restrict to the case \(\theta \ge 1\), then from the theory of interval linear inequalities (Fiedler et al. 2006; Hladík 2013) we have

$$\begin{aligned}&\sup _{p\in {{\varvec{\scriptstyle p}}}}\ \sup \big \{ \theta :\ p(\theta -1) +(A_{ze})^Tw\le 0,\ \underline{q_2}\theta +(B_{ze})^Tw\le \overline{q_1},\\&\quad \underline{d}\theta -b_z^Tw\le \overline{c},\ \theta \ge 1,\ (\theta ,w \text{ free })\big \}\\&= \sup \big \{ \theta :\ \underline{p}(\theta -1) +(A_{ze})^Tw\le 0,\ \underline{q_2}\theta +(B_{ze})^Tw\le \overline{q_1},\\&\quad \underline{d}\theta -b_z^Tw\le \overline{c},\ \theta \ge 1,\ (\theta ,w \text{ free })\big \}. \end{aligned}$$

Similarly, under the restriction \(\theta \le 1\), we get

$$\begin{aligned}&\sup _{p\in {{\varvec{\scriptstyle p}}}}\ \sup \big \{ \theta :\ p(\theta -1) +(A_{ze})^Tw\le 0,\ \underline{q_2}\theta +(B_{ze})^Tw\le \overline{q_1},\\&\quad \underline{d}\theta -b_z^Tw\le \overline{c},\ \theta \le 1,\ (\theta ,w \text{ free })\big \}\\&= \sup \big \{ \theta :\ \overline{p}(\theta -1) +(A_{ze})^Tw\le 0,\ \underline{q_2}\theta +(B_{ze})^Tw\le \overline{q_1},\\&\quad \underline{d}\theta -b_z^Tw\le \overline{c},\ \theta \le 1,\ (\theta ,w \text{ free })\big \}. \end{aligned}$$

Combining both sub-models together, we obtain \(f_3(z)\) and \(f_4(z)\).

Provided an optimal solution lies in the half-space \(p^T x+\overline{q_1}^Ty \le -\overline{c}\), we analogously come across \(f_5(z)\) and \(f_6(z)\). \(\square \)

The above theorem says that \(\underline{f}\) can be determined by solving 4 fractional linear programming problems, whereas \(2^{m+2}\) linear programming problems must be solved to obtain \(\overline{f}\) according to (5). Since real-valued fractional linear programming is easily reduced to linear programming (Charnes and Cooper 1962), the computational cost is

  • \(4\times \)LP for \(\underline{f}\),

  • \(2^{m+2}\times \)LP for \(\overline{f}\),

where LP is the computational cost of linear programming. Thus the complexity of computing \(\overline{f}\) rises exponentially with respect to the number of equations. This is not surprising, as the problem of determining \(\overline{f}\) is NP-hard even for interval linear programming problems (Fiedler et al. 2006; Hladík 2012). Notice that even the exponential number of LP problems for \(\overline{f}\) need not be bad news in practice: we can effectively solve high-dimensional problems provided the number of equations m is moderate. We will see in the next section that inequality constrained problems are polynomially solvable.
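For completeness, the Charnes–Cooper step used repeatedly above can be sketched for a single real instance. A minimal Python/SciPy helper (our own naming, assuming the denominator is positive on the feasible set):

```python
import numpy as np
from scipy.optimize import linprog

def fractional_lp(c_num, c0, c_den, d0, A_ub, b_ub):
    """min (c_num^T u + c0)/(c_den^T u + d0)  s.t.  A_ub u <= b_ub, u >= 0,
    assuming the denominator is positive on the feasible set.
    Charnes-Cooper substitution: u = x/t with t > 0."""
    obj = np.concatenate([c_num, [c0]])                  # minimize c_num^T x + c0*t
    A = np.hstack([A_ub, -np.reshape(b_ub, (-1, 1))])    # A_ub x - b_ub*t <= 0
    A_eq = np.concatenate([c_den, [d0]]).reshape(1, -1)  # c_den^T x + d0*t = 1
    res = linprog(obj, A_ub=A, b_ub=np.zeros(A.shape[0]),
                  A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
    t = res.x[-1]
    return res.fun, res.x[:-1] / t   # optimal value and recovered optimizer u
```

For example, \(\min\,(x+1)/(x+2)\) subject to \(x\le 3\), \(x\ge 0\) has the optimal value \(1/2\) at \(x=0\), which the helper recovers.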

3 Inequality constrained feasible set

Herein, we suppose that the interval fractional linear program (1) has inequality constraints

$$\begin{aligned} \min \ \frac{p^Tx+q_1^Ty+c}{p^Tx+q_2^Ty+d}, {\ \ \text{ s.t. }\ \ }Ax+By\le b,\ x,y\ge 0. \end{aligned}$$
(7)

In the real case, (7) can be transformed to the equality-constrained form (2). In the interval-valued setting, however, such a transformation cannot be easily performed, since it introduces dependencies between interval parameters. This situation is well known in interval linear programming; see Hladík (2012).

Again, we have to check whether Assumptions (A1) and (A2) are satisfied. It is easy to see (cf. Fiedler et al. 2006; Hladík 2013) that the feasible set is nonempty for each instance of interval parameters if and only if the system

$$\begin{aligned} \overline{A}x+\overline{B}y\le \underline{b},\quad x,y\ge 0 \end{aligned}$$

is feasible. Hence, verifying Assumption (A1) costs just solving one linear program.
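In code, this strong feasibility test is one LP with a zero objective; a sketch with `scipy.optimize.linprog` (our helper, not code from the paper):

```python
import numpy as np
from scipy.optimize import linprog

def strongly_feasible_ineq(A_hi, B_hi, b_lo):
    """Strong feasibility of {Ax + By <= b, x,y >= 0} over all instances:
    equivalent to feasibility of the tightest system A_hi x + B_hi y <= b_lo."""
    n = A_hi.shape[1] + B_hi.shape[1]
    res = linprog(np.zeros(n), A_ub=np.hstack([A_hi, B_hi]), b_ub=b_lo,
                  bounds=(0, None))
    return res.status == 0   # status 0 = feasible point found, 2 = infeasible
```

Note the contrast with the equality-constrained case of Sect. 2, where strong feasibility is co-NP-hard.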

For checking Assumption (A2), we have the following analogy of Proposition 1.

Proposition 2

Assumption (A2) holds true if and only if \(\underline{p}^Tx+\underline{q_2}^Ty+\underline{d}>0\) for each \((x,y)\) such that

$$\begin{aligned} \underline{A}x+\underline{B}y\le \overline{b},\quad x,y\ge 0. \end{aligned}$$
(8)

Proof

By Fiedler et al. (2006) and Hladík (2013), the system (8) describes the union of all feasible sets of (7) over all instances of interval parameters. The rest of the proof is the same as in the proof of Proposition 1. \(\square \)

The following theorem gives explicit formulae for computing the bounds of the range of optimal values. In contrast to the previous case, inequality constrained problems are easier to deal with. Only four real fractional linear programs are needed to compute each of the end-points of the optimal value range \([\underline{f},\overline{f}]\).

Theorem 2

We have

$$\begin{aligned} \underline{f}&=\inf \{f_1(\underline{p}),f_1(\overline{p}),f_2(\underline{p}),f_2(\overline{p})\}, \end{aligned}$$
(9)
$$\begin{aligned} \overline{f}&=\sup \{f_3(\underline{p}),f_3(\overline{p}),f_4(\underline{p}),f_4(\overline{p})\}, \end{aligned}$$
(10)

where

$$\begin{aligned} {f}_1(p)&=\inf \left\{ \frac{p^T x+\underline{q_1}^Ty+\underline{c}}{p^T x+\overline{q_2}^Ty+\overline{d}}:\ \underline{A}x+\underline{B}y\le \overline{b},\ x,y\ge 0\right\} ,\\ {f}_2(p)&=\inf \left\{ \frac{p^T x+\underline{q_1}^Ty+\underline{c}}{p^T x+\underline{q_2}^Ty+\underline{d}}:\ \underline{A}x+\underline{B}y\le \overline{b},\ x,y\ge 0\right\} , \end{aligned}$$

and

$$\begin{aligned} {f}_3(p)&=\inf \left\{ \frac{p^T x+\overline{q_1}^Ty+\overline{c}}{p^T x+\underline{q_2}^Ty+\underline{d}}:\ \overline{A}x+\overline{B}y\le \underline{b},\ x,y\ge 0\right\} ,\\ {f}_4(p)&=\inf \left\{ \frac{p^T x+\overline{q_1}^Ty+\overline{c}}{p^T x+\overline{q_2}^Ty+\overline{d}}:\ \overline{A}x+\overline{B}y\le \underline{b},\ x,y\ge 0\right\} . \end{aligned}$$

Proof

  1. (i)

    Suppose first that an optimal solution lies in the half-space \(p^T x+\underline{q_1}^Ty \ge -\underline{c}\). Then for each \(q_1\in \varvec{q}_{\varvec{1}}\), \(q_2\in \varvec{q}_{\varvec{2}}\), \(c\in {\varvec{c}}\) and \(d\in {\varvec{d}}\) we have

    $$\begin{aligned} \frac{p^Tx+ q_1^Ty+ c}{p^T x+ q_2^Ty+ d}\ge \frac{p^Tx+\underline{q_1}^Ty+\underline{c}}{p^T x+\overline{q_2}^Ty+\overline{d}}. \end{aligned}$$

    From Fiedler et al. (2006) and Hladík (2013), we get

    $$\begin{aligned} \underline{f}=\inf _{p\in {{\varvec{\scriptstyle p}}}}\, \inf \left\{ \frac{p^Tx+\underline{q_1}^Ty+\underline{c}}{p^T x+\overline{q_2}^Ty+\overline{d}}:\ \underline{A}x+\underline{B}y\le \overline{b},\ x, y\ge 0 \right\} . \end{aligned}$$
    (11)

    For each \(p\in {\varvec{p}}\) and \(x\ge 0\), we have \(\underline{p}^Tx\le p^Tx\le \overline{p}^Tx\). Since the objective function of (11) is monotone with respect to \(z:=p^Tx\), the infimum is attained either for \(z=\underline{p}^Tx\), or for \(z=\overline{p}^Tx\). Hence we come across \(f_1(p)\).

    If an optimal solution lies in the latter half-space \(p^T x+\underline{q_1}^Ty \le -\underline{c}\), we analogously obtain \(f_2(p)\).

  2. (ii)

    First, suppose that an optimal solution lies in the half-space \(p^T x+\overline{q_1}^Ty \ge -\overline{c}\). By Fiedler et al. (2006) and Hladík (2013), the intersection of all feasible sets of (7) over all instances of interval parameters is given by the instance

    $$\begin{aligned} \overline{A}x+\overline{B}y\le \underline{b},\quad x,y\ge 0. \end{aligned}$$

    Thus, we have

    $$\begin{aligned} \overline{f}=\sup _{p\in {{\varvec{\scriptstyle p}}}}\, \inf \left\{ \frac{p^Tx+\overline{q_1}^Ty+\overline{c}}{p^Tx+\underline{q_2}^Ty+\underline{d}}:\ \overline{A}x+\overline{B}y\le \underline{b},\ x, y\ge 0 \right\} . \end{aligned}$$
    (12)

    By the monotonicity of the objective function of (12) with respect to \(z:=p^Tx\), we have that the supremum is attained either for \(z=\underline{p}^Tx\), or for \(z=\overline{p}^Tx\). Hence we get \(f_3(p)\). Similarly, we obtain \(f_4(p)\) provided an optimal solution lies in the opposite half-space \(p^T x+\overline{q_1}^Ty \le -\overline{c}\).

\(\square \)

For the same reasons as in the previous section, the computational cost of computing the extremal optimal values is

  • \(4\times \)LP for \(\underline{f}\),

  • \(4\times \)LP for \(\overline{f}\),

where LP is a computational cost for linear programming.
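Theorem 2 thus reduces the whole range computation to eight small fractional programs, each reducible to an LP by the Charnes–Cooper transformation. A self-contained sketch under our own naming conventions (not code from the paper; assumes Assumptions (A1) and (A2) hold):

```python
import numpy as np
from scipy.optimize import linprog

def _frac_min(c_num, c0, c_den, d0, A_ub, b_ub):
    # Charnes-Cooper: min (c_num^T u + c0)/(c_den^T u + d0),
    # s.t. A_ub u <= b_ub, u >= 0, assuming a positive denominator.
    obj = np.concatenate([c_num, [c0]])
    A = np.hstack([A_ub, -np.reshape(b_ub, (-1, 1))])
    A_eq = np.concatenate([c_den, [d0]]).reshape(1, -1)
    res = linprog(obj, A_ub=A, b_ub=np.zeros(A.shape[0]),
                  A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
    return res.fun

def value_range(Alo, Ahi, Blo, Bhi, blo, bhi,
                plo, phi, q1lo, q1hi, q2lo, q2hi, clo, chi, dlo, dhi):
    """Optimal value range [f_lo, f_hi] of the inequality constrained interval
    fractional LP (7), following the eight programs of Theorem 2."""
    A_wide = np.hstack([Alo, Blo]); b_wide = bhi     # widest feasible set
    A_tight = np.hstack([Ahi, Bhi]); b_tight = blo   # tightest feasible set
    f_lo = min(_frac_min(np.concatenate([p, q1lo]), clo,
                         np.concatenate([p, qd]), dd, A_wide, b_wide)
               for p in (plo, phi)
               for qd, dd in ((q2hi, dhi), (q2lo, dlo)))
    f_hi = max(_frac_min(np.concatenate([p, q1hi]), chi,
                         np.concatenate([p, qd]), dd, A_tight, b_tight)
               for p in (plo, phi)
               for qd, dd in ((q2lo, dlo), (q2hi, dhi)))
    return f_lo, f_hi
```

With degenerate (point) intervals the two bounds collapse to the plain optimal value, which gives an easy sanity check.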

4 Extension

In this section, we consider a fractional linear programming problem in the following form:

$$\begin{aligned} \min \ \frac{p^Tx+q_1^Ty+c}{q_2^Tx+p^Ty+d}, {\ \ \text{ s.t. }\ \ }(x,y)\in M(A,B,b), \end{aligned}$$
(13)

Notice that the common sub-vector p now appears in different terms of the numerator and the denominator, in contrast to (1). Here, we again denote by

$$\begin{aligned} f(A,B,b,p,q_1,q_2,c,d):= \inf \left\{ \frac{p^Tx+q_1^Ty+c}{q_2^Tx+p^Ty+d}: (x,y)\in M(A,B,b)\right\} \end{aligned}$$

the optimal value of the fractional linear program (13). Let \({\varvec{A}},{\varvec{B}}\in \mathbb {I}\mathbb {R}^{m\times k}\), \({\varvec{b}}\in \mathbb {I}\mathbb {R}^m\), \({\varvec{p}},\varvec{q}_{\varvec{1}},\varvec{q}_{\varvec{2}}\in \mathbb {I}\mathbb {R}^k\), \({\varvec{c}},{\varvec{d}}\in \mathbb {I}\mathbb {R}\) be given, and consider the fractional linear program (13) with input entries varying inside prescribed intervals. The following theorem gives explicit formulae for computing the bounds of the range of optimal values when the constraints are in the inequality form

$$\begin{aligned} \min \ \frac{p^Tx+q_1^Ty+c}{q_2^Tx+p^Ty+d}, {\ \ \text{ s.t. }\ \ }Ax+By\le b,\ x,y\ge 0. \end{aligned}$$

In contrast to the fractional linear program (7), this case is more difficult to deal with: we have to solve \(2^{k+1}\) fractional linear programs to obtain the lower bound of the optimal value range. Notice that the results can be extended to equality constraints in a straightforward way.

Theorem 3

We have

$$\begin{aligned} \underline{f}&=\inf _{z\in \{\pm 1\}^k}\{f_1(p_z),f_2(p_z)\}, \end{aligned}$$
(14)
$$\begin{aligned} \overline{f}&=\sup \{f_3,f_4\}, \end{aligned}$$
(15)

where

$$\begin{aligned} {f}_1(p)&=\inf \left\{ \frac{p^T x+\underline{q_1}^Ty+\underline{c}}{\overline{q_2}^T x+ p^Ty+\overline{d}}:\ \underline{A}x+\underline{B}y\le \overline{b},\ x,y\ge 0\right\} ,\\ {f}_2(p)&=\inf \left\{ \frac{p^T x+\underline{q_1}^Ty+\underline{c}}{\underline{q_2}^T x+ p^Ty+\underline{d}}:\ \underline{A}x+\underline{B}y\le \overline{b},\ x,y\ge 0\right\} , \end{aligned}$$

and

$$\begin{aligned} f_3&= \sup \big \{ \theta +\underline{b}^T w:\ \overline{A}^Tw+\underline{q_2}\theta \le p,\ \overline{B}^Tw+p\theta \le \overline{q_1},\ \underline{d}\theta \le \overline{c},\\&\qquad \qquad \ w\le 0,\ p \in {\varvec{p}},\ (\theta \text{ free })\big \},\\ f_4&= \sup \big \{ \theta +\underline{b}^T w:\ \overline{A}^Tw+\overline{q_2}\theta \le p,\ \overline{B}^Tw+p\theta \le \overline{q_1},\ \overline{d}\theta \le \overline{c},\\&\qquad \qquad \ w\le 0,\ p \in {\varvec{p}},\ (\theta \text{ free })\big \}. \end{aligned}$$

Proof

  1. (i)

    First assume that an optimal solution is located in the half-space \(p^T x+\underline{q_1}^Ty \ge -\underline{c}\). In a similar way to the proof of part (i) of Theorem 1, we obtain

    $$\begin{aligned} \underline{f}=\inf _{p\in {{\varvec{\scriptstyle p}}}}\, \inf \left\{ \frac{p^Tx+\underline{q_1}^Ty+\underline{c}}{\overline{q_2}^T x+p^Ty+\overline{d}}:\ \underline{A}x+\underline{B}y\le \overline{b},\ \ x, y\ge 0 \right\} . \end{aligned}$$
    (16)

    The objective function of (16) can be written as follows:

    $$\begin{aligned} \frac{p^Tx+\underline{q_1}^T y+\underline{c}}{\overline{q_2}^T x+p^Ty+\overline{d}}=\frac{p_ix_i+\alpha }{p_iy_i+\beta } \end{aligned}$$

    where \(\alpha =\sum _{j\ne i}p_jx_j+\underline{q_1}^T y+\underline{c}\) and \(\beta =\sum _{j\ne i}p_jy_j+\overline{q_2}^T x+\overline{d}.\) Since \(\displaystyle \frac{p_ix_i+\alpha }{p_iy_i+\beta }\) is monotone with respect to \(p_i,\) the minimum is attained at either \(\underline{p}_i\) or \(\overline{p}_i.\) Repeating this argument for every component fixes each \(p_i\) at one of its endpoints, which leads to the functions \(f_1(p_z)\), \(z\in \{\pm 1\}^k.\)

    If an optimal solution lies in the latter half-space \(p^T x+\underline{q_1}^Ty \le -\underline{c}\), we analogously obtain \(f_2(p_z);z\in \{\pm 1\}^k\).

  2. (ii)

    First, suppose that an optimal solution lies in the half-space \(p^T x+\overline{q_1}^Ty \ge -\overline{c}\). By Fiedler et al. (2006) and Hladík (2013), the intersection of all feasible sets over all instances of interval parameters is given by the instance

    $$\begin{aligned} \overline{A}x+\overline{B}y\le \underline{b},\ x,y\ge 0. \end{aligned}$$

Thus, we have

$$\begin{aligned} \overline{f}=\sup _{p\in {{\varvec{\scriptstyle p}}}}\, \inf \left\{ \frac{p^Tx+\overline{q_1}^Ty+\overline{c}}{\underline{q_2}^Tx+p^Ty+\underline{d}}:\ \overline{A}x+\overline{B}y\le \underline{b},\ x, y\ge 0 \right\} . \end{aligned}$$

Now, we apply the Charnes and Cooper transformation (Charnes and Cooper 1962) to convert the fractional programming problem into an equivalent linear programming problem, and obtain

$$\begin{aligned} \overline{f} =&\sup _{p\in {{\varvec{\scriptstyle p}}}}\ \inf \big \{ p^Tx+\overline{q_1}^Ty+\overline{c}f: \\&\begin{aligned}&\underline{q_2}^Tx+p^Ty+\underline{d}f=1,\\&\overline{A}x+\overline{B}y\le \underline{b},\ x, y,f\ge 0 \big \}. \end{aligned} \end{aligned}$$

The inner program and the outer program have opposite directions for optimization. So we write the dual problem of the inner program instead, and get

$$\begin{aligned} \overline{f} =&\sup _{p\in {{\varvec{\scriptstyle p}}}}\ \sup \big \{ \theta + \underline{b}^T w: \\&\begin{aligned}&\overline{A}^Tw+ \underline{q_2}\theta \le p,\\&\overline{B}^Tw+p\theta \le \overline{q_1},\\&\underline{d}\theta \le \overline{c},\ w\le 0,\ (\theta \text{ free })\big \}. \end{aligned} \end{aligned}$$

Since the inner program and the outer program have the same objective for optimization, we can combine the constraints of two programs and obtain the following one-level nonlinear program:

$$\begin{aligned} \overline{f} =&\ \sup \big \{\theta + \underline{b}^T w: \\&\begin{aligned}&\overline{A}^Tw+ \underline{q_2}\theta \le p,\\&\overline{B}^Tw+p\theta \le \overline{q_1},\\&\underline{d}\theta \le \overline{c},\\&p\in {\varvec{p}},\ w\le 0,\ (\theta \text{ free })\big \}. \end{aligned} \end{aligned}$$

The above model is nonlinear due to the bilinear term \(p\theta \), a product of the decision variables p and \(\theta \).

Similarly, we obtain \(f_4\) provided an optimal solution lies in the latter half-space \(p^T x+\overline{q_1}^Ty \le -\overline{c}\). \(\square \)

By the above formulae, the computational cost of computing the extremal optimal values is

  • \(2^{k+1}\times \)LP for \(\underline{f}\),

  • \(2\times \)LP for \(\overline{f}\),

where LP is the computational cost of linear programming. Thus the computation of \(\underline{f}\) is exponential with respect to the number of variables, but not with respect to the number of constraints. This intractability motivates further research on effective approximations of \(\underline{f}\).

5 Empirical illustration of revenue efficiency measures: assessment of hospital activity

This section shows that the presented approach can be used to compute the bounds of economic efficiency ranges in Data Envelopment Analysis (DEA). Interval DEA was investigated by many authors, including Entani and Tanaka (2006), Hatami-Marbini et al. (2014), He et al. (2016), Inuiguchi and Mizoshita (2012), Jablonsky et al. (2004), Khalili-Damghani et al. (2015) and Shwartz et al. (2016) among others.

Data envelopment analysis is a nonparametric approach to evaluate the efficiency measure of a set of Decision Making Units (DMUs) that consume multiple inputs to produce multiple outputs. Revenue efficiency (RE), as a DEA model, evaluates the ability of a DMU to achieve the maximal revenue at its current inputs. The concept of RE followed from the seminal contribution of Farrell (1957), who originated many of the ideas underlying efficiency measurement. The RE measure is obtained by dividing the actual observed revenue by the maximal attainable revenue (see, for instance, Camanho and Dyson 2005; Fang and Li 2013; Jahanshahloo et al. 2007a, b, 2008; Kuosmanen and Post 2001, 2003; Mostafaee 2011; Mostafaee and Saljooghi 2010).

Suppose that we have a set of n DMUs consisting of \(DMU_j\), \(j=1,2,\ldots ,n\), with input-output vectors \(x_j,y_j,\) in which

$$\begin{aligned} x_j:=(x_{1j}, x_{2j},\ldots , x_{mj})^T,\quad y_j:=(y_{1j}, y_{2j},\ldots , y_{sj})^T. \end{aligned}$$

Define \(X=[x_1,x_2,\ldots , x_n]\) and \(Y=[y_1,y_2,\ldots ,y_n]\) as the \(m\times n\) and \(s\times n\) matrices of inputs and outputs, respectively. Given exact output prices \(p=(p_1,p_2,\ldots , p_s),\) the RE measure of \(DMU_o\) can be computed by solving the following linear programming problem

$$\begin{aligned} \max _{y,\lambda } \frac{p^Ty}{p^Ty_o} {\ \ \text{ s.t. }\ \ }X\lambda \le x_o,\ Y\lambda \ge y,\ e^T \lambda =1,\ \lambda \ge 0. \end{aligned}$$
(17)

Let \((\lambda ^*, y^*)\) be an optimal solution. Then, the RE measure of \(DMU_o\) is defined as follows:

$$\begin{aligned} RE_o=\displaystyle \frac{p^Ty_o}{p^Ty^*} \end{aligned}$$

It is not difficult to show that \(0< RE_o\le 1.\)
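For crisp prices, (17) is a single LP in the variables \((y,\lambda )\), since \(p^Ty_o\) is a constant. A sketch with `scipy.optimize.linprog` (our helper; we additionally assume nonnegative outputs, so \(y\ge 0\)):

```python
import numpy as np
from scipy.optimize import linprog

def revenue_efficiency(X, Y, p, o):
    """RE of DMU o: solve max p^T y  s.t.  X lam <= x_o, Y lam >= y,
    e^T lam = 1, lam >= 0 (and y >= 0), then return p^T y_o / optimum."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.concatenate([-p, np.zeros(n)])        # variables (y, lam); max -> min
    A_ub = np.vstack([np.hstack([np.zeros((m, s)), X]),   # X lam <= x_o
                      np.hstack([np.eye(s), -Y])])        # y - Y lam <= 0
    b_ub = np.concatenate([X[:, o], np.zeros(s)])
    A_eq = np.concatenate([np.zeros(s), np.ones(n)]).reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=(0, None))
    return float(p @ Y[:, o] / -res.fun)         # -res.fun = maximal revenue
```

For instance, with one input, one output, and three DMUs whose data make the third unit dominated, the helper returns an RE below 1 for that unit and 1 for an efficient one.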

Definition 1

\(DMU_o=(x_o,y_o)\) is called revenue efficient iff \(RE_o=1,\) and revenue inefficient otherwise.

As we can observe, the RE measure of DMUs can be determined when the output prices are exactly known. But in many practical applications, the prices cannot be estimated accurately enough to make good use of economic efficiency concepts, and only the lower and upper bounds of the prices can be estimated. When the data of the price vector are uncertain and can be expressed in the form of ranges, the RE measure calculated from the uncertain data should be uncertain, as well. The bounds typically give a better approximation of true RE measures than a crisp value does.

Assume that the output price vector lies within the bounded interval vector \({\varvec{p}}=[\underline{p}, \overline{p}].\) The lower and upper bounds of the RE measure of \(DMU_o\), \([\underline{RE}_o,\overline{RE}_o]\), are defined naturally as:

$$\begin{aligned} \frac{1}{\overline{RE}_o}&=\min _{p\in {{\varvec{\scriptstyle p}}}}\max _{y,\lambda } \frac{p^Ty}{p^Ty_o} {\ \ \text{ s.t. }\ \ }X\lambda \le x_o,\ Y\lambda \ge y,\ e^T \lambda =1,\ \lambda \ge 0, \end{aligned}$$
(18)
$$\begin{aligned} \frac{1}{\underline{RE}_o}&=\max _{p \in {{\varvec{\scriptstyle p}}}}\max _{y,\lambda } \frac{p^Ty}{p^Ty_o} {\ \ \text{ s.t. }\ \ }X\lambda \le x_o,\ Y\lambda \ge y,\ e^T \lambda =1,\ \lambda \ge 0. \end{aligned}$$
(19)

Even though this problem is in essence an interval linear program, we cannot directly use techniques from interval LP since there are dependencies caused by the double occurrence of the interval vector \({\varvec{p}}\). In principle, Theorem 3 can be used to solve (18) and (19), but the specific structure of this problem enables us to derive stronger results. The following theorem gives an explicit formula for computing the bounds of the RE measures.

Theorem 4

We have

$$\begin{aligned} \frac{1}{\overline{RE}_o}&=\min _{\tilde{p}, v\ge 0, \eta \ge 0, u_0} x_o^Tv-u_0 \nonumber \\&\qquad {\ \ \text{ s.t. }\ \ }\tilde{p}^Ty_o=1,\ X^Tv-eu_0 \ge Y^T\tilde{p},\ \eta \underline{p} \le \tilde{p} \le \eta \overline{p}. \end{aligned}$$
(20)
$$\begin{aligned} \frac{1}{\underline{RE}_o}&=\max _{z\in \{\pm 1\}^s}\max _{y,\lambda } \frac{p_z^Ty}{p_z^Ty_o}\nonumber \\&\qquad {\ \ \text{ s.t. }\ \ }X\lambda \le x_o,\ Y\lambda \ge y,\ e^T \lambda =1,\ \lambda \ge 0. \end{aligned}$$
(21)

Proof

First we prove the upper bound formula. It is not difficult to see that constraint \(Y\lambda \ge y\) in (18) holds as equality at optimality, i.e., \(Y\lambda =y.\) Therefore, we have

$$\begin{aligned} \frac{1}{\overline{RE}_o} =\min _{p\in {{\varvec{\scriptstyle p}}}}\max _{\lambda } \frac{p^TY\lambda }{p^Ty_o} {\ \ \text{ s.t. }\ \ }X\lambda \le x_o,\ e^T \lambda =1,\ \lambda \ge 0. \end{aligned}$$
(22)

The inner and outer programs of (22) have opposite optimization directions, so we replace the inner program by its dual. Notice that Assumption (A1) holds true, and as a result the optimal values of the primal and dual problems coincide. Thus we obtain

$$\begin{aligned} \frac{1}{\overline{RE}_o} =\min _{p\in {{\varvec{\scriptstyle p}}}}\min _{v,u_0} x_o^Tv-u_0 {\ \ \text{ s.t. }\ \ }X^Tv-eu_0\ge \frac{Y^Tp}{p^Ty_o},\ v \ge 0,\ (u_0 \text{ free }). \end{aligned}$$
(23)

Since the inner and outer programs of (23) have the same optimization direction (minimization), the constraints of the two programs can be combined, and we obtain the one-level program

$$\begin{aligned} \frac{1}{\overline{RE}_o} =\min _{v,u_0} x_o^Tv-u_0 {\ \ \text{ s.t. }\ \ }X^Tv-eu_0\ge \frac{Y^Tp}{p^Ty_o},\ v \ge 0,\ \underline{p}\le p \le \overline{p},\ (u_0 \text{ free }). \end{aligned}$$
(24)

We set \(\eta =\displaystyle \frac{1}{p^Ty_o}>0,\) which implies \(\eta p^T y_o=1.\) By the change of variables \(\tilde{p}=\eta p,\) the constraints of (24) are transformed into the following constraints

$$\begin{aligned} \underline{p}\le p \le \overline{p}\ \Longleftrightarrow \ \eta \underline{p}\le \eta p \le \eta \overline{p}\ \Longleftrightarrow \ \eta \underline{p}\le \tilde{p} \le \eta \overline{p} \end{aligned}$$
(25)

and

$$\begin{aligned} X^Tv-eu_0\ge \frac{Y^Tp}{p^Ty_o} ,\ \underline{p}\le p \le \overline{p}\ \Longleftrightarrow \ X^Tv-eu_0\ge Y^T\tilde{p},\ \tilde{p}^Ty_o=1,\ \eta \underline{p} \le \tilde{p} \le \eta \overline{p}. \end{aligned}$$
(26)

Now, combining (23)–(26), models (18) and (20) are equivalent, which completes the proof of the first part.

The proof of the second part is similar to that of Theorem 3, and is hence omitted. \(\square \)
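The change of variables used in the proof can be checked numerically: for any price \(p\) in the box and \(\eta = 1/(p^Ty_o)\), the scaled vector \(\tilde{p}=\eta p\) satisfies \(\tilde{p}^Ty_o=1\) together with the scaled box constraints. A minimal sketch with invented numbers:

```python
# Invented numbers for a quick sanity check of the substitution p~ = eta * p.
p_lo, p_hi = [0.5, 0.5], [1.5, 2.0]    # hypothetical price box
p   = [1.0, 1.5]                       # a price inside the box
y_o = [1.0, 2.0]                       # outputs of the evaluated DMU

dot = lambda a, b: sum(ai * bi for ai, bi in zip(a, b))

eta = 1.0 / dot(p, y_o)                # eta = 1 / (p^T y_o) > 0
p_t = [eta * pi for pi in p]           # the substituted variable p~

assert abs(dot(p_t, y_o) - 1.0) < 1e-12            # p~^T y_o = 1
assert all(eta * lo <= v <= eta * hi               # eta*p_lo <= p~ <= eta*p_hi
           for v, lo, hi in zip(p_t, p_lo, p_hi))
print("substitution consistent")
```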

Notice that, due to the dependency of the output price vector (the common vector \(p\)) in the numerator and the denominator of the RE model, the upper bound of RE is obtained by solving a single linear program, which is easy to do with LP software. In contrast, \(2^s\) linear programs have to be solved to compute the lower bound of the RE measures.
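The enumeration behind the lower-bound formula (21) can be sketched as follows, on a small hypothetical instance (all data invented; the convention assumed here is that \(z_i=+1\) picks \(\overline{p}_i\) and \(z_i=-1\) picks \(\underline{p}_i\), following the notation recalled in the introduction). To keep the sketch solver-free, the instance is chosen so that the input constraint is non-binding and each inner LP reduces to a maximum over single DMUs; in general each of the \(2^s\) subproblems is a proper LP.

```python
from itertools import product

# Hypothetical toy instance: two outputs, three DMUs (illustrative data only).
Y = [[1.0, 3.0, 2.0],
     [2.0, 1.0, 2.0]]
p_lo, p_hi = [0.5, 0.5], [1.5, 2.0]    # interval output prices
o = 0                                  # evaluate DMU_1 (index 0)

def revenue(p, y):
    return sum(pi * yi for pi, yi in zip(p, y))

y_cols = list(zip(*Y))                 # y_cols[j] = output vector of DMU_j

# Enumerate the 2^s price vertices p_z; keep the worst-case RE ratio.
RE_lo = 1.0
for z in product((-1, 1), repeat=len(p_lo)):
    pz = [hi if zi > 0 else lo for zi, lo, hi in zip(z, p_lo, p_hi)]
    ratio = revenue(pz, y_cols[o]) / max(revenue(pz, y) for y in y_cols)
    RE_lo = min(RE_lo, ratio)
print(RE_lo)                           # worst-case RE over all price vertices
```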

Example 1

In order to illustrate the ability of the proposed approach, we utilized a data set of 24 hospitals in the city of Tehran, selected by random sampling. The study is cross-sectional; the data were collected from library and information resources, doctoral dissertations, direct inquiries at the hospitals, and the University's center for statistics. We use eight variables to form the data set of inputs and outputs. The four inputs are the number of hospital beds (NB), the number of general practitioners and specialists (NEGM), the number of nurses (NN), and the number of other personnel (NOP); the four outputs are the number of ambulatory patients per year (IP), the number of inpatients per year (OP), the number of surgeries (SRG), and the average number of occupied beds per month (ANOB). The data are displayed in Table 1.

Now, we illustrate RE measurement in the situation where the exact output prices at the hospitals were not available. The hospitals were assessed with the information on the maximal and minimal price bounds observed in their own hospitals. The prices for outpatients, inpatients, surgeries, and the average number of occupied beds lie in the intervals [1.25, 1.80], [1.25, 1.80], [100, 600] and [20, 25], respectively. Note that the price of each output item is expressed in dollars per unit. The efficiency assessment requires the calculation of the upper and lower bounds of the RE measure; these bounds correspond to the Optimistic RE and Pessimistic RE measures, respectively. Table 2 reports the Optimistic RE and Pessimistic RE measures for all hospitals analysed. These estimates were obtained by solving models (20) and (21), respectively.

Table 1 The input and output data of 24 hospitals
Table 2 Results of RE measurements

As can be seen, there are 16 extreme points for output prices:

$$\begin{aligned}&(1.25,1.25,100,20)^T,\ (1.25,1.25,100,25)^T,\ (1.25,1.25,600,20)^T,\\&(1.25,1.25,600,25)^T,\ (1.25,1.8,100,20)^T,\ (1.25,1.8,100,25)^T,\\&(1.25,1.8,600,20)^T,\ (1.25,1.8,600,25)^T,\ (1.8,1.25,100,20)^T\\&(1.8,1.25,100,25)^T,\ (1.8,1.25,600,20)^T,\ (1.8,1.25,600,25)^T,\\&(1.8,1.8,100,20)^T,\ (1.8,1.8,100,25)^T,\ (1.8,1.8,600,20)^T,\ (1.8,1.8,600,25)^T. \end{aligned}$$
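These 16 vertices can be generated mechanically as the Cartesian product of the interval endpoints; a short sketch (variable names are ours):

```python
from itertools import product

# Interval output prices from the hospital example.
p_lo = [1.25, 1.25, 100.0, 20.0]
p_hi = [1.80, 1.80, 600.0, 25.0]

# The extreme points of the price box are all endpoint combinations: 2^4 = 16.
vertices = [list(v) for v in product(*zip(p_lo, p_hi))]
print(len(vertices))                   # 16
```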

The second and third columns of Table 2 report the lower and upper bounds of the RE measures of the 24 hospitals. The fourth column of Table 2 shows the extreme point of output prices at which the lower bound of RE is attained. For instance, the lower bound of RE of Amir Aalam hospital is 0.572399 and is attained at the extreme point

$$\begin{aligned} p_{10}&=(1.8,1.25,100,25)^T,\ \end{aligned}$$

where \(p_{10}:= p_{(1,-1,-1,1)}\) denotes the tenth extreme point in the list above, etc.

As seen in Table 2, only Farabi hospital is revenue efficient in both the most and the least favourable situations, so it is revenue efficient for every instance of the interval parameters. The hospitals Bahrami, Razi, Zanan Babak, Hazrate Fatemeh Zahra, Lolagar, and Shahid Fahmideh are efficient in the most favourable situation and inefficient in the least favourable one; their RE measures lie in the intervals \([0.764213, 1]\), \([0.662164, 1]\), \([0.76827,1]\), \([0.807069, 1]\), and \([0.04098,1]\), respectively. The remaining hospitals are inefficient in both the most and the least favourable situations, so we can be sure they are inefficient for every instance. Thus, even though the output prices are uncertain, we can decide efficiency or inefficiency for 18 of the 24 hospitals; only the remaining six hospitals can be either efficient or inefficient, depending on the concrete output price vector.

6 Conclusion

The aim of the paper was to compute the exact bounds of the optimal objective value of a fractional linear programming problem whose input data can vary in given real compact intervals. A method was proposed for the situation in which the feasible set is described by an interval linear system. Moreover, certain dependencies between the coefficients in the numerators and denominators can be taken into account. We also extended this approach to situations where the same vector appears in different terms of the numerators and denominators. The applicability of the developed approach was illustrated in the context of hospital performance analysis. The surprising point here is that for 18 of the 24 hospitals, we can draw definite conclusions on efficiency despite the uncertain output prices.