1 Introduction

The RAM measure, defined a decade ago by Cooper et al. (1999), was an attempt to define a generalized efficiency measure in connection with an additive model [Charnes et al. (1985)], i.e., a measure that accounts for all the inefficiencies detected by the slacks of the model. As is well known, the classic additive model obtains the projection by maximizing the L1-distance to the strong efficient frontier, thereby simultaneously maximizing outputs and minimizing inputs. Moreover, the classic additive model assumes variable returns to scale (VRS), an assumption that can easily be modified when the model is formulated as a linear program [see, e.g., Ali and Seiford (1993)]. It is worth mentioning that the directional distance function measure of inefficiency, introduced by Chambers et al. (1998), does not account for all types of inefficiencies that the model can identify (see the example in Ray (2004), p. 95), but it does allow for simultaneous output expansion and input reduction.

As far as we know, during the last decade, few, if any, attempts have been made to develop a generalized efficiency measure in direct connection with additive linear efficiency models. Nonetheless, there have been three attempts to define generalized efficiency measures in connection with nonlinear efficiency models. The most popular one resorts to a fractional linear programming model that can be linearized, as shown independently by Pastor et al. (1999) and by Tone (2001). The corresponding measure is now known as the SBM (Slack Based Measure), although it was originally introduced as the ERG (Enhanced Russell Graph) measure. The second one, the GDF (Geometric Distance Function), was published by Silva-Portela and Thanassoulis (2006), who had proposed it in an unpublished research paper in 2002. The GDF relies exclusively on the unit being rated and its projection, and can be used in connection with any DEA model. These authors propose using the GDF in connection with a nonlinear efficiency model with independent radial input reduction and radial output expansion. This model does not guarantee a strong efficient projection for each unit. Nevertheless, if a model is considered that projects each unit onto the strong efficient frontier, the GDF becomes a generalized efficiency measure. Unfortunately, if the projection is not unique, as may happen, e.g., in connection with an additive model, the GDF is not well defined. The last generalized measure is due to Sharp et al. (2007), who propose a generalization of the SBM measure to achieve translation invariance.

Almost all the remaining attempts appearing in the literature from 1999 onwards proposed non-generalized efficiency measures. These include the Hölder distance functions [Briec (1999)], the RDM [Range Directional Measure, by Silva-Portela et al. (2004)] and the MEA [Multi-directional Efficiency Analysis measure, by Bogetoft and Hougaard (1999)]. Very recently, Asmild and Pastor (2010) redefined both RDM and MEA so as to obtain slack-free measures, that is, generalized efficiency measures.

The drawbacks of the RAM measure are twofold. On the one hand, it can only deal with VRS models; on the other, it exhibits low discriminatory power, as already detected by Aida et al. (1998). In what follows we consider a measure, first defined by Pastor (1994) for dealing with negative data, that we refer to as the BAM measure; it resembles the RAM measure and has similar interesting properties, but possesses stronger discriminatory power. Moreover, in order to deal with non-VRS models, a new family of DEA additive models is introduced in which the efficient projections are required to satisfy certain bounds on inputs and outputs, just as Asmild and Pastor (2008) did for radial DEA models. All these new additive models are referred to as “bounded additive” models.

2 The characteristics of the RAM measure

Let us assume that we are rating a sample of n units through a DEA model. Each unit is defined by means of m input and s output quantities. In the definition of the RAM measure a modified additive model was considered, as defined by Charnes et al. (1985), with weights in the objective function, i.e., a weighted additive model (Lovell and Pastor 1995). Each input and output slack is divided by (m + s) times the range of the corresponding variable in the sample of n units under evaluation. (The factor (m + s) is introduced in order to average the sum of inefficiencies.) Hence, the objective function measures inefficiency and, in order to get the corresponding efficiency measure, we just have to compute 1 minus the inefficiency. The RAM inefficiency model is as follows.

$$ \begin{gathered} {\text{Max}}\left( {\sum\limits_{i = 1}^{m} {{\frac{{s_{io}^{ - } }}{{(m + s)R_{i}^{ - } }}}} + \sum\limits_{r = 1}^{s} {{\frac{{s_{ro}^{ + } }}{{(m + s)R_{r}^{ + } }}}} } \right) \hfill \\ s.t. \hfill \\ \sum\limits_{j = 1}^{n} {\lambda_{j} x_{ij} + s_{io}^{ - } = x_{io} ,\quad i = 1, \ldots ,m} \hfill \\ \sum\limits_{j = 1}^{n} {\lambda_{j} y_{rj} - s_{ro}^{ + } = y_{ro} ,\quad r = 1, \ldots ,s} \hfill \\ \sum\limits_{j = 1}^{n} {\lambda_{j} = 1} \hfill \\ \lambda_{j} \ge 0,\quad j = 1, \ldots ,n;\;s_{io}^{ - } \ge 0,\forall i;\;s_{ro}^{ + } \ge 0,\;\forall r \hfill \\ {\text{where}}\quad R_{i}^{ - } = \bar{x}_{i} - \underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{x}_{i} ,\;{\text{with}}\;\bar{x}_{i} = \max \left\{ {x_{ij} ,j = 1, \ldots ,n} \right\},\;\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{x}_{i} = \min \left\{ {x_{ij} ,j = 1, \ldots ,n} \right\}, \hfill \\ {\text{and}}\quad R_{r}^{ + } = \bar{y}_{r} - \underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{y}_{r} ,\;{\text{with}}\;\bar{y}_{r} = \max \left\{ {y_{rj} ,j = 1, \ldots ,n} \right\},\;\underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{y}_{r} = \min \left\{ {y_{rj} ,j = 1, \ldots ,n} \right\}. \hfill \\ \end{gathered} $$
(1)

As said before, we define the associated efficiency measure, \( \Upgamma_{RAM} \), as

$$ \Upgamma_{RAM} = 1 - \left( {\sum\limits_{i = 1}^{m} {{\frac{{s_{i}^{* - } }}{{(m + s)R_{i}^{ - } }}}} + \sum\limits_{r = 1}^{s} {{\frac{{s_{r}^{* + } }}{{(m + s)R_{r}^{ + } }}}} } \right) = 1 - {\frac{1}{(m + s)}}\left( {\sum\limits_{i = 1}^{m} {{\frac{{s_{i}^{* - } }}{{R_{i}^{ - } }}}} + \sum\limits_{r = 1}^{s} {{\frac{{s_{r}^{* + } }}{{R_{r}^{ + } }}}} } \right), $$
(2)

where * denotes the corresponding optimal slack values.
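For readers who want to reproduce the computation, the following sketch solves the RAM linear program (1) and evaluates the score (2) with scipy.optimize.linprog. The four-unit dataset, the function name ram_score and all other identifiers are our own illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch (not the authors' code): the RAM inefficiency model (1) and the
# efficiency score (2) under VRS, solved as a linear program with SciPy.
import numpy as np
from scipy.optimize import linprog

# Hypothetical dataset: n = 4 units, m = 2 inputs, s = 1 output.
X = np.array([[2.0, 4.0], [3.0, 2.0], [6.0, 5.0], [5.0, 6.0]])
Y = np.array([[4.0], [3.0], [5.0], [2.0]])
n, m = X.shape
s = Y.shape[1]

R_minus = X.max(axis=0) - X.min(axis=0)   # input ranges R_i^-
R_plus = Y.max(axis=0) - Y.min(axis=0)    # output ranges R_r^+

def ram_score(o):
    """Gamma_RAM for unit o (model (1) plus definition (2))."""
    # Decision variables: lambda_1..lambda_n, s^-_1..s^-_m, s^+_1..s^+_s.
    # linprog minimizes, so we minimize the negative of the weighted slack sum.
    c = np.concatenate([np.zeros(n),
                        -1.0 / ((m + s) * R_minus),
                        -1.0 / ((m + s) * R_plus)])
    A_eq = np.zeros((m + s + 1, n + m + s))
    A_eq[:m, :n], A_eq[:m, n:n + m] = X.T, np.eye(m)            # sum_j l_j x_ij + s_i^- = x_io
    A_eq[m:m + s, :n], A_eq[m:m + s, n + m:] = Y.T, -np.eye(s)  # sum_j l_j y_rj - s_r^+ = y_ro
    A_eq[-1, :n] = 1.0                                          # convexity (VRS): sum_j l_j = 1
    b_eq = np.concatenate([X[o], Y[o], [1.0]])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + m + s), method="highs")
    return 1.0 + res.fun          # Gamma_RAM = 1 - optimal inefficiency

for o in range(n):
    print(f"unit {o}: Gamma_RAM = {ram_score(o):.3f}")
```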

As proved in Cooper et al. (1999), \( \Upgamma_{RAM} \) is well defined because, after deleting the variables with zero range, the ranges of the remaining inputs and outputs are always positive numbers. In fact, if the range of any variable is zero, the corresponding input or output is constant and, in any VRS model, the corresponding restriction may be omitted because it is subsumed by the convexity constraint on the lambdas. In the same paper the authors show that \( \Upgamma_{RAM} \) satisfies the following five properties.

$$ \begin{gathered} \left( {\text{P1}} \right)\,0 \le \Upgamma_{RAM} \le 1. \hfill \\ \left( {\text{P2}} \right)\Upgamma_{RAM} = \left\{ \begin{gathered} 1 \Leftrightarrow DMU_{o} \,{\text{is}}\,{\text{fully}}\,{\text{efficient}} \hfill \\ 0 \Leftrightarrow DMU_{o} \,{\text{is}}\,{\text{fully}}\,{\text{inefficient}}. \hfill \\ \end{gathered} \right. \hfill \\ \left( {\text{P3}} \right)\Upgamma_{RAM} \,{\text{is invariant to}}\left\{ \begin{gathered} {\text{alternative}}\,\,{\text{optima}} \hfill \\ {\text{units}}\,{\text{of}}\,{\text{measurement}}\,{\text{of}}\,{\text{inputs}}\,{\text{and}}\,{\text{outputs}}. \hfill \\ \end{gathered} \right. \hfill \\ \left( {\text{P4}} \right)\Upgamma_{RAM} \,{\text{is strongly monotonic}}. \hfill \\ \left( {\text{P5}} \right)\Upgamma_{RAM} \,{\text{is translation invariant}}. \hfill \\ \end{gathered} $$
(3)

(P1) holds as a consequence of considering a VRS model: each input term of the objective function of the inefficiency measure is at most \( 1/(m + s) \), because \( s_{io}^{ - } = x_{io} - \sum\nolimits_{j = 1}^{n} {\lambda_{j} x_{ij} } \le x_{io} - \left( {\sum\nolimits_{j = 1}^{n} {\lambda_{j} \underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{x}_{i} } } \right) = x_{io} - \underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{x}_{i} \le R_{i}^{ - } ,\quad i = 1, \ldots ,m \), and, similarly, \( s_{ro}^{ + } \le \bar{y}_{r} - y_{ro} \le R_{r}^{ + } . \) The first part of (P2) holds if, and only if, all the slacks are zero, while the second part holds if, and only if, the zenith point belongs to the data set, that is, the point that has, at the same time, all its inputs at the lowest level and all its outputs at the highest level. (P3) states that \( \Upgamma_{RAM} \) is well defined and units invariant, and (P5) that it is translation invariant. (P3) is desirable for mathematical and economic reasons and (P5) for dealing with negative data. Moreover, property (P4), strong monotonicity, is difficult to achieve: it fails to hold in any efficiency model that may project inefficient units onto the weak efficient frontier, such as the radial models (see Pastor et al. (2008) for a thorough review of linear efficiency models).

With respect to the shortcomings of \( \Upgamma_{RAM} \), we have already pointed out two in the last paragraph of the Introduction. First, RAM exhibits low discriminatory power, as reported by Aida et al. (1998) through an empirical example in which 108 suppliers of water services in Japan obtained \( \Upgamma_{RAM} \) values between 0.978 and 1. In other words, the associated inefficiencies are very low, with values between 0 and 0.022, and always less than 2.5%. Second, if we are not dealing with a VRS model, we cannot guarantee (P1). To give but one example, in a CRS (constant returns to scale) model, \( s_{r}^{ + } \) can be much larger than the range of output r.

3 BAM under VRS

In order to get a new measure with higher discriminatory power, we need to allow the detected inefficiencies to take larger and more varied values. The route we explore here consists of reducing the range denominators of the objective function of the RAM inefficiency model, as initially proposed in an unpublished paper by Pastor (1994). Instead of considering ranges of inputs and outputs, we consider lower-sided ranges for inputs and upper-sided ranges for outputs. Moreover, the new sided ranges are specific to each unit being rated. We define the lower-sided range for input i, i = 1,…,m, at unit \( \left( {x_{o} ,y_{o} } \right) \) as

$$ L_{io}^{ - } : = x_{io} - \underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{x}_{i} . $$
(4.1)

Similarly, we define the upper-sided range for output r, r = 1,…,s, at unit \( \left( {x_{o} ,y_{o} } \right) \) as

$$ U_{ro}^{ + } : = \bar{y}_{r} - y_{ro} , $$
(4.2)

where \( {\underline{x}}_i \) and \( \bar{y}_{r} \) are defined in (1). It should be clear that the lower-sided range for each input depends only on the lower bound of that input and on the DMU being rated, while the upper-sided range for each output depends only on the upper bound of that output and on the DMU being rated. This is the reason why we call our new measure BAM, that is, “Bounded Adjusted Measure.” Let us observe that each sided range is always less than or equal to the corresponding range for any input or output.
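As a minimal numerical illustration of definitions (4.1) and (4.2) (the arrays X, Y and the unit index o below are hypothetical placeholders, not data from the paper), the sided ranges are computed per unit as follows:

```python
import numpy as np

X = np.array([[2.0, 4.0], [3.0, 2.0], [6.0, 5.0], [5.0, 6.0]])  # n x m inputs (illustrative)
Y = np.array([[4.0], [3.0], [5.0], [2.0]])                       # n x s outputs (illustrative)
o = 3                                                            # unit under evaluation

L_minus = X[o] - X.min(axis=0)   # lower-sided input ranges L_io^-,  definition (4.1)
U_plus = Y.max(axis=0) - Y[o]    # upper-sided output ranges U_ro^+, definition (4.2)
print(L_minus, U_plus)           # each entry is at most the corresponding full range
```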

The BAM measure of inefficiency is evaluated by means of the following additive model.

$$ \begin{gathered} {\text{Max}}\left( {\sum\limits_{i = 1}^{m} {{\frac{{s_{io}^{ - } }}{{(m + s)L_{io}^{ - } }}}} + \sum\limits_{r = 1}^{s} {{\frac{{s_{ro}^{ + } }}{{(m + s)U_{ro}^{ + } }}}} } \right) \hfill \\ s.t. \hfill \\ \sum\limits_{j = 1}^{n} {\lambda_{j} x_{ij} + s_{io}^{ - } = x_{io} ,\quad i = 1, \ldots ,m} \hfill \\ \sum\limits_{j = 1}^{n} {\lambda_{j} y_{rj} - s_{ro}^{ + } = y_{ro} ,\quad r = 1, \ldots ,s} \hfill \\ \sum\limits_{j = 1}^{n} {\lambda_{j} = 1,} \hfill \\ \lambda_{j} \ge 0,\quad j = 1, \ldots ,n;\;s_{io}^{ - } \ge 0,\forall i;\;s_{ro}^{ + } \ge 0,\;\forall r. \hfill \\ \end{gathered} $$
(5)

The corresponding efficiency measure, \( \Upgamma_{BAM} \), is defined as

$$ \Upgamma_{BAM} = 1 - {\frac{1}{(m + s)}}\left( {\sum\limits_{i = 1}^{m} {{\frac{{s_{io}^{* - } }}{{L_{io}^{ - } }}}} + \sum\limits_{r = 1}^{s} {{\frac{{s_{ro}^{* + } }}{{U_{ro}^{ + } }}}} } \right). $$
(6)

This measure is well defined if we adopt the following convention. If, for unit \( \left( {x_{o} ,y_{o} } \right) \), input i satisfies \( x_{io} = \underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{x}_{i} \), there is no room for improvement, i.e., \( s_{io}^{* - } = 0 \), and, by convention, we take

$$ {\frac{{s_{io}^{* - } }}{{L_{io}^{ - } }}} = 0. $$

Similarly, if output r satisfies \( y_{ro} = \bar{y}_{r} \), we take

$$ {\frac{{s_{ro}^{* + } }}{{U_{ro}^{ + } }}} = 0. $$
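Continuing the sketch given after (2), the BAM-VRS computation (5)–(6) differs from the RAM one only in the denominators and in the zero-range convention just stated. The code below is again an illustrative assumption (the dataset and all names are ours), not the authors' implementation.

```python
# Sketch of the BAM model (5) and score (6) under VRS, with the zero-range convention.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 4.0], [3.0, 2.0], [6.0, 5.0], [5.0, 6.0]])  # n x m inputs (illustrative)
Y = np.array([[4.0], [3.0], [5.0], [2.0]])                       # n x s outputs (illustrative)
n, m = X.shape
s = Y.shape[1]

def bam_vrs_score(o):
    """Gamma_BAM for unit o under VRS."""
    L = X[o] - X.min(axis=0)          # lower-sided input ranges L_io^-
    U = Y.max(axis=0) - Y[o]          # upper-sided output ranges U_ro^+
    # Convention: a zero sided range forces a zero slack under VRS, so the
    # corresponding ratio is taken as 0 (here: a zero objective weight).
    w_in, w_out = np.zeros(m), np.zeros(s)
    w_in[L > 0] = 1.0 / ((m + s) * L[L > 0])
    w_out[U > 0] = 1.0 / ((m + s) * U[U > 0])
    c = np.concatenate([np.zeros(n), -w_in, -w_out])             # maximize the inefficiency
    A_eq = np.zeros((m + s + 1, n + m + s))
    A_eq[:m, :n], A_eq[:m, n:n + m] = X.T, np.eye(m)             # input equalities
    A_eq[m:m + s, :n], A_eq[m:m + s, n + m:] = Y.T, -np.eye(s)   # output equalities
    A_eq[-1, :n] = 1.0                                           # convexity constraint
    b_eq = np.concatenate([X[o], Y[o], [1.0]])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + m + s), method="highs")
    return 1.0 + res.fun              # Gamma_BAM = 1 - optimal inefficiency

print([round(bam_vrs_score(o), 3) for o in range(n)])
```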

As said before, this measure was initially proposed by Pastor (1994) in an unpublished paper for dealing with negative data, as recorded in Cooper et al. (1999). It also appears in Pastor and Ruiz (2007), again in connection with negative data.

It is straightforward to show that \( \Upgamma_{BAM} \) satisfies the same properties as \( \Upgamma_{RAM} \) except (P4). Basically, the definition of the sided ranges at each unit guarantees that

$$ 0 \le {\frac{{s_{io}^{* - } }}{{L_{io}^{ - } }}} \le 1 $$

for any input i, and that

$$ 0 \le {\frac{{s_{ro}^{* + } }}{{U_{ro}^{ + } }}} \le 1 $$

for any output r. Again, under VRS, if any variable of the model is constant, we simply delete it because the corresponding restriction is subsumed by the convexity constraint. Finally, let us prove that \( \Upgamma_{BAM} \) satisfies the following alternative property.

$$ ({\text{P4}}^{\prime})\quad\Upgamma_{BAM}\;{\text{is monotonic.}}$$

Proof

Let (x, y) be a non-efficient unit. Let us increase only its first input by a positive quantity h > 0, generating a new point (x′, y). Since this new point is clearly dominated by the former one, we want to prove that \( \Upgamma_{BAM} \left( {x,y} \right) \ge \Upgamma_{BAM} \left( {x^{\prime } ,y} \right) \) or, equivalently, that \( 1 - \Upgamma_{BAM} \left( {x,y} \right) \le 1 - \Upgamma_{BAM} \left( {x^{\prime } ,y} \right) \). If we succeed, the proof is completed by repeating the argument for each of the remaining inputs (increasing each by a positive quantity) and for each of the outputs (decreasing each by a positive quantity).

Let us consider the optimal path of (x, y) towards its efficient projection on the strong efficient frontier, as given by the set of optimal slacks \( \left( {s_{1}^{* - } ,s_{2}^{* - } , \ldots ,s_{m}^{* - } ;\;s_{1}^{* + } , \ldots ,s_{s}^{* + } } \right) \). The corresponding feasible path for point (x′,y), which may be suboptimal, is given by the set of slacks \( \left( {s_{1}^{* - } + h,s_{2}^{* - } , \ldots ,s_{m}^{* - } ;\;s_{1}^{* + } , \ldots ,s_{s}^{* + } } \right) \). The next inequality

$$ 1 - \Upgamma_{BAM} \left( {x,y} \right) = \sum\limits_{i = 1}^{m} {{\frac{{s_{i}^{* - } }}{{(m + s)L_{i}^{ - } }}}} + \sum\limits_{r = 1}^{s} {{\frac{{s_{r}^{* + } }}{{(m + s)U_{r}^{ + } }}}} \le 1 - \Upgamma_{BAM} \left( {x',y} \right) $$

is clearly satisfied if we prove that

$$ \sum\limits_{i = 1}^{m} {{\frac{{s_{i}^{* - } }}{{(m + s)L_{i}^{ - } }}}} + \sum\limits_{r = 1}^{s} {{\frac{{s_{r}^{* + } }}{{(m + s)U_{r}^{ + } }}}} \le {\frac{{s_{1}^{* - } + h}}{{(m + s)\left( {L_{1}^{ - } + h} \right)}}} + \sum\limits_{i = 2}^{m} {{\frac{{s_{i}^{* - } }}{{(m + s)L_{i}^{ - } }}}} + \sum\limits_{r = 1}^{s} {{\frac{{s_{r}^{* + } }}{{(m + s)U_{r}^{ + } }}}} $$

The last inequality holds if, and only if,

$$ {\frac{{s_{1}^{* - } }}{{L_{1}^{ - } }}} \le {\frac{{s_{1}^{* - } + h}}{{\left( {L_{1}^{ - } + h} \right)}}}. $$

Multiplying both sides by the (positive) denominators \( L_{1}^{ - } \) and \( L_{1}^{ - } + h \), the last inequality reduces to \( s_{1}^{* - } h \le L_{1}^{ - } h \), i.e., it holds if, and only if,

$$ {\frac{{s_{1}^{* - } }}{{L_{1}^{ - } }}} \le 1, $$

which we know is true. Hence the proof is completed.

4 Bounded additive models

The aim of this section is to introduce new constrained additive models that will allow us to use the BAM measure in connection with non-VRS models. As pointed out before, we are rating a sample of n homogeneous production units, \( \left\{ {\left( {x_{j} ,y_{j} } \right),x_{j} \in R_{ + }^{m} ,y_{j} \in R_{ + }^{s} ,\quad j = 1, \ldots ,n} \right\} \) and, consequently, the production possibility set can be characterized as follows. Assuming we are considering a VRS technology, T is defined precisely as in Banker et al. (1984):

$$ T = \left\{ {(x,y) \in R_{ + }^{m} \times R_{ + }^{s} :(x, - y) \ge \sum\limits_{j = 1}^{n} {\lambda_{j} (x_{j} , - y_{j} ),\sum\limits_{j = 1}^{n} {\lambda_{j} = 1,} \,\lambda_{j} \ge 0,\quad j = 1, \ldots ,n} } \right\}$$
(7)

As is well known, if we want to restrict the technology to satisfy non-increasing returns to scale (NIRS), non-decreasing returns to scale (NDRS) or constant returns to scale (CRS), we have to modify the constraint on the sum of the intensity variables \( \lambda_{j} \) in the definition of T as follows [see, for instance, Cooper et al. (2007)]:

$$ NIRS:\,\,\sum\limits_{j = 1}^{n} {\lambda_{j} } \le 1;\,\,\,NDRS:\sum\limits_{j = 1}^{n} {\lambda_{j} } \ge 1;\,\,\,CRS:\sum\limits_{j = 1}^{n} {\lambda_{j} } \ge 0\,\,. $$
(8)

Note. Observe that the restriction associated with CRS is redundant and can be deleted, because we always require non-negativity of the lambdas. It is only useful if we want to formulate the four cases with the same number of restrictions.

Any dataset associated with a sample of n units has natural bounds, given by the maximum and minimum values of each input and each output variable. Moreover, these bounds show clearly what has been and can be achieved in practice, i.e., the minimum of each input is the smallest quantity that needs to be consumed in order to obtain as much as the maximum of each output. Since any efficiency analysis tries to reduce inputs and/or expand outputs, it is not surprising that we incorporate the lower bound of each input and the upper bound of each output into the formulation of the DEA model under consideration. Now, under CRS, the new bounded production possibility set, \( T^{B} \), is defined as

$$ T^{B} = \left\{ {(x,y) \in R_{ + }^{m} \times R_{ + }^{s} :(x, - y) = \sum\limits_{j = 1}^{n} {\lambda_{j} (x_{j} , - y_{j} );\quad \lambda_{j} \ge 0,\;\forall j} ;\quad x_{i} \ge \underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{x}_{i} ,\;\forall i;\quad y_{r} \le \bar{y}_{r} ,\;\forall r} \right\}. $$
(9)

It is straightforward to formulate the corresponding CRS-bounded additive model. We only need to require that the projected point satisfies these bounds, as follows:

$$ \begin{gathered} {\text{Max}}\left( {\sum\limits_{i = 1}^{m} {s_{io}^{ - } } + \sum\limits_{r = 1}^{s} {s_{ro}^{ + } } } \right) \hfill \\ s.t. \hfill \\ \sum\limits_{j = 1}^{n} {\lambda_{j} x_{ij} + s_{io}^{ - } = x_{io} ,\quad i = 1, \ldots ,m} \hfill \\ \sum\limits_{j = 1}^{n} {\lambda_{j} y_{rj} - s_{ro}^{ + } = y_{ro} ,\quad r = 1, \ldots ,s} \hfill \\ \sum\limits_{j = 1}^{n} {\lambda_{j} x_{ij} } \ge \underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{x}_{i} ,\quad i = 1, \ldots ,m \hfill \\ \sum\limits_{j = 1}^{n} {\lambda_{j} y_{rj} } \le \bar{y}_{r} ,\quad r = 1, \ldots ,s \hfill \\ \lambda_{j} \ge 0,\quad j = 1, \ldots ,n;\;s_{io}^{ - } \ge 0,\forall i;\;s_{ro}^{ + } \ge 0,\;\forall r. \hfill \\ \end{gathered} $$
(10)
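It is worth making explicit an observation that will be used in Sect. 5: combining the two new blocks of restrictions in (10) with the input and output equalities shows that they are simply upper bounds on the slacks,

$$ \sum\limits_{j = 1}^{n} {\lambda_{j} x_{ij} } \ge \underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{x}_{i} \Leftrightarrow s_{io}^{ - } \le x_{io} - \underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{x}_{i} = L_{io}^{ - } ,\qquad \sum\limits_{j = 1}^{n} {\lambda_{j} y_{rj} } \le \bar{y}_{r} \Leftrightarrow s_{ro}^{ + } \le \bar{y}_{r} - y_{ro} = U_{ro}^{ + } . $$

This is precisely the property that will later guarantee that each term of the BAM objective function is at most \( 1/(m + s) \).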

This formulation can be simplified for the NIRS-bounded additive model. In that case we need to add the constraint \( \sum\nolimits_{j = 1}^{n} {\lambda_{j} \le 1} \) and, knowing that for each r, \( y_{rj} \le \bar{y}_{r} ,\forall j \), we get \( \sum\nolimits_{j = 1}^{n} {\lambda_{j} y_{rj} } \le \sum\nolimits_{j = 1}^{n} {\lambda_{j} \bar{y}_{r} } \le \bar{y}_{r} ,r = 1, \ldots ,s \). This means that the upper bound restrictions on the outputs may be deleted. Consequently, the formulation of the NIRS-bounded additive model is:

$$ \begin{gathered} {\text{Max}}\left( {\sum\limits_{i = 1}^{m} {s_{io}^{ - } } + \sum\limits_{r = 1}^{s} {s_{ro}^{ + } } } \right) \hfill \\ s.t. \hfill \\ \sum\limits_{j = 1}^{n} {\lambda_{j} x_{ij} + s_{io}^{ - } = x_{io} ,\quad i = 1, \ldots ,m} \hfill \\ \sum\limits_{j = 1}^{n} {\lambda_{j} y_{rj} - s_{ro}^{ + } = y_{ro} ,\quad r = 1, \ldots ,s} \hfill \\ \sum\limits_{j = 1}^{n} {\lambda_{j} \le 1} \hfill \\ \sum\limits_{j = 1}^{n} {\lambda_{j} x_{ij} } \ge \underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{x}_{i} ,\quad i = 1, \ldots ,m \hfill \\ \lambda_{j} \ge 0,\quad j = 1, \ldots ,n;\;s_{io}^{ - } \ge 0,\forall i;\;s_{ro}^{ + } \ge 0,\;\forall r. \hfill \\ \end{gathered} $$
(11)

Similarly, for the NDRS case, it is easy to prove that the restrictions that can be deleted are the lower bound restrictions on inputs. Hence, the formulation of the NDRS-bounded additive model is:

$$ \begin{gathered} {\text{Max}}\left( {\sum\limits_{i = 1}^{m} {s_{io}^{ - } } + \sum\limits_{r = 1}^{s} {s_{ro}^{ + } } } \right) \hfill \\ s.t. \hfill \\ \sum\limits_{j = 1}^{n} {\lambda_{j} x_{ij} + s_{io}^{ - } = x_{io} ,\quad i = 1, \ldots ,m} \hfill \\ \sum\limits_{j = 1}^{n} {\lambda_{j} y_{rj} - s_{ro}^{ + } = y_{ro} ,\quad r = 1, \ldots ,s} \hfill \\ \sum\limits_{j = 1}^{n} {\lambda_{j} \ge 1} \hfill \\ \sum\limits_{j = 1}^{n} {\lambda_{j} y_{rj} } \le \bar{y}_{r} ,\quad r = 1, \ldots ,s \hfill \\ \lambda_{j} \ge 0,\quad j = 1, \ldots ,n;\;s_{io}^{ - } \ge 0,\forall i;\;s_{ro}^{ + } \ge 0,\;\forall r. \hfill \\ \end{gathered} $$
(12)

Finally, for the VRS case we have to add the restriction \( \sum\nolimits_{j = 1}^{n} {\lambda_{j} = 1} \) or, equivalently, the two restrictions \( \sum\nolimits_{j = 1}^{n} {\lambda_{j} \le 1} ,\;\sum\nolimits_{j = 1}^{n} {\lambda_{j} \ge 1} \). By the two arguments just given, both the input and the output bound restrictions then become redundant, and the formulation of the VRS-bounded additive model collapses to the classic additive model. Summarizing, we have defined four bounded additive models, but only three of them are really new.

In order to portray these newly introduced models geometrically, we have drawn four panels, for the single-input, single-output case, that compare the frontier of each of the four bounded additive models with the frontier of the corresponding (unbounded) additive model (see Fig. 1).

Representing the input on the horizontal axis and the output on the vertical axis, panel (a) corresponds to the CRS case, with the dashed lines corresponding to the upper and lower bounds. The frontier of the bounded CRS additive model contains no half-rays, in contrast with the unbounded model. Moreover, the points on the frontier with input below the sample lower bound disappear. Panel (b) corresponds to the NIRS case: here only the lower left-hand part of the frontier disappears, just as in case (a). Panel (c) corresponds to the NDRS case: here only the upper right-hand half-rays are truncated. Finally, panel (d) shows that the original (VRS) additive model and its bounded version are indistinguishable.

5 Extension of BAM under any returns to scale

Once the four bounded additive models have been defined, it is easy to realize that the definition of the BAM measure works perfectly well for any of them. Actually, the addition of the (m + s) input and output bound restrictions guarantees that the optimal value of the BAM objective function is a nonnegative number less than or equal to 1. It is not difficult to build an example in which, in connection with a non-VRS model without bounds, the corresponding BAM measure takes a negative value. For this purpose, let us consider the data for the 108 water supplying agencies in the Kanto region of Japan, as reported in Aida et al. (1998). This dataset contains seven values for each agency (five inputs and two outputs). Summary statistics are reported in Table 1.

Table 1 Input and output statistics

Let us evaluate DMU 85 through the NIRS additive model without bounds. Computing its BAM score by means of this model, we get

$$ \begin{aligned} NIRS - score\left( {U85} \right) = & 1 - {\frac{1}{{\left( {5 + 2} \right)}}}\left( {\sum\limits_{i = 1}^{5} {{\frac{{s_{io}^{* - } }}{{L_{io}^{ - } }}}} + \sum\limits_{r = 1}^{2} {{\frac{{s_{ro}^{* + } }}{{U_{ro}^{ + } }}}} } \right) \\ = & 1 - \frac{1}{7}\left( {{\frac{13.92}{24 - 3}} + {\frac{0}{{L_{2o}^{ - } }}} + {\frac{2001705.69}{4841279 - 1138674}}} \right) \\ & - \frac{1}{7}\left( {{\frac{4953.84}{24657 - 19777}} + {\frac{49.46}{109.21 - 105.94}} + {\frac{0}{{U_{1o}^{ + } }}} + {\frac{0}{{U_{2o}^{ + } }}}} \right) \\ = & 1 - \frac{1}{7}\left( {0.663 + 0 + 0.541 + 1.015 + 15.125 + 0 + 0} \right) \\ = & 1 - {\frac{17.344}{7}} \\ = & 1 - 2.477 \\ = & - 1.477, \\ \end{aligned} $$
(13)

i.e., a negative efficiency score! As can be seen, the inefficiency contribution of input 5 is more than twice the sum of the number of inputs and outputs (m + s = 7), and this is the basic reason why we get a negative efficiency. Consequently, \( NIRS - score \) is not a well-defined efficiency measure, and resorting to the corresponding bounded model is a must.
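As a quick check of the arithmetic in (13) (the numerical values below are copied from the display above; the few lines of code are merely illustrative), the negative score can be verified directly:

```python
# Reproducing the arithmetic of (13) for DMU 85.
ratios = [13.92 / (24 - 3),                  # input 1
          0.0,                               # input 2 (zero slack)
          2001705.69 / (4841279 - 1138674),  # input 3
          4953.84 / (24657 - 19777),         # input 4
          49.46 / (109.21 - 105.94),         # input 5
          0.0, 0.0]                          # outputs 1 and 2 (zero slacks)
score = 1 - sum(ratios) / 7                  # m + s = 5 + 2 = 7
print(round(score, 3))                       # about -1.478: not a valid efficiency score
```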

Hence, we formulate the BAM model under any returns to scale as follows.

$$ \begin{aligned} \Upgamma_{BAM - RS} : = & {\text{Min}}\,1 - {\frac{1}{(m + s)}}\left( {\sum\limits_{i = 1}^{m} {{\frac{{s_{io}^{ - } }}{{L_{io}^{ - } }}}} + \sum\limits_{r = 1}^{s} {{\frac{{s_{ro}^{ + } }}{{U_{ro}^{ + } }}}} } \right) \\ s.t. \\ \sum\limits_{j = 1}^{n} {\lambda_{j} x_{ij} + s_{io}^{ - } = x_{io} ,\quad i = 1, \ldots ,m} \\ \sum\limits_{j = 1}^{n} {\lambda_{j} y_{rj} - s_{ro}^{ + } = y_{ro} ,\quad r = 1, \ldots ,s} \\ \sum\limits_{j = 1}^{n} {\lambda_{j} \ge 0,\;or \le 1,\;or \ge 1,\;or = 1} \\ \sum\limits_{j = 1}^{n} {\lambda_{j} x_{ij} } \ge \underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{x}_{i} ,i = 1, \ldots ,m \\ \sum\limits_{j = 1}^{n} {\lambda_{j} y_{rj} } \le \bar{y}_{r} ,r = 1, \ldots ,s \\ \lambda_{j} \ge 0,\quad j = 1, \ldots ,n;\;s_{io}^{ - } \ge 0,\;\forall i;\;s_{ro}^{ + } \ge 0,\;\forall r. \\ \end{aligned} $$
(14)

As already noted, the bounding restrictions on inputs and/or outputs may be deleted, depending upon the returns to scale we are working with. In the definition of \( \Upgamma_{BAM - RS}^{{}} \), RS stands for “returns to scale” and, with the appropriate notation, allows us to consider any of the four possibilities. To give but one example, \( \Upgamma_{BAM - NIRS}^{{}} \) denotes the general BAM measure for the NIRS-bounded additive model.
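To close this section, the following sketch gathers the above into a single routine for the general bounded BAM model (14), with the returns-to-scale assumption as a parameter. As in the earlier snippets, the dataset and all identifiers (bam_rs_score, the rs argument, and so on) are our own illustrative choices and not part of the paper; for simplicity the possibly redundant bound restrictions are kept in all four cases, which is harmless.

```python
# Sketch of the general bounded BAM model (14) with a selectable returns-to-scale case.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 4.0], [3.0, 2.0], [6.0, 5.0], [5.0, 6.0]])  # n x m inputs (illustrative)
Y = np.array([[4.0], [3.0], [5.0], [2.0]])                       # n x s outputs (illustrative)
n, m = X.shape
s = Y.shape[1]
x_min, y_max = X.min(axis=0), Y.max(axis=0)

def bam_rs_score(o, rs="crs"):
    """Gamma_BAM-RS for unit o; rs is one of 'crs', 'nirs', 'ndrs', 'vrs'."""
    L, U = X[o] - x_min, y_max - Y[o]            # sided ranges of unit o
    w_in, w_out = np.zeros(m), np.zeros(s)       # zero-range convention: weight 0
    w_in[L > 0] = 1.0 / ((m + s) * L[L > 0])
    w_out[U > 0] = 1.0 / ((m + s) * U[U > 0])
    c = np.concatenate([np.zeros(n), -w_in, -w_out])   # maximize the weighted slack sum

    A_eq = np.zeros((m + s, n + m + s))
    A_eq[:m, :n], A_eq[:m, n:n + m] = X.T, np.eye(m)        # input equalities
    A_eq[m:, :n], A_eq[m:, n + m:] = Y.T, -np.eye(s)        # output equalities
    b_eq = np.concatenate([X[o], Y[o]])

    # Bound restrictions on the projection: sum_j l_j x_ij >= x_min_i, sum_j l_j y_rj <= y_max_r.
    A_ub = [np.hstack([-X.T, np.zeros((m, m + s))]),
            np.hstack([Y.T, np.zeros((s, m + s))])]
    b_ub = [-x_min, y_max]
    lam_row = np.concatenate([np.ones(n), np.zeros(m + s)])[None, :]
    if rs in ("nirs", "vrs"):                    # sum_j lambda_j <= 1
        A_ub.append(lam_row)
        b_ub.append([1.0])
    if rs in ("ndrs", "vrs"):                    # sum_j lambda_j >= 1
        A_ub.append(-lam_row)
        b_ub.append([-1.0])

    res = linprog(c, A_ub=np.vstack(A_ub), b_ub=np.concatenate(b_ub),
                  A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (n + m + s), method="highs")
    return 1.0 + res.fun                         # Gamma = 1 - optimal inefficiency

for rs in ("crs", "nirs", "ndrs", "vrs"):
    print(rs, [round(bam_rs_score(o, rs), 3) for o in range(n)])
```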

6 Two empirical illustrations

In order to illustrate the behavior of our general BAM model let us consider again the dataset reported in Table 1. The same dataset was used in Pastor et al. (1999) in order to compare, under VRS, the ERG/SBM (Enhanced Russell Graph Measure or Slack Based Measure) with the radial BCC input-oriented model and with the RAM measure.

Our first exercise consists of extending this comparison by adding the BAM-VRS measure. The results are shown in Table 2, where only the 59 VRS-inefficient agencies are reported, ordered by unit label.

Table 2 VRS case

When comparing the BAM-VRS measure with the remaining three measures, several comments are in order. At the bottom of Table 2, where only the inefficient units are reported, we find the average efficiency score and its standard deviation for each of the four considered measures, as well as the average inefficiency score. As expected, the RAM measure gets a very high average efficiency score (0.995) and a very low standard deviation (0.004). The statistics associated with the other two known measures are ordered as expected: the slack-based measure (ERG/SBM) accounts for twice as much inefficiency as the radial measure (BCC), with a higher dispersion in the corresponding efficiency scores. The BAM-VRS measure also detects more inefficiency than the BCC model, but not as much as ERG/SBM. In fact, BAM-VRS occupies a central position between the radial and ERG/SBM models. Moreover, BAM-VRS exhibits a standard deviation quite close to that of the BCC model, showing that, in our experimental setting, their discriminatory powers are quite similar.

Let us compare in more detail the BAM-VRS scores with each of the other scores. Considering the difference of each pair of scores, we find the following:

(1) Comparison of BAM-VRS with the BCC input-oriented model: as expected, the BCC radial scores are almost always higher than the BAM-VRS scores, with an average difference of 0.068 and a maximum difference of 0.186 (at unit 14). In only two of the 108 cases (units 59 and 86, see Table 2) are the radial scores less than the BAM-VRS scores, with differences as small as 0.013 and 0.022.

(2) Comparison of BAM-VRS with ERG/SBM: surprisingly enough, the average difference between ERG/SBM and BAM-VRS is also 0.068, but with a maximum difference of 0.330 (again at unit 14). In only one case (unit 34) does the BAM-VRS score exceed the ERG/SBM score, and only by 0.011.

(3) Comparison of BAM-VRS with RAM: the average difference between RAM and BAM-VRS is 0.183, which is really high, with a maximum value of 0.506 attained at unit 16 (the most inefficient BAM-VRS unit). In no case does the BAM-VRS score surpass the corresponding RAM score, and the minimum difference is 0.053, which occurs at unit 24, a highly efficient unit located close to the frontier in both cases. Moreover, the low discriminatory power of RAM can be detected by looking at the units that get the same RAM score but, almost always, different BAM-VRS scores. This happens, for instance, for the eighteen units with a RAM score of 0.996 or for the fifteen units with a RAM score of 0.997. In the first case, the corresponding BAM-VRS scores range from 0.641 to 0.943, while in the second the range is [0.780, 0.938]. A similar comparison holds when relating RAM to the other two considered measures.

Table 3 Correlation matrix of the four considered VRS measures

Finally, looking at the pairwise correlations between the four considered measures (see Table 3), we arrive at the following conclusions. First, the RAM measure is only slightly correlated with any of the other three measures. Second, the new BAM-VRS measure presents the highest correlations with the other two non-RAM measures.

Tables 4 and 5, which constitute our second example, compare the BAM measures obtained under the four proposed returns-to-scale cases for the corresponding bounded additive models, with the exception of the VRS case, where, as said before, the bounded and unbounded models are exactly the same. Tables 4 and 5 include only the 80 BAM-CRS non-efficient units. It is rather surprising that, in this dataset, all the BAM-CRS efficient units lie on the VRS efficient frontier.

Table 4 CRS non-efficient units with BAM-VRS equal to BAM-NDRS
Table 5 CRS non-efficient units with BAM-VRS equal to BAM-NIRS

In order to identify the different returns to scale regions (see Fig. 1) we have reordered the 80 considered units in the following way: First, the units that have the same BAM-NDRS and BAM-VRS scores, with the VRS scores in ascending order (see Table 4). Thereafter, the units that have the same BAM-NIRS and BAM-VRS scores, again with the VRS scores in ascending order (see Table 5).

Fig. 1 Four case pairwise comparisons of the frontiers of bounded and unbounded additive models

Several comments are in order. Looking at Table 4, we find that not only are BAM-VRS and BAM-NDRS equal at each unit, but so are BAM-CRS and BAM-NIRS. While the two extreme efficiency scores for BAM-CRS are 0.385 and 0.953, the corresponding values for BAM-VRS are 0.485 and 1. Moreover, the non-CRS region of the BAM-NDRS frontier accounts for 12 efficient units, which also belong to the VRS frontier (see the lower-left corner of Fig. 1c, d). Observe that only three units (labeled 91, 87 and 11) get the same efficiency score for all four BAM models, which means that their respective efficient projections are frontier points at the intersection of the CRS and VRS frontiers.

In Table 5, we identify the 26 units that have BAM-VRS equal to BAM-NIRS and also BAM-CRS equal to BAM-NDRS. Moreover, the part of the BAM-NIRS frontier that does not belong to the CRS frontier contains 9 efficient units, which are also VRS efficient (see the upper-right corner of Fig. 1b–d). These 9 units are rated with the same efficiency score by the BAM-CRS and BAM-NDRS models, which means that the non-CRS region of the BAM-NIRS model corresponds to part of the CRS region of the BAM-NDRS model (see Fig. 1a–c). In this part of Table 5, comprising 26 BAM-CRS inefficient units, none of the units is projected onto an efficient point that belongs simultaneously to the CRS and VRS frontiers.

7 Conclusion

This paper has introduced the BAM as a new measure of efficiency in connection with a new family of additive models, known as bounded additive models. The BAM avoids the lack of discriminatory power of the RAM and, as shown in our empirical example, detects inefficiency that moderately exceeds the inefficiency detected by the radial BCC model; the BAM scores are, on average, about halfway between the radial scores and the ERG/SBM scores.

A second relevant feature of the BAM measure is that it can be used under any of the standard returns-to-scale assumptions, provided the corresponding bounded additive model is considered. As a matter of fact, we have shown in Sect. 5, by means of a numerical example, that resorting to bounded models is compulsory if we want to obtain a well-defined efficiency measure [see (13)].

Finally, we would like to point out that the proposed BAM measure has a straightforward generalization through the addition of weights to the input and output slacks such that the sum of all the weights equals 1. This generalization constitutes a way of incorporating external value judgement about the relevance of each variable and has been proposed elsewhere (see, e.g., Cooper et al. (2007), p. 105).
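Under our reading of this remark, and purely as an illustrative sketch (the nonnegative weights \( w_{i}^{ - } \) and \( w_{r}^{ + } \) are our notation, not the authors'), this weighted generalization of (6) could be written as

$$ \Upgamma_{BAM}^{w} = 1 - \left( {\sum\limits_{i = 1}^{m} {w_{i}^{ - } {\frac{{s_{io}^{* - } }}{{L_{io}^{ - } }}}} + \sum\limits_{r = 1}^{s} {w_{r}^{ + } {\frac{{s_{ro}^{* + } }}{{U_{ro}^{ + } }}}} } \right),\quad \sum\limits_{i = 1}^{m} {w_{i}^{ - } } + \sum\limits_{r = 1}^{s} {w_{r}^{ + } } = 1, $$

which recovers (6) when every weight equals \( 1/(m + s) \).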