
1.1 Location Problems and Their Features

Location choices are as old as mankind. Individuals have always chosen their place of residence, starting with the appropriate cave, which would have to be in reasonable proximity to places in which food, and later work, could be found. Municipalities located central places to conduct business, such as the Greek agora or the Roman forum (or arenas for gladiator fights, for that matter), while leaders of countries located places for their administration, their armies, and, eventually, their burial sites (e.g., pyramids).

An interesting look at one of these early situations is provided by Wilamowsky et al. (1995), who demonstrated that Joshua’s (of Bible fame) choice of locations for three cities for “unintentional murderers” were actually optimal sites. Along similar lines, ReVelle and Rosing (2000) examined Emperor Constantine the Great’s choices for the locations of his armies in the fourth century AD. More specifically, there were eight regions of the Roman Empire, in which four existing field armies had to be positioned. The authors of the study used the concept of covering models to locate the field armies so as to require the smallest number of relocation steps in case an attack on any of the regions ensued. Actually, the emperor’s choice was not optimal with respect to ReVelle and Rosing’s proximity criterion; it appears that stationing two field armies in Rome was more of a political than a strategic decision. Even in literature we can find location problems, albeit somewhat hidden. A good example is Lewis Carroll’s A Tangled Tale, where in Knot II: Eligible Apartments, the author has one of his protagonists state: “‘One day-room and three bedrooms,’ said Balbus, as they returned to the hotel. ‘We will take as our day-room the one that gives us the least walking to do to get to it.’” In location analytic terms, we are looking for the location of a median (the concept will be defined below).

Since that time, a large number of contributions have described possible, realistic, real, and implemented location models. The contribution by Eiselt (1992) surveys some of the applications of the 1980s as a lead-in to a special issue on applications of location analysis. A survey that investigates and summarizes contributions related to agriculture is found in Lucas and Chhajed (2004). In addition to the many facilities that have been located throughout the decades, e.g., warehouses, blood banks, fire stations (a good taxonomy is provided by Başar et al. 2012), ambulances, etc. (many of these real location problems since 1990 are surveyed in Table 1.1 in this paper), there are some unusual location problems. Among them are the optimal location of disinfection stations for water supply networks (Tryby et al. 2002) and the prime locations for street robbers (and, presumably, cops, who will try to foil attempts of the robbers to ply their craft), see Bernasco et al. (2013).

Table 1.1 Summary of location applications 1990–2015

This discussion leads us immediately to the fact that many real location decisions do not rely on a single quantitative criterion, such as distance, but also take into account other, often intangible, criteria. In personal location decisions (e.g., choosing a place to live), these could be the perceived beauty of the area, the perceived quality of the neighborhood including the type of school district, and many others. In public location decisions, public support/opposition, environmental impact, and similar concerns are often found. Private location problems, on the other hand, are typically simpler, as profit or cost are the main concerns. These criteria are easily quantifiable and fit very well into a mathematical model.

This chapter will analyze some of the foundations of location models and determine classes of applications and their main features. In order to do so, we will very loosely and generally model a location problem, in which we have a metric space, in which customers (we use the term here in the widest possible sense), at least in the short to medium run, occupy fixed locations or move along known paths, and in which a decision maker will locate facilities based on known or perceived interactions between the facilities and the customers. These interactions will typically be transportation, either from a facility to a customer or vice versa.

Weber (1909) was among the first to look at such location problems from a mathematical point of view. More specifically, the appendix of his book, written by Georg Pick, minimizes the transportation costs between three customers and one facility, costs which are assumed to be proportional to the Euclidean distances between the points. Equivalently, this can be seen as finding the point in a triangle such that the sum of distances between the new point and the vertices of the triangle is as small as possible. This problem was first posed by Fermat (before 1640) and solved by Torricelli in 1645; a well-known iterative algorithm for it was described by Weiszfeld (1937). Since Weber’s seminal work, geographers and economists have debated the location of facilities (mostly firms in the early days). It is interesting to note that while many facilities that deal with the first sector of the economy (i.e., extraction, such as agriculture and mining) are located close to the source, many facilities of the third sector of the economy (the service sector) are located close to the customers. (We wish to note at this point that this assertion holds true for brick-and-mortar facilities; we do not include online firms or facilities in our discussion.) There are many reasons for this phenomenon: consider a firm in the first sector, e.g., a gold mine. Typically, we may expect about one ounce of gold in three tons of ore. Clearly, given this very low output-to-input ratio, nobody would transport the ore to sites close to the customer to extract the gold there. On the other hand, facilities in the service sector have extensive dealings with customers, so that their locations are quite naturally chosen in close proximity to the customers.
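To make the iterative idea concrete, the following is a minimal sketch of Weiszfeld’s procedure for the weighted 1-median (Weber) problem in the plane. The demand points, weights, and stopping tolerance are illustrative assumptions, not data from any of the studies cited above.

```python
import math

def weiszfeld(points, weights, tol=1e-7, max_iter=1000):
    """Approximate the weighted 1-median (Weber point) of planar demand points.

    points  : list of (x, y) customer coordinates
    weights : list of positive demand weights w_i
    Returns an (x, y) point that (approximately) minimizes sum_i w_i * distance_i.
    """
    # Start from the weighted centroid, a common initial guess.
    W = sum(weights)
    x = sum(w * px for (px, _), w in zip(points, weights)) / W
    y = sum(w * py for (_, py), w in zip(points, weights)) / W

    for _ in range(max_iter):
        num_x = num_y = denom = 0.0
        for (px, py), w in zip(points, weights):
            d = math.hypot(x - px, y - py)
            if d < 1e-12:          # iterate coincides with a demand point
                return (px, py)    # simple safeguard; refinements exist
            num_x += w * px / d
            num_y += w * py / d
            denom += w / d
        new_x, new_y = num_x / denom, num_y / denom
        if math.hypot(new_x - x, new_y - y) < tol:
            return (new_x, new_y)
        x, y = new_x, new_y
    return (x, y)

# Example: three customers at the vertices of a triangle, equal weights.
print(weiszfeld([(0, 0), (4, 0), (2, 3)], [1, 1, 1]))
```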

Consider now a typical scenario, in which multiple facilities of one type exist. In such a case, there are two issues to be addressed: on the one hand, does the firm or the customer choose the facility the customer is dealing with? The second issue concerns the facility-customer interaction. In particular, we distinguish the case in which the firm is responsible for the interaction (e.g., the transport), and the case in which this interaction is handled by the customer. This leaves us with four cases. In the first class of models, the firm chooses the facility the customer is served from and the firm also deals with the interaction. Typical examples of this case are trucking, waste collection, ambulances, and many others. Models with this feature are often referred to as allocation (as the firms allocate facilities to their customers) or shipping models. On the other hand, consider models in which customers choose the facility they are dealing with and are also responsible for the interaction with the chosen facility. Models in this category include retail stores, libraries, ATMs, and other service-related facilities. Models of this type are frequently called customer choice models or shopping models. Much less common are “mixed models,” in which one agent chooses the facility, while the other is responsible for the interaction. In the first such case, the firm (in the widest sense, typically a government agency) chooses the facility customers are to patronize, while customers are responsible for the interaction. Scenarios of this type include polling stations, public schools, and similar facilities. Finally, there are rather few cases in which customers choose the facility, while the firm takes care of the interaction. At first glance, it appears that a customer’s choice of the branch of a retail chain, e.g., a furniture store, falls into that category, as the store may then ultimately take care of the interaction with the customer by delivering the goods. While this is true, the customer cannot choose (and has no interest in choosing) the warehouse from which the furniture is delivered to him. This is then again a standard allocation/shipping model.

Another possible way to distinguish between location models was suggested by ReVelle et al. (1970), see also ReVelle and Eiselt (2005), where the authors distinguish between the location of public and private facilities. In some cases, such a distinction will not reveal any insight. Consider, for instance, the location of a library, typically a public facility. The city, or whichever government department is responsible for the planning, will try to make the library accessible to as many people as possible, so that the average facility-customer (or, in this case, potential patron) distance is as short as possible. On the other hand, a firm that plans the location of a regional distribution center will locate the center in such a way that the sum of distances from the center to the customers is minimized, as this can be seen as a proxy for cost. (Note that this argument is not as straightforward as it appears: first, it assumes that special trips are made to the customers rather than milk runs that serve multiple customers on a single trip, which would result in much more difficult location-routing problems, and secondly that there is an established cost per ton-mile for trucking, such as the $1.38 illustrated in The Trucker’s Report 2015). In other words, the decision maker’s behavior will be the same (locating the facility in close proximity to the customers), while the underlying motives differ: accessibility for the library planner, and cost minimization for the trucking firm. On the other hand, the public vs. private distinction may be quite relevant in other respects: typically, public decision makers face many stakeholders in their decision making and, as a result, many conflicting objectives and criteria are to be included. Private decision makers, in contrast, can much more easily agree on (profit maximizing or cost minimizing) objectives, which are much easier to incorporate in optimization models given the relative ease with which they can be quantified.

As outlined above, location models deal with the spatial separation and the interaction of customers and facilities. The separation of facilities and customers is expressed in terms of their distance (shortest paths in networks and some Minkowski distance in the plane are the most frequently used measures). These distances can then appear in the objective function of a mathematical optimization problem, as is the case in typical median or center problems, or they appear in the constraints, as is the case in covering models. Naturally, mixed forms are possible and have been used. In addition to the usual models, in which proximity of facilities to customers is either desired (they minimize a function of the distances, where customers “pull” a facility towards themselves, see, e.g., Eiselt and Laporte 1995) or undesired (where the customers “push” the facility away from their location, in case the facility is deemed undesirable), there are models in which only the distances between facilities are relevant. Typical examples are dispersion and defender models; see, e.g., Daskin (2008).

1.2 A Locator’s Toolkit

This section will delineate some tools locators have at their disposal. We will first survey generic formulations of a number of the standard location models. We then discuss a few of the approaches to multiobjective optimization, and finally, we will review some of the main techniques in multicriteria decision making.

In order to facilitate our discussion, we first distinguish between two main categories of location problems: discrete problems and continuous problems. Whereas there exists only a finite number of potential locations in discrete problems, there is an infinite number of locations to choose from in continuous problems. If a discrete problem is formulated on a network, the finite set of potential locations is often equal to (or a subset of) the set of nodes. This is, however, not a necessary requirement. The location variables, i.e., the variables that indicate where to locate a facility in the given space, are simply defined as zero-one variables in discrete problems: for each potential facility location, a variable is created that assumes the value of one, if we decide to locate at that site, and zero otherwise. Such an approach is obviously not feasible in continuous problems, in which an infinite number of potential locations exists. In such cases, in which a facility can be located anywhere in the plane or, more generally, in any d-dimensional space, or at any point of a network, the facility location can be determined by the coordinates of the facility in that space, or by the distance of the location from some existing point. To simplify our exposition, we will describe the main location models as discrete location problems.

In order to do so, first define a vector of location variables \(\mathbf{y}=({{y}_{1}},{{y}_{2}},\ldots,{{y}_{n}})\) that indicates whether or not a facility will be located at each of the potential locations. In addition to the location variables \({{y}_{j}}\), we need to refer to customers \(i=1,\ \ldots,m\) and, at least whenever multiple facilities are to be located, allocation variables \(\mathbf{X}=({{x}_{ij}})\), where \({{x}_{ij}}=1\), if customer i is served by facility j (or, more precisely, if the customers at site i are served by a facility at site j). Furthermore, define \(\mathbf{w}=({{w}_{i}})\) as a vector of weights (which, for instance, denote the number of customers at customer site i), \(\mathbf{D}=({{d}_{ij}})\) as the matrix of distances between customer sites i and potential facility sites j, and \(\mathbf{f}\,=\,({{f}_{j}})\) as the vector of fixed costs of a facility, if it were to be constructed at site j. We also need \(\mathbf{g}(\mathbf{X},\mathbf{y})\) as a set of constraints and S as the set of feasible solutions. We can then easily describe some of the main types of location problems.

First, consider the simple plant location problem (SPLP) and its close cousin, the capacitated plant location problem (CPLP). The main idea is to locate a number of facilities (their number is determined endogenously), so as to minimize the sum of facility location and allocation, i.e., transportation, costs. Typically, the constraints will ensure that all customers receive service and that customers can only be served from existing facilities. The closely related capacitated version of the problem adds realism by associating a capacity with each facility. (Actually, it would also be possible to consider a finite number of possible capacities for different versions of the facilities, each of which would then have its own cost). While the introduction of capacities appears to be a minor step in the sense that it just adds a few constraints to the problem, the consequences are much more severe. While the optimizer will automatically assign each customer to its closest facility in the uncapacitated problem (by virtue of the minimization objective), this may no longer be possible once capacities are introduced. This also gives rise to two different versions of the capacitated problem: the single-source version, in which all customers at one site are served by the same facility, and the multiple-source version, in which this is not the case. In some applications, the single-source problem is more realistic, for instance in waste collection, in which all garbage collected in one town is hauled to the same landfill or transfer station.

The simple plant location problem and capacitated plant location problems can be formulated as follows:

$$ \begin{aligned} SPLP/CPLP:\quad \min\ & \mathbf{wDX}+\mathbf{fy}\\ \text{s.t.}\quad & \mathbf{g}(\mathbf{X},\mathbf{y})\in S\\ & \mathbf{X}\in {{\left\{0,1 \right\}}^{m\times n}}\\ & \mathbf{y}\in {{\left\{0,1 \right\}}^{n}}. \end{aligned} $$
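As an illustration of the notation, the following is a minimal sketch that solves a tiny SPLP instance by enumerating all subsets of candidate sites and assigning each customer to its cheapest open facility. The cost data are made up for illustration; real instances would of course call for an integer programming solver rather than enumeration.

```python
from itertools import combinations

def solve_splp(fixed_costs, cost):
    """Brute-force simple plant location problem (tiny instances only).

    fixed_costs : f[j], cost of opening facility j
    cost        : cost[i][j] = w_i * d_ij, cost of serving customer i from facility j
    Returns (best total cost, set of open facilities).
    """
    n = len(fixed_costs)
    best_cost, best_open = float("inf"), None
    # Enumerate every nonempty subset of candidate sites.
    for k in range(1, n + 1):
        for open_sites in combinations(range(n), k):
            setup = sum(fixed_costs[j] for j in open_sites)
            # With no capacities, each customer simply uses its cheapest open facility.
            service = sum(min(row[j] for j in open_sites) for row in cost)
            if setup + service < best_cost:
                best_cost, best_open = setup + service, set(open_sites)
    return best_cost, best_open

# Illustrative data: 4 customers, 3 candidate sites.
f = [10, 12, 8]
c = [[2, 9, 7],
     [6, 3, 8],
     [5, 4, 9],
     [8, 7, 1]]
print(solve_splp(f, c))
```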

The SPLP/CPLP is actually a general version of a number of other well-known problems. The best-known such problem is the p-median problem (p-MP). Loosely speaking, it requires us to locate a given number p of facilities, so that, given that each customer is served by its closest facility, the total transportation costs are minimized. Like the SPLP and CPLP before it, this model assumes that each service requires a special trip. It can be formulated as

$$ \begin{aligned} p\text{-MP}:\quad \min\ & \mathbf{wDX}\\ \text{s.t.}\quad & \mathbf{ey}=p\\ & \hat{\mathbf{g}}(\mathbf{X},\mathbf{y})\in S\\ & \mathbf{X}\in {{\left\{0,1 \right\}}^{m\times n}}\\ & \mathbf{y}\in {{\left\{0,1 \right\}}^{n}}, \end{aligned} $$

where \(\mathbf{e}=(1,\text{ }1,\text{ }\ldots,\text{ }1 )\) is the vector of ones and \(\mathbf{\hat{g}}\) refers to the appropriate constraints. As a matter of fact, the p-MP can be derived from the SPLP by setting \(\mathbf{f}\,=\,\mathbf{0}\) and introducing the additional constraint \(\mathbf{ey}=p\). In other words, while the number of facilities in the SPLP/CPLP is endogenous, it is exogenous in the p-MP.
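A minimal sketch of a greedy “add” heuristic for the p-median problem follows: starting from no facilities, it repeatedly opens the site that most reduces the total weighted distance. The distance matrix and weights are illustrative; this is one of many possible heuristics and does not guarantee an optimal solution.

```python
def greedy_p_median(weights, dist, p):
    """Greedy add heuristic for the p-median problem.

    weights : w[i], demand at customer site i
    dist    : dist[i][j], distance from customer i to candidate site j
    p       : number of facilities to open
    Returns (open sites, total weighted distance to the closest open facility).
    """
    m, n = len(dist), len(dist[0])
    open_sites = []
    for _ in range(p):
        best_j, best_val = None, float("inf")
        for j in range(n):
            if j in open_sites:
                continue
            trial = open_sites + [j]
            # Each customer is served by its closest facility in the trial set.
            val = sum(weights[i] * min(dist[i][k] for k in trial) for i in range(m))
            if val < best_val:
                best_j, best_val = j, val
        open_sites.append(best_j)
    return open_sites, best_val

# Illustrative data: 5 customers, 4 candidate sites, locate p = 2 facilities.
w = [3, 1, 2, 4, 2]
d = [[1, 4, 6, 9],
     [3, 2, 5, 8],
     [6, 3, 1, 4],
     [8, 6, 2, 2],
     [9, 8, 4, 1]]
print(greedy_p_median(w, d, 2))
```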

The anti p-median problem (also referred to as the maxian) is very similar to the p-median problem, except that it attempts to maximize the average distance between facilities and customers. It was designed for hazardous or noxious facilities, which customers would like to push as far away as possible from their own location. However, formulating the anti-median problem takes more than simply replacing the “Min” by a “Max” function in the p-MP. The reason is that the “Min” function automatically allocates each customer to its closest facility. Replacing it by a “Max” objective will result in customers automatically being allocated to their farthest facilities. However, what the planner is trying to do is to push the closest, not the farthest, facility away from himself, as it is the closest facility that pollutes his location most. In order to formulate this, we need \(O(n^{2})\) additional constraints that guarantee the appropriate allocation.
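One common way of writing such closest-assignment constraints (this particular form is our own illustration; the exact formulation may differ from source to source) is

$$ \sum_{k:\ {{d}_{ik}}\le {{d}_{ij}}}{{x}_{ik}}\ \ge\ {{y}_{j}}\qquad \text{for all customers } i \text{ and candidate sites } j, $$

which states that whenever a facility is open at site j, customer i must be allocated to some open facility that is at least as close as j; combined with the “Max” objective, the model then pushes the closest open facility, rather than the farthest one, away from the customers.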

Consider now the location set covering problem (LSCP). Its purpose is to minimize the number of facilities that are to be located, while ensuring that each customer has a facility within a predetermined service distance. Many such covering problems are found in the context of the location of emergency facilities, such as fire halls, hospitals, police stations, and others. The LSCP can be seen as a special case of the SPLP by deleting the allocation variables X and setting the fixed setup costs of all potential locations equal to each other, i.e., \(\mathbf{f}=\mathbf{e}\). The problem can then be written as

$$ \begin{aligned} LSCP:\quad \min\ & \mathbf{ey}\\ \text{s.t.}\quad & \mathbf{g}'(\mathbf{y})\in S\\ & \mathbf{y}\in {{\left\{0,1 \right\}}^{n}}. \end{aligned} $$
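A minimal greedy sketch for the LSCP follows: in each step, it opens the candidate site that covers the largest number of still-uncovered customers. The coverage sets are illustrative, and the greedy rule only approximates the optimal (integer programming) solution.

```python
def greedy_lscp(num_customers, covers):
    """Greedy heuristic for the location set covering problem.

    num_customers : customers are indexed 0 .. num_customers - 1
    covers        : covers[j] = set of customers within the service distance of site j
    Returns the list of opened sites (covering every customer, if possible).
    """
    uncovered = set(range(num_customers))
    open_sites = []
    while uncovered:
        # Pick the site covering the most still-uncovered customers.
        j = max(range(len(covers)), key=lambda s: len(covers[s] & uncovered))
        if not covers[j] & uncovered:
            raise ValueError("some customers cannot be covered by any site")
        open_sites.append(j)
        uncovered -= covers[j]
    return open_sites

# Illustrative data: 6 customers, 4 candidate sites.
coverage = [{0, 1, 2}, {2, 3}, {3, 4, 5}, {0, 5}]
print(greedy_lscp(6, coverage))
```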

In many applications, covering each customer is not feasible, especially in situations in which customers are very spread out in the given space. Such cases gave rise to the max cover problem (MCP); see, e.g., Church and ReVelle (1974). In this problem, the decision maker specifies p, the number of facilities to be located, and, given that each customer node has a weight associated with it, the objective is to maximize the total weight that can be covered within a given distance from the facilities. The problem can be formulated as follows:

$$ \begin{aligned} MCP:\quad \max\ & \mathbf{wz}\\ \text{s.t.}\quad & \mathbf{ey}=p\\ & \mathbf{g}(\mathbf{y},\mathbf{z})\in S\\ & \mathbf{y}\in {{\left\{0,1 \right\}}^{n}},\ \mathbf{z}\in {{\left\{0,1 \right\}}^{m}}, \end{aligned} $$

where z is a vector of zero-one variables whose i-th component equals 1, if customer i is within a prescribed distance from its closest facility, and 0 if it is not.
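Analogously, a minimal greedy sketch for the MCP opens p sites one at a time, each time choosing the site that adds the most newly covered weight. Again, the data are illustrative and the greedy rule only approximates the optimum.

```python
def greedy_mcp(weights, covers, p):
    """Greedy heuristic for the max cover problem.

    weights : w[i], demand weight of customer i
    covers  : covers[j] = set of customers within the coverage distance of site j
    p       : number of facilities to locate
    Returns (open sites, total covered weight).
    """
    covered, open_sites = set(), []
    for _ in range(p):
        # Choose the site adding the largest newly covered weight.
        gains = [sum(weights[i] for i in covers[j] - covered) for j in range(len(covers))]
        j = max(range(len(covers)), key=lambda s: gains[s])
        open_sites.append(j)
        covered |= covers[j]
    return open_sites, sum(weights[i] for i in covered)

# Illustrative data: 6 customers, 4 candidate sites, p = 2.
w = [5, 2, 3, 4, 1, 6]
coverage = [{0, 1}, {1, 2, 3}, {3, 4}, {4, 5}]
print(greedy_mcp(w, coverage, 2))
```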

There are other problems, such as center problems (in which the objective is to locate facilities so that the longest customer-facility distance is as short as possible) and anti-center problems (in which the shortest customer-facility distance is to be as long as possible), but very few of them have been applied in practice. One difficulty associated with these models is their exclusive focus on the worst case. Potentially, objectives of this type can be applied as secondary objectives in cases in which the worst case is to be avoided if at all possible, such as in hazmat transportation or similar instances.

Many practical location problems have more than a single objective. In general, we will distinguish between multiobjective (linear) optimization problems (MOLP), which are typically mixed-integer optimization problems, and multicriteria decision-making problems (MCDM), which feature a small number of decision alternatives whose consequences are measured on a number of different criteria. Among multiobjective optimization problems, vector optimization is a prominent approach. First described by Zeleny (1974), vector optimization problems are designed to generate nondominated solutions, i.e., feasible solutions that cannot be improved upon in any objective without deteriorating at least one other objective. Since the generation of all nondominated solutions is not practical, techniques have been devised to approximate the set of nondominated solutions. Cohon (1978) described a number of techniques for that purpose. Prominent among them are the weighting method and the constraint method. The weighting method associates positive weights with each of the objectives (typically explained as measuring tradeoffs between units of the objectives) and then constructs a composite objective as the sum of the weighted individual objectives. The resulting single-objective problem can then be solved by the pertinent methods. The constraint method keeps one objective as the objective of the model, while all other objectives are transformed into constraints by imposing upper or lower limits on them. These limits can be, and usually are, parametrically modified in a sequence of sensitivity analyses. While the weighting method will always find an extreme point of the original feasible set, the constraint method may not.
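To illustrate the two methods in generic notation of our own (two maximization objectives \(f_{1}\) and \(f_{2}\) over a feasible set S), the weighting method and the constraint method solve

$$ \max_{\mathbf{x}\in S}\ {{\lambda }_{1}}{{f}_{1}}(\mathbf{x})+{{\lambda }_{2}}{{f}_{2}}(\mathbf{x})\ \ ({{\lambda }_{1}},{{\lambda }_{2}}>0)\qquad \text{and}\qquad \max_{\mathbf{x}\in S}\ {{f}_{1}}(\mathbf{x})\ \ \text{s.t.}\ {{f}_{2}}(\mathbf{x})\ge \varepsilon, $$

respectively, where the weights \({{\lambda }_{1}},{{\lambda }_{2}}\) and the bound \(\varepsilon \) are varied systematically to trace out an approximation of the nondominated set.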

Another approach to multiobjective optimization problems is goal programming. The approach is based on Charnes et al. (1955) and Charnes and Cooper (1961); Ignizio (1982) was one of its main proponents. Its basic premise is as follows. Constraints are rarely as rigid in real life as they are in a mathematical optimization problem. For instance, a budget typically requires that the amount of money spent does not exceed the amount available. Inserted into an optimization problem, such a constraint is absolute and cannot be violated. In practice, though, it is, of course, possible to borrow money, albeit at a cost (interest). This feature is captured in goal programming by using so-called deviational variables \(d_{k}^{+}\) and \(d_{k}^{-}\) for positive and negative deviations from a set target value \({{t}_{k}}\), respectively. The variables \(d_{k}^{+}\) and \(d_{k}^{-}\) are commonly interpreted as “overachievement” and “underachievement,” respectively. As an illustration, consider a covering problem, in which it is desired to cover each customer at least once. Each customer who is not covered causes a penalty of, say, 5, while each customer who is covered more than once causes a penalty of 1 for each unit of excess coverage. The reason for the latter lies in “equity” considerations, which, at least in public decision making, are often considered rather important. As in the MCP above, let \({{y}_{j}}\) denote the zero-one location variables, which equal 1, if a facility is located at site j, and 0 otherwise. In addition, let the vector \({{\mathbf{a}}_{i}}_{\bullet }=({{a}_{ij}})\) equal 1 in its j-th component, if customers at site i can be covered by a facility at site j. Then \({{\mathbf{a}}_{i}}_{\bullet }\mathbf{y}\) equals the number of times customers at site i are covered, and we can formulate the goal constraint

$$ {{\mathbf{a}}_{i\bullet }}\mathbf{y}+d_{k}^{-}-d_{k}^{+}=1, $$

where the subscript of the deviational variables simply indicates that this is the k-th goal. As usual, we assume that the deviational variables satisfy nonnegativity constraints. Hence, if customer i is not covered, \({{d}^{-}}=1\) and \({{d}^{+}}=0\), indicating that there is a coverage deficit of 1. On the other hand, if customer i is covered twice, \({{d}^{+}}=1\) and \({{d}^{-}}\,=\,0\), indicating that there is an excess coverage of 1. In case customer i is covered exactly once, both deviational variables will assume the same value. Considering the penalties of 5 and 1 for underachievement and overachievement of the target value, respectively, the contribution of this goal constraint to the objective function is then

$$ \min{}\,\ldots +5{{w}_{i}}{{d}^{-}}+1{{w}_{i}}{{d}^{+}}\ldots$$

This part of the objective will penalize each solution that does not cover site i and its \({{w}_{i}}\) customers by 5 for each customer, while each customer generates one unit of penalty each time he is covered more than once. Furthermore, in case the target of single coverage is satisfied, both deviational variables will not only be equal, but will equal 0, as any other solution with \({{d}^{+}}={{d}^{-}}>0\) would have a higher value of the objective function in the minimization.

Finally, goal programming problems come in a variety of versions. A popular, albeit somewhat problematic, version features preemptive priorities, where a higher-ranking goal is considered “infinitely more important” than a goal on the next lower level, so that no finite tradeoff exists. Another important issue concerns commensurability, as deviational variables from different constraints are added in the objective. Care must be taken that they are transformed into comparable units.

The last multiobjective optimization technique reviewed here is fuzzy programming. Fuzzy logic was pioneered by Bellman and Zadeh (1970). It deals with the vagueness of an achievement, such as “this machine will cost a lot more than $250,000.” In order to deal with this vagueness, a fuzzy membership function is set up that assigns a degree of membership to each achievement or outcome, for which a lower and an upper bound are specified by the decision maker. The value of this membership function is a number between 0 and 1; it assumes a value of 1, if the (maximization) objective achieves the upper bound or more, a value of 0, if the objective falls short of the lower bound, and a value in between for achievements between the two bounds. A maximin function with the membership values as achievements then serves as the single objective, and the resulting problem can be solved with the pertinent methods.
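A minimal sketch of such a piecewise linear membership function, and of the resulting maximin aggregation over several objectives, follows; the bounds and achievement values are illustrative assumptions.

```python
def membership(value, lower, upper):
    """Piecewise linear fuzzy membership for a maximization objective:
    0 below the lower bound, 1 above the upper bound, linear in between."""
    if value <= lower:
        return 0.0
    if value >= upper:
        return 1.0
    return (value - lower) / (upper - lower)

def maximin_score(achievements, bounds):
    """Fuzzy maximin aggregation: the overall score of a solution is the
    smallest of its membership values across all objectives."""
    return min(membership(a, lo, hi) for a, (lo, hi) in zip(achievements, bounds))

# Illustrative example: two objectives with bounds (100, 200) and (0.4, 0.9).
print(maximin_score([150, 0.8], [(100, 200), (0.4, 0.9)]))  # -> min(0.5, 0.8) = 0.5
```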

Consider now multicriteria decision-making problems, in which a decision maker faces a “payoff” matrix \(\mathbf{C}=({{c}_{ij}})\), so that \({{c}_{ij}}\) denotes the achievement of decision i on criterion j. For simplicity, it is often assumed that all criteria are of the “maximization” variety, i.e., higher values indicate better solutions. In the context of location models, the decisions (rows) refer to the potential locations for the (single) facility. An interesting reference that illustrates the use of multicriteria decision analysis in location problems is Larichev and Olson (2001). The book includes models that site facilities for hazmat disposal, locate pipelines, and select municipal solid waste systems.

A simple generic method for multicriteria decision-making problems is as follows. Suppose that the decision maker were able to specify weights \({{\lambda }_{\ell }}\) for all criteria \(\ell \); then aggregate achievements could be determined by calculating the weighted average achievement for each decision, which allows the decision maker to choose the decision with the highest average achievement. Some refinements are easily incorporated: a decision maker could delete decisions whose achievements on specific crucial criteria fall short of a prescribed threshold, or achievements beyond some upper bound may no longer be considered relevant as contributing towards the aggregated average. Clearly, any version of this technique will have to be followed by extensive sensitivity analyses.
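A minimal sketch of this generic weighted-average method, including the screening of decisions that fall below a threshold on a crucial criterion; the payoff matrix, weights, and thresholds are made-up illustrations.

```python
def weighted_scores(payoff, weights, thresholds=None):
    """Score each decision (row of the payoff matrix) as the weighted sum of its
    criterion achievements; decisions violating a minimum threshold on any
    criterion are discarded (score of None)."""
    scores = []
    for row in payoff:
        if thresholds and any(t is not None and v < t for v, t in zip(row, thresholds)):
            scores.append(None)        # screened out
        else:
            scores.append(sum(w * v for w, v in zip(weights, row)))
    return scores

# Illustrative data: 3 candidate sites, 3 "maximization" criteria.
C = [[7, 5, 9],
     [8, 2, 6],
     [6, 8, 7]]
print(weighted_scores(C, weights=[0.5, 0.3, 0.2], thresholds=[None, 3, None]))
```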

The next class of techniques to deal with multicriteria decision problems are the so-called reference point methods. One technique in this class is TOPSIS, first described by Hwang and Yoon (1981). The idea of these techniques is to specify a norm or target value and then measure how far the individual decisions are from this reference. In addition to a metric to measure the distance, this requires the specification of tradeoffs between criteria. Note the similarity of this approach to goal programming and its target values. A number of interesting approaches in this class are described by Eiselt and Marianov (2014).
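A minimal TOPSIS-style sketch, assuming all criteria are of the “maximization” type and using Euclidean distances to the ideal and anti-ideal points; the payoff matrix and weights are illustrative.

```python
import math

def topsis(payoff, weights):
    """Rank decisions by relative closeness to the ideal solution (TOPSIS).

    payoff  : c[i][j], achievement of decision i on (maximization) criterion j
    weights : importance weights for the criteria (assumed to sum to 1)
    Returns a list of closeness scores, one per decision (higher is better).
    """
    m, n = len(payoff), len(payoff[0])
    # Vector-normalize each criterion column, then apply the weights.
    norms = [math.sqrt(sum(payoff[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * payoff[i][j] / norms[j] for j in range(n)] for i in range(m)]
    ideal = [max(v[i][j] for i in range(m)) for j in range(n)]
    anti = [min(v[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        d_plus = math.sqrt(sum((v[i][j] - ideal[j]) ** 2 for j in range(n)))
        d_minus = math.sqrt(sum((v[i][j] - anti[j]) ** 2 for j in range(n)))
        scores.append(d_minus / (d_plus + d_minus))
    return scores

# Illustrative data: 3 candidate sites, 3 criteria.
C = [[7, 5, 9],
     [8, 2, 6],
     [6, 8, 7]]
print(topsis(C, [0.5, 0.3, 0.2]))
```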

In contrast to reference point methods, which compare decisions and their outcomes to exogenously given standards, data envelopment analysis (DEA) compares decisions with each other and determines whether or not they are efficient. In order to do so, analysts first have to group the input factors (such as costs, required manpower, and others) as well as the output factors of the locations (employment, pollution, and others) together. Then a set of simple linear programming problems will determine whether or not a decision (we will consider one decision at a time) is efficient compared to the other decisions. Note that this approach only demonstrates relative efficiency, as a decision may be the best of the lot, while it may, in the grand scheme of things, still be quite terrible and unacceptable.

Preference cones are an interesting and relatively easy-to-incorporate concept in multicriteria decision making. The idea is that a decision \(i\) dominates a decision \(\ell \), if the aggregated outcome of the i-th decision with any set of nonnegative, finite weights (as in the generic method described above) is no less than the aggregated outcome of the \(\ell \)-th decision for the same weights. The advantage of this method is that it requires no numerical input from the decision maker. However, it may very well turn out that no dominances exist, so that no reduction of the system is possible. We are not aware of any applications in location analysis that have applied this concept.

Another approach is taken by outranking methods. In essence, they apply a weak dominance concept and enable a persuasive visualization. Outranking methods were first suggested by Benayoun et al. (1966), with Roy’s (1971) ELECTRE methods (many other versions exist) providing the first workable concept. The basic technique requires that the decision maker specify weights for each of the criteria that indicate their relative importance. The technique then compares each pair of decisions and determines a concordance matrix and a discordance matrix on the basis of the weights. More specifically, the concordance of one decision over another equals the sum of the weights of all criteria on which the former decision is preferred over the latter. The discordance matrix can be constructed along similar lines. A graph, in which each node represents a decision, then has an arc leading from a node i to a node \(\ell \), if the concordance of decision i over decision \(\ell \) exceeds a prespecified threshold and the discordance of that relation does not exceed a prespecified threshold.
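A minimal sketch in the spirit of ELECTRE I follows: it computes concordance and discordance indices for every ordered pair of decisions and then derives the outranking arcs from two thresholds. The payoff matrix, weights, and thresholds are illustrative assumptions, not a prescription of any particular ELECTRE variant.

```python
def outranking_arcs(payoff, weights, c_threshold, d_threshold):
    """ELECTRE-I-style outranking sketch (all criteria to be maximized).

    Concordance of i over l : sum of weights of criteria where i is at least as good as l.
    Discordance of i over l : largest shortfall of i versus l, scaled by the criterion range.
    An arc i -> l is kept if the concordance is high enough and the discordance low enough.
    """
    m, n = len(payoff), len(payoff[0])
    w_total = sum(weights)
    ranges = [max(row[j] for row in payoff) - min(row[j] for row in payoff) or 1.0
              for j in range(n)]
    arcs = []
    for i in range(m):
        for l in range(m):
            if i == l:
                continue
            conc = sum(weights[j] for j in range(n) if payoff[i][j] >= payoff[l][j]) / w_total
            disc = max(max(0.0, payoff[l][j] - payoff[i][j]) / ranges[j] for j in range(n))
            if conc >= c_threshold and disc <= d_threshold:
                arcs.append((i, l))
    return arcs

# Illustrative data: 3 candidate sites, 3 criteria.
C = [[7, 5, 9],
     [8, 2, 6],
     [6, 8, 7]]
print(outranking_arcs(C, weights=[0.5, 0.3, 0.2], c_threshold=0.5, d_threshold=0.5))
```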

Based on earlier work by Brans in the early 1980s, Brans and Vincke’s (1985) PROMETHEE method takes a different route. On each criterion separately, the method determines the differences of the achievements for each pair of decisions. This difference is then translated into a preference by way of a preference function. These preference functions can be step functions, piecewise linear functions, or nonlinear functions, not unlike decay functions in covering models (see, e.g., Church and Roberts 1983, or Berman and Krass 2002). Given the weights of the criteria specified by the decision maker, these preferences are then aggregated. The matrix of these aggregated preferences is then the basis on which the graphs for the different versions of PROMETHEE are constructed. Karagiannidis and Moussiopoulos (1997) and Hokkanen and Salminen (1997) are among the authors who have suggested applying outranking methods to location problems.

Finally, techniques that allow inconsistent evaluations by decision makers have become quite popular. Most prominent among them is the analytic hierarchy process (AHP) and its successor, the analytic network process (ANP). These techniques are based on the seminal work by Saaty (1980). There is a large number of studies that have suggested using the method for actual applications; see, e.g., Aragonés-Beltrán et al. (2010) and Tavares et al. (2011). The technique requires decision makers to specify their degree of preference between each pair of decisions on each criterion separately. While reciprocity must be satisfied, asserting that, if decision i is, say, considered three times as good as decision \(\ell \), then decision \(\ell \) must be considered only 1/3 as good as decision i, transitivity need not hold (i.e., the decision maker may determine that decision i is twice as good as decision \(\ell \), which, in turn, is three times as good as decision r, while claiming that decision i is five times as good as decision r on the same criterion). After this substantial input, the method provides the decision maker with two outcomes: first, it determines the degree of consistency of the decision maker’s input (and if it falls short of an agreed-upon threshold, the decision maker will have to rethink his evaluations), and secondly, it provides a final ranking of the decisions under consideration. There has been much discussion concerning the theoretical underpinnings of the method, and some alternatives have been suggested, such as the geometric mean method. Surveys of this and many other methods can be found in Olson (1996) and in Eiselt and Sandblom (2004).
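A minimal AHP-style sketch that derives priorities from a pairwise comparison matrix via the row geometric mean mentioned above and reports an approximate consistency ratio; the comparison matrix is illustrative, and the random-index values are the commonly quoted Saaty figures.

```python
import math

# Commonly quoted random-index values (Saaty) for matrix orders 1..10.
RANDOM_INDEX = [0.0, 0.0, 0.58, 0.90, 1.12, 1.24, 1.32, 1.41, 1.45, 1.49]

def ahp_priorities(A):
    """Derive priority weights from a reciprocal pairwise comparison matrix
    using the row geometric mean, and compute an approximate consistency
    ratio (only meaningful for matrices of order 3 or more)."""
    n = len(A)
    # Row geometric means, normalized to sum to 1.
    gm = [math.prod(A[i]) ** (1.0 / n) for i in range(n)]
    total = sum(gm)
    w = [g / total for g in gm]
    # Approximate principal eigenvalue lambda_max, then CI and CR.
    Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    lambda_max = sum(Aw[i] / w[i] for i in range(n)) / n
    ci = (lambda_max - n) / (n - 1)
    cr = ci / RANDOM_INDEX[n - 1]
    return w, cr

# Illustrative comparison of three candidate sites on a single criterion.
A = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 2.0],
     [1/5, 1/2, 1.0]]
weights, consistency_ratio = ahp_priorities(A)
print(weights, consistency_ratio)  # CR below roughly 0.1 is usually deemed acceptable
```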

1.3 Location Applications in the Literature

This section will summarize applications of location analysis that have been reported in the literature. While there are many contributions that use real data to test their models and methods, not all of them are actual applications. For the purpose of this paper, we would like to distinguish between realistic applications and real applications. Realistic applications are papers whose main focus is on the description of a new model or method, which is subsequently tested on data gleaned from real life. A real application’s main focus, on the other hand, is the actual application. Clearly, the distinction between realistic and real applications is fuzzy, which may explain why we have not included some papers that others may deem true applications.

In our discussion, it will be beneficial to distinguish between the firm’s view and the (collective) customers’ view. More specifically, while in both cases it is the firm (or government agency or whoever the planner may be) that locates the facility in question, the two distinct angles of view emphasize who the ultimate beneficiary will be. Models that espouse the firm’s view typically include a simple cost minimization or profit maximization objective. This does, of course, not mean that the models themselves are necessarily simple: they may (and often do) include complex functions that attempt to model customer behavior. Modeling the customers’ view is much more contentious, as a single model will have to somehow express the collective view of the customers. As an example, consider the (re-)location of a hospital. Here, a planner (typically not a representative of the customers themselves) will set up objectives for the customers’ benefit. In doing so, he will usually consider average travel times of the patients to the hospital and try to locate the facility so that as many potential patients as possible are within a specified travel time from the hospital. Such a planner may also wish to include in the model some measure of “equity,” which is almost always ill-defined and, in order to find a quantifiable proxy, replaced by a measure of equality.

We would like to point out that there is no clear relation between the firm’s and the customers’ view on the one hand and the shipping and shopping models on the other: in case of models that take the firm’s view, firms can locate trucking terminals in the context of a delivery (i.e., shipping) model, or may locate retail outlets (a shopping model). Similarly, in case of the customers’ view, planners may locate ambulances (a shipping model, as dispatchers will decide which ambulance to use and they will be responsible for the transport), or they may locate libraries (a shopping model).

Before we go into any further detail, we would like to offer a few thoughts on the scale of the decision-making process. Two major tools from the vast toolkit of location planners are mathematical optimization problems and methods of multicriteria decision making, with multiobjective models occupying a middle ground between the two. More often than not, simple, single-objective optimization models are used on the macro scale in the first stage of a decision-making process. The solution may identify a specific point, which decision makers will usually take as an indication of the general area in which the facility in question should be located. Subsequently, the process zooms in on this area; at this point, more is asked of the location, which is accomplished by considering only a smaller number of available locations and additional criteria. This is what multicriteria approaches do.

Models that use the firm’s view may, in addition to the usual cost minimization or profit maximization objective, include features that relate to customer behavior. For instance, when locating a gas station, planners often look not at customer location, but at typical trips made by customers, as it has been observed that customers usually fill up their cars on their way to work or while they are on trips for other purposes. Similar behavior has been observed for other small purchases and the use of child care. Once such a criterion has been determined, planners may include a flow capturing (or flow interception) feature in their model. For details on these approaches, see, e.g., Hodgson (1981).

Another feature often included in models that use the firm’s view is routing. The main reason is that the usual median models assume that each facility-customer interaction requires a separate trip. Once this is no longer true, routing features may be incorporated in the model. Clearly, this is a serious complicating factor: p-median problems as well as even (for practical purposes) simple routing problems, such as standard vehicle routing, are NP-hard by themselves, so that in most cases a heuristic solution method will be required. For a survey, readers are referred to Nagy and Salhi (2007). In addition to the many existing heuristics and metaheuristics, ranging from simulated annealing to tabu search, neighborhood search, genetic algorithms, and many others (for a survey, see, e.g., Richards 2000), Nagy and Salhi (2007) provide a number of heuristic methods that are tailor-made for location-routing problems. They distinguish between sequential methods (which solve the location problem first, followed by a routing heuristic), iterative techniques (which shuttle between location and routing components, essentially a sequential method with a feedback loop), clustering-based methods (which subdivide the set of customers into clusters, find a route within each cluster, and then determine the facility locations), and hierarchical techniques.

The features most frequently included in models that use the customers’ point of view are accessibility and “equity.” A good example is the work by Burkey et al. (2012), whose hospital model uses the average time for a customer to reach a hospital as one criterion, the coverage of as many potential patients as possible within a half-hour travel time as a second (this can be considered a proxy for equitable service), and, as a third, the Gini index (Gini 1912; see also Ceriani and Verme 2012), which uses the Lorenz curve to measure the degree of equality of a solution.
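For reference, a minimal sketch of one standard way to compute the Gini index (mean absolute difference divided by twice the mean), applied here to, say, the travel times of individual customers; the sample values are illustrative.

```python
def gini(values):
    """Gini index of a list of nonnegative values (e.g., customer travel times):
    0 means perfect equality, values close to 1 indicate strong inequality."""
    n = len(values)
    mean = sum(values) / n
    abs_diff_sum = sum(abs(a - b) for a in values for b in values)
    return abs_diff_sum / (2.0 * n * n * mean)

# Illustrative travel times (in minutes) for six customers.
print(gini([10, 12, 11, 35, 9, 40]))
```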

In addition to these typical concerns included in individual models, there are also industry-specific features that may be included. Typical examples are access to sources of water in the case of fire stations, energy and materials recovery in the case of facilities in solid waste management, penalties for overcrowding in the planning of jails, proximity to highways, ports, or airports in the case of distribution centers or manufacturing companies, and proximity to sources of pure water in the case of beer breweries.

Table 1.1 below summarizes some of the applications of location analysis that were published in the last 15 years. The table concentrates on the main features of the model, the objectives, the solution approach, the type of facility, the country it is located in, and the journal in which the piece was published. The exact references are found at the end of this section. To save space in the table, we use some abbreviations: EJOR refers to the European Journal of Operational Research, C & OR stands for Computers & Operations Research, JORS denotes the Journal of the Operational Research Society, and ITOR is International Transactions in Operational Research.

1.4 Summary and Outlook

This paper has described some of the main approaches to location analysis under special consideration of different types of applications, followed by a survey of descriptions of actual applications of location analysis. As expected, the main tools are various types of integer programming and techniques from multicriteria decision making. The summary also reveals that location models are applied all across the globe. In addition to the classics (finding optimal locations for landfills, fire stations, ambulances, and bank branches), there are a number of new trends. It is hardly surprising that many of these are fueled by technology: (mobile) vehicle inspection stations (see the piece by Geetla et al. in this volume), charging stations for electric vehicles (or those that use hydrogen fuel), cell phone towers, wifi hot spots, toll stations for electronic road pricing, and similar facilities. What is entirely missing are applications of non-physical location problems, such as models that position employees in positions within a firm (finding the best fit), models that locate brands close to existing customer tastes, and other “facilities.” The groundwork for such location models was laid a long time ago (Niemi and Weisberg 1976 for political models, and Eiselt and Marianov 2008 for employee positioning), but the calibration of data is probably one of the key difficulties for these models. One of the very positive approaches that exemplify “thinking outside of the box” is presented by Swersey et al. (1993), who, in their quest to keep public services at an acceptable level while saving money, considered dual training of first responders as firefighters and paramedics. This integration of services appears very promising.

Still, the survey also demonstrates that modeling is pretty much still at the “ad hoc” level. In other words, there is no generally accepted “fire station location model” that functions as transferable technology and is, with appropriate modifications for the local situation, usable in many different places. Such a model would not free modelers from the task of putting together a problem description for their specific situation, but it would allow them to skip the basic steps, which have already been done, and move on to the fine tuning right away. This has not yet happened, but, as the saying goes, hope springs eternal.