Keywords

1 Introduction

Although Murphy’s law (if anything can go wrong, it will) does not always come true, it seems at least important to address what might go wrong when designing and operating infrastructures, such as service systems and supply chains. Whether intentional or accidental, disasters can render a system inoperable or inefficient for quite some time. For example, in 2011, flooding in Thailand was considered to be the worst in 50 years. This event disrupted supply chains around the world from computer storage disk manufacturing to cars. In that flood, a production facility for Honda was closed for more than 3 months, and a financial analyst estimated that floods would reduce profits at Toyota, Nissan, and Honda by more than a combined Y35bn (Soble 2011). Other examples of natural disruption include the hit of Hurricane Harvey on Texas in 2017. Included in that disaster was a chemical plant that was flooded in Crosby, TX, which lost power as well as backup power. The chemicals stored at the plant needed refrigeration, and without power there were significant destructive fires. Harm can also be intentional and simple. For example, in 2015 a cyber-attack shut down three power distribution companies in the Ukraine resulting in loss of electricity to 225,000 customers in winter (Lemos 2018). In another incident, a terrorist was able to drive a vehicle into an Air Products & Chemical plant near Lyon, France, in 2015 that caused an explosion (CEN 2015). Of equal concern is that attackers used phishing emails to gain passwords and compromising information. By doing so, they were able to launch a cyber-attack on a steel mill in Germany in 2014. The attackers had enough familiarity with the system that they caused the plant’s control network to fail. In response, plant operators had to perform an emergency shutdown which resulted in significant damage (Lemos 2018). As a final example of intentional disruption, snipers in April 2013 opened fire on a substation supplying power to Silicon Valley, California, and knocked out 17 giant transformers, nearly bringing the entire area to a complete blackout. U.S. Officials have stated that this was the most significant incident in domestic terrorism involving the grid that has ever occurred. In an unreported U.S. government analysis, researchers found that knocking nine key substations out of 55,000 substations on a scorching summer day could result in a coast-to-coast blackout (Smith 2014) and it is believed that protecting 100 key substations would be enough to mitigate such an attack. This gives credence to addressing the question of what is critical to protect. Overall, addressing such potential risks when designing and operating a system of facilities may lead to more resilient and efficient systems.

Facilities and associated transportation networks are key elements in any production, supply, and service system. Traditional modeling approaches for facility location problems are based upon the assumption that systems will operate as designed. Virtually all modern textbooks on modeling production and supply systems ignore the problem of disruption when optimizing the location of a set of facilities. Church et al. (2004) demonstrated that a given deployment of facility resources, although optimal, could be significantly disrupted in service efficiency, while other close-to-optimal configurations were relatively resilient when subject to the same level of disruption. This work and the work of Snyder and Daskin (2005) were instrumental in establishing a need to handle facility reliability and vulnerability explicitly. Since then there has been an increased interest in modeling the fragility of networks and facility systems over a wide range of possible events from natural disasters to intentional strikes.

Research in facility disruption is new and evolving. There are three major problems of interest. The first one is: how much impact can be expected? This problem involves the search for the most critical elements of a system, that is, those facilities which when removed from operation impact the system the most. The second important question is: how can such impacts be averted? One way of averting a crisis may be to fortify facilities against disaster. This may call for something simple like providing backup generators for power or providing enough security that it will ward off a would-be attacker. Another possibility is to move the facility to a nearby site that is less vulnerable to something like flooding. The third main question is: how might facilities be configured so that the resulting system is both efficient in service delivery and resilient when disrupted? This last question deals with the design of a new system, whereas the first two questions deal with an existing system. All of these are major issues and are addressed in this chapter.

The main optimization models developed to answer these questions can be classified as follows:

  1. 1.

    Interdiction models. These models identify vulnerabilities of service/supply systems and quantify the impacts of potential losses of key components on a system ability to provide efficient service.

  2. 2.

    Protection models. These models optimize the allocation of protective resources among the facilities of already existent systems.

  3. 3.

    Design models. These models are used for planning new service and supply systems which are secure and resilient to disruptions.

In this chapter, we provide a description of the seminal models in each class and outline how these models have then been further developed and extended to capture the additional complexities and interdependencies characterizing real service and supply systems. The description of the models is paralleled by a brief description of the solution methodologies which have been proposed for solving them.

The remainder of this chapter is organized as follows. Section 22.2 introduces the notation used throughout the chapter. Interdiction, protection and design models are described in Sects. 22.3, 22.4 and 22.5, respectively. In Sect. 22.6, we highlight future trends in modeling location problems under disaster events. Some conclusive remarks are finally provided in Sect. 22.7.

2 Notation

In the following description of location models under disruption, we assume that the reader is already familiar with the classic location problems introduced in the previous chapters (e.g., median, covering, fixed-charge and hub location problems). Here we briefly summarize the main notation used throughout the chapter.

Inputs

\(\begin {array}{ll} I = & \text{Set of potential locations for the facilities, indexed by {$i$}} \\ J = & \text{Set of customers, indexed by {$j$}} \\ F = & \text{Set of facilities in an existing system} \\ d_j= & \text{Demand of customer {$j$}} \\ c_{ij}= & \text{Unit cost for serving customer {$j$} from facility {$i$}} \\ N_j= & \text{Set of facilities covering customer {$j$} ({$N_j \subseteq I$})} \\ p = & \text{Number of facilities to be located }\\ r = & \text{Number of facilities to be interdicted }\\ b = & \text{Number of facilities to be protected }\\ \end {array}\)

Decision Variables

\(y_{i} = \left \{ \begin {array}{l l} 1 & \quad \text{if a facility is located at site {$i$}}\\ 0 & \quad \text{otherwise} \end {array} \right . \)

\(s_{i} = \left \{ \begin {array}{l l} 1 & \quad \text{if a facility located at {$i$} is interdicted}\\ 0 & \quad \text{otherwise} \end {array} \right . \)

\(z_{i} = \left \{ \begin {array}{l l} 1 & \quad \text{if a facility located at {$i$} is protected}\\ 0 & \quad \text{otherwise} \end {array} \right . \)

\(x_{ij} = \left \{ \begin {array}{l l} 1 & \quad \text{if the demand of customer {$j$} is supplied from facility {$i$}}\\ 0 & \quad \text{otherwise} \end {array} \right . \)

\(u_{j} = \left \{ \begin {array}{l l} 1 & \quad \text{if customer {$j$} is covered before disruption}\\ 0 & \quad \text{otherwise} \end {array} \right . \)

\(v_{j} = \left \{ \begin {array}{l l} 1 & \quad \text{if customer {$j$} is covered after disruption}\\ 0 & \quad \text{otherwise} \end {array} \right . \)

In the models described in this chapter, single-sourcing is assumed. For some uncapacitated problems, such as the p-median problem, single-sourcing occurs naturally (without imposing binary restrictions on the xij variables) as customer demands are served by their nearest open facility, unless a customer has the same minimum cost from two or more open facilities (see Chap. 2). The multi-source counterpart of location models under disruption can be easily formulated by relaxing the integrality constraints on the xij variables.

3 Identifying Critical Facilities: Interdiction Models

Interdiction models date back a few decades and were originally designed to assess the impact of losing critical links in transportation networks for military applications (see, for example, Wollmer 1964 and Wood 1993). The first interdiction models within the facility location literature were introduced by Church et al. (2004) to identify the most critical facility assets in systems that are designed with an objective that is either based on minimizing total weighted distance of service or maximizing coverage. The first problem, called the r-Interdiction Median Problem (r-IMP), can be seen as the antithesis of the p-median problem and aims at identifying the best set of r facilities to remove, among the existing ones, in order to maximize the overall demand-weighted cost for serving the customers from the remaining facilities (these are referred to as non-interdicted facilities). Similarly, the r-Interdiction Covering Problem (r-ICP) can be seen as the antithesis of the maximal covering problem and involves finding the subset of r facilities, which when removed, minimizes the total demand that can be covered within a specified distance or travel time. In essence, both models identify the subset of facilities whose loss has the greatest impact on service delivery, where the impact is measured either in terms of cost increase or in terms of lost coverage to mirror two different service protocols.

The r-Interdiction Median Problem

In addition to the notation introduced in Sect. 22.2, the mathematical formulation of r-IMP requires the definition of the set Tij = {k ∈ F|dkj > dij} defined for each facility i ∈ I and customer j ∈ J. Tij represents the set of existing sites that are farther than i is from demand j. The r-IMP can be formulated in the following manner:

$$\displaystyle \begin{aligned} \begin{array}{rcl} \text{maximize} &\displaystyle &\displaystyle \sum_{i \in F} \sum_{j \in J} d_j c_{ij} x_{ij} {} \end{array} \end{aligned} $$
(22.1)
$$\displaystyle \begin{aligned} \begin{array}{rcl} \text{subject to} &\displaystyle &\displaystyle \sum_{i \in F} x_{ij} = 1 \quad \forall j \in J {} \end{array} \end{aligned} $$
(22.2)
$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \sum_{i \in F} s_i = r {} \end{array} \end{aligned} $$
(22.3)
$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \sum_{k \in T_{ij}} x_{kj} \le s_i \quad \forall i \in F, j \in J {} \end{array} \end{aligned} $$
(22.4)
$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle x_{ij} \in \{0,1\} \quad \forall i \in F, j \in J {} \end{array} \end{aligned} $$
(22.5)
$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle s_i \in \{0,1\} \quad \forall i \in F. {} \end{array} \end{aligned} $$
(22.6)

The objective function (22.1) maximizes the demand-weighted total cost after the interdiction of r facilities. Constraints (22.2) ensure that each customer is assigned to a facility after interdiction. Constraints (22.3) stipulate that exactly r facilities are to be interdicted. Constraints (22.4) force each customer j to be assigned to its closest non-interdicted facility. In particular, this set of constraints prevents each customer j from being assigned to facilities which are further than facility i, unless facility i is interdicted. Finally, constraints (22.5) and (22.6) represent the binary restrictions on the assignment and interdiction variables, respectively. Note that the structure of the problem guarantees that there is always one optimal solution in which all the xij variables are binary, so that the integrality restrictions on these variables can be relaxed.

In the above model the parameter r, i.e., the number of facilities that are lost simultaneously in a particular event, is chosen as a metric of possible disruption. In other words, r is used to capture the possible extent of a disruptive event: small values are usually associated with low-impact but possibly frequent events, whereas larger values are associated with disruptions which may affect a large number of assets. Given the difficulty of estimating this parameter precisely, an analyst would normally solve each model over a range of facility losses, r, in order to capture the range of possible impacts to system operations. Using a loss parameter r makes sense in modeling worst case disruptive scenarios due to natural events; however, in a case of intentional disruption one may want to consider the fact that each facility may require different amounts of resources to be completely disabled. For this type of case, one might want to cast disruption as a budget-constrained process (see for example Losada et al. 2012b). However using an interdiction budget requires information that may be completely hidden from the system operator, including the costs of striking and the available budget itself. The use of cardinality constraints such as (22.3) can be seen as a surrogate to knowing exact budget values of the interdictor.

The r-IMP can be cast as an integer linear programming model which can be solved with general-purpose integer programming software. The above formulation of the r-IMP can be streamlined by consolidating redundant assignment variables under special proximity conditions. The resulting variable reduction of this consolidation mechanism, which was initially proposed by Church (2003) for the p-median problem, can be substantial. Scaparra and Church (2008a) report reductions of up to 80% of the initial number of variables. The same authors also analyze and compare different formulations of the closest assignment constraints (22.4) to identify the most efficient formulation for the r-IMP. Although other approaches could be devised to solve the r-IMP, including decomposition methods or heuristics, solving the streamlined model by commercial software is usually quite effective, even for problem instances of significant size.

Clearly, the r-IMP makes some simplifying assumptions which may limit its practical applicability. For instance, it assumes that every strike or disruption is successful and always results in a complete impairment of the affected facility. In reality, the chances of losing a facility following a natural disaster or a man-made attack are based upon some probability. Church and Scaparra (2007a) introduced a probabilistic version of r-IMP where an attempted interdiction is successful only with a given probability. The same authors also show how to build a reliability envelope for identifying the range of possible impacts associated with losing one or more facilities. Losada et al. (2012b) further extended this probabilistic r-IMP by assuming that the probability of impairing a facility depends on the intensity of the disruption or on the amount of offensive resources used in the attack. In a further extension, Lei and Church (2011) address the issue of interdiction when not all demands are served by their closest facility after a disruption.

The r-IMP also assumes no restrictions on the facilities capacity, thus implying that after a disruption, the unaffected facilities have enough combined capacity to supply all the demand. This may not be a realistic assumption as most real supply systems usually operate with capacity limits. The capacitated version of the r-IMP can be found in Scaparra and Church (2012). Another interesting variation of the r-IMP which considers capacity restrictions is the partial interdiction problem introduced by Aksen et al. (2014). In this model, an interdicted facility may preserve part of its capacity; the capacity loss due to interdiction is commensurate to the intensity of the attack and the unmet demand after interdiction can be outsourced at some cost. A similar problem was considered by Zhang et al. (2016).

The r-Interdiction Covering Problem

The r-Interdiction Covering Problem (r-ICP) can be stated mathematically as follows:

$$\displaystyle \begin{aligned} \begin{array}{rcl} \text{minimize} &\displaystyle &\displaystyle \sum_{j \in J} d_j v_{j} {} \end{array} \end{aligned} $$
(22.7)
$$\displaystyle \begin{aligned} \begin{array}{rcl} \text{subject to} &\displaystyle &\displaystyle v_j \ge 1 - s_i \quad \forall j \in J, i \in N_j \cap F {} \end{array} \end{aligned} $$
(22.8)
$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \sum_{i \in F} s_i = r {} \end{array} \end{aligned} $$
(22.9)
$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle v_j \in \{0,1\} \quad \forall j \in J {} \end{array} \end{aligned} $$
(22.10)
$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle s_i \in \{0,1\} \quad \forall i \in F. {} \end{array} \end{aligned} $$
(22.11)

The objective function (22.7) minimizes the amount of customer demand which is covered after interdiction. Constraints (22.8) stipulate that a customer j must be covered unless all the facilities that currently cover it (i.e., the facilities in Nj ∩ F) are interdicted. Constraints (22.9) force the number of facilities to be eliminated to equal r. The last two sets of constraints (22.10) and (22.11) are binary restrictions on the coverage and interdiction variables. Note that the binary integer restrictions are only needed for the si variables whereas the vj variables automatically take on binary integer values in any optimal solution.

r-ICP instances of considerable size can generally be solved by commercial optimization packages without the need of resorting to more sophisticated approaches or heuristic techniques (Sevaux et al. 2015). Clearly, the same problem variations that have been considered for the r-IMP may be developed for the r-ICP so as to capture additional features such as probabilistic failures, capacity restrictions, and partial interdiction.

Other Interdiction Models

Although our focus so far has been on interdiction models for median and covering systems, an interdiction model counterpart can be devised for virtually every facility location problem proposed in the literature. As an example, Lei (2013) proposed the Hub Interdiction Median Problem which identifies the most critical hub facilities in hub–and–spoke systems.

4 Hardening Facilities: Protection Models

Interdiction models are a valuable tool for assessing facility criticality and worst-case scenario losses in case of disruption. However, it can be easily demonstrated that securing those facilities that are identified as the most critical in an optimal interdiction solution does not necessarily result in the most effective protection strategy (Church and Scaparra 2007b). Interdiction is a function of what is protected and this interdependency must be captured explicitly into a modeling framework to guarantee that limited protective resources are allocated in an optimal way. Most of the facility protection models existing in the literature incorporate an interdiction model as a tool for evaluating worst-case losses in response to protection plans. These models are expressed mathematically as bilevel optimization programs (Dempe 2002) which emulate a game played between a system defender (the leader) and a system attacker or interdictor (the follower). In this bilevel structure, the upper level problem involves decisions on which facilities to harden, whereas the lower level problem identifies which unprotected facilities to attack to inflict maximum damage.

In the following, we show how the model presented for the r-IMP in the previous section can be embedded within a protection model to optimize security investments in systems which are designed using the p-median problem (Scaparra and Church 2008a).

The r-Interdiction Median Problem with Fortification

The bilevel formulation of the r-IMP with Fortification (r-IMPF) is as follows.

$$\displaystyle \begin{aligned} \begin{array}{rcl} \text{minimize} &\displaystyle &\displaystyle H(z) {} \end{array} \end{aligned} $$
(22.12)
$$\displaystyle \begin{aligned} \begin{array}{rcl} \text{subject to} &\displaystyle &\displaystyle \sum_{i \in F} z_i = b {} \end{array} \end{aligned} $$
(22.13)
$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle z_i \in \{0,1\} \quad \forall i\in F, {} \end{array} \end{aligned} $$
(22.14)

where

$$\displaystyle \begin{aligned} \begin{array}{rcl} H(z) = \text{max} &\displaystyle &\displaystyle \sum_{i \in F }\sum_{j \in J} d_j c_{ij} x_{ij} {} \end{array} \end{aligned} $$
(22.15)
$$\displaystyle \begin{aligned} \begin{array}{rcl} \text{s.t. } &\displaystyle &\displaystyle s_i \leq 1 - z_i {} \\ &\displaystyle &\displaystyle (\mbox{22.2})-(\mbox{22.6}). \end{array} \end{aligned} $$
(22.16)

The leader objective (22.12) is to minimize the highest possible level of demand-weighted service cost, H, following the disruption on r facilities by allocating b protective resources (22.13). The worst-case cost H is computed in the follower problem, which is simply the r-IMP problem defined in Sect. 22.3 with the additional constraints (22.16). These constraints, which link the upper level protection variables and the lower level interdiction variables, prevent the interdiction of any protected facility.

It is important to note that in the above model protection resources can be cast with a budget constraint and facility varying protection costs (Aksen et al. 2010). It is also possible to add the costs of protection as a an additional term in the objective, where the costs of protection and costs of worst case operation are simultaneously minimized. In either case (as formulated or as an added objective term), one would generally want to solve a series of such problems in order to determine tradeoff curves of system impacts versus protection resources. The above form can be used to identify both supported and unsupported non-dominated solutions whereas the latter will be effective in solving for only supported non-dominated solution. In any case, one would want to understand exactly the benefits of protection in terms of reducing impacts of interdiction as compared to the added costs of protection.

Bilevel programs are generally very difficult to solve (Moore and Bard 1990), especially when integer variables appear in both levels and when the upper level variables parametrize the feasible region of the lower level problem, as it is the case in r-IMPF. Common approaches to solve bilevel integer programs include reformulation into single level problems and decomposition methods. Examples of casting r-IMPF as a single level problem can be found in Church and Scaparra (2007b) and Scaparra and Church (2008b). However, these single level models require a complete enumeration of all the possible ways of interdicting r out of the |F| existing facilities and therefore become quickly intractable as the value of the parameters |F| and r increases. Scaparra and Church (2008a) propose an implicit enumeration (IE) algorithm to solve the bilevel r-IMPF. The approach is based upon the observation that an optimal protection plan must include at least one of the critical facilities identified by solving a simple r-IMP. The recursive use of this property allows a significant reduction of the number of protection strategies that must be evaluated in an enumeration scheme. To date, this algorithm remains one of the most effective methods for solving this type of protection/interdiciton models to optimality and has been successfully applied to problems in different settings as well (e.g., the network protection models in Cappanera and Scaparra 2011).

Note that in the presence of other complicating aspects, such as capacity constraints on the facilities, interdiction problems may require a bilevel formulation. Consequently, the addition of the protection layer results in trilevel models, which are even more challenging to solve. In these cases, the trilevel models are typically solved by using IE for the outer protection level, while other methods, such as decomposition or reformulation, are used for the interdiction bilevel model. Some examples of this are discussed later in this section.

The use of metaheuristics for solving r-IMPF has been recently explored by Cheng et al. (2016), who developed several hybrid approaches where Tabu Search, Simulated Annealing and Genetic Algorithms are used for solving the upper problem, whereas the lower interdiction problem is solved to optimality by a commercial solver. These metaheuristics are more versatile than exact methods based on implicit enumeration, as they do not make any assumption about the follower’s problem. As a result, they can be applied to other settings (e.g., problems where a facility may be damaged partially or a facility may be lost only with certain probability).

Since its appearance, the r-IMPF has spurred a significant amount of research and several different variants to the original problem have been proposed in the literature. As an example, Liberatore et al. (2010) introduced a stochastic version of r-IMPF where the number of possible losses r is uncertain, to reflect the fact that the extent of a disruption is usually not known with certainty. In a follow up paper, Liberatore and Scaparra (2011) compared the model proposed for the above stochastic problem with two regret-based models to identify robust protection strategies in uncertain environments.

Aksen et al. (2010) proposed a budget-constrained version of the r-IMPF with flexible capacity expansion. In particular, they replaced the cardinality constraint (22.13) with a budget constraint and assume that the facilities have different protection costs and flexible capacity (i.e., the capacity can be expanded to accommodate the demand of customers previously assigned to interdicted facilities). A variation of this model can be found in Parajuli et al. (2017) who introduced the notion of gradual capacity backup to hedge against disruption risk in capacitated supply networks. Namely, facilities can be protected at different levels. Protection implies that a facility acquires contingent additional production capacity, and the amount of additional capacity is commensurate to the level of protection investment.

Another interesting variation of the r-IMPF is the problem investigated by Liberatore et al. (2012), which optimizes protection plans in the face of large area disruptions. The problem includes capacitated facilities, partial interdiction (interdiction reduces the amount of demand that can be served by a facility) and correlated disruptions (when a facility is hit, nearby facilities are affected as well). The problem was formulated as a trilevel program, and solved by dualization integrated in the implicit enumeration algorithm devised by Scaparra and Church (2008a) for the r-IMPF.

All the problems cited so far are static which means that they do not consider the effect of disruptions over time. In reality, disrupted facilities may have different recovery times and the duration over which system operations are degraded should be considered when modeling worst-case disruption scenarios. To redress this shortcoming, Losada et al. (2012a) proposed a different protection model for a system which is based upon a p-median problem design. In this model, protection does not necessarily prevent facility failure altogether, but speeds up recovery time following a potential disruption. The resulting model also incorporates the possibility of multiple disruptions over time and is solved using three different decomposition approaches.

An underlying assumption of the r-IMPF and all its variations is that protection is always successful and, therefore, protected facilities are never interdicted in a worst-case scenario. Bricha and Nourelfath (2013) relaxed this assumption and proposed a model where a protected facility is immune to disruption only with a given probability. The initial model was then extended to consider protection against concerted attacks by multiple interdictors.

Whereas most of the focus has been on protection models for systems based upon a p-median design, Zhu et al. (2013) proposed a game theoretical model to identify optimal defense strategies for an uncapacitated fixed-charge location model. In this model, the defender has several investment strategies (or levels of investment) available and aims at minimizing the expected damage to the systems along with the protection expenditure. Similarly, the interdictor can choose different attack levels on each facility and aims at maximizing a utility function, which combines damage and attack expenditures.

Recently, considerable attention has been paid to the protection of hub networks (Ghaffarinasab and Atayi 2018; Quadros et al. 2018; Ramamoorthy et al. 2018). These papers built upon the protection model for the multiple allocation hub interdiction median problem introduced by Lei (2013) and proposed different exact solution methodologies for solving it. Ghaffarinasab and Atayi (2018) introduced a two-level implicit enumeration algorithm based on Scaparra and Church (2008a) (one level of IE for protection and one for interdiction); Quadros et al. (2018) proposed a single level integer linear programming formulation for the problem and solved it through a branch-and-cut algorithm; Ramamoorthy et al. (2018) combined IE for the protection model with Benders decomposition for the interdiction model, after improving the lower level using novel closest assignment constraints.

Protection models have also been developed for location problems with hierarchical facilities (Aliakbarian et al. 2015) and for decentralized supply systems (Zhang and Zheng 2018).

5 Planning Robust Systems: Design Models

Hardening existing facilities can be an effective way of mitigating the impact of facility failures. An alternative approach is to incorporate the risks of potential failures in the initial design of a system by identifying location strategies which are both cost-efficient and robust to external disruptions. Several studies have demonstrated that significant improvements in reliability can often be obtained without significant increases in operating costs (Snyder and Daskin 2005).

Location models for planning reliable systems can be broadly grouped into two main categories which reflect different risk attitudes of the decision maker: risk-averse and risk-neutral.

5.1 Planning Against Worst-Case Disruptions

The models in this category identify location strategies for coping with the worst case in terms of facility loss or disruption. They therefore capture the perspective of a risk-averse decision maker and are suitable for hedging against deliberate disruptions and strategic risks. These models typically embed an interdiction model in a multi-level structure where the upper-level model identifies the optimal location of the facilities, whereas the lower-level model endogenously generates worse-case scenario losses.

We illustrate how such location-interdiction models can be formulated by presenting the Maximal Covering Location-Interdiction Problem (MCLIP). The idea is to couple the classical Maximal Covering Location problem with the r-ICP presented in Sect. 22.3 to identify the location of p facilities which maximizes a weighted combination of i) the initial coverage and ii) the minimum coverage level following the loss of the most critical r facilities (O’Hanley and Church 2011).

The MCLIP model can be formulated as follows:

$$\displaystyle \begin{aligned} \begin{array}{rcl} \text{maximize} &\displaystyle &\displaystyle \alpha \sum_{j \in J} d_j u_j + (1 - \alpha) H(y) {} \end{array} \end{aligned} $$
(22.17)
$$\displaystyle \begin{aligned} \begin{array}{rcl} \text{subject to} &\displaystyle &\displaystyle \sum_{i \in I} y_i = p {} \end{array} \end{aligned} $$
(22.18)
$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \sum_{i \in N_j}y_i \geq u_j \quad \forall j \in J {} \end{array} \end{aligned} $$
(22.19)
$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle y_i \in \{0,1\} \quad \forall i\in I {} \end{array} \end{aligned} $$
(22.20)
$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle u_j \in \{0,1\} \quad \forall j\in J, {} \end{array} \end{aligned} $$
(22.21)

where

$$\displaystyle \begin{aligned} \begin{array}{rcl} H(y) = \text{min} &\displaystyle &\displaystyle \sum_{j \in J} d_j v_j {} \end{array} \end{aligned} $$
(22.22)
$$\displaystyle \begin{aligned} \begin{array}{rcl} \text{subject to} &\displaystyle &\displaystyle \sum_{i \in I} s_i = r {} \end{array} \end{aligned} $$
(22.23)
$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle v_j \geq y_i - s_i \quad \forall j \in J, i \in N_j {} \end{array} \end{aligned} $$
(22.24)
$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle s_i \in \{0,1\} \quad \forall i\in I {} \end{array} \end{aligned} $$
(22.25)
$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle v_j \in \{0,1\} \quad \forall j\in J. {} \end{array} \end{aligned} $$
(22.26)

The upper-level objective (22.17) is to maximize the weighted sum of covered demand before and after interdiction by locating p facilities (22.18). Initial and post-disruption coverage are weighted in the objective by using a weight α, with 0 ≤ α ≤ 1. The demand covered before interdiction is determined by constraints (22.19), whereas the worst-case demand-weighted coverage after interdiction, H(y), is computed in the lower level problem (22.22)–(22.26). This is a simple modification of the r-ICP problem (22.7)–(22.11), where constraints (22.8) are replaced by (22.24). These constraints state that customer j must be covered after disruption (vj = 1) unless all the open facilities covering customer j are interdicted.

Bilevel location-interdiction problems such as the MCLIP are even more difficult to solve than the protection-interdiction problems discussed in Sect. 22.4 and some efficient approaches devised for protection models, such as the implicit enumeration algorithm for r-IMPF, are not applicable to them. In O’Hanley and Church (2011), the MCLIP is solved by a decomposition method using so-called supervalid inequalities.Footnote 1

Another example of location-interdiction models can be found in Parvaresh et al. (2014) for p-hub median problems. In this case, the bilevel model is solved heuristically via Simulated Annealing and Tabu Search. Ghaffarinasab and Motallebzadeh (2018) extended this work by introducing the hub interdiction problem under covering and center objectives. A worst-case model for the uncapacitated facility location problem can be found in Hernandez et al. (2014), where a multi-objective optimization approach is used to identify trade-off solutions with respect to the total weighted traveling distance before and after disruptions.

Note that design and protection decisions may be coupled within the same modeling framework. Risk-averse design problems including the option of hardening some of the facilities to be located have received considerable attention. See for example Keçici et al. (2012), Aksen and Aras (2012), Aksen et al. (2013), Shishebori and Jabalameli (2013), Medal et al. (2014), Akbari-Jafarabadi et al. (2017), Zhang et al. (2018) and Jalali et al. (2018). These problems have introduced several novel aspects into the facility protection and robust design literature. For instance, Zhang et al. (2018) considered for the first time the case where the interdictor has no information about the protection resource allocation. Jalali et al. (2018) assumed that facilities fail with some probability which depends on the combined effect of protection and interdiction efforts and used a conditional value-at-risk (CVaR) measure to capture the risk-averse attitude of the system designer.

Design decisions can also be used to identify efficient ways of protecting existing service facilities, as in the problem introduced by Mahmoodjanloo et al. (2016). This problem aims at locating defence facilities at minimum cost, so that each service facility is covered by at least one defence facility. The problem is modeled as a trilevel program, where the bilevel partial interdiction median model introduced by Aksen et al. (2014) is embedded into an outer coverage location model.

Although bilevel location-interdiction models are the most common way of capturing worst-case scenario disruptions, the use of two-stage Robust Optimization (RO) has recently been proposed as an alternative risk-averse approach to hedge against disruptions. RO-based location models use uncertainty sets to capture data uncertainty and seek to determine locations that are robust to any perturbations in the uncertainty sets, including worst-case scenario values. To model situations where some decisions can be made after the uncertainty is revealed, the RO framework can be extended to include second stage recourse decisions. An et al. (2014) proposed the first two-stage RO model to design reliable facility location networks subject to disruptions. Their models, designed for the reliable p-median problem, minimize the weighted sum of the operation costs in normal situations and in the worst disruptive scenario. They also considered two important practical features: facility capacities and demand change due to disruption. The proposed models are solved exactly by Benders decomposition and column-and-constraint generation methods. In recent years, two-stage RO approaches have been used to solve other more complex location problems under disruptions. For example, Zarrinpoor et al. (2017) proposed a hierarchical location-allocation model for health service network design which concurrently addresses several key issues, such as service quality, changes in demand patterns, hierarchical structure of networks, disruption risk and uncertainty associated with demand and service within a queuing theory framework. Cheng et al. (2018) introduced a two-stage RO approach for the reliable logistics network design problem, which includes multiple echelons and facility capacities. To test different levels of conservativeness and study the price of robustness, the authors extended the basic RO scheme and proposed two model variants: the expanded two-stage RO model, which uses multiple uncertainty sets, and the risk-constrained two-stage RO model, where upper bounds are imposed on the worst-case performance. The application of the models indicates that a considerable decrease in the cost of the worst disruptive situation can be achieved for only a small increase in the normal cost.

5.2 Planning Against Random Disruptions

In this class of models, facilities are assumed to fail at random and the objectives typically deal with expected costs or performances.

Although the first paper to consider unreliable facilities which fail with a given probability appeared more than a couple of decades ago (Drezner 1987), a renewed interest in this type of problems has only emerged more recently with the reliability problems investigated by Snyder and Daskin (2005): the Reliability p-Median Problem (RPMP) and the Reliability Fixed-Charge Location Problem (RFLP). Both problems aim at locating a set of facilities so as to minimize the costs incurred by the system when all the facilities are operational and the expected transportation costs after facilities failures.

In the RPMP model, each open facility may fail with the same fixed probability π, failures are independent and several facilities can fail simultaneously. If customer j is not served by any facility, either because all open facilities fail or because it is too costly to receive service by the closest operational facility, the system incurs a lost-sale cost per unit of demand. To model this situation, the set I of potential locations for the facilities is augmented with a dummy facility. Let m be the cardinality of the augmented set |I| and the index of the dummy facility. The dummy facility m never fails and has unit service cost cmj to customer j, which represents the lost-sale cost per unit of demand. As facility m is forced to open, p + 1 facilities must be located instead of p as in standard p-median problems. Each customer is assigned to facilities depending upon their operational status. Accordingly, several assignment levels can be associated with each customer. Level-0 assignments are those made to primary facilities that serve the customers under normal circumstances. Level-l assignments (0 < l ≤ p) are those made to alternative facilities that can serve a customer if the l closer facilities have failed.

To formulate RPMP, the following assignment variables are defined:

$$\displaystyle \begin{aligned} x_{ijl} = \left\{ \begin{array}{l l} 1 & \quad \text{if customer}\ {j}\ \text{is assigned to facility}\ {i}\ \text{at level}\ {l}\\ 0 & \quad \text{otherwise} \end{array} \right. \end{aligned}$$

The RPMP model is as follows.

$$\displaystyle \begin{aligned} \begin{array}{rcl} \text{minimize} &\displaystyle &\displaystyle \sum_{j \in J} d_j \sum_{l = 0}^{p} \left[ \sum_{i \in I \setminus m} c_{ij} \pi^l (1-\pi) x_{ijl} + c_{mj}\pi^lx_{mjl} \right] {} \end{array} \end{aligned} $$
(22.27)
$$\displaystyle \begin{aligned} \begin{array}{rcl} \text{subject to} &\displaystyle &\displaystyle \sum_{i\in I} x_{ijl} + \sum_{t=0}^{l-1} x_{mjt} = 1 \quad \forall j\in J, l=0,\ldots,p {} \end{array} \end{aligned} $$
(22.28)
$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \sum_{l=0}^{p} x_{ijl} \le 1 \quad \forall i\in I, j\in J {} \end{array} \end{aligned} $$
(22.29)
$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle x_{ijl} \le y_i \quad \forall i\in I, j\in J, l=0,\ldots,p {} \end{array} \end{aligned} $$
(22.30)
$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \sum_{i \in I} y_i = p + 1 {} \end{array} \end{aligned} $$
(22.31)
$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle y_m = 1 {} \end{array} \end{aligned} $$
(22.32)
$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle y_i \in \{0,1\} \quad \forall i\in I {} \end{array} \end{aligned} $$
(22.33)
$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle x_{ijl} \in \{0,1\} \quad \forall i\in I, j\in J, l=0,\ldots,p. {} \end{array} \end{aligned} $$
(22.34)

The objective function (22.27) minimizes the demand-weighted expected transportation and lost-sales costs. These are computed as a function of the assignment variables by taking into account that each customer j is served by its level-l facility i if the l closer facilities have failed, which occurs with probability πl, and facility i has not failed, which occurs with probability 1 − π for each i ∈ I ∖ m and with probability 1 if i = m. Constraints (22.28) state that each customer j must be assigned to some facility at each level l, unless j has been assigned to the dummy facility at level t < l. Constraints (22.29) prevent the assignment of a customer to a given facility at more than one level. Constraints (22.30) prohibit the assignment to facilities which are not open, whereas constraint (22.31) state that exactly p facilities must be opened in addition to the dummy facility, which is forced to be open by constraint (22.32). Constraints (22.33) and (22.34) are standard integrality constraints (note that the integrality constraints on the assignment variables xijl can be relaxed).

The original RPMP model presented in Snyder and Daskin (2005) is slightly more general than model (22.27)–(22.34) in two aspects: i) some of the facilities may be considered completely reliable and ii) the objective is to minimize the weighted sum of normal costs and expected failure costs. The authors show that by varying the weights of the resulting bi-objective model, one can generate a trade-off curve for identifying good compromise solutions. This type of analysis demonstrates that large reductions in failure costs can often be attained with only minor increases in operation costs.

The Reliability Fixed-Charge Location Problem (RFLP), which we do not report for the sake of brevity, can be formulated in a similar way to RPMP. Both problems can be tackled by Lagrangian relaxation (Snyder and Daskin 2005). Efficient metaheuristic approaches have also been devised for RPMP by Alcaraz et al. (2012), which report very good results for large-scale instances.

One of the major limitations of this structure for reliability models is that it relies on the assumption that all facilities fail with the same probability. Without this assumption, calculating expected transportation costs becomes significantly more complicated due to the need of expressing probability products using high-degree polynomials. Site-dependent probabilities were considered for the first time by Berman et al. (2007) but the resulting model is highly non-linear and is only solved heuristically. Several attempts at modelling heterogeneous facility failure probabilities using a linear mixed-integer program have appeared in recent years (see for example Cui et al. 2010 and Lei and Tong 2013). Particularly noteworthy is the probability chains linearization technique proposed by O’Hanley et al. (2013) for solving the RPMP with site-dependent probabilities. The technique, which is general and can be extended to other model classes as well, is based on the idea of using a specialized network flow structure for evaluating compound probability terms. Empirical experiments indicate that this technique is quite effective in solving reliability models of significant size. Tran et al. (2017) further extended the concepts of probability chains and introduced a novel network flow structure called a probability lattice to solve the reliable single-allocation p-hub median problem.

Other important issues in modeling location problems with unreliable facilities are correlation and informational uncertainty. Correlation concerns the extent to which the failure of one facility affects the operational status of other facilities. In many real situations neighboring facilities may be exposed to similar hazards and, therefore, fail simultaneously. Examples of models with correlated disruptions can be found in Li and Ouyang (2010), Berman et al. (2013), Li et al. (2013) and Lu et al. (2015). Informational uncertainty relates to the information available to customers about the operational state of the facilities. It is clear that optimal location patterns and optimal service costs may differ if customers do not have prior information about the state of the facilities and must travel to different facilities before they can receive service. The role of information in reliable facility design is analyzed in Berman et al. (2009), Berman et al. (2013), Albareda-Sambola et al. (2015) and Yun et al. (2015).

An issue that has been largely neglected in the reliability location literature is the capacity of the facilities. Most existing reliability models assume that the facilities are uncapacitated and able to absorb the demand of disrupted facilities. As a consequence of this assumption, even the issue of partial facility failure has been mostly ignored. An exception is the study by Azad et al. (2013) which considers capacitated facilities, partial capacity loss due to disruption and goods sharing between non-disrupted and partially disrupted facilities. This problem was subsequently extended by Jabbarzadeh et al. (2016) who proposed a hybrid stochastic-robust optimization model, where a robust optimization approach was applied to the stochastic reliable capacitated facility location problem so as to capture additional uncertainties (i.e. demand fluctuations, probability of a disruption occurrence, supply capacity variations). An alternative way of dealing with potentially excessive demand at non-failing, backup facilities has been considered by Madani et al. (2018) within the context of the reliable p-hub maximal covering problem. In this study, a bi-objective model is introduced, where the primary objective is to maximize the expected covered flow, whereas the secondary objective is to minimize congestion by balancing the flows passing through each hub.

Most existing reliability location models use expected costs or performances in the objective function, thus implicitly assuming that the decision maker is risk-neutral. Yu et al. (2017) argued that risk-averse approaches can provide more robust solutions compared to the risk-neutral approach and proposed two variants of RFLP which use risk-averse measures: conditional value-at-risk (CVaR) and absolute-semideviation (ASD). This study shows that different facility locations are selected under risk-averse measures and that the resulting systems are more reliable than the ones obtained with traditional risk-neutral objectives, but less conservative that the ones obtained with worst-case models.

Finally, as for the bilevel design models discussed in the previous section, location and hardening decisions can be combined into a probabilistic design model for identifying reliable and cost-efficient configurations of hardened and unhardened facilities (see, for example, Lim et al. 2010, Li and Savachkin 2013, Li et al. 2013 and Jabbarzadeh et al. 2016).

5.3 Planning Against Specific Disruption Scenarios

When the uncertainty associated with disruptions can be captured by a finite set of scenarios, we can resort to scenario-indexed models. Within the context discussed in this chapter, such models are an alternative for writing two-stage stochastic mixed-integer programs. The non-anticipative first-stage decisions concern the location of the facilities and are made in the presence of uncertainty about the realization of future disruption scenarios. The second-stage (recourse) decisions, which are conditional to the first-stage decisions, involve the assignment of customers to facilities in response to specific disruption scenarios.

Below we show a scenario-indexed model for the p-median problem, where the objective is to minimize the expected service cost over all failure scenarios. Let Ω be the set of disruption scenarios such that a = 1 if facility i fails in scenario ω. The probability that scenario ω occurs is denoted by πω. The assignment decision variables are defined for each scenario as follows:

$$\displaystyle \begin{aligned} x_{ij\omega} = \left\{ \begin{array}{l l} 1 & \quad \text{if customer}\ j\ \text{is assigned to facility}\ i\ \text{in scenario} \ \omega\\ 0 & \quad \text{otherwise} \end{array} \right. \end{aligned}$$

The scenario-indexed model is then:

$$\displaystyle \begin{aligned} \begin{array}{rcl} \text{minimize} &\displaystyle &\displaystyle \sum_{\omega \in \varOmega} \pi_\omega \sum_{i \in I }\sum_{j \in J} d_j c_{ij} x_{ij\omega} {} \end{array} \end{aligned} $$
(22.35)
$$\displaystyle \begin{aligned} \begin{array}{rcl} \text{subject to} &\displaystyle &\displaystyle \sum_{j \in J} x_{ij\omega} \leq (1 - a_{i\omega})y_i \quad \forall i \in I, \omega \in \varOmega {} \end{array} \end{aligned} $$
(22.36)
$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \sum_{i \in I} x_{ij\omega} = 1 \quad \forall j \in J, \omega \in \varOmega {} \end{array} \end{aligned} $$
(22.37)
$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \sum_{i \in I} y_i = P {} \end{array} \end{aligned} $$
(22.38)
$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle y_i \in \{0,1\} \quad \forall i\in I {} \end{array} \end{aligned} $$
(22.39)
$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle x_{ij\omega} \in \{0,1\} \quad \forall i\in I, j\in J, \omega \in \varOmega. {} \end{array} \end{aligned} $$
(22.40)

The objective function (22.35) minimizes the demand-weighted expected cost across all scenarios. Constraints (22.36) prevent the assignment of customer j to facility i in scenario ω if either i is not open or if it is open but not available in scenario ω. Constraints (22.37) guarantee that each customer is assigned to some facility in every scenario. The remaining constraints are standard cardinality and integrality constraints.

The expected performance criterion used in problem (22.35)–(22.40) yields solutions that may perform poorly in certain scenarios. Solutions which are effective no matter what scenario is realized can be obtained by incorporating robustness measures into the model (see also Chap. 8). An example is the β-robustness measure introduced by Snyder and Daskin (2006). Let \(z_\omega ^*\) be the optimal cost for scenario ω. By adding the following constraint

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \sum_{i \in I}\sum_{j \in J} d_j c_{ij} x_{ij\omega} \leq (1 + \beta) z_\omega^* \quad \forall \omega \in \varOmega, {} \end{array} \end{aligned} $$
(22.41)

it is possible to generate least-cost solutions whose relative regret in each scenario is no more than β, for a given β ≥ 0.

The β-robustness measure has been used in Peng et al. (2011) to design reliable multi-echelon supply chain networks. Other risk measures to generate robust solutions in scenario planning models include the α-reliable minimax regret (Daskin et al. 1997) and the α-reliable mean-excess regret (Chen et al. 2006). In α-reliable minimax models, the maximum regret is computed only over a subset of scenarios, called the reliability set, whose total probability is at least α. The α-reliable mean-excess regret, which is closely related to the CVaR objective of portfolio optimization (Rockafellar and Uryasev 2000), further extends the α-reliable concept by ensuring that solutions perform reasonably well even in the scenarios which are not included in the reliability set. Typically, the objective function of these models minimizes a weighted sum of the maximum regret over the reliability set and the conditional expectation of the regret over the scenarios excluded from the reliability set. Although these measures have not been explicitly used in facility location problems with disruptions, their application is quite straightforward and certainly deserves future investigation.

When uncertainty can be captured by a finite set of scenarios and a scenario-indexed model can be considered, it is easy to modify the model in a way that the models discussed in Sect. 22.5.2 cannot. As an example, capacity restrictions can be easily modeled by replacing constraints (22.36) with

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \sum_{j \in J} d_j x_{ij\omega} \leq (1 - a_{i\omega}) q_i y_i \quad \forall i \in I, \omega \in \varOmega, {} \end{array} \end{aligned} $$
(22.42)

where qi is the capacity of facility i.

Partial disruptions can also be captured by simply redefining a as the proportion of facility i capacity which is lost in scenario ω to model the case where disruptions only reduce the capacity but do not completely disable a facility. An example of partial disruption in scenario-indexed models can be found in Fattahi et al. (2017) for a supply chain network (SCN) design problem. The SCN is composed of customers, warehouses and factories and involves multiple products and multiple periods. Lead times are based upon which facility/warehouse combination serves a given customer. Because of possible disruptions, some customers may not be served, which incurs a penalty cost. Although the factories are already located and fixed in number, warehouses are to be located over the planning horizon. Warehouses can be protected at selected fortification levels which limits disruption to certain levels of capacity. Single source delivery is assumed and demands at customers depend on the facilities serving them based on their delivery lead times. The objective is to minimize supply chain costs, including lead times in product delivery and warehouse recovery costs, by locating warehouses, selecting protection levels and assigning factory/customer supply chains to each demand.

Another scenario-based model which considers the effects of disruption on facility capacities is the risk-aware capacitated plant location problem (CPLP-RISK) introduced by Heckmann (2016). CPLP-RISK is a two-stage stochastic model, where the first-stage decisions include which facilities to open and whether to equip them with the option of capacity expansion that can be used when a disruption occurs; the recourse or second-stage decisions involve the selection of the capacity expansion’s level and duration. A finite set of scenarios is used to model facility capacity reductions and customer demand fluctuations over time. The objective is to minimize the overall system costs (i.e., facility opening costs, capacity expansion costs and service costs) and the service deterioration level due to unmet demand in case of disruption.
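
In stylized form (this is a generic two-stage sketch, not Heckmann's exact formulation), such a model minimizes the fixed first-stage costs plus the expected second-stage cost:

$$ \min_{y, v} \; \sum_{i \in I} f_i y_i + \sum_{i \in I} g_i v_i + \sum_{\omega \in \varOmega} p_\omega\, Q(y, v, \omega), $$

where \(y_i\) and \(v_i\) indicate whether facility i is opened and equipped with the expansion option, \(f_i\) and \(g_i\) are the corresponding costs, \(p_\omega\) is the probability of scenario ω, and \(Q(y, v, \omega)\) is the optimal second-stage cost (expansion level and duration, service and shortage costs) in that scenario.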

Very recently, scenario-indexed models have been studied for hub-and-spoke networks by Rostami et al. (2018) and Zhalechian et al. (2018). Particularly noteworthy is the comprehensive model introduced in the latter paper, which integrates several interesting issues: operational risks (i.e., fluctuations in input data) and disruption risks; proactive (mitigation) and reactive (recovery) strategies to increase resilience; and three different measures of network design quality (network density, network complexity and node criticality).

One major drawback of scenario-indexed models is that they can become very large if there are many scenarios (consider, for example, all the possible combinations of facilities that can fail). To alleviate this difficulty, the scenario space can be approximated using sampling techniques such as Sample Average Approximation (SAA) (Kleywegt et al. 2002). An innovative application of this method can be found in Aydin and Murat (2013) for the capacitated reliable facility location problem. In this study, Particle Swarm Optimization is integrated within the SAA methodology to improve the computational efficiency and solution quality of traditional SAA implementations. Another alternative is to construct the scenario set empirically by using historical data or expert judgement. As an example, Rawls and Turnquist (2010) use a scenario planning approach to optimize facility locations and emergency resource stockings in the face of natural disasters. In their case study, the scenarios of concern are constructed by using historical records from a sample of fifteen hurricanes.
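
A minimal sketch of the SAA scheme, in the spirit of Kleywegt et al. (2002), is given below. The three callables are placeholders: a scenario generator, a solver for the scenario-indexed location model restricted to a sample, and an evaluator of a candidate solution on a (larger) reference sample.

```python
def sample_average_approximation(scenario_sampler, solve_sample_problem,
                                 evaluate_solution, M=10, N=50, N_ref=1000):
    """Minimal SAA sketch: solve M sampled problems of N scenarios each,
    then pick the candidate with the best estimated expected cost."""
    reference = scenario_sampler(N_ref)               # large evaluation sample
    candidates, sample_objectives = [], []
    for _ in range(M):                                # M independent replications
        sample = scenario_sampler(N)
        solution, objective = solve_sample_problem(sample)
        candidates.append(solution)
        sample_objectives.append(objective)
    lower_bound = sum(sample_objectives) / M          # statistical lower-bound estimate
    best = min(candidates, key=lambda s: evaluate_solution(s, reference))
    upper_bound = evaluate_solution(best, reference)  # estimate for the best candidate
    return best, lower_bound, upper_bound
```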

Note that in standard two-stage stochastic optimization, first-stage decisions must ensure that solution feasibility is maintained for each scenario realization. A new paradigm, called Recoverable Robust Optimization, has recently been proposed by Liebchen et al. (2009), where first-stage decisions can be revisited once the uncertainty is resolved in the second stage. In particular, the solution built in the first stage can be recovered through a limited set of recovery actions. This paradigm has been used by Álvarez-Miranda et al. (2015) for the uncapacitated facility location problem under disruptions. The objective of the recoverable robust location problem is to minimize the sum of the first-stage cost (i.e., the cost of the initial facility location and customer allocation) plus the second-stage recovery cost (i.e., the worst-case cost to recover the solution over all possible scenarios). The second-stage recovery actions include the opening of new facilities and the re-allocation of customers that were allocated in the first stage to facilities which are unavailable in the realized scenario.
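
In compact (and deliberately generic) notation, a recoverable robust location problem of this type can be sketched as

$$ \min_{(y, x) \in X} \; \Big\{ c^{(1)}(y, x) + \max_{\omega \in \varOmega} \, \min_{(y^\omega, x^\omega) \in \mathcal{R}(y, x, \omega)} c^{(2)}(y^\omega, x^\omega) \Big\}, $$

where \(c^{(1)}\) is the first-stage location and allocation cost, \(\mathcal{R}(y, x, \omega)\) is the set of solutions that can be reached from \((y, x)\) through the allowed recovery actions when scenario ω is realized, and \(c^{(2)}\) is the corresponding recovery cost; the symbols are illustrative rather than taken from Álvarez-Miranda et al. (2015).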

6 Future Trends

The research to date on facility location problems with disruption, although groundbreaking, is still evolving. The impetus for such work has come from disasters such as 9/11, the destruction of the Fukushima nuclear power plant in Japan, and the more recent power disruption in Michoacan, Mexico. As such problems are often represented as a two-person game (defender-attacker) or a three-person game (defender-attacker-defender), they can be quite mathematically complex and difficult to solve. Because of this, work is needed to expand the range of problem sizes that can be addressed by such model structures.

The work discussed here is based upon the simplest of service systems involving the p-median and maximal covering problems. Work has also involved systems that do not rely on single-source service assignment, like the defender-attacker-defender model of Scaparra and Church (2012). Their model dealt with the protection of a system of capacitated facilities, with an embedded classical transportation problem. Although these problems and extensions can be used in many system designs, lifeline systems such as electrical generation and transmission, water supply and distribution, and communication networks of switches and lines, all present a level of complexity that has yet to be addressed in an efficient and comprehensive way.

Systems are interconnected in many ways. A failure (or an attack) of one system component may lead to the failure of another. Such cascading failures have been documented in electrical and communication systems. In addition, the failure of an electrical system component may render a portion of a communication system inoperable. These interdependencies between systems have yet to be adequately modeled. Moreover, most models capturing disruption ignore the temporal component. Few studies (see, for example, Heckmann 2016) have addressed the possible duration of a disrupting event, as well as how best to cope with it and restore the initial operational level. This, too, is an area where more research is needed.

Facilities are but one component in a production and distribution system. Flooding in Thailand in 2011 demonstrated that inventories for key parts, like those for computer disk drives, could be disrupted to the extent that the retail price for storage drives almost doubled for a short period of time. Fully addressing such vulnerabilities requires the modeling of facility production and inventory levels simultaneously. Hurricane Harvey, which hit Texas in 2017, affected more than 13,000 business entities in the flood envelope, including oil refineries, plastic molding facilities, and chemical plants (Chang 2017). Petroleum and coal products manufacturing, chemical manufacturing, and oil and gas extraction suffered the greatest impact. These three critical subsectors provide raw materials for other industries, and their disruption had a ripple effect on the raw materials supply chain. The disruption propagated to other industries and countries that rely on these or related exports from the Port of Houston. Although recent studies have attempted to consider multi-echelon distribution systems, the design of robust risk-optimized supply chain networks and the development of improved supply chain risk management strategies still require additional research to fully capture cross-sector and cross-country business interruption risk.

There are three principal ways in which resilient design has been approached: robust, stochastic and bilevel optimization. Work is needed to test the efficacy of each approach. For example, can a small number of scenarios be used to adequately define and capture possible outcomes, as compared to the use of a bilevel optimization problem involving a defender-attacker approach? In addition, can simulation models be used in an efficient manner to identify system vulnerabilities? Further, it is important to develop better models to estimate risk.

Finally, the models developed to date to handle interdiction, fortification and reliable design are far more complex than their base-level counterparts, adding a level of computational difficulty that constitutes a research area in its own right. But one must ask: can simpler models be developed which adequately address such uncertainties?

7 Conclusions

This chapter has reviewed the research that has evolved over the last 15 years concerning facility disruption. Disruptions can be thought of as arising from intent (e.g., terrorism), from accidents, or from natural disasters. The chapter has covered three main areas of related research: models of facility interdiction, combined models of facility interdiction and protection, and models of resilient design. These models are designed to address the three basic questions that concern system planners and operators when facing reality: (1) how much can a service system be degraded in its efficiency when disrupted; (2) how might resources be allocated to protect against such possible events; and (3) how might a new system be designed so that it is naturally resilient? Although past work has been based principally on the application of such models using hypothetical data, it has demonstrated that small changes in levels of protection can be effective at improving a system's ability to cope with a disaster. Further, it has been shown that equal, if not better, facility deployments result when taking into account possible levels of disruption (whether intentional or natural). Ignoring the possibility of disaster may come at a cost that is too high when compared with addressing such possibilities in operation (interdiction/fortification) and design. In fact, the value of modeling for disruption is that one can capture levels of impact and determine whether to ignore them or make system adjustments. This area of research is still evolving, and future work is needed to apply such concepts to a wide range of lifeline systems, including power generation and distribution, food production and distribution, and water supply systems.