12.1 Introduction

In competitive location models, a set of demand points, each with known buying power, exists in a market area. Several of one’s chain facilities and competing facilities compete for the buying power in the area. If only competitors’ facilities are present in an area and no chain facilities are there, the chain considers entering that area. The competing facilities attract buying power from demand points, yielding market share (the proportion of the total buying power in the area captured by one’s chain). The objective common to all competitive location models is the maximization of market share. Usually, profit is assumed to be a monotonically increasing function of market share; therefore, maximizing profit is associated with maximizing market share. Fernandez et al. (2007) and Redondo et al. (2009) deal explicitly with maximizing profit. If there is a cost differential between locations, setup costs, as well as pricing policies that may vary by location, account for such cost differentials. For a review of competitive location models see Berman et al. (2009a).

Therefore, at the core of any competitive location model is the estimation of the market share attracted by each of the competing facilities. All models assume that the market share captured by a facility depends on (1) the distances between the demand points and the facilities, (2) the attractiveness of the facilities, and (3) the prices at the facilities. Estimating the captured market share is typically done using one of the following approaches.

Hotelling (1929) suggested a rule by which each customer patronizes the facility that offers the lowest total cost (the price of the service plus the transportation cost). Hotelling’s approach led to the “proximity” rule, in which customers patronize the closest facility (when prices are equal at all facilities). Competitors compete by setting different service prices but, from the customers’ total-cost perspective, this rule implies that all competing facilities are equally attractive and that the total buying power concentrated at a demand point is spent at a single facility. Drezner (1982) analyzed two problems on the plane: the location of one competing facility and the leader–follower model (Stackelberg’s equilibrium model, in which a competitor reacts to the leader’s action). The proximity rule led to location-allocation models for the location of several facilities in a competitive environment (Hakimi 1983, 1986, 1990; ReVelle 1986; Ghosh and Rushton 1987; Serra and ReVelle 1995).

Different attractiveness levels of different facilities are incorporated in the proximity rule by defining a utility function (Drezner 1994a, 1995). Utility models were extended to random utility models (Leonardi and Tadei 1984; Drezner and Drezner 1996) or to the logit approach (Drezner et al. 1998).

Huff (1964, 1966) suggested applying the probabilistic gravity rule (Reilly 1931) for estimating market share. Drezner (1994b, 1995) suggested a multi-start approach to finding the best location for one new facility based on the gravity rule. Drezner and Drezner (2004) solved this problem optimally. The simultaneous location of multiple facilities according to the gravity rule was analyzed in Drezner et al. (2002a). Drezner (1998) formulated and solved the problem of locating several facilities under the gravity rule when the attractiveness levels of the new facilities are not given but are decision variables and a given budget is available for constructing the facilities.

The abovementioned models assume that total demand is satisfied and divided among the competing facilities with no lost demand. Drezner and Drezner (2008) proposed a gravity-based model which considers lost demand. This happens when customers have no facility close enough to them; thus, their demand is unsatisfied. Such a model is realistic for non-essential services.

The problems discussed in this chapter are based on covering models. Covering problems have been researched for many years (for reviews see Schilling et al. 1993; Daskin 1995; Current et al. 2002; Plastria 2002). There are two types of covering problems: (1) covering all the points with the minimum number of facilities (the set covering problem, ReVelle et al. 1976) and (2) covering as many points as possible (or as much total weight as possible when demand points have different weights) with a given number of facilities (the max-covering problem). For network formulations see Church and ReVelle (1974), Megiddo et al. (1983), ReVelle (1986), and Berman (1994), and for planar problems see Drezner (1981), Watson-Gandy (1982), Drezner (1986), and Canovas and Pelegrin (1992).

Our competitive location model is based on equal division of buying power among facilities whose radius of influence captures that demand. A comprehensive discussion of this rule is presented in Section 2 of our original paper Drezner et al. (2011). Equal division may not be accurate for a single consumer but the aggregated market share is estimated reasonably well. This rule is much simpler to implement than gravity models or utility-based models. We only need to estimate the catchment area of competing facilities which yields their radius of influence. There are established methods for estimating the radius of influence of a facility (Beaumont 1991; Toppen and Wapenaar 1994). For example, license plates of cars in the parking lot are recorded and the addresses of the cars’ owners obtained. Drezner (2006) conducted interviews with consumers patronizing different shopping malls asking them to provide the zip code of their residence and whether they came from home. Other approaches for estimating market share require numerous parameters for their implementation. Our approach requires only the establishment of the catchment area.

This chapter summarizes four papers on competitive location which are a result of my collaborative work with Dr. Zvi Drezner and Dr. Tammy Drezner:

1. A Cover-Based Competitive Location Model (Drezner et al. 2011)

2. Strategic Competitive Location: Improving Existing and Establishing New Facilities (Drezner et al. 2012)

3. A Leader-Follower Model for Discrete Competitive Location (Drezner et al. 2015)

4. The Multiple Markets Competitive Location Problem (Drezner et al. 2016).

Each of these papers is based on a new cover-based model for competitive location. This new model assumes that each facility has its own radius of influence (sphere of influence, catchment area) and that buying power of a demand point located within the radius of several facilities is equally divided among these facilities, while demand at demand points located outside of any facility’s influence is lost.

Consider a demand point with buying power w. It is in the sphere of influence of F of one’s chain facilities and C competitors’ facilities, where C counts the facilities of all firms competing with one’s chain. Suppose that the demand point falls in the sphere of influence of q additional chain facilities. We calculate the additional market share gained by the chain’s facilities from this demand point. Prior to the change in the chain’s facilities, the buying power attracted by one’s chain is \(w\frac {F}{F+C}\). Note that if F + C = 0, i.e., the demand point is outside the sphere of influence of all competing facilities, no buying power is attracted. When the demand point is in the sphere of influence of q additional facilities, the market share attracted by the chain is \(w\frac {F+q}{F+C+q}\). Simple algebraic manipulation leads to an increase in the buying power attracted to one’s chain of

$$\displaystyle \begin{aligned} w\frac{qC}{(F+C)(F+C+q)}. \end{aligned} $$
(12.1)

Note that: If q = 0, there is no increase in the market share regardless of the values of F and C; if F = C = 0 and q > 0, then the gain in market share is w; if only one new facility is located, then q can be either 0 or 1.
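For illustration, the gain of Eq. (12.1) for a single demand point, including the special cases listed above, can be computed by the following minimal Python sketch (the function and variable names are ours and do not appear in the original papers):

```python
def market_share_gain(w, F, C, q):
    """Increase in the chain's market share from one demand point (Eq. 12.1).

    w: buying power at the demand point
    F: number of chain facilities whose sphere of influence covers the point
    C: number of competing facilities covering the point
    q: number of additional chain facilities that will cover the point
    """
    if q == 0:
        return 0.0              # no new coverage, no gain
    if F + C == 0:
        return float(w)         # previously lost demand is fully captured
    return w * q * C / ((F + C) * (F + C + q))
```

For example, market_share_gain(1.0, 1, 1, 1) returns 1∕6, which is the gain realized from demand point #5 in the example below when a third chain facility of radius 1.25 covers it.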

As an illustrative example, consider the problem in Fig. 12.1, originally presented in Drezner et al. (2011). There are 12 demand points located on a grid with spacing 1, one competitor C whose radius of influence is 0.8, and two more attractive facilities A and B (radius of influence 1.25) belonging to one’s chain. Assume that all demand points have a buying power of one unit. One’s chain attracts demand points #7–#12, two-thirds of demand point #6, and one-half of demand point #5, for a total market share of \(7\frac 1 6\) units.

Fig. 12.1 The example problem. Open circle: demand point; filled circle: one’s chain; open triangle: competitor

Suppose that a new facility is to be located. If it has the same attractiveness as the competitor’s (radius of 0.8), it can capture buying power from at most 4 demand points when it is located at a center of a square whose vertices are demand points. A quick inspection reveals that the best location for the new facility is at the center of the leftmost square capturing buying power from demand points #1–#4. The buying power of demand points #1–#2, which was lost before, is now fully captured. Half of the buying power at demand points #3–#4 is captured. The total market share is increased by 3 units leading to a total market share of \(10\frac 1 6\) units. If the new facility has the same attractiveness as do the other two chain facilities (radius of 1.25), it can attract 6 demand points and its best location is in the middle between demand points #3 and #4. In that case, it will attract the same 3 units of buying power (all buying power from demand points #1, #2 and half the buying power from demand points #3, #4) and will increase the proportion of the buying power captured from demand points #5 and #6. The buying power captured from demand point #5 increases from one-half to two-thirds and the buying power captured from demand point #6 increases from two-thirds to three quarters for a gain of \(3\frac 1 4\) units, capturing market share of \(10\frac 5{12}\) units. The competitor attracts \(1\frac 7{12}\) units and a more attractive facility cannot reduce this value. The maximum market share that can be attracted by one’s chain by adding one chain facility is therefore \(10\frac 5{12}\) units.

In the remainder of this chapter, the summaries of the main contributions of the four papers are presented, each in a separate section. The chapter ends with a brief conclusion.

12.2 A Cover-Based Competitive Location Model

12.2.1 Locating New Facilities

The problem of locating one new facility in a competitive environment, as described in the introduction, can be converted to the standard max-covering problem with one facility. For each demand point, F and C are determined and the additional potential market share is evaluated by setting q = 1 in (12.1). If this value is 0, the demand point can be removed from the problem. Each remaining demand point is assigned the buying power determined by (12.1), and the maximum covering problem is solved using, for example, the algorithm in Drezner (1981). This approach is difficult to extend to the location of more than one new facility because the buying power that needs to be assigned to each demand point depends on the number q of new facilities that cover the demand point in their sphere of influence. Therefore, we designed special algorithms for the location of multiple facilities.
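For the single-facility case, this reduction can be sketched as follows (a plain Python enumeration over candidate sites rather than the specialized planar algorithm of Drezner (1981); all names are ours):

```python
def best_single_site(w, F, C, a):
    """Locate one new facility by the max-covering reduction described above.

    w, F, C: per-demand-point lists (buying power, number of covering chain
             facilities, number of covering competitor facilities)
    a[i][j]: 1 if candidate site j covers demand point i, 0 otherwise
    Returns (index of the best candidate site, market-share gain).
    """
    n = len(w)
    # adjusted weight of demand point i = gain when covered by one new facility
    adj = [w[i] if F[i] + C[i] == 0
           else w[i] * C[i] / ((F[i] + C[i]) * (F[i] + C[i] + 1))
           for i in range(n)]
    best_j, best_gain = None, -1.0
    for j in range(len(a[0])):
        gain = sum(adj[i] for i in range(n) if a[i][j] == 1)
        if gain > best_gain:
            best_j, best_gain = j, gain
    return best_j, best_gain
```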

Let S be the set of N potential sites, either given as part of the problem definition or calculated as in Drezner et al. (2007) when all points on the plane are potential sites. Let a ij for i = 1, …, n and j ∈ S be an incidence matrix: a ij = 1 if demand point i is covered by potential location j, and a ij = 0 otherwise. Let F i be the number of one’s chain facilities covering demand point i, C i be the number of the competitors’ facilities covering it, and w i be the buying power at demand point i. We locate p new facilities at candidate sites in S. Let x j for j ∈ S be a 0–1 variable: x j = 1 if a new facility is located at candidate site j and x j = 0 otherwise. The number of new facilities covering demand point i, q i, is

$$\displaystyle \begin{aligned} q_i=\sum_{j=1}^{N}a_{ij}x_j. \end{aligned} $$
(12.2)

The increase in the market share, ΔM(q), by one’s chain for a given vector q = {q i} is

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} \varDelta M(q)&\displaystyle =&\displaystyle \sum_{i=1}^nw_i\frac{q_iC_i}{(F_i+C_i)(F_i+C_i+q_i)}. \end{array} \end{aligned} $$
(12.3)

where w i is the buying power at demand point i and q i is calculated by (12.2). Our cover-based competitive location problem formulation is given by

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} &\displaystyle \max\left\{\varDelta M(X)=\sum_{i=1}^n\displaystyle\frac{w_i\frac{C_i}{F_i+C_i}\sum_{j=1}^{N}a_{ij}x_j}{F_i+C_i+\sum_{j=1}^{N}a_{ij}x_j}\right\}\\ \mbox{subject to: ~ ~ ~ ~ ~ ~ ~ ~ }\\ &\displaystyle \sum_{j\in S}x_j=p\\ &\displaystyle x_j\in\{0,1\}. \end{array} \end{aligned} $$
(12.4)

A special treatment is needed for the case F i + C i = 0. In this case, if q i = 0 the increase in market share is 0, and if q i > 0, the increase in market share is w i. We define two sets of demand points I 1 and I 2:

$$\displaystyle \begin{aligned}I_1=\left\{~i~|F_i+C_i>0\right\};~~~I_2=\left\{~i~|F_i+C_i=0\right\} \end{aligned} $$
(12.5)

and rewrite (12.4):

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} &\displaystyle &\displaystyle \max\left\{\varDelta M(X)=\sum_{i\in I_1}\displaystyle\frac{w_i\frac{C_i}{F_i+C_i}\sum_{j=1}^{N}a_{ij}x_j}{F_i+C_i+\sum_{j=1}^{N}a_{ij}x_j}+\sum_{i\in I_2}w_i\min\left\{\sum_{j=1}^{N}a_{ij}x_j,~1\right\}\right\}\qquad \\ &\displaystyle &\displaystyle \mbox{ subject to:}\\ &\displaystyle &\displaystyle \sum_{j\in S}x_j=p\\ &\displaystyle &\displaystyle x_j\in\{0,1\}. \end{array} \end{aligned} $$
(12.6)

Maximizing ΔM(X) as defined by (12.6) is a non-linear binary programming problem with one constraint. The objective function is a sum of fractional terms, and the number of terms is equal to the number of demand points. Each term has a linear function in both the numerator and the denominator. There is only one linear constraint and all decision variables are binary. Therefore, the problem is a generalized binary linear fractional programming problem (Barros 1998, p. 98). Once the binary constraints are relaxed, the problem becomes the sum of linear fractional functions (SOLF) problem (Chen et al. 2005), which is a generalization of the classical linear fractional programming problem (Charnes and Cooper 1962). The SOLF problem is known to be NP-complete when more than one ratio is present in the objective function (Freund and Jarre 2001). Solution procedures for certain SOLF problems can be found in Chen et al. (2005), Nesterov and Nemirovskii (1995), and Falk and Palocsay (1992). Calculating an upper bound on the optimal solution to the generalized binary linear fractional programming problem requires the solution of a SOLF problem. However, solving such a relaxed problem requires significant computer time, especially when N constraints of the type x j ≤ 1 need to be added to the problem. In Drezner et al. (2011) we proposed an efficient upper bound which exploits the special structure of our particular problem. The problem can also be solved heuristically by various metaheuristics such as tabu search, simulated annealing, or genetic algorithms.
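Although (12.6) is a non-linear binary program, evaluating its objective for a given selection of p sites is straightforward, and such an evaluator is essentially the only building block the heuristics of Sect. 12.2.3 need. A minimal Python sketch (our own names, not from the original papers):

```python
def delta_market_share(w, F, C, a, sites):
    """Objective of (12.6): increase in market share when new facilities are
    opened at the candidate sites listed in `sites` (column indices of a)."""
    total = 0.0
    for i in range(len(w)):
        q_i = sum(a[i][j] for j in sites)        # Eq. (12.2)
        if q_i == 0:
            continue                             # no gain from this demand point
        if F[i] + C[i] == 0:
            total += w[i]                        # demand point in I2: lost demand captured
        else:                                    # demand point in I1
            total += w[i] * q_i * C[i] / ((F[i] + C[i]) * (F[i] + C[i] + q_i))
    return total
```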

12.2.2 Upper Bounds for the Cover-Based Competitive Location Problem

In order to apply a branch and bound algorithm, tight upper bounds need to be constructed. In Drezner et al. (2011) we suggested three upper bounds termed UB 1, UB 2, and UB 3. The second upper bound, UB 2, is based on UB 1. The third upper bound UB 3 is an improvement of UB 2 and also depends on UB 1. UB 3 is always tighter than the other two. The reader is referred to the original paper Drezner et al. (2011) for proofs.

12.2.2.1 First Upper Bound (UB 1)

Since \(\sum \limits _{j=1}^{N}a_{ij}x_j\ge 0\),

$$\displaystyle \begin{aligned} \begin{array}{rcl} \varDelta M(X)&\displaystyle \le &\displaystyle \sum_{i\in I_1}\displaystyle\frac{w_iC_i\sum_{j=1}^{N}a_{ij}x_j}{\left(F_i+C_i\right)^2}+\sum_{i\in I_2}w_i\sum_{j=1}^{N}a_{ij}x_j\\&\displaystyle =&\displaystyle \sum_{j=1}^N\left\{\sum_{i\in I_1}\frac{w_iC_ia_{ij}}{\left(F_i+C_i\right)^2}+\sum_{i\in I_2}w_ia_{ij}\right\}x_j=\sum_{j=1}^N\gamma_jx_j, \end{array} \end{aligned} $$
(12.7)

where \(\gamma _j=\sum \limits _{i=1}^n\frac {w_iC_ia_{ij}}{\left (F_i+C_i\right )^2}\) with the provision that if F i + C i = 0, substitute C i = 1 (and F i = 0).

The following knapsack problem yields an upper bound for the solution of (12.4) or (12.6):

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} &\displaystyle \max\left\{\sum_{j=1}^N\gamma_jx_j\right\}\\ \mbox{subject to:}\\ &\displaystyle \sum_{j=1}^Nx_j=p\\ &\displaystyle x_j\in\{0,1\}. \end{array} \end{aligned} $$
(12.8)

The solution to this knapsack problem, UB 1, is the sum of the p largest values of γ j.
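The computation of UB 1 is simple; a minimal Python sketch (names are ours) that builds γ j according to (12.7) and sums the p largest values follows:

```python
def upper_bound_1(w, F, C, a, p):
    """UB1: solution of the knapsack problem (12.8), i.e., the sum of the
    p largest gamma_j values, with C_i treated as 1 when F_i + C_i = 0."""
    n, n_sites = len(w), len(a[0])
    gamma = [0.0] * n_sites
    for j in range(n_sites):
        for i in range(n):
            if a[i][j] == 1:
                if F[i] + C[i] == 0:
                    gamma[j] += w[i]                          # C_i = 1, F_i = 0
                else:
                    gamma[j] += w[i] * C[i] / (F[i] + C[i]) ** 2
    return sum(sorted(gamma, reverse=True)[:p])
```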

12.2.2.2 Second Upper Bound (UB 2)

Since the arithmetic mean is greater than or equal to the geometric mean:

$$\displaystyle \begin{aligned} \begin{array}{rcl} \varDelta M(X)&\displaystyle \le&\displaystyle \sum_{i\in I_1}\displaystyle\frac{w_iC_i\sum_{j=1}^{N}a_{ij}x_j}{2(F_i+C_i)\sqrt{(F_i+C_i)\sum_{j=1}^{N}a_{ij}x_j}}+\sum_{i\in I_2}w_i\sqrt{\sum_{j=1}^{N}a_{ij}x_j}\\&\displaystyle =&\displaystyle \sum_{i=1}^n\displaystyle\frac{w_iC_i\sqrt{\sum_{j=1}^{N}a_{ij}x_j}}{2(F_i+C_i)\sqrt{(F_i+C_i)}} \end{array} \end{aligned} $$
(12.9)

with the rule \(\frac {C_i}{2(F_i+C_i)\sqrt {(F_i+C_i)}}=1\mbox{ when }F_i+C_i=0\).

Consider the following identity:

$$\displaystyle \begin{aligned} \sum_{i=1}^n\displaystyle\frac{w_iC_i\sqrt{\sum_{j=1}^{N}a_{ij}x_j}}{2(F_i+C_i)\sqrt{(F_i+C_i)}}&= \sum_{i\in I_1}\frac 1 2 \sqrt{\frac{w_iC_i}{F_i+C_i}}\times \displaystyle\frac{\sqrt{w_iC_i\sum_{j=1}^{N}a_{ij}x_j}}{(F_i+C_i)} \\&\quad +\sum_{i\in I_2}\sqrt{w_i}\times\sqrt{w_i\sum_{j=1}^{N}a_{ij}x_j}. \end{aligned} $$

It can be written as \( \sum \limits _{i=1}^n\displaystyle \frac {w_iC_i\sqrt {\sum \limits _{j=1}^{N}a_{ij}x_j}}{2(F_i+C_i)\sqrt {(F_i+C_i)}}~=~ \sum \limits _{i=1}^n\frac 1 2 \sqrt {\frac {w_iC_i}{F_i+C_i}}\times \displaystyle \frac {\sqrt {w_iC_i\sum \limits _{j=1}^{N}a_{ij}x_j}}{(F_i+C_i)} \) with the rule that when F i + C i = 0, \(\frac {C_i}{F_i+C_i}=4\) in the first term and C i = 1 in the second term.

By the Schwarz inequality (Hardy et al. 1952) (\(\left \{\sum \limits _{i=1}^na_ib_i\right \}^2\le \sum \limits _{i=1}^na_i^2\sum \limits _{i=1}^nb_i^2\)):

$$\displaystyle \begin{aligned} \begin{array}{rcl} \sum_{i=1}^n\displaystyle\frac{w_iC_i\sqrt{\sum_{j=1}^{N}a_{ij}x_j}}{2(F_i+C_i)\sqrt{(F_i+C_i)}} &\displaystyle \le&\displaystyle \sqrt{\frac 1 4\sum_{i=1}^n\frac{w_iC_i}{F_i+C_i}\times\sum_{i=1}^n\displaystyle\frac{w_iC_i\sum_{j=1}^{N}a_{ij}x_j}{\left(F_i+C_i\right)^2}}\\ &\displaystyle =&\displaystyle \sqrt{\frac 1 4\sum_{i=1}^n\frac{w_iC_i}{F_i+C_i}\times\sum_{j=1}^{N}\gamma_jx_j}\\&\displaystyle \le&\displaystyle \sqrt{\left\{\frac 1 4\sum_{i=1}^n\frac{w_iC_i}{F_i+C_i}\right\}\mathit{UB}_1}{} \end{array} \end{aligned} $$
(12.10)

with the rule that when F i + C i = 0, \(\frac {C_i}{F_i+C_i}=4\).

To implement UB 2 (12.10) in conjunction with UB 1, i.e., to use as an upper bound \(\min \{\mathit {UB}_1,\mathit {UB}_2\}\), calculate: UB 1 and \(K=\sum \limits _{i=1}^n\frac {w_iC_i}{F_i+C_i}\). If \(\frac 1 4 K<\mathit {UB}_1\) use as upper bound \(\frac 1 2\sqrt {K\times \mathit {UB}_1}\); otherwise, use UB 1 as the upper bound.
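This rule translates directly into code; a minimal Python sketch (our own names) that combines the two bounds, taking UB 1 (for example, from the previous sketch) as an input:

```python
import math

def combined_upper_bound(w, F, C, ub1):
    """Return min(UB1, UB2) as described above.  The ratio C_i/(F_i + C_i)
    is replaced by 4 whenever F_i + C_i = 0."""
    K = sum(4.0 * w[i] if F[i] + C[i] == 0
            else w[i] * C[i] / (F[i] + C[i])
            for i in range(len(w)))
    if 0.25 * K < ub1:
        return 0.5 * math.sqrt(K * ub1)    # UB2 = sqrt((K/4) * UB1) is tighter
    return ub1
```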

12.2.2.3 Third Upper Bound (UB 3)

In developing UB 2 we used inequalities based on a base of “2,” i.e., the arithmetic–geometric mean inequality and the Schwarz inequality. We can develop analogous formulas based on a base of θ > 1 (not necessarily integer) which reduce to UB 2 when θ = 2 is used. Once the best value of θ is found, UB 3 must be better than UB 2 (or equal to UB 2 when the best θ is equal to 2).

To find UB 3, calculate UB 1, \(\alpha =\frac {\sum \limits _{i\in I_1}\frac {w_iC_i}{F_i+C_i}}{\mathit {UB}_1}\), and \(\beta =\frac {\sum \limits _{i\in I_2}w_i}{\mathit {UB}_1}\), and let \(\mu =\theta ^{\frac {\theta }{\theta -1}}\). When 0 < β < 1 we need to find θ that satisfies the equation \(\displaystyle \theta =1+\frac {1}{\alpha }\mu ^{\left \{\frac {\beta \mu }{(\theta -1)\alpha +\beta \mu }\right \}}-\frac {\beta \mu }{\alpha }\). Then, UB 3 = λ(θ)UB 1 is calculated by

$$\displaystyle \begin{aligned} \lambda(\theta)=\frac{1}{\theta}\left\{(\theta-1)\left[\alpha+\frac{\theta^{\frac{\theta}{\theta -1}}}{\theta -1}\beta\right]\right\}^{\frac{\theta-1}{\theta}}=\left\{\frac{\theta -1}{\mu}\alpha+\beta\right\}^{1-\frac{1}{\theta}}. \end{aligned} $$
(12.11)

Our original paper Drezner et al. (2011) describes an efficient technique for finding θ and the relationships among the three upper bounds.
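Since the derivation suggests that λ(θ)UB 1 in (12.11) is a valid bound for every θ > 1 (UB 2 being the special case θ = 2), a simple alternative to solving the implicit equation for θ is to minimize λ(θ) numerically over a grid. The following Python sketch is an illustration only, not the efficient technique of the original paper; it takes α, β, and UB 1 as inputs and uses \(\mu =\theta ^{\frac {\theta }{\theta -1}}\) as above:

```python
def upper_bound_3(alpha, beta, ub1, theta_grid=None):
    """Approximate UB3 by minimizing lambda(theta) of (12.11) over a grid of
    theta values; theta = 2 reproduces UB2."""
    if theta_grid is None:
        theta_grid = [1.0 + 0.01 * k for k in range(1, 1000)]   # theta in (1, 11)
    best_lambda = float("inf")
    for theta in theta_grid:
        mu = theta ** (theta / (theta - 1.0))
        lam = ((theta - 1.0) / mu * alpha + beta) ** (1.0 - 1.0 / theta)
        best_lambda = min(best_lambda, lam)
    return best_lambda * ub1
```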

12.2.3 Heuristic Algorithms

Four heuristic algorithms were constructed and tested: a greedy heuristic, an ascent heuristic, tabu search (Glover 1986), and simulated annealing (Kirkpatrick et al. 1983). We also tested an “improved greedy” approach, i.e., the ascent algorithm was applied to the solution of the greedy algorithm. Tabu search, simulated annealing, and genetic algorithms were constructed in Alp et al. (2003); Drezner et al. (2005); Berman and Drezner (2008) for the p-median and similar problems. The same principles can be adopted for the construction of such algorithms for the solution of our problem. The tabu search and simulated annealing algorithms tested were adopted from these papers.

12.2.3.1 The Greedy Heuristic

The problem is to select a set P of p sites out of N potential locations. We select the set P one facility at a time. The change in the value of the objective function ΔM is calculated by Eq. (12.3) when adding one facility at each of the N potential sites. The site with the largest increase in ΔM is selected for locating a new facility and remains in P for the rest of the algorithm. The process continues until sites for all p new facilities are selected.
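The greedy selection can be sketched in a few lines of Python (names are ours); it only requires an evaluator of the objective, such as the delta_market_share sketch given earlier:

```python
def greedy(candidates, p, objective):
    """Greedy heuristic: repeatedly add the candidate site with the largest
    marginal increase of the objective until p sites are selected.

    objective: callable mapping a set of selected sites to the market-share
               increase (e.g., an evaluator of (12.3)/(12.6))
    """
    selected = set()
    for _ in range(p):
        best_site, best_val = None, -1.0
        for j in candidates:
            if j in selected:
                continue
            val = objective(selected | {j})
            if val > best_val:
                best_site, best_val = j, val
        selected.add(best_site)
    return selected
```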

12.2.3.2 The Ascent Algorithm

This algorithm is similar to the heuristic algorithm designed by Teitz and Bart (1968) for the solution of the p-median problem.

1. A set P of p sites out of the N available sites is randomly selected.

2. All p(N − p) possible moves of removing one facility from P and adding one of the N − p non-selected sites to P are evaluated.

3. If an improving move is found, the best improving move is executed and Step 2 is repeated.

4. If no improving move is found, the algorithm terminates with the last set P of p sites as the solution.

Note that evaluating all p(N − p) moves can be streamlined and performed in a shorter run time: the p selected sites are removed one at a time and the change in ΔM is calculated for each removal; for each removed site, all N − p possible additions are evaluated, and the total change ΔM of a move is the sum of the two changes.
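A compact (non-streamlined) Python sketch of the ascent algorithm follows; it re-evaluates each candidate swap from scratch through the objective callable, so it is simpler but slower than the sequential evaluation described above (all names are ours):

```python
import random

def ascent(candidates, p, objective):
    """Ascent heuristic: start from a random selection of p sites and perform
    the best improving swap until no improving swap exists."""
    candidates = list(candidates)
    current = set(random.sample(candidates, p))
    current_val = objective(current)
    while True:
        best_move, best_val = None, current_val
        for out in current:                    # facility to remove
            for inn in candidates:             # non-selected site to add
                if inn in current:
                    continue
                val = objective(current - {out} | {inn})
                if val > best_val:
                    best_move, best_val = (out, inn), val
        if best_move is None:                  # no improving move: local maximum
            return current, current_val
        out, inn = best_move
        current = current - {out} | {inn}
        current_val = best_val
```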

12.2.3.3 Tabu Search

Tabu search (Glover 1986; Glover and Laguna 1997) proceeds from the solution found by the ascent algorithm in an attempt to escape local maxima and obtain better local maxima or the global maximum.

The following simple tabu scheme was used. A node is in the tabu list if it was recently removed from the selected set of nodes. It cannot re-enter the selected set while in the tabu list (unless its inclusion improves the best known solution). When the tabu list consists of N − p members, no exchange is possible. Therefore, we opted to select the tabu tenure to be a fraction of N − p. Following extensive experiments we randomly selected the tabu tenure in each iteration in the range [0.05(N − p), 0.45(N − p)]. Since the run time of the ascent approach is relatively long, we experimented with relatively few iterations of the tabu search. If the number of the iterations of the ascent algorithm is h, then the number of extra tabu search iterations is set to 19h, so the run time of the tabu search is about 20 times the run time of the ascent algorithm.

12.2.3.4 The Tabu Search for the Cover-Based Competitive Location Problem

1. A tenure vector consisting of an entry for each facility is maintained. The entry for a facility in the tenure vector is either the last iteration number at which it was removed from P or a large negative number if it was never removed from P.

2. Select the result of the ascent algorithm as the starting solution for the tabu search and as the best found solution. The number of iterations performed by the ascent algorithm is h.

3. Insert a large negative number (for example, − N) for every facility in the tenure vector.

4. Select the tabu tenure, T, in the range [0.05(N − p), 0.45(N − p)].

5. Evaluate all moves (one node to be removed, i out ∈ P, and one node to be added, i in ∉ P) and calculate the change in the value of the objective function obtained by moving the facility from i out to i in.

6. If a move yields a solution better than the best found one, continue to evaluate all the moves and perform the best improving move. Update the best found solution and go to Step 3.

7. If no move yields a solution better than the best found solution, select the move which yields the best value of the objective function (whether improving or not) among the moves for which the difference between the current iteration number and the entry of i in in the tenure vector exceeds T.

8. Enter the current iteration number into entry i out in the tenure vector. Go to Step 4.

9. Repeat Steps 4–8 for 19h iterations.
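A Python sketch of this procedure is given below. It follows the steps above with two simplifications: the iteration budget is passed in as a parameter (instead of being derived from the number of ascent iterations h), and the objective is supplied as a callable. All names are ours:

```python
import random

def tabu_search(candidates, start, objective, max_iter,
                frac_lo=0.05, frac_hi=0.45):
    """Tabu search starting from `start` (e.g., the ascent solution)."""
    candidates = list(candidates)
    N, p = len(candidates), len(start)
    current = set(start)
    best_set = set(start)
    best_val = objective(best_set)
    removed_at = {j: -N for j in candidates}          # "large negative number"
    for it in range(max_iter):
        T = random.uniform(frac_lo * (N - p), frac_hi * (N - p))   # tabu tenure
        best_move, best_move_val, improving = None, float("-inf"), False
        for out in current:
            for inn in candidates:
                if inn in current:
                    continue
                val = objective(current - {out} | {inn})
                if val > best_val:                    # better than best found (Step 6)
                    if not improving or val > best_move_val:
                        best_move, best_move_val, improving = (out, inn), val, True
                elif not improving and it - removed_at[inn] > T:
                    if val > best_move_val:           # best non-tabu move (Step 7)
                        best_move, best_move_val = (out, inn), val
        if best_move is None:
            continue                                  # every move is tabu this iteration
        out, inn = best_move
        current = current - {out} | {inn}
        if improving:                                 # update best, reset tenures (Step 3)
            best_set, best_val = set(current), best_move_val
            removed_at = {j: -N for j in candidates}
        else:                                         # record removal (Step 8)
            removed_at[out] = it
    return best_set, best_val
```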

12.2.4 Computational Experiments

The branch and bound algorithm was coded in J. It was run on a machine with 6 CPUs (each clocked at 1.86 GHz), with each processor tackling a different problem. The greedy, ascent, tabu search, and simulated annealing algorithms were coded in Fortran, compiled with the Intel 9.0 Fortran compiler, and run on a desktop computer with a 2.8 GHz Pentium IV processor and 256 MB of RAM.

We experimented with the 40 Beasley (1990b) problems designed for testing p-median algorithms. The problems have 100 ≤ n ≤ 900 nodes and 5 ≤ p ≤ 200 new facilities. Chain facilities are located at the first ten nodes and the competitors are located at the next ten nodes. The remaining n − 20 nodes are candidate locations for the facilities in one’s chain. The demand at node i is 1∕i. The same radius of influence was used for existing and new facilities.

Two sets of problems were run yielding a total of 80 problems. The two sets differ in their radius of influence.

For Set#1 the radius of influence was set to the smallest possible radius that ensures that each node is attracted by at least one existing facility, whether one of the chain’s facilities or a competitor’s. This guarantees that there is no lost (unmet) demand, thus β = 0. The radius for each problem is calculated as follows: for each of the n − 20 candidate nodes, the distances to the first 20 nodes are calculated and the smallest of them is determined. The maximum of these smallest distances over all n − 20 candidate locations is the radius of influence. For Set#2 the radius of influence was set to R = 20. In this case there is lost demand prior to establishing new facilities.

12.2.4.1 Set#1

The branch and bound algorithm solved 20 of the 40 problems. The runs for the remaining 20 problems were stopped after about 2 days, unless we observed that the search was quite advanced at that point and was expected to finish in a reasonable additional time. All of them had reached the best known solution when terminated. A relative accuracy of 10−5 was used. For two problems the upper bound was within this relative accuracy of the best known solution, so they were solved instantly at the root.

The greedy and ascent heuristic algorithms found the best known solution. The greedy solution was obtained in a few seconds. The ascent algorithm was repeated from 100 randomly generated starting solutions and found the best known solution in all 100 runs. Since the greedy and ascent algorithms performed well for Set#1 we saw no need to experiment with tabu search and simulated annealing for the problems in Set#1.

The branch and bound algorithm solved 20 of the 40 problems in Set#1 to optimality illustrating that the upper bound UB 3 is effective. The heuristic algorithms performed extremely well on this set of problems. The best known solution (for half of the problems it has been proven optimal) was found by all runs in a short run time.

12.2.4.2 Set#2

As to Set#2, the branch and bound algorithm found the optimal solution for 16 of the 40 problems. The heuristic algorithms were not as effective for Set #2 as they were for Set #1 and therefore we also experimented with tabu search and simulated annealing.

The ascent algorithm was run 100 times for each problem. It found the best known solution at least once for 37 out of the 40 problems. It found the best known solution in 54.6% of the runs and in all 100 runs for 12 problems.

The tabu search was run 10 times for each problem. It found the best known solution (including the 16 known optimal solutions) for all 40 problems. For 29 problems it found the best known solution in all 10 runs, and overall it found the best known solution in 88% of the runs. Run time, by design, was 20 times longer than that of the ascent algorithm. Therefore, the total run time for 10 tabu search solutions was about double the time required for 100 runs of the ascent algorithm.

The results for simulated annealing were inferior to those of the other algorithms. Results for simulated annealing generally improve when more iterations are allowed. Therefore, a longer run time was needed in order to obtain results comparable to those obtained by tabu search. We experimented with run times of more than six times those required for the ascent algorithm (an average run time of about 8500 s for 10 runs of simulated annealing) and obtained the best known results for only 27 of the problems, with the best result averaging 0.238% below the best known result. The best known result was obtained in 5.8 out of 10 runs on average. For 19 problems, though, simulated annealing obtained the best known result in all 10 runs.

The reader is referred to our original paper Drezner et al. (2011) for full results of the computational experiment.

12.3 Strategic Competitive Location

The new competitive location model originally proposed in Drezner et al. (2011) and described in Sect. 12.2 of this chapter inspired the follow-up project on strategic competitive location. In Drezner et al. (2011) the location of p new facilities with a given radius is investigated. In the strategic competitive location model proposed in our Drezner et al. (2012) paper, the radii of the facilities are variables, there is a budget constraint, and—in addition to constructing new facilities—we also consider an option to expand existing facilities.

There is a finite set of potential locations for new facilities. Each of the existing facilities has its radius of influence (sphere of influence, catchment area) which is monotonically increasing with its attractiveness. Upgrading existing facilities entails increasing their radius of influence, thereby increasing their catchment area. We do not consider downgrading or closing facilities. A fixed cost plus a variable cost, depending on the radius, is required for improvement. All potential locations for new facilities are defined with a radius of zero. Establishing a new facility requires a given fixed cost (usually greater than the fixed cost required for an improvement of an existing facility) plus a variable cost that is increasing with the radius. These three strategies were captured in a unified model presented in our paper Drezner et al. (2012) where existing facilities and potential new locations are defined as one set of locations, with corresponding radii and setup costs associated with each location.

Models existing prior to our Drezner et al. (2012) paper considered either improving existing facilities or constructing new ones. To the best of our knowledge, only Küçükaydın et al. (2011) had analyzed the combination of both options before us, albeit in a different context. In our paper an expansion of one’s chain is achieved by one of three strategies: (1) upgrading some or all of one’s existing chain facilities, (2) constructing new chain facilities, or (3) employing a combination of both (1) and (2). A given budget is available for such an expansion. The objective of the chain is to attract the maximum market share (or to maximize the additional market share captured following the expansion) within the given budget.

12.3.1 The Three Strategic Competitive Location Models

We consider three strategies, all encompassed in one unified model.

Improvement Strategy: (IMP):

Only improvement of existing chain facilities is considered.

Construction Strategy: (NEW):

Only construction of new facilities is considered.

Joint Strategy: (JNT):

Both improvement of existing facilities and construction of new facilities are considered.

All strategies are treated in a unified model by assigning a radius of zero to potential locations for new facilities. Note that the NEW strategy is somewhat similar to the variable radius covering model (Berman et al. 2009b) where a covering model with no competition was proposed.

12.3.1.1 Notation

n :

The number of demand points

w i :

The buying power at demand point i, i = 1, …, n

F i :

The number of facilities belonging to one’s chain that attract demand point i

C i :

The number of competitor facilities attracting demand point i

B :

The budget for increasing the attractiveness of existing facilities or creating new ones

p :

The number of chain facilities including potential locations for new facilities

\(r_j^o\) :

The existing radius of facility j for j = 1, …, p. For new facilities \(r_j^o=0\)

f(r):

The cost of building a facility of radius r (a non-decreasing function of r)

r j :

The unknown assigned radius to facility j. \(r_j>r_j^o\) for existing facilities and r j ≥ 0 for establishing new facilities

S j :

The fixed cost if facility j is improved or established

12.3.1.2 Calculating the Increase in Market Share

In this section we evaluate the increase in buying power captured by the chain as a result of an expansion. An expansion consists of increasing the attractiveness of some chain facilities and/or constructing new ones. The catchment area of a facility is a circle defined by its radius. Demand points in the facility’s catchment area are attracted to the facility (are covered by the facility). If a demand point is in the catchment area of several facilities, its buying power is equally divided among these facilities (Drezner et al. 2011). There may exist extreme cases where such a rule can be improved. However this rule provides an estimate for the captured market share and such rare exceptions do not introduce a significant deviation to the estimate. As we explained in Drezner et al. (2011), this rule is simple and robust.

Demand point i is in the catchment area of F i chain facilities and C i competitors’ facilities. Let q i be the number of additional chain facilities attracting demand point i following an expansion of the chain. This means that following the expansion demand point i is in the catchment area of F i + q i chain facilities. Prior to the expansion of chain facilities, the buying power attracted by one’s chain is \(\sum \limits _{i=1}^nw_i\frac {F_i}{F_i+C_i}\). Note that if F i + C i = 0, demand point i is outside the catchment area of all competing facilities, no buying power is captured, and the demand at demand point i is lost. Following the change in attractiveness, the market share attracted by the chain is \(\sum \limits _{i=1}^nw_i\frac {F_i+q_i}{F_i+C_i+q_i}\). The increase in market share captured by one’s chain for a given vector q = {q i}, ΔM(q) is given by Eq. (12.3).

Define α i(q i) as the proportion of the demand from demand point i added to the chain’s market share when demand point i is attracted to q i additional chain facilities:

$$\displaystyle \begin{aligned} \alpha_i(q_i)=\frac{q_iC_i}{(F_i+C_i)(F_i+C_i+q_i)}. \end{aligned} $$
(12.12)

It follows that

$$\displaystyle \begin{aligned} \varDelta M(q)=\sum_{i=1}^nw_i\alpha_i(q_i). \end{aligned} $$
(12.13)

It is easy to show that:

Property 12.1

0 ≤ α i(q i) ≤ 1.

Property 12.2

If q i = 0, there is no increase in the market share captured from demand point i regardless of the values of F i and C i, thus α i(0) = 0.

Property 12.3

If F i = C i = 0 and q i > 0, then the gain in market share from demand point i is w i, thus α i(q i) = 1.

12.3.1.3 Preliminary Analysis

When a new facility is established, one can assign a radius of zero to it so that it attracts only the demand point at which it is located. However, the setup cost is added to the total cost. In order to simplify the presentation we assume that potential new locations have a radius of − 𝜖 for a very small 𝜖 > 0, and f(−𝜖) = 0.

If the radius of facility j is increased from \(r_j^o\) to \(r_j> r_j^o\), the cost of the increase is \(f(r_j)-f(r_j^o)+S_j\). Otherwise, the cost is zero. The objective is to maximize the market share attracted to one’s chain by increasing some (or all) of the radii, subject to the budget constraint:

$$\displaystyle \begin{aligned}\sum\limits_{j=1}^{p}\left\{f(r_j)-f\left(r_j^o\right)\right\}+\sum\limits_{r_j> r_j^o}S_j\le B.\end{aligned}$$

The buying power at demand points that are attracted to one’s chain facilities but are not attracted to any competitor is fully satisfied by the chain. Thus, the contribution of these demand points to the chain’s market share cannot increase. Such demand points can be removed from consideration when calculating the increase in market share captured following the expansion.

Theorem 12.1

For each facility there is a finite number of radii that should be considered for improvement. Consequently, there is a finite number of feasible candidate solutions.

The reader is referred to our paper Drezner et al. (2012) for the proof.

Note that even though the number of feasible solutions is finite, it can be very large. If the total increase in market share were an additive function of the individual market share increases, a dynamic programming solution approach would be possible. However, since this condition does not hold, we propose branch and bound and heuristic algorithms rather than solving a non-linear program.

By Theorem 12.1 there is a list of improvement costs to be considered. To define this list, let d ik be the distance between demand point i and facility k (existing or new) and let B ik be the cost (extra budget required) of increasing the radius of facility k from \(r_k^o\) to d ik:

$$\displaystyle \begin{aligned} B_{ik}=\left\{\begin{array}{ll} f(d_{ik})-f\left(r_k^o\right)+S_k & \mbox{if }d_{ik}>r_k^o\\ 0 & \mbox{if }d_{ik}\le r_k^o. \end{array}\right. \end{aligned} $$
(12.14)

The set {B ik} for a given k may have tied values (when d ik = d mk). The sorted list of costs considered for improvement of facility k, following the removal of tied values, consists of 0 (no improvement) and the remaining distinct values in the vector {B ik}. It is defined as the sorted list of costs B k = {b ik} of cardinality M k so that \(0=b_{1k}<b_{2k}<\ldots <b_{M_kk}\).

By Theorem 12.1, all feasible candidate solutions are \(b_{i_kk}\) for 0 ≤ i k ≤ M k such that \(\sum \limits _{k=1}^pb_{i_kk}\le B\). The additional market share captured by the chain by investing b ik for improving facility k can be calculated for each feasible solution by obtaining q i, i = 1, …, n and applying Eq. (12.1).
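The list B k can be generated directly from the distances; a minimal Python sketch (our own names), with the cost function f and the fixed cost S k passed in:

```python
def improvement_costs(dist_k, r0_k, f, S_k):
    """Sorted list B_k of candidate improvement costs for facility k,
    built from Eq. (12.14) with tied values removed.

    dist_k: distances d_ik from every demand point i to facility k
    r0_k:   present radius of facility k
    f:      cost of building a facility of radius r (non-decreasing)
    S_k:    fixed cost charged if facility k is improved or established
    """
    costs = {0.0}                       # b_1k = 0: the "no improvement" option
    for d in dist_k:
        if d > r0_k:                    # only radii that add coverage matter
            costs.add(f(d) - f(r0_k) + S_k)
    return sorted(costs)
```

For example, with f(r) = r^2, as in the computational experiments of Sect. 12.3.4, the call improvement_costs(dist_k, r0_k, lambda r: r * r, S_k) returns the candidate budgets for facility k.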

12.3.2 A Branch and Bound Algorithm

The total enumeration of all feasible candidate solutions can be performed by first evaluating \(b_{i_11}\) for i 1 = 1, …, M 1, then for each 1 ≤ i 1 ≤ M 1, evaluating all \(b_{i_22}\) for \(0\le b_{i_22}\le B-b_{i_11}\), and so on. That means that for each 2 ≤ k ≤ p evaluating \(0\le b_{i_kk}\le B-\sum \limits _{m=1}^{k-1}b_{i_mm}\).

Since the number of candidate feasible solutions can be prohibitively large, an upper bound on the possible increase in market share captured (once the first k radii are set) is required for solving moderately sized problems.

12.3.2.1 An Upper Bound

We construct an upper bound using a dynamic programming technique on upper bounds for each facility.

Lemma 12.1

α i(q i + 1) ≤ α i(q i) + α i(1).

The proof of this lemma can be found in Drezner et al. (2012).

Let e i be the market share added when demand point i is covered by a single additional chain facility. e i is calculated by (12.1) or (12.12) using the present F i and C i for that demand point and q i = 1:

$$\displaystyle \begin{aligned} e_i=w_i\frac{C_i}{(F_i+C_i)(F_i+C_i+1)} \end{aligned} $$
(12.15)

with the provision that e i = w i when F i + C i = 0.

Theorem 12.2

The market share added from one demand point when it is covered by q i additional chain facilities is less than or equal to q i e i.

Proof

The theorem holds trivially for q i = 0 and q i = 1. For every q i ≥ 0 it follows by mathematical induction on q i, applying Lemma 12.1 in the induction step.

By Theorem 12.2, if we add e i for any instance of demand point i being covered by an additional chain facility, the sum of these values is an upper bound on the additional market share gained. This suggests an upper bound for the additional market share that can be gained by facilities k, …, p once the radii for the first k − 1 facilities are known. Note that both e i (12.15) and the lists B k for k = 1, …, p can be calculated in the preamble to the branch and bound process.

To make the upper bound simpler to calculate and use, a parameter H (for example, H = 10,000) is selected. The budget B is then divided into H equal parts. The upper bound is calculated only for an available remaining budget of \(h\frac B H\) for integer 0 ≤ h ≤ H. If all values of the budget increase b ik are integers, it is convenient to select H = B. When calculating the upper bound for any remaining budget, it is rounded up to the nearest h and the upper bound for this value is used. We create a matrix U of p columns and H rows. U hk is the upper bound for a remaining budget of \(h\frac B H\) available for improving facilities k, …, p.

We calculate the upper bound matrix U backward by applying a dynamic programming approach starting from facility p. A matrix V  of the same dimension as matrix U is calculated. The value of V hk is the additional market share that can be obtained by using a budget \(h\frac B H\) for 0 ≤ h ≤ H to improve facility k. To calculate column k (V hk for 0 ≤ h ≤ H):

1. Set V hk = 0 for h = 0, …, H.

2. Scan in order all demand points i = 1, …, n.

3. The market share added when demand point i is covered by one extra chain facility, e i, is calculated by Eq. (12.15).

4. The extra budget needed to cover demand point i, b ik, is calculated by Eq. (12.14).

5. e i is added to all entries V hk for which the following two conditions hold:

   (a) b ik > 0, and

   (b) \(h\frac B H\ge b_{ik}\), which is the same condition as \(h\ge \frac H B b_{ik}\).

   Note that if b ik = 0, no action is taken regarding demand point i because \(d_{ik}\le r_k^o\) and the demand point is already covered by facility k.

The matrix U is calculated by using a dynamic programming approach on the matrix V . The last column p in U is identical to the last column p in V . The columns from k = p − 1 down are updated in reverse order of the column number by the following recursive formula:

$$\displaystyle \begin{aligned}U_{hk}=\max\limits_{0\le s\le h}\left\{V_{sk}+U_{(h-s),(k+1)}\right\}.\end{aligned}$$

U hk is calculated starting with h = H down to h = 0. Once U hk is calculated, it can replace V hk because only smaller values of h are needed for the calculation of the rest of the column. Therefore, the matrices V  and U can occupy the same space in memory.
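A minimal Python sketch of this dynamic-programming step (our own names; for clarity it keeps U separate from V instead of overwriting V in place):

```python
def upper_bound_matrix(V):
    """Compute U from V by the recursion above.

    V[h][k]: extra market share obtainable by spending h*B/H on facility k,
             for h = 0..H and columns k = 0..p-1 (0-based here)
    Returns U with U[h][k] = upper bound for a remaining budget of h*B/H
    available for improving facilities k, ..., p-1.
    """
    H = len(V) - 1
    p = len(V[0])
    U = [row[:] for row in V]              # column p-1 of U equals column p-1 of V
    for k in range(p - 2, -1, -1):         # remaining columns in reverse order
        for h in range(H + 1):
            U[h][k] = max(V[s][k] + U[h - s][k + 1] for s in range(h + 1))
    return U
```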

12.3.2.2 The Algorithm

Suppose that the costs for the first k facilities are assigned. This is represented by a vector t(1), t(2), …, t(k). The costs are b t(j)j for j = 1, …, k. This represents a node in the tree. The budget used to expand the first k facilities is \(B_0=\sum \limits _{j=1}^kb_{t(j)j}\) which leaves a budget of B − B 0 for expanding the remaining facilities. The upper bound on the additional market share captured by facilities k + 1, …, p is U h,k+1, where \(h=H\frac {B-B_0}{B}\) rounded up. The additional market share captured by the first k facilities is Δ k and Δ is the best solution found so far.

1. Calculate U. Set k = 1, t(1) = 1, with a budget B 0 = 0 (h = H), Δ 1 = 0, and Δ = 0. Note that Δ ≥ 0 can be obtained by a heuristic approach (see Sect. 12.3.3) such as a greedy approach.

2. If Δ k + U h,k+1 ≤ Δ + 𝜖, the rest of the tree from this node is fathomed. Go to Step 4.

3. Set k = k + 1.

   (a) If k = p, calculate the extra market share obtained by using B − B 0 to expand facility p. Update Δ if necessary. Set k = p − 1 and go to Step 4.

   (b) If k < p, set t(k) = 1. B 0 is unchanged. No additional demand points are covered; thus, all F i do not change. Go to Step 2.

4. Set t(k) = t(k) + 1. B 0 is changed to B 0 + b t(k)k − b t(k)−1,k.

   (a) If B 0 > B, the search moves back to facility k − 1. Set k = k − 1.

       • If k > 0, go to Step 4.

       • If k = 0, stop with Δ as the optimal solution within an accuracy of 𝜖.

   (b) If B 0 ≤ B, calculate \(h=H\frac {B-B_0}{B}\) rounded up. The F i and Δ k are updated. Go to Step 2.

The reader is referred to our paper Drezner et al. (2012), which offers some interesting modifications to the branch and bound algorithm.

12.3.3 Heuristic Algorithms

We constructed a greedy heuristic, an ascent approach, and tabu search. We also constructed a simulated annealing algorithm but it did not perform well. Tabu search performed best, while the greedy approach was the fastest. We detail the greedy heuristic, the ascent approach (on which the tabu search is based), and tabu search.

For each existing facility or candidate location for a new facility, a list of all possible improvements and their associated cost is compiled. The solution consists of selecting one value from each list and finding the maximum increase in market share among all feasible selections.

12.3.3.1 The Greedy Heuristic

Following extensive experiments with various strategies we found the following approach the most effective:

1. Start with a cost of zero for each column.

2. Evaluate all feasible increases ΔB in the cost of each column and calculate the resulting market share increase ΔM for each.

3. Select the column and cost that maximize the ratio ΔM∕ΔB.

4. Repeat Steps 2 and 3 until no increase ΔB is feasible.
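A minimal Python sketch of this greedy strategy (our own names), given the per-facility cost lists (for example, built by the improvement_costs sketch above) and an objective evaluator:

```python
def strategic_greedy(cost_lists, budget, objective):
    """Repeatedly pick the facility (column) and cost increase with the
    largest market-share gain per unit of additional budget.

    cost_lists: cost_lists[k] is the sorted list of candidate costs for
                facility k (including 0 for "no improvement")
    objective:  callable mapping the vector of chosen costs to the
                market-share increase
    """
    chosen = [0.0] * len(cost_lists)        # Step 1: zero cost for each column
    spent = 0.0
    while True:
        base = objective(chosen)
        best = None                         # (ratio, column k, new cost c)
        for k, costs in enumerate(cost_lists):
            for c in costs:
                dB = c - chosen[k]
                if dB <= 0 or spent + dB > budget:
                    continue                # not an increase, or over budget
                trial = chosen[:]
                trial[k] = c
                ratio = (objective(trial) - base) / dB
                if best is None or ratio > best[0]:
                    best = (ratio, k, c)
        if best is None:                    # Step 4: no feasible increase left
            return chosen
        _, k, c = best                      # Step 3: best gain per unit budget
        spent += c - chosen[k]
        chosen[k] = c
```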

12.3.3.2 The Ascent Algorithm

The starting solution for the ascent approach is the greedy solution using either the whole budget or a portion of it. Following extensive experiments, the following search neighborhood was found to be most effective: For all combinations of two columns, increases in cost are evaluated for one column combined with decreases in cost for the other column. The neighborhood consists of all feasible combinations based on pairs of columns. Only a partial set of feasible exchanges should be considered in the neighborhood. In the ascent approach, the largest increase in market share is selected until there is no improved combination in the neighborhood.

12.3.3.3 The Tabu Search

The tabu search (Glover 1986; Glover and Laguna 1997) extends the search once the ascent approach terminates. The moves of the ascent approach are continued, whether improving or not, for a pre-specified number of iterations (including the ascent iterations). A tabu list is defined. It consists of columns whose cost was decreased in recent moves. Increasing the cost of such columns is in the tabu list. Each iteration, one of the two possible moves is selected. If a solution better than the best found solution is found (regardless of the tabu list) it is selected. Otherwise, the solution obtained by a move not in the tabu list with the maximum market share (whether improving or not) is selected. The length of the tabu list is randomly generated between t min and t max for every iteration, where t min and t max are parameters of the algorithm.

Our original paper Drezner et al. (2012) offers several time-saving measures.

12.3.4 Computational Experiments

All solution methods were programmed in Fortran using double-precision arithmetic. The programs were compiled by the Intel 11.1 Fortran Compiler and seven different computers were used for running the programs. All computers used were multi-core computers and all cores, except one, were used on each computer with no parallel processing. A total of 23 cores were used.

To enable an easy replication of our test problems for future comparison, we experimented with the 40 Beasley (1990a) problems designed for testing p-median algorithms. The problems ranged between 100 ≤ n ≤ 900 nodes. The number of new facilities for these problems was ignored. Chain facilities are located on the first ten nodes and the competitors are located on the next 10 nodes. The demand at node i is 1∕i and the cost function is f(r) = r 2. Since all distances are integers, the cost for improving a facility is integer; thus, we set H = B. The same radius of influence was used for existing chain facilities and competing facilities. When new facilities can be added (strategies NEW and JNT), the last n − 10 nodes are candidate locations for the facilities in one’s chain and are assigned a radius of 0. For each of the 40 problems, three sets of problems (strategies IMP, NEW, and JNT) were run, yielding a total of 120 instances. We experimented with various values of the coverage radius, a fixed cost for establishing a new facility, and an available budget.

The branch and bound algorithm starts with a lower bound of zero. No heuristic was run first to establish a tighter lower bound. An accuracy of 𝜖 = 10−5 was employed in the branch and bound approach. Following extensive experiments with the branch and bound and tabu algorithms we set the parameter L to 5 for IMP problems and 0 for NEW and JNT problems. The following parameters were used in the tabu search: The number of iterations was set to 1000, and the length of the tabu tenure was randomly generated every iteration between \(t_{\min }=5\) and \(t_{\max }=8\). The starting solutions for the tabu search are the results of the greedy algorithm using between 10% and 100% (randomly generated) of the available budget.

Optimal solutions were found by the branch and bound algorithm for all 40 IMP problems, 19 of the 40 NEW problems, and 15 of the 40 JNT problems. The average IMP solution was 5.87% below the JNT solution and the average NEW solution was 5.96% below the JNT one.

The tabu search for the IMP problems was replicated 10,000 times, while it was replicated 100 times for the other two strategies. The tabu search found the best known solution at least once for all 120 problems. It was found in 96.2% of the runs for the IMP strategy, 81.2% for the NEW strategy, and 75.9% for the JNT strategy. The average tabu solution was only 0.002% below the optimum for the IMP strategy, 0.320% below the best known NEW strategy solutions, and 0.116% below the best known JNT strategy solutions. The branch and bound results for the NEW strategy were 1.03% below the best known solutions (obtained by tabu search) and the JNT strategy solutions were 0.27% below the best known solutions.

The reader is referred to Drezner et al. (2012) for full results.

12.4 A Leader–Follower Model for Discrete Competitive Location

Our next follow-up paper features a game-theoretical approach. The following two extensions of Drezner et al. (2011) were incorporated in Drezner et al. (2015):

Budget Constraints:

Combining the location decision with facility design (treating the attractiveness level of the facility as a variable) was recently investigated in Aboolian et al. (2007), Drezner (1998), Drezner et al. (2011, 2012), Fernandez et al. (2007), Plastria and Carrizosa (2004), Redondo et al. (2010), and Toth et al. (2009). Drezner (1998) assumed that the facilities’ attractiveness levels are variables. In that paper it is assumed that a budget is available for locating new facilities and for establishing their attractiveness levels. One needs to determine the facilities’ attractiveness levels so that the available budget is not exceeded. Plastria and Vanhaverbeke (2008) combined the limited budget model with the leader–follower model. Aboolian et al. (2007) studied the problem of simultaneously finding the number of facilities, their respective locations, and their attractiveness (design) levels.

Leader–Follower:

The leader–follower model (Stackelberg 1934) considers a competitor’s reaction to the leader’s action. The leader decides to expand his chain. The follower is aware of the action taken by the leader and expands his facilities to maximize his own market share. The leader’s objective becomes maximizing his market share following the follower’s reaction. The leader–follower location model in a competitive environment was investigated in Drezner and Drezner (1998), Drezner (1982), Küçükaydın et al. (2012), Plastria and Vanhaverbeke (2008), Redondo et al. (2013, 2010), Saidani et al. (2012), and Sáiz et al. (2009).

With these extensions, we were able to analyze and solve the leader–follower model incorporating facilities’ attractiveness (design), subject to limited budgets for both the leader and the follower. We were also able to investigate the main source of the extra market share for the leader and the follower.

12.4.1 Formulation of the Leader–Follower Model

We adapted the cover-based model of Drezner et al. (2011, 2012). In our first paper on competitive location, Drezner et al. (2011), the location of p new facilities with a given radius is sought so as to maximize the market share captured by one’s chain. In our second paper, Drezner et al. (2012), three strategies were investigated: In the improvement strategy (IMP) only the improvement of existing chain facilities is considered; in the construction strategy (NEW) only the construction of new facilities is considered; and in the joint strategy (JNT) both improvement of existing chain facilities and construction of new facilities are considered. All three strategies are treated in a unified model by assigning a radius of zero to potential locations of new facilities.

In this new formulation, the leader employs one of the three strategies and the follower also implements one of these three strategies. This setting gives rise to nine possible models, each a combination of the strategy employed by the leader and the strategy employed by the follower. For example, the leader may employ the JNT strategy, i.e., consider both improving existing facilities and establishing new ones, while the follower may employ the IMP strategy, i.e., only consider the improvement of his existing facilities. The most logical model is for both the leader and the follower to employ the JNT strategy, which yields the highest market share. However, constructing new facilities or improving existing ones may not be a feasible option for the leader or the follower.

12.4.1.1 Notation

The set of potential locations for the facilities is discrete.

  • N: The set of demand points, of cardinality n
  • w_i: The buying power at demand point i, i = 1, …, n
  • L_i: The number of facilities belonging to the leader's chain that attract demand point i
  • F_i: The number of follower's facilities attracting demand point i
  • B_L: The budget available to the leader for increasing the attractiveness of existing facilities or constructing new ones
  • B_F: The budget available to the follower for increasing the attractiveness of existing facilities or constructing new ones
  • P_L: The set of the existing leader's facilities, including potential locations for new facilities, of cardinality p_L
  • P_F: The set of the existing follower's facilities, including potential locations for new facilities, of cardinality p_F
  • p: The total number of facilities, p = p_L + p_F
  • d_ij: The distance between demand point i and facility j
  • \(r_j^o\): The present radius of facility j for j = 1, …, p. For new facilities \(r_j^o=-\epsilon \) for a very small 𝜖 to guarantee that new facilities do not attract demand at their potential location
  • f(r): The cost of building a facility of radius r (a non-decreasing function of r)
  • r_j: The unknown radius assigned to facility j for j = 1, …, p
  • R: The set of unknown radii \(\left \{r_j\right \}\) for j = 1, …, p
  • S_j: The fixed cost incurred if facility j is improved or established, i.e., \(r_j>r_j^o\) for existing facilities and r_j ≥ 0 for new facilities
  • C(r_j): The cost of improving facility j to a radius r_j. It is zero if \(r_j=r^o_j\), and \(f(r_j)-f(r^o_j)+S_j\) otherwise

Note that the radii r_j are continuous variables. However, it suffices to consider a finite number of radii in order to find an optimal solution. Consider the sorted vector of distances between facility j and all n demand points. A radius between two consecutive distances covers the same demand points as a radius equal to the shorter of the two distances, yielding the same value of the objective function. Since the improvement cost is an increasing function of the radius, an optimal solution exists in which every radius equals a distance to some demand point.
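As an illustration of this observation, the following minimal Python sketch (the function name and arguments are ours, not from the original papers) builds the candidate radius list for one facility: the facility's current radius plus every demand-point distance that exceeds it.

```python
def candidate_radii(current_radius, distances):
    """Return the finite list of radii worth considering for one facility.

    Any optimal radius either keeps the current radius or equals the
    distance to some demand point, so 'in-between' radii can be discarded
    without loss of optimality.
    """
    return [current_radius] + sorted(d for d in set(distances) if d > current_radius)

# Example: a facility with current radius 20 and six demand points.
print(candidate_radii(20, [12.5, 18.0, 20.0, 27.3, 31.1, 44.0]))
# -> [20, 27.3, 31.1, 44.0]
```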

12.4.1.2 Calculating the Market Share

For demand point i, the numbers L_i and F_i can be calculated by counting the number of leader's facilities that cover demand point i and the number of follower's facilities that cover it. Formally, let \(\delta_{ij}(R)=1\) if \(d_{ij}\le r_j\) and \(\delta_{ij}(R)=0\) otherwise; then L_i and F_i for a given strategy R are

$$\displaystyle \begin{aligned} L_i=\sum_{j\in P_L}\delta_{ij}(R);~~F_i=\sum_{j\in P_F}\delta_{ij}(R). \end{aligned} $$
(12.16)

The objective functions of the leader and the follower, before locating new facilities, are then calculated as (Drezner et al. 2011, 2012):

$$\displaystyle \begin{aligned} MS_L=\sum_{i=1}^nw_i\frac{L_i}{L_i+F_i}. \end{aligned} $$
(12.17)
$$\displaystyle \begin{aligned} MS_F=\sum_{i=1}^nw_i\frac{F_i}{L_i+F_i}. \end{aligned} $$
(12.18)

Note that if F_i = L_i = 0, then the ratio \(\frac {L_i}{L_i+F_i}\) in (12.17) and the ratio \(\frac {F_i}{L_i+F_i}\) in (12.18) are taken to be zero, and the demand w_i associated with demand point i is lost.

Suppose the leader improves some of his facilities and establishes new ones. Note that F_i does not depend on the actions taken by the leader. The follower's problem is thus well defined following the leader's action and can be optimally solved by the branch and bound algorithm detailed in Drezner et al. (2012).

Once the follower's optimal solution is known, the leader's objective function is well defined: his market share is calculated by (12.17), incorporating the changes to the follower's facilities' locations and radii.
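To make the market share calculation concrete, here is a minimal Python sketch of Eqs. (12.16)–(12.18) under our reading of the model; the function and argument names are ours, and coverage is determined by comparing distances with the assigned radii.

```python
def market_shares(w, dist, radii, leader_idx, follower_idx):
    """Compute (MS_L, MS_F) according to Eqs. (12.16)-(12.18).

    w            : list of buying powers w_i
    dist         : dist[i][j] = distance between demand point i and facility j
    radii        : radii[j] = radius assigned to facility j (strategy R)
    leader_idx   : indices of the leader's facilities (set P_L)
    follower_idx : indices of the follower's facilities (set P_F)
    """
    ms_l = ms_f = 0.0
    for i, wi in enumerate(w):
        L_i = sum(1 for j in leader_idx if dist[i][j] <= radii[j])
        F_i = sum(1 for j in follower_idx if dist[i][j] <= radii[j])
        if L_i + F_i > 0:                 # otherwise the demand w_i is lost
            ms_l += wi * L_i / (L_i + F_i)
            ms_f += wi * F_i / (L_i + F_i)
    return ms_l, ms_f
```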

12.4.1.3 Calculating the Increase in Market Share

To solve the follower's problem it is more efficient to maximize the increase in market share rather than the market share itself, because the upper bounds developed for the increase in market share are tighter. Demand point i is in the catchment area of L_i leader's facilities and F_i follower's facilities. Suppose that the radii of some of the follower's facilities are increased so that the number of follower's facilities covering demand point i increases from F_i to F_i + ΔF_i. Let Q ⊆ N be the set of demand points for which ΔF_i > 0. For i ∈ Q the buying power attracted by the follower was \(w_i\frac {F_i}{F_i+L_i}\) and after the change it is \(w_i\frac {F_i+\varDelta F_i}{F_i+L_i+\varDelta F_i }\), leading to a market share increase of \(w_i\frac {\varDelta F_i L_i}{(F_i+L_i)(F_i+L_i+\varDelta F_i)}\). The total increase in market share is therefore:

$$\displaystyle \begin{aligned} \varDelta M_F=\sum_{i\in Q}w_i\frac{\varDelta F_iL_i}{(F_i+L_i)(F_i+L_i+\varDelta F_i)}. \end{aligned} $$
(12.19)

Note that when F_i = L_i = 0 for i ∈ Q, the ratio in (12.19) is taken to equal one, so for such demand points the follower's market share increases by w_i. The demand w_i was lost before the change because no facility attracted it; following the increase in radii, the whole demand w_i is captured by the follower.
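The increase (12.19) can be evaluated directly from the before/after coverage counts. A minimal sketch, assuming those counts are already available for each demand point (names are ours):

```python
def market_share_increase(w, L, F, dF):
    """Increase in the follower's market share, Eq. (12.19).

    w, L, F, dF are lists indexed by demand point: buying power, leader
    coverage count, follower coverage count before the change, and the
    additional follower coverage gained by the change.
    """
    gain = 0.0
    for wi, Li, Fi, dFi in zip(w, L, F, dF):
        if dFi == 0:
            continue                  # demand point not in the set Q
        if Li + Fi == 0:
            gain += wi                # previously lost demand is now fully captured
        else:
            gain += wi * dFi * Li / ((Fi + Li) * (Fi + Li + dFi))
    return gain
```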

12.4.1.4 The Objective Functions

The follower wishes to maximize the increase in his market share ΔM_F calculated by Eq. (12.19). The follower "knows" the values of L_i for i = 1, …, n because his competitor (the leader) has already taken action. The follower can increase the radii of influence of his facilities subject to his available budget; the increased radii define the set Q of demand points that are covered by at least one additional follower facility. The leader anticipates the follower's reaction. Therefore, once the follower's problem is solved, the values of L_i and F_i for i = 1, …, n are all known and the leader's value of the objective function is calculated by (12.17).

12.4.1.5 The Constraints

The leader and the follower cannot exceed their respective budgets. For a combined strategy R = {r_j} of both competitors the constraints are

$$\displaystyle \begin{aligned} \sum_{j\in P_L}C(r_j)\le B_L;~~~\sum_{j\in P_F}C(r_j)\le B_F. \end{aligned} $$
(12.20)

12.4.1.6 The Two Formulations

Once the strategy of the leader is known, and thus the values L_i are defined, the follower's problem is

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \max\limits_{r_j,~j\in P_F}\left\{\sum_{i=1}^nw_i\frac{\sum_{j\in P_F}\delta_{ij}(R)}{L_i+\sum_{j\in P_F}\delta_{ij}(R)}\right\}\\ \mbox{Subject to:}\\ &\displaystyle &\displaystyle \sum_{j\in P_F}C(r_j)\le B_F. \end{array} \end{aligned} $$
(12.21)

The leader’s problem needs to be formulated as a bi-level programming model (Gao et al. 2005; Sun et al. 2008):

$$\displaystyle \begin{aligned} \begin{array}{rcl} &&\max\limits_{r_j,~j\in P_L}\left\{\sum_{i=1}^nw_i\frac{\sum_{j\in P_L}\delta_{ij}(R)}{\sum_{j\in P_L}\delta_{ij}(R)+\sum_{j\in P_F}\delta_{ij}(R)}\right\}\\ \mbox{Subject to:}\\ &&\sum_{j\in P_L}C(r_j)\le B_L\\ &&r_j\mbox{ for }j\in P_F=\arg\left[\begin{array}{l} \max\limits_{r_j,~j\in P_F}\left\{\sum_{i=1}^nw_i\frac{\sum_{j\in P_F}\delta_{ij}(R)}{\sum_{j\in P_L}\delta_{ij}(R)+\sum_{j\in P_F}\delta_{ij}(R)}\right\}\\ \mbox{subject to:} \sum_{j\in P_F}C(r_j)\le B_F.\end{array}\right] \end{array} \end{aligned} $$
(12.22)

Note that the follower's problem may have several optimal solutions (each possibly resulting in a different value of the leader's objective), and the leader does not know which of these the follower will select. This issue exists in all leader–follower models.

12.4.2 Solution Algorithms

The follower’s problems are identical to the three problems analyzed in Drezner et al. (2012) because market conditions are fully known to the follower. A branch and bound algorithm as well as a tabu search (Glover 1977, 1986; Glover and Laguna 1997) were proposed in Drezner et al. (2012) for the solution of each of these three strategies.

The branch and bound algorithm is based on an upper bound on the increase in market share which is calculated by dynamic programming. The set of possible locations of facilities is discrete. It is shown in Drezner et al. (2012) that, in order to find an optimal solution, only a finite list of radii needs to be considered at each location. Branching is performed on a tree whose nodes are combinations of facilities’ locations and their possible radii. The same algorithms are used for solving the three strategies. The follower’s problem should be solved by the branch and bound algorithm because it is essential to get his optimal solution.

It may be problematic to develop an effective upper bound for the leader's problem. Even if such an upper bound could be constructed, the number of nodes to be evaluated by the branch and bound procedure might be prohibitive. According to the computational experiments reported in Drezner et al. (2012), solving the simplest strategy (IMP) may require scanning more than 18,000 nodes for n = 200 problems and almost 170 million nodes for one n = 300 problem. Such a large number of nodes severely restricts the size of the problems that can be solved. In Drezner et al. (2015) we therefore proposed to solve the leader's problem by a tabu search algorithm. Note, however, that evaluating each move in the neighborhood requires finding an optimal solution to the follower's problem, so the size of the neighborhood must be kept small; for the same reason, the number of iterations is quite limited as well. The tabu search algorithm we proposed for the leader's problem is similar to the algorithm proposed in Drezner et al. (2012).

12.4.2.1 The Greedy-Type Heuristic for Generating Starting Solutions

For each of the three strategies there is a list of existing facilities (either existing chain facilities with their given radii or potential locations for new facilities with a radius of zero). For each such facility a feasible list of radii is constructed. As explained in Drezner et al. (2012) this list consists of all the distances between the facility and the demand points. A solution is represented by a radius assigned to each facility (either no change in the radius or an increase in the radius with a setup cost added to the total cost).

A leader's starting solution satisfying the budget constraint is generated; no reaction by the follower is considered in this greedy-type algorithm. Following extensive experiments with various strategies, we found that the approach below generates the best starting solutions for heuristic algorithms (such as tabu search) applied to the leader's problem. The follower's problem is solved by a rigorous algorithm, so a starting solution is not needed for it. To insert a random component into the process, so that different starting solutions are generated when the process is repeated, only a random percentage of the budget, drawn from a given range, is used. A sketch of the procedure follows the steps below.

  1. A budget of zero (i.e., the existing radius of the facility) is assigned to each facility.
  2. Evaluate all feasible increases ΔB in the budget of each facility and calculate the resulting market share increase ΔM for each.
  3. Select the facility (and budget increase) that maximizes ΔM/ΔB.
  4. Update the budget of the selected facility and the remaining available budget.
  5. Repeat Steps 2–4 until no increase ΔB is feasible.
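A minimal Python sketch of this greedy construction, under our reading of the steps above; `evaluate_increases` is a hypothetical helper that would enumerate the feasible radius upgrades of a facility and the resulting market share gains.

```python
import random

def greedy_start(facilities, total_budget, evaluate_increases, budget_fraction=(0.1, 1.0)):
    """Greedy-type starting solution: repeatedly buy the radius increase with
    the best market-share gain per budget unit (Delta M / Delta B).

    facilities         : iterable of facility identifiers
    total_budget       : the leader's budget B_L
    evaluate_increases : evaluate_increases(facility, spent, remaining) ->
                         list of (delta_budget, delta_market_share) options
                         (hypothetical helper, not from the original papers)
    budget_fraction    : only a random share of the budget is used, so that
                         repeated calls generate different starting solutions
    """
    budget = total_budget * random.uniform(*budget_fraction)
    spent = {f: 0.0 for f in facilities}           # Step 1: zero extra budget everywhere
    remaining = budget
    while True:
        best = None                                # (ratio, facility, dB)
        for f in facilities:                       # Step 2: evaluate feasible increases
            for dB, dM in evaluate_increases(f, spent[f], remaining):
                if 0 < dB <= remaining:
                    ratio = dM / dB
                    if best is None or ratio > best[0]:
                        best = (ratio, f, dB)
        if best is None:                           # Step 5: no feasible increase remains
            return spent
        _, f, dB = best                            # Step 3: best Delta M / Delta B
        spent[f] += dB                             # Step 4: update budgets
        remaining -= dB
```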

12.4.2.2 The Tabu Search Algorithm

We used a tabu search algorithm to solve the leader's problem. For each leader's solution, the value of the leader's objective function is calculated by optimally solving the follower's problem and evaluating the extra market share attracted by the leader following the follower's reaction. A tabu list is maintained; at the start it is empty. A facility whose radius was recently reduced (note that it cannot drop below the existing radius) is in the tabu list, meaning that its radius is not considered for an increase unless the move improves the best value of the objective function found so far. A facility remains in the tabu list for tabu tenure iterations, and the tabu tenure is randomly re-generated within a given range every iteration. The process continues for a pre-specified number of iterations, and the best solution encountered during the search is the result of the tabu search.

12.4.2.3 The Neighborhood of the Tabu Search

For solving the leader's problem we apply the neighborhood that was successfully employed in Drezner et al. (2012) for solving the follower's problem, with all the time-saving measures described there. For completeness we briefly describe the neighborhood construction.

  • Let the budget currently used to expand facility k be b_k.
  • In the current iteration a total budget of \(B_0=\sum \limits _{k=1}^pb_k\) is used.
  • The maximum budget used by any single facility is \(B_{\max }=\max \limits _{1\le k\le p}\{b_k\}\).
  • Cost increases are considered for every facility k = 1, …, p, up to a budget of \(B_L+B_{\max }-B_0\).
  • For each such k and its cost increase, cost reductions are considered for all j ≠ k with b_j > 0. Let the total budget following the increase in the budget of facility k be \(B^\prime _L\); it is required that \(b_j\ge B^\prime _L-B_L\). Decreases in b_j are considered sequentially, starting from the smallest decrease and moving up, up to the largest decrease that still guarantees that the radius of facility j is not smaller than its existing radius.
  • Once a decrease in b_j leads to a total budget not exceeding B_L for the first time, the resulting solution is considered as a move. Consequently, for each pair of facilities k, j, at most one radius for facility j is considered. Note that if the radius of a facility is decreased and some demand points that were covered by the facility are no longer covered, Eq. (12.19) can still be used by defining Q as the set of demand points whose cover was reduced and using as F_i the number of facilities covering the demand point after the change.

It should be emphasized that the follower's problem is optimally solved for each move in the neighborhood, yielding the leader's objective function for that member of the neighborhood. This is required only for one reduction in the budget of some facility j for each increase in the budget of some facility k, which limits the number of follower's problems to be solved at each iteration of the tabu search to p_L.
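The following sketch enumerates such moves under our reading of the bullets above; `increases` and `decreases` are hypothetical helpers (not from the original papers) that would supply the candidate budget changes for a facility, with `increases` assumed to be bounded by \(B_L+B_{\max }-B_0\) and `decreases` sorted from smallest to largest and never dropping a radius below its existing value.

```python
def neighborhood_moves(b, B_L, increases, decreases):
    """Yield neighborhood moves: raise the budget of one facility k and,
    if the total budget then exceeds B_L, lower the budget of one other
    facility j just enough to restore feasibility.

    b         : dict facility -> extra budget currently spent on it
    B_L       : the leader's total budget
    increases : increases(k) -> candidate budget increases for facility k
    decreases : decreases(j) -> candidate budget decreases for facility j,
                sorted smallest first
    """
    B0 = sum(b.values())
    for k in b:
        for inc in increases(k):
            new_total = B0 + inc
            if new_total <= B_L:
                yield (k, inc, None, 0.0)          # feasible without any reduction
                continue
            for j in b:
                if j == k or b[j] <= 0:
                    continue
                for dec in decreases(j):           # smallest feasible decrease only
                    if dec <= b[j] and new_total - dec <= B_L:
                        yield (k, inc, j, dec)
                        break                      # at most one radius of j per pair (k, j)
```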

12.4.2.4 The Algorithm

Let F(X) = MS_L(X) be the value of the leader's objective function for a solution X.

  1. Generate a feasible starting solution X = X_0 and empty the tabu list. The best solution found so far is X* = X_0, with the best value of the objective function found so far F* = F(X*).
  2. The tabu tenure is randomly generated in a pre-specified range \([t_{\min },t_{\max }]\).
  3. The value of the objective function is evaluated at all solutions in the neighborhood of X.
  4. Let X′ be the best solution in the neighborhood.
  5. If F(X′) > F*, the next iterate is X = X′. The facility whose radius was reduced is entered into the tabu list, and X* and F* are updated. Go to Step 7.
  6. Otherwise, let X″ be the best solution in the neighborhood for which the facility whose radius is increased is not in the tabu list. The next iterate is X = X″. The facility whose radius was reduced is entered into the tabu list.
  7. Increase the iteration number by one. Go to Step 2 unless the pre-specified number of iterations is reached.
  8. The result of the tabu search is X*, with value of the objective function F*.

An efficient way to handle the tabu list (especially when the tabu tenure is randomly generated) is to maintain a tenure vector for all facilities. Initially, a large negative number is recorded for every facility. When a facility enters the tabu list, the current iteration number is recorded for it. A facility is in the tabu list if the difference between the current iteration number and its recorded value in the tenure vector is less than or equal to the tabu tenure.
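A minimal sketch of this bookkeeping (the class and attribute names are ours), with the tenure vector stored as a plain list:

```python
class TabuTenure:
    """Tabu list implemented as a tenure vector of 'last reduced' iterations."""

    def __init__(self, num_facilities):
        self.last_reduced = [-10**9] * num_facilities   # large negative initial value

    def enter(self, facility, iteration):
        """Record that the facility's radius was reduced at this iteration."""
        self.last_reduced[facility] = iteration

    def is_tabu(self, facility, iteration, tenure):
        """A facility is tabu while (iteration - recorded value) <= tenure."""
        return iteration - self.last_reduced[facility] <= tenure
```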

12.4.3 Computational Experiments

As in our previous papers, we experimented with the 40 Beasley (1990a) problem instances designed for testing p-median algorithms, in order to enable easy replication of our results. The problems range in size from n = 100 to n = 900 nodes. The number of facilities specified for these p-median instances was ignored. The leader's facilities are located on the first ten nodes and the follower's facilities on the next ten nodes. The problems are those tested in Drezner et al. (2012). The demand at node i is 1∕i (for testing problems with no reaction by the competitor) and the cost function is f(r) = r². The same radius of influence was used for existing leader's and follower's facilities. When new facilities can be added (strategies NEW and JNT), n − 10 nodes are candidate locations for the new facilities (nodes occupied by one's own facilities are not candidates) and are assigned a radius of zero. We used \(r_j^o=20\), S_j = 0 for expanding existing facilities, and the same S_j > 0 for establishing any new facility.

The programs for finding the optimal solution with no reaction by the competitor were coded in Fortran, compiled by the Intel 11.1 Fortran Compiler with no parallel processing, and run on a desktop with the Intel 870/i7 2.93 GHz CPU Quad processor and 8 GB RAM. Only one thread was used. According to the computational experiments in Drezner et al. (2012), a budget of 5000 leads to very long computational times. Therefore, in preparation for solving the leader–follower model, we first experimented with a budget of 1500. For larger budgets run times may be prohibitive, and it may be necessary to replace the branch and bound procedure with the effective tabu search described in Drezner et al. (2012).

The branch and bound optimal algorithm is used to solve the follower's problem. Since it is invoked numerous times within the solution procedure for the leader's problem, we opted for a budget of 1500 and a setup cost of 500 for both the leader and the follower. Both the leader and the follower apply the JNT strategy. Tabu search, which does not guarantee optimality, is used to solve the leader's problem. We therefore repeated the solution of each problem instance at least 20 times to assess the quality of the tabu search solutions. We were able to solve, in reasonable run times, problems with up to 400 demand points.

12.4.3.1 Computational Experiments with No Competitor’s Reaction

In our "leader–follower" paper, Drezner et al. (2015), we first reported experiments on solving the leader's problem with no reaction from the follower. This was needed to establish a baseline and was performed by the rigorous branch and bound algorithm. When solving the leader–follower problem, this algorithm is used to find the follower's optimal solution and, consequently, the leader's objective function.

In Drezner et al. (2015), we also reported an analysis of the percentage of market share captured by the chain (leader), by the competitor (follower), and lost, as a function of the budget, following the leader's optimal action. The follower takes no action. The setup cost is S_j = 300 and the JNT strategy is applied. As expected, one's chain market share increases and the competitor's market share declines. Some of the increase in the leader's market share comes at the expense of the competitor and some comes from capturing demand that was previously lost. It is interesting that the proportion of the additional market share taken from the competitor remains almost constant for all budgets tested. The average over all 40 problems is 44.2% for a budget of 1500, 44.2% for a budget of 2000, 44.9% for a budget of 2500, and 46.7% for a budget of 5000. The larger portion of the market share gained comes from lost demand; these percentages are the complements of the percentages gained from the competitor, about 55%.

Using the branch and bound algorithm for the JNT strategy, the leader's problem with no follower reaction was solved for instances with up to 400 demand points in less than a quarter of a second. We then tested the three strategies for a setup cost of S_j = 300. The lower setup cost provides more options for the leader and, therefore, the run times (and numbers of nodes) are significantly higher, especially for large values of n.

12.4.3.2 Computational Experiments for Solving the Leader–Follower Problem

The tabu search procedure for finding the leader's best solution after the follower's reaction was programmed in C#. Its effectiveness was first tested on the 160 JNT instances (40 instances for each budget) that were solved optimally assuming no follower reaction. Our tabu search found optimal solutions for 148 of the 160 instances and sub-optimal solutions (average error 0.11%, maximum error 0.41%) for the remaining 12 instances. For the budget of 1500, only one instance (#31) was not solved optimally by the tabu search, and its error was 0.04%. In the subsequent experiments reported in Drezner et al. (2015), we considered a budget of 1500.

Next, we extended the leader's solution procedure by adding the branch and bound procedure (the follower's solution) to the tabu search. The original parameters used for solving the leader's problem (\(w_i=\frac 1 i\)) did not provide interesting results: the weights decline as the index of the demand point increases, so both the leader and the follower concentrated their effort on attracting demand from demand points with low indices (high w_i) and "ignored" demand points with higher indices. We therefore assigned an equal weight of 1 to all demand points. We used a budget of 1500 and a setup cost of 500 for both the leader and the follower.

As a result of extensive experiments, the following parameters were used in the tabu search for solving the leader’s problem: The number of iterations was set to 1000, and the length of the tabu tenure was randomly generated every iteration between \(t_{\min }=5\) and \(t_{\max }=8\). The starting solutions for the tabu search are the results of the greedy algorithm described in Sect. 12.4.2.1 using between 10% and 100% (randomly generated) of the available budget.

The optimal solution for the follower was found using the Fortran program that finds the follower's optimal solution once the action taken by the leader is known. Run times were quite long, so we solved only the first 20 problems, with up to 400 demand points. We solved the first ten problems 100 times each, the next five problems (n = 300) 50 times each, and the last five problems (n = 400) 20 times each. Recall that the follower's problem was solved optimally, and thus these results are valid.

The reader is referred to our paper, Drezner et al. (2015), for details and a comprehensive discussion of the results.

12.5 The Multiple Markets Competitive Location Problem

In our third follow-up paper Drezner et al. (2016), we consider a competitive location model with a very large number of demand points and facilities. Applying existing solution methods may, at best, provide a good heuristic solution.

The basic problem the company faces is how to invest its available budget in order to expand chain facilities, either by improving the attractiveness of some existing ones, by building new facilities, or by a combination of both actions. Such problems cannot be optimally solved for large instances with currently available computational resources. In Drezner et al. (2016), we investigated a special case for which optimal solutions may be obtained for large problems, and illustrated this approach by optimally solving a problem with 5000 demand points and 400 existing facilities (200 chain facilities and 200 competing facilities).

It is quite common for large problems that a large market area consists of a union of mutually exclusive sub-markets. An international corporation (for example, McDonald's) has facilities in many markets that are mutually exclusive, i.e., customers in one market area do not patronize outlets in other markets, or cross-patronizing between markets is negligible. This may well be the case even on a smaller scale, when the market can be partitioned into "almost" mutually exclusive sub-markets because large distances separate clusters of demand points. For example, urban areas in Texas such as Dallas, Houston, San Antonio, and Austin are mutually exclusive: consumers residing in Dallas will rarely patronize a McDonald's outlet in San Antonio.

The contribution of our Drezner et al. (2016) paper was twofold: (1) dealing with multiple mutually exclusive sub-markets, and (2) discretizing the budget so that its allocation to each sub-market is not a continuous variable.

Suppose that the market can be partitioned into m mutually exclusive sub-markets. If we knew the budget allocated to each sub-market, we could find the optimal solution (where to locate new facilities and which existing facilities to expand) for each sub-market separately, which simplifies the formulation. However, the resulting problem is intractable as well, because m variables representing the budget allocated to each sub-market are added to the formulation (in addition to the decision variables in each sub-market), together with a constraint that the sum of these individual budgets equals the available budget. A Lagrangian approach (adding a Lagrange multiplier for the constraint on the total budget and finding its value) is not applicable to this particular problem because the profit obtained in a sub-market, as a function of the budget allocated to that sub-market, is not available as an explicit expression.

Three objectives are investigated: (1) maximizing the firm's profit, (2) maximizing the firm's return on investment, and (3) maximizing profit subject to a minimum acceptable return on investment. The last objective is similar in many ways to the threshold concept, where the objective is to minimize the probability of falling short of a profit threshold or of a cost overrun (Drezner et al. 2002b; Drezner and Drezner 2011). The first paper to introduce the threshold concept was Kataoka (1963), in the context of transportation problems. Frank (1966, 1967) considered a model of minimizing the probability that the cost function of the Weber or minimax problem (Love et al. 1988) on a network exceeds a given threshold. The threshold concept has been employed in financial circles as a form of insurance on a portfolio, either to protect the portfolio or to protect the firm's minimum profit (Jacobs and Levy 1996).

12.5.1 Multiple-Market Competitive Location Solution Approach

There are m mutually exclusive sub-markets, each with given data about chain facilities, competitors, and demand points. A budget B is available for investment in all m sub-markets. In order to diversify the investment, we may impose a maximum budget of B_0 in each of the sub-markets; the maximum budget can differ across sub-markets. Suppose that the budget B is divided into K units, each of \(\frac B K\) dollars. For example, for K = 1000 each unit is 0.1% of the total budget. Since all m sub-markets are mutually exclusive, we can find the maximum profit for each individual sub-market when investing in sub-market j = 1, …, m a budget of \(b_j=i\frac B K\) for some 0 ≤ i ≤ K. If the amount invested in a particular sub-market cannot exceed B_0 dollars, then \(\frac {i}{K}B\le B_0\), leading to \(0\le i\le K\frac {B_0}{B}=i_{\max }\). We assume that the maximum profit for a given investment in a given sub-market can be found by an optimal algorithm or, if necessary, by a good heuristic algorithm. The result is a matrix P with \(i_{\max }\) rows and m columns, whose element p_ij for \(1\le i\le i_{\max }\) and 1 ≤ j ≤ m is the maximum profit obtained by investing \(i\frac B K\) in sub-market j (for i = 0 the profit is zero). The problem is solved in two phases:

12.5.1.1 Phase 1: Calculating the Maximum Profit of a Sub-market for All Possible Budgets

Since each sub-market is independent of the other sub-markets, the maximum profit obtained in a sub-market for a given budget can be found by any existing competitive location solution method. There are also heuristic approaches proposed for such problems when a sub-market leads to a large problem. A problem consisting of 5000 demand points is too big for most published approaches. However, as we illustrate below, if such a problem can be divided into 20 sub-markets consisting of between 100 and 400 demand points each, it is tractable for most solution approaches. The following are examples of competitive models and solution approaches that can be applied to find the maximum profit for a sub-market for a given budget allocated to that sub-market:

  • Aboolian et al. (2007) solved the multiple facility location problem with a limited budget in discrete space within a given α% of optimality.

  • Plastria and Vanhaverbeke (2008) solved the problem defined by Aboolian et al. (2007) in a leader–follower modification. The leader–follower model is also termed the Stackelberg’s equilibrium model (Sáiz et al. 2009; Stackelberg 1934).

  • Fernandez et al. (2007) and Toth et al. (2009) solved the same problem as Aboolian et al. (2007) in a planar environment.

  • Drezner and Drezner (2004) solved optimally the single facility problem based on the gravity formulation for a given budget (attractiveness).

  • Drezner et al. (2012) solved optimally the multiple facilities problem with a limited budget in discrete space. New facilities can be constructed and existing facilities improved.

  • Drezner et al. (2015) solved the leader–follower version of the formulation in Drezner et al. (2012). The competitor (follower) is expected to improve his facilities or build new ones in response to the leader’s action. The objective is to maximize the leader’s market share following the follower’s action.

For K = 1000 (a parameter), a matrix P with up to 1001 rows, corresponding to the possible investments, and m columns, corresponding to the m sub-markets, can be calculated by solving at most 1000m sub-problems. Of course, an investment of zero yields zero profit and need not be solved for.
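The construction of the matrix P can be sketched as follows, assuming a black-box single-sub-market solver `solve_submarket(j, budget)`, a hypothetical stand-in for any of the methods listed above (the names and interface are ours):

```python
def build_profit_matrix(m, B, K, i_max, solve_submarket):
    """Phase 1: P[i][j] = maximum profit from investing i*B/K in sub-market j.

    solve_submarket(j, budget) -> maximum profit in sub-market j for that
    budget (hypothetical black-box, e.g., an optimal branch and bound or a
    good heuristic). Row i = 0 (zero investment, zero profit) is included.
    """
    unit = B / K
    P = [[0.0] * m for _ in range(i_max + 1)]
    for j in range(m):
        for i in range(1, i_max + 1):
            P[i][j] = solve_submarket(j, i * unit)
    return P
```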

12.5.1.2 Phase 2: Calculating the Total Profit for All Markets Combined

Once the matrix P is available, the distribution of B among the m sub-markets can be found in two ways. One way is solving a binary linear program and the other way is by dynamic programming.

12.5.1.3 Binary Linear Programming Formulation

Let x_ij, for \(1\le i\le i_{\max }\) and 1 ≤ j ≤ m, be a binary variable that equals 1 if a budget of \(i\frac B K\) is invested in sub-market j and zero otherwise. The total profit is \(\sum \limits _{i=1}^{i_{\max }}\sum \limits _{j=1}^mp_{ij}x_{ij}\) and the total investment is \(\frac B K\sum \limits _{i=1}^{i_{\max }}\sum \limits _{j=1}^mix_{ij}\). The problem is:

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \max\left\{~~~\sum_{i=1}^{i_{\max}}\sum_{j=1}^mp_{ij}x_{ij}~~~\right\}{} \end{array} \end{aligned} $$
(12.23)
$$\displaystyle \begin{aligned} \begin{array}{rcl} \mbox{Subject to:}\\ &\displaystyle &\displaystyle \sum_{i=1}^{i_{\max}}x_{ij}\le 1~~~~~\mbox{ for }j=1,\ldots,m{} \end{array} \end{aligned} $$
(12.24)
$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \sum_{i=1}^{i_{\max}}\sum_{j=1}^mix_{ij}\le K \end{array} \end{aligned} $$
(12.25)
$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle x_{ij}\in\{0,1\}{} \end{array} \end{aligned} $$
(12.26)

which is a binary linear program with \(i_{\max }\times m\) variables and m + 1 constraints. Constraint (12.24) guarantees that at most one budget value is selected for each sub-market; if x_ij = 0 for all i for sub-market j, then no budget is allocated to sub-market j.
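As an illustration, the formulation (12.23)–(12.26) can be written with the PuLP modeling library, a sketch with our own function name and data layout (not the CPLEX setup used in the original experiments); P is the Phase 1 profit matrix with row 0 corresponding to zero investment.

```python
import pulp

def allocate_budget(P, K):
    """Solve the binary linear program (12.23)-(12.26) for the profit matrix P."""
    i_max = len(P) - 1                     # row 0 corresponds to zero investment
    m = len(P[0])
    prob = pulp.LpProblem("budget_allocation", pulp.LpMaximize)
    x = pulp.LpVariable.dicts("x", (range(1, i_max + 1), range(m)), cat="Binary")
    # Objective (12.23): total profit
    prob += pulp.lpSum(P[i][j] * x[i][j] for i in range(1, i_max + 1) for j in range(m))
    # Constraint (12.24): at most one budget level per sub-market
    for j in range(m):
        prob += pulp.lpSum(x[i][j] for i in range(1, i_max + 1)) <= 1
    # Constraint (12.25): total number of budget units does not exceed K
    prob += pulp.lpSum(i * x[i][j] for i in range(1, i_max + 1) for j in range(m)) <= K
    prob.solve()
    # Chosen number of budget units per sub-market (omitted sub-markets get nothing)
    return {j: i for i in range(1, i_max + 1) for j in range(m)
            if pulp.value(x[i][j]) > 0.5}
```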

12.5.1.4 Dynamic Programming

Row zero, with zero values, is added to matrix P. The stages of the dynamic program correspond to the sub-markets: at stage j we compute the maximum profit obtainable from a budget of \(i\frac B K\) when investing only in the first j sub-markets. Let Q = [q_ij] be the matrix in which q_ij is this maximum profit. By definition q_i1 = p_i1, and for 2 ≤ j ≤ m the following recursive relationship holds:

$$\displaystyle \begin{aligned} q_{ij}=\max\limits_{0\le r\le i}\left\{q_{r,j-1}+p_{i-r,j}\right\}. \end{aligned}$$

The values q_im are the maximum possible profit for spending a total budget of \(i\frac B K\) in all sub-markets; some sub-markets may receive no investment. One advantage of dynamic programming over the binary linear programming approach is that a single run of the dynamic program yields the maximum profit for every partial budget, whereas the binary linear program must be solved K times. In addition, the maximum return on investment (ROI) for any partial budget is obtained from the same single run of the dynamic program.
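A minimal Python sketch of this recursion (array and function names are ours); it returns, for every partial budget i, the maximum total profit over all m sub-markets:

```python
def dp_total_profit(P):
    """Dynamic program over sub-markets.

    P[i][j] = maximum profit from investing i budget units in sub-market j,
    with P[0][j] = 0. Returns q[i] = maximum total profit achievable with
    i budget units spread over all sub-markets (the column q_{i,m}).
    """
    i_max = len(P) - 1
    m = len(P[0])
    q = [P[i][0] for i in range(i_max + 1)]        # q_{i,1} = p_{i,1}
    for j in range(1, m):                          # sub-markets 2, ..., m
        q = [max(q[r] + P[i - r][j] for r in range(i + 1))
             for i in range(i_max + 1)]            # q_{ij} = max_r {q_{r,j-1} + p_{i-r,j}}
    return q
```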

12.5.1.5 Maximizing Profit Subject to a Minimum ROI

Finding the maximum profit subject to a minimum ROI can be done using the results obtained when maximizing the profit for a given budget. The ROI is the ratio between the profit and the investment (budget); it can be calculated for each investment value, yielding a vector of ROI values. The maximum profit for an ROI greater than a given value is then the maximum profit over all investments whose ROI exceeds that value.
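Continuing the sketch above, selecting the maximum profit subject to a minimum ROI is a one-pass filter over the dynamic programming output (again, the names are ours):

```python
def max_profit_with_min_roi(q, B, K, alpha):
    """Maximum profit among all budgets whose ROI is at least alpha.

    q[i] is the maximum profit for investing i budget units (from
    dp_total_profit), and an investment of i units costs i*B/K.
    """
    best = 0.0
    for i in range(1, len(q)):
        investment = i * B / K
        if q[i] / investment >= alpha:             # ROI constraint
            best = max(best, q[i])
    return best
```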

It can also be done by solving a binary linear program similar to the formulation presented in Sect. 12.5.1.3: only one additional constraint is added to the formulation (12.23)–(12.26). By definition, the ROI is the ratio between the profit and the investment. Therefore,

$$\displaystyle \begin{aligned} ROI=\frac{\sum\limits_{i=1}^{i_{\max}}\sum\limits_{j=1}^mp_{ij}x_{ij}}{\frac B K\sum\limits_{i=1}^{i_{\max}}\sum\limits_{j=1}^mix_{ij}}. \end{aligned}$$

Suppose that a minimum ROI of α is required. ROI ≥ α is equivalent to:

$$\displaystyle \begin{aligned} \sum_{i=1}^{i_{\max}}\sum_{j=1}^m\left\{p_{ij}-i\alpha \frac B K\right\}x_{ij}\ge 0~~. \end{aligned} $$
(12.27)

Constraint (12.27), which is linear, is added to the formulation (12.23)–(12.26) leading to a binary linear program with \(i_{\max }\times m\) variables and m + 2 constraints.

12.5.2 An Illustrative Multiple Markets Example

Once the maximum profit for a given investment in an individual sub-market is found, our general framework can be implemented. All the formulations and solution procedures described in Sect. 12.5.1.1 can be used for this purpose. In Drezner et al. (2016), we opted to apply the optimal branch and bound algorithm proposed in Drezner et al. (2012) for finding the maximum profit by investing a given budget in a single sub-market.

The networks selected for our sub-markets are the first 20 Beasley (1990a) networks designed for the evaluation of algorithms for solving p-median problems; Beasley (1990a) did not consider competitive models. Demand points, existing facilities, and potential locations for new facilities are located at the nodes of the networks. Distances along links are measured in tenths of miles. These networks are easily available for testing other models as well and can be used for future comparisons.

  • 5000 demand points are located in 20 sub-markets. Each sub-market consists of between 100 and 400 demand points.

  • 200 chain facilities and 200 competing facilities presently operate in these sub-markets.

  • Each demand point has an available buying power to be spent at one’s facilities or the competitors’ facilities.

  • For simplicity of presentation, each sub-market has a total buying power of $150 million for a total of $3 billion.

  • A budget of up to $100 million is available for improvements of existing facilities and construction of new ones. No more than $30 million can be allocated to each sub-market.

  • Existing facilities can be expanded and new facilities can be constructed at any node of the network.

  • Each facility has a “circle of influence” defined by a radius of influence inside which they attract customers.

  • For simplicity of presentation we assume that each existing facility has a radius of influence of 2 miles.

  • The cost of expanding a facility is proportional to the increase in the area of its circle of influence. Expanding a facility from the existing radius of influence of 2 miles to a radius of influence of r miles costs r² − 4 million.

  • Building a new facility with radius of influence r entails a $5 million setup cost plus a cost of r² million.

We want to determine which, if any, of the 200 existing facilities should be expanded and at which of the 4800 potential locations should new facilities be constructed to maximize profit. Maximizing the return on investment (ROI) is also considered, as well as maximizing profit subject to a minimum ROI value. The radii of the expanded and new facilities are variables, for a total of 5000 variables.

The branch and bound optimal algorithm (Drezner et al. 2012) and the dynamic programming procedure were programmed in Fortran using double-precision arithmetic. The programs were compiled by the Intel 11.1 Fortran Compiler and run, with no parallel processing, on a desktop with the Intel 870/i7 2.93 GHz CPU Quad processor and 8 GB memory. Only one thread was used.

The matrix P contains 6020 values (301 rows for a budget of zero and between $0.1 and $30 million, and 20 columns, one for each sub-market), each being the maximum profit for a given budget invested in a given sub-market. Note that an investment of $0 yields a profit of $0. All 6020 optimal solutions that are needed for the construction of matrix P were obtained in about 103 min of computing time.

Once the matrix P is found, obtaining the maximum profit for each partial budget by solving the binary linear programs with CPLEX 12.A took about 3 s per problem for the 300 problems. The 300 results using dynamic programming were obtained in less than 1 s. Finding the maximum profit subject to a minimum ROI requirement by solving the binary linear program required about 1.6 s. Once the 300 dynamic programming results are available, the maximum profit for a minimum ROI is found by constructing a simple Excel file.

In Table 12.1 we summarize the maximum possible profit, the maximum return on investment (ROI), and the corresponding investments leading to these profits and ROIs. In five of the 20 sub-markets no profit is possible and no investment should be made. If an unlimited budget is available and the best investment strategy is selected for each sub-market, the total investment is $298.5 million, leading to a profit of $198.4 million and an ROI of 0.665.

Table 12.1 Individual sub-markets results

Sub-market #20 was selected to depict the profit and the ROI as a function of the investment in that sub-market; these graphs appear in Fig. 12.2. As reported in Table 12.1, the maximum profit of $26.884 million is obtained for an investment of $24.1 million, and the maximum ROI of 1.51 is achieved for an investment of $14.5 million.

Fig. 12.2 Profit and ROI as a function of the investment in sub-market #20

In Fig. 12.3, we depict the profit and the ROI for the total investment in all 20 sub-markets. These values were obtained using dynamic programming. The profit increases as a function of the total investment; the ROI, however, is quite erratic. The ROI reaches its maximum when $5.7 million is invested in sub-market #17 and no investment is made in the other sub-markets.

Fig. 12.3 Profit and ROI as a function of the total investment

In Fig. 12.4, the maximum profit for a minimum ROI value is plotted for investments of up to $100 million. As expected, when a higher minimum ROI is required, the maximum profit declines.

Fig. 12.4 Maximum profit subject to a minimum ROI

12.6 Conclusions

This chapter summarized four papers on competitive location: Drezner et al. (2011, 2012, 2015, 2016). All four papers utilize the radius of influence and are based on an assumption of equal division of buying power among facilities whose radius of influence captures that power.

We presented efficient methods for locating multiple new, and expanding existing, facilities in such a competitive environment. We also presented a leader–follower model in which initial actions of the leader (locate new and/or improve existing facilities) are countered by follower’s response, along with a solution method. Finally, we discussed a multiple disjoint markets problem with real-world objectives and presented efficient solution techniques.

All these methods have a common objective: the maximization of the market share. Since profit is (usually) a monotonically increasing function of market share captured, this objective is associated with maximizing profit.

The original idea of locating new facilities based on their radii of influence called for an extension allowing the expansion of existing facilities in addition to locating new ones. This concept, however, was considered in a static context, in which the decision to locate new facilities and expand existing ones was based on the "ceteris paribus" assumption of classical economics, i.e., no reaction from the competitors. This assumption is relaxed in the leader–follower version of our model, in which the leader's decision takes into account the actions of competitors directly following the leader's decision. As in a game of chess, the leader must carefully consider each move (locating new and expanding existing facilities) in terms of the competitors' counter-moves. Adding this dimension to the problem significantly increased the difficulty of finding a solution, but it also made our model more realistic.

Even in its original formulation, the competitive location problem is combinatorially explosive, as its complexity grows much faster than the number of facilities to be located or expanded. Practical considerations, such as the concentration of demand points in urban areas and the limited sphere of influence of most types of facilities, lead to a model in which multiple disjoint markets are identified and treated as separate environments for locating new and expanding existing facilities. This approach allows tackling large practical competitive location problems that would be too difficult or impossible to solve without splitting them into disjoint sub-markets.

Future research involving our competitive location model might consider downgrading and closing existing facilities (together with opening and upgrading) as a new strategy. Studying the effects of including this new strategy in the leader–follower model is another interesting research avenue to explore. Additional moves in the competitive game (e.g., leader's move, follower's response, leader's response) could also be investigated, albeit on a much smaller scale. Other game-theoretic approaches, such as forming coalitions with some competitors, could be investigated as well. Lastly, a gradual cover extension of our original model (with multiple radii of influence) could be considered.