1 Introduction

Recently, many parts of the world have been struck by disasters such as earthquakes, nuclear plant accidents, volcanic eruptions, and flooding, leading to the recognition of the urgent need for orderly evacuation planning. A powerful tool for evacuation planning is the dynamic flow model introduced by Ford and Fulkerson (1958), which represents the movement of people or goods over time in a network. In this model, we are given a graph with source vertices and sink vertices; each source vertex has a positive weight called a supply, each sink vertex has a positive weight called a demand, and each edge has positive length and capacity, with an edge capacity limiting the amount of supply that can enter the edge per unit time. One variant of the dynamic flow problem is the quickest transshipment problem, the objective of which is to send exactly the right amount of supply from sources to sinks while satisfying the demand constraints in the minimum overall time. Hoppe and Tardos (2000) provided a polynomial-time algorithm for this problem when the transit times are integral, but their algorithm is very complex. Therefore, finding a practical polynomial-time solution to this problem remains an open problem; see the survey by Skutella (2009) on dynamic flows.

Discussed herein is the related sink location problem (Belmonte et al. 2015; Benkoczi et al. 2018, 2020; Bhattacharya et al. 2017; Chen and Golin 2016, 2023; Higashikawa et al. 2014, 2015b; Mamada et al. 2006), the objective of which is to locate sinks in a given dynamic flow network so that all the supply is sent to the sinks as quickly as possible. As the criterion for optimal location, the following two can naturally be considered: (i) minimization of evacuation completion time or (ii) minimization of aggregate evacuation time (i.e., the sum of evacuation times), and we refer to the sink location problem with criterion (i) or (ii) as the CTSL problem or the ATSL problem, respectively. Several previous papers have studied CTSL problems (Belmonte et al. 2015; Bhattacharya et al. 2017; Chen and Golin 2016, 2023; Higashikawa et al. 2014, 2015b; Mamada et al. 2006), but for ATSL problems the results are scarce and only for path networks (Benkoczi et al. 2018, 2020; Higashikawa et al. 2015b).

To model the evacuation behavior of people, it would be natural to treat each supply as a discrete quantity, as in Hoppe and Tardos (2000) and Mamada et al. (2006). However, almost all previous papers on sink location problems (Belmonte et al. 2015; Bhattacharya et al. 2017; Chen and Golin 2016, 2023; Higashikawa et al. 2014, 2015b) have treated each supply as a continuous quantity, because doing so makes the problems easier to handle mathematically, and the effect of this simplification is small enough to ignore when the number of people is large. Therefore, here we likewise adopt the model with continuous supplies.

Although the above two criteria are reasonable, they may be impractical because the population distribution is assumed to be fixed. In a real situation, the number of people in an area may vary with time; for example, an office area in a large city contains many people during weekdays but far fewer people on weekends and at night. To account for such uncertainty, Kouvelis and Yu (1997) introduced the minmax regret model. In a minmax regret sink location problem, we are given a finite or infinite set S of scenarios, with each scenario giving a particular assignment of weights on all the vertices. For a sink location x and a scenario \(s \in S\), we denote the evacuation completion time or aggregate evacuation time by F(x, s). Then, the problem can be understood as a two-person Stackelberg game as follows. The first player picks a sink location x, and the second player chooses a scenario \(s \in S\) that maximizes the regret defined as \(R(x,s):= F(x,s)-\min _{x'} F(x',s)\); the objective of the first player is to choose the x that minimizes the maximum regret. Here, we refer to the minmax regret sink location problem with the regret defined based on the evacuation completion time or the aggregate evacuation time as the MMR-CTSL problem or the MMR-ATSL problem, respectively. MMR-CTSL problems have been studied previously (Arumugam et al. 2019; Bhattacharya and Kameda 2015; Golin and Sandeep 2018, 2022; Higashikawa et al. 2014, 2015a; Li and Xu 2016; Li et al. 2016), but there are few results for MMR-ATSL problems (Bhattacharya et al. 2018; Higashikawa et al. 2018), even though they are important both theoretically and practically.

As for how to define a set of scenarios, all previous studies on minmax regret sink location problems adopted the model with interval weights, in which each vertex is given a weight as a real interval, and a scenario is defined by choosing an element of the Cartesian product of all the weight intervals over the vertices. However, a drawback of the minmax regret model with interval weights is that each weight can take an independent value, so the model admits extreme scenarios that are unlikely in real situations, such as one in which every vertex has either its maximum or minimum weight. To account for the dependency among the weights of all the vertices, we adopt the model with parametric weights (first introduced by Vairaktarakis and Kouvelis (1999) for the minmax regret median problem), in which each vertex is given a weight as a linear function of a common parameter t on a real interval, and a scenario is determined just by choosing t. Note that in a real situation each weight function is likely to be more complex, but it can be approximated by a piecewise linear function. Thus, by superimposing all such piecewise linear functions, we see that on a sufficiently small subinterval of t every weight function can be regarded as linear, so we can obtain the solution by solving multiple subproblems with linear weight functions.

Herein, we study the MMR-ATSL problem on a dynamic flow path network with parametric weights for the first time. We give polynomial-time algorithms for this problem both with general edge capacities and with uniform edge capacity. Our main theorem is as follows.

Theorem 1

(Main Results) Suppose that we are given a dynamic flow path network of n vertices with parametric weights.

  1. (i)

    The MMR-ATSL problem can be solved in \(O( n^4 2^{\alpha (n)} \alpha (n)^2 \log n)\) time, where \(\alpha (\cdot )\) is the inverse Ackermann function.

  2. (ii)

    When all the edge capacities are uniform, the MMR-ATSL problem can be solved in \(O( n^3 2^{\alpha (n)} \alpha (n) \log n)\) time.

Note that the MMR-ATSL problem with interval weights was studied by Bhattacharya et al. (2018) and Higashikawa et al. (2018), but only for the case with uniform edge capacity. Higashikawa et al. (2018) provided an \(O(n^3)\) time algorithm, which Bhattacharya et al. (2018) improved to one running in \(O(n^2 \log ^2 n)\) time. However, no algorithm yet exists for the case with general edge capacities. Therefore, our result implies that the problem becomes solvable in polynomial time by introducing parametric weights. Tables 1 and 2 summarize the current best results and our results.

Table 1 The current best results on MMR-CTSL problems
Table 2 The current best results on MMR-ATSL problems

The rest of this paper is organized as follows. In Sect. 2, we give the notation and fundamental properties that are used throughout the paper. In Sect. 3, we give the key lemmas and the algorithm for solving the problem, which concludes the paper.

2 Preliminaries

For two real values a, b with \(a<b\), let \([a, b] = \{t \in {{\mathbb {R}}} \mid a \le t \le b\}\), \((a, b) = \{t \in {{\mathbb {R}}} \mid a< t < b\}\), and \((a, b] = \{t \in {{\mathbb {R}}} \mid a < t \le b\}\).

In our problem, we are given a real interval \(T = [t^{-}, t^{+}] \subset {{\mathbb {R}}}\) and a dynamic flow path network \({{\mathscr {P}}} = (P, \textbf{w}(t), \textbf{c}, \textbf{l}, \tau )\) that consists of five elements. \(P = (V, E)\) is a path with vertex set \(V = \{v_i \mid 1 \le i \le n\}\) and edge set \(E = \{e_i = (v_i, v_{i+1}) \mid 1 \le i \le n-1\}\). A vector \(\textbf{w}(t)=\langle w_1(t), \ldots , w_n(t) \rangle \) consists of the weight function \(w_i: T \rightarrow {{\mathbb {R}}}_{\ge 0}\) that is linear in a parameter t and nonnegative for any \(t \in T\). A vector \(\textbf{c} = \langle c_1, \ldots , c_{n-1} \rangle \) consists of the capacity \(c_i\) of edge \(e_i\). A vector \(\textbf{l} = \langle \ell _1, \ldots , \ell _{n-1} \rangle \) consists of the length \(\ell _i\) of edge \(e_i\), and \(\tau \) is the time that the supply takes to move a unit distance on any edge. Let us explain how edge capacities and lengths affect the evacuation time. Consider an evacuation under fixed \(t \in T\). Suppose that at time 0, supply of amount w is at vertex \(v_{i+1}\) and going through edge \(e_i\) toward vertex \(v_i\). The first fraction of supply from \(v_{i+1}\) can arrive at \(v_i\) at time \(\tau \ell _i\). The edge capacity \(c_i\) represents the maximum amount of supply that can enter \(e_i\) in a unit time interval, so all the supply w can complete leaving \(v_{i+1}\) at time \(w/c_i\). Therefore, all the supply w can complete arriving at \(v_{i}\) at time \(\tau \ell _i+w/c_i\).
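The single-edge timing rule just described amounts to a one-line computation; a minimal helper in Python (the variable names are ours, chosen only for illustration):

```python
# Completion time for supply w crossing one edge: the first supply arrives
# at tau * ell, and the last supply leaves the tail vertex at time w / c,
# so all of it has arrived by tau * ell + w / c.
def edge_completion_time(tau, ell, c, w):
    return tau * ell + w / c
```

For instance, supply \(w = 8\) on an edge with \(\ell = 2\), \(c = 4\), and \(\tau = 1\) completes its arrival at time \(1 \cdot 2 + 8/4 = 4\).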

For any integers i, j with \(1 \le i \le j \le n\), we denote the sum of weights from \(v_i\) to \(v_j\) by \(W_{i,j}(t) = \sum _{h=i}^{j} w_h(t)\). For the notation, we define \(W_{i,j}(t) = 0\) for i, j with \(i > j\). For a vertex \(v_i \in V\), we use \(v_i\) to denote the distance between \(v_1\) and \(v_i\), that is, \(v_i = \sum _{j=1}^{i-1} \ell _j\). For an edge \(e_i \in E\), we use \(e_i\) to denote a real open interval \((v_i,v_{i+1})\). We also use P to denote a real closed interval \([0,v_n]\). If a real value x satisfies \(x \in (v_i, v_{i+1})\), then x is said to be a point on edge \(e_i\) to which the distance from \(v_i\) is \(x-v_i\). Let \(C_{i,j}\) be the minimum capacity for all the edges from \(e_i\) to \(e_j\), that is, \(C_{i,j} = \min \{c_{h} \mid i \le h \le j \}\).

Note that we precompute the values of \(v_i\) and \(W_{1,i}(t)\) for all i in O(n) time, and then \(W_{i,j}(t)\) for any ij can be obtained in O(1) time as \(W_{i,j}(t)=W_{1,j}(t)-W_{1,i-1}(t)\). In addition, \(C_{i,j}\) for any ij can be obtained in O(1) time with O(n) preprocessing time, which is known as the range minimum query (Alstrup et al. 2002; Bender and Farach-Colton 2000).
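These precomputations can be sketched as follows at a fixed t: prefix sums answer \(W_{i,j}(t)\) in O(1) per query, and a sparse table answers the range minimum query \(C_{i,j}\) in O(1). The paper cites O(n)-preprocessing RMQ structures; the \(O(n \log n)\)-preprocessing sparse table below is our simplification, and the function names are ours.

```python
def prefix_sums(weights):               # weights[k] = w_{k+1}(t), 0-indexed
    pre = [0.0]
    for w in weights:
        pre.append(pre[-1] + w)
    return pre

def weight_sum(pre, i, j):              # W_{i,j}(t), 1-indexed; 0 if i > j
    return pre[j] - pre[i - 1] if i <= j else 0.0

def build_sparse_table(caps):           # caps[k] = c_{k+1}, edge capacities
    n = len(caps)
    table = [caps[:]]                   # table[k][p] = min of caps[p .. p+2^k-1]
    k = 1
    while (1 << k) <= n:
        prev = table[-1]
        table.append([min(prev[p], prev[p + (1 << (k - 1))])
                      for p in range(n - (1 << k) + 1)])
        k += 1
    return table

def range_min(table, i, j):             # C_{i,j} over edges e_i..e_j, 1-indexed
    k = (j - i + 1).bit_length() - 1    # largest power of two within the range
    return min(table[k][i - 1], table[k][j - (1 << k)])
```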

2.1 Evacuation completion time on a dynamic flow path network

In this section, we give a formula for the evacuation completion time for fixed \(x \in P\) and \(t \in T\). Suppose that sink x is on edge \(e_i = (v_{i}, v_{i+1})\). In this case, all the supply on the right side (i.e., at \(v_{i+1}, \ldots , v_n\)) will flow left to sink x, and all the supply on the left side (i.e., at \(v_{1}, \ldots , v_{i}\)) will flow right to sink x. First, we consider the evacuation for the supply on the right side of x. Supply on the path is viewed as a continuous quantity, and we regard all the supply on the right side of x as being mapped to the interval \((0, W_{i+1,n}(t)]\). The value z satisfying \(z \in (W_{i+1,j-1}(t), W_{i+1,j}(t)]\) with \(i+1 \le j\le n\) represents all the supply at vertices \(v_{i+1}, v_{i+2}, \ldots , v_{j-1}\) plus the partial supply of \(z - W_{i+1,j-1}(t)\) at \(v_j\). Let \(\theta ^{e_i}_{\textrm{R}} (x, t, z)\) denote the time at which the first z amount of supply on the right side of x (i.e., \(v_{i+1}, v_{i+2}, \ldots , v_n\)) completes its evacuation to sink x. Higashikawa (2014) gives the following formula for \(\theta ^{e_i}_{\textrm{R}} (x, t, z)\): for \(z \in (W_{i+1,j-1}(t), W_{i+1,j}(t)]\) with \(i+1 \le j\le n\), we have

$$\begin{aligned} \theta ^{e_i}_{\textrm{R}} (x, t, z)= & \max _{i+1 \le h \le j} \left\{ \tau (v_h - x) + \frac{z - W_{i+1,h-1}(t)}{C_{i, h}} \right\} . \end{aligned}$$
(1)

In a symmetric manner, we consider the evacuation for the supply on the left side of x (i.e., \(v_1, \ldots , v_i\)). The value z satisfying \(z \in (W_{j+1,i}(t),W_{j,i}(t)]\) with \(1 \le j\le i\) represents all the supply at vertices \(v_i, v_{i-1}, \ldots , v_{j+1}\) plus the partial supply of \(z - W_{j+1,i}(t)\) at \(v_j\). Let \(\theta ^{e_i}_{\textrm{L}} (x, t, z)\) denote the time at which the first z amount of supply on the left side of x completes its evacuation to sink x. Higashikawa (2014) gives the following formula for \(\theta ^{e_i}_{\textrm{L}} (x, t, z)\): for \(z \in (W_{j+1,i}(t),W_{j,i}(t)]\) with \(1 \le j\le i\), we have

$$\begin{aligned} \theta ^{e_i}_{\textrm{L}} (x, t, z)= & \max _{j \le h \le i} \left\{ \tau (x - v_h) + \frac{z - W_{h+1,i}(t)}{C_{h,i}} \right\} . \end{aligned}$$
(2)

Let us turn to the case in which sink x is at a vertex \(v_i \in V\). We confirm that the evacuation times when supply of amount z originates from the right and left sides of \(v_i\) to sink \(v_i\) are given by \(\theta ^{e_{i}}_{\textrm{R}} (v_i, t, z)\) and \(\theta ^{e_{i-1}}_{\textrm{L}}(v_i, t, z)\), respectively.
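At a fixed t, formulas (1) and (2) translate directly into code. The evaluator below is a naive sketch: arrays are 1-indexed via a dummy 0th entry, w[h] holds the value \(w_h(t)\), and the sums and minima are recomputed per query instead of using the O(1)-time queries of Sect. 2; all helper names are ours.

```python
def _W(w, a, b):                      # W_{a,b}(t); zero when a > b
    return sum(w[a:b + 1]) if a <= b else 0.0

def _C(c, a, b):                      # C_{a,b}: min capacity on e_a..e_b
    return min(c[a:b + 1])

def theta_R(x, z, i, j, v, w, c, tau):
    # Formula (1): completion time of the first z units arriving from the
    # right of sink x on e_i, valid for z in (W_{i+1,j-1}(t), W_{i+1,j}(t)].
    return max(tau * (v[h] - x) + (z - _W(w, i + 1, h - 1)) / _C(c, i, h)
               for h in range(i + 1, j + 1))

def theta_L(x, z, i, j, v, w, c, tau):
    # Formula (2): completion time of the first z units arriving from the
    # left of sink x on e_i, valid for z in (W_{j+1,i}(t), W_{j,i}(t)].
    return max(tau * (x - v[h]) + (z - _W(w, h + 1, i)) / _C(c, h, i)
               for h in range(j, i + 1))
```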

2.2 Aggregate evacuation time

Let \(\varPhi (x, t)\) be the aggregate evacuation time (i.e., the sum of the evacuation times) when the sink is at a point \(x \in P\) and the weight functions are fixed by the parameter \(t \in T\). For a point x on edge \(e_i\) and a parameter \(t \in T\), the aggregate evacuation time \(\varPhi (x,t)\) is defined by the integrals of the evacuation completion times \(\theta ^{e_i}_{\textrm{L}} (x, t, z)\) over \(z \in [0,W_{1,i}(t)]\) and \(\theta ^{e_i}_{\textrm{R}} (x, t, z)\) over \(z \in [0,W_{i+1,n}(t)]\), that is,

$$\begin{aligned} \varPhi (x, t) = \int _{0}^{W_{1,i}(t)} \theta ^{e_i}_{\textrm{L}} (x, t, z) dz + \int _{0}^{W_{i+1,n}(t)} \theta ^{e_i}_{\textrm{R}} (x, t, z) dz. \end{aligned}$$
(3)

Similarly, if sink x is at vertex \(v_i\), then \(\varPhi (v_i, t)\) is given by

$$\begin{aligned} \varPhi (v_i, t) = \int _{0}^{W_{1, i-1}(t)} \theta ^{e_{i-1}}_{\textrm{L}} (v_i, t, z) dz + \int _{0}^{W_{i+1,n}(t)} \theta ^{e_i}_{\textrm{R}} (v_i, t, z) dz. \end{aligned}$$
(4)
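Definition (3) can be checked numerically at a fixed t by integrating the completion-time curves over z. The midpoint-rule integrator below is only an approximation for illustration (the paper's analysis integrates exactly, piece by piece); arrays are 1-indexed via a dummy 0th entry, and the instance data in the usage example is ours.

```python
def aggregate_time(x, i, v, w, c, tau, steps=20000):
    """Phi(x, t) of (3) for a sink x on edge e_i, with w[h] = w_h(t)."""
    n = len(v) - 1
    def W(a, b):                      # W_{a,b}(t); zero when a > b
        return sum(w[a:b + 1]) if a <= b else 0.0
    def C(a, b):                      # min capacity over edges e_a..e_b
        return min(c[a:b + 1])
    def theta(z, side):               # formulas (1) and (2) at this z
        if side == 'R':
            return max(tau * (v[h] - x) + (z - W(i + 1, h - 1)) / C(i, h)
                       for h in range(i + 1, n + 1) if z > W(i + 1, h - 1))
        return max(tau * (x - v[h]) + (z - W(h + 1, i)) / C(h, i)
                   for h in range(1, i + 1) if z > W(h + 1, i))
    total = 0.0
    for side, top in (('L', W(1, i)), ('R', W(i + 1, n))):
        if top <= 0.0:                # no supply on this side
            continue
        dz = top / steps
        total += sum(theta((k + 0.5) * dz, side) for k in range(steps)) * dz
    return total
```

For example, on a two-vertex path of length 1 with unit weights, unit capacity, \(\tau = 1\), and sink at \(x = 0.5\), each side contributes \(\int_0^1 (0.5 + z)\,dz = 1\), so \(\varPhi \approx 2\).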

2.3 Minmax regret formulation

We denote by \(\textrm{Opt}(t)\) the minimum aggregate evacuation time with respect to parameter \(t \in T\). Higashikawa et al. (2015b) and Benkoczi et al. (2020) showed that for the minsum k-sink location problem, there exists an optimal k-sink such that all the k sinks are at vertices. This implies that we have

$$\begin{aligned} \textrm{Opt}(t) = \min _{x \in V} \varPhi (x, t) \end{aligned}$$
(5)

for any \(t \in T\). For a point \(x \in P\) and a value \(t \in T\), the regret R(x, t) with regard to x and t is the gap between \(\varPhi (x,t)\) and \(\textrm{Opt}(t)\), defined as

$$\begin{aligned} R(x, t) = \varPhi (x,t) - \textrm{Opt}(t). \end{aligned}$$
(6)

The maximum regret for a sink \(x \in P\), denoted by MR(x), is the maximum value of R(xt) with respect to \(t \in T\). Thus, MR(x) is defined as

$$\begin{aligned} MR(x) = \max _{t \in T} R(x,t). \end{aligned}$$
(7)

Given a dynamic flow path network \({{\mathscr {P}}}\) and a real interval T, the MMR-ATSL problem is defined as follows:

$$\begin{aligned} \text{ minimize } MR(x) \text{ subject to } x\in P. \end{aligned}$$
(8)

Let \(x^*\) denote an optimal solution of (8).
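Definitions (5)–(8) can be illustrated by a brute-force numerical baseline on a toy instance: a single edge of length L and capacity joining two vertices, with linear parametric weights \(w_1(t) = 2 - t\) and \(w_2(t) = 1 + t\) on \(T = [0, 1]\). The closed form phi below instantiates (3) for a sink in the interior of the edge, and for simplicity \(\textrm{Opt}(t)\) is approximated by a minimum over the same interior grid rather than over the vertices as in (5). Everything here (instance, grid resolution, names) is our illustration, not the paper's exact algorithm.

```python
TAU, L, CAP = 1.0, 1.0, 1.0

def w1(t): return 2.0 - t
def w2(t): return 1.0 + t

def phi(x, t):                        # aggregate evacuation time, 0 < x < L
    return (TAU * x * w1(t) + w1(t) ** 2 / (2 * CAP)
            + TAU * (L - x) * w2(t) + w2(t) ** 2 / (2 * CAP))

ts = [i / 200 for i in range(201)]            # grid over T = [0, 1]
xs = [i / 200 for i in range(1, 200)]         # grid over the open edge

opt = {t: min(phi(x, t) for x in xs) for t in ts}     # cf. Opt(t) in (5)

def max_regret(x):                                    # cf. MR(x) in (7)
    return max(phi(x, t) - opt[t] for t in ts)

x_star = min(xs, key=max_regret)                      # cf. (8)
```

On this instance the maximum regret is driven by the two extreme scenarios \(t = 0\) and \(t = 1\), so the optimal sink settles at the midpoint of the edge.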

2.4 Piecewise functions and upper/lower envelopes

A function \(f: X (\subset {{\mathbb {R}}}) \rightarrow {{\mathbb {R}}}\) is called a piecewise polynomial function if and only if the real interval X can be partitioned into subintervals \(X_1, X_2, \ldots , X_m\) so that f is formed as a polynomial \(f_i\) on each \(X_i\). We denote such a piecewise polynomial function f by \(f = \langle (f_1, X_1), \ldots , (f_m, X_m) \rangle \), or simply \(f = \langle (f_i, X_i) \rangle \). We assume that such a partition into subintervals is maximal in the sense that for any i and \(i+1\), we have \(f_i\ne f_{i+1}\). We call each pair \((f_i, X_i)\) a piece of f, and an endpoint of the closure of \(X_i\) a breakpoint of f. A piecewise polynomial function \(f = \langle (f_i, X_i) \rangle \) is called a piecewise polynomial function of degree at most two if and only if each \(f_i\) is quadratic or linear. We confirm the following property about the sum of piecewise polynomial functions.

Proposition 1

Let m and \(m'\) be positive integers, and \(f, g: X (\subset {{\mathbb {R}}}) \rightarrow {{\mathbb {R}}}\) be piecewise polynomial functions of degree at most two with m and \(m'\) pieces, respectively. Then, a function \(h = f + g\) is a piecewise polynomial function of degree at most two with at most \(m + m'\) pieces. Moreover, given \(f = \langle (f_i, X_i) \rangle \) and \(g = \langle (g_j, X'_j) \rangle \), we can obtain \(h = f + g = \langle (h_j, X''_j) \rangle \) in \(O( m + m' )\) time.
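The merge behind Proposition 1 can be sketched with pieces encoded as ((a, b, c), (lo, hi)) for \(a y^2 + b y + c\) on (lo, hi); this encoding is ours. Once the two sorted breakpoint sequences are merged, adding coefficients piecewise takes \(O(m + m')\) time; for brevity the sketch simply sorts the cut points.

```python
def add_piecewise(f, g, eps=1e-12):
    """Sum of two piecewise quadratics defined on the same overall interval."""
    cuts = sorted({lo for _, (lo, _) in f + g} | {hi for _, (_, hi) in f + g})
    out = []
    for lo, hi in zip(cuts, cuts[1:]):
        mid = (lo + hi) / 2
        # the piece of each summand whose subinterval contains the midpoint
        pick = lambda pieces: next(co for co, (a, b) in pieces
                                   if a - eps <= mid <= b + eps)
        (a1, b1, c1), (a2, b2, c2) = pick(f), pick(g)
        out.append(((a1 + a2, b1 + b2, c1 + c2), (lo, hi)))
    return out
```

Adjacent output pieces with identical coefficients could be merged afterwards to restore the maximality of the partition.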

Let \({{\mathscr {F}}} = \{f_1(y), \ldots , f_m(y)\}\) be a family of m polynomial functions, where \(f_i: Y_i(\subset {\mathbb {R}}) \rightarrow {\mathbb {R}}\) and Y denotes the union of \(Y_i\), that is, \(Y = \cup _{i=1}^{m}Y_i\). An upper envelope \({{\mathscr {U}}}_{{{\mathscr {F}}}}(y)\) and a lower envelope \({{\mathscr {L}}}_{{{\mathscr {F}}}}(y)\) of \({{\mathscr {F}}}\) are functions from Y to \({\mathbb {R}}\) defined as follows:

$$\begin{aligned} {{\mathscr {U}}}_{{{\mathscr {F}}}}(y) = \max _{i = 1, \ldots , m} f_i(y), \,\, {{\mathscr {L}}}_{{{\mathscr {F}}}}(y) = \min _{i = 1, \ldots , m} f_i(y), \end{aligned}$$
(9)

where the maximum and the minimum are taken over those functions that are defined at y, respectively. For the upper envelope \({{\mathscr {U}}}_{{{\mathscr {F}}}}(y)\) of \({{\mathscr {F}}}\), there exists an integer sequence \(U_{{\mathscr {F}}} = \langle u_1, \ldots , u_k \rangle \) and subintervals \(I_1, \ldots , I_k\) of Y such that \({{\mathscr {U}}}_{{{\mathscr {F}}}}(y) = \langle (f_{u_1}(y), I_1), \ldots , (f_{u_k}(y), I_k) \rangle \) holds. That is, the upper envelope \({{\mathscr {U}}}_{{{\mathscr {F}}}}(y)\) can be represented as a piecewise polynomial function. We call the above sequence \(U_{{\mathscr {F}}}=\langle u_1, \cdots ,u_k \rangle \) the function sequence of \({{\mathscr {U}}}_{{\mathscr {F}}}(y)\).

In our algorithm, we compute the upper/lower envelopes of partially defined, univariate polynomial functions. The following result is useful for this operation.

Theorem 2

(Hart and Sharir 1986; Hershberger 1989; Agarwal et al. 1989) Let \({{\mathscr {F}}}\) be a family of n partially defined polynomial functions of degree at most two. Then \({{\mathscr {U}}}_{{{\mathscr {F}}}}\) and \({{\mathscr {L}}}_{{{\mathscr {F}}}}\) consist of \(O( n 2^{\alpha (n)} )\) pieces, and one can obtain them in \(O(n \alpha (n) \log n )\) time, where \(\alpha (n)\) is the inverse Ackermann function. Moreover, if \({{\mathscr {F}}}\) is a family of n partially defined linear functions, then \({{\mathscr {U}}}_{{{\mathscr {F}}}}\) and \({{\mathscr {L}}}_{{{\mathscr {F}}}}\) consist of \(O(n\alpha (n))\) pieces, and one can obtain them in \(O(n \log n )\) time. Furthermore, if \({{\mathscr {F}}}\) is a family of n totally defined linear functions, then \({{\mathscr {U}}}_{{{\mathscr {F}}}}\) and \({{\mathscr {L}}}_{{{\mathscr {F}}}}\) consist of O(n) pieces, and one can obtain them in \(O(n \log n )\) time.

Note that the number of pieces and the computation time for the upper/lower envelopes are associated with the maximum length of Davenport–Schinzel sequences; see Hart and Sharir (1986) for the details. For a family \({{\mathscr {F}}}\) of functions, if we say that we obtain envelopes \({{\mathscr {U}}}_{{{\mathscr {F}}}}(y)\) or \({{\mathscr {L}}}_{{{\mathscr {F}}}}(y)\), then we obtain the information of all pieces \((f_{u_i}(y), I_i)\).
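For the last case of Theorem 2 (totally defined linear functions), the upper envelope can be computed by sorting the lines by slope and discarding lines that never appear on top, in the style of an upper convex-hull computation. A minimal sketch: the (slope, intercept) representation and names are ours, and the envelope is returned as the surviving lines from left to right rather than as explicit pieces.

```python
def upper_envelope(lines):
    # keep only the highest intercept among equal slopes, sorted by slope
    best = {}
    for a, b in lines:
        best[a] = max(best.get(a, b), b)
    lines = sorted(best.items())
    hull = []
    for a3, b3 in lines:
        while len(hull) >= 2:
            (a1, b1), (a2, b2) = hull[-2], hull[-1]
            # the middle line never appears on top iff the intersection of
            # the outer two lies at or left of the first pair's intersection
            if (b3 - b1) * (a1 - a2) <= (b2 - b1) * (a1 - a3):
                hull.pop()
            else:
                break
        hull.append((a3, b3))
    return hull
```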

2.5 Property of inverse Ackermann function

The Ackermann function is defined as follows:

$$\begin{aligned} A(n, m) = \left\{ \begin{array}{ll} m + 1 & \text { if } n = 0, \\ A(n-1, 1) & \text { if } n > 0, m=0, \\ A(n-1, A(n, m - 1)) & \text { otherwise.} \end{array} \right. \end{aligned}$$

The inverse Ackermann function \(\alpha (n)\) is defined as

$$\begin{aligned} \alpha (n) = \min \{ k \in {{\mathbb {N}}}_{0} \mid n \le A(k, k) \}, \end{aligned}$$

and we show the following inequality.

Property 1

$$\begin{aligned} \alpha (n^3) \le \alpha (n) + 1 \end{aligned}$$
(10)

holds for any positive integer n with \(n \ge 8\).

Let us suppose that \(n \ge 8\); then \(k = \alpha (n) \ge 3\) because \(A(2, 2) = 7 < 8\) holds. Inequality (10) holds if the inequality

$$\begin{aligned} n^3 \le A(k+1, k+1). \end{aligned}$$

holds. By the definition and the monotonicity of the Ackermann function, we have

$$\begin{aligned} A(k+1, k+1) = A\left( k, A(k+1, k) \right) > A\left( k, A(k, k)\right) \ge \bigl ( A(k, k) \bigr )^3. \end{aligned}$$

The last inequality arises from the fact that

$$\begin{aligned} A\left( k, m\right) \ge A\left( 3, m\right) = 2^{m+3} - 3 \ge m^3 \end{aligned}$$

holds for any integer \(k \ge 3\) and any positive integer \(m\ge 1\), where \(A(3,m)=2^{m+3}-3\) is shown by Porto and Matos (1980). Because \(n \le A(k, k)\) holds, we have \(n^3 \le \bigl ( A(k, k) \bigr )^3 \le A(k+1, k+1)\). Thus, the proof is complete.
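These quantities can be checked numerically for small arguments. The sketch below (function names are ours) computes the first few diagonal values A(k, k); since \(A(3, 3) = 61\) and \(A(4, 4)\) is a tower of exponentials, \(\alpha (n) \le 4\) for every remotely practical n.

```python
import sys
from functools import lru_cache

sys.setrecursionlimit(100000)          # the recursion is deep but small here

@lru_cache(maxsize=None)
def ackermann(n, m):
    if n == 0:
        return m + 1
    if m == 0:
        return ackermann(n - 1, 1)
    return ackermann(n - 1, ackermann(n, m - 1))

DIAG = [ackermann(k, k) for k in range(4)]   # [1, 3, 7, 61]

def alpha(n):                          # min k with n <= A(k, k)
    for k, value in enumerate(DIAG):
        if n <= value:
            return k
    return 4                           # A(4, 4) dwarfs any usable n
```

For example, \(\alpha (8^3) = 4 = \alpha (8) + 1\), matching Property 1 at \(n = 8\).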

3 Algorithm

The main task of the algorithm is to compute the following O(n) values: MR(v) for all \(v \in V\) and \(\min \{ MR(x) \mid x \in e\}\) for all \(e \in E\). Once we compute these values, we immediately obtain the solution of the problem by choosing the minimum among them in O(n) time.

Let us focus on computing \(\min \{ MR(x) \mid x \in e\}\) for each \(e \in E\). (Note that we can compute MR(v) for \(v \in V\) in a similar manner.) Recall the definition of the maximum regret for x, \( MR(x) = \max \{ R(x,t) \mid t \in T \}\). A main difficulty lies in evaluating R(x, t) over \(t \in T\) even for a fixed x because we must treat an infinite set T. Furthermore, we are also required to find an optimal location among an infinite set e. To tackle this issue, our key idea is to partition the problem into a polynomial number of subproblems as follows. We partition interval T into a polynomial number of subintervals \(T_1, \ldots , T_{m}\) so that R(x, t) is represented as a (single) polynomial function in x and t on \(\{x \in e\} \times T_j\) for each \(j = 1, \ldots , m\). For each \(T_j\), we compute the maximum regret for \(x \in e\) over \(T_j\), denoted by \(G_j(x) =\max \{R(x,t) \mid t \in T_j\}\). An explicit form of \(G_j(x)\) will be given in Sect. 3.1. We then obtain MR(x) for \(x \in e \) as the upper envelope of functions \(G_1(x), \ldots , G_{m}(x)\) and find the minimum value of MR(x) for \(x \in e\) by elementary calculation.
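Since the restriction of the regret to one subinterval \(T_j\) is, for a fixed x, a polynomial of degree at most two in t, evaluating \(G_j(x)\) at a given x reduces to maximizing a quadratic over an interval: the maximum is attained at an endpoint or, for a downward-opening parabola, at its apex. A minimal sketch (the coefficient encoding is ours):

```python
def max_quadratic(a, b, c, lo, hi):
    """Maximum of a*t**2 + b*t + c over the interval [lo, hi]."""
    q = lambda t: (a * t + b) * t + c
    candidates = [lo, hi]
    if a < 0:                          # concave: interior apex at -b / (2a)
        apex = -b / (2 * a)
        if lo < apex < hi:
            candidates.append(apex)
    return max(q(t) for t in candidates)
```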

In the rest of the paper, we mainly show that for each e or v, there exists a partition of T into a polynomial number of subintervals such that the regret R(x, t) is a polynomial function of degree at most two on each subinterval.

3.1 Key lemmas

Recalling (5) and (6), function R(x, t) is based on function \(\varPhi (x,t)\). First, let us consider extracting a univariate function in t from \(\varPhi (x,t)\) as much as possible. According to (3), \(\varPhi (x,t)\) is defined as the sum of the definite integrals of \(\theta ^{e_i}_{\textrm{R}}(x,t,z)\) and \(\theta ^{e_i}_{\textrm{L}}(x,t,z)\) over z. By (1) and (2), \(\theta ^{e_i}_{\textrm{R}}(x,t,z)\) (resp. \(\theta ^{e_i}_{\textrm{L}}(x,t,z)\)) can be decomposed into the term \(-\tau x\) (resp. \(\tau x\)) and a term independent of x, denoted by \(f^{e_i, j}_{\textrm{R}} (t, z)\) (resp. \(f^{e_i, j}_{\textrm{L}} (t, z)\)). Formally, for \(1 \le i < j \le n\), let function \(f^{e_i, j}_{\textrm{R}} (t, z)\) be defined on \(t \in T\) and \(z \in (W_{i+1,j-1}(t), W_{i+1,n}(t)]\) as

$$\begin{aligned} f^{e_i, j}_{\textrm{R}} (t, z) = \tau v_j + \frac{z - W_{i+1,j-1}(t) }{C_{i, j}}, \end{aligned}$$
(11)

and for \(1 \le j < i \le n\), let function \(f^{e_i, j}_{\textrm{L}} (t, z)\) be defined on \(t \in T\) and \(z \in (W_{j+1,i}(t), W_{1,i}(t)]\) as

$$\begin{aligned} f^{e_i, j}_{\textrm{L}} (t, z) = - \tau v_j + \frac{z - W_{j+1,i}(t) }{C_{j,i}}. \end{aligned}$$
(12)

In addition, let \(F^{e_i}_\textrm{L}(t)\) and \(F^{e_i}_\textrm{R}(t)\) denote univariate functions defined as

$$\begin{aligned} F^{e_i}_\textrm{L}(t) = \int _{0}^{W_{1,i}(t)} f^{e_i}_{\textrm{L}} (t, z) dz, \quad F^{e_i}_\textrm{R}(t) = \int _{0}^{W_{i+1,n}(t)} f^{e_i}_{\textrm{R}} (t, z) dz, \end{aligned}$$
(13)

where \(f^{e_i}_{\textrm{L}} (t, z)\) and \(f^{e_i}_{\textrm{R}} (t, z)\) denote functions defined as

$$\begin{aligned} f^{e_i}_{\textrm{L}} (t, z) = \max _{1 \le j \le i} \left\{ f^{e_i, j}_{\textrm{L}} (t, z) \right\} , \ f^{e_i}_{\textrm{R}} (t, z) = \max _{i + 1 \le j \le n } \left\{ f^{e_i, j}_{\textrm{R}} (t, z) \right\} . \end{aligned}$$
(14)

See also Fig. 1. Recall the definition of the aggregate evacuation time \(\varPhi (x,t)\) given in (3). We observe that for \(x \in e_i\), \(\varPhi (x,t)\) can be represented as

$$\begin{aligned} \varPhi ( x, t )= & \bigl (W_{1,i}(t) - W_{i+1,n}(t) \bigr ) \tau x + \!\! \int _{0}^{W_{1,i}(t)} \!\!\!\!\!\!\!\! f^{e_i}_{\textrm{L}} (t, z) dz + \!\! \int _{0}^{W_{i+1,n}(t)} \!\!\!\!\!\!\!\! f^{e_i}_{\textrm{R}} (t, z) dz \nonumber \\= & \bigl (W_{1,i}(t) - W_{i+1,n}(t) \bigr ) \tau x + F^{e_i}_\textrm{L}(t) + F^{e_i}_\textrm{R}(t). \end{aligned}$$
(15)

Similarly, by the definition in (4) and by (13), we have

$$\begin{aligned} \varPhi ( v_i, t ) = \bigl ( W_{1,i-1}(t) - W_{i+1,n}(t) \bigr ) \tau v_i + F^{e_{i-1}}_\textrm{L}(t) + F^{e_i}_\textrm{R}(t). \end{aligned}$$
(16)
Fig. 1

Graph of \(f^{e_i}_{\textrm{L}} (t, z)\) w.r.t. z with a focus on \(f^{e_i,j}_{\textrm{L}} (t, z)\). The gray area represents \(F^{e_{i}}_\textrm{L}(t)\)

Let us focus on the function \(F^e_\textrm{L}(t)\). As long as the function sequence of \(f_\textrm{L}^e(t,z)\) w.r.t. z remains the same as t increases, function \(F^e_\textrm{L}(t)\) is represented by the same polynomial, whose degree is at most two by (11)–(13). In other words, a breakpoint of \(F^e_\textrm{L}(t)\) corresponds to a value of t at which the function sequence of \(f_\textrm{L}^e(t,z)\) w.r.t. z changes. We notice that such a change happens only when three functions \(f^{e, h}_{\textrm{L}} (t, z)\), \(f^{e, i}_{\textrm{L}} (t, z)\), and \(f^{e, j}_{\textrm{L}} (t, z)\) intersect each other, which can happen at most once for each triple. We immediately see that \(F^e_\textrm{L}(t)\) has \(O(n^3)\) breakpoints; that is, it is a piecewise polynomial function of degree at most two with \(O(n^3)\) pieces. The following lemmas show that the number of pieces is actually \(O(n^2\alpha (n))\) and O(n) for the cases with general edge capacities and uniform edge capacity, respectively.

Lemma 1

For each \(e \in E\), \(F^{e}_\textrm{L}(t)\) and \(F^{e}_\textrm{R}(t)\) are piecewise polynomial functions of degree at most two with \(O(n^2\alpha (n))\) pieces, and they can be computed in \(O(n^3\alpha (n) \log n)\) time.

Lemma 2

If all the edge capacities are uniform, for each \(e \in E\), \(F^{e}_\textrm{L}(t)\) and \(F^{e}_\textrm{R}(t)\) are piecewise polynomial functions of degree at most two with O(n) pieces, and they can be computed in \(O(n^2 \log n)\) time.

Proof of Lemma 1

Let us suppose that x is on \(e_i \in E\). We prove the lemma only for \(F^{e_i}_\textrm{L}(t)\) because the case of \(F^{e_i}_\textrm{R}(t)\) can be proved in a symmetric manner. We prove the following statements separately.

  1. (i)

    The number of pieces of \(F^{e_i}_\textrm{L}(t)\) is \(O(n^2\alpha (n))\).

  2. (ii)

    The function of each piece of \(F^{e_i}_\textrm{L}(t)\) is a polynomial function of degree at most two in t.

  3. (iii)

    All pieces of \(F^{e_i}_\textrm{L}(t)\) can be obtained in \(O(n^3\alpha (n) \log n)\) time.

First, we prove statement (i). Recall that \(f^{e_i}_{\textrm{L}} (t, z)\) is the upper envelope of \(\left\{ f^{e_i, j}_{\textrm{L}} (t, z) \ \Big | \ 1 \le j \le i \right\} \) w.r.t. \(z \in (0, W_{1,i}(t)]\). For integers \(j,j'\) with \(1 \le j' < j \le i\), let \(z_{j',j}(t)\) be a function in t defined as follows. If \(C_{j',i}=C_{j,i}\), then we have

$$\begin{aligned} z_{j',j}(t) = \left\{ \begin{array}{ll} W_{1,i}(t) & \text { if } f^{e_i, j'}_{\textrm{L}} (t, W_{j'+1,i}(t)) < f^{e_i, j}_{\textrm{L}} (t, W_{j'+1,i}(t)), \\ W_{j'+1,i}(t) & \text { otherwise,} \end{array} \right. \end{aligned}$$
(17), (18)

as shown in Fig. 2; otherwise, that is, if \(C_{j',i}<C_{j,i}\), then we have

$$\begin{aligned} z_{j',j}(t) = \left\{ \begin{array}{ll} z^*_{j',j}(t) & \text { if } z^*_{j',j}(t) > W_{j'+1,i}(t), \\ W_{j'+1,i}(t) & \text { otherwise,} \end{array} \right. \end{aligned}$$
(19), (20)

where \(z^*_{j',j}(t)\) is the solution for z of the equation \(f^{e_i, j'}_{\textrm{L}} (t, z) = f^{e_i, j}_{\textrm{L}} (t, z)\) as shown in Fig. 3. Note that when t changes from \(t \le t_{j',j}\) to \(t > t_{j',j}\), the function \(z_{j',j}(t)\) changes from (17) to (18) or from (18) to (17), where \(t_{j',j}\) is the solution for t of the equation \(f^{e_i, j}_{\textrm{L}} (t, W_{j'+1,i}(t)) = f^{e_i, j'}_{\textrm{L}} (t, W_{j'+1,i}(t))\). The same holds for (19) and (20). This change happens at most once while \(t\in [t^-,t^+]\).

Fig. 2

Illustration of \(z_{j',j}(t)\) for the case of \(C_{j',i}=C_{j,i}\) with \(1 \le j' < j \le i\)

Fig. 3

Illustration of \(z_{j',j}(t)\) for the case of \(C_{j',i}<C_{j,i}\) with \(1 \le j' < j \le i\)

For any \(t \in T\), let U(t) be the function sequence of \(f^{e_i}_{\textrm{L}}(t, z)\) in z. Now, for some \(t'\), suppose that \(U(t') = \langle u_1, \ldots , u_k \rangle \). Note that \(u_1> \cdots > u_k\) holds. Then, the h-th smallest breakpoint of \(f^{e_i}_{\textrm{L}}(t', z)\) for \(z \in (0, W_{1,i}(t'))\) is \(z_{u_{h+1},u_h}(t')\). We notice that if t changes from \(t'\) under the conditions that U(t) and the formula for each \(z_{u_{h+1},u_h}(t)\) remain the same, then the formula for \(F^{e_i}_\textrm{L}(t)\) also remains the same. In other words, a breakpoint of \(F^{e_i}_\textrm{L}(t)\) corresponds to when U(t) or the formula for some \(z_{u_{h+1},u_h}(t)\) changes. Let us consider in what cases this happens.

  1. Case 1:

    When t reaches some value \(t_1\), the formula for some \(z_{u_{h+1},u_h}(t)\) changes from (17) to (18) or from (19) to (20) (see Fig. 4).

  2. Case 2:

    When t reaches some value \(t_2\), the formula for some \(z_{u_{h+1},u_h}(t)\) changes from (18) to (17) or from (20) to (19) (see Fig. 5).

  3. Case 3:

    Just after t reaches some value \(t_3\), that is, when \(t=t_3+\varepsilon \), \(f^{e_i, j}_{\textrm{L}} (t_3+\varepsilon , z)\) for some j appears as a part of \(f^{e_i}_{\textrm{L}} (t_3+\varepsilon , z)\) so that \(U(t_3+\varepsilon ) = \langle u_1, \ldots , u_h, j, u_{h+1}, \ldots , u_k \rangle \) (see Fig. 6).

  4. Case 4:

    When t reaches some value \(t_4\), \(f^{e_i, u_h}_{\textrm{L}} (t_4, z)\) disappears from \(f^{e_i}_{\textrm{L}} (t_4, z)\), that is, \(U(t_4) = \langle u_1, \ldots , u_{h}, u_{h+2}, \ldots , u_k \rangle \) (see Fig. 7).

Fig. 4

When t reaches \(t_1\), the formula for some \(z_{u_{h+1},u_h}(t)\) changes to \(W_{u_{h+1}+1,i}(t)\) from \(z^{*}_{u_{h+1},u_h}\)

Fig. 5

When t reaches \(t_2\), the formula for some \(z_{u_{h+1},u_h}(t)\) changes to \(z^{*}_{u_{h+1},u_h}\) from \(W_{u_{h+1}+1,i}(t)\)

Fig. 6

When \(t=t_3+\varepsilon \), \(f^{e_i, j}_{\textrm{L}} (t_3+\varepsilon , z)\) appears as a part of \(f^{e_i}_{\textrm{L}} (t_3+\varepsilon , z)\) so that \(U(t_3+\varepsilon ) = \langle \ldots , u_h, j, u_{h+1}, \ldots \rangle \)

Fig. 7

When t reaches \(t_4\), \(f^{e_i, u_h}_{\textrm{L}} (t_4, z)\) disappears from \(f^{e_i}_{\textrm{L}} (t_4, z)\), that is, \(U(t_4) = \langle \ldots , u_{h}, u_{h+2}, \ldots \rangle \)

Note that \(t_1\) and \(t_2\) are the solutions of \( f^{e_i, u_{h}}_{\textrm{L}} (t, W_{u_{h+1}+1,i}(t)) = f^{e_i, u_{h+1}}_{\textrm{L}} (t, W_{u_{h+1}+1,i}(t))\). For any \(t\in T\) and j with \(2\le j \le i\), \(z_{j}(t)\) is defined as

$$\begin{aligned} z_{j}(t) = \min _{h} \left\{ z_{h,j}(t) \mid 1 \le h \le j-1 \right\} . \end{aligned}$$
(21)

Then, \(t_1\), \(t_2\), \(t_3\), and \(t_4\) in Cases 1–4 are breakpoints of \(z_{u_h}(t)\). Therefore, the number of breakpoints of \(F^{e_i}_\textrm{L}(t)\) is at most the total number of breakpoints of the functions \(z_j(t)\) over all j with \(2\le j \le i\). Recall that \(z_j(t)\) is the lower envelope of O(j) line segments by (17)–(21). Thus, the number of breakpoints of \(z_j(t)\) for any j is \(O(n\alpha (n))\) by Theorem 2. Therefore, the total number of breakpoints of the functions \(z_j(t)\) over all j with \(2\le j \le i\) is \(O(n^2\alpha (n))\). This completes the proof of statement (i).

Next, we prove statement (ii). Let \((H(t),T')\) be a piece of \(F^{e_i}_\textrm{L}(t)\) such that for \(t \in T'\), \(U(t) = \langle u_1, \ldots , u_k \rangle \) and the formula for each \(z_{u_{h+1},u_h}(t)\) remains the same. We then have

$$\begin{aligned} H(t) = \sum _{h=1}^{k} \int _{z_{u_h,u_{h-1}}(t)}^{z_{u_{h+1},u_h}(t)} f^{e_i,u_h}_{\textrm{L}} (t, z) dz, \end{aligned}$$
(22)

where \(z_{u_1,u_0}(t) = 0\) and \(z_{u_{k+1},u_{k}}(t) = W_{1,i}(t)\). By (12), it holds for h with \(1 \le h \le k\) that

$$\begin{aligned} f^{e_i,u_h}_{\textrm{L}} (t, z) = \rho ^h_1 z + \rho ^h_2 t + \rho ^h_3, \end{aligned}$$
(23)

where \(\rho ^h_1, \rho ^h_2, \rho ^h_3\) are some constants. Substituting (23) into (22), we obtain

$$\begin{aligned} H(t)= & \sum _{h=1}^{k} \Big \{\frac{\rho ^h_1}{2} \left( z_{u_{h+1},u_h}(t)^2-z_{u_h,u_{h-1}}(t)^2 \right) \nonumber \\ & + (\rho ^h_2t+\rho ^h_3) \left( z_{u_{h+1},u_h}(t)-z_{u_h,u_{h-1}}(t) \right) \Big \}, \end{aligned}$$
(24)

which is quadratic in t because \(z_{u_{h+1},u_h}(t)\) and \(z_{u_h,u_{h-1}}(t)\) are linear in t by (17)–(20). Note that if the coefficient of \(t^2\) in the function H(t) of (24) is zero, then H(t) is a linear function. This completes the proof for statement (ii).
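The passage from (22) to (24) can be checked mechanically. The sketch below (a hypothetical helper; the name and argument layout are our own) expands one summand of (24), i.e., the integral in z of the linear function \(\rho^h_1 z + \rho^h_2 t + \rho^h_3\) between linear limits \(pt+q\) and \(rt+s\), into the coefficients of a quadratic in t.

```python
def piece_contribution(rho, z_lo, z_hi):
    """Coefficients (c2, c1, c0) of the quadratic in t given by one
    summand of (24): the integral in z of rho1*z + rho2*t + rho3
    between the linear limits z_lo(t) = p*t + q and z_hi(t) = r*t + s.

    rho = (rho1, rho2, rho3); z_lo = (p, q); z_hi = (r, s).
    """
    rho1, rho2, rho3 = rho
    p, q = z_lo
    r, s = z_hi
    # (rho1/2) * (z_hi(t)^2 - z_lo(t)^2)
    c2 = rho1 / 2 * (r * r - p * p)
    c1 = rho1 * (r * s - p * q)
    c0 = rho1 / 2 * (s * s - q * q)
    # (rho2*t + rho3) * (z_hi(t) - z_lo(t))
    c2 += rho2 * (r - p)
    c1 += rho2 * (s - q) + rho3 * (r - p)
    c0 += rho3 * (s - q)
    return (c2, c1, c0)
```

Summing `piece_contribution` over the k summands of (24) yields the coefficients of H(t) on the given interval; if the summed \(t^2\) coefficient vanishes, H(t) is linear, as noted above.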

Finally, we prove statement (iii). Our algorithm consists of the following two steps.

Step 1: Obtain breakpoints of \(F^{e_i}_\textrm{L}(t)\). For each j with \(2 \le j \le i\), we find all the breakpoints of \(z_{j}(t)\), which is the lower envelope of \(z_{j',j}(t)\) over integers \(j'\) with \(1 \le j' < j \) by (21). Because each \(z_{j',j}(t)\) is a piecewise linear function with at most three pieces, we can obtain all breakpoints of \(z_{j}(t)\) in \(O(n \log n)\) time for each j by Theorem 2. Thus, Step 1 requires \(O(n^2 \log n)\) time.

Step 2: Obtain functions of all pieces of \(F^{e_i}_\textrm{L}(t)\). Let \(T'\) be the interval of a piece of \(F^{e_i}_\textrm{L}(t)\). Choosing any \(t \in T'\), we obtain the function sequence \(U(t) = \langle u_1, \ldots , u_k \rangle \) of \(f^{e_i}_{\textrm{L}}(t,z)\). Because \(f^{e_i}_{\textrm{L}}(t,z)\) is the upper envelope of partially defined linear functions by (12) and (14), U(t) can be computed in \(O(n \log n)\) time by Theorem 2. Evaluating (24) then yields a polynomial of degree at most two in O(n) time. Because the number of pieces of \(F^{e_i}_\textrm{L}(t)\) is \(O(n^2\alpha (n))\), Step 2 requires \(O(n^3\alpha (n) \log n)\) time.

In total, our algorithm requires \(O(n^2 \log n + n^3\alpha (n) \log n)=O(n^3\alpha (n) \log n)\) time. \(\square \)

Proof of Lemma 2

Let us suppose that x is on \(e_i \in E\). We prove the lemma only for \(F^{e_i}_{L}(t)\) because the case of \(F^{e}_\textrm{R}(t)\) can be proved in a symmetric manner. We prove the following statements separately, as in the proof of Lemma 1.

  1. (i)

    The number of pieces of \(F^{e_i}_\textrm{L}(t)\) is O(n).

  2. (ii)

    The function of each piece of \(F^{e_i}_\textrm{L}(t)\) is a piecewise polynomial function of degree at most two in t.

  3. (iii)

    All pieces of \(F^{e_i}_\textrm{L}(t)\) can be obtained in \(O(n^2\log n)\) time.

Because statement (ii) has been shown in the proof of Lemma 1, we will prove only statements (i) and (iii).

First, we prove statement (i). For any \(t \in T\), let U(t) be the function sequence of \(f^{e_i}_{\textrm{L}}(t, z)\) in z as in Lemma 1. Note that only Cases 3 and 4 in the proof of Lemma 1 occur when all edge capacities are uniform. Therefore, some \(t'\) is a breakpoint of \(F^{e_i}_\textrm{L}(t)\) if and only if U(t) changes at \(t=t'\). In the following, we prove that any index enters U(t) at most once, and therefore U(t) changes O(n) times. Suppose that when t reaches \(t'\), j leaves U(t); we show that j never re-enters U(t) for any \(t > t'\). In this case, some \(j'\) with \(j<j'\le i\) comes to dominate j at time \(t'\); that is, when \(t =t'-\varepsilon \) for sufficiently small \(\varepsilon >0\), \(f^{e_i,j}_{\textrm{L}}(t,z)\) is above \(f^{e_i,j'}_{\textrm{L}}(t,z)\) for \(z\in (W_{j+1,i}(t),W_{1,i}(t)]\), and when \(t =t'\), \(f^{e_i,j}_{\textrm{L}}(t,z)=f^{e_i,j'}_{\textrm{L}}(t,z)\) holds for \(z\in (W_{j+1,i}(t'),W_{1,i}(t')]\) because the slopes in z of \(f^{e_i,h}_{\textrm{L}}(t, z)\) for all h are the same. Recall that in the proof of Lemma 1, the function \(z_{j',h}(t)\) for \(h=1,\ldots ,j'\) changes from (17) to (18) or from (18) to (17) at most once. Therefore, for all \(t >t'\), \(f^{e_i,j}_{\textrm{L}}(t,z)\) remains below \(f^{e_i,j'}_{\textrm{L}}(t,z)\) for \(z\in (W_{j+1,i}(t),W_{1,i}(t)]\). Thus, U(t) changes at most twice with respect to j. Because each breakpoint of \(F^{e_i}_\textrm{L}(t)\) corresponds to a value of t at which some j either enters or leaves U(t), the number of breakpoints of \(F^{e_i}_\textrm{L}(t)\) is O(n).

Next, we prove statement (iii). It suffices to show that all O(n) breakpoints of \(F^{e_i}_\textrm{L}(t)\) can be obtained in \(O(n^2\log n)\) time, because we can then apply the same operation as Step 2 in the proof of Lemma 1 to compute the O(n) pieces of \(F^{e_i}_\textrm{L}(t)\). Consider a value \(t=t'\) at which U(t) changes. Then, for some j with \(1\le j \le i\), the index of the function constituting \(f^{e_i}_{\textrm{L}}(t,z)\) over \(z \in (W_{j+1,i}(t), W_{j,i}(t)]\) changes at \(t=t'\). To compute such \(t'\), it suffices to consider \(f^{e_i}_{\textrm{L}}(t,z)\) with \(z=W_{j,i}(t)\); that is, \(t'\) is a breakpoint of the univariate function \(g^{j}(t):=f^{e_i}_{\textrm{L}}(t, W_{j,i}(t))\). Thus, \(t'\) is a value at which U(t) changes if and only if \(t'\) is a breakpoint of \(g^{j}(t)\) for some j with \(1\le j \le i\).

From the above discussion, we can compute \(F^{e_i}_\textrm{L}(t)\) by the following two steps.

Step 1: Obtain breakpoints of \(F^{e_i}_\textrm{L}(t)\). For all \(j=1,\ldots , i\), compute the breakpoints of \(g^{j}(t)\). Substituting \(W_{j,i}(t)\) into (12) shows that \(f^{e_i,h}_{\textrm{L}}(t, W_{j,i}(t))\) is linear in t on \([t^-,t^+]\). For any j with \(1\le j \le i\), \(g^{j}(t)\) is the upper envelope of \(f^{e_i,h}_{\textrm{L}}(t, W_{j,i}(t))\) over all h with \(j\le h \le i\), so by Theorem 2, \(g^{j}(t)\) has O(n) breakpoints and they can be computed in \(O(n\log n)\) time. Therefore, the breakpoints of \(g^{j}(t)\) for all j with \(1\le j \le i\) can be computed in \(O(n^2\log n)\) time. Some breakpoints may be obtained multiple times; removing duplicates then yields all O(n) breakpoints of \(F^{e_i}_\textrm{L}(t)\). Because the number of candidate breakpoints before removing duplicates is \(O(n^2)\), this operation takes \(O(n^2\log n)\) time. In total, Step 1 takes \(O(n^2\log n)\) time.
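The duplicate-removal step of Step 1 amounts to sorting the \(O(n^2)\) candidate breakpoints, which costs \(O(n^2\log n)\) time. A minimal sketch (the merging tolerance `eps` is our assumption; with exact arithmetic it would be zero):

```python
def dedupe_breakpoints(candidates, eps=1e-9):
    """Sort candidate breakpoints and drop (near-)duplicates.

    Sorting dominates the cost: O(m log m) for m candidates.
    """
    out = []
    for t in sorted(candidates):
        if not out or t - out[-1] > eps:
            out.append(t)
    return out
```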

Step 2: Obtain functions of all pieces of \(F^{e_i}_\textrm{L}(t)\). The number of pieces of \(F^{e_i}_\textrm{L}(t)\) is O(n) by statement (i). Therefore, by the same operation as that of Step 2 of the proof of Lemma 1, we can obtain polynomial functions of all the pieces of \(F^{e_i}_\textrm{L}(t)\) in \(O(n^2\log n)\) time.

In total, our algorithm requires \(O(n^2\log n)\) time to compute \(F^{e_i}_\textrm{L}(t)\). \(\square \)

Let \(N_{F}\) denote the maximum number of pieces of \(F^{e}_\textrm{L}(t)\) and \(F^{e}_\textrm{R}(t)\) over \(e \in E\). Then we have \(N_{F} = O(n^2\alpha (n))\), and for the case with uniform edge capacity, \(N_{F} = O(n)\). Next, we consider \(\textrm{Opt}(t) = \min \{ \varPhi (x,t) \mid x \in V\}\), which is the lower envelope of a family of n functions \(\varPhi (v_i, t)\) in t. Theorem 2 and Lemmas 1 and 2 imply the following lemma.

Lemma 3

\(\textrm{Opt}(t)\) is a piecewise polynomial function of degree at most two with \(O(n N_{F}2^{\alpha (n)})\) pieces, and it can be obtained in \(O(n N_{F} \alpha (n) \log n)\) time if functions \(F^{e}_\textrm{L}(t)\) and \(F^{e}_\textrm{R}(t)\) for all \(e \in E\) are available.

Proof

Equation (5) implies that \(\textrm{Opt}(t)\) is the lower envelope of a family of n functions \( \{\varPhi (v,t) \mid v \in V \}\). Recall that for \(v_i \in V\), we have

$$\begin{aligned} \varPhi (v_i, t) = \bigl ( W_{1,i-1}(t) - W_{i+1,n}(t) \bigr ) \tau v_i + F^{e_{i-1}}_\textrm{L}(t) + F^{e_i}_\textrm{R}(t). \end{aligned}$$

By Lemmas 1 and 2, because \(F^{e}_\textrm{L}(t)\) and \(F^{e}_\textrm{R}(t)\) for any \(e \in E\) are piecewise polynomial functions of degree at most two with at most \(N_{F}\) pieces, \(\textrm{Opt}(t)\) is the lower envelope of \(O(n N_{F})\) partially defined polynomial functions of degree at most two.

Theorem 2 implies that \(\textrm{Opt}(t)\) is a piecewise polynomial function of degree at most two with \(O(n N_{F} 2^{\alpha (n N_{F})}) = O(n N_{F} 2^{\alpha (n)})\) pieces and can be obtained in \(O(n N_{F} \alpha (n N_{F}) \log (n N_{F})) = O(n N_{F} \alpha (n) \log n)\) time, where we used the fact that \(N_{F} = O(n^3)\) and Property 1. This completes the proof. \(\square \)

Let \(N_\textrm{Opt}\) denote the number of pieces of \(\textrm{Opt}(t)\). Then we have \(N_\textrm{Opt} = O(n N_{F}2^{\alpha (n)})\).

Let us consider R(x, t) in the case that sink x is on an edge \(e_i \in E\). Substituting (15) into (6), we have

$$\begin{aligned} R(x, t) = \varPhi (x,t) - \textrm{Opt}(t) = \bigl ( W_{1,i}(t) - W_{i+1,n}(t) \bigr ) \tau x + F^{e_i}_\textrm{L}(t)+ F^{e_i}_\textrm{R}(t) - \textrm{Opt}(t). \end{aligned}$$

By Proposition 1, \(F^{e_i}_\textrm{L}(t)+ F^{e_i}_\textrm{R}(t) - \textrm{Opt}(t)\) is a piecewise polynomial function of degree at most two with at most \(2 N_{F} + N_\textrm{Opt} = O(N_\textrm{Opt})\) pieces. Let \(N_{e_i}\) be the number of pieces of \(F^{e_i}_\textrm{L}(t)+ F^{e_i}_\textrm{R}(t) - \textrm{Opt}(t)\) and \(T^{e_i}_j\) be the interval of the j-th piece (from the left) of \(F^{e_i}_\textrm{L}(t)+ F^{e_i}_\textrm{R}(t) - \textrm{Opt}(t)\). Thus, R(x, t) is represented as a (single) polynomial function in x and t on \(\{x \in e_i\} \times T^{e_i}_j\) for each \(T^{e_i}_j\). For each integer j with \(1 \le j \le N_{e_i}\), let \(G^{e_i}_{j}(x)\) be a function defined as

$$\begin{aligned} G^{e_i}_{j}(x) = \max \{ R(x, t) \mid t \in T^{e_i}_j\}. \end{aligned}$$
(25)

We then have the following lemma.

Lemma 4

For each \(e_i \in E\) and j with \(1 \le j \le N_{e_i}\), \(G^{e_i}_{j}(x)\) is a piecewise polynomial function of degree at most two with at most three pieces, and it can be obtained in constant time if functions \(F^{e_i}_\textrm{L}(t)\), \(F^{e_i}_\textrm{R}(t)\), and \(\textrm{Opt}(t)\) are available.

Proof

Recall that we have

$$\begin{aligned} R(x, t) = \varPhi (x,t) - \textrm{Opt}(t) = \bigl ( W_{1,i}(t) - W_{i+1,n}(t) \bigr ) \tau x + F^{e_i}_\textrm{L}(t)+ F^{e_i}_\textrm{R}(t) - \textrm{Opt}(t). \end{aligned}$$

Because \(W_{1,i}(t)\) and \(W_{i+1,n}(t)\) are linear in t and \(F^{e_i}_\textrm{L}(t)+ F^{e_i}_\textrm{R}(t) - \textrm{Opt}(t)\) is a polynomial function of degree at most two on \(t \in T_j\), we can represent R(x, t) on \(\{x \in e_i\} \times T_j\) with five real constants \(\beta _k\) (\(k = 1, \ldots , 5\)) as

$$\begin{aligned} R(x, t) = \beta _1 t^2 + \beta _2 x t + \beta _3 t + \beta _4 x + \beta _5. \end{aligned}$$

Let us consider the explicit form of the maximum value \(G_{j}(x)\) of R(x, t) for \(x \in e_i\) over \(t \in T_j\), where we write \(T_j = [t_{j-1}, t_{j}]\). If \(\beta _1 \ge 0\), then R(x, t) takes its maximum at \(t = t_{j-1}\) or \(t=t_{j}\) for any x. Thus, we have \(G_{j}(x) = \max \{ R(x, t_{j-1}), R(x, t_{j})\}\), which is a piecewise linear function in x with at most two pieces because both \(R(x, t_{j-1})\) and \(R(x, t_{j})\) are linear in x. Consider the other case, in which \(\beta _1 < 0\) holds. When the axis of symmetry of R(x, t) w.r.t. t, that is, \(t = -(\beta _2x +\beta _3)/(2\beta _1)\), is contained in \(T_j\), it holds that \(G_{j}(x) = R(x, -(\beta _2x +\beta _3)/(2\beta _1))\). Thus we have

$$\begin{aligned} G_{j}(x) = \left\{ \begin{array}{ll} R(x, t_{j-1})& \text { if } -\frac{\beta _2x + \beta _3}{2\beta _1} < t_{j-1}, \\ R\left( x, -\frac{\beta _2x + \beta _3}{2\beta _1} \right) & \text { if } t_{j-1} \le -\frac{\beta _2x + \beta _3}{2\beta _1} \le t_{j}, \\ R(x, t_{j})& \text { if } -\frac{\beta _2x + \beta _3}{2\beta _1} > t_{j}. \end{array} \right. \end{aligned}$$
(26)

Note that the inequality conditions in (26) can be solved for x as [\(x < x_1\); \(x_1 \le x \le x_2\); \(x > x_2\)] or [\(x > x_1\); \(x_1 \ge x \ge x_2\); \(x < x_2\)], where \(x_1\) is the solution for x of the equation \(-(\beta _2x + \beta _3)/(2\beta _1) = t_{j-1}\) and \(x_2\) is the solution for x of the equation \(-(\beta _2x + \beta _3)/(2\beta _1) = t_{j}\) (see Fig. 8). Therefore, \(G_{j}(x)\) is a piecewise polynomial function with at most three pieces, each of degree at most two, because \(R\left( x, - \frac{\beta _2x + \beta _3}{2\beta _1} \right) \) is a quadratic function in x. \(\square \)

Fig. 8: Graph of \(G_{j}(x)\) in the case of \(v_i< x_1< x_2 < v_{i+1}\)
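The case analysis (26) translates directly into an O(1) evaluation of \(G_j(x)\) at any fixed x. A sketch in Python (the function name and the tuple representation of the constants \(\beta_k\) are our own):

```python
def G_j(x, beta, t_lo, t_hi):
    """Maximum over t in [t_lo, t_hi] of
    R(x, t) = b1*t^2 + b2*x*t + b3*t + b4*x + b5,
    following the case analysis of (26); beta = (b1, b2, b3, b4, b5).
    """
    b1, b2, b3, b4, b5 = beta

    def R(t):
        return b1 * t * t + b2 * x * t + b3 * t + b4 * x + b5

    if b1 >= 0:
        # Convex (or linear) in t: the maximum is at an endpoint.
        return max(R(t_lo), R(t_hi))
    t_axis = -(b2 * x + b3) / (2 * b1)   # axis of symmetry in t
    if t_axis < t_lo:
        return R(t_lo)
    if t_axis > t_hi:
        return R(t_hi)
    return R(t_axis)
```

Sweeping x over a piece of \(G_j\) reproduces the three regimes of (26): endpoint \(t_{j-1}\), the vertex, and endpoint \(t_j\).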

Recalling the definition of MR(x), it holds that for \(x \in e\),

$$\begin{aligned} MR(x) = \max \{R(x, t) \mid t \in T \} = \max \{G^{e}_{j}(x) \mid 1 \le j \le N_{e} \}, \end{aligned}$$

that is, MR(x) is the upper envelope of functions \(G^{e}_{1}(x), \ldots , G^{e}_{N_{e}}(x)\). Applying Theorem 2, we have the following lemma.

Lemma 5

For each \(e \in E\), there exists an algorithm that finds a location that minimizes MR(x) under the restriction with \(x \in e\) in \(O(N_\textrm{Opt} \alpha (n) \log n)\) time if functions \(F^{e}_\textrm{L}(t)\), \(F^{e}_\textrm{R}(t)\), and \(\textrm{Opt}(t)\) are available.

Proof

We describe how to find a location \(x^{*, e}\) that minimizes MR(x) over \(x \in e\). Because functions \(F^{e}_\textrm{L}(t)\), \(F^{e}_\textrm{R}(t)\), and \(\textrm{Opt}(t)\) are available, we can apply Lemma 4 to compute the explicit forms of the functions \(G^{e}_{j}(x)\) for all j with \(1 \le j \le N_{e}\) in \(O(N_{e})=O(N_\textrm{Opt})\) time. Function MR(x) for \(x \in e\) is the upper envelope of the functions \(G^{e}_{1}(x), \ldots , G^{e}_{N_{e}}(x)\). Because each \(G^{e}_{j}(x)\) is a piecewise polynomial function of degree at most two with at most three pieces, MR(x) is the upper envelope of at most \(3N_e = O(N_\textrm{Opt})\) functions of degree at most two. Theorem 2 implies that MR(x) consists of \(O(N_\textrm{Opt} 2^{\alpha ( N_\textrm{Opt} )}) = O(N_\textrm{Opt} 2^{\alpha (n)})\) pieces and can be obtained in \(O(N_\textrm{Opt} \alpha ( N_\textrm{Opt} ) \log N_\textrm{Opt}) = O(N_\textrm{Opt} \alpha (n) \log n)\) time, where we used the fact that the inverse Ackermann function satisfies \(\alpha (n^3) \le \alpha (n) + 1\) for any integer \(n \ge 8\).

For each piece, compute a point that minimizes MR(x) in constant time, and from the obtained values, choose the minimum one as \(x^{*, e}\). Summarizing the above argument, these operations take \(O(N_\textrm{Opt} \alpha (n) \log n)\) time. \(\square \)

Note that a small modification to the algorithm of Lemma 5 also allows us to compute MR(v) for each \(v \in V\) in \(O(N_\textrm{Opt})\) time.

Lemma 6

For each \(v \in V\), there exists an algorithm that computes MR(v) in \(O( N_\textrm{Opt} )\) time if functions \(F^{e}_\textrm{L}(t)\), \(F^{e}_\textrm{R}(t)\), and \(\textrm{Opt}(t)\) are available for all \(e \in E\).

Proof

We show that it takes \(O(N_\textrm{Opt})\) time to obtain \(MR(v_i)\) for each \(v_i \in V\). Substituting (16) into (6), we have

$$\begin{aligned} R(v_i, t) = \bigl ( W_{1,i-1}(t) - W_{i+1,n}(t) \bigr ) \tau v_i + F^{e_{i-1}}_\textrm{L}(t) + F^{e_i}_\textrm{R}(t) - \textrm{Opt}(t). \end{aligned}$$

By Proposition 1, \(F^{e_{i-1}}_\textrm{L}(t) + F^{e_i}_\textrm{R}(t) - \textrm{Opt}(t)\) is a piecewise polynomial function of degree at most two with \(O(N_\textrm{Opt})\) pieces. Let \(N_{v_i}\) be the number of pieces of \(F^{e_{i-1}}_\textrm{L}(t) + F^{e_i}_\textrm{R}(t) - \textrm{Opt}(t)\) and \(T^{v_i}_j\) be the interval of the j-th piece (from the left) of \(F^{e_{i-1}}_\textrm{L}(t) + F^{e_i}_\textrm{R}(t) - \textrm{Opt}(t)\). Thus, \(R(v_i,t)\) is represented as a polynomial function of degree at most two on \(t \in T^{v_i}_j\) for each \(T^{v_i}_j\). For each integer j with \(1 \le j \le N_{v_i}\), by elementary calculation, we obtain the maximum value \(G^{v_i}_j\) of \(R(v_i, t)\) over \(t \in T^{v_i}_j\) in O(1) time. By choosing the maximum value among \(G^{v_i}_{1}, \ldots , G^{v_i}_{N_{v_i}}\), we can obtain \(MR(v_i)\) in \(O(N_{v_i}) = O(N_\textrm{Opt})\) time. \(\square \)
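The "elementary calculation" per piece and the final maximum over pieces can be sketched as follows (the tuple representation of pieces is our own assumption, not the paper's data structure):

```python
def max_over_pieces(pieces):
    """MR(v_i) from the pieces of R(v_i, t).

    Each piece is ((a, b, c), (lo, hi)), meaning R(v_i, t) = a*t^2 + b*t + c
    on the interval [lo, hi]. The per-piece maximum is found in O(1)
    (endpoints, plus the vertex if the parabola opens downward); the
    overall maximum is taken over all pieces.
    """
    best = float('-inf')
    for (a, b, c), (lo, hi) in pieces:
        def q(t):
            return a * t * t + b * t + c
        cand = max(q(lo), q(hi))
        if a < 0:
            vertex = -b / (2 * a)        # axis of symmetry
            if lo <= vertex <= hi:
                cand = max(cand, q(vertex))
        best = max(best, cand)
    return best
```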

3.2 Algorithm and time analysis

Finally, let us give an algorithm that finds a sink location that minimizes the maximal regret, as well as an analysis of the running time of each step.

First, we obtain \(F^e_\textrm{L}(t)\) and \(F^e_\textrm{R}(t)\) for all \(e \in E\) and the function \(\textrm{Opt}(t)\) as preprocessing. Applying Lemmas 1–3, these operations take \(O(n^2 N_{F} \log n)\) time. Next, we compute \(x^{*,e} = \text {arg min}\{ MR(x) \mid x \in e\}\) for all \(e \in E\) in \(O(n N_\textrm{Opt} \alpha (n) \log n)\) time by applying Lemma 5. We then compute MR(v) for all \(v \in V\) in \(O(n N_\textrm{Opt})\) time by applying Lemma 6. Finally, we find an optimal sink location \(x^{*}\) in O(n) time by evaluating MR(x) for \(x \in \{ x^{*, e} \mid e \in E \} \cup V\).

Because \(N_\textrm{Opt} = O(n N_{F} 2^{\alpha (n)})\), the bottleneck of our algorithm is computing \(x^{*,e}\) for all \(e \in E\). Thus, the algorithm runs in \(O(n^2 N_{F}2^{\alpha (n)} \alpha (n) \log n)\) time, which completes the proof of our main theorem because \(N_{F} = O(n^2\alpha (n))\) in general and \(N_{F} = O(n)\) in the case with uniform edge capacities.

4 Concluding remarks

MMR-ATSL addresses situations in which sink locations must be decided under uncertainty about how the population changes over time. We have presented an \(O(n^4 2^{\alpha (n)}\alpha (n)^2 \log n)\) time algorithm for MMR-ATSL on dynamic flow path networks with parametric weights. When all the edge capacities are uniform, MMR-ATSL can be solved in \(O(n^3 2^{\alpha (n)}\alpha (n) \log n)\) time.

MMR-ATSL on other network classes, such as trees, grids, or more general networks, remains open. Moreover, MMR-CTSL has not been solved for any network class. Designing algorithms for these problems is worthwhile for making effective evacuation plans.