1 Introduction

Consider the following traveling salesman problem: no request can be rejected, and if the salesman cannot serve a request immediately, an extra cost is incurred that increases with the request's waiting time. This problem reflects real situations such as emergency vehicle rescue. In order to save more lives and deliver emergency materials to affected sites, an emergency vehicle cannot reject any request. Zhou et al. (2017) observe that for an emergency vehicle, the sooner it meets the demands of the affected area, the better. The affected population, after sending out a signal for help, always wants to be served immediately; a longer waiting time may cause more dissatisfaction and more casualties. For injured people, a longer waiting time may be life-threatening, which can be seen as an extra cost for the emergency vehicle. For example, in earthquake rescue, the timing of the rescue is crucial for the affected population: within one day after the earthquake the survival rate is nearly 90 percent, while after two or three days it may drop to 70 to 80 percent. Moreover, in order to cut costs and improve efficiency, the emergency vehicles want to minimize the time at which the servers are back at the origin plus the costs associated with the requests that cannot be served immediately. The situations mentioned above closely resemble the traveling salesman problem with per-unit-time cost.

Regarding time costs in the TSP, Campbell and Thomas (2008) study the probabilistic TSP with deadlines; they consider a per-unit-time charge for violating the deadline. The per-unit-time charge is represented by a customer-dependent \(\lambda _{i}\) (which plays the same role as the parameter B in our model). For instance, FedEx Custom Critical refunds varying percentages of the cost of a shipment based on how late the shipment is delivered (FedEx 2005). For additional examples, see Charnsirisakskul et al. (2004) and Slotnick and Sobel (2005). Our model is the online version of Campbell and Thomas (2008)'s model when the deadline equals 0, without the probabilistic component.

In this paper, we consider the online version of the problem. The requests arrive over time, and the server has to decide how to move without any information about future requests. There is some research on the OL-TSP with extra costs, but the cost form differs from ours: there, the cost is incurred when the server rejects a request or misses its deadline and therefore does not serve it. Ausiello et al. (2008) introduce costs for unserved requests into the online quota TSP and call it the online prize-collecting TSP; the objective of the server is to collect a given amount of quota while minimizing the sum of the time to complete the service and the costs. Jaillet and Lu (2011) consider the situation in which the server only serves the accepted requests; there is a cost for rejecting a request, and the objective is to minimize the time to serve all accepted requests plus the sum of penalties associated with the rejected requests. Wen et al. (2015) introduce deadlines into the OL-TSP with service flexibility: the server can choose whether or not to serve a newly arrived request, costs are incurred by rejecting a request or missing its deadline, and the objective is to minimize the server's total cost. Yu et al. (2014) study the online quota traveling salesman problem and present optimal deterministic algorithms for each variant defined on a general space, the real line, or the half-line. None of the above studies considers a cost that increases with the waiting time, or the situation in which the server cannot reject a request.

A similar online problem is the OL-TRP (online Traveling Repairman Problem), which is an extension of the OL-TSP. Krumke et al. (2003) study the OL-TRP with the objective of minimizing the latency. Irani et al. (2004) study the dynamic OL-TRP, where the objective is to serve as many requests as possible before their deadlines. The most closely related research is the net-latency OL-TRP: Allulli et al. (2008) study this problem with the objective of minimizing the net latency (which is similar to the cost form in our model). They show that when the lookahead information satisfies \(\varDelta > 2D\) (D is the diameter of the network), there exists a competitive online algorithm. In particular, on the line segment they give an online algorithm with lookahead, whose competitive ratio depends on the lookahead information. Different from the model of Allulli et al. (2008), we consider not only the cost but also the makespan (the earliest time at which the server is back at the origin after serving all the requests), and we give a lower bound and a competitive online algorithm on the truncated line segment without any lookahead information. Our result is an extension and improvement of their work.

Compared with previous research, we make several contributions: (1) We study the TSP with per-unit-time cost in an online version that matches several real-life scenarios. (2) We minimize the total of the traveling time and the time cost as the objective, instead of only one of the two. (3) We consider non-zealous algorithms. Previous research only considers zealous algorithms (Allulli et al. 2008), where the online server changes its direction only if a new request becomes known, or if the server is either at the origin or at a request that has just been served (Blom et al. 2001). Taking reality into consideration, we consider non-zealous algorithms (Lipmann 2003), where the online server can change its direction at any time and at any place. The main contribution is the analysis of the OL-TSP with per-unit-time cost; the results generalize the OL-TSP by incorporating the per-unit-time cost form. Previous research considers scenarios in which the server does not serve a request after violating its deadline, or in which rejecting a request generates a cost; in our research, we consider delayed service that generates costs, which means that the later the service, the larger the cost. We show that neither deterministic nor randomized online algorithms can achieve constant competitive ratios on general metric spaces. On the truncated line segment, we give a lower bound and an Observe and Move Algorithm whose competitive ratio depends on the length of the line. On the uniform metric space, we give a lower bound and a Greedy Algorithm, and prove that the Greedy Algorithm is optimal.

The rest of the paper is structured as follows. In Sect. 2, we introduce the online traveling salesman problem with per-unit-time cost and a non-zealous server. In Sect. 3, we show that there are no constant competitive ratios for either deterministic or randomized online algorithms on general metric spaces. In Sect. 4, we analyze the problem on the truncated line segment and give a lower bound and an Observe and Move Algorithm with its competitive ratio. In Sect. 5, we analyze the problem on the uniform metric space and give a lower bound and a Greedy Algorithm with its competitive ratio. Final conclusions are given in Sect. 6.

2 Preliminaries

We are given a general metric space \(M\). Requests arrive over time, and request \(R_{i}\) is denoted by \(R_i=(r_{i},l_{i})\), \(i\in N\), \(N = \left\{ {1,2,3, \ldots ,n} \right\} \), where n is the number of requests; \(r_{i}\) is the request's release time and \(l_{i}\) is the position of \(R_{i}\). Denote by \(s_{i}\) the service time (the moment at which the server arrives at \(l_{i}\)), and by \(p_{i}\) the cost incurred because the server cannot serve request \(R_{i}\) immediately, which equals \(B(s_{i}-r_{i})\), where B (\(B\ge 1\)) is a cost rate. The problem starts at time 0 (initial state) with the server at the origin. The server can only be idle or move at unit speed. The server must be idle at the origin after serving all the requests (final state). The earliest time at which the server can reach the final state is called the makespan. The objective is to minimize the total cost, i.e., the makespan plus the total penalty \(\sum _{i=1}^{n} p_{i}\).
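To make the objective concrete, here is a minimal sketch (ours, not part of the paper) of how the total cost is evaluated for a given schedule, assuming the service times \(s_i\) are already known:

```python
def total_cost(makespan, release_times, service_times, B):
    """Objective of Sect. 2: makespan plus the penalties p_i = B * (s_i - r_i)."""
    assert len(release_times) == len(service_times)
    penalties = sum(B * (s - r) for r, s in zip(release_times, service_times))
    return makespan + penalties

# Example: two requests released at times 1 and 2, served at times 3 and 4,
# server back at the origin at time 6, cost rate B = 1: total cost 6 + 2 + 2 = 10.
print(total_cost(6, [1, 2], [3, 4], B=1))
```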

3 Lower bounds for OL-TSP with per-unit-time cost

3.1 Deterministic case

Theorem 1

No deterministic online algorithm can achieve a constant competitive ratio, even on the positive half line.

Proof

Consider the positive half line. At time \(l\) (\(l\ge 6\)), we consider the possible positions of the online server; suppose the online server is at point x.

Case 1. When \(x\le 0.5l\), the adversary releases n requests denoted by \(R_{1\le i \le n}=(l+\frac{i}{n},l- \frac{i}{n})\). After serving all n requests, both the online server and the offline server return to the origin (see Fig. 1).

Fig. 1: Trajectory of online and offline server

As shown in Fig. 1, the cost of the offline server is \(c_{opt}=2l\). At time \(l+1\), the online server is at point \(x+1\), and the waiting time of each request in \(R_{1\le i \le n}\) is at least \(l-x-2\ge 1\), so the cost of the online server satisfies \({c_{online}}> 3l-x + n(l-x-2)B \ge 2.5l+ nB\). We have \(\frac{c_{online}}{c_{opt}} > \frac{2.5l + n B}{2l} \rightarrow \ \infty \ \) for \(n \rightarrow \ \infty \ \).

Case 2. When \(x> 0.5l\), the offline server arrives at point l at time l, and the adversary releases n requests denoted by \(R_{1\le i \le n}=(l+\frac{i}{n},1+ \frac{i}{n})\). After serving all n requests, both servers go back to the origin. The cost of the offline server is \(c_{opt}=l+2\), and the cost of the online server satisfies \({c_{online}}> l+x + n(x-2)B> 1.5l+nB\). We have \(\frac{c_{online}}{c_{opt}} > \frac{1.5l + n B}{l+2} \rightarrow \ \infty \ \) for \(n \rightarrow \ \infty \ \).
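As a concrete illustration of Case 1 (our own plug-in of the bound, not part of the proof), fixing for instance \(l=6\), \(x=3\) and \(B=1\) shows the bound on the ratio growing without limit in n:

```python
def case1_ratio_lower_bound(l, x, n, B):
    """Lower bound on c_online / c_opt from Case 1 of Theorem 1."""
    c_opt = 2 * l
    c_online_lb = 3 * l - x + n * (l - x - 2) * B
    return c_online_lb / c_opt

l, x, B = 6, 3, 1          # l >= 6 and x <= 0.5 * l, as in Case 1
for n in (10, 100, 1000):
    print(n, case1_ratio_lower_bound(l, x, n, B))   # grows roughly like n * B / (2 * l)
```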

3.2 Randomized case

Since deterministic online algorithms perform badly against the adversary, randomized online algorithms are of interest. In this section, we only consider randomness in the online server's position. For randomness in the requests' release times, see Simroth and Souza (2009).

Theorem 2

No randomized online algorithm can achieve a constant competitive ratio, even on the positive half line.

Proof

Consider the positive half line. At time 0, the offline server moves towards point \(l\) (\(l\ge 4\)). Because the online server is non-zealous, at time l it may be at any position between 0 and l, and these positions are equally likely, so the position of the online server is uniformly distributed over the segment [0, l]; the offline server arrives at point l. The adversary releases n requests denoted by \(R_{1\le i \le n}=(l+\frac{i}{n},l- \frac{i}{n})\). After serving all n requests, both servers must go back to the origin. The offline server can serve all the requests without any extra cost. The online server's cost depends on its position at time l. Suppose the online server incurs cost \(c_{online}(1)\) when it is located in \([0,l-3]\) at time l. At time \(l+1\), the online server is then located in \([1,l-2]\), and the cost of each request in \(R_{1\le i \le n}\) is at least 1 for the online server. Thus we have \({c_{online}}> c_{online}(1)> \frac{{l - 3}}{l} (\int \limits _0^l {(l - x)dx} + l)+ n\frac{{l - 3}}{l}B\), and \(\frac{c_{online}}{c_{opt}} \rightarrow \ \infty \ \) for \(n \rightarrow \ \infty \ \).

4 OL-TSP with per-unit-time cost on truncated line segment

Deterministic and randomized online algorithms cannot perform well against the offline adversary, so it is interesting to explore the performance of online algorithms on restricted metric spaces. In this section, similar to Wen et al. (2012) and Gutiérrez et al. (2006), we consider the truncated line segment \([-L,L]\) with unit distance between adjacent nodes, where \(L\ge 3\) is an integer. Requests can only be released at nodes, and every node holds at most one unserved request at a time.

4.1 A lower bound

Theorem 3

The lower bound for the OL-TSP with per-unit-time cost on the truncated line segment is \(1 + 0.75LB\), where L is the half-length of the truncated line and B is the cost rate.

Proof

The adversary is omnipotent and can release requests at the positions of the offline server, which keeps the offline server's cost low; at the same time, a large distance between the online server and the offline server increases the online server's cost. With these considerations, the adversary releases the request sequence described below. The case analysis in the proof of the lower bound is based on the position of the online server and the strategy it adopts after a newly released request.

At time L, there are two cases for the online server: it has arrived at an extreme node (L or \(-L\)), or it is at some other position.

Case 1. The online server arrives at an extreme node (L or \(-L\)).

Without loss of generality, suppose the online server arrives at node L at time L, while the offline server arrives at node \(-L\). The adversary releases L requests on the negative half line, denoted by \({R_{1 \le i \le L}} = (L + i - 1, - L + i - 1)\). The online server must move towards \(-L\), and the two servers meet at the origin. If the offline server then stays idle at the origin, the cost of the offline server is \({c_{offline}} = 2L\), the cost of the online server is \({c_{online}} = 4L + ({L^2} + L)B\), and the cost ratio is \(\frac{{{c_{online}}}}{{{c_{offline}}}} = 2 + \frac{{(L + 1)B}}{2}\). Now consider the case in which the offline server starts to move again at time 2L and moves straight to L, while the adversary releases requests at the offline server's positions. There are two cases for the online server; in both, we only consider the added cost.

Case 1.1 The online server goes straight to node \(-L\), then turns around.

When the offline server is idle at the origin again at time 4L, the offline server's cost has increased by 2L and the online server's cost by \(2L+2L^{2}B\), so the cost ratio is \(\frac{{2L\mathrm{{ + }}2{L^2}B}}{{2L}} = 1 + LB\).

Case 1.2 The online server travels a distance \(y\) (\(y<L\)), then turns around.

Once the offline server finds that the online server has turned around, the offline server turns around too. The adversary releases requests at the offline server's positions, except at nodes that still hold requests unserved by the online server. There are two cases for the online server after turning around:

Case 1.2.1 After turning around, the online server travels a distance \(y_{1}\) (\(y_{1}\le y\)) and then turns around towards node \(-L\).

In this case the online server does not serve any request on the right side of the origin (see Fig. 2).

Fig. 2: Trajectory of online and offline server

When the offline server is idle at the origin again, the offline server's cost has increased by \(2L+2y_{1}\). For the online server's cost, note that the waiting cost of the requests between node \(-y\) and node \(-L\) increases by \(2(L-y)y_{1}B\); in total, the online server's cost increases by \(2L\mathrm{{ + }}2{y_1} + (2{y_1}(L - {y}) + 2{y_1}y + 2{L^2})B\), and the cost ratio is \(\frac{{2L\mathrm{{ + }}2{y_1} + (2{y_1}(L - y) + 2{y_1}y + 2{L^2})B}}{{2L + 2{y_1}}} = 1 + LB\) (see Appendix A for details).

Case 1.2.2 After turning around, the online server travels a distance \(y+y_{2}\), then turns around towards the origin.

In this case the online server serves \(\min\{y,y_{2}\}\) requests on the right side of the origin. When the offline server is idle at the origin again, the offline server's cost has increased by \(2L+2y+2y_{2}\).

When \(y_{2}\le y\), the online server serves \(y_{2}\) requests on the right side of the origin (see Fig. 3), and the adversary releases \(y_{2}\) new requests on the left side of the origin. The online server's cost increases by \(2L\mathrm{{ + 2}}y + 2{y_2} + (2y{y_2} + 2{y_2}^2 + 2(L - {y_2})(y + {y_2}) + 2{L^2})B\) (see Appendix B for details), and the cost ratio is equal to

\(\frac{{2L\mathrm{{ + 2}}y + 2{y_2} + (2y{y_2} + 2{y_2}^2 + 2(L - {y_2})(y + {y_2}) + 2{L^2})B}}{{2L\mathrm{{ + 2}}y + 2{y_2}}} = 1 + LB\).

Fig. 3: Trajectory of online and offline server

When \(y_{2}> y\), the online server serves y requests on the right side of the origin, and the adversary releases y new requests on the left side of the origin. The online server's cost increases by \(2L+2y+2y_{2}+(2(y+y_{2})(L-y)+2y^{2}+2yy_{2}+2L^{2})B\) (see Appendix 1 for details), and the cost ratio is equal to

\(\frac{2L+2y+2y_{2}+(2(y+y_{2})(L-y)+2y^{2}+2yy_{2}+2L^{2})B}{2L+2y+2y_{2}}=1+LB\).
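As a sanity check (our own symbolic verification with sympy, not part of the original proof), the displayed ratio indeed simplifies to \(1+LB\):

```python
import sympy as sp

L, B, y, y2 = sp.symbols('L B y y_2', positive=True)

added_offline = 2*L + 2*y + 2*y2
added_online = added_offline + (2*(y + y2)*(L - y) + 2*y**2 + 2*y*y2 + 2*L**2)*B

ratio = sp.cancel(added_online / added_offline)
print(ratio)                                   # B*L + 1
assert sp.simplify(ratio - (1 + L*B)) == 0
```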

According to the above analysis, turning around does not decrease the cost ratio for the online server, so the online server should not turn around until it arrives at node \(-L\). We refer to the situation in which the offline server moves to an extreme node and then goes back to the origin as a cycle. In each cycle, the cost of the offline server increases by 2L and the cost of the online server increases by \(2L+2L^{2}B\); after n cycles, the cost ratio is equal to

\(\frac{{{c_{online}}}}{{{c_{offline}}}} = \frac{{4L + ({L^2} + L)B + n(2L\mathrm{{ + }}2{L^2}B)}}{{2L + 2nL}}\); when \(n \rightarrow \infty \), \(\frac{{{c_{online}}}}{{{c_{offline}}}} = 1 + LB \ge 2 + \frac{{(L + 1)B}}{2}\) for \(L \ge 3, B \ge 1\).

Case 2. The online server arrives at other positions

Without loss of generality, suppose the online server is on the negative half line or at the origin at time L, while the offline server arrives at L. We consider the parity of L.

When L is even, the adversary releases \(2L+1\) requests denoted by \({R_{1 \le i \le 2L+1}} = (L + i - 1,L - i + 1)\). The offline server arrives at node \(-L\) at time 3L. If at time 3L the online server has not served the request at node L, the offline server waits at node \(-L\) until the online server arrives at node L; note that this is Case 1, and the cost ratio is \(1+LB\). If at time 3L the online server has served the request at node L, then at time 2L the online server's position is \(x\) (\(0<x\le L\)). The online server's strategy is to serve the requests on the positive half line first and then turn around to serve the requests on the negative half line (otherwise the online server cannot serve the request at node L before time 3L). We again consider cycles: whenever the offline server arrives at the origin, it moves to the extreme node that is farther from the online server (note that the online server and the offline server do not meet at the origin in this case). The adversary releases the requests at the offline server's positions. For the online server, similarly to the above analysis, it must arrive at the extreme node holding an unserved request before the offline server arrives at the other extreme node (see Fig. 4); otherwise the case reduces to Case 1.

Fig. 4: Trajectory of online and offline server

In each cycle, the offline server's cost increases by 2L and the online server's cost increases by \(2L+(2{L^2} - 0.5{x^2} + 2L - x)B\). When \(x=L\), the online server's added cost attains its minimum value, which equals \(2L+(1.5L^{2}+L)B\). After n cycles, the cost ratio is \(\frac{{{c_{online}}}}{{{c_{offline}}}} = \frac{{3L + c + n(2L\mathrm{{ + }}(1.5L^{2}+L)B)}}{{2L + 2nL}}\), where c is the cost of the first L requests. When \(n \rightarrow \infty \), \(\frac{{{c_{online}}}}{{{c_{offline}}}} = 1 + (0.75L + 0.5)B\) for \(L \ge 3, B \ge 1\).

When L is odd, similarly to the above analysis, the cost ratio is \(\frac{{{c_{online}}}}{{{c_{offline}}}} = 1 + (0.75L + 0.75)B\).

In the above cases, we did not consider a "waiting" strategy for the online server. In Case 1, if the online server adopts the waiting strategy, so does the offline server, and the cost ratio remains the same. In Case 2, if the online server adopts the waiting strategy, the situation becomes similar to Case 1, which increases the ratio.

Combining Case 1 and Case 2, we obtain the lower bound \(1 + 0.75LB\) for the OL-TSP with per-unit-time cost on the truncated line segment.

4.2 Observe and move algorithm and its competitive ratio

The main purpose of this section is to design the OM algorithm and derive its competitive ratio. In order to describe the OM algorithm clearly, we first characterize the adversary's best way of releasing requests (Lemma 1). Based on Lemma 1, we define the released request sequences, and with these definitions we describe the Observe and Move algorithm. The main idea of the OM algorithm is that, by observing the released request sequence, the online server decides whether to keep its direction or turn around. To obtain the competitive ratio, we prove that after Meeting situation 2 occurs, the offline server must either go to an extreme node (L or \(-L\)) or go to the origin and stay idle (Lemma 2). Finally, we prove that no matter how the adversary releases the requests, Meeting situation 2 must occur, which yields the competitive ratio.

Lemma 1

Once the online server and the offline server are not at the same position, the cost ratio of online algorithm A won’t decrease if the adversary only releases the requests at the positions of the offline server.

Proof

After the separation of the online server and the offline server, define a request sequence set \(S=\{S_{i},i=1,{\ldots },k\}\) in which the adversary only releases requests at the positions of the offline server; every request sequence in S begins at the time when the offline server leaves the origin and ends at the time when the offline server returns to the origin for the second time.

For \(S_{m}\), let the offline server's cost be \(y_{m}\) and the online server's cost be \(z_{m}\). Suppose the adversary instead releases a request \(R_{j}\) at a node j which is not at the position of the offline server, with release time t time units earlier than the offline server's arrival time at node j; then the offline server's cost is \(y_{m}+Bt\).

Now we consider the online server's cost. If, under algorithm A, the online server's trajectory is the same as the trajectory when the adversary releases the request sequence \(S_{m}\), there are two cases:

Case 1. Node j lies between the meeting position of the two servers and the online server; the online server's cost is \(z_{m}-Bt\), and the cost ratio is \(\frac{z_{m}-Bt}{y_{m}+Bt}< \frac{z_{m}}{y_{m}}\);

Case 2. Node j lies at any other position; the online server's cost is \(z_{m}+Bt\), and the cost ratio is \(\frac{z_{m}+Bt}{y_{m}+Bt}< \frac{z_{m}}{y_{m}}\) (since \(z_{m}>y_{m}\)).

If, under algorithm A, the online server's trajectory is not the same as the trajectory when the adversary releases the request sequence \(S_{m}\) (this may also change the trajectory of the offline server), then after the offline server serves all the requests, the request sequence \(S_{m}\) turns into a new request sequence \(S_{l}\) in S (at nodes holding requests unserved by the online server, the adversary cannot release new requests, and the online server and the offline server will meet at different points, which leads to a different released request sequence). The new request sequence \(S_{l}\) can be analyzed as above.

We do not consider the offline server staying at some point without moving (except staying at the origin in the final state), because the adversary cannot release any new request during such a stay, which would only lead to a higher offline cost and a lower online cost according to Lemma 1 and the hypothesis.

4.2.1 Observe and move algorithm

According to Lemma 1, after the separation of the online server and the offline server, the adversary should release the requests at the positions of the offline server, and in the proof of the lower bound the adversary does exactly this. When the online server serves these requests, the cost of each request for the online server is the same, so turning around before serving them brings the online server no benefit. This kind of released request sequence must be treated specially. In order to describe the Observe and Move Algorithm clearly, we first define two types of released request sequences:

Released request sequence 1: For a given request sequence S that the online server must serve, if the cost of each request in S for the online server is the same and equals \(2\left| {x - 0.5({x_0} + l)} \right| \), where x is the position at which the online server turns around, l is the position of the first released request in S, and \(x_{0}\) is the position of the online server when the first request in S is released, we call S a Released request sequence 1 (in particular, a single request can be seen as a Released request sequence 1); see Fig. 5.

Fig. 5: Released request sequence 1

Released request sequence 2: Any released request sequence that is not a Released request sequence 1 is called a Released request sequence 2.

We describe the OM Algorithm below.


The main idea of the algorithm is that the online server observes the released request sequence and then decides whether to keep its direction or turn around. If the online server is serving a Released request sequence 1, then regardless of its direction it serves all requests of that sequence and then turns around. If the online server is not serving a Released request sequence 1, two situations arise. When the online server is moving towards an extreme node: if the newly released requests behind it form a Released request sequence 1, it does not turn around until it has served all the requests in front of it; otherwise, it turns around immediately. When the online server is moving towards the origin, the strategy is different: once a new request is released behind the online server, it turns around immediately.
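Since the algorithm table itself is not reproduced here, the following is only a sketch of the decision rule just described; the state representation, the helper is_rrs1 and the per-request costs are our own illustrative assumptions, not the authors' code:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class OnlineState:
    toward_extreme: bool          # True: moving towards L or -L; False: towards the origin
    serving_rrs1: bool            # currently serving a Released request sequence 1
    has_request_ahead: bool       # there are unserved requests in front of the server
    behind_costs: List[float] = field(default_factory=list)  # cost of each new request released behind

def is_rrs1(costs, tol=1e-9):
    """Simplified check for Released request sequence 1: every request in the
    sequence would incur the same cost for the online server."""
    return bool(costs) and max(costs) - min(costs) <= tol

def om_decision(state: OnlineState) -> str:
    """Return 'keep' (keep direction) or 'turn' (turn around) per the OM rule."""
    if state.serving_rrs1:
        # Serve the whole Released request sequence 1 before turning around.
        return 'keep' if state.has_request_ahead else 'turn'
    if state.toward_extreme:
        # Keep the direction only while the requests released behind form a
        # Released request sequence 1; otherwise turn around immediately.
        if not state.behind_costs or is_rrs1(state.behind_costs):
            return 'keep'
        return 'turn'
    # Moving towards the origin: any request released behind triggers a turn.
    return 'turn' if state.behind_costs else 'keep'

# Example: moving towards L with two equal-cost requests released behind -> keep going.
print(om_decision(OnlineState(True, False, True, [4.0, 4.0])))  # 'keep'
```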

4.2.2 Competitive ratio

Before we give the proof of the competitive ratio, we define two meeting situations:

Meeting situation 1: The online server moves towards the origin and the offline server moves towards an extreme node (L or \(-L\)). When the two servers meet at some point on the truncated line, we call it Meeting situation 1.

Meeting situation 2: The online server moves towards an extreme node (L or \(-L\)) and the offline server moves towards the origin. When the two servers meet at some point on the truncated line, we call it Meeting situation 2.

In particular, when the two servers meet at the origin, we define this meeting as Meeting situation 2. The case analysis in Lemma 2 is based on the occurrence of Meeting situation 2.

Lemma 2

After Meeting situation 2 occurs, the offline server must either go straight to the origin and stay idle or keep its direction towards the extreme node, and this does not decrease the cost ratio of the OM algorithm.

Proof

Without loss of generality, suppose that Meeting situation 2 occurs at node \(-x\) (for simplicity of the analysis, suppose x is an integer; we assume all parameters are integers in what follows). Before this meeting, the offline server would not turn around until it arrives at an extreme node (L or \(-L\)). According to Lemma 1 and the OM algorithm, the last meeting of the two servers occurred at node x at time t, and it was also Meeting situation 2. We only consider the two servers' costs after time \(t+2L\). Based on the different movements of the offline server, we consider three cases:

Case 1. The offline server goes straight to the origin or to node L.

When the offline server goes straight to node L and then turns around back to the origin, according to Lemma 1 and the OM algorithm the online server and the offline server meet at node \(L-x\). The cost ratio is \(\rho _0= \frac{{4L - x + 2(L - x)(2L + x)B}}{{2L + x}}\).

When the offline server goes straight to the origin, the online server and the offline server do not meet again. According to Lemma 1 and the OM algorithm, the cost ratio is \(\rho = \frac{{2(L - x)LB + 2L - x}}{x}\).

We will compare the cost ratios of the following cases with \(\rho \) or \(\rho _{0}\). If \(\rho \) or \(\rho _{0}\) is larger, Lemma 2 is proved.

Case 2. The offline server travels a distance y, then turns around, and turns around again before a meeting happens.

If the two servers do not meet after the first turn of the offline server (they will eventually meet), the value of y must satisfy \(1 \le y<L-x\); if \(y\ge L-x\), the offline server must turn around on the right side of node \(-x\), and the adversary cannot release any new request, which would decrease the cost ratio.

The offline server turns around to node \( - x - {y_1}\) (\(0 \le {y_1} \le L - x - y - 1\); this ensures that the two servers do not meet on the negative half line), then goes straight to node L, goes back to the origin and stays there. According to Lemma 1, the adversary releases requests at the positions of the offline server, but it cannot release requests at nodes holding requests the online server has not yet served. The cost of the offline server is \({c_{offline}} = 2y + 2{y_1} + x + 2L\). The online server's movements are shown in Fig. 6.

Fig. 6: Trajectory of online and offline server

As shown in Fig. 6, the released requests between node \(-x\) and node \(-x+y\) satisfy Released request sequence 1 for the online server, and the cost of each such request is \(2(L-x)B\); the released requests between node \(-x+y\) and node L also satisfy Released request sequence 1, and the cost of each such request is \(2(L - x - y - {y_1})B\). According to the OM algorithm, the online server goes straight to node \(-L\), turns around to node L, and then goes back to the origin. The cost of the online server is \({c_{online}} = 4L - x + (2(L - x)(L - x + y) + 2(L - x - y)({y_1} + 1) + 2(L - x - y - {y_1})(L + 2x + {y_1}) - ({y_1}^2 + {y_1}))B \).

When \(y=1\) and \(y_{1}=0\), \(\frac{{c_{online} }}{{c_{offline} }}\) attains its maximum value, and \(\rho _{0} - \frac{c_{online}}{c_{offline}} > 0\) (see Appendix C for details).

Case 3. The offline server travels a distance y, then turns around, and the two servers meet.

After traveling a distance y (\(1 \le y \le L+x-1\)), the offline server turns around and meets the online server at node \(-L+y\). Based on the value of \(-L+y\), we consider three cases:

Case 3.1. The online server and the offline server meet on the left side of node \(-x\) (including node \(-x\)).

If the online server and the offline server meet on the left side of node \(-x\), the value of y must satisfy \(y \le L-x\).

Suppose the offline server moves a distance z (\(1 \le z \le y\)) after the meeting and then turns around back to the origin. According to the OM algorithm, the online server moves towards node \(-L\) and then turns around. If the two servers meet for a second time, the meeting position is node \(y - x - z\). Since \(y - x - z \ge -L+y\), this position is on the right side of node \(-L+y\). Based on the value of \(y-x-z\), we consider two cases:

Case 3.1.1. The online server and the offline server meet on the negative half line.

If the online server and the offline server meet on the negative half line, the value of y satisfies \(y<x+z\).

This meeting is Meeting situation 2 (when Meeting situation 2 occurs again, we do not consider the offline server turning around again until it arrives at the origin or an extreme node). The offline server goes straight back to the origin after the meeting, so the cost of the offline server is \({c_{offline}} = 2L + 2z - x\). According to the OM algorithm, the trajectories of the online server and the offline server are as shown in Fig. 7.

Fig. 7: Trajectory of online and offline server

As shown in Fig. 7, the released requests between node \(-x+1\) and node \(-x+y\), the released requests between node \(-L+y+1\) and node \(-L+y-z\), and the released requests between node \(y-x-z+1\) and the origin all satisfy Released request sequence 1 for the online server, and the cost of each such request is \(2(L-x)B\). According to the OM algorithm, the online server goes straight to node \(-L\) at time \(t+2L\), turns around to node \(-x+y\), turns around to node \(-L+y-z\), and then goes back to the origin. The cost of the online server is \({c_{online}} = 4L - 3x + 2z + (4(L - x)z + {(L - x - y)^2} + (L - x - y) + {(L - x - z)^2} + (L - x - z) + 2{L^2} - 2xL)B\).

Let \(f(y,z) = \rho - \frac{{{c_{online}}}}{{{c_{offline}}}}\); \(f(y,z)\) is an increasing function of y and z, and we can show \(f(y,z)>0\) (see Appendix D for details).

Case 3.1.2. The online server and the offline server do not meet on the negative half line.

If the online server and the offline server do not meet on the negative half line, the value of y must satisfy \(y \ge x+z\).

The online server and the offline server do not meet on the positive half line either, because the offline server has already stopped at the origin. According to Lemma 1 and the OM algorithm, the cost of the offline server is \({c_{offline}} = 2L + 2z - x\), and the cost of the online server is \({c_{online}} = 4L - 3x + 2z + (2(L - x)(L - x + y + z) + {(L - x - y)^2} + (L - x - y) + {(L - y)^2} + (L - y))B\). Similar to the analysis in Case 3.1.1, we can get \(\rho > \frac{{{c_{online}}}}{{{c_{offline}}}}\).

Case 3.2 The online server and the offline server meet between node \(-x\) and the origin (including the origin).

If the online server and the offline server meet between node \(-x\) and the origin (including the origin), the value of y must satisfy \( L-x < y \le L \).

Suppose the offline server moves a distance z (\(1 \le z \le y\)) after the meeting and then turns around back to the origin. The analysis of this case is similar to Case 3.1; no matter what values y and z take, we can get \(\rho > \frac{{{c_{online}}}}{{{c_{offline}}}}\).

Case 3.3 The online server and the offline server meet on the positive half line.

If the online server and the offline server meet on the positive half line, the value of y must satisfy \( y>L \).

After the meeting, the offline server goes back to the origin. According to the OM algorithm and Lemma 1, the online server goes straight to node \(-L\), turns around, moves straight to node \(-x+y\), and then goes back to the origin (see Fig. 8).

Fig. 8: Trajectory of online and offline server

The cost of the offline server is \({c_{offline}} = 2y - x\), and the cost of the online server is \({c_{online}} = 2L + 2y - 3x + 2(L - x)(2y - L)B\).

Let \(f\left( {y } \right) = \frac{{c_{online} }}{{c_{offline} }}\); \(f\left( {y } \right) \) is an increasing function of y. When \(y=L+x-1\), \(f\left( {y } \right) \) attains its maximum value, and \(\rho _0-f\left( {L+x-1 } \right) =f\left( {L+x } \right) - f\left( {L+x-1 } \right) >0\).

Combining the above cases, Lemma 2 is proved.

Theorem 4

The competitive ratio of the Observe and Move Algorithm on the truncated line segment is \(1+LB\).

Proof

The adversary always wants to separate the online server and the offline server. According to the OM algorithm, there are two ways to separate them: 1. At time 0, the adversary releases a request at node x. 2. At time 0, the adversary does not release any request, and the offline server moves towards an extreme node (L or \(-L\)). We show that no matter how the adversary releases the requests, Meeting situation 2 must occur.

Case 1. Without loss of generality, suppose \(x>0\). According to the OM algorithm, the online server serves the request immediately. For the offline server, once the online server leaves the origin, staying at the origin is pointless (it only leads to a higher cost for the offline server and a lower cost for the online server).

Case 1.1. The offline server moves towards node \(-L\).

After traveling a distance y (\(y>0\)), the offline server turns around. According to Lemma 1, the adversary releases requests at the positions of the offline server, and for the online server these y requests satisfy Released request sequence 1. According to the OM algorithm, the online server serves the request at node x first and then turns around to serve these y requests. If \(y \ge x\), the two servers meet again on the negative half line or at the origin, and this meeting is Meeting situation 2. If \(y < x\), the two servers meet at node \(x - y >0\), and the offline server must serve the request at node x. Suppose the offline server travels a distance z (\(z \ge 0\)) after serving the request at node x and then turns around towards the origin; the two servers meet again at 0.5z, and this meeting is Meeting situation 2.

Case 1.2. The offline server moves towards node L.

The offline server first stays at the origin; after the online server leaves, the offline server starts to move. When the adversary begins to release requests, according to the OM algorithm the first released request satisfies Released request sequence 1 for the online server, so the online server serves the request at node x and then makes its decision. Meanwhile, the offline server either keeps its direction towards extreme node L or turns around. If the meeting occurs on the negative half line, it is Meeting situation 2; if the meeting occurs on the positive half line, it is Meeting situation 1, and the following analysis is similar to Case 1.1: the next meeting must be Meeting situation 2.

Case 2. Without loss of generality, suppose that at time 0 the offline server moves towards node L while the online server stays at the origin. When the first request is released, according to Lemma 1 and the OM algorithm, the online server moves to serve it immediately and the offline server is on the right side of the online server. No matter how the adversary releases requests, the two servers must meet on the positive half line, and the meeting is Meeting situation 2.

According to Lemma 2, after Meeting situation 2 occurs, the offline server either goes straight to the origin and stays idle or keeps its direction towards the extreme node. Suppose the meeting on the positive half line occurs at node x. Consider as a cycle the situation in which the offline server arrives at L and \(-L\) successively and then goes back to the origin; in each cycle, the cost of the offline server increases by 4L and the cost of the online server increases by \(4L+4(L^{2}-x^{2})B\). When \(x=0\), the cost of the online server attains its maximum value.

From the above analysis, the worst case is that the two servers meet at the origin. We can describe the worst case as follows: at time 0, the adversary releases a request at node L; the online server must move towards node L to serve the request, while the offline server moves straight to node \(-L\). The adversary releases the requests at the positions of the offline server, and this lasts \(n+1\) cycles. The cost of the offline server is \(4L+3LB+4nL\), and the cost of the online server is \(6L + LB + 2{L^2}B +4nL+ 4n{L^2}B\). The competitive ratio is \(\frac{{c_{online} }}{{c_{offline} }} =\frac{6L + LB + 2{L^2}B +4nL+ 4n{L^2}B}{4L+3LB+4nL}\); when \(n \rightarrow \infty \), \(\frac{{c_{online} }}{{c_{offline} }} =1+LB\).
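As a quick numerical illustration (our own plug-in of the two worst-case cost formulas above, not part of the proof), the ratio indeed approaches \(1+LB\):

```python
def worst_case_ratio(L, B, n):
    """Worst-case cost ratio of the OM algorithm after n + 1 cycles (Theorem 4)."""
    c_offline = 4*L + 3*L*B + 4*n*L
    c_online = 6*L + L*B + 2*L**2*B + 4*n*L + 4*n*L**2*B
    return c_online / c_offline

L, B = 3, 1
for n in (1, 10, 1000, 10**6):
    print(n, worst_case_ratio(L, B, n))   # tends to 1 + L*B = 4
```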

We conclude that the competitive ratio of the OM algorithm is \(1+LB\).

5 Uniform metric space

5.1 Lower bound on uniform metric space

In this section, we discuss the online TSP with per-unit-time cost on the uniform metric space, which is induced by a complete graph with unit edge weights. Besides the origin, there are n vertices in the metric space, and requests can only be released at the vertices.

Theorem 5

No deterministic online algorithm can achieve a competitive ratio less than \(1+B\).

Proof

The offline server visits the n vertices one by one and then returns to the origin, and the adversary releases the requests at the offline server's positions. The cost of the offline server is \(c_{offline}=n+1\). We consider the different positions of the online server at time 1.

Case 1. The online server stays at the origin.

For the online server, when a new request is released, there are three options: 1. serve the request immediately; 2. go to another vertex; 3. stay. The option to stay only leads to a higher cost for the online server, so we only consider the first two options:

  1. Serve the request immediately. The cost of each request for the online server equals B. After serving all n requests, the cost of the online server is \(c_{online}=n+2+nB\).

  2. Go to another vertex. The worst case for the online server is that after visiting \(n-1\) vertices, it has not served any request at all. The adversary then releases the last request at time n. The penalties of the n requests for the online server exceed nB, so the cost of the online server satisfies \(c_{online}>2n-1+nB\).

If the online server adopts a mixed strategy, i.e., it serves some requests immediately and goes to another vertex for others, the cost in this case must be larger than \(n+2+nB\).

The cost ratio is \(\frac{{{c_{online}}}}{{{c_{offline}}}} = \frac{{n + 2 + nB}}{{n + 1}}\); when \(n \rightarrow \infty \), \(\frac{{{c_{online}}}}{{{c_{offline}}}} = 1+B\).

Case 2. The online server stays at another vertex.

By the symmetry of the network, this case is the same as Case 1, and the cost ratio is \(\frac{{{c_{online}}}}{{{c_{offline}}}} = 1+B\).

Case 3. The online server stays between the origin and some vertex j.

Here j can be any vertex in the uniform metric space; suppose that at time 1 the distance between the online server and the origin (or vertex j) is \(x\) (\(0<x\le 0.5\)). The online server still has the two options of Case 1 when a new request is released. Similar to the analysis in Case 1, the minimum cost of the online server is \(c_{online}=n+2+x+n(1+x)B\), and the cost ratio is \(\frac{{{c_{online}}}}{{{c_{offline}}}} = 1+(1+x)B\) when \(n \rightarrow \infty \).

Combining the above cases, we obtain the lower bound \(1+B\) for the online TSP with per-unit-time cost on the uniform metric space.

5.2 The greedy algorithm

In this section, we give a greedy algorithm and analyze its competitiveness on the uniform metric space.

Greedy algorithm (GA): At any time t, define S(t) as the set of all requests that have been released but not yet served by the online server. The greedy server always serves the requests in S(t) in the way that incurs the least cost. If S(t) is empty, the greedy server goes back to the origin.
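A minimal sketch of the greedy choice at time t (assuming a uniform metric space with unit travel times and an extra_cost oracle of our own; this is not the authors' implementation):

```python
def greedy_next(S_t, extra_cost):
    """GA sketch: among the unserved requests S(t), pick the one whose service
    incurs the least cost; head back to the origin if S(t) is empty."""
    if not S_t:
        return 'origin'
    return min(S_t, key=extra_cost)

# Example with cost rate B = 2 and unit travel time to any vertex:
B = 2
now = 1.5
S_t = [('v1', 0.0), ('v2', 1.0)]                  # (vertex, release time)
extra_cost = lambda req: B * (now + 1 - req[1])   # waiting time until service, times B
print(greedy_next(S_t, extra_cost))               # ('v2', 1.0) has the smaller penalty
```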

Lemma 3

The cost ratio of the Greedy algorithm does not decrease if the adversary only releases the requests at the positions of the offline server.

Proof

When the adversary only releases requests at the positions of the offline server, let the cost of the offline server be y and the cost of the online server be z. Suppose the adversary additionally releases one request \(R_{j}\) at node j before the offline server arrives at node j, and the distance from the online server to node j at time \(r_{j}\) is \(x\) (\(0<x\le 1\)). The cost of the offline server is then \(c_{offline} \ge y+xB\). For the online server, at time \(r_{j}\) there are two unserved requests; according to GA, there are two cases:

Case 1. The online server serves request \(R_{j}\) later; its cost is \(z_{1}=z+xB\).

Case 2. The online server serves request \(R_{j}\) first; its cost is \(z_{2}\).

If \(z_{2}>z_{1}\), the cost ratio is \(\frac{{z + xB}}{{y + xB}} < \frac{z}{y}\); if \(z_{2}<z_{1}\), the cost ratio is \(\frac{{{z_2}}}{{y + xB}}< \frac{{z + xB}}{{y + xB}} < \frac{z}{y}\).

Theorem 6

The competitive ratio of the Greedy algorithm is \(1+B\).

Proof

According to GA, the online server serves a request immediately after it is released. According to Lemma 3, the adversary only releases requests at the positions of the offline server, so for the online server there is only one unserved request at a time, and the cost of each request is B. If the adversary did not release a request when the offline server arrives at a vertex, the online server's cost would decrease while the offline server's cost stays the same. After serving all n requests, the cost of the offline server is \(c_{offline}=n+1\) and the cost of the online server is \(c_{online}=n+2+nB\); when \(n \rightarrow \infty \), the competitive ratio is \(\frac{{{c_{online}}}}{{{c_{offline}}}} = 1+B\).

6 Conclusion

In this paper, we consider the affected population's dissatisfaction in the emergency vehicle routing problem and introduce a per-unit-time cost into the OL-TSP. First, we show that neither deterministic nor randomized online algorithms can achieve a constant competitive ratio for the OL-TSP with per-unit-time cost on general metric spaces, or even on the positive half line. We therefore study the problem on more restricted metric spaces, namely the truncated line segment and the uniform metric space. On the truncated line segment, we give a lower bound and an Observe and Move Algorithm together with its competitive ratio; the algorithm observes the sequence of released requests, and accordingly the online server decides how to move. Two practical applications can be addressed by the proposed algorithm: (1) Emergency vehicle routing. After a natural disaster (such as an earthquake or hurricane) destroys the roads of a city, only a few main roads may remain passable (especially when the structure of the city is a grid network), and each main road can be seen as a truncated line segment. (2) Order picking. In a large warehouse with an assembly-line order picking system, every picker is responsible for one cross aisle, which can be seen as a truncated line segment; a request in our model corresponds to an item to be picked by the picker, and the objective function measures the picker's efficiency. On the uniform metric space, we give a lower bound and a Greedy Algorithm with its competitive ratio, and prove that the Greedy Algorithm is optimal. Finally, introducing other factors (such as deadlines) into this problem is also an interesting direction for future work.