Jointly dedicated to an Unceasingly Requested Algorithmist on his threescore Jubilee.

1 Motivation

Imagine an incredibly intelligent and industrious individual. Take the reasonably random example of a professor in computer science at a renowned university in the heart of Europe. Said professor has to meticulously manage his or her time schedule—we will arbitrarily stick with “his” for the purpose of this paper. He is invited to an influential school to give its teachers a brief talk on the education in his field, illuminating computer science’s core principles and providing tried and tested tools to guide our future generations. As always, the organizers wish for an immediate confirmation of whether he will embrace this offer. Undoubtedly, he would love to do so; nevertheless, he needs to check whether he is already booked at the requested time.

Even if the booking remains a viable possibility, however, he is of course prudent enough to not accept any invitation precipitately. After all, it might interfere with another opportunity yet to come—envision an extensive exposition of his educational efforts and an ensuing exemplary execution for the government’s leading officials. Such a feasible future request might fill the available time window even better, thereby minimizing his idle time. As a conscientious person valuing reliability, going back on a commitment once made is not an option for him, limiting the flexibility even further. In order to cope successfully with all the diverse and demanding duties, a good guideline for accepting or rejecting requests is thus indispensable.

We try to help our professor in finding and analyzing the best acceptance strategies.

2 Introduction and Definitions

In this section, we introduce the model of our problem, the necessary formal framework, and the tools used for its analysis.

2.1 Length-Weighted Disjoint Path Allocation

The task at hand can be described as follows. First, we are given a time span of length \(\ell \in \mathbb {N}\). It is measured in a reasonable time unit, assumed to be indivisible. Certainly, we should not be micro-managing single seconds. The time span can be visualized as a path of length \(\ell \), as it might appear in a day planner. Switching from the motivating model to a more tangible example, suppose there is a meeting room that can be booked, initially available over the entire time span. Inevitably, an unknown number of reservation requests will start pouring in. Every request corresponds to a subpath and demands immediate acceptance or rejection. Once taken, a decision cannot be revoked. Our goal is to maximize the room’s occupancy, the overall time it is in use.

This is of course an optimization problem, but also a typical online problem. Such problems appear abundantly in all areas of computer science. They are characterized by the fact that a solving algorithm \(\mathcal {A}\) is forced to produce parts of the output without complete knowledge of the input.

We now define our problem of Length-Weighted Disjoint Path Allocation on paths, lwDPA for short, more formally. An instance \(I \) consists of a path P of length \(\ell \) and n requests \(r_i\), which are just subpaths of P. An algorithm either accepts or rejects each request. It cannot accept a request that overlaps a previously accepted request, where two requests are said to overlap if they share at least one edge. In this case, we say that they block each other. The requests accepted by an algorithm \(\mathcal {A} \) on an instance \(I \) constitute the solution \(\mathcal {A} (I)\) of \(\mathcal {A} \) on \(I \). A solution is thus a selection of non-overlapping requests. For a fixed solution, we refer to the leaves of the accepted requests, that is, the starting and ending points of the chosen subpaths, as transition points.

The gain of a solution to \(I \) computed by \(\mathcal {A} \) is the total length of all accepted requests, denoted by \(\mathrm {gain}(\mathcal {A} (I))\). A solution is optimal if it has the maximum possible gain. An optimal algorithm \(\textsc {Opt} \) produces an optimal solution \(\textsc {Opt} (I)\) with gain \(\mathrm {gain}(\textsc {Opt} (I))\) on every instance I. We refer to the accepted requests of a fixed optimal solution as the optimal requests.

In accordance with the conventional way of defining related problems—see Subsect. 2.2—we assume that all requests are unique; in other words, no subpath is offered more than once. Moreover, we assume that the algorithm is not given the number n of requests beforehand and also does not recognize the last request as such. This is a severe constraint for any algorithm.

2.2 Connection with Classic Disjoint Path Allocation

Among all researched online problems, the problem of Disjoint Path Allocation (DPA) is one of the earliest. It has been thoroughly analyzed in many different settings [1, 2, 3, 8]. The difference from our Length-Weighted Disjoint Path Allocation (lwDPA) lies in a single but significant detail: For DPA, the gain of a solution is defined as the number of requests it contains; for lwDPA, on the other hand, we weight the requests by their length. Thus, the gain is the total length of all accepted requests.

Both DPA and lwDPA can be altered to allow preemption, that is, the removal of provisionally accepted requests from the solution. The preemptive model has been the focus of the work by Kováčová [11]. Nevertheless, her work also contains a few results for the non-preemptive case. Her Theorems 8, 9, 12, and 13 correspond to the unparametrized special case of our optimality results in Subsect. 3.1, namely Theorems 1 to 4. Her Theorem 18 gives an upper bound that our Theorem 7 improves upon asymptotically by choosing \(b>0\).

DPA is traditionally motivated by a network wherein permanent communication channels between pairs of nodes are established. It is unknown what the connections are used for; consequently, the only goal is to maximize their number. In contrast, our motivating example for lwDPA illustrates why, under the right circumstances, maximizing the number of covered edges is preferable instead.

2.3 Competitive Analysis

Competitive analysis was introduced by Sleator and Tarjan as a tool to measure the performance of online algorithms [12]. The basic idea is to assess the quality of the solution computed by an online algorithm by comparing it to an optimal solution that is calculated offline, that is, by an algorithm with prior knowledge of the whole input. Note that no limit on the computational complexity is imposed; enforcing online decisions is restrictive enough. An offline algorithm can, therefore, always deliver an optimal solution, if necessary by applying brute force.

In the present case of a maximization problem, the following two notions enable us to express the mentioned comparison succinctly:

Definition 1

(Strict Competitiveness). \(\mathcal {A}\) is strictly c-competitive if

$$\begin{aligned} \forall I :\ \frac{\mathrm {gain}(\textsc {Opt} (I))}{\mathrm {gain}(\mathcal {A} (I))}\le c. \end{aligned}$$

The best attainable strict competitiveness is 1. This is only achieved if \(\mathcal {A}\) always produces an optimal solution. The poorer \(\mathcal {A} \)’s handling of any worst-case instance, the larger a c we need to choose.

Definition 2

(Strict Competitive Ratio). The strict competitive ratio of an online algorithm \(\mathcal {A} \) is the infimum over all c that satisfy the condition in Definition 1.

A thorough introduction to online computation and competitive analysis can be found in the textbook by Borodin and El-Yaniv [4].

2.4 Advice Complexity

The competitive analysis of online algorithms compares them to offline algorithms. One might say that this sets the bar far too high. Often, an online algorithm has no chance of coming anywhere near the performance of an offline algorithm.

Here, the concept of advice complexity comes into play, offering a more fine-grained classification of online problems. The aim is to answer the question of how much information about the complete input is required for an online algorithm to guarantee any given competitiveness. Arguably the most fundamental question in this regard is how much knowledge about the input suffices for an online algorithm to ensure an optimal solution. With less information it is generally more difficult to solve a problem well; hence, a trade-off between the necessary amount of information and the achievable competitive ratio occurs.

We allow for any kind of information and measure it in bits. We model this by an oracle with infinite computational power that delivers, in dependence of the given instance, an infinite binary string of advice bits. This string is provided to the online algorithm along with the instance in the form of an infinite tape. The online algorithm may read a prefix of arbitrary length from the tape and adjust its behavior accordingly. We refer to the read prefix as the advice string and call its length the advice complexity of the algorithm on the given instance.

An online algorithm that reads at most b advice bits can also be interpreted as a collection of \(2^b\) classic deterministic algorithms, one for each advice string. Conversely, we need \(\log B\) advice bits to simulate B distinct deterministic algorithms. This viewpoint will come in handy in our proofs.
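This correspondence is mechanical enough to spell out in code. The following Python fragment is a minimal sketch of the dispatching view; the representation of the tape as a bit list and all identifiers are our own illustration, not part of the formal model.

```python
def dispatch(tape, algorithms):
    """Interpret an advice prefix as a selector among a family of
    deterministic algorithms: B algorithms need ceil(log B) bits."""
    b = (len(algorithms) - 1).bit_length()    # ceil(log2 B) for B >= 1
    index = int("".join(map(str, tape[:b])) or "0", 2)
    return algorithms[min(index, len(algorithms) - 1)]

# Four dummy algorithms; the first two advice bits select the third.
algs = ["A0", "A1", "A2", "A3"]
assert dispatch([1, 0, 1, 1, 0], algs) == "A2"
```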

Across all possible online algorithms, there is a minimum number of advice bits that must be read to process any problem instance optimally. This minimal advice complexity can be interpreted as a fundamental property of the problem, its information content.

The concept of advice complexity was introduced by Dobrev et al. [6] and subsequently refined by Hromkovič et al. [9], Böckenhauer et al. [3], and Emek et al. [7]. In particular the work by Hromkovič et al. [9] elaborated on the intriguing idea of advice complexity as a tool to measure a problem’s information content. Advice complexity has since been successfully applied to numerous online problems. For a recent and well-written introduction to both online problems and the concept of advice, we refer to Komm [10].

2.5 Parametrization

For many online problems, bounds are naturally given as functions of n, the number of requests. For DPA as well as lwDPA, however, there is the alternative to express results in terms of the path length \(\ell \) instead. We present results for both choices.

Our analysis of the lwDPA problem is consistently parametrized by \(k_{\mathrm {min}} \) and \(k_{\mathrm {max}} \), a guaranteed lower and upper bound, respectively, on the length of the requests in the instances. These two parameters are known to both the algorithm and the oracle. We denote the ratio between these two bounds by \(\varphi := k_{\mathrm {max}}/k_{\mathrm {min}} \). Naturally, we have \(1\le k_{\mathrm {min}} \le k_{\mathrm {max}} \le \ell \). When we restrict the instances to requests of a single length, DPA and lwDPA coincide. This corresponds to the particular case of \(k_{\mathrm {min}} =k_{\mathrm {max}} \), for which good bounds are already proven in the bachelor’s thesis by Dietiker [5]; we therefore assume \(k_{\mathrm {min}} < k_{\mathrm {max}} \). Our parametrized results are a true generalization in the sense that setting \(k_{\mathrm {min}} =1\) and \(k_{\mathrm {max}} =\ell \) yields meaningful bounds on the strict competitive ratio in the classic sense.

Why would we want to parametrize the lwDPA problem in this way? We get a good intuition by going back to the motivating example of a meeting room that is available for booking over a time span of length \(\ell \). Blocks of 5 minutes could be a reasonable time unit, providing sufficient flexibility without an excessive number of options. Since nobody properly enjoys concise meetings, we might want to enforce a minimum reservation length of half an hour. With our parametrization, we can model this by setting \(k_{\mathrm {min}} \) to 6, yielding a minimum of \(k_{\mathrm {min}} \cdot 5=30\) minutes. One might object that we could have just as well stretched the time unit to half an hour, with no need for introducing new parameters. This, however, would leave no option between 30 minutes and a full hour, thereby preventing a reasonable booking time for, say, a 45-minute exercise class. Imposing upper bounds on the reservation length via \(k_{\mathrm {max}} \), on the other hand, could make sense in order to avoid permanent reservations and give everybody a booking opportunity.

Altogether, we see that the proposed parametrization in terms of \(k_{\mathrm {min}} \) and \(k_{\mathrm {max}} \) is well justified and will help us to deal better with boundary conditions of this sort. Note that \(k_{\mathrm {min}} \) and \(k_{\mathrm {max}} \) are only bounds on the request lengths known to the algorithm and not necessarily the actual lengths of the shortest and longest requests. In the example, we know that every meeting will last at least half an hour; however, this does not mean we can count on the appearance of a booking request for a 30-minute meeting.

3 Results

This section presents all of our results for lwDPA in consistent parametrization. It is structured as follows.

In Subsect. 3.1, we give upper and lower bounds on the amount of advice bits needed to achieve optimality. In Subsect. 3.2, we prove a tight bound on the strict competitive ratio of the problem without advice.

Subsection 3.3 bridges the gap between these two extremes; we present results that trace the trade-off between strict competitiveness and advice complexity. The first one, Theorem 6, is an upper bound witnessed by an intelligent greedy algorithm. It performs exceptionally well with only few advice bits. The performance deteriorates quickly with increasing advice complexity, however. Theorem 7 cures this shortcoming with an advice-assisted preprocessing; it is well-suited for situations where much advice is available. The last two theorems give the complementing lower bounds. Theorem 8 is a good match to Theorem 6; it features a hard instance set that is tailor-made against the greedy approach. Theorem 9, on the other hand, is a rather technical but all the more flexible result, yielding multiple useful corollaries.

The conclusion in Sect. 4 recapitulates the results and discusses what light they shed on the advice complexity behavior of lwDPA.

3.1 Optimality

This subsection answers the question of how much advice an online algorithm needs in order to produce an optimal solution to every instance. Theorems 1 and 3 give an answer that depends on n, whereas Theorems 2 and 4 address the question in terms of the path length \(\ell \). It is worth noting that we have two closely matching pairs of upper and lower bounds and even a perfect match in the unparametrized case of \(k_{\mathrm {min}} =1\) and \(k_{\mathrm {max}} =\ell \).

Theorem 1

Optimality can be achieved with n advice bits.

Proof

The oracle chooses an arbitrary but fixed optimal solution. Then, the oracle specifies for every request whether it is part of this solution or not, using n advice bits. Whenever the algorithm receives a request, it reads the next advice bit from the tape and accepts or rejects accordingly. This strategy does not depend on the values of \(k_{\mathrm {min}} \) and \(k_{\mathrm {max}} \).    \(\square \)
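The scheme is simple enough to state as code. The following Python sketch illustrates it; the representation of requests as (start, end) vertex pairs and all identifiers are our own, not part of the formal model.

```python
def oracle_advice(requests, optimal_solution):
    """One advice bit per request: 1 iff the request belongs to the
    fixed optimal solution chosen by the oracle."""
    opt = set(optimal_solution)
    return [1 if r in opt else 0 for r in requests]

def follow_advice(requests, advice):
    """The algorithm of Theorem 1: accept exactly the flagged requests."""
    return [r for r, bit in zip(requests, advice) if bit == 1]

# Path of length 10; the fixed optimal solution covers all 10 edges.
requests = [(0, 3), (2, 5), (5, 10), (0, 2)]
advice = oracle_advice(requests, [(0, 2), (2, 5), (5, 10)])
assert follow_advice(requests, advice) == [(2, 5), (5, 10), (0, 2)]
```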

Theorem 2

Optimality can be achieved with \((\ell - 1)/k_{\mathrm {min}} \cdot \left( 1 + 2\log (k_{\mathrm {min}})\right) \) advice bits.

Proof

Before we describe how the oracle determines the advice given to the algorithm \(\mathcal {A}\), let us consider the subpaths of \(P \) consisting of \(k_{\mathrm {min}} \) vertices. We first show that in any optimal solution to an instance \(I \), any such subpath can contain at most two transition points. For the sake of contradiction, fix some subpath \(S\) of length \(k_{\mathrm {min}}-1\) and some optimal solution \(\textsc {Opt} (I)\) and assume that \(S\) contains at least three transition points of \(\textsc {Opt} (I)\). Then we can pick three consecutive transition points from \(S\) and call them (from left to right) \(u, v \), and \(w \). Since \(v \) is the middle transition point, \(\textsc {Opt} (I)\) must contain at least one of the requests \((u, v)\) and \((v, w)\). However, since \(S\) only contains \(k_{\mathrm {min}} \) vertices, the subpaths \((u, v)\) and \((v, w)\) both have length at most \(k_{\mathrm {min}}-1\) and are thus shorter than the minimum guaranteed request length. Hence, they can be part of neither the instance \(I \) nor \(\textsc {Opt} (I)\), yielding the contradiction.

As depicted in Fig. 1, we now divide the \(\ell - 1\) inner vertices of the path \(P \) into \((\ell - 1)/k_{\mathrm {min}} \) segments containing \(k_{\mathrm {min}} \) vertices each. For the given input sequence \(I \), the oracle chooses some arbitrary but fixed optimal solution \(\textsc {Opt} (I)\) and uses the advice bits to indicate the positions of the transition points of \(\textsc {Opt} (I)\) in each segment. To calculate the number of advice bits necessary, let us count the number of cases the oracle has to distinguish. Taking into account that the number of transition points in a segment might be two, one, or zero, the number of different possible combinations that have to be distinguished is

$$\begin{aligned} \left( {\begin{array}{c}k_{\mathrm {min}} \\ 2\end{array}}\right) + k_{\mathrm {min}} + 1 = \frac{k_{\mathrm {min}} (k_{\mathrm {min}}- 1) + 2k_{\mathrm {min}} + 2}{2} = \frac{k_{\mathrm {min}} ^2+ k_{\mathrm {min}} + 2}{2}, \end{aligned}$$

which can be bounded from above by \(2k_{\mathrm {min}} ^2 \). The number of advice bits necessary to encode that many different possible combinations of transition points per segment is

$$\begin{aligned} \frac{\ell - 1}{k_{\mathrm {min}}} \cdot \log \left( 2 k_{\mathrm {min}} ^2 \right) = \frac{\ell - 1}{k_{\mathrm {min}}} \cdot \left( 1 + 2\log ({k_{\mathrm {min}}}) \right) . \end{aligned}$$

The algorithm \(\mathcal {A}\) reads exactly this many advice bits. (It can do so even before the first request arrives.) From this information, \(\mathcal {A}\)  can derive all transition points of \(\textsc {Opt} (I)\) except the first and last vertex of \(P\), which can always be treated as transition points. Now it only has to accept every presented request whose starting and ending points coincide with two successive transition points, and reject all others. This way, \(\mathcal {A}\)  never accepts any requests that block any requests from \(\textsc {Opt} (I)\). Hence, \(\mathcal {A}\)  accepts all requests from \(\textsc {Opt} (I)\) and must be optimal itself.    \(\square \)

Fig. 1.

An example of a possible optimal solution \(\textsc {Opt} (I)\). The dashed lines indicate the segments containing \(k_{\mathrm {min}} \) vertices each. The bit string underneath indicates all transition points of \(\textsc {Opt} (I)\), enabling \(\mathcal {A}\) to act optimally on \(I \). The algorithm can deduce this string by reading only \(1 + 2\log k_{\mathrm {min}} \) advice bits per segment.
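To see where the per-segment bound comes from, it may help to enumerate the possible transition-point patterns explicitly. The following Python sketch does so; the enumeration order is an arbitrary convention that oracle and algorithm would have to agree on beforehand, and the function names are ours.

```python
from itertools import combinations
from math import ceil, log2

def segment_codebook(k_min):
    """All sets of at most two transition points within a segment of
    k_min vertices, in a fixed order shared by oracle and algorithm."""
    positions = range(k_min)
    codebook = [()]                                # no transition point
    codebook += [(p,) for p in positions]          # one transition point
    codebook += list(combinations(positions, 2))   # two transition points
    return codebook

def bits_per_segment(k_min):
    return ceil(log2(len(segment_codebook(k_min))))

# For k_min = 6, the half-hour example of Subsect. 2.5:
assert len(segment_codebook(6)) == 22    # (6*5)/2 + 6 + 1 <= 2 * 6**2
assert bits_per_segment(6) == 5          # <= 1 + 2*log2(6) = 6.17...
```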

For these two upper bounds, it was sufficient to provide an algorithm with the given advice complexity that performs optimally on all instances. For the corresponding lower bounds, we now take the adversary’s position and show that there is always an instance that the algorithm will solve suboptimally, unless given a certain number of advice bits.

Theorem 3

At least \(n-1-\lfloor (n-1)/(k_{\mathrm {max}}-k_{\mathrm {min}} +1)\rfloor \) advice bits are needed for optimality.

Proof

We describe a set \(\mathcal {I} \) of instances such that no deterministic algorithm can solve two of them optimally. This implies that an optimal algorithm must be able to distinguish them all, so \(\log (|\mathcal {I} |)\) advice bits are required.

To give a lower bound in n, we can choose the path length as large as necessary. We fix it to be \(\ell := nk_{\mathrm {min}} \) for all instances from \(\mathcal {I} \). Every instance is presented in k blocks that consist of \(i_1,\ldots ,i_k\) requests, respectively; hence \(i_1+\cdots +i_k=n\). The first block has \(i_1\) requests of lengths \(k_{\mathrm {min}},\ldots ,k_{\mathrm {min}} +i_1-1\); they are all aligned on the left endpoint and presented in increasing order. All following blocks are constructed in the same way: Block j has \(i_j\) requests of lengths \(k_{\mathrm {min}},\ldots ,k_{\mathrm {min}} +i_j-1\), aligned as far as possible to the left, without overlapping the previous block. See Fig. 2 for an example. Every new request protrudes further to the right: by 1 if it stays in the same block, and by \(k_{\mathrm {min}} \) if it starts a new one. Thus, the path length \(\ell =n\cdot k_{\mathrm {min}} \) is indeed just large enough to accommodate all of them.

The optimal solution, obtained by accepting the last request of each block and highlighted in Fig. 2, is unique. Hence, already a single mistake prevents optimality. We now prove that a deterministic algorithm cannot handle any two such instances \(I\ne I'\) optimally, but will make a mistake on at least one of them. The requests for I and \(I'\) are identical up to some point. Let \(r_j\) be the first request that differs between them. Like every request, \(r_j\) either starts a new block or continues an existing one. Only in the former case is \(r_{j-1}\) the last request in a block and thus has to be accepted. Since the deterministic algorithm must decide on \(r_{j-1}\) before \(r_j\) is presented and cannot distinguish the two instances up to that point, it is bound to make a mistake on one of them.

It remains to give a good estimate on the number of instances in \(\mathcal {I} \). For this, we consider how these instances can be constructed step by step, adding the n requests one after another. The first request in the first block is identical for all instances; it has length \(k_{\mathrm {min}} \). In each of the following \(n-1\) steps we can either start a new block with a request of length \(k_{\mathrm {min}} \) or—if the preceding request is shorter than \(k_{\mathrm {max}} \)—extend the current block downward by a request that is one unit longer. The second option is excluded only if the preceding request has the maximum length \(k_{\mathrm {max}} \). At most how often can this situation occur, namely that a request of length \(k_{\mathrm {max}} \) precedes another one? There are \(n-1\) requests preceding another one. Also, any block featuring a request of length \(k_{\mathrm {max}} \) comprises \(k_{\mathrm {max}}-k_{\mathrm {min}} +1\) requests. Therefore, the answer is

$$\begin{aligned} d:=\left\lfloor \frac{n-1}{k_{\mathrm {max}}-k_{\mathrm {min}} +1}\right\rfloor . \end{aligned}$$

Thus, we always have at least \(n-1-d\) binary choices during the construction of an instance and the number of different instances is bounded from below by \(|\mathcal {I} |\ge 2^{n-1-d}\). As a result, the required number of advice bits is at least \(\log |\mathcal {I} |\ge n-1-d.\)    \(\square \)

Fig. 2.

An instance \(I \in \mathcal {I} \) with \(k_{\mathrm {min}} =2\), \(k_{\mathrm {max}} =5\), \(n=13\) and \(k=5\). Requests are presented from top to bottom and from left to right. The highlighted optimal solution contains the last request of each block. Each block is adjacent to the previous one.
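The construction lends itself to a short program. The following Python sketch builds one member of \(\mathcal {I} \) from a sequence of binary choices; the request representation as (start, end) vertex pairs is our own, and the chosen instance merely shares its parameters with Fig. 2.

```python
def build_instance(choices, k_min, k_max):
    """An instance from Theorem 3's hard family. choices[i] = 1 means
    'extend the current block by a request one unit longer' (possible
    only below length k_max); 0 means 'start a new, adjacent block'."""
    requests = [(0, k_min)]                  # mandatory first request
    block_left, length = 0, k_min
    for c in choices:
        if c == 1 and length < k_max:
            length += 1                      # same block, one unit longer
        else:
            block_left = requests[-1][1]     # new block starts at the
            length = k_min                   # previous right endpoint
        requests.append((block_left, block_left + length))
    return requests

# An instance with the parameters of Fig. 2 (k_min = 2, k_max = 5,
# n = 13, k = 5 blocks); it fits into a path of length n * k_min = 26.
inst = build_instance([1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1], 2, 5)
assert len(inst) == 13 and inst[-1][1] <= 26
```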

Theorem 4

At least \(\ell /k_{\mathrm {min}}-1-\lfloor (\ell /k_{\mathrm {min}}-1)/(k_{\mathrm {max}}- k_{\mathrm {min}} + 1)\rfloor \) advice bits are needed for optimality.

Proof

The approach from the proof of Theorem 3 works here as well. In fact, we can use the exact same set \(\mathcal {I} \) of hard instances. To obtain a lower bound that now depends on \(\ell \), we are free to choose the number n of requests conveniently. Since we had \(\ell = nk_{\mathrm {min}} \) for all instances from \(\mathcal {I} \) before, we now set \(n := \ell /k_{\mathrm {min}} \). Using this to substitute n in Theorem 3 directly yields the desired result.    \(\square \)

3.2 No Advice

Having explored the boundaries on advice complexity for optimality, we now turn to the opposite end of the spectrum: What is the best strict competitive ratio achievable by a classic online algorithm operating without advice? First, we analyze the strict competitive ratio on only a subset of the requests, for which tighter lower and upper bounds on the request lengths might apply. In Lemma 1, we give a case-based upper bound for this scenario, and we extract a more general bound that encompasses all cases in Corollary 1. Although the advantage of considering only subsets of requests may not be obvious at first, this will prove to be useful for Theorem 6 in the next section. Additionally, we can directly derive from Lemma 1 a greedy algorithm, which is given in Theorem 5 together with a perfectly matching lower bound.

Lemma 1

Let \(I \) be some input sequence for lwDPA and let \(I ' \subseteq I \) be a subset of its requests. Moreover, let \(k_1 \) and \(k_2 \) be lower and upper bounds, respectively, on the lengths of all requests from \(I '\), with \(k_{\mathrm {min}} \le k_1 \le k_2 \le k_{\mathrm {max}} \). Then, the strict competitive ratio of Greedy—the algorithm that accepts every request that is not blocked yet—on the requests from \(I '\) is

$$\begin{aligned} c \le {\left\{ \begin{array}{ll} k_2 &{} \text {if } k_1 = 1,\\ (2k_2 +k_1)/(k_1 +2) &{} \text {if } k_1 \ge 2 \text { and } k_1 + 1< k_2 < k_1 ^2/4,\\ 2k_2/ k_1 &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$

Proof

The worst case for the greedy algorithm occurs in different situations, depending on the lengths \(k_1 \) and \(k_2 \). We distinguish several cases depending on the length of an accepted request in comparison to the length of the subpath which the blocked requests could have covered otherwise. All relevant cases are depicted in Fig. 3.

If \(k_1 = 1\), only the cases depicted in Fig. 3a and 3b can occur. In the case corresponding to Fig. 3a, Greedy accepts one request of length \(k_1 = 1\), which later blocks an optimal request of length \(k_2 \), leading to a strict competitive ratio of \(k_2 \). In the case corresponding to Fig. 3b, Greedy accepts a request of length \(k_1 + 2\), which blocks three requests appearing later: One request of length \(k_1 = 1\) and two requests of length \(k_2 \), yielding a strict competitive ratio of \((2k_2 + 1)/3 \le k_2 \). The strict competitive ratio in the case \(k_1 = 1\) is thus  \(c \le k_2 \).

If \(k_1 \ge 2\), we further distinguish two subcases. If \(k_2 \le k_1 + 1\), the case depicted in Fig. 3b cannot occur. We can quickly convince ourselves that, under this assumption, the worst case is realized in the situation of Fig. 3c, where Greedy accepts one request of length \(k_1 \) that blocks two requests of length \(k_2 \). This gives us a strict competitive ratio of \(c\le 2k_2/k_1 \) in this case.

If \(k_1 \ge 2\) and \(k_2 \ge k_1 + 2\), the worst case occurs when requests are presented according to Fig. 3b or 3c. (Requests may also be presented according to Fig. 3a; however, requests arriving according to Fig. 3c always ensure an even worse performance of the greedy algorithm.) We have to distinguish two further subcases: \(k_2 \ge k_1 ^2/4\) and \(k_2 < k_1 ^2/4\). In the former case, the worst-case strict competitive ratio is \(2k_2/k_1 \), as

$$\begin{aligned} \frac{2k_2}{k_1} = \frac{2k_2 (k_1 + 2)}{k_1 (k_1 + 2)} = \frac{2k_2 k_1 + 4k_2}{k_1 (k_1 + 2)} \ge \frac{2k_2 k_1 + k_1 ^2}{k_1 (k_1 + 2)} = \frac{2k_2 + k_1}{k_1 + 2}. \end{aligned}$$

Analogously, in the case that \(k_2 < k_1 ^2/4\), the strict competitive ratio obtained in the worst case is \((2k_2 + k_1)/(k_1 + 2)\).    \(\square \)

Fig. 3.

The highlighted requests are those accepted by Greedy. The three figures show which requests can be presented afterwards such that the worst case for Greedy occurs, depending on \(k_1 \) and \(k_2 \). In case 1, a request of length \(k_1 \) is accepted, blocking an optimal request of length \(k_2 \). This situation can always occur, independently of the values \(k_1 \) and \(k_2 \). Case 2 is only possible if \(k_2 \ge k_1 + 2\). Here, an accepted request of length \(k_1 + 2\) can block one request of length \(k_1 \) and two requests of length \(k_2 \). In case 3, two requests of length \(k_2 \) are blocked by one accepted request of length \(k_1 \). This can only happen if \(k_1 \ge 2\).
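For later reference, the case distinction of Lemma 1 condenses into a few lines of Python; this helper (with naming of our own) simply mirrors the three cases.

```python
def greedy_bound(k1, k2):
    """Upper bound on Greedy's strict competitive ratio for requests
    with lengths between k1 and k2, following Lemma 1's three cases."""
    if k1 == 1:
        return k2
    if k1 >= 2 and k1 + 1 < k2 < k1 * k1 / 4:
        return (2 * k2 + k1) / (k1 + 2)
    return 2 * k2 / k1

assert greedy_bound(1, 7) == 7                      # first case
assert greedy_bound(10, 20) == (2 * 20 + 10) / 12   # second case
assert greedy_bound(3, 4) == 8 / 3                  # 'otherwise' case
```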

Corollary 1

Let \(I \) be some input sequence for lwDPA and let \(I ' \subseteq I \) be a subset of its requests. Moreover, let \(k_1 \) and \(k_2 \) be lower and upper bounds, respectively, on the lengths of all requests from \(I '\), with \(k_{\mathrm {min}} \le k_1 \le k_2 \le k_{\mathrm {max}} \). Then, the strict competitive ratio of Greedy on the requests from \(I '\) is bounded from above by \(2k_2/k_1 + 1\).

Proof

We only have to consider the three different cases from Lemma 1 and make sure the strict competitive ratio is bounded from above by \(2k_2/k_1 + 1\) in each case. It is obvious that the claim holds for the first and the last case. For the second case, we make the following transformation.

$$\begin{aligned} \frac{2k_2 + k_1}{k_1 + 2} < \frac{2k_2 + k_1}{k_1} = \frac{2k_2}{k_1} + 1. \end{aligned}$$

   \(\square \)

Theorem 5

There is an online algorithm without advice, namely Greedy, that has a strict competitive ratio of \(c \), and no online algorithm without advice can have a strict competitive ratio better than \(c \), with

$$ c = {\left\{ \begin{array}{ll} k_{\mathrm {max}} &{} \text {if } k_{\mathrm {min}} = 1,\\ (2k_{\mathrm {max}} +k_{\mathrm {min}})/(k_{\mathrm {min}} +2) &{} \text {if } k_{\mathrm {min}} \ge 2 \text { and } k_{\mathrm {min}} + 1< k_{\mathrm {max}} < k_{\mathrm {min}} ^2/4,\\ 2k_{\mathrm {max}}/ k_{\mathrm {min}} &{} \text {otherwise}. \end{array}\right. } $$

Proof

The first part of the statement follows directly from Lemma 1 by setting \(I ' := I \), \(k_1:= k_{\mathrm {min}} \), and \(k_2:= k_{\mathrm {max}} \).

For the second part, we argue as follows. Since we cannot use any advice bits, we only have one deterministic algorithm available. Thus, let \(A\) be some arbitrary but fixed deterministic algorithm for lwDPA. We show that for any combination of \(k_{\mathrm {min}} \) and \(k_{\mathrm {max}} \), there is a hard set of two input sequences for \(A\) that leads to the strict competitive ratio mentioned in the theorem. The hard input sequences we consider are derived from the worst-case scenarios depicted in Fig. 3. If \(k_{\mathrm {min}} = 1\), we consider the input consisting of only one request of length 1 and the input consisting of this request and a second one of length \(k_{\mathrm {max}} \) underneath it as depicted in Fig. 3a. The algorithm \(A\) can either accept the first request, leading to a strict competitive ratio of \(k_{\mathrm {max}} \) on the second of these two inputs, or reject it, leading to an unbounded strict competitive ratio (a gain of 0 against an optimal gain of 1) on the first of the two inputs. Overall, \(A\) cannot have a strict competitive ratio better than \(k_{\mathrm {max}} \).

Similarly, we can construct hard input sequences for the case that \(k_{\mathrm {min}} \ge 2\) and \(k_{\mathrm {min}} + 1< k_{\mathrm {max}} < k_{\mathrm {min}} ^2/4\) according to the scenario depicted in Fig. 3b, and those for the remaining case according to Fig. 3c.

In all three cases, we choose the length of the path just long enough for all requests to fit in; this is \(\ell :=k_{\mathrm {max}} \) in the first case, \(\ell := 2k_{\mathrm {max}} + k_{\mathrm {min}} \) in the second, and \(\ell := 2k_{\mathrm {max}} + k_{\mathrm {min}}- 2\) in the third case.    \(\square \)
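The algorithm itself is as simple as its name suggests. The following is a minimal Python sketch, with requests as (start, end) vertex pairs and edge i standing for the unit edge between vertices i and i+1 (a representation of our own choosing).

```python
def greedy(requests):
    """Greedy from Theorem 5: accept every request all of whose
    edges are still free."""
    used, accepted, gain = set(), [], 0
    for (s, t) in requests:
        edges = set(range(s, t))
        if edges.isdisjoint(used):
            used |= edges
            accepted.append((s, t))
            gain += t - s
    return accepted, gain

# The worst case of Fig. 3a with k_min = 1, k_max = 5: Greedy takes
# the length-1 request, blocking the optimal length-5 request.
accepted, gain = greedy([(2, 3), (0, 5)])
assert gain == 1            # Opt gains 5, so the ratio is k_max
```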

3.3 Trade-Off

Having covered the two extreme cases, classic online algorithms and optimal online algorithms with advice, we now strive to find the best possible behavior in between, balancing advice complexity against strict competitiveness. Theorem 6 proves an upper bound by describing a class-based greedy algorithm. It is a great choice when only few advice bits are given, but less so with increasing advice complexity. Corollary 2 shows that it cannot undercut a strict competitive ratio logarithmic in \(\varphi :=k_{\mathrm {max}}/k_{\mathrm {min}} \), even when reading arbitrarily many advice bits. This problem is remedied by Theorem 7, which incorporates the approach of Theorem 6, but also makes efficient use of ample available advice by performing a sort of preprocessing.

Finally, we prove two lower bounds. Theorem 8 is evidently a counterpart to Theorem 6, with the same order of magnitude for a constant number of advice bits. Theorem 9, on the other hand, states a very versatile, albeit quite technical result. The two Corollaries 5 and 6 bring it in a more applicable form. The first one improves on the lower bound of Theorem 8 in the case of a strict competitive ratio logarithmic in \(\varphi \); the second one yields the missing lower bound for even larger advice complexities, complementing Theorem 7 and Corollary 3.

Theorem 6

There is an online algorithm reading \(b \) bits of advice that has a strict competitive ratio of \(c\le 2^{b} \cdot \left( 2\cdot \root 2^b \of {\varphi } +1 \right) \).

Proof

The following is a variation of the randomized strategy Classify and Randomly Select for DPA, first proposed by Awerbuch et al. [1], adapted to lwDPA in our advice setting.

We divide the requests of the input sequence \(I \) into \(2^b \) classes according to their lengths. Let class \(C _i\) contain all requests of length \(k _i\) with

$$ k_{\mathrm {min}} \cdot \root 2^b \of {\varphi ^{i-1}} \le k _i < k_{\mathrm {min}} \cdot \root 2^b \of {\varphi ^i} $$

for all i with \(1\le i\le 2^b \), and let \(C _{2^b}\) additionally contain all requests of length \(k_{\mathrm {max}} \). This way, each request is contained in exactly one class since \(k_{\mathrm {min}} \cdot \root 2^b \of {\varphi ^0} = k_{\mathrm {min}} \) and \(k_{\mathrm {min}} \cdot \root 2^b \of {\varphi ^{2^b}} = k_{\mathrm {min}} \cdot \varphi = k_{\mathrm {max}} \). Furthermore, let \(A _i\) be the deterministic algorithm that accepts all requests from class \(C _i\) greedily and rejects all others. The oracle specifies the algorithm \(A ^* \in \left\{ A _1,A _2,\dots , A _{2^b} \right\} \) that has the largest gain on \(I \) among all algorithms \(A _i\). Let us denote this maximum gain by \(a ^* := \mathrm {gain}(A ^*(I))\). To specify \(A ^*\), the oracle uses \(b \) advice bits.

Now let us consider some arbitrary but fixed optimal solution \(\textsc {Opt} (I)\). Plugging \(k_1:= k_{\mathrm {min}} \cdot \root 2^b \of {\varphi ^{i-1}}\) and \(k_2:= k_{\mathrm {min}} \cdot \root 2^b \of {\varphi ^{i}}\) into Corollary 1, we can deduce that for all i, \(\textsc {Opt} (I)\) has a gain of at most

$$ a ^* \cdot \left( \frac{2k_2}{k_1} +1 \right) = a ^* \cdot \left( \frac{2k_{\mathrm {min}} \cdot \root 2^b \of {\varphi ^{i}}}{k_{\mathrm {min}} \cdot \root 2^b \of {\varphi ^{i-1}}} +1\right) = a ^* \cdot \left( 2 \cdot \root 2^b \of {\varphi } +1 \right) $$

on the requests from \(C _i\).

As the overall gain of \(\textsc {Opt} (I)\) can be at most the sum of the gains on all \(2^{b}\) classes, we obtain

$$ \mathrm {gain}(\textsc {Opt} (I)) \le 2^b \cdot a ^* \cdot \left( 2 \cdot \root 2^b \of {\varphi } +1 \right) , $$

and hence the strict competitive ratio of \(A ^*\) on \(I \) is

$$ \frac{\mathrm {gain}(\textsc {Opt} (I))}{\mathrm {gain}(A ^*(I))} \le \frac{2^b \cdot a ^* \cdot \left( 2 \cdot \root 2^b \of {\varphi } +1 \right) }{a ^*} = 2^b \cdot \left( 2 \cdot \root 2^b \of {\varphi } +1 \right) . $$

   \(\square \)
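A sketch of this class-based strategy in Python follows; the class boundaries mirror the definition above, while the request representation, the handling of floating-point boundary cases, and all identifiers are simplifications of our own.

```python
def class_index(length, k_min, k_max, b):
    """1-based class of a request length: class i covers lengths in
    [k_min * phi^((i-1)/2^b), k_min * phi^(i/2^b)), and k_max itself
    is put into the last class."""
    phi = k_max / k_min
    for i in range(1, 2 ** b + 1):
        if length < k_min * phi ** (i / 2 ** b):
            return i
    return 2 ** b

def run_class_greedy(requests, i, k_min, k_max, b):
    """Deterministic algorithm A_i: greedily accept exactly the
    requests of class i and reject everything else."""
    used, gain = set(), 0
    for (s, t) in requests:
        if class_index(t - s, k_min, k_max, b) == i:
            edges = set(range(s, t))
            if edges.isdisjoint(used):
                used |= edges
                gain += t - s
    return gain

def oracle_best_class(requests, k_min, k_max, b):
    """Oracle side: the b advice bits simply name the class i whose
    algorithm A_i gains the most on this instance."""
    return max(range(1, 2 ** b + 1),
               key=lambda i: run_class_greedy(requests, i, k_min, k_max, b))
```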

As mentioned above, the algorithm of Theorem 6 is most efficient for little advice. For no advice at all, \(b = 0\), we get \(c \le 2\varphi + 1\); this coincides with the upper bound that we get from Corollary 1 for the adviceless greedy algorithm on the complete input sequence \(I \) by plugging in \(k_1:= k_{\mathrm {min}} \) and \(k_2:= k_{\mathrm {max}} \). For a single bit of advice, \(b = 1\), the bound reads \(c \le 2\cdot (2 \sqrt{\varphi } + 1)\). This is a marked improvement for large \(\varphi \), whose influence is dampened significantly. We pay with an additional factor of \(2^b=2\), however. This problem exacerbates quickly with increasing advice complexity. In fact, there is a threshold beyond which more advice even worsens the bound. The standard method for finding extrema reveals it to be \(b=\log \left( \ln \varphi / (1+W(1/(2\mathrm {e})))\right) \), where W(x) is the Lambert W function. For this value, Theorem 6 yields its lowest upper bound on the strict competitive ratio: There is a strictly \((4.41\cdot \log {\varphi })\)-competitive online algorithm that reads \(\log \log \varphi -0.73\) advice bits. The following corollary provides a more tangible bound in the same order of magnitude.
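Before stating it, let us check the claimed optimum numerically. The following self-contained sketch, with a small Newton iteration standing in for a library implementation of W, reproduces the stated constants:

```python
from math import exp, log, log2

def lambert_w(x, iterations=50):
    """Principal branch of w * e^w = x via Newton's method (x > 0)."""
    w = 0.5
    for _ in range(iterations):
        w -= (w * exp(w) - x) / (exp(w) * (w + 1))
    return w

def best_tradeoff(phi):
    """Theorem 6's bound 2^b (2 phi^(1/2^b) + 1) at the optimal b."""
    b = log2(log(phi) / (1 + lambert_w(1 / (2 * exp(1)))))
    return b, 2 ** b * (2 * phi ** (2.0 ** -b) + 1)

b, c = best_tradeoff(2.0 ** 64)           # phi = 2^64, log2(phi) = 64
assert abs(c / 64 - 4.41) < 0.01          # c is about 4.41 * log2(phi)
assert abs(b - (log2(64) - 0.73)) < 0.02  # b is about loglog(phi) - 0.73
```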

Corollary 2

There is a strictly \((5\log {\varphi })\)-competitive online algorithm that reads \(\log \log \varphi \) advice bits.

Suppose now that a rather large amount of advice is available, \(\varphi \) or \(\ell /2\) bits for example. Only a tiny fraction of advice can be put to good use in the approach of Theorem 6; most of it would be wasted on the algorithm. Theorem 7 shows how to use this otherwise superfluous advice in a kind of preprocessing. The underlying idea is that the oracle first gives information about transition points, similarly to the optimal algorithm of Theorem 2. If the advice does not suffice to communicate their locations exactly, we reveal them only approximately in a way that is useful nonetheless. In essence, this first bulk of advice allows us to predict all long optimal requests with reasonable accuracy and ensure that enough of them will be accepted. The remaining advice can be spent in the spirit of Theorem 6 on the remaining requests, with a virtual upper bound of \(2s-1\) taking the place of \(k_{\mathrm {max}} \).

Theorem 7

For any \(s\ge 1\) and any \(b\ge 0\), there is an online algorithm that is strictly \(\max \{6-8/(s+1),2^{b}(2\cdot \root 2^b \of {(2s-1)/k_{\mathrm {min}}}+1)\}\)-competitive and uses \((\ell -1)/s+b\) advice bits.

Fig. 4.

An example with path length \(\ell =16\). The requests are those of the fixed optimal solution on the basis of which the advice is given. There are \(\ell -1=15\) inner vertices, gathered into groups of size \(s\). The top part of the figure gives the advice string that results from groups of size \(s=3\). Underneath, the advice string for \(s=1\) is given. From this string, we can easily deduce the advice string for general \(s\ge 1\) by conjoining the bits groupwise.

Proof

As usual, the oracle fixes an optimal solution; let \(r_1,\ldots ,r_k\) be its requests. Any request \(r_i\) is a subpath with two leaves plus a set \(R_i\) of inner vertices. This set is empty if and only if the request has length 1. Since two requests \(r_i\ne r_j\) of a solution cannot overlap, any two sets \(R_i\ne R_j\) are separated by at least one vertex; we will use this below. The given path of length \(\ell \) contains exactly \(\ell -1\) inner vertices. We gather these into groups \(G_i\), each of size \(s\), as illustrated in Fig. 4 for \(s=3\). For every group \(G_i\), the oracle communicates to the algorithm by a single bit \(b_i\) whether the group’s vertices are enclosed within a single optimal request, that is, \(b_i=1\, \Longleftrightarrow \, \exists j\in \{1,\ldots ,k\}:G_i\subseteq R_j\). This requires one bit per group, hence \((\ell -1)/s\) bits in total.

How does this information help us? We begin with the case \(s=1\). Here, the advice reveals the exact location of all vertices of all \(R_i\). Remember that \(R_i\) is the set of inner vertices of \(r_i\). Since these sets are pairwise separated, we can directly infer the exact position of all requests \(r_i\) containing any inner vertex, that is, all optimal requests of length 2 or greater. Hence, the algorithm can reserve these subpaths and accept the fitting requests whenever they appear. The requests of length 1, in contrast, need to be accepted exactly if the reservations allow for it. Indeed, either they are blocked by longer optimal requests and can therefore not be part of the chosen solution; or they can and must be included in the solution by its optimality. The algorithm will thus reconstruct an optimal solution, using only \(\ell -1\) advice bits.

Now to the general case \(s\ge 1\). The advice bit of a group \(G_i\) is still 1 if and only if there is a request \(r_j\) that encloses all its vertices, that is, \(G_i\subseteq R_j\). If this is true for two neighboring groups, they must be contained within the same request, that is, \(G_i,G_{i+1}\subseteq R_j\), since the \(R_j\) are pairwise disjoint. The groups are also disjoint, so \(|G_i|+|G_{i+1}|\le |R_j|\). By the same reasoning, we see that any substring \(B\) of the advice string that consists of \({|B|}\) consecutive ones stands for \({|B|}\) groups whose \({|B|}\cdot s\) vertices are inner vertices of a single optimal request \(r_j\); hence \({|B|}\cdot s\le |R_j|\).

Denote the set of all maximal substrings of ones by \(\mathcal {B}\). For any \(B\in \mathcal {B}\), we can state an upper limit of \(|R_j|\le {|B|}s+2(s-1)\). Otherwise, the substring would not be maximal: The upper bound is attained if and only if \(R_j\) overlaps the vertices of the groups corresponding to \(B\) by \(s-1\) vertices on each side. If the overlap were any larger on either side, then the respective neighboring bit of the substring \(B\) would also have been set to 1 instead of 0, contradicting the maximality.

This insight about \(R_j\) translates directly to \(r_j\), the optimal request associated with \(B\); just add the two leaves: For the upper limit, \(r_j\) covers exactly \({|B|}+2\) groups, those of \(B\) plus one additional on each side; its length is therefore \(({|B|}+2)s-1\). The lower limit is attained if \(r_j\) only contains the two leaves in addition to the inner vertices of the \({|B|}\) groups. In this case, the length of \(r_j\) is \({|B|}s+1\). Thus, the optimal solution’s gain associated with block \(B\) stays in the range \(\left[ {|B|}s+1,({|B|}+2)s-1\right] .\)

Now, we might hope for our algorithm to accept at least one request within the described boundaries for each substring; this would result in a lower bound of \(\sum _{B\in \mathcal {B}}\left( {|B|}s+1\right) \le \mathrm {gain}'(\mathcal {A} (I)),\) where \(\mathrm {gain}'\) denotes the gain from requests that cover at least one group. Since there exists at least one fitting request for each block, namely \(r_j\), the algorithm could just wait for the first one. Due to the mentioned potential overlap on either side, it may happen, however, that an accepted request for one substring blocks all suited requests of the next substring. In fact, this is only possible for two neighboring substrings that are separated by a single 0. This might be the case for all substrings, however. So we forgo our unrealistic hopes on every second substring \(B\in \mathcal {B}\) and focus on the others. For those, we can now reserve subpaths of size \(({|B|}+2)s-1\), large enough to ensure an accepted request within the given range.

Among the two possible choices of alternate substrings of ones, the algorithm picks the one with the larger total number of ones. This way, the actual lower bound is cut down to no less than half of what we calculated above,

$$\begin{aligned} \frac{1}{2}\sum _{B\in \mathcal {B}}\left( {|B|}s+1\right) \le \mathrm {gain}'(\mathcal {A} (I)). \end{aligned}$$

For the optimal solution, we have \(\mathrm {gain}'(\textsc {Opt} (I))\le \sum _{B\in \mathcal {B}}\left( ({|B|}+2)s-1\right) \). The strict competitive ratio on the reserved subpaths can therefore be bounded by 6:

$$\begin{aligned} \frac{\mathrm {gain}'(\textsc {Opt} (I))}{\mathrm {gain}'(\mathcal {A} (I))} \le 2+2\,\frac{\sum _{B\in \mathcal {B}}\left( 2s-2\right) }{\sum _{B\in \mathcal {B}}\left( {|B|}s+1\right) } \le 2+2\,\frac{2s-2}{s+1} = 6-\frac{8}{s+1}, \end{aligned}$$

where the second inequality uses \({|B|}\ge 1\) for every \(B\in \mathcal {B}\).

We have not yet accounted for the gain on the unreserved subpaths. For this part, all optimal requests went undetected by any advice bit. Hence, they have a length of \(2s-1\) or less, as seen by setting \({|B|}=0\) in the range established above. The algorithm can thus greedily accept all requests with lengths up to \(2s-1\) and be strictly \((2s-1)\)-competitive on this remaining part. Employing the more sophisticated greedy approach of Theorem 6, we achieve a strict competitive ratio of \(2^{b}(2\cdot \root 2^b \of {(2s-1)/k_{\mathrm {min}}}+1)\) by granting \(b\) additional advice bits. This way, the algorithm uses \((\ell -1)/s+b\) advice bits overall and guarantees

$$\begin{aligned} \frac{\mathrm {gain}(\textsc {Opt} (I))}{\mathrm {gain}(\mathcal {A} (I))} \le \max \left\{ 6-\frac{8}{s+1},2^b\left( 2\cdot \root 2^b \of {(2s-1)/k_{\mathrm {min}}}+1\right) \right\} . \end{aligned}$$

   \(\square \)
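The first, preprocessing part of the advice is easy to generate once the optimal solution is fixed. A small Python sketch follows; the solution below is an example of our own (not the one of Fig. 4), and we assume that s divides \(\ell -1\).

```python
def group_advice(opt_requests, ell, s):
    """One bit per group of s inner vertices: the bit is 1 iff the
    whole group lies strictly inside one optimal request."""
    inner = [set(range(st + 1, en)) for (st, en) in opt_requests]
    bits = []
    for g in range((ell - 1) // s):
        group = set(range(g * s + 1, g * s + s + 1))
        bits.append(int(any(group <= R for R in inner)))
    return bits

# Path of length 16, groups of size s = 3, and an optimal solution
# consisting of the requests (0,5), (5,9), and (10,16):
assert group_advice([(0, 5), (5, 9), (10, 16)], 16, 3) == [1, 0, 0, 0, 1]
```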

The following corollary extracts a more accessible result from Theorem 7 that will later be complemented by Corollary 6.

Corollary 3

For any constant \(k \ge 6\), the number of advice bits sufficient to achieve a strict competitive ratio of \(c \le k \) is at most

$$ 2\cdot \frac{\ell - 1}{2^{k/ 5} \cdot k_{\mathrm {min}} + 1} + \log {\left( \frac{k}{5} \right) } \in \mathcal {O}\left( \frac{\ell }{k_{\mathrm {min}}} \right) . $$

Proof

We plug the values \(s:= (2^{k/5}\cdot k_{\mathrm {min}} + 1)/2\) and \(b:= \log {(k/5)}\) into Theorem 7 and obtain that there exists an algorithm with a strict competitive ratio of at most

$$\begin{aligned} c = \max {\left\{ 6- \frac{8}{s+ 1}, \tfrac{k}{5} \left( 2\cdot \root k/5 \of {2^{k/5}} + 1 \right) \right\} } \le \max {\left\{ 6, \tfrac{k}{5} \left( 2\cdot 2 + 1 \right) \right\} } = k \end{aligned}$$

and an advice complexity of

$$ \frac{(\ell - 1)\cdot 2}{2^{k/5} \cdot k_{\mathrm {min}} + 1} + \log {\left( \frac{k}{5} \right) } \in \mathcal {O}\left( \frac{\ell }{k_{\mathrm {min}}} \right) . $$

   \(\square \)
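The plug-in step is also easy to verify numerically. The following sketch treats \(s \) and \(b \) as real-valued, exactly as the corollary's parameter choice does; the sample values are our own.

```python
from math import log2

def theorem7_ratio(s, b, k_min):
    """The strict competitive ratio guaranteed by Theorem 7."""
    return max(6 - 8 / (s + 1),
               2 ** b * (2 * ((2 * s - 1) / k_min) ** (2.0 ** -b) + 1))

for k in (10, 15, 25):                   # sample values with k >= 6
    k_min = 2
    s = (2 ** (k / 5) * k_min + 1) / 2   # Corollary 3's choice of s
    b = log2(k / 5)                      # ... and of b
    assert theorem7_ratio(s, b, k_min) <= k + 1e-9
```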

The following lower bound is a good match for Theorem 6 in the case of a constant number of advice bits.

Theorem 8

No online algorithm reading at most \(b \) advice bits can have a strict competitive ratio better than \(\root 2^b \of {\varphi }\).

Proof

Let \(f(i):= k_{\mathrm {min}} \cdot \root 2^b \of {\varphi ^i}\) for all i with \(0 \le i \le 2^b \). Note that \(f(0)=k_{\mathrm {min}} \) and \(f(2^b)=\varphi \cdot k_{\mathrm {min}} =k_{\mathrm {max}} \). For any number \(b \) of advice bits, we consider the following set \(\mathcal {I} =\left\{ I _0,\ldots ,I _{2^b}\right\} \) of instances. Instance \(I _{2^b}\) presents \(2^b +1\) left-aligned requests \(r _0, \dots , r _{2^b}\), where \(r_i\) has length \(f(i)\), as illustrated in Fig. 5. Instance \(I _i \in \mathcal {I} \) presents only the first \(i+1\) of these requests, namely \(r _0, \dots ,r _i\).

Fig. 5.

The \(2^b+1\) requests  \(r _0,\dots ,r _{2^b}\) of \(I _{2^b}\in \mathcal {I} \).

With \(b \) advice bits, we only have \(2^b \) deterministic algorithms available, and thus at least two out of the \(2^b +1\) instances from \(\mathcal {I} \) must be processed by the same deterministic algorithm. Hence, let \(A\) denote a deterministic algorithm that processes at least two instances from \(\mathcal {I} \) and let \(I _i\) and \(I _j\) with \(i<j\) be two of the instances processed by \(A\). The gain of \(A \) on any instance \(I _s\) is

$$ \mathrm {gain}(A (I _s)) = {\left\{ \begin{array}{ll} f(t) &{} \text {if } A \text { accepts a request } r_t \text { with } t\le s,\\ 0 &{} \text {otherwise.} \end{array}\right. } $$

The optimal gain for any \(I _s\) is \(f(s)\), of course. The two instances \(I _i\) and \(I _j\) are identical for the first \(i+1\) requests; hence, \(A\) must treat them the same way up to this point. We distinguish two cases and show that \(A\) performs badly on at least one of the instances \(I _i\) or \(I _j\) in both cases:

First, if \(A \) does not accept a request \(r_t \in \{r_0,\ldots ,r_i\}\), then its gain on \(I _i\) is \(\mathrm {gain}(A (I _i)) = 0\) and the strict competitive ratio is unbounded. Second, if \(A \) does accept a request \(r_t \in \{r_0,\ldots ,r_i\}\), then its gain on \(I _j\) is \(\mathrm {gain}(A (I _j)) = f(t) \le f(i),\) which leads to a strict competitive ratio of

$$c = \frac{f(j)}{f(t)} \ge \frac{f(j)}{f(i)} \ge \frac{f(i+1)}{f(i)} = \frac{\root 2^b \of {\varphi ^{i+1}}\cdot k_{\mathrm {min}}}{\root 2^b \of {\varphi ^{i}}\cdot k_{\mathrm {min}}} = \root 2^b \of {\varphi }.$$

This means that no matter which algorithm \(A \) is used for two instances from \(\mathcal {I} \), on one of these two instances the strict competitive ratio cannot be better than \(\root 2^b \of {\varphi }\).    \(\square \)
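The instance family is equally easy to write down. A Python sketch follows, with lengths rounded to integers purely for illustration; the analysis above works with the exact values \(f(i)\).

```python
def nested_instances(k_min, phi, b):
    """Theorem 8's hard family: instance I_s presents the left-aligned
    requests r_0, ..., r_s, where r_i has length about f(i)."""
    f = [round(k_min * phi ** (i / 2 ** b)) for i in range(2 ** b + 1)]
    return [[(0, f[i]) for i in range(s + 1)] for s in range(2 ** b + 1)]

# k_min = 2, phi = 16, b = 2: five nested instances with request
# lengths 2, 4, 8, 16, 32; any deterministic algorithm that serves
# two of them loses a factor of at least 16^(1/4) = 2 on one of them.
family = nested_instances(2, 16, 2)
assert [t for (_, t) in family[-1]] == [2, 4, 8, 16, 32]
```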

The following proof of Theorem 9 adapts a method from Gebauer et al. [8] that has been used to analyze the unweighted and non-parametrized disjoint path allocation problem on paths. First, we need a result that is proven in the mentioned paper. It is a slight variant of a bound for the tail of the hypergeometric distribution.

Fact 1

(Corollary 1 from Gebauer et al. [8]). Let X be a discrete random variable with a hypergeometric distribution with parameters \(M\), \(N\), and \(n\), and let \(t\ge 0\). Then, for every \(M'\le M,\) we have

$$ \Pr \left( X\le n \cdot \frac{M'}{N} -tn \right) \le \mathrm {e}^{-2t^{2}n}\;. $$
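For intuition, the bound can be compared against the exact tail probability, which is cheap to compute for small parameters. A spot check with parameters of our own choosing:

```python
from math import comb, exp

def hyper_tail(N, M, n, k):
    """Exact Pr(X <= k) for X hypergeometric: N items, M marked,
    n drawn without replacement."""
    return sum(comb(M, j) * comb(N - M, n - j)
               for j in range(min(k, M, n) + 1)) / comb(N, n)

# Spot check of Fact 1 with N = 64, M = M' = 24, n = 32, t = 0.15;
# flooring the threshold only shrinks the left-hand side.
N, M, n, t = 64, 24, 32, 0.15
assert hyper_tail(N, M, n, int(n * M / N - t * n)) <= exp(-2 * t * t * n)
```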

We construct a set \(\mathcal {I} \) of instances which is hard for many deterministic algorithms at once and thereby show that many deterministic algorithms are necessary to achieve a good strict competitive ratio on all instances from \(\mathcal {I} \). For the time being, we choose three parameters \(x, y,\) and \(z \) with \(x,y,z \in \mathbb {N}_{\ge 1}\) and construct the set \(\mathcal {I} \) subject to those parameters to obtain a result as general as possible. The parameters are chosen in such a way that the following requirements are satisfied.

$$\begin{aligned}&x \le \log \varphi , \end{aligned}$$
(1)
$$\begin{aligned}&y \le \log {\left( {\ell }/{k_{\mathrm {min}}} \right) } - 2x, \quad \text { and} \end{aligned}$$
(2)
$$\begin{aligned}&z \le \sqrt{ {2^y}/{\ln x} }. \end{aligned}$$
(3)

The general result we obtain eventually, depending on the variables \(x \) and \(z \), is the following.

Theorem 9

Let \(x,z \in \mathbb {N}_{\ge 1}\). For any \(x \le \log \varphi \) and \(z \le \sqrt{\ell /(k_{\mathrm {min}} \cdot 4^x \cdot \ln x)}\), any online algorithm for lwDPA needs to read at least

$$ b = \frac{\ell }{k_{\mathrm {min}}} \cdot \frac{\log \mathrm {e}}{ z ^2 \cdot 4^{x}} - \log x $$

advice bits to obtain a strict competitive ratio of

$$ c \le \frac{x +2}{2 \cdot \left( 1+\frac{x}{z} \right) }. $$

Proving this will be the goal of our subsequent considerations. Afterwards, we will also see how to choose reasonable values for the parameters to get less general but more expressive statements. More specifically, we will derive two corollaries from Theorem 9, which will give us almost matching lower bounds to two of the upper bounds proven above. Before we prove Theorem 9, however, we will have to make the necessary arrangements.

Let us start with a description of how to construct the hard instance set \(\mathcal {I} \). In each instance, requests are presented in \(x +1\) phases, starting with phase 0. We divide the path into \(2^{x +y}\) segments of width \(2^x \cdot k_{\mathrm {min}} \) each, and all requests presented during the whole computation are always aligned with the left border of the respective segment. Phase 0 contains \(2^{x +y}\) requests of length \(2^0\cdot k_{\mathrm {min}} \) each, one in each segment. In each phase i with \(1\le i \le x \), we choose exactly half of the segments that contained requests in phase \(i-1\) and present a request of twice the size in those segments. Let us call a request good if it has no further requests underneath and bad otherwise. Figure 6 shows an example.

Fig. 6.

An example for \(\ell =32\), \(k_{\mathrm {min}} =1\), \(x=2\), and \(y=1\). The dashed lines indicate the segments. There are \(2^{x +y}=8\) segments of width \(2^{2}\cdot k_{\mathrm {min}} = 4\) each, and requests appear in \(x +1=3\) phases; 8 requests of length 1 in phase 0, then 4 of length 2 in phase 1, and 2 of length 4 in phase 2. Good requests are marked by thick lines.

Let us define \(h:= x + y \). We make the following observation.

Observation 1

For each i with \(0\le i \le x \), phase i contains \(2^{h-i}\) requests of length \(2^i \cdot k_{\mathrm {min}} \) each.

We have to check two points concerning the presentation of requests: First, whether the path is long enough to hold all segments, and second, whether all requests have lengths between \(k_{\mathrm {min}} \) and \(k_{\mathrm {max}} \). Our requirement (2) ensures the first point, as

$$ 2^{x +y} \cdot 2^x \cdot k_{\mathrm {min}} = 2^{2x +y} \cdot k_{\mathrm {min}} \le 2^{\log {\left( \frac{\ell }{k_{\mathrm {min}}} \right) }} \cdot k_{\mathrm {min}} = \ell . $$

Furthermore, according to Observation 1, all requests are between \(2^0 \cdot k_{\mathrm {min}} \) and \(2^x \cdot k_{\mathrm {min}} \) in length. So all requests are large enough, and requirement (1) ensures that no requests are larger than \(k_{\mathrm {max}} \), since

$$ 2^x \cdot k_{\mathrm {min}} \le 2^{\log \varphi } \cdot k_{\mathrm {min}} = \varphi \cdot k_{\mathrm {min}} = k_{\mathrm {max}}. $$

For all instances constructed in the way described before, the gain of their respective optimal solutions is the same, as stated in the following observation.

Observation 2

For each instance \(I \in \mathcal {I} \), the gain of the optimal solution is

$$ \mathrm {gain}(\textsc {Opt} (I)) = (x +2) \cdot k_{\mathrm {min}} \cdot 2^{h-1}. $$

Proof

The optimal solution contains exactly all good requests. For each phase, these are exactly half of the requests presented, except for the last phase, in which all requests are good and hence contained in the optimal solution. Thus, using Observation 1, the gain of the optimal solution on each instance \(I \) is

$$\begin{aligned} \mathrm {gain}(\textsc {Opt} (I)) = \left( \sum _{i=0}^{x-1} {\frac{2^{h-i} \cdot 2^i k_{\mathrm {min}}}{2}} \right) + 2^{h-x} \cdot 2^x k_{\mathrm {min}} = (x +2)k_{\mathrm {min}} \cdot 2^{h-1}. \end{aligned}$$

   \(\square \)
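As a quick sanity check, the closed form can be compared with a direct phase-by-phase summation; the function below is a sketch with naming of our own.

```python
def opt_gain(x, y, k_min):
    """Phase-by-phase gain of the good requests: half of each phase
    i < x is good, and the whole last phase x is good."""
    h = x + y
    total = sum((2 ** (h - i) // 2) * 2 ** i * k_min for i in range(x))
    return total + 2 ** (h - x) * 2 ** x * k_min

# The example of Fig. 6 (ell = 32, k_min = 1, x = 2, y = 1):
assert opt_gain(2, 1, 1) == (2 + 2) * 1 * 2 ** (3 - 1) == 16
```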

The set \(\mathcal {I} \) can be represented by a tree in which each vertex corresponds to a subset of instances. The depth of this tree is \(x \) (i.e., it has \(x +1\) levels); the root on level 0 represents all instances from \(\mathcal {I} \), and every vertex on level i represents the set of all instances with the same particular set of requests in phases \(0,\dots ,i\). Every leaf is located on level \(x \) and represents a single instance from \(\mathcal {I} \). The same tree representation of \(\mathcal {I} \) has also been used by Gebauer et al. [8]. However, it is worth mentioning that in our case, the vertices of the tree have a different number of children since we construct the instances from \(\mathcal {I} \) in a different way.

Consider some vertex \(v \) on level i and the corresponding set of instances \(\mathcal {I} _v \). Now consider any deterministic algorithm \(A \), given an arbitrary instance from \(\mathcal {I} _v \) as input. Then \(A \) is in the same state during the whole computation up to and including phase i, independently of the exact instance from \(\mathcal {I} _v \) that \(A \) is given, meaning that it sees and accepts the exact same requests up until the end of phase i.

From now on, let \(A \) be arbitrary but fixed. For a given vertex \(v \) on level i, let \(a(i) \) be the number of accepted requests of \(A \) on any instance from \(\mathcal {I} _v \) during phase i, and let \(g(i) \) be the gain of \(A \) achieved during phase i. Obviously, due to Observation 1,

$$\begin{aligned} g(i) = a(i) \cdot 2^i \cdot k_{\mathrm {min}}. \end{aligned}$$
(4)

Moreover, let us define \(\hat{g}(i) \) to be the gain that \(A \) achieves in all phases up to and including phase i and, in accordance with this definition, \(\hat{g}(-1) \) to be the total gain of requests accepted before the start of computation. Hence,

$$\begin{aligned} \hat{g}(i) := \sum _{j=0}^{i}{g(j)} \quad \text {and} \quad \hat{g}(-1) := 0. \end{aligned}$$
(5)

Now let us define the sets of requests \(B(i) \) and \(B^{+}(i) \) as follows. Let \(B(i) \) be the set of requests from phase i that are already blocked at the beginning of phase i. Let \(B^{+}(i) \) be the set of requests from phase i that are blocked at the beginning of phase \(i+1\), including those that were accepted during phase i and are now blocking themselves.

We call a phase i bad for \(A \) if \(B(i) \) contains at least

$$\begin{aligned} d_{i}:= \frac{1}{2^i} \cdot \left( \frac{\hat{g}(i-1)}{k_{\mathrm {min}}} - \frac{i}{z}\cdot 2^{h} \right) \end{aligned}$$
(6)

requests and good otherwise. Moreover, let us call a vertex \(v \) on level i bad for \(A \) if, when given an instance from \(\mathcal {I} _v \) as its input, phase i is bad for \(A \), and otherwise let us call \(v \) good for \(A \). Since each instance corresponds to a vertex on level \(x \), an instance is bad for \(A \) if and only if phase \(x \) is bad for \(A \).
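The threshold (6) translates directly into code. The following minimal sketch (the function and parameter names are ours, not from the paper) makes the base case \(d_0 = 0\) of the later induction immediately visible; all concrete values are illustrative.

```python
# Illustrative transcription of definition (6); ghat_prev stands for the
# accumulated gain \hat{g}(i-1) of the algorithm A.

def d(i, ghat_prev, k_min, h, z):
    # Threshold d_i: phase i is bad if at least d_i of its requests are
    # already blocked when the phase starts.
    return (ghat_prev / k_min - (i / z) * 2 ** h) / 2 ** i

# With ghat(-1) = 0 as in (5), phase 0 is always bad: d_0 = 0.
assert d(0, 0.0, k_min=1, h=10, z=4) == 0.0
```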

Observation 3

If at least \(d_{i+1} \) requests from \(B^{+}(i) \) are bad, then phase \(i+1\) is bad for \(A \).

Proof

As defined before, \(B^{+}(i) \) is the set of requests from phase i that are blocked at the beginning of phase \(i+1\). If at least \(d_{i+1} \) requests of those are bad, then at least \(d_{i+1} \) requests are presented in phase \(i+1\) that are already blocked at the beginning of phase \(i+1\); this matches the definition of a bad phase.    \(\square \)

Lemma 2

For each bad vertex, the fraction of bad vertices among its children is at least

$$ 1- \mathrm {e}^{- 2^y/z ^2 }. $$

Proof

Let \(v \) be a bad vertex on level i. Since \(v \) is bad, phase i must be bad for \(A \) when \(A\) is given any instance from \(\mathcal {I} _v \) as input. Thus, \(B(i) \) must contain at least \(d_{i} \) requests. As \(A\) accepts \(a(i) \) requests in phase i, the set \(B^{+}(i) \) contains at least \(d_{i} +a(i) \) requests. According to Observation 3, if at least \(d_{i+1} \) requests from \(B^{+}(i) \) are bad, phase \(i+1\) is bad. Hence, a sufficient condition for a child \(w \) of \(v \) to be bad is that at least \(d_{i+1} \) requests from \(B^{+}(i) \) are bad when \(A \) is given an instance from \(\mathcal {I} _w \) as input.

We thus arrive at the following scenario. There are \(N:= 2^{h-i}\) requests in phase i; out of these, \(M \ge M' := d_{i} + a(i) \) are blocked at the beginning of phase \(i+1\) and hence contained in \(B^{+}(i) \). Each child \(w \) of \(v \) corresponds to the set of instances in which the same set of \(n:= 2^{h-i-1}\) requests from phase i are bad. This in turn corresponds to the following urn model: we have an urn containing \(N\) balls, out of which \(M\ge M'\) are black; we draw \(n\) balls without replacement, and we are interested in the probability that the number of black balls drawn is at least \(d_{i+1} \).

Let \(X_{i} \) be the random variable counting the number of bad requests from \(B^{+}(i) \) in this scenario. \(X_{i} \) has a hypergeometric distribution with parameters \(M\), \(N\), and \(n\), and we are interested in \(\Pr \left( X_{i} \ge d_{i+1} \right) \).

We can bound the probability

$$\begin{aligned} \Pr \left( X_{i} \le n\cdot \frac{M'}{N} - t n\right)&= \Pr \left( X_{i} \le \frac{d_{i} + a(i)- t\cdot 2^{h-i}}{2} \right) \end{aligned}$$

from above for any \(t\ge 0\) by using Fact 1. We obtain

$$\begin{aligned} \Pr \left( X_{i} \le \frac{d_{i} + a(i)- t\cdot 2^{h-i}}{2} \right) \le \mathrm {e}^{-2 t^2 \cdot n} = \mathrm {e}^{-2 t^2 \cdot 2^{h-i-1}} = \mathrm {e}^{-t^2 \cdot 2^{h-i}}. \end{aligned}$$
(7)

Using (5) and (6), we get

$$\begin{aligned} d_{i+1} = \frac{1}{2^{i+1}} \left( \frac{\hat{g}(i)}{k_{\mathrm {min}}} - \frac{i+1}{z}\cdot 2^{h} \right) = \frac{1}{2^{i+1}} \left( \frac{\hat{g}(i-1)}{k_{\mathrm {min}}} + \frac{g(i)}{k_{\mathrm {min}}} - \frac{i}{z}\cdot 2^{h} - \frac{1}{z}\cdot 2^{h} \right) . \end{aligned}$$

Doing further transformations using (4) and again (6), we obtain

$$\begin{aligned} d_{i+1} = \frac{ \frac{1}{2^{i}} \cdot \left( \frac{\hat{g}(i-1)}{k_{\mathrm {min}}} - \frac{i}{z}\cdot 2^{h} + 2^{i} \cdot a(i) - \frac{1}{z}\cdot 2^{h} \right) }{2} = \frac{d_{i} + a(i) - \frac{1}{z}\cdot 2^{h-i} }{2}. \end{aligned}$$

Combining this result with (7) and choosing \(t:= 1/z \) yields

$$\begin{aligned} \Pr \left( X_{i} \ge d_{i+1} \right) \ge 1-\Pr \left( X_{i} \le \frac{d_{i} + a(i)- \tfrac{1}{z}\cdot 2^{h-i}}{2} \right) \ge 1- \mathrm {e}^{- 2^{h-i}/ z ^2}, \end{aligned}$$

and since \(h-i \ge y \) for all i with \(0\le i \le x \), we finally obtain that for all such i,

$$\begin{aligned} \Pr \left( X_{i} \ge d_{i+1} \right) \ge 1- \mathrm {e}^{- 2^{y} /z ^2}. \end{aligned}$$

   \(\square \)
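Both key calculations in this proof are easy to probe numerically. Assuming that Fact 1 is the standard Hoeffding-type tail bound for the hypergeometric distribution (which is what (7) asserts), the following sketch checks (7) against exact hypergeometric probabilities and verifies the derived recurrence for \(d_{i+1} \) on a few arbitrary values; all concrete parameters are illustrative.

```python
# Numeric probes for the proof of Lemma 2 (illustrative parameters only).
from math import exp, floor

from scipy.stats import hypergeom

h, i, z = 10, 3, 4.0
N = 2 ** (h - i)       # requests in phase i (urn size)
n = 2 ** (h - i - 1)   # balls drawn (bad requests per child)
M = 40                 # black balls, i.e., blocked requests (illustrative)

# (7): Pr(X <= n*M/N - t*n) <= exp(-2 t^2 n).
for t in [0.05, 0.1, 0.2]:
    k = floor(n * M / N - t * n)
    tail = hypergeom(N, M, n).cdf(k)   # scipy order: population, successes, draws
    assert tail <= exp(-2 * t ** 2 * n)

# Recurrence d_{i+1} = (d_i + a(i) - 2^(h-i)/z) / 2, derived from (4)-(6);
# here k_min = 1 and ghat_prev plays the role of \hat{g}(i-1).
def d(j, ghat):
    return (ghat - (j / z) * 2 ** h) / 2 ** j

for a in [0, 3, 17]:
    ghat_prev = 123.0
    ghat = ghat_prev + a * 2 ** i      # (4) and (5) with k_min = 1
    assert abs(d(i + 1, ghat) - (d(i, ghat_prev) + a - 2 ** (h - i) / z) / 2) < 1e-9

print("Tail bound (7) and the recurrence for d_{i+1} hold on these samples.")
```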

From the fraction of bad vertices among the children of bad vertices, we can now draw conclusions concerning the fraction of bad vertices on each level.

Lemma 3

The fraction of bad vertices on level i is at least

$$ \left( 1- \mathrm {e}^{- 2^y/z ^2 } \right) ^i. $$

Proof

We prove the claim by induction on i. For the base case, we consider the root, the only vertex on level 0. The root is bad if phase 0 is bad for \(A \), and phase 0 is bad for \(A \) if at least \(d_{0} \) requests from phase 0 are already blocked at the beginning of phase 0. This is obviously the case as \(d_{0} = 0\) according to (5) and (6). Hence, the root is a bad vertex and thus the fraction of bad vertices on level 0 is 1, which is at least

$$\begin{aligned} \left( 1- \mathrm {e}^{- 2^y/z ^2 } \right) ^0 = 1. \end{aligned}$$

For the induction step, let us assume that the claim holds for level \(i-1\). Let us define \(v(i) \) to be the number of all vertices on level i and \(b(i) \) the number of bad vertices among those. Moreover, let \(c(i) \) denote the number of children of each vertex on level i. As a final definition, let \(f(i) \) be the fraction of bad vertices among the children of each bad vertex on level i. Lemma 2 states that \(f(i) \ge 1- \mathrm {e}^{- 2^{y} / z ^2}\) for all i, and by the induction hypothesis it holds that \({b(i-1)}/{v(i-1)} \ge ( 1- \mathrm {e}^{- 2^y/z ^2 } )^{i-1}\). Using Lemma 2, the induction hypothesis, and the fact that \(b(i) \ge b(i-1) \cdot c(i-1) \cdot f(i-1) \) and \(v(i) = v(i-1) \cdot c(i-1) \), the fraction \(b(i)/v(i) \) of bad vertices on level i can be bounded from below by

$$\begin{aligned} \frac{b(i-1) \cdot f(i-1)}{v(i-1)} \ge \left( 1- \mathrm {e}^{- 2^y/z ^2 } \right) ^{i-1} \cdot \left( 1- \mathrm {e}^{- 2^{y} / z ^2} \right) = \left( 1- \mathrm {e}^{- 2^y/z ^2 } \right) ^{i}. \end{aligned}$$

   \(\square \)

From this we can conclude that for each deterministic online algorithm, a large number of instances must be bad.

Corollary 4

For any deterministic online algorithm \(A \), the fraction of instances of \(\mathcal {I} \) which are bad for \(A \) is at least \(( 1- \mathrm {e}^{- 2^y/z ^2 } )^{x}\).

Proof

This follows directly from Lemma 3 and the fact that each single instance from \(\mathcal {I} \) corresponds to a vertex on level \(x \).    \(\square \)
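To get a feeling for how strong this statement is, one can evaluate the bound for concrete parameters. The following sketch (with arbitrary sample values for \(x \), \(y \), and \(z \)) prints the guaranteed fraction of bad instances next to the Bernoulli-based upper bound \(x \cdot \mathrm {e}^{-2^y/z^2}\) on the good fraction that is used later in (8).

```python
# Evaluating the bound of Corollary 4 (illustrative parameters only).
from math import exp

def bad_fraction_lower_bound(x, y, z):
    # (1 - e^{-2^y / z^2})^x from Lemma 3 / Corollary 4.
    return (1.0 - exp(-(2 ** y) / z ** 2)) ** x

for x, y, z in [(4, 8, 4), (8, 12, 8), (16, 16, 8)]:
    bad = bad_fraction_lower_bound(x, y, z)
    good = x * exp(-(2 ** y) / z ** 2)   # Bernoulli estimate used in (8)
    print(f"x={x:2d}, y={y:2d}, z={z}: bad >= {bad:.8f}, good <= {good:.3e}")
```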

Now we want to show that any deterministic algorithm can only achieve a small gain on each bad instance.

Lemma 4

Let \(A\) be an arbitrary but fixed deterministic online algorithm for lwDPA, and let \(I \in \mathcal {I} \) be a bad instance for \(A \). Then the gain of \(A\) on \(I \) is

$$ \mathrm {gain}(A (I)) = \hat{g}(x) \le 2^h \cdot k_{\mathrm {min}} \cdot \left( 1+ \frac{x}{z} \right) . $$

Proof

According to the definition of bad vertices and instances, an instance (corresponding to a vertex on level \(x \)) is bad if there are at least \(d_{x} \) requests from phase \(x \) blocked at the beginning of phase \(x \). In this phase, \(A\) is presented \(2^{h- x}\) requests of length \(2^x \cdot k_{\mathrm {min}} \) due to Observation 1, of which it can accept at most \(2^{h- x} - d_{x} \). Hence, the gain of \(A\) in phase \(x \) is

$$\begin{aligned} g(x)&\le 2^x \cdot k_{\mathrm {min}} \cdot \left( 2^{h- x} - d_{x} \right) \\&= 2^x \cdot k_{\mathrm {min}} \cdot \left( \frac{2^{h}}{2^x} + \frac{1}{2^x} \cdot \left( \frac{x}{z}\cdot 2^{h} - \frac{\hat{g}(x-1)}{k_{\mathrm {min}}} \right) \right) \\&= 2^h \cdot k_{\mathrm {min}} \cdot \left( 1 + \frac{x}{z} \right) - \hat{g}(x- 1). \end{aligned}$$

For the total gain at the end of \(A\) ’s computation, we obtain

$$ \hat{g}(x) = g(x) + \hat{g}(x-1) \le 2^h \cdot k_{\mathrm {min}} \cdot \left( 1+\frac{x}{z} \right) . $$

   \(\square \)
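The algebra in this proof can be double-checked mechanically: whatever gain \(\hat{g}(x-1) \) the algorithm has accumulated before the last phase cancels out, leaving the bound \(2^h \cdot k_{\mathrm {min}} \cdot (1 + x/z)\). A minimal sketch with illustrative parameters:

```python
# Sanity check for Lemma 4 (illustrative parameters): ghat(x-1) cancels,
# leaving the total gain bound 2^h * k_min * (1 + x/z).

x, h, k_min, z = 3, 10, 4.0, 5.0

def d_x(ghat_prev):
    # Definition (6) at i = x.
    return (ghat_prev / k_min - (x / z) * 2 ** h) / 2 ** x

for ghat_prev in [0.0, 100.0, 2500.0]:
    g_x = 2 ** x * k_min * (2 ** (h - x) - d_x(ghat_prev))  # phase-x bound
    assert abs(g_x + ghat_prev - 2 ** h * k_min * (1 + x / z)) < 1e-9

print("Total gain is bounded by", 2 ** h * k_min * (1 + x / z))
```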

Now we can bound the strict competitive ratio of \(A\) on any bad instance from below.

Lemma 5

The strict competitive ratio of any deterministic online algorithm on any bad instance from \(\mathcal {I} \) is

$$ c \ge \frac{x +2}{2 \cdot \left( 1+\frac{x}{z} \right) }. $$

Proof

Let \(A\) be some arbitrary but fixed deterministic online algorithm for lwDPA and let \(I \in \mathcal {I} \) be a bad instance for \(A\). Using Lemma 4 and Observation 2, we obtain

$$\begin{aligned} c = \frac{\mathrm {gain}(\textsc {Opt} (I))}{\mathrm {gain}(A (I))} \ge \frac{(x +2) \cdot k_{\mathrm {min}} \cdot 2^{h-1}}{2^h \cdot k_{\mathrm {min}} \cdot \left( 1+\frac{x}{z} \right) } = \frac{x +2}{2 \cdot \left( 1+\frac{x}{z} \right) }. \end{aligned}$$

   \(\square \)

From this we can conclude that any online algorithm for lwDPA needs to read a large number of advice bits to obtain a good strict competitive ratio. Hence, we are finally ready to prove Theorem 9.

Proof

(of Theorem 9). We interpret an online algorithm that reads \(b \) advice bits in the usual way as a set of deterministic algorithms \(\{ A _1, \dots , A _{2^b}\}\). According to Lemma 5, each such algorithm \(A _i\) can have a strict competitive ratio of at most \(c \) only on instances which are good, and according to Corollary 4, for each \(A _i\) the fraction of good instances from \(\mathcal {I} \) is at most

$$\begin{aligned} 1- \left( 1- \mathrm {e}^{-2^y/ z ^2} \right) ^x \le \frac{x}{\mathrm {e}^{2^y/ z ^2}}, \end{aligned}$$
(8)

where we used Bernoulli’s Inequality. Requirement (3) from the very beginning makes sure that (8) always yields a reasonable result, i.e., that the right-hand side of (8) never becomes larger than 1, since

$$ \frac{x}{\mathrm {e}^{2^y/ z ^2}} \le \frac{x}{\mathrm {e}^{2^y/ (2^y/\ln x)}} = \frac{x}{\mathrm {e}^{\ln x}} = 1. $$

Thus, the number of deterministic algorithms that are necessary to guarantee a strict competitive ratio of \(c \) on all instances from \(\mathcal {I} \) is at least \((\mathrm {e}^{2^y/ z ^2})/x\). Hence, the number of necessary advice bits is

$$\begin{aligned} b \ge \log \left( \frac{\mathrm {e}^{2^y/ z ^2}}{x} \right) = \frac{2^y}{ z ^2}\cdot \log \mathrm {e}- \log x. \end{aligned}$$

Now we are almost done. As our final step, we observe that the variable \(y \) that appears in the formula for \(b \) does not have any influence on \(c \). Since the number of necessary advice bits grows with \(y \), we can set it to the maximum possible value according to requirement (2). Thus, we set \(y:= \log {\left( {\ell }/{k_{\mathrm {min}}} \right) } - 2x = \log {\left( {\ell }/{\left( k_{\mathrm {min}} \cdot 2^{2x}\right) } \right) }\), and as a result, we finally obtain that the number of necessary advice bits is

$$ b \ge \frac{2^{\log {\left( \frac{\ell }{k_{\mathrm {min}} \cdot 4^{x}}\right) }}}{ z ^2}\cdot \log \mathrm {e}- \log x = \frac{\ell }{k_{\mathrm {min}}} \cdot \frac{\log \mathrm {e}}{ z ^2 \cdot 4^{x}} - \log x. $$

Plugging the value that we just fixed for \(y \) into requirement (3) yields the range of \(z \le \sqrt{\ell /(k_{\mathrm {min}} \cdot 4^x \cdot \ln x)}\) given in the theorem.    \(\square \)
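The trade-off of Theorem 9 is easiest to explore by treating \(x \) and \(z \) as free knobs. The following small calculator (our own helper, not from the paper; the sample values are illustrative) checks the theorem's admissibility requirements and evaluates both the advice lower bound and the corresponding ratio bound.

```python
# A small calculator for the trade-off of Theorem 9 (a sketch under the
# theorem's requirements; helper and sample values are ours).
from math import e, log, log2, sqrt

def theorem9_bounds(ell, k_min, phi, x, z):
    assert 1 < x <= log2(phi), "requirement (1) violated (x > 1 keeps ln x > 0)"
    assert z <= sqrt(ell / (k_min * 4 ** x * log(x))), "z out of range"
    b_lower = (ell / k_min) * log2(e) / (z ** 2 * 4 ** x) - log2(x)
    c_lower = (x + 2) / (2 * (1 + x / z))
    return b_lower, c_lower

b, c = theorem9_bounds(ell=2 ** 30, k_min=1, phi=2 ** 16, x=4, z=4)
print(f"at least {b:.0f} advice bits for a ratio better than {c}")
```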

By plugging suitable values for \(x \) and \(z \) into this theorem, we can derive two results that nicely complement Corollaries 2 and 3, respectively. The first one states that many advice bits are necessary to obtain a strict competitive ratio better than \(1/4\cdot \log \varphi \).

Corollary 5

Let \(\delta \) be an arbitrary constant with \(0< \delta < 1/2\) and let \(\varepsilon > 2\delta \). Any online algorithm for lwDPA that achieves a strict competitive ratio of \(\delta /2 \cdot \log \varphi \) needs to read \(\omega \left( \varphi ^{1 - \varepsilon } \cdot {\ell }/{k_{\mathrm {max}}} \right) \) advice bits when \(\varphi \in \omega (1)\).

Proof

Let \(x:= \delta \log \left( k_{\mathrm {max}}/k_{\mathrm {min}} \right) = \delta \log \varphi \) and \(z:= \log ^2 \varphi \). Then, \(4^x = 2^{2x} = 2^{2\delta \log \varphi } = \varphi ^{2\delta }\). We obtain

$$\begin{aligned} \frac{\frac{\ell }{k_{\mathrm {min}}}}{z ^2 \cdot 4^x} = \frac{\frac{\ell }{k_{\mathrm {max}}} \cdot \frac{k_{\mathrm {max}}}{k_{\mathrm {min}}}}{\log ^4 \varphi \cdot \varphi ^{2\delta }} = \frac{\ell }{k_{\mathrm {max}}} \cdot \frac{\varphi }{\log ^4 \varphi \cdot \varphi ^{2\delta }} = \frac{\ell }{k_{\mathrm {max}}} \cdot \frac{\varphi ^{1 - 2\delta }}{\log ^4 {\varphi }}. \end{aligned}$$

The number of advice bits necessary to achieve a strict competitive ratio of \(c \) is

$$\begin{aligned} b \ge \frac{\ell }{k_{\mathrm {min}}} \cdot \frac{\log \mathrm {e}}{ z ^2 \cdot 4^{x}} - \log x = \frac{\varphi ^{1 - 2\delta }}{\log ^4 {\varphi }} \cdot \frac{\ell }{k_{\mathrm {max}}} \cdot \log \mathrm {e}- \log \delta - \log \log \varphi . \end{aligned}$$

For any constant \(\varepsilon > 2\delta \), we have \(({\varphi ^{1 - 2\delta }}/{\log ^4 {\varphi }}) \in \omega \left( \varphi ^{1 - \varepsilon } \right) \) if \(\varphi \in \omega (1)\). Therefore, the number of advice bits necessary to obtain a strict competitive ratio of \(c \) is in \(\omega \left( \varphi ^{1 - \varepsilon } \cdot {\ell }/{k_{\mathrm {max}}} \right) \), for any constant \(\varepsilon > 2\delta \). The value of \(c \) obtained by plugging in the values for \(x \) and \(z \) as chosen above is

$$\begin{aligned} c \ge \frac{x + 2}{2(1 + \frac{x}{z})} \ge \frac{\delta \log \varphi + 1}{2(1 + \frac{\delta }{\log {\varphi }})} = \frac{(\delta \log \varphi + 1 )\log \varphi }{2(\log \varphi + \delta )} = \frac{\delta \log \varphi }{2} \cdot \frac{\log \varphi + \frac{1}{\delta }}{\log \varphi + \delta }. \end{aligned}$$

Using this together with \(\delta < 1/\delta \), we can bound \(c \) from below by \((\delta \log \varphi )/2\).    \(\square \)
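To see the claimed growth concretely, one can evaluate the leading term \(\varphi ^{1-2\delta }/\log ^4 \varphi \) of the bound for increasing \(\varphi \); the choice \(\delta = 1/4\) below is an arbitrary sample, and the output is an illustration rather than a proof.

```python
# Growth of the advice lower bound from Corollary 5, per unit of ell/k_max,
# for the sample choice delta = 1/4 (illustrative only).
from math import e, log2

delta = 0.25
for log_phi in [10, 20, 40, 80]:
    leading = 2.0 ** ((1 - 2 * delta) * log_phi) / log_phi ** 4 * log2(e)
    print(f"log phi = {log_phi:2d}: bits per (ell/k_max) ~ {leading:.3e}")
```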

Combining this with Corollary 2, we observe a surprising fact. While \(\log \log \varphi \) advice bits are sufficient to obtain a strict competitive ratio of \(5\log \varphi \), we need \(\omega \left( \varphi ^{1 - \varepsilon } \cdot {\ell }/{k_{\mathrm {max}}} \right) \) advice bits to decrease the strict competitive ratio by only another constant factor. The next corollary complements Corollary 3, stating that many advice bits are necessary to obtain a constant strict competitive ratio. The two bounds match up to a constant factor.

Corollary 6

Let \(k_{\mathrm {min}} \in o(\ell )\). For any constant \(k \le (\log \varphi + 2)/4\), the number of advice bits necessary to achieve a strict competitive ratio of \(c \le k \) is at least

$$ \frac{\ell }{k_{\mathrm {min}}} \cdot \frac{\log \mathrm {e}}{(4k- 2)^2 \cdot 4^{4k- 2} } - \log {\left( 4k- 2 \right) } \in \varOmega \left( \frac{\ell }{k_{\mathrm {min}}} \right) . $$

Proof

We set \(x:= z:= 4k- 2\) and plug these values into Theorem 9. To be able to apply this theorem, it must hold that \(x \le \log \varphi \) and \(z \le \sqrt{\ell /(k_{\mathrm {min}} \cdot 4^x \cdot \ln x)}\). The first requirement is trivially fulfilled as \(x = 4k- 2 \le 4\cdot (\log \varphi + 2)/4 - 2 = \log \varphi \). The second one is also satisfied since \(k_{\mathrm {min}} \in o(\ell )\) and hence \(\sqrt{\ell /(k_{\mathrm {min}} \cdot 4^x \cdot \ln x)} \in \omega (1)\), while \(z \in \mathcal {O}(1)\). Due to Theorem 9, any algorithm needs to read at least

$$ b \ge \frac{\ell }{k_{\mathrm {min}}} \cdot \frac{\log \mathrm {e}}{(4k- 2)^2 \cdot 4^{4k- 2} } - \log {\left( 4k- 2 \right) } \in \varOmega \left( \frac{\ell }{k_{\mathrm {min}}} \right) $$

advice bits to achieve a strict competitive ratio of

$$ c \le \frac{x + 2}{2\cdot (1 + \frac{x}{z})} = \frac{4k}{2\cdot (1 + 1)} = k. $$

   \(\square \)
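The constant hidden in the \(\varOmega (\ell /k_{\mathrm {min}})\) bound shrinks rapidly with the target ratio \(k \). The following sketch (our own, with illustrative values of \(k \)) computes the coefficient of \(\ell /k_{\mathrm {min}} \) in the bound.

```python
# Coefficient of ell/k_min in the advice lower bound of Corollary 6
# (ignoring the additive -log(4k - 2) term).
from math import e, log2

def coefficient(k):
    x = 4 * k - 2      # the choice x = z = 4k - 2 from the proof
    return log2(e) / (x ** 2 * 4 ** x)

for k in [1, 2, 3]:
    print(f"k = {k}: advice bits >= {coefficient(k):.3e} * ell/k_min - log(4k - 2)")
```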

Thus, combining Corollaries 3 and 6, we obtain that \(\varTheta (\ell /k_{\mathrm {min}})\) advice bits are necessary and sufficient to achieve any constant strict competitive ratio.

Fig. 7. A schematic plot of the results proven in this paper.

4 Conclusion

In this paper, we thoroughly studied the length-weighted disjoint path allocation problem on paths, parametrized by the given lower and upper bounds \(k_{\mathrm {min}} \) and \(k_{\mathrm {max}} \) on the request lengths.

We examined the trade-off between the number of advice bits an online algorithm reads and the strict competitive ratio it can achieve. The strict competitive ratios we considered cover the entire possible range, from optimality to the worst-case strict competitive ratio achievable when no advice is given at all. We described several algorithms for lwDPA and gave nearly or perfectly matching lower bounds in most cases. Our results imply a very interesting threshold behavior of the lwDPA problem.

As depicted schematically in Fig. 7, we showed the following. When an online algorithm for lwDPA is not given any advice at all, the strict competitive ratio cannot be better than \(2\varphi + 1\), where \(\varphi =k_{\mathrm {max}}/ k_{\mathrm {min}} \), and a simple greedy approach achieves this ratio. With any constant number \(b \) of advice bits, the achievable strict competitive ratio lies between \(\root 2^b \of {\varphi }\) and \(2^b \cdot (2\cdot \root 2^b \of {\varphi } + 1)\); these bounds match up to a constant factor.

When the algorithm is allowed to read \(\log \log \varphi \) advice bits, a comparatively good strict competitive ratio of \(5\log \varphi \) can already be obtained. However, to decrease it further by only a constant factor to \(\delta /4 \cdot \log \varphi \) for any \(\delta < 1\), we showed that \(\ell /k_{\mathrm {min}} \cdot \varphi ^{-\varepsilon }\) advice bits are necessary, for any \(\varepsilon > \delta \). We proved that \(\varTheta (\ell /k_{\mathrm {min}})\) advice bits are necessary and sufficient to achieve any constant strict competitive ratio \(c \) with \(6 \le c \le (\log \varphi + 2)/4\).

Finally, in order to be optimal, n or \((\ell - 1)/k_{\mathrm {min}} \cdot (1 + 2\log k_{\mathrm {min}})\) advice bits are sufficient; the number of advice bits necessary to be optimal is at least \(\gamma \cdot (n - 1)\) or \(\gamma \cdot (\ell /k_{\mathrm {min}}- 1)\) for some constant \(\gamma \) that depends on both \(k_{\mathrm {max}} \) and \(k_{\mathrm {min}} \), but is always at least 1/2 and approaches 1 with growing difference between \(k_{\mathrm {min}} \) and \(k_{\mathrm {max}} \). Thus, there is only a factor of \(\mathcal {O}(\log k_{\mathrm {min}})\) between the lower and upper bounds for optimality in \(\ell \) and a constant factor between those in n; moreover, in the unparametrized case, the bounds in \(\ell \) are perfectly matching and those in n match up to an additive constant of 1.