
1 Introduction

A slab in d-dimensional space \(\mathbb {R}^d\) is the set of points enclosed by two parallel hyperplanes. A double-slab is a pair of parallel slabs, that is, all four hyperplanes bounding the two slabs are parallel. We consider the problem of finding an optimal double-slab that covers a given set P of n points in \(\mathbb {R}^d\), for any fixed dimension \(d \geqslant 3\). We consider two optimization problems. In the first problem, which we call the minimum-width double-slab problem, we wish to minimize the width of the resulting double-slab, where the width of a double-slab is defined as the maximum width of its two slabs. In the second problem, which we call the widest empty slab problem, we wish to maximize the gap of the double-slab, which is defined as the distance between the two slabs.

The two optimization problems concerning double-slabs extend and/or generalize several fundamental geometric problems, which have been extensively studied mostly in \(\mathbb {R}^2\) or \(\mathbb {R}^3\), and sometimes in higher dimensions.

The width of a point set P in \(\mathbb {R}^d\) is defined as the minimum width of a slab that covers P. The width is considered one of the most fundamental extent measures describing a point set, together with the diameter and the radius. The width of a set of n points in the plane can be computed easily in optimal \(O(n \log n)\) time [29]. If the convex hull of P is already given, this can be improved to O(n) time using rotating calipers [31]. For \(d=3\), Houle and Toussaint [23] presented an \(O(n^2)\)-time algorithm, and the first subquadratic-time algorithm was given by Chazelle et al. [13]. The currently best algorithm by Agarwal and Sharir runs in \(O(n^{3/2+\epsilon })\) expected time [2]. In higher dimensions, Chan [9] discussed a simple formulation that reduces the width problem to searching a \((d+1)\)-dimensional convex polytope, resulting in an \(O(n^{\lceil d/2 \rceil })\)-time exact algorithm for \(d \geqslant 4\). Efficient approximation schemes are also known [1, 9].

By definition, computing the width is equivalent to fitting a line for \(d = 2\) (a plane for \(d=3\), or in general a hyperplane) to the given point set P. In particular, for \(d=2\), this problem is also known as the line center problem. For \(k\geqslant 1\), the k-line center problem asks to find a set of k lines such that the maximum distance from a point in P to its nearest line is minimized. Like the k-center problem, this problem is known to be NP-hard when k is part of the input, even in the plane \(\mathbb {R}^2\) [28]. In spite of its fundamental position in theory and applications, little is known about exact algorithms for the k-line center problem. Even for \(d=2\), efficient algorithms are known only for \(k\leqslant 2\): algorithms with running time \(O(n^2 \log ^2 n)\) were found in the 1990s [20, 25] and there has been no improvement since then. For \(d \geqslant 3\), we are not aware of any nontrivial exact algorithm in the literature that computes a k-line center for \(k \geqslant 2\). Agarwal et al. [5] presented an efficient approximation algorithm for any \(k\geqslant 1\) and \(d\geqslant 2\). Recently, some results on constrained variants of the k-line center problem in the plane \(\mathbb {R}^2\) have been published: Bae [6] showed that a parallel 2-line center, in which the two lines are restricted to be parallel, can be computed in \(O(n^2)\) time. Das et al. [17] presented an approximation algorithm for orthogonal line centers, and Chung et al. [15] studied a variant of parallel k-line centers for \(k\geqslant 2\) that takes the gaps between the induced clusters into account.

The generalization to higher dimensions is the k-hyperplane center problem. Given a set P of n points in \(\mathbb {R}^d\), we want to find k hyperplanes that minimize the maximum distance from a point in P to its nearest hyperplane. This is equivalent to finding k slabs of minimum width that cover P. This problem has been studied in the context of the projective clustering problem and, except for the variants in \(\mathbb {R}^2\) mentioned above, only approximation algorithms are known (see e.g. [4] and the references therein). As discussed in [4] and [5], an exact algorithm with running time \(n^{O(dk)}\) can be easily obtained for both the k-line center and the k-hyperplane center problems. To our best knowledge, no better algorithms are known, not even for small constants \(k \geqslant 2\) and \(d\geqslant 3\).

Our first problem, namely the minimum-width double-slab problem, can be seen as a constrained variant of the 2-hyperplane center problem, in which the two hyperplane centers must be parallel, while it extends and generalizes the fundamental geometric problems mentioned above. We give the first exact algorithm for this problem.

Covering P by a double-slab induces two clusters on P. In some applications, such as in obnoxious facility location [16], it is considered more important to achieve a maximum possible separation between the resulting clusters of P. This motivates us to find a double-slab cover of P with the maximum possible gap, or a widest empty slab through P. In the planar case \(d=2\), this problem is well known as the widest empty corridor problem. Houle and Maciel [22] presented the first \(O(n^2)\)-time algorithm for the widest empty corridor problem. Many variants of the problem have been studied since, including corridors containing at most k input points [10, 24, 30], dynamic maintenance of corridors [24, 30], and more. Díaz-Báñez et al. [18] studied the widest empty slab problem in \(\mathbb {R}^3\), and presented an \(O(n^3)\)-time algorithm. We are not aware of any known algorithm for the widest empty slab problem in \(\mathbb {R}^d\) for any fixed dimension \(d\geqslant 4\). We present the first exact algorithm.

We summarize our results as follows:

  1. (1)

    We solve the minimum-width double-slab problem in \(O(n^d)\) time for \(d=3, 4\), and in \(O(n^{d+1})\) time for any fixed dimension \(d \geqslant 5\).

  2. (2)

    We solve the widest empty slab problem in \(O(n^d \log n)\) time for any fixed dimension \(d \geqslant 4\).

We characterize combinatorial properties of optimal solutions to both problems, and find the optimal solution by efficiently enumerating all candidates. We make heavy use of geometric duality, mapping points in \(\mathbb {R}^d\) to non-vertical hyperplanes in \(\mathbb {R}^d\), and of known algorithms and data structures for hyperplanes and their arrangements in high dimensions.

Due to the page limit, most proofs are omitted and can be found in the full version.

2 Preliminaries

Let \(d\geqslant 2\) be an arbitrary fixed dimension. We consider the d-dimensional Euclidean space \(\mathbb {R}^d\) with the d coordinate axes, called the \(x_1, x_2, \ldots , x_d\)-axes. We consider the \(x_d\)-axis as the vertical direction. We also treat \(\mathbb {R}^d\) as a vector space equipped with the standard inner product and the induced Euclidean norm \(\Vert \cdot \Vert \). Hence, for any two points \(p, q\in \mathbb {R}^d\), the length of segment pq is \(\Vert p-q\Vert \). For any subset \(A \subset \mathbb {R}^d\), let \(\textrm{aff}(A)\) be the affine hull of A, that is, the intersection of all affine subspaces containing A.

For any affine subspace A of \(\mathbb {R}^d\), we denote by \(A^\perp \) the orthogonal complement of A in \(\mathbb {R}^d\), the vector subspace consisting of all vectors in \(\mathbb {R}^d\) that are orthogonal to every vector \(a - a_0\) for \(a, a_0 \in A\). Note that \(\dim (A) + \dim (A^\perp ) = d\). Let \(\pi _A :\mathbb {R}^d\rightarrow A\) denote the orthogonal projection onto A. In particular, if \(A = \mathbb {R}^{d-1} = \{x_d = 0\}\), then we simply write \(\pi = \pi _{\mathbb {R}^{d-1}}\). The projection \(\pi \) drops the \(x_d\)-coordinate, so that for any \((a_1, \ldots , a_d) \in \mathbb {R}^d\), we have \(\pi (a_1, \ldots , a_{d-1}, a_d) = (a_1, \ldots , a_{d-1}) \in \mathbb {R}^{d-1}\).

Hyperplanes and orientations. We call a k-flat vertical if it is parallel to the \(x_d\)-axis. A hyperplane in \(\mathbb {R}^d\) is a \((d-1)\)-flat. Any non-vertical hyperplane \(h \subset \mathbb {R}^d\) can be seen as the graph of a \((d-1)\)-variate linear function \(h :\mathbb {R}^{d-1} \rightarrow \mathbb {R}\): \(h :x_d = u_1x_1 + u_2x_2 + \cdots + u_{d-1}x_{d-1} + b\), for some \(u = (u_1, u_2, \ldots , u_{d-1}) \in \mathbb {R}^{d-1}\) and \(b\in \mathbb {R}\). We call u the orientation of h, and b its displacement. Any non-vertical hyperplane h can be uniquely determined by an orientation u and a displacement b in this way, and we have the following.

Observation 1

Let h be a non-vertical hyperplane in \(\mathbb {R}^d\) with orientation \(u \in \mathbb {R}^{d-1}\). Then, the vector \((u, -1) \in \mathbb {R}^d\) is a normal of h.

In particular, hyperplanes with equal orientation are parallel. We can thus identify the space of possible orientations of non-vertical hyperplanes with \(\mathbb {R}^{d-1}\). For any orientation \(u \in \mathbb {R}^{d-1}\), let \(\theta (u)\) be the angle between the vector \((u, -1) \in \mathbb {R}^d\) and the negative direction of the \(x_d\)-axis. For any two parallel hyperplanes \(h_1\) and \(h_2\), let \(w(h_1, h_2)\) denote the distance between \(h_1\) and \(h_2\). We observe that \(w(h_1, h_2) = |b_1 - b_2| \cdot \cos (\theta (u)) = \frac{|b_1 - b_2|}{\Vert (u, -1)\Vert }\), where u denotes their common orientation and \(b_i\) the displacement of \(h_i\), for \(i=1,2\).
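
To make the width formula concrete, the following small Python sketch (not part of the paper's algorithms; the helper name slab_width and the use of NumPy are our own) computes \(w(h_1, h_2)\) directly from the common orientation u and the two displacements.

```python
import numpy as np

def slab_width(u, b1, b2):
    """Width of the slab bounded by the parallel non-vertical hyperplanes
    x_d = <u, x'> + b1 and x_d = <u, x'> + b2, i.e. |b1 - b2| / ||(u, -1)||."""
    u = np.asarray(u, dtype=float)
    return abs(b1 - b2) / np.linalg.norm(np.append(u, -1.0))

# Example in R^3 with orientation u = (1, 2): the hyperplanes
# x_3 = x_1 + 2 x_2 and x_3 = x_1 + 2 x_2 + 6 are at distance 6 / sqrt(6).
print(slab_width([1.0, 2.0], 0.0, 6.0))   # 2.449...
```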

A slab S is the closure of the region between two parallel hyperplanes \(h_1\) and \(h_2\), denoted by \(S = (h_1, h_2)\). The width of S, denoted by w(S), is \(w(h_1, h_2)\). A double-slab \(D = (S_1, S_2)\) is the union of two disjoint parallel slabs \(S_1\) and \(S_2\). The width of D, denoted by w(D), is \(\max \{w(S_1), w(S_2)\}\). We say that a slab or double-slab is vertical if its defining hyperplanes are vertical. For any non-vertical slab or double-slab, its orientation is the orientation of its defining hyperplanes.

Duality. We recall the classic point-to-hyperplane duality transform: for any point \(p = (a_1, a_2, \ldots , a_d) \in \mathbb {R}^d\), the dual transformation maps p into its dual hyperplane \(p^\star :x_d = a_1x_1 + \cdots + a_{d-1}x_{d-1} - a_d\). Conversely, any non-vertical hyperplane \(h \subset \mathbb {R}^d\) is mapped to a point \(h^\star \in \mathbb {R}^d\). It is well known that the duality transform \(a \mapsto a^\star \) preserves the point-hyperplane incidence relation and the vertical order among points and hyperplanes [8].

Observation 2

Let \(S=(h_1,h_2)\) be a non-vertical slab with orientation u.

  • The segment \(h_1^\star h_2^\star \) is vertical, that is, it is parallel to the \(x_d\)-axis in \(\mathbb {R}^d\).

  • It holds that \(w(S) = \Vert h_1^\star - h_2^\star \Vert \cos (\theta (u))\).

  • The first \(d-1\) coordinates of \(h_1^\star \) and \(h_2^\star \) are equal to those of u, that is, \(\pi (h_1^\star ) = \pi (h_2^\star ) = u\).

  • For any point \(p\in \mathbb {R}^d\), \(p\in S\) if and only if the segment \(h_1^\star h_2^\star \) intersects \(p^\star \).

Therefore, there is a one-to-one correspondence between non-vertical slabs and vertical segments under the duality transform.
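
The duality transform and the last item of Observation 2 are easy to check numerically. The sketch below (our own helper names, assuming the duality defined above and NumPy) maps a point to the coefficients of its dual hyperplane, maps a non-vertical hyperplane to its dual point, and tests membership in a slab via the vertical dual segment.

```python
import numpy as np

def dual_of_point(p):
    """p = (a_1, ..., a_d) -> coefficients (a_1, ..., a_{d-1}, -a_d) of the dual
    hyperplane p* : x_d = a_1 x_1 + ... + a_{d-1} x_{d-1} - a_d."""
    p = np.asarray(p, dtype=float)
    return np.append(p[:-1], -p[-1])

def dual_of_hyperplane(u, b):
    """Non-vertical hyperplane with orientation u and displacement b -> dual point (u, -b)."""
    return np.append(np.asarray(u, dtype=float), -float(b))

def point_in_slab_dual(p, u, b1, b2, eps=1e-12):
    """p lies in the closed slab with orientation u and displacements b1, b2 iff the
    vertical dual segment h1*h2* meets the dual hyperplane p* (Observation 2)."""
    coeff = dual_of_point(p)
    height_of_p_star_at_u = coeff[:-1] @ np.asarray(u, dtype=float) + coeff[-1]
    lo, hi = sorted((dual_of_hyperplane(u, b1)[-1], dual_of_hyperplane(u, b2)[-1]))
    return lo - eps <= height_of_p_star_at_u <= hi + eps

# Quick check in R^3 for the slab between x_3 = x_1 + x_2 and x_3 = x_1 + x_2 + 2:
u = np.array([1.0, 1.0])
print(point_in_slab_dual([0.0, 0.0, 1.0], u, 0.0, 2.0))   # True  (displacement 1)
print(point_in_slab_dual([0.0, 0.0, 5.0], u, 0.0, 2.0))   # False (displacement 5)
```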

Arrangement of Hyperplanes. Let H be a set of n hyperplanes in \(\mathbb {R}^d\). Consider the arrangement \(\mathcal {A}(H)\) of these n hyperplanes. We introduce some algorithms and data structures for hyperplane arrangements that we will be using.

The arrangement \(\mathcal {A}(H)\) consists of \(O(n^d)\) faces for \(d\geqslant 2\) [21]. The upper and lower envelopes of H, denoted by \(\mathcal {U}(H)\) and \(\mathcal {L}(H)\), correspond to the convex hull of points that are dual to H, so their complexity is bounded by \(O(n^{\lfloor d/2 \rfloor })\). It is well known that \(\mathcal {A}(H)\) can be computed in \(O(n^d)\) time [21] and the envelopes \(\mathcal {U}(H)\) and \(\mathcal {L}(H)\) can be computed in \(O(n \log n + n^{\lfloor d/2 \rfloor })\) time, using any optimal convex hull algorithm, such as the one by Chazelle [12].

The zone \(Z(h, H)\) of another hyperplane h in the arrangement \(\mathcal {A}(H)\) is the set of all cells of \(\mathcal {A}(H)\) intersected by h, together with their incident faces. Edelsbrunner et al. [19] showed that the complexity of the zone \(Z(h, H)\) is \(\varTheta (n^{d-1})\).

Lemma 1

(de Berg et al. [7]). For any fixed \(d\geqslant 2\), the zone of a hyperplane h in an arrangement \(\mathcal {A}(H)\) of n hyperplanes H can be computed in \(O(n^{d-1} + n\log n)\) time, without computing the whole arrangement \(\mathcal {A}\).

In our algorithms, it is often required to test the feasibility of a candidate slab. The following query structure for point location in the arrangement will be used for our purpose.

Lemma 2

(Chazelle [11] and Matoušek [26]). A set H of n hyperplanes in \(\mathbb {R}^d\) can be preprocessed in \(O(n^d/\log ^d n)\) time into a data structure of size \(O(n^d/\log ^d n)\) that answers the following query in \(O(\log n)\) time: Given a query point \(q \in \mathbb {R}^d\), locate the face in \(\mathcal {A}(H)\) that contains q and count the number of hyperplanes in H above q.

By the duality transform, this data structure can answer the half-space counting query [3]. Thus, given a set P of n points in \(\mathbb {R}^d\), by using the query structure in Lemma 2 for the set of n hyperplanes dual to points in P, we can count the number of points in P contained in a query slab \(S=(h_1, h_2)\) in \(O(\log n)\) time, according to Observation 2.
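
For experimentation on small inputs, these counting and emptiness queries can be mimicked by a naive O(n)-per-query routine that works directly with displacements in primal space. The sketch below (our own stand-in, not the \(O(\log n)\) structure of Lemma 2) is the test that the later code sketches use in place of that data structure.

```python
import numpy as np

def slab_count(P, u, b1, b2, eps=1e-12):
    """Number of points of P inside the closed slab with orientation u and
    boundary displacements b1, b2 (naive O(n) stand-in for Lemma 2)."""
    P = np.asarray(P, dtype=float)
    u = np.asarray(u, dtype=float)
    disp = P[:, -1] - P[:, :-1] @ u    # displacement of the parallel hyperplane through each point
    lo, hi = min(b1, b2), max(b1, b2)
    return int(np.count_nonzero((disp >= lo - eps) & (disp <= hi + eps)))

def slab_is_empty(P, u, b1, b2, eps=1e-12):
    """True iff no point of P lies strictly between the two bounding hyperplanes."""
    P = np.asarray(P, dtype=float)
    disp = P[:, -1] - P[:, :-1] @ np.asarray(u, dtype=float)
    lo, hi = min(b1, b2), max(b1, b2)
    return not bool(np.any((disp > lo + eps) & (disp < hi - eps)))
```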

General Position Assumption. In the following, P will be a set of n input points in \(\mathbb {R}^d\). For simplicity, we will assume that P is in general position, meaning that no hyperplane in \(\mathbb {R}^d\) contains more than d points of P or, equivalently, that any \(d+1\) points in P are affinely independent. Hence, for any subset \(Q \subseteq P\) of \(k\leqslant d+1\) points, its affine hull \(\textrm{aff}(Q)\) is a \((k-1)\)-flat in \(\mathbb {R}^d\). This also implies that the orthogonal complement \((\textrm{aff}(Q))^\perp \) is of dimension \(d-k+1\). Throughout the paper, we often discuss the intersection of two orthogonal complements \(V = (\textrm{aff}(Q_1))^\perp \cap (\textrm{aff}(Q_2))^\perp \) for two nonempty disjoint subsets \(Q_1, Q_2 \subset P\) with \(k = |Q_1| + |Q_2| \leqslant d+1\). We shall call V the (linear) subspace orthogonal to both \(\textrm{aff}(Q_1)\) and \(\textrm{aff}(Q_2)\). The general position assumption also implies that the subspace V is always of dimension \(d-k+2\) for any two such subsets \(Q_1, Q_2 \subset P\).
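
Numerically, the subspace V is the null space of the matrix whose rows are the difference vectors within \(Q_1\) and within \(Q_2\). The following sketch (our own helper, assuming NumPy and general position) returns an orthonormal basis of V, whose number of rows equals \(d-k+2\).

```python
import numpy as np

def orthogonal_subspace(Q1, Q2, tol=1e-9):
    """Orthonormal basis of V = (aff(Q1))^perp ∩ (aff(Q2))^perp, i.e. of all directions
    orthogonal to every difference vector inside Q1 and inside Q2.
    Under general position, dim(V) = d - (|Q1| + |Q2|) + 2."""
    Q1 = np.asarray(Q1, dtype=float)
    Q2 = np.asarray(Q2, dtype=float)
    d = Q1.shape[1]
    rows = [q - Q1[0] for q in Q1[1:]] + [q - Q2[0] for q in Q2[1:]]
    if not rows:                       # |Q1| = |Q2| = 1: V is all of R^d
        return np.eye(d)
    A = np.vstack(rows)                # (k-2) x d matrix of difference vectors
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return Vt[rank:]                   # rows form an orthonormal basis of the null space of A

# Example in R^4 with |Q1| = |Q2| = 2, so dim(V) = 4 - 4 + 2 = 2.
Q1 = [[0, 0, 0, 0], [1, 0, 0, 0]]
Q2 = [[0, 1, 0, 0], [0, 1, 1, 0]]
print(orthogonal_subspace(Q1, Q2).shape[0])   # 2
```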

3 Widest Empty Slabs

Let us call a slab S empty if it contains no point of P in its interior while separating P into two nonempty subsets. Our goal in this section is to compute an empty slab S of maximum width. The following is an easy observation on empty slabs.

Lemma 3

Suppose that S is a maximum-width empty slab for P in \(\mathbb {R}^d\) for \(d\geqslant 3\). Let \(h \subset \mathbb {R}^d\) be any hyperplane parallel to the normal of S. Then, \(\pi _h(S)\) is a maximum-width empty slab for \(\pi _h(P)\) in the \((d-1)\)-dimensional space h.

Lemma 3 allows us to find the widest vertical empty slab by projecting the point set to \(\mathbb {R}^{d-1}\) and solving the problem there. We can therefore concentrate on finding the widest non-vertical empty slab, and in the remainder of this section all slabs will be non-vertical. We represent a non-vertical slab S by a pair \((h_1, h_2)\) of hyperplanes such that \(h_1\) is above \(h_2\).

We prove the following characterization of optimal empty slabs.

Lemma 4

For any \(d\geqslant 2\), if \(S=(h_1, h_2)\) is an empty slab of maximum width, then one of the following must hold:

  1. (i)

    At least \(d+1\) points lie on the boundary of S, that is, \(|P\cap (h_1\cup h_2)| \geqslant d+1\).

  2. (ii)

    The normal vector to S is parallel to \(\textrm{aff}(P\cap (h_1\cup h_2))\).

Proof

Let \(S=(h_1, h_2)\) be a maximum-width empty slab for P, and let \(k := |P\cap (h_1\cup h_2)|\) be the number of points in P that lie on the boundary of S. Clearly at least one point of P lies on each of \(h_1\) and \(h_2\), so \(k \geqslant 2\). Let \(u\in \mathbb {R}^{d-1}\) be the orientation of S. Then, \(\hat{u}:=(u, -1)\) is a normal vector of S by Observation 1.

We will prove by induction on k and d that whenever \(k \leqslant d\), the statement in the second case holds, that is, the normal \(\hat{u}\) is parallel to \(\textrm{aff}(P\cap (h_1\cup h_2))\).

Consider first the case \(k = 2\). The planar case of the statement was proven by Houle and Maciel [22], see also Janardan and Preparata [24, Theorem 2.1]. Consider now \(d > 2\) and assume the claim holds in \(\mathbb {R}^{d-1}\) (for \(k=2\)). Let \(q_1, q_2\in P\) be the two points such that \(q_1 \in h_1\) and \(q_2\in h_2\), and suppose for a contradiction that \(\hat{u}\) is not parallel to \(\ell := \textrm{aff}(P\cap (h_1\cup h_2))\). In this case, \(\ell \) is the line through \(q_1\) and \(q_2\). Let \(\ell '\) be the line parallel to \(\hat{u}\) going through \(q_1\). The two lines \(\ell \) and \(\ell '\) make a positive angle \(\phi > 0\). We consider a hyperplane h containing both lines \(\ell \) and \(\ell '\). Since the normal of h is parallel to S, Lemma 3 implies that \(\pi _h(S)\) is a maximum-width empty slab for \(\pi _h(P)\) in h. The two points \(q_1\) and \(q_2\) lie in h and are still the only two points of P on the boundary of \(\pi _h(S)\). The normal of \(\pi _h(S)\) is identical to \(\hat{u}\) and still makes a positive angle \(\phi \) with \(\ell \), a contradiction to the optimality of \(\pi _h(S)\). See Fig. 1(a). It follows that the claim holds for \(k=2\) in any dimension.

Fig. 1. Illustration of the proof of Lemma 4.

Consider now the general case \(d>2\) and \(2<k\leqslant d\), and assume that the claim holds for all smaller dimensions and smaller values of k. In this case there are at least two points in \(P\cap h_1\) or \(P\cap h_2\). Without loss of generality, we let \(|P\cap h_1| \geqslant 2\). Let f be the \((k-1)\)-flat \(f = \textrm{aff}(P\cap (h_1\cup h_2))\). We pick two points \(q_1, q_2 \in P \cap h_1\). Let \(\ell \) be the line through \(q_1\) and \(q_2\), and let h be the hyperplane normal to \(\ell \) containing \(q_1\). Since the normal of h is parallel to S, we again apply Lemma 3 to conclude that \(\pi _h(S)\) is a maximum-width empty slab for \(\pi _h(P)\) in h. Since \(\pi _h(q_1) = \pi _h(q_2)\), there are \(k-1\) points of \(\pi _h(P)\) lying on the boundary of \(\pi _h(S)\). By the inductive assumption, the normal \(\hat{u}\) of \(\pi _h(S)\) is parallel to the \((k-2)\)-flat \(f' = \textrm{aff}(\pi _h(P)\cap (\pi _h(h_1) \cup \pi _h(h_2)))\). Since our projection direction lies in f, we have \(f' \subset f\), and so \(\hat{u}\) is parallel to f, completing the inductive step. See Fig. 1(b) for an illustration when \(d=3\) and \(k=3\).    \(\square \)

We will now describe our algorithm that computes a maximum-width empty slab for a given set P of n points in \(\mathbb {R}^d\). It enumerates all candidate slabs that satisfy one of the two conditions described in Lemma 4. We will show that the number of candidates is bounded by \(O(n^d)\) and we will spend \(O(\log n)\) time per candidate for the emptiness test.

Let \(P^\star \) be the set of n hyperplanes dual to the points P, and let \(\mathcal {A}= \mathcal {A}(P^\star )\) be their arrangement. We first build the query structure of Lemma 2 for \(\mathcal {A}\). We will separately enumerate all candidates that fall in case (i) and those that fall in case (ii) of Lemma 4.

3.1 Case (i)

Let us call an empty slab S a candidate slab if it satisfies the condition of case (i) in Lemma 4. In dual space, Observation 2 immediately implies the following.

Lemma 5

A candidate slab corresponds to a maximal vertical segment contained in a d-dimensional cell of \(\mathcal {A}\), whose endpoints lie in the relative interior of two faces \(f_1\) and \(f_2\) with \(\dim (f_1) + \dim (f_2) \leqslant d-1\).

We call such a vertical segment a candidate stick. Our algorithm collects all candidate sticks from the arrangement \(\mathcal {A}\), computes the width of the corresponding slab for each candidate stick, and returns one that maximizes this width.

The candidate sticks are closely related to a vertical decomposition of the arrangement of hyperplanes. Chazelle and Friedman [14] showed that the number of candidate sticks is \(O(n^d)\) and argued that they can be computed by overlaying the orthogonal projections of all the faces incident to each d-dimensional cell of \(\mathcal {A}\) onto the horizontal hyperplane \(\{x_d = 0\}\). They also described an algorithm that computes all candidate sticks in \(O(n^{\lfloor 3d/2 \rfloor } \log ^{\lfloor d/2 \rfloor } n)\) time, which is too slow for our purpose.

Consider a candidate stick s between two faces \(f_1\) and \(f_2\) of \(\mathcal {A}\). By Lemma 5, their dimensions sum up to at most \(d-1\), so the smaller dimension of the two, say \(\dim (f_2)\), is at most \(\lfloor (d-1)/2\rfloor \). Let \(k:=\dim (f_2)+1\). Then, we have \(\dim (f_1) \leqslant d-k\), and so \(f_1\) lies on (is a subface of) a face \(f'\) of \(\mathcal {A}\) of dimension \(\dim (f') = d-k\). The face \(f'\) is a face on the \((d-k)\)-flat \(f_Q := \bigcap _{q\in Q} q^\star \) for some subset \(Q \subset P\) with \(|Q| = k\). Let now \(\hat{f}_{Q}\) be the \((d-k+1)\)-flat spanned by \(f_{Q}\) and the vertical direction, and observe that s lies in \(\hat{f}_{Q}\), with one endpoint in \(f_Q\).

Every hyperplane \(q^\star \), for \(q \in P\), intersects \(\hat{f}_{Q}\) in a \((d-k)\)-flat \(q'\). The face \(f_2\) of \(\mathcal {A}\) has \(\dim (f_2) = k-1\) and therefore lies on the intersection of \(d-k+1\) hyperplanes of \(\mathcal {A}\). The intersection of these hyperplanes with the \((d-k+1)\)-flat \(\hat{f}_{Q}\) is therefore zero-dimensional, that is, a vertex of the arrangement \(\mathcal {A}_{Q}\) of the \((d-k)\)-flats \(q'\), for \(q \in P\), inside the \((d-k+1)\)-flat \(\hat{f}_{Q}\).

To summarize, for our candidate stick s there is a subset \(Q \subset P\) of size \(|Q| = k\), for some \(1 \leqslant k \leqslant \lfloor (d-1)/2 \rfloor + 1 = \lceil d/2 \rceil \), such that one endpoint of s is a vertex v of the arrangement \(\mathcal {A}_{Q}\) in \(\hat{f}_{Q}\) and the other endpoint is the vertical projection of v on the \((d-k)\)-flat \(f_{Q}\). No hyperplane \(q'\) crosses the relative interior of s, so the entire segment lies in the zone of \(f_{Q}\) in the arrangement \(\mathcal {A}_{Q}\).

Our algorithm enumerates all possibilities: We consider all non-empty subsets \(Q \subset P\) of size at most \(\lceil d/2 \rceil \). For each Q, we compute the flat \(f_{Q}\) and its vertical extension \(\hat{f}_{Q}\), and intersect all hyperplanes with \(\hat{f}_{Q}\). We then compute the zone Z of \(f_Q\) in the arrangement \(\mathcal {A}_Q\), and for each vertex v of Z, generate a vertical segment connecting v with its vertical projection on \(f_{Q}\). Finally, we test whether the segment actually is a candidate stick by verifying whether its relative interior intersects any hyperplane of \(\mathcal {A}\) using the data structure of Lemma 2.

There are \(\binom{n}{k}=O(n^k)\) different subsets of k points. Computing the zone of n hyperplanes in the \((d-k+1)\)-dimensional space \(\hat{f}_{Q}\) takes \(O(n^{d-k} + n\log n)\) time by Lemma 1. By the zone theorem [19], the zone has at most \(O(n^{d-k})\) vertices, so we can test all the resulting vertical segments in time \(O(n^{d-k}\log n)\). Summing over all \(O(n^{k})\) subsets Q of size k results in time \(O(n^{d}\log n + n^{k+1} \log n)\), and summing over k from 1 to \(\lceil d/2 \rceil \) results in total time \(O(n^{d} \log n)\).
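
As a sanity check of case (i) on small instances (and not as a substitute for the zone-based algorithm above), one can also work in primal space: each case-(i) slab has its normal fixed, up to sign, by a partition of some \(d+1\) boundary points between the two bounding hyperplanes, so it suffices to try all such partitions. A naive sketch with our own helper names, assuming general position and NumPy:

```python
import numpy as np
from itertools import combinations

def _normal_for_partition(B1, B2, tol=1e-9):
    """Unit normal orthogonal to every difference vector within B1 and within B2;
    unique up to sign when |B1| + |B2| = d + 1 and the points are in general position."""
    rows = [p - B1[0] for p in B1[1:]] + [p - B2[0] for p in B2[1:]]
    A = np.vstack(rows)
    _, s, Vt = np.linalg.svd(A)
    if int(np.sum(s > tol)) != A.shape[1] - 1:
        return None                                  # degenerate configuration
    return Vt[-1]

def widest_empty_slab_case_i(P, eps=1e-9):
    """Brute force over case (i) of Lemma 4: at least d+1 points on the slab boundary."""
    P = np.asarray(P, dtype=float)
    n, d = P.shape
    best_width, best_slab = 0.0, None
    for idx in combinations(range(n), d + 1):
        B = [P[i] for i in idx]
        for r in range(1, d + 1):                    # B1 gets r points, B2 the rest
            for top in combinations(range(d + 1), r):
                B1 = [B[i] for i in top]
                B2 = [B[i] for i in range(d + 1) if i not in top]
                nu = _normal_for_partition(B1, B2)
                if nu is None:
                    continue
                t = P @ nu                           # signed offsets along the unit normal
                lo, hi = sorted((float(B1[0] @ nu), float(B2[0] @ nu)))
                if np.any((t > lo + eps) & (t < hi - eps)):
                    continue                         # some point lies strictly inside
                if hi - lo > best_width:
                    best_width, best_slab = hi - lo, (nu, lo, hi)
    return best_width, best_slab
```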

3.2 Case (ii)

We now turn to the second case. We will enumerate all slabs S (not just empty ones) that are determined by a subset Q of at most d points of P in the following way: The intersection of the boundary of S with P is exactly Q, and the normal of S is parallel to \(\textrm{aff}(Q)\). We then test whether the slab is actually empty using the data structure of Lemma 2, and return an empty slab maximizing the width.

The following lemma shows that given the boundary points, the unique slab satisfying the condition can be found in constant time.

Lemma 6

Let \(Q_1, Q_2 \subset P\) be two disjoint subsets such that \(2\leqslant |Q_1|+|Q_2| \leqslant d\). There exists a unique orientation u of a slab \(S=(h_1, h_2)\) such that \(P\cap h_1 = Q_1\), \(P\cap h_2 = Q_2\), and the normal of S is parallel to \(\textrm{aff}(Q_1\cup Q_2)\). Moreover, such a unique orientation can be computed in \(O(d^3) = O(1)\) time.
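
One possible way to compute this orientation, sketched below with our own helper name and NumPy, is to take the normal in the direction space of \(\textrm{aff}(Q_1\cup Q_2)\) that is orthogonal to all difference vectors within \(Q_1\) and within \(Q_2\); this amounts to a small null-space computation, consistent with the \(O(d^3)\) bound of Lemma 6.

```python
import numpy as np

def lemma6_orientation(Q1, Q2, tol=1e-9):
    """Orientation u of the slab S = (h1, h2) with Q1 on h1, Q2 on h2 and with the
    normal of S parallel to aff(Q1 ∪ Q2), in the spirit of Lemma 6 (helper name ours).
    Returns None if the normal comes out vertical (no non-vertical slab exists)."""
    Q1, Q2 = np.asarray(Q1, dtype=float), np.asarray(Q2, dtype=float)
    d = Q1.shape[1]
    pts = np.vstack([Q1, Q2])
    W = pts[1:] - pts[0]                           # basis of the direction space of aff(Q1 ∪ Q2)
    A_rows = [q - Q1[0] for q in Q1[1:]] + [q - Q2[0] for q in Q2[1:]]
    A = np.vstack(A_rows) if A_rows else np.zeros((0, d))
    M = A @ W.T                                    # (k-2) x (k-1) system for the coefficients
    _, s, Vt = np.linalg.svd(M) if M.shape[0] else (None, np.zeros(0), np.eye(W.shape[0]))
    rank = int(np.sum(s > tol))
    c = Vt[rank:][0]                               # coefficient vector spanning the null space
    nu = W.T @ c                                   # the (unnormalized) normal of S
    if abs(nu[-1]) < tol:
        return None                                # vertical slab; handled by projection instead
    return -nu[:-1] / nu[-1]                       # nu is parallel to (u, -1)

# Two singletons in R^3: the normal is the direction q2 - q1.
print(lemma6_orientation([[0, 0, 0]], [[1, 1, 1]]))   # [-1. -1.]
```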

Let \(Q_1, Q_2 \subset P\) be a pair of disjoint subsets of P with \(2\leqslant |Q_1\cup Q_2| \leqslant d\). In constant time we can compute the unique orientation described in Lemma 6, denoted by \(u(Q_1, Q_2)\), and construct a slab \(S = (h_1, h_2)\) with orientation \(u(Q_1, Q_2)\) such that \(P\cap h_1 = Q_1\) and \(P\cap h_2 = Q_2\). We can then test in \(O(\log n)\) time if the relative interior of S is empty using the data structure of Lemma 2. We return the widest slab among all slabs passing the test.

We run the above procedure for all valid pairs \((Q_1, Q_2)\). The number of such pairs is bounded by \(\sum _{2\leqslant k \leqslant d}\binom{n}{k} \cdot 2^k = O(n^d)\), since d is a constant. Hence, the total running time for case (ii) is also bounded by \(O(n^d \log n)\).
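
For small inputs, the whole of case (ii) can be checked by the following brute-force driver. This is our own sketch, not the algorithm above: it reuses lemma6_orientation from the previous sketch and a naive O(n) emptiness test in place of the data structure of Lemma 2.

```python
import numpy as np
from itertools import combinations

# Assumes lemma6_orientation from the previous sketch is in scope.
def widest_empty_slab_case_ii(P, eps=1e-9):
    """Brute force over case (ii) of Lemma 4: for every pair (Q1, Q2) of disjoint
    nonempty subsets with 2 <= |Q1| + |Q2| <= d, build the slab of Lemma 6 and keep
    the widest one whose interior contains no point of P."""
    P = np.asarray(P, dtype=float)
    n, d = P.shape
    best_width, best_slab = 0.0, None
    for k in range(2, d + 1):
        for idx in combinations(range(n), k):
            for r in range(1, k):
                for top in combinations(idx, r):
                    Q1 = P[list(top)]
                    Q2 = P[[i for i in idx if i not in top]]
                    u = lemma6_orientation(Q1, Q2)
                    if u is None:
                        continue                      # vertical slab: handled by projection
                    disp = P[:, -1] - P[:, :-1] @ u   # displacements at orientation u
                    b1 = Q1[0, -1] - Q1[0, :-1] @ u
                    b2 = Q2[0, -1] - Q2[0, :-1] @ u
                    lo, hi = min(b1, b2), max(b1, b2)
                    if np.any((disp > lo + eps) & (disp < hi - eps)):
                        continue                      # slab is not empty
                    width = (hi - lo) / np.linalg.norm(np.append(u, -1.0))
                    if width > best_width:
                        best_width, best_slab = width, (u, lo, hi)
    return best_width, best_slab
```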

Theorem 1

For any constant dimension \(d\geqslant 4\), a widest empty slab for a set of n points in \(\mathbb {R}^d\) can be computed in \(O(n^d \log n)\) time.

Our algorithm can be implemented faster for \(d \leqslant 3\), because the emptiness of a candidate slab can be tested in O(1) amortized time. This results in algorithms with \(O(n^2)\) and \(O(n^3)\) time for \(d=2\) and \(d=3\), respectively, matching the previously known time bounds [18, 22]. The space requirement of our algorithm, however, is larger.

4 Minimum-Width Double-Slabs

In this section, we present algorithms that compute a minimum-width double-slab enclosing P in \(\mathbb {R}^d\) for \(d\geqslant 3\). Throughout this section, we assume that there exists a minimum-width double-slab enclosing P that is non-vertical. As in the previous section, if this is not the case, we can solve the problem by reducing it to an instance in \(\mathbb {R}^{d-1}\) after orthogonally projecting P onto \(\mathbb {R}^{d-1}\). In addition, we also assume that P consists of at least \(d+2\) points, since the problem becomes trivial when \(|P| \leqslant d+1\).

Let \(P^\star \) be the set of n hyperplanes dual to points in P, \(\mathcal {A}= \mathcal {A}(P^\star )\) be their arrangement, \(\mathcal {L}= \mathcal {L}(P^\star )\) be the lower envelope of \(\mathcal {A}\), and \(\mathcal {U}= \mathcal {U}(P^\star )\) be the upper envelope of \(\mathcal {A}\). We represent any non-vertical double-slab D by a 4-tuple \((h_1, h_2, h_3, h_4)\) of hyperplanes such that \(h_i\) is above \(h_{i+1}\) in \(\mathbb {R}^d\) for each \(i=1,2,3\).

Fig. 2. Two possible configurations of minimum-width double-slabs \(D = (h_1, h_2, h_3, h_4)\) in \(\mathbb {R}^3\). (a) Case (i) of Lemma 7: There are six points on the four planes. (b) Case (iii): There are five points on the four planes, while we have \(w(h_1, h_2) = w(h_3, h_4)\).

It is obvious that there exists a minimum-width double-slab enclosing P such that at least one point in P lies on each of its four hyperplanes. The following lemma describes a characterization of optimal double-slabs for our problem. See also Fig. 2.

Lemma 7

For any \(d\geqslant 2\), let \(D=(h_1,h_2,h_3,h_4)\) be a minimum-width double-slab enclosing P such that \(|P\cap h_i| \geqslant 1\) for each \(i\in \{1,2,3,4\}\). Then, the following conditions must hold:

  • \(\sum _{i} |P\cap h_i| \geqslant d+2\).

  • Exactly one of the following cases holds:

    1. (i)

      \(w(h_1, h_2) > w(h_3, h_4)\) and \(|P \cap h_1| + |P \cap h_2| \geqslant d+1\).

    2. (ii)

      \(w(h_1, h_2) < w(h_3, h_4)\) and \(|P \cap h_3| + |P \cap h_4| \geqslant d+1\).

    3. (iii)

      \(w(h_1, h_2) = w(h_3, h_4)\).

Consider any minimum-width double-slab \(D=(h_1,h_2, h_3, h_4)\) enclosing P that satisfies the conditions described in Lemma 7. Let \(Q_i := P\cap h_i\) for \(i=1,2,3,4\). Since the outer slab \(S_{\textrm{out}}=(h_1, h_4)\) also encloses P, it is obvious that a portion of \(\bigcap _{q\in Q_1} q^\star \) appears as a face of \(\mathcal {L}\) and a portion of \(\bigcap _{q\in Q_4} q^\star \) appears in \(\mathcal {U}\) in the same way. Hence, we have \(h_1^\star \in \mathcal {L}\) and \(h_4^\star \in \mathcal {U}\). On the other hand, the inner slab \(S_{\textrm{in}} = (h_2, h_3)\) should be empty, so the vertical segment \(h_2^\star h_3^\star \) crosses no hyperplane in \(P^\star \). Observe also that the dual point \(h_i^\star \) for each i lies in the relative interior of a \((d-|Q_i|)\)-face of \(\mathcal {A}\) that is a portion of the flat \(\bigcap _{q\in Q_i} q^\star \).

We will often discuss the overlay \(\mathcal {M}\) of the orthogonal projections \(\pi (\mathcal {U})\) and \(\pi (\mathcal {L})\) of the envelopes \(\mathcal {U}\) and \(\mathcal {L}\) onto the hyperplane \(\{x_d = 0\}= \mathbb {R}^{d-1}\). Each face f of the overlay \(\mathcal {M}\) is associated with a pair \((Q_1, Q_4)\) of subsets of P such that: for any orientation \(u\in f\), if \((h_1, h_4)\) is the minimum-width slab of orientation u that encloses P, then it always holds that \(P\cap h_1 = Q_1\) and \(P\cap h_4 = Q_4\).

Lemma 8

For any fixed \(d \geqslant 2\), the overlay \(\mathcal {M}\) consists of \(O(n^{\lceil d/2\rceil })\) many faces and can be computed in \(O(n^{\lceil d/2\rceil } + n\log n)\) time.

In the following, we describe our algorithms for \(d\geqslant 3\).

4.1 Three-Dimensional Case

First, we consider the case of \(d=3\). For any point \(q\in P\) and any orientation \(u \in \mathbb {R}^2\), we consider the double-slab \(D_q(u) = (h_1, h_2, h_3, h_4)\) with orientation u such that \(P \subset D_q(u)\), \(h_1^\star \in \mathcal {L}\), \(q'\in h_2\) (that is, \(h_2^\star \in q'^\star \)) for some \(q'\in P \setminus \{q\}\), \(q\in h_3\) (that is, \(h_3^\star \in q^\star \)), and \(h_4^\star \in \mathcal {U}\). Since \(D_q(u)\) encloses P, the point \(q'\) on \(h_2\) is determined as the first point in P hit by translating \(h_3\) upwards or, equivalently, as the first hyperplane \(q'^\star \) in \(\mathcal {A}\) hit by the ray emanating downwards from \(h_3^\star \). By Lemma 7, there exist some \(q\in P\) and \(u\in \mathbb {R}^2\) such that \(D_q(u)\) is indeed a minimum-width double-slab enclosing P.

Let \(w_q(u) := w(D_q(u))\) be its width. Then, we have \(w_q(u) = \max \{\Vert h_1^\star - h_2^\star \Vert , \Vert h_3^\star - h_4^\star \Vert \} \cdot \cos (\theta (u))\). Let \(v_q(u) := w_q(u) / \cos (\theta (u))\) be the linear part of the right side of the above equation. Observe that the function \(v_q :\mathbb {R}^2 \rightarrow \mathbb {R}\) is piecewise linear, since its values are determined by the vertical distances between faces in \(\mathcal {A}\), while \(\cos (\theta (u)) = 1/\sqrt{\Vert u\Vert ^2+1}\) is a continuous function over \(u\in \mathbb {R}^2\).
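
For a fixed q and a fixed orientation u, the double-slab \(D_q(u)\) and the value \(w_q(u)\) can be read off from the displacements of the hyperplanes of orientation u through the points of P. The following naive O(n)-per-orientation evaluation (our own helper, not the algorithm, which instead traces the piecewise-linear function \(v_q\) over the dual arrangement) mirrors the definition.

```python
import numpy as np

def double_slab_width_q(P, q_index, u, eps=1e-12):
    """w_q(u): width of the enclosing double-slab D_q(u) = (h1, h2, h3, h4) in which
    h3 passes through P[q_index], h2 through the lowest point strictly above h3, and
    h1, h4 support P from above and below."""
    P = np.asarray(P, dtype=float)
    u = np.asarray(u, dtype=float)
    disp = P[:, -1] - P[:, :-1] @ u      # displacement of the hyperplane of orientation u through each point
    b3 = disp[q_index]
    above = disp[disp > b3 + eps]
    if above.size == 0:
        return None                      # no point of P lies above h3, so D_q(u) is undefined
    b1, b2, b4 = disp.max(), above.min(), disp.min()
    v = max(b1 - b2, b3 - b4)            # the piecewise-linear part v_q(u)
    return v / np.linalg.norm(np.append(u, -1.0))   # w_q(u) = v_q(u) * cos(theta(u))

# Four points in R^3, q = the first point, orientation u = (0, 0) (horizontal planes):
P = np.array([[0., 0, 0], [1, 0, 1], [0, 1, 3], [1, 1, 4]])
print(double_slab_width_q(P, 0, np.array([0.0, 0.0])))   # max(4 - 1, 0 - 0) / 1 = 3.0
```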

Lemma 9

The number of linear pieces in function \(v_q\) is \(O(n^2)\) and an explicit description of function \(v_q\) can be computed in \(O(n^2)\) time.

By Lemma 9, we also obtain the planar subdivision \(M_q\) induced by the breakpoints of \(v_q\) such that \(v_q\) restricted to any face of \(M_q\) (of any dimension) is a linear function. Lemma 7 for \(d=3\) implies the existence of a minimum-width double-slab enclosing P whose orientation lies on a vertex or an edge of \(M_q\). Since the function \(v_q\) restricted to each face of \(M_q\) (of any dimension) is linear and thus of constant complexity, we can find an optimal double-slab under the constraint that \(q \in h_3\) in \(O(n^2)\) total time. By iterating over all \(q\in P\), we obtain the following.

Theorem 2

Given a set P of n points in \(\mathbb {R}^3\), a minimum-width double-slab enclosing P can be computed in \(O(n^3)\) time.

4.2 Four-Dimensional Case

We now consider the four-dimensional case \(d=4\). Consider an optimal solution \(D = (h_1, h_2, h_3, h_4)\) that satisfies the conditions of Lemma 7. We distinguish two cases: (a) \(|P\cap h_2| = |P\cap h_3| = 1\), or (b) \(|P\cap h_2| \geqslant 2\) or \(|P \cap h_3| \geqslant 2\). Our goal is to find an optimal double-slab for each of the two cases.

In the former case (a), we first compute the overlay \(\mathcal {M}\) of \(\pi (\mathcal {U})\) and \(\pi (\mathcal {L})\) by applying Lemma 8. Then, for each face of \(\mathcal {M}\), consider its associated pair \((Q_1, Q_4)\) of subsets and a minimum-width double-slab D enclosing P such that \(P\cap h_1 = Q_1\), \(P\cap h_4 = Q_4\), and it falls into case (a). Since we must have \(|Q_1| + |Q_4| \geqslant 4\) by Lemma 7, we ignore those faces of \(\mathcal {M}\) whose associated pair \((Q_1, Q_4)\) consists of at most three points. Since our target double-slab contains \(Q_1\) and \(Q_4\) on its outer boundary, its normal should be orthogonal both to \(\textrm{aff}(Q_1)\) and \(\textrm{aff}(Q_4)\). Hence, we consider the subspace \(f = (\textrm{aff}(Q_1))^\perp \cap (\textrm{aff}(Q_4))^\perp \) orthogonal to both \(\textrm{aff}(Q_1)\) and \(\textrm{aff}(Q_4)\). As discussed in Sect. 2, such a flat f exists and its dimension is always \(6 - |Q_1| - |Q_4| \leqslant 2\). Hence, f is a line or a plane in \(\mathbb {R}^4\). If f is a line, then we can find our solution in O(n) time by scanning all points in P; if f is a plane, then we project P orthogonally onto f and solve the 2-dimensional problem instance in \(O(n^2)\) time [6]. Since \(\mathcal {M}\) consists of \(O(n^2)\) faces (Lemma 8), we can handle case (a) in a total of \(O(n^4)\) time.

Next, we consider the latter case (b). Assume without loss of generality that we have \(|Q_3| \geqslant 2\) in an optimal double-slab of this case. We fix a pair of points \(q, q' \in P\) and consider the case where \(q, q' \in Q_3\). For this purpose, we consider any hyperplane h in \(\mathbb {R}^4\) orthogonal to the line through q and \(q'\), and the orthogonal projection of P onto h. Note that q and \(q'\) are projected to a common point \(\bar{q}\) on h. We then have a problem instance in \(\mathbb {R}^3\) with the constraint that \(\bar{q}\) should lie on the third plane of the resulting double-slab. Therefore, we apply Lemma 9 to solve this instance in \(O(n^2)\) time. We can thus handle case (b) by iterating over all pairs \((q, q')\) of two points in P, taking \(O(n^4)\) time in total.
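
Concretely, this projection can be carried out by expressing the points in an orthonormal basis of \((q'-q)^\perp \), which yields a genuine point set in \(\mathbb {R}^3\). A small sketch with our own helper name, assuming NumPy:

```python
import numpy as np

def project_orthogonal_to(P, q, q_prime):
    """Coordinates of the points of P on a hyperplane orthogonal to the line through
    q and q_prime, expressed in an orthonormal basis of (q_prime - q)^perp, so the
    result is a point set in R^{d-1}; q and q_prime project to the same point."""
    P = np.asarray(P, dtype=float)
    v = np.asarray(q_prime, dtype=float) - np.asarray(q, dtype=float)
    _, _, Vt = np.linalg.svd(v.reshape(1, -1))
    basis = Vt[1:]                      # orthonormal basis of the orthogonal complement of v
    return P @ basis.T

# In R^4, the two fixed points coincide after the projection.
P = np.array([[0., 0, 0, 0], [1, 1, 1, 1], [2, 0, 1, 3]])
proj = project_orthogonal_to(P, P[0], P[1])
print(proj.shape, np.allclose(proj[0], proj[1]))   # (3, 3) True
```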

Theorem 3

Given a set P of n points in \(\mathbb {R}^4\), a minimum-width double-slab enclosing P can be computed in \(O(n^4)\) time.

4.3 Algorithm for Higher Dimension

Finally, we consider higher dimensions \(d \geqslant 5\). The algorithm starts with some preprocessing, including the initialization of the point location structure of Lemma 2 and of the query structure by Matoušek and Schwarzkopf [27] for ray shooting on \(\mathcal {U}\) and \(\mathcal {L}\). The preprocessing takes no more than \(O(n^d)\) time. In the main part of the algorithm, we handle each of the three cases described in Lemma 7.

Cases (i) and (ii). First, we handle double-slabs \(D=(h_1, h_2, h_3, h_4)\) enclosing P of case (i): \(w(h_1, h_2) > w(h_3, h_4)\) and \(|P\cap h_1| + |P\cap h_2| \geqslant d+1\). For this purpose, we first consider those \(D = (h_1, h_2, h_3, h_4)\) with the additional constraint that \(P\cap h_2 = \{q\}\) for a fixed point \(q\in P\).

Lemma 10

Let \(d\geqslant 2\) be any fixed integer and \(q \in P\) be any fixed point. A minimum-width double-slab \(D = (h_1, h_2, h_3, h_4)\) enclosing P such that \(w(h_1, h_2) > w(h_3, h_4)\) and \(P\cap h_2 = \{q\}\) can be computed in \(O(n^{\lfloor d/2\rfloor }\log n)\) time, provided the query structure of Lemma 2.

For each \(1\leqslant k \leqslant d\), consider an optimal double-slab D of this case such that \(|P \cap h_2| = k\). Such an optimal solution D can be computed as follows: For each subset \(Q_2 \subset P\) with \(|Q_2| = k\), let \(f := (\textrm{aff}(Q_2))^\perp \) be the orthogonal complement of \(\textrm{aff}(Q_2)\). Note that f is a \((d-k+1)\)-flat, as discussed in Sect. 2. We then orthogonally project P onto f by the projection operator \(\pi _f\), and find an optimal double-slab \(D' = (h'_1, h'_2, h'_3, h'_4)\) of case (i) with the additional constraint that \(\pi _f(P) \cap h'_2 = \pi _f(Q_2)\). This constrained problem can be solved by Lemma 10 if \(1\leqslant k \leqslant d-1\), since the projection \(\pi _f\) maps all points in \(Q_2\) to a common point in f and thus \(\pi _f(Q_2)\) consists of a single point. Since there are \(\binom{n}{k}\) different choices of \(Q_2\) and we spend \(O(n^{\lfloor (d-k+1)/2 \rfloor } \log n)\) time for each of them (Lemma 10), it takes \(O(n^{d} \log n)\) time in total over all \(1\leqslant k \leqslant d-1\).

In the case \(k=d\), the flat f is one-dimensional, so it determines a unique orientation \(u \in \mathbb {R}^{d-1}\). Hence, this case can be solved in \(O(\log n)\) time as follows: Using the ray shooting queries on \(\mathcal {L}\) and \(\mathcal {U}\) [27], we locate the points \(h_1^\star \) and \(h_4^\star \) on \(\mathcal {L}\) and \(\mathcal {U}\), respectively, at u in \(O(\log n)\) time, and then test whether the vertical segment with top endpoint at \(h_4^\star \) and length \(\Vert h_1^\star - h_2^\star \Vert \) crosses the correct number of hyperplanes in \(\mathcal {A}\). This query can be answered in \(O(\log n)\) time by Lemma 2. The total running time for case (i) is thus bounded by \(O(n^{d} \log n)\).

Case (ii) can be handled in the same way as above.

Case (iii). Finally, consider double-slabs \(D=(h_1, h_2, h_3, h_4)\) enclosing P of case (iii): \(w(h_1, h_2) = w(h_3, h_4)\) and \(\sum _i |P\cap h_i|\geqslant d+2\). For any four subsets \(Q_1, Q_2, Q_3, Q_4 \subseteq P\), we define \(D(Q_1, Q_2, Q_3, Q_4) = (h_1, h_2, h_3, h_4)\) to be a minimum-width double-slab such that \(w(h_1,h_2) = w(h_3,h_4)\) and \(P\cap h_i = Q_i\) for \(i=1,2,3,4\). Note that \(D(Q_1, Q_2, Q_3, Q_4)\) does not have to enclose P, and it may be undefined, either because all the conditions cannot be fulfilled at the same time or because no minimum width is attained even though there are infinitely many such double-slabs. If \(D(Q_1, Q_2, Q_3, Q_4)\) is well defined, then we call it a candidate double-slab. Our strategy is to collect all candidate double-slabs over all possible combinations of four subsets and then to test the feasibility of each of them, that is, whether it encloses P, by Lemma 2.

It turns out that the case of \(|Q_2| = |Q_3| = 1\) can be handled efficiently.

Lemma 11

Let \(d\geqslant 2\) be any fixed integer and \(q_2, q_3\in P\) be two fixed points. The number of candidate double-slabs of the form \(D(Q_1, \{q_2\}, \{q_3\}, Q_4)\) for some \(Q_1, Q_4 \subset P\) is \(O(n^{\lfloor d/2\rfloor })\), and all of them can be computed in \(O(n^{\lfloor d/2\rfloor } + n\log n)\) time. In case of \(d\leqslant 3\), given the lower and upper envelopes \(\mathcal {L}\) and \(\mathcal {U}\), all candidates can be reported in O(n) time.

Now, we fix \(2\leqslant k \leqslant d\) and any pair \((Q_2, Q_3)\) of nonempty subsets of P with \(|Q_2| + |Q_3| = k\). We enumerate all candidate double-slabs of the form \(D(Q_1, Q_2, Q_3, Q_4)\) for some \(Q_1, Q_4\subset P\) in the following way. Compute the subspace \(f = (\textrm{aff}(Q_2))^\perp \cap (\textrm{aff}(Q_3))^\perp \) orthogonal to both \(\textrm{aff}(Q_2)\) and \(\textrm{aff}(Q_3)\). As discussed in Sect. 2, we always have \(\dim (f) = d-k+2\). As done above for case (i), we project the points in P orthogonally onto f by the projection operator \(\pi _f\). Observe that all points in \(Q_2\) are projected to a common point \(\bar{q}_2 \in f\) and all points in \(Q_3\) are projected to a common point \(\bar{q}_3 \in f\). Hence, we can apply Lemma 11 in the \((d-k+2)\)-dimensional space f. This results in \(O(n^{\lfloor (d-k+2)/2 \rfloor })\) candidate double-slabs, taking the same amount of time if \(2\leqslant k \leqslant d-2\), or \(O(n\log n)\) time if \(k = d-1\) or \(k=d\). By iterating over all \(2\leqslant k \leqslant d\) and all \(\binom{n}{k} \cdot (2^k-2)\) pairs \((Q_2, Q_3)\), we collect a total of \(O(n^{d+1})\) candidate double-slabs. We then test whether each candidate double-slab encloses P by point location in \(\mathcal {A}\) (Lemma 2), in \(O(\log n)\) time per candidate.

It remains to consider those double-slabs \(D=(h_1, h_2, h_3, h_4)\) of case (iii) such that \(k = |P\cap h_2| + |P\cap h_3| \geqslant d+1\). In this case, observe that \((h_2, h_3)\) is an empty slab and its dual segment \(h_2^\star h_3^\star \) is a candidate stick as defined in Sect. 3. (See also Lemma 5.) Thus, we run the algorithm described in Sect. 3.1 to compute all \(O(n^d)\) candidate sticks and, for each candidate stick s, we build a corresponding double-slab that encloses P in \(O(\log n)\) time by ray shooting queries on \(\mathcal {L}\) and \(\mathcal {U}\) at \(u = \pi (s) \in \mathbb {R}^{d-1}\). So, we can handle this case in \(O(n^d \log n)\) time.

The algorithm described above takes \(O(n^{d+1} \log n)\) time in total to compute an optimal double-slab of case (iii). The most time-consuming part of the algorithm is the case of \(k=d\): for \(k=d\), we collect O(n) candidates for each of \(O(n^d)\) pairs \((Q_2, Q_3)\) and spend \(O(n^{d+1} \log n)\) time, including the feasibility test. The other cases for \(2\leqslant k \leqslant d-1\) and for \(k \geqslant d+1\) take only \(O(n^d \log n)\) time in total.

To improve the running time, observe that, when \(k=d\), we reduce the problem in \(\mathbb {R}^d\) to instances in \(\mathbb {R}^2\). In the following, we show that these 2-dimensional instances can be handled in bundles within the 3-dimensional instances that arise for \(k=d-1\).

Lemma 12

Let \(d = 3\) and \(q_2, q_3\in P\) be two fixed points. A minimum-width double-slab \(D = (h_1, h_2, h_3, h_4)\) enclosing P such that \(w(h_1, h_2) = w(h_3, h_4)\), \(q_2\in h_2\), \(q_3\in h_3\), and \(|P\cap h_2| + |P\cap h_3| = 3\) can be computed in \(O(n^2)\) time.

We exploit Lemma 12 as follows: For each pair \((Q_2, Q_3)\) with \(|Q_2|+|Q_3| = d-1\), we project onto the corresponding 3-dimensional subspace and apply Lemma 12. This also handles all pairs \((Q'_2, Q_3)\) and \((Q_2, Q'_3)\) with \(Q'_2 = Q_2 \cup \{q\}\) and \(Q'_3 = Q_3 \cup \{q\}\) for any point \(q\in P\setminus (Q_2 \cup Q_3)\). Hence, the pairs consisting of d points are now handled in bundles, in \(O(n^{d-1}) \cdot O(n^2) = O(n^{d+1})\) total time. Since the remaining cases (namely, \(2\leqslant k \leqslant d-2\), \(k\geqslant d+1\), and cases (i) and (ii)) take only \(O(n^d \log n)\) time in total, the final running time is thus improved to \(O(n^{d+1})\).

Theorem 4

Given a set P of n points in \(\mathbb {R}^d\) for any fixed \(d\geqslant 5\), a minimum-width double-slab enclosing P can be computed in \(O(n^{d+1})\) time.