1 Introduction and statement of results

For two points \(a,b \in [0,1)^d\), we write \(a \le b\) and \(a<b\) if the corresponding inequalities hold in each coordinate; furthermore, we write \([a,b)\) for the set \(\{x \in [0,1)^d: ~a \le x < b\}\), and call such a set a (\(d\)-dimensional) interval. We denote by \(\mathbf 1 _{I}\) the indicator function of the set \(I \subseteq [0,1)^d\) and by \(\lambda _d\) the \(d\)-dimensional Lebesgue measure. We will sometimes write 0 for the \(d\)-dimensional vector \((0,\dots ,0)\).

A sequence \((x_n)_{n \in \mathbb N }\) of points in \([0,1)^d\) is called uniformly distributed modulo 1 (u.d. mod 1) if

$$\begin{aligned} \lim _{N \rightarrow \infty } \frac{\sum _{n = 1}^N \mathbf 1 _{[a, b)} (x_n)}{N} = \lambda _d([a,b)) \end{aligned}$$

for all \(d\)-dimensional intervals \([a, b ) \subseteq [0,1)^d\). A further characterization of u.d. is given by the following well-known result of Weyl [17]: a sequence \((x_n)_{n \in \mathbb N }\) of points in \([0,1)^d\) is u.d. mod 1 if and only if for every continuous function \(f\) on \([0,1)^d\) the relation

$$\begin{aligned} \lim _{N \rightarrow \infty } \frac{\sum _{n = 1}^N f(x_n)}{N} = \int _{[0,1)^d} f(x) \text{ d}x \end{aligned}$$

holds. Although this theorem shows the possibility of using u.d. point sequences for numerical integration—a method usually called Quasi-Monte Carlo (QMC) integration—it does not give any information on the integration error.

The Koksma–Hlawka inequality [10] states that this integration error can be bounded by the product of the variation of \(f\) (in the sense of Hardy and Krause), denoted by \(V(f)\), and the so-called star-discrepancy \(D_N^*\) of the point sequence \((x_n)_{n \in \mathbb N }\). More precisely,

$$\begin{aligned} \left| \frac{1}{N} \sum _{n = 1}^N f(x_n) - \int _{[0,1]^d} f(x) \text{ d}x \right| \le V(f) D_N^*(x_n), \end{aligned}$$

where \(D_N^*\) is given by

$$\begin{aligned} D_N^* = D_N^*(x_1, \ldots , x_N) = \sup _{a \in [0,1)^d} \left| \frac{\sum _{n=1}^N \mathbf 1 _{[0,a)} (x_n)}{N} - \lambda _d([0,a))\right| . \end{aligned}$$

The Koksma–Hlawka inequality suggests that for QMC integration, one should use a sequence of points whose discrepancy is as small as possible. The best known point sequences achieve a discrepancy of order \(\mathcal O (N^{-1} (\log N)^d)\); such sequences are called low-discrepancy sequences. Note that for every \(d \ge 1\), this convergence rate is asymptotically better than the probabilistic error of order \(N^{-1/2}\) of Monte Carlo integration, where a sequence of random points is used instead of a deterministic one. QMC integration can be successfully applied in several different areas of applied mathematics, for example, in actuarial or financial mathematics, where high-dimensional integration problems frequently arise (see e.g., [2, 14]). For more information on discrepancy theory, low-discrepancy sequences and QMC integration see [8, 12].
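
For illustration, in dimension \(d=1\) the star-discrepancy of a finite point set can be evaluated exactly via the well-known formula \(D_N^*=\max _{1\le i\le N}\max \{i/N-x_{(i)},\,x_{(i)}-(i-1)/N\}\), where \(x_{(1)}\le \cdots \le x_{(N)}\) denote the ordered points. The following short Python sketch (an illustration only; the function name is our choice) implements this formula.

def star_discrepancy_1d(points):
    # Exact one-dimensional star-discrepancy of a finite point set in [0, 1).
    xs = sorted(points)
    n = len(xs)
    return max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(xs))

# Example: the equidistant points i/16, i = 0, ..., 15, have star-discrepancy 1/16.
print(star_discrepancy_1d([i / 16 for i in range(16)]))   # prints 0.0625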

A popular approach to construct \(d\)-dimensional low-discrepancy sequences is to combine \(d\) one-dimensional low-discrepancy sequences; this works, for example, for the so-called Halton sequence, which is obtained by joining one-dimensional van der Corput sequences coordinatewise. In the present paper, we show that this construction principle is not generally applicable for a special class of one-dimensional low-discrepancy sequences, the so-called LS-sequences. We prove that the limit distribution of a multidimensional LS-sequence (composed coordinatewise from one-dimensional low-discrepancy LS-sequences) can spectacularly fail to be uniform if there is a certain number-theoretic relation between the parameters of the one-dimensional sequences. To explain the construction of LS-sequences, we need some definitions.

Definition 1

(Kakutani splitting procedure) If \(\alpha \in (0,1)\) and \(\pi = \{[t_{i-1}, t_i): 1 \le i \le k\}\) is any partition of \([0,1)\), then \(\alpha \pi \) denotes its so-called \(\alpha \)-refinement, which is obtained by subdividing all intervals of \(\pi \) having maximal length into two parts, proportional to \(\alpha \) and \(1- \alpha \), respectively. The so-called Kakutani’s sequence of partitions \((\alpha ^n \omega )_{n \in \mathbb N }\) is obtained as the successive \(\alpha \)-refinement of the trivial partition \(\omega = \{[0,1)\}\).
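
The splitting procedure is easy to simulate. The following Python sketch (a minimal illustration; intervals are stored as pairs of endpoints, and maximal length is tested up to a small floating-point tolerance) performs successive \(\alpha \)-refinements of the trivial partition \(\omega \).

def alpha_refine(partition, alpha):
    # One alpha-refinement: split every interval of maximal length into two
    # parts proportional to alpha and 1 - alpha.
    max_len = max(b - a for a, b in partition)
    refined = []
    for a, b in partition:
        if b - a > max_len - 1e-12:          # interval of maximal length
            refined += [(a, a + alpha * (b - a)), (a + alpha * (b - a), b)]
        else:
            refined.append((a, b))
    return refined

partition = [(0.0, 1.0)]                     # trivial partition omega
for _ in range(5):                           # the partition alpha^5 omega
    partition = alpha_refine(partition, alpha=0.3)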

The notion of \(\alpha \)-refinements can be generalized in a natural way to so-called \(\rho \)-refinements.

Definition 2

(\(\rho \) -refinement) Let \(\rho \) denote a non-trivial finite partition of \([0,1)\). Then, the \(\rho \)-refinement of a partition \(\pi \) of \([0,1)\), denoted by \(\rho \pi \), is given by subdividing all intervals of maximal length positively homothetically to \(\rho \). Note that the \(\alpha \)-refinement is a special case with \(\rho = \{[0, \alpha ), [\alpha , 1)\}\).

By a classical result of Kakutani [11], for any \(\alpha \in (0,1)\), the sequence of partitions \((\alpha ^n \omega )_{n \in \mathbb N }\) is uniformly distributed, which means that for every interval \([a,b] \subset [0,1]\),

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{\sum _{i=1}^{k(n)} \mathbf 1 _{[a,b]}(t_i^n)}{k(n)} = b-a, \end{aligned}$$

where \(k(n)\) denotes the number of intervals in \(\alpha ^n \omega =\{[t_{i-1}^n,t_i^n),~1 \le i \le k(n)\}\). The same result holds for any sequence of \(\rho \)-refinements of \(\omega \), due to a result of Volčič [16] (see also [1, 7]). A multidimensional generalization of \(\rho \)-refinements has been introduced by Carbone and Volčič [6]. A special case of a \(\rho \)-refinement is the so-called LS-sequence of partitions. This sequence of partitions has been introduced by Carbone [5].

Definition 3

(LS-sequence of partitions) An LS-sequence of partitions \((\rho ^n_{L,S} \omega )_{n \in \mathbb N }\) is the successive \(\rho \)-refinement of the trivial partition \(\omega \), where \(\rho _{L,S}\) consists of \(L+S\) intervals such that the first \(L>0\) intervals of \(\rho _{L,S}\) have length \(\beta \), and the remaining \(S \ge 0\) intervals have length \(\beta ^2\). Note that necessarily \(L\beta +S\beta ^2 = 1\) holds, and consequently for each pair \((L,S)\) of parameters, there exists exactly one corresponding number \(\beta \in (0,1)\).

It can easily be seen that for every \(n\), the partition \(\rho ^n_{L,S} \omega \) consists only of intervals having either length \(\beta ^n\) or \(\beta ^{n+1}\). This fact makes the analysis of LS-sequences relatively simple, in comparison with the analysis of general \(\rho \)-refinements. We denote by \(t_n\) the total number of intervals of \(\rho ^n_{L,S} \omega \), and correspondingly, let \(l_n\) and \(s_n\) be the number of long and short intervals after \(n\) steps, respectively (more precisely, \(l_n\) is the number of intervals of length \(\beta ^n\), and \(s_n\) is the number of intervals of length \(\beta ^{n+1}\)). It is easy to see that these sequences satisfy the following recurrence relations (see [5]):

$$\begin{aligned} t_n&= L t_{n - 1} + S t_{n - 2},\\ l_n&= L l_{n - 1} + S l_{n - 2},\\ s_n&= L s_{n - 1} + S s_{n - 2}, \end{aligned}$$

for \(n \ge 2\), where \(t_1 = L + S, t_0 = 1, l_1 = L, l_0 = 1, s_1 = S\) and \(s_0 = 0\). Solving these binary recurrences, we obtain explicit formulas for \(t_n,l_n\) and \(s_n\):

$$\begin{aligned} t_n=\tau _0 \beta ^{-n}+\tau _1(-S\beta )^n,\qquad \tau _0&= \frac{L+2S+\sqrt{L^2+4S}}{2\sqrt{L^2+4S}},\nonumber \\ \tau _1&= \frac{-L-2S+\sqrt{L^2+4S}}{2\sqrt{L^2+4S}},\end{aligned}$$
(1)
$$\begin{aligned} l_n=\lambda _0 \beta ^{-n}+\lambda _1(-S\beta )^n,\qquad \lambda _0&= \frac{L+\sqrt{L^2+4S}}{2\sqrt{L^2+4S}},\nonumber \\ \lambda _1&= \frac{-L+\sqrt{L^2+4S}}{2\sqrt{L^2+4S}},\end{aligned}$$
(2)
$$\begin{aligned} s_n=\sigma _0 \beta ^{-n}+\sigma _1(-S\beta )^n,\qquad \sigma _0&= \frac{2S+\sqrt{L^2+4S}}{2\sqrt{L^2+4S}},\nonumber \\ \sigma _1&= \frac{-2S+\sqrt{L^2+4S}}{2\sqrt{L^2+4S}}. \end{aligned}$$
(3)
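
As a quick numerical sanity check of the closed forms (1) and (2) (the check for (3) is analogous), one may compare them with the recurrence relations. In the following Python sketch, the only assumption is that \(\beta \) is the positive solution of \(L\beta +S\beta ^2=1\), i.e., \(\beta =\frac{-L+\sqrt{L^2+4S}}{2S}\) for \(S\ge 1\).

from math import sqrt

L, S = 4, 1                                   # any parameters with L >= 1, S >= 1
disc = sqrt(L * L + 4 * S)
beta = (-L + disc) / (2 * S)                  # positive root of L*x + S*x^2 = 1

tau0, tau1 = (L + 2 * S + disc) / (2 * disc), (-L - 2 * S + disc) / (2 * disc)
lam0, lam1 = (L + disc) / (2 * disc), (-L + disc) / (2 * disc)

t, l = [1, L + S], [1, L]                     # t_0, t_1 and l_0, l_1
for n in range(2, 12):
    t.append(L * t[n - 1] + S * t[n - 2])
    l.append(L * l[n - 1] + S * l[n - 2])

for n in range(12):
    assert abs(t[n] - (tau0 * beta ** (-n) + tau1 * (-S * beta) ** n)) < 1e-6
    assert abs(l[n] - (lam0 * beta ** (-n) + lam1 * (-S * beta) ** n)) < 1e-6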

We can generate a sequence of points from a sequence of partitions by ordering the left endpoints of the intervals in the partition. The following rule by Carbone [5] defines the so-called LS-sequence of points.

Definition 4

(LS-sequence of points) Given an LS-sequence of partitions \((\rho ^n_{L,S} \omega )_{n \in \mathbb N }\), we define the corresponding LS-sequence of points \((\xi _{L,S}^n)_{n \in \mathbb N }\) as follows: the first \(t_1\) points are the left endpoints of the partition \(\rho _{L,S} \omega \) ordered by magnitude. We denote this ordered set of points by \(\Lambda ^1_{L,S}\).

For \(n \ge 1\), we define \(\Lambda ^{n + 1}_{L,S} = \{ \xi _{L,S}^0, \ldots , \xi _{L,S}^{t_{n + 1}-1} \}\) inductively as the ordered set of the left endpoints of the intervals of \(\rho ^{n+1}_{L,S} \omega \) in the following way:

$$\begin{aligned} \Lambda ^{n + 1}_{L,S}&= \left\{ \xi _{L,S}^0, \ldots , \xi _{L,S}^{t_{n}-1},\right. \\&\psi _{1,0}^{n + 1}\left( \xi _{L,S}^0 \right) , \ldots , \psi _{1,0}^{n + 1}\left( \xi _{L,S}^{l_n-1} \right) , \ldots , \psi _{L,0}^{n + 1}\left( \xi _{L,S}^0 \right) , \ldots , \psi _{L,0}^{n + 1}\left( \xi _{L,S}^{l_n-1} \right) ,\\&\left. \psi _{L,1}^{n + 1}\left( \xi _{L,S}^0 \right) , \ldots , \psi _{L,1}^{n + 1}\left( \xi _{L,S}^{l_n-1} \right) , \ldots , \psi _{L,S-1}^{n + 1}\left( \xi _{L,S}^0 \right) , \ldots , \psi _{L,S-1}^{n + 1}\left( \xi _{L,S}^{l_n-1} \right) \right\} , \end{aligned}$$

where

$$\begin{aligned} \psi ^n_{i,j}(x) = x + i \beta ^n + j \beta ^{n + 1}. \end{aligned}$$

For more details on the definition of LS-sequences of points, and on the properties of such sequences, see [4, 5].
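
For concreteness, the construction of Definition 4 can be transcribed directly into code. The following Python sketch (an illustration; the function name, the treatment of the case \(S=0\) via \(\beta =1/L\), and the equivalent grouping of the maps as \(\psi ^{n+1}_{i,0}\), \(1\le i\le L-1\), followed by \(\psi ^{n+1}_{L,j}\), \(0\le j\le S-1\), are our own choices) returns the point set \(\Lambda ^{n}_{L,S}\) for a prescribed number of refinement steps \(n\).

from math import sqrt

def ls_points(L, S, steps):
    # Returns the first t_steps points xi^0, ..., xi^{t_steps - 1} of the
    # LS-sequence with parameters (L, S), following Definition 4.
    beta = 1.0 / L if S == 0 else (-L + sqrt(L * L + 4 * S)) / (2 * S)
    # Lambda^1: left endpoints of rho_{L,S} omega, ordered by magnitude
    pts = [i * beta for i in range(L)] + [L * beta + j * beta ** 2 for j in range(S)]
    l_prev, l_cur = 1, L                      # l_0 and l_1
    for n in range(1, steps):
        first = pts[:l_cur]                   # the first l_n points
        # new points: psi_{i,0}, i = 1..L-1, then psi_{L,j}, j = 0..S-1,
        # each applied to the first l_n points (same order as in Definition 4)
        pts += [x + i * beta ** (n + 1) for i in range(1, L) for x in first]
        pts += [x + L * beta ** (n + 1) + j * beta ** (n + 2) for j in range(S) for x in first]
        l_prev, l_cur = l_cur, L * l_cur + S * l_prev
    return pts

# Example: ls_points(1, 1, 3) returns [0, beta, beta^2, beta^3, beta + beta^3]
# with beta = (sqrt(5) - 1) / 2.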

Next, we recall the definition of the well-known van der Corput sequence in base \(b \ge 2\), \(b \in \mathbb N \). For every \(n \in \mathbb N _0 \), the unique digit expansion of \(n\) in base \(b\) is given by

$$\begin{aligned} n = \sum _{i \ge 0} n_i b^i, \end{aligned}$$

where \(n_i \in \{0,1, \ldots , b-1\}, i \ge 0\).

For \(n \in \mathbb N _0\), we define the radical-inverse function (or Monna map) \(\phi _b :\mathbb N _0 \rightarrow [0,1)\) by

$$\begin{aligned} \phi _b(n) = \phi _b \left( \sum _{i \ge 0} n_i b^i \right) := \sum _{i \ge 0} n_i b^{-i-1}. \end{aligned}$$
(4)

We call \(x\) a \(b\)-adic rational if \(x=a b^{-c}\), where \(a\) and \(c\) are integers with \(c \ge 1\) and \(0 \le a < b^c\). Note that \(\phi _b\) maps \(\mathbb N _0\) onto the \(b\)-adic rationals in \([0,1)\), and therefore, the image of \(\mathbb N _0\) under \(\phi _b\) is dense in \([0,1)\).
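
The radical-inverse function (4) admits the following short Python sketch (the function name is our choice); the Halton sequence discussed below is obtained by applying it coordinatewise in pairwise coprime bases.

def radical_inverse(n, b):
    # phi_b(n): reflect the base-b digit expansion of n at the radix point, cf. (4).
    x, factor = 0.0, 1.0 / b
    while n > 0:
        n, digit = divmod(n, b)
        x += digit * factor
        factor /= b
    return x

# Example: radical_inverse(6, 2) == 0.375, since 6 = (110)_2 and phi_2(6) = (0.011)_2.
# A two-dimensional Halton point: (radical_inverse(n, 2), radical_inverse(n, 3)).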

Definition 5

The van der Corput sequence in base \(b\) is defined as \((\phi _b(n))_{n \in \mathbb N }\).

Note that the definition of the van der Corput sequence in base \(b \ge 2\) coincides with the definition of the LS-sequence of points with parameters \(L=b\) and \(S=0\). Thus, LS-sequences can be seen as a generalization of the van der Corput sequence. A remarkable property of van der Corput sequences is that several van der Corput sequences in pairwise coprime bases can be combined coordinatewise into a multidimensional sequence, the so-called Halton sequence, which is a low-discrepancy sequence. As mentioned above, this means that the discrepancy of a Halton sequence is of asymptotic order \(\mathcal O (N^{-1} (\log N)^d)\), where \(N\) is the number of points and \(d\) denotes the dimension, which together with the Koksma–Hlawka inequality makes it a perfect candidate for Quasi-Monte Carlo integration (for details on the properties of van der Corput and Halton sequences, see [8, 12]).

Several authors consider so-called \(\beta \)-adic van der Corput sequences, which are not equal to LS-sequences, although very similar. Barat and Grabner [3] and Ninomiya [13] showed that such sequences are one-dimensional low-discrepancy sequences if \(\beta \) is a Pisot number with irreducible \(\beta \)-polynomial. Furthermore, the connection between \(\beta \)-adic van der Corput sequences and special numeration systems has been investigated by Grabner et al. [9] and Steiner [15]. They use the so-called Dumont–Thomas expansion, where \(G = (G_n)_{n \ge 0}\) is a linear recurring base sequence, \(\beta \) is the corresponding characteristic root, and the points of the \(\beta \)-adic van der Corput sequence are obtained by reflecting the \(G\)-ary expansion of every integer at the radix point, written in a \(\beta \)-adic number system. Unfortunately, this procedure cannot be applied to LS-sequences except when \(L=S=1\). In Lemma 3, we will present a similar, but slightly more complicated algorithm which is tailor-made for the construction of LS-sequences.

If we assume \(S \ge 1\), then by a result of Carbone [5] a one-dimensional LS-sequence is a low-discrepancy sequence if and only if \(L > S - 1\). Thus, it is tempting to assume that several LS-sequences can be combined coordinatewise in order to obtain multidimensional low-discrepancy sequences. If this were the case, then this method would produce a new parametric class of multidimensional low-discrepancy sequences. However, even in the case of the combination of van der Corput sequences (which are a special case of LS-sequences, as mentioned before), the bases \(b_1, \dots , b_d\) cannot be chosen arbitrarily, but have to satisfy a certain number-theoretic condition (they have to be pairwise coprime). A similar restriction can be expected in the case of combining LS-sequences.

In a talk in Graz in June 2012, Maria Rita Iacò presented several numerical examples of the asymptotic distribution of two-dimensional LS-sequences. In some cases, the points showed a “random,” evenly scattered behavior, while in others (for example, when combining the sequence with parameters (1,1) and the sequence with parameters (4,1)), the distribution seemed to be rather erratic. Obviously, the reason for this behavior is that there is a multiplicative relation between the solutions of the equations \(x+x^2=1\) and \(4x+x^2=1\), which define the lengths of the intervals for the LS-sequences with parameters (1,1) and (4,1), respectively. The purpose of the present paper is to prove that the two-dimensional LS-sequence is in fact not uniformly distributed (and not even dense) in \([0,1]^2\) if such a multiplicative relation exists. Furthermore, in a second theorem, we show that if the parameters of two one-dimensional LS-sequences have a greatest common divisor (\(\gcd \)) greater than 1, then the resulting two-dimensional LS-sequence is also not dense in \([0,1]^2\). This second result generalizes the requirement that the bases of the van der Corput sequences have to be coprime in order to obtain a low-discrepancy Halton sequence by joining them coordinatewise.

The formal definition of a multidimensional LS-sequence can be given as follows.

Definition 6

Let \(\mathcal B =((L_1,S_1),\ldots ,(L_d,S_d))\) be an ordered \(d\)-tuple of pairs \((L_i,S_i)\) such that \(L_i>0, ~S_i\ge 0\) and \(L_i + S_i \ge 2\) for all \(i\). Then, we define the \(d\)-dimensional LS-sequence in base \(\mathcal B \) as the sequence

$$\begin{aligned} \mathbf \xi _\mathcal B ^n=\left( \xi _{L_1,S_1}^n,\ldots ,\xi _{L_d,S_d}^n\right) _{n \in \mathbb N }. \end{aligned}$$

The following theorem states that a two-dimensional LS-sequence in base \(\mathcal B =((L_1,S_1),(L_2,S_2))\), whose one-dimensional components are low-discrepancy sequences, is not dense in \([0,1]^2\) if there exist integers \(m\) and \(k\) such that \(\frac{\beta _1^{k+1}}{\beta _2^{m+1}}\in \mathbb{Q }\). For example, in the case \((L_1,S_1)=(1,1)\) and \((L_2,S_2)=(4,1)\), we have \(\beta _2=\beta _1^3\).
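
Indeed, \(\beta _1=\frac{\sqrt{5}-1}{2}\) is the solution of \(x+x^2=1\) and \(\beta _2=\sqrt{5}-2\) is the solution of \(4x+x^2=1\), and a direct computation gives \(\beta _1^3=\left( \frac{\sqrt{5}-1}{2}\right) ^3=\sqrt{5}-2=\beta _2\); hence, the assumption of the following theorem is satisfied with \(k=2\) and \(m=0\).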

Theorem 1

Let \(\mathcal B =((L_1,S_1),(L_2,S_2))\) with \(L_i> S_i-1\ge 0\) and assume that there exist integers \(m\) and \(k\) such that \(\frac{\beta _1^{k+1}}{\beta _2^{m+1}}\in \mathbb{Q }\). Then, the two-dimensional LS-sequence \(\mathbf \xi _\mathcal B ^n\) is not uniformly distributed, and not even dense in \([0,1]^2\).

On the other hand, we have not been able to derive any positive results proving uniform distribution of an LS-sequence for an appropriate choice of \(L_1,S_1\) and \(L_2,S_2\) (except for the case of the Halton sequence). To date, not a single example of parameters \(L_1,S_1,L_2,S_2\) with \(S_1 \ne 0\) or \(S_2 \ne 0\) is known for which the corresponding two-dimensional LS-sequence is uniformly distributed.

Note that Theorem 1 can also be applied in the multidimensional case, since for any uniformly distributed multidimensional sequence of points, all lower-dimensional projections also have to be uniformly distributed. More precisely, we immediately get the following corollary.

Corollary 1

Let \(\mathcal B =((L_1,S_1),\dots ,(L_d,S_d))\) with \(L_i> S_i-1\ge 0\) and assume that there exist numbers \(u,w \in \{1, \dots , d\}\) with \(u \ne w\) and integers \(m\) and \(k\) such that \(\frac{\beta _u^{k+1}}{\beta _w^{m+1}}\in \mathbb{Q }\). Then, the \(d\)-dimensional LS-sequence \(\mathbf \xi _\mathcal B ^n\) is not uniformly distributed, and not even dense in \([0,1]^d\).

The next theorem characterizes another class of two-dimensional LS-sequences, which are not dense in \([0,1]^2\).

Theorem 2

Let \(\mathcal B =((L_1,S_1),(L_2,S_2))\) and assume that \(\gcd (L_1, S_1, L_2, S_2) > 1\). Then, the two-dimensional LS-sequence \(\mathbf \xi _\mathcal B ^n\) is not dense in \([0,1]^2\).

Note that Theorem 2 also covers the case \(S_1=S_2=0\) of the classical Halton construction, i.e., coordinatewise combinations of van der Corput sequences in bases which are not coprime. Furthermore, in Theorem 1, we have to require that every one-dimensional component is a low-discrepancy sequence, which is not the case in Theorem 2. Moreover, it is easily seen that Theorem 1 and Theorem 2 do not fully contain one another: for example, the LS-sequences with parameters (1,1) and (4,1) satisfy the conditions of Theorem 1, but not those of Theorem 2; for the LS-sequences with parameters (4,2) and (2,2), it is the other way around. As above, we can state a corollary which describes the \(d\)-dimensional situation.

Corollary 2

Let \(\mathcal B =((L_1,S_1),\dots ,(L_d,S_d))\) and assume that \(\gcd (L_i, S_i, L_j, S_j) > 1\) for some \(i,j \in \{1, \ldots , d\}, ~ i \ne j\). Then, the \(d\)-dimensional LS-sequence \(\mathbf \xi _\mathcal B ^n\) is not dense in \([0,1]^d\).

In the next section, we provide several auxiliary results concerning one-dimensional LS-sequences. These lemmas will be essential in the proofs of Theorem 1 and Theorem 2, which are presented in Sect. 3.

Remark 1

It seems to be possible to formulate number-theoretic conditions on \(\beta _1\) and \(\beta _2\) which ensure that the two-dimensional combination of two LS-sequences is u.d. or even a low-discrepancy sequence in \([0,1)^2\). Intensive numerical investigation suggests that it is sufficient to assume \(L_i> S_i-1\ge 0\) for \(i = 1,2,\) and that the assumptions of the two theorems above do not hold. Nevertheless, a rigorous proof seems to be difficult since, for example, an application of the Chinese remainder theorem, as in the classical Halton case, is not possible. This will be the subject of further research.

2 Points in elementary intervals

Before we define elementary intervals and prove some of their properties, we obtain the following recurrence relations for the sequences \(t_n\) and \(l_n\).

Lemma 1

We have

$$\begin{aligned} t_{n+1}=t_n+(L+S-1)l_n \quad \text{ and}\quad l_{n+1}=t_n+(L-1)l_n \end{aligned}$$

for all \(n\ge 0\).

Proof

This follows immediately from the explicit formulas (1) and (2), or by induction with respect to \(n\) using the recurrence relations. \(\square \)

We will also need a lemma on the irrationality of \(\beta \).

Lemma 2

Let \(L>S-1\ge 0\). Then \(\beta ^k\) is irrational for every positive integer \(k\).

Proof

First, we prove that \(\beta \) is irrational. Since \(\beta =\frac{-L+\sqrt{L^2+4S}}{2S}\), it suffices to prove that \(L^2+4S\) is not a perfect square. To this end, note that \(L\ge S\ge 1\) implies

$$\begin{aligned} L^2<L^2+4S<L^2+4L+4=(L+2)^2, \end{aligned}$$

so if \(L^2+4S\) were a perfect square, we would necessarily have \(L^2+4S=(L+1)^2\), i.e., \(S=\frac{2L+1}{4}\not \in \mathbb{Z }\), a contradiction. Hence, \(\beta \) is irrational.

Now, suppose that \(\beta ^k\) is rational for some \(k>1\), say \(\beta ^k = r\) with \(r\in \mathbb{Q }\). Since \(\beta \) is a quadratic irrational, the same relation \((\beta ^{\prime })^k=r\) holds for the conjugate \(\beta ^{\prime }=\frac{-L-\sqrt{L^2+4S}}{2S}\) of \(\beta \), which forces \(|\beta |=|\beta ^{\prime }|\). However, since \(\beta =\frac{-L+\sqrt{L^2+4S}}{2S}\), we have \(|\beta |<|\beta ^{\prime }|\) whenever \(L>0\), which is a contradiction. \(\square \)

We call an interval elementary if it is an element of \(\rho ^n_{L,S} \omega \) for some \(n\). Equivalently, we can define elementary intervals as all intervals of the form \(I_x^{(k)}=[\xi _{L,S}^x,\xi _{L,S}^x+\beta ^k)\) for some \(k\), where \(x <l_k\). If \([\xi _{L,S}^x,\xi _{L,S}^x+\beta ^k)\) is an elementary interval, then there necessarily exists an integer \(y<t_k\) such that \(\xi _{L,S}^x+\beta ^k=\xi _{L,S}^y\). Note that in the case of the van der Corput sequence, the elementary intervals defined in this way coincide with the usual ones.

In order to count points in elementary intervals, we need a method to decide whether a point \(\xi _{L,S}^N\) is contained in some given elementary interval or not. In the case of the van der Corput sequence, this can be achieved by considering digit expansions as in (4), which motivates the following construction. Let \(N\ge 0\) be a fixed integer and let \(n\) be such that \(t_n\le N < t_{n+1}\). We construct two sequences \((\epsilon _k)_{0 \le k \le n}\) and \((\eta _k)_{0 \le k \le n}\) recursively in the following way: We put

$$\begin{aligned} N_n=N, \quad \epsilon _n=1, \quad \eta _n=\left\lfloor \frac{N_n-t_n}{l_n}\right\rfloor \quad \text{ and} \quad N_{n-1}=N_n-t_n-\eta _nl_n. \end{aligned}$$

For \(k \le n-1\), if \(N_{k}<t_{k}\), we put \(\epsilon _{k}=\eta _{k}=0\) and \(N_{k-1}=N_{k}\). Otherwise, we proceed as in the initial construction, i.e., put \(\epsilon _k=1\), \(\eta _k=\lfloor \frac{N_k-t_k}{l_k}\rfloor \) and \(N_{k-1}=N_k-t_k-\eta _kl_k\). If \(N_{k-1}=0\), we terminate and put \(\epsilon _i=\eta _i=0\) for all \(i<k\). Since \(t_{k+1}=t_k+(L+S-1)l_k\) and \(l_{k+1}\ge t_k\ge l_k\) for all \(k\ge 0\), this algorithm yields a representation of \(N\) in the form

$$\begin{aligned} N=\sum _{i=0}^n (\epsilon _it_i+\eta _il_i), \end{aligned}$$
(5)

where \(\epsilon _i\in \{0,1\}\), \(0\le \eta _i \le L+S-2\), and \(\epsilon _i=0\) implies \(\eta _i=0\). Furthermore, since \(t_k+(L-1)l_k=l_{k+1}\), it follows that \(\eta _i \ge L - 1\) implies \(\epsilon _{i + 1} = 0\).

Note that the representation (5) is not unique. Consider, e.g., the case \(L=2,S=1\); then, we have \(t_2+l_2=12=t_2+t_1+l_1\). However, whenever we speak of a representation in the rest of this paper, we always mean the representation whose coefficients \(\epsilon _i\) and \(\eta _i\) are constructed by the algorithm described above (i.e., in the above example, the representation we choose is \(12=t_2+l_2\)).
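
The algorithm producing the coefficients \(\epsilon _i\) and \(\eta _i\) is straightforward to implement. The following Python sketch (an illustration; the function name is our choice) computes the representation (5) of a positive integer \(N\) and reproduces, for \(L=2\), \(S=1\) and \(N=12\), the representation \(12=t_2+l_2\) chosen above.

def ls_representation(N, L, S):
    # Coefficients (eps_i, eta_i), i = 0, ..., n, of the representation (5) of
    # N >= 1, computed by the algorithm described above.
    t, l = [1, L + S], [1, L]
    while t[-1] <= N:                         # build t_i and l_i until t_{n+1} > N
        t.append(L * t[-1] + S * t[-2])
        l.append(L * l[-1] + S * l[-2])
    n = len(t) - 2                            # now t_n <= N < t_{n+1}
    eps, eta = [0] * (n + 1), [0] * (n + 1)
    rem = N
    for k in range(n, -1, -1):
        if rem >= t[k]:
            eps[k] = 1
            eta[k] = (rem - t[k]) // l[k]
            rem -= t[k] + eta[k] * l[k]
    return eps, eta                           # N == sum(eps[i]*t[i] + eta[i]*l[i])

# Example from above: for L = 2, S = 1, N = 12 this returns
# eps = [0, 0, 1] and eta = [0, 0, 1], i.e., 12 = t_2 + l_2.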

In order to establish unique “digit expansions,” we use the following Lemma.

Lemma 3

There is a bijection between positive integers and finite sequences of the form

$$\begin{aligned} \mathcal{D }=((\epsilon _n,\eta _n),\ldots ,(\epsilon _0,\eta _0)) \end{aligned}$$

such that \(\epsilon _i\in \{0,1\}\), \(\epsilon _n=1\), \(0\le \eta _i \le L+S-2\), \(\epsilon _i=0\) implies \(\eta _i=0\) and \(\epsilon _i=1, \eta _i\ge L-1\) implies \(\epsilon _{i+1}=0\). This bijection is given by

$$\begin{aligned} \varPsi (\mathcal{D })=\sum _{i=0}^n (\epsilon _it_i+\eta _il_i) \end{aligned}$$

and its inverse

$$\begin{aligned} \varPsi ^{-1}(N)=((\epsilon _n,\eta _n),\ldots ,(\epsilon _0,\eta _0)), \end{aligned}$$

where the \(\epsilon _i\) and \(\eta _i\) are computed by the algorithm described above.

Proof

Let \(N>0\) be an integer. Note that \(\varPsi ^{-1}\) is injective since by construction, we have \(N_k<t_{k+1}\), and therefore in every step of the algorithm, the pair \((\epsilon _k,\eta _k)\) is uniquely determined.

It remains to prove that \(\varPsi ^{-1}(\varPsi (\mathcal{D }))=\mathcal{D }\). We prove this by induction on the length \(n+1\) of \(\mathcal{D }\). The case \(n=0\) is trivial, since \(\varPsi (\mathcal{D })=\epsilon _0+\eta _0<t_1\), and applying \(\varPsi ^{-1}\) indeed yields \(\mathcal{D }\). Let us assume that \(\varPsi ^{-1}(\varPsi (\mathcal{D }))=\mathcal{D }\) for all sequences \(\mathcal{D }\) of length \(\le n\). In particular, this implies \(\varPsi (\mathcal{D })<t_n\) for all \(\mathcal{D }\) of length \(\le n\). Assume now that \(\mathcal{D }\) is of length \(n+1\).

First, we prove that \(N=\varPsi (\mathcal{D })<t_{n+1}\). Since \(\epsilon _n=1\), we have \(\eta _{n-1}\le L-2\), and by induction, we know that for a sequence \(\mathcal{E }\) of length \(n\) starting with \((1,L-2)\), we have \(\varPsi (\mathcal{E })<t_{n-1}+ (L-1)l_{n-1}=l_n\). Hence, \(\varPsi (\mathcal{D })<t_n+(L+S-2)l_n+l_n=t_{n+1}\).

Now, let \(\mathcal{E }\) be the sequence of length \(n\) obtained from \(\mathcal{D }\) by deleting the entry \((\epsilon _n,\eta _n)\). Note that \(\epsilon _{n-1}\) might be zero, and thus, \(\mathcal{E }\) is not necessarily a valid output of the above algorithm. If \(\epsilon _i = 0\) for all \(i \le n-1\), then the proof is trivial. Assume now that at least one \(\epsilon _i > 0\) for \(i \le n-1\). Then, we can write \(\mathcal{E } = (\mathcal{E }_1,\mathcal{E }_2)\), where \(\mathcal{E }_1 = ((\epsilon _{n-1},\eta _{n-1}), \ldots , (\epsilon _{n-k},\eta _{n-k}))\) with \((\epsilon _i,\eta _i) = (0,0)\) for \(n-k \le i \le n-1\), and \(\mathcal{E }_2 =((\epsilon _{n-k-1},\eta _{n-k-1}), \ldots , (\epsilon _0,\eta _0))\) with \(\epsilon _{n-k-1} = 1\). We obtain that \(N=\varPsi (\mathcal{D })=t_n+\eta _n l_n+\varPsi (\mathcal{E }_2)\) and since \(\varPsi (\mathcal{E }_2)<l_n\), we know that \(\eta _n\) is the same integer which we obtain by applying our algorithm to \(N\). In particular, we have

$$\begin{aligned} \varPsi ^{-1}(\varPsi (\mathcal{D }))=((1,\eta _n), \mathcal{E }_1,\varPsi ^{-1}(\varPsi (\mathcal{E }_2))) =((1,\eta _n),\mathcal{E })=\mathcal{D }. \end{aligned}$$

\(\square \)

Using these digit expansions, we are able to prove arithmetic properties of LS-sequences. We start with a lemma which provides conditions under which a point \(\xi _{L,S}^N\) lies in a certain elementary interval.

Lemma 4

Let \(N\) be an integer with representation given in (5). Then,

$$\begin{aligned} \xi _{L,S}^N= \sum _{i=0}^n \left( \beta ^{i+1} \min \{L,\epsilon _i+\eta _i\}+\beta ^{i+2}\max \{\epsilon _i+\eta _i-L,0\}\right) . \end{aligned}$$
(6)

Moreover, let \(I_x^{(k)}=[\xi _{L,S}^x,\xi _{L,S}^x+\beta ^k)\), with \(x <l_k\), be an elementary interval. Then, \(\xi _{L,S}^N \in I_x^{(k)}\) if and only if

$$\begin{aligned} x=\sum _{i=0}^{k-1} (\epsilon _it_i+\eta _il_i) \end{aligned}$$

is the truncated representation of \(N\).

In addition, let \(A_x^{(k)}(N)=\sharp \{m\, :\, m\le N, \xi _{L,S}^m \in I_x^{(k)}\}\) and assume that

$$\begin{aligned} N=x+\sum _{i=k}^{n}(\epsilon _i t_{i}+\eta _i l_{i}). \end{aligned}$$

Then,

$$\begin{aligned} A_x^{(k)}(N)= \sum _{i=0}^{n-k}(\epsilon _{i+k} t_{i}+\eta _{i+k} l_{i}) +1. \end{aligned}$$

Proof

We start with the proof of (6). Due to Definition 4, we have

$$\begin{aligned} \xi _{L,S}^N=\beta ^{n+1}\min \{L,\epsilon _n+\eta _n\}+ \beta ^{n+2}\max \{\epsilon _n+\eta _n-L,0\}+\xi _{L,S}^{\tilde{N}} \end{aligned}$$

with

$$\begin{aligned} \tilde{N}=\sum _{i=0}^{n-1} (\epsilon _it_i+\eta _il_i). \end{aligned}$$

Repeating this argument inductively, we end up with (6).

Now, let \(N\) be of the form (5) and let

$$\begin{aligned} N_1 = \sum _{i=0}^{k-1} (\epsilon _it_i+\eta _il_i)\quad \text{ and} \quad N_2=\sum _{i=0}^{n-k} (\epsilon _{i+k}t_i+\eta _{i+k}l_i). \end{aligned}$$

Then by (6), we have \(\xi _{L,S}^N=\xi _{L,S}^{N_1}+\beta ^k\xi _{L,S}^{N_2}\). Since the LS-sequence only takes values in the interval \([0,1)\) and since two distinct points of an LS-sequence with index \(<l_k\) differ by at least \(\beta ^k\), this implies the second statement of the lemma. Moreover, by assumption, \(N_1 = x < l_k\) and therefore \(N_2\) can take all integer values, since we have no restriction on the digits \(\epsilon _k\) and \(\eta _k\) (see Lemma 3). Thus, \(N_2\) counts all points \(\xi _{L,S}^i\) in the interval \(I_x^{(k)}\) with \(x < i \le N\). Since \(\xi _{L,S}^x \in I_x^{(k)}\) is the first point that hits the interval \(I_x^{(k)}\), the last statement of the lemma is established. \(\square \)

Note that Lemma 4 would not hold in case of \(l_k\le x<t_k\), since this would imply \(\epsilon _k=0\) (see Lemma 3), and we would have serious restrictions for the digits of \(N_2\). Thus, \(N_2\) could not take all integer values.
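
For illustration, take \(L=S=1\) and \(N=4\): the algorithm above yields \(4=t_2+t_0\), formula (6) gives \(\xi _{1,1}^4=\beta +\beta ^3\), and truncating the representation at \(k=2\) gives \(x=t_0=1<l_2=2\), so that \(\xi _{1,1}^4\in I_1^{(2)}=[\beta ,\beta +\beta ^2)\), in accordance with Lemma 4; moreover, \(A_1^{(2)}(4)=\epsilon _2t_0+\eta _2l_0+1=2\), the two points in question being \(\xi _{1,1}^1=\beta \) and \(\xi _{1,1}^4\).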

Since a crucial point of the proof of Theorem 1 is to have a precise knowledge of \(A_x^{(k)}(N)\), the next lemma gives a further method to describe this quantity.

Lemma 5

Assume that \(x<l_k\). If \(\xi _{L,S}^N \in I_x^{(k)}\), then there exist integers \(A,B\) such that \(N=x+At_k+Bl_k\) and \(A_x^{(k)}(N)=1+A+B\).

Proof

By Lemma 4, we know that \(N\) is of the form

$$\begin{aligned} N = x+\sum _{i=k}^n (\epsilon _i t_i+\eta _il_i). \end{aligned}$$

Now by the recurrence relations for \(t_k\) and \(l_k\) and Lemma 1, we deduce that there are integers \(A,B\) such that \(N=x+At_k+Bl_k\).

Since \(\xi _{L,S}^x\in I_x^{(k)}\), the proof of the lemma is complete, if we can prove that there exist integers \(A,B\) such that

$$\begin{aligned} \sum _{i=k}^n (\epsilon _i t_i+\eta _il_i)=At_k+Bl_k\quad \text{ and} \quad \sum _{i=0}^{n-k} (\epsilon _{i+k}t_i+\eta _{i+k}l_i)=A+B, \end{aligned}$$

where \(\epsilon _i,\eta _i\) and \(n\ge k\) are arbitrary non-negative integers. We prove this assertion by induction on \(n-k\). The case \(n=k\) is trivial. We postpone checking the case \(n=k+1\) to the end of the proof. Now, let us assume that \(n-k\ge 2\). Using the recurrence relations for \(t_k\) and \(l_k\), we have

$$\begin{aligned} \sum _{i=k}^n (\epsilon _i t_i+\eta _il_i)=\sum _{i=k}^{n-1} (\tilde{\epsilon }_i t_i+\tilde{\eta }_il_i) \end{aligned}$$

and

$$\begin{aligned} \sum _{i=0}^{n-k} (\epsilon _{i+k} t_i+\eta _{i+k}l_i)=\sum _{i=0}^{n-k-1} (\tilde{\epsilon }_{i+k} t_i+\tilde{\eta }_{i+k}l_i), \end{aligned}$$

with

$$\begin{aligned}&\tilde{\epsilon }_i=\epsilon _i, \tilde{\eta }_i=\eta _i \quad \text{ for} \quad i=k,k+1,\ldots ,n-3 \quad \text{ and}\\&\tilde{\epsilon }_{n-2}=\epsilon _{n-2}+S\epsilon _n,\quad \tilde{\epsilon }_{n-1}=\epsilon _{n-1}+L\epsilon _n,\\&\tilde{\eta }_{n-2}=\eta _{n-2}+S\eta _n,\quad \tilde{\eta }_{n-1}=\eta _{n-1}+L\eta _n. \end{aligned}$$

The new representations have fewer summands, and by the induction hypothesis, we find appropriate integers \(A\) and \(B\).

It remains to check the case \(n-k=1\). Using Lemma 1, we get

$$\begin{aligned} \epsilon _kt_k&+ \eta _kl_k+\epsilon _{k+1}t_{k+1}+\eta _{k+1}l_{k+1}\\&\quad = \epsilon _kt_k+\eta _kl_k+\epsilon _{k+1}(t_k+l_k(L+S-1))+ \eta _{k+1}(t_k+l_k(L-1))\\&\quad = t_k\stackrel{:=A}{\overbrace{(\epsilon _k+\epsilon _{k+1}+\eta _{k+1})}}+ l_k\stackrel{:=B}{\overbrace{(\eta _k+\epsilon _{k+1} (t_1-1)+\eta _{k+1}(l_1-1))}}. \end{aligned}$$

But, now

$$\begin{aligned} A+B&= \epsilon _k+\epsilon _{k+1}+\eta _{k+1}+\eta _k+\epsilon _{k+1}(t_1-1)+ \eta _{k+1}(l_1-1)\\&= \epsilon _kt_0+\eta _kl_0+\epsilon _{k+1}t_1+\eta _{k+1}l_1 \end{aligned}$$

and we have found appropriate integers \(A\) and \(B\). \(\square \)

The next lemma is related to the discrepancy of one-dimensional LS-sequences. In particular, we are interested in an accurate formula for \(\frac{A_x^{(k)}(N)}{N}\), where \(\xi _{L,S}^N \in I_x^{(k)}\).

Lemma 6

Assume that \(N\) has a representation of the form (5), and assume that \(\xi _{L,S}^N \in I_x^{(k)}\). Then, we have

$$\begin{aligned} \frac{A_x^{(k)}(N)}{N} =\beta ^k+\frac{R(1-(-S\beta ^2)^k)+1-x\beta ^k}{N}, \end{aligned}$$

where

$$\begin{aligned} R=\sum _{i=k}^{n}(\epsilon _i \tau _1+\eta _i \lambda _1)(-S\beta )^{i-k}, \end{aligned}$$

which can be estimated by

$$\begin{aligned} |R|<\max \{|\tau _1|,|\tau _1+(L+S-2)\lambda _1|\} \frac{1-(S\beta )^{n-k+1}}{1-S\beta } \end{aligned}$$

if \(S\beta \ne 1\), and

$$\begin{aligned} |R|<\max \{|\tau _1|,|\tau _1+(L+S-2)\lambda _1|\}\max \{n-k+1,0\} \end{aligned}$$

if \(S\beta =1\).

Proof

Using our assumptions and Lemma 4, we can calculate the exact values of \(A_x^{(k)}(N)\) and \(N\). In fact, we have

$$\begin{aligned} N=x + \sum _{i=0}^{n}(\epsilon _i t_{i}+\eta _i l_{i})\quad \text{ and} \quad A_x^{(k)}(N)=\sum _{i=k}^{n}(\epsilon _i t_{i-k}+\eta _i l_{i-k}) +1. \end{aligned}$$

This yields

$$\begin{aligned} \frac{A_x^{(k)}(N)}{N}&= \frac{\sum _{i=k}^{n}(\epsilon _i t_{i-k}+\eta _i l_{i-k}) +1}{\sum _{i=k}^{n}(\epsilon _i t_{i}+\eta _i l_{i}) +x}\\&= \frac{\sum _{i=k}^{n}(\epsilon _i \tau _0+\eta _i \lambda _0)\beta ^{-i+k}+\overset{R}{\overbrace{\sum \nolimits _{i=k}^{n}(\epsilon _i \tau _1+\eta _i \lambda _1)(-S\beta )^{i-k}}}+1}{\beta ^{-k}\sum _{i=k}^{n}(\epsilon _i \tau _0+\eta _i \lambda _0)\beta ^{-i+k}+(-S\beta )^k\underset{R}{\underbrace{\sum \nolimits _{i=k}^{n}(\epsilon _i \tau _1+\eta _i \lambda _1)(-S\beta )^{i-k}}}+x}\\&= \beta ^k+\frac{R+1}{N}+\frac{\sum _{i=k}^{n}(\epsilon _i \tau _0+\eta _i \lambda _0)\beta ^{-i+k}-N\beta ^k}{N}\\&= \beta ^k+\frac{R+1}{N}+\frac{\sum _{i=k}^{n}(\epsilon _i \tau _0+\eta _i \lambda _0)\beta ^{-i+k}}{N}\\&\quad -\frac{\left( \beta ^{-k}\sum _{i=k}^{n}(\epsilon _i \tau _0+\eta _i \lambda _0)\beta ^{-i+k}+(-S\beta )^kR+x\right) \beta ^k}{N}\\&= \beta ^k+\frac{R(1-(-S\beta ^2)^k)+1-x\beta ^k}{N}. \end{aligned}$$

Thus, it remains to estimate \(R\). Note that \(\tau _1<0\) and

$$\begin{aligned} \left| \epsilon _i \tau _1 + \eta _i \lambda _1 \right| \le \max \{|\tau _1|,|\tau _1+(L+S-2)\lambda _1|\} \end{aligned}$$

for \(i = k, \ldots , n\). Hence to complete the proof of Lemma 6, we only have to compute the geometric sum

$$\begin{aligned} \sum _{i=0}^{n-k}|-S\beta |^{i}. \end{aligned}$$

We have to take absolute values in order to estimate \(R\). \(\square \)

Note that Lemma 6 gives a constant bound for \(|R|\) for \(n \rightarrow \infty \) if and only if \(S \beta < 1\), which is equivalent to \(L > S - 1\). This is exactly the case when we have a one-dimensional low-discrepancy LS-sequence.
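
Indeed, \(S\beta =\frac{-L+\sqrt{L^2+4S}}{2}<1\) is equivalent to \(\sqrt{L^2+4S}<L+2\), i.e., to \(4S<4L+4\), which is precisely the condition \(L>S-1\).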

3 Proofs of main results

We start with the proof of Theorem 1. According to Definition 6, we define the recurrences

$$\begin{aligned} t_n^{(i)}&= L_i t_{n - 1}^{(i)} + S_i t_{n - 2}^{(i)}, \qquad t_0^{(i)}=1, \qquad t_1^{(i)}=L_i+S_i,\\ l_n^{(i)}&= L_i l_{n - 1}^{(i)} + S_i l_{n - 2}^{(i)}, \qquad l_0^{(i)}=1, \qquad l_1^{(i)}=L_i,\\ s_n^{(i)}&= L_i s_{n - 1}^{(i)} + S_i s_{n - 2}^{(i)}, \qquad s_0^{(i)}=0, \qquad s_1^{(i)}=S_i, \end{aligned}$$

which correspond to the number of intervals, long intervals and short intervals after \(n\) refinement steps in the \(i\)-th component of the two-dimensional LS-sequence, for \(i=1,2\).

Now, let \(k\) and \(m\) be integers satisfying the assumptions of Theorem 1 and assume that the integers \(\tilde{k}\) and \(\tilde{m}\) are “large” (a precise condition will be given later). Furthermore, choose two integers \(x_1<l^{(1)}_k\) and \(x_2<l^{(2)}_m\) such that \(x_1\ne x_2\). In the sequel, we will consider the intervals \(I=I_{x_1}^{(k)} \times I_{x_2}^{(m)}\) and \(\tilde{I} = I_{x_1}^{(k+\tilde{k})} \times I_{x_2}^{(m+\tilde{m})}\). We want to prove that no point of the two-dimensional LS-sequence is contained in the interval \(\tilde{I}\). Note that \(\tilde{I} \subset I\), and consequently, a point can only be contained in \(\tilde{I}\) if it is also contained in \(I\). Now let \(N\) be given, and assume that \(\mathbf \xi _\mathcal B ^N \in I\). We will also assume that \(\mathbf \xi _\mathcal B ^N \in \tilde{I}\) and show that this leads to a contradiction (provided \(\tilde{k}\) and \(\tilde{m}\) are sufficiently large). Consequently, no point of the sequence \((\mathbf \xi _\mathcal B ^n)_{n \in \mathbb N }\) can be contained in \(\tilde{I}\).

Due to Lemma 5, \(\xi _{L_1,S_1}^N \in I_{x_1}^{(k)}\) implies that there exist integers \(A_1,B_1\) such that \(N=x_1+A_1t^{(1)}_k+B_1l^{(1)}_k\). Similarly, \(\xi _{L_2,S_2}^N \in I_{x_2}^{(m)}\) implies the existence of integers \(A_2,B_2\) such that \(N=x_2+A_2t^{(2)}_m+B_2l^{(2)}_m\). Thus,

$$\begin{aligned} x_1+A_1t^{(1)}_k+B_1l^{(1)}_k=x_2+A_2t^{(2)}_m+B_2l^{(2)}_m. \end{aligned}$$
(7)

Moreover, choosing \(A_1\) and \(B_1\) according to Lemma 5, we know that there are exactly \(1+A_1+B_1\) points with index \(\le N\) lying in the interval \(I_{x_1}^{(k)}\). Therefore, Lemma 6 yields

$$\begin{aligned} \frac{A_1+B_1+1}{x_1+A_1t^{(1)}_k+B_1l^{(1)}_k}= \beta _1^k+\frac{R_1(1-(-S_1\beta _1^2)^k)+1-x_1\beta _1^k}{N}. \end{aligned}$$

Multiplying both sides with \(N=x_1+A_1t^{(1)}_k+B_1l^{(1)}_k\) and solving for \(B_1\), we obtain

$$\begin{aligned} B_1=A_1\frac{t^{(1)}_k\beta _1^k-1}{1-l^{(1)}_k\beta _1^k}+ \frac{R_1(1-(-S_1\beta _1^2)^k)}{1-l^{(1)}_k\beta _1^k}. \end{aligned}$$
(8)

A similar argument for the second component yields

$$\begin{aligned} B_2=A_2\frac{t^{(2)}_m\beta _2^m-1}{1-l^{(2)}_m\beta _2^m}+ \frac{R_2(1-(-S_2\beta _2^2)^m)}{1-l^{(2)}_m\beta _2^m}. \end{aligned}$$
(9)

Now, we resubstitute Eqs. (8) and (9) into (7) and obtain

$$\begin{aligned} x_1&+ A_1\frac{t^{(1)}_k-l^{(1)}_k}{1-l^{(1)}_k\beta _1^k}+ \frac{R_1l^{(1)}_k(1-(-S_1\beta _1^2)^k)}{1-l^{(1)}_k\beta _1^k}\nonumber \\&= x_2+A_2\frac{t^{(2)}_m-l^{(2)}_m}{1-l^{(2)}_m\beta _2^m}+ \frac{R_2l^{(2)}_m(1-(-S_2\beta _2^2)^m)}{1-l^{(2)}_m\beta _2^m}. \end{aligned}$$
(10)

Now, let us investigate the quantities \(\frac{t_k-l_k}{1-l_k\beta ^k}\) and \(\frac{l_k(1-(-S\beta ^2)^k)}{1-l_k\beta ^k}\).

Lemma 7

We have

$$\begin{aligned} \frac{t_k-l_k}{1-l_k\beta ^k}=\beta ^{-k-1} \end{aligned}$$

and

$$\begin{aligned} \frac{l_k(1-(-S\beta ^2)^k)}{1-l_k\beta ^k}= \frac{\lambda _0}{\lambda _1}\beta ^{-k}+(-S\beta )^k. \end{aligned}$$

Proof

Using the explicit formulas (1) and (2) for the recurrences \(t_k\) and \(l_k\), respectively, we obtain

$$\begin{aligned} \frac{t_k-l_k}{1-l_k\beta ^k}&= \frac{(\tau _0-\lambda _0)\beta ^{-k}+ (\tau _1-\lambda _1)(-S\beta )^k}{1-\beta ^k(\lambda _0\beta ^{-k}+ \lambda _1(-S\beta )^k)}=\frac{\frac{S}{\sqrt{L^2+4S}}(\beta ^{-k}- (-S\beta )^k)}{\lambda _1(1-(-S\beta ^2)^k)}\\&= \frac{\frac{S}{\sqrt{L^2+4S}}}{\lambda _1}\cdot \beta ^{-k}\cdot \frac{1-(-S\beta ^2)^k}{1-(-S\beta ^2)^k}=\beta ^{-k-1}, \end{aligned}$$

which proves the first part of Lemma 7.

Note that

$$\begin{aligned} \frac{\lambda _1}{\frac{S}{\sqrt{L^2+4S}}}= \frac{\frac{-L+\sqrt{L^2+4S}}{2\sqrt{L^2+4S}}}{\frac{S}{\sqrt{L^2+4S}}}= \frac{-L+\sqrt{L^2+4S}}{2S}=\beta . \end{aligned}$$

Let us now prove the second statement of the lemma. Again we use the explicit formulas (1) and (2) and obtain

$$\begin{aligned} \frac{l_k(1-(-S\beta ^2)^k)}{1-l_k\beta ^k}&= \frac{l_k(1-(-S\beta ^2)^k)}{1-\beta ^k(\lambda _0\beta ^{-k}+ \lambda _1(-S\beta )^k)}\\&= \frac{l_k(1-(-S\beta ^2)^k)}{\lambda _1(1-(-S\beta ^2)^k)}\\&= \frac{\lambda _0}{\lambda _1}\beta ^{-k}+(-S\beta )^k \end{aligned}$$

\(\square \)

Continuing the proof of Theorem 1, let us insert the explicit formulas of Lemma 7 into (10). We get

$$\begin{aligned} A_1=A_2\frac{\beta _1^{k+1}}{\beta _2^{m+1}}+ (x_2-x_1)\beta _1^{k+1}+\tilde{R}, \end{aligned}$$
(11)

where

$$\begin{aligned} \tilde{R}=R_2\stackrel{c_2{:=}}{\overbrace{\beta _2\left( \frac{\lambda _0^{(2)}}{\lambda _1^{(2)}}+(-S_2\beta _2^2)^m \right) \frac{\beta _1^{k+1}}{\beta _2^{m+1}}}}- R_1\stackrel{c_1{:=}}{\overbrace{\beta _1\left( \frac{\lambda _0^{(1)}}{\lambda _1^{(1)}}+(-S_1\beta _1^2)^k\right) }}. \end{aligned}$$
(12)

Now by assumption, we have \(\frac{\beta _1^{k+1}}{\beta _2^{m+1}}=\frac{p}{q}\) for some coprime positive integers \(p,q\) and (11) can be written as

$$\begin{aligned} 2(qA_1-pA_2)=2(x_2-x_1)q\beta _1^{k+1}+2q\tilde{R}. \end{aligned}$$
(13)

Obviously, the left side of (13) is an even integer. In order to prove Theorem 1, we want to show that the right side is not an even integer if \(\mathbf \xi _\mathcal B ^N \in \tilde{I}\) and \(\tilde{k}\) and \(\tilde{m}\) are sufficiently large. By our assumptions on \(x_1,x_2\) and the parameters \(L_1,S_1,L_2\) and \(S_2\), we have

$$\begin{aligned} \Vert 2q(x_2-x_1)\beta _1^{k+1}\Vert _\mathrm{odd}=\epsilon <1, \end{aligned}$$
(14)

where \(\Vert \cdot \Vert _\mathrm{odd}\) denotes the distance to the nearest odd integer. Note that \(2q(x_2-x_1)\beta _1^{k+1}\) is irrational, since \(x_1\ne x_2\) and \(\beta _1^{k+1}\) is irrational due to Lemma 2; therefore, indeed \(\epsilon <1\).

Now Lemma 4 tells us that the assumption \(\xi _\mathcal{B }^N\in \tilde{I}\) yields

$$\begin{aligned} N=x_1+\sum _{i=k+\tilde{k}}^{n}(\epsilon ^{(1)}_i t^{(1)}_i+\eta ^{(1)}_i l^{(1)}_i), \end{aligned}$$

i.e., all digits \(\epsilon ^{(1)}_i,\eta ^{(1)}_i\) with \(k\le i<k+\tilde{k}\) vanish, and therefore the estimate of Lemma 6 improves to

$$\begin{aligned} |R_1|\le \max \{|\tau ^{(1)}_1|,|\tau ^{(1)}_1+(L_1+S_1-2) \lambda ^{(1)}_1|\}\frac{\left| - S_1 \beta _1 \right| ^{\tilde{k}}}{1-S_1\beta _1}. \end{aligned}$$

A similar inequality also holds for \(|R_2|\). Now, we choose \(\tilde{k}\) sufficiently large such that

$$\begin{aligned} \left| -S_1 \beta _1\right| ^{\tilde{k}}<\frac{1-\epsilon }{|c_1|4q \max \{|\tau ^{(1)}_1|,|\tau ^{(1)}_1+(L_1+S_1-2)\lambda ^{(1)}_1|\}} (1-S_1\beta _1) \end{aligned}$$

in case of \(|c_1|\ne 0\) and \(\tilde{k}=0\) otherwise. Similarly, we choose \(\tilde{m}\) sufficiently large such that

$$\begin{aligned} \left| -S_2 \beta _2\right| ^{\tilde{m}}<\frac{1-\epsilon }{|c_2|4q \max \{|\tau ^{(2)}_1|,|\tau ^{(2)}_1+(L_2+S_2-2)\lambda ^{(2)}_1|\}} (1-S_2\beta _2) \end{aligned}$$

in case of \(|c_2|\ne 0\) and \(\tilde{m}=0\) otherwise. With this choice of \(\tilde{k}\) and \(\tilde{m}\), we obtain

$$\begin{aligned} |\tilde{R}|<\frac{1-\epsilon }{2q}, \end{aligned}$$

which together with (14) implies that the right side of (13) differs from an odd integer by less than 1 in absolute value, and hence cannot be an even integer. Consequently, (13), and therefore (11), cannot hold for any integers \(A_1, A_2\). However, since (11) followed from our assumptions, this means that we have a contradiction. Hence, it is not possible that \(\mathbf \xi _\mathcal B ^N \in \tilde{I}\) for any \(N\), which means that \(\tilde{I}\) does not contain any point of \((\mathbf \xi _\mathcal B ^n)_{n \in \mathbb N }\). This completes the proof of Theorem 1.

We continue with the proof of Theorem 2. Let \(b = \gcd (L_1, S_1, L_2, S_2) > 1\) and let \(t_n^{(1)}, l_n^{(1)}, t_n^{(2)}, l_n^{(2)}\) be defined as in the proof of Theorem 1. Note that \(b\) divides \(t_n^{(1)}, l_n^{(1)}, t_n^{(2)}, l_n^{(2)}\) for all \(n \ge 1\). Now, consider the interval \(I = [0, \beta _1) \times [\beta _2, 2 \beta _2)\). Note that \(2 \beta _2 \le 1\), since \(\gcd (L_1, S_1, L_2, S_2) > 1\) implies \(L_2 \ge 2\). It follows from Lemma 5 that we can write every \(N_1\) for which \(\xi _{L_1, S_1}^{N_1} \in [0, \beta _1)\) as \(N_1 = 0 + A_1 t_1^{(1)} + B_1 l_1^{(1)}\) and every \(N_2\) for which \(\xi _{L_2, S_2}^{N_2} \in [\beta _2, 2 \beta _2)\) as \(N_2 = 1 + A_2 t_1^{(2)} + B_2 l_1^{(2)}\) for appropriate integers \(A_1, B_1, A_2, B_2\). Hence, every such \(N_1\) satisfies \(N_1 \equiv 0 \, \text{ mod} \, {b}\) and every such \(N_2\) satisfies \(N_2 \equiv 1 \, \text{ mod} \, {b}\). Since \(b \ge 2\), no index \(N\) can satisfy both congruences simultaneously, and consequently, no point of the two-dimensional LS-sequence can be contained in \(I\). This proves Theorem 2.
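
The divisibility observation at the heart of this proof can be checked numerically. The following Python sketch (an illustration using the parameters (4,2) and (2,2) mentioned after Theorem 2) verifies that \(b\) divides \(t_n^{(i)}\) and \(l_n^{(i)}\) for all \(n\ge 1\).

from math import gcd

L1, S1, L2, S2 = 4, 2, 2, 2                   # example with gcd(L1, S1, L2, S2) = 2
b = gcd(gcd(L1, S1), gcd(L2, S2))

for (L, S) in [(L1, S1), (L2, S2)]:
    t, l = [1, L + S], [1, L]                 # t_0, t_1 and l_0, l_1
    for _ in range(2, 15):
        t.append(L * t[-1] + S * t[-2])
        l.append(L * l[-1] + S * l[-2])
    # b divides t_n and l_n for every n >= 1, as used in the proof of Theorem 2
    assert all(t[n] % b == 0 and l[n] % b == 0 for n in range(1, len(t)))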