1 Introduction

The most important technical problem in the general theory of \(l^2\)-invariants is establishing bounds on the spectral density of group ring elements. Let us illustrate it with three examples. For an introduction to \(l^2\)-invariants see [8], or [21] for a more comprehensive treatment. Spectral measures of group ring elements are discussed in the next section.

(i) The celebrated Lück approximation theorem states that the \(l^2\)-Betti numbers of a normal cover of a finite CW-complex are limits of the ordinary Betti numbers of intermediate finite covers, normalized by the cardinality of the fibers. This is an easy corollary of the following statement. For every \(C>0\) there exists a function \(f:\mathbb R_+\rightarrow \mathbb R_+\) such that \(f({\varepsilon })\rightarrow 0\) when \({\varepsilon }\rightarrow 0\), and such that for every residually finite group \(G\), and every self-adjoint \(T\) in the integral group ring \(\mathbb Z[G]\) whose \(l^1\)-norm is bounded by \(C\), we have

$$\begin{aligned} \mu _T((0,{\varepsilon })) < f({\varepsilon }). \end{aligned}$$
(1)

In other words, Lück approximation follows from having any uniform bound at all on the spectral density around \(0\). Lück [20] showed that one can take \(f({\varepsilon }) := \frac{C'}{|\log ({\varepsilon })|}\), where \(C'\) depends on \(C\).

(ii) Another \(l^2\)-invariant, the \(l^2\)-torsion, is not known to be well-defined for arbitrary normal covers. It is well-defined for all normal covers with a given deck transformation group \(G\) if and only if for every self-adjoint \(T\in \mathbb Z[G]\) the integral

$$\begin{aligned} \int _{0^+}^1\log (x) \,d\mu _T(x) \end{aligned}$$

is convergent. This is clearly a statement about the density of \(\mu _T\) around \(0\). The convergence of the integral above was established by Clair [5] and Schick [28] for a large class of groups \(G\), including all residually-finite ones. As a consequence, using the little-o notation, we have

$$\begin{aligned} \mu _T((0,{\varepsilon })) = o\left( \frac{1}{|\log ({\varepsilon })|}\right) , \end{aligned}$$
(2)

which is the best general upper bound known (see Remark 4(iii) below).

(iii) A major open problem, known as the determinant approximation conjecture, is the analog of Lück approximation for the \(l^2\)-torsion. It is currently known only when \(G\) has the infinite cyclic group \(\mathbf Z\) as a subgroup of finite index (see [21, Lemma 13.53]). It is not difficult to show that if, under the assumptions stated in example (i), for some \({\delta }>0\) we had

$$\begin{aligned} \mu _T((0,{\varepsilon })) < \frac{C'}{|\log ({\varepsilon })|^{1+ {\delta }}}, \end{aligned}$$

then the determinant approximation conjecture would be true.
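To make example (i) concrete, here is a minimal numerical sketch (Python with NumPy; not part of any argument in this note) for the simplest case \(G=\mathbf Z\) and \(T = 2 - g - g^{-1}\), where \(g\) is the generator of \(\mathbf Z\): on the finite quotient \(\mathbf Z/n\mathbf Z\) the operator \(T\) becomes a circulant matrix with eigenvalues \(2-2\cos (2\pi k/n)\), and the normalized count of eigenvalues in \((0,{\varepsilon })\) approximates \(\mu _T((0,{\varepsilon }))\), which for this particular \(T\) equals \(\arccos (1-{\varepsilon }/2)/\pi \approx \sqrt{{\varepsilon }}/\pi \).

```python
import numpy as np

def mu_finite(n, eps):
    """Normalized count of eigenvalues in (0, eps) for T = 2 - g - g^{-1}
    on the finite quotient Z/nZ (a circulant matrix, eigenvalues known exactly)."""
    eigs = 2 - 2 * np.cos(2 * np.pi * np.arange(n) / n)
    return np.count_nonzero((eigs > 0) & (eigs < eps)) / n

for eps in [0.1, 0.01, 0.001]:
    approx = mu_finite(10**6, eps)            # Lueck-type finite-quotient approximation
    exact = np.arccos(1 - eps / 2) / np.pi    # mu_T((0, eps)) for the limit operator on Z
    print(f"eps={eps:5}: finite quotient {approx:.6f}   limit {exact:.6f}")
```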

For a long time it was not known whether there actually exist group ring elements whose spectral density is as large as the best known general bound (2) suggests. This is reflected in the following conjecture made by Lott and Lück.

Conjecture 1

(Lott–Lück [18])  Let \(G\) be a discrete group. For a self-adjoint \(T\in \mathbb Z[G]\) there exist \(C,\eta >0\) such that for sufficiently small \({\varepsilon }\) we have

$$\begin{aligned} \mu _T((0,{\varepsilon })) < C{\varepsilon }^\eta . \end{aligned}$$

Note that the bound in Conjecture 1 is very far away from the best known bound (2): for every \(\eta >0\) and sufficiently small \({\varepsilon }\) we have \({\varepsilon }^\eta < \frac{1}{|\log ({\varepsilon })|}\). However, in this note we show that (2) is not too far away from an optimal bound.
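To get a quantitative feel for how far apart these scales are, a short tabulation (a Python sketch; the exponents \(\eta \) and \({\delta }\) are arbitrary illustrative choices, not canonical values):

```python
import numpy as np

eta, delta = 0.1, 0.1    # illustrative values only
for eps in [1e-3, 1e-12, 1e-48, 1e-192]:
    print(f"eps = {eps:.0e}:  eps^eta = {eps**eta:.2e},  "
          f"1/|log eps| = {1 / abs(np.log(eps)):.2e},  "
          f"1/|log eps|^(1+delta) = {1 / abs(np.log(eps))**(1 + delta):.2e}")
```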

Theorem 2

For every \({\delta }>0\) there is a group \(G_{\delta }\) and a self-adjoint element \(S_{\delta }\in \mathbb Z[G_{\delta }]\) such that for some constant \(C>0\) we have

$$\begin{aligned} \mu _{S_{\delta }} ((0,{\varepsilon }_i)) > \frac{C}{|\log ({\varepsilon }_i)|^{1+{\delta }}} \end{aligned}$$
(3)

for some sequence of positive \({\varepsilon }_i\) converging to \(0\). In particular, Conjecture 1 is false for \(S_{\delta }\).

The family \(S_{\delta }\) does not invalidate the strategy from example (iii) above of proving the determinant approximation conjecture, because the supports of \(S_{\delta }\) grow when \({\delta }\rightarrow 0\). However, in view of how we construct \(S_{\delta }\), we state the following conjecture.

We say a function \(g:\mathbb R_+\rightarrow \mathbb R_+\) is computable if there is an algorithm which, given \({\varepsilon }\in \mathbb Q\), computes \(g({\varepsilon })\).

Conjecture 3

For every continuous computable function \(g:\mathbb R_+\rightarrow \mathbb R_+\) such that \(g({\varepsilon })\rightarrow 0\) when \({\varepsilon }\rightarrow 0\) there exists a group \(G\) and \(S\in \mathbb Z[G]\) such that

$$\begin{aligned} \mu _S((0,{\varepsilon }_i)) > \frac{g({\varepsilon }_i)}{|\log ({\varepsilon }_i)|} \end{aligned}$$

for some sequence of positive \({\varepsilon }_i\) converging to \(0\).

Informally, Conjecture 3 claims that the best known general bound (2) is optimal. We discuss some arguments in its favour at the end of the introduction. Theorem 2 establishes Conjecture 3 for \(g({\varepsilon }) = \frac{1}{|\log ({\varepsilon })|^{\delta }}\), for all \({\delta }>0\).

Remarks 4

(i) The groups \(G_{\delta }\) in Theorem 2 are of the form \([\oplus _\mathbb Z\mathbf Z_2^N \rtimes ({\text {Aut}}(\mathbf Z_2^N)\wr \mathbf Z)] \times (\mathbf Z_2^2\rtimes {\text {Aut}}(\mathbf Z_2^2))\), where \(N\) depends on \({\delta }\). The elements \(S_{\delta }\) can also be written down explicitly.

In the statement of Theorem 2, the phrase “for some sequence of positive \({\varepsilon }_i\) converging to \(0\)” can be replaced by “for all sufficiently small \({\varepsilon }\)”. We discuss it in Remark 8.

(ii) Another way of phrasing Conjecture 1 is that the so-called Novikov-Shubin invariants are always positive. All group ring elements \(S_{\delta }\) in Theorem 2 have the Novikov-Shubin invariant equal to \(0\). For more information and context we refer to [21, Chapter 2]. Let us note in passing that the only groups \(G\) for which the conjecture is known for all \(T\in \mathbb Q[G]\) are the virtually abelian groups (see [22] and the references therein) and the virtually free groups (see [27]).

It is interesting to note the contrast between the status of Conjecture 1 and the status of the question about the rationality of \(l^2\)-Betti numbers, i.e. the Atiyah conjecture. Although recently [1, 12, 13, 16, 25] examples of irrational \(l^2\)-Betti numbers have been found, it is still very plausible that the Atiyah conjecture holds when the fundamental group is torsion-free. This is not the case with Conjecture 1. Although the groups \(G_{\delta }\) in Theorem 2 are not torsion-free, in the appendix we describe a previously unpublished observation of the author and B. Virág: counterexamples to Conjecture 1 with a torsion-free \(G\) can be deduced from the mathematical physics literature.

(iii) In [10] it is shown that the proof of (2) from [28] works for \(T\in \mathbb Q[G]\), where \(G\) is an arbitrary sofic group. We do not discuss sofic groups here, see e.g. [24].

The bound (2) is the best known general bound in the following sense. If \(G\) is a finitely generated group which is neither virtually abelian nor virtually free, then it is not known whether there exists \(f\) such that \(f({\varepsilon })=o\left( \frac{1}{|\log ({\varepsilon })|}\right) \) and such that for every \(T\in \mathbb Q[G]\) there exists \(C_T>0\) such that \(\mu _T((0,{\varepsilon }))<C_T f({\varepsilon })\).

The situation is much worse when \(G\) is not assumed to be sofic. Then it is not known whether for every \(T\in \mathbb Q[G]\) there exists a computable function \(f_T\) whose values converge to \(0\) when \({\varepsilon }\rightarrow 0\) and such that \(\mu _T((0,{\varepsilon })) \leqslant f_T({\varepsilon })\) for sufficiently small \({\varepsilon }\) (in particular, it is not known whether (2) holds).

(iv) Even less is known when we consider the real group ring instead of the rational one. If \(G\) is not virtually abelian or virtually free, then it is not known if for every \(T\in \mathbb R[G]\) there exists a computable function \(f_T\) whose values converge to \(0\) when \({\varepsilon }\rightarrow 0\) and such that \(\mu _T((0,{\varepsilon })) \leqslant f_T({\varepsilon })\) for sufficiently small \({\varepsilon }\). The only general result known to the author is from [29]: the bound \(\mu _T((0,{\varepsilon })) \leqslant \frac{C}{\sqrt{|\log {{\varepsilon }}|}}\) holds for \(T-{\alpha }\in \mathbb R[G]\) where \(T\in \mathbb Q[G]\), \({\alpha }\in \mathbb R\), and \(G\) is a sofic group. Both here and in the previous remark \(\mathbb Q\) can be replaced everywhere by the field of algebraic numbers.

(v) Conjecture 3 does not contradict the determinant approximation conjecture; it only invalidates the proof strategy mentioned in example (iii). Furthermore, Li and Thom [17] establish determinant approximation in a different setting, that of compressions along a Følner sequence. They do not use any a priori bounds on the spectral density. In fact their approximation result works for an arbitrary element of the group von Neumann algebra, and such elements can have arbitrarily large spectral density around \(0\).

Overview of the proof of Theorem 2. In the next section we recall some basics on spectral measures and explain our computational tool. Variants of it were used in the context of computing spectral measures e.g. in [1, 2, 6, 15, 25]. For proofs we refer to [13, Section 2], where a very general version is presented. For more details on the spectral measures and von Neumann algebras see e.g. [21, Chapters 1 and 2], or the extremely readable textbook [26].

In Sect. 3 we explain how to deduce Theorem 2 from the existence of a measure-preserving action \({\Gamma }\curvearrowright X\) of a discrete group \({\Gamma }\) on a probability measure space \(X\) with certain properties. The most important property is as follows. There are finitely many subsets \(X_1,X_2,\ldots , X_n\) of \(X\) together with elements \({\gamma }_1,\ldots ,{\gamma }_n\), such that the following holds. Let \(\mathcal G\) be the graph whose set of vertices is \(X\) and with an edge between \(x\) and \(y\) iff for some \(i\) we have \(x\in X_i\) and \({\gamma }_i.x=y\). Then there are \(C,d>0\) and a sequence of natural numbers \(l_1,l_2,\ldots \), such that the probability that the connected component of \(x\in X\) is a line of length at least \(l_i\) is more than \(\frac{C}{l_i^d}\). The closer \(d\) is to \(0\), the closer \({\delta }\) in Theorem 2 is to \(0\).

In Sect. 4 we construct the actions \({\Gamma }\curvearrowright X\) with necessary properties. This is done using the framework of Turing dynamical systems introduced in [13]. The sets \(X_i\) are interpreted as states of a Turing machine and \({\gamma }_i\) as instructions corresponding to a given state. The point is to find a Turing dynamical system whose computations are long, and such that “different computational paths do not interfere with each other”, so that the connected components of \(\mathcal G\) above are indeed lines and not trees.

There are no problems in finding Turing dynamical systems with very long computational paths; in fact, they would be long enough to prove Conjecture 3, were it not for the interference condition. To make sure that different computational paths do not interfere, we use a very specific system which imitates the standard “carry” algorithm of adding numbers. It is very explicit, and this allows for checking the interference condition.

However, it seems to the author that the interference condition is more a technical problem than a fundamental obstacle, and this is the reason why we pose Conjecture 3.

2 Preliminaries on spectral measures

If \(D\) is a countable set then \(l^2(D)\) is the Hilbert space of all functions \(f:D\rightarrow \mathbb C\) such that \(\sum _{d\in D} f(d)\overline{f(d)} <\infty \). The indicator function of \(d\in D\) is denoted by \(\zeta _d\). If \(G\) is a graph whose set of vertices is \(V\) then \(l^2(G):= l^2(V)\).

We need to consider Hilbert spaces which are direct integrals of other Hilbert spaces—see e.g. [11, Subsection 7.4].

The group ring \(\mathbb R[G]\) of a group \(G\) is the set of formal linear combinations \(\sum a_g\cdot g\) where \(a_g\in \mathbb R\), \(g\in G\), and all but finitely many \(a_g\) are \(0\), together with the obvious addition and multiplication operations. Similarly for \(\mathbb Z[G]\), \(\mathbb Q[G]\) and \(\mathbb C[G]\).

Let \((X,\mu )\) be a probability measure space and let \({\Gamma }\curvearrowright X\) be an action of a discrete group by measure-preserving transformations. The result of the action of \({\gamma }\in {\Gamma }\) on \(x\in X\) is denoted by \({\gamma }.x\). Consider the Hilbert space \(\mathcal H\) defined as the direct integral

$$\begin{aligned} \int _X^\oplus l^2({\Gamma })\,d\mu (x). \end{aligned}$$
(4)

Since we will need to specify different fibers in \(\mathcal H\), we denote the copy of \({\Gamma }\) corresponding to \(x\in X\) by \({\Gamma }_x\). From now on we will suppress both \(X\) and \(d\mu (x)\) from all the integrals in this section. As such we could write \(\mathcal{H }= \int ^\oplus l^{2}(\Gamma _{x})\).

We have actions of \({\Gamma }\) and of \(L^\infty (X)\) on \(\mathcal H\) defined as follows. Both \({\Gamma }\) and \(L^\infty (X)\) act in a fiber-preserving fashion. For \({\gamma }\in {\Gamma }\) and \({\alpha }\in {\Gamma }_x\) we define \({\gamma }.\zeta _{\alpha }:= \zeta _{{\gamma }{\alpha }}\) and for \(f\in L^\infty (X)\) we define \(f.\zeta _{\alpha }:= f({\alpha }.x)\zeta _{\alpha }\). The weak completion of the algebra of bounded operators on \(\mathcal H\) generated by \({\Gamma }\) and \(L^\infty (X)\) is denoted by \({\Gamma }\ltimes L^\infty (X)\) and is called the group-measure space construction. It is an example of a von Neumann algebra. It is equipped with a \({}^*\)-operation which fulfils

$$\begin{aligned} \langle T\zeta _1,\zeta _2\rangle = \langle \zeta _1,T^*\zeta _2\rangle , \end{aligned}$$

for all \(\zeta _1,\zeta _2\in \mathcal H\) and \(T\in {\Gamma }\ltimes L^\infty (X)\). If \(T=T^*\) we say \(T\) is self-adjoint.

Every \(T\in {\Gamma }\ltimes L^\infty (X)\) preserves the fibers of \(\mathcal H\). In other words there is a family \(T(x)\), \(x\in X\), of operators on \(l^2({\Gamma }_x)\) such that

$$\begin{aligned} T = \int _X^\oplus T(x). \end{aligned}$$
(5)

The von Neumann trace of \(T\) is defined by \(\tau (T):= \int \langle T\zeta _{e_x},\zeta _{e_x}\rangle \), where \(e_x\) denotes the neutral element of \({\Gamma }_x\). If \(T\) is a self-adjoint operator on an arbitrary Hilbert space, one associates to it a projection-valued spectral measure, i.e. a function \(\pi _T\) from measurable subsets of \(\mathbb R\) to the projections on the Hilbert space, which has a suitable \({\sigma }\)-additivity property. If \(T\in {\Gamma }\ltimes L^\infty (X)\), the spectral measure of \(T\) is the measure \(\mu _T\) on \(\mathbb R\) which is equal to the composition \(\tau \circ \pi _T\). It makes sense to compose \(\tau \) with \(\pi _T\) because for \(T\in {\Gamma }\ltimes L^\infty (X)\) and a measurable \(D\subset \mathbb R\) we have \(\pi _T(D)\in {\Gamma }\ltimes L^\infty (X)\).

A particular case of the group-measure space construction is when \(X\) consists of a single point, and \({\Gamma }\curvearrowright X\) is the only possible action. The resulting group-measure space construction is the group von Neumann algebra of \({\Gamma }\) which is denoted by \(L({\Gamma })\).

We now focus on the situation when \((X,\mu )\) is a compact abelian group with the normalized Haar measure and the action \({\Gamma }\curvearrowright X\) is by continuous group automorphisms. Let \(A\) be the Pontryagin dual of \(X\) (see [11] for more information on the Pontryagin duality). In particular \(A\) is a discrete abelian group. We have an embedding

$$\begin{aligned} \mathbb C[A] \rightarrow L^\infty (X) \end{aligned}$$
(6)

given by the Pontryagin duality. The preimage of \(f\in L^\infty (X)\) under this embedding, if it exists, is denoted by \({\widehat{f}}\).

We have the dual action of \({\Gamma }\) on \(A\) which is the unique action which makes the embedding (6) equivariant. As such we obtain an embedding

$$\begin{aligned} \mathbb C[{\Gamma }\ltimes A] \rightarrow {\Gamma }\ltimes L^\infty (X). \end{aligned}$$
(7)

We have potentially two ways of computing the spectral measure of \(T\in \mathbb C[{\Gamma }\ltimes A]\). Either as an element of the group von Neumann algebra \(L({\Gamma }\ltimes A)\) or, via the above embedding, as an element of the von Neumann algebra \({\Gamma }\ltimes L^\infty (X)\).

Lemma 5

(e.g. Proposition 2.5 in [13]) The spectral measure of \(T\in \mathbb C[{\Gamma }\ltimes A]\) is the same in \(L({\Gamma }\ltimes A)\) and in \({\Gamma }\ltimes L^\infty (X)\).

We will produce the groups \(G_{\delta }\) and elements \(S_{\delta }\in \mathbb Z[G_{\delta }]\) in Theorem 2 using the above lemma. The groups will be constructed as \({\Gamma }\ltimes A\) for suitable \(A\) and \({\Gamma }\), but the computations showing the desired spectral properties will be carried out in \({\Gamma }\ltimes L^\infty (X)\).

Let us now explain why the spectral computations in \({\Gamma }\ltimes L^\infty (X)\) are sometimes easy to perform. The point is to understand the decomposition (5). Let \(T\in {\Gamma }\ltimes L^\infty (X)\) be given as a finite sum \(\sum _{i=1}^n {\gamma }_i\chi _i\), where \({\gamma }_i\in {\Gamma }\) and \(\chi _i\in L^\infty (X)\) are indicator functions of some subsets \(X_i\subset X\).

Consider the oriented graph \({\widetilde{\mathcal{G}}}\) associated to \(T\) as follows. The set of vertices is \({\Gamma }\times X\) and there is an edge from \(({\gamma }_1,x_1)\) to \(({\gamma }_2,x_2)\) if for some \(i=1,\ldots , n\) we have \({\gamma }_1.x_1\in X_i\) and \({\gamma }_i{\gamma }_1.x_1={\gamma }_2.x_2\).

Let \({\widetilde{\mathcal{G}}}(x)\) be the connected component of \((e,x)\) in \({\widetilde{\mathcal{G}}}\). Let \({\widetilde{T}}(x):l^2({\widetilde{\mathcal{G}}}(x)) \rightarrow l^2({\widetilde{\mathcal{G}}}(x))\) be the oriented adjacency operator on \({\widetilde{\mathcal{G}}}(x)\) and for \(i=1,\ldots ,n\), define

$$\begin{aligned} {\widetilde{\chi }}_i(x):l^2({\widetilde{\mathcal{G}}}(x)) \rightarrow l^2({\widetilde{\mathcal{G}}}(x)) \end{aligned}$$

by \({\widetilde{\chi }}_i(x)\zeta _{({\gamma },y)} := \zeta _{({\gamma },y)}\) if \({\gamma }.y\in X_i\) and \({\widetilde{\chi }}_i(x)\zeta _{({\gamma },y)} :=0\) otherwise.

Consider now the oriented graph \(\mathcal G_{\Gamma }\) with edge labels from the set \(\{{\gamma }_1,\ldots , {\gamma }_n\}\) defined as follows. The set of vertices is \(X\), and there is an edge from \(x_1\) to \(x_2\) with label \({\gamma }\) if for some \(i=1,\ldots , n\) we have \({\gamma }={\gamma }_i\), \(x_1\in X_i\) and \({\gamma }_i.x_1=x_2\). Note that there might be multiple edges between any two points of \(X\).

Let \(\mathcal{G}_{\Gamma }(x)\) be the connected component of \(x\) in \(\mathcal{G}_{\Gamma }\). We say \(\mathcal{G}_{\Gamma }(x)\) is simply-connected if multiplying edge-labels along any closed loop gives the trivial element of \({\Gamma }\) (if a loop traverses an edge in the direction opposite to the orientation of the edge, we invert the label). Note that in a simply-connected \(\mathcal{G}_{\Gamma }(x)\) there are no multiple edges, and self-loops are labelled with the neutral element of \({\Gamma }\).

Finally, let \(\mathcal{G}(x)\) be the graph which arises from \(\mathcal{G}_{\Gamma }(x)\) in the following way. If there is an edge from \(x_1\) to \(x_2\) with label \({\gamma }\) then we replace it by an edge with label equal to \(\#\{i:x_1\in X_i \text { and } {\gamma }_i={\gamma }\}\in \mathbb N\). Let \(T(x) :l^2(\mathcal{G}(x)) \rightarrow l^2(\mathcal{G}(x))\) be the oriented and labelled adjacency operator on \(\mathcal{G}(x)\), i.e. the unique operator such that \(\langle T(x)\zeta _y,\zeta _z\rangle \) is equal to the sum of labels on edges from \(y\) to \(z\). Let us define \(\chi _i(x):l^2(\mathcal{G}(x)) \rightarrow l^2(\mathcal{G}(x))\) by \(\chi _i(x)\zeta _y := \zeta _y\) if \(y\in X_i\) and \(\chi _i(x)\zeta _y:=0\) otherwise.
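To make the construction concrete, here is a small Python sketch over a hypothetical finite toy space standing in for \((X,\mu )\): group elements are given as permutations of a finite set, each instruction is a pair (permutation, subset), and the function extracts the component of a point together with the matrix of \(T(x)\), whose entries count the instructions realizing a given edge. The rotation example at the end is purely illustrative; it is not the action used later in this note.

```python
import numpy as np

def component_and_matrix(x, instructions):
    """Toy version of the construction of G(x) and T(x).  `instructions` is a
    list of pairs (gamma, X_i): gamma is a permutation given as a list,
    X_i is a set of points; there is a directed edge y -> gamma[y] whenever
    y lies in X_i."""
    # Explore the underlying undirected connected component of x.
    component, frontier, seen = [x], [x], {x}
    while frontier:
        y = frontier.pop()
        neighbours = []
        for gamma, X_i in instructions:
            if y in X_i:
                neighbours.append(gamma[y])                      # edge leaving y
            neighbours.extend(z for z in X_i if gamma[z] == y)   # edges entering y
        for z in neighbours:
            if z not in seen:
                seen.add(z)
                component.append(z)
                frontier.append(z)
    # Matrix of T(x): entry (i, j) counts the instructions sending
    # component[i] to component[j].
    index = {v: i for i, v in enumerate(component)}
    T = np.zeros((len(component), len(component)))
    for gamma, X_i in instructions:
        for y in component:
            if y in X_i:
                T[index[y], index[gamma[y]]] += 1
    return component, T

# Hypothetical toy example: X = Z/12Z, a single instruction "rotate by 1 on {0,...,7}".
rotation = [(k + 1) % 12 for k in range(12)]
component, T = component_and_matrix(0, [(rotation, set(range(8)))])
print(component)   # the directed path 0 -> 1 -> ... -> 8
print(T)           # its matrix: a nilpotent shift with 1s on the superdiagonal
```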

Let \(X^{sf}\) be the subset of \(X\) consisting of those \(x\) for which \(\mathcal{G}_{\Gamma }(x)\) is simply-connected and finite.

Let \(\mathcal K\) be the Hilbert space

$$\begin{aligned} \int ^\oplus _{X\setminus X^{sf}} l^2({\widetilde{\mathcal{G}}}(x))\oplus \int ^\oplus _{X^{sf}} l^2(\mathcal{G}(x)). \end{aligned}$$

We have a \({}^*\)-embedding of the von Neumann subalgebra \(L(T)\) generated by \(T, \chi _1,\ldots , \chi _n\in {\Gamma }\ltimes L^\infty (X)\) into \(\mathrm B(\mathcal K)\), induced by

$$\begin{aligned} T \mapsto \int ^\oplus _{X\setminus X^{sf}} {\widetilde{T}}(x) \oplus \int ^\oplus _{X^{sf}} T(x)\\ \chi _i \mapsto \int ^\oplus _{X\setminus X^{sf}} {\widetilde{\chi }}_i(x) \oplus \int ^\oplus _{X^{sf}} \chi _i(x) \end{aligned}$$

Under this embedding, \(U\in L(T)\) decomposes and we denote the resulting decomposition by

$$\begin{aligned} \int ^\oplus _{X\setminus X^{sf}} {\widetilde{U}}(x) \oplus \int ^\oplus _{X^{sf}} U(x). \end{aligned}$$

A direct computation shows

$$\begin{aligned} \tau (U) = \int _{X\setminus X^{sf}} \langle \,{\widetilde{U}}(x)\, \zeta _{(e,x)}, \zeta _{(e,x)}\rangle + \int _{X^{sf}} \frac{1}{|\mathcal{G}(x)|} {\text {tr}}(U(x)), \end{aligned}$$

where \({\text {tr}}(U(x)) = \sum _{v\in { \mathcal G}(x)} \langle U(x)\zeta _v,\zeta _v\rangle \) is the standard trace of a finite-dimensional endomorphism. Accordingly, for self-adjoint \(U\) the spectral measure \(\mu _U\) decomposes as

$$\begin{aligned} \mu _U = \mu ^1_U + \mu ^2_U. \end{aligned}$$
(8)

The measure \(\mu ^2_U(D)\) of a measurable subset \(D\subset \mathbb R\) is given by

$$\begin{aligned} \mu ^2_U(D) = \int _{X^{sf}} \frac{1}{|\mathcal{G}(x)|} \mu _{U(x)}(D) \, d\mu (x). \end{aligned}$$

Note that \(\mu _{U(x)}\) is the spectral measure of a finite-dimensional endomorphism \(U(x)\), i.e. it is simply the set of eigenvalues of \(U(x)\) with multiplicities. This makes \(\mu ^2_U\) relatively easy to compute. The operators \(S_{\delta }\) we will construct to prove Theorem 2 will be such that already \(\mu ^2_{S_{\delta }}\) fulfils the claimed inequality (3).
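Concretely, once the finite simply-connected components are known together with the measures of the corresponding sets of base points, evaluating \(\mu ^2_U\) reduces to eigenvalue computations for small matrices. A minimal Python sketch; the input data below is a hypothetical toy example, using the weighted line matrices that will appear in Fig. 1 and Sect. 5.

```python
import numpy as np

def mu2(components, a, b):
    """Approximate mu^2_U((a,b)).  `components` is a list of pairs
    (measure, U_x): `measure` is the measure of the set of points x whose
    component has matrix U_x; each pair contributes
    measure / |G(x)| * #{eigenvalues of U_x in (a, b)}."""
    total = 0.0
    for measure, U_x in components:
        eigs = np.linalg.eigvalsh(U_x)
        total += measure / len(U_x) * np.count_nonzero((eigs > a) & (eigs < b))
    return total

def line_matrix(m):
    """Sample symmetric matrix: a weighted path with diagonal (1, 5, ..., 5)
    and off-diagonal entries 2 (the matrices U_m of Sect. 5)."""
    return np.diag([1.0] + [5.0] * (m - 1)) + 2 * (np.eye(m, k=1) + np.eye(m, k=-1))

# Hypothetical toy data: line components of lengths 3 and 5, each of measure 1/10.
print(mu2([(0.1, line_matrix(3)), (0.1, line_matrix(5))], 0.0, 0.05))
```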

3 Turing dynamical systems with long computational chains

It is convenient to construct our group ring elements using Turing dynamical systems introduced in [13], so we start by adapting some definitions from [13, Section 4]. All sets and subsets are measurable whenever it makes sense (and the checks that the sets we define are measurable are straightforward).

Let \((X, \mu )\) be a probability measure space and \({\Gamma }\curvearrowright X\) be a measure-preserving action of a countable discrete group \({\Gamma }\). The result of the action of \({\gamma }\in {\Gamma }\) on \(x\in X\) is denoted by \({\gamma }.x\). A dynamical hardware is the following data: \((X,\mu )\), the action \({\Gamma }\curvearrowright X\), and a division \(X = \bigsqcup _{i=1}^n X_i\) of \(X\) into disjoint measurable subsets. For brevity, we denote such a dynamical hardware by \((X)\).

Suppose we are given a dynamical hardware \((X)\) and we choose three additional distinguished disjoint subsets of \(X\), each of which is a union of certain \(X_i\): the initial set \(I\), the rejecting set \(R\), and the accepting set \(A\) (all or some of them might be empty). Furthermore, suppose that for every set \(X_i\), we choose one element \({\gamma }_i\) of the group \({\Gamma }\) in such a way that

(i) the elements corresponding to the sets \(X_i\) which are subsets of \(R\cup A\) are equal to the neutral element \(e\) of \({\Gamma }\),

(ii) if \({\gamma }_i\ne e\) then \(\mu (\{x\in X_i:{\gamma }_i.x= x\})=0\).

A dynamical software for a given dynamical hardware \((X)\) consists of the distinguished sets \(I, A\) and \(R\), and the choice of elements \({\gamma }_i\), subject to the conditions above. Each pair \(({\gamma }_i, X_i)\) is referred to as an instruction. The set \({\gamma }_i.X_i\) is referred to as the resulting set of the instruction \(({\gamma }_i,X_i)\).

Define \(T_X: X\rightarrow X\) by putting \(T_X(x):= {\gamma }_i.x\) for \(x\in X_i\). A Turing dynamical system is a dynamical hardware \((X)\) together with a dynamical software for \((X)\). We will denote a Turing dynamical system as just defined by \((X, T_X)\).

A computational chain is a sequence \(x, T_X(x), T^2_X(x),\ldots , T^{l-1}_X(x)\) such that \(x\in I\), \(T^{l-1}_X(x)\in A\cup R\), and such that \(l-1\) is the smallest integer for which \(T^{l-1}_X(x)\in A\cup R\). Note that it follows that \(T^{l}_X(x)=T^{l-1}_X(x)\). The length of such a chain is \(l\).

Our task is to construct a Turing dynamical system for which there is a subset \(Y\subset X\) which is a union of disjoint computational chains, and such that

(A) there exist \(d>0\), \(C>0\), and infinitely many natural numbers \(l\) such that the measure of the set of those \(x\in Y\) which are in computational chains of length at least \(l\) is more than \(\frac{C}{l^d}\),

(B) \(T_X(X{\setminus } Y)\subset X{\setminus } Y\),

(C) \(I\cap T_X(X) =\emptyset \).

We will construct such Turing dynamical systems in Sect. 4 (with arbitrarily small \(d\)). In this section we explain how \((X,T_X)\) gives rise to an operator in the von Neumann algebra \({\Gamma }\ltimes L^\infty (X)\) whose spectral measure is large around \(0\).

Let \(\chi _i\in L^\infty (X)\) be the indicator function of \(X_i\) and let \(\chi _I\) be the indicator function of \(I\).

Theorem 6

Let \((X, T_X)\) be a Turing dynamical system as above which fulfils (A), (B) and (C). Consider the operator

$$\begin{aligned} T:= \sum _{({\gamma }_i,X_i)\in P} {\gamma }_i \chi _i \in {\Gamma }\ltimes L^\infty (X), \end{aligned}$$

where \(P\) is the set of those instructions whose group element is not \(e\). Let

$$\begin{aligned} S := 5+ 2(T+T^*) -4\chi _I\in {\Gamma }\ltimes L^\infty (X). \end{aligned}$$

Then there is a constant \(C'>0\) and a sequence \({\varepsilon }_j>0\) converging to \(0\) such that

$$\begin{aligned} \mu _S((0,{\varepsilon }_j)) \geqslant \frac{C'}{|\log ({\varepsilon }_j)|^{d+1}}. \end{aligned}$$

Proof

Recall that \(Y\subset X\) is a union of disjoint computational chains with properties (A), (B) and (C). The property (B) implies that for \(y\in Y\) the connected component \(\mathcal{G}(y)\) is the directed line:

$$\begin{aligned} \bullet \rightarrow \bullet \rightarrow \ldots \rightarrow \bullet . \end{aligned}$$

In particular \(Y\subset X^{sf}\).

Furthermore (C) implies that only the first point of the chain is in the initial set \(I\). It follows that \(S(y)\) is the adjacency operator on the graph on Fig. 1, where the number of nodes is equal to the length of the computational chain of \(y\).

Fig. 1  Example graph \(\mathcal{G}(y)\)

Let the adjacency operator on the graph on Fig. 1 with \(m\) nodes be called \(S_m\). The next lemma follows from a linear algebra calculation which we postpone to Sect. 5.

Lemma 7

There is \(0< D<1\) such that for all \(m\geqslant 2\) the adjacency operator \(S_m\) has a positive eigenvalue smaller than \(D^m\).

Equality (8) together with assumption (A) now implies, for all \(l\geqslant 2\) as in condition (A),

$$\begin{aligned} \mu _S^2((0,D^l)) \geqslant \frac{1}{l} \cdot \frac{C}{l^d} \end{aligned}$$

Putting \({\varepsilon }=D^l\) we obtain \(\mu _S((0,{\varepsilon })) \geqslant \mu _S^2((0,{\varepsilon }))\geqslant \frac{C|\log (D)|^{d+1}}{|\log ({\varepsilon })|^{d+1}}\). \(\square \)

Remark 8

The proof shows that the sequence of \({\varepsilon }_j\) in the statement of Theorem 6 can be taken to be \({\varepsilon }_j:=D^{l_j}\), where \(D\) is a number in \((0,1)\) and \(l_j\) are the lengths fulfilling the assumption (A). In the next section, we show that the sequence \(l_j\) can be taken to be \(l_j:= L^j\) for a suitable natural number \(L\). It follows that for \({\varepsilon }\in ({\varepsilon }_{j+1}, {\varepsilon }_j)\) we have

$$\begin{aligned} \mu ((0,{\varepsilon }))\geqslant \mu ((0,{\varepsilon }_{j+1}))&\geqslant \frac{C'}{|\log ({\varepsilon }_{j+1})|^{d+1}} = \frac{C'}{|\log (D^{l_{j+1}})|^{d+1}} =\\&= \frac{C'}{|\log (D^{L\cdot l_{j}})|^{d+1}} = \frac{C'/L^{d+1}}{|\log ({\varepsilon }_j)|^{d+1}}\geqslant \frac{C'/L^{d+1}}{|\log ({\varepsilon })|^{d+1}}. \end{aligned}$$

This shows that the inequality (3) in Theorem 2 is true, after decreasing the constant, for all sufficiently small \({\varepsilon }\).

4 The “carry” algorithm as a Turing dynamical system

Informally speaking, the Turing dynamical systems we construct imitate a Turing machine which given the input \(\mathtt{0 }\,\mathtt{0 }\,\mathtt{0 }\,\ldots \,\mathtt{0 }\) performs the traditional “carry” algorithm to add \(1\) to \(\mathtt{0 }\,\mathtt{0 }\,\mathtt{0 }\,\ldots \,\mathtt{0 }\) in the \((\mathtt{D }+1)\)-ary system for some fixed \(\mathtt{D }\), and keeps doing it until it reaches \(\mathtt{D }\,\mathtt{D }\,\mathtt{D }\,\ldots \,\mathtt{D }\). The larger \(\mathtt{D }\) we choose, the closer to \(0\) the constant \(d\) in property (A) will be.

There are no problems in obtaining the property (A). Small complications to the idea above arise because of (B) and (C).

Let \(N\) be a natural number and let \(\textsc {M}:=\mathbf Z_2^N\), \(\textsc {S}:= \mathbf Z_2^2\). They should be thought of as respectively the alphabet and the set of states of a Turing machine. Let us denote the non-zero elements of \(\textsc {S}\) by inc-last-digit, carry, and zero-prev-digits. The element \(0\in \textsc {S}\) will be called void.

The element \(0\in \textsc {M}\) will be denoted by \(\mathtt{\bullet }\) (the “empty” symbol), and other elements by \(\mathtt{0 }, \mathtt{1 },\mathtt{2 },\ldots , \mathtt{D }\) (the order does not matter). The symbol \(\mathtt{D }\) is also used to denote the natural number \(2^N-2\).

For all pairs of distinct elements \({\sigma },\tau \in \textsc {S}{\setminus }\{\textsc {void}\}\) we choose an automorphism of \(\textsc {S}\) denoted by \(({\sigma }{\rightarrow }\tau )\in {\text {Aut}}(\textsc {S})\) which sends \({\sigma }\) to \(\tau \). Similarly for all \(x,y\in \textsc {M}{\setminus }\{\mathtt{\bullet }\}\) we choose \((x{\rightarrow } y)\in {\text {Aut}}(\textsc {M})\) which sends \(x\) to \(y\).

Let us introduce a notation for subsets of \(\textsc {M}^\mathbf Z\times \textsc {S}\) by giving examples. The set of those \(((m_i)_{i\in \mathbb Z},s)\in \textsc {M}^\mathbf Z\times \textsc {S}\) such that \(m_0= \mathtt x \), \(m_1= \mathtt y \), \(s={\sigma }\) is denoted by

$$\begin{aligned}{}[\underline{\mathtt{x }}\,\mathtt y , {\sigma }]. \end{aligned}$$

A hatted symbol means that we allow everything but not that symbol, for example

$$\begin{aligned}{}[\underline{\mathtt{x }}\,{\widehat{\mathtt{y }}}, {\sigma }] \end{aligned}$$

is the subset of \(\textsc {M}^\mathbf Z\times \textsc {S}\) consisting of those \(((m_i),s)\) such that \(m_0=\mathtt x , m_1\ne \mathtt y \) and \(s={\sigma }\). Finally \(*\) means that we allow any symbol, for example

$$\begin{aligned}{}[\underline{*}\,{\widehat{\mathtt{x }}}, {\sigma }] \end{aligned}$$

is the subset of \(\textsc {M}^\mathbf Z\times \textsc {S}\) consisting of those \(((m_i),s)\) such that \(m_1\ne \mathtt x \) and \(s={\sigma }\).

Concrete elements from the sets as above will be denoted with round brackets, for example \((\underline{\mathtt{x}_{\mathtt{0}}}\,\mathtt{x}_{\mathtt{1}}, {\sigma })\) is an element of the set \([\underline{\mathtt{x}_{\mathtt{0}}}\,\mathtt{x}_{\mathtt{1}}, {\sigma }]\).

For the rest of this section \(X\) is the compact abelian group \(\textsc {M}^\mathbf Z\times \textsc {S}\), and \({\Gamma }= ({\text {Aut}}(\textsc {M})\wr \mathbf Z) \times {\text {Aut}}(\textsc {S})\). The action \({\Gamma }\curvearrowright X\) is as follows. \({\text {Aut}}(\textsc {S})\) acts on \(\textsc {S}\) in the natural way, \({\text {Aut}}(\textsc {M})\) acts on the copy of \(\textsc {M}\) in the \(0\)-coordinate of \(\textsc {M}^\mathbf Z\) in the natural way, and the generator \(t\in \mathbf Z\) acts by shifting, i.e. \(t.(\underline{\mathtt{x }}\,\mathtt y , {\sigma }) = (\mathtt{x }\,\underline{\mathtt{y }}, {\sigma })\).

We define a dynamical hardware \((X)\) by choosing the division \(X=\bigsqcup X_i\) to consist of the cylinder sets

$$\begin{aligned}{}[\mathtt x \, \underline{\mathtt{y }}\,\mathtt z , {\sigma }], \end{aligned}$$
(9)

where \(\mathtt x ,\mathtt y ,\mathtt z \in \textsc {M}\), and \({\sigma }\in \textsc {S}\).

The dynamical software is defined by the following instructions. The sets below are not of the form (9), but they are finite disjoint unions of sets of the form (9), so they give rise to instructions in a natural way. The left hand side is the element of \({\Gamma }\) which corresponds to the set on the right hand side.

$$\begin{aligned} (\mathtt{0 }{\rightarrow }\mathtt{1 })&\quad [\underline{\mathtt{0 }}\,\mathtt{\bullet }, \textsc {inc-last-digit}] \end{aligned}$$
(S1)
$$\begin{aligned} \cdots \end{aligned}$$
$$\begin{aligned} (\mathtt{D{-}1}{\rightarrow } \mathtt{D})&[\underline{\mathtt{D {-}1}}\,\mathtt{\bullet }, \textsc {inc-last-digit}] \end{aligned}$$
(S2)
$$\begin{aligned} (\textsc {inc-last-digit}{\rightarrow }{\textsc {carry}})&[ \underline{\mathtt{D}}\,\mathtt{\bullet }, \textsc {inc-last-digit}] \end{aligned}$$
(S3)
$$\begin{aligned} t^{-1}&[\underline{\mathtt{D }}, \textsc {carry}] \end{aligned}$$
(S4)
$$\begin{aligned} t(\mathtt{D{-}1}{\rightarrow }\mathtt{D })(\textsc {carry}{\rightarrow }\textsc {zero-prev-digits})&[\underline{\mathtt{D{-}1}}\,\widehat{\mathtt{\bullet }}, \textsc {carry}] \end{aligned}$$
(S5)
$$\begin{aligned} t(\mathtt D{-}2 {\rightarrow }\mathtt D{-}1 )(\textsc {carry}{\rightarrow }\textsc {zero-prev-digits})&[\underline{\large {\mathtt{D}{-}2}}\,\widehat{\mathtt{\bullet }}, \textsc {carry}] \end{aligned}$$
(S6)
$$\begin{aligned} \quad \quad \ldots \end{aligned}$$
$$\begin{aligned} t(\mathtt{0 }{\rightarrow }\mathtt{1 })(\textsc {carry}{\rightarrow }\textsc {zero-prev-digits})&[\underline{\mathtt{0 }}\,\widehat{\mathtt{\bullet }}, \textsc {carry}] \end{aligned}$$
(S8)
$$\begin{aligned} t(\mathtt{D }{\rightarrow }\mathtt{0 })&\quad [\underline{\mathtt{D }}\,\widehat{\mathtt{\bullet }}, \textsc {zero-prev-digits}] \end{aligned}$$
(S9)
$$\begin{aligned} (\mathtt{D }{\rightarrow }\mathtt{0 })(\textsc {zero-prev-digits}{\rightarrow }\textsc {inc-last-digit})&[\underline{\mathtt{D }}\,\mathtt{\bullet }, \textsc {zero-prev-digits}]. \end{aligned}$$
(S10)

We will refer to the pairs above also as instructions.

If a cylinder set as in (9) is not a subset of one of the sets above, its associated element of \({\Gamma }\) is defined to be the neutral element. Finally, we define the initial set \(I\) to be \([\mathtt{\bullet }\,\underline{\mathtt{D }}, \textsc {zero-prev-digits}]\), the accepting set \(A\) to be \([\underline{\mathtt{\bullet }}, \textsc {carry}]\), and the rejecting set \(R\) to be the empty set.
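As a sanity check on the instruction table, here is a direct simulation (a Python sketch; the infinite tape is truncated to the finitely many cells that the computation actually visits, and the empty symbol \(\mathtt{\bullet }\) is encoded as \(-1\)). The instructions (S1)–(S10) are transcribed one by one, and starting from a point of \([\mathtt{\bullet }\,\underline{\mathtt{D }}\,\mathtt{D }^{j-1}\,\mathtt{\bullet },\textsc {zero-prev-digits}]\) the resulting computational chain has length at least \((\mathtt{D }+1)^j\), as claimed in the proof of Proposition 9 below.

```python
EMPTY = -1   # encodes the empty symbol; the digits are 0, 1, ..., D

def chain_length(D, j):
    """Length of the computational chain starting at a point of
    [• D D^(j-1) •, zero-prev-digits]; the machine only reads finitely
    many cells, so a dict suffices for the tape."""
    tape = {i: D for i in range(j)}        # positions 0, ..., j-1 carry the digit D
    head, state, steps = 0, "zero-prev-digits", 1
    while True:
        cur, nxt = tape.get(head, EMPTY), tape.get(head + 1, EMPTY)
        if state == "carry" and cur == EMPTY:
            return steps                                      # accepting set [•, carry]
        if state == "inc-last-digit" and nxt == EMPTY and 0 <= cur < D:
            tape[head] = cur + 1                              # (S1)-(S2)
        elif state == "inc-last-digit" and nxt == EMPTY and cur == D:
            state = "carry"                                   # (S3)
        elif state == "carry" and cur == D:
            head -= 1                                         # (S4)
        elif state == "carry" and 0 <= cur < D and nxt != EMPTY:
            tape[head], state, head = cur + 1, "zero-prev-digits", head + 1   # (S5)-(S8)
        elif state == "zero-prev-digits" and cur == D and nxt != EMPTY:
            tape[head], head = 0, head + 1                    # (S9)
        elif state == "zero-prev-digits" and cur == D and nxt == EMPTY:
            tape[head], state = 0, "inc-last-digit"           # (S10)
        else:
            return steps   # configuration fixed by T_X; does not occur along these chains
        steps += 1

for D, j in [(3, 2), (3, 4), (9, 3)]:
    print(f"D={D}, j={j}: chain length {chain_length(D, j)} >= (D+1)^j = {(D + 1)**j}")
```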

We will now define a set \(Y\) for which the properties (A), (B) and (C) hold. For \(j=1,2,\ldots \) let

$$\begin{aligned} Y(j):= [\mathtt{\bullet }\,\underline{\mathtt{D }}\,\mathtt{D }^{j-1}\,\mathtt{\bullet },&\textsc {zero-prev-digits}] \cup \\&\cup \ T_X([\mathtt{\bullet }\,\underline{\mathtt{D }}\,\mathtt{D }^{j-1}\,\mathtt{\bullet },\textsc {zero-prev-digits}])\cup \\&\cup \ T_X^2([\mathtt{\bullet }\,\underline{\mathtt{D }}\,\mathtt{D }^{j-1}\,\mathtt{\bullet },\textsc {zero-prev-digits}])\cup \ldots , \end{aligned}$$

and let \(Y:= \bigcup _j Y(j)\).

Proposition 9

The Turing dynamical system \((X,T_X)\) and the set \(Y\) just defined fulfil the properties (A), (B) and (C). Furthermore, as \(\mathtt{D }\) grows to infinity, we can take \(d\) in (A) converging to \(0\).

Proof

First we need to check that \(Y\) is indeed a union of disjoint computational chains. To this end, we explicitly check that the map \(T_X\) indeed acts as a machine which keeps adding \(1\) to its input. We take

$$\begin{aligned} x\in [\mathtt{\bullet }\,\underline{\mathtt{D }}\,\mathtt{D }^{j-1}\,\mathtt{\bullet },\textsc {zero-prev-digits}], \end{aligned}$$

denote it by

$$\begin{aligned} (\mathtt{\bullet }\,\underline{\mathtt{D }}\,\mathtt{D }^{j-1}\,\mathtt{\bullet },\textsc {zero-prev-digits}) \end{aligned}$$

and write its trajectory under \(T_X\) in Fig. 2. Now a direct examination shows that each trajectory ends in the accepting set \([\underline{\mathtt{\bullet }}, \textsc {carry}]\) (so that each trajectory is a computational chain) and that the trajectories are indeed pair-wise disjoint.

It is straightforward to show that the condition (A) is fulfilled: since any number between \(\mathtt{0 }\,\mathtt{0 }\,\ldots \,\mathtt{0 }\) (\(j\) times) and \(\mathtt{D }\,\mathtt{D }\,\ldots \,\mathtt{D }\) (\(j\) times) appears on the tape, it follows that the computational chain as in Fig. 2 has length at least \((\mathtt{D }+1)^j\). For the same reason, and since the measure on \(X\) is the product measure, we have that the measure of \(Y(j)\) is at least \(\frac{1}{|\textsc {S}|} \left( \frac{\mathtt{D }+1}{\mathtt{D }+2}\right) ^j\left( \frac{1}{\mathtt{D }+2}\right) ^2\).

Fig. 2  Trajectory of a point under the map \(T_{X}\)

It follows that in the condition (A) we can take \(C:= (|\textsc {S}|(\mathtt{D }+2))^{-2}\), and any \(d\) such that \(\frac{1}{(\mathtt{D }+1)^d} < \left( \frac{\mathtt{D }+1}{\mathtt{D }+2}\right) \).
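In other words, any \(d\) strictly above the threshold \(\log \frac{\mathtt{D }+2}{\mathtt{D }+1}\big /\log (\mathtt{D }+1)\) is admissible, and this threshold tends to \(0\) as \(\mathtt{D }\) grows, which gives the last claim of Proposition 9. A short numerical check (Python sketch):

```python
import numpy as np

# Smallest admissible exponent in condition (A): any d with
# (D+1)^(-d) < (D+1)/(D+2), i.e. any d above the threshold below.
for D in [1, 10, 100, 10**4, 10**6]:
    threshold = np.log((D + 2) / (D + 1)) / np.log(D + 1)
    print(f"D = {D:>8}: admissible d > {threshold:.3e}")
```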

Conditions (B) and (C) follow from a careful case-by-case analysis of our dynamical software. Let us start with (C). We have to check that the resulting set of every instruction (S1)–(S10) intersects trivially the initial set \(I=[\mathtt{\bullet }\,\underline{\mathtt{D }}, \textsc {zero-prev-digits}]\). Just by considering the resulting state (i.e. the \(\textsc {S}\)-coordinate of the resulting set), we obtain the trivial intersection of \(I\) with the resulting sets of instructions (S1)–(S4) and (S10).

The resulting set of (S5) is

$$\begin{aligned} t(\mathtt{D{-}1}{\rightarrow }\mathtt{D })(\textsc {carry}{\rightarrow }\textsc {zero-prev-digits}).[\underline{\mathtt{D{-}1}}\,\widehat{\mathtt{\bullet }}, \textsc {carry}], \end{aligned}$$

which is equal to \([\mathtt{D }\,{\underline{\widehat{\mathtt{\bullet }}}}, \textsc {zero-prev-digits}]\), which clearly has trivial intersection with \([\mathtt{\bullet }\,\underline{\mathtt{D }}, \textsc {zero-prev-digits}]\). Instructions (S6)–(S8) are checked similarly.

For (S9) the resulting set is

$$\begin{aligned} t(\mathtt{D }{\rightarrow }\mathtt{0 }). [\underline{\mathtt{D }}\,\widehat{\mathtt{\bullet }}, \textsc {zero-prev-digits}], \end{aligned}$$

which is equal to \([\mathtt{0 }\,\underline{\widehat{\mathtt{\bullet }}}, \textsc {zero-prev-digits}]\), which also intersects the initial set trivially.

Let us now check the condition (B). We just saw that the first element in Fig. 2 has no preimage under \(T_X\). On the other hand, it is clear that every other element in Fig. 2 has exactly one preimage under \(T_X\) which is also an element of \(Y\) (namely the preceding element in Fig. 2). It follows that to check the condition (B) it is enough to show that for every pair \((({\beta }_1,B_1), ({\beta }_2, B_2))\) of different instructions from among (S1)–(S10) we have

$$\begin{aligned} {\beta }_1.B_1 \cap {\beta }_2.B_2 = \emptyset . \end{aligned}$$

We only consider those pairs of instructions where it is not enough to look at the resulting state to show the trivial intersection. For example we would not consider the pair (S1) and (S3), because

$$\begin{aligned} (\mathtt{0 }{\rightarrow }\mathtt{1 }).[\underline{\mathtt{0 }}\,\mathtt{\bullet }, \textsc {inc-last-digit}] \subset [\textsc {inc-last-digit}] \end{aligned}$$

and

$$\begin{aligned} (\textsc {inc-last-digit}{\rightarrow }{\textsc {carry}}). [ \underline{\mathtt{D }}\,\mathtt{\bullet }, \textsc {inc-last-digit}] \subset [\textsc {carry}], \end{aligned}$$

and so the intersection is trivial.

As such we consider three cases, depending on the resulting state.

Case 1. The resulting state is inc-last-digit.

We have to consider the instructions (S1)–(S2) and (S10). If both instructions are from among (S1)–(S2), it is enough to look at the underlined symbol. Thus we consider a pair consisting of (S10) and one of the instructions (S1)–(S2). As for (S10), we have

$$\begin{aligned} (\mathtt{D }{\rightarrow }\mathtt{0 })(\textsc {zero-prev-digits}{\rightarrow }\textsc {inc-last-digit}).&[\underline{\mathtt{D }}\,\mathtt{\bullet }, \textsc {zero-prev-digits}]= \\&= [\underline{\mathtt{0 }}\,\mathtt{\bullet }, \textsc {inc-last-digit}], \end{aligned}$$

which is disjoint from the resulting sets of the instructions (S1) and (S2), as they are both contained in \([{\underline{\widehat{\mathtt{0 }}}},\textsc {inc-last-digit}]\).

Case 2. The resulting state is carry.

We have to consider the instructions (S3) and (S4). For (S3) we have

$$\begin{aligned} (\textsc {inc-last-digit}{\rightarrow }{\textsc {carry}}). [ \underline{\mathtt{D }}\,\mathtt{\bullet }, \textsc {inc-last-digit}] = [ \underline{\mathtt{D }}\,\mathtt{\bullet }, \textsc {carry}], \end{aligned}$$

and for (S4) we have

$$\begin{aligned} t^{-1}. [\underline{\mathtt{D }}, \textsc {carry}] = [\underline{*}\, \mathtt{D },\textsc {carry}], \end{aligned}$$

and so the intersection is trivial because \(\mathtt{D }\ne \mathtt{\bullet }\).

Case 3. The resulting state is zero-prev-digits.

We have to consider the instructions (S5)–(S8) and (S9). If both instructions are from among (S5)–(S8), considering the symbol immediately to the left of the underlined symbol establishes the trivial intersection.

For the case when one of the instruction is (S9), note that the resulting sets of (S5)–(S8) are all contained in

$$\begin{aligned}{}[{\widehat{\mathtt{0 }}}\,\underline{*},\textsc {zero-prev-digits}], \end{aligned}$$

and the resulting set of (S9) is contained in \([\mathtt{0 }\,\underline{*},\textsc {zero-prev-digits}]\). \(\square \)

We have all the ingredients to finish the proof of Theorem 2.

Proof of Theorem 2

For every \(d>0\), Proposition 9 gives a Turing dynamical system \((X,T_X)\) which fulfils the properties (A), (B) and (C). Theorem 6 gives us an operator \(S\in {\Gamma }\ltimes L^\infty (X)\) such that for some constant \(C'>0\) and a sequence \({\varepsilon }_j>0\) converging to \(0\) we have

$$\begin{aligned} \mu _S((0,{\varepsilon }_j)) \geqslant \frac{C'}{|\log ({\varepsilon }_j)|^{d+1}}. \end{aligned}$$

Furthermore, we constructed our Turing dynamical system in such a way that \((X,\mu )\) is a compact abelian group with the normalized Haar measure, the action \({\Gamma }\curvearrowright X\) is by continuous group automorphisms, and the indicator functions of \(X_i\) are in the image of the Pontryagin duality embedding \(\mathbb Q[A]\rightarrow L^\infty (X)\), where \(A\) is the Pontryagin dual of \(X\). As such, Lemma 5 gives us an element \({\widehat{S}}\in \mathbb Q[{\Gamma }\ltimes A]\) whose spectral measure is the same as the spectral measure of \(S\). Multiplying \({\widehat{S}}\) by the least common denominator of its coefficients yields a self-adjoint element of \(\mathbb Z[{\Gamma }\ltimes A]\); this only rescales the spectral measure, so the lower bound on the spectral density persists after adjusting the constant and the sequence \({\varepsilon }_i\). This finishes the proof. \(\square \)

5 Proof of Lemma 7

We finish this note by proving Lemma 7. Let us recall the statement for the reader’s convenience.

Lemma 7

There is \(0<D<1\) such that for all \(m\geqslant 2\) the adjacency operator \(S_m\) has a positive eigenvalue smaller than \(D^m\).

Proof

The matrix of the adjacency operator of the graph on Fig. 3 with \(m\) nodes is

$$\begin{aligned} U_m := \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} 1 &{} 2 &{} &{} &{} \\ 2 &{} 5 &{} 2 &{} &{} \\ &{} 2 &{} 5 &{} &{} \\ &{} &{} &{} \ldots &{} &{} \\ &{} &{} &{} &{} 5 &{}2 \\ &{} &{} &{} &{}2 &{} 5 \end{array} \right) \end{aligned}$$

We check by hand that \(\det (U_1) = \det (U_2) =1\), and we prove inductively that \(\det (U_m)=1\) for all \(m>1\) by expanding the determinant along the last row.

Fig. 3  The graph whose adjacency matrix is \(S_m\)

Note that the matrix

$$\begin{aligned} V_m := \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} 0 &{} 2 &{} &{} &{} \\ 2 &{} 0 &{} 2 &{} &{} \\ &{} 2 &{} 0 &{} &{} \\ &{} &{} &{} \ldots &{} &{} \\ &{} &{} &{} &{} 0 &{}2 \\ &{} &{} &{} &{}2 &{} 0 \end{array} \right) \end{aligned}$$

is equal to \(2V'_m\), where \(V'_m\) is the adjacency matrix of a bipartite graph of maximal degree \(2\). Recall that the spectrum of a bipartite graph is symmetric ([3, Proposition 8.2]). It follows that the eigenvalues of \(V_m\) are contained in \([-4,4]\) and there are as many eigenvalues in \([-4,0]\) as in \([0,4]\). Since \(U_m + {\text {Diag}}(4,0,\ldots ,0) = V_m + 5\cdot {\text {Id}}\), its eigenvalues are contained in \([1,9]\), and at least half of them are contained in \([5,9]\) (“at least” because of the eigenvalue “5”).

Let us recall Weyl's inequality for rank one perturbations (e.g. [14, Theorem 4.3.4]). Let \({\lambda }_1\leqslant \cdots \leqslant {\lambda }_m\) be the eigenvalues of \(U_m\) and \({\kappa }_1\leqslant \cdots \leqslant {\kappa }_m\) be the eigenvalues of \(U_m+{\text {Diag}}(4,0,\ldots , 0)\). Then for \(k=1,\ldots , m-1\) we have

$$\begin{aligned} {\kappa }_k \leqslant {\lambda }_{k+1}. \end{aligned}$$

It follows that \(1\leqslant {\lambda }_2\), and \(5 \leqslant {\lambda }_{\lfloor m/2 \rfloor +1}\). Estimating \({\lfloor m/2 \rfloor }\) by \(\frac{m}{3}\) from below we obtain

$$\begin{aligned} 1=\det (U_m) = {\lambda }_1{\lambda }_2\ldots {\lambda }_m > {\lambda }_1 \cdot 5^\frac{m}{3}, \end{aligned}$$

and therefore \({\lambda }_1 < 5^{-\frac{m}{3}}\). Moreover, all leading principal minors of \(U_m\) are equal to \(1\), so \(U_m\) is positive definite and \({\lambda }_1>0\). Hence we can take \(D:= 5^{-\frac{1}{3}}\). \(\square \)
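As a numerical sanity check (a Python sketch, independent of the proof), one can verify that \(\det (U_m)=1\) and that the smallest eigenvalue of \(U_m\) is positive and lies well below \(5^{-m/3}\); the range of \(m\) is kept small so that this tiny eigenvalue stays above double-precision roundoff.

```python
import numpy as np

def U(m):
    """The tridiagonal matrix U_m: diagonal (1, 5, ..., 5), off-diagonal entries 2."""
    return np.diag([1.0] + [5.0] * (m - 1)) + 2 * (np.eye(m, k=1) + np.eye(m, k=-1))

for m in [2, 5, 10, 15]:
    eigs = np.linalg.eigvalsh(U(m))
    print(f"m={m:2d}: det={np.linalg.det(U(m)):.4f}, "
          f"smallest eigenvalue={eigs[0]:.3e}, 5^(-m/3)={5.0**(-m / 3):.3e}")
```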

Remark 10

We note in passing that the matrices \(U_m\) from the proof of Lemma 7 give a counterexample to the determinant approximation conjecture in the context of graph sequences convergent in the Benjamini-Schramm sense (see [19, Chapter 19] for the definition). Such counterexamples have been known among experts for a fairly long time, although none seem to have been published. This particular counterexample is due to L. Lovász.

Indeed, on the one hand we have that \(\det (U_m)= 1\) for all \(m\). On the other hand, we have established in the proof of Lemma 7 that at least a third of the eigenvalues of \(U_m + {\text {Diag}}(4,0,\ldots ,0)\) are greater than or equal to \(5\), and all of them are greater than or equal to \(1\). This shows \((\det (U_m + {\text {Diag}}(4,0,\ldots ,0)))^{\frac{1}{m}} \geqslant 5^{\frac{1}{3}}\). We conclude that given a sequence of finite graphs which is convergent in the Benjamini-Schramm sense to a Cayley graph of \(\mathbf Z\), it is not true that the normalized determinants of the adjacency matrices of the finite graphs converge to the Fuglede–Kadison determinant of the adjacency operator on the Cayley graph.
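The claim can also be checked numerically (a small Python sketch in the same spirit as the one after the proof of Lemma 7): \(\det (U_m)=1\) holds exactly by the inductive computation in the proof of Lemma 7, while the normalized determinant of the perturbed matrices stays above \(5^{1/3}\approx 1.71\); in fact it converges to \(4\), the Fuglede–Kadison determinant of the limiting operator \(5+2(g+g^{-1})\) on \(\mathbf Z\).

```python
import numpy as np

# det(U_m) = 1 for every m (by the induction above); the perturbed matrices
# U_m + Diag(4, 0, ..., 0) have constant diagonal 5, and their normalized
# determinant stays well above 5^(1/3) ~ 1.71.
for m in [5, 20, 100, 1000]:
    perturbed = np.diag([5.0] * m) + 2 * (np.eye(m, k=1) + np.eye(m, k=-1))
    _, logdet = np.linalg.slogdet(perturbed)     # log-determinant, avoids overflow
    print(f"m = {m:4d}: det(U_m + Diag(4,0,...,0))^(1/m) = {np.exp(logdet / m):.4f}")
```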