1 Main results

We are interested in Hausdorff dimensional properties of spectral measures of Schrödinger operators

$$\begin{aligned} (Hu)(x)=-\frac{\mathrm {d}^2 u}{\mathrm {d}x^2}(x)+V(x)u(x) \end{aligned}$$
(1.1)

acting in \(\mathrm {L}^2 (I_b)\), where \(I_b=[0,b]\), \(0<b<\infty \), is a bounded interval of \(\mathbb {R}\); our potentials V(x) are signed combs of delta distributions carefully spaced in \(I_b\) and accumulating only at b. The boundary condition at 0 is

$$\begin{aligned} u(0) \cos (\varphi ) +u'(0)\sin (\varphi )=0, \end{aligned}$$
(1.2)

with \(\varphi \in [0, 2\pi )\) fixed.

Pearson [14] has presented a family of such Schrödinger operators \(-\mathrm {d}^{2}/\mathrm {d} x^{2}+V_0(x)\), on the bounded interval \(I_b\), which has purely absolutely continuous spectrum in a certain range of energies. (The potential \(V_0(x)\) is a selected comb of delta distributions, and its construction is recalled in “Appendix A.”) Borrowing ideas from [12, 15], we will perturb such potential \(V_0(x)\) to obtain a model (1.1) with a singular continuous spectral component in this interval (see also Example 14.6.j in [16]).

The main contribution of this work is to compute the Hausdorff dimensions of their spectral measures. It is usually hard to present examples of potentials for which one can say something about fractal properties of spectral measures. To the best of our knowledge, these are the first examples, in bounded intervals, of singular continuous Schrödinger operators with computable dimensions.

We need some preparation in order to state our main results.

1.1 Hausdorff dimension of measures

If \(\mu \) is a positive and finite Borel measure on \({\mathbb {R}}\), its lower local dimension at \(x\in \mathrm {support}(\mu )\) is given by

$$\begin{aligned} d_\mu ^-(x) = \liminf _{\epsilon \downarrow 0}\frac{\ln \mu ((x-\epsilon ,x+\epsilon ))}{\ln \epsilon }. \end{aligned}$$

The general idea is to estimate the scaling property \(\mu ((x-\epsilon ,x+\epsilon ))\sim \epsilon ^{d_\mu (x)}\) for small \(\epsilon >0\).

Let \( \dim _{\mathrm {H}}(S)\) denote the Hausdorff dimension of the set \(S\subset {\mathbb {R}}\) and \(0< \alpha \le 1\); the upper Hausdorff dimension of \(\mu \) is defined as

$$\begin{aligned} \dim _{\mathrm {H}}^+(\mu )= \inf \{ \dim _{\mathrm {H}}(S); \mu (\mathbb {R}\setminus S)=0, \,S\; \mathrm {a\; Borel\; subset\; of}\; {\mathbb {R}} \}, \end{aligned}$$

and its lower Hausdorff dimension as

$$\begin{aligned} \dim _{\mathrm {H}}^-(\mu )= \sup \{\alpha ; \mu (S)=0\;\;\mathrm {if}\;\dim _{\mathrm {H}}(S)<\alpha ,\;S\; \mathrm {a\; Borel\; subset\; of}\; {\mathbb {R}}\}. \end{aligned}$$

If \(A \subset \mathbb {R}\) is a Borel set, we shall also consider the dimensions of the restriction of \(\mu \) to this set, that is, of \(\mu _{;A}(\cdot ){:}{=} \mu (A\cap \cdot )\).

It turns out that [5, 13]

$$\begin{aligned} \dim _{\mathrm {H}}^-(\mu ) = \mu -{\mathrm{ess.inf}}\;d_\mu ^- ,\qquad \dim _{\mathrm H}^+(\mu ) = \mu -{\mathrm{ess.sup}}\;d_\mu ^-\,. \end{aligned}$$
(1.3)

Hence, knowledge of the lower local dimensions of \(\mu \) yields the values of its Hausdorff dimensions.
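To fix ideas, the following Python sketch (purely illustrative, not part of the arguments of this paper) estimates the lower local dimension of the classical Cantor measure at a point of the Cantor set; the evaluation point \(x=1/4\) and all numerical parameters are assumptions made only for this illustration. The computed exponents are close to \(\ln 2/\ln 3\), which is also the common value of both Hausdorff dimensions of that measure.

```python
import numpy as np

def cantor_cdf(x, depth=60):
    """Cantor function (CDF of the Cantor measure on [0,1]), via the ternary self-similarity."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    y, scale = 0.0, 1.0
    for _ in range(depth):
        if x < 1.0 / 3.0:
            x, scale = 3.0 * x, 0.5 * scale
        elif x > 2.0 / 3.0:
            y, x, scale = y + 0.5 * scale, 3.0 * x - 2.0, 0.5 * scale
        else:  # the Cantor function is flat on the removed middle third
            return y + 0.5 * scale
    return y

x = 0.25  # a point of the Cantor set (ternary expansion 0.020202...)
for k in (4, 8, 12, 16, 20):
    eps = 3.0 ** (-k)
    mass = cantor_cdf(x + eps) - cantor_cdf(x - eps)   # mu((x - eps, x + eps))
    print(k, np.log(mass) / np.log(eps))               # close to ln2/ln3 ~ 0.6309
print("ln2/ln3 =", np.log(2.0) / np.log(3.0))
```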

1.2 The potential

Let the potential \(V_0(x)\) be a comb of delta distributions at suitable points \(a_n\in (0,b)\), as constructed in [14,15,16], for which the Schrödinger operator (1.1) has purely absolutely continuous spectrum on an interval \(J\subset \mathbb {R}\). We consider the potential

$$\begin{aligned} V_\omega (x)=V_0(x)+\sum _{n=1}^{\infty }g_{n}^\omega \delta (x-b_n), \quad 0<x<b, \end{aligned}$$
(1.4)

with \(b_{n}=\sum _{j=1}^{n}8a_j\) and \(b=\lim _{n\rightarrow \infty }b_{n}=\sum _{j=1}^{\infty }8a_j<\infty \); the potential \(V_\omega (x)\) is a perturbation of \(V_0(x)\) by a \(\delta \) comb located at the points of the sequence \((b_{n})\).

We assume that \((g_n^{\omega })\) is a sequence of independent (real) random variables defined on a probability space with probability measure \(\nu \), and let \(\mathbb {E}\) denote the expectation with respect to \(\nu \). For each realization \(\omega \), denote the corresponding Schrödinger operator (with boundary condition (1.2)) by

$$\begin{aligned} H_{\omega }=-\frac{\mathrm {d}^2}{\mathrm d x^{2}}+V_\omega (x), \quad 0<x<b, \end{aligned}$$
(1.5)

and its spectral measure by \(\rho _\omega \). We will also assume that there is \(\lambda >0\) such that

(i) \(\mathbb {E}((g_{n}^{\omega })^{2})=\lambda ^{2} n^{-1}\) and, for a positive constant \(C_4\), \(\mathbb {E}((g_n^\omega )^4) \le C_4\lambda ^4n^{-2}\);

(ii) \(\mathbb {E}(g_{n}^{\omega })=0\);

(iii) for some \(\epsilon >0\), there is a positive constant \(C_1\) so that \(\sup _{\omega }|g_{n}^{\omega }|\le C_1n^{-\frac{1}{3}-\epsilon }\);

(iv) \(g_{n}^{\omega }\) is independent of \((g_{j}^{\omega })_{j=1}^{n-1}\).

Note that the hypothesis that the fourth moment scales like the square of the second moment holds, for instance, for centered normal random variables, for which \(\mathbb {E}(X^{4})=3\,(\mathbb {E}(X^{2}))^{2}\).
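For concreteness, the following Python sketch exhibits one admissible choice of \((g_n^{\omega })\) and checks conditions (i)–(iii) numerically; the Rademacher choice, the value \(\lambda =1\) and all names in the code are assumptions made only for this illustration, not the construction used in the rest of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, N = 1.0, 10_000
n = np.arange(1, N + 1)

# Illustrative choice: g_n = lam * n^{-1/2} * X_n with X_n i.i.d. Rademacher (+-1 with prob. 1/2).
# Conditions (ii) and (iv) hold by construction (mean zero, independence).
X = rng.choice([-1.0, 1.0], size=N)
g = lam * n ** -0.5 * X

# (i): E((g_n)^2) = lam^2/n (deterministic here, since X_n^2 = 1) and E((g_n)^4) = lam^4 n^{-2}.
print(np.allclose(g ** 2, lam ** 2 / n))
# (iii): |g_n| = lam * n^{-1/2} <= C_1 * n^{-1/3 - eps} with C_1 = lam and eps = 1/6.
print(np.all(np.abs(g) <= lam * n ** (-1.0 / 3.0 - 1.0 / 6.0)))
# The partial sums sum_{n<=k}(g_n)^2 = lam^2 * H_k diverge logarithmically (cf. Proposition 1.1).
print((g ** 2).sum(), lam ** 2 * (np.log(N) + np.euler_gamma))
```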

1.3 Main results

For \(0<\lambda <2\), denote

$$\begin{aligned} J=J(\lambda ){:}{=}\left( \frac{6-3\sqrt{4-\lambda ^{2}}}{5},\frac{6+3\sqrt{4-\lambda ^{2}}}{5}\right) \Bigg \backslash \left\{ \frac{3(2-\sqrt{2})}{5},\frac{6}{5}, \frac{3(2+\sqrt{2})}{5}\right\} , \end{aligned}$$

and observe that \(J(\lambda )\subset (0,12/5)\) for all \(0<\lambda <2\).

Proposition 1.1

Fix \(\lambda \in (0,2)\). Then, for \(\nu \)-a.e. \(\omega \), \(J(\lambda )\) is a subset of the spectrum of \(H_\omega \) and the restricted operator \(H_\omega P^{H_\omega }(J(\lambda ))\) is purely singular continuous (where \(P^{H_\omega }\) is the spectral projection of \(H_\omega \)).

To some extent, this proposition is similar to some of the results discussed in Pearson [14,15,16], but here we have to deal with the role of random potentials, as considered in [12].

Theorem 1.2

Fix \(\lambda \in (0,2)\). Then, for \(\nu \)-a.e. \(\omega \) and each \(E\in J(\lambda )\), the spectral measure \(\rho _\omega \) has lower local dimension

$$\begin{aligned} d_{\rho _\omega }^-(E)=\alpha =\alpha (E,\lambda ){:}{=}1-\frac{9\lambda ^{2}}{60E-25E^{2}}\,. \end{aligned}$$
(1.6)

Corollary 1.3

Let \(\nu ,\lambda ,J,H_\omega \) and \(\rho _\omega \) be as above. Then, for \(\nu \)-a.e. \(\omega \),

(i) \(\dim _{\mathrm {H}}^-(\rho _{\omega ;J})=0 \) and \(\dim _{\mathrm {H}}^+(\rho _{\omega ;J})=1-\frac{\lambda ^2}{4}\).

(ii) given an interval \([m,M]\subset (0,1-\lambda ^2/4)\), \(m<M\), there is a subset \(J_{m,M}\subset J\) so that, for the spectral measure \(\rho _\omega ^{m,M}{:}{=}\rho _{\omega ;J_{m,M}}\) of the restricted operator \(H_\omega ^{m,M}{:}{=}H_\omega \,P^{H_\omega }(J_{m,M})\), one has

$$\begin{aligned} \dim _{\mathrm {H}}^-(\rho _\omega ^{m,M})=m \quad \mathrm {and}\quad \dim _{\mathrm {H}}^+(\rho _\omega ^{m,M})=M. \end{aligned}$$

In Sect. 2, we present the proofs of Proposition 1.1, Theorem 1.2 and Corollary 1.3, which make use of nontrivial technical estimates (in particular Theorem 2.1 on \(\alpha (E,\lambda )\)-subordinate solutions) of the asymptotic behavior of solutions to the eigenvalue equation for \(H_\omega \). Section 3 provides results about Hausdorff subordinacy in bounded intervals. In Sect. 4, some of the techniques mentioned above are discussed, in order to prove Theorem 2.1 in Sect. 5. For the reader’s convenience, some details regarding the construction of the unperturbed potential \(V_0\) are recalled in “Appendix A.”

2 Proofs of the main results

Proof of Proposition 1.1

As discussed in Sect. 5.1, if \(\sum _{n=1}^\infty (g_n^\omega )^2=\infty \), then \(J(\lambda )\) is contained in the spectrum of \(H_\omega \) and this operator is purely singular continuous there. Hence, the proof here amounts to showing that, for \(\nu \)-a.e. \(\omega \), one has \(\sum _{n=1}^\infty (g_n^\omega )^2=\infty \).

Let \(X_{k,\omega }=\sum _{n=1}^k (g_n^\omega )^2\) and, given \(N>0\), set

$$\begin{aligned} U_N=\Big \{ \omega \ ;\ \sum _{n=1}^\infty (g_n^\omega )^2 <N\Big \} \ . \end{aligned}$$

Then, \(\nu (U_N)\le \nu (\{\omega \,;\, X_{k,\omega }<N\})\) for every k. Since \(\mathbb {E}(X_{k,\omega })=\lambda ^{2}\sum _{n=1}^{k}n^{-1}\rightarrow \infty \), we may pick \(k_0\) such that \(\mathbb {E}(X_{k,\omega })>\frac{3N}{2}\) for every \(k\ge k_0\).

From the classical Bienaymé–Chebyshev inequality, for every \(t>0\),

$$\begin{aligned} \nu (\{\omega \ ;\ |X_{k,\omega }-\mathbb {E}(X_{k,\omega })|> t\}) \le \frac{\mathrm{var}(X_{k,\omega })}{t^2}, \end{aligned}$$

where \(\mathrm{var}(\cdot )\) denotes the variance. By the independence of \(g_n^\omega \),

$$\begin{aligned} \mathrm{var}(X_{k,\omega }) = \sum _{n=1}^k \mathrm{var}((g_n^\omega )^2) < {\tilde{C}}_4 , \end{aligned}$$

since \(\mathbb {E}((g_n^\omega )^4) \le C_4\lambda ^4n^{-2}\).

Therefore, for each k,

$$\begin{aligned} \nu (\{\omega \ ;\ \mathbb {E}(X_{k,\omega })-X_{k,\omega }>t\}) \le \nu (\{\omega \ ;\ |\mathbb {E}(X_{k,\omega })-X_{k,\omega }|>t\}) \le \frac{{\tilde{C}}_4}{t^2} , \end{aligned}$$

Taking \(t=N/2\) and using that \(\mathbb {E}(X_{k,\omega })>\frac{3N}{2}\) for \(k\ge k_0\), this implies that \(\nu (\{\omega \,;\, X_{k,\omega }<N\}) \le \frac{4{\tilde{C}}_4}{N^2}\) for \(k\ge k_0\), and so \(\nu (U_N)\le \frac{4{\tilde{C}}_4}{N^2}\). Since \(\nu (U_N)\le \nu (U_M)\le \frac{4{\tilde{C}}_4}{M^2}\) for every \(M\ge N\), it follows that \(\nu (U_N)=0\) for each \(N\), and the result follows. \(\square \)
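The mechanism behind this argument — the mean of \(X_{k,\omega }\) grows like \(\lambda ^{2}\ln k\) while its variance stays bounded — can be observed numerically. The following Python sketch is purely illustrative, with uniformly distributed \(g_n^{\omega }\) (one admissible choice satisfying (i)–(iv)) and arbitrary sample sizes as working assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, n_real, k = 1.0, 500, 10_000
n = np.arange(1, k + 1)

# Illustrative choice: g_n = lam * sqrt(3/n) * U_n with U_n i.i.d. uniform on [-1, 1],
# so that E(g_n^2) = lam^2/n, E(g_n^4) = (9/5) lam^4 n^{-2} and E(g_n) = 0.
U = rng.uniform(-1.0, 1.0, size=(n_real, k))
X_k = ((lam * np.sqrt(3.0 / n) * U) ** 2).sum(axis=1)   # X_{k,omega}, one value per realization

print("lam^2 ln k =", lam ** 2 * np.log(k))                    # the mean grows without bound in k
print("sample mean and variance:", X_k.mean(), X_k.var())      # the variance stays bounded in k
print("empirical estimate of nu(X_k < 5):", (X_k < 5).mean())  # ~ 0 once E(X_k) is large
```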

For a solution u to the eigenvalue equation

$$\begin{aligned} (H_\omega u)(x)=Eu(x) \end{aligned}$$
(2.1)

and \(0<L<b\), denote

$$\begin{aligned} \Vert u\Vert _{L}=\left( \int _{0}^{L}|u(r)|^2 \mathrm {d} r\right) ^{1/2}. \end{aligned}$$
(2.2)

For a finite Borel measure \(\mu \) on \({\mathbb {R}}\) and \(0\le \gamma \le 1\), consider the upper \(\gamma \)-derivative of \(\mu \) at \(x\in \mathbb {R}\), given by

$$\begin{aligned} {\overline{D}}_\mu ^\gamma (x) = \limsup _{\epsilon \rightarrow 0} \frac{\mu ((x-\epsilon ,x+\epsilon ))}{(2\epsilon )^\gamma }\,. \end{aligned}$$

The proof of Theorem 1.2 is based on the following theorem, whose proof is presented in Sect. 5 and which is an important technical part of this paper. Since for \(\nu \)-a.e. \(\omega \) the operator \(H_\omega \) has no eigenvalue in J, for every nonzero solution u to the eigenvalue equation (2.1) with \(E\in J\), one has \(\Vert u\Vert _L\rightarrow \infty \) as \(L\rightarrow b\).

Theorem 2.1

Let \(0<\lambda <2\), and \(\alpha (E,\lambda )\) be as on the right-hand side of (1.6). Then, for \(\nu \)-a.e. \(\omega \) and each \(E\in J(\lambda )\), there exists a solution \(u_E^S\) to (2.1) so that, for all other solutions \(u_E\), linearly independent with \(u_E^S\),

$$\begin{aligned} \lim _{L\rightarrow b}\frac{\Vert u_E^S\Vert _{L}}{\Vert u_E\Vert _{L}^{\alpha /(2-\alpha )}}=A \end{aligned}$$
(2.3)

holds for some appropriate value of \(A\in (0,\infty )\).

The proof of Theorem 2.1 makes use of adaptations, to the bounded interval setting, of techniques developed for sparse potentials on unbounded intervals [12]. In a bounded interval, one does not have room for sparse potentials; these are replaced here by signed delta comb potentials, with diverging intensities as one approaches the interval endpoint b.

Proof of Theorem 1.2

First note that \(y\mapsto y/(2-y)\) is a monotonically increasing function of \(y\in [0,1]\). For \(E\in J\) and \(\alpha '<\alpha \), by (2.3) one has

$$\begin{aligned} \lim _{L\rightarrow b}\frac{\Vert u_E^S\Vert _{L}}{\Vert u_E\Vert _{L}^{\alpha '/(2-\alpha ')}}=\infty , \end{aligned}$$

and so, by the Hausdorff subordinacy theory (see Sect. 3 and [9]), it follows that \(\overline{D}_{\rho _\omega }^{\alpha '}(E)=0.\) Hence, given \(\delta >0\), for all \(\epsilon >0\) small enough,

$$\begin{aligned} \frac{\rho _\omega ((E-\epsilon ,E+\epsilon ))}{\epsilon ^{\alpha '}}<\delta \; \Longrightarrow \; \frac{\ln \rho _\omega ((E-\epsilon ,E+\epsilon ))}{\ln \epsilon }> \frac{\ln \delta }{\ln \epsilon }+\alpha ', \end{aligned}$$

and so \(d_{\rho _\omega }^-(E)\ge \alpha '\); since this holds for all \(\alpha '<\alpha \), one has \(d_{\rho _\omega }^-(E)\ge \alpha \).

On the other hand, if \(\alpha <\alpha '\), by (2.3) one has

$$\begin{aligned} \lim _{L\rightarrow b}\frac{\Vert u_E^S\Vert _{L}}{\Vert u_E\Vert _{L}^{\alpha '/(2-\alpha ')}}=0, \end{aligned}$$

and so, again by the Hausdorff subordinacy theory, it follows that \({\overline{D}}_{\rho _\omega }^{\alpha '}(E)=\infty .\) Hence, given \(N>0\), there is a subsequence \(\epsilon _j\downarrow 0\) with

$$\begin{aligned} \frac{\rho _\omega ((E-\epsilon _j,E+\epsilon _j))}{\epsilon _j^{\alpha '}}>N \; \Longrightarrow \; \frac{\ln \rho _\omega ((E-\epsilon _j,E+\epsilon _j))}{\ln \epsilon _j}< \frac{\ln N}{\ln \epsilon _j}+\alpha ' \end{aligned}$$

and so \(d_{\rho _\omega }^-(E)\le \alpha '\); since this holds for all \(\alpha <\alpha '\), one has \(d_{\rho _\omega }^-(E)\le \alpha \). By combining both inequalities, \(d_{\rho _\omega }^-(E)= \alpha \). \(\square \)

Proof of Corollary 1.3

(i) By (1.3) and Theorem 1.2, and since \(\alpha (E,\lambda )\) is a continuous function of the variable \(E\in J\), it is enough to note that \(\inf _{E\in J}\alpha (E,\lambda )=0\) and \(\sup _{E\in J}\alpha (E,\lambda )=1-\lambda ^2/4\), the supremum being approached as \(E\rightarrow 6/5\).

(ii) It is enough to take for \(J_{m,M}\) the preimage of \([m,M]\) under \(\alpha (\cdot ,\lambda )\), that is, \(J_{m,M}{:}{=}\{E\in J(\lambda )\,;\; \alpha (E,\lambda )\in [m,M]\}\). \(\square \)
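The function \(\alpha (E,\lambda )\) and the sets \(J_{m,M}\) are easily explored numerically; the Python sketch below is purely illustrative, with \(\lambda =1\) and \([m,M]=[0.2,0.6]\) as working assumptions.

```python
import numpy as np

lam = 1.0  # any value in (0, 2); chosen only for illustration

def alpha(E, lam):
    """alpha(E, lambda) = 1 - 9 lam^2 / (60E - 25E^2), the lower local dimension in (1.6)."""
    return 1.0 - 9.0 * lam ** 2 / (60.0 * E - 25.0 * E ** 2)

E_minus = (6.0 - 3.0 * np.sqrt(4.0 - lam ** 2)) / 5.0
E_plus = (6.0 + 3.0 * np.sqrt(4.0 - lam ** 2)) / 5.0
E = np.linspace(E_minus + 1e-9, E_plus - 1e-9, 200_001)
a = alpha(E, lam)

print(E_minus, E_plus)            # endpoints of J(1.0): ~ 0.161 and ~ 2.239
print(a.min(), a.max())           # inf ~ 0 and sup ~ 1 - lam^2/4 = 0.75
print(alpha(6.0 / 5.0, lam))      # the supremum value, approached as E -> 6/5

# Corollary 1.3(ii): J_{m,M} is the preimage of [m, M] under alpha(., lam); it is a union of
# two subintervals of J, since alpha increases up to E = 6/5 and decreases afterwards.
m, M = 0.2, 0.6
J_mM = E[(a >= m) & (a <= M)]
print(J_mM.min(), J_mM.max())
```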

3 Hausdorff subordinacy in bounded intervals

We provide in this section the necessary results about Hausdorff subordinacy for operators (1.1) acting in \(\mathrm {L}^2 (I_b)\). We start from known results on subordinacy for Schrödinger operators with action (1.1) in the cases of unbounded intervals \(I = \mathbb {R}\) or \(I = [0, \infty )\) [2, 3, 9,10,11,12]. The Hausdorff subordinacy results are generalizations of the subordinacy theory of Gilbert and Pearson [6, 8].

The subordinacy theory for operators (1.1) in bounded intervals was discussed in [6, 7, 16]. We now turn to the adaptation of such results to Hausdorff subordinacy. We begin by discussing the spectral properties of operators

$$\begin{aligned} \mathcal {H}=-\frac{\mathrm {d}^2}{\mathrm {d}x^2} + V(x), \end{aligned}$$
(3.1)

acting in \(\mathrm {L}^2(I_b)\), that are regular at the point 0 and limit point at b (see [16] for such standard concepts). A self-adjoint operator H can be obtained from \(\mathcal {H}\) in a suitable domain, satisfying the boundary condition (1.2) at point 0, with fixed \(\varphi \in [0, \pi )\). By varying the boundary conditions, we obtain a family of self-adjoint operators H resulting from \(\mathcal {H}\).

The spectral function \(\rho (E)\) of H generates a Borel–Stieltjes measure \(\rho \) which is the spectral measure associated with the operator H, which, by its turn, is related to the Weyl–Titchmarsh m-function through

$$\begin{aligned} \rho (E_2)-\rho (E_1)=\lim _{\delta \rightarrow 0}\lim _{\varepsilon \rightarrow 0}\frac{1}{\pi }\int _{E_{1}+\delta }^{E_{2}+\delta }\mathrm {Im}(m(E+i\varepsilon ))\mathrm {d}E, \end{aligned}$$

and the inverse relation

$$\begin{aligned} m(z)=\int _{-\infty }^{\infty }\frac{1}{(E-z)}\mathrm {d}\rho (E) +\cot (\varphi ). \end{aligned}$$

The m-function m(z) satisfies

$$\begin{aligned} \hat{u}(x,z){:}{=}u_{2}(x,z)+m(z)u_{1}(x,z) \in {\mathrm {L}}^2(I_b), \end{aligned}$$
(3.2)

for all \(z\in \mathbb {C}\backslash \mathbb {R}\), with \(u_1(x,z)\) and \(u_2(x,z)\) solutions to \(\mathcal {H}u=zu\) satisfying the orthogonal initial conditions

$$\begin{aligned} \left\{ \begin{array}{ll} u_{1}(0,z)=-\sin \varphi &{}\ \ \ u_{2}(0,z)=\cos \varphi \\ u'_{1}(0,z)=\cos \varphi &{}\ \ \ u'_{2}(0,z)=\sin \varphi \end{array} \right. . \end{aligned}$$
(3.3)

We denote by \(u_{1,E}\) and \(u_{2,E}\) the solutions to equation (2.1) in \(I_b\), satisfying the initial conditions (3.3) for \(\varphi \in [0,2\pi )\) fixed. Since the norm (2.2) of at least one of these solutions diverges as \(L\rightarrow b\), let \(L(\varepsilon )\in (0,b)\) be the length defined by the equality

$$\begin{aligned} \Vert u_{1,E}\Vert _{L(\varepsilon )} \Vert u_{2,E}\Vert _{L(\varepsilon )}= \frac{1}{2\varepsilon }\ . \end{aligned}$$
(3.4)

Following the lines of Jitomirskaya and Last [9], and taking into account the variation of parameters formula, i.e., equation (7.2.8) in [16], one can prove the following inequalities:

Proposition 3.1

Assume that the differential operator \(\mathcal {H}\) in (3.1) is regular at 0 and limit point at b. Let H be the self-adjoint operator defined by \(\mathcal {H}\) satisfying the boundary conditions (1.2). Given \(E\in \mathbb {R}\) and \(\varepsilon >0\), then

$$\begin{aligned} \frac{5-\sqrt{24}}{|m(E+i\varepsilon )|}< \frac{\Vert u_{1,E}\Vert _{L(\varepsilon )}}{\Vert u_{2,E}\Vert _{L(\varepsilon )}}< \frac{5+\sqrt{24}}{|m(E+i\varepsilon )|}. \end{aligned}$$
(3.5)

Relation (3.5) allows for a generalization of Hausdorff subordinacy results [1, 9,10,11] to obtain dimensional properties of the spectral measure of H in bounded intervals. The next result follows directly from Proposition 3.1 and relation (3.4).

Corollary 3.2

Let H be as in Proposition 3.1, \(E\in \mathbb {R}\) and \(\kappa \in (0,1]\) fixed. Then,

$$\begin{aligned} \limsup _{\varepsilon \rightarrow 0}\varepsilon ^{(1-\kappa )}|m(E+i\varepsilon )|=\infty \ \Longleftrightarrow \ \liminf _{L \rightarrow b} \frac{\Vert u_{1,E}\Vert _{L}}{\Vert u_{2,E}\Vert ^{\kappa /(2-\kappa )}_{L}}=0. \end{aligned}$$

Given \(\kappa \in (0,1]\), a solution u (for fixed E) to (2.1) is called \(\kappa \)-Hausdorff subordinate at b if

$$\begin{aligned} \liminf _{L\rightarrow b}\frac{\Vert u\Vert _{L}}{\Vert v\Vert ^{\kappa /(2-\kappa )}_{L}}=0, \end{aligned}$$

for any solution v to (2.1) linearly independent with u. The (original) subordinate notion [6, 8] is recovered by taking \(\kappa =1\).

Inspired by known results presented in Chapter 7 of [16], we have

Theorem 3.3

Let \(\rho \) be the spectral measure associated with the operator H, as in Proposition 3.1, with boundary condition (1.2) at point 0 with \(\varphi \in [0, 2\pi )\) fixed. Pick \(\kappa \) \(\in \) (0, 1] and let \({\mathcal {I}}\) be a subset of the spectrum of H with \(\rho ( {{\mathcal {I}}})>0\).

(i) Suppose that for all \(E\in {{\mathcal {I}}}\), the solution \(u_{E}\) to (2.1) is \(\kappa \)-Hausdorff subordinate at the point b. Then, \({\overline{D}}_{\rho }^{\kappa }(E)=\infty \) for all \(E\in {{\mathcal {I}}}\); in particular, the restriction \(\rho ( {{\mathcal {I}}}\cap \cdot )\) is \(\kappa \)-Hausdorff singular.

(ii) Suppose that for all \(E\in {{\mathcal {I}}}\), there is no solution to (2.1) that is \(\kappa \)-Hausdorff subordinate at b. Then, \({\overline{D}}_{\rho }^{\kappa }(E)<\infty \) for all \(E\in {{\mathcal {I}}}\); in particular, the restriction \(\rho ( {{\mathcal {I}}}\cap \cdot )\) is \(\kappa \)-Hausdorff continuous.

Proof

By general results on Hausdorff measures and corresponding decompositions with respect to the behavior of \(\kappa \)-derivative \({\overline{D}}_{\rho }^{\kappa }\) (see [4, 5, 13, 17]), this theorem is a simple consequence of Corollary 3.2 and the definition of \(\kappa \)-subordinate solution.

More precisely, we have

$$\begin{aligned} {\overline{D}}_{\rho }^{\kappa }(E)=\infty \; \Longleftrightarrow \; \limsup _{\varepsilon \rightarrow 0}\varepsilon ^{(1-\kappa )}|m(E+i\varepsilon )|=\infty . \end{aligned}$$
(3.6)
(i) If for all E, in some subset \( {{\mathcal {I}}}\), the solution \(u_{E}\) to Eq. (2.1) is \(\kappa \)-Hausdorff subordinate at the point b, then, by Corollary 3.2, we have that

$$\begin{aligned} \limsup _{\varepsilon \rightarrow 0}\varepsilon ^{(1-\kappa )}|m(E+i\varepsilon )|=\infty \ . \end{aligned}$$

Consequently, by Eq. (3.6) we have \(\overline{D}_{\rho }^{\kappa }(E)=\infty ,\) for all \(E\in {{\mathcal {I}}}\). Thus, the restriction \(\rho ( {{\mathcal {I}}}\cap \cdot )\) is \(\kappa \)-Hausdorff singular.

(ii) If for all E, in some subset \( {{\mathcal {I}}}\), there is no solution to Eq. (2.1) that is \(\kappa \)-Hausdorff subordinate at b, then by Corollary 3.2,

$$\begin{aligned} \limsup _{\varepsilon \rightarrow 0}\varepsilon ^{(1-\kappa )}|m(E+i\varepsilon )|<\infty . \end{aligned}$$

Similarly, by Eq. (3.6), we have \(\overline{D}_{\rho }^{\kappa }(E)<\infty ,\) for all \(E\in {{\mathcal {I}}}\). Then, the restriction \(\rho ( {{\mathcal {I}}}\cap \cdot )\) is \(\kappa \)-Hausdorff continuous.

\(\square \)

4 Asymptotic behavior of solutions

In this section, we use some notation presented in “Appendix A.” Pick a decreasing sequence \((a_{n})\) of positive numbers and set \(b_{n}=\sum _{j=1}^{n}8a_j\) so that

$$\begin{aligned} b=\lim _{n\rightarrow \infty }b_{n}=\sum _{n=1}^{\infty }8a_n<\infty , \end{aligned}$$

and consider the matrices in Eqs. (A.2) and (A.6), respectively,

$$\begin{aligned} M_{n,n-1}(E)=\left( \begin{array}{c@{\quad }c} 1 -\frac{5}{3}E+O(a_n^{1/2}) &{} 1+O(a_n^{1/2}) \\ -\frac{5}{3}E+O(a_n^{1/2}) &{} 1+O(a_n)\end{array}\right) ,\quad M(E) = \left( \begin{array}{c@{\quad }c} 1 -\frac{5}{3}E &{} 1 \\ -\frac{5}{3}E &{} 1\end{array}\right) . \end{aligned}$$
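As a quick sanity check of this setup (a purely illustrative numerical sketch, not part of the argument), one can verify that \(M(E)\) has eigenvalues \(e^{\pm i\phi }\) with \(\cos \phi =1-\frac{5}{6}E\), and that the vectors \(f_{\pm }\) introduced below are corresponding eigenvectors; the value \(E=1.7\) is an arbitrary choice in \((0,12/5)\).

```python
import numpy as np

E = 1.7                                    # an arbitrary energy in (0, 12/5)
M = np.array([[1.0 - 5.0 * E / 3.0, 1.0],
              [-5.0 * E / 3.0, 1.0]])
phi = np.arccos(1.0 - 5.0 * E / 6.0)       # the angle phi with cos(phi) = 1 - 5E/6

f_plus = np.array([1.0, 1.0 - np.exp(-1j * phi)])   # f_+ of the expansion (4.2) below
f_minus = np.array([1.0, 1.0 - np.exp(1j * phi)])   # f_-

print(np.allclose(M @ f_plus, np.exp(1j * phi) * f_plus))     # True: M f_+ = e^{+i phi} f_+
print(np.allclose(M @ f_minus, np.exp(-1j * phi) * f_minus))  # True: M f_- = e^{-i phi} f_-
print(np.sort_complex(np.linalg.eigvals(M)))
print(np.sort_complex(np.array([np.exp(1j * phi), np.exp(-1j * phi)])))
```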

To simplify the notation, we will study the behavior of the solutions to the eigenvalue equation

$$\begin{aligned} -u''(x)+V(x)u(x)=Eu(x), \end{aligned}$$
(4.1)

with potential V(x) of the form (1.4), and will use Prüfer variables R and \(\theta \) to analyze the behavior of its solutions. First write

$$\begin{aligned} \left( \begin{array}{c} u_{E}(b_n) \\ u'_{E}(b_n)\end{array}\right) =p_nf_{+}+q_{n}f_{-}, \end{aligned}$$
(4.2)

with eigenvectors

$$\begin{aligned} f_{+}=\left( \begin{array}{c} 1 \\ 1-e^{-i\phi }\end{array}\right) \ \ \ \text {and} \ \ \ f_{-}=\left( \begin{array}{c} 1 \\ 1-e^{i\phi }\end{array}\right) \end{aligned}$$

of the matrix M(E), associated with eigenvalues \(e^{\pm i\phi }\), respectively. Define \(R_{n}>0\) and \(\theta _{n}\in {\mathbb {R}}\) by

$$\begin{aligned} p_{n}=iR_n e^{i\theta _n} \ \ \ \ \ \text {and} \ \ \ \ \ \ q_{n}=-iR_n e^{-i\theta _n}, \end{aligned}$$

satisfying the initial conditions

$$\begin{aligned} -2R_0\sin {\theta _0} = \cos {\varphi }, \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ -2R_0 \sin {\theta _0} +2R_0 \sin (\theta _0-\phi ) = \sin {\varphi } \ , \end{aligned}$$
(4.3)

where \(\theta _0\) is chosen in \([0,2\pi )\). By relation (A.7), we can write, for each \(b_n>0\),

$$\begin{aligned} \int _{0}^{b_{n}}[u_{E}(x)]^{2} \mathrm {d}x= u'_{E}(b_{n})\frac{\mathrm {d}}{\mathrm {d} E}u_{E}(b_{n})-u_{E}(b_{n})\frac{\mathrm {d}}{\mathrm {d} E}u'_{E}(b_{n}). \end{aligned}$$

By (4.2),

$$\begin{aligned} u_{E}(b_n)=-2R_n\sin \theta _n, \ \ \ \ \ \ \ \ u'_{E}(b_n)=2R_n[-\sin \theta _n + \sin (\theta _n -\phi )], \end{aligned}$$

and so

$$\begin{aligned} \Vert u_E\Vert _{b_n}^2=\int _{0}^{b_{n}}[u_{E}(x)]^{2} \mathrm {d}x=4R_n^2\,\frac{\partial {\theta }_n}{\partial E}\,\sin \phi . \end{aligned}$$
(4.4)

We use transfer matrices to analyze the behavior of \(R_n\) and \(\frac{\partial {\theta }_n}{\partial E}\) for large values of n.

We have, by the construction of the potential V(x) and (4.2), that

$$\begin{aligned} \left( \begin{array}{c} u_{E}(b_n) \\ u'_{E}(b_n)\end{array}\right)= & {} \begin{pmatrix} 1 &{} 0 \\ g_n &{} 1 \end{pmatrix} M_{n,n-1}(E)\left( \begin{array}{c} u_{E}(b_{n-1}) \\ u'_{E}(b_{n-1})\end{array}\right) \\= & {} \begin{pmatrix} 1 &{} 0 \\ g_n &{} 1 \end{pmatrix} (M(E)+O(a_n))\left( \begin{array}{c} u_{E}(b_{n-1}) \\ u'_{E}(b_{n-1})\end{array}\right) \\= & {} \begin{pmatrix} 1 &{} 0 \\ g_n &{} 1 \end{pmatrix} ( p_{n-1}e^{i\phi } f_++ q_{n-1}e^{-i\phi } f_-)+O(a_n) \\= & {} \begin{pmatrix} 1 &{} 0 \\ g_n &{} 1 \end{pmatrix} \begin{pmatrix} p_{n-1}e^{i\phi }+q_{n-1} e^{-i\phi } \\ p_{n-1}e^{i\phi }(1-e^{-i\phi })+q_{n-1}e^{-i\phi }(1-e^{i\phi }) \end{pmatrix} +O(a_n) \\= & {} \begin{pmatrix} p_{n-1}e^{i\phi } +q_{n-1}e^{-i\phi } \\ g_n (p_{n-1}e^{i\phi }+q_{n-1}e^{-i\phi }) +p_{n-1}(e^{i\phi }-1)+q_{n-1}(e^{-i\phi }-1) \end{pmatrix} +O(a_n), \end{aligned}$$

and again by (4.2),

$$\begin{aligned} \left( \begin{array}{c} u_{E}(b_n) \\ u'_{E}(b_n)\end{array}\right) =\begin{pmatrix} p_n+q_n \\ p_n(1-e^{-i\phi }) + q_n(1-e^{i\phi }) \end{pmatrix}. \end{aligned}$$
(4.5)

Thus, we conclude that

$$\begin{aligned} p_n+q_n&= p_{n-1}e^{i\phi }+q_{n-1}e^{-i\phi } +O(a_n), \\ p_n(1-e^{-i\phi })+q_n(1-e^{i\phi })&= g_n (p_{n-1}e^{i\phi }+q_{n-1}e^{-i\phi }) +p_{n-1}(e^{i\phi }-1) +q_{n-1}(e^{-i\phi }-1)+O(a_n). \end{aligned}$$

We need recurrence relations for \(p_n\) and \(q_n\). Multiplying the above first equation by \((1-e^{i\phi })\) and subtracting the second one,

$$\begin{aligned} p_n(-e^{i\phi }+e^{-i\phi })&= p_{n-1}e^{i\phi }(1-e^{i\phi })+q_{n-1}e^{-i\phi }(1-e^{i\phi }) -g_n(p_{n-1}e^{i\phi }+q_{n-1}e^{-i\phi })\\&\;\;-p_{n-1}(e^{i\phi }-1) -q_{n-1}(e^{-i\phi }-1)+O(a_n) \\&= p_{n-1}(1-e^{2i\phi }) -g_n(p_{n-1}e^{i\phi }+q_{n-1}e^{-i\phi })+O(a_n). \end{aligned}$$

Therefore,

$$\begin{aligned} p_n&= p_{n-1} e^{i\phi } \left( 1-\frac{i g_n}{2\sin {\phi }}\right) -\frac{i g_n e^{-i\phi }}{2\sin {\phi }} q_{n-1} +O(a_n)\ , \end{aligned}$$
(4.6)
$$\begin{aligned} q_n&= q_{n-1} e^{-i\phi } \left( 1+\frac{i g_{n}}{2\sin {\phi }}\right) +\frac{i g_n e^{i\phi }}{2\sin {\phi }} p_{n-1} + O(a_n) \ . \end{aligned}$$
(4.7)

Relation (4.7) can be obtained with a calculation similar to the one done to obtain \(p_n\) in terms of \((p_{n-1},q_{n-1})\), i.e., multiplying by \((1-e^{-i\phi })\), or equivalently using that \(q_n=p_n^*\).

From \(p_nq_n=R_n^2\), we have

$$\begin{aligned} \frac{R_n^2}{R_{n-1}^2} = 1+\frac{g_n^2}{2\sin ^2\phi } - \frac{g_n}{\sin {\phi }} \left( \sin {2(\theta _{n-1}+\phi )}+ \frac{g_n \cos {2(\theta _{n-1}+\phi )}}{2\sin {\phi }} \right) +O(a_n).\nonumber \\ \end{aligned}$$
(4.8)

To simplify the notation, we denote \(\tilde{\theta }_{n}={\theta }_{n-1}+\phi \), so that Eq. (4.8) can be rewritten as

$$\begin{aligned} \frac{R_{n}^{2}}{R_{n-1}^{2}}=1-\frac{g_n}{\sin \phi }\sin (2\tilde{\theta }_{n})+\frac{g_{n}^{2}}{\sin ^{2}\phi }\sin ^{2}(\tilde{\theta }_{n})+O(a_n). \end{aligned}$$
(4.9)

On the other hand, multiplying (4.6) by itself, we obtain

$$\begin{aligned} e^{2i{\theta }_{n}}=\frac{R_{n-1}^2}{R_n^2}\left[ e^{2i\tilde{\theta }_{n}}-\eta _n^2(e^{2i\tilde{\theta }_{n}}-e^{-2i\tilde{\theta }_{n}}+2)-2i\eta _n(e^{2i\tilde{\theta }_{n}}+1)+O(a_n)\right] , \end{aligned}$$

with \(\eta _n=\frac{g_n}{2\sin \phi }\). We can also write relation (4.9) as

$$\begin{aligned} \frac{R_{n}^2}{R_{n-1}^2}= 1+\eta _n^2(2-e^{2i\tilde{\theta }_{n}}-e^{-2i\tilde{\theta }_{n}})+i\eta _n(e^{-2i\tilde{\theta }_{n}}-e^{2i\tilde{\theta }_{n}})+O(a_n). \end{aligned}$$

Thus,

$$\begin{aligned} e^{2i{\theta }_{n}}=\frac{e^{2i\tilde{\theta }_{n}}+\eta _n^2(e^{2i\tilde{\theta }_{n}}-e^{-2i\tilde{\theta }_{n}}+2)-i\eta _n(e^{2i\tilde{\theta }_{n}}+1)+O(a_n)}{1+\eta _n^2(2-e^{2i\tilde{\theta }_{n}}-e^{-2i\tilde{\theta }_{n}})+i\eta _n(e^{-2i\tilde{\theta }_{n}}-e^{2i\tilde{\theta }_{n}})+O(a_n)}. \end{aligned}$$
(4.10)

Proposition 4.1

There is a constant \( C>0\), depending on \(\phi \in (0,\pi )\), such that

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{1}{n}\left| \frac{\partial {\theta }_n}{\partial E}\right| = C, \end{aligned}$$

with \(\cos \phi =1-\frac{5}{6}E\).

Proof

To prove this bound on \(\frac{\partial {\theta }_n}{\partial E}\), it suffices to consider \(\frac{\partial {\theta }_n}{\partial \phi }\), because \(\frac{\partial {\theta }_n}{\partial E}=\frac{5}{6\sin \phi }\frac{\partial {\theta }_n}{\partial \phi }\). Since \(\lim g_n=0\) and remembering that \(\eta _n=\frac{g_n}{2\sin \phi }\), we can write (4.10) as

$$\begin{aligned} e^{2i{\theta }_{n}}=\frac{e^{2i\tilde{\theta }_{n}}}{1+z(\theta _n,\phi )} +t(\theta _n,\phi ) , \end{aligned}$$

with

$$\begin{aligned} z(\theta _n,\phi )=\eta _n^2(2-e^{2i\tilde{\theta }_{n}}-e^{-2i\tilde{\theta }_{n}})+i\eta _n(e^{-2i\tilde{\theta }_{n}}-e^{2i\tilde{\theta }_{n}})+O(a_n) \end{aligned}$$

and

$$\begin{aligned} t(\theta _n,\phi )=\frac{\eta _n^2(e^{2i\tilde{\theta }_{n}}-e^{-2i\tilde{\theta }_{n}}+2)-i\eta _n(e^{2i\tilde{\theta }_{n}}+1)+O(a_n)}{1+\eta _n^2(2-e^{2i\tilde{\theta }_{n}}-e^{-2i\tilde{\theta }_{n}})+i\eta _n(e^{-2i\tilde{\theta }_{n}}-e^{2i\tilde{\theta }_{n}})+O(a_n)}, \end{aligned}$$

which are continuously differentiable functions and converge uniformly to zero as \(n\rightarrow \infty \); hence, we obtain the estimate

$$\begin{aligned} \left| \frac{\partial {\theta }_n}{\partial \phi }\right| \le C_n\left( \left| \frac{\partial {\theta }_{n-1}}{\partial \phi }\right| +1\right) + \varepsilon _n, \end{aligned}$$
(4.11)

with the sequences \(C_n \rightarrow 1\) and \(\varepsilon _n \rightarrow 0\).

Assume inductively that \(\frac{\partial {\theta }_{n-1}}{\partial \phi }=O(n-1)\), that is, there is a constant D such that

$$\begin{aligned} \left| \frac{\partial {\theta }_{n-1}}{\partial \phi }\right| \le D(n-1), \end{aligned}$$

and observe that the induction base case follows from the initial conditions (4.3). Then, by relation (4.11),

$$\begin{aligned} \left| \frac{\partial {\theta }_n}{\partial \phi }\right| \le n\left( DC_n+\frac{(1-D)}{n}C_n + \frac{\varepsilon _n}{n}\right) . \end{aligned}$$

Since \(C_n \rightarrow 1\) and \(\varepsilon _n \rightarrow 0\) as \(n\rightarrow \infty \), we conclude that \( \frac{\partial {\theta }_n}{\partial \phi }= O(n)\). Therefore, there is a \( C>0 \), depending on \(\phi \in (0,\pi )\), so that

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{1}{n}\left| \frac{\partial {\theta }_n}{\partial E}\right| = C. \end{aligned}$$

This completes the proof of the proposition. \(\square \)

Note that, by Eq. (4.4) and Proposition 4.1, there is a \(0<C<\infty \) so that

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{\Vert u_E\Vert ^2_{b_{n}}}{n\,R_n^2}=\lim _{n\rightarrow \infty }\frac{1}{n\,R_n^2}\int _{0}^{b_{n}}[u_{E}(x)]^{2}\mathrm {d}x = C. \end{aligned}$$
(4.12)

We conclude that it is possible to obtain information about the asymptotic behavior of the solutions \(u_{E}\) by studying the behavior of the sequence \((R_{n})\), as \(n\rightarrow \infty \). In the next section, we analyze the sequence \((R_{n})\) for selected \((g_n)\), concluding the proof of Theorem 2.1.

5 Proof of Theorem 2.1

We note that the estimates for the Prüfer variables in (4.9) coincide, up to terms of \(O(a_n)\), with the estimates in [12] (Equation (2.12.c)) for discrete Schrödinger operators. Since the sequence \((a_n)\) is summable, these terms are not expected to interfere with the asymptotic behavior of the sequence \((R_{n})\). We will justify this expectation in the following.

We specialize to the potential \(V_{\omega }(x)\) of the form (1.4), with independent random variables \(g_{n}\equiv g_n^{\omega }\) defined on a probability space with probability measure \(\nu \) and satisfying conditions (i)-(iv) in Sect. 1.2.

Proposition 5.1

Suppose that the sequence \((g_{n}^{\omega })\) satisfies (i)-(iv) in Sect. 1.2. Fix \(\phi \in (0,\pi )\), with \(\phi \ne \frac{\pi }{4},\frac{\pi }{2},\frac{3\pi }{4}\). Then, for \(\nu \)-a.e. \(\omega \), the variables \(R_n\), associated with a solution to the eigenvalue equation (4.1), satisfy

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{\ln R_{n}}{\left( \sum _{j=1}^{n}\frac{1}{j}\right) }=\frac{\lambda ^{2}}{8\sin ^{2}\phi }. \end{aligned}$$
(5.1)

Proof

Fix E and an initial condition \(\theta _{0}\) associated with a solution to (4.1); by (4.9),

$$\begin{aligned} \ln R_{n}-\ln R_{n-1}=\frac{1}{2}\ln \left( 1-\frac{g_n^{\omega }}{\sin \phi }\sin (2\tilde{\theta }_{n})+\frac{(g_{n}^{\omega })^{2}}{\sin ^{2}\phi }\sin ^{2}(\tilde{\theta }_{n})+y_n\right) , \end{aligned}$$

with \((y_{n})\) a sequence so that \(|y_{n}|\le C a_{n}\), for a \(C>0\) independent of n. Since

$$\begin{aligned} \sup _{\omega }\left| \frac{g_n^{\omega }}{\sin \phi }\sin (2\tilde{\theta }_{n})+\frac{(g_{n}^{\omega })^{2}}{\sin ^{2}\phi }\sin ^{2}(\tilde{\theta }_{n})+y_n\right| \longrightarrow 0, \end{aligned}$$

as \(n\rightarrow \infty \), we may use the expansion

$$\begin{aligned} \ln (1+x)=x-\frac{x^{2}}{2}+O(x^{3}) \end{aligned}$$

to get

$$\begin{aligned}&\ln \Big (1-\frac{g_n^{\omega }}{\sin \phi }\sin (2\tilde{\theta }_{n})+\frac{(g_{n}^{\omega })^{2}}{\sin ^{2}\phi }\sin ^{2}(\tilde{\theta }_{n})+y_n\Big ) \\&\quad = -\frac{g_n^{\omega }}{\sin \phi }\sin (2\tilde{\theta }_{n})+ \frac{(g_{n}^{\omega })^{2}}{\sin ^{2}{\phi }}\sin ^{2}(\tilde{\theta }_{n})\\&\qquad + \ y_n -\frac{(g_{n}^{\omega })^{2}}{2\sin ^{2}\phi }\sin ^{2}(2\tilde{\theta }_{n})\\&\qquad + \ O\left( (g_{n}^{\omega })^{3}+a_n\right) . \end{aligned}$$

Now, by using the trigonometric relation

$$\begin{aligned} \sin ^{2}\theta -\frac{1}{2}\sin ^{2}(2\theta )= & {} \frac{1}{2}-\frac{1}{2}\cos (2\theta ) -\frac{1}{2}\left[ \frac{1}{2}-\frac{1}{2}\cos (4\theta )\right] \\= & {} \frac{1}{4}-\frac{1}{2}\cos (2\theta )+\frac{1}{4}\cos (4\theta ), \end{aligned}$$

we obtain that

$$\begin{aligned} \ln R_{n}=\ln R_{0}+\frac{1}{8\sin ^{2}\phi }\sum _{j=1}^{n}\mathbb {E}((g_{j}^{\omega })^{2})+ C_{1} + C_2 + C_3 + C_4, \end{aligned}$$

with corrections

$$\begin{aligned} \left. \begin{array}{l} C_1 = -\frac{1}{2\sin \phi }\sum _{j=1}^{n} g_{j}^{\omega }\sin (2\tilde{\theta }_{j})\\ C_2 = \frac{1}{2\sin ^{2}\phi }\sum _{j=1}^{n} \left[ (g_{j}^{\omega })^{2} - \mathbb {E}((g_{j}^{\omega })^{2})\right] \left[ \sin ^{2}(\tilde{\theta }_{j})-\frac{1}{2}\sin ^{2}(2\tilde{\theta }_{j})\right] \\ C_3 = \frac{1}{2\sin ^{2}\phi }\sum _{j=1}^{n} \mathbb {E}((g_{j}^{\omega })^{2}) \left[ \frac{1}{4}\cos (4\tilde{\theta }_{j})-\frac{1}{2}\cos (2\tilde{\theta }_{j})\right] \\ C_4 = \sum _{j=1}^{n} O\left( (g_{j}^{\omega })^{3}+a_{j}\right) . \end{array}\right. \end{aligned}$$

Hence, the result follows if we prove that for each \(q=1,2,3,4\) and a.e. \(\omega \),

$$\begin{aligned} \lim _{n\rightarrow \infty } \frac{|C_{q}(\omega )|}{\left( \sum ^{n}_{j=1}\frac{1}{j}\right) }=0. \end{aligned}$$

For \(q=1,2,3\), the above limit follows exactly as in Theorem 8.2 in [12], since we have analogous expressions for \(C_1,C_2,C_3\); here it is necessary that \(\phi \ne \frac{\pi }{4},\frac{\pi }{2},\frac{3\pi }{4}\).

For \(q=4\), the result follows from hypothesis (iii) and the construction of the sequence \((a_n)\), since \(\sum _{n=1}^{\infty }8a_n=b<\infty \). \(\square \)
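For the interested reader, the limit (5.1) can also be observed numerically by iterating the recursions (4.6)–(4.7); the Python sketch below is purely illustrative, with a Rademacher choice for \((g_n^{\omega })\), the values \(\lambda =1\) and \(\phi =1.3\), and the omission of the \(O(a_n)\) remainders as working assumptions (the latter being harmless since \(\sum _n a_n<\infty \)).

```python
import numpy as np

rng = np.random.default_rng(2)
lam, phi = 1.0, 1.3                  # phi in (0, pi), away from pi/4, pi/2, 3pi/4
N, n_real = 100_000, 200             # number of steps and of independent realizations

# Illustrative choice: g_n = lam * n^{-1/2} * X_n with X_n i.i.d. Rademacher (satisfies (i)-(iv));
# the O(a_n) remainders of (4.6)-(4.7) are dropped.
p = np.full(n_real, 1j * np.exp(1j * 0.3))   # p_0 = i R_0 e^{i theta_0} with R_0 = 1, theta_0 = 0.3
q = p.conj()                                 # q_0 = p_0^*

for k in range(1, N + 1):
    eta = lam * k ** -0.5 * rng.choice([-1.0, 1.0], size=n_real) / (2.0 * np.sin(phi))
    p, q = (p * np.exp(1j * phi) * (1.0 - 1j * eta) - 1j * eta * np.exp(-1j * phi) * q,
            q * np.exp(-1j * phi) * (1.0 + 1j * eta) + 1j * eta * np.exp(1j * phi) * p)

log_R = 0.5 * np.log((p * q).real)           # R_N^2 = p_N q_N
H_N = np.log(N) + np.euler_gamma             # approximation of sum_{j<=N} 1/j
print(log_R.mean() / H_N)                    # roughly lam^2 / (8 sin^2 phi); convergence is slow
print(lam ** 2 / (8.0 * np.sin(phi) ** 2))   # ~ 0.1346
```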

Proposition 5.2

Let \((g_{n}^{\omega })\) be as in Proposition 5.1. Fix \(\phi \in (0,\pi )\), with \(\phi \ne \frac{\pi }{4},\frac{\pi }{2},\frac{3\pi }{4}\). Then, for \(\nu \)-a.e. \(\omega \), there is a solution \(u_{\theta _{\omega }}\) to (4.1) so that \(R^{(\theta _{\omega })}\) satisfies

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{\ln R^{(\theta _{\omega })}_{n}}{\ln n}=-\frac{\lambda ^{2}}{8\sin ^{2}\phi }. \end{aligned}$$
(5.2)

Proof

Let \(\beta =\frac{\lambda ^{2}}{8\sin ^{2}\phi }\). Let \(R^{(1)}_{n}\) and \(R^{(2)}_{n}\) be the radial Prüfer variables associated with \(\theta _{n}^{(1)}\) and \(\theta _{n}^{(2)}\), respectively. By Proposition 5.1, and since \(\sum _{j=1}^{n}\frac{1}{j}\sim \ln n\), for a.e. \(\omega \) and \(k=1,2\),

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{\ln R_{n}^{(k)}}{\ln n}=\beta . \end{aligned}$$
(5.3)

By relation (4.5), we have that

$$\begin{aligned}&u_{E}(b_n)=-2R_n \sin ({\theta }_{n}),\\&\quad u'_{E}(b_n)=-2R_n[\sin (\phi )\cos ({\theta }_{n})+(1-\cos \phi )\sin ({\theta }_n)]. \end{aligned}$$

Then, for any two linearly independent solutions \(u_{1,E}\) and \(u_{2,E}\) to Eq. (4.1), associated with \(R^{(k)}\) and \(\theta ^{(k)}\), \(k=1,2\), respectively, we have

$$\begin{aligned} W[u_{1,E},u_{2,E}](b_{n})= & {} u_{1,E}(b_n)u'_{2,E}(b_n)-u'_{1,E}(b_n)u_{2,E}(b_n)\\= & {} 4R_{n}^{(1)}R_{n}^{(2)}\sin \phi \left( \sin ({\theta }^{(1)}_n)\cos ({\theta }_n^{(2)})-\sin ({\theta }_n^{(2)})\cos ({\theta }_n^{(1)})\right) \\= & {} 4R_{n}^{(1)}R_{n}^{(2)}\sin \phi \sin (\theta _{n}^{(1)}-\theta _{n}^{(2)}). \end{aligned}$$

Since the Wronskian is constant, one gets

$$\begin{aligned} R_{n}^{(1)}R_{n}^{(2)}\sin \phi \sin (\theta _{n}^{(1)}-\theta _{n}^{(2)})=C_{\alpha }. \end{aligned}$$

So, by (5.3),

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{\ln \big |\sin \big (\theta _{n}^{(1)}-\theta _{n}^{(2)}\big )\big |}{\ln n}=-2\beta . \end{aligned}$$

Therefore, to conclude this proof it is enough to follow the same steps as in the proof of Lemma 8.8 in [12] (from equation (8.20) onwards). \(\square \)

Our Propositions 5.1 and 5.2 are versions, in our setting, of Theorem 8.2 and Lemma 8.8 in [12], respectively. These results require \(\phi \ne \frac{\pi }{4},\frac{\pi }{2},\frac{3\pi }{4}\).

Recalling that in the model discussed here we have \(\cos \phi =1-\frac{5}{6}E\), it follows that \(E\in (0,\frac{12}{5})\) with

$$\begin{aligned} E\ne \frac{3(2-\sqrt{2})}{5},\,\frac{6}{5},\, \frac{3(2+\sqrt{2})}{5}. \end{aligned}$$

By Proposition 5.2 and relation (4.4), it follows that for \(\nu \)-a.e. \(\omega \) there is a solution \(u_{E}^{S}\) to Eq. (4.1) so that (the superscript S is for “subordinate”; see Sect. 5.1)

$$\begin{aligned} \Vert u_{E}^{S}\Vert _{b_{n}} \asymp n^{-\beta }n^{1/2}, \end{aligned}$$
(5.4)

with \(r_1(n)\asymp r_2 (n)\) denoting the relation \(\lim _{n\rightarrow \infty }\frac{\ln r_1}{\ln r_2}=C\), for some \(0<C<\infty \). Note that if \(\beta <1/2\), then \(\lim _{n\rightarrow \infty }\Vert u_{E}^{S}\Vert _{b_{n}}=\infty ,\) and consequently, \(u_{E}^{S}\notin \mathrm {L}^{2}(I_b)\).

Similarly, by Proposition 5.1 and relation (4.4), it follows that for every solution \(u_{E}\) to Eq. (4.1), with \(u_{E}\) linearly independent with \(u_{E}^S\), we have

$$\begin{aligned} \Vert u_{E}\Vert _{b_{n}} \asymp n^{\beta }n^{1/2}.\end{aligned}$$
(5.5)

Lemma 5.3

Suppose \(\Vert u\Vert _{b_n} \asymp n^{\beta +1/2}\). If \(L_j\) is a sequence in (0, b) with \(L_j\uparrow b\), for large j pick the (unique) subsequence \((b_{n_j})\) so that \(b_{n_j}\le L_j<b_{n_j+1}\). Then, \(\Vert u\Vert _{L_j} \asymp n_{j}^{\beta +1/2}\).

Proof

Since \(\Vert u\Vert _L\) is a monotone function of L, one has

$$\begin{aligned} \Vert u\Vert _{b_{n_j}}\le \Vert u\Vert _{L_j} < \Vert u\Vert _{b_{n_j+1}} \end{aligned}$$

and so

$$\begin{aligned} \frac{\ln \Vert u\Vert _{b_{n_j}}}{\ln n_j^{\beta +1/2}} \;\le \;\frac{\ln \Vert u\Vert _{L_j}}{\ln n_j^{\beta +1/2}} \; <\; \frac{\ln \Vert u\Vert _{b_{n_j+1}}}{\ln (n_j+1)^{\beta +1/2}}\times \frac{\ln (n_j+1)}{\ln n_j} \end{aligned}$$

and since \(\ln (n_j+1)/\ln n_j\downarrow 1\) as \(j\rightarrow \infty \), the result follows. \(\square \)

Proof of Theorem 2.1

Fix \(0<\lambda <2\) and consider the Schrödinger operator generated by a potential of the form (1.4) satisfying (i)-(iv). Let \(E\in J(\lambda )\), with \(J(\lambda )\) as in Sect. 1.3.

From \(\cos \phi =1-\frac{5}{6}E\), for \(\phi \in (0,\pi )\), we have

$$\begin{aligned} \sin ^{2}\phi =1-\cos ^{2}(\phi )=\frac{60E-25E^{2}}{36}, \end{aligned}$$

and since \(\beta =\frac{\lambda ^{2}}{8\sin ^{2}\phi }\), one finds

$$\begin{aligned} \beta =\frac{9\lambda ^{2}}{120E-50E^{2}}. \end{aligned}$$

Thus, for \(0<\lambda <2\), one has \(\beta <\frac{1}{2}\) exactly for E in the interval \(\left( \frac{6-3\sqrt{4-\lambda ^{2}}}{5},\frac{6+3\sqrt{4-\lambda ^{2}}}{5}\right) \); the three points removed in the definition of \(J(\lambda )\) correspond to \(\phi =\frac{\pi }{4},\frac{\pi }{2},\frac{3\pi }{4}\). We have, by (5.4) and (5.5), that \( \Vert u_{E}^{S}\Vert _{b_{n}} \asymp n^{\frac{1}{2}-\beta } \), and for \(u_E\) linearly independent with \(u_{E}^{S}\), one has \( \Vert u_{E}\Vert _{b_{n}} \asymp n^{\frac{1}{2}+\beta }. \)

Note that \(\left( \frac{1}{2}-\beta \right) -\tilde{\alpha }\left( \frac{1}{2}+\beta \right) =0\) if and only if \(\tilde{\alpha }=\frac{1-2\beta }{1+2\beta }\); consequently,

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{\Vert u_{E}^{S}\Vert _{b_{n}}}{\Vert u_{E}\Vert ^{\tilde{\alpha }}_{b_{n}}}=A,\ \end{aligned}$$

for some \(0<A<\infty \). Hence, by Lemma 5.3, we conclude that

$$\begin{aligned} \lim _{L\rightarrow b}\frac{\Vert u_{E}^{S}\Vert _L}{\Vert u_{E}\Vert ^{\alpha /(2-\alpha )}_{L}}=A \end{aligned}$$
(5.6)

with \(\tilde{\alpha }=\frac{\alpha }{2-\alpha }\) (so that \(\alpha =1-2\beta \), in agreement with (1.6)), for any solution \(u_{E}\) to (4.1) linearly independent with \(u_{E}^{S}\). \(\square \)
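A short numerical check of the exponent bookkeeping in this proof is given below (purely illustrative; the values \(\lambda =1\) and \(E=1.7\in J(1)\) are arbitrary assumptions).

```python
import numpy as np

lam, E = 1.0, 1.7                                   # illustrative values, with E in J(1.0)
sin2_phi = (60.0 * E - 25.0 * E ** 2) / 36.0        # sin^2(phi), where cos(phi) = 1 - 5E/6
beta = lam ** 2 / (8.0 * sin2_phi)                  # = 9 lam^2 / (120E - 50E^2)
alpha = 1.0 - 9.0 * lam ** 2 / (60.0 * E - 25.0 * E ** 2)   # (1.6)
alpha_tilde = (1.0 - 2.0 * beta) / (1.0 + 2.0 * beta)

print(beta, beta < 0.5)                                      # beta < 1/2 on J(lambda)
print(np.isclose(alpha, 1.0 - 2.0 * beta))                   # alpha = 1 - 2 beta
print(np.isclose(alpha_tilde, alpha / (2.0 - alpha)))        # tilde-alpha = alpha / (2 - alpha)
print(np.isclose((0.5 - beta) - alpha_tilde * (0.5 + beta), 0.0))  # exponents cancel in (5.6)
```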

5.1 Singular continuous spectra

The fact that the Schrödinger operator, with a potential of the form (1.4) and \(\sum _n g_n^2=\infty \), has singular continuous spectrum in (0, 12/5) follows from Proposition 3 of [15] (the calculations in our Sect. 4 are similar to those in [15], pages 32–35). However, [15] does not provide information on Hausdorff dimensions.

Another way to verify that the operator (1.5), with \(0<\lambda <2\) and a potential of the form (1.4) satisfying (i)-(iv), has singular continuous spectrum in \(J(\lambda )\) for \(\nu \)-a.e. \(\omega \), is to note that, for \(0<\beta <1/2\), relations (5.4)–(5.5) and Lemma 5.3 yield

$$\begin{aligned} \lim _{L\rightarrow b}\frac{\Vert u_{E}^{S}\Vert _{L}}{\Vert u_{E}\Vert _{L}}=0, \end{aligned}$$

for any solution \(u_{E}\) to (4.1) linearly independent with \(u_E^S\). Thus, for all \(E\in J\), it follows that \(u_{E}^{S}\) is subordinate at b (i.e., with \(\kappa =1\) in Sect. 3), and by (5.4), one has \(u_{E}^{S}\notin \mathrm {L}^{2}[0,b]\) and E is not an eigenvalue; hence, by Theorem 7.3 in [16], the operator has a purely singular continuous spectrum in this set.