1 Introduction

There has been much progress in recent decades in the understanding of superdiffusion in one-dimensional systems with several conservation laws. Chains of coupled oscillators are typical models showing superdiffusive transport of energy. They are the one-dimensional Hamiltonian systems

$$\begin{aligned} {\left\{ \begin{array}{ll} \frac{d}{dt} q_x(t) &{} = \partial _{v_x}\mathcal {H}(v_x(t),q_x(t)) \\ \frac{d}{dt} v_x(t) &{} = -\partial _{q_x}\mathcal {H}(v_x(t),q_x(t)) , \end{array}\right. } \end{aligned}$$

with Hamiltonian

$$\begin{aligned}&\mathcal {H} = \sum _{x \in {\mathbb {Z}}} \left( \frac{|v_x|^{2}}{2} + V(q_x- q_{x+1}) \right) . \end{aligned}$$

Here \(v_x(t)\) is the velocity of the oscillator x at time t and \(q_x(t)\) is the displacement of the oscillator x from its equilibrium position at time t. In the case where the potential V is quadratic, the dynamics is linear and the chain is said to be harmonic, and otherwise anharmonic. The Fermi–Pasta–Ulam chain (FPU chain) has possibly cubic and/or quartic terms in the potential. Superdiffusion of energy and the divergence of the corresponding thermal conductivity have been observed numerically in the dynamics of FPU chains [5, 13, 14]. In recent years, strong efforts have been made to identify the exponent of the divergence and the nature of the superdiffusion in FPU chains, both numerically and theoretically.

In an innovative article [17], Spohn discussed the asymptotic behavior of time-dependent correlation functions of the heat mode by applying the method of fluctuating hydrodynamics. His argument suggests that for general anharmonic chains the macroscopic diffusion of energy is governed by the fractional diffusion equation

$$\begin{aligned} \partial _{t} \mathbf {e}(y,t) = - (- \Delta _{y})^{\frac{s}{2}}\mathbf {e}(y,t). \end{aligned}$$
(1.1)

Moreover, Spohn’s theory suggests that there are only two universality classes, \(s = \frac{3}{2}\) or \(\frac{5}{3}\). These exponents also appear in the series of Kepler ratios derived for general diffusive dynamics with several conservation laws [15].

However, a rigorous mathematical analysis of the energy transport in anharmonic chains is still too hard, and Spohn’s theory has not been justified. Recently, as an analytically tractable model, the harmonic chain of oscillators with a stochastic exchange of momentum between neighboring sites, which we call the momentum exchange model, was introduced [1]. In [1] the authors prove the divergence of the thermal conductivity for this model and obtain an explicit exponent of the divergence of the Green–Kubo formula. To understand the nature of the superdiffusion for this model, a weak noise limit was studied in [2]. There it is shown that in the weak noise limit the time evolution of the local density of energy is governed by the Boltzmann equation

$$\begin{aligned}&\partial _{t} u(y,k,t) + \frac{1}{2 \pi } \omega '(k) \partial _{y} u(y,k,t) = (\mathcal {L}u)(y,k,t) , \nonumber \\&(\mathcal {L}u)(y,k,t) = \int _{{\mathbb {T}}} dk' ~ R(k,k') (u(y,k',t) - u(y,k,t)). \end{aligned}$$
(1.2)

Here, the local density of energy \(u(y,k,t)\) depends on the position \(y \in {\mathbb {R}}\) along the chain, the wave number \(k \in {\mathbb {T}}= [-\frac{1}{2},\frac{1}{2})\) and time \(t \ge 0\), and \({\omega }(k)\) is the dispersion relation. Later, in [10], it is shown that a properly rescaled solution of the Boltzmann equation (1.2) converges to the solution of the fractional diffusion equation (1.1) with \(s=\frac{3}{2}\). The main idea of the proof of this convergence is the following: since the scattering kernel \(R(k,k')\) is positive and symmetric, \(R(k,k{'}) = R(k{'},k)\), (1.2) can be interpreted as the forward equation for the probability density of a Markov process (z(t), k(t)) on \({\mathbb {R}}\times {\mathbb {T}}\); in particular, k(t) is a reversible continuous-time Markov chain. Applying a limit theorem for additive functionals of reversible Markov chains, they showed that the scaled process \(N^{ - \frac{2}{3}} z(Nt)\) converges to a Lévy process generated by \(- (-\Delta )^{\frac{3}{4}}\) (up to a constant). Their limit theorems are based on a martingale approximation of additive functionals and limit theorems for dependent variables discussed in [6]. By this two-step scaling limit, the 3/4-fractional diffusion equation is derived rigorously from the momentum exchange model. Recently, the 3/4-fractional diffusion equation was derived by a direct limit (namely, a one-step scaling limit) in [11]. For a variant of the momentum exchange model, a skew 3/4-fractional diffusion equation was derived by a direct space–time scaling limit in [3].

Most recently, in [16, 18], two of the authors introduced another variant of the momentum exchange model which also shows superdiffusive behavior of the energy, but with an exponent of the divergence of the Green–Kubo formula different from that of the original model. The model is a chain of coupled charged harmonic oscillators in a magnetic field with a stochastic exchange of velocity between neighboring sites.

The goal of the present paper is to understand the nature of the superdiffusion for this coupled charged harmonic chain of oscillators in a magnetic field with noise. We apply a two-step scaling limit. Following the idea of [2], we first show, as Theorem 1, that in the weak noise limit the local density of energy is governed by the phonon linear Boltzmann equation

$$\begin{aligned}&\partial _{t} u(y,k,i,t) + \frac{1}{2\pi } \omega '(k) \partial _{y} u(y,k,i,t) = \mathcal {L} u(y,k,i,t) , \nonumber \\&\mathcal {L}u(y,k,i,t) = \sum _{j= 1,2} \int _{\mathbf {T}} dk' ~ R(k,i,k',j) (u(y,k',j,t)-u(y,k,i,t) ). \end{aligned}$$
(1.3)

Here, the local density of energy \(u(y,k,i,t)\) depends on the position \(y \in {\mathbb {R}}\) along the chain, the wave number \(k \in {\mathbb {T}}\), the type of phonon \(i =1,2\) and time \(t \ge 0\). Then, we consider a properly rescaled solution of the Boltzmann equation (1.3) and show, as Theorem 2, that it converges to the solution of the fractional diffusion equation (1.1) with \(s=\frac{5}{3}\). This provides the first rigorous example of 5/6-superdiffusion of energy in a chain of oscillators.

A key ingredient of the proof of Theorem 2 is the scaling limit of an additive functional of a Markov process, as in the prior work. Indeed, since the scattering kernel \(R(k,i,k',j)\) is positive and symmetric under the exchange of \((k,i)\) and \((k',j)\), (1.3) can be interpreted as the time evolution of the density for a Markov process (Z(t), K(t), I(t)) on \({\mathbb {R}}\times {\mathbb {T}}\times \{ 1,2 \}\); in particular, (K(t), I(t)) is a reversible Markov process. By using this process, we have a stochastic representation of the solution of (1.3), \(u(y,k,i,t) = \mathbb {E}_{(y,k,i)}[u_{0}(Z(t),K(t),I(t))]\). Applying a general limit theorem in [10], we show, as Theorem 3, that the scaled process \(N^{ - \frac{3}{5}} Z(Nt)\) converges to a Lévy process generated by \(- (-\Delta )^{\frac{5}{6}}\) (up to a constant). On the other hand, by the ergodicity of (K(t), I(t)), the limit of the rescaled solution \(u_{N}(y,k,i,Nt)\) homogenizes in \({\mathbb {T}}\times \{ 1,2 \}\) as \(N \rightarrow \infty \). Thus the limit of the rescaled solution satisfies the fractional diffusion equation.

The difference between the exponents \(\frac{3}{4}\) (obtained in [10, 11] for the original momentum exchange model) and \(\frac{5}{6}\) is explained by the asymptotic behavior of the derivative of the dispersion relation \(\omega '(k)\) and the mean value of the scattering kernel \(R(k) = \int _{{\mathbb {T}}} R(k,k')dk'\) as \(k \rightarrow 0\). (Here we suppress the indices i, j.) Roughly speaking, if

$$\begin{aligned} \omega '(k) \sim k^{a}, ~ R(k) \sim k^{b} ~ {\textit{as}} ~ k \rightarrow 0 \end{aligned}$$

for some \(a,b \in {\mathbb {N}}_{\ge 0}\), then, applying the argument in [10] formally, one obtains a Lévy process generated by \(- (- \Delta )^{\frac{b+1}{2(b-a)}}\) as the proper scaling limit if \(0< \frac{b+1}{2(b-a)} < 1\), and a process generated by \(\Delta \) (that is, diffusive behavior) if \(\frac{b+1}{2(b-a)} \ge 1\). For the original momentum exchange model presented in [2, 11],

$$\begin{aligned} \omega '(k) \sim 1, ~ R(k) \sim k^{2} ~ {\textit{as}} ~ k \rightarrow 0, \end{aligned}$$

while in our model

$$\begin{aligned} \omega '(k) \sim k, ~ R(k) \sim k^{4} ~ {\textit{as}} ~ k \rightarrow 0. \end{aligned}$$

In particular, our model has vanishing sound speed since \(\lim _{k \rightarrow 0}\omega '(k)=0\). To be more precise, in our model \(R(k,i) = \sum _{j=1}^2\int _{{\mathbb {T}}} R(k,i,k',j)dk'\) satisfies \(R(k,1) \sim k^{2}\) and \(R(k,2) \sim k^{4}\) (or \(R(k,2) \sim k^{2}\) and \(R(k,1) \sim k^{4}\), depending on the sign of the magnetic field) and the latter dominates the macroscopic evolution. Note that for the class of non-acoustic chains introduced in [12],

$$\begin{aligned} \omega '(k) \sim k, ~ R(k) \sim k^{2} ~ {\textit{as}} ~ k \rightarrow 0 \end{aligned}$$

and so its macroscopic evolution is diffusive.
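
The counting rule above can be made concrete with a minimal Python sketch (purely illustrative; the helper name levy_exponent is introduced only for this example) that evaluates the exponent \(\frac{b+1}{2(b-a)}\) for the three cases just listed.

```python
from fractions import Fraction

def levy_exponent(a, b):
    """Heuristic exponent (b+1)/(2(b-a)) from omega'(k) ~ k^a and R(k) ~ k^b as k -> 0.

    A value in (0, 1) corresponds to a fractional Laplacian -(-Delta)^s,
    a value >= 1 to ordinary diffusion (generator Delta).
    """
    return Fraction(b + 1, 2 * (b - a))

cases = {
    "momentum exchange model [2, 11] (a=0, b=2)": (0, 2),
    "charged chain in a magnetic field (a=1, b=4)": (1, 4),
    "non-acoustic chains [12] (a=1, b=2)": (1, 2),
}

for name, (a, b) in cases.items():
    s = levy_exponent(a, b)
    regime = f"fractional with exponent {s}" if 0 < s < 1 else "diffusive"
    print(f"{name}: {regime}")
# expected: 3/4 and 5/6 (fractional), 3/2 (diffusive)
```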

A technically crucial idea of our proof of Theorem 1 is that we consider the microscopic local density of energy, called the Wigner distribution in physics, associated to the eigenvectors of the deterministic dynamics including the effect of the magnetic field. If we employ the classical wave functions, which are the eigenvectors of the harmonic Hamiltonian dynamics (without a magnetic field), and study their associated Wigner distribution, then we obtain a system of Boltzmann equations in the weak noise limit. However, so far we do not know how to rescale the solutions of this system and derive the fractional diffusion equation from it. By employing the modified wave functions, instead of the classical wave functions, we obtain a single limiting Boltzmann equation which is much easier to analyze. This strategy can be applied to derive the limiting equation from other Hamiltonian systems with an energy-conserving external field. The exponents that we find here are identical to those given by the fluctuating hydrodynamics in [15, 17]. However, the relation between the underlying mechanisms is not yet clear, and clarifying it remains an open problem.

Our paper is organized as follows: In Sect. 2 we fix some notation. In Sect. 3 we introduce our model, the wave functions and the associated Wigner distribution. Note that since we consider the infinite system, we need to define our model in terms of the wave functions to make the argument rigorous. In Sect. 4 we state our main results, Theorems 1 and 2. We study a Markov process associated to our Boltzmann equation and its scaling limit in Sect. 5. Proofs of Theorems 1 and 2 are given in Sects. 6 and 7, respectively.

2 Notations

Let \({\mathbb {T}}\cong [-\frac{1}{2},\frac{1}{2})\) be the one-dimensional torus. For \(f \in \ell ^{2}({\mathbb {Z}})\), we introduce the discrete Laplacian \(\Delta f : {\mathbb {Z}}\rightarrow {\mathbb {R}}\) defined by

$$\begin{aligned} \Delta f (x)&= f(x+1) + f(x-1) - 2 f(x) \end{aligned}$$

and its Fourier transform \(\widehat{f} \in L^{2}({\mathbb {T}}) \) defined by

$$\begin{aligned} \widehat{f}(k)&= \sum _{x \in {\mathbb {Z}}} e^{-2 \pi \sqrt{-1} k x} f(x). \end{aligned}$$

For functions \(f,g \in \ell ^{2}({\mathbb {Z}})\), the discrete convolution \(f * g :{\mathbb {Z}}\rightarrow {\mathbb {R}}\) is defined by

$$\begin{aligned} f *g(x)&= \sum _{z \in {\mathbb {Z}}} f(x-z)g(z). \end{aligned}$$

Let \(\mathbf {S}\) be the space of rapidly decreasing functions on \( {\mathbb {R}}\times {\mathbb {T}}\) defined by

$$\begin{aligned} \mathbf {S} =\{ J \in C^{\infty }( {\mathbb {R}}\times {\mathbb {T}},\mathbb {C}) \ ; \ |J|_{m,n} < \infty \quad \forall m, n \in {\mathbb {Z}}_{\ge 0} \} \end{aligned}$$

where

$$\begin{aligned} |J|_{m,n} = \sup _{r,s \le m} \sup _{y \in {\mathbb {R}}, k \in {\mathbb {T}}} (1 + y^{2})^{n}|\partial _{y}^{r} \partial _{k}^{s} J(y, k)|. \end{aligned}$$

For \(J : {\mathbb {R}}\times {\mathbb {T}}\rightarrow \mathbb {C}\) such that J(yk) is rapidly decreasing in \(y \in {\mathbb {R}}\), we define \(\widehat{J} : {\mathbb {R}}\times {\mathbb {T}}\rightarrow \mathbb {C}\) as

$$\begin{aligned} \widehat{J}(p,k) = \int _{{\mathbb {R}}} dy ~ e^{-2 \pi \sqrt{-1} p y} J(y,k). \end{aligned}$$

We introduce a norm \(||\cdot ||\) on \(\mathbf {S}^{2} = \mathbf {S} \times \mathbf {S}\) defined by

$$\begin{aligned} ||\varvec{J}|| = \sum _{i = 1 , 2} \int _{{\mathbb {R}}} dp \sup _{k} |{\widehat{J_{i}}}(p,k)| \end{aligned}$$

for \(\varvec{J}=(J_{1},J_{2}) \in \mathbf {S}^{2}\) and define a topology on \(\mathbf {S}^{2}\) induced by the norm \(||\cdot ||\).

By \((\mathbf {S}^2)'\) we denote the dual space of \(\mathbf {S}^2\) equipped with the weak-\(*\) topology.

For two functions f(k) and g(k) defined on \({\mathbb {T}}\) or \({\mathbb {T}}{\setminus } \{0\}\), we write \(f(k) \sim g(k)\) as \(k \rightarrow 0\) if there exists a constant \(C>0\) such that for all k with sufficiently small absolute value, \(\frac{1}{C}|g(k)| \le |f(k)| \le C|g(k)|\).

3 The Dynamics

We consider the one-dimensional infinite chain of coupled charged harmonic oscillators in two-dimensional space with weak continuous noise. Since we study the system with finite total energy, it is appropriate for us to define the dynamics through the wave functions; see Sect. 3.4. However, it may be difficult to understand the meaning of key quantities such as \(\widehat{\alpha }(k), R(k)\) from the definition of the dynamics (3.7). Hence, we first give a formal description of the deterministic dynamics in Sect. 3.1 in terms of \(\{ ( \mathbf {v}_x(t), \mathbf {q}_x(t) ) \in {\mathbb {R}}^{2} \times {\mathbb {R}}^{2} \}\), construct the associated wave functions in Sect. 3.2 and add the stochastic perturbation to the dynamics in Sect. 3.3. As we do not specify the initial condition \(\{ (\mathbf {v}_x(0), \mathbf {q}_x(0) ) \}\) there, this construction is only formal. The first three subsections are devoted to presenting the classical way to define the dynamics and to making the physical meaning of our model clear. In Sect. 3.5 we introduce the Wigner distribution associated to our wave functions.

3.1 Deterministic Dynamics

We consider a one-dimensional chain of oscillators in a magnetic field. Our deterministic dynamics \( ( \mathbf {v}_x(t), \mathbf {q}_x(t) ) \in {\mathbb {R}}^{2} \times {\mathbb {R}}^{2}\) is formally given as follows:

$$\begin{aligned} \left\{ \begin{array}{lll} &{}\frac{d}{dt} q_x^i =v_x^i \\ &{}\frac{d}{dt} v_x^i =[\Delta q^i]_x +\delta _{i,1}Bv^2_x-\delta _{i,2}B v^1_x \end{array}\right. \end{aligned}$$
(3.1)

for \(x \in {\mathbb {Z}}, i =1,2\) where \(B \in {\mathbb {R}}{\setminus } \{0\}\) is the strength of the magnetic field.

The total energy E of the system is formally given by

$$\begin{aligned} E = \sum _{i=1,2} \sum _{x \in {\mathbb {Z}}} \left( \frac{|v_x^i|^{2}}{2} + \frac{|q_x^i - q_{x+1}^i|^{2}}{2} \right) . \end{aligned}$$

We introduce operators A and G as follows:

$$\begin{aligned} A&= \sum _{i=1,2} \sum _{x \in {\mathbb {Z}}}( v_{x}^i \partial _{q_{x}^i} + [\Delta q^{i}]_x \partial _{v_{x}^i}) , \\ G&= \sum _{x \in {\mathbb {Z}}} \big ( v_{x}^2 \partial _{v_{x}^1} - v_{x}^1 \partial _{v_{x}^2} \big ). \end{aligned}$$

Then our deterministic dynamics formally satisfies \(\frac{d}{dt}f( \mathbf {v}, \mathbf {q})=(A+BG)f( \mathbf {v}, \mathbf {q})\) for any smooth cylinder function f, that is, f depends on the configuration \(( \mathbf {v}, \mathbf {q})\) only through a finite set of coordinates.

Let \(\alpha : {\mathbb {Z}}\rightarrow {\mathbb {R}}\) be the function such that \(\alpha (0) = 2 \), \(\alpha (1)= \alpha (-1) = -1\) and \(\alpha (x) = 0\) for \(|x| \ge 2\). Using this function, the total energy E and the operator A are also written as follows:

$$\begin{aligned} E&= \sum _{i=1,2} \left( \sum _{x \in {\mathbb {Z}}} \frac{|v_{x}^i|^{2}}{2} + \sum _{ x , x' \in {\mathbb {Z}}} \frac{\alpha (x-x')}{2} q_{x}^iq_{x'}^i \right) , \\ A&= \sum _{i=1,2} \left( \sum _{x \in {\mathbb {Z}}} v_{x}^i \partial _{q_{x}^i} - \sum _{x , x' \in {\mathbb {Z}}} \alpha (x-x')q_{x'}^i \partial _{v_{x}^i}\right) . \end{aligned}$$

Remark 3.1

Suppose that \(\alpha _* : {\mathbb {Z}}\rightarrow {\mathbb {R}}\) is a function satisfying the following conditions (a.1)–(a.4).

  • \((a.1) ~ \alpha _* (x) \ne 0 \) for some \(x \in {\mathbb {Z}}. \)

  • \((a.2) ~ \alpha _* (x) = \alpha _* (-x) \) for all \( x \in {\mathbb {Z}}.\)

  • (a.3)   There exist some positive constants \(C_{1} , C_{2}\) such that \(|\alpha _* (x)| \le C_{1}e^{-C_{2}|x|} \) for all \(x \in {\mathbb {Z}}\).

  • \((a.4) ~ {\widehat{\alpha }_{*}}(k) >0 \) for all \(k \ne 0\) , \({\widehat{\alpha }_{*}}(0) = 0 , {\widehat{\alpha }_{*}}''(0) > 0\).

We can consider the dynamics associated to \(\alpha _*\), or precisely that given by \(A_* + BG\) where

$$\begin{aligned} A_* = \sum _{i=1,2} \left( \sum _{x \in {\mathbb {Z}}} v_{x}^i \partial _{q_{x}^i} - \sum _{x , x' \in {\mathbb {Z}}} \alpha _* (x-x')q_{x'}^i\partial _{v_{x}^i}\right) . \end{aligned}$$

Then, Theorems 1, 2, and 3 are generalized to this dynamics (with stochastic perturbation) by replacing \(\alpha \) with \(\alpha _* \). The generalization from \(\alpha \) to \(\alpha _* \) is straightforward, so we omit the proof.

3.2 Wave Functions

To define our dynamics rigorously and then introduce the Wigner distribution, we consider the Fourier transform of the configuration \(( \mathbf {v}, \mathbf {q})\). From the formal description of the dynamics (3.1), the time evolution of the deterministic process \(( \widehat{\mathbf {v}}(k,t), \widehat{\mathbf {q}}(k,t) )\) should be given by

$$\begin{aligned}&\partial _{t} ~ \begin{pmatrix} {\hat{q}^1}(k,t) \\ {\hat{q}^2}(k,t) \\ {\hat{v}^1}(k,t) \\ {\hat{v}^2}(k,t) \end{pmatrix} = M(k) ~ \begin{pmatrix} {\hat{q}^1}(k,t) \\ {\hat{q}^2}(k,t) \\ {\hat{v}^1}(k,t) \\ {\hat{v}^2}(k,t) \end{pmatrix} , \nonumber \\&M(k) = \begin{pmatrix} 0 &{}\quad 0 &{}\quad 1 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 1 \\ -\widehat{\alpha }(k) &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{} \quad -\widehat{\alpha }(k) &{}\quad 0 &{}\quad 0 \end{pmatrix} + {B \begin{pmatrix} 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{} \quad 0 &{} \quad 0 &{}\quad 1 \\ 0 &{}\quad 0 &{}\quad -1 &{}\quad 0 \end{pmatrix}} , \end{aligned}$$
(3.2)

for each \(k \in {\mathbb {T}}\) where \(\widehat{\alpha }(k) = 2 - 2\cos {2 \pi k}\). Note that the dynamics (3.2) is well-defined for any initial condition \(( \widehat{\mathbf {v}}(k,0), \widehat{\mathbf {q}}(k,0) )\) for each \(k \in {\mathbb {T}}\).

We denote the eigenvalues of the matrix M(k) by \(\{ \pm \sqrt{-1} {\omega }_i(k) , i = 1,2 \}\), which are explicitly given as

$$\begin{aligned} {\omega }_1(k)&= \sqrt{\widehat{\alpha }(k) + \frac{B^{2}}{4}} + \frac{B}{2} , \\ {\omega }_2(k)&= \sqrt{\widehat{\alpha }(k) + \frac{B^{2}}{4}} - \frac{B}{2} . \end{aligned}$$

Note that \({\omega }_i(k), \omega '_i(k), ~ i=1,2\) are bounded in \(k \in {\mathbb {T}}\) and \(\omega '_1 = \omega '_2\). Denote by \(\omega '(k)\) the common value of \(\omega '_i(k)\). We introduce the corresponding wave functions \(\{ {\widehat{\psi }_{i}}(k,t) ; i = 1,2 \}\) given by

$$\begin{aligned} {\widehat{\psi }_{1}}(k,t)&= \theta _{1}(k)({\hat{v}^1}(k,t) - \sqrt{-1}{\omega }_2(k){\hat{q}^1}(k,t) + \sqrt{-1}{\hat{v}^2}(k,t) + {\omega }_2(k){\hat{q}^2}(k,t)) ,\nonumber \\ {\widehat{\psi }_{2}}(k,t)&= \theta _{2}(k)({\hat{v}^1}(k,t) - \sqrt{-1}{\omega }_1(k){\hat{q}^1}(k,t) - \sqrt{-1}{\hat{v}^2}(k,t) - {\omega }_1(k){\hat{q}^2}(k,t)) \end{aligned}$$
(3.3)

with

$$\begin{aligned} \theta _{i}(k)&= \sqrt{\frac{{\omega }_i(k)}{{\omega }_1(k)+{\omega }_2(k)}} , ~ i=1,2. \end{aligned}$$

\({\widehat{\psi }_{i}}(k)\) is an eigenfunction associated to the eigenvalue \(- \sqrt{-1} {\omega }_i(k)\) :

$$\begin{aligned} \partial _t {\widehat{\psi }_{i}}(k) = - \sqrt{-1} {\omega }_i(k) {\widehat{\psi }_{i}}(k) , ~ i=1,2. \end{aligned}$$

Note that even though \(\omega _i(k)=\omega _i(-k)\) and \(\theta _i(k)=\theta _i(-k)\) for \(i=1,2\) and \(k \in {\mathbb {T}}\), \(\widehat{\psi }_{i}(k)\ne \widehat{\psi }_{i}(-k)\) in general. We normalize \(\widehat{\psi }\) by multiplying by \(\theta _{i}\) so that the total energy E is given by the integral of the \(L^2\) norm of the wave functions as

$$\begin{aligned} E&= \frac{1}{2} \int _{{\mathbb {T}}} dk ~ \left( |{\hat{v}^1}(k)|^{2}+|{\hat{v}^2}(k)|^{2}+\widehat{\alpha }(k)(|{\hat{q}^1}(k)|^{2}+|{\hat{q}^2}(k)|^{2}) \right) \\&= \frac{1}{2}\int _{{\mathbb {T}}} dk ~ \left( |{\widehat{\psi }_{1}}(k)|^{2} + |{\widehat{\psi }_{2}}(k)|^{2}\right) . \end{aligned}$$

By a direct computation we have

$$\begin{aligned} {\hat{v}^1}(k)&= \frac{\theta _{1}(k)}{2}({\widehat{\psi }_{1}}(k) + {\widehat{\psi }_{1}}(-k)^{*}) + \frac{\theta _{2}(k)}{2}({\widehat{\psi }_{2}}(k) + {\widehat{\psi }_{2}}(-k)^{*}) , \nonumber \\ {\hat{v}^2}(k)&= - \frac{\sqrt{-1} \theta _{1}(k)}{2}({\widehat{\psi }_{1}}(k) - {\widehat{\psi }_{1}}(-k)^{*}) + \frac{\sqrt{-1} \theta _{2}(k)}{2}({\widehat{\psi }_{2}}(k) - {\widehat{\psi }_{2}}(-k)^{*}) , \nonumber \\ {\hat{q}^1}(k)&= \frac{\sqrt{-1} \theta _{1}(k)}{2{\omega }_1(k)}({\widehat{\psi }_{1}}(k) - {\widehat{\psi }_{1}}(-k)^{*}) + \frac{\sqrt{-1} \theta _{2}(k)}{2{\omega }_2(k)}({\widehat{\psi }_{2}}(k) - {\widehat{\psi }_{2}}(-k)^{*}) , \nonumber \\ {\hat{q}^2}(k)&= \frac{\theta _{1}(k)}{2{\omega }_1(k)}({\widehat{\psi }_{1}}(k) + {\widehat{\psi }_{1}}(-k)^{*}) - \frac{\theta _{2}(k)}{2{\omega }_2(k)}({\widehat{\psi }_{2}}(k) + {\widehat{\psi }_{2}}(-k)^{*}). \end{aligned}$$
(3.4)
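
As a quick numerical sanity check on (3.2)–(3.4) (not used in the proofs), the following Python sketch verifies on a finite periodic chain, which only approximates the infinite system, that the eigenvalues of M(k) are \(\pm \sqrt{-1}\,{\omega }_{1,2}(k)\) and that the wave functions (3.3) reproduce the total energy.

```python
import numpy as np

B, N = 1.3, 256                          # arbitrary magnetic field strength, chain length
rng = np.random.default_rng(0)

k = np.fft.fftfreq(N)                    # wave numbers in T = [-1/2, 1/2)
alpha_hat = 2 - 2 * np.cos(2 * np.pi * k)
root = np.sqrt(alpha_hat + B**2 / 4)
w1, w2 = root + B / 2, root - B / 2      # dispersion relations omega_1, omega_2
th1, th2 = np.sqrt(w1 / (w1 + w2)), np.sqrt(w2 / (w1 + w2))

# (i) the eigenvalues of M(k) are +/- i*omega_1(k) and +/- i*omega_2(k)
for a in alpha_hat[1:5]:
    M = np.array([[0, 0, 1, 0],
                  [0, 0, 0, 1],
                  [-a, 0, 0, B],
                  [0, -a, -B, 0]], dtype=float)
    ev = np.linalg.eigvals(M)
    w = np.sqrt(a + B**2 / 4)
    assert np.allclose(ev.real, 0, atol=1e-10)
    assert np.allclose(np.sort(ev.imag),
                       np.sort([w + B / 2, w - B / 2, -(w + B / 2), -(w - B / 2)]))

# (ii) energy identity: sum_x sum_i (|v_x^i|^2 + |q_x^i - q_{x+1}^i|^2) / 2 equals
#      (1/2N) sum_k (|psi_1(k)|^2 + |psi_2(k)|^2), the discrete analogue of
#      E = (1/2) int dk (|psi_1(k)|^2 + |psi_2(k)|^2)
q1, q2, v1, v2 = rng.standard_normal((4, N))
E_config = 0.5 * np.sum(v1**2 + v2**2
                        + (q1 - np.roll(q1, -1))**2 + (q2 - np.roll(q2, -1))**2)

q1h, q2h, v1h, v2h = (np.fft.fft(f) for f in (q1, q2, v1, v2))
psi1 = th1 * (v1h - 1j * w2 * q1h + 1j * v2h + w2 * q2h)   # wave functions (3.3)
psi2 = th2 * (v1h - 1j * w1 * q1h - 1j * v2h - w1 * q2h)
E_wave = np.sum(np.abs(psi1)**2 + np.abs(psi2)**2) / (2 * N)

assert np.isclose(E_config, E_wave)
print("eigenvalue and energy identity checks passed:", E_config, E_wave)
```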

3.3 Stochastic Perturbation

We consider a local stochastic perturbation of the dynamics (3.1) which conserves the total energy. We introduce an operator S as follows:

$$\begin{aligned} S&= \frac{1}{2} \sum _{x \in {\mathbb {Z}}} (Y_{x,x+1})^{2} = \frac{1}{4} \sum _{x \in {\mathbb {Z}}} \sum _{z \in {\mathbb {Z}};|x-z| = 1} (Y_{x,z})^{2}, \\ Y_{x , z}&= (v^2_z - v_x^2) (\partial _{v^1_z}-\partial _{v_{x}^1}) -(v^{1}_z - v_{x}^1)(\partial _{v^2_z}-\partial _{v_{x}^2}). \end{aligned}$$

We consider a Markov process \(( \mathbf {v}_x(t) , \mathbf {q}_x(t) )\) generated by \(L := A + BG +\epsilon \gamma S\), where \(\gamma > 0\) is the strength of the stochastic noise and \(0< \epsilon <1\) is a scale parameter. The dynamics can also be given by the stochastic differential equation

$$\begin{aligned} {\left\{ \begin{array}{ll} d q_x^i &{} =v_x^i dt \\ d v_x^i &{} = ( - [\alpha * q^{i}]_x +\delta _{i,1}Bv^{2}_x-\delta _{i,2}B v^{1}_x + \epsilon \gamma [\Delta v^{i}]_x ) dt \\ &{}~ ~ + \sqrt{\epsilon \gamma } \sum _{z; |z-x| = 1} (Y_{x , z} v_x^i) dw_{x,z} , \end{array}\right. } \end{aligned}$$
(3.5)

for \(x \in {\mathbb {Z}}\), \(i=1,2\) where \(\{ w_{x,z}(t) = w_{z,x}(t) ; x,z \in {\mathbb {Z}}, |z-x| = 1 \}\) are independent standard Wiener processes on \({\mathbb {R}}\). Note that L formally conserves the total energy and the total pseudomomenta \(\sum _{x} (v_{x}^1 - Bq^2_x)\) and \(\sum _{x} (v_{x}^2 + Bq^1_x)\). For more details about the conserved quantities, see [16].
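
Since S acts only on the velocities, the conservation statement reduces to the fact that \(Y_{x,z}\) annihilates the kinetic energy \(\frac{1}{2}(|\mathbf {v}_x|^{2}+|\mathbf {v}_z|^{2})\) and the sums \(v_x^1+v_z^1\), \(v_x^2+v_z^2\) on each bond. A short symbolic check of this elementary fact (a sketch for illustration only, using sympy) is:

```python
import sympy as sp

vx1, vx2, vz1, vz2 = sp.symbols('vx1 vx2 vz1 vz2', real=True)

def Y(f):
    """The vector field Y_{x,z} of the noise, acting on a function of the two velocities."""
    return ((vz2 - vx2) * (sp.diff(f, vz1) - sp.diff(f, vx1))
            - (vz1 - vx1) * (sp.diff(f, vz2) - sp.diff(f, vx2)))

kinetic = (vx1**2 + vx2**2 + vz1**2 + vz2**2) / 2    # kinetic energy carried by the bond

assert sp.simplify(Y(kinetic)) == 0       # the noise conserves the energy
assert sp.simplify(Y(vx1 + vz1)) == 0     # ... and the velocity sums; since the q's are
assert sp.simplify(Y(vx2 + vz2)) == 0     #     untouched, the pseudomomenta are conserved
print("Y_{x,z} conserves the kinetic energy and both velocity sums")
```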

Remark 3.2

This specific choice of noise is not important. Our proof is also applicable to the velocity exchange noise used in [16] and yields the same scaling limits. For the construction of that jump-type process, we can follow the argument in Chapter 5 of [7].

3.4 Rigorous Definition of the Dynamics

In this subsection, we define the dynamics rigorously. First, we calculate the time evolution of the wave functions \({\widehat{\psi }_{i}}(k,t)\) obtained from the formal description (3.5):

$$\begin{aligned} d {\hat{q}^i}(k,t)&= {\hat{v}^i}(k,t) dt ~ , i=1,2 \nonumber ,\\ d {\hat{v}^1}(k,t)&= (- \widehat{\alpha }(k) {\hat{q}^1}(k,t) + B{\hat{v}^2}(k,t) - {\frac{\epsilon \gamma R(k)}{2}} {\hat{v}^1}(k,t) ) dt \nonumber \\&\quad - \sqrt{\epsilon \gamma } \int _{{\mathbb {T}}} r(k,k') {\hat{v}^2} (k-k',t) W(dk',dt) \nonumber ,\\ d {\hat{v}^2}(k,t)&= (- \widehat{\alpha }(k) {\hat{q}^2}(k,t) - B{\hat{v}^1}(k,t) - {\frac{\epsilon \gamma R(k)}{2}} {\hat{v}^2}(k,t) ) dt \nonumber \\&\quad + \sqrt{\epsilon \gamma } \int _{{\mathbb {T}}} r(k,k') {\hat{v}^1} (k-k',t) W(dk',dt) , \end{aligned}$$
(3.6)

where

$$\begin{aligned} {R}(k)&= 4 - 4 \cos {2 \pi k} ,\\ r(k,k')&= (e^{-2 \pi \sqrt{-1} k'} - e^{-2 \pi \sqrt{-1} k})(e^{2 \pi \sqrt{-1} k} - 1) ,\\ W(k,t)&= \sum _{x \in {\mathbb {Z}}} w_{x,x+1}(t) e^{-2 \pi \sqrt{-1} k x}. \end{aligned}$$

The term with R(k) comes from the stochastic perturbation. In our case \(2 \widehat{\alpha }(k) = {R}(k)\), but in general (cf. Remark 3.1) there is no such relation between \(\widehat{\alpha }\) and R, and so we keep \(\widehat{\alpha }\) and R for the generalization. W is called a cylindrical Wiener process on \(\mathbb {L}^{2}({\mathbb {T}})\). A precise derivation of (3.6) from (3.5) is given in “Appendix A”. Combining (3.3) and (3.6) we have

$$\begin{aligned} d {\widehat{\psi }_{1}}(k,t)&= (- \sqrt{-1}{\omega }_1(k) {\widehat{\psi }_{1}}(k,t) \nonumber \\&\quad - {\frac{\epsilon \gamma \theta _{1}(k) R(k)}{2}} ( \theta _{1}(k) {\widehat{\psi }_{1}}(k,t) + \theta _{2}(k) {\widehat{\psi }_{2}}(-k,t)^{*}) )dt \nonumber \\&\quad + \sqrt{-1} \theta _{1}(k) \sqrt{\epsilon \gamma } \int _{{\mathbb {T}}} r(k,k') (\theta _{1}(k-k') {\widehat{\psi }_{1}}(k-k',t) \nonumber \\&\quad + \theta _{2}(k-k') {\widehat{\psi }_{2}}(k'-k,t)^{*} ) W(dk',dt) , \nonumber \\ d {\widehat{\psi }_{2}}(k,t)&= (- \sqrt{-1}{\omega }_2(k) {\widehat{\psi }_{2}}(k,t) \nonumber \\&\quad - {\frac{\epsilon \gamma \theta _{2}(k) R(k)}{2}} ( \theta _{1}(k) {\widehat{\psi }_{1}}(-k,t)^{*} + \theta _{2}(k) {\widehat{\psi }_{2}}(k,t)) ) dt \nonumber \\&\quad - \sqrt{-1} \theta _{2}(k) \sqrt{\epsilon \gamma } \int _{{\mathbb {T}}} r(k,k') (\theta _{1}(k-k') {\widehat{\psi }_{1}}(k'-k,t)^{*} \nonumber \\&\quad + \theta _{2}(k-k') {\widehat{\psi }_{2}}(k-k',t) ) W(dk',dt) . \end{aligned}$$
(3.7)

Now we define a stochastic process \(\{ \widehat{\varvec{\psi }}(\cdot ,t) \in (\mathbb {L}^{2}({\mathbb {T}}))^{2} ; t \ge 0 \} \) as the unique solution of (3.7). We can show the existence of the solution by a classical fixed-point argument. For a sketch of the proof, see “Appendix B”. Once the dynamics \( \widehat{\varvec{\psi }}(\cdot ,t) \in (\mathbb {L}^{2}({\mathbb {T}}))^{2} \) is defined, we can also define \(\widehat{\mathbf {v}}(k,t)\) by (3.4) and then define a stochastic process \(\{ \mathbf {v}_x(t), \varvec{\psi }(x,t) ; x \in {\mathbb {Z}}, t \ge 0 \}\) by

$$\begin{aligned} v_x^i(t)&= \int _{{\mathbb {T}}} dk ~ e^{2\pi \sqrt{-1} k x} {\hat{v}^i}(k,t) , \\ \psi _i(x,t)&= \int _{{\mathbb {T}}} dk ~ e^{2\pi \sqrt{-1} k x} {\widehat{\psi }_{i}}(k,t) \end{aligned}$$

for \(x \in {\mathbb {Z}}, i= 1,2\). On the other hand, \(\widehat{\mathbf {q}}(\cdot ,t)\) is not necessarily well-defined as an element of \((\mathbb {L}^{2}({\mathbb {T}}))^2\) because \({\omega }_2(k) \sim {k}^2\) as \(k \rightarrow 0\) if \(B>0\) and \({\omega }_1(k) \sim {k}^2\) as \(k \rightarrow 0\) if \(B<0\). A trivial example is \(\widehat{\psi }_{i}(k,0) = C , C > 0\), for which one easily sees that \(\hat{q}^{2}(\cdot ,0) \notin \mathbb {L}^{2}({\mathbb {T}})\). Hence, \(\mathbf {q}_x(t)\) is also not necessarily well-defined. In fact, assuming \(\mathbf {q}(0) \in (\ell ^{2}({\mathbb {Z}}))^{2}\) would be too strong a condition under finite total energy. Hereafter we do not use the variables \(\mathbf {q}_x\).

3.5 Wigner Distribution

Let \(Q_{\epsilon }\) be a probability measure on \((\mathbb {L}^{2}({\mathbb {T}}))^{2} \) which satisfies the following condition:

$$\begin{aligned} K_{0} = \sup _{0< \epsilon< 1} \sum _{i=1,2} \epsilon \int _{{\mathbb {T}}} dk ~ E_{Q_{\epsilon }}[ |{\widehat{\psi }_{i}}(k)|^{2}] ~ < ~ \infty . \end{aligned}$$
(3.8)

Denote by \(\mathbb {E}_{\epsilon }\) the expectation with respect to the distribution of \(\{{\widehat{\psi }_{i}}(\cdot ,t)\}_{t \ge 0}\) which starts from \(Q_{\epsilon }\). In “Appendix C”, we show that

$$\begin{aligned} {\sum _{i=1,2} ||{\widehat{\psi }_{i}}(\cdot ,t)||_{\mathbb {L}_{2}}^{2} = \sum _{i=1,2} ||{\widehat{\psi }_{i}}(\cdot ,0)||_{\mathbb {L}_{2}}^{2} \quad a.s.} \end{aligned}$$

for any \(t \ge 0\). In particular, under the condition (3.8)

$$\begin{aligned} \sup _{0< \epsilon< 1} \sum _{i=1,2} \epsilon \int _{{\mathbb {T}}} dk ~ \mathbb {E}_{\epsilon }[ |{\widehat{\psi }_{i}}(k,t)|^{2} ] ~ = K_{0} ~ < ~ \infty \end{aligned}$$
(3.9)

for any time \(t \ge 0\).

For the wave function \(\varvec{\psi }\), we introduce the averaged Wigner function as in Section 3 of [2]. We denote the Wigner distribution on the time scale \(\epsilon ^{-1}t\) by \(\Omega ^{\epsilon }(t)\) with \(\epsilon \) the small semiclassical parameter. Namely, we define \(\Omega ^{\epsilon }(t) \in (\mathbf {S}^2)'\) by

$$\begin{aligned}&\langle \Omega ^{\epsilon }(t),\varvec{J}\rangle ~ = \sum _{i=1,2} \langle \Omega _{i}^{\epsilon }(t),J_{i}\rangle \end{aligned}$$

for \(\varvec{J}=(J_{1},J_{2}) \in \mathbf {S}^{2}\) with

$$\begin{aligned}&\langle \Omega _{i}^{\epsilon }(t),J\rangle \nonumber \\&\quad = \frac{\epsilon }{2} \sum _{x,x' \in {\mathbb {Z}}} \mathbb {E}_{\epsilon }[\psi _{i}(x',\frac{t}{\epsilon })^{*} \psi _{i}(x,\frac{t}{\epsilon }) ] \int _{{\mathbb {T}}} dk ~ e^{2\pi \sqrt{-1} (x'-x) k} J(\frac{\epsilon }{2} (x+x'),k)^{*} \nonumber \\&\quad = \frac{\epsilon }{2} \int _{{\mathbb {R}}} dp \int _{{\mathbb {T}}} dk ~ \mathbb {E}_{\epsilon } [{\widehat{\psi }_{i}}(k-\frac{\epsilon p}{2},\frac{t}{\epsilon })^{*} ~ {\widehat{\psi }_{i}}(k+\frac{\epsilon p}{2},\frac{t}{\epsilon }) ] \widehat{J}(p,k)^{*} \end{aligned}$$
(3.10)

for \(J \in \mathbf {S}\). By the Cauchy–Schwarz inequality and (3.9),

$$\begin{aligned} \sup _{0< \epsilon < 1} \sup _{t \ge 0} |\langle \Omega ^{\epsilon }(t),\varvec{J}\rangle | ~ \le ~ \frac{1}{2} K_{0}||\varvec{J}|| \end{aligned}$$
(3.11)

under the condition (3.8).

Remark 3.3

As discussed in [2], \(\Omega ^{\epsilon }(\cdot )\) is well-defined on a wider class of test functions than \(\mathbf {S}^2\). In particular we can take \(\varvec{J}(y,k) =(J(k),J(k))\) with a bounded function J(k) on \({\mathbb {T}}\), and then we have

$$\begin{aligned} \langle \Omega ^{\epsilon }(t),\varvec{J}\rangle = \frac{\epsilon }{2} \int _{{\mathbb {T}}} dk ~ \sum _{i=1,2}\mathbb {E}_{\epsilon } [|{\widehat{\psi }_{i}}(k,\frac{t}{\epsilon })|^{2} ] J(k). \end{aligned}$$

From this representation one can see that \(\Omega ^{\epsilon }(\cdot )\) is the distribution of the spectral density of the energy. Also if we take \(\varvec{J}(y,k) =(J(y),J(y))\) with a rapidly decreasing function J(y) on \({\mathbb {R}}\) as a test function, then we have

$$\begin{aligned} \langle \Omega ^{\epsilon }(t),\varvec{J}\rangle = \frac{\epsilon }{2} \sum _{x \in {\mathbb {Z}}} \sum _{i=1,2}\mathbb {E}_{\epsilon } [ |\psi _{i}(x,\frac{t}{\epsilon })|^{2} ] J(\epsilon x). \end{aligned}$$

This is the integral of J with respect to the averaged empirical measure of \(\frac{1}{2} \sum _{i=1,2}|\psi _{i}(x,\frac{t}{\epsilon })|^2\). Namely, \( \Omega ^{\epsilon }(t)\) is a rescaled microscopic local spectral density.

4 Main Results

As mentioned in the Introduction, the main purpose of the present paper is to understand the nature of the superdiffusion for the coupled charged harmonic chain of oscillators in a magnetic field with noise defined in the last section, and we apply a two-step scaling limit. In Sect. 4.1, following the idea of [2], we claim that in the weak noise limit the local density of energy is governed by a phonon linear Boltzmann equation. In Sect. 4.2, we consider a properly rescaled solution of the Boltzmann equation and state that it converges to the solution of the fractional diffusion equation (1.1) with \(s=\frac{5}{3}\), which is our main result.

4.1 Boltzmann Equation

In this subsection we state the limiting behavior of the Wigner distribution.

Theorem 1

Suppose the condition (3.8) holds. If \(\Omega ^{\epsilon }(0)\) converges to \(\Omega _0\) in \((\mathbf {S}^2)'\) as \(\epsilon \rightarrow 0\), then for all \(t \ge 0\), \(\Omega ^{\epsilon }(t)\) converges to a vector-valued finite positive measure \(\varvec{\mu }(t)=(\mu _{1}(t),\mu _{2}(t))\) in \((\mathbf {S}^2)'\) as \(\epsilon \rightarrow 0\), which is the unique solution of the following Boltzmann equation

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _{t} \int d \varvec{\mu }(t) \cdot \varvec{J} = \frac{1}{2\pi } \int d\varvec{\mu }(t) \cdot \omega ' \partial _{y} \varvec{J} + \gamma \int d\varvec{\mu }(t)\cdot C\varvec{J} \\ \int d \varvec{\mu }(0) \cdot \varvec{J} = \langle \Omega _0 , \varvec{J}\rangle , \end{array}\right. } \end{aligned}$$
(4.1)

where

$$\begin{aligned} \int d \varvec{\mu } \cdot \varvec{J}&= \sum _{i=1,2} \int _{{\mathbb {R}}\times {\mathbb {T}}} \mu _{i} (dy,dk) ~ J_{i}(y,k)^* \quad \text {for} \quad \varvec{\mu }=(\mu _1,\mu _2),\\ (C\varvec{J})_{i}(x,k)&= \sum _{j = 1,2} \int _{{\mathbb {T}}} dk' \theta _{i}(k)^{2} R(k,k') \theta _{j}(k')^{2} (J_{j}(x,k')-J_{i}(x,k)) \end{aligned}$$

for \(\varvec{J}=(J_1,J_2)\in \mathbf {S}^{2}\) with \(R(k,k') = 16\sin ^{2}{\pi k}\sin ^{2}{\pi k'}\).

Remark 4.1

In the case \(B = 0\), if we make the additional assumption

$$\begin{aligned} \lim _{\rho \rightarrow 0} \limsup _{\epsilon \rightarrow 0} \frac{\epsilon }{2} \sum _{i=1}^2\int _{|k| < \rho } dk ~ E_{Q_{\epsilon }}[|{\widehat{\psi }_{i}}(k)|^{2}] = 0 \end{aligned}$$

on the initial measure \(Q_{\epsilon }\), then the same statement as in Theorem 1 holds. In this case, the proof is essentially given in [2].

Remark 4.2

Suppose that the solution of (4.1) has a density \(u(y,k,i,t)\) for all \(t \in [0,T]\), that is,

$$\begin{aligned} \mu _{i}(t)(dy,dk)&= u(y,k,i,t) dy dk, ~ i = 1,2 , \\ \mu _{i}(0)(dy,dk)&= u_{0}(y,k,i) dy dk, ~ i = 1,2 . \end{aligned}$$

Then \(u(y,k,i,t)\) is a weak solution of the linear Boltzmann equation

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _{t} u(y,k,i,t) + \frac{1}{2\pi } \omega '(k) \partial _{y} u(y,k,i,t) = \gamma \mathcal {L} u(y,k,i,t) \\ u(y,k,i,0) = u_{0}(y,k,i) , \\ \end{array}\right. } \end{aligned}$$
(4.2)

where

$$\begin{aligned} \mathcal {L}u(y,k,i,t) = \sum _{j= 1,2} \int _{\mathbf {T}} dk' \theta _{i}(k)^2 R(k,k') \theta _{j}(k')^2 (u(y,k',j,t)-u(y,k,i,t) ) . \end{aligned}$$

We prove Theorem 1 in Sect. 6. The strategy of our proof is as follows: First we derive a microscopic evolution equation for \(\Omega ^{\epsilon }\), which is not closed in terms of \(\Omega ^{\epsilon }\). Then, with this expression of the time evolution, we show that for any fixed \(T>0\), \(\{ \Omega ^{\epsilon }(t) , 0 \le t \le T \}_{0< \epsilon < 1}\) is sequentially compact in \(C([0,T];(\mathbf {S}^2)')\) in a certain weak-\(*\) sense; see Sect. 6 for the precise meaning. We verify that any limit of a convergent subsequence extends to a vector-valued finite positive measure in “Appendix D”. The uniqueness of the bounded solution of (4.1) in the class of vector-valued finite positive measures is shown in “Appendix E”. Finally we show that any limit of a convergent subsequence satisfies (4.1), which is a closed equation in terms of \(\varvec{\mu }\). Summarizing the above, we conclude that \(( \Omega ^{\epsilon }(\cdot ) )_{\epsilon }\) is convergent and the limit satisfies (4.1).

4.2 Derivation of the \(\frac{5}{6}\) Fractional Diffusion Equation

In this subsection we study the macroscopic behavior of the solution of a properly scaled Boltzmann equation (4.2). Consider the spatially scaled linear Boltzmann equation with scaling parameter N,

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _{t} u(y,k,i,t) + \frac{1}{N^{3/5}}\frac{1}{2\pi } \omega '(k) \partial _{y} u(y,k,i,t) = \gamma \mathcal {L} u(y,k,i,t) \\ u(y,k,i,0) = u_{0}(y,k,i) , \\ \end{array}\right. } \end{aligned}$$
(4.3)

and denote its solution by \(u_N\).

Remark 4.3

For any given \(u_{0}(y,k,i) \in C_{0}^{\infty }({\mathbb {R}}\times {\mathbb {T}}) , ~ i=1,2\), a solution of (4.2) is constructed explicitly using a Markov process associated to the Boltzmann equation in the next section. The uniqueness of solutions in a certain class follows from that of (4.1). The argument also applies to (4.3), so the existence and uniqueness of \(u_N\) follow.

Theorem 2

Suppose \(u_{0}(y,k,i) \in C_{0}^{\infty }({\mathbb {R}}\times {\mathbb {T}}) , ~ i=1,2\). Define the initial local density of energy at \(y \in {\mathbb {R}}\) as \(\bar{u}_0(y) =\sum _{i=1,2} \int _{{\mathbb {T}}} dk ~ u_{0}(y,k,i)\). Then, for all \(y \in {\mathbb {R}}, ~ t \ge 0\),

$$\begin{aligned} \lim _{N \rightarrow \infty } \sum _{i=1,2} \int _{{\mathbb {T}}} dk ~ |u_{N}(y,k,i,Nt) - \frac{1}{2}\bar{u}(y,t)|^2 = 0 , \end{aligned}$$

where \(\bar{u}\) is a solution of

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _{t} \bar{u}(y,t) = - D (-\Delta _{y})^{\frac{5}{6}} \bar{u}(y,t) \\ \bar{u}(y,0) = \bar{u}_0(y) \end{array}\right. } \end{aligned}$$
(4.4)

and \(D = D(B,\gamma ,\alpha ) \) is a positive constant such that

$$\begin{aligned} D = C |B|^{-\frac{1}{3}} \gamma ^{-\frac{2}{3}} \widehat{\alpha }''(0) \end{aligned}$$

with a universal constant C. In particular,

$$\begin{aligned} \lim _{N \rightarrow \infty } ~ | \sum _{i=1,2} \int _{{\mathbb {T}}} dk \ u_{N}(y,k,i,Nt) - \bar{u}(y,t)|^2 = 0. \end{aligned}$$

Remark 4.4

In the case \(B = 0\), if we denote by \(u_N(y,k,i,t)\) the solution of a scaled linear Boltzmann equation

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _{t} u(y,k,i,t) + \frac{1}{N^{2/3}} \frac{1}{2\pi } \omega '(k) \partial _{y} u(y,k,i,t) = \gamma \mathcal {L} u(y,k,i,t) \\ u(y,k,i,0) = u_{0}(y,k,i) , \end{array}\right. } \end{aligned}$$

then for all \(y \in {\mathbb {R}},~ t \ge 0\),

$$\begin{aligned} \lim _{N \rightarrow \infty } \sum _{i=1,2} \int _{{\mathbb {T}}} dk ~ |u_{N}(y,k,i,Nt) - \frac{1}{2}\bar{u}(y,t)|^2 = 0 \end{aligned}$$

where \(\bar{u}\) is the solution of

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _{t} \bar{u}(y,t) = - D' (-\Delta _{y})^{\frac{3}{4}} \bar{u}(y,t) \\ \bar{u}(y,0) = \bar{u}_0(y) \end{array}\right. } \end{aligned}$$

and \(D' = D'(\gamma ,\alpha ) \) is a positive constant such that

$$\begin{aligned} D' = C'\gamma ^{-\frac{1}{2}} (\widehat{\alpha }''(0))^{\frac{3}{4}} \end{aligned}$$

with a universal constant \(C'\). The result is essentially proved in [10].

Theorem 2 means that the limit of the rescaled solution \(u_{N}\) homogenizes in \(k \in {\mathbb {T}}, i=1,2\) due to the scattering effect of \(\mathcal {L}\) on \({\mathbb {T}}\times \{ 1,2 \}\) and satisfies a fractional diffusion equation with the exponent \(\frac{5}{3}\). For the proof, we follow the strategy of [10]. Namely, we consider the long-time asymptotic behavior of a Markov process \(\{ (Z(t),K(t),I(t)) \in {\mathbb {R}}\times {\mathbb {T}}\times \{1,2 \} ; t \ge 0 \}\) associated to the Boltzmann equation (4.2). By using this process, we have a stochastic representation of the solution of (4.2), \(u(y,k,i,t) = \mathbb {E}_{(y,k,i)}[u_{0}(Z(t),K(t),I(t))]\). Since \(\{ (K(t),I(t)) ; t \ge 0 \}\) is ergodic, the homogenization on \({\mathbb {T}}\times \{ 1,2 \}\) occurs as \(t \rightarrow \infty \). At the same time, the finite-dimensional distributions of \(\{ N^{-\frac{3}{5}} Z(Nt) ; t \ge 0 \}\) converge weakly to those of a stable process with exponent \(\frac{5}{3}\). To apply a general theorem in [10], we need to check several conditions. This is the main subject of the next section, where we verify that all the required conditions are satisfied and obtain Theorem 3 on the asymptotic behavior of the Markov process. We apply it to the study of the limit of \(u_N\) and prove Theorem 2 in Sect. 7.
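
To get a concrete feeling for the limit equation (4.4) itself, note that under our Fourier convention \(-(-\Delta _{y})^{\frac{5}{6}}\) acts as multiplication by \(-(2\pi |p|)^{\frac{5}{3}}\). The following Python sketch (for illustration only; the constant D, the box size and the initial profile are arbitrary, and the large periodic box only approximates \({\mathbb {R}}\)) evolves an initial profile by the corresponding Fourier multiplier.

```python
import numpy as np

D, L, M = 1.0, 200.0, 4096               # arbitrary constant D, box size, grid points
y = np.linspace(-L / 2, L / 2, M, endpoint=False)
p = np.fft.fftfreq(M, d=L / M)           # Fourier variable matching exp(-2*pi*i*p*y)
dy = L / M

def evolve(u_init, t, s=5.0 / 6.0):
    """Solve d/dt u = -D (-Laplacian_y)^s u via the multiplier exp(-D (2*pi*|p|)^(2s) t)."""
    mult = np.exp(-D * (2 * np.pi * np.abs(p)) ** (2 * s) * t)
    return np.fft.ifft(mult * np.fft.fft(u_init)).real

u0 = np.exp(-y**2)                       # arbitrary smooth initial profile
u1 = evolve(u0, t=1.0)

# the multiplier equals 1 at p = 0, so the total mass int u dy is conserved
print(u0.sum() * dy, u1.sum() * dy)
# unlike the heat semigroup, the fractional evolution develops heavy power-law tails
print(u0[np.abs(y) > 60].max(), u1[np.abs(y) > 60].max())
```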

5 Markov Process Associated to the Boltzmann Equation

In this section we construct a solution of (4.2) probabilistically. We will see that there exists a Markov process associated to (4.2) and study its long-time asymptotic behavior.

Let \(\{ ( K_{n} , I_{n} ) ; n \in {\mathbb {Z}}_{\ge 0} \}\) be a Markov chain on \({\mathbb {T}}\times \{ 1 , 2 \}\) whose transition probability is given by

$$\begin{aligned} P(k,i,dk',j) = t(k,i) \gamma \theta _{i}(k)^2 R(k,k') \theta _{j}(k')^2 dk', \end{aligned}$$

where

$$\begin{aligned} t(k,i)&= [ \gamma \theta _{i}(k)^2 R(k) ]^{-1} , \quad R(k) = \int _{{\mathbb {T}}} dk' R(k,k'). \end{aligned}$$

Since \(R(k,k')\) is a product of functions of k and \(k'\), we have

$$\begin{aligned} P(k,i,dk',j) = \pi (dk',j) \end{aligned}$$

where \(\pi (dk,di)\) is a reversible measure for this Markov chain given as

$$\begin{aligned} \pi (dk,di) = \sum _{j=1,2} \frac{t(k,j)^{-1}}{\gamma \overline{R}} dk \delta _{ \{ j \} }(di) , \quad \overline{R} = \int _{{\mathbb {T}}} dk ~ R(k). \end{aligned}$$

In particular, \(\{ ( K_{n} , I_{n} ) ; n \ge 1 \}\) is an i.i.d. sequence of random variables on \({\mathbb {T}}\times \{ 1 , 2 \}\) with distribution \(\pi \).

Now we construct a continuous time random walk generated by \(\mathcal {L}\). Let \(\{ \tau _{n} , n \ge 1 \}\) be an i.i.d. sequence of random variables such that \(\tau _{1}\) is exponentially distributed with intensity 1, and such that \(\{ ( K_{n} , I_{n} ) ; n \in {\mathbb {Z}}_{\ge 0} \}\) and \(\{ \tau _{n} , n \ge 1 \}\) are independent. Set \(t_{n} := \sum _{m = 1}^{n} t(K_{m-1},I_{m-1}) \tau _{m} , n \ge 1 , ~ t_{0} = 0 \) and define a stochastic process (K(t) , I(t) ) by \(K(t) = K_{n} , I(t) = I_{n} \) if \(t \in [t_{n},t_{n+1})\). Then, by construction, \(\{ ( K(t) , I(t) ) \}_{t \ge 0}\) is a continuous time random walk generated by \(\mathcal {L}\). Note that the uniform measure on \({\mathbb {T}}\times \{1,2 \}\) is the reversible probability measure of \(\mathcal {L}\). With this process we can construct an explicit solution of Eq. (4.2) by

$$\begin{aligned} u(y,k,i,t)&= \mathbb {E}_{(k,i)}[u_{0}(Z(t),K(t),I(t))] , \end{aligned}$$

where

$$\begin{aligned} Z(t)&= y + \frac{1}{2\pi } \int _{0}^{t} ds ~ \omega '(K(s)) , \end{aligned}$$

and \(K(0) = k , I(0) = i\). For this process, we have the following result.

Theorem 3

Suppose \((K(0), I(0))=(k,i)\) for some \(k \ne 0\) and \(i=1\) or 2. Then as \(N \rightarrow \infty \), the finite-dimensional distributions of rescaled processes \(\{ N^{-\frac{3}{5}} Z(Nt)\}_{t \ge 0 }\) converge weakly to the finite-dimensional distributions of a Lévy process generated by \(- D (-\Delta _{y})^{\frac{5}{6}}\), where \(D = D(B,\gamma ,\alpha ) \) is a positive constant such that

$$\begin{aligned} D = C |B|^{-\frac{1}{3}} \gamma ^{-\frac{2}{3}} \widehat{\alpha }''(0), \end{aligned}$$

and C is a positive constant which does not depend on B, \(\gamma \), \(\alpha \).

Remark 5.1

In the case of \(B = 0\), the finite-dimensional distributions of the rescaled processes \(\{ N^{-\frac{2}{3}} Z(Nt) ; t \ge 0 \}\) converge weakly to the finite-dimensional distributions of a Lévy process generated by \( - D' (-\Delta _{y})^{\frac{3}{4}}\), where \(D' = D'(\gamma ,\alpha ) \) is a positive constant such that

$$\begin{aligned} D' = C'\gamma ^{-\frac{1}{2}} (\widehat{\alpha }''(0))^{\frac{3}{4}} , \end{aligned}$$

and \(C'\) is a positive constant which does not depend on \(\gamma \), \(\alpha \). This is essentially shown in [10].
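
Theorem 3 can be illustrated by a direct Monte Carlo simulation of the process constructed above. The sketch below (illustration only: B and \(\gamma \) are arbitrary, \(\pi \) is sampled by rejection, and the last incomplete holding interval is simply dropped) produces samples of \(N^{-\frac{3}{5}}Z(N)\) for increasing N; their interquartile range stabilizes, consistent with convergence to a \(\frac{5}{3}\)-stable law, while the empirical variance keeps fluctuating, as the limit law has infinite variance.

```python
import numpy as np

rng = np.random.default_rng(1)
B, gamma = 1.0, 1.0                                  # arbitrary parameters, B != 0

def dispersion(k):
    alpha_hat = 2 - 2 * np.cos(2 * np.pi * k)
    root = np.sqrt(alpha_hat + B**2 / 4)
    dw = 2 * np.pi * np.sin(2 * np.pi * k) / root    # common derivative omega'(k)
    return root + B / 2, root - B / 2, dw            # omega_1, omega_2, omega'

def sample_pi(n):
    """Rejection sampler for pi(dk, di); the unnormalized weight theta_i(k)^2 R(k) is <= 8."""
    ks, idx = [], []
    while len(ks) < n:
        k = rng.uniform(-0.5, 0.5, size=4 * n)
        i = rng.integers(1, 3, size=4 * n)
        w1, w2, _ = dispersion(k)
        theta2 = np.where(i == 1, w1, w2) / (w1 + w2)
        R = 4 - 4 * np.cos(2 * np.pi * k)
        accept = rng.uniform(0.0, 8.0, size=4 * n) < theta2 * R
        ks.extend(k[accept]); idx.extend(i[accept])
    return np.array(ks[:n]), np.array(idx[:n])

def scaled_Z(N, n_samples=500):
    """Monte Carlo samples of N^{-3/5} Z(N) with Z(0) = 0, started from stationarity."""
    vals = np.empty(n_samples)
    n_jumps = 5 * N + 100                            # mean holding time is t_bar = 1/(2 gamma)
    for m in range(n_samples):
        k, i = sample_pi(n_jumps)                    # successive states are i.i.d. with law pi
        w1, w2, dw = dispersion(k)
        theta2 = np.where(i == 1, w1, w2) / (w1 + w2)
        rate = gamma * theta2 * (4 - 4 * np.cos(2 * np.pi * k))      # 1 / t(k, i)
        wait = rng.exponential(1.0, size=n_jumps) / rate
        keep = np.cumsum(wait) <= N                  # drop the last incomplete interval
        vals[m] = np.sum(dw[keep] / (2 * np.pi) * wait[keep]) / N**0.6
    return vals

for N in (10**3, 10**4):
    v = scaled_Z(N)
    q25, q75 = np.quantile(v, [0.25, 0.75])
    print(f"N = {N}: IQR ~ {q75 - q25:.3f}, empirical variance ~ {v.var():.3f}")
```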

5.1 Proof of Theorem 3.

We apply [10, Theorem 2.8 (i)] to our process with \(\alpha =\frac{5}{3}\). For this, it is enough to show that Conditions 2.1, 2.2, 2.3 and (2.12) of [10] are satisfied.

First we verify that Condition 2.1 is satisfied. Define

$$\begin{aligned} \Psi (k,i) = \omega '(k) t(k,i) . \end{aligned}$$

The tail of \(\Psi \) under \(\pi \) is

$$\begin{aligned} \pi (\{ (k,i) ; \Psi (k,i) \ge \lambda \})&= \sum _{i = 1,2} \int _{ \{ k ; \Psi (k,i) \ge \lambda \} } dk \ \frac{\theta _{i}(k)^2R(k)}{\overline{R}} \\&= C |B|^{- \frac{1}{3}} \gamma ^{- \frac{5}{3}} \widehat{\alpha }''(0) \lambda ^{-\frac{5}{3}}(1 + O(\lambda ^{-\frac{4}{3}}) ) , \end{aligned}$$

as \(\lambda \rightarrow \infty \) because

$$\begin{aligned}&\theta _{1}(k)^2 \sim 1 \ {\textit{and}} \ \theta _{2}(k)^2 \sim \frac{\widehat{\alpha }''(0)k^{2}}{|B|^{2}} \ {\textit{as}} \ k \rightarrow 0 \ {\textit{if}} \ B>0 , \\&\theta _{1}(k)^2 \sim \frac{\widehat{\alpha }''(0)k^{2}}{|B|^{2}} \ {\textit{and}} \ \theta _{2}(k)^2 \sim 1 \ {\textit{as}} \ k \rightarrow 0 \ {\textit{if}} \ B<0 , \end{aligned}$$

and

$$\begin{aligned} \omega '(k) \sim \frac{\widehat{\alpha }''(0) k}{|B|}, \quad R(k) \sim k^{2} \quad {\textit{as}} \ k \rightarrow 0. \end{aligned}$$

Here C is a positive constant which does not depend on \(B,\gamma ,\alpha \). Since \(\Psi \) is odd in k and the density of \(\pi (\cdot ,i)\) with respect to the Lebesgue measure is even in k, we have

$$\begin{aligned} \pi (\{ (k,i) ; \Psi (k,i) \ge \lambda \}) = \pi (\{ (k,i) ; \Psi (k,i) \le -\lambda \}) \end{aligned}$$

and \( \int \Psi d\pi = 0\).
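
The power \(\lambda ^{-\frac{5}{3}}\) in this tail estimate is easy to confirm by direct numerical quadrature; the following sketch (illustration only, for the arbitrary choice \(B=\gamma =1\)) evaluates \(\pi (\{ \Psi \ge \lambda \})\) on a fine grid in k and fits the log-log slope, which should be close to \(-\frac{5}{3}\).

```python
import numpy as np

B, gamma = 1.0, 1.0
M = 2_000_000
k = (np.arange(M) + 0.5) / M - 0.5               # midpoint grid on T, avoiding k = 0
dk = 1.0 / M

alpha_hat = 2 - 2 * np.cos(2 * np.pi * k)
root = np.sqrt(alpha_hat + B**2 / 4)
w1, w2 = root + B / 2, root - B / 2
dw = 2 * np.pi * np.sin(2 * np.pi * k) / root    # omega'(k)
R = 4 - 4 * np.cos(2 * np.pi * k)
R_bar = 4.0                                      # int_T R(k) dk

def tail(lam):
    total = 0.0
    for wi in (w1, w2):
        theta2 = wi / (w1 + w2)                  # theta_i(k)^2
        Psi = dw / (gamma * theta2 * R)          # Psi(k, i) = omega'(k) t(k, i)
        density = theta2 * R / R_bar             # density of pi(., i) in k
        total += np.sum(density[Psi >= lam]) * dk
    return total

lams = np.array([50.0, 100.0, 200.0, 400.0])
tails = np.array([tail(lam) for lam in lams])
slope = np.polyfit(np.log(lams), np.log(tails), 1)[0]
print("fitted tail exponent:", slope)            # close to -5/3 ~ -1.667
```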

Next we check Condition 2.2. It is obvious that

$$\begin{aligned} \sup \{ ||Pf||_{\mathbb {L}^{2}(\pi )} ; \int d\pi ~ f = 0 , ||f||_{\mathbb {L}^{2}(\pi )} = 1 \} = 0 \end{aligned}$$

because \(Pf = \int d\pi ~ f\).

Finally we show that Condition 2.3 and (2.12) hold. Condition 2.3 is obviously satisfied with \(Q \equiv 0\) and \(p \equiv 1\). Also, we have

$$\begin{aligned} ||P\Psi ||_{\mathbb {L}^{2}(\pi )}^{2}&= \sum _{i=1,2} \int _{{\mathbb {T}}} dk \left( \sum _{j=1,2} \int _{{\mathbb {T}}} dk' \Psi (k',j) \frac{t(k',j)^{-1}}{\gamma \overline{R}}\right) ^{2} \frac{t(k,i)^{-1}}{\gamma \overline{R}} \\&= \sum _{i=1,2} \int _{{\mathbb {T}}} dk \left( \sum _{j=1,2} \int _{{\mathbb {T}}} dk' \frac{\omega '(k')}{\gamma \overline{R}} \right) ^{2} \frac{t(k,i)^{-1}}{\gamma \overline{R}} < \infty . \end{aligned}$$

Therefore, by [10, Theorem 2.8 (i)], the finite-dimensional distributions of the scaled process \(\{ N^{-\frac{3}{5}} Z(Nt)\}_{t \ge 0 }\) under \(\mathbb {P}_{\pi }\) converge to those of a stable process with exponent \(\frac{5}{3}\) whose characteristic function at time 1, denoted by \(\phi (x)\), is

$$\begin{aligned} \phi (x) = \exp {(\int _{{\mathbb {R}}} d\lambda ~ {(e^{\sqrt{-1}\lambda x} - 1 - \sqrt{-1} \lambda x)} c_{*}(\lambda ) |\lambda |^{-\frac{8}{3}})}, \end{aligned}$$

where

$$\begin{aligned} c_{*}(\lambda ) = \frac{5C |B|^{- \frac{1}{3}} \gamma ^{- \frac{5}{3}} \widehat{\alpha }''(0) A_{\frac{5}{3}}}{\bar{t}} \end{aligned}$$

for all \(\lambda \ne 0\), where C is the positive constant appearing in the tail estimate of \(\Psi \) and

$$\begin{aligned} A_{\frac{5}{3}}&= \int _{0}^{\infty } dy ~ e^{-y} y^{\frac{5}{3}}, \\ \bar{t}&= \int d\pi ~ t(k,i) ~ = \frac{1}{2\gamma }. \end{aligned}$$
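
Both constants are elementary: \(A_{\frac{5}{3}} = \Gamma (\frac{8}{3})\) and, since \(\overline{R} = \int _{{\mathbb {T}}} (4-4\cos 2\pi k) dk = 4\) and the density of \(\pi (\cdot ,i)\) is \(t(k,i)^{-1}/(\gamma \overline{R})\), one gets \(\bar{t} = \sum _{i}\int _{{\mathbb {T}}} dk /(\gamma \overline{R}) = \frac{1}{2\gamma }\). A two-line numerical confirmation of the first identity (illustration only, using scipy) is:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma

A, _ = quad(lambda y: np.exp(-y) * y ** (5.0 / 3.0), 0, np.inf)
print(A, Gamma(8.0 / 3.0))    # both approximately 1.5046
```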

Finally we show that the finite-dimensional distributions of \(\{ N^{-\frac{3}{5}} Z(Nt) ; t \ge 0 \}\) under \(\mathbb {P}_{(k,i)}\) also converge to the same stable process for \(k \in {\mathbb {T}}{\setminus } \{ 0 \} , ~ i=1,2\). For \(t \ge 0\) define n(t) as the nonnegative integer such that

$$\begin{aligned} t_{n(t)} \le t < t_{n(t) + 1}. \end{aligned}$$

Then we have

$$\begin{aligned} N^{-\frac{3}{5}} Z(Nt) = N^{-\frac{3}{5}} \sum _{n = 0}^{n(Nt)} \Psi (K_{n},I_{n}) \tau _{n+1}. \end{aligned}$$

If \(k \ne 0\), then \(N^{-\frac{3}{5}} \Psi (k,i) \tau _{1} \rightarrow 0 \) as \(N \rightarrow \infty \), \(\mathbb {P}_{(k,i)}\)-almost surely. Moreover, under \(\mathbb {P}_{(k,i)}\), \(\{ ( K_{n} , I_{n} )\}_{n \ge 1 }\) is an i.i.d. sequence with distribution \(\pi \). By Theorem 6.1 and Lemma 6.2 of [10], the finite-dimensional distributions of \(\{ N^{-\frac{3}{5}} \sum _{n = 1}^{n(Nt)} \Psi (K_{n},I_{n}) \tau _{n+1} ; t \ge 0 \}\) under \(\mathbb {P}_{(k,i)}\) converge to those of the stable process, and hence the finite-dimensional distributions of \(\{ N^{-\frac{3}{5}} Z(Nt) ; t \ge 0 \}\) under \(\mathbb {P}_{(k,i)}\) also converge to those of the same stable process if \(k \ne 0\).

6 Proof of Theorem 1.

To simplify the notation, we define functions \({\widehat{\Omega }_{i^{+}}^{\epsilon }}(t)(p,k)\), \({\widehat{\Gamma }_{i^{+}}^{\epsilon }}(t)(p,k)\) on \({\mathbb {R}}\times {\mathbb {T}}\) by

$$\begin{aligned} {\widehat{\Omega }_{i^{+}}^{\epsilon }}(t)(p,k)&= \frac{\epsilon }{2} \mathbb {E}_{\epsilon } [ {\widehat{\psi }_{i}}(k-\frac{\epsilon p}{2},\frac{t}{\epsilon })^{*} ~ {\widehat{\psi }_{i}}(k+\frac{\epsilon p}{2},\frac{t}{\epsilon }) ] , \\ {\widehat{\Gamma }_{i^{+}}^{\epsilon }}(t)(p,k)&= \frac{\epsilon }{2} \mathbb {E}_{\epsilon } [ {\widehat{\psi }_{i}}(-k+\frac{\epsilon p}{2},\frac{t}{\epsilon }) ~ {\widehat{\psi }_{i^{*}}}(k+\frac{\epsilon p}{2},\frac{t}{\epsilon }) ] \end{aligned}$$

for \(i=1,2\) where \(i^{*} = 3-i\). We use the notation \(i^*\) throughout the rest of the paper. We also define \({\widehat{\Omega }_{i^{-}}^{\epsilon }}(t)(p,k)\), \({\widehat{\Gamma }_{i^{-}}^{\epsilon }}(t)(p,k)\) as

$$\begin{aligned} {\widehat{\Omega }_{i^{-}}^{\epsilon }}(t)(p,k)&= {\widehat{\Omega }_{i^{+}}^{\epsilon }}(t)(p,-k), \\ {\widehat{\Gamma }_{i^{-}}^{\epsilon }}(t)(p,k)&= {\widehat{\Gamma }_{i^{+}}^{\epsilon }}(t)^{*}(-p,k). \end{aligned}$$

Note that for all \(p \in {\mathbb {R}}\) these functions satisfy

$$\begin{aligned} ||{\widehat{\Omega }_{i^{\iota }}^{\epsilon }}(t)(p,\cdot )||_{\mathbb {L}^{1}({\mathbb {T}})} \le \frac{1}{2} K_{0}, \quad ||{\widehat{\Gamma }_{i^{\iota }}^{\epsilon }}(t)(p,\cdot )||_{\mathbb {L}^{1}({\mathbb {T}})} \le \frac{1}{2} K_{0} \end{aligned}$$

for \(i=1,2 \), \(\iota = +,-\) under the condition (3.8). With this notation, the Wigner distribution is rewritten as

$$\begin{aligned} \langle \Omega ^{\epsilon }(t),\varvec{J}\rangle&= \sum _{i=1,2} \int _{{\mathbb {R}}} dp \int _{{\mathbb {T}}} dk ~ {\widehat{\Omega }_{i^{+}}^{\epsilon }}(t)(p,k) {\widehat{J_{i}}}(p,k)^{*}. \end{aligned}$$
(6.1)

From now on we will show that the time evolution of \(\Omega ^{\epsilon }(\cdot )\) satisfies the following equation

$$\begin{aligned}&\partial _{t} \langle \Omega ^{\epsilon }(t),\varvec{J}\rangle \nonumber \\&\quad = \frac{1}{2\pi } \langle \Omega ^{\epsilon }(t),\omega '(k) \partial _{y} \varvec{J}\rangle + \gamma \langle \Omega ^{\epsilon }(t),C\varvec{J}\rangle \nonumber \\&\qquad + \gamma ( \langle \Gamma ^{\epsilon }(t),C'\varvec{J}\rangle + \langle (\Gamma ^{\epsilon })^{*}(t),C'\varvec{J}\rangle ) + O_{\varvec{J}}(\epsilon ) \end{aligned}$$
(6.2)

for \(\varvec{J} \in \mathbf {S}^{2}\) where

$$\begin{aligned} \langle \Gamma ^{\epsilon }(t),\varvec{J}\rangle&= \sum _{i=1,2} \int _{{\mathbb {R}}} dp \int _{{\mathbb {T}}} dk ~ {\widehat{\Gamma }_{i^{+}}^{\epsilon }}(t)(p,k) {\widehat{J_{i}}}(p,k)^{*} ,\nonumber \\ \langle (\Gamma ^{\epsilon })^{*}(t),\varvec{J}\rangle&= \sum _{i=1,2} \int _{{\mathbb {R}}} dp \int _{{\mathbb {T}}} dk ~ {\widehat{\Gamma }_{i^{-}}^{\epsilon }}(t)(p,k) {\widehat{J_{i}}}(p,k)^{*} \end{aligned}$$
(6.3)

and

$$\begin{aligned} (C'\varvec{J})_{i}(p,k) = \int _{{\mathbb {T}}} dk' ~ \theta _{1}(k) \theta _{2}(k) R(k,k') \theta _{i^{*}}^{2}(k') J_{i^*}(p,k') - {\frac{R(k)}{2}} \theta _{1}(k) \theta _{2}(k) J_{i}(p,k). \end{aligned}$$

Here, \(O_{\varvec{J}}(\epsilon )\) is a term which satisfies

$$\begin{aligned} \frac{|O_{\varvec{J}}(\epsilon )|}{\epsilon } \le C_{\varvec{J}} \end{aligned}$$

for all \(0< \epsilon < 1\) with a positive constant \(C_{\varvec{J}}\) which depends on \(\varvec{J}\).

By (3.7) the time evolution of \({\widehat{\Omega }_{i^{+}}^{\epsilon }}(t)(p,k)\) is

$$\begin{aligned}&\partial _{t} {\widehat{\Omega }_{i^{+}}^{\epsilon }}(t)(p,k) \nonumber \\&\quad = - \frac{\sqrt{-1}}{\epsilon }({\omega }_i(k+\frac{\epsilon p}{2}) - {\omega }_i(k-\frac{\epsilon p}{2})) {\widehat{\Omega }_{i^{+}}^{\epsilon }}(t)(p,k) \nonumber \\&\qquad - \frac{\gamma }{2} ({R}(k+\frac{\epsilon p}{2}) \theta _{i}^{2}(k+\frac{\epsilon p}{2}) + {R}(k-\frac{\epsilon p}{2}) \theta _{i}^{2}(k-\frac{\epsilon p}{2})) {\widehat{\Omega }_{i^{+}}^{\epsilon }}(t)(p,k) \nonumber \\&\qquad - \frac{\gamma }{2} {R}(k+\frac{\epsilon p}{2}) \theta _{i}(k+\frac{\epsilon p}{2}) \theta _{i^{*}}(k+\frac{\epsilon p}{2}) {\widehat{\Gamma }_{i^{*-}}^{\epsilon }}(t)(p,k) \nonumber \\&\qquad - \frac{\gamma }{2} {R}(k-\frac{\epsilon p}{2}) \theta _{i}(k-\frac{\epsilon p}{2}) \theta _{i^{*}}(k-\frac{\epsilon p}{2}){\widehat{\Gamma }_{i^{*+}}^{\epsilon }}(t)(p,k) \nonumber \\&\qquad + \gamma \theta _{i}(k-\frac{\epsilon p}{2}) \theta _{i}(k+\frac{\epsilon p}{2}) \int _{{\mathbb {T}}} dk' R_{\epsilon p}(k,k') \nonumber \\&\qquad \times [\theta _{i}(k'-\frac{\epsilon p}{2}) \theta _{i}(k'+\frac{\epsilon p}{2}) {\widehat{\Omega }_{i^{+}}^{\epsilon }}(t)(p,k') \nonumber \\&\qquad + \theta _{i^{*}}(k'-\frac{\epsilon p}{2}) \theta _{i^{*}}(k'+\frac{\epsilon p}{2}) {\widehat{\Omega }_{i^{*+}}^{\epsilon }}(t)(p,k') \nonumber \\&\qquad + \theta _{i}(k'-\frac{\epsilon p}{2}) \theta _{i^{*}}(k'+\frac{\epsilon p}{2}) {\widehat{\Gamma }_{i^{*-}}^{\epsilon }}(t)(p,k') \nonumber \\&\qquad + \theta _{i}(k'+\frac{\epsilon p}{2}) \theta _{i^{*}}(k'-\frac{\epsilon p}{2}) {\widehat{\Gamma }_{i^{*+}}^{\epsilon }}(t)(p,k')], \end{aligned}$$
(6.4)

where

$$\begin{aligned} R_{\epsilon p}(k,k') = 16 \sin {\pi (k+\frac{\epsilon p}{2})}\sin {\pi (k-\frac{\epsilon p}{2})}\sin {\pi (k'+\frac{\epsilon p}{2})}\sin {\pi (k'-\frac{\epsilon p}{2})}. \end{aligned}$$

For the derivation of (6.4), see “Appendix F”.

Since \({R}, \theta _{i}\) and \({\omega }_i\) are smooth on \({\mathbb {T}}\), the right-hand side of (6.4) can be rewritten as

$$\begin{aligned}&\partial _{t} {\widehat{\Omega }_{i^{+}}^{\epsilon }}(t)(p,k) \nonumber \\&\quad = - \sqrt{-1} p \omega '_i(k) {\widehat{\Omega }_{i^{+}}^{\epsilon }}(t)(p,k) - \gamma {R}(k) \theta _{i}^{2}(k) {\widehat{\Omega }_{i^{+}}^{\epsilon }}(t)(p,k) \nonumber \\&\qquad - \frac{\gamma }{2} {R}(k) \theta _{i}(k) \theta _{i^{*}}(k) ({\widehat{\Gamma }_{i^{*-}}^{\epsilon }}(t)(p,k) + {\widehat{\Gamma }_{i^{*+}}^{\epsilon }}(t)(p,k)) \nonumber \\&\qquad + \gamma \theta _{i}^{2}(k) \int _{{\mathbb {T}}} dk' R(k,k') [\theta _{i}^{2}(k') {\widehat{\Omega }_{i^{+}}^{\epsilon }}(t)(p,k') + \theta _{i^{*}}^{2}(k') {\widehat{\Omega }_{i^{*+}}^{\epsilon }}(t)(p,k') \nonumber \\&\qquad + \theta _{i}(k') \theta _{i^{*}}(k') {\widehat{\Gamma }_{i^{*-}}^{\epsilon }}(t)(p,k') + \theta _{i}(k') \theta _{i^{*}}(k') {\widehat{\Gamma }_{i^{*+}}^{\epsilon }}(t)(p,k')] + \mathcal {R}_{i}(p,k), \end{aligned}$$
(6.5)

where \(\mathcal {R}_{i}, i=1,2\), are remainder terms satisfying

$$\begin{aligned} ||\mathcal {R}_{i}(p,\cdot )||_{\mathbb {L}^{1}({\mathbb {T}})} \le C(T,B,\gamma ,\alpha ,K_{0}) |p| \epsilon \end{aligned}$$

for all \(p \in {\mathbb {R}}\). Then for any \(\varvec{J} \in \mathbf {S}^{2}\),

$$\begin{aligned} \int _{{\mathbb {R}}} dp \int _{{\mathbb {T}}} dk ~ \mathcal {R}_{i}(p,k) {\widehat{J_{i}}}(p,k)^{*} = O_{\varvec{J}}(\epsilon ). \end{aligned}$$
(6.6)

Combining (6.1), (6.3), (6.5) and (6.6) with the relation \(\int _{{\mathbb {T}}} dk' R(k,k') = {R}(k)\), we conclude that (6.2) holds.

From (3.11) and (6.2), for any fixed \(T>0\) and \(\varvec{J} \in \mathbf {S}^{2}\), \(\{ \langle \Omega ^{\epsilon }(\cdot ),\varvec{J}\rangle \}_{ 0<\epsilon <1} \subset C([0,T],{\mathbb {C}}) \) is uniformly bounded and equicontinuous. Hence, for each \(\varvec{J} \in \mathbf {S}^{2}\), there exists a subsequence \(\epsilon _N \downarrow 0\) such that \( \langle \Omega ^{\epsilon _N}(\cdot ),\varvec{J}\rangle \) converges to a function in \(C([0,T], {\mathbb {C}})\) uniformly as \(N \rightarrow \infty \). Since \( \mathbf {S}^{2}\) is separable, there is a dense countable subset \(\{ \varvec{J}^{(m)} ; m \in {\mathbb {N}}\}\) of \(\mathbf {S}^2\), and by a diagonal argument we can find a sequence \(\epsilon _N \downarrow 0\) such that \( \langle \Omega ^{\epsilon _N}(\cdot ),\varvec{J}^{(m)}\rangle \) converges for all \(m \in {\mathbb {N}}\). Now, we show that for all \(\varvec{J} \in \mathbf {S}^2\), \( \langle \Omega ^{\epsilon _N}(\cdot ),\varvec{J}\rangle \) converges uniformly to a continuous function as \(N \rightarrow \infty \). Fix \(\varvec{J} \in \mathbf {S}^2\) and \(\delta > 0\). Since \(\{ \varvec{J}^{(m)} ; m \in {\mathbb {N}}\}\) is dense, we can take some \(\varvec{J}^{(l)}\) such that \(||\varvec{J} - \varvec{J}^{(l)}|| < \delta \). Then for any \(n,m \in {\mathbb {N}}\) we have

$$\begin{aligned}&\sup _{t \in [0,T]} | \langle \Omega ^{\epsilon _n}(t),\varvec{J}\rangle - \langle \Omega ^{\epsilon _m}(t),\varvec{J}\rangle | \\&\quad \le \sup _{t \in [0,T]} | \langle \Omega ^{\epsilon _n}(t),\varvec{J}\rangle - \langle \Omega ^{\epsilon _n}(t),\varvec{J}^{(l)}\rangle | \\&\qquad + \sup _{t \in [0,T]} | \langle \Omega ^{\epsilon _n}(t),\varvec{J}^{(l)}\rangle - \langle \Omega ^{\epsilon _m}(t),\varvec{J}^{(l)}\rangle | \\&\qquad + \sup _{t \in [0,T]} | \langle \Omega ^{\epsilon _m}(t),\varvec{J}^{(l)}\rangle - \langle \Omega ^{\epsilon _m}(t),\varvec{J}\rangle | \\&\quad \le K_{0} \delta + \sup _{t \in [0,T]} | \langle \Omega ^{\epsilon _n}(t),\varvec{J}^{(l)}\rangle - \langle \Omega ^{\epsilon _m}(t),\varvec{J}^{(l)}\rangle | \end{aligned}$$

by (3.11). Hence, for sufficiently large \(n,m\),

$$\begin{aligned} \sup _{t \in [0,T]} | \langle \Omega ^{\epsilon _n}(t),\varvec{J}\rangle - \langle \Omega ^{\epsilon _m}(t),\varvec{J}\rangle |\le (1+K_0) \delta \end{aligned}$$

and so \( \langle \Omega ^{\epsilon _N}(\cdot ),\varvec{J}\rangle \) converges uniformly.

In “Appendix D”, we prove that for any \(t \ge 0\), any limit of a weak-* convergent subsequence of \(\{ \Omega ^{\epsilon }(t) \}_{\epsilon }\) can be extended to a vector-valued finite positive measure on \({\mathbb {R}}\times {\mathbb {T}}\). The uniqueness of solutions of Eq. (4.1) is shown in “Appendix E”.

Hence, noting that \(\omega '(k) \partial _y\varvec{J}, C\varvec{J}, C'\varvec{J} \in \mathbf {S}^{2}\) for any \(\varvec{J} \in \mathbf {S}^{2}\), we conclude the proof of Theorem 1 from (6.2) and the following lemma.

Lemma 6.1

For any \(T>0\) and \(\varvec{J} \in \mathbf {S}^{2}\),

$$\begin{aligned}&\lim _{\epsilon \rightarrow 0} |\int _{0}^{T} dt ~ \langle \Gamma ^{\epsilon }(t),\varvec{J}\rangle | = 0 , \\&\lim _{\epsilon \rightarrow 0} |\int _{0}^{T} dt ~ \langle (\Gamma ^{\epsilon })^{*}(t),\varvec{J}\rangle | = 0. \end{aligned}$$

Proof

By (3.7) the time evolution of \({\widehat{\Gamma }_{i^{+}}^{\epsilon }}(t)(p,k), i =1,2\) is

$$\begin{aligned}&\partial _{t} {\widehat{\Gamma }_{i^{+}}^{\epsilon }}(t)(p,k) \\&\quad = - \frac{\sqrt{-1}}{\epsilon } ({\omega }_i(k-\frac{\epsilon p}{2}) + \omega _{i^{*}}(k+\frac{\epsilon p}{2})) {\widehat{\Gamma }_{i^{+}}^{\epsilon }}(t)(p,k) \\&\qquad - \frac{\gamma }{2} ({R}(k-\frac{\epsilon p}{2}) \theta _{i}^{2}(k-\frac{\epsilon p}{2}) + {R}(k+\frac{\epsilon p}{2}) \theta _{i^{*}}^{2}(k+\frac{\epsilon p}{2})){\widehat{\Gamma }_{i^{+}}^{\epsilon }}(t)(p,k) \\&\qquad - \frac{\gamma }{2} {R}(k+\frac{\epsilon p}{2}) \theta _{i}(k+\frac{\epsilon p}{2}) \theta _{i^{*}}(k+\frac{\epsilon p}{2}){\widehat{\Omega }_{i^{-}}^{\epsilon }}(t)(p,k) \\&\qquad - \frac{\gamma }{2} {R}(k-\frac{\epsilon p}{2}) \theta _{i}(k-\frac{\epsilon p}{2}) \theta _{i^{*}}(k-\frac{\epsilon p}{2}){\widehat{\Omega }_{i^{*+}}^{\epsilon }}(t)(p,k) \\&\qquad + \gamma \theta _{i}(k-\frac{\epsilon p}{2}) \theta _{i^{*}}(k+\frac{\epsilon p}{2}) \int _{{\mathbb {T}}} dk' R_{\epsilon p}(k,k') \\&\qquad \times [\theta _{i}(k'-\frac{\epsilon p}{2}) \theta _{i}(k'+\frac{\epsilon p}{2}) {\widehat{\Omega }_{i^{+}}^{\epsilon }}(t)(p,k') \\&\qquad + \theta _{i^{*}}(k'-\frac{\epsilon p}{2}) \theta _{i^{*}}(k'+\frac{\epsilon p}{2}) {\widehat{\Omega }_{i^{*+}}^{\epsilon }}(t)(p,k') \\&\qquad + \theta _{i}(k'-\frac{\epsilon p}{2}) \theta _{i^{*}}(k'+\frac{\epsilon p}{2}) {\widehat{\Gamma }_{i^{+}}^{\epsilon }}(t)(p,k') \\&\qquad + \theta _{i}(k'+\frac{\epsilon p}{2}) \theta _{i^{*}}(k'-\frac{\epsilon p}{2}) {\widehat{\Gamma }_{i^{-}}^{\epsilon }}(t)(p,k')] . \end{aligned}$$

Since \(R\), \(\theta _{i}\) and \({\omega }_i\) are smooth on \({\mathbb {T}}\), the above equation can be rewritten as

$$\begin{aligned}&\partial _{t} {\widehat{\Gamma }_{i^{+}}^{\epsilon }}(t)(p,k) \nonumber \\&\quad = - \frac{\sqrt{-1}}{\epsilon } ({\omega }_i(k) + \omega _{i^{*}}(k)) {\widehat{\Gamma }_{i^{+}}^{\epsilon }}(t)(p,k) \nonumber \\&\qquad - \frac{\gamma }{2} ({R}(k) \theta _{i}^{2}(k) + {R}(k) \theta _{i^{*}}^{2}(k)){\widehat{\Gamma }_{i^{+}}^{\epsilon }}(t)(p,k) \nonumber \\&\qquad - \frac{\gamma }{2} {R}(k) \theta _{i}(k) \theta _{i^{*}}(k){\widehat{\Omega }_{i^{-}}^{\epsilon }}(t)(p,k) - \frac{\gamma }{2} {R}(k) \theta _{i}(k) \theta _{i^{*}}(k){\widehat{\Omega }_{i^{*+}}^{\epsilon }}(t)(p,k) \nonumber \\&\qquad + \gamma \theta _{i}(k) \theta _{i^{*}}(k) \int _{{\mathbb {T}}} dk' R(k,k') [\theta _{i}^{2}(k') {\widehat{\Omega }_{i^{+}}^{\epsilon }}(t)(p,k') + \theta _{i^{*}}^{2}(k') {\widehat{\Omega }_{i^{*+}}^{\epsilon }}(t)(p,k') \nonumber \\&\qquad + \theta _{i}(k') \theta _{i^{*}}(k') {\widehat{\Gamma }_{i^{+}}^{\epsilon }}(t)(p,k') + \theta _{i}(k') \theta _{i^{*}}(k') {\widehat{\Gamma }_{i^{-}}^{\epsilon }}(t)(p,k')] + \mathcal {R}_{i + 2}(p,k) \end{aligned}$$
(6.7)

for \(i =1,2\), where \(\mathcal {R}_{i}\), \(i=3,4\), are remainder terms satisfying

$$\begin{aligned} ||\mathcal {R}_{i}(p,\cdot )||_{\mathbb {L}^{1}({\mathbb {T}})} \le C(T,B,\gamma ,\alpha ,K_{0}) |p| (1+\epsilon ) \end{aligned}$$
(6.8)

for all \(p \in {\mathbb {R}}\). Hence, for any \(\varvec{J} \in \mathbf {S}^{2}\) and \(i=1,2\),

$$\begin{aligned} \int _{{\mathbb {R}}} dp \int _{{\mathbb {T}}} dk ~ \mathcal {R}_{i+2}(p,k) {\widehat{J_{i}}}(p,k)^{*} = O_{\varvec{J} }(1). \end{aligned}$$

Combining (6.1), (6.7) and (6.8) we have

$$\begin{aligned}&\partial _{t} \langle \Gamma ^{\epsilon }(t),\varvec{J}\rangle \\&\quad = - \frac{\sqrt{-1}}{\epsilon } \langle \Gamma ^{\epsilon },({\omega }_1+{\omega }_2)\varvec{J} \rangle + \langle \Omega ^{\epsilon },R'\varvec{J} \rangle + \langle \Omega ^{\epsilon },R'\varvec{J}^{t}\rangle \\&\qquad + \langle \Gamma ^{\epsilon },R''\varvec{J}\rangle \\&\qquad + \langle (\Gamma ^{\epsilon })^{*},R''\varvec{J}\rangle + \langle \Omega ^{\epsilon },\beta '\varvec{J}^{t}\rangle + \langle (\Omega ^{\epsilon })^{*},\beta '\varvec{J}\rangle \\&\qquad + \langle \Gamma ^{\epsilon },\beta \varvec{J}\rangle + ~ O_{\varvec{J}}(1) , \end{aligned}$$

where \(\varvec{J}^{t} = (J_{2},J_{1})\) and

$$\begin{aligned}&\langle (\Omega ^{\epsilon })^{*}(t),\varvec{J}\rangle = \sum _{i=1,2} \int _{{\mathbb {R}}} dp \int _{{\mathbb {T}}} dk ~ {\widehat{\Omega }_{i^{-}}^{\epsilon }}(t)(p,k) {\widehat{J_{i}}}(p,k)^{*}, \\&\beta (k) = - \frac{\gamma }{2} R(k), \quad \beta '(k) = - \frac{\gamma }{2} \theta _{1}(k) \theta _{2}(k) R(k), \\&(R'\varvec{J})_{i}(p,k) = \gamma \int _{{\mathbb {T}}} dk' \theta _{i}(k)^2 R(k,k') \theta _{1}(k')\theta _{2}(k') J_{i}(p,k'), \\&(R''\varvec{J})_{i}(p,k) = \gamma \int _{{\mathbb {T}}} dk' \theta _{1}(k) \theta _{2}(k) R(k,k') \theta _{1}(k')\theta _{2}(k') J_{i}(p,k'). \end{aligned}$$
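All pairings on the right-hand side except the first, as well as \(\langle \Gamma ^{\epsilon }(t),\varvec{J}\rangle \) itself and the \(O_{\varvec{J}}(1)\) term, are bounded uniformly in \(t \in [0,T]\) and \(0< \epsilon <1\) by the a priori bound (3.11) (and its analogues for the other fields); write \(C_{\varvec{J}}\) for such a uniform bound, a bookkeeping constant introduced only for the next estimate. Integrating the identity in time and multiplying by \(\epsilon \) then gives

$$\begin{aligned} \Big | \int _{0}^{T} dt ~ \langle \Gamma ^{\epsilon }(t),({\omega }_1+{\omega }_2)\varvec{J} \rangle \Big | = \epsilon \Big | \langle \Gamma ^{\epsilon }(T),\varvec{J}\rangle - \langle \Gamma ^{\epsilon }(0),\varvec{J}\rangle - \int _{0}^{T} dt ~ (\text {bounded terms}) \Big | \le \epsilon ~ C_{\varvec{J}} (2 + T). \end{aligned}$$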

Therefore, we have

$$\begin{aligned} \lim _{\epsilon \rightarrow 0} |\int _{0}^{T} dt ~ \langle \Gamma ^{\epsilon }(t),({\omega }_1+{\omega }_2)\varvec{J} \rangle | = 0 \end{aligned}$$

for all \(\varvec{J} \in \mathbf {S}^{2}\). Since \({\omega }_1(k) + {\omega }_2(k) \) is bounded from above and below by positive constants, \(({\omega }_1 + {\omega }_2)^{-1}\varvec{J} \in \mathbf {S}^2\) for all \(\varvec{J} \in \mathbf {S}^2\). Hence we conclude that

$$\begin{aligned} \lim _{\epsilon \rightarrow 0} |\int _{0}^{T} dt ~ \langle \Gamma ^{\epsilon }(t),\varvec{J}\rangle | = 0 \end{aligned}$$

for all \(\varvec{J} \in \mathbf {S}^{2}\).

The same argument applies to \((\Gamma ^{\epsilon })^{*}\). \(\square \)

7 Proof of Theorem 2.

We use the Markov chain \((K(t), I(t))\) introduced in Sect. 5. First note that for any \(u_0 \in C^{\infty }_0({\mathbb {R}}\times {\mathbb {T}}\times \{1,2\})\),

$$\begin{aligned} u_N(y,k,i,t) = \mathbb {E}_{(k,i)}[u_{0}(Z_{N}(t),K(t),I(t))], \end{aligned}$$

where

$$\begin{aligned} Z_{N}(t) = y + \frac{1}{2\pi N^{\frac{3}{5}}} \int _{0}^{t} ds ~ \omega '(K(s)). \end{aligned}$$
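The representation above is a probabilistic (Feynman–Kac type) formula, so \(u_N\) can in principle be approximated by Monte Carlo: simulate the jump process \((K(t),I(t))\), accumulate the time integral defining \(Z_{N}(t)\), and average \(u_0\) over independent realizations. The Python sketch below only illustrates this structure; the total jump rate, the transition mechanism (which in the model is determined by \(R(k,k')\) and the index dynamics of Sect. 5) and the group velocity \(\omega '\) are replaced by hypothetical placeholders.

import numpy as np

# Hypothetical placeholders: the actual jump mechanism of (K(t), I(t)) is the one
# constructed in Sect. 5 from the scattering kernel R(k, k'); the functions below
# only stand in for it so that the structure of the representation is visible.
def total_rate(k, i):
    # total jump rate out of the state (k, i) -- placeholder, not the model's rate R(k)
    return 1.0 + np.sin(2.0 * np.pi * k) ** 2

def sample_next_state(k, i, rng):
    # post-jump state (k', i') -- placeholder kernel: uniform k' on T, index possibly flipped
    return rng.uniform(-0.5, 0.5), rng.choice([1, 2])

def omega_prime(k):
    # placeholder group velocity omega'(k); the model's dispersion relation is not reproduced here
    return np.cos(2.0 * np.pi * k)

def simulate_Z(y, k, i, t, N, rng):
    # Simulate (K(s), I(s)) for 0 <= s <= t and return (Z_N(t), K(t), I(t)),
    # where Z_N(t) = y + (2 pi N^{3/5})^{-1} * int_0^t omega'(K(s)) ds.
    s, integral = 0.0, 0.0
    while True:
        tau = rng.exponential(1.0 / total_rate(k, i))  # exponential holding time
        if s + tau >= t:
            integral += omega_prime(k) * (t - s)       # no further jump before time t
            break
        integral += omega_prime(k) * tau
        s += tau
        k, i = sample_next_state(k, i, rng)
    return y + integral / (2.0 * np.pi * N ** 0.6), k, i

def u_N(u0, y, k, i, t, N, n_samples=2000, seed=0):
    # Monte Carlo estimate of u_N(y, k, i, t) = E_{(k,i)}[ u_0(Z_N(t), K(t), I(t)) ].
    rng = np.random.default_rng(seed)
    return np.mean([u0(*simulate_Z(y, k, i, t, N, rng)) for _ in range(n_samples)])

# Illustrative smooth initial datum, evaluated at the kinetic time scale N * t with t = 1.
u0 = lambda y, k, i: np.exp(-y ** 2) * (1.0 + 0.1 * np.cos(2.0 * np.pi * k))
print(u_N(u0, y=0.0, k=0.1, i=1, t=100.0, N=100))

In the proof itself this representation is not used numerically; it only enters through the Markov property and the mixing of \((K(t),I(t))\), as exploited below.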

Then, by using the Fourier transform we can write

$$\begin{aligned} u_N(y,k,i,Nt)&= \mathbb {E}_{(k,i)}[u_{0}(Z_{N}(Nt),K(Nt),I(Nt))] \\&= \sum _{x \in {\mathbb {Z}}} \int _{{\mathbb {R}}} dp \sum _{j = 1,2} {\widetilde{u}_{0}} (p,x,j) \mathbb {E}_{(k,i)}[e^{\sqrt{-1} p Z_{N}(Nt)} e^{\sqrt{-1} x K(Nt)} 1_{\{ I(Nt) = j \} } ] , \end{aligned}$$

where \(\widetilde{u}(p,x,i)\) is the Fourier transform of \(u(y,k,i)\). Denote by \(di\) the counting measure on \(\{1,2 \}\). Let \(P^{t}\), \(t \ge 0\), be the semigroup generated by \(\mathcal {L}\). Since \(\frac{1}{2}dkdi\) is a reversible probability measure of the process \(\{ ( K(t) , I(t) )\}_{t \ge 0}\) and 0 is a simple eigenvalue of the generator \(\mathcal {L}\), we have

$$\begin{aligned} \lim _{t \rightarrow \infty } ||P^{t}f||_{\mathbb {L}^{2}({\mathbb {T}}\times \{ 1,2 \} )} = 0 \end{aligned}$$

for any \(f \in \mathbb {L}^{2}({\mathbb {T}}\times \{ 1,2 \} )\) satisfying \(\int _{{\mathbb {T}}\times \{1,2 \}} dkdi ~ f(k,i) = 0\), by ergodicity and reversibility (cf. Theorems 1.6.1 and 1.6.3 and Exercise 4.7.2 of [8]). Let \(\{ m_{N}\}_{N \in {\mathbb {N}}} \) be an increasing sequence of positive numbers such that

$$\begin{aligned}&\lim _{N \rightarrow \infty } m_{N} = \infty , \\&\lim _{N \rightarrow \infty } m_{N} N^{-\frac{3}{5}} = 0. \end{aligned}$$
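For instance (purely as an illustration; any sequence with these two properties will do), one may take \(m_{N} = N^{\frac{1}{2}}\), since

$$\begin{aligned} m_{N} = N^{\frac{1}{2}} \rightarrow \infty , \qquad m_{N} N^{-\frac{3}{5}} = N^{-\frac{1}{10}} \rightarrow 0 \qquad \text {as } N \rightarrow \infty . \end{aligned}$$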

Then for any \(t \ge 0 , p \in {\mathbb {R}}, x \in {\mathbb {Z}}\) and \(j = 1,2\)

$$\begin{aligned}&\left| \mathbb {E}_{(k,i)}[e^{\sqrt{-1} p Z_{N}(Nt)} e^{\sqrt{-1} x K(Nt)} 1_{\{ I(Nt) = j \} }] \right. \\&\qquad \left. - \mathbb {E}_{(k,i)}[e^{\sqrt{-1} p Z_{N}(Nt - m_{N}t)} e^{\sqrt{-1} x K(Nt)} 1_{\{ I(Nt) = j \} }] \right| \\&\quad \le \mathbb {E}_{(k,i)}[|1 - e^{\sqrt{-1} p (Z_{N}(Nt) - Z_{N}(Nt - m_{N}t))}|] \\&\quad \le \mathbb {E}_{(k,i)}[|p (Z_{N}(Nt) - Z_{N}(Nt - m_{N}t))|] \end{aligned}$$

since \(|1-e^{\sqrt{-1}a}| \le |a|\) for any \(a \in {\mathbb {R}}\). The last expression converges to 0 as \(N \rightarrow \infty \) since

$$\begin{aligned} \mathbb {E}_{(k,i)}[|p (Z_{N}(Nt) - Z_{N}(Nt - m_{N}t))|]&= \mathbb {E}_{(k,i)}[|p \frac{1}{2\pi N^{\frac{3}{5}}} \int _{Nt - m_{N}t}^{Nt} ds ~ \omega '(K(s))|] \\&\le \Vert \omega '\Vert _{\infty } t |p| m_{N} N^{-\frac{3}{5}} \rightarrow 0 \end{aligned}$$

where \( \Vert \omega '\Vert _{\infty }=\sup _{k} | \omega '(k)|\). By the Markov property

$$\begin{aligned}&\mathbb {E}_{(k,i)}[e^{\sqrt{-1} p Z_{N}(Nt - m_{N}t)} e^{\sqrt{-1} x K(Nt)} 1_{\{ I(Nt) = j \} }] \\&\quad = \mathbb {E}_{(k,i)}[e^{\sqrt{-1} p Z_{N}(Nt - m_{N}t)} \mathbb {E}_{(K(Nt - m_{N}t),I(Nt - m_{N}t))} [e^{\sqrt{-1} x K(m_{N}t)} 1_{\{ I(m_{N}t) = j \} }] ] . \end{aligned}$$

By Schwarz’s inequality,

$$\begin{aligned}&\big |\mathbb {E}_{(k,i)}[e^{\sqrt{-1} p Z_{N}(Nt - m_{N}t)} \mathbb {E}_{(K(Nt - m_{N}t),I(Nt - m_{N}t))} [e^{\sqrt{-1} x K(m_{N}t)} 1_{\{ I(m_{N}t) = j \} }] ] \nonumber \\&\qquad - \mathbb {E}_{(k,i)}[e^{\sqrt{-1} p Z_{N}(Nt - m_{N}t)} \mathbb {E}_{(K(Nt - m_{N}t),I(Nt - m_{N}t))}[ \frac{1}{2} \int _{{\mathbb {T}}} dk' e^{\sqrt{-1} x k'}]] \big | \nonumber \\&\quad \le \mathbb {E}_{(k,i)}[|\mathbb {E}_{(K(Nt - m_{N}t),I(Nt - m_{N}t))}[e^{\sqrt{-1} x K(m_{N}t)} 1_{\{ I(m_{N}t) = j \} } - \frac{1}{2} \int _{{\mathbb {T}}} dk' e^{\sqrt{-1} x k'}]|^{2}]^{\frac{1}{2}} . \end{aligned}$$
(7.1)

Let \(g(k,i) = e^{\sqrt{-1} x k} 1_{\{ j \} }(i) - \frac{1}{2} \int _{{\mathbb {T}}} dk' e^{\sqrt{-1} x k'}\). Since \(\frac{1}{2}dkdi\) is the reversible probability measure we have

$$\begin{aligned}&\int _{{\mathbb {T}}\times \{1,2 \}} dkdi ~ \mathbb {E}_{(k,i)}[|\mathbb {E}_{(K(Nt - m_{N}t),I(Nt - m_{N}t))}[e^{\sqrt{-1} x K(m_{N}t)}1_{\{ I(m_{N}t) = j \} } \\&\qquad - \frac{1}{2} \int _{{\mathbb {T}}} dk' e^{\sqrt{-1} x k'}]|^{2}] \\&\quad = ||P^{m_{N}t}g||_{\mathbb {L}^{2}({\mathbb {T}}\times \{ 1,2 \} )}^{2} . \end{aligned}$$
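Indeed, by the Markov property the inner conditional expectation equals \((P^{m_{N}t}g)\) evaluated at the state \((K(Nt - m_{N}t),I(Nt - m_{N}t))\), and averaging the initial condition \((k,i)\) over \(dkdi\) leaves the law of that state proportional to \(dkdi\) by invariance, so that

$$\begin{aligned} \int _{{\mathbb {T}}\times \{1,2 \}} dkdi ~ \mathbb {E}_{(k,i)}\big [ |(P^{m_{N}t}g)(K(Nt - m_{N}t),I(Nt - m_{N}t))|^{2} \big ] = \int _{{\mathbb {T}}\times \{1,2 \}} dkdi ~ |(P^{m_{N}t}g)(k,i)|^{2} . \end{aligned}$$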

Hence we conclude that (7.1) converges to 0 in \(\mathbb {L}^{2}({\mathbb {T}}\times \{ 1,2 \} )\) as \(N \rightarrow \infty \) since \(\int _{{\mathbb {T}}\times \{1,2 \}} dkdi ~ g(k,i) = 0\).

Summarizing the above and applying the dominated convergence theorem, we have

$$\begin{aligned}&\lim _{N \rightarrow \infty } \int _{{\mathbb {T}}\times \{1,2 \}} dk di \sum _{x \in {\mathbb {Z}}} \int _{{\mathbb {R}}} dp \sum _{j = 1,2} | {\widetilde{u}_{0}} (p,x,j) | \\&\qquad \times ~ |\mathbb {E}_{(k,i)}[e^{\sqrt{-1} p Z_{N}(Nt)} e^{\sqrt{-1} x K(Nt)} 1_{\{ I(Nt) = j \} } ] \\&\qquad - \mathbb {E}_{(k,i)}[e^{\sqrt{-1} p Z_{N}(Nt - m_{N}t)} \frac{1}{2} \int _{{\mathbb {T}}} dk' e^{\sqrt{-1} x k'}] |^{2} \\&\quad = 0. \end{aligned}$$

Note that

$$\begin{aligned}&\sum _{x \in {\mathbb {Z}}} \int _{{\mathbb {R}}} dp \sum _{j = 1,2} {\widetilde{u}_{0}} (p,x,j) \mathbb {E}_{(k,i)}[e^{\sqrt{-1} p Z_{N}(Nt - m_{N}t)} \frac{1}{2} \int _{{\mathbb {T}}} dk' e^{\sqrt{-1} x k'}] \\&\quad = \mathbb {E}_{(k,i)} [ \frac{1}{2} \int _{{\mathbb {T}}\times \{1,2 \}} dk' dj ~ u_{0}(Z_{N}(Nt - m_{N}t),k',j) ] \\&\quad = \frac{1}{2} \mathbb {E}_{(k,i)} [\bar{u}_0(Z_{N}(Nt - m_{N}t))]. \end{aligned}$$

By Theorem 3, \(Z_{N}(Nt - m_{N}t)\) converges to a Lévy process starting from \(y\) and generated by \(D (-\Delta _{y})^{\frac{5}{6}}\), and so \(\mathbb {E}_{(k,i)} [\bar{u}_0(Z_{N}(Nt - m_{N}t))]\) converges to \(\bar{u}(y,t)\) given in (4.4) for \(k \ne 0\), \(i=1,2\). Therefore,

$$\begin{aligned} \frac{1}{2}\mathbb {E}_{(k,i)} [\bar{u}_0(Z_{N}(Nt - m_{N}t))] \rightarrow \frac{1}{2}\bar{u}(y,t) \quad a.e. \end{aligned}$$

and by the dominated convergence theorem,

$$\begin{aligned}&\limsup _{N \rightarrow \infty } \int _{{\mathbb {T}}\times \{1,2 \}} dkdi | u_N(y,k,i,Nt) - \frac{1}{2}\bar{u}(y,t) |^{2} \\&\quad \le \limsup _{N \rightarrow \infty }\int _{{\mathbb {T}}\times \{1,2 \}} dkdi | u_N(y,k,i,Nt) - \frac{1}{2} \mathbb {E}_{(k,i)} [\bar{u}_0(Z_{N}(Nt - m_{N}t))] |^{2}. \end{aligned}$$

Expressing the last term via the Fourier transform and applying Schwarz’s inequality, it is bounded from above by

$$\begin{aligned}&\limsup _{N\rightarrow \infty } \left( \sum _{x \in {\mathbb {Z}}} \int _{{\mathbb {R}}} dp \sum _{j = 1,2} | {\widetilde{u}_{0}} (p,x,j) |\right) \\&\qquad \times \int _{{\mathbb {T}}\times \{1,2 \}} dkdi \sum _{x \in {\mathbb {Z}}} \int _{{\mathbb {R}}} dp \sum _{j = 1,2} | {\widetilde{u}_{0}} (p,x,j) | \\&\qquad \times |\mathbb {E}_{(k,i)}[e^{\sqrt{-1} p Z_{N}(Nt)} e^{\sqrt{-1} x K(Nt)} 1_{\{ I(Nt) = j \} } ] \\&\qquad - \mathbb {E}_{(k,i)}[e^{\sqrt{-1} p Z_{N}(Nt - m_{N}t)} \frac{1}{2} \int _{{\mathbb {T}}} dk' e^{\sqrt{-1} x k'}] |^{2} \end{aligned}$$

Since \(\sum _{x \in {\mathbb {Z}}} \int _{{\mathbb {R}}} dp \sum _{j = 1,2} | {\widetilde{u}_{0}} (p,x,j) |\) is finite and the remaining factor converges to 0 by the limit established above, the right-hand side vanishes and the proof is complete. \(\square \)