1 Introduction and Statement

Let \(\{\sigma _x(\omega ); x\in \mathbb Z^d\}\) be i.i.d. random variables with \(\mathbb E[\sigma _x]=0\), and assume moreover

$$\begin{aligned} \Vert \sigma _x\Vert _\infty \le C. \end{aligned}$$
(1.1)

Consider the finite difference random operator

$$\begin{aligned} L_\omega =-\Delta +\delta \nabla ^*\sigma \nabla \end{aligned}$$
(1.2)

where \(\nabla f(x) =(f(x+e_1)-f(x), f(x+e_2)-f(x), \ldots , f(x+e_d)-f(x))\). Here the \(e_i\) are the unit lattice vectors and \(f\) is defined on \(\mathbb Z^d\).
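Throughout, \(\nabla ^*\) denotes the \(\ell ^2(\mathbb Z^d)\)-adjoint of \(\nabla \) (a backward divergence). Explicitly, under the above conventions, for \(g=(g_1, \ldots , g_d):\mathbb Z^d \rightarrow \mathbb R^d\),

$$\begin{aligned} \nabla ^* g(x) =\sum ^d_{i=1}\big (g_i(x-e_i)-g_i(x)\big ), \qquad \nabla ^*\nabla f(x) =\sum ^d_{i=1}\big (2f(x)-f(x+e_i)-f(x-e_i)\big )=-\Delta f(x), \end{aligned}$$

so that \(L_\omega =\nabla ^*(1+\delta \sigma )\nabla \) is a lattice divergence-form operator.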

Consider the stochastic equation

$$\begin{aligned} L_\omega u_\omega =f. \end{aligned}$$
(1.3)

Let \(\langle \,\cdot \,\rangle \) denote the expectation. Formally we have

$$\begin{aligned} \mathbb E[u_\omega ]\equiv \langle u_\omega \rangle = \langle L_\omega ^{-1}\rangle f \ \text{ and } \ A \langle u_\omega \rangle =f \end{aligned}$$
(1.4)

with

$$\begin{aligned} A=\langle L_\omega ^{-1} \rangle ^{-1}. \end{aligned}$$
(1.5)
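Here and below we fix the Fourier normalization \(\hat{f}(\xi )=\sum _{x\in \mathbb Z^d} f(x)e^{-ix\cdot \xi }\) for \(\xi \in [-\pi , \pi ]^d\) (the statements are independent of this choice), under which

$$\begin{aligned} \widehat{\nabla _j f}(\xi )=(e^{i\xi _j}-1)\hat{f}(\xi ), \qquad \widehat{(-\Delta ) f}(\xi )=\Big (\sum ^d_{j=1} 4\sin ^2(\xi _j/2)\Big )\hat{f}(\xi ). \end{aligned}$$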

Since the \(\sigma _x\) are i.i.d., the law of \(L_\omega \) is invariant under lattice translations; hence A is translation invariant and can be expressed as a multiplication operator \(\hat{A}(\xi )\) in Fourier space. We prove the following result about the regularity of \(\hat{A}(\xi )\).

Theorem

With the above notation, given \(\varepsilon >0\), there is \(\delta _0>0\) such that for \(|\delta | <\delta _0\), A has the form

$$\begin{aligned} A =\nabla ^* (1+K_1) \nabla \end{aligned}$$
(1.6)

with \(K_1\) given by a convolution operator such that \(\hat{K}_1 (\xi )\) has \(d-\varepsilon \) derivatives; more precisely,

$$\begin{aligned} K_1 (x-y)= O(\delta [1+|x-y|]^{-(2d-\varepsilon )}) \end{aligned}$$
(1.7)

for \(x,y \in \mathbb Z^d \) .
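The two formulations are related in the standard way: assuming (1.7), for any multi-index \(\alpha \) with \(|\alpha |<d-\varepsilon \),

$$\begin{aligned} \partial ^\alpha _\xi \hat{K}_1(\xi )=\sum _{x\in \mathbb Z^d} K_1(x)(-ix)^\alpha e^{-ix\cdot \xi } \quad \text{ with } \quad \sum _{x\in \mathbb Z^d} |x|^{|\alpha |}\, [1+|x|]^{-(2d-\varepsilon )}<\infty , \end{aligned}$$

so the differentiated series converges absolutely and uniformly; this is the sense in which \(\hat{K}_1(\xi )\) has \(d-\varepsilon \) derivatives.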

Remark

(1) In the general case when \(\sigma _x\) defines an ergodic process, homogenization was developed by Kozlov [3] and by Papanicolaou and Varadhan [5]. However, the regularity of \(K_1\) was not addressed in these papers. See [2] for a review of results in homogenization.

(2) This paper is closely related to an unpublished note of Sigal [6], where exactly the same problem is considered. In [6] an asymptotic expansion for \(K_1\) is given and (1.7) is verified to leading order by applying the Feshbach–Schur formula. What we manage to do here, basically, is to control the full series. The argument is rather simple, but perhaps contains some novel ideas that may be of independent interest in the study of the averaged dynamics of stochastic PDEs.

(3) In Bach, Fröhlich and Sigal [1], a multi-scale version of the Feshbach–Schur method was used to study an atom coupled to an electromagnetic field.

(4) In the context of homogenization, the same formalism was developed by J. Conlon and A. Naddaf in [4], where some regularity of \(K_1\) is proved under certain mixing conditions.

(5) It is an open question whether the same strong regularity holds assuming only that \(|\delta | <1\).

(6) The author is grateful to T. Spencer for bringing the problem to his attention and for a few preliminary discussions. Thanks also to the referee and to W. Schlag for clarifying the exposition. The author has also benefited from comments of A. Gloria.

2 The Expansion

We briefly recall the derivation of the multi-linear expansion for \(K_1\) established in [6]. Denote \(b=\delta \sigma \), \(P=\mathbb E\), \(P^\bot = 1-\mathbb E\). Applying the Feshbach–Schur map to the block decomposition

$$\begin{aligned} \begin{pmatrix} (P, P) & (P,P^\bot )\\ (P^\bot ,P) & (P^\bot , P^\bot ) \end{pmatrix} \end{aligned}$$

we obtain

$$\begin{aligned} PL^{-1}P= \big (PLP-PLP^\bot (P^\bot LP^\bot -i0)^{-1} P^\bot LP \big )^{-1}. \end{aligned}$$
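This is the upper-left block of the general Schur complement (Feshbach) identity: for a \(2\times 2\) block operator with invertible lower-right corner (generic blocks \(\mathcal A, \mathcal B, \mathcal C, \mathcal D\), not to be confused with the operator \(A\) of (1.5)),

$$\begin{aligned} \begin{pmatrix} \mathcal A & \mathcal B\\ \mathcal C & \mathcal D \end{pmatrix}^{-1} = \begin{pmatrix} (\mathcal A -\mathcal B \mathcal D^{-1}\mathcal C)^{-1} & *\\ * & * \end{pmatrix}, \end{aligned}$$

applied here with \(\mathcal A = PLP\) and \(\mathcal D = P^\bot LP^\bot \).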

Since \(PLP=-\Delta P\) (using \(\mathbb E[\sigma _x]=0\)), \(PLP^\bot =P\nabla ^*b\nabla P^\bot \), \(P^\bot LP= P^\bot \nabla ^* b\nabla P\), we obtain

$$\begin{aligned} (-\Delta P -\nabla ^* Pb\nabla (P^\bot LP^\bot )^{-1} \nabla ^*b \nabla P)^{-1}. \end{aligned}$$
(2.1)

Next, \(P^\bot LP^\bot = (-\Delta )\big ( 1+ (-\Delta )^{-1} \nabla ^*P^\bot b\nabla \big )P^\bot \) and we expand

$$\begin{aligned} (P^\bot LP^\bot )^{-1} =\Big [1-(-\Delta )^{-1} \nabla ^* P^\bot b\Big (\sum _{n\ge 0} (-1)^n(KP^\bot b)^n\Big ) \nabla P^\bot \Big ] (-\Delta )^{-1} \end{aligned}$$
(2.2)

where \(K\) denotes the singular convolution operator

$$\begin{aligned} K=\nabla (-\Delta )^{-1} \nabla ^*. \end{aligned}$$
(2.3)
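For the reader's convenience we record two standard facts used below. First, (2.2) is the resummed Neumann series, convergent for \(\delta \) small (so that \(KP^\bot b\) is a contraction):

$$\begin{aligned} \big (1+(-\Delta )^{-1}\nabla ^* P^\bot b\nabla \big )^{-1}&=\sum _{m\ge 0}(-1)^m \big ((-\Delta )^{-1}\nabla ^* P^\bot b\nabla \big )^m\\&=1-(-\Delta )^{-1}\nabla ^* P^\bot b\Big (\sum _{n\ge 0}(-1)^n (KP^\bot b)^n\Big )\nabla , \end{aligned}$$

using \(\big ((-\Delta )^{-1}\nabla ^* P^\bot b\nabla \big )^m=(-\Delta )^{-1}\nabla ^* P^\bot b (KP^\bot b)^{m-1}\nabla \) for \(m\ge 1\). Second, in the Fourier normalization of Sect. 1, the matrix symbol of \(K\) is

$$\begin{aligned} \hat{K}_{jk}(\xi )=\frac{(e^{i\xi _j}-1)\overline{(e^{i\xi _k}-1)}}{\sum ^d_{l=1}|e^{i\xi _l}-1|^2}, \end{aligned}$$

a bounded multiplier, smooth away from \(\xi =0\) and asymptotically homogeneous of degree zero at \(\xi =0\); hence \(K\) is a Calderón–Zygmund operator on \(\mathbb Z^d\), with kernel decay \(|K(z)|\lesssim (1+|z|)^{-d}\), which underlies the bounds used in Sect. 3.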

Substitution of (2.2) in (2.1) gives

$$\begin{aligned} \langle \nabla ^* b\nabla (P^\bot LP^\bot )^{-1} \nabla ^* b\nabla \rangle =\sum _{n\ge 1}(-1)^{n+1} \nabla ^* \langle b(KP^\bot b)^n\rangle \nabla . \end{aligned}$$
(2.4)

Hence

$$\begin{aligned} \langle L_\omega ^{-1}\rangle =\big (-\Delta - (2.4)\big )^{-1} \end{aligned}$$

and

$$\begin{aligned} A=-\Delta - (2.4) =\nabla ^* \big (1+K_1\big )\nabla \end{aligned}$$

with

$$\begin{aligned} K_1=\sum _{n\ge 1} (-1)^{n} \langle b(KP^\bot b)^n\rangle . \end{aligned}$$
(2.5)

It remains to analyze the individual terms in (2.5); note that, since \(b=\delta \sigma \), the \(n\)-th term carries a factor \(\delta ^{n+1}\), which will provide convergence of the series for \(|\delta |<\delta _0\). In doing so, without loss of generality, we treat \(K\) as a scalar singular integral operator.

3 A Deterministic Inequality

Our first ingredient in controlling the multi-linear terms in the series (2.5) is the following (deterministic) bound on composing singular integral and multiplication operators.

Lemma 1

Let K be a (convolution) singular integral operator acting on \(\mathbb Z^d\) and \(\sigma _1, \ldots , \sigma _s \in \ell ^\infty (\mathbb Z^d)\). Define the operator

$$\begin{aligned} T = K\sigma _1 K\sigma _2 \cdots K\sigma _s. \end{aligned}$$
(3.1)

Then T satisfies the pointwise bound

$$\begin{aligned} |T(x_0, x_s)|<|x_0-x_s|^{-d+\varepsilon } (C\varepsilon ^{-1})^s \prod ^s_1 \Vert \sigma _j\Vert _\infty \end{aligned}$$
(3.2)

for all \(\varepsilon >0\).

Proof

Firstly, recalling the well-known bound

$$\begin{aligned} \Vert K\Vert _{p\rightarrow p}< \frac{c}{p-1} \ \text{ for } \ 1< p\le 2 \end{aligned}$$
(3.3)

and normalizing \(\Vert \sigma _j\Vert _\infty =1\), we get

$$\begin{aligned} \Vert T\Vert _{p\rightarrow p} +\Vert T^*\Vert _{p\rightarrow p}< \Big (\frac{c}{p-1}\Big )^s. \end{aligned}$$
(3.4)

In particular, applying \(T\) and \(T^*\) to the unit coordinate vectors \(\delta _y, \delta _x\in \ell ^p(\mathbb Z^d)\),

$$\begin{aligned} \max _x \Big (\sum _y |T(x, y)|^p\Big )^{\frac{1}{p}}+\max _y \Big (\sum _x|T(x, y)|^p\Big )^{\frac{1}{p}} <\Big (\frac{c}{p-1}\Big )^s. \end{aligned}$$
(3.5)

Next, write

$$\begin{aligned} T_s (x_0, x_s)=\sum _{x_1, \ldots , x_{s-1}} K(x_0-x_1)\sigma _1(x_1) K(x_1-x_2)\sigma _2(x_2) \cdots K(x_{s-1} -x_s) \sigma _s (x_s).\qquad \end{aligned}$$
(3.6)

We use a dyadic decomposition according to \(\max _{0\le j < s}|x_j-x_{j+1}|\). Specify \(R\gg 1\) and \(0\le i<s\) satisfying

$$\begin{aligned} |x_i -x_{i+1}| \sim R \end{aligned}$$
(3.7)
$$\begin{aligned} \max _j|x_j -x_{j+1}|\lesssim R. \end{aligned}$$
(3.8)

In particular \(|x_0 -x_s|\lesssim sR\). The corresponding contribution to (3.6) may be bounded by

$$\begin{aligned} \sum _{\begin{array}{c} x_i, x_{i+1}\\ |x_i -x_{i+1}|\sim R \end{array}} |T^{(*)}_{i} (x_0, x_i)| \ |K(x_i-x_{i+1})| \ | T^{(*)}_{s-1-i}(x_{i+1},x_s)| \end{aligned}$$
(3.9)

with \(T^{(*)}_i\) obtained from formula (3.6) with the additional restriction (3.8). The bound (3.5) also holds for \(T^{(*)}_i\). Since \(|K(z)|< |z|^{-d}\) (here and below we use the convention that \(|z|\) stands for \(1+|z|\), so that the bound makes sense at \(z=0\)), it follows from (3.5), (3.7), (3.8) and Hölder's inequality that

$$\begin{aligned} (3.9)&\le \Big (\frac{c}{p-1}\Big )^s \Big (\sum \limits _{x_i, x_{i+1}, |x_0-x_i|< sR, |x_0-x_{i+1}|<s R} 1\Big )^{1/p'} R^{-d} \quad \Big (p'=\frac{p}{p-1}\Big )\nonumber \\&< \Big (\frac{c}{p-1}\Big )^s (sR)^{2d(p-1)} R^{-d} < (C\varepsilon ^{-1})^s R^{-d+\varepsilon } \end{aligned}$$
(3.10)

by taking p such that \(2d(p-1)=\varepsilon \). Then

$$\begin{aligned} \sum _{0\le i<s} \sum _{R\in 2^{\mathbb {N}},\; R \gtrsim |x_0-x_s|/s} (3.10) < s^{d+1} (C\varepsilon ^{-1})^s |x_0- x_s|^{-d+\varepsilon } \end{aligned}$$

proving (3.2).\(\square \)
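In the final display we used, with \(r=|x_0-x_s|\), the geometric estimate

$$\begin{aligned} \sum _{R\in 2^{\mathbb N},\; R\gtrsim r/s} R^{-d+\varepsilon }\le C\, (r/s)^{-d+\varepsilon }\le C\, s^d\, r^{-d+\varepsilon }, \end{aligned}$$

together with the factor \(s\) from the choice of \(i\); this accounts for the power \(s^{d+1}\).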

4 Use of the Randomness

Returning to (2.5), the randomness and the projectors will allow us to further improve the pointwise bounds on \(\langle b(KP^\bot b )^n\rangle \).

Write

$$\begin{aligned} b(KP^\bot b)^n (x_0, x_n)=\sum _{x_1, \ldots , x_{n-1}\in \mathbb Z^d} b_{x_0} K(x_0, x_1)P^\bot b_{x_1} K(x_1, x_2)P^\bot b_{x_2}\ldots b_{x_n}. \end{aligned}$$
(4.1)

Note that evaluation of \(\langle b(KP^\bot b)^n\rangle \) by summation over all diagrams would produce combinatorial factors growing more rapidly than \(C^n\) and hence we need to proceed differently.

Let again \(R\gg 1\) and \(0\le j_0<n\) be such that

$$\begin{aligned} |x_{j_0} - x_{j_0+1}|\sim R \ \text{ and } \ \max _{0\le j< n} |x_j-x_{j+1}|\lesssim R. \end{aligned}$$
(4.2)

We denote

$$\begin{aligned} S=\big \{ (x_1, \ldots , x_{n-1})\in (\mathbb Z^d)^{n-1} \ \text{ subject to (4.2) such that } \ \{x_0, \ldots , x_{j_0}\}\cap \{x_{j_0+1}, \ldots , x_n\}\not = \phi \big \} \end{aligned}$$
(4.3)

\(\mathbb E[(4.1)]\) only involves the irreducible graphs in (4.1), due to the presence of the projection operators \(P^\bot \). This means that

$$\begin{aligned} \mathbb E[ b_{x_0} P^\bot b_{x_1} P^\bot b_{x_2}\cdots P^\bot b_{x_n}] = 0 \end{aligned}$$

whenever there exists some \(0\le j<n\) such that \(\{x_0, \ldots , x_{j}\}\cap \{x_{j+1}, \ldots , x_n\} = \phi \). From the preceding, it follows in particular that

$$\begin{aligned} \mathbb E[(4.1)]=\mathbb E[(4.4)] \end{aligned}$$

defining

$$\begin{aligned} (4.4) =\sum _{(x_1, \ldots , x_{n-1})\in S} b_{x_0} K(x_0, x_1) P^\bot b_{x_1} \ldots b_{x_n}. \end{aligned}$$

Our goal is to prove

Lemma 2

For all \(\varepsilon >0\), we have

$$\begin{aligned} |\mathbb E[(4.4)]|<C_\varepsilon ^n R^{-d+4\varepsilon } |x_0-x_n|^{-d} \end{aligned}$$
(4.5)

which clearly implies the Theorem.
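To spell out the implication (a sketch of the bookkeeping, with the factor \(\Vert b\Vert _\infty ^{n+1}\le (C|\delta |)^{n+1}\) now kept track of): since \(\max _j |x_j-x_{j+1}|\ge |x_0-x_n|/n\), summing (4.5) over dyadic \(R\gtrsim |x_0-x_n|/n\) and over the at most \(n\) choices of \(j_0\) gives

$$\begin{aligned} |\langle b(KP^\bot b)^n\rangle (x_0, x_n)| < n^{d+1}\, (C_\varepsilon |\delta |)^{n+1}\, [1+|x_0-x_n|]^{-(2d-4\varepsilon )}, \end{aligned}$$

and summation over \(n\ge 1\) in (2.5), convergent for \(|\delta |<\delta _0(\varepsilon )\), yields (1.7) after renaming \(4\varepsilon \) as \(\varepsilon \).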

From definition (4.3),

$$\begin{aligned} S= \bigcup _{\begin{array}{c} 0\le j_1\le j_0\\ j_0<j_2\le n \end{array}} S_{j_1, j_2} \end{aligned}$$

where

$$\begin{aligned} S_{j_1, j_2}= \{(x_1,\ldots , x_{n-1})\in (\mathbb Z^d)^{n-1} \text{ subject } \text{ to } \text{(4.2) } \text{ and } x_{j_1}=x_{j_2}\}. \end{aligned}$$
(4.6)

Note that these sets \(S_{j_1, j_2}\) are not disjoint and we will show later how to make them disjoint at the cost of another factor \(C^n\).

Consider the sum

$$\begin{aligned} \sum _{(x_1, \ldots , x_{n-1})\in S_{j_1, j_2}} b_{x_0} K(x_0, x_1) P^\bot b_{x_1} \cdots b_{x_n}= (4.7). \end{aligned}$$

We claim that for all \(\varepsilon >0\)

$$\begin{aligned} |(4.7)|< C_\varepsilon ^n R^{-d+4\varepsilon } |x_0-x_n|^{-d} \end{aligned}$$
(4.8)

(thus without taking expectation).

To prove (4.8), factor (4.7) as

$$\begin{aligned}&(KP^\bot b)^{j_1}(x_0, x_{j_1})(KP^\bot b)^{j_0-j_1}(x_{j_1}, x_{j_0}) K(x_{j_0}, x_{j_0+1}) P^\bot b_{x_{j_0+1}},\nonumber \\&(KP^\bot b)^{j_2-j_0} (x_{j_0+1}, x_{j_1}) (KP^\bot b)^{n-j_2-1}(x_{j_1}, x_n) \end{aligned}$$
(4.9)

with summation over \(x_{j_0}, x_{j_0+1}, x_{j_1}\).

Using the deterministic bound implied by Lemma 1

$$\begin{aligned} |(KP^\bot b)^\ell (x, y)|< C_\varepsilon ^\ell |x-y|^{-d+\varepsilon } \end{aligned}$$
(4.10)

we may indeed estimate (noting that, by (4.2), all summation variables range in balls of radius \(\lesssim nR\) and \(|x_0-x_n|\lesssim nR\))

$$\begin{aligned} |(4.7)|&< R^{-d}\, C_\varepsilon ^n \sum _{x_{j_0}, x_{j_0+1}, x_{j_1}} |x_0-x_{j_1}|^{-d+\varepsilon } |x_{j_1}-x_{j_0}|^{-d+\varepsilon } |x_{j_0+1} -x_{j_1}|^{-d+\varepsilon } |x_{j_1}-x_n|^{-d+\varepsilon }\\ &< C_\varepsilon ^n R^{-d+4\varepsilon } |x_0-x_n|^{-d}. \end{aligned}$$

There remains the issue of making the sets \(S_{j_1, j_2}\) disjoint.

Our device to achieve this may be of independent interest. Define the disjoint sets (retaining for each configuration only its lexicographically minimal intersection pair \((j_1, j_2)\))

$$\begin{aligned} S_{j_1, j_2}' =S_{j_1, j_2} \Big \backslash \Big (\bigcup _{\begin{array}{c} j<j_1\\ j_0< j'\le n \end{array}} S_{j, j'} \ \cup \ \bigcup _{j_0< j'<j_2} S_{j_1, j'}\Big ). \end{aligned}$$
(4.11)

Replacing \(S_{j_1, j_2}\) by \(S_{j_1, j_2}'\) in (4.7), we prove that the bound (4.8) is still valid.

Note that, by definition, \((x_1, \ldots , x_{n-1})\not \in \bigcup \limits _{\begin{array}{c} j<j_1\\ j_0<j'\le n \end{array}} S_{j, j'}\) means that

$$\begin{aligned} \{x_0, \ldots , x_{j_1-1}\} \cap \{x_{j_0+1}, \dots , x_n\}=\phi . \end{aligned}$$
(4.12)

Thus we need to implement the condition (4.12) in the summation (4.7) at the cost of a factor bounded by \(C^n\).

We introduce an additional set of variables \(\bar{\theta }=(\theta _x)_{x\in \mathbb Z^d}, \theta _x\in \mathbb T=\mathbb R/2\pi \mathbb Z\) and consider the corresponding Steinhaus system. Denote \(E=\{0, 1, \ldots , j_1-1\}\), \(F=\{j_{0}+ 1, \ldots , n\}\). Replace in (4.7)

$$\begin{aligned} {\left\{ \begin{array}{ll} b_{x_j} \ \text{ by } \ b_{x_j} e^{i\theta _{x_j}} \ \text{ for } \ j\in E\\ b_{x_j} \ \text{ by } \ b_{x_j} e^{-i\theta _{x_j}} \ \text{ for } \ j\in F. \end{array}\right. } \end{aligned}$$
(4.13)

After this replacement, (4.7) becomes a Steinhaus polynomial in \(\bar{\theta }\), i.e. we obtain

$$\begin{aligned} \sum _{(x_1, \ldots , x_{n-1})\in S_{j_1, j_2}} \ e^{i(\sum _{j\in E} \theta _{x_j}-\sum _{k\in F} \theta _{x_k})} b_{x_0} K(x_0, x_1) P^\bot b_{x_1}\ldots b_{x_n} \end{aligned}$$
(4.14)

for which the estimate (4.8) still holds (uniformly in \(\bar{\theta }\)).

Next, performing a convolution with the Poisson kernel \(P_t(\theta _x) = \sum _{m\in \mathbb {Z}} t^{|m|}e^{im\theta _x}\) in each variable \(\theta _x\) (which is a contraction) gives

$$\begin{aligned}&\int (4.14) \prod _x P_t(\theta _x'-\theta _x) \frac{d\theta _x}{2\pi }\nonumber \\&\quad =\sum \limits _{(x_1, \ldots , x_{n-1})\in S_{j_1, j_2}} t^{w_{\bar{x}}} e^{i(\sum _{j\in E}\theta _{x_j}' -\sum _{k\in F} \theta _{x_k}')} b_{x_0} K(x_0, x_1)P^\bot \cdots b_{x_n} \end{aligned}$$
(4.15)

where \(-1\le t\le 1\) (for \(t<0\), note that \(P_t(\theta )=P_{|t|}(\theta +\pi )\), so convolution with \(P_t\) remains a contraction) and

$$\begin{aligned} w_{\bar{x}} =\sum _x \big | \, |\{j\in E; x_j=x\}| - |\{k\in F; x_k =x\}|\big | \le |E|+|F|=D. \end{aligned}$$

Note that the condition \(\{x_j; j\in E\}\cap \{x_k; k\in F\}=\phi \) is equivalent to \(w_{\bar{x}}=D\), and the correspondingly restricted sum in (4.14) is obtained by projecting (4.15), viewed as a polynomial in \(t\), onto its top-degree term. Our argument is then concluded by the classical Markov brothers' inequality.

Lemma 3

Let P(t) be a polynomial of degree \(\le D\). Then

$$\begin{aligned} \max _{-1\le t\le 1} |P^{(k)} (t)|\le \frac{D^2(D^2-1^2)(D^2-2^2)\cdots (D^2-(k-1)^2)}{1\cdot 3\cdot 5\cdots (2k-1)} \max _{-1\le t\le 1}|P(t)|. \end{aligned}$$
(4.16)
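Only the case \(k=D\) is needed here; the constant in (4.16) then evaluates to

$$\begin{aligned} \frac{D^2(D^2-1^2)\cdots (D^2-(D-1)^2)}{1\cdot 3\cdot 5\cdots (2D-1)}=\frac{D!\, \frac{(2D-1)!}{(D-1)!}}{\frac{(2D)!}{2^D D!}}=2^{D-1} D!, \end{aligned}$$

so that if \(P(t)=\sum _{k\le D} a_k t^k\), then \(|a_D|=|P^{(D)}|/D!\le 2^{D-1}\max _{-1\le t\le 1}|P(t)|\).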

Indeed, since (4.15) is a polynomial in \(t\) of degree at most \(D\le n\), bounded on \([-1, 1]\) by the right-hand side of (4.8), its top-degree coefficient is at most \(2^{D-1}\) times that bound. We conclude that for all \(\bar{\theta }\)

$$\begin{aligned} \Big |\sum _{\begin{array}{c} (x_1, \ldots , x_{n-1})\in S_{j_1, j_2}\\ w_{\bar{x}}=D \end{array}} e^{i\big (\sum _{j\in E}\theta _{x_j} -\sum _{k\in F}\theta _{x_k}\big )} b_{x_0} K(x_0, x_1)P^\bot \ldots b_{x_n}\Big | <C^n\, R^{-d+4\varepsilon } |x_0-x_n|^{-d}, \end{aligned}$$

i.e. (4.8) persists with a larger constant. Setting \(\bar{\theta }=0\) then completes the argument. \(\square \)