1 Setting of the Problem

Our general concern is the computation of the effective viscosity generated by a suspension of N particles in a fluid flow. We consider spherical particles of small radius a, centered at \(x_{i,N}\), with \(N \geqq 1\) and \(1\leqq i\leqq N\). To lighten the notation, we write \(x_i\) instead of \(x_{i,N}\), and \(B_i = B(x_i, a)\). We assume that the Reynolds number of the fluid flow is small, so that the fluid velocity is governed by the Stokes equation. Moreover, the particles are assumed to be force- and torque-free. If \(\mathcal {F} = \mathbb {R}^3 \setminus (\cup _i B_i)\) denotes the fluid domain, the governing equations are

$$\begin{aligned} \left\{ \begin{aligned} -\mu \Delta u + {\nabla }p&= 0, \quad x \in \mathcal {F} , \\ \hbox {div}~u&= 0, \quad x \in \mathcal {F} , \\ u\vert _{B_i}&= u_i + \omega _i \times (x-x_i), \end{aligned} \right. \end{aligned}$$
(1.1)

where \(\mu \) is the fluid viscosity, while the constant vectors \(u_i\) and \(\omega _i\) are Lagrange multipliers associated with the constraints

$$\begin{aligned} \begin{aligned} \int _{{\partial }B_i} \sigma _\mu (u,p) n\, \mathrm{d}s&= 0, \quad&\int _{{\partial }B_i} \sigma _\mu (u,p) n \times (x-x_i) \, \mathrm{d}s = 0. \end{aligned} \end{aligned}$$
(1.2)

Here, \(\sigma _\mu (u,p) := 2\mu D(u) - p I\) is the usual Cauchy stress tensor. The boundary condition at infinity will be specified later on.

We are interested in a situation where the number of particles is large, \(N \gg 1\). We want to understand the additional viscosity created by the particles. Ideally, our goal is to replace the viscosity coefficient \(\mu \) in (1.1) by an effective viscosity tensor \(\mu '\) that would encode the average effect induced by the particles. Note that such a replacement can only make sense in the flow region \(\mathcal {O}\) in which the particles are densely distributed. For instance, a finite number of isolated particles will not contribute to the effective viscosity, and should not be taken into account in \(\mathcal {O}\). The selection of the flow region is formalized through the following hypothesis on the empirical measure:

$$\begin{aligned} \begin{aligned}&\delta _N = \frac{1}{N} \sum _{i=1}^N \delta _{x_i} \xrightarrow [N \rightarrow +\infty ]{} f(x) \mathrm{d}x \quad \text {weakly,} \\&support(f) = \overline{\mathcal {O}}, \quad \mathcal {O}\text { smooth, bounded and open}, \, f\vert _\mathcal {O}\in C^1(\overline{\mathcal {O}}). \end{aligned} \end{aligned}$$
(H1)

Note that we do not ask for regularity of the limit density f over \(\mathbb {R}^3\), but only of its restriction to \(\mathcal {O}\). Hence, our assumption covers the important case \(f = \frac{1}{|\mathcal {O}|} 1_{\mathcal {O}}\).

We investigate the classical regime of dilute suspensions, in which the solid volume fraction

$$\begin{aligned} \phi = \frac{4}{3}N \pi a^3/|\mathcal {O}| \end{aligned}$$
(1.3)

is small, but independent of N. In addition to (H1), we make the separation hypothesis

$$\begin{aligned} \min _{i\ne j} |x_i - x_j| \geqq c N^{-1/3} \, \text { for some constant } c > 0\text { independent of } N. \end{aligned}$$
(H2)

Let us stress that (H2) is compatible with (H1) only if the \(L^\infty \) norm of f is small enough (roughly less than \(1/c^3\)), which in turn forces \(\mathcal {O}\) to be large enough.
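Let us sketch the argument behind this remark: by (H2), the balls \(B(x_i, \frac{c}{2} N^{-1/3})\) are disjoint, so that for any fixed ball \(B(x,r) \subset \mathcal {O}\),

$$\begin{aligned} \# \{ i, \: x_i \in B(x,r) \} \, \leqq \, \frac{|B(x, r + \frac{c}{2} N^{-1/3})|}{|B(0,\frac{c}{2} N^{-1/3})|} \, \leqq \, C \, \frac{N r^3}{c^3} \quad \text {for N large,} \end{aligned}$$

while (H1) gives \(\# \{ i, \: x_i \in B(x,r) \} \approx N \int _{B(x,r)} f \approx N f(x) |B(x,r)|\) for r small. Comparing the two counts yields \(\Vert f \Vert _{L^\infty (\mathcal {O})} \leqq C/c^3\).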

Our hope is to replace a model of type (1.1) by a model of the form

$$\begin{aligned} \left\{ \begin{aligned} -\mu \Delta u + {\nabla }p&= 0, \quad \hbox {div}~u = 0, \quad x \in \mathbb {R}^3 \setminus \mathcal {O}, \\ -2 \hbox {div}(\mu ' D(u')) + {\nabla }p'&= 0, \quad \hbox {div}~u' = 0, \quad x \in \mathcal {O}, \end{aligned} \right. \end{aligned}$$
(1.4)

with the usual continuity conditions on the velocity and the stress

$$\begin{aligned} u = u' \quad \text {at } \, {\partial }\mathcal {O}, \quad \sigma _\mu (u,p)n= \sigma _{\mu '}(u',p')n \quad \text {at } \, {\partial }\mathcal {O}. \end{aligned}$$
(1.5)

A priori, \(\mu '\) could be inhomogeneous (and should be if the density f seen above is itself non-constant over \(\mathcal {O}\)). It could also be anisotropic, if the cloud of particles favours some direction. With this in mind, it is natural to look for \(\mu ' = \mu '(x)\) as a general 4-tensor, with \(\sigma ' = 2 \mu ' D(u)\) given in coordinates by \(\sigma '_{ij} = \mu '_{ij kl} D(u)_{kl}\). By classical considerations from mechanics, \(\mu '\) should satisfy the relations

$$\begin{aligned} \mu '_{ij kl} = \mu '_{jikl} = \mu '_{jilk} = \mu '_{lkji}; \end{aligned}$$

namely, \(\mu '\) should define a symmetric isomorphism over the space of \(3\times 3\) symmetric matrices.

As we consider a situation in which \(\phi \) is small, we may expect \(\mu '\) to be a small perturbation of \(\mu \), and hopefully admit an expansion in powers of \(\phi \):

$$\begin{aligned} \mu ' = \mu \mathrm{Id}+ \phi \mu _1 + \phi ^2 \mu _2 + \dots + \phi ^k \mu _k + o(\phi ^k). \end{aligned}$$
(1.6)

The main mathematical questions are:

  • Can solutions \(u_N\) of (1.1)–(1.2) be approximated by solutions \(u_{eff} = 1_{\mathbb {R}^3\setminus \mathcal {O}} u + 1_{\mathcal {O}} u'\) of (1.4)–(1.5), for an appropriate choice of \(\mu '\) and an appropriate topology?

  • If so, does \(\mu '\) admit an expansion of type (1.6), for some k?

  • If so, what are the values of the viscosity coefficients \(\mu _i\), \(1 \leqq i \leqq k\)?

Let us stress that, in most articles about the effective viscosity of suspensions, it is implicitly assumed that the first two questions have a positive answer, at least for \(k=1\) or 2. In other words, the existence of an effective model is taken for granted, and the point is then to answer the third question, or at least to determine the mean values

$$\begin{aligned} \nu _i := \frac{1}{|\mathcal {O}|} \int _\mathcal {O}\mu _i(x) \mathrm{d}x \end{aligned}$$
(1.7)

of the viscosity coefficients. As we will see in Section 2, these mean values can be determined from the asymptotic behaviour of some integral quantities \(\mathcal {I}_N\) as \(N \rightarrow +\infty \). These integrals involve the solutions \(u_N\) of (1.1)–(1.2) with condition at infinity

$$\begin{aligned} \lim _{|x| \rightarrow +\infty } u(x) - S x = 0, \end{aligned}$$
(1.8)

where S is an arbitrary symmetric trace-free matrix.

The effective viscosity problem for dilute suspensions of spherical particles has a long history, mostly focused on the first order correction created by the suspension, that is \(k=1\) in (1.6). The pioneering work on this problem was due to Einstein [15], not to mention earlier contributions on the similar conductivity problem by Maxwell [29], Clausius [11], Mossotti [32]. Einstein’s celebrated formula,

$$\begin{aligned} \mu ' = \mu + \frac{5}{2} \phi \mu + o(\phi ), \end{aligned}$$
(1.9)

was derived under the assumption that the particles are homogeneously and isotropically distributed, and neglecting the interactions between particles. In other words, the correction \(\mu _1 = \frac{5}{2} \mu \) is obtained by summing N times the contribution of one spherical particle to the effective stress. Einstein’s calculation will be recalled in Section 2. It was later extended to the case of an inhomogeneous suspension by Almog and Brenner [1, p. 16], who found that

$$\begin{aligned} \mu _1 = \frac{5}{2} |\mathcal {O}| f(x) \mu . \end{aligned}$$
(1.10)

The mathematical justification of formula (1.9) came much later. As far as we know, the first step in this direction was due to Sanchez-Palencia [38] and Levy and Sanchez-Palencia [28], who recovered Einstein’s formula from homogenization techniques, when the suspension is periodically distributed in a bounded domain. Another justification, based on variational principles, is due to Haines and Mazzucato [19]. They also consider a periodic array of spherical particles in a bounded domain \(\Omega \), and define the viscosity coefficient of the suspension in terms of the energy dissipation rate:

$$\begin{aligned} \mu _N = \frac{\mu }{ |S|^2} \int _{\mathcal {F}} |D(u_N)|^2, \end{aligned}$$

where \(u_N\) is the solution of (1.1)–(1.2)–(1.8), replacing \(\mathbb {R}^3\) by \(\Omega \). Their main result is that

$$\begin{aligned} \mu _N = \mu + \frac{5}{2} \phi \mu + O(\phi ^{3/2}). \end{aligned}$$

For preliminary results in the same spirit, see Keller-Rubenfeld [27]. Finally, a recent work [21] by the second author and Di Wu shows the validity of Einstein’s formula under general assumptions of type (H1)–(H2). See also [33] for a similar recent result.

Our goal in the present paper is to go beyond this famous formula, and to study the second order correction to the effective viscosity, that is \(k=2\) in (1.6). Results on this problem have split so far into two settings: periodic distributions, and random distributions of spheres. Many different formulas have emerged in the literature, after analytical, numerical and experimental studies. In the periodic case, one can refer to the works [2, 34, 37, 42], or to the more recent work [2], dedicated to the case of spherical inclusions of another Stokes fluid with viscosity \(\tilde{\mu }\ne \mu \). Still, in the simple case of a primitive cubic lattice, the expressions for the second order correction differ. In the random case, the most renowned analysis is due to Batchelor and Green [5], who consider a homogeneous and stationary distribution of spheres, and express the correction \(\mu _2\) as an ensemble average that involves the N-point correlation function of the process. As pointed out by Batchelor and Green, the natural idea when investigating the effective viscosity up to \(O(\phi ^2)\) is to replace the N-point correlation function by the two-point correlation function, but this leads to a divergent integral. To overcome this difficulty, Batchelor and Green introduce what they call a renormalization technique, which had been used earlier by Batchelor to determine the sedimentation speed of a dilute suspension. After further analysis of the expression of the two-point correlation function of spheres in a Stokes flow [6], completed by numerical computations, they claim that under a pure strain, the particles induce a viscosity of the form

$$\begin{aligned} \mu ' = \mu + \frac{5}{2} \phi \mu + 7.6 \phi ^2 \mu + o(\phi ^2). \end{aligned}$$
(1.11)

Although the result of Batchelor and Green is generally accepted by the fluid mechanics community, the lack of clarity about their renormalization technique has led to debate; see [1, 22, 35].

One main objective in the present paper is to give a rigorous and global mathematical framework for the computation of

$$\begin{aligned} \nu _2 = \frac{1}{|\mathcal {O}|} \int _\mathcal {O}\mu _2(x) \mathrm{d}x, \end{aligned}$$
(1.12)

leading to explicit formulas in periodic and stationary random settings. We will adopt the point of view of the studies mentioned before: we will assume the validity of an effective model of type (1.4)–(1.5)–(1.6) with \(k=2\), and will identify the averaged coefficient \(\nu _2\).

More precisely, our analysis is divided into two parts. The first part, conducted in Section 2, has as its main consequence the following:

Theorem 1.1

Let \((x_{i})_{1 \leqq i \leqq N}\) be a family of points contained in a fixed compact subset of \(\mathbb {R}^3\), and satisfying (H1)–(H2). For any trace-free symmetric matrix S and any \(\phi > 0\), let \(u_N\), resp. \(u_{eff}\), be the solution of (1.1)–(1.2)–(1.8) with the radius a of the balls defined through (1.3), resp. the solution of (1.4)–(1.5)–(1.8) where \(\mu '\) obeys (1.6) with \(k=2\), \(\mu _1\) being given in (1.10).

If \(u_N - u_{eff} = o(\phi ^2)\) in \(H^{-\infty }_{loc}(\mathbb {R}^3)\), meaning that for all bounded open sets U, there exists \(s \in \mathbb {R}\) such that

$$\begin{aligned} \limsup _{N \rightarrow +\infty } \Vert u_N - u_{eff} \Vert _{H^s(U)} =o(\phi ^2) , \quad \text {as } \phi \rightarrow 0, \end{aligned}$$

then, necessarily, the coefficient \(\nu _2\) defined in (1.12) satisfies \(\, \nu _2 S : S = \mu \lim _{N \rightarrow +\infty } \mathcal {V}_N\), where

$$\begin{aligned} \mathcal {V}_N := \frac{75 |\mathcal {O}|}{16 \pi } \left( \frac{1}{N^2} \sum _{i \ne j} g_S(x_i - x_j) - \int _{\mathbb {R}^3 \times \mathbb {R}^3} g_S(x-y) f(x) f(y) \mathrm{d}x \mathrm{d}y \right) \end{aligned}$$
(1.13)

with the Calderón–Zygmund kernel

$$\begin{aligned} g_S := -D \left( \frac{S : (x \otimes x) x}{|x|^5}\right) : S. \end{aligned}$$
(1.14)

Roughly, this theorem states that if there is an effective model at order \(\phi ^2\), the mean quadratic correction \(\nu _2\) is given by the limit of \(\mathcal {V}_N\), defined in (1.13). Note that the integral at the right-hand side of (1.13) is well-defined: \(f \in L^2(\mathbb {R}^{3})\) and \(f \mapsto g_S \star f\) is a Calderón–Zygmund operator, therefore continuous on \(L^2(\mathbb {R}^3)\). We insist that our result is a conditional statement: the limit of \(\mathcal {V}_N\) does not necessarily exist for every configuration of particles \(x_i = x_{i,N}\) satisfying (H1)–(H2). In particular, it is not clear that an effective model at order \(\phi ^2\) is available for all such configurations.
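In this respect, it is useful to record the explicit form of the kernel: a direct computation from (1.14), using that S is symmetric and trace-free, gives, with \(\hat{x} = x/|x|\),

$$\begin{aligned} g_S(x) = \frac{5 \, (\hat{x} \cdot S \hat{x})^2 - 2 \, |S \hat{x}|^2}{|x|^3} . \end{aligned}$$

Hence \(g_S\) is smooth away from the origin, homogeneous of degree \(-3\), and has zero average over spheres centered at the origin, since \(\frac{1}{4\pi } \int _{\mathbb {S}^2} (\hat{x} \cdot S \hat{x})^2 \, \mathrm{d}\sigma = \frac{2}{15} |S|^2\) and \(\frac{1}{4\pi } \int _{\mathbb {S}^2} |S \hat{x}|^2 \, \mathrm{d}\sigma = \frac{1}{3} |S|^2\). These are the properties behind the Calderón–Zygmund bounds invoked above; whether the limit of \(\mathcal {V}_N\) actually exists is a separate matter.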

Still, the second part of our analysis shows that for points associated to stationary random processes (including periodic patterns or Poisson hard core processes), the limit of the functional does exist, and is given by an explicit formula. We shall leave for later investigation the problem of approximating \(u_N\) by \(u_{eff}\) when the limit of \(\mathcal {V}_N\) exists.

Our study of the functional (1.13) is detailed in Sections 3 to 5. It borrows a lot from the mathematical analysis of Coulomb gases, as developed over the last years by Sylvia Serfaty and her coauthors [9, 36, 40]. Although our paper is self-contained, we find it useful to give a brief account of this analysis here. As explained in the lecture notes [41], one of its main goals is to understand what configurations of points minimize Coulomb energies of the form

$$\begin{aligned} H_N = \frac{1}{N^2} \sum _{i\ne j} g(x_i - x_j) + \frac{1}{N} \sum _{i=1}^N V(x_i), \end{aligned}$$

where \(g(x) = \frac{1}{|x|}\) is a repulsive potential of Coulomb type, and V is typically a confining potential. It is well-known, see [41, chapter 2], that under suitable assumptions on V, the sequence of functionals \(H_N\) (seen as functionals over probability measures, extended by \(+\infty \) outside the set of empirical measures) \(\Gamma \)-converges to the functional

$$\begin{aligned} H(\lambda ) = \int _{\mathbb {R}^3 \times \mathbb {R}^3} g(x-y) \mathrm{d}\lambda (x) \mathrm{d}\lambda (y) + \int _{\mathbb {R}^3} V(x) \mathrm{d}\lambda (x). \end{aligned}$$

Hence, the empirical measure \(\delta _N = \frac{1}{N} \sum _{i=1}^N \delta _{x_i}\) associated to the minimizer \((x_1,\dots ,x_N)\) of \(H_N\) converges weakly to the minimizer \(\lambda \) of H.
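As a classical illustration, consider the quadratic confinement \(V(x) = |x|^2\). The Euler–Lagrange equation for H reads \(2 g \star \lambda + V = \text {cst}\) on the support of the minimizer; taking the Laplacian and using \(\Delta (g \star \lambda ) = - 4\pi \lambda \), one finds that \(\lambda \) has density \(\frac{\Delta V}{8\pi } = \frac{3}{4\pi }\) on its support, which by radial symmetry and the mass constraint is the unit ball:

$$\begin{aligned} \lambda = \frac{3}{4\pi } \, 1_{B(0,1)}(x) \, \mathrm{d}x. \end{aligned}$$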

In the series of works [36, 40], see also [39] on the Ginzburg-Landau model, Serfaty and her coauthors investigate the next order term in the asymptotic expansion of \(\min _{x_1,\dots ,x_N} H_N\). A key point in these works is understanding the behaviour of (the minimum of)

$$\begin{aligned} \mathcal {H}_N = \int _{\mathbb {R}^3 \times \mathbb {R}^3\setminus \text {Diag}} g(x-y) \mathrm{d}(\delta _N - \lambda )(x) \mathrm{d}(\delta _N -\lambda )(y) \end{aligned}$$
(1.15)

as \(N \rightarrow +\infty \). This is done through the notion of renormalized energy. Roughly, the starting point behind this notion is the (abusive) formal identity

$$\begin{aligned} \text {''} \int _{\mathbb {R}^3 \times \mathbb {R}^3} g(x-y) \mathrm{d}(\delta _N - \lambda )(x) \mathrm{d}(\delta _N -\lambda )(y) = \frac{1}{4\pi } \int _{\mathbb {R}^3} |{\nabla }h_N|^2 \, \text {''}, \end{aligned}$$
(1.16)

where \(h_N\) is the solution of \(\Delta h_N = 4 \pi (\delta _N - \lambda )\) in \(\mathbb {R}^3\). Of course, this identity does not make sense, as both sides are infinite. On one hand, the left-hand side is not well-defined: the potential g is singular at the diagonal, so that the integral with respect to the product of the empirical measures diverges. On the other hand, the right-hand side is no better defined: as the empirical measure does not belong to \(H^{-1}(\mathbb {R}^3)\), \(h_N\) is not in \(\dot{H}^1(\mathbb {R}^3)\).
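To illustrate the latter point: near a given \(x_i\), \(h_N\) behaves like \(-\frac{1}{N |x - x_i|}\), up to a remainder with bounded gradient, so that for any small \(r > 0\),

$$\begin{aligned} \int _{B(x_i,r)} |{\nabla }h_N|^2 \, \geqq \, \frac{1}{C N^2} \int _{B(x_i,r)} \frac{\mathrm{d}x}{|x-x_i|^4} \, = \, +\infty . \end{aligned}$$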

Still, as explained in [41, chapter 3], one can modify this identity, and show a formula of the form

$$\begin{aligned} \mathcal {H}_N = \lim _{\eta \rightarrow 0} \left( \frac{1}{4\pi }\int _{\mathbb {R}^3} |{\nabla }h^\eta _N|^2 - N g(\eta ) \right) , \end{aligned}$$
(1.17)

where \(h_N^\eta \) is an approximation of \(h_N\) obtained by regularization of the Dirac masses at the right-hand side of the Laplace equation: \(\Delta h_N^\eta = 4 \pi (\delta ^\eta _N - \lambda )\) in \(\mathbb {R}^3\). Note the removal of the term \(N g(\eta )\) at the right-hand side of (1.17). This term, which goes to infinity as the parameter \(\eta \rightarrow 0\), corresponds to the self-interaction of the Dirac masses: it must be removed, consistently with the fact that the integral defining \(\mathcal {H}_N\) excludes the diagonal. This explains the term renormalized energy. See [41, chapter 3] for more details.

From there (omitting to discuss the delicate commutation of the limits in N and \(\eta \)!), the asymptotics of \(\min _{x_1, \dots , x_N} \mathcal {H}_N\) can be deduced from that of \(\min _{x_1, \dots , x_N} \int _{\mathbb {R}^3} |{\nabla }h_N^\eta |^2\), for fixed \(\eta \). The next step is to show that such a minimum can be expressed as spatial averages of (minimal) microscopic energies, expressed in terms of solutions of the so-called jellium problems: see [41, chapter 4]. These problems, obtained through rescaling and blow-up of the equation on \(h_N^\eta \), are an analogue of cell problems in homogenization. More will be said in Section 4, and we refer to the lecture notes [41] for all necessary complements.

Thus, the main idea in the second part of our paper is to take advantage of the analogy between the functionals \(\mathcal {V}_N\) and \(\mathcal {H}_N\) to apply the strategy just described. Doing so, we face specific difficulties: our distribution of points does not minimize an energy, the potential \(g_S\) is much more singular than g, the reformulation of the functional in terms of an energy is less obvious, etc. Still, we are able to reproduce the same kind of scheme. We introduce in Section 3 an analogue of the renormalized energy. The analogue of the jellium problem is discussed in Section 4. Finally, in Section 5, we are able to tackle the convergence of \(\mathcal {V}_N\), and give explicit formulas for the limit in two cases: the case of a (properly rescaled) \(L \mathbb {Z}^3\)-periodic pattern of M spherical particles with centers \(a_1\), ..., \(a_M\), and the case of a (properly rescaled) hardcore stationary random process with locally integrable two-point correlation function \(\rho _2(y,z) = \rho (y-z)\). In the first case, we show that

$$\begin{aligned} \lim _{N \rightarrow +\infty } \mathcal {V}_N = \frac{25 L^3}{2 M^2 } \Bigl ( \sum _{i \ne j} S {\nabla }\cdot G_{S,L}(a_i - a_j) \, + \, K S{\nabla }\cdot (G_{S,L} - G_S)(0) \Bigr ), \end{aligned}$$
(1.18)

where \(G_S\) and \(G_{S,L}\) are the whole space and \(L\mathbb {Z}^3\)-periodic kernels defined respectively in (3.12) and (5.18); see Proposition 5.4. In the special case of a primitive cubic lattice, for which \(M=L=1\), we can push the calculation further, finding that

$$\begin{aligned} \nu _2 S : S = \mu \big ( \alpha \sum _{i=1}^3 |S_{ii}|^2 + \beta \sum _{i \ne j} |S_{ij}|^2 \big ), \end{aligned}$$

with \(\alpha \approx 9.48\) and \(\beta \approx -2.15\), cf. Proposition 5.5 for precise expressions. Our result is in agreement with [42]. In the random stationary case, if the process has mean intensity one, we show that

$$\begin{aligned} \lim _N \mathcal {V}_N&= \frac{25}{2} \lim _{L \rightarrow +\infty } \frac{1}{L^3} \, \sum _{\begin{array}{c} z\ne z' \in \Lambda \cap K_L \end{array}} S {\nabla }\cdot G_{S,L}(z - z') \nonumber \\&= \frac{25}{2} \lim _{L \rightarrow +\infty } \frac{1}{L^3} \int _{K_L \times K_L} S{\nabla }\cdot G_{S,L}(z-z') \rho (z-z') dz dz'. \end{aligned}$$
(1.19)

These formulas open the way to numerical computations of the viscosity coefficients of specific processes, and should in particular allow one to check the formulas found in the literature [5, 35].
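As an aside, formula (1.13) itself is directly amenable to numerical evaluation for a given finite configuration. The short Python sketch below is purely illustrative and not part of the analysis: the function names, the near-field cutoff eps (harmless at small cutoff since \(g_S\) has zero spherical average), and the Monte Carlo approximation of the background integral by a U-statistic over samples of f are our own choices; it uses the explicit pointwise form of \(g_S\) recorded after Theorem 1.1.

```python
import numpy as np

def g_S_pair_sum(X, Y, S, eps=1e-3):
    """Sum of g_S(x - y) over pairs x in X, y in Y with |x - y| > eps, where
    g_S(z) = (5 (z.Sz)^2 / |z|^4 - 2 |Sz|^2 / |z|^2) / |z|^3, cf. (1.14)."""
    Z = X[:, None, :] - Y[None, :, :]          # (n, m, 3) pairwise differences
    r = np.linalg.norm(Z, axis=-1)
    keep = r > eps                              # drop coincident / near-singular pairs
    Z, r = Z[keep], r[keep]
    SZ = Z @ S                                  # rows S z (S is symmetric)
    num = (5.0 * np.einsum('ij,ij->i', Z, SZ) ** 2 / r**4
           - 2.0 * np.einsum('ij,ij->i', SZ, SZ) / r**2)
    return np.sum(num / r**3)

def V_N(points, S, vol_O, f_samples, eps=1e-3):
    """Crude evaluation of (1.13): discrete pair interaction minus the background
    double integral, the latter approximated by a U-statistic over samples of f."""
    N, M = len(points), len(f_samples)
    discrete = g_S_pair_sum(points, points, S, eps) / N**2
    background = g_S_pair_sum(f_samples, f_samples, S, eps) / (M * (M - 1))
    return 75.0 * vol_O / (16.0 * np.pi) * (discrete - background)

# toy configuration: O = unit cube (|O| = 1, f = 1_O), slightly perturbed cubic lattice
rng = np.random.default_rng(0)
n = 6
grid = (np.indices((n, n, n)).reshape(3, -1).T + 0.5) / n
points = grid + 0.02 * rng.standard_normal(grid.shape) / n
S = np.diag([1.0, -0.5, -0.5])                  # trace-free symmetric strain
print(V_N(points, S, vol_O=1.0, f_samples=rng.random((1000, 3))))
```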

Let us conclude this introduction by pointing out that our analysis falls into the general scope of deriving macroscopic properties of dilute suspensions. From this perspective, it can be related to mathematical studies on the drag or sedimentation speed of suspensions; see [13, 23,24,25, 30] among many. See also the recent work [14] on the conductivity problem.

2 Expansion of the Effective Viscosity

The aim of this section is to understand the origin of the functional \(\mathcal {V}_N\) introduced in (1.13), and to prove Theorem 1.1. The outline is the following. We first consider the effective model (1.4)–(1.5)–(1.6). Given S a symmetric trace-free matrix, and a solution \(u_{eff}\) with condition at infinity (1.8), we exhibit an integral quantity \(\mathcal {I}_{eff} = \mathcal {I}_{eff}(S)\) that involves \(u_{eff}\) and allows us to recover (partially) the mean viscosity coefficient \(\nu _2\). In the next paragraph, we introduce the analogue \(\mathcal {I}_{N}\) of \(\mathcal {I}_{eff}\), that involves this time the solution \(u_N\) of (1.1)–(1.2) and (1.8). In brief, we show that if \(u_N\) is \(o(\phi ^2)\)-close to \(u_{eff}\), then \(\mathcal {I}_{N}\) is \(o(\phi ^2)\)-close to \(\mathcal {I}_{eff}\). Finally, we provide an expansion of \(\mathcal {I}_{N}\), allowing us to express \(\nu _2\) in terms of \(\mathcal {V}_N\). Theorem 1.1 follows.

2.1 Recovering the Viscosity Coefficients in the Effective Model

Let \(k \geqq 2\) and let \(\mu '\) satisfy (1.6), with viscosity coefficients \(\mu _i\) that may depend on x. Let S be symmetric and trace-free. We denote \(u_0(x) = Sx\). Let \(u_{eff} = 1_{\mathbb {R}^3\setminus \mathcal {O}} u + 1_{\mathcal {O}} u'\) be the weak solution in \(u_0+ \dot{H}^1(\mathbb {R}^3)\) of (1.4)–(1.5)–(1.8). By a standard energy estimate, one can show the expansion

$$\begin{aligned} u_{eff} - u_0 = \phi \, u_{eff,1} + \dots + \phi ^k u_{eff,k} + o(\phi ^k) \quad \text {in } \dot{H}^1(\mathbb {R}^3), \end{aligned}$$

where the system satisfied by \(u_{eff,i} = 1_{\mathbb {R}^3\setminus \mathcal {O}} u_i + 1_{\mathcal {O}} u'_i\) is derived by plugging the expansion in (1.4)–(1.5) and keeping terms with power \(\phi ^i\) only. Notably, we find that

$$\begin{aligned} \left\{ \begin{aligned} -\mu \Delta u_1 + {\nabla }p_1&= 0, \quad \hbox {div}~u_1 = 0, \quad x \in \mathbb {R}^3\setminus \mathcal {O}, \\ -\mu \Delta u'_1 + {\nabla }p'_1&= 2 \hbox {div}~(\mu _1 D(u_0)), \quad \hbox {div}~u'_1 = 0, \quad x \in \mathcal {O}, \end{aligned} \right. \end{aligned}$$
(2.1)

together with the conditions \(u_1 = 0\) at infinity,

$$\begin{aligned} u_1 = u_1' \quad \text {at } \, {\partial }\mathcal {O}, \quad \sigma _\mu (u_1,p_1)n \, = \, \sigma _\mu (u'_1, p'_1)n \, + \, 2 \mu _1 D(u_0)n \, \text {at } \, {\partial }\mathcal {O}. \end{aligned}$$

Similarly,

$$\begin{aligned} \left\{ \begin{aligned} -\mu \Delta u_2 + {\nabla }p_2&= 0, \quad \hbox {div}~u_2 = 0, \quad x \in \mathbb {R}^3\setminus \mathcal {O}, \\ -\mu \Delta u'_2 + {\nabla }p'_2&= 2 \hbox {div}~(\mu _2 D(u_0)) + 2 \hbox {div}~( \mu _1 D(u'_1)), \quad \hbox {div}~u'_2 = 0, \quad x \in \mathcal {O}, \end{aligned} \right. \end{aligned}$$
(2.2)

together with \(u_2 = 0\) at infinity,

$$\begin{aligned} u_2= & {} u_2' \quad \text {at } \, {\partial }\mathcal {O}, \quad \sigma _\mu (u_2,p_2)n\\= & {} \sigma _\mu (u'_2, p'_2)n \, + \, 2 \mu _2 D(u_0)n + 2 \mu _1 D(u_1')n \, \text { at } \, {\partial }\mathcal {O}. \end{aligned}$$

Now, inspired by formula (4.11.16) in [4], we define

$$\begin{aligned} \mathcal {I}_{eff} := \int _{{\partial }\mathcal {O}} \sigma _\mu (u-u_0, p_{eff}) n \cdot S x \mathrm{d}s - 2 \mu \int _{{\partial }\mathcal {O}} (u-u_0) \cdot Sn \mathrm{d}s, \end{aligned}$$
(2.3)

where n refers to the outward normal. We will show that

$$\begin{aligned} \mathcal {I}_{eff} = \, 2 |\mathcal {O}| \, \left( \phi \nu _1 S : S + \, \phi ^2 \nu _2 S : S \right) + 2 \phi ^2 \int _{\mathcal {O}} \mu _1 D(u_1') : S + o(\phi ^2). \end{aligned}$$
(2.4)

We first use (1.5) to write

$$\begin{aligned} \mathcal {I}_{eff}&= \int _{{\partial }\mathcal {O}} \sigma _{\mu '}(u'-u_0, p') n \cdot S x \mathrm{d}s + \int _{{\partial }\mathcal {O}} \sigma _{\mu '-\mu }(u_0,0) n \cdot S x \mathrm{d}s \\&\quad - 2 \mu \int _{{\partial }\mathcal {O}} (u' - u_0) \cdot Sn \mathrm{d}s \\&= \int _{{\partial }\mathcal {O}} \sigma _{\mu '}(\phi u'_1 + \phi ^2 u'_2, \phi p_1 + \phi ^2 p_2) n \cdot S x \mathrm{d}s\\&\quad + 2 \int _{{\partial }\mathcal {O}} (\phi \mu _1 + \phi ^2 \mu _2) S n \cdot S x \mathrm{d}s - 2 \mu \int _{{\partial }\mathcal {O}} (\phi u'_1 + \phi ^2 u'_2) \cdot Sn \mathrm{d}s + o(\phi ^2) \\&= \int _{{\partial }\mathcal {O}} \sigma _{\mu }(\phi u'_1 + \phi ^2 u'_2, \phi p_1 + \phi ^2 p_2) n \cdot S x \mathrm{d}s + \phi \int _{{\partial }\mathcal {O}} \sigma _{\mu _1}(\phi u'_1, 0) n \cdot S x \mathrm{d}s \\&\quad + 2 \int _{{\partial }\mathcal {O}} (\phi \mu _1 + \phi ^2 \mu _2) S n \cdot S x \mathrm{d}s - 2 \mu \int _{{\partial }\mathcal {O}} (\phi u'_1 + \phi ^2 u'_2) \cdot Sn \mathrm{d}s + o(\phi ^2). \end{aligned}$$

Using the equations satisfied by \(u'_1\) and \(u'_2\), after integration by parts, we get

$$\begin{aligned}&\int _{{\partial }\mathcal {O}} \sigma _{\mu }(\phi u'_1 + \phi ^2 u'_2, \phi p_1 + \phi ^2 p_2) n \cdot S x \mathrm{d}s \\&\quad = - \int _{\mathcal {O}} 2 \hbox {div}(\phi \mu _1 S + \phi ^2 \mu _2 S) \cdot Sx \mathrm{d}x - \int _{\mathcal {O}} 2 \hbox {div}(\phi ^2 \mu _1 D(u'_1)) \cdot S x \mathrm{d}x \\&\qquad + 2 \mu \int _{\mathcal {O}} D(\phi u'_1 + \phi ^2 u'_2) : S \mathrm{d}x \\&\quad = 2 |\mathcal {O}| (\phi \nu _1 S : S + \phi ^2 \nu _2 S : S) - 2 \int _{{\partial }\mathcal {O}} (\phi \mu _1 + \phi ^2 \mu _2) Sn \cdot Sx \mathrm{d}s \\&\qquad + 2 \int _{\mathcal {O}} \phi ^2 \mu _1 D(u'_1) : S - 2 \int _{{\partial }\mathcal {O}} \phi ^2 \mu _1 D(u'_1) n \cdot S x \mathrm{d}s\\&\qquad + 2 \mu \int _{{\partial }\mathcal {O}} (\phi u'_1 + \phi ^2 u'_2) \cdot Sn \mathrm{d}s. \end{aligned}$$

Plugging this last line in the expression for \(\mathcal {I}_{eff}\) yields (2.4).

We see through formula (2.4) that the expansion of \(\mathcal {I}_{eff}\) in powers of \(\phi \) gives access to \(\nu _1\), and, if \(\mu _1\) is known, it further gives access to \(\nu _2\). On the basis of the works [1, 33] and of the recent paper [21], which considers the same setting as ours, it is natural to assume that \(\mu _1\) is given by (1.10). This implies \(\nu _1 = \frac{5}{2} \mu \). With this expression of \(\mu _1\), and the assumptions on f in (H1), we can check that \(u_S = (5 |\mathcal {O}|)^{-1} u_{eff,1}\) satisfies

$$\begin{aligned} -\Delta u_S + {\nabla }p = \hbox {div}~(S f) = S{\nabla }f, \quad \hbox {div}~u_S = 0 \quad \text {in } \, \mathbb {R}^3, \quad \lim _{|x| \rightarrow \infty } u_S(x) = 0. \end{aligned}$$
(2.5)
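Let us briefly indicate why: with this choice of \(\mu _1\), the source term in the interior equation of (2.1) is \(2 \hbox {div}(\mu _1 D(u_0)) = 5 |\mathcal {O}| \mu \, \hbox {div}(f S)\), while the stress jump at \({\partial }\mathcal {O}\) involves \(2 \mu _1 D(u_0) n = 5 |\mathcal {O}| \mu f S n\). Combining the interior and exterior systems, \(u_{eff,1}\) therefore satisfies

$$\begin{aligned} -\mu \Delta u_{eff,1} + {\nabla }p_{eff,1} = 5 |\mathcal {O}| \mu \, \hbox {div}~(S f 1_{\mathcal {O}}), \quad \hbox {div}~u_{eff,1} = 0 \quad \text {in } \mathcal {D}'(\mathbb {R}^3), \end{aligned}$$

and dividing by \(5 |\mathcal {O}| \mu \) (and renaming the pressure) yields (2.5) for \(u_S\).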

It follows that

$$\begin{aligned} \mathcal {I}_{eff} = 5 \phi \mu |\mathcal {O}| |S|^2 \, +\, 2\phi ^2 |\mathcal {O}| \nu _2 S : S \, - \, 50 \mu \phi ^2 |\mathcal {O}|^2 \int _{\mathbb {R}^3} |D(u_S)|^2 + o(\phi ^2). \end{aligned}$$
(2.6)
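Let us also detail how (2.6) follows from (2.4). First, (1.10) and (H1) give \(\nu _1 = \frac{1}{|\mathcal {O}|} \int _{\mathcal {O}} \frac{5}{2} |\mathcal {O}| f \mu = \frac{5}{2} \mu \), so that \(2 |\mathcal {O}| \phi \nu _1 S : S = 5 \phi \mu |\mathcal {O}| |S|^2\). Second, writing \(u'_1 = 5 |\mathcal {O}| u_S\) in \(\mathcal {O}\), we get

$$\begin{aligned} 2 \phi ^2 \int _{\mathcal {O}} \mu _1 D(u_1') : S = 25 \phi ^2 |\mathcal {O}|^2 \mu \int _{\mathcal {O}} f \, D(u_S) : S = - 50 \mu \phi ^2 |\mathcal {O}|^2 \int _{\mathbb {R}^3} |D(u_S)|^2, \end{aligned}$$

where the last equality comes from testing (2.5) with \(u_S\), which yields \(\int _{\mathbb {R}^3} |{\nabla }u_S|^2 = - \int _{\mathcal {O}} f \, S : {\nabla }u_S\), together with the identity \(\int |{\nabla }u_S|^2 = 2 \int |D(u_S)|^2\) for divergence-free fields.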

2.2 Recovering the Viscosity Coefficients in the Model with Particles

To determine the possible value of the mean viscosity coefficient \(\nu _2\), we must now relate the functional \(\mathcal {I}_{eff}\), based on the effective model, to a functional \(\mathcal {I}_N\) based on the real model with spherical rigid particles. From now on, we place ourselves under the assumptions of Theorem 1.1. Note that, thanks to hypothesis (H2), the spherical particles do not overlap for \(\phi \) small enough, so that a weak solution \(u_N \in u_0 + \dot{H}^1(\mathbb {R}^3)\) of (1.1)–(1.2)–(1.8) exists and is unique.

By integration by parts, for any R such that \(\mathcal {O}\Subset B_R\), we have

$$\begin{aligned} \mathcal {I}_{eff} = \int _{{\partial }B_R} \sigma _\mu (u_{eff}-u_0, p_{eff}) n \cdot S x \mathrm{d}s - 2 \mu \int _{{\partial }B_R} (u_{eff}-u_0) \cdot Sn \mathrm{d}s. \end{aligned}$$
(2.7)

By analogy with (2.3), and as all particles remain in a fixed compact \(K \supset \mathcal {O}\) independent of N, we set for any R such that \(K \subset B_R\):

$$\begin{aligned} \mathcal {I}_N := \int _{{\partial }B_R} \sigma _\mu (u_N-u_0, p_N) n \cdot S x \mathrm{d}s - 2 \mu \int _{{\partial }B_R} (u_N-u_0) \cdot Sn \mathrm{d}s, \end{aligned}$$
(2.8)

which again does not depend on our choice of R by integration by parts. Now, if \(u_{eff}\) and \(u_N\) are \(o(\phi ^2)\)-close in the sense of Theorem 1.1, then

$$\begin{aligned} \limsup _{N \rightarrow +\infty } |\mathcal {I}_N - \mathcal {I}_{eff}| = o(\phi ^2). \end{aligned}$$
(2.9)

Indeed, \(u_N - u_{eff}\) is a solution of a homogeneous Stokes equation outside K. By elliptic regularity, we find that \(\limsup _{N \rightarrow +\infty } \Vert u_{eff} - u_N\Vert _{H^s(K')} = o(\phi ^2)\), for any compact \(K' \subset \mathbb {R}^3\setminus K\) and any positive s. Relation (2.9) follows.

We now turn to the most difficult part of this section, that is expanding \(\mathcal {I}_N\) in powers of \(\phi \). We aim to prove

Proposition 2.1

Let \((x_{i})_{1\leqq i \leqq N}\) satisfy (H1)–(H2). For S trace-free and symmetric, and for \(\phi > 0\), let \(u_N\) be the solution of (1.1)–(1.2)–(1.8) with the ball radius a defined through (1.3). Let \(\mathcal {I}_N\) be as in (2.8), \(\mathcal {V}_N\) as in (1.13), and \(u_S\) the solution of (2.5). One has

$$\begin{aligned} \mathcal {I}_N = 5 \phi \mu |\mathcal {O}| |S|^2 \, +\, 2 \phi ^2 \mu |\mathcal {O}| \mathcal {V}_N - \, 50 \mu \phi ^2 |\mathcal {O}|^2 \int _{\mathbb {R}^3} |D(u_S)|^2 + o(\phi ^2). \end{aligned}$$
(2.10)

As before, notation \(A_N = B_N + o(\phi ^2)\) means \(\limsup _N |A_N - B_N| = o(\phi ^2)\). Obviously, Theorem 1.1 follows directly from (2.6), (2.9) and from the proposition.

To start the proof, we set \(v_N := u_N - u_0\). Note that \(v_N \in \dot{H}^1(\mathcal F)\) still satisfies the Stokes equation outside the balls, with \(v_N = 0\) at infinity, and \(v_N = - Sx + u_i + \omega _i \times (x-x_i)\) inside \(B_i\). Moreover, taking into account the identities

$$\begin{aligned} \int _{{\partial }B_i} \sigma _{\mu }(u_0,0)n \, \mathrm{d}s = 2\mu \int _{{\partial }B_i} S n = 2 \mu \int _{B_i} \hbox {div}~S = 0 \end{aligned}$$

and

$$\begin{aligned} \int _{{\partial }B_i} \sigma _{\mu }(u_0,0)n \times (x-x_i) \, \mathrm{d}s&= 2\mu \int _{{\partial }B_i} S n \times (x-x_i) \, \mathrm{d}s = 2\mu \int _{{\partial }B_i} S (x-x_i) \times n \, \mathrm{d}s \nonumber \\&= 2\mu \int _{B_i} \hbox {curl} \left( S (x-x_i)\right) \, \mathrm{d}x = 0, \end{aligned}$$
(2.11)

one has for all i that

$$\begin{aligned} \int _{{\partial }B_i} \sigma _\mu (v_N,p_N) n \, \mathrm{d}s = 0, \quad \int _{{\partial }B_i} \sigma _\mu (v_N,p_N) n \times (x-x_i) \, \mathrm{d}s = 0. \end{aligned}$$

From the definition (2.8), we can re-express \(\mathcal {I}_N\) as

$$\begin{aligned} \mathcal {I}_N = \sum _{i=1}^N \int _{{\partial }B_i} \sigma _\mu (v_N,p_N) n \cdot Sx \, \mathrm{d}s - 2\mu \sum _{i=1}^N \int _{{\partial }B_i} \, v_N \cdot Sn \, \mathrm{d}s. \end{aligned}$$
(2.12)

To obtain an expansion of \(\mathcal I_N\) in powers of \(\phi \), we will now approximate \((v_N,p_N)\) by some explicit field \((v_{app}, p_{app})\), inspired by the method of reflections. This approximation involves the elementary problem

$$\begin{aligned} \left\{ \begin{aligned} -\mu \Delta v + {\nabla }p&= 0 \quad \text { outside } B(0,a), \\ \hbox {div}\, v&= 0 \quad \text { outside } B(0,a), \\ v(x)&= - Sx, \quad x \in B(0,a). \end{aligned} \right. \end{aligned}$$
(2.13)

The solution of (2.13) is explicit [18], and given by

$$\begin{aligned} v^s[S](x):= & {} -\frac{5}{2} S : (x \otimes x) \frac{a^3 x}{|x|^5} - Sx \frac{a^5}{|x|^5} +\frac{5}{2} (S : x \otimes x) \frac{a^5 x}{|x|^7} \nonumber \\= & {} v[S] + O(a^5 |x|^{-4}), \end{aligned}$$
(2.14)

with

$$\begin{aligned} v[S](x) :=-\frac{5}{2} S : (x \otimes x) \frac{a^3 x}{|x|^5}. \end{aligned}$$
(2.15)

The pressure is

$$\begin{aligned} p^s[S](x) := -5 \mu a^3 \frac{S : (x \otimes x)}{|x|^5}. \end{aligned}$$
(2.16)
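One can check directly that \(v^s[S]\) matches the boundary data in (2.13): on \(|x| = a\), the first and third terms in (2.14) cancel each other, leaving

$$\begin{aligned} v^s[S](x) = - Sx \, \frac{a^5}{|x|^5} = - Sx \quad \text {on } {\partial }B(0,a). \end{aligned}$$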

We now introduce

$$\begin{aligned} (v_{app},p_{app})(x) := \sum _{i=1}^N (v^s[S],p^s[S])(x-x_i) + \sum _{i=1}^N (v^s[S_i], p^s[S_i])(x-x_i), \end{aligned}$$
(2.17)

where

$$\begin{aligned} S_i \, := \, \sum _{j \ne i} D(v[S])(x_i-x_j). \end{aligned}$$
(2.18)
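The role of the matrices \(S_i\) is the following: for \(x \in B_i\) and \(j \ne i\), a Taylor expansion around \(x_i\), combined with (2.14), gives

$$\begin{aligned} v^s[S](x - x_j) = v[S](x_i - x_j) + {\nabla }v[S](x_i - x_j)\, (x - x_i) + O\Big ( \frac{a^5}{|x_i - x_j|^4} \Big ). \end{aligned}$$

The constant term and the antisymmetric part of the gradient only generate rigid velocity fields on \(B_i\); the remaining linear part is \(D(v[S])(x_i - x_j)(x-x_i)\). Summing over \(j \ne i\), the first sum in (2.17) equals \(-S(x-x_i) + S_i (x-x_i)\) on \(B_i\), up to rigid fields and smaller terms, and the deviation \(S_i(x - x_i)\) is compensated at leading order by \(v^s[S_i](\cdot - x_i)\), which equals \(-S_i (x - x_i)\) on \(B_i\).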

In short, the first sum at the right-hand side of (2.17) corresponds to a superposition of N elementary solutions, meaning that the interaction between the balls is neglected. This sum satisfies the Stokes equation outside the balls, but creates an error at each ball \(B_i\), whose leading term is \(S_i (x-x_i)\). This explains the correction by the second sum at the right-hand side of (2.17). One could of course reiterate the process: as the distance between particles is large compared to their radius, we expect the interactions to be smaller and smaller. This is the principle of the method of reflections that is investigated in [24]. From there, Proposition 2.1 will follow from two facts. Defining

$$\begin{aligned} \mathcal {I}_{app} := \sum _{i=1}^N \int _{{\partial }B_i} \sigma _\mu (v_{app},p_{app}) n \cdot Sx \, \mathrm{d}s - 2\mu \sum _{i=1}^N \int _{{\partial }B_i} \, v_{app} \cdot Sn \, \mathrm{d}s, \end{aligned}$$

we will show first that

$$\begin{aligned} \mathcal {I}_{app} = 5 \phi \mu |\mathcal {O}| |S|^2 + 2 \phi ^2 \mu |\mathcal {O}| \mathcal {V}_N - \, 50 \mu \phi ^2 |\mathcal {O}|^2 \int _{\mathbb {R}^3} |D(u_S)|^2, \end{aligned}$$
(2.19)

and then

$$\begin{aligned} \limsup _{N \rightarrow +\infty } |\mathcal {I}_N - \mathcal {I}_{app}| = o(\phi ^2). \end{aligned}$$
(2.20)

Identity (2.19) follows from a calculation that we now detail. We define

$$\begin{aligned} \mathcal {I}_i(v,p) := \int _{{\partial }B_i} \bigl ( (\sigma (v,p) n \otimes x) - 2\mu (v \otimes n) \bigr ) \, \mathrm{d}s. \end{aligned}$$

We have

$$\begin{aligned} \mathcal {I}_{app}&= \sum _{i} \mathcal {I}_i(v^s[S](\cdot -x_i), p^s[S](\cdot -x_i)) : S \\&\quad + \sum _{i} \sum _{j\ne i} \mathcal {I}_i(v^s[S](\cdot -x_j), p^s[S](\cdot -x_j)) : S \\&\quad + \sum _{i} \mathcal {I}_i(v^s[S_i](\cdot -x_i), p^s[S_i](\cdot -x_i)) : S \\&\quad + \sum _{i} \sum _{j\ne i} \mathcal {I}_i(v^s[S_j](\cdot -x_j), p^s[S_j](\cdot -x_j)) : S \\&=: I_a + I_b + I_c + I_d. \end{aligned}$$

To treat \(I_b\) and \(I_d\), we rely on the following property, which is checked easily through integration by parts: for any \((v,p)\) solution of Stokes in \(B_i\), and any trace-free symmetric matrix S, \(\mathcal {I}_i(v,p) : S = 0\). As for all i and all \(j\ne i\), \(v^s[S](\cdot -x_j)\) or \(v^s[S_j](\cdot -x_j)\) is a solution of Stokes inside \(B_i\), we deduce

$$\begin{aligned} I_b = I_d = 0. \end{aligned}$$
(2.21)

As regards \(I_a\), we use the following formula, which follows from a tedious calculation [18]: for any traceless matrix S,

$$\begin{aligned} \mathcal {I}_i\big (v^s[S](\cdot -x_i), p^s[S](\cdot -x_i)\big ) = \frac{20 \pi }{3} \mu a^3 S. \end{aligned}$$
(2.22)

It follows that

$$\begin{aligned} I_a = N \frac{20 \pi }{3} \mu a^3 |S|^2 = 5 \phi |\mathcal {O}| \mu |S|^2. \end{aligned}$$
(2.23)

This term corresponds to the famous Einstein formula for the mean effective viscosity. It is consistent with the expression (1.10) for \(\mu _1\), which implies \(\nu _1 = \frac{5}{2} \mu \).

Finally, as regards \(I_c\), we can use (2.22) again, replacing S by \(S_i\):

$$\begin{aligned} I_c&= \frac{20 \pi }{3} \mu a^3 \sum _{i} S_i : S = \frac{20 \pi }{3} \mu a^3 \sum _{i} \sum _{j \ne i} D(v[S])(x_i-x_j) : S \nonumber \\&= \frac{75 |\mathcal {O}|^2}{8\pi } \mu \phi ^2 \frac{1}{N^2} \sum _{i} \sum _{j \ne i} g_S(x_i-x_j) \nonumber \\&= 2 \phi ^2\mu |\mathcal {O}| \mathcal {V}_N + \phi ^2 \frac{75 |\mathcal {O}|^2}{8\pi } \mu \int _{\mathbb {R}^3 \times \mathbb {R}^3} g_S(x-y) f(x) f(y) \mathrm{d}x \mathrm{d}y, \end{aligned}$$
(2.24)

with \(g_S\) defined in (1.14); here we have used the identity \(D(v[S])(x) : S = \frac{5 a^3}{2} g_S(x)\), which follows from (1.14) and (2.15), together with the definition (1.3) of \(\phi \). In view of (2.21)–(2.23)–(2.24), to conclude that (2.19) holds, it is enough to prove

Lemma 2.2

For any \(f \in L^2(\mathbb {R}^3)\),

$$\begin{aligned} \int _{\mathbb {R}^3 \times \mathbb {R}^3} g_S(x-y) f(x) f(y) \mathrm{d}x \mathrm{d}y = - \frac{16\pi }{3} \int _{\mathbb {R}^3} |D(u_S)|^2, \end{aligned}$$
(2.25)

with \(g_S\) defined in (1.14), and \(u_S \in \dot{H}^1(\mathbb {R}^3)\) the solution of (2.5).

Proof

Note that both sides of the identity are continuous over \(L^2\): the left-hand side is continuous as the Calderón–Zygmund operator \(f \mapsto g_S \star f\) is continuous over \(L^2\), while the right-hand side is continuous by classical elliptic estimates for the Stokes operator. By density, it is therefore enough to prove (2.25) for \(f \in C^\infty _c(\mathbb {R}^3)\). We denote by \(U = (U_{ij}), Q = (Q_j)\) the fundamental solution of the Stokes operator. This means that for all j, the vector field \(U_j = (U_{ij})_{1\leqq i \leqq 3}\) and the scalar field \(Q_j\) satisfy the Stokes equation

$$\begin{aligned} -\Delta U_j + {\nabla }Q_j = \delta e_j, \quad \hbox {div}~U_j = 0 \quad \text { in } \mathbb {R}^3. \end{aligned}$$
(2.26)

It is well-known (see [16, p. 239]) that

$$\begin{aligned} U(x) = \frac{1}{8\pi } \left( \frac{1}{|x|} Id + \frac{x \otimes x}{|x|^3} \right) , \quad Q(x) = \frac{1}{4\pi } \frac{x}{|x|^3}. \end{aligned}$$

From there, one can deduce the following formula, cf [16, p. 290, equation (IV.8.14)]:

$$\begin{aligned} \sigma (U_j, Q_j) = - \frac{3}{4\pi } \frac{(x \otimes x) x_j}{|x|^5}. \end{aligned}$$

Using the Einstein convention for summation, this implies in turn that

$$\begin{aligned} g_S(x)&= - S_{kl} {\partial }_{x_k} \left( \frac{S : (x \otimes x) x_l}{|x|^5}\right) = \frac{4\pi }{3} S : S_{kl} {\partial }_{x_k} \sigma (U_l,Q_l)(x) \nonumber \\&= \frac{8\pi }{3} S : D \big ( S_{kl} {\partial }_{x_k} U_l \big ) = \frac{8\pi }{3} (S{\nabla }) \cdot \big ( S_{kl} {\partial }_{x_k} U_l \big ), \end{aligned}$$
(2.27)

where we have used that S is trace-free to obtain the third equality. Hence,

$$\begin{aligned} \int \int g_S(x-y) f(x) \mathrm{d}x f(y) \mathrm{d}y&= \frac{8\pi }{3} \int _{\mathbb {R}^3} \big ((S : D S_{kl} {\partial }_{x_k} U_l) \star f \big )(y) f(y) \mathrm{d}y \nonumber \\&= \frac{8\pi }{3} \int S : D S_{kl} {\partial }_{x_k} (U_l \star f)(y) \, f(y) \mathrm{d}y. \end{aligned}$$
(2.28)

Note that the permutations between the derivatives and the convolution product do not raise any difficulty, as \(f \in C_c^\infty (\mathbb {R}^3)\). Now, using \(S_{kl} = S_{lk}\), and denoting by \(\mathrm{St}^{-1}\) the convolution with the fundamental solution (inverse of the Stokes operator), we get

$$\begin{aligned} S_{kl} {\partial }_{x_k} \int U_l(y-x) f(x) \mathrm{d}x = \mathrm{St}^{-1} (S {\nabla }f)(y). \end{aligned}$$
(2.29)

Eventually,

$$\begin{aligned} \int \int g_S(x-y) f(x) f(y) \mathrm{d}x \mathrm{d}y&= \frac{8\pi }{3} \int S : {\nabla }\mathrm{St}^{-1} (S {\nabla }f)(y) \, f(y) \mathrm{d}y \\&= - \frac{8\pi }{3} \int \mathrm{St}^{-1}(S {\nabla }f)(y) \cdot (S {\nabla }f)(y) \, \mathrm{d}y\\&= - \frac{16\pi }{3} \int _{\mathbb {R}^3} |D(u_S)|^2. \end{aligned}$$

Here, for the second equality, we integrated by parts, while for the last one we used that \(\mathrm{St}^{-1}(S {\nabla }f) = u_S\), together with \(\int _{\mathbb {R}^3} u_S \cdot (S {\nabla }f) = \int _{\mathbb {R}^3} |{\nabla }u_S|^2 = 2 \int _{\mathbb {R}^3} |D(u_S)|^2\), the last identity holding since \(u_S\) is divergence-free. This concludes the proof of the lemma. \(\square \)

Remark 2.3

By polarization of the previous identity, at least for \(f, \tilde{f}\) smooth and decaying enough, one has

$$\begin{aligned} \int \int g_S(x-y) f(y) \tilde{f}(x) \mathrm{d}x \mathrm{d}y&= - \frac{8\pi }{3} \int \mathrm{St}^{-1}(S {\nabla }f)(x) \cdot (S {\nabla }\tilde{f})(x) \, \mathrm{d}x \nonumber \\&= \frac{8\pi }{3} \int (S {\nabla }) \cdot \big ( \mathrm{St}^{-1}(S {\nabla }f)\big )(x) \, \tilde{f}(x) \, \mathrm{d}x. \end{aligned}$$
(2.30)

The last step in proving Proposition 2.1, hence Theorem 1.1, is to show the bound (2.20). If \(w := v_N - v_{app}\), \(q := p_N - p_{app}\),

$$\begin{aligned} \mathcal {I}_N - \mathcal {I}_{app} = \sum _{i=1}^N \int _{{\partial }B_i} \sigma _\mu (w,q) n \cdot Sx \, \mathrm{d}s - 2\mu \sum _{i=1}^N \int _{{\partial }B_i} \, w \cdot Sn \, \mathrm{d}s \end{aligned}$$

Direct verifications show that \(v_{app}\), hence w, satisfies the same force- and torque-free conditions as \(v_N\). This means that for any family of constant vectors \(u_i\) and \(\omega _i\), \(1 \leqq i \leqq N\),

$$\begin{aligned} \mathcal {I}_N - \mathcal {I}_{app}= & {} \sum _{i=1}^N \int _{{\partial }B_i} \sigma _\mu (w,q) n \cdot (Sx - u_i - \omega _i \times (x-x_i)) \, \mathrm{d}s \\&- 2\mu \sum _{i=1}^N \int _{{\partial }B_i} \, w \cdot Sn \, \mathrm{d}s. \end{aligned}$$

Choosing for \(u_i\) and \(\omega _i\) the rigid velocities of particle i, so that \(Sx - u_i - \omega _i \times (x-x_i) = - v_N\) on \({\partial }B_i\), we find

$$\begin{aligned} \mathcal {I}_N - \mathcal {I}_{app}&= -\sum _{i=1}^N \int _{{\partial }B_i} \sigma _\mu (w,q) n \cdot v_N \, \mathrm{d}s - 2\mu \sum _{i=1}^N \int _{{\partial }B_i} \, w \cdot Sn \, \mathrm{d}s \nonumber \\&= - \int _{\mathcal {F}} 2\mu D(w) : D(v_N) \, \mathrm{d}x - 2\mu \sum _{i=1}^N \int _{B_i} \, D(w) : S \, \mathrm{d}x \nonumber \\&= -\sum _{i=1}^N \int _{{\partial }B_i} \sigma _\mu (v_N,p_N) n \cdot w \, \mathrm{d}s - 2\mu \sum _{i=1}^N \int _{B_i} \, D(w) : S \, \mathrm{d}x \nonumber \\&= -\sum _{i=1}^N \int _{{\partial }B_i} \sigma _\mu (v_N,p_N) n \cdot (w + \tilde{u}_i + \tilde{\omega }_i \times (x-x_i)) \, \mathrm{d}s \nonumber \\&\quad - 2\mu \sum _{i=1}^N \int _{B_i} \, D(w) : S \, \mathrm{d}x \end{aligned}$$
(2.31)

for any family \((\tilde{u}_i, \tilde{\omega }_i)\), using this time that \(v_N\) is force- and torque-free. Let \(q \geqq 2\). By a proper choice of \((\tilde{u}_i, \tilde{\omega }_i)\), by Poincaré and Korn inequalities, one can ensure that for all i,

$$\begin{aligned} \Vert w + \tilde{u}_i + \tilde{\omega }_i \times (x-x_i)\Vert _{W^{1-\frac{1}{q},q}({\partial }B_i)} \leqq C \Vert D(w)\Vert _{L^q(B_i)}, \end{aligned}$$

where

$$\begin{aligned} \Vert g\Vert _{W^{1-\frac{1}{q},q}({\partial }B_i)} = \inf \Big \{ \frac{1}{a} \Vert G\Vert _{L^q(B_i)} + \Vert {\nabla }G\Vert _{L^q(B_i)}, \quad G\vert _{{\partial }B_i} = g \Big \}. \end{aligned}$$

Note that the factor \(\frac{1}{a}\) at the right-hand side is consistent with scaling considerations. Moreover, by standard use of the Bogovskii operator, see [16], there exists a constant C (depending only on the constant c in (H2)) and a field \(W \in W^{1,q}(\mathcal {F})\) , zero outside \(\cup _{i=1}^N B(x_i,2a)\) satisfying

$$\begin{aligned}&\hbox {div}~W = 0 \quad \text {in } \mathcal {F}, \quad W\vert _{B_i} = (w + \tilde{u}_i + \tilde{\omega }_i \times (x-x_i))\vert _{B_i}, \\&\Vert D(W) \Vert ^q_{L^q(\mathcal {F})} \leqq \sum _i \Vert w + \tilde{u}_i + \tilde{\omega }_i \times (x-x_i)\Vert _{W^{1-\frac{1}{q},q}(B_i)}^q. \end{aligned}$$

We deduce, with \(p \leqq 2\) the conjugate exponent of q, that

$$\begin{aligned}&\big |\sum _{i=1}^N \int _{{\partial }B_i} \sigma _\mu (v_N,p_N) n \cdot (w + \tilde{u}_i + \tilde{\omega }_i \times (x-x_i)) \, \mathrm{d}s \bigr | = 2\mu \big | \int _{\mathcal {F}} D(v_N) : D(W) \bigr | \\&\quad \leqq 2 \mu \Vert D(v_N)\Vert _{L^p(\cup B(x_i,2a))} \Vert D(W)\Vert _{L^q(\mathcal {F})} \\&\quad \leqq C \phi ^{1/p-1/2} \Vert D(v_N)\Vert _{L^2(\mathbb {R}^3)} \big ( \sum _i \Vert D(w)\Vert ^q_{L^q(B_i)} \big )^{1/q}. \end{aligned}$$

By well-known variational properties of the Stokes solution, \(v_N\) minimizes \(\Vert D(v)\Vert _{L^2}\) over the set of all v in \(\dot{H}^1(\mathbb {R}^3)\) satisfying a boundary condition of the form \(v\vert _{B_i} = - Sx + u_i + \omega _i \times (x-x_i)\) for all i. By the same considerations as before, based on the Bogovskii operator, we infer that

$$\begin{aligned} \Vert D(v_N)\Vert _{L^2(\mathbb {R}^3)}^2 \leqq C \sum _{i=1}^N \Vert D(- Sx)\Vert _{L^2(B_i)}^2 \leqq C' \phi , \end{aligned}$$

so that

$$\begin{aligned}&\big |\sum _{i=1}^N \int _{{\partial }B_i} \sigma _\mu (v_N,p_N) n \cdot (w + \tilde{u}_i + \tilde{\omega }_i \times (x-x_i)) \, \mathrm{d}s \bigr | \\&\quad \leqq C \phi ^{1/p} \big ( \sum _i \Vert D(w)\Vert ^q_{L^q(B_i)} \big )^{1/q}. \end{aligned}$$

Using this inequality with the first term in (2.31) and applying the Hölder inequality to the second term, we end up with

$$\begin{aligned} |\mathcal {I}_N - \mathcal {I}_{app}| \leqq C \phi ^{1/p} \big ( \sum _i \Vert D(w)\Vert ^q_{L^q(B_i)} \big )^{1/q}. \end{aligned}$$
(2.32)

To deduce (2.20), it is now enough to prove that for all \(q > 1\), there exists a constant C independent of N or \(\phi \) such that

$$\begin{aligned} \sum _i \Vert D(w)\Vert ^q_{L^q(B_i)} \leqq C ( \phi ^{1+\frac{2q}{p}} + \phi ^{1+\frac{4q}{3}} ). \end{aligned}$$
(2.33)

Indeed, taking \(q > 2\), meaning \(p < 2\), and combining this inequality with (2.32) yields (2.20); more precisely, since \(\frac{1}{p} + \frac{1}{q} = 1\),

$$\begin{aligned} |\mathcal {I}_N - \mathcal {I}_{app}| \leqq C (\phi ^{1+\frac{2}{p}} + \phi ^{\frac{7}{3}}), \end{aligned}$$

which is \(o(\phi ^2)\) as \(1 + \frac{2}{p} > 2\).

In order to show the bound (2.33), we must write down the expression for \(w\vert _{B_i} = v_N\vert _{B_i} - v_{app}\vert _{B_i}\), where \(v_{app}\) was introduced in (2.17). A little calculation, using Taylor’s formula with an integral remainder, shows that

$$\begin{aligned} w\vert _{B_i}(x) = w^r_i(x) - D_i (x - x_i) - E_i (x - x_i) - \mathbf {F}_i\vert _{x}(x-x_i,x-x_i), \end{aligned}$$
(2.34)

with \(w_i^r\) being a rigid vector field (that disappears when taking the symmetric gradient), with

$$\begin{aligned} D_{i} := \sum _{j \ne i} D(v[S_j])(x_i-x_j) , \quad E_{i} := \sum _{j \ne i} D(v^s[S+S_j]-v[S+S_j])(x_i-x_j) \end{aligned}$$

and with the bilinear map

$$\begin{aligned} \mathbf {F}_i\vert _{x} := \sum _{j \ne i} \int _0^1 (1-t) {\nabla }^2 v^s[S+S_j](t(x-x_i) + x_i -x_j) \,\mathrm{d}t. \end{aligned}$$

We recall that \(v^s[S]\) and \(v[S]\) were introduced in (2.14) and (2.15), while the matrices \(S_j\) are defined in (2.18). Note that the matrices \(D_i\) and \(S_i\) have the same kind of structure. More precisely, for a collection \((A_1, \dots , A_N)\) of N symmetric matrices, we can define a map

$$\begin{aligned} \mathcal {A} : (A_1, \dots , A_N) \rightarrow (A'_1, \dots , A'_N), \quad A'_i = \sum _{j\ne i} D(v[A_j])(x_i - x_j). \end{aligned}$$

Then, \((S_1, \dots , S_N) = \mathcal {A}(S, \dots ,S)\) and \((D_1, \dots , D_N) = \mathcal {A}(S_1, \dots ,S_N) = \mathcal {A}^2(S, \dots ,S)\). Note that for any matrix A, the kernel D(v[A]), homogeneous of degree \(-3\), is of Calderón–Zygmund type. Using this property, we are able to prove in the appendix the following lemma, which is an adaptation of a result by the second author and Di Wu [21]:

Lemma 2.4

For all \(1< q < +\infty \), with p the conjugate exponent of q, there exists a constant C, depending on q and on the constant c in (H2), such that, if \((A'_1, \dots , A'_N) = \mathcal {A}(A_1, \dots , A_N)\), then

$$\begin{aligned} \sum _{i=1}^N |A'_i|^q \leqq C \phi ^{\frac{q}{p}} \sum _{i=1}^N |A_i|^q. \end{aligned}$$

We can now proceed to the proof of (2.33). Denoting \(w_i^1:= D_i (x-x_i)\), we find by the lemma that

$$\begin{aligned} \sum _i \Vert D(w^1_i)\Vert ^q_{L^q(B_i)}\leqq & {} C a^3 \sum _i |D_i|^q \leqq C' a^3 \phi ^{\frac{q}{p}} \sum _{i=1}^N |S_i|^q \\\leqq & {} C'' a^3\phi ^{\frac{2q}{p}} \sum _{i=1}^N |S|^q \leqq \mathcal {C} \phi ^{1+\frac{2q}{p}}. \end{aligned}$$

Then, we notice that for any matrix A, \(|D(v^s[A] - v[A])(x)| = O(a^5 |x|^{-5})\). This implies that \(w^2_i := E_i (x - x_i)\) satisfies

$$\begin{aligned} \sum _i \Vert D(w^2_i)\Vert ^q_{L^q(B_i)} \leqq C a^3 \sum _i |E_i|^q \leqq C' a^3 a^{5q} \sum _i \Bigl ( \sum _{j \ne i} \frac{|S_j| + |S|}{|x_i - x_j|^5} \Bigr )^q. \end{aligned}$$

By assumption (H2), the points \(y_i := N^{1/3} x_i\) satisfy, for all \(i \ne j\), that

$$\begin{aligned} |y_i - y_j| \geqq \frac{1}{2} ( c + |y_i - y_j|) \geqq c. \end{aligned}$$

In particular,

$$\begin{aligned} \sum _i \Vert D(w^2_i)\Vert ^q_{L^q(B_i)} \leqq C a^3 \phi ^{5q/3} \sum _i \bigl ( \sum _{j} \frac{|S| + |S_j|}{(c + |y_i - y_j|)^5} \bigr )^q. \end{aligned}$$

We then make use of the following easy generalization of Young’s convolution inequality: \(\forall q \geqq 1\),

$$\begin{aligned} \sum _{i} ( \sum _j |a_{ij} b_j| )^q \leqq \max \big (\sup _i \sum _j |a_{ij}|, \sup _j \sum _i |a_{ij}|\big )^q \sum _i |b_i|^q. \end{aligned}$$
(2.35)
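For completeness, let us sketch a proof of (2.35): by Hölder's inequality with respect to the measure \(|a_{ij}|\) in j, with p the conjugate exponent of q,

$$\begin{aligned} \Big ( \sum _j |a_{ij} b_j| \Big )^q \leqq \Big ( \sum _j |a_{ij}| \Big )^{q/p} \, \sum _j |a_{ij}| |b_j|^q ; \end{aligned}$$

summing over i, exchanging the order of summation in the last factor, and using \(\frac{q}{p} + 1 = q\) yields (2.35).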

Applied with \(a_{ij} = \frac{1}{(c + |y_i - y_j|)^5}\) and \(b_j = |S| + |S_j|\), together with Lemma 2.4, this yields

$$\begin{aligned} \sum _i \Vert D(w^2_i)\Vert ^q_{L^q(B_i)}&\leqq C a^3 \phi ^{5q/3} \bigl ( \sum _j |S|^q + |S_j|^q \bigr )\\&\leqq C' a^3 \phi ^{5q/3} (1+\phi ^{\frac{q}{p}}) N \leqq \mathcal {C} \phi ^{1+\frac{5q}{3}}. \end{aligned}$$

It remains to bound the symmetric gradient of \(w_i^3 := \mathbf {F}_i\vert _{x}(x-x_i,x-x_i)\). By the expression of \(v^s\), we get that, in \(B_i\)

$$\begin{aligned} |D(w_i^3)| \leqq C \sum _{j \ne i}\left( \frac{a^5}{|x_i - x_j|^5} + \frac{a^4}{|x_i - x_j|^4}\right) (|S| + |S_j|). \end{aligned}$$

Proceeding as above, we find

$$\begin{aligned} \sum _i \Vert D(w^3_i)\Vert ^q_{L^q(B_i)} \leqq C a^3 ( \phi ^{5q/3} + \phi ^{4q/3}) (1+\phi ^{\frac{q}{p}}) N \leqq C' \phi ^{1+\frac{4q}{3}}. \end{aligned}$$

As \(D(w) = - \big ( D(w_i^1) + D(w_i^2) + D(w_i^3) \big )\) on each \(B_i\), cf. (2.34), the previous estimates yield (2.33). This concludes the proof of Proposition 2.1, and therefore the proof of Theorem 1.1.

3 The \(\phi ^2\) Correction \(\mathcal {V}_N\) as a Renormalized Energy

In this section, we start the asymptotic analysis of the viscosity coefficient

$$\begin{aligned} \mathcal {V}_N = \frac{75 |\mathcal {O}|}{16 \pi } \Big ( \frac{1}{N^2} \sum _{i \ne j} g_S(x_i - x_j) - \int _{\mathbb {R}^3 \times \mathbb {R}^3} g_S(x-y) f(x) f(y) \mathrm{d}x \mathrm{d}y \Big ). \end{aligned}$$

As a preliminary step, we will show that there is no loss of generality in assuming

$$\begin{aligned} \forall i \in \{1, \dots , N\}, \quad \text {dist}(x_i, \mathcal {O}^c) \geqq \frac{1}{\ln N}. \end{aligned}$$
(3.1)

We introduce the set

$$\begin{aligned} I_{N,ext} = \big \{ 1 \leqq i \leqq N, \, \text {dist}(x_i, \mathcal {O}^c) \leqq \frac{1}{\ln N} \big \}, \quad \text {and } N_{ext} = N_{ext}(N) := |I_{N,ext}|. \end{aligned}$$

By (H1)–(H2), it is easily seen that \(N_{ext} = o(N)\) as \(N \rightarrow +\infty \): for any fixed \(\varepsilon > 0\) and N large, all points \(x_i\), \(i \in I_{N,ext}\), belong to the closed set \(\{ \text {dist}(\cdot , \mathcal {O}^c) \leqq \varepsilon \}\), whose f-measure is \(O(\varepsilon )\), so that \(\limsup _N N_{ext}/N = O(\varepsilon )\) by weak convergence of \(\delta _N\). We now show

Lemma 3.1

\(\mathcal {V}_N\) is uniformly bounded in N, and

$$\begin{aligned} \mathcal {V}_{N,ext}:= & {} \mathcal {V}_N - \frac{75 |\mathcal {O}|}{16 \pi } \Big ( \frac{1}{(N - N_{ext})^2} \sum _{\begin{array}{c} i\ne j \\ i,j \notin I_{N,ext} \end{array}} g_S(x_i - x_j) \\&- \int _{\mathbb {R}^3 \times \mathbb {R}^3} g_S(x-y) f(x) f(y) \mathrm{d}x \mathrm{d}y \Big ) \end{aligned}$$

goes to zero as \(N\rightarrow +\infty \).

Proof: For any open set U, we denote \(\fint _U = \frac{1}{|U|} \int _U\).

Let \(d := \frac{c}{4} N^{-1/3} \leqq \min _{i \ne j}\frac{|x_i-x_j|}{4}\) by (H2). We write

$$\begin{aligned} \frac{1}{N^2} \sum _{i \ne j} g_S(x_i - x_j)&= \frac{1}{N^2} \sum _{i \ne j} \left( g_S(x_i - x_j) - \fint _{B(x_j,d)} g_S(x_i-y) \mathrm{d}y \right) \\&\quad + \frac{1}{N^2} \sum _{i \ne j} \left( \fint _{B(x_j,d)} g_S(x_i-y) \mathrm{d}y - \fint _{B(x_i,d)} \fint _{B(x_j,d)} g_S(x-y) \mathrm{d}x \mathrm{d}y \right) \\&\quad + \frac{1}{N^2} \sum _{i \ne j} \fint _{B(x_i,d)} \fint _{B(x_j,d)} g_S(x-y) \mathrm{d}x \mathrm{d}y \, := I + II + III. \end{aligned}$$

For the first term, with \(y_i := N^{1/3} x_i\) and with (H2) in mind, that is \(|y_i - y_j| \geqq c\) for \(i \ne j\), we get that

$$\begin{aligned} \bigl | g_S(x_i - x_j) - \fint _{B(x_j,d)} g_S(x_i-y) \mathrm{d}y \bigr |&\leqq \fint _{B(x_j,d)} \sup _{z \in [x_j, y]} |{\nabla }g_S(x_i-z)| \, |x_j - y| \, \mathrm{d}y \\&\leqq C N^{4/3}\frac{d}{(c + |y_i - y_j|)^4}; \end{aligned}$$

see (1.14). This yields, by a discrete convolution inequality,

$$\begin{aligned} |I| \leqq \frac{CN^{7/3}}{N^2} d \sup _i \sum _{j} \frac{1}{(c+|y_i- y_j|)^4} \leqq C' N^{1/3} d \leqq \mathcal {C}, \end{aligned}$$

where we have used that \(\sum _{j=1}^N \frac{1}{(c+|y_i - y_j|)^4}\) is uniformly bounded in N and in the index i thanks to the separation assumption: the points \(y_j\) being c-separated, the number of them in the annulus \(\{ k \leqq |y - y_i| < k+1 \}\) is \(O((1+k)^2)\) (with a constant depending on c), so that the sum is bounded by \(C_c \sum _{k \geqq 0} \frac{(1+k)^2}{(c+k)^4} < +\infty \). By similar arguments, \( | II | \leqq \mathcal {C}\). As regards the last term, we notice that

$$\begin{aligned} |III|\leqq & {} \frac{1}{N^2 d^6} \bigl | \int _{\mathbb {R}^3 \times \mathbb {R}^3} g_S(x-y) F_N(x) F_N(y) \mathrm{d}x \mathrm{d}y\\&- \sum _{i=1}^N \int _{\mathbb {R}^3 \times \mathbb {R}^3} g_S(x-y) 1_{B(x_i,d)}(x) 1_{B(x_i,d)}(y) \mathrm{d}x \mathrm{d}y \bigr |, \end{aligned}$$

where \(F_N = \sum _{i=1}^N 1_{B(x_i,d)}\). The operator \(\mathcal {T} F(x) = \int g_S(x-y) F(y) \mathrm{d}y\) is a Calderón–Zygmund operator, and therefore continuous over \(L^2\). As \(F_N^2 = F_N\) (the balls are disjoint), we find that the \(L^2\) norm of \(F_N\) is of order \((N d^3)^{1/2}\) and

$$\begin{aligned} \bigl | \int _{\mathbb {R}^3 \times \mathbb {R}^3} g_S(x-y) F_N(x) F_N(y) \mathrm{d}x \mathrm{d}y \bigr | \leqq \Vert \mathcal {T}\Vert \Vert F_N \Vert _{L^2}^2 \leqq C \Vert \mathcal {T}\Vert N d^3. \end{aligned}$$

Similarly,

$$\begin{aligned} \sum _{i=1}^N \bigl | \int _{\mathbb {R}^3 \times \mathbb {R}^3} g_S(x-y) 1_{B(x_i,d)}(x) 1_{B(x_i,d)}(y) \mathrm{d}x \mathrm{d}y \bigr | \leqq C N \Vert \mathcal {T}\Vert d^3. \end{aligned}$$

It follows that \(|III| \leqq \frac{C}{N d^3}\). With our choice of d, the first part of the lemma is proved.

From there, to prove that \(\mathcal {V}_{N,ext}\) goes to zero, as \(N_{ext} = o(N)\), it is enough to show that

$$\begin{aligned} \frac{1}{N^2} \big ( \sum _{i \ne j} g_S(x_i - x_j) - \,\, \sum _{\begin{array}{c} i\ne j, \\ i,j \notin I_{N,ext} \end{array}} \,\, g_S(x_i - x_j) \big ) \, \rightarrow \, 0. \end{aligned}$$

By symmetry, it is enough to show that

$$\begin{aligned} \frac{1}{N^2} \,\, \sum _{\begin{array}{c} i \ne j, \\ i \in I_{N,ext} \end{array}} \,\, g_S(x_i - x_j) \, \rightarrow \, 0. \end{aligned}$$

This can be shown by a similar decomposition as the previous one. Namely,

$$\begin{aligned} \frac{1}{N^2} \sum _{\begin{array}{c} i \ne j \\ i \in I_{N,ext} \end{array}} g_S(x_i - x_j)&= \frac{1}{N^2} \sum _{\begin{array}{c} i \ne j \\ i \in I_{N,ext} \end{array}} \left( g_S(x_i - x_j) - \fint _{B(x_j,d)} g_S(x_i-y) \mathrm{d}y \right) \\&\quad + \frac{1}{N^2} \sum _{\begin{array}{c} i \ne j \\ i \in I_{N,ext} \end{array}} \left( \fint _{B(x_j,d)} g_S(x_i-y) \mathrm{d}y - \fint _{B(x_i,d)} \fint _{B(x_j,d)} g_S(x-y) \mathrm{d}x \mathrm{d}y \right) \\&\quad + \frac{1}{N^2} \sum _{\begin{array}{c} i \ne j \\ i \in I_{N,ext} \end{array}} \fint _{B(x_i,d)} \fint _{B(x_j,d)} g_S(x-y) \mathrm{d}x \mathrm{d}y \, := I_{ext} + II_{ext} + III_{ext}. \end{aligned}$$

Proceeding as above, we find this time that

$$\begin{aligned} | I_{ext} | + | II_{ext} | + | III_{ext} | \leqq \mathcal {C} \frac{N_{ext}}{N} \rightarrow 0 \quad \text {as } N \rightarrow +\infty , \end{aligned}$$

which concludes the proof. \(\square \)

Remark 3.2

By Lemma 3.1, there is no loss of generality in assuming (3.1) when studying the asymptotic behaviour of \(\mathcal {V}_N\). Therefore, we make the assumption (3.1) from now on.

As explained in the introduction, the analysis of \(\mathcal {V}_N\) will rely on the mathematical methods introduced over the last years for Coulomb gases, the core problem being the analysis of a functional of the form (1.15). We shall first reexpress \(\mathcal {V}_N\) in a similar form. More precisely, we will show

Proposition 3.3

Denoting

$$\begin{aligned} \mathcal {W}_N := \frac{75 |\mathcal {O}|}{16\pi } \int _{\mathbb {R}^3 \times \mathbb {R}^3\setminus Diag} g_S(x-y) \big (\mathrm{d}\delta _N(x) - f(x) \mathrm{d}x\big ) \big (\mathrm{d}\delta _N(y) - f(y) \mathrm{d}y\big ) , \end{aligned}$$

we have \(\mathcal {V}_N = \mathcal {W}_N + \varepsilon (N)\) where \(\varepsilon (N) \rightarrow 0\) as \(N \rightarrow \infty .\)

Remark 3.4

In the definition of \(\mathcal {W}_N\), the integrals of the form

$$\begin{aligned}&\int _{\mathbb {R}^3 \times \mathbb {R}^3\setminus Diag} g_S(x-y) \mathrm{d}\delta _N(x) f(y) \mathrm{d}y, \, \\&\int _{\mathbb {R}^3 \times \mathbb {R}^3\setminus Diag} g_S(x-y) f(x) \mathrm{d}x \mathrm{d}\delta _N(y), \end{aligned}$$

which appear when expanding the product, are understood as

$$\begin{aligned}&\int _{\mathbb {R}^3 \times \mathbb {R}^3\setminus Diag} g_S(x-y) \mathrm{d}\delta _N(x) f(y) \mathrm{d}y = \frac{8\pi }{3}\frac{1}{N} \sum _{i=1}^N S {\nabla }\cdot \mathrm{St}^{-1} S{\nabla }f(x_i), \\&\int _{\mathbb {R}^3 \times \mathbb {R}^3\setminus Diag} g_S(x-y) f(x) \mathrm{d}x \mathrm{d}\delta _N(y) = \frac{8\pi }{3}\frac{1}{N} \sum _{i=1}^N S {\nabla }\cdot \mathrm{St}^{-1} S{\nabla }f(x_i), \end{aligned}$$

where \(\mathrm{St}\) is the Stokes operator; see (2.30) and the proof below for an explanation.

Proof

Clearly,

$$\begin{aligned} \mathcal {V}_N = \frac{75 |\mathcal {O}|}{16\pi } \int _{\mathbb {R}^3 \times \mathbb {R}^3\setminus Diag} g_S(x-y) \Big (\mathrm{d}\delta _N(x) \mathrm{d}\delta _N(y) - f(x) f(y) \mathrm{d}x \mathrm{d}y \Big ), \end{aligned}$$

so that, formally,

$$\begin{aligned} \mathcal {V}_N&= \mathcal {W}_N + \frac{75 |\mathcal {O}|}{16\pi } \int _{\mathbb {R}^3 \times \mathbb {R}^3\setminus Diag} g_S(x-y) (\mathrm{d}\delta _N(x) - f(x)\mathrm{d}x) f(y) \mathrm{d}y \\&\quad + \frac{75 |\mathcal {O}|}{16\pi } \int _{\mathbb {R}^3 \times \mathbb {R}^3\setminus Diag} g_S(x-y) f(x) \mathrm{d}x (\mathrm{d}\delta _N(y) -f(y) \mathrm{d}y). \end{aligned}$$

Note that it is not obvious that this formal decomposition makes sense, because all three quantities at the right-hand side involve integrals of \(g_S(x-y)\) against product measures of the form \(\mathrm{d} \delta _N(x) f(y) \mathrm{d}y\) (or the symmetric one), which may fail to converge because of the singularity of \(g_S\). To solve this issue, a rigorous path consists in approximating, at fixed N, each Dirac mass \(\delta _{x_i}\) by a (compactly supported) approximation of the identity \(\rho _\eta (x-x_i)\), where \(\eta > 0\) is the approximation parameter, which will eventually go to zero. One can then set, for each \(\eta \), \(\delta _N^\eta (x) := \frac{1}{N} \sum _{i=1}^N \rho _\eta (x-x_i)\), leading to the rigorous decomposition

$$\begin{aligned} \mathcal {V}_N^\eta&= \mathcal {W}_N^\eta + \frac{75 |\mathcal {O}|}{16\pi } \int _{\mathbb {R}^3 \times \mathbb {R}^3\setminus Diag} g_S(x-y) (\delta _N^\eta (x) \mathrm{d}x - f(x)\mathrm{d}x) f(y) \mathrm{d}y \\&\quad + \frac{75 |\mathcal {O}|}{16\pi } \int _{\mathbb {R}^3 \times \mathbb {R}^3\setminus Diag} g_S(x-y) f(x) \mathrm{d}x (\delta _N^\eta (y) \mathrm{d}y -f(y) \mathrm{d}y), \end{aligned}$$

where \(\mathcal {V}_N^\eta \), \(\mathcal {W}_N^\eta \) are deduced from \(\mathcal {V}_N\), \(\mathcal {W}_N\) replacing the empirical measure by its regularization. It is easy to show that \(\lim _{\eta \rightarrow 0} \mathcal {V}_N^\eta = \mathcal {V}_N\). To conclude the proof, we shall establish the following: first,

$$\begin{aligned} \lim _{\eta \rightarrow 0} \int _{\mathbb {R}^3 \times \mathbb {R}^3\setminus Diag} g_S(x-y) \delta _N^\eta (x) \mathrm{d}x f(y) \mathrm{d}y = \frac{8\pi }{3} \frac{1}{N} \sum _{i=1}^N S {\nabla }\cdot \mathrm{St}^{-1} S{\nabla }f(x_i); \end{aligned}$$
(3.2)

the same limit holding for the symmetric term. In particular, (3.2) will show that \(\mathcal {W}_N = \lim _{\eta \rightarrow 0} \mathcal {W}_N^\eta \) exists, in the sense given in Remark 3.4. Then, we will prove

$$\begin{aligned} \lim _{N \rightarrow +\infty } \frac{8\pi }{3} \frac{1}{N} \sum _{i=1}^N S {\nabla }\cdot \mathrm{St}^{-1} S{\nabla }f(x_i) = \int _{\mathbb {R}^3 \times \mathbb {R}^3\setminus Diag} g_S(x-y) f(x) f(y) \mathrm{d}x \mathrm{d}y, \end{aligned}$$
(3.3)

which, together with (3.2), will complete the proof of the proposition.

The limit (3.2) follows from identity (2.30). Indeed, for \(\eta > 0\), this formula yields

$$\begin{aligned} \int _{\mathbb {R}^3 \times \mathbb {R}^3\setminus Diag} g_S(x-y) \delta _N^\eta (x) \mathrm{d}x f(y) \mathrm{d}y = - \frac{8\pi }{3} \int _{\mathbb {R}^3} \mathrm{St}^{-1}(S {\nabla }f)(x) \cdot S {\nabla }\delta _N^\eta (x) \mathrm{d}x. \end{aligned}$$

Now, we remark that due to our assumptions on f, by elliptic regularity, \(h = \mathrm{St}^{-1}(S {\nabla }f)\) is \(C^1\) inside \(\mathcal {O}\). Moreover, in virtue of Remark 3.2, we can assume (3.1). Hence, as \(\eta \rightarrow 0\),

$$\begin{aligned} - \frac{8\pi }{3} \int _{\mathbb {R}^3} h(x) \cdot S {\nabla }\delta _N^\eta (x) \mathrm{d}x \, \rightarrow \, - \frac{8\pi }{3} \langle S {\nabla }\delta _N , h \rangle = \frac{8\pi }{3} \frac{1}{N} \sum _{i=1}^N S {\nabla }\cdot h(x_i). \end{aligned}$$

It remains to prove (3.3). In the special case where \(f \in C^r(\mathbb {R}^3)\) for some \(r \in (0,1)\) (implying that it vanishes at \({\partial }\mathcal {O}\)), classical results on Calderón–Zygmund operators yield that the function \(\int _{\mathbb {R}^3} g_S(x-y) f(x) \mathrm{d}x = \frac{8\pi }{3} S {\nabla }\cdot h(y)\) is a continuous (even Hölder) bounded function, so (H1) implies straightforwardly that

$$\begin{aligned}&\int _{(\mathbb {R}^3 \times \mathbb {R}^3)\setminus \text {Diag}} g_S(x-y) f(x) \mathrm{d}x (\mathrm{d}\delta _N(y) - f(y)\mathrm{d}y)\\&\quad = \int _{\mathbb {R}^3} \frac{8\pi }{3} S {\nabla }\cdot h(y) (\mathrm{d}\delta _N(y) - f(y)\mathrm{d}y) \rightarrow 0. \end{aligned}$$

In the general case where f is discontinuous across \({\partial }\mathcal {O}\), the proof is a bit more involved. The difficulty lies in the fact that some points \(x_i\) get closer to the boundary as \(N \rightarrow +\infty \).

Let \({\varepsilon }> 0\). Under (H2), there exists \(c' > 0\) (depending on c only) such that for \(N^{-1/3} \leqq {\varepsilon }\),

$$\begin{aligned} \left| \{ i, \, x_i \, \text {belongs to the } c' {\varepsilon }\text { neighborhood of } {\partial }\mathcal {O}\}\right| \leqq {\varepsilon }N. \end{aligned}$$
(3.4)

Let \(\chi _{\varepsilon }: \mathbb {R}^3 \rightarrow [0,1]\) be a smooth function such that \(\chi _{\varepsilon }=1\) in a \(c' {\varepsilon }/4\) neighborhood of \({\partial }\mathcal {O}\), \(\chi _{\varepsilon }=0\) outside a \(c' {\varepsilon }/2\) neighborhood of \({\partial }\mathcal {O}\). We write

$$\begin{aligned}&\int _{(\mathbb {R}^3 \times \mathbb {R}^3)\setminus \text {Diag}} g_S(x-y) f(x) \mathrm{d}x (\mathrm{d}\delta _N(y) - f(y)\mathrm{d}y) \\&\quad = \int _{(\mathbb {R}^3 \times \mathbb {R}^3)\setminus \text {Diag}} g_S(x-y) (\chi _{\varepsilon }f)(x) \mathrm{d}x (\mathrm{d}\delta _N(y) - f(y)\mathrm{d}y) \\&\qquad + \int _{(\mathbb {R}^3 \times \mathbb {R}^3)\setminus \text {Diag}} g_S(x-y) ((1- \chi _{\varepsilon })f)(x) \mathrm{d}x (\mathrm{d}\delta _N(y) - f(y)\mathrm{d}y). \end{aligned}$$

By formula (2.30), the second term reads as

$$\begin{aligned}&\int _{(\mathbb {R}^3 \times \mathbb {R}^3)\setminus \text {Diag}} g_S(x-y) ((1- \chi _{\varepsilon })f)(x) \mathrm{d}x (\mathrm{d}\delta _N(y) - f(y)\mathrm{d}y)\\&\quad = \frac{8\pi }{3} \int _{\mathbb {R}^3} S {\nabla }\cdot u_{\varepsilon }(y) \, (\mathrm{d}\delta _N(y) - f(y)\mathrm{d}y), \end{aligned}$$

with \(u_{\varepsilon }= \mathrm{St}^{-1} S {\nabla }((1- \chi _{\varepsilon }) f)\). The source term \((1- \chi _{\varepsilon }) f\) being \(C^1\) and compactly supported, \(S {\nabla }\cdot u_{\varepsilon }\) is Hölder and bounded, so that, as \(N \rightarrow +\infty \), the integral goes to zero by the weak convergence assumption (H1), for any fixed \({\varepsilon }> 0\). As regards the first term, we split it again into

$$\begin{aligned}&\int _{(\mathbb {R}^3 \times \mathbb {R}^3)\setminus \text {Diag}} g_S(x-y) (\chi _{\varepsilon }f)(x) \mathrm{d}x (\mathrm{d}\delta _N(y) - f(y)\mathrm{d}y) \\&\quad = \int _{(\mathbb {R}^3 \times \mathbb {R}^3)\setminus \text {Diag}} g_S(x-y) (\chi _{\varepsilon }f)(x) \mathrm{d}x \chi _{\varepsilon }(y)(\mathrm{d}\delta _N(y) - f(y)\mathrm{d}y) \\&\qquad + \int _{(\mathbb {R}^3 \times \mathbb {R}^3)\setminus \text {Diag}} g_S(x-y) (\chi _{\varepsilon }f)(x) \mathrm{d}x (1-\chi _{\varepsilon })(y) (\mathrm{d}\delta _N(y) - f(y)\mathrm{d}y) \\&\quad = \frac{8\pi }{3} \int _{\mathbb {R}^3} S{\nabla }\cdot v_{\varepsilon }(y) \chi _{\varepsilon }(y) (\mathrm{d}\delta _N(y) - f(y)\mathrm{d}y) \\&\qquad + \, \frac{8\pi }{3} \int _{\mathbb {R}^3} S{\nabla }\cdot v_{\varepsilon }(y) (1-\chi _{\varepsilon })(y) (\mathrm{d}\delta _N(y) - f(y)\mathrm{d}y), \end{aligned}$$

where \(v_{\varepsilon }\) is this time the solution of the Stokes equation with source \(S {\nabla }(\chi _{\varepsilon }f)\). The function \(S {\nabla }\cdot v_{\varepsilon }\) is Hölder continuous away from \({\partial }\mathcal {O}\), so that the last term at the right-hand side again goes to zero as \(N \rightarrow +\infty \), by assumption (H1).

It remains to handle the first term at the right-hand side. We shall show below that for a proper choice of \(\chi _{\varepsilon }\) one has

$$\begin{aligned} \Vert {\nabla }v_{\varepsilon }\Vert _{L^\infty } \leqq C, \quad C\text { independent of } {\varepsilon }. \end{aligned}$$
(3.5)

Taking advantage of this fact, we write

$$\begin{aligned}&\left| \frac{8\pi }{3} \int _{\mathbb {R}^3} S{\nabla }\cdot v_{\varepsilon }(y) \chi _{\varepsilon }(y) (\mathrm{d}\delta _N(y) - f(y)\mathrm{d}y) \right| \\&\quad \leqq \frac{8\pi }{3} \Vert S{\nabla }\cdot v_{\varepsilon }\Vert _{L^\infty (\mathbb {R}^3)} \left( \frac{1}{N} \left| \{ i, \, \chi _{\varepsilon }(x_i) >0 \}\right| + \Vert \chi _{\varepsilon }f \Vert _{L^1} \right) \leqq C {\varepsilon }, \end{aligned}$$

where we used property (3.4) to obtain the last inequality. With this bound and the convergence to zero of the other terms for fixed \({\varepsilon }\) and \(N \rightarrow +\infty \), the limit (3.3) follows.

We still have to show that \({\nabla }v^{\varepsilon }\) is uniformly bounded in \(L^\infty \) for a good choice of \(\chi _{\varepsilon }\). We borrow here from the analysis of vortex patches in the Euler equation, initiated by Chemin in 2-d [10] and extended by Gamblin and Saint-Raymond in 3-d [17]. First, as \(\mathcal {O}\) is smooth, one can find a family of five smooth divergence-free vector fields \(w_1, \dots , w_5\), tangent to \({\partial }\mathcal {O}\) and non-degenerate in the sense that

$$\begin{aligned} \inf _{x \in \mathbb {R}^3} \sum _{i \ne j} | w_i \times w_j | > 0; \end{aligned}$$

see [17, Proposition 3.2]. We take \(\chi _{\varepsilon }\) in the form \(\chi (t/{\varepsilon })\), for a coordinate t transverse to the boundary, meaning that \({\partial }_t\) is normal at \({\partial }\mathcal {O}\). With this choice and the assumptions on f, one checks easily that \(\chi _{\varepsilon }f\) is bounded uniformly in \({\varepsilon }\) in \(L^\infty (\mathbb {R}^3)\) and that for all i, \(w_i \cdot {\nabla }(\chi _{\varepsilon }f)\) is bounded uniformly in \({\varepsilon }\) in \(C^0(\mathbb {R}^3) \subset C^{r-1}(\mathbb {R}^3)\) for all \(r \in (0,1)\). Hence, the norm \(\Vert \chi _{\varepsilon }f\Vert _{r,W}\) introduced in [17, p. 395], where \(W = (w_1, \dots , w_5)\), is bounded uniformly in \({\varepsilon }\).

We then split the Stokes system

$$\begin{aligned} -\Delta v_{\varepsilon }+ {\nabla }p_{\varepsilon }= S {\nabla }(\chi _{\varepsilon }f), \quad \hbox {div}~v_{\varepsilon }= 0 \end{aligned}$$

into the equations

$$\begin{aligned} \hbox {curl} ~v_{\varepsilon }= \Omega _{\varepsilon }, \quad \hbox {div}~v_{\varepsilon }= 0 \end{aligned}$$

and

$$\begin{aligned} -\Delta \Omega _{\varepsilon }= \hbox {curl} ~S {\nabla }(\chi _{\varepsilon }f). \end{aligned}$$

Let us show that \({\partial }_i{\partial }_j \Delta ^{-1} (\chi _{\varepsilon }f)\) is bounded uniformly in \({\varepsilon }\) in \(L^\infty \). Let \(\chi \in C^\infty _c(\mathbb {R}^3)\), \(\chi \geqq 0\), \(\chi =1\) near zero. For all \(m \in \mathbb {R}\), let \(\Lambda ^m(\xi ) := (\chi (\xi ) + |\xi |^2)^{m/2}\). It is easily seen through the Fourier transform that, for all \(s \in \mathbb {N}\),

$$\begin{aligned} \Vert {\partial }_i{\partial }_j \chi (D) \Lambda ^{-2}(D) \Delta ^{-1} (\chi _{\varepsilon }f) \Vert _{H^s} \leqq C_s \Vert \chi _{\varepsilon }f\Vert _{L^2} \leqq C'_s. \end{aligned}$$
(3.6)

Moreover, by the calculations in [17, p. 401], replacing \(\omega \) with \(\chi _{\varepsilon }f\), we get

$$\begin{aligned} \Vert {\partial }_i{\partial }_j \Lambda ^{-2}(D) (\chi _{\varepsilon }f) \Vert _{L^\infty } \leqq C \Vert \chi _{\varepsilon }f \Vert _{L^\infty } \ln (2+\frac{\Vert \chi _{\varepsilon }f\Vert _{r,W}}{ \Vert \chi _{\varepsilon }f \Vert _{L^\infty }}) \leqq C'_r, \quad \forall 0< r < 1. \end{aligned}$$
(3.7)

Combining (3.6) and (3.7), we find that

$$\begin{aligned} {\partial }_i{\partial }_j \Delta ^{-1} (\chi _{\varepsilon }f) {=} {\partial }_i{\partial }_j \left( \chi (D) \Lambda ^{-2}(D) \Delta ^{-1} {+} \Lambda ^{-2}(D) \right) (\chi _{\varepsilon }f) \end{aligned}$$

is bounded uniformly in \({\varepsilon }\) in \(L^\infty \), and consequently that

$$\begin{aligned} \Vert \Omega _{\varepsilon }\Vert _{L^\infty } \leqq C. \end{aligned}$$

Also, by continuity of Riesz transforms over \(L^p\), we have

$$\begin{aligned} \forall 1< p < \infty , \quad \Vert \Omega _{\varepsilon }\Vert _{L^p} \leqq C_p \Vert \chi _{\varepsilon }f\Vert _{L^p} \leqq C'_p. \end{aligned}$$

Now, applying \(w_k \cdot {\nabla }\) to the equation satisfied by \(\Omega _{\varepsilon }\), we obtain for all \(1 \leqq k \leqq 5\),

$$\begin{aligned} -\Delta ( w_k \cdot {\nabla }\Omega _{\varepsilon })&= \hbox {curl} ~S {\nabla }( w_k \cdot {\nabla }(\chi _{\varepsilon }f)) + [ w_k \cdot {\nabla }, \hbox {curl} ~S {\nabla }] (\chi _{\varepsilon }f) + [ w_k \cdot {\nabla }, \Delta ] \Omega _{\varepsilon }\nonumber \\&= \sum _{i,j} {\partial }_{i} {\partial }_j F_{i,j,{\varepsilon }} + \sum _{i} {\partial }_{i} G_{i,{\varepsilon }} + H_{\varepsilon }, \end{aligned}$$
(3.8)

where \(F_{i,j,{\varepsilon }}\), \(G_{i,{\varepsilon }}\) and \(H_{\varepsilon }\) are combinations of \(\Omega _{\varepsilon }\), \(\chi _{\varepsilon }f\) and \(w_k \cdot {\nabla }(\chi _{\varepsilon }f)\). In particular, they are bounded uniformly in \({\varepsilon }\) in \(L^\infty \cap L^p\), for any \(1< p < \infty \).

For the first term at the r.h.s., we write, with the same cut-off function \(\chi \) as before,

$$\begin{aligned} (-\Delta )^{-1} \sum _{i,j} {\partial }_{i} {\partial }_j F_{i,j,{\varepsilon }} {=} \chi (D) (-\Delta )^{-1} \sum _{i,j} {\partial }_{i} {\partial }_j F_{i,j,{\varepsilon }} {+} (1-\chi (D)) \sum _{i,j} {\partial }_{i} {\partial }_j F_{i,j,{\varepsilon }}. \end{aligned}$$

By continuity of \( (-\Delta )^{-1} {\partial }_{i} {\partial }_j\) over \(L^2\), the first term, with low frequencies, belongs to \(H^s\) for any s, with uniform bound in \({\varepsilon }\). By the continuity of \((1-\chi (D)) (-\Delta )^{-1} {\partial }_{i} {\partial }_j\) over Hölder spaces (Coifman-Meyer theorem), the second term, with high frequencies, is uniformly bounded in \({\varepsilon }\) in \(C^{r-1}(\mathbb {R}^3)\), for any \(0< r < 1\).

For the second and third terms in (3.8), we claim that

$$\begin{aligned} \Vert (-\Delta )^{-1} \sum _{i} {\partial }_{i} G_{i,{\varepsilon }}\Vert _{L^\infty } \leqq C, \quad \Vert (-\Delta )^{-1} H_{{\varepsilon }}\Vert _{L^\infty } \leqq C. \end{aligned}$$

This can be easily seen by expressing these fields as \( \sum _{i} {\partial }_{i} \Phi \star G_{i,{\varepsilon }}\) and \(\Phi \star H_{\varepsilon }\) with \(\Phi \) the fundamental solution, and by using the uniform \(L^p\) bounds on \(G_{i,{\varepsilon }}\) and \(H_{\varepsilon }\). Eventually, we find that

$$\begin{aligned} \Vert w_k \cdot {\nabla }\Omega _{\varepsilon }\Vert _{C^{r-1}} \leqq C_r, \quad \forall 1 \leqq k \leqq 5, \quad \forall 0< r < 1. \end{aligned}$$

We conclude by [17, Proposition 3.3] that \({\nabla }v_{\varepsilon }\) is bounded in \(L^\infty (\mathbb {R}^3)\) uniformly in \({\varepsilon }\). \(\square \)

3.1 Smoothing

By Proposition 3.3, we are left with understanding the asymptotic behaviour of

$$\begin{aligned} \mathcal {W}_N := \frac{75 |\mathcal {O}|}{16\pi } \int _{\mathbb {R}^3 \times \mathbb {R}^3\setminus Diag} g_S(x-y) \big (\mathrm{d}\delta _N(x) - f(x) \mathrm{d}x\big ) \big (\mathrm{d}\delta _N(y) - f(y) \mathrm{d}y\big ). \end{aligned}$$
(3.9)

The following field will play a crucial role: for (U, Q) defined in (2.26), we set

$$\begin{aligned} G_S(x) \, := \, S_{kl} {\partial }_k U_l(x), \quad p_S(x) = S_{kl} {\partial }_k Q_l(x). \end{aligned}$$
(3.10)

From (2.27), we have \( g_S = \frac{8\pi }{3} \, (S {\nabla }) \cdot G_S\), and \(G_S\) solves, in the sense of distributions,

$$\begin{aligned} -\Delta G_S + {\nabla }p_S = S {\nabla }\delta , \quad \hbox {div}~G_S = 0 \, \text { in } \, \mathbb {R}^3. \end{aligned}$$
(3.11)

Moreover, from the explicit expression

$$\begin{aligned} U_l(x) = \frac{1}{8\pi } \left( \frac{1}{|x|} e_l + \frac{x_l}{|x|^3} x \right) , \quad Q_l(x) = \frac{1}{4\pi } \frac{x_l}{|x|^3}, \end{aligned}$$

and taking into account the fact that S is symmetric and trace-free, we get

$$\begin{aligned} G_S(x) = - \frac{3}{8\pi } S_{kl} x_l x_k \frac{x}{|x|^5} = - \frac{3}{8\pi } (Sx\cdot x) \frac{x}{|x|^5}, \quad p_S(x) = - \frac{3}{4\pi } \frac{(Sx \cdot x)}{|x|^5}. \end{aligned}$$
(3.12)
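
For the reader's convenience, here is the short computation behind (3.12). With \(G_S = S_{kl} {\partial }_k U_l\) and the explicit form of \(U_l\) above, one has, component by component,

$$\begin{aligned} S_{kl} {\partial }_k \Big ( \frac{\delta _{lj}}{|x|} \Big ) = - \frac{(Sx)_j}{|x|^3}, \qquad S_{kl} {\partial }_k \Big ( \frac{x_l x_j}{|x|^3} \Big ) = \mathrm{tr}(S) \, \frac{x_j}{|x|^3} + \frac{(Sx)_j}{|x|^3} - 3 \frac{(Sx \cdot x) \, x_j}{|x|^5}, \end{aligned}$$

so that, S being symmetric and trace-free, the terms in \(|x|^{-3}\) cancel and only the stresslet term \(-\frac{3}{8\pi } (Sx\cdot x) \frac{x}{|x|^5}\) remains. The formula for \(p_S\) follows in the same way from \(Q_l\).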

Let us note that \(G_S\) is called a point stresslet in the literature, see [18]. It can be interpreted as the velocity field created in a fluid of viscosity 1 by a point particle whose resistance to a strain is given by \(-S\).

We now come back to the analysis of (3.9). Formal replacement of the function f in Lemma 2.2 by \(\delta _N - f\) yields the formula

$$\begin{aligned} ''\int _{\mathbb {R}^3 \times \mathbb {R}^3} g_S(x-y) \big (\mathrm{d}\delta _N(x) - f(x) \mathrm{d}x\big ) \big (\mathrm{d}\delta _N(y) - f(y) \mathrm{d}y\big ) = - \frac{16\pi }{3N^2} \int _{\mathbb {R}^3} |D(h_N)|^2 '', \end{aligned}$$
(3.13)

where

$$\begin{aligned} h_N(x):= & {} \sum _{i=1}^N G_S(x-x_i) - N \mathrm{St}^{-1} (S {\nabla }f)\nonumber \\= & {} \sum _{i=1}^N G_S(x-x_i) - N \int _{\mathbb {R}^3} G_S(x-y) f(y) \mathrm{d}y \end{aligned}$$
(3.14)

satisfies

$$\begin{aligned} - \Delta h_N + {\nabla }q_N = S{\nabla }\sum _i \delta _{x_i} \, - \, N S{\nabla }f, \quad \hbox {div}~h_N = 0 \quad \text {in } \mathbb {R}^3. \end{aligned}$$
(3.15)

The formula (3.13) is similar to the formula (1.16), and is just as abusive, as both sides are infinite. Still, by an appropriate regularization of the source term \(S{\nabla }\sum _i \delta _{x_i}\), we shall be able in the end to obtain a rigorous formula, convenient for the study of \(\mathcal {W}_N\). This regularization process is the purpose of the present paragraph.

For any \(\eta > 0\), we denote \(B_\eta = B(0,\eta )\), and define \(G_S^\eta \) by

$$\begin{aligned}&G_S^\eta = G_S, \, p_S^\eta = p_S \, \text { outside } B_\eta , \end{aligned}$$
(3.16)
$$\begin{aligned}&- \Delta G_S^\eta + {\nabla }p_S^\eta = 0, \quad \hbox {div}\ G_S^\eta = 0, \quad G_S^\eta \vert _{{\partial }B_\eta } = G_S\vert _{{\partial }B_\eta } \, \text { in } B_\eta . \end{aligned}$$
(3.17)

Note that, by homogeneity,

$$\begin{aligned} G_S^\eta (x) = \frac{1}{\eta ^2} G_S^1(x/\eta ). \end{aligned}$$
(3.18)
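
For later use (in particular in the proof of Proposition 3.8 below), let us record a consequence of (3.18): since \(G_S^1\) solves a Stokes system in \(B_1\) with smooth boundary data, \({\nabla }G_S^1\) is bounded over \(B_1\), and differentiation of (3.18) gives

$$\begin{aligned} {\nabla }G_S^\eta (x) = \frac{1}{\eta ^3} {\nabla }G_S^1(x/\eta ), \quad \text {so that} \quad \Vert {\nabla }G_S^\eta \Vert _{L^\infty (B_\eta )} + \Vert S {\nabla }\cdot G_S^\eta \Vert _{L^\infty (B_\eta )} \leqq \frac{C}{\eta ^3}, \end{aligned}$$

with C depending only on S.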

The field \(G_S^\eta \) belongs to \(\dot{H}^1(\mathbb {R}^3)\), and solves

$$\begin{aligned} - \Delta G_S^\eta + {\nabla }p_S^\eta = S^\eta , \end{aligned}$$
(3.19)

where \(S^\eta \) is the measure on the sphere defined by

$$\begin{aligned} S^\eta := - \left[ 2D(G_S^\eta ) n - p_S^\eta n \right] \, s^\eta = - \left[ {\partial }_n G_S^\eta - p_S^\eta n \right] \, s^\eta , \end{aligned}$$
(3.20)

with \(n=\frac{x}{|x|}\) the unit normal vector pointing outward \(B_\eta \), \(\left[ F\right] := F\vert _{{\partial }B_\eta ^+} - F\vert _{{\partial }B_\eta ^-}\) the jump at \({\partial }B_\eta \) (with \({\partial }B_\eta ^+\), resp. \({\partial }B_\eta ^-\), the outer, resp. inner boundary of the ball), and \(s^\eta \) the standard surface measure on \(\partial B_\eta \). We claim the following:

Lemma 3.5

For all \(\eta > 0\), \(\, S^\eta = \mathrm{div} \, \Psi ^\eta \,\) in \(\mathbb {R}^3\), where

$$\begin{aligned} \Psi ^\eta&:= \frac{3}{\pi \eta ^5} \left( Sx \otimes x + x \otimes Sx - 5\frac{|x|^2}{2} S + \frac{5}{4} \eta ^2 S \right) \nonumber \\&\quad - 2 D(G^\eta _S)(x) + p^\eta _S(x) \mathrm{Id}, \quad x \in B_\eta ,\nonumber \\ \Psi ^\eta&:= 0 \text { outside}. \end{aligned}$$
(3.21)

Moreover, \(\Psi ^\eta \rightarrow S\delta \) in the sense of distributions as \(\eta \rightarrow 0\), so that \(S^\eta \rightarrow S{\nabla }\delta \).

Proof of the lemma.

From the explicit formula (3.12) for \(G_S\) and \(p_S\), we find

$$\begin{aligned} 2D(G_S) = -\frac{3}{4\pi } \frac{Sx \otimes x + x \otimes Sx}{|x|^5} + \frac{15}{4\pi } \frac{(Sx \cdot x) x \otimes x}{|x|^7} - \frac{3}{4\pi } \frac{Sx\cdot x}{|x|^5} \mathrm{Id}, \end{aligned}$$

so that

$$\begin{aligned} (2D(G_S^\eta )n - p^\eta _S n)\vert _{{\partial }B_\eta ^+} = (2D(G_S)n - p_S n)\vert _{{\partial }B_\eta ^+} = \frac{3}{4\pi |\eta |^3} \left( 4 (Sn \cdot n) n - Sn \right) . \end{aligned}$$
(3.22)

Using that S is trace-free, one can check from definition (3.21) that \(\hbox {div}~\Psi ^\eta = 0\) in the complement of \({\partial }B_\eta \), while

$$\begin{aligned}{}[\Psi ^\eta n]&= -\Psi ^\eta n\vert _{{\partial }B_\eta ^-} \\&= \frac{3}{\pi \eta ^3} \Big ( (Sn \otimes n)n + (n \otimes Sn)n - \frac{5}{4} Sn \Big ) - (2 D(G^\eta _S)n + p^\eta _S n)\vert _{{\partial }B_\eta ^-} \\&= (2D(G_S^\eta )n - p^\eta _S n)\vert _{{\partial }B_\eta ^+} - (2 D(G^\eta _S)n + p^\eta _S n)\vert _{{\partial }B_\eta ^-}, \end{aligned}$$

where the last equality comes from (3.22). Together with (3.20), this implies the first claim of the lemma.

To compute the limit of \(\Psi ^\eta \) as \(\eta \rightarrow 0\), we write \(\Psi ^\eta = \Psi ^\eta _1 + \Psi ^\eta _2\), with

$$\begin{aligned}&\Psi _1^\eta = \frac{3}{\pi \eta ^5} \left( Sx \otimes x + x \otimes Sx - 5\frac{|x|^2}{2} S + \frac{5}{4} \eta ^2 S \right) ,\\&\quad \Psi _2^\eta = - 2 D(G^\eta _S)(x) + p^\eta _S(x) \mathrm{Id}. \end{aligned}$$

Let \(\varphi \in C^\infty _c(\mathbb {R}^3)\) be a test function. We can write \( \langle \Psi _1^\eta , \varphi \rangle = \langle \Psi _1^\eta , \varphi (0) \rangle + \langle \Psi _1^\eta , \varphi - \varphi (0) \rangle . \) The second term is \(O(\eta )\), while the first term can be computed using the elementary formula \(\int _{B_1} x_i x_j \mathrm{d}x = \frac{4\pi }{15} \delta _{ij}\). We find

$$\begin{aligned} \lim _{\eta \rightarrow 0} \langle \Psi _1^\eta , \varphi \rangle = \frac{3}{5} S \varphi (0) = \langle \frac{3}{5}S \delta , \varphi \rangle . \end{aligned}$$
(3.23)
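
Let us detail this computation. By the formula \(\int _{B_1} x_i x_j \mathrm{d}x = \frac{4\pi }{15} \delta _{ij}\) above (which gives \(\int _{B_\eta } x_i x_j \mathrm{d}x = \frac{4\pi }{15} \eta ^5 \delta _{ij}\)), together with \(\int _{B_\eta } |x|^2 \mathrm{d}x = \frac{4\pi }{5}\eta ^5\) and \(|B_\eta | = \frac{4\pi }{3}\eta ^3\),

$$\begin{aligned} \int _{B_\eta } \Psi _1^\eta \, \mathrm{d}x = \frac{3}{\pi \eta ^5} \Big ( \frac{8\pi }{15} \eta ^5 S - \frac{5}{2} \, \frac{4\pi }{5} \eta ^5 S + \frac{5}{4} \eta ^2 \, \frac{4\pi }{3} \eta ^3 S \Big ) = 3 \Big ( \frac{8}{15} - 2 + \frac{5}{3} \Big ) S = \frac{3}{5} S, \end{aligned}$$

which yields (3.23), since \(\langle \Psi _1^\eta , \varphi (0) \rangle = \big ( \int _{B_\eta } \Psi _1^\eta \, \mathrm{d}x \big ) \varphi (0)\).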

For the second term, using the homogeneity (3.18), we find again that \(\lim _{\eta } \langle \Psi _2^\eta , \varphi \rangle = \langle \Psi _2^1 , \varphi (0) \rangle \). Note that the pressure \(p_S^1\) is defined up to a constant, so that we can always select the one with zero average. With this choice, we find

$$\begin{aligned} \langle \Psi _2^1 , \varphi (0) \rangle&= \int _{B_1} \big ( - 2 D(G^1_S) + p^1_S\mathrm{Id} \big ) \varphi (0) = - 2 \int _{B_1} D(G^1_S) \, \varphi (0) \nonumber \\&= - \int _{{\partial }B_1} (n \otimes G^1_S + G^1_S \otimes n) \, \varphi (0) = - \int _{{\partial }B_1} (n \otimes G_S + G_S \otimes n) \, \varphi (0) \nonumber \\&= \frac{3}{4\pi } \int _{{\partial }B_1} (S n \cdot n) n \otimes n \, \varphi (0) = \frac{2}{5}S \varphi (0) = \langle \frac{2}{5}S \delta , \varphi \rangle , \end{aligned}$$
(3.24)

where the sixth equality comes from the elementary formula \(\int _{{\partial }B_1} n_i n_j n_k n_l \mathrm{d}s^1= \frac{4\pi }{15} (\delta _{ij} \delta _{kl} + \delta _{ik} \delta _{jl} + \delta _{il} \delta _{jk})\). The result follows. \(\quad \square \)

For later purposes, we also prove here

Lemma 3.6

$$\begin{aligned} \int _{{\partial }B_\eta } G_S^\eta d S^\eta = \int _{{\partial }B_\eta } G_S d S^\eta = \frac{1}{\eta ^3}\left( \int _{B^1} |{\nabla }G_S^1|^2 + \frac{3}{10\pi } |S|^2 \right) . \end{aligned}$$

Proof

$$\begin{aligned} \int _{{\partial }B_\eta } G_S^\eta d S^\eta&= \int _{{\partial }B_\eta } G_S^\eta \left( {\partial }_n G_S^\eta - p_S^\eta n \right) \vert _{{\partial }B_\eta ^-} \mathrm{d}s^\eta - \int _{{\partial }B_\eta } G_S^\eta \left( {\partial }_n G_S^\eta - p_S^\eta n \right) \vert _{{\partial }B_\eta ^+} \mathrm{d}s^\eta \\&= \int _{B_\eta } |{\nabla }G^\eta _S|^2 \mathrm{d}x - \int _{{\partial }B_\eta } G_S \left( {\partial }_r G_S - p_S e_r \right) \vert _{{\partial }B_\eta } \mathrm{d}s^\eta . \end{aligned}$$

By (3.18), \(\int _{B_\eta } |{\nabla }G^\eta _S|^2 \mathrm{d}x = \frac{1}{\eta ^3} \int _{B_1} |{\nabla }G^1_S|^2 \mathrm{d}x\). The second term can be computed with (3.12):

$$\begin{aligned}&\int _{{\partial }B_\eta } G_S \left( {\partial }_r G_S - p_S e_r \right) \vert _{{\partial }B_\eta } \mathrm{d}s^\eta = \int _{{\partial }B_\eta } \left( -\frac{3}{8\pi \eta ^2} (S n \cdot n ) n \right) \, \left( \frac{3}{2\pi \eta ^3} (S n \cdot n) n \right) \mathrm{d}s^\eta \\&\quad = - \frac{9}{16\pi ^2 \eta ^3} \int _{{\partial }B_1} (S n \cdot n)^2 \mathrm{d}s^1 = -\frac{3}{10\pi \eta ^3} |S|^2. \end{aligned}$$
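
The last equality uses the elementary formula already used in the proof of Lemma 3.5: indeed,

$$\begin{aligned} \int _{{\partial }B_1} (S n \cdot n)^2 \mathrm{d}s^1 = S_{ij} S_{kl} \int _{{\partial }B_1} n_i n_j n_k n_l \, \mathrm{d}s^1 = \frac{4\pi }{15} \big ( (\mathrm{tr}\, S)^2 + 2 |S|^2 \big ) = \frac{8\pi }{15} |S|^2, \end{aligned}$$

so that \(\frac{9}{16\pi ^2 \eta ^3} \times \frac{8\pi }{15} |S|^2 = \frac{3}{10 \pi \eta ^3} |S|^2\).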

\(\square \)

3.2 The Renormalized Energy

Thanks to the regularization of \(S{\nabla }\delta \) introduced in the previous paragraph, cf. Lemma 3.5, we shall be able to give a rigorous alternative to the abusive formula (3.13). Specifically, we shall state an identity involving \(\mathcal {W}_N\), defined in (3.9), and the energy of the function

$$\begin{aligned} h_N^\eta (x):= & {} \sum _{i=1}^N G_S^\eta (x-x_i) - N \mathrm{St}^{-1} (S {\nabla }f)\nonumber \\= & {} \sum _{i=1}^N G_S^\eta (x-x_i) - N \int _{\mathbb {R}^3} G_S(x-y) f(y) \mathrm{d}y. \end{aligned}$$
(3.25)

This function solves

$$\begin{aligned} - \Delta h_N^\eta + {\nabla }p_N^\eta = \sum _{i=1}^N S^\eta (x-x_i) - N S {\nabla }f, \quad \hbox {div}\, h_N^\eta = 0, \end{aligned}$$
(3.26)

and is a regularization of \(h_N\), cf. (3.14)–(3.15).

The main result of this section is

Proposition 3.7

$$\begin{aligned} \mathcal {W}_N = - \frac{25|\mathcal {O}|}{2N^2} \lim _{\eta \rightarrow 0} \left( \int _{\mathbb {R}^3} | {\nabla }h_{N}^\eta |^2 - \frac{N}{\eta ^3}( \int _{B^1} |{\nabla }G_S^1|^2 + \frac{3}{10\pi } |S|^2 ) \right) . \end{aligned}$$
(3.27)

Proof

We assume that \(\eta \) is small enough so that \(2 \eta < \min _{i \ne j} |x_i - x_j|\). From the explicit expressions (3.14), (3.25), we find that \(h_N, h_N^\eta = O(|x|^{-2})\), \({\nabla }(h_N, h_N^\eta ) = O(|x|^{-3})\) and \(p_N, p_N^\eta = O(|x|^{-3})\) at infinity. As these quantities decay fast enough, we can perform an integration by parts to find

$$\begin{aligned} \int _{\mathbb {R}^3} | {\nabla }h_{N}^\eta |^2&= \langle -\Delta h_N^\eta , h_N^\eta \rangle = \langle -\Delta h_N^\eta + {\nabla }p_N^\eta , h_N^\eta \rangle \\&= \langle \sum _i S^\eta (x-x_i) - N S {\nabla }f , h_N \rangle \\&\quad + \langle \sum _i S^\eta (x-x_i) - N S {\nabla }f , h_N^\eta - h_N \rangle \\&= \sum _i \langle S^\eta (x-x_i), h_N^i \rangle + \sum _i \langle S^\eta (x-x_i), G_S(x-x_i) \rangle - \langle N S {\nabla }f , h_N \rangle \,\\&\quad + \, \langle \sum _i S^\eta (x-x_i) - N S {\nabla }f , h_N^\eta - h_N \rangle =: a + b + c + d, \end{aligned}$$

where we defined \(h_N^i := h_N - G_S(x-x_i)\).

As \(h_N^i\) is smooth over the support of \(S^\eta (\cdot - x_i)\), we can apply Lemma 3.5 to obtain

$$\begin{aligned} \lim _{\eta \rightarrow 0} a = - \sum _i S {\nabla }\cdot h^i_N(x_i). \end{aligned}$$

We can then apply Lemma 3.6 to obtain

$$\begin{aligned} b = \frac{N}{\eta ^3} ( \int _{B^1} |{\nabla }G_S^1|^2 + \frac{3}{10\pi } |S|^2 ). \end{aligned}$$

As regards the fourth term, we notice that by our definition (3.16)–(3.17) of \(G_S^\eta \), and the fact that the balls \(B(x_i, \eta )\) are disjoint, the function \(h_N - h_N^\eta = \sum _{i} (G_S(x-x_i) - G_S^\eta (x-x_i))\) is zero over \(\cup _i {\partial }B(x_i, \eta )\), which is the support of \(\sum _i S^\eta (x-x_i)\). It follows that

$$\begin{aligned} d = -N \langle S {\nabla }f , h_N^\eta - h_N \rangle&= N \sum _i \int _{B(x_i, \eta )} S{\nabla }\cdot G_S^\eta (x-x_i) \left( f(x) - f(x_i) \right) \mathrm{d}x \\&\quad - N \sum _i \int _{B(x_i, \eta )} S{\nabla }\cdot G_S(x-x_i) \left( f(x) - f(x_i) \right) \mathrm{d}x, \end{aligned}$$

where we integrated by parts, using that \(G_S - G_S^\eta \) is zero outside the balls. Let us notice that the second integral at the right-hand side converges despite the singularity of \(S{\nabla }\cdot G_S\), using the smoothness of f near \(x_i\) (by assumption (3.1) and Remark 3.2). Moreover, it goes to zero as \(\eta \rightarrow 0\). Using the homogeneity and smoothness properties of \(G_S^\eta \) inside \(B_\eta \), we also find that the first sum goes to zero with \(\eta \), resulting in

$$\begin{aligned} \lim _{\eta \rightarrow 0} d = 0. \end{aligned}$$

We end up with

$$\begin{aligned}&\lim _{\eta \rightarrow 0 } \left( \int _{\mathbb {R}^3} | {\nabla }h_{N}^\eta |^2 - \frac{N}{\eta ^3} ( \int _{B^1} |{\nabla }G_S^1|^2 + \frac{3}{10\pi } |S|^2 ) \right) \\&\quad = - \sum _i S {\nabla }\cdot h^i_N(x_i) - \langle N S {\nabla }f , h_N \rangle \end{aligned}$$

It remains to rewrite properly the right-hand side. We first get

$$\begin{aligned} - \sum _i S {\nabla }\cdot h^i_N(x_i)&= - \sum _{i \ne j} S {\nabla }\cdot G_S(x_i - x_j) + N \sum _{i} \int _{\mathbb {R}^3} S {\nabla }\cdot G_S(x_i-y) f(y) \mathrm{d}y \\&= - \frac{3N^2}{8\pi } \int _{\mathbb {R}^3 \times \mathbb {R}^3 \setminus \text {Diag}} g_S(x-y) d \delta _N(x) (\mathrm{d}\delta _N(y) - f(y) \mathrm{d}y), \end{aligned}$$

and, integrating by parts,

$$\begin{aligned} - \langle N S {\nabla }f , h_N \rangle&= N \int _{\mathbb {R}^3}S {\nabla }\cdot h_N(x) f(x) \mathrm{d}x \\&= N \int _{\mathbb {R}^3} \left( \sum _i S {\nabla }\cdot G_S(x-x_i) - N\int _{\mathbb {R}^3} S {\nabla }\cdot G_S(x - y) f(y) \mathrm{d}y \right) f(x) \mathrm{d}x \\&= \frac{3N^2}{8\pi } \int _{\mathbb {R}^3 \times \mathbb {R}^3} g_S(x-y) f(x) \mathrm{d}x (\mathrm{d}\delta _N(y) - f(y) \mathrm{d}y). \end{aligned}$$

The last equality was deduced from the identity \(g_S = \frac{8\pi }{3} \, (S {\nabla }) \cdot G_S\), see the line after (3.10). The proposition follows. \(\square \)

We can refine the previous proposition as follows:

Proposition 3.8

Let \(c > 0\) be the constant in (H2). There exists \(C > 0\) such that: for all \(\alpha< \eta < \frac{c}{2} N^{-1/3}\),

$$\begin{aligned} \Bigl | \int _{\mathbb {R}^3} | {\nabla }h_{N}^\eta |^2 - \int _{\mathbb {R}^3} | {\nabla }h_{N}^\alpha |^2 - N \left( \frac{1}{\eta ^3} - \frac{1}{\alpha ^3} \right) ( \int _{B^1} |{\nabla }G_S^1|^2 + \frac{3}{10\pi } |S|^2 ) \Bigr | \leqq C N^2 \eta . \end{aligned}$$

Proof

One has from (3.25) that

$$\begin{aligned} h_{N}^\eta = h_{N}^\alpha + \sum _{i=1}^N (G_S^\eta - G_S^\alpha )(x-x_i). \end{aligned}$$

It follows that

$$\begin{aligned} \int _{\mathbb {R}^3} | {\nabla }h_{N}^\eta |^2 - \int _{\mathbb {R}^3} | {\nabla }h_{N}^\alpha |^2&= \sum _{i,j} \int _{\mathbb {R}^3 }{\nabla }(G_S^\eta - G_S^\alpha )(x-x_i) : {\nabla }(G_S^\eta - G_S^\alpha )(x-x_j) \\&\quad + 2 \sum _{i} \int _{\mathbb {R}^3} {\nabla }h_N^\alpha : {\nabla }(G_S^\eta - G_S^\alpha )(x-x_i). \end{aligned}$$

After integration by parts,

$$\begin{aligned}&\int _{\mathbb {R}^3 }{\nabla }(G_S^\eta - G_S^\alpha )(\cdot -x_i) : {\nabla }(G_S^\eta - G_S^\alpha )(\cdot -x_j)\\&\quad = \langle (S^\eta - S^\alpha )(\cdot -x_i) , (G_S^\eta - G_S^\alpha )(\cdot - x_j) \rangle , \end{aligned}$$

while

$$\begin{aligned} \int _{\mathbb {R}^3 } {\nabla }h_N^\alpha : {\nabla }(G_S^\eta - G_S^\alpha )(x-x_i) = \langle \sum _j S^\alpha (\cdot - x_j) - N S {\nabla }f , (G_S^\eta - G_S^\alpha )(\cdot -x_i) \rangle . \end{aligned}$$

We get

$$\begin{aligned}&\int _{\mathbb {R}^3} | {\nabla }h_{N}^\eta |^2 - \int _{\mathbb {R}^3} | {\nabla }h_{N}^\alpha |^2 = \sum _{i \ne j} \langle (S^\alpha + S^\eta )(\cdot -x_i) , (G_S^\eta - G_S^\alpha )(\cdot - x_j) \rangle \nonumber \\&\quad - 2 \sum _i N \langle S {\nabla }f , (G_S^\eta - G_S^\alpha )(\cdot -x_i) \rangle \nonumber \\&\quad + \, N \langle (S^\alpha + S^\eta ) , (G_S^\eta - G_S^\alpha ) \rangle =: a + b + c. \end{aligned}$$
(3.28)

We note that \(G_S^\eta - G_S^\alpha \) is zero outside \(B_\eta \), while \(S^\alpha + S^\eta \) is supported in \(B_\eta \). Moreover, thanks to (H2), for \(\alpha< \eta < \frac{c}{2} N^{-1/3}\), the balls \(B(x_i, \eta )\) are disjoint. We deduce that \(a=0\).

After integration by parts, taking into account that \(G_S^\eta - G_S^\alpha \) vanishes outside \(B_\eta \), we can write \(b = b_\eta - b_\alpha \) with

$$\begin{aligned} b_\alpha&:= 2\sum _i N \int _{B(x_i, \eta )} S {\nabla }\cdot G_S^\alpha (\cdot -x_i) \, (f - f(x_i)) \\ b_\eta&:= 2\sum _i N \int _{B(x_i,\eta )} S {\nabla }\cdot G_S^\eta (\cdot -x_i) \, (f - f(x_i)). \end{aligned}$$

By assumption (3.1), for N large enough, for all \(1\leqq i \leqq N\) and all \(\eta \leqq \frac{c}{2} N^{-1/3}\), \(B(x_i, \eta )\) is included in \(\mathcal {O}\). Hence, f is \(C^{1}\) in \(B(x_i, \eta )\), and

$$\begin{aligned} \bigl | \int _{B(x_i, \eta )} S {\nabla }\cdot G_S^\eta (\cdot -x_i) \, (f - f(x_i)) \bigr | \leqq \frac{C}{\eta ^3} \Vert {\nabla }f\vert _{\mathcal {O}}\Vert _{\infty } \int _{B(x_i, \eta )} |x-x_i| \mathrm{d}x \leqq C \eta . \end{aligned}$$

This results in \(|b_\eta | \leqq C N^2 \eta \).

Similarly, decomposing \(B(x_i, \eta ) = B(x_i, \alpha ) \cup \Big ( B(x_i, \eta ) \setminus B(x_i, \alpha ) \Big )\), we find that

$$\begin{aligned} \bigl | \int _{B(x_i, \eta )} S {\nabla }\cdot G_S^\alpha (\cdot -x_i) \, (f - f(x_i))\bigr | \leqq C\left( \alpha + \int _{B(x_i, \eta )} \frac{1}{|x-x_i|^2} \mathrm{d}x\right) \leqq C' \eta , \end{aligned}$$

using again that f is Lipschitz over \(B(x_i, \eta )\). We end up with \(|b_\alpha | \leqq C N^2 \eta \), and finally \(|b| \leqq C N^2 \eta \).

For the last term c in (3.28), we first notice that as \(G_S^\eta - G_S^\alpha \) is zero outside \(B_\eta \):

$$\begin{aligned}&\langle (S^\alpha + S^\eta ) , (G_S^\eta - G_S^\alpha ) \rangle = \langle S^\alpha , (G_S^\eta - G_S^\alpha ) \rangle \nonumber \\&\quad = \langle S^\alpha , G_S^\eta \rangle \, - \, \langle S^\alpha , G_S \rangle \nonumber \\&\quad = \langle S^\alpha , G_S^\eta \rangle \, - \, \frac{1}{\alpha ^3}\left( \int _{B^1} |{\nabla }G_S^1|^2 + \frac{3}{10\pi } |S|^2 \right) , \end{aligned}$$
(3.29)

where we used Lemma 3.6 in the last line. By the definition of \(S^\alpha \), the remaining term splits into

$$\begin{aligned} \langle S^\alpha , G_S^\eta \rangle = - \int _{{\partial }B_\alpha ^+} \left( {\partial }_r G_S - p_S e_r \right) \cdot G_S^\eta \mathrm{d}s^\alpha \, + \, \int _{{\partial }B_\alpha ^-} \left( {\partial }_r G_S^\alpha - p_S^\alpha e_r \right) \cdot G_S^\eta \mathrm{d}s^\alpha . \end{aligned}$$

By integration by parts, applied in \(B_\eta \setminus B_\alpha \) for the first term and in \(B_\alpha \) for the second term, we get

$$\begin{aligned} \langle S^\alpha , G_S^\eta \rangle&= - \int _{{\partial }B_\eta ^-} \left( {\partial }_r G_S - p_S e_r \right) \cdot G_S^\eta \mathrm{d}s^\eta + \int _{B_\eta \setminus B_\alpha } {\nabla }G_S : {\nabla }G_S^\eta \\&\quad + \int _{B_\alpha } {\nabla }G_S^\alpha : {\nabla }G_S^\eta \\&= - \int _{{\partial }B_\eta } \left( {\partial }_r G_S - p_S e_r \right) \cdot G_S^\eta \mathrm{d}s^\eta + \int _{B_\eta } {\nabla }G_S^\alpha \cdot {\nabla }G_S^\eta \\&= - \int _{{\partial }B_\eta } \left( {\partial }_r G_S - p_S e_r \right) \cdot G_S^\eta \mathrm{d}s^\eta + \int _{{\partial }B_\eta ^-} G_S^\alpha \cdot \left( {\partial }_r G_S^\eta - p_S^\eta e_r \right) \\&= \langle S^\eta , G_S \rangle = \frac{1}{\eta ^3}\left( \int _{B^1} |{\nabla }G_S^1|^2 + \frac{3}{10\pi } |S|^2 \right) . \end{aligned}$$

From there, the conclusion follows easily. \(\square \)

If we let \(\alpha \rightarrow 0\) in Proposition 3.8, combining with Propositions 3.7 and 3.3, we find

Corollary 3.9

For all \( \eta < \frac{c}{2} N^{-1/3}\),

$$\begin{aligned} \Bigl | \mathcal {V}_N + \frac{25|\mathcal {O}|}{2N^2} \Bigl ( \int _{\mathbb {R}^3} | {\nabla }h_{N}^\eta |^2 - \frac{N}{\eta ^3} ( \int _{B^1} |{\nabla }G_S^1|^2 + \frac{3}{10\pi } |S|^2 ) \Bigr ) \Bigr | \leqq {\varepsilon }(N), \end{aligned}$$

where \({\varepsilon }(N) \rightarrow 0\) as \(N \rightarrow +\infty \).

This corollary shows that to understand the limit of \(\mathcal {V}_N\), it is enough to study the limit of

$$\begin{aligned} \frac{25|\mathcal {O}|}{2N^2} \Bigl ( \int _{\mathbb {R}^3} | {\nabla }h_{N}^{\eta _N} |^2 - \frac{N}{\eta _N^3} ( \int _{B^1} |{\nabla }G_S^1|^2 + \frac{3}{10\pi } |S|^2 ) \Bigr ) \end{aligned}$$

for \(\eta _N := \eta N^{-1/3}\), \(\eta < \frac{c}{2}\) fixed. For periodic and more general stationary point processes, this will be possible through a homogenization approach, which involves an analogue of a cell problem, called a jellium in the literature on Coulomb gases. We will motivate and introduce this system in the next section.

4 Blown-up System

Formula (3.27) suggests that one should first understand the behaviour of \(\int _{\mathbb {R}^3} |{\nabla }h_N^\eta |^2\) at fixed \(\eta \), as \(N \rightarrow +\infty \). To analyze the system (3.26), a useful intuition can be taken from classical homogenization problems of the form

$$\begin{aligned} - \Delta h_{\varepsilon }+ {\nabla }p_{\varepsilon }= S {\nabla }\left( \frac{1}{{\varepsilon }^3} F(x,x/{\varepsilon }) - \frac{1}{{\varepsilon }^3} \overline{F}(x) \right) , \quad \hbox {div}~h_{\varepsilon }= 0 \, \text { in a domain } \Omega , \quad h_{\varepsilon }\vert _{{\partial }\Omega } = 0, \end{aligned}$$
(4.1)

with F(x,y) periodic in the variable y, and \(\overline{F}(x) := \int _{\mathbb {T}^3} F(x,y) \mathrm{d}y\). Roughly, \(\Omega \) would be like \(\mathcal {O}\), the small scale \({\varepsilon }\) like \(N^{-1/3}\), the term \( \frac{1}{{\varepsilon }^3} F(x,x/{\varepsilon })\) would correspond to the sum of (regularized) Dirac masses, while the term \( \frac{1}{{\varepsilon }^3} \overline{F}\) would be an analogue of Nf. The factor \(\frac{1}{{\varepsilon }^3}\) in front of F is consistent with the fact that \(\sum _i \delta _{x_i}\) has mass N. The dependence on x of the source term in (4.1) mimics the possible macroscopic inhomogeneity of the point distribution \(\{x_i\}\).

In the much simpler model (4.1), standard arguments show that \(h_{\varepsilon }\) behaves like

$$\begin{aligned} h_{\varepsilon }(x) \approx \frac{1}{{\varepsilon }^2} H(x,x/{\varepsilon }), \end{aligned}$$
(4.2)

where H(x,y) satisfies the cell problem

$$\begin{aligned} - \Delta _y H(x,\cdot ) + {\nabla }_y P(x,\cdot ) = S {\nabla }_y F(x,\cdot ), \quad \mathrm{div}_y H(x,\cdot ) = 0, \quad y \in \mathbb {T}^3. \end{aligned}$$
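
At a purely formal level, the scalings can be read off the ansatz (4.2): each differentiation acting on the fast variable \(y = x/{\varepsilon }\) produces a factor \(\frac{1}{{\varepsilon }}\), so that

$$\begin{aligned} {\nabla }h_{\varepsilon }(x) \approx \frac{1}{{\varepsilon }^3} {\nabla }_y H(x,x/{\varepsilon }) + \frac{1}{{\varepsilon }^2} {\nabla }_x H(x,x/{\varepsilon }) \approx \frac{1}{{\varepsilon }^3} {\nabla }_y H(x,x/{\varepsilon }), \end{aligned}$$

which explains the normalization \({\varepsilon }^6 \int _{\Omega } |{\nabla }h_{\varepsilon }|^2\) considered below.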

Let us stress that subtracting the term \(\frac{1}{{\varepsilon }^3} \overline{F}(x)\) in the source term of (4.1) is crucial for the asymptotics (4.2) to hold. It follows that

$$\begin{aligned} {\varepsilon }^6 \int _{\Omega } |{\nabla }h_{\varepsilon }|^2 \approx \int _{\Omega } |{\nabla }_y H(x,x/{\varepsilon })|^2 \mathrm{d}x \xrightarrow [{\varepsilon }\rightarrow 0]{} \int _{\Omega } \int _{\mathbb {T}^3} |{\nabla }_y H(x,y)|^2 \mathrm{d}y \mathrm{d}x. \end{aligned}$$

Note that the factor \({\varepsilon }^6\) in front of the left-hand side is consistent with the factor \(\frac{1}{N^2}\) at the right-hand side of (3.27). Note also that

$$\begin{aligned} \int _{\mathbb {T}^3} |{\nabla }_y H(x,y)|^2 \mathrm{d}y = \lim _{R \rightarrow +\infty } \frac{1}{R^3} \int _{(-\frac{R}{2},\frac{R}{2})^3} |{\nabla }_y H(x,y)|^2 \mathrm{d}y. \end{aligned}$$

Such an average over larger and larger boxes may still be meaningful in more general settings, typically in stochastic homogenization.

Inspired by those remarks, and back to system (3.26), the hope is that some homogenization process may take place, at least locally near each \(x \in \mathcal {O}\). More precisely, we hope to recover \(\lim _{N} \mathcal {W}_N\) by summing over \(x \in \mathcal {O}\) some microscopic energy, locally averaged around x. This microscopic energy will be deduced from an analogue of the cell problem, called a jellium in the literature on the Ginzburg-Landau model and Coulomb gases.

4.1 Setting of the Problem

We will call a point distribution any locally finite subset of \(\mathbb {R}^3\). Given a point distribution \(\Lambda \), we consider the following problem in \(\mathbb {R}^3\):

$$\begin{aligned} - \Delta H + {\nabla }P&= \sum _{z \in \Lambda } S {\nabla }\delta _{-z} \nonumber \\ \hbox {div}H&= 0. \end{aligned}$$
(4.3)

Given a solution \(H = H(y)\), \(P = P(y)\), we introduce, for any \(\eta > 0\),

$$\begin{aligned} H^\eta := H + \sum _{z \in \Lambda } (G_S^\eta - G_S)(\cdot +z), \end{aligned}$$
(4.4)

which satisfies, by (3.11), (3.19), that

$$\begin{aligned} - \Delta H^\eta + {\nabla }P^\eta&= \sum _{z \in \Lambda } S^\eta (\cdot +z) \nonumber \\ \hbox {div}~H^\eta&= 0. \end{aligned}$$
(4.5)

We remark that, the set \(\Lambda \) being locally finite, the sum at the right-hand side of (4.3) or (4.5) is well-defined as a distribution. Also, the sum at the right-hand side of (4.4) is well-defined pointwise, because \(G_S^\eta - G_S\) is supported in \(B_\eta \).

As discussed at the beginning of Section 4, we expect the limit of \(\int _{\mathbb {R}^3} |{\nabla }h_N^\eta |^2\) to be described in terms of quantities of the form

$$\begin{aligned} \lim _{R \rightarrow +\infty } \frac{1}{R^3}\int _{K_R} |{\nabla }H^\eta (y)|^2 \, \mathrm{d}y, \end{aligned}$$

where \(K_R := (-\frac{R}{2},\frac{R}{2})^3\), for various \(\Lambda \) and solutions \(H^\eta \) of (4.5). Broadly, the energy concentrated locally around a point x should be understood from a blow-up of the original system (3.26), zooming at scale \(N^{-1/3}\) around x. Let \(x \in \mathcal {O}\) (the center of the blow-up), and \(\eta _N := \eta N^{-1/3}\), for a fixed \(\eta > 0\). If we introduce

$$\begin{aligned}&H^{\eta }_N(y) := N^{-2/3} h_N^{\eta _N}(x+N^{-1/3} y), \quad P^\eta _N(y) := N^{-1} p_N^{\eta _N}(x+N^{-1/3} y), \nonumber \\&z_{i,N} := N^{1/3} (x-x_{i,N}) \end{aligned}$$
(4.6)

we find that

$$\begin{aligned} - \Delta H_N^\eta + {\nabla }P_N^\eta = \sum _{i=1}^N S^{\eta }(\cdot + z_{i,N}) - N^{-1/3} S{\nabla }_x f(x+N^{-1/3}y), \quad \hbox {div}~H_N^\eta = 0. \end{aligned}$$
(4.7)
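
Let us detail the elementary scaling computation behind (4.7), on the smooth part of the source term; the stresslet part is treated similarly, using the scaling properties of \(G_S^\eta \) and \(S^\eta \), cf. (3.18)–(3.20). With the change of variables (4.6),

$$\begin{aligned} - \Delta _y H_N^\eta (y) + {\nabla }_y P_N^\eta (y)&= N^{-\frac{2}{3}} N^{-\frac{2}{3}} \big ( - \Delta h_N^{\eta _N} \big )(x+N^{-1/3}y) + N^{-1} N^{-\frac{1}{3}} \big ( {\nabla }p_N^{\eta _N} \big )(x+N^{-1/3}y) \\&= N^{-4/3} \big ( - \Delta h_N^{\eta _N} + {\nabla }p_N^{\eta _N} \big )(x+N^{-1/3}y), \end{aligned}$$

and the contribution of the term \(- N S {\nabla }f\) in (3.26) is \(N^{-4/3} \times (-N) \, S {\nabla }_x f(x+N^{-1/3}y) = - N^{-1/3} S {\nabla }_x f(x+N^{-1/3}y)\), as in (4.7).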

System (4.5) corresponds to a formal asymptotics where one replaces \(\sum _{i=1}^N \delta _{z_{i,N}}\) by \(\sum _{i=1}^\infty \delta _{z_i}\), with \(\Lambda = \{z_i\}\) a point distribution. Note that, under (H2), we expect this point distribution to be well-separated, meaning that there is \(c >0\) such that: for all \(z' \ne z \in \Lambda \), \(|z'-z| \geqq c\). Still, we insist that this asymptotics is purely formal and requires much more to be made rigorous. Such rigorous asymptotics will be carried out in Section 5 for various classes of point configurations.

We now collect several general remarks on the blown-up system (4.3). We start by defining a renormalized energy. For any \(L > 0\), we denote \(K_L := (-\frac{L}{2}, \frac{L}{2})^3\).

Definition 4.1

Given a point distribution \(\Lambda \), we say that a solution H of (4.3) is admissible if for all \(\eta > 0\), the field \(H^\eta \) defined by (4.4) satisfies \({\nabla }H^\eta \in L^2_{loc}(\mathbb {R}^3)\).

Given an admissible solution H and \(\eta > 0\), we say that \(H^\eta \) is of finite renormalized energy if

$$\begin{aligned} \mathcal {W}^\eta ({\nabla }H) := -\lim _{R \rightarrow +\infty } \frac{1}{R^3} \left( \int _{K_R} |{\nabla }H^\eta |^2 -\frac{1}{\eta ^3}\left| \Lambda \cap K_R \right| \Bigl ( \int _{B^1} |{\nabla }G_S^1|^2 + \frac{3}{10\pi } |S|^2 \Bigr ) \right) \end{aligned}$$

exists in \(\mathbb {R}\). We say that H is of finite renormalized energy if \(H^\eta \) is for all \(\eta \), and

$$\begin{aligned} \mathcal {W}({\nabla }H) := \lim _{\eta \rightarrow 0} \mathcal {W}^\eta ({\nabla }H) \end{aligned}$$

exists in \(\mathbb {R}\).

Remark 4.2

From formula (4.4), it is easily seen that H is admissible if and only if there exists one \(\eta > 0\) with \({\nabla }H^\eta \in L^2_{loc}(\mathbb {R}^3)\).

Proposition 4.3

If \(H_1\) and \(H_2\) are admissible solutions of (4.3) satisfying, for some \(\eta > 0\), that

$$\begin{aligned} \limsup _{R \rightarrow +\infty } \frac{1}{R^3} \int _{K_R} |{\nabla }H^\eta _1|^2< +\infty , \quad \limsup _{R \rightarrow +\infty } \frac{1}{R^3} \int _{K_R} |{\nabla }H^\eta _2|^2 < +\infty , \end{aligned}$$

then \({\nabla }H_1\) and \({\nabla }H_2\) differ by a constant matrix.

Proof

We set \(H := H_1 - H_2 = H_1^\eta - H_2^\eta \). It is a solution of the homogeneous Stokes equation with

$$\begin{aligned} \limsup _{R \rightarrow +\infty } \frac{1}{R^3} \int _{K_R}|{\nabla }H|^2 < +\infty . \end{aligned}$$

By standard elliptic regularity, any solution v of the Stokes equation in the unit ball

$$\begin{aligned} - \Delta v + {\nabla }p = 0, \quad \hbox {div}~v = 0 \, \text { in } \, B(0,1) \end{aligned}$$

satisfies, for some absolute constant C,

$$\begin{aligned} |{\nabla }^2 v(0)| \leqq C \Vert {\nabla }v\Vert _{L^2(B(0,1))} . \end{aligned}$$

We apply this inequality to \(v(x) = H(x_0 + R x)\), \(x_0\) arbitrary. After rescaling, we find that

$$\begin{aligned} |{\nabla }^2 H(x_0)| \leqq \frac{C}{R} \Big ( \frac{1}{R^{3/2}} \Vert {\nabla }H(x_0 + \cdot )\Vert _{L^2(B(0,R))}\Big ). \end{aligned}$$
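
Indeed, taking \(v(x) = H(x_0 + Rx)\), together with the correspondingly rescaled pressure, one has \({\nabla }^2 v(0) = R^2 {\nabla }^2 H(x_0)\), while

$$\begin{aligned} \Vert {\nabla }v \Vert _{L^2(B(0,1))}^2 = R^2 \int _{B(0,1)} |{\nabla }H(x_0 + Rx)|^2 \mathrm{d}x = \frac{1}{R} \int _{B(x_0,R)} |{\nabla }H|^2 \mathrm{d}x, \end{aligned}$$

and the displayed inequality follows after dividing by \(R^2\).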

As \(R \rightarrow +\infty \), the right-hand side goes to zero, so that \({\nabla }^2 H(x_0) = 0\) for every \(x_0\): \({\nabla }H\) is a constant matrix, which concludes the proof. \(\square \)

Proposition 4.4

Let \(\Lambda \) be a well-separated point distribution, meaning there exists \(c > 0\) such that for all \(z' \ne z \in \Lambda \), \(|z'-z| \geqq c\). Let \(0< \alpha< \eta < \frac{\min (c,1)}{4}\). Let H be an admissible solution of (4.3) such that \(H^\eta \) is of finite renormalized energy. Then, \(H^\alpha \) is also of finite renormalized energy, and

$$\begin{aligned} \mathcal {W}^\alpha ({\nabla }H) = \mathcal {W}^\eta ({\nabla }H). \end{aligned}$$

In particular, H is of finite renormalized energy as soon as \(H^\eta \) is for some \(\eta \in (0, \frac{c}{4})\), and \( \mathcal {W}({\nabla }H) = \mathcal {W}^\eta ({\nabla }H)\) for all \(\eta < \frac{\min (c,1)}{4}\).

Proof

Let \(R > 0\). As \(\Lambda \) is well-separated,

$$\begin{aligned} \left| \Lambda \cap (K_{R+2}\setminus K_{R-2}) \right| \leqq C R^{2}. \end{aligned}$$
(4.8)
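
This counting estimate follows from the well-separation of \(\Lambda \): the balls \(B(z, \frac{c}{2})\), \(z \in \Lambda \), are disjoint, and for \(z \in K_{R+2} \setminus K_{R-2}\) they are contained in \(K_{R+2+c} \setminus K_{R-2-c}\). Hence, for R large enough,

$$\begin{aligned} \left| \Lambda \cap (K_{R+2}\setminus K_{R-2}) \right| \, \frac{4\pi }{3} \Big ( \frac{c}{2} \Big )^3 \leqq (R+2+c)^3 - (R-2-c)^3 \leqq C_c R^2. \end{aligned}$$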

From this and the fact that the limit \(\mathcal {W}^\eta ({\nabla }H)\) exists (in \(\mathbb {R}\)), it follows that

$$\begin{aligned} \lim _{R \rightarrow +\infty } \frac{1}{R^3} \int _{K_{R+2}\setminus K_{R-2}} |{\nabla }H^\eta |^2 = 0. \end{aligned}$$
(4.9)

Let \(\Omega _R\) be an open set such that \(K_{R-1} \subset \Omega _R \subset K_R\) and such that

$$\begin{aligned} \text {dist}\bigl ( {\partial }\Omega _R \, , \, \cup _{z \in \Lambda } B(-z,\eta )\bigr ) \geqq c' > 0, \end{aligned}$$
(4.10)

where \(c'\) depends on c only. This implies that \(G_S^{\eta }(\cdot + z)\), \(G_S^\alpha (\cdot + z)\) are smooth at \({\partial }\Omega _R\) for all \(z \in \Lambda \), and that \(H^{\eta }\), \(H^\alpha \) are smooth at \({\partial }\Omega _R\).

We now proceed as in the proof of Proposition 3.8. We write

$$\begin{aligned} H^\eta&= H^\alpha + \sum _{z \in \Lambda } (G_S^\eta - G_S^\alpha )(\cdot + z),\\ \int _{\Omega _R} |{\nabla }H^\eta |^2&= \int _{\Omega _R} |{\nabla }H^\alpha |^2 + 2 \sum _{z \in \Lambda } \int _{\Omega _R} {\nabla }H^\alpha : {\nabla }(G_S^\eta - G_S^\alpha )(\cdot + z) \\&\quad + \sum _{z,z' \in \Lambda } \int _{\Omega _R} {\nabla }(G_S^\eta - G_S^\alpha )(\cdot + z) : {\nabla }(G_S^\eta - G_S^\alpha )(\cdot + z'). \end{aligned}$$

After integration by parts, and manipulations similar to those used to show Proposition 3.8, we end up with

$$\begin{aligned} \int _{\Omega _R} |{\nabla }H^\eta |^2 - \int _{\Omega _R} |{\nabla }H^\alpha |^2 = \sum _{z \in \Lambda } \int _{\Omega _R} (G_S^\eta - G_S^\alpha )(\cdot + z) d S^\alpha (\cdot + z). \end{aligned}$$
(4.11)

Let us emphasize that the contribution of the boundary terms at \({\partial }\Omega _R\) is zero: indeed, thanks to (4.10), \((G_S^\eta - G_S^\alpha )(\cdot + z)\) is zero at \({\partial }\Omega _R\) for any \(z \in \Lambda \). Similarly,

$$\begin{aligned} \sum _{z \in \Lambda } \int _{\Omega _R} (G_S^\eta - G_S^\alpha )(\cdot + z) d S^\alpha (\cdot + z)&= \sum _{z \in \Lambda \cap \Omega _R} \int _{\Omega _R} (G_S^\eta - G_S^\alpha )(\cdot + z) d S^\alpha (\cdot + z) \\&= \sum _{z \in \Lambda \cap \Omega _R} \int _{\mathbb {R}^3} (G_S^\eta - G_S^\alpha )(\cdot + z) d S^\alpha (\cdot + z). \end{aligned}$$

The integral at the right-hand side was computed above (see (3.29) and the lines after):

$$\begin{aligned}&\sum _{z \in \Lambda \cap \Omega _R} \int _{\mathbb {R}^3} (G_S^\eta - G_S^\alpha )(\cdot + z) d S^\alpha (\cdot + z)\\&\quad = \left| \Lambda \cap \Omega _R\right| \left( \frac{1}{\eta ^3} - \frac{1}{\alpha ^3} \right) \left( \int _{B_1} |{\nabla }G_S^1|^2 + \frac{3}{10\pi } |S|^2 \right) . \end{aligned}$$

Back to (4.11), we find that

$$\begin{aligned} \int _{\Omega _R} |{\nabla }H^\eta |^2 - \int _{\Omega _R} |{\nabla }H^\alpha |^2 = \left| \Lambda \cap \Omega _R\right| \left( \frac{1}{\eta ^3} - \frac{1}{\alpha ^3} \right) \left( \int _{B_1} |{\nabla }G_S^1|^2 + \frac{3}{10\pi } |S|^2 \right) . \end{aligned}$$

We deduce from this identity, (4.8) and (4.9) that

$$\begin{aligned} \lim _{R \rightarrow +\infty } \frac{1}{R^3} \left( \int _{\Omega _R} |{\nabla }H^\alpha |^2 - \frac{\left| \Lambda \cap K_R\right| }{\alpha ^3} \left( \int _{B_1} |{\nabla }G_S^1|^2 + \frac{3}{10\pi } |S|^2 \right) \right) = \mathcal {W}^\eta ({\nabla }H), \end{aligned}$$

and replacing R by \(R+1\),

$$\begin{aligned} \lim _{R \rightarrow +\infty } \frac{1}{R^3} \left( \int _{\Omega _{R+1}} |{\nabla }H^\alpha |^2 - \frac{\left| \Lambda \cap K_R\right| }{\alpha ^3} \left( \int _{B_1} |{\nabla }G_S^1|^2 + \frac{3}{10\pi } |S|^2 \right) \right) = \mathcal {W}^\eta ({\nabla }H). \end{aligned}$$

As \(\Omega _R \subset K_R \subset \Omega _{R+1}\), the result follows. \(\square \)

4.2 Resolution of the Blown-up System for Stationary Point Processes

As pointed out several times, we follow the strategy described in [41] for the treatment of minimizers and minima of Coulomb energies, but in our effective viscosity problem, the points \(x_{i,N}\) do not minimize the analogue \(\mathcal {V}_N\) of the Coulomb energy \(\mathcal {H}_N\). Actually, although we consider the steady Stokes equation, our point distribution may be time dependent. More precisely, in many settings, the dynamics of the suspension evolves on a timescale associated with viscous transport (scaling like \(a^2\), with a the radius of the particle), which is much smaller than the convective time scale (scaling like a). This allows us to neglect the time derivative in the Stokes equation: system (1.1)–(1.2) corresponds then to a snapshot of the flow at a given time t. Even when one is interested in the long time behaviour, the existence of an equilibrium measure for the system of particles is a very difficult problem. To bypass this issue, a usual point of view in the physics literature is to assume that the distribution of points is given by a stationary random process (whose refined description is an issue per se).

We will follow this point of view here, and introduce a class of random point processes for which we can solve (4.3). Let \(X = \mathbb {R}\) or \(X =\mathbb {T}_L := \mathbb {R}/(L\mathbb {Z})\) for some \(L > 0\). We denote by \(Point_X\) the set of point distributions in \(X^3\): an element of \(Point_X\) is a locally finite subset of \(X^3\), in particular a finite subset when \(X=\mathbb {T}_L\). We endow \(Point_X\) with the smallest \(\sigma \)-algebra \(\mathcal {P}_X\) which makes measurable all the mappings

$$\begin{aligned} Point_X \rightarrow \mathbb {N}, \quad \omega \rightarrow |A \cap \omega |, \quad A \text { a bounded Borel subset of } X^3. \end{aligned}$$

Given a probability space \((\Omega ,\mathcal {A},P)\), a random point process \(\Lambda \) with values in \(X^3\) is a measurable map from \(\Omega \) to \(Point_X\), see [12]. By pushing forward the probability P with \(\Lambda \), we can always assume that the process is in canonical form, that is \(\Omega = Point_X\), \(\mathcal {A} = \mathcal {P}_X\), and \(\Lambda (\omega ) = \omega \).

We shall consider processes that, once in canonical form, are

  • (P1) stationary: the probability P on \(\Omega \) is invariant by the shifts

    $$\begin{aligned} \tau _y : \Omega \rightarrow \Omega , \quad \omega \rightarrow y + \omega , \quad y \in X^3. \end{aligned}$$
  • (P2) ergodic: if \(A \in \mathcal {A}\) satisfies \(\tau _y(A) = A\) for all y, then \(P(A) = 0\) or \(P(A) = 1\).

  • (P3) uniformly well-separated: we mean that there exists \(c > 0\) such that almost surely, \(|z - z'| \geqq c\) for all \(z \ne z'\) in \(\omega \).

These properties are satisfied in two important contexts:

Example 4.5

(Periodic point distributions). Namely, for \(L > 0\), \(a_1, \dots , a_M\) in \(K_L\), we introduce the set \(\Lambda _0 := \{a_1, \dots , a_M\} + L \mathbb {Z}^3\). We can of course identify \(\Lambda _0\) with a point distribution in \(X^3\) with \(X = \mathbb {T}_L\). We then take \(\Omega = \mathbb {T}_L^3\), P the normalized Lebesgue measure on \(\mathbb {T}^3_L\), and set \(\Lambda (\omega ) := \Lambda _0 + \omega \). It is easily checked that this random process satisfies all assumptions. Moreover, a realization of this process is a translate of the initial periodic point distribution \(\Lambda _0\). By translation, the almost sure results that we will show below (well-posedness of the blown-up system, convergence of \(\mathcal {W}_N\)) will actually yield results for \(\Lambda _0\) itself.

Example 4.6

(Poisson hard core processes). These processes are obtained from Poisson point processes, by removing points in order to guarantee the hypothesis (P3). For instance, given \(c > 0\), one can remove from the Poisson process all points z which are not alone in B(z,c). This leads to the so-called Matérn I hard-core process. To increase the density of points while keeping (P3), one can refine the removal process in the following way: to each point z of the Poisson process, one associates an “age” \(u_z\), with \((u_z)\) a family of i.i.d. variables, uniform over (0, 1). Then, one retains only the points z that are (strictly) the “oldest” in B(z,c). This leads to the so-called Matérn II hard-core process. Obviously, these two processes satisfy (P1) by stationarity of the Poisson process, and satisfy (P2) because they have only short-range correlations. For much more on hard core processes, we refer to [8].
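
The following is a minimal numerical sketch, not taken from the paper, of the Matérn I and II thinning procedures just described, in a periodic box of side L; the function names and the parameter values (intensity, c, L) are illustrative choices, and only numpy is assumed.

import numpy as np

rng = np.random.default_rng(0)

def sample_poisson(intensity, L):
    """Homogeneous Poisson point process of given intensity in the box [0, L)^3."""
    n = rng.poisson(intensity * L**3)
    return rng.uniform(0.0, L, size=(n, 3))

def pairwise_dist(points, L):
    """Distances between all pairs of points, with periodic boundary conditions."""
    diff = points[:, None, :] - points[None, :, :]
    diff -= L * np.round(diff / L)  # nearest periodic image
    return np.linalg.norm(diff, axis=-1)

def matern_I(points, c, L):
    """Matern I: keep the points with no other point at distance < c."""
    d = pairwise_dist(points, L)
    np.fill_diagonal(d, np.inf)
    return points[(d >= c).all(axis=1)]

def matern_II(points, c, L):
    """Matern II: attach i.i.d. uniform marks and keep a point iff its mark is
    strictly the smallest among all points at distance < c (with uniform marks,
    using the largest mark instead yields the same law)."""
    marks = rng.uniform(size=len(points))
    d = pairwise_dist(points, L)
    np.fill_diagonal(d, np.inf)
    keep = np.array([marks[i] < marks[d[i] < c].min(initial=np.inf)
                     for i in range(len(points))], dtype=bool)
    return points[keep]

L, c, intensity = 10.0, 0.5, 0.5
pts = sample_poisson(intensity, L)
print(len(pts), len(matern_I(pts, c, L)), len(matern_II(pts, c, L)))

Both thinned configurations satisfy the hard-core condition with constant c (up to the periodic identification), and Matérn II always retains at least the points kept by Matérn I.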

The point is now to solve almost surely the blown-up system (4.3) for point processes with properties (P1)–(P2)–(P3). We first state

Proposition 4.7

Let \(\Lambda = \Lambda (\omega )\) be a random point process with properties (P1)–(P2)–(P3). Let \(\eta > 0\). For almost every \(\omega \), there exists a solution \(\mathbf{H}^\eta (\omega , \cdot )\) of (4.5) in \(H^1_{loc}(X^3)\) such that

$$\begin{aligned} {\nabla }\mathbf{H}^\eta (\omega ,y) = D_\mathbf{H}^\eta (\tau _y\omega ), \end{aligned}$$

where \(D_\mathbf{H}^\eta \in L^2(\Omega )\) is the unique solution of the variational formulation (4.12) below.

Remark 4.8

In the case \(X = \mathbb {T}_L\), point distributions and solutions \(H^\eta \) over \(X^3\) can be identified with \(L\mathbb {Z}^3\)-periodic point distributions and \(L\mathbb {Z}^3\)-periodic solutions defined on \(\mathbb {R}^3\). This identification is implicit here and in all that follows.

Proof

We treat the case \(X=\mathbb {R}\); the case \(X=\mathbb {T}_L\) follows the same approach. We recall that the process is in canonical form: \(\Omega = Point_\mathbb {R}\), \(\mathcal {A} = \mathcal {P}_\mathbb {R}\), \(\Lambda (\omega ) = \omega \). The idea is to associate to (4.5) a probabilistic variational formulation. This approach is inspired by works of Kozlov [7, 26], see also [3]. Prior to the statement of this variational formulation, we introduce some vocabulary and functional spaces. First, for any \(\mathbb {R}^d\)-valued measurable \(\phi = \phi (\omega )\), we call a realization of \(\phi \) the map

$$\begin{aligned} R_\omega [\phi ](y) := \phi (\tau _y\omega ), \quad \omega \in \Omega . \end{aligned}$$

For \(p \in [1,+\infty )\), \(\phi \in L^p(\Omega )\), as \(\tau _y\) is measure preserving, we have for all \(R > 0\) that \(\mathbb {E} \int _{K_R} |R_\omega [\phi ]|^p = R^3 \, \mathbb {E} |\phi |^p\). Hence, almost surely, \(R_\omega [\phi ]\) is in \(L^p_{loc}(\mathbb {R}^3)\). Also, for \(\phi \in L^\infty (\Omega )\), one finds that almost surely \(R_\omega [\phi ] \in L^\infty _{loc}(\mathbb {R}^3)\). This is a consequence of Fatou’s lemma: for all \(R > 0\),

$$\begin{aligned} \mathbb {E} \Vert R_\omega [\phi ]\Vert _{L^\infty (K_R)}&= \mathbb {E} \liminf _{p \rightarrow +\infty } \Vert R_\omega [\phi ]\Vert _{L^p(K_R)} \leqq \liminf _{p \rightarrow +\infty } \mathbb {E} \Vert R_\omega [\phi ]\Vert _{L^p(K_R)} \\&\leqq \liminf _{p \rightarrow +\infty } \left( \mathbb {E} \Vert R_\omega [\phi ]\Vert _{L^p(K_R)}^p \right) ^{1/p}\\&= \liminf _{p \rightarrow +\infty } \left( R^3 \, \mathbb {E} |\phi |^p \right) ^{1/p} = \Vert \phi \Vert _{L^\infty (\Omega )}. \end{aligned}$$

We say that \(\phi \) is smooth if, almost surely, \(R_\omega [\phi ]\) is. For a smooth function \(\phi \), we can define its stochastic gradient \({\nabla }_\omega \phi \) by the formula

$$\begin{aligned} {\nabla }_\omega \phi (\omega ) := {\nabla }R_\omega [\phi ]\vert _{y=0}, \end{aligned}$$

where here and below, \({\nabla }= {\nabla }_y\) refers to the usual gradient (in space). Note that \({\nabla }_\omega \phi (\tau _y \omega ) = {\nabla }R_\omega [\phi ](y)\). One can define similarly the stochastic divergence, curl, etc., and iterate to define partial stochastic derivatives \({\partial }^\alpha _\omega \).

Starting from a function \(V \in L^p(\Omega )\), \(p \in [1,+\infty ]\), one can build smooth functions through convolution. Namely, for \(\rho \in C^\infty _c(\mathbb {R}^3)\), one can define

$$\begin{aligned} \rho \star V(\omega ) := \int _{\mathbb {R}^3} \rho (y) V(\tau _y \omega ) \mathrm{d}y, \end{aligned}$$

which is easily seen to be in \(L^p(\Omega )\), as

$$\begin{aligned} \mathbb {E} |\rho \star V(\omega )|^p\leqq & {} \mathbb {E} \left( \int _{\mathbb {R}^3} |\rho (y)| \mathrm{d}y \right) ^{p-1} \left( \int _{\mathbb {R}^3} |\rho (y)| |V(\tau _y \omega ) |^p \mathrm{d}y \right) \\= & {} \left( \int _{\mathbb {R}^3} |\rho (y)| \mathrm{d}y \right) ^{p} \mathbb {E} |V(\omega )|^p, \end{aligned}$$

using that \(\tau _y\) is measure-preserving. Moreover, it is smooth: we leave it to the reader to check

$$\begin{aligned} R_\omega [\rho \star V] = \check{\rho } \star R_\omega [V], \quad {\nabla }_\omega (\rho \star V) = {\nabla }\check{\rho } \star V, \quad \check{\rho }(y) := \rho (-y). \end{aligned}$$

We are now ready to introduce the functional spaces we need. We set

$$\begin{aligned} \mathcal {D}_\sigma&:= \{ \phi : \Omega \rightarrow \mathbb {R}^3 \text { smooth, } {\partial }^\alpha _\omega \phi \in L^2(\Omega ) \; \forall \alpha , \; {\nabla }_\omega \cdot \phi = 0 \}, \\ \mathcal {V}_\sigma&:= \text { the closure of } \{ {\nabla }_\omega \phi , \, \phi \in \mathcal {D}_\sigma \} \text { in } L^2(\Omega ). \end{aligned}$$

We recall that \(S^\eta = \hbox {div}~\Psi ^\eta \), with \(\Psi ^\eta \) defined in (3.21). We introduce

$$\begin{aligned} \mathsf {\Pi }^\eta (\omega ) := \sum _{z \in \omega } \Psi ^\eta (z) \end{aligned}$$

Note that it is well-defined, as \(\Psi ^\eta \) is supported in \(B_\eta \) and \(\omega \) is a discrete subset. It is measurable: indeed, \(\Psi ^\eta \) is the pointwise limit of a sequence of simple functions of the form \(\sum _i \alpha _i 1_{A_i}\), where \(A_i\) are Borel subsets of \(\mathbb {R}^3\). As

$$\begin{aligned} \omega \rightarrow \sum _{z \in \omega } \sum _i \alpha _i 1_{A_i}(z) = \sum _i \alpha _i |A_i \cap \omega | \end{aligned}$$

is measurable by definition of the \(\sigma \)-algebra \(\mathcal {A}\), we find that \(\mathsf {\Pi }^\eta \) is. Moreover, as \(\Lambda \) is uniformly well-separated, one has \(|\mathsf {\Pi }^\eta (\omega )| \leqq C \Vert \Psi ^\eta \Vert _{L^\infty }\) for a constant C that does not depend on \(\omega \), so that \(\mathsf {\Pi }^\eta \) belongs to \(L^\infty (\Omega )\).

We now introduce the variational formulation: find \(D_\mathbf{H}^\eta \in \mathcal {V}_\sigma \) such that for all \(D_\phi \in \mathcal {V}_\sigma \),

$$\begin{aligned} \mathbb {E} \, D_\mathbf{H}^\eta : D_\phi = - \mathbb {E} \, \mathsf {\Pi }^\eta : D_\phi . \end{aligned}$$
(4.12)

As \(\mathcal {V}_\sigma \) is a closed subspace of \(L^2(\Omega )\), existence and uniqueness of a solution follow from the Riesz representation theorem.

It remains to build a solution of (4.5) almost surely, based on \(D_\mathbf{H}^\eta \). Let \(\phi _k = \phi _k(\omega )\) be a sequence in \(\mathcal {D}_\sigma \) such that \({\nabla }_\omega \phi _k\) converges to \(D_\mathbf{H}^\eta \) in \(L^2(\Omega )\). Let \(\rho \in C^\infty _c(\mathbb {R}^3)\). It is easily seen that \(\rho \star \phi _k\) also belongs to \(\mathcal {D}_\sigma \) and that \({\partial }^\alpha _\omega {\nabla }_\omega (\rho \star \phi _k) = {\partial }^\alpha _\omega (\rho \star {\nabla }_\omega \phi _k)\) converges to the smooth function \({\partial }^\alpha _\omega (\rho \star D_\mathbf{H}^\eta )\) in \(L^2(\Omega )\), for all \(\alpha \). In particular, as \({\nabla }_\omega \times {\nabla }_\omega (\rho \star \phi _k) = 0\), we find that \({\nabla }_\omega \times (\rho \star D_\mathbf{H}^\eta ) = 0\). Applying the realization operator \(R_\omega \), we deduce that

$$\begin{aligned} {\nabla }\times (\check{\rho } \star R_\omega [D_\mathbf{H}^\eta ]) = \check{\rho } \star {\nabla }\times R_\omega [D_\mathbf{H}^\eta ] = 0. \end{aligned}$$

We recall that \(R_\omega [D_\mathbf{H}^\eta ]\) belongs almost surely to \(L^2_{loc}(\mathbb {R}^3)\), so that \({\nabla }\times R_\omega [D_\mathbf{H}^\eta ]\) is well-defined in \(H^{-1}_{loc}(\mathbb {R}^3)\). Taking \(\rho = \rho _n\) an approximation of the identity, and sending n to infinity, we end up with \({\nabla }\times R_\omega [D_\mathbf{H}^\eta ] = 0\) in \(\mathbb {R}^3\). As curl-free vector fields on \(\mathbb {R}^3\) are gradients, it follows that almost surely, there exists \(\mathbf{H}^\eta = \mathbf{H}^\eta (\omega , y)\) with

$$\begin{aligned} {\nabla }\mathbf{H}^\eta (\omega ,y) = R_\omega [D_\mathbf{H}^\eta ](y) = D_\mathbf{H}^\eta (\tau _y(\omega )), \quad \forall y \in \mathbb {R}^3. \end{aligned}$$

In the case \(X = \mathbb {T}_L\), one can show that the mean of \(R_\omega [D_\mathbf{H}^\eta ]\) is almost surely zero, so that the same result holds. In addition, because the matrices \({\nabla }_\omega \phi \), \(\phi \in \mathcal {D}_\sigma \), have zero trace, the same holds for \(D_\mathbf{H}^\eta \). Hence,

$$\begin{aligned} \hbox {div}~\mathbf{H}^\eta (\omega ,y) = \text {trace}({\nabla }\mathbf{H}^\eta (\omega ,y)) = \text {trace}(D_\mathbf{H}^\eta )(\tau _y(\omega )) = 0. \end{aligned}$$

One still has to prove that the first equation of (4.5) is satisfied. To this end, we use (4.12) with test function \(D_\phi = {\nabla }_\omega \phi \), where the smooth function \(\phi \) is of the form

$$\begin{aligned} \phi = \rho \star ( {\nabla }_\omega \times \varphi ), \, \varphi : \Omega \rightarrow \mathbb {R}^3 \text { a smooth function}. \end{aligned}$$

Note that for smooth functions \(\varphi , \tilde{\varphi }\), a stochastic integration by parts formula holds:

$$\begin{aligned} \mathbb {E} \, {\partial }_{\omega }^i \varphi \, \tilde{\varphi }&= \mathbb {E} \int _{K_1} {\partial }_{i} R_\omega [\varphi ] \, R_\omega [\tilde{\varphi }] = - \mathbb {E} \int _{K_1} R_\omega [\varphi ] \, {\partial }_i R_\omega [\tilde{\varphi }] \\&\quad + \mathbb {E} \int _{{\partial }K_1} n_i R_\omega [\varphi ] \, R_\omega [\tilde{\varphi }] \\&= - \mathbb {E} \int _{K_1} R_\omega [\varphi ] \, {\partial }_i R_\omega [\tilde{\varphi }] = - \mathbb {E} \, \varphi \, {\partial }_{\omega ,i} \tilde{\varphi }. \end{aligned}$$

The boundary term vanishes because, by stationarity, \(\mathbb {E}\, R_\omega [\varphi ](y) \, R_\omega [\tilde{\varphi }](y)\) does not depend on y, while \(\int _{{\partial }K_1} n_i \, \mathrm{d}s = 0\). Thanks to this formula, we may write

$$\begin{aligned} \mathbb {E} \, D_\mathbf{H}^\eta : {\nabla }_\omega (\rho \star ({\nabla }_\omega \times \varphi ))&= \mathbb {E} \, \check{\rho } \star D_\mathbf{H}^\eta : {\nabla }_\omega ( {\nabla }_\omega \times \varphi ) \\&= - \mathbb {E} \, {\nabla }_\omega \times ({\nabla }_\omega \cdot (\check{\rho } \star D_\mathbf{H}^\eta )) \cdot \varphi . \end{aligned}$$

Similarly, we find

$$\begin{aligned} - \mathbb {E} \, \mathsf {\Pi }^\eta : {\nabla }_\omega (\rho \star {\nabla }_\omega \times \varphi ) = \mathbb {E} {\nabla }_\omega \times ({\nabla }_\omega \cdot (\check{\rho } \star \mathsf {\Pi }^\eta )) \cdot \varphi . \end{aligned}$$

As this identity is valid for all smooth test fields \(\varphi \), we end up with

$$\begin{aligned} - {\nabla }_\omega \times ({\nabla }_\omega \cdot (\check{\rho } \star D_\mathbf{H}^\eta )) = {\nabla }_\omega \times ({\nabla }_\omega \cdot (\check{\rho } \star \mathsf {\Pi }^\eta )). \end{aligned}$$

Proceeding as above, we find that, almost surely,

$$\begin{aligned} - {\nabla }\times \hbox {div}R_\omega [D_\mathbf{H}^\eta ] = {\nabla }\times \hbox {div}R_\omega [\mathsf {\Pi }^\eta ], \end{aligned}$$

which can be written as

$$\begin{aligned} {\nabla }\times (-\Delta \mathbf{H}^\eta ) = {\nabla }\times \hbox {div}\sum _{z \in \omega } \Psi ^\eta (\cdot +z). \end{aligned}$$

It follows that there exists \(\mathbf{P}^\eta = \mathbf{P}^\eta (\omega ,y)\) such that

$$\begin{aligned} - \Delta \mathbf{H}^\eta + {\nabla }\mathbf{P}^\eta = \hbox {div}\sum _{z \in \omega } \Psi ^\eta (\cdot +z) = \sum _{z \in \omega } S^\eta (\cdot +z), \end{aligned}$$

which concludes the proof of the proposition. \(\square \)

Corollary 4.9

For random point processes with properties (P1)–(P2)–(P3), there exists almost surely a solution H of (4.3) with finite renormalized energy and such that for all \(\eta > 0\), the gradient field \({\nabla }H^\eta \), where \(H^\eta \) is given by (4.4), coincides with the gradient field \({\nabla }\mathbf{H}^\eta \) of Proposition 4.7. Moreover,

$$\begin{aligned} \mathcal {W}({\nabla }H) = - \lim _{\eta \rightarrow 0} \left( \mathbb {E} \int _{K_1}|{\nabla }H^\eta |^2 - \frac{m}{\eta ^3} \Bigl ( \int _{B^1} |{\nabla }G_S^1|^2 + \frac{3}{10\pi } |S|^2 \Bigr )\right) , \end{aligned}$$

where \(m := \mathbb {E}|\Lambda \cap K_1|\) is the mean intensity of the point process, the expression on the right-hand side being actually constant for \(\eta \) small enough.

Proof

By the definition of the mean intensity and by property (P2), which allows us to apply the ergodic theorem (cf. [12, Corollary 12.2.V]), we have, almost surely, that

$$\begin{aligned} \lim _{R \rightarrow \infty } \frac{|\Lambda \cap K_R|}{R^3} = m. \end{aligned}$$
(4.13)

Let \(\eta _0 < \frac{\min (c,1)}{4}\) be fixed, and let \(\mathbf{H}^{\eta _0}\) be given by the previous proposition. We set

$$\begin{aligned} H(\omega ,y) := \mathbf{H}^{\eta _0}(\omega ,y) + \sum _{z \in \omega } (G_S- G_S^{\eta _0})(y+z). \end{aligned}$$
(4.14)

It is clearly an admissible solution of (4.3). By Proposition 4.4, in order to show that H has almost surely finite renormalized energy, it is enough to show that for one \(\eta < \frac{\min (c,1)}{4}\), almost surely, the function \(H^\eta \) given by (4.4), namely,

$$\begin{aligned} H^\eta (\omega ,y)&:= H(\omega ,y) + \sum _{z \in \omega } (G^\eta _S - G_S)(y+z) \\&= \mathbf{H}^{\eta _0}(\omega ,y) + \sum _{z \in \omega } (G^\eta _S- G_S^{\eta _0})(y+z), \end{aligned}$$

has finite renormalized energy. This holds for \(\eta = \eta _0\), as \(H^{\eta _0} = \mathbf{H}^{\eta _0}\) and the ergodic theorem applies. We then notice that

$$\begin{aligned} {\nabla }H^\eta (\omega ,y) = D_H^\eta (\tau _y(\omega )), \quad D_H^\eta (\omega ) := D_\mathbf{H}^{\eta _0}(\omega ) + \sum _{z \in \omega } {\nabla }(G^\eta _S- G_S^{\eta _0})(z). \end{aligned}$$
(4.15)

We remark that \(G^\eta _S- G_S^{\eta _0} = 0\) outside \(B_{\max (\eta ,\eta _0)}\), so that the sum on the right-hand side has only a finite number of non-zero terms. In the same way as we proved that the function \(\mathsf {\Pi }^\eta \) belongs to \(L^\infty (\Omega )\), we get that \(\sum _{z \in \omega } {\nabla }(G^\eta _S- G_S^{\eta _0})(z)\) defines an element of \(L^\infty (\Omega )\). Hence, by the ergodic theorem, we have, almost surely, that

$$\begin{aligned} \lim _{R \rightarrow +\infty } \frac{1}{R^3} \int _{K_R} |{\nabla }H^\eta |^2 = \mathbb {E} \int _{K_1} |{\nabla }H^\eta |^2. \end{aligned}$$

Combining this with (4.13) and Proposition 4.4, we obtain the formula for \(\mathcal {W}({\nabla }H)\).

The last step is to prove that for all \(\eta > 0\), \({\nabla }H^\eta = {\nabla }\mathbf{H}^\eta \) almost surely. As a consequence of the ergodic theorem, one has, almost surely, that

$$\begin{aligned} \limsup _{R \rightarrow +\infty } \frac{1}{R^3} \int _{K_R} |{\nabla }H^\eta |^2< +\infty , \quad \limsup _{R \rightarrow +\infty } \frac{1}{R^3} \int _{K_R} |{\nabla }\mathbf{H}^\eta |^2 < +\infty . \end{aligned}$$

Reasoning as in the proof of Proposition 4.3, we find that the two gradients differ by a constant:

$$\begin{aligned} {\nabla }H^\eta (\omega ,y) = {\nabla }\mathbf{H}^\eta (\omega ,y) + C(\omega ). \end{aligned}$$

Applying again the ergodic theorem, we get that almost surely \( \mathbb {E}D_{H}^\eta = \mathbb {E}D_\mathbf{H}^\eta + C(\omega )\). As \(D_\mathbf{H}^\eta \) belongs to \(\mathcal {V}_\sigma \), its expectation is easily seen to be zero. To conclude, it remains to prove that \(\mathbb {E}D_{H}^\eta = \mathbb {E}\sum _{z \in \omega } {\nabla }(G_S^\eta - G_S^{\eta _0})(z)\) is zero. Using stationarity, we write, for all \(R > 0\),

$$\begin{aligned} \mathbb {E}\sum _{z \in \omega } {\nabla }(G_S^\eta - G_S^{\eta _0})(z) = \frac{1}{R^3} \mathbb {E}\sum _{z \in \omega } \int _{K_R} {\nabla }(G_S^\eta - G_S^{\eta _0})(z+y) \mathrm{d}y. \end{aligned}$$

We remark that for all z outside a \(\max (\eta , \eta _0)\)-neighborhood of \({\partial }K_R\), \( \int _{K_R} {\nabla }(G_S^\eta - G_S^{\eta _0})(z+\cdot ) = \int _{{\partial }K_R} n \otimes (G_S^\eta - G_S^{\eta _0})(z+\cdot ) = 0\). It follows from the separation assumption and the \(L^\infty \) bound on \({\nabla }(G_S^\eta - G_S^{\eta _0})\) that

$$\begin{aligned} \frac{1}{R^3} \mathbb {E}\sum _{z \in \omega } \int _{K_R} {\nabla }(G_S^\eta - G_S^{\eta _0})(z+y) \mathrm{d}y = O(1/R) \rightarrow 0 \quad \text {as } \, R \rightarrow +\infty . \end{aligned}$$

\(\square \)

5 Convergence of \(\mathcal {V}_N\)

This section concludes our analysis of the quadratic correction to the effective viscosity. From Theorem 1.1, we know that this quadratic correction should be given by the limit of \(\mathcal {V}_N\) as N goes to infinity, where \(\mathcal {V}_N\) was introduced in (1.13). We show here that the functional \(\mathcal {V}_N\) indeed has a limit when the particles are distributed according to the kind of stationary point processes considered in Section 4.

5.1 Proof of Convergence

Let \({\varepsilon }> 0\) be a small parameter, and let \(\Lambda = \Lambda (\omega )\) be a random point process with properties (P1)–(P2)–(P3): stationarity, ergodicity, and uniform separation. As seen in Examples 4.5 and 4.6, this setting covers the case of periodic patterns of points as well as classical hard core processes. We let \(N = N({\varepsilon })\) be the cardinality of the set

$$\begin{aligned} \{ x \in {\varepsilon }\check{\Lambda }, \, B(x,{\varepsilon }) \subset \mathcal {O}\} = \{ x_{1,N}, \dots , x_{N,N} \}, \end{aligned}$$

where \(\check{\Lambda } := -\Lambda \) and where we label the elements arbitrarily. Note that N depends on \(\omega \), although it does not appear explicitly. From the fact that \(\Lambda \) is uniformly well-separated and from the ergodic theorem (cf. [12, Corollary 12.2.V]), we can deduce that, almost surely,

$$\begin{aligned} \lim _{{\varepsilon }\rightarrow 0} N({\varepsilon }) {\varepsilon }^3 = \lim _{{\varepsilon }\rightarrow 0} |{\varepsilon }\check{\Lambda }(\omega ) \cap \mathcal {O}| \, {\varepsilon }^3 = \lim _{{\varepsilon }\rightarrow 0} \frac{ |\check{\Lambda }(\omega ) \cap {\varepsilon }^{-1}\mathcal {O}|}{ {\varepsilon }^{-3} |\mathcal {O}|} |\mathcal {O}| = m |\mathcal {O}|, \end{aligned}$$
(5.1)

so that we shall write indifferently \(\lim _{{\varepsilon }\rightarrow 0}\) or \(\lim _{N \rightarrow +\infty }\). Note that, strictly speaking, \(N = N({\varepsilon })\) does not necessarily run through all integer values as \({\varepsilon }\rightarrow 0\), but this causes no difficulty.
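
As an illustration of (5.1) (not used in the proofs), one may take for \(\Lambda \) the cubic pattern \(\mathbb {Z}^3\) of Example 4.5, with one point per unit cell, so that \(m = 1\) and \(\check{\Lambda } = \Lambda \), and take for \(\mathcal {O}\) the unit ball. The short Python sketch below counts the points \(x \in {\varepsilon }\mathbb {Z}^3\) with \(B(x,{\varepsilon }) \subset \mathcal {O}\); the product \(N({\varepsilon }) {\varepsilon }^3\) indeed approaches \(|\mathcal {O}| = \frac{4\pi }{3} \approx 4.18879\) as \({\varepsilon }\rightarrow 0\).

```python
import numpy as np

def count_particles(eps, R=1.0):
    """Count the points x of eps*Z^3 such that B(x, eps) is contained in B(0, R)."""
    k = int(np.ceil(R / eps)) + 1
    grid = eps * np.arange(-k, k + 1, dtype=float)
    X, Y, Z = np.meshgrid(grid, grid, grid, indexing="ij")
    pts = np.stack([X, Y, Z], axis=-1).reshape(-1, 3)
    # B(x, eps) is a subset of B(0, R) if and only if |x| + eps <= R
    return int(np.sum(np.linalg.norm(pts, axis=1) + eps <= R))

for eps in [0.2, 0.1, 0.05, 0.02]:
    N = count_particles(eps)
    print(eps, N, N * eps**3)   # N * eps^3 tends to 4*pi/3
```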

More generally, for all \(\varphi \) smooth and compactly supported in \(\mathbb {R}^3\), ergodicity implies

$$\begin{aligned} \lim _{N \rightarrow +\infty } \frac{1}{N} \sum _{i=1}^N \varphi (x_{i})= & {} \lim _{N \rightarrow +\infty } \frac{1}{N} \sum _{x_i \in \mathcal {O}} \varphi (x_{i})\\= & {} \lim _{N \rightarrow +\infty } \frac{1}{{\varepsilon }^3 N} m \int _{\mathcal {O}} \varphi (x) \mathrm{d}x = \frac{1}{|\mathcal {O}|} \int _{\mathcal {O}} \varphi (x) \mathrm{d}x, \end{aligned}$$

which shows that (H1) is satisfied with \(f = \frac{1}{|\mathcal {O}|} 1_{\mathcal {O}}\). The hypothesis (H2) is also trivially satisfied, as well as (3.1). Our main theorem is

Theorem 5.1

Almost surely,

$$\begin{aligned} \lim _{N \rightarrow +\infty } \mathcal {V}_N = \frac{25}{2 m^2} \mathcal {W}({\nabla }H), \end{aligned}$$

with m the mean intensity of the process, and H the solution of (4.3) given in Corollary 4.9.

The rest of this paragraph is devoted to the proof of this theorem.

Let \(\eta \) be such that \(\eta < \frac{\min (c,1)}{4}\) and \(\eta < \frac{c}{2} (m|\mathcal {O}|)^{-1/3}\). By (5.1), it follows that, almost surely, for \({\varepsilon }\) small enough, \({\varepsilon }\eta < \frac{c}{2} N^{-1/3}\). By Corollary 3.9,

$$\begin{aligned} \lim _{N \rightarrow +\infty } \biggl ( \mathcal {V}_N + \frac{25 |\mathcal {O}|}{2 N^2} \Bigl ( \int _{\mathbb {R}^3} | {\nabla }h_{N}^{\eta {\varepsilon }} |^2 - \frac{N}{(\eta {\varepsilon })^3} \big ( \int _{B^1} |{\nabla }G_S^1|^2 + \frac{3}{10\pi } |S|^2 \big ) \Bigr ) \biggr ) = 0. \end{aligned}$$
(5.2)

We denote \(h_{\varepsilon }^\eta := h_N^{\eta {\varepsilon }}\), see (3.25)–(3.26). Let H be the solution of the blown-up system (4.3) provided by Corollary 4.9, \(H^\eta \) given in (4.4), and \(P^\eta \) as in (4.5). We define new fields \(\bar{h}^\eta _{\varepsilon }, \bar{p}^\eta _{\varepsilon }\) by the following conditions: \(\bar{h}^\eta _{\varepsilon }\in \dot{H}^1(\mathbb {R}^3)\),

$$\begin{aligned}&\bar{h}^\eta _{\varepsilon }(\omega ,x) = \frac{1}{{\varepsilon }^2} H^\eta \big (\frac{x}{{\varepsilon }}\big ) - \fint _{\mathcal {O}} \frac{1}{{\varepsilon }^2} H^\eta \big (\frac{\cdot }{{\varepsilon }}\big ), \, x \in \mathcal {O}\\&\overline{p}_{\varepsilon }^\eta (\omega ,x) = \frac{1}{{\varepsilon }^3} P^\eta \big (\frac{x}{{\varepsilon }}\big ) - \fint _{\mathcal {O}} \frac{1}{{\varepsilon }^3} P^\eta \big (\frac{\cdot }{{\varepsilon }}\big ), \, x \in \mathcal {O}\\&\quad - \Delta \bar{h}^\eta _{\varepsilon }+ {\nabla }\bar{p}^\eta _{\varepsilon }= 0, \quad \hbox {div}~\bar{h}^\eta _{\varepsilon }= 0 \, \text { in } \, \mathrm{ext} \,\mathcal {O}. \end{aligned}$$

We omit the dependence on \(\omega \) to lighten notation. We claim:

Proposition 5.2

$$\begin{aligned} \lim _{{\varepsilon }\rightarrow 0} -\frac{1}{N^2} \bigl ( \int _{\mathbb {R}^3} |{\nabla }\bar{h}^\eta _{\varepsilon }|^2 - \frac{N}{(\eta {\varepsilon })^3} ( \int _{B^1} |{\nabla }G_S^1|^2 + \frac{3}{10\pi } |S|^2 ) \bigr ) = \frac{1}{m^2|\mathcal {O}|} \mathcal {W}^\eta ({\nabla }H). \end{aligned}$$

Proposition 5.3

$$\begin{aligned} \lim _{{\varepsilon }\rightarrow 0} {\varepsilon }^6 \int _{\mathbb {R}^3} |{\nabla }(h^\eta _{\varepsilon }- \bar{h}^\eta _{\varepsilon })|^2 = 0. \end{aligned}$$

Note that, by Proposition 4.4 and our choice of \(\eta \), \(\mathcal {W}^\eta ({\nabla }H) = \mathcal {W}({\nabla }H)\). Theorem 5.1 follows directly from this fact, (5.2), and the propositions.

Proof of Proposition 5.2.

We know from Corollary 4.9 that

$$\begin{aligned} \mathcal {W}^\eta ({\nabla }H) = - \Big ( \mathbb {E} \int _{K_1}|{\nabla }H^\eta |^2 - \frac{m}{\eta ^3} \bigl ( \int _{B^1} |{\nabla }G_S^1|^2 + \frac{3}{10\pi } |S|^2 \bigr )\Big ). \end{aligned}$$

From this and relation (5.1), we see that the proposition amounts to the statement

$$\begin{aligned} \lim _{{\varepsilon }\rightarrow 0} \frac{{\varepsilon }^6}{|\mathcal {O}|}\int _{\mathbb {R}^3} |{\nabla }\bar{h}^\eta _{\varepsilon }|^2 = \mathbb {E}\int _{K_1} |{\nabla }H^\eta |^2. \end{aligned}$$

A simple application of the ergodic theorem shows that, almost surely,

$$\begin{aligned} \frac{{\varepsilon }^6}{|\mathcal {O}|}\int _{\mathcal {O}} |{\nabla }\bar{h}^\eta _{\varepsilon }|^2 = \frac{1}{|\mathcal {O}|} \int _{\mathcal {O}} |{\nabla }_y H^\eta \big (\frac{x}{{\varepsilon }}\big )|^2 \mathrm{d}x \rightarrow \mathbb {E}\int _{K_1} |{\nabla }H^\eta |^2. \end{aligned}$$

It remains to show that

$$\begin{aligned} \lim _{{\varepsilon }\rightarrow 0} \, {\varepsilon }^6 \int _{\mathrm{ext} \,\mathcal {O}} |{\nabla }\bar{h}^\eta _{\varepsilon }|^2 = 0. \end{aligned}$$
(5.3)

This will be deduced from the well-known fact that the Stokes solution \(\bar{h}^\eta _{\varepsilon }\) minimizes

$$\begin{aligned} \displaystyle \int _{\mathrm{ext} \,\mathcal {O}} |{\nabla }\bar{h}|^2 \end{aligned}$$

among divergence-free fields \(\bar{h}\) in \(\mathrm{ext} \,\mathcal {O}\) satisfying the Dirichlet condition \(\bar{h}\vert _{{\partial }\mathcal {O}} = \bar{h}^\eta _{\varepsilon }\vert _{{\partial }\mathcal {O}}\).

First, we prove that the \(H^{1/2}(\partial \mathcal {O})\)-norm of \({\varepsilon }^3 \bar{h}^\eta _{\varepsilon }\) goes to zero. To this end, we introduce for all \(\delta > 0\) a function \(\chi _\delta \) with \(\chi _\delta = 1\) in a \(\frac{\delta }{2}\)-neighborhood of \({\partial }\mathcal {O}\), \(\chi _\delta = 0\) outside a \(\delta \)-neighborhood of \({\partial }\mathcal {O}\). We write

$$\begin{aligned} \Vert {\varepsilon }^3 \bar{h}^\eta _{\varepsilon }\Vert _{H^{1/2}({\partial }\mathcal {O})}&= \Vert {\varepsilon }^3 \bar{h}^\eta _{\varepsilon }\chi _\delta \Vert _{H^{1/2}({\partial }\mathcal {O})} \\&\leqq C \left( \Vert {\varepsilon }^3 \bar{h}^\eta _{\varepsilon }\chi _\delta \Vert _{L^2(\mathcal {O})} + \Vert {\varepsilon }^3 {\nabla }\bar{h}^\eta _{\varepsilon }\chi _\delta \Vert _{L^2(\mathcal {O})} + \Vert {\varepsilon }^3 \bar{h}^\eta _{\varepsilon }{\nabla }\chi _\delta \Vert _{L^2(\mathcal {O})} \right) . \end{aligned}$$

By the ergodic theorem and Corollary 4.9, \({\varepsilon }^3 {\nabla }\bar{h}^\eta _{\varepsilon }= {\nabla }_y H^\eta (\frac{\cdot }{{\varepsilon }})\) converges almost surely weakly in \(L^2(\mathcal {O})\) to \(\mathbb {E}D_{\mathbf {H}^{\eta }} = 0\). Let \(\varphi \in L^2(\mathcal {O})\). By standard results on the divergence operator, cf [16], there exists \(v \in H^1_0(\mathcal {O})\) with \(\hbox {div}~v = \varphi - \fint _{\mathcal {O}} \varphi \), \(\Vert v\Vert _{H^1(\mathcal {O})} \leqq C_\mathcal {O}\Vert \varphi \Vert _{L^2(\mathcal {O})}\). As by definition \(\bar{h}^\eta _{\varepsilon }\) has zero mean over \(\mathcal {O}\), it follows that

$$\begin{aligned} \int _\mathcal {O}{\varepsilon }^3 \bar{h}^\eta _{\varepsilon }\varphi = \int _\mathcal {O}{\varepsilon }^3 \bar{h}^\eta _{\varepsilon }\, (\varphi - \fint _{\mathcal {O}} \varphi ) = - \int _\mathcal {O}{\varepsilon }^3 {\nabla }\bar{h}^\eta _{\varepsilon }\, v \rightarrow 0 \, \text { as } {\varepsilon }\rightarrow 0. \end{aligned}$$

Hence, \({\varepsilon }^3 \bar{h}^\eta _{\varepsilon }\) converges weakly to zero in \(H^1(\mathcal {O})\) and therefore strongly in \(L^2(\mathcal {O})\). It follows that, for any given \(\delta \),

$$\begin{aligned} \Vert {\varepsilon }^3 \bar{h}^\eta _{\varepsilon }\chi _\delta \Vert _{L^2(\mathcal {O})} \rightarrow 0, \quad \Vert {\varepsilon }^3 \bar{h}^\eta _{\varepsilon }{\nabla }\chi _\delta \Vert _{L^2(\mathcal {O})} \rightarrow 0 \quad \text {as } {\varepsilon }\rightarrow 0. \end{aligned}$$

To conclude, it is enough to show that \(\limsup _{{\varepsilon }\rightarrow 0} \Vert {\varepsilon }^3 {\nabla }\bar{h}^\eta _{\varepsilon }\chi _\delta \Vert _{L^2(\mathcal {O})}\) goes to zero as \(\delta \rightarrow 0\). This comes from

$$\begin{aligned} \Vert {\varepsilon }^3 {\nabla }\bar{h}^\eta _{\varepsilon }\chi _\delta \Vert _{L^2(\mathcal {O})}^2 = \int _\mathcal {O}|{\nabla }H^\eta (\cdot /{\varepsilon })|^2 \, \chi _\delta ^2 \, \xrightarrow [{\varepsilon }\rightarrow 0]{} \, \mathbb {E}|D_\mathbf{H}^\eta |^2 \int _\mathcal {O}\chi _\delta ^2 \leqq C \delta . \end{aligned}$$
(5.4)

Finally, \(\Vert {\varepsilon }^3 \bar{h}^\eta _{\varepsilon }\Vert _{H^{1/2}({\partial }\mathcal {O})} = o(1)\). To conclude that (5.3) holds, we notice that

$$\begin{aligned} \int _{{\partial }\mathcal {O}} \bar{h}^\eta _{\varepsilon }\cdot n = \int _{\mathcal {O}} \hbox {div}~\bar{h}^\eta _{\varepsilon }= 0. \end{aligned}$$

By classical results on the right inverse of the divergence operator, see [16], one can find for R such that \(\mathcal {O}\Subset B(0,R)\) a solution \(\bar{h}\) of the equation

$$\begin{aligned} \hbox {div}\bar{h} = 0 \quad \text { in } \, \mathrm{ext} \,\mathcal {O}\cap B(0,R), \quad \bar{h}\vert _{{\partial }\mathcal {O}} = \bar{h}^\eta _{\varepsilon }\vert _{{\partial }\mathcal {O}}, \quad \bar{h}\vert _{{\partial }B(0,R)} = 0, \end{aligned}$$

and such that

$$\begin{aligned} \Vert \bar{h} \Vert _{H^1(\mathrm{ext} \,\mathcal {O}\cap B(0,R))} \leqq C \Vert \bar{h}^\eta _{\varepsilon }\Vert _{H^{1/2}({\partial }\mathcal {O})} = o({\varepsilon }^{-3}). \end{aligned}$$

Extending \(\bar{h}\) by zero outside B(0, R), we find

$$\begin{aligned} \int _{\mathrm{ext} \,\mathcal {O}} |{\nabla }\bar{h}^\eta _{\varepsilon }|^2 \leqq \int _{\mathrm{ext} \,\mathcal {O}} |{\nabla }\bar{h}|^2 = o({\varepsilon }^{-6}). \end{aligned}$$
(5.5)

This concludes the proof of the proposition. \(\quad \square \)

Proof of Proposition 5.3.

Let \(h := h_{\varepsilon }^\eta - \bar{h}_{\varepsilon }^\eta \). It satisfies an equation of the form

$$\begin{aligned} -\Delta h + {\nabla }p = R_1 + R_2 + R_3, \quad \hbox {div}~h = 0 \text { in } \, \mathbb {R}^3, \end{aligned}$$

where the various source terms will now be defined. First,

$$\begin{aligned} R_1 := \sigma (\bar{h}_{\varepsilon }^\eta , \bar{p}^\eta _{\varepsilon })n\vert _{{\partial }(\mathrm{ext} \,\mathcal {O})} \, s_{{\partial }}. \end{aligned}$$

Here, the value of the stress is taken from \(\mathrm{ext} \,\mathcal {O}\), n refers to the normal vector pointing outward from \(\mathcal {O}\), and \(s_{{\partial }}\) refers to the surface measure on \({\partial }\mathcal {O}\). We recall that \(\bar{h}^\eta _{\varepsilon }\in \dot{H}^1(\mathbb {R}^3)\) does not jump at the boundary, but its derivatives do, so that one must specify from which side the stress is considered. Then,

$$\begin{aligned} R_2 := -\sigma (\bar{h}_{\varepsilon }^\eta , \bar{p}^\eta _{\varepsilon })n\vert _{{\partial }\mathcal {O}} \, s_{{\partial }} = - \frac{1}{{\varepsilon }^3} \sigma \Big (H^\eta , P^\eta - \fint _\mathcal {O}P^\eta (\omega ,\cdot /{\varepsilon })\Big )\bigl (\frac{\cdot }{{\varepsilon }}\bigr )\vert _{{\partial }\mathcal {O}}n \, s_{{\partial }}, \end{aligned}$$

with the value of the stress taken from \(\mathcal {O}\), and n as before. Noticing that \( S {\nabla }f = - \frac{1}{|\mathcal {O}|} Sn \, s_{\partial }\), we finally set

$$\begin{aligned} R_3 := -\mathbf{1}_{\mathcal {O}} \, \sum _{i \in I^\eta _{\varepsilon }} S^{\eta {\varepsilon }}(x-x_i) + \frac{N}{|\mathcal {O}|} Sn \, s_{{\partial }}, \end{aligned}$$

where

$$\begin{aligned} I^\eta _{\varepsilon }= \{ i, B(x_i, {\varepsilon }) \not \subset \mathcal {O}, \, B(x_i, \eta {\varepsilon }) \cap \mathcal {O}\ne \emptyset \}. \end{aligned}$$

Note that the term \(R_3\) is supported in pieces of spheres. From (3.20), we know that for all \(\eta > 0\),

$$\begin{aligned} \int _{\mathbb {R}^3} S^\eta = \int _{\mathbb {R}^3} \left( -\Delta G_S^\eta + {\nabla }p_S^\eta \right) = 0. \end{aligned}$$
(5.6)

This allows us to show that the integral of \(R_2+R_3\) is zero. Indeed,

$$\begin{aligned} \int _{\mathbb {R}^3} R_2&= \frac{1}{{\varepsilon }^4} \int _{\mathcal {O}} (-\Delta H^\eta + {\nabla }P^\eta )(\cdot /{\varepsilon }) = \int _{\mathcal {O}} \sum _{i, B(x_i, \eta {\varepsilon }) \cap \mathcal {O}\ne \emptyset } S^{\eta {\varepsilon }}(\cdot -x_i) \, \\&= \sum _{i \in I_{\varepsilon }^\eta } \int _{\mathcal {O}} S^{\eta {\varepsilon }}(\cdot -x_i), \end{aligned}$$

so that

$$\begin{aligned} \int _{\mathbb {R}^3} (R_2 + R_3) = \frac{N}{|\mathcal {O}|} \int _{{\partial }\mathcal {O}} S n \, \mathrm{d}s_{\partial }= 0. \end{aligned}$$
(5.7)

The point is now to prove that \({\varepsilon }^3 \Vert {\nabla }h\Vert _{L^2(\mathbb {R}^3)} \rightarrow 0\) as \({\varepsilon }\rightarrow 0\). From a simple energy estimate, and taking (5.7) into account, we find

$$\begin{aligned} \Vert {\nabla }h\Vert _{L^2(\mathbb {R}^3)}^2 = \langle R_1 , h \rangle + \langle R_2, h - \fint _{\mathcal {O}} h \rangle + \langle R_3, h - \fint _{\mathcal {O}} h \rangle . \end{aligned}$$
(5.8)

As \((\bar{h}_{\varepsilon }^\eta , \bar{p}_{\varepsilon }^\eta )\) is a solution of a homogeneous Stokes equation in \(\mathrm{ext} \,\mathcal {O}\), we get, from an integration by parts, that

$$\begin{aligned} \langle R_1 , h \rangle = \int _{\mathrm{ext} \,\mathcal {O}} {\nabla }\bar{h}_{\varepsilon }^\eta \cdot {\nabla }h \leqq \nu ({\varepsilon }){\varepsilon }^{-3} \Vert {\nabla }h \Vert _{L^2(\mathbb {R}^3)}, \quad \nu ({\varepsilon }) \rightarrow 0 \quad \text {as } {\varepsilon }\rightarrow 0, \end{aligned}$$
(5.9)

using the Cauchy–Schwarz inequality and the bound (5.5).

We now wish to show that

$$\begin{aligned} \langle (R_2 + R_3) , h - \fint _{\mathcal {O}} h \rangle \leqq \nu ({\varepsilon }) {\varepsilon }^{-3} \Vert {\nabla }h \Vert _{L^2(\mathbb {R}^3)} \end{aligned}$$
(5.10)

for some \(\nu ({\varepsilon })\) going to zero with \({\varepsilon }\). More precisely, we will prove that for any divergence-free \(\varphi \in \dot{H}^1(\mathbb {R}^3)\),

$$\begin{aligned} \langle (R_2 + R_3) , \varphi \rangle \leqq \nu ({\varepsilon }) {\varepsilon }^{-3} (\Vert {\nabla }\varphi \Vert _{L^2(\mathbb {R}^3)} + \Vert \varphi \Vert _{H^1(\mathcal {O})}), \quad \nu ({\varepsilon }) \rightarrow 0 \text { as } {\varepsilon }\rightarrow 0, \end{aligned}$$
(5.11)

which implies (5.10) by the Poincaré inequality. We first notice that

$$\begin{aligned} \langle R_2 , \varphi \rangle = \frac{1}{{\varepsilon }^3} \, \langle \, n \cdot F_2^{\varepsilon }, \varphi \, \rangle _{\langle H^{-1/2}({\partial }\mathcal {O}), H^{1/2}({\partial }\mathcal {O})\rangle }, \end{aligned}$$
(5.12)

where

$$\begin{aligned} F_2^{\varepsilon }:= {\varepsilon }^3 \left( 2 D(\bar{h}^\eta _{\varepsilon }) - \overline{p}_{\varepsilon }^\eta \mathrm{Id}\right) = 2 D(H^\eta )(\omega , \cdot /{\varepsilon }) - \left( P^\eta (\omega ,\cdot /{\varepsilon }) - \fint _\mathcal {O}P^\eta (\omega ,\cdot /{\varepsilon })\right) \mathrm{Id}. \end{aligned}$$
(5.13)

Then, we use the relation \(S^\eta = \hbox {div}~\Psi ^\eta \), cf. Lemma 3.5, and integrate by parts to get

$$\begin{aligned} \langle R_3, \varphi \rangle&= \frac{1}{{\varepsilon }^3} \sum _{i \in I^{\varepsilon }_\eta } \Biggl ( -\int _{{\partial }\mathcal {O}} n \cdot \Psi ^\eta \left( \frac{x-x_i}{{\varepsilon }}\right) \cdot \varphi (x) \mathrm{d}s_{\partial }(x) \\&\quad + \int _{\mathcal {O}} \Psi ^\eta \left( \frac{x-x_i}{{\varepsilon }}\right) : {\nabla }\varphi (x) \mathrm{d}x \Biggr ) + \frac{N}{|\mathcal {O}|}\int _{{\partial }\mathcal {O}} Sn(x) \cdot \varphi (x) \, \mathrm{d}s_{\partial }(x). \end{aligned}$$

For a fixed \(\eta \), there is a constant C (depending on \(\eta \)) such that

$$\begin{aligned}&\sum _{i \in I^{\varepsilon }_\eta } \int _{\mathcal {O}} \Psi ^\eta \left( \frac{x-x_i}{{\varepsilon }}\right) : {\nabla }\varphi (x) \mathrm{d}x \\&\quad \leqq C \sum _{i \in I^{\varepsilon }_\eta } \int _{B(x_i, \eta {\varepsilon }) \cap \mathcal {O}} |{\nabla }\varphi |(x) \mathrm{d}x \leqq C | \cup _{i \in I_{\varepsilon }^\eta } B(x_i, \eta {\varepsilon })|^{1/2} \Vert {\nabla }\varphi \Vert _{L^2(\mathbb {R}^3)}\\&\quad \leqq C {\varepsilon }^{1/2} \Vert {\nabla }\varphi \Vert _{L^2(\mathbb {R}^3)}. \end{aligned}$$

For the last inequality, we have used that all \(x_i\)’s with \(i \in I_{\varepsilon }^\eta \) belong to an \({\varepsilon }\)-neighborhood of \({\partial }\mathcal {O}\), so that \(|I_{\varepsilon }^\eta | = O({\varepsilon }^{-2})\). Hence,

$$\begin{aligned} \langle R_3, \varphi \rangle&\leqq \frac{1}{{\varepsilon }^3} \sum _{i \in I^{\varepsilon }_\eta } - \int _{{\partial }\mathcal {O}} n \cdot \Psi ^\eta \left( \frac{x-x_i}{{\varepsilon }}\right) \cdot \varphi (x) \mathrm{d}s_{\partial }(x) \nonumber \\&\quad + \frac{N}{|\mathcal {O}|}\int _{{\partial }\mathcal {O}} Sn(x) \cdot \varphi (x) \, \mathrm{d}s_{\partial }(x) + \nu ({\varepsilon }) {\varepsilon }^{-5/2} \Vert {\nabla }\varphi \Vert _{L^2(\mathcal {O})}. \end{aligned}$$
(5.14)

Let

$$\begin{aligned} F_3(\omega ) := -\sum _{z \in \Lambda } \Psi ^\eta (z) + m S, \quad F_3^{\varepsilon }(x) := F_3(\tau _{x/{\varepsilon }}(\omega )). \end{aligned}$$
(5.15)

We claim that \(\mathbb {E}\int _{K_1} F_3 = 0\). Indeed, by stationarity, for all \(R > 0\)

$$\begin{aligned} \mathbb {E}\sum _{z \in \Lambda } \Psi ^\eta (z)&= \frac{1}{R^3} \mathbb {E}\sum _{z \in \Lambda } \int _{K_R} \Psi ^\eta (y+z) \mathrm{d}y \\&= \frac{1}{R^3} \mathbb {E}\,\, \sum _{\begin{array}{c} z \in \Lambda , \\ K_R \supset B(-z,\eta ) \end{array}} \,\, \int _{K_R} \Psi ^\eta (y+z) \mathrm{d}y \\&\quad + \frac{1}{R^3} \mathbb {E}\,\, \sum _{\begin{array}{c} z \in \Lambda , \\ {\partial }K_R \cap B(-z,\eta ) \ne \emptyset \end{array}} \,\, \int _{K_R} \Psi ^\eta (y+z) \mathrm{d}y \\&\quad + \frac{1}{R^3} \mathbb {E}\,\, \sum _{\begin{array}{c} z \in \Lambda , \\ K_R \cap B(-z,\eta ) = \emptyset \end{array}} \,\, \int _{K_R} \Psi ^\eta (y+z) \mathrm{d}y \\&= \frac{1}{R^3} \mathbb {E}\,\, \sum _{\begin{array}{c} z \in \Lambda , \\ K_R \supset B(-z,\eta ) \end{array}} \int _{K_R} \Psi ^\eta (y+z) \mathrm{d}y \\&\quad + \frac{1}{R^3} \mathbb {E}\,\, \sum _{\begin{array}{c} z \in \Lambda , \\ {\partial }K_R \cap B(-z,\eta ) \ne \emptyset \end{array}} \,\, \int _{K_R} \Psi ^\eta (y+z) \mathrm{d}y \\&= \frac{1}{R^3} \mathbb {E}\big |\{z, K_R \supset B(0,\eta ) - z\}\big | \int _{B(0,\eta )} \Psi ^\eta (y) \mathrm{d}y \, + \, O\big (\frac{1}{R}\big ), \quad R \gg 1. \end{aligned}$$

We have crucially used the fact that \(\Psi ^\eta \) is supported in \(B(0,\eta )\). The \(O(\frac{1}{R})\)-term is associated with the points \(z \in \Lambda \) which lie in an \(\eta \)-neighborhood of \({\partial }K_R\): see the end of the proof of Corollary 4.9 for a similar reasoning. Sending R to infinity, we find that

$$\begin{aligned} \mathbb {E}F_3 = - m \int _{B(0,\eta )} \Psi ^\eta \, + m S. \end{aligned}$$

The last step is to compute \(\int _{B(0,\eta )} \Psi ^\eta \), which is independent of \(\eta \) by homogeneity. It is in particular equal to \(\lim _{\eta \rightarrow 0} \langle \Psi ^\eta , 1 \rangle \), a limit that was already computed in the proof of Lemma 3.5, cf. (3.23)–(3.24). We get \(\int _{B(0,\eta )} \Psi ^\eta = S\), which shows that \(\mathbb {E}F_3 = 0\).

By the definition of \(F^{\varepsilon }_3\), we can write

$$\begin{aligned}&\frac{1}{{\varepsilon }^3}\sum _{i \in I^{\varepsilon }_\eta } \int _{{\partial }\mathcal {O}\cap B(x_i, \eta {\varepsilon })} \,\, - n \cdot \Psi ^\eta \left( \frac{x-x_i}{{\varepsilon }}\right) \cdot \varphi (x) \mathrm{d}s_{\partial }(x) \\&\qquad + \, \frac{N}{|\mathcal {O}|}\int _{{\partial }\mathcal {O}} Sn(x) \cdot \varphi (x) \, \mathrm{d}s_{\partial }(x) \\&\quad = \frac{1}{{\varepsilon }^3}\int _{{\partial }\mathcal {O}} n(x) \cdot F^{\varepsilon }_3(x) \cdot \varphi (x) \mathrm{d}s_{\partial }(x) + \left( \frac{N}{|\mathcal {O}|} - \frac{m}{{\varepsilon }^3} \right) \int _{{\partial }\mathcal {O}} Sn \cdot \varphi \\&\quad \leqq \frac{1}{{\varepsilon }^3}\int _{{\partial }\mathcal {O}} n(x) \cdot F^{\varepsilon }_3(x) \cdot \varphi (x) \mathrm{d}s_{\partial }(x) + \nu ({\varepsilon }) {\varepsilon }^{-3}\Vert \varphi \Vert _{H^1(\mathcal {O})}, \quad \nu ({\varepsilon }) \xrightarrow [{\varepsilon }\rightarrow 0]{} 0, \end{aligned}$$

where the last inequality follows from (5.1). Plugging this inequality in (5.14), and combining with (5.12), we see that to derive (5.11), it remains to show that almost surely, for all divergence-free fields \(\varphi \in H^1(\mathcal {O})\),

$$\begin{aligned} | \langle n \cdot F^{\varepsilon }, \varphi \rangle _{\langle H^{-1/2}({\partial }\mathcal {O}), H^{1/2}({\partial }\mathcal {O})\rangle } | \leqq \nu ({\varepsilon }) \Vert \varphi \Vert _{H^1(\mathcal {O})}, \quad \nu ({\varepsilon }) \rightarrow 0 \text { as } {\varepsilon }\rightarrow 0, \end{aligned}$$
(5.16)

where \(F^{\varepsilon }:= F_2^{\varepsilon }+F_3^{\varepsilon }\). Notice that \(\hbox {div}~(F_2^{\varepsilon }+ F_3^{\varepsilon }) = 0\). We use again the functions \(\chi _\delta \), \(\delta > 0\), introduced above. We get

$$\begin{aligned}&\langle n \cdot F^{\varepsilon }, \varphi \rangle _{\langle H^{-1/2}({\partial }\mathcal {O}), H^{1/2}({\partial }\mathcal {O})\rangle } = \langle n \cdot \chi _\delta F^{\varepsilon }, \varphi \rangle _{\langle H^{-1/2}({\partial }\mathcal {O}), H^{1/2}({\partial }\mathcal {O})\rangle } \\&\quad = \int _\mathcal {O}({\nabla }\chi _\delta \cdot F^{\varepsilon }) \cdot \varphi - \int _\mathcal {O}\chi _\delta F^{\varepsilon }\cdot {\nabla }\varphi \end{aligned}$$

For the last term, we take into account that \(\varphi \) is divergence-free, so that the pressure part of \(F^{\varepsilon }\) gives no contribution. We find that

$$\begin{aligned} |\int _\mathcal {O}\chi _\delta F^{\varepsilon }\cdot {\nabla }\varphi | \leqq \left( \Vert 2 \chi _\delta D(H)(\cdot /{\varepsilon }) \Vert _{L^2(\mathcal {O})} + \Vert \chi _\delta F_3(\cdot /{\varepsilon }) \Vert _{L^2(\mathcal {O})} \right) \, \Vert \varphi \Vert _{H^1(\mathcal {O})}. \end{aligned}$$

As seen in (5.4), we have

$$\begin{aligned} \lim _{{\varepsilon }\rightarrow 0} \Vert 2 \chi _\delta D(H)(\cdot /{\varepsilon }) \Vert _{L^2(\mathcal {O})}^2 \leqq C \delta , \end{aligned}$$

and similarly,

$$\begin{aligned} \lim _{{\varepsilon }\rightarrow 0} \Vert \chi _\delta F_3^{\varepsilon }\Vert _{L^2(\mathcal {O})}^2 \leqq C \delta . \end{aligned}$$

For the first term, we write

$$\begin{aligned} \int _\mathcal {O}({\nabla }\chi _\delta \cdot F^{\varepsilon }) \cdot \varphi&= 2 \int _\mathcal {O}{\nabla }\chi _\delta \cdot {\varepsilon }^3 D(\bar{h}^\eta _{\varepsilon }) \cdot \varphi \\&\quad - \int _\mathcal {O}({\nabla }\chi _\delta \, {\varepsilon }^3 \overline{p}_{\varepsilon }^\eta ) \cdot \varphi + \int _\mathcal {O}({\nabla }\chi _\delta \cdot F^{\varepsilon }_3) \cdot \varphi . \end{aligned}$$

We know that \({\varepsilon }^3 D(\bar{h}^\eta _{\varepsilon })\) goes weakly to zero in \(L^2(\mathcal {O})\), so that it converges strongly to zero in \(H^{-1}(\mathcal {O})\). As \({\nabla }\chi _\delta \otimes \varphi \) belongs to \(H^1_0(\mathcal {O})\), we find that, for a fixed \(\delta \),

$$\begin{aligned} |2 \int _\mathcal {O}{\nabla }\chi _\delta \cdot {\varepsilon }^3 D(\bar{h}^\eta _{\varepsilon }) \cdot \varphi | \leqq C \Vert {\varepsilon }^3 D(\bar{h}^\eta _{\varepsilon }) \Vert _{H^{-1}(\mathcal {O})} \Vert {\nabla }\chi _\delta \varphi \Vert _{H^1(\mathcal {O})} \leqq \nu ({\varepsilon }) \Vert \varphi \Vert _{H^1(\mathcal {O})}. \end{aligned}$$

Similarly, as \(\mathbb {E}F_3 = 0\), \(F_3^{\varepsilon }\) converges weakly to zero in \(L^2(\mathcal {O})\) and we get

$$\begin{aligned} |\int _\mathcal {O}({\nabla }\chi _\delta \cdot F^{\varepsilon }_3) \cdot \varphi | \leqq \nu ({\varepsilon }) \Vert \varphi \Vert _{H^1(\mathcal {O})}. \end{aligned}$$

The last step is to prove that \({\varepsilon }^3 \overline{p}^\eta _{\varepsilon }\) converges weakly to zero in \(L^2(\mathcal {O})\), which will yield

$$\begin{aligned} |\int _\mathcal {O}({\nabla }\chi _\delta \, {\varepsilon }^3 \overline{p}_{\varepsilon }^\eta ) \cdot \varphi | \leqq \nu ({\varepsilon }) \Vert \varphi \Vert _{H^1(\mathcal {O})}. \end{aligned}$$

As above, for \(\phi \in L^2(\mathcal {O})\), we introduce \(v \in H^1_0(\mathcal {O})\) such that \(\hbox {div}~v = \phi - \fint _\mathcal {O}\phi \), \(\Vert v\Vert _{H^1(\mathcal {O})} \leqq C_\mathcal {O}\Vert \phi \Vert _{L^2(\mathcal {O})}\). Then, using the equation satisfied by \(\overline{p}_{\varepsilon }^\eta \) in \(\mathcal {O}\),

$$\begin{aligned} - \Delta {\varepsilon }^3 \bar{h}^\eta _{\varepsilon }+ {\nabla }{\varepsilon }^3 \overline{p}^\eta _{\varepsilon }= \hbox {div}\, F_3^{\varepsilon }, \end{aligned}$$

we find, after integration by parts, that

$$\begin{aligned} \int _{\mathcal {O}} {\varepsilon }^3 \overline{p}_{\varepsilon }^\eta \phi = \int _{\mathcal {O}} {\varepsilon }^3 \overline{p}_{\varepsilon }^\eta (\phi - \fint _\mathcal {O}\phi ) = \int _{\mathcal {O}} {\varepsilon }^3 {\nabla }\bar{h}^\eta _{\varepsilon }: {\nabla }v + \int _{\mathcal {O}} F_3^{\varepsilon }: {\nabla }v \xrightarrow [{\varepsilon }\rightarrow 0]{} 0. \end{aligned}$$

This concludes the proof of (5.16), of Proposition 5.3 and of the theorem. \(\square \)

5.2 Formula for Periodic Point Distributions

Theorem 5.1 gives the limit of \(\mathcal {V}_N\) for properly rescaled stationary and ergodic point processes, under uniform separation of the points. This setting includes periodic point distributions, as well as Poisson hard-core processes. We focus here on the periodic case, for which a more explicit formula can be given. For \(L > 0\), we consider distinct points \(a_1, \dots , a_M\) in \(K_L\), and set \(\Lambda _0 := \{a_1, \dots , a_M\} + L \mathbb {Z}^3\), which can be seen as a subset of \(\mathbb {T}_L^3\). In Example 4.5, we explained how to build a process on \(\mathbb {T}_L^3\) out of \(\Lambda _0\), with \(\Lambda (\omega ) = \Lambda _0 + \omega \), \(\omega \in \mathbb {T}_L^3\). By a simple translation, the results above, which are valid for \(\Lambda _0 + \omega \) for almost every \(\omega \), remain valid for \(\omega =0\). Thus, for \(\Lambda = \Lambda _0\), we deduce from Proposition 4.7 the existence of an \(L\mathbb {Z}^3\)-periodic solution \(\mathbf{H^\eta }\) of (4.5) with \({\nabla }\mathbf{H^\eta } \in L^2_{loc}\). If we further assume that \(\mathbf{H^\eta }\) is mean-free, it is clearly unique. Then, following Corollary 4.9 and Theorem 5.1, there exists an \(L\mathbb {Z}^3\)-periodic solution H of (4.3), such that

$$\begin{aligned}&\lim _{N \rightarrow +\infty } \mathcal {V}_N = \frac{25L^6}{2M^2} \mathcal {W}({\nabla }H), \quad \mathcal {W}({\nabla }H) = \lim _{\eta \rightarrow 0} \mathcal {W}^\eta ({\nabla }H), \nonumber \\&\mathcal {W}^\eta ({\nabla }H) = - \left( \fint _{K_L}|{\nabla }H^\eta |^2 - \frac{M}{L^3\eta ^3} \Bigl ( \int _{B^1} |{\nabla }G_S^1|^2 + \frac{3}{10\pi } |S|^2 \Bigr )\right) , \end{aligned}$$
(5.17)

where \(H^\eta \) is associated to H by (4.4). We have used that in the periodic case, the intensity of the process is \(m=\frac{M}{L^3}\), while the expectation is simply the average over \(K_L\).

To make things more explicit, we introduce the periodic Green function \(G_{S,L} : \mathbb {R}^3 \rightarrow \mathbb {R}^3\), satisfying

$$\begin{aligned} -\Delta G_{S,L} + {\nabla }p_{S,L} = S{\nabla }\delta _0 \,, \;\; \hbox {div}~G_{S,L} = 0 \, \text {in } K_L, \;\; G_{S,L} \, L\mathbb {Z}^3\text {-periodic}, \;\; \int _{K_L} G_{S,L} = 0. \end{aligned}$$
(5.18)

The Green function \(G_{S,L}\) is easily expressed in Fourier series. If we write

$$\begin{aligned} G_{S,L}(y) = \sum _{k \in \mathbb {Z}^3_*} e^{\frac{2i\pi k}{L} \cdot y} \widehat{G}_{S,L}(k), \end{aligned}$$

a straightforward calculation shows that, for all \(k \in \mathbb {Z}^3_*\),

$$\begin{aligned} \widehat{G}_{S,L}(k) = \frac{i}{2\pi L^2 |k|} \left( S \frac{k}{|k|} - \frac{Sk \cdot k}{|k|^2} \frac{k}{|k|} \right) = \frac{i}{2\pi L^2 |k|^2} \pi ^\perp _kSk, \end{aligned}$$

where \(\pi ^\perp _k\) denotes the orthogonal projection onto the plane orthogonal to the line \(\mathbb {R}k\). Note that the Fourier series for \(G_{S,L}\) converges, for instance, in the quadratic sense, that is, in \(L^2(K_L)\).
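
As a quick numerical check of this expression (purely illustrative, with an arbitrarily chosen trace-free symmetric matrix S), the Python sketch below evaluates \(\widehat{G}_{S,L}(k)\) and verifies the incompressibility relation \(k \cdot \widehat{G}_{S,L}(k) = 0\), which is the Fourier counterpart of \(\hbox {div}~G_{S,L} = 0\).

```python
import numpy as np

def ghat(k, S, L=1.0):
    """Fourier coefficient of the periodic Green function G_{S,L}:
    ghat(k) = i / (2*pi*L^2*|k|^2) * P_k_perp(S k), for k in Z^3, k != 0,
    where P_k_perp is the projection orthogonally to the line R k."""
    k = np.asarray(k, dtype=float)
    Sk = S @ k
    proj = Sk - ((Sk @ k) / (k @ k)) * k        # P_k_perp (S k)
    return 1j * proj / (2.0 * np.pi * L**2 * (k @ k))

# an arbitrary trace-free symmetric matrix S, as in the text
S = np.array([[1.0, 0.5, 0.0],
              [0.5, -1.0, 0.3],
              [0.0, 0.3, 0.0]])

for k in [(1, 0, 0), (1, 2, -1), (3, -2, 5)]:
    g = ghat(k, S)
    print(k, abs(np.array(k, dtype=float) @ g))  # k . ghat(k) = 0 up to round-off
```

The proposition below gives the resulting formula for the limit of \(\mathcal {V}_N\).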

Proposition 5.4

$$\begin{aligned} \lim _{N \rightarrow +\infty } \mathcal {V}_N \, = \, \frac{25 L^3}{2M^2} \, \left( \sum _{i\ne j \in \{1,\dots ,M\}} S {\nabla }\cdot G_{S,L}(a_i - a_j) + M \lim _{y \rightarrow 0} S{\nabla }\cdot (G_{S,L}(y) - G_{S}(y)) \right) . \end{aligned}$$

Proof

Clearly, the \(L\mathbb {Z}^3\)-periodic field defined on \(K_L\) by \(\tilde{H}(y) := \sum _{i=1}^M G_{S,L}(y+a_i)\) is a solution of (4.3), and by Proposition 4.3, \({\nabla }\tilde{H}\) and \({\nabla }H\) differ by a constant matrix. As \({\nabla }(\tilde{H} - H) = {\nabla }(\tilde{H}^\eta - H^\eta )\) is the gradient of a periodic function, this constant is zero, so that \({\nabla }\tilde{H} = {\nabla }H\). Up to adding a constant field to H, we can assume that

$$\begin{aligned} H(y) = \sum _{i=1}^M G_{S,L}(y+a_i). \end{aligned}$$

Then, if \(\eta \) is small enough so that \(B(a_i, \eta ) \subset K_L\) for all i, \(H^\eta \) is the L-periodic field given on \(K_L\) by

$$\begin{aligned} H^\eta (y) = \sum _{i=1}^M \left( G_{S,L}(y+a_i) + (G_S^\eta - G_S)(y+a_i) \right) . \end{aligned}$$

We integrate by parts to find

$$\begin{aligned} \frac{1}{L^3} \int _{K_L} |{\nabla }H^\eta |^2&= \frac{1}{L^3} \int _{K_L} \sum _{i=1}^M H^\eta \mathrm{d}S^\eta (\cdot +a_i) \\&= \frac{1}{L^3} \int _{K_L} \sum _{i,j } G_{S,L}(\cdot +a_j) \mathrm{d}S^\eta (\cdot +a_i) \\&\quad + \frac{1}{L^3} \int _{K_L} \sum _{i,j } (G_S^\eta - G_S)(\cdot +a_j) \mathrm{d}S^\eta (\cdot +a_i) \\&= \frac{1}{L^3} \sum _{i \ne j} \int _{K_L} G_{S,L}(\cdot +a_j) \mathrm{d}S^\eta (\cdot +a_i) \\&\quad + \frac{1}{L^3} \sum _i \int _{K_L} G_{S,L}(\cdot +a_i) \mathrm{d}S^\eta (\cdot +a_i), \end{aligned}$$

where we have used that the last term of the second line vanishes identically. We then write \(G_{S,L} = G_S + \phi _{S,L}\) with \(\phi _{S,L}\) smooth near 0 to obtain

$$\begin{aligned} \frac{1}{L^3} \int _{K_L} |{\nabla }H^\eta |^2&= \sum _{i \ne j} \frac{1}{L^3} \int _{K_L} G_{S,L}(\cdot +a_j) \mathrm{d}S^\eta (\cdot +a_i) \\&\quad + \frac{1}{L^3} \sum _i \int _{K_L} \phi _{S,L}(\cdot +a_i) \mathrm{d}S^\eta (\cdot +a_i) + \frac{M}{L^3} \int _{\mathbb {R}^3} G_S \mathrm{d}S^\eta . \end{aligned}$$

Combining this with Lemma 3.6 and (5.17), we get

$$\begin{aligned} \lim _{N \rightarrow \infty } \mathcal {V}_N= & {} -\frac{25L^6}{2M^2} \lim _{\eta \rightarrow 0} \Big ( \sum _{i \ne j} \frac{1}{L^3} \int _{K_L} G_{S,L}(\cdot +a_j) \mathrm{d}S^\eta (\cdot +a_i) \\&+ \frac{1}{L^3} \sum _i \int _{K_L} \phi _{S,L}(\cdot +a_i) \mathrm{d}S^\eta (\cdot +a_i) \Big ). \end{aligned}$$

We conclude by the last point of Lemma 3.5 that

$$\begin{aligned} \lim _{N \rightarrow \infty } \mathcal {V}_N = \frac{25 L^3}{2M^2} \Big ( \sum _{i \ne j} S {\nabla }\cdot G_{S,L}(a_i - a_j) \, + \, M S{\nabla }\cdot \phi _{S,L}(0) \Big ). \end{aligned}$$

\(\square \)

Proposition 5.5

(Simple cubic lattice). In the special case where \(L=M=1\), we find

$$\begin{aligned} \lim _{N \rightarrow \infty } \mathcal {V}_N = \alpha \sum _i S_{ii}^2 + \beta \sum _{i \ne j} S_{ij}^2, \end{aligned}$$

with \(\alpha = \frac{5}{2} (1 - 60 a)\), \(\beta = \frac{5}{2} (1 + 40 a)\), where \(a \approx -0.04655\) is the constant defined in (5.19).

Proof

When \(M=L=1\), the formula from the last proposition simplifies to \(\lim _N \mathcal {V}_N = \frac{25}{2} S{\nabla }\cdot \phi _{S,1}(0)\), with \(\phi _{S,1} = G_{S,1} - G_S\). The periodic Green function \(G_{S,1}\) was computed using Fourier series in the previous paragraph. We found

$$\begin{aligned} G_{S,1}(y)&= \sum _{k \in \mathbb {Z}^3_*} \frac{i}{2\pi |k|} \left( S \frac{k}{|k|} - \frac{Sk \cdot k}{|k|^2} \frac{k}{|k|} \right) e^{2i\pi k \cdot y} \\&= S {\nabla }\left( \sum _{k \in \mathbb {Z}^3_*} \frac{1}{4\pi ^2 |k|^2} e^{2i\pi k \cdot y} \right) {+} S : ({\nabla }\otimes {\nabla }) {\nabla }\left( \sum _{k \in \mathbb {Z}^3_*} \frac{1}{16\pi ^4 |k|^4} e^{2i\pi k \cdot y} \right) . \end{aligned}$$

We use formulas from [20] (see also [42, Eqs. (64)–(65)]) to get

$$\begin{aligned} \sum \frac{1}{4\pi ^2 |k|^2} e^{2i\pi k \cdot y} = \frac{1}{4\pi } \left( \frac{1}{|y|} - c_1 + \frac{2\pi }{3} |y|^2 + O(|y|^4) \right) \end{aligned}$$

and

$$\begin{aligned} \sum \frac{1}{16\pi ^4 |k|^4} e^{2i\pi k \cdot y} = -\frac{1}{4\pi } \left( \frac{|y|}{2} - c_2 - \frac{c_1}{6}|y|^2 + \frac{\pi }{30} |y|^4 + a P(y) + O(|y|^6) \right) , \end{aligned}$$
(5.19)

where \(c_1\) and \(c_2\) are constants, and

$$\begin{aligned} P(y) = \frac{4\pi }{3} \Big ( \frac{5}{8} (y_1^4 + y_2^4 + y_3^4) - \frac{15}{4} (y_1^2 y_2^2 + y_1^2 y_3^2 + y_2^2 y_3^2) + \frac{3}{8} |y|^4 \Big ). \end{aligned}$$

Note that formula (5.19) implicitly defines a. A numerical computation was carried out in [42] (see also [34]), giving \(a \approx -0.04655\).

Inserting these expansions into the expression for \(G_{S,1}\), we find, after a tedious calculation, that

$$\begin{aligned} S {\nabla }\cdot G_{S,1}(y)= & {} S {\nabla }\cdot \big ( - \frac{3}{8\pi } \frac{(Sy \cdot y) y}{|y|^5} \big )\\&+ \frac{1}{5} |S|^2 - 12 a \sum _i S_{ii}^2 + 8 a \sum _{i \ne j} |S_{ij}|^2 + O(|y|). \end{aligned}$$

Note that to carry out this calculation, we used the fact that S is trace-free, which leads to the identity

$$\begin{aligned} 0 = \left( \sum _i S_{ii}\right) ^2 = \sum _i S_{ii}^2 + \sum _{i \ne j} S_{ii} S_{jj}. \end{aligned}$$

Moreover, we know from (3.12) that

$$\begin{aligned} G_S(y) = -\frac{3}{8\pi } \frac{(Sy \cdot y) y}{|y|^5}. \end{aligned}$$

We end up with

$$\begin{aligned} S{\nabla }\cdot \phi _{S,1}(0) = \frac{1}{5}|S|^2 - 12 a \sum _i S_{ii}^2 + 8 a \sum _{i \ne j} |S_{ij}|^2, \end{aligned}$$

which, after multiplication by \(\frac{25}{2}\) and use of the trace-free identity above, gives the claimed formula for \(\lim _N \mathcal {V}_N\). \(\square \)
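
For the record, plugging the numerical value \(a \approx -0.04655\) into the expressions of Proposition 5.5 gives approximate values for \(\alpha \) and \(\beta \); the short computation below is a mere convenience.

```python
a = -0.04655                  # lattice constant implicitly defined by (5.19)
alpha = 2.5 * (1 - 60 * a)    # coefficient of sum_i S_ii^2
beta = 2.5 * (1 + 40 * a)     # coefficient of sum_{i != j} S_ij^2
print(alpha, beta)            # approximately 9.48 and -2.16
```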

5.3 Formula in the Stationary Case with the 2-Point Correlation Function

We consider here the case of random point processes in \(\mathbb {R}^3\) (\(X=\mathbb {R}\)), such that (P1)–(P2)–(P3) hold. We further assume that the mean intensity is \(m=1\). We assume moreover that this point process admits a 2-point correlation function, that is, a function \(\rho _2=\rho _2(x,y) \in L^1_{loc}(\mathbb {R}^3 \times \mathbb {R}^3)\) such that for all bounded sets K and all F smooth in a neighborhood of \(K \times K\),

$$\begin{aligned} \mathbb {E}\sum _{z \ne z' \in \Lambda \cap K} F(z,z') = \int _{K \times K} F(x,y) \rho _2(x,y) \mathrm{d}x \mathrm{d}y. \end{aligned}$$
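
For orientation, a homogeneous Poisson process of intensity m satisfies \(\rho _2(x,y) = m^2\) (such a process, however, does not satisfy (P3)), while for the hard-core processes of Section 4, the uniform separation forces \(\rho _2(x,y) = 0\) whenever \(|x-y| < c\).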

As the process is stationary, one can write \(\rho _2(x,y) = \rho (x-y)\). Our goal is to prove the following formula:

Proposition 5.6

Almost surely,

$$\begin{aligned} \lim _N \mathcal {V}_N&= \frac{25}{2} \lim _{L \rightarrow +\infty } \frac{1}{L^3} \, \sum _{\begin{array}{c} z\ne z' \in \Lambda \cap K_{L-1} \end{array}} S {\nabla }\cdot G_{S,L}(z - z') \\&= \frac{25}{2} \lim _{L \rightarrow +\infty } \frac{1}{L^3} \int _{K_{L-1} \times K_{L-1}} S{\nabla }\cdot G_{S,L}(z-z') \rho (z-z') dz dz', \end{aligned}$$

where \(G_{S,L}\) refers to the \(L \mathbb {Z}^3\)-periodic Green function introduced in (5.18).

Remark 5.7

We remind the reader that the periodic Green function \(G_{S,L}\) has singularities at each point of \(L \mathbb {Z}^3\). But as the sum is restricted to points \(z,z'\) in \(\Lambda \cap K_{L-1}\), \(z-z'\) always stays away from this set of singularities. In the same way, the integral over \(K_{L-1} \times K_{L-1}\) in the second equality is well-defined. Under further assumptions on the two-point correlation function \(\rho \), one could make sense of the integral over \(K_L \times K_L\) and replace the former by the latter.

Proof

Let \(\eta \) be small enough that Proposition 4.4 holds. We have

$$\begin{aligned} \mathcal {W}({\nabla }H) = \mathcal {W}^\eta ({\nabla }H) = - \left( \mathbb {E} \int _{K_1}|{\nabla }H^\eta |^2 - \frac{1}{\eta ^3} \Bigl ( \int _{B^1} |{\nabla }G_S^1|^2 + \frac{3}{10\pi } |S|^2 \Bigr )\right) . \end{aligned}$$

Let \(H_L = \sum _{i=1}^M G_{S,L}(\cdot +a_i)\), where \(\{ a_1, \dots , a_M\} = \Lambda \cap K_{L-1}\). Note that \(H_L\) is associated with the point process \(\Lambda _L\) obtained by \(L \mathbb {Z}^3\)-periodization of \(\Lambda \cap K_{L-1}\). We shall prove below that

$$\begin{aligned} \mathbb {E} \int _{K_1}|{\nabla }H^\eta |^2 = \lim _{L \rightarrow +\infty } \frac{1}{L^3} \int _{K_L} |{\nabla }H^\eta _L|^2, \text { almost surely}. \end{aligned}$$
(5.20)

As \(\frac{M}{L^3} = \frac{|\Lambda \cap K_{L-1}|}{L^3} \rightarrow 1\) as \(L \rightarrow +\infty \), it follows from (5.20) that

$$\begin{aligned} \mathcal {W}({\nabla }H)&= \lim _{L \rightarrow +\infty } - \left( \frac{1}{L^3} \int _{K_L} |{\nabla }H^\eta _L|^2 - \frac{M}{(\eta L)^3} \Bigl ( \int _{B^1} |{\nabla }G_S^1|^2 + \frac{3}{10\pi } |S|^2 \Bigr )\right) , \nonumber \\&= \lim _{L \rightarrow +\infty } \mathcal {W}^\eta ({\nabla }H_L) = \lim _{L \rightarrow +\infty } \mathcal {W}({\nabla }H_L), \end{aligned}$$
(5.21)

where the last equality comes from Proposition 4.4. This proposition applies because the \(L\mathbb {Z}^3\)-periodized configuration \(\Lambda _L\) has a minimal distance between points that is independent of L. This is the reason why we used \(K_{L-1}\) instead of \(K_L\) in the definition of \(\Lambda _L\). Finally, by Proposition 5.4,

$$\begin{aligned}&\lim _{L \rightarrow +\infty } \mathcal {W}({\nabla }H_L) \\&\quad = \lim _{L \rightarrow +\infty } \Big ( \frac{1}{L^3} \sum _{i\ne j \in \{1,\dots ,M\}} S {\nabla }\cdot G_{S,L}(a_i - a_j) + \frac{M}{L^3} \lim _{y \rightarrow 0} S{\nabla }\cdot (G_{S,L}(y) - G_{S}(y)) \Big ). \end{aligned}$$

Using that

$$\begin{aligned} G_{S,L}(y) = \frac{1}{L^2} G_{S,1}\left( \frac{y}{L}\right) , \quad G_{S}(y) = \frac{1}{L^2} G_{S}\left( \frac{y}{L}\right) , \end{aligned}$$

we get that

$$\begin{aligned} \frac{M}{L^3} \Big |\lim _{y \rightarrow 0} S{\nabla }\cdot (G_{S,L} - G_{S})(y)\Bigr |&\leqq C \Big |\lim _{y \rightarrow 0} S{\nabla }\cdot (G_{S,L} - G_{S})(y)\Big | \\&\leqq \frac{C'}{L^3} \Big | \lim _{y \rightarrow 0} S {\nabla }\cdot (G_{S,1} - G_{S})(y/L)\Big | = O(L^{-3}). \end{aligned}$$

We obtain

$$\begin{aligned} \mathcal {W}({\nabla }H) = \lim _{L \rightarrow +\infty } \frac{1}{L^3} \, \sum _{i\ne j \in \{1,\dots ,M\}} S {\nabla }\cdot G_{S,L}(a_i - a_j). \end{aligned}$$
(5.22)

This is the first formula of the proposition. To prove the second one, one can go back to formula (5.21) and take the expectation of both sides. The left-hand side, which is deterministic, is of course unchanged. As for the right-hand side, one can swap the limit in L and the expectation by invoking the dominated convergence theorem. Indeed, both terms \(\frac{1}{L^3} \int _{K_L} |{\nabla }H^\eta _L|^2\) and \(\frac{M}{(\eta L)^3} \Bigl ( \int _{B^1} |{\nabla }G_S^1|^2 + \frac{3}{10\pi } |S|^2 \Bigr )\) are bounded uniformly in L and in the random parameter \(\omega \) (but not uniformly in \(\eta \)): the first term is bounded through a simple energy estimate, while the second one is bounded thanks to the almost sure separation assumption.

The final step is to prove (5.20), almost surely. We set \({\varepsilon }:= \frac{1}{L}\), and introduce, for all \(x \in K_1\),

$$\begin{aligned} h^\eta _{{\varepsilon }}(x) = \frac{1}{{\varepsilon }^2} H^\eta _L(\frac{x}{{\varepsilon }}), \quad p_{{\varepsilon }}^\eta (x) = \frac{1}{{\varepsilon }^3} p^\eta _L(\frac{x}{{\varepsilon }}), \end{aligned}$$

and similarly, for all \(x \in K_1\),

$$\begin{aligned} \bar{h}^\eta _{\varepsilon }(x)&= \frac{1}{{\varepsilon }^2} H^\eta (\frac{x}{{\varepsilon }}) - \fint _{K_1} \frac{1}{{\varepsilon }^2} H^\eta (\frac{\cdot }{{\varepsilon }}),\\ \overline{p}^\eta _{\varepsilon }(x)&= \frac{1}{{\varepsilon }^3} P^\eta (\frac{x}{{\varepsilon }}) - \fint _{K_1} \frac{1}{{\varepsilon }^3} P^\eta (\frac{\cdot }{{\varepsilon }}), \end{aligned}$$

where \((H^\eta , P^\eta )\) refers to the field built in Proposition 4.7. Clearly,

$$\begin{aligned} {\varepsilon }^6 \int _{K_1} |{\nabla }h^\eta _{{\varepsilon }}|^2 = \frac{1}{L^3} \int _{K_L} |{\nabla }H_{L}^\eta |^2, \end{aligned}$$

while, by the ergodic theorem, one has, almost surely, that

$$\begin{aligned} {\varepsilon }^6 \int _{K_1} |{\nabla }\bar{h}^\eta _{{\varepsilon }}|^2 = \frac{1}{L^3} \int _{K_L} |{\nabla }H^\eta |^2 \xrightarrow [{\varepsilon }\rightarrow 0]{} \mathbb {E}\int _{K_1} |{\nabla }H^\eta |^2. \end{aligned}$$

It remains to show that

$$\begin{aligned} {\varepsilon }^6 \int _{K_1} |{\nabla }(\bar{h}^\eta _{{\varepsilon }} - h^\eta _{{\varepsilon }})|^2 \rightarrow 0 \quad \text {as } {\varepsilon }\rightarrow 0. \end{aligned}$$

We notice that the difference \(h_{\varepsilon }= \bar{h}^\eta _{\varepsilon }- h^\eta _{{\varepsilon }}\) satisfies the Stokes equation

$$\begin{aligned} - \Delta h_{\varepsilon }+ {\nabla }p_{\varepsilon }= \frac{1}{{\varepsilon }^3}\hbox {div}~(R_{\varepsilon }- R_{{\varepsilon },L}), \quad \hbox {div}~h_{{\varepsilon }} = 0 \quad \text { in } K_1, \end{aligned}$$

where

$$\begin{aligned} R_{\varepsilon }:= \sum _{z \in \Lambda } \Psi ^\eta (x/{\varepsilon }+ z), \quad \quad R_{{\varepsilon },L} := \sum _{z \in \Lambda _L} \Psi ^\eta (x/{\varepsilon }+ z), \end{aligned}$$

and where we recall that \(\Lambda _L\) is obtained by \(L\mathbb {Z}^3\)-periodization of \(\Lambda \cap K_{L-1}\). Testing against \({\varepsilon }^6 h_{\varepsilon }\), we find

$$\begin{aligned} {\varepsilon }^6 \int _{K_1} |{\nabla }h_{\varepsilon }|^2= & {} - \int _{K_1} (R_{\varepsilon }- R_{{\varepsilon },L}) {\varepsilon }^3 {\nabla }h_{\varepsilon }\nonumber \\&+ \int _{{\partial }K_1} F_{\varepsilon }n \cdot {\varepsilon }^3 (h_{\varepsilon }- \fint _{K_1} h_{\varepsilon }) \, - \, \int _{{\partial }K_1} G_{\varepsilon }n \cdot {\varepsilon }^3 h_{\varepsilon }, \end{aligned}$$
(5.23)

where

$$\begin{aligned} F_{{\varepsilon }}(x)&:= {\nabla }H^\eta (\frac{x}{{\varepsilon }}) - P^\eta \big (\frac{x}{{\varepsilon }}\big ) I_d + \int _{K_1} P^\eta \big (\frac{\cdot }{{\varepsilon }}\big ) I_d + \tilde{F}(x), \\ G_{\varepsilon }(x)&:= {\nabla }H^\eta _L(\frac{x}{{\varepsilon }}) - P^\eta _L(\frac{x}{{\varepsilon }}) I_d + \tilde{G}(x) , \end{aligned}$$

with

$$\begin{aligned} \tilde{F}(x) := \sum _{z \in \Lambda } \Psi ^\eta (x/{\varepsilon }+z) - S, \quad \tilde{G}(x) := \sum _{z \in \Lambda _L} \Psi ^\eta (x/{\varepsilon }+z) - S. \end{aligned}$$

Note that both \(F_{\varepsilon }\) and \(G_{\varepsilon }\) are divergence-free.

To handle the first term on the right-hand side of (5.23), we notice that, as \(\Lambda \) and \(\Lambda _L\) coincide inside \(K_{L-1}\), the points of \(\Lambda \bigtriangleup \Lambda _L\) involved below lie in an \(O(1)\)-neighbourhood of \({\partial }K_L\), so that, by the almost sure separation of the points,

$$\begin{aligned} \left| \{ z \in \Lambda \bigtriangleup \Lambda _L, K_L \cap B(-z,\eta ) \ne \emptyset \} \right| = O(L^2) = O({\varepsilon }^{-2}), \end{aligned}$$

resulting in

$$\begin{aligned} \int _{K_1} (R_{\varepsilon }- R_{{\varepsilon },L}) {\varepsilon }^3 {\nabla }h_{\varepsilon }&\leqq C \Big ( {\varepsilon }\int _{\mathbb {R}^3} |\Psi ^\eta |^2\Big )^{1/2} \Vert {\varepsilon }^3 {\nabla }h_{\varepsilon }\Vert _{L^2(K_1)} \\&\leqq C {\varepsilon }^{1/2} \Vert {\varepsilon }^3 {\nabla }h_{\varepsilon }\Vert _{L^2(K_1)}. \end{aligned}$$
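
Let us sketch the first inequality. Assuming, as the set considered above indicates, that \(\Psi ^\eta \) is supported in \(B(0,\eta )\), only the \(O({\varepsilon }^{-2})\) points of \(\Lambda \bigtriangleup \Lambda _L\) counted above contribute to \(R_{\varepsilon }- R_{{\varepsilon },L}\) on \(K_1\); since, by the almost sure separation of the points, the corresponding translates of \(\Psi ^\eta \) have bounded overlap, the change of variables \(y = x/{\varepsilon }\) gives

$$\begin{aligned} \int _{K_1} |R_{\varepsilon }- R_{{\varepsilon },L}|^2 \, \mathrm{d}x = {\varepsilon }^3 \int _{K_L} \Big | \sum _{z \in \Lambda \bigtriangleup \Lambda _L} \pm \Psi ^\eta (y+z) \Big |^2 \mathrm{d}y \leqq C {\varepsilon }^3 \, {\varepsilon }^{-2} \int _{\mathbb {R}^3} |\Psi ^\eta |^2 = C {\varepsilon }\int _{\mathbb {R}^3} |\Psi ^\eta |^2, \end{aligned}$$

and the first inequality above then follows from the Cauchy–Schwarz inequality.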

As regards the second term, one proceeds exactly as in Paragraph 5.1, replacing \(\mathcal {O}\) by \(K_1\): see the treatment of \(F^2_{\varepsilon }\) and \(F^3_{\varepsilon }\), defined in (5.13) and (5.15). One gets in this way that for all divergence-free \(\varphi \in H^1(K_1)\),

$$\begin{aligned} |\int _{{\partial }K_1} F_{\varepsilon }n \cdot \varphi | \leqq \nu ({\varepsilon }) \Vert {\nabla }\varphi \Vert _{L^2(K_1)}, \quad \nu ({\varepsilon }) \rightarrow 0 \quad \text {as } {\varepsilon }\rightarrow 0. \end{aligned}$$

As regards the last term, we take into account the periodicity of \(H^\eta _L\) and \(\tilde{G}\) to write

$$\begin{aligned} \int _{{\partial }K_1} G_{\varepsilon }n \cdot {\varepsilon }^3 h_{\varepsilon }= \int _{{\partial }K_1} G_{\varepsilon }n \cdot {\varepsilon }^3 \bar{h}^\eta _{\varepsilon }\mathrm{d}x. \end{aligned}$$
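
Indeed, \(h^\eta _{\varepsilon }\) and the three terms entering \(G_{\varepsilon }\) are functions of \(x/{\varepsilon }\) built from \(K_L\)-periodic fields (this is where the periodicity of \(H^\eta _L\), of the associated pressure \(P^\eta _L\), and of \(\tilde{G}\) is used), hence are \(1\)-periodic in \(x\); the contributions of opposite faces of \({\partial }K_1\) therefore cancel, the integrand taking equal values while the outer normal changes sign. Hence

$$\begin{aligned} \int _{{\partial }K_1} G_{\varepsilon }n \cdot {\varepsilon }^3 h^\eta _{\varepsilon }= 0, \end{aligned}$$

and only the \(\bar{h}^\eta _{\varepsilon }\) part of \(h_{\varepsilon }= \bar{h}^\eta _{\varepsilon }- h^\eta _{\varepsilon }\) remains.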

As \(\int _{{\partial }K_1} \bar{h}^\eta _{\varepsilon }\cdot n = 0\), we can introduce a solution \(\Phi _{\varepsilon }\) of

$$\begin{aligned} \hbox {div}\Phi _{\varepsilon }= 0 \quad \text { in } \, K_1, \quad \Phi _{\varepsilon }\vert _{{\partial }K_1} = {\varepsilon }^3 \bar{h}^\eta _{\varepsilon }\vert _{{\partial }K_1}, \quad \Vert \Phi _{\varepsilon }\Vert _{H^1(K_1)} \leqq C \Vert {\varepsilon }^3 \bar{h}^\eta _{\varepsilon }\vert _{{\partial }K_1}\Vert _{H^{1/2}({\partial }K_1)}. \end{aligned}$$

Proceeding as in Paragraph 5.1 (replacing \(\mathcal {O}\) by \(K_1\)), one can show that \(\Vert {\varepsilon }^3 \bar{h}^\eta _{\varepsilon }\Vert _{H^{1/2}({\partial }K_1)}\) goes to zero with \({\varepsilon }\), so that \(\Vert \Phi _{\varepsilon }\Vert _{H^1(K_1)}\) goes to zero as well. Finally, we write

$$\begin{aligned} |\int _{{\partial }K_1} G_{\varepsilon }n \cdot {\varepsilon }^3 \bar{h}^\eta _{\varepsilon }\mathrm{d}x|&= |\int _{K_1} G_{\varepsilon }\cdot {\nabla }\Phi _{\varepsilon }| \\&= |\int _{K_1} \left( 2 D(H^\eta _L)(\cdot /{\varepsilon }) + \tilde{G}\right) \cdot {\nabla }\Phi _{\varepsilon }| \\&\leqq C \left( \frac{1}{L^3} \Vert {\nabla }H^\eta _L\Vert ^2_{L^2(K_L)} + \Vert \Psi ^{\eta }\Vert ^2_{L^2} + 1 \right) ^{1/2} \Vert {\nabla }\Phi _{\varepsilon }\Vert _{L^2}\\&\leqq C' \Vert {\nabla }\Phi _{\varepsilon }\Vert _{L^2}. \end{aligned}$$
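
The third line follows from the Cauchy–Schwarz inequality combined with the change of variables \(y = x/{\varepsilon }\): on the one hand,

$$\begin{aligned} \int _{K_1} |2 D(H^\eta _L)(x/{\varepsilon })|^2 \, \mathrm{d}x = {\varepsilon }^3 \int _{K_L} |2 D(H^\eta _L)|^2 \leqq \frac{C}{L^3} \Vert {\nabla }H^\eta _L\Vert ^2_{L^2(K_L)}, \end{aligned}$$

while, on the other hand, \(\int _{K_1} |\tilde{G}|^2 \leqq C \bigl ( \int _{\mathbb {R}^3} |\Psi ^\eta |^2 + |S|^2 \bigr )\), by the same counting argument as above, the \(O({\varepsilon }^{-3})\) relevant translates of \(\Psi ^\eta \) having bounded overlap.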

Hence, we find

$$\begin{aligned} {\varepsilon }^6 \int _{K_1} |{\nabla }h_{\varepsilon }|^2 \leqq C \left( {\varepsilon }+ \nu ({\varepsilon })^2 + \Vert {\nabla }\Phi _{\varepsilon }\Vert ^2_{L^2} \right) \xrightarrow [{\varepsilon }\rightarrow 0]{} 0, \end{aligned}$$

which concludes the proof. \(\square \)