Abstract
The Kac model is a simplified model of an \(N\)-particle system in which the collisions of a real particle system are modeled by random jumps of pairs of particle velocities. Kac proved propagation of chaos for this model, and hence provided a rigorous validation of the corresponding Boltzmann equation. Starting with the same model we consider an \(N\)-particle system in which the particles are accelerated between the jumps by a constant uniform force field which conserves the total energy of the system. We show propagation of chaos for this model.
1 Introduction
The most fundamental equation in the kinetic theory of gases is perhaps the Boltzmann equation, which was derived by Ludwig Boltzmann in 1872. This equation describes the time evolution of the density of a single particle in a gas consisting of a large number of particles and reads
where \(f(x,v,t)\) is a density function of a single particle, \(x, v\in \mathbb {R}^3\) represent the position and velocity of the particle, and \(t \ge 0\) represents time. The collision operator \(Q\) is given by
The case where \(f\) is independent of \(x\) is called the spatially homogeneous Boltzmann equation. In Eq. (1.2), the pair \((v,v_*)\) represents the velocities of two particles before a collision and \((v',v_*')\) the velocities of these particles after the collision. Excellent references about the Boltzmann equation are [4, 11].
The fact that the collision operator \(Q(f,f)\) involves products of the one-particle density \(f\) rather than a two-particle density \(f_2(x_1,v_1,x_2,v_2,t)\) is a consequence of Boltzmann’s Stosszahlansatz, the assumption that two particles engaging in a collision are independent before the interaction. It is a very challenging problem to improve on Lanford’s result from 1975 [7], which essentially states that the Stosszahlansatz holds for a time interval of the order of one fifth of the mean time between collisions of an individual particle. In an attempt to address the fundamental questions concerning the derivation of the spatially homogeneous Boltzmann equation, Mark Kac introduced a stochastic particle process consisting of \(N\) particles from which he obtained an equation like the Boltzmann equation (1.1) as a mean field limit when the number of particles \(N\rightarrow \infty \) (see [6]): Consider the master vector \({\mathbf {V}}=(v_1,\dots ,v_N)\), \(v_i\in \mathbb {R}\), where each coordinate represents the velocity of a particle. The spatial distribution of the particles is ignored in this model, and the velocities are one dimensional. The state space of the particles is the sphere in \(\mathbb {R}^N\) with radius \(\sqrt{N}\); that is, the velocities are restricted to satisfy the equation
The binary collisions in the gas are represented by jumps involving pairs of velocities from the master vector, occurring at exponentially distributed time intervals with intensity \(1/N\). At each collision time, a pair of velocities \((v_i, v_j)\) is chosen randomly from the master vector and changes to \((v_i',v_j')\) according to
The parameter \(\theta \) is chosen according to a law \(b(\theta )\mathrm {d}\theta \). In [6], for the sake of simplicity, Kac chooses \(b(\theta )=(2\pi )^{-1}\). Any bounded \(b(\theta )\) can be treated in the same way. The post-collision master vector is denoted by \(R_{ij}(\theta ){\mathbf {V}}\). Note that the collision process does not conserve both momentum and energy (only trivial collisions can conserve both invariants in this one dimensional case). Hence
but in general
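The jump process just described is easy to simulate. The sketch below is purely illustrative (not from the paper); it assumes the standard Kac rotation \((v_i',v_j')=(v_i\cos \theta +v_j\sin \theta ,\,-v_i\sin \theta +v_j\cos \theta )\) together with Kac’s choice \(b(\theta )=(2\pi )^{-1}\), and checks that the total energy \(\sum v_i^2=N\) is conserved along the jumps while the total momentum is not.

```python
import math
import random

def kac_collision(v, i, j, theta):
    """Kac's jump rule: rotate the pair (v_i, v_j) by the angle theta.
    The rotation conserves v_i**2 + v_j**2, hence the total energy."""
    vi, vj = v[i], v[j]
    v[i] = vi * math.cos(theta) + vj * math.sin(theta)
    v[j] = -vi * math.sin(theta) + vj * math.cos(theta)

def kac_step(v, rng):
    """One jump: pick a random pair and theta ~ b(theta) dtheta, b = 1/(2 pi)."""
    i, j = rng.sample(range(len(v)), 2)
    kac_collision(v, i, j, rng.uniform(0.0, 2.0 * math.pi))

rng = random.Random(0)
n = 100
v = [rng.gauss(0.0, 1.0) for _ in range(n)]
scale = math.sqrt(n / sum(x * x for x in v))
v = [scale * x for x in v]           # place V on the sphere: sum v_i^2 = n
for _ in range(1000):
    kac_step(v, rng)
energy_drift = abs(sum(x * x for x in v) - n)
print(energy_drift)                  # stays at floating-point rounding level
```
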
The equation governing the evolution of this process is called Kac’s master equation (a Kolmogorov forward equation for Markov processes). It is given by
where the collision operator \(\mathcal {K}\) has the form
with
The particles are assumed to be identical and this corresponds to the initial density being symmetric:
Definition 1.1
A probability density \(W({\mathbf {V}})\) on \(\mathbb {R}^N\) is said to be symmetric if for any bounded continuous function \(\phi \) on \(\mathbb {R}^N\)
where for any permutation \(\sigma \) of \(\{1,\dots ,N\}\)
We note that the master equation (1.4) preserves symmetry. To obtain an equation like (1.1) which describes the time evolution of a one-particle density, Kac studied the \(k\)-th marginal \(f_k^N\) of \(W_N({\mathbf {V}},t)\), where
Here, \(\sigma ^{(k)}\) is the spherical measure on \(\Omega _k=\mathbb {S}^{N-1-k}\left( \sqrt{N-(v_1^2+\dots +v_k^2)}\right) \). Since \(W_N\) is symmetric, the \(k\)-th marginal is also symmetric, and the time evolution for the first marginal \(f_1^N\) is obtained by integrating the master equation (1.4) over the variables \(v_2,\dots ,v_N\). This yields
where
If we had \(f_2^N(v_1,v_2,t)\approx f_1^N(v_1,t)f_1^N(v_2,t)\) in a weak sense (which is defined later), then the evolution equation (1.9) for the first marginal would look like the spatially homogeneous Boltzmann equation, i.e., Eq. (1.1) without the position variable \(x\). Kac suggested in [6] that one should take a sequence of initial densities \(W_{N,0}({\mathbf {V}})\) which have the “Boltzmann property”, that is,
weakly in the sense of measures on \(\mathbb {R}^k\). The Boltzmann property means that for each fixed \(k\), the joint probability densities of the first \(k\) coordinates tend to product densities when \(N\rightarrow \infty \). By analyzing how the collision operator acts on functions depending on finitely many variables, and by a combinatorial argument, Kac showed that for all \(t>0\) the sequence \(W_N({\mathbf {V}},t)\) also has the Boltzmann property, that is, the Boltzmann property propagates in time. In this case the limit of the first marginal \(f(v,t)=\lim _{N\rightarrow \infty }f_1^N(v,t)\) satisfies the Boltzmann–Kac equation
where
What Kac referred to as the ‘Boltzmann property’ is nowadays often called chaos. More precisely, we have the following definition:
Definition 1.2
Let \(f\) be a given probability density on \(\mathbb {R}\) with respect to the Lebesgue measure \(m\). For each \(N\in \mathbb {N}\), let \(W_N\) be a probability density on \(\mathbb {R}^N\) with respect to the product measure \(m^{(N)}\). Then the sequence \(\{W_N\}_{N\in \mathbb {N}}\) of probability densities on \(\mathbb {R}^N\) is said to be \(f\)-chaotic if
(1) Each \(W_N\) is a symmetric function of the variables \(v_1,v_2,\ldots ,v_N\).
(2) For each fixed \(k\in \mathbb {N}\), the \(k\)-th marginal \(f_k^N(v_1,\ldots ,v_k)\) of \(W_N\) converges to \(\prod _{i=1}^{k}f(v_i)\) as \(N\rightarrow \infty \) (so that \(f(v)=\lim _{N\rightarrow \infty }f_1^N(v)\)) in the sense of weak convergence; that is, if \(\phi (v_1,v_2,\dots ,v_k)\) is a bounded continuous function on \(\mathbb {R}^k\), then
$$\begin{aligned} \lim _{N\rightarrow \infty }\int _{\mathbb {R}^N}\phi (v_1,v_2,\dots ,v_k)W_N({\mathbf {V}})\mathrm {d}m^{(N)}=\int _{\mathbb {R}^k} \phi (v_1,v_2,\dots ,v_k) \prod _{i=1}^{k}f(v_i)\mathrm {d}m^{(k)}. \end{aligned}$$
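A classical example, discussed already in Kac’s paper, is the sequence of uniform densities on \(\mathbb {S}^{N-1}(\sqrt{N})\), which is \(f\)-chaotic with \(f\) the standard Gaussian density. The following sketch (illustrative, not from the paper) samples such a sphere by normalizing a Gaussian vector and checks the first two moments of \(v_1\) against the Gaussian values:

```python
import math
import random

def first_coordinate_on_sphere(n, rng):
    """Sample V uniformly on S^{n-1}(sqrt(n)) by normalizing a Gaussian
    vector, and return the first coordinate v_1."""
    g = [rng.gauss(0.0, 1.0) for _ in range(n)]
    r = math.sqrt(n / sum(x * x for x in g))
    return r * g[0]

rng = random.Random(1)
n, m = 50, 4000
samples = [first_coordinate_on_sphere(n, rng) for _ in range(m)]
mean = sum(samples) / m
second_moment = sum(x * x for x in samples) / m
# For large n the law of v_1 is close to the standard Gaussian,
# whose first two moments are 0 and 1.
print(mean, second_moment)
```
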
The aim of this paper is to show propagation of chaos for a new many-particle model with the same collision process, but where, between the collisions, the particles are accelerated by a force field which keeps the total energy constant. In the next subsection we describe this process. In the original problem considered by Kac [6], correlations between particles were only introduced through the binary collisions. In our case, the force field will introduce correlations as well, but of a different character.
Our proof of propagation of chaos for this model with two distinct sources of correlation builds on recent work on propagation of chaos, but also includes a quantitative development of Kac’s original argument which we apply to control the correlations introduced by the collisions. We must quantify these correlations in order to control the correlating effects of the force field.
1.1 The Thermostatted Kac Master Equation
In the Kac model, the particles interact via random jumps which correspond to random collisions between pairs of particles. We now consider a stochastic model where the particles have the same jump process as in the Kac model, but are now also accelerated between the jumps under a constant uniform force field \({\mathbf {E}}=E(1,1,\ldots ,1)\) which interacts with a Gaussian thermostat in order to keep the total energy of the system constant. For a detailed discussion see [13]. Consider the master vector \({\mathbf {V}}=(v_1,\ldots ,v_N)\) on the sphere \(\mathbb {S}^{N-1}(\sqrt{N})\). The vector \({\mathbf {V}}\) clearly depends on time, and when needed we write \({\mathbf {V}}(t)\) instead of \({\mathbf {V}}\). It is also convenient to use a coordinate system in which \(E>0\). The Gaussian thermostat is implemented as the projection of \({\mathbf {E}}\) onto the tangent plane of \(\mathbb {S}^{N-1}(\sqrt{N})\) at the point \({\mathbf {V}}\). The time evolution of the master vector between collisions is then given by:
where
and \({\mathbf {1}}=(1,\ldots ,1)\). The quantities \(J({\mathbf {V}})\) and \(U({\mathbf {V}})\) represent the average momentum per particle and the average energy per particle, respectively, and are given by
If \(W_N({\mathbf {V}},t)=W_N(v_1,v_2,\ldots ,v_N,t)\) is the probability density of the \(N\) particles at time \(t\), it satisfies the so called Thermostatted Kac master equation (see [13])
We see that, in the absence of the force field, (1.15) reduces to the master equation of the Kac model. Under the assumption that the sequence of probability densities \(\{W_N({\mathbf {V}},t) \}_{N\in \mathbb {N}}\) propagates chaos, it is shown in [13, Theorem 2.1] that \(f(v,t)=\lim _{N\rightarrow \infty }f_1^N(v,t)\), where \(f_1^N(v,t)\) is the first marginal of \(W_N({\mathbf {V}},t)\), satisfies the Thermostatted Kac equation
where
and \(\mathcal {Q}(f,f)\) is given by (1.10). For the investigation of Eq. (1.16) we refer to Wennberg and Wondmagegne [12] and Bagland [1].
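Returning to the force field in (1.12): projecting \({\mathbf {E}}=E(1,\ldots ,1)\) onto the tangent plane at \({\mathbf {V}}\) gives, componentwise, \(F_i({\mathbf {V}})=E\bigl (1-\tfrac{J({\mathbf {V}})}{U({\mathbf {V}})}\,v_i\bigr )\); this explicit form is our reconstruction from the description above, not a quotation from the paper. The following sketch verifies numerically that such a field is orthogonal to \({\mathbf {V}}\), so the thermostatted flow conserves the total energy:

```python
import random

def thermostatted_force(v, e):
    """Componentwise force F_i = e * (1 - (J/U) * v_i), where J is the
    average momentum and U the average energy per particle
    (reconstructed form of the thermostatted field)."""
    n = len(v)
    j = sum(v) / n
    u = sum(x * x for x in v) / n
    return [e * (1.0 - (j / u) * x) for x in v]

rng = random.Random(2)
n, e = 20, 1.5
v = [rng.gauss(0.0, 1.0) for _ in range(n)]
f = thermostatted_force(v, e)
tangency = sum(fi * vi for fi, vi in zip(f, v))  # inner product F(V) . V
print(abs(tangency))  # vanishes up to rounding: sum v_i^2 is conserved
```
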
The interest in studying thermostatted kinetic equations comes from attempts to fully understand Ohm’s law. Many of the ideas of this paper come from [2], which presents a more realistic model where the positions of the particles are also taken into account. However, the collision term in [2] is easier, and here the main difficulty comes from analyzing a spatially homogeneous model.
The proof of propagation of chaos is in many ways similar to that of Kac, but whereas his proof is carried out entirely by analyzing the collision operator (1.5), the proof presented here (and in [2]) requires a more detailed analysis of the underlying stochastic jump process. It thus approaches Grünbaum’s method for proving propagation of chaos (see [5]), which is based on studying the empirical measure \(\mu _N\) generated by the \(N\) velocities and proving that the sequence \(\{\mu _N\}_{N=1}^\infty \) converges weakly to a measure which is the solution to the Boltzmann equation. While essentially all ingredients of the proof are present in [5], there are many technical difficulties that were treated rigorously only later in [9], and in much greater generality in [8] and other papers by the same authors. A standard reference addressing many aspects of propagation of chaos is [8].
The structure of the paper is as follows: In Sect. 2 we introduce a master equation (a “quenched equation”) which is an approximation to the master equation (1.15). In Sect. 3 we show that the quenched master equation propagates chaos with a quantitative rate. In Sect. 4 we make a pathwise comparison of the stochastic processes corresponding to the master equation (1.15) and the approximating master equation; the main result is that, for large \(N\), the paths of the two processes are close to each other. Finally, in Sect. 5 we show that the second marginal of \(W_N({\mathbf {V}},t)\) converges, as \(N\rightarrow \infty \), to a product of two one-particle marginals of \(W_N({\mathbf {V}},t)\).
2 An Approximation Process
To show propagation of chaos for the evolution described by the master equation (1.15), we consider the two-particle marginal \(f_2(v_1,v_2,t)\) of \(W_N({\mathbf {V}},t)\) and show that it can be written as a product of two one-particle marginals of \(W_N({\mathbf {V}},t)\) as \(N\rightarrow \infty \). In [2], by introducing an approximate master equation which propagates independence, it is shown that for large \(N\) the path described by this approximate master equation is close to the path described by the original master equation; this in turn implies propagation of chaos. The independence property is not crucial, and the ideas in [2] can be adapted and further developed so as to apply to the model we consider here. If one tries to show propagation of chaos directly for the master equation (1.15) using the classical method of Kac [6], one encounters difficulties, just as with the master equation in [2]. The difficulty lies in the nature of the force field \({\mathbf {F}}({\mathbf {V}})\), which depends on \(J({\mathbf {V}})\) and \(U({\mathbf {V}})\).
To overcome this difficulty, a modified force field is introduced in [2] in which the random quantities \(J({\mathbf {V}})\) and \(U({\mathbf {V}})\) are replaced by their expectations, which depend only on time. This gives rise to a new master equation. In the next subsection we introduce this modified problem and its relevant properties.
2.1 The Modified Force Field and the Quenched Master Equation
Following [2], given a probability density \(W_N\) on \(\mathbb {R}^N\), we define the quenched current and the quenched energy approximations as:
where \(\langle \cdot \rangle _{W_N}\) denotes the expectation with respect to the given density \(W_N\), i.e., for an arbitrary continuous function \(\phi \),
with \(\mathrm {d}m^{(N)}\) denoting the Lebesgue measure on \(\mathbb {R}^N\). The modified force field which now depends on the quenched current and energy is defined as
We note that with given \(\widehat{J}_{W_N}(t)\) and \(\widehat{U}_{W_N}(t)\), the particles move independently when subject to (2.2), while in (1.12) all particles interact through the force field \({\mathbf {F}}\). With this modified force field, we consider the following quenched master equation
where now the modified force is the one corresponding to the density \(\widehat{W}_N({\mathbf {V}},t)\). Besides the difference in force fields, the quenched master equation (2.3) is non-linear (\(\widehat{{\mathbf {F}}}(t)\) depends on \(\widehat{W}_N(t)\)), in contrast to the master equation (1.15); but both equations have the same collision process.
The motivation for introducing the quenched process is that if there is propagation of chaos, then the different particle velocities will be approximately independent, and then for large \(N\), the Law of Large Numbers will imply that almost surely,
and likewise for the energy, to a very good approximation. In this case, there will be a negligible difference between the quenched force field and the thermostatting force field, and thus we might expect the two processes to be pathwise close. To follow the strategy of [2], we shall need quantitative estimates on the propagation of chaos by the quenched process, which will justify using the Law of Large Numbers to show that for large \(N\) the two force fields are indeed close.
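The Law of Large Numbers heuristic can be illustrated numerically. The sketch below is a toy experiment under the idealized assumption of fully independent standard Gaussian velocities (the chaotic limit), and shows that the fluctuation of \(J_N\) around its mean decays like \(1/\sqrt{N}\):

```python
import math
import random

def std_of_current(n, m, rng):
    """Empirical standard deviation of J_N = (1/N) sum v_i over m
    independent systems of n i.i.d. standard Gaussian velocities."""
    js = [sum(rng.gauss(0.0, 1.0) for _ in range(n)) / n for _ in range(m)]
    mean = sum(js) / m
    return math.sqrt(sum((j - mean) ** 2 for j in js) / m)

rng = random.Random(3)
m = 300
s_small = std_of_current(100, m, rng)
s_large = std_of_current(10000, m, rng)
# The fluctuation scales like 1/sqrt(N): a 100-fold increase in N
# shrinks the standard deviation by roughly a factor of 10.
print(s_small / s_large)
```
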
Henceforth, to simplify notation, let
In the next lemma we describe the time evolution of \(\widehat{J}_N(t)\) and \(\widehat{U}_N(t)\) in terms of differential equations.
Lemma 2.1
Given the initial distribution \(\widehat{W}_{N,0}({\mathbf {V}})\), the quantities \(\widehat{J}_N(t)\) and \(\widehat{U}_N(t)\) satisfy the following differential equations, both independent of \(N\):
and
Proof
Formally the result can be obtained by multiplying Eq. (2.3) by \(v_i\) or \(v_i^2\), integrating (partially) and summing over \(i\). To avoid any difficulties in the formal manipulations that lead to Eq. (2.4) and Eq. (2.5), we consider first a linear equation, with a force field a priori determined by solutions to (2.4) and (2.5), and observe that the solutions to this linear equation actually solve (2.3). Cf. also Ref. [1].
Let \(\pi _i\) be the continuous function on \(\mathbb {R}^N\) defined by
From (1.5), we get
Consider the force field
where \(\bar{u}=\widehat{U}_N(0)\) is a constant and \(\xi (t)\) satisfies the differential equation
with initial condition \(\xi (0)=\widehat{J}_N(0)\). The dynamics of each particle under the force field \(\widetilde{{\mathbf {F}}}\) is given by
or
Abbreviating \(\gamma (t)=E \xi (t)/ \bar{u}\), we see that
where
Given \({\mathbf {V}}\) at time \(s\), let \(\widetilde{S}_{t,s}\) be the flow such that \(\widetilde{S}_{t,s}({\mathbf {V}})\) is the unique solution of (2.8) at time \(t\) with each component given by (2.9). Let \(\widetilde{W}_N({\mathbf {V}},t)\) be a solution to (2.3) with \(\widehat{{\mathbf {F}}}_{\widehat{W}_N}(t)=\widetilde{ {\mathbf {F}}}(t)\). Moreover, let \(\widetilde{\mathcal {J}}_{t,s}({\mathbf {V}})\) be the determinant of the Jacobian of \(\widetilde{S}_{t,s}({\mathbf {V}})\), that is,
Next, the master equation (2.3) with this a priori determined force field can be written in mild form as
Multiplying both sides of the last equality by \(\widetilde{\mathcal {J}}_{t,s}\) and integrating yields
Now, we multiply both sides of the last equality by \(\widetilde{S}_{t,s}(v_i)\) and integrate over \(\mathbb {R}^N\) with \(\mathrm {d}m^{(N)}=\mathrm {d}v_1\cdots \mathrm {d}v_N\). We get
By a change of variables and the property that \(\widetilde{S}_{t,s}(v_i)=\widetilde{S}_{t,\tau }(\widetilde{S}_{\tau ,s}(v_i))\) we can write the last equality as
Using (2.6), summing both sides of the last equality from \(i=1\) to \(i=N\) and dividing by \(N\) leads to the following relation
Differentiating the last equality with respect to \(t\) yields
We now see that \(\widetilde{J}_N(t)\) satisfies the differential equation (2.7) and hence is equal to \(\xi (t)\). A similar calculation also yields
Differentiating the last equality with respect to \(t\) yields
Using \(\xi (t)=\widetilde{J}_N(t)\), we find that
and hence \(\widetilde{U}_N(t)=\bar{u}\) is the (unique) solution of (2.11).
Thus the a posteriori determined \(\widetilde{J}_N(t)\) and \(\widetilde{U}_N(t)\) coincide with \(\xi (t)\) and \(\bar{u}\), hence \(\widetilde{W}_N\) solves (2.3), and the conclusions of the lemma hold. \(\square \)
The consequence of the last lemma is that, given initial data, we can obtain \(\widehat{J}_N(t)\) and \(\widehat{U}_N(t)\) at time \(t\) from the differential equations (2.4) and (2.5), without knowing \(\widehat{W}_N({\mathbf {V}},t)\), which would be required when using (2.1). Since \(\widehat{U}_N(t)\) is constant in time, in the remainder of the paper we abbreviate
Using the new notations, the evolution of each particle is given by
which we can also write as
Given \({\mathbf {V}}_0\), let \(\widehat{S}_{t,0}\) be the flow such that \(\widehat{S}_{t,0}({\mathbf {V}}_0)\) is the unique solution of (2.13) at time \(t\). In what follows we shall need a bound on a sixth moment of \(\widehat{W}_N\) which is defined as
Because \(\widehat{W}_N\) is symmetric, the definition does not depend on the index \(i\).
Lemma 2.2
Assume that \(\widehat{m}_{6,N}(0)<\infty \). For all \(t>0\) we have
where \(C_{\widehat{m}_{6,N}(0),t}\) is a positive constant which depends on \(\widehat{m}_{6,N}(0)\) and \(t\).
Proof
Making computations similar to those in the proof of the last lemma we have
The left hand side of the last equality is by definition \(\widehat{m}_{6,N}(t)\). To estimate \(C_1(t)\), we first note that \(|\widehat{J}_N(t)| \le \sqrt{\widehat{U}_N}\) by the Cauchy–Schwarz inequality. Furthermore, a crude estimate on the differential equation (2.12) for the evolution of the particle velocity \(\widehat{v}_i\) yields
Let
Straightforward estimation yields
Hence,
To estimate \(C_2(t)\), using that \(\mathcal {K}\) is self-adjoint, we can write
A calculation similar to (2.6) on \(\mathcal {K}\widehat{S}_{t,s}(v_i)^6\) and the inequality \((a+b)^2\le 2(a^2+b^2)\) for \(a,b \in \mathbb {R}\) yield
Since \(\widehat{S}_{t,s}(v_i)\) is of the form \(\widehat{S}_{t,s}(v_i)=\widehat{\alpha }_{ts}v_i+\widehat{\beta }_{ts}\), it follows that
where \(C\) is a positive constant and
Combining the estimates above, we have
Rewriting this inequality slightly we can apply Gronwall’s lemma to obtain
where \(C_{\widehat{m}_{6,N}(0),t}\) is a constant depending on \(\widehat{m}_{6,N}(0)\) and \(t\). Note that Gronwall’s lemma shows that \(C_{\widehat{m}_{6,N}(0),t}\) depends on \(N\) only through the initial condition \(\widehat{m}_{6,N}(0)\). \(\square \)
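The affine form \(\widehat{S}_{t,s}(v_i)=\widehat{\alpha }_{ts}v_i+\widehat{\beta }_{ts}\) used in the proof above reflects the fact that each particle solves an ODE that is linear in \(v\), of the type \(\dot{v}=E-\gamma (t)v\). A quick numerical sanity check (with an arbitrary, purely illustrative coefficient \(\gamma (t)\), not the one from the model):

```python
def evolve(v0, e, t_final, steps):
    """Euler-integrate dv/dt = e - gamma(t) * v with the (arbitrary,
    purely illustrative) coefficient gamma(t) = 1/(1 + t)."""
    v, t = v0, 0.0
    dt = t_final / steps
    for _ in range(steps):
        v += dt * (e - v / (1.0 + t))
        t += dt
    return v

e, t_final, steps = 2.0, 1.0, 2000
beta = evolve(0.0, e, t_final, steps)            # image of v0 = 0
alpha = evolve(1.0, e, t_final, steps) - beta    # slope in v0
# Affinity in the initial datum: evolve(v0) = alpha * v0 + beta.
residual = abs(evolve(5.0, e, t_final, steps) - (alpha * 5.0 + beta))
print(residual)  # zero up to rounding
```

Each Euler step is an affine map of \(v\), so the composed flow is affine in the initial datum, exactly as for the continuous flow.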
3 Propagation of Chaos for the Quenched Master Equation
In the previous section we defined the quenched master equation (2.3). The goal of this section is to show that it propagates chaos. However, in order to take advantage of this to show that the master equation (1.15) propagates chaos, we need to know at which rate (2.3) propagates chaos. The reason that propagation of chaos holds for (2.3) is that the particles driven by the quenched force field evolve independently between collisions. In [6] it is shown that given chaotic initial data the master equation (2.3) without the term \(\nabla \cdot (\widehat{{\mathbf {F}}}_{\widehat{W}_N}(t)\widehat{W}_N({\mathbf {V}},t))\) propagates chaos. The main idea in the proof of Kac is that as \(N\) tends to infinity, the probability that any given particle collides with some other particle more than once tends to zero. By isolating the contribution of “recollisions” to the evolution, and showing that their contribution is negligible in the limit, Kac deduced his asymptotic factorization property. To state this precisely, and to state our quantitative version, we first introduce some notation, defining the marginals of \(\widehat{W}_N({\mathbf {V}},t)\). Let
be the one-particle marginal of \(\widehat{W}_N({\mathbf {V}},t)\) at time \(t\). Since \(\widehat{W}_N\) is symmetric under permutations of the variables \(v_1,\dots ,v_N\), it does not matter which variables we integrate over. Similarly, the \(k\)-th marginal of \(\widehat{W}_N({\mathbf {V}},t)\) at time \(t\) is defined as
The qualitative result of Kac is that
for all bounded, continuous functions \(\phi \).
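Kac’s observation that recollisions are rare can be made concrete by an exact elementary computation (an illustration, not part of the proof): the probability that \(j\) uniformly chosen collision pairs among \(N\) particles involve some particle twice decays like \(1/N\) for fixed \(j\).

```python
from math import comb

def no_recollision_prob(n, j):
    """Exact probability that j successive uniformly chosen pairs out of
    n particles involve 2*j distinct particles (no recollision)."""
    p = 1.0
    for i in range(j):
        # the i-th pair must avoid the 2*i particles already used
        p *= comb(n - 2 * i, 2) / comb(n, 2)
    return p

j = 10
q_small = 1.0 - no_recollision_prob(1000, j)
q_large = 1.0 - no_recollision_prob(10000, j)
# The ratio is close to 10, reflecting the 1/N decay of the
# recollision probability for a fixed number of collisions j.
print(q_small / q_large)
```
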
The main result of this section is:
Theorem 3.1
Let \(\{\widehat{W}_N({\mathbf {V}},0)\}_{N\in \mathbb {N}}\) be a sequence of symmetric probability densities on \(\mathbb {R}^N\) such that
for all \(k\in \mathbb {N}\), where \(\phi (v_1,\ldots ,v_k)\) is a bounded continuous function on \(\mathbb {R}^k\) and
with \(C_0\) a positive constant. Then we have, for \(0\le t \le T\) with \(T<\infty \), that
where
and \(C(T)\) is a constant depending only on \(T\).
This result provides the means to adapt the strategy developed in [2] for controlling the effects of correlations that are introduced by the thermostatting force field. In [2], the collisions were not binary collisions, but were a model of collisions with background scatterers. These collisions did not introduce any correlations at all, and in that work the analogous quenched process exactly propagated independence. This facilitated the appeal to the Law of Large Numbers. When we need to apply the Law of Large Numbers here, the individual velocities in our quenched process will not be independent, and we must quantify the lack of independence. Theorem 3.1 provides the means to do this, and may be of independent interest. The proof of Theorem 3.1 builds on ideas from Kac’s original proof [6], in particular on his idea of controlling the effect of recollisions, and on the refined combinatorial arguments from [3]. However, the proof is rather long, and we divide it into five steps.
Proof
Step 1
In this step we express the solution \(\widehat{W}_N({\mathbf {V}},t)\) of (2.3) as a series depending on \(\widehat{W}_N({\mathbf {V}},0)\) and use this to find an expression for the k-th marginal of \(\widehat{W}_N({\mathbf {V}},t)\) at time \(t\). Recall that the quenched master equation is given by
with initial data \(\widehat{W}_N({\mathbf {V}},0)=\widehat{W}_{N,0}({\mathbf {V}})\), and where \(\widehat{{\mathbf {F}}}_{\widehat{W}_N} \) is given by (2.2). Let \(\widehat{P}_{t,0}\widehat{W}_N({\mathbf {V}},0)\) denote the solution to its homogeneous part, where the operator \(\widehat{P}_{t,s}:L^1\rightarrow L^1\) transforms the density \(\widehat{W}_N\) from time \(s\) to time \(t\). Explicitly,
where \(\widehat{\mathcal {J}}_{t,s}\) is the determinant of the Jacobian of \(\widehat{S}_{t,s}({\mathbf {V}})\), i.e.,
By the Duhamel formula,
Iterating (3.6) expresses \(\widehat{W}_N({\mathbf {V}},t)\) as a series:
where
For a continuous function \(\phi \) of \(k\) variables \(v_1,\dots ,v_k\), it follows from (3.1) and (3.7) that
The operator \(\widehat{P}_{t,s}^*:L^\infty \rightarrow L^\infty \) is the adjoint of the operator \(\widehat{P}_{t,s}\). Explicitly,
and \(\widehat{P}_{t,s}^*\phi \) is still a function of \(v_1,\dots , v_k\) but also depends on \(\widehat{J}_N(t)\) and \(\widehat{U}_N\). Moreover, the operator \(\widehat{P}_{t,s}^*\) preserves the \(L^{\infty }\) norm, i.e., \(||\widehat{P}_{t,s}^*\phi ||_\infty =||\phi ||_\infty \).
Step 2
Observing what we obtained in (3.8), we now need to see how the operator
acts on a bounded continuous function \(\phi \) depending on finitely many variables. Introducing the notation
we can now write (3.9) as
Following [6], let us see how \(\widehat{P}^*_{t_1,0}\Gamma ^{j}_{t,t_j,\dots ,t_1}\) acts on a function \(\phi _1(v_1)\) depending only on one variable. For \(j=1\), we have
where \(\phi _{1;1}(v_1;t,t_1)=\widehat{P}^*_{t,t_1}\phi _1(v_1)\). The operator \(Q\) adds a new variable \(v_j\) to \(\phi _{1;1}(v_1;t,t_1)\) at time \(t_1\). Setting
we have
For \(j=2\) we get
Setting \(\phi _{2;2}(v_1,v_2;t,t_2,t_1)= \widehat{P}^*_{t_2,t_1}\phi _{2;1}(v_1,v_2;t,t_2)\), we see that
When \(Q\) acts on \(\phi _{2;2}\) in the last two expressions above, again a new velocity variable is created, leading to \(\phi _{3;2}\). In this fashion, each time \(\Gamma ^1\) acts on a function, a new time variable and a new velocity variable are created. Since \(\widehat{P}^*_{t,s}\) preserves the \(L^{\infty }\) norm, it follows that \(||\phi _{2;2}||_{\infty }\le 4||\phi _1 ||_{\infty }\). This in turn implies that
From (3.11) it also follows that
It is tedious but straightforward to show that in general we have
A detailed proof of (3.12) in the case \(\widehat{P}^*_{t_1,0}\Gamma ^j_{t,t_j,\ldots ,t_1}=\mathcal {K}^j\), i.e., \(\widehat{P}^*_{t,s}=\mathrm {Id}\) for all \(t\) and \(s\), can be found in [3], both for \(j+1<N\) and for \(j+1\ge N\) (the latter has to be handled a little differently). In our case the proof follows along the same lines, since \(\widehat{P}^*_{t,s}\) preserves the \(L^\infty \) norm. More generally, if \(\phi _m\) is a function of \(m\) variables, \(m\ge 2 \), it can be shown by induction that
An important feature of the estimate (3.13) is that it is independent of \(N\). We also note that in (3.11), for large \(N\), the first term is small, while the last two terms, in which new velocity variables are added, have an impact.
Step 3
In the previous step we found that the action of \(\widehat{P}^*_{t_1,0}\Gamma ^1_{t,t_1}\) on \(\phi _1\) results in a sum whose terms are functions \(\phi _{2;1}\) of two velocity variables, obtained by adding one velocity variable and one time variable to \(\phi _1\); similarly, \(\widehat{P}^*_{t_1,0}\Gamma ^2_{t,t_2,t_1}\) acting on \(\phi _1\) results in a sum whose terms are functions \(\phi _{3;2}\) of three velocity variables, obtained by adding two velocity variables and two time variables to \(\phi _1\). In this step we look at the action of \(\widehat{P}^*_{t_1,0}\Gamma ^1_{t_2,t_1}\) on a function \(\phi _{k;l-1}\) depending on \(k\) velocity variables and \(l-1\) time variables, and see that for large \(N\) only the terms where a new velocity variable is added make a significant contribution. To be more precise, we have the following lemma, which corresponds to Lemma 3.5 in [3] dealing with the case \(\widehat{P}^*_{t_1,0}\Gamma ^j_{t,t_j,\dots ,t_1}=\mathcal {K}^j\).
Lemma 3.2
Assume that \(\widehat{W}_N({\mathbf {V}},0)\) is a symmetric probability density on \(\mathbb {R}^N\). For \(l\ge 2\), let \(\phi _{k;l-1}=\phi _{k;l-1}(v_1,\ldots ,v_k;t,t_l,\ldots ,t_2)\), where \(\phi _{k;0}=\phi _k(v_1,\ldots ,v_k)\) is a bounded continuous function of the variables \(v_1,\dots ,v_k\). Define \(\widehat{P}^*_{t_1,0}\phi _{k+1;l}\) as
Then, we have
where \(\phi ^R_{k+1;l}\) is a function depending on \(v_1,\ldots ,v_{k+1}\) and \(t_1,\dots ,t_l\), and
Proof
By the definition of \(\mathcal {K}\) and since \(\phi _{k;l-1}\) depends on \(v_1,\dots ,v_k\), we have
Because \( \widehat{W}_N({\mathbf {V}},0)\) is a symmetric probability density, we find that
Using this, it follows that
where
and finally we obtain the estimate
\(\square \)
The construction in Steps 2 and 3 shows that \(\widehat{P}^*_{t_1,0}\phi _{k+1;l}(v_1,\dots ,v_{k+1};t,t_l,\dots ,t_1)\), up to an error term that vanishes like \(1/N\) in the limit of large \(N\), can be written as a sum of terms, each of which can be represented by a binary tree that determines in which order new velocities are added. For example, starting with one velocity \(v_1\) at time \(t\) and adding three new velocities as described above, we could obtain the following sequence of graphs:
where the new velocities are always added to the right branch of the tree, or
where the tree is built symmetrically. The terms of order \(1/N\) that are deferred to the rest term can be represented by trees in much the same way, but may have two or more leaves with the same velocity variable. This is exactly as in the original Kac paper as far as the collisions go: the collision process is independent of the force field and of the state of the \(N\)-particle system, and hence the number of terms represented by a particular tree, and the distribution of time points where new velocities are added (giving a new branch of the tree), are exactly the same in our setting as in the original one. But what happens between the collisions is important, and hence the added velocities are recorded together with the time of addition. In this construction the velocities are added in increasing order of indices, but it is important to understand that any set of four different variables out of \(v_1,\ldots ,v_N\), or any permutation of \(v_1,\ldots ,v_4\), would give the same result.
Step 4
In (3.8) we obtained a series expression for the \(k\)-th marginal \(f_k^N\) of \(\widehat{W}_N({\mathbf {V}},t)\) at time \(t\). In this step we check that this series representation is uniformly convergent in \(N\). We only consider the cases \(k=1,2\), the other cases being similar but more tedious. Setting \(\phi _{1;0}(v_1)=\phi _1(v_1)\) and defining \(\widehat{P}^*_{t_1,0}\phi _{j+1;j}\) inductively by (3.14), we have that (3.8) without the first term equals (recall also the notation (3.10))
By induction on (3.14) it follows that
Noting that
we have
This is an estimate of a general term in the first series on the right hand side of (3.18). Hence that series is uniformly convergent in \(N\) if \(t<1/4\). For the second series on the right hand side of (3.18), using (3.13) and (3.15), we first obtain
Since \(||\phi _{i;i-1}||_{\infty }\le 4^{i-1} (i-1)! ||\phi _1||_{\infty }\), we have
Hence,
Finally, we arrive at
Hence the second series on the right hand side of (3.18) is also uniformly convergent in \(N\) if \(t<1/4\).
Similarly to the computation above, for \(\phi _{2;0}=\phi _2\), where \(\phi _2\) is a function of the two variables \(v_1,v_2\), and defining inductively \(\widehat{P}^*_{t_1,0}\phi _{j+2;j}\) by (3.14), it again follows that
By induction and (3.14) we get
which together with (3) yields
This implies that the first series on the right hand side of (3.20) is uniformly convergent in \(N\) if \(t<1/4\). Now, using (3.13) and (3.17), we get
Using \(||\phi _{i+1;i-1}||\le 4^{i-1} i! ||\phi _2||_{\infty }\), we get
which implies that
Finally, using the last estimate, we have
Therefore, the second series in (3.20) is also uniformly convergent in \(N\) if \(t<1/4\).
Step 5
In this last step we use the series representation (3.8) to show that the second marginal of \(\widehat{W}_N({\mathbf {V}},t)\) can be written as a product of two first marginals of \(\widehat{W}_N({\mathbf {V}},t)\) as \(N\) tends to infinity. From (3.8), (3.18) and the estimates in Step 4 it follows for \(0 \le t <T\), where \(T<1/4\), that
where the series is absolutely convergent uniformly in \(N\) and using (3.19), we have
The constant \(C_1(T)\) depends only on \(T\) and
Now, for a function \(\psi _{2;0}(v_1,v_2)=\phi _{1;0}(v_1)\varphi _{1;0}(v_2)\), where \(\psi _{j+2;j}\) is inductively defined by (3.14), it again follows for \(0 \le t<T\) that
where the series is absolutely convergent uniformly in \(N\). Using (3.21), we get
where \(C_2(T)\) is a constant depending only on \(T\), and
From the assumptions on the initial data (3.2), we now obtain
where using (3.2) and (3.13) yields
Here \(C_3(T)\) is a constant depending only on \(T\), and
The main contribution thus comes from the sum, and we want to show that this is equal to
Since \(\psi _{2;0}(v_1,v_2)=\phi _{1;0}(v_1)\varphi _{1;0}(v_2)\), and the operator \(P^*_{t,s}\) acts independently on each velocity variable, we have
For the remaining terms, using (3.14) the calculation follows very much like in Kac’s original work, but taking into account the times \(t_i\) when new velocities are added to the original two.
Consider again the construction of the factors \(\widehat{P}^*_{t_1,0}\psi _{j+2;j}(v_1,\ldots ,v_{j+2};t,t_j,\ldots ,t_1)\) in Eq. 3.26. These factors in turn consist of several terms, where each term is constructed by adding new velocities as described in Step 2 and Step 3. But here the starting point consists of two velocities, and the main contribution will come from terms represented by two trees, rooted at \((v_1,t)\) and \((v_2,t)\) respectively. Already from Kac’s original work it follows that adding all terms in which the tree rooted at \(v_1\) has \(k+1\) leaves and the tree rooted at \(v_2\) has \(l+1\) leaves would give exactly those terms in the product (3.28) which come from multiplying the \(k\)-th term in the first factor with the \(l\)-th term in the second factor (using, as always, the symmetry with respect to permutation of the variables), if the time points were not important. Consider the following two pairs of trees, representing terms where three new velocities are added to the original two:
In the trees, the time points of added velocities are denoted \(s_j\) for the tree rooted at \(v_1\) and \(t_j\) for the tree rooted at \(v_2\), as if they were representing terms in the product (3.28), and to the left the same time points are indexed by only \(t_j\)-s, as when representing a term in (3.26). The examples to the left and right would be identical in Kac’s original model, but here they are different, and we need a small computation to see that after carrying out the integrals, we do get the correct result.
Consider an arbitrary function \(u\in C( \mathbb {R}^j)\). Then
where \((\tau _i)_{i=1}^j\) is the increasing reordering of \(j\) independent random variables uniformly distributed on \([0,t]\). In the same way, for \(u\in C(\mathbb {R}^{k+l})\),
where \((\tau _i)_{i=1}^k\) and \((\sigma _i)_{i=1}^{l}\) are two increasing lists of time points obtained as reorderings of i.i.d. random variables as above. But these independent increasing lists can also be obtained by taking \(k+l\) independent random variables, uniformly distributed on \([0,t]\), reordering them in increasing order, and then making a random choice of \(k\) of them to form \((\tau _i)_{i=1}^k\), leaving the remaining ones for \((\sigma _i)_{i=1}^{l}\). Hence
where the sum is taken over all partitions of \(t_1,\ldots ,t_{l+k}\) into two increasing sequences \(t_{L_1},\ldots ,t_{L_k}\) and \(t_{R_1},\ldots ,t_{R_{l}}\). Because \(\left| A_{k+l}\right| = \frac{t^{l+k}}{(k+l)!}\) we see that
We may now conclude by taking
Abbreviating \(R_{T,N}=R_{2,T,N}+ \widetilde{R}_{2,T,N}\) and \(C(T)=C_2(T)+C_3(T)\), we have
Thus, for \(0\le t \le T\) we have that
where \(R_{T,N}\rightarrow 0\) at the rate \(1/N\). Since \(T\) is independent of the initial distribution, we can take \(t_1\) with \(0<t_1<T\) and repeat the proof to extend the result to \(t_1\le t <t_1+T\). Clearly, the constant \(C(T)\) also changes, but it still depends only on time, and the factor \(1/N\) remains unchanged. We can continue in this way to cover any time range \(0\le t \le T<\infty \). This concludes the proof. \(\square \)
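The combinatorial fact used in Step 5 — that an increasing list of \(k+l\) time points splits into an increasing \(k\)-list and an increasing \(l\)-list in exactly \(\binom{k+l}{k}\) ways, each pair merging back to the original ordered list — can be checked directly. A small illustration (the helper names are ours):

```python
from itertools import combinations
from math import comb

def split_partitions(times, k):
    """All ways to split the increasing list `times` into an increasing
    k-list and an increasing (len(times)-k)-list, as in the sum over
    partitions t_{L_1},...,t_{L_k} / t_{R_1},...,t_{R_l}."""
    n = len(times)
    for L in combinations(range(n), k):
        R = [i for i in range(n) if i not in L]
        yield [times[i] for i in L], [times[i] for i in R]

k, l = 2, 3
times = [0.1, 0.25, 0.4, 0.7, 0.9]   # an increasing list of k+l time points
parts = list(split_partitions(times, k))

# The number of partitions is binomial(k+l, k) ...
print(len(parts) == comb(k + l, k))
# ... and merging each pair of sublists recovers the ordered list.
print(all(sorted(tau + sigma) == times for tau, sigma in parts))
```

This is the counting behind \(\left|A_{k+l}\right| = t^{k+l}/(k+l)!\): the product of the two ordered-simplex volumes \(t^k/k!\cdot t^l/l!\) equals \(\binom{k+l}{k}\) copies of \(t^{k+l}/(k+l)!\).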
Combining Theorem 3.1 and Lemma 2.2 yields the following corollary which will be needed later.
Corollary 3.3
Assume that \(\widehat{m}_{6,N}(0)<\infty \). Let \(\psi \) and \(\phi \) be two functions such that
where \(C_1\) and \(C_2\) are two positive constants. Then we have
where
Here \(\widetilde{C}\) is a positive constant and \(C(T)\) is given by Theorem 3.1.
Proof
Let \(0<\alpha <1\), we have
Choosing smooth cutoff functions to approximate the characteristic functions in \(I\), it follows from Theorem 3.1 that
where
In \(\widetilde{S}_{T,\psi , \phi }\), either \(|v_1|\) or \(|v_2|\) is larger than \(N^\alpha \), so that \(|v_1|^2+|v_2|^2 \ge N^{2\alpha }\). Hence, using the inequalities \(2ab<a^2+b^2\) and \(a^2 b + a b^2 \le |a|^3+|b|^3\) (see Footnote 1), where \(a,b\in \mathbb {R}\),
where \(\widetilde{C}\) is a generic constant. The proof may then be concluded by choosing \(\alpha =1/4\). \(\square \)
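For completeness, the second inequality used in the proof above is indeed a consequence of the first, as Footnote 1 indicates; a short verification:

```latex
\begin{aligned}
a^2|b| + |a|b^2 &= |ab|\,\bigl(|a|+|b|\bigr)
  \le \tfrac{1}{2}\,(a^2+b^2)\bigl(|a|+|b|\bigr) \\
 &= \tfrac{1}{2}\bigl(|a|^3+|b|^3 + a^2|b| + |a|b^2\bigr),
\end{aligned}
```

where the middle step uses \(2|ab|\le a^2+b^2\). Absorbing \(\tfrac{1}{2}(a^2|b|+|a|b^2)\) into the left hand side gives \(a^2|b|+|a|b^2\le |a|^3+|b|^3\), and in particular \(a^2 b + ab^2 \le |a|^3+|b|^3\).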
4 Pathwise Comparison of the Processes
We now have two master equations, the master equation (1.15) and the quenched master equation (2.3), where the latter propagates chaos according to Kac’s definition. Following [2] we now consider the two stochastic processes
corresponding to the master equation (1.15) and
corresponding to the quenched master equation (2.3). The aim of this section is to compare these two processes and show that when \(N\) is large, the paths of the two processes are, with high probability, close to each other. The starting point is to find a formula for the difference between the paths of the stochastic processes. There are two sources of randomness in these processes: the first comes from the initial data, while the second comes from the collision history, i.e., the collision times \(t_k\), the pair of velocities \((v_i(t_k),v_j(t_k))\) or \((\widehat{v}_i(t_k),\widehat{v}_j(t_k))\) participating in each collision, and the random collision parameter \(\theta _k\). Let \({\mathbf {V}}_0\) be the vector of initial velocities and \(\omega \) the collision history. We assume that the two stochastic processes have the same initial velocities and collision history. For each process there is a unique sample path given \({\mathbf {V}}_0\) and \(\omega \). Let \({\mathbf {V}}(t,{\mathbf {V}}_0,\omega )\) and \(\widehat{{\mathbf {V}}}(t,{\mathbf {V}}_0,\omega )\) denote these sample paths. As in [2], we define
to be the flows generated by the autonomous dynamics (1.11) and the non-autonomous dynamics (2.13), respectively. Given a collision history \(\omega \), consider a time interval \([s,t]\) such that no collision occurs at time \(s\) or \(t\), and suppose that there are \(n\) collisions in this time interval, with collision times \(t_k\), i.e., \(s<t_1<t_2<\cdots <t_{n-1}<t_n<t\). Moreover, denote by \(\{(t_k,(i,j),\theta _k)\}\) the collision history in the time interval \([s,t]\), where \(t_k\) is the time of the \(k\)-th collision, \((i,j)\) are the indices of the colliding particles, and \(\theta _k\) is the random collision parameter. Then the stochastic process corresponding to the master equation (1.15), starting from \({\mathbf {V}}_s\) at time \(s\), has the path
while the stochastic process corresponding to the quenched master equation (2.3) has the path
In what follows we shall use the following two norms: Given a vector \({\mathbf {V}}=(v_1,\ldots ,v_N) \in \mathbb {R}^N\),
The goal of this section is to estimate
where \(\epsilon \) is a given positive number. In order to do that we first need to find an expression for \({\mathbf {V}}(t,{\mathbf {V}}_0,\omega )- \widehat{{\mathbf {V}}}(t,{\mathbf {V}}_0,\omega )\). For completeness we carefully explain the steps. The first step is to find an expression for the difference of the paths of the two processes between collisions, i.e., the difference between the flows \(\Psi _t({\mathbf {V}})\) and \(\widehat{\Psi }_{0,t}({\mathbf {V}})\). We recall the following useful formula for the difference between the products of two sequences of real numbers. If \((a_1,a_2,\dots ,a_n)\in \mathbb {R}^n\) and \((b_1,b_2,\dots ,b_n)\in \mathbb {R}^n\) then
Lemma 4.1
Between time \(s\) and time \(t\), the difference between \(\Psi _t({\mathbf {V}})\) and \(\widehat{\Psi }_{s,t}({\mathbf {V}})\) is given by
where \(D\Psi _t({\mathbf {V}})\) is the differential of the flow starting at \({\mathbf {V}}\) at time \(s\).
Proof
The flows can be written as
and
where \(s=t_0 < t_1 < t_2 < \cdots < t_n <t_{n+1}=t\). In the following expressions the symbol \(\prod \) is used to denote composition. Using the identity (4.6), we have
where \(Id\) denotes the identity operator. Let \(\triangle t=t_{n+2-j}-t_{n+1-j}\). By the flow property and a first order Taylor expansion (\({\mathbf {F}}(\cdot )\) is differentiable), we have
Making a first order Taylor expansion in the last equality around \(\widehat{\Psi }_{s,t_{n+1-j}}\) yields
Plugging (4.10) into \(I_1^n\), we finally have
Next, by a first order Taylor expansion
which together with another first order Taylor expansion leads to
Plugging (4.11) into \(I_2^n\), we obtain
This completes the proof. \(\square \)
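The proof of Lemma 4.1 rests on the telescoping identity (4.6) for the difference of two products. In its standard scalar form — which we take (4.6) to be, since the display is not reproduced here — it reads \(\prod_j a_j - \prod_j b_j = \sum_j a_1\cdots a_{j-1}(a_j-b_j)\,b_{j+1}\cdots b_n\), and a quick numerical check confirms it:

```python
from functools import reduce
from operator import mul

def prod(xs):
    return reduce(mul, xs, 1.0)

def telescoping(a, b):
    """Sum over j of a_1...a_{j-1} * (a_j - b_j) * b_{j+1}...b_n,
    which telescopes to prod(a) - prod(b)."""
    n = len(a)
    return sum(prod(a[:j]) * (a[j] - b[j]) * prod(b[j + 1:]) for j in range(n))

a = [1.5, -0.3, 2.0, 0.7]
b = [1.1, 0.4, -2.2, 0.9]
lhs = prod(a) - prod(b)
print(abs(lhs - telescoping(a, b)) < 1e-12)
```

In the lemma the same cancellation is applied to compositions of flow maps rather than products of numbers, with the differentials \(D\Psi\) playing the role of the partial products.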
Since we assumed that our two stochastic processes \({\mathbf {V}}(t,{\mathbf {V}}_0,\omega )\) and \(\widehat{{\mathbf {V}}}(t,{\mathbf {V}}_0,\omega )\) have the same collision history and \(R_{ij}(\theta _k)\) is a norm preserving linear operator, i.e.,
we can extend (4.7) to include the collisions and hence obtain a formula for the difference of the path of the two processes:
Having this formula, we see that in order to estimate (4.5), the quantities
and
need to be estimated. We start with (4.14). From Theorem 3.1 we know that the quenched master equation propagates chaos, and we also know the rate of convergence. This implies that for large \(N\), \(J(\widehat{{\mathbf {V}}}(t,{\mathbf {V}}_0,\omega ))\) should be close to \(\widehat{J}_N(t)\). More precisely, we have the following proposition, which corresponds to Proposition 3.3 in [2]; the difference is that their quenched master equation propagates independence, while here we only have propagation of chaos:
Proposition 4.2
Let \(\widehat{W}_N({\mathbf {V}},0)\) be a probability density on \(\mathbb {R}^N\) that satisfies the assumptions in Theorem 3.1. Suppose also that
Then, for \(t<T\)
where
Here \(\widetilde{C}\) is a positive constant and \(C(T)\) is given by Theorem 3.1.
Proof
The componentwise difference of the forces \({\mathbf {F}}(\widehat{{\mathbf {V}}}(s,{\mathbf {V}}_0,\omega ))\) and \(\widehat{{\mathbf {F}}}(\widehat{{\mathbf {V}}}(s,{\mathbf {V}}_0,\omega ))\) at time \(s\) is given by
Moreover, we can write
and
Using the inequality \(|J(\widehat{{\mathbf {V}}})|\le \sqrt{U(\widehat{{\mathbf {V}}})}\) together with the triangle inequality we arrive at
Integrating both sides of the last inequality over the interval \([0,t]\), taking the expectation with respect to \(\widehat{W}_N({\mathbf {V}},t)\) and using the Cauchy–Schwarz inequality leads to
First, by definition it follows
To estimate \(\mathbb {E}\big |J(\widehat{{\mathbf {V}}}(s,{\mathbf {V}}_0,\omega ))-\widehat{J}_N(s)\big |^2\), we first note that
where \(\widehat{v}_j(s)=\widehat{v}_j(s,{\mathbf {V}}_0,\omega )\). From this it follows
Using Corollary 3.3 with \(\psi (v)=(v-\widehat{J}_N(s))\) and \(\phi (w)=(w-\widehat{J}_N(s))\), we get
where
with \(\widetilde{C}\) being a positive constant and \(C(T)\) is given by Theorem 3.1. Estimating the first of the two last integrals yields
and by symmetry
Combining these inequalities, we have
A computation similar to the one we performed to estimate \(\mathbb {E}|J(\widehat{{\mathbf {V}}}(s,{\mathbf {V}}_0,\omega ))-\widehat{J}_N(s)|^2\) also yields
Collecting all the inequalities above and plugging them into (4.15), we finally get
\(\square \)
Following the lines in [2] the next step is to estimate (4.13).
Proposition 4.3
where
Proof
Let \(s<t_1<t\) and \({\mathbf {X}}\in \mathbb {R}^N\). Consider the expression
which is a part of (4.2). We have
The fact that \(\Vert \Psi _t({\mathbf {V}})\Vert =\Vert {\mathbf {V}}\Vert \) and that \(DR_{ij}(\theta _1)\) is norm preserving implies that
which in turn leads to
By repeating this procedure \(n\) times we get
To estimate the right hand side of the last inequality, we begin by noting that
with \(D\Psi _0({\mathbf {V}})=Id\). Next,
Differentiating the left hand side of the last inequality and using the Cauchy–Schwarz inequality yields
where
By (1.12), we have
where
Writing \({\mathbf {X}}=(x_1,\dots ,x_N)\) with \(||{\mathbf {X}}||=1\), we have
Using the Cauchy–Schwarz inequality twice yields
Hence
The inequality \(|J({\mathbf {V}})|\le \sqrt{U({\mathbf {V}})}\) together with the definition of \(U({\mathbf {V}})\) leads to
and
Collecting all these inequalities, we finally obtain
Plugging this into (4.21), for all \({\mathbf {X}} \in \mathbb {R}^N\), we have
Solving this differential inequality yields
Applying the last inequality to (4.20), we conclude that
\(\square \)
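The last step of the proof solves a differential inequality of Grönwall type: from \(\frac{d}{dt}y(t)\le \beta(t)\,y(t)\) one obtains the exponential bound \(y(t)\le y(0)\exp\bigl(\int_0^t \beta\bigr)\). A discrete analogue — a sketch of the mechanism, not the paper's exact computation — shows why the bound holds: since \(1+x\le e^x\), each Euler step stays below the corresponding exponential factor.

```python
import math

def gronwall_discrete(y0, betas, h):
    """Iterate y_{k+1} = (1 + h*beta_k) * y_k (the Euler scheme for
    y' = beta(t) * y) and return the final value together with the
    Gronwall-type bound y0 * exp(h * sum(betas))."""
    y = y0
    for b in betas:
        y *= 1.0 + h * b
    return y, y0 * math.exp(h * sum(betas))

# Nonnegative rates: since 1 + x <= exp(x), the iterate never exceeds the bound.
y, bound = gronwall_discrete(1.0, [0.5 + 0.1 * k for k in range(100)], 0.01)
print(y <= bound)
```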
In order to complete the estimate for (4.5), we also need to show that, for large \(N\), the probability that \(\sup _{0\le s\le t}U(\widehat{{\mathbf {V}}})^{-1/2}\) is large is small. This is because, while the quantity \(U({{\mathbf {V}}})\) is conserved by the master equation (1.15), the quantity \(U(\widehat{{\mathbf {V}}})\) is not conserved by the quenched master equation (2.3). The following lemma is based on [2], with the small difference that here we only have that the quenched master equation propagates chaos, and not independence.
Lemma 4.4
Let \(\widehat{W}_N({\mathbf {V}},0)\) be a probability density on \(\mathbb {R}^N\) satisfying the assumptions in Theorem 3.1 and
Then for \(0<t<T\), we have
where \(A_2(T)\) is given by Proposition 4.2 and \(n(t)\) is the smallest integer such that \(n(t)\ge \frac{t}{\delta _t}+1\) with \(\delta _t \) defined by
Proof
This proof can be carried out almost as in [2], but with some modification to account for the lack of independence. For completeness we present the full proof, not only the needed modifications. From the definitions, we have
Using the inequalities \(|J(\widehat{{\mathbf {V}}}(t))|\le \sqrt{U(\widehat{{\mathbf {V}}}(t))}\) and \(|\widehat{J}_N(t)|\le \sqrt{\widehat{U}_N}\), we obtain
Writing
the last differential inequality corresponds to the following differential equation
with initial condition \( x(t_0)=x_0\). The solution is given by
The above solution \(x(t)\) blows up in finite time. However, we can still hope that the solutions starting at \(x_0\) do not blow up in a time interval whose length is independent of \(t_0\). To be more precise, let \(t_1\) denote the time at which \(x(t_1)=2x_0\). Then
Choosing \(x_0=\sqrt{2/\widehat{U}_N}\) leads to
The length of the interval \([t_0,t_1]\) is independent of \(t_0\) since \(E\) and \(\widehat{U}_N\) are given. Thus, we now have that if \(\left( U(\widehat{{\mathbf {V}}}(t_0))\right) ^{-1/2}\le \sqrt{2/\widehat{U}_N}\), then for all \(t\) in \([t_0,t_1]\), \(\left( U(\widehat{{\mathbf {V}}}(t))\right) ^{-1/2}\le 2\sqrt{2/\widehat{U}_N}\).
Moreover, for any given \(t_0<T\) with \(T\) from Theorem 3.1, using Corollary 3.3 we get
where \(A_2(T)\) is given by Proposition 4.2. By the Chebyshev inequality, we now have
Hence, for large \(N\), the probability that \(U(\widehat{{\mathbf {V}}}(t_0))^{-1/2}>\sqrt{2/\widehat{U}_N}\) is small.
Let now \(\delta _t=t_1-t_0\) where \(t_1-t_0\) is given by (4.25). Furthermore, for any given \(t>0\), we set \(n(t)\) to be the smallest integer such that \(n(t)\ge \frac{t}{\delta _t}+1\). It now follows, if
we have by the reasoning above that
Using this, we now get
This implies for \(t<T\) that
This is what we wanted to show. \(\square \)
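The blow-up argument in the proof can be made concrete on the model equation \(x' = c\,x^2\), with a generic constant \(c\) standing in for the combination of \(E\) and \(\widehat{U}_N\) appearing above (a sketch under that simplification): the solution blows up in finite time, yet the time needed to double from \(x_0\) to \(2x_0\) is \(1/(2cx_0)\), independent of the starting time \(t_0\).

```python
def x_exact(t, t0, x0, c):
    """Closed-form solution of x'(t) = c * x(t)**2 with x(t0) = x0;
    it blows up at time t0 + 1/(c*x0)."""
    return x0 / (1.0 - c * x0 * (t - t0))

def doubling_time(x0, c):
    """Time for the solution to grow from x0 to 2*x0: solving
    x_exact(t0 + d) = 2*x0 gives d = 1/(2*c*x0), independent of t0."""
    return 1.0 / (2.0 * c * x0)

c, x0 = 3.0, 0.25
d = doubling_time(x0, c)
# The doubling time does not depend on where the interval starts.
for t0 in (0.0, 1.0, 7.5):
    assert abs(x_exact(t0 + d, t0, x0, c) - 2.0 * x0) < 1e-12
print(d)
```

This is exactly the mechanism used in the proof: the interval \([t_0,t_1]\) on which the bound \(2x_0\) is not yet reached has a length that depends only on the given constants, not on \(t_0\).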
Combining Proposition 4.3 with the last lemma leads to the following corollary.
Corollary 4.5
Let \(\widehat{W}_N({\mathbf {V}},0)\) be a probability density on \(\mathbb {R}^N\) satisfying the assumptions in Theorem 3.1 and
Let
Then for \(0<t<T\), we have
with \(A_2(T)\) given by Proposition 4.2 and \(n(T)\) by Lemma 4.4.
Proof
Since the exponential function is increasing, the proof follows from the last Lemma. \(\square \)
We are now ready to prove the main result of this section which again is a modification of the corresponding result in [2]:
Theorem 4.6
Let \(\widehat{W}_N({\mathbf {V}},0)\) be a probability density on \(\mathbb {R}^N\) satisfying the assumptions in Theorem 3.1 and
Then for all \(\epsilon >0\),
where \(A_1(T)\) and \(A_2(T)\) are given by Proposition 4.2 and \(n(T)\) by Lemma 4.4.
Proof
Following the lines of [2], we define two events \(A\) and \(B\), where \(A\) is the event such that
and \(B\) is the event such that
To estimate \(\mathbb {P}(B)\), note that
From Corollary 4.5 we obtain
To estimate \(\mathbb {P}(B \cap A^c)\), we first note that on \(A^c\), by (4.12) and Proposition 4.3 it follows that
Using the Markov inequality, we get
By Proposition 4.2 we get
Collecting the inequalities above, we conclude
\(\square \)
5 Propagation of Chaos for the Master Equation (1.15)
We are finally ready to show the main result of this paper, namely, that the second marginal \(f_2^N(v_1,v_2,t)\) of \(W_N({\mathbf {V}},t)\) satisfying the master equation (1.15) converges as \(N\rightarrow \infty \) to the product \(f(v_1,t)f(v_2,t)\) of two first marginals of \(W_N({\mathbf {V}},t)\), where \(f(v,t)\) solves (1.16). In [2], the idea is to introduce two empirical distributions corresponding to the two stochastic processes \({\mathbf {V}}(t)\), \(\widehat{{\mathbf {V}}}(t)\) and make use of the propagation of independence to apply the law of large numbers. In our case, independence between particles is not propagated, but the quenched master equation (2.3) propagates chaos, which together with Theorem 4.6 gives that, for large \(N\), with high probability the distance between the paths of the two stochastic processes can be made arbitrarily small. We start by introducing the two following empirical distributions: For each fixed \(N\) and \(t>0\), let
and
Since we have shown that the quenched master equation (2.3) propagates chaos, it follows from [10, Proposition 2.2] that
where the convergence is in distribution and \(\widehat{f}(v,t)\) is the solution to (1.16), see [13, Theorem 2.1].
Theorem 4.6 shows that the distance between the two empirical measures above goes to zero as \(N \rightarrow \infty \). To be more precise, we need to recall the Kantorovich–Rubinstein theorem concerning the \(1\)-Wasserstein distance. Let
Theorem 5.1
For any \(\mu , \eta \in \mathcal {P}_1(\mathbb {R}^N)\)
where the supremum is taken over the set of all \(1\)-Lipschitz continuous functions \(\phi :\mathbb {R}^N\rightarrow \mathbb {R}\).
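In one dimension, and for empirical measures with equally many atoms, the optimal coupling simply matches order statistics, so \(\mathcal {W}_1\) can be computed exactly. This gives a quick illustration of the duality — the test function side never exceeds the transport side (a toy example of ours, not the construction used in the proofs):

```python
def w1_empirical(xs, ys):
    """Exact 1-Wasserstein distance between two one-dimensional empirical
    measures with equally many atoms: match order statistics."""
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

def mean(phi, sample):
    return sum(phi(v) for v in sample) / len(sample)

xs = [0.2, -1.3, 0.8, 2.1, -0.4]
ys = [0.5, -1.0, 1.2, 1.9, 0.1]

phi = abs                    # the absolute value is 1-Lipschitz
gap = abs(mean(phi, xs) - mean(phi, ys))
# Kantorovich-Rubinstein: |int phi dmu - int phi deta| <= W_1(mu, eta).
print(gap <= w1_empirical(xs, ys) + 1e-12)
```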
We now have
Lemma 5.2
Let the assumptions of Theorem 3.1 be satisfied, and assume that \(\widehat{U}_N>0\) and that \(\widehat{m}_{6,N}(0)<\infty \). Moreover let \(\phi \) be a 1-Lipschitz function. For the \(\mathcal {W}_1\) defined as in Theorem 5.1, and all \(\epsilon >0\) we have
Proof
The Lipschitz condition together with the Cauchy–Schwarz inequality yields
Using Theorem 4.6 we get
\(\square \)
Using Lemma 5.2 and (5.3) together with the fact that the quenched master equation propagates chaos (Theorem 3.1), we are now ready to prove our main result.
Theorem 5.3
Let \(\widehat{W}_N({\mathbf {V}},0)\) be a probability density on \(\mathbb {R}^N\) satisfying the assumptions in Theorem 3.1 and
Let \({\mathbf {V}}(t,{\mathbf {V}}_0,\omega )=(v_1(t,{\mathbf {V}}_0,\omega ),\dots ,v_N(t,{\mathbf {V}}_0,\omega ))\) be the stochastic process corresponding to the master equation (1.15) with initial condition given by (3.2). Then for every 1-Lipschitz function \(\phi \) on \(\mathbb {R}^2\) with \(||\phi ||_{\infty }<\infty \) and all \(t>0\) we have
where the expectation is with respect to the collision history \(\omega \) and initial velocities \({\mathbf {V}}_0\).
Proof
Since the probability density \(\widehat{W}\) is symmetric under permutation, we have
Let \(\Psi (u)\) be defined by
From the properties of \(\phi \) it follows that \(\Psi \) is 1-Lipschitz on \(\mathbb {R}\) and \(||\Psi ||_{\infty }\le ||\phi ||_{\infty }\). For any given \(\varepsilon >0 \), we now have
To obtain the last inequality we have used the 1-Lipschitz condition on \(\phi \) and the Cauchy–Schwarz inequality. A similar argument also yields
Consulting Theorem 4.6 and choosing \(\epsilon =N^{-1/8}\) finally gives
Since by (5.3) it follows that
we conclude that
which is what we wanted to show. \(\square \)
Notes
1. This is a consequence of the first inequality.
References
Bagland, V.: Well-posedness and large time behaviour for the non-cutoff Kac equation with a Gaussian thermostat. J. Statist. Phys. 138, 838–875 (2010)
Bonetto, F., Carlen, E., Esposito, R., Lebowitz, J., Marra, R.: Propagation of chaos for a thermostated kinetic model. J. Statist. Phys. 154(1-2), 265–285 (2014)
Carlen, E., Degond, P., Wennberg, B.: Kinetic limits for pair-interaction driven master equations and biological swarm models. Math. Models Methods Appl. Sci. 23, 1339–1376 (2013)
Cercignani, C., Illner, R., and Pulvirenti, M.: The Mathematical Theory of Dilute Gases. Springer, Berlin (1994)
Grünbaum, A.: Propagation of chaos for the Boltzmann equation. Arch. Ration. Mech. Anal. 42, 323–345 (1971)
Kac, M.: Foundations of kinetic theory. In: Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, 1954–1955, vol. III, pp. 171–197. University of California Press, Berkeley (1956)
Lanford, O.: Time evolution of large classical systems. In: Moser, J. (ed.) Lecture Notes in Physics, vol. 38, pp. 1–111. Springer, Berlin (1975)
Mischler, S., Mouhot, C.: Kac’s Program in Kinetic Theory. Springer, Berlin (2012)
Mischler, S., Mouhot, C., Wennberg, B.: A new approach to quantitative propagation of chaos for drift, diffusion and jump processes. Probab. Theory Relat. Fields (2013). doi:10.1007/s00440-013-0542-8
Sznitman, A.: Topics in propagation of chaos. Ecole d’Eté de Probabilités de Saint-Flour XIX-1989. In: Lecture Notes in Mathematics, vol. 1464, pp. 165–251. Springer, Berlin (1991)
Villani, C.: A review of mathematical topics in collisional kinetic theory. In: Friedlander S., Serre D. (eds.) Handbook of Mathematical Fluid Dynamics. Elsevier Science, Amsterdam (2002)
Wennberg, B., Wondmagegne, Y.: The Kac equation with a thermostatted force field. J. Statist. Phys. 124(2–4), 859–880 (2006)
Wondmagegne, Y.: Kinetic equations with a Gaussian thermostat. Doctoral Thesis, Department of Mathematical Sciences, Chalmers University of Technology and Göteborg University (2005)
Acknowledgments
We would like to thank an anonymous referee for having read the previous version of this paper very carefully and for pointing out several important issues. E.C. would like to thank Chalmers University of Technology for its hospitality during a visit in the Spring of 2012, and would like to acknowledge support from N.S.F. Grant DMS-1201354. D.M. and B.W. would like to thank Kleber Carrapatoso for discussions during the initial phase of this work. This work was supported by grants from the Swedish Science Council and the Knut and Alice Wallenberg Foundation.
Carlen, E., Mustafa, D. & Wennberg, B. Propagation of Chaos for the Thermostatted Kac Master Equation. J Stat Phys 158, 1341–1378 (2015). https://doi.org/10.1007/s10955-014-1155-z