1 Introduction

When only Compton scattering events are considered, the evolution of the particle density of a gas of photons interacting with electrons at non relativistic equilibrium is usually described by means of a Boltzmann equation that may be found in [11, 22, 26] and many others. For a spatially homogeneous isotropic gas of photons and non relativistic electrons at equilibrium, the equation simplifies to the following expression, written with a notation more usual in the physics literature:

$$\begin{aligned} k^2{\partial f\over \partial t}(t, k)&=Q_{\beta }(f, f)(t, k),\qquad t>0,\;k\ge 0, \end{aligned}$$
(1.1)
$$\begin{aligned} Q_{\beta }(f, f)(t, k)&=\int _0^\infty \left( \frac{}{} f(t, k') \,(1+f(t, k)) e^{-\beta k} \right. \nonumber \\&\quad \left. -f(t, k) (1 + f(t, k')) e^{-\beta k'} \right) k k' \mathcal {B} _{ \beta }(k, k')dk'. \end{aligned}$$
(1.2)

The variable \(k=|\mathbf k |\) denotes the energy of a photon of momentum \(\mathbf k \in \mathbb {R}^3\) (taking the speed of light c equal to one), \(\beta =(k_BT)^{-1}\), with T the temperature of the gas of electrons and \(k_B\) the Boltzmann constant, \((4\pi /3)k^2 f(t, k)\ge 0\) is the particle density, and \(\mathcal {B} _{ \beta }(k, k')\) is a function sometimes called the redistribution function.

We emphasize that only elastic collisions of one photon and one electron giving rise to one photon and one electron are considered in this equation, and no radiative effects are taken into account. As shown in [5], the cross section for the emission of an additional photon of energy k diverges as k approaches zero, so the probability of a Compton process unaccompanied by such an emission is zero. It follows that Eqs. (1.1), (1.2) cannot accurately account for photons of very small energy.

When the speed of light c is taken into account, the corresponding Eqs. (1.1) and (1.2) are very often approximated by a nonlinear Fokker–Planck equation (cf. [22]). For \(\beta>>(m c^2)^{-1}\) (which corresponds to non relativistic electrons with mass m), the scattering cross section of photons with energies \(k<< mc^2\) may be approximated by the Thomson scattering cross section. It is then possible to deduce the following expression for \(\mathcal {B}_\beta (k, k')\):

$$\begin{aligned}&\displaystyle \mathcal {B} _{ \beta }(k, k')=\sqrt{\beta } \;e^{\beta \frac{(k'+k )}{2}}\int _0^\pi \frac{(1+\cos ^2\theta )}{ |\mathbf k '-\mathbf k | }e^{-\beta \frac{\Delta ^2+\frac{m^2v^4}{4}}{2mv^2}} d\cos \theta , \end{aligned}$$
(1.3)
$$\begin{aligned}&\displaystyle v=\frac{1}{m }|\mathbf k '-\mathbf k |,\quad \Delta =k'-k, \end{aligned}$$
(1.4)

(cf. [24] and [14]). It is then argued (cf. [22] for example) that \(\mathcal {B} _{ \beta }(k, k')\) is strongly peaked in the region

$$\begin{aligned} \left\{ k>0, \;k'>0;\,\,|k-k'|<<\min \{k, k'\}\right\} \end{aligned}$$
(1.5)

for large values of \(\beta \) (cf. Figs. 1, 2 and 3 in Appendix B.2). Then, if the variations of f are not too large, it is possible to expand the integrand of (1.1) around k and, after a suitable rescaling of the time variable, Eqs. (1.1) and (1.2) are approximated by:

$$\begin{aligned} \frac{\partial f}{\partial t}=\frac{1}{k^2}\frac{\partial }{\partial k}\left( k^4 \left( \frac{\partial f}{\partial k}+f^2+f \right) \right) , \end{aligned}$$
(1.6)

the Kompaneets equation ([22]). However, it is difficult to determine under what conditions on the initial data, and in what range of photon energies k, this approximation is correct.
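For illustration, the following short numerical sketch integrates (1.6) with an explicit, flux-form finite difference scheme. It is only an illustration of ours: the grid, the initial datum and the final time are arbitrary choices, and implicit discretisations of Chang–Cooper type are usually preferred in practice.

```python
import numpy as np

# Explicit, flux-form discretisation of the Kompaneets equation (1.6):
#   df/dt = k^{-2} d/dk [ k^4 ( df/dk + f^2 + f ) ].
# Illustrative sketch only; grid, datum and final time are arbitrary choices.

k = np.linspace(0.05, 8.0, 400)            # photon energy grid (avoids k = 0)
dk = k[1] - k[0]
f = 2.0 / np.expm1(k)                      # initial datum: twice a Planck profile

dt = 0.2 * dk**2 / k[-1]**2                # explicit stability restriction
n_steps = int(0.02 / dt)
km = 0.5 * (k[:-1] + k[1:])                # midpoints, where the flux is evaluated

N0 = np.sum(k**2 * f) * dk                 # photon number, N = int k^2 f dk
for _ in range(n_steps):
    fm = 0.5 * (f[:-1] + f[1:])
    flux = km**4 * ((f[1:] - f[:-1]) / dk + fm**2 + fm)   # k^4 (f_k + f^2 + f)
    div = np.zeros_like(f)
    div[1:-1] = (flux[1:] - flux[:-1]) / dk               # interior update; endpoints kept fixed
    f += dt * div / k**2
print("photon number before/after:", N0, np.sum(k**2 * f) * dk)   # approximately conserved
```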

Fig. 1  The kernel \(B_{\beta }(x,y)\) for \(\beta =100\), \(m=1\), \((x,y)\in [0.1,4]^2\)

Fig. 2  From left to right, the kernel \(B_{\beta }(x,y)\) for \(m=1\), \((x,y)\in [0.1,4]^2\) and \(\beta =10\), \(\beta =50\) and \(\beta =200\)

Fig. 3  Sections of \(B_{\beta }\) for \(\beta =50\) and \(m=1\). The horizontal axis corresponds to the variable \(\xi =(x-y)/\sqrt{2}\). The vertical axis corresponds to \(B_{\beta }(x,y)\) for \(x+y=\text {constant}\). In blue, \(x+y=0.3\), in red \(x+y=0.5\) and in yellow, \(x+y=1\)
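The behaviour displayed in these figures can be explored by a direct quadrature of (1.3), (1.4). The following sketch is only an illustration of ours: the parameter values and the quadrature rule are our choices, and the overall normalisation may differ from that of the scaled kernel \(\widetilde{B}_\beta \) plotted in Appendix B.2.

```python
import numpy as np

def B_beta(k, kp, beta=10.0, m=1.0, n_nodes=400):
    """Direct quadrature of the redistribution function (1.3)-(1.4) for k, k' > 0.

    The angular integral is rewritten in the variable u = |k' - k|, which removes
    the integrable 1/|k' - k| singularity on the diagonal; beta, m and the number
    of quadrature nodes are illustrative choices."""
    u = np.linspace(abs(k - kp), k + kp, n_nodes)
    u = np.maximum(u, 1e-12)                      # avoid 0/0 at the lower end when k = k'
    s = (k**2 + kp**2 - u**2) / (2.0 * k * kp)    # cos(theta) recovered from u
    delta = kp - k
    mv2 = u**2 / m                                # m v^2, with v = |k' - k| / m
    expo = -beta * (delta**2 + mv2**2 / 4.0) / (2.0 * mv2)
    integrand = (1.0 + s**2) * np.exp(expo) / (k * kp)   # Jacobian ds = -u du/(k k') included
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(u))
    return np.sqrt(beta) * np.exp(beta * (k + kp) / 2.0) * integral

# surface over [0.1, 4]^2, as in Figs. 1-2 (up to the normalisation used there):
xs = np.linspace(0.1, 4.0, 60)
surface = np.array([[B_beta(x, y) for y in xs] for x in xs])
```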

Due in particular to its importance in modern cosmology and high energy astrophysics, the Kompaneets equation has received great attention in the physics literature (cf. the review [3]). It has also been studied from a more strictly mathematical point of view ([7, 12, 20]), and several of its possible approximations have also been considered ([1, 23]). It was first observed in [28] that, for a large class of initial data, the solutions of (1.6) may develop, as t increases, steep profiles near \(k=0\), very close to a shock wave. This was proved in [12] to happen for some of the solutions, for k in a neighborhood of the origin and large times.

On the basis of the equilibrium distributions \(F_M\) of (1.1), (1.2) given by

$$\begin{aligned} k^2F_M&=k^2f_\mu +\alpha \delta _0, \,\,\,\mu \le 0,\,\,\alpha \ge 0,\,\,\,\alpha \mu =0, \end{aligned}$$
(1.7)
$$\begin{aligned} f_\mu (k)&=\frac{1}{e^{k-\mu }-1},\,\,\int _{ 0 }^\infty k^2f_\mu (k)dk=M_\mu ,\,\,M=\alpha +M_\mu , \end{aligned}$$
(1.8)

some of its unsteady solutions are also expected to develop, asymptotically in time, very large values and strong variations in very small regions near the origin. This was proved to be true in [13], where, under the assumption that \(e^{-\eta (k+k')}(kk')^{-1}\mathcal {B} _{ \beta }(k, k')\) is a bounded function on \([0, \infty )^2\) for some \(\eta \in [0, 1)\), it is shown that, as \(t\rightarrow \infty \), certain solutions form a Dirac mass at the origin. A detailed description of this formation was given later in [15], assuming \(\mathcal {B} _{ \beta }(k, k')(kk')^{-1}\equiv 1\) and for some classes of initial data. Of course, in the region where this Dirac mass forms, Eqs. (1.1) and (1.2) cannot be approximated by the Kompaneets equation (1.6).
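For instance, the photon number \(M_\mu \) of the regular part \(f_\mu \) in (1.8) is easily evaluated numerically; the following sketch (using scipy, with illustrative values of \(\mu \)) recovers \(M_0=2\zeta (3)\approx 2.404\) in the Planck case \(\mu =0\).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import zeta

def M(mu):
    """Photon number M_mu = int_0^infty k^2 / (e^{k - mu} - 1) dk of f_mu in (1.8), mu <= 0."""
    value, _ = quad(lambda k: k**2 / np.expm1(k - mu), 0.0, np.inf)
    return value

print(M(0.0), 2.0 * zeta(3.0))   # Planck case: both ~ 2.40411
print(M(-1.0))                   # a chemical potential mu < 0 gives a smaller photon number
```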

It is obvious however that the function \(\mathcal {B} _{ \beta }(k, k')\) in (1.3), (1.4) does not satisfy the conditions imposed in [13] or [15]. On the other hand, the Boltzmann equation (1.1), (1.2) with the kernel (1.3), (1.4) was considered in [9] and [16]. Local existence for small initial data with a moment of order \(-1\) was proved in [9]. It was proved in [16] that, although the Cauchy problem is globally solvable in time for initial data bounded from above by the Planck distribution, it has no solution, even local in time, for initial data greater than the Planck distribution. This seems to be an effect of the very small values of k and \(k'\) with respect to \(|k-k'|\) in the collision integral, and indicates that some truncation is needed in order to have a reasonable theory for the Cauchy problem (cf. Sect. 1.1.1 below).

In this article we consider first the Cauchy problem for an equation where the kernel (1.3), (1.4) is truncated in a region where k or \(k'\) are much smaller than \(|k-k'|\), although the strong singularity at the origin \(k=k'=0\) is kept. This is achieved by multiplying the kernel \(\mathcal {B} _{ \beta }\) by a suitable cut off function \(\Phi (k, k')\),

$$\begin{aligned} k^2{\partial f\over \partial t}(t, k)&=\widetilde{Q}_{\beta }(f, f)(t, k) \end{aligned}$$
(1.9)
$$\begin{aligned} \widetilde{Q}_{\beta }(f, f)(t, k)&=\int _0^\infty \left( \frac{}{} f(t, k') \,(1+f(t, k)) e^{-\beta k} \right. \nonumber \\&\quad \left. -f(t, k) (1 + f(t, k')) e^{-\beta k'} \right) k k' \Phi (k, k')\mathcal {B} _{ \beta }(k, k')dk' \end{aligned}$$
(1.10)

The Cauchy problem for (1.9), (1.10) is proved to have weak solutions for a large class of initial data in the space of non negative measures. Because of some difficulties coming from the kernel \(\mathcal {B}_\beta \) and its truncation, it is not possible to perform the same analysis as in [13] or [15], where the asymptotic behavior of the solutions was described.

Some further insight may be obtained from a simplified equation, proposed in [29] and [30], where the authors suggest keeping only the quadratic terms in (1.2) when \(f>>1\) (or when the function f has a large derivative), and consider,

$$\begin{aligned} k^2{\partial f\over \partial t} (t, k) =f(t, k) \int _0^\infty f(t, k') \bigl ( e^{-\beta k} - e^{-\beta k'} \bigr ) kk'\mathcal {B} _{ \beta }(k, k')dk' . \end{aligned}$$
(1.11)

This equation may be formally obtained in the limit \(\beta \rightarrow \infty \) with \(\beta k\) of order one (cf. Section B below). If the reasoning leading from Eq. (1.1) to the Kompaneets equation is applied to Eq. (1.11), the following nonlinear first order equation is obtained,

$$\begin{aligned} \frac{\partial f}{\partial t}=\frac{1}{k^2}\frac{\partial }{\partial k}\left( k^4 f^2\right) ,\,\,\,t>0, \, k>0. \end{aligned}$$
(1.12)
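As a quick formal check (our own computation, using the density \(v=k^2f\) that is introduced in (1.14) below), Eq. (1.12) may be written as a first order conservation law,

$$\begin{aligned} \frac{\partial v}{\partial t}=\frac{\partial }{\partial k}\left( v^2\right) ,\qquad v=k^2f, \end{aligned}$$

whose characteristics satisfy \(\dot{k}=-2v\le 0\) and therefore move towards the origin, in agreement with the concentration near \(k=0\) discussed in this Introduction.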

For the same reasons as for Eq. (1.1), we consider Eq. (1.11) with the truncated redistribution function,

$$\begin{aligned} k^2{\partial f\over \partial t} (t, k) =f(t, k) \int _0^\infty f(t, k') \bigl ( e^{-\beta k} - e^{-\beta k'} \bigr ) kk'\Phi (k, k')\mathcal {B} _{ \beta }(k, k')dk' . \end{aligned}$$
(1.13)

As for Eqs. (1.9), (1.10), Eq. (1.13) has weak solutions for a large set of initial data. Moreover, if the initial data is an integrable function, sufficiently flat around the origin, it has a global solution that remains, for all time, an integrable function, flat around the origin. The weak solutions of (1.13) converge, as t tends to infinity, to a limit that may be almost completely characterized. It is formed by an at most countable number of Dirac masses, whose locations are determined by the way in which the mass of the initial data is distributed. This suggests a possible transient behavior for the solutions of the complete Eq. (1.9), where large and concentrated peaks could form and remain for some time.

We refer to [19] for recent numerical simulations on the behavior of the solutions of the Eq. (1.1) and the Kompaneets approximation. The anisotropic case has also been recently considered in [6].

We describe now our results in more detail.

1.1 The Function \(\mathcal {B} _{ \beta }(k, k')\). Weak Formulation

Due to the \(k^2\) factor in the left hand side of (1.1), it is natural to introduce the new variable

$$\begin{aligned} v(t, k)=k^2f(t, k). \end{aligned}$$
(1.14)

This variable v is now, up to a constant, the photon density in the radial variables, and Eqs. (1.1) and (1.2) read,

$$\begin{aligned} \frac{\partial v}{\partial t}(t, k)&= \mathcal {Q}_{\beta }(v, v)(t,k),\qquad t>0,\;k\ge 0, \end{aligned}$$
(1.15)
$$\begin{aligned} \mathcal {Q}_{\beta }(v, v)(t,k)&=\int _0^\infty q_\beta (v, v') \frac{\mathcal {B} _{ \beta }(k, k')}{kk'}dk', \end{aligned}$$
(1.16)
$$\begin{aligned} q_\beta (v, v')&=v' (k^2+v) e^{-\beta k} - v (k'^2 + v') e^{-\beta k'}, \end{aligned}$$
(1.17)

where we use the common notation \(v=v(t,k)\) and \(v'=v(t,k')\). As a consequence of the change of variables (1.14), the factor \(kk'\) in the collision integral has been changed to \((kk')^{-1}\).

An expression of \(\mathcal {B} _{ \beta }(k, k')\) may be obtained at low density of electrons and using the non relativistic approximation of the Compton scattering cross section (cf. [14, 24]). It may be seen in particular that \(\mathcal {B} _{ \beta }(k, 0)>0\) for all \(k>0\), and

$$\begin{aligned} \mathcal {B} _{ \beta }(k, k')&=\frac{44}{15 }\left( \frac{2}{k+k '}+1\right) +\mathcal {O}(k+k'),\quad k+k'\rightarrow 0, \end{aligned}$$
(1.18)
$$\begin{aligned} \mathcal {B} _{ \beta }(k, 0)&= \frac{8 \sqrt{\beta }}{3}e^{\frac{k}{2}}\frac{e^{- \frac{\beta ^2+k^2}{2\beta m}}}{k}. \end{aligned}$$
(1.19)

The kernel \(\mathcal {B} _{ \beta }(k, k')(kk')^{-1}\) is then rather singular near the axes, and the collision integral \(\mathcal {Q}(v,v)\) is not defined for v(t) a general non negative bounded measure. In order to overcome this problem, it is usual to introduce weak solutions. A natural definition of weak solution is:

$$\begin{aligned} \frac{d}{dt} \int \limits _{ [0, \infty ) }v(t, k)\varphi (k)dk=\frac{1}{2}\iint \limits _{[0, \infty )^2 } \big (\varphi -\varphi '\big ) q_\beta (v, v')\frac{\mathcal {B} _{ \beta }(k, k')}{kk'}dk dk' \end{aligned}$$
(1.20)

for a suitable space of test functions \(\varphi \). Again, we use the notation \(\varphi =\varphi (k)\) and \(\varphi '=\varphi (k')\). Since \(\mathcal {B} _{ \beta }(k, 0)>0\) for all \(k>0\), the integral in the right hand side of (1.20) may still diverge. It was actually proved in [16] that for initial data \(v_0\) such that

$$\begin{aligned} v_0(x)> \frac{x^2}{e^x-1},\quad \forall x\ge 0, \end{aligned}$$

Eq. (1.20) has no solution in \(C([0, T), \mathscr {M}_+^1([0,\infty )))\), for any \(T>0\).

Kernels with this kind of singularity have been considered in coagulation equations. One possible way to overcome the difficulty and obtain global solutions is to use test functions \(\varphi \) compactly supported on \((0, \infty )\), as in [25], or such that \(\varphi (x)\sim x^\alpha \) as \(x\rightarrow 0\) for some \(\alpha \) large enough, as for example in [18] (but in that case we could not expect to obtain any information on what happens near the origin), or also to look for solutions v in suitable weighted spaces, as in [2, 8] (but that would exclude the Dirac delta at the origin). In all these cases, the propagation of negative moments for all \(t>0\) is necessary. That property does not seem to hold for (1.9); cf. Remark 2.13 for the local propagation of some negative moments, and Remarks 1.5 and 5.8 for Eq. (1.13).

1.1.1 Truncated Kernel: Why and How

As we have already mentioned, Eqs. (1.1) and (1.2) do not describe Compton scattering when photons of very low energy are considered, since in that case the spontaneous emission of photons must be taken into account (cf. [5]). At this level of description then, some cut off seems necessary for a coherent description, where only collisions of one photon and one electron giving one photon and one electron are considered.

In view of the properties of the function \(\mathcal {B}_\beta \) for \(\beta \) large presented in Appendix B, and since no precise indication is available in the physics literature, we use a mathematical criterion, as follows:

(i) We truncate the kernel \(\mathcal {B}_\beta \), down to zero, outside the following subset of \([0, \infty )\times [0, \infty )\):

$$\begin{aligned}&\forall (k,k')\in [0,\delta _*]^2,\quad |k-k'|\le \rho _*(kk')^{\alpha _1} (k+k')^{\alpha _2}, \end{aligned}$$
(1.21)
$$\begin{aligned}&\forall (k, k')\in [0,\infty )^2\setminus [0,\delta _*]^2,\quad \theta k\le k'\le \theta ^{-1} k, \end{aligned}$$
(1.22)

for some constants \(\delta _*>0\), \(\rho _*>0\), \(\alpha _1\ge 1/2\), \(2\alpha _2\ge 3-4\alpha _1\), and \(\theta \in (0, 1)\).

(ii) In order to minimize the region of this truncation, we choose \(\alpha _1=\alpha _2 =1/2\).

(iii) We leave \(\mathcal {B}_\beta \) unchanged as much as possible inside that region, but at the same time we want the resulting truncated kernel to belong to \(C((0, \infty )\times (0, \infty ))\).

Remark 1.1

It is suggested in [27] that for very large values of \(\beta \), the support of \(\mathcal {B}_\beta \) is a subdomain of \(|k-k'|<2k^2/mc^2\) for small values of k and \(k'\). That would be a stronger truncation than in (ii).

Then we multiply \(\mathcal {B} _{ \beta }(k, k')\) by \(\Phi (k, k')\), where (a numerical sketch of one admissible choice is given after the list):

  1. (1)

    \(\Phi (k,k')=\Phi (k',k)\) for all \(k>0\), \(k'>0\),

  2. (2)

    \(\Phi \in C([0,\infty )^2\setminus \{(0,0)\})\),

  3. (3)

    \({\text {supp}}(\Phi )=D\), where \((k,k')\in D\) if and only if (1.21) and (1.22) hold for \(\alpha _1=\alpha _2=1/2\), and some constants \(\theta \in (0,1)\), \(\delta _*>0\) and \(\rho _*=\rho _*(\theta ,\delta _*)\).

  4. (4)

    \(\Phi (k,k')=1\;\forall (k,k')\in D_1\subset D\), where \((k,k')\in D_1\) if and only if,

    $$\begin{aligned} \begin{array}{lll} |k-k'|\le \rho _1\sqrt{kk'(k+k')}&{}\quad \hbox {if}&{}(k,k')\in [0,\delta _*]^2,\\ \theta _1 k\le k'\le \theta _1 ^{-1} k&{}\quad \hbox {if}&{}(k, k')\in [0,\infty )^2\setminus [0,\delta _*]^2, \end{array} \end{aligned}$$

    for some \(\theta _1\in (\theta ,1)\) and \(\rho _1=\rho _1(\theta _1,\delta _*)>0\). (Cf. also (2.5)–(2.11)).
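To fix ideas, the geometry of D and \(D_1\) (and one admissible, if only piecewise matched, choice of \(\Phi \)) can be encoded as follows. This is a sketch of ours: the values of \(\theta \), \(\theta _1\), \(\delta _*\) are illustrative, and \(\rho _*\), \(\rho _1\) are fixed so that the two descriptions of each region match at \(k=\delta _*\), one choice consistent with Remark 2.1 below.

```python
import numpy as np

# Toy encoding of the truncation geometry (1.21)-(1.22) with alpha_1 = alpha_2 = 1/2.
theta, theta1, delta_star = 0.5, 0.7, 1.0
rho_star = (1 - theta)  / np.sqrt(theta  * (1 + theta)  * delta_star)
rho_1    = (1 - theta1) / np.sqrt(theta1 * (1 + theta1) * delta_star)

def in_D(k, kp, rho=rho_star, th=theta):
    """(k, k') belongs to the support D of Phi, i.e. (1.21)-(1.22) hold."""
    if max(k, kp) <= delta_star:
        return abs(k - kp) <= rho * np.sqrt(k * kp * (k + kp))
    return th * k <= kp <= k / th

def in_D1(k, kp):
    """(k, k') belongs to the smaller region D_1, where Phi = 1."""
    return in_D(k, kp, rho=rho_1, th=theta1)

def Phi(k, kp):
    """A symmetric cut off equal to 1 on D_1 and 0 outside D.

    For brevity the interpolation between D_1 and the boundary of D is done
    separately in the two regions, so full continuity across k = delta_star
    (property (2)) would require an additional matching, omitted here."""
    if not in_D(k, kp):
        return 0.0
    if in_D1(k, kp):
        return 1.0
    if max(k, kp) <= delta_star:
        r = abs(k - kp) / np.sqrt(k * kp * (k + kp))
        return (rho_star - r) / (rho_star - rho_1)
    q = min(k, kp) / max(k, kp)
    return (q - theta) / (theta1 - theta)
```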

Then, for all \(\varphi \in C^1([0, \infty ))\),

$$\begin{aligned} \big (e^{-\beta k}-e^{-\beta k'}\big )\big (\varphi (k) -\varphi (k')\big )\frac{\mathcal {B} _{ \beta }(k, k')}{kk'}\Phi (k , k' )\in L^\infty _{ loc }([0, \infty )\times [0, \infty )), \end{aligned}$$

and if \(\varphi '(0)=0\),

$$\begin{aligned} \big (e^{-\beta k}-e^{-\beta k'}\big )\big (\varphi (k) -\varphi (k')\big )\frac{\mathcal {B} _{ \beta }(k, k')}{kk'}\Phi (k , k' )\in C([0, \infty )\times [0, \infty )) \end{aligned}$$

(cf. Lemmas A.1, A.3 and (2.29)).

In the first part of this work, we then consider the problem

$$\begin{aligned} \frac{\partial v}{\partial t}(t,k)=\int _{[0,\infty )}q_{\beta }(v,v') \frac{\mathcal {B}_{\beta }(k,k')\Phi (k,k')}{kk'}dk'. \end{aligned}$$
(1.23)

We need the following notations:

\(C^1_b([0,\infty ))\) is the space of bounded continuous functions, with continuous bounded derivative, on \([0,\infty )\).

The space of nonnegative bounded Radon measures is denoted \(\mathscr {M}_+([0,\infty ))\), and

$$\begin{aligned} \mathscr {M}^{\rho }_+([0,\infty ))&=\{v\in \mathscr {M}_+([0,\infty )): M_{\rho }(v)<\infty \},\,\,\,\forall \rho \in \mathbb {R},\nonumber \\ M_{\rho }(v)&=\int _{[0,\infty )}k^{\rho }v(k)dk\quad (\text {moment of order } \rho ), \end{aligned}$$
(1.24)
$$\begin{aligned} X_{\rho }(v)&=\int _{[0,\infty )}e^{\rho k}v(k)dk. \end{aligned}$$
(1.25)

We use the notation \(\int v(k)dk\) instead of \(\int dv(k)\), even if the measure v is not absolutely continuous with respect to the Lebesgue measure.

Unless stated otherwise, the space \(\mathscr {M}_+([0,\infty ))\) is considered with the narrow topology. We recall that the narrow topology is generated by the metric \(d_0(\mu ,\nu )=\Vert \mu -\nu \Vert _0\), where (cf. [4], Theorem 8.3.2),

$$\begin{aligned} \Vert \mu \Vert _0= & {} \sup \bigg \{\int _{[0,\infty )}\varphi d\mu :\varphi \in \text {Lip}_1([0,\infty )),\;\Vert \varphi \Vert _{\infty }\le 1\bigg \}, \end{aligned}$$
(1.26)
$$\begin{aligned} \text {Lip}_1([0,\infty ))= & {} \{\varphi :[0,\infty )\rightarrow \mathbb {R}:|\varphi (x)-\varphi (y)|\le |x-y|\}. \end{aligned}$$
(1.27)
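For purely atomic measures the distance \(d_0\) can be computed exactly by a small linear program: the supremum in (1.26) is attained by optimising the values \(\varphi (x_i)\) at the support points, subject to \(|\varphi (x_i)|\le 1\) and \(|\varphi (x_i)-\varphi (x_j)|\le |x_i-x_j|\), since any such assignment extends to an admissible test function on \([0,\infty )\). The sketch below (with illustrative example measures) uses scipy.

```python
import numpy as np
from scipy.optimize import linprog

def d0(points, mu_weights, nu_weights):
    """Bounded-Lipschitz distance (1.26)-(1.27) between two discrete measures
    mu = sum_i mu_weights[i] delta_{points[i]} and nu likewise.

    Maximises sum_i phi_i (mu_i - nu_i) over phi with |phi_i| <= 1 and
    |phi_i - phi_j| <= |x_i - x_j|, a finite linear program."""
    x = np.asarray(points, dtype=float)
    c = np.asarray(mu_weights, dtype=float) - np.asarray(nu_weights, dtype=float)
    n = len(x)
    rows, rhs = [], []
    for i in range(n):
        for j in range(n):
            if i != j:
                row = np.zeros(n)
                row[i], row[j] = 1.0, -1.0        # phi_i - phi_j <= |x_i - x_j|
                rows.append(row)
                rhs.append(abs(x[i] - x[j]))
    res = linprog(-c, A_ub=np.array(rows), b_ub=np.array(rhs),
                  bounds=[(-1.0, 1.0)] * n, method="highs")
    return -res.fun

print(d0([0.0, 1.0], [1.0, 0.0], [0.0, 1.0]))    # distant Dirac masses: ~ 1.0
print(d0([0.0, 0.25], [1.0, 0.0], [0.0, 1.0]))   # nearby Dirac masses: ~ 0.25
```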

The following is an existence result for the problem (1.23).

Theorem 1.2

Given any \(v_0\in \mathscr {M}_+([0,\infty ))\) satisfying

$$\begin{aligned} X_{\eta }(v_0)<\infty , \end{aligned}$$
(1.28)

for some \(\eta \in \left( \frac{1-\theta }{2},\frac{1}{2}\right) \), there exists a weak solution \(v\in C([0,\infty ),\mathscr {M}_+([0,\infty )))\) of (1.23), i.e., a map v that satisfies the following (i)–(ii):

  1. (i)

    For all \(\varphi \in C_b([0,\infty ))\),

    $$\begin{aligned}&\displaystyle \int _{ [0, \infty ) }v(\cdot , k)\varphi (k)dk \in C( [0, \infty );\mathbb {R}), \end{aligned}$$
    (1.29)
    $$\begin{aligned}&\displaystyle \int _{ [0, \infty ) }v(0 , k)\varphi (k)dk=\int _{ [0, \infty ) }v_0(k)\varphi (k)dk, \end{aligned}$$
    (1.30)
  2. (ii)

    For all \(\varphi \in C^1_b([0,\infty ))\) with \(\varphi '(0)=0\),

    $$\begin{aligned} \int _{ [0, \infty ) }v(\cdot , k)\varphi (k)dk \in W^{1, \infty }_{loc} ([0, \infty ); \mathbb {R}), \end{aligned}$$
    (1.31)

    and for almost every \(t>0\),

    $$\begin{aligned} \frac{d}{dt}\int \limits _{ [0, \infty ) }v(t , k)\varphi (k)dk=\frac{1}{2}\iint \limits _{[0, \infty )^2}\frac{\Phi \mathcal {B}_\beta }{kk'} q_{\beta }(v,v')(\varphi -\varphi ')dkdk'. \end{aligned}$$
    (1.32)

The measure v(t) also satisfies, for all \(t\ge 0\),

$$\begin{aligned} M_0(v(t))= & {} M_0(v_0) \end{aligned}$$
(1.33)
$$\begin{aligned} X_{\eta }(v(t))\le & {} e^{C_{\eta }t}X_{\eta }(v_0), \end{aligned}$$
(1.34)

where

$$\begin{aligned} C_{\eta }=\frac{C_*}{2\theta ^2}\frac{(1-\theta )}{(1+\theta )} \frac{\eta }{\left( \frac{1}{2}-\eta \right) },\quad C_*>0. \end{aligned}$$
(1.35)

Remark 1.3

Theorem 1.2 does not preclude the formation, in finite time, of a Dirac measure at the origin in the weak solutions of (1.23) with integrable initial data. Such a possibility was actually considered for the solutions of the Kompaneets equation (cf. [27, 28, 30] and others). It was proved in [12, 13] that, for large sets of initial data, this does not happen, neither in the Kompaneets equation nor in Eq. (1.15) with a very simplified kernel. But it is not known yet whether it may happen for Eq. (1.15) with the kernel \(\Phi (k, k')\mathcal {B}_\beta (k, k')\).

Given a weak solution \(\{u(t)\} _{ t>0 }\) of (1.23) whose Lebesgue decomposition is \(u(t)=g(t)+G(t)\), with \(g(t)\in L^1([0, \infty ))\), the natural physical entropy is

$$\begin{aligned} H(u(t))&=\int _{ (0, \infty ) }h(x, g(t, x))dx-\int _{ (0, \infty ) } xG(t, x)dx, \end{aligned}$$
(1.36)
$$\begin{aligned} h(x, s)&=(x^2+s)\log (x^2+s)-s\log s-x^2\log x^2-sx. \end{aligned}$$
(1.37)

It was proved in [13] that the maximum of H over all the non negative measures of total mass M is achieved at, and only at, \(U_M=k^2F_M\) given in (1.7). But the corresponding dissipation of entropy used in [13] is not defined here, due to the singularity of the kernel \(\frac{\Phi \mathcal {B}_\beta }{kk'} \) at the origin. The study of the long time behavior of the weak solutions obtained in Theorem 1.2 then seems to be more involved than in [13] (cf. also Sect. 5.3).

1.2 The Simplified Equation

In view of the exponential terms in (1.2), it is very natural to consider the scaled variable \(\beta k=x\), to scale the time variable as \(\beta ^3 t=\tau \), and the dependent variable as \(\beta ^2 k^2 f(t, k)=u(\tau , x)\), in order to leave the total number of particles unchanged (cf. Sect. B.1). When this is done, it appears that the linear term is formally of lower order for \(\beta>>1\):

$$\begin{aligned} \frac{\partial u}{\partial \tau }(\tau ,x)&=\int _0^\infty \frac{\widetilde{B}_{\beta }(x,y)}{xy}\big (e^{-x}-e^{-y}\big )u(\tau ,x)u(\tau ,y)dy\nonumber \\&\quad +\beta ^{-3}\int _0^\infty \frac{\widetilde{B}_{\beta }(x,y)}{xy}\big (u(\tau ,y)x^2e^{-x}-u(\tau ,x)y^2e^{-y}\big )dy, \end{aligned}$$
(1.38)

where \(\widetilde{B}_{\beta }(x, y)\) is the suitably scaled version of \( \mathcal {B}_{\beta }(k, k')\).

If only the quadratic term is kept in (1.23), the following equation is obtained:

$$\begin{aligned} \frac{\partial v}{\partial t}(t,k)=v(t, k)\int \limits _{[0, \infty )} v(t, k') \big (e^{- \beta k }-e^{-\beta k'}\big )\frac{\mathcal {B}_\beta (k, k')\Phi (k, k')}{kk'}dk'. \end{aligned}$$
(1.39)

Weak solutions \(u\in C([0,\infty ),\mathscr {M}_+([0,\infty )))\) to (1.39) are proved to exist for all initial data \(u_0\in \mathscr {M}_+([0,\infty ))\) satisfying (1.28) (cf. Theorem 5.1), with arguments similar to those used for the complete equation.

But Eq. (1.39) also has solutions \(v\in C([0, \infty ), L^1([0, \infty )))\) for initial data \(v_0\in L^1([0, \infty ))\) that are sufficiently flat around the origin. This “flatness” condition is then sufficient to prevent the finite time formation of a Dirac measure at the origin in the solutions of (1.39).

Theorem 1.4

For any nonnegative initial data \(v_0\in L^1([0, \infty ))\) such that:

$$\begin{aligned} \forall r>0,\quad \int _0^\infty v_0(k)\left( e^{\frac{r}{k^{3/2}}}+e^{\eta k}\right) dk<\infty , \end{aligned}$$
(1.40)

for some \(\eta >(1-\theta )/2\), there exists a nonnegative global weak solution \(v\in C([0, \infty ), L^1([0, \infty )))\) of (1.39) that also satisfies

$$\begin{aligned} v(t, k)=v_0(k)e^{\int _0^{t}\int _0^\infty \big (e^{- \beta k }-e^{-\beta k'}\big )\frac{\mathcal {B}_\beta (k, k')\Phi (k, k')}{kk'}\, v(s, k')\, dk'\, ds}, \end{aligned}$$
(1.41)

for all \(t>0\), and \( a. e.\, k>0\). Moreover, for all \(t>0\),

$$\begin{aligned} \Vert v(t)\Vert _1&= \Vert v_0\Vert _1, \end{aligned}$$
(1.42)
$$\begin{aligned} v(t, k)&\le v_0(k)e^{\frac{tC_0}{k^{3/2}}},\,\forall t>0, \,\, and\,\,\,\,a. e.\, k>0, \end{aligned}$$
(1.43)

where \(C_0=\frac{\rho _* C_*}{\sqrt{\theta (1+\theta )}}X_{\eta }(v_0)\).

Remark 1.5

It follows from (1.43) that for the solution v obtained in Theorem 1.4, v(t) satisfies (1.40) for almost every \(t>0\). That property is then propagated globally in time.

For a solution v to Eq. (1.39), the moment \(M_{\rho }(v(t))\) defined in (1.24) is proved to be a Lyapunov function on \([0,\infty )\) for all \(\rho \ge 1\) (cf. Lemma 5.10). With some abuse of language, we sometimes refer to \(M_{\rho }(v)\) as an entropy functional for Eq. (1.39). It is possible to characterize the nonnegative measures that minimize \(M_{\rho }(v)\) for \(\rho >1\), or satisfy \(D_\rho (v)=0\), where

$$\begin{aligned} D_\rho (v)=&\iint _{ (0, \infty )^2 }\frac{\Phi \mathcal {B}_\beta }{kk'} \big (e^{-\beta k}-e^{-\beta k'} \big )(k^\rho -k'^\rho )v(k)v(k')dkdk' \end{aligned}$$
(1.44)

is the corresponding entropy dissipation functional. This question is usually solved at fixed mass M and energy E. Because of the truncated kernel \(\Phi \mathcal {B}_\beta \), it is also necessary to introduce the following property on the support of the measure v, in connection with the support of the kernel \(\Phi \mathcal {B}_\beta \).
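For a purely atomic measure \(v=\sum _i m_i\delta _{x_i}\), the functional (1.44) reduces to a double sum, which makes its sign and the role of the truncation transparent. In the sketch below, a toy symmetric kernel stands in for \(\Phi \mathcal {B}_\beta (kk')^{-1}\) (an assumption made only for illustration).

```python
import numpy as np

beta, p = 1.0, 2.0     # p plays the role of the moment order rho >= 1

def toy_kernel(x, y, theta=0.5):
    """Toy stand-in for Phi*B_beta/(k k'): symmetric, nonnegative, and vanishing
    when x and y do not 'see' each other (here simply when min/max < theta)."""
    if min(x, y) < theta * max(x, y):
        return 0.0
    return 1.0 / (x * y)

def D(xs, ms):
    """Entropy dissipation (1.44) for the atomic measure v = sum_i ms[i] delta_{xs[i]}."""
    total = 0.0
    for xi, mi in zip(xs, ms):
        for xj, mj in zip(xs, ms):
            total += (toy_kernel(xi, xj)
                      * (np.exp(-beta * xi) - np.exp(-beta * xj))
                      * (xi**p - xj**p) * mi * mj)
    return total

# (e^{-beta x} - e^{-beta y}) and (x^p - y^p) always have opposite signs, so D <= 0,
# and D = 0 exactly when the atoms carrying mass do not interact through the kernel:
print(D([1.0, 1.5], [1.0, 1.0]))   # interacting atoms: strictly negative
print(D([1.0, 3.0], [1.0, 1.0]))   # kernel vanishes between the atoms: zero
```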

Given a measure \(v\in \mathscr {M}_+([0,\infty ))\), we denote by \(\{A_n(v)\} _{ n\in \mathbb {N}}\) the at most countable collection of disjoint closed subsets of the support of v such that,

$$\begin{aligned}&(k, k')\in A_n\times A_n\hbox { for some} \,n\in \mathbb {N},\hbox { if and only if},\,\Phi (k, k')\not =0\,\,\,\hbox {or} \nonumber \\&\quad \exists \{k_n\} _{ n\in \mathbb {N}}\subset [k, k'); \,k_1=k, \lim _{ n\rightarrow \infty } k_n=k',\Phi (k_n, k _{ n+1 })\not =0\;\forall n\in \mathbb {N}. \end{aligned}$$
(1.45)

(cf. Sect. 5.3 for a precise definition of \(A_n(v)\)).

Let us define now, for any countable collection \(\mathcal {C}=\{C_n, M_n\} _{ n\in \mathbb {N}}\) of disjoint, closed subsets \(C_n\subset [0, \infty )\) enjoying the property (1.45), and positive real numbers \(M_n\), the following family of non negative measures,

$$\begin{aligned} \mathcal {F} _{ \mathcal {C},\alpha }=\bigg \{ v\in \mathscr {M}^\alpha _+([0,\infty )):C_n=A_n(v),\; M_n=\int _{ A_n(v) }v(k)dk \bigg \}. \end{aligned}$$

Theorem 1.6

For any \(\mathcal {C}\) and \(\alpha >1\) as above, the following statements are equivalent:

  1. (i)

    \(v\in \mathcal {F} _{ \mathcal {C},\alpha }\) and \(D_{\alpha }(v)=0\).

  2. (ii)

    \(M_{\alpha }(v)=\min \{M_{\alpha }(v):v\in \mathcal {F} _{ \mathcal {C},\alpha }\}\).

  3. (iii)

    \(v=\sum _{n=0}^{\infty } M_n\delta _{k_n}\), where \(k_n=\min \{k\in A_n\}\).

Remark 1.7

For any sequence \(\{x_n\} _{ n\in \mathbb {N}}\) such that \(x_n>0\), \(x_n\rightarrow 0\) as \(n\rightarrow \infty \), and \(\Phi (x_n, x _m)=0\) for all \(n\not =m\), the measure

$$\begin{aligned} u=\sum _{ n=0 }^\infty \alpha _n\delta _{x_n} \end{aligned}$$

satisfies the conditions (i)–(ii) in Theorem 1.6. Although \(0\in {\text {supp}}\,u\), there is no Dirac measure at the origin.

The long time behavior of the weak solutions of (1.39) is described in the following Theorem,

Theorem 1.8

Let v be a weak solution of (1.39) constructed in Theorem 5.1 for an initial data \(v_0\in \mathscr {M}_+([0,\infty ))\) satisfying \(X_{\eta }(v_0)<\infty \) for some \(\eta \ge (1-\theta )/2\).

Then, as \(t\rightarrow \infty \), v(t) converges in \(\mathscr {M}_+([0,\infty ))\) (with the narrow topology) to the measure

$$\begin{aligned} \mu =\sum _{i=0}^{\infty }M_i'\delta _{k_i'}, \end{aligned}$$
(1.46)

where \(M_i'\ge 0\), \(k_i'\ge 0\) satisfy the following properties:

  1. (1)

    \(k_i'\in {\text {supp}}(v_0)\) for all \(i\in \mathbb {N}\),

  2. (2)

    \(\Phi (k_i',k_j')=0\) for all \(i\ne j\),

  3. (3)

    If we define \(\mathcal {J}_n=\{i\in \mathbb {N}:k_i'\in A_n(v_0),\,M_i'>0\}\) for all \(n\in \mathbb {N}\),

    $$\begin{aligned} \sum _{i\in \mathcal {J}_n}M_i'=M_n, \end{aligned}$$
    (1.47)
  4. (4)

    For all \(n\in \mathbb {N}\), if \(k_n=\min \{k\in A_n(v_0)\}>0\), then there exists \(k_i'\) such that \(k_i'=k_n\).

Remark 1.9

If in Point 4 of Theorem 1.8, \(k_n=\min \{k\in A_n(v_0)\}=0\) for some \(n\in \mathbb {N}\), but \(v_0\) has no Dirac measure at \(k=0\), we do not know if \(k'_i=0\) for some \(i\in \mathbb {N}\), even if the origin belongs to the support of the limit measure \(\mu \) (cf. Remark 1.7 for example).

The measure \(\mu \) is of course determined by the initial data \(v_0\), but its complete description (i.e. the values of \(k'_i\) and \(M'_i\)) is not known; only the locations \(k'_i\) of some of the Dirac masses are. For example, it is possible to have \(k' _{i }= k_n=\min \{k\in A_n(v_0)\}\) and \(k' _{i }< k' _{j }\in A_n(v_0)\) for some n, i, j in \(\mathbb {N}\), with \(k'_{ i }\) and \(k' _{ j}\) not seeing each other, i.e., \(\Phi (k'_{i},k'_{j})=0\) (cf. Example 1, Sect. 5.4). The location of the Dirac measure at \(k' _{i}=k_n\) is just given by the support of the initial data, but the appearance of a Dirac measure at \(k' _{ j}\) is more difficult to determine.

The long time behaviour proved in Theorem 1.8 for the solutions of the simplified Eq. (1.39) cannot, of course, be expected to hold for the solutions of the complete Eq. (1.23). But, in combination with Eq. (1.38) for \(\beta \) large, it could indicate that the solutions of the complete problem (1.23) also undergo the formation of large and concentrated peaks, which could remain for some long, although finite, time.

1.3 General Comment

The main results of the article are stated in this Introduction in terms of the original variables t, k, and \(v(t, k)=k^2f(t, k)\). However, in order to make some important aspects of the equation apparent, it is useful to introduce \(\tau \), x, and \(u(\tau , x)\), variables scaled with the parameter \(\beta \). This is a natural parameter since it is related to the inverse of the temperature of the gas of electrons. This scaling makes two features of the equation apparent for \(\beta>>1\), namely, the fact that \(\mathcal {B}_\beta \) is sharply peaked along the diagonal, and the different scaling properties of the quadratic and linear parts of the collision integral in (1.15) (cf. Sect. B.1 for details).

However, since the value of the parameter \(\beta \) remains fixed in all this work, it is taken equal to one, without any loss of generality. Therefore, except in Sect. B, we have \(\tau \equiv t\), \(x\equiv k\) and \(u\equiv v\). In particular, for the sake of brevity, we do not rewrite the main results in terms of the variables x and u, although the proofs will be written in those terms.

The main results are actually proved for general kernels B satisfying some of the properties that the truncated kernel \(\Phi (k, k') \mathcal {B}_\beta (k, k')\) is proved to enjoy, and that are sufficient for our purpose.

2 Existence of Weak Solutions

In this Section we prove existence of weak solutions to the following problem:

$$\begin{aligned} \frac{\partial u}{\partial t}(t,x)&=Q(u,u)=\int _{[0,\infty )}b(x,y)q(u,u)dy, \end{aligned}$$
(2.1)
$$\begin{aligned} u(0)&=u_0\in \mathscr {M}_+([0, \infty )), \end{aligned}$$
(2.2)

where \(t>0\), \(x\ge 0\),

$$\begin{aligned}&\displaystyle q(u,u)=u(t,y)(x^2+u(t,x))e^{-x}-u(t,x)(y^2+u(t,y))e^{-y}, \end{aligned}$$
(2.3)
$$\begin{aligned}&\displaystyle b(x,y)=\frac{B(x,y)}{xy}, \end{aligned}$$
(2.4)

under the following assumptions on the kernel B:

$$\begin{aligned}&\text {(i) } B(x,y)\ge 0 \text { for all } (x,y)\in [0,\infty )^2, \end{aligned}$$
(2.5)
$$\begin{aligned}&\text {(ii) } B(x,y)=B(y,x) \text { for all } (x,y)\in [0,\infty )^2, \end{aligned}$$
(2.6)
$$\begin{aligned}&\text {(iii) } B\in C([0,\infty )^2\setminus \{(0,0)\}), \end{aligned}$$
(2.7)
$$\begin{aligned}&\text {(iv) There exist } \theta \in (0,1), \delta _*>0 \text { and } \rho _*=\rho _*(\theta ,\delta _*)>0 \text { such that}\nonumber \\&\qquad {\text {supp}}(B)=\Gamma =\Gamma _1\cup \Gamma _2, \end{aligned}$$
(2.8)
$$\begin{aligned}&\Gamma _1=\big \{(x,y)\in [0,\infty )^2\setminus [0,\delta _*]^2:\theta x\le y\le \theta ^{-1}x\big \}, \end{aligned}$$
(2.9)
$$\begin{aligned}&\Gamma _2=\big \{(x,y)\in [0,\delta _*]^2:|x-y|\le \rho _*\sqrt{xy(x+y)}\big \} \end{aligned}$$
(2.10)
$$\begin{aligned}&\text {(v) There exists a constant } C_*>0 \text { such that, for all } (x,y)\in \Gamma ,\nonumber \\&\qquad B(x,y)\le B\left( \frac{x+y}{2},\frac{x+y}{2}\right) \le \frac{C_*e^{\frac{x+y}{2}}}{x+y}. \end{aligned}$$
(2.11)

Remark 2.1

The region \(\Gamma \) in (2.8)–(2.10) is such that:

$$\begin{aligned} \Gamma =\left\{ (x, y)\in [0,\infty )^2: y\in \big (\gamma _1(x), \gamma _2(x)\big ) \right\} , \end{aligned}$$
(2.12)

where

$$\begin{aligned} \gamma _1(x)= & {} \left\{ \begin{array}{ll} \frac{2x+\rho _*^2x^2-\rho _* x^{3/2}\sqrt{\rho _*^2x+8}}{2(1-\rho _*^2x)}&{}\text {if }x\in [0,\delta _*]\\ \theta x&{}\text {if }x\in (\delta _*,\infty ), \end{array}\right. \end{aligned}$$
(2.13)
$$\begin{aligned} \gamma _2(x)= & {} \left\{ \begin{array}{ll} \frac{2x+\rho _*^2x^2+\rho _* x^{3/2}\sqrt{\rho _*^2x+8}}{2(1-\rho _*^2x)}&{}\text {if }x\in [0,\theta \delta _*]\\ \theta ^{-1}x&{}\text {if }x\in (\theta \delta _*,\infty ). \end{array}\right. \end{aligned}$$
(2.14)

In particular \(\theta x\le \gamma _1(x)\le x\le \gamma _2(x)\le \theta ^{-1}x\) for all \(x\ge 0\). The value of \(\rho _*=\rho _*(\theta ,\delta _*)\) is chosen so that \(\gamma _1\) and \(\gamma _2\) are continuous.
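The formulas (2.13), (2.14) and the inequalities above are straightforward to check numerically. In the sketch below \(\theta \) and \(\delta _*\) are illustrative, and \(\rho _*\) is taken as one explicit value for which \(\gamma _1\), \(\gamma _2\) are continuous (our choice, consistent with, but not stated in, the remark).

```python
import numpy as np

theta, delta_star = 0.5, 1.0
# one explicit rho_* for which gamma_1, gamma_2 below are continuous at
# delta_star and theta*delta_star respectively (our choice):
rho = (1.0 - theta) / np.sqrt(theta * (1.0 + theta) * delta_star)

def gamma1(x):
    if x <= delta_star:
        return (2*x + rho**2*x**2 - rho*x**1.5*np.sqrt(rho**2*x + 8.0)) / (2.0*(1.0 - rho**2*x))
    return theta * x

def gamma2(x):
    if x <= theta * delta_star:
        return (2*x + rho**2*x**2 + rho*x**1.5*np.sqrt(rho**2*x + 8.0)) / (2.0*(1.0 - rho**2*x))
    return x / theta

xs = np.linspace(1e-3, 3.0, 2000)
g1 = np.array([gamma1(x) for x in xs])
g2 = np.array([gamma2(x) for x in xs])
tol = 1e-12
# check theta*x <= gamma_1(x) <= x <= gamma_2(x) <= x/theta on the sample points:
print(np.all(theta*xs <= g1 + tol), np.all(g1 <= xs + tol),
      np.all(xs <= g2 + tol), np.all(g2 <= xs/theta + tol))
```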

Definition 2.2

We say that a map \(u:[0, \infty )\rightarrow \mathscr {M}_+([0, \infty ))\) is a weak solution of (2.1)–(2.2) if

$$\begin{aligned}&(i)\,\, \forall \varphi \in C_b([0, \infty )),\,\,\int _{ [0, \infty ) }u(\cdot , x)\varphi (x)dx \in C( [0, \infty );\mathbb {R}) \end{aligned}$$
(2.15)
$$\begin{aligned}&\qquad \hbox {and} \qquad \int _{ [0, \infty ) }u(0 , x)\varphi (x)dx=\int _{ [0, \infty ) }u_0(x)\varphi (x)dx, \end{aligned}$$
(2.16)
$$\begin{aligned}&(ii)\,\, \forall \varphi \in C^1_b([0, \infty )),\, \varphi '(0)=0, \end{aligned}$$
(2.17)
$$\begin{aligned}&\qquad \int _{ [0, \infty ) }u(\cdot , x)\varphi (x)dx \in W^{1,\infty }_{loc}([0, \infty ); \mathbb {R}), \end{aligned}$$
(2.18)
$$\begin{aligned}&\qquad \frac{d}{dt}\int \limits _{ [0, \infty )}u(t , x)\varphi (x)dx=\frac{1}{2}\iint \limits _{[0, \infty )^2}b(x, y)q(u,u)(\varphi (x)-\varphi (y))dydx. \end{aligned}$$
(2.19)

The existence of weak solutions for the problem (2.1), (2.2) was proved in [13] under conditions on the kernel b not fulfilled in our case. In order to use that result in [13], we first consider a regularised version of (2.1), with a truncated function \(b_n\in L^\infty ([0, \infty )\times [0, \infty ))\).

It is not possible to define the dissipation of entropy for the weak solutions of (2.1) as in [13], for the same reason as for the Eq. (1.23). However, it may be defined for the solutions \(u_n\) of the regularised version of (2.1), with the truncated kernel \(b_n\),

$$\begin{aligned} D^{(n)}(u_n)&=\frac{1}{2}D^{(n)}_1(g_n)+D^{(n)}_2(g_n, G_n)+\frac{1}{2}D^{(n)}_3(G_n), \end{aligned}$$
(2.20)
$$\begin{aligned} D^{(n)}_1(g_n)&=\iint _{ (0, \infty )^2 }b_n(x, y)j\left( (x^2+g_n)e^{-x}g_n', (y^2+g_n')e^{-y}g_n \right) dydx, \end{aligned}$$
(2.21)
$$\begin{aligned} D^{(n)}_2(g_n, G_n)&=\iint _{ (0, \infty )^2 }b_n(x, y)j\left( (x^2+g_n)e^{-x}, g_ne^{-y} \right) G_n(y)dydx, \end{aligned}$$
(2.22)
$$\begin{aligned} D^{(n)}_3(G_n)&=\iint _{ (0, \infty )^2 }b_n(x, y) j\left( e^{-x}, e^{-y} \right) G_n(y)G_n(x)dydx, \end{aligned}$$
(2.23)
$$\begin{aligned} j(a, b)&=(a-b)(\ln a -\ln b),\,\,\,\forall a>0, b>0, \end{aligned}$$
(2.24)

where \(u_n=g_n+G_n\) is the Lebesgue decomposition of \(u_n\).

2.1 Regularised Problem

For \(n\in \mathbb {N}\), let \(\phi _n\in C_c((0,\infty ))\) be such that \(0\le \phi _n(x)\le x^{-1}\) for all \(x\ge 0\), \({\text {supp}}(\phi _n)=[1/(n+1),n+1]\) and \(\phi _n(x)=x^{-1}\) for \(x\in [1/n,n]\), so that \(\lim _{n\rightarrow \infty }\phi _n(x)=x^{-1}\). Then we define

$$\begin{aligned} b_n(x,y)=B(x,y)\phi _n(x)\phi _n(y), \end{aligned}$$
(2.25)

and consider the problem

$$\begin{aligned}&\displaystyle \frac{\partial u_n}{\partial t}(t,x)=Q_n(u_n,u_n)=\int _{[0,\infty )}b_n(x,y)q(u_n,u_n)dy, \end{aligned}$$
(2.26)
$$\begin{aligned}&\displaystyle u_n(0)=u_0\in \mathscr {M}_+([0, \infty )). \end{aligned}$$
(2.27)

If we denote

$$\begin{aligned}&\displaystyle K_{\varphi }(u,u)=\frac{1}{2}\iint _{[0,\infty )^2}k_{\varphi }(x,y)u(t,x)u(t,y)dydx, \end{aligned}$$
(2.28)
$$\begin{aligned}&\displaystyle k_{\varphi }(x,y)=b(x,y)(e^{-x}-e^{-y})(\varphi (x)-\varphi (y)), \end{aligned}$$
(2.29)
$$\begin{aligned}&\displaystyle L_{\varphi }(u)=\frac{1}{2}\int _{[0,\infty )}\mathcal {L}_{\varphi }(x)u(t,x)dx, \end{aligned}$$
(2.30)
$$\begin{aligned}&\displaystyle \mathcal {L}_{\varphi }(x)=\int _0^{\infty }\ell _{\varphi }(x,y)dy, \end{aligned}$$
(2.31)
$$\begin{aligned}&\displaystyle \ell _{\varphi }(x,y)=b(x,y)y^2e^{-y}(\varphi (x)-\varphi (y)), \end{aligned}$$
(2.32)

then (2.19) reads

$$\begin{aligned} \frac{d}{dt}\int _{[0,\infty )}\varphi (x)u(t,x)=K_{\varphi }(u,u)-L_{\varphi }(u), \end{aligned}$$
(2.33)

and the weak formulation of (2.26) reads

$$\begin{aligned} \frac{d}{dt}\int _{[0,\infty )}\varphi (x)u_n(t,x) =K_{\varphi ,n}(u_n,u_n)-L_{\varphi ,n}(u_n), \end{aligned}$$
(2.34)

where b is replaced by \(b_n\) in the formulas (2.28)–(2.32). Since \(b_n\in L^{\infty }([0,\infty )^2)\) for all \(n\in \mathbb {N}\), Theorem 3 in [13] may be applied (cf. Proposition 2.4). For any \(u\in \mathscr {M}_+([0,\infty ))\), we denote \(u=u_r+u_s\) the Lebesgue decomposition of u into an absolutely continuous measure with respect to the Lebesgue measure, \(u_r\), and a singular measure, \(u_s\).

Remark 2.3

By symmetry and Lemma A.3, for all \(\varphi \in C^1_b([0,\infty ))\),

$$\begin{aligned} K_{\varphi }(u,u)=\int _{[0,\infty )}\int _{[0,x)}k_{\varphi }(x,y)u(t,x)u(t,y)dydx. \end{aligned}$$

Proposition 2.4

For any \(n\in \mathbb {N}\) and any initial data \(u_0=u_{0,r}+u_{0,s}\in \mathscr {M}_+^1([0,\infty ))\), there exists a unique weak solution \(u_n=u_{n,r}+u_{n,s}\in C([0,\infty ),\mathscr {M}_+^1([0,\infty )))\) to (2.26), (2.27) that satisfies

$$\begin{aligned}&\displaystyle M_0(u_n(t))=M_0(u_0)\quad \forall t\ge 0, \end{aligned}$$
(2.35)
$$\begin{aligned}&\displaystyle {\text {supp}}(u_{n,s}(t))\subset {\text {supp}}(u_{0,s})\quad \forall t\ge 0, \end{aligned}$$
(2.36)

and for all \(\varphi \in C_c([0,\infty )\times [0,\infty ))\),

$$\begin{aligned}&\int _{[0,\infty )}\varphi (t,x)u_n(t,x)dx=\int _{[0,\infty )}\varphi (0,x)u_0(x)dx\nonumber \\&\quad +\int _0^t\int _{[0,\infty )}\varphi _t(s, x)u_n(s, x)dxds +\int _0^t\int _{[0,\infty )}Q_n(u_n,u_n)\varphi (s,x)dxds, \end{aligned}$$
(2.37)

and for all \(t_1\) and \(t_2\) with \(t_2\ge t_1\ge 0\),

$$\begin{aligned} \int _{ t_1 }^{t_2}D^{(n)}(u_n(t))dt=H(u_n(t_1))-H(u_n(t_2)). \end{aligned}$$
(2.38)

Moreover, if \(u_0\in L^1([0,\infty ))\) then \(u_n\in C([0,\infty ),L^1([0,\infty )))\).

Proof

Theorem 3 in [13]. \(\square \)

Remark 2.5

In Proposition 2.4, the space \(\mathscr {M}_+^1([0,\infty ))\) is endowed with the total variation norm.

Corollary 2.6

Let \(u_n\) be as in Proposition 2.4 for \(n\in \mathbb {N}\). Then (2.34) holds for all \(t>0\) and for all nonnegative \(\varphi \in C([0,\infty ))\) such that \(\int _{[0,\infty )}\varphi (x)u_0(x)dx<\infty \).

Proof

Given a nonnegative function \(\varphi \in C([0,\infty ))\) such that \(\int _{[0,\infty )}\varphi (x)u_0(x)dx<\infty \), let \(\{\varphi _k\}_{k\in \mathbb {N}}\subset C_c([0,\infty ))\) be such that \(\varphi _k(x)\rightarrow \varphi (x)\) as \(k\rightarrow \infty \) for all \(x\in [0,\infty )\), and \(\varphi _k\le \varphi _{k+1}\le \varphi \) for all \(k\in \mathbb {N}\). By (2.37) with test function \(\varphi _k\), and recalling that \(\phi _n\) is compactly supported, it is easy to deduce, using Fubini’s theorem, the symmetry of B, and the antisymmetry of \(q(u_n,u_n)\), that for all \(k\in \mathbb {N}\),

$$\begin{aligned} \int _{[0,\infty )}\varphi _k(x)u_n(t,x)dx&=\int _{[0,\infty )}\varphi _k(x)u_0(x)dx\nonumber \\&\quad +\int _0^t \big (K_{\varphi _k,n}(u_n,u_n)-L_{\varphi _k,n}(u_n)\big )ds. \end{aligned}$$
(2.39)

Using again that \(\phi _n\) is compactly supported, we can pass to the limit as \(k\rightarrow \infty \) in (2.39) by monotone and dominated convergence theorems to obtain (2.39) with \(\varphi \) instead of \(\varphi _k\).

Now, since \(u_n\in C([0,\infty ),\mathscr {M}_+^1([0,\infty )))\), where the topology on \(\mathscr {M}_+^1([0,\infty ))\) is the total variation norm, it follows that the maps

$$\begin{aligned} t\mapsto K_{\varphi ,n}(u_n(t),u_n(t)),\quad t\mapsto L_{\varphi ,n}(u_n(t)) \end{aligned}$$

are continuous for all \(n\in \mathbb {N}\) and all \(t>0\). Then (2.34) follows from (2.39), with \(\varphi \) instead of \(\varphi _k\), by the fundamental theorem of calculus. \(\square \)

2.2 The Limit \(n\rightarrow \infty \)

The goal now is to pass to the limit as \(n\rightarrow \infty \) in (2.34) and obtain a weak solution of (2.1)–(2.11). We start with the following uniform estimate.

Proposition 2.7

Let \(u_n\) and \(u_0\) be as in Proposition 2.4. If \(X_{\eta }(u_0)<\infty \) for some \(\eta \in (0,1/2)\), then for all \(t>0\) and all \(n\in \mathbb {N}\),

$$\begin{aligned} X_{\eta }(u_n(t))\le e^{C_{\eta }t}X_{\eta }(u_0) \end{aligned}$$
(2.40)

where \(C_{\eta }\) is defined in (1.35).

Proof

Let \(\eta \in (0,1/2)\) and take \(\varphi (x)=e^{\eta x}\) in (2.34), which is allowed by Corollary 2.6. Dropping all the negative terms in (2.34), and using (A.2) in Appendix A (for \(C^1\) functions instead of Lipschitz functions) together with \(\phi _n(x)\le x^{-1}\), we obtain

$$\begin{aligned}&\frac{d}{dt}\int _{[0,\infty )}e^{\eta x}u_n(t,x)dx\le \frac{1}{2}\int _{[0,\infty )}u_n(t,x)\int _x^{\infty }|\ell _{\varphi }(x,y)|dydx\\&\quad \le \frac{C_*(1-\theta )}{2\theta ^2(1+\theta )}\int _{[0,\infty )}u_n(t,x)e^{\frac{x}{2}}\int _x^{\infty }\varphi '(y)e^{-\frac{y}{2}}dydx\\&\quad \le C_{\eta }\int _{[0,\infty )}e^{\eta x}u_n(t,x)dx, \end{aligned}$$

whence (2.40) follows by Gronwall’s inequality. \(\square \)

We now prove the following pre-compactness result for \(\{u_n(t)\} _{ n\in \mathbb {N}}\), for any fixed \(t>0\).

Proposition 2.8

Let \(u_n\) and \(u_0\) be as in Proposition 2.4. Then, for every fixed \(t>0\), there exist a subsequence of \(\{u_n(t)\}_{n\in \mathbb {N}}\) (not relabelled) and \(U \in \mathscr {M}_+([0,\infty ))\) such that, for all \(\varphi \in C_0([0,\infty ))\),

$$\begin{aligned} \lim _{n\rightarrow \infty }\int _{[0,\infty )}\varphi (x)u_n(t,x)dx =\int _{[0,\infty )}\varphi (x)U(x)dx. \end{aligned}$$
(2.41)

Moreover, if \(u_0\) satisfies \(X_{\eta }(u_0)<\infty \) for some \(\eta \in (0,1/2)\), then

$$\begin{aligned} X_{\eta }(U)\le e^{C_{\eta }t}X_{\eta }(u_0), \end{aligned}$$
(2.42)

where \(C_{\eta }\) is defined in (1.35), and (2.41) holds for all \(\varphi \in C([0,\infty ))\) satisfying the growth condition

$$\begin{aligned} |\varphi (x)|\le c e^{\alpha x}\quad \forall x\in [0,\infty ),\;c>0,\;0\le \alpha <\eta . \end{aligned}$$
(2.43)

Proof

By (2.35), the sequence \(\{u_n(t)\}_{n\in \mathbb {N}}\) is uniformly bounded in \(\mathscr {M}_+([0,\infty ))\), and thus has a subsequence, still denoted \(u_n(t)\), that converges to some \(U\in \mathscr {M}_+([0,\infty ))\) in \(\sigma (\mathscr {M}([0,\infty )),C_0([0,\infty )))\) (the weak* topology), i.e., (2.41) holds for all \(\varphi \in C_0([0,\infty ))\). Moreover, if \(\zeta _j\in C_c([0,\infty ))\) is such that \(0\le \zeta _j\le 1\), \(\zeta _j(x)=1\) for all \(x\in [0,j]\) and \(\zeta _j(x)=0\) for all \(x\ge j+1\), so that \(\zeta _j\rightarrow 1\), then by weak* convergence and (2.35),

$$\begin{aligned} \int _{[0,\infty )}\zeta _j(x)U(x)dx&=\lim _{ n\rightarrow \infty }\int _{[0,\infty )}\zeta _j(x)u_n(t,x)dx \nonumber \\&\le \lim _{n\rightarrow \infty }\int _{[0,\infty )}u_n(t,x)dx=\int _{[0,\infty )}u_0(x)dx, \end{aligned}$$

and then, as \(j\rightarrow \infty \),

$$\begin{aligned} \int _{[0,\infty )}U(x)dx\le \int _{[0,\infty )}u_0(x)dx. \end{aligned}$$
(2.44)

Suppose now that \(u_0\) satisfies (1.28) for some \(\eta \in (0,1/2)\), and let \(\psi (x)=e^{\eta x}\) and \(\psi _j=\psi \zeta _j\), where \(\zeta _j\) is as before. Then, by weak* convergence and Proposition 2.7,

$$\begin{aligned} \int _{[0,\infty )}\psi _j(x)U(x)dx&=\lim _{n\rightarrow \infty }\int _{[0,\infty )}\psi _j(x)u_n(t,x)dx\\&\le \liminf _{n\rightarrow \infty }\int _{[0,\infty )}e^{\eta x}u_n(t,x)dx\le e^{C_{\eta }t}\int _{[0,\infty )}e^{\eta x}u_0(x)dx, \end{aligned}$$

and letting \(j\rightarrow \infty \), (2.42) holds.

Let now \(\varphi \in C([0,\infty ))\) satisfy (2.43), and define \(\varphi _j=\varphi \zeta _j\), with \(\zeta _j\) as before, so that \(\varphi _j\rightarrow \varphi \) pointwise as \(j\rightarrow \infty \). Then, for all \(j\in \mathbb {N}\),

$$\begin{aligned}&\bigg |\int _{[0,\infty )}\varphi (x)u_n(t,x)dx-\int _{[0,\infty )} \varphi (x)U(x)dx\bigg |\nonumber \\&\quad \le \bigg |\int _{[0,\infty )}\varphi _j(x)u_n(t,x)dx-\int _{[0,\infty )} \varphi _j(x)U(x)dx\bigg |\nonumber \\&\quad +\int _{[0,\infty )}|\varphi (x)-\varphi _j(x)|u_n(t,x)dx +\int _{[0,\infty )}|\varphi (x)-\varphi _j(x)|U(x)dx. \end{aligned}$$
(2.45)

By (2.41), the first term in the right hand side above converges to zero as \(n\rightarrow \infty \) for all \(j\in \mathbb {N}\). We just need to prove that the second and the third terms are arbitrarily small (for j large enough). Both terms are treated in the same way. We use that \(\varphi _j=\varphi \) on [0, j], (2.43), and Proposition 2.7 to obtain

$$\begin{aligned}&\int _{[0,\infty )}|\varphi (x)-\varphi _j(x)|u_n(t,x)dx =\int _{(j,\infty )}|\varphi (x)-\varphi _j(x)|u_n(t,x)dx\\&\quad \le 2\int _{(j,\infty )}|\varphi (x)|u_n(t,x)dx \le 2c\int _{(j,\infty )}e^{\alpha x}u_n(t,x)dx\\&\quad \le 2ce^{(\alpha -\eta ) j}\int _{(j,\infty )}e^{\eta x}u_n(t,x)dx\le 2ce^{(\alpha -\eta )j}e^{C_{\eta }t}\int _{[0,\infty )}e^{\eta x}u_0(x)dx, \end{aligned}$$

and by similar estimates, and (2.42),

$$\begin{aligned} \int _{[0,\infty )}&|\varphi (x)-\varphi _j(x)|U(x)dx\le 2ce^{(\alpha -\eta )j}e^{C_{\eta }t}\int _{[0,\infty )}e^{\eta x}u_0(x)dx. \end{aligned}$$

Since \(\alpha <\eta \), both terms converge to zero as \(j\rightarrow \infty \). \(\square \)

The equicontinuity of \(\{u_n\}_{n\in \mathbb {N}}\) in the narrow topology is proved in the following Proposition,

Proposition 2.9

Let \(u_n\) and \(u_0\) be as in Proposition 2.4, and suppose that \(X_{\eta }(u_0)<\infty \) for some \(\eta \in \left[ \frac{1-\theta }{2},\frac{1}{2}\right) \). Then, for all \(n\in \mathbb {N}\), all L-Lipschitz functions \(\varphi \) on \([0,\infty )\), all \(0<T<\infty \) and all t, \(t_0\in [0,T]\),

$$\begin{aligned} \bigg |\int _{[0,\infty )}\varphi (x)u_n(t,x)dx -\int _{[0,\infty )}\varphi (x)u_n(t_0,x)dx\bigg |\le C(u_0,T)|t-t_0|, \end{aligned}$$
(2.46)

where

$$\begin{aligned} C(u_0,T)= LC_*\left[ AM_0(u_0)+\frac{(1-\theta )}{2\theta ^2(1+\theta )}\right] e^{TC_{\frac{(1-\theta )}{2}}}X_{\frac{(1-\theta )}{2}}(u_0),\nonumber \end{aligned}$$

and A is given in (A.1). In particular, the sequence \(\{u_n\} _{ n\in \mathbb {N}}\) is equicontinuous from \([0,\infty )\) into \(\mathscr {M}_+([0,\infty ))\) with the narrow topology.

Proof

Let \(\varphi \) be L-Lipschitz, \(0<T<\infty \) and let t, \(t_0\in [0,T]\) with \(t_0\le t\). By (2.34)

$$\begin{aligned}&\bigg |\int _{[0,\infty )}\varphi (x)u_n(t,x)dx-\int _{[0,\infty )} \varphi (x)u_n(t_0,x)dx\bigg |\nonumber \\&\quad \le \int _{t_0}^t\Big (|K_{\varphi ,n}(u_n(s),u_n(s))|+|L_{\varphi ,n}(u_n(s))|\Big )ds. \end{aligned}$$
(2.47)

By (A.5), Remark A.5, (2.35), and Proposition 2.7,

$$\begin{aligned}&\int _{t_0}^t|K_{\varphi ,n}(u_n(s),u_n(s))|ds\le LC_*AM_0(u_0)\int _{t_0}^tX_{\frac{(1-\theta )}{2}}(u_n(s))ds\nonumber \\&\quad \le LC_*AM_0(u_0)e^{tC_{\frac{(1-\theta )}{2}}}X_{\frac{(1-\theta )}{2}}(u_0)(t-t_0), \end{aligned}$$
(2.48)

and by (A.2) (positive part only), Remark A.5 and Proposition 2.7,

$$\begin{aligned}&\int _{t_0}^t|L_{\varphi ,n}(u_n(s))|ds\le \frac{LC_*(1-\theta )}{2\theta ^2(1+\theta )} \int _{t_0}^tX_{\frac{(1-\theta )}{2}}(u_n(s))ds\nonumber \\&\quad \le \frac{LC_*(1-\theta )}{2\theta ^2(1+\theta )}e^{tC_{\frac{(1-\theta )}{2}}}X_{\frac{(1-\theta )}{2}}(u_0)(t-t_0). \end{aligned}$$
(2.49)

Using (2.48) and (2.49) in (2.47), the estimate (2.46) follows. For the equicontinuity, let \(\varepsilon >0\) and consider \(\delta <\varepsilon / C(u_0,T)\). By (1.26), (1.27), taking the supremum in (2.46) over all \(\varphi \in \text {Lip}_1([0,\infty ))\) with \(\Vert \varphi \Vert _{\infty }\le 1\), we deduce that for all t, \(t_0\in [0,T]\) with \(|t-t_0|<\delta \), we have \(d_0(u_n(t),u_n(t_0))<\varepsilon \) for all \(n\in \mathbb {N}\), that is, \(\{u_n\}_{n\in \mathbb {N}}\) is equicontinuous on [0, T]. \(\square \)

As a Corollary of Proposition 2.8 and Proposition 2.9, we obtain that a subsequence of \(\{u_n\}_{n\in \mathbb {N}}\) converges to a limit u in the space \(C([0,\infty ),\mathscr {M}_+([0,\infty )))\).

Corollary 2.10

Let \(u_n\) and \(u_0\) be as in Proposition 2.4, and suppose that \(X_{\eta }(u_0)<\infty \) for some \(\eta \in \left[ \frac{1-\theta }{2},\frac{1}{2}\right) \). Then there exist a subsequence of \(\{u_n\}_{n\in \mathbb {N}}\) (not relabelled) and \(u\in C([0,\infty ),\mathscr {M}_+([0,\infty )))\) such that

$$\begin{aligned} \lim _{n\rightarrow \infty }d_0(u_n(t),u(t))=0\quad \forall t\ge 0, \end{aligned}$$
(2.50)

and the convergence is uniform on the compact sets of \([0,\infty )\). Moreover,

$$\begin{aligned} X_{\eta }(u(t))\le e^{C_{\eta }t}X_{\eta }(u_0)\qquad \forall t\ge 0, \end{aligned}$$
(2.51)

where \(C_{\eta }\) is given in (1.35), and for all \(\varphi \in C([0,\infty ))\) satisfying (2.43),

$$\begin{aligned} \lim _{n\rightarrow \infty }\int _{[0,\infty )}\varphi (x)u_n(t,x)dx =\int _{[0,\infty )}\varphi (x)u(t,x)dx\quad \forall t\ge 0. \end{aligned}$$
(2.52)

Remark 2.11

(2.50) implies that, for every \(\varphi \in C_b([0,\infty ))\),

$$\begin{aligned} \lim _{ n\rightarrow \infty }\sup _{ t_1\le t\le t_2 }\bigg |\int _{ [0, \infty ) }u_n(t, x)\varphi (x)dx-\int _{ [0, \infty ) }u(t, x)\varphi (x)dx\bigg |=0. \end{aligned}$$
(2.53)

Proof

By Proposition 2.8, the sequence \(\{u_n\}_{n\in \mathbb {N}}\) is relatively compact on \((\mathscr {M}_+([0,\infty )),d_0)\), and by Proposition 2.9, the sequence \(\{u_n\}_{n\in \mathbb {N}}\) is equicontinuous from \([0,\infty )\) into \((\mathscr {M}_+([0,\infty )),d_0)\). Then, by the Arzelà–Ascoli theorem, \(u_n\) converges pointwise (for all \(t\ge 0\)) to a continuous function u, and the convergence is uniform on compact sets. Since the metric \(d_0\) generates the narrow topology, and the convergence in (2.50) is uniform on compact sets, (2.53) follows. The estimate (2.51) and the limit (2.52) are obtained as in Proposition 2.8, since the time t is fixed. \(\square \)

We prove now that the limit u of the sequence \(\{u_n\}_{n\in \mathbb {N}}\) is indeed a weak solution of (2.1)–(2.2).

Corollary 2.12

Given any \(v_0\in \mathscr {M}_+([0,\infty ))\) satisfying (1.28) for some \(\eta \in \left( \frac{1-\theta }{2},\frac{1}{2}\right) \), there exists a weak solution \(v\in C([0,\infty ),\mathscr {M}_+([0,\infty )))\) of (2.1)–(2.2) that also satisfies (1.33) and (1.34).

Proof

Let \(\{u_n\}_{n\in \mathbb {N}}\) be the sequence of solutions for the regularised problem (2.26), (2.27). By Corollary 2.10, a subsequence of \(\{u_n\}_{n\in \mathbb {N}}\) converges to a limit \(u\in C([0,\infty ),\mathscr {M}_+([0,\infty )))\). Since u is continuous from \([0,\infty )\) to (\(\mathscr {M}_+([0,\infty )),d_0)\) and \(d_0\) generates the narrow topology, then (2.15) holds. Next, we prove that u satisfies (2.16)–(2.19). To this end, let \(\varphi \in C^1_b([0,\infty ))\) with \(\varphi '(0)=0\). By (2.34), for all \(n\in \mathbb {N}\) and all \(t\ge 0\),

$$\begin{aligned} \int _{[0,\infty )}\varphi (x)u_n(t,x)dx&=\int _{[0,\infty )}\varphi (x)u_0(x)dx\nonumber \\&\quad +\int _0^t\Big (K_{\varphi ,n}(u_n(s),u_n(s))-L_{\varphi ,n}(u_n(s))\Big )ds, \end{aligned}$$
(2.54)

and our goal is now to pass to the limit as \(n\rightarrow \infty \) term by term. By (2.53), for all \(t\ge 0\),

$$\begin{aligned} \lim _{n\rightarrow \infty }\int _{[0,\infty )}\varphi (x)u_n(t,x)dx=\int _{[0,\infty )}\varphi (x)u(t,x)dx. \end{aligned}$$
(2.55)

Let us prove that for all \(t\ge 0\),

$$\begin{aligned} \lim _{n\rightarrow \infty }L_{\varphi ,n}(u_n(t))&=L_{\varphi }(u(t)), \end{aligned}$$
(2.56)
$$\begin{aligned} \lim _{n\rightarrow \infty }K_{\varphi ,n}(u_n(t),u_n(t))&=K_{\varphi }(u(t),u(t)). \end{aligned}$$
(2.57)

Starting with (2.56), we have

$$\begin{aligned} \big |L_{\varphi }(u)-L_{\varphi ,n}(u_n)\big |\le&\big |L_{\varphi }(u)-L_{\varphi }(u_n)\big |+|L_{\varphi }(u_n)-L_{\varphi ,n}(u_n)|. \end{aligned}$$
(2.58)

Since \(\mathcal {L}_{\varphi }\in C([0,\infty ))\) and \(\mathcal {L}_{\varphi }\) satisfies the growth condition (2.43) with \(\alpha =(1-\theta )/2\), (cf. Lemma A.1), then by (2.52) the first term in the right hand side of (2.58) converges to zero as \(n\rightarrow \infty \). For the second term we have, for any \(R>0\),

$$\begin{aligned} |L_{\varphi }(u_n)-L_{\varphi ,n}(u_n)|&\le \int _{[0,R]} |\mathcal {L}_{\varphi }(x)-\mathcal {L}_{\varphi ,n}(x)|u_n(t,x)dx\\&\quad +\int _{(R,\infty )} |\mathcal {L}_{\varphi }(x)-\mathcal {L}_{\varphi ,n}(x)|u_n(t,x)dx. \end{aligned}$$

On the one hand, using (2.35),

$$\begin{aligned} \int _{[0,R]} |\mathcal {L}_{\varphi }(x)-\mathcal {L}_{\varphi ,n}(x)|u_n(t,x)dx\le M_0(u_0)\Vert \mathcal {L}_{\varphi }-\mathcal {L}_{\varphi ,n}\Vert _{C([0,R])}, \end{aligned}$$

which converges to zero as \(n\rightarrow \infty \) by Lemma A.6. On the other hand, by (A.3),

$$\begin{aligned}&\int _{(R,\infty )} |\mathcal {L}_{\varphi }(x)-\mathcal {L}_{\varphi ,n}(x)|u_n(t,x)dx \le 2\int _{(R,\infty )}|\mathcal {L}_{\varphi }(x)|u_n(t,x)dx\nonumber \\&\quad \le C \int _{(R,\infty )}e^{\frac{(1-\theta )x}{2}}u_n(t,x)dx\le C e^{R\left( \frac{1-\theta }{2}-\eta \right) }\int _{(R,\infty )}e^{\eta x}u_n(t,x)dx, \end{aligned}$$
(2.59)

where \(C=\frac{LC_*(1-\theta )}{\theta ^2(1+\theta )}\), and by Proposition 2.7 we deduce that (2.59) converges to zero as \(R\rightarrow \infty \). That concludes the proof of (2.56).

In order to prove (2.57), we use

$$\begin{aligned} |K_{\varphi }(u,u)-K_{\varphi ,n}(u_n,u_n)|&\le |K_{\varphi }(u,u)-K_{\varphi }(u_n,u_n)|\nonumber \\&\quad +|K_{\varphi }(u_n,u_n)-K_{\varphi ,n}(u_n,u_n)|. \end{aligned}$$
(2.60)

Then, for the first term in the right hand side of (2.60), given \(R>0\), we use

$$\begin{aligned}&\iint _{[0,\infty )^2}k_{\varphi }(x,y)u(x)u(y)dydx\\&\quad \le \bigg (\iint _{[0,R]^2}+\iint _{[\gamma _1(R),\infty )^2} -\iint _{[\gamma _1(R),R]^2}\bigg )k_{\varphi }(x,y)u(x)u(y)dydx, \end{aligned}$$

to deduce

$$\begin{aligned}&|K_{\varphi }(u,u)-K_{\varphi }(u_n,u_n)|\le I_1+I_2+I_3,\nonumber \\&\quad I_1=\bigg |\iint _{[0,R]^2}k_{\varphi }u(x)u(y)dydx-\iint _{[0,R]^2} k_{\varphi }u_n(x)u_n(y)dydx\bigg |,\nonumber \\&\quad I_2=\bigg |\iint _{[\gamma _1(R),R]^2}k_{\varphi }u(x)u(y)dydx -\iint _{[\gamma _1(R),R]^2}k_{\varphi }u_n(x)u_n(y)dydx\bigg |,\nonumber \\&\quad I_3=\bigg |\iint _{[\gamma _1(R),\infty )^2}k_{\varphi }u(x)u(y)dydx -\iint _{[\gamma _1(R),\infty )^2}k_{\varphi }u_n(x)u_n(y)dydx\bigg |. \end{aligned}$$
(2.61)

Since \(k_{\varphi }\in C([0,\infty )^2)\) (cf. Lemma A.3), then by the Stone–Weierstrass theorem, \(k_{\varphi }(x,y)\) can be approximated on any compact subset \(X\subset [0,\infty )^2\) by finite sums of functions of the form \(\psi _1(x)\psi _2(y)\), with \(\psi _i\in C(X)\) for \(i=1,2\). By the Tietze extension theorem we may assume that \(\psi _i\in C([0,\infty ))\) for \(i=1,2\). Then, using that \(u_n\) converges narrowly to u, we deduce that for any \(\varepsilon >0\) and \(R>0\), there exists \(n_*\in \mathbb {N}\) such that for all \(n\ge n_*\)

$$\begin{aligned} I_1<\varepsilon ,\qquad I_2<\varepsilon . \end{aligned}$$
(2.62)

Then, for \(I_3\) we have the following.

$$\begin{aligned} I_3\le \iint _{[\gamma _1(R),\infty )^2}k_{\varphi }(x,y)\big (u(x)u(y)+u_n(x)u_n(y)\big )dydx, \end{aligned}$$
(2.63)

and by (A.1), calling \(C=\Vert \varphi '\Vert _{\infty } C_* A\),

$$\begin{aligned}&\iint _{[\gamma _1(R),\infty )^2}k_{\varphi }u(t,x)u(t,y)dydx \le C \iint _{[\gamma _1(R),\infty )^2}e^{\frac{|x-y|}{2}} u(t,x)u(t,y)dydx\nonumber \\&\quad \le 2C\int _{[\gamma _1(R),\infty )}e^{\frac{(1-\theta )x}{2}}u(t,x) \int _{[\gamma _1(R),x]}u(t,y)dydx\nonumber \\&\quad \le 2C X_{\eta }(u(t))\int _{[\gamma _1(R),\infty )}u(t,y)dy. \end{aligned}$$
(2.64)

We now use that, for all \(t>0\), all \(x>0\) and all \(R>0\) large enough (so that \(y\le e^{\eta y}\) for every \(y\ge \gamma _1(R)\)),

$$\begin{aligned} \int _{[\gamma _1(R),x]}u(t,y)dy&\le \frac{1}{\gamma _1(R)}\int _{[\gamma _1(R),\infty )}yu(t,y)dy\nonumber \\&\le \frac{1}{\gamma _1(R)}\int _{[\gamma _1(R),\infty )}e^{\eta y}u(t,y)dy \le \frac{e^{C_{\eta }t}X_{\eta }(u_0)}{\gamma _1(R)}, \end{aligned}$$
(2.65)

where we have used (2.51). Using (2.65) in (2.64), and (2.51) again,

$$\begin{aligned}&\iint _{[\gamma _1(R),\infty )^2}k_{\varphi }u(t,x)u(t,y)dydx\le \frac{2Ce^{2C_{\eta }t}(X_{\eta }(u_0))^2}{\gamma _1(R)}, \end{aligned}$$

and the same estimate holds when u is replaced by \(u_n\). We then obtain from (2.63) that, for any \(\varepsilon >0\), there exists \(R>0\) such that \(I_3<\varepsilon \) for all \(n\in \mathbb {N}\). Combining this with (2.62), we then deduce from (2.61) that for all \(t>0\)

$$\begin{aligned} \lim _{n\rightarrow \infty }|K_{\varphi }(u(t),u(t))-K_{\varphi }(u_n(t),u_n(t))|=0. \end{aligned}$$
(2.66)

Now, for the second term in the right hand side of (2.60), we have

$$\begin{aligned}&|K_{\varphi }(u_n,u_n)-K_{\varphi ,n}(u_n,u_n)|\\&\quad \le \int _{[0,\infty )}\int _{[0,x]}|k_{\varphi }(x,y)||1-xy\phi _n(x)\phi _n(y)|u_n(t,x)u_n(t,y)dydx, \end{aligned}$$

and we decompose the integral above as follows:

$$\begin{aligned} \int _{[0,\infty )}\int _{[0,x]}=\int _{\frac{1}{n}}^n\int _{\frac{1}{n}}^x+\int _n^{\infty }\int _0^x+\int _0^n\int _0^{\min \{x,\frac{1}{n}\}}. \end{aligned}$$
(2.67)

By definition \(\phi _n(x)=x^{-1}\) for all \(x\in [1/n,n]\), and then

$$\begin{aligned} \int _{\frac{1}{n}}^n\int _{\frac{1}{n}}^x|k_{\varphi }(x,y)||1-xy\phi _n(x)\phi _n(y)|u_n(t,x)u_n(t,y)dydx=0. \end{aligned}$$

Now, by (A.5) and (2.35),

$$\begin{aligned}&\int _n^{\infty }\int _0^x |k_{\varphi }(x,y)|u_n(t,x)u_n(t,y)dydx\\&\quad \le LC_*AM_0(u_0)\int _n^{\infty }e^{\frac{(1-\theta )x}{2}}u_n(t,x)dx\\&\quad \le LC_*AM_0(u_0) e^{n\left( \frac{1-\theta }{2}-\eta \right) }\int _n^{\infty }e^{\eta x}u_n(t,x)dx, \end{aligned}$$

and from Proposition 2.7 we deduce that it converges to zero as \(n\rightarrow \infty \). For the last term in the right hand side of (2.67), we argue as follows. Let us define \(x_n=\gamma _2(1/n)\) and \(D_n=[0,x_n]\times [0,1/n]\). Notice that \(x_n\rightarrow 0\) as \(n\rightarrow \infty \). Then by (2.35)

$$\begin{aligned}&\int _0^n\int _0^{\min \left\{ x,\frac{1}{n}\right\} }|k_{\varphi }(x,y)|| 1-xy\phi _n(x)\phi _n(y)|u_n(t,x)u_n(t,y)dydx\\&\quad \le \max _{(x,y)\in D_n}|k_{\varphi }(x,y)|M_0(u_0)^2. \end{aligned}$$

Since \(k_{\varphi }(0,0)=0\), \(k_{\varphi }\) is continuous (cf. Lemma A.3), and the sets \(D_n\) shrink to the single point (0, 0) as \(n\rightarrow \infty \), it follows that \(\max _{(x,y)\in D_n}|k_{\varphi }(x,y)|\rightarrow 0\) as \(n\rightarrow \infty \). That concludes the proof of (2.57).

From the limits (2.56), (2.57), the uniform bounds (2.48), (2.49), the dominated convergence theorem and (2.55), we obtain

$$\begin{aligned} \int _{[0,\infty )}\varphi (x)u(t,x)dx= & {} \int _{[0,\infty )}\varphi (x)u_0(x)dx\nonumber \\&+\int _0^t\Big (K_{\varphi }(u(s),u(s))+L_{\varphi }(u(s))\Big )ds. \end{aligned}$$
(2.68)

The identity (2.16) then follows from (2.68) for \(t=0\). It follows from Proposition 2.9, by passage to the limit as \(n\rightarrow \infty \), that for any \(\varphi \in C^1_b([0,\infty ))\) with \(\varphi '(0)=0\), the map \(t\mapsto \int _{ [0, \infty ) }u(t, x)\varphi (x)dx\) is locally Lipschitz on \([0, \infty )\), i.e., (2.17) holds, and then from (2.68), the weak formulation (2.19) follows. Taking \(\varphi =1\) in (2.19), we obtain (1.33). The estimate (1.34) is just (2.51). \(\square \)

Remark 2.13

Because of the exponential growth of the kernel B, an exponential moment is required on the initial data \(u_0\). This exponential moment is propagated to the solution for all \(t>0\). Using that exponential moment, it easily follows that for any \(\rho \ge 1\), if \(M _{ -\rho }(u_0)<\infty \), there exists a constant \(C_1>0\), and a non negative locally bounded function \(C_2(t)\) such that,

$$\begin{aligned} \frac{d}{dt} \int _{ [0, \infty )}u(t, x)x^{-\rho }dx \le C_1\bigg ( \int _{ [0, \infty )} u(t, x)x^{-\rho }dx\bigg )^2+C_2(t), \end{aligned}$$

from which it follows that \(M _{ -\rho }(u(t))<\infty \) for t in a bounded time interval whose length depends on \(M _{ -\rho }(u_0)\).
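This can be made precise by a comparison argument (a sketch, using only that \(C_2\) is locally bounded): writing \(y(t)=\int _{ [0, \infty )}u(t, x)x^{-\rho }dx\), \(\bar{C}_2=\sup _{0\le s\le T}C_2(s)\) and \(K=C_1+\bar{C}_2\) for a fixed \(T>0\), the differential inequality above gives

$$\begin{aligned} y'(t)\le C_1\,y(t)^2+\bar{C}_2\le K\big (1+y(t)\big )^2, \end{aligned}$$

so that \(z=1+y\) satisfies \(z'\le Kz^2\), and therefore

$$\begin{aligned} M _{ -\rho }(u(t))\le \frac{1+M _{ -\rho }(u_0)}{1-K\big (1+M _{ -\rho }(u_0)\big )t}-1\qquad \text {for}\quad 0\le t<\min \Big \{T,\;\frac{1}{K\big (1+M _{ -\rho }(u_0)\big )}\Big \}. \end{aligned}$$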

Proof of Theorem 1.2

Theorem 1.2 follows from Corollary 2.12 since the function \(b(k, k')=\frac{\Phi \mathcal {B}_\beta }{kk'}\) satisfies (2.5)–(2.11). \(\square \)

3 The Singular Part of the Solution

If u is a weak solution of (2.1)–(2.11) obtained in Theorem 1.2, for all \(t>0\), the measure u(t) may now be decomposed, by Lebesgue’s decomposition Theorem, as

$$\begin{aligned}&u(t)=g(t)+\alpha (t)\delta _0+G(t), \end{aligned}$$
(3.1)
$$\begin{aligned}&g(t)\in L^1([0,\infty )),\;\alpha \ge 0,\;G(t)\perp dx,\; G(t,\{0\})=0. \end{aligned}$$
(3.2)

In this Section we give some properties of u, \(\alpha \), and G.

We first notice that the weak solution u of (2.1)–(2.11) obtained in Theorem 1.2, satisfies the Eq. (2.1) in the sense of distributions. This follows from the properties of the support of the function B and Fubini’s Theorem. A similar argument may be used for slightly more general test functions \(\varphi \). To be more precise, let us define the set

$$\begin{aligned} \mathscr {C}=\Big \{\varphi \in C_b([0,\infty )):\sup _{x> 0}\frac{|\varphi (x)|}{x^{3/2}}<\infty \Big \}. \end{aligned}$$
(3.3)

Proposition 3.1

Let u be a solution of (2.1)–(2.11) obtained in Theorem 1.2. Then, for almost every \(t>0\), \(\partial u/\partial t \in \mathscr {D}'((0, \infty )) \), \(Q(u(t), u(t))\in \mathscr {D}'((0, \infty ))\), and

$$\begin{aligned} \forall \varphi \in C_c((0, \infty )),\quad \frac{d}{dt}\langle u(t), \varphi \rangle =\langle Q(u(t), u(t)), \varphi \rangle . \end{aligned}$$
(3.4)

Moreover,

$$\begin{aligned} \forall \varphi \in \mathscr {C},\quad \frac{d}{dt}\langle u(t), \varphi \rangle =\langle \mathcal {Q}(u(t), u(t)), \varphi \rangle , \end{aligned}$$
(3.5)

where

$$\begin{aligned} \mathcal {Q}(u(t),u(t))&=\int _{[0,\infty )}b(x,y) \Big [(e^{-x}-e^{-y})u(t,x)u(t,y)\nonumber \\&\quad -u(t,x)y^2e^{-y}+u(t,y)x^2e^{-x}\Big ]dy. \end{aligned}$$
(3.6)

Remark 3.2

Notice that in (3.6), the integral containing the factor \((e^{-x}-e^{-y})\) is convergent near the origin even for test functions \(\varphi \in \mathscr {C}\setminus C_c((0, \infty ))\). That is not true anymore if we consider each of the terms \(e^{-x}\) and \(e^{-y}\) separately.

Proof

By (2.8)–(2.11) and (1.33),

$$\begin{aligned} \int _{[0,\infty )}|\varphi (x)|\int _{[0,\infty )}\frac{B(x,y)}{xy}E(x,y)dydx<\infty , \end{aligned}$$

where E(x, y) is one of the functions in

$$\begin{aligned} \Big \{u(y)x^2e^{-x}, u(x)y^2e^{-y}, u(x)u(y)e^{-x}, u(x)u(y)e^{-y}\Big \} \end{aligned}$$

when \(\varphi \in C^1_c((0,\infty ))\), or, one of the functions in

$$\begin{aligned} \Big \{ u(x)u(y)|e^{-x}-e^{-y}|,u(x)y^2e^{-y},u(y)x^2e^{-x}\Big \} \end{aligned}$$

when \(\varphi \in \mathscr {C}\cap C^1_b([0,\infty )).\) Since u is a weak solution and satisfies (2.19), we deduce from Fubini’s Theorem the identity (3.4) for \(\varphi \in C^1_c((0, \infty ))\), and the identities (3.5)–(3.6) for \(\varphi \in \mathscr {C}\cap C^1_b([0,\infty ))\). By a density argument the Proposition follows. \(\square \)

We may prove now the following property of the singular measure G(t).

Theorem 3.3

Let u be a weak solution of (2.1)–(2.11) obtained in Theorem 1.2, and consider the decomposition (3.1), (3.2). If \(G(0)=0\) in \(\mathscr {D}'((0, \infty ))\), then \(G(t)=0\) in \(\mathscr {D}'((0, \infty ))\) for all \(t>0\).

Proof

By (3.4), for a.e. \(t>0\) and for all \(\varphi \in C_c((0,\infty ))\),

$$\begin{aligned} \frac{d}{dt}\int _{[0, \infty ) }u(t, x)\varphi (x)dx=\int _{[0,\infty )} \varphi (x) Q(u(t), u(t))(x)dx, \end{aligned}$$

and then, after integration in time:

$$\begin{aligned} \int _{[0, \infty ) } \left\{ u(t,x)-u(0,x)-\int _0^t Q(u(s), u(s))(x)ds\right\} \varphi (x)dx=0 \end{aligned}$$

for a.e. \(t>0\). If we plug now \(u=g+\alpha \delta _0+G\) in this formula and use that \(\varphi \in C_c((0,\infty ))\), we obtain for a.e. \(t>0\),

$$\begin{aligned}&\int _{[0, \infty ) } \Big \{g(t)+G(t)-g(0)-G(0)-R(t)-S(t)\Big \}\varphi (x)dx=0, \end{aligned}$$

where

$$\begin{aligned} R(t,x)&=\int _0^t\bigg (g(s,x)W(s,x)+x^2e^{-x}\int _{ [0, \infty )}b(x, y)u(s, y)dy\bigg )ds, \end{aligned}$$
(3.7)
$$\begin{aligned} S(t,x)&=\int _0^t G(s,x)W(s, x)ds,\end{aligned}$$
(3.8)
$$\begin{aligned} W(s,x)&=\int _{[0,\infty )}b(x,y)(e^{-x}-e^{-y})u(s,y)dy -\int _{[0,\infty )}b(x,y)y^2e^{-y}dy. \end{aligned}$$
(3.9)

It follows that, for a.e. \(t>0\),

$$\begin{aligned} g(t)+G(t)-g(0)-G(0)-R(t)-S(t)=0\,\,\hbox {in}\,\,\mathscr {D}'((0, \infty )). \end{aligned}$$
(3.10)

Let us prove now that \(R(t,\cdot )\in L^1_{loc}((0,\infty ))\) for all \(t\ge 0\). To this end, we first show that \(W(t,\cdot )\in L^{\infty }_{loc}((0,\infty ))\) for all \(t\ge 0\). Let then x be in a compact interval [a, c], with \(0<a<c<\infty \), and let \(t\ge 0\). Using that \({\text {supp}}(b)=\Gamma \subset \{(x,y)\in [0,\infty )^2:\theta x\le y\le \theta ^{-1}x\}\), the bound (2.11), and that \(x\in [a,c]\), it is easily proved that there exists a constant \(0<C<\infty \) that depends only on a, c, \(\theta \) and \(C_*\), such that for all \((x,y)\in \Gamma \) with \(x\in [a,c]\),

$$\begin{aligned} b(x,y)|e^{-x}-e^{-y}|\le C\qquad \text {and}\qquad b(x,y)\max \{x^2,y^2\}\le C. \end{aligned}$$
(3.11)

We then obtain from (3.9) that for all \(t\ge 0\), \(x\in [a,c]\),

$$\begin{aligned} |W(t,x)|\le C(M_0(u(t))+1), \end{aligned}$$

and by the conservation of mass (1.33),

$$\begin{aligned} \sup _{t\ge 0}\Vert W(t,\cdot )\Vert _{L^{\infty }([a,c])}\le C(M_0(u_0)+1). \end{aligned}$$
(3.12)

Using now (3.11), (3.12) and (1.33), we deduce from (3.7) that for all \(t\ge 0\), \(x\in [a,c]\),

$$\begin{aligned} |R(t,x)|\le C(M_0(u_0)+1)\int _0^t g(s,x)ds +CM_0(u_0) t. \end{aligned}$$
(3.13)

Then, since \(\sup _{t\ge 0}\Vert g(t,\cdot )\Vert _{L^1([a,c])}\le \sup _{t\ge 0} M_0(u(t))=M_0(u_0)\), it follows from (3.13) that

$$\begin{aligned} \Vert R(t,\cdot )\Vert _{L^1([a,c])}\le C(M_0(u_0)+1) M_0(u_0) t+(c-a)CM_0(u_0) t. \end{aligned}$$
(3.14)

On the other hand, using the Lebesgue decomposition Theorem, we have for all \(t\ge 0\):

$$\begin{aligned} S(t)=S_{ac}(t)+S_s(t),\qquad S_{ac}(t)\in L^1([0,\infty )),\quad S_s(t)\perp dx. \end{aligned}$$

Using this decomposition in (3.10), we deduce that for a.e. \(t>0\),

$$\begin{aligned} g(t)-g(0)-R(t)-S_{ac}(t)=-G(t)+G(0)+S_s(t)\quad \hbox {in}\,\,\mathscr {D}'((0, \infty )). \end{aligned}$$

Since the left hand side is absolutely continuous with respect to the Lebesgue measure and the right hand side is singular, we then obtain for a.e. \(t>0\),

$$\begin{aligned} g(t)&=g(0)+R(t)+S_{ac}(t)\qquad \hbox {in}\quad \mathscr {D}'((0, \infty )),\\ G(t)&=G(0)+S_s(t)\qquad \hbox {in}\quad \mathscr {D}'((0, \infty )). \end{aligned}$$

Then, for all \(\varphi \in C_c((0, \infty ))\) and a.e. \(t>0\),

$$\begin{aligned} \int _{[0,\infty )} \varphi (x)G(t, x)dx=\int _{[0,\infty )} \varphi (x) G(0, x)dx+\int _{[0,\infty )}\varphi (x)S_s(t, x)dx. \end{aligned}$$
(3.15)

We use now that for all nonnegative \(\varphi \in C_c((0,\infty ))\), \(t\ge 0\),

$$\begin{aligned} \int _{[0,\infty )}\varphi (x)S_s(t, x)dx\le \int _{[0,\infty )}\varphi (x)|S_s(t, x)|dx\le \int _{[0,\infty )}\varphi (x)|S(t, x)|dx, \end{aligned}$$
(3.16)

where \(|S_s(t)|\) and |S(t)| are the total variation measures of \(S_s(t)\) and S(t) respectively. Then, if \(\varphi \ge 0\) and \({\text {supp}}(\varphi ) \subset [a, c]\) for finite \(c>a>0\), we deduce from (3.8), (3.12) and (3.16) that

$$\begin{aligned} \int _{[0,\infty )}\varphi (x)S_s(t, x)dx\le & {} \int _0^t \Vert W(s,\cdot )\Vert _{L^{\infty }([a,c])} \int _{[0,\infty )}\varphi (x)G(s, x)dxds\\\le & {} C(M_0(u_0)+1)\int _0^t \int _{[0,\infty )}\varphi (x)G(s, x)dxds, \end{aligned}$$

and then, we obtain from (3.15) that, for a.e. \(t>0\),

$$\begin{aligned} \int _{[0,\infty )} \varphi (x) G(t, x)dx \le&\int _{[0,\infty )}\varphi (x)G(0, x)dx\\&+C(M_0(u_0)+1)\int _0^t \int _{[0,\infty )}\varphi (x)G(s, x)dxds. \end{aligned}$$

Then, by the standard Gronwall inequality, \(\int _{[0,\infty )}\varphi (x)G(s, x)dx\le e^{C(M_0(u_0)+1)s}\int _{[0,\infty )}\varphi (x)G(0, x)dx\) for a.e. \(s\in (0,t)\); using this estimate, together with \(e^{C(M_0(u_0)+1)s}\le e^{C(M_0(u_0)+1)t}\), in the time integral of the previous inequality, we obtain

$$\begin{aligned}&\int _{[0,\infty )} \varphi (x) G(t, x)dx\\&\quad \le \bigg (\int _{[0,\infty )} \varphi (x) G(0, x)dx\bigg )\bigg (1+C(M_0(u_0)+1) t e^{C(M_0(u_0)+1) t}\bigg ). \end{aligned}$$

We deduce that, if \(G(0)= 0\), then \(\int _{[0,\infty )}\varphi (x) G(t, x)dx=0\) for every \(\varphi \in C_c((0,\infty ))\) and then, \(G(t)=0\) in \(\mathscr {D}'((0, \infty ))\) for a.e. \(t>0\). \(\square \)

Remark 3.4

It would be interesting to know whether, when \(\alpha (0)=0\), one also has \(\alpha (t)=0\) for a.e. \(t>0\).

3.1 An Equation for the Mass at the Origin

We can obtain information on the mass at the origin \(u(t,\{0\})\) from the weak formulation (2.19), by choosing test functions as in the following Remark.

Remark 3.5

Let \(\varphi \in C^1_b([0,\infty ))\) be nonincreasing with \({\text {supp}}\,\varphi =[0,1]\), \(\varphi (0)=1\) and \(\varphi '(0)=0\). Then, let \(\varphi _{\varepsilon }(x)=\varphi (x/\varepsilon )\) for \(\varepsilon >0\). It follows from (1.33) and dominated convergence that for all \(t\ge 0\),

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}\int _{[0,\infty )}\varphi _{\varepsilon }(x)u(t,x)dx=u(t,\{0\}). \end{aligned}$$
(3.17)

Proposition 3.6

Let u be a weak solution of (2.1)–(2.11) obtained in Theorem 1.2, and denote \(\alpha (t)=u(t,\{0\})\). Then \(\alpha \) is right continuous, nondecreasing and a.e. differentiable on \([0,\infty )\). Moreover, for all t and \(t_0\) with \(t\ge t_0\ge 0\), and all \(\varphi _{\varepsilon }\) as in Remark 3.5, the following limit exists:

$$\begin{aligned}&\lim _{\varepsilon \rightarrow 0}\int _{t_0}^t K_{\varphi _{\varepsilon }}(u(s),u(s))ds, \end{aligned}$$
(3.18)

and

$$\begin{aligned} \alpha (t)=\alpha (t_0)+\lim _{\varepsilon \rightarrow 0}\int _{t_0}^tK_{\varphi _{\varepsilon }}(u(s),u(s))ds. \end{aligned}$$
(3.19)

Proof

Let us prove first (3.19). Using \(\varphi _{\varepsilon }\) in (2.33), we deduce by (3.17) that for all t and \(t_0\) with \(t\ge t_0\ge 0\), the following limit exists:

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}\int _{t_0}^t\big (K_{\varphi _{\varepsilon }}(u(s),u(s))-L_{\varphi _{\varepsilon }}(u(s))\big )ds, \end{aligned}$$

and moreover

$$\begin{aligned} \alpha (t)=\alpha (t_0)+\lim _{\varepsilon \rightarrow 0}\int _{t_0}^t\big (K_{\varphi _{\varepsilon }}(u(s),u(s))-L_{\varphi _{\varepsilon }}(u(s))\big )ds. \end{aligned}$$
(3.20)

We claim

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}\int _{t_0}^tL_{\varphi _{\varepsilon }}(u(s))ds=0. \end{aligned}$$
(3.21)

In order to prove (3.21), we first obtain an integrable majorant of \(L_{\varphi _{\varepsilon }}(u(s))\), and then we show

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}L_{\varphi _{\varepsilon }}(u(s))=0\quad \forall s\ge 0. \end{aligned}$$
(3.22)

Taking into account \(\Gamma \) and the support of \(\Psi _{\varepsilon }(x,y)=\varphi _{\varepsilon }(x)-\varphi _{\varepsilon }(y)\), and using \(\mathcal {L}_{\varphi _{\varepsilon }}(0)=0\) (cf. Lemma A.1), we have

$$\begin{aligned} |L_{\varphi _{\varepsilon }}(u(s))|&\le \int _{(0,\varepsilon )}u(s,x)\int _{\theta x}^{\theta ^{-1}x}|\ell _{\varphi _{\varepsilon }}(x,y)|dydx\nonumber \\&\quad +\int _{\left[ \varepsilon ,\frac{\varepsilon }{\theta }\right] }u(s,x)\int _{\theta x}^{\varepsilon }|\ell _{\varphi _{\varepsilon }}(x,y)|dydx. \end{aligned}$$
(3.23)

Since

$$\begin{aligned} \varphi _{\varepsilon }(x)-\varphi _{\varepsilon }(y) =\int _{\frac{y}{\varepsilon }}^{\frac{x}{\varepsilon }}\varphi '(z)dz\le \frac{\Vert \varphi '\Vert _{\infty }}{\varepsilon }|x-y|, \end{aligned}$$

then by (A.2)

$$\begin{aligned} |\ell _{\varphi _{\varepsilon }}(x,y)|\le \frac{c}{\varepsilon }e^{\frac{x-y}{2}},\quad c=\frac{C_*(1-\theta )}{\theta ^2(1+\theta )}\Vert \varphi '\Vert _{\infty }, \end{aligned}$$

and from (3.23) we deduce

$$\begin{aligned} |L_{\varphi _{\varepsilon }}(u(s))|&\le \frac{2c}{\varepsilon } \bigg [\int _{(0,\varepsilon )}u(s,x) \big (e^{\frac{(1-\theta )x}{2}}-e^{\frac{(1-\theta ^{-1})x}{2}}\big )dx\\&\quad +\int _{\left[ \varepsilon ,\frac{\varepsilon }{\theta }\right] }u(s,x) \big (e^{\frac{(1-\theta )x}{2}}-e^{\frac{x-\varepsilon }{2}}\big )dx\bigg ]. \end{aligned}$$

We now use \(e^{\frac{(1-\theta )x}{2}}-e^{\frac{(1-\theta ^{-1})x}{2}}\le \frac{(\theta ^{-1}-\theta )x}{2}e^{\frac{(1-\theta )x}{2}}\), \(e^{\frac{(1-\theta )x}{2}}-e^{\frac{x-\varepsilon }{2}}\le \frac{(\varepsilon -\theta x)}{2}e^{\frac{(1-\theta )x}{2}}\), and (2.51) to obtain, for all \(\varepsilon >0\),

$$\begin{aligned} |L_{\varphi _{\varepsilon }}(u(s))|&\le c(\theta ^{-1}-\theta )\int _{(0,\varepsilon )}u(s,x)e^{\frac{(1-\theta )x}{2}}dx\nonumber \\&\quad +c(1-\theta )\int _{\left[ \varepsilon ,\frac{\varepsilon }{\theta }\right] }u(s,x)e^{\frac{(1-\theta )x}{2}}dx\nonumber \\&\le c(\theta ^{-1}-\theta )\int _{\left( 0,\frac{\varepsilon }{\theta }\right] }e^{\frac{(1-\theta )x}{2}}u(s,x)dx\nonumber \\&\le c(\theta ^{-1}-\theta )e^{sC_{(1-\theta )/2}}\int _{[0,\infty )}e^{\frac{(1-\theta )x}{2}}u_0(x)dx. \end{aligned}$$
(3.24)

The right hand side above is independent of \(\varepsilon \), and it is clearly integrable on [0, t], for all \(t>0\).
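The two elementary inequalities used to derive (3.24) are instances of the mean value bound \(e^{a}-e^{b}\le (a-b)e^{a}\), valid for \(a\ge b\):

$$\begin{aligned} e^{a}-e^{b}=\int _b^a e^{z}dz\le (a-b)\,e^{a}, \end{aligned}$$

applied with \(a=\frac{(1-\theta )x}{2}\), \(b=\frac{(1-\theta ^{-1})x}{2}\), so that \(a-b=\frac{(\theta ^{-1}-\theta )x}{2}\), in the first case, and with \(a=\frac{(1-\theta )x}{2}\), \(b=\frac{x-\varepsilon }{2}\), so that \(a-b=\frac{\varepsilon -\theta x}{2}\ge 0\) on the integration range \(x\le \varepsilon /\theta \), in the second.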

Let us prove now (3.22). If we prove

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}\mathcal {L}_{\varphi _{\varepsilon }}(x)=0\quad \forall x\ge 0, \end{aligned}$$
(3.25)

then by (3.24) and dominated convergence, (3.22) follows. Therefore we are left to prove (3.25). On the one hand, since \(\mathcal {L}_{\varphi _{\varepsilon }}(0)=0\) for all \(\varepsilon >0\) (cf. Lemma A.1), then \(\lim _{\varepsilon \rightarrow 0}\mathcal {L}_{\varphi _{\varepsilon }}(0)=0\). On the other hand, for all \(x>0\) and \(y\in [0,\infty )\), the function \(\ell _{\varphi _{\varepsilon }}(x,y)\) is well defined and

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}\ell _{\varphi _{\varepsilon }}(x,y)=0. \end{aligned}$$
(3.26)

Moreover, by (2.11)

$$\begin{aligned} |\ell _{\varphi _{\varepsilon }}(x,y)|&\le B(x,y)\frac{y}{x}e^{-y}(\varphi _{\varepsilon }(x)+\varphi _{\varepsilon }(y)) \le 2 C_*\frac{ye^{\frac{x-y}{2}}}{x(x+y)}\mathbb {1}_{\Gamma }(x,y), \end{aligned}$$

and then

$$\begin{aligned} \int _0^{\infty }|\ell _{\varphi _{\varepsilon }}(x,y)|dy&\le 2C_*e^{\frac{(1-\theta )x}{2}}\frac{1}{x}\int _{\theta x}^{\theta ^{-1}x}\frac{y}{x+y}dy\nonumber \\&=2C_*e^{\frac{(1-\theta )x}{2}}\int _{\theta }^{\theta ^{-1}}\frac{z}{1+z}dz<+\infty . \end{aligned}$$
(3.27)
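For completeness, the last integral may be evaluated explicitly:

$$\begin{aligned} \int _{\theta }^{\theta ^{-1}}\frac{z}{1+z}dz=\Big [z-\ln (1+z)\Big ]_{\theta }^{\theta ^{-1}} =\theta ^{-1}-\theta -\ln \frac{1+\theta ^{-1}}{1+\theta }=\theta ^{-1}-\theta +\ln \theta . \end{aligned}$$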

It follows from (3.26), (3.27) and dominated convergence that \(\mathcal {L}_{\varphi _{\varepsilon }}(x)\rightarrow 0\) as \(\varepsilon \rightarrow 0\) for all \(x>0\), and then (3.25) holds. That proves (3.22), which combined with (3.24) and dominated convergence, finally proves (3.21). Using (3.21) in (3.20), then the limit in (3.18) exists and (3.19) holds.

Since \(K_{\varphi _{\varepsilon }}(u,u)\ge 0\) for all \(\varepsilon >0\), it follows from (3.19) that \(\alpha \) is monotone nondecreasing, and then a.e. differentiable by Lebesgue's theorem on the differentiability of monotone functions.

We are left to prove the right continuity of \(\alpha \). Since \(\alpha \) is nondecreasing, we already know

$$\begin{aligned} \alpha (t)\le \liminf _{h\rightarrow 0^+}\alpha (t+h), \end{aligned}$$
(3.28)

so it is sufficient to prove

$$\begin{aligned} \limsup _{h\rightarrow 0^+}\alpha (t+h)\le \alpha (t). \end{aligned}$$
(3.29)

To this end, let \(\varphi _{\varepsilon }\) be as in Remark 3.5. Using \(\alpha (t+h)\le \int _{[0,\infty )}\varphi _{\varepsilon }(x)u(t+h,x)dx\) and (2.33) with \(\varphi _{\varepsilon }\), we have

$$\begin{aligned} \alpha (t+h)\le \int _{[0,\infty )}\varphi _{\varepsilon }(x)u(t,x)dx +\int _t^{t+h}\big (K_{\varphi _{\varepsilon }}(u(s),u(s)) +L_{\varphi _{\varepsilon }}(u(s))\big )ds. \end{aligned}$$

From Proposition A.4 and (2.51), we deduce that \(K_{\varphi _{\varepsilon }}(u(s),u(s))\) and \(L_{\varphi _{\varepsilon }}(u(s))\) are locally integrable in time for every fixed \(\varepsilon >0\), so letting \(h\rightarrow 0\) above, and then \(\varepsilon \rightarrow 0\), we finally obtain (3.29). The right continuity then follows from (3.28) and (3.29). \(\square \)

Remark 3.7

By a standard approximation argument, it is possible to use \(\mathbb {1}_{[0,\varepsilon )}\) as a test function in (2.33). Then, by similar arguments as in the proof of Proposition 3.6, it can be seen that Eq. (3.19) also holds when \(\varphi _{\varepsilon }\) is replaced by \(\mathbb {1}_{[0,\varepsilon )}\), and then, for all \(t\ge t_0\ge 0\),

$$\begin{aligned} \alpha (t)&=\alpha (t_0)+\lim _{\varepsilon \rightarrow 0}\int _{t_0}^t\iint _{D_\varepsilon } \frac{B(x,y)}{xy}(e^{-x}-e^{-y})u(s,x)u(s,y)dydxds, \end{aligned}$$
(3.30)

where \(D_{\varepsilon }=\{(x,y):x\in [\varepsilon ,\gamma _2(\varepsilon )),\;y\in (\gamma _1(x),\varepsilon )\}\).

4 On Entropy and Entropy Dissipation

Although the entropy H given by (1.36) is well defined for the weak solutions u of (2.1), it is not known whether \(t\mapsto H(u(t))\) is monotone, nor whether it may still be used as a Lyapunov function to study the long time behavior of these solutions (we recall that the dissipation of entropy is not defined in general). But, if u is a weak solution of (2.1) with initial data \(u _{ in }\) given by Theorem 1.2, if \(\{u_n\} _{ n\in \mathbb {N}}\) is the sequence given by Proposition 2.4 and if \(D^{(n)}\) is the functional defined in (2.20), the same calculations as in Sect. 2 and Sect. 6 of [13] yield

$$\begin{aligned} \int _T^{\infty }D^{(n)}(u_n(t))dt\le H(U_M)+C_1\int _{ [0, \infty ) }(1+x)u _{ in }(x)dx,\,\,\forall n\in \mathbb {N}\end{aligned}$$

for some constant \(C_1>0\), where \(U_M\) is the unique equilibrium with the same mass as \(u _{ in }\), \(M=M_0(u_{in})\). Since the sequence of functions \(\{b_n\}_{n\in \mathbb {N}}\) is increasing,

$$\begin{aligned} \int _T^{\infty }D^{(m)}(u_n(t))dt \le H(U_M)+C_1\int _{ [0, \infty ) }(1+x)u _{ in }(x)dx,\,\forall n>m. \end{aligned}$$

Therefore, by the weak lower semicontinuity of the functional \(D^{(m)}\) (cf. Theorem 4.6 in [13]), and the weak convergence of \(u_n\) to u:

$$\begin{aligned} \int _T^{\infty }D^{(m)}(u(t))dt\le H(U_M)+C_1\int _{ [0, \infty ) }(1+x)u _{ in }(x)dx,\,\,\,\forall m\in \mathbb {N}\end{aligned}$$

It follows that, for any sequence \(t_n\rightarrow \infty \) and any family \(\left\{ U(t)\right\} _{ t>0 }\subset \mathscr {M}_+^1([0, \infty ))\) such that \(u(t_n+t)\rightarrow U(t)\) in \(\mathscr {M}_+^1([0, \infty ))\) for \(a. e. \, t>0\), we have \(\lim _{ n\rightarrow \infty } D^{(m)}(u(t_n+t))=0\) and then, by weak lower semicontinuity, \(D^{(m)}(U(t))=0\) for all \(m\in \mathbb {N}\) and \(a. e.\, t>0\). However, it is only possible to obtain a partial characterization of the measures \(U\in \mathscr {M}_+([0,\infty ))\) with total mass M and such that \(D^{(m)}(U)=0\) for all \(m\in \mathbb {N}\).

Proposition 4.1

A measure \(U\in \mathscr {M}_+([0,\infty ))\) with total mass \(M>0\), satisfies \(D^{(m)}(U)=0\) for all \(m\in \mathbb {N}\), if and only if, there exists \(\mu \le 0\) and \(\alpha \ge 0\) such that \(U=g_\mu +\alpha \delta _0\) and \(\int _0^\infty g _{ \mu }(x)dx+\alpha =M\), where

$$\begin{aligned} g_{\mu }(x)=\frac{x^2}{e^{x-\mu }-1},\qquad x>0. \end{aligned}$$
(4.1)

Proof

It is straightforward to check that if \(U=g_\mu +\alpha \delta _0\) for some \(\mu \le 0\) and \(\alpha \ge 0\), such that \(\int _0^{\infty } g _{ \mu }(x)dx+\alpha =M\), then \(D^{(m)}(U)=0\). On the other hand, if \(U=g+G\) is the Lebesgue decomposition of U and \(D^{(m)}(U)=0\), then \(D^{(m)}_1(g)=D^{(m)}_2(g,G)=D^{(m)}_3(G)=0\). From \(D^{(m)}_1(g)=0\) it follows that, for a.e. \((x,y)\in [0,\infty )^2\),

$$\begin{aligned} b_m(x, y)j\Big (g'(x^2+g)e^{-x},g(y^2+g')e^{-y}\Big )=0. \end{aligned}$$
(4.2)

Since \(b_m(x, y)>0\) for \((x, y)\in \Gamma _{\varepsilon , m}\) for all \(\varepsilon >0\) and all \(m\in \mathbb {N}\), where

$$\begin{aligned} \Gamma _{\varepsilon , m}=\Big \{(x,y)\in \Gamma : d((x,y), \partial \Gamma )>\varepsilon ,\,\,(x, y)\in \left( \frac{1}{m}, m\right) \times \left( \frac{1}{m}, m\right) \Big \}, \end{aligned}$$

we deduce from (4.2),

$$\begin{aligned} \frac{g(x)e^{x}}{x^2+g(x)}=\frac{g(y)e^{y}}{y^2+g(y)}\quad a.e.\; (x,y)\in \Gamma _{\varepsilon , m }, \end{aligned}$$

and, since the left hand side depends only on x and the right hand side only on y, both terms must then be equal to a nonnegative constant, say \(\gamma \). If \(\gamma =0\), then \(g=0\) for a.e. \(x>\varepsilon \). If \(\gamma >0\), then \(\gamma =e^{\mu }\) for some \(\mu \in \mathbb {R}\) and \(g=g_{\mu }\) for a.e. \(x>\varepsilon \). Letting \(\varepsilon \rightarrow 0\) we obtain that either \(g=0\) or \(g=g_{\mu }\) a.e. in \((0,\infty )\) and, since \(g\ge 0\), then \(\mu \le 0\).

From \(D^{(m)}_3(G)=0\) for all \(m\in \mathbb {N}\), we obtain that \(j(e^{-x},e^{-y})=(e^{-x}-e^{-y})(x-y)=0\) for \(G\times G\) a.e. \((x,y)\in \Gamma _{\varepsilon , m}\). Letting \(\varepsilon \rightarrow 0\), we deduce that

$$\begin{aligned} G=\sum _{i=0}^{\infty }\alpha _i\delta _{x_i}, \end{aligned}$$
(4.3)

for some \(\alpha _i\ge 0\), \(x_i\ge 0\) with \(b_m(x_i,x_j)=0\) for all \(i\ne j\), and all \(m\in \mathbb {N}\).

From \(D^{(m)}_2(g, G)=0\), \(g=g_\mu \) and G as in (4.3), we deduce that, for all \(m\in \mathbb {N}\),

$$\begin{aligned} D^{(m)}_2(g,G)=\sum _{i=0}^{\infty }\alpha _i(x_i-\mu )\big (e^{-\mu }-e^{-x_i}\big ) \int _0^\infty b_m(x, x_i)g_\mu (x)dx=0, \end{aligned}$$

and therefore, each of the terms in the sum above is zero. If \(\alpha _i>0\) and \(x_i>0\) for some \(i\in \mathbb {N}\), it then follows that \(\mu =x_i\), which is a contradiction since \(\mu \le 0\). Hence \(G=\alpha \delta _0\) for some \(\alpha \ge 0\). \(\square \)

Remark 4.2

The measure U in the statement of Proposition 4.1 is not uniquely determined by its total mass M because, since \(b_m(x, 0)=0\) for all \(x>0\), it is possible to have \(\mu < 0\) and \(\alpha >0\) simultaneously.
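For orientation, a standard computation (not needed in the rest of this Section) quantifies when the Dirac part must appear: expanding the denominator of (4.1), for every \(\mu \le 0\),

$$\begin{aligned} \int _0^\infty g_{\mu }(x)dx=\sum _{n=1}^{\infty }e^{n\mu }\int _0^{\infty }x^2e^{-nx}dx =2\sum _{n=1}^{\infty }\frac{e^{n\mu }}{n^3}\le 2\zeta (3), \end{aligned}$$

so that any measure U as in Proposition 4.1 with total mass \(M>2\zeta (3)\) necessarily satisfies \(\alpha \ge M-2\zeta (3)>0\).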

5 A Simplified Equation

There may be several reasons to consider the following simplified version of Eqs. (1.15) and (1.16),

$$\begin{aligned}&\displaystyle \frac{\partial u}{\partial t}(t, x)=u(t, x)\int _0^\infty R(x, y)u(t, y)dy, \end{aligned}$$
(5.1)
$$\begin{aligned}&\displaystyle R(x, y)=b(x, y)(e^{-x}-e^{-y}). \end{aligned}$$
(5.2)

Although the integral collision operator in (5.1) only contains the nonlinear terms of the integral collision operator in (1.15), it may be supposed to be the dominant term when u is large. This was the underlying idea in [29, 30], where such an approximation was suggested. Let us also recall that, as shown in Sect. B.1 in Appendix B, if the variables k, t and f are suitably scaled with the parameter \(\beta \) to obtain the new variables \(x, \tau \) and u (cf. (B.2) and (B.6)), the Eqs. (1.15) and (1.16) yield Eq. (B.7), where the dependence on the parameter \(\beta >0\) has been kept. Then, the reduced Eq. (5.1) appears as the lowest order approximation as \(\beta \rightarrow \infty \). No rigorous result is known about the validity of such an approximation. In any case, it may be expected, from (B.2), to break down for \(t\gtrsim \beta ^{3}\) and \(x\gtrsim \beta \).

Due to its simpler form, the study of (5.1) is slightly easier. The existence of solutions \(u\in C([0, \infty ), L^1([0, \infty )))\), that do not form a Dirac mass at the origin in finite time, is proved (cf. Sect. 5.2) and it is also possible to describe the long time behaviour of the solutions. Both questions remain open for the Eq. (1.15).
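For illustration only, the following is a minimal numerical sketch of (5.1), (5.2). The kernel b of the paper is only fixed through the hypotheses (2.5)–(2.11), so the function b used below is a hypothetical toy stand-in with the same support band \(\theta x\le y\le \theta ^{-1}x\); the grid, time step and initial datum are likewise arbitrary choices. The sketch is only meant to illustrate that the antisymmetry of R makes the discrete total mass exactly conserved, in agreement with (5.6), while mass is transported towards smaller energies.

```python
import numpy as np

# Explicit Euler sketch of the simplified equation (5.1)-(5.2):
#   du/dt(t,x) = u(t,x) * int_0^infty R(x,y) u(t,y) dy,
#   R(x,y)     = b(x,y) * (exp(-x) - exp(-y)).
# The kernel 'b' below is a hypothetical toy stand-in, NOT the kernel of the
# paper, which is only fixed through the hypotheses (2.5)-(2.11).

theta = 0.5
x = np.linspace(0.05, 10.0, 400)            # energy grid, kept away from x = 0
dx = x[1] - x[0]

X, Y = np.meshgrid(x, x, indexing="ij")
band = (Y >= theta * X) & (Y <= X / theta)  # support band theta*x <= y <= x/theta
b = np.where(band, np.exp(-np.abs(X - Y) / 2.0) / (X * Y * (X + Y)), 0.0)
R = b * (np.exp(-X) - np.exp(-Y))           # antisymmetric: R(x,y) = -R(y,x)

u = np.exp(-((x - 3.0) ** 2))               # some smooth initial datum
dt, nsteps = 1.0e-3, 2000
for _ in range(nsteps):
    u = u + dt * u * (R @ u) * dx           # explicit Euler step of (5.1)

# Since R is antisymmetric, sum_{i,j} u_i R_{ij} u_j = 0, so the discrete mass
# sum(u)*dx is conserved at every step (up to round-off), in agreement with
# (5.6); mass is transported towards smaller x, since R(x,y) > 0 for y > x
# on the band.
print("total mass:", u.sum() * dx)
```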

5.1 Existence and Properties of Weak Solutions

In this Section we prove the following result on the existence of weak solutions of the Eqs. (5.1) and (5.2).

Theorem 5.1

For any initial data \(u_0\in \mathscr {M}_+([0,\infty ))\) satisfying

$$\begin{aligned} X_{\eta }(u_0)<\infty \quad \text {for some}\quad \eta >\frac{1-\theta }{2}, \end{aligned}$$
(5.3)

there exists \(u\in C([0,\infty ),\mathscr {M}_+([0,\infty )))\) such that:

$$\begin{aligned}&(i)\,\, \forall \varphi \in C_b([0, \infty )),\,\,\int _{ [0, \infty ) }u(\cdot , x)\varphi (x)dx \in C( [0, \infty );\mathbb {R}) \nonumber \\&\qquad \text {and}\quad \int _{ [0, \infty ) }u(0 , x)\varphi (x)dx= \int \limits _{ [0, \infty ) }u_0(x)\varphi (x)dx, \end{aligned}$$
(5.4)
$$\begin{aligned}&(ii)\,\, \forall \varphi \in C^1_b([0, \infty )),\, \varphi '(0)=0,\nonumber \\&\qquad \int _{ [0, \infty ) }u(\cdot , x)\varphi (x)dx \in W_{loc}^{1, \infty }([0, \infty ); \mathbb {R}),\,\hbox {and for}\,\,a. e. \, t>0,\nonumber \\&\qquad \frac{d}{dt}\int \limits _{ [0, \infty ) }u(t , x)\varphi (x)dx=\frac{1}{2}\iint \limits _{[0, \infty )^2}R(x, y)u(t,x)u(t,y)(\varphi (x)-\varphi (y))dydx. \end{aligned}$$
(5.5)

(We will say that u is a weak solution of (5.1) with initial data \(u_0\)). The solution also satisfies,

$$\begin{aligned} M_0(u(t))&=M_0(u_0)\quad \forall t> 0, \end{aligned}$$
(5.6)
$$\begin{aligned} X_{\eta }(u(t))&\le X_{\eta }(u_0)\quad \forall t>0. \end{aligned}$$
(5.7)

This result is similar to Theorem 1.2 for the Eqs. (2.1)–(2.11), and its proof uses similar arguments. The main difference is that Theorem 3 in [13] can not be used to obtain approximate solutions, and this must be done using a classical truncation argument. Let us then consider the following auxiliary problem:

$$\begin{aligned}&\displaystyle \frac{\partial u_n}{\partial t}(t,x)=u_n(t,x)\int _0^{\infty }R_n(x,y)u_n(t,y)dy, \end{aligned}$$
(5.8)
$$\begin{aligned}&\displaystyle u_n(0,x)=u_{in}(x) \end{aligned}$$
(5.9)
$$\begin{aligned}&\displaystyle R_n(x, y)=b_n(x, y)(e^{-x}-e^{-y}) \end{aligned}$$
(5.10)

where \(b_n\) is defined in (2.25).

Proposition 5.2

For every \(n\in \mathbb {N}\) and for every nonnegative initial data \(u_{in}\in L^1([0,\infty ))\), there exists a nonnegative function \(u_n\in C([0,\infty ),L^1([0,\infty )))\cap C^1((0,\infty ),L^1([0,\infty )))\) that satisfies (5.8) and (5.9) in \(C((0, \infty ), L^1([0,\infty )))\) and \(L^1([0,\infty ))\) respectively and,

$$\begin{aligned} M_0(u_n(t))=M_0(u_{in})\quad \forall t\ge 0. \end{aligned}$$
(5.11)

Moreover, for every measurable, nonnegative and nondecreasing function \(\varphi \) defined on \([0, \infty )\),

$$\begin{aligned} \int _{ [0, \infty ) }u_n(t, x)\varphi (x)dx\le \int _{ [0, \infty ) }u _{ in }(x)\varphi (x)dx,\,\,\,\forall n\in \mathbb {N},\;\forall t>0. \end{aligned}$$
(5.12)

Proof

The proof uses a simple Banach fixed point argument. For any nonnegative \(f\in C([0, \infty ), L^{1 }([0,\infty )))\) we consider the solution u to the problem

$$\begin{aligned} \left\{ \begin{array}{l} \displaystyle \frac{\partial u}{\partial t}(t,x)=u(t,x)\int _0^\infty R_n(x,y)f(t,y)dy\quad x>0,\;t>0,\\ u(0,x)=u _{ in }(x),\quad x>0, \end{array}\right. \end{aligned}$$

given by:

$$\begin{aligned} A_n(f)\equiv u(t,x)=u _{ in }(x)e^{\int _0^{t}\int _0^\infty R_n(x,y)f(s,y)dyds}. \end{aligned}$$

Our goal is then to prove first that \(A_n\) is a contraction on \(\mathcal {X} _{ \rho , T }\) for some \(\rho >0\) and \(T>0\) where,

$$\begin{aligned} \mathcal {X}_{\rho , T} =\left\{ f\in C([0, T); L^1([0,\infty ))); \sup _{ 0\le t \le T }\Vert f(t)\Vert _1\le \rho \right\} . \end{aligned}$$

For all \(T>0\), \(t\in [0, T)\) and \(f\in \mathcal {X} _{ \rho , T }\),

$$\begin{aligned} \Vert A_n(f)(t)\Vert _1\le&\Vert u _{ in }\Vert _1e^{T\rho \Vert R_n\Vert _{\infty } }; \end{aligned}$$
(5.13)

and for all \(t_1\), \(t_2\) such that \(0\le t_1\le t_2\le T\):

$$\begin{aligned}&|A_n(f)(t_1,x)-A_n(f)(t_2,x)|\\&\quad =u _{ in }(x)\left| e^{\int _0^{t_1}\int _0^\infty R_n(x, y)f(s, y)dyds}-e^{\int _0^{t_2}\int _0^\infty R_n(x, y)f(s, y)dyds} \right| \\&\quad \le u _{ in }(x) \left| \int _{t_1}^{t_2}\int _0^\infty R_n(x, y)f(s, y)dyds\right| \\&\qquad \times e^{\theta \int _0^{t_1}\int _0^\infty R_n(x, y)f(s, y)dyds+(1-\theta )\int _0^{t_2}\int _0^\infty R_n(x, y)f(s, y)dyds}\\&\quad \le u _{ in }(x)\rho \Vert R_n\Vert _\infty |t_1-t_2|e^{T\rho \Vert R_n\Vert _\infty }. \end{aligned}$$

It then follows that

$$\begin{aligned}&\Vert A_n(f)(t_1)-A_n(f)(t_2)\Vert _1 \le \rho \Vert u _{ in }\Vert _1 \Vert R_n\Vert _\infty e^{\rho T\Vert R_n\Vert _{\infty } } |t_1-t_2|. \end{aligned}$$
(5.14)

Let now f and g be in \(\mathcal {X} _{ \rho , T }\) and denote \(v=A_n(g)\) and \(u=A_n(f)\). Arguing as before,

$$\begin{aligned} \Vert u(t)-v(t)\Vert _1\le \Vert u _{ in }\Vert _1\Vert R_n\Vert _\infty \Vert f-g\Vert _{C([0,T],\,L^1([0,\infty )))}Te^{\rho T\Vert R_n\Vert _\infty }. \end{aligned}$$
(5.15)

By (5.13)–(5.15), if

$$\begin{aligned}&\Vert u _{ in }\Vert _1e^{T\rho \Vert R_n\Vert _{\infty } }<\rho ,\,\,\hbox {and}\\&\quad \Vert u _{ in }\Vert _1\Vert R_n\Vert _\infty Te^{\rho T\Vert R_n\Vert _\infty }<1, \end{aligned}$$

then \(A_n\) maps \(\mathcal {X} _{ \rho , T }\) into itself and is a contraction there, and has therefore a fixed point \(u_n\in \mathcal {X} _{ \rho , T }\) that satisfies

$$\begin{aligned} u_n(x, t)=u _{ in }(x)e^{\int _0^{t}\int _0^\infty R_n(x, y)u_n(s, y)dyds}. \end{aligned}$$
(5.16)

This solution may then be extended to \(C([0, T _{ \max }), L^{1 }([0,\infty )))\). It immediately follows from (5.16) that \(u_n\ge 0\).

Moreover, since \(u_n\in C([0, T _{ \max }), L^{1 }([0,\infty )))\) and \(R_n\) is bounded, we deduce from (5.16) that \(u_n\in C^1([0, T _{ \max }), L^{1 }([0,\infty )))\) and, for every \(t\in (0, T _{ \max })\), the Eq. (5.8) is satisfied in \(L^1([0,\infty ))\). For all \(T<T_{\max }\) and all \(t\in [0,T]\),

$$\begin{aligned}&\left| u _{ in }(\cdot )\frac{d}{dt}\left( e^{\int _0^t\int _0^{\infty }R_n(\cdot ,y) u_n(s,y)dy}\right) \right| \le u _{ in }(\cdot )\Vert R_n\Vert _{\infty }\\&\quad \times \Vert u_n\Vert _{C([0,T],L^1([0,\infty )))}e^{T\Vert R_n\Vert _{\infty }\Vert u_n\Vert _{C([0,T],L^1([0,\infty )))}}\in L^1([0,\infty )), \end{aligned}$$

then if we multiply (5.16) by any \(\varphi \in L^{\infty }([0,\infty ))\), we deduce that for all \(t<T_{\max }\),

$$\begin{aligned} \frac{d}{dt}\int _0^{\infty }u_n(t,x)\varphi (x)dx=\iint \limits _{ (0, \infty )^2 }R_n(x,y)u_n(t,x)u_n(t,y)\varphi (x)dydx. \end{aligned}$$

Recalling the definition of \(R_n\), then by the symmetry of \(b_n\) and Fubini's theorem,

$$\begin{aligned} \frac{d}{dt}\int _0^{\infty }u_n(t,x)\varphi (x)dx=\iint \limits _{ (0, \infty )^2 }k_{\varphi ,n}(x,y)u_n(t,x)u_n(t,y)dydx, \end{aligned}$$
(5.17)

i.e. \(u_n\) is a weak solution of (5.8), (5.9) for \(t\in [0, T)\). If we choose \(\varphi =1\) we deduce that (5.11) holds for all \(t<T_{\max }\). Then, by a classical argument, \(T_{\max }=\infty \).

In order to prove (5.12), let \(\psi \) be a nonnegative measurable function such that \(\int _0^{\infty }u _{ in }(x)\psi (x)dx<\infty \), and consider a sequence \(\{\psi _k\}_{k\in \mathbb {N}}\) of simple functions that converges monotonically to \(\psi \) as \(k\rightarrow \infty \). Since \(\psi _k\in L^{\infty }([0,\infty ))\), then (5.17) holds with \(\varphi =\psi _k\) for all k, and by Lebesgue's and the monotone convergence theorems,

$$\begin{aligned} \int _0^{\infty }u_n(t,x)\psi (x)dx&=\int _0^{\infty }u _{ in }(x)\psi (x)dx\\&\quad +\int _0^t \int _0^{\infty }\int _0^{\infty }k_{\psi ,n}(x,y)u_n(s,x)u_n(s,y)dydxds. \end{aligned}$$

Using that \(u_n\in C([0,\infty ),L^1([0,\infty )))\), (5.11) and

$$\begin{aligned}&\iint \limits _{ (0, \infty )^2 } k_{\psi ,n}(x,y)\big |u_n(t_1,x)u_n(t_1,y)-u_n(t_2,x)u_n(t_2,y)\big |dydx\\&\quad \le 2\Vert k_{\psi ,n}\Vert _{\infty }M_0(u _{ in })\Vert u_n(t_1)-u_n(t_2)\Vert _1, \end{aligned}$$

so that \(t\mapsto \int _0^{\infty }\int _0^{\infty }k_{\psi ,n}(x,y)u_n(t,x)u_n(t,y)dydx\) is continuous, it follows by the fundamental theorem of calculus that (5.17) holds for \(\psi \). If, in addition, \(\psi \) is nondecreasing, then \((e^{-x}-e^{-y})(\psi (x)-\psi (y))\le 0\) for all \((x,y)\in [0,\infty )^2\), and then (5.12) follows. \(\square \)
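In practice, the fixed point (5.16) can be approximated by Picard iteration of the map \(A_n\). The following minimal sketch uses a hypothetical bounded toy kernel in place of the truncation \(b_n\) of (2.25) (not reproduced here), and a crude rectangle rule for the time integral; it is only meant to display the structure of the iteration, not the actual kernel of the paper.

```python
import numpy as np

# Picard iteration for the fixed-point formula (5.16):
#   u(t,x) = u_in(x) * exp( int_0^t int_0^infty R_n(x,y) u(s,y) dy ds ).
# 'b_n' below is a hypothetical bounded toy truncation, not the b_n of (2.25).

x = np.linspace(0.05, 10.0, 200)
t = np.linspace(0.0, 0.2, 50)
dx, dt = x[1] - x[0], t[1] - t[0]

X, Y = np.meshgrid(x, x, indexing="ij")
n = 10.0
b_n = np.minimum(np.exp(-np.abs(X - Y) / 2.0) / (X * Y * (X + Y)), n)
R_n = b_n * (np.exp(-X) - np.exp(-Y))        # bounded kernel, as in (5.10)

u_in = np.exp(-((x - 3.0) ** 2))
u = np.tile(u_in, (t.size, 1))               # initial guess, constant in time

for _ in range(100):                          # Picard iterations of the map A_n
    flux = (u @ R_n.T) * dx                   # flux[k, i] ~ int R_n(x_i, y) u(t_k, y) dy
    integral = np.cumsum(flux, axis=0) * dt   # ~ int_0^{t_k} flux(s, x_i) ds (crude rule)
    u_new = u_in[None, :] * np.exp(integral)
    if np.max(np.abs(u_new - u)) < 1e-10:
        break
    u = u_new
# 'u' now approximates the truncated solution of (5.8)-(5.9) on the time grid.
```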

Proof of Theorem 5.1

Consider first an initial data \(u_0\in L^1([0,\infty ))\). Let \(\{u_n\}_{n\in \mathbb {N}}\) be the sequence of solutions to (5.8)–(5.9) constructed in Proposition 5.2 for \(n\in \mathbb {N}\). As in the proof of Theorem 1.2, the result follows from the precompactness (given by the conservation of \(M_0(u)\)) and the equicontinuity of \(\{u_n\} _{ n\in \mathbb {N}}\). These properties follow as in the proof of Proposition 2.8 and Proposition 2.9 respectively. The existence of the solution u follows using the same arguments as in Corollary 2.10 and the end of the Proof of Theorem 1.2.

Property (5.7) for u follows from (5.12), the lower semicontinuity of the non negative function \(e^{\eta x}\), and the weak convergence to u of \(u_n\).

For a general initial data \(u_0\in \mathscr {M}_+([0,\infty ))\), by Corollary 8.6 in [10] there exists a sequence \(\{u_{0,n}\}_{n\in \mathbb {N}}\subset L^1([0,\infty ))\) such that

$$\begin{aligned} \lim _{n\rightarrow \infty }\int _0^{\infty }\varphi (x)u_{0,n}(x)dx=\int \limits _{[0,\infty )}\varphi (x)u_0(x)dx,\,\forall \varphi \in C_b([0,\infty )). \end{aligned}$$
(5.18)

Since \(u _{ 0, n }\in L^1([0,\infty ))\), using the previous step there exists a weak solution \(u_n\) that satisfies (5.4)–(5.7). By (5.6) and (5.7), the sequence \(\{u_n\} _{ n\in \mathbb {N}}\) is precompact in \(C([0, \infty ), \mathscr {M}_+ ([0,\infty )))\). Arguing as in Proposition 2.9, we deduce that it is also equicontinuous. Therefore, using the same arguments as in the end of the Proof of Theorem 1.2, we deduce the existence of a subsequence, still denoted \(\{u_n\}_{n\in \mathbb {N}}\), and a weak solution of (5.1), \(u\in C([0, \infty ),\mathscr {M}_+ ([0,\infty )))\), satisfying (5.4)–(5.7).

The property (5.7) is obtained by using first, in the weak formulation (5.5), a sequence of nondecreasing test functions \(\{\varphi _k\} _{ k\in \mathbb {N}}\subset C^1_b([0, \infty ))\) such that \(\varphi '_k(0)=0\) and \(\varphi _k(x)\rightarrow e^{\eta x}\) for all \(x\ge 0\), to obtain:

$$\begin{aligned} \int _{ [0, \infty ) }u(t, x)\varphi _k(x)dx\le \int _{ [0, \infty ) }u _{ 0}(x)\varphi _k(x)dx, \end{aligned}$$
(5.19)

and then pass to the limit as \(k\rightarrow \infty \). \(\square \)

Remark 5.3

In Theorem 1.2, the initial data is required to satisfy \(X_{\eta }(u_0)<\infty \) for some \(\eta \in \left( \frac{1-\theta }{2},\frac{1}{2}\right) \). On the one hand, the condition \(\eta >\frac{1-\theta }{2}\) is sufficient in order to have boundedness of the operators \(K_{\varphi }(u,u)\) and \(L_{\varphi }(u)\). On the other hand, the condition \(\eta <1/2\) comes from the estimate (2.40). In Theorem 5.1, however, that last condition is not needed, thanks to the estimate (5.12).

We show now that the support of u(t) is constant in time.

Proposition 5.4

Let u be a weak solution of (5.1) constructed in Theorem 5.1 for an initial data \(u_0\in \mathscr {M}_+([0,\infty ))\) satisfying (5.3). The following statements hold:

  1. (i)

    For all \(r>0\), \(t_0\) and t with \(0\le t_0\le t\), and \(\varphi \in C^1_c((0,\infty ))\) nonnegative such that \({\text {supp}}(\varphi )\subset [r,L]\) for some \(L>r\),

    $$\begin{aligned}&\int _{[0,\infty )}\varphi (x)u(t,x)dx\ge e^{-(t-t_0)C_1}\int _{[0,\infty )}\varphi (x)u(t_0,x)dx, \end{aligned}$$
    (5.20)
    $$\begin{aligned}&\int _{[0,\infty )}\varphi (x)u(t,x)dx\le e^{(t-t_0)C_2}\int _{[0,\infty )}\varphi (x)u(t_0,x)dx, \end{aligned}$$
    (5.21)

    where

    $$\begin{aligned} C_1=\frac{C_*\rho _*M_0(u_0)}{\sqrt{\theta (1+\theta )}}\frac{e^{\frac{(1-\theta )L}{2}}}{r^{3/2}},\quad C_2=\frac{C_*\rho _*M_0(u_0)}{\sqrt{2}}\frac{e^{\frac{(1-\theta )L}{2\theta }}}{r^{3/2}}. \end{aligned}$$
    (5.22)
  2. (ii)

    For all \(r>0\), \(t_0\) and t with \(0\le t_0\le t\),

    $$\begin{aligned} \int \limits _{[0,r)}u(t_0,x)dx\le & {} \int \limits _{[0,r)}u(t,x)dx\le e^{(t-t_0)C_{r}}\int \limits _{[0,r)}u(t_0,x)dx, \end{aligned}$$
    (5.23)
    $$\begin{aligned} \text {where}\qquad C_{r}= & {} \frac{C_*\rho _*M_0(u_0)}{\sqrt{\theta (1+\theta )}} \frac{e^{\frac{(1-\theta )r}{2\theta }}}{r^{3/2}}. \end{aligned}$$
    (5.24)
  3. (iii)

    \({\text {supp}}(u(t))={\text {supp}}(u_0)\) for all \(t>0\).

Proof

Proof of (i). Since there are no integrability issues near the origin because \({\text {supp}}(\varphi )\subset [r,L]\), then by Fubini's theorem

$$\begin{aligned}&\frac{1}{2}\iint _{[0,\infty )^2}R(x,y)(\varphi (x)-\varphi (y))u(t,x)u(t,y)dydx\\&\quad =\int _{[0,\infty )}\varphi (x)u(t,x)\int _{[0,\infty )}R(x,y)u(t,y)dydx. \end{aligned}$$

Let us prove the lower bound (5.20). Using (2.8)–(2.11), for all \((x,y)\in \Gamma \), \(y\le x\),

$$\begin{aligned} |R(x,y)|\le \frac{C_*e^{\frac{x-y}{2}}(x-y)}{xy(x+y)} \le C^*\frac{e^{\frac{(1-\theta )x}{2}}}{x^{3/2}},\quad C^*=\frac{C_*\rho _*}{\sqrt{\theta (1+\theta )}}, \end{aligned}$$
(5.25)

and taking into account the support of \(\varphi \), we deduce that

$$\begin{aligned}&\int _{[0,\infty )}\varphi (x)u(t,x)\int _{[0,\infty )}R(x,y)u(t,y)dydx\\&\quad \ge \int _{r}^{L} \varphi (x)u(t,x)\int _0^{x}R(x,y)u(t,y)dydx\\&\quad \ge -\frac{C^*e^{\frac{(1-\theta )L}{2}}}{r^{3/2}} \int _{r}^{L}\varphi (x)u(t,x)\int _0^x u(t,y)dydx\\&\quad \ge -C_1\int _{r}^{L}\varphi (x)u(t,x)dx, \end{aligned}$$

and then, from the weak formulation, we obtain that for all \(t>0\),

$$\begin{aligned} \frac{d}{dt}\int _{r}^{L}\varphi (x)u(t,x)dx\ge -C_1\int _{r}^{L}\varphi (x)u(t,x)dx, \end{aligned}$$

and (5.20) follows by Gronwall’s Lemma.

We now prove the upper bound (5.21) by similar arguments. Since \(R(x,y)\le 0\) for \(y\le x\), then

$$\begin{aligned}&\int _{[0,\infty )}\varphi (x)u(t,x)\int _{[0,\infty )}R(x,y)u(t,y)dydx\\&\quad \le \int _{r}^{L} \varphi (x)u(t,x)\int _x^{\infty }R(x,y)u(t,y)dydx, \end{aligned}$$

and since for all \((x,y)\in \Gamma \), \(x\le y\),

$$\begin{aligned} R(x,y)&\le \frac{C_*e^{\frac{y-x}{2}}(y-x)}{xy(x+y)} \le C'\frac{e^{\frac{(1-\theta )x}{2\theta }}}{x^{3/2}}, \qquad C'=\frac{C_*\rho _*}{\sqrt{2}}, \end{aligned}$$

we deduce from the weak formulation that for all \(t>0\),

$$\begin{aligned} \frac{d}{dt}\int _{r}^{L}\varphi (x)u(t,x)dx&\le \frac{C'e^{\frac{(1-\theta )L}{2\theta }}}{r^{3/2}} \int _{r}^{L}\varphi (x)u(t,x)\int _x^{\infty }u(t,y)dydx\\&\le C_2\int _{r}^{L}\varphi (x)u(t,x)dx, \end{aligned}$$

and then (5.21) follows by Gronwall’s Lemma.

Proof of (ii). We first prove the lower bound in (5.23). Given \(r>0\), let \(0<r_*<r\), and \(\varphi \in C^1_c([0,\infty ))\) be nonnegative, nonincreasing, and such that \(\varphi (x)=1\) for all \(x\in [0,r_*]\) and \(\varphi (x)=0\) for all \(x\ge r\). Since \((e^{-x}-e^{-y})(\varphi (x)-\varphi (y))\ge 0\) for all \(0\le y\le x\), it follows from the weak formulation

$$\begin{aligned} \frac{d}{dt}\int _{[0,r)}\varphi (x)u(t,x)dx\ge 0\qquad \forall t>0, \end{aligned}$$

hence

$$\begin{aligned} \int _{[0,r)}\varphi (x)u(t,x)dx\ge \int _{[0,r)}\varphi (x)u(t_0,x)dx\qquad \forall t\ge t_0\ge 0, \end{aligned}$$

and then the lower bound in (5.23) follows by taking the supremum over all \(\varphi \) as above, i.e., letting \(r_*\rightarrow r\).

Let us prove now the upper bound in (5.23). Given \(r>0\), let \(r_*\) and \(\varphi \) be as before. Keeping only the positive terms in the weak formulation and taking \(\Gamma \) into account, we deduce

$$\begin{aligned} \frac{d}{dt}\int _{[0,r)}\varphi (x)u(t,x)dx \le \int _{r_*}^{\frac{r}{\theta }}\int _{\theta x}^{\min \{x,r\}} |R(x,y)|\varphi (y) u(t,x)u(t,y)dydx, \end{aligned}$$

and by (5.25) we obtain

$$\begin{aligned} \frac{d}{dt}\int _{[0,r)}\varphi (x)u(t,x)dx&\le \frac{C^*e^{\frac{(1-\theta )r}{2\theta }}}{r_*^{3/2}} \int _{r_*}^{\frac{r}{\theta }}u(t,x)\int _{\theta x}^{\min \{x,r\}}\varphi (y)u(t,y)dydx\\&\le C_{r_*}\int _{[0,r)}\varphi (y)u(t,y)dy, \end{aligned}$$

where \(C_{r_*}=\frac{C^*e^{\frac{(1-\theta )r}{2\theta }}}{r_*^{3/2}}M_0(u_0),\) and then, from Gronwall’s Lemma,

$$\begin{aligned} \int _{[0,r)}\varphi (x)u(t,x)dx\le e^{(t-t_0)C_{r_*}}\int _{[0,r)}\varphi (x)u(t_0,x)dx\qquad \forall t\ge t_0\ge 0. \end{aligned}$$

The upper bound in (5.23) then follows by letting \(r_*\) tend to r.

Proof of (iii). We recall the following characterization of the support of a Radon measure \(\mu \) (see [17], Chapter 7): \(x\in {\text {supp}}(\mu )\) if and only if \(\int _{[0,\infty )}\varphi d\mu >0\) for all \(\varphi \in C_c([0,\infty ))\) with \(0\le \varphi \le 1\) such that \(\varphi (x)>0\). Then, from (5.20) and (5.21) for \(t_0=0\), we deduce that

$$\begin{aligned} (0,\infty )\cap {\text {supp}}(u_0)=(0,\infty )\cap {\text {supp}}(u(t))\qquad \forall t>0, \end{aligned}$$

and from (5.23) for \(t_0=0\), we deduce that for all \(t>0\),

$$\begin{aligned} 0\in {\text {supp}}(u_0)\quad \text {if and only if}\quad 0 \in {\text {supp}}(u(t)), \end{aligned}$$

which completes the proof. \(\square \)

The tails of the weak solutions are nonincreasing in time, as proved in the following Proposition.

Proposition 5.5

Let u be the weak solution of (5.1) constructed in Theorem 5.1 for an initial data \(u_0\in \mathscr {M}_+([0,\infty ))\) satisfying (5.3). Then

  1. (i)

    For all \(r\ge 0\), the map \(t\mapsto \int _{[r,\infty )}u(t,x)dx\) is nonincreasing on \([0,\infty )\).

  2. (ii)

    For all \(r>0\), if

    $$\begin{aligned}&\exists x_0\in [r,\gamma _2(r))\cap {\text {supp}}(u_0),\;\exists y_0\in (\gamma _1(r),r)\cap {\text {supp}}(u_0),\qquad \nonumber \\&\quad \text {such that}\quad B(x_0,y_0)>0, \end{aligned}$$
    (5.26)

    then the map \(t\mapsto \int _{[r,\infty )}u(t,x)dx\) is strictly decreasing on \([0,\infty )\).

Remark 5.6

Condition (5.26) holds, for instance, if r is an interior point of the support of \(u_0\).

Proof

Proof of (i). For \(r=0\), the result follows from the conservation of mass (5.6). For \(r>0\), let \(\varepsilon \in (0,r)\) and \(\varphi _{\varepsilon }\in C^1_b([0,\infty ))\) be a nondecreasing function such that \(\varphi _{\varepsilon }(x)=1\) for all \(x\ge r\), \(\varphi _{\varepsilon }(x)=0\) for all \(x\in [0,r-\varepsilon ]\). Using the monotonicity of \(\varphi _{\varepsilon }\), we deduce from the weak formulation (5.5) that for all \(t\ge 0\),

$$\begin{aligned} \frac{d}{dt}\int _{[0,\infty )}\varphi _{\varepsilon }(x)u(t,x)dx\le 0, \end{aligned}$$

and then the map \(t\mapsto \int _{[0,\infty )}\varphi _{\varepsilon }(x)u(t,x)dx\) is nonincreasing. The result then follows by letting \(\varepsilon \rightarrow 0\).

Proof of (ii). Since (5.5) is invariant under time translations, it suffices to prove that for all \(r>0\),

$$\begin{aligned} \int _{[r,\infty )}u(t,x)dx<\int _{[r,\infty )}u_0(x)dx\qquad \forall t>0, \end{aligned}$$

provided (5.26) holds. To this end, consider \(\varphi _{\varepsilon }\) as in part (i). By (5.5)

$$\begin{aligned} \int _{[0,\infty )}\varphi _{\varepsilon }(x)u(t,x)dx&=\int _{[0,\infty )}\varphi _{\varepsilon }(x)u_0(x)dx\\&\quad +\int _0^t \int _0^{\infty }\int _0^x k_{\varphi _{\varepsilon }}(x,y)u(s,x)u(s,y)dydxds. \end{aligned}$$

Then, since \(\lim _{\varepsilon \rightarrow 0}k_{\varphi _{\varepsilon }}(x,y)=k_{\varphi }(x,y)\) for all \((x,y)\in [0,\infty )^2\), where \(\varphi (x)=\mathbb {1}_{[r,\infty )}(x)\), and for all \(\varepsilon \) small enough,

$$\begin{aligned}&\int _0^{\infty }\int _0^x |k_{\varphi _{\varepsilon }}(x,y)|u(s,x)u(s,y)dydx\\&\quad =\int _{r-\varepsilon }^{\infty }\int _{\theta x}^x |k_{\varphi _{\varepsilon }}(x,y)|u(s,x)u(s,y)dydx\\&\quad \le 2C_*\rho _*\int _{r-\varepsilon }^{\infty }\int _{\theta x}^x\frac{e^{\frac{x-y}{2}}}{\sqrt{xy(x+y)}}u(s,x)u(s,y)dydx\\&\quad \le \frac{2C_*\rho _*}{\sqrt{\theta (1+\theta )}(r-\varepsilon )^{3/2}}\int _{r-\varepsilon }^{\infty }e^{\frac{(1-\theta )x}{2}}u(s,x)\int _{\theta x}^x u(s,y)dydx\\&\quad \le \frac{4C_*\rho _*M_0(u_0)}{\sqrt{\theta (1+\theta )}r^{3/2}}\int _0^{\infty }e^{\frac{(1-\theta )x}{2}}u_0(x)dx<\infty , \end{aligned}$$

we deduce from the dominated convergence Theorem

$$\begin{aligned}&\int _{[r,\infty )}u(t,x)dx=\int _{[r,\infty )}u_0(x)dx\nonumber \\&\quad +\int _0^t\int _{[r,\infty )}\int _{[0,r)} R(x,y)u(s,x)u(s,y)dydxds. \end{aligned}$$
(5.27)

Taking \(\Gamma \) into account, we observe that

$$\begin{aligned}&\int _{[r,\infty )}\int _{[0,r)} R(x,y)u(s,x)u(s,y)dydx\nonumber \\&\quad =\int _{[r,\gamma _2(r))}\int _{(\gamma _1(x),r)} R(x,y)u(s,x)u(s,y)dydx\le 0. \end{aligned}$$
(5.28)

The goal is to show that the integral above is, indeed, strictly negative for all \(s\in [0,t]\). By (5.26) and Proposition 5.4 (iii), there exists an open rectangle \(G=G_1\times G_2\) centered around \((x_0,y_0)\) and contained in \(\{(x,y)\in [0,\infty )^2:x\in [r,\gamma _2(r)),\;y\in (\gamma _1(x),r)\}\) such that

$$\begin{aligned} \int _{G_i} u(t,x)dx>0\qquad \forall t>0,\;i=1,2. \end{aligned}$$

We then obtain

$$\begin{aligned}&\int _{[r,\gamma _2(r))}\int _{(\gamma _1(x),r)} R(x,y)u(s,x)u(s,y)dydx\\&\quad \le \max _{(x,y)\in G}R(x,y)\int _{G_1} u(s,x)dx\int _{G_2}u(s,y)dy<0, \end{aligned}$$

and the result then follows from (5.27) and (5.28). \(\square \)

5.2 Global “Regular” Solutions

We prove in this Section Theorem 1.4, for initial data \(u_0\) sufficiently flat around the origin. This condition on \(u_0\) is sufficient to prevent the formation of a Dirac mass in finite time. We do not know if it is necessary. We prove first the following,

Proposition 5.7

For all \(u_0\in L^1([0, \infty ))\), \(u_0\ge 0\), satisfying (1.40) for some \(\eta >(1-\theta )/2\), there exists a nonnegative global weak solution \(u\in C([0, \infty ), L^1([0, \infty )))\) of (5.1), (5.2) such that

$$\begin{aligned} u(t,x)=u_{0}(x)e^{\int _0^t\int _0^{\infty }R(x,y)u(s,y)dyds}\,\,\,\,\forall t>0,\,a. e.\, x>0, \end{aligned}$$
(5.29)

and also satisfies \(u(0)=u_{0}\), (1.42), (1.43).

Proof of Proposition 5.7

The proof has two steps.

Step 1 We consider first a compactly supported initial data, say \({\text {supp}}\,u_0\subset [0,L]\), \(L>0\). We first prove that the operator

$$\begin{aligned} A(f)(t,x)=u_0(x)e^{\int _0^{t}\int _0^\infty R(x,y)f(s,y)dyds} \end{aligned}$$
(5.30)

is a contraction on \(Y _{ \rho , T }\) for some \(\rho >0\) and \(T>0\), where

$$\begin{aligned} Y_{\rho , T}= & {} \left\{ f\in C([0, T), L^1([0,\infty ),\omega dx)):\; \Vert f\Vert _T\le \rho \right\} ,\\ \Vert f\Vert _T= & {} \sup _{0\le t<T}\int _0^{\infty }\omega (x)|f(t,x)|dx=\sup _{0\le t<T}\Vert f(t)\Vert _{\omega },\\ \omega (x)= & {} (1+x^{-3/2}). \end{aligned}$$

Using (2.8)–(2.11), for all \((x,y)\in \Gamma \), \(x\le L\),

$$\begin{aligned} |R(x,y)|&\le \frac{C_*\rho _* e^{\frac{|x-y|}{2}}}{\sqrt{xy(x+y)}}\le \frac{C_*\rho _*}{\sqrt{\theta (1+\theta )}}\frac{e^{\frac{(1-\theta )x}{2\theta }}}{y^{3/2}}\le C_L\omega (y), \end{aligned}$$

where \(C_L=\frac{C_*\rho _*e^{\frac{(1-\theta )L}{2\theta }}}{\sqrt{\theta (1+\theta )}}\). Then, for all nonnegative \(f\in Y_{\rho ,T}\), \(x\in [0,L]\), and \(t\in [0,T)\),

$$\begin{aligned} \int _0^t\int _0^{\infty }|R(x,y)|f(s,y)dyds&\le C_L\rho T, \end{aligned}$$

and then

$$\begin{aligned} A(f)(t, x)&\le u_0(x)e^{C_L\rho T},\nonumber \\ \Vert A(f)\Vert _T&\le \Vert u_0\Vert _{\omega }e^{C_L\rho T}. \end{aligned}$$
(5.31)

Notice that \(\Vert u_0\Vert _{\omega }<\infty \) by the hypothesis (1.40). Let now \(t_1\) and \(t_2\) be such that \(0\le t_1\le t_2<T\). Then, for all \(x\in [0,L]\),

$$\begin{aligned}&\big |A(f)(t_1,x)-A(f)(t_2,x)\big |\\&\quad =u_0(x)\left| e^{\int _0^{t_1}\int _0^{\infty } R(x,y)f(s,y)dyds}-e^{\int _0^{t_2}\int _0^{\infty }R(x,y)f(s,y)dyds}\right| \\&\quad \le u_0(x)e^{C_L\rho T}C_L\rho |t_1-t_2|, \end{aligned}$$

and therefore

$$\begin{aligned} \Vert A(f)(t_1)-A(f)(t_2)\Vert _{\omega }&\le \Vert u_0\Vert _{\omega }e^{C_L\rho T}C_L\rho |t_1-t_2|, \end{aligned}$$

from where it follows that \(A(f)\in C([0, T), L^1([0,\infty ),\omega dx))\). On the other hand, if we choose \(\rho =2\Vert u_0\Vert _{\omega }\) and \(T>0\) such that \(e^{C_L\rho T}\le 2\), we deduce from (5.31) that \(\Vert A(f)\Vert _T\le \rho \), i.e., \(A(f)\in Y_{\rho ,T}\).

Let now f and g be in \(Y _{ \rho , T }\). By computations similar to the previous ones,

$$\begin{aligned} \Vert A(f)-A(g)\Vert _{T}&\le \Vert u_0\Vert _{\omega }e^{C_L\rho T} C_LT\Vert f-g\Vert _T, \end{aligned}$$

and if T is such that

$$\begin{aligned} \Vert u_0\Vert _{\omega }e^{C_L\rho T} C_LT<1, \end{aligned}$$

then A is a contraction on \(Y _{ \rho , T }\), and has then a fixed point u that satisfies (5.29) for all \(t\in (0, T)\) and \(a. e. \, x>0\). It then follows in particular that \(u\ge 0\). Let us denote

$$\begin{aligned} T _{ \max }=\sup \Big \{T>0; \exists \, \rho >0, \exists u\in Y _{ \rho , T }\,\,\hbox {satisfying}\,\, (5.29),\,\forall t\in [0, T)\Big \}. \end{aligned}$$

We claim that if \(T _{ \max }<\infty \), then \(\limsup _{t\rightarrow T_{\max }}\Vert u(t)\Vert _{\omega }=\infty \). Suppose that \(T_{\max }<\infty \) and \(\limsup _{ t\rightarrow T_{\max } }\Vert u(t)\Vert _{\omega }=\ell <\infty \), and let \(t_n\rightarrow T_{\max }\). For every \(n\in \mathbb {N}\) we define \(\rho _n=2\Vert u(t_n)\Vert _{\omega }\), and the map

$$\begin{aligned} A_n(f)(t,x)=u(t_n, x)e^{\int _0^t\int _0^\infty R(x, y)f(s, y)dyds}, \end{aligned}$$

for \(f\in C([0, T), L^1([0,\infty ),\omega dx))\), \(T>0\). For every \(T>0\), \(t\in [0,T)\), \(x\in [0,L]\), and \(f\in Y_{\rho _n,T}\),

$$\begin{aligned} A_n(f)(t,x)&\le u(t_n, x)e^{C_L\rho _n T},\\ \Vert A_n(f)\Vert _{T }&\le \Vert u(t_n)\Vert _{\omega }e^{C_L\rho _n T}, \end{aligned}$$

and for all T such that \(T\le (\ln 2)/(C_L\rho _n)=T_n\), it follows that \(A_n(f)\in Y_{\rho _n,T}\). Notice that by hypothesis, \(\rho _n\le 2\ell \) for all n, and then

$$\begin{aligned} T_n\ge \frac{\ln 2}{C_L2\ell }:=\tau _1\quad \forall n\in \mathbb {N}. \end{aligned}$$

Now let f and g be in \(Y_{\rho _n,T}\) for \(T>0\). Arguing as before

$$\begin{aligned} \Vert A_n(f)-A_n(g)\Vert _{T}&\le \Vert u(t_n)\Vert _{\omega }e^{C_L2\ell T} C_LT\Vert f-g\Vert _T, \end{aligned}$$

and since

$$\begin{aligned} u(t_n,x)\le u_0(x)e^{C_L2\ell T_{\max }},\quad \forall n\in \mathbb {N}, \end{aligned}$$

then

$$\begin{aligned} \Vert A_n(f)-A_n(g)\Vert _{T}&\le \Vert u_0\Vert _{\omega }e^{C_L2\ell (T+T_{\max })} C_LT\Vert f-g\Vert _T. \end{aligned}$$

If we choose \(\tau _2>0\) such that

$$\begin{aligned} \Vert u_0\Vert _{\omega }e^{C_L2\ell (\tau _2+T_{\max })} C_L\tau _2<1, \end{aligned}$$

and we let \(\tau _*=\min \{\tau _1,\tau _2\},\) then \(A_n\) is a contraction from \(Y_{\rho _n,\tau _*}\) into itself, and has then a fixed point, say \(v_n\). The function \(v_n\) satisfies

$$\begin{aligned} v_n(t, x)=u(t_n, x)e^{\int _0^t\int _0^\infty R(x, y)\, v_n(s, y)\, dyds},\quad \forall t\in [0, \tau _*),\,\text {a.e. }x\in [0,\infty ). \end{aligned}$$

Therefore, the function \(w_n\) defined as

$$\begin{aligned} w_n(t,x)={\left\{ \begin{array}{ll} u(t, x)&{}\text {if}\;t\in [0,t_n)\\ v_n(t-t_n, x),&{}\text {if}\; t\in [t_n, t_n+\tau _*) \end{array}\right. } \end{aligned}$$

satisfies the integral equation:

$$\begin{aligned} w_n(t, x)=u_0(x)e^{\int _0^t\int _0^\infty R(x, y)w_n(s, y)dyds},\quad \forall t\in [0, t_n+\tau _*). \end{aligned}$$

Since \(t_n\rightarrow T_{\max }\), then \(t_n+\tau _*>T _{ \max }\) for n large enough, and this contradicts the definition of \(T _{ \max }\). We deduce that, either \(T _{ \max }=\infty \), and the solution is said to be global, or \(\limsup _{t\rightarrow T_{\max }}\Vert u(t)\Vert _{\omega }= \infty \) and the solution is said to blow up in finite time, at \(T _{ \max }\).

Since for all \(T<T _{ \max }\), \(t\in [0,T]\) and \(a. e.\,x\in [0,L]\),

$$\begin{aligned}&\left| \frac{d}{dt}\left( u_0(x)\varphi (x)e^{\int _0^t\int _0^{\infty }R(x,y)u(s,y)dyds}\right) \right| \\&\quad \le u_0(x)|\varphi (x)|e^{C_LT\Vert u\Vert _T}C_LT\Vert u\Vert _T\quad (\text {integrable in }x), \end{aligned}$$

we may then multiply both sides of the Eq. (5.29) by a function \(\varphi \in C_b([0,\infty ))\) and integrate on \([0,\infty )\):

$$\begin{aligned} \frac{d}{dt}\int _0^\infty u(t, x)\varphi (x)dx=\int _0^\infty u(t, x)\varphi (x)\left( \int _0^\infty R(x, y)u(t, y)dy\right) dx, \end{aligned}$$

and since

$$\begin{aligned} \left| u(t,x)u(t,y)\varphi (x)R(x, y)\right| \in L^1([0,\infty )\times [0,\infty ))\quad \forall t\in [0,T_{\max }), \end{aligned}$$

by Fubini's Theorem and the antisymmetry of R(x, y),

$$\begin{aligned}&\frac{d}{dt}\int _0^\infty u(t, x)\varphi (x)dx=\int _0^\infty \int _0^\infty \varphi (x)R(x, y) u(t,x)u(t,y)dxdy\nonumber \\&\quad =\frac{1}{2}\int _0^\infty \int _0^\infty (\varphi (x)-\varphi (y)) R(x, y)u(t,x)u(t,y)dxdy. \end{aligned}$$
(5.32)

This shows that u is a weak solution of (5.1), (5.2). If \(\varphi =1\):

$$\begin{aligned} \frac{d}{dt}\int _0^\infty u(t, x)dx=0, \end{aligned}$$

and then \(\Vert u(t)\Vert _1=\Vert u_0\Vert _1\) for all \(t>0\). Then, since

$$\begin{aligned} \int _0^{\infty }R(x,y)u(s,y)dy&\le \frac{C_L\Vert u_0\Vert _1}{x^{3/2}}, \end{aligned}$$

we obtain from (5.29)

$$\begin{aligned} u(t,x)\le u_0(x)e^{ \frac{tC_L\Vert u_0\Vert _1}{x^{3/2}}}, \end{aligned}$$

and then

$$\begin{aligned} \Vert u(t)\Vert _{\omega }\le \int _0^{\infty }\omega (x)e^{\frac{tC_L\Vert u_0\Vert _1}{x^{3/2}}}u_0(x)dx. \end{aligned}$$
(5.33)

Notice that (1.40) implies

$$\begin{aligned} \forall r>0,\qquad \int _0^1 u_0(x)\frac{e^{\frac{r}{x^{3/2}}}}{x^{3/2}}dx<\infty . \end{aligned}$$
(5.34)

Indeed, if we write \(x^{-3/2}=e^{-\frac{3}{2}\ln x}\), then for all \(r>0\),

$$\begin{aligned} \int _0^1u_0(x) \frac{e^{\frac{r}{x^{3/2}}}}{x^{3/2}}dx= \int _0^1 u_0(x)e^{\frac{r}{x^{3/2}}-\frac{3}{2}\ln x}dx \le \int _0^1 u_0(x)e^{\frac{r'}{x^{3/2}}}dx<\infty , \end{aligned}$$

where

$$\begin{aligned} r'=r+e^{-1}=\max _{x\in [0,1]}\left( r-\frac{3}{2}x^{3/2}\ln x\right) . \end{aligned}$$
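For completeness, the value of \(r'\) comes from an elementary maximization. Setting \(g(x)=-\frac{3}{2}x^{3/2}\ln x\),

$$\begin{aligned} g'(x)=-\frac{3}{2}x^{1/2}\Big (\frac{3}{2}\ln x+1\Big ), \end{aligned}$$

which vanishes on (0, 1) only at \(x=e^{-2/3}\), where \(g(e^{-2/3})=e^{-1}\); since moreover \(g(0^+)=g(1)=0\), it follows that \(\max _{x\in [0,1]}\big (r-\frac{3}{2}x^{3/2}\ln x\big )=r+e^{-1}\).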

We then obtain from (5.33), (5.34) that

$$\begin{aligned} \Vert u(t)\Vert _{\omega }\le \int _0^{\infty }\omega (x)e^{\frac{tC_L\Vert u_0\Vert _1}{x^{3/2}}}u_0(x)dx<\infty \quad \forall t\in [0,T_{\max }), \end{aligned}$$
(5.35)

therefore \(\limsup _{t\rightarrow T_{\max }}\Vert u(t)\Vert _{\omega }<\infty \) if \(T_{\max }<\infty \), and then, by the alternative, \(T_{\max }=\infty \).

Step 2 For a general initial data \(u_0\), let \(u_{0,n}(x)=u_0(x)\mathbb {1}_{[0,n]}(x)\), and \(u_n\) be the weak solution constructed in Step 1 for the initial data \(u_{0,n}\) that satisfies

$$\begin{aligned} u_n(t,x)=u_{0,n}(x)e^{\int _0^t\int _0^{\infty }R(x,y)u_n(s,y)dyds}, \end{aligned}$$
(5.36)

and \(\Vert u_n(t)\Vert _1=\Vert u_{0,n}\Vert _1\le \Vert u_0\Vert _1\) for all \(t>0\) and all \(n\in \mathbb {N}\). Then, arguing as in the proof of Theorem 1.2, a subsequence of \(\{u_n\}_{n\in \mathbb {N}}\) (not relabelled) converges in the space \(C([0,\infty ),\mathscr {M}_+([0,\infty )))\) to some \(u\in C([0,\infty ),\mathscr {M}_+([0,\infty )))\). On the other hand, since for all \(n\in \mathbb {N}\),

$$\begin{aligned} \int _0^{\infty }R(x,y)u_n(s,y)dy\le \frac{C^*}{x^{3/2}} \int _x^{\infty }e^{\frac{(1-\theta )y}{2}}u_n(s,y)dy\le \frac{C_0}{x^{3/2}}, \end{aligned}$$
(5.37)

where \(C_0=C^*\int _0^{\infty }e^{\eta y}u_0(y)dy\), it follows from (1.40) that for all \(\varepsilon >0\) there exists \(\delta >0\) such that for all \(E\subset [0,\infty )\) measurable with \(|E|<\delta \),

$$\begin{aligned} \int _E u_n(t,x)dx&\le \int _{E}u_0(x)e^{\frac{C_0 t}{x^{3/2}}}dx<\varepsilon \quad \forall n\in \mathbb {N},\;\forall t>0. \end{aligned}$$
(5.38)

Moreover, for all \(\varepsilon >0\) there exists \(M>0\) such that

$$\begin{aligned} \int _{M}^{\infty }u_n(t,x)dx&\le e^{-\eta M}\int _M^{\infty }e^{\eta x}u_n(t,x)dx\nonumber \\&\le e^{-\eta M}\int _0^{\infty }e^{\eta x}u_0(x)dx<\varepsilon \quad \forall n\in \mathbb {N},\;\forall t>0. \end{aligned}$$
(5.39)

It then follows from (5.38)–(5.39) and Dunford–Pettis Theorem, that for all \(t>0\), a subsequence of \(u_n(t)\) (not relabelled) converges to a function \(U(t)\in L^1([0,\infty ))\) in the weak topology \(\sigma (L^1,L^{\infty })\). Therefore we deduce that for all \(t>0\),

$$\begin{aligned} \int _0^{\infty }\varphi (x)U(t,x)dx=\int _{[0,\infty )}\varphi (x)u(t,x)dx\,\forall \varphi \in C_b([0,\infty )), \end{aligned}$$

i.e., the measure u(t) is absolutely continuous with respect to the Lebesgue measure, with density U(t). With some abuse of notation we identify u and U. The goal now is to pass to the limit in (5.36) as \(n\rightarrow \infty \). Since \(R(x,\cdot )\in L^{\infty }([0,\infty ))\) for a.e. \(x>0\) and all \(t>0\), and

$$\begin{aligned}&\int _0^{\infty }|R(x,y)|u_n(s,y)dy\le \frac{C^*}{x^{3/2}}\left( \int _0^x e^{\frac{x-y}{2}}u_n(s,y)dy\right. \nonumber \\&\quad \left. +\int _x^{\infty }e^{\frac{y-x}{2}}u_n(s,y)dy\right) \end{aligned}$$
(5.40)
$$\begin{aligned}&\le \frac{C^*}{x^{3/2}}\left( e^{\eta x}\Vert u_0\Vert _1+\int _0^{\infty }e^{\eta y} u_0(y)dy\right) ,\,\forall n\in \mathbb {N}, \end{aligned}$$
(5.41)

it follows by the weak convergence \(u_n(t)\rightharpoonup u(t)\) and dominated convergence, that for all \(t>0\), a.e. \(x>0\),

$$\begin{aligned} \lim _{n\rightarrow \infty }\int _0^t\int _0^{\infty }R(x,y)u_n(s,y)dyds =\int _0^t\int _0^{\infty }R(x,y)u(s,y)dyds, \end{aligned}$$

and then, using that \(u_{0,n}\rightarrow u_0\) a.e., (5.37), and dominated convergence,

$$\begin{aligned}&\lim _{n\rightarrow \infty }\int _0^{\infty }u_{0,n}(x)e^{\int _0^t\int _0^{\infty }R(x,y)u_n(s,y)dyds}dx\\&\quad =\int _0^{\infty }u_{0}(x)e^{\int _0^t\int _0^{\infty }R(x,y)u(s,y)dyds}dx. \end{aligned}$$

Therefore, u satisfies (5.29) for all \(t>0\) and \(a. e.\, x>0\).

Arguing as in (5.37) we obtain (1.43), and arguing as in Step 1 we obtain (1.42).

We now claim that

$$\begin{aligned} u\in C([0,\infty ),L^1((0,\infty ))). \end{aligned}$$
(5.42)

For all \(T>0\), \(t_1\) and \(t_2\) with \(0\le t_1\le t_2\le T\), we have by (5.40),

$$\begin{aligned}&\Vert u(t_1)-u(t_2)\Vert _1\\&\quad \le \int _0^{\infty } u_0(x)\left| e^{\int _0^{t_1}\int _0^{\infty }R(x,y)u(s,y)dyds}-e^{\int _0^{t_2}\int _0^{\infty }R(x,y)u(s,y)dyds}\right| dx\\&\quad \le \int _0^{\infty }u_0(x)e^{\frac{TC_0}{x^{3/2}}}\left( \int _{t_1}^{t_2}\int _0^{\infty }|R(x,y)|u(s,y)dyds\right) dx\\&\quad \le |t_1-t_2|\int _0^{\infty }u_0(x)e^{\frac{TC_0}{x^{3/2}}}\frac{C^*}{x^{3/2}}\left( e^{\eta x}\Vert u_0\Vert _1+\int _0^{\infty }e^{\eta y} u_0(y)dy\right) dx, \end{aligned}$$

and then (5.42) follows using (1.40). Arguing as in Step 1 we deduce that u is a weak solution of (5.1), (5.2). \(\square \)

Proof of Theorem 1.4

Theorem 1.4 follows from Proposition 5.7 since the function \(b(k, k')=\frac{\Phi \mathcal {B}_\beta }{kk'}\) satisfies (2.5)–(2.11). \(\square \)

Remark 5.8

The same proof shows that Theorem 1.4 is still true for the equation

$$\begin{aligned} \frac{\partial v}{\partial t}(t,k)=v(t, k)\int \limits _{[0, \infty )} v(t, k') \big (e^{- \beta k }-e^{-\beta k'}\big )\frac{\mathcal {B}_\beta (k, k')}{kk'}dk', \end{aligned}$$
(5.43)

where the redistribution function \(\mathcal {B}_\beta \) is kept without truncation. This is possible because the property (1.40) is also propagated by the weak solutions of (5.43) such that

$$\begin{aligned} v(t, k)=v_0(k)e^{\int _0^{t}\int _0^\infty \big (e^{- \beta k }-e^{-\beta k'}\big )\frac{\mathcal {B}_\beta (k, k')}{kk'}\, v(s, k')\, dk'\, ds}. \end{aligned}$$
(5.44)

Notice in particular that the integral term in the exponential is well defined when v(t) satisfies (1.40).

Remark 5.9

Let u and v be two solutions of (1.39) with a compactly supported initial data \(u_0\in L^1([0,\infty ))\) satisfying (1.40) and such that \({\text {supp}}(u_0)\subset [0,L]\), \(L>0\). It follows from the representation (5.29) that, for all \(t>0\) and \(a.e.\, x>0\),

$$\begin{aligned}&|u(t,x)-v(t, x)|\le u_0(x)e^{\frac{t C_L\Vert u_0\Vert _1}{x^{3/2}}} \frac{C_L}{x^{3/2}}\int _0^t\int _0^{\infty }|u(s,y)-v(s,y)|dyds, \end{aligned}$$

and then, by Gronwall’s Lemma, \(u=v\) for a.e. \(t>0\) and a.e. \(x>0\).

5.3 \(\varvec{M_\alpha } \) as Lyapunov Functional

The goal of this Section is to study the functionals \(M _{ \alpha }(u(t))\), defined in (1.24), and \(D _{ \alpha }(u(t))\), defined in (1.44), along the weak solutions of problem (5.1), and to prove, in particular, Theorem 1.6.

Let us start with the following simple lemma, which establishes a monotonicity property for the moments of a solution to (5.1).

Lemma 5.10

Let u be the weak solution of (5.1) given by Theorem 5.1 for an initial data \(u_0\in \mathscr {M}_+([0,\infty ))\) satisfying (5.3). Then, the weak formulation (5.5) holds for \(\varphi (x)=x^{\alpha }\) for all \(\alpha \ge 1\). Moreover, for all \(t_0\ge 0 \),

$$\begin{aligned} M_{\alpha }(u(t))\le M_{\alpha }(u(t_0))\qquad \forall t\ge t_0. \end{aligned}$$
(5.45)

Proof

Let \(\alpha \ge 1\) and \(\varphi (x)=x^{\alpha }\). We first notice from (5.3) and (5.7) that \(M_{\alpha }(u(t))<\infty \) for all \(t\ge 0\). Then, consider an approximation \(\{\varphi _k\}_{k\in \mathbb {N}}\subset C^1_b([0,\infty ))\) such that \(\varphi _k\) is nondecreasing, \(\varphi _k'(0)=0\), \(\varphi _k'\le \varphi '\) for all \(k\in \mathbb {N}\), and \(\varphi _k\rightarrow \varphi \) pointwise as \(k\rightarrow \infty \). Using the definite sign of the right hand side of (5.5) for the test function \(\varphi _k\), we obtain

$$\begin{aligned} \frac{d}{dt}\int _{[0,\infty )}\varphi _k(x)u(t,x)dx\le 0\qquad \forall t>0,\;\;\forall k\in \mathbb {N}, \end{aligned}$$

from where, for all \(t_0\ge 0\), \(\int _{[0,\infty )}\varphi _k(x)u(t,x)dx\le \int _{[0,\infty )}\varphi _k(x)u(t_0,x)dx\) for all \(t\ge t_0\) and all \(k\in \mathbb {N}\), and then (5.45) follows from the dominated convergence theorem, letting \(k\rightarrow \infty \).

Let us prove now that (5.5) holds for \(\varphi (x)=x^{\alpha }\). From (5.5) for the test function \(\varphi _k\),

$$\begin{aligned}&\int _{[0,\infty )}\varphi _k(x)u(t,x)dx=\int _{[0,\infty )}\varphi _k(x)u_0(x)dx\nonumber \\&\quad +\int _0^t \iint _{[0,\infty )^2}k_{\varphi _k}(x,y)u(s,x)u(s,y)dydxds. \end{aligned}$$
(5.46)

Using that \(\varphi _k'\le \varphi '\) for all \(k\in \mathbb {N}\), we obtain from (A.4) that for all \((x,y)\in \Gamma \),

$$\begin{aligned} |k_{\varphi _k}(x,y)|\le C\alpha \max \{x,y\}^{\alpha -1}e^{\frac{|x-y|}{2}}, \quad C=\max \left\{ \frac{(1-\theta )^2}{\theta \delta _*(1+\theta )},\rho _*\right\} , \end{aligned}$$

and, since \(|x-y|\le (1-\theta )\max \{x,y\}\) and \(\max \{x,y\}\le \theta ^{-1}\min \{x,y\}\) for all \((x,y)\in \Gamma \), we then deduce using also (5.7) and (5.45), that for all \(t\ge 0\) and \(k\in \mathbb {N}\),

$$\begin{aligned}&\iint _{[0,\infty )^2}|k_{\varphi _k}(x,y)|u(t,x)u(t,y)dydx\\&\quad \le \frac{C\alpha }{\theta ^{\alpha -1}}\iint _{[0,\infty )^2} \min \{x,y\}^{\alpha -1}e^{\frac{(1-\theta )}{2}\max \{x,y\}}u(t,x)u(t,y)dydx\\&\quad \le \frac{2C\alpha }{\theta ^{\alpha -1}} \bigg (\int _{[0,\infty )}e^{\frac{(1-\theta )x}{2}}u(t,x)dx\bigg ) \bigg (\int _{[0,\infty )}y^{\alpha -1}u(t,y)dy\bigg )\\&\quad \le \frac{2C\alpha }{\theta ^{\alpha -1}}\bigg (\int _{[0,\infty )}e^{\eta x}u_0(x)dx\bigg )\bigg ( \int _{[0,\infty )}y^{\alpha -1}u_0(y)dy\bigg ). \end{aligned}$$

On the other hand, \(k_{\varphi _k}(x,y)\rightarrow k_{\varphi }(x,y)\) for all \((x,y)\in [0,\infty )^2\) as \(k\rightarrow \infty \). Passing to the limit as \(k\rightarrow \infty \) in (5.46), it then follows from the dominated convergence theorem that for all \(t\ge 0\),

$$\begin{aligned}&\int _{[0,\infty )} \varphi (x)u(t,x)dx=\int _{[0,\infty )}\varphi (x)u_0(x)dx\nonumber \\&\quad +\int _0^t \iint _{[0,\infty )^2}k_{\varphi }(x,y)u(s,x)u(s,y)dydxds , \end{aligned}$$
(5.47)

and then (5.5) holds. \(\square \)

If u is a weak solution to (5.1) given by Theorem 5.1, then by Lemma 5.10 the following identity holds,

$$\begin{aligned} \frac{d}{dt}M_{ \alpha }(u(t))=\frac{1}{2}D _{ \alpha } (u(t)) \qquad \forall t>0. \end{aligned}$$
(5.48)

Since \(D_\alpha (u(t))\le 0\) for all \(t>0\), this shows that \(M_\alpha \) is a Lyapunov functional on these solutions. The identity (5.48) is reminiscent of the usual entropy–entropy dissipation identity. As already observed in the Introduction, since the support of the function B is contained in the region \(\Gamma \subset [0,\infty )^2\), if \(a>0\) and \(b>0\) are such that \((a, b)\not \in \Gamma \) (they do not see each other) then, for all \(\varphi \in C_b^1([0, \infty ))\) such that \(\varphi '(0)=0\),

$$\begin{aligned} \iint _{ [0, \infty )^2 }\delta (x-a)\delta (y-b)R(x, y)(\varphi (x)-\varphi (y))dxdy=0. \end{aligned}$$

Let us then see some of the consequences of this simple observation.

Definition 5.11

We say that two points a and c on \([0,\infty )\) are \(\Gamma \)-disjoint if \((a, c)\not \in \Gamma \). We say that two sets A and C on \([0,\infty )\) are \(\Gamma \)-disjoint if for all \((a,c)\in A\times C\), \((a, c)\not \in \Gamma \), i.e., if \(A\times C\subset [0,\infty )^2\setminus \mathring{\Gamma }\).

Since the support of any given measure \(u\in \mathscr {M}_+([0,\infty ))\) is, by definition, a closed subset of \([0,\infty )\), then

$$\begin{aligned} ({\text {supp}}(u))^c=\bigcup _{k=0}^{\infty }I_k,\,\,\, I_k\;\text {open interval },\,\, I_k\cap I_j=\emptyset \;\;\text {if}\;\;k\ne j. \end{aligned}$$
(5.49)

We may write \(I_k=(a_k,b_k)\) with \(0\le a_k<b_k\) for all \(k\in \mathbb {N}\), except if \({\text {supp}}\,u\subset [r,\infty )\) for some \(r>0\), in which case \(I_k=[0,b_k)\) (i.e., \(a_k=0\)) for one value of k. We now define

$$\begin{aligned} \mathcal {I}=\{I_k:\gamma _1(b_k)\ge a_k\}, \end{aligned}$$

and denote by \(\{C_k\}_{k\in \mathcal {J}}\) the connected components of \(\big (\bigcup _{I\in \mathcal {I}}I\big )^c\). Notice that, a priori, \(\mathcal {J}\) could be uncountable. Finally define, for all \(u\in \mathscr {M}_+([0,\infty ))\),

$$\begin{aligned} A_k (u)=C_k\cap {\text {supp}}(u),\,\,\,\forall k\in \mathcal {J}. \end{aligned}$$
(5.50)

Notice by (5.50) that \(A_k(u)\) is a closed subset of \([0,\infty )\) for all \(k\in \mathbb {N}\), since it is the intersection of two closed sets.

We write \(A_k (u)=A_k\) when no confusion is possible.
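When u is purely atomic with finitely many atoms, this construction reduces to grouping consecutive support points that are not \(\Gamma \)-disjoint. The following Python sketch is only meant to make the construction concrete: the function gamma1 below is a hypothetical stand-in for \(\gamma _1\) of Remark 2.1 (only the property \(\gamma _1(x)<x\) matters here).

```python
# Sketch: grouping the support points of a purely atomic measure u into the
# sets A_k of Definition 5.11.  gamma1 is a hypothetical stand-in for the
# function gamma_1 of Remark 2.1.

def gamma1(x):
    # hypothetical: a point y < x sees x (i.e. (x, y) lies in Gamma) iff y > gamma1(x)
    return 0.5 * x

def components(support):
    """Split the (sorted) support into the clusters A_k: a new cluster starts
    whenever two consecutive points x < y are Gamma-disjoint, i.e. x <= gamma1(y)."""
    clusters = []
    for p in sorted(support):
        if clusters and clusters[-1][-1] > gamma1(p):   # p sees the previous point
            clusters[-1].append(p)
        else:                                           # Gamma-disjoint gap: new A_k
            clusters.append([p])
    return clusters

# Example: with gamma1(x) = x/2, the points 1 and 1.8 see each other (1 > 0.9),
# while 1.8 and 4 do not (1.8 <= 2), so they fall into different sets A_k.
print(components([1.0, 1.8, 4.0, 4.5]))   # [[1.0, 1.8], [4.0, 4.5]]
```

By Lemma 5.16 below, the total mass carried by each such cluster is conserved along the evolution.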

Lemma 5.12

\(\mathcal {J}\) is a countable set.

Proof

Given two elements of \(\mathcal {I}\), there is at most a finite number of elements of \(\mathcal {I}\) between them. More precisely, we claim that, for any given \(I_i\in \mathcal {I}\), \(I_j\in \mathcal {I}\), with \(I_i=(a_i,b_i)\), \(I_j=(a_j,b_j)\), \(0<b_i\le a_j\), then: \(\text {card}(\{I_k=(a_k,b_k)\in \mathcal {I}: b_i\le a_k<b_k\le a_j\})<\infty .\) The proof of this fact starts with the following trivial remark: if \(I_k\in \mathcal {I}\), then \(|I_k|=b_k-a_k\ge b_k-\gamma _1(b_k)\). Using this, if we consider the decreasing sequence \(b_j\), \(\gamma _1(b_j)\), \(\gamma _1^2(b_j)=\gamma _1(\gamma _1(b_j))\), \(\gamma _1^3(b_j)\),..., then \(\gamma _1^m(b_j)<b_i\) for some integer m, and therefore there can be at most m elements of \(\mathcal {I}\) between \(I_i\) and \(I_j\).

For the sake of the argument, let us say that given two elements \(I_i=(a_i,b_i)\) and \(I_j=(a_j,b_j)\) of \(\mathcal {I}\), there are 2 more elements \(I_1=(a_1,b_1)\) and \(I_2=(a_2,b_2)\) of \(\mathcal {I}\) between them, i.e.,

$$\begin{aligned} a_i<b_i\le a_1<b_1\le a_2<b_2\le a_j<b_j. \end{aligned}$$

Then, there are 3 connected components in \((a_i,b_j)\setminus \big (I_i\cup I_1\cup I_2\cup I_j\big )\), namely \([b_i,a_1]\), \([b_1,a_2]\) and \([b_2,a_j]\). With this idea, it can be proved that the number of connected components of \([0,\infty )\setminus \big (\bigcup _{I\in \mathcal {I}}I\big )\), i.e., the collection \(\{C_k\}_{k\in \mathcal {J}}\), is at most countable. \(\square \)

We prove now several useful properties of the collection \(\{A_k\}_{k\in \mathbb {N}}\).

Lemma 5.13

Let \(u\in \mathscr {M}_+([0,\infty ))\) and consider the collection \(\{A_k\}_{k\in \mathbb {N}}\) constructed above. Then \(A_i\) and \(A_j\) are \(\Gamma \)-disjoint if and only if \(i\ne j\), and

$$\begin{aligned} {\text {supp}}(u)=\bigcup _{k=0}^{\infty }A_k. \end{aligned}$$
(5.51)

Proof

It is clear that \(A_i\) and \(A_i\) are not \(\Gamma \)-disjoint, since \(A_i\times A_i\) contains points on the diagonal, and therefore on \(\mathring{\Gamma }\). Now, if \(i\ne j\), we first observe that \(A_i\) and \(A_j\) are disjoint. Indeed, by definition \(A_i\subset C_i\) and \(A_j\subset C_j\), where \(C_i\) and \(C_j\) are different connected components of \([0,\infty )\setminus \big (\bigcup _{I\in \mathcal {I}}I\big )\), therefore disjoint. We now prove that \(A_i\) and \(A_j\) are in fact \(\Gamma \)-disjoint. Let us assume that \(A_i\) is on the left of \(A_j\), i.e., \(\sup A_i<\inf A_j\). It follows from the construction that there exists at least one \(I_k=(a_k,b_k)\in \mathcal {I}\) between \(A_i\) and \(A_j\), i.e., such that

$$\begin{aligned} \sup A_i\le a_k<b_k\le \inf A_j. \end{aligned}$$

By definition of \(\mathcal {I}\), the points \(a_k\) and \(b_k\) are \(\Gamma \)-disjoint, and then, for all \((a_i,a_j)\in A_i\times A_j\),

$$\begin{aligned} \gamma _1(a_j)\ge \gamma _1(b_k)\ge a_k\ge a_i, \end{aligned}$$

hence \(a_i\) and \(a_j\) are \(\Gamma \)-disjoint. Finally, (5.51) follows from the construction. Indeed, since by definition \(A_k=C_k\cap {\text {supp}}(u)\), then \(\cup _{k\in \mathbb {N}}A_k\subset {\text {supp}}\,u\). On the other hand, by definition \(\cup _{k\in \mathbb {N}} C_k=[0,\infty )\setminus \big (\cup _{I\in \mathcal {I}}I\big )\), and then by (5.49)

$$\begin{aligned} {\text {supp}}(u)=\bigcap _{k\in \mathbb {N}}I_k^c\subset \bigcup _{k\in \mathbb {N}}C_k, \end{aligned}$$

from where the inclusion \({\text {supp}}(u)\subset \cup _{k\in \mathbb {N}}A_k\) follows. \(\square \)

In the remaining part of the section we will use several times the following simple remark.

Remark 5.14

Consider the function \(z(x)=x-\gamma _1(x)\), \(x\ge 0\), where \(\gamma _1\) is given by (2.13) in Remark 2.1. Then, z is a continuous and strictly increasing function on \([0,\infty )\), with \(z(0)=0\).

In the next Lemma we prove that any two sets \(A_i\) and \(A_j\) of the collection \(\{A_k\}_{k\in \mathbb {N}}\) are separated from each other by a positive distance, bounded from below in terms of the function z(x) of Remark 5.14.

Lemma 5.15

Let \(u\in \mathscr {M}_+([0,\infty ))\) and consider the collection \(\mathcal {A}=\{A_k\}_{k\in \mathbb {N}}\) constructed above. Suppose that \(card (\mathcal {A})\ge 2\). For any \(k\in \mathbb {N}\), let us denote \(x_k=\min A_k\) and \(y_k=\sup A_k\). Given two elements \(A_i\), \(A_j\) in \(\mathcal {A}\), suppose that \(y_i<x_j\). Then,

$$\begin{aligned} dist (A_i,A_j)\ge x_j-\gamma _1(x_j)>0. \end{aligned}$$
(5.52)

Moreover, for every \(\varepsilon >0\), let

$$\begin{aligned} \mathcal {A}_{\varepsilon }=\{A_k\in \mathcal {A}: A_k\subset (\varepsilon ,\infty )\}. \end{aligned}$$
(5.53)

If \(\mathcal {A}_{\varepsilon }\ne \emptyset \) and \(card (\mathcal {A}_{\varepsilon })\ge 2\), then

$$\begin{aligned} dist (A_i,A_j)> \varepsilon -\gamma _1(\varepsilon )>0\qquad \forall A_i,A_j\in \mathcal {A}_{\varepsilon },\,i\ne j. \end{aligned}$$
(5.54)

Proof

Since \(A_i\) and \(A_j\) are closed sets and \(y_i<x_j\), it follows that \(\text {dist}(A_i,A_j)=x_j-y_i\). By Lemma 5.13, the closed sets \(A_i\) and \(A_j\) are \(\Gamma \)-disjoint and then, by Definition 5.11,

$$\begin{aligned} y_i\le \gamma _1(x_j). \end{aligned}$$

Therefore \(\text {dist}(A_i,A_j)\ge x_j-\gamma _1(x_j)\) and, since \(x_j>0\), (5.52) follows from Remark 5.14.

Let now \(\varepsilon >0\) be fixed and consider \(A_i\) and \(A_j\) in \(\mathcal {A}_{\varepsilon }\). Without loss of generality, we may assume that \(y_i<x_j\). Using Remark 5.14, it then follows from (5.52) and (5.53) that

$$\begin{aligned} \text {dist}(A_i,A_j)\ge z(x_j)> z(\varepsilon )>0. \end{aligned}$$

\(\square \)

Lemma 5.16

Let u be the weak solution of (5.1) constructed in Theorem 5.1 for an initial data \(u_0\in \mathscr {M}_+([0,\infty ))\) satisfying (5.3), and consider the collection \(\mathcal {A}=\{A_k(u_0)\}_{k\in \mathbb {N}}\) constructed above. Then

$$\begin{aligned} \int _{A_k}u(t,x)dx=\int _{A_k}u_0(x)dx\qquad \forall t>0,\;\forall k\in \mathbb {N}. \end{aligned}$$
(5.55)

Proof

In the trivial case where \(A_k={\text {supp}}(u_0)\) is the only nonempty element of the collection, (5.55) is just the conservation of mass (5.6). Suppose then that \(\text {card}(\mathcal {A})\ge 2\). We consider separately two different cases.

(i) Suppose that there exists \(\varepsilon >0\) such that \([0,\varepsilon ]\subset {\text {supp}}(u_0)\). Then, since \([0,\varepsilon ]\) cannot intersect \(A_k\) for two different values of k, there exists \(k_0\in \mathbb {N}\) such that \([0,\varepsilon ]\subset A _{ k_0 }\). In particular \(A _{ k_0 }\not \in \mathcal {A}_\varepsilon \). Let us see that

$$\begin{aligned} \mathcal {A}=\mathcal {A}_\varepsilon \cup \{A _{ k_0 }\}. \end{aligned}$$
(5.56)

Arguing by contradiction, suppose that for some \(\ell \not =k_0\) we have \(A_\ell \in \mathcal {A} \setminus \mathcal {A}_\varepsilon \). Since \([0,\varepsilon ]\subset A_{k_0}\) and \(A_{k_0}\cap A_{\ell }=\emptyset \), then \(x_{\ell }=\min A_{\ell }>\varepsilon \). Therefore \(A_{\ell }\in \mathcal {A}_{\varepsilon }\), which is a contradiction.

We wish now to estimate from below the distances \(\text {dist}(A_i, A_j)\) for all \(A_i\in \mathcal {A}\), \(A_j\in \mathcal {A}\), \(i\not =j\). By (5.54) and (5.56),

$$\begin{aligned} \text {dist}(A_i,A_j)>\varepsilon -\gamma _1(\varepsilon )>0\qquad \forall i\not = k_0, \forall j\not = k_0, i\not =j. \end{aligned}$$
(5.57)

On the other hand, for all \(i\not =k_0\), \(x_i=\min A_i >\varepsilon \) by (5.56) and then, by (5.52) and Remark 5.14,

$$\begin{aligned} \text {dist}(A_i,A _{ k_0 })\ge x_i-\gamma _1(x_i)=z(x_i)>z(\varepsilon ). \end{aligned}$$
(5.58)

By (5.56)–(5.58) we have then:

$$\begin{aligned} \text {dist}(A_i,A _{ j })>z(\varepsilon )>0, \,\,\,\forall A_i\in \mathcal {A}, \, \forall A_j\in \mathcal {A},\;A_i\ne A_j. \end{aligned}$$
(5.59)

For any fixed \(k\in \mathbb {N}\), we now claim that, since \(A_i\) is closed for every \(i\in \mathbb {N}\), by (5.59) the set

$$\begin{aligned} D_k=\bigcup _{i\in \mathbb {N},i\ne k}A_i \end{aligned}$$
(5.60)

is a closed subset of \([0, \infty )\). In order to prove that property, let us assume, by contradiction, that there exists a point \(x_*\in \overline{D}_k \setminus D_k\). Let \(\{x_n\} _{ n\in \mathbb {N}}\subset D_k\) be a sequence that converges to \(x_*\). In particular \(\{x_n\} _{ n\in \mathbb {N}}\) is a Cauchy sequence. Therefore, by (5.59), there exist \(k_*\in \mathbb {N}\setminus \{k\}\) and \(n_*\in \mathbb {N}\) sufficiently large such that:

$$\begin{aligned} x_n\in A _{ k_* },\,\,\forall n\ge n_*. \end{aligned}$$

Since \(A _{ k_* }\) is a closed set, it follows that \(x_*\in A _{ k_* }\subset D_k\), and this is a contradiction.

By (5.59), \(D_k\) and \(A_k\) are disjoint subsets of \([0,\infty )\). Therefore, by Urysohn’s lemma, there exists a function \(\varphi \in C_b([0,\infty ))\) such that

$$\begin{aligned} \varphi (x)=1\quad \forall x\in A_k\quad \text {and}\quad \varphi (x)=0\quad \forall x\in D_k. \end{aligned}$$

Using (5.59) and a density argument, we may assume that \(\varphi \in C^1_b([0,\infty )).\) Then, since \({\text {supp}}(u(t))={\text {supp}}(u_0)\) (cf. Proposition 5.4 (iii)), it follows from (5.51) that

$$\begin{aligned} \int _{[0,\infty )}\varphi (x)u(t,x)dx=\int _{A_k}u(t,x)dx, \end{aligned}$$

and since \(A_i\) and \(A_j\) are \(\Gamma \)-disjoint for \(i\ne j\) (cf. Lemma 5.13), then by construction of \(\varphi \),

$$\begin{aligned}&\iint _{[0,\infty )^2}R(x,y)(\varphi (x)-\varphi (y))u(t,x)u(t,y)dydx\\&\quad =\sum _{i=0}^{\infty }\iint _{A_i\times A_i}R(x,y)(\varphi (x)-\varphi (y))u(t,x)u(t,y)dydx\\&\quad =\iint _{A_k\times A_k}R(x,y)(\varphi (x)-\varphi (y))u(t,x)u(t,y)dydx=0, \end{aligned}$$

from where (5.55) follows by the weak formulation.

(ii) Suppose that the assumption of part (i) does not hold. In this case, there exists a strictly decreasing sequence \(\{x_n\}_{n\in \mathbb {N}}\) with \(x_n\rightarrow 0\) as \(n\rightarrow \infty \) such that \(x_n\notin {\text {supp}}(u_0)\) for all \(n\in \mathbb {N}\). Moreover, since \({\text {supp}}(u_0)\) is a closed set, for each \(n\in \mathbb {N}\) there exists \(\delta _n>0\) such that

$$\begin{aligned} (x_n-\delta _n,x_n+\delta _n)\subset ({\text {supp}}(u_0))^c. \end{aligned}$$

For every \(n\in \mathbb {N}\) and \(k\in \mathbb {N}\) fixed such that

$$\begin{aligned} A_k\in \mathcal {A} _{ x_n }, \end{aligned}$$
(5.61)

where \(\mathcal {A}_{x_n}\) is defined in (5.53), we consider the set:

$$\begin{aligned} D _{ k, n }=\bigcup _{\begin{array}{c} A_i\in \mathcal {A}_{x_n}\\ A_i\ne A_k \end{array}} A_i \end{aligned}$$

Using now (5.54) for \(\varepsilon =x_n\) we deduce that \(D _{ k, n }\) is a closed set by the same argument as for \(D_k\) in (5.60). By Urysohn’s lemma again, we can then construct a test function \(\varphi \in C^1_b([0,\infty ))\) such that

$$\begin{aligned} \varphi (x)=1\quad \forall x\in A_k\qquad \text {and}\qquad \varphi (x)=0\quad \forall x\in [0,x_n]\cup D _{ k, n }. \end{aligned}$$

Arguing as in part (i), we then deduce that

$$\begin{aligned} \int _{A_k}u(t,x)dx=\int _{A_k}u_0(x)dx\qquad \forall t>0,\;\forall A_k\in \mathcal {A}_{x_n}. \end{aligned}$$
(5.62)

We use now that

$$\begin{aligned} \mathcal {A}=\bigg (\bigcup _{ n\in \mathbb {N}}\mathcal {A} _{ x_n }\bigg ) \bigcup \Big \{ A_i\in \mathcal {A}: A_i \not \subset (0, \infty )\Big \} \end{aligned}$$

because

$$\begin{aligned} \bigcup _{ n\in \mathbb {N}}\mathcal {A} _{ x_n } = \left\{ A_j \in \mathcal {A}: A_j \subset (0, \infty )\right\} . \end{aligned}$$

But, if \(A_i \not \subset (0, \infty )\), then \(0\in A_i\). Therefore, if \(0\not \in {\text {supp}}(u_0)\) there is no such \(A_i\). If \(0 \in {\text {supp}}(u_0)\), since the sets \(A_k\) are pairwise disjoint, such a set \(A_i\) is unique. It follows that there exists at most one \(k_0\in \mathbb {N}\) such that:

$$\begin{aligned} \mathcal {A}=\bigg (\bigcup _{ n\in \mathbb {N}}\mathcal {A} _{ x_n }\bigg ) \bigcup \Big \{ A _{ k_0 }\Big \}. \end{aligned}$$
(5.63)

The equality (5.55) then follows from (5.62), (5.63) and the conservation of mass (5.6). \(\square \)

We may prove now the main result of this Section.

Proof of Theorem 1.6

Let us prove \((i)\implies (iii)\). Suppose that \(D_{\alpha }(u)=0\), and let, for \(\varepsilon >0\),

$$\begin{aligned} \Gamma _{\varepsilon }=\{(x,y)\in \Gamma : d((x,y), \partial \Gamma )>\varepsilon ,\;|x-y|>\varepsilon \}. \end{aligned}$$

Since \(b(x,y)(e^{-x}-e^{-y})(x^{\alpha }-y^{\alpha })<0\) for all \((x,y)\in \Gamma _{\varepsilon }\), it follows from (i) that \( {\text {supp}}(u\times u)\subset \Gamma _{\varepsilon }^c. \) Letting \(\varepsilon \rightarrow 0\), we deduce that

$$\begin{aligned} {\text {supp}}(u\times u)\subset \Delta \cup (\mathring{\Gamma })^c,\qquad \Delta =\{(x,x):x\ge 0\}. \end{aligned}$$
(5.64)

Notice that any two points \(y<x\) in the support of u have to be at a positive distance, namely \(x-y\ge x-\gamma _1(x)\). Otherwise \(\gamma _1(x)<y\), and then \((x,y)\in \mathring{\Gamma }\setminus \Delta \), in contradiction with (5.64). Moreover, since the map \(z(x)= x-\gamma _1(x)\) is continuous and strictly increasing on \([0,\infty )\), with \(z(0)=0\), it follows that the support of u consists of at most a countable number of points, whose only possible accumulation point is \(x=0\). Therefore \(A_k=\{x_k\}\) for all \(k\in \mathbb {N}\), and then (iii) holds.

Let us prove \((iii)\implies (i)\). If u is as in (iii), then \({\text {supp}}(u\times u)=\{(x_i,x_j):i,j\in \mathbb {N}\}\), and then

$$\begin{aligned} D_{\alpha }(u)=\sum _{i\le j}\chi (i,j)\alpha _i\alpha _jb(x_i,x_j)(e^{-x_i}-e^{-x_j})(x_i^{\alpha }-x_j^{\alpha })=0, \end{aligned}$$

where \(\chi (i,j)=2\) if \(i\ne j\) and \(\chi (i,j)=1\) if \(i=j\). Indeed, the terms with \(i=j\) vanish due to the factor \((e^{-x_i}-e^{-x_j})(x_i^{\alpha }-x_j^{\alpha })\), and for those terms with \(i\ne j\), then \(b(x_i,x_j)=0\) since \((x_i,x_j)\notin \Gamma \).

We now prove \((iii)\implies (ii)\). Using (5.51) in Lemma 5.13 and the definition of \(x_k\), for any \(v\in \mathcal {F}\),

$$\begin{aligned} M_{\alpha }(v)=\sum _{k=0}^{\infty }\int _{A_k}x^{\alpha }v(x)dx\ge \sum _{k=0}^{\infty }x_k^{\alpha }m_k=M_{\alpha }(u), \end{aligned}$$

and since \(u\in \mathcal {F}\), u is indeed the minimizer of \(M_\alpha \).

We finally prove \((ii)\implies (iii)\). Let u be a minimizer of \(M_\alpha \) and let \(v=\sum _{k=0}^{\infty }m_k\delta _{x_k}\). We already know by the previous case that v is also a minimizer of \(M_\alpha \), hence \(M_{\alpha }(u)=M_{\alpha }(v)\). Since moreover

$$\begin{aligned} M_{\alpha }(v)=\sum _{k=0}^{\infty }x_k^{\alpha }m_k=\sum _{k=0}^{\infty }x_k^{\alpha }\int _{A_k}u(x)dx, \end{aligned}$$

it follows that

$$\begin{aligned} \sum _{k=0}^{\infty }\int _{A_k}(x^{\alpha }-x_k^{\alpha })u(x)dx=0. \end{aligned}$$

By definition of \(x_k\), all the terms in the sum above are nonnegative, and therefore

$$\begin{aligned} \int _{A_k}(x^{\alpha }-x_k^{\alpha })u(x)dx=0\qquad \forall k\in \mathbb {N}, \end{aligned}$$

which implies that \(A_k=\{x_k\}\) for all \(k\in \mathbb {N}\), and therefore \(u=v\). \(\square \)

5.4 Long Time Behavior

This Section is devoted to the proof of Theorem 1.8, which we have divided into several steps. For a given increasing sequence \(t_n\rightarrow \infty \) as \(n\rightarrow \infty \), let us define

$$\begin{aligned} u_n(t)=u(t+t_n),\qquad t\ge 0,\;n\in \mathbb {N}, \end{aligned}$$
(5.65)

where u is the weak solution of (5.1) constructed in Theorem 5.1 for an initial data \(u_0\in \mathscr {M}_+([0,\infty ))\) satisfying (5.3). We first notice by (5.48) and Lemma 5.5 that for all \(\alpha >1\) and \(t>0\),

$$\begin{aligned} \frac{1}{2}\int _0^t |D_{\alpha }(u(s))|ds=M_{\alpha }(u_0)-M_{\alpha }(u(t))\le M_{\alpha }(u_0), \end{aligned}$$

so by letting \(t\rightarrow \infty \) we deduce \(D_{\alpha }(u)\in L^1([0,\infty ))\). Since moreover

$$\begin{aligned} \int _0^{t}D_{\alpha }(u_n(s))ds=\int _{t_n}^{t_n+t}D_{\alpha }(u(s))ds,\qquad \forall t\ge 0, \end{aligned}$$

it follows that

$$\begin{aligned} \lim _{n\rightarrow \infty }\int _0^{t}D_{\alpha }(u_n(s))ds=0\qquad \forall t\ge 0. \end{aligned}$$
(5.66)

Proposition 5.17

Let u be the weak solution of (5.1) constructed in Theorem 5.1 for an initial data \(u_0\in \mathscr {M}_+([0,\infty ))\) satisfying (5.3). For every sequence \(\{t_n\} _{ n\in \mathbb {N}}\) such that \(t_n\rightarrow \infty \), there exist a subsequence, still denoted \(\{t_n\} _{ n\in \mathbb {N}}\), and

$$\begin{aligned} U\in C([0,\infty ),\mathscr {M}_+([0,\infty ))) \end{aligned}$$
(5.67)

such that for all \(\varphi \in C([0,\infty ))\) satisfying (2.43), and all \(t>0\),

$$\begin{aligned} \lim _{n\rightarrow \infty }\int _{[0,\infty )}\varphi (x)u(t+t_n,x)dx=\int _{[0,\infty )}\varphi (x)U(t,x)dx. \end{aligned}$$
(5.68)

Moreover, U is a weak solution of (5.1) such that \(M_0(U(t))=M_0(u_0)\) for all \(t>0\).

Proof

The proof is the same as the first part of the proof of Theorem 1.2 for Eq. (2.1). \(\square \)

Lemma 5.18

Let u, \(u_0\) and U be as in Proposition 5.17. Then

$$\begin{aligned} {\text {supp}}(U(t))={\text {supp}}(U(0))\subset {\text {supp}}(u_0)\qquad \forall t\ge 0. \end{aligned}$$
(5.69)

Proof

On the one hand, since U is a weak solution of (5.1), then by Proposition 5.4 it follows that \({\text {supp}}(U(t))={\text {supp}}(U(0))\) for all \(t>0\), where U(0) is given by (5.68) for \(t=0\). On the other hand, again by Proposition 5.4 we have, in particular, that \({\text {supp}}(u_n(0))={\text {supp}}(u_0)\) for all \(n\in \mathbb {N}\). The result then follows from the convergence of \(u_n(0)\) towards U(0) in the sense of (5.68). Indeed, let \(x_0\in {\text {supp}}\,U(0)\). We use the characterization of the support of a measure given in the proof of part (iii) of Proposition 5.4. Then

$$\begin{aligned} \rho _{\varphi }=\int _{[0,\infty )}\varphi (x)U(0,x)dx>0, \end{aligned}$$

for all \(\varphi \in C_c([0,\infty ))\) such that \(0\le \varphi \le 1\) and \(\varphi (x_0)>0\). Using then (5.68) for \(t=0\), we deduce that for all \(\varphi \) as before, there exists \(n_*\in \mathbb {N}\), such that

$$\begin{aligned} \int _{[0,\infty )}\varphi (x)u_n(0,x)dx\ge \frac{\rho _{\varphi }}{2}>0\qquad \forall n\ge n_*, \end{aligned}$$

and then \(x_0\in {\text {supp}}(u_n(0))={\text {supp}}(u_0)\). \(\square \)

A partial identification of the limit U is given in our next Proposition.

Proposition 5.19

Let u, \(u_0\) and U be as in Proposition 5.17. Then

$$\begin{aligned} U(t)=\mu \qquad \forall t\ge 0, \end{aligned}$$
(5.70)

where \(\mu \) is the measure defined in (1.46).

Proof

We first prove that \(D_{\alpha }(U(t))=0\) for a.e. \(t>0\) and for all \(\alpha >1\). Indeed, with \(u_n(t)=u(t+t_n)\) as in (5.65), we deduce, arguing as in the proof of Theorem 1.2, that

$$\begin{aligned} \lim _{n\rightarrow \infty }\int _0^t D_{\alpha }(u_n(s))ds=\int _0^t D_{\alpha } (U(s))ds=0\qquad \forall t\ge 0, \end{aligned}$$

hence \(D_{\alpha }(U(t))=0\) for a.e. \(t\ge 0\).

Then by Theorem 1.6 , there exist \(m_j(t)\ge 0\), \(x_j(t)\ge 0\) such that,

$$\begin{aligned}&U(t)=\sum _{j=0}^{\infty }m_j(t)\delta _{x_j(t)}, \end{aligned}$$
(5.71)
$$\begin{aligned}&x_i(t),\,x_j(t)\quad \text {are}\;\;\Gamma \text {-disjoint}\;\;\forall i\ne j. \end{aligned}$$
(5.72)

By (5.69) in Lemma 5.18,

$$\begin{aligned} x_j(t)=x_j(0):=x_j'\in {\text {supp}}(u_0)\qquad \forall t\ge 0,\;\forall j\in \mathbb {N}. \end{aligned}$$
(5.73)

Furthermore, since by Proposition 5.17, U is a weak solution of (5.1), it follows from Lemma 5.16 that for all \(t\ge 0\), \(j\in \mathbb {N}\),

$$\begin{aligned} m_j(t)=\int _{\{x_j(t)\}}U(t,x)dx=\int _{\{x_j(0)\}}U(0,x)dx=m_j(0):=m_j', \end{aligned}$$

and then by (5.71) we conclude that U is independent of t.

Let us prove now that U satisfies Properties 1-4. Properties 1 and 2 are already proved in (5.72) and (5.73). In order to prove 3, let \(k\in \mathbb {N}\) and \(\varphi \in C^1_c([0,\infty ))\) be such that \(\varphi (x)=1\) for all \(x\in A_k\) and \(\varphi (x)=0\) for all \(x\in \cup _{i\ne k} A_i\). This construction is possible by Urysohn’s Lemma. Then, by (5.55) in Lemma 5.16,

$$\begin{aligned} \int _{[0,\infty )}\varphi (x)u_n(t,x)dx=\int _{A_k}u_n(t,x)dx=m_k,\qquad \forall n\in \mathbb {N}, \end{aligned}$$

and then by (5.68) in Proposition 5.17,

$$\begin{aligned} \int _{[0,\infty )}\varphi (x)U(x)dx=m_k. \end{aligned}$$

Since \({\text {supp}}(U)\subset {\text {supp}}(u_0)\) by Lemma 5.18, we then deduce

$$\begin{aligned} \int _{[0,\infty )}\varphi (x)U(x)dx=\int _{A_k}U(x)dx=\sum _{j\in \mathcal {J}_k}m_j', \end{aligned}$$

and thus Property 3 holds.

Let us prove Property 4. Let \(k\in \mathbb {N}\) and suppose that \(x_k=\min \{x\in A_k\}>0\). By (1.47) in Property 3, the set \(\mathcal {J}_k\) is non empty. Let then \(j\in \mathcal {J}_k\). If \(x_j'=x_k\), there is nothing left to prove. Suppose then \(x_j'\ne x_k\), which by definition of \(x_k\) implies \(x_j'>x_k\). We first notice that between \(x_k\) and \(x_j'\) there can only be a finite number of points \(x_i'\) with \(i\in \mathcal {J}_k\). This is because \(x_k>0\) and the points \(x_i'\), \(i\in \mathcal {J}_k\), are pairwise \(\Gamma \)-disjoint, thus the only possible accumulation point of any sequence of such points is \(x=0\). Consequently, the point \(x_{j_0}'=\min \{x_i':i\in \mathcal {J}_k\}\) is well defined. Again, if \(x_{j_0}'=x_k\), there is nothing left to prove. Suppose then \(x_{j_0}'>x_k\), and let \(0<\varepsilon <(x_{j_0}'-x_k)/2\). On the one hand,

$$\begin{aligned} \int _{[x_k,x_{j_0}'-\varepsilon ]}U(x)dx=0. \end{aligned}$$
(5.74)

On the other hand, let us show that the integral in (5.74) is strictly positive, which will be a contradiction. Since \(A_k\subset {\text {supp}}(u_0)\), in particular

$$\begin{aligned} \delta =\int _{[x_k,x_{j_0}'-\varepsilon )}u_0(x)dx>0,\quad \text {and}\quad \int _{A_k\cap [x_{j_0}'-\varepsilon ,\infty )}u_0(x)dx>0, \end{aligned}$$

and then by Proposition 5.5,

$$\begin{aligned} \int _{[0,x_{j_0}'-\varepsilon )}u(t,x)dx>\int _{[0,x_{j_0}'-\varepsilon )}u_0(x)dx\qquad \forall t>0. \end{aligned}$$

We now deduce from Lemma 5.16 that

$$\begin{aligned} \int _{\{x<x_k\}}u(t,x)dx=\text {constant}=\int _{\{x<x_k\}}u_0(x)dx\qquad \forall t>0, \end{aligned}$$

and then we obtain

$$\begin{aligned} \int _{[x_k,x_{j_0}'-\varepsilon )}u(t,x)dx>\int _{[x_k,x_{j_0}'-\varepsilon )}u_0(x)dx=\delta \qquad \forall t>0. \end{aligned}$$

It then follows from (5.68) that

$$\begin{aligned} \int _{[x_k,x_{j_0}'-\varepsilon ]}U(x)dx\ge \limsup _{n\rightarrow \infty }\int _{[x_k,x_{j_0}'-\varepsilon ]}u_n(t,x)dx\ge \delta >0, \end{aligned}$$

in contradiction with (5.74). \(\square \)

Proof of Theorem 1.8

By Propositions 5.17 and 5.19 and Lemma 5.18, there exists a sequence \(\{t_n\}_{n\in \mathbb {N}}\) such that, if \(u_n(t)=u(t+t_n)\) for all \(t>0\) and \(n\in \mathbb {N}\), then \(u_n\) converges in \(C([0,\infty ),\mathscr {M}_+([0,\infty )))\) to the measure \(\mu \) defined in (1.46).

Let us assume that for some other sequence \(\{s_m\}_{m\in \mathbb {N}}\), the sequence \(\omega _m(t)=u(t+s_m)\) is such that \(\omega _m\) converges in \(C([0,\infty ),\mathscr {M}_+([0,\infty )))\) to a measure \(W\in C([0,\infty ),\mathscr {M}_+([0,\infty )))\).

Arguing as before, there exists a subsequence of \(\{\omega _m\}_{m\in \mathbb {N}}\), still denoted \(\{\omega _m\}_{m\in \mathbb {N}}\), such that, \(\{\omega _m(t)\}_{m\in \mathbb {N}}\) converges narrowly to a measure \(W\in \mathscr {M}_+([0,\infty ))\) for every \(t\ge 0\) as \(m\rightarrow \infty \). Moreover, the limit W is of the form

$$\begin{aligned} W=\sum _{j=0}^{\infty }c_j\delta _{y_j}, \end{aligned}$$

and satisfies the properties 1-4 in Theorem 1.8. We claim that \(W=U\).

By Point (i) of Proposition 5.5, for any \(x\ge 0\), the map \(t\mapsto \int _{[x,\infty )}u(t,y)dy\) is monotone nonincreasing on \([0,\infty )\). Therefore the following limit exists:

$$\begin{aligned} F(x)=\lim _{t\rightarrow \infty }\int _{[x,\infty )}u(t,y)dy,\quad x\ge 0. \end{aligned}$$

From,

$$\begin{aligned} \int _{[x,\infty )}u(t,y)dy\ge \int _{[x,\infty )}\omega _m(t,y)dy,\quad \forall m\in \mathbb {N}, \end{aligned}$$

we first deduce that,

$$\begin{aligned} \int _{[x,\infty )}u(t,y)dy\ge \int _{[x,\infty )}W(y)dy. \end{aligned}$$

On the other hand, it follows from the narrow convergence,

$$\begin{aligned} \int _{[x,\infty )}W(y)dy\ge \limsup _{m\rightarrow \infty }\int _{[x,\infty )}\omega _m(t,y)dy, \end{aligned}$$

and then

$$\begin{aligned} F(x)=\int _{[x,\infty )}W(y)dy. \end{aligned}$$

The same argument yields

$$\begin{aligned} F(x)=\int _{[x,\infty )}\mu (y)dy, \end{aligned}$$

and then, using that \(M_0(W)=M_0(u_0)=M_0(\mu )\), it follows that W and \(\mu \) have the same (cumulative) distribution function, and therefore \(W=\mu \) (cf. [21], Example 1.44, p. 20). \(\square \)

We describe in the following example the behavior of a particularly simple solution of the reduced Eq. (5.1) for which, although the collection \(\{A_k(u_0)\} _{ k\in \mathbb {N}}\) has only one nonempty element, the asymptotic limit \(\mu \) is the sum of two Dirac measures.

Example 1

Let \(0<a<b<c\) be such that \(B(a,b)>0\), \(B(b,c)>0\), and \(B(a,c)=0\), and let \(x_0>0\), \(y_0>0\) and \(z_0>0\) be such that \(x_0+y_0+z_0=1\). If we define

$$\begin{aligned} u_0=x_0\delta _a+y_0\delta _b+z_0\delta _c, \end{aligned}$$

it follows from the choice of the constants \(a\), \(b\), \(c\) that \(A_0(u_0)=\{a, b, c\}\) and \(A_k(u_0)=\emptyset \) for all \(k>0\). On the other hand, by Proposition 5.4 (iii), the weak solution u of (5.1) given by Theorem 5.1 is of the form,

$$\begin{aligned} u(t)=x(t)\delta _a+y(t)\delta _b+z(t)\delta _c,\,\,\,\forall t>0, \end{aligned}$$

where, in addition, \(x(t)+y(t)+z(t)=1\) for all \(t>0\). Using the weak formulation (5.5) for the test functions \(\mathbb {1}_{[b,\infty )}\), \(\mathbb {1}_{[c,\infty )}\), and the conservation of mass, we obtain the following system of equations:

$$\begin{aligned} x'(t)&=R(a,b)x(t)y(t),\,\,\,x(0)=x_0\\ y'(t)&=-R(a,b)x(t)y(t)+R(b,c)y(t)z(t),\,\,\,y(0)=y_0\\ z'(t)&=-R(b,c)y(t)z(t),\,\,\,z(0)=z_0. \end{aligned}$$

Since \(x'(t)\ge 0\) for all t and \(x(t)\in [x_0, 1)\),

$$\begin{aligned} \lim _{ t\rightarrow \infty } x(t)=x_\infty \in [x_0, 1]. \end{aligned}$$

Moreover, for all \(t>0\),

$$\begin{aligned} y(t)&=y_0e^{\int _0^t \big (R(b,c)z(s)-R(a,b)x(s)\big )ds}\\ z(t)&=z_0e^{-R(b,c)\int _0^t y(s)ds}, \end{aligned}$$

and, by the conservation of mass,

$$\begin{aligned} \frac{y(t)}{z(t)}&=\frac{y_0}{z_0}e^{\int _0^t\big (R(b,c)-x(s)(R(a,b)+R(b,c))\big )ds}\le \frac{y_0}{z_0}e^{C t}, \nonumber \\ C&=\big (R(b,c)-x_0(R(a,b)+R(b,c))\big ). \end{aligned}$$
(5.75)
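The exponent in (5.75) can be checked directly from the expressions of y(t) and z(t) above: using the conservation of mass \(x(s)+y(s)+z(s)=1\),

$$\begin{aligned} \frac{d}{ds}\ln \frac{y(s)}{z(s)}&=R(b,c)z(s)-R(a,b)x(s)+R(b,c)y(s)\\&=R(b,c)\big (1-x(s)\big )-R(a,b)x(s)=R(b,c)-x(s)\big (R(a,b)+R(b,c)\big ), \end{aligned}$$

and the upper bound in (5.75) follows since \(x(s)\ge x_0\) for all \(s\ge 0\).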

If we suppose that

$$\begin{aligned} x_0>\frac{R(b,c)}{R(a,b)+R(b,c)}, \end{aligned}$$

then \(C<0\) and, by (5.75),

$$\begin{aligned} \lim _{t\rightarrow \infty }\frac{y(t)}{z(t)}=0. \end{aligned}$$

Using that \(z(t)\le z_0\) for all \(t>0\), we also obtain, using again (5.75), that \(y(t)\le y_0 e^{Ct}\), and then \(\lim _{t\rightarrow \infty }y(t)=0\). However, since for all \(t>0\),

$$\begin{aligned} z'(t)=-R(b,c)z(t)y(t)\ge -R(b,c)z(t)y_0e^{Ct}, \end{aligned}$$

we have,

$$\begin{aligned} z(t)\ge z_0\exp \bigg (\frac{R(b,c)y_0(1-e^{Ct})}{C}\bigg ). \end{aligned}$$

Then, since \(z'(t)<0\),

$$\begin{aligned} z_\infty =\lim _{t\rightarrow \infty }z(t)\ge z_0\exp \bigg (\frac{R(b,c)y_0}{C}\bigg )>0, \end{aligned}$$

and the measure \(\mu \) is,

$$\begin{aligned} \mu =x_\infty \delta _a+z_\infty \delta _c. \end{aligned}$$
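A direct numerical integration of the system above illustrates this limit. In the following sketch the values of \(R(a,b)\), \(R(b,c)\) and of the initial masses are hypothetical, chosen only so that the condition \(x_0>R(b,c)/(R(a,b)+R(b,c))\) holds; the script is meant to visualize the convergence, not to reproduce any computation of the paper.

```python
# Sketch: explicit Euler integration of the three-ODE system of Example 1.
# Rab, Rbc and the initial masses are hypothetical values with
# x0 = 0.6 > Rbc / (Rab + Rbc) = 0.5, so that C < 0.

Rab, Rbc = 1.0, 1.0
x, y, z = 0.6, 0.3, 0.1            # x0 + y0 + z0 = 1
dt, T = 1e-3, 200.0

for _ in range(int(T / dt)):
    dxdt = Rab * x * y
    dzdt = -Rbc * y * z
    dydt = -dxdt - dzdt            # so that x + y + z stays constant
    x, y, z = x + dt * dxdt, y + dt * dydt, z + dt * dzdt

print(f"x_inf ~ {x:.4f}, y(T) ~ {y:.2e}, z_inf ~ {z:.4f}, mass = {x + y + z:.6f}")
# y(T) is essentially zero while x and z stabilize at positive values,
# consistent with mu = x_inf * delta_a + z_inf * delta_c.
```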