1 Introduction

Coagulation and breakage models describe the mechanisms by which clusters combine to form bigger clusters or break into smaller fragments. These models are used to explain a wide range of phenomena, such as cloud droplet formation [28, 31] and planet formation [10, 30]. Each cluster in these situations is fully characterized by a size variable (volume, mass, number of monomers, etc.) that can be either a positive real number (continuous models) or a positive integer (discrete models). The clusters considered here are discrete in the sense that they are made up of a finite number of fundamental building blocks (monomers) of unit mass. Over sufficiently short time scales, coagulation events are binary, whereas breakage can occur in two ways: linear (spontaneous) or non-linear. The linear breakage process is governed solely by cluster properties (and by external forces, if any), whereas non-linear breakage occurs when two or more clusters collide and matter is transferred between them. As a result, a cluster emerging from a non-linear breakage event may be larger than either of the colliding clusters.

Denoting by \(w_{i}(t)\), \(i\ge 1\), the number of clusters made of \(i\) particles (\(i\)-clusters) per unit volume at time \(t\ge 0\), the discrete coagulation equations with collisional breakage read

$$\begin{aligned} \frac{dw_{i}}{dt} =& \frac{1}{2}\sum _{j=1}^{i-1} p_{j,i-j} \Lambda _{j,i-j} w_{j} w_{i-j} -\sum _{j=1}^{\infty} \Lambda _{i,j} w_{i} w_{j} \\ &+ \frac{1}{2} \sum _{j=i+1}^{\infty} \sum _{k=1}^{j-1} B_{j-k,k}^{i}(1- p_{j-k,k}) \Lambda _{j-k,k} w_{j-k} w_{k}, \end{aligned}$$
(1.1)
$$\begin{aligned} w_{i}(0) &= w_{i}^{\mathrm{{in}}}, \end{aligned}$$
(1.2)

for \(i \ge 1\). The first term on the right-hand side of (1.1) accounts for the appearance of \(i\)-clusters through the collision and coagulation of smaller ones, while the second term accounts for their disappearance due to collisions with other clusters. The third term describes the appearance of \(i\)-clusters after the collision and breakup of larger clusters. Here \(\Lambda _{i, j}\) denotes the rate at which clusters of size \(i\) collide with clusters of size \(j\), and \(p_{i,j}\) is the probability that two colliding clusters of sizes \(i\) and \(j\) merge into a single cluster. If coalescence does not occur, which happens with probability \((1-p_{i,j})\), the colliding clusters fragment, possibly with transfer of matter between them. The distribution function of the resulting fragments, \(\{B_{i,j}^{s},s=1,2,\ldots,i+j-1\}\), has the following properties:

$$\begin{aligned} B_{i,j}^{s} = B_{j,i}^{s} \geq 0 \hspace{.7cm} \text{and} \hspace{.7cm} \sum _{s=1}^{i+j-1} s B_{i,j}^{s} = i+j. \end{aligned}$$
(1.3)

The second condition in (1.3) ensures that mass is conserved during each collisional breakage event. We also assume that the collision kernel is non-negative and symmetric, i.e.,

$$\begin{aligned} 0\leq \Lambda _{i,j}= \Lambda _{j,i} \qquad \text{for} ~~~~i,j\ge 1. \end{aligned}$$
(1.4)
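For illustration only (this particular choice is not assumed anywhere in the sequel), the uniform fragment distribution

$$\begin{aligned} B_{i,j}^{s} = \frac{2}{i+j-1}, \qquad 1\le s\le i+j-1, \end{aligned}$$

satisfies (1.3), since \(\sum _{s=1}^{i+j-1} s\,B_{i,j}^{s} = \frac{2}{i+j-1}\cdot \frac{(i+j-1)(i+j)}{2} = i+j\).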

For a solution \(w(t) = (w_{i}(t))_{i \ge 1}\) of (1.1), we define the \(r\)-th moment as

$$\begin{aligned} \mathcal{M}_{r}(w(t))=\mathcal{M}_{r}(t) :=\sum _{i=1}^{\infty} i^{r} w_{i}(t) \hspace{.3cm} \text{for} \hspace{.3cm} r \geq 0. \end{aligned}$$
(1.5)

In the above equation (1.5), the zeroth \((r=0)\) and first \((r=1)\) moments denote the total number of particles and the total mass of particles, respectively, in the system.

Before going any further, it is important to note that in the absence of fragmentation \((p_{i, j} = 1)\), the system (1.1)–(1.2) reduces to the Smoluchowski coagulation equations, which physicists and mathematicians have studied extensively. Since matter is neither created nor destroyed in the reactions described by (1.1), it is expected that the total mass \(\mathcal{M}_{1}(t)\) remains conserved throughout the time evolution. However, when coagulation dominates fragmentation, it is by now well understood from the theory of classical coagulation-fragmentation equations that mass conservation fails in finite time for rapidly growing coagulation rates, a phenomenon known as gelation (see, e.g., [15] and the references therein). Therefore, gelation may also be expected for solutions to (1.1)–(1.2) when coagulation dominates the collisional breakage, and this is addressed in Sect. 5.

The linear (spontaneous) fragmentation equation with coagulation has received a lot of attention in the last few decades, beginning with the works of Filippov [19], Kapur [22], and McGrady and Ziff [29, 35]. In [23, 36], the semigroup technique has been employed to study the existence and uniqueness of classical solutions to linear fragmentation equations with coagulation under appropriate assumptions on the coagulation and fragmentation kernels, whereas in [11, 12, 14, 25–27, 32] questions of existence and uniqueness of weak solutions to the coagulation equation with spontaneous fragmentation have been investigated using the weak \(L^{1}\) compactness method (for more information, see [7] and the references therein). On the other hand, the nonlinear breakage equation has not been studied to the same extent. Cheng and Redner [13] discussed the dynamics of linear and collision-induced nonlinear fragmentation in the continuous setting. For the linear fragmentation process, they used scaling theory to characterize the evolution of the cluster size distribution, whereas, for the nonlinear fragmentation process, they examined the asymptotic behavior of a class of models in which a two-particle collision causes both particles to break into two equal parts, only the larger particle to split in two, or only the smaller particle to split. In addition, it is demonstrated in [13] that certain models can be transformed into the linear fragmentation equation by adjusting the time scale. This transformation technique is employed in [16] to analyze the nonlinear fragmentation equation with product collision kernels, and to investigate the existence and non-existence of solutions, as well as the formation of singularities within a finite time. Later, Krapivsky and Ben-Naim [24] studied the kinetics of nonlinear collision-induced fragmentation, obtaining the fragment mass distribution analytically from the traveling wave behavior of the nonlinear collision equation. Moreover, they showed that the system undergoes a shattering transition, in which a finite part of the mass is lost to fragments of infinitesimal size. The first mathematical study of (1.1)–(1.2) is due to Laurençot and Wrzosek [27], in which the existence, uniqueness, mass conservation, and large time behavior of weak solutions to (1.1)–(1.2) are established under suitable restrictions on the collision kernel and probability function. In [17, 18], Fasano et al. proposed an analogous (continuous) system with the imposition of a maximum cluster size in the context of liquid-liquid dispersions in chemical engineering, see also [33]. From a mathematical point of view, the continuous collision-induced fragmentation equation has recently been studied in [8, 9, 20], where coagulation is assumed to be the dominant process. When coagulation is absent, the existence, non-existence, and uniqueness of mass-preserving solutions to the continuous collision-induced fragmentation equation are investigated in [21] for collision kernels of the form \(\Lambda (x, y) = x^{\alpha _{0}}y^{\beta _{0}}+ x^{\beta _{0}} y^{ \alpha _{0}}\). This investigation shows that well-posedness depends strongly on the range of \(\alpha _{0}+\beta _{0}\) and that a finite-time singularity may occur, a phenomenon previously observed in [16] for product collision kernels (the case \(\alpha _{0}= \beta _{0}\)).

Laurençot and Wrzosek [27] address the existence, uniqueness, and various other interesting properties of weak solutions to the system (1.1)–(1.2). The goal of this paper is to prove the existence, uniqueness, and mass conservation of classical solutions to the system (1.1)–(1.2) using the approach developed in [34].

The paper is organized as follows. Section 2 is devoted to the existence of classical solutions and contains the proof of the main theorem. In Sect. 3, it is shown that the solution is unique. Section 4 addresses the positivity of solutions. Finally, in Sect. 5 the occurrence of gelation is discussed for certain classes of collision kernels.

2 Existence of Classical Solution

We begin by outlining the problem and providing some definitions. Let

$$\begin{aligned} Y_{\mu} = \Big\{ w= (w_{i})\in \mathbb{R}^{\mathbb{N}}, \sum _{i=1}^{ \infty} i^{\mu} |w_{i}| < \infty \Big\} \end{aligned}$$
(2.1)

equipped with the norm

$$\begin{aligned} \|w\|_{\mu} = \sum _{i=1}^{\infty} i^{\mu} |w_{i}|. \end{aligned}$$

We also make use of the positive cone \(Y_{\mu}^{+}\) of \(Y_{\mu}\), that is,

$$\begin{aligned} Y_{\mu}^{+} =\{ w \in Y_{\mu},~~~ w_{i} \geq 0~~~\text{for each}~~~ i \geq 1\}. \end{aligned}$$

Next we define what we mean by a solution to (1.1)–(1.2).

Definition 2.1

Let \(T\in (0,+\infty ]\) and let \(w^{\mathrm{{in}}}=(w_{i}^{\mathrm{{in}}})_{i\ge 1}\) be a sequence of non-negative real numbers. A solution to (1.1)–(1.2) on \([0,T)\) is a sequence \(w=(w_{i})_{i\ge 1}\) of non-negative continuous functions satisfying, for each \(i\ge 1\) and \(t\in (0,T)\):

  1. (a)

    \(w_{i} \in \mathcal{C}([0,T])\),

  2. (b)

    \(\int _{0}^{t} \sum _{j=1}^{\infty} \Lambda _{i,j} w_{i}w_{j} d \sigma <\infty \), \(\int _{0}^{t} \sum _{j=i+1}^{\infty} \sum _{k=1}^{j-1}(1-p_{j-k,k}) B_{j-k,k}^{i} \Lambda _{j-k,k} w_{j-k}w_{k} d\sigma <\infty \),

  3. (c)

    and there holds

    $$\begin{aligned} w_{i}(t) = w_{i}^{\mathrm{{in}}} + \int _{0}^{t} \Bigg(& \frac{1}{2} \sum _{j=1}^{i-1} p_{j,i-j}\Lambda _{j,i-j} w_{j}(\sigma ) w_{i-j}(\sigma ) -\sum _{j=1}^{ \infty} \Lambda _{i,j} w_{i}(\sigma ) w_{j}(\sigma ) \\ & +\frac{1}{2} \sum _{j=i+1}^{\infty}\sum _{k=1}^{j-1} (1-p_{j-k,k})B_{j-k,k}^{i} \Lambda _{j-k,k} w_{j-k}(\sigma ) w_{k}(\sigma ) \Bigg) d\sigma . \end{aligned}$$
    (2.2)

Throughout this section the assumptions made on the collision kernel \((\Lambda _{i,j})\) and the daughter distribution function (\(B_{i,j}^{s}\)) are the following: there are positive real numbers \(A\) and \(\beta \) such that

$$\begin{aligned} 0\le \Lambda _{i,j} \le A(i+j), \qquad i,j \ge 1, \end{aligned}$$
(2.3)
$$\begin{aligned} B_{i,j}^{s} \le \beta , \qquad 1 \le s \le i+j-1, \qquad i,j\ge 1. \end{aligned}$$
(2.4)

2.1 Approximated Solutions

For \(n\ge 1\), we define a sequence of approximations of \(w^{\mathrm{{in}}}\) and \(\Lambda _{i,j}\) by

$$\begin{aligned} w^{\mathrm{{in}},\mathrm{n}} = w^{\mathrm{{in}}}\textbf{1}_{[0,n]}, \end{aligned}$$

and

$$\begin{aligned} \Lambda _{i,j}^{n} = \textstyle\begin{cases} \Lambda _{i,j}, \qquad &\text{if} ~~~~i+j \le n \\ 0, & \text{elsewhere} \end{cases}\displaystyle \end{aligned}$$

which implies

$$\begin{aligned} \Lambda _{i,j}^{n} \le A(i+j), \qquad i,j \ge 1. \end{aligned}$$
(2.5)

Owing to (2.5), \(\Lambda _{i,j}^{n}\) exhibits at most linear growth. Consequently, we can employ [27, Theorem 3.1 and Proposition 3.7] to establish the existence of solutions \(w^{n}\) to

$$\begin{aligned} \frac{dw_{i}^{n}}{dt} =& \frac{1}{2}\sum _{j=1}^{i-1} p_{j,i-j} \Lambda _{j,i-j}^{n} w_{j}^{n} w_{i-j}^{n} -\sum _{j=1}^{\infty} \Lambda _{i,j}^{n} w_{i}^{n} w_{j}^{n} \\ &+ \frac{1}{2} \sum _{j=i+1}^{\infty} \sum _{k=1}^{j-1} B_{j-k,k}^{i}(1- p_{j-k,k}) \Lambda _{j-k,k}^{n} w_{j-k}^{n} w_{k}^{n}, \hspace{.5cm} i \in \mathbb{N}, \end{aligned}$$
(2.6)
$$\begin{aligned} w_{i}^{n}(0) &= w_{i}^{\mathrm{{in}},\mathrm{n}}, \hspace{.5cm} i \in \mathbb{N}. \end{aligned}$$
(2.7)

More precisely we have the following result.

Proposition 2.1

There is at least one non-negative mass conserving solution \(w^{n}\) to (2.6)(2.7) on \([0,+\infty )\). Moreover, \(w^{n}\) belongs to \(L_{\textit{loc}}([0,+\infty ), Y_{r})\) for all \(r>1\).

We next record a classical identity for \(w^{n}\). It is typically stated for bounded test sequences; however, the summability properties of \(w^{n}\) given in Proposition 2.1 allow us to use any test sequence with algebraic growth.

Lemma 2.1

Let \((\psi _{i})_{i\ge 1}\) be a non-negative sequence such that \((i^{-r}\psi _{i})_{i\ge 1}\) is bounded for some \(r\ge 1\). Then there holds

$$\begin{aligned} \frac{d}{dt} \sum _{i=1}^{\infty} \psi _{i} w_{i}^{n} =& \frac{1}{2} \sum _{i=1}^{\infty} \sum _{j=1}^{\infty} (\psi _{i+j}-\psi _{i} - \psi _{j}) \Lambda _{i,j}^{n} w_{i}^{n} w_{j}^{n} \\ &+\frac{1}{2} \sum _{i=1}^{\infty} \sum _{j=1}^{\infty} (1-p_{i,j}) \Big(\sum _{s=1}^{i+j-1} \psi _{s} B_{i,j}^{s}-\psi _{i+j} \Big) \Lambda _{i,j}^{n} w_{i}^{n} w_{j}^{n}. \end{aligned}$$
(2.8)
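For the reader's convenience, we sketch the computation behind (2.8). Multiplying (2.6) by \(\psi _{i}\), summing over \(i\ge 1\) and exchanging the order of summation (which is licit by Proposition 2.1, since \((\psi _{i})\) grows at most algebraically) gives

$$\begin{aligned} \frac{d}{dt} \sum _{i=1}^{\infty} \psi _{i} w_{i}^{n} =& \frac{1}{2}\sum _{i=1}^{\infty}\sum _{j=1}^{\infty} p_{i,j}\psi _{i+j}\Lambda _{i,j}^{n} w_{i}^{n} w_{j}^{n} - \frac{1}{2}\sum _{i=1}^{\infty}\sum _{j=1}^{\infty} (\psi _{i}+\psi _{j})\Lambda _{i,j}^{n} w_{i}^{n} w_{j}^{n} \\ &+ \frac{1}{2}\sum _{i=1}^{\infty}\sum _{j=1}^{\infty} (1-p_{i,j})\Big(\sum _{s=1}^{i+j-1}\psi _{s} B_{i,j}^{s}\Big)\Lambda _{i,j}^{n} w_{i}^{n} w_{j}^{n}, \end{aligned}$$

and (2.8) follows upon writing \(p_{i,j}\psi _{i+j} = \psi _{i+j} - (1-p_{i,j})\psi _{i+j}\).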

Now, let us state and prove the main theorem of this section. We follow the same approach as in [34], which deals with the Smoluchowski coagulation equations.

Theorem 2.1

Consider the system of equations given by (1.1)(1.2) and assume that the assumptions (1.3), (1.4), (2.3) and (2.4) hold. Assume further that \(\mathcal{M}_{r}(0) =\sum _{i=1}^{\infty} i^{r}w_{i}^{\mathrm{{in}}} < \infty \) for some \(r>1\). Then the infinite system (1.1)(1.2) has a global solution \((w_{i}) \in Y_{1}\).

Proof

The control of the first moment \(\mathcal{M}_{1}^{n}(t)\) of the truncated system and the boundedness of the distribution function are the essential ingredients of the proof; they imply that both \(w_{i}^{n}\) and \(\dot{w}_{i}^{n}\) are uniformly bounded. Since \(w_{i}^{n}\) is non-negative by Proposition 2.1, using (2.8) with \(\psi _{i}=i\) together with (1.3), we get the bound on the first moment

$$\begin{aligned} \sum _{i=1}^{\infty} i w_{i}^{n}(t) =\sum _{i=1}^{\infty}i w_{i}^{\mathrm{{in}},\mathrm{n}} \leq \sum _{i=1}^{\infty} i w_{i}^{\mathrm{{in}}} = \|w^{\mathrm{{in}}}\|_{1}. \end{aligned}$$
(2.9)

In addition, it follows from the above equation that \(w_{i}^{n}(t) \leq i^{-1} \|w^{\mathrm{{in}}}\|_{1}\) for each \(n\) and \(i\ge 1\). In the same way, for the derivatives, we have, for \(i \ge 1\) and \(n\ge i\)

$$\begin{aligned} \Bigg|\frac{dw_{i}^{n}}{dt}\Bigg| \leq &\frac{A}{2} \sum _{j=1}^{i-1} i w_{j}^{n} w_{i-j}^{n} + A \sum _{j=1}^{\infty}(i+j)w_{i}^{n} w_{j}^{n} \\ & + \frac{A\beta}{2}\sum _{j=i+1}^{\infty} \sum _{k=1}^{j-1} j w_{j-k}^{n} w_{k}^{n} \\ &\le A_{0} \|w^{\mathrm{{in}}}\|_{1}^{2}, \end{aligned}$$

where \(A_{0}\) is a positive constant depending on \(A\), \(\beta \) and \(\|w^{\mathrm{{in}}}\|_{1}\). Therefore, the sequence \((w_{i}^{n})\) is uniformly bounded and equicontinuous on bounded time intervals. By invoking the Arzelà–Ascoli theorem (together with a diagonal extraction), we infer that there is a subsequence of \((w_{i}^{n})_{n\ge i}\), still denoted by \((w_{i}^{n})_{n\ge i}\), which converges uniformly on compact subsets of \([0,+\infty )\) to a continuous function, say \(w_{i}\), i.e.

$$\begin{aligned} \lim _{n \to \infty} w_{i}^{n}(t) = w_{i}(t) \end{aligned}$$
(2.10)

for each \(i\ge 1\) and \(t\ge 0\). Clearly \(w_{i}(t) \ge 0\) for \(i \ge 1\) and \(t \ge 0\), and it follows from the convergence of the sequence and (2.9) that \(w(t)\in Y_{1}^{+}\) with \(\|w(t)\|_{1} \leq \|w^{\mathrm{{in}}}\|_{1}\) for \(t\ge 0\). To show that \(w_{i}(t)\) is a solution to the original problem, we need to show that the series \(\sum _{j=1}^{\infty} \Lambda _{i,j}^{n} w_{j}^{n}\) and \(\sum _{j=i+1}^{\infty} \sum _{k=1}^{j-1} (1-p_{j-k,k}) B_{j-k,k}^{i} \Lambda _{j-k,k}^{n} w_{j-k}^{n} w_{k}^{n}\) converge uniformly on bounded intervals of time \([0, T]\), \(T\in (0,+\infty )\). In order to prove this, we first establish the boundedness of higher moments. Without loss of generality, we may assume \(1< r\leq 2\) (otherwise replace \(r\) by \(\min \{r,2\}\)); taking \(\psi _{i} = i^{r}\) in (2.8), we obtain

$$\begin{aligned} \dot{\mathcal{M}}_{r}^{n}(t) = \frac{1}{2} \sum _{i=1}^{\infty}&\sum _{j=1}^{ \infty} \big[(i+j)^{r}-i^{r} -j^{r}\big]\Lambda _{i,j}^{n} w_{i}^{n}w_{j}^{n} \\ &+\frac{1}{2} \sum _{i=1}^{\infty} \sum _{j=1}^{\infty} (1-p_{i,j}) \Big( \sum _{q=1}^{i+j-1} q^{r}B_{i,j}^{q} - (i+j)^{r}\Big) \Lambda _{i,j}^{n} w_{i}^{n} w_{j}^{n}. \end{aligned}$$
(2.11)

Using (1.3) and the fact that \(q^{r}\le q\,(i+j)^{r-1}\) for \(1\le q\le i+j-1\), we deduce that the second term in the above equation is non-positive, whereas in the first term we use the following inequality from [1],

$$\begin{aligned} (i+j) [(i+j)^{r} - i^{r} - j^{r}] \leq C_{r} (ij^{r} + i^{r} j). \end{aligned}$$
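For instance, for \(r=2\) this inequality holds with \(C_{2}=2\), since

$$\begin{aligned} (i+j)\big[(i+j)^{2}-i^{2}-j^{2}\big] = 2ij(i+j) = 2\big(ij^{2}+i^{2}j\big). \end{aligned}$$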

Hence, we have

$$\begin{aligned} \dot{\mathcal{M}}_{r}^{n}(t) \leq \frac{A C_{r}}{2} \sum _{i=1}^{\infty} \sum _{j=1}^{\infty} (ij^{r}+ji^{r}) w_{i}^{n}w_{j}^{n} \leq A C_{r} \mathcal{M}_{r}^{n}(t) \mathcal{M}_{1}(0). \end{aligned}$$

With the help of Gronwall’s inequality, one can obtain

$$\begin{aligned} \mathcal{M}_{r}^{n}(t) \le \mathcal{M}_{r}^{n}(0) \exp (A C_{r} \mathcal{M}_{1}(0) t) \le \Pi _{r}(T), \end{aligned}$$
(2.12)

where \(\Pi _{r}(T) := \mathcal{M}_{r}(0)\exp (A C_{r} \mathcal{M}_{1}(0) T)\). Next, using (2.10) and a lower semicontinuity argument, we have

$$\begin{aligned} \mathcal{M}_{r}(t) \le \Pi _{r}(T). \end{aligned}$$
(2.13)
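Indeed, for every fixed \(N\ge 1\), (2.10) and (2.12) give

$$\begin{aligned} \sum _{i=1}^{N} i^{r} w_{i}(t) = \lim _{n\to \infty}\sum _{i=1}^{N} i^{r} w_{i}^{n}(t) \le \Pi _{r}(T), \qquad t\in [0,T], \end{aligned}$$

and (2.13) follows by letting \(N\to \infty \).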

Finally, to complete the proof of the theorem, we show that \(w\) is a solution to (1.1)–(1.2). To this end, using the integral forms of (1.1) and (2.6), we write

$$\begin{aligned} (w_{i}^{n}(t)&-w_{i}(t)) +w_{i}(t) \\ & = w_{i}^{\mathrm{{in}},\mathrm{n}}+ \int _{0}^{t}\Bigg[\frac{1}{2} \sum _{j=1}^{i-1} p_{i-j,j} (\Lambda _{i-j,j}^{n} -\Lambda _{i-j,j})w_{i-j}^{n}(\sigma ) w_{j}^{n}(\sigma ) \\ &\qquad \qquad +\frac{1}{2}\sum _{j=1}^{i-1} p_{i-j,j} \Lambda _{i-j,j} [w_{i-j}^{n}( \sigma ) - w_{i-j}(\sigma )]w_{j}^{n}(\sigma ) \\ & \qquad \qquad + \frac{1}{2} \sum _{j=1}^{i-1} p_{i-j,j} \Lambda _{i-j,j} [w_{j}^{n}(\sigma ) - w_{j}(\sigma )] w_{i-j}(\sigma ) \\ & \qquad \qquad + \frac{1}{2}\sum _{j=1}^{i-1} p_{i-j,j} \Lambda _{i-j,j} w_{i-j}(\sigma ) w_{j}(\sigma ) - w_{i}^{n}(\sigma ) \sum _{j=1}^{ \infty} (\Lambda _{i,j}^{n}- \Lambda _{i,j})w_{j}^{n}(\sigma ) \\ & \qquad \qquad - (w_{i}^{n}(\sigma ) -w_{i}(\sigma )) \sum _{j=1}^{ \infty} \Lambda _{i,j} w_{j}^{n}(\sigma ) -w_{i}(\sigma ) \sum _{j=1}^{ \infty} \Lambda _{i,j}(w_{j}^{n}(\sigma ) -w_{j}(\sigma )) \\ & \qquad \qquad -w_{i}(\sigma ) \sum _{j=1}^{\infty} \Lambda _{i,j} w_{j}( \sigma ) \\ & \qquad \qquad+\frac{1}{2} \sum _{j=i+1}^{\infty} \sum _{k=1}^{j-1} B_{j-k,k}^{i}(1- p_{j-k,k}) \Lambda _{j-k,k} w_{j-k}(\sigma ) w_{k}(\sigma ) \\ & \qquad \qquad +\frac{1}{2} \sum _{j=i+1}^{\infty} \sum _{k=1}^{j-1}(1-p_{j-k,k}) B_{j-k,k}^{i} (\Lambda _{j-k,k}^{n}-\Lambda _{j-k,k})w_{j-k}^{n}( \sigma ) w_{k}^{n}(\sigma ) \\ & \qquad \qquad +\frac{1}{2} \sum _{j=i+1}^{\infty} \sum _{k=1}^{j-1}(1-p_{j-k,k}) B_{j-k,k}^{i} \Lambda _{j-k,k}(w_{j-k}^{n}(\sigma )-w_{j-k}(\sigma )) w_{k}^{n}( \sigma ) \\ &\qquad \qquad +\frac{1}{2} \sum _{j=i+1}^{\infty} \sum _{k=1}^{j-1}(1-p_{j-k,k}) B_{j-k,k}^{i} \Lambda _{j-k,k}w_{j-k}(\sigma ) (w_{k}^{n}(\sigma )-w_{k}( \sigma ))\Bigg]d\sigma . \end{aligned}$$
(2.14)

In order to control the tails of the infinite sums involved in (2.14), we choose a positive constant \(r_{1}\) such that \(1+r_{1}\le r\). With the help of (2.5), (2.12) and (2.13), the tail of the series in the fifth term on the right-hand side of (2.14) is estimated as

$$\begin{aligned} \Big| \sum _{j=m}^{\infty} (\Lambda _{i,j}^{n}- \Lambda _{i,j})w_{j}^{n}( \sigma ) \Big| \le 2 A(i+1)m^{-r_{1}}\Pi _{r}(T). \end{aligned}$$
(2.15)

By applying (2.3) and (2.9), we estimate the sixth term on the right-hand side of (2.14) as

$$\begin{aligned} \Big|\sum _{j=1}^{\infty} \Lambda _{i,j} w_{j}^{n}(\sigma ) \Big| \le A(i+1)\mathcal{M}_{1}(0). \end{aligned}$$
(2.16)

Similarly, using (2.3), (2.12) and (2.13), we can estimate the tail of the seventh term on the right-hand side of (2.14) as

$$\begin{aligned} \Big|\sum _{j=m}^{\infty} \Lambda _{i,j} (w_{j}^{n}(\sigma ) - w_{j}( \sigma )) \Big| \le 4A(i+1) m^{-r_{1}} \Pi _{r}(T). \end{aligned}$$
(2.17)

Now, let us consider the tenth term of the right-hand side of (2.14) as

$$\begin{aligned} \sum _{j=i+1}^{\infty} \sum _{k=1}^{j-1}&(1-p_{j-k,k})B_{j-k,k}^{i} ( \Lambda _{j-k,k}^{n} -\Lambda _{j-k,k})w_{j-k}^{n}(\sigma )w_{k}^{n}( \sigma ) \\ & \le \sum _{j=1}^{\infty} \sum _{k=1}^{\infty} (1-p_{j,k})B_{j,k}^{i}| \Lambda _{j,k}^{n} -\Lambda _{j,k}|w_{j}^{n}(\sigma )w_{k}^{n}( \sigma ). \end{aligned}$$
(2.18)

With the help of (2.3), (2.5) and (2.12), the tail of the term on the right-hand side of (2.18) is calculated as

$$\begin{aligned} \sum _{j=1}^{\infty}\sum _{k=m}^{\infty}(1-p_{j,k})&B_{j,k}^{i}| \Lambda _{j,k}^{n} -\Lambda _{j,k}| w_{j}^{n}(\sigma )w_{k}^{n}( \sigma ) \\ & \le 2A\beta \sum _{j=1}^{\infty}\sum _{k=m}^{\infty}(j+k) w_{j}^{n}( \sigma ) w_{k}^{n}(\sigma ) \\ & \le 4A\beta \mathcal{M}_{1}(0) \sum _{k=m}^{\infty} k w_{k}^{n}( \sigma )\le 4A \beta \mathcal{M}_{1}(0)m^{-r_{1}} \Pi _{r}(T). \end{aligned}$$
(2.19)

Next, similarly to the tenth term on the right-hand side of (2.14), using (2.3), (2.12) and (2.13), we can estimate the tails of the eleventh and the twelfth terms as

$$\begin{aligned} \sum _{j=1}^{\infty}\sum _{k=m}^{\infty}(1-p_{j,k})B_{j,k}^{i} \Lambda _{j,k}|w_{j}^{n}(\sigma )-w_{j}(\sigma )|w_{k}^{n}(\sigma ) \le 4A\beta \mathcal{M}_{1}(0) m^{-r_{1}} \Pi _{r}(T), \end{aligned}$$
(2.20)
$$\begin{aligned} \sum _{j=1}^{\infty}\sum _{k=m}^{\infty}&(1-p_{j,k})B_{j,k}^{i} \Lambda _{j,k} w_{j}(\sigma )|w_{k}^{n}(\sigma )-w_{k}(\sigma )|\le 4A \beta \mathcal{M}_{1}(0) m^{-r_{1}} \Pi _{r}(T), \end{aligned}$$
(2.21)

respectively. Consequently, we infer from the above estimates that the right-hand sides of (2.15), (2.17), (2.19), (2.20), and (2.21) can be made arbitrarily small by choosing \(m\) large enough. Next, taking the limit \(n\to \infty \) in (2.14), it is readily seen that all the remaining difference terms tend to zero. Thus, we conclude that the function \(w\) is a solution to the integral form (2.2) of (1.1)–(1.2). □

Remark 2.1

From the construction of the proof of the preceding theorem, if we consider the integral form of the equations, the limit solution \(w_{i}(t)\) is differentiable, owing to the uniform convergence of \(w_{i}^{n}\) and of the sums involved. We also note that, with the boundedness of the higher moments, i.e. \(\mathcal{M}^{n}_{r} (t) \le \mathcal{M}_{r}(0) \exp ({A C_{r}\mathcal{M}_{1}(0) t})\) for \(r > 1\), the truncated solutions converge strongly, for every fixed \(t\), to the limit function, i.e.

$$\begin{aligned} \lim _{n\to \infty} \|w^{n}(t) - w(t)\|_{\mu} =0 \qquad \text{for} ~~~~ \mu < r. \end{aligned}$$
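Indeed, for \(\mu < r\), \(t\in [0,T]\) and every \(N\ge 1\), the estimates (2.12)–(2.13) give

$$\begin{aligned} \|w^{n}(t) - w(t)\|_{\mu} \le \sum _{i=1}^{N} i^{\mu}|w_{i}^{n}(t)-w_{i}(t)| + N^{\mu -r}\big(\mathcal{M}_{r}^{n}(t)+\mathcal{M}_{r}(t)\big) \le \sum _{i=1}^{N} i^{\mu}|w_{i}^{n}(t)-w_{i}(t)| + 2N^{\mu -r}\Pi _{r}(T), \end{aligned}$$

so the claim follows from (2.10) by letting first \(n\to \infty \) and then \(N\to \infty \).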

As a consequence of the previous remark, we can state the following corollary.

Corollary 2.1

Let \(w_{i}\) be the solution of (1.1)–(1.2) under the conditions of Theorem 2.1 for some \(r>1\). Then \(w_{i}\) is continuously differentiable, and it holds, on \([0, T]\), that

$$\begin{aligned} \sum _{i=1}^{\infty} iw_{i}(t) = \sum _{i=1}^{\infty} i w_{i}^{\mathrm{{in}}}. \end{aligned}$$

For collision kernels growing faster than linearly, we can only prove the local existence of solutions, as shown in the following corollary.

Corollary 2.2

Consider the infinite system (1.1)(1.2). Let \(\Lambda _{i,j}\) be a symmetric kernel and satisfy \(\Lambda _{i,j} \leq A_{1} ij\) (for \(i,j\ge 1\)) and \(\mathcal{M}_{r}(0)< \infty \) for some \(r>2\). Then the system (1.1)(1.2) has a local solution \((w_{i}) \in Y_{2}\).

Proof

The proof is similar to that of Theorem 2.1. In fact, considering \(\mathcal{M}_{2}^{n}(t)\) and using (2.11) under the assumption \(\Lambda _{i,j}\leq A_{1}ij\), we have

$$\begin{aligned} \dot{\mathcal{M}}_{2}^{n}(t) \leq A_{1}\sum _{i=1}^{\infty}\sum _{j=1}^{ \infty} i^{2} j^{2} w_{i}^{n} w_{j}^{n}\leq A_{1}(\mathcal{M}_{2}^{n}(t))^{2}. \end{aligned}$$

By using this differential inequality, we can derive the following uniform bound

$$\begin{aligned} \mathcal{M}_{2}^{n}(t) \leq \frac{1}{\frac{1}{\mathcal{M}_{2}^{n}(0)}-A_{1}t} \le \frac{1}{\frac{1}{\mathcal{M}_{2}(0)}-A_{1}t}, \end{aligned}$$
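Indeed, this follows by separating variables in the preceding differential inequality,

$$\begin{aligned} -\frac{d}{dt}\Big(\frac{1}{\mathcal{M}_{2}^{n}(t)}\Big) = \frac{\dot{\mathcal{M}}_{2}^{n}(t)}{(\mathcal{M}_{2}^{n}(t))^{2}} \leq A_{1}, \qquad \text{so that} \qquad \frac{1}{\mathcal{M}_{2}^{n}(t)} \geq \frac{1}{\mathcal{M}_{2}^{n}(0)} - A_{1} t, \end{aligned}$$

together with \(\mathcal{M}_{2}^{n}(0)\le \mathcal{M}_{2}(0)\).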

This bound holds only up to the finite time \(T< 1/(A_{1}\mathcal{M}_{2}(0))\). Nevertheless, it still enables us to construct a subsequence of \((w_{i}^{n})\) which, as previously, converges uniformly to a limit function \(w_{i}(t)\). Having a bound for \(\mathcal{M}_{2}(t)\), valid up to the finite time \(T\), we may then establish that the partial sums in the truncated system converge uniformly up to time \(T\), which shows the existence of a local solution. □

In the next section, we will examine the uniqueness of a classical solution to (1.1)–(1.2).

3 Uniqueness of the Solution

The method of proof of existence used in the previous section does not guarantee uniqueness, as there may be many subsequences of \((w_{i}^{n})\) which converge to different limit functions. Hence, uniqueness has to be analyzed separately.

Theorem 3.1

Assume that the assumptions (1.3) and (1.4) hold and there are \(\gamma \in [0,1]\) and \(B>0\) such that

$$\begin{aligned} \Lambda _{i,j} \leq B (i^{\gamma}+ j^{\gamma}), \qquad i,j \ge 1. \end{aligned}$$
(3.1)

Let \(w^{\mathrm{{in}}} \in Y_{r}^{+}\) for some \(r\geq 1+\gamma \). Then there is a unique solution to (1.1)–(1.2) on \([0,+\infty )\) satisfying

$$\begin{aligned} \sup _{t \in [0,T]} \sum _{i=1}^{\infty} i^{r} w_{i}(t) < \infty \end{aligned}$$
(3.2)

for each \(T\in (0,+\infty )\).

Proof

First we notice that the property (3.2) follows from (2.13). Let \(w(t)=(w_{i}(t))_{i\ge 1}\) and \(v(t)=(v_{i}(t))_{i\ge 1}\) be two solutions to (1.1)–(1.2) on \([0,T]\), where \(T>0\) with the same initial condition \(w^{\mathrm{{in}}}=(w_{i}^{\mathrm{{in}}})_{i\ge 1} \in Y_{r}^{+}\). Let \(u := w- v\).

Define

$$\begin{aligned} \rho (t) = \sum _{i=1}^{\infty} i |u_{i}(t)|, \end{aligned}$$
(3.3)

where

$$\begin{aligned} u_{i}(t) =& w_{i}(t) -v_{i}(t) =\int _{0}^{t} \frac{1}{2} \sum _{j=1}^{i-1} p_{j,i-j} \Lambda _{j,i-j}[w_{j}(s) w_{i-j}(s) -v_{j}(s) v_{i-j}(s)]ds \\ &- \int _{0}^{t}\sum _{j=1}^{\infty} \Lambda _{i,j} [ w_{i}(s) w_{j}(s) - v_{i}(s) v_{j}(s)] ds \\ & + \int _{0}^{t} \sum _{j=i+1}^{\infty} \sum _{k=1}^{j-1} (1-p_{j-k,k}) B_{j-k,k}^{i} \Lambda _{j-k,k}[ w_{j-k}(s)w_{k}(s) - v_{j-k}(s) v_{k}(s) ] ds. \end{aligned}$$
(3.4)

Substituting equation (3.4) into (3.3), we get

$$\begin{aligned} \rho (t) =& \frac{1}{2}\int _{0}^{t} \sum _{i=1}^{\infty} \sum _{j=1}^{i-1}i \operatorname{sgn}(u_{i}(s)) p_{j,i-j} \Lambda _{j,i-j}[w_{j}(s) w_{i-j}(s) -v_{j}(s) v_{i-j}(s)]ds \\ &-\int _{0}^{t}\sum _{i=1}^{\infty} \sum _{j=1}^{\infty}i \operatorname{sgn}(u_{i}(s)) \Lambda _{i,j} [ w_{i}(s) w_{j}(s) - v_{i}(s) v_{j}(s)] ds \\ +&\frac{1}{2}\int _{0}^{t} \sum _{i=1}^{\infty} \sum _{j=i+1}^{\infty} \sum _{k=1}^{j-1} i \operatorname{sgn}(u_{i}(s)) (1-p_{j-k,k}) B_{j-k,k}^{i} \Lambda _{j-k,k} \\ & \hspace{3.5cm} \times [w_{j-k}(s)w_{k}(s) - v_{j-k}(s) v_{k}(s) ]ds. \end{aligned}$$

In the above equation, we can change the order of summation due to the finiteness of the higher moments (given by (3.2)). Hence, by repeated application of Fubini's theorem in the first and third terms on the right-hand side of the preceding equation and rearranging the summation indices, we arrive at

$$\begin{aligned} \rho (t) =& \frac{1}{2}\int _{0}^{t} \sum _{i=1}^{\infty} \sum _{j=1}^{ \infty}(i+j) \operatorname{sgn}(u_{i+j}(s)) p_{i,j} \Lambda _{i,j} [w_{i}(s)w_{j}(s) -v_{i}(s)v_{j}(s) ] ds \\ &-\int _{0}^{t}\sum _{i=1}^{\infty} \sum _{j=1}^{\infty}i \operatorname{sgn}(u_{i}(s)) \Lambda _{i,j} [ w_{i}(s) w_{j}(s) - v_{i}(s) v_{j}(s)] ds \\ &+ \frac{1}{2}\int _{0}^{t} \sum _{k=1}^{\infty} \sum _{j=1}^{\infty} \Big(\sum _{i=1}^{j+k-1} i\operatorname{sgn}(u_{i}(s))B_{j,k}^{i}\Big) \\ &\times (1-p_{j,k}) \Lambda _{j,k}[ w_{j}(s)w_{k}(s) - v_{j}(s) v_{k}(s) ] ds. \end{aligned}$$
(3.5)

Note that

$$ w_{i}(s)w_{j}(s) -v_{i}(s)v_{j}(s) = u_{i}(s)w_{j}(s) +v_{i}(s)u_{j}(s). $$

With the help of the above identity and after rearranging the terms, (3.5) becomes,

$$\begin{aligned} \rho (t) =& \frac{1}{2}\int _{0}^{t} \sum _{i=1}^{\infty} \sum _{j=1}^{ \infty}[(i+j) \operatorname{sgn}(u_{i+j}(s)) - i \operatorname{sgn}(u_{i}(s)) - j \operatorname{sgn}(u_{j}(s))] p_{i,j} \Lambda _{i,j} \\ & \hspace{6cm} \times [u_{i}(s)w_{j}(s) +v_{i}(s)u_{j}(s) ] ds \\ &+ \frac{1}{2}\int _{0}^{t} \sum _{k=1}^{\infty} \sum _{j=1}^{\infty} \Bigg(\sum _{i=1}^{j+k-1} i\operatorname{sgn}(u_{i}(s))B_{j,k}^{i} - j\operatorname{sgn}(u_{j}(s)) - k\operatorname{sgn}(u_{k}(s))\Bigg) \\ &\hspace{6cm}\times (1-p_{j,k}) \Lambda _{j,k} \\ & \hspace{6cm} \times [ w_{j}(s)u_{k}(s) + v_{k}(s) u_{j}(s) ] ds. \end{aligned}$$

This can be rewritten as

$$\begin{aligned} \rho (t) =& \frac{1}{2}\int _{0}^{t} \sum _{i=1}^{\infty} \sum _{j=1}^{ \infty}\mathcal{P}(i,j,s) p_{i,j} \Lambda _{i,j} u_{i}(s)w_{j}(s) ds \\ &+ \frac{1}{2}\int _{0}^{t} \sum _{i=1}^{\infty} \sum _{j=1}^{\infty} \mathcal{P}(i,j,s) p_{i,j} \Lambda _{i,j}v_{i}(s)u_{j}(s) ds \\ &+ \frac{1}{2}\int _{0}^{t} \sum _{k=1}^{\infty} \sum _{j=1}^{\infty} \mathcal{Q}(j,k,s) (1-p_{j,k}) \Lambda _{j,k} w_{j}(s)u_{k}(s) ds \\ &+\frac{1}{2}\int _{0}^{t} \sum _{k=1}^{\infty} \sum _{j=1}^{\infty} \mathcal{Q}(j,k,s) (1-p_{j,k}) \Lambda _{j,k} v_{k}(s) u_{j}(s) ds := \sum _{i=1}^{4} \mathcal{R}_{i}(t), \end{aligned}$$
(3.6)

where

$$\begin{aligned} \mathcal{P} (i,j,t) := (i+j) \operatorname{sgn}(u_{i+j}(t)) - i \operatorname{sgn}(u_{i}(t)) - j \operatorname{sgn}(u_{j}(t)), \end{aligned}$$

and

$$\begin{aligned} \mathcal{Q}(i,j,t) := \sum _{k=1}^{i+j-1} k \operatorname{sgn}(u_{k}(t))B_{i,j}^{k} - i \operatorname{sgn}(u_{i}(t)) - j\operatorname{sgn}(u_{j}(t)). \end{aligned}$$

Using the properties of the signum function, we can evaluate

$$\begin{aligned} \mathcal{P}(i,j,t) u_{i}(t)& = [(i+j) \operatorname{sgn}(u_{i+j}(t)) - i \operatorname{sgn}(u_{i}(t)) - j \operatorname{sgn}(u_{j}(t))]u_{i}(t) \\ & \leq [(i+j) -i +j] |u_{i}(t)|= 2j |u_{i}(t)|. \end{aligned}$$

Similar to the preceding argument, we obtain

$$\begin{aligned} &\mathcal{P}(i,j,t) u_{j}(t) \leq 2i |u_{j}(t)|, \hspace{.5cm} \mathcal{Q}(i,j,t) u_{j}(t) \leq 2i |u_{j}(t)| \\ \hspace{.3cm} &\text{and} \hspace{.3cm} \mathcal{Q}(i,j,t) u_{i}(t) \leq 2j|u_{i}(t)|. \end{aligned}$$
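For instance, the estimate for \(\mathcal{Q}(i,j,t)u_{j}(t)\) relies on (1.3):

$$\begin{aligned} \Big(\sum _{k=1}^{i+j-1} k \operatorname{sgn}(u_{k}(t))B_{i,j}^{k}\Big) u_{j}(t) \leq \Big(\sum _{k=1}^{i+j-1} k B_{i,j}^{k}\Big) |u_{j}(t)| = (i+j)|u_{j}(t)|, \end{aligned}$$

while \(-i\operatorname{sgn}(u_{i}(t))u_{j}(t)\leq i|u_{j}(t)|\) and \(-j\operatorname{sgn}(u_{j}(t))u_{j}(t) = -j|u_{j}(t)|\), which together give \(\mathcal{Q}(i,j,t)u_{j}(t)\leq 2i|u_{j}(t)|\).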

Let us evaluate the first term in (3.6) as

$$\begin{aligned} \mathcal{R}_{1}(t)& = \frac{1}{2}\int _{0}^{t} \sum _{i=1}^{\infty} \sum _{j=1}^{\infty}\mathcal{P}(i,j,s) p_{i,j} \Lambda _{i,j} u_{i}(s)w_{j}(s) ds \\ & \leq \frac{B}{2}\int _{0}^{t} \sum _{i=1}^{\infty} \sum _{j=1}^{ \infty} 2j |u_{i}(s)| (i^{\gamma}+j^{\gamma}) w_{j}(s) ds \\ &\leq B \sup _{s\in [0,t]}(\mathcal{M}_{1}(s) + \mathcal{M}_{1+\gamma}(s)) \int _{0}^{t} \sum _{i=1}^{\infty} i |u_{i}(s)| ds \\ & \leq B \sup _{s\in [0,t]}(\mathcal{M}_{1}(s)+ \mathcal{M}_{r}(s)) \int _{0}^{t} \rho (s) ds. \end{aligned}$$

Analogously, \(\mathcal{R}_{2}(t)\), \(\mathcal{R}_{3}(t)\) and \(\mathcal{R}_{4}(t)\) can be estimated as

$$\begin{aligned} \mathcal{R}_{2}(t) & \leq B \sup _{s\in [0,t]}(\mathcal{M}_{1}(s)+ \mathcal{M}_{r}(s)) \int _{0}^{t} \rho (s) ds, \end{aligned}$$
$$\begin{aligned} \mathcal{R}_{3}(t) & \leq B \sup _{s\in [0,t]}(\mathcal{M}_{1}(s)+ \mathcal{M}_{r}(s)) \int _{0}^{t} \rho (s) ds, \end{aligned}$$
$$\begin{aligned} \mathcal{R}_{4}(t)&\leq B \sup _{s\in [0,t]}(\mathcal{M}_{1}(s)+ \mathcal{M}_{r}(s)) \int _{0}^{t} \rho (s) ds. \end{aligned}$$

Now, gathering the estimates on \(\mathcal{R}_{1}\), \(\mathcal{R}_{2}\), \(\mathcal{R}_{3}\), and \(\mathcal{R}_{4}\) and inserting them into (3.6), we obtain

$$\begin{aligned} \rho (t) &\leq 4 B \sup _{s\in [0,t]}(\mathcal{M}_{1}(s)+ \mathcal{M}_{r}(s)) \int _{0}^{t} \rho (s) ds \\ & \leq \Theta \int _{0}^{t} \rho (s) ds, \end{aligned}$$

where \(\Theta =4 B \sup _{s\in [0,T]}(\mathcal{M}_{1}(s)+ \mathcal{M}_{r}(s))\). Next, the application of Gronwall's inequality gives

$$ \rho (t) \leq 0 \times \exp (\Theta T)=0, $$

which implies \(w_{i}(t) = v_{i}(t) \) for \(t \in [0,T]\). □

In the next section, we discuss the positivity of solutions, following the proof of [2, Theorem 4.6].

4 Positivity of Solutions

Suppose that the collision kernel, the probability function, and the distribution function satisfy the following conditions:

$$\begin{aligned} \Lambda _{i,1}>0, \hspace{.3cm} 0< p_{i,1}< 1, \hspace{.2cm} \text{and} \hspace{.2cm} B_{i,j}^{1}>0, \hspace{.2cm} \text{for all} \hspace{.2cm} i,j \geq 1. \end{aligned}$$
(4.1)

Then the solution to (1.1)–(1.2) is either trivial (identically zero) or strictly positive for all \(t > 0\). Namely, the following theorem holds.

Theorem 4.1

Let (4.1) hold and \(w\) be a non-negative continuous solution of (1.1)(1.2) on \([0, T]\). Suppose that there exists \(r>1\) such that \(w_{r}^{\mathrm{{in}}} > 0\). Then \(w_{i}(t)>0\) for all \(t\in [0,T]\) and all \(i\geq 1\).

Proof

Assume for the sake of contradiction that \(w_{i}(\tau ) = 0\) for some \(i\) and some \(\tau \in (0,T]\). If \(i >1\), then consider

$$\begin{aligned} \frac{dw_{i}}{dt} = \varphi _{i}(t) - w_{i}(t) \varpi _{i}(t) \end{aligned}$$
(4.2)

where

$$\begin{aligned} \varphi _{i}(t)={}& \frac{1}{2}\sum _{j=1}^{i-1} p_{j,i-j} \Lambda _{j,i-j} w_{j}(t) w_{i-j}(t) \\ &{}+\frac{1}{2} \sum _{j=i+1}^{\infty} \sum _{k=1}^{j-1} B_{j-k,k}^{i}(1- p_{j-k,k}) \Lambda _{j-k,k} w_{j-k}(t) w_{k}(t), \end{aligned}$$

and

$$\begin{aligned} \varpi _{i}(t) = \sum _{j=1}^{\infty} \Lambda _{i,j} w_{j}(t). \end{aligned}$$

Now from (4.2), we obtain

$$\begin{aligned} 0 = w_{i}(\tau ) \exp \Big(\int _{0}^{\tau}\varpi _{i}(s) ds\Big) = w_{i}(0) + \int _{0}^{\tau} \exp \Big(\int _{0}^{t}\varpi _{i}(s) ds\Big) \varphi _{i}(t) dt. \end{aligned}$$
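This is the usual integrating-factor identity: by (4.2),

$$\begin{aligned} \frac{d}{dt}\Big( w_{i}(t)\exp \Big(\int _{0}^{t}\varpi _{i}(s) ds\Big)\Big) = \varphi _{i}(t)\exp \Big(\int _{0}^{t}\varpi _{i}(s) ds\Big), \end{aligned}$$

and integrating over \((0,\tau )\) yields the identity above. Since \(w_{i}(0)\ge 0\) and \(\varphi _{i}\ge 0\), both terms on its right-hand side must vanish.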

Hence

$$\begin{aligned} \frac{1}{2}\sum _{j=1}^{i-1} p_{j,i-j} \Lambda _{j,i-j} w_{j}(t) w_{i-j}(t) +\frac{1}{2} \sum _{j=i+1}^{\infty} \sum _{k=1}^{j-1} B_{j-k,k}^{i}(1- p_{j-k,k}) \Lambda _{j-k,k} w_{j-k}(t) w_{k}(t)=0. \end{aligned}$$

Next, recalling (4.1), we end up with

$$\begin{aligned} \sum _{j=1}^{i-1} p_{i-j,j} \Lambda _{i-j,j} w_{i-j}(t) w_{j}(t) =0 \end{aligned}$$

for all \(t \in [0,\tau ]\), and thus either \(w_{i-1}(\tau ) =0 \) or \(w_{1}(\tau ) =0\). If \(w_{1}(\tau ) \neq 0\), we obtain \(w_{i-1}(\tau )=0\); repeating the same argument with \(i\) replaced by \(i-1\), \(i-2\), and so on, we arrive in every case at \(w_{1}(\tau ) =0\).

For \(w_{1}\), we have

$$\begin{aligned} \frac{dw_{1}}{dt} = -w_{1}(t) \varsigma (t)+ \ell (t), \hspace{.2cm} t\in (0,T]. \end{aligned}$$
(4.3)

where

$$\begin{aligned} \varsigma (t) = \sum _{j=1}^{\infty} \Lambda _{1,j} w_{j}(t), \hspace{.3cm} \ell (t) = \frac{1}{2} \sum _{j=1}^{\infty} \sum _{k=1}^{\infty} B_{j,k}^{1}(1- p_{j,k}) \Lambda _{j,k} w_{j}(t) w_{k}(t). \end{aligned}$$
(4.4)

From (4.3), it is clear that

$$\begin{aligned} 0= w_{1}(\tau ) \exp \Big(\int _{0}^{\tau}\varsigma (s) ds\Big) = w_{1}^{ \mathrm{{in}}} + \int _{0}^{\tau} \exp \Big(\int _{0}^{t}\varsigma (s) ds \Big)\ell (t) dt. \end{aligned}$$

As a result, \(w_{1}^{\mathrm{{in}}} = 0\) and \(\ell (t) = 0\) for all \(t\in (0, \tau )\). Since each \(w_{i}\) is continuous, we can deduce from (4.3) that \(w_{i} = 0\) for all \(i\geq 2\), and hence \(w^{\mathrm{{in}}} = 0\), which is a contradiction. This completes the proof of Theorem 4.1. □

Remark 4.1

It is worth noting that we have essentially used the positivity of the collisional breakage kernel. If, e.g., we consider pure coagulation (\(p_{i,j}=1\)), then \(\ell (t) = 0\) and we do not obtain the result.

In the next section, the occurrence of gelation for solutions to (1.1)–(1.2) is shown when coagulation dominates breakage, for a particular class of collision kernels. The result presented here extends [27, Proposition 4.3], which corresponds to the case \(\beta _{0}=2\).

5 Gelation Phenomenon in (1.1)–(1.2)

Proposition 5.1

Assume that \((\Lambda _{i, j})\), \((p_{i, j})\) and \((B_{i, j}^{s})\) satisfy (1.3)(1.4) and

$$\begin{aligned} \sum _{s=1}^{i+j-1} B_{i,j}^{s} \leq \beta _{0} \hspace{.4cm} \textit{and} \hspace{.4cm} p_{i,j}> \frac{(\beta _{0}-2)}{ (\beta _{0}-1)}, \end{aligned}$$
(5.1)
$$\begin{aligned} \zeta ij \leq \Big[ p_{i,j}-\frac{(\beta _{0}-2)}{ (\beta _{0}-1)} \Big] \Lambda _{i,j} \hspace{.4cm} \textit{and} \hspace{.4cm} \Lambda _{i,j} \leq \mu ij, \end{aligned}$$
(5.2)

for \(i,j \geq 1\), where \(\beta _{0}\geq 2\) and the constants \(\mu \) and \(\zeta \) are positive real numbers.

Consider \(w^{\mathrm{{in}}} \in Y_{1}^{+} \), \(w^{\mathrm{{in}}}\not \equiv 0\) and assume that (1.1)(1.2) has a solution \(w\) on \([0,+\infty )\) such that \(t\mapsto \|w(t) \|_{1}\) is a non-increasing function on \([0,+\infty )\). Then

$$\begin{aligned} \lim _{t\to \infty} \|w(t)\|_{1} =0. \end{aligned}$$

Remark 5.1

In (5.1), the first condition means that the number of fragments produced in each collision event is bounded by \(\beta _{0}\), while the second condition means that coagulation is the dominant process compared to breakage. On the one hand, the first condition in (5.2), together with (5.1), gives

$$\begin{aligned} (\beta _{0}-2) (1-p_{i,j}) \Lambda _{i,j} + \zeta ij \le p_{i,j} \Lambda _{i,j}, \end{aligned}$$

which clearly shows that the coagulation kernel \((p_{i,j}\Lambda _{i,j})\) dominates the breakage kernel \(((1-p_{i,j})\Lambda _{i,j})\) and admits a quadratic lower bound. On the other hand, from the second condition in (5.2), we infer that the collision kernel has at most quadratic growth.
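As a simple illustration (not needed in the proof below), consider

$$\begin{aligned} \Lambda _{i,j} = ij \qquad \text{and} \qquad p_{i,j}\equiv p \in \Big(\frac{\beta _{0}-2}{\beta _{0}-1},\,1\Big], \qquad i,j\ge 1, \end{aligned}$$

together with any fragment distribution satisfying (1.3) and the first condition in (5.1). Then (5.1)–(5.2) hold with \(\mu =1\) and \(\zeta = p-\frac{\beta _{0}-2}{\beta _{0}-1}>0\).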

Proof

For \(l \geq 1\), \(\tau _{1} \geq 0\) and \(\tau _{2} > \tau _{1}\), let us sum (1.1) over \(i=1,\ldots ,l\), integrate over \((\tau _{1},\tau _{2})\) and rearrange the terms; this gives

$$\begin{aligned} \sum _{i=1}^{l} (w_{i}(\tau _{2})-w_{i}(\tau _{1}))=& -\frac{1}{2} \int _{\tau _{1}}^{\tau _{2}} \sum _{i=1}^{l-1} \sum _{j=1}^{l-i}p_{i,j} \Lambda _{i,j} w_{i}w_{j} d\tau \\ &- \int _{\tau _{1}}^{\tau _{2}} \sum _{i=1}^{l} \sum _{j=l+1-i}^{ \infty} \Lambda _{i,j} w_{i}w_{j} d\tau \\ &+\frac{1}{2}\int _{\tau _{1}}^{\tau _{2}} \sum _{i=1}^{l} \sum _{j=l+1}^{\infty} \sum _{k=1}^{j-1} B_{j-k,k}^{i} (1-p_{j-k,k}) \Lambda _{j-k,k} w_{j-k} w_{k} d\tau \\ & + \frac{1}{2} \int _{\tau _{1}}^{\tau _{2}} \sum _{i=1}^{l} \sum _{j=1}^{l-i} \Big( \sum _{s=1}^{i+j-1} B_{i,j}^{s} -2 \Big) (1-p_{i,j}) \Lambda _{i,j} w_{i} w_{j} d\tau . \end{aligned}$$

Since \(w(\tau ) \in Y_{1}^{+}\) with \(\|w(\tau )\|_{1} \leq \|w^{\mathrm{{in}}}\|_{1}\) for every \(\tau \in [\tau _{1}, \tau _{2}]\), we can use the growth conditions (5.1)–(5.2) and (1.4) to pass to the limit as \(l \to \infty \) in the above equality and get

$$\begin{aligned} \sum _{i=1}^{\infty} (w_{i}(\tau _{2})-w_{i}(\tau _{1})) \le - \frac{1}{2} &\int _{\tau _{1}}^{\tau _{2}} \sum _{i=1}^{\infty} \sum _{j=1}^{ \infty}p_{i,j} \Lambda _{i,j} w_{i}w_{j} d\tau \\ &+ \frac{(\beta _{0}-2)}{2} \int _{\tau _{1}}^{\tau _{2}} \sum _{i=1}^{ \infty} \sum _{j=1}^{\infty} (1-p_{i,j}) \Lambda _{i,j} w_{i} w_{j} d \tau . \end{aligned}$$

In the above inequality, the first term on the right-hand side represents the loss due to coagulation, while the second term corresponds to the gain resulting from breakage. Rearranging these terms, we obtain

$$\begin{aligned} \sum _{i=1}^{\infty} (w_{i}(\tau _{2})-w_{i}(\tau _{1}))\leq & - \frac{ (\beta _{0}-1)}{2} \int _{\tau _{1}}^{\tau _{2}} \sum _{i=1}^{ \infty} \sum _{j=1}^{\infty}\Big[ p_{i,j}- \frac{(\beta _{0}-2)}{ (\beta _{0}-1)}\Big]\Lambda _{i,j} w_{i}w_{j} d \tau . \end{aligned}$$
(5.3)
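Here we have used the elementary identity

$$\begin{aligned} -\frac{1}{2}\,p_{i,j}+\frac{\beta _{0}-2}{2}\,(1-p_{i,j}) = -\frac{(\beta _{0}-1)}{2}\Big[p_{i,j}-\frac{(\beta _{0}-2)}{(\beta _{0}-1)}\Big]. \end{aligned}$$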

Now, with the help of the lower bound in (5.2), we obtain

$$\begin{aligned} \sum _{i=1}^{\infty} w_{i}(\tau _{2}) + \frac{\zeta (\beta _{0}-1)}{2} \int _{\tau _{1}}^{\tau _{2}} \|w( \tau )\|_{1}^{2} d\tau \leq \sum _{i=1}^{\infty} w_{i}(\tau _{1}). \end{aligned}$$

Let \(t\in (0,+\infty )\) and since it is given that \(t\mapsto \|w(t)\|_{1}\) is non-increasing, we can deduce from the previous estimate (with \(\tau _{1}=0\) and \(\tau _{2}=t\)) that

$$\begin{aligned} \frac{\zeta (\beta _{0}-1) t}{2} \|w(t)\|_{1}^{2} \leq \sum _{i=1}^{ \infty} w_{i}^{\mathrm{{in}}} \leq \|w^{\mathrm{{in}}}\|_{1}. \end{aligned}$$

Thus

$$\begin{aligned} \|w(t)\|_{1} \leq \Big( \frac{2\|w^{\mathrm{{in}}}\|_{1}}{\zeta (\beta _{0}-1) t} \Big)^{ \frac{1}{2}}, \hspace{.3cm} t\in (0,+\infty ), \end{aligned}$$

which completes the proof of Proposition 5.1. □

An interesting consequence of the above proposition is obtained when the collision kernel satisfies

$$\begin{aligned} \Big[ p_{i,j}-\frac{(\beta _{0}-2)}{ (\beta _{0}-1)}\Big] \Lambda _{i,j} \geq \kappa , \hspace{.3cm} \text{where} \hspace{.3cm} \kappa >0, \hspace{.3cm} \text{for} \hspace{.3cm} i,j\geq 1. \end{aligned}$$

In this case, we have

$$\begin{aligned} \sum _{i=1}^{\infty} w_{i}(\tau _{2}) + \frac{\kappa (\beta _{0}-1)}{2} \int _{\tau _{1}}^{\tau _{2}} \|w( \tau )\|_{0}^{2} d\tau \leq \sum _{i=1}^{\infty} w_{i}(\tau _{1}). \end{aligned}$$
(5.4)

From the previous inequality (with \(\tau _{1}=0\) and letting \(\tau _{2}\to \infty \)), it follows that

$$\begin{aligned} \mathcal{M}_{0} \in L^{2}(0, +\infty ). \end{aligned}$$
(5.5)

Recalling (5.4), we see that \(\mathcal{M}_{0}\) is a non-negative, non-increasing function which also belongs to \(L^{2}(0,+\infty )\); if \(\mathcal{M}_{0}(t)\) did not tend to zero, the integral in (5.5) would diverge. Therefore

$$\begin{aligned} \lim _{t\to \infty } \mathcal{M}_{0}(t) =0. \end{aligned}$$