1 Introduction

Bootstrap percolation was introduced by Chalupa et al. [13] in 1979 in the context of disordered magnetic systems and has since been rediscovered by several authors, mainly due to its connections with various physical models. A bootstrap percolation process with integer activation threshold \(r\ge 2\) on a graph \(G = G(V, E)\) is a deterministic process which evolves in rounds. Every vertex is in one of two states: infected or uninfected. Initially, there is a subset \(\mathcal {A}_0\subseteq V\) of infected vertices, whereas every other vertex is uninfected; this set can be selected either deterministically or randomly. Subsequently, in each round, every uninfected vertex with at least \(r\) infected neighbours becomes infected and remains so forever. This is repeated until no more vertices become infected. We denote the final infected set by \(\mathcal {A}_f\).
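The round-based dynamics can be sketched as a short simulation (a minimal illustration; the adjacency-dict encoding and the function name are ours, not part of the paper's formal setup). Since infection is monotone, processing newly infected vertices through a queue yields the same final set \(\mathcal {A}_f\) as the synchronous rounds described above.

```python
from collections import deque

def bootstrap_percolation(neighbours, initially_infected, r=2):
    """Run bootstrap percolation with activation threshold r.

    neighbours: dict mapping each vertex to a list of its neighbours.
    Returns the final infected set A_f.
    """
    infected = set(initially_infected)
    counts = {}                 # infected-neighbour counts of uninfected vertices
    queue = deque(infected)
    while queue:
        v = queue.popleft()
        for u in neighbours[v]:
            if u not in infected:
                counts[u] = counts.get(u, 0) + 1
                if counts[u] >= r:      # u now has >= r infected neighbours
                    infected.add(u)
                    queue.append(u)
    return infected

# On a 4-cycle with r = 2, infecting two opposite vertices infects everything,
# while a single infected vertex cannot activate anyone:
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
assert bootstrap_percolation(cycle, {0, 2}) == {0, 1, 2, 3}
assert bootstrap_percolation(cycle, {0}) == {0}
```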

Bootstrap percolation processes (and extensions) have been used as models to describe several complex phenomena in diverse areas, from jamming transitions [30] and magnetic systems [26] to neuronal activity [4, 29]. Bootstrap percolation also has connections to the dynamics of the Ising model at zero temperature [19]. A short survey regarding applications of bootstrap percolation processes can be found in [1].

These processes have also been studied on a variety of graphs, such as trees [7, 18], hyperbolic lattices [27], grids [9, 12, 21], hypercubes [5], as well as on several distributions of random graphs [3, 6, 23]. In particular, consider the case when \(G\) is the two-dimensional grid on \([n]^2=\{1, \dots , n\}^2\) (i.e., a finite square \([n]^2\) in the square lattice), and \(r=2\) (i.e., an uninfected site becomes infected if at least two of its four neighbours are infected). Then, for an initial set \(\mathcal {A}_0\subseteq V\) whose elements are chosen independently at random, each with probability \(p(n)\), the following sharp threshold was determined by Holroyd [21]. The probability \(I(n, p)\) that the entire square is eventually infected satisfies \(I(n,p) \rightarrow 1\) if \(\liminf _{n \rightarrow \infty } p(n) \log n > \pi ^2/18\), and \(I(n,p) \rightarrow 0\) if \(\limsup _{n\rightarrow \infty } p(n) \log n < \pi ^2/18\). A generalization of this result to the higher dimensional case has been recently proved by Balogh et al. [8] (when \(G\) is the 3-dimensional grid on \([n]^3\) and \(r=3\)) and Balogh et al. [9] (in general).

In the context of real-world networks and in particular in social networks, a bootstrap percolation process can be thought of as a primitive model for the spread of ideas or new trends within a set of individuals which form a network. Each of them has a threshold \(r\) and \(\mathcal {A}_0\) corresponds to the set of individuals who initially are “infected” with a new belief. If for an “uninfected” individual at least \(r\) of its acquaintances have adopted the new belief, then this individual adopts it as well.

More than a decade ago, Faloutsos et al. [17] observed that the Internet exhibits a power-law degree distribution, meaning that the proportion of vertices of degree \(k\) scales like \(k^{-\beta }\), for all sufficiently large \(k\) and some \(\beta > 2\). In particular, the work of Faloutsos et al. [17] suggested that the degree distribution of the Internet at the router level follows a power law with \(\beta \approx 2.6\). Kumar et al. [25] also provided evidence on the degree distribution of the World Wide Web viewed as a directed graph on the set of web pages, where a web page “points” to another web page if the former contains a link to the latter. They found that the indegree distribution follows a power law with exponent approximately 2.1, whereas the outdegree distribution also follows a power law, with exponent close to 2.7. Other empirical evidence on real-world networks has provided examples of power-law degree distributions with exponents between two and three; see e.g. [2, 24].

Thus, in the present work, we focus on the case where \(2 < \beta <3\). More specifically, the underlying random graph distribution we consider was introduced by Chung and Lu [14], who proposed it as a general-purpose model for generating graphs with a power-law degree sequence. Consider the vertex set \([n]: = \{1,\ldots , n\}\). Every vertex \(i \in [n]\) is assigned a positive weight \(w_i\), and the pair \(\{i, j\}\), for \(i\not = j \in [n]\), is included in the graph as an edge with probability proportional to \(w_i w_j\), independently of every other pair. Note that the expected degree of \(i\) is close to \(w_i\). With high probability the degree sequence of the resulting graph follows a power law, provided that the sequence of weights follows a power law (see [31] for a detailed discussion). Such random graphs are also characterized as ultra-small worlds, due to the fact that the typical distance between two vertices that belong to the same component is \(O(\log \log n)\) (see [15] or [31]).

Regarding the initial conditions of the bootstrap percolation process, our general assumption will be that the initial set of infected vertices \(\mathcal {A}_0\) is chosen randomly among all subsets of vertices of a certain size.

The aim of this paper is to analyse the evolution of the bootstrap percolation process on such random graphs and, in particular, the typical value of the ratio \(|\mathcal {A}_f|/ |\mathcal {A}_0|\). The main finding of the present work is the existence of a critical function \(a_c (n)\) such that when \(|\mathcal {A}_0|\) “crosses” \(a_c(n)\) there is a sharp change in the evolution of the bootstrap percolation process. When \(|\mathcal {A}_0| \ll a_c (n)\), then typically the process does not evolve, but when \(|\mathcal {A}_0|\gg a_c(n)\), then a linear fraction of vertices is eventually infected. Note that \(|\mathcal {A}_0|\) itself may be sublinear. What turns out to be the key to such a dissemination of the infection is the set of vertices of high weight. These are typically the vertices that have high degree in the random graph and, moreover, they form a fairly dense graph. We exploit this fact and show how it causes the spread of the infection to a linear fraction of the vertices (see Theorem 2.4). Interpreting this from the point of view of a social network, these vertices correspond to popular and attractive individuals with many connections; they are the hubs of the network. Our analysis sheds light on the role of these individuals in the infection process.

These results are in sharp contrast with the behaviour of the bootstrap percolation process in \(G(n,p)\) random graphs, where every edge on a set of \(n\) vertices is included independently with probability \(p\). Assume that \(p=d/n\), where \(d>0\) does not depend on \(n\). An observation of Balogh and Bollobás (cf. [6] pp. 259–260) implies that if \(|\mathcal {A}_0| = o(n)\), then in this case typically no evolution occurs. In other words, the density of the initially infected vertices must be positive in order for the density of infected vertices to grow. Similar behaviour has been observed in the case of random regular graphs by Bollobás (cf. [6]), as well as in random graphs with given vertex degrees constructed through the configuration model. The latter have been studied by the first author in [3], under the assumption that the sum of the squares of the degrees scales linearly with \(n\), the number of vertices of the graph. This case includes random graphs with power-law degree sequence with exponent \(\beta > 3\). Our results show that the two regimes \(2 < \beta < 3\) and \(\beta > 3\) exhibit completely different behaviour. Recently, Janson et al. [23] provided a complete analysis of the bootstrap percolation process on \(G(n,p)\) for all ranges of the probability \(p\).

Basic Notation. Let \(\mathbb {R}^+\) be the set of positive real numbers. For non-negative sequences \(x_n\) and \(y_n\), we describe their relative order of magnitude using Landau’s \(o(\cdot )\) and \(O(\cdot )\) notation. We write \(x_n = O(y_n)\) if there exist \(N \in \mathbb {N}\) and \(C > 0\) such that \(x_n \le C y_n\) for all \(n \ge N\), and \(x_n = o(y_n)\), if \(x_n/ y_n \rightarrow 0\), as \(n \rightarrow \infty \). We also write \(x_n \ll y_n\) when \(x_n = o(y_n)\).

Let \(\{ X_n \}_{n \in \mathbb {N}}\) be a sequence of real-valued random variables on a sequence of probability spaces \(\{ (\Omega _n, \mathbb {P}_n)\}_{n \in \mathbb {N}}\). If \(c \in \mathbb {R}\) is a constant, we write \(X_n \mathop {\rightarrow }\limits ^{p} c\) to denote that \(X_n\) converges in probability to \(c\). That is, for any \(\varepsilon >0\), we have \(\mathbb {P}_n (|X_n - c|>\varepsilon ) \rightarrow 0\) as \(n \rightarrow \infty \).

Let \(\{ a_n \}_{n \in \mathbb {N}}\) be a sequence of real numbers that tends to infinity as \(n \rightarrow \infty \). We write \(X_n = o_p (a_n)\), if \(|X_n|/a_n\) converges to zero in probability. Additionally, we write \(X_n = O_p (a_n)\), to denote that for any positive-valued function \(\omega (n) \rightarrow \infty \), as \(n \rightarrow \infty \), we have \(\mathbb {P} (|X_n|/a_n \ge \omega (n)) = o(1)\). If \(\mathcal {E}_n\) is a measurable subset of \(\Omega _n\), for any \(n \in \mathbb {N}\), we say that the sequence \(\{ \mathcal {E}_n \}_{n \in \mathbb {N}}\) occurs asymptotically almost surely (a.a.s.) if \(\mathbb {P} (\mathcal {E}_n) = 1-o(1)\), as \(n\rightarrow \infty \).

Also, we denote by \(\mathsf {Be}(p)\) a Bernoulli distributed random variable whose probability of being equal to one is \(p\). The notation \(\mathsf {Bin}(k,p)\) denotes a binomially distributed random variable corresponding to the number of successes of a sequence of \(k\) independent Bernoulli trials each having probability of success equal to \(p\).

2 Models and Results

The random graph model that we consider is asymptotically equivalent to a model considered by Chung and Lu [15], and is a special case of the so-called inhomogeneous random graph, which was introduced by Söderberg [28] and studied in great detail by Bollobás et al. in [11].

2.1 Inhomogeneous Random Graphs: The Chung-Lu Model

In order to define the model we consider for any \(n\in \mathbb {N}\) the vertex set \([n] := \{1, \dots , n\}\). Each vertex \(i\) is assigned a positive weight \(w_i(n)\), and we will write \(\mathbf {w}=\mathbf {w}(n) = (w_1(n), \dots , w_n(n))\). We assume in the remainder that the weights are deterministic, and we will suppress the dependence on \(n\), whenever this is obvious from the context. However, note that the weights could be themselves random variables; we will not treat this case here, although it is very likely that under suitable technical assumptions our results generalize to this case as well. For any \(S \subseteq [n]\), set

$$\begin{aligned} W_S(\mathbf {w}) := \sum _{i\in S} w_i. \end{aligned}$$

In our random graph model, each pair \(\{i,j\}\), for \(i \not = j \in [n]\), is included as an edge independently of all other pairs, with probability

$$\begin{aligned} p_{ij}(\mathbf {w}) = \min \left\{ \frac{w_iw_j}{W_{[n]}(\mathbf {w})}, 1\right\} . \end{aligned}$$
(2.1)

This model was considered by Chung and Lu for fairly general choices of \(\mathbf {w}\); in a series of papers [14–16] they studied several typical properties of the resulting graphs, such as the average path length or the component distribution. We will refer to this model as the Chung-Lu model, and we shall write \(CL(\mathbf {w})\) for a random graph in which each possible edge \(\{i,j\}\) is included independently with probability as in (2.1). Moreover, we will suppress the dependence on \(\mathbf {w}\), if it is clear from the context which sequence of weights we refer to.

Note that in a Chung-Lu random graph, the weights essentially control the expected degrees of the vertices. Indeed, if we ignore the minimization in (2.1), and also allow a loop at vertex \(i\), then the expected degree of that vertex is \(\sum _{j=1}^n w_iw_j/W_{[n]} = w_i\). In the general case, a similar asymptotic statement is true, unless the weights fluctuate too much. Consequently, the choice of \(\mathbf {w}\) has a significant effect on the degree sequence of the resulting graph. For example, the authors of [15] choose \(w_i = d {\beta -2 \over \beta -1}(\frac{n}{i + i_0})^{1/(\beta - 1)}\), which typically results in a graph with a power-law degree sequence with exponent \(\beta \), average degree \(d\), and maximum degree proportional to \(({n}/{i_0})^{1/(\beta - 1)}\), where \(i_0\) was chosen such that this expression is \(O(n^{1/2})\). Our results will hold in a more general setting, where larger fluctuations around a “strict” power law are allowed, and also larger maximum degrees are possible, thus allowing a greater flexibility in the choice of the parameters.
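As a concrete illustration, the model can be sampled directly from (2.1). The naive \(O(n^2)\) sketch below uses the weight sequence of [15] quoted above; the function names and default parameter values are ours.

```python
import random

def power_law_weights(n, beta=2.5, d=5.0, i0=10):
    """Weights as in [15]: w_i = d*(beta-2)/(beta-1) * (n/(i+i0))^{1/(beta-1)},
    giving a power-law degree sequence with exponent beta and average degree ~d."""
    c = d * (beta - 2) / (beta - 1)
    return [c * (n / (i + i0)) ** (1 / (beta - 1)) for i in range(1, n + 1)]

def chung_lu_graph(w, seed=0):
    """Sample CL(w): each pair {i, j} becomes an edge independently with
    probability min(w_i * w_j / W, 1), where W is the total weight, as in (2.1)."""
    rng = random.Random(seed)
    W = sum(w)
    n = len(w)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < min(w[i] * w[j] / W, 1.0):
                edges.append((i, j))
    return edges
```

Note that the weights are decreasing in \(i\), so low-index vertices play the role of the high-degree hubs discussed in the introduction.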

2.2 Power-Law Degree Distributions

Following van der Hofstad [31], let us write for any \(n\in \mathbb {N}\) and any sequence of weights \(\mathbf {w} = (w_1(n), \dots , w_n(n))\) (not necessarily ordered)

$$\begin{aligned} F_n(x) = n^{-1} \sum _{i=1}^n \mathbf {1}[w_i(n) < x], \ \ \forall x\in [0,\infty ) \end{aligned}$$

for the empirical distribution function of the weight of a vertex chosen uniformly at random. We will assume that \(F_n\) satisfies the following two conditions.

Definition 2.1

We say that \((F_n)_{n \ge 1}\) is regular, if it has the following two properties.

  • [Weak convergence of weight] There is a distribution function \(F:[0,\infty )\rightarrow [0,1]\) such that for all \(x\) at which \(F\) is continuous \(\lim _{n\rightarrow \infty }F_n(x) = F(x)\);

  • [Convergence of average weight] Let \(W_n\) be a random variable with distribution function \(F_n\), and let \(W_F\) be a random variable with distribution function \(F\). Then we have \(\lim _{n\rightarrow \infty }\mathbb {E} \left[ \, W_n\,\right] = \mathbb {E} \left[ \, W_F\,\right] \).

The regularity of \((F_n)_{n\ge 1}\) guarantees two important properties. Firstly, the weight of a random vertex is approximately distributed as a random variable that follows a certain distribution. Secondly, this variable has finite mean and therefore the resulting graph has bounded average degree. Apart from regularity, our focus will be on weight sequences that give rise to power-law degree distributions.

Definition 2.2

We say that a regular sequence \((F_n)_{n \ge 1}\) is of power law with exponent \(\beta \), if there are \(0<\gamma _1<\gamma _2\), \(x_0 >0\) and \(0 < \zeta \le {1/(\beta -1)} \) such that for all \(x_0 \le x \le n^{\zeta }\)

$$\begin{aligned} \gamma _1 x^{-\beta + 1}\le 1 - F_n(x) \le \gamma _2 x^{-\beta + 1}, \end{aligned}$$

and \(F_n(x) = 0\) for \(x < x_0\), but \(F_n(x) = 1\) for \(x > n^{\zeta }\).

Thus, we may assume that for \(1\le i \le n(1-F_n(n^{\zeta }))\) we have \(w_i = n^{\zeta }\), whereas for \((1-F_n(n^{\zeta }))n < i \le n\) we have \(w_i = [1-F_n]^{-1} (i/n)\), where \([1-F_n]^{-1}\) is the generalized inverse of \(1-F_n\), that is, for \(x \in [0,1]\) we define \([1-F_n]^{-1}(x) = \inf \{ s \ : \ 1-F_n(s) < x \}\). Note that according to the above definition, for \(\zeta > {1/(\beta -1)}\), we have \(n(1-F_n(n^\zeta )) = 0\), since \(1-F_n(n^\zeta ) \le \gamma _2 n^{-\zeta (\beta -1)}= o(n^{-1})\). So it is natural to assume that \(\zeta \le {1/(\beta -1)}\). Recall finally that in the Chung-Lu model [15] the maximum weight is \(O(n^{1/2})\).

2.3 Results

The main theorem of this paper regards random infection of the whole of \([n]\). We determine explicitly a critical function which we denote by \(a_c(n)\) such that when we infect randomly \(a(n)\) vertices in \([n]\), then the following threshold phenomenon occurs. If \(a(n) \ll a_c(n)\), then a.a.s. the infection spreads no further than \(\mathcal {A}_0\), but when \(a(n) \gg a_c(n)\), then at least \(\varepsilon n\) vertices become eventually infected, for some \(\varepsilon >0\). We remark that \(a_c(n) = o(n)\).

Theorem 2.3

For any \(\beta \in (2,3)\) and any integer \(r\ge 2\), we let \(a_c (n) = n^{r(1-\zeta ) + \zeta (\beta -1)-1\over r}\) for all \(n \in \mathbb {N}\). Let \(a: \mathbb {N} \rightarrow \mathbb {N}\) be a function such that \(a(n) \rightarrow \infty \), as \(n \rightarrow \infty \), but \(a(n)=o(n)\). Let also \({r-1 \over 2r - \beta + 1} < \zeta \le {1\over \beta -1}\). If we initially infect randomly \(a(n)\) vertices in \([n]\), then the following holds:

  • if \(a(n) \ll a_c(n)\), then a.a.s. \(\mathcal {A}_f= \mathcal {A}_0\);

  • if \(a(n) \gg a_c (n)\), then there exists \(\varepsilon >0\) such that a.a.s. \(|\mathcal {A}_f| > \varepsilon n\).

When \(0< \zeta \le {r-1 \over 2r - \beta + 1}\), setting \( a_c^+ (n) =n^{1- \zeta {r-\beta +2 \over r-1 }}\), the following holds.

  • if \(a(n) \ll a_c(n)\), then a.a.s. \(\mathcal {A}_f= \mathcal {A}_0\);

  • if \(a(n) \gg a_c^+ (n)\), then there exists \(\varepsilon >0\) such that a.a.s. \(|\mathcal {A}_f| > \varepsilon n\).

Note that the above theorem implies that when the maximum weight of the sequence is \(n^{1/(\beta -1)}\), then the threshold function becomes equal to \(n^{\beta - 2 \over \beta -1}\) and does not depend on \(r\).
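This reduction is a one-line computation: at \(\zeta = 1/(\beta-1)\) the exponent \((r(1-\zeta)+\zeta(\beta-1)-1)/r\) collapses to \(1-\zeta = (\beta-2)/(\beta-1)\). A small exact-arithmetic check (a verification sketch, not part of the paper):

```python
from fractions import Fraction

def ac_exponent(beta, zeta, r):
    """Exponent of n in a_c(n) = n^{(r(1 - zeta) + zeta(beta - 1) - 1)/r}."""
    return (r * (1 - zeta) + zeta * (beta - 1) - 1) / r

beta = Fraction(5, 2)        # an illustrative beta in (2, 3)
zeta = 1 / (beta - 1)        # maximum admissible zeta = 1/(beta - 1)
for r in (2, 3, 4, 5):
    # independent of r, and equal to (beta - 2)/(beta - 1):
    assert ac_exponent(beta, zeta, r) == (beta - 2) / (beta - 1)
```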

In the subcritical regime, the density of \(\mathcal {A}_0\) is so low that with high probability no vertex which is initially uninfected has at least \(r\) infected neighbours in \(\mathcal {A}_0\). Therefore, the process does not evolve at all. The supercritical case essentially follows from our next theorem.

This concerns the targeted infection of \(a(n)\) vertices, where \(a(n) \rightarrow \infty \) as \(n \rightarrow \infty \). Let \(f: \mathbb {N} \rightarrow \mathbb {R}^+\) be a function. We define the \(f\)-kernel to be

$$\begin{aligned} \mathcal {K}_f : = \{ i \in [n] \ : \ w_i \ge f(n) \}. \end{aligned}$$

We set \(N_f:= |\mathcal {K}_f|\). We will denote by \(CL [\mathcal {K}_f]\) the subgraph of \(CL (\mathbf {w})\) that is induced by the vertices of \(\mathcal {K}_f\). We show that there exists a function \(f\) such that if we infect randomly \(a(n)\) vertices of \(\mathcal {K}_f\), then this is sufficient to infect almost the whole of the \(C\)-kernel, for some constant \(C>0\), with high probability. In other words, the gist of this theorem is that there is a specific part of the random graph of size \(o(n)\) such that if the initially infected vertices belong to it, then this is enough to spread the infection to a positive fraction of the vertices.

Theorem 2.4

Let \(a: \mathbb {N} \rightarrow \mathbb {N}\) be a function such that \(a(n) \rightarrow \infty \), as \(n \rightarrow \infty \), but \(a(n)=o(n)\). Assume also \({r-1 \over 2r - \beta + 1} < \zeta \le {1\over \beta -1}\). If \(\beta \in (2,3)\), then there exists an \(\varepsilon _0 = \varepsilon _0 (\beta , \gamma _1, \gamma _2)\) such that for any positive \(\varepsilon < \varepsilon _0\) there exists a constant \(C=C(\gamma _1,\gamma _2, \beta , \varepsilon , r) >0\) and a function \(f: \mathbb {N} \rightarrow \mathbb {R}^+\) such that \(f(n) \rightarrow \infty \) as \(n \rightarrow \infty \) but \(f(n) \ll n^{\zeta }\) satisfying the following. If we infect randomly \(a(n)\) vertices in \(\mathcal {K}_f\), then at least \((1-\varepsilon )|\mathcal {K}_C|\) vertices in \(\mathcal {K}_C\) become infected a.a.s.

The choice of the function \(f\) is such that the vertices in \(\mathcal {K}_f\) induce a very dense random graph (if the maximum weight is large enough, then this is a complete graph). Thus, if the density of \(\mathcal {A}_0\) is relatively large, then most vertices in \(\mathcal {K}_f\) become infected. Subsequently, we define a decreasing sequence of functions \(f_0=f,f_1,\ldots \) which induce a partition of \(\mathcal {K}_C\): the \(i\)th part consists of those vertices whose weight is between \(f_i\) and \(f_{i-1}\), for \(i\ge 1\), whereas the 0th part is \(\mathcal {K}_f\) itself. We show inductively that, with high probability, most vertices in the \(i\)th part have large degree in the \((i-1)\)th part. Hence, as most vertices of \(\mathcal {K}_f\) have become infected, the infection spreads from one part to the next, thus covering most of \(\mathcal {K}_C\).

In both theorems, the sequence of probability spaces we consider are the product spaces of the random graph together with the random choice of \(\mathcal {A}_0\).

3 Proofs

In this section we present the proofs of Theorems 2.3 and 2.4. We begin by stating recent results due to Janson et al. [23] regarding the evolution of bootstrap percolation processes on Erdős-Rényi random graphs, as these will be needed in our proofs. These results concern the binomial model \(G(N,p)\), which was introduced by Gilbert [20] and subsequently became a major part of the theory of random graphs (see [10] or [22]). Here \(N\) is a natural number and \(p\) is a real number in \([0,1]\). We consider the set \([N] := \{1,\ldots , N\}\) and create a random graph on \([N]\) by including each pair \(\{i,j\}\), where \(i \not = j \in [N]\), independently with probability \(p\).

We begin with a few definitions as they were given in [23]. Recall that \(r\ge 2\) is an integer and denotes the activation threshold. We set

$$\begin{aligned} T_c (N,p)&:= \left( {(r-1)!\over N p^r} \right) ^{1/(r-1)}, \ A_c (N):= \left( 1-{1\over r} \right) T_c(N,p), \ \nonumber \\ \hbox {and } B_c (N)&:= N {(pN)^{r-1}\over (r-1)!} e^{-pN}. \end{aligned}$$
(3.1)

Observe that if \(p=p(N) \gg 1/N\), then \(B_c(N) = o(N)\). The following theorem is among the main results in [23].
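The quantities in (3.1) are straightforward to evaluate; the sketch below (helper names are ours) also checks numerically that \(B_c(N)\) is a vanishing fraction of \(N\) once \(pN\) is large, in line with the observation above.

```python
from math import factorial, exp, log

def T_c(N, p, r):
    # T_c(N, p) = ((r-1)! / (N p^r))^{1/(r-1)}
    return (factorial(r - 1) / (N * p ** r)) ** (1 / (r - 1))

def A_c(N, p, r):
    # A_c(N) = (1 - 1/r) T_c(N, p)
    return (1 - 1 / r) * T_c(N, p, r)

def B_c(N, p, r):
    # B_c(N) = N (pN)^{r-1} / (r-1)! * e^{-pN}
    return N * (p * N) ** (r - 1) / factorial(r - 1) * exp(-p * N)

N = 10 ** 6
p = log(N) ** 2 / N              # an example with p >> 1/N
assert B_c(N, p, 2) / N < 1e-6   # B_c(N) = o(N) in this regime
```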

Theorem 3.1

(Theorem 3.1 [23]) Let \(a: \mathbb {N} \rightarrow \mathbb {N}\) be a function. Assume that \(\mathcal {A}_0\) is a subset of \([N]\) that has size \(a(N)\). Let \(p=p(N)\) be such that \(N^{-1} \ll p \ll N^{-1/r}\). Then a.a.s. (on the product space of the choice of \(\mathcal {A}_0\) and the random graph \(G(N,p)\)) we have

  (i) if \(a(N)/A_c(N) \rightarrow \alpha < 1\), then \(|\mathcal {A}_f| = (\phi (\alpha )+ o_p(1))T_c(N,p)\), where \(\phi (\alpha )\) is the unique root in \([0,1]\) of

    $$\begin{aligned} r \phi (\alpha ) - \phi (\alpha )^r = (r-1) \alpha . \end{aligned}$$

    Further, \(|\mathcal {A}_f|/a(N) \mathop {\rightarrow }\limits ^{p} \phi _1 (\alpha ):= {r \over r-1} \phi (\alpha )/\alpha \), with \(\phi _1 (0):=1\);

  (ii) if \(a(N)/A_c(N) \ge 1 + \delta \), for some \(\delta >0\), then \(|\mathcal {A}_f| = N - O_p (B_c(N))\). In other words, we have almost complete percolation with high probability.

  (iii) In case (ii), if further \(a(N)\le N/2\), we have complete percolation, that is, \(|\mathcal {A}_f|=N\) a.a.s., if and only if \(B_c (N) \rightarrow 0\), as \(N\rightarrow \infty \), that is, if and only if \(Np - (\log N + (r-1)\log \log N) \rightarrow \infty \) as \(N \rightarrow \infty \).

The following theorem (also from [23]) treats the dense regime.

Theorem 3.2

(Theorem 5.8 [23]) Let \(r\ge 2\). If \(p\gg N^{-1/r}\) and \(a(N)\ge r\), then a.a.s. \(|\mathcal {A}_f| = N\).

Now we proceed with the proof of Theorem 2.4, as parts of it will be used in the proof of Theorem 2.3.

3.1 Proof of Theorem 2.4

We will determine an \(f\) growing sufficiently fast that \(CL[\mathcal {K}_f]\) stochastically “contains” a dense enough \(G(|\mathcal {K}_f|,p)\). The density is high enough so that \(a(n)\) exceeds the threshold given in Theorem 3.1 and therefore with high probability we have the almost complete infection of \(\mathcal {K}_f\). To show that the infection spreads over most of the vertices of the \(C\)-kernel, we will split the set of vertices of \(\mathcal {K}_C \setminus \mathcal {K}_f\) into “bands” \(\mathcal {L}_j := \{ i \in [n] \ : \ f_j (n) \le w_i < f_{j-1} (n) \}\), for \(j=1,\ldots , T (n)\), where \(T (n)\) as well as the functions \(f_j \) will be defined during our proof. We will show inductively that if \(\mathcal {L}_{j-1}\) is almost completely infected, then with high probability \(\mathcal {L}_j\) becomes almost completely infected as well. We now proceed with the details of the proof.

3.2 Determining the Function f

We set

$$\begin{aligned} f(n) = \left[ {(r-1)! W_{[n]}^r \over \gamma _1 n a^{r-1}(n)} \right] ^{1 \over 2r - \beta +1}. \end{aligned}$$
(3.2)

We first need to show that \(f(n) = o\left( n^{\zeta } \right) \), in order to ensure that \(\mathcal {K}_f \not =\emptyset \).

Claim 3.3

If \(\beta < 3\), then \(f(n) = o \left( n^{\zeta } \right) \).

Proof

Note that

$$\begin{aligned} f(n) = \Theta \left( \left( {n \over a(n)} \right) ^{r-1 \over 2r - \beta + 1}\right) . \end{aligned}$$

Thus the assumption that \({r-1 \over 2r - \beta + 1} < \zeta \) implies the claim. \(\square \)

Since \({r-1 \over 2r -\beta +1} < {1\over 2}\), for \(\beta < 3\), the proof of the above claim implies the following:

Corollary 3.4

If \(\beta <3\), we have \(f(n) = o (n^{1/2})\).

We also need to show that \(a(n) \le N_f\). Recall that by Definition 2.2, for all \(n \in \mathbb {N}\) that are sufficiently large we have

$$\begin{aligned} \gamma _1 f(n)^{-\beta + 1} \le {N_f \over n} \le \gamma _2 f(n)^{-\beta + 1}. \end{aligned}$$
(3.3)

Claim 3.5

If \(\beta < 3\), then \(a(n) \ll N_f\).

Proof

Observe that \(a(n) = \Theta \left( n f^{{\beta -1 - 2r\over r-1}}(n) \right) \) whereas \(N_f = \Theta (n f^{-\beta + 1} (n))\). But

$$\begin{aligned} {\beta - 1 - 2r \over r-1} < -\beta + 1, \end{aligned}$$

which holds since \(\beta < 3\). This concludes the proof of the claim. \(\square \)

For any two distinct vertices \(i,j \in \mathcal {K}_f\) the probability of the edge \(\{ i,j\}\) being present is at least \(p_f:=f^2 (n) / W_{[n]}\). Since \(N_f = |\mathcal {K}_f|\), it follows that \(CL[\mathcal {K}_f]\) stochastically contains \(G(N_f,p_f)\). More precisely, there exists a coupling between these two random graphs such that always \(G (N_f, p_f) \subseteq CL [\mathcal {K}_f]\). Thus, if \(\mathcal {P}\) is a non-decreasing property of graphs (that is, a set of graphs closed under automorphisms and under the addition of edges), we have

$$\begin{aligned} \mathbb {P} \left[ \, G(N_f, p_f) \in \mathcal {P}\,\right] \le \mathbb {P} \left[ \,CL [\mathcal {K}_f] \in \mathcal {P}\,\right] . \end{aligned}$$
(3.4)

To apply Theorem 3.1, we first need to show that

$$\begin{aligned} p_f \ll N_f^{-1/r}. \end{aligned}$$

Let us set for convenience \(x_{\beta ,r}={r-1 \over 2r - \beta +1}\). We have

$$\begin{aligned} p_f = \Theta \left( n^{2x_{\beta ,r}-1} a^{-2x_{\beta ,r}}(n) \right) \end{aligned}$$

and

$$\begin{aligned} N_f^{-1/r} = \Theta \left( n^{-1/r+{(\beta -1)x_{\beta ,r}\over r}} a^{-{(\beta -1)x_{\beta ,r}\over r}} (n) \right) . \end{aligned}$$

In fact, we have

$$\begin{aligned} 2x_{\beta ,r}-1 = -1/r+{(\beta -1)x_{\beta ,r}\over r}, \end{aligned}$$

and

$$\begin{aligned} -2 x_{\beta ,r} < -{(\beta -1)x_{\beta ,r}\over r}, \end{aligned}$$

as \(2r \ge 4 > \beta -1\).
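Both exponent relations can be checked in exact rational arithmetic (a verification sketch; `x_br` is our shorthand for \(x_{\beta,r}\)):

```python
from fractions import Fraction

def x_br(beta, r):
    """x_{beta, r} = (r - 1)/(2r - beta + 1)."""
    return Fraction(r - 1) / (2 * r - beta + 1)

for r in (2, 3, 4, 7):
    for beta in (Fraction(21, 10), Fraction(5, 2), Fraction(29, 10)):
        x = x_br(beta, r)
        # the exponents of n in p_f and in N_f^{-1/r} coincide exactly:
        assert 2 * x - 1 == Fraction(-1, r) + (beta - 1) * x / r
        # the exponent of a(n) is strictly smaller in p_f, since 2r > beta - 1:
        assert -2 * x < -(beta - 1) * x / r
```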

Now, from (3.1) we have

$$\begin{aligned} T_c (N_f, p_f) = \left( {(r-1)!\over N_f p_f^r} \right) ^{1\over r-1}. \end{aligned}$$

We will show that the choice of \(f\) is such that \(a(n) \ge T_c(N_f,p_f)\). Then, since \(A_c(N_f) = (1-1/r) T_c(N_f,p_f)\), it follows from Theorem 3.1(ii) together with (3.4) that a.a.s. \(CL [\mathcal {K}_f]\) becomes almost completely infected. To verify this statement, we present the relevant calculation. By the choice of \(f\) we have

$$\begin{aligned} a (n) = \left( {(r-1)! W_{[n]}^r \over n \gamma _1 f(n)^{-\beta +1+2r} } \right) ^{1\over r-1} \ge T_c (N_f,p_f). \end{aligned}$$

Finally, we need to bound \(B_c(N_f)\) and show that \(B_c(N_f) = o(N_f)\). To this end, it suffices to show that \(N_fp_f \rightarrow \infty \) as \(n \rightarrow \infty \).

Claim 3.6

If \(\beta < 3\), then the function \(f(n)\) is such that \(N_f p_f \rightarrow \infty \) as \(n \rightarrow \infty \).

Proof

Indeed, by (3.3) we have

$$\begin{aligned} N_f p_f \ge n \gamma _1 {f(n)^{-\beta + 3} \over W_{[n]}} = \Omega \left( f(n)^{3-\beta }\right) . \end{aligned}$$

But \(f(n) = \Theta \left( \left( {n\over a(n)}\right) ^{r-1 \over 2r-\beta +1}\right) \) and, since \(a(n) = o(n)\), this implies that \(f(n) \rightarrow \infty \) as \(n \rightarrow \infty \). This concludes the proof of the claim. \(\square \)

By the above claims, Theorem 3.1(ii) implies that for any \(\varepsilon >0\) a.a.s. at least \((1-\varepsilon )|\mathcal {K}_f|\) vertices of \(\mathcal {K}_f\) become infected.

The Dissemination of the Infection in \(\mathcal {K}_C \setminus \mathcal {K}_f\)

In this part of the proof we will show the following proposition which implies Theorem 2.4.

Proposition 3.7

Let \(f: \mathbb {N} \rightarrow \mathbb {R}^+\) be a function such that \(f(n) \rightarrow \infty \) as \(n \rightarrow \infty \), but \(f(n)=o(n^{\zeta })\). Then there exists an \(\varepsilon _0 = \varepsilon _0 (\beta , \gamma _1, \gamma _2) >0\) such that for any positive \(\varepsilon < \varepsilon _0\) there exists \(C=C(\gamma _1,\gamma _2, \beta , \varepsilon , r) >0\) for which the following holds. If \((1-\varepsilon )|\mathcal {K}_f|\) vertices of \(\mathcal {K}_f\) have been infected, then a.a.s. at least \((1-\varepsilon )|\mathcal {K}_C|\) vertices of \(\mathcal {K}_C\) become infected.

Proof

We will define a partition of the set of vertices in \(\mathcal {K}_C \setminus \mathcal {K}_f\) as follows. Firstly, we define a sequence of functions \(f_i : \mathbb {N} \rightarrow \mathbb {R}^+\), for integers \(i \ge 0\). Next, we define the real-valued function \(\psi \) on the natural numbers by setting \(\psi (n) := {\ln C \over \ln f_0(n)}\) for any \(n \in \mathbb {N}\), where \(C \in \mathbb {R}^+\) will be determined during our proof. Let \(g_{\beta , n}(x) =(\beta -2)x + \psi (n)\), for \(x \in \mathbb {R}\), and for \(i\in \mathbb {N}\) let \(g_{\beta , n}^{(i)}(x)\) be the \(i\)th iterate of \(g_{\beta , n}\) at \(x\). We set \(f_0 := f\) and, for \(i\ge 1\), \(f_i:= f_{0}^{g_{\beta , n}^{(i)}(1)}\). Also, for any \(n \in \mathbb {N}\) we let \(T(n)\) be the maximum \(i \in \mathbb {N}\) such that \(f_0(n)^{g_{\beta , n}^{(i)}(1)} \ge C^{2\over 3-\beta }\). Note that

$$\begin{aligned} T(n) = O \left( \log \log n \right) . \end{aligned}$$
(3.5)

We are now ready to define the partition of \(\mathcal {K}_C \setminus \mathcal {K}_f\). For any \(n \in \mathbb {N}\) and for \(j = 1,\ldots , T(n)\) we set

$$\begin{aligned} \mathcal {L}_j : = \{i \in [n] \ : \ f_j(n) \le w_i < f_{j-1} (n)\}. \end{aligned}$$

We also set \(\mathcal {L}_0 : = \mathcal {K}_f\) and finally \(\mathcal {L}_{T(n)+1}:= \mathcal {K}_C \setminus \{ \cup _{j=0}^{T(n)} \mathcal {L}_j \}\).
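Unwinding the definition of \(f_i\) gives the recursion \(f_{j+1}(n) = f_j^{\beta-2}(n)\,C\) (this is identity (3.7) below), which makes the band boundaries easy to tabulate. The sketch below, with illustrative values of \(f_0\), \(C\) and \(\beta\) of our own choosing, also illustrates the doubly-logarithmic bound (3.5) on \(T(n)\): squaring \(f_0\) adds only one band.

```python
def band_boundaries(f0, C, beta):
    """Tabulate f_0 > f_1 > ... while f_j >= C^{2/(3-beta)}, using the
    recursion f_{j+1} = f_j^{beta-2} * C; here T(n) = len(result) - 1."""
    cutoff = C ** (2 / (3 - beta))
    fs = [f0]
    while fs[-1] ** (beta - 2) * C >= cutoff:
        fs.append(fs[-1] ** (beta - 2) * C)
    return fs

fs = band_boundaries(10 ** 6, 2.0, 2.5)       # cutoff = 2^4 = 16
assert len(fs) - 1 == 3                       # T = 3 for this f_0
assert all(a > b for a, b in zip(fs, fs[1:])) # strictly decreasing
# squaring f_0 increases T by just one, reflecting T(n) = O(log log n):
assert len(band_boundaries(10 ** 12, 2.0, 2.5)) - 1 == 4
```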

The previous analysis has shown that a.a.s., if we infect randomly \(a(n)\) vertices in \(\mathcal {L}_0\), then at least \((1-\varepsilon )|\mathcal {L}_0 |\) vertices of \(\mathcal {L}_0\) become infected. The following lemma serves as the inductive step; its proof is postponed to the end of this section. \(\square \)

Lemma 3.8

There exists an \(\varepsilon _0 = \varepsilon _0 (\beta , \gamma _1, \gamma _2) >0\) such that for any positive \(\varepsilon < \varepsilon _0\) there exists \(C=C(\gamma _1,\gamma _2, \beta , \varepsilon , r) >0\) for which the following holds. For \(j=0,\ldots , T(n)\), if \((1-\varepsilon )|\mathcal {L}_s|\) vertices of \(\mathcal {L}_s\) have been infected, for \(s=0,\ldots , j\), then with probability at least \(1 - \exp \left( - {\varepsilon ^2 |\mathcal {L}_{j+1}| \over 16} \right) \) at least \((1- \varepsilon ) |\mathcal {L}_{j+1}|\) vertices of \(\mathcal {L}_{j+1}\) become infected.

The above lemma implies that the probability that for \(j=1,\ldots , T(n)\) there are at least \((1-\varepsilon )|\mathcal {L}_j|\) vertices in \(\mathcal {L}_j\) that become infected, conditional on almost complete infection of \(\mathcal {K}_f\), is at least

$$\begin{aligned} \prod _{j=0}^{T(n)-1} \left( 1 - \exp \left( - {\varepsilon ^2 |\mathcal {L}_{j+1}|\over 16} \right) \right) \ge 1 - \sum _{j=0}^{T(n)-1} \exp \left( - {\varepsilon ^2 |\mathcal {L}_{j+1}| \over 16} \right) . \end{aligned}$$
(3.6)
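The product-to-sum step above is an instance of the Weierstrass inequality \(\prod _i (1-x_i) \ge 1 - \sum _i x_i\) for \(x_i \in [0,1]\); a quick numerical sanity check, with arbitrary values purely for illustration:

```python
import random

random.seed(0)
# Weierstrass / union-bound inequality: prod(1 - x_i) >= 1 - sum(x_i)
# for any x_i in [0, 1]; checked here at randomly drawn points.
for _ in range(1000):
    xs = [random.random() for _ in range(random.randint(1, 20))]
    prod = 1.0
    for x in xs:
        prod *= 1.0 - x
    assert prod >= 1.0 - sum(xs) - 1e-12
```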

Thus, we need to bound from above the sum on the right-hand side of the above inequality. To this end, we need to bound \(|\mathcal {L}_{j+1}|\) from below, for \(j=0,\ldots , T(n)-1\). Note that

$$\begin{aligned} |\mathcal {L}_{j+1}| = |\mathcal {K}_{f_{j+1}}| - |\mathcal {K}_{f_{j}}|. \end{aligned}$$

To bound the quantities on the right-hand side we use the bounds given by Definition 2.2. In particular, this implies that

$$\begin{aligned} |\mathcal {L}_{j+1}| \ge n \left( \gamma _1 f_{j+1}^{-\beta + 1}(n) - \gamma _2 f_{j}^{-\beta +1}(n)\right) . \end{aligned}$$

We use the definition of \(f_j\) and in particular the identity:

$$\begin{aligned} f_{j+1}(n)&= f_0^{g_{\beta ,n}^{(j+1)}(1)}(n) = f_0^{(\beta -2)g_{\beta ,n}^{(j)}(1) + \psi (n)}(n) \nonumber \\&= f_{j}^{\beta -2 }(n) f_0^{\psi (n)}(n) = f_{j}^{\beta -2}(n) C. \end{aligned}$$
(3.7)
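Identity (3.7) can be spot-checked numerically; the sketch below uses the illustrative values \(\beta = 2.5\), \(C = 5\) and \(f_0(n) = 50\) (any \(f_0(n) > 1\) works), which are not taken from the proof:

```python
import math

# Spot-check of identity (3.7): f_{j+1}(n) = f_j(n)^(beta-2) * C.
# Illustrative values only; note f_0(n)^psi(n) = C by the definition of psi.
beta, C, f0 = 2.5, 5.0, 50.0
psi = math.log(C) / math.log(f0)

def g_iter(i, x=1.0):
    """i-fold iterate g^{(i)}(x) of g(x) = (beta-2)x + psi."""
    for _ in range(i):
        x = (beta - 2) * x + psi
    return x

for j in range(6):
    f_j = f0 ** g_iter(j)
    f_j1 = f0 ** g_iter(j + 1)
    assert abs(f_j1 - f_j ** (beta - 2) * C) < 1e-9 * f_j1
```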

Thus we write

$$\begin{aligned} |\mathcal {L}_{j+1}|&\ge n \left( \gamma _1 f_{j+1}^{-\beta + 1}(n) - \gamma _2 f_{j}^{-\beta +1}(n)\right) \\&= n \gamma _1 C^{-\beta + 1} f_{j}^{-(\beta - 1)(\beta -2)} (n)\left( 1- {\gamma _2 C^{\beta -1}\over \gamma _1} f_{j}^{(\beta - 1)(\beta - 3)}(n)\right) \\&\ge n \gamma _1 C^{-\beta + 1} f_{j}^{-(\beta - 1)(\beta -2)} (n)\left( 1- {\gamma _2 C^{-\beta +1}\over \gamma _1} \right) , \end{aligned}$$

where in the last inequality we used the fact that \(f_{j} (n) \ge C^{2 \over 3-\beta }\), which implies that \( f_{j}^{(\beta -1)(\beta -3)} (n) \le C^{-2(\beta -1)}\). Also, \(f_{j} (n) \le f_0(n) = f(n)\). Thus, if \(C\) is large enough, we obtain:

$$\begin{aligned} |\mathcal {L}_{j+1}| \ge n f^{-(\beta - 1)(\beta -2)} (n) {\gamma _1 C^{-\beta + 1} \over 2}. \end{aligned}$$

But since \(f(n)=o(n^{\zeta })\) we have \(f(n)= o(n^{1/(\beta -1)})\) as well. So, for \(n\) sufficiently large, \(f(n) \le n^{1/(\beta -1)}\). Thereby we obtain \(f^{(\beta -1)(\beta -2)}(n) \le n^{\beta -2}\). Hence, there exists a constant \(C'=C'(C,\beta , \gamma _1,\gamma _2)\) such that for any \(n\) sufficiently large we have

$$\begin{aligned} |\mathcal {L}_{j+1}| \ge C' n^{3-\beta }. \end{aligned}$$

Substituting this lower bound into the right-hand side of (3.6) and using (3.5) to bound the number of summands there, we deduce that this sum is \(o(1)\).

We conclude with the proof of Lemma 3.8.

Proof of Lemma 3.8

Assume that for some integer \(0\le j \le T(n)\) and for each \(s=0,\ldots , j\) there are at least \((1-\varepsilon )|\mathcal {L}_s|\) infected vertices in \(\mathcal {L}_s\). For \(s=0,\ldots , j\), let \(I_s \subseteq \mathcal {L}_s\) be the set of infected vertices of \(\mathcal {L}_s\) and let \(\hat{I}_{j} := \cup _{s=0}^{j} I_s\).

Consider a vertex \(i \in \mathcal {L}_{j+1}\) and let \(d_{\hat{I}_j}(i)\) denote the degree of \(i\) in the set \(\hat{I}_j\), that is, the number of neighbours of vertex \(i\) in this set. We condition on the event that for \(s=0,\ldots , j\) there are \((1-\varepsilon )|\mathcal {L}_s|\) infected vertices in \(\mathcal {L}_s\) – thus every probability calculation in this proof is conditional on this event. We will first calculate the expected value of \(d_{\hat{I}_j}(i)\) and show that it is large enough so that the probability that \(d_{\hat{I}_j}(i) < r\) is less than \(\varepsilon ^2\). Thereafter, we apply the Chernoff bound for sums of indicator random variables to bound the probability that there are at least \(2\varepsilon |\mathcal {L}_{j+1}|\) such vertices in the set \(\mathcal {L}_{j+1}\) and conclude the proof of the lemma.

To carry out these calculations we will need estimates on the total weight of the vertices that belong to a kernel. Here and elsewhere, the Landau notation involves constants that depend only on \(\gamma _1, \gamma _2\) as well as on the average degree of the random graph.

Claim 3.9

Let \(f: \mathbb {N} \rightarrow \mathbb {R}^+\) be such that \(f(n) \ll n^{\zeta }\). For \(\beta \in (2,3)\) we have

$$\begin{aligned} \sum _{i \in \mathcal {K}_f } w_i = \Theta \left( {n \over f^{\beta -2}(n)}\right) . \end{aligned}$$

Proof

By Definition 2.2, there exists a positive real \(x_0\) such that for every \(x_0 \le s \le n^{\zeta }\) we have

$$\begin{aligned} \gamma _1 s^{-\beta + 1} \le 1 - F_n(s) \le \gamma _2 s^{-\beta +1}, \end{aligned}$$
(3.8)

whereas for \(s < x_0\) we have \(F_n(s)=0\) and for \(s > n^{\zeta }\) we have \(F_n(s)=1\). Thus, since \(f(n) \le n^{\zeta }\), we have

$$\begin{aligned} \gamma _1 f^{-\beta + 1}(n)\le {|\mathcal {K}_f|\over n} \le \gamma _2 f^{-\beta +1}(n). \end{aligned}$$
(3.9)

We define the function \(h_n\) on \([0,1]\) as follows. For \(0 \le x \le 1- F_n (n^{\zeta })\) we set \(h_n (x) = n^{\zeta }\), and for \(1-F_n(n^{\zeta }) < x \le 1\) we set \(h_n(x) = [1-F_n]^{-1}(x)\). Thus we have

$$\begin{aligned} \sum _{i \in \mathcal {K}_f} w_i&= n\int \limits _{0}^{|\mathcal {K}_f|/n}h_n(x) dx = n \left( \int \limits _{0}^{1-F_n(n^{\zeta })}h_n(x) dx + \int \limits _{1-F_n(n^{\zeta })}^{|\mathcal {K}_f|/n}h_n(x) dx\right) \\&= n^{1+\zeta }(1-F_n(n^{\zeta })) + n \int \limits _{1-F_n(n^{\zeta })}^{|\mathcal {K}_f|/n}h_n(x) dx \\&= \Theta \left( n^{1-\zeta (\beta -2)} \right) + n \int \limits _{1-F_n(n^{\zeta })}^{|\mathcal {K}_f|/n}h_n(x) dx. \end{aligned}$$

Since \(f(n) \ll n^{\zeta }\) it suffices to show that the second integral on the right-hand side satisfies the bounds of the claim.

Let us also define for every \(x \in (0,1]\) the functions \(h_{1,n} (x) = \inf \{ s \ : \ \gamma _1 s^{-\beta + 1} \le x \}\) and \(h_{2,n} (x) = \inf \{ s \ : \ \gamma _2 s^{-\beta +1} \le x \}\). By (3.8), for any \(x \in (1-F_n(n^{\zeta }),1]\)

$$\begin{aligned} \{ s \ : \ \gamma _2 s^{-\beta +1} \le x \} \subseteq \{ s \ : \ 1-F_n(s) \le x \} \subseteq \{ s \ : \ \gamma _1 s^{-\beta + 1} \le x \}, \end{aligned}$$

which implies that

$$\begin{aligned} h_{1,n} (x) \le h_n(x) \le h_{2,n} (x). \end{aligned}$$

Note that \(h_{1,n} (x) = \left( {\gamma _1/x}\right) ^{1\over \beta -1}\) and \(h_{2,n} (x) = \left( {\gamma _2/x}\right) ^{1\over \beta -1}\). Hence

$$\begin{aligned} \int \limits _{1-F_n(n^{\zeta })}^{|\mathcal {K}_f|/n} \left( {\gamma _1 \over x}\right) ^{1\over \beta -1}dx \le \int \limits _{1-F_n(n^{\zeta })}^{|\mathcal {K}_f|/n} h_n(x) dx \le \int \limits _{1-F_n(n^{\zeta })}^{|\mathcal {K}_f|/n} \left( {\gamma _2 \over x}\right) ^{1\over \beta -1}dx. \end{aligned}$$
(3.10)

For \(\ell \in \{1,2\}\) and since \(\beta \in (2,3)\) we have

$$\begin{aligned} \int \limits _{1-F_n(n^{\zeta })}^{|\mathcal {K}_f|/n} \left( {\gamma _\ell \over x}\right) ^{1\over \beta -1}dx&= \gamma _\ell ^{1\over \beta -1} \int \limits _{1-F_n(n^{\zeta })}^{|\mathcal {K}_f|/n} \left( {1\over x}\right) ^{1\over \beta -1}dx \nonumber \\&= \gamma _\ell ^{1\over \beta -1}~{\beta - 1 \over \beta -2}~ \left[ \left( {|\mathcal {K}_f|\over n} \right) ^{\beta -2 \over \beta -1} - (1-F_n(n^{\zeta }))^{{\beta -2 \over \beta -1}}\right] . \end{aligned}$$

As \(1- F_n (n^{\zeta }) = \Theta \left( n^{-\zeta (\beta -1)} \right) \) and \(f(n)=o(n^{\zeta })\), substituting the bounds of (3.9) the claim follows. \(\square \)
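As a sanity check of Claim 3.9, one can take the exact Pareto-type weight sequence \(w_i = (n/i)^{1/(\beta -1)}\), an illustrative choice which satisfies the power-law bounds of Definition 2.2 with \(\gamma _1 = \gamma _2 = 1\) up to lower-order terms, and verify numerically that the kernel weight is of order \(n/f^{\beta -2}(n)\):

```python
# Sanity check of Claim 3.9 for the Pareto-type weights w_i = (n/i)^(1/(beta-1)):
# the kernel weight sum_{w_i >= f} w_i should be Theta(n / f^(beta-2)(n)),
# here with a constant close to (beta-1)/(beta-2) = 3.
beta = 2.5
for n in (10**5, 10**6):
    f = n ** 0.2                        # some f(n) -> infinity with f(n) = o(n^zeta)
    m = int(n * f ** (-(beta - 1)))     # number of vertices with w_i >= f
    kernel_weight = sum((n / i) ** (1 / (beta - 1)) for i in range(1, m + 1))
    ratio = kernel_weight / (n / f ** (beta - 2))
    assert 2.5 < ratio < 3.5
```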

Let us continue with the estimate on \(\mathbb {E}\, \big [{d_{\hat{I}_j} (i)}\big ]\).

Lemma 3.10

There exists \(\varepsilon _0 = \varepsilon _0 (\beta , \gamma _1, \gamma _2)>0\) such that for every \(\varepsilon < \varepsilon _0\), uniformly for all \(i\in \mathcal {L}_{j+1}\), we have

$$\begin{aligned} \mathbb {E} \left[ \, d_{\hat{I}_j} (i)\,\right] = \Omega (C). \end{aligned}$$

Proof

Firstly, the definition of the random graph model implies that

$$\begin{aligned} \mathbb {E} \left[ \, d_{\hat{I}_j} (i)\,\right] = {w_i W_{\hat{I}_j} \over W_{[n]}}. \end{aligned}$$
(3.11)

The weight \(w_i\) is bounded from below by \(f_{j+1}(n)\). We now need to bound \(W_{\hat{I}_j}\) from below.

Claim 3.11

We have

$$\begin{aligned} W_{\hat{I}_j} = W_{\mathcal {K}_{f_j}} \left( 1 - O\left( \varepsilon ^{\beta - 2 \over \beta -1} \right) \right) . \end{aligned}$$

Proof

Note that \(|\hat{I}_j|\ge (1-\varepsilon ) |\mathcal {K}_{f_j}|\), which implies that \(W_{\hat{I}_j}\) is at least as large as the total weight of the \((1-\varepsilon ) |\mathcal {K}_{f_j}|\) vertices of smallest weight in \(\mathcal {K}_{f_j}\). To this end, we need to determine a function \(\tilde{f}_j\) such that \(\mathcal {K}_{\tilde{f}_j}\) has size \(\varepsilon |\mathcal {K}_{f_j}|\). By Definition 2.2 we have

$$\begin{aligned} \gamma _1 \tilde{f}_j(n)^{-\beta + 1} \le {|\mathcal {K}_{\tilde{f}_j}|\over n} = {\varepsilon |\mathcal {K}_{f_j}|\over n} \le {\varepsilon \gamma _2} f_j(n)^{-\beta +1}. \end{aligned}$$

This inequality implies that

$$\begin{aligned} \tilde{f}_j(n) \ge f_j(n) \left( {1\over \varepsilon }~{\gamma _1 \over \gamma _2}\right) ^{1\over \beta -1}. \end{aligned}$$

Thus, by Claim 3.9 we have

$$\begin{aligned} W_{\mathcal {K}_{\tilde{f}_j}} = O \left( {n \over f_j^{\beta -2}(n)} \left( {\varepsilon \gamma _2 \over \gamma _1}\right) ^{\beta -2 \over \beta -1}\right) . \end{aligned}$$

In turn, this implies that

$$\begin{aligned} W_{\hat{I}_j} \ge W_{\mathcal {K}_{f_j}} \left( 1 - O\left( \varepsilon ^{\beta - 2 \over \beta -1} \right) \right) . \end{aligned}$$

As \(W_{\hat{I}_j} \le W_{\mathcal {K}_{f_j}}\), the claim follows. \(\square \)

Thus, if \(\varepsilon \) is small enough, then the right-hand side of (3.11) is bounded from below as follows.

$$\begin{aligned} \mathbb {E} \left[ \, d_{\hat{I}_j} (i)\,\right] \ge {1\over 2} {f_{j+1}(n) W_{\mathcal {K}_{f_j}} \over W_{[n]}} = \Omega \left( f_{j+1}(n) f_j^{-\beta +2}(n)\right) , \end{aligned}$$
(3.12)

using Claim 3.9 for the lower bound on \(W_{\mathcal {K}_{f_j}}\). By (3.7)

$$\begin{aligned} f_{j+1}(n) f_j^{-\beta +2}(n) =f_j^{\beta -2 }(n) f_0^{\psi (n)}(n) f_j^{-\beta +2}(n) = f_0^{\psi (n)}(n) = C. \end{aligned}$$

Substituting this into the right-hand side of (3.12) yields the lower bound in the lemma. \(\square \)

The next step is to show that if \(C\) is large enough, then the probability that \(d_{\hat{I}_j}(i) < r\) can become as small as we need.

Lemma 3.12

Let \(\varepsilon _0\) be as in Lemma 3.10. For all \(\varepsilon < \varepsilon _0\) there exists \(C=C(\gamma _1, \gamma _2, \beta , \varepsilon , r)>0\) such that for \(j=0,\ldots , T(n)\) and for all \(i \in \mathcal {L}_{j+1}\)

$$\begin{aligned} \mathbb {P} \left[ \,d_{\hat{I}_j}(i) < r \,\right] < \varepsilon ^2. \end{aligned}$$

Proof

We will bound this probability using Chebyshev’s inequality. Observe that \(d_{\hat{I}_j}(i) = \sum _{\ell \in \hat{I}_j} \mathsf {Be}\small {\left( {w_i w_{\ell } \over W_{[n]}}\right) }\), where the summands are independent random variables. Hence, \(\mathsf {Var}[d_{\hat{I}_j}(i) ] \le \mathbb {E}\, \big [{d_{\hat{I}_j}(i) }\big ]\). By Lemma 3.10, the latter is \(\Omega (C)\). Thus, if \(C\) is large enough, so that \(\mathbb {E}\, \big [{d_{\hat{I}_j}(i)}\big ] -r > \mathbb {E}\, \big [{d_{\hat{I}_j}(i)}\big ] / 2\), then Chebyshev’s inequality implies that

$$\begin{aligned} \mathbb {P} \left[ \,d_{\hat{I}_j}(i) < r \,\right] = O \left( {1 \over \mathbb {E} \left[ \, d_{\hat{I}_j}(i)\,\right] }\right) . \end{aligned}$$

By Lemma 3.10, the latter is \(O \left( {1/ C} \right) \). Making \(C\) even larger, so that the right-hand side becomes smaller than \(\varepsilon ^2\), the lemma follows. \(\square \)
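The Chebyshev step can be illustrated by simulation: for a sum of independent Bernoulli variables with mean \(\mu \) and \(r \le \mu /2\), the inequality gives \(\mathbb {P}[X < r] \le 4/\mu \), and the empirical tail is typically far smaller. A minimal sketch with illustrative parameters:

```python
import random

random.seed(1)
# Sum of independent Bernoulli(0.1) variables with mean mu = 20 and r = 2:
# since Var[X] <= mu and mu - r >= mu/2, Chebyshev gives
# P[X < r] <= Var[X]/(mu - r)^2 <= 4/mu = 0.2.
mu, r, trials = 20.0, 2, 5000
probs = [0.1] * 200                    # illustrative success probabilities
hits = 0
for _ in range(trials):
    x = sum(1 for p in probs if random.random() < p)
    if x < r:
        hits += 1
emp = hits / trials                    # empirical P[X < r], far below 4/mu
assert emp <= 4 / mu
```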

Setting \(X_{j+1}:= \sum _{i \in \mathcal {L}_{j+1}} \mathbf {1}[d_{\hat{I}_j}(i) \ge r]\), we have \(|I_{j+1}|\ge X_{j+1}\). As \(X_{j+1}\) is a sum of independent Bernoulli random variables, we will use the Chernoff bound to show that with high probability \(X_{j+1} > |\mathcal {L}_{j+1}| (1-2\varepsilon )\). By Lemma 3.12

$$\begin{aligned} \mathbb {E} \left[ \, X_{j+1}\,\right] \ge |\mathcal {L}_{j+1}|(1-\varepsilon ^2). \end{aligned}$$
(3.13)

Thus, if \(X_{j+1} \le |\mathcal {L}_{j+1}| (1-2\varepsilon )\), then \(\mathbb {E} \left[ \, X_{j+1}\,\right] - X_{j+1} \ge \varepsilon |\mathcal {L}_{j+1}|\). Setting \(t:= \varepsilon |\mathcal {L}_{j+1}|\), Theorem 2.1 in [22] (Inequality (2.6)) yields

$$\begin{aligned} \mathbb {P} \left[ \,X_{j+1} \le \mathbb {E} \left[ \, X_{j+1}\,\right] - t\,\right] \le \exp \left( - {t^2 \over 2 \mathbb {E} \left[ \, X_{j+1}\,\right] } \right) \le \exp \left( - {\varepsilon ^2\over 2} |\mathcal {L}_{j+1}| \right) , \end{aligned}$$

for \(\varepsilon < 1/2\). Thus, we deduce that

$$\begin{aligned} \mathbb {P} \left[ \,|I_{j+1}|\ge |\mathcal {L}_{j+1}| (1-2\varepsilon )\,\right] \ge 1 - \exp \left( - {\varepsilon ^2\over 2} |\mathcal {L}_{j+1}| \right) . \end{aligned}$$
(3.14)

\(\square \)
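The lower-tail Chernoff bound used above, \(\mathbb {P}[X \le \mathbb {E}X - t] \le \exp (-t^2/(2\,\mathbb {E}X))\), can be verified exactly in the i.i.d. special case \(X \sim \mathsf {Bin}(m,p)\) (the quoted inequality only needs independence, but the binomial case is easy to compute exactly):

```python
import math

# Exact check of the lower-tail Chernoff bound quoted from (2.6):
# P[X <= EX - t] <= exp(-t^2 / (2 EX)) for X ~ Bin(m, p).
m, p = 200, 0.9
mu = m * p
for t in (5.0, 10.0, 20.0):
    k_max = math.floor(mu - t)
    lower_tail = sum(math.comb(m, k) * p**k * (1 - p)**(m - k)
                     for k in range(k_max + 1))
    assert lower_tail <= math.exp(-t * t / (2 * mu))
```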

3.3 Proof of Theorem 2.3: Subcritical Case

We will use a first moment argument to show that if \(a(n) =o (a_c(n))\), then a.a.s. there are no vertices outside \(\mathcal {A}_0\) that have at least \(r\) neighbours in \(\mathcal {A}_0\) and, therefore, the bootstrap percolation process does not actually evolve. Here we assume that initially each vertex becomes infected with probability \(a(n)/n\), independently of every other vertex.

For every vertex \(i \in [n]\), we define an indicator random variable \(X_i\) which is one precisely when vertex \(i\) has at least \(r\) neighbours in \(\mathcal {A}_0\). Let \(X = \sum _{i \in [n]} X_i\). Our aim is to show that \(\mathbb {E} \left[ \, X\,\right] = o(1)\), thus implying that a.a.s. \(X=0\).

For \(i\in [n]\) let \(p_i=\mathbb {E} \left[ \, X_i\,\right] = \mathbb {P} \left[ \,X_i=1\,\right] \). We will first give an upper bound on \(p_i\) and, thereafter, the linearity of the expected value will conclude our statement.

Lemma 3.13

For all integers \(r\ge 2\) and all \(i\in [n]\), we have

$$\begin{aligned} p_i \le \left( {e w_i a(n) \over r n}\right) ^r. \end{aligned}$$

From this, we can use the linearity of the expected value to deduce an upper bound on \(\mathbb {E} \left[ \, X\,\right] \). We have

$$\begin{aligned} \mathbb {E} \left[ \, X\,\right] = \sum _{i \in [n]} p_i \le \sum _{i \in [n]} \left( {e w_i a(n) \over r n} \right) ^r = o \left( \left( {a_c (n) \over n}\right) ^r \right) ~\sum _{i \in [n]} w_i^r. \end{aligned}$$
(3.15)

We now need to give an estimate on \(\sum _{i \in [n]} w_i^r\).

Claim 3.14

For all integers \(r\ge 2\) and for \(\beta \in (2,3)\) we have

$$\begin{aligned} \sum _{i \in [n]} w_i^r = \Theta \left( n^{1+\zeta (r -\beta +1)} \right) . \end{aligned}$$

Proof

By Definition 2.2, there exists a positive real \(x_0\) such that for every \(x_0 \le s \le n^{\zeta }\) we have

$$\begin{aligned} \gamma _1 s^{-\beta + 1} \le 1 - F_n(s) \le \gamma _2 s^{-\beta +1}, \end{aligned}$$
(3.16)

whereas for \(s < x_0\) we have \(F_n(s)=0\) and for \(s > n^{\zeta }\) we have \(F_n(s)=1\). As before, we define the function \(h_n\) on \([0,1]\) as follows. For \(0 \le x \le 1- F_n (n^\zeta )\) we set \(h_n (x) = n^{\zeta }\), and for \(1-F_n(n^{\zeta }) < x \le 1\) we set \(h_n(x) = [1-F_n]^{-1}(x)\). Hence, we write

$$\begin{aligned} \sum _{i \in [n]} w_i^r&= n\int _{0}^{1}h_n^r(x) dx = n \left( \int _{0}^{1-F_n(n^{\zeta })} h_n^r (x) dx + \int _{1-F_n(n^{\zeta })}^1 h_n^r (x) dx \right) \\&= \Theta \left( n^{1 + \zeta (r -\beta + 1)} \right) + n\int _{1-F_n(n^{\zeta })}^1 h_n^r (x) dx. \end{aligned}$$

Hence, it suffices to show that the integral on the right-hand side satisfies the bounds of the claim.

Let us also define for every \(x \in (0,1]\) the functions \(h_{1,n} (x) = \inf \{ s \ : \ \gamma _1 s^{-\beta + 1} \le x \}\) and \(h_{2,n} (x) = \inf \{ s \ : \ \gamma _2 s^{-\beta +1} \le x \}\). By (3.16), for any \(x \in (1-F_n(n^{\zeta }),1]\)

$$\begin{aligned} \{ s \ : \ \gamma _2 s^{-\beta +1} \le x \} \subseteq \{ s \ : \ 1-F_n(s) \le x \} \subseteq \{ s \ : \ \gamma _1 s^{-\beta + 1} \le x \}, \end{aligned}$$

which implies that

$$\begin{aligned} h_{1,n} (x) \le h_n(x) \le h_{2,n} (x). \end{aligned}$$

Note that \(h_{1,n} (x) = \left( {\gamma _1/x}\right) ^{1\over \beta -1}\) and \(h_{2,n} (x) = \left( {\gamma _2/x}\right) ^{1\over \beta -1}\). Hence

$$\begin{aligned} \int \limits _{1-F_n(n^{\zeta })}^{1} \left( {\gamma _1 \over x}\right) ^{r\over \beta -1}dx \le \int \limits _{1-F_n(n^{\zeta })}^{1} h_n^r(x) dx \le \int \limits _{1-F_n(n^{\zeta })}^{1} \left( {\gamma _2 \over x}\right) ^{r\over \beta -1}dx. \end{aligned}$$
(3.17)

For \(\ell \in \{1,2\}\), since \(\beta \in (2,3)\) and \(r\ge 2\), we have

$$\begin{aligned} \int \limits _{1-F_n(n^{\zeta })}^{1} \left( {\gamma _\ell \over x}\right) ^{r\over \beta -1}dx&= \gamma _\ell ^{r\over \beta -1} \int \limits _{1-F_n(n^{\zeta })}^{1} \left( {1\over x}\right) ^{r\over \beta -1}dx \\&= \gamma _\ell ^{r\over \beta -1}~{\beta - 1 \over r-\beta +1}~\left[ (1-F_n(n^{\zeta }))^{-{r \over \beta -1}+1} - 1 \right] . \end{aligned}$$

Recall that \(1-F_n(n^{\zeta }) = \Theta (n^{-\zeta (\beta -1)})\). Thus through (3.17) we deduce that for \(r\ge 2\) and \(\beta \in (2,3)\)

$$\begin{aligned} n \int \limits _{1- F_n(n^{\zeta })}^{1} h_n^r(x) dx = \Theta \left( n^{1+\zeta (r - \beta + 1)}\right) . \end{aligned}$$

The claim now follows. \(\square \)
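Claim 3.14 can likewise be sanity-checked on the capped Pareto-type sequence \(w_i = \min (n^{\zeta }, (n/i)^{1/(\beta -1)})\), an illustrative weight sequence in the spirit of Definition 2.2 (as in the check of Claim 3.9):

```python
# Sanity check of Claim 3.14 for w_i = min(n^zeta, (n/i)^(1/(beta-1))):
# sum_i w_i^r should be Theta(n^(1 + zeta*(r - beta + 1))).
beta, r, zeta = 2.5, 2, 0.4            # illustrative; zeta < 1/(beta-1)
for n in (10**5, 10**6):
    cap = n ** zeta
    total = sum(min(cap, (n / i) ** (1 / (beta - 1))) ** r
                for i in range(1, n + 1))
    ratio = total / n ** (1 + zeta * (r - beta + 1))
    assert 1.0 < ratio < 10.0          # ratio stays bounded as n grows
```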

Substituting this bound into the right-hand side of (3.15), we obtain:

$$\begin{aligned} \mathbb {E} \left[ \, X\,\right] = o \left( {n^{r(1-\zeta ) + \zeta (\beta - 1) - 1} \over n^r}~n^{1+\zeta (r - \beta + 1)}\right) . \end{aligned}$$

But

$$\begin{aligned} r(1-\zeta ) + \zeta (\beta - 1) - 1 - r +1 +\zeta (r - \beta + 1)= 0, \end{aligned}$$

thus implying that \(\mathbb {E} \left[ \, X\,\right] = o(1)\). We finish the proof of this part of Theorem 2.3 with the proof of Lemma 3.13.
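The cancellation of the exponents can be spot-checked at random parameter values:

```python
import random

random.seed(2)
# The exponent of n in E[X] vanishes identically:
# r(1-z) + z(b-1) - 1 - r + 1 + z(r - b + 1) = 0 for all r, b(=beta), z(=zeta).
for _ in range(100):
    r = random.randint(2, 10)
    b = random.uniform(2.0, 3.0)
    z = random.uniform(0.01, 1 / (b - 1))
    assert abs(r*(1 - z) + z*(b - 1) - 1 - r + 1 + z*(r - b + 1)) < 1e-12
```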

Proof of Lemma 3.13

Note that for all \(i\in [n]\) we have

$$\begin{aligned} p_i = \mathbb {P} \left[ \,\sum _{j \in [n] \setminus \{i\} } e_{ij} {\mathbf 1}[j \in \mathcal {A}_0] \ge r\,\right] , \end{aligned}$$

where \(e_{ij}\) is the indicator random variable that is equal to 1 precisely when the pair \(\{i,j\}\) belongs to the set of edges of \(CL (\mathbf {w})\). The random variable \(e_{ij}{\mathbf 1}_{j \in \mathcal {A}_0}\) is Bernoulli distributed with expected value equal to \({w_i w_j \over W_{[n]}}~{a(n) \over n}\). We denote it by \(I_j\), for all \(j\in [n]\setminus \{ i\}\).

We will use a Chernoff-bound-like technique to bound this probability. Let \(\theta >0\) be a real number. We have

$$\begin{aligned} \mathbb {P} \left[ \,\sum _{j \in [n]\setminus \{ i \} } I_j \ge r\,\right]&= \mathbb {P} \left[ \, \exp \left( \theta \sum _{j \in [n]\setminus \{ i \} } I_j\right) \ge \exp \left( \theta r \right) \,\right] \\&\le {\mathbb {E} \left[ \, \exp \left( \theta \sum _{j \in [n]\setminus \{ i \} } I_j\right) \,\right] \over e^{\theta r}} = {\prod _{j \in [n]\setminus \{ i \}} \mathbb {E} \left[ \, e^{\theta I_j}\,\right] \over e^{\theta r}}\\&= {\prod _{j \in [n]\setminus \{ i \}} \left( e^{\theta }~{w_i w_j \over W_{[n]}}~{a(n) \over n} + \left( 1- {w_i w_j \over W_{[n]}}~{a(n) \over n} \right) \right) \over e^{\theta r}} \\&\le {\prod _{j \in [n]\setminus \{ i \}} \exp \left( (e^{\theta } -1)~{w_i w_j \over W_{[n]}}~{a(n) \over n} \right) \over e^{\theta r}}\\&= {\exp \left( (e^{\theta } -1)~\sum _{j \in [n]\setminus \{ i \}}{w_i w_j \over W_{[n]}}~{a(n) \over n} \right) \over e^{\theta r}} \\&\le {\exp \left( (e^{\theta } -1)~ w_i~{a(n) \over n} -\theta r \right) }. \end{aligned}$$

The exponent in the last expression is minimized when \(\theta \) is such that \(e^{\theta } = {r n \over w_i a(n)}\). Thus, we obtain

$$\begin{aligned} \mathbb {P} \left[ \,\sum _{j \in [n]\setminus \{ i \} } I_j \ge r\,\right]&\le \exp \left( r - {w_i a(n) \over n}\right) ~ \left( {w_i a(n)\over rn} \right) ^r \\&= \left[ \exp \left( 1 - {w_i a(n) \over rn}\right) ~ \left( {w_i a(n)\over rn} \right) \right] ^r \le \left( {e w_i a(n)\over rn} \right) ^r. \end{aligned}$$

\(\square \)
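The resulting bound of Lemma 3.13 can be checked by simulation: for independent Bernoulli summands with total mean \(\mu \) (which plays the role of \(w_i a(n)/n\) in the proof), the bound reads \(\mathbb {P}[\sum _j I_j \ge r] \le (e\mu /r)^r\). The sketch below uses illustrative means rather than the model’s exact edge probabilities:

```python
import math, random

random.seed(3)
# Monte-Carlo check of P[sum I_j >= r] <= (e * mu / r)^r for a sum of
# independent Bernoulli variables with total mean mu (illustrative means).
r, means = 3, [0.01] * 100             # mu = 1.0
mu = sum(means)
bound = (math.e * mu / r) ** r         # = (e/3)^3, roughly 0.744
trials, hits = 20000, 0
for _ in range(trials):
    if sum(1 for p in means if random.random() < p) >= r:
        hits += 1
assert hits / trials <= bound
```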

3.4 Proof of Theorem 2.3: Supercritical Case

In this part of the proof, we shall assume that \(a_c(n) = o(a(n))\). Additionally, we shall assume that the initially infected set is the set of the \(a(n)\) vertices of smallest weight.

We will show first that there exists a function \(f:\mathbb {N} \rightarrow \mathbb {R}^+\) such that \(f(n) \rightarrow \infty \) as \(n \rightarrow \infty \) but \(f(n) = o (n^{\zeta })\) for which a.a.s. \(\mathcal {K}_f\) will become completely infected. Thereafter, using the proof of Theorem 2.4 we will deduce that there exists a real number \(C >0\) such that with high probability \(\mathcal {K}_C\) will be almost completely infected. This implies that there exists an \(\varepsilon >0\) such that a.a.s. at least \(\varepsilon n\) vertices become infected.

3.5 Spreading the Infection to a Positive Fraction of the Vertices

We begin by determining the function \(f\), much as we did in the proof of Theorem 2.4. To this end, we need to bound from below the probability that an arbitrary vertex in \(\mathcal {K}_f\) becomes infected. In fact, we shall bound from below the probability that an arbitrary vertex in \(\mathcal {K}_f\) becomes infected already in the first round. Note that this amounts to bounding the probability that such a vertex has at least \(r\) neighbours in \(\mathcal {A}_0\). These events, over the vertices of \(\mathcal {K}_f\), are mutually independent, so the first round dominates the random independent infection of the vertices of \(\mathcal {K}_f\), where each one is infected with probability equal to the derived lower bound. Recall that the random graph induced on \(\mathcal {K}_f\) stochastically contains an Erdős-Rényi random graph with the appropriate parameters. This observation allows us to determine \(f\). To be more specific, if the probability that any given vertex in \(\mathcal {K}_f\) becomes infected exceeds the complete infection threshold of this Erdős-Rényi random graph and the condition of Theorem 3.1 (iii) is satisfied, then a.a.s. \(\mathcal {K}_f\) eventually becomes completely infected. This condition will specify \(f\).

Under the assumption that \(\mathcal {A}_0\) consists of the \(a(n)\) vertices of smallest weight, we will bound from below the probability that a vertex \(v \in \mathcal {K}_f\) has at least \(r\) neighbours in \(\mathcal {A}_0\). We denote the degree of \(v\) in \(\mathcal {A}_0\) by \(d_{\mathcal {A}_0}(v)\) and note that this random variable is equal to \(\sum _{i \in \mathcal {A}_0} \mathsf {Be}\left( {w_v w_i \over W_{[n]}} \right) \), where the summands are independent Bernoulli distributed random variables. Note also that for all \(n\) and for all \(i \in [n]\) we have \(w_i \ge x_0\). Thus, we can deduce the following (some of the steps hold for \(n\) sufficiently large)

$$\begin{aligned} \mathbb {P} \left[ \,\sum _{i \in \mathcal {A}_0} \mathsf {Be}\left( {w_v w_i \over W_{[n]}} \right) \ge r\,\right]&\ge \mathbb {P} \left[ \,\sum _{i \in \mathcal {A}_0} \mathsf {Be}\left( {w_v x_0 \over W_{[n]}} \right) \ge r\,\right] \\&= \mathbb {P} \left[ \, \mathsf {Bin}\left( a(n), {w_v x_0 \over W_{[n]}} \right) \ge r \,\right] \\&\ge {a(n) \atopwithdelims ()r}~\left( {w_v x_0 \over W_{[n]}} \right) ^r ~\left( 1- {w_v x_0 \over W_{[n]}} \right) ^{a(n)-r} \\&\ge {a(n)^r \over 1.5~r!}~\left( {f(n) x_0 \over W_{[n]}} \right) ^r ~\left( 1- {f(n) x_0 \over W_{[n]}} \right) ^{a(n)-r}. \end{aligned}$$

Thus, assuming that \(a(n)f(n) = o(n)\) we have

$$\begin{aligned} \left( 1- {f(n) x_0 \over W_{[n]}} \right) ^{a(n)-r} = 1-o(1). \end{aligned}$$

Therefore, for \(n\) sufficiently large

$$\begin{aligned} \mathbb {P} \left[ \,\sum _{i \in \mathcal {A}_0} \mathsf {Be}\left( {w_v w_i \over W_{[n]}} \right) \ge r\,\right] \ge {1\over 2 r!}~ \left( {a(n) f(n) x_0 \over W_{[n]}} \right) ^r =:p_{Inf}. \end{aligned}$$
(3.18)

Thus every vertex of \(\mathcal {K}_f\) becomes infected during the first round with probability at least \(p_{Inf}\) independently of every other vertex in \(\mathcal {K}_f\).

We shall first consider the case where \({r-1 \over 2r - \beta +1} \le \zeta \le {1\over \beta -1}\), where \(a_c (n) = n^{r(1-\zeta ) + \zeta (\beta -1) - 1 \over r}\). Let us assume that \(a(n)=\omega (n) n^{r(1-\zeta ) + \zeta (\beta -1) - 1 \over r}\), where \(\omega : \mathbb {N} \rightarrow \mathbb {R}^+\) is some increasing function that grows slower than any polynomial. Setting \(f=f(n)={n^{\zeta } \over \omega ^{1+1/r} (n)}\), we will consider \(CL[\mathcal {K}_f]\). Before doing so, we will verify the assumption that \(a(n)f(n)=o(n)\). Indeed, we have

$$\begin{aligned} a(n)f(n)= {1\over \omega ^{1/r}(n)}~n^{{r(1-\zeta ) + \zeta (\beta -1) - 1 \over r}+\zeta }. \end{aligned}$$

But

$$\begin{aligned} {r(1-\zeta ) + \zeta (\beta -1) - 1 \over r}+\zeta&= {r(1-\zeta ) + \zeta (\beta -1) - 1 +r\zeta \over r}\\&= 1+ {\zeta (\beta - 1) - 1\over r} \le 1, \end{aligned}$$

since \(\zeta \le 1/(\beta -1)\), whereby \(a(n)f(n) \le {n \over \omega ^{1/r}(n)} = o(n)\).
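The exponent arithmetic above can be spot-checked numerically: for \(\zeta \le 1/(\beta -1)\) the exponent of \(n\) in \(a(n)f(n)\) is \(1 + (\zeta (\beta -1)-1)/r \le 1\):

```python
import random

random.seed(4)
# Exponent of n in a(n) f(n): (r(1-z) + z(b-1) - 1)/r + z = 1 + (z(b-1) - 1)/r,
# which is at most 1 whenever z <= 1/(b-1).
for _ in range(100):
    r = random.randint(2, 8)
    b = random.uniform(2.0, 3.0)
    z = random.uniform(0.0, 1 / (b - 1))
    lhs = (r * (1 - z) + z * (b - 1) - 1) / r + z
    assert abs(lhs - (1 + (z * (b - 1) - 1) / r)) < 1e-12
    assert lhs <= 1 + 1e-12
```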

Now, note that if \(\zeta > {1 \over 2}\), then \(CL[\mathcal {K}_f]\) is the complete graph on \(|\mathcal {K}_f|\) vertices. However, when \(\zeta \le {1 \over 2}\), then \(CL [\mathcal {K}_f]\) stochastically contains \(G(N_f,p_f)\), where \(N_f = |\mathcal {K}_f|\) and \(p_f = {f^2(n) \over W_{[n]}}\). We will treat these two cases separately.

Case I: \({1\over 2} < \zeta \le {1\over \beta -1}\).

In this case, as \(CL[\mathcal {K}_f]\) is the complete graph, it suffices to show that with high probability at least \(r\) vertices of \(\mathcal {K}_f\) become infected already at the first round. In fact, we will show that the expected number of vertices of \(\mathcal {K}_f\) that become infected during the first round tends to infinity as \(n\) grows. Note that this expectation is at least \(N_f p_{Inf}\). Thus, once we show that \(N_f p_{Inf} \rightarrow \infty \), as \(n \rightarrow \infty \), then Chebyshev’s inequality or a standard Chernoff bound shows that with probability \(1-o(1)\), there are at least \(r\) infected vertices in \(\mathcal {K}_f\) and, thereafter, the whole of \(\mathcal {K}_f\) becomes infected in one round.

We have

$$\begin{aligned} N_f = |\mathcal {K}_f| = \Omega \left( n \left( \omega (n) \over n^\zeta \right) ^{\beta - 1}\right) , \end{aligned}$$

and by (3.18) we have

$$\begin{aligned} p_{Inf} = \Theta \left( {1\over \omega (n)}~\left( { n^{r(1-\zeta ) + \zeta (\beta -1) -1\over r} \cdot n^{\zeta }\over n } \right) ^r \right) = \Theta \left( {n^{\zeta (\beta -1) - 1}\over \omega (n)} \right) . \end{aligned}$$

Hence

$$\begin{aligned} N_f p_{Inf} = \Omega \left( \omega ^{\beta -2}(n)\right) . \end{aligned}$$

Case II: \({r-1\over 2r - \beta + 1} < \zeta \le {1 \over 2}\).

As we mentioned above, \(CL [\mathcal {K}_f]\) stochastically contains \(G(N_f,p_f)\), where \(p_f = {f^2(n) \over W_{[n]}}\), as \(\zeta \le {1\over 2}\). We will show that here \(N_fp_f^r \rightarrow \infty \) as \(n \rightarrow \infty \) and by Theorem 3.2 we deduce that \(\mathcal {K}_f\) becomes completely infected with probability \(1-o(1)\). We have

$$\begin{aligned} N_fp_f^r = \Theta \left( \omega ^{\beta -1}(n) n^{1-\zeta (\beta -1)}{n^{2\zeta r}\over \omega ^{2r+2}(n) n^r} \right) . \end{aligned}$$
(3.19)

The expression on the right-hand side is

$$\begin{aligned} \omega ^{-(2r-\beta +3)}(n) n^{-(r-1) + \zeta (2r-\beta +1)} \rightarrow \infty , \end{aligned}$$

by our assumption on \(\zeta \).

Finally, we deal with smaller values of \(\zeta \), proving the last part of Theorem 2.3.

Case III: \(0< \zeta \le {r-1 \over 2r -\beta +1}\).

In this case, we appeal to Theorem 3.1. We will show first that

$$\begin{aligned} p_f \ll N^{-1/r}_f. \end{aligned}$$

By (3.19) we have

$$\begin{aligned} N_f p_f^r = \Theta \left( \omega ^{-(2r- \beta +3)}(n) n^{1-\zeta (\beta -1) +r (2\zeta - 1)} \right) . \end{aligned}$$

The exponent of \(n\) on the right-hand side of the above is equal to \(- (r-1) + \zeta (2r -\beta +1) \le 0\), by our assumption on \(\zeta \), whereby we have \(p_f = o\left( N_f^{-1/r}\right) \).

Recall that \(a_c^+ (n) = n^{1 - \zeta {r-\beta +2 \over r-1}}\). Let us set \(\xi = 1 - \zeta {r-\beta +2 \over r-1}\). It suffices to show that

$$\begin{aligned} N_f p_{Inf} \gg T_c (N_f, p_f). \end{aligned}$$
(3.20)

Since \(T_c (N_f,p_f) = \Theta \small {\left( \left( N_f p_f^r \right) ^{-{1\over r-1}} \right) }\), the above calculation implies that

$$\begin{aligned} T_c (N_f,p_f) = \Theta \left( \omega ^{2r -\beta + 3 \over r-1}(n) n^{1 - \zeta {2r-\beta + 1\over r-1}}\right) . \end{aligned}$$

Let \(a(n) = \omega ^2 (n) a_c^+ (n)\). Then

$$\begin{aligned} N_f p_{Inf} = \Theta \left( a^r(n) n^{1 - \zeta (\beta -1) + \zeta r - r }\right) . \end{aligned}$$

Hence

$$\begin{aligned} N_f p_{Inf} = \Theta \left( \omega ^{2r} (n) n^{r\xi -(r-1) + \zeta (r - \beta + 1)} \right) . \end{aligned}$$

But \(\xi \) satisfies

$$\begin{aligned} r \xi = r -\zeta \left( r-\beta + 1 +{2r - \beta +1 \over r-1} \right) , \end{aligned}$$

since

$$\begin{aligned}&r-\beta + 1 +{2r - \beta +1 \over r-1} = {(r-1)(r-\beta + 1) + 2r -\beta + 1\over r-1} \\&= {r(r-\beta + 1) + r \over r-1} = r ~{r -\beta +2 \over r-1}. \end{aligned}$$

Hence

$$\begin{aligned} r \xi -(r-1) + \zeta (r - \beta + 1) = 1- \zeta {2r -\beta + 1 \over r-1}. \end{aligned}$$

Also \({2r -\beta + 3 \over r-1} \le 2r - \beta + 3 < 2r\), since \(r\ge 2\). Thus (3.20) follows.
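The two exponent identities behind (3.20) can be spot-checked at random parameter values:

```python
import random

random.seed(5)
# Spot-check of the two identities used above, at random (r, beta, zeta):
#   (r-b+1) + (2r-b+1)/(r-1) = r(r-b+2)/(r-1)
#   r*xi - (r-1) + z(r-b+1) = 1 - z(2r-b+1)/(r-1), with xi = 1 - z(r-b+2)/(r-1).
for _ in range(100):
    r = random.randint(2, 9)
    b = random.uniform(2.0, 3.0)
    z = random.uniform(0.01, 0.5)
    assert abs((r - b + 1) + (2*r - b + 1)/(r - 1)
               - r*(r - b + 2)/(r - 1)) < 1e-12
    xi = 1 - z*(r - b + 2)/(r - 1)
    assert abs(r*xi - (r - 1) + z*(r - b + 1)
               - (1 - z*(2*r - b + 1)/(r - 1))) < 1e-12
```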

For each one of the above cases, Proposition 3.7 implies that for any real \(\varepsilon >0\) that is small enough there exists a real number \(C=C(\gamma _1,\gamma _2,\beta , \varepsilon ) >0\) such that a.a.s. at least \((1-\varepsilon )|\mathcal {K}_C|\) vertices of \(\mathcal {K}_C\) become infected. But by (3.3) we have \(|\mathcal {K}_C| = \Theta (n)\) and the second part of Theorem 2.3 follows.