1 Introduction

Domination in graphs and networks is a central topic in graph theory, with numerous applications in computer science and engineering: it has generated thousands of research papers on the theoretical side and important applications on the practical side. Formally, given a graph \(G=(V,E)\) with vertex set V and edge set E, a dominating set is a subset \(D\subseteq V\) such that every vertex in \(V\setminus D\) is adjacent to at least one vertex in D. The domination number of G, denoted by \(\gamma (G)\), is the minimum cardinality of a dominating set in G. Basics of the theory can be found in the classical two-volume research monograph [1, 2].
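These definitions are easy to verify computationally on small graphs. The following sketch (not part of the formal argument; Python, with hypothetical helper names and a dictionary-of-neighbor-sets representation) computes \(\gamma (G)\) by exhaustive search; it is meant only to illustrate the definitions, not as an efficient algorithm, the problem being NP-hard in general.

```python
from itertools import combinations

def is_dominating(adj, D):
    """Every vertex is in D or has a neighbor in D (adj: vertex -> neighbor set)."""
    return all(v in D or adj[v] & D for v in adj)

def domination_number(adj):
    """gamma(G): minimum cardinality of a dominating set, by brute force."""
    for k in range(1, len(adj) + 1):
        for cand in combinations(adj, k):
            if is_dominating(adj, set(cand)):
                return k

# Example: the 4-cycle C4; a single vertex dominates only 3 of the 4
# vertices, while two adjacent vertices dominate all of them.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
```

For \(C_4\) the search returns \(\gamma (C_4)=2\), in accordance with the definition.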

For extensive discussions on probability theory and properties of random graphs we refer to [3, 4]. Further results related to our current topic can be found in [5, 6].

In this note we deal with one version of graph domination which is of high practical importance, namely connected domination. Assuming that the graph \(G=(V,E)\) is connected, a subset \(D\subseteq V\) is said to be a connected dominating set if it is a dominating set and the subgraph G[D] induced by D is connected. The minimum cardinality of a connected dominating set is termed the connected domination number, denoted by \(\gamma _c(G)\). These notions offer an approach to the study of backbone networks, and their relevance is demonstrated e.g. in the publications [7,8,9], each with over a thousand Google Scholar citations. For a survey on practical construction algorithms we refer to [10].

The inequality \(\gamma (G)\le \gamma _c(G)\) follows from the definitions for every connected graph G. On the other hand, Duchet and Meyniel [11] observed that \(\gamma _c(G)\le 3\gamma (G)-2\), an inequality tight for every path \(P_n\) whose number n of vertices is a multiple of 3. These graphs have \(\gamma (P_n)=n/3\) and \(\gamma _c(P_n)=n-2\), the latter value achieving its maximum over the class of connected graphs of order n. (The maximum of \(\gamma \) is \(\lfloor n/2 \rfloor \), by a classical result of Ore [12].) Combining the results of Alon [13] and Caro et al. [14], however, it follows that for graphs of minimum degree d both \(\gamma \) and \(\gamma _c\) have their worst-case asymptotics \((1+o_d(1)) \frac{1+\ln (d+1)}{d+1} \, n\) as \(n\rightarrow \infty \).
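The tightness claim for paths can be checked directly by brute force. A minimal sketch (hypothetical helper names, a stack-based connectivity test) confirms \(\gamma (P_6)=2\) and \(\gamma _c(P_6)=4=3\gamma (P_6)-2\):

```python
from itertools import combinations

def path_graph(n):
    """Adjacency of the path P_n on vertices 0,...,n-1."""
    return {v: {u for u in (v - 1, v + 1) if 0 <= u < n} for v in range(n)}

def dominating(adj, D):
    return all(v in D or adj[v] & D for v in adj)

def induces_connected(adj, D):
    """Is the subgraph induced by the non-empty vertex set D connected?"""
    start = next(iter(D))
    seen, stack = {start}, [start]
    while stack:
        for u in (adj[stack.pop()] & D) - seen:
            seen.add(u)
            stack.append(u)
    return seen == D

def gamma(adj, connected=False):
    """gamma(G), or gamma_c(G) when connected=True, by brute force."""
    for k in range(1, len(adj) + 1):
        for cand in combinations(adj, k):
            D = set(cand)
            if dominating(adj, D) and (not connected or induces_connected(adj, D)):
                return k
```

A connected set in a path is a run of consecutive vertices, and a run of k vertices dominates at most \(k+2\) vertices, which explains \(\gamma _c(P_n)=n-2\).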

Here our goal is to study the average behavior of connected dominating sets in graphs of given edge density. For this, we consider the random graph model \(\mathbf {G}_{n,p}\) on the vertex set \(V=\{1,2,\dots ,n\}\); for any \(1\le i<j\le n\), the vertices i and j are adjacent with probability p, totally independently of all the other adjacencies.

Sharp concentration theorems are known for \(\gamma \) on random graphs [15, 16]. On the other hand, to the best of our knowledge, no such result is available for \(\gamma _c\). Since the probability of disconnectedness is nonzero, in order to interpret connected domination one has to disregard the graphs which are not connected. Duckworth and Mans [17] studied the expected value of \(\gamma _c\) in random regular graphs with fixed vertex degree and large n (i.e. edge probabilities in the range \(\Theta (1/n)\)) by solving differential equations numerically. Dropping the restriction of regularity, in Section 2 we consider the case of constant \(0<p<1\), and in Section 3 we study smaller edge probabilities \(p=p_n\), with \(\displaystyle \lim _{n\rightarrow \infty }p_n=0\).

2 Asymptotic equality for constant probability

In this section we investigate the model with constant edge probability p, which we assume to be given, with \(0<p<1\). Let us introduce the notation

$$\begin{aligned} f(n):=\frac{(1+x)\ln n}{-\ln (1-p)} \end{aligned}$$

where \(x>0\) is not necessarily constant but may depend on n.

We now consider the random graph \(\mathbf {G}_{n,p}\) on n vertices. Let the vertices be labeled as \(v_1,\dots ,v_n\).

Lemma 1

For any constant edge probability p and any real \(x>0\) possibly depending on n, we have:

$$\begin{aligned} P(\{v_1,\dots ,v_{f(n)}\} \mathrm { \ is \ not \ dominating \ in \ } \mathbf {G}_{n,p}) < n^{-x} . \end{aligned}$$

Proof

Consider any fixed \(v_j\) in the range \(f(n)<j\le n\). The exact probability for \(\{v_1,\dots ,v_{f(n)}\}\) to not dominate \(v_j\) is

$$\begin{aligned} P(\lnot j) := P(v_j \mathrm {\ has \ no \ neighbor \ in \ } \{v_1,\dots ,v_{f(n)}\}) = (1-p)^{f(n)} . \end{aligned}$$

Consequently

$$\begin{aligned} P(\{v_1,\dots ,v_{f(n)}\} \mathrm { \ is \ not \ dominating \ in \ } \mathbf {G}_{n,p})&\le \sum _{j=f(n)+1}^n P(\lnot j) = \sum _{j=f(n)+1}^n (1-p)^{f(n)} \\&< n \cdot (1-p)^{f(n)} = n \cdot e^{ \frac{(1+x)\ln n}{-\ln (1-p)}\, \cdot \, \ln (1-p) } \\&= n \cdot (e^{-\ln n})^{1+x} = n^{-x} . \end{aligned}$$

\(\square \)
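The bound of Lemma 1 is easy to test empirically. In the sketch below (illustrative parameters \(n=200\), \(p=0.3\), \(x=1\); seeded for reproducibility) only the edges between the first \(\lceil f(n)\rceil \) vertices and the rest are sampled, since no other adjacencies affect the event:

```python
import math
import random

n, p, x = 200, 0.3, 1.0
f = math.ceil((1 + x) * math.log(n) / -math.log(1 - p))  # ceil(f(n))

def prefix_dominates(rng):
    """Does {v_1,...,v_f} dominate each of the remaining n-f vertices of G_{n,p}?"""
    return all(any(rng.random() < p for _ in range(f)) for _ in range(n - f))

rng = random.Random(2024)
trials = 200
hits = sum(prefix_dominates(rng) for _ in range(trials))
# Lemma 1: the failure probability of a single trial is below n^{-x} = 0.005.
```

With these parameters the observed failure frequency should be close to zero, in line with the \(n^{-x}\) bound.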

Before stating the first theorem, let us recall a result from the literature, which will also be applied in the proof.

Lemma 2

(Gilbert [18]) For the random graph \(\mathbf {G}_{n,p}\) with n vertices and edge probability p constant, we have the following asymptotic probability of the event that \(\mathbf {G}_{n,p}\) is connected as \(n\rightarrow \infty \) :

$$\begin{aligned} P(\mathbf {G}_{n,p}\mathrm { \ is \ connected }) \sim 1 - n\cdot (1-p)^{n-1} . \end{aligned}$$

Theorem 1

Let \(y:{\mathbb {N}}\rightarrow {\mathbb {R}}^+\) be a non-decreasing function tending to infinity arbitrarily slowly, such that  \(\ln y(n) = o(\ln n)\). Then, as \(n\rightarrow \infty \), for every constant \(0<p<1\) we have

$$\begin{aligned} \gamma _c(\mathbf {G}_{n,p}) \le \frac{\ln n}{-\ln (1-p)} + \frac{\ln y}{-\ln (1-p)} = (1+o(1))\cdot \gamma (\mathbf {G}_{n,p}) \end{aligned}$$

with probability \(1-o(1)\).

Proof

It is known [15] that

$$\begin{aligned} \gamma (\mathbf {G}_{n,p}) = \frac{\ln n}{-\ln (1-p)} - O(\ln \ln n) . \end{aligned}$$

So this is a lower bound on \(\gamma _c(\mathbf {G}_{n,p})\), and also verifies the asymptotic equality on the right-hand side of the assertion. Now Lemma 1 implies with \(x=\frac{\ln y}{\ln n}\) that the first \(\left\lceil \frac{\ln n}{-\ln (1-p)} + \frac{\ln y}{-\ln (1-p)}\right\rceil \) vertices dominate \(\mathbf {G}_{n,p}\) with probability at least

$$\begin{aligned} 1 - n^{-\frac{\ln y}{\ln n}} = 1 - e^{-\ln y} = \frac{y-1}{y} = 1-o(1) . \end{aligned}$$

Actually, in the choice of vertices one may replace ‘ceiling’ with ‘floor’ as well, since this yields only an o(1) change in the lower bound \(\frac{y-1}{y}\) on the favorable probability for domination.

Transforming now \(1 - n\cdot (1-p)^{n-1}\) of Lemma 2 to the continuous function

$$\begin{aligned} h(z) := 1 - z\cdot (1-p)^{z-1} \end{aligned}$$

we see that h is a monotone increasing function after some threshold, say \(z>z_0(p)\), for any fixed \(p>0\). Indeed, the derivative is

$$\begin{aligned} h'(z) = -(1-p)^{z-1} + z\cdot (1-p)^{z-1}\cdot \ln \frac{1}{1-p} = \frac{ -1 + z\cdot \ln \frac{1}{1-p} }{ \left( \frac{1}{1-p} \right) ^{z-1} } \end{aligned}$$

which is positive and exponentially small as z gets large. In particular, changing z by a constant changes h by o(1) only. To derive a simple formula, we plug in \(z=\frac{\ln n}{-\ln (1-p)}+1\) and obtain

$$\begin{aligned} h(z) = 1 - \frac{\ln \frac{n}{1-p}}{-\ln (1-p)} \cdot (1-p) ^ {\frac{\ln n}{-\ln (1-p)}} = 1 - \frac{\ln \frac{n}{1-p}}{-\ln (1-p)} \cdot e^{-\ln n} = 1 - O\!\left( \frac{\ln n}{n}\right) . \end{aligned}$$

Consequently, the probability that \(\{v_1,\dots ,v_{f(n)}\}\) is not dominating or induces a disconnected subgraph in \(\mathbf {G}_{n,p}\) is at most

$$\begin{aligned} O\!\left( \frac{\ln n}{n}\right) + \frac{1}{y} + o(1) = o(1) \end{aligned}$$

as n tends to infinity. It follows that \(\{v_1,\dots ,v_{f(n)}\}\) almost surely is a set inducing a connected dominating subgraph, thus \(\gamma _c(\mathbf {G}_{n,p})\le f(n)\) with probability \(1-o(1)\). \(\square \)
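The statement of Theorem 1 can be illustrated by simulation: sample \(\mathbf {G}_{n,p}\), take the prefix of length roughly \(\frac{\ln n + \ln y}{-\ln (1-p)}\) with \(y=\ln n\), and test whether it is a connected dominating set. A minimal sketch, not part of the proof (illustrative parameters and helper names, seeded RNG):

```python
import math
import random

def sample_gnp(n, p, rng):
    """Sample G_{n,p}: each pair is adjacent independently with probability p."""
    adj = {v: set() for v in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def connected_dominating(adj, D):
    """D dominates every vertex and induces a connected subgraph."""
    if not all(v in D or adj[v] & D for v in adj):
        return False
    start = next(iter(D))
    seen, stack = {start}, [start]
    while stack:
        for u in (adj[stack.pop()] & D) - seen:
            seen.add(u)
            stack.append(u)
    return seen == D

n, p = 150, 0.3
y = math.log(n)
size = math.ceil((math.log(n) + math.log(y)) / -math.log(1 - p))
rng = random.Random(7)
trials = 100
rate = sum(connected_dominating(sample_gnp(n, p, rng), set(range(size)))
           for _ in range(trials)) / trials
```

By symmetry the prefix behaves like any fixed set of the same size; for these moderate parameters the success rate is already well above 1/2, and it approaches 1 as n grows.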

3 The nonconstant case

Here we consider the random graph \(\mathbf {G}_{n,p_n}\) on n vertices, with \(p_n=o(1)\). We begin with observations on dominating sets, and finish with connectivity.

Consider an integer-valued function g with \(1\le g(n)\le n\). Our aim is to estimate the probability \(\delta _n\) that a given set X of g(n) vertices dominates the whole \(\mathbf {G}_{n,p_n}\). (We have abbreviated the notation: \(\delta _n\) depends also on g(n).)

Let the vertices of the graph be labeled again as \(v_1,\dots ,v_n\). First, we give an exact formula for \(\delta _n\).

Lemma 3

For any g(n) we have

$$\begin{aligned} \delta _n=[1-(1-p_n)^{g(n)}]^{n-g(n)} . \end{aligned}$$

Proof

Assume without loss of generality that \(X=\{v_1,\dots ,v_{g(n)}\}\). Consider any fixed \(v_j\) in the range \(g(n)<j\le n\). Let the exact probability for X to not dominate \(v_j\) be denoted by \(\mu _j\). Then

$$\begin{aligned} \mu _j=P(v_j \mathrm {\ has \ no \ neighbor \ in \ } \{v_1,\dots ,v_{g(n)}\}) = (1-p_n)^{g(n)} . \end{aligned}$$

Consequently

$$\begin{aligned} \delta _n = P(\{v_1,\dots ,v_{g(n)}\} \mathrm { \ is \ dominating \ in \ }\mathbf {G}_{n,p_n}) = \prod _{j=g(n)+1}^n [1-P(X \mathrm { \ does \ not \ dominate} \ v_j)] \end{aligned}$$

because of the complete independence of the events, constructed from pairwise disjoint sets of edges. The \(\mu _j\)’s have a common value \(\mu \). Thus

$$\begin{aligned} \delta _n=(1-\mu )^{n-g(n)} \end{aligned}$$

as stated. \(\square \)
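Lemma 3 can be cross-checked by simulation; as in the proof, only the edges between X and the remaining vertices matter. A small sketch (illustrative parameters, seeded RNG):

```python
import random

def delta_exact(n, p, g):
    """Lemma 3: probability that a fixed g-element set dominates G_{n,p}."""
    return (1 - (1 - p) ** g) ** (n - g)

def delta_estimate(n, p, g, trials, rng):
    """Monte Carlo estimate, sampling only the X-to-rest adjacencies."""
    hits = sum(all(any(rng.random() < p for _ in range(g))
                   for _ in range(n - g))
               for _ in range(trials))
    return hits / trials

n, p, g = 40, 0.2, 15
est = delta_estimate(n, p, g, 4000, random.Random(1))
```

For these parameters the exact value is about 0.41, and the empirical frequency agrees with it to within a few standard errors.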

Notation. Let \(\Delta _n\) denote the probability that there exists a dominating set of cardinality at most g(n) in \(\mathbf {G}_{n,p_n}\). Furthermore, let \(\phi (n):=p_n\,g(n)\), \(s_n:=1/{p_n}\), \(e_n:=[1-1/{s_n}]^{s_n}\), \(r_n:=1/{e_n}\) and \(F(n):=[n-g(n)]/r_n^{\phi (n)}\).

The following theorem gives a sufficient condition for \(\displaystyle \lim _{n\rightarrow \infty }\delta _n=\lim _{n\rightarrow \infty }\Delta _n=1\).

Theorem 2

If F(n) tends to zero, then \(\delta _n\), and hence also \(\Delta _n\), tends to 1.

Proof

With the notation introduced above, Lemma 3 yields

$$\begin{aligned} \delta _n=[1-([1-1/{s_n}]^{s_n})^{\phi (n)}]^{n-g(n)}, \end{aligned}$$

which can more briefly be written as

$$\begin{aligned} \delta _n=[1-e_n^{\phi (n)}]^{n-g(n)}=[1-1/{r_n^{\phi (n)}}]^{n-g(n)}. \end{aligned}$$

Then, denoting \(r_n^{\phi (n)}\) by \(t_n\),

$$\begin{aligned} \delta _n = ([1-1/t_n]^{t_n})^{F(n)}. \end{aligned}$$

By the assumption \(F(n)\rightarrow 0\) we necessarily have that \(t_n=r_n^{\phi (n)}\) tends to infinity; hence \([1-1/t_n]^{t_n} \rightarrow 1/e\), and beyond some threshold \(n_0\) we have \(\delta _n>(1/3)^{F(n)}\) for all \(n>n_0\). Since \(F(n)\rightarrow 0\), this lower bound tends to 1, which implies the validity of the theorem. \(\square \)
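The lower bound \(\delta _n>(1/3)^{F(n)}\) obtained in the proof can be checked numerically from the closed form; note that \(r_n^{\phi (n)}=(1-p_n)^{-g(n)}\), hence \(F(n)=(n-g(n))(1-p_n)^{g(n)}\). A sketch with one illustrative parameter choice for which F(n) is small:

```python
def delta(n, p, g):
    """Lemma 3: [1 - (1-p)^g]^(n-g)."""
    return (1 - (1 - p) ** g) ** (n - g)

def F(n, p, g):
    """F(n) = (n - g(n)) / r_n^{phi(n)} = (n - g) * (1-p)^g."""
    return (n - g) * (1 - p) ** g

# illustrative parameters with F(n) close to zero
n, p, g = 10**6, 0.01, 2000
```

Here \(F(n)\approx 0.0018\) and \(\delta _n\approx 0.998\), safely above \((1/3)^{F(n)}\).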

Examples. In both of the following assertions, \(b >1\) denotes a constant, and the conclusions are derived from Theorem 2.

  • (i) Let \(g(n)=\left\lfloor \log _b^{\alpha } n\right\rfloor \) with \(\alpha >1\), and let \(p_n=1/{\log _b n}\). Then \(\delta _n\) tends to 1.

  • (ii) Let \(\displaystyle \lim _{n\rightarrow \infty } {p_n\, g(n)-\log _b n}=\infty \). Then \(\delta _n\) tends to 1.

The following statement is a little bit surprising.

Proposition 3

If \(g(n)=n-1\) and \(p_n=c/(n-1)\) where \(c>0\) is a constant, then \(\delta _n\) tends to \(1-e^{-c}\).

Proof

Let \(e_n:=[1-1/{s_n}]^{s_n}\) again. Here \(\phi (n)=p_n\,g(n)=c\) and \(n-g(n)=1\), hence \(\delta _n=1-e_n^{c}\) by Lemma 3. Using that \(e_n\) tends to 1/e, we obtain the assertion. \(\square \)
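Numerically, the convergence in Proposition 3 is fast; the sketch below evaluates Lemma 3's formula for \(c=2\) (an illustrative choice) and compares it with the limit \(1-e^{-c}\):

```python
import math

def delta(n, p, g):
    """Lemma 3: [1 - (1-p)^g]^(n-g)."""
    return (1 - (1 - p) ** g) ** (n - g)

c = 2.0
# g(n) = n - 1 and p_n = c/(n-1): delta_n = 1 - (1 - c/(n-1))^(n-1)
errors = [abs(delta(n, c / (n - 1), n - 1) - (1 - math.exp(-c)))
          for n in (10**3, 10**5)]
```

The deviation from the limit shrinks roughly like \(c^2 e^{-c}/(2n)\).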

The following theorem gives a general sufficient condition for \(\displaystyle \lim _{n\rightarrow \infty }\Delta _n=0\).

Theorem 4

If \(g(n)=o(n/\ln n)\) and \(\phi (n)=p_n\,g(n)=O(1)\), then \(\Delta _n\) tends to 0.

Proof

Let us consider the rough estimation

$$\begin{aligned} \Delta _n\le {\left( {\begin{array}{c}n\\ g(n)\end{array}}\right) \delta _n} \end{aligned}$$

using that \(P(A_1+A_2+\dots +A_k)\le P(A_1)+P(A_2)+\ldots + P(A_k)\) for any events. Simplifying the Stirling formula to the inequality \(x! > (x/e)^x\), valid for every \(x\ge 1\), the binomial coefficient can be bounded from above as

$$\begin{aligned} \left( {\begin{array}{c}n\\ g(n)\end{array}}\right) < \left( \frac{e\cdot n}{g(n)} \right) ^{g(n)} = \exp \left( \, g(n) + g(n) \ln n - g(n) \ln g(n) \, \right) \end{aligned}$$

where the standard notation \(\exp (z)=e^z\) is applied. Moreover, as shown in the proof of Theorem 2, for a small \(c>0\) we have

$$\begin{aligned} \delta _n=([1-1/t_n]^{t_n})^{F(n)} < (1/e+c)^{F(n)} = \exp \left( (c' - 1 ) ( n - g(n) ) \cdot e_n ^ {\phi (n)} \right) \end{aligned}$$

if n is sufficiently large, where \(c'>0\) can also be chosen arbitrarily close to zero. Since \(\phi (n)=O(1)\), the factor \(e_n^{\phi (n)}\) is bounded from below by a positive constant. Thus, combining the above formulas we obtain

$$\begin{aligned} \Delta _n < \exp \left( g(n) + g(n) \ln n - g(n) \ln g(n) - C\cdot n + C\cdot g(n) \right) \end{aligned}$$

for a suitably chosen positive constant C. Here the largest positive term is \(g(n) \ln n\), which is of the order o(n) by assumption, consequently the right-hand side tends to zero. This fact completes the proof. \(\square \)

We also give a sufficient condition for \(\displaystyle \lim _{n\rightarrow \infty }\delta _n=0\).

Theorem 5

If \(\phi (n)\) tends to zero, then \(\delta _n\) also tends to zero, except if \(g(n)=n\) holds for infinitely many n.

Proof

We use the notation above. From the proof of Theorem 2 we know that

$$\begin{aligned} \delta _n=[1-e_n^{\phi (n)}]^{n-g(n)} \end{aligned}$$

where \(e_n=(1-p_n)^{1/p_n}\) and \(\phi (n)=p_n\,g(n)\). Hence if \(p_n\rightarrow 0\), then \(e_n\rightarrow 1/e\), and \(e_n\) can be bounded from below by a positive constant. Therefore \(e_n^{\phi (n)}\) tends to 1 and \(1-e_n^{\phi (n)}\) tends to zero. Suppose first that \(n-g(n)\) tends to infinity. Then \(\delta _n\) tends to zero as promised.

If the exponent \(n-g(n)\) remains bounded, two cases arise. In the extreme case \(g(n)=n\) we have the trivial \(n-g(n)=0\) and \(\delta _n=1\), independently of the actual value of \(p_n\). Otherwise the base tends to zero while the exponent has the positive lower bound 1, hence \(\delta _n\) tends to zero in this case, too. \(\square \)
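As an illustration of Theorem 5, take \(p_n=1/n\) and \(g(n)=\lfloor \sqrt{n}\rfloor \) (an illustrative choice with \(\phi (n)\approx 1/\sqrt{n}\rightarrow 0\) and \(g(n)<n\)); the closed form of Lemma 3 then collapses quickly:

```python
import math

def delta(n, p, g):
    """Lemma 3: [1 - (1-p)^g]^(n-g)."""
    return (1 - (1 - p) ** g) ** (n - g)

# phi(n) = p_n * g(n) ~ 1/sqrt(n) -> 0 and n - g(n) -> infinity
vals = [delta(n, 1.0 / n, math.isqrt(n)) for n in (10**2, 10**4, 10**6)]
```

Already at \(n=100\) the probability is astronomically small, in accordance with a base tending to zero raised to a growing exponent.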

Now we incorporate the condition of connectivity. As we quoted in Lemma 2, Gilbert [18] proved for fixed p that the probability of \(\mathbf {G}_{n,p}\) being connected is asymptotically \(1-n\cdot (1-p)^{n-1}\). Here we observe that Gilbert’s formula remains valid also for a sequence \(p_n\) of probabilities tending to zero, provided that \(p_n\) does not tend to zero too fast. The argument follows the lines of the one in [18], but the asymptotics need to be analyzed carefully as \(p_n\) is small.

Theorem 6

For the random graph \(\mathbf {G}_{n,p_n}\) with n vertices and edge probability \(p_n\), where \((n\cdot p_n - 2\ln n)\) tends to infinity, we have the following asymptotic probability of the event that \(\mathbf {G}_{n,p_n}\) is connected as \(n\rightarrow \infty \) :

$$\begin{aligned} P(\mathbf {G}_{n,p_n}\mathrm { \ is \ connected }) \sim 1 - n\cdot (1-p_n)^{n-1} . \end{aligned}$$

Proof

Let us note first that the term \(n\cdot (1-p_n)^{n-1}\) tends to zero as n gets large, whenever \((n\cdot p_n - 2\ln n)\) tends to infinity. Indeed, disregarding the multiplier \(\frac{1}{1-p_n}\) one may write \((1-p_n)^n = \left( (1-p_n)^{1/p_n} \right) ^{n\cdot p_n} \approx e^{-n\cdot p_n} = n^{-1}\cdot e^{-(n\cdot p_n-\ln n)} = o(n^{-1})\). A similar argument shows that \(n\cdot (1-p_n)^{n/2}\) also tends to zero if \((n\cdot p_n - 2\ln n)\) tends to infinity.

Let now \(P_n=P(\mathbf {G}_{n,p_n}\mathrm { \ is \ connected })\). Instead of \(P_n\) we shall estimate \(1-P_n\). Let us introduce the notation \(q_n=1-p_n\). We claim

$$\begin{aligned} 1-P_n=\sum _{k=1}^{n-1}P_k\left( {\begin{array}{c}n-1\\ k-1\end{array}}\right) q_n^{k(n-k)} . \end{aligned}$$
(1)

Indeed, let us fix a vertex, say, \(v_0\). The whole graph is disconnected if and only if \(v_0\) is contained in a connected subgraph \(G_0\) in such a way that the vertices of \(G_0\) are not joined with any vertex outside; namely, \(G_0\) is the connected component containing \(v_0\). The order k of \(G_0\) runs between 1 and \(n-1\), and the set of its vertices can be chosen in \(\left( {\begin{array}{c}n-1\\ k-1\end{array}}\right) \) different ways. Any two choices mutually exclude each other, therefore the total probability is equal to the sum of the individual probabilities.
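Equality (1), together with \(P_1=1\), determines \(P_n\) recursively and thus gives a convenient exact check of the asymptotic formula. A sketch (illustrative parameters; the recurrence itself is exact):

```python
from math import comb

def connect_prob(n, p):
    """P_n for G_{n,p} via (1): 1 - P_n = sum_k P_k C(n-1,k-1) q^{k(n-k)}."""
    q = 1.0 - p
    P = [0.0, 1.0]  # P[1] = 1: the one-vertex graph is connected
    for m in range(2, n + 1):
        s = sum(P[k] * comb(m - 1, k - 1) * q ** (k * (m - k))
                for k in range(1, m))
        P.append(1.0 - s)
    return P[n]

# exact disconnection probability versus the asymptotic n * q^(n-1)
n, p = 30, 0.3
exact = 1.0 - connect_prob(n, p)
asym = n * (1.0 - p) ** (n - 1)
```

For these parameters the exact and asymptotic values agree to within a fraction of a percent, the dominant contribution coming from the isolated-vertex configurations (k = 1 and k = n-1 in the sum).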

Let \(E^{n}_i\) denote the event that \(v_i\) is an isolated vertex, i.e., that \(v_i\) is not adjacent to any other vertex in the graph \(\mathbf {G}_{n,p_n}\). A lower bound on \(1-P_n\) is the probability \(P(E^{n}_1+ E^{n}_2+\ldots + E^{n}_{n})\) that at least one of the vertices \(v_1,v_2,\dots ,v_n\) is isolated. Then

$$\begin{aligned} 1-P_n&\ge P(E^{n}_1 + E^{n}_2+\dots +E^{n}_{n}) \nonumber \\&\ge \sum _{i=1}^n P(E^{n}_i)-\sum _{1\le j<i\le n}P(E^{n}_iE^{n}_j) \nonumber \\&= nq^{n-1}_n-\frac{n(n-1)}{2}q^{2n-3}_n \end{aligned}$$
(2)

where we applied a simplified version of the inclusion-exclusion principle.

Furthermore, we used that \(P(E^{n}_i)=q^{n-1}_n\) and \(P(E^{n}_iE^{n}_j)=q^{2n-3}_n\) hold, as we need \(2n-3\) non-edges to make both \(v_i\) and \(v_j\) isolated for \(E^{n}_iE^{n}_j\). Moreover, analogously to \(nq_n^{n-1} = o(1)\), also \(n^2q_n^{2n-3} = o(nq_n^{n-1})\) is valid. Now the two ends of the above chain of inequalities leading to the formula of (2) yield the lower bound

$$\begin{aligned} nq^{n-1}_n - o(nq^{n-1}_n)\le 1-P_n . \end{aligned}$$
(3)

A matching upper bound will be obtained using (1). For \(k=1,\dots ,n-1\) we bound \(P_k\) by 1. The terms \(q^{k(n-k)}_n\) can be bounded using the fact that \(x(n-x)\) is a concave function of x and takes its minimum at the two ends of the domain \([1,n-1]\), hence the exponent can be underestimated with the piecewise linear function

$$\begin{aligned} k(n-k)\ge \left\{ \begin{array}{ll} { \displaystyle \frac{(n-2)k}{2} + \frac{n}{2} \,,} &{} \hbox {if}\,\, 1\le k\le \frac{n}{2}\,, \\ { \displaystyle \frac{(n-2)(n-k)}{2} + \frac{n}{2} \,, } \qquad \qquad &{} \hbox {if}\,\, \frac{n}{2}\le k\le n-1\,, \end{array} \right. \end{aligned}$$

adjusted to hold with equality for \(k=1,\,n/2,\,n-1\).
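The piecewise bound is elementary but easy to get off by an index error; a quick exhaustive check over small n (a sketch; the halves involved are exactly representable in floating point for these ranges) confirms it:

```python
def chord(n, k):
    """Piecewise-linear underestimate of k(n-k) on [1, n-1]."""
    if k <= n / 2:
        return (n - 2) * k / 2 + n / 2
    return (n - 2) * (n - k) / 2 + n / 2

def bound_holds(n):
    """k(n-k) >= chord(n, k) for all integers 1 <= k <= n-1."""
    return all(k * (n - k) >= chord(n, k) for k in range(1, n))
```

Equality indeed occurs at \(k=1\), \(k=n/2\) and \(k=n-1\), and concavity of \(k(n-k)\) gives the inequality in between.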

In order to treat k under and above n/2 in a unified way, it is convenient to take a combination of the two functions in a way that will cause relatively small additional error terms, and estimate \(q^{k(n-k)}_n\) as

$$\begin{aligned} q^{k(n-k)}_n < q_n^{n/2}(q^{(n-2)\cdot k/2}_n+q^{(n-2)(n-k)/2}_n) \end{aligned}$$

for \(k = 1,2,\dots , n-1\). To simplify the exponents, let us write \(Q:=q_n^{(n-2)/2}\). Hence in particular we have \(n\cdot Q=o(1)\), and the above inequality can be rewritten in the form of

$$\begin{aligned} q^{k(n-k)}_n < q_n^{n/2}(Q^k+Q^{n-k}) . \end{aligned}$$

We substitute the right-hand side into Equality (1), and obtain

$$\begin{aligned} 1-P_n&< q_n^{n/2} \left( \sum _{k=1}^{n-1}\left( {\begin{array}{c}n-1\\ k-1\end{array}}\right) Q^k + \sum _{k=1}^{n-1}\left( {\begin{array}{c}n-1\\ n-k\end{array}}\right) Q^{n-k} \right) \\&= q_n^{n/2} \left( Q\cdot \sum _{j=0}^{n-2}\left( {\begin{array}{c}n-1\\ j\end{array}}\right) Q^j + \sum _{j=1}^{n-1}\left( {\begin{array}{c}n-1\\ j\end{array}}\right) Q^j \right) \\&= q_n^{n/2} \left( \, Q\cdot \left[ (1+Q)^{n-1}-Q^{n-1} \right] + \left[ (1+Q)^{n-1}-1 \right] \,\right) \\&< q_n^{n/2} \left( \, Q + Q\cdot \sum _{j=1}^{n-2} (nQ)^j + (n-1)\cdot Q +\sum _{j=2}^{n-1} (nQ)^j \,\right) \\&= n\cdot Q\cdot q_n^{n/2} + Q\cdot q_n^{n/2}\cdot \sum _{j=1}^{n-2} (nQ)^j + q_n^{n/2}\cdot \sum _{j=2}^{n-1} (nQ)^j . \end{aligned}$$

Here the main term is \(n\cdot Q\cdot q_n^{n/2} = n\cdot (1-p_n)^{n-1}\) as claimed; the second largest term, \(n^2\cdot Q^2\cdot q_n^{n/2}\) from the beginning of the last sum, is already \(o(n\cdot Q\cdot q_n^{n/2})\) because \(n\cdot Q=o(1)\); and the sum of all the other terms is negligible. This completes the proof.

\(\square \)

4 Conclusion

  1.

    Concerning the generalization of Gilbert’s theorem, it is worth comparing Theorem 6 with the commonly used estimate \(e ^ { -e ^ {(\ln n) - p \cdot n } }\) (where \(p=p_n\)) for the probability of \(\mathbf {G}_{n,p}\) being connected, usually written in the form \(e^{-e^{-x}}\) via the substitution \(p=\frac{\ln n}{n}+\frac{x}{n}\). With the approximation \(e^{-z} \sim 1-z\) around zero, it is approximately \(1 - e ^ {(\ln n) - p \cdot n } = 1 - n\cdot e ^ {- p \cdot n }\). On the other hand, we can rewrite the formula of Theorem 6 as \(1 - n\cdot ([1-p]^{1/p})^{p\cdot (n-1)}\). Observing that the expression inside the parentheses tends to 1/e as \(p\rightarrow 0\), the function can be approximated as \(1 - n\cdot e^{-p\cdot (n-1)}\).

  2.

    Furthermore, we present here the following open question.

Problem 1

Does there exist some \(p_n\) tending to zero and some constant b such that \(\displaystyle \lim _{n\rightarrow \infty }P(\gamma _c(\mathbf {G}_{n,p_n})\le \log _b n)>0\)?