Abstract
In this paper, we study cliques and chromatic number in the random subgraph Gn of the complete graph Kn on n vertices, where each edge is independently open with probability pn. Associating Gn with the probability measure ℙn, we say that the sequence {ℙn} is multiregime if the edge probability sequence {pn} is not convergent. Using a recursive method, we obtain uniform bounds on the maximum clique size and chromatic number for such multiregime random graphs.
1 Introduction
The study of cliques and chromatic numbers in random subgraphs of the complete graph Kn on n vertices is of importance from both theoretical and application perspectives. For the case where every edge of Kn is independently open with probability pn with limn pn ∈ {0, p, 1} for some 0 < p < 1, bounds on cliques and chromatic numbers have been obtained using a combination of the second moment method and martingale inequalities (see Alon and Spencer (2003), Bollobás (2001) and references therein). For the intermediate regime where pn = p is a constant, Shamir and Spencer (1987) studied the concentration of the chromatic number around its mean using martingale techniques. Sharp bounds for the chromatic number were then derived in Bollobás (1988) by exploring a maximal disjoint set of cliques. Scott (2008) uses a combinatorial argument to sharpen the interval of concentration of the chromatic number to an interval of length \(o(\sqrt{n}),\) and Panagiotou and Steger (2009) use a counting argument to obtain sharp lower bounds on the chromatic number.
For the sparse regime where pn → 0 as n → ∞, Frieze (1990) uses martingales to investigate the independence number of homogeneous random graphs and Luczak (1991) studies the chromatic number using a more delicate second moment method. Alon and Krivelevich (1997) use \(k\)-choosable graphs to study two point concentration of the chromatic number when the edge probability \(p_{n} = n^{-1/2-\delta}\) for δ > 0. More recently, Achlioptas and Naor (2005) use an analytic approach combining sharp threshold results with second moment methods to obtain the two possible values of the chromatic number when \(p_{n} = \frac{d}{n}\) and d > 0 is a constant.
In this paper, we consider random graphs where the edge probability {pn} is not necessarily convergent and use a recursive method to obtain bounds for the clique numbers and consequently the chromatic numbers. In the rest of this subsection, we briefly describe the random graph model under consideration and state our main results (Theorems 1 and 2) regarding the cliques and chromatic number.
1.1 Clique Number
For n ≥ 2, let Kn be the labelled complete graph with vertex set {1, 2, … , n} and for 1 ≤ i < j ≤ n, let e(i, j) be the edge between the vertices i and j. Let {X(i, j)}1≤i<j≤n be independent Bernoulli random variables with
$$ \mathbb{P}_{n}(X(i,j) = 1) = p_{n} = 1 - \mathbb{P}_{n}(X(i,j) = 0). $$
We say that edge e(i, j) is open if X(i, j) = 1 and closed otherwise. The resulting random graph G = G(n, pn) is an Erdős-Rényi (ER) random graph, defined on the probability space \((\Omega, \mathcal{F}, \mathbb{P}_{n})\). We say that the sequence of probability measures {ℙn} is multiregime if {pn} is not convergent. For notational simplicity we henceforth drop the subscript from the probability measure ℙn and denote it simply as ℙ.
A subset U ⊆ {1, … , n} is said to be an open clique of G if for all u, v ∈ U, the edge e(u, v) is open. The clique number
$$ \omega(G) := \max\left\{\#U : U \subseteq \{1,\ldots,n\} \text{ is an open clique of } G\right\} $$
is the size of the largest open clique in G, where #U denotes the cardinality of a set U. To estimate ω(G), we let
$$ W_{n} := \frac{\log{n}}{\log\left( \frac{1}{p_{n}}\right)} $$(1.1)
and define
$$ \alpha_{1} := \limsup\limits_{n} \frac{\log\left( \frac{1}{p_{n}}\right)}{\log{n}} \quad \text{ and } \quad \alpha_{2} := \limsup\limits_{n} \frac{\log\left( \frac{1}{1-p_{n}}\right)}{\log{n}}. $$(1.2)
Theorem 1.
Suppose α1 < 2 and α2 < 1. There are positive constants η and γ such that
For all 𝜖 > 0 and η, γ satisfying (1.3), there is a positive integer N ≥ 1 so that
for all n ≥ N, where
In words, the clique number ω(G) is of the order of Wn with high probability, i.e., with probability converging to one as n →∞.
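This scaling is easy to observe empirically. The following sketch (not part of the paper's argument) samples G(n, pn) for a constant edge probability and computes ω(G) exactly via a basic Bron-Kerbosch recursion; the values n = 40 and pn = 0.5 are illustrative assumptions, and the observed clique number typically lies between Wn and 2Wn:

```python
import math
import random
from itertools import combinations

def sample_graph(n, p, rng):
    """Open each edge of K_n independently with probability p."""
    adj = {v: set() for v in range(n)}
    for i, j in combinations(range(n), 2):
        if rng.random() < p:
            adj[i].add(j)
            adj[j].add(i)
    return adj

def clique_number(adj):
    """Exact maximum open clique size via a basic Bron-Kerbosch recursion."""
    best = 0
    def expand(r, cand):
        nonlocal best
        if not cand:
            best = max(best, len(r))
            return
        for v in list(cand):
            expand(r | {v}, cand & adj[v])
            cand.discard(v)
    expand(set(), set(adj))
    return best

n, p = 40, 0.5
w_n = math.log(n) / math.log(1 / p)   # W_n from Eq. 1.1
omega = clique_number(sample_graph(n, p, random.Random(0)))
print(omega, round(w_n, 2))
```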
We briefly outline the proof of Theorem 1. To obtain the lower bound in Eq. 1.4, we let 1 − tL(n) denote the probability of finding a clique of size L in the random graph G. We first obtain a recursive estimate roughly of the form (see Eq. 2.12 of Lemma 6 for a more precise formulation)
where β = βn ∈ (0,1) and θ = θn > 0. Using the recursion (1.7) we then find an explicit upper bound of the form \(t_{L}(n) \leq e^{-A_{1}} + 2e^{-A_{2}}\) where A1 = A1(L, n, pn) and A2 = A2(L, n, pn) (see Lemma 7 for explicit expressions for A1 and A2). Choosing L = (1 − η − α2)Wn, we then show that both A1 and A2 are at least \(n^{\theta _{clq}}\) (see proof of Theorem 1 in Section 3).
For the upper bound in Eq. 1.4, we use a union bound and estimate the probability that there exists an open clique of size L for L = (2 + 2𝜖)Wn + 1, to be at most exp (−𝜖(2 + 2𝜖)Wn log n). Combining with the lower bound estimates described in the previous paragraph, we then get Eq. 1.4. To obtain Eq. 1.5 from Eq. 1.4, we use the fact that \(p_{n} \geq \frac {1}{n^{\alpha _{1}+\epsilon }}\) for all n large (see Eq. 1.2) and so \(W_{n} = \frac{\log{n}}{\log\left( \frac{1}{p_{n}}\right)} \geq \frac{1}{\alpha_{1}+\epsilon}\) for all n large. This implies that
and so
for all n large. This obtains Eq. 1.5.
1.2 Chromatic Number
Let G = G(n, pn) be the random graph obtained in the previous subsection. The chromatic number χ(G) is defined as follows. A k −colouring of G is a map h : {1, 2, … , n} → {1, 2, … , k}; i.e., each vertex in G is assigned one of k colours. The k −colouring h is said to be proper if h(v1) ≠ h(v2) for v1, v2 that are endvertices of an open edge; i.e., no two endvertices of an open edge have the same colour. Using a distinct colour for each of the n vertices of G, we obtain a proper n −colouring. Define
to be the chromatic number of G.
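A proper colouring witnessing χ(G) ≤ n can in fact be produced greedily, assigning each vertex the smallest colour not yet used by its already-coloured open neighbours. The sketch below uses assumed illustrative parameters (n = 30, pn = 0.4); it is a check of the definitions, not the paper's method:

```python
import random
from itertools import combinations

def greedy_colouring(n, adj):
    """Give each vertex the smallest colour unused by its coloured open neighbours."""
    colour = {}
    for v in range(n):
        used = {colour[u] for u in adj[v] if u in colour}
        c = 1
        while c in used:
            c += 1
        colour[v] = c
    return colour

rng = random.Random(1)
n, p = 30, 0.4
adj = {v: set() for v in range(n)}
for i, j in combinations(range(n), 2):
    if rng.random() < p:
        adj[i].add(j)
        adj[j].add(i)

colour = greedy_colouring(n, adj)
# h is proper: no two endvertices of an open edge share a colour
assert all(colour[i] != colour[j] for i in range(n) for j in adj[i])
assert max(colour.values()) <= n   # a proper n-colouring always exists
```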
Letting α1, α2 be as in Eq. 1.2 and Wn be as in Eq. 1.1, we have the following result regarding the chromatic number of G.
Theorem 2.
Suppose {pn} is such that α1 < 1, \(\alpha_{2} < \frac{1}{2}\) and 1 − α2 > max(α1, α2). There are positive constants η, γ and c such that
and
For all 𝜖 > 0 and η, γ, c satisfying (1.8) and (1.9), there is a positive integer N ≥ 1 so that
for all n ≥ N.
Thus the chromatic number χ(G) is of the order of \(\frac{n}{W_{n}}\) with high probability. As in Alon and Spencer (2003), we use the estimates in Theorem 1 regarding the clique number and consider the complement graph G(n, 1 − pn) to find upper and lower bounds on the chromatic number (see Lemma 9 in Section 4).
Remark: Because we use recursion (see Lemma 6) to estimate the probability of occurrence of cliques, the information regarding the graph G becomes coarser at each successive step, and this is reflected in the difference between the upper and lower bounds for the clique number in Theorem 1. Since we use the clique number estimates to bound the chromatic number (see Lemma 9 in Section 4), the corresponding difference appears in the chromatic number bounds (Theorem 2) as well.
1.3 Special Cases
For completeness and illustration, we use our results above to state and provide brief proofs for the special cases where the probability sequence {pn} belongs to one of the three regimes: sparse, intermediate and dense. For stronger versions of the results stated below, we refer to Alon and Spencer (2003) and references therein.
Theorem 3.
-
(i) Suppose \(p_{n} = \frac {1}{n^{\theta _{1}}}\) for some 0 < θ1 < 2. For every ξ > 0, there is a positive integer N ≥ 1 so that
$$ \mathbb{P}\left( \frac{2-\theta_{1}}{2\theta_{1}} -\xi \leq \omega(G(n,p_{n})) \leq \frac{2+\theta_{1}}{\theta_{1}} + \xi\right) \geq 1-4 n^{-\xi} $$(1.12)for all n ≥ N.
-
(ii) Suppose pn = p ∈ (0, 1) for all n. For every ξ > 0, there is a positive integer N ≥ 1 so that
$$ \begin{array}{@{}rcl@{}} &&\mathbb{P}\left( \frac{(1-\xi)\log{n}}{\log\left( \frac{1}{p}\right)} \leq \omega(G(n,p_{n})) \leq \frac{(2+\xi)\log{n}}{\log\left( \frac{1}{p}\right)} \right) \\ && \geq 1- 4\exp\left( \frac{-\xi}{\log\left( \frac{1}{p}\right)}(\log{n})^{2}\right) \end{array} $$(1.13)for all n ≥ N.
-
(iii) Suppose \(p_{n} = 1-\frac {1}{n^{\theta _{2}}}\) for some 0 < θ2 < 1. For every ξ > 0, there is a positive integer N ≥ 1 so that
$$ \begin{array}{@{}rcl@{}} &&\mathbb{P}\left( (1-\theta_{2})(1- \xi)n^{\theta_{2}} \log{n} \leq \omega(G(n,p_{n})) \leq (2+\xi)n^{\theta_{2}}\log{n}\right) \\ && \geq 1- 4\exp\left( -\frac{\xi}{4} n^{\theta_{2}}(\log{n})^{2}\right) \end{array} $$(1.14)for all n ≥ N.
Using the bounds for the clique number in Theorem 3, we derive bounds for the chromatic number of ER graphs where each edge is independently open with probability rn, belonging to one of the three regimes sparse, intermediate or dense. As before, we discuss separate cases depending on the asymptotic behaviour of rn.
Theorem 4.
-
(i) Suppose \(r_{n} = \frac {1}{n^{\theta _{2}}}\) for some \(0 < \theta_{2} < \frac{1}{2}.\) For all ξ > 0, there is a constant N ≥ 1 so that
$$ \begin{array}{@{}rcl@{}}&&\mathbb{P}\left( (1-\xi)\frac{n^{1-\theta_{2}}}{2\log{n}} \leq \chi(G(n,r_{n})) \leq \frac{2(1+\xi)}{1-2\theta_{2}} \frac{n^{1-\theta_{2}}}{\log{n}} \right) \\ && \geq 1-4\exp\left( -\frac{\xi}{4}n^{\theta_{2}}(\log{n})^{2}\right) \end{array} $$(1.15)for all n ≥ N.
-
(ii) Suppose rn = p for some constant 0 < p < 1 and for all n. For all ξ > 0, there is a constant N ≥ 1 so that
$$ \begin{array}{@{}rcl@{}}&&\mathbb{P}\left( (1-\xi)\frac{n\log\left( \frac{1}{1-p}\right)}{2\log{n}} \leq \chi(G(n,r_{n})) \leq 2(1+\xi) \frac{n\log\left( \frac{1}{1-p}\right)}{\log{n}} \right) \\ && \geq 1-4\exp\left( -\frac{\xi}{3\log\left( \frac{1}{1-p}\right)}(\log{n})^{2}\right) \end{array} $$(1.16)for all n ≥ N.
-
(iii) Suppose \(r_{n} = 1-\frac {1}{n^{\theta _{1}}}\) for some 0 < θ1 < 1. For all ξ > 0, there is a constant N ≥ 1 so that
$$ \begin{array}{@{}rcl@{}}&&\mathbb{P}\left( (1-\xi)\frac{\theta_{1} n}{2+\theta_{1}}\leq \chi(G(n,r_{n})) \leq (1+\xi) \frac{2\theta_{1}n}{1-\theta_{1}} \right) \\ && \geq 1-4n^{-\frac{\xi^{2}}{3\theta_{1}}} \end{array} $$(1.17)for all n ≥ N.
We remark here that Alon and Spencer (2003) consider the random graph G as a whole and use the extended Janson’s inequality to obtain sharp concentration type estimates for the clique number ω(G), for example, when the edge probability is \(p_{n} = \frac{1}{2}.\) Using the bounds for the chromatic number in terms of the clique number (see Lemma 9 in Section 4) then results in the property that χ(G) is concentrated around \(\frac{n}{2\log_{2}{n}}\) with high probability. Because we use recursion to estimate the probability of occurrence of cliques, there is inherently a “loss of information” at each successive step, and this is reflected in the differences between our upper and lower bounds for the clique number and correspondingly the chromatic number. Indeed, we compute the chromatic number using the clique number estimates (Lemma 9) analogous to Alon and Spencer (2003) (see Section 4 for more details). For additional results regarding the chromatic number in the sparse regime, we also refer to Bollobás (2001).
The paper is organized as follows. In Section 2, we obtain preliminary estimates used for proving the main Theorems 1 and 2. In Section 3, we prove Theorem 1 and in Section 4, we prove Theorem 2. Finally, in the Appendix, we prove Theorems 3 and 4.
2 Preliminary Estimates
We use the following estimates throughout.
- Logarithm estimate: For 0 < x < 1,
$$ x < -\log(1-x) = \sum\limits_{k\geq 1}\frac{x^{k}}{k} < \sum\limits_{k\geq 1}x^{k} < \frac{x}{1-x}. $$(2.1)
- Binomial estimate: Let {Xj}1≤j≤m be independent Bernoulli random variables with ℙ(Xj = 1) = pj = 1 − ℙ(Xj = 0) and fix \(0 < \epsilon < \frac{1}{6}.\) If \(T_{m} = \sum_{j=1}^{m} X_{j}\) and \(\mu_{m} = \mathbb{E}T_{m},\) then
$$ \mathbb{P}\left( \left|T_{m} - \mu_{m}\right| \geq \mu_{m} \epsilon \right) \leq 2\exp\left( -\frac{\epsilon^{2}}{4}\mu_{m}\right) $$(2.2)
for all m ≥ 1. For a proof of Eq. 2.2, we refer to Corollary A.1.14, pp. 312 of Alon and Spencer (2003).
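The deviation estimate (2.2) can be compared against the exact two-sided binomial tail for moderate m; the values of m, p and 𝜖 below are illustrative assumptions (with 𝜖 < 1/6 as required):

```python
import math

def binom_pmf(m, p, k):
    """P(Binomial(m, p) = k), computed exactly."""
    return math.comb(m, k) * p**k * (1 - p) ** (m - k)

def two_sided_tail(m, p, eps):
    """Exact P(|T_m - mu_m| >= eps * mu_m) for T_m ~ Binomial(m, p)."""
    mu = m * p
    return sum(binom_pmf(m, p, k) for k in range(m + 1)
               if abs(k - mu) >= eps * mu)

m, p, eps = 200, 0.3, 0.15      # eps < 1/6, as in the estimate (2.2)
mu = m * p
exact = two_sided_tail(m, p, eps)
bound = 2 * math.exp(-eps**2 * mu / 4)
assert exact <= bound            # the exact tail never exceeds the bound
```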
The rest of the section is divided into three parts: The first part concerns a technical Lemma (see Lemma 5) that estimates a number δn and a decreasing sequence {qi}i≥ 0, both of which occur in the recursive estimate of the second part. In the second part, we estimate the probability tL(qi) of finding an L −open clique (i.e., an open clique formed by L vertices) from among a total of qi vertices, in terms of tL− 1(qi+ 1) (see estimate (2.12) in Lemma 6). The final part uses the recursion estimate (2.12) to explicitly compute tL(q0) for arbitrary q0 and L (see Lemma 7).
In Sections 3 and 4, we use the estimate for tL(q0) computed in Lemma 7 with appropriately chosen values of L = Ln and q0 = q0(n) to prove Theorems 1 and 2, respectively.
2.1 A Technical Lemma
In this subsection, we define and estimate certain quantities that are used throughout in the proofs of the main theorems. Let \(0 < \epsilon < \frac{1}{6}\) and let M = M(𝜖) ≥ 2 be large such that
Fixing such an M, we let 𝜖1 = 𝜖1(𝜖, M) ≤ 𝜖 be small so that
For n ≥ 1, we now set
$$ \delta_{n} := \left\{\begin{array}{ll} \epsilon_{1} p_{n} & \text{ if } p_{n} < 1-\frac{1}{M},\\ \epsilon(1-p_{n}) & \text{ if } p_{n} \geq 1-\frac{1}{M}. \end{array}\right. $$(2.5)
For a positive integer q let q0 = q and for i ≥ 1, let
$$ q_{i} := \lfloor (p_{n}-\delta_{n})(q_{i-1}-1) \rfloor $$(2.6)
be the largest integer less than or equal to (pn − δn)(qi− 1 − 1).
In the next subsection, we see that the number δn and the numbers {qi} both occur in the main recursive estimate used in the proof of Theorem 1. Indeed, the quantity δn occurs in the exponent of a term in Eq. 2.12 of Lemma 6 below and is a part of an estimate that relates the probability tL(qi) of finding an L −open clique among qi vertices, in terms of tL− 1(qi+ 1). For convenience, we therefore record properties of δn and the numbers {qi} for future use. We recall from Eq. 1.1 that \(W_{n} = \frac{\log{n}}{\log\left( \frac{1}{p_{n}}\right)}.\)
Lemma 5.
For any n ≥ 2, the difference pn − δn > 0 and there is a constant K ≥ 1 such that for all n ≥ K,
and
The numbers {qi} satisfy the following properties. For i ≥ 1,
and
(Proof of Eqs. 2.7 and 2.8 in Lemma 5).
If \(p_{n} < 1-\frac{1}{M},\) then δn = 𝜖1pn and so pn − δn = (1 − 𝜖1)pn > 0. Also \(\log\left( \frac{1}{p_{n}}\right) \geq \log\left( \frac{M}{M-1}\right).\) Consequently
by the choice of 𝜖1 in Eq. 2.4.
If \(p_{n} \geq 1-\frac{1}{M},\) then δn = 𝜖(1 − pn) and so
using Eq. 2.4. Moreover using the bound − log(1 − x) > x (see Eq. 2.1) with x = 1 − pn we get
for all n large, where \(\alpha_{2} = \limsup_{n} \frac{\log\left( \frac{1}{1-p_{n}}\right)}{\log{n}}\) is as defined in Eq. 1.2. This means that \(W_{n} = \frac {\log {n}}{\log \left (\frac {1}{p_{n}}\right )} \leq n^{\alpha _{2}+\epsilon } \log {n},\) giving Eq. 2.7.
To prove (2.8), we use the fact that \((1+\epsilon)(1-p_{n}) \leq \frac{1+\epsilon}{M} < 1,\) since M ≥ 2 and 0 < 𝜖 < 1. Therefore using the upper bound \(-\log(1-x) < \frac{x}{1-x}\) from Eq. 2.1 with x = (1 + 𝜖)(1 − pn) = 1 − (pn − δn), we have that
Similarly using the lower bound estimate − log(1 − x) > x from Eq. 2.1, we have \(-\log{p_{n}} = -\log(1-(1-p_{n})) > 1-p_{n}.\) Thus
by choice of M in Eq. 2.3. This proves (2.8). □
(Proof of Eqs. 2.9 and 2.10 in Lemma 5).
The relation (2.9) is obtained using the property x − 1 ≤ ⌊x⌋ ≤ x for any x > 0. Set Δ = pn − δn ∈ (0,1) and apply the upper bound in Eq. 2.9 recursively to get \(q_{i} \leq {\Delta}^{i} q_{0} = {\Delta}^{i} q.\) This proves the upper bound in Eq. 2.10. For the lower bound we again proceed iteratively and first obtain for i ≥ 2 that \(q_{i} \geq {\Delta}(q_{i-1}-1) - 1 = {\Delta} q_{i-1} - (1+{\Delta}).\) Continuing iteratively we obtain \(q_{i} \geq {\Delta}^{i} q_{0} - (1+{\Delta}){\sum }_{j=0}^{i-1} {\Delta}^{j}.\) Since Δ < 1, the sum \({\sum }_{j=0}^{i-1} {\Delta}^{j} \leq \frac{1}{1-{\Delta}}\) and so \(q_{i} \geq {\Delta}^{i} q_{0} - \frac{1+{\Delta}}{1-{\Delta}} \geq {\Delta}^{i} q - \frac{2}{1-{\Delta}},\) since Δ ∈ (0,1). This proves (2.10). □
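The sequence {qi} of Eq. 2.6 and the sandwich bounds of Eq. 2.10 are easy to check numerically; the values of 𝜖, 𝜖1, M, pn and q below are assumed for illustration only:

```python
import math

# Illustrative parameter choices (assumptions, not values fixed by the paper)
eps, eps1, M = 0.1, 0.05, 4
p_n = 0.6                          # here p_n < 1 - 1/M, so delta_n = eps1 * p_n (Eq. 2.5)
delta = eps1 * p_n if p_n < 1 - 1 / M else eps * (1 - p_n)
Delta = p_n - delta                # Delta = p_n - delta_n lies in (0, 1)

q = 10_000
qs = [q]
while qs[-1] >= 2:
    qs.append(math.floor(Delta * (qs[-1] - 1)))   # the recursion of Eq. 2.6

for i in range(1, len(qs)):
    # Eq. 2.10: Delta^i q - 2/(1 - Delta) <= q_i <= Delta^i q
    assert Delta**i * q - 2 / (1 - Delta) <= qs[i] <= Delta**i * q
```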
2.2 Recursion Estimate
Recall that the random graph G = G(n, pn) is said to have an open L −clique if there exists a subset S ⊂ {1, 2, … , n} such that #S = L and for any u, v ∈ S, the edge e(u, v) is open (see statement prior to Eq. 1.1). For integer 2 ≤ q ≤ n, let Sq := {1, 2, … , q} and let G(Sq) be the induced subgraph of G = G(n, pn), with vertex set Sq. For integer L ≥ 2, let BL(Sq) denote the event that the random subgraph G(Sq) contains an open L −clique and set
$$ t_{L}(q) := \mathbb{P}\left( B_{L}^{c}(S_{q})\right). $$(2.11)
By definition tL(q) = 0 if L > q.
In this subsection, we obtain a recursive estimate for the probability tL(q) for arbitrary L and q in terms of tL− 1(q1) where q1 is as defined in Eq. 2.6 with i = 1. In the next subsection, we use the recurrence relation obtained below to obtain an explicit estimate for tL(q).
Lemma 6.
Fix \(0 < \epsilon < \frac{1}{6}\) and let 𝜖1 and δn be as defined in Eqs. 2.4 and 2.5, respectively. There exists a constant N0 = N0(𝜖) such that the following holds for all n ≥ N0 : If q ≥ 2 and L ≥ 3 are such that q1 = ⌊(pn − δn)(q − 1)⌋ ≥ 2 (see Eq. 2.6), then
$$ t_{L}(q) \leq q t_{L-1}(q_{1}) + \exp\left( -\frac{\epsilon_{1}\delta_{n}}{16}q^{2}\right). $$(2.12)
In Lemma 7 stated below, we use recursion to estimate the first term in Eq. 2.12 and obtain explicit estimates for tL(q) (see proof of Lemma 7).
(Proof of Lemma 6).
Recall from Eq. 2.5 that δn ∈ {𝜖1pn, 𝜖(1 − pn)}. For simplicity, we write p = pn, δ = δn and first prove that Eq. 2.12 is satisfied with δ = p𝜖1.
If Ne denotes the number of open edges in the random graph G(Sq), then \(\mathbb{E}N_{e} = p\binom{q}{2}.\) Fixing \(0 < \epsilon < \frac{1}{6},\) setting δ = p𝜖1 and applying the binomial estimate (2.2) with Tm = Ne then gives
since \(\frac{1}{4}\binom{q}{2} \geq \frac{q^{2}}{16}\) for all q ≥ 2.
We now write ℙ(BLc(Sq)) = I1 + I2, where
and \(I_{2} := \mathbb{P}\left( B_{L}^{c}(S_{q}) \bigcap \left\{N_{e} < (p-\delta)\binom{q}{2}\right\}\right)\) is bounded above as
by Eq. 2.13.
It remains to estimate I1. Suppose that the event \(\left\{N_{e} \geq (p-\delta)\binom{q}{2}\right\}\) occurs. If d(v) denotes the degree of vertex v ∈ Sq in the random graph G(Sq), then \({\sum }_{1 \leq v \leq q} d(v) = 2N_{e} \geq (p-\delta)q(q-1)\) and so there exists a vertex w such that d(w) ≥ (p − δ)(q − 1) ≥ q1, where q1 is as defined in the statement of the Lemma. Thus
If N(z) = N(z, G(Sq)) is the set of neighbours of z in the random graph G(Sq), then
where the summation is over all subsets S of Sq such that #S ≥ q1 and z∉S.
Suppose now that the event BLc(Sq)⋂ {N(z) = S} occurs for some fixed set S ⊆ Sq. We recall that since BLc(Sq) occurs, there is no open L −clique in the random graph G(Sq) with vertex set Sq. This necessarily means that there is no open (L − 1) −clique in the random induced subgraph of G(Sq) formed by the vertices of S; i.e., the event BL− 1c(S) occurs. This is because each vertex of S is connected to z by an open edge. Therefore ℙ (BLc(Sq) ⋂ {N(z) = S}) is bounded above by
where the final equality is true since the event {N(z) = S} depends only on the state of edges containing z as an endvertex. On the other hand, the event BL− 1c(S) depends only on the state of edges having both their endvertices in S. Since the set S does not contain the vertex z (see Eq. 2.17), the events {N(z) = S} and BL− 1c(S) are independent.
To obtain the desired recursion (2.12) using Eq. 2.18, let T denote the set of the q1 least indices in S, where q1 ≤ #S is as defined in the statement of the Lemma. Since T ⊆ S, we have \(B_{L-1}^{c}(S) \subseteq B_{L-1}^{c}(T)\); i.e., if there is no open (L − 1) −clique in the random induced subgraph formed by the vertices of S, then there is none in the subgraph formed by the vertices of T. From Eq. 2.18, we then get that ℙ (BLc(Sq)⋂ {N(z) = S}) is bounded above by
since #T = q1 (see Eq. 2.11).
Substituting (2.19) into (2.17) we get that ℙ (BLc(Sq)⋂ {#N(z) ≥ q1}) is bounded above by
where the equality (2.20) is true since the events {N(z) = S} are disjoint for distinct S. Substituting (2.20) into (2.16) gives
Using this estimate for I1 and Eq. 2.15 we get
This proves (2.12) for δ = p𝜖1.
Suppose now that δ = 𝜖(1 − p). Recalling that Ne denotes the number of open edges in the random graph G(Sq) (see the first paragraph of this proof), we let \(W_{e} = \binom{q}{2} - N_{e}\) denote the number of closed edges so that \(\mathbb{E}W_{e} = (1-p)\binom{q}{2}.\) If \(W_{e} \leq (1-p)(1+\epsilon)\binom{q}{2},\) then We − EWe ≤ 𝜖EWe. Applying the binomial estimate (2.2) with Tm = We therefore gives that \(\mathbb{P}\left( W_{e} \leq (1-p)(1+\epsilon)\binom{q}{2}\right)\) is bounded below by
for all q ≥ 2, since δ = 𝜖(1 − p). Moreover,
and so we again obtain (2.13) with 𝜖 instead of 𝜖1. Arguing as before, we then get \(\mathbb{P}(B_{L}^{c}(S_{q})) \leq q t_{L-1}(q_{1}) + \exp\left( -\frac{\epsilon\delta}{16}q^{2}\right).\) Since 𝜖1 ≤ 𝜖 (see Eq. 2.4), this proves (2.12). □
2.3 Small Cliques Estimate
We now use the recursion obtained in Lemma 6 iteratively, to obtain an explicit estimate for the probability tL(q).
Lemma 7.
Fix \(0 < \epsilon < \frac{1}{6}\) and let 𝜖1, δn be as defined in Eqs. 2.4 and 2.5, respectively. There is a constant N0 = N0(𝜖) ≥ 1 so that the following holds for all n ≥ N0 : If q = qn and L = Ln are such that
then \(t_{L}(q) \leq e^{-A_{1}} + 2e^{-A_{2}}\) where
We recall that vL is as defined in Eq. 2.10. In the proof of Theorem 1, we use an auxiliary lemma (Lemma 8) to first check that the condition (2.22) holds and then use Lemma 7 to obtain lower bounds on the clique number.
(Proof of Lemma 7).
For simplicity, let p = pn, δ = δn and \(r(q) := \exp\left( -\frac{\epsilon_{1}\delta}{16} q^{2}\right).\) Recalling the definition of {qi} in Eq. 2.6 and applying the recursion (2.12) twice we get tL(q) ≤ qtL− 1(q1) + r(q) ≤ q(q1tL− 2(q2) + r(q1)) + r(q) provided L − 2 and q2 are both at least 2. Since q2 ≤ q1 ≤ q0 = q (see Eq. 2.9), we further get \(t_{L}(q) \leq q^{2} t_{L-2}(q_{2}) + q r(q_{1}) + r(q).\) Proceeding iteratively, we therefore get that
provided qL− 2 ≥ 2.
Since qL− 2 ≥ qL ≥ vL (see Eqs. 2.9 and 2.10), it is enough to prove the estimate for tL(q) in the Lemma from Eq. 2.23, assuming that vL ≥ 2. We now show that if vL ≥ 2, then the first term in Eq. 2.23 is at most \(e^{-A_{1}}\) and the second summation in Eq. 2.23 is at most \(2e^{-A_{2}},\) where A1 and A2 are as in the statement of the Lemma. This proves the Lemma. To estimate the summation term in Eq. 2.23, we use qj ≥ qL ≥ vL for 1 ≤ j ≤ L − 1 (see Eqs. 2.9 and 2.10) to get that r(qj) ≤ r(vL). Thus
where the second inequality in Eq. 2.24 follows from the fact that \(\frac{q^{L-2}-1}{q-1} \leq 2q^{L-3}\) for all q ≥ 2. Since the final term of Eq. 2.24 is in fact \(2e^{-A_{2}},\) we have bounded the summation in Eq. 2.23.
To bound the first term in Eqs. 2.23, we use the estimate
where the first equality in Eq. 2.25 is true since there is no open 2 −clique among a set of vertices if and only if all the edges between the vertices are closed. The final inequality in Eq. 2.25 follows from the fact that qL− 2 ≥ qL ≥ vL (see Eqas. 2.9 and 2.10). Since (vL 2) ≥ vL2 4 for all vL ≥ 2, we get from Eq. 2.25 that the first term in Eq. 2.23 is at most \(e^{-A_{1}},\) proving the Lemma. □
3 Proof of Theorem 1
We use the small cliques estimate in Lemma 7 with an appropriately chosen value of L = Ln, to obtain the lower bound on the clique number. For the upper bound on the clique number, we use the union bound to estimate the probability that there is an open clique of size L, chosen sufficiently large.
We begin with the proof of the lower bound on the clique number. To apply the small cliques estimate in Lemma 7, we first state and prove an auxiliary result (Lemma 8 below) that describes the feasibility of using Lemma 7. Let η, γ > 0 and 0 < c ≤ 1 be constants such that
For c = 1, this corresponds to the conditions mentioned in Eq. 1.3 of the statement of Theorem 1. To see that there are constants η, γ and c satisfying (3.1), we recall that α1 < 2 and α2 < 1. Therefore 2 > max(α1, 2α2) and choosing α2 + η0 and c0 close enough to one and γ0 close enough to zero, we get that Eq. 3.1 is satisfied with γ0, η0 and c0. Arguing similarly, we also have that there are constants η and γ such that Eq. 3.1 holds with c = 1.
For future use, we perform the analysis below for a general 0 < c ≤ 1. We recall the definition of 𝜖, 𝜖1 and δn prior to Eq. 2.5 and the term \(W_{n} = \frac{\log{n}}{\log\left( \frac{1}{p_{n}}\right)}\) from Eq. 1.1. We now set
and show that the quantities \(v_{L_{n}}, A_{1}\) and A2 defined in Lemma 7 satisfy the following estimates.
Lemma 8.
There are constants 𝜖 > 0 small and K = K(𝜖) ≥ 1 large such that for all n ≥ K
The first condition in Eq. 3.3 in the above Lemma implies that condition (2.22) holds for all large n and therefore ensures the feasibility of using the small cliques estimate Lemma 7. Below, in the proof of Theorem 1, we use the estimates in Eq. 3.3 with c = 1 together with Lemma 7, to get the following lower bound for ω(G(n, pn)) :
where θclq = 2(α2 + η − γ) − max(α1, α2) is as defined in Eq. 1.6.
(Proof of Lemma 8).
Let 0 < 𝜖 < 1 6 be arbitrary but fixed. We first estimate \(v_{L_{n}}\) by writing
(see Lemma 7 for the expression for vL) and use Eq. 2.8 (which states that \(\frac{\log(p_{n}-\delta_{n})}{\log{p_{n}}} < 1+2\epsilon\)) and the fact that Ln log pn < 0 to get
by the definition of Ln in Eq. 3.2. Thus
for all n large and so the exponent term in the expression for \(v_{L_{n}}\) in Eq. 3.5 is at least \(n^{c(\alpha _{2}+ \eta -3\epsilon )}.\)
To evaluate the term \(\frac{1}{1-p_{n}+\delta_{n}}\) in Eq. 3.5, we consider two cases depending on whether δn = 𝜖1pn or δn = 𝜖(1 − pn) (see Eq. 2.5). If δn = 𝜖1pn, then
and so \(\frac{1}{1-p_{n}+\delta_{n}} \leq \frac{1}{\epsilon_{1}}.\) If δn = 𝜖(1 − pn), then
for all n large, by the definition of \(\alpha_{2} = \limsup_{n} \frac{\log\left( \frac{1}{1-p_{n}}\right)}{\log{n}}\) in Eq. 1.2. In any case, \(\frac {1}{1-p_{n}+\delta _{n}} \leq \frac {n^{\alpha _{2}+\epsilon }}{1+\epsilon }\) for all n large and so, combining with the estimate (3.7) obtained in the previous paragraph, we get that
for all n large. Using the fact that c(α2 + η) > α2 (see Eq. 3.1), we choose 𝜖 > 0 small so that
Fixing such an 𝜖, we get \(v_{L_{n}} \geq n^{c(\alpha _{2} + \eta - 3\epsilon )}\) for all n large, proving the estimate for \(v_{L_{n}}\) in Eq. 3.3.
To estimate A1 and A2, we consider two separate cases depending on whether \(p_{n} < 1-\frac{1}{M}\) or \(p_{n} \geq 1-\frac{1}{M}.\) If \(p_{n} < 1-\frac{1}{M},\) then δn = 𝜖1pn (see the definition of δn in Eq. 2.5). From the estimate for Wn in Eq. 2.7 we have \(L_{n} \leq W_{n} \leq \frac{\log{n}}{\log\left( \frac{M}{M-1}\right)}.\) Since \(\alpha_{1} = \limsup_{n} \frac{\log\left( \frac{1}{p_{n}}\right)}{\log{n}} < 2\) (see Eq. 1.2), we have that \(p_{n} \geq \frac {1}{n^{\alpha _{1}+\epsilon }}\) for all n large. Together with the estimate for \(v_{L_{n}}\) obtained above in Eq. 3.3, we therefore get that
for all n large. Using the condition (3.1) and choosing 𝜖 > 0 smaller if necessary, we have that 2c(α2 + η − 3𝜖) − (α1 + 𝜖) > 2c(α2 + η − γ) − α1 > 0. Thus \(A_{1} \geq n^{2c(\alpha _{2} + \eta - \gamma ) - \alpha _{1}}\) and since δn = 𝜖1pn, an analogous analysis holds for A2.
Suppose now that \(p_{n} \geq 1-\frac{1}{M}.\) From Eq. 2.5 we have δn = 𝜖(1 − pn) and from Eq. 2.7 we have that \(L_{n} \leq W_{n} \leq n^{\alpha _{2} + \epsilon } \log {n}.\) Substituting into the expression for A1 (see Eq. 3.8) we get
for all n large, provided we choose 𝜖 > 0 smaller if necessary so that
This is possible by the conditions on α2, η and c in Eq. 3.1.
To estimate A2, we use \(\alpha_{2} = \limsup_{n} \frac{\log\left( \frac{1}{1-p_{n}}\right)}{\log{n}} < 1\) (see Eq. 1.2) to get that \(\delta _{n} = \epsilon (1-p_{n}) \geq \frac {\epsilon }{n^{\alpha _{2}+ \epsilon }}\) for all n large. Thus
As before, we use Eq. 3.1 and choose 𝜖 > 0 smaller if necessary to get that
This implies that \(A_{2} \geq n^{2c(\alpha _{2} + \eta - \gamma ) - \alpha _{2}}\) for all n large, proving the Lemma. □
(Proof of Theorem 1).
To prove the lower bound for ω(G) in Eq. 3.4, we set c = 1 in Eq. 3.1 and get from Lemma 8 that
where θclq is as defined in Eq. 1.6. Moreover, from Eq. 3.3 we get \(v_{L_{n}} \geq 2\) for all n large and so Lemma 7 is applicable. Thus \(t_{L_{n}}(n) \leq e^{-A_{1}} + 2e^{-A_{2}} \leq 3\exp \left (-n^{\theta _{clq}}\right ).\) In other words, with probability at least \(1-3\exp \left (-n^{\theta _{clq}}\right ),\) the random graph G contains an open clique of size Ln, proving the lower bound (3.4) for ω(G(n, pn)).
To obtain the upper bound for ω(G(n, pn)), we prove below that for any positive sequence {Hn}, the term
for all n ≥ 2, where \(f_{n} := \frac{H_{n}-1}{2}\log\left( \frac{1}{p_{n}}\right) - \log{n}.\) Setting Hn = (2 + 2𝜖)Wn + 1 ≥ (2 + 2𝜖)Wn, where \(W_{n} = \frac{\log{n}}{\log\left( \frac{1}{p_{n}}\right)}\) as defined in Eq. 1.1, we get that fn = 𝜖 log n. From Eq. 3.9, we then get the desired upper bound (1.4). As argued in the discussion following Theorem 1, the estimate (1.5) follows from Eq. 1.4.
To prove (3.9) we use the fact that if there is an open L −clique in G with L = Hn, then some set T with #T = L has the property that every vertex in T is connected to every other vertex in T by an open edge. The number of edges with both endvertices in T is \(\binom{L}{2}\) and the number of possible choices for T is \(\binom{n}{L}.\) Therefore,
since L = Hn. This proves (3.9). □
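The union bound underlying (3.9), namely \(\mathbb{P}(\omega(G) \geq L) \leq \binom{n}{L} p_{n}^{\binom{L}{2}},\) is easy to evaluate numerically in log form; the values of n, p and 𝜖 below are illustrative assumptions:

```python
import math

# Illustrative (assumed) parameters
n, p, eps = 10_000, 0.5, 0.1
W_n = math.log(n) / math.log(1 / p)           # W_n from Eq. 1.1
L = math.ceil((2 + 2 * eps) * W_n) + 1        # clique size just above (2+2*eps)W_n

# log of the union bound C(n, L) * p^{C(L, 2)}
log_bound = (math.log(math.comb(n, L))
             + math.comb(L, 2) * math.log(p))

# the bound is far below exp(-eps*(2+2*eps)*W_n*log n), as used for Eq. 1.4
assert log_bound < -eps * (2 + 2 * eps) * W_n * math.log(n)
```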
4 Proof of Theorem 2
We begin with the following definition:
Independence Number of a Graph
We recall that the graph G = G(n, pn) is the random subgraph of Kn obtained by allowing each edge to be independently open with probability pn. Also ω(G) is the size of the largest clique in G. The independence number α(G) is defined as follows: α(G) = h if and only if there is a set of h vertices, no two of which are joined by an open edge, and every set of h + 1 vertices contains an open edge in G.
Let G¯ = (V¯, E¯) denote the complement of the graph G obtained by flipping the states of all edges in G; i.e., all open edges in G are closed in G¯ and all closed edges in G are open in G¯. We use the following Lemma (Alon and Spencer (2003)) in the proof of Theorem 2.
Lemma 9.
(a1) The independence number α (G) = ω(G¯) and so the chromatic number
(a2) Suppose for some integer 1 ≤ m ≤ n, every set of m vertices in the complement graph G¯ contains an open clique of size L. We then have
(Proof of Theorem 2).
As in Section 1, let G = G(n, pn) be the random subgraph of the complete graph Kn where each edge is open with probability pn. We recall that ω(G) is the clique number of G as defined prior to Theorem 1. The lower bound in Eq. 1.11 then follows from property (a1) in Lemma 9 and the upper bound for the clique number ω(G(n, pn)) in Eq. 1.5, since the random graph G¯(n,1 − pn) has the same distribution as the random graph G(n, pn). Indeed, we recall from Eq. 3.9 that
where Wn = log n log(1 pn) is as defined in Eq. 1.1. Using Eq. 4.3 in Eq. 4.1 we therefore get
For the upper bound in Eq. 1.11, we use property (a2) with \(m = n^{c},\) where c satisfies the conditions in the statement of Theorem 2. For that we first see that there are positive constants η, γ and c such that Eq. 1.8 holds; i.e.,
Indeed, recalling that α1 < 1 and \(\alpha_{2} < \frac{1}{2},\) we have that 2(1 − α2) > max(α1, 2α2). Therefore choosing c0, α2 + η0 and γ0 close enough to 1 − α2, one and zero, respectively, we get that Eq. 4.4 holds. To see that Eq. 1.9 also holds; i.e.,
we recall that 1 − α2 > max(α1, α2). Therefore \(2 - \frac{\max(\alpha_{1}, \alpha_{2})}{1-\alpha_{2}} > 1.\) Choosing c0, α2 + η0 and γ0 closer to 1 − α2, one and zero, respectively, if necessary, we get that Eq. 4.5 also holds.
We now let \(m = n^{c}\) where c satisfies (4.4) and let \(\mathcal{S}_{m}\) be the set of subsets of size m in {1,2, … , n}. For a set \(S \in \mathcal{S}_{m}\) and integer L ≥ 2, we recall from Eq. 2.11 that BL(S) denotes the event that the random induced subgraph of G(n, pn) with vertex set S contains an open L −clique. We set
by Eq. 1.1 and the fact that \(m = n^{c}.\) Also from our choices of c, γ and η in Eqs. 4.4 and 4.5, the conditions in Eq. 3.1 hold. Therefore the estimates for A1 and A2 in Lemma 8 hold and we get
where θchr > 1, by Eq. 4.5.
If \(F_{n} = \bigcap _{S \in \mathcal {S}_{m}}B_{L}(S)\) denotes the event that every set of m vertices in the random graph G(n, pn) contains an open L −clique, then
where \(B = m^{\theta _{chr}} - m\log {n} \geq \frac {1}{2}m^{\theta _{chr}}\) for all n large, since θchr > 1 by Eq. 4.5. The first estimate in Eq. 4.7 is obtained from the fact that the number of subsets of size m in the set {1,2, … , n} is \(\binom{n}{m}\) and the probability that any given subset of size m does not contain an open L −clique is at most \(3\exp \left (-m^{\theta _{chr}}\right )\) by Eq. 4.6. If the event Fn occurs, then using property (a2) in Lemma 9, we have that
We now estimate \(\frac{n}{W_{n}}\) and show that it is much larger than \(m = n^{c}.\) Fix 𝜖 > 0 and recall from Eq. 1.1 that \(W_{n} = \frac{\log{n}}{\log\left( \frac{1}{p_{n}}\right)}.\) Using the bound − log(1 − x) > x in Eq. 2.1 with x = 1 − pn, we get \(\log \left (\frac {1}{p_{n}}\right ) = -\log (1-(1-p_{n})) > 1-p_{n} \geq \frac {1}{n^{\alpha _{2}+\epsilon }}\) for all n large, by the definition of α2 in Eq. 1.2. Thus the term \(W_{n} \leq n^{\alpha _{2}+\epsilon }\log {n}\) and so \(\frac {n}{W_{n} } \geq \frac {n^{1-\alpha _{2}-\epsilon }}{\log {n}}.\) Since c < 1 − α2 (see Eq. 1.8), we choose 𝜖 > 0 small so that c < 1 − α2 − 𝜖. Fixing such an 𝜖, we get from Eq. 4.8 that \(\chi(G(n,1-p_{n})) \leq \frac{(1+\epsilon)n}{c(1-\eta-\alpha_{2})W_{n}}\) for all n large, proving the upper bound in Eq. 1.10. Arguing as in the discussion following Theorem 1, the estimate (1.11) follows from Eq. 1.10. □
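The complement identity (a1) of Lemma 9, together with the bound χ(G) ≥ n/α(G) (each colour class being an independent set), can be verified by brute force on a small instance; the instance below is an illustrative sketch, not part of the argument:

```python
import random
from itertools import combinations, product

# Small random instance: open each edge of K_n independently with probability p
rng = random.Random(2)
n, p = 8, 0.5
open_edge = {(i, j): rng.random() < p for i, j in combinations(range(n), 2)}
comp_edge = {e: not s for e, s in open_edge.items()}     # flip all edge states

def largest(pred):
    """Size of the largest vertex subset S with pred(S) true."""
    return max(len(S) for k in range(1, n + 1)
               for S in combinations(range(n), k) if pred(S))

# alpha(G): no open edge inside S;  omega(G-bar): every edge of S open in G-bar
alpha = largest(lambda S: not any(open_edge[e] for e in combinations(S, 2)))
omega_bar = largest(lambda S: all(comp_edge[e] for e in combinations(S, 2)))

def chromatic(edges):
    """Smallest k admitting a proper k-colouring (exhaustive search)."""
    for k in range(1, n + 1):
        for h in product(range(k), repeat=n):
            if all(h[i] != h[j] for (i, j), s in edges.items() if s):
                return k

chi = chromatic(open_edge)
assert alpha == omega_bar     # property (a1): alpha(G) = omega(G-bar)
assert chi * alpha >= n       # chi(G) >= n / alpha(G), as in Eq. 4.1
```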
References
Achlioptas, D. and Naor, A. (2005). The two possible values of the chromatic number of a random graph. Annals of Mathematics 162, 1335–1351.
Alon, N. and Spencer, J. (2003). The Probabilistic method. Wiley, New York.
Alon, N. and Krivelevich, M. (1997). The concentration of the chromatic number of random graphs. Combinatorica 17, 3, 303–313.
Bollobás, B. (1988). The chromatic number of random graphs. Combinatorica 8, 49–55.
Bollobás, B. (2001). Random graphs. Cambridge University Press, Cambridge.
Frieze, A. (1990). On the independence number of random graphs. Discrete Mathematics 81, 171–176.
Panagiotou, K. and Steger, A. (2009). A note on the chromatic number of a dense random graph. Discrete Mathematics 309, 3420–3423.
Luczak, T. (1991). The chromatic number of random graphs. Combinatorica 11, 45–54.
Scott, A. (2008). On the concentration of the chromatic number of random graphs. arXiv:0806.0178.
Shamir, E. and Spencer, J. (1987). Sharp concentration of the chromatic number on random graphs \(G_{n,p}\). Combinatorica 7, 121–129.
Acknowledgments
I thank Professors Rahul Roy, Federico Camia, C. R. Subramanian and the referees for crucial comments that led to an improvement of the paper. I also thank Professors Rahul Roy, Federico Camia, C. R. Subramanian and IMSc for my fellowships.
Appendix: Proof of Theorems 3 and 4
Proof of Theorem 3
We use Theorem 1 with appropriate choices of η and γ.
(i) Here α1 = θ1 < 2, α2 = 0 and \(W_{n} = \frac{\log n}{\log(1/p_{n})} = \frac{1}{\theta_{1}}\). For \(0 < \xi < \frac{2-\theta_{1}}{\theta_{1}}\), we set \(\epsilon = \frac{\xi\theta_{1}}{2}\), \(\eta = \frac{\theta_{1}}{2} + \xi\theta_{1}\) and \(\gamma = \frac{\xi\theta_{1}}{2}\) so that Eq. 1.3 is satisfied and θclq = ξ. Using \(\epsilon(2+2\epsilon)W_{n}\log n \geq 2\epsilon W_{n}\log n = \xi\log n\) in Eq. 1.5, we then get Eq. 1.12.
(ii) Here α1 = α2 = 0 and \(W_{n} = \frac{\log n}{\log(1/p)}\). We let 0 < ξ < 1 and set \(\epsilon = \frac{\xi}{2}\), \(\eta = \xi\) and \(\gamma = \frac{\xi}{2}\) so that Eq. 1.3 is satisfied and θclq = ξ. As before, we use \(\epsilon(2+2\epsilon)W_{n}\log n \geq 2\epsilon W_{n}\log n = \frac{\xi(\log n)^{2}}{\log(1/p)}\) and get Eq. 1.13 from Eq. 1.5.
(iii) Here α1 = 0 and α2 = θ2 < 1. Letting 0 < ξ < 1, we set \(\epsilon = \frac{\xi}{4}\), \(\eta = \frac{\xi}{2} - \frac{\theta_{2}\xi}{2}\) and γ > 0 smaller than η. With these choices, (1.3) is satisfied and moreover θclq = θ2 + 2(η − γ) > θ2. To evaluate Wn, use the log estimates in Eq. 2.1 to get that \(\frac {\frac {1}{n^{\theta _{2}}}}{1-n^{-\theta _{2}}} > \log \left (\frac {1}{p_{n}}\right ) = -\log \left (1-\frac {1}{n^{\theta _{2}}}\right ) > \frac {1}{n^{\theta _{2}}}\) and so
$$ n^{\theta_{2}}\log{n}\left( 1-\frac{1}{n^{\theta_{2}}}\right) \leq W_{n} = \frac{\log{n}}{\log\left( \frac{1}{p_{n}}\right)} \leq n^{\theta_{2}}\log{n} \tag{A.1} $$

for all n large. Moreover
$$ (1-\eta - \alpha_{2})W_{n} = (1-\theta_{2})\left( 1-\frac{\xi}{2}\right)W_{n} \geq (1-\theta_{2})(1-\xi)n^{\theta_{2}} \log{n} $$

for all n large, and \((2+2\epsilon )W_{n} +1 \leq (2+\xi )n^{\theta _{2}}\log {n}\) and \(\epsilon (2+2\epsilon )W_{n}\log {n} \geq \frac {\xi }{4} n^{\theta _{2}} (\log {n})^{2}\) for all n large. Plugging the above into Eq. 1.5, we get Eq. 1.14.
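The sandwich in Eq. A.1 is elementary but easy to confirm numerically. The following Python sketch checks it for the hypothetical value θ2 = 0.5 (not from the paper; any 0 < θ2 < 1 behaves the same way):

```python
import math

theta2 = 0.5                      # hypothetical exponent for illustration
for n in (10**2, 10**4, 10**6):
    p = 1.0 - n ** (-theta2)      # p_n = 1 - n^(-theta2), as in part (iii)
    Wn = math.log(n) / math.log(1.0 / p)
    upper = n ** theta2 * math.log(n)
    lower = upper * (1.0 - n ** (-theta2))
    assert lower <= Wn <= upper   # the sandwich in Eq. A.1
print("Eq. A.1 sandwich verified")
```

The bounds follow from x < −log(1 − x) < x/(1 − x) applied with x = n^(−θ2), exactly as in the display preceding Eq. A.1.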
Proof of Theorem 4
We use Theorem 2 with pn = 1 − rn and appropriate choices of η and γ.
(i) Here pn = 1 − rn with α1 = 0 and \(\alpha_{2} = \theta_{2} < \frac{1}{2}\). Thus \(\eta_{0} := \frac{1}{2(1-\theta_{2})} - \theta_{2} < 1-\theta_{2}\) and for 0 < ξ < 1 to be determined later, we set
$$ \epsilon = \frac{\xi}{6},\ c = (1-\theta_{2})\left( 1-\frac{\xi^{3}}{6}\right),\ \eta = \eta_{0} \left( 1+\frac{\xi^{2}}{6}\right) \text{ and } \gamma = \frac{\eta_{0} \xi^{2}}{12}. \tag{A.2} $$

We need to ensure that conditions (1.8) and (1.9) hold with α1 = 0 and α2 = θ2. By definition c < 1 − θ2 and η0 < 1 − θ2 and so max(η, c) < 1 − θ2 provided ξ > 0 is small. We choose ξ > 0 smaller if necessary so that
$$ c(\theta_{2} + \eta) = \frac{1}{2} - \frac{\xi^{3}}{12} + \frac{\eta_{0}\xi^{2}}{6}(1-\theta_{2})\left( 1-\frac{\xi^{3}}{6}\right) \geq \frac{1}{2} + \frac{\eta_{0}\xi^{2}}{12}(1-\theta_{2})\left( 1-\frac{\xi^{3}}{6}\right) $$

and so \(\theta_{2} < \frac{1}{2} < c(\theta_{2} + \eta) < 1\). To ensure the third condition in Eq. 1.8, we have
$$ 2c(\theta_{2} + \eta-\gamma) = 1-\frac{\xi^{3}}{6} + \frac{\eta_{0}\xi^{2}}{6}(1-\theta_{2})\left( 1 - \frac{\xi^{3}}{6}\right) \geq 1 + \frac{\eta_{0}\xi^{2}}{12}(1-\theta_{2})\left( 1 - \frac{\xi^{3}}{6}\right) $$

provided ξ > 0 is small. Fixing such a ξ we get 2c(θ2 + η − γ) > 1 > 2θ2, since \(\theta_{2} < \frac{1}{2}\). Thus Eq. 1.8 holds.
To ensure (1.9), we write \(\theta_{chr} = \frac{1}{1-\theta_{2}} + \frac{\eta_{0}\xi^{2}}{6} - \frac{\theta_{2}}{(1-\theta_{2})\left(1-\frac{\xi^{3}}{6}\right)}\) and choose ξ > 0 small so that \(\left(1-\frac{\xi^{3}}{6}\right)^{-1} \leq 1+\frac{\xi^{3}}{4}\), and so \(\theta_{chr} \geq 1 + \frac{\eta_{0}\xi^{2}}{6} - \frac{\theta_{2}}{1-\theta_{2}}\cdot\frac{\xi^{3}}{4} \geq 1 + \frac{\eta_{0}\xi^{2}}{12}\). For future use we choose ξ > 0 smaller if necessary so that
$$ c\theta_{chr} \geq (1-\theta_{2})\left( 1-\frac{\xi^{3}}{6}\right)\left( 1 + \frac{\eta_{0} \xi^{2}}{12}\right) \geq (1-\theta_{2})\left( 1+\frac{\eta_{0}\xi^{2}}{24}\right) > 1-\theta_{2}. \tag{A.3} $$

Thus the bounds in Eq. 1.11 are true. We now evaluate the upper and lower bounds in Eq. 1.11. From Eq. A.1 and the fact that 0 < ξ < 1, we get \((2+2\epsilon )W_{n} + 1 \leq \frac {2n^{\theta _{2}}\log {n}}{1-\xi }.\) Similarly
$$ \frac{1+\epsilon}{c(1-\eta - \theta_{2})} = \frac{2\left( 1+\frac{\xi}{6}\right)}{\left( 1-\frac{\xi^{3}}{6}\right)\left( 1-2\theta_{2} - \frac{\eta_{0}\xi^{2}}{3}(1-\theta_{2})\right)} \leq \frac{2(1+\xi)}{1-2\theta_{2}}, $$

provided ξ > 0 is small, and these estimates yield the bounds for χ(·) in Eq. 1.15.
To evaluate the exponents in Eq. 1.11, we use Eq. A.1 to get that \(\epsilon (2+2\epsilon )W_{n} \log {n} \geq \frac {\xi }{4} n^{\theta _{2}}(\log {n})^{2}\) for all n large. Similarly, from Eq. A.3 we get cθchr > 1 − θ2, and this obtains (1.15).
(ii) Here pn = 1 − rn = 1 − p and so α1 = α2 = 0. Letting ξ > 0 be small such that

$$ \epsilon = \frac{\xi}{6},\ \eta = \frac{1}{2} + 2\xi^{2} < 1,\ \gamma = \xi^{2} \text{ and } c=1-\xi^{3}, \tag{A.4} $$

we get that the conditions in Eq. 1.8 are true. Also θchr = 1 + 2ξ² > 1 and so Eq. 1.9 is also true. Thus the bounds in Eq. 1.11 hold. Recalling that \(W_{n} = \frac{\log n}{\log(1/p_{n})}\), we have that \((2+2\epsilon)W_{n} + 1 = \left(2+\frac{\xi}{3}\right)\frac{\log n}{\log\left(\frac{1}{1-p}\right)} + 1 \leq \frac{2}{1-\xi}\cdot\frac{\log n}{\log\left(\frac{1}{1-p}\right)}\) for all n large. Similarly, the scaling factor in the upper bound in Eq. 1.11 is \(\frac{1+\epsilon}{c(1-\eta-\alpha_{2})} = \frac{1+\frac{\xi}{6}}{(1-\xi^{3})\left(\frac{1}{2}-2\xi^{2}\right)} \leq 2(1+\xi)\) if ξ > 0 is small. The exponents in Eq. 1.11 evaluate to \(c\theta_{chr} = (1-\xi^{3})(1+2\xi^{2}) \geq 1+\xi^{2}\) for all ξ > 0 small and \(\epsilon(2+2\epsilon)W_{n} \geq 2\epsilon W_{n} = \frac{\xi}{3}\cdot\frac{\log n}{\log\left(\frac{1}{1-p}\right)}\). This obtains (1.16).
(iii) Here \(p_{n} = 1-r_{n} = \frac {1}{n^{\theta _{1}}}\) and so α1 = θ1 < 1 and α2 = 0. Let ξ > 0 be small such that

$$ \epsilon = \frac{\xi^{2}}{6},\ \eta = \frac{1+\theta_{1}}{2} + 2\theta_{1}\xi^{2} < 1,\ \gamma = \theta_{1}\xi^{2} \text{ and } c=1-\xi^{3}. \tag{A.5} $$

Recalling condition (1.8), we have max(η, c) < 1 = 1 − α2, 0 < cη = c(α2 + η) < 1 and
$$ \begin{array}{@{}rcl@{}} 2c(\alpha_{2} + \eta-\gamma) &=& 2(1-\xi^{3})(\eta-\gamma) = 2(\eta-\gamma) - 2\xi^{3}(\eta-\gamma)\\ &=& 1+\theta_{1} +2\theta_{1} \xi^{2} - 2\xi^{3}(\eta-\gamma), \end{array} \tag{A.6} $$

which is greater than one if ξ > 0 is small. Thus Eq. 1.8 is true. Also \(\theta_{chr} = 1 + \theta_{1} + 2\theta_{1}\xi^{2} - \frac{\theta_{1}}{1-\xi^{3}}\) and for all ξ > 0 small we have \(\frac{1}{1-\xi^{3}} \leq 1 + \xi^{2}\); for such ξ, we have \(\theta_{chr} \geq 1 + \theta_{1} + 2\theta_{1}\xi^{2} - \theta_{1}(1 + \xi^{2}) = 1 + \theta_{1}\xi^{2} > 1\). For future use we set ξ > 0 smaller if necessary so that
$$ c\theta_{chr} \geq (1-\xi^{3}) (1+\theta_{1}\xi^{2}) \geq 1+\frac{\theta_{1}\xi^{2}}{2} > 1. \tag{A.7} $$

Thus Eq. 1.9 is also true and consequently, the bounds in Eq. 1.11 hold.
Recalling that \(W_{n} = \frac{\log n}{\log(1/p_{n})} = \frac{1}{\theta_{1}}\), we have from Eq. A.5 that \((2+2\epsilon)W_{n} + 1 = \left(2+\frac{\xi^{2}}{3}\right)\frac{1}{\theta_{1}} + 1 \leq \frac{2+\theta_{1}}{\theta_{1}(1-\xi)}\) for all n large, provided ξ > 0 is small. Fixing such a ξ, the scaling factor in the upper bound in Eq. 1.11 is \(\frac{1+\epsilon}{c(1-\eta-\alpha_{2})} = \frac{1+\frac{\xi^{2}}{6}}{(1-\xi^{3})\left(\frac{1-\theta_{1}}{2}-2\theta_{1}\xi^{2}\right)} \leq \frac{2(1+\xi)}{1-\theta_{1}}\), provided we set ξ > 0 smaller if necessary.
Finally, regarding the exponents in Eq. 1.17, we have from Eq. A.7 that cθchr > 1 and moreover \(\epsilon(2+2\epsilon)W_{n} \geq 2\epsilon W_{n} = \frac{\xi^{2}}{3}\cdot\frac{1}{\theta_{1}}\). This obtains (1.17). □
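The parameter choices in Eqs. A.2 and A.5 must satisfy several constraints simultaneously. The following Python sketch checks them numerically at the hypothetical values θ2 = 0.3, θ1 = 0.5 and the indicated small ξ (illustrative values not taken from the paper; the proof requires the inequalities for all sufficiently small ξ > 0):

```python
# Part (i): choices from Eq. A.2, with 0 < theta2 < 1/2
theta2, xi = 0.3, 0.05                          # hypothetical illustrative values
eta0 = 1 / (2 * (1 - theta2)) - theta2
c = (1 - theta2) * (1 - xi**3 / 6)
eta = eta0 * (1 + xi**2 / 6)
gamma = eta0 * xi**2 / 12
assert max(eta, c) < 1 - theta2                 # first condition in Eq. 1.8
assert theta2 < 0.5 < c * (theta2 + eta) < 1    # second condition in Eq. 1.8
assert 2 * c * (theta2 + eta - gamma) > max(1, 2 * theta2)  # third condition
theta_chr = (1 / (1 - theta2) + eta0 * xi**2 / 6
             - theta2 / ((1 - theta2) * (1 - xi**3 / 6)))
assert theta_chr >= 1 + eta0 * xi**2 / 12       # Eq. 1.9
assert c * theta_chr > 1 - theta2               # Eq. A.3

# Part (iii): choices from Eq. A.5, with 0 < theta1 < 1
theta1, xi = 0.5, 0.1                           # hypothetical illustrative values
eta = (1 + theta1) / 2 + 2 * theta1 * xi**2
gamma = theta1 * xi**2
c = 1 - xi**3
assert max(eta, c) < 1                          # first condition in Eq. 1.8
assert 2 * c * (eta - gamma) > 1                # Eq. A.6 exceeds one
theta_chr = 1 + theta1 + 2 * theta1 * xi**2 - theta1 / (1 - xi**3)
assert theta_chr >= 1 + theta1 * xi**2          # since 1/(1-xi^3) <= 1 + xi^2
assert c * theta_chr >= 1 + theta1 * xi**2 / 2  # Eq. A.7
print("parameter conditions verified")
```

All assertions hold with comfortable numerical margins at these sample values, mirroring the "provided ξ > 0 is small" steps of the proof.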
Ganesan, G. Cliques and Chromatic Number in Multiregime Random Graphs. Sankhya A 84, 509–533 (2022). https://doi.org/10.1007/s13171-020-00205-4