We study some limit characteristics of the binomial random graph \(G(n,p)\) (see [1–6]), where \(p \in (0,1)\) is an arbitrary fixed number independent of n. Recall that the vertex set of this graph is \(\{ 1, \ldots ,n\} \) and each pair of vertices is connected by an edge with probability p independently of the other pairs (more formally, \(G(n,p)\) is a random element taking values in the set of all graphs on \(\{ 1, \ldots ,n\} \) with distribution \(P(G(n,p) = H) = p^{e(H)}(1 - p)^{\binom{n}{2} - e(H)}\), where \(e(H)\) is the number of edges in H).
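
For illustration, a graph with the distribution of \(G(n,p)\) can be sampled directly from this definition; in the following minimal Python sketch (the function name sample_gnp and the sample parameters are our own illustrative choices), each of the \(\binom{n}{2}\) pairs of vertices is declared an edge independently with probability p.

import itertools
import random

def sample_gnp(n, p, seed=None):
    # Sample G(n, p): every pair of distinct vertices from {1, ..., n}
    # becomes an edge independently with probability p.
    rng = random.Random(seed)
    return {(i, j) for i, j in itertools.combinations(range(1, n + 1), 2)
            if rng.random() < p}

# Example: one sample of G(100, 1/2); its number of edges is Bin(C(100, 2), 1/2).
print(len(sample_gnp(100, 0.5, seed=1)))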

It was proved in [7–10] that the maximum size of a set of pairwise nonadjacent vertices in \(G(n,p)\) (this maximum size is called the independence number) is, with probability tending to 1, equal to \(f_0(n)\) or \(f_0(n) + 1\), where

$$f_0(n) = \left\lfloor 2\log_{1/(1 - p)} n - 2\log_{1/(1 - p)}\log_{1/(1 - p)} n + 2\log_{1/(1 - p)}\frac{e}{2} + 0.9 \right\rfloor.$$

Some refinements and generalizations of this result can be found, for example, in [11, 12].
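
For a concrete impression of the formula above, the value \(f_0(n)\) is easy to evaluate numerically. The following Python sketch (the function name f0 and the sample values of n and p are our own illustrative choices) computes it directly from the definition.

import math

def f0(n, p):
    # Two-point concentration formula for the independence number of G(n, p);
    # all logarithms are taken to the base 1/(1 - p).
    b = 1.0 / (1.0 - p)
    def logb(x):
        return math.log(x, b)
    return math.floor(2 * logb(n) - 2 * logb(logb(n)) + 2 * logb(math.e / 2) + 0.9)

for n in (10**3, 10**4, 10**5):
    print(n, f0(n, 0.5))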

In such cases, we say that the independence number is concentrated at two points. A natural question to ask is whether the condition imposed on the vertex set can be weakened or changed so that the two-point concentration is preserved. We consider two ways of making such a change, namely, imposing constraints on the structure of the induced subgraph and imposing constraints on the number of edges in the induced subgraph. In this paper, we take the latter way.

Consider two natural constraints: the number of edges is (i) at most a given number and (ii) exactly equal to a given number.

Let \(t:\mathbb{N} \to \mathbb{Z}_+\) be a function. Let \(X_n[t]\) be the maximum size \(k\) of a set of vertices in \(G(n,p)\) inducing a subgraph with at most \(t(k)\) edges, and let \(Y_n[t]\) be the maximum size \(k\) of a set of vertices in \(G(n,p)\) inducing a subgraph with exactly \(t(k)\) edges. Here, a set \(A \subset V(G)\) of vertices of a graph G induces the subgraph \(G|_A\) of G whose vertex set is \(A\) and whose edge set consists of all edges of G with both endpoints in \(A\).
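
To make these definitions concrete, here is a minimal brute-force Python sketch (our own illustration; it examines all vertex subsets and is therefore feasible only for very small n) that computes \(X_n[t]\) and \(Y_n[t]\) for a given edge set, for instance one produced by the sample_gnp sketch above.

import itertools

def induced_edge_count(edges, A):
    # Number of edges with both endpoints in the vertex set A.
    A = set(A)
    return sum(1 for u, v in edges if u in A and v in A)

def max_size(vertices, edges, accept):
    # Largest k such that some k-element subset A of vertices satisfies
    # accept(number of edges induced by A, k); returns 0 if no subset does.
    vertices = list(vertices)
    for k in range(len(vertices), 0, -1):
        for A in itertools.combinations(vertices, k):
            if accept(induced_edge_count(edges, A), k):
                return k
    return 0

def X_and_Y(vertices, edges, t):
    X = max_size(vertices, edges, lambda e, k: e <= t(k))  # at most t(k) edges
    Y = max_size(vertices, edges, lambda e, k: e == t(k))  # exactly t(k) edges
    return X, Y

# Example (tiny n only), reusing sample_gnp from the sketch above:
# print(X_and_Y(range(1, 13), sample_gnp(12, 0.5, seed=1), t=lambda k: k))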

Under certain conditions on \(t,\) it was proved in [13] that \({{X}_{n}}[t]\) is two-point concentrated.

Theorem 1 (N. Fountoulakis, R.J. Kang, C. McDiarmid, 2014). Let \(t = t(k) = o\left( \frac{k\sqrt{\ln k}}{\sqrt{\ln\ln k}} \right)\) and

$$f_t(n) = \left\lfloor 2\log_b n + (t - 2)\log_b \log_b (np) - t\log_b t + t\log_b \frac{2pe}{1 - p} + 2\log_b \frac{e}{2} + 0.9 \right\rfloor,$$
where \(b = 1/(1 - p)\).

Then, with probability tending to 1,

$${{X}_{n}}[t] \in \{ {{f}_{t}}(n),{{f}_{t}}(n) + 1\} .$$
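
For concreteness, \(f_t(n)\) can be evaluated in the same way as \(f_0(n)\). The sketch below uses \(b = 1/(1 - p)\) and, purely as an illustrative choice satisfying the hypothesis of Theorem 1, a constant value of t; the function name f_t and the sample parameters are ours.

import math

def f_t(n, p, t):
    # Formula of Theorem 1 for a constant t, with b = 1/(1 - p).
    b = 1.0 / (1.0 - p)
    def logb(x):
        return math.log(x, b)
    t_log_t = t * logb(t) if t > 0 else 0.0  # read t * log_b(t) as 0 when t = 0
    return math.floor(2 * logb(n) + (t - 2) * logb(logb(n * p)) - t_log_t
                      + t * logb(2 * p * math.e / (1 - p))
                      + 2 * logb(math.e / 2) + 0.9)

for t in (0, 1, 5):
    print(t, f_t(10**4, 0.5, t))  # t = 0 recovers f0(n) up to the log log term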

Note that, if t grows much faster, then no concentration result holds (there is neither two-point concentration nor concentration at any other fixed number of points). For example, if, say, \(t = p\left( \binom{k}{2} - 10k \right)\), then \(X_n[t] = n\) holds with asymptotically positive probability by the central limit theorem, since the number of edges in \(G(n,p)\) has the binomial distribution with parameters \(\binom{n}{2}\) and \(p\). However, for any fixed positive integer \(m\), with asymptotically positive probability, \(X_n[t] \leqslant n - m\) (and this probability decreases as m grows). The latter is connected with the fact that, with probability tending to 1, the maximum degree of the random graph is at most \(np + 2\sqrt{p(1 - p)n\ln n}\) (see [14]).

For the random variable \(Y_n[t]\), such a simple argument cannot be used for \(t\) close to \(p\binom{k}{2}\). Nevertheless, there is no concentration at a finite set of points in this situation (this result is stated below; it was proved by J. Balogh and M. Zhukovskii and can be found at https://arxiv.org/pdf/1904.05307.pdf).

Theorem 2 (Balogh, Zhukovskii, 2019). Let \(t(k) = \binom{k}{2}p + O(k)\).

(i) There exists a number \(\mu > 0\) such that, for any \(c > \mu\) and \(C > 2c + \mu\),

$$0 < \liminf_{n \to \infty} P\left( n - C\sqrt{\frac{n}{\ln n}} < Y_n[t] < n - c\sqrt{\frac{n}{\ln n}} \right)$$
$$\leqslant \limsup_{n \to \infty} P\left( n - C\sqrt{\frac{n}{\ln n}} < Y_n[t] < n - c\sqrt{\frac{n}{\ln n}} \right) < 1.$$

(ii) Given an arbitrary nonnegative integer sequence \(m_k = O\left( \sqrt{\frac{k}{\ln k}} \right)\), let

$$\left| \left( t(k) - \binom{k}{2}p \right) - \left( t(k - m_k) - \binom{k - m_k}{2}p \right) \right| = o(k).$$

Then, for any \(\varepsilon > 0\), there exist c, C such that

$$\liminf_{n \to \infty} P\left( n - C\sqrt{\frac{n}{\ln n}} < Y_n[t] < n - c\sqrt{\frac{n}{\ln n}} \right) > 1 - \varepsilon.$$

Thus, the question arises as to whether \({{Y}_{n}}[t]\) is two-point concentrated for asymptotically smaller t. Specifically, does an analogue of Theorem 1 hold? We have proved that it holds under even more general conditions, namely, for \(t = o\left( {\frac{{{{k}^{2}}}}{{\ln k}}} \right)\).

Theorem 3. Let \(t = t(k) = o\left( \frac{k^2}{\ln k} \right)\) and \(|t(k + 1) - t(k)| = o\left( \frac{k}{\ln k} \right)\). Additionally, let \(x = x(n)\) be a solution (which is unique for all sufficiently large n) of the equation

$$\begin{gathered} x\ln \frac{n}{x} + x + t\ln \frac{{{{x}^{2}}}}{2} - t\ln t \\ +\,\,t\ln \frac{p}{{1 - p}} - \frac{{{{x}^{2}} - x}}{2}\ln \frac{1}{{1 - p}} \\ \, + \left( {\frac{{{{x}^{2}}}}{2} - t} \right)\ln \left( {1 + \frac{t}{{\frac{{{{x}^{2}}}}{2} - t}}} \right) = 0. \\ \end{gathered} $$

Then, with probability tending to 1,

$$Y_n[t] \in \{ \lfloor x(n) - 0.1 \rfloor ,\; \lfloor x(n) - 0.1 \rfloor + 1\} .$$
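
The solution \(x(n)\) can be approximated numerically. In the Python sketch below (our own illustration) we take \(t(k) = k\), a choice satisfying the hypotheses of Theorem 3, extend it to real arguments, and evaluate it at x in the left-hand side of the equation; this reading of t, the bracket [2.5, n], and the parameter values are all our assumptions. The bisection only requires that the left-hand side changes sign on the bracket, which it does for the values shown.

import math

def lhs(x, n, p, t_of):
    # Left-hand side of the equation of Theorem 3, with t evaluated at x.
    t = t_of(x)
    half = x * x / 2.0
    return (x * math.log(n / x) + x
            + t * math.log(half) - t * math.log(t)
            + t * math.log(p / (1 - p))
            - (x * x - x) / 2.0 * math.log(1.0 / (1.0 - p))
            + (half - t) * math.log(1.0 + t / (half - t)))

def solve_x(n, p, t_of=lambda k: k, lo=2.5, hi=None, iters=100):
    # Plain bisection; assumes lhs changes sign on [lo, hi] and that
    # x**2 / 2 > t(x) > 0 throughout the bracket.
    hi = float(n) if hi is None else hi
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if lhs(lo, n, p, t_of) * lhs(mid, n, p, t_of) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

print(solve_x(10**3, 0.5))  # approximate x(n) for n = 1000, p = 1/2, t(k) = k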