1 INTRODUCTION. PRELIMINARY INFORMATION

The studies by Kolmogorov, Dmitriev, and Sevastyanov (see [1, 2]) gave the definition and the first results of the theory of branching random processes. Discrete-time branching processes (Galton–Watson processes) were introduced in the monographs [3, Chapter I, Sec. 1–8, pp. 11–14] and [4, Chapter I, Part A, Sec. 1–5] as homogeneous Markov chains whose state space and transition probabilities are determined by the branching condition. In [3, 5–7], transient phenomena for Galton–Watson processes were studied. In [8], effective and sharp inequalities were proved for the distributions of the number of particles of the \(n\)th generation. The asymptotic properties of Rotar's generalized numerical characteristic were investigated in [9, 10]; these results can be used to prove the central limit theorem for non-degenerate Galton–Watson processes. In [7, 11–15], various models of Galton–Watson processes with possible immigration were studied. In [16–28], an asymptotic analysis of complex Galton–Watson processes with decomposable components was carried out. In [28], in particular, an estimate for the rate of convergence in the main lemma for the critical Galton–Watson process was presented for the first time.

In this paper, we consider a Galton–Watson branching random process defined by the recurrence relations

$$Z_{0}=1,\quad Z_{n}=\sum\limits_{j=1}^{Z_{n-1}}X_{j},\quad n\geq 1,$$
(1)

where \(X_{1},X_{2},\ldots,X_{n},\ldots\) is a sequence of independent random variables (r.v.) with non-negative integer values and the common distribution

$$P\left(X_{1}=k\right)=P\left(Z_{1}=k\right)=p_{k},\quad k=0,1,...$$

(see [3, Ch. 1, §1, pp. 11–13; 4, Ch. 1, Part A, §1, pp. 1–4]). From equation (1) it follows that the Galton–Watson branching process is a homogeneous Markov chain with the state space \(\left\{0,1,2,\ldots,n,\ldots\right\}\) and with the transition probabilities

$$p_{ij}(n)=P\left(Z_{n}=j\mid Z_{0}=i\right)=p_{1j}^{*i}(n)=\sum\limits_{j_{1}+\ldots+j_{i}=j}p_{1j_{1}}\left(n\right)p_{1j_{2}}(n)\ldots p_{1j_{i}}(n),$$
(2)

where \(i,j=0,1,2,...,\)

$$p_{1j}(n)=P\left(Z_{n}=j\mid Z_{0}=1\right)=P\left(Z_{n}=j\right),$$
$$p_{0j}\left(n\right)=p_{1j}^{*0}\left(n\right)=\delta_{0j}=\begin{cases}{1\quad\textrm{for}\quad j=0,}\\ {0\quad\textrm{for}\quad j\neq 0.}\end{cases}$$

Equation (2) is called the branching condition. Let \(F(x)\) be the generating function of the r.v. \(X_{1}\):

$$F(x)=Ex^{X_{1}}=\sum\limits_{k=0}^{\infty}P\left(X_{1}=k\right)x^{k},\quad|x|\leq 1,$$

and let \(\nu\) be an arbitrary non-negative integer-valued r.v. with the generating function

$$G(x)=Ex^{\nu}=\sum\limits_{k=0}^{\infty}P(\nu=k)x^{k},\quad|x|\leq 1.$$

The following assertion holds.

Assertion. Let the random variables \(\nu,X_{1},X_{2},\ldots,X_{n},\ldots\) be mutually independent. Then the generating function of the random sum \(S_{\nu}=\sum\limits_{j=1}^{\nu}X_{j}\) satisfies the equation

$$Ex^{S_{\nu}}=\sum\limits_{k=0}^{\infty}P\left(S_{\nu}=k\right)x^{k}=G\left(F(x)\right),\quad|x|\leq 1,$$

where \(G(x)\) is the generating function of r. v. \(\nu\):

$$G(x)=Ex^{\nu}=\sum\limits_{j=0}^{\infty}P(\nu=j)x^{j},\quad|x|\leq 1.$$

Proof. By virtue of the condition of this assertion, we obtain

$$Ex^{S_{\nu}}=\sum\limits_{k=0}^{\infty}P\left(S_{\nu}=k\right)x^{k}=\sum\limits_{k=0}^{\infty}\left(\sum\limits_{j=0}^{\infty}P\left(S_{\nu}=k,\nu=j\right)\right)x^{k}$$
$${}=\sum\limits_{j=0}^{\infty}P\left(\nu=j\right)\left[\sum\limits_{k=0}^{\infty}P\left(S_{j}=k\right)x^{k}\right]=\sum\limits_{j=0}^{\infty}P\left(\nu=j\right)\left(\sum\limits_{k=0}^{\infty}P\left(X_{1}=k\right)x^{k}\right)^{j}$$
$${}=\sum\limits_{j=0}^{\infty}P\left(\nu=j\right)\left[F(x)\right]^{j}=G\left(F(x)\right).$$

This assertion is proved. \(\Box\)
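For illustration only, the following Python sketch checks the identity \(Ex^{S_{\nu}}=G(F(x))\) by Monte Carlo simulation; the distributions of \(X_{1}\) and \(\nu\) below are arbitrary assumed examples, not taken from the text.

```python
import random

# Illustrative check of the assertion: for a random sum S = X_1 + ... + X_nu
# with nu independent of the X_j, the generating function of S is G(F(x)).
# The offspring law p and the law q of nu are arbitrary assumed examples.

p = [0.3, 0.4, 0.2, 0.1]            # P(X_1 = k), k = 0..3
q = [0.2, 0.5, 0.3]                 # P(nu = j), j = 0..2

def F(x):                           # generating function of X_1
    return sum(pk * x**k for k, pk in enumerate(p))

def G(x):                           # generating function of nu
    return sum(qj * x**j for j, qj in enumerate(q))

def sample_S():                     # one realization of the random sum S_nu
    nu = random.choices(range(len(q)), weights=q)[0]
    return sum(random.choices(range(len(p)), weights=p)[0] for _ in range(nu))

x = 0.7
trials = 200_000
mc = sum(x ** sample_S() for _ in range(trials)) / trials
print(f"Monte Carlo E[x^S] = {mc:.4f},  G(F(x)) = {G(F(x)):.4f}")
```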

Using this assertion, we can derive recurrence relations for the generating functions

$$F_{n}(x)=\sum\limits_{k=0}^{\infty}P\left(Z_{n}=k\right)x^{k}=\sum\limits_{k=0}^{\infty}P_{n}(k)x^{k},\quad n=1,2,...,$$

i.e. the following relations hold

$$F_{n}\left(x\right)=F_{n-1}\left(F(x)\right)=F\left(F_{n-1}(x)\right),\quad n\geq 1.$$
(3)

Taking into account the above assertion and equation (1), the validity of formulas (3) is easily proved by mathematical induction.
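As an informal illustration of relations (1) and (3), the following sketch simulates \(Z_{n}\) for an arbitrarily chosen offspring law and compares the empirical value of \(Ex^{Z_{n}}\) with the iterate \(F_{n}(x)\); the distribution, the point \(x\), and the generation number are assumed example values.

```python
import random

# Sketch: simulate the Galton-Watson process (1) and compare the empirical
# E[x^{Z_n}] with the n-fold iterate F_n(x) from relation (3).

p = [0.25, 0.45, 0.2, 0.1]                   # P(X_1 = k), k = 0..3 (assumed example)

def F(x):
    return sum(pk * x**k for k, pk in enumerate(p))

def F_iter(x, n):                            # F_n(x) = F(F_{n-1}(x)), F_0(x) = x
    for _ in range(n):
        x = F(x)
    return x

def simulate_Z(n):                           # Z_0 = 1, Z_n = sum of Z_{n-1} offspring counts
    z = 1
    for _ in range(n):
        z = sum(random.choices(range(len(p)), weights=p, k=z)) if z else 0
    return z

n, x, trials = 5, 0.6, 100_000
empirical = sum(x ** simulate_Z(n) for _ in range(trials)) / trials
print(f"empirical E[x^Z_n] = {empirical:.4f},  F_n(x) = {F_iter(x, n):.4f}")
```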

In branching random processes with discrete time, the critical value of the mean \(A=EZ_{1}=F^{\prime}(1)\) is \(A=1\). Indeed, the limit theorems for the number of particles \(Z_{n}\) have completely different forms in the cases \(A\neq 1\) and \(A=1\). In the critical case \(A=1\) the limit distribution for \(Z_{n}\) is a specific exponential distribution, while in the noncritical cases \(A\neq 1\) the limit distributions are determined by complex functional equations that have no explicit solutions. Phenomena arising as \(n\to\infty\), \(A\to 1\) are called transient phenomena. The study of transient phenomena is based on the asymptotic formula for \(1-F_{n}(x)\) as \(n\to\infty\), \(A\to 1\). The form of this formula can be determined from the specific example of Galton–Watson branching processes generated by linear-fractional generating functions of the form \(F(x)=\frac{ax+b}{cx+d}\). Since \(F(1)=\frac{a+b}{c+d}=1\), without loss of generality we can assume that \(c=1\), i.e. \(F(x)=\frac{ax+b}{x+d}\).

In the general theory of limit theorems for branching processes, a particularly important role is played by the first three factorial moments of the r.v. \(Z_{1}\):

$$A=EZ_{1}=F^{\prime}\left(1\right),\quad B=EZ_{1}\left(Z_{1}-1\right)=F^{\prime\prime}(1),\quad C=EZ_{1}\left(Z_{1}-1\right)\left(Z_{1}-2\right)=F^{\prime\prime\prime}(1).$$

Expressing the coefficients \(a\), \(b\), \(d\) of the linear-fractional function \(F(x)\) in terms of the factorial moments \(A\) and \(B\), we obtain the representation

$$1-F(x)=\frac{A(1-x)}{1+\frac{B}{2A}(1-x)}.$$

As noted in [3, Chapter III, §4, p. 10], for a linear-fractional generating function \(F(x)\) the iterations \(F_{n}(x)=F\left(F_{n-1}(x)\right)\) in formula (3) admit the explicit expression

$$R_{n}(x)=1-F_{n}(x)=\frac{A^{n}(1-x)}{1+\frac{B}{2A}\frac{A^{n}-1}{A-1}(1-x)},$$
(4)

which for \(A=1\) is determined by continuity in the form

$$R_{n}(x)=\frac{(1-x)}{1+\frac{Bn}{2}(1-x)}.$$

The expression on the right-hand side of the formula (4) as \(n\to\infty\), \(A\to 1\) is equivalent to

$$r_{n}(x)=\begin{cases}\dfrac{A^{n}(1-x)}{1+\dfrac{B}{2}\dfrac{A^{n}-1}{A-1}(1-x)}&\textrm{for}\quad A\neq 1,\\[3ex] \dfrac{1-x}{1+\dfrac{Bn}{2}(1-x)}&\textrm{for}\quad A=1.\end{cases}$$
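The closed form (4) can also be checked numerically. The following sketch, with illustrative values of \(A\) and \(B\) (not taken from the text), iterates a linear-fractional \(F\) given through the representation above and compares \(1-F_{n}(x)\) with the right-hand side of (4).

```python
# Sketch: for a linear-fractional generating function defined through
# 1 - F(x) = A(1-x) / (1 + (B/2A)(1-x)), compare direct iteration of F
# with the closed form (4). A and B are illustrative values.

A, B = 0.95, 1.2

def F(x):
    return 1.0 - A * (1.0 - x) / (1.0 + (B / (2 * A)) * (1.0 - x))

def R_direct(x, n):                 # 1 - F_n(x) via direct iteration of F
    for _ in range(n):
        x = F(x)
    return 1.0 - x

def R_closed(x, n):                 # formula (4)
    g = (A**n - 1.0) / (A - 1.0)
    return A**n * (1.0 - x) / (1.0 + (B / (2 * A)) * g * (1.0 - x))

for n in (1, 5, 20, 100):
    print(n, R_direct(0.3, n), R_closed(0.3, n))
```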

Following Sevastyanov [3], we define the class of probabilistic generating functions

$$K\left(B_{0},C_{0}\right)=\left\{F(\cdot),\ F^{\prime\prime}(1)=B\geq B_{0}>0,\ F^{\prime\prime\prime}(1)=C\leq C_{0}<\infty\right\}.$$

The first result on transient phenomena for Galton–Watson processes is the following theorem proved by Sevastyanov (see [3, Chapter III, Sec. 4, p. 106]).

Theorem 1. The following equation holds: \(R_{n}\left(x\right)=r_{n}(x)\left(1+\eta_{n}(x)\right),\) where \(r_{n}(x)\) is determined by formula (4), and \(\eta_{n}(x)\to 0\) as \(n\to\infty\), \(A\to 1\) uniformly for all \(F(x)\in K\left(B_{0},C_{0}\right)\) and \(|x|\leq 1\).

2 ESTIMATE OF THE REMAINDER TERM IN THEOREM 1

In this section, we obtain an estimate for the remainder term in Theorem 1. We investigate the rate of the uniform convergence to zero, in the class of generating functions \(K\left(B_{0},C_{0}\right)\), of the infinitesimal quantity \(\mathop{\sup}\limits_{\left|x\right|\leq 1}\left|\eta_{n}(x)\right|=o(1)\), \(n\to\infty.\)

Theorem 2. Let \(A\neq 1\). Then the following relation holds

$$R_{n}(x)=r_{n}(x)\left(1+O\left(\frac{\overline{Q}_{n}(A)}{g(n,A)}\right)\right)$$

uniformly in the class of generating functions \(F(x)\in K\left(B_{0},C_{0}\right)\) as \(n\to\infty\), \(A\to 1\). Here

$$g(n,A)=\frac{A^{n}-1}{A-1}=1+A+\ldots+A^{n-1},\quad\overline{Q}_{n}(A)=\sum\limits_{k=0}^{n}A^{k}Q_{k},\quad Q_{k}=1-F_{k}(0),\quad Q_{0}=1.$$

Remark 1. In the course of the proof of Theorem 2, it will be shown that

$$\overline{Q}_{n}\left(A\right)=\sum\limits_{k=0}^{n}A^{k}Q_{k}=o\left(g(n,A)\right),\quad n\to\infty,\quad A\to 1.$$
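As an informal numerical illustration of Remark 1, the following sketch uses the geometric offspring law \(P(X_{1}=k)=(1-q)q^{k}\) with \(F(x)=(1-q)/(1-qx)\) and mean \(A=q/(1-q)\) (an assumed example, not from the text), lets \(A\to 1\) together with \(n\to\infty\), and observes the ratio \(\overline{Q}_{n}(A)/g(n,A)\) becoming small.

```python
# Sketch for Remark 1: compute Q_bar_n(A) and g(n, A) for a geometric offspring
# law (assumed example) with A -> 1 jointly with n -> infinity, and watch the
# ratio Q_bar_n(A) / g(n, A) tend to zero.

def ratio(n, A):
    q = A / (1.0 + A)                        # geometric parameter giving mean A
    F = lambda x: (1.0 - q) / (1.0 - q * x)
    Q, Fk0, Q_bar = 1.0, 0.0, 0.0            # Q_0 = 1, F_0(0) = 0
    for k in range(n + 1):
        Q_bar += A**k * Q                    # accumulate Q_bar_n(A)
        Fk0 = F(Fk0)                         # F_{k+1}(0)
        Q = 1.0 - Fk0                        # Q_{k+1}
    g = (A**n - 1.0) / (A - 1.0)             # g(n, A)
    return Q_bar / g

for n in (10, 100, 1000, 10000):
    A = 1.0 - 1.0 / n                        # let A -> 1 together with n -> infinity
    print(n, A, ratio(n, A))
```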

We consider the following auxiliary assertions.

Lemma 1. The function \(g(n,A)\) is non-decreasing in each of its arguments. For all \(n\) and \(A\) the following inequality holds

$$g(n,A)\geq\frac{1}{2}\min\left\{n,|1-A|^{-1}\right\}.$$
(5)

Lemma 1 was given in [3, Chapter III, §4, p. 104], but the proof there is incomplete and contains errors, so here we give a simple proof of this lemma.

Proof. For the increments of the function \(g\left(\cdot,\cdot\right)\) we have

$$g(n+1,\cdot)-g(n,\cdot)=\frac{A^{n+1}-A^{n}}{A-1}=A^{n}\geq 0,$$
$$g(\cdot,A+\Delta)-g(\cdot,A)=\frac{(A+\Delta)^{n}-1}{(A+\Delta)-1}-\frac{A^{n}-1}{A-1}=\sum\limits_{k=0}^{n-1}(A+\Delta)^{k}-\sum\limits_{k=0}^{n-1}A^{k}$$
$${}=\sum\limits_{k=0}^{n-1}\left[(A+\Delta)^{k}-A^{k}\right]=\sum\limits_{k=0}^{n-1}\sum\limits_{j=1}^{k}C_{k}^{j}A^{k-j}\Delta^{j}\geq 0,\quad\Delta\geq 0.$$

These relations show that the function \(g(\cdot,\cdot)\) is non-decreasing in each argument. Let us now prove inequality (5). Indeed, for \(A\geq 1\) we always have

$$g(n,A)=\sum\limits_{k=0}^{n-1}A^{k}\geq n.$$

Let us assume that \(A<1\). Then it follows from the formula

$$g(n,A)=\frac{A^{n}-1}{A-1}=\frac{1-A^{n}}{1-A}=\sum\limits_{k=0}^{n-1}A^{k},$$

that for \(A^{n}\leq\frac{1}{2}\) we have

$$g(n,A)\geq\frac{1}{2(1-A)},$$
(6)

and for \(A^{n}>\frac{1}{2}\), since each term satisfies \(A^{k}\geq A^{n}>\frac{1}{2}\) for \(0\leq k\leq n-1\), we have

$$g(n,A)\geq\frac{n}{2}.$$
(7)

Relations (6) and (7) prove the validity of inequality (5). The proof of Lemma 1 is complete. \(\Box\)
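Inequality (5) is also easy to verify numerically; the following minimal sketch checks it on an arbitrary grid of values of \(n\) and \(A\) (the grid is an assumption chosen for illustration).

```python
# Quick numerical check of inequality (5) from Lemma 1 over a grid of n and A:
# g(n, A) >= (1/2) * min(n, 1/|1 - A|). The grid values are arbitrary.

def g(n, A):
    return sum(A**k for k in range(n))       # g(n, A) = 1 + A + ... + A^{n-1}

ok = all(
    g(n, A) >= 0.5 * min(n, 1.0 / abs(1.0 - A))
    for n in range(1, 200)
    for A in (0.5, 0.9, 0.99, 0.999, 1.001, 1.01, 1.1, 2.0)
)
print("inequality (5) holds on the grid:", ok)
```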

Note that the expression

$$Q_{n}=R_{n}(0)=1-F_{n}(0)=1-P\left(Z_{n}=0\right)=P\left(Z_{n}>0\right)$$

is the probability of continuation (non-extinction) of the process.
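In practice, \(Q_{n}\) is obtained simply by iterating the generating function at zero; the following sketch does this for an illustrative offspring law with mean slightly below one (an assumed example).

```python
# Sketch: survival probability Q_n = 1 - F_n(0) by iterating F at zero.
# The offspring law below is an arbitrary illustrative choice with A = 0.98.

p = [0.26, 0.50, 0.24]                       # P(X_1 = k), k = 0, 1, 2

def F(x):
    return sum(pk * x**k for k, pk in enumerate(p))

def Q(n):
    x = 0.0                                  # F_0(0) = 0
    for _ in range(n):
        x = F(x)                             # F_{k+1}(0) = F(F_k(0))
    return 1.0 - x                           # Q_n = P(Z_n > 0)

for n in (1, 10, 100, 1000):
    print(n, Q(n))
```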

Lemma 2. As \(n\to\infty\), \(A\to 1\), \(Q_{n}\to 0\) uniformly with respect to \(F(x)\in K\left(B_{0},C_{0}\right)\).

The proof of this essential Lemma 2 is given in [3, Chapter III, §4, pp. 104–106]. However, the proof can be simplified if we use the relationship

$$Q_{n}\leq\bigg{[}\frac{b_{0}}{A^{n}}+b_{1}\sum\limits_{k=1}^{n}\frac{1}{A^{k}}\bigg{]}^{-1}=\left[\frac{b_{0}}{A^{n}}+\frac{b_{1}}{A}\,g\left(n,\frac{1}{A}\right)\right]^{-1}.$$
(8)

The validity of formula (8) follows from assertions (3.4.8) and (3.4.11) given in [3]. Here \(b_{0}=b_{0}\left(B_{0}\right)\) and \(b_{1}=b_{1}\left(C_{0}\right)\) are positive constants.

Since \(\sum\limits_{k=1}^{n}\frac{1}{A^{k}}\to\infty\) as \(n\to\infty\), \(A\to 1\), the proof of Lemma 2 follows from estimate (8) and Lemma 1.

Lemma 3. For any \(|x|\leq 1\), as \(n\to\infty\), \(A\to 1\), \(R_{n}(x)\to 0\) uniformly with respect to \(F(x)\in K\left(B_{0},C_{0}\right)\).

Proof. It is obvious that

$$\left|R_{n}(x)\right|=\left|1-F_{n}(x)\right|\leq\left|1-F_{n}(0)\right|+\left|F_{n}(x)-F_{n}(0)\right|$$

and

$$\left|F_{n}(0)-F_{n}(x)\right|=\left|P_{1}(n)x+P_{2}(n)x^{2}+\ldots+P_{k}(n)x^{k}+\ldots\right|\leq\sum\limits_{k=1}^{\infty}P_{k}(n)=P\left(Z_{n}>0\right)=Q_{n},$$

where \(P_{k}(n)=P\left(Z_{n}=k\right)\), \(k=0,1,2,\ldots\) Thus, \(\left|1-F_{n}(x)\right|\leq 2Q_{n}\). Now, to complete the proof of Lemma 3, it suffices to apply Lemma 2. \(\Box\)

Using the Taylor expansion, from equation (3) we obtain

$$R_{n+1}(x)=AR_{n}(x)-\frac{B}{2}R_{n}^{2}(x)+\overline{C}(x)R_{n}^{3}(x),$$
(9)

where \(\left|\overline{C}(x)\right|\leq\frac{C}{6}\). Dividing both sides of equation (9) by \(R_{n}(x)R_{n+1}(x)\) and denoting \(\frac{1}{R_{n}(x)}\) by \(b_{n}(x)\), we have

$$b_{n+1}(x)=\frac{1}{A}b_{n}(x)+\frac{B}{2A}\frac{b_{n+1}(x)}{b_{n}(x)}-\frac{\overline{C}(x)}{A}\frac{b_{n+1}(x)}{b_{n}^{2}(x)}.$$
(10)

Hence, we obtain

$$\frac{b_{n+1}(x)}{b_{n}(x)}=\frac{1}{A}+\frac{B}{2A}\frac{b_{n+1}(x)}{b_{n}^{2}(x)}-\frac{\overline{C}(x)}{A}\frac{b_{n+1}(x)}{b_{n}^{3}(x)}.$$
(11)

Substituting expression (11) into (10) instead of \(\frac{b_{n+1}(x)}{b_{n}(x)}\), we obtain

$$b_{n}(x)=\frac{1}{A^{n-1}\left(1-F(x)\right)}+\frac{B}{2}\sum\limits_{k=2}^{n}\frac{1}{A^{k}}+\theta(x)\sum\limits_{k=2}^{n}\frac{b_{k}(x)}{b_{k-1}^{2}(x)A^{n-k+1}}$$
$${}+\frac{\overline{C}(x)B}{2}\sum\limits_{k=2}^{n}\frac{b_{k}(x)}{b_{k-1}^{2}(x)A^{n-k+2}},$$
(12)

where \(\theta(x)=\frac{B^{2}}{4A}+\overline{C}(x)\). It is easy to see that

$$\left|\frac{1}{A^{n-1}[1-F(x)]}-\frac{1}{A(1-x)}\right|\leq\frac{B(1-x)}{A^{n}(1-F(x))}\leq\frac{B}{A^{n}(1-F(x))}\leq\frac{8C_{0}^{2}}{A^{n}B_{0}}\left(\frac{C_{0}}{B_{0}}+2\right)^{2}$$
(13)

and

$$\operatorname{Re}\frac{1}{1-x}=\frac{\operatorname{Re}\left(1-\overline{x}\right)}{|1-x|^{2}}=\frac{1-\operatorname{Re}x}{|1-x|^{2}}\geq 0.$$

Hence, for any \(|x|\leq 1\) we obtain

$$\bigg{|}\frac{1}{A^{n}(1-x)}+\frac{B}{2}\sum\limits_{k=2}^{n}\frac{1}{A^{k}}\bigg{|}\geq\bigg{|}Re\bigg{[}\frac{1}{A^{n}(1-x)}+\frac{B}{2}\sum\limits_{k=2}^{n}\frac{1}{A^{k}}\bigg{]}\bigg{|}\geq\frac{B}{2}\sum\limits_{k=2}^{n}\frac{1}{A^{k}}.$$
(14)

In what follows, we will need the following lemma.

Lemma 4. For any \(|x|\leq 1\), as \(n\to\infty\), \(A\to 1\), the estimate

$$\sum\limits_{k=1}^{n}\frac{R_{k}(x)}{A^{n-k}}=O\bigg{(}\sum\limits_{k=1}^{n}\frac{1}{A^{k}}\bigg{)}=O\left(g\left(n,\frac{1}{A}\right)\right)$$

holds uniformly for all \(F(x)\in K\left(B_{0},C_{0}\right)\).

Proof. Let \(L=L(A)\) be a sequence of indices such that \(A^{L}\to 1\), \(L\to\infty\), \(\frac{n}{L}\to\infty\) as \(n\to\infty\), \(A\to 1\). It is evident that

$$\left|\sum\limits_{k=0}^{n-L}\frac{R_{n-k}(x)}{A^{k}}\right|\leq\mathop{\max}\limits_{L\leq k\leq n}\left|R_{k}(x)\right|\sum\limits_{k=0}^{n}\frac{1}{A^{k}},$$
(15)
$$\left|\sum\limits_{k=n-L+1}^{n-1}\frac{R_{n-k}(x)}{A^{k}}\right|\leq 2\sum\limits_{k=n-L+1}^{n}\frac{1}{A^{k}}.$$
(16)

Hence, we have

$$\frac{\sum\limits_{k=1}^{n}{A^{-k}}}{\sum\limits_{k=n-L+1}^{n}{A^{-k}}}=\frac{A^{n}-1}{A^{L}-1}\geq\sum\limits_{k=0}^{[n/L]-1}\left(A^{L}\right)^{k}$$
(17)

and, since \(A^{L}\to 1\) and \(n/L\to\infty\), as \(A\to 1\) it is true that

$$\lim\limits_{n\to\infty}\sum\limits_{k=0}^{[n/L]-1}\left(A^{L}\right)^{k}=\infty.$$
(18)

The assertion of Lemma 4 follows from relationships (15)–(18) and Lemma 3. \(\Box\)
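A brief numerical illustration of Lemma 4, using the same illustrative geometric offspring law as above (an assumed example): the ratio of \(\sum_{k=1}^{n}R_{k}(x)/A^{n-k}\) to \(\sum_{k=1}^{n}A^{-k}\) remains bounded (and in fact becomes small) as \(n\to\infty\), \(A\to 1\).

```python
# Sketch for Lemma 4 with a geometric offspring law of mean A (assumed example).

def lemma4_ratio(n, A, x=0.3):
    q = A / (1.0 + A)
    F = lambda t: (1.0 - q) / (1.0 - q * t)
    Fk, num = x, 0.0
    for k in range(1, n + 1):
        Fk = F(Fk)                           # F_k(x)
        num += (1.0 - Fk) / A**(n - k)       # R_k(x) / A^{n-k}
    den = sum(A**(-k) for k in range(1, n + 1))
    return num / den

for n in (10, 100, 1000):
    print(n, lemma4_ratio(n, 1.0 - 1.0 / n))
```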

Let us now return to equation (12). Due to (11), as \(n\to\infty\), \(A\to 1\), the limit \({b_{n+1}(x)}/{b_{n}(x)}\to 1\) holds uniformly with respect to \(F(x)\in K(B_{0},C_{0})\). Hence, by virtue of inequalities (13), (14), and Lemma 4, it follows from (12) that, as \(n\to\infty\), \(A\to 1\),

$$b_{n}(x)=\bigg{[}\frac{1}{A^{n}(1-x)}+\frac{B}{2}\sum\limits_{k=0}^{n}\frac{1}{A^{k}}\bigg{]}\left(1+O\left(\frac{\overline{Q}_{n}(A)}{g(n,A)}\right)\right)$$
(19)

uniformly for all \(F(x)\in K\left(B_{0},C_{0}\right)\); this completes the proof of Theorem 2.

Theorem 2 refines B. A. Sevastyanov's theorems on transient phenomena for Galton–Watson branching random processes. A corollary of Theorem 2 is the following assertion.

Theorem 3. Let \(Q_{n}=1-F_{n}(0)=P\left(Z_{n}>0\right)\) be the probability of continuation of the Galton–Watson process. Then, as \(n\to\infty\), \(A\to 1\), the relation

$$Q_{n}=\frac{A^{n}}{1+\frac{B}{2}\frac{1-A^{n}}{1-A}}\left[1+O\left(\frac{\overline{Q}_{n}(A)}{g(n,A)}\right)\right]$$

holds uniformly for all generating functions \(F(\cdot)\in K\left(B_{0},C_{0}\right)\).
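The statement of Theorem 3 can be illustrated numerically. The following sketch, again with the illustrative geometric offspring law of mean \(A\) (an assumed example), compares \(Q_{n}=1-F_{n}(0)\) with the main term of the formula above as \(n\to\infty\), \(A\to 1\).

```python
# Sketch for Theorem 3: Q_n versus A^n / (1 + (B/2)(1 - A^n)/(1 - A)),
# for the geometric offspring law F(x) = (1-q)/(1-qx) with mean A (assumed).

def compare(n, A):
    q = A / (1.0 + A)
    F = lambda t: (1.0 - q) / (1.0 - q * t)
    B = 2.0 * A**2                           # F''(1) for this law
    x = 0.0
    for _ in range(n):
        x = F(x)                             # F_n(0)
    Qn = 1.0 - x
    main = A**n / (1.0 + (B / 2.0) * (1.0 - A**n) / (1.0 - A))
    return Qn, main

for n in (100, 1000, 10000):
    A = 1.0 - 1.0 / n
    print(n, *compare(n, A))
```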

Let us now prove the validity of relationship (19) in detail. Expanding \(F(x)\) in the equation \(F_{n+1}(x)=F\left(F_{n}(x)\right)\) by the Taylor formula

$$F(x)=1+A(x-1)+\frac{B}{2}(x-1)^{2}+\frac{\overline{C}(x)}{6}(x-1)^{3},$$

for \(R_{n}(x)=1-F_{n}(x)\) we obtain the following recurrence formula

$$R_{n+1}(x)=AR_{n}(x)-\frac{B}{2}R_{n}^{2}(x)+\frac{\overline{C}(x)}{6}R_{n}^{3}(x),$$
(20)

where \(\left|\overline{C}(x)\right|\leq C\). Using Lemma 4 and (20), we can establish the following inequalities

$$\left|\frac{R_{n+1}(x)}{R_{n}(x)}-A\right|\leq C_{1}Q_{n},\quad\left|\frac{R_{n}(x)}{R_{n+1}(x)}-\frac{1}{A}\right|\leq C_{2}Q_{n},$$
(21)

where \(C_{1}\) and \(C_{2}\) are some constants. Dividing (20) by \(AR_{n}(x)R_{n+1}(x)\), we obtain

$$b_{n+1}(x)=\frac{1}{R_{n+1}(x)}=\frac{1}{A}b_{n}(x)+\frac{B}{2A}\frac{R_{n}(x)}{R_{n+1}(x)}-\frac{\overline{C}(x)}{6A}\frac{R_{n}^{2}(x)}{R_{n+1}(x)}.$$
(22)

Using (21), we can rewrite (22) in the following form

$$A^{n+1}b_{n+1}(x)=A^{n}b_{n}(x)+\frac{B}{2A}A^{n}+\lambda_{n}(x),$$
(23)

where

$$\left|\lambda_{n}(x)\right|\leq C_{3}A^{n}Q_{n},\quad C_{3}={\textrm{const}}.$$
(24)

Iterating (23), we obtain

$$A^{n}b_{n}(x)=\frac{1}{1-x}+\frac{B}{2A}g(n,A)+\Lambda_{n}(x),$$
(25)

where

$$\left|\Lambda_{n}(x)\right|=\left|\sum\limits_{k=0}^{n-1}\lambda_{k}(x)\right|\leq C_{4}\sum\limits_{k=0}^{n-1}A^{k}Q_{k},\quad C_{4}={\textrm{const}}.$$
(26)

Now relationship (19) follows from the chain of inequalities (20)–(26). In conclusion, we note that

$$\sum\limits_{k=0}^{n-1}A^{k}Q_{k}=o(g(n,A)),\quad n\to\infty,\quad A\to 1.$$
(27)

Indeed, by Lemma 2, for any \(\varepsilon>0\) there exist \(n_{0}\) and \(\delta>0\) such that \(Q_{n}\leq\varepsilon\) for all \(n>n_{0}\), \(|A-1|\leq\delta\), and \(F(x)\in K\left(B_{0},C_{0}\right)\). Using estimate (24), we obtain

$$\left|\Lambda_{n}(x)\right|\leq\sum\limits_{k=0}^{n_{0}-1}\left|\lambda_{k}(x)\right|+\sum\limits_{k=n_{0}}^{n-1}C_{3}\varepsilon A^{k}\leq C_{4}+\varepsilon C_{3}A^{n_{0}}g\left(n-n_{0},A\right).$$

Therefore, as \(n\to\infty\), \(A\to 1\) the estimate (27) holds.

Remark 2. Let \(A=1\). Then one can verify the validity of the expansion

$$Q_{n}=\frac{2}{Bn}+\left(\frac{4C}{3B^{2}}-\frac{2}{B}\right)\frac{\ln n}{n^{2}}+o\left(\frac{\ln n}{n^{2}}\right),$$

according to which, in agreement with Theorem 3, we have

$$Q_{n}=\frac{2}{Bn}\left(1+O\left(\frac{\ln n}{n}\right)\right).$$
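The leading term of this expansion is easy to observe numerically; the following sketch uses the illustrative critical offspring law \(P(X_{1}=0)=P(X_{1}=2)=1/4\), \(P(X_{1}=1)=1/2\) (an assumed example with \(A=1\), \(B=1/2\)) and compares \(Q_{n}=1-F_{n}(0)\) with \(2/(Bn)\).

```python
# Sketch for Remark 2 (critical case A = 1): Q_n versus the leading term 2/(Bn)
# for the assumed offspring law p_0 = p_2 = 1/4, p_1 = 1/2, so B = F''(1) = 1/2.

def F(x):
    return 0.25 + 0.5 * x + 0.25 * x**2

B = 0.5

x = 0.0
for n in range(1, 100001):
    x = F(x)                                 # F_n(0)
    if n in (100, 1000, 10000, 100000):
        print(n, 1.0 - x, 2.0 / (B * n))
```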