1 Introduction

For a fixed complex number s, the generalized omega function \(\Omega _s(k)\) is defined by \(\Omega _s(k)=\sum _{p^\ell \Vert k}\ell ^s\), where \(p^\ell \Vert k\) means that \(\ell \) is the largest exponent such that \(p^\ell |k\). The cases \(s=0\) and \(s=1\) coincide, respectively, with the well-known number-theoretic omega functions \(\omega (k)=\sum _{p|k}1\), the number of distinct prime divisors of the positive integer k, and \(\Omega (k)=\sum _{p^\ell \Vert k}\ell \), the total number of prime divisors of k counted with multiplicity. Duncan [3] proved that for each integer \(s\geqslant 0\)

$$\begin{aligned} \frac{1}{n}\sum _{k\leqslant n}\Omega _s(k)=\log \log n+M_s+O\left( \frac{1}{\log n}\right) , \end{aligned}$$
(1.1)

where \(M_s\) is a constant depending on s, given by \(M_s=M+M'_s\), with M referring to the Meissel–Mertens constant (see Remark 2.11 for more information), and

$$\begin{aligned} M'_s=\sum _{p}\sum _{\ell \geqslant 2}\frac{\ell ^s-(\ell -1)^s}{p^\ell }. \end{aligned}$$

Here and throughout the paper, \(\sum _p\) means that the sum runs over all primes. Note that \(M_0=M\). Also, we let \(M'=M_1\) and \(M''=M'_1=\sum _{p}\frac{1}{p(p-1)}\); thus, \(M'=M+M''\). Approximation (1.1) is a generalization of the previously known result of Hardy and Ramanujan [5] concerning the averages of the functions \(\omega \) and \(\Omega \).
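For concreteness, the functions \(\omega \), \(\Omega \), and \(\Omega _s\) above can be illustrated by a short script; the following is a minimal sketch based on trial-division factorization, with function names of our own choosing.

```python
def factorize(k):
    """Return the prime factorization of k as a dict {p: exponent}."""
    factors = {}
    d = 2
    while d * d <= k:
        while k % d == 0:
            factors[d] = factors.get(d, 0) + 1
            k //= d
        d += 1
    if k > 1:
        factors[k] = factors.get(k, 0) + 1
    return factors

def omega_s(k, s):
    """Generalized omega function: sum of ell**s over prime powers p^ell || k."""
    return sum(ell ** s for ell in factorize(k).values())

# For 12 = 2^2 * 3: omega(12) = 2, Omega(12) = 3, Omega_2(12) = 2^2 + 1^2 = 5.
print(omega_s(12, 0), omega_s(12, 1), omega_s(12, 2))
```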

Based on Dirichlet’s hyperbola method and the prime number theorem for arithmetic progressions with error term, Saffari [12] obtained a full asymptotic expansion for the average of \(\omega (n)\), where n runs over the arithmetic progression a modulo q with \(\gcd (a,q)=1\). For \(a=q=1\), his result reads as follows:

$$\begin{aligned} \frac{1}{n}\sum _{k\leqslant n}\omega (k)=\log \log n+M+\sum _{j=1}^m\frac{a_j}{\log ^j n}+O\Big (\frac{1}{\log ^{m+1}n}\Big ), \end{aligned}$$
(1.2)

where \(m\geqslant 1\) is any fixed integer, and the coefficients \(a_j\) are given by

$$\begin{aligned} a_j =-\int _1^\infty \frac{\{t\}}{t^2}\log ^{j-1} t\,\textrm{d}t =\frac{(-1)^{j-1}}{j}\,\frac{\textrm{d}^j}{\textrm{d}s^j}\left( \frac{1}{s}(s-1)\zeta (s)\right) _{s=1}. \end{aligned}$$
(1.3)

In the above integral representation and in what follows, the expression \(\{t\}\) stands for the fractional part of t. Diaconis [2] reproved (1.2) using the Dirichlet series of \(\omega \), Perron’s formula, and complex integration methods. One may obtain a similar expansion for the average of the generalized omega function \(\Omega _s\) for each fixed real \(s\geqslant 0\), replacing M by \(M_s\) (see [9, Theorem 1] for more details).

Explicit versions of (1.1) for \(s=0\) and \(s=1\) were obtained in [8] and [6], respectively, and both were subsequently improved in [7, Theorem 1.2], where it is shown that for each \(n\geqslant 2\), the following two-sided approximation holds:

$$\begin{aligned} -\frac{1.133}{\log n}<\frac{1}{n}\sum _{k\leqslant n}\omega (k)-\log \log n-M<\frac{1}{2\log ^2 n}. \end{aligned}$$
(1.4)

Also

$$\begin{aligned} -\frac{1.175}{\log n}<\frac{1}{n}\sum _{k\leqslant n}\Omega (k)-\log \log n-M'<\frac{1}{2\log ^2 n}, \end{aligned}$$
(1.5)

where the left-hand side is valid for each \(n\geqslant 24\) and the right-hand side is valid for each \(n\geqslant 2\).

2 Summary of the Results

2.1 Unconditional Results

In the present paper, we are motivated by finding global numerical lower and upper bounds for the differences \(\mathcal {A}_0(n)\) and \(\mathcal {A}_1(n)\), where \(\mathcal {A}_s(n)\) is defined for any fixed complex number s as follows:

$$\begin{aligned} \mathcal {A}_s(n)=\frac{1}{n}\sum _{k\leqslant n}\Omega _s(k)-\log \log n. \end{aligned}$$

The problem for the case \(\mathcal {A}_0(n)\) is an easy corollary of the inequalities (1.4). More precisely, we prove the following.

Theorem 2.1

For all natural numbers \(n\geqslant 2\), we have

$$\begin{aligned} \alpha _0\leqslant \mathcal {A}_0(n)\leqslant \beta _0 \end{aligned}$$
(2.1)

with the best possible constants \(\alpha _0=\frac{45}{32}-\log \log 32\) and \(\beta _0=\frac{1}{2}-\log \log 2\), where equality holds on the left-hand side only for \(n=32\), and on the right-hand side only for \(n=2\).

Similarly, to get a global numerical lower bound for \(\mathcal {A}_1(n)\), we can use the inequalities (1.5) to show the following result.

Theorem 2.2

For all natural numbers \(n\geqslant 2\), we have

$$\begin{aligned} \alpha _1\leqslant \mathcal {A}_1(n) \end{aligned}$$
(2.2)

with the best possible constant \(\alpha _1=\frac{8}{7}-\log \log 7\) and the equality only for \(n=7\).
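The equality cases in Theorems 2.1 and 2.2 are easy to observe numerically. The following sketch (our own verification code, not the Maple scripts used for the paper) sieves \(\omega \) and \(\Omega \) up to a modest bound and locates the minima of \(\mathcal {A}_0(n)\) and \(\mathcal {A}_1(n)\):

```python
import math

N = 2000
omega = [0] * (N + 1)
Omega = [0] * (N + 1)
for p in range(2, N + 1):
    if omega[p] == 0:  # p is prime
        for q in range(p, N + 1, p):
            omega[q] += 1
            m = q
            while m % p == 0:  # count the multiplicity of p in q
                Omega[q] += 1
                m //= p

A0, A1 = {}, {}
sum_omega = sum_Omega = 0
for n in range(1, N + 1):
    sum_omega += omega[n]
    sum_Omega += Omega[n]
    if n >= 2:
        loglog = math.log(math.log(n))
        A0[n] = sum_omega / n - loglog
        A1[n] = sum_Omega / n - loglog

print(min(A0, key=A0.get))  # 32, matching alpha_0 = 45/32 - log log 32
print(min(A1, key=A1.get))  # 7,  matching alpha_1 = 8/7 - log log 7
```

Since the theorems assert that these minima are global, any range containing 32 and 7 suffices for the check.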

The problem of obtaining a global numerical upper bound for \(\mathcal {A}_1(n)\) is quite different from the above ones. Although computations show that \(\mathcal {A}_1(n)<\beta _1\) for any \(n\geqslant 2\) with the best possible constant \(\beta _1=M'\), the inequalities (1.5) are not sharp enough to prove this fact. To deal with this difficulty, we made all steps of the proof of (1.2) explicit by following Saffari’s argument in [12], and hence we were able to prove the following result.

Theorem 2.3

For all natural numbers \(n\geqslant \textrm{e}^{14167}\approxeq 4.466\times 10^{6152}\), we have

$$\begin{aligned} \mathcal {A}_1(n)<\beta _1 \end{aligned}$$
(2.3)

with the best possible constant \(\beta _1=M'\). Moreover, if we assume that the Riemann hypothesis is true, then (2.3) holds for all natural numbers \(n\geqslant 1400387903260\).

To prove Theorem 2.3, we use explicit forms of the prime number theorem with error term. Let \(\pi (x)=\sum _{p\leqslant x}1\) be the prime counting function, and \(\textrm{li}(x)=\int _0^x\frac{1}{\log t}\,\textrm{d}t\) be the logarithmic integral function, defined as the Cauchy principal value of the integral. By \(f=O^*(g)\), we mean \(|f|\leqslant g\), providing an explicit version of Landau’s notation. It is known [15, Theorem 2] that

$$\begin{aligned} \pi (x)=\textrm{li}(x)+O^*\left( 0.2795\,x(\log x)^{-\frac{3}{4}}\,\textrm{e}^{-\sqrt{(\log x)/6.455}}\right) \qquad (x\geqslant 229). \end{aligned}$$

Modifying the above to the classical form, for any \(x>1.2\), we have

$$\begin{aligned} \pi (x)=\textrm{li}(x)+O^*(R(x)),\qquad R(x)=x\,\textrm{e}^{-\frac{1}{3}\sqrt{\log x}}. \end{aligned}$$
(2.4)

Although this is a weaker approximation, it is suitable for our arguments because of its global validity. We will use it to prove the following unconditional results.

Theorem 2.4

For any fixed integer \(m\geqslant 1\) and for any \(x\geqslant \textrm{e}\), we have

$$\begin{aligned} \sum _{n\leqslant x}\omega (n)=x\log \log x+Mx+x\sum _{j=1}^m\frac{a_j}{\log ^j x}+O^*\left( \mathcal {E}_\omega (x,m)\right) , \end{aligned}$$
(2.5)

where

$$\begin{aligned} \mathcal {E}_\omega (x,m)= & {} 2^{m+1}m!\,\frac{x}{\log ^{m+1} x} +(2^{m+1}+1)\,\textrm{e}m!\,\frac{\sqrt{x}}{\log x}\\{} & {} +x\,\textrm{e}^{-\frac{\sqrt{2}}{6}\sqrt{\log x}}\left( \frac{1}{2}\log x+3\sqrt{2}\sqrt{\log x}+21\right) +\sqrt{x}. \end{aligned}$$

Corollary 2.5

For \(x\geqslant \textrm{e}^{14167}\approxeq 4.466\times 10^{6152}\), we have

$$\begin{aligned} \sum _{n\leqslant x}\omega (n)=x\log \log x+Mx-\left( 1-\gamma \right) \frac{x}{\log x}+O^*\left( \frac{5x}{\log ^2x}\right) , \end{aligned}$$
(2.6)

and consequently, \(\frac{1}{x}\sum _{n\leqslant x}\omega (n)-\log \log x<M\).

To transfer an average result on the function \(\omega \) to an average result on the function \(\Omega \), we may consider the average difference \(\mathcal {J}(x):=\sum _{n\leqslant x}(\Omega (n)-\omega (n))\), for which it is known [7, Theorem 1.1] that for each integer \(n\geqslant 1\)

$$\begin{aligned} nM''-25\frac{\sqrt{n}}{\log n}<\mathcal {J}(n)<nM''-\frac{\sqrt{n}}{\log n}\Big (2-\frac{20}{\log n}\Big ). \end{aligned}$$
(2.7)

Modifying the above approximation, we will prove in Lemma 3.4 that \(\mathcal {J}(x)=M''x+O^*(\frac{33\sqrt{x}}{\log x})\) for any \(x\geqslant 2\). Thus, Theorem 2.4 and Corollary 2.5 transfer to the following results.
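Lemma 3.4 is easy to check numerically on an initial range. The following sketch is our own verification, not part of the proof; it uses the identity \(\Omega (n)-\omega (n)=\sum _{p^\ell \Vert n}(\ell -1)\), so it suffices to sieve over prime powers \(p^j\) with \(j\geqslant 2\), and it takes the value of \(M''\) from Remark 2.11:

```python
import math

M2 = 1.03465388189743791 - 0.26149721284764278  # M'' = M' - M (Remark 2.11)

N = 100000
diff = [0] * (N + 1)  # diff[n] = Omega(n) - omega(n)
for p in range(2, int(N ** 0.5) + 1):
    if all(p % d for d in range(2, int(p ** 0.5) + 1)):  # p is prime
        pj = p * p
        while pj <= N:  # each prime power p^j with j >= 2 contributes 1
            for q in range(pj, N + 1, pj):
                diff[q] += 1
            pj *= p

J = 0
ok = True
for n in range(2, N + 1):
    J += diff[n]
    if abs(J - M2 * n) >= 33 * math.sqrt(n) / math.log(n):
        ok = False
print(ok)  # True: the bound of Lemma 3.4 holds on this range
```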

Theorem 2.6

For any fixed integer \(m\geqslant 1\) and for any \(x\geqslant \textrm{e}\), we have

$$\begin{aligned} \sum _{n\leqslant x}\Omega (n)=x\log \log x+M'x+x\sum _{j=1}^m\frac{a_j}{\log ^j x}+O^*\left( \mathcal {E}_\Omega (x,m)\right) , \end{aligned}$$
(2.8)

where

$$\begin{aligned} \mathcal {E}_\Omega (x,m)=\mathcal {E}_\omega (x,m)+\frac{33\sqrt{x}}{\log x}. \end{aligned}$$

Corollary 2.7

For \(x\geqslant \textrm{e}^{14167}\approxeq 4.466\times 10^{6152}\), we have

$$\begin{aligned} \sum _{n\leqslant x}\Omega (n)=x\log \log x+M'x-\left( 1-\gamma \right) \frac{x}{\log x}+O^*\left( \frac{6x}{\log ^2x}\right) , \end{aligned}$$
(2.9)

and consequently, \(\frac{1}{x}\sum _{n\leqslant x}\Omega (n)-\log \log x<M'\).

2.2 Conditional Results

As we observe in Corollary 2.5, approximation (2.5), even with its initial parameter \(m=1\), gives explicit bounds for \(\sum _{n\leqslant x}\omega (n)\) only for large values of x. The reason is the use of approximation (2.4) with the remainder term R(x), which produces the term \(x\,\textrm{e}^{-\frac{\sqrt{2}}{6}\sqrt{\log x}}\) in \(\mathcal {E}_\omega (x,m)\). This term comes essentially from the classical zero-free regions for the Riemann zeta function \(\zeta (s)\). The situation improves when we use approximations for \(\pi (x)\) under the assumption of the Riemann hypothesis (RH), which asserts that \(\Re (s)>\frac{1}{2}\) is a zero-free region for \(\zeta (s)\), and indeed the best possible one. Accordingly, it is known [13, Corollary 1] that if the Riemann hypothesis is true, then

$$\begin{aligned} \pi (x)=\textrm{li}(x)+O^*\left( \frac{1}{8\pi }\sqrt{x}\log x\right) \qquad (x\geqslant 2657). \end{aligned}$$

By computation, we observe that one may drop the coefficient \(\frac{1}{8\pi }\) and obtain an easy-to-use bound valid in the global range \(x\geqslant 2\), as follows:

$$\begin{aligned} \pi (x)=\textrm{li}(x)+O^*\left( \widehat{R}(x)\right) ,\qquad \widehat{R}(x)=\sqrt{x}\log x. \end{aligned}$$
(2.10)

Note that the above approximations are close to optimal: on the one hand, von Koch [16] showed that the Riemann hypothesis is equivalent to \(\pi (x)=\textrm{li}(x)+O(\sqrt{x}\log x)\), and on the other hand, Littlewood [11] proved that, letting \(b(x)=\frac{\log \log \log x}{\log x}\), there are positive constants \(c_1\) and \(c_2\) such that \(\pi (x)>\textrm{li}(x)+c_1\sqrt{x}\,b(x)\) for arbitrarily large values of x, and also \(\pi (x)<\textrm{li}(x)-c_2\sqrt{x}\,b(x)\) for arbitrarily large values of x. Using the conditional approximation (2.10), we obtain the following analogs of Theorems 2.4 and 2.6, and Corollaries 2.5 and 2.7.

Theorem 2.8

Assume that the Riemann hypothesis is true. For any fixed integer \(m\geqslant 1\) and for any \(x\geqslant \textrm{e}\), we have

$$\begin{aligned} \sum _{n\leqslant x}\omega (n)=x\log \log x+Mx+x\sum _{j=1}^m\frac{a_j}{\log ^j x}+O^*\left( \widehat{\mathcal {E}}_\omega (x,m)\right) , \end{aligned}$$
(2.11)

and

$$\begin{aligned} \sum _{n\leqslant x}\Omega (n)=x\log \log x+M'x+x\sum _{j=1}^m\frac{a_j}{\log ^j x}+O^*\left( \widehat{\mathcal {E}}_\Omega (x,m)\right) , \end{aligned}$$
(2.12)

where

$$\begin{aligned} \widehat{\mathcal {E}}_\omega (x,m)= & {} \left( \frac{3}{2}\right) ^{m+1}m!\,\frac{x}{\log ^{m+1} x}+4x^{\frac{2}{3}}\log x+9x^{\frac{2}{3}}\\{} & {} +\left( \left( \frac{3}{2}\right) ^{m+1}+1\right) \,\textrm{e}m!\,\frac{x^{\frac{2}{3}}}{\log x}+15\sqrt{x}\log x, \end{aligned}$$

and \(\widehat{\mathcal {E}}_\Omega (x,m)=\widehat{\mathcal {E}}_\omega (x,m)+\frac{33\sqrt{x}}{\log x}\).

Corollary 2.9

Assume that the Riemann hypothesis is true, and let \(x_0=1400387903260\). Then, for \(x\geqslant x_0\), we have

$$\begin{aligned} \sum _{n\leqslant x}\omega (n)=x\log \log x+Mx-\left( 1-\gamma \right) \frac{x}{\log x}+O^*\left( \frac{11x}{\log ^2x}\right) , \end{aligned}$$
(2.13)

and

$$\begin{aligned} \sum _{n\leqslant x}\Omega (n)=x\log \log x+M'x-\left( 1-\gamma \right) \frac{x}{\log x}+O^*\left( \frac{12x}{\log ^2x}\right) , \end{aligned}$$
(2.14)

and consequently, \(\frac{1}{x}\sum _{n\leqslant x}\omega (n)-\log \log x<M\) and \(\frac{1}{x}\sum _{n\leqslant x}\Omega (n)-\log \log x<M'\).

Remark 2.10

According to the partial computations we could run, it seems that the inequality \(\mathcal {A}_0(n)<M\) holds for \(n\geqslant 16\); however, it fails for \(n=15\). Also, as mentioned above, computations suggest that the inequality \(\mathcal {A}_1(n)<M'\) holds for any integer \(n\geqslant 2\). A computational challenge is to verify these inequalities up to \(x_0\), which would yield global conditional bounds under RH. More generally, we ask about finding bounds for the difference \(\mathcal {A}_s(n)\) for any fixed real \(s>0\). A strategy to attack this problem is to make explicit the argument used in [9] to approximate the average difference \(\mathcal {J}_s(n):=\sum _{k\leqslant n}\left( \Omega _s(k)-\omega (k)\right) \), for which it is proved that

$$\begin{aligned} 2^s\frac{\sqrt{n}}{\log n}\ll nM'_s-\mathcal {J}_s(n)\ll (2+\varepsilon )^s\frac{\sqrt{n}}{\log n}, \end{aligned}$$

holds for each pair of fixed real numbers \(s>0\) and \(\varepsilon >0\), and for n sufficiently large.

Remark 2.11

The Meissel–Mertens constant M [4, pp. 94–98] is determined by

$$\begin{aligned} M=\gamma +\sum _{p}\left( \log \Big (1-p^{-1}\Big )+p^{-1}\right) , \end{aligned}$$

where \(\gamma \) is the Euler–Mascheroni constant [4, pp. 24–40]. Also, see the impressive survey [10] for more information about \(\gamma \). Among several properties of the constants M and \(M'\), we have the following rapidly converging series:

$$\begin{aligned} M=\gamma +\sum _{k=2}^\infty \frac{\mu (k)\log \zeta (k)}{k}, \quad \text {and}\quad M'=\gamma +\sum _{k=2}^\infty \frac{\varphi (k)\log \zeta (k)}{k}, \end{aligned}$$

where \(\mu \) is the Möbius function and \(\varphi \) is the Euler totient function. Computations based on the above series representations yield

$$\begin{aligned} M&\approxeq 0.26149721284764278375542683860869585905156664826120,\\ M'&\approxeq 1.03465388189743791161979429846463825467030798434439. \end{aligned}$$

We have used these values in our numerical verifications of the results of the present paper. All computations have been carried out in Maple.
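Independently of the Maple computations mentioned above, the two series can be evaluated with a short script. In the sketch below (our own helper functions), \(\zeta (k)\) for integers \(k\geqslant 2\) is computed by a truncated sum with a first-order Euler–Maclaurin tail correction, which reproduces the stated values of M and \(M'\) to roughly ten decimal places:

```python
import math

def mobius(k):
    """Moebius function mu(k) by trial factorization."""
    result, d = 1, 2
    while d * d <= k:
        if k % d == 0:
            k //= d
            if k % d == 0:
                return 0  # square factor
            result = -result
        d += 1
    if k > 1:
        result = -result
    return result

def phi(k):
    """Euler totient function."""
    result, n, d = k, k, 2
    while d * d <= n:
        if n % d == 0:
            while n % d == 0:
                n //= d
            result -= result // d
        d += 1
    if n > 1:
        result -= result // n
    return result

def zeta(k, N=20000):
    """zeta(k) for integer k >= 2: partial sum plus Euler-Maclaurin tail."""
    s = sum(n ** -k for n in range(1, N + 1))
    return s + N ** (1 - k) / (k - 1) - 0.5 * N ** -k + k * N ** (-k - 1) / 12

gamma = 0.5772156649015329  # Euler-Mascheroni constant
M = gamma + sum(mobius(k) * math.log(zeta(k)) / k for k in range(2, 60))
Mp = gamma + sum(phi(k) * math.log(zeta(k)) / k for k in range(2, 60))
print(M, Mp)
```

Both series have terms of size roughly \(2^{-k}\), so truncating at \(k=60\) is far below double precision.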

3 Proof of Unconditional Approximations

Proof of Theorem 2.1

Considering the left-hand side of (1.4), we observe that the inequalities

$$\begin{aligned} \mathcal {A}_0(n)>M-\frac{1.133}{\log n}>\alpha _0 \end{aligned}$$

hold when \(n>\textrm{e}^{1.133/(M-\alpha _0)}\approxeq 102841.56\). Thus, we obtain the left-hand side of (2.1) for any integer \(n\geqslant 102842\). By computation, it also holds for \(2\leqslant n\leqslant 102841\), with equality only for \(n=32\). Also, considering the right-hand side of (1.4), we observe that the inequalities

$$\begin{aligned} \mathcal {A}_0(n)<M+\frac{1}{2\log ^2 n}<\beta _0 \end{aligned}$$

hold when \(n>\textrm{e}^{1/\sqrt{2(\beta _0-M)}}\approxeq 2.48\). This completes the proof. \(\square \)

Proof of Theorem 2.2

Since \(\textrm{e}^{1.175/(M'-\alpha _1)}\approxeq 8.23\), for any integer \(n\geqslant 9\), we have \(n>\textrm{e}^{1.175/(M'-\alpha _1)}\), or equivalently \(M'-1.175/\log n>\alpha _1\). Using this inequality and the left-hand side of (1.5), we deduce that \(\mathcal {A}_1(n)>\alpha _1\) holds for \(n\geqslant 24\). By computation, it also holds for \(2\leqslant n\leqslant 23\), with equality only for \(n=7\). This completes the proof. \(\square \)

The proofs of Theorems 2.4 and 2.6 and their corollaries are based on a series of lemmas. As in [12], we start by using Dirichlet’s hyperbola method [14, Theorem 3.1] to get the following result.

Lemma 3.1

For any x and y satisfying \(1\leqslant y\leqslant x\), we have

$$\begin{aligned} \sum _{n\leqslant x}\omega (n) =\sum _{p\leqslant y}\left[ \frac{x}{p}\right] +\sum _{n\leqslant \frac{x}{y}}\pi \left( \frac{x}{n}\right) -\left[ \frac{x}{y}\right] \pi (y). \end{aligned}$$
(3.1)

Proof

Let \(\textbf{1}(n)=1\) be the unit arithmetic function, and let \(\varpi (n)\) be the characteristic function of the primes; that is, \(\varpi (n)=1\) when n is prime, and \(\varpi (n)=0\) otherwise. We consider the Dirichlet convolution of these two functions

$$\begin{aligned} \textbf{1}*\varpi (n)=\varpi *\textbf{1}(n) =\sum _{d|n}\varpi (d)\,\textbf{1}\left( \frac{n}{d}\right) =\sum _{d|n}\varpi (d)=\sum _{p|n}1=\omega (n). \end{aligned}$$

Note that \([x]=\sum _{n\leqslant x}\textbf{1}(n)\) and \(\pi (x)=\sum _{n\leqslant x}\varpi (n)\). Thus, using Dirichlet’s hyperbola method, for any y satisfying \(1\leqslant y\leqslant x\), we deduce that

$$\begin{aligned} \sum _{n\leqslant x}\omega (n) =\sum _{n\leqslant x}\textbf{1}*\varpi (n) =\sum _{n\leqslant y}\left[ \frac{x}{n}\right] \varpi (n)+\sum _{n\leqslant \frac{x}{y}} \pi \left( \frac{x}{n}\right) -\left[ \frac{x}{y}\right] \pi (y). \end{aligned}$$

This gives (3.1). \(\square \)
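Identity (3.1) is exact, so it can be tested directly. The following sketch (our own check, not part of the proof) compares both sides for a sample x and several choices of the real parameter y:

```python
def check_hyperbola(x, y):
    """Return True iff identity (3.1) holds for this x and y (1 <= y <= x)."""
    X = int(x)
    omega = [0] * (X + 1)
    primes = []
    for p in range(2, X + 1):
        if omega[p] == 0:  # p is prime
            primes.append(p)
            for q in range(p, X + 1, p):
                omega[q] += 1
    pi = lambda t: sum(1 for p in primes if p <= t)
    lhs = sum(omega[n] for n in range(1, X + 1))
    rhs = (sum(int(x / p) for p in primes if p <= y)
           + sum(pi(x / n) for n in range(1, int(x / y) + 1))
           - int(x / y) * pi(y))
    return lhs == rhs

print(all(check_hyperbola(1000, y) for y in (5, 31.6, 100, 1000)))  # True
```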

Lemma 3.2

For any x and y satisfying \(1.2<y\leqslant x\), we have

$$\begin{aligned} \sum _{p\leqslant y}\left[ \frac{x}{p}\right] =x\log \log y+Mx+O^*\left( h_1(x,y)\right) , \end{aligned}$$
(3.2)

where

$$\begin{aligned} h_1(x,y)=x\,\textrm{e}^{-\frac{1}{3}\sqrt{\log y}}\left( 6\sqrt{\log y}+19\right) +y. \end{aligned}$$

Proof

We have

$$\begin{aligned} \sum _{p\leqslant y}\left[ \frac{x}{p}\right] =\sum _{p\leqslant y}\left( \frac{x}{p}-\left\{ \frac{x}{p}\right\} \right) =x\sum _{p\leqslant y}\frac{1}{p}+O^*(y). \end{aligned}$$

The Stieltjes integral and integration by parts give

$$\begin{aligned} \sum _{p\leqslant y}\frac{1}{p}&=\int _{2^-}^y\frac{\textrm{d}\pi (t)}{t} =\frac{\pi (y)}{y}+\int _2^y\frac{\textrm{li}(t)}{t^2}\,\textrm{d}t+\int _2^y\frac{\pi (t)-\textrm{li}(t)}{t^2}\,\textrm{d}t\\&=\frac{\textrm{li}(y)}{y}+O^*\left( \frac{R(y)}{y}\right) +\int _2^y\frac{\textrm{li}(t)}{t^2}\,\textrm{d}t+\int _2^y\frac{\pi (t)-\textrm{li}(t)}{t^2}\,\textrm{d}t. \end{aligned}$$

The last integral is dominated by \(\int _2^\infty \frac{R(t)}{t^2}\,\textrm{d}t\), and hence it converges as \(y\rightarrow \infty \). Thus, we have

$$\begin{aligned} \int _2^y\frac{\pi (t)-\textrm{li}(t)}{t^2}\,\textrm{d}t=\int _2^\infty \frac{\pi (t)-\textrm{li}(t)}{t^2}\,\textrm{d}t+O^*\left( \int _y^\infty \frac{R(t)}{t^2}\,\textrm{d}t\right) . \end{aligned}$$

Note that

$$\begin{aligned} \int _y^\infty \frac{R(t)}{t^2}\,\textrm{d}t=\textrm{e}^{-\frac{1}{3}\sqrt{\log y}}\left( 6\sqrt{\log y}+18\right) . \end{aligned}$$

Also, integration by parts implies

$$\begin{aligned} \int _2^y\frac{\textrm{li}(t)}{t^2}\,\textrm{d}t=-\frac{\textrm{li}(t)}{t}\Big |_2^y+\int _2^y\frac{\textrm{d}t}{t\log t}=\log \log y-\frac{\textrm{li}(y)}{y}+\frac{\textrm{li}(2)}{2}-\log \log 2. \end{aligned}$$

Combining the above approximations, we deduce that

$$\begin{aligned} \sum _{p\leqslant y}\frac{1}{p}=\log \log y+C+O^*\left( \textrm{e}^{-\frac{1}{3}\sqrt{\log y}}\left( 6\sqrt{\log y}+19\right) \right) , \end{aligned}$$

where

$$\begin{aligned} C=\int _2^\infty \frac{\pi (t)-\textrm{li}(t)}{t^2}\,\textrm{d}t+\frac{\textrm{li}(2)}{2}-\log \log 2. \end{aligned}$$

Mertens’ approximation concerning the sum of the reciprocals of the primes [14, Theorem 1.10] asserts that \(\sum _{p\leqslant y}\frac{1}{p}-\log \log y\rightarrow M\) as \(y\rightarrow \infty \). This implies that \(C=M\), which concludes the proof. Meanwhile, let us mention that the equality \(C=M\) also implies that

$$\begin{aligned} \int _2^\infty \frac{\pi (t)-\textrm{li}(t)}{t^2}\,\textrm{d}t =M+\log \log 2-\frac{\textrm{li}(2)}{2}\approxeq -0.62759759779276794. \end{aligned}$$

This equality is an additional byproduct of the completed proof. \(\square \)
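The numerical value above is easy to reproduce: writing \(\textrm{li}(2)=\textrm{Ei}(\log 2)\) and using the classical series \(\textrm{Ei}(u)=\gamma +\log u+\sum _{k\geqslant 1}\frac{u^k}{k\cdot k!}\) for \(u>0\), a few lines suffice (this sketch is ours, with the value of M taken from Remark 2.11):

```python
import math

def li(x, terms=60):
    """li(x) = Ei(log x) via Ei(u) = gamma + log(u) + sum_{k>=1} u^k/(k * k!)."""
    gamma = 0.5772156649015329
    u = math.log(x)
    s, term = 0.0, 1.0
    for k in range(1, terms):
        term *= u / k  # term = u^k / k!
        s += term / k
    return gamma + math.log(u) + s

M = 0.26149721284764278
value = M + math.log(math.log(2)) - li(2) / 2
print(value)  # ~ -0.6275975977927679
```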

Lemma 3.3

Let x and y satisfy \(x\geqslant \textrm{e}\) and \(1.2<x^\delta \leqslant y\leqslant x^\Delta <x\) for some fixed \(\delta , \Delta \in (0,1)\). Then, we have

$$\begin{aligned} \sum _{n\leqslant \frac{x}{y}}\pi \left( \frac{x}{n}\right)= & {} \left[ \frac{x}{y}\right] \textrm{li}(y)+x(\log \log x-\log \log y)\nonumber \\{} & {} +x\sum _{j=1}^m\frac{a_j}{\log ^j x}+O^*\left( h_2(x,y)\right) , \end{aligned}$$
(3.3)

where

$$\begin{aligned} h_2(x,y)= & {} \frac{m!}{\delta ^{m+1}}\,\frac{x}{\log ^{m+1} x}\\{} & {} +\left( 1+\frac{1}{\delta ^{m+1}}\right) \textrm{e}m!\,\frac{x^\Delta }{\log x}+x\,\textrm{e}^{-\frac{1}{3}\sqrt{\log y}}\left( 1+\log \frac{x}{y}\right) . \end{aligned}$$

Proof

For \(n\leqslant \frac{x}{y}\), we have \(\frac{x}{n}\geqslant y\geqslant x^\delta >1.2\). Thus, we may use the approximation (2.4) to get

$$\begin{aligned} \sum _{n\leqslant \frac{x}{y}}\pi \left( \frac{x}{n}\right) =\sum _{n\leqslant \frac{x}{y}}\textrm{li}\left( \frac{x}{n}\right) +O^*\left( \sum _{n\leqslant \frac{x}{y}}R\left( \frac{x}{n}\right) \right) . \end{aligned}$$

Since \(\frac{\textrm{d}}{\textrm{d}t}\textrm{li}\left( \frac{x}{t}\right) =-\frac{x}{t^2(\log x-\log t)}\), the Stieltjes integral and integration by parts gives

$$\begin{aligned} \sum _{n\leqslant \frac{x}{y}}\textrm{li}\left( \frac{x}{n}\right) =\int _{1^-}^\frac{x}{y}\textrm{li}\left( \frac{x}{t}\right) \textrm{d}[t] =\left[ \frac{x}{y}\right] \textrm{li}(y)+x\int _1^\frac{x}{y}\frac{[t]}{t^2(\log x-\log t)}\,\textrm{d}t. \end{aligned}$$

We write \([t]=t-\{t\}\) to get

$$\begin{aligned} \sum _{n\leqslant \frac{x}{y}}\textrm{li}\left( \frac{x}{n}\right) =\left[ \frac{x}{y}\right] \textrm{li}(y)+x(\log \log x-\log \log y)-\mathcal {E}(x,y), \end{aligned}$$
(3.4)

with the remainder \(\mathcal {E}(x,y)\) given by

$$\begin{aligned} \mathcal {E}(x,y)=x\int _1^\frac{x}{y}\frac{\{t\}}{t^2(\log x-\log t)}\,\textrm{d}t. \end{aligned}$$

Letting \(g_x(t)=(1-\frac{\log t}{\log x})^{-1}\), we have

$$\begin{aligned} \mathcal {E}(x,y) =\frac{x}{\log x}\int _1^\frac{x}{y}\frac{\{t\}}{t^2}\,g_x(t)\,\textrm{d}t =\mathcal {E}_1(x,y)-\mathcal {E}_2(x,y), \end{aligned}$$

with

$$\begin{aligned} \mathcal {E}_1(x,y)= & {} \frac{x}{\log x}\int _1^\infty \frac{\{t\}}{t^2}\,g_x(t)\,\textrm{d}t,\\ \mathcal {E}_2(x,y)= & {} \frac{x}{\log x}\int _\frac{x}{y}^\infty \frac{\{t\}}{t^2}\,g_x(t)\,\textrm{d}t. \end{aligned}$$

Since \(y\geqslant x^\delta \), we have \(1\leqslant t\leqslant \frac{x}{y}\leqslant x^{1-\delta }\), and consequently, \(0\leqslant \frac{\log t}{\log x}\leqslant 1-\delta <1\). We use Taylor’s formula with remainder [1, Theorem 5.19] for the function \(u\mapsto (1-u)^{-1}\), which asserts that if \(0\leqslant u\leqslant 1-\delta \) for some fixed \(\delta \in (0,1)\), as in our case, then for any given integer \(m\geqslant 1\)

$$\begin{aligned} (1-u)^{-1}=\sum _{r=0}^{m-1} u^r+O^*\left( \frac{1}{\delta ^{m+1}}\,u^{m}\right) . \end{aligned}$$
(3.5)
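For this particular function, the remainder in (3.5) can also be seen directly as the geometric tail \(\sum _{r\geqslant m}u^r=\frac{u^m}{1-u}\leqslant \frac{u^m}{\delta }\), which is at most \(\frac{u^m}{\delta ^{m+1}}\) since \(\delta <1\). A quick numerical sanity check of (3.5), for one sample choice of the parameters:

```python
delta, m = 0.3, 4
ok = True
for i in range(1001):
    u = (1 - delta) * i / 1000  # sample the range 0 <= u <= 1 - delta
    remainder = abs(1 / (1 - u) - sum(u ** r for r in range(m)))
    if remainder > u ** m / delta ** (m + 1) + 1e-15:
        ok = False
print(ok)  # True
```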

Taking \(u=\frac{\log t}{\log x}\) in (3.5), we get

$$\begin{aligned} g_x(t)=\sum _{r=0}^{m-1}\left( \frac{\log t}{\log x}\right) ^r+O^*\left( \frac{1}{\delta ^{m+1}}\left( \frac{\log t}{\log x}\right) ^{m}\right) . \end{aligned}$$
(3.6)

Thus

$$\begin{aligned} \mathcal {E}_1(x,y) =\frac{x}{\log x}\int _1^\infty \frac{\{t\}}{t^2}\sum _{r=0}^{m-1}\left( \frac{\log t}{\log x}\right) ^r\,\textrm{d}t+h_\delta (x), \end{aligned}$$

where

$$\begin{aligned} |h_\delta (x)|&\leqslant \frac{x}{\log x}\int _1^\infty \frac{\{t\}}{t^2} \frac{1}{\delta ^{m+1}}\left( \frac{\log t}{\log x}\right) ^{m}\,\textrm{d}t\\&\leqslant \frac{1}{\delta ^{m+1}}\,\frac{x}{\log ^{m+1} x}\int _1^\infty \frac{\log ^m t}{t^2}\,\textrm{d}t =\frac{m!}{\delta ^{m+1}}\,\frac{x}{\log ^{m+1} x}. \end{aligned}$$

Also, we have

$$\begin{aligned}{} & {} \frac{x}{\log x}\int _1^\infty \frac{\{t\}}{t^2}\sum _{r=0}^{m-1}\left( \frac{\log t}{\log x}\right) ^r\,\textrm{d}t\\{} & {} \quad =\sum _{j=1}^m\frac{x}{\log ^j x}\int _1^\infty \frac{\{t\}}{t^2}\log ^{j-1} t\,\textrm{d}t =-x\sum _{j=1}^m\frac{a_j}{\log ^j x}. \end{aligned}$$

Hence, the following approximation holds for any fixed integer \(m\geqslant 1\), with the coefficients \(a_j\) given by (1.3):

$$\begin{aligned} \mathcal {E}_1(x,y)=-x\sum _{j=1}^m\frac{a_j}{\log ^j x}+O^*\left( \frac{m!}{\delta ^{m+1}}\,\frac{x}{\log ^{m+1} x}\right) . \end{aligned}$$
(3.7)

To deal with \(\mathcal {E}_2(x,y)\), we note that induction on \(n\geqslant 0\) gives the following anti-derivative formula, with the coefficients \(P(n,j)={n\atopwithdelims ()j}j!\):

$$\begin{aligned} \int \frac{\log ^n t}{t^2}\,\textrm{d}t=-\frac{1}{t}\sum _{j=0}^n P(n,j)\log ^{n-j}t. \end{aligned}$$
(3.8)
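Formula (3.8) can be confirmed numerically against quadrature; the sketch below (our own check) compares the anti-derivative with a composite Simpson approximation of \(\int _a^b\frac{\log ^n t}{t^2}\,\textrm{d}t\):

```python
import math

def P(n, j):
    """P(n, j) = binomial(n, j) * j! as in (3.8)."""
    return math.comb(n, j) * math.factorial(j)

def antider(n, t):
    """Right-hand side of (3.8), an anti-derivative of log^n(t)/t^2."""
    return -sum(P(n, j) * math.log(t) ** (n - j) for j in range(n + 1)) / t

def simpson(f, a, b, steps=20000):
    """Composite Simpson rule; steps must be even."""
    h = (b - a) / steps
    s = f(a) + f(b)
    for i in range(1, steps):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

a, b = 2.0, 50.0
max_rel_err = 0.0
for n in (0, 1, 3, 5):
    exact = antider(n, b) - antider(n, a)
    approx = simpson(lambda t: math.log(t) ** n / t ** 2, a, b)
    max_rel_err = max(max_rel_err, abs(exact - approx) / abs(exact))
print(max_rel_err)  # tiny
```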

Since \(y\leqslant x^\Delta \), we get \(\frac{x}{y}\geqslant x^{1-\Delta }\). Thus, for any integer \(n\geqslant 0\), we have

$$\begin{aligned} \int _\frac{x}{y}^\infty \frac{\{t\}}{t^2}\,\log ^n t\,\textrm{d}t \leqslant \int _{x^{1-\Delta }}^\infty \frac{\{t\}}{t^2}\,\log ^n t\,\textrm{d}t <\int _{x^{1-\Delta }}^\infty \frac{\log ^n t}{t^2}\,\textrm{d}t. \end{aligned}$$

Using (3.8), and assuming that \(x\geqslant \textrm{e}\), we get

$$\begin{aligned} \int _{x^{1-\Delta }}^\infty \frac{\log ^n t}{t^2}\,\textrm{d}t&=\frac{\log ^n x}{x^{1-\Delta }}\sum _{j=0}^n P(n,j)(1-\Delta )^{n-j}\frac{1}{\log ^j x}\\&<\frac{\log ^n x}{x^{1-\Delta }}\sum _{j=0}^n P(n,j) =\frac{\log ^n x}{x^{1-\Delta }}\sum _{j=0}^n\frac{n!}{j!}<\textrm{e}n!\,\frac{\log ^n x}{x^{1-\Delta }}. \end{aligned}$$

Thus, for any integer \(n\geqslant 0\), we obtain

$$\begin{aligned} \mathcal {I}_n(x,y):=\int _\frac{x}{y}^\infty \frac{\{t\}}{t^2}\,\log ^n t\,\textrm{d}t<\textrm{e}n!\,\frac{\log ^n x}{x^{1-\Delta }}. \end{aligned}$$
(3.9)

Applying (3.6), we get

$$\begin{aligned} \frac{\log x}{x}\,\mathcal {E}_2(x,y)&=\int _\frac{x}{y}^\infty \frac{\{t\}}{t^2}\,g_x(t)\,\textrm{d}t\\&=\sum _{r=0}^{m-1}\frac{1}{\log ^r x}\,\mathcal {I}_r(x,y)+O^*\left( \frac{1}{\delta ^{m+1}\log ^m x}\,\mathcal {I}_m(x,y)\right) . \end{aligned}$$

Hence, using (3.9), we deduce that

$$\begin{aligned} \mathcal {E}_2(x,y)<\left( \frac{\textrm{e}m!}{\delta ^{m+1}}+\textrm{e}\sum _{r=0}^{m-1}r!\right) \frac{x^\Delta }{\log x}. \end{aligned}$$

Since \(\sum _{r=0}^{m-1}r!\leqslant m!\), we obtain

$$\begin{aligned} \mathcal {E}_2(x,y)=O^*\left( \left( 1+\frac{1}{\delta ^{m+1}}\right) \textrm{e}m!\,\frac{x^\Delta }{\log x}\right) . \end{aligned}$$
(3.10)

Combining (3.4) with approximations (3.7) and (3.10), we obtain

$$\begin{aligned} \sum _{n\leqslant \frac{x}{y}}\textrm{li}\left( \frac{x}{n}\right)= & {} \left[ \frac{x}{y}\right] \textrm{li}(y)+x(\log \log x-\log \log y)+x\sum _{j=1}^m\frac{a_j}{\log ^j x}\\{} & {} +O^*\left( \frac{m!}{\delta ^{m+1}}\,\frac{x}{\log ^{m+1} x}+\left( 1+\frac{1}{\delta ^{m+1}}\right) \textrm{e}m!\,\frac{x^\Delta }{\log x}\right) . \end{aligned}$$

Now, to conclude the proof of (3.3), we just need to approximate the sum \(\sum _{n\leqslant \frac{x}{y}}R\left( \frac{x}{n}\right) \). Since \(n\leqslant \frac{x}{y}\), we have \(\frac{x}{n}\geqslant y\). Thus

$$\begin{aligned} \sum _{n\leqslant \frac{x}{y}}R\left( \frac{x}{n}\right) \leqslant x\,\textrm{e}^{-\frac{1}{3}\sqrt{\log y}}\sum _{n\leqslant \frac{x}{y}}\frac{1}{n}\leqslant x\,\textrm{e}^{-\frac{1}{3}\sqrt{\log y}}\left( 1+\log \frac{x}{y}\right) . \end{aligned}$$

This completes the proof. \(\square \)

Proof of Theorem 2.4

Considering the hyperbolic identity (3.1) and approximations (3.2) and (3.3), we get

$$\begin{aligned} \sum _{n\leqslant x}\omega (n)=x\log \log x+Mx+x\sum _{j=1}^m\frac{a_j}{\log ^j x}+O^*\left( h_3(x,y)\right) , \end{aligned}$$
(3.11)

where

$$\begin{aligned} h_3(x,y)=h_1(x,y)+h_2(x,y)+\left[ \frac{x}{y}\right] \left( \textrm{li}(y)-\pi (y)\right) . \end{aligned}$$

Using (2.4), we deduce that

$$\begin{aligned} \left[ \frac{x}{y}\right] \left( \textrm{li}(y)-\pi (y)\right)&=\left[ \frac{x}{y}\right] O^*\left( R(y)\right) \\&=O^*\left( x\,\frac{R(y)}{y}\right) =O^*\left( x\,\textrm{e}^{-\frac{1}{3}\sqrt{\log y}}\right) . \end{aligned}$$

Thus, (3.11) holds with \(h_3(x,y)=h_1(x,y)+h_2(x,y)+x\,\textrm{e}^{-\frac{1}{3}\sqrt{\log y}}\), or with

$$\begin{aligned} h_3(x,y)= & {} \frac{m!}{\delta ^{m+1}}\,\frac{x}{\log ^{m+1} x} +\left( 1+\frac{1}{\delta ^{m+1}}\right) \textrm{e}m!\,\frac{x^\Delta }{\log x}\\{} & {} +x\,\textrm{e}^{-\frac{1}{3}\sqrt{\log y}}\left( \log \frac{x}{y}+6\sqrt{\log y}+21\right) +y. \end{aligned}$$

Now, we take \(\delta =\Delta =\frac{1}{2}\), and hence \(y=\sqrt{x}\). Note that the assumption \(x\geqslant \textrm{e}\) guarantees \(x^\delta =\sqrt{x}>1.2\). Thus, we obtain (2.5), and the proof is complete. \(\square \)

Proof of Corollary 2.5

We use (2.5) with \(m=1\). Letting

$$\begin{aligned} h(z)=z^4\textrm{e}^{-\frac{\sqrt{2}}{6}z}\left( \frac{z^2}{2}+3\sqrt{2}z+21\right) +z^2\textrm{e}^{-\frac{z^2}{2}}\left( z^2+5\textrm{e}\right) , \end{aligned}$$

we have

$$\begin{aligned} h(\sqrt{\log x})=\frac{\log ^2 x}{x}\left( \mathcal {E}_\omega (x,1)-\frac{4x}{\log ^2x}\right) . \end{aligned}$$

By computation, we observe that h(z) is decreasing for \(z>23.97\), and that \(h(119.02511)<1<h(119.02510)\). When \(x\geqslant \textrm{e}^{14167}\), we have \(\sqrt{\log x}\geqslant 119.02511\), and consequently, \(h(\sqrt{\log x})<1\). Also, we note that \(\left( 1-\gamma \right) \frac{x}{\log x}>\frac{5x}{\log ^2x}\) provided \(x>\textrm{e}^{5/(1-\gamma )}\), which holds for the values of x considered here. Hence, we conclude the proof. \(\square \)
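The behaviour of h claimed above can be probed numerically. The following sketch (ours) checks monotonicity on a sample grid and brackets the crossing point \(h(z)=1\) near \(z\approxeq 119.025\); note that the second term of h underflows harmlessly to zero for large z:

```python
import math

def h(z):
    """The auxiliary function from the proof of Corollary 2.5."""
    t1 = z ** 4 * math.exp(-math.sqrt(2) / 6 * z) * (z * z / 2 + 3 * math.sqrt(2) * z + 21)
    t2 = z * z * math.exp(-z * z / 2) * (z * z + 5 * math.e)
    return t1 + t2

zs = list(range(25, 301))
decreasing = all(h(a) > h(b) for a, b in zip(zs, zs[1:]))
print(decreasing)           # True: h decreases on this grid
print(h(118) > 1 > h(120))  # True: the crossing lies between z = 118 and z = 120
```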

Using the following key result, Theorem 2.4 and Corollary 2.5 imply Theorem 2.6 and Corollary 2.7, respectively.

Lemma 3.4

For any \(x\geqslant 2\), we have

$$\begin{aligned} \mathcal {J}(x):=\sum _{n\leqslant x}\left( \Omega (n)-\omega (n)\right) =M''x+O^*\left( \frac{33\sqrt{x}}{\log x}\right) . \end{aligned}$$
(3.12)

Proof

Let \(\kappa (x)=\frac{25\sqrt{[x]}}{\log [x]}\). Using the double sided inequality (2.7), we deduce that

$$\begin{aligned} \mathcal {J}(x)&=\sum _{k=1}^{[x]}\left( \Omega (k)-\omega (k)\right) \\&=M''[x]+O^*\left( \kappa (x)\right) =M''x+O^*\left( \kappa (x)+M''\right) . \end{aligned}$$

By computation, we observe that \(\kappa (x)+M''<\frac{33\sqrt{x}}{\log x}\) for \(x\geqslant 2\). \(\square \)

Proof of Corollary 2.7

Approximations (2.6) and (3.12) imply

$$\begin{aligned} \sum _{n\leqslant x}\Omega (n)=x\log \log x+M'x-\left( 1-\gamma \right) \frac{x}{\log x}+O^*\left( \frac{5x}{\log ^2x}+\frac{33\sqrt{x}}{\log x}\right) . \end{aligned}$$

We note that

$$\begin{aligned} \frac{33\sqrt{x}}{\log x}<\frac{x}{\log ^2 x},\qquad (x\geqslant 155652). \end{aligned}$$
(3.13)

This completes the proof. \(\square \)
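The threshold in (3.13) is sharp: the inequality \(\frac{33\sqrt{x}}{\log x}<\frac{x}{\log ^2 x}\) is equivalent to \(33\log x<\sqrt{x}\), which, among integers \(x\geqslant 2\), first holds at \(x=155652\). A direct check (ours):

```python
import math

def lhs(x):
    return 33 * math.sqrt(x) / math.log(x)

def rhs(x):
    return x / math.log(x) ** 2

print(lhs(155651) >= rhs(155651))  # True: (3.13) fails just below the threshold
print(lhs(155652) < rhs(155652))   # True: (3.13) holds from the threshold on
```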

4 Proof of Conditional Approximations

To prove the conditional results under the assumption of the Riemann hypothesis, we reconstruct Lemmas 3.2 and 3.3, replacing R(x) by \(\widehat{R}(x)\).

Lemma 4.1

Assume that the Riemann hypothesis is true. Then, for any x and y satisfying \(2\leqslant y\leqslant x\), we have

$$\begin{aligned} \sum _{p\leqslant y}\left[ \frac{x}{p}\right] =x\log \log y+Mx+O^*\left( \frac{x}{\sqrt{y}}\,(3\log y+4)+y\right) . \end{aligned}$$
(4.1)

Proof

Note that

$$\begin{aligned} \int _y^\infty \frac{\widehat{R}(t)}{t^2}\,\textrm{d}t=\frac{2\log y+4}{\sqrt{y}}. \end{aligned}$$

Thus, following a similar argument as in the proof of Lemma 3.2 and using (2.10), we deduce that, assuming RH, for any \(y\geqslant 2\), we have

$$\begin{aligned} \sum _{p\leqslant y}\frac{1}{p}=\log \log y+M+O^*\left( \frac{3\log y+4}{\sqrt{y}}\right) . \end{aligned}$$

This completes the proof. \(\square \)

Lemma 4.2

Assume that the Riemann hypothesis is true. Let x and y satisfy \(x\geqslant \textrm{e}\) and \(1.2<x^\delta \leqslant y\leqslant x^\Delta <x\) for some fixed \(\delta , \Delta \in (0,1)\). Then, we have

$$\begin{aligned} \sum _{n\leqslant \frac{x}{y}}\pi \left( \frac{x}{n}\right)= & {} \left[ \frac{x}{y}\right] \textrm{li}(y)+x(\log \log x-\log \log y)\nonumber \\{} & {} +x\sum _{j=1}^m\frac{a_j}{\log ^j x}+O^*\left( \widehat{h}_2(x,y)\right) , \end{aligned}$$
(4.2)

where

$$\begin{aligned} \widehat{h}_2(x,y)= & {} \frac{m!}{\delta ^{m+1}}\,\frac{x}{\log ^{m+1} x}+\left( 1+\frac{1}{\delta ^{m+1}}\right) \textrm{e}m!\,\frac{x^\Delta }{\log x}\\{} & {} +\frac{2x}{\sqrt{y}}\left( \log y+2\right) +15\sqrt{x}\log x. \end{aligned}$$

Proof

Following a similar argument as in the proof of Lemma 3.3, we need to approximate the sum \(\sum _{n\leqslant \frac{x}{y}}\widehat{R}\left( \frac{x}{n}\right) \), for which we have

$$\begin{aligned} \sum _{n\leqslant \frac{x}{y}}\widehat{R}\left( \frac{x}{n}\right) =\sqrt{x}\log x\,\sum _{n\leqslant \frac{x}{y}}\frac{1}{\sqrt{n}}-\sqrt{x}\sum _{n\leqslant \frac{x}{y}}\frac{\log n}{\sqrt{n}}. \end{aligned}$$
(4.3)

Letting \(f_0(t)=\frac{1}{\sqrt{t}}\) and \(f_1(t)=\frac{\log t}{\sqrt{t}}\), we observe that \(f_0(t)\) is decreasing for \(t\geqslant 1\), and, with \(t_0=\textrm{e}^2\approx 7.39\), the function \(f_1(t)\) is increasing for \(1\leqslant t\leqslant t_0\) and decreasing for \(t\geqslant t_0\). Moreover,

$$\begin{aligned} \max _{t\geqslant 1}f_1(t)=f_1(\textrm{e}^2)=\frac{2}{\textrm{e}}<1. \end{aligned}$$
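The maximum is immediate from calculus, since \(f_1'(t)=\frac{2-\log t}{2t^{3/2}}\) vanishes exactly at \(t=\textrm{e}^2\); a quick numerical confirmation, for illustration only:

```python
import math

def f1(t):
    return math.log(t) / math.sqrt(t)

t0 = math.e ** 2
assert abs(f1(t0) - 2 / math.e) < 1e-12     # f1(e^2) = 2/e < 1
# a dense sample of values never exceeds the claimed maximum 2/e
assert max(f1(1 + 0.01 * k) for k in range(1, 10 ** 5)) <= 2 / math.e + 1e-12
```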

Thus, comparing a sum with the integral of a monotonic function [14, Theorem 0.4] implies that there exists \(\theta _0\in [0,1]\) such that

$$\begin{aligned} \sum _{n\leqslant \frac{x}{y}}\frac{1}{\sqrt{n}}=1+\int _1^{[\frac{x}{y}]}f_0(t)\,\textrm{d}t+\theta _0\left( f_0\left( \left[ \frac{x}{y}\right] \right) -1\right) . \end{aligned}$$

Since \(\max _{t\geqslant 1}f_0(t)=f_0(1)=1\), we get

$$\begin{aligned} \sum _{n\leqslant \frac{x}{y}}\frac{1}{\sqrt{n}} =\int _1^{[\frac{x}{y}]}f_0(t)\,\textrm{d}t+O^*(3) =\int _1^{\frac{x}{y}}f_0(t)\,\textrm{d}t+O^*(4). \end{aligned}$$
(4.4)
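For illustration, the step above can be checked numerically: the partial sums of \(1/\sqrt{n}\) stay within the stated distance of \(\int_1^N t^{-1/2}\,\textrm{d}t=2\sqrt{N}-2\) (a sketch, not part of the proof):

```python
import math

def gap(N):
    """|sum_{n<=N} 1/sqrt(n) - int_1^N dt/sqrt(t)|"""
    s = sum(1 / math.sqrt(n) for n in range(1, N + 1))
    return abs(s - (2 * math.sqrt(N) - 2))

# the gap stays bounded (it tends to 2 + zeta(1/2), about 0.54),
# comfortably within the O*(4) of (4.4)
for N in (10, 10 ** 3, 10 ** 5):
    assert gap(N) <= 4
```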

Also, we write

$$\begin{aligned} \sum _{n\leqslant \frac{x}{y}}\frac{\log n}{\sqrt{n}} =\sum _{1<n\leqslant 7}\frac{\log n}{\sqrt{n}}+\frac{\log 8}{\sqrt{8}}+\sum _{8<n\leqslant \frac{x}{y}}\frac{\log n}{\sqrt{n}}. \end{aligned}$$

There exist \(\theta _1,\theta _2\in [0,1]\) such that

$$\begin{aligned} \sum _{1<n\leqslant 7}\frac{\log n}{\sqrt{n}}=\int _1^7 f_1(t)\,\textrm{d}t+\theta _1 f_1(7)=\int _1^7 f_1(t)\,\textrm{d}t+O^*\left( \frac{2}{\textrm{e}}\right) , \end{aligned}$$

and

$$\begin{aligned} \sum _{8<n\leqslant \frac{x}{y}}\frac{\log n}{\sqrt{n}}&=\int _8^{[\frac{x}{y}]} f_1(t)\,\textrm{d}t+\theta _2\left( f_1\left( \left[ \frac{x}{y}\right] \right) -f_1(8)\right) \\&=\int _8^{[\frac{x}{y}]} f_1(t)\,\textrm{d}t+O^*\left( \frac{4}{\textrm{e}}\right) . \end{aligned}$$

Thus

$$\begin{aligned} \sum _{n\leqslant \frac{x}{y}}\frac{\log n}{\sqrt{n}} =\int _1^{[\frac{x}{y}]} f_1(t)\,\textrm{d}t+O^*\left( \eta \right) =\int _1^{\frac{x}{y}} f_1(t)\,\textrm{d}t+O^*\left( \eta +\frac{2}{\textrm{e}}\right) , \end{aligned}$$

where \(\eta =\frac{6}{\textrm{e}}+f_1(8)+\int _7^8 f_1(t)\,\textrm{d}t\approx 3.68\). Since \(\eta +\frac{2}{\textrm{e}}<5\), we get

$$\begin{aligned} \sum _{n\leqslant \frac{x}{y}}\frac{\log n}{\sqrt{n}} =\int _1^{\frac{x}{y}} f_1(t)\,\textrm{d}t+O^*(5). \end{aligned}$$
(4.5)
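Approximation (4.5) can likewise be checked numerically against \(\int_1^N \frac{\log t}{\sqrt{t}}\,\textrm{d}t=2\sqrt{N}(\log N-2)+4\) (a side computation, for illustration only):

```python
import math

def gap(N):
    """|sum_{n<=N} log(n)/sqrt(n) - int_1^N (log t)/sqrt(t) dt|"""
    s = sum(math.log(n) / math.sqrt(n) for n in range(1, N + 1))
    integral = 2 * math.sqrt(N) * (math.log(N) - 2) + 4
    return abs(s - integral)

# consistent with the O*(5) bound in (4.5)
for N in (10, 10 ** 3, 10 ** 5):
    assert gap(N) <= 5
```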

By computation, we have

$$\begin{aligned}&\sqrt{x}\log x\int _1^{\frac{x}{y}}f_0(t)\,\textrm{d}t-\sqrt{x}\int _1^{\frac{x}{y}} f_1(t)\,\textrm{d}t\\&\quad =\frac{2x}{\sqrt{y}}\left( \log y+2\right) -2\sqrt{x}\left( \log x+2\right) . \end{aligned}$$
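This closed form follows from the antiderivatives \(2\sqrt{t}\) for \(f_0\) and \(2\sqrt{t}(\log t-2)\) for \(f_1\); a numerical cross-check of the identity (for illustration only):

```python
import math

def lhs(x, y):
    X = x / y
    I0 = 2 * math.sqrt(X) - 2                        # int_1^{x/y} dt/sqrt(t)
    I1 = 2 * math.sqrt(X) * (math.log(X) - 2) + 4    # int_1^{x/y} log(t)/sqrt(t) dt
    return math.sqrt(x) * math.log(x) * I0 - math.sqrt(x) * I1

def rhs(x, y):
    return 2 * x / math.sqrt(y) * (math.log(y) + 2) \
        - 2 * math.sqrt(x) * (math.log(x) + 2)

# the two sides agree up to floating-point rounding
for x, y in ((10 ** 6, 10 ** 4), (10 ** 9, 10 ** 6)):
    assert abs(lhs(x, y) - rhs(x, y)) < 1e-6 * abs(rhs(x, y))
```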

Thus, considering the identity (4.3) and the approximations (4.4) and (4.5), we deduce that

$$\begin{aligned} \sum _{n\leqslant \frac{x}{y}}\widehat{R}\left( \frac{x}{n}\right) =\frac{2x}{\sqrt{y}}\left( \log y+2\right) +O^*\left( 15\sqrt{x}\log x\right) . \end{aligned}$$

This completes the proof. \(\square \)

Proof of Theorem 2.8

Considering the hyperbolic identity (3.1) and approximations (4.1) and (4.2), we get

$$\begin{aligned} \sum _{n\leqslant x}\omega (n)=x\log \log x+Mx+x\sum _{j=1}^m\frac{a_j}{\log ^j x}+O^*\left( \widehat{h}_3(x,y)\right) , \end{aligned}$$
(4.6)

where

$$\begin{aligned} \widehat{h}_3(x,y)=\widehat{h}_1(x,y)+\widehat{h}_2(x,y)+\left[ \frac{x}{y}\right] \left( \textrm{li}(y)-\pi (y)\right) , \end{aligned}$$

with \(\widehat{h}_1(x,y)=\frac{x}{\sqrt{y}}\,(3\log y+4)+y\). Using (2.10), we deduce that

$$\begin{aligned} \left[ \frac{x}{y}\right] \left( \textrm{li}(y)-\pi (y)\right)&=\left[ \frac{x}{y}\right] O^*\left( \widehat{R}(y)\right) \\&=O^*\left( \frac{x}{y}\,\widehat{R}(y)\right) =O^*\left( \frac{x\log y}{\sqrt{y}}\right) . \end{aligned}$$

Thus, (4.6) holds with \(\widehat{h}_3(x,y)=\widehat{h}_1(x,y)+\widehat{h}_2(x,y)+\frac{x\log y}{\sqrt{y}}\), or with

$$\begin{aligned} \widehat{h}_3(x,y)&=\frac{m!}{\delta ^{m+1}}\,\frac{x}{\log ^{m+1} x}+\left( 1+\frac{1}{\delta ^{m+1}}\right) \textrm{e}m!\,\frac{x^\Delta }{\log x}\\&\quad +\frac{6x\log y}{\sqrt{y}}+\frac{8x}{\sqrt{y}}+15\sqrt{x}\log x+y. \end{aligned}$$

Now, we take \(\delta =\Delta =\frac{2}{3}\), and hence \(y=x^{\frac{2}{3}}\). Note that the assumption \(x\geqslant \textrm{e}\) guarantees \(x^\delta >1.2\). Thus, we obtain (2.11), and consequently we get (2.12) using (3.12). The proof is complete. \(\square \)

Proof of Corollary 2.9

We use (2.11) with \(m=1\). By computation, we observe that \(\widehat{\mathcal {E}}_\omega (x,1)<\frac{11x}{\log ^2x}\) for \(x\geqslant x_0\). Thus, we get (2.13), and consequently (2.14), using the approximation (3.12) and the inequality (3.13). Also, we note that

$$\begin{aligned} \left( 1-\gamma \right) \frac{x}{\log x}>\frac{12x}{\log ^2x}>\frac{11x}{\log ^2x}, \end{aligned}$$

provided that \(x>\textrm{e}^{12/(1-\gamma )}\). Since \(x_0>\textrm{e}^{12/(1-\gamma )}\), we conclude the proof. \(\square \)
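The threshold here is explicit, since the inequality \((1-\gamma)\frac{x}{\log x}>\frac{12x}{\log^2 x}\) is equivalent to \(\log x>\frac{12}{1-\gamma}\); a quick check with the known numerical value of the Euler–Mascheroni constant (a side computation, not part of the proof):

```python
import math

GAMMA = 0.5772156649015329            # Euler-Mascheroni constant (known value)
x_star = math.exp(12 / (1 - GAMMA))   # threshold e^{12/(1-gamma)}, about 2.1e12

def holds(x):
    return (1 - GAMMA) * x / math.log(x) > 12 * x / math.log(x) ** 2

assert holds(1.01 * x_star)           # holds just above the threshold
assert not holds(0.99 * x_star)       # fails just below it
```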