1 Introduction

The notion of statistical convergence for sequences of real numbers was introduced independently by Steinhaus [17] and Fast [11] in the same year, 1951. Using statistical convergence in approximation theory offers many advantages, since it is stronger than the classical notion of convergence, and various applications and generalizations have been studied by several authors with the help of this convergence [1, 7, 8, 9, 10]. Moreover, Balcerzak et al. [3] introduced the notion of equi-statistical convergence, which lies between pointwise and uniform statistical convergence.

Firstly, we recall these convergence methods.

Let \( {\mathbb {N}} \) be the set of natural numbers and \(A\subseteq {\mathbb {N}} \). Also let

$$\begin{aligned} A_{n}:=\left\{ k:k\le n\text { and }k\in A\right\} \end{aligned}$$

and let the symbol \(\left| A_{n}\right| \) denote the cardinality of \(A_{n}\). Then the natural density of A is defined by

$$\begin{aligned} \delta (A)=\underset{n}{\lim }\frac{\left| A_{n}\right| }{n}= \underset{n}{\lim }\frac{1}{n}\left| \left\{ k:k\le n\text { and }k\in A\right\} \right| \end{aligned}$$

provided that the limit exists.
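For instance, the set of perfect squares has natural density zero, while the even numbers have density 1/2. The following minimal sketch (in Python; the helper names and cut-off values are ours, chosen only for illustration) estimates the ratio \(\left| A_{n}\right| /n\) for increasing n:

```python
import math

# Estimate the natural density of A ⊆ N via the ratio |A_n| / n for growing n.
# Illustrative sketch only; "is_square" and "is_even" are example sets.

def density_ratio(indicator, n):
    """Return |{k <= n : indicator(k)}| / n."""
    return sum(1 for k in range(1, n + 1) if indicator(k)) / n

is_square = lambda k: math.isqrt(k) ** 2 == k
is_even = lambda k: k % 2 == 0

for n in (10**2, 10**4, 10**6):
    print(n, density_ratio(is_square, n), density_ratio(is_even, n))
# The first ratio tends to 0 (so delta(squares) = 0), the second tends to 1/2.
```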

A given sequence \((x_{n})\) is said to be statistically convergent to \(\ell \) if, for every \(\varepsilon >0,\) the set

$$\begin{aligned} K(\varepsilon ):=\left\{ n\in {\mathbb {N}} :\left| x_{n}-\ell \right| \ge \varepsilon \right\} \end{aligned}$$

has natural density zero [11, 17]. This means that, for every \( \varepsilon >0\), we have

$$\begin{aligned} \delta (K(\varepsilon ))=\underset{n}{\lim }\frac{1}{n}\left| \left\{ k:k\le n \text { and }\left| x_{k}-\ell \right| \ge \varepsilon \right\} \right| =0. \end{aligned}$$

In this case, we write \(st-\underset{n}{\lim }x_{n}=\ell \). Every convergent sequence is statistically convergent to the same limit, but the converse is not true.
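For example, the sequence defined by \(x_{n}=\sqrt{n}\) when n is a perfect square and \(x_{n}=1/n\) otherwise is unbounded, hence not convergent, yet it is statistically convergent to 0, since the exceptional indices form a set of density zero. A small sketch (Python; the function names are ours) that estimates \(\delta (K(\varepsilon ))\) for this sequence:

```python
import math

# x_n = sqrt(n) if n is a perfect square, 1/n otherwise: not convergent in the
# ordinary sense, but statistically convergent to 0. Sketch for illustration.

def x(n):
    return math.sqrt(n) if math.isqrt(n) ** 2 == n else 1.0 / n

def exceptional_density(eps, n):
    """(1/n) * |{k <= n : |x_k - 0| >= eps}|."""
    return sum(1 for k in range(1, n + 1) if abs(x(k)) >= eps) / n

for n in (10**3, 10**5):
    print(n, exceptional_density(0.1, n))   # the ratio tends to 0 as n grows
```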

Let f and \(f_{n}\) belong to C(I), the space of all continuous real-valued functions on a compact subset I of the real numbers, and let \(\left\| f\right\| _{C\left( I\right) }\) denote the usual supremum norm of f in C(I). Throughout the paper, we use the following notation:

$$\begin{aligned} K_{n}(x,\varepsilon ):= & {} \left| \left\{ k\le n :\left| f_{k}(x)-f(x)\right| \ge \varepsilon \right\} \right| ,\ x\in I , \\ D_{n}(\varepsilon ):= & {} \left| \left\{ k\le n :\left\| f_{k}-f\right\| _{C\left( I\right) }\ge \varepsilon \right\} \right| , \end{aligned}$$

where \(\varepsilon >0\) and \(n\in {\mathbb {N}}.\)

Definition 1

[10] If \(st-\underset{n}{\lim }f_{n}(x)=f(x)\) for each \(x\in I\), i.e., if for every \(\varepsilon >0\) and each \(x\in I\) we have \(\underset{n}{\lim }\frac{K_{n}(x,\varepsilon )}{n}=0\), then \((f_{n})\) is said to be statistically pointwise convergent to f on I. In this case, we write \(f_{n}\rightarrow f\) (st) on I.

Definition 2

 [3] If for every \(\varepsilon >0,\)

$$\begin{aligned} \underset{n}{\lim }\frac{K_{n}(x,\varepsilon )}{n}=0,\text { uniformly with respect to }x\text {,} \end{aligned}$$

which means that \(\underset{n}{\lim }\frac{\left\| K_{n}(\cdot ,\varepsilon )\right\| _{C(I)}}{n}=0\) for every \(\varepsilon >0\), then \((f_{n})\) is said to be equi-statistically convergent to f on I. In this case, we write \(f_{n}\twoheadrightarrow f\) (equi) on I.

Definition 3

[10] If \(st-\lim \left\| f_{n}-f\right\| _{C(I)}=0\), or equivalently \(\underset{n}{\lim }\frac{D_{n}(\varepsilon )}{n}=0\) for every \(\varepsilon >0\), then \((f_{n})\) is said to be statistically uniformly convergent to f on I. This limit is denoted by \(f_{n}\rightrightarrows f\) (st) on I.

The next result follows directly from the above definitions.

Lemma 1

[3] \(f_{n}\rightrightarrows f\) on I (in the ordinary sense) implies \(f_{n}\rightrightarrows f\) (st) on I,  which also implies \( f_{n}\twoheadrightarrow f\) (equi) on I. Furthermore, \( f_{n}\twoheadrightarrow f\) (equi) on I implies \(f_{n}\rightarrow f\) (st) on I; and \(f_{n}\rightarrow f\) on I (in the ordinary sense) implies \(f_{n}\rightarrow f\) (st) on I.

Császár and Laczkovich introduced the notion of equal convergence for real functions and developed its theory [4, 6]. Also, Das et al. introduced the notions of \({\mathcal {I}}\)- and \({\mathcal {I}}^{*}\)-equal convergence with the help of ideals, extending equal convergence [5].

Let us recall this definition.

Definition 4

[5] If there is a sequence \(\left( \varepsilon _{n}\right) \) of positive numbers with \(st-\lim \varepsilon _{n}=0\) such that, for any \(x\in I\),

$$\begin{aligned} \underset{n}{\lim }\frac{\psi _{n}(x,\varepsilon _{n})}{n}=0, \end{aligned}$$

where \(\psi _{n}(x,\varepsilon _{n}):=\left| \left\{ k\le n :\left| f_{k}(x)-f(x)\right| \ge \varepsilon _{k}\right\} \right| \), \(x\in I\), then \((f_{n})\) is said to be statistically equal convergent to f on I. In this case, we write \(f_{n}\rightarrow f\) \((equal-st)\) on I.

Now, we introduce the concept of statistical equi-equal convergence of sequences of functions.

Definition 5

If there is a sequence \(\left( \varepsilon _{n}\right) \) of positive numbers with \(st-\lim \varepsilon _{n}=0\) such that

$$\begin{aligned} \underset{n}{\lim }\frac{\psi _{n}(x,\varepsilon _{n})}{n}=0,\text { uniformly with respect to }x, \end{aligned}$$

where \(\psi _{n}(x,\varepsilon _{n}):=\left| \left\{ k\le n :\left| f_{k}(x)-f(x)\right| \ge \varepsilon _{k}\right\} \right| \), \(x\in I\), which means that \(\underset{n}{\lim }\frac{ \left\| \psi _{n}(\cdot ,\varepsilon _{n})\right\| _{C(I)}}{n}=0\), then \((f_{n})\) is said to be statistically equi-equal convergent to f on I. In this case, we write \(f_{n}\twoheadrightarrow f\) \((eq-st)\) on I.

We now give an example showing that statistical equi-equal convergence is stronger than statistical convergence.

Example 1

Let \(I=[0,1]\), let \(g(x)=0\) for each \(x\in I\), and let \((g_{n})\) be the sequence of functions on I given by

$$\begin{aligned} g_{n}\left( x\right) =\left\{ \begin{array}{cc} x^{n}, &{} n\text { is square,} \\ 0, &{} \text {otherwise.} \end{array} \right. \end{aligned}$$
(1)

Take \(\left( \varepsilon _{n}\right) \) defined by \(\varepsilon _{n}=\left\{ \begin{array}{cc} 2n, &{} n\text { is square,} \\ \frac{1}{n}, &{} \text {otherwise.} \end{array} \right. \) Then it is easy to see that \(st-\lim \varepsilon _{n}=0\). Also, for any \(x\in I\), \(\left\{ k\in {\mathbb {N}} :\left| g_{k}(x)-g(x)\right| \ge \varepsilon _{k}\right\} =\varnothing \), so that \(\psi _{n}(x,\varepsilon _{n})=0\) for every n. Therefore, we get \(g_{n}\twoheadrightarrow g\) \((eq-st)\) on I. But \((g_{n})\) is not statistically convergent to the function g on I.
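The emptiness claim above can also be checked numerically on a grid; the sketch below (Python; the cut-off N and the grid are our choices) counts how often \(\left| g_{k}(x)-g(x)\right| \ge \varepsilon _{k}\) occurs and confirms that \(\psi _{n}(x,\varepsilon _{n})=0\):

```python
import math

# Verify on a grid that |g_k(x) - g(x)| < eps_k for every k and x, so the set
# {k : |g_k(x) - g(x)| >= eps_k} is empty and psi_n(x, eps_n) = 0 for all n.

def g(k, x):                                   # g_k from (1)
    return x ** k if math.isqrt(k) ** 2 == k else 0.0

def eps(k):                                    # (eps_k) chosen in Example 1
    return 2.0 * k if math.isqrt(k) ** 2 == k else 1.0 / k

N = 500                                        # indices k = 1, ..., N
grid = [i / 200 for i in range(201)]           # sample points of I = [0, 1]
violations = sum(1 for k in range(1, N + 1) for x in grid
                 if abs(g(k, x) - 0.0) >= eps(k))
print(violations)                              # prints 0
```

Indeed, for square k we have \(\left| g_{k}(x)\right| =x^{k}\le 1<2k=\varepsilon _{k}\), while for non-square k we have \(\left| g_{k}(x)\right| =0<1/k=\varepsilon _{k}\).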

2 A Korovkin-type approximation theorem

For a sequence \(\left( T_{n}\right) \) of positive linear operators on \( C\left( I\right) \), Korovkin [14] gave, in 1961, necessary and sufficient conditions for the uniform convergence of \(T_{n}\left( f\right) \) to a function f by using the test functions \(e_{i}\) defined by \(e_{i}\left( x\right) =x^{i}\), \(\left( i=0,1,2\right) \) (see, for instance, [2]). More recently, general versions of the Korovkin theorem have been studied in which more general notions of convergence are used. Some Korovkin-type theorems in the setting of statistical convergence were given in [1, 7, 8, 13, 15, 16].

In this section we apply the notion of statistical equi-equal convergence of a sequence of functions to prove a Korovkin type approximation theorem.

Let T be a linear operator from \(C\left( I\right) \) into itself. Then, as usual, we say that T is a positive linear operator provided that \(f\ge 0\) implies \(T\left( f\right) \ge 0\). Also, we denote the value of \(T\left( f\right) \) at a point \(x\in I\) by T(f(y); x) or, briefly, T(f; x).

First, we recall the statistical version of the Korovkin-type result:

Theorem 1

 [12] Let \(\left( T_{n}\right) \) be a sequence of positive linear operators acting from \(C\left( I\right) \) into itself. Then, for all \(f\in C\left( I\right) \),

$$\begin{aligned} st-\underset{n}{\lim }\left\| T_{n}\left( f\right) -f\right\| _{C\left( I\right) }=0 \end{aligned}$$

if and only if

$$\begin{aligned} st-\underset{n}{\lim }\left\| T_{n}\left( e_{i}\right) -e_{i}\right\| _{C\left( I\right) }=0, \end{aligned}$$

where \(e_{i}(x)=x^{i}\), \(i=0,1,2.\)

Next, we recall the following Korovkin-type approximation theorem obtained by means of equi-statistical convergence.

Theorem 2

 [13] Let \(\left( T_{n}\right) \) be a sequence of positive linear operators acting from \(C\left( I\right) \) into itself. Then, for all \(f\in C\left( I\right) \),

$$\begin{aligned} T_{n}(f)\twoheadrightarrow f(equi)\text { on }I \end{aligned}$$

if and only if

$$\begin{aligned} T_{n}(e_{i})\twoheadrightarrow e_{i}(equi)\text { on }I, \end{aligned}$$

where \(e_{i}(x)=x^{i}\), \(i=0,1,2.\)

Now we can state our main result.

Theorem 3

Let \(\left( T_{n}\right) \) be a sequence of positive linear operators acting from \(C\left( I\right) \) into itself. Then, for all \(f\in C\left( I\right) \),

$$\begin{aligned} T_{n}(f)\twoheadrightarrow f(eq-st)\text { on }I, \end{aligned}$$
(2)

if and only if

$$\begin{aligned} T_{n}(e_{i})\twoheadrightarrow e_{i}(eq-st)\text { on }I, \end{aligned}$$
(3)

where \(e_{i}(x)=x^{i}\), \(i=0,1,2.\)

Proof

Condition (3) follows immediately from condition (2), since each of the functions 1, x, \(x^{2}\) belongs to \(C\left( I\right) \). We now prove the converse part. Since f is bounded on I, we can write

$$\begin{aligned} \left| f\left( y\right) -f\left( x\right) \right| \le 2\kappa \end{aligned}$$

where \(\kappa :=\left\| f\right\| _{C(I)}\). Also, since f is uniformly continuous on I, for every \(\varepsilon >0\) there exists a number \(\delta :=\delta (\varepsilon )>0\) such that \(\left| f\left( y\right) -f\left( x\right) \right| <\varepsilon \) for all \(x,y\in I\) satisfying \(\left| y-x\right| <\delta \). Hence, putting \(\varphi \left( y\right) =\left( y-x\right) ^{2}\) and noting that \(\left| f\left( y\right) -f\left( x\right) \right| \le 2\kappa \le \frac{2\kappa }{\delta ^{2}}\varphi (y)\) whenever \(\left| y-x\right| \ge \delta \), we get

$$\begin{aligned} \left| f\left( y\right) -f\left( x\right) \right| <\varepsilon + \frac{2\kappa }{\delta ^{2}}\varphi (y)\text {.} \end{aligned}$$
(4)

Since \(T_{n}\) is linear and monotone, we obtain

$$\begin{aligned} \left| T_{n}(f;x)-f(x)\right|= & {} \left| T_{n}(f(y)-f(x);x)+f(x)\left( T_{n}(e_{0};x)-e_{0}(x)\right) \right| \nonumber \\\le & {} T_{n}\left( \left| f(y)-f(x)\right| ;x\right) +\kappa \left| T_{n}(e_{0};x)-e_{0}(x)\right| \end{aligned}$$
(5)

Next, applying \(T_{n}\left( \cdot \,;x\right) \) to both sides of inequality (4) and using \(T_{n}(e_{0};x)\le e_{0}(x)+\left| T_{n}(e_{0};x)-e_{0}(x)\right| \), we get

$$\begin{aligned} T_{n}\left( \left| f(y)-f(x)\right| ;x\right)\le & {} T_{n}\left( \varepsilon +\frac{2\kappa }{\delta ^{2}}\varphi (y);x\right) =\varepsilon T_{n}(e_{0};x)+\frac{2\kappa }{\delta ^{2}}T_{n}(\varphi (y);x) \nonumber \\\le & {} \varepsilon +\varepsilon \left| T_{n}(e_{0};x)-e_{0}(x)\right| +\frac{2\kappa }{\delta ^{2}}T_{n}(\varphi (y);x) \end{aligned}$$
(6)

Now, we estimate the term \(T_{n}(\varphi (y);x)\) in (6):

$$\begin{aligned} T_{n}(\varphi (y);x)= & {} T_{n}(\left( y-x\right) ^{2};x) \nonumber \\= & {} T_{n}(y^{2}-2xy+x^{2};x) \nonumber \\\le & {} \left| T_{n}(e_{2};x)-e_{2}(x)\right| +2\left\| e_{1}\right\| _{C\left( I\right) }\left| T_{n}(e_{1};x)-e_{1}(x)\right| \nonumber \\&+\left\| e_{2}\right\| _{C\left( I\right) }\left| T_{n}(e_{0};x)-e_{0}(x)\right| \end{aligned}$$
(7)

Using (6) and (7) in (5), we get

$$\begin{aligned} \left| T_{n}\left( f;x\right) -f(x)\right|\le & {} \varepsilon +\left( \varepsilon +\kappa +\frac{2\kappa \left\| e_{2}\right\| _{C\left( I\right) }}{\delta ^{2}}\right) \left| T_{n}(e_{0};x)-e_{0}(x)\right| \\&+\frac{4\kappa \left\| e_{1}\right\| _{C\left( I\right) }}{\delta ^{2}}\left| T_{n}(e_{1};x)-e_{1}(x)\right| +\frac{2\kappa }{\delta ^{2}}\left| T_{n}(e_{2};x)-e_{2}(x)\right| \\\le & {} \digamma \left\{ \left| T_{n}(e_{0};x)-e_{0}(x)\right| +\left| T_{n}(e_{1};x)-e_{1}(x)\right| \right. \\&\left. +\left| T_{n}(e_{2};x)-e_{2}(x)\right| \right\} +\varepsilon \end{aligned}$$

where \(\digamma =\varepsilon +\kappa +\frac{2\kappa }{\delta ^{2}}\left( \left\| e_{2}\right\| _{C\left( I\right) }+2\left\| e_{1}\right\| _{C\left( I\right) }+1\right) .\) Since \(\varepsilon \) is arbitrary, we can write

$$\begin{aligned} \left| T_{n}\left( f;x\right) -f(x)\right| \le \digamma \sum \limits _{i=0}^{2}\left| T_{n}(e_{i};x)-e_{i}(x)\right| . \end{aligned}$$
(8)

Since \(T_{n}(e_{i})\twoheadrightarrow e_{i}\) \((eq-st)\) on I for each \(i=0,1,2\), there are sequences \(\left( \varepsilon _{n,i}\right) \) of positive numbers with \(st-\lim \varepsilon _{n,i}=0\) such that

$$\begin{aligned} \underset{n}{\lim }\frac{\psi _{n,i}(x,\varepsilon _{n,i})}{n}=0,\text { uniformly with respect to }x, \end{aligned}$$
(9)

where \(\psi _{n,i}(x,\varepsilon _{n,i}):=\left| \left\{ k\le n :\left| T_{k}(e_{i};x)-e_{i}(x)\right| \ge \varepsilon _{k,i}\right\} \right| \), \(i=0,1,2\). Then, for any \(x\in I\), set

$$\begin{aligned} \psi _{n}(x,\varepsilon _{n}):=\left| \left\{ k\le n :\left| T_{k}(f;x)-f(x)\right| \ge 3\digamma \varepsilon _{k}\right\} \right| , \end{aligned}$$

where \(\varepsilon _{n}=\max \left\{ \varepsilon _{n,0},\varepsilon _{n,1},\varepsilon _{n,2}\right\} \). It follows from (8) that \(\psi _{n}(x,\varepsilon _{n})\le \sum \nolimits _{i=0}^{2} \psi _{n,i}(x,\varepsilon _{n,i})\); indeed, if \(\left| T_{k}(f;x)-f(x)\right| \ge 3\digamma \varepsilon _{k}\), then \(\left| T_{k}(e_{i};x)-e_{i}(x)\right| \ge \varepsilon _{k}\ge \varepsilon _{k,i}\) for at least one \(i\in \left\{ 0,1,2\right\} \). Hence

$$\begin{aligned} \frac{\psi _{n}(x,\varepsilon _{n})}{n}\le \sum \limits _{i=0}^{2}\frac{\psi _{n,i}(x,\varepsilon _{n,i})}{n}. \end{aligned}$$

Then, using (9), we get

$$\begin{aligned} T_{n}(f)\twoheadrightarrow f(eq-st)\text { on }I. \end{aligned}$$

This completes the proof of the theorem. \(\square \)

Now, we present an example in support of the result above.

Example 2

Let \(I=[0,1]\) and consider the classical Bernstein polynomials

$$\begin{aligned} B_{n}(f;x)=\sum \limits _{k=0}^{n}f\left( \frac{k}{n}\right) \left( \begin{array}{c} n \\ k \end{array} \right) x^{k}\left( 1-x\right) ^{n-k} \end{aligned}$$

on C[0, 1]. Using these polynomials, we introduce the following positive linear operators on C[0, 1]:

$$\begin{aligned} L_{n}(f;x)=(1+g_{n}(x))B_{n}(f;x),\,\,\,\,\,x\in [0,1]\text { and } f\in C[0,1], \end{aligned}$$
(10)

where \(g_{n}(x)\) is given by (1). Then, observe that

$$\begin{aligned} L_{n}(e_{0};x)= & {} (1+g_{n}(x))e_{0}(x), \\ L_{n}(e_{1};x)= & {} (1+g_{n}(x))e_{1}(x), \\ L_{n}(e_{2};x)= & {} (1+g_{n}(x))\left[ e_{2}(x)+\frac{x(1-x)}{n}\right] . \end{aligned}$$
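These identities follow from the well-known values of the Bernstein polynomials at the test functions, \(B_{n}(e_{0};x)=1\), \(B_{n}(e_{1};x)=x\) and \(B_{n}(e_{2};x)=x^{2}+\frac{x(1-x)}{n}\), and they can also be checked numerically. The sketch below (Python; the sample point and index are our choices, for illustration only) implements \(B_{n}\) and the operators \(L_{n}\) from (10) and compares \(L_{n}(e_{i};x)\) with the closed-form expressions:

```python
import math

def bernstein(f, n, x):
    """Classical Bernstein polynomial B_n(f; x) on [0, 1]."""
    return sum(f(k / n) * math.comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

def g(n, x):                                   # g_n from (1)
    return x ** n if math.isqrt(n) ** 2 == n else 0.0

def L(f, n, x):                                # the operators L_n from (10)
    return (1.0 + g(n, x)) * bernstein(f, n, x)

n, x = 9, 0.3                                  # n = 9 is a square, so g_9(x) = x**9
e0, e1, e2 = (lambda t: 1.0), (lambda t: t), (lambda t: t * t)

print(L(e0, n, x), (1 + g(n, x)) * 1.0)
print(L(e1, n, x), (1 + g(n, x)) * x)
print(L(e2, n, x), (1 + g(n, x)) * (x * x + x * (1 - x) / n))
# Each printed pair agrees up to floating-point rounding.
```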

Since \(g_{n}\twoheadrightarrow g=0\)\((eq-st)\) on I,  we conclude that

$$\begin{aligned} L_{n}\left( e_{i}\right) \twoheadrightarrow e_{i}(eq-st)\text { on }I, \text { for each }i=0,1,2. \end{aligned}$$

So, by Theorem 3, we immediately see that

$$\begin{aligned} L_{n}\left( f\right) \twoheadrightarrow f\text { \ }(eq-st)\text { on }I\text { for all }f\in C[0,1]. \end{aligned}$$

However, since \((g_{n})\) is not statistically convergent to the function \(g=0\) on [0, 1], i.e., \(st-\lim \left\| g_{n}-g\right\| _{C[0,1]}=1\ne 0\), Theorem 1 does not work for the operators defined by (10). In a similar manner, since \((g_{n})\) is not uniformly convergent (in the ordinary sense) to the function \(g=0\) on [0, 1], i.e., \(\lim \left\| g_{n}-g\right\| _{C[0,1]}=1\ne 0\), the classical Korovkin theorem does not work either. As a result, this application clearly shows that our Theorem 3 is a non-trivial generalization of the classical and the statistical cases of the Korovkin results introduced in [14] and [12], respectively.

3 Rate of statistical equi-equal convergence

In this section, we study the corresponding rates of statistical equi-equal convergence with the help of the modulus of continuity.

Now, we recall that the modulus of continuity of a function \(f\in C(I)\) is defined by

$$\begin{aligned} w(f,\delta )=\sup _{\left| y-x\right| \le \delta ,\,\,x,y\in I}\left| f(y)-f(x)\right| \,\,\,\,\,\,(\delta >0). \end{aligned}$$
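On a finite grid, this quantity can be approximated by brute force; the short sketch below (Python; the grid resolution and the test function are our choices, for illustration only) estimates \(w(f,\delta )\):

```python
import math

# Approximate the modulus of continuity w(f, delta) of f on [a, b] by taking
# the maximum of |f(y) - f(x)| over grid pairs with |y - x| <= delta.

def modulus_of_continuity(f, a, b, delta, m=400):
    pts = [a + (b - a) * i / m for i in range(m + 1)]
    return max(abs(f(y) - f(x))
               for x in pts for y in pts if abs(y - x) <= delta)

# Example: f(x) = sqrt(x) on [0, 1], where w(f, delta) = sqrt(delta).
print(modulus_of_continuity(math.sqrt, 0.0, 1.0, 0.04))   # close to 0.2
```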

Then we have the following result.

Theorem 4

Let \(\left( T_{n}\right) \) be a sequence of positive linear operators acting from C(I) into itself. Assume that the following conditions hold:

(a):

\(T_{n}(e_{0})\twoheadrightarrow e_{0}(eq-st)\) on I

(b):

\(w(f,\delta _{n})\twoheadrightarrow 0(eq-st)\) on I,  where \( \delta _{n}:=\delta _{n}(x)=\sqrt{T_{n}(\varkappa ^{2};x)}\) with \(\varkappa (y)=(y-x).\)

Then we have, for all \(f\in C(I),\)

$$\begin{aligned} T_{n}(f)\twoheadrightarrow f(eq-st)\text { on }I. \end{aligned}$$

Proof

Let \(f\in C(I)\) and \(x\in I\). Using the linearity and positivity of the operators \(T_{n}\) together with the standard property \(\left| f(y)-f(x)\right| \le \left( 1+\frac{\left| \varkappa (y)\right| }{\delta }\right) w\left( f,\delta \right) \) of the modulus of continuity, we get

$$\begin{aligned} \left| T_{n}(f;x)-f(x)\right|\le & {} T_{n}\left( \left| f(y)-f(x)\right| ;x\right) +\left| f(x)\right| \left| T_{n}(e_{0};x)-e_{0}(x)\right| \\\le & {} T_{n}\left( \left( 1+\frac{\left| \varkappa (y)\right| }{ \delta }\right) w\left( f,\delta \right) ;x\right) +\kappa \left| T_{n}(e_{0};x)-e_{0}(x)\right| \\= & {} w\left( f,\delta \right) T_{n}(e_{0};x)+\frac{w\left( f,\delta \right) }{ \delta }T_{n}\left( \left| \varkappa (y)\right| ;x\right) \\&+\, \kappa \left| T_{n}(e_{0};x)-e_{0}(x)\right| \end{aligned}$$

where \(\kappa :=\left\| f\right\| _{C(I)}\). Applying the Cauchy–Schwarz inequality to the term \(T_{n}\left( \left| \varkappa (y)\right| ;x\right) \), we obtain

$$\begin{aligned} T_{n}\left( \left| \varkappa (y)\right| ;x\right) =T_{n}\left( \left| \varkappa (y)e_{0}\right| ;x\right) \le \sqrt{ T_{n}(\varkappa ^{2};x)}\sqrt{T_{n}(e_{0};x)} \end{aligned}$$

then,

$$\begin{aligned} \left| T_{n}(f;x)-f(x)\right|\le & {} w\left( f,\delta \right) T_{n}(e_{0};x)+\frac{w\left( f,\delta \right) }{\delta }\sqrt{ T_{n}(\varkappa ^{2};x)}\sqrt{T_{n}(e_{0};x)} \\&+\,\kappa \left| T_{n}(e_{0};x)-e_{0}(x)\right| . \end{aligned}$$

If we choose \(\delta :=\delta _{n}(x)=\sqrt{T_{n}(\varkappa ^{2};x)}\), this yields

$$\begin{aligned} \left| T_{n}(f;x)-f(x)\right|\le & {} \kappa \left| T_{n}(e_{0};x)-e_{0}(x)\right| +2w(f,\delta _{n}) \nonumber \\&+\,w(f,\delta _{n})\left| T_{n}(e_{0};x)-e_{0}(x)\right| \nonumber \\&+\,w(f,\delta _{n})\sqrt{\left| T_{n}(e_{0};x)-e_{0}(x)\right| }. \end{aligned}$$
(11)

Since \(T_{n}(e_{0})\twoheadrightarrow e_{0}\) \((eq-st)\) on I, there is a sequence \(\left( \varepsilon _{n,0}\right) \) of positive numbers with \(st-\lim \varepsilon _{n,0}=0\) such that

$$\begin{aligned} \underset{n}{\lim }\frac{\psi _{n,0}(x,\varepsilon _{n,0})}{n}=0,\text { uniformly with respect to }x, \end{aligned}$$

where \(\psi _{n,0}(x,\varepsilon _{n,0}):=\left| \left\{ k\le n :\left| T_{k}(e_{0};x)-e_{0}(x)\right| \ge \varepsilon _{k,0}\right\} \right| .\)

Since \(w(f,\delta _{n})\twoheadrightarrow 0\) \((eq-st)\) on I, there is a sequence \(\left( \varepsilon _{n,1}\right) \) of positive numbers with \(st-\lim \varepsilon _{n,1}=0\) such that

$$\begin{aligned} \underset{n}{\lim }\frac{\psi _{n,1}(x,\varepsilon _{n,1})}{n}=0,\text { uniformly with respect to }x, \end{aligned}$$

where \(\psi _{n,1}(x,\varepsilon _{n,1}):=\left| \left\{ k\le n :w(f,\delta _{k})\ge \varepsilon _{k,1}\right\} \right| .\)

Then, for any \(x\in I\), set

$$\begin{aligned} \psi _{n}(x,\varepsilon _{n}):=\left| \left\{ k\le n :\left| T_{k}(f;x)-f(x)\right| \ge \varepsilon _{k}\right\} \right| , \end{aligned}$$

where \(\varepsilon _{n,3}=\max \left\{ \varepsilon _{n,0},\varepsilon _{n,1}\right\} \) and \(\varepsilon _{n}:=\varepsilon _{n,3}^{2}+\varepsilon _{n,3}^{3/2}+(\kappa +2)\varepsilon _{n,3}\). It follows from (11) that \(\psi _{n}(x,\varepsilon _{n})\le \psi _{n,0}(x,\varepsilon _{n,0})+\psi _{n,1}(x,\varepsilon _{n,1})\) and so

$$\begin{aligned} \frac{\psi _{n}(x,\varepsilon _{n})}{n}\le \frac{\psi _{n,0}(x,\varepsilon _{n,0})}{n}+\frac{\psi _{n,1}(x,\varepsilon _{n,1})}{n}. \end{aligned}$$

Then, using hypotheses (a) and (b), we get

$$\begin{aligned} T_{n}(f)\twoheadrightarrow f(eq-st)\text { on }I. \end{aligned}$$

\(\square \)