1 INTRODUCTION AND PRELIMINARIES

The subject of Korovkin-type theory was initiated by Korovkin in 1960 in his pioneering paper [24], and it has since been widely studied. It is worthwhile to point out that Korovkin-type theory concerns the approximation of continuous functions by means of positive linear operators (see also [1, 24]). Many researchers have contributed to the theory (see, for example, [15, 18, 28, 29]). Moreover, the theory has been extended in various directions, such as relaxing the continuity of the functions or relaxing the notion of convergence. Aiming to improve the classical Korovkin theory, Badea et al. used the space of Bögel-type continuous (briefly, \(B\)-continuous) functions in place of ordinary continuity [3, 4, 6]. From a different perspective, Gadjiev and Orhan [23] used the notion of statistical convergence to prove a Korovkin-type approximation theorem. Afterwards, this convergence and its variants were studied by many authors (see [14, 16, 17, 21, 26, 30, 35]).

First, let us recall the notion of Pringsheim convergence.

As usual, \(\mathbb{N}\) denotes the set of all natural numbers. A double sequence \(x=\{x_{m,n}\}\) is said to be Pringsheim convergent if, for every \(\varepsilon>0,\) there exists \(M=M(\varepsilon)\in\mathbb{N}\) such that \(\left|x_{m,n}-L\right|<\varepsilon\) whenever \(m,n>M.\) Here, \(L\) is called the Pringsheim limit of \(x,\) and this is denoted by \(P-\lim_{m,n}x_{m,n}=L\) (see [27]). If there exists a positive number \(N\) such that \(\left|x_{m,n}\right|\leq N\) for all \((m,n)\in\mathbb{N}^{2}=\mathbb{N}\times\mathbb{N},\) then the double sequence is called bounded. Notice that, unlike a convergent single sequence, a convergent double sequence need not be bounded.
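To make the last remark concrete, here is a small numerical sketch (our own illustration, not taken from the cited sources): the double sequence with \(x_{m,0}=m\) and \(x_{m,n}=1/(m+n)\) for \(n\geq 1\) is Pringsheim convergent to \(0\), yet unbounded along the row \(n=0\).

```python
# Our own illustration: a Pringsheim-convergent double sequence that is
# unbounded, because x_{m,0} = m grows without bound on the row n = 0.

def x(m, n):
    """x_{m,n} = m on the row n = 0, and 1/(m+n) elsewhere."""
    return m if n == 0 else 1.0 / (m + n)

def pringsheim_tail_sup(M, probe=200):
    """sup |x_{m,n} - 0| over M < m, n < probe (a finite probe of the tail)."""
    return max(abs(x(m, n)) for m in range(M + 1, probe)
                            for n in range(M + 1, probe))

# The tail sup equals 1/(2M + 2), which drops below any eps > 0 for large M,
# so P-lim x_{m,n} = 0 although sup_m x_{m,0} = +infinity.
```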

Steinhaus [32] and Fast [22] independently introduced the notion of statistical convergence for sequences of real numbers. There are many variants of statistical convergence in the literature. Recently, Ünver and Orhan introduced statistical convergence with respect to power series methods in [33]. More recently, Yıldız, Demirci and Dirik [34] extended this notion of convergence to double sequences. Before turning to these statistical-type notions of convergence, let us first recall the notions of natural density and statistical convergence.

Let \(K\) be a subset of \(\mathbb{N}_{0}.\) The natural density of \(K,\) denoted by \(\delta\left(K\right)\), is given by

$$\delta\left(K\right):=\lim_{n}\frac{1}{n+1}\#\left\{k\leq n:k\in K\right\},$$

whenever the limit exists, where \(\#\left\{\cdot\right\}\) denotes the cardinality of a set. A sequence \(x=\left\{x_{n}\right\}\) is said to be statistically convergent to \(L\) provided that, for every \(\varepsilon>0,\)

$$\delta\left(\{n\in\mathbb{N}_{0}:\left|x_{n}-L\right|\geq\varepsilon\}\right)=0.$$

This is denoted by \(st-\lim_{n}x_{n}=L.\) It is evident from the definition that every convergent sequence (in the usual sense) is statistically convergent to the same limit, while a statistically convergent sequence need not be convergent.
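As a numerical illustration of the last point (our own sketch), the indicator sequence of the perfect squares is statistically convergent to \(0\), since the squares have natural density zero, although it does not converge in the usual sense:

```python
# Our own illustration: the indicator of the perfect squares is statistically
# convergent to 0, since the squares have natural density zero, although the
# sequence itself keeps returning to 1 and so does not converge.

def natural_density(K, n):
    """Finite approximation of delta(K): #{k <= n : k in K} / (n + 1)."""
    return sum(1 for k in range(n + 1) if k in K) / (n + 1)

N = 10_000
squares = {k * k for k in range(101)}              # all squares up to N
x = [1.0 if k in squares else 0.0 for k in range(N + 1)]

# For any eps in (0, 1], {k : |x_k - 0| >= eps} is exactly the set of squares,
# whose density behaves like 1/sqrt(N) -> 0; hence st-lim x_k = 0.
density_of_exceptions = natural_density(squares, N)
```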

Let us turn our attention to the notion of statistical convergence for double sequences.

If \(E\subset\mathbb{N}_{0}^{2}=\mathbb{N}_{0}\times\mathbb{N}_{0},\) then \(E_{j,k}:=\) \(\left\{\left(m,n\right)\in E:m\leq j,n\leq k\right\}.\) The double natural density of \(E,\) denoted by \(\delta_{2}(E)\), is given by

$$\delta_{2}(E):=P-\underset{j,k}{\lim}\frac{1}{\left(j+1\right)\left(k+1\right)}\#E_{j,k},$$

whenever the limit exists ([25]). Let \(x=\left\{x_{m,n}\right\}\) be a double sequence. It is said to be statistically convergent to \(L\) if, for every \(\varepsilon>0,\) the set

$$E:=E(\varepsilon):=\left\{\left(m,n\right)\in\mathbb{N}_{0}^{2}:\ \left|x_{m,n}-L\right|\geq\varepsilon\right\}$$

has zero double natural density, in which case we write \(st_{2}-\lim_{m,n}x_{m,n}=L\) ([25]).

It follows from the definition that a Pringsheim convergent double sequence is statistically convergent to the same value, while a statistically convergent double sequence need not be Pringsheim convergent. Notice also that a statistically convergent double sequence need not be bounded.

Now we recall the statistical convergence with respect to power series methods. First, let us turn our attention to the power series method.

In what follows, \(\left\{p_{m,n}\right\}\) will be a given non-negative real double sequence with \(p_{0,0}>0\) such that the corresponding power series

$$p\left(t,s\right):=\underset{m,n=0}{\overset{\infty}{\sum}}p_{m,n}t^{m}s^{n}$$

has radius of convergence \(R\in\left(0,\infty\right]\) (so that \(p\left(t,s\right)>0\) for all \(t,s\in\left(0,R\right)\)). If the limit

$$\underset{t,s\rightarrow R^{-}}{\lim}\frac{1}{p\left(t,s\right)}\underset{m,n=0}{\overset{\infty}{\sum}}p_{m,n}t^{m}s^{n}x_{m,n}=L$$

exists, then \(x\) is said to be convergent in the sense of the power series method, and this is denoted by \(P_{p}^{2}-\lim x_{m,n}=L\) ([7]). It is worth pointing out that the method is regular if and only if

$$\underset{t,s\rightarrow R^{-}}{\lim}\frac{\underset{m=0}{\overset{\infty}{\sum}}p_{m,\nu}t^{m}}{p\left(t,s\right)}=0\quad\text{and}\quad\underset{t,s\rightarrow R^{-}}{\lim}\frac{\underset{n=0}{\overset{\infty}{\sum}}p_{\mu,n}s^{n}}{p\left(t,s\right)}=0\quad\text{for any }\mu,\nu,$$
(1)

hold (see, e.g. [7]).

Remark 1. Notice first that in the case \(R=1,\) the power series method coincides with the Abel summability method when \(p_{m,n}=1\) and with the logarithmic summability method when \(p_{m,n}=\frac{1}{\left(m+1\right)\left(n+1\right)}.\) In the case \(R=\infty,\) the power series method coincides with the Borel summability method when \(p_{m,n}=\frac{1}{m!n!}.\)
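For instance (a numerical sketch under our own choice of sequence), with \(p_{m,n}=1\) the method is the double Abel method, \(p(t,s)=1/((1-t)(1-s))\); the sequence \(x_{m,n}=(-1)^{m+n}\) has no Pringsheim limit, yet its Abel means equal \(\frac{1-t}{1+t}\cdot\frac{1-s}{1+s}\) and tend to \(0\) as \(t,s\rightarrow 1^{-}\).

```python
# A numerical sketch (our own illustration): with p_{m,n} = 1 and R = 1, the
# power series method is the double Abel method, p(t,s) = 1/((1-t)(1-s)).
# The sequence x_{m,n} = (-1)^{m+n} has no Pringsheim limit, yet its Abel
# means equal ((1-t)/(1+t)) * ((1-s)/(1+s)) and tend to 0 as t, s -> 1-.

def abel_mean(x, t, s, M=1500):
    """Truncated Abel mean (1-t)(1-s) * sum_{m,n<=M} t^m s^n x(m,n)."""
    total = 0.0
    for m in range(M + 1):
        row = sum((s ** n) * x(m, n) for n in range(M + 1))
        total += (t ** m) * row
    return (1 - t) * (1 - s) * total

alternating = lambda m, n: (-1) ** (m + n)
values = [abel_mean(alternating, t, t) for t in (0.9, 0.99)]
# values shrink toward 0 as t -> 1-, so the Abel (power series) limit is 0.
```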

In this article, the power series method is always assumed to be regular.

Before giving the next definition, it is worthwhile to point out that Ünver and Orhan [33] have recently introduced the \(P_{p}\)-density of \(E\subset\mathbb{N}_{0}\) and the notion of \(P_{p}\)-statistical convergence for single sequences; they also showed that statistical convergence and statistical convergence in the sense of power series methods are incompatible. In view of their work, Yıldız, Demirci and Dirik [34] have more recently introduced the \(P_{p}^{2}\)-density of \(F\subset\mathbb{N}_{0}^{2}=\mathbb{N}_{0}\times\mathbb{N}_{0}\) and \(P_{p}^{2}\)-statistical convergence for double sequences:

Definition 1 [34]. Let \(F\subset\mathbb{N}_{0}^{2}.\) If the limit

$$\delta_{P_{p}}^{2}\left(F\right):=\underset{t,s\rightarrow R^{-}}{\lim}\frac{1}{p\left(t,s\right)}\underset{\left(m,n\right)\in F}{\sum}p_{m,n}t^{m}s^{n}$$

exists, then \(\delta_{P_{p}}^{2}\left(F\right)\) is called the \(P_{p}^{2}\)-density of \(F.\) Notice that it follows at once from the definitions of a power series method and of \(P_{p}^{2}\)-density that \(0\leq\delta_{P_{p}}^{2}\left(F\right)\leq 1\) whenever it exists.

Definition 2 [34]. Let \(x=\left\{x_{m,n}\right\}\) be a double sequence. Then, \(x\) is said to be statistically convergent to \(L\) in the sense of power series method (\(P_{p}^{2}\)-statistically convergent) if for any \(\varepsilon>0\)

$$\underset{t,s\rightarrow R^{-}}{\lim}\frac{1}{p\left(t,s\right)}\underset{\left(m,n\right)\in F_{\varepsilon}}{\sum}p_{m,n}t^{m}s^{n}=0,$$

where \(F_{\varepsilon}=\left\{\left(m,n\right)\in\mathbb{N}_{0}^{2}:\left|x_{m,n}-L\right|\geq\varepsilon\right\},\) that is \(\delta_{P_{p}}^{2}\left(F_{\varepsilon}\right)=0\) for any \(\varepsilon>0.\) This is denoted by \(st_{P_{p}}^{2}\)-\(\lim x_{m,n}=L.\)

Example 1. Let \(\left\{p_{m,n}\right\}\) be defined as follows

$$p_{m,n}=\begin{cases}1,\quad m=2k+1\text{ and }n=2l+1,\\ 0,\quad m=2k\text{ or }n=2l,\end{cases}\quad k,l=1,2,...,$$

and take the sequence \(\left\{s_{m,n}\right\}\) defined by

$$s_{m,n}=\begin{cases}1,\quad m=2k+1\text{ and }n=2l+1,\\ mn,\quad m=2k\text{ or }n=2l,\end{cases}\quad k,l=1,2,....$$
(2)

Since, for any \(\varepsilon>0,\)

$$\underset{t,s\rightarrow R^{-}}{\lim}\frac{1}{p\left(t,s\right)}\underset{\left\{\left(m,n\right):\left|s_{m,n}-1\right|\geq\varepsilon\right\}}{\sum}p_{m,n}t^{m}s^{n}=0,$$

the sequence \(\left\{s_{m,n}\right\}\) is \(P_{p}^{2}\)-statistically convergent to \(1.\) However, \(\left\{s_{m,n}\right\}\) is not statistically convergent to \(1.\)
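The computation in Example 1 can be checked numerically (a sketch under our reading of the cases in the definition, namely \(p_{m,n}=1\) when both indices are odd and \(0\) otherwise): since \(p_{m,n}\) charges only pairs where \(s_{m,n}=1\), the exceptional set receives weight \(0\) exactly.

```python
# Our own numerical reading of Example 1: p_{m,n} = 1 when m and n are both
# odd and 0 otherwise, so p charges only the pairs where s_{m,n} = 1 and the
# exceptional set {|s_{m,n} - 1| >= eps} gets P_p^2-density 0 exactly.

def p(m, n):
    return 1.0 if (m % 2 == 1 and n % 2 == 1) else 0.0

def s(m, n):
    return 1.0 if (m % 2 == 1 and n % 2 == 1) else float(m * n)

def weighted_exception_mass(t, u, eps, M=400):
    """(1/p(t,u)) * sum over {|s_{m,n}-1| >= eps} of p_{m,n} t^m u^n, truncated at M."""
    num = sum(p(m, n) * t**m * u**n
              for m in range(M) for n in range(M)
              if abs(s(m, n) - 1.0) >= eps)
    den = sum(p(m, n) * t**m * u**n for m in range(M) for n in range(M))
    return num / den

# The numerator vanishes identically, so s is P_p^2-statistically convergent
# to 1, although s_{m,n} = mn blows up on even rows and columns, so s is not
# statistically convergent.
```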

We now pause to collect some basic notions and notations including \(B\)-continuity.

Bögel [10–12] first introduced the definition of \(B\)-continuity as follows.

Let \(X_{1}\) and \(X_{2}\) be compact subsets of the real numbers, and let \(D=X_{1}\times X_{2}.\) A function \(g:D\rightarrow\mathbb{R}\) is called \(B\)-continuous at a point \(\left(x,y\right)\in D\) provided that, for every \(\varepsilon>0,\) there exists a positive number \(\delta=\delta(\varepsilon)\) such that \(\left|\Delta_{x,y}\left[g\left(u,v\right)\right]\right|<\varepsilon\) for any \(\left(u,v\right)\in D\) with \(\left|u-x\right|<\delta\) and \(\left|v-y\right|<\delta.\) The symbol \(\Delta_{x,y}\left[g\left(u,v\right)\right]\) stands for the mixed difference of \(g\) defined by

$$\Delta_{x,y}\left[g\left(u,v\right)\right]=g(u,v)-g(u,y)-g(x,v)+g(x,y).$$

As usual, the symbol \(C_{b}(D)\) stands for the space of all \(B\)-continuous functions on \(D,\) while \(C(D)\) and \(B(D)\) denote the space of all continuous (in the usual sense) functions and the space of all bounded functions on \(D,\) respectively. The supremum norm on \(B(D)\) is given by

$$\left|\left|g\right|\right|:=\sup_{(x,y)\in D}\left|g\left(x,y\right)\right|\quad\text{for}\quad g\in B(D).$$

Then, it can be easily seen that \(C(D)\subset C_{b}(D).\) Moreover, for any function of the form \(g(u,v)=g_{1}(u)+g_{2}(v)\) we have \(\Delta_{x,y}\left[g\left(u,v\right)\right]=0\) for all \((x,y),(u,v)\in D;\) hence such functions are \(B\)-continuous even when they are unbounded.
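A quick numerical sketch (ours, for illustration) of the last observation: the mixed difference annihilates every function of the separable form \(g_{1}(u)+g_{2}(v)\), while a genuinely mixed function such as \(g(u,v)=uv\) gives \(\Delta_{x,y}\left[g(u,v)\right]=(u-x)(v-y)\neq 0\).

```python
import math

# Our own sketch: the mixed difference annihilates separable functions
# g(u,v) = g1(u) + g2(v), even wildly unbounded ones, while a genuinely
# mixed function such as g(u,v) = uv gives (u - x)(v - y).

def mixed_difference(g, x, y, u, v):
    """Delta_{x,y}[g(u,v)] = g(u,v) - g(u,y) - g(x,v) + g(x,y)."""
    return g(u, v) - g(u, y) - g(x, v) + g(x, y)

separable = lambda u, v: math.tan(u) + 1.0 / (v + 1e-9)   # unbounded near v = 0
coupled = lambda u, v: u * v

d_sep = mixed_difference(separable, 0.1, 0.2, 0.4, 0.3)   # 0 up to rounding
d_cpl = mixed_difference(coupled, 0.1, 0.2, 0.4, 0.3)     # (0.4-0.1)*(0.3-0.2)
```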

We recall the following lemma for \(B\)-continuous functions, which was first proved by Badea et al. [4].

Lemma 1 ([4]). If \(g\in C_{b}(D),\) then, for every \(\varepsilon>0,\) there are positive numbers \(A_{1}(\varepsilon)=A_{1}(\varepsilon,g)\) and \(A_{2}(\varepsilon)=A_{2}(\varepsilon,g)\) such that the inequality

$$\left|\Delta_{x,y}\left[g\left(u,v\right)\right]\right|\leq\frac{\varepsilon}{3}+A_{1}(\varepsilon)(u-x)^{2}+A_{2}(\varepsilon)(v-y)^{2}$$

holds for all \((x,y),\) \((u,v)\in D.\)

The paper is organized as follows. In the next section, we study Korovkin-type approximation for double sequences of positive linear operators defined on the space of all real-valued \(B\)-continuous functions, using statistical convergence in the sense of power series methods instead of Pringsheim convergence. Then, we present an application that satisfies the hypotheses of our new approximation theorem but not those of the previously studied one. In Section 3, we compute the rate of convergence in our approximation theorem. Finally, in Section 4, we give a conclusion concerning periodic functions.

2 A KOROVKIN-TYPE APPROXIMATION THEOREM

Let \(T\) be a linear operator from \(C_{b}\left(D\right)\) into \(B\left(D\right).\) As usual, we say that \(T\) is a positive linear operator if \(g\geq 0\) implies \(T\left(g\right)\geq 0.\) The value of \(T\left(g\right)\) at a point \(\left(x,y\right)\in D\) is denoted by \(T(g(u,v);x,y)\) or, briefly, \(T(g;x,y).\)

Here and throughout the paper, for fixed \((x,y)\in D\) and \(g\in C_{b}(D),\) the function \(G_{x,y}\) is defined as follows:

$$G_{x,y}(u,v)=g(u,y)+g(x,v)-g(u,v)\quad\text{for}\quad(u,v)\in D.$$
(3)

It is easy to verify that the \(B\)-continuity of \(g\) implies the \(B\)-continuity of \(G_{x,y}\) for every fixed \((x,y)\in D,\) since

$$\Delta_{x,y}\left[G_{x,y}(u,v)\right]=-\Delta_{x,y}\left[g(u,v)\right]$$

for all \((x,y),\) \((u,v)\in D\). The following test functions are also used throughout the paper

$$e_{0}(x,y)=1,\quad e_{1}(x,y)=x,\quad e_{2}(x,y)=y\quad\text{and}\quad e_{3}(x,y)=x^{2}+y^{2}.$$

Badea et al. [4] gave the following Korovkin-type approximation theorem via \(B\)-continuity.

Theorem 1 [4]. Let \(\{T_{m,n}\}\) be a sequence of positive linear operators acting from \(C_{b}\left(D\right)\) into \(B\left(D\right).\) Assume that the following conditions hold:

\((i)\) \(T_{m,n}(e_{0};x,y)=1\) for all \((x,y)\in D\) and \((m,n)\in\mathbb{N}^{2},\)

\((ii)\) \(T_{m,n}(e_{1};x,y)=e_{1}(x,y)+u_{m,n}(x,y),\)

\((iii)\) \(T_{m,n}(e_{2};x,y)=e_{2}(x,y)+v_{m,n}(x,y),\)

\((iv)\) \(T_{m,n}(e_{3};x,y)=e_{3}(x,y)+w_{m,n}(x,y),\)

where \(\{u_{m,n}(x,y)\},\) \(\{v_{m,n}(x,y)\}\) and \(\{w_{m,n}(x,y)\}\) converge to zero uniformly on \(D\) as \(m,n\rightarrow\infty\) (in any manner). Then, the sequence \(\{T_{m,n}\left(G_{x,y};x,y\right)\}\) converges uniformly to \(g(x,y)\) with respect to \((x,y)\in D,\) where \(G_{x,y}\) is given by (3).

It is worth noting that if the condition \((i)\) is replaced by

\((i^{\prime})\) \(T_{m,n}(e_{0};x,y)=1+\alpha_{m,n}(x,y)\)

where \(\{\alpha_{m,n}(x,y)\}\) converges to zero uniformly on \(D\) as \(m,n\rightarrow\infty\) (in any manner), then one obtains pointwise, but not necessarily uniform, convergence of the sequence \(\{T_{m,n}\left(G_{x,y}\right)\}\) to \(g\) for any \(g\in C_{b}(D).\)

Now we can give the following main result of the present paper.

Theorem 2. Let \(\{T_{m,n}\}\) be a sequence of positive linear operators acting from \(C_{b}\left(D\right)\) into \(B\left(D\right).\) Assume that the following conditions hold:

$$\delta_{P_{p}}^{2}\left(\left\{\left(m,n\right):T_{m,n}(e_{0};x,y)=e_{0}\left(x,y\right)\ \text{for all}\ (x,y)\in D\right\}\right)=1$$
(4)

and

$$st_{P_{p}}^{2}-\lim\left|\left|T_{m,n}\left(e_{i}\right)-e_{i}\right|\right|=0\quad\text{for}\quad i=1,2,3.$$
(5)

Then, for all \(g\in C_{b}(D),\) we have

$$st_{P_{p}}^{2}-\lim\left|\left|T_{m,n}\left(G_{x,y}\right)-g\right|\right|=0,$$
(6)

where \(G_{x,y}\) is given by (3).

Proof. Let \((x,y)\in D\) and \(g\in C_{b}\left(D\right)\) be fixed. Putting

$$S:=\left\{\left(m,n\right):T_{m,n}(e_{0};x,y)=e_{0}\left(x,y\right)\ \text{for all}\ (x,y)\in D\right\},$$
(7)

thanks to (4), we get that

$$\delta_{P_{p}}^{2}\left(S\right)=1\text{ and }\delta_{P_{p}}^{2}\left(\mathbb{N}_{0}^{2}\backslash S\right)=0.$$
(8)

Using the \(B\)-continuity of the function \(G_{x,y}\) given by (3), Lemma 1 implies that, for every \(\varepsilon>0,\) there exist two positive numbers \(A_{1}(\varepsilon)\) and \(A_{2}(\varepsilon)\) such that

$$\left|\Delta_{x,y}\left[G_{x,y}(u,v)\right]\right|\leq\frac{\varepsilon}{3}+A_{1}(\varepsilon)(u-x)^{2}+A_{2}(\varepsilon)(v-y)^{2}$$
(9)

holds for every \((u,v)\in D.\) Also, thanks to (4), we can easily see that

$$T_{m,n}\left(G_{x,y};x,y\right)-g(x,y)=T_{m,n}\left(\Delta_{x,y}\left[G_{x,y}(u,v)\right];x,y\right)$$
(10)

holds for all \(\left(m,n\right)\in S.\) Since, \(T_{m,n}\) is linear and positive, for all \(\left(m,n\right)\in S,\) it follows from (9) and (10) that

$$\left|T_{m,n}\left(G_{x,y};x,y\right)-g(x,y)\right|=\left|T_{m,n}\left(\Delta_{x,y}\left[G_{x,y}(u,v)\right];x,y\right)\right|\leq T_{m,n}\left(\left|\Delta_{x,y}\left[G_{x,y}(u,v)\right]\right|;x,y\right)$$
$${}\leq\frac{\varepsilon}{3}+A_{1}(\varepsilon)T_{m,n}\left((u-x)^{2};x,y\right)+A_{2}(\varepsilon)T_{m,n}\left((v-y)^{2};x,y\right)$$
$${}\leq\frac{\varepsilon}{3}+A(\varepsilon)\{x^{2}+y^{2}+T_{m,n}(e_{3};x,y)-2xT_{m,n}(e_{1};x,y)-2yT_{m,n}(e_{2};x,y)\},$$

where \(A(\varepsilon)=\max\{A_{1}(\varepsilon),A_{2}(\varepsilon)\},\) and hence

$$\left|T_{m,n}\left(G_{x,y};x,y\right)-g(x,y)\right|\leq\frac{\varepsilon}{3}+A(\varepsilon)\sum_{i=1}^{3}\left|T_{m,n}\left(e_{i};x,y\right)-e_{i}(x,y)\right|$$
(11)

holds for all \(\left(m,n\right)\in S.\) Now, taking the supremum over \((x,y)\in D\) on both sides of inequality (11), we have for all \(\left(m,n\right)\in S\)

$$\left|\left|T_{m,n}\left(G_{x,y}\right)-g\right|\right|\leq\frac{\varepsilon}{3}+A(\varepsilon)\sum_{i=1}^{3}\left|\left|T_{m,n}\left(e_{i}\right)-e_{i}\right|\right|.$$
(12)

For a given \(\varepsilon^{\prime}>0,\) choose \(\varepsilon>0\) such that \(\varepsilon<3\varepsilon^{\prime}\) and define

$$K:=\left\{\left(m,n\right):\left|\left|T_{m,n}\left(G_{x,y}\right)-g\right|\right|\geq\varepsilon^{\prime}\right\},$$
$$K_{i}:=\left\{\left(m,n\right):\left|\left|T_{m,n}\left(e_{i}\right)-e_{i}\right|\right|\geq\frac{3\varepsilon^{\prime}-\varepsilon}{9A(\varepsilon)}\right\},\quad i=1,2,3.$$

Thanks to (12), we get \(K\cap S\subseteq\bigcup_{i=1}^{3}(K_{i}\cap S),\) and hence

$$\delta_{P_{p}}^{2}\left(K\cap S\right)\leq\sum_{i=1}^{3}\delta_{P_{p}}^{2}\left(K_{i}\cap S\right)\leq\sum_{i=1}^{3}\delta_{P_{p}}^{2}\left(K_{i}\right),$$

and, from hypotheses (5), we get \(\delta_{P_{p}}^{2}\left(K_{i}\right)=0,\) \(i=1,2,3,\) yielding

$$\delta_{P_{p}}^{2}\left(K\cap S\right)=0.$$
(13)

Moreover,

$$\delta_{P_{p}}^{2}\left(\left\{\left(m,n\right):\left|\left|T_{m,n}\left(G_{x,y}\right)-g\right|\right|\geq\varepsilon^{\prime}\right\}\right)=\delta_{P_{p}}^{2}\left(\left\{\left(m,n\right):\left|\left|T_{m,n}\left(G_{x,y}\right)-g\right|\right|\geq\varepsilon^{\prime}\right\}\cap S\right)$$
$${}+\delta_{P_{p}}^{2}\left(\left\{\left(m,n\right):\left|\left|T_{m,n}\left(G_{x,y}\right)-g\right|\right|\geq\varepsilon^{\prime}\right\}\cap\left(\mathbb{N}_{0}^{2}\backslash S\right)\right)$$
$${}\leq\delta_{P_{p}}^{2}\left(\left\{\left(m,n\right):\left|\left|T_{m,n}\left(G_{x,y}\right)-g\right|\right|\geq\varepsilon^{\prime}\right\}\cap S\right)+\delta_{P_{p}}^{2}\left(\mathbb{N}_{0}^{2}\backslash S\right).$$

Thanks to (8) and (13), we can easily see that

$$\delta_{P_{p}}^{2}\left(\left\{\left(m,n\right):\left|\left|T_{m,n}\left(G_{x,y}\right)-g\right|\right|\geq\varepsilon^{\prime}\right\}\right)=0,$$

which means

$$st_{P_{p}}^{2}-\lim\left|\left|T_{m,n}\left(G_{x,y}\right)-g\right|\right|=0.$$

\(\Box\)

It is known that a function \(g\in C_{b}(D)\) may be unbounded on the compact set \(D.\) However, thanks to the conditions (4), (9) and (10), we can say that the number

$$\sup_{(x,y)\in D}\left|T_{m,n}\left(G_{x,y};x,y\right)-g(x,y)\right|$$

in Theorem 2 is finite for each \(\left(m,n\right)\in S,\) where \(S\) is given by (7).

Now, we give an example showing that our result, Theorem 2, is stronger than its classical version, Theorem 1. We also see that the statistical Korovkin-type theorem given in [20] does not work for our newly defined operators.

Example 2. Consider the following Bernstein–Stancu-type operators [1]:

$$S_{m,n,\alpha,\beta,\gamma,\delta}(g;x,y)=\sum_{s=0}^{m}\sum_{t=0}^{n}G_{x,y}\left(\frac{\alpha+s}{\beta+m},\frac{\gamma+t}{\delta+n}\right)\dbinom{m}{s}\dbinom{n}{t}x^{s}y^{t}(1-x)^{m-s}(1-y)^{n-t},$$
(14)

where \(G_{x,y}\) is given by (3), \((x,y)\in D=[0,1]\times[0,1],\) \(\alpha\), \(\beta\), \(\gamma\), \(\delta\) are fixed real numbers, and \(g\in C_{b}(D).\) Then, thanks to Theorem 1, it is known that for any \(g\in C_{b}(D)\)

$$P-\lim_{m,n}\left|\left|S_{m,n,\alpha,\beta,\gamma,\delta}\left(g\right)-g\right|\right|=0.$$
(15)

Now, we define the following positive linear operators on \(C_{b}(D)\):

$$T_{m,n}(g;x,y)=s_{m,n}S_{m,n,\alpha,\beta,\gamma,\delta}(g;x,y),$$
(16)

where \(\left\{s_{m,n}\right\}\) is given by (2). Observe that the sequence of positive linear operators \(\{T_{m,n}\}\) defined in (16) satisfies all the hypotheses of Theorem 2. So, by (15) and (2), we have

$$st_{P_{p}}^{2}-\lim\left|\left|T_{m,n}\left(g\right)-g\right|\right|=0.$$

Since \(\left\{s_{m,n}\right\}\) is not \(P\)-convergent, the sequence \(\{T_{m,n}(g;x,y)\}\) given by (16) does not converge uniformly to the function \(g\in C_{b}(D).\) Thus, Theorem 1 does not apply to the operators in (16), while our Theorem 2 still works. Moreover, since \(\left\{s_{m,n}\right\}\) is not statistically convergent, the sequence \(\{T_{m,n}(g;x,y)\}\) given by (16) is not statistically uniformly convergent; hence the statistical Korovkin-type theorem of Dirik, Duman and Demirci [20] does not apply either.
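A small numerical experiment (our own sketch; we take \(\alpha=\beta=\gamma=\delta=0\), which reduces the nodes \((\alpha+s)/(\beta+m)\) in (14) to the plain Bernstein nodes \(s/m\)) illustrates the convergence in (15): for \(g(u,v)=u^{2}v^{2}\) the pointwise error decays as \(m,n\) grow.

```python
from math import comb

# Our own numerical sketch of (14)-(15), with alpha = beta = gamma = delta = 0,
# so the Bernstein-Stancu nodes reduce to the plain Bernstein nodes s/m, t/n.

def S_op(g, m, n, x, y, a=0.0, b=0.0, c=0.0, d=0.0):
    """Operator (14): Bernstein-Stancu weights applied to G_{x,y} from (3)."""
    G = lambda u, v: g(u, y) + g(x, v) - g(u, v)
    total = 0.0
    for i in range(m + 1):
        bx = comb(m, i) * x**i * (1 - x)**(m - i)
        u = (a + i) / (b + m)
        for j in range(n + 1):
            by = comb(n, j) * y**j * (1 - y)**(n - j)
            v = (c + j) / (d + n)
            total += G(u, v) * bx * by
    return total

g = lambda u, v: (u * v) ** 2
err10 = abs(S_op(g, 10, 10, 0.3, 0.7) - g(0.3, 0.7))
err60 = abs(S_op(g, 60, 60, 0.3, 0.7) - g(0.3, 0.7))
# For this g the error is x(1-x)y(1-y)/(mn), so err60 is 36 times smaller.
```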

3 RATES OF \(P_{p}^{2}\)-STATISTICAL CONVERGENCE

In the present section, we compute the rate of \(P_{p}^{2}\)-statistical convergence of a double sequence of positive linear operators by means of the mixed modulus of smoothness. We begin with the following definitions.

Definition 3. Let \(\left\{\alpha_{m,n}\right\}\) be a positive non-increasing double sequence. A double sequence \(x=\left\{x_{m,n}\right\}\) is \(P_{p}^{2}\)-statistically convergent to a number \(L\) with the rate of \(o(\alpha_{m,n})\) provided that for every \(\varepsilon>0,\)

$$\underset{t,s\rightarrow R^{-}}{\lim}\frac{1}{p\left(t,s\right)}\underset{\left(m,n\right)\in M(\varepsilon)}{\sum}p_{m,n}t^{m}s^{n}=0,$$

where \(M(\varepsilon):=\) \(\left\{\left(m,n\right):\left|x_{m,n}-L\right|\geq\varepsilon\alpha_{m,n}\right\},\) and denoted by \(x_{m,n}-L=st_{P_{p}}^{2}-o(\alpha_{m,n}).\)

Definition 4. Let \(\left\{\alpha_{m,n}\right\}\) be the same as in Definition 3. A double sequence  \(x=\left\{x_{m,n}\right\}\) is \(P_{p}^{2}\)-statistically bounded with the rate of \(O(\alpha_{m,n})\) provided that for every \(\varepsilon>0,\)

$$\underset{t,s\rightarrow R^{-}}{\lim}\frac{1}{p\left(t,s\right)}\underset{\left(m,n\right)\in N(\varepsilon)}{\sum}p_{m,n}t^{m}s^{n}=0,$$

where \(N(\varepsilon):=\left\{\left(m,n\right):\left|x_{m,n}\right|\geq\varepsilon\alpha_{m,n}\right\},\) and denoted by \(x_{m,n}=st_{P_{p}}^{2}-O(\alpha_{m,n}).\)

Thanks to these definitions, it is possible to get the following auxiliary result.

Lemma 2. Let \(\left\{x_{m,n}\right\}\) and \(\left\{y_{m,n}\right\}\) be double sequences, and let \(\left\{\alpha_{m,n}\right\}\) and \(\left\{\beta_{m,n}\right\}\) be positive non-increasing sequences. If \(x_{m,n}-L_{1}=st_{P_{p}}^{2}-o(\alpha_{m,n})\) and \(y_{m,n}-L_{2}=st_{P_{p}}^{2}-o(\beta_{m,n}),\) then we have

  • \((i)\) \((x_{m,n}-L_{1})\mp(y_{m,n}-L_{2})=st_{P_{p}}^{2}-o(\gamma_{m,n})\), where \(\gamma_{m,n}:=\max\left\{\alpha_{m,n},\beta_{m,n}\right\}\) for each \(\left(m,n\right)\in\mathbb{N}_{0}^{2},\)

  • \((ii)\) \(\lambda(x_{m,n}-L_{1})=st_{P_{p}}^{2}-o(\alpha_{m,n})\) for any real number \(\lambda.\)

It is worth noting that, if we replace the symbol ‘‘\(o\)’’ with ‘‘\(O\)’’, then we get a similar result.

Now we recall the concept of the mixed modulus of smoothness. Let \(g\in C_{b}\left(D\right).\) The mixed modulus of smoothness of \(g,\) denoted by \(\omega_{mixed}\left(g;\delta_{1},\delta_{2}\right),\) is given by

$$\omega_{mixed}\left(g;\delta_{1},\delta_{2}\right)=\sup\left\{\left|\Delta_{x,y}\left[g\left(u,v\right)\right]\right|:\left|u-x\right|\leq\delta_{1},\text{ }\left|v-y\right|\leq\delta_{2}\right\}$$

for \(\delta_{1},\delta_{2}>0.\) In order to get our main result of this section, we will use the following inequality

$$\omega_{mixed}\left(g;\lambda_{1}\delta_{1},\lambda_{2}\delta_{2}\right)\leq\left(1+\lambda_{1}\right)\left(1+\lambda_{2}\right)\omega_{mixed}\left(g;\delta_{1},\delta_{2}\right)$$

for \(\lambda_{1},\lambda_{2}>0.\) Several authors used the modulus \(\omega_{mixed}\) in the framework of ‘‘Boolean sum type’’ approximation (see, for example, [13]). Elementary properties of \(\omega_{mixed}\) can be found in [31] (see also [2]), and in particular for the case of \(B\)-continuous functions in [3].
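The following grid approximation (a hypothetical sketch; the true modulus is a supremum over the continuum) shows \(\omega_{mixed}\) at work on \(D=[0,1]^{2}\): for \(g(u,v)=uv\) one has \(\Delta_{x,y}\left[g(u,v)\right]=(u-x)(v-y)\), so \(\omega_{mixed}(g;\delta_{1},\delta_{2})=\delta_{1}\delta_{2}\).

```python
# Our own grid approximation of the mixed modulus of smoothness on D = [0,1]^2.

def mixed_difference(g, x, y, u, v):
    return g(u, v) - g(u, y) - g(x, v) + g(x, y)

def omega_mixed(g, d1, d2, steps=20):
    """Grid sup of |Delta_{x,y}[g(u,v)]| over |u-x| <= d1, |v-y| <= d2."""
    pts = [i / steps for i in range(steps + 1)]
    best = 0.0
    for x in pts:
        for y in pts:
            for u in pts:
                if abs(u - x) > d1:
                    continue
                for v in pts:
                    if abs(v - y) > d2:
                        continue
                    best = max(best, abs(mixed_difference(g, x, y, u, v)))
    return best

g = lambda u, v: u * v
w = omega_mixed(g, 0.25, 0.25)   # for g = uv this is 0.25 * 0.25 = 1/16
```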

We can now give the main result of this section, which gives the rate of \(P_{p}^{2}\)-statistical convergence.

Theorem 3. Let \(\{T_{m,n}\}\) be a sequence of positive linear operators from \(C_{b}\left(D\right)\) into \(B\left(D\right),\) and let \(\left\{\alpha_{m,n}\right\}\) be a positive non-increasing sequence. Assume that the following condition holds:

$$\underset{t,s\rightarrow R^{-}}{\lim}\frac{1}{p\left(t,s\right)}\underset{\left(m,n\right)\in S}{\sum}p_{m,n}t^{m}s^{n}=1,$$
(17)

where \(S=\left\{\left(m,n\right):T_{m,n}(e_{0};x,y)=e_{0}\left(x,y\right)\text{ for all }(x,y)\in D\right\};\) and

$$\omega_{mixed}\left(g;\gamma_{m,n},\delta_{m,n}\right)=st_{P_{p}}^{2}-o(\alpha_{m,n}),$$
(18)

where \(\gamma_{m,n}:=\sqrt{\left|\left|T_{m,n}(\varphi)\right|\right|}\) and \(\delta_{m,n}:=\sqrt{\left|\left|T_{m,n}(\Psi)\right|\right|}\) with \(\varphi(u,v)=\left(u-x\right)^{2},\) \(\Psi(u,v)=\left(v-y\right)^{2}.\) Then we get, for all \(g\in C_{b}\left(D\right),\)

$$\left|\left|T_{m,n}(G_{x,y})-g\right|\right|=st_{P_{p}}^{2}-o(\alpha_{m,n}),$$

where \(G_{x,y}\) is given by (3). We note that if we replace the symbol ‘‘ \(o\) ’’ with ‘‘ \(O\) ’’, then we get similar results.

Proof. Let \((x,y)\in D\) and \(g\in C_{b}\left(D\right)\) be fixed. It follows from (17) that

$$\underset{t,s\rightarrow R^{-}}{\lim}\frac{1}{p\left(t,s\right)}\underset{\left(m,n\right)\in\mathbb{N}_{0}^{2}\backslash S}{\sum}p_{m,n}t^{m}s^{n}=0.$$
(19)

Also, because of

$$\Delta_{x,y}\left[G_{x,y}(u,v)\right]=-\Delta_{x,y}\left[g(u,v)\right],$$

we observe that

$$T_{m,n}\left(G_{x,y};x,y\right)-g(x,y)=T_{m,n}\left(\Delta_{x,y}\left[G_{x,y}(u,v)\right];x,y\right)$$

holds for all \(\left(m,n\right)\in S.\) Then, using the properties of \(\omega_{mixed},\) we obtain

$$\left|\Delta_{x,y}\left[G_{x,y}(u,v)\right]\right|\leq\omega_{mixed}\left(g;\left|u-x\right|,\left|v-y\right|\right)$$
$${}\leq\left(1+\frac{1}{\delta_{1}}\left|u-x\right|\right)\left(1+\frac{1}{\delta_{2}}\left|v-y\right|\right)\omega_{mixed}\left(g;\delta_{1},\delta_{2}\right)\text{.}$$
(20)

Hence, from the monotonicity and linearity of the operators \(T_{m,n}\), it follows from (20) that, for all \(\left(m,n\right)\in S,\)

$$\left|T_{m,n}(G_{x,y};x,y)-g\left(x,y\right)\right|=\left|T_{m,n}\left(\Delta_{x,y}\left[G_{x,y}(u,v)\right];x,y\right)\right|\leq T_{m,n}\left(\left|\Delta_{x,y}\left[G_{x,y}(u,v)\right]\right|;x,y\right)$$
$${}\leq T_{m,n}\left(\left(1+\frac{1}{\delta_{1}}\left|u-x\right|\right)\left(1+\frac{1}{\delta_{2}}\left|v-y\right|\right);x,y\right)\omega_{mixed}\left(g;\delta_{1},\delta_{2}\right)$$
$${}=\left\{1+\frac{1}{\delta_{1}}T_{m,n}\left(\left|u-x\right|;x,y\right)+\frac{1}{\delta_{2}}T_{m,n}\left(\left|v-y\right|;x,y\right)\right.$$
$$+\left.\frac{1}{\delta_{1}\delta_{2}}T_{m,n}\left(\left|u-x\right|\cdot\left|v-y\right|;x,y\right)\right\}\omega_{mixed}\left(g;\delta_{1},\delta_{2}\right).$$

Then, using the Cauchy–Schwarz inequality, we get that

$$\left|T_{m,n}(G_{x,y};x,y)-g\left(x,y\right)\right|\leq\left\{1+\frac{1}{\delta_{1}}\sqrt{T_{m,n}\left(\varphi;x,y\right)}+\frac{1}{\delta_{2}}\sqrt{T_{m,n}\left(\Psi;x,y\right)}\right.$$
$$+\left.\frac{1}{\delta_{1}\delta_{2}}\sqrt{T_{m,n}\left(\varphi;x,y\right)}\sqrt{T_{m,n}\left(\Psi;x,y\right)}\right\}\omega_{mixed}\left(g;\delta_{1},\delta_{2}\right)$$
(21)

for all \(\left(m,n\right)\in S\), and taking the supremum over \((x,y)\in D\) on the inequality (21), we obtain for all \(\left(m,n\right)\in S\) that

$$\left|\left|T_{m,n}(G_{x,y})-g\right|\right|\leq 4\omega_{mixed}\left(g;\gamma_{m,n},\delta_{m,n}\right),$$
(22)

where \(\delta_{1}:=\gamma_{m,n}:=\sqrt{\left|\left|T_{m,n}(\varphi)\right|\right|}\) and \(\delta_{2}:=\delta_{m,n}:=\sqrt{\left|\left|T_{m,n}(\Psi)\right|\right|}.\) For a given \(\varepsilon>0,\) let us define the following sets:

$$U:=\left\{\left(m,n\right):\left|\left|T_{m,n}(G_{x,y})-g\right|\right|\geq\varepsilon\right\},$$
$$U^{1}:=\left\{\left(m,n\right):\text{ }\omega_{mixed}(g;\gamma_{m,n},\delta_{m,n})\geq\frac{\varepsilon}{4}\right\}.$$

Hence, it follows from (22) that \(U\cap S\subseteq U^{1}\cap S.\) We can easily see that

$$\frac{1}{p\left(t,s\right)}\underset{\left(m,n\right)\in U\cap S}{\sum}p_{m,n}t^{m}s^{n}\leq\frac{1}{p\left(t,s\right)}\underset{\left(m,n\right)\in U^{1}\cap S}{\sum}p_{m,n}t^{m}s^{n}\leq\frac{1}{p\left(t,s\right)}\underset{\left(m,n\right)\in U^{1}}{\sum}p_{m,n}t^{m}s^{n}.$$

Letting \(t,s\rightarrow R^{-}\) and in view of (18), we conclude that

$$\underset{t,s\rightarrow R^{-}}{\lim}\frac{1}{p\left(t,s\right)}\underset{\left(m,n\right)\in U\cap S}{\sum}p_{m,n}t^{m}s^{n}=0.$$
(23)

Furthermore, if we use the inequality

$$\underset{\left(m,n\right)\in U}{\sum}p_{m,n}t^{m}s^{n}=\underset{\left(m,n\right)\in U\cap S}{\sum}p_{m,n}t^{m}s^{n}+\underset{\left(m,n\right)\in U\cap\left(\mathbb{N}_{0}^{2}\backslash S\right)}{\sum}p_{m,n}t^{m}s^{n}$$
$${}\leq\underset{\left(m,n\right)\in U\cap S}{\sum}p_{m,n}t^{m}s^{n}+\underset{\left(m,n\right)\in\mathbb{N}_{0}^{2}\backslash S}{\sum}p_{m,n}t^{m}s^{n},$$

we get that

$$\frac{1}{p\left(t,s\right)}\underset{\left(m,n\right)\in U}{\sum}p_{m,n}t^{m}s^{n}\leq\frac{1}{p\left(t,s\right)}\underset{\left(m,n\right)\in U\cap S}{\sum}p_{m,n}t^{m}s^{n}+\frac{1}{p\left(t,s\right)}\underset{\left(m,n\right)\in\mathbb{N}_{0}^{2}\backslash S}{\sum}p_{m,n}t^{m}s^{n}.$$
(24)

Letting \(t,s\rightarrow R^{-}\) in (24), and by (23) and (19), we conclude that

$$\underset{t,s\rightarrow R^{-}}{\lim}\frac{1}{p\left(t,s\right)}\underset{\left(m,n\right)\in U}{\sum}p_{m,n}t^{m}s^{n}=0,$$

which gives the desired result. \(\Box\)

4 CONCLUSION

The paper contains a Korovkin-type approximation theorem and rates of convergence for real-valued \(B\)-continuous functions via statistical convergence in the sense of power series methods. It is worth noting that, starting from these results, similar proofs can be obtained for a sequence \(\{T_{m,n}\}\) of positive linear operators mapping \(B_{2\pi}\) into \(B\left(\mathbb{R}^{2}\right),\) where \(B_{2\pi}\) stands for the space of all real-valued \(B\)-continuous and \(B\)-\(2\pi\)-periodic functions on \(\mathbb{R}^{2}\) (see also [5, 19]).