1 Introduction and preliminaries

In approximation theory, the classical Korovkin theorem occupies a significant place because it allows us to check convergence with minimal computation [16]. This theorem mainly provides an approximation to scalar-valued functions by means of positive linear operators, and its many extensions, discussed by numerous authors, have made a significant contribution to the literature. From a different perspective, Serra-Capizzano established a new Korovkin-type result for matrix-valued functions [18]. After this work, Duman and Erkuş-Duman [9] studied a Korovkin-type theorem for matrix-valued functions via the notion of A-statistical convergence.

Statistical convergence was introduced independently by Fast [13] and Steinhaus [22]. Since then it has been studied in many different fields, one of which is Korovkin type approximation theory. The classical Korovkin theorem was improved by Gadjiev and Orhan [15] with respect to the notion of statistical convergence (see, for example, [1, 2, 11, 12, 19]). More recently, Ünver and Orhan introduced statistical convergence with respect to the power series method, a new type of statistical convergence [23]. This notion is of interest because it is not comparable with ordinary statistical convergence, so it yields genuinely new results, and it has already been studied by several authors (see [3, 4, 6, 20]).

The purpose of this paper is to prove a P-statistical Korovkin theorem for matrix-valued functions. We compare this new theorem with its classical counterpart and show that it yields a more general result. We also compute rates of P-statistical convergence.

We now recall statistical convergence and convergence in the sense of the power series method for a sequence \(\{x_{n}\}:\)

Let S be a subset of \({\mathbb {N}}_{0},\) the set of non-negative integers. The natural density of S, denoted by \(\delta (S),\) is given by

$$\begin{aligned} \delta (S):=\lim _{n}\frac{1}{n+1}\left| \left\{ k\le n\text { }:k\in S \text { }\right\} \right| \end{aligned}$$

whenever the limit exists, where \(\left| .\right|\) denotes the cardinality of the set [17].

A sequence \(\{x_{n}\}\) of numbers is statistically convergent to L provided that, for every \(\varepsilon >0,\)

$$\begin{aligned} \lim _{n}\frac{1}{n+1}\left| \left\{ k\le n:\left| x_{k}-L\right| \ge \varepsilon \right\} \right| =0 \end{aligned}$$

that is,

$$\begin{aligned} S:=S_{n}\left( \varepsilon \right) :=\left\{ k\le n:\left| x_{k}-L\right| \ge \varepsilon \right\} \end{aligned}$$

has natural density zero. This is denoted by \(st-\lim _{n}x_{n}=L\) [13, 22]. It is worth noting that every convergent sequence (in the usual sense) is statistically convergent to the same limit, whereas a statistically convergent sequence need not be convergent.
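For readers who wish to experiment numerically, the following Python sketch (an illustration of ours, not part of the theory; all names are hypothetical) estimates the natural density of the set of perfect squares and checks the statistical-convergence condition for a sequence that differs from its limit only on that set.

```python
import math

def empirical_density(S_indicator, n):
    """Estimate |{k <= n : k in S}| / (n + 1) for a set S given by an indicator function."""
    return sum(1 for k in range(n + 1) if S_indicator(k)) / (n + 1)

def is_square(k):
    r = math.isqrt(k)
    return r * r == k

# The set of perfect squares has natural density 0 ...
for n in (10**3, 10**5, 10**6):
    print(n, empirical_density(is_square, n))

# ... so a sequence equal to L except on the squares is statistically convergent to L.
L, eps = 5.0, 0.1
x = lambda k: k if is_square(k) else L        # x_k = k on the squares, L otherwise
exceptional = lambda k: abs(x(k) - L) >= eps  # the set S_n(eps) from the text
print(empirical_density(exceptional, 10**6))  # tends to 0 as n grows
```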

Let \(\left\{ p_{n}\right\}\) be a non-negative real sequence such that \(p_{0}>0\) and the corresponding power series

$$\begin{aligned} p\left( u\right) :=\sum \limits _{n=0}^{\infty }p_{n}u^{n} \end{aligned}$$

has radius of convergence R with \(0<R\le \infty .\) If the limit

$$\begin{aligned} \lim _{0<u\rightarrow R^{-}}\frac{1}{p\left( u\right) }\sum \limits _{n=0}^{ \infty }p_{n}u^{n}x_{n}=L \end{aligned}$$

exists, then we say that \(\{x_{n}\}\) is convergent in the sense of the power series method [14, 21]. Note that the method is regular if and only if \(\lim \limits _{0<u\rightarrow R^{-}}\dfrac{p_{n}u^{n}}{p\left( u\right) }=0\) for every n (see, e.g., [5]).

We remark that in the case \(R=1,\) the power series method coincides with the Abel summability method when \(p_{n}=1\) and with the logarithmic summability method when \(p_{n}=\frac{1}{n+1}.\) In the case \(R=\infty\) and \(p_{n}=\frac{1}{n!},\) the power series method coincides with the Borel summability method.
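As a quick numerical illustration (a sketch of our own, taking the Abel case \(p_{n}=1,\) \(R=1\)), the truncated mean below approximates \(\frac{1}{p(u)}\sum p_{n}u^{n}x_{n}\) for u close to 1.

```python
def abel_mean(x, u, N=200_000):
    """Truncated Abel mean: (1 - u) * sum_{n=0}^{N} u^n * x_n (here p_n = 1, p(u) = 1/(1-u), R = 1)."""
    s, w = 0.0, 1.0
    for n in range(N + 1):
        s += w * x(n)
        w *= u
    return (1 - u) * s

# The bounded divergent sequence 1, 0, 1, 0, ... is Abel (power-series) convergent to 1/2.
x = lambda n: 1.0 if n % 2 == 0 else 0.0
for u in (0.9, 0.99, 0.999):
    print(u, abel_mean(x, u))
```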

Here and in the sequel, the power series method is always assumed to be regular.

Many researchers are interested in summability methods defined by power series; these are in general non-matrix methods, the best-known examples being the Abel and Borel methods. It is worth pointing out that Ünver and Orhan [23] have recently introduced P-statistical convergence, which is not comparable with statistical convergence:

Definition 1

[23] Let \(S\subset {\mathbb {N}} _{0}.\) If the limit

$$\begin{aligned} \delta _{P}\left( S\right) :=\lim _{0<u\rightarrow R^{-}}\frac{1}{p\left( u\right) }\underset{n\in S}{\sum }p_{n}u^{n} \end{aligned}$$

exists, then \(\delta _{P}\left( S\right)\) is called the P-density of S. It is worthwhile to point out that from the definition of a power series method and P-density it is obvious that \(0\le \delta _{P}\left( S\right) \le 1\) whenever it exists.

The P-density has the following basic properties:

(i) \(\delta _{P}( {\mathbb {N}} _{0})=1;\)

(ii) if \(S\subset G\) then \(\delta _{P}(S)\le \delta _{P}(G);\)

(iii) if S has P-density then \(\delta _{P}( {\mathbb {N}} _{0}\setminus S)=1-\delta _{P}(S).\)

Definition 2

[23] Let \(\{x_{n}\}\) be a sequence. Then \(\{x_{n}\}\) is said to be statistically convergent with respect to the power series method (P-statistically convergent) to L if, for any \(\varepsilon >0,\)

$$\begin{aligned} \lim _{0<u\rightarrow R^{-}}\frac{1}{p\left( u\right) }\underset{n\in S_{\varepsilon }}{\sum }p_{n}u^{n}=0 \end{aligned}$$

where \(S_{\varepsilon }=\left\{ n\in {\mathbb {N}} _{0}:\left| x_{n}-L\right| \ge \varepsilon \right\} ;\) that is, \(\delta _{P}\left( S_{\varepsilon }\right) =0\) for any \(\varepsilon >0.\) This is denoted by \(st_{P}-\lim x_{n}=L.\)
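A P-density can likewise be estimated numerically. The sketch below (again ours, for the Abel case \(p_{n}=1\)) approximates \(\delta _{P}(S)\) for two simple sets.

```python
import math

def abel_density(S_indicator, u, N=200_000):
    """Truncated estimate of delta_P(S) for the Abel method: (1 - u) * sum_{n in S, n <= N} u^n."""
    total = sum(u**n for n in range(N + 1) if S_indicator(n))
    return (1 - u) * total

is_square = lambda n: math.isqrt(n) ** 2 == n
for u in (0.9, 0.99, 0.999):
    print(u,
          abel_density(lambda n: n % 2 == 0, u),  # P-density of the even numbers: tends to 1/2
          abel_density(is_square, u))             # P-density of the perfect squares: tends to 0

# Consequently, a sequence whose exceptional set S_eps is the squares is P-statistically
# convergent (under the Abel method), while one whose exceptional set is the evens is not.
```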

The following example shows that statistical convergence and P-statistical convergence do not imply each other.

Example 1

Let \(\left\{ p_{n}\right\}\) be defined as follows

$$\begin{aligned} p_{n}=\left\{ \begin{array}{ll} 1, &{} n=m^{2}, \\ 0, &{} \text {otherwise,} \end{array} \right. m\in {\mathbb {N}} _{0}, \end{aligned}$$

and take the sequence \(\left\{ x_{n}\right\}\) defined by

$$\begin{aligned} x_{n}=\left\{ \begin{array}{ll} 0, &{} n=m^{2}, \\ n, &{} \text {otherwise,} \end{array} \right. m\in {\mathbb {N}} _{0}. \end{aligned}$$

Since, for any \(\varepsilon >0,\) \(\lim \nolimits _{0<u\rightarrow R^{-}}\frac{1}{p\left( u\right) }\sum \nolimits _{n:\left| x_{n}\right| \ge \varepsilon }p_{n}u^{n}=0\) (the exceptional set consists only of non-squares, where \(p_{n}=0\)), the sequence \(\left\{ x_{n}\right\}\) is P-statistically convergent to 0. On the other hand, \(\left\{ x_{n}\right\}\) is clearly not statistically convergent to 0, because the non-squares have natural density 1. Now let \(\left\{ y_{n}\right\}\) be the sequence defined by

$$\begin{aligned} y_{n}=\left\{ \begin{array}{ll} n, &{} n=m^{2} \\ 0, &{} \text {otherwise} \end{array} \right. m\in {\mathbb {N}} _{0}. \end{aligned}$$

It is not difficult to observe that \(\left\{ y_{n}\right\}\) is statistically convergent to 0 (the squares have natural density zero); however, it is not P-statistically convergent to 0, since all of the weight of \(\left\{ p_{n}\right\}\) is concentrated on the squares.
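The computations of Example 1 can be checked numerically with the following sketch (ours; for this choice of \(\left\{ p_{n}\right\}\) the radius of convergence is \(R=1\)).

```python
import math

def p_weight(n):
    """p_n = 1 if n is a perfect square, 0 otherwise (Example 1; radius of convergence R = 1)."""
    r = math.isqrt(n)
    return 1.0 if r * r == n else 0.0

def p_stat_mass(exceptional, u, N=200_000):
    """Truncated value of (1/p(u)) * sum_{n in S_eps} p_n u^n."""
    num = sum(p_weight(n) * u**n for n in range(N + 1) if exceptional(n))
    den = sum(p_weight(n) * u**n for n in range(N + 1))
    return num / den

eps = 0.5
x = lambda n: 0.0 if p_weight(n) else float(n)  # x_n of Example 1
y = lambda n: float(n) if p_weight(n) else 0.0  # y_n of Example 1
for u in (0.9, 0.99, 0.999):
    print(u,
          p_stat_mass(lambda n: abs(x(n)) >= eps, u),  # -> 0: {x_n} is P-statistically convergent to 0
          p_stat_mass(lambda n: abs(y(n)) >= eps, u))  # -> 1: {y_n} is not
```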

2 P-statistical approximation for matrix-valued functions

This section aims to prove a new Korovkin type approximation theorem and to present an application that shows our theorem is more applicable than the classical one.

The symbol \(C\left( \left[ a,b\right] , {\mathbb {C}} ^{s\times t}\right)\) stands for the space of all continuous functions H defined on \(\left[ a,b\right]\) and taking values in \({\mathbb {C}} ^{s\times t},\) the space of \(s\times t\) complex matrices, where s and t are positive integers; that is,

$$\begin{aligned} H\left( x\right) :=\left[ h_{jk}\left( x\right) \right] _{s\times t},\text { } \left( x\in \left[ a,b\right] ,\text { }1\le j\le s,\text { }1\le k\le t\right) , \end{aligned}$$
(1)

where \(\left[ a_{jk}\right] _{s\times t}\) stands for the \(s\times t\) matrix defined as follows

$$\begin{aligned} \begin{bmatrix} a_{11} &{} a_{12} &{} \cdots &{} a_{1t} \\ a_{21} &{} a_{22} &{} \cdots &{} a_{2t} \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ a_{s1} &{} a_{s2} &{} \cdots &{} a_{st} \end{bmatrix} . \end{aligned}$$

Since H is continuous, all of the scalar-valued functions \(h_{jk}\) are continuous on \(\left[ a,b\right] .\) Furthermore, the norm \(\left\| \cdot \right\| _{s\times t}\) on the space \(C\left( \left[ a,b\right] , {\mathbb {C}} ^{s\times t}\right)\) is defined as follows

$$\begin{aligned} \left\| H\right\| _{s\times t}:=\underset{1\le j\le s,\text { }1\le k\le t}{\max }\left\| h_{jk}\right\| \end{aligned}$$
(2)

where \(\left\| h_{jk}\right\|\) stands for the usual supremum norm of \(h_{jk}\) on the interval \(\left[ a,b\right] .\) Equivalently, (2) may be written as

$$\begin{aligned} \left\| H\right\| _{s\times t}:=\underset{1\le j\le s,\text { }1\le k\le t}{\max }\left( \underset{x\in \left[ a,b\right] }{\sup }\left| h_{jk}\left( x\right) \right| \right) . \end{aligned}$$
(3)

Here and throughout the article, we consider the following test functions

$$\begin{aligned} E_{ijk}\left( x\right) :=x^{i}E_{jk}\text { }\left( x\in \left[ a,b\right] , \text { }i=0,1,2,\text { }1\le j\le s,\text { }1\le k\le t\right) \end{aligned}$$
(4)

where \(E_{jk}\) denotes the matrix of the canonical basis of \({\mathbb {C}} ^{s\times t}\) being 1 in the position \(\left( j,k\right)\) and zero otherwise.
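The norm (3) and the test functions (4) can be realized concretely; the following Python sketch (an illustration of ours, with hypothetical names) approximates \(\left\| H\right\| _{s\times t}\) on a grid and constructs \(E_{ijk}.\)

```python
import numpy as np

def sup_norm_matrix_function(H, a, b, grid=1001):
    """Approximate ||H||_{s x t} = max_{j,k} sup_{x in [a,b]} |h_jk(x)| on a uniform grid."""
    xs = np.linspace(a, b, grid)
    values = np.stack([np.abs(H(x)) for x in xs])  # shape (grid, s, t)
    return values.max()

def E(i, j, k, s, t):
    """Test function E_ijk(x) = x^i * E_jk, returned as a callable [a,b] -> C^{s x t}."""
    M = np.zeros((s, t), dtype=complex)
    M[j - 1, k - 1] = 1.0
    return lambda x: (x ** i) * M

# Example: a 2 x 2 matrix-valued function on [0, 1]
H = lambda x: np.array([[np.sin(np.pi * x), x**2], [1j * x, np.exp(x)]])
print(sup_norm_matrix_function(H, 0.0, 1.0))  # ~ e, the supremum of |exp(x)| on [0, 1]
print(E(1, 2, 1, 2, 2)(0.5))                  # 0.5 in position (2, 1), zeros elsewhere
```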

Let \(\Theta :C\left( \left[ a,b\right] , {\mathbb {C}} ^{s\times t}\right) \rightarrow C\left( \left[ a,b\right] , {\mathbb {C}} ^{s\times t}\right)\) be an operator, and let us assume that

(i) \(\Theta \left( \alpha H+\beta G\right) =\alpha \Theta \left( H\right) +\beta \Theta \left( G\right)\) for any \(\alpha ,\beta \in {\mathbb {C}}\) and \(H,G\in C\left( \left[ a,b\right] , {\mathbb {C}} ^{s\times t}\right) ,\)

(ii) \(\left| \Theta \left( H\right) \right| \le K\Theta \left( \left| H\right| \right)\) for any function \(H\in C\left( \left[ a,b \right] , {\mathbb {C}} ^{s\times t}\right)\) and for a fixed positive constant K.

Under the above-mentioned assumptions, the operator \(\Theta\) is said to be a matrix linear positive operator, or simply, mLPO. The inequality appearing in (ii) is understood to be componentwise, i.e., holding for any component \((j,k) \in \left\{ 1,2,\ldots ,s\right\} \times \left\{ 1,2,\ldots ,t\right\}\) (see also [9, 18]).

Serra-Capizzano introduced the following Korovkin type approximation theorem. We first recall this theorem:

Theorem 1

[18] Let \(\left\{ \Theta _{n}\right\}\) be a sequence of mLPOs from \(C\left( \left[ a,b\right] , {\mathbb {C}} ^{s\times t}\right)\) into itself. Then, for each \((j,k) \in \left\{ 1,2,\ldots ,s\right\} \times \left\{ 1,2,\ldots ,t\right\}\) and for each \(i=0,1,2,\)

$$\begin{aligned} \underset{n}{\lim }\left\| \Theta _{n}\left( E_{ijk}\right) -E_{ijk}\right\| _{s\times t}=0 \end{aligned}$$

where \(E_{ijk}\) is given by (4), iff, for every \(H\in C\left( \left[ a,b\right] , {\mathbb {C}} ^{s\times t}\right)\) as in (1),

$$\begin{aligned} \underset{n}{\lim }\left\| \Theta _{n}\left( H\right) -H\right\| _{s\times t}=0. \end{aligned}$$

Now, we can give the P-statistical Korovkin theorem for mLPOs which is the main result of this paper.

Theorem 2

Let \(\left\{ \Theta _{n}\right\}\) be a sequence of mLPOs from \(C\left( \left[ a,b\right] , {\mathbb {C}} ^{s\times t}\right)\) into itself. Then, for each \((j,k) \in \left\{ 1,2,\ldots ,s\right\} \times \left\{ 1,2,\ldots ,t\right\}\) and for each \(i=0,1,2,\)

$$\begin{aligned} st_{P}-\lim \left\| \Theta _{n}\left( E_{ijk}\right) -E_{ijk}\right\| _{s\times t}=0 \end{aligned}$$
(5)

where \(E_{ijk}\) is given by (4), iff for every \(H\in C\left( \left[ a,b \right] , {\mathbb {C}} ^{s\times t}\right)\) as in (1),

$$\begin{aligned} st_{P}-\lim \left\| \Theta _{n}\left( H\right) -H\right\| _{s\times t}=0. \end{aligned}$$
(6)

Proof

Since each test function \(E_{ijk}\) defined by (4) belongs to \(C\left( \left[ a,b\right] , {\mathbb {C}} ^{s\times t}\right) ,\) the implication (6) \(\Longrightarrow\) (5) follows immediately. Therefore, only the converse implication requires a proof. Suppose that (5) holds. Let \(H\in C\left( \left[ a,b\right] , {\mathbb {C}} ^{s\times t}\right)\) and \(x\in \left[ a,b\right]\) be fixed. We first estimate the expression \(\left| \Theta _{n}\left( H;x\right) -H\left( x\right) \right| ,\) where the symbol \(\left| B\right|\) stands for the matrix whose entries are the absolute values of the entries of the matrix B. Notice that the function \(H\left( x\right) =\left[ h_{jk}\left( x\right) \right] _{s\times t},\) \(1\le j\le s,\) \(1\le k\le t,\) can be written as follows:

$$\begin{aligned} H\left( x\right) =\underset{j=1}{\overset{s}{\sum }}\underset{k=1}{\overset{t }{\sum }}h_{jk}\left( x\right) E_{jk}=\underset{j=1}{\overset{s}{\sum }} \overset{t}{\underset{k=1}{\sum }}h_{jk}\left( x\right) E_{0jk}\left( y\right) , \end{aligned}$$

where \(E_{jk}\) is as in (4). Then we have

$$\begin{aligned} \Theta _{n}\left( H\left( x\right) ;x\right) =\underset{j=1}{\overset{s}{ \sum }}\overset{t}{\underset{k=1}{\sum }}h_{jk}\left( x\right) \Theta _{n}\left( E_{0jk};x\right) . \end{aligned}$$
(7)

On the other hand, since each \(h_{jk}\) is continuous on \(\left[ a,b\right] ,\) for a given \(\varepsilon >0\) there exists a positive number \(\delta\) such that, for every \(y\in \left[ a,b\right] ,\)

$$\begin{aligned} \left| h_{jk}\left( y\right) -h_{jk}\left( x\right) \right| \le \varepsilon +\frac{2M_{jk}}{\delta ^{2}}\left( y-x\right) ^{2}\text { for } (j,k) \in \left\{ 1,2,\ldots ,s\right\} \times \left\{ 1,2,\ldots ,t\right\} , \end{aligned}$$
(8)

where \(M_{jk}:=\left\| h_{jk}\right\| =\underset{x\in \left[ a,b\right] }{\sup }\left| h_{jk}\left( x\right) \right| .\) Observe that (8) implies

$$\begin{aligned} \left| H\left( y\right) -H\left( x\right) \right| \le \varepsilon E+ \frac{2M}{\delta ^{2}}\left( y-x\right) ^{2}E \end{aligned}$$
(9)

where E is the \(s\times t\) matrix with all entries equal to 1, and

$$\begin{aligned} M:=\underset{1\le j\le s;\text { }1\le k\le t}{\max }M_{jk}=\left\| H\right\| _{s\times t}. \end{aligned}$$

We recall that the inequality and the absolute value in (9) are understood componentwise, as stated earlier. Then, in view of (7) and (9), and using a technique similar to that in the proof of Theorem 2.1 in [9], for a fixed positive constant K we get

$$\begin{aligned} \left| \Theta _{n}\left( H\left( y\right) ;x\right) -H\left( x\right) \right|\le & {} \varepsilon KE+\frac{2KM}{\delta ^{2}}\underset{j=1}{ \overset{s}{\sum }}\overset{t}{\underset{k=1}{\sum }}\left| \Theta _{n}\left( E_{2jk};x\right) -E_{2jk}\left( x\right) \right| \\&+\frac{4\left| c\right| KM}{\delta ^{2}}\underset{j=1}{\overset{s}{ \sum }}\overset{t}{\underset{k=1}{\sum }}\left| \Theta _{n}\left( E_{1jk};x\right) -E_{1jk}\left( x\right) \right| \\&+\left( \varepsilon K+M+\frac{2c^{2}KM}{\delta ^{2}}\right) \underset{j=1}{ \overset{s}{\sum }}\overset{t}{\underset{k=1}{\sum }}\left| \Theta _{n}\left( E_{0jk};x\right) -E_{0jk}\left( x\right) \right| \end{aligned}$$

where \(c:=\max \left\{ \left| a\right| ,\left| b\right| \right\} .\) Taking the supremum over \(x\in \left[ a,b\right]\) and the maximum over all entries of the corresponding matrices, we obtain

$$\begin{aligned} \left\| \Theta _{n}\left( H\right) -H\right\| _{s\times t}\le \varepsilon K+B\underset{j=1}{\overset{s}{\sum }}\overset{t}{\underset{k=1}{ \sum }}\overset{2}{\underset{i=0}{\sum }}\left\| \Theta _{n}\left( E_{ijk}\right) -E_{ijk}\right\| _{s\times t}, \end{aligned}$$
(10)

where

$$\begin{aligned} B:=\max \left\{ \frac{2KM}{\delta ^{2}},\frac{4\left| c\right| KM}{ \delta ^{2}},\varepsilon K+M+\frac{2c^{2}KM}{\delta ^{2}}\right\} . \end{aligned}$$

Now, for a given \(\varepsilon ^{\prime }>0,\) choose an \(\varepsilon >0\) such that \(\varepsilon <\varepsilon ^{\prime }/K.\) Then, define the following sets:

$$\begin{aligned}&\Delta :=\left\{ n\in {\mathbb {N}} _{0}:\left\| \Theta _{n}\left( H\right) -H\right\| _{s\times t}\ge \varepsilon ^{\prime }\right\} ,\\&\Delta _{ijk}:=\left\{ n\in {\mathbb {N}} _{0}:\left\| \Theta _{n}\left( E_{ijk}\right) -E_{ijk}\right\| _{s\times t}\ge \frac{\varepsilon ^{\prime }-\varepsilon K}{3stB}\right\} , \end{aligned}$$

where \(i=0,1,2\) and \((j,k) \in \left\{ 1,2,\ldots ,s\right\} \times \left\{ 1,2,\ldots ,t\right\} .\) Hence, by (10), we have

$$\begin{aligned} \Delta \subseteq \overset{s}{\underset{j=1}{\cup }}\underset{k=1}{\overset{t}{\cup }}\overset{2}{\underset{i=0}{\cup }}\Delta _{ijk}. \end{aligned}$$

So, we get

$$\begin{aligned} \frac{1}{p\left( u\right) }\underset{n\in \Delta }{\sum }p_{n}u^{n}\le \frac{1}{p\left( u\right) }\overset{s}{\underset{j=1}{\sum }}\underset{k=1}{ \overset{t}{\sum }}\overset{2}{\underset{i=0}{\sum }}\underset{n\in \Delta _{ijk}}{\sum }p_{n}u^{n}. \end{aligned}$$
(11)

Letting \(0<u\rightarrow R^{-}\) in (11) and using the hypothesis (5), we obtain

$$\begin{aligned} \lim _{0<u\rightarrow R^{-}}\frac{1}{p\left( u\right) }\underset{n\in \Delta }{\sum }p_{n}u^{n}=0, \end{aligned}$$

which means

$$\begin{aligned} st_{P}-\lim \left\| \Theta _{n}\left( H\right) -H\right\| _{s\times t}=0. \end{aligned}$$

The proof is completed. \(\square\)

We now present an example showing that our Korovkin-type approximation result applies in situations where the earlier results do not.

Example 2

We define matrix-valued Bernstein-type polynomials as follows:

$$\begin{aligned} B_{n}\left( H;x\right) =\underset{l=0}{\overset{n}{\sum }}H\left( a+\frac{l}{ n}\left( b-a\right) \right) \left( {\begin{array}{c}n\\ l\end{array}}\right) \left( \frac{x-a}{b-a}\right) ^{l}\left( \frac{b-x}{b-a}\right) ^{n-l}, \end{aligned}$$
(12)

where \(n\in {\mathbb {N}} _{0},\) \(x\in \left[ a,b\right]\) and \(H\in C\left( \left[ a,b\right] , {\mathbb {C}} ^{s\times t}\right)\) such that \(H\left( x\right) =\left[ h_{jk}\left( x\right) \right] _{s\times t},\) \(1\le j\le s,\) \(1\le k\le t.\) Then, notice that the matrix-valued Bernstein-type polynomials \(B_{n}\) can be also written as follows:

$$\begin{aligned} B_{n}\left( H;x\right) =\underset{l=0}{\overset{n}{\sum }}\overset{s}{ \underset{j=1}{\sum }}\underset{k=1}{\overset{t}{\sum }}h_{jk}\left( a+\frac{ l}{n}\left( b-a\right) \right) \left( {\begin{array}{c}n\\ l\end{array}}\right) \left( \frac{x-a}{b-a}\right) ^{l}\left( \frac{b-x}{b-a}\right) ^{n-l}E_{jk}, \end{aligned}$$
(13)

where \(E_{jk}\) is as above. So, by (12) and (13), we get, for each \((j,k) \in \left\{ 1,2,\ldots ,s\right\} \times \left\{ 1,2,\ldots ,t\right\} ,\) that

$$\begin{aligned} B_{n}\left( E_{0jk};x\right)&= \underset{l=0}{\overset{n}{\sum }}\left( {\begin{array}{c}n\\ l \end{array}}\right) \left( \frac{x-a}{b-a}\right) ^{l}\left( \frac{b-x}{b-a}\right) ^{n-l}E_{0jk}\left( a+\frac{l}{n}\left( b-a\right) \right) \\&= E_{jk}\underset{l=0}{\overset{n}{\sum }}\left( {\begin{array}{c}n\\ l\end{array}}\right) \left( \frac{x-a}{b-a}\right) ^{l}\left( \frac{b-x}{b-a}\right) ^{n-l} \\&= E_{0jk}\left( x\right) . \end{aligned}$$

Also, again for each \((j,k) \in \left\{ 1,2,\ldots ,s\right\} \times \left\{ 1,2,\ldots ,t\right\} ,\)

$$\begin{aligned} B_{n}\left( E_{1jk};x\right)&= \underset{l=0}{\overset{n}{\sum }}\left( {\begin{array}{c}n\\ l \end{array}}\right) \left( \frac{x-a}{b-a}\right) ^{l}\left( \frac{b-x}{b-a}\right) ^{n-l}E_{1jk}\left( a+\frac{l}{n}\left( b-a\right) \right) \\&= E_{jk}\underset{l=0}{\overset{n}{\sum }}\left( {\begin{array}{c}n\\ l\end{array}}\right) \left( \frac{x-a}{b-a}\right) ^{l}\left( \frac{b-x}{b-a}\right) ^{n-l}\left( a+\frac{l}{n}\left( b-a\right) \right) \\&= xE_{jk} \\&= E_{1jk}\left( x\right) \end{aligned}$$

and

$$\begin{aligned} B_{n}\left( E_{2jk};x\right)&= \underset{l=0}{\overset{n}{\sum }}\left( {\begin{array}{c}n\\ l \end{array}}\right) \left( \frac{x-a}{b-a}\right) ^{l}\left( \frac{b-x}{b-a}\right) ^{n-l}E_{2jk}\left( a+\frac{l}{n}\left( b-a\right) \right) \\&= E_{jk}\underset{l=0}{\overset{n}{\sum }}\left( {\begin{array}{c}n\\ l\end{array}}\right) \left( \frac{x-a}{b-a}\right) ^{l}\left( \frac{b-x}{b-a}\right) ^{n-l}\left( a+\frac{l}{n}\left( b-a\right) \right) ^{2} \\&=\left( x^{2}-\frac{1}{n}\left( x-a\right) ^{2}+\frac{\left( b-a\right) \left( x-a\right) }{n}\right) E_{jk} \\&= E_{2jk}\left( x\right) +\left( \frac{\left( b-a\right) \left( x-a\right) }{n}-\frac{1}{n}\left( x-a\right) ^{2}\right) E_{jk}. \end{aligned}$$
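These identities can be verified numerically. The sketch below (our own, with hypothetical names) implements (12) and checks the formula for \(B_{n}\left( E_{2jk};x\right)\) at a sample point.

```python
import numpy as np
from math import comb

def bernstein_matrix(H, n, x, a, b):
    """Matrix-valued Bernstein-type polynomial B_n(H; x) on [a, b], as in (12)."""
    u = (x - a) / (b - a)
    out = None
    for l in range(n + 1):
        node = a + (l / n) * (b - a)
        w = comb(n, l) * u**l * (1 - u)**(n - l)
        term = w * H(node)
        out = term if out is None else out + term
    return out

# Numerical check of the E_2jk identity on [a, b] = [1, 3] with a 2 x 2 test function:
a, b, n, x = 1.0, 3.0, 50, 2.2
E_2_11 = lambda t: np.array([[t**2, 0.0], [0.0, 0.0]])
approx = bernstein_matrix(E_2_11, n, x, a, b)
exact = x**2 - (x - a)**2 / n + (b - a) * (x - a) / n  # predicted (1, 1) entry
print(approx[0, 0], exact)                             # the two values agree
```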

Let \(\left( p_{n}\right)\) be defined as follows

$$\begin{aligned} p_{n}=\left\{ \begin{array}{ll} 0, &{} n=2m, \\ 1, &{} n=2m+1, \end{array} \right. m\in {\mathbb {N}} _{0} \end{aligned}$$

and take the sequence

$$\begin{aligned} x_{n}=\left\{ \begin{array}{ll} 1, &{} n=2m, \\ 0, &{} n=2m+1, \end{array} \right. m\in {\mathbb {N}} _{0}. \end{aligned}$$
(14)

In this case we easily see that

$$\begin{aligned} st_{P}-\lim x_{n}=0, \end{aligned}$$
(15)

while \(\left\{ x_{n}\right\}\) is convergent neither in the usual nor in the statistical sense. Using this sequence \(\left\{ x_{n}\right\}\) and the Bernstein-type polynomials \(B_{n}\) given by (12) or (13), we define the following mLPOs

$$\begin{aligned} \Theta _{n}\left( H;x\right) =\left( 1+x_{n}\right) B_{n}\left( H;x\right) \end{aligned}$$
(16)

where \(n\in {\mathbb {N}} _{0},\) \(x\in \left[ a,b\right]\) and \(H\in C\left( \left[ a,b\right] , {\mathbb {C}} ^{s\times t}\right)\) such that \(H\left( x\right) =\left[ h_{jk}\left( x\right) \right] _{s\times t},\) \(1\le j\le s,\) \(1\le k\le t.\) Thus, using the properties of matrix-valued Bernstein-type polynomials, for each \((j,k) \in \left\{ 1,2,\ldots ,s\right\} \times \left\{ 1,2,\ldots ,t\right\} ,\) one can get the following results at once:

$$\begin{aligned} \left\| \Theta _{n}\left( E_{ijk}\right) -E_{ijk}\right\| _{s\times t}=x_{n}\left\| E_{ijk}\right\| _{s\times t},\text { }i=0,1, \end{aligned}$$

and

$$\begin{aligned} \left\| \Theta _{n}\left( E_{2jk}\right) -E_{2jk}\right\| _{s\times t}\le & {} x_{n}\kappa ^{2}+\frac{1+x_{n}}{n}\left( \kappa -a\right) ^{2} \\&+\frac{\left( 1+x_{n}\right) \left( b-a\right) \left( \kappa -a\right) }{n} \end{aligned}$$

where \(\kappa :=\underset{x\in \left[ a,b\right] }{\max }\left| x\right| =\max \left\{ \left| a\right| ,\left| b\right| \right\} .\) Then, by (15), we get that

$$\begin{aligned} st_{P}-\lim \left\| \Theta _{n}\left( E_{ijk}\right) -E_{ijk}\right\| _{s\times t}=0 \end{aligned}$$

for each \((j,k) \in \left\{ 1,2,\ldots ,s\right\} \times \left\{ 1,2,\ldots ,t\right\}\) and for each \(i=0,1,2.\) Therefore, thanks to our Theorem 2, we get, for all \(H\in C\left( \left[ a,b\right] , {\mathbb {C}} ^{s\times t}\right) ,\)

$$\begin{aligned} st_{P}-\lim \left\| \Theta _{n}\left( H\right) -H\right\| _{s\times t}=0. \end{aligned}$$

However, since \(\left\| \Theta _{n}\left( E_{0jk}\right) -E_{0jk}\right\| _{s\times t}=x_{n}\) and \(\left\{ x_{n}\right\}\) is convergent neither in the usual nor in the statistical sense, Theorem 1 and the statistical Korovkin theorem for matrix-valued functions [9] do not apply to our new operators given by (16).
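For a concrete check of this example (a sketch of ours that relies only on the closed-form identities displayed above, on \([a,b]=[0,1]\)), one can tabulate the test-function errors of \(\Theta _{n}\): along the odd indices, which carry all of the weight \(p_{n},\) they tend to zero, while along the even indices they do not.

```python
# x_n from (14): 1 for even n, 0 for odd n; Theta_n = (1 + x_n) B_n as in (16), on [a, b] = [0, 1].
a, b = 0.0, 1.0
kappa = max(abs(a), abs(b))
x_seq = lambda n: 1.0 if n % 2 == 0 else 0.0

def theta_test_errors(n):
    """Error of Theta_n on the test functions, using B_n E_0jk = E_0jk, B_n E_1jk = E_1jk
    and the E_2jk identity above; e2 is the upper bound displayed in the text."""
    xn = x_seq(n)
    e0 = xn           # ||Theta_n(E_0jk) - E_0jk|| = x_n
    e1 = xn * kappa   # ||Theta_n(E_1jk) - E_1jk|| = x_n * max|x|
    e2 = xn * kappa**2 + (1 + xn) * ((kappa - a)**2 / n + (b - a) * (kappa - a) / n)
    return e0, e1, e2

for n in (11, 101, 1001, 10001):      # odd n (these carry the weight p_n = 1): errors tend to 0
    print(n, theta_test_errors(n))
print(1000, theta_test_errors(1000))  # even n: the error stays away from 0, so Theorem 1 cannot apply
```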

3 Rates of P-statistical convergence

In this section, we present some estimates of rates of P-statistical convergence for Korovkin-type theorems of matrix-valued positive linear operators. The notion of statistical rates of convergence for matrix-valued functions was studied in [9]. It should be noted that there is no single definition of rates of convergence; they have been studied under different definitions by many authors (see, for example, [7,8,9,10, 20]). We show that our P-statistical rates can be more efficient than the classical ones for matrix-valued functions.

Now, we begin with the following definitions:

Definition 3

Let \(\left\{ \alpha _{n}\right\}\) be a nonincreasing sequence of positive real numbers. A sequence \(\{x_{n}\}\) is P-statistically convergent to a number L with the rate of \(o\left( \alpha _{n}\right)\) if, for every \(\varepsilon >0,\)

$$\begin{aligned} \lim _{0<u\rightarrow R^{-}}\frac{1}{p\left( u\right) }\underset{n\in S_{\varepsilon }}{\sum }p_{n}u^{n}=0 \end{aligned}$$

where \(S_{\varepsilon }=\left\{ n\in {\mathbb {N}} _{0}:\left| x_{n}-L\right| \ge \varepsilon \alpha _{n}\right\} .\) In this case, we write

$$\begin{aligned} x_{n}-L=st_{P}-o\left( \alpha _{n}\right) . \end{aligned}$$

Definition 4

Let \(\left\{ \alpha _{n}\right\}\) be a nonincreasing sequence of positive real numbers. A sequence \(\{x_{n}\}\) is P-statistically bounded with the rate of \(O\left( \alpha _{n}\right)\) if there is a constant \(B>0\) such that

$$\begin{aligned} \lim _{0<u\rightarrow R^{-}}\frac{1}{p\left( u\right) }\underset{n\in G_{\varepsilon }}{\sum }p_{n}u^{n}=0 \end{aligned}$$

where \(G_{\varepsilon }=\left\{ n\in {\mathbb {N}} _{0}:\left| x_{n}\right| \ge B\alpha _{n}\right\} .\) In this case, we write

$$\begin{aligned} x_{n}=st_{P}-O\left( \alpha _{n}\right) . \end{aligned}$$
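Definition 3 can also be tested numerically. In the sketch below (ours, for the Abel case \(p_{n}=1\)) the exceptional set \(S_{\varepsilon }\) is essentially the set of perfect squares, whose P-density is zero, so the chosen sequence has the rate \(o(\alpha _{n}).\)

```python
import math

def abel_density_estimate(indicator, u, N=200_000):
    """Truncated estimate of delta_P(S) for the Abel method (p_n = 1, p(u) = 1/(1-u), R = 1)."""
    return (1 - u) * sum(u**n for n in range(N + 1) if indicator(n))

# Toy check of Definition 3: x_n = 1 on the perfect squares and 1/(n+1)^2 elsewhere,
# L = 0, alpha_n = 1/(n+1).  The exceptional set S_eps is (up to finitely many terms)
# the set of squares, whose Abel P-density is 0, so x_n - L = st_P - o(alpha_n).
is_square = lambda n: math.isqrt(n) ** 2 == n
x = lambda n: 1.0 if is_square(n) else 1.0 / (n + 1)**2
alpha = lambda n: 1.0 / (n + 1)
eps, L = 0.5, 0.0
S = lambda n: abs(x(n) - L) >= eps * alpha(n)
for u in (0.9, 0.99, 0.999):
    print(u, abel_density_estimate(S, u))  # tends to 0 as u -> 1^-
```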

Using these definitions, let us give the following lemma:

Lemma 1

Let \(\{x_{n}\}\) and \(\{y_{n}\}\) be sequences, and let \(\left\{ \alpha _{n}\right\}\) and \(\left\{ \beta _{n}\right\}\) be positive nonincreasing sequences. Let \(\gamma _{n}:=\max \left\{ \alpha _{n},\beta _{n}\right\}\) for each \(n\in {\mathbb {N}}_{0}.\) If \(x_{n}-L_{1}=st_{P}-o( \alpha _{n})\) and \(y_{n}-L_{2}=st_{P}-o(\beta _{n}),\) then we have

(i) \((x_{n}-L_{1})\mp (y_{n}-L_{2})=st_{P}-o(\gamma _{n}),\)

(ii) \((x_{n}-L_{1})(y_{n}-L_{2})=st_{P}-o(\gamma _{n}),\)

(iii) \(\lambda (x_{n}-L_{1})=st_{P}-o(\alpha _{n})\) for any real number \(\lambda ,\)

(iv) \(\sqrt{\left| x_{n}-L_{1}\right| }=st_{P}-o(\sqrt{\alpha _{n}}).\)

Proof

(i) Assume that \(x_{n}-L_{1}=st_{P}-o(\alpha _{n})\) and \(y_{n}-L_{2}=st_{P}-o(\beta _{n}).\) Also, for \(\varepsilon >0,\) define

$$\begin{aligned} S_{\varepsilon }&:=\left\{ \text { }n\in {\mathbb {N}} _{0}:\text { }\left| (x_{n}-L_{1})\mp (y_{n}-L_{2})\right| \ge \varepsilon \gamma _{n}\right\} , \\ S_{\varepsilon }^{1}&:=\left\{ n\in {\mathbb {N}} _{0}:\text { }\left| x_{n}-L_{1}\right| \ge \frac{\varepsilon }{2} \alpha _{n}\right\} , \\ S_{\varepsilon }^{2}&:=\left\{ n\in {\mathbb {N}} _{0}:\text { }\left| y_{n}-L_{2}\right| \ge \frac{\varepsilon }{2} \beta _{n}\right\} . \end{aligned}$$

Since \(\gamma _{n}=\max \left\{ \alpha _{n},\beta _{n}\right\} ,\) observe that

$$\begin{aligned} S_{\varepsilon }\subset S_{\varepsilon }^{1}\cup S_{\varepsilon }^{2}, \end{aligned}$$

which gives,

$$\begin{aligned} \delta _{P}\left( S_{\varepsilon }\right) \le \sum \limits _{i=1}^{2}\delta _{P}\left( S_{\varepsilon }^{i}\right) . \end{aligned}$$

Under the hypotheses, we conclude that

$$\begin{aligned} \delta _{P}\left( S_{\varepsilon }\right) =0, \end{aligned}$$

which completes the proof of (i). Since the proofs of (ii),  (iii) and (iv) are similar, we omit them. \(\square\)

Furthermore, similar conclusions hold with the symbol “o” replaced by “O”.

Now, let \(H\in C\left( \left[ a,b\right] , {\mathbb {C}} ^{s\times t}\right)\) with \(H\left( x\right) =\left[ h_{jk}\left( x\right) \right] _{s\times t},\) \(1\le j\le s,\) \(1\le k\le t.\) The classical modulus of continuity of each function \(h_{jk}\) is defined by

$$\begin{aligned} \omega \left( h_{jk};\delta \right) :=\sup \left\{ \left| h_{jk}\left( u\right) -h_{jk}\left( x\right) \right| :u,x\in \left[ a,b\right] ,\left| u-x\right| \le \delta \right\} \text { for }\delta >0. \end{aligned}$$

Then the matrix modulus of continuity of H is defined as follows:

$$\begin{aligned} \omega _{s\times t}\left( H;\delta \right) =\underset{1\le j\le s,\text { } 1\le k\le t}{\max }\omega \left( h_{jk};\delta \right) . \end{aligned}$$

Observe that a function \(H:\left[ a,b\right] \rightarrow {\mathbb {C}} ^{s\times t}\) belongs to \(C\left( \left[ a,b\right] , {\mathbb {C}} ^{s\times t}\right)\) if and only if \(\underset{\delta \rightarrow 0^{+}}{ \lim }\omega _{s\times t}\left( H;\delta \right) =\omega _{s\times t}\left( H;0\right) =0,\) and that

$$\begin{aligned} \omega _{s\times t}\left( H;\gamma \delta \right) \le \left( 1+\left[ \gamma \right] \right) \omega _{s\times t}\left( H;\delta \right) , \end{aligned}$$

for every \(\gamma ,\delta >0,\) where \(\left[ \gamma \right]\) denotes the greatest integer less than or equal to \(\gamma\) (see for details [9]).
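The matrix modulus of continuity can be approximated on a grid; the following sketch (ours, with hypothetical names) does so for a sample \(2\times 2\) matrix-valued function.

```python
import numpy as np

def matrix_modulus(H, delta, a, b, grid=400):
    """Grid approximation of omega_{s x t}(H; delta) = max_{j,k} sup{|h_jk(u) - h_jk(x)| : |u - x| <= delta}."""
    xs = np.linspace(a, b, grid)
    vals = np.stack([H(x) for x in xs])  # shape (grid, s, t)
    best = 0.0
    for p in range(grid):
        for q in range(p, grid):
            if xs[q] - xs[p] > delta:
                break
            best = max(best, np.abs(vals[q] - vals[p]).max())
    return best

H = lambda x: np.array([[np.sin(x), x**2], [abs(x - 0.5), 1.0]])
for d in (0.5, 0.1, 0.01):
    print(d, matrix_modulus(H, d, 0.0, 1.0))  # decreases to 0 as delta -> 0^+, since H is continuous
```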

The following theorem gives simple sufficient conditions for the rate of P-statistical convergence.

Theorem 3

Let \(\left\{ \Theta _{n}\right\}\) and \(E_{0jk}\) be as above, and for each \((j,k) \in \left\{ 1,2,\ldots ,s\right\} \times \left\{ 1,2,\ldots ,t\right\} ,\) let \(\left\{ \alpha _{njk}\right\}\) and \(\left\{ \beta _{n}\right\}\) be two nonincreasing sequences of strictly positive real numbers, and put \(\gamma _{n}:=\max \left\{ \alpha _{njk},\beta _{n}\right\}\) for each \(n\in {\mathbb {N}} _{0}.\) Let \(\delta _{n}:=\sqrt{\sum \nolimits _{j=1}^{s}\sum \nolimits _{k=1}^{t}\left\| \Theta _{n}\left( \varphi _{jk}\right) \right\| _{s\times t}}\) with \(\varphi _{jk}(y)=\left( y-x\right) ^{2}E_{jk}\) \(\left( y,x\in \left[ a,b\right] \right) ,\) where \(E_{jk}\) is the matrix of the canonical basis of \({\mathbb {C}} ^{s\times t}.\) Furthermore, assume that

(i) \(\left\| \Theta _{n}\left( E_{0jk}\right) -E_{0jk}\right\| _{s\times t}=st_{P}-o(\alpha _{njk}),\)

(ii) \(\omega _{s\times t}\left( H;\delta _{n}\right) =st_{P}-o(\beta _{n})\) with \(H\in C\left( \left[ a,b\right] , {\mathbb {C}} ^{s\times t}\right) .\)

Then, we get

$$\begin{aligned} \left\| \Theta _{n}\left( H\right) -H\right\| _{s\times t}=st_{P}-o(\gamma _{n}). \end{aligned}$$

Proof

Let \(H\in C\left( \left[ a,b\right] , {\mathbb {C}} ^{s\times t}\right)\) and \(M:=\left\| H\right\| _{s\times t}.\) Since \(\Theta _{n}\) is a mLPO, we get

$$\begin{aligned} \left| \Theta _{n}\left( H\left( y\right) ;x\right) -H\left( x\right) \right| \le K\Theta _{n}\left( \left| H\left( y\right) -H\left( x\right) \right| ;x\right) +\left| \Theta _{n}\left( H\left( x\right) ;x\right) -H\left( x\right) \right| \end{aligned}$$

where K is a positive constant. Also,

$$\begin{aligned} \left| H\left( y\right) -H\left( x\right) \right| \le \omega _{s\times t}\left( H,\left| y-x\right| \right) E\le \left( 1+\frac{ \left( y-x\right) ^{2}}{\delta ^{2}}\right) \omega _{s\times t}\left( H,\delta \right) E \end{aligned}$$
(17)

where E is the \(s\times t\) matrix with all entries equal to 1. Then, as in the proof of Theorem 2, we can write

$$\begin{aligned} \left| \Theta _{n}\left( H\left( x\right) ;x\right) -H\left( x\right) \right| \le M\overset{s}{\underset{j=1}{\sum }}\underset{k=1}{\overset{t }{\sum }}\left| \Theta _{n}\left( E_{0jk}\left( x\right) ;x\right) -E_{0jk}\left( x\right) \right| \end{aligned}$$
(18)

for each \(x\in \left[ a,b\right] .\) Hence, thanks to (17) and (18), we get

$$\begin{aligned}&\left| \Theta _{n}\left( H\left( y\right) ;x\right) -H\left( x\right) \right| \\\le & {} K\omega _{s\times t}\left( H,\delta \right) \Theta _{n}\left( E;x\right) +\frac{K}{\delta ^{2}}\omega _{s\times t}\left( H,\delta \right) \overset{s}{\underset{j=1}{\sum }}\underset{k=1}{\overset{t}{\sum }}\Theta _{n}\left( \varphi _{jk}\left( y\right) ;x\right) \\&+M\overset{s}{\underset{j=1}{\sum }}\underset{k=1}{\overset{t}{\sum }} \left| \Theta _{n}\left( E_{0jk}\left( x\right) ;x\right) -E_{0jk}\left( x\right) \right| \\\le & {} K\omega _{s\times t}\left( H,\delta \right) E+K\omega _{s\times t}\left( H,\delta \right) \overset{s}{\underset{j=1}{\sum }}\underset{k=1}{ \overset{t}{\sum }}\left| \Theta _{n}\left( E_{0jk}\left( x\right) ;x\right) -E_{0jk}\left( x\right) \right| \\&+M\overset{s}{\underset{j=1}{\sum }}\underset{k=1}{\overset{t}{\sum }} \left| \Theta _{n}\left( E_{0jk}\left( x\right) ;x\right) -E_{0jk}\left( x\right) \right| \\&+\frac{K}{\delta ^{2}}\omega _{s\times t}\left( H,\delta \right) \overset{s }{\underset{j=1}{\sum }}\underset{k=1}{\overset{t}{\sum }}\Theta _{n}\left( \varphi _{jk}\left( y\right) ;x\right) . \end{aligned}$$

Taking supremum over \(x\in \left[ a,b\right]\) and choosing \(\delta =\delta _{n},\) we have

$$\begin{aligned} \left\| \Theta _{n}\left( H\right) -H\right\| _{s\times t}\le & {} 2K\omega _{s\times t}\left( H,\delta _{n}\right) \\&+K\omega _{s\times t}\left( H,\delta _{n}\right) \overset{s}{\underset{j=1}{\sum }}\underset{k=1}{\overset{t}{\sum }}\left\| \Theta _{n}\left( E_{0jk}\right) -E_{0jk}\right\| _{s\times t} \\&+M\overset{s}{\underset{j=1}{\sum }}\underset{k=1}{\overset{t}{\sum }} \left\| \Theta _{n}\left( E_{0jk}\right) -E_{0jk}\right\| _{s\times t}. \end{aligned}$$

Now, combining the above inequality with the hypotheses (i), (ii) and Lemma 1, the proof is completed at once. \(\square\)

Furthermore, to obtain P-statistical rates quantitatively, one can use the symbol “O” instead of “o”; as in the proof of Theorem 3, similar results hold.
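Finally, as an illustration of Theorem 3 (our own computation, not taken from [9] or [18]), for the matrix-valued Bernstein operators of Example 2 the identities above give \(B_{n}\left( \varphi _{jk};x\right) =\frac{\left( x-a\right) \left( b-x\right) }{n}E_{jk},\) so the quantity \(\delta _{n}\) can be evaluated in closed form; the sketch below records it and shows that \(\delta _{n}=O(1/\sqrt{n}),\) so the rate is governed by \(\omega _{s\times t}\left( H;C/\sqrt{n}\right)\) for these operators.

```python
import math

def delta_n_bernstein(n, a, b, s, t):
    """delta_n of Theorem 3 for the matrix-valued Bernstein operator B_n:
    B_n(phi_jk; x) = ((x - a)(b - x) / n) * E_jk, so ||B_n(phi_jk)||_{s x t} = (b - a)^2 / (4 n)
    for every (j, k), and delta_n = sqrt(s * t * (b - a)^2 / (4 n))."""
    return math.sqrt(s * t * (b - a) ** 2 / (4 * n))

for n in (10, 100, 1000, 10000):
    print(n, delta_n_bernstein(n, a=0.0, b=1.0, s=2, t=2))  # behaves like O(1/sqrt(n))
```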