Abstract
In this paper, we deal with an approximation problem for matrix-valued positive linear operators via statistical convergence with respect to the power series method, a recently introduced type of statistical convergence. We then present an application showing that our theorem is more applicable than the classical one, and we compute the rates of P-statistical convergence of these operators.
1 Introduction and preliminaries
In approximation theory, the classical Korovkin theorem occupies a significant place because it allows us to check convergence with minimal computation [16]. This theorem mainly provides an approximation to scalar-valued functions by means of positive linear operators. Moreover, it has been discussed by many authors and has made a significant contribution to the literature. From a different perspective, Serra-Capizzano established a new Korovkin-type result for matrix-valued functions [18]. After this work, Duman and Erkuş-Duman [9] studied the Korovkin-type theorem for matrix-valued functions via the notion of A-statistical convergence.
Statistical convergence was first introduced by Fast [13] and Steinhaus [22], independently. Since then, it has been studied in various fields, one of which is Korovkin-type approximation theory. The classical Korovkin theorem was improved by Gadjiev and Orhan [15] with respect to the notion of statistical convergence (see, for example, [1, 2, 11, 12, 19]). More recently, Orhan and Ünver introduced statistical convergence with respect to the power series method, a new type of statistical convergence [23]. This notion is significant because it is incomparable with ordinary statistical convergence and therefore yields genuinely new results; it has already been studied by several authors (see [3, 4, 6, 20]).
The purpose of this paper is to prove a P-statistical Korovkin theorem for matrix-valued functions. We compare this new theorem with the classical one and thereby obtain a more general result. We also compute the rates of P-statistical convergence.
We begin by recalling statistical convergence and convergence in the sense of the power series method for a sequence \(\{x_{n}\}:\)
Let S be a subset of \({\mathbb {N}}_{0},\) the set of natural numbers. The natural density of S, denoted by \(\delta (S),\) is given by

$$\begin{aligned} \delta \left( S\right) =\lim _{n\rightarrow \infty }\frac{1}{n+1}\left| \left\{ k\le n:k\in S\right\} \right| , \end{aligned}$$

whenever the limit exists, where \(\left| \cdot \right|\) denotes the cardinality of the set [17].
A sequence \(\{x_{n}\}\) of numbers is statistically convergent to L provided that, for every \(\varepsilon >0,\)

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{1}{n+1}\left| \left\{ k\le n:\left| x_{k}-L\right| \ge \varepsilon \right\} \right| =0, \end{aligned}$$

that is, the set

$$\begin{aligned} \left\{ n\in {\mathbb {N}}_{0}:\left| x_{n}-L\right| \ge \varepsilon \right\} \end{aligned}$$

has natural density zero. This is denoted by \(st-\lim _{n}x_{n}=L\) [13, 22]. It is worth noting that every convergent sequence (in the usual sense) is statistically convergent to the same number, whereas a statistically convergent sequence need not be convergent.
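To make the definition concrete, here is a small numerical sketch (our own illustration, not from the paper): the sequence that equals 1 on the perfect squares and 0 elsewhere is statistically convergent to 0, because the squares have natural density zero; all function names below are ours.

```python
import math

def natural_density_estimate(indicator, N):
    """Estimate delta(S) by |{k <= N : k in S}| / (N + 1)."""
    return sum(1 for k in range(N + 1) if indicator(k)) / (N + 1)

def x(n):
    """x_n = 1 if n is a perfect square, 0 otherwise (not convergent)."""
    r = math.isqrt(n)
    return 1.0 if r * r == n else 0.0

eps = 0.5
# S_eps = {n : |x_n - 0| >= eps} is exactly the set of perfect squares
for N in (10**3, 10**4, 10**5):
    d = natural_density_estimate(lambda k: abs(x(k)) >= eps, N)
    print(N, d)  # the density estimates decrease toward 0
```

The estimates behave like \(1/\sqrt{N}\), so \(st-\lim _{n}x_{n}=0\) although \(\{x_{n}\}\) diverges in the usual sense.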
Let \(\left\{ p_{n}\right\}\) be a non-negative real sequence such that \(p_{0}>0\) and the corresponding power series

$$\begin{aligned} p\left( u\right) :=\sum _{n=0}^{\infty }p_{n}u^{n} \end{aligned}$$

has radius of convergence R with \(0<R\le \infty .\) If the limit

$$\begin{aligned} \lim _{0<u\rightarrow R^{-}}\frac{1}{p\left( u\right) }\sum _{n=0}^{\infty }x_{n}p_{n}u^{n}=L \end{aligned}$$
exists, then we say that \(\{x_{n}\}\) is convergent in the sense of the power series method [14, 21]. Note that the method is regular if and only if \(\lim \limits _{0<u\rightarrow R^{-}}\dfrac{p_{n}u^{n}}{p\left( u\right) }=0\) for every n (see, e.g., [5]).
We remark that in case of \(R=1,\) the power series methods coincide with the Abel summability method and logarithmic summability method when \(p_{n}=1\) and \(p_{n}=\frac{1}{n+1},\) respectively. In the event of \(R=\infty\) and \(p_{n}=\frac{1}{n!},\) the power series method coincides with the Borel summability method.
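For instance, with \(p_{n}=1\) the method reduces to the Abel method: \(R=1,\) \(p(u)=1/(1-u),\) and the mean of \(\{x_{n}\}\) at u is \((1-u)\sum _{n}x_{n}u^{n}.\) The truncated sketch below (our own; the function names are ours) illustrates regularity numerically.

```python
def abel_mean(x, u, terms=200_000):
    """Truncated Abel mean (1 - u) * sum_{n < terms} x(n) * u**n."""
    total, power = 0.0, 1.0
    for n in range(terms):
        total += x(n) * power
        power *= u
    return (1.0 - u) * total

x = lambda n: 1.0 + 1.0 / (n + 1.0)  # converges to 1 in the usual sense
for u in (0.9, 0.99, 0.999):
    print(u, abel_mean(x, u))  # approaches 1 as u -> 1^-
```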
Here and in the sequel, the power series method is always assumed to be regular.
Many researchers are interested in summability methods defined by power series, which are in general non-matrix methods. The best-known examples are the Abel method and the Borel method. It is worth pointing out that Ünver and Orhan [23] recently introduced P-statistical convergence, which is incomparable with statistical convergence:
Definition 1
[23] Let \(S\subset {\mathbb {N}} _{0}.\) If the limit

$$\begin{aligned} \delta _{P}\left( S\right) :=\lim _{0<u\rightarrow R^{-}}\frac{1}{p\left( u\right) }\sum _{n\in S}p_{n}u^{n} \end{aligned}$$

exists, then \(\delta _{P}\left( S\right)\) is called the P-density of S. From the definitions of a power series method and of P-density it is clear that \(0\le \delta _{P}\left( S\right) \le 1\) whenever it exists.
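As an illustration (ours, not from the paper), take the Abel method \(p_{n}=1\): the P-density of the even numbers works out to \(\lim _{0<u\rightarrow 1^{-}}\frac{1-u}{1-u^{2}}=\frac{1}{2},\) which the truncated computation below confirms; the function names are ours.

```python
def abel_density(indicator, u, terms=200_000):
    """Truncated estimate of (1/p(u)) * sum_{n in S} p_n u^n with p_n = 1,
    i.e. p(u) = 1/(1 - u) and radius of convergence R = 1."""
    return (1.0 - u) * sum(u**n for n in range(terms) if indicator(n))

for u in (0.9, 0.99, 0.999):
    print(u, abel_density(lambda n: n % 2 == 0, u))  # tends to 1/2
```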
We record some basic properties of P-density:

(i) \(\delta _{P}( {\mathbb {N}} _{0})=1,\)

(ii) if \(S\subset G\) then \(\delta _{P}(S)\le \delta _{P}(G),\)

(iii) if S has P-density then \(\delta _{P}( {\mathbb {N}} _{0}\setminus S)=1-\delta _{P}(S).\)
Definition 2
[23] Let \(\{x_{n}\}\) be a sequence. Then \(\{x_{n}\}\) is said to be statistically convergent with respect to the power series method (P-statistically convergent) to L if, for any \(\varepsilon >0,\)

$$\begin{aligned} \lim _{0<u\rightarrow R^{-}}\frac{1}{p\left( u\right) }\sum _{n\in S_{\varepsilon }}p_{n}u^{n}=0, \end{aligned}$$

where \(S_{\varepsilon }=\left\{ n\in {\mathbb {N}} _{0}:\left| x_{n}-L\right| \ge \varepsilon \right\} ,\) that is, \(\delta _{P}\left( S_{\varepsilon }\right) =0\) for any \(\varepsilon >0.\) This is denoted by \(st_{P}-\lim x_{n}=L.\)
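For a concrete check (our own construction, not the paper's Example 1): under the Abel method, the sequence equal to 1 on the perfect squares and 0 elsewhere is P-statistically convergent to 0, since for \(0<\varepsilon \le 1\) the set \(S_{\varepsilon }\) is the set of squares and its Abel density \((1-u)\sum _{k}u^{k^{2}}\) vanishes as \(u\rightarrow 1^{-}.\)

```python
def abel_density_of_squares(u, terms=2_000):
    """Truncated Abel density (1 - u) * sum_{k < terms} u**(k*k) of the
    set of perfect squares S = {0, 1, 4, 9, ...}."""
    return (1.0 - u) * sum(u ** (k * k) for k in range(terms))

for u in (0.9, 0.99, 0.999, 0.9999, 0.99999):
    print(u, abel_density_of_squares(u))  # decreases toward 0
```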
The following example shows that statistical convergence and P-statistical convergence do not imply each other.
Example 1
Let \(\left\{ p_{n}\right\}\) be defined as follows
and take the sequence \(\left\{ x_{n}\right\}\) defined by
Since, for any \(\varepsilon >0,\) \(\lim \nolimits _{0<u\rightarrow R^{-}}\frac{1}{p\left( u\right) }\sum \nolimits _{n:\left| x_{n}-L\right| >\varepsilon }p_{n}u^{n}=0,\) the sequence \(\left\{ x_{n}\right\}\) is P-statistically convergent to 0. One can easily see that \(\left\{ x_{n}\right\}\) is not statistically convergent to 0. On the other hand, let \(\left\{ y_{n}\right\}\) be a sequence defined by
It is not difficult to observe that \(\left\{ y_{n}\right\}\) is statistically convergent to 0; however, it is not P-statistically convergent to 0.
2 P-statistical approximation for matrix-valued functions
This section aims to prove a new Korovkin-type approximation theorem and to present an application showing that our theorem is more applicable than the classical one.
The symbol \(C\left( \left[ a,b\right] , {\mathbb {C}} ^{s\times t}\right)\) stands for the space of all continuous functions H defined on \(\left[ a,b\right]\) and taking values in \({\mathbb {C}} ^{s\times t},\) where \(s,t\in {\mathbb {N}} _{0}\) and \({\mathbb {C}} ^{s\times t}\) is the space of \(s\times t\) complex matrices; that is, every such H has the form

$$\begin{aligned} H\left( x\right) =\left[ h_{jk}\left( x\right) \right] _{s\times t},\quad 1\le j\le s,\ 1\le k\le t, \end{aligned}$$

where \(\left[ a_{jk}\right] _{s\times t}\) stands for the \(s\times t\) matrix whose entry in position \(\left( j,k\right)\) is \(a_{jk}.\)
By the continuity of H, all scalar-valued functions \(h_{jk}\) are continuous on \(\left[ a,b\right] .\) Furthermore, the norm \(\left\| \cdot \right\| _{s\times t}\) on the space \(C\left( \left[ a,b\right] , {\mathbb {C}} ^{s\times t}\right)\) is defined by

$$\begin{aligned} \left\| H\right\| _{s\times t}:=\max _{\begin{array}{c} 1\le j\le s \\ 1\le k\le t \end{array}}\left\| h_{jk}\right\| , \end{aligned}$$

where \(\left\| h_{jk}\right\|\) stands for the usual supremum norm of \(h_{jk}\) on the interval \(\left[ a,b\right] .\) Then, the function H given in (1) may also be written as

$$\begin{aligned} H\left( x\right) =\sum _{j=1}^{s}\sum _{k=1}^{t}h_{jk}\left( x\right) E_{jk}. \end{aligned}$$
Here and in the sequel, we consider the test functions

$$\begin{aligned} E_{ijk}\left( y\right) :=y^{i}E_{jk}\quad \left( i=0,1,2\right) , \end{aligned}$$

where \(E_{jk}\) denotes the matrix of the canonical basis of \({\mathbb {C}} ^{s\times t}\) being 1 in position \(\left( j,k\right)\) and zero otherwise.
Let \(\Theta :C\left( \left[ a,b\right] , {\mathbb {C}} ^{s\times t}\right) \rightarrow C\left( \left[ a,b\right] , {\mathbb {C}} ^{s\times t}\right)\) be an operator, and let us assume that
(i) \(\Theta \left( \alpha H+\beta G\right) =\alpha \Theta \left( H\right) +\beta \Theta \left( G\right)\) for any \(\alpha ,\beta \in {\mathbb {C}}\) and \(H,G\in C\left( \left[ a,b\right] , {\mathbb {C}} ^{s\times t}\right) ,\)
(ii) \(\left| \Theta \left( H\right) \right| \le K\Theta \left( \left| H\right| \right)\) for any function \(H\in C\left( \left[ a,b \right] , {\mathbb {C}} ^{s\times t}\right)\) and for a fixed positive constant K.

Under the above assumptions, the operator \(\Theta\) is called a matrix linear positive operator, or simply an mLPO. The inequality in (ii) is understood componentwise, i.e., it holds for every component \((j,k) \in \left\{ 1,2,\ldots ,s\right\} \times \left\{ 1,2,\ldots ,t\right\}\) (see also [9, 18]).
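A natural way to build such operators, and the reading we sketch below (our own, with K = 1; all names are ours), is to apply a scalar positive linear operator entrywise, here the classical Bernstein operator on [0, 1], and to check the componentwise inequality of (ii) at a sample point.

```python
from math import comb

def bernstein(h, n):
    """Scalar positive linear operator: the n-th Bernstein operator on [0, 1]."""
    def Bh(x):
        return sum(h(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
                   for k in range(n + 1))
    return Bh

def theta(H, n):
    """Entrywise operator: H is an s x t nested list of scalar functions."""
    return [[bernstein(h, n) for h in row] for row in H]

# 1 x 2 matrix-valued H(x) = [x - 0.5, x**2]
H = [[lambda x: x - 0.5, lambda x: x * x]]
# |H| entrywise; the extra lambda binds each entry (avoids late binding)
absH = [[(lambda g: (lambda x: abs(g(x))))(h) for h in row] for row in H]
TH, TabsH = theta(H, 30), theta(absH, 30)
x0 = 0.3
ok = all(abs(TH[0][k](x0)) <= TabsH[0][k](x0) + 1e-12 for k in range(2))
print(ok)  # |Theta(H)| <= Theta(|H|) holds componentwise at x0
```

Linearity of `theta` in H is immediate from the linearity of the scalar operator, so this construction satisfies (i) as well.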
We first recall the following Korovkin-type approximation theorem, introduced by Serra-Capizzano:
Theorem 1
[18] Let \(\left\{ \Theta _{n}\right\}\) be a sequence of mLPOs from \(C\left( \left[ a,b\right] , {\mathbb {C}} ^{s\times t}\right)\) into itself. Then,

$$\begin{aligned} \lim _{n\rightarrow \infty }\left\| \Theta _{n}\left( E_{ijk}\right) -E_{ijk}\right\| _{s\times t}=0 \end{aligned}$$

for each \((j,k) \in \left\{ 1,2,\ldots ,s\right\} \times \left\{ 1,2,\ldots ,t\right\}\) and for each \(i=0,1,2,\) where \(E_{ijk}\) is given by (4), if and only if, for every \(H\in C\left( \left[ a,b\right] , {\mathbb {C}} ^{s\times t}\right)\) as in (1),

$$\begin{aligned} \lim _{n\rightarrow \infty }\left\| \Theta _{n}\left( H\right) -H\right\| _{s\times t}=0. \end{aligned}$$
Now, we can give the P-statistical Korovkin theorem for mLPOs, which is the main result of this paper.
Theorem 2
Let \(\left\{ \Theta _{n}\right\}\) be a sequence of mLPOs from \(C\left( \left[ a,b\right] , {\mathbb {C}} ^{s\times t}\right)\) into itself. Then,

$$\begin{aligned} st_{P}-\lim _{n}\left\| \Theta _{n}\left( E_{ijk}\right) -E_{ijk}\right\| _{s\times t}=0 \end{aligned}$$

for each \((j,k) \in \left\{ 1,2,\ldots ,s\right\} \times \left\{ 1,2,\ldots ,t\right\}\) and for each \(i=0,1,2,\) where \(E_{ijk}\) is given by (4), if and only if, for every \(H\in C\left( \left[ a,b \right] , {\mathbb {C}} ^{s\times t}\right)\) as in (1),

$$\begin{aligned} st_{P}-\lim _{n}\left\| \Theta _{n}\left( H\right) -H\right\| _{s\times t}=0. \end{aligned}$$
Proof
Since each function \(E_{ijk}\) defined by (4) belongs to \(C\left( \left[ a,b\right] , {\mathbb {C}} ^{s\times t}\right) ,\) the implication (6) \(\Longrightarrow\) (5) follows immediately. Therefore, only the necessity part really requires a proof. Suppose that (5) holds. Let \(H\in C\left( \left[ a,b\right] , {\mathbb {C}} ^{s\times t}\right)\) and \(x\in \left[ a,b\right]\) be fixed. We first estimate the expression \(\left| \Theta _{n}\left( H;x\right) -H\left( x\right) \right| ,\) where the symbol \(\left| B\right|\) stands for the matrix whose entries are the absolute values of the entries of the matrix B. Notice that the function \(H\left( x\right) =\left[ h_{jk}\left( x\right) \right] _{s\times t},\) \(1\le j\le s,\) \(1\le k\le t,\) can be written as

$$\begin{aligned} H\left( x\right) =\sum _{j=1}^{s}\sum _{k=1}^{t}h_{jk}\left( x\right) E_{jk}, \end{aligned}$$

where \(E_{jk}\) denotes the matrix of the canonical basis of \({\mathbb {C}} ^{s\times t}\) being 1 in position \(\left( j,k\right)\) and zero otherwise. Then we have
On the other hand, since each \(h_{jk}\) is continuous on \(\left[ a,b\right] ,\) for a given \(\varepsilon >0\) there exists a positive number \(\delta\) such that, for every \(y\in \left[ a,b\right] ,\)

$$\begin{aligned} \left| h_{jk}\left( y\right) -h_{jk}\left( x\right) \right| <\varepsilon +\frac{2M_{jk}}{\delta ^{2}}\left( y-x\right) ^{2}, \end{aligned}$$

where \(M_{jk}:=\left\| h_{jk}\right\| =\sup _{x\in \left[ a,b\right] }\left| h_{jk}\left( x\right) \right| .\) Observe that (8) implies

$$\begin{aligned} \left| H\left( y\right) -H\left( x\right) \right| <\left( \varepsilon +\frac{2M}{\delta ^{2}}\left( y-x\right) ^{2}\right) E, \end{aligned}$$

where E is the \(s\times t\) matrix with all entries equal to 1, and

$$\begin{aligned} M:=\max _{\begin{array}{c} 1\le j\le s \\ 1\le k\le t \end{array}}M_{jk}. \end{aligned}$$
We recall that the inequality and the absolute value in (9) are understood componentwise, as stated earlier. Then, in view of (7) and (9), using a technique similar to that in the proof of Theorem 2.1 in [9], for a fixed positive constant K, we get

where \(c:=\max \left\{ \left| a\right| ,\left| b\right| \right\} .\) Also, taking the supremum over \(x\in \left[ a,b\right]\) and taking the maximum of all entries of the corresponding matrices, we obtain
where
Now, for a given \(\varepsilon ^{\prime }>0,\) choose an \(\varepsilon >0\) such that \(\varepsilon <\varepsilon ^{\prime }/K.\) Then, define the following sets:
where \(i=0,1,2\) and \((j,k) \in \left\{ 1,2,\ldots ,s\right\} \times \left\{ 1,2,\ldots ,t\right\} .\) Hence, it follows from (10) that
So, we get
Letting \(0<u\rightarrow R^{-}\) in (11) and using hypothesis (5), we have
which means
The proof is completed. \(\square\)
We now present an example showing that our Korovkin-type approximation result is more applicable than those studied before.
Example 2
We define matrix-valued Bernstein-type polynomials as follows:
where \(n\in {\mathbb {N}} _{0},\) \(x\in \left[ a,b\right]\) and \(H\in C\left( \left[ a,b\right] , {\mathbb {C}} ^{s\times t}\right)\) such that \(H\left( x\right) =\left[ h_{jk}\left( x\right) \right] _{s\times t},\) \(1\le j\le s,\) \(1\le k\le t.\) Then, notice that the matrix-valued Bernstein-type polynomials \(B_{n}\) can also be written as follows:
where \(E_{jk}\) is as above. So, by (12) and (13), we get, for each \((j,k) \in \left\{ 1,2,\ldots ,s\right\} \times \left\{ 1,2,\ldots ,t\right\} ,\) that
Also, again for each \((j,k) \in \left\{ 1,2,\ldots ,s\right\} \times \left\{ 1,2,\ldots ,t\right\} ,\)
and
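These computations rest on the classical scalar identities \(B_{n}(e_{0};x)=1,\) \(B_{n}(e_{1};x)=x\) and \(B_{n}(e_{2};x)=x^{2}+x(1-x)/n\) applied entrywise; the sketch below (ours, assuming the classical Bernstein polynomials on [0, 1]) verifies them numerically on a grid.

```python
from math import comb

def bernstein_value(h, n, x):
    """B_n(h; x) = sum_k h(k/n) * C(n,k) * x**k * (1-x)**(n-k) on [0, 1]."""
    return sum(h(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

n = 40
grid = [i / 100 for i in range(101)]
err0 = max(abs(bernstein_value(lambda y: 1.0, n, x) - 1.0) for x in grid)
err1 = max(abs(bernstein_value(lambda y: y, n, x) - x) for x in grid)
err2 = max(abs(bernstein_value(lambda y: y * y, n, x) - x * x) for x in grid)
print(err0, err1, err2)  # err2 peaks at x = 1/2 with value 1/(4n) = 0.00625
```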
Let \(\left\{ p_{n}\right\}\) be defined as follows
and take the sequence
In this case we easily see that
while \(\left\{ x_{n}\right\}\) is convergent in neither the usual nor the statistical sense. Using this sequence \(\left\{ x_{n}\right\}\) and the Bernstein-type polynomials \(B_{n}\) given by (12) or (13), we define the following mLPOs
where \(n\in {\mathbb {N}} _{0},\) \(x\in \left[ a,b\right]\) and \(H\in C\left( \left[ a,b\right] , {\mathbb {C}} ^{s\times t}\right)\) such that \(H\left( x\right) =\left[ h_{jk}\left( x\right) \right] _{s\times t},\) \(1\le j\le s,\) \(1\le k\le t.\) Thus, using the properties of matrix-valued Bernstein-type polynomials, for each \((j,k) \in \left\{ 1,2,\ldots ,s\right\} \times \left\{ 1,2,\ldots ,t\right\} ,\) one can get the following results at once:
and
where \(\kappa :=\max \nolimits _{x\in \left[ a,b\right] }\left| x\right| .\) Then, by (15), we get that
for each \((j,k) \in \left\{ 1,2,\ldots ,s\right\} \times \left\{ 1,2,\ldots ,t\right\}\) and for each \(i=0,1,2.\) Therefore, thanks to our Theorem 2, we get, for all \(H\in C\left( \left[ 0,1\right] , {\mathbb {C}} ^{s\times t}\right) ,\)
However, since \(\left\| B_{n}\left( E_{ijk};x\right) -E_{ijk}\right\| _{s\times t}=x_{n},\) \(i=0,1,\) and \(\left\{ x_{n}\right\}\) is convergent in neither the usual nor the statistical sense, Theorem 1 and the statistical Korovkin theorem for matrix-valued functions [9] do not work for our new operators given by (16).
3 Rates of P-statistical convergence
In this section, we present some estimates of the rates of P-statistical convergence in Korovkin-type theorems for matrix-valued positive linear operators. The notion of statistical rates of convergence for matrix-valued functions was studied in [9]. It should be noted that there is no single definition of rates of convergence; they have been studied under different definitions by many authors (see, for example, [7,8,9,10, 20]). We show that our P-statistical rates are more efficient than the classical ones for matrix-valued functions.
Now, we begin with the following definitions:
Definition 3
Let \(\left\{ \alpha _{n}\right\}\) be a nonincreasing sequence of positive real numbers. A sequence \(\{x_{n}\}\) is P-statistically convergent to a number L with the rate of \(o\left( \alpha _{n}\right)\) if, for every \(\varepsilon >0,\)

$$\begin{aligned} \lim _{0<u\rightarrow R^{-}}\frac{1}{p\left( u\right) }\sum _{n\in S_{\varepsilon }}p_{n}u^{n}=0, \end{aligned}$$

where \(S_{\varepsilon }=\left\{ n\in {\mathbb {N}} _{0}:\left| x_{n}-L\right| \ge \varepsilon \alpha _{n}\right\} .\) In this case, we write

$$\begin{aligned} x_{n}-L=st_{P}-o\left( \alpha _{n}\right) . \end{aligned}$$
Definition 4
Let \(\left\{ \alpha _{n}\right\}\) be a nonincreasing sequence of positive real numbers. A sequence \(\{x_{n}\}\) is P-statistically bounded with the rate of \(O\left( \alpha _{n}\right)\) if there is a \(B>0\) such that

$$\begin{aligned} \lim _{0<u\rightarrow R^{-}}\frac{1}{p\left( u\right) }\sum _{n\in G_{\varepsilon }}p_{n}u^{n}=0, \end{aligned}$$

where \(G_{\varepsilon }=\left\{ n\in {\mathbb {N}} _{0}:\left| x_{n}\right| \ge B\alpha _{n}\right\} .\) In this case, we write

$$\begin{aligned} x_{n}=st_{P}-O\left( \alpha _{n}\right) . \end{aligned}$$
Using these definitions, let us give the following lemma:
Lemma 1
Let \(\{x_{n}\}\) and \(\{y_{n}\}\) be sequences, and let \(\left\{ \alpha _{n}\right\}\) and \(\left\{ \beta _{n}\right\}\) be positive nonincreasing sequences. Let \(\gamma _{n}:=\max \left\{ \alpha _{n},\beta _{n}\right\}\) for each \(n\in {\mathbb {N}}_{0}.\) If \(x_{n}-L_{1}=st_{P}-o( \alpha _{n})\) and \(y_{n}-L_{2}=st_{P}-o(\beta _{n}),\) then we have
(i) \((x_{n}-L_{1})\mp (y_{n}-L_{2})=st_{P}-o(\gamma _{n}),\)

(ii) \((x_{n}-L_{1})(y_{n}-L_{2})=st_{P}-o(\gamma _{n}),\)

(iii) \(\lambda (x_{n}-L_{1})=st_{P}-o(\alpha _{n})\) for any real number \(\lambda ,\)

(iv) \(\sqrt{\left| x_{n}-L_{1}\right| }=st_{P}-o(\alpha _{n}).\)
Proof
(i) Assume that \(x_{n}-L_{1}=st_{P}-o(\alpha _{n})\) and \(y_{n}-L_{2}=st_{P}-o(\beta _{n}).\) Also, for \(\varepsilon >0,\) define

$$\begin{aligned} S&:=\left\{ n\in {\mathbb {N}} _{0}:\left| \left( x_{n}-L_{1}\right) \mp \left( y_{n}-L_{2}\right) \right| \ge \varepsilon \gamma _{n}\right\} ,\\ S_{1}&:=\left\{ n\in {\mathbb {N}} _{0}:\left| x_{n}-L_{1}\right| \ge \frac{\varepsilon }{2}\alpha _{n}\right\} ,\\ S_{2}&:=\left\{ n\in {\mathbb {N}} _{0}:\left| y_{n}-L_{2}\right| \ge \frac{\varepsilon }{2}\beta _{n}\right\} . \end{aligned}$$

Since \(\gamma _{n}=\max \left\{ \alpha _{n},\beta _{n}\right\} ,\) observe that \(S\subset S_{1}\cup S_{2},\) which gives, for \(0<u<R,\)

$$\begin{aligned} \frac{1}{p\left( u\right) }\sum _{n\in S}p_{n}u^{n}\le \frac{1}{p\left( u\right) }\sum _{n\in S_{1}}p_{n}u^{n}+\frac{1}{p\left( u\right) }\sum _{n\in S_{2}}p_{n}u^{n}. \end{aligned}$$

Letting \(0<u\rightarrow R^{-}\) and using the hypotheses, we conclude that

$$\begin{aligned} \lim _{0<u\rightarrow R^{-}}\frac{1}{p\left( u\right) }\sum _{n\in S}p_{n}u^{n}=0, \end{aligned}$$
which completes the proof of (i). Since the proofs of (ii), (iii) and (iv) are similar, we omit them. \(\square\)
Furthermore, similar conclusions hold with the symbol “o” replaced by “O”.
Now, let \(H\in C\left( \left[ a,b\right] , {\mathbb {C}} ^{s\times t}\right)\) with \(H\left( x\right) =\left[ h_{jk}\left( x\right) \right] _{s\times t},\) \(1\le j\le s,\) \(1\le k\le t.\) Consider the classical modulus of continuity of each function \(h_{jk},\) given by

$$\begin{aligned} \omega \left( h_{jk};\delta \right) :=\sup _{\begin{array}{c} x,y\in \left[ a,b\right] \\ \left| y-x\right| \le \delta \end{array}}\left| h_{jk}\left( y\right) -h_{jk}\left( x\right) \right| ,\quad \delta >0. \end{aligned}$$

Then the matrix modulus of continuity of H is defined by

$$\begin{aligned} \omega _{s\times t}\left( H;\delta \right) :=\max _{\begin{array}{c} 1\le j\le s \\ 1\le k\le t \end{array}}\omega \left( h_{jk};\delta \right) . \end{aligned}$$

Observe that \(H\in C\left( \left[ a,b\right] , {\mathbb {C}} ^{s\times t}\right)\) if and only if \(\lim \nolimits _{\delta \rightarrow 0^{+}}\omega _{s\times t}\left( H;\delta \right) =\omega _{s\times t}\left( H;0\right) =0,\) and

$$\begin{aligned} \omega _{s\times t}\left( H;\gamma \delta \right) \le \left( 1+\left[ \gamma \right] \right) \omega _{s\times t}\left( H;\delta \right) \end{aligned}$$

for every \(\gamma ,\delta >0,\) where \(\left[ \gamma \right]\) denotes the greatest integer less than or equal to \(\gamma\) (see [9] for details).
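A brute-force discretization makes these definitions tangible; the sketch below is ours — the grid resolution and names are our choices, and taking the maximum over entries reflects our reading of the matrix modulus — and approximates \(\omega\) and \(\omega _{s\times t}\) on [0, 1].

```python
def modulus(h, delta, a=0.0, b=1.0, steps=400):
    """Approximate omega(h; delta) = sup{|h(y) - h(x)| : |y - x| <= delta}."""
    pts = [a + (b - a) * i / steps for i in range(steps + 1)]
    return max(abs(h(y) - h(x)) for x in pts for y in pts if abs(y - x) <= delta)

def matrix_modulus(H, delta):
    """Matrix modulus: maximum of the entrywise moduli of continuity."""
    return max(modulus(h, delta) for row in H for h in row)

H = [[lambda x: x, lambda x: x * x]]
# for h(x) = x**2 on [0, 1]: omega(h; 0.1) = 1 - 0.9**2 = 0.19
print(matrix_modulus(H, 0.1))
```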
The following theorem gives simple sufficient conditions for the rate of P-statistical convergence.
Theorem 3
Let \(\left\{ \Theta _{n}\right\}\) and \(E_{0jk}\) be as above, and for each \((j,k) \in \left\{ 1,2,\ldots ,s\right\} \times \left\{ 1,2,\ldots ,t\right\} ,\) let \(\left\{ \alpha _{njk}\right\}\) and \(\left\{ \beta _{n}\right\}\) be two nonincreasing sequences of strictly positive real numbers, and put \(\gamma _{n}:=\max \left\{ \alpha _{njk},\beta _{n}\right\}\) for each \(n\in {\mathbb {N}} _{0}.\) Let \(\delta _{n}:=\sqrt{\sum \nolimits _{j=1}^{s}\sum \nolimits _{k=1}^{t}\left\| \Theta _{n}\left( \varphi _{jk}\right) \right\| _{s\times t}}\) with \(\varphi _{jk}(y)=\left( y-x\right) ^{2}E_{jk}\) \(\left( y,x\in \left[ a,b\right] \right) ,\) where \(E_{jk}\) is the matrix of the canonical basis of \({\mathbb {C}} ^{s\times t}.\) Furthermore, assume that
(i) \(\left\| \Theta _{n}\left( E_{0jk}\right) -E_{0jk}\right\| _{s\times t}=st_{P}-o(\alpha _{njk}),\)

(ii) \(\omega _{s\times t}\left( H;\delta _{n}\right) =st_{P}-o(\beta _{n})\) with \(H\in C\left( \left[ a,b\right] , {\mathbb {C}} ^{s\times t}\right) .\)

Then, we get

$$\begin{aligned} \left\| \Theta _{n}\left( H\right) -H\right\| _{s\times t}=st_{P}-o(\gamma _{n}). \end{aligned}$$
Proof
Let \(H\in C\left( \left[ a,b\right] , {\mathbb {C}} ^{s\times t}\right)\) and \(M:=\left\| H\right\| _{s\times t}.\) Since each \(\Theta _{n}\) is an mLPO, we get
where K is a positive constant. Also,
where E is the \(s\times t\) matrix with all entries equal to 1. Then, as stated earlier in the proof of Theorem 2, we can write
for each \(x\in \left[ a,b\right] .\) Hence, thanks to (17) and (18), we get
Taking supremum over \(x\in \left[ a,b\right]\) and choosing \(\delta =\delta _{n},\) we have
Now, considering the above inequality together with hypotheses (i) and (ii) and Lemma 1, the proof is completed at once. \(\square\)

Furthermore, to obtain P-statistical rates quantitatively, one can use the symbol “O” instead of “o”; as in the proof of Theorem 3, similar results hold.
Data availability
Manuscript has no associated data.
References
Anastassiou, G.A., and O. Duman. 2011. Towards intelligent modeling: statistical approximation theory. Intelligent systems reference library, vol. 14. Berlin: Springer.
Bardaro, C., A. Boccuto, K. Demirci, I. Mantellini, and S. Orhan. 2015. Korovkin-type theorems for modular \(\Psi -A-\)statistical convergence. J. Funct. Sp. 2015: 1–11. https://doi.org/10.1155/2015/160401.
Şahin Bayram, N. 2021. Criteria for statistical convergence with respect to power series methods. Positivity 25 (3): 1097–1105. https://doi.org/10.1007/s11117-020-00801-6.
Belen, C., M. Yıldırım, and C. Sümbül. 2020. On statistical and strong convergence with respect to a modulus function and a power series method. Filomat 34 (12): 3981–3993. https://doi.org/10.2298/FIL2012981B.
Boos, J. 2000. Classical and modern methods in summability. Oxford: Oxford University Press.
Çınar, S., and S. Yıldız. 2021. \(P\)-statistical summation process of sequences of convolution operators. Indian J. Pure Appl. Math.https://doi.org/10.1007/s13226-021-00156-y.
Dirik, F., and K. Demirci. 2010. Four-dimensional matrix transformation and the rate of \(A-\)statistical convergence of continuous functions. Comput. Math. Appl. 59 (8): 2976–2981. https://doi.org/10.1016/j.camwa.2010.02.015.
Duman, O. 2007. Regular matrix transformations and rates of convergence of positive linear operators. Calcolo 44 (3): 159–164. https://doi.org/10.1007/s10092-007-0134-z.
Duman, O., and E. Erkuş-Duman. 2011. Statistical Korovkin-type theory for matrix-valued functions. Stud. Sci. Math. Hung. 48: 489–508. https://doi.org/10.1556/sscmath.2011.1179.
Duman, O., and C. Orhan. 2005. Rates of \(A-\)statistical convergence of positive linear operators. Appl. Math. Lett. 18 (12): 1339–1344. https://doi.org/10.1016/j.aml.2005.02.029.
Demirci, K., and F. Dirik. 2010. A Korovkin type approximation theorem for double sequences of positive linear operators of two variables in \(A-\)statistical sense. Bull. Korean Math. Soc. 47 (4): 825–837. https://doi.org/10.4134/BKMS.2010.47.4.825.
Demirci, K., and S. Orhan. 2017. Statistical relative approximation on modular spaces. RM 71 (3): 1167–1184. https://doi.org/10.1007/s00025-016-0548-5.
Fast, H. 1951. Sur la convergence statistique. Colloquium Mathematicae 2: 241–244.
Kratz, W., and U. Stadtmüller. 1989. Tauberian theorems for \(J_{p}-\)summability. J. Math. Anal. Appl. 139: 362–371. https://doi.org/10.1016/0022-247X(89)90113-3.
Gadjiev, A.D., and C. Orhan. 2002. Some approximation theorems via statistical convergence. Rocky Mountain Journal of Mathematics 32: 129–138. https://www.jstor.org/stable/44238888.
Korovkin, P.P. 1960. Linear operators and approximation theory. Delhi: Hindustan Publishing Corporation.
Niven, I., and H.S. Zuckerman. 1980. An introduction to the theory of numbers. New York: John Wiley and Sons.
Serra-Capizzano, S. 1999. A Korovkin based approximation of multilevel Toeplitz matrices (with rectangular unstructured blocks) via multilevel trigonometric matrix spaces. SIAM J. Numer. Anal. 36 (6): 1831–1857. https://doi.org/10.1137/S0036142997322497.
Orhan, S., and K. Demirci. 2015. Statistical approximation by double sequences of positive linear operators on modular spaces. Positivity 19 (1): 23–36. https://doi.org/10.1007/s11117-014-0280-x.
Söylemez, D., and M. Ünver. 2021. Rates of power series statistical convergence of positive linear operators and power series statistical convergence of q-Meyer-König and Zeller operators. Lobachevskii J. Math. 42 (2): 426–434. https://doi.org/10.1134/S1995080221020189.
Stadtmüller, U., and A. Tali. 1999. On certain families of generalized Nörlund methods and power series methods. J. Math. Anal. Appl. 238: 44–66. https://doi.org/10.1006/jmaa.1999.6503.
Steinhaus, H. 1951. Sur la convergence ordinaire et la convergence asymptotique. Colloq. Math. 2: 73–74.
Ünver, M., and C. Orhan. 2019. Statistical convergence with respect to power series methods and applications to approximation theory. Numer. Funct. Anal. Optim. 40 (5): 535–547. https://doi.org/10.1080/01630563.2018.1561467.
Funding
The authors have not received any financial support for the research, authorship, or publication of this study.
Contributions
All authors have contributed sufficiently in the planning, execution, or analysis of this study to be included as authors. All authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no conflict of interest.
Consent to participate
The authors declare that they voluntarily agree to participate in this study.
Additional information
Communicated by Samy Ponnusamy.
Demirci, K., Yıldız, S. & Çınar, S. Approximation of matrix-valued functions via statistical convergence with respect to power series methods. J Anal 30, 1179–1192 (2022). https://doi.org/10.1007/s41478-022-00400-6