1 Introduction

The classical theorem of Korovkin [13] on the approximation of continuous functions on a compact interval provides a key tool for deciding whether a sequence of positive linear operators converges to the identity operator: convergence on a small set of test functions guarantees convergence on the whole space [1, 4]. Gadjiev and Orhan [11] extended the Korovkin-type approximation theorem by replacing classical convergence with statistical convergence, another interesting convergence method ([10, 22], see also [15]). Several studies, such as [7, 8, 17], deal with statistical convergence and Korovkin-type approximation theory, and recent generalizations of the Korovkin theorem using new types of statistical convergence are given in [2, 3].

It is worth noting that the main aim of summability theory has always been to make non-convergent sequences converge. If a sequence of positive linear operators does not converge to the identity operator, it may therefore be beneficial to use matrix or non-matrix summability methods [12, 16, 19].

The power series methods, which include both the Abel and Borel methods, are well known and are more powerful than ordinary convergence. As is well known, power series methods and statistical convergence are incompatible. Power series methods were first used in Korovkin-type approximation theory, via the Abel summability method, in [21]. In many further papers, the authors show how this concept can be applied to approximation theory [18, 20, 23, 24, 25]. Various approximation theorems have also been obtained by relaxing the positivity condition on the linear operators. For instance, Duman and Anastassiou [9] relaxed the positivity condition in the Korovkin-type approximation theory via statistical convergence, and several authors [5, 6] have given Korovkin-type approximation theorems for non-positive operators and various convergence methods.

The present work studies Korovkin-type approximation theorems for linear operators defined on classes of differentiable functions via the power series method. We give an example showing that our theorems apply where the classical and statistical ones fail, and we also study the rate of convergence. In the final section, we summarize the results obtained in the present paper.

Prior to giving the main theorems, we mention the power series method.

Let \(\left( p_{k}\right)\) be a non-negative real sequence such that \(p_{0}>0\) and the corresponding power series

$$\begin{aligned} p\left( t\right) :=\sum \limits _{k=0}^{\infty }p_{k}t^{k} \end{aligned}$$

has a radius of convergence R with \(0<R\le \infty .\) If the limit

$$\begin{aligned} \lim _{0<t\rightarrow R^{-}}\frac{1}{p\left( t\right) }\sum \limits _{k=0}^{\infty }x_{k}p_{k}t^{k}=L \end{aligned}$$

exists, then we say that \(x=\left( x_{k}\right)\) is convergent in the sense of the power series method [14, 21]. It is worthwhile to point out that the method is regular if and only if \(\lim \nolimits _{0<t\rightarrow R^{-}} \dfrac{p_{k}t^{k}}{p\left( t\right) }=0\) for every k (see, e.g., [4]).
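To fix ideas, the choice \(p_{k}=1\) for all k gives \(p\left( t\right) =1/(1-t)\) and \(R=1\), and the resulting power series method is the classical Abel method. The following sketch is our own numerical illustration (not part of the original construction): it approximates the Abel means of the non-convergent sequence \(x_{k}=(-1)^{k}\), which is Abel convergent to zero, as used later in Example 3.1.

```python
# Sketch, assuming p_k = 1 for all k, so p(t) = 1/(1 - t) and R = 1:
# the power series method then reduces to the classical Abel method.
def abel_mean(x, t, terms=100_000):
    """Truncated Abel mean (1 - t) * sum_{k>=0} x(k) * t**k of a sequence x."""
    return (1 - t) * sum(x(k) * t ** k for k in range(terms))

# x_k = (-1)^k does not converge, but its Abel means tend to 0 as t -> 1^-.
for t in (0.9, 0.99, 0.999):
    print(t, abel_mean(lambda k: (-1) ** k, t))

# A constant sequence is left unchanged, consistent with regularity.
print(abel_mean(lambda k: 1.0, 0.99))
```

For \(x_{k}=(-1)^{k}\) the truncated mean agrees with the closed form \((1-t)/(1+t)\), which tends to 0 as \(t\rightarrow 1^{-}\).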

2 Approximation properties via power series method

Let k be a nonnegative integer. We denote by \(C^{k}[0,1]\) the space of k-times continuously differentiable functions on [0, 1], endowed with the sup-norm \(\left\| .\right\| .\) Throughout the paper, we consider the following function spaces:

$$\begin{aligned} \begin{array}{ll} C_{+}^{1}=\left\{ h\in C^{1}[0,1]:h^{^{\prime }}\ge 0\right\} , &{} C_{+}=\left\{ h\in C[0,1]:h\ge 0\right\} , \\ C_{+}^{2}=\left\{ h\in C^{2}[0,1]:h^{^{\prime \prime }}\ge 0\right\} , &{} C_{+,1}=\left\{ h\in C^{1}[0,1]:h\ge 0\right\} , \\ C_{-}^{1}=\left\{ h\in C^{1}[0,1]:h^{^{\prime }}\le 0\right\} , &{} C_{+,2}=\left\{ h\in C^{2}[0,1]:h\ge 0\right\} , \\ C_{-}^{2}=\left\{ h\in C^{2}[0,1]:h^{^{\prime \prime }}\le 0\right\} . &{} \end{array} \end{aligned}$$

Let \(\left( T_{n}\right)\) be a sequence of linear operators from \(C^{k}[0,1]\) into itself and let \(V_{t}\left( .\right)\) be given by

$$\begin{aligned} V_{t}\left( h;x\right) :=\frac{1}{p\left( t\right) }\sum \limits _{n=0}^{ \infty }T_{n}\left( h;x\right) p_{n}t^{n} \end{aligned}$$

Then \(V_{t}\) is a well-defined operator from \(C^{k}[0,1]\) into itself, as can be seen from the following inequality:

$$\begin{aligned} \left\| V_{t}\left( h\right) \right\| \le \sup _{0<t<R}\frac{1}{ p\left( t\right) }\sum \limits _{n=0}^{\infty }\left\| T_{n}\left( 1\right) \right\| p_{n}t^{n}<\infty . \end{aligned}$$

We assume throughout the paper that the test functions are

$$\begin{aligned} h_{i}\left( x\right) =x^{i},\text { }i=0,1,2,3,4. \end{aligned}$$

Theorem 2.1

Let \(\left( T_{n}\right)\) be a sequence of linear operators from \(C^{2}[0,1]\) into itself such that \(T_{n}\left( C_{+,2}\cap C_{+}^{2}\right) \subset C_{+,2},\) for all \(n\in \mathbb {N} .\) Then, for every \(h\in C^{2}[0,1],\)

$$\begin{aligned} \lim _{0<t\rightarrow R^{-}}\left\| V_{t}\left( h\right) -h\right\| =0 \end{aligned}$$
(2.1)

if and only if

$$\begin{aligned} \lim _{0<t\rightarrow R^{-}}\left\| V_{t}\left( h_{i}\right) -h_{i}\right\| =0, \quad i=0,1,2. \end{aligned}$$
(2.2)

Proof

First assume that \(V_{t}\left( h\right)\) converges to h for every \(h\in C^{2}[0,1]\). Since \(h_{i}\in C^{2}[0,1]\) for \(i=0,1,2,\) it follows in particular that \(V_{t}\left( h_{i}\right)\) converges to \(h_{i}\) for each \(i=0,1,2\). Therefore only the sufficiency part requires a proof. Let \(x\in [0,1]\) be fixed and let \(h\in C^{2}[0,1].\) Since h is bounded and uniformly continuous on [0, 1], for every \(\varepsilon >0\) there exists a \(\delta >0\) such that

$$\begin{aligned} -\varepsilon -\frac{2M_{1}\beta }{\delta ^{2}}\varphi _{x}\left( y\right) \le h\left( y\right) -h\left( x\right) \le \varepsilon +\frac{2M_{1}\beta }{\delta ^{2}}\varphi _{x}\left( y\right) \end{aligned}$$
(2.3)

holds for all \(y\in [0,1]\) and for any \(\beta \ge 1\) where \(M_{1}=\left\| h\right\|\) and \(\varphi _{x}\left( y\right) =\left( y-x\right) ^{2}.\)

Then by (2.3) we get that

$$\begin{aligned} h_{1,\beta }\left( y\right):= & {} \varepsilon +\frac{2M_{1}\beta }{\delta ^{2}} \varphi _{x}\left( y\right) +h\left( y\right) -h\left( x\right) \ge 0, \end{aligned}$$
(2.4)
$$\begin{aligned} h_{2,\beta }\left( y\right):= & {} \varepsilon +\frac{2M_{1}\beta }{\delta ^{2}} \varphi _{x}\left( y\right) -h\left( y\right) +h\left( x\right) \ge 0. \end{aligned}$$
(2.5)

Also, for all \(y\in [0,1],\)

$$\begin{aligned} h_{1,\beta }^{^{\prime \prime }}\left( y\right) :=\frac{4M_{1}\beta }{\delta ^{2}}+h^{^{\prime \prime }}\left( y\right) \quad \text { and } \quad h_{2,\beta }^{^{\prime \prime }}\left( y\right) :=\frac{4M_{1}\beta }{\delta ^{2}} -h^{^{\prime \prime }}\left( y\right) . \end{aligned}$$

Because \(h^{^{\prime \prime }}\) is bounded on [0, 1],  we can choose \(\beta \ge 1\) so that \(h_{1,\beta }^{^{\prime \prime }}\left( y\right) \ge 0,\) \(h_{2,\beta }^{^{\prime \prime }}\left( y\right) \ge 0,\) for each \(y\in [0,1]\). Hence \(h_{1,\beta },\) \(h_{2,\beta }\in C_{+,2}\cap C_{+}^{2}\) and then by the hypothesis

$$\begin{aligned} T_{n}\left( h_{j,\beta };x\right) \ge 0, \quad \text { for all }n\in \mathbb {N} ,\text { }x\in [0,1]\text { and }j=1,2 \end{aligned}$$
(2.6)

and hence

$$\begin{aligned} V_{t}\left( h_{j,\beta };x\right) \ge 0, \quad \text { for }t\in \left( 0,R\right) , \text { }x\in [0,1]\text { and }j=1,2. \end{aligned}$$

From (2.4)–(2.6) and the linearity of \(\left( T_{n}\right)\) we get

$$\begin{aligned} \varepsilon V_{t}\left( h_{0};x\right) +\frac{2M_{1}\beta }{\delta ^{2}} V_{t}\left( \varphi _{x};x\right) +V_{t}\left( h;x\right) -h\left( x\right) V_{t}\left( h_{0};x\right) \ge 0 \end{aligned}$$
$$\begin{aligned} \varepsilon V_{t}\left( h_{0};x\right) +\frac{2M_{1}\beta }{\delta ^{2}} V_{t}\left( \varphi _{x};x\right) -V_{t}\left( h;x\right) +h\left( x\right) V_{t}\left( h_{0};x\right) \ge 0 \end{aligned}$$

Then we obtain

$$\begin{aligned} \left| V_{t}\left( h;x\right) -h\left( x\right) \right| \le \varepsilon +\frac{2M_{1}\beta }{\delta ^{2}}V_{t}\left( \varphi _{x};x\right) +\left( \varepsilon +\left| h\left( x\right) \right| \right) \left| V_{t}\left( h_{0};x\right) -h_{0}\left( x\right) \right| . \end{aligned}$$

We immediately get that

$$\begin{aligned} \left\| V_{t}\left( h\right) -h\right\| \le \varepsilon +K_{1}\left\{ \left\| V_{t}\left( h_{2}\right) -h_{2}\right\| +\left\| V_{t}\left( h_{1}\right) -h_{1}\right\| +\left\| V_{t}\left( h_{0}\right) -h_{0}\right\| \right\} \end{aligned}$$
(2.7)

where \(K_{1}=\max \left\{ \varepsilon +M_{1}+\dfrac{2M_{1}\beta }{\delta ^{2} },\dfrac{4M_{1}\beta }{\delta ^{2}}\right\} .\)

Hence it follows from the hypothesis and the inequality (2.7) that

$$\begin{aligned} \limsup _{0<t\rightarrow R^{-}}\left\| V_{t}\left( h\right) -h\right\| \le \varepsilon \end{aligned}$$

which concludes the proof since \(\varepsilon\) is arbitrary.

Theorem 2.2

Let \(\left( T_{n}\right)\) be a sequence of linear operators from \(C^{2}[0,1]\) into itself and \(T_{n}\left( C_{+,2}\cap C_{-}^{2}\right) \subset C_{-}^{2},\) for all \(n\in \mathbb {N} .\) Then for all \(h\in C^{2}[0,1]\)

$$\begin{aligned} \lim _{0<t\rightarrow R^{-}}\left\| V_{t}^{^{\prime \prime }}\left( h\right) -h^{^{\prime \prime }}\right\| =0 \end{aligned}$$
(2.8)

if and only if

$$\begin{aligned} \lim _{0<t\rightarrow R^{-}}\left\| V_{t}^{^{\prime \prime }}\left( h_{i}\right) -h_{i}^{^{\prime \prime }}\right\| =0, \quad i=0,1,2,3,4. \end{aligned}$$
(2.9)

Proof

It is obvious that (2.8) implies (2.9). Let \(h\in C^{2}[0,1]\) and \(x\in [0,1]\) be fixed. Proceeding as in the proof of Theorem 2.1, we obtain the following:

For every \(\varepsilon >0,\) there exists a \(\delta >0\) such that

$$\begin{aligned} -\varepsilon -\frac{2M_{2}\beta }{\delta ^{2}}\left( y-x\right) ^{2}\le h^{^{\prime \prime }}\left( y\right) -h^{^{\prime \prime }}\left( x\right) \le \varepsilon +\frac{2M_{2}\beta }{\delta ^{2}}\left( y-x\right) ^{2} \end{aligned}$$
(2.10)

holds for all \(y\in [0,1]\) and for any \(\beta \ge 1,\) where \(M_{2}=\left\| h^{^{\prime \prime }}\right\|\). Let \(\sigma _{x}\left( y\right) =-\dfrac{\left( y-x\right) ^{4}}{12}+1\) and note that \(\sigma _{x}^{^{\prime \prime }}\left( y\right) =-\left( y-x\right) ^{2}.\)

Consider the following functions on [0, 1] : 

$$\begin{aligned} u_{1,\beta }\left( y\right):= & {} \frac{2M_{2}\beta }{\delta ^{2}}\sigma _{x}\left( y\right) +h\left( y\right) -\frac{\varepsilon }{2}y^{2}-\frac{ h^{^{\prime \prime }}\left( x\right) }{2}y^{2},\\ u_{2,\beta }\left( y\right):= & {} \frac{2M_{2}\beta }{\delta ^{2}}\sigma _{x}\left( y\right) -h\left( y\right) -\frac{\varepsilon }{2}y^{2}+\frac{ h^{^{\prime \prime }}\left( x\right) }{2}y^{2}. \end{aligned}$$

Then, by (2.10), for all \(y\in [0,1],\)

$$\begin{aligned} u_{1,\beta }^{^{\prime \prime }}\left( y\right) \le 0\text { and }u_{2,\beta }^{^{\prime \prime }}\left( y\right) \le 0, \end{aligned}$$

which gives \(u_{1,\beta },\) \(u_{2,\beta }\in C_{-}^{2}\). Observe also that \(\sigma _{x}\left( y\right) \ge \dfrac{11}{12}\) for all \(y\in [0,1].\) Then the inequality

$$\begin{aligned} \frac{\left( \pm h\left( y\right) +\frac{\varepsilon }{2}y^{2}\pm \frac{ h^{^{\prime \prime }}\left( x\right) }{2}y^{2}\right) \delta ^{2}}{ 2M_{2}\sigma _{x}\left( y\right) }\le \frac{\left( M_{1}+M_{2}+\varepsilon \right) \delta ^{2}}{M_{2}} \end{aligned}$$
(2.11)

holds for all \(y\in [0,1],\) where \(M_{1}=\left\| h\right\|\) and \(M_{2}=\left\| h^{^{\prime \prime }}\right\|\), as mentioned before. Hence we can choose \(\beta \ge 1\) such that \(u_{1,\beta }\left( y\right) \ge 0\) and \(u_{2,\beta }\left( y\right) \ge 0\) for each \(y\in [0,1],\) and therefore \(u_{1,\beta },\) \(u_{2,\beta }\in C_{+,2}\cap C_{-}^{2}.\) Then by the hypothesis

$$\begin{aligned} T_{n}^{^{\prime \prime }}\left( u_{j,\beta };x\right) \le 0, \quad \text { for all }n\in \mathbb {N} ,\text { }x\in [0,1]\text { and }j=1,2 \end{aligned}$$

and hence

$$\begin{aligned} V_{t}^{^{\prime \prime }}\left( u_{j,\beta };x\right) \le 0,\text { for } t\in \left( 0,R\right) ,\text { }x\in [0,1]\text { and }j=1,2. \end{aligned}$$

Then we get

$$\begin{aligned}&\frac{2M_{2}\beta }{\delta ^{2}}V_{t}^{^{\prime \prime }}\left( \sigma _{x};x\right) +V_{t}^{^{\prime \prime }}\left( h;x\right) -\frac{\varepsilon }{2}V_{t}^{^{\prime \prime }}\left( h_{2};x\right) -\frac{h^{^{\prime \prime }}\left( x\right) }{2}V_{t}^{^{\prime \prime }}\left( h_{2};x\right) \le 0,\\&\quad \frac{2M_{2}\beta }{\delta ^{2}}V_{t}^{^{\prime \prime }}\left( \sigma _{x};x\right) -V_{t}^{^{\prime \prime }}\left( h;x\right) -\frac{\varepsilon }{2}V_{t}^{^{\prime \prime }}\left( h_{2};x\right) +\frac{h^{^{\prime \prime }}\left( x\right) }{2}V_{t}^{^{\prime \prime }}\left( h_{2};x\right) \le 0, \end{aligned}$$

and thus

$$\begin{aligned}&\frac{2M_{2}\beta }{\delta ^{2}}V_{t}^{^{\prime \prime }}\left( \sigma _{x};x\right) -\frac{\varepsilon }{2}V_{t}^{^{\prime \prime }}\left( h_{2};x\right) +\frac{h^{^{\prime \prime }}\left( x\right) }{2} V_{t}^{^{\prime \prime }}\left( h_{2};x\right) -h^{^{\prime \prime }}\left( x\right) \\&\quad \le V_{t}^{^{\prime \prime }}\left( h;x\right) -h^{^{\prime \prime }}\left( x\right) \\&\quad \le -\frac{2M_{2}\beta }{\delta ^{2}}V_{t}^{^{\prime \prime }}\left( \sigma _{x};x\right) +\frac{\varepsilon }{2}V_{t}^{^{\prime \prime }}\left( h_{2};x\right) +\frac{h^{^{\prime \prime }}\left( x\right) }{2} V_{t}^{^{\prime \prime }}\left( h_{2};x\right) -h^{^{\prime \prime }}\left( x\right) . \end{aligned}$$

Observe that in view of \(\sigma _{x}\in C_{+,2}\cap C_{-}^{2},\) we can get \(V_{t}^{^{\prime \prime }}\left( \sigma _{x};x\right) \le 0\) and using this

$$\begin{aligned} \left| V_{t}^{^{\prime \prime }}\left( h;x\right) -h^{^{\prime \prime }}\left( x\right) \right| \le -\frac{2M_{2}\beta }{\delta ^{2}} V_{t}^{^{\prime \prime }}\left( \sigma _{x};x\right) +\frac{\varepsilon }{2} V_{t}^{^{\prime \prime }}\left( h_{2};x\right) +\frac{\left| h^{^{\prime \prime }}\left( x\right) \right| }{2}\left| V_{t}^{^{\prime \prime }}\left( h_{2};x\right) -2\right| . \end{aligned}$$

Thus

$$\begin{aligned} \left| V_{t}^{^{\prime \prime }}\left( h;x\right) -h^{^{\prime \prime }}\left( x\right) \right|\le & {} \varepsilon +\frac{\varepsilon +\left| h^{^{\prime \prime }}\left( x\right) \right| }{2}\left| V_{t}^{^{\prime \prime }}\left( h_{2};x\right) -h_{2}^{^{\prime \prime }}\left( x\right) \right| \nonumber \\&+\frac{2M_{2}\beta }{\delta ^{2}}V_{t}^{^{\prime \prime }}\left( -\sigma _{x};x\right) . \end{aligned}$$
(2.12)

Now we estimate the quantity \(V_{t}^{^{\prime \prime }}\left( -\sigma _{x};x\right)\) in inequality (2.12). To this end, observe that

$$\begin{aligned} V_{t}^{^{\prime \prime }}\left( -\sigma _{x};x\right)= & {} V_{t}^{^{\prime \prime }}\left( \dfrac{\left( y-x\right) ^{4}}{12}-1;x\right) \\\le & {} \frac{1}{12}V_{t}^{^{\prime \prime }}\left( h_{4};x\right) -\frac{x}{3 }V_{t}^{^{\prime \prime }}\left( h_{3};x\right) +\frac{x^{2}}{2} V_{t}^{^{\prime \prime }}\left( h_{2};x\right) -\frac{x^{3}}{3} V_{t}^{^{\prime \prime }}\left( h_{1};x\right) \\&+\left( \frac{x^{4}}{12}-1\right) V_{t}^{^{\prime \prime }}\left( h_{0};x\right) \\= & {} \frac{1}{12}\left\{ V_{t}^{^{\prime \prime }}\left( h_{4};x\right) -h_{4}^{^{\prime \prime }}\left( x\right) \right\} -\frac{x}{3}\left\{ V_{t}^{^{\prime \prime }}\left( h_{3};x\right) -h_{3}^{^{\prime \prime }}\left( x\right) \right\} \\&+\frac{x^{2}}{2}\left\{ V_{t}^{^{\prime \prime }}\left( h_{2};x\right) -h_{2}^{^{\prime \prime }}\left( x\right) \right\} \\&-\frac{x^{3}}{3}\left\{ V_{t}^{^{\prime \prime }}\left( h_{1};x\right) -h_{1}^{^{\prime \prime }}\left( x\right) \right\} +\left( \frac{x^{4}}{12} -1\right) \left\{ V_{t}^{^{\prime \prime }}\left( h_{0};x\right) -h_{0}^{^{\prime \prime }}\left( x\right) \right\} . \end{aligned}$$

Combining this with (2.12), taking the supremum over \(x\in [0,1]\) and using \(\left| h^{^{\prime \prime }}\left( x\right) \right| \le M_{2},\) we get, for every \(\varepsilon >0,\)

$$\begin{aligned} \left\| V_{t}^{^{\prime \prime }}\left( h\right) -h^{^{\prime \prime }}\right\|\le & {} \varepsilon +\left( \frac{\varepsilon +M_{2}}{2}+\frac{2M_{2}\beta }{\delta ^{2}}\right) \left\| V_{t}^{^{\prime \prime }}\left( h_{2}\right) -h_{2}^{^{\prime \prime }}\right\| \nonumber \\&+\frac{M_{2}\beta }{6\delta ^{2}}\left\| V_{t}^{^{\prime \prime }}\left( h_{4}\right) -h_{4}^{^{\prime \prime }}\right\| +\frac{2M_{2}\beta }{3\delta ^{2}}\left\| V_{t}^{^{\prime \prime }}\left( h_{3}\right) -h_{3}^{^{\prime \prime }}\right\| \nonumber \\&+\frac{2M_{2}\beta }{3\delta ^{2}}\left\| V_{t}^{^{\prime \prime }}\left( h_{1}\right) -h_{1}^{^{\prime \prime }}\right\| +\frac{2M_{2}\beta }{\delta ^{2}}\left\| V_{t}^{^{\prime \prime }}\left( h_{0}\right) -h_{0}^{^{\prime \prime }}\right\| . \end{aligned}$$
(2.13)

Therefore, we derive, for every \(\varepsilon >0,\) that

$$\begin{aligned} \left\| V_{t}^{^{\prime \prime }}\left( h\right) -h^{^{\prime \prime }}\right\| \le \varepsilon +K_{2}\sum \limits _{k=0}^{4}\left\| V_{t}^{^{\prime \prime }}h_{k}-h_{k}^{^{\prime \prime }}\right\| \end{aligned}$$

where \(K_{2}=\dfrac{\varepsilon +M_{2}}{2}+\dfrac{2M_{2}\beta }{\delta ^{2}}\) and \(M_{2}=\left\| h^{^{\prime \prime }}\right\|\) as stated before. Thus it follows from the hypothesis and inequality (2.13) that

$$\begin{aligned} \lim _{0<t\rightarrow R^{-}}\left\| V_{t}^{^{\prime \prime }}\left( h\right) -h^{^{\prime \prime }}\right\| =0. \end{aligned}$$

Theorem 2.3

Let \(\left( T_{n}\right)\) be a sequence of linear operators from \(C^{1}[0,1]\) into itself and \(T_{n}\left( C_{+,1}\cap C_{+}^{1}\right) \subset C_{+}^{1},\) for all \(n\in \mathbb {N} .\) Then for all \(h\in C^{1}[0,1]\)

$$\begin{aligned} \lim _{0<t\rightarrow R^{-}}\left\| V_{t}^{^{\prime }}\left( h\right) -h^{^{\prime }}\right\| =0 \end{aligned}$$
(2.14)

if and only if

$$\begin{aligned} \lim _{0<t\rightarrow R^{-}}\left\| V_{t}^{^{\prime }}\left( h_{i}\right) -h_{i}^{^{\prime }}\right\| =0,\text { }i=0,1,2,3. \end{aligned}$$
(2.15)

Proof

It is enough to prove the implication (2.15) \(\Rightarrow\) (2.14). Let \(h\in C^{1}[0,1]\) and \(x\in [0,1]\) be fixed. Then for every \(\varepsilon >0,\) there exists a \(\delta >0\) such that

$$\begin{aligned} -\varepsilon -\frac{2M_{3}\beta }{\delta ^{2}}\gamma _{x}^{^{\prime }}\left( y\right) \le h^{^{\prime }}\left( y\right) -h^{^{\prime }}\left( x\right) \le \varepsilon +\frac{2M_{3}\beta }{\delta ^{2}}\gamma _{x}^{^{\prime }}\left( y\right) \end{aligned}$$
(2.16)

holds for all \(y\in [0,1]\) and for any \(\beta \ge 1\) where \(M_{3}=\left\| h^{^{\prime }}\right\|\) and \(\gamma _{x}\left( y\right) =\dfrac{\left( y-x\right) ^{3}}{3}+1.\)

Now using the functions defined by

$$\begin{aligned} \theta _{1,\beta }\left( y\right):= & {} \frac{2M_{3}\beta }{\delta ^{2}}\gamma _{x}\left( y\right) -h\left( y\right) +\varepsilon y+yh^{^{\prime }}\left( x\right) ,\\ \theta _{2,\beta }\left( y\right):= & {} \frac{2M_{3}\beta }{\delta ^{2}}\gamma _{x}\left( y\right) +h\left( y\right) +\varepsilon y-yh^{^{\prime }}\left( x\right) , \end{aligned}$$

we see from (2.16) that \(\theta _{1,\beta }^{^{\prime }}\left( y\right) \ge 0\) and \(\theta _{2,\beta }^{^{\prime }}\left( y\right) \ge 0,\) i.e., \(\theta _{1,\beta }\) and \(\theta _{2,\beta }\) belong to \(C_{+}^{1}\) for any \(\beta \ge 1.\) Also observe that \(\gamma _{x}\left( y\right) \ge \dfrac{2}{3}\) for all \(y\in [0,1];\) then the inequality

$$\begin{aligned} \frac{\left( \pm h\left( y\right) -\varepsilon y\pm h^{^{\prime }}\left( x\right) y\right) \delta ^{2}}{2M_{3}\gamma _{x}\left( y\right) }\le \frac{ \left( M_{1}+M_{3}+\varepsilon \right) \delta ^{2}}{M_{3}} \end{aligned}$$
(2.17)

holds for all \(y\in [0,1],\) where \(M_{1}=\) \(\left\| h\right\|\) as mentioned before. Now we can choose \(\beta \ge 1\) such a way that \(\theta _{1,\beta }\left( y\right) \ge 0,\) \(\theta _{2,\beta }\left( y\right) \ge 0,\) for each \(y\in [0,1]\) and hence \(\theta _{1,\beta },\) \(\theta _{2,\beta }\in C_{+,1}\cap C_{+}^{1}.\) Then by the hypothesis

$$\begin{aligned} T_{n}^{\prime }\left( \theta _{j,\beta };x\right) \ge 0, \quad \text { for all }n\in \mathbb {N} ,\text { }x\in [0,1]\text { and }j=1,2 \end{aligned}$$

and hence

$$\begin{aligned} V_{t}^{^{\prime }}\left( \theta _{j,\beta };x\right) \ge 0, \quad \text { for }t\in \left( 0,R\right) ,\text { }x\in [0,1]\text { and }j=1,2. \end{aligned}$$

Then we get

$$\begin{aligned}&\frac{2M_{3}\beta }{\delta ^{2}}V_{t}^{^{\prime }}\left( \gamma _{x};x\right) -V_{t}^{^{\prime }}\left( h;x\right) +\varepsilon V_{t}^{^{\prime }}\left( h_{1};x\right) +h^{^{\prime }}\left( x\right) V_{t}^{^{\prime }}\left( h_{1};x\right) \ge 0,\\&\frac{2M_{3}\beta }{\delta ^{2}}V_{t}^{^{\prime }}\left( \gamma _{x};x\right) +V_{t}^{^{\prime }}\left( h;x\right) +\varepsilon V_{t}^{^{\prime }}\left( h_{1};x\right) -h^{^{\prime }}\left( x\right) V_{t}^{^{\prime }}\left( h_{1};x\right) \ge 0, \end{aligned}$$

and thus

$$\begin{aligned}&-\frac{2M_{3}\beta }{\delta ^{2}}V_{t}^{^{\prime }}\left( \gamma _{x};x\right) -\varepsilon V_{t}^{^{\prime }}\left( h_{1};x\right) +h^{^{\prime }}\left( x\right) V_{t}^{^{\prime }}\left( h_{1};x\right) -h^{^{\prime }}\left( x\right) \\&\quad \le V_{t}^{^{\prime }}\left( h;x\right) -h^{^{\prime }}\left( x\right) \\&\quad \le \frac{2M_{3}\beta }{\delta ^{2}}V_{t}^{^{\prime }}\left( \gamma _{x};x\right) +\varepsilon V_{t}^{^{\prime }}\left( h_{1};x\right) +h^{^{\prime }}\left( x\right) V_{t}^{^{\prime }}\left( h_{1};x\right) -h^{^{\prime }}\left( x\right) . \end{aligned}$$

Since the function \(\gamma _{x}\in C_{+,1}\cap C_{+}^{1},\) the hypothesis gives \(V_{t}^{^{\prime }}\left( \gamma _{x};x\right) \ge 0\) for all \(x\in [0,1],\) and hence

$$\begin{aligned} \left| V_{t}^{^{^{\prime }}}\left( h;x\right) -h^{^{^{\prime }}}\left( x\right) \right|\le & {} \varepsilon +\left( \varepsilon +\left| h^{^{\prime }}\left( x\right) \right| \right) \left| V_{t}^{^{^{\prime }}}\left( h_{1};x\right) -h_{1}^{^{\prime }}\left( x\right) \right| \nonumber \\&+\frac{2M_{3}\beta }{\delta ^{2}}V_{t}^{^{\prime }}\left( \gamma _{x};x\right) , \end{aligned}$$
(2.18)

holds. Since

$$\begin{aligned} V_{t}^{^{\prime }}\left( \gamma _{x};x\right)= & {} V_{t}^{^{\prime }}\left( \dfrac{\left( y-x\right) ^{3}}{3}+1;x\right) \\= & {} \frac{1}{3}V_{t}^{^{\prime }}\left( h_{3};x\right) -xV_{t}^{^{\prime }}\left( h_{2};x\right) +x^{2}V_{t}^{^{\prime }}\left( h_{1};x\right) +\left( 1-\frac{x^{3}}{3}\right) V_{t}^{^{\prime }}\left( h_{0};x\right) \\= & {} \frac{1}{3}\left\{ V_{t}^{^{\prime }}\left( h_{3};x\right) -h_{3}^{^{\prime }}\left( x\right) \right\} -x\left\{ V_{t}^{^{\prime }}\left( h_{2};x\right) -h_{2}^{^{\prime }}\left( x\right) \right\} \\&+x^{2}\left\{ V_{t}^{^{\prime }}\left( h_{1};x\right) -h_{1}^{^{\prime }}\left( x\right) \right\} +\left( 1-\frac{x^{3}}{3}\right) \left\{ V_{t}^{^{\prime }}\left( h_{0};x\right) -h_{0}^{^{\prime }}\left( x\right) \right\} , \end{aligned}$$

combining this with (2.18) , for every \(\varepsilon >0,\) we get

$$\begin{aligned} \left| V_{t}^{^{\prime }}\left( h;x\right) -h^{^{\prime }}\left( x\right) \right|\le & {} \varepsilon +\left( \varepsilon +\left| h^{^{\prime }}\left( x\right) \right| +\frac{2M_{3}\beta x^{2}}{\delta ^{2}}\right) \left| V_{t}^{^{\prime }}\left( h_{1};x\right) -h_{1}^{^{\prime }}\left( x\right) \right| \nonumber \\&+\frac{2M_{3}\beta }{3\delta ^{2}}\left| V_{t}^{^{\prime }}\left( h_{3};x\right) -h_{3}^{^{\prime }}\left( x\right) \right| \nonumber \\&+\frac{2M_{3}\beta x}{\delta ^{2}}\left| V_{t}^{^{\prime }}\left( h_{2};x\right) -h_{2}^{^{\prime }}\left( x\right) \right| \nonumber \\&+\frac{2M_{3}\beta }{\delta ^{2}}\left( 1-\frac{x^{3}}{3} \right) \left| V_{t}^{^{\prime }}\left( h_{0};x\right) -h_{0}^{^{\prime }}\left( x\right) \right| \end{aligned}$$
(2.19)

It follows that

$$\begin{aligned} \left\| V_{t}^{^{\prime }}\left( h\right) -h^{^{\prime }}\right\| \le \varepsilon +K_{3}\sum \limits _{k=0}^{3}\left\| V_{t}^{^{\prime }}\left( h_{k}\right) -h_{k}^{^{\prime }}\right\| \end{aligned}$$
(2.20)

where \(K_{3}=\varepsilon +M_{3}+\dfrac{2M_{3}\beta }{\delta ^{2}}.\) Thus it follows from the hypothesis and inequality (2.20) that

$$\begin{aligned} \lim _{0<t\rightarrow R^{-}}\left\| V_{t}^{^{\prime }}\left( h\right) -h^{^{\prime }}\right\| =0. \end{aligned}$$

3 Applications

In this section, we give an intriguing application showing that, in general, our results are more robust than the classical ones.

Example 3.1

Let \(h[x_{0}, x_{1}, \ldots , x_{i}]\) denote the divided difference of the function \(h\in C[0, 1]\) at the points \(x_{0}, x_{1}, \ldots , x_{i}\in [0, 1]\), \(i=0,1,2,\ldots\). Also, let \(G=\left\{ h\in C[0, 1]: \exists M>0 \text { such that } h[x_{0}, x_{1}, x_{2}]\le M \,\, \forall x_{0}, x_{1}, x_{2}\in [0, 1]\right\}\) and let \(\left( K_{n}\right)\), \(K_{n}:G\rightarrow C[0, 1]\), be the sequence of linear operators defined in [5] by

$$\begin{aligned} K_{n}h(x) ={\left\{ \begin{array}{ll} h(0)+h\left[ 0,\dfrac{1}{n}\right] x+h\left[ 0,\dfrac{1}{n},\dfrac{2}{n}\right] x^{2}, \quad x\in \left[ 0,\dfrac{1}{n}\right] , \\ K_{n}h\left( \dfrac{i}{n}\right) +DK_{n}h\left( \dfrac{i}{n}\right) \left( x-\dfrac{i}{n}\right) +h\left[ \dfrac{i}{n},\dfrac{i+1}{n},\dfrac{i+2}{n}\right] \left( x-\dfrac{i}{n}\right) ^{2}, \\ \quad \quad \quad x\in \left( \dfrac{i}{n},\dfrac{i+1}{n}\right] ,\, i=1,...,n-3, \\ K_{n}h\left( \dfrac{n-2}{n}\right) +DK_{n}h\left( \dfrac{n-2}{n}\right) \left( x-\dfrac{n-2}{n}\right) \\ \quad \quad \quad +h\left[ \dfrac{n-2}{n},\dfrac{n-1}{n},1\right] \left( x-\dfrac{n-2}{n}\right) ^{2}, \quad x\in \left( \dfrac{n-2}{n},1\right] , \end{array}\right. } \end{aligned}$$

where D denotes the differentiation operator. Then we know from [5] that \(K_{n}\) is not a positive operator and that \(K_{n}e_{0}=e_{0}\), \(K_{n}e_{1}=e_{1}\), \(K_{n}e_{2}(x)=\dfrac{x}{n}+e_{2}(x)\) for all \(x \in [0,1]\). Hence \(K_{n}e_{i}\) converges uniformly to \(e_{i}\) for \(i=0,1,2\), and thus \(K_{n}h\) converges uniformly to h as \(n\rightarrow \infty\) for all \(h\in G\). Now, using the operators \(K_{n}\), we define the sequence of linear operators

$$\begin{aligned} T_{n}h(x)=(1+u_{n})K_{n}h(x), \quad \text {for all } h\in C[0,1],\ n\ge 1. \end{aligned}$$

Take \((u_{n})= ((-1)^{n})\). It is easy to see that \((u_{n})\) is Abel convergent to zero, but it is convergent neither in the classical nor in the statistical sense. Observe that \(T_{n}e_{0}(x)=(1+u_{n})e_{0}(x)\), \(T_{n}e_{1}(x)=(1+u_{n})e_{1}(x)\) and \(T_{n}e_{2}(x)=(1+u_{n})\left( \frac{x}{n}+e_{2}(x)\right)\). Using these identities and the regularity of the method, we see that \(T_{n}e_{i}\) is Abel convergent to \(e_{i}\) for \(i=0,1,2\).

The example given above shows that the classical and statistical Korovkin theorems for sequences of non-positive operators (see [5, 10]) do not apply here. Alternatively, we can use the power series method to obtain Korovkin-type approximation results: in particular, our Theorem 2.1 works for the operators \(T_{n}\).
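The behaviour of Example 3.1 can also be observed numerically. The sketch below is our own illustration: it takes \(p_{k}=1\) (the Abel method) and uses only the stated identities \(K_{n}e_{0}=e_{0}\), \(K_{n}e_{1}=e_{1}\), \(K_{n}e_{2}(x)=x/n+e_{2}(x)\), rather than a full implementation of \(K_{n}\).

```python
# Sketch (our illustration): Abel means of T_n e_i for the operators of
# Example 3.1, with u_n = (-1)^n and p_k = 1, so that the power series
# method is the classical Abel method.
def v_t(coeffs, t, terms=100_000):
    """Truncated Abel mean (1 - t) * sum_{n>=1} coeffs(n) * t**n."""
    return (1 - t) * sum(coeffs(n) * t ** n for n in range(1, terms))

x = 0.5
for t in (0.9, 0.99, 0.999):
    e0 = v_t(lambda n: (1 + (-1) ** n) * 1.0, t)               # T_n e_0(x)
    e2 = v_t(lambda n: (1 + (-1) ** n) * (x / n + x ** 2), t)  # T_n e_2(x)
    print(t, e0, e2)
# T_n e_0(x) alternates between 0 and 2, so it does not converge classically,
# yet its Abel means tend to e_0(x) = 1; similarly the means of T_n e_2(x)
# tend to e_2(x) = 0.25.
```

The printed values approach 1 and 0.25 as \(t\rightarrow 1^{-}\), in line with the claim that \(T_{n}e_{i}\) is Abel convergent to \(e_{i}\).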

4 Quantitative estimates

In this section, we prove some results which give the degree of approximation by means of linear operators.

The modulus of continuity, denoted by \(\omega \left( h,\delta \right)\), is defined by

$$\begin{aligned} \omega (h,\delta )=\sup _{\left| {y-x}\right| \le \delta }\left| h(y)-h(x)\right| . \end{aligned}$$

where \(\delta >0\) and \(h\in C[a,b]\). It is easy to see that, for any \(c>0\),

$$\begin{aligned} \omega (h;c\delta )\le (1+[c])\omega (h;\delta ) \end{aligned}$$

where [c] is defined to be the greatest integer less than or equal to c.
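This property of the modulus of continuity can be checked numerically. The following sketch is our own illustration; the uniform grid, its resolution, and the test function \(\sqrt{x}\) are choices made here for demonstration only.

```python
import math

# Approximate the modulus of continuity omega(h, delta) of h on [0, 1]
# by a maximum over pairs of points of a uniform grid (an approximation
# from below; the grid size n is our choice).
def modulus(h, delta, n=400):
    xs = [i / n for i in range(n + 1)]
    return max(abs(h(y) - h(x)) for x in xs for y in xs if abs(y - x) <= delta)

h = math.sqrt                 # uniformly continuous on [0, 1]
delta, c = 0.01, 3.7
w1 = modulus(h, delta)
w2 = modulus(h, c * delta)
# omega(h; c*delta) <= (1 + [c]) * omega(h; delta), [c] the integer part of c
print(w2 <= (1 + math.floor(c)) * w1)
```

For \(h(x)=\sqrt{x}\) one has \(\omega (h,\delta )=\sqrt{\delta }\), so the inequality reduces to \(\sqrt{c}\le 1+[c]\), which the computation confirms.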

Now we present some estimates of the rates of convergence in the power series method for Korovkin-type theorems.

Theorem 4.1

Let \(\left( T_{n}\right)\) be a sequence of linear operators from \(C^{2}[0,1]\) into itself such that \(T_{n}\left( C_{+,2}\cap C_{+}^{2}\right) \subset C_{+,2},\) for all \(n\in \mathbb {N} .\) Then, for all \(h\in C^{2}[0,1],\)

$$\begin{aligned} \left\| V_{t}\left( h\right) -h\right\| \le \left( 1+\beta \right) \omega \left( h,\delta _{t}\right) +\left( \omega \left( h,\delta _{t}\right) +\left\| h\right\| \right) \left\| V_{t}\left( h_{0}\right) -h_{0}\right\| \end{aligned}$$
(4.1)

where \(\delta _{t}:=\sqrt{\left\| V_{t}\left( \varphi _{x}\right) \right\| }\) and \(\varphi _{x}\left( y\right) =\left( y-x\right) ^{2}.\)

Proof

Let \(x\in [0,1]\) be fixed and let \(h\in C^{2}[0,1].\) We can write that

$$\begin{aligned} -\left( 1+\frac{\beta }{\delta ^{2}}\varphi _{x}\left( y\right) \right) \omega \left( h,\delta \right) \le h\left( y\right) -h\left( x\right) \le \left( 1+\frac{\beta }{\delta ^{2}}\varphi _{x}\left( y\right) \right) \omega \left( h,\delta \right) \end{aligned}$$
(4.2)

for all \(y\in [0,1]\) and for any \(\beta \ge 1\) where \(\varphi _{x}\left( y\right) =\left( y-x\right) ^{2}.\) Then by (4.2) we get that

$$\begin{aligned} g_{1,\beta }\left( y\right):= & {} \left( 1+\frac{\beta }{\delta ^{2}}\varphi _{x}\left( y\right) \right) \omega \left( h,\delta \right) +h\left( y\right) -h\left( x\right) \ge 0, \end{aligned}$$
(4.3)
$$\begin{aligned} g_{2,\beta }\left( y\right):= & {} \left( 1+\frac{\beta }{\delta ^{2}}\varphi _{x}\left( y\right) \right) \omega \left( h,\delta \right) -h\left( y\right) +h\left( x\right) \ge 0. \end{aligned}$$
(4.4)

Also for all \(y\in [0,1],\)

$$\begin{aligned} g_{1,\beta }^{^{\prime \prime }}\left( y\right) :=\frac{2\beta }{\delta ^{2}} \omega \left( h,\delta \right) +h^{^{\prime \prime }}\left( y\right) \text { and }g_{2,\beta }^{^{\prime \prime }}\left( y\right) :=\frac{2\beta }{\delta ^{2}}\omega \left( h,\delta \right) -h^{^{\prime \prime }}\left( y\right) . \end{aligned}$$

Because \(h^{^{\prime \prime }}\) is bounded on [0, 1], we can choose \(\beta \ge 1\) in such a way that \(g_{1,\beta }^{^{\prime \prime }}\left( y\right) \ge 0\) and \(g_{2,\beta }^{^{\prime \prime }}\left( y\right) \ge 0\) for each \(y\in [0,1].\) Hence \(g_{1,\beta },\) \(g_{2,\beta }\in C_{+,2}\cap C_{+}^{2}\) and then by the hypothesis

$$\begin{aligned} T_{n}\left( g_{j,\beta };x\right) \ge 0,\text { for all }n\in \mathbb {N} ,\text { }x\in [0,1]\text { and }j=1,2 \end{aligned}$$
(4.5)

and hence

$$\begin{aligned} V_{t}\left( g_{j,\beta };x\right) \ge 0,\text { for }t\in \left( 0,R\right) , \text { }x\in [0,1]\text { and }j=1,2. \end{aligned}$$

From (4.3)–(4.5) and the linearity of \(\left( T_{n}\right) ,\) we get

$$\begin{aligned}&V_{t}\left( h_{0};x\right) \omega \left( h,\delta \right) +\frac{\beta \omega \left( h,\delta \right) }{\delta ^{2}}V_{t}\left( \varphi _{x};x\right) +V_{t}\left( h;x\right) -h\left( x\right) V_{t}\left( h_{0};x\right) \ge 0,\\&V_{t}\left( h_{0};x\right) \omega \left( h,\delta \right) +\frac{\beta \omega \left( h,\delta \right) }{\delta ^{2}}V_{t}\left( \varphi _{x};x\right) -V_{t}\left( h;x\right) +h\left( x\right) V_{t}\left( h_{0};x\right) \ge 0, \end{aligned}$$

thus

$$\begin{aligned} -V_{t}\left( h_{0};x\right) \omega \left( h,\delta \right) -\frac{\beta \omega \left( h,\delta \right) }{\delta ^{2}}V_{t}\left( \varphi _{x};x\right)\le & {} h\left( x\right) V_{t}\left( h_{0};x\right) -V_{t}\left( h;x\right) \\\le & {} V_{t}\left( h_{0};x\right) \omega \left( h,\delta \right) \\&\quad +\frac{ \beta \omega \left( h,\delta \right) }{\delta ^{2}}V_{t}\left( \varphi _{x};x\right) . \end{aligned}$$

Then we obtain

$$\begin{aligned} \left| V_{t}\left( h;x\right) -h\left( x\right) \right|\le & {} \omega \left( h,\delta \right) +\left( \omega \left( h,\delta \right) +\left| h\left( x\right) \right| \right) \left| V_{t}\left( h_{0};x\right) -h_{0}\left( x\right) \right| \\&\quad +\frac{\beta \omega \left( h,\delta \right) }{\delta ^{2}}V_{t}\left( \varphi _{x};x\right) . \end{aligned}$$

If we take \(\delta :=\delta _{t}:=\sqrt{\left\| V_{t}\left( \varphi _{x}\right) \right\| }\) and take the supremum over \(x\in [0,1],\) then we get

$$\begin{aligned} \left\| V_{t}\left( h\right) -h\right\| \le \left( 1+\beta \right) \omega \left( h,\delta _{t}\right) +\left( \omega \left( h,\delta _{t}\right) +\left\| h\right\| \right) \left\| V_{t}\left( h_{0}\right) -h_{0}\right\| . \end{aligned}$$
(4.6)
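For the operators of Example 3.1 the quantity \(\delta _{t}:=\sqrt{\left\| V_{t}\left( \varphi _{x}\right) \right\| }\) appearing in this estimate can be made explicit. The sketch below is our own numerical illustration, under the assumptions of that example (\(p_{k}=1\), so the method is the Abel method, and \(T_{n}=(1+(-1)^{n})K_{n}\)); there \(V_{t}\left( \varphi _{x};x\right) =V_{t}\left( h_{2};x\right) -2xV_{t}\left( h_{1};x\right) +x^{2}V_{t}\left( h_{0};x\right)\) collapses to the Abel mean of \((1+(-1)^{n})x/n\).

```python
import math

# Sketch (our illustration, under the assumptions of Example 3.1 with p_k = 1):
# for T_n = (1 + (-1)^n) K_n, V_t(phi_x; x) reduces to the Abel mean of
# (1 + (-1)^n) * x / n, so delta_t = sqrt(||V_t(phi_x)||) tends to 0.
def abel_mean_from_1(coeffs, t, terms=100_000):
    """Truncated Abel mean (1 - t) * sum_{n>=1} coeffs(n) * t**n."""
    return (1 - t) * sum(coeffs(n) * t ** n for n in range(1, terms))

def delta_t(t, x=1.0):  # x = 1 maximizes V_t(phi_x; x) here
    return math.sqrt(abel_mean_from_1(lambda n: (1 + (-1) ** n) * x / n, t))

for t in (0.9, 0.99, 0.999):
    print(t, delta_t(t))   # decreases towards 0 as t -> 1^-
```

Since \(\delta _{t}\rightarrow 0\) and \(\omega \left( h,\delta \right) \rightarrow 0\) as \(\delta \rightarrow 0,\) the right-hand side of (4.1) tends to 0, in accordance with Theorem 2.1.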

Theorems 4.2 and 4.3 below may be proved along the same lines as Theorem 4.1, so we omit their proofs.

Theorem 4.2

Let \(\left( T_{n}\right)\) be a sequence of linear operators from \(C^{2}[0,1]\) into itself such that \(T_{n}\left( C_{+,2}\cap C_{-}^{2}\right) \subset C_{-}^{2},\) for all \(n\in \mathbb {N} .\) Then, for all \(h\in C^{2}[0,1]\),

$$\begin{aligned} \left\| V_{t}^{^{\prime \prime }}\left( h\right) -h^{^{\prime \prime }}\right\| \le \left( 1+\beta \right) \omega \left( h^{^{\prime \prime }},\delta _{t}\right) +\left( \omega \left( h^{^{\prime \prime }},\delta _{t}\right) +\left\| h^{^{\prime \prime }}\right\| \right) \left\| V_{t}^{^{\prime \prime }}\left( h_{0}\right) -h_{0}^{^{\prime \prime }}\right\| , \end{aligned}$$
(4.7)

where \(\delta _{t}:=\sqrt{\left\| V_{t}^{^{\prime \prime }}\left( -\sigma _{x}\right) \right\| }\) and \(\sigma _{x}\left( y\right) =-\dfrac{\left( y-x\right) ^{4}}{12}+1.\)

Theorem 4.3

Let \(\left( T_{n}\right)\) be a sequence of linear operators from \(C^{1}[0,1]\) into itself such that \(T_{n}\left( C_{+,1}\cap C_{+}^{1}\right) \subset C_{+}^{1},\) for all \(n\in \mathbb {N} .\) Then, for all \(h\in C^{1}[0,1]\),

$$\begin{aligned} \left\| V_{t}^{^{\prime }}\left( h\right) -h^{^{\prime }}\right\| \le \left( 1+\beta \right) \omega \left( h^{^{\prime }},\delta _{t}\right) +\left( \omega \left( h^{^{\prime }},\delta _{t}\right) +\left\| h^{^{\prime }}\right\| \right) \left\| V_{t}^{^{\prime }}\left( h_{0}\right) -h_{0}^{^{\prime }}\right\| , \end{aligned}$$
(4.8)

where \(\delta _{t}:=\sqrt{\left\| V_{t}^{^{\prime }}\left( \gamma _{x}\right) \right\| }\) and \(\gamma _{x}\left( y\right) =\dfrac{\left( y-x\right) ^{3}}{3}+1.\)

5 Conclusions

Finally, we give the following concluding remarks.

\(\mathbf {\Diamond }\) Let \(\left( T_{n}\right)\) be a sequence of linear operators from C[0, 1] into itself and \(T_{n}\left( C_{+}\right) \subset C_{+},\) for all \(n\in \mathbb {N} .\) Then for all \(h\in C[0,1],\)

$$\begin{aligned} \lim _{0<t\rightarrow R^{-}}\left\| V_{t}\left( h\right) -h\right\| =0 \end{aligned}$$
(5.1)

if and only if

$$\begin{aligned} \lim _{0<t\rightarrow R^{-}}\left\| V_{t}\left( h_{i}\right) -h_{i}\right\| =0, \quad i=0,1,2 \quad \left( \text {see [17]}\right) . \end{aligned}$$
(5.2)

\(\mathbf {\Diamond }\) We remark that all our theorems remain valid on any compact interval of \(\mathbb {R}\) instead of the unit interval [0, 1].

\(\mathbf {\Diamond }\) Theorem 2.3 works if we replace the condition \(T_{n}\left( C_{+,1}\cap C_{+}^{1}\right) \subset C_{+}^{1}\) by \(T_{n}\left( C_{+,1}\cap C_{-}^{1}\right) \subset C_{-}^{1}.\) To prove this, it is enough to consider the function \(\mu _{x}\left( y\right) =-\dfrac{\left( y-x\right) ^{3}}{3}+1\) instead of \(\gamma _{x}\left( y\right)\) defined in the proof of Theorem 2.3.