1 Introduction

The well-known theorem of Korovkin [30, 31] states that if for a sequence \((T_{n})_{n}\) of positive linear operators that map \(C\left( [0,1]\right) \) into itself we have \(T_{n}(h)\rightarrow h\) uniformly for any \(h\in \{1, x, x^{2}\}\), then \(T_{n}(f)\rightarrow f\) uniformly, for all \(f\in C([0, 1])\).

This result was generalized in numerous ways, see, for example, the authoritative monograph of Altomare–Campiti [3], the excellent survey of Altomare [2] (and the references therein) and Chapter 17 in Anastassiou [5].

On the other hand, for quantitative results concerning approximation by sublinear operators see, e.g., Bede–Coroianu–Gal [6], Anastassiou [4].

Recently, the Korovkin theorem was extended to the framework of monotone and sublinear operators acting on various function spaces in Gal–Niculescu [21, 22, 24].

On the other hand, Korovkin-type theorems for the more general notion of statistical convergence of positive linear operators were obtained by many authors, including Gadjev [19, 20], Agratini [1], Cárdenas-Morales–Garancho [7], Dirik [10], Duman–Khan–Orhan [13], Duman [11, 12], Karakus–Demirci–Duman [29] and Sakaoglu–Unver [37].

Recall that a sequence \((\alpha _{k})_{k}\) is called statistically convergent to the number L if, for every \(\varepsilon >0\), we have \(\delta (\{k\in \mathbb {N}:|\alpha _{k}-L|\ge \varepsilon \})=0\) (see Connor [8]) or, equivalently, if there exist a subset \(K\subset \mathbb {N}\) with \(\delta (K)=1\) and \(n_{0} (\varepsilon )\) such that \(k>n_{0}\) and \(k\in K\) imply \(|\alpha _{k}-L|<\varepsilon \); see, e.g., Fridy [16], Miller [32], Salat [38]. In this case, we write \(st-\lim \alpha _{k}=L\).

Here, for a subset K of \(\mathbb {N}\), its density, denoted by \(\delta (K)\), is defined by

$$\begin{aligned} \delta (K):= \lim _{n\rightarrow \infty }\frac{1}{n}\sum _{j=1}^{n}\chi _{K}(j), \end{aligned}$$

whenever the limit exists, where \(\chi _{K}\) is the characteristic function of K.

It is known that any convergent sequence is statistically convergent, but not conversely. For example, the sequence defined by \(\alpha _{n}=\sqrt{n}\) if n is a perfect square and \(\alpha _{n}=0\) otherwise is unbounded (hence divergent), yet has the property that \(st-\lim \alpha _{n}=0\).
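This behavior is easy to see numerically. The following short Python sketch (our illustration, not part of the paper; the helper names `a` and `density_of_exceptions` are ours) estimates the density of the exceptional index set for the sequence above.

```python
import math

def a(n):
    """The sequence from the text: a_n = sqrt(n) if n is a perfect square, else 0."""
    r = math.isqrt(n)
    return math.sqrt(n) if r * r == n else 0.0

def density_of_exceptions(N, eps):
    """Fraction of indices k in {1,...,N} with |a_k - 0| >= eps.
    Statistical convergence to 0 means this tends to 0 as N grows."""
    return sum(1 for k in range(1, N + 1) if abs(a(k)) >= eps) / N

# roughly sqrt(N) exceptional indices among N, so the density decays like 1/sqrt(N)
for N in (10**2, 10**4, 10**6):
    print(N, density_of_exceptions(N, eps=0.5))
```

The printed densities decay like \(1/\sqrt{N}\), while the sequence itself is unbounded, so it is statistically convergent to 0 without being convergent.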

Basic properties of statistical convergence can be found in Connor [8], Salat [38], Schoenberg [39]. It is worth mentioning that this concept has been applied to many topics: number theory (Erdős–Tenenbaum [14]), summability theory (Connor [8], Fridy [16], Fridy–Orhan [17]), probability theory (Fridy–Khan [18]), trigonometric series (Zygmund [41]), measure theory (Miller [32], Niculescu et al. [33,34,35]) and optimization theory (Pehlivan–Mamedov [36]), to cite just a few papers.

The purpose of this paper is to generalize, in Sect. 3, the results on various classical convergences in the Korovkin-type theorems for monotone and sublinear operators in Gal–Niculescu [25] to their statistical variants. Section 4 contains several examples illustrating the results obtained. Section 2 contains some preliminaries on monotone and sublinear operators and on the Choquet integral.

2 Preliminaries on Weakly Nonlinear Operators and on Choquet Integral

Let X be a metric measure space, that is, a space X endowed with a metric d and a measure m defined on the sigma field of Borel subsets of X.

If we consider the vector lattice \(\mathcal {F}(X)\) of all real-valued functions defined on X, endowed with the pointwise ordering, then important vector sublattices of \(\mathcal {F}(X)\) are

$$\begin{aligned} C(X)&=\left\{ f\in \mathcal {F}(X):\text { }f\text { is continuous and bounded}\right\} ,\\ \mathcal{A}\mathcal{C}_{b}(X)&=\left\{ f\in \mathcal {F}(X):\text { }f\text { is bounded and almost everywhere continuous}\right\} \end{aligned}$$

and

$$\begin{aligned} L^{p}(X)=\left\{ f\in \mathcal {F}(X):\text { }f\text { is Borel measurable and }|f|^{p}\text { is Lebesgue integrable}\right\} , \end{aligned}$$

for \(1\le p<+\infty \).

On C(X), we consider the uniform norm \(\Vert f\Vert =\sup \{|f(x)|:x\in X\}\), while on \(L^{p}(X)\), \(1\le p<\infty \), we consider the usual p-norm

$$\begin{aligned} \Vert f\Vert _{p}=\left( (L)\int _{X}|f(x)|^{p}dx\right) ^{1/p}. \end{aligned}$$

Let X and Y be two metric spaces and let E and F be ordered vector subspaces (or positive cones) of \(\mathcal {F}(X)\) and \(\mathcal {F}(Y)\), respectively, that contain the unity. An operator \(T:E\rightarrow F\) is called a weakly nonlinear operator (respectively a weakly nonlinear functional when \(F=\mathbb {R} \)) if it satisfies the following conditions:

  • (Sublinearity) T is subadditive and positively homogeneous, i.e.,

    $$\begin{aligned} T(f+g)\le T(f)+T(g)\quad \text {and}\quad T(af)=aT(f) \end{aligned}$$

    for all f, g in E and \(a\ge 0;\)

  • (Monotonicity) \(f\le g\) in E implies \(T(f)\le T(g).\)

  • (Translatability) \(T(f+\alpha \cdot 1)=T(f)+\alpha T(1)\) for all \(f\in E\) and \(\alpha \ge 0;\)

  • (Subunital property) \(T(1)\le 1\).

If E and F are closed vector sublattices of the Banach lattices C(X) and C(Y), respectively, then every monotone and subadditive operator (functional when \(F=\mathbb {R}\)) T \(:E\rightarrow F\) satisfies the inequality

$$\begin{aligned} \left| T(f)-T(g)\right| \le T\left( \left| f-g\right| \right) \quad \text { for all }f,g. \end{aligned}$$
(2.1)

Indeed, \(f\le g+\left| f-g\right| ~\) implies \(T(f)\le T(g)+T\left( \left| f-g\right| \right) ,\) i.e., \(T(f)-T(g)\le T\left( \left| f-g\right| \right) \), and interchanging the role of f and g, we get that \(-\left( T(f)-T(g)\right) \le T\left( \left| f-g\right| \right) .\)
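These properties can be checked numerically on a toy operator. The sketch below is our own example (not from the paper), under the assumption that a discrete moving maximum on a grid is an acceptable stand-in for a monotone, sublinear, translatable and subunital operator; it verifies subadditivity and the inequality (2.1) pointwise.

```python
import math

xs = [i / 100 for i in range(101)]  # grid on [0, 1]

def T(f, h=0.05):
    """Moving maximum of f over a window of radius h: monotone, subadditive,
    positively homogeneous, translatable and subunital (T(1) = 1)."""
    vals = [f(x) for x in xs]
    return [max(v for j, v in enumerate(vals) if abs(xs[j] - x) <= h) for x in xs]

f = lambda x: math.sin(7 * x)
g = lambda x: x * x - 0.3

Tf, Tg, Tabs = T(f), T(g), T(lambda x: abs(f(x) - g(x)))
# inequality (2.1): |T(f) - T(g)| <= T(|f - g|), pointwise on the grid
ok_21 = all(abs(u - v) <= w + 1e-12 for u, v, w in zip(Tf, Tg, Tabs))
# subadditivity: T(f + g) <= T(f) + T(g), pointwise on the grid
ok_sub = all(s <= u + v + 1e-12 for s, u, v in zip(T(lambda x: f(x) + g(x)), Tf, Tg))
print(ok_21, ok_sub)
```

The same windowed maximum also satisfies \(T(1)=1\) and \(T(f+\alpha )=T(f)+\alpha \), mirroring the translatability and subunital conditions above.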

If T is linear, then the property of monotonicity is equivalent to that of positivity, that is, to the fact that

$$\begin{aligned} T(f)\ge 0 \quad \text { for all }f\ge 0. \end{aligned}$$

If an operator (functional when \(F=\mathbb {R}\)) T is monotone and positively homogeneous, then we necessarily have

$$\begin{aligned} T(0)=0. \end{aligned}$$

The properties of weakly nonlinear operators were suggested by those of the nonlinear Choquet integral. For this reason, we briefly recall this integral below. Full details can be found in the books of Grabisch [26] and Wang and Klir [40].

Let \((X,\mathcal {A})\) be a measurable space, where X is nonempty and \({\mathcal {A}}\) is a \(\sigma \)-algebra of subsets of X.

Definition 1

The set function \(\mu :{\mathcal {A}}\rightarrow [0,1]\) is called a capacity if it verifies the following two conditions:

(a) \(\mu (\emptyset )=0;\)

(b) \(\mu (A)\le \mu (B)\) for all \(A,B\in {\mathcal {A}}\) with \(A\subset B\) (monotonicity).

Clearly, an important class of capacities is that of probability measures (that is, of capacities with the property of \(\sigma \)-additivity). Also, probability distortions represent important examples of nonadditive capacities. They are of the form \(\mu (A)=u(P(A))\), where \(P:\mathcal {A}\rightarrow [0,1]\) is a probability measure and \(u:[0,1]\rightarrow [0,1]\) is a nondecreasing continuous function with \(u(0)=0\) and \(u(1)=1\). For example, one may choose \(u(t)=t^{\alpha }\) with \(\alpha >0.\)

Notice that when the distortion u is concave (for example, when \(u(t)=t^{\alpha }\) with \(0<\alpha <1\)), then \(\mu \) is also submodular in the sense that

$$\begin{aligned} \mu (A\cup B)+\mu (A\cap B)\le \mu (A)+\mu (B)\quad \text { for all }A,B\in \mathcal {A}. \end{aligned}$$
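A quick numerical check of this submodularity (our illustration; the eight-point uniform space and the distortion \(u(t)=t^{1/2}\) are assumptions of the sketch, not data from the paper):

```python
import math
import random

omega = range(8)
P = lambda A: len(A) / 8                 # uniform probability on 8 points

def mu(A):
    """Distorted capacity mu(A) = u(P(A)) with the concave distortion u(t) = sqrt(t)."""
    return math.sqrt(P(A))

random.seed(0)
for _ in range(1000):
    A = {x for x in omega if random.random() < 0.5}
    B = {x for x in omega if random.random() < 0.5}
    # submodularity: mu(A u B) + mu(A n B) <= mu(A) + mu(B)
    assert mu(A | B) + mu(A & B) <= mu(A) + mu(B) + 1e-12
print("submodularity verified on 1000 random pairs")
```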

The Choquet concept of integrability with respect to a capacity is considered for all functions \(f:X\rightarrow \mathbb {R}\) such that \(f^{-1}(A)\in {\mathcal {A}}\) for every Borel subset A of \(\mathbb {R}\).

Definition 2

The Choquet integral of f with respect to the capacity \(\mu \) is defined as the sum of two Riemann improper integrals,

$$\begin{aligned} (C)\int _{X}fd\mu= & {} \int _{0}^{+\infty }\mu \left( \{x\in X:f(x)\ge t\}\right) dt\\{} & {} \quad +\int _{-\infty }^{0}\left[ \mu \left( \{x\in X:f(x)\ge t\}\right) -1\right] dt. \end{aligned}$$

f is called Choquet integrable on X if both integrals above are finite.

Notice that if \(f\ge 0\), then the second integral in Definition 2 is equal to 0.

When the set function \(\mu \) is a \(\sigma \)-additive measure, the Choquet integral coincides with the Lebesgue integral.

A function f is said to be Choquet integrable on a set \(A\in \mathcal {A}\) if \(f\chi _{A}\) is integrable in the sense of Definition 2, and we denote

$$\begin{aligned} (C)\int _{A}fd\mu =(C)\int _{X}f\chi _{A}d\mu . \end{aligned}$$
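For a nonnegative function on a finite space, Definition 2 reduces to a finite sum over the decreasing level sets, which makes the integral easy to compute. The sketch below is our own discrete illustration (the four-point space and all names are assumptions); it also checks the statement above that the Choquet integral coincides with the Lebesgue integral when \(\mu \) is \(\sigma \)-additive.

```python
import math

def choquet(values, mu):
    """Discrete Choquet integral of a nonnegative function.
    values: dict point -> f(point) >= 0;  mu: capacity, a function on sets."""
    pts = sorted(values, key=values.get)          # points by increasing value of f
    total, prev = 0.0, 0.0
    for i, p in enumerate(pts):
        level = set(pts[i:])                      # level set {x : f(x) >= values[p]}
        total += (values[p] - prev) * mu(level)
        prev = values[p]
    return total

X = range(4)
f = {x: float(x) for x in X}                      # f(x) = x on {0, 1, 2, 3}
P = lambda A: len(A) / 4                          # uniform (additive) probability
mu = lambda A: math.sqrt(P(A))                    # distorted (nonadditive) capacity

lebesgue = sum(f[x] * 0.25 for x in X)            # ordinary integral w.r.t. P
print(choquet(f, P), lebesgue, choquet(f, mu))
```

With the additive P the two values agree (both 1.5 here), while the concave distortion \(\sqrt{P}\) yields a strictly larger value, reflecting the nonadditivity of the capacity.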

The basic properties of the Choquet integral, seen as a functional, are as follows: it is monotone, positively homogeneous and subadditive (the latter provided that \(\mu \) is submodular).

3 Main Results

Several extensions of Korovkin’s theorem in the case of weakly nonlinear operators acting on a sublattice of a space C(X) with \(X=\mathbb {R}^{N}\) and the uniform convergence on compact sets, almost everywhere convergence, convergence in measure and convergence in \(L^{p}\)-space can be found in the paper Gal–Niculescu [25]. In this section, we discuss the analogues for their statistical variants.

Notice that the case of statistical uniform convergence in C([a, b]) in Korovkin-type theorems for monotone and sublinear operators was already studied by Iancu [27].

Firstly, we consider the case of statistical almost everywhere convergence.

Theorem 1

Suppose that X is a locally compact subset of \(\mathbb {R},\) and E is a vector sublattice of \(\mathcal {F}(X)\) that contains the following set of test functions: \(1, x, -x, x^{2}\).

(i) If \((T_{n})_{n}\) is a sequence of monotone and sublinear operators from E into E such that

$$\begin{aligned} st - \lim T_{n}(f)= f\quad \text { a.e.} \end{aligned}$$
(3.1)

for each of the above mentioned test functions, then the property (3.1) holds for all nonnegative functions f in \(E\cap \mathcal{A}\mathcal{C}_{b}(X)\).

(ii) If, in addition, each operator \(T_{n}\) is translatable, then \(st - \lim T_{n}(f)= f\) a.e. for all \(f\in E\cap \mathcal{A}\mathcal{C}_{b}\left( X\right) \).

Moreover, in both cases (i) and (ii), if X is included in \([0, +\infty )\), then the family of test functions can be reduced to \(1, -x, x^{2}\).

Proof

(i) Let \(f\in E\cap \mathcal{A}\mathcal{C}_{b}(X)\) and let \(\omega \) be a point of continuity of f which is also a point where

$$\begin{aligned} st - \lim T_{n}(h)(\omega ) = h(\omega ) \end{aligned}$$

for each of the functions \(h\in \left\{ 1, x, -x, x^{2}\right\} \).

Then for \(\varepsilon >0\) arbitrarily fixed, there is \(\delta >0\) such that

$$\begin{aligned} |f(x)-f(\omega )|\le \varepsilon \quad \text { for every }x\in X\text { with } |x-\omega |\le \delta . \end{aligned}$$

If \(|x-\omega |\ge \delta \), then

$$\begin{aligned} |f(x)-f(\omega )|\le \frac{2\Vert f\Vert _{\infty }}{\delta ^{2}}\cdot | x-\omega |^{2}, \end{aligned}$$

so that

$$\begin{aligned} |f(x)-f(\omega )|\le \varepsilon +\frac{2\Vert f\Vert _{\infty }}{\delta ^{2}} \cdot |x-\omega |^{2}\quad \text { for all }x\in X. \end{aligned}$$
(3.2)

Denoting

$$\begin{aligned} M=\max \left\{ \omega , 0\right\} , \end{aligned}$$

one can restate (3.2) as

$$\begin{aligned} |f(x)-f(\omega )|\le \varepsilon +\frac{2\Vert f\Vert _{\infty }}{\delta ^{2}}\left[ x^{2} +2 x(M-\omega )+2M (-x)+|\omega |^{2}\right] . \end{aligned}$$

Taking into account the above formula and the fact that the operators \(T_{n}\) are subadditive and positively homogeneous, we infer in the case where \(f\ge 0\) that

$$\begin{aligned} |T_{n}(f)-f(\omega )|&\le \left| T_{n}(f)-T_{n}(f(\omega )\cdot 1)+f(\omega )T_{n}(1)-f(\omega )\right| \\&\le T_{n}(|f-f(\omega )|)+f(\omega )|T_{n}(1)-1|\\&\le \varepsilon +\frac{2\Vert f\Vert _{\infty }}{\delta ^{2}}\left[ T_{n} (t^{2})+2(M-\omega )T_{n}(t)\right. \\&+\left. 2 M T_{n}\left( -t\right) + |\omega |^{2}T_{n}(1)\right] +f(\omega )|T_{n}(1)-1|\\&\le \varepsilon + C[ |T_{n}(t^{2})+2(M-\omega )T_{n}(t) +2 M T_{n}\left( -t\right) \\&+ |\omega |^{2}T_{n}(1) | +|T_{n}(1)-1|], \end{aligned}$$

for all \(n\in \mathbb {N}\), where \(C=\max \{\frac{2\Vert f\Vert _{\infty }}{\delta ^{2}}, f(\omega )\}\).

Let \(\eta >0\) be arbitrary and choose \(\varepsilon < \eta \) (which is possible since \(\varepsilon \) is arbitrarily small). Denoting

$$\begin{aligned} E_{0}= & {} \{n\in \mathbb {N}; |T_{n}(f)-f(\omega )|\ge \eta \},\\ E_{1}= & {} \{n\in \mathbb {N}; |T_{n}(t^{2})+2(M-\omega )T_{n}(t) +2 M T_{n}\left( -t\right) + |\omega |^{2}T_{n}(1) |\ge \frac{\eta -\varepsilon }{2 C}\}, \end{aligned}$$

and

$$\begin{aligned} E_{2}=\{n\in \mathbb {N}; |T_{n}(1)-1|\ge \frac{\eta -\varepsilon }{2 C}\}, \end{aligned}$$

it follows

$$\begin{aligned} E_{0}\subset E_{1} \cup E_{2}. \end{aligned}$$

This implies that for all \(j\in \mathbb {N}\), we have

$$\begin{aligned} \chi _{E_{0}}(j)\le \chi _{E_{1}}(j)+\chi _{E_{2}}(j), \end{aligned}$$

which immediately leads to \(\delta (E_{0})\le \delta (E_{1})+\delta (E_{2})\). Since \(st - \lim T_{n}(t^{2})=\omega ^{2}\), \(st - \lim T_{n}(t)=\omega \), \(st - \lim T_{n}(-t)=-\omega \), \(st - \lim T_{n}(1)=1\) and since it is easy to show that the sum of two statistically convergent sequences is a statistically convergent sequence (to the sum of the two limits), we immediately arrive at

$$\begin{aligned} st - \lim T_{n}(f)(\omega ) = f(\omega ). \end{aligned}$$

Since the admissible points \(\omega \) form a set of full measure, assertion (i) follows.

(ii) Suppose in addition that each operator \(T_{n}\) is also translatable. According to the assertion (i), 

$$\begin{aligned} st - \lim T_{n}(f+\Vert f\Vert ) = f+\Vert f\Vert \quad \text { a.e.} \end{aligned}$$

By translatability, \(T_{n}(f+\Vert f\Vert )=T_{n}(f)+\left\| f\right\| T_{n}(1),\) so we immediately conclude that \(st - \lim T_{n}(f)=f\) a.e.

Concerning the last assertion of Theorem 1, notice that when X is included in \([0, +\infty )\), one can restate the estimate (3.2) as

$$\begin{aligned} \left| f(x)-f(\omega )\right| \le \varepsilon +\frac{2\Vert f\Vert _{\infty }}{\delta ^{2}}\left[ x^{2}+2(-x)(\omega )+ |\omega |^{2}\right] , \end{aligned}$$

which leads to

$$\begin{aligned}&|T_{n}(f)-f(\omega )|\le T_{n}(|f-f(\omega )|)+f(\omega )|T_{n}(1)-1|\\&\le \varepsilon +\frac{2\Vert f\Vert _{\infty }}{\delta ^{2}}\left[ T_{n}\left( t^{2}\right) + 2\omega T_{n}\left( -t\right) + |\omega |^{2}T_{n}(1)\right] +f(\omega )|T_{n}(1)-1| \end{aligned}$$

in the case where \(f\ge 0.\) Then, the proof continues analogously to cases (i) and (ii). \(\square \)

Corollary 1

Let \(\mathcal {R}([a,b])\) be the vector lattice of all Riemann integrable functions defined on [a, b] and let \((T_{n})_{n}\) be a sequence of monotone and sublinear operators from \(\mathcal {R}([a,b])\) into itself such that

$$\begin{aligned} st-\lim T_{n}(h)=h \quad \text { a.e.} \end{aligned}$$

for each of the functions \(h\in \{1,x,-x,x^{2}\}\). Then, for all nonnegative functions \(h\in \mathcal {R}([a,b])\), this statistical convergence holds. If, in addition, all the operators \(T_{n}\) are translatable, then the statistical convergence holds for all Riemann integrable functions.

Denoting by \(R_{loc}[0, +\infty )\) the set of all functions \(f:[0, +\infty )\rightarrow \mathbb {R}\) which are Riemann integrable on each closed subinterval of \([0, +\infty )\) and reasoning exactly as for Theorem 1, we immediately get the following result.

Corollary 2

Let \(L_{n}:R_{loc}[0, +\infty )\rightarrow R_{loc}[0, +\infty )\), \(n\in \mathbb {N}\), be a sequence of monotone and sublinear operators such that for all \(h\in \{1, -x, x^{2}\}\), there exists \(A_{h}\subset [0, +\infty )\) of Lebesgue measure zero with \(st- \lim L_{n}(h)(x)=h(x)\) for all \(x\in [0, +\infty )\setminus A_{h}\).

Then, for all nonnegative functions \(f\in R_{loc}[0,+\infty )\), we have

$$\begin{aligned} st-\lim L_{n}(f)(x)=f(x),\quad \text {a.e. on }[0,+\infty ). \end{aligned}$$

If all the operators are also translatable, then this statistical convergence holds for all \(f\in R_{loc}[0,+\infty )\).

Remark 1

If \(st - \lim L_{n}(h)(x)=h(x)\) for all \(x\in [0, +\infty )\) and \(h\in \{1, -x, x^{2}\}\), then the conclusion of the above corollary is the statistical convergence

$$\begin{aligned} st- \lim L_{n}(f)(x)=f(x) \end{aligned}$$

at each continuity point of \(f\in R_{loc}[0, \infty )\).

In what follows, we study the influence of convergence in measure on statistical Korovkin-type theorems.

Recall that a sequence \((f_{n})_{n}\), \(n\in \mathbb {N}\), of measurable functions converges statistically in the (\(\sigma \)-additive) measure m to the measurable function f if

$$\begin{aligned} st - \lim m\left( \left\{ x:\left| f_{n}-f\right| \ge \varepsilon \right\} \right) =0 \quad \text { for all }\varepsilon >0. \end{aligned}$$

Equivalently, it means that for all \(\eta , \varepsilon >0\), we have \(\delta (\{k\in \mathbb {N}; F_{k}(\varepsilon )\ge \eta \})=0\), where

$$\begin{aligned} F_{k}(\varepsilon )=m(\{x\in [0, 1]; |f_{k}(x) - f(x)|\ge \varepsilon \}). \end{aligned}$$
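As a concrete illustration (ours, not the paper's), take \(f_{k}=k\cdot \chi _{[0,1/k]}\) and \(f=0\) on [0, 1]: then \(F_{k}(\varepsilon )=1/k\) whenever \(k\ge \varepsilon \), so the exceptional index set \(\{k\in \mathbb {N}; F_{k}(\varepsilon )\ge \eta \}\) is finite and thus has density zero.

```python
def F(k, eps):
    """F_k(eps) = m({x in [0,1] : |f_k(x)| >= eps}) for f_k = k * chi_[0, 1/k]."""
    return 1.0 / k if k >= eps else 0.0

eta, eps, N = 0.01, 0.5, 10**4
exceptions = [k for k in range(1, N + 1) if F(k, eps) >= eta]
# {k : F_k(eps) >= eta} = {k <= 100}, so the density estimate is 100/N
print(len(exceptions) / N)
```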

We have the following result.

Theorem 2

Let C([0, 1]) be the vector lattice of all continuous functions on [0, 1] and let \((T_{n})_{n}\) be a sequence of monotone, subunital and sublinear operators from C([0, 1]) into itself such that

$$\begin{aligned} st-\lim T_{n}(h)=h \quad \text { in measure} \end{aligned}$$

for each of the functions \(h\in \{1,x,-x,x^{2}\}\). Then, this statistical convergence holds for all nonnegative functions \(h\in C([0,1])\). If all the operators \(T_{n}\) are also translatable, then this statistical convergence holds for all \(f\in C([0, 1])\).

Proof

Let f be nonnegative and let \(\varepsilon >0\) be arbitrary. From the uniform continuity of f, by the standard method, it follows that for any sufficiently small \(\varepsilon _{0}>0\), which we choose with \(\varepsilon _{0} < \varepsilon \), there exists \(\delta _{0}>0\) such that

$$\begin{aligned} |f(t)-f(x)|\le \varepsilon _{0}+2\Vert f\Vert _{\infty }\cdot \frac{(t-x)^{2}}{\delta _{0}^{2}}, \text{ for } \text{ all } t, x \in [0, 1], \end{aligned}$$

where \(\Vert \cdot \Vert _{\infty }\) denotes the uniform norm of f. Applying \(T_{n}\), it follows

$$\begin{aligned} |T_{n}(f)(x)-f(x)|\le & {} \varepsilon _{0}+\frac{2 \Vert f\Vert _{\infty }}{\delta _{0}^{2}}\cdot T_{n}((t-x)^{2})(x)\\\le & {} \varepsilon _{0}+\frac{2 \Vert f\Vert _{\infty }}{\delta _{0}^{2}}\cdot (T_{n}(e_{2})(x)+2 x T_{n}(-e_{1})(x)+x^{2}), \end{aligned}$$

implying that

$$\begin{aligned}{} & {} \{x\in [0, 1]: |T_{n}(f)(x)-f(x)|\ge \varepsilon \}\\{} & {} \quad \subset \left\{ x\in [0, 1]; \varepsilon _{0}+\frac{2 \Vert f\Vert _{\infty }}{\delta _{0}^{2} }\cdot (T_{n}(e_{2})(x)+2 x T_{n}(-e_{1})(x)+x^{2})\ge \varepsilon \right\} \\{} & {} \quad =\left\{ x\in [0, 1]; T_{n}(e_{2})(x)+2 x T_{n}(-e_{1})(x)+x^{2}\ge (\varepsilon -\varepsilon _{0})\cdot \frac{\delta _{0}^{2}}{2 \Vert f\Vert _{\infty }}\right\} \\{} & {} \quad =\left\{ x\in [0, 1]; (T_{n}(e_{2})(x)-x^{2})+2 x (T_{n}(-e_{1})(x)+x)\ge (\varepsilon -\varepsilon _{0})\cdot \frac{\delta _{0}^{2}}{2 \Vert f\Vert _{\infty }}\right\} \\{} & {} \quad \subset \left\{ x\in [0, 1]; |T_{n}(e_{2})(x)-x^{2}|\ge (\varepsilon -\varepsilon _{0})\cdot \frac{\delta _{0}^{2}}{8 \Vert f\Vert _{\infty }}\right\} \\{} & {} \quad \cup \left\{ x\in [0, 1]; |T_{n}(-e_{1})(x)+x|\ge (\varepsilon -\varepsilon _{0})\cdot \frac{\delta _{0}^{2}}{8 \Vert f\Vert _{\infty }}\right\} . \end{aligned}$$

This implies that

$$\begin{aligned}{} & {} m(\{x\in [0, 1]: |T_{n}(f)(x)-f(x)|\ge \varepsilon \})\\{} & {} \quad \le m\left( \left\{ x\in [0, 1]; |T_{n}(e_{2})(x)-x^{2}|\ge (\varepsilon -\varepsilon _{0})\cdot \frac{\delta _{0}^{2}}{8 \Vert f\Vert _{\infty }}\right\} \right) \\{} & {} \quad + m\left( \left\{ x\in [0, 1]; |T_{n}(-e_{1})(x)+x|\ge (\varepsilon -\varepsilon _{0})\cdot \frac{\delta _{0}^{2}}{8 \Vert f\Vert _{\infty }}\right\} \right) . \end{aligned}$$

For a given \(\eta >0\), denoting now

$$\begin{aligned} E= & {} \{n\in \mathbb {N}; m(\{x\in [0, 1]; |T_{n}(f)(x)-f(x)|\ge \varepsilon \})\ge \eta \},\\ E_{1}= & {} \{n\in \mathbb {N}; m(\{x\in [0, 1]; |T_{n}(e_{2})(x)-x^{2}|\ge (\varepsilon -\varepsilon _{0})\frac{\delta _{0}^{2}}{8 \Vert f\Vert _{\infty }}\})\ge \frac{\eta }{2}\},\\ E_{2}= & {} \{n\in \mathbb {N}; m(\{x\in [0, 1]; |T_{n}(-e_{1})(x)+x|\ge (\varepsilon - \varepsilon _{0})\frac{\delta _{0}^{2}}{8 \Vert f\Vert _{\infty }}\})\ge \frac{\eta }{2}\}, \end{aligned}$$

it follows that \(E\subset E_{1} \cup E_{2}\). Denoting \(\gamma =(\varepsilon -\varepsilon _{0})\cdot \frac{\delta _{0}^{2}}{8 \Vert f\Vert _{\infty }}\), since \(\varepsilon >\varepsilon _{0}\) can be chosen arbitrarily close to \(\varepsilon _{0}\), it follows that \(\gamma >0\) can be chosen arbitrarily close to 0.

Now, by reasoning similar to that in the proof of Theorem 1, we are led to

$$\begin{aligned} \delta (E)\le \delta (E_{1})+\delta (E_{2}), \end{aligned}$$

and to the desired conclusion for \(f\ge 0\).

Suppose now that all the operators are translatable too and that f is of arbitrary sign. Since \(f+\Vert f\Vert _{\infty }\ge 0\), it follows that \(st - \lim T_{n} (f+\Vert f\Vert _{\infty })=f+\Vert f\Vert _{\infty }\) in measure. By translatability, \(T_{n}(f+\Vert f\Vert _{\infty })=T_{n}(f)+ \Vert f\Vert _{\infty }T_{n}(1)\) and, since \(st - \lim T_{n}(1)=1\) in measure, it immediately follows that \(st - \lim T_{n}(f) = f\) in measure. \(\square \)

At the end of this section, we consider the case of statistical convergence in \(L^{p}\)-space.

Recall that a sequence \((f_{n})_{n}\) in \(L^{p}(X)\) is called statistically convergent in \(L^{p}\) to \(f\in L^{p}(X)\), if

$$\begin{aligned} st - \lim \Vert f_{n} - f\Vert _{p}=0, \end{aligned}$$

where \(\Vert \cdot \Vert _{p}\) denotes the usual norm in \(L^{p}(X)\).

Theorem 3

Let \(1\le p<+\infty \) and \((T_{n})_{n}\) be a sequence of monotone, subunital and sublinear operators from \(L^{p}([0,1])\) into itself, uniformly bounded and such that

$$\begin{aligned} st-\lim T_{n}(h)=h \quad \text { in }L^{p} \end{aligned}$$

for each of the functions \(h\in \{1,x,-x,x^{2}\}\). Then, this statistical convergence holds for all nonnegative functions \(h\in L^{p}([0,1])\).

If all \(T_{n}\) are translatable too, then \(T_{n}(f)\) statistically converges to f in p-mean for all \(f\in C([0, 1])\).

Proof

Let \(\varepsilon > 0\) be arbitrarily small; we may choose \(\varepsilon <1\). By hypothesis,

$$\begin{aligned} st - \lim \Vert T_{n}(h)(x)-h(x)\Vert _{p}=0, \end{aligned}$$

for each of the test functions \(h\in \{1, x, -x, x^{2}\}\). Here, the notation \(\Vert T_{n} (f)(x)\Vert _{p}\) means that the \(L^{p}\)-norm is taken with respect to x.

Firstly, let us suppose that \(f\in L^{p}([0, 1])\) is nonnegative.

Since the classical Kantorovich polynomials given by the formula

$$\begin{aligned} K_{n}(f)(x)=(n+1)\sum _{k=0}^{n}p_{n, k}(x)\int _{k/(n+1)}^{(k+1)/(n+1)}f(t)d t, \end{aligned}$$

with \(p_{n, k}(x)={\left( {\begin{array}{c}n \\ k\end{array}}\right) }x^{k}(1-x)^{n-k}\), converge in \(L^{p}\) to \(f\in L^{p}([0, 1])\) (see, e.g., Lorentz [28], Theorem 2.1, page 30) and since evidently \(K_{n}(f)\ge 0\) on [0, 1], for all \(n\in \mathbb {N}\), there exists an index \(n_{1}\) such that

$$\begin{aligned} \Vert K_{n_{1}}(f)(x)-f(x)\Vert _{p}<\varepsilon . \end{aligned}$$

For the simplicity of notation, let us denote \(g(x)=K_{n_{1}}(f)(x)\), which obviously is a nonnegative continuous function on [0, 1].
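As a numerical aside (our sketch, not part of the argument), the Kantorovich polynomials are easy to evaluate when an antiderivative of f is available; below we check the convergence \(K_{n}(e_{1})\rightarrow e_{1}\) at a few points. The helper name `kantorovich` and the use of an antiderivative `F` are our conventions.

```python
import math

def kantorovich(F, n, x):
    """K_n(f)(x) = (n+1) * sum_k p_{n,k}(x) * (F((k+1)/(n+1)) - F(k/(n+1))),
    where F is an antiderivative of f and p_{n,k}(x) = C(n,k) x^k (1-x)^(n-k)."""
    s = 0.0
    for k in range(n + 1):
        p = math.comb(n, k) * x**k * (1 - x) ** (n - k)
        s += p * (F((k + 1) / (n + 1)) - F(k / (n + 1)))
    return (n + 1) * s

F_id = lambda t: t * t / 2        # antiderivative of f(t) = t
for n in (5, 50, 500):
    err = max(abs(kantorovich(F_id, n, x) - x) for x in (0.1, 0.5, 0.9))
    print(n, err)                 # in fact K_n(e_1)(x) = (n x + 1/2)/(n + 1)
```

Nonnegativity of \(K_{n}(f)\) for \(f\ge 0\) is clear from the formula, since every factor in each summand is then nonnegative.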

Since \((T_{n})\) is uniformly bounded, there exists \(M>0\) such that \(\Vert |T_{n}| \Vert \le M\), for all \(n\in \mathbb {N}\).

By the properties of the norm \(\Vert \cdot \Vert _{p}\), we obtain

$$\begin{aligned}{} & {} \Vert T_{n}(f)(x)-f(x)\Vert _{p}\\{} & {} \quad \le \Vert T_{n}(f)(x)-T_{n}(g)(x)\Vert _{p}+\Vert T_{n} (g)(x)-g(x)\Vert _{p}+\Vert g(x)-f(x)\Vert _{p}\\{} & {} \quad \le \Vert T_{n}(|f-g|)(x)\Vert _{p}+\Vert T_{n}(|g(t)-g(x)|)(x)\Vert _{p}+\Vert g-f\Vert _{p} \\{} & {} \quad \le \varepsilon (1+M)+\Vert T_{n}(|g(t)-g(x)|)(x)\Vert _{p}. \end{aligned}$$

Since \(g\in C([0, 1])\), by the standard method, there exists a \(\delta >0\) (depending only on \(\varepsilon \)), such that

$$\begin{aligned} |g(t)-g(x)|\le \varepsilon + \frac{2\Vert g\Vert _{\infty }}{\delta ^{2}}\cdot (t-x)^{2}, \text{ for } \text{ all } t, x\in [0, 1]. \end{aligned}$$

This implies

$$\begin{aligned}{} & {} \Vert T_{n}(|g(t)-g(x)|)(x)\Vert _{p}\le \Vert T_{n}(\varepsilon )(x)+\frac{2\Vert g\Vert _{\infty } }{\delta ^{2}}T_{n}((t-x)^{2})(x)\Vert _{p}\\{} & {} \quad \le \varepsilon \Vert T_{n}(1)(x)\Vert _{p}+\frac{2\Vert g\Vert _{\infty }}{\delta ^{2}} \Vert T_{n}((t-x)^{2})(x)\Vert _{p}\\{} & {} \quad \le \varepsilon (\Vert T_{n}(1)(x)-1\Vert _{p}+1)\\{} & {} \quad +\frac{2\Vert g\Vert _{\infty }}{\delta ^{2}}\Vert (T_{n}(e_{2})(x)-x^{2}) + x^{2}+ 2 x [T_{n}(-e_{1})(x)+e_{1}(x)] - 2x^{2}+x^{2}T_{n}(1)(x)\Vert _{p}\\{} & {} \quad \le \frac{2\Vert g\Vert _{\infty }}{\delta ^{2}}\left( \Vert T_{n}(e_{2})(x)-x^{2}\Vert _{p}+2\Vert T_{n}(-e_{1})(x)+x\Vert _{p}+\Vert x^{2} (T_{n}(1)(x)-1)\Vert _{p}\right) \\{} & {} \quad +\varepsilon \Vert T_{n}(1)(x)-1\Vert _{p}+\varepsilon , \end{aligned}$$

for all \(n\in \mathbb {N}\).

As a first conclusion, we have

$$\begin{aligned}{} & {} \Vert T_{n}(f)(x)-f(x)\Vert _{p}\\{} & {} \quad \le \frac{2\Vert g\Vert _{\infty }}{\delta ^{2}}\left( \Vert T_{n}(e_{2})(x)-x^{2}\Vert _{p}+2\Vert T_{n}(-e_{1})(x)+x\Vert _{p}+\Vert x^{2}(T_{n}(1)(x)-1)\Vert _{p}\right) \\{} & {} \quad +\varepsilon \Vert T_{n}(1)(x)-1\Vert _{p}+\varepsilon (2+M), \end{aligned}$$

for all \(n\in \mathbb {N}\).

Now, since \(\varepsilon >0\) is arbitrarily small, for given arbitrary \(\eta >0\), choose

$$\begin{aligned} \varepsilon <\min \left\{ 1, \frac{\eta }{2+M}\right\} . \end{aligned}$$

Let us denote

$$\begin{aligned} E= & {} \{n\in \mathbb {N}; \Vert T_{n}(f) - f \Vert _{p}\ge \eta \},\\ E_{1}= & {} \{n\in \mathbb {N}; \Vert T_{n}(e_{2})(x)-x^{2}\Vert _{p}\ge \frac{\eta -\varepsilon (2+M)}{8 C_{1}}\},\\ E_{2}= & {} \{n\in \mathbb {N}; \Vert T_{n}(-e_{1})(x)+x\Vert _{p}\ge \frac{\eta -\varepsilon (2+M)}{8 C_{1}}\},\\ E_{3}= & {} \{n\in \mathbb {N}; \Vert T_{n}(1)(x)-1\Vert _{p}\ge \frac{\eta -\varepsilon (2+M)}{8 C_{1}}\}, \end{aligned}$$

where \(C_{1}=\max \{1, 2\Vert g\Vert _{\infty }/\delta ^{2}\}\).

By the previous inequality, we easily get

$$\begin{aligned} E\subset E_{1}\cup E_{2}\cup E_{3}. \end{aligned}$$

Also, the choices for \(\eta \) and \(\varepsilon \) show that the above thresholds are positive and can be chosen arbitrarily small, which, by reasonings similar to those in the proof of Theorem 1, leads to

$$\begin{aligned} \delta (E)\le \delta (E_{1})+\delta (E_{2})+\delta (E_{3}), \end{aligned}$$

which proves the theorem for all nonnegative \(f\in L^{p}([0, 1])\).

Now, if, in particular, \(f\in C([0, 1])\) is of arbitrary sign and all \(T_{n}\) are translatable too, then applying the previous result to \(f+\Vert f\Vert _{\infty }\), it easily follows that \(T_{n}(f)\) statistically converges to f in \(L^{p}\). \(\square \)

Remark 2

Evidently, Theorems 1, 2 and 3 also hold for sequences of positive linear operators satisfying the corresponding conditions on the test functions.

4 Applications

In this section, we present some concrete examples illustrating the above results.

Example 1

Let us consider the Bernstein–Kantorovich–Choquet polynomial operators for functions of one real variable,

$$\begin{aligned} K_{n,\mu }^{(1)}:\mathcal {R}([0, 1])\rightarrow \mathcal {R}([0, 1]), \end{aligned}$$

defined by the formula

$$\begin{aligned} K_{n,\mu }^{(1)}(f)(x)=\sum _{k=0}^{n}p_{n,k}(x)\cdot \frac{(C)\int _{k/(n+1)}^{(k+1)/(n+1)}f(\beta _{n} t)\textrm{d}\mu (t)}{\mu ([k/(n+1),(k+1)/(n+1)])}, \end{aligned}$$

with \(\mu =\sqrt{m}\),

$$\begin{aligned} p_{n,k}(t)={\left( {\begin{array}{c}n\\ k\end{array}}\right) }t^{k}(1-t)^{n-k}, \quad \text { for }t\in [0,1]\text { and }n\in \mathbb {N} \end{aligned}$$

and \(0<\beta _{n}\le 1\) satisfying \(st - \lim \beta _{n} = 1\).

According to the results in Section 3 in Gal–Niculescu [21], \(K_{n,\mu }(x^{k})\rightarrow x^{k}\), \(k\in \{0,1,2\}\), and \(K_{n,\mu }(-x)\rightarrow -x\), even uniformly on [0, 1], where \(K_{n,\mu }\) denotes the operator corresponding to \(\beta _{n}=1\).

Also, according to Section 5 in [24], these operators are monotone, sublinear and translatable, even if they are defined on the lattice \(\mathcal {R}([0, 1])\).

But \(K_{n, \mu }^{(1)}(1)=1\), \(K_{n, \mu }^{(1)}(t)=\beta _{n}K_{n, \mu }(t)\), \(K_{n, \mu }^{(1)}(-t)=\beta _{n}K_{n, \mu }(-t)\) and \(K_{n, \mu }^{(1)}(t^{2})=\beta _{n}^{2}K_{n, \mu }(t^{2})\), so that \(st - \lim K_{n,\mu }^{(1)}(h)=h\) for each \(h\in \{1, x, -x, x^{2}\}\). Applying now Corollary 1, it follows that \(st - \lim K_{n,\mu }^{(1)}(f)(x)=f(x)\), at each point x of continuity of f.

Example 2

Let us consider the Szász–Mirakjan–Kantorovich–Choquet operators given by the formula

$$\begin{aligned} S_{n, \mu }(f)(x)=e^{-n x}\sum _{k=0}^{\infty }\frac{(C)\int _{k/n}^{(k+1)/n}f(t)d \mu }{\mu ([k/n, (k+1)/n])}\cdot \frac{(n x)^{k}}{k !}, \end{aligned}$$

where \(\mu (A)=\sqrt{m(A)}\), \(x\in [0, +\infty )\), \(n\in \mathbb {N}\).

Define now

$$\begin{aligned} S_{n, \mu }^{(1)}(f)(x)=e^{-n x}\sum _{k=0}^{\infty }\frac{(C)\int _{k/n} ^{(k+1)/n}f(\beta _{n} t)d \mu }{\mu ([k/n, (k+1)/n])}\cdot \frac{(n x)^{k}}{k !}, \end{aligned}$$

where \(\beta _{n}\) is as in Example 1.

According to the direct calculations in [21], we have \(S_{n,\mu }(h)(x)\rightarrow h(x)\), pointwise for all \(x\in [0,+\infty )\) and \(h\in \{1,x,-x,x^{2}\}\). Then, reasoning as in the case of Example 1, we easily get \(st-\lim S_{n,\mu }^{(1)}(h)=h\), for all \(h\in \{1,x,-x,x^{2}\}\).

Therefore, by Corollary 2, it follows that \(st - \lim S_{n, \mu }^{(1)}(f)(x)=f(x)\), at each point x of continuity of \(f\in R_{loc}[0, +\infty )\) bounded on \([0, +\infty )\). (We imposed the boundedness of f on \([0, +\infty )\) only because in this case the operators are well defined.)

Example 3

An example satisfying Theorem 2 can be obtained for the monotone and sublinear operators given by the formula

$$\begin{aligned} T_{n}(f)(x)=K_{n,\mu }^{(1)}(f)(x)=\sum _{k=0}^{n}p_{n,k}(x)\cdot \frac{(C)\int _{k/(n+1)}^{(k+1)/(n+1)}f(h_{n}(x)\beta _{n} t)\textrm{d}\mu (t)}{\mu ([k/(n+1),(k+1)/(n+1)])}, \end{aligned}$$

where \(h_{n}(x)\) is a sequence of continuous functions converging in measure (but not almost everywhere) to the constant 1 with \(0\le h_{n}(x)\le 1\) for all \(x\in [0, 1]\), \(\mu =\sqrt{m}\) and \(\beta _{n}\) is as in the above examples.
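A standard source of such sequences is the "typewriter" construction; the sketch below (our illustration, using indicator blocks rather than their continuous smoothings) exhibits blocks whose measure tends to 0, so \(h_{n}\rightarrow 1\) in measure, while at every fixed point the value 0 recurs infinitely often, so there is no almost everywhere convergence.

```python
def block(n):
    """The n-th dyadic block: for n = 2**j + k with 0 <= k < 2**j,
    return (j, k/2**j, (k+1)/2**j)."""
    j = n.bit_length() - 1
    k = n - 2**j
    return j, k / 2**j, (k + 1) / 2**j

def h(n, x):
    """h_n = 1 outside the n-th block, 0 on it (a model for the text's h_n)."""
    _, left, right = block(n)
    return 0.0 if left <= x < right else 1.0

# m({x : |h_n(x) - 1| >= eps}) equals the block length 2**(-j) -> 0:
print([1 / 2 ** block(n)[0] for n in (1, 2, 4, 8, 16)])
# yet at x = 0.3 the value 0 keeps recurring as the blocks sweep [0, 1]:
print([n for n in range(1, 64) if h(n, 0.3) == 0.0])
```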

Example 4

An example satisfying Theorem 3 is given by the formula in Example 3, but where \(h_{n}(x)\) is a sequence of functions converging in \(L^{p}\) (but not almost everywhere) to the constant 1 with \(0\le h_{n}(x)\le 1\) for all \(x\in [0, 1]\).