
1 Introduction and Background

Let \(s\) denote the set of all real and complex sequences \(x=(x_{k})\). By \(l_{\infty }\) and \(c\), we denote the Banach spaces of bounded and convergent sequences \(x=(x_{k})\), respectively, normed by \(||x||=\sup _{n}|x_{n}|\). A sequence \(x\in l_{\infty }\) is said to be almost convergent if all of its Banach limits coincide. Let \(\hat{c}\) denote the space of all almost convergent sequences. Lorentz [6] showed that

$$\begin{aligned} \hat{c}=\left\{ x\in l_{\infty }: \lim _{m}t_{m,n}(x) \text{ exists } \text{ uniformly } \text{ in } n\right\} \end{aligned}$$

where

$$\begin{aligned} t_{m,n}(x)=\frac{x_{n}+x_{n+1} +x_{n+2}+\cdots + x_{n+m}}{m+1}. \end{aligned}$$
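As a numerical illustration (the code and the particular test sequence are ours, not part of the original development), the short Python sketch below evaluates \(t_{m,n}(x)\) for the 0-1 periodic sequence; its shifted means approach 1/2 for every shown shift n, consistent with almost convergence to 1/2.

```python
# Illustrative sketch (ours): the shifted means t_{m,n}(x) from the definition above.
def t(x, m, n):
    """Return t_{m,n}(x) = (x_n + x_{n+1} + ... + x_{n+m}) / (m + 1), with 0-based lists."""
    return sum(x[n:n + m + 1]) / (m + 1)

x = [k % 2 for k in range(20_000)]      # x = (0, 1, 0, 1, ...), almost convergent to 1/2

for m in (10, 100, 1000):
    vals = [t(x, m, n) for n in range(50)]
    print(m, min(vals), max(vals))      # both bounds approach 0.5 as m grows, for every shown n
```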

The space \([\hat{c}]\) of strongly almost convergent sequences was introduced by Maddox [7] and also independently by Freedman et al. [3] as follows:

$$\begin{aligned}{}[\hat{c}]=\left\{ x\in l_{\infty }: \lim _{m}t_{m,n}(|x-L |)= 0, \text{ uniformly } \text{ in } n, \text{ for } \text{ some } L\right\} . \end{aligned}$$

Let \(\lambda = (\lambda _{i})\) be a nondecreasing sequence of positive numbers tending to \(\infty \) such that

$$\begin{aligned} \lambda _{i+1}\le \lambda _{i} +1, \lambda _{1} =1. \end{aligned}$$

The collection of all such sequences \(\lambda \) will be denoted by \(\varDelta .\)

The generalized de la Vallée-Poussin mean is defined as

$$\begin{aligned} T_{i}(x)= \frac{1}{\lambda _{i}}\sum _{k\in I_{i}}x_{k} \end{aligned}$$

where \(I_{i}=[i-\lambda _{i} +1, i]\). A sequence \(x=(x_{n})\) is said to be \((V,\lambda )\)-summable to a number L if \(T_{i}(x)\rightarrow L \text{ as } i\rightarrow \infty \) (see [9]).
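For a concrete illustration (ours, with the admissible choice \(\lambda _i=\lceil \sqrt{i}\,\rceil \)), the following Python sketch computes \(T_{i}(x)\) over the window \(I_i\) for a sequence converging to 1; the means approach the same limit, as \((V,\lambda )\)-summability of a convergent sequence suggests.

```python
import math

# Illustrative sketch (ours): the de la Vallee-Poussin mean T_i(x) with lambda_i = ceil(sqrt(i)),
# which satisfies lambda_1 = 1, lambda_{i+1} <= lambda_i + 1 and lambda_i -> infinity.
def T(x, i, lam):
    """T_i(x) = (1/lambda_i) * sum of x_k over I_i = [i - lambda_i + 1, i] (1-based k)."""
    li = lam(i)
    return sum(x[k - 1] for k in range(i - li + 1, i + 1)) / li  # x is stored 0-based

lam = lambda i: math.ceil(math.sqrt(i))
x = [1 + 1 / k for k in range(1, 10_001)]   # x_k -> 1

for i in (100, 1000, 10_000):
    print(i, T(x, i, lam))                  # the means approach the limit L = 1
```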

Recently, Malkowsky and Savaş [9] introduced the space \([V,\lambda ]\) of \(\lambda \)-strongly convergent sequences as follows:

$$\begin{aligned}{}[V,\lambda ]=\left\{ x=(x_{k}):\lim _{i}\frac{1}{\lambda _{i}} \sum _{k\in I_{i}}|x_{k} -L|=0, \text{ for } \text{ some } L \right\} . \end{aligned}$$

Note that in the special case where \(\lambda _i=i\), the space \([V,\lambda ]\) reduces to the space w of strongly Cesàro summable sequences, which is defined as

$$\begin{aligned} w=\left\{ x=(x_{k}):\lim _{i}\frac{1}{i} \sum _{k=1}^{i} |x_k-L|=0, \text{ for } \text{ some } L \right\} . \end{aligned}$$

More results on \(\lambda \)-strong convergence can be found in [12, 20–24].

Ruckle [16] used the idea of a modulus function f to construct a class of FK spaces

$$\begin{aligned} L(f) = \left\{ x= (x_k) : \sum _{k=1}^{\infty } f\left( |x_k|\right) < \infty \right\} . \end{aligned}$$

The space L(f) is closely related to the space \(l_1\); indeed, \(l_1 = L(f)\) when \(f(x) = x\) for all real \(x\ge 0\).

Maddox [8] introduced and examined some properties of the sequence spaces \(w_0 (f)\), w(f), and \(w_{\infty } (f)\) defined using a modulus f, which generalized the well-known spaces \(w_0\), w and \(w_\infty \) of strongly summable sequences.

Recently, Savas [19] generalized the concept of strong almost convergence using a modulus f and examined some properties of the corresponding new sequence spaces.

Waszak [26] defined lacunary strong \((A,\varphi )\)-convergence with respect to a modulus function.

Following Ruckle [16], a modulus function f is a function from \([0, \infty )\) to \([0, \infty )\) such that

  1. (i)

    \(\displaystyle {f(x) = 0}\) if and only if \(x=0\),

  2. (ii)

    \(\displaystyle {f(x+y) \le f(x) + f(y)}\) for all \(x,y \ge 0\),

  3. (iii)

f is increasing,

  4. (iv)

    f is continuous from the right at zero.

Since \(\displaystyle {\left| f(x) - f(y)\right| \le f\left( |x-y|\right) }\), it follows from condition \((\textit{iv})\) that f is continuous on \([0,\infty )\).
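A standard example is \(f(u)=u/(1+u)\), a bounded modulus. The sketch below (ours, for illustration only) spot-checks conditions (i)-(iii) on a grid and evaluates f near zero for (iv).

```python
# Illustrative sketch (ours): f(u) = u / (1 + u) is a bounded modulus function.
def f(u):
    return u / (1.0 + u)

grid = [i / 10 for i in range(101)]                                      # points in [0, 10]

assert f(0.0) == 0.0 and all(f(u) > 0 for u in grid if u > 0)            # (i)
assert all(f(u + v) <= f(u) + f(v) + 1e-12 for u in grid for v in grid)  # (ii) subadditivity
assert all(f(a) <= f(b) for a, b in zip(grid, grid[1:]))                 # (iii) increasing
print("f(1e-9) =", f(1e-9))                                              # (iv) f is small near zero
```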

If \(x=(x_k)\) is a sequence and \(A=(a_{nk})\) is an infinite matrix, then Ax is the sequence whose nth term is given by \(A_{n}(x) = \sum _{k=0}^{\infty } a_{nk}x_{k}\). We say that x is A-summable to L if \(\lim _{n\rightarrow \infty }A_{n}(x) = L\). Let X and Y be two sequence spaces and \(A=(a_{nk})\) an infinite matrix. If for each \(x\in X\) the series \(A_n(x) = \sum _{k=0}^{\infty } a_{nk}x_{k}\) converges for each n and the sequence \(Ax= (A_n(x)) \in Y\), we say that A maps X into Y. By (X, Y) we denote the set of all matrices which map X into Y; if in addition the limit is preserved, then we denote the class of such matrices by \((X,Y)_{reg}\).
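As an illustration (ours), the sketch below applies the Cesàro matrix \(a_{nk}=1/n\) for \(k\le n\) (and 0 otherwise) to a convergent sequence and checks that \(A_{n}(x)\) tends to the ordinary limit, as one expects of a regular matrix.

```python
# Illustrative sketch (ours): A-summability with the Cesaro matrix a_{nk} = 1/n for k <= n.
def A_n(x, n):
    """n-th term of Ax, i.e. sum_k a_{nk} x_k, for the Cesaro matrix (1-based n)."""
    return sum(x[k - 1] for k in range(1, n + 1)) / n

x = [1 + (-1) ** k / k for k in range(1, 10_001)]   # x_k -> 1

for n in (10, 100, 1000, 10_000):
    print(n, A_n(x, n))   # A_n(x) -> 1 as well: x is A-summable to its ordinary limit
```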

A matrix A is called regular, i.e., \(A\in (c,c)_{reg}\), if \(A\in (c,c)\) and \(\lim _{n}A_n(x) = \lim _{k}x_{k}\) for all \(x\in c\).

In 1993, Nuray and Savas [14] defined the following sequence spaces:

Definition 1

Let f be a modulus and A a nonnegative regular summability method. We let

$$ w(\hat{A},f) = \left\{ x : \lim _{n}\sum _{k=1}^{\infty }a_{nk}f(|x_{k+m}-L|)=0, \text{ for } \text{ some } \text{ L, } \text{ uniformly } \text{ in } m \right\} $$

and

$$ w(\hat{A},f)_0 = \left\{ x : \lim _{n}\sum _{k=1}^{\infty }a_{nk}f(|x_{k+m}|)=0, \text{ uniformly } \text{ in } m \right\} . $$

If we take \(A=(a_{nk})\) to be the Cesàro matrix, i.e.,

$$\begin{aligned} a_{nk}:=\begin{cases} \frac{1}{n}, & \text{ if } n\ge k , \\ 0, & \text{ otherwise, } \end{cases} \end{aligned}$$

then the above definitions reduce to \([\hat{c}(f)]\) and \([ \hat{c}(f)]_0 \), which were defined and studied by Pehlivan [15].

If we take \(A=(a_{nk})\) to be the de la Vallée-Poussin mean, i.e.,

$$\begin{aligned} a_{nk}:=\begin{cases} \frac{1}{\lambda _n}, & \text{ if } k \in I_n = [ n-\lambda _{n}+1,n] , \\ 0, & \text{ otherwise, } \end{cases} \end{aligned}$$

then these definitions reduce to the following sequence spaces, which were defined and studied by Malkowsky and Savaş [9]:

$$\begin{aligned} w(\hat{V},\lambda ,f) = \left\{ x : \lim _{j}\frac{1}{\lambda _j}\sum _{k \in I_j}f(|x_{k+m}-L|)=0, \text{ for } \text{ some } \text{ L, } \text{ uniformly } \text{ in } m \right\} \end{aligned}$$

and

$$\begin{aligned} w(\hat{V},\lambda ,f)_0 = \left\{ x : \lim _{j}\frac{1}{\lambda _j}\sum _{k \in I_j}f(|x_{k+m}|)=0, \text{ uniformly } \text{ in } m \right\} . \end{aligned}$$

When \(\lambda _j = j \), the above sequence spaces become \([\hat{c}(f)]\) and \([\hat{c}(f)]_0\), respectively.

By a \(\varphi \)-function we understand a continuous nondecreasing function \(\varphi (u)\) defined for \(u\ge 0\) and such that \(\varphi (0)=0\), \(\varphi (u)>0\) for \(u>0\), and \(\varphi (u)\rightarrow \infty \) as \(u\rightarrow \infty \) (see [26]).

A \(\varphi \)-function \(\varphi \) is called non-weaker than a \(\varphi \)-function \(\psi \) if there are constants \(c, b, k, l>0 \) such that \(c\psi (lu)\le b\varphi (ku)\) (for all large u), and we write \(\psi \prec \varphi \).

Two \(\varphi \)-functions \(\varphi \) and \(\psi \) are called equivalent, and we write \(\varphi \sim \psi \), if there are positive constants \(b_1,b_2, c, k_1, k_2 , l \) such that \(b_ 1\varphi (k_1u) \le c\psi (lu)\le b_2\varphi (k_2u)\) (for all large u) (see [26]).

A \(\varphi \)-function \(\varphi \) is said to satisfy the \((\varDelta _2)\)-condition (for all large u) if there exists a constant \(K>1\) such that \(\varphi (2u) \le K\varphi (u)\).
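For example, \(\varphi (u)=u^{p}\) with \(p>0\) satisfies the \((\varDelta _2)\)-condition with \(K=2^{p}>1\), since \(\varphi (2u)=2^{p}\varphi (u)\); the sketch below (ours) verifies the inequality on a sample of large u.

```python
# Illustrative sketch (ours): phi(u) = u**p satisfies (Delta_2) with K = 2**p > 1,
# since phi(2u) = (2u)**p = 2**p * phi(u).
p = 1.5
phi = lambda u: u ** p
K = 2 ** p

sample = [10.0 * 1.3 ** j for j in range(40)]              # "large" values of u
assert all(phi(2 * u) <= K * phi(u) + 1e-9 for u in sample)
print("(Delta_2) holds on the sample with K =", K)
```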

In this paper, we introduce and study some properties of the following sequence space, which is defined by means of a \(\varphi \)-function and the de la Vallée-Poussin mean; some known results are also obtained as special cases.

2 Main Results

Let \(\varLambda =(\lambda _j)\) be as above, let \(\varphi \) be a given \(\varphi \)-function, and let f be a given modulus function. Moreover, let \(\mathbf {A}=(a_{nk}(i))\) be a generalized three-parametric real matrix. Then we define

$$\begin{aligned} V_{\lambda }^{0}((A,\varphi ),f)= \left\{ x=(x_{k}): \lim _{j}\frac{1}{\lambda _{j}}\sum _{n\in I_{j}}f\Big (\left| \sum _{k=1}^{\infty }a_{nk}(i)\varphi (|x_{k}|)\right| \Big )= 0, \text{ uniformly } \text{ in } i \right\} . \end{aligned}$$

If \(\lambda _j = j, \) we have

$$\begin{aligned} V_{\lambda }^{0}((A,\varphi ),f)= \left\{ x=(x_{k}): \lim _{j}\frac{1}{j}\sum _{n=1}^{j}f\Big (\left| \sum _{k=1}^{\infty }a_{nk}(i)\varphi (|x_{k}|)\right| \Big )= 0, \text{ uniformly } \text{ in } i \right\} . \end{aligned}$$

If \(x\in V_{\lambda }^{0}((A,\varphi ),f)\), the sequence x is said to be \(\lambda \)-strongly \((A,\varphi )\)-convergent to zero with respect to the modulus f. When \(\varphi (x)= x\) for all x, we obtain

$$\begin{aligned} V_{\lambda }^{0}((A),f)= \left\{ x=(x_{k}): \lim _{j}\frac{1}{\lambda _{j}}\sum _{n\in I_{j}}f\Big (\left| \sum _{k=1}^{\infty }a_{nk}(i)(|x_{k}|)\right| \Big )= 0, \text{ uniformly } \text{ in } i \right\} . \end{aligned}$$

If \(f(x) = x \), we write

$$\begin{aligned} V_{\lambda }^{0}(A,\varphi )= \left\{ x=(x_{k}): \lim _{j}\frac{1}{\lambda _{j}}\sum _{n\in I_{j}}\Big (\left| \sum _{k=1}^{\infty }a_{nk}(i)\varphi (|x_{k}|)\right| \Big )= 0, \text{ uniformly } \text{ in } i \right\} . \end{aligned}$$

If we take \(A=I\) and \(\varphi (x)=x\), then we have

$$\begin{aligned} V_{\lambda }^{0}(I,f) = \left\{ x=(x_{k}): \lim _{j}\frac{1}{\lambda _{j}}\sum _{k\in I_{j}}f \Big (\left| x_{k}\right| \Big )=0 \right\} . \end{aligned}$$

If we take \(A=I\), \(\varphi (x)=x\), and \(f(x)=x\), then we have

$$\begin{aligned} V_{\lambda }^{0}((I)) = \left\{ x=(x_{k}): \lim _{j}\frac{1}{\lambda _{j}}\sum _{k\in I_{j}}|x_{k}|=0 \right\} , \end{aligned}$$

which was defined and studied by Savaş and Savaş [18].

If, for all i, we define the matrix \(A=(a_{nk}(i))\) by

$$\begin{aligned} a_{nk}(i):=\begin{cases} \frac{1}{n}, & \text{ if } n\ge k , \\ 0, & \text{ otherwise, } \end{cases}\end{aligned}$$

then we have,

$$\begin{aligned} V_{\lambda }^{0}(\mathbf {C},\varphi ,f)= \left\{ x=(x_{k}): \lim _{j}\frac{1}{\lambda _{j}}\sum _{n\in I_{j}}f\Big (\left| \frac{1}{n}\sum _{k=1}^{n}\varphi (|x_{k}|)\right| \Big )= 0 \right\} . \end{aligned}$$

If we define

$$\begin{aligned} a_{nk}(i):=\begin{cases} \frac{1}{n}, & \text{ if } i\le k\le i+n-1 , \\ 0, & \text{ otherwise, } \end{cases}\end{aligned}$$

then we have,

$$\begin{aligned} V_{\lambda }^{0}(\hat{c},\varphi ,f)= \left\{ x=(x_{k}): \lim _{j}\frac{1}{\lambda _{j}}\sum _{n\in I_{j}}f \Big (\left| \frac{1}{n}\sum _{k=i}^{i+n-1}\varphi (|x_{k}|)\right| \Big )= 0, \text{ uniformly } \text{ in } i \right\} . \end{aligned}$$
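To make these definitions concrete, the following sketch (ours; the particular choices \(\lambda _j=\lceil \sqrt{j}\,\rceil \), \(\varphi (u)=u\), \(f(u)=u/(1+u)\), and the shift matrix displayed above are illustrative assumptions) evaluates the defining averages of \(V_{\lambda }^{0}((A,\varphi ),f)\) for \(x_k=1/k\); they decay to zero for each displayed i.

```python
import math

# Illustrative sketch (ours): the defining averages of V_lambda^0((A,phi),f) for the
# shift matrix a_{nk}(i) = 1/n if i <= k <= i+n-1 and 0 otherwise.
lam = lambda j: math.ceil(math.sqrt(j))   # an admissible lambda
phi = lambda u: u                         # phi-function
f   = lambda u: u / (1.0 + u)             # modulus function
x   = lambda k: 1.0 / k                   # x_k -> 0 (1-based index)

def row_sum(n, i):
    """sum_k a_{nk}(i) phi(|x_k|) = (1/n) * sum_{k=i}^{i+n-1} phi(|x_k|)."""
    return sum(phi(abs(x(k))) for k in range(i, i + n)) / n

def average(j, i):
    """(1/lambda_j) * sum_{n in I_j} f(|row_sum(n, i)|)."""
    lj = lam(j)
    return sum(f(abs(row_sum(n, i))) for n in range(j - lj + 1, j + 1)) / lj

for j in (100, 1000, 5000):
    print(j, [round(average(j, i), 5) for i in (1, 10, 100)])   # tends to 0 for each shown i
```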

We now have:

Theorem 1

Let \(\mathbf A =(a_{nk}(i))\) be a generalized three-parametric real matrix and let the \(\varphi \)-function \(\varphi (u)\) satisfy the condition \((\varDelta _2)\). Then the following statements hold:

(a) If \(x = (x_k)\in w((\mathbf A ,\varphi ),f)\) and \( \alpha \) is an arbitrary number, then \(\alpha x\in w((\mathbf A ,\varphi ),f).\)

(b) If \(x,y\in w((\mathbf A ,\varphi ),f)\) where \(x = (x_k)\), \(y=(y_k)\) and \(\alpha , \beta \) are given numbers, then \(\alpha x + \beta y\in w((\mathbf A ,\varphi ),f).\)

The proof is a routine verification by using standard techniques and hence is omitted.

Theorem 2

Let f be a modulus function. Then

$$V_{\lambda }^{0}(A,\varphi )\subseteq V_{\lambda }^{0}((A,\varphi ),f).$$

Proof

Let \(x\in V_{\lambda }^{0}(A,\varphi )\). For a given \(\varepsilon >0\) we choose \(0<\delta <1\) such that \(f(t)<\varepsilon \) for every \(t\in [0,\delta ]\). We can write, for all i,

$$\frac{1}{\lambda _{j}}\sum _{n\in I_{j}}f\Big (\left| \sum _{k=1}^{\infty }a_{nk}(i)\varphi (| x_{k}|)\right| \Big ) = S_1 + S_2 ,$$

where \(S_1=\frac{1}{\lambda _{j}}\sum _{n\in I_{j}}f\Big (\left| \sum _{k=1}^{\infty }a_{nk}(i)\varphi (| x_{k}|)\right| \Big )\), the sum being taken over those \(n\in I_{j}\) for which

$$\sum _{k=1}^{\infty }a_{nk}(i)\varphi (|x_{k}|)\le \delta $$

and

$$ S_2=\frac{1}{\lambda _{j}}\sum _{n\in I_{j}}f\Big (\left| \sum _{k=1}^{\infty }a_{nk}(i)\varphi (| x_{k}|)\right| \Big ) $$

the sum now being taken over those \(n\in I_{j}\) for which

$$ \sum _{k=1}^{\infty }a_{nk}(i)\varphi (| x_{k}|)> \delta . $$

By the definition of the modulus f we have \(S_1\le f(\delta )<\varepsilon \). Moreover, since the subadditivity and monotonicity of f give \(f(u)\le 2f(1)\delta ^{-1}u\) for \(u>\delta \), we obtain

$$S_2\le 2f(1)\frac{1}{\delta }\frac{1}{\lambda _{j}}\sum _{n\in I_{j}}\sum _{k=1}^{\infty }a_{nk}(i)\varphi (|x_{k}|).$$

Since \(x\in V_{\lambda }^{0}(A,\varphi )\), the right-hand side tends to zero uniformly in i as \(j\rightarrow \infty \), so the averages \(S_1+S_2\) are eventually smaller than \(2\varepsilon \), uniformly in i. As \(\varepsilon >0\) was arbitrary, \(x\in V_{\lambda }^{0}((A,\varphi ),f)\).

This completes the proof.

3 Uniform \((A,\varphi )\)-Statistical Convergence

The idea of convergence of a real sequence was extended to statistical convergence by Fast [2] (see also Schoenberg [25]) as follows. If \(\mathbb {N}\) denotes the set of natural numbers and \(K\subset \mathbb {N}\), then \(K(m,n)\) denotes the cardinality of the set \(K\cap [m,n]\), and the upper and lower natural densities of the subset K are defined as

$$\begin{aligned} \overline{d}(K)= \displaystyle {\lim _{n\rightarrow \infty }} \sup \frac{K(1,n)}{n} ~~\text{ and } ~~\underline{d}(K)=\displaystyle {\lim _{n\rightarrow \infty }}\inf \frac{K(1,n)}{n}. \end{aligned}$$

If \(\overline{d}(K)=\underline{d}(K)\) then we say that the natural density of K exists and it is denoted simply by d(K). Clearly \(d(K)= \displaystyle {\lim _{n\rightarrow \infty }} \frac{K(1,n)}{n}. \)
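For example, the set of perfect squares has natural density zero; the sketch below (ours) computes the ratios \(K(1,n)/n\) for increasing n.

```python
import math

# Illustrative sketch (ours): empirical natural density of the set of perfect squares.
def density_upto(n, member):
    """K(1, n) / n for the set K described by the predicate `member`."""
    return sum(1 for k in range(1, n + 1) if member(k)) / n

is_square = lambda k: math.isqrt(k) ** 2 == k

for n in (10**2, 10**4, 10**6):
    print(n, density_upto(n, is_square))   # roughly 1/sqrt(n), so d(K) = 0
```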

A sequence \((x_{k})\) of real numbers is said to be statistically convergent to L if for arbitrary \(\epsilon >0,\) the set \(K(\epsilon )=\{k\in \mathbb {N}: |x_k-L|\ge \epsilon \}\) has natural density zero.

Statistical convergence turned out to be one of the most active areas of research in summability theory after the work of Fridy [4] and Šalát [17].

In another direction, a new type of convergence called \(\lambda \)-statistical convergence was introduced in [13] as follows.

A sequence \((x_{k})\) of real numbers is said to be \(\lambda \)-statistically convergent to L (or \(S_\lambda \)-convergent to L) if for any \(\epsilon >0,\)

$$\begin{aligned} \displaystyle {\lim _{ j\rightarrow \infty }}~\frac{1}{\lambda _{j}}|\{k\in I_{j}:|x_{k}-L|\ge \epsilon \}|=0 \end{aligned}$$

where |A| denotes the cardinality of \(A\subset \mathbb {N}.\) In [13] the relation between \(\lambda \)-statistical convergence and statistical convergence was established among other things.

Recently, Savas [20] defined almost \(\lambda \)-statistical convergence using the notion of \((V, \lambda )\)-summability to generalize the concept of statistical convergence.

Assume that A is a nonnegative regular summability matrix. Then the sequence \(x=(x_n)\) is called A-statistically convergent to L provided that, for every \(\varepsilon >0\) (see [5]),

$$\begin{aligned} \lim _{j} \sum _{n: |x_{n}-L|\ge \varepsilon }a_{jn} =0. \end{aligned}$$

Let \(\mathbf {A}=(a_{nk}(i))\) be a generalized three-parametric real matrix, and let the sequence \(x=(x_k)\), the \(\varphi \)-function \(\varphi (u)\), and a positive number \(\varepsilon > 0\) be given. We write, for all i,

$$\begin{aligned} K_{\lambda }^j((A,\varphi ),\varepsilon )= \{ n \in I_j : \sum _{k=1}^{\infty }a_{nk}(i)\varphi (|x_{k}|)\ge \varepsilon \}. \end{aligned}$$

The sequence x is said to be uniformly \((A,\varphi )\)-statistically convergent to zero if for every \(\varepsilon > 0\)

$$\begin{aligned} \lim _j \frac{1}{\lambda _j}\mu (K_{\lambda }^j((A,\varphi ),\varepsilon ))=0, \text{ uniformly } \text{ in } i, \end{aligned}$$

where \(\mu (K_{\lambda }^j((A,\varphi ),\varepsilon ))\) denotes the number of elements of \(K_{\lambda }^j((A,\varphi ),\varepsilon )\). We denote by \(S_{\lambda }^0((A,\varphi ))\) the set of all sequences \(x=(x_k)\) which are uniformly \((A,\varphi )\)-statistically convergent to zero.
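The following sketch (ours, under illustrative assumptions: the shift matrix \(a_{nk}(i)=1/n\) for \(i\le k\le i+n-1\), \(\varphi (u)=u\), and \(\lambda _j=\lceil \sqrt{j}\,\rceil \)) estimates \(\mu (K_{\lambda }^j((A,\varphi ),\varepsilon ))/\lambda _j\) for a 0-1 sequence supported on the perfect squares; the ratios decrease to zero for each displayed i, consistent with uniform \((A,\varphi )\)-statistical convergence to zero.

```python
import math

# Illustrative sketch (ours): the ratios mu(K_lambda^j((A,phi),eps)) / lambda_j for the
# shift matrix a_{nk}(i) = 1/n on i <= k <= i+n-1, phi(u) = u, lambda_j = ceil(sqrt(j)),
# and the 0-1 sequence x_k = 1 iff k is a perfect square.
lam = lambda j: math.ceil(math.sqrt(j))
x   = lambda k: 1.0 if math.isqrt(k) ** 2 == k else 0.0

def row_sum(n, i):
    """sum_k a_{nk}(i) phi(|x_k|) with phi(u) = u."""
    return sum(x(k) for k in range(i, i + n)) / n

def ratio(j, i, eps):
    lj = lam(j)
    bad = [n for n in range(j - lj + 1, j + 1) if row_sum(n, i) >= eps]
    return len(bad) / lj                   # mu(K_lambda^j((A,phi),eps)) / lambda_j

eps = 0.05
for j in (100, 1000, 10_000):
    print(j, [ratio(j, i, eps) for i in (1, 50, 500)])   # decreases to 0 for each shown i
```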

If we take \(A= I\) and \(\varphi (x) = x\), then \(S_{\lambda }^0((A,\varphi ))\) reduces to \(S_{\lambda }^0\), which was defined as follows (see Mursaleen [13]):

$$\begin{aligned} S_{\lambda }^0= \left\{ x=(x_k): \lim _j\frac{1}{\lambda _j}|\{k\in I_j:|x_{k}|\ge \varepsilon \}| =0, \text{ for } \text{ every } \varepsilon >0 \right\} . \end{aligned}$$

Remark 1

(i) If for all i,

$$ a_{nk}:=\begin{cases} \frac{1}{n}, & \text{ if } n\ge k ,\\ 0, & \text{ otherwise, } \end{cases} $$

then \(S_{\lambda }^0((A,\varphi ))\) reduces to \(S_{\lambda }^0((C,\varphi ))\), i.e., uniform \((C,\varphi )\)-statistical convergence. (ii) If, for all i (see [1]),

$$ a_{nk}:=\begin{cases} \frac{p_k}{P_n}, & \text{ if } n\ge k ,\\ 0, & \text{ otherwise, } \end{cases} $$

then \(S_{\lambda }^0((A,\varphi ))\) reduces to \(S_{\lambda }^0((N,p),\varphi )\), i.e., uniform \(((N,p),\varphi )\)-statistical convergence, where \(p=(p_k)\) is a sequence of nonnegative numbers such that \(p_{0}> 0\) and

$$P_n = \sum _{k=0}^{n}p_k \rightarrow \infty \quad ( n\rightarrow \infty ). $$

We are now ready to state the following theorem.

Theorem 3

If \(\psi \prec \varphi \) then \(S_{\lambda }^0((A,\psi ))\subset S_{\lambda }^0((A,\varphi )).\)

Proof

By our assumptions we have \(\psi (|x_{k}|)\le b\varphi (c|x_{k}|)\), and hence, for all i,

$$ \sum _{k=1}^{\infty }a_{nk}(i)\psi (|x_{k}|) \le b \sum _{k=1}^{\infty }a_{nk}(i)\varphi (c|x_{k}|)\le K\sum _{k=1}^{\infty }a_{nk}(i)\varphi (|x_{k}|)$$

for \(b,c > 0\), where the constant K is connected with properties of \(\varphi \). Thus, the condition \(\sum _{k=1}^{\infty }a_{nk}(i)\psi (|x_{k}|)\ge \varepsilon \) implies the condition \(\sum _{k=1}^{\infty }a_{nk}(i)\varphi (|x_{k}|)\ge \varepsilon \) and in consequence we get

$$K_{\lambda }^j((A,\varphi ),\varepsilon )\subset K_{\lambda }^j((A,\psi ),\varepsilon )$$

and

$$\lim _j \frac{1}{\lambda _j}\mu \Big (K_{\lambda }^j((A,\varphi ),\varepsilon )\Big )\le \lim _j \frac{1}{\lambda _j}\mu \Big (K_{\lambda }^j ((A,\psi ),\varepsilon )\Big ).$$

This completes the proof.

Theorem 4

(a) If the matrix A and the functions f and \(\varphi \) are given, then

$$V_{\lambda }^{0}((A,\varphi ),f)\subset S_{\lambda }^{0}(A,\varphi ).$$

(b) If the \(\varphi \)- function \(\varphi (u)\) and the matrix A are given, and if the modulus function f is bounded, then

$$S_{\lambda }^{0}(A,\varphi )\subset V_{\lambda }^{0}((A,\varphi ),f).$$

(c) If the \(\varphi \)- function \(\varphi (u)\) and the matrix A are given, and if the modulus function f is bounded, then

$$S_{\lambda }^{0}(A,\varphi )= V_{\lambda }^{0}((A,\varphi ),f).$$

Proof

(a) Let f be a modulus function and let \(\varepsilon \) be a positive number. We write the following inequalities:

$$\begin{aligned} \frac{1}{\lambda _{j}}\sum _{n\in I_{j}}&f\Big (\left| \sum _{k=1}^{\infty }a_{nk}(i)\varphi (| x_{k}|)\right| \Big )\\&\ge \frac{1}{\lambda _{j}} \sum _{n\in I_{j}^1}f\Big (\left| \sum _{k=1}^{\infty }a_{nk}(i)\varphi (|x_{k}|)\right| \Big )\\&\ge \frac{1}{\lambda _{j}} f(\varepsilon ) \sum _{n\in I_{j}^1}1\\&\ge \frac{1}{\lambda _{j}}f(\varepsilon )\mu (K_{\lambda }^j((A,\varphi ),\varepsilon )),\\ \end{aligned}$$

where

$$ I_j^1 = \left\{ n \in I_j: \sum _{k=1}^{\infty }a_{nk}(i)\varphi (|x_{k}|)\ge \varepsilon \right\} . $$

Finally, if \(x\in V_{\lambda }^{0}((A,\varphi ),f)\), then the left-hand side tends to zero uniformly in i, and hence \(x\in S_{\lambda }^{0}(A,\varphi )\).

(b) Let us suppose that \(x\in S_{\lambda }^{0}(A,\varphi ).\) If the modulus function f is a bounded function, then there exists an integer M such that \(f(t)<M\) for all \(t\ge 0\). Let us take

$$ I_j^2 = \left\{ n \in I_j: \sum _{k=1}^{\infty }a_{nk}(i)\varphi (|x_{k}|)<\varepsilon \right\} . $$

Thus we have

$$\begin{aligned} \frac{1}{\lambda _{j}}\sum _{n\in I_{j}}&f\Big (\left| \sum _{k=1}^{\infty }a_{nk}(i)\varphi (| x_{k}|)\right| \Big )\\&\le \frac{1}{\lambda _{j}} \sum _{n\in I_{j}^1}f\Big (\left| \sum _{k=1}^{\infty }a_{nk}(i)\varphi (|x_{k}|)\right| \Big )\\&+\frac{1}{\lambda _{j}} \sum _{n\in I_{j}^2}f\Big (\left| \sum _{k=1}^{\infty }a_{nk}(i) \varphi (|x_{k}|)\right| \Big )\\&\le \frac{1}{\lambda _{j}}M \mu (K_{\lambda }^j((A,\varphi ),\varepsilon )) + f(\varepsilon ).\\ \end{aligned}$$

Taking first the limit as \(j\rightarrow \infty \) and then letting \(\varepsilon \rightarrow 0 \), we obtain that \(x\in V_{\lambda }^{0}((A,\varphi ),f).\)

The proof of (c) follows from (a) and (b).

This completes the proof.

In the next theorem we prove the following relation.

Theorem 5

If a sequence \(x=(x_{k})\) is \(S(A,\varphi )\)-convergent to L and

$$ \liminf _{j} \Big (\frac{\lambda _{j}}{j} \Big ) > 0, $$

then it is \(S_{\lambda }(A,\varphi )\)-convergent to L, where

$$ S(A,\varphi )= \left\{ x=(x_{k}): \lim _{j} \frac{1}{j}\mu (K(A,\varphi ,\varepsilon ))=0, \text{ for } \text{ every } \varepsilon >0 \right\} .$$

Proof

For a given \(\varepsilon >0\), we have, for all i

$$\{ n\in I_{j}: \sum _{k=0}^{\infty }a_{nk}(i)\varphi ({|x_{k}-L|}) \ge \varepsilon \} \subseteq \{ n\le j: \sum _{k=0}^{\infty }a_{nk}(i)\varphi ({|x_{k}-L|}) \ge \varepsilon \}.$$

Hence we have,

$$K_{\lambda }(A,\varphi ,\varepsilon )\subseteq K(A,\varphi ,\varepsilon ).$$

Finally the proof follows from the following inequality:

$$ \frac{1}{j}\mu (K(A,\varphi ,\varepsilon ))\ge \frac{1}{j}\mu (K_{\lambda }(A,\varphi ,\varepsilon )) = \frac{\lambda _{j}}{j}\frac{1}{\lambda _{j}}\mu (K_{\lambda }(A,\varphi ,\varepsilon )) .$$

This completes the proof.

Theorem 6

Let \(\lambda \in \varDelta \) be such that \( \lim _j \frac{\lambda _j}{j}=1\). If the sequence \(x=(x_{k})\) is \(S_{\lambda }(A,\varphi )\)-convergent to L, then it is \(S(A,\varphi )\)-convergent to L.

Proof

Let \( \delta >0 \) be given. Since \( \lim _j \frac{\lambda _j}{j}=1\), we can choose \( m \in \mathbb {N}\) such that \( |\frac{\lambda _j}{j}-1 |<\frac{\delta }{2}\) for all \( j \ge m \). Now observe that, for \( \varepsilon >0 \) and all \(j\ge m\),

$$\begin{aligned}&\frac{1}{j}\left| \left\{ n\le j:\sum _{k=0}^{\infty }a_{nk}(i)\varphi ({|x_{k}-L|}) \ge \varepsilon \right\} \right| \\&= \frac{1}{j}\left| \left\{ n\le j-\lambda _j:\sum _{k=0}^{\infty }a_{nk}(i)\varphi ({|x_{k}-L|}) \ge \varepsilon \right\} \right| \\&+ \frac{1}{j}\left| \left\{ n \in I_j:\sum _{k=0}^{\infty }a_{nk}(i)\varphi ({|x_{k}-L|}) \ge \varepsilon \right\} \right| \\&\le \frac{j-\lambda _j}{j} + \frac{1}{j}\left| \left\{ n \in I_j:\sum _{k=0}^{\infty }a_{nk}(i)\varphi ({|x_{k}-L|}) \ge \varepsilon \right\} \right| \\&\le 1-\Big (1-\frac{\delta }{2}\Big )+\frac{1}{\lambda _j}\left| \left\{ n \in I_j:\sum _{k=0}^{\infty }a_{nk}(i)\varphi ({|x_{k}-L|}) \ge \varepsilon \right\} \right| \\&= \frac{\delta }{2}+ \frac{1}{\lambda _j}\left| \left\{ n \in I_j:\sum _{k=0}^{\infty }a_{nk}(i)\varphi ({|x_{k}-L|}) \ge \varepsilon \right\} \right| . \end{aligned}$$

Since x is \(S_{\lambda }(A,\varphi )\)-convergent to L, the last term tends to zero as \(j\rightarrow \infty \), so the left-hand side is smaller than \(\delta \) for all sufficiently large j; since \(\delta >0\) was arbitrary, x is \(S(A,\varphi )\)-convergent to L.

This completes the proof.