1 Introduction and Basic Definitions

Faster convergence of sequences, particularly the acceleration of convergence of sequences of partial sums of series via linear and nonlinear transformations, is widely used in solving mathematical as well as scientific and engineering problems (one may refer to Brezinski [1] and Brezinski et al. [2]). The problem of accelerating convergence occurs frequently in numerical analysis. To accelerate convergence, the standard interpolation and extrapolation methods of numerical mathematics are quite helpful. It is useful to study acceleration-of-convergence methods (we shall focus on matrix transformations), which transform a slowly converging sequence into a new sequence converging to the same limit faster than the original sequence. The speed of convergence of sequences is of central importance in the theory of subsequence transformations.

There are many sequences (or series) that converge very slowly, and the rates of convergence of two convergent sequences (or series) need not be equal. In order to speed up the convergence of slowly convergent sequences (or series), well-known mathematicians such as Salzer [8] and Smith and Ford [10] established various methods. Dawson [3] studied matrix summability over certain classes of sequences ordered with respect to rate of convergence.

Definition 1.1

The sequence (x n ) converges to σ at the same rate as the sequence ( y n ) converges to λ, written as (x n ) ≈ ( y n ) if

$$\displaystyle{ 0 < \liminf _{n\rightarrow \infty }\left \vert \frac{x_{n}-\sigma } {y_{n}-\lambda }\right \vert \leq \limsup _{n\rightarrow \infty }\left \vert \frac{x_{n}-\sigma } {y_{n}-\lambda }\right \vert < \infty. }$$

Example 1.1

Consider the sequences (x k ) and ( y k ) defined by \(x_{k} = 1 + k^{-1}\) and \(y_{k} = \frac{2} {3} + \frac{1} {2^{3k-1}}\), for all k ∈ N.

It can be easily verified that ( y k ) converges to \(\frac{2}{3}\) faster than (x k ) converges to 1.
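A quick numerical check (a minimal sketch in Python, assuming the sequences of Example 1.1 as written above) confirms the claim: the ratio \((y_{k}-\lambda )/(x_{k}-\sigma )\) tends to 0, which is precisely what "faster" means here.

```python
# Numerical check of Example 1.1: if (y_k - lambda)/(x_k - sigma) -> 0,
# then (y_k) converges to its limit faster than (x_k) does.

def x(k):
    return 1 + 1 / k                    # x_k -> sigma = 1

def y(k):
    return 2 / 3 + 2.0 ** -(3 * k - 1)  # y_k -> lambda = 2/3

for k in (1, 5, 10, 15):
    ratio = (y(k) - 2 / 3) / (x(k) - 1)
    print(k, ratio)                     # ratio = k * 2**(1 - 3k) -> 0
```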

Definition 1.2

Let A = (a nk ) be an infinite matrix. For a sequence x = (x k ), the A-transform of x is defined by Ax = (A n x), where \(A_{n}x = \sum \limits _{k=1}^{\infty }a_{nk}x_{k}\), for all n ∈ N.
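As an illustration (a minimal sketch; the choice of the Cesàro matrix \(a_{nk} = 1/n\) for k ≤ n is ours, not taken from the text), the n-th term of the A-transform is just a weighted sum of the terms of x:

```python
# Sketch of Definition 1.2 for a row-finite matrix: A_n x = sum_k a_{nk} x_k.
# Here A is the Cesaro (arithmetic-mean) matrix: a_{nk} = 1/n for k <= n.

def a(n, k):
    return 1 / n if k <= n else 0.0

def a_transform_term(x, n):
    # x holds x_1, x_2, ... in a 0-indexed Python list
    return sum(a(n, k) * x[k - 1] for k in range(1, n + 1))

x = [1 + 1 / k for k in range(1, 101)]                 # x_k = 1 + 1/k -> 1
print([a_transform_term(x, n) for n in (1, 10, 100)])  # tends to 1
```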

Definition 1.3

The subsequence \(x = (x_{n_{i}})\) of (x k ) can be represented as a regular matrix transformation A = (a nk ) applied to (x k ), by defining \(a_{i,n_{i}} = 1\), for all i ∈ N, and a pq  = 0, otherwise.
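For instance (a minimal sketch; the choice \(n_{i} = 2i - 1\), selecting the odd-indexed terms, is an illustrative assumption):

```python
# Sketch of Definition 1.3: the 0-1 matrix with a_{i, n_i} = 1 picks out
# the subsequence (x_{n_1}, x_{n_2}, ...); here n_i = 2i - 1.

def a(i, q):
    return 1 if q == 2 * i - 1 else 0      # a_{i, n_i} = 1, zero otherwise

x = [1 / k for k in range(1, 11)]          # x_1, ..., x_10
Ax = [sum(a(i, q) * x[q - 1] for q in range(1, 11)) for i in range(1, 6)]
print(Ax)                                  # [x_1, x_3, x_5, x_7, x_9]
```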

Definition 1.4

The convergence field of the matrix A = (a nk ) is defined by {x = (x k ): Ax ∈ c}, where c denotes the class of all convergent sequences.

Definition 1.5

The matrix A = (a nk ) accelerates the convergence of x if Ax < x. The acceleration field of A is defined by {(x k ) ∈ ω: Ax < x}.
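To see acceleration in action (a minimal sketch combining Definitions 1.3 and 1.5; the choices \(n_{i} = 2^{i}\) and \(x_{k} = 1/k\) are ours):

```python
# A subsequence matrix with n_i = 2**i applied to x_k = 1/k gives
# (Ax)_i = 2**-i, and (Ax)_i / x_i = i / 2**i -> 0, i.e. Ax < x:
# the matrix accelerates the convergence of x.

x = lambda k: 1 / k
Ax = lambda i: x(2 ** i)                  # row i selects x_{2**i}
for i in (1, 5, 10, 20):
    print(i, Ax(i) / x(i))                # i * 2**-i -> 0
```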

Let S 0 denote the set of all sequences in c 0 with non-zero terms. If a ∈ S 0, let [a] = { x ∈ S 0: x ≈ a}. Also let E 0 = { [x]: x ∈ S 0}. If [a], [b] ∈ E 0, then we say [a] is less than [b], written [a] < [b], provided a < b. Then E 0 is partially ordered with respect to <.

Open intervals in S 0 will be denoted by (a, b), (a, −), (−, b), where (a, −) = { x ∈ S 0: a < x} and (−, b) = { x ∈ S 0: x < b}. Combining the two, (a, b) = { x ∈ S 0: a < x < b}.

The necessary and sufficient conditions for a matrix A = (a pq ) to be convergence preserving over S 0 (abbreviated c.p.o. S 0) are (deduced from the Silverman–Toeplitz conditions; see the sketch after the list)

  1. (i)

    \((a_{pq})_{p=1}^{\infty }\) converges for each q = 1, 2, 3, …, and

  2. (ii)

    there exists K such that \(\sum \limits _{q=1}^{\infty }\vert a_{pq}\vert < K\), for each p = 1, 2, 3, ….
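A minimal numerical probe of (i) and (ii), again using the Cesàro matrix as an assumed example:

```python
# For the Cesaro matrix a_{pq} = 1/p (q <= p): each column converges
# (to 0), and the absolute row sums are uniformly bounded by K = 1,
# so both conditions for being c.p.o. S_0 hold.

def a(p, q):
    return 1 / p if q <= p else 0.0

print([a(p, 3) for p in (3, 30, 300)])            # column q = 3 tends to 0
print([sum(abs(a(p, q)) for q in range(1, p + 1))
       for p in (1, 10, 100)])                    # row sums all equal 1
```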

Dawson [3] characterized the summability field of a matrix A by showing that A is convergence preserving over the set of all sequences which converge faster than some fixed sequence. The following results are due to him.

Theorem 1.6

If A is c.p.o. [b], then there exists \(b' \in S_{0}\) such that \(b < b'\) and A is c.p.o. \([b']\).

Theorem 1.7

If A is c.p.o. each of the sets \([b^{(1)}],[b^{(2)}],[b^{(3)}],\ldots\) , then there exists d ∈ S 0 such that \(b^{(\,p)} < d\), p = 1,2,3,… and A is c.p.o. [d].

It follows that A is convergence preserving over a set of the type (−,x).

Keagy and Ford [5] proved that if a subsequence transformation A accelerates x ∈ S 0, then it accelerates each y ∈ S 0 which converges at the same rate as x. They also proved the following two results.

Theorem 1.8

If A is a subsequence transformation and x ∈ S 0 , then there exist y, z ∈ S 0 such that y < x < z and A accelerates neither y nor z.

Theorem 1.9

If A is a subsequence transformation and x ∈ S 0 , then there exists y ∈ S 0 such that y < x and A accelerates y.

Keagy and Ford [5] also proved that an analogue of the above theorem does not exist for x < z; that is, the acceleration field of a subsequence transformation cannot be of any of the forms (x, −), [x, −), (−, x], (x, y), [x, y], (x, y] or [x, y), nor can it include any of the first four of these forms. They showed that the acceleration field of each subsequence transformation A is the union of a collection of sets of the form (x, y). The result is given below.

Theorem 1.10

If x ∈ S 0 and A is a subsequence matrix that accelerates x, then there exist y and z such that y < x < z and A accelerates each r ∈ ( y,z).

They also proved that this result cannot be extended to a larger class of sequences defined in terms of rate of convergence.

2 On Statistical Acceleration Convergence of Sequences

The notion of statistical convergence of sequences was introduced independently by Fast [4] and Schoenberg [9]. Later it was investigated further, from different aspects of sequence spaces and summability theory, by many researchers.

A subset E of N is said to have asymptotic density δ(E) if \(\delta (E) =\lim _{n\rightarrow \infty }\frac{1} {n}\sum \limits _{k=1}^{n}\chi _{E}(k)\) exists, where χ E is the characteristic function of E. Clearly all finite subsets of N have zero asymptotic density, and δ(E c) = δ(N ∖ E) = 1 − δ(E).
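For example (a minimal sketch; the set of perfect squares is our illustrative choice), the proportion of members of E among {1, …, n} can be computed directly:

```python
# Asymptotic density sketch: the squares have density 0, since there are
# only about sqrt(n) perfect squares among {1, ..., n}.

def density_up_to(E, n):
    return sum(1 for k in range(1, n + 1) if k in E) / n

squares = {i * i for i in range(1, 1001)}   # all squares up to 10**6
print([density_up_to(squares, n) for n in (100, 10_000, 1_000_000)])
# approximately 1/sqrt(n) -> 0
```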

A sequence (x k ) is said to be statistically convergent to L if for every ɛ > 0, δ({k ∈ N: | x k − L | ≥ ɛ}) = 0. We write stat−lim x k  = L.

Throughout, ω, \(\ell _{\infty }\), c, c 0, \(\bar{c}\), \(\bar{c_{0}}\), m 0 represent the spaces of all, bounded, convergent, null, statistically convergent, statistically null and bounded statistically null sequences, respectively. Further, S 0 and \(\bar{S_{0}}\) denote the subsets of c 0 and m 0, respectively, consisting of sequences with non-zero terms.

Example 2.1

The sequence (x k ) defined by x k  = i, for k = i 2 (i ∈ N), and x k  = k −2, otherwise, is statistically convergent to 0 and is unbounded.
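A numerical illustration (a minimal sketch of this example): the indices where | x k | exceeds a given ɛ are essentially the perfect squares, a set of density 0.

```python
# Example 2.1 sketch: x_k = i when k = i**2, x_k = k**-2 otherwise.
# The "bad" set {k : |x_k| >= eps} has asymptotic density 0, so
# stat-lim x_k = 0 although the sequence is unbounded.

import math

def x(k):
    r = math.isqrt(k)
    return r if r * r == k else k ** -2

eps, n = 0.5, 100_000
bad = [k for k in range(1, n + 1) if abs(x(k)) >= eps]
print(len(bad) / n)      # about 1/sqrt(n), tending to 0
```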

Tripathy and Sen [13] introduced the notion of statistical acceleration convergence of sequences as follows.

Definition 2.1

Let the sequence (x k ) be statistically convergent to σ and the sequence ( y k ) be statistically convergent to λ, with \((x_{k}-\sigma ) \in \bar{S_{0}}\) and \((\,y_{k}-\lambda ) \in \bar{S_{0}}\). Then the sequence (x k ) is said to statistically converge to σ statistically faster than ( y k ) statistically converges to λ, written (x k ) < stat( y k ), if

\(\mathrm{stat\text{-}lim}\,\frac{x_{k}-\sigma } {y_{k}-\lambda } = 0\), provided y k −λ ≠ 0 for all k ∈ N.

Definition 2.2

The sequence (x k ) statistically converges to σ statistically at the same rate as the sequence ( y k ) statistically converges to λ, written (x k ) ≈ stat ( y k ), if

$$\displaystyle{ 0 < \mathrm{stat\text{-}lim\,inf}\left \vert \frac{x_{k}-\sigma } {y_{k}-\lambda }\right \vert \leq \mathrm{stat\text{-}lim\,sup}\left \vert \frac{x_{k}-\sigma } {y_{k}-\lambda }\right \vert < \infty. }$$

Tripathy and Sen [13] proved statistical analogues of most of the above results, and also established the following decomposition theorem for acceleration convergence.

Theorem 2.3

Let (x k ), \((\,y_{k}) \in \bar{ S_{0}}\). Then the following statements are equivalent.

  1. (i)

    (x k ) < stat ( y k ).

  2. (ii)

    there exist \((x_{k}')\) and \((\,y_{k}')\) in S 0 such that \(x_{k} = x_{k}'\) for a.a. k, \(y_{k} = y_{k}'\) for a.a. k, and \((x_{k}') < (\,y_{k}')\).

  3. (iii)

    there exists a subset K = { k i : i ∈ N} of N such that δ(K) = 1 and \((x_{k_{i}}) < (\,y_{k_{i}})\).

Remark 2.4

Keagy and Ford [5] conjectured “If A is any subsequence transformation and (x k ) ∈ S 0, then either Ax < x or Ax ≈ x”. Tripathy and Sen [13] provided the following example, which shows that this conjecture fails.

Example 2.2

Let A be the subsequence transformation given by \((x_{k_{i}}) = (x_{1},x_{3},x_{5},\ldots )\), i.e., the selection of the odd-indexed terms of (x k ).

3 On I-Acceleration Convergence of Sequences

The notion of I-convergence was introduced by Kostyrko et al. [6]. Later it was investigated further from the sequence-space point of view and linked with summability theory by many others.

The notion depends on that of an ideal. Let X be a non-empty set; then a family of sets I ⊂ 2X is an ideal if and only if for each A, B ∈ I we have A ∪ B ∈ I, and for each A ∈ I and each B ⊂ A we have B ∈ I. A non-empty family of sets F ⊂ 2X is a filter on X if and only if ∅ ∉ F, for each A, B ∈ F we have A ∩ B ∈ F, and for each A ∈ F and each B ⊃ A we have B ∈ F. An ideal I is called non-trivial if I ≠ ∅ and X ∉ I. Hence I ⊂ 2X is a non-trivial ideal if and only if F = F(I) = { X ∖ A: A ∈ I} is a filter on X.
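These axioms can be sanity-checked mechanically on a finite toy ground set (a minimal sketch; the ideals of interest here live on the infinite set N, so this is only a finite analogue):

```python
from itertools import combinations

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

X = frozenset(range(6))
I = [A for A in powerset(X) if A <= {0, 1}]   # ideal: all subsets of {0,1}

# ideal axioms: closed under finite unions and under taking subsets
assert all(A | B in I for A in I for B in I)
assert all(B in I for A in I for B in powerset(A))

# the dual family F(I) = {X \ A : A in I} avoids the empty set, and
# X not in I, so the ideal is non-trivial and F(I) is a filter
F = [X - A for A in I]
assert frozenset() not in F and X not in I
```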

A subset E of N is said to have logarithmic density d(E) if \(d(E) =\lim _{n\rightarrow \infty } \frac{1} {s_{n}}\sum \limits _{k=1}^{n}\frac{\chi _{E}(k)} {k}\) exists, where \(s_{n} = \sum \limits _{k=1}^{n}\frac{1} {k}\), for all n ∈ N. Clearly all finite subsets of N have zero logarithmic density, and d(E c) = d(N ∖ E) = 1 − d(E).

Let T = (t nk ) be a regular non-negative matrix. Then for E ⊂ N, if \(d_{T}(E) =\lim _{n\rightarrow \infty }\sum \limits _{k=1}^{\infty }t_{nk}\chi _{E}(k)\) exists, it is called the T-density of E. From the regularity of T it follows that \(\lim _{n\rightarrow \infty }\sum \limits _{k=1}^{\infty }t_{nk} = 1\), and from this and the non-negativity of T it follows that d T (E) ∈ [0, 1].

Clearly the asymptotic and logarithmic densities can be obtained as particular cases of T-density. If one takes \(t_{nk} = \frac{1} {n}\), for k ≤ n, and t nk  = 0, otherwise, then d T (E) = δ(E). If one takes \(t_{nk} = \frac{k^{-1}} {s_{n}}\), for k ≤ n, and t nk  = 0, otherwise, then d T (E) = d(E).
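Both special cases can be approximated at a fixed large n (a minimal sketch; the set E of even numbers, with density 1/2, is our illustrative choice; the logarithmic approximation converges slowly):

```python
# T-density sketch: the same set E evaluated under the two weight
# matrices above; both approximations approach 1/2.

n = 100_000
s_n = sum(1 / k for k in range(1, n + 1))
chi = lambda k: 1 if k % 2 == 0 else 0          # chi_E, E = even numbers

asymptotic = sum((1 / n) * chi(k) for k in range(1, n + 1))
logarithmic = sum((1 / k) / s_n * chi(k) for k in range(1, n + 1))
print(asymptotic, logarithmic)                  # both near 1/2
```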

The uniform density of a subset E of N is defined as follows: for integers t ≥ 0 and s ≥ 1, let E(t + 1, t + s) = Card{n ∈ E: t + 1 ≤ n ≤ t + s}. Let \(\beta _{s} =\liminf _{t\rightarrow \infty }E(t + 1,t + s)\) and \(\beta ^{s} =\limsup _{t\rightarrow \infty }E(t + 1,t + s)\). Then \(\underline{u}(E) =\lim _{s\rightarrow \infty } \frac{\beta _{s}} {s}\) and \(\bar{u}(E) =\lim _{s\rightarrow \infty } \frac{\beta ^{s}} {s}\) exist. If \(\underline{u}(E) =\bar{ u}(E)\), then we say that the uniform density of E exists and \(u(E) =\underline{ u}(E) = \overline{u}(E)\).
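For instance (a minimal sketch; E = the even numbers is our choice), every window of s consecutive integers contains ⌊s/2⌋ or ⌈s/2⌉ even numbers, so u(E) = 1/2:

```python
# Uniform-density sketch: counts of even numbers in windows of length s,
# over many starting points t; min/max counts divided by s approach 1/2.

def window_count(t, s):
    # Card{ n in E : t + 1 <= n <= t + s }, with E = even numbers
    return sum(1 for n in range(t + 1, t + s + 1) if n % 2 == 0)

s = 1001
counts = [window_count(t, s) for t in range(2000)]
print(min(counts) / s, max(counts) / s)     # both near 1/2
```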

Remark 3.1

Throughout, we consider I to be a non-trivial ideal of subsets of N, the set of natural numbers.

Definition 3.2

A sequence (x k ) is said to be I-convergent to L if for each ɛ > 0, {k ∈ N: | x k − L | ≥ ɛ} ∈ I. We write I −lim x k  = L.

The following are examples of ideals:

Example 3.1

The class I f of all finite subsets of N is an ideal of 2N.

Example 3.2

The class \(I_{C} = \{E \subset N: \sum \limits _{k\in E}k^{-1} < \infty \}\) is an ideal of 2N.

Example 3.3

The class I δ  = { E ⊂ N: δ(E) = 0} is an ideal of 2N.

Example 3.4

The class I d  = { E ⊂ N: d(E) = 0} is an ideal of 2N.

Example 3.5

The class \(I_{d_{T}}\) = { E ⊂ N: d T (E) = 0} is an ideal of 2N.

Example 3.6

The class I u  = { E ⊂ N: u(E) = 0} is an ideal of 2N.

Remark 3.3

All the ideals considered in the above examples are non-trivial ideals.

Tripathy and Mahanta [12] introduced the following definition of acceleration convergence related to I-convergence of sequences:

Definition 3.4

Let I −lim x k  = σ and I −lim y k  = λ, with \((x_{k}-\sigma ),(\,y_{k}-\lambda ) \in S_{0}^{I}\). Then we say (x k ) I-converges to σ I-faster than ( y k ) I-converges to λ, written (x k ) < I( y k ), if \(I\text{-}\lim _{k\rightarrow \infty } \frac{x_{k}-\sigma } {y_{k}-\lambda } = 0\), provided y k −λ ≠ 0, for all k ∈ N.

Peterson and Savas [7] have studied acceleration convergence for double sequences, and Tripathy and Dutta [11] have studied I-acceleration convergence for sequences of fuzzy real numbers.