
3.1 Introduction

In this chapter, we shall first review, in Sect. 3.2, certain results on infinite linear systems whose “coefficient” matrices are diagonally dominant in some sense. We treat these infinite matrices both in their own right and as operators on certain normed linear spaces. These results show the extent to which results known for finite matrices have been generalized. Next, in Sect. 3.3, we recall some results on eigenvalues of operators, mainly of the type considered in the second section. We also review a powerful numerical method for computing eigenvalues of certain diagonally dominant tridiagonal operators. Section 3.4 concerns linear differential systems whose coefficient matrices are operators on either \(\ell^{1}\) or \(\ell^{\infty }\). Convergence results for truncated systems are presented. The concluding section, Sect. 3.5, discusses an iterative method for numerically solving a linear equation whose matrix is treated as an operator on \(\ell^{\infty }\) satisfying certain conditions, including a diagonal dominance condition.

3.2 Infinite Linear Systems

In this section, we shall take a look at considerations of diagonal dominance for infinite matrices. In classical analysis, linear equations in infinitely many unknowns occur in problems including interpolation, sequence spaces, and summability theory. A notable early result on the existence of a solution to such an infinite system was given by Polya, which, however, excluded any discussion of uniqueness. Kantorovich and Krylov [54] state certain results (without proofs) which provide sufficient conditions for the existence and uniqueness of bounded solutions under the assumption that the infinite matrix under consideration is invertible. One of the motivations for the results presented in this section is the solution of certain elliptic partial differential equations in multiply connected regions.

In what follows, we shall first review two works, viz., [113] and [101], in which infinite matrices were considered in their own right rather than as operators on normed linear spaces. Specifically, we are concerned here with the infinite system of linear equations [113]:

$$\displaystyle{ \sum _{j=1}^{\infty }a_{ ij}x_{j} = b_{i},\ i \in \mathbb{N}, }$$
(3.1)

or alternatively

$$\displaystyle{ Ax = b, }$$
(3.2)

where the infinite matrix \(A = (a_{ij})\) is strictly “row” diagonally dominant, i.e., there exist numbers \(\sigma _{i}\) with \(0 \leq \sigma _{i} <1,\ i \in \mathbb{N}\), such that

$$\displaystyle{ \sigma _{i}\vert a_{ii}\vert =\sum _{j=1,\ j\neq i}^{\infty }\vert a_{ij}\vert, }$$
(3.3)

where, of course, \(a_{ii}\neq 0\) for all \(i \in \mathbb{N}\), and the sequence \(\{b_{i}\}\) is bounded. One is interested in sufficient conditions that guarantee the existence and uniqueness of a bounded solution to the above system. The idea of the approach taken in [113] is to use finite truncations and to develop estimates for the truncated system, viz.,

$$\displaystyle{ \sum _{j=1}^{n}a_{ ij}x_{j} = b_{i},\ i = 1,2,\ldots,n. }$$
(3.4)

Using \(A^{(n)}\) to denote the matrix obtained from A by taking its first n rows and n columns, it may be shown that \(\det (A^{(n)})\neq 0\) for each n. Adopting a similar notation for \(x^{(n)}\) and \(b^{(n)}\), the truncated system above can be rewritten as

$$\displaystyle{ A^{(n)}x^{(n)} = b^{(n)}. }$$
(3.5)

Let us denote the unique solution of this truncated system by \(x^{(n)}\). The following inequalities are established: for each j ≥ 1 and n ≥ j, one has

$$\displaystyle{ \left \vert x_{j}^{(n+1)} - x_{ j}^{(n)}\right \vert \leq P\sigma _{ n+1} + \frac{Q} {\vert a_{n+1,n+1}\vert } }$$
(3.6)

for some positive constants P and Q. For any two positive integers p and q with p < q, and for each fixed j with j ≤ p, one also has

$$\displaystyle{ \left \vert x_{j}^{(q)} - x_{ j}^{(p)}\right \vert \leq P\sum _{ i=p+1}^{\infty }\sigma _{ i} + Q\sum _{i=p+1}^{q} \frac{1} {\vert a_{i,i}\vert }. }$$
(3.7)

Using standard estimates for strictly row diagonally dominant finite systems, an estimate for the solution of the truncated system is given by,

$$\displaystyle{ \left \vert x_{j}^{(n)}\right \vert \leq \prod _{ k=1}^{n}\frac{1 +\sigma _{k}} {1 -\sigma _{k}}\sum _{k=1}^{n} \frac{\vert b_{k}\vert } {\vert a_{kk}\vert (1 +\sigma _{k})}, }$$
(3.8)

for each j with j ≤ n. Turning our attention to the infinite system, let us assume that the entries \(a_{ij}\) satisfy

$$\displaystyle{ \sum _{i=1}^{\infty } \frac{1} {\vert a_{ii}\vert } <\infty, }$$
(3.9)

and for some M > 0 and all \(i \in \mathbb{N}\)

$$\displaystyle{ \qquad \sum _{j=1,j\neq i}^{\infty }\vert a_{ ij}\vert \leq M. }$$
(3.10)

Then Shivakumar and Wong show (Theorems 1 and 2, [113]) that the infinite system considered above has a unique bounded solution. Let us observe that the authors give a numerical example showing that a general infinite system may have a unique bounded solution and yet still possess unbounded solutions.
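To illustrate the truncation approach, the following sketch (our construction; the entries are hypothetical and not taken from [113]) sets \(a_{ii} = (i+1)^{2}\), \(a_{i,i\pm 1} = 1\), and \(b_{i} = 1\), so that (3.3), (3.9), and (3.10) all hold, and solves the truncated systems (3.5) for increasing n; the leading components stabilize, in line with the estimates (3.6) and (3.7).

```python
import numpy as np

def entry(i, j):
    """Entries of a hypothetical infinite matrix (1-based indices):
    a_ii = (i+1)^2 gives an unbounded diagonal with summable reciprocals (3.9);
    a_{i,i+1} = a_{i,i-1} = 1 keeps the off-diagonal row sums bounded (3.10)."""
    if i == j:
        return float((i + 1) ** 2)
    if abs(i - j) == 1:
        return 1.0
    return 0.0

def truncated_solution(n):
    """Solve the n x n truncated system A^(n) x^(n) = b^(n) of (3.5), with b_i = 1."""
    A = np.array([[entry(i, j) for j in range(1, n + 1)]
                  for i in range(1, n + 1)])
    return np.linalg.solve(A, np.ones(n))

# The leading components x_j^(n) stabilize as n grows.
for n in [5, 10, 20, 40]:
    print(n, truncated_solution(n)[:4])
```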

Shivakumar (Theorems 3 and 4, [101]) later relaxed the assumption on the absolute summability of the reciprocals of the diagonal entries of A, while retaining the other assumptions, and showed that similar existence and uniqueness results can still be recovered. In the presence of another rather strong assumption, he shows that \(A^{-1}\) is also strictly “row” diagonally dominant. It must be remarked that this is a rather unusual result, especially for infinite matrices.

Now we take the point of view of studying infinite matrices as operators on certain Banach spaces. We will be discussing operators on two specific spaces: \(\ell^{1}\), the space of absolutely summable complex sequences, and \(\ell^{\infty }\), the space of bounded complex sequences. We shall review some recent results on certain classes of strictly (“row” or “column”) diagonally dominant infinite matrices that turn out to be invertible. Bounds on the inverses in these cases are given. The work reported here is due to Shivakumar, Williams, and Rudraiah [119].

Given a matrix \(A = (a_{ij}),\ i,j \in \mathbb{N}\), a space X of infinite sequences over the real or the complex field, and \(x = (x_{i}),\ i \in \mathbb{N}\), we define Ax by

$$\displaystyle{(Ax)_{i} =\sum \limits _{ j=1}^{\infty }a_{ ij}x_{j},}$$

provided this series converges for each \(i \in \mathbb{N}\). We define the domain of A by

$$\displaystyle{D(A) =\{ x \in X: Ax\text{ exists and }Ax \in X\}.}$$

Let us start with the case \(X =\ell^{1}\). Consider an infinite matrix A on \(\ell^{1}\). We assume that the “diagonal” entries of A are all nonzero and form an unbounded sequence of real or complex numbers, and that A is uniformly strictly “column” diagonally dominant in the following sense: there exist numbers \(\rho,\ 0 \leq \rho <1\) and \(\rho _{j},\ 0 \leq \rho _{j} \leq \rho\) such that one has

$$\displaystyle{Q_{j} =\sum \nolimits _{ i=1,i\neq j}^{\infty }\vert a_{ ij}\vert =\rho _{j}\vert a_{jj}\vert,\ j \in \mathbb{N}.}$$

We further assume that

$$\displaystyle{\vert a_{ii} - a_{jj}\vert \geq Q_{i} + Q_{j},\text{ for all }i,j \in \mathbb{N},\ i\neq j}$$

and

$$\displaystyle{\sup \{\vert a_{ij}\vert: j \in \mathbb{N}\} <\infty \text{ for all }i \in \mathbb{N}.}$$

For an operator A satisfying the first two conditions, Shivakumar, Williams, and Rudraiah show (Theorem 2, [119]) that A is a densely defined invertible operator and that \(A^{-1}\) is compact. The following upper bound is also proved:

$$\displaystyle{\|A^{-1}\|_{ 1} \leq \frac{1} {(1-\rho )(\inf _{i}\vert a_{ii}\vert )}.}$$

Similar results are also derived for operators on \(\ell^{\infty }\). For an operator A on \(\ell^{\infty }\), consider the following set of conditions, which may be regarded as “dual” to the assumptions made above for an operator on \(\ell^{1}\): there exist numbers \(\sigma,\ 0 \leq \sigma <1\) and \(\sigma _{i},\ 0 \leq \sigma _{i} \leq \sigma\) such that one has

$$\displaystyle\begin{array}{rcl} & P_{i} =\sum \nolimits _{j=1,\ j\neq i}^{\infty }\vert a_{ij}\vert =\sigma _{i}\vert a_{ii}\vert,\ i \in \mathbb{N}, & {}\\ & \vert a_{ii} - a_{jj}\vert \geq P_{i} + P_{j},\text{ for all }i,j \in \mathbb{N},\ i\neq j& {}\\ \end{array}$$

and

$$\displaystyle{\sup \{\vert a_{ij}\vert: i \in \mathbb{N}\} <\infty \text{ for all }j \in \mathbb{N}.}$$

Analogous to the case of \(\ell^{1}\), the first condition may be regarded as uniform strict “row” diagonal dominance. Then, for an operator A on \(\ell^{\infty }\) satisfying the two conditions given above, together with the condition that the main diagonal entries of A are all nonzero and form an unbounded sequence, it is proved (Theorem 4, [119]) that A is a closed operator, \(A^{-1}\) exists, \(A^{-1}\) is compact, and

$$\displaystyle{\|A^{-1}\|_{ \infty }\leq \frac{1} {(1-\sigma )(\inf _{i}\vert a_{ii}\vert )}.}$$

It is clear that this last inequality generalizes, to infinite matrices, the inequality of Varah mentioned in Sect. 2.2. It must be remarked here that there is a departure from what one experiences in the finite case for a diagonally dominant matrix that is also irreducible. The authors demonstrate by an example (Example 1, [119]) that there are infinite matrices (considered as bounded operators on \(\ell^{\infty }\)) which are irreducible and diagonally dominant (meaning that \(\sigma = 1\) while all \(\sigma _{i} <1\)) but are not invertible.
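The Varah-type bound above is easy to test numerically on finite truncations. The sketch below (our illustration; the entries are hypothetical) uses \(a_{ii} = 10i\) and \(a_{i,i\pm 1} = 0.5\), for which \(\sigma = 0.05\) and \(\inf _{i}\vert a_{ii}\vert = 10\), and compares \(\|(A^{(n)})^{-1}\|_{\infty }\) with the right-hand side of the inequality.

```python
import numpy as np

def entry(i, j):
    """Hypothetical entries (1-based): a well-separated, unbounded diagonal
    (|a_ii - a_jj| >= 10 >= P_i + P_j) and small row sums off the diagonal."""
    if i == j:
        return 10.0 * i
    if abs(i - j) == 1:
        return 0.5
    return 0.0

sigma = 0.05                           # max_i P_i / |a_ii|, attained at i = 1, 2
bound = 1.0 / ((1.0 - sigma) * 10.0)   # 1 / ((1 - sigma) inf_i |a_ii|)

for n in [10, 50, 200]:
    A = np.array([[entry(i, j) for j in range(1, n + 1)]
                  for i in range(1, n + 1)])
    inv_norm = np.linalg.norm(np.linalg.inv(A), ord=np.inf)
    print(n, inv_norm, bound)          # the computed norms stay below the bound
```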

We close this section by mentioning some recent results on infinite matrices obtained by Williams and Ye [138], which, however, concern neither diagonal dominance nor invertibility. This work investigates conditions that guarantee that an infinite matrix is bounded as an operator between two weighted \(\ell^{1}\) spaces, and obtains a relationship between such a matrix and the given weight vectors. It is established that every infinite matrix is bounded as an operator between two weighted \(\ell^{1}\) spaces for suitable weights. Necessary conditions, and separate sufficient conditions, for an infinite matrix to be bounded on a weighted \(\ell^{1}\) space, with the same weight for the domain and the codomain, are also presented.

3.3 Linear Eigenvalue Problem

In this section, we consider the eigenvalue problem for infinite matrices regarded as operators on certain Banach spaces. We also discuss results on locating the eigenvalues of diagonally dominant infinite matrices and on determining upper and lower bounds for them. First, we report the results of Shivakumar, Williams, and Rudraiah [119].

If \(A = (a_{ij}),\ i,j \in \mathbb{N}\), and X is a space of infinite sequences, then the domain of A, denoted by D(A), is defined as in the previous section. We define an eigenvalue of A to be any scalar \(\lambda\) (from the underlying field) for which \(Ax =\lambda x\) for some \(0\neq x \in D(A)\). Considering A as an operator on \(\ell^{1}\), we define the Gershgorin disks by

$$\displaystyle{C_{i} =\{ z \in \mathbb{C}: \vert z - a_{ii}\vert \leq Q_{i}\},\ i \in \mathbb{N},}$$

where the numbers \(Q_{i}\) are as defined in the previous section. Let us also recall another assumption that was made there:

$$\displaystyle{\vert a_{ii} - a_{jj}\vert \geq Q_{i} + Q_{j},\text{ for all }i,j \in \mathbb{N},\ i\neq j.}$$

We observe that this condition on A is equivalent to the (almost) disjointness of the Gershgorin disks \(C_{i}\), viz., the intersection of any two disks consists of at most one point. Finally, the condition on the boundedness of the suprema made in the earlier section implies that A is a closed linear operator on \(\ell^{1}\).

One of the main results of the work being reported here shows the following for an operator A on \(\ell^{1}\) satisfying the three conditions of Sect. 3.2. This result (Theorem 3, [119]) states that the spectrum of A consists of a discrete countable set of nonzero eigenvalues \(\{\lambda _{k}: k \in \mathbb{N}\}\) such that \(\vert \lambda _{k}\vert \rightarrow \infty\) as \(k \rightarrow \infty\).

For the case of \(\ell^{\infty }\), we assume that the entries \(a_{ij}\) satisfy the conditions given in Sect. 3.2. We define the Gershgorin disks as

$$\displaystyle{D_{i} =\{ z \in \mathbb{C}: \vert z - a_{ii}\vert \leq P_{i}\},\ i \in \mathbb{N}.}$$

The authors also prove a result (Theorem 5, [119]) similar to the \(\ell^{1}\) case: the spectrum of A (satisfying all three conditions listed above) consists of a discrete countable set of nonzero eigenvalues \(\{\lambda _{k}: k \in \mathbb{N}\}\) such that \(\vert \lambda _{k}\vert \rightarrow \infty\) as \(k \rightarrow \infty\).

Let us mention certain interesting contributions and generalizations of the work reported earlier in this section, due to Farid and Lancaster [32] and [33]. In the first work, certain Gershgorin-type theorems were established for a class of row diagonally dominant infinite matrices by considering them as operators on \(\ell^{p}\) spaces, \(1 \leq p \leq \infty\). The authors develop a theory analogous to the work in [119]. They provide constructive proofs in which a sequence of matrix operators is shown to converge (in the sense of the gap for closed operators) to the diagonally dominant operator one started with. Utilizing the eigenvalues and eigenvectors of such a sequence of matrix operators, the convergence of these eigenvalues and the corresponding eigenvectors to a simple eigenvalue and the corresponding eigenvector of the given operator is investigated. Although the range of p was extended in this work, the results for p = 1 and \(p = \infty\) are weaker than the corresponding ones of [119]. In the second work, however, the authors show how the earlier contributions of [119] can be both strengthened and extended to more general values of p. Here, row diagonally dominant infinite matrices are considered as closed operators with compact inverses on \(\ell^{p}\) spaces, \(1 \leq p \leq \infty\), and the results of the earlier work for p = 1 and \(p = \infty\) are extended. Results for column diagonally dominant infinite matrices are also derived (Theorems 2.1, 3.1 and 3.2, [33]).

Let us include some other contributions as well. Farid [29] shows (Theorem 3.2) that the eigenvalues of a diagonally dominant infinite matrix satisfying certain additional conditions, acting as a linear operator on \(\ell^{2}\), approach its main diagonal. He also discusses an application of this result to approximating the eigenvalues of Mathieu’s equation. Malejki [70] studies a real symmetric tridiagonal matrix A whose diagonal entries and off-diagonal entries satisfy certain growth and decay properties; it follows that such an operator A has a discrete spectrum. Let \(A^{(n)}\) be its n × n truncation. The main result of the author is the following: if the eigenvalues of A are

$$\displaystyle{\lambda _{1} \leq \lambda _{2} \leq \ldots }$$

and

$$\displaystyle{\mu _{1,n} \leq \mu _{2,n} \leq \ldots \leq \mu _{n,n}}$$

are the eigenvalues of \(A^{(n)}\), then for every γ > 0 and r ∈ (0, 1), there exists a constant c such that

$$\displaystyle{\vert \mu _{k,n} -\lambda _{k}\vert \leq cn^{-\gamma }\text{ for all }1 \leq k \leq rn.}$$
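As a quick numerical illustration (ours, not Malejki's setting verbatim), take the symmetric tridiagonal operator with diagonal entries \(i^{2}\) and off-diagonal entries \(1/i\); for fixed k, the eigenvalues \(\mu _{k,n}\) of the truncations settle rapidly as n grows and lie close to the diagonal entries, in the spirit of both results above.

```python
import numpy as np

def truncation(n):
    """n x n truncation of a hypothetical symmetric tridiagonal operator:
    growing diagonal i^2, decaying off-diagonal 1/i (1-based index i)."""
    d = np.array([float(i ** 2) for i in range(1, n + 1)])
    e = np.array([1.0 / i for i in range(1, n)])
    return np.diag(d) + np.diag(e, 1) + np.diag(e, -1)

mu_50 = np.linalg.eigvalsh(truncation(50))     # eigenvalues in ascending order
mu_100 = np.linalg.eigvalsh(truncation(100))

# For fixed k, mu_{k,n} is essentially independent of n once n >> k,
# and it stays close to the k-th diagonal entry k^2.
for k in range(5):
    print(k + 1, mu_50[k], mu_100[k], (k + 1) ** 2)
```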

To conclude this section, we discuss a powerful computational technique for determining the eigenvalues of the infinite system \(Ax =\lambda x\), derived by using a truncated matrix \(G^{(1,k)}\), to be defined below. The idea of this technique is to box the eigenvalues and then use a simple bisection method to give the value of \(\lambda _{n}\) to any required degree of accuracy.

Consider a matrix \(A = (a_{ij})\) acting on \(\ell^{\infty }\) satisfying all of the conditions given earlier and, in addition, the following:

$$\displaystyle\begin{array}{rcl} & a_{ij} = 0,\text{ if }\vert i - j\vert \geq 2,\ i,j \in \mathbb{N}& {}\\ & 0 <a_{ii} <a_{i+1,i+1},\ i \in \mathbb{N} & {}\\ \end{array}$$

and

$$\displaystyle{a_{i,i+1}a_{i+1,i}> 0,\ i \in \mathbb{N}.}$$

Observe that the first condition here means that A is a tridiagonal matrix, viz., all entries outside the principal diagonal and the two immediately adjacent diagonals (the lower and the upper) are zero.

Suppose that the scalar \(\lambda\) satisfies

$$\displaystyle{a_{nn} - P_{n} \leq \lambda \leq a_{nn} + P_{n},\text{ for some }n \in \mathbb{N},}$$

where the \(P_{i}\) are as defined above (in connection with the Gershgorin disks). Let \(G = A -\lambda I\), and let \(G^{(1,k)}\) denote the truncated matrix obtained from G by taking only its first k rows and k columns. Denote \(\beta _{1,k}:= \mbox{ det}\ G^{(1,k)}\). We then have the following [119, Sect. 8]:

$$\displaystyle\begin{array}{rcl} \beta _{1,k}& =& (a_{11}-\lambda )\beta _{2,k} - a_{12}a_{21}\beta _{3,k}{}\end{array}$$
(3.11)
$$\displaystyle\begin{array}{rcl} & =& [(a_{11}-\lambda )(a_{22}-\lambda ) - a_{12}a_{21}]\beta _{3,k} - (a_{11}-\lambda )a_{23}a_{32}\beta _{4,k}{}\end{array}$$
(3.12)

so that

$$\displaystyle{ \beta _{1,k} = p_{s}\beta _{s,k} - p_{s-1}a_{s-1,s}a_{s,s-1}\beta _{s+1,k}, }$$
(3.13)

where the sequence \(p_{s}\) is defined by \(p_{0} = 0,\ p_{1} = 1\) and

$$\displaystyle{p_{s} = p_{s-1}\,(a_{s-1,s-1}-\lambda ) - p_{s-2}\,a_{s-1,s-2}\,a_{s-2,s-1}.}$$

If we set

$$\displaystyle{ Q_{s,k} = \frac{p_{s}} {p_{s-1}} - a_{s-1,s}a_{s,s-1}\,\frac{\beta _{s+1,k}} {\beta _{s,k}}, }$$
(3.14)

we then have

$$\displaystyle{ \beta _{1,k} = p_{s-1}Q_{s,k}\beta _{s,k}. }$$
(3.15)

We have the following cases:

Case (i):

\(p_{s-1}\) and \(p_{s}\) have opposite signs.

Then \(Q_{s,k} <0\) and \(\beta _{1,k}\) has the same sign as \(-p_{s-1}\).

Case (ii):

\(p_{s-1}\) and \(p_{s}\) have the same sign and

$$\displaystyle{ \frac{p_{s}} {p_{s-1}}> \frac{a_{s,s-1}a_{s-1,s}} {a_{ss} -\lambda -\vert a_{s,s+1}\vert }.}$$

Then \(Q_{s,k}> 0\) and \(\beta _{1,k}\) has the same sign as \(p_{s-1}\).

Case (iii):

\(p_{s-1}\) and \(p_{s}\) have the same sign and

$$\displaystyle{ \frac{p_{s}} {p_{s-1}} <\frac{a_{s,s-1}a_{s-1,s}} {a_{ss} -\lambda -\vert a_{s,s+1}\vert }.}$$

Then \(Q_{s,k} <0\) and \(\beta _{1,k}\) has the same sign as \(-p_{s-1}\).

We can use the method of bisection to establish both upper and lower bounds for \(\lambda _{n}\) to any degree of accuracy.
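A minimal sketch of the boxing-and-bisection idea on a finite truncation follows (our simplification, with illustrative entries). Instead of tracking \(p_{s}\) and \(Q_{s,k}\) explicitly, it uses the equivalent and numerically safer pivot form of the determinant recurrence for the truncation of \(G = A -\lambda I\): the number of negative pivots counts the eigenvalues below \(\lambda\), so bisecting the Gershgorin interval \([a_{nn} - P_{n},\,a_{nn} + P_{n}]\) isolates \(\lambda _{n}\).

```python
import numpy as np

def count_below(diag, lo, up, lam):
    """Count the eigenvalues of the tridiagonal truncation that are < lam,
    using the pivot recurrence q_s = (a_ss - lam) - a_{s,s-1} a_{s-1,s} / q_{s-1};
    valid here since a_{s,s-1} a_{s-1,s} > 0 (the matrix is symmetrizable)."""
    count, q = 0, 1.0
    for s in range(len(diag)):
        offd = lo[s - 1] * up[s - 1] if s > 0 else 0.0
        q = diag[s] - lam - offd / q
        if q == 0.0:
            q = 1e-30            # nudge an exact zero pivot
        if q < 0.0:
            count += 1
    return count

def bisect_eigenvalue(diag, lo, up, n, a, b, tol=1e-12):
    """Bisect [a, b] (e.g., the n-th Gershgorin interval) down to width tol,
    keeping the n-th smallest eigenvalue bracketed throughout."""
    while b - a > tol:
        mid = 0.5 * (a + b)
        if count_below(diag, lo, up, mid) >= n:
            b = mid
        else:
            a = mid
    return 0.5 * (a + b)

# Illustrative truncation (k = 200): a_ii = i, a_{i,i+1} = a_{i+1,i} = 0.3.
k = 200
diag = np.arange(1.0, k + 1)
lo = np.full(k - 1, 0.3)
up = np.full(k - 1, 0.3)
# Box lambda_1 in [a_11 - P_1, a_11 + P_1] and bisect.
print(bisect_eigenvalue(diag, lo, up, 1, diag[0] - 0.3, diag[0] + 0.3))
```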

Let us close this section by mentioning in passing that, in this work, the authors ([119], Sect. 6) also establish some results concerning the convergence of the sequence of solutions of the truncated systems and study the error analysis in detail. An application of the above technique to the study of Bessel functions forms the discussion in Sect. 8.6 of Chap. 8.

3.4 Linear Differential Systems

We consider the infinite linear system of differential equations:

$$\displaystyle{ \frac{d} {dt}x_{i}(t) =\sum _{ j=1}^{\infty }a_{ ij}x_{j}(t) + f_{i}(t),\qquad t \geq 0,\ x_{i}(0) = y_{i},\ i \in \mathbb{N}, }$$
(3.16)

where the functions \(f_{i}\) and the numbers \(y_{i}\) are known. Using the notation \(\mathbf{x}(t) = (x_{i}(t))\), \(\mathbf{y} = (y_{i})\), and \(\mathbf{f}(t) = (f_{i}(t))\), this equation can be rewritten as

$$\displaystyle{ \dot{\mathbf{x}}(t) = A\mathbf{x}(t) + \mathbf{f}(t),\qquad \mathbf{x}(0) = \mathbf{y}. }$$
(3.17)

This equation is of considerable theoretical and applied interest. In particular, such systems occur frequently in topics including the theory of stochastic processes, the perturbation theory of quantum mechanics, the degradation of polymers, and infinite ladder network theory. Arley and Borchsenius [3], Bellman [7], and Shaw [100] have made notable contributions to the problem posed above. In particular, if A is a bounded operator on \(\ell^{1}\), then convergence of a truncated system has been established. However, none of these works yields explicit error bounds for such a truncation. In what follows, we recall the results of Shivakumar, Chew, and Williams [118] for such error bounds, among other things. The analysis in this work concerns a constant infinite matrix A defining a bounded operator on X, where X is one of the spaces \(\ell^{1},\ \ell^{\infty }\), or \(c_{0}\), the latter being the space of complex sequences converging to zero. Explicit error bounds are obtained for the approximation of the solution of the infinite system by the solutions of finite truncated systems.

To begin with, we present the following framework for homogeneous systems (f = 0). First, we assume that \(\mathbf{y} \in \ell^{1}\). Next, suppose that

$$\displaystyle{\alpha =\sup \{\sum \limits _{ i=1}^{\infty }\vert a_{ ij}\vert: j \in \mathbb{N}\} <\infty.}$$

Set

$$\displaystyle{\gamma _{n} =\sup \{\sum \limits _{ i=n+1}^{\infty }\vert a_{ ij}\vert: j = 1,2,\ldots,n\}}$$

and

$$\displaystyle{\delta _{n} =\sup \{\sum \limits _{i=1}^{n}\vert a_{ij}\vert: j = n + 1,n + 2,\ldots \}.}$$

We assume that

$$\displaystyle{\gamma _{n} \rightarrow 0\text{ and }\delta _{n} \rightarrow 0,\text{ as }n \rightarrow \infty.}$$

In the above, the finiteness of the supremum \(\alpha\) is equivalent to the statement that A is bounded on \(\ell^{1}\). The assumption involving \(\gamma _{n}\) states that the sums in the definition of \(\alpha\) converge uniformly below the main diagonal; it is a condition involving only the entries of A below the main diagonal. On the other hand, the assumption involving \(\delta _{n}\) is a condition involving only the entries of A above the main diagonal.

For the sake of convenience and ease of use, let us adopt a notation used earlier for a different object: define the matrix \(A^{(n)}\) by \((A^{(n)})_{ij} = a_{ij}\) if 1 ≤ i, j ≤ n and \((A^{(n)})_{ij} = 0\) otherwise, with similar definitions for \(\mathbf{y}^{(n)}\) and \(\mathbf{f}^{(n)}\). This leads to the definition

$$\displaystyle{b_{ij}^{(n)} = ((A^{(n)})^{j}\mathbf{y}^{(n)})_{ i}}$$

using which we finally set

$$\displaystyle{x_{i}^{(n)}(t) =\sum \nolimits _{j=0}^{\infty }\frac{t^{j}} {j!}b_{ij}^{(n)},\ 1 \leq i \leq n.}$$

In Theorem 1 of [118], the following result is established: suppose that the first two assumptions given above are satisfied, together with one of the two conditions on \(\gamma _{n}\) and \(\delta _{n}\). Then

$$\displaystyle{\lim \nolimits _{n\rightarrow \infty }\mathbf{x}^{(n)}(t) = \mathbf{x}(t)}$$

in the \(\ell^{1}\) norm, uniformly in t on compact subsets of \([0,\infty )\). One also has explicit error bounds as given below:

$$\displaystyle\begin{array}{rcl} \sum _{i=1}^{n}\vert x_{ i}(t) - x_{i}^{(n)}(t)\vert & \leq & \alpha te^{\alpha t}\left [\frac{1} {2}\gamma _{n}Mt +\sum _{ k=n+1}^{\infty }\vert \mathbf{y}_{ k}\vert \right ]{}\end{array}$$
(3.18)

and

$$\displaystyle\begin{array}{rcl} \sum _{i=n+1}^{\infty }\vert x_{ i}(t)\vert & \leq & e^{\alpha t}\left [\gamma _{ n}Mt +\sum _{ k=n+1}^{\infty }\vert \mathbf{y}_{ k}\vert \right ].{}\end{array}$$
(3.19)

Combining these, one has

$$\displaystyle\begin{array}{rcl} \|\mathbf{x}(t) -\mathbf{x}^{(n)}(t)\|& \leq & e^{\alpha t}\left [\left (1 + \frac{1} {2}\alpha t\right )\gamma _{n}Mt + (1 +\alpha t)\sum _{k=n+1}^{\infty }\vert \mathbf{y}_{ k}\vert \right ], {}\\ \end{array}$$

corresponding to the condition on \(\gamma _{n}\) (with the right-hand side converging to zero as \(n \rightarrow \infty\)) and

$$\displaystyle{\sum \nolimits _{i=1}^{n}\vert x_{ i}(t) - x_{i}^{(n)}(t)\vert \leq \delta _{ n}Mte^{\alpha t},}$$

corresponding to the condition on \(\delta _{n}\) (with the right-hand side converging to zero as \(n \rightarrow \infty\)).
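The truncated solution is just \(\mathbf{x}^{(n)}(t) =\exp (tA^{(n)})\mathbf{y}^{(n)}\), so the convergence statement can be watched directly. The sketch below (our illustration; the entries and initial data are hypothetical) takes \(a_{ii} = -1\), \(a_{i,i\pm 1} = 1/(ij)\), and \(y_{i} = 2^{-i}\), for which \(\alpha <\infty\) and both \(\gamma _{n}\rightarrow 0\) and \(\delta _{n}\rightarrow 0\), and compares successive truncations in the \(\ell^{1}\) norm.

```python
import numpy as np
from scipy.linalg import expm

def entry(i, j):
    """Hypothetical bounded operator on l^1 (1-based indices): bounded column
    sums (alpha finite), with gamma_n and delta_n tending to zero."""
    if i == j:
        return -1.0
    if abs(i - j) == 1:
        return 1.0 / (i * j)
    return 0.0

def x_truncated(n, t):
    """x^(n)(t) = exp(t A^(n)) y^(n) with initial data y_i = 2^{-i}."""
    A = np.array([[entry(i, j) for j in range(1, n + 1)]
                  for i in range(1, n + 1)])
    y = np.array([2.0 ** (-i) for i in range(1, n + 1)])
    return expm(t * A) @ y

# The l^1 distance between a truncation and one twice its size shrinks with n,
# in line with the error bounds (3.18) and (3.19).
t = 1.0
for n in [5, 10, 20]:
    xs, xl = x_truncated(n, t), x_truncated(2 * n, t)
    print(n, np.abs(xl[:n] - xs).sum() + np.abs(xl[n:]).sum())
```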

For the nonhomogeneous system, one assumes that each \(f_{i}\) is continuous on \([0,\infty ),\ i \in \mathbb{N}\), and that \(\|\mathbf{f}(t)\| =\sum \limits _{i=1}^{\infty }\vert f_{i}(t)\vert\) converges uniformly in t on compact subsets of \([0,\infty )\). If one defines \(L(t) =\sup \{\|\mathbf{f}(\tau )\|: 0 \leq \tau \leq t\}\), then the condition on f above is equivalent to the statement that f is continuous from \([0,\infty )\) into \(\ell^{1}\), and so one has \(L(t) <\infty\) for all t ≥ 0.

We also have the following result (Theorem 2, [118]): suppose that \(\alpha <\infty\) and that the condition on the \(f_{i}\) given earlier holds. In addition, suppose that either the condition on \(\gamma _{n}\) or that on \(\delta _{n}\) holds. Then

$$\displaystyle{\lim \limits _{n\rightarrow \infty }\mathbf{x}^{(n)}(t) = \mathbf{x}(t)}$$

in the \(\ell^{1}\) norm uniformly in t on compact subsets of \([0,\infty )\), with explicit error bounds as given below:

$$\displaystyle{ \|\mathbf{x}(t) -\mathbf{x}^{(n)}(t)\| \leq \frac{1} {2}t^{2}e^{\alpha t}\gamma _{n}L(t) + te^{\alpha t}\sup \{\sum _{k=n+1}^{\infty }\vert f_{k}(\tau )\vert: 0 \leq \tau \leq t\}, }$$

corresponding to the condition on \(\gamma _{n}\) (with the right-hand side converging to zero as \(n \rightarrow \infty\)) and

$$\displaystyle{ \sum _{i=1}^{n}\vert x_{i}(t) - x_{i}^{(n)}(t)\vert \leq \delta _{n}L(t)\alpha ^{-2}\left [\alpha te^{\alpha t} + (1 - e^{\alpha t})\right ], }$$
(3.20)

corresponding to the condition on \(\delta _{n}\) (with the right-hand side converging to zero as \(n \rightarrow \infty\)).

Similar results hold for systems on \(\ell^{\infty }\) (Theorems 4 and 6, [118]) and for systems on \(c_{0}\) (Theorems 3 and 5, [118]). We refer the reader to [118] for details.

3.5 An Iterative Method

Iterative methods for linear equations in finite matrices have been the subject of a vast literature. Since all these methods involve the nonsingularity of the matrix, the various notions of diagonal dominance of matrices have played a major role, as evidenced by our discussion in Chap. 2. The interest in this section is to discuss an iterative method for certain diagonally dominant infinite systems, which we believe to be one of the first attempts towards such extensions.

Let us recall that in Sect. 2.3, we reviewed the work of Shivakumar and Chew where a certain notion of weakly chained diagonal dominance was discussed. A convergent iterative procedure was also proposed in the work [104].

In this section, we consider an infinite system of equations of the form \(T\mathbf{x} = \mathbf{v}\), where \(\mathbf{x},\mathbf{v} \in \ell^{\infty }\) and T is a (possibly unbounded) linear operator on \(\ell^{\infty }\). Suppose that the matrix of T relative to the usual Schauder basis is given by \(T = (t_{ij})\). Consider the following conditions:

  1. There exists η > 0 such that \(\vert t_{ii}\vert \geq \eta\) for all \(i \in \mathbb{N}\).

  2. There exist \(\sigma\) with \(0 \leq \sigma <1\) and \(\sigma _{i}\), \(0 \leq \sigma _{i} <\sigma <1\), \(i \in \mathbb{N}\) such that

    $$\displaystyle{ \sum _{j=1,\ j\neq i}^{\infty }\ \vert t_{ij}\vert =\sigma _{i}\vert t_{ii}\vert, }$$

  3. \(\sum \limits _{j=1}^{i-1}\frac{\vert t_{ij}\vert } {\vert t_{ii}\vert } \rightarrow 0\) as \(i \rightarrow \infty\).

  4. Either the diagonal entries of T form an unbounded sequence or \(\mathbf{v} \in c_{0}\).

Shivakumar and Williams first prove the following result (Theorem 1, [112]): let \(\mathbf{v} \in \ell^{\infty }\) and let T satisfy the first two conditions listed above. Then T has a (bounded) inverse and the equation \(T\mathbf{x} = \mathbf{v}\) has a unique \(\ell^{\infty }\) solution. This solution x satisfies the following inequality (all the norms ∥ ⋅ ∥ denote \(\parallel \cdot \parallel _{\infty }\)):

$$\displaystyle{\parallel \mathbf{x} \parallel =\parallel T^{-1}\mathbf{v} \parallel \leq \frac{\parallel \mathbf{v} \parallel } {\eta (1-\sigma )}.}$$

It must be remembered that two results of a similar type from the work [113] were discussed in Sect. 3.2.

Let \(T = D + F\), where D is the diagonal part of T (which, by virtue of the first assumption, is invertible) and F is the off-diagonal part of T. Let A be defined by \(A = -D^{-1}F\) and \(\mathbf{b} = D^{-1}\mathbf{v}\). Then \(T\mathbf{x} = \mathbf{v}\) is equivalent to the fixed-point system \(\mathbf{x} = A\mathbf{x} + \mathbf{b},\ \mathbf{b} \in \ell^{\infty }\), where A is a bounded linear operator on \(\ell^{\infty }\). If one imposes all four conditions on T listed above, then one has the following consequences:

  1. \(\|A\| =\sup \limits _{i\geq 1}\sum \limits _{j=1}^{\infty }\vert a_{ij}\vert \leq \sigma <1\),

  2. \(\sum \limits _{j=1}^{i-1}\vert a_{ij}\vert \rightarrow 0\) as \(i \rightarrow \infty\), and

  3. \(\mathbf{b} = (b_{i}) \in c_{0}\).

Let us note that the fixed-point equation above leads naturally to the iterative scheme:

$$\displaystyle{ \mathbf{x}^{(p+1)} = A\mathbf{x}^{(p)} + \mathbf{b},\ \mathbf{x}^{(0)} = \mathbf{b},\ p = 0,1,2,\ldots }$$
(3.21)

As before, let \(A^{(n)} = (a_{ij}^{(n)})\) be the infinite matrix obtained from A, where

$$\displaystyle{ a_{ij}^{(n)} = \left \{\begin{array}{@{}l@{\quad }l@{}} a_{ij}\quad &\mbox{ if $1 \leq i,j \leq n$}, \\ 0 \quad &\text{otherwise}. \end{array} \right. }$$
(3.22)

Thus \(A^{(n)}\) is the n × n truncation of A padded with zeros. Let \(\mathbf{x}^{(p,n)}\) be such that

$$\displaystyle{(\mathbf{x}^{(p,n)})_{ i} = b_{i}\text{ whenever }i> n.}$$

Starting with \(\mathbf{x}^{(0,n)} = \mathbf{b}\), we consider the following truncated iterations:

$$\displaystyle\begin{array}{rcl} \mathbf{x}^{(p+1,n)}\ & =& \ A^{(n)}\mathbf{x}^{(p,n)} + \mathbf{b},\ \ \mathbf{x}^{(0,n)} = \mathbf{b}{}\end{array}$$
(3.23)

for \(p = 0,1,2,\ldots\) and \(n = 1,2,3,\ldots\). Then one has (Theorem 2, [112]): for certain constants \(\beta,\beta _{n},\gamma _{n}\), and \(\mu _{n}\),

$$\displaystyle{ \|\mathbf{x}^{(p)} -\mathbf{x}^{(p,n)}\| \leq \beta \gamma _{ n}\sum _{k=0}^{p-1}(k + 1)\sigma ^{k} +\beta _{ n}\mu _{n}\sum _{k=0}^{p-1}\sigma ^{k}, }$$
(3.24)

where the right-hand side converges to zero as \(n \rightarrow \infty\). It can also be shown that the following result holds:

Corollary 3.5.1 (Corollary 2, [112]).

$$\displaystyle{ \|\mathbf{x} -\mathbf{x}^{(p,n)}\| \leq \sigma ^{p+1}(1-\sigma )^{-1}\beta +\beta \gamma _{ n}(1-\sigma )^{-2} +\beta _{ n}\mu _{n}(1-\sigma )^{-1}, }$$
(3.25)

where the right-hand side converges to zero as \(n \rightarrow \infty\).
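For a concrete feel of the scheme, the following sketch (our construction; the entries are illustrative, not from [112]) works on a fixed finite section of size n: it forms the splitting \(T = D + F\), sets \(A = -D^{-1}F\) and \(\mathbf{b} = D^{-1}\mathbf{v}\), and runs the iteration (3.21), which converges geometrically at rate \(\sigma\).

```python
import numpy as np

n = 200                                      # size of the working section
idx = np.arange(1, n + 1, dtype=float)

# Hypothetical T: t_ii = i (unbounded diagonal, condition 4) and
# t_{i,i+1} = 0.25 i, so sigma_i = 0.25 (condition 2) while the
# lower-triangular sums in condition 3 vanish identically.
T = np.diag(idx) + np.diag(0.25 * idx[:-1], 1)
v = np.ones(n)                               # a bounded right-hand side

D = np.diag(np.diag(T))                      # diagonal part of T
F = T - D                                    # off-diagonal part of T
A = -np.linalg.solve(D, F)                   # A = -D^{-1} F, ||A||_inf = 0.25
b = np.linalg.solve(D, v)                    # b = D^{-1} v, a c_0 vector here

x = b.copy()                                 # x^(0) = b
for p in range(60):
    x = A @ x + b                            # x^(p+1) = A x^(p) + b, as in (3.21)

# Compare with a direct solve of T x = v on the same section.
print(np.max(np.abs(x - np.linalg.solve(T, v))))
```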

An application of the above to the recurrence relations of the Bessel functions is given in Sect. 8.6 of Chap. 8.