1 Introduction

The study of high-accuracy computations has been an active research topic of great interest in recent years. For the eigenvalue problem of a single matrix, high-accuracy algorithms have been constructed for a few classes of matrices, such as diagonally dominant matrices [1, 2], tridiagonal matrices [9], acyclic matrices [6], certain sign regular matrices [17, 19, 20], matrices with rank-revealing decompositions [11, 12] and structured matrices [5, 7, 8, 10, 22]. However, there are many situations in which one needs to find the eigenvalues of a product of two or more matrices. Indeed, the product eigenvalue problem arises in many applications and has been extensively studied; see [29, 30] and the references therein for an overview. Since accuracy is always lost when the product is formed explicitly, a variety of algorithms, such as LR-type algorithms [18, 28], Jacobi-type algorithms [15] and QR-type algorithms [4, 14, 21], have been developed that work implicitly with the factors of a product of two or more matrices. Our interest in this paper is the high-accuracy solution of the product eigenvalue problem for large numbers of structured matrices.

Structured matrices such as Vandermonde and Cauchy matrices frequently appear in various areas of modern computing [23, 26, 27]. In this paper, we are concerned with a wide class of structured matrices containing Vandermonde and Cauchy matrices.

Definition 1

[16] Let \(A\in {\mathbb R}^{n\times m}\). If

$$\begin{aligned} \left\{ \begin{array}{ll} \mathrm{rank}A[s+1:i+1|1:j]\le \mathrm{rank}A[s:i|1:j],&{}\quad \forall ~i\ge j,~i-j+1\le s\le i,\\ \mathrm{rank}A[1:i|s+1:j+1]\le \mathrm{rank}A[1:i|s:j],&{}\quad \forall ~j\ge i,~j-i+1\le s\le j, \end{array} \right. \end{aligned}$$

then A is called a consecutive-rank-descending (CRD) matrix. Here, denote by A[i : j|k : l] the submatrix of \(A\in {\mathbb R}^{n\times m}\) having row and column indexes in the ranges i through j and k through l, respectively.
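For illustration, the rank conditions of Definition 1 can be checked directly, if inefficiently, by computing the ranks of all the submatrices involved. The following Python sketch is our own illustration (the function name `is_crd` and the tolerance are assumptions, not from the text); the 1-based submatrix \(A[i:j|k:l]\) corresponds to the 0-based NumPy slice `A[i-1:j, k-1:l]`.

```python
import numpy as np

def is_crd(A, tol=1e-10):
    """Brute-force check of the consecutive-rank-descending property.

    The 1-based submatrix A[i:j | k:l] of the text corresponds to the
    0-based NumPy slice A[i-1:j, k-1:l].  For illustration only."""
    n, m = A.shape
    rank = lambda M: np.linalg.matrix_rank(M, tol=tol)
    # rank A[s+1:i+1 | 1:j] <= rank A[s:i | 1:j], for i >= j, i-j+1 <= s <= i
    for i in range(1, n):                      # i+1 <= n forces i <= n-1
        for j in range(1, min(i, m) + 1):
            for s in range(i - j + 1, i + 1):
                if rank(A[s:i+1, 0:j]) > rank(A[s-1:i, 0:j]):
                    return False
    # rank A[1:i | s+1:j+1] <= rank A[1:i | s:j], for j >= i, j-i+1 <= s <= j
    for j in range(1, m):                      # j+1 <= m forces j <= m-1
        for i in range(1, min(j, n) + 1):
            for s in range(j - i + 1, j + 1):
                if rank(A[0:i, s:j+1]) > rank(A[0:i, s-1:j]):
                    return False
    return True
```

A rectangular Vandermonde matrix with distinct positive nodes passes this check, in agreement with Corollary 1 below, while, e.g., the permutation matrix \(\left[{0\atop 1}{1\atop 0}\right]\) fails it.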

We will show in Section 2 that Vandermonde and Cauchy matrices are special CRD matrices. Recall that single Vandermonde and Cauchy matrices are badly ill-conditioned [24], so it is an interesting challenge to accurately find the eigenvalues of products of these badly ill-conditioned matrices. The algorithms currently used for the product eigenvalue problem are QR-type algorithms. However, it is known that QR-type algorithms destroy matrix structures. Thus, we turn to an alternative approach, namely LR-type algorithms. One advantage of LR-type algorithms over QR-type algorithms is that they can preserve matrix structures, but the disadvantage is their possible instability in floating point arithmetic. The works by Parlett and Fernando [13, 25] have brought further progress to LR-type algorithms: their variants, such as qd-type algorithms, have been developed to successfully achieve the desirable high relative accuracy for the eigenvalue problem of single matrices [3, 13, 25]. Therefore, the main contribution of this paper is to compute all the eigenvalues of products of rectangular CRD matrices to high relative accuracy by developing a periodic qd-type reduction.

The rest of the paper is organized as follows. In Sect. 2, we provide the unique representation for the class of CRD matrices, and we then verify that the well-known Vandermonde and Cauchy matrices are special CRD matrices. In Sect. 3, we provide a qd-type algorithm for updating the representation of the product of two specific CRD matrices under the assumption that no breakdown occurs. In Sect. 4, the periodic qd-type method is developed to efficiently reduce a product of rectangular CRD matrices into tridiagonal form by implicitly working with its factors. In Sect. 5, we identify the subset of these products for which no subtraction of like-signed numbers occurs throughout the reduction process; consequently, all the eigenvalues of such a product are computed to high relative accuracy at a favorable computational cost. Error analysis is provided to establish the high relative accuracy. In Sect. 6, numerical experiments are presented to confirm the high relative accuracy.

2 The Unique Representation

In our work [16], the unique representation for (p, q)-diagonal consecutive-rank-descending matrices has been derived, and CRD matrices are just the special (0, 0)-diagonal consecutive-rank-descending matrices. Therefore, the unique representation for CRD matrices is directly obtained as follows. Here, \(I_{n}\in {\mathbb R}^{n \times n}\) is the identity matrix.

Theorem 1

[16] \(A\in \mathbb {R}^{n\times m}\) (\(n\ge m\)) is CRD if and only if \(A\in \mathbb {R}^{n\times m}\) is uniquely represented as

$$\begin{aligned} A=B_{1}B_{2}\ldots B_{n-1}DC_{m-1}\ldots C_{2}C_{1}, \end{aligned}$$
(1)

where \(D=\mathrm{diag}(p_{ii})\in \mathbb {R}^{n\times m}\), and \(B_{i}\in \mathbb {R}^{n\times n}\) and \(C_{i}\in \mathbb {R}^{m\times m}\) \((1\le i\le m-1)\) are given by

$$\begin{aligned} B_{i}=\begin{bmatrix} {\begin{matrix} 1&{} &{} &{} &{} &{} &{}\\ 0&{} 1&{} &{} &{} &{} &{}\\ &{} \ddots &{} \ddots &{} &{} &{} &{}\\ &{} &{}0 &{} 1&{} &{} &{}\\ &{} &{} &{}p_{n-i+1, 1} &{}1 &{} &{}\\ &{} &{} &{} &{} \ddots &{} \ddots &{} \\ &{} &{} &{} &{} &{} p_{ni}&{}1 \end{matrix}} \end{bmatrix},\quad C_{i}=\begin{bmatrix} {\begin{matrix} 1&{}0 &{} &{} &{} &{} &{}\\ &{} \ddots &{} \ddots &{} &{} &{} &{}\\ &{} &{}1 &{} 0&{} &{} &{}\\ &{} &{} &{}1 &{}p_{1, m-i+1} &{} &{}\\ &{} &{} &{} &{} \ddots &{} \ddots &{} \\ &{} &{} &{} &{} &{} 1&{}p_{im}\\ &{} &{} &{} &{} &{} &{}1 \end{matrix}} \end{bmatrix}, \end{aligned}$$

and \(B_{i}\in \mathbb {R}^{n\times n}\) (\(m\le i\le n-1\)) is bidiagonal of the form

$$\begin{aligned} B_{i}=\left[ \begin{array}{cc}B_{i}^{\prime }&{}\\ &{}I_{i-m}\end{array}\right] , \quad B_{i}^{\prime }=\begin{bmatrix} {\begin{matrix} 1&{} &{} &{} &{} &{} &{}\\ 0&{} 1&{} &{} &{} &{} &{}\\ &{} \ddots &{} \ddots &{} &{} &{} &{}\\ &{} &{}0 &{} 1&{} &{} &{}\\ &{} &{} &{}p_{n-i+1, 1} &{}1 &{} &{}\\ &{} &{} &{} &{} \ddots &{} \ddots &{} \\ &{} &{} &{} &{} &{} p_{n-i+m,m}&{}1 \end{matrix}} \end{bmatrix}, \end{aligned}$$

and all the nontrivial entries \(p_{ij}\) (\(1\le i\le n\) and \(1\le j\le m\)) satisfy

$$\begin{aligned} {\left\{ \begin{array}{ll} p_{ij}=0,~i\ge j\Rightarrow ~p_{sj}=0 \quad \forall ~s\ge i;\\ p_{ij}=0,~i\le j \Rightarrow ~p_{is}=0 \quad \forall ~s\ge j. \end{array}\right. } \end{aligned}$$
(2)

The case \(n<m\) for a CRD matrix \(A\in {\mathbb R}^{n\times m}\) is handled simply by transposing the matrix A. Therefore, any CRD matrix \(A\in {\mathbb R}^{n\times m}\) is represented by the nm independent parameters \(p_{ij}\) (\(1\le i\le n\) and \(1\le j\le m\)) satisfying the fact (2), and thus we store these parameters as the parameter matrix \(P=(p_{ij})\in \mathbb {R}^{n\times m}\). Notice that the parameter matrix of \(A^{T}\) is just \(P^{T}\). It must be pointed out that

$$\begin{aligned} p_{ij}=\left\{ \begin{array}{ll} \frac{\mathrm{det}A[1,\ldots ,i]}{\mathrm{det}A[1,\ldots ,i-1]}\ne 0,&{}\quad ~i=j;\\ \frac{\mathrm{det}A[i-j+1,\ldots ,i|1,\ldots ,j]}{\mathrm{det}A[i-j+1,\ldots ,i-1|1,\ldots ,j-1]}\cdot \frac{\mathrm{det}A[i-j,\ldots ,i-2|1,\ldots ,j-1]}{\mathrm{det}A[i-j,\ldots ,i-1|1,\ldots ,j]}\ne 0,&{}\quad ~i> j;\\ \frac{\mathrm{det}A[1,\ldots ,i|j-i+1,\ldots ,j]}{\mathrm{det}A[1,\ldots ,i-1|j-i+1,\ldots ,j-1]}\cdot \frac{\mathrm{det}A[1,\ldots ,i-1|j-i,\ldots ,j-2]}{\mathrm{det}A[1,\ldots ,i|j-i,\ldots ,j-1]}\ne 0,&{} \quad ~j>i; \end{array} \right. \end{aligned}$$
(3)

provided that the involved minors are nonzero. Thus, one immediately verifies by the formula (3) that Vandermonde and Cauchy matrices are special CRD matrices. Here and in what follows, we use the convention \(\prod ^{j}_{k=i} \cdot =1\) if \(j<i\).

Corollary 1

[19] A Vandermonde matrix \(A=[x_{i}^{j-1}]_{i,j=1}^{n,m}\) with nonzero distinct nodes \(x_{i}\) (\(1\le i\le n\)) is CRD of full rank whose parameter matrix \(P=(p_{ij})\in {\mathbb R}^{n\times m}\) is the following:

$$\begin{aligned} p_{ij}=\left\{ \begin{array}{ll} \prod ^{i-1}_{k=1}(x_{i}-x_{k}),&{}\quad \mathrm{if}~i=j;\\ \prod ^{i-1}_{k=i-j+1}\frac{x_{i}-x_{k}}{x_{i-1}-x_{k-1}},&{}\quad \mathrm{if}~i>j;\\ x_{i},&{}\quad \mathrm{if }~i<j.\end{array} \right. \end{aligned}$$
(4)
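For concreteness, the parameters (4) can be evaluated directly from the nodes; the following sketch is our own illustration (the helper name `vandermonde_params` is an assumption, not from [19]). Every subtraction involves only initial nodes, and the empty products follow the convention \(\prod ^{j}_{k=i} \cdot =1\) for \(j<i\).

```python
import numpy as np

def vandermonde_params(x, m):
    """Parameter matrix P = (p_ij) of A = [x_i^(j-1)] via the formula (4).

    Only differences of initial nodes are subtracted, so each p_ij is
    obtained to high relative accuracy.  np.prod([]) evaluates to 1."""
    n = len(x)
    P = np.empty((n, m))
    for i in range(1, n + 1):            # 1-based indices, as in the text
        for j in range(1, m + 1):
            if i == j:
                P[i-1, j-1] = np.prod([x[i-1] - x[k-1] for k in range(1, i)])
            elif i > j:
                P[i-1, j-1] = np.prod([(x[i-1] - x[k-1]) / (x[i-2] - x[k-2])
                                       for k in range(i - j + 1, i)])
            else:                        # i < j
                P[i-1, j-1] = x[i-1]
    return P
```

As a sanity check, a diagonal parameter \(p_{ii}\) computed this way agrees with the ratio of leading minors in the formula (3).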

Corollary 2

[19] A Cauchy matrix \(A=\left[ \frac{1}{x_{i}+y_{j}}\right] _{i,j=1}^{n,m}\) with distinct nodes \(x_{i}\) and distinct nodes \(y_{j}\) such that \(x_{i}+y_{j}\ne 0\) is CRD of full rank whose parameter matrix \(P=(p_{ij})\in {\mathbb R}^{n\times m}\) is the following:

$$\begin{aligned} p_{ij}=\left\{ \begin{array}{ll} \frac{1}{x_{i}+y_{i}}\cdot \prod ^{i-1}_{k=1}\frac{(x_{i}-x_{k}) (y_{i}-y_{k})}{(x_{i}+y_{k})(y_{i}+x_{k})},&{}\quad \mathrm{if}~i=j,\\ \frac{x_{i-j}+y_{j}}{x_{i}+y_{j}}\prod _{k=i-j+1}^{i-1} \frac{x_{i}-x_{k}}{x_{i-1}-x_{k-1}}\prod _{k=1}^{j-1} \frac{x_{i-1}+y_{k}}{x_{i}+y_{k}},&{}\quad \mathrm{if}~ i>j,\\ \frac{y_{j-i}+x_{i}}{y_{j}+x_{i}}\prod _{k=j-i+1}^{j-1} \frac{y_{j}-y_{k}}{y_{j-1}-y_{k-1}}\prod _{k=1}^{i-1} \frac{y_{j-1}+x_{k}}{y_{j}+x_{k}},&{}\quad \mathrm{if}~ i<j. \end{array} \right. \end{aligned}$$
(5)
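The Cauchy parameters (5) can be evaluated in the same way; the sketch below (our own illustration, with the assumed helper name `cauchy_params`) again subtracts only initial nodes.

```python
import numpy as np

def cauchy_params(x, y):
    """Parameter matrix P = (p_ij) of A = [1/(x_i + y_j)] via the formula (5).

    Only initial data enter each subtraction, so every parameter is
    computed to high relative accuracy.  1-based indices as in the text."""
    n, m = len(x), len(y)
    P = np.empty((n, m))
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if i == j:
                P[i-1, j-1] = (1.0 / (x[i-1] + y[i-1])
                               * np.prod([(x[i-1] - x[k-1]) * (y[i-1] - y[k-1])
                                          / ((x[i-1] + y[k-1]) * (y[i-1] + x[k-1]))
                                          for k in range(1, i)]))
            elif i > j:
                P[i-1, j-1] = ((x[i-j-1] + y[j-1]) / (x[i-1] + y[j-1])
                               * np.prod([(x[i-1] - x[k-1]) / (x[i-2] - x[k-2])
                                          for k in range(i - j + 1, i)])
                               * np.prod([(x[i-2] + y[k-1]) / (x[i-1] + y[k-1])
                                          for k in range(1, j)]))
            else:                        # i < j
                P[i-1, j-1] = ((y[j-i-1] + x[i-1]) / (y[j-1] + x[i-1])
                               * np.prod([(y[j-1] - y[k-1]) / (y[j-2] - y[k-2])
                                          for k in range(j - i + 1, j)])
                               * np.prod([(y[j-2] + x[k-1]) / (y[j-1] + x[k-1])
                                          for k in range(1, i)]))
    return P
```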

To perform our error analysis later, the standard model of floating point arithmetic is adopted:

$$\begin{aligned} \mathrm{fl}(x \circ y)=(x\circ y)(1+\delta )^{\pm 1},~|\delta |\le \mathtt{\mu },~\circ \in \{+,-,*,/\}, \end{aligned}$$

where \(\mathtt{\mu }\) is the unit roundoff. Thus, if x and y are initial (thus exact) data, then \(x- y\) and \(x+y\), as well as their products and quotients, are computable to high relative accuracy [5, 7, 19]. This means that all the parameters of (4) and (5) are accurately computed for these given initial nodes, which is very favorable for our high-accuracy computations later.

3 A qd-Type Updating Method

In this section, we provide a qd-type algorithm for updating the representation of the product of two specific CRD matrices under the assumption that there is no breakdown, i.e., there is no division by zero.

A nonsingular bidiagonal matrix and its inverse are CRD matrices. We first consider how to update the representation of the product of such matrices. For a vector \(\alpha \), denote \(\mathrm{sign}(\alpha )=1~\mathrm{or}~-1\) if all its nonzero elements are positive or negative, respectively, and \(\mathrm{sign}(\alpha )=0\) if \(\alpha \) is a zero vector.

Lemma 1

Let \(L,U_{r}\in {\mathbb R}^{n \times n}\) be nonsingular of the following forms:

$$\begin{aligned} L=\begin{bmatrix} {\begin{matrix} 1&{} &{} &{} &{}&{} &{}\\ l_{2}&{} 1&{} &{} &{}&{}&{}\\ &{} \ddots &{} \ddots &{} &{}&{}&{}\\ &{} &{}l_{r-1} &{} 1 &{}&{}&{}\\ &{} &{} &{}l_{r} &{} 1&{} &{}\\ &{} &{} &{} &{}\ddots &{} \ddots &{} \\ &{} &{} &{} &{} &{}l_{n} &{} 1 \\ \end{matrix}} \end{bmatrix},\quad U_{r}=\begin{bmatrix} {\begin{matrix} 1&{}0 &{} &{} &{} &{} &{}\\ &{} \ddots &{} \ddots &{} &{} &{} &{}\\ &{} &{}1 &{} 0&{} &{} &{}\\ &{} &{} &{}d_{r-1} &{}u_{r} &{} &{}\\ &{} &{} &{} &{} \ddots &{} \ddots &{} \\ &{} &{} &{} &{} &{} d_{n-1}&{}u_{n}\\ &{} &{} &{} &{} &{} &{}d_{n} \end{matrix}} \end{bmatrix},\quad 2\le r\le n. \end{aligned}$$
(6)

Then in the absence of breakdown,

$$\begin{aligned} U_{r}^{-1}L=\bar{L}\bar{U}_{r}^{-1}=:\begin{bmatrix} {\begin{matrix} 1&{} &{} &{} &{}&{} &{}\\ l_{2}&{} 1&{} &{} &{}&{}&{}\\ &{} \ddots &{} \ddots &{} &{}&{}&{}\\ &{} &{}l_{r-2} &{} 1 &{}&{}&{}\\ &{} &{} &{}\bar{l}_{r-1} &{} 1&{} &{}\\ &{} &{} &{} &{}\ddots &{} \ddots &{} \\ &{} &{} &{} &{} &{}\bar{l}_{n} &{} 1 \\ \end{matrix}}\end{bmatrix}\begin{bmatrix} {\begin{matrix} 1&{}0 &{} &{} &{} &{} &{}\\ &{} \ddots &{} \ddots &{} &{} &{} &{}\\ &{} &{}1 &{} 0&{} &{} &{}\\ &{} &{} &{}\bar{d}_{r-1} &{}u_{r} &{} &{}\\ &{} &{} &{} &{} \ddots &{} \ddots &{} \\ &{} &{} &{} &{} &{} \bar{d}_{n-1}&{}u_{n}\\ &{} &{} &{} &{} &{} &{}\bar{d}_{n} \end{matrix}} \end{bmatrix}^{-1}, \end{aligned}$$
(7)

where with the convention \(u_{r-1}=0\),

$$\begin{aligned} \left\{ \begin{array}{ll} \bar{l}_{r-1}=\frac{l_{r-1}}{d_{r-1}},&{}\quad \mathrm{if}~r>2,\\ z_{j}=d_{j}-l_{j}u_{j},&{}\quad \forall ~r-1\le j\le n,\\ t_{j}=\frac{z_{j}}{z_{j+1}},~\bar{d}_{j}=t_{j}d_{j+1},~\bar{l}_{j+1}=t_{j}l_{j+1},&{}\quad \forall ~r-1\le j\le n-1,\\ \bar{d}_{n}=z_{n}. \end{array} \right. \end{aligned}$$
(8)

Moreover, the transformation (7) is subtraction-free if and only if \(\mathrm{sign}(d_{j}l_{j}u_{j})=-1~\mathrm{or}~0\) for all \(r\le j\le n\). Consequently, in this case no breakdown occurs for the transformation (7).

Proof

Comparing the entries in both sides of \(L\bar{U}_{r} =U_{r} \bar{L}\), we have with the convention \(u_{r-1}=0\) that

$$\begin{aligned} \left\{ \begin{array}{ll} l_{r-1}=\bar{l}_{r-1}d_{r-1},\quad \mathrm{if}~r>2,&{}\\ \bar{d}_{j}+l_{j}u_{j}=d_{j}+\bar{l}_{j+1}u_{j+1}, &{} \quad \forall ~r-1\le j\le n-1,\\ \bar{d}_{j}l_{j+1}=d_{j+1}\bar{l}_{j+1}, &{}\\ \bar{d}_{n}+l_{n}u_{n}=d_{n}, &{}\\ \end{array} \right. \end{aligned}$$

from which the formula (8) is derived with \(z_{j+1}\ne 0\) for all \(r-1\le j\le n-1\) because of the absence of breakdown. Thus, \(\bar{d}_{j}\ne 0\) for all j. So, \(U^{-1}_{r}L=\bar{L}\bar{U}^{-1}_{r}\). Moreover, for all \(r\le j\le n\), \(z_{j}=d_{j}-l_{j}u_{j}\) is subtraction-free if and only if \(\mathrm{sign}(d_{j}l_{j}u_{j})=-1\) or 0. Consequently, in this case no breakdown occurs because \(|z_{j}|\ge |d_{j}|> 0\) for all \(r-1\le j\le n\). \(\square \)
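The recurrence (8) can be sketched as follows; this is our own illustration (the function name and the dict-based 1-based indexing are assumptions), valid under the no-breakdown assumption of Lemma 1.

```python
import numpy as np

def qd_update_lower(l, d, u, r, n):
    """Formula (8): from U_r^{-1} L compute (l_bar, d_bar) such that
    U_r^{-1} L = L_bar * U_bar_r^{-1}, cf. (7).

    Inputs are dicts keyed by the 1-based indices of the text:
    l[2..n], d[r-1..n], u[r..n]; missing keys are treated as 0
    (in particular the convention u_{r-1} = 0)."""
    lb, db = dict(l), {}
    if r > 2:
        lb[r-1] = l[r-1] / d[r-1]
    z = {j: d[j] - l.get(j, 0.0) * u.get(j, 0.0) for j in range(r - 1, n + 1)}
    for j in range(r - 1, n):
        t = z[j] / z[j+1]                 # no breakdown: z_{j+1} != 0
        db[j] = t * d[j+1]
        lb[j+1] = t * l[j+1]
    db[n] = z[n]
    return lb, db
```

With \(\mathrm{sign}(d_{j}l_{j}u_{j})=-1\) (e.g. all \(d_{j}>0\), \(l_{j}>0\), \(u_{j}<0\)), every \(z_{j}\) is a sum of like-signed terms, so the sketch runs subtraction-free, matching the last claim of Lemma 1.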

The following result is obtained by a straightforward computation.

Lemma 2

Let \(U_{r}\in {\mathbb R}^{n \times n}\) (\(2\le r\le n\)) be as in (6), and let \(D=\mathrm{diag}(p_{ii})\in {\mathbb R}^{n\times m}\) be diagonal of full rank. Then

$$\begin{aligned} U_{r}^{-1}D=\bar{D}\bar{U}_{r}^{-1}, \end{aligned}$$
(9)

where \(\bar{D}=\mathrm{diag}(\bar{p}_{jj})\in {\mathbb R}^{n\times m}\) with \(\bar{p}_{jj}=\frac{p_{jj}}{d_{j}}\) for \(r-1\le j\le \min \{n,m\}\) and \(\bar{p}_{jj}=p_{jj}\) otherwise; and \(\bar{U}_{r}\in {\mathbb R}^{m\times m}\) is bidiagonal with unit diagonal whose \((j-1,j)\)th entries \(\bar{u}_{j}=\frac{u_{j}\bar{p}_{jj}}{p_{j-1,j-1}}\) for \( r\le j\le \min \{n,m\}\) and \(\bar{u}_{j}=0\) otherwise. In particular, for \(r-1\le j\le \min \{n,m\}\), if \(d_{j}>0\), then \(\mathrm{sign}(\bar{p}_{jj})=\mathrm{sign}(p_{jj})\) and \( \mathrm{sign}(\bar{u}_{j})=\mathrm{sign}(u_{j}p_{j-1,j-1}p_{jj})\).

Lemma 3

Let \(U_{r},C\in {\mathbb R}^{m\times m}\) be of the following forms:

$$\begin{aligned} U_{r}=\begin{bmatrix} {\begin{matrix} 1&{}0 &{} &{} &{} &{} &{}\\ &{} \ddots &{} \ddots &{} &{} &{} &{}\\ &{} &{}1 &{} 0&{} &{} &{}\\ &{} &{} &{}1 &{}u_{r} &{} &{}\\ &{} &{} &{} &{} \ddots &{} \ddots &{} \\ &{} &{} &{} &{} &{} 1&{}u_{m}\\ &{} &{} &{} &{} &{} &{}1 \end{matrix}} \end{bmatrix},~C=\begin{bmatrix} {\begin{matrix} 1&{}c_{2} &{} &{} &{} &{} &{}\\ &{} \ddots &{} \ddots &{} &{} &{} &{}\\ &{} &{}1 &{} c_{r-1}&{} &{} &{}\\ &{} &{} &{}1 &{}c_{r} &{} &{}\\ &{} &{} &{} &{} \ddots &{} \ddots &{} \\ &{} &{} &{} &{} &{} 1&{}c_{m}\\ &{} &{} &{} &{} &{} &{}1 \end{matrix}} \end{bmatrix},~2\le r\le m. \end{aligned}$$

Then in the absence of breakdown,

$$\begin{aligned} U_{r}^{-1}C=\bar{C}\bar{U}_{r+1}^{-1}=:\begin{bmatrix} {\begin{matrix} 1&{}c_{2} &{} &{} &{} &{} &{}\\ &{} \ddots &{} \ddots &{} &{} &{} &{}\\ &{} &{}1 &{} c_{r-1}&{} &{} &{}\\ &{} &{} &{}1 &{}\bar{c}_{r} &{} &{}\\ &{} &{} &{} &{} \ddots &{} \ddots &{} \\ &{} &{} &{} &{} &{} 1&{}\bar{c}_{m}\\ &{} &{} &{} &{} &{} &{}1 \end{matrix}} \end{bmatrix}\begin{bmatrix} {\begin{matrix} 1&{}0 &{} &{} &{} &{} &{}\\ &{} \ddots &{} \ddots &{} &{} &{} &{}\\ &{} &{}1 &{} 0&{} &{} &{}\\ &{} &{} &{}1 &{}\bar{u}_{r+1} &{} &{}\\ &{} &{} &{} &{} \ddots &{} \ddots &{} \\ &{} &{} &{} &{} &{} 1&{}\bar{u}_{m}\\ &{} &{} &{} &{} &{} &{}1 \end{matrix}} \end{bmatrix}^{-1}, \end{aligned}$$
(10)

where with the convention \(u_{r-1}=0\),

$$\begin{aligned} {\left\{ \begin{array}{ll} z_{j}=c_{j}-u_{j},\\ \bar{u}_{j}=0,~\bar{c}_{j}=z_{j},\quad \mathrm{if}~u_{j-1}=0,\\ t_{j}=\frac{z_{j}}{z_{j-1}},~\bar{u}_{j}=t_{j}u_{j-1}, ~\bar{c}_{j}=t_{j}c_{j-1}, \quad \mathrm{if}~u_{j-1}\ne 0, \end{array}\right. }\quad \forall ~r\le j\le m. \end{aligned}$$
(11)

Moreover, the transformation (10) is subtraction-free if and only if \(\mathrm{sign}(c_{j}u_{j})=-1~\mathrm{or}~0\) for all \(r\le j\le m\). Consequently, in this case no breakdown occurs for the transformation (10).

Proof

Comparing the entries in both sides of \(C\bar{U}_{r+1}=U_{r}\bar{C}\), we have that

$$\begin{aligned} {\left\{ \begin{array}{ll} c_{r}=u_{r}+\bar{c}_{r},\\ c_{j-1}\bar{u}_{j}=u_{j-1}\bar{c}_{j}, ~\bar{u}_{j}+c_{j}=u_{j}+\bar{c}_{j}, \end{array}\right. }\quad \forall ~r+1\le j\le m, \end{aligned}$$

from which the formula (11) is derived under the assumption that there is no breakdown. Moreover, for all \(r\le j\le m\), \(z_{j}=c_{j}-u_{j}\) is subtraction-free if and only if \(\mathrm{sign}(c_{j}u_{j})=-1\) or 0. Consequently, in this case no breakdown occurs because \(|z_{j-1}|\ge |u_{j-1}|> 0\) for the case \(u_{j-1}\ne 0\) (\(r\le j\le m\)). \(\square \)
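The recurrence (11) admits the same kind of sketch as (8); again, the function name and indexing conventions below are our own assumptions, and the no-breakdown assumption of Lemma 3 is presumed.

```python
import numpy as np

def qd_update_upper(u, c, r, m):
    """Formula (11): from U_r^{-1} C compute (c_bar, u_bar) such that
    U_r^{-1} C = C_bar * U_bar_{r+1}^{-1}, cf. (10).

    Dicts are keyed by the 1-based indices of the text: u[r..m],
    c[2..m]; u_{r-1} = 0 by convention (missing key)."""
    cb, ub = dict(c), {}
    z = {}
    for j in range(r, m + 1):
        z[j] = c[j] - u[j]
        if u.get(j - 1, 0.0) == 0.0:      # includes the convention u_{r-1} = 0
            ub[j] = 0.0
            cb[j] = z[j]
        else:
            t = z[j] / z[j-1]             # no breakdown: z_{j-1} != 0
            ub[j] = t * u[j-1]
            cb[j] = t * c[j-1]
    return cb, ub
```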

Our main result of this section is the following:

Theorem 2

Let \(A\in {\mathbb R}^{n\times m}\) be CRD of full rank, and let \(U_{r} \in {\mathbb R}^{n \times n}\) (\(2\le r\le n\)) be as in (6). Then in the absence of breakdown, \(A'=U^{-1}_{r}A\) is CRD of full rank.

Proof

Let \(A=B_{1}B_{2}\ldots B_{n-1}DC_{m-1}\ldots C_{2}C_{1}\) be as in (1), and let \(P=(p_{ij})\in {\mathbb R}^{n\times m}\) be its parameter matrix satisfying the fact (2). In the absence of breakdown, we recursively get by using the transformation (7) that

$$\begin{aligned} A'= & {} \underline{U_{r}^{-1}B_{1}}\ldots B_{n-1}DC_{m-1}\ldots C_{1}=B'_{1}\underline{U_{r}'^{-1}B_{2}}\ldots B_{n-1}DC_{m-1}\ldots C_{1}\nonumber \\= & {} \cdots =B'_{1}\ldots B'_{n-1}\bar{U}_{r}^{-1}DC_{m-1}\ldots C_{1}, \end{aligned}$$
(12)

where each \(B'_{k}\) (\(1\le k\le n-1\)) has the same form as \(B_{k}\), and its corresponding entries satisfy \(p'_{i,i-n+k}=0\) if and only if \(p_{i,i-n+k}=0\) for all \(n-k<i\le n\). So,

$$\begin{aligned} p'_{ij}=0,~i>j\Rightarrow ~p'_{sj}=0,~~\forall ~s\ge i. \end{aligned}$$
(13)

Further, by the transformation (9),

$$\begin{aligned} A'=B'_{1}\ldots B'_{n-1}\underline{\bar{U}_{r}^{-1}D}C_{m-1}\ldots C_{1}=B'_{1}\ldots B'_{n-1}\underline{D'\tilde{U}_{r}^{-1}}C_{m-1}\ldots C_{1}, \end{aligned}$$
(14)

where \(\tilde{U}_{r}\in {\mathbb R}^{m\times m}\) is upper bidiagonal with unit diagonal, \(D'=\mathrm{diag}( p'_{kk})\in {\mathbb R}^{n\times m}\) is diagonal with

$$\begin{aligned} p'_{kk}\ne 0,\quad \forall ~1\le k\le \min \{n,m\}. \end{aligned}$$
(15)

Finally, in the absence of breakdown, we recursively get by using the transformation (10) that

$$\begin{aligned} A'= & {} B'_{1}\ldots B'_{n-1}D'\underline{\tilde{U}_{r}^{-1}C_{m-1}}\ldots C_{1}=B'_{1}\ldots B'_{n-1}D'C'_{m-1}\underline{\tilde{U}_{r+1}^{-1}C_{m-2}}\ldots C_{r-1}C_{r-2} \ldots C_{1}\nonumber \\= & {} \ldots = B'_{1}\ldots B'_{n-1}D'C'_{m-1}\ldots C'_{r-1}C_{r-2} \ldots C_{1} \end{aligned}$$
(16)

where for all \(1\le s\le m-r+1\), \(\tilde{U}^{-1}_{r+s-1}C_{m-s}=C'_{m-s}\tilde{U}^{-1}_{r+s}\) (here, \(\tilde{U}^{-1}_{m+1}=I_{m}\)), and \(C'_{m-s}\) is bidiagonal, differing from \(C_{m-s}\) only in the entries \(p'_{ij}\) for \(r-1\le i\le \min \{m-1,n\}\) and \(j=i+s \), which are computed from the entries \(p_{ij}\) of \( C_{m-s}\) by the formula (11) as follows:

$$\begin{aligned} {\left\{ \begin{array}{ll} z_{j}=p_{ij}-u_{j},\\ \bar{u}_{j}=0,~p'_{ij}=z_{j},\quad \mathrm{if}~u_{j-1}= 0,\\ t_{j}=\frac{z_{j}}{z_{j-1}},~p'_{ij}=t_{j}p_{i-1,j-1}, ~\bar{u}_{j}=t_{j}u_{j-1},\quad \mathrm{if}~u_{j-1}\ne 0, \end{array}\right. } \end{aligned}$$
(17)

here, \(u_{j}\) and \(\bar{u}_{j}\) denote the \((j-1,j)\)th entries of \(\tilde{U}_{r+s-1}\) and \(\tilde{U}_{r+s}\), respectively. To show the fact (2), let \(p'_{ij}=0\) in \( C'_{m-s}\) for some \(r-1\le i\le \min \{m-1,n\}\) and \(j=i+s \); then we need to prove that \(p'_{i,j+1}=0\) whenever \(j+1\le m\). Notice that \(1\le s\le m-r+1\). If \(s=m-r+1\), then \(j+1>m\). So, assume that \(1\le s< m-r+1\) and \(j<m\). According to (17), the following cases have to be considered.

  • For the case \(u_{j-1}= 0\), we have that \(\bar{u}_{j}=0\) and \(0=p'_{ij}=z_{j}=p_{ij}-u_{j}\). If \(u_{j}\ne 0\), then by the formula (17), \(t_{j+1}=\frac{z_{j+1}}{z_{j}}=\frac{z_{j+1}}{p'_{ij}}\) would be a breakdown. So, we must have that \(u_{j}= 0\), and then, \(p_{ij}=0\). Thus,

    $$\begin{aligned} p_{ij}=0~\Rightarrow ~p_{i,j+1}=0~\mathrm{by ~the ~fact}~(2);\quad u_{j}=0~\Rightarrow ~\bar{u}_{j+1}= 0~\mathrm{by ~the ~formula ~ }~(17). \end{aligned}$$

    Hence, for the next transformation \(\tilde{U}^{-1}_{r+s}C_{m-s-1}=C'_{m-s-1}\tilde{U}^{-1}_{r+s+1}\), since \(\bar{u}_{j}=0\), we have by the formula (11) that \(p'_{i,j+1}=p_{i,j+1}-\bar{u}_{j+1}=0\).

  • For the case \(u_{j-1}\ne 0\), we have \(0=p'_{ij}=t_{j}p_{i-1,j-1}\), which means that \(t_{j}=0\) or \(p_{i-1,j-1}=0\).

    • If \(t_{j}=0\), then \(z_{j}=p_{ij}-u_{j}=0\). Assume that \(u_{j}\ne 0\). Then by the formula (17), \(t_{j+1}=\frac{z_{j+1}}{z_{j}}\) would be a breakdown. So, we must have \(u_{j}=0\), and then, \(p_{ij}=0\). Thus,

      $$\begin{aligned} p_{ij}=0~\Rightarrow ~p_{i,j+1}=0;\quad u_{j}=0~\Rightarrow ~\bar{u}_{j+1}=0. \end{aligned}$$

      Hence, for the next transformation \(\tilde{U}^{-1}_{r+s}C_{m-s-1}=C'_{m-s-1}\tilde{U}^{-1}_{r+s+1}\), since \(\bar{u}_{j}=t_{j}u_{j-1}=0\), we have by the formula (11) that \( p'_{i,j+1}=p_{i,j+1}-\bar{u}_{j+1}=0\).

    • If \(p_{i-1,j-1}=0\) and \(t_{j}\ne 0\), then \(p_{i-1,j}=0\) by the fact (2). Hence, for the next transformation \(\tilde{U}^{-1}_{r+s}C_{m-s-1}=C'_{m-s-1}\tilde{U}^{-1}_{r+s+1}\), since \(\bar{u}_{j}=t_{j}u_{j-1}\ne 0\), we have by the formula (11) that

      $$\begin{aligned} p'_{i,j+1}=\bar{t}_{j+1}p_{i-1,j}=\frac{p_{i,j+1}-\bar{u}_{j+1}}{p_{i-1,j}-\bar{u}_{j}}p_{i-1,j}=0. \end{aligned}$$

Hence, we conclude that

$$\begin{aligned} p'_{ij}=0,~j>i\Rightarrow ~p'_{is}=0,~~\forall ~s\ge j. \end{aligned}$$
(18)

Thus, we get that \(A'\) is of the form (1) satisfying the fact (2) because of the facts (13), (15) and (18). Therefore, by Theorem 1, \(A'\) is CRD of full rank. \(\square \)

By applying a transpose transformation, we immediately get the following result.

Corollary 3

Let \(A\in {\mathbb R}^{n\times m}\) be CRD of full rank, and let \(L_{r} \in {\mathbb R}^{m\times m}\) (\(2\le r\le m\)) be nonsingular whose transpose is as in (6). Then in the absence of breakdown, \(A'=AL^{-1}_{r}\) is CRD of full rank.

Therefore, we have Algorithm 1 for updating the representation of the product of two specific CRD matrices.

[Algorithm 1: updating the representation of the product of two specific CRD matrices]

Observe that each step of the formulas (8), (9) and (11) costs at most 6, 3 and 4 arithmetic operations, respectively. So, the cost of Algorithm 1 is at most

$$\begin{aligned}&6(n-r+2)(n-1)+3\cdot (\min \{m,n\}-r+2)+4\cdot (\min \{m,n\}-r+2)(m-r+1)\\&\quad \le \sum \limits _{j=r-1}^{n}12j+(\min \{m,n\}-r+2)(4m-4r+7) \le \sum \limits _{j=r-1}^{n}(4m+12j). \end{aligned}$$

4 The Periodic qd-Type Reduction Method

In this section, we provide a periodic qd-type reduction for a product of rectangular CRD matrices of full rank

$$\begin{aligned} A=A_{1}A_{2}\ldots A_{K},\quad \mathrm{where}~A_{i}\in \mathbb {R}^{n_{i}\times n_{i+1}},~1\le i\le K,~ n_{1}=n_{K+1}, \end{aligned}$$
(19)

under the assumption that no breakdown occurs. First, we illustrate the reduction for \(K=3\) where each factor is a \(4\times 4\) CRD matrix of full rank, i.e.,

$$\begin{aligned} A=A_{1}A_{2}A_{3}=\left[ \begin{array}{cccc} x&{}x&{}x&{}x\\ x&{}x&{}x&{}x\\ x&{}x&{}x&{}x\\ x&{}x&{}x&{}x\end{array}\right] \left[ \begin{array}{cccc} y&{}y&{}y&{}y \\ y&{}y&{}y&{}y \\ y&{}y&{}y&{}y \\ y&{}y&{}y&{}y\end{array}\right] \left[ \begin{array}{cccc} z&{}z&{}z&{}z \\ z&{}z&{}z&{}z \\ z&{}z&{}z&{}z \\ z&{}z&{}z&{}z \end{array}\right] . \end{aligned}$$

Denote by \(x_{ij}\), \(y_{ij}\) and \(z_{ij}\) the parameters of \(A_{1}\), \(A_{2}\) and \(A_{3}\), respectively. For the first stage, we perform the following operations.

  • First, according to the form (1) of \(A_{1}\), i.e., \(A_{1}=B_{1}B_{2}B_{3}DC_{3}C_{2}C_{1}\), it can be rewritten as

    $$\begin{aligned} A_{1}= & {} \left( B_{1}B_{2}B_{3}D\begin{bmatrix} {\begin{matrix} 1&{}x_{12}&{}&{}\\ &{}1&{}x_{23}&{}\\ &{}&{}1&{}x_{34}\\ &{}&{}&{}1\end{matrix}} \end{bmatrix}\begin{bmatrix} {\begin{matrix} 1&{}&{}&{}\\ &{}1&{}x_{13}&{}\\ &{}&{}1&{}x_{24}\\ &{}&{}&{}1\end{matrix}} \end{bmatrix}\begin{bmatrix} {\begin{matrix} 1&{}&{}&{}\\ &{}1&{} &{}\\ &{}&{}1&{} x_{14}\\ &{}&{}&{}1\end{matrix}} \end{bmatrix}\begin{bmatrix} {\begin{matrix} 1&{}-x_{12}&{}&{}\\ &{}1&{}-x_{13}&{}\\ &{}&{}1&{}-x_{14}\\ &{}&{}&{}1\end{matrix}} \end{bmatrix}\right) \\&\times \begin{bmatrix} {\begin{matrix} 1&{}-x_{12}&{}&{}\\ &{}1&{}-x_{13}&{}\\ &{}&{}1&{}-x_{14}\\ &{}&{}&{}1\end{matrix}} \end{bmatrix}^{-1}\\= & {} \left( B_{1}B_{2}B_{3}D\begin{bmatrix} {\begin{matrix} 1&{}&{}&{}\\ &{}1&{}x_{23}&{}\\ &{}&{}1&{}x_{34}\\ &{}&{}&{}1\end{matrix}} \end{bmatrix}\begin{bmatrix} {\begin{matrix} 1&{}&{}&{}\\ &{}1&{}&{}\\ &{}&{}1&{}x_{24}\\ &{}&{}&{}1\end{matrix}} \end{bmatrix}\right) \begin{bmatrix} {\begin{matrix} 1&{}-x_{12}&{}&{}\\ &{}1&{}-x_{13}&{}\\ &{}&{}1&{}-x_{14}\\ &{}&{}&{}1\end{matrix}} \end{bmatrix}^{-1} \\= & {} \begin{bmatrix} {\begin{matrix} x'&{}0&{}0&{}0\\ x'&{}x'&{}x'&{}x'\\ x'&{}x'&{}x'&{}x'\\ x'&{}x'&{}x'&{}x'\end{matrix}} \end{bmatrix} \begin{bmatrix} {\begin{matrix} 1&{}-x_{12}&{}&{}\\ &{}1&{}-x_{13}&{}\\ &{}&{}1&{}-x_{14}\\ &{}&{}&{}1\end{matrix}} \end{bmatrix}^{-1}=:A'_{1}S_{1}^{-1},\end{aligned}$$

    this means that \(A'_{1}\) is obtained from \(A_{1}\) only by setting its parameters \(x_{12}=x_{13}=x_{14}=0\). So,

    $$\begin{aligned} A=A_{1}A_{2}A_{3}=A'_{1}(S_{1}^{-1}A_{2})A_{3}, \end{aligned}$$

    where by Theorem 2, \(A_{2}=:S_{1}^{-1}A_{2}\) is CRD of full rank whose parameters are still denoted as \(y_{ij}\).

  • Further, using a similar argument, the updated CRD matrix \(A_{2}\) can be rewritten as

    $$\begin{aligned} A_{2}=\begin{bmatrix} {\begin{matrix} y'&{}0&{}0&{}0\\ y'&{}y'&{}y'&{}y'\\ y'&{}y'&{}y'&{}y'\\ y'&{}y'&{}y'&{}y' \end{matrix}} \end{bmatrix} \begin{bmatrix} {\begin{matrix} 1&{}-y_{12}&{}&{}\\ &{}1&{}-y_{13}&{}\\ &{}&{}1&{}-y_{14}\\ &{}&{}&{}1 \end{matrix}} \end{bmatrix}^{-1}=:A'_{2}S_{2}^{-1}, \end{aligned}$$

    where \(A'_{2}\) is obtained from \(A_{2}\) only by setting its parameters \(y_{12}=y_{13}=y_{14}=0\). So,

    $$\begin{aligned} A=A'_{1}A_{2}A_{3}=A'_{1}A'_{2}(S_{2}^{-1}A_{3}), \end{aligned}$$

    where by Theorem 2, \(A_{3}=:S_{2}^{-1}A_{3}\) is CRD of full rank whose parameters are still denoted as \(z_{ij}\).

  • Finally, the updated CRD matrix \(A_{3}\) can be rewritten as

    $$\begin{aligned} A_{3}=\begin{bmatrix} {\begin{matrix} z'&{}z'&{}0&{}0\\ z'&{}z'&{}z'&{}z'\\ z'&{}z'&{}z'&{}z'\\ z'&{}z'&{}z'&{}z' \end{matrix}} \end{bmatrix} \begin{bmatrix} {\begin{matrix} 1&{} &{}&{}\\ &{}1&{}-z_{13}&{}\\ &{}&{}1&{}-z_{14}\\ &{}&{}&{}1\end{matrix}} \end{bmatrix}^{-1}=:A'_{3}S_{3}^{-1}, \end{aligned}$$

    where \(A'_{3}\) is obtained from \(A_{3}\) only by setting its parameters \(z_{13}=z_{14}=0\). So,

    $$\begin{aligned} S^{-1}_{3}AS_{3}=S^{-1}_{3}(A'_{1}A'_{2}A'_{3}S^{-1}_{3})S_{3}=(S^{-1}_{3}A'_{1})A'_{2}A'_{3}, \end{aligned}$$

    where by Theorem 2, \(A'_{1}=:S_{3}^{-1}A'_{1}\) is CRD of full rank whose parameters are still denoted as \(x_{ij}\), with \(x_{1j}=0\) (\(j=2,3,4\)) left undisturbed. So far, the product \(A=A_{1}A_{2}A_{3}\) has been reduced by a similarity transformation into the form:

    $$\begin{aligned} A'=A'_{1}A'_{2}A'_{3}=\begin{bmatrix} {\begin{matrix} x'&{}0&{}0&{}0\\ x'&{}x'&{}x'&{}x'\\ x'&{}x'&{}x'&{}x'\\ x'&{}x'&{}x'&{}x'\end{matrix}} \end{bmatrix}\begin{bmatrix} {\begin{matrix} y'&{}0&{}0&{}0\\ y'&{}y'&{}y'&{}y'\\ y'&{}y'&{}y'&{}y'\\ y'&{}y'&{}y'&{}y'\end{matrix}} \end{bmatrix}\begin{bmatrix} {\begin{matrix} z'&{}z'&{}0&{}0\\ z'&{}z'&{}z'&{}z'\\ z'&{}z'&{}z'&{}z'\\ z'&{}z'&{}z'&{}z'\end{matrix}} \end{bmatrix}. \end{aligned}$$

For the second stage, we perform the following operations.

  • First, taking into account the form (1), \(A'_{3}\) can be rewritten as

    $$\begin{aligned}A'_{3}=\begin{bmatrix} {\begin{matrix} 1&{}&{}&{}\\ -z_{21}&{}1&{}&{}\\ &{}-z_{31}&{}1&{}\\ &{}&{}-z_{41}&{}1 \end{matrix}} \end{bmatrix}^{-1} \begin{bmatrix} {\begin{matrix} z''&{}z''&{}0&{}0\\ 0&{}z''&{}z''&{}z''\\ 0&{}z''&{}z''&{}z''\\ 0&{}z''&{}z''&{}z'' \end{matrix}} \end{bmatrix}=:W_{3}^{-1}A''_{3}, \end{aligned}$$

    where \(A''_{3}\) is obtained from \(A'_{3}\) only by setting its parameters \(z_{21}=z_{31}=z_{41}=0\). So,

    $$\begin{aligned} A'= A'_{1}A'_{2}A'_{3}=A'_{1}(A'_{2}W_{3}^{-1})A''_{3}, \end{aligned}$$

    where by Corollary 3, \(A'_{2}=:A'_{2}W_{3}^{-1}\) is CRD of full rank whose parameters are still denoted as \(y_{ij}\), with \(y_{1j}=0\) (\(j=2,3,4\)) left undisturbed.

  • Further, the updated CRD matrix \(A'_{2}\) can be rewritten as

    $$\begin{aligned}A'_{2}=\begin{bmatrix} {\begin{matrix} 1&{}&{}&{}\\ -y_{21}&{}1&{}&{}\\ &{}-y_{31}&{}1&{}\\ &{}&{}-y_{41}&{}1 \end{matrix}} \end{bmatrix}^{-1} \begin{bmatrix} {\begin{matrix} y''&{}0&{}0&{}0\\ 0&{}y''&{}y''&{}y''\\ 0&{}y''&{}y''&{}y''\\ 0&{}y''&{}y''&{}y''\end{matrix}} \end{bmatrix}=:W_{2}^{-1}A''_{2},\end{aligned}$$

    where \(A''_{2}\) is obtained from \(A'_{2}\) only by setting its parameters \(y_{21}=y_{31}=y_{41}=0\). So,

    $$\begin{aligned} A'=A'_{1}A'_{2}A''_{3}=(A'_{1}W_{2}^{-1})A''_{2}A''_{3}, \end{aligned}$$

    where by Corollary 3, \(A'_{1}=:A'_{1}W_{2}^{-1}\) is CRD of full rank whose parameters are still denoted as \(x_{ij}\), with \(x_{1j}=0\) (\(j=2,3,4\)) left undisturbed.

  • Finally, the updated CRD matrix \(A'_{1}\) can be rewritten as

    $$\begin{aligned} A'_{1}=\begin{bmatrix} {\begin{matrix} 1&{} &{}&{}\\ &{}1&{}&{}\\ &{}-x_{31}&{}1&{}\\ &{}&{}-x_{41}&{}1 \end{matrix}} \end{bmatrix}^{-1}\begin{bmatrix} {\begin{matrix} x''&{}0&{}0&{}0\\ x''&{}x''&{}x''&{}x''\\ 0&{}x''&{}x''&{}x''\\ 0&{}x''&{}x''&{}x'' \end{matrix}} \end{bmatrix}=:W_{1}^{-1}A''_{1},\end{aligned}$$

    where \(A''_{1}\) is obtained from \(A'_{1}\) only by setting its parameters \(x_{31}=x_{41}=0\). So,

    $$\begin{aligned} W_{1}A'W^{-1}_{1}=W_{1}(W_{1}^{-1}A''_{1}A''_{2}A''_{3})W^{-1}_{1}=A''_{1}A''_{2}(A''_{3}W^{-1}_{1}), \end{aligned}$$

    where by Corollary 3, \(A''_{3}=:A''_{3}W^{-1}_{1}\) is CRD of full rank whose parameters are still denoted as \(z_{ij}\), with \(z_{i1}=0\) (\(i=2,3,4\)) and \(z_{1j}=0\) (\(j=3,4\)) left undisturbed. Thus, the product \(A=A_{1}A_{2}A_{3}\) has been reduced into the form:

    $$\begin{aligned} A''_{1}A''_{2}A''_{3}=\begin{bmatrix} {\begin{matrix} x''&{}0&{}0&{}0\\ x''&{}x''&{}x''&{}x''\\ 0&{}x''&{}x''&{}x''\\ 0&{}x''&{}x''&{}x''\end{matrix}} \end{bmatrix}\begin{bmatrix} {\begin{matrix} y''&{}0&{}0&{}0\\ 0&{}y''&{}y''&{}y''\\ 0&{}y''&{}y''&{}y''\\ 0&{}y''&{}y''&{}y''\end{matrix}} \end{bmatrix}\begin{bmatrix} {\begin{matrix} z''&{}z''&{}0&{}0\\ 0&{}z''&{}z''&{}z''\\ 0&{}z''&{}z''&{}z''\\ 0&{}z''&{}z''&{}z''\end{matrix}} \end{bmatrix}. \end{aligned}$$

Proceeding with the analogous operations on rows/columns 2 to 4, we conclude that \(A=A_{1}A_{2}A_{3}\) is reduced by similarity transformations into the tridiagonal form:

$$\begin{aligned}\left[ \begin{array}{cccc} \tilde{x}&{}0&{}0&{}0\\ \tilde{x}&{}\tilde{x}&{}0&{}0\\ 0&{}\tilde{x}&{}\tilde{x}&{}0\\ 0&{}0&{}\tilde{x}&{}\tilde{x}\end{array}\right] \left[ \begin{array}{cccc} \tilde{y}&{}0&{}0&{}0 \\ 0&{}\tilde{y}&{}0&{}0 \\ 0&{}0&{}\tilde{y}&{}0 \\ 0&{}0&{}0&{}\tilde{y}\end{array}\right] \left[ \begin{array}{cccc} \tilde{z}&{}\tilde{z}&{}0&{}0 \\ 0&{}\tilde{z}&{}\tilde{z}&{}0 \\ 0&{}0&{}\tilde{z}&{}\tilde{z} \\ 0&{}0&{}0&{}\tilde{z} \end{array}\right] . \end{aligned}$$

For the general product (19), let \(r=\min _{1\le t\le K}\{n_{t}\}\). Then, by performing the above reductions on rows/columns 1 to r, we conclude that the product \(A=A_{1}A_{2}\ldots A_{K}\) is reduced by similarity transformations into the form

$$\begin{aligned} T= & {} A'_{1}A'_{2}\ldots A'_{K-1}A'_{K}\\= & {} \begin{bmatrix} {\begin{matrix} x&{}&{}&{}&{}&{}\\ x&{}x&{}&{}&{}&{}\\ &{}\ddots &{}\ddots &{}&{}&{}\\ &{} &{}x&{}x&{} &{} \\ &{} &{}&{}x&{}x&{}\cdots &{}x\\ &{}&{}&{}&{}\vdots &{}\cdots &{}\vdots \\ &{}&{}&{}&{}x&{}\cdots &{}x \end{matrix}} \end{bmatrix} \begin{bmatrix} {\begin{matrix} y&{}&{}&{}&{}&{}\\ &{}\ddots &{}&{}&{}&{}\\ &{}&{}y&{}&{}&{}\\ &{}&{}&{}y&{}\ldots &{}y\\ &{}&{}&{}\vdots &{}\cdots &{}\vdots \\ &{}&{}&{}y&{}\cdots &{}y \end{matrix}} \end{bmatrix}\ldots \begin{bmatrix} {\begin{matrix} y&{}&{}&{}&{}&{}\\ &{}\ddots &{}&{}&{}&{}\\ &{}&{}y&{}&{}&{}\\ &{}&{}&{}y&{}\ldots &{}y\\ &{}&{}&{}\vdots &{}\cdots &{}\vdots \\ &{}&{}&{}y&{}\cdots &{}y \end{matrix}} \end{bmatrix} \begin{bmatrix} {\begin{matrix} z&{}z&{}&{}&{}&{}\\ &{}\ddots &{}\ddots &{}&{}&{}\\ &{}&{}z&{}z&{}&{}\\ &{}&{}&{}z&{}\ldots &{}z\\ &{}&{}&{}\vdots &{}\cdots &{}\vdots \\ &{}&{}&{}z&{}\cdots &{}z \end{matrix}} \end{bmatrix}, \end{aligned}$$

where if \(n_{1}=n_{K+1}=r\), then

$$\begin{aligned} A'_{1}=\begin{bmatrix} {\begin{matrix} x&{}&{}&{} &{}0&{}\cdots &{}0\\ x&{}x&{}&{} &{}0&{}\cdots &{} 0\\ &{}\ddots &{}\ddots &{} &{}\vdots &{}\cdots &{}\vdots \\ &{} &{}x&{} x&{} 0&{}\cdots &{}0 \end{matrix}} \end{bmatrix}\in \mathbb {R}^{n_{1}\times n_{2}},~A'_{K}=\begin{bmatrix} {\begin{matrix} z&{}z&{}&{} \\ &{}\ddots &{}\ddots &{} \\ &{}&{}z&{}z \\ &{} &{} &{}z \\ 0&{}\cdots &{}0&{}0 \\ \vdots &{}\cdots &{}\vdots &{} \vdots \\ 0&{}\cdots &{}0&{} 0 \end{matrix}} \end{bmatrix}\in \mathbb {R}^{n_{K}\times n_{K+1}}, \end{aligned}$$

and if \(n_{t}=r\) for some \(2\le t\le K\), then

$$\begin{aligned} A'_{t-1}=\begin{bmatrix} {\begin{matrix} y&{}&{}\\ &{}\ddots &{}\\ &{}&{}y\\ 0&{}\cdots &{}0\\ \vdots &{}\cdots &{}\vdots \\ 0&{}\cdots &{}0 \end{matrix}} \end{bmatrix}\in \mathbb {R}^{n_{t-1}\times n_{t}}~(t\ne 2),~A'_{t}=\begin{bmatrix} {\begin{matrix} y&{}&{}&{}0&{}\cdots &{}0\\ &{}\ddots &{}&{}\vdots &{}\cdots &{}\vdots \\ &{}&{}y&{}0&{}\ldots &{}0\\ \end{matrix}} \end{bmatrix}\in \mathbb {R}^{n_{t}\times n_{t+1}}~(t\ne K). \end{aligned}$$

So, the product \(A=A_{1}A_{2}\ldots A_{K}\) has been reduced by similarity transformations into

$$\begin{aligned} T=\left[ \begin{array}{cc} T'&{}0\\ 0&{}0\end{array}\right] , \end{aligned}$$
(20)

where if \(n_{1}=r\), then

$$\begin{aligned} T'= & {} A'_{1}[1:r]\cdot (A'_{2}[1:r]\ldots A'_{K-1}[1:r])\cdot A'_{K}[1:r]\nonumber \\= & {} \begin{bmatrix} {\begin{matrix} x&{}&{}&{}\\ x&{}x&{}&{}\\ &{}\ddots &{}\ddots &{}\\ &{}&{}x&{}x\end{matrix}} \end{bmatrix} \begin{bmatrix} {\begin{matrix} y&{}&{}&{} \\ &{}y&{}&{} \\ &{}&{}\ddots &{} \\ &{}&{}&{}y\end{matrix}} \end{bmatrix} \cdots \begin{bmatrix} {\begin{matrix} y&{}&{}&{} \\ &{}y&{}&{} \\ &{}&{}\ddots &{} \\ &{}&{}&{}y\end{matrix}} \end{bmatrix} \begin{bmatrix} {\begin{matrix} z&{}z&{}&{} \\ &{}\ddots &{}\ddots &{} \\ &{}&{}z&{}z \\ &{}&{}&{}z \end{matrix}} \end{bmatrix}\in \mathbb {R}^{r\times r}, \end{aligned}$$

and if \(n_{1}>r\), then

$$\begin{aligned} T'= & {} A'_{1}[1:r+1|1:r]\cdot (A'_{2}[1:r]\ldots A'_{K-1}[1:r])\cdot A'_{K}[1:r|1:r+1]\nonumber \\= & {} \begin{bmatrix} {\begin{matrix} x&{}&{}&{}\\ x&{}x&{}&{}\\ &{}\ddots &{}\ddots &{}\\ &{}&{}x&{}x\\ &{}&{}&{}x\end{matrix}} \end{bmatrix} \begin{bmatrix} {\begin{matrix} y&{}&{}&{} \\ &{}y&{}&{} \\ &{}&{}\ddots &{} \\ &{}&{}&{}y\end{matrix}} \end{bmatrix} \cdots \begin{bmatrix} {\begin{matrix} y&{}&{}&{} \\ &{}y&{}&{} \\ &{}&{}\ddots &{} \\ &{}&{}&{}y\end{matrix}} \end{bmatrix} \begin{bmatrix} {\begin{matrix} z&{}z&{}&{} \\ &{}\ddots &{}\ddots &{} \\ &{}&{}z&{}z \end{matrix}} \end{bmatrix}\in \mathbb {R}^{(r+1)\times (r+1)}. \end{aligned}$$

According to the analysis above, we finally obtain Algorithm 2 (here, \(U_{i}(x_{i},\ldots ,x_{n})\in {\mathbb R}^{n \times n}\) denotes the unit bidiagonal matrix with \((t-1,t)\)th entry \(x_{t}\) for \(i\le t\le n\)) to reduce the general product (19) to the tridiagonal form (20), at a cost of at most

$$\begin{aligned}&\sum \limits _{i=1}^{r}\left\{ \sum \limits _{t=1}^{K-1} \sum \limits _{j=i}^{n_{t+1}}(4n_{t+2}+12j)+ \sum \limits _{j=i+1}^{n_{K+1}}(4n_{2}+12j)+\sum \limits ^{K}_{t=2} \sum \limits _{j=i}^{n_{t}}(4n_{t-1}+12j)+\sum \limits _{j=i+1}^{n_{1}} (4n_{K}+12j)\right\} \\&\approx O(r(n_{1}n_{2}+n_{2}n_{3}+\cdots +n_{K}n_{K+1})). \end{aligned}$$

So, the complexity of our periodic qd-type reduction is very favorable when r is small.
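If a concrete number for the bound is wanted, the quadruple sum can be evaluated directly. The helper below is only an illustrative transcription of the formula (the function name and the padded 1-indexed storage of \(n_{1},\ldots ,n_{K+1}\) are our own conventions, not from the paper).

```python
def qd_reduction_op_bound(dims, r):
    """Evaluate the arithmetic-operation bound for the periodic
    qd-type reduction; dims = [n_1, n_2, ..., n_{K+1}]."""
    n = [0] + dims          # pad so that n[t] means n_t (1-indexed)
    K = len(dims) - 1
    total = 0
    for i in range(1, r + 1):
        for t in range(1, K):                     # t = 1, ..., K-1
            for j in range(i, n[t + 1] + 1):
                total += 4 * n[t + 2] + 12 * j
        for j in range(i + 1, n[K + 1] + 1):
            total += 4 * n[2] + 12 * j
        for t in range(2, K + 1):                 # t = 2, ..., K
            for j in range(i, n[t] + 1):
                total += 4 * n[t - 1] + 12 * j
        for j in range(i + 1, n[1] + 1):
            total += 4 * n[K] + 12 * j
    return total

print(qd_reduction_op_bound([2, 2, 2], 2))  # K = 2 square factors of order 2: 232
```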

Observe that the matrix \(T'\) of (20) is tridiagonal in factored form. Consequently, all the eigenvalues of the product A can be obtained by applying tridiagonal eigensolvers such as LR-type algorithms to \(T'\). More interestingly, it turns out that our method achieves high relative accuracy, as shown in the next section.

[Algorithm 2: the periodic qd-type reduction (pseudocode)]

5 Accurate Eigenvalue Computations

In this section, we first identify the subclass of products of the general form (19) for which no subtraction of like-signed numbers occurs throughout the periodic qd-type reduction. We then compute all the eigenvalues of such products to high relative accuracy. An error analysis is presented to establish this high relative accuracy.

5.1 The Subtraction-Free Algorithm 1

Lemma 4

Let \(A=B_{1} B_{2}\ldots B_{n-1} \in {\mathbb R}^{n \times n}\) be CRD whose parameter matrix \(P=(p_{ij})\in {\mathbb R}^{n \times n}\) and each factor \(B_{t}\in {\mathbb R}^{n \times n}\) (\(1\le t\le n-1\)) is as in (1), and let \(U\in {\mathbb R}^{n \times n}\) be bidiagonal whose diagonal entries \(d_{j}>0\) (\(1\le j\le n\)) and \((j-1,j)\)th entries \(u_{j}\) (\(2\le j\le n\)). Then the qd-type transformation

$$\begin{aligned} U^{-1}A=B'_{1} B'_{2}\ldots B'_{n-1}U'^{-1} \end{aligned}$$

is subtraction-free if and only if

$$\begin{aligned} \mathrm{sign}(u_{j} P[j|1:j-1])=-1~\mathrm{or}~0,~ \forall ~2\le j\le n. \end{aligned}$$
(21)

Consequently, in this case no breakdown occurs, each \(B'_{t}\) (\(1\le t\le n-1\)) has the same form as that of \(B_{t}\), and its corresponding entries \(p'_{ij}\) satisfy

$$\begin{aligned} \mathrm{sign}(p'_{ij})=\mathrm{sign}(p_{ij}),~\forall ~i>j, \end{aligned}$$
(22)

and \(U' \) is bidiagonal whose diagonal entries \(d'_{j}>0\) (\(1\le j\le n\)) and \((j-1,j)\)th entries \(u'_{j}=u_{j}\) (\(2\le j\le n\)).

Proof

We use induction on the total number of the factors in A to prove the result. By Lemma 1, the qd-type transformation \(U^{-1}B_{1}=B'_{1}\bar{U}^{-1}\) is subtraction-free if and only if

$$\begin{aligned} \mathrm{sign}(d_{n}p_{n1}u_{n})=\mathrm{sign}(p_{n1}u_{n})=-1~\mathrm{or}~0; \end{aligned}$$
(23)

and consequently, no breakdown occurs such that \(\bar{U}\in {\mathbb R}^{n \times n}\) is bidiagonal whose diagonal entries \(\bar{d}_{j}\) and \((j-1,j)\)th entries \(\bar{u}_{j}\) are the following:

$$\begin{aligned} {\left\{ \begin{array}{ll} \bar{d}_{j}=d_{j}>0,~1\le j \le n-2,\\ \bar{d}_{n}=d_{n}-u_{n}p_{n1}>0,~\bar{d}_{n-1}= \frac{d_{n-1}}{\bar{d}_{n}}d_{n}>0,\\ \bar{u}_{j}=u_{j}, \quad \forall ~2\le j\le n, \end{array}\right. } \end{aligned}$$

and \(B'_{1}\) has the same form as that of \(B_{1}\) with the corresponding entry \(p'_{n1}=\frac{d_{n-1}}{\bar{d}_{n}}p_{n1}\) such that

$$\begin{aligned} \mathrm{sign}(p'_{n1})=\mathrm{sign}(p_{n1}). \end{aligned}$$
(24)

Notice that \(B_{2} \ldots B_{n-1}\) is CRD whose parameters \(p_{ij}\) (\(0<i-j\le n-2\)) satisfy the fact (2). Hence, by applying our induction assumption, the qd-type transformation

$$\begin{aligned} \bar{U}^{-1}B_{2} \ldots B_{n-1}=B'_{2} \ldots B'_{n-1}U'^{-1} \end{aligned}$$

is subtraction-free if and only if

$$\begin{aligned} \mathrm{sign}(u_{j}P[j|1:j-1])=-1~\mathrm{or}~0,~ \forall ~2\le j\le n-1;\quad \mathrm{sign}(u_{n}P[n|2:n-1])=-1~\mathrm{or}~0;\nonumber \\ \end{aligned}$$
(25)

and consequently, no breakdown occurs, each \(B'_{t}\) (\(2\le t\le n-1\)) has the same form as that of \(B_{t}\), and its corresponding entries \(p'_{ij}\) satisfy

$$\begin{aligned} \mathrm{sign}(p'_{ij})=\mathrm{sign}(p_{ij}),~\forall ~i>j~\mathrm{and}~(i,j)\ne (n,1), \end{aligned}$$
(26)

and \(U' \in {\mathbb R}^{n \times n}\) is bidiagonal whose diagonal entries \(d'_{j}>0\) (\(1\le j\le n\)) and \((j-1,j)\)th entries \(u'_{j}=u_{j}\) (\(2\le j\le n\)). Therefore, the fact (25) together with the fact (23) implies that the qd-type transformation \(A'=U^{-1}A\) is subtraction-free if and only if the fact (21) holds. Consequently, the facts (24) and (26) imply that the fact (22) is true. \(\square \)

Lemma 5

Let \(A=C_{m-1}\ldots C_{2}C_{1}\in {\mathbb R}^{m\times m}\) be CRD whose parameter matrix \(P\in {\mathbb R}^{m\times m}\) and each factor \(C_{t}\in {\mathbb R}^{m\times m}\) (\(1\le t\le m-1\)) is as in (1), and let \(U\in {\mathbb R}^{m\times m}\) be bidiagonal with unit diagonal whose \((j-1,j)\)th entries \(u_{j}\ne 0\) for all \(l\le j\le h\) and \(u_{j}=0\) otherwise. Then the qd-type transformation

$$\begin{aligned} U^{-1}A=C'_{m-1}\ldots C'_{2}C'_{1} \end{aligned}$$

is subtraction-free if and only if

$$\begin{aligned} {\left\{ \begin{array}{ll} \mathrm{sign}(u_{j}P[l-1,\ldots ,j-1|j])=-1~\mathrm{or}~0,~\forall ~l\le j\le h;\\ \mathrm{sign}(P[l-1,\ldots ,h|j])=s_{j},~\forall ~h+1\le j\le m.\\ \end{array}\right. } \end{aligned}$$
(27)

Consequently, no breakdown occurs such that \(A'=U^{-1}A\) is CRD whose parameter matrix \(P' \) satisfies that

$$\begin{aligned} {\left\{ \begin{array}{ll} \mathrm{sign}(u_{j}P'[l-1,\ldots ,j-1|j])=-1,~\forall ~l\le j\le h;\\ \mathrm{sign}(P'[l-1,\ldots ,h|j])=s_{j},~\forall ~h+1\le j\le m;\\ P'[i|j]=P[i|j],~\mathrm{otherwise}; \end{array}\right. } \end{aligned}$$
(28)

Proof

We use induction on the total number of the factors in A to prove the result. By Lemma 3, the qd-type transformation \(U^{-1} C_{m-1}=C'_{m-1}\bar{U}^{-1}\) is subtraction-free if and only if

$$\begin{aligned} \mathrm{sign}(u_{j}p_{j-1,j})=-1~\mathrm{or}~0,\quad \forall ~l\le j\le h; \end{aligned}$$
(29)

and consequently, no breakdown occurs such that \(\bar{U}\in {\mathbb R}^{m\times m}\) is bidiagonal with unit diagonal whose \((j-1,j)\)th entries \(\bar{u}_{j}\) (\(2\le j\le m\)) are the following:

$$\begin{aligned} \bar{u}_{j}=\frac{p_{j-1,j}-u_{j}}{p_{j-2,j-1}-u_{j-1}}u_{j-1},~l+1\le j\le \min \{h+1,m\};~\mathrm{otherwise}~\bar{u}_{j}=0, \end{aligned}$$

from which it follows that

$$\begin{aligned} {\left\{ \begin{array}{ll} \mathrm{sign}(\bar{u}_{j})=-\mathrm{sign}(p_{j-1,j}-u_{j})=\mathrm{sign}(u_{j}),~l+1\le j\le h;\\ \mathrm{sign}(\bar{u}_{h+1})=-\mathrm{sign}(p_{h,h+1}-u_{h+1})=-\mathrm{sign}(p_{h,h+1}),~\mathrm{if}~h+1\le m; \end{array}\right. } \end{aligned}$$
(30)

and \(C'_{m-1}\) has the same form as that of \(C_{m-1}\) with the corresponding bidiagonal entries

$$\begin{aligned} {\left\{ \begin{array}{ll} p'_{l-1,l}=p_{l-1,l}-u_{l}\ne 0,\\ p'_{j-1,j}=\frac{p_{j-1,j}-u_{j}}{p_{j-2,j-1} -u_{j-1}}p_{j-2,j-1},~\forall ~l+1\le j\le \min \{h+1,m\},\\ p'_{j-1,j}=p_{j-1,j},~\mathrm{otherwise}, \end{array}\right. } \end{aligned}$$

from which it follows that

$$\begin{aligned} {\left\{ \begin{array}{ll} \mathrm{sign}(u_{l}p'_{l-1,l})=\mathrm{sign}(u_{l}(p_{l-1,l}-u_{l}))=-1,\\ \mathrm{sign}(u_{j}p'_{j-1,j})=-1~\mathrm{or}~0,~\forall ~l+1\le j\le h,\\ \mathrm{sign}(p'_{h,h+1})=\mathrm{sign}(p_{h,h+1})~\mathrm{or}~0,~\mathrm{if}~h+1\le m. \end{array}\right. } \end{aligned}$$
(31)

Notice that \(C_{m-2}\ldots C_{1}\) is CRD whose parameters \(p_{ij}\) (\(j-i\ge 2\)) satisfy the fact (2). We need to consider the following cases.

  • For the case \(p_{h,h+1}\ne 0\), we have by the fact (30) that

    $$\begin{aligned} \bar{u}_{j}\ne 0,~l+1\le j\le h+1; ~\mathrm{otherwise}~\bar{u}_{j}=0. \end{aligned}$$

    Thus, by our induction assumption, the qd-type transformation

    $$\begin{aligned} A''=\bar{U}^{-1}C_{m-2}\ldots C_{1}=C'_{m-2}\ldots C'_{1} \end{aligned}$$

    is subtraction-free if and only if

    $$\begin{aligned} {\left\{ \begin{array}{ll} \mathrm{sign}(\bar{u}_{j}P[(l+1)-2,\ldots ,j-2|j])=-1~\mathrm{or}~0,~\forall ~l+1\le j\le h+1;\\ \mathrm{sign}(P[(l+1)-2,\ldots ,(h+2)-2|j])=s_{j}, ~\forall ~h+2\le j\le m;\\ \end{array}\right. } \end{aligned}$$
    (32)

    and consequently, no breakdown occurs such that the parameter matrix \(P'=(p'_{ij})\) of \(A''\) satisfies that

    $$\begin{aligned} {\left\{ \begin{array}{ll} \mathrm{sign}(\bar{u}_{j}P'[(l+1)-2,\ldots ,j-2|j])=-1, ~\forall ~l+1\le j\le h+1;\\ \mathrm{sign}(P'[(l+1)-2,\ldots ,(h+2)-2|j])=s_{j}, ~\forall ~h+2\le j\le m;\\ p'_{ij}=p_{ij}~(j-i\ne 1),~\mathrm{otherwise}. \end{array}\right. } \end{aligned}$$
    (33)

    Therefore, the fact (32) together with the facts (29), (30) implies that the qd-type transformation \(A'=U^{-1}A\) is subtraction-free if and only if the fact (27) is true. Consequently, the fact (33) with (30), (31) implies that the fact (28) is true.

  • For the case \(p_{h,h+1}=0\), we have by the fact (30) that

    $$\begin{aligned} \bar{u}_{j}\ne 0,~l+1\le j\le h; ~\mathrm{otherwise}~\bar{u}_{j}=0. \end{aligned}$$

    Thus, by our induction assumption, the qd-type transformation

    $$\begin{aligned} A''=\bar{U}^{-1}C_{m-2}\ldots C_{1}=C'_{m-2}\ldots C'_{1} \end{aligned}$$

    is subtraction-free if and only if

    $$\begin{aligned} {\left\{ \begin{array}{ll} \mathrm{sign}(\bar{u}_{j}P[(l+1)-2,\ldots ,j-2|j])=-1~\mathrm{or}~0, ~\forall ~l+1\le j\le h;\\ \mathrm{sign}(P[(l+1)-2,\ldots ,(h+1)-2|j])=s_{j}, ~\forall ~h+1\le j\le m;\\ \end{array}\right. } \end{aligned}$$
    (34)

    and consequently, no breakdown occurs such that the parameter matrix \(P'=(p'_{ij})\) of \(A''\) satisfies that

    $$\begin{aligned} {\left\{ \begin{array}{ll} \mathrm{sign}(\bar{u}_{j}P'[(l+1)-2,\ldots ,j-2|j])=-1, ~\forall ~l+1\le j\le h;\\ \mathrm{sign}(P'[(l+1)-2,\ldots ,(h+1)-2|j])=s_{j}, ~\forall ~h+1\le j\le m;\\ p'_{ij}=p_{ij}~(j-i\ne 1),~\mathrm{otherwise}. \end{array}\right. } \end{aligned}$$
    (35)

    Notice that by the fact (2), \(p_{h,h+1}=0\) implies that \(p_{hj}=0\) for all \(j\ge h+1\). Therefore, the fact (34) together with the facts (29), (30) implies that the qd-type transformation \(A'=U^{-1}A\) is subtraction-free if and only if the fact (27) is true. Consequently, the fact (35) with (30), (31) implies that the fact (28) is true.

The result is proved. \(\square \)

By Lemmas 4 and 5, we characterize when Algorithm 1 is subtraction-free as follows.

Theorem 3

Let \(A\in {\mathbb R}^{n\times m}\) be CRD of full rank with the parameter matrix \(P\in {\mathbb R}^{n\times m}\), and let \(U\in {\mathbb R}^{n \times n}\) be bidiagonal whose diagonal entries \(d_{j}> 0\) (\(1\le j\le n\)) and \((j-1,j)\)th entries \(u_{j}\) (\(2\le j\le n\)) are the following:

$$\begin{aligned} \mathrm{sign}(u_{j})=g_{j}\ne 0,~l_{r}\le j\le h_{r},~r=1,2,\ldots ,t;\quad \mathrm{otherwise}~u_{j}=0, \end{aligned}$$

here, \(l_{r+1}-h_{r}\ge 2\) for all \(1\le r\le t-1\). Denote \(\mathrm{sign}(P[j|j])=w_{j}\) for all \(1\le j\le \min \{n,m\}\). Then the qd-type transformation \(A'=U^{-1}A\) by Algorithm 1 is subtraction-free if and only if

$$\begin{aligned} {\left\{ \begin{array}{ll} \mathrm{sign}(P[j|1:j-1])=-g_{j}~\mathrm{or}~0,~\forall ~l_{r}\le j\le h_{r};\\ \mathrm{sign}(P[l_{r}-1,\ldots ,j-1|j])=-g_{j}w_{j-1}w_{j}~\mathrm{or}~0,~\forall ~l_{r}\le j\le h'_{r};\\ \mathrm{sign}(P[l_{r}-1,\ldots ,h'_{r}|j])=s_{j}, ~\forall ~h'_{r}+1\le j\le m;\\ \end{array}\right. } r=1,2,\ldots ,t, \end{aligned}$$

here \(h'_{r}=\min \{h_{r},n,m\}\). Consequently, in this case no breakdown occurs such that \(A'\) is CRD of full rank whose parameter matrix \(P'\in {\mathbb R}^{n\times m}\) satisfies that

$$\begin{aligned} {\left\{ \begin{array}{ll} \mathrm{sign}(P'[l_{r}-1,\ldots ,j-1|j])=-g_{j}w_{j-1}w_{j},~\forall ~l_{r}\le j\le h'_{r};\\ \mathrm{sign}(P'[l_{r}-1,\ldots ,h'_{r}|j])=s_{j}, \forall ~h'_{r}+1\le j\le m;\\ \mathrm{sign}(P'[i|j])=\mathrm{sign}(P[i|j]),~\mathrm{otherwise}; \end{array}\right. } r=1,2,\ldots ,t. \end{aligned}$$

Proof

Denote by \(U_{l_{r}}\in {\mathbb R}^{n \times n}\) (\(1\le r\le t\)) the bidiagonal matrix whose diagonal entries \(\tilde{d}_{j}\) (\(1\le j\le n\)) and \((j-1,j)\)th entries \(\tilde{u}_{j}\) (\(2\le j\le n\)) are the following:

$$\begin{aligned} {\left\{ \begin{array}{ll} \tilde{d}_{j}=d_{j},~l_{r}-1\le j\le h_{r};~\mathrm{otherwise}~\tilde{d}_{j}=1;\\ \tilde{u}_{j}=u_{j},~l_{r}\le j\le h_{r};~\mathrm{otherwise}~\tilde{u}_{j}=0. \end{array}\right. } \end{aligned}$$

Consider that \(l_{r+1}-h_{r}\ge 2\) for all \(1\le r\le t-1\). Then \(U=U_{l_{t}}\ldots U_{l_{2}}U_{l_{1}}\). Hence, the qd-type transformation \(U^{-1}A=U^{-1}_{l_{1}}U^{-1}_{l_{2}}\ldots U^{-1}_{l_{t}}A\) is subtraction-free if and only if the sequence of qd-type transformations \(A_{r-1}=U^{-1}_{l_{r}}A_{r}\) (denote \(A_{t}=A\)) for \(r=t,\ldots ,2,1\) is subtraction-free. Therefore, we conclude by Lemmas 4, 2 and 5 that the result is true. \(\square \)

Remark 1

In Theorem 3, all the diagonal entries of U have been assumed to be positive. Clearly, the general case can easily be handled by choosing a signature matrix S such that all the diagonal entries of SU are positive.
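The choice of S in this remark is mechanical: take the jth diagonal entry of S to be the sign of the jth diagonal entry of U. A minimal plain-Python sketch (dense list-of-lists storage and the function name are our own, purely for illustration; it assumes no zero diagonal entries):

```python
def positive_diagonal_scaling(U):
    """Return the signature matrix S = diag(sign(u_jj)) and the product S*U,
    so that S*U has a positive diagonal (assumes u_jj != 0 for all j)."""
    n = len(U)
    s = [1.0 if U[j][j] > 0 else -1.0 for j in range(n)]
    SU = [[s[i] * U[i][j] for j in range(n)] for i in range(n)]
    S = [[s[i] if i == j else 0.0 for j in range(n)] for i in range(n)]
    return S, SU

S, SU = positive_diagonal_scaling([[-2.0, 1.0, 0.0],
                                   [0.0, 3.0, -0.5],
                                   [0.0, 0.0, -4.0]])
assert all(SU[j][j] > 0 for j in range(3))              # diagonal made positive
assert all(S[i][i] * S[i][i] == 1.0 for i in range(3))  # S is its own inverse
```

Since \(S^{-1}=S\), replacing U by SU changes the transformation only by a signature similarity, as the remark indicates.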

5.2 The Subtraction-Free LR Algorithm

Given the LU factorization \(T_{0}=L_{0}U_{0}\), the basic LR algorithm is performed as follows:

$$\begin{aligned} {\left\{ \begin{array}{ll} T_{k}=L_{k} U_{k},\\ T_{k+1}=L_{k}^{-1}T_{k}L_{k}=U_{k}L_{k},\\ \end{array}\right. } k=0,1,\ldots \end{aligned}$$
(36)

Lemma 6

Let \(T\in {\mathbb R}^{n \times n}\) be nonsingular as follows

$$\begin{aligned} T=LU= \begin{bmatrix} {\begin{matrix} 1&{} &{} &{} \\ l_{2}&{} 1&{} &{} \\ &{} \ddots &{} \ddots &{} \\ &{} &{}l_{n} &{} 1 \\ \end{matrix}} \end{bmatrix} \begin{bmatrix} {\begin{matrix} d_{1}&{}u_{2} &{} &{} \\ &{} \ddots &{} \ddots &{} \\ &{} &{}d_{n-1} &{} u_{n} \\ &{} &{} &{}d_{n} \end{matrix}} \end{bmatrix}. \end{aligned}$$
(37)

Then in the absence of breakdown, the LR transformation is

$$\begin{aligned} UL=\bar{L}\bar{U}= \begin{bmatrix} {\begin{matrix} 1&{} &{} &{} \\ \bar{l}_{2}&{} 1&{} &{} \\ &{} \ddots &{} \ddots &{} \\ &{} &{}\bar{l}_{n} &{} 1 \\ \end{matrix}} \end{bmatrix} \begin{bmatrix} {\begin{matrix} \bar{d}_{1}&{}u_{2} &{} &{} \\ &{} \ddots &{} \ddots &{} \\ &{} &{}\bar{d}_{n-1} &{} u_{n} \\ &{} &{} &{}\bar{d}_{n} \end{matrix}} \end{bmatrix}, \end{aligned}$$
(38)

where with the convention \(l_{n+1}=u_{n+1}=0\),

$$\begin{aligned} {\left\{ \begin{array}{ll} z_{1}=d_{1},~ \bar{d}_{j}=z_{j}+l_{j+1}u_{j+1},~\forall ~1\le j\le n,\\ \bar{l}_{j+1}=\frac{d_{j+1}}{\bar{d}_{j}}l_{j+1}, ~ z_{j+1}=\frac{d_{j+1}}{\bar{d}_{j}}z_{j},~\forall ~1\le j\le n-1.\\ \end{array}\right. } \end{aligned}$$
(39)

Moreover, the transformation (38) is subtraction-free if and only if \(\mathrm{sign}(d_{j}l_{j+1}u_{j+1})=1~\mathrm{or}~0\) for all \(1\le j\le n-1\). Consequently, in this case no breakdown occurs such that for all \(1\le j\le n\), \(\mathrm{sign}(\bar{d}_{j})=\mathrm{sign}(d_{j})\), \(\mathrm{sign}(\bar{d}_{j}\bar{l}_{j+1}u_{j+1})=\mathrm{sign}(d_{j}d_{j+1})~\mathrm{or}~0\), and \(\bar{l}_{j+1}=0\) if and only if \(l_{j+1}=0\).

Proof

Comparing the entries on both sides of (38), we have, with the convention \(l_{n+1}=u_{n+1}=0\), that

$$\begin{aligned} {\left\{ \begin{array}{ll} \bar{d}_{1}=d_{1}+l_{2}u_{2},\\ \bar{d}_{j}\bar{l}_{j+1}=d_{j+1}l_{j+1},\\ \bar{d}_{j+1}+\bar{l}_{j+1}u_{j+1}=d_{j+1}+l_{j+2}u_{j+2},\\ \end{array}\right. }j=1,2,\ldots ,n-1, \end{aligned}$$

from which it follows that

$$\begin{aligned} {\left\{ \begin{array}{ll} z_{1}=\bar{d}_{1}-l_{2}u_{2}=d_{1},~ \bar{d}_{1}=z_{1}+l_{2}u_{2},\\ \bar{l}_{j+1}=\frac{d_{j+1}}{\bar{d}_{j}}l_{j+1},\\ z_{j+1}=\bar{d}_{j+1}-l_{j+2}u_{j+2}=d_{j+1}-\bar{l}_{j+1}u_{j+1}=\frac{d_{j+1}}{\bar{d}_{j}}z_{j}, \\ \bar{d}_{j+1}=z_{j+1}+l_{j+2}u_{j+2}, \end{array}\right. }j=1,2,\ldots ,n-1. \end{aligned}$$

So, the formula (39) is derived. Moreover, by the formula (39), since \(z_{1}=d_{1}\ne 0\) with \(\mathrm{sign}(z_{1})=\mathrm{sign}(d_{1})\), we have that \(\bar{d}_{1}=z_{1}+l_{2}u_{2}\) is subtraction-free if and only if \(\mathrm{sign}(z_{1}l_{2}u_{2})=\mathrm{sign}(d_{1}l_{2}u_{2})=1\) or 0. Consequently, \(\bar{d}_{1}\ne 0\) with \(\mathrm{sign}(\bar{d}_{1})=\mathrm{sign}(z_{1})=\mathrm{sign}(d_{1})\) such that \(z_{2}=\frac{d_{2}}{\bar{d}_{1}}z_{1}\ne 0\) with \(\mathrm{sign}(z_{2})=\mathrm{sign}(d_{2})\). So, assume that \(z_{j}\ne 0\) with \(\mathrm{sign}(z_{j})=\mathrm{sign}(d_{j})\) for some \(1\le j\le n-1\). Then \(\bar{d}_{j}=z_{j}+l_{j+1}u_{j+1}\) is subtraction-free if and only if \(\mathrm{sign}(z_{j}l_{j+1}u_{j+1})=\mathrm{sign}(d_{j}l_{j+1}u_{j+1})=1\) or 0, and consequently, \(\bar{d}_{j}\ne 0\) with \(\mathrm{sign}(\bar{d}_{j})=\mathrm{sign}(z_{j})=\mathrm{sign}(d_{j})\) such that \(z_{j+1}=\frac{d_{j+1}}{\bar{d}_{j}}z_{j}\ne 0\) with \(\mathrm{sign}(z_{j+1})=\mathrm{sign}(d_{j+1})\). Notice that \(\bar{d}_{n}=z_{n}\). Thus, we conclude that the LR transformation (38) is subtraction-free if and only if \(\mathrm{sign}(d_{j}l_{j+1}u_{j+1})=1\) or 0 for all \(1\le j\le n-1\). Consequently, no breakdown occurs such that for all \(1\le j\le n\), \(\mathrm{sign}(\bar{d}_{j})=\mathrm{sign}(d_{j})\), and \(\mathrm{sign}(\bar{d}_{j}\bar{l}_{j+1}u_{j+1})=\mathrm{sign}(d_{j}d_{j+1})\) or 0 because \(\bar{d}_{j}\bar{l}_{j+1}=d_{j+1}l_{j+1}\) and \(\mathrm{sign}(l_{j+1}u_{j+1})=\mathrm{sign}(d_{j})\) or 0. Clearly, \(\bar{l}_{j+1}=0\) if and only if \(l_{j+1}=0\). \(\square \)
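The recurrence (39) translates directly into code. The sketch below is an illustrative 0-based transcription (our own storage convention: l[j] holds the (j+1, j) multiplier and u[j] the (j, j+1) entry, with l[0] and u[0] unused), followed by a like-signed example on which the step is subtraction-free.

```python
def lr_transform(d, l, u):
    """One LR step via the recurrence (39): given T = L*U with diagonal d,
    subdiagonal multipliers l and superdiagonal entries u (l[0], u[0] unused),
    return the new diagonal d_bar and subdiagonal l_bar with UL = Lbar*Ubar."""
    n = len(d)
    z = [0.0] * n
    d_bar = [0.0] * n
    l_bar = [0.0] * n
    z[0] = d[0]
    for j in range(n):
        lu = l[j + 1] * u[j + 1] if j + 1 < n else 0.0
        d_bar[j] = z[j] + lu              # addition of like-signed terms only
        if j + 1 < n:
            ratio = d[j + 1] / d_bar[j]
            l_bar[j + 1] = ratio * l[j + 1]
            z[j + 1] = ratio * z[j]
    return d_bar, l_bar

d = [2.0, 3.0, 4.0]
l = [0.0, 1.0, 0.5]   # like-signed data: sign(d_j * l_{j+1} * u_{j+1}) = 1
u = [0.0, 2.0, 1.0]
d_bar, l_bar = lr_transform(d, l, u)
```

On this example one can check the entrywise relations from the proof, e.g. \(\bar{d}_{j}\bar{l}_{j+1}=d_{j+1}l_{j+1}\), and that the signs of the diagonal are preserved, as Lemma 6 asserts.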

Now, we characterize when the basic LR algorithm is subtraction-free.

Theorem 4

Let \(T\in {\mathbb R}^{n \times n}\) be as in (37). Then the basic LR algorithm of T is subtraction-free if and only if \(\mathrm{sign}(l_{j+1}u_{j+1})=\mathrm{sign}(d_{j})=\mathrm{sign}(d_{j+1})\) whenever \(l_{j+1}u_{j+1}\ne 0\) for all \(1\le j\le n-1\). Consequently, in this case no breakdown occurs.

Proof

By Lemma 6, the first LR iteration of (36) is subtraction-free if and only if \(\mathrm{sign}(d_{j}l_{j+1}u_{j+1})=1\) whenever \(l_{j+1}u_{j+1}\ne 0\) for all j; and then, the second LR iteration of (36) is subtraction-free if and only if \(\mathrm{sign}(d_{j}d_{j+1})=1\) whenever \(l_{j+1}u_{j+1}\ne 0\) by considering that \(\bar{l}_{j+1}=0\) if and only if \(l_{j+1}=0\); and afterwards, the rth (\(r=3,4,\ldots \)) iteration of (36) must be subtraction-free by considering \(\mathrm{sign}(\bar{d}_{j})=\mathrm{sign}(d_{j})\) for all j. Notice that the facts \(\mathrm{sign}(d_{j}l_{j+1}u_{j+1})=1\) and \(\mathrm{sign}(d_{j}d_{j+1})=1\) imply that \(\mathrm{sign}(l_{j+1}u_{j+1})=\mathrm{sign}(d_{j})=\mathrm{sign}(d_{j+1})\). \(\square \)
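The condition of Theorem 4 can be tested before any LR iteration is run. A small illustrative predicate (names and 0-based storage are our own: d is the diagonal, l the subdiagonal multipliers and u the superdiagonal entries, with l[0] and u[0] unused):

```python
def sign(x):
    return (x > 0) - (x < 0)

def lr_is_subtraction_free(d, l, u):
    """Theorem 4: the basic LR algorithm of T = L*U is subtraction-free iff
    sign(l_{j+1} u_{j+1}) = sign(d_j) = sign(d_{j+1})
    whenever l_{j+1} u_{j+1} != 0, for all 1 <= j <= n-1."""
    n = len(d)
    for j in range(n - 1):
        lu = l[j + 1] * u[j + 1]
        if lu != 0 and not (sign(lu) == sign(d[j]) == sign(d[j + 1])):
            return False
    return True

assert lr_is_subtraction_free([2.0, 3.0, 4.0], [0.0, 1.0, 0.5], [0.0, 2.0, 1.0])
assert not lr_is_subtraction_free([2.0, -3.0, 4.0], [0.0, 1.0, 0.5], [0.0, 2.0, 1.0])
```

Positions where \(l_{j+1}u_{j+1}=0\) impose no sign restriction, exactly as in the theorem.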

5.3 Computing Eigenvalues with High Relative Accuracy

Now we are ready to show that our eigensolver, which combines Algorithm 2 with the LR algorithm, achieves high relative accuracy in computing all the eigenvalues of the product \(A=A_{1}A_{2}\ldots A_{K}\in {\mathbb R}^{n \times n}\) of (19). For simplicity, assume that each factor \(A_{t}\) (\(1\le t\le K\)) is square of full rank; the rectangular case can be handled by taking Theorems 3 and 4 into account.

Before proceeding, we remark that if there exists \(2\le r\le n\) such that \(P_{t}[1:r-1|r]=0\) for all \(1\le t\le K\), then by the fact (2), \(P_{t}[1:r-1|r:n]=0\) such that

$$\begin{aligned} A_{t}=\left[ \begin{array}{cc}A^{(t)}_{11}&{}0\\ A^{(t)}_{21}&{}A^{(t)}_{22}\end{array}\right] \in {\mathbb R}^{n \times n}, ~\mathrm{where}~A^{(t)}_{11}\in \mathbb {R}^{r\times r},~\forall ~1\le t\le K, \end{aligned}$$

consequently, the eigenvalue problem of the product A is split into two subproblems of the products \(A_{11}=\prod _{t=1}^{K}A^{(t)}_{11}\) and \(A_{22}=\prod _{t=1}^{K}A^{(t)}_{22}\). A similar splitting fact holds if there exists \(2\le r\le n\) such that \(P_{t}[r|1:r-1]=0\) for all \(1\le t\le K\). Hence, assume that for all \(2\le r\le n\), there exist \(1\le t^{(r)}_{1},t^{(r)}_{2}\le K\) such that

$$\begin{aligned} P_{t^{(r)}_{1}}[1:r-1|r]\ne 0~\mathrm{and}~P_{t^{(r)}_{2}}[r|1:r-1]\ne 0. \end{aligned}$$
(40)
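The splitting test described above is mechanical to apply. The sketch below (illustrative only; each parameter matrix \(P_{t}\) is stored as a dense list of lists, and the function name is our own) returns every index r at which the column condition \(P_{t}[1:r-1|r]=0\) holds for all t, so the product eigenproblem decouples there; the row condition is checked analogously on the transposes.

```python
def column_split_points(Ps):
    """Return all r (1-based, 2 <= r <= n) with P_t[1:r-1 | r] = 0 for every
    parameter matrix P_t; at such r the product eigenproblem splits."""
    n = len(Ps[0])
    return [r for r in range(2, n + 1)
            if all(P[i][r - 1] == 0 for P in Ps for i in range(r - 1))]

# Two hypothetical 4x4 parameter matrices sharing a zero pattern in column 3.
P1 = [[1, 1, 0, 0],
      [2, 1, 0, 0],
      [0, 3, 1, 5],
      [0, 0, 4, 1]]
P2 = [[1, 0, 0, 0],
      [1, 2, 0, 0],
      [0, 1, 3, 1],
      [0, 0, 1, 4]]
print(column_split_points([P1, P2]))  # [3]
```

Only indices where the zero pattern is shared by all factors qualify: here P2 alone would also split at r = 2, but P1 does not.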

Now, we identify the subset of these products for which the periodic qd-type reduction method is subtraction-free. Recall the convention \(\prod _{l}^{h}\cdot =1\) if \(h<l\).

Theorem 5

Let \(A=A_{1}\ldots A_{K-1} A_{K}\in {\mathbb R}^{n \times n}\), where each factor \(A_{t}\in {\mathbb R}^{n \times n}\) (\(1\le t\le K\)) is a nonsingular CRD matrix with the parameter matrix \(P_{t} \in {\mathbb R}^{n \times n}\) such that the fact (40) is satisfied. Then the periodic qd-type reduction method, i.e., Algorithm 2, is subtraction-free in reducing A to a tridiagonal matrix whose basic LR algorithm is subtraction-free if and only if for all \(1\le t\le K\),

$$\begin{aligned} {\left\{ \begin{array}{ll} \mathrm{sign}(P_{t}[j|1:j-1])= f^{(t)}_{j}, \\ \mathrm{sign}(P_{t}[1:j-1|j])= s^{(t)}_{j}, \\ \mathrm{sign}(P_{t}[j|j])= w^{(t)}_{j}, \end{array}\right. }\forall ~1\le j\le n, \end{aligned}$$
(41)

where

$$\begin{aligned} {\left\{ \begin{array}{ll} \prod _{l=1}^{K}w^{(l)}_{j-1}w^{(l)}_{j}=1,\\ f^{(t_{1})}_{j}f^{(t_{2})}_{j}=\prod _{l= t_{1}}^{ t_{2}-1 }w^{(l)}_{j-1}w^{(l)}_{j},~\mathrm{if}~f^{(t_{1})}_{j}f^{(t_{2})}_{j}\ne 0~\mathrm{for}~t_{1}< t_{2},\\ s^{(t_{1})}_{j}s^{(t_{2})}_{j}=\prod _{l= t_{1} +1}^{ t_{2} }w^{(l)}_{j-1}w^{(l)}_{j},~\mathrm{if}~s^{(t_{1})}_{j}s^{(t_{2})}_{j}\ne 0~\mathrm{for}~t_{1}<t_{2},\\ s^{(t_{1})}_{j}f^{(t_{2})}_{j}=\prod _{l= \min \{t_{1}+1,t_{2}\} }^{ \max \{t_{1},t_{2}-1\} }w^{(l)}_{j-1}w^{(l)}_{j},~\mathrm{if}~s^{(t_{1})}_{j}f^{(t_{2})}_{j}\ne 0, \end{array}\right. } \forall ~ 2\le j\le n. \end{aligned}$$
(42)

Proof

We use induction on the total number h of the eliminated rows and columns of the factors to prove the result. If \(h=0\), then A itself is of the tridiagonal form with \(A=LU\), where \(L\in \mathbb {R}^{n\times n}\) is bidiagonal with unit diagonal whose \((j,j-1)\)th entries \(l_{j}=p^{(1)}_{j,j-1}\) (\(2\le j\le n\)), and \(U\in \mathbb {R}^{n\times n}\) is bidiagonal whose diagonal entries \(d_{j}=\prod _{l=1}^{K}p^{(l)}_{jj}\) (\(1\le j\le n\)) and \((j-1,j)\)th entries \(u_{j}=d_{j-1}p^{(K)}_{j-1,j}\) (\(2\le j\le n\)). Because of the fact (40), \(l_{j}\ne 0\) and \(u_{j}\ne 0\) for all \(2\le j\le n\). Thus, by Theorem 4, the basic LR algorithm of A is subtraction-free if and only if

$$\begin{aligned} \mathrm{sign}(l_{j}u_{j})=\mathrm{sign}(d_{j-1})=\mathrm{sign}(d_{j}),~\forall ~2\le j\le n, \end{aligned}$$

i.e., for all \(2\le j\le n\),

$$\begin{aligned} \mathrm{sign}(p^{(1)}_{j,j-1}p^{(K)}_{j-1,j})=f^{(1)}_{j}s^{(K)}_{j}=1,\quad \mathrm{sign}(d_{j-1}d_{j})=\prod _{l=1}^{K}w^{(l)}_{j-1}w^{(l)}_{j}=1, \end{aligned}$$

which means that the result is true for the case \(h=0\). Now assume that the result is true whenever the total number of eliminated rows and columns is less than h. Notice that for the periodic qd-type method, eliminations on columns of \(A_{1}\ldots A_{K-1} A_{K}\) are equivalent to eliminations on rows of \((A_{1}\ldots A_{K-1} A_{K})^{T}\). So, without loss of generality, assume that the first elimination of the method is performed on the first row of the factor \(A_{t}\) for some \(1\le t\le K\). The following cases need to be considered.

  • The case \(1\le t<K\). For the parameter matrix \(P_{t}=(p^{(t)}_{ij})\in {\mathbb R}^{n \times n}\) of \(A_{t}\), if \(p^{(t)}_{1j}=0\) for all \(2\le j\le n\), then all the off-diagonal entries in the first row of \(A_{t}\) are zero, and so there is nothing to eliminate. Thus, by the fact (2), we assume that

    $$\begin{aligned} p^{(t)}_{11}\ne 0,~\ldots ,~p^{(t)}_{1h}\ne 0~(h\ge 2);\quad p^{(t)}_{1j}=0,~\forall ~j>h. \end{aligned}$$

    Let \(U=U_{2}(-p^{(t)}_{12},\ldots ,-p^{(t)}_{1h})\in {\mathbb R}^{n \times n}\) with \(\mathrm{sign}(p^{(t)}_{1j})=s^{(t)}_{j}\ne 0\) (\(2\le j\le h\)). Then the elimination of these nonzero entries is performed by the periodic qd-type method as follows

    $$\begin{aligned} A_{1}\ldots A_{t-1}\underline{A_{t}A_{t+1}}\ldots A_{K}=A_{1}\ldots A_{t-1}\underline{A'_{t}A'_{t+1}}\ldots A_{K}, \end{aligned}$$
    (43)

    where \(A'_{t}\) is a nonsingular CRD matrix obtained from \(A_{t}\) only by setting its (1, j)th (\(2\le j\le h\)) parameters to zero, i.e., the parameter matrix \(P'_{t}\in {\mathbb R}^{n \times n}\) of \(A'_{t}\) is the following:

    $$\begin{aligned} P'_{t}[1|j]=0,~\forall ~2\le j\le n;\quad P'_{t}[i|j]=P_{t}[i|j],~\mathrm{otherwise}; \end{aligned}$$

    and the qd-type transformation \(A'_{t+1}=U^{-1}A_{t+1}\) is computed by Algorithm 1. Thus, by Theorem 3, the elimination of (43) is subtraction-free if and only if the fact (41) with (42) is satisfied for the parameter matrix \(P_{t+1}\in {\mathbb R}^{n \times n}\) as follows:

    $$\begin{aligned} {\left\{ \begin{array}{ll} \mathrm{sign}(P_{t+1}[j|j])=w^{(t+1)}_{j},~~\forall ~1\le j\le h,\\ \mathrm{sign}(P_{t+1}[j|1:j-1])=f^{(t+1)}_{j}=s^{(t)}_{j}~\mathrm{or}~0,~\forall ~2\le j\le h ,\\ \mathrm{sign}(P_{t+1}[1:j-1|j])=s^{(t+1)}_{j}=s^{(t)}_{j}w^{(t+1)}_{j-1}w^{(t+1)}_{j}~\mathrm{or}~0,~\forall ~2\le j\le h,\\ \mathrm{sign}(P_{t+1}[1:h|j])=s^{(t+1)}_{j},~\forall ~h+1\le j\le n, \end{array}\right. } \end{aligned}$$
    (44)

    and consequently, \(A'_{t+1}\) is a nonsingular CRD matrix whose parameter matrix \(P'_{t+1}\in {\mathbb R}^{n \times n}\) satisfies that

    $$\begin{aligned} {\left\{ \begin{array}{ll} \mathrm{sign}(P'_{t+1}[1:j-1|j]))=s'^{(t+1)}_{j}=s^{(t)}_{j}w^{(t+1)}_{j-1}w^{(t+1)}_{j}\ne 0,~\forall ~2\le j\le h,\\ \mathrm{sign}(P'_{t+1}[1:h|j])=s'^{(t+1)}_{j}=s^{(t+1)}_{j},~\forall ~h+1\le j\le n,\\ \mathrm{sign}(P'_{t+1}[i|j])=\mathrm{sign}(P_{t+1}[i|j]),~~\mathrm{otherwise}. \end{array}\right. } \end{aligned}$$
    (45)

    In particular, the fact (40) is satisfied for the product \(A_{1}\ldots A'_{t}A'_{t+1}\ldots A_{K}\).

    • For the sufficiency, we need to show that the fact (41) with (42) holds for all the factors of \(A_{1}\ldots A'_{t}A'_{t+1}\ldots A_{K}\). Recall that the fact (41) with (42) is satisfied for all the factors \(A_{i}\) (\(1\le i\le K\)). Because of the connections between \(P_{j}\) and \(P'_{j}\) (\(j=t,t+1\)), it remains to show that the fact (42) is satisfied for all \(s'^{(t+1)}_{j}\) (\(2\le j\le h\)) of \(P'_{t+1}\), which is proved as follows: for all \(2\le j\le h\),

      $$\begin{aligned} {\left\{ \begin{array}{ll} s^{(r)}_{j}s'^{(t+1)}_{j}=s^{(r)}_{j}s^{(t)}_{j}w^{(t+1)}_{j-1}w^{(t+1)}_{j}=\prod _{l= r +1}^{ t+1 }w^{(l)}_{j-1}w^{(l)}_{j},~\mathrm{if }~s^{(r)}_{j}\ne 0~\mathrm{for}~r\le t,\\ s'^{(t+1)}_{j}s^{(r)}_{j}=s^{(t)}_{j}w^{(t+1)}_{j-1}w^{(t+1)}_{j}s^{(r)}_{j}=\prod _{l= t +2}^{ r }w^{(l)}_{j-1}w^{(l)}_{j},~\mathrm{if }~s^{(r)}_{j}\ne 0~\mathrm{for}~r>t+1,\\ f^{(r)}_{j}s'^{(t+1)}_{j}=f^{(r)}_{j}s^{(t)}_{j}w^{(t+1)}_{j-1}w^{(t+1)}_{j}=\prod _{l= r}^{ t+1 }w^{(l)}_{j-1}w^{(l)}_{j},~\mathrm{if }~f^{(r)}_{j}\ne 0~\mathrm{for}~r\le t+1,\\ s'^{(t+1)}_{j}f^{(r)}_{j}=s^{(t)}_{j}w^{(t+1)}_{j-1}w^{(t+1)}_{j}f^{(r)}_{j}=\prod _{l= t +2}^{ r-1}w^{(l)}_{j-1}w^{(l)}_{j},~\mathrm{if }~f^{(r)}_{j}\ne 0~\mathrm{for}~r>t+1. \end{array}\right. } \end{aligned}$$

      So, we conclude by the induction assumption that the sufficiency is true.

    • For the necessity, by the induction assumption, the fact (41) with (42) is true for all the factors of \(A_{1}\ldots A'_{t}A'_{t+1}\ldots A_{K}\), and in particular,

      $$\begin{aligned} {\left\{ \begin{array}{ll} P'_{t}[1|j]=0,~\forall ~2\le j\le n;\\ \mathrm{sign}(P'_{t}[j|1:j-1])=\mathrm{sign}(P_{t}[j|1:j-1])= f^{(t)}_{j},~~2\le j\le n; \\ \mathrm{sign}(P'_{t}[1:j-1|j])=\mathrm{sign}(P_{t}[2:j-1|j])= s'^{(t)}_{j},~~3\le j\le h;\\ \mathrm{sign}(P'_{t}[1:j-1|j])=\mathrm{sign}(P_{t}[1:j-1|j])= s^{(t)}_{j},~~h+1\le j\le n;\\ \mathrm{sign}(P'_{t}[j|j])=\mathrm{sign}(P_{t}[j|j])= w^{(t)}_{j},~~1\le j\le n, \end{array}\right. } \end{aligned}$$

      where if \(s'^{(t)}_{j}\ne 0\) for some \(3\le j\le h\), since \(s'^{(t+1)}_{j}=s^{(t)}_{j}w^{(t+1)}_{j-1}w^{(t+1)}_{j}\ne 0\) and \(s'^{(t)}_{j}s'^{(t+1)}_{j}=w^{(t+1)}_{j-1}w^{(t+1)}_{j}\), we have that \(s'^{(t)}_{j}=s^{(t)}_{j}\), which together with \(\mathrm{sign}(p^{(t)}_{1j})=s^{(t)}_{j}\ne 0\) (\(2\le j\le h\)) implies that

      $$\begin{aligned} \mathrm{sign}(P_{t}[1:j-1|j])= s^{(t)}_{j}\ne 0,~~\forall ~2\le j\le h. \end{aligned}$$

      For showing that the fact (41) with (42) holds for all the factors \(A_{i}\) (\(1\le i\le K\)), because of the connections between \(P_{j}\) and \(P'_{j}\) (\(j=t,t+1\)), it remains to show that the fact (42) is satisfied for all \(s^{(t)}_{j}\) (\(2\le j\le h\)) of \(P_{t}\), which is proved as follows: for all \(2\le j\le h\), by considering that \(s'^{(t+1)}_{j}\ne 0\), since

      $$\begin{aligned} {\left\{ \begin{array}{ll} s^{(r)}_{j}s'^{(t+1)}_{j}=\prod _{l= r +1}^{ t+1 }w^{(l)}_{j-1}w^{(l)}_{j},~\mathrm{if }~s^{(r)}_{j}\ne 0~\mathrm{for}~r<t,\\ s'^{(t+1)}_{j}s^{(r)}_{j}=\prod _{l= t +2}^{ r }w^{(l)}_{j-1}w^{(l)}_{j},~\mathrm{if }~s^{(r)}_{j}\ne 0~\mathrm{for}~r\ge t+1,\\ f^{(r)}_{j}s'^{(t+1)}_{j} =\prod _{l= r}^{ t+1 }w^{(l)}_{j-1}w^{(l)}_{j},~\mathrm{if }~f^{(r)}_{j}\ne 0~\mathrm{for}~r\le t+1,\\ s'^{(t+1)}_{j}f^{(r)}_{j} =\prod _{l= t +2}^{ r-1}w^{(l)}_{j-1}w^{(l)}_{j},~\mathrm{if }~f^{(r)}_{j}\ne 0~\mathrm{for}~r>t+1, \end{array}\right. } \end{aligned}$$

      we have that

      $$\begin{aligned} {\left\{ \begin{array}{ll} s^{(r)}_{j}s^{(t)}_{j}=\prod _{l= r +1}^{ t }w^{(l)}_{j-1}w^{(l)}_{j},~\mathrm{if }~s^{(r)}_{j}\ne 0~\mathrm{for}~r<t,\\ s^{(t)}_{j}s^{(r)}_{j}=\prod _{l= t +1}^{ r }w^{(l)}_{j-1}w^{(l)}_{j},~\mathrm{if }~s^{(r)}_{j}\ne 0~\mathrm{for}~r>t+1,\\ f^{(r)}_{j}s^{(t)}_{j}=\prod _{l= r}^{ t }w^{(l)}_{j-1}w^{(l)}_{j},~\mathrm{if }~f^{(r)}_{j}\ne 0~\mathrm{for}~r\le t+1,\\ s^{(t)}_{j}f^{(r)}_{j}=\prod _{l= t +1}^{ r-1}w^{(l)}_{j-1}w^{(l)}_{j},~\mathrm{if }~f^{(r)}_{j}\ne 0~\mathrm{for}~r>t+1. \end{array}\right. } \end{aligned}$$

      Therefore, we conclude that the necessity is true.

  • The case \(t=K\). For the parameter matrix \(P_{K}=(p^{(K)}_{ij})\in {\mathbb R}^{n \times n}\) of \(A_{K}\), if \(p^{(K)}_{1j}=0\) for all \(3\le j\le n\), then all the (1, j)-th (\(3\le j\le n\)) entries of \(A_{K}\) are zero, and so, there is nothing to be eliminated. Thus, by the fact (2), we assume that

    $$\begin{aligned} p^{(K)}_{11}\ne 0,~\ldots ,~p^{(K)}_{1h}\ne 0~(h\ge 3);\quad p^{(K)}_{1j}=0,~\forall ~j>h. \end{aligned}$$

    Let \(U=U_{3}(-p^{(K)}_{13},\ldots ,-p^{(K)}_{1h})\) with \(\mathrm{sign}(p^{(K)}_{1j})=s^{(K)}_{j}\ne 0\) (\(3\le j\le h\)). Then the elimination of these nonzero entries is performed by the periodic qd-type method as follows

    $$\begin{aligned} U^{-1}(A_{1}\ldots A_{K-1}A_{K})U=A'_{1}A_{2}\ldots A_{K-1}A'_{K}, \end{aligned}$$
    (46)

    where \(A'_{K}\) is a nonsingular CRD matrix obtained from \(A_{K}\) only by setting its nonzero (1, j)th (\(3\le j\le h\)) parameters to be zero, i.e., the parameter matrix \(P'_{K}\in {\mathbb R}^{n \times n}\) of \(A'_{K}\) is the following:

    $$\begin{aligned} P'_{K}[1|j]=0,~\forall ~3\le j\le n;\quad P'_{K}[i|j]=P_{K}[i|j],~\mathrm{otherwise}; \end{aligned}$$

    and the qd-type transformation \(A'_{1}=U^{-1}A_{1}\) is computed by Algorithm 1. Notice that all the (1, j)-th (\(2\le j\le n\)) entries of \(A_{1}\) are zero; this means that \(P_{1}[1|j]=0\) for all \(2\le j\le n\). Thus, by Theorem 3, the elimination of (46) is subtraction-free if and only if the fact (41) with (42) is satisfied for the parameter matrix \(P_{1}\in {\mathbb R}^{n \times n}\) as follows:

    $$\begin{aligned} {\left\{ \begin{array}{ll} \mathrm{sign}(P_{1}[j|1:j-1])=f^{(1)}_{j}=s^{(K)}_{j}~\mathrm{or}~0,~\forall ~3\le j\le h ,\\ \mathrm{sign}(P_{1}[1:j-1|j])=s^{(1)}_{j}=s^{(K)}_{j}w^{(1)}_{j-1}w^{(1)}_{j}~\mathrm{or}~0,~\forall ~3\le j\le h ,\\ \mathrm{sign}(P_{1}[1:h|j])=s^{(1)}_{j},~\forall ~h+1\le j\le n, \end{array}\right. } \end{aligned}$$
    (47)

    and consequently, \(A'_{1}\) is a nonsingular CRD matrix whose parameter matrix \(P'_{1}\in {\mathbb R}^{n \times n}\) satisfies that

    $$\begin{aligned} {\left\{ \begin{array}{ll} \mathrm{sign}(P'_{1}[1:j-1|j])=s'^{(1)}_{j}=s^{(K)}_{j}w^{(1)}_{j-1}w^{(1)}_{j}\ne 0,~\forall ~3\le j\le h,\\ \mathrm{sign}(P'_{1}[1:h|j])=s'^{(1)}_{j}=s^{(1)}_{j},~\forall ~h+1\le j\le n,\\ \mathrm{sign}(P'_{1}[i|j])=\mathrm{sign}(P_{1}[i|j]),~~\mathrm{otherwise}. \end{array}\right. } \end{aligned}$$
    (48)

    In particular, the fact (40) is satisfied for the product \(A'_{1}A_{2}\ldots A_{K-1}A'_{K}\).

    • For the sufficiency, it needs to be shown that the fact (41) with (42) holds for all the factors of \(A'_{1}A_{2}\ldots A_{K-1}A'_{K}\). Recall that the fact (41) with (42) is satisfied for all the factors \(A_{i}\) (\(1\le i\le K\)). Because of the connections between \(P_{j}\) and \(P'_{j}\) (\(j=1,K\)), it remains to show that the fact (42) is satisfied for all \(s'^{(1)}_{j}\) (\(3\le j\le h\)) of \(P'_{1}\). This is proved as follows: for all \(3\le j\le h\), by considering that \(\prod _{l=1}^{K}w^{(l)}_{j-1}w^{(l)}_{j}=1\), we have that

      $$\begin{aligned} {\left\{ \begin{array}{ll} s'^{(1)}_{j}s^{(r)}_{j}=s^{(K)}_{j}w^{(1)}_{j-1}w^{(1)}_{j}s^{(r)}_{j}=w^{(1)}_{j-1}w^{(1)}_{j}\prod _{l= r+1}^{ K }w^{(l)}_{j-1}w^{(l)}_{j}\\ \quad \quad \quad \quad =\prod _{l= 2}^{ r }w^{(l)}_{j-1}w^{(l)}_{j},~\mathrm{if }~s^{(r)}_{j}\ne 0~\mathrm{for}~r>1,\\ s'^{(1)}_{j} f^{(r)}_{j}=s^{(K)}_{j}w^{(1)}_{j-1}w^{(1)}_{j}f^{(r)}_{j}=w^{(1)}_{j-1}w^{(1)}_{j}\prod _{l= r}^{ K }w^{(l)}_{j-1}w^{(l)}_{j}\\ \quad \quad \quad \quad =\prod _{l= 2}^{ r-1 }w^{(l)}_{j-1}w^{(l)}_{j},~\mathrm{if }~f^{(r)}_{j}\ne 0~\mathrm{for}~r> 1,\\ s'^{(1)}_{j} f^{(1)}_{j}=s^{(K)}_{j}w^{(1)}_{j-1}w^{(1)}_{j}f^{(1)}_{j}=w^{(1)}_{j-1}w^{(1)}_{j}. \end{array}\right. } \end{aligned}$$

      Therefore, we conclude by the induction assumption that the sufficiency is true.

    • For the necessity, by the induction assumption, the fact (41) with (42) is true for all the factors of \(A'_{1}A_{2}\ldots A_{K-1}A'_{K}\), and in particular,

      $$\begin{aligned} {\left\{ \begin{array}{ll} P'_{K}[1|j]=0,~\forall ~3\le j\le n;\quad P'_{K}[1|2]=P_{K}[1|2];\\ \mathrm{sign}(P'_{K}[j|1:j-1])=\mathrm{sign}(P_{K}[j|1:j-1])= f^{(K)}_{j},~~2\le j\le n; \\ \mathrm{sign}(P'_{K}[1:j-1|j])=\mathrm{sign}(P_{K}[2:j-1|j])= s'^{(K)}_{j},~~3\le j\le h;\\ \mathrm{sign}(P'_{K}[1:j-1|j])=\mathrm{sign}(P_{K}[1:j-1|j])= s^{(K)}_{j},~~h+1\le j\le n;\\ \mathrm{sign}(P'_{K}[j|j])=\mathrm{sign}(P_{K}[j|j])= w^{(K)}_{j},~~1\le j\le n, \end{array}\right. } \end{aligned}$$

      where if \(s'^{(K)}_{j}\ne 0\) for some \(3\le j\le h\), since \(s'^{(1)}_{j}=s^{(K)}_{j}w^{(1)}_{j-1}w^{(1)}_{j}\ne 0\) and \(s'^{(1)}_{j}s'^{(K)}_{j}=\prod _{l=2}^{K}w^{(l)}_{j-1}w^{(l)}_{j}\), we have that \(s'^{(K)}_{j}s^{(K)}_{j}=\prod _{l=1}^{K}w^{(l)}_{j-1}w^{(l)}_{j}=1\), which together with \(\mathrm{sign}(p^{(K)}_{1j})=s^{(K)}_{j}\ne 0\) (\(3\le j\le h\)) implies that

      $$\begin{aligned} \mathrm{sign}(P_{K}[1:j-1|j])= s^{(K)}_{j}\ne 0,~~\forall ~3\le j\le h. \end{aligned}$$

      To show that the fact (41) with (42) holds for all the factors \(A_{i}\) (\(1\le i\le K\)), because of the connections between \(P_{j}\) and \(P'_{j}\) (\(j=1,K\)), it remains to show that the fact (42) is satisfied for all \(s^{(K)}_{j}\) (\(3\le j\le h\)) of \(P_{K}\). This is proved as follows: for all \(3\le j\le h\), by considering that \(s'^{(1)}_{j}\ne 0\) and \(\prod _{l=1}^{K}w^{(l)}_{j-1}w^{(l)}_{j}=1\), since

      $$\begin{aligned} {\left\{ \begin{array}{ll} s'^{(1)}_{j}s^{(r)}_{j}=\prod _{l= 2}^{ r }w^{(l)}_{j-1}w^{(l)}_{j},~\mathrm{if }~s^{(r)}_{j}\ne 0~\mathrm{for}~r\ge 1,\\ s'^{(1)}_{j}f^{(r)}_{j} =\prod _{l=2}^{ r-1}w^{(l)}_{j-1}w^{(l)}_{j},~\mathrm{if }~f^{(r)}_{j}\ne 0~\mathrm{for}~r>1,\\ s'^{(1)}_{j}f^{(1)}_{j}=w^{(1)}_{j-1}w^{(1)}_{j},~\mathrm{if }~f^{(1)}_{j}\ne 0, \end{array}\right. } \end{aligned}$$

      we have that

      $$\begin{aligned} {\left\{ \begin{array}{ll} s^{(r)}_{j}s^{(K)}_{j}=\prod _{l= r +1}^{ K }w^{(l)}_{j-1}w^{(l)}_{j},~\mathrm{if }~s^{(r)}_{j}\ne 0~\mathrm{for}~r\ge 1,\\ f^{(r)}_{j}s^{(K)}_{j}=\prod _{l= r}^{ K}w^{(l)}_{j-1}w^{(l)}_{j},~\mathrm{if }~f^{(r)}_{j}\ne 0~\mathrm{for}~r\ge 1. \end{array}\right. } \end{aligned}$$

      Therefore, we conclude that the necessity is true.

The result is proved. \(\square \)

Remark 2

It must be pointed out that according to the proof of Theorem 5, the condition (40) can be removed for the sufficiency.

Corollary 4

Let \(A=A_{1}\ldots A_{K-1}A_{K}\in {\mathbb R}^{n \times n}\), where each factor \(A_{t}\in {\mathbb R}^{n \times n}\) (\(1\le t\le K\)) is a nonsingular CRD matrix whose parameter matrix \(P_{t} \in {\mathbb R}^{n \times n}\) satisfies that

$$\begin{aligned} P_{t}[1:r-1|r]\ne 0~\mathrm{and}~P_{t}[r|1:r-1]\ne 0,~\forall ~1\le r\le n. \end{aligned}$$

Then Algorithm 2 is subtraction-free to reduce A into a tridiagonal matrix whose basic LR algorithm is subtraction-free if and only if for all \(1\le t\le K\),

$$\begin{aligned} {\left\{ \begin{array}{ll} \mathrm{sign}(P_{t}[j|1:j-1])= f^{(t)}_{j}, \\ \mathrm{sign}(P_{t}[1:j-1|j])= s^{(t)}_{j}, \\ \mathrm{sign}(P_{t}[j|j])= w^{(t)}_{j}, \end{array}\right. } \forall ~1\le j\le n, \end{aligned}$$

where

$$\begin{aligned} {\left\{ \begin{array}{ll} \prod _{l=1}^{K}w^{(l)}_{j-1}w^{(l)}_{j}=1,\\ f^{(t)}_{j}s^{(t)}_{j}=w^{(t)}_{j-1}w^{(t)}_{j},~ \forall ~1\le t\le K,\\ s^{(t)}_{j}=f^{(t+1)}_{j},~\forall ~1\le t\le K-1;~s^{(K)}_{j}=f^{(1)}_{j},\\ \end{array}\right. } \forall ~1\le j\le n. \end{aligned}$$

Corollary 5

Let \(A=A_{1}A^{T}_{2}\ldots A_{2K-1}A^{T}_{2K}\in {\mathbb R}^{n \times n}\) be the product of Vandermonde matrices \(A_{t}=[(x^{(t)}_{i})^{j-1}]_{i,j=1}^{n,n}\) (\(1\le t\le 2K\)) with \(0\ge x^{(t)}_{1}>x^{(t)}_{2}>\cdots >x^{(t)}_{n}\). Then Algorithm 2 is subtraction-free to reduce A into a tridiagonal matrix whose basic LR algorithm is subtraction-free.

Proof

By the formula (4), the parameter matrix \(P_{t}\in {\mathbb R}^{n \times n}\) of \(A_{t}\) (\(1\le t\le 2K\)) satisfies that

$$\begin{aligned} {\left\{ \begin{array}{ll} \mathrm{sign}(P_{t}[j|1:j-1])= 1, \\ \mathrm{sign}(P_{t}[1:j-1|j])= -1, \\ \mathrm{sign}(P_{t}[j|j])=(-1)^{j-1}, \end{array}\right. } \forall ~1\le j\le n. \end{aligned}$$

Thus, we conclude by Corollary 4 that the result is true. \(\square \)
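The sign bookkeeping behind Corollary 5 can be checked mechanically. The following Python sketch (with hypothetical helper names, and with the check restricted to \(2\le j\le n\) so that \(w_{j-1}\) is defined) verifies the three conditions of Corollary 4 for the alternating product \(A_{1}A^{T}_{2}\ldots A_{2K-1}A^{T}_{2K}\), using that transposing a factor swaps the roles of its f and s signs.

```python
# Sanity check of the sign conditions of Corollary 4 for the product
# A_1 A_2^T ... A_{2K-1} A_{2K}^T of Vandermonde factors with nonpositive
# decreasing nodes: f = +1, s = -1, w_j = (-1)^(j-1) for each A_t, and the
# transposed factors swap the roles of f and s.

def vandermonde_signs(n, transposed):
    """Sign pattern (f_j, s_j, w_j) for j = 1..n of one factor."""
    f, s = (-1, 1) if transposed else (1, -1)
    return [(f, s, (-1) ** (j - 1)) for j in range(1, n + 1)]

def check_corollary4(factors):
    """factors: list of per-factor sign lists; returns True iff the three
    conditions hold for every column index 2 <= j <= n (so w_{j-1} exists)."""
    K, n = len(factors), len(factors[0])
    for j in range(1, n):                      # 0-based: column j+1
        prod_w = 1
        for t in range(K):
            f, s, w = factors[t][j]
            w_prev = factors[t][j - 1][2]
            if f * s != w_prev * w:            # f_j s_j = w_{j-1} w_j
                return False
            s_next_f = factors[(t + 1) % K][j][0]
            if s != s_next_f:                  # s^(t)_j = f^(t+1)_j (cyclic)
                return False
            prod_w *= w_prev * w
        if prod_w != 1:                        # prod_l w^(l)_{j-1} w^(l)_j = 1
            return False
    return True

half = 3   # 2K = 6 factors, alternating A_t and A_t^T
factors = [vandermonde_signs(5, transposed=(t % 2 == 1)) for t in range(2 * half)]
print(check_corollary4(factors))               # subtraction-free: True
```

Dropping the transposes (six untransposed factors in a row) breaks the cyclic condition \(s^{(t)}_{j}=f^{(t+1)}_{j}\), and the check correctly fails.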

Besides the example given by Corollary 5, more examples can be given for which the periodic qd-type reduction method is subtraction-free. For example, by the formula (4), the parameter matrix of a Vandermonde matrix \(A=(x_{i}^{j-1})\in {\mathbb R}^{n\times m}\) is nonnegative if \( 0\le x_{1}<x_{2}<\cdots <x_{n}\); and by the formula (5), the parameter matrix of a Cauchy matrix \(A=(\frac{1}{x_{i}+y_{j}})\in {\mathbb R}^{n\times m}\) is positive (or all the diagonal and off-diagonal parameters are negative and positive, respectively) if \( x_{1}<x_{2}<\cdots <x_{n}\), \( y_{1}<y_{2}<\cdots <y_{n}\), \(x_{1}+y_{1}>0\) (or \(x_{n}+y_{n}<0\)). So, any product consisting of such Vandermonde and Cauchy matrices is a desired example. In what follows, we illustrate how to specifically compute all the eigenvalues of such products to high relative accuracy.
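The positivity claims above can be observed numerically. The parameters in question arise from Neville (adjacent-row) elimination; the following Python sketch (an illustration only, not the paper's Algorithm 1) performs Neville elimination in exact rational arithmetic on a small Cauchy matrix with increasing nodes and \(x_{1}+y_{1}>0\), and checks that all multipliers and pivots are positive, which is the total positivity underlying formula (5).

```python
from fractions import Fraction

def neville_multipliers(A):
    """Eliminate below the diagonal using adjacent rows (Neville elimination);
    return the multipliers and the resulting pivots.  For a totally positive
    matrix all multipliers and pivots are positive."""
    A = [row[:] for row in A]
    n = len(A)
    mults = []
    for k in range(n - 1):
        for i in range(n - 1, k, -1):          # bottom-up, adjacent rows
            m = A[i][k] / A[i - 1][k]
            mults.append(m)
            A[i] = [a - m * b for a, b in zip(A[i], A[i - 1])]
    return mults, [A[j][j] for j in range(n)]

# Cauchy matrix with increasing nodes and x_1 + y_1 > 0 (exact arithmetic)
x = [Fraction(1), Fraction(2), Fraction(3), Fraction(4)]
y = [Fraction(1), Fraction(3), Fraction(5), Fraction(7)]
C = [[1 / (xi + yj) for yj in y] for xi in x]

mults, pivots = neville_multipliers(C)
print(all(m > 0 for m in mults), all(p > 0 for p in pivots))  # True True
```

Exact rationals are used so that the positivity observed is a property of the matrix and not a rounding artifact.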

For the product \(A=A_{1}\ldots A_{K-1}A_{K}\in {\mathbb R}^{n \times n}\) with parameter matrices \(P_{t}=(p^{(t)}_{ij})\) of the factors \(A_{t}\) (\(1\le t\le K\)) satisfying the fact (41) with (42), the periodic qd-type method is subtraction-free to reduce A into a tridiagonal matrix \(T\in {\mathbb R}^{n \times n}\) as follows:

$$\begin{aligned} T=\bar{A}_{1}\ldots \bar{A}_{K-1}\bar{A}_{K}=LDU, \end{aligned}$$
(49)

where, letting \(\bar{P}_{t}=(\bar{p}^{(t)}_{ij})\) denote the parameter matrix of \(\bar{A}_{t}\) (\(1\le t\le K\)), we have \(D=\mathrm{diag}(d_{ii})\) with \(d_{ii}=\prod _{l=1}^{K}\bar{p}^{(l)}_{ii}\) and \(\mathrm{sign}(\bar{p}^{(l)}_{ii})=\mathrm{sign}(p^{(l)}_{ii})\) for all i and l; L and U are lower and upper bidiagonal with unit diagonal whose bidiagonal entries are \(l_{i}=\bar{p}^{(1)}_{i,i-1}\) and \(u_{i}=\bar{p}^{(K)}_{i-1,i}\), respectively; and \(\mathrm{sign}(l_{i}u_{i})=1~\mathrm{or}~0\) for all i. Consequently, let \(X=\mathrm{diag}(x_{1},\prod _{i=1}^{2}x_{i},\ldots ,\prod _{i=1}^{n}x_{i})\in {\mathbb R}^{n \times n}\), where

$$\begin{aligned} x_{i}= {\left\{ \begin{array}{ll} \max \{\mathrm{sign}(l_{i}),\mathrm{sign}(u_{i})\},~\mathrm{if}~l_{i}\ne 0~\mathrm{or}~u_{i}\ne 0,\\ 1,\quad \mathrm{otherwise}, \end{array}\right. } 1\le i\le n, \end{aligned}$$

then by the fact \(\prod _{l=1}^{K}\mathrm{sign}(\bar{p}^{(l)}_{i-1,i-1}\bar{p}^{(l)}_{ii})=\prod _{l=1}^{K}\mathrm{sign}(p^{(l)}_{i-1,i-1}p^{(l)}_{ii})=1\) for all i,

$$\begin{aligned} X^{-1}TX= {\left\{ \begin{array}{ll} |L||D||U|,~\mathrm{if}~\prod _{l=1}^{K}\mathrm{sign}(p^{(l)}_{11})=1,\\ -|L||D||U|,~\mathrm{if}~\prod _{l=1}^{K}\mathrm{sign}(p^{(l)}_{11})=-1, \end{array}\right. } \end{aligned}$$

here, \(|\cdot |\) is interpreted componentwise. Further, |L||D||U| has the same eigenvalues as those of the symmetric tridiagonal matrix

$$\begin{aligned} T'=(D'U')^{T}( D'U')=B^{T}B,\quad D'=\mathrm{diag}\left( \sqrt{|d_{ii}|}\right) , \end{aligned}$$
(50)

where \(U'\) is bidiagonal with unit diagonal whose \((i-1,i)\)-th entries are \(u'_{i}=\sqrt{|\bar{p}^{(1)}_{i,i-1}\bar{p}^{(K)}_{i-1,i }|}\) (\(2\le i\le n\)). So, all the eigenvalues of A are positive or negative squares of singular values of B according as \(\prod _{l=1}^{K}\mathrm{sign}(p^{(l)}_{11})=1\) or \(-1\). It is well known that all the singular values of a bidiagonal matrix are accurately computed by the dqds algorithm [13]. Therefore, we have Algorithm 3 to compute eigenvalues of the product satisfying (41) with (42).
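As a numerical illustration of the passage from (49) to (50), the following plain-Python sketch (with small illustrative values of \(d_{ii}\), \(l_{i}\), \(u_{i}\), not data from the paper) builds \(|L||D||U|\) and \(T'=(D'U')^{T}(D'U')\) for \(n=3\) and confirms that they share the same spectrum by comparing traces of powers.

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def trace_of_power(M, k):
    P = M
    for _ in range(k - 1):
        P = matmul(P, M)
    return sum(P[i][i] for i in range(len(M)))

# |L||D||U|: nonnegative bidiagonal data (illustrative values)
d = [1.0, 2.0, 3.0]          # |d_ii|
l = [2.0, 0.5]               # |l_i|, subdiagonal of L (i = 2, 3)
u = [3.0, 1.0]               # |u_i|, superdiagonal of U (i = 2, 3)

L = [[1, 0, 0], [l[0], 1, 0], [0, l[1], 1]]
D = [[d[0], 0, 0], [0, d[1], 0], [0, 0, d[2]]]
U = [[1, u[0], 0], [0, 1, u[1]], [0, 0, 1]]
M = matmul(matmul(L, D), U)

# B = D'U' with D' = diag(sqrt(d_ii)) and u'_i = sqrt(l_i * u_i), as in (50)
dp = [math.sqrt(v) for v in d]
up = [math.sqrt(l[i] * u[i]) for i in range(2)]
B = [[dp[0], dp[0] * up[0], 0], [0, dp[1], dp[1] * up[1]], [0, 0, dp[2]]]
Bt = [list(row) for row in zip(*B)]
Tp = matmul(Bt, B)           # T' = B^T B, symmetric tridiagonal

# Same spectrum <=> same traces of M^k for k = 1..n
for k in (1, 2, 3):
    assert abs(trace_of_power(M, k) - trace_of_power(Tp, k)) < 1e-9
print("spectra agree")
```

Both matrices have the same diagonal and the same products of symmetric off-diagonal pairs, so they are similar by a diagonal scaling, which is exactly what the trace check confirms.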

Algorithm 3 (pseudocode figure)

5.4 Error Analysis

In this subsection, error analysis is provided to illustrate the high relative accuracy of Algorithm 3. We first show that for the product (19) satisfying the fact (41) with (42), all its eigenvalues are determined by its parameters to high relative accuracy.

Theorem 6

Let \(A=A_{1}\ldots A_{K-1}A_{K}\in {\mathbb R}^{n \times n}\) be the product of nonsingular CRD matrices \(A_{t}\in {\mathbb R}^{n \times n}\) (\(1\le t\le K\)) whose parameter matrices \(P_{t}=(p^{(t)}_{ij})\in {\mathbb R}^{n \times n}\) satisfy the fact (41) with (42), and let \(\tilde{A}\in {\mathbb R}^{n \times n}\) be obtained from A by replacing one of these parameters \(p^{(t)}_{ij}\) with \(\tilde{p}^{(t)}_{ij}=p^{(t)}_{ij}(1+\epsilon ^{(t)}_{ij})\) (\(1\le i,j\le n\) and \(1\le t\le K\)), where \(|\epsilon ^{(t)}_{ij}|\le \epsilon \) and \(2\epsilon <1\). Then for the descending-ordered eigenvalues \(\lambda _{i}\) and \(\tilde{\lambda }_{i}\) of A and \(\tilde{A}\), respectively,

$$\begin{aligned} |\tilde{\lambda }_{i}-\lambda _{i}|\le \frac{2\epsilon }{1-2\epsilon }|\lambda _{i}|,\quad \forall ~1\le i\le n. \end{aligned}$$

Proof

According to the fact (41), let \(X=\mathrm{diag}(x_{1},\prod _{j=1}^{2}x_{j},\ldots ,\prod _{j=1}^{n}x_{j})\in {\mathbb R}^{n \times n}\), where for all \( 1\le j\le n\),

$$\begin{aligned} x_{j}= {\left\{ \begin{array}{ll} f^{(r)}_{j}\prod _{l= 1}^{ r-1 }w^{(l)}_{j-1}w^{(l)}_{j},~\mathrm{if }~f^{(r)}_{j}\ne 0~~\mathrm{for~some }~1\le r\le K;\\ s^{(r)}_{j}\prod _{l= 1}^{ r }w^{(l)}_{j-1}w^{(l)}_{j},~\mathrm{if}~f^{(l)}_{j}=0~\mathrm{for~all }~1\le l\le K, \mathrm{but}~s^{(r)}_{j}\ne 0~\mathrm{for~some }~1\le r\le K;\\ 1,~\mathrm{otherwise}; \end{array}\right. } \end{aligned}$$

and thus, let \(Z_{t}=XY_{t}=\mathrm{diag}(z^{(t)}_{j})\in {\mathbb R}^{n \times n}\) where \(Y_{t}=\mathrm{diag}(\prod _{l=1}^{t}w^{(l)}_{j})\in {\mathbb R}^{n \times n}\) for all \(0\le t\le K\) with the convention \(Y_{0}=I_{n}\). Then

$$\begin{aligned} Z_{0}AZ_{K}=(Z_{0}A_{1}Z_{1})(Z_{1}A_{2}Z_{2})\ldots (Z_{K-1}A_{K}Z_{K})=\bar{A}_{1}\bar{A}_{2}\ldots \bar{A}_{K}, \end{aligned}$$

where for each \(1\le t\le K\), since \(A_{t}=B_{1}\ldots B_{n-1}DC_{n-1}\ldots C_{1}\) is as in (1) satisfying the fact (41), we have

$$\begin{aligned} \bar{A}_{t}=Z_{t-1}A_{t}Z_{t}= & {} |B_{1}|\ldots |B_{n-1}| (Z_{t-1}D)C_{n-1}\ldots C_{1}Z_{t}\\= & {} |B_{1}|\ldots |B_{n-1}||D|(Z_{t} C_{n-1}\ldots C_{1})Z_{t}\\= & {} |B_{1}|\ldots |B_{n-1}||D||C_{n-1}|\ldots |C_{1}|(Z_{t}Z_{t})\\= & {} |B_{1}|\ldots |B_{n-1}||D||C_{n-1}|\ldots |C_{1}|, \end{aligned}$$

because the following statements hold by using the fact (42):

  • first, for any \(f^{(t)}_{j}\ne 0\) (\(2\le j\le n\)),

    $$\begin{aligned} f^{(t)}_{j}z^{(t-1)}_{j-1}z^{(t-1)}_{j}=f^{(t)}_{j}x_{j}\prod _{l= 1}^{ t-1 }w^{(l)}_{j-1}w^{(l)}_{j}= f^{(t)}_{j}\left( f^{(r)}_{j}\prod _{l= 1}^{ r-1 }w^{(l)}_{j-1}w^{(l)}_{j}\right) \prod _{l= 1}^{ t-1 }w^{(l)}_{j-1}w^{(l)}_{j}=1, \end{aligned}$$

    such that

    $$\begin{aligned} Z_{t-1}B_{i}Z_{t-1}=|B_{i}|,~\forall ~1\le i\le n-1; \end{aligned}$$
  • further, for any \(w^{(t)}_{j}\) (\(1\le j\le n\)),

    $$\begin{aligned} w^{(t)}_{j}z^{(t-1)}_{j}=z^{(t)}_{j},~\forall ~1\le j\le n \end{aligned}$$

    such that

    $$\begin{aligned} Z_{t-1}D=|D|Z_{t}; \end{aligned}$$
  • finally, for any \(s^{(t)}_{j}\ne 0\) (\(2\le j\le n\)),

    $$\begin{aligned} s^{(t)}_{j}z^{(t)}_{j-1}z^{(t)}_{j}=s^{(t)}_{j}x_{j}\prod _{l= 1}^{ t }w^{(l)}_{j-1}w^{(l)}_{j}= s^{(t)}_{j}\left( s^{(r)}_{j}\prod _{l= 1}^{ r }w^{(l)}_{j-1}w^{(l)}_{j}\right) \prod _{l= 1}^{ t }w^{(l)}_{j-1}w^{(l)}_{j}=1, \end{aligned}$$

    such that

    $$\begin{aligned} Z_{t}C_{i}Z_{t}=|C_{i}|,~\forall ~1\le i\le n-1.\end{aligned}$$

The fact \(\prod _{l= 1}^{ K }w^{(l)}_{j-1}w^{(l)}_{j}=1\) implies that \(\prod _{l= 1}^{ K }w^{(l)}_{j-1}=\prod _{l= 1}^{ K }w^{(l)}_{j}\) for all j. So, \(Z_{K}=(\prod _{l= 1}^{ K }w^{(l)}_{1})\cdot X\), and thus,

$$\begin{aligned} X^{-1}AX=\left( \prod _{l= 1}^{ K }w^{(l)}_{1}\right) \cdot \bar{A}_{1}\bar{A}_{2}\ldots \bar{A}_{K}=\pm \left( \bar{A}_{1}\bar{A}_{2}\ldots \bar{A}_{K}\right) \end{aligned}$$

where each \(\bar{A}_{t}\) (\(1\le t\le K\)) is a nonsingular CRD matrix whose parameter matrix \(\bar{P}_{t}=(|p^{(t)}_{ij}|)\in {\mathbb R}^{n \times n}\). This means that \(\bar{A}_{1}\bar{A}_{2}\ldots \bar{A}_{K}\) is just a product of nonnegative bidiagonal matrices. Therefore, we conclude by [19, Theorem 7.2] that the result is true. \(\square \)
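The bound of Theorem 6 can be observed on a toy case. The following Python sketch (hypothetical \(2\times 2\) data) perturbs one parameter of a product of nonnegative bidiagonal factors by \(1+\epsilon \) and checks that both eigenvalues move by a relative amount within \(2\epsilon /(1-2\epsilon )\).

```python
import math

def eig2(A):
    """Eigenvalues of a 2x2 matrix via the quadratic formula (descending)."""
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    r = math.sqrt(tr * tr - 4.0 * det)
    lam1 = (tr + r) / 2.0
    return lam1, det / lam1          # smaller eigenvalue via det/lam1

def build(d1, d2, l, u):
    """A = L * diag(d1, d2) * U with unit-diagonal bidiagonal L, U,
    i.e. L = [[1, 0], [l, 1]] and U = [[1, u], [0, 1]]."""
    return [[d1, d1 * u], [l * d1, l * d1 * u + d2]]

eps = 1e-3
A  = build(1.0, 2.0, 1.0, 1.0)               # all parameters nonnegative
At = build(1.0 * (1 + eps), 2.0, 1.0, 1.0)   # perturb the parameter d1

bound = 2 * eps / (1 - 2 * eps)
for lam, lam_t in zip(eig2(A), eig2(At)):
    rel = abs(lam_t - lam) / abs(lam)
    assert rel <= bound + 1e-12              # the Theorem 6 bound holds
print("relative changes within", bound)
```

The smaller eigenvalue is computed as det divided by the larger one, which avoids the cancellation in the direct quadratic-formula branch.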

Now we are ready to show the high relative accuracy of Algorithm 3 as follows.

Theorem 7

Let \(A=A_{1}\ldots A_{K-1}A_{K}\in {\mathbb R}^{n \times n}\) be the product of nonsingular CRD matrices \(A_{t}\in {\mathbb R}^{n \times n}\) (\(1\le t\le K\)) satisfying the fact (41) with (42), and let \(\lambda _{i}\) and \(\hat{\lambda }_{i}\) be its descending-ordered exact and computed eigenvalues by Algorithm 3, respectively. Then

$$\begin{aligned} |\hat{\lambda }_{i}-\lambda _{i}|\le \frac{O(2Kn^{3})\mu }{1-O(2Kn^{3})\mu }|\lambda _{i}|,\quad i=1,2,\ldots ,n. \end{aligned}$$
Fig. 1 The computed eigenvalues of the product in Example 1

Fig. 2 The relative errors for the computed eigenvalues of the product in Example 1

Fig. 3 The computed eigenvalues of the product in Example 2

Proof

For the first stage of Algorithm 3, let T and \(\hat{T}\) be the exact and computed tridiagonal forms obtained from A without any subtractions of like-signed numbers in \(O(Kn^{3})\) arithmetic operations, respectively. Consider that every single subtraction-free arithmetic operation causes at most \(\mu \) relative perturbation in at most one parameter of the factors of A. So, by Theorem 6, all the descending-ordered eigenvalues \(\lambda _{i}(\hat{T})\) and \(\lambda _{i}(T)\) satisfy that

$$\begin{aligned} |\lambda _{i}(\hat{T})-\lambda _{i}(T)|\le \frac{O(2Kn^{3})\mu }{1-O(2Kn^{3})\mu }|\lambda _{i}(T)|,~~\lambda _{i}(T)=\lambda _{i}(A),\quad i=1,2,\ldots ,n. \end{aligned}$$

Further, for the second stage of Algorithm 3, let B and \(\hat{B}\) be the exact and computed bidiagonal matrices obtained from \(\hat{T}\) without any subtractions of like-signed numbers, respectively; then by [8, Theorem 2], all the descending-ordered singular values \(\sigma _{i}(\hat{B})\) and \(\sigma _{i}(B)\) satisfy that

$$\begin{aligned} |\sigma _{i}(\hat{B})-\sigma _{i}(B)|\le \frac{O(n^{3})\mu }{1-O(n^{3})\mu }\sigma _{i}(B),~~\sigma ^{2}_{i}(B)=|\lambda _{i}(\hat{T})|,\quad i=1,2,\ldots ,n. \end{aligned}$$

In addition, the dqds algorithm computes each singular value of \(\hat{B}\) with a relative error not exceeding \(O(n^{2})\mu \) [13, 25]. Therefore, we conclude that the result is true. \(\square \)
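The dqds recurrence invoked in the second stage can be sketched in a few lines. The following Python code implements a zero-shift (dqd) sweep, a simplified illustration of [13] without the shifts and deflation of the production algorithm; note that it uses only additions, multiplications and divisions of positive numbers, i.e., it is subtraction-free. It is applied to a \(2\times 2\) bidiagonal matrix whose squared singular values are known from the quadratic formula.

```python
import math

def dqd_step(q, e):
    """One zero-shift dqd sweep on the qd arrays (all entries positive);
    subtraction-free: only +, *, / on positive numbers."""
    n = len(q)
    qh, eh = [0.0] * n, [0.0] * (n - 1)
    d = q[0]
    for k in range(n - 1):
        qh[k] = d + e[k]
        eh[k] = e[k] * (q[k + 1] / qh[k])
        d = d * (q[k + 1] / qh[k])
    qh[n - 1] = d
    return qh, eh

# Bidiagonal B = [[a, b], [0, c]]; its squared singular values are the
# eigenvalues of B^T B, i.e. the roots of z^2 - (a^2+b^2+c^2) z + a^2 c^2.
a, b, c = 2.0, 1.0, 1.0
q, e = [a * a, c * c], [b * b]
for _ in range(200):                 # iterate until e -> 0
    q, e = dqd_step(q, e)

tr, det = a * a + b * b + c * c, (a * c) ** 2
z1 = (tr + math.sqrt(tr * tr - 4 * det)) / 2
z2 = det / z1
assert abs(q[0] - z1) < 1e-9 and abs(q[1] - z2) < 1e-9
print("squared singular values:", q[0], q[1])
```

Each sweep preserves the spectrum (here, the trace \(q_{1}+e_{1}+q_{2}\) and the determinant \(q_{1}q_{2}\)), and the q array converges to the squared singular values in descending order.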

Fig. 4 The relative errors for the computed eigenvalues of the product in Example 2

Fig. 5 The computed eigenvalues of the product in Example 3

Table 1 The relative errors for computing nonzero eigenvalues in Example 3

6 Numerical Experiments

In this section, we provide numerical experiments to confirm the high relative accuracy of our proposed method by measuring the relative error \( |\hat{\lambda }_{i}-\lambda _{i}|/ |\lambda _{i}|\) of the computed eigenvalues \(\hat{\lambda }_{i}\), where \(\lambda _{i}\) is the exact descending-ordered eigenvalue computed by Mathematica with 200-decimal-digit arithmetic. All the tests are conducted in MATLAB 7.0 in double precision arithmetic.
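The loss of accuracy that these experiments quantify, namely that explicitly forming a product destroys its small eigenvalues, can already be seen on a \(2\times 2\) toy case in Python (illustrative, not one of the examples below): the determinant of \(B^{6}\) computed from the explicitly formed product is lost to cancellation, whereas \((\lambda _{\min }(B))^{6}\) computed from the factor itself stays accurate.

```python
import math

def matmul2(A, B):
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

delta = 1e-8
B = [[1.0, 1.0], [1.0, 1.0 + delta]]         # det(B) = delta, nearly singular

# Route 1: work with the factor.  lam_min(B)^6 keeps its relative accuracy.
tr = B[0][0] + B[1][1]
det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
lam_max = (tr + math.sqrt(tr * tr - 4 * det)) / 2
lam_min6 = (det / lam_max) ** 6              # close to (delta/2)^6

# Route 2: form A = B^6 explicitly, then take its determinant.
A = B
for _ in range(5):
    A = matmul2(A, B)
det_formed = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # true value: delta^6

exact = delta ** 6
print("factor route  :", lam_min6)
print("formed product:", det_formed)
assert abs(det_formed - exact) / exact > 0.99   # total loss of accuracy
assert abs(lam_min6 - (delta / 2) ** 6) / (delta / 2) ** 6 < 1e-5
```

The entries of the formed \(B^{6}\) are of order 32 while its true determinant is \(10^{-48}\), far below the rounding noise of the final subtraction, so any information about the small eigenvalue is gone.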

Example 1

Consider the product \(A=B^{6}\) with \(B=\left[ \frac{n^{2}}{x_{i}+y_{j}}\right] _{i,j=1}^{n,n}\), where the descending-ordered vectors \(x=(x_{i}),y=(y_{j})\in \mathbb {R}^{1\times n}\) are randomly chosen by the command rand. Set \(n=30\). The spectral condition number \(k_{2}(B)= 7.3453e+049\) by Mathematica. We compute eigenvalues of A by Algorithm 3 and the Matlab command \(\mathtt{(eig(B))}^{6}\), respectively. The absolute values and relative errors of the computed eigenvalues are plotted in Figs. 1 and 2, respectively, where the maximum relative errors by Algorithm 3 and \(\mathtt{(eig(B))}^{6}\) are \(8.6766e-013\) and \( 6.5770e+179\), respectively.

Example 2

Consider the product \(A=(BB^{T})^{3}\) with \(B=\left[ (-x_{i}\cdot 10)^{j-1}\right] _{i,j=1}^{n,m}\), where the ascending-ordered vector \(x=(x_{i})\in \mathbb {R}^{1\times n}\) is randomly chosen by the command rand. Set \(n=100\) and \(m=30\). Then \(k_{2}(B)= 2.5298e+036\) by Mathematica. We compute eigenvalues of A by Algorithm 3 and the Matlab command \(\mathtt{(svd(B))}^{6}\), respectively. The absolute values and relative errors of the computed eigenvalues are plotted in Figs. 3 and 4, respectively, where the maximum relative errors by Algorithm 3 and \(\mathtt{(svd(B))}^{6}\) are \(1.5229e-014\) and \( 2.4737e+023\), respectively.

Fig. 6 The computed eigenvalues of the product in Example 4

Table 2 The relative errors for computing nonzero eigenvalues in Example 4

Fig. 7 The computed eigenvalues of the product in Example 5

Fig. 8 The relative errors for the computed eigenvalues of the product in Example 5

Example 3

Consider the product \(A=A_{1}A_{2}A_{3}A_{4}A_{5}A_{6}\in \mathbb {R}^{n_{1}\times n_{1}}\), where

$$\begin{aligned} A_{1}=\left[ \left( \frac{1}{n_{1}-i+1}\right) ^{j-1}\right] _{i,j=1}^{n_{1},n_{2}},A_{2}=\left[ \left( \frac{-i^{2}}{n_{2}}\right) ^{j-1}\right] _{i,j=1}^{n_{2},n_{3}}, A_{3}=\left[ \left( -n_{4}-j\right) ^{i-1}\right] _{i,j=1}^{n_{3},n_{4}}, \end{aligned}$$

and

$$\begin{aligned} A_{4}=\left[ \frac{n_{4}}{i+j-1} \right] _{i,j=1}^{n_{4},n_{5}},~A_{5}=\left[ \frac{n_{6}}{-i-j}\right] _{i,j=1}^{n_{5},n_{6}},~A_{6}=\left[ \frac{n_{1}}{i+j}\right] _{i,j=1}^{n_{6},n_{1}}. \end{aligned}$$

By the formulas (4) and (5), the fact (41) with (42) is satisfied. Set \(n_{1}=100,~n_{2}=30,~n_{3} = 40,~n_{4} = 50,~n_{5} = 60,~n_{6} = 70\). Then \(k_{2}(A_{1})= 1.7796e+042\), \(k_{2}(A_{2})=1.9307e+060\), \(k_{2}(A_{3})= 4.0401e+109\), \(k_{2}(A_{4})= 3.9278e+069\), \(k_{2}(A_{5})= 1.5132e+085\) and \(k_{2}(A_{6})= 1.4911e+095\) by Mathematica. Notice that 70 eigenvalues of A are zero. We compute eigenvalues of A by Algorithm 3 and Matlab command \(\mathtt{eig}(A)\), respectively. The absolute values of all the computed eigenvalues are plotted in Fig. 5. As observed, these 70 zero eigenvalues are not correctly computed by \(\mathtt{eig}(A)\). The relative errors for computing 30 nonzero eigenvalues are reported in Table 1, which confirms that all the eigenvalues of A are computed by Algorithm 3 to high relative accuracy.

Example 4

Consider the product \(A=A_{1}A_{2}A_{3}A_{4}A_{5}A_{6}\in \mathbb {R}^{n_{1}\times n_{1}}\), where

$$\begin{aligned} A_{1}=\left[ \left( \frac{-1}{n_{1}-i+1}\right) ^{j-1}\right] _{i,j=1}^{n_{1},n_{2}},A_{2}=[(-n_{3}-j)^{i-1}]_{i,j=1}^{n_{2},n_{3}}, A_{3}=\left[ \left( \frac{-1}{n_{3}-i+1}\right) ^{j-1}\right] _{i,j=1}^{n_{3},n_{4}}, \end{aligned}$$

and

$$\begin{aligned} A_{4}=[(-n_{5}-j)^{i-1}]_{i,j=1}^{n_{4},n_{5}},~A_{5}=\left[ \left( \frac{-1}{n_{5}-i+1}\right) ^{j-1}\right] _{i,j=1}^{n_{5},n_{6}},~A_{6}=[(-n_{1}-j)^{i-1}]_{i,j=1}^{n_{6},n_{1}}. \end{aligned}$$

By the formula (4), the fact (41) with (42) is satisfied. Set \(n_{1}=70,~n_{2}=30,~n_{3} = 40,~n_{4} = 40,~n_{5} = 50,~n_{6} = 60\). Then \(k_{2}(A_{1})= 8.3931e+042\), \(k_{2}(A_{2})=1.9573e+078\), \(k_{2}(A_{3})= 9.2919e+068\), \(k_{2}(A_{4})= 4.0401e+109\), \(k_{2}(A_{5})= 6.2048e+089\) and \(k_{2}(A_{6})= 3.0557e+175\) by Mathematica. Notice that 40 eigenvalues of A are zero. We compute eigenvalues of A by Algorithm 3 and Matlab command \(\mathtt{eig}(A)\), respectively. The absolute values of all the computed eigenvalues are plotted in Fig. 6. As observed, these 40 zero eigenvalues are not correctly computed by \(\mathtt{eig}(A)\). The relative errors for computing 30 nonzero eigenvalues are reported in Table 2, which confirms the high relative accuracy of Algorithm 3.

Finally, we show the accuracy of our method even when the fact (41) with (42) is not satisfied.

Example 5

Consider the product \(A=B^{6}\in \mathbb {R}^{n\times n}\) with \(B=\left[ (\frac{n}{n+i})^{j-1}\right] _{i,j=1}^{n,n}\). Set \(n=30\). Then \(k_{2}(B)= 7.9202e+037\) by Mathematica. The fact (41) with (42) is not satisfied for A. Nevertheless, Algorithm 3 computes the eigenvalues of A more accurately than the command \(\mathtt{(eig(B))}^{6}\) does, especially for eigenvalues of small magnitude. All the absolute values and relative errors of the computed eigenvalues are plotted in Figs. 7 and 8, respectively. The maximum relative error by Algorithm 3 is \( 2.5882e-011\), whereas the maximum relative error by \(\mathtt{(eig(B))}^{6}\) is \(3.1404e+117\).