1 Introduction

The real number field and the complex number field are denoted by \(\mathbb {R}\) and \(\mathbb {C}\), respectively. Let \(E^{(k)}_{i,j}\) be the \(k\times k\) matrix with 1 in its (i, j) entry and zeros elsewhere, \(i=1, 2, \ldots , k\) and \(j=1, 2, \ldots , k\). In this paper, we study the inverse eigenvalue problem for quasi-tridiagonal matrices, the class of matrices of the form

$$\begin{aligned} J= \overline{b}_nE^{(n)}_{1,m}+b_nE^{(n)}_{m,1}+\sum _{k = 1}^na_kE^{(n)}_{k,k}+\sum _{k = 1}^{n-1}b_kE^{(n)}_{k,k+1}+\sum _{k = 1}^{n-1} \overline{b}_kE^{(n)}_{k+1,k}, \end{aligned}$$
(1.1)

where \(a_k\in \mathbb {R}\) and \(b_k\in \mathbb {C}-\mathbb {R}\) for \(k=1, 2, \ldots , n\), \(\overline{b}_{k}\) is the complex conjugate of \(b_{k}\), and \(\overline{b}_{n}\) lies in a given entry (1, m) (the first row and mth column) of the matrix J, with \(3\le m\le n\). In particular, when \(m=n\), we have \(J=H\), where

$$\begin{aligned} H= \overline{b}_nE^{(n)}_{1,n}+b_nE^{(n)}_{n,1}+\sum _{k = 1}^na_kE^{(n)}_{k,k}+\sum _{k = 1}^{n-1}b_kE^{(n)}_{k,k+1}+\sum _{k = 1}^{n-1} \overline{b}_kE^{(n)}_{k+1,k}. \end{aligned}$$
(1.2)

The goal of this paper is to find numbers \(a_{1},\dots ,a_n\) and \(b_{1},\dots ,b_{n}\) such that the resulting matrix J has exactly the given eigenvalues. Thus, the prescribed eigenvalues are the input of the problem, and the matrix J is the output. The characteristic polynomial of the \(n\times n\) matrix J is \(\chi _{n}\left( \lambda \right) =\mathrm{det}\left( \lambda {I_{n}}-J\right) \), where \(I_n\) is the \(n\times n\) identity matrix. Let \(\sigma \left( J\right) =\{\lambda _{1}, \ldots , \lambda _{n}\}\) be the spectrum of J. Since J is Hermitian, its eigenvalues \({\lambda _{1}}, \ldots , {\lambda _{n}}\) are real.
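For concreteness, the matrix J is easy to assemble numerically. The following Python sketch (ours; the sample values of \(a_k\), \(b_k\), n and m are arbitrary) builds a J of the form (1.1) and confirms that it is Hermitian with real eigenvalues.

```python
import numpy as np

def build_J(a, b, m):
    """Assemble the quasi-tridiagonal matrix J of (1.1).

    a : real numbers a_1..a_n (the diagonal)
    b : complex numbers b_1..b_n; b_n couples the entries (1, m) and (m, 1)
    m : column index of conj(b_n) in the first row, 3 <= m <= n
    """
    n = len(a)
    J = np.diag(np.asarray(a, dtype=complex))
    for k in range(n - 1):              # b_1..b_{n-1} on the off-diagonals
        J[k, k + 1] = b[k]
        J[k + 1, k] = np.conj(b[k])
    J[0, m - 1] = np.conj(b[n - 1])     # conj(b_n) at entry (1, m)
    J[m - 1, 0] = b[n - 1]              # b_n at entry (m, 1)
    return J

# arbitrary sample data with n = 5, m = 4
a = [1.0, 2.0, 3.0, 4.0, 5.0]
b = [1 + 2j, 2 - 1j, 0.5 + 1j, 1 - 3j, 2 + 2j]
J = build_J(a, b, m=4)
assert np.allclose(J, J.conj().T)       # J is Hermitian
print(np.linalg.eigvalsh(J))            # its eigenvalues are real
```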

Inverse eigenvalue problems for various classes of matrices have attracted much attention [10, 13, 14, 16, 19]. They arise in a remarkable variety of applications, including system and control theory, geophysics, molecular spectroscopy, particle physics, and structural analysis; moreover, their place in mathematical physics today is largely determined by the unexpected connection between inverse problems and nonlinear evolution equations discovered in 1967. If \(b_{k} \ne 0\) is real for \(k=1, \dots , n\) and \(b_{n}\) and \(\overline{b}_{n}\) occupy the (1, n) (first row, nth column) and (n, 1) entries of the above matrices, the matrices H of the form (1.2) are called periodic Jacobi matrices. Owing to their wide applications in science and engineering [8], the corresponding inverse eigenvalue problems have received much attention [1, 6, 7, 15, 18], and algorithms have been found to reconstruct matrices with prescribed eigenvalues [1, 6, 15, 18]. In [17], four inverse eigenvalue problems for pseudo-Jacobi matrices were considered, and necessary and sufficient conditions for their solvability were established.

If \(b_{n}=0\) in a periodic Jacobi matrix, the matrix reduces to a tridiagonal matrix, called a Jacobi matrix. The inverse eigenvalue problems for this family have also been solved, and a variety of algorithms for constructing Jacobi matrices have been presented [2, 5, 9, 11, 12]. In addition, Moerbeke showed in [12] that the eigenvalues of a real Jacobi matrix are distinct real numbers.

The study of Jacobi and periodic Jacobi matrices is relatively mature, and in recent years extensions of these two classes have been investigated. In 2001, the properties of complex Jacobi matrices were studied in [13], extending Jacobi matrices to the complex domain. In 2013, inverse spectral problems for pseudo-symmetric matrices, whose form is similar to that of periodic Jacobi matrices, were discussed in [18], extending periodic Jacobi matrices to a non-symmetric form. Also in 2013, Bebiano investigated a class of non-self-adjoint periodic tridiagonal matrices, some of whose entries are complex [15].

In this article, the inverse eigenvalue problem for the matrix J, whose off-band entries \(\overline{b}_{n}\) and \(b_{n}\) may lie in any admissible entries (1, m) and (m, 1), is solved. The following notions are relevant to quasi-tridiagonal matrices.

The principal submatrix G of J is obtained by deleting the first row and the first column of J; we denote it by

$$\begin{aligned} G= \sum _{k = 2}^na_kE^{(n-1)}_{k-1,k-1}+\sum _{k = 2}^{n-1}b_kE^{(n-1)}_{k-1,k}+\sum _{k = 2}^{n-1} \overline{b}_kE^{(n-1)}_{k,k-1}. \end{aligned}$$

In the following, the characteristic polynomial of G is denoted by \(\psi _{n-1}\left( \lambda \right) =\mathrm{det}\left( \lambda {I_{n-1}}-G\right) \), and the spectrum of G is \(\sigma \left( G\right) =\{{\mu _{1}}, \ldots , {\mu _{n-1}}\}\). We shall also use the product of the off-diagonal elements of J: let \(\beta =(-1)^n\prod _{k=1}^nb_k\), and let \(\mathrm{Re}(\beta )\) denote the real part of \(\beta \). Several other matrices will be involved as well; we define them as

$$\begin{aligned} J^-= & {} -\overline{b}_nE^{(n)}_{1,m}-b_nE^{(n)}_{m,1}+\sum _{k = 1}^na_kE^{(n)}_{k,k}+\sum _{k = 1}^{n-1}b_kE^{(n)}_{k,k+1}+\sum _{k = 1}^{n-1} \overline{b}_kE^{(n)}_{k+1,k},\\ H^-= & {} J^-,\ (m=n),\\ {\hat{H}}= & {} \sum _{k = 1}^na_kE^{(n)}_{k,k}+\sum _{k = 1}^{n-1}b_kE^{(n)}_{k,k+1}+\sum _{k = 1}^{n-1} \overline{b}_kE^{(n)}_{k+1,k},\\ L= & {} \sum _{k = 2}^{n-1}a_kE^{(n-2)}_{k-1,k-1}+\sum _{k = 2}^{n-2}b_kE^{(n-2)}_{k-1,k}+\sum _{k = 2}^{n-2} \overline{b}_kE^{(n-2)}_{k,k-1}. \end{aligned}$$
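As a companion to the sketch above, the auxiliary matrices can be obtained from J by slicing and sign changes; a minimal sketch (function name ours, reusing build_J from Sect. 1):

```python
import numpy as np

def auxiliaries(J, b, m):
    """G, J^-, H-hat, L and beta, derived from a matrix J of the form (1.1)."""
    n = J.shape[0]
    G = J[1:, 1:].copy()                 # delete the first row and column of J
    J_minus = J.copy()                   # J^-: flip the sign of the corner pair
    J_minus[0, m - 1] *= -1
    J_minus[m - 1, 0] *= -1
    H_hat = J.copy()                     # H-hat: drop the corner pair entirely
    H_hat[0, m - 1] = H_hat[m - 1, 0] = 0
    L = J[1:-1, 1:-1].copy()             # delete the first and last rows/columns
    beta = (-1) ** n * np.prod(np.asarray(b))
    return G, J_minus, H_hat, L, beta
```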

This paper is organized as follows. The eigenvalue properties of J, as well as the relative location of the eigenvalues of J and G, are discussed in Sect. 2; we show that the eigenvalues of J and G satisfy an interlacing property. In Sect. 3, the inverse eigenvalue problems for a family of quasi-tridiagonal matrices are explored: the construction of bordered diagonal matrices with given eigenvalues is solved first, sufficient conditions for the solvability of the inverse eigenvalue problem of quasi-tridiagonal matrices are presented, and the reconstruction of J is analyzed. Algorithms describing the construction of J are given in Sect. 4. In Sect. 5, numerical examples illustrate the algorithms and demonstrate that they are practical.

2 Eigenvalue Problem of Quasi-tridiagonal Matrices

Lemma 2.1

[18] The eigenvalues of G are distinct real numbers, that is, \({\mu _{1}}, \ldots , {\mu _{n-1}}\) are real and simple.

The following lemma presents necessary and sufficient conditions for J and G to have a common eigenvalue.

Lemma 2.2

Let \({\mu _{1}}, \ldots , {\mu _{n-1}}\) be the distinct eigenvalues of G, and let \(u_{k}=[{u_{k,1}}, \ldots , {u_{k,n-1}}]^\mathrm{T} \in \mathbb {C}^{n-1}\) be the unit eigenvector of G corresponding to \(\mu _{k}\), where \(u_{k,1}\) and \(u_{k,m-1}\) are the first and \((m-1)\)th entries of \(u_{k}\), respectively. Then \(\mu _{k}\) is an eigenvalue of J if and only if \(|b_{1}{u_{k,1}}+\overline{b}_{n}{u_{k,m-1}}| = 0\).

Proof

As a bridge, we first define a bordered diagonal matrix as

$$\begin{aligned} A=\left[ {\begin{array}{c@{\quad }c} 1 &{} 0 \\ 0 &{} U^* \\ \end{array}} \right] J\left[ {\begin{array}{c@{\quad }c} 1 &{} 0 \\ 0 &{} U \end{array}} \right] , \end{aligned}$$
(2.1)

where the kth row of U is taken as \({u_{k}^*}\) (here \(^*\) denotes the conjugate transpose); it follows that \(U^*=\left[ u_{1}, \ldots , u_{n-1}\right] \). Clearly, U is a unitary matrix, that is, \(U^*U=I_{n-1}\) (the \((n-1)\times (n-1)\) identity matrix).

Let \(e_k\) be the \((n-1)\)-dimensional row vector with 1 at its kth element and zeros elsewhere for \(k=1, \dots , n-1\). The matrix J can be expressed as

$$\begin{aligned} J=\left[ {\begin{array}{c@{\quad }c} a_{1} &{} h \\ h^* &{} G \end{array}} \right] , \end{aligned}$$
(2.2)

where \(h=b_1e_{1}+ \overline{b}_{n}e_{m-1}\). In addition, let

$$\begin{aligned} A=\left[ {\begin{array}{c@{\quad }c} a_{1} &{} c^* \\ c &{} M \end{array}} \right] , \end{aligned}$$
(2.3)

where \(M=\mathrm{diag}\left( \mu _{1}, \ldots , \mu _{n-1}\right) \). Clearly, M is unitarily similar to the tridiagonal matrix G; that is, the following formulas hold:

$$\begin{aligned} M=U^{-1}GU=U^*GU, \ \ G=UMU^*. \end{aligned}$$

It is then straightforward to compute

$$\begin{aligned} \psi _{n-1}\left( \lambda \right)= & {} \mathrm{det}\left( \lambda {I_{n-1}}-G\right) =\mathrm{det}\left( \lambda {I_{n-1}}-M\right) = \prod _{k=1}^{n-1}\left( \lambda -\mu _{k}\right) .\\ \chi _{n}\left( \lambda \right)= & {} \mathrm{det}\left( \lambda {I_{n}}-J\right) = \left| {\begin{array}{c@{\quad }c} \lambda -a_{1} &{} -h \\ -h^* &{} \lambda {I_{n-1}}-G \\ \end{array}} \right| \\ {}= & {} \left( \lambda -a_{1}\right) \mathrm{det}\left( \lambda {I_{n-1}}-G\right) -\mathrm{det}\left( \lambda {I_{n-1}}-G\right) h{\left( \lambda {I_{n-1}}-G\right) ^{-1}}{h^{*}} \\ {}= & {} \left( \lambda -a_{1}\right) \mathrm{det}\left( \lambda {I_{n-1}}-G\right) -\mathrm{det}\left( \lambda {I_{n-1}}-G\right) hU^*{\left( \lambda {I_{n-1}}-M\right) ^{-1}}U{h^{*}} \\ {}= & {} \left( \lambda -a_{1}\right) \psi _{n-1}\left( \lambda \right) -\psi _{n-1}\left( \lambda \right) \sum _{k=1}^{n-1}{\frac{{{\left| h{u_{k}}\right| }^2}}{{\lambda -\mu _{k}}}} \\ {}= & {} \left( \lambda -a_{1}\right) \psi _{n-1}\left( \lambda \right) -\psi _{n-1}\left( \lambda \right) \sum _{k=1}^{n-1}{\frac{{\alpha _{k}}}{{\lambda -\mu _{k}}}}, \end{aligned}$$

where \(\alpha _k={\left| b_{1}{u_{k,1}}+\overline{b}_{n}{u_{k,m-1}}\right| }^2\). It is equivalent to

$$\begin{aligned} {\frac{{\chi _{n}\left( \lambda \right) }}{{\psi _{n-1} \left( \lambda \right) }}}=\lambda -a_{1} -\sum _{k=1}^{n-1}{\frac{{\alpha _{k}}}{{\lambda -\mu _{k}}}}. \end{aligned}$$
(2.4)

Then

$$\begin{aligned} \chi _{n}\left( \lambda \right) =\prod _{j=1}^{n-1}\left( \lambda -\mu _{j}\right) \left( \lambda -a_{1}-\sum _{k=1}^{n-1}{\frac{{\alpha _{k}}}{{\lambda -\mu _{k}}}}\right) . \end{aligned}$$
(2.5)

Substituting \(\lambda =\mu _{k}\) into (2.5) and noting that \(\psi _{n-1}(\lambda )/(\lambda -\mu _{k})=\prod _{\begin{array}{c} j=1\\ j\ne k \end{array}}^{n-1}(\lambda -\mu _{j})\), we obtain

$$\begin{aligned} \chi _{n}\left( \mu _{k}\right) =-\alpha _{k}\prod _{\begin{array}{c} j=1\\ j\ne k \end{array}}^{n-1}{\left( \mu _{k}-\mu _{j}\right) }. \end{aligned}$$
(2.6)

Differentiating both sides of the polynomial \(\psi _{n-1}\left( \lambda \right) \), we see

$$\begin{aligned} \psi '_{n-1}\left( \lambda \right) =\sum _{k=1}^{n-1}{\prod _{\begin{array}{c} j=1\\ j\ne k \end{array}}^{n-1}\left( \lambda -\mu _{j}\right) }. \end{aligned}$$

As the eigenvalues of G are simple, \(\psi '_{n-1}\left( \mu _{k}\right) \ne 0\) for each k.

It can easily be seen that

$$\begin{aligned} \mathrm{sign} {\psi '_{n-1}}\left( \mu _{k}\right) =(-1)^{n-k-1}. \end{aligned}$$
(2.7)

According to (2.6), we obtain

$$\begin{aligned} \alpha _{k}=-\frac{{\chi _{n}\left( \mu _{k}\right) }}{{\psi _{n-1}^{'} \left( \mu _{k}\right) }}. \end{aligned}$$
(2.8)

Therefore, \(\mu _{k}\) is an eigenvalue of J if and only if \(\chi _{n}\left( \mu _{k}\right) =0\), and by (2.8), \(\chi _{n}\left( \mu _{k}\right) =0\) if and only if \(\alpha _{k}=0\). This completes the proof. \(\square \)
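Formula (2.8) is easy to check numerically. The sketch below (ours) builds a random J of the form (1.1), takes the unit eigenvectors of its principal submatrix G, and compares \(\alpha _{k}\) with \(-\chi _{n}(\mu _{k})/\psi '_{n-1}(\mu _{k})\).

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 4                                    # arbitrary sample sizes
a = rng.standard_normal(n)
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

J = np.diag(a).astype(complex)                 # J of the form (1.1)
J[np.arange(n - 1), np.arange(1, n)] = b[:-1]
J[np.arange(1, n), np.arange(n - 1)] = np.conj(b[:-1])
J[0, m - 1], J[m - 1, 0] = np.conj(b[-1]), b[-1]
G = J[1:, 1:]

mu, V = np.linalg.eigh(G)                      # columns of V are unit eigenvectors u_k
chi = np.poly(np.linalg.eigvalsh(J))           # monic characteristic polynomial of J
psi = np.poly(mu)                              # monic characteristic polynomial of G
for k in range(n - 1):
    alpha = abs(b[0] * V[0, k] + np.conj(b[-1]) * V[m - 2, k]) ** 2
    rhs = -np.polyval(chi, mu[k]) / np.polyval(np.polyder(psi), mu[k])
    assert np.isclose(alpha, rhs)              # formula (2.8)
```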

If J and G have no common eigenvalue, the next lemma presents the condition that the eigenvalues of the quasi-tridiagonal matrix J satisfy.

Lemma 2.3

If \(|b_{1}{u_{k,1}}+\overline{b}_{n}{u_{k,m-1}}| \ne 0\) holds for \(k=1, \ldots , n-1\), then the eigenvalues of J are the zeros of the following function:

$$\begin{aligned} f(\lambda )=\lambda -a_{1}-\sum _{k=1}^{n-1}{\frac{{\alpha _{k}}}{{\lambda -\mu _{k}}}}. \end{aligned}$$

Proof

The lemma follows immediately from Lemma 2.2 and formula (2.5). \(\square \)

Based on Lemmas 2.2 and 2.3, the relative position of the eigenvalues of J and G, under the condition that they have no common eigenvalue, is given in the following theorem.

Theorem 2.4

Let \(\lambda _{1}< \cdots <\lambda _{n}\) and \(\mu _{1}<\cdots <\mu _{n-1}\) be the eigenvalues of J and G, respectively. If no \(\lambda _{j}\) is an eigenvalue of G, then the eigenvalues of J are distinct real numbers, and the eigenvalues \(\lambda _{j}\) and \(\mu _{k}\) satisfy the following inequality:

$$\begin{aligned} \lambda _{1}<\mu _{1}<\lambda _{2}<\mu _{2}<\cdots<\mu _{n-1}<\lambda _{n}. \end{aligned}$$

Proof

By Lemma 2.3, we know that the eigenvalues of J are the roots of the equation

$$\begin{aligned} \lambda -a_{1}-\sum _{k=1}^{n-1}{\frac{{\alpha _{k}}}{{\lambda -\mu _{k}}}}=0. \end{aligned}$$

Let \(g(\lambda )=\lambda -a_{1}\) and \(q(\lambda )=\sum _{k=1}^{n-1}{\frac{{\alpha _{k}}}{{\lambda -\mu _{k}}}}\), so that \(f(\lambda )=g(\lambda )-q(\lambda )\). Obviously, \(g(\lambda )\) is strictly increasing on the whole real line.

Since \({q'(\lambda )}=-\sum _{k=1}^{n-1}{\frac{{\alpha _{k}}}{{(\lambda -\mu _{k})^2}}}<0\) for \(\lambda \in {\left( \mu _{k}, \mu _{k+1}\right) \cup \left( -\infty ,\mu _{1}\right) \cup \left( \mu _{n-1}, +\infty \right) }\), \(k=1, \ldots , n-2\), the function \(q(\lambda )\) is strictly decreasing on each of these intervals, and \(\lim _{\lambda \rightarrow +\infty }q(\lambda )=\lim _{\lambda \rightarrow -\infty }q(\lambda )=0\).

Moreover, \(f'(\lambda )=1+\sum _{k=1}^{n-1}{\frac{{\alpha _{k}}}{{(\lambda -\mu _{k})^2}}}>0\), so f is strictly increasing on each interval. On each bounded interval \(\left( \mu _{k}, \mu _{k+1}\right) \), we have \(f(\lambda )\rightarrow -\infty \) as \(\lambda \rightarrow \mu _{k}^{+}\) and \(f(\lambda )\rightarrow +\infty \) as \(\lambda \rightarrow \mu _{k+1}^{-}\); on \(\left( -\infty ,\mu _{1}\right) \) and \(\left( \mu _{n-1}, +\infty \right) \), f likewise runs from \(-\infty \) to \(+\infty \), because \(g\rightarrow \mp \infty \) while \(q\rightarrow 0\). It follows that there exists one and only one real root of f in each of these n intervals, that is, the inequality holds. \(\square \)
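Continuing the numerical sketch above (J and G as before), the interlacing can be confirmed directly:

```python
lam = np.linalg.eigvalsh(J)            # eigenvalues of J, in ascending order
mu = np.linalg.eigvalsh(J[1:, 1:])     # eigenvalues of G, in ascending order
# lambda_1 < mu_1 < lambda_2 < ... < mu_{n-1} < lambda_n in the generic case
assert np.all(lam[:-1] <= mu) and np.all(mu <= lam[1:])
```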

If some of the eigenvalues of G are also eigenvalues of J, the conditions that the eigenvalues of J satisfy are given in the following.

Lemma 2.5

Let S be a subset of \(\{1, \ldots , n-1\}\) containing s elements, such that \(|b_{1}{u_{k,1}}+\overline{b}_{n}{u_{k,m-1}}|=0\) holds for \(k \in S\) and \(|b_{1}{u_{k,1}}+\overline{b}_{n}{u_{k,m-1}}| \ne 0\) holds for \(k \notin S\). Then \(\mu _{k}\) is an eigenvalue of J for \(k \in S\), and the remaining eigenvalues of J are the \(n-s\) roots of \(f(\lambda )\).

Proof

It is known that \(\chi _{n}\left( \lambda \right) \) has n roots. By Lemma 2.2, \(\chi _{n}\left( \mu _{k}\right) =0\) for \(k \in S\), while the remaining eigenvalues \(\lambda \) of J satisfy \(\prod _{j=1}^{n-1}\left( \lambda -\mu _{j}\right) \ne 0\). Therefore, from the structure of \(\chi _{n}\left( \lambda \right) \) in (2.5), the remaining \(n-s\) eigenvalues of J are the zeros of the function \(f\left( \lambda \right) \). The proof of this lemma is now complete. \(\square \)

Based on the previous theory, we can obtain the general properties of eigenvalues of quasi-tridiagonal matrices.

Theorem 2.6

Assume that \(\lambda _{1}\le \cdots \le \lambda _{n}\) and \(\mu _{1}<\cdots <\mu _{n-1}\) are the eigenvalues of J and G, respectively. Let S be a subset of \(\{1, \ldots , n-1\}\) containing s elements, such that \(\mu _{k}\) is an eigenvalue of J for \(k \in S\). Then the multiplicity of each eigenvalue of J is at most 2, every multiple eigenvalue of J is also an eigenvalue of G, and the eigenvalues of J and G satisfy the following inequality:

$$\begin{aligned} \lambda _{1} \le \mu _{1} \le \lambda _{2} \le \mu _{2} \le \cdots \le \mu _{n-1} \le \lambda _{n}. \end{aligned}$$

Proof

Let \(S=\{s_{1}, \ldots , s_{s}\}\) with \(s_{1}<\cdots <s_{s}\) and \(N-S=\{k_{1}, \ldots , k_{n-1-s}\}\) with \(k_{1}<\cdots <k_{n-1-s}\), where \(N=\{1, \ldots , n-1\}\). Without loss of generality, we may assume \(\mu _{s_{i}}=\lambda _{s_{i}}\); then, by Lemma 2.5, the remaining \(n-s\) eigenvalues of J satisfy the equation \(f(\lambda )=0\).

From the argument of Theorem 2.4, there exists one and only one eigenvalue of J in each of the \(n-s\) intervals \((\mu _{k_{i}}, \mu _{k_{i+1}})\), \((-\infty , \mu _{k_{1}})\) and \((\mu _{k_{n-1-s}}, +\infty )\), \(k_{i} \in {N-S}\).

If some eigenvalue \(\lambda _{k_{j}}\) satisfies \(\lambda _{k_{j}}=\lambda _{s_{i}}=\mu _{s_{i}}\), then, since the \(\mu _{i}\) are distinct and simple, the multiplicity of each eigenvalue of J is at most 2, and the multiple eigenvalues are also eigenvalues of G. \(\square \)

Summing up the above discussion, we have characterized the eigenvalue properties of J and the relative location of the eigenvalues of J and G.

3 The Inverse Eigenvalue Problems of J

In this section, we first define two bordered diagonal matrices A and \(A^{-}\), which serve as a bridge in the proof of the next theorem, in a manner similar to Lemma 2.2 of Sect. 2; they are constructed as follows:

$$\begin{aligned} A=\left[ {\begin{array}{c@{\quad }c} 1 &{} 0 \\ 0 &{} U^* \\ \end{array}} \right] J\left[ {\begin{array}{c@{\quad }c} 1 &{} 0 \\ 0 &{} U \\ \end{array}} \right] ,\quad A^{-}=\left[ {\begin{array}{c@{\quad }c} 1 &{} 0 \\ 0 &{} U^* \\ \end{array}} \right] J^{-}\left[ {\begin{array}{c@{\quad }c} 1 &{} 0 \\ 0 &{} U \end{array}} \right] . \end{aligned}$$
(3.1)

Assume

$$\begin{aligned} A=\left[ {\begin{array}{c@{\quad }c} a_{1} &{} c^* \\ c &{} M \\ \end{array}} \right] ,\quad A^{-}=\left[ {\begin{array}{c@{\quad }c} a_{1} &{} \left( c^{-}\right) ^*\\ c^{-} &{} M \\ \end{array}} \right] , \end{aligned}$$
(3.2)

where \( M=\mathrm{diag}(\mu _{1}, \ldots , \mu _{n-1})=U^*{G}U \). Denote vectors c and \(c^-\) as \(c=[c_{1}, \ldots , c_{n-1}]^\mathrm{T}\) and \(c^-=[c^-_{1}, \ldots , c^-_{n-1}]^\mathrm{T}\), respectively.

From the construction above, the diagonal matrix M and the tridiagonal matrix G have the same eigenvalues; the matrices A and J also have the same eigenvalues, as do \(A^{-}\) and \(J^{-}\). It is easy to obtain \(a_{1}=\mathrm{tr}\left( A\right) -\mathrm{tr}\left( M\right) =\sum _{m=1}^{n}{\lambda _{m}}-\sum _{k=1}^{n-1}{\mu _{k}}\).

Note also that A and \(A^{-}\) are both bordered diagonal matrices.

In the above process, if the eigenvalues of J are given and the eigenvalues of G are selected, then \(a_{1}\) and M can be computed. Therefore, to construct the bordered diagonal matrices A and \(A^{-}\), it suffices to compute their boundary elements, that is, the vectors c and \(c^{-}\).

The following theorem provides formulas to compute the boundary elements of the bordered diagonal matrices A and \(A^{-}\) in the case of \(J=H\).

Theorem 3.1

For H, assume that \(\lambda _{i}\) and \(\mu _{k}\) are distinct real numbers satisfying the inequality \(\lambda _{1} \le \mu _{1} \le \lambda _{2} \le \mu _{2} \le \cdots \le \mu _{n-1} \le \lambda _{n}\). Then the entries \(c_{k}\) and \(c^-_{k}\), \(k=1, \ldots , n-1\), can be expressed in terms of \(\lambda _{i}\), \(\mu _{k}\) and \(\mathrm{Re}(\beta )\) by the following formulas:

$$\begin{aligned} {\left| {c_{k}}\right| }^2=-\frac{{\prod _{i=1}^{n} \left( \mu _{k}-\lambda _{i}\right) }}{{\prod _{\begin{array}{c} j=1\\ j\ne k \end{array}}^{n-1}\left( \mu _{k}-\mu _{j}\right) }},\quad {\left| {c^{-}_{k}}\right| }^2= -\frac{{{\prod _{i=1}^{n}\left( \mu _{k}-\lambda _{i}\right) }+4\left( -1\right) ^{n}\mathrm{Re}\left( \beta \right) }}{{\prod _{\begin{array}{c} j=1\\ j\ne k \end{array}}^{n-1}\left( \mu _{k}-\mu _{j}\right) }}. \end{aligned}$$

Proof

Computing the characteristic polynomial of A, we have

$$\begin{aligned} \mathrm{det}\left( \lambda {I_{n}}-A\right) =\left( \lambda -a_{1}\right) \prod _{j=1}^{n-1}\left( \lambda -\mu _{j}\right) -\sum _{k=1}^{n-1}{{\left| c_{k}\right| }^2} \prod _{\begin{array}{c} j=1\\ j\ne k \end{array}}^{n-1}\left( \lambda -\mu _{j}\right) . \end{aligned}$$
(3.3)

Substituting \(\lambda =\mu _{1}, \ldots , \mu _{n-1}\) into formula (3.3), we obtain \(n-1\) equations in \(\left| {c_{k}}\right| \). Solving them, we have

$$\begin{aligned} {\left| {c_{k}}\right| }^2=-\frac{{\prod _{i=1}^{n}\left( \mu _{k} -\lambda _{i}\right) }}{{\prod _{\begin{array}{c} j=1\\ j\ne k \end{array}}^{n-1}\left( \mu _{k}-\mu _{j}\right) }}. \end{aligned}$$
(3.4)

Let \(p(\lambda )=\mathrm{det}(\lambda {I_{n}}-{\hat{H}})\) and \(r(\lambda )=\mathrm{det}(\lambda {I_{n-2}}-L)\), then we get

$$\begin{aligned} \mathrm{det}\left( \lambda {I_{n}}-H\right)= & {} p\left( \lambda \right) -{{\left| {b_{n}}\right| }^2}r\left( \lambda \right) -2\left( -1\right) ^{n}\mathrm{Re}\left( \beta \right) ,\\ \mathrm{det}\left( \lambda {I_{n}}-H^{-}\right)= & {} p\left( \lambda \right) -{{\left| {b_{n}}\right| }^2}r\left( \lambda \right) +2\left( -1\right) ^{n}\mathrm{Re}\left( \beta \right) . \end{aligned}$$

Subtracting the first of the above equations from the second, we have

$$\begin{aligned} \mathrm{det}\left( \lambda {I_{n}}-H^{-}\right) -\mathrm{det}\left( \lambda {I_{n}}-H\right) =4\left( -1\right) ^{n}\mathrm{Re}\left( \beta \right) . \end{aligned}$$

Therefore,

$$\begin{aligned} \mathrm{det}(\lambda {I_{n}}-A^{-})= & {} \mathrm{det}\left( \lambda {I_{n}}-H^{-}\right) \nonumber \\= & {} \mathrm{det}\left( \lambda {I_{n}}-H\right) +4\left( -1\right) ^{n}\mathrm{Re}\left( \beta \right) \nonumber \\= & {} \mathrm{det}\left( \lambda {I_{n}}-A\right) +4\left( -1\right) ^{n}\mathrm{Re}\left( \beta \right) . \end{aligned}$$
(3.5)

In addition,

$$\begin{aligned} \mathrm{det}(\lambda {I_{n}}-A^{-})=(\lambda -a_{1}) \prod _{j=1}^{n-1}(\lambda -\mu _{j}) -\sum _{k=1}^{n-1}{{|{c^{-}_{k}}|}^2}\prod _{\begin{array}{c} j=1\\ j\ne k \end{array}}^{n-1}(\lambda -\mu _{j}), \end{aligned}$$
(3.6)

Substituting \(\mu _{k}\) into formulas (3.5) and (3.6), we have

$$\begin{aligned} \mathrm{det}\left( \mu _{k}{I_{n}}-H^{-}\right)= & {} \mathrm{det}\left( \mu _{k}{I_{n}}-A^{-}\right) \\= & {} -{{\left| {c^{-}_{k}}\right| }^2}\prod _{\begin{array}{c} j=1\\ j\ne k \end{array}}^{n-1}\left( \mu _{k}-\mu _{j}\right) \\= & {} \mathrm{det}\left( \mu _{k}{I_{n}}-A\right) +4\left( -1\right) ^{n}\mathrm{Re}\left( \beta \right) . \end{aligned}$$

Since

$$\begin{aligned} \mathrm{det}\left( \mu _{k}{I_{n}}-A\right) =\prod _{i=1}^{n}\left( \mu _{k}-\lambda _{i}\right) , \end{aligned}$$

combining formula (3.6) with formula (3.5), we have

$$\begin{aligned} {\left| {c^{-}_{k}}\right| }^2=-\frac{{{\prod _{i=1}^{n}\left( \mu _{k}- \lambda _{i}\right) }+4\left( -1\right) ^{n}\mathrm{Re}\left( \beta \right) }}{{\prod _{\begin{array}{c} j=1\\ j\ne k \end{array}}^{n-1}\left( \mu _{k}-\mu _{j}\right) }}. \end{aligned}$$
(3.7)

This completes the proof of Theorem 3.1. \(\square \)
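The two formulas of Theorem 3.1 translate directly into code. A minimal sketch (function name ours; the sample value of \(\mathrm{Re}(\beta )\) is an arbitrary choice within the bound r of Theorem 3.2 below):

```python
import numpy as np

def boundary_moduli_squared(lam, mu, re_beta):
    """|c_k|^2 from (3.4) and |c_k^-|^2 from (3.7)."""
    lam, mu = np.asarray(lam, float), np.asarray(mu, float)
    n = lam.size
    c2, cm2 = np.empty(n - 1), np.empty(n - 1)
    for k in range(n - 1):
        num = np.prod(mu[k] - lam)               # prod_i (mu_k - lambda_i)
        den = np.prod(np.delete(mu[k] - mu, k))  # prod_{j != k} (mu_k - mu_j)
        c2[k] = -num / den
        cm2[k] = -(num + 4 * (-1) ** n * re_beta) / den
    return c2, cm2

# data of Example 5.1 below, with a sample Re(beta)
c2, cm2 = boundary_moduli_squared([2.0, 8.0, 32.0, 128.0],
                                  [4.0, 16.0, 64.0], re_beta=100.0)
assert np.all(c2 >= 0) and np.all(cm2 >= 0)
```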

Based on Theorem 3.1, the existence of solutions is shown in the following.

Theorem 3.2

Let \(\left\{ \lambda _{k}\right\} \), \(k=1, \ldots , n\), and \(\left\{ \mu _{k}\right\} \), \(k=1, \ldots , n-1\), be distinct real numbers satisfying the inequality \(\lambda _{k} \le \mu _{k} \le \lambda _{k+1}\), \(k=1, \ldots , n-1\). If the real part \(\mathrm{Re}\left( \beta \right) \) of \(\beta \) satisfies the inequality \(| \mathrm{Re}(\beta )| \le r\), where \(r=\mathrm{min}_{1 \le k \le n-1} r_{k}\) and \(r_{k}=\left( -1\right) ^{n-k}\frac{{1}}{{4}}\prod _{j=1}^{n}\left( \mu _{k}-\lambda _{j}\right) \), then the solutions of formulas (3.4) and (3.7) exist; that is, the construction of Theorem 3.1 is solvable.

Proof

On the one hand, the condition \(\lambda _{k} \le \mu _{k} \le \lambda _{k+1}\), \(k=1, \ldots , n-1\), implies that the signs of \(\prod _{j=1}^{n}(\mu _{k}-\lambda _{j})\) and \( \prod _{\begin{array}{c} j=1\\ j\ne k \end{array}}^{n-1}(\mu _{k}-\mu _{j})\) are opposite. Therefore,

$$\begin{aligned} {\left| {c_{i}}\right| }^2=-\frac{{\prod _{j=1}^{n}\left( \mu _{i}-\lambda _{j}\right) }}{{\prod _{\begin{array}{c} j=1\\ j\ne i \end{array}}^{n-1}\left( \mu _{i}-\mu _{j}\right) }} \ge 0. \end{aligned}$$

The above inequality ensures the existence of solutions of formula (3.4). On the other hand, to guarantee \({\left| {c_{i}}^{-}\right| }^2 \ge 0\) for \(i=1, \ldots , n-1\), the following \(n-1\) inequalities must hold:

$$\begin{aligned} \left| {-\frac{{4\left( -1\right) ^{n}\mathrm{Re}\left( \beta \right) }}{{\prod _{\begin{array}{c} j=1\\ j\ne i \end{array}}^{n-1}\left( \mu _{i}-\mu _{j}\right) }}}\right| \le \left| {-\frac{{\prod _{j=1}^{n}\left( \mu _{i}-\lambda _{j}\right) }}{{\prod _{\begin{array}{c} j=1\\ j\ne i \end{array}}^{n-1}\left( \mu _{i}-\mu _{j}\right) }}}\right| . \end{aligned}$$

Since

$$\begin{aligned} -\frac{{\prod _{j=1}^{n}\left( \mu _{i}-\lambda _{j}\right) }}{{\prod _{\begin{array}{c} j=1\\ j\ne i \end{array}}^{n-1}\left( \mu _{i}-\mu _{j}\right) }} \ge 0, \end{aligned}$$

the above formula is equivalent to

$$\begin{aligned} \left| {-\frac{{4\left( -1\right) ^{n}\mathrm{Re}\left( \beta \right) }}{{\prod _{\begin{array}{c} j=1\\ j\ne i \end{array}}^{n-1}\left( \mu _{i}-\mu _{j}\right) }}}\right| \le -\frac{{\prod _{j=1}^{n}\left( \mu _{i}-\lambda _{j}\right) }}{{\prod _{\begin{array}{c} j=1\\ j\ne i \end{array}}^{n-1}\left( \mu _{i}-\mu _{j}\right) }}, \end{aligned}$$

that is,

$$\begin{aligned} 4\left| \mathrm{Re}(\beta )\right| \le -\frac{\prod _{j=1}^{n}(\mu _{i}-\lambda _{j})}{\prod _{\begin{array}{c} j=1\\ j\ne i \end{array}}^{n-1}(\mu _{i}-\mu _{j})}\left| \prod _{\begin{array}{c} j=1\\ j\ne i \end{array}}^{n-1}(\mu _{i}-\mu _{j})\right| . \end{aligned}$$

It is equivalent to

$$\begin{aligned} 4\left| \mathrm{Re}\left( \beta \right) \right| \le \left( -1\right) ^{n-i}{\prod _{j=1}^{n}\left( \mu _{i}-\lambda _{j}\right) }. \end{aligned}$$

Solving the inequality, we have

$$\begin{aligned} \left( -1\right) ^{n-i-1}\frac{{1}}{{4}} {\prod _{j=1}^{n}\left( \mu _{i}-\lambda _{j}\right) } \le \mathrm{Re}\left( \beta \right) \le \left( -1\right) ^{n-i}\frac{{1}}{{4}} {\prod _{j=1}^{n}\left( \mu _{i}-\lambda _{j}\right) }, \end{aligned}$$
(3.8)

for \(i=1, \ldots , n-1\).

Taking the intersection of the conditions (3.8) over all such i gives the single inequality

$$\begin{aligned} |\mathrm{Re}(\beta )| \le r. \end{aligned}$$
(3.9)

To sum up, if formula (3.9) holds, then the following formula holds:

$$\begin{aligned} {\left| {c_{i}}^{-}\right| }^2=-\frac{{{\prod _{j=1}^{n}\left( \mu _{i}-\lambda _{j}\right) } +4{\left( -1\right) }^{n}\mathrm{Re}\left( \beta \right) }}{{\prod _{\begin{array}{c} j=1\\ j\ne i \end{array}}^{n-1}\left( \mu _{i}-\mu _{j}\right) }} \ge 0. \end{aligned}$$

The above ensures the existence of solutions of formula (3.7). The proof is completed. \(\square \)
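Since \(r_{k}=\frac{1}{4}\left| \prod _{j=1}^{n}(\mu _{k}-\lambda _{j})\right| \) under the interlacing assumption, the bound r of Theorem 3.2 is immediate to compute; a sketch (ours):

```python
import numpy as np

def re_beta_bound(lam, mu):
    """r = min over k of |prod_j (mu_k - lambda_j)| / 4 (Theorem 3.2)."""
    lam, mu = np.asarray(lam, float), np.asarray(mu, float)
    return min(abs(np.prod(mk - lam)) / 4 for mk in mu)

# for the data of Example 5.1 this gives r = 6944.0
print(re_beta_bound([2.0, 8.0, 32.0, 128.0], [4.0, 16.0, 64.0]))
```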

The theorems above provide sufficient conditions for the existence of solutions.

Theorem 3.3

Defining the vectors c and \(c^{-}\) as above, and assuming that \(\overline{b}_{n}\) lies in the entry (1, m) of the matrix J, the following formulas hold: \({\overline{b}_{1}}u_{1}+b_nu_{m-1}=c\) \((3\le m\le n)\), \(2{b_{n}}u_{n-1}=c-c^{-}\) \((m=n)\) and \(2{\overline{b}_{1}}u_{1}=c+c^{-}\) \((m=n)\).

Proof

Note that \(\Vert u_1\Vert =1\) and \(\Vert u_{m-1}\Vert =1\). From (3.1) and (3.2), we have \(c=U^{*}h^{*}\), that is,

$$\begin{aligned} U^*(\overline{b}_{1}e_{1}^\mathrm{T}+b_{n}e_{m-1}^\mathrm{T})=c, \end{aligned}$$

so

$$\begin{aligned} {\overline{b}_{1}}u_{1}+b_nu_{m-1}=c \quad (3\le m\le n). \end{aligned}$$
(3.10)

When \(m=n\), we similarly have \(c=U^{*}h^{*}\) and \(c^{-}=U^{*}(h^{-})^{*}\), where \(h^{-}=b_{1}e_{1}-\overline{b}_{n}e_{n-1}\) is the boundary row of \(J^{-}\); hence

$$\begin{aligned} U^*(\overline{b}_{1}e_{1}^\mathrm{T}+b_{n}e_{n-1}^\mathrm{T})=c,\quad U^*(\overline{b}_{1}e_{1}^\mathrm{T}-b_{n}e_{n-1}^\mathrm{T})=c^-, \end{aligned}$$

that is,

$$\begin{aligned} \overline{b}_{1}u_{1}+b_{n}u_{n-1}=c,\quad \overline{b}_{1}u_{1}-b_{n}u_{n-1}=c^-. \end{aligned}$$

Therefore,

$$\begin{aligned} 2\overline{b}_{1}u_{1}=c+c^{-},\quad 2b_{n}u_{n-1}=c-c^{-}. \end{aligned}$$
(3.11)

The proof is completed. \(\square \)
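For \(m=n\), formulas (3.11) determine \(\overline{b}_{1}\), \(u_{1}\), \(b_{n}\) and \(u_{n-1}\) from c and \(c^{-}\) once a phase is fixed, since \(\Vert u_{1}\Vert =\Vert u_{n-1}\Vert =1\). A sketch (ours), with the phase \(\pi /4\) used by the algorithms of Sect. 4:

```python
import numpy as np

def split_corner(c, c_minus):
    """Recover (b_1, u_1, b_n, u_{n-1}) from 2*conj(b_1)*u_1 = c + c^-
    and 2*b_n*u_{n-1} = c - c^-, fixing arg(conj(b_1)) = arg(b_n) = pi/4."""
    ph = np.exp(1j * np.pi / 4)
    b1_bar = np.linalg.norm(c + c_minus) / 2 * ph
    bn = np.linalg.norm(c - c_minus) / 2 * ph
    u1 = (c + c_minus) / (2 * b1_bar)        # a unit vector by construction
    u_nm1 = (c - c_minus) / (2 * bn)
    return np.conj(b1_bar), u1, bn, u_nm1
```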

4 Algorithm for the Inverse Eigenvalue Problem of Quasi-tridiagonal Matrices

4.1 Algorithm for H

Combining the preceding theory with the Lanczos algorithm, the construction of quasi-tridiagonal matrices H proceeds as follows (a code sketch is given after the steps below):

  1. (1)

Given \(\lambda _k\), \(k=1,\dots ,n\), take \(\mu _k=\frac{1}{2}(\lambda _k+\lambda _{k+1})\), \(k=1,\dots ,n-1\). According to formulas (3.4) and (3.7), we can compute \(\left| c_{k}\right| \) and \(\left| {c_{k}}^{-}\right| \). Let \(c_{k}=\left| c_{k}\right| \mathrm{e}^{{\mathrm{i}}\frac{{\pi }}{{4}}}\) and \({c_{k}}^{-}=\left| {c_{k}}^{-}\right| \mathrm{e}^{{\mathrm{i}}\frac{{\pi }}{{4}}}\); this yields the complex vectors c and \(c^{-}\).

  2. (2)

From formula (3.11), we can compute \(\left| \overline{b}_{1}\right| \), noting that \(\Vert u_{1}\Vert =1\). Let \(\overline{b}_{1}=\left| \overline{b}_{1}\right| \mathrm{e}^{\mathrm{i}\frac{\pi }{4}}\); this fixes the complex number \(\overline{b}_{1}\), from which the eigenvector \(u_{1}\) can be computed. Similarly, we can compute \(\left| b_{n}\right| \); taking \(b_{n}=\left| b_{n}\right| \mathrm{e}^{\mathrm{i}\frac{\pi }{4}}\) gives the element \(b_{n}\), from which the vector \(u_{n-1}\) can be computed.

  3. (3)

It is well known that \(G=UMU^{*}\) is equivalent to \({U^{*}G=MU^{*}}\). For simplicity, we write the sub-matrix G in the following form:

    $$\begin{aligned} G= \sum _{k = 1}^{n-1}t_kE^{(n-1)}_{k,k}+\sum _{k = 1}^{n-2}s_kE^{(n-1)}_{k,k+1}+\sum _{k = 1}^{n-2} \overline{s}_kE^{(n-1)}_{k+1,k}. \end{aligned}$$

With the vector \(u_{1}\) and the diagonal matrix \(M=\mathrm{diag}\left( \mu _{1}, \ldots , \mu _{n-1}\right) \), we can compute the vectors \(u_{2}, \ldots , u_{n-1}\) and the tridiagonal matrix G via the Lanczos algorithm, proceeding as follows:

    1. Step 0.

      \(k=1\).

    2. Step 1.

      \(t_{k}={u^*_{k}}Mu_{k} \in {\mathbb R}.\)

    3. Step 2.

      \(z_{k+1}=Mu_{k}-u_{k}t_{k}\) for \(k=1\),

      \(z_{k+1}=Mu_{k}-u_{k}t_{k}-u_{k-1}s_{k-1}\) for \(k=2, \ldots , n-2\).

    4. Step 3.

      \({\overline{s}_{k}}=\left\| z_{k+1}\right\| \mathrm{e}^{\mathrm{i}\frac{{\pi }}{{4}}}\).

    5. Step 4.

      \(u_{k+1}=z_{k+1}\left( {\overline{s}_{k}}\right) ^{-1}\).

Let \(k=k+1\) and return to Step 1.

Carrying out the above steps, we obtain \(a_{2}, \ldots , a_{n-1}\) and \(b_{1}, \ldots , b_{n-2}, b_{n}\).

  4. (4)

    Select \({\left( -1\right) ^n}(b_{1}\cdots b_{n-2}b_{n})^{-1}\mathrm{Re}\left( \beta \right) \) as \( b_{n-1}\).

  5. (5)

    Obviously, \(a_{1}=\mathrm{tr}\left( A\right) -\mathrm{tr}\left( M\right) =\sum _{k=1}^{n}\lambda _{k}-\sum _{k=1}^{n-1}\mu _{k}\).
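A compact Python transcription of steps (1)–(5), under our reading of the notation and reusing boundary_moduli_squared from the sketch after Theorem 3.1 (all phases fixed at \(\pi /4\)):

```python
import numpy as np

def reconstruct_H(lam, re_beta):
    """Sketch of the algorithm of Sect. 4.1; lam holds the prescribed
    eigenvalues and |re_beta| must respect the bound of Theorem 3.2."""
    lam = np.asarray(lam, dtype=float)
    n = lam.size
    ph = np.exp(1j * np.pi / 4)
    # step (1): midpoints as eigenvalues of G, then c and c^- via (3.4), (3.7)
    mu = (lam[:-1] + lam[1:]) / 2
    c2, cm2 = boundary_moduli_squared(lam, mu, re_beta)
    c, cm = np.sqrt(c2) * ph, np.sqrt(cm2) * ph
    # step (2): b_1, u_1 and b_n from (3.11)
    b1_bar = np.linalg.norm(c + cm) / 2 * ph
    bn = np.linalg.norm(c - cm) / 2 * ph
    u = np.zeros((n - 1, n - 1), dtype=complex)      # columns u_1..u_{n-1}
    u[:, 0] = (c + cm) / (2 * b1_bar)
    # step (3): Lanczos recurrence M u_k = s_{k-1}u_{k-1} + t_k u_k + conj(s_k)u_{k+1}
    t = np.zeros(n - 1)
    s = np.zeros(n - 2, dtype=complex)
    for k in range(n - 1):
        t[k] = (u[:, k].conj() @ (mu * u[:, k])).real
        if k == n - 2:
            break
        z = mu * u[:, k] - t[k] * u[:, k]
        if k > 0:
            z -= s[k - 1] * u[:, k - 1]
        s_bar = np.linalg.norm(z) * ph
        s[k] = np.conj(s_bar)
        u[:, k + 1] = z / s_bar
    # step (4): choose b_{n-1} (the last off-diagonal of G) from Re(beta)
    s[-1] = (-1) ** n * re_beta / (np.conj(b1_bar) * np.prod(s[:-1]) * bn)
    # step (5) and assembly of H in the form (1.2)
    H = np.zeros((n, n), dtype=complex)
    H[0, 0] = lam.sum() - mu.sum()                   # a_1 = tr(A) - tr(M)
    H[np.arange(1, n), np.arange(1, n)] = t
    off = np.concatenate(([np.conj(b1_bar)], s))     # b_1, b_2, ..., b_{n-1}
    H[np.arange(n - 1), np.arange(1, n)] = off
    H[np.arange(1, n), np.arange(n - 1)] = np.conj(off)
    H[0, n - 1], H[n - 1, 0] = np.conj(bn), bn
    return (H + H.conj().T) / 2                      # symmetrize against rounding

H = reconstruct_H([2.0, 8.0, 32.0, 128.0], re_beta=100.0)
print(np.linalg.eigvalsh(H))     # to be compared with the prescribed spectrum
```

The printed spectrum can then be compared with the prescribed \(\lambda _{k}\), in the spirit of the residual tables of Sect. 5.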

4.2 Algorithm for J

Given m and \(\lambda _k\), \(k=1,\dots ,n\), take \(\mu _k=\frac{1}{2}(\lambda _k+\lambda _{k+1})\), \(k=1,\dots ,n-1\). According to formula (3.4), we can compute \(\left| c_{k}\right| \). Let \(c_{k}=\left| c_{k}\right| \mathrm{e}^{{\mathrm{i}}\frac{{\pi }}{{4}}}\); this yields the complex vector c.

Take any \((n-1)\)-dimensional complex column vector \(\tilde{u}_1\). For \(j=1\), let \(u_1^{(j)}=\tilde{u}_1\Vert \tilde{u}_1\Vert ^{-1}\).

  1. Step 0.

    \({b_{1}^{(j)}}=\overline{(u_1^{(j)})^*c}\), \(k=1\);

  2. Step 1.

    \(t_{k}^{(j)}={(u^{(j)}_{k})^*}Mu_{k}^{(j)} \in {\mathbb R}\);

  3. Step 2.

    \(z_{k+1}^{(j)}=Mu_{k}^{(j)}-u_{k}^{(j)}t_{k}^{(j)}\) for \(k=1\),

    \(z_{k+1}^{(j)}=Mu_{k}^{(j)}-u_{k}^{(j)}t_{k}^{(j)}-u_{k-1}^{(j)}s_{k-1}^{(j)}\) for \(k=2, \ldots , m-2\);

  4. Step 3.

    \(\overline{s_{k}^{(j)}}=\Vert z_{k+1}^{(j)}\Vert \mathrm{e}^{\mathrm{i}\frac{{\pi }}{{4}}}\);

  5. Step 4.

    \(u_{k+1}^{(j)}=z_{k+1}^{(j)}\left( {\overline{s_{k}^{(j)}}}\right) ^{-1}\);

  6. Step 5.

    \(b_n^{(j)}=(u_{m-1}^{(j)})^*c\);

  7. Step 6.

    \(|b_1^{(j+1)}|=\Vert c-b_n^{(j)}u_{m-1}^{(j)}\Vert \);

  8. Step 7.

    \(b_1^{(j+1)}=|b_1^{(j+1)}|\mathrm {e}^{\mathrm {i}\frac{\pi }{4}}\);

  9. Step 8.

    \(u_1^{(j+1)}=(c-b_n^{(j)}u_{m-1}^{(j)})\left( \overline{b_1^{(j+1)}}\right) ^{-1}\);

  10. Step 9.

\(b_1^{(j+1)}=\overline{(u_1^{(j+1)})^*c}\), to update the value of \(b_1^{(j+1)}\) above;

  11. Step 10.

\(u_{m-1}^{(j)}=(c-\overline{b_1^{(j+1)}}u_{1}^{(j+1)})({b_n^{(j)}})^{-1}\), to update the value of \(u_{m-1}^{(j)}\) above.

Let \(j=j+1\) and return to Step 0. When \(j=K\) satisfies \(\Vert u_{m-1}^{(K)}-u_{m-1}^{(K+1)}\Vert <\varepsilon \), for a tolerance \(\varepsilon \) given before the start, we take \(b_1=b_1^{(K+1)}\), \(b_n=b_n^{(K)}\) and \(u_1=u_1^{(K+1)}\) (a code sketch of this iteration is given at the end of this subsection). Then compute

$$\begin{aligned} x_1= & {} u_1^*Mu_1;\\ v_1= & {} Mu_1-u_1x_1;\\ w_1= & {} \Vert v_1\Vert \mathrm{e}^{\mathrm{i}\frac{\pi }{4}};\\ u_2= & {} v_1(w_1)^{-1};\\ x_2= & {} u_2^*Mu_2. \end{aligned}$$

For \(k=2,\dots ,n-2\),

$$\begin{aligned} v_k= & {} Mu_k-u_kx_k-u_{k-1}\overline{w_{k-1}};\\ w_k= & {} \Vert v_k\Vert \mathrm{e}^{\mathrm{i}\frac{\pi }{4}};\\ u_{k+1}= & {} v_k(w_k)^{-1};\\ x_{k+1}= & {} u_{k+1}^*Mu_{k+1}. \end{aligned}$$

Let \(k=k+1\) and return. Carrying out the above steps, we obtain \(u_{1}, \dots , u_{n-1}\). Let \(J=(J_{i,j})=O_{n \times n}\). Take

$$\begin{aligned} J_{1,1}= & {} \sum _{k=1}^n\lambda _k-\sum _{k=1}^{n-1}\mu _k;\\ J_{k,k}= & {} u_{k-1}^*Mu_{k-1}\quad \hbox {for}\ k=2,\dots ,n;\\ J_{1,2}= & {} b_{1}, J_{2,1}=\overline{J_{1,2} };\\ J_{k,k+1}= & {} u_{k-1}^*Mu_{k}, J_{k+1,k}=\overline{J_{k,k+1} }\quad \hbox {for}\ k=2,\dots ,n-1;\\ J_{m,1}= & {} b_{n}, J_{1,m}=\overline{J_{m,1} }. \end{aligned}$$

This yields the desired J. To suppress rounding errors, we may take \(\frac{1}{2}(J+J^*)\) as J.
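The distinctive part of this algorithm is the fixed-point splitting of c in Steps 0–10 above; the sketch below (ours, with hypothetical names and tolerance) transcribes it, after which the final Lanczos pass and the assembly of J proceed exactly as in the sketch for H:

```python
import numpy as np

def split_c(c, mu, m, tol=1e-10, max_iter=500):
    """Iteratively split c into conj(b_1)*u_1 + b_n*u_{m-1} (Sect. 4.2),
    where u_2..u_{m-1} come from a short Lanczos pass with M = diag(mu)."""
    mu = np.asarray(mu, dtype=float)
    ph = np.exp(1j * np.pi / 4)
    rng = np.random.default_rng(1)
    u1 = rng.standard_normal(mu.size) + 1j * rng.standard_normal(mu.size)
    u1 /= np.linalg.norm(u1)                 # arbitrary unit start vector
    um1_prev = None
    for _ in range(max_iter):
        U, s_prev = [u1], 0.0                # Steps 1-4: Lanczos up to u_{m-1}
        for k in range(m - 2):
            t = (U[k].conj() @ (mu * U[k])).real
            z = mu * U[k] - t * U[k]
            if k > 0:
                z -= s_prev * U[k - 1]
            s_bar = np.linalg.norm(z) * ph
            U.append(z / s_bar)
            s_prev = np.conj(s_bar)
        um1 = U[m - 2]
        bn = um1.conj() @ c                  # Step 5
        b1 = np.linalg.norm(c - bn * um1) * ph        # Steps 6-7
        u1 = (c - bn * um1) / np.conj(b1)             # Step 8
        if um1_prev is not None and np.linalg.norm(um1 - um1_prev) < tol:
            break                            # the stopping rule with epsilon = tol
        um1_prev = um1
    return b1, bn, u1
```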

5 Numerical Experiments

Numerical experiments are conducted in Matlab to test the algorithms and illustrate our method.

Example 5.1

Given a geometric sequence with first term 2 and common ratio 4, namely \(\lambda _{1}=2\), \(\lambda _{2}=8\), \(\lambda _{3}=32\) and \(\lambda _{4}=128\), we reconstruct a \(4\times 4\) quasi-tridiagonal matrix H of the form (1.2) with the eigenvalues \(\lambda _1,\lambda _2,\lambda _3,\lambda _4\) by the algorithm for H.

Taking \(\mu _{1}=4\), \(\mu _{2}=16\) and \(\mu _{3}=64\), we first obtain the boundary elements of the bordered diagonal matrices:

$$\begin{aligned} c= & {} \left[ \frac{3149}{717}+\frac{3149}{717}\mathrm{i}, \frac{3907}{296}+\frac{3907}{296} \mathrm{i}, \frac{5973}{170}+\frac{5973}{170} \mathrm{i}\right] ,\\ c^{-}= & {} \left[ \frac{3413}{1099}+\frac{3413}{1099} \mathrm{i}, \frac{1242}{91}+\frac{1242}{91}\mathrm{i}, \frac{7301}{208}+\frac{7301}{208} \mathrm{i}\right] . \end{aligned}$$

The matrix H is reconstructed as follows:

$$\begin{aligned} \left[ {\begin{array}{c@{\quad }c@{\quad }c@{\quad }c} 86 &{} \frac{12695}{336}-\frac{12695}{336} \mathrm{i} &{} 0 &{} \frac{1459}{2141}-\frac{1459}{2141} \mathrm{i} \\ \frac{12695}{336}+\frac{12695}{336} \mathrm{i} &{} \frac{11298}{197} &{} \frac{2767}{233}-\frac{2767}{233} \mathrm{i} &{} 0 \\ 0 &{} \frac{2767}{233}+\frac{2767}{233} \mathrm{i} &{} \frac{1172}{55} &{} \frac{1039}{366}+\frac{1039}{366} \mathrm{i} \\ \frac{1459}{2141}+\frac{1459}{2141} \mathrm{i} &{} 0 &{} \frac{1039}{366}-\frac{1039}{366} \mathrm{i} &{} \frac{4139}{775} \\ \end{array}} \right] . \end{aligned}$$

Substituting the given eigenvalues into the characteristic polynomial of the constructed matrix H, we have Table 1.

Table 1 Approximate value of the characteristic polynomial of H at \(\lambda _{j}\)
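Readers may verify the example directly: the sketch below transcribes the matrix displayed above and evaluates its characteristic polynomial at the prescribed eigenvalues, the quantities reported in Table 1.

```python
import numpy as np

# the reconstructed H of Example 5.1, transcribed from the display above
H = np.array([
    [86,               12695/336*(1-1j), 0,               1459/2141*(1-1j)],
    [12695/336*(1+1j), 11298/197,        2767/233*(1-1j), 0               ],
    [0,                2767/233*(1+1j),  1172/55,         1039/366*(1+1j) ],
    [1459/2141*(1+1j), 0,                1039/366*(1-1j), 4139/775        ],
])
for lam in (2, 8, 32, 128):
    print(lam, np.linalg.det(lam * np.eye(4) - H))   # chi_H(lambda_j), cf. Table 1
```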

Example 5.2

Given the Fibonacci sequence starting from its second term, \(\lambda _{1}=1\), \(\lambda _{2}=2\), \(\lambda _{3}=\lambda _{1}+\lambda _{2}=3\), \(\lambda _{4}=\lambda _{2}+\lambda _{3}=5\) and \(\lambda _{5}=\lambda _{3}+\lambda _{4}=8\), we reconstruct three \(5\times 5\) matrices J of the form (1.1), for \(m=3,4,5\), with the eigenvalues \(\lambda _1,\lambda _2,\lambda _3,\lambda _4,\lambda _5\) by the algorithm for J.

Taking \(\mu _{1}=\frac{1}{2}(\lambda _{1}+\lambda _{2})=1.5\), \(\mu _{2}=\frac{1}{2}(\lambda _{2}+\lambda _{3})=2.5\), \(\mu _{3}=\frac{1}{2}(\lambda _{3}+\lambda _{4})=4\) and \(\mu _{4}=\frac{1}{2}(\lambda _{4}+\lambda _{5})=6.5\) and choosing \(\tilde{u}_1=[1,2,3,4]^\mathrm{T}+{\mathrm{i}}[5,6,7,8]^\mathrm{T}\), when \(m=3\), we reconstruct J as

$$\begin{aligned} \left[ {\begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} \frac{9}{2} &{} \frac{1507}{921} + {\frac{632}{279}}\mathrm{i} &{} \frac{1855}{4888} - \frac{1121}{5153}\mathrm{i} &{} 0 &{} 0\\ \frac{1507}{921} - \frac{632}{279}\mathrm{i} &{} \frac{1365}{313} &{} \frac{1549}{1441} - \frac{314}{211}\mathrm{i} &{} 0 &{} 0\\ \frac{1855}{4888} + \frac{1121}{5153}\mathrm{i} &{} \frac{1549}{1441} + \frac{314}{211}\mathrm{i} &{} \frac{2110}{503} &{} \frac{857}{997} - \frac{2117}{1779}\mathrm{i} &{} 0\\ 0 &{} 0 &{} \frac{857}{997} + \frac{2117}{1779}\mathrm{i} &{} \frac{1104}{323} &{} \frac{425}{1016} - \frac{593}{1024}\mathrm{i} \\ 0 &{} 0 &{} 0 &{} \frac{425}{1016} + \frac{593}{1024}\mathrm{i} &{} \frac{1061}{420}\\ \end{array}} \right] . \end{aligned}$$

Substituting the given eigenvalues into the characteristic polynomial of the constructed matrix J, we have Table 2.

Table 2 Approximate value of the characteristic polynomial of J (\(m=3\)) at \(\lambda _{j}\)

When \(m=4\), we reconstruct J as

$$\begin{aligned} \left[ {\begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} \frac{9}{2} &{} \frac{917}{367} + \frac{425}{322}\mathrm{i} &{} 0 &{} -\frac{537}{9359} + \frac{93}{869}\mathrm{i} &{} 0\\ \frac{917}{367} - \frac{425}{322}\mathrm{i} &{} \frac{2593}{535} &{} \frac{655}{409} - \frac{659}{779}\mathrm{i} &{} 0 &{} 0\\ 0 &{} \frac{655}{409} + \frac{659}{779}\mathrm{i} &{} \frac{2662}{677} &{} \frac{923}{769}- \frac{1133}{1787}\mathrm{i} &{} 0\\ -\frac{537}{9359} - \frac{93}{869}\mathrm{i} &{} 0 &{} \frac{923}{769}+ \frac{1133}{1787}\mathrm{i} &{} \frac{520}{161} &{} \frac{1136}{1901} - \frac{667}{2113}\mathrm{i} \\ 0 &{} 0 &{} 0 &{} \frac{1136}{1901} + \frac{667}{2113}\mathrm{i} &{} \frac{1014}{407}\\ \end{array}} \right] . \end{aligned}$$

Substituting the given eigenvalues into the characteristic polynomial of the constructed matrix J, we have Table 3.

Table 3 Approximate value of the characteristic polynomial of J (\(m=4\)) at \(\lambda _{j}\)

When \(m=5\), we reconstruct J as

$$\begin{aligned} \left[ {\begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }c} \frac{9}{2} &{} \frac{1042}{409} + \frac{397}{327}\mathrm{i} &{} 0 &{} 0 &{}-\frac{441}{3559} + \frac{434}{3067}\mathrm{i} \\ \frac{1042}{409} - \frac{397}{327}\mathrm{i} &{} \frac{835}{172} &{} \frac{281}{177} - \frac{376}{497}\mathrm{i} &{} 0 &{} 0\\ 0 &{} \frac{281}{177} + \frac{376}{497}\mathrm{i} &{} \frac{2879}{703} &{} \frac{2131}{1822}- \frac{928}{1665}\mathrm{i} &{} 0\\ 0 &{} 0 &{} \frac{2131}{1822}+ \frac{928}{1665}\mathrm{i} &{} \frac{2127}{673} &{} \frac{5807}{8412} - \frac{1155}{3511}\mathrm{i} \\ -\frac{441}{3559} - \frac{434}{3067}\mathrm{i} &{} 0 &{} 0 &{} \frac{5807}{8412} + \frac{1155}{3511}\mathrm{i} &{} \frac{2153}{901}\\ \end{array}} \right] . \end{aligned}$$

Substituting the given eigenvalues into the characteristic polynomial of the constructed matrix J, we have Table 4.

Table 4 Approximate value of the characteristic polynomial of J (\(m=5\)) at \(\lambda _{j}\)

6 Conclusion

The spectral properties of, and the inverse eigenvalue problem for, the class of quasi-tridiagonal matrices have been studied. The first conclusion is that the multiplicities of the eigenvalues of J are at most two and that the eigenvalues satisfy an interlacing property. Second, sufficient conditions for the solvability of the inverse eigenvalue problem of quasi-tridiagonal matrices are given, and two algorithms are presented; an advantage of the second algorithm is that the initial vector may be chosen arbitrarily. Computational results in the numerical examples illustrate the feasibility of the algorithms.