Abstract
In this paper, we enhance a recent algorithm for approximate spectral factorization of matrix functions, extending its capabilities to precisely factorize rational matrices when an exact lower-upper triangular factorization is available. This novel approach leverages a fundamental component of the improved algorithm for the precise design of rational paraunitary filter banks, allowing for the predetermined placement of zeros and poles. The introduced algorithm not only advances the state-of-the-art in spectral factorization but also opens new avenues for the tailored design of paraunitary filters with specific spectral properties, offering significant potential for applications in signal processing and beyond.
1 Introduction
Spectral factorization is the process by which a positive (scalar or matrix-valued) function S is expressed in the form
where \(S_+\) can be analytically extended inside the unit circle \( \mathbb {T}\) and \(S_+^*\) is its Hermitian conjugate. There are multiple contexts in which this factorization naturally arises, e.g., linear prediction theory of stationary processes [21, 30], optimal control [2, 6], digital communications [3, 15], etc. Spectral factorization is used to construct certain wavelets [5] and multiwavelets [20] as well. Therefore, many authors have contributed to the development of different computational methods for spectral factorization (see the survey papers [22, 28] and references therein, and also [4, 17] for more recent results). As opposed to the scalar case, in which an explicit formula exists for factorization: \(S_+(z)=\exp \left( \frac{1}{4\pi } \int \nolimits _\mathbb {T}\frac{t+z}{t-z}\log S(t)\,dt\right) \), in general, there is no explicit expression for spectral factorization in the matrix case. The existing algorithms for approximate factorization are, therefore, more demanding in the matrix case.
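In numerical practice, the scalar exponential-integral formula above is commonly realized by the cepstral (Kolmogorov) method: compute the Fourier coefficients of \(\log S\), keep the analytic half, and exponentiate. The following sketch is our own illustration of that classical scheme (the function name is ours; it is not part of the matrix algorithm discussed below):

```python
import numpy as np

def scalar_spectral_factor(S_vals):
    """Sample-based realization of the scalar formula S_+ = exp(P_+ log S),
    where P_+ keeps the analytic half of the Fourier series of log S."""
    n = len(S_vals)
    c = np.fft.fft(np.log(S_vals)) / n       # c[k] ~ k-th Fourier coefficient of log S
    g = np.zeros(n, dtype=complex)
    g[0] = c[0] / 2                          # half of the constant term
    g[1:n // 2] = c[1:n // 2]                # coefficients of z, z^2, ...
    return np.exp(n * np.fft.ifft(g))        # S_+ sampled on the circle

# sanity check: S(z) = |1 + z/2|^2 on |z| = 1 has spectral factor 1 + z/2
z = np.exp(2j * np.pi * np.arange(512) / 512)
S = np.abs(1 + 0.5 * z) ** 2
Sp = scalar_spectral_factor(S)
assert np.max(np.abs(np.abs(Sp) ** 2 - S)) < 1e-10
```

The truncation error decays with the smoothness of \(\log S\), so for the Laurent polynomial sample above the recovery is accurate to machine precision.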
The Janashia-Lagvilava algorithm [18, 19] is a relatively new method of approximate matrix spectral factorization [10] which has proved to be effective, see, e.g., [9, 12, 24]. Several generalizations of this method can be found in [13, 14]. In particular, the method is capable of factorizing some singular matrix functions with a much higher accuracy than other existing methods (see Sect. 3 in [9]). Nevertheless, the algorithm, as originally designed, cannot factorize exactly even simple polynomial matrices. In the present paper, we cast new light on the capabilities of the method by eliminating the above-mentioned flaw. Exact matrix spectral factorization is important as it may be used as a key step in the construction of certain wavelet or multiwavelet filter banks with high precision coefficients [20]. Furthermore, we construct a wide class of rational paraunitary matrices, including singular ones with some entries having zeros on the boundary; this construction process is of independent interest.
Let \(\mathcal {P}^+_N\) be the set of polynomials of degree at most N and for \(p(z)=\sum _{k=0}^Nc_kz^k\in \mathcal {P}^+_N\), let \(\widetilde{p}(z)=\sum _{k=0}^N\overline{c_k}z^{-k}\). Let also \(\mathcal {P}^-_N:=\{\widetilde{p}:p\in \mathcal {P}^+_N\}\). The core of the Janashia-Lagvilava method is a constructive proof of the following
Theorem
([19, Th. 1], [11, Th. 1]) Let F be a (Laurent) polynomial \(m\times m\) matrix of the form
where
Then, there exists a unique paraunitary matrix polynomial of the form
where \(u_{ij}(z)\in \mathcal {P}^+_N\), \(1\le i,j\le m\), with determinant 1,
satisfying
and such that
Here (1.6) means that FU is an \(m\times m\) matrix with the entries from \(\mathcal {P}^+_N\). A matrix polynomial U is called paraunitary if
where \(\widetilde{U}(z)=[\widetilde{u_{ji}}]\) for \(U(z)=[u_{ij}(z)]\), and \(I_m\) is the \(m\times m\) identity matrix.
Let \(\mathcal {R}\) be the set of rational functions in the complex plane, \(\mathcal {R}_+\subset \mathcal {R}\) be the set of rational functions with the poles outside the open unit disk \(\mathbb {D}:=\{z\in \mathbb {C}:|z|<1\}\), and let \(\mathcal {R}_-\subset \mathcal {R}\) be the set of rational functions with the poles inside \(\mathbb {D}\). It follows readily from well-known facts (see Sect. 5) that, in the above theorem, if \(\phi _j\in \mathcal {R}_-\) in (1.2), \(j=1,2,\ldots ,m-1\), then \(u_{ij}\in \mathcal {R}_+\) in (1.3), \(1\le i,j\le m\). Nevertheless, in the existing form, the Janashia-Lagvilava algorithm would find only a polynomial approximation to the rational entries \(u_{ij}\). In this paper, we construct them exactly. In particular, we provide a constructive proof of the following
Theorem 1.1
Let F be an \(m\times m\) matrix function of the form (1.2), where \(\phi _j\in \mathcal {R}_-\). Assume also that the poles of functions \(\phi _j\) are known exactly. Then one can explicitly construct the unique paraunitary matrix function U of the form (1.3), where
satisfying (1.4) and (1.5), such that
Here, for \(u\in \mathcal {R}\), it is assumed that \(\widetilde{u}(z)=\overline{u(1/\overline{z})}\), which coincides with the introduced definition for \(u\in \mathcal {P}^+_N\).
The first step in the Janashia-Lagvilava algorithm is a Cholesky-like factorization of (1.1):
where M is the lower triangular matrix with the corresponding scalar spectral factors on the diagonal:
If S is a polynomial matrix function, then the entries of M are rational functions. However, in general, M cannot be constructed exactly even for simple polynomial matrices S. The reason for this limitation is that, apart from elementary operations, constructing M entails the spectral factorization of certain polynomials (see, for instance, the beginning of Sect. 3 in [8]), which can only be performed approximately unless the polynomial’s degree is very low and its roots can be exactly determined. Nevertheless, there exist specific cases where the leading principal minors of S exhibit simple structure, allowing for the exact determination of their spectral factors, which are involved in the diagonal entries of M. Under these circumstances, all (rational) entries of M can be determined exactly. If, furthermore, the poles of the functions \(\xi _{ij}\) inside \(\mathbb {T}\) can be precisely identified, the assertion of the following theorem is that we can proceed with the exact spectral factorization of S. Specifically, leveraging Theorem 1.1, we establish.
Theorem 1.2
Let S be an \(r\times r\) polynomial matrix function which is positive definite (a.e.) on \(\mathbb {T}\), and let (1.9) be its lower-upper factorization. If the entries of (1.10) and the poles of the functions \(\xi _{ij}\) inside \(\mathbb {T}\) are known exactly, then the spectral factorization of S can also be found exactly.
Remark 1.1
We emphasize that the diagonal entries of (1.10) may have zeros on \(\mathbb {T}\); however, we do not require knowledge of their exact locations.
Remark 1.2
Observe that (1.10), in its turn, yields the exact factorization of the determinant. As demonstrated in [1], knowing the latter is necessary and sufficient for the more general Wiener–Hopf factorization to be carried out exactly. The algorithm described in [1], while allowing one to factorize exactly any polynomial matrix function with a factorable determinant (possessing an arbitrary set of partial indices, unstable ones among them), does not work in singular cases where the determinant has zeros on \(\mathbb {T}\). As mentioned in the previous remark, our algorithm does not have this restriction.
Polynomial paraunitary matrix functions play an important role in the theory of wavelet matrices and paraunitary filter banks (see [11, 26, 29]). They are also known as finite impulse response (FIR) lossless filter banks, and designing such filters with specific characteristics is of significant practical importance. In particular, the construction of a matrix FIR lossless filter with a given first row, known as the wavelet matrix completion problem, has a long history with various solutions proposed by a number of authors [7, 11, 16, 23, 25, 27]. Theorem 1.1 leads to a solution of the rational paraunitary matrix completion problem for a broad class of given first rows. Namely, we prove the following
Theorem 1.3
Let
be such that
If
then one can precisely construct a paraunitary matrix V with the first row (1.11).
Theorem 1.3 enables us to design rational lossless filter banks with preassigned zeros and poles of the entries in the first row.
The paper is organized as follows: after notation (Sect. 2) and preliminary observations (Sect. 3), we prove some auxiliary lemmas in Sect. 4. Proofs of Theorems 1.1 and 1.2 are given in Sects. 5 and 6, respectively. The matrix completion problem is solved in Sect. 7, while the last Sect. 8 provides some numerical examples of exact spectral factorization and rational paraunitary matrix construction.
2 Notation and Definitions
This section summarizes the notation used in the paper, with some already introduced in the introduction.
Let \(\mathbb {T}:=\{z\in \mathbb {C}:|z|=1\}\) be the unit circle in the complex plane,
For a set \(\mathcal {S}\), let \(\mathcal {S}^{m\times n}\) be the set of \(m\times n\) matrices with the entries from \(\mathcal {S}\). Accordingly, the term ‘matrix function’ denotes matrices with entries as functions, or, equivalently, functions that yield matrices as values. \(I_m=\textrm{diag}(1,1,\ldots ,1)\in \mathbb {C}^{m\times m}\) stands for the \(m\times m\) identity matrix and \(0_{m\times n}\) is the \(m\times n\) matrix consisting of zeros. For a matrix (or a matrix function) \(M=[M_{ij}]\), \(M^T=[M_{ji}]\) denotes its transpose, and \(M^*=[\overline{M_{ji}}]\) denotes its Hermitian conjugate, while \([M]_{m\times m}\) stands for its upper-left \(m\times m\) principal submatrix. On the other hand, for \(a\in \mathbb {C}\), we let \(a^\star =\overline{1/a}\).
Let \(\mathcal {P}\) be the set of Laurent polynomials with the coefficients in \(\mathbb {C}\):
We also consider the following subsets of \(\mathcal {P}\): \(\mathcal {P}^+\), \(\mathcal {P}^-\), \(\mathcal {P}_N\), \(\mathcal {P}^+_N\) and \(\mathcal {P}^-_N\), where N is a non-negative integer, which correspond to the cases \(k_1=0\), \(k_2=0\), \(-N=k_1\le k_2=N\), \(0=k_1\le k_2=N\), and \(-N=k_1\le k_2=0\) in (2.1), respectively. So, \(\mathcal {P}^+\) is the set of usual polynomials, and \(\mathcal {P}^+_N\) is the set of polynomials of degree less than or equal to N.
The set of rational functions \(\{f=p/q: p,q\in \mathcal {P}\}\) is denoted by \(\mathcal {R}\), and \(\mathcal {R}_+\) (resp. \(\mathcal {R}_-\)) stands for the rational functions which are analytic, i.e. without poles, in \(\mathbb {T}_+\) (resp. in a neighbourhood of \(\mathbb {T}_-\cup \mathbb {T}\)). We assume that functions from \(\mathcal {R}_-\) vanish at \(\infty \) and constant functions belong to \(\mathcal {R}_+\), so that \(\mathcal {R}=\mathcal {R}_+\oplus \mathcal {R}_-\), i.e., every \(f\in \mathcal {R}\) can be uniquely decomposed as
where \(f^-\in \mathcal {R}_-\) and \(f^+\in \mathcal {R}_+\).
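For rational functions given explicitly, this decomposition can be carried out exactly via partial fractions, sorting the terms by pole location. A small symbolic sketch (the sample function and variable names are ours):

```python
import sympy as sp

z = sp.symbols('z')
f = 1 / ((z - sp.Rational(1, 2)) * (z - 2))   # sample rational function (ours)

f_minus = sp.S.Zero   # part with poles inside the unit disk, vanishing at oo
f_plus = sp.S.Zero    # part analytic in the closed unit disk
for term in sp.Add.make_args(sp.apart(f, z)):
    poles = sp.roots(sp.denom(term), z)
    if poles and all(abs(p) < 1 for p in poles):
        f_minus += term        # pole inside the unit disk -> R_-
    else:
        f_plus += term         # no pole, or pole outside -> R_+

assert sp.simplify(f_minus + f_plus - f) == 0
# here f^- = -(2/3)/(z - 1/2), in agreement with the residue at z = 1/2
assert sp.simplify(f_minus + sp.Rational(2, 3) / (z - sp.Rational(1, 2))) == 0
```

Constant terms, having no poles, are routed to \(\mathcal {R}_+\), matching the convention that constants belong to \(\mathcal {R}_+\) while functions in \(\mathcal {R}_-\) vanish at \(\infty \).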
For \(f\in \mathcal {R}\), it is assumed that \(\widetilde{f}(z)=\overline{f(1/\overline{z})}=\overline{f(z^\star )}\), and for \(F=[F_{ij}]\in (\mathcal {R})^{m\times n}\) it is assumed that \(\widetilde{F}=[\widetilde{F_{ji}}]\in (\mathcal {R})^{n\times m}\). Note that \(\widetilde{f}(z)=\overline{f(z)}\) and \(\widetilde{F}(z)=F^*(z)\) for \(z\in \mathbb {T}\).
Of course,
A matrix function \(U\in \mathcal {R}^{m\times m}\) is called paraunitary if
(when we write an equation involving rational functions, we assume it holds wherever the rational functions are defined, which is everywhere except for their poles). Note that \(U\in \mathcal {R}^{m\times m}\) is paraunitary if and only if U(z) is unitary (i.e., \(U(z)U^*(z)=I_m\)) for each \(z\in \mathbb {T}\).
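A standard way to generate nontrivial examples is the degree-one lossless building block \(U(z)=(I_m-vv^*)+vv^*z\) with \(\Vert v\Vert =1\), well known from the filter-bank literature (not taken from this paper); a quick numerical check of the unitarity criterion above:

```python
import numpy as np

# degree-one lossless building block: U(z) = (I - vv*) + vv* z with ||v|| = 1
v = np.array([1.0, 2.0j]) / np.sqrt(5.0)
P = np.outer(v, v.conj())                      # rank-one orthogonal projection vv*

def U(z):
    return (np.eye(2) - P) + P * z

# U is paraunitary: U(z) is unitary at every point of the unit circle
for theta in np.linspace(0.0, 2.0 * np.pi, 7):
    Uz = U(np.exp(1j * theta))
    assert np.allclose(Uz @ Uz.conj().T, np.eye(2))
```

The identity \((I-P+Pz)(I-P+P\bar{z})=I\) on \(|z|=1\) follows from \(P^2=P\) and \((I-P)P=0\).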
A matrix function \(S\in \mathcal {R}^{m\times m}\) is called positive definite, if \(S(z)\in \mathbb {C}^{m\times m}\) is positive definite for each \(z\in \mathbb {T}\), except for some isolated points (where the determinant of S might be equal to zero, or some entries of S might have a pole).
If f is an analytic function in a neighborhood of \(a\in \mathbb {C}\), then the k-th coefficient of its Taylor series expansion is denoted by \(c_k^+\{f,a\}\).
The notations \(\langle \cdot ,\cdot \rangle _{\mathbb {C}^m}\) and \(\Vert \cdot \Vert _{\mathbb {C}^m}\) denote the standard scalar product and norm on \(\mathbb {C}^m\), respectively. The symbol \(\delta _{ij}\) stands for the Kronecker delta, i.e., \(\delta _{ij}=1\) if \(i=j\) and \(\delta _{ij}=0\) otherwise, and \(e_j=(\delta _{1j},\delta _{2j},\ldots ,\delta _{mj})^T \) is a standard basis vector of \(\mathbb {C}^{m\times 1} \).
Throughout this paper, we understand the term ‘construct’ as ‘finding the object exactly’, with its precise meaning emerging from the surrounding context. For instance, ‘constructing’ \(p\in \mathcal {P}\) amounts to finding its coefficients exactly, while ‘constructing’ \(f\in \mathcal {R}\) means determining coprime p and q such that \(f=p/q\). In theory, we can also find exact solutions for certain problems, such as determining Laurent series coefficients or solving linear equations with known matrices and vectors. These facts will be employed implicitly in the subsequent sections.
3 Preliminary Observations
3.1
We will need the following simple observation. Let
be a system of n equations with unknowns \(z_1, z_2,\ldots ,z_n\). It is equivalent to the following \(2n\times 2n\) system of equations with the unknowns \(x_i=\Re (z_i)\) and \(y_i=\Im (z_i)\), \(i=1,2,\ldots ,n\),
where \(a^r=\Re (a)\), \(a^i=\Im (a)\), and the same for b and c.
Remark 3.1
If system (3.1) has a unique solution, then system (3.2) has a unique solution as well and, therefore, the determinant of its \(2n\times 2n\) coefficient matrix is nonzero.
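Assuming system (3.1) has the standard form \(\sum _j a_{ij}z_j+b_{ij}\overline{z_j}=c_i\) suggested by the notation above, the passage to the real system can be sketched as follows (the function and the test instance are our own illustration):

```python
import numpy as np

def realify(A, B, c):
    """Real 2n x 2n form of the complex system A z + B conj(z) = c,
    in the unknowns x = Re z and y = Im z (cf. the passage to (3.2))."""
    top = np.hstack([A.real + B.real, -A.imag + B.imag])   # real parts of equations
    bot = np.hstack([A.imag + B.imag,  A.real - B.real])   # imaginary parts
    return np.vstack([top, bot]), np.concatenate([c.real, c.imag])

# round-trip check on a random solvable instance
rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = 0.1 * (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
z = rng.standard_normal(n) + 1j * rng.standard_normal(n)
M, rhs = realify(A, B, A @ z + B @ z.conj())
xy = np.linalg.solve(M, rhs)
assert np.allclose(xy[:n] + 1j * xy[n:], z)
```

Since the instance has a unique solution, the \(2n\times 2n\) coefficient matrix `M` is nonsingular, in line with Remark 3.1.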
3.2
If a matrix polynomial \(S\in (\mathcal {P}_N)^{m\times m}\) is positive definite, then spectral factorization (1.1) has the form
where \(S_+\in (\mathcal {P}_N^+)^{m\times m}\). Under the usual requirement that \(S_+\) is nonsingular on \(\mathbb {T}_+\), the spectral factor is unique up to a constant right unitary multiple. This result is known as the Polynomial Matrix Spectral Factorization Theorem, and its elementary proof is available in [8].
Since every positive definite matrix function \(R\in \mathcal {R}^{m\times m}\) can be represented as a ratio \(R=S/P\), where \(S\in \mathcal {P}^{m\times m}\) and \(P\in \mathcal {P}\) are positive definite, the spectral factorization theorem (along with the uniqueness statement) can be extended to the rational case:
Theorem 3.1
If \(R\in \mathcal {R}^{m\times m}\) is positive definite, then there exists a unique (up to a constant right unitary multiple) \(R_+\in \mathcal {R}_+^{m\times m}\) such that \(\det R_+(z)\not =0\) for each \(z\in \mathbb {T}_+\) and
for each z where both sides of (3.3) are defined.
Remark 3.2
To be specific, if \(R_+\) and \(Q_+\) are two spectral factors of R, then there exists a unitary matrix \(U\in \mathbb {C}^{m\times m}\) such that \(R_+(z)=Q_+(z)U\).
Remark 3.3
Factorization (3.3) provides also the spectral factorization of the determinant
3.3
Knowing the coefficients \(f_{1k}\) of the expansion of an analytic function f in a neighborhood of \(a\in \mathbb {C}\),
we can (explicitly) compute the coefficients of the expansion of its l-th power, \(f^l\):
in the same neighborhood for each \(l\ge 1\) by the following recursive formula
For the sake of notational convenience, we also assume that \(f_{0k}=\delta _{0k}\) for \(k=0,1,2,\ldots \).
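The recursive computation of the coefficients of \(f^l\) can equivalently be realized by repeated truncated Cauchy products; a minimal sketch (ours, not a transcription of the paper's recursive formula):

```python
import numpy as np

def power_coeffs(f, l, K):
    """First K Taylor coefficients of f(z)**l, given the coefficients of f,
    by repeated truncated Cauchy products (with f^0 = 1, i.e. f_{0k} = delta_{0k})."""
    g = np.zeros(K)
    g[0] = 1.0                                  # coefficients of f^0 = 1
    for _ in range(l):
        g = np.convolve(g, f)[:K]               # multiply by f, truncate to K terms
    return g

# binomial check: (1 + z)^3 = 1 + 3z + 3z^2 + z^3
assert np.allclose(power_coeffs(np.array([1.0, 1.0]), 3, 4), [1, 3, 3, 1])
```

Each pass costs one truncated convolution, so all powers \(f^0,\ldots ,f^N\) needed later can be accumulated in a single loop.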
If \(b\in \mathbb {T}_+\) and a function \(\widetilde{u}\in \mathcal {R}_-\) has the form
then
If now \(a\in \mathbb {T}_+\) (\(a=b\) is not excluded), we have the expansion
in a neighborhood of a (to be specific, for \(|z-a|<|b^\star -a|\)) and hence (using \(z=z-a+a\))
Applying now formulas (3.4), (3.5), we can expand (3.6) in the same neighborhood of a as
where the coefficients \(f_{lk}^{ab}\), which depend on a and b, can be recursively computed for each \(l=0,1,\ldots ,N\) and \(k=0,1,\ldots \)
Thus, in order to compute the first L coefficients \(c^+_0, c^+_1,\ldots ,c^+_{L-1}\) of the expansion of the function (3.6) in the neighborhood of a, one can use the linear transformation
where \(A^{ab}_{LN}\) is an \(L\times {(N+1)}\) matrix whose kl-th entry is equal to
We emphasize that the entries of \(A^{ab}_{LN}\) depend only on a, b, L, and N.
4 Some Auxiliary Lemmas
For given rational functions \(\phi _j\in \mathcal {R}_-\), \(j=1,2,\ldots ,m-1\), let us consider the following set of conditions, which originates from the Janashia–Lagvilava method (cf. [11, eq. (31)]) and plays an essential role in the proposed constructions:
We say that a vector function \(X=(x_1, x_2,\ldots ,x_m)^T\in (\mathcal {R}_+)^{m\times 1}\) is a solution of (4.1) if its coordinate functions are free of poles on \(\mathbb {T}\) and satisfy the conditions in (4.1).
Lemma 4.1
(cf. [11, Lemma 3]) If X and Y are two solutions of (4.1) (not necessarily different), then
Proof
Since both X and Y are solutions, we have in particular:
Taking the linear combination of these conditions with the weights \(-x_1,\ldots ,-x_{m-1}\) and \(y_m\), respectively:
Since the conditions on X and Y are symmetric, we also have
and since the functions \(x_i\) and \(y_i\) do not have poles on \(\mathbb {T}\) by definition, relation (2.3) implies (4.2). \(\square \)
The set of solutions \(\mathcal {S}_m\) of (4.1) is not a linear space. In order to make it linear, we need to modify it and consider
Then \( \mathcal {S}_{\widetilde{m}}\) becomes a linear space in the usual sense: X, \(Y\in \mathcal {S}_{\widetilde{m}}\) \(\Rightarrow \) \(\alpha X+\beta Y\in \mathcal {S}_{\widetilde{m}}\) for each \(\alpha , \beta \in \mathbb {C}\), where \(\alpha (x_1,\ldots ,x_{m-1},\widetilde{x_m})^T= (\alpha x_1,\ldots ,\alpha x_{m-1},\alpha \widetilde{x_m})^T\) (not \(\overline{\alpha }\widetilde{x_m}\) in the last position). From now on, slightly abusing the notation, we may also call \( \mathcal {S}_{\widetilde{m}}= \mathcal {S}_{\widetilde{m}}(\phi _1,\phi _2,\ldots ,\phi _{m-1})\) the space of the solutions of (4.1) (along with \(\mathcal {S}_m\)) and denote its elements \((x_1,\ldots ,x_{m-1},\widetilde{x_m})^T\) by \(\hat{X}\).
Since \(\widetilde{x}(z)=\overline{x(z)}\) for each \(z\in \mathbb {T}\), Lemma 4.1 implies the following
Corollary 4.1
If \(\hat{X}\) and \(\hat{Y}\) are two solutions of (4.1), then \(\langle \hat{X}(z),\hat{Y}(z)\rangle _{\mathbb {C}^m}\) is constant on \(\mathbb {T}\). In particular, \(\Vert \hat{X}(z)\Vert _{\mathbb {C}^m}\) is constant on \(\mathbb {T}\) and if \(\hat{X}(z)=0\) for some \(z\in \mathbb {T}\), then \(\hat{X}\equiv 0\) and \(X\equiv 0\).
Corollary 4.2
Let \(\hat{X}_1,\hat{X}_2,\ldots ,\hat{X}_m\) be m solutions of the system (4.1) such that
and let
Then
is the unique solution of the system (4.1) for which \( \hat{X}(1)=C.\)
5 Constructive Proof of Theorem 1.1
For F defined by (1.2), consider the positive definite matrix function
Due to the spectral factorization theorem for rational matrix functions (see Theorem 3.1 and Remark 3.2), there exists a spectral factorization (3.3) of (5.1) such that
Remark 5.1
We emphasize that such a factor \(R_+(z)\) is unique (see Remark 3.2).
Since \(\det R(z)=1=\det F(z)\) and spectral factorization yields the factorization of the determinant as well (see Remark 3.3), we have
and, due to (5.2), this constant is equal to 1. Therefore,
Consequently, the matrix function
which is paraunitary since
has the determinant equal to 1,
Let \(U(z)=[U_{ij}(z)]_{i,j=1}^m\), and let us investigate its further properties.
If we write the inverse of the matrix (1.2) explicitly as
it follows from (5.4) that
Since
we have
i.e., the entries of the last row of U are in \(\{u\in \mathcal {R}: \widetilde{u}\in \mathcal {R}_+\}\) (note that since U is a unitary matrix function on \(\mathbb {T}\), it does not have poles on \(\mathbb {T}\)). Consequently, we can modify the notation for the entries of U and assume that it has the form (1.3), where (1.7) holds. Thus the matrix function (5.4) satisfies the conditions of Theorem 1.1 and, taking into account that
(see (5.2)) and such a U is unique (see Remark 5.1), we will construct it explicitly.
First observe that the columns of U satisfy the system of conditions (4.1). Indeed, since \(R_+^{-1}\in \mathcal {R}_+\) as a spectral factor and \(U^{-1}=\widetilde{U}\), it follows from (5.4) that
Thus, if we write the product \(\widetilde{U}F^{-1}\) explicitly, taking into account equations (5.5) and
we obtain the first \(m-1\) conditions in (4.1) for each column of U. The last condition directly follows from the relations \(FU=R_+\in \mathcal {R}_+\).
Remark 5.2
We emphasize that the columns of U are solutions of the system (4.1).
Suppose now that the functions \(\phi _i\in \mathcal {R}_-\) are of the form
i.e., \(\phi _i\) has poles at \(a_{i1}, a_{i2},\ldots , a_{i,n_i}\), \(|a_{ik}|<1\), of orders \(N_{i1}, N_{i2},\ldots , N_{i,n_i}\), respectively. Assume also that
\(1\le \nu \le n_m\), i.e., we combine the poles of all the functions \(\phi _i\), \(i=1,2,\ldots ,m-1\), and set their maximal order at each pole as the order of that pole.
For each \(j=1,2,\ldots ,m\), we construct the j-th column of U. To this end, assume that j is fixed and let \(u_i=u_{ij}\), \(i=1,2,\ldots ,m\). Since functions \(\widetilde{u_i}\), \(1\le i\le m\), have poles only in \(\mathbb {T}_+\) (see (1.7)), it can be observed from the first \(m-1\) equations of (4.1) that functions \(u_i\), \(1\le i<m\), have the form
and it follows from the last relation of (4.1) that
The functions (5.6), for \(i=1,2,\ldots ,m-1\), and (5.7) overall contain
unknown coefficients:
We will construct a linear algebraic system of equations (with these coefficients as unknowns) consisting of the same number of equations. Indeed, m equations can be obtained from the relation that the j-th column of U(1) is equal to \(e_j\) (see (1.5)), namely we have
In addition, for each function \(\phi _i\), \(i=1,2,\ldots ,m-1\), considering the i-th relation of the system (4.1), which is satisfied by \(u_m\) and \(u_i\),
and equating the \(N_{ik}\) negative-indexed coefficients to 0 in the Laurent expansion of the function in (5.11) in a neighborhood of \(a_{ik}\), where \(1\le k\le n_i\), we get the following \(N_{ik}\) equations written in matrix form
Using equation (5.7) and the relations described by (3.7), we can substitute
in (5.12). The resulting system will contain only the unknowns (5.9). Performing the same procedure for each pole \(a_{ik}\) of \(\phi _i\), we obtain \(\sum \nolimits _{k=1}^{n_i}N_{ik}\) equations for each \(i=1,2,\ldots ,m-1\), and thus in total
additional equations.
Consider now the solution of the last condition in (4.1):
Equating again the negative-indexed coefficients to 0 in the Laurent expansion of the above function in a neighborhood of \(a_{mk}\), for each \(k=1,2,\ldots ,n_m\), we get the following \(N_{mk}\) equations
Here, the summation is over those indices i for which \(a_{i\nu }=a_{mk}\) for some \(\nu \le n_i\), and it is assumed that \(\gamma _{i\nu l}=0\) if \(N_{i\nu }<N_{mk}\) and \(N_{i\nu }<l\le N_{mk}\). Again, we can eliminate the extra unknowns in the above equations by making the substitutions (see (5.6) and (3.7))
This way, we get
additional equations, and summing up m (the number of equations in (5.10)) with \(m_1\) in (5.13) and \(m_2\) in (5.15), we get \(m_0\) in (5.8). Consequently, we can construct the system of \(m_0\) algebraic equations in the \(m_0\) unknowns (5.9). Some of these unknowns enter the equations together with their conjugates, such as \(\overline{C_i}\) or \(\overline{C_{ikl}}\); however, the existence and uniqueness of the solution to this system is known beforehand and, taking into account Remark 3.1, these unknowns can be found explicitly in the standard way.
6 Constructive Proof of Theorem 1.2
The proof of Theorem 1.2, which applies Theorem 1.1, closely follows the approach outlined in previous publications [12, 19]. However, it is crucial to note that the intermediate terms obtained here can be computed exactly, which is a central focus of this paper. In particular, throughout this section, by ‘constructing’ we understand ‘computing exactly’.
Note that the triangular factor M in (1.10) can be constructed within the field of rational functions using a process similar to the Cholesky factorization algorithm for standard numerical matrices. The key difference is that, instead of extracting square roots as the standard algorithm requires, one performs scalar spectral factorization. It is assumed that the poles indicated in the theorem can be precisely determined during the construction of such M. Observe also that the determinant of (1.10) can be analytically extended inside \(\mathbb {T}\) everywhere and
since it is required that the diagonal functions \(f_m\), \(m=1,2,\ldots \), are spectral factors.
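The Cholesky-like step, with scalar spectral factorization playing the role of the square root, can be illustrated on a toy example (the matrix below is ours; its leading minors are constants, so their spectral factors are ordinary positive square roots):

```python
import sympy as sp

z = sp.symbols('z')
S = sp.Matrix([[2, z], [1/z, 2]])   # toy matrix, positive definite on |z| = 1

def tilde(M):
    # adjoint on the circle for real coefficients: transpose and z -> 1/z
    return M.T.subs(z, 1/z)

# Cholesky-like elimination; scalar spectral factorization replaces the
# square root.  Here the leading minors (2 and det S = 3) are constants,
# so their spectral factors are just positive square roots.
m11 = sp.sqrt(2)
m21 = S[1, 0] / m11                            # S_21 / tilde(m11), tilde(m11) = m11
m22 = sp.sqrt(S[1, 1] - m21 * m21.subs(z, 1/z))  # = sqrt(3/2)
M = sp.Matrix([[m11, 0], [m21, m22]])

assert (M * tilde(M) - S).applyfunc(sp.simplify) == sp.zeros(2, 2)
```

Note that the off-diagonal entry of `M` is the Laurent term \(1/(\sqrt{2}\,z)\), a rational function, exactly as described above.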
The spectral factor \(S_+\) can be represented as the product
where each matrix \(\textbf{U}_m\) is paraunitary and has the following block matrix form
Matrices (6.3) are constructed recursively in such a way that \([M_m]_{m\times m}=:([S]_{m\times m})_+\) is a spectral factor of \([S]_{m\times m}\), where
(the term \(\textbf{U}_1=I_r\) is omitted as \([M]_{1\times 1}=f_1^+\) is already a spectral factor of \([S]_{1\times 1}\); see (1.10)). This can be achieved by using Theorem 1.1. Indeed, let us assume that \(\textbf{U}_2, \textbf{U}_3, \ldots ,\textbf{U}_{m-1}\) are already constructed so that
Since the matrices (6.3) are paraunitary and have a special structure, we also have
Furthermore, \([M_{m-1}(z)]_{m\times m}\) has the following block matrix form
where \([\zeta ]_{1\times (m-1)}:=[\zeta _1, \zeta _2, \ldots , \zeta _{m-1}]\), \([\phi ^\pm ]_{1\times (m-1)}:=[\phi ^\pm _1, \phi ^\pm _2, \ldots , \phi ^\pm _{m-1}]\) and
is the decomposition of a rational function according to the rule (2.2). The first two factors on the right-hand side of (6.5) belong to \((\mathcal {R}_+)^{m\times m}\) and their determinants are free of zeros in \(\mathbb {T}_+\). Assuming now that F is the last matrix in (6.5) and applying Theorem 1.1, we can find a paraunitary matrix \(U_m=U\) of the form (1.3), satisfying (1.7) and (1.4), such that (1.8) holds. Hence,
is a spectral factor of \([S]_{m\times m}\), and equation (6.4) remains valid if we change \((m-1)\) to m. Note that, although the factors in (6.6) are merely rational matrices, the product \([M_m]_{m\times m}=([S]_{m\times m})_+\in (\mathcal {P}_+)^{m\times m}\) due to the polynomial spectral factorization theorem (see Sect. 3.2).
Thus, if we accordingly construct all the matrices \(\textbf{U}_2, \textbf{U}_3,\ldots ,\textbf{U}_r\) in (6.2), we obtain a spectral factor \(S_+\).
Remark 6.1
In this paper, our focus is on exact factorization. If we cannot handle the factor (1.10) described in Theorem 1.2 exactly, we can still obtain its entries and the corresponding poles inside \(\mathbb {T}\) with any prescribed accuracy. This allows us to approximate M by a rational matrix function, say \(\hat{M}\). We can then proceed with \(\hat{M}\) and apply the algorithm as previously described. This results in an approximate spectral factor \(\hat{S}_+\). We anticipate that the convergence \(\hat{S}_+\rightarrow S_+\) should occur under the condition that \(\hat{M}\rightarrow M\), which will be the focus of our future work.
In our opinion, the ideas developed in this section can also be used to factorize matrices depending on a parameter.
7 Completion of Paraunitary Matrices
In this section we prove Theorem 1.3. Since a matrix V is paraunitary if and only if \(V^T\) is paraunitary, for notational convenience, we assume the given data is the first column of the matrix and construct its completion. Hence we assume that
is given which satisfies \(\widetilde{V_1}V_1=1\) and we obtain a paraunitary matrix V with the first column \(V_1\). This completion follows the same idea as demonstrated in [11, Sect. 5] for polynomial matrices: first we construct a rational row vector
and then, applying the procedures described in the proof of Theorem 1.1, we construct the corresponding completion. To this end, we consider the same system of conditions as (4.1),
however, now we treat \(v_i\in \mathcal {R}_+\) (for \(1\le i\le m\)) as known functions and \(\phi _j\in \mathcal {R}_-\) (for \(1\le j <m\)) as unknown functions. A necessary and sufficient condition for the existence of such a solution is provided by the following
Lemma 7.1
For given functions \(v_i\in \mathcal {R}_+\), \(i=1,2,\ldots ,m\), satisfying
there exists a solution \((\phi _1,\phi _2,\ldots ,\phi _{m-1})^T\) of system (7.3) of the form (7.2), with \(\phi _j\in \mathcal {R}_-\), \(j=1,2,\ldots ,m-1\), if and only if there exist functions \(h_i\in \mathcal {R}_+\), \(1\le i\le m\), such that
Proof
General solutions of the first \(m-1\) conditions in (7.3) have the form
Substituting these relations into the last condition of (7.3), we get
Hence,
and, by virtue of (7.4),
Thus, if we introduce the notation \(h_i=-\psi _i v_m\), \(i=1,2,\ldots ,m-1\), where \(h_i\in \mathcal {R}_+\) due to the last condition in (7.6), we get (7.5) and the first part of the lemma is proved.
Suppose now that (7.5) holds and define functions
Then the first \(m-1\) conditions in (7.3) are satisfied and, for the last condition, we have
Hence the last condition of (7.3) is also satisfied and the lemma is proved. \(\square \)
Remark 7.1
It is well known that condition (1.13) is equivalent to (7.5) being satisfied for some polynomials \(h_i\in \mathcal {P}_+\), \(1\le i\le m\). Moreover, these polynomials can be found explicitly by solving the corresponding system of linear algebraic equations.
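In the simplest situation of two coprime polynomials, such \(h_i\) are produced by the extended Euclidean algorithm, which indeed amounts to solving a linear system in the unknown coefficients; a small symbolic illustration (the polynomials are our own example):

```python
import sympy as sp

z = sp.symbols('z')
p = z**2 + 1                 # two coprime sample polynomials (ours)
q = z - 2
h1, h2, g = sp.gcdex(p, q)   # extended Euclid: h1*p + h2*q = g = gcd(p, q)

assert g == 1                              # coprime, so the gcd is 1
assert sp.expand(h1 * p + h2 * q - g) == 0 # Bezout identity holds exactly
```

Here `gcdex` returns \(h_1=1/5\) and \(h_2=-(z+2)/5\), since \(p-(z+2)(z-2)=5\).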
We proceed with the proof of Theorem 1.3 as follows. Suppose
We can complete C to a unitary matrix W, i.e., we can construct a unitary matrix W whose first column is C.
Lemma 7.1 guarantees that we can construct a row vector (7.2) which is a solution of system (7.3). Use these \(\phi _i\), \(i=1,2,\ldots ,m-1\), to construct a matrix function F defined by equation (1.2), and let U be the corresponding paraunitary matrix determined according to Theorem 1.1. We have that the columns
of the matrix U are solutions of system (4.1) (see Remark 5.2) and
(see (1.5)). Hence
is a solution of system (4.1) which satisfies \(V_C(1)=C\). However, such a solution of system (4.1) is unique (see Corollary 4.2). Therefore, \(V_C\) defined by (7.7), which is the first column of \( U(z)\cdot W\), coincides with (7.1). Obviously, the matrix
is paraunitary, which completes the proof of Theorem 1.3.
8 Numerical Examples
8.1
In this section, we provide some examples of exact constructions which rely on the methods presented in this paper. First we factorize the following matrix
which is positive definite; however, it has a singularity (of order 4) at the isolated point \(z=1\). In particular, \(\det S(z)=-z^{-2}+2-z^2=(z^{-2}-1)(z^2-1)\). Approximate factorizations of this matrix, varying in both speed and accuracy, are given in [12, 19]. Now, by using Theorem 1.1, we can factorize (8.1) exactly, as it admits the exact lower-upper factorization (1.9) with
where \(a=\sqrt{3-\sqrt{5}}\) and \(b=\sqrt{3+\sqrt{5}}\). Arguing as in (6.5), the matrix (8.2) can be represented as
where \(\phi =(7+22z+11z^2)/(a+bz)\). The function \(\phi \) has a single pole in \(\mathbb {T}_+\) at \(z_0=-a/b=(\sqrt{5}-3)/2\) with the residue \(\gamma _0=(25-11\sqrt{5})/2.\) Therefore, the function \(\phi \) can be split into \(\phi =\phi ^++\phi ^-\), where \(\phi ^-(z)=\gamma _0/(z-z_0)\) and \(\phi ^+\in \mathcal {R}_+\). Hence, we have to take the \(2\times 2\) matrix function
in the role of (1.2) and construct a paraunitary matrix U according to Theorem 1.1. Postmultiplying M by U, we get a spectral factor
Formulas (5.6) and (5.7) suggest that we have to search solutions \(u_1\) and \(u_2\) of the system (4.1) in the form
Consequently,
(since we deal with real coefficients, we do not use the conjugate sign for \(C_i\)) and the system derived from the equations (1.5), (5.12), and (5.14) has the form
For \(u_{12}\) and \(u_{22}\), we just have to change the right-hand side of (8.4) to \((0,1,0,0)^T\).
The code was designed to perform exact arithmetic in the quadratic field \(\mathbb {Q}(\sqrt{5})\), and the following solutions of (8.4) and its companion system were obtained by Gaussian elimination: \(C_1=(35-7\sqrt{5})/20\), \(C_{11}=(-11+5\sqrt{5})/4\), \(C_2=(5-\sqrt{5})/20\), \(C_{21}=(-3+\sqrt{5})/4\) for \(u_{11}\), \(u_{21}\), and \(C_1=(-5+\sqrt{5})/20\), \(C_{11}=(3-\sqrt{5})/4\), \(C_2=(35-7\sqrt{5})/20\), \(C_{21}=(-11+5\sqrt{5})/4\) for \(u_{12}\), \(u_{22}\). Hence the unitary matrix (1.3) was constructed
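Exact arithmetic in \(\mathbb {Q}(\sqrt{5})\) can be implemented by representing each element as \(p+q\sqrt{5}\) with rational \(p,q\); the minimal sketch below (our own illustration, not the code used for the computations above) covers the field operations needed by Gaussian elimination.

```python
from fractions import Fraction

class Q5:
    """An element p + q*sqrt(5) of Q(sqrt(5)), with rational p and q."""
    def __init__(self, p, q=0):
        self.p, self.q = Fraction(p), Fraction(q)
    def __add__(self, o):
        return Q5(self.p + o.p, self.q + o.q)
    def __sub__(self, o):
        return Q5(self.p - o.p, self.q - o.q)
    def __mul__(self, o):   # (p + q*r5)(p' + q'*r5) = pp' + 5qq' + (pq' + qp')*r5
        return Q5(self.p * o.p + 5 * self.q * o.q, self.p * o.q + self.q * o.p)
    def inv(self):          # 1/(p + q*r5) = (p - q*r5)/(p^2 - 5*q^2)
        n = self.p**2 - 5 * self.q**2
        return Q5(self.p / n, -self.q / n)
    def __truediv__(self, o):
        return self * o.inv()
    def __eq__(self, o):
        return self.p == o.p and self.q == o.q

# e.g. C1 = (35 - 7*sqrt(5))/20 from the solutions above:
C1 = Q5(Fraction(35, 20), Fraction(-7, 20))
assert C1 / C1 == Q5(1)                    # division is exact, no rounding
assert Q5(1, 1) * Q5(1, -1) == Q5(-4)      # (1 + r5)(1 - r5) = 1 - 5 = -4
```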
where \(c=b(5-\sqrt{5})/20=\sqrt{3+\sqrt{5}}\sqrt{25-10\sqrt{5}+5}/20=2\sqrt{10}/20=1/\sqrt{10}\).
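The simplification of c is quickest to double-check by squaring: \(c^2=(3+\sqrt{5})(30-10\sqrt{5})/400=40/400=1/10\). A numerical confirmation:

```python
from math import sqrt, isclose

b = sqrt(3 + sqrt(5))
c = b * (5 - sqrt(5)) / 20
assert isclose(c, 1 / sqrt(10), rel_tol=1e-12)   # c = 1/sqrt(10)
assert isclose(c * c, 0.1, rel_tol=1e-12)        # equivalently c^2 = 1/10
```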
The spectral factor (8.3) is equal to the product of (8.2) and (8.5). Since the theory guarantees that the entries of \(S_+\) are polynomials of degree 2, the obtained rational functions must in fact be polynomials, so the divisions can be carried out exactly. Indeed, we get
a result that can be verified through direct multiplication.
8.2
In the next example, we construct a paraunitary matrix with the first row
One can check that \(|v_1(z)|^2+|v_2(z)|^2+|v_3(z)|^2=1\) for each \(z\in \mathbb {T}\), i.e., condition (1.12) of Theorem 1.3 is satisfied, and we follow the procedure described in its proof to carry out this construction.
Since \(v_3(z)=(z+1)/(5z+6)\) is analytic in \(\mathbb {T}_+\) together with its inverse, the solution \(\phi _1, \phi _2\) of the system (7.3) can be identified by
where \(\gamma _1=11/12\), \(\gamma _2=-11/36\), and \(z_0=-5/6\). Thus, we construct the matrix function F of the form (1.2) and search for the corresponding paraunitary matrix (1.3). Since \(z_0=-5/6\) is a single pole of functions \(\phi _1\) and \(\phi _2\), we search the solutions \((u_1,u_2,u_3)\) of the system (4.1) in the form
and the system derived from the equations (1.5), (5.12), and (5.14) has the form
Taking \(i=1,2\), and 3 in the system (8.7), we get the coefficients of the entries of the first, second and third columns of (1.3). Thus we obtain
For the function \(V_1\) defined by (8.6), we have \(V_1(1)=\frac{1}{11}(6,9,2)\). We used the Cayley transform \(V=(I-A)(I+A)^{-1}\) and searched for a skew-symmetric matrix A which satisfies the conditions \((I-A)_1=V_1(1)(I+A)\), where \((I-A)_1\) is the first row of \(I-A\). This provides the orthogonal matrix W with rational entries and the first column \(V_1^T(1)\):
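The Cayley-transform step can be reproduced numerically. Writing A with three unknown entries, the conditions above reduce to linear equations that leave a one-parameter family of skew-symmetric solutions; the sketch below (our own illustration with one particular parameter choice, not necessarily the matrix used here) picks one solution and checks that \(W=(I-A)(I+A)^{-1}\) is orthogonal and carries \(V_1(1)\) in its first row, so that \(W^T\) has first column \(V_1^T(1)\).

```python
import numpy as np

# V1(1) = (6, 9, 2)/11 from the example above.
v = np.array([6.0, 9.0, 2.0]) / 11

# Skew-symmetric A = [[0, a, b], [-a, 0, c], [-b, -c, 0]].  The condition
# (I - A)_1 = V1(1) (I + A) gives two independent linear equations in
# (a, b, c); (a, b, c) = (0, -5/2, 9/2) is one solution of that family
# (an assumed choice made for this sketch).
a, b, c = 0.0, -2.5, 4.5
A = np.array([[0, a, b], [-a, 0, c], [-b, -c, 0]])

I = np.eye(3)
W = (I - A) @ np.linalg.inv(I + A)   # Cayley transform of a skew matrix

assert np.allclose(W @ W.T, I)       # W is orthogonal
assert np.allclose(W[0], v)          # its first row is V1(1)
```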
Using formula (7.8), we get a paraunitary matrix V with the first column \(V_1^T\):
and the matrix we seek is \(V^T\).
References
Adukov, V.M., Adukova, N.V., Mishuris, G.: An explicit Wiener-Hopf factorization algorithm for matrix polynomials and its exact realizations within ExactMPF package. Proc. R. Soc. Lond. 478, 20210941 (2022). https://doi.org/10.1098/rspa.2021.0941
Anderson, B.D.O., Moore, J.B.: Optimal Filtering. Prentice-Hall, New Jersey (1979)
Barry, J.R., Lee, E.A., Messerschmitt, D.G.: Digital Communication. Springer, New York (2004)
Böttcher, A., Halwass, M.: A Newton method for canonical Wiener-Hopf and spectral factorization of matrix polynomials. Electron. J. Linear Algebra 26, 873–897 (2013). https://doi.org/10.13001/1081-3810.1693
Daubechies, I.: Ten Lectures on Wavelets. SIAM, Philadelphia (1992)
Davis, J.H.: Foundations of Deterministic and Stochastic Control. Birkhäuser Boston Inc, Boston (2002)
Ehler, M., Han, B.: Wavelet bi-frames with few generators from multivariate refinable functions. Appl. Comput. Harmon. Anal. 25, 407–414 (2008). https://doi.org/10.1016/j.acha.2008.04.003
Ephremidze, L.: An elementary proof of the polynomial matrix spectral factorization theorem. Proc. R. Soc. Edinb. Sect. A 144, 747–751 (2014). https://doi.org/10.1017/S0308210512001552
Ephremidze, L., Gamkrelidze, A., Spitkovsky, I.: On the spectral factorization of singular, noisy, and large matrices by Janashia-Lagvilava method. Trans. A. Razmadze Math. Inst. 176, 361–366 (2022)
Ephremidze, L., Janashia, G., Lagvilava, E.: On approximate spectral factorization of matrix functions. J. Fourier Anal. Appl. 17, 976–990 (2011). https://doi.org/10.1007/s00041-010-9167-9
Ephremidze, L., Lagvilava, E.: On compact wavelet matrices of rank \(m\) and of order and degree \(N\). J. Fourier Anal. Appl. 20, 401–420 (2014). https://doi.org/10.1007/s00041-013-9317-y
Ephremidze, L., Saied, F., Spitkovsky, I.M.: On the algorithmization of Janashia-Lagvilava matrix spectral factorization method. IEEE Trans. Inf. Theory 64, 728–737 (2018). https://doi.org/10.1109/TIT.2017.2772877
Ephremidze, L., Spitkovsky, I.: An algorithm for \(J\)-spectral factorization of certain matrix functions. In: 60th IEEE Conference on Decision and Control (CDC), pp. 5820–5825 (2021). https://doi.org/10.1109/CDC45484.2021.9683123
Ephremidze, L., Spitkovsky, I.M.: On multivariable matrix spectral factorization method. J. Math. Anal. Appl. 514, 126300 (2022). https://doi.org/10.1016/j.jmaa.2022.126300
Fischer, R.: Precoding and Signal Shaping for Digital Transmission. Wiley, New York (2002)
Han, B., Zhuang, X.: Algorithms for matrix extension and orthogonal wavelet filter banks over algebraic number fields. Math. Comput. 82, 459–490 (2013). https://doi.org/10.1090/S0025-5718-2012-02618-4
Jafarian, A., McWhirter, J.G.: A novel method for multichannel spectral factorization. In: Proceedings of the 20th European Signal Processing Conference, Bucharest, Romania, pp. 1069–1073 (2012)
Janashia, G., Lagvilava, E.: A method of approximate factorization of positive definite matrix functions. Stud. Math. 137, 93–100 (1999)
Janashia, G., Lagvilava, E., Ephremidze, L.: A new method of matrix spectral factorization. IEEE Trans. Inf. Theory 57, 2318–2326 (2011). https://doi.org/10.1109/TIT.2011.2112233
Kolev, V., Cooklev, T., Keinert, F.: Matrix spectral factorization for SA4 multiwavelet. Multidimensional Syst. Signal Process. 29, 1613–1641 (2018). https://doi.org/10.1007/s11045-017-0520-x
Kolmogorov, A.N.: Stationary sequences in Hilbert’s space. In: Selected Works of A. N. Kolmogorov, vol. 2, Probability Theory and Mathematical Statistics, pp. 228–272. Springer, Heidelberg (1992)
Kučera, V.: Factorization of rational spectral matrices: a survey of methods. In: Proc. IEEE Int. Conf. Control, Edinburgh, vol. 2, pp. 1074–1078 (1991)
Lawton, W., Lee, S.L., Shen, Z.: An algorithm for matrix extension and wavelet construction. Math. Comput. 65, 723–737 (1996)
MacLaurin, J.N., Robinson, P.A.: Determination of effective brain connectivity from activity correlations. Phys. Rev. E 99, 042404 (2019)
Park, P.: A Computational Theory of Laurent Polynomial Rings and Multidimensional FIR Systems. PhD thesis, UC Berkeley (1995)
Resnikoff, H.R., Wells, R.O.: Wavelet Analysis. Springer, New York (1998)
Ri, C., Paek, Y.: Causal FIR symmetric paraunitary matrix extension and construction of symmetric tight \(M\)-dilated framelets. Appl. Comput. Harmon. Anal. 51, 437–460 (2021)
Sayed, A.H., Kailath, T.: A survey of spectral factorization methods. Numer. Linear Algebra Appl. 8, 467–496 (2001)
Vaidyanathan, P.P.: Multirate Systems and Filter Banks. Prentice Hall, New Jersey (1993)
Wiener, N., Masani, P.: The prediction theory of multivariate stochastic processes II. The linear predictor. Acta Math. 99, 93–137 (1958)
Acknowledgements
The authors express their gratitude to the anonymous reviewers for their thorough review of the manuscript and valuable remarks, which helped to improve the exposition.
Funding
This work was supported by the EU through the H2020-MSCA-RISE-2020 project EffectFact, Grant Agreement ID: 101008140. Ilya M. Spitkovsky was also supported by Faculty Research funding from the Division of Science and Mathematics, NYUAD.
Communicated by Chris Heil.
Ephremidze, L., Mishuris, G. & Spitkovsky, I.M. On the Exact Spectral Factorization of Rational Matrix Functions with Applications to Paraunitary Filter Banks. J Fourier Anal Appl 30, 43 (2024). https://doi.org/10.1007/s00041-024-10100-3