1. INTRODUCTION

Splines and wavelets are widely used in information theory. Wavelet decompositions underlie efficient algorithms for processing (compressing or refining) large flows of digital data and signals of various kinds, and they have found use in many technical areas, in particular in coding theory.

It is well known that there are no explicit finite decomposition formulas for classical spline wavelets. Therefore, either approximate relations are used for the leading decomposition coefficients [1], or the corresponding sparse systems of linear algebraic equations are solved; however, these systems are not guaranteed to be well-conditioned [2]. For instance, such a system can be split into systems with strict diagonal dominance [3] and then solved by the double-sweep method with guaranteed correctness and stability. Another approach is to construct an extension matrix whose elements are the values of an extended system of biorthogonal functionals on the original basis [4, 5]. However, none of the above-mentioned papers provides an explicit representation of the decomposition and reconstruction filters, which could be used, for example, to construct noise-resistant codes.

G. Faber [6] constructed a sequence of continuous piecewise linear functions that converges pointwise to a continuous nowhere differentiable function. He introduced a hierarchical representation of functions as a series based on piecewise linear interpolation on nested binary grids, together with an explicit representation of the first “lazy” wavelet (with compact support). From the point of view of data compression, the Faber decomposition is useful owing to the stability of linear B-splines: it provides good and fast compression without much effort. Wavelet theory proper, however, was initiated by A. Haar [7] with a study of partial sums of Fourier series, which resulted in the first classical orthogonal wavelet with compact support. The major theoretical approaches in the study of wavelets construct an orthogonal wavelet basis in a Hilbert space from shifts and dilations (compressions/expansions) of the argument of a fixed function called a scaling function. The requirement that the spaces on a twice-refined infinite uniform grid on the real axis be nested leads to multiscale relations in which each basis function on the coarse grid is expressed as a linear combination of basis functions on the refined (denser) grid. Splines, in particular, satisfy such relations. Unfortunately, the scaling equations are difficult to solve in general form. A more suitable approach for nonuniform grids is to construct calibration relations generalizing the classical scaling equations for uniform grids. This makes it possible to build systems of nested spline spaces under arbitrary refinement or coarsening of a nonuniform grid, and hence to obtain adaptive wavelet decompositions on nonuniform grids on an interval, that is, wavelet decompositions of finite-dimensional spaces.

An important problem in constructing a spline-wavelet decomposition is the choice of an appropriate method for constructing nested grids. In [8–10], spline-wavelet decompositions were constructed by successively removing or adding nodes of a nonuniform grid. In this case, instead of requiring orthogonality of the wavelet basis, one constructs a system of functionals biorthogonal to it. Paper [11] considered a single local coarsening of a nonuniform grid with nodes forming a dyadic system of indices; this provided decompositions that could be used to construct either “lazy” wavelets or wavelets with a shifted support. In [12, 13], the set of admissible grids was enlarged, and a nonuniform grid with a single local refinement was used. However, no explicit representation of the decomposition and reconstruction filters was given. In the present paper, the same approach to constructing spline-wavelet decompositions with a single local refinement of a nonuniform grid is used. In the general case it leads to the construction of biorthogonal wavelets or lifting schemes. It is used here to construct spline-wavelet decompositions on an interval, with the approximating relations serving as the initial structures for constructing the spaces of linear minimal splines and the calibration relations used to prove that the corresponding spaces are nested. Decomposition and reconstruction operators are constructed, and they are proved to be mutually inverse. An explicit representation of the filter banks for the corresponding wavelet transforms is obtained, and the decomposition and reconstruction matrices are shown to be sparse. An advantage of this approach is the possibility of using nonuniform grids and arbitrary nonpolynomial spline wavelets without resorting to the formalism of Hilbert spaces.

2. SPACE OF COORDINATE SPLINES

Let \({\mathbb Z}, {\mathbb R}\) be the sets of integer and real numbers, respectively; \(C^r[a,b]\) is the set of functions that are \(r\) times continuously differentiable on an interval \([a,b]\) assuming that \(C^0[a,b] = C[a,b].\) 

On an interval \([a, b]\subset {\mathbb R}^1\), let us consider a grid \(X\) with two additional nodes outside the interval \([a, b]\):

$$X: x_{-1}< a=x_0< x_1< \cdots <x_{n-1}< x_{n} = b < x_{n+1}. $$
(1)

Let us introduce the following notation:

$$J_{i,k} {~\stackrel{\mathrm{def}}{=}~} \{i,i+1,\ldots,k\}, \quad i,k\in {\mathbb Z}, ~~ i < k.$$

Let \(\{{\mathbf{a}}_j\}_{j\in J_{-1,n-1}}\) be an ordered set of vectors \({\mathbf{a}}_j\in {\mathbb R}^2.\) For convenience, the components of the vectors will be denoted by square brackets and numbered. For instance, \({\mathbf{a}}_j = \big([{\mathbf{a}}_j]_0,[{\mathbf{a}}_j]_1\big)^T,\) where \(T\) denotes transposition.

Assume that the square matrices \(({\mathbf{a}}_{j-1},{\mathbf{a}}_j)\) consisting of a pair of vectors \({\mathbf{a}}_{j-1},\) \({\mathbf{a}}_j\) are nonsingular:

$$\mathrm{ det}({\mathbf{a}}_{j-1},{\mathbf{a}}_j) \neq 0 \quad \forall\, j \in J_{-1,n-1}. $$
(2)

The union of all elementary grid intervals is denoted by \(M ~{~\stackrel{\mathrm{def}}{=}~} \bigcup\limits_{j\in J_{-1,n}}(x_j,x_{j+1}).\) Let \({\mathbb X}(M)\) be the linear space of real-valued functions defined on the set \(M.\) 

Consider a generating vector-function \({\boldsymbol{\varphi}} : [a, b] \rightarrow {\mathbb R}^2\) with components from the space \(C^1[a, b]\) and a nonzero Wronskian:

$$|\mathrm{ det}({\boldsymbol{\varphi}},{\boldsymbol{\varphi}}')(t)| \geq \mathrm{const} > 0 \quad \forall\, t \in [a, b].$$

Assume that the functions \(\omega_j \in {\mathbb X}(M),\) \(j\in J_{-1,n-1},\) satisfy the following approximating relations:

$$\begin{array}{rll} \displaystyle \sum\limits_{j'=k-1}^k {\mathbf{a}}_{j'}\,\omega_{j'}(t) \equiv & {\boldsymbol{\varphi}}(t) & \forall\ t\in (x_k,x_{k+1}), \ \forall \ k\in J_{-1,n-1},\\ \omega_j(t)\equiv & 0 & \forall\ t\notin [x_j,x_{j+2}]\bigcap M.\\ \end{array} $$
(3)

For every fixed \(t \in (x_k,x_{k+1}),\) relations (3) may be considered as a system of linear algebraic equations for the unknowns \(\omega_j(t).\) By virtue of assumption (2), system (3) is uniquely solvable, and \(\mathrm{supp}\,\omega_j \subset [x_j,x_{j+2}].\)

Using Cramer’s formulas, from the system of linear algebraic equations (3) we find that

$$\omega_{j}(t) = \left\{ \begin{array}{ll} \dfrac{\mathrm{ det}({\mathbf{a}}_{j-1}, {\boldsymbol{\varphi}}(t)) }{\mathrm{ det}({\mathbf{a}}_{j-1}, {\mathbf{a}}_{j})}, & \quad t\in [x_j,x_{j+1}), \\[2ex] \dfrac{\mathrm{ det}({\boldsymbol{\varphi}}(t), {\mathbf{a}}_{j+1})}{\mathrm{ det} ({\mathbf{a}}_{j}, {\mathbf{a}}_{j+1})}, & \quad t\in [x_{j+1},x_{j+2}). \\ \end{array} \right. $$
(4)
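Formula (4) can be evaluated directly. The following sketch (all function names are ours) takes the grid, the vectors \({\mathbf{a}}_j\), and \({\boldsymbol{\varphi}}\) as callables so that negative indices such as \(j=-1\) work; the choice \({\mathbf{a}}_j = {\boldsymbol{\varphi}}(x_{j+1})\) in the example is one admissible option:

```python
def det2(u, v):
    """2x2 determinant det(u, v) of the column pair u, v in R^2."""
    return u[0] * v[1] - u[1] * v[0]

def omega(j, t, x, a, phi):
    """Evaluate omega_j(t) by Cramer's formulas (4); zero outside [x_j, x_{j+2}]."""
    if x(j) <= t < x(j + 1):
        return det2(a(j - 1), phi(t)) / det2(a(j - 1), a(j))
    if x(j + 1) <= t < x(j + 2):
        return det2(phi(t), a(j + 1)) / det2(a(j), a(j + 1))
    return 0.0

# Example: uniform grid x_j = j, phi(t) = (1, t)^T, and a_j = phi(x_{j+1});
# omega_0 is then the Courant hat function with peak at x_1 = 1.
x = lambda j: float(j)
phi = lambda t: (1.0, t)
a = lambda j: phi(x(j + 1))
```

With these choices, `omega(0, t, x, a, phi)` rises linearly from 0 at \(t=0\) to 1 at \(t=1\) and falls back to 0 at \(t=2\).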

It is well-known [14] that if vectors \({\mathbf{a}}_j\in {\mathbb R}^2,\) \(j\in J_{-1,n-1}\) are given by the formula

$${\mathbf{a}}_j {~\stackrel{\mathrm{def}}{=}~} {\boldsymbol{\varphi}}_{j+1},$$

where \({\boldsymbol{\varphi}}_j {~\stackrel{\mathrm{def}}{=}~} {\boldsymbol{\varphi}}(x_j),\) then the functions \(\omega_{j}\) belong to \(C[a,b].\) Also, if \([{\boldsymbol{\varphi}}(t)]_0 \equiv 1,\) that is, \({\boldsymbol{\varphi}}(t) = (1, \rho (t))^T,\) where \(\rho \in C^1[a,b],\) then the following partition of unity is valid:

$$\sum_{j=-1}^{n-1} \omega_{j}(t) \equiv 1 \quad \forall\ t \in [a,b].$$

In this case formulas (4) take the form

$$\omega_{j}(t) = \left \{ \begin{array}{ll} \dfrac{\mathrm{ det}({\boldsymbol{\varphi}}_{j}, {\boldsymbol{\varphi}}(t)) }{\mathrm{ det}({\boldsymbol{\varphi}}_{j}, {\boldsymbol{\varphi}}_{j+1})} = \dfrac{\rho(t) - \rho_j }{\rho_{j+1} - \rho_j}, & \quad t\in [x_j,x_{j+1}), \\[2ex] \dfrac{\mathrm{ det}({\boldsymbol{\varphi}}(t), {\boldsymbol{\varphi}}_{j+2})}{\mathrm{ det} ({\boldsymbol{\varphi}}_{j+1}, {\boldsymbol{\varphi}}_{j+2})} = \dfrac{\rho_{j+2} - \rho(t)}{\rho_{j+2} - \rho_{j+1}}, & \quad t\in [x_{j+1},x_{j+2}), \\ \end{array} \right. $$
(5)

where \(\rho_j {~\stackrel{\mathrm{def}}{=}~} \rho(x_{j}).\) 

The space

$${\mathbb S}(X){~\stackrel{\mathrm{def}}{=}~} \Bigg\{u\;|\; u=\sum_{j=-1}^{n-1}c_j\,\omega_{j} \quad\forall \ c_j\in {\mathbb R}^1\Bigg\}$$

is called the space of linear minimal \(B_{{\boldsymbol{\varphi}}}\)-splines (of order 2) on the grid \(X\). The splines themselves are called coordinate minimal splines of maximum smoothness. If the generating vector function \({\boldsymbol{\varphi}}\) has polynomial components, the degree of a spline is defined. Then the (polynomial) splines of maximum smoothness are splines of the first degree. The difference between the degree of a spline and the order of its highest continuous derivative is called the spline deficiency. Thus, the splines of maximum smoothness are the splines with the minimum deficiency (equal to \(1\)).

It is clear that

$$\omega_{j}(x_i) = \delta_{j,i-1},$$

where \(\delta_{j,i}\) is the Kronecker symbol.

Also, if \(\rho (t)\) is a strictly monotone function on the set \(M,\) then the splines are positive inside their supports:

$$\omega_j(t) > 0 \quad \forall\ t \in (x_j,x_{j+2}).$$

For \({\boldsymbol{\varphi}}(t) = (1, t)^T,\) that is, \(\rho (t) = t,\) the functions \(\omega_{j}\) coincide with the well-known polynomial B-splines of the first degree (second order), that is, with the one-dimensional Courant functions:

$$\omega_{j}^B(t) = \left \{ \begin{array}{ll} \dfrac{t - x_j}{x_{j+1} - x_j}, & \quad t\in [x_j,x_{j+1}), \\[2ex] \dfrac{x_{j+2} - t}{x_{j+2}- x_{j+1}}, & \quad t\in [x_{j+1},x_{j+2}). \\ \end{array} \right.$$
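The Courant functions form a partition of unity on \([a,b]\), which is easy to check numerically; in the sketch below (function name and grid values are our arbitrary choices) the array index \(i\) corresponds to the spline index \(j=i-1\) of the text, since the array also stores the outer node \(x_{-1}\):

```python
import numpy as np

def hat(i, t, x):
    """Courant hat omega^B with support [x[i], x[i+2]] on the node array x."""
    if x[i] <= t < x[i + 1]:
        return (t - x[i]) / (x[i + 1] - x[i])
    if x[i + 1] <= t < x[i + 2]:
        return (x[i + 2] - t) / (x[i + 2] - x[i + 1])
    return 0.0

# nonuniform grid x_{-1} < a = x_0 < x_1 < ... < x_4 = b < x_5 (n = 4)
x = np.array([-0.5, 0.0, 0.4, 1.1, 1.7, 2.0, 2.6])
```

Summing the five hats `hat(0, t, x), ..., hat(4, t, x)` gives 1 for every \(t \in [a,b)\), and `hat(i, x[i+1], x)` equals 1, illustrating \(\omega_j(x_i)=\delta_{j,i-1}\).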

3. DECOMPOSITION AND RECONSTRUCTION FILTERS IN SPLINE-WAVELET DECOMPOSITION

A grid (1) in which \(n=2^Lm,\) where \(L,m\in {\mathbb Z},\) \(L \geq 0,\) \(m \geq 1,\) is denoted by \(\Delta^L.\) The splines (5) can be numbered either by the left node or by the central node of the support. For the splines numbered by the central node of the support we use the notation \(\ \phi_{j}(t) {~\stackrel{\mathrm{def}}{=}~} \omega_{j-1}(t).\)

In what follows, the objects on the grid \(\Delta^L\) will have superscript \(L.\) For instance, the splines \(\phi_{j}(t)\) constructed on the grid \(\Delta^L\) will be denoted by \(\phi_j^L(t),\) \(j \in J_{0,2^Lm}.\) The space of such splines on the interval \([a,b]\) is denoted by

$${V^L} ~{~\stackrel{\mathrm{def}}{=}~}~ \mathbb{S}(\Delta^L)= \left\{s^L\mid s^L(t)=\sum_{j=0}^{2^Lm}c^L_j\,\phi_j^L(t) \quad\forall \ c_j^L\in {\mathbb R}^1,\ t\in[a,b] \right\}, $$
(6)
$$\dim {V^L} = 2^Lm + 1.$$

Let us compose the following row vector from the basis functions \(\phi_j^L\):

$${\boldsymbol{\phi}}^L {~\stackrel{\mathrm{def}}{=}~} \left(\phi_0^L,\phi_1^L,\ldots,\phi_{2^Lm}^L \right).$$

Denote the vector consisting of the approximation coefficients by

$${\boldsymbol{c}}^L {~\stackrel{\mathrm{def}}{=}~} \left(c^L_0, c^L_1, \ldots, c^L_{2^Lm}\right)^T,$$

and write (6) in vector form:

$$s^L(t)= {\boldsymbol{\phi}}^L(t)\,{\boldsymbol{c}}^L.$$

Let the grid \(\Delta^{L+1}\) be obtained from the grid \(\Delta^L\) by a twofold refinement, with new nodes \({\xi}^L_j \in (x^L_j,x^L_{j+1}),\) \(j\in J_{0,2^Lm-1},\) inserted between the old ones, that is,

$$x^{L+1}_{j} {~\stackrel{\mathrm{def}}{=}~} \left\{ \begin{array}{ll} x^{L}_{-1}, &\; j=-1, \\ x^{L}_{j/2}, &\; j=2k, \,k\in J_{0,2^Lm}, \\ {\xi}^{L}_{(j-1)/2}, &\; j=2k-1, \,k\in J_{1,2^Lm}, \\ x^{L}_{2^Lm+1}, &\; j=2^{L+1}m+1. \\ \end{array} \right.$$
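This refinement step can be sketched as follows (the function name is ours; midpoints are used as a default choice of the new nodes \(\xi^L_j\), though any interior points are admissible):

```python
import numpy as np

def refine(xL, xi=None):
    """Grid Delta^{L+1} from Delta^L: old nodes land at even positions, the
    inserted nodes xi^L_j in (x^L_j, x^L_{j+1}) at odd positions."""
    inner = xL[1:-1]                               # x^L_0, ..., x^L_n
    if xi is None:
        xi = 0.5 * (inner[:-1] + inner[1:])        # midpoints (default choice)
    out = [xL[0]]                                  # outer node x_{-1} is kept
    for k, s in enumerate(xi):
        out += [inner[k], s]                       # even node, then inserted node
    out += [inner[-1], xL[-1]]                     # x^L_n and the outer node x_{n+1}
    return np.array(out)

xL = np.array([-1.0, 0.0, 1.0, 2.0, 3.0])          # n = 2^L m = 2 plus outer nodes
x_fine = refine(xL)
```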

Then there exists a matrix of refining reconstruction of scaling functions (or a matrix of successive refinement) \({\boldsymbol{P}}^{L+1}\) of dimension \((2^{L+1}m+1) \times (2^{L}m+1)\) such that

$${\boldsymbol{\phi}}^{L}={\boldsymbol{\phi}}^{L+1}\,{\boldsymbol{P}}^{L+1}, $$
(7)

where the column elements consist of the coefficients of the calibration relations [12]:

$$\phi^L_{j}(t)= \left\{ \begin{array}{ll} \phi_{0}^{L+1}(t)+p^{L+1}_{-1,2}\,{\phi^{L+1}_{1}}(t), & \;\;j=0,\\[2ex] p_{j-1,0}^{L+1}\,{\phi_{2j-1}^{L+1}}(t)+p_{j-1,1}^{L+1}\,{\phi_{2j}^{L+1}}(t)+p_{j-1,2}^{L+1}\,{\phi_{2j+1}^{L+1}}(t), & \;\;j\in J_{1,2^Lm-1},\\[2ex] p^{L+1}_{2^Lm-1,0}\,\phi_{2^{L+1}m-1}^{L+1}(t)+\phi_{2^{L+1}m}^{L+1}(t), & \;\;j=2^Lm, \\ \end{array} \right. $$
(8)

and the coefficients \(p_{j,i}^{L+1}\in {\mathbb R}^1,\) \(i=0,1,2,\) are calculated by the formulas

$$\begin{array}{ll} p_{j,0}^{L+1} =\dfrac{\mathrm{ det}({\boldsymbol{\varphi}}^{L+1}_{2j}, {\boldsymbol{\varphi}}^{L+1}_{2j+1})} {\mathrm{ det}({\boldsymbol{\varphi}}^{L+1}_{2j}, {\boldsymbol{\varphi}}^{L+1}_{2j+2})} = \dfrac{\rho^{L+1}_{2j+1}-\rho^{L+1}_{2j}} {\rho^{L+1}_{2j+2}-\rho^{L+1}_{2j}}, & \;\;j\in J_{0,2^Lm-1}, \\[3ex] p_{j,1}^{L+1} = 1, & \;\;j\in J_{-1,2^Lm-1},\\[3ex] p_{j,2}^{L+1} =\dfrac{\mathrm{ det}({\boldsymbol{\varphi}}^{L+1}_{2j+3}, {\boldsymbol{\varphi}}^{L+1}_{2j+4})} {\mathrm{ det}({\boldsymbol{\varphi}}^{L+1}_{2j+2}, {\boldsymbol{\varphi}}^{L+1}_{2j+4})}= \dfrac{\rho^{L+1}_{2j+4}-\rho^{L+1}_{2j+3}} {\rho^{L+1}_{2j+4}-\rho^{L+1}_{2j+2}}, &\;\; j\in J_{-1,2^Lm-2}. \end{array}$$

The matrix \({\boldsymbol{P}}^{L+1}\) has the following form:

$${\boldsymbol{P}}^{L+1} = \left[ \begin{array}{cccccc} 1&0&0&\ldots&0&0 \\ p^{L+1}_{-1,2}&p^{L+1}_{0,0}&0&\ldots&0&0 \\ 0&1&0&\ldots&0&0 \\ 0&p^{L+1}_{0,2}&p^{L+1}_{1,0}&\ldots&0&0 \\ 0&0&1&\ldots&0&0 \\ 0&0&p^{L+1}_{1,2}&\ldots&0&0 \\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots\\ 0&0&0&\ldots&p^{L+1}_{2^Lm-2,0}&0 \\ 0&0&0&\ldots&1&0 \\ 0&0&0&\ldots&p^{L+1}_{2^Lm-2,2}&p^{L+1}_{2^Lm-1,0} \\ 0&0&0&\ldots&0&1 \\ \end{array} \right].$$
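As a sketch (function names and the test grid are ours), the coefficients \(p_{j,i}^{L+1}\) and the matrix \({\boldsymbol{P}}^{L+1}\) can be computed as follows. For \(\rho(t)=t\) and midpoint refinement every coefficient equals \(1/2\), recovering the classical dyadic filter of linear B-splines:

```python
import numpy as np

def refinement_coeffs(rho, m2):
    """p_{j,0} and p_{j,2} from the values rho^{L+1}_i, i = 0..2*m2 (m2 = 2^L m).
    Dictionaries are used because the index of p_{j,2} starts at j = -1."""
    p0 = {j: (rho[2*j+1] - rho[2*j]) / (rho[2*j+2] - rho[2*j])
          for j in range(m2)}
    p2 = {j: (rho[2*j+4] - rho[2*j+3]) / (rho[2*j+4] - rho[2*j+2])
          for j in range(-1, m2 - 1)}
    return p0, p2

def build_P(p0, p2, m2):
    """Assemble P^{L+1} of size (2*m2 + 1) x (m2 + 1) column by column, as in (8)."""
    P = np.zeros((2 * m2 + 1, m2 + 1))
    P[0, 0], P[1, 0] = 1.0, p2[-1]                 # boundary column j = 0
    for j in range(1, m2):                         # interior columns
        P[2*j - 1, j], P[2*j, j], P[2*j + 1, j] = p0[j - 1], 1.0, p2[j - 1]
    P[2*m2 - 1, m2], P[2*m2, m2] = p0[m2 - 1], 1.0 # boundary column j = 2^L m
    return P

# rho(t) = t on the fine grid 0, 1/2, 1, 3/2, 2 (midpoint refinement, m2 = 2)
p0, p2 = refinement_coeffs([0.0, 0.5, 1.0, 1.5, 2.0], 2)
P = build_P(p0, p2, 2)
```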

By virtue of the calibration relations (8), we have the embedding of the spaces

$$V^{L} \subset V^{L+1},$$

and, hence, the wavelet decomposition

$$V^{L+1} = V^{L} \dotplus W^{L}, $$
(9)

where \(\dotplus\) denotes the direct sum of the spaces \(V^{L}\) and \(W^{L}.\)

The wavelet space \(W^{L}\) can be defined as a complement of the space \(V^{L}\) in the space \(V^{L+1},\) so that any function of the space \(V^{L+1}\) can be written as the sum of a function from the space \(V^{L}\) and a function from the space \(W^{L}.\) There are various ways to construct basis functions in the space \(W^{L}.\)

For instance, as basis functions in the space \(W^{L}\) one can use the basis functions of the space \(V^{L+1}\) centered at the odd nodes. This is how “lazy” wavelets are obtained: they require no additional calculations, being a subset of the scaling functions. It is clear that

$$\dim W^{L} = 2^{L}m.$$

Then the dimensions of the spaces under consideration are complementary, that is,

$$\dim V^{L+1}=\dim V^{L} + \dim W^{L}.$$

Let us denote the wavelet basis functions by

$$\psi_i^{L}(t) {~\stackrel{\mathrm{def}}{=}~} \phi_{2i+1}^{L+1}(t), \quad i=0,1,\ldots,2^{L}m-1,$$

and introduce a row vector

$${\boldsymbol{\psi}}^{L} {~\stackrel{\mathrm{def}}{=}~} \left(\psi_0^{L},\psi_1^{L},\ldots,\psi_{2^{L}m-1}^{L}\right).$$

We denote the corresponding wavelet approximation coefficients by \(d^{L}_i,\) \(i=0,1,\ldots,2^Lm-1,\) and introduce a vector

$${\boldsymbol{d}}^{L} {~\stackrel{\mathrm{def}}{=}~} \left(d^{L}_0, d^{L}_1, \ldots, d^{L}_{2^{L}m-1}\right)^T.$$

Since by definition the wavelet space \(W^{L}\) is a subspace of \(V^{L+1},\) the wavelet functions \(\psi_i^{L}\) can be represented as linear combinations of the scaling functions \(\phi_j^{L+1}.\) Thus, there exists a matrix of refining reconstruction of the wavelet functions \({\boldsymbol{Q}}^{L+1}\) of dimension \((2^{L+1}m+1) \times 2^{L}m\) such that

$${\boldsymbol{\psi}}^{L}={\boldsymbol{\phi}}^{L+1}\,{\boldsymbol{Q}}^{L+1}, $$
(10)

where in each column of the matrix \({\boldsymbol{Q}}^{L+1}\) all elements are zero except for a single unit entry, since each “lazy” wavelet is a single “narrow” basis function.

The matrix \({\boldsymbol{Q}}^{L+1}\) has the following form:

$${\boldsymbol{Q}}^{L+1} = \left[ \begin{array}{cccccc} 0&0&0&\ldots&0 \\[-1ex] 1&0&0&\ldots&0 \\[-1ex] 0&0&0&\ldots&0 \\[-1ex] 0&1&0&\ldots&0 \\[-1ex] 0&0&0&\ldots&0 \\[-1ex] 0&0&1&\ldots&0 \\[-1ex] \vdots&\vdots&\vdots&\vdots&\vdots \\[-1ex] 0&0&0&\ldots&0 \\[-1ex] 0&0&0&\ldots&0 \\[-1ex] 0&0&0&\ldots&1 \\[-1ex] 0&0&0&\ldots&0 \\[-1ex] \end{array} \right].$$
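A minimal sketch of assembling \({\boldsymbol{Q}}^{L+1}\) (the function name is ours): column \(i\) carries its single unit entry in row \(2i+1\), since \(\psi_i^{L} = \phi_{2i+1}^{L+1}\), so applying \({\boldsymbol{Q}}^{L+1}\) scatters the detail coefficients to the odd fine-grid positions:

```python
import numpy as np

def build_Q(m2):
    """Lazy-wavelet matrix Q^{L+1} of size (2*m2 + 1) x m2 (m2 = 2^L m):
    a single 1 in row 2*i + 1 of column i, zeros elsewhere."""
    Q = np.zeros((2 * m2 + 1, m2))
    Q[2 * np.arange(m2) + 1, np.arange(m2)] = 1.0
    return Q

Q = build_Q(3)
d = np.array([1.0, 2.0, 3.0])   # sample detail coefficients (our toy values)
scattered = Q @ d               # details land at the odd positions
```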

With the notation for block matrices, representations (7) and (10) can be written as a single calibration relation for the scaling functions and wavelets as follows:

$$\left[{\boldsymbol{\phi}}^{L} \;|\; {\boldsymbol{\psi}}^{L} \right] = {\boldsymbol{\phi}}^{L+1} \left[{\boldsymbol{P}}^{L+1} \;|\; {\boldsymbol{Q}}^{L+1} \right]. $$
(11)

By virtue of the decomposition (9) any function from the space \(V^{L+1}\) can be written as the sum of a function from the space \(V^{L}\) and a function from the space \(W^{L},\) and the following chain of equalities is valid:

$$s^{L+1}(t)= {\boldsymbol{\phi}}^{L+1}(t)\,{\boldsymbol{c}}^{L+1} = {\boldsymbol{\phi}}^{L}(t)\,{\boldsymbol{c}}^{L} + {\boldsymbol{\psi}}^{L}(t)\,{\boldsymbol{d}}^{L} = {\boldsymbol{\phi}}^{L+1}(t)\,{\boldsymbol{P}}^{L+1}\,{\boldsymbol{c}}^{L} + {\boldsymbol{\phi}}^{L+1}(t)\,{\boldsymbol{Q}}^{L+1}\,{\boldsymbol{d}}^{L}.$$

Let the coefficients \({\boldsymbol{c}}^{L}\) and \({\boldsymbol{d}}^{L}\) be known. Then the coefficients \({\boldsymbol{c}}^{L+1}\) can be obtained from the coefficients \({\boldsymbol{c}}^{L}\) and \({\boldsymbol{d}}^{L}\) as

$${\boldsymbol{c}}^{L+1} = {\boldsymbol{P}}^{L+1}\,{\boldsymbol{c}}^{L}+{\boldsymbol{Q}}^{L+1}\,{\boldsymbol{d}}^{L} $$
(12)

or, using the notation for block matrices, as

$${\boldsymbol{c}}^{L+1} = \left[{\boldsymbol{P}}^{L+1} \;|\; {\boldsymbol{Q}}^{L+1} \right] \left[ \frac{{\boldsymbol{c}}^{L}}{{\boldsymbol{d}}^{L}} \right]. $$
(13)

The block matrix \(\left[{\boldsymbol{P}}^{L+1} \;|\; {\boldsymbol{Q}}^{L+1} \right]\) has the following form:

$$\left[{\boldsymbol{P}}^{L+1} \;|\; {\boldsymbol{Q}}^{L+1} \right] = \left[\!\!\! \begin{array}{cccccc|ccccc} 1&0&0&\ldots&0&0 &\quad0&0&0&\ldots&0 \\[-1ex] p^{L+1}_{-1,2}&p^{L+1}_{0,0}&0&\ldots&0&0 &\quad1&0&0&\ldots&0 \\[-1ex] 0&1&0&\ldots&0&0 &\quad0&0&0&\ldots&0 \\[-1ex] 0&p^{L+1}_{0,2}&p^{L+1}_{1,0}&\ldots&0&0 &\quad0&1&0&\ldots&0 \\[-1ex] 0&0&1&\ldots&0&0 &\quad0&0&0&\ldots&0 \\[-1ex] 0&0&p^{L+1}_{1,2}&\ldots&0&0 &\quad0&0&1&\ldots&0 \\[-1ex] \vdots&\vdots&\vdots&\vdots&\vdots&\vdots &\quad\vdots&\vdots&\vdots&\vdots&\vdots \\[-1ex] 0&0&0&\ldots&p^{L+1}_{2^Lm-2,0}&0 &\quad0&0&0&\ldots&0 \\[-1ex] 0&0&0&\ldots&1&0 &\quad0&0&0&\ldots&0 \\[-1ex] 0&0&0&\ldots&p^{L+1}_{2^Lm-2,2}&p^{L+1}_{2^Lm-1,0} &\quad0&0&0&\ldots&1 \\[-1ex] 0&0&0&\ldots&0&1 &\quad0&0&0&\ldots&0 \\ \end{array}\!\! \right]. $$
(14)
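The reconstruction formula (12) can be illustrated on a toy example (all numbers are our choice): with \(\rho(t)=t\), midpoint refinement, and \(2^Lm=2\), every \(p_{j,0}=p_{j,2}=1/2\). Since the splines are interpolatory, the coefficients are nodal values, and zero detail reproduces the coarse spline on the fine grid:

```python
import numpy as np

# Filters (14) for m2 = 2^L m = 2 with all p_{j,0} = p_{j,2} = 1/2.
P = np.array([[1.0, 0.0, 0.0],
              [0.5, 0.5, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.5, 0.5],
              [0.0, 0.0, 1.0]])
Q = np.zeros((5, 2)); Q[1, 0] = Q[3, 1] = 1.0   # lazy wavelets at the odd nodes

cL = np.array([2.0, 4.0, 6.0])                   # coarse nodal values s(0), s(1), s(2)
dL = np.zeros(2)                                 # zero detail: s^{L+1} = s^L
c_fine = P @ cL + Q @ dL                         # reconstruction step (12)
```

The even entries of `c_fine` copy the coarse values, and the odd entries are the convex combinations prescribed by the interior columns of \({\boldsymbol{P}}^{L+1}\); here they are the midpoint values of the piecewise linear interpolant.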

Theorem 1. The matrix \(\left[{\boldsymbol{P}}^{L+1} \;|\; {\boldsymbol{Q}}^{L+1} \right]^{-1}\) \((\)the inverse of \((14))\) exists and has the following form:

$$\left[{\boldsymbol{P}}^{L+1} \;|\; {\boldsymbol{Q}}^{L+1} \right]^{-1} = \left[\! \begin{array}{ccccccccc} 1&0&0&0&0&\!\!\ldots\!\! &0&0&0 \\[-1ex] 0&0&1&0&0&\!\!\ldots\!\! &0&0&0 \\[-1ex] \vdots &\vdots &\vdots &\vdots&\vdots&\vdots&\vdots &\vdots &\vdots \\[-1ex] 0&0&0&0&0&\!\!\ldots\!\! &1&0&0 \\[-1ex] 0&0&0&0&0&\!\!\ldots\!\! &0&0&1 \\[0.3ex] \hline -p^{L+1}_{-1,2}&1&-p^{L+1}_{0,0}&0 &0 &\!\!\ldots\!\! &0 &0 &0 \\[0.5ex] 0 &0 &-p^{L+1}_{0,2}&1&-p^{L+1}_{1,0}&\!\!\ldots\!\! &0 &0 &0 \\[-1ex] \vdots &\vdots &\vdots &\vdots&\vdots&\vdots&\vdots &\vdots &\vdots \\[0.5ex] 0&0 &0 &0 &0 &\!\!\ldots\!\!&-p^{L+1}_{2^Lm-2,2}&1&-p^{L+1}_{2^Lm-1,0} \\[0.5ex] \end{array} \right]. $$
(15)

Proof. Consider the tridiagonal matrix

$$\left[{\boldsymbol{P}}^{L+1} \;|\; {\boldsymbol{Q}}^{L+1} \right]' {~\stackrel{\mathrm{def}}{=}~} \left[ \begin{array}{cccccccccc} 1&0& & & & & & & & \\[-0.5ex] p^{L+1}_{-1,2}&1&p^{L+1}_{0,0}& & & & & & & \\[-0.5ex] & 0&1&0& & & & & &\\[-1ex] & &p^{L+1}_{0,2}&1&p^{L+1}_{1,0}&& & & & \\[-1ex] & & &0&1&\ddots&& & & \\[-1ex] & & & &p^{L+1}_{1,2}&\ddots&& & & \\[-1ex] & & & & &\ddots&&\quad p^{L+1}_{2^Lm-2,0}& & \\[-0.5ex] & & & & & &&\quad 1&0& \\[-0.5ex] & & & & & &&\quad p^{L+1}_{2^Lm-2,2}&1&p^{L+1}_{2^Lm-1,0} \\[-0.5ex] & & & & & && &0&1 \\[0.1ex] \end{array} \right],$$
(16)

which is obtained by permuting the columns of the matrix (14) with the permutation

$$\sigma {~\stackrel{\mathrm{def}}{=}~} \left( \begin{array}{ccccccccc} 0 & 1 & 2 & \ldots & 2^{L}m & 2^{L}m + 1 & 2^{L}m + 2 & \ldots & 2^{L+1}m \\ 0 & 2 & 4 & \ldots & 2^{L+1}m & 1 & 3 & \ldots & 2^{L+1}m - 1 \\ \end{array} \right). $$
(17)

Let the permutation matrix corresponding to the column permutation (17) be denoted by \({\boldsymbol T}.\) Then the following representation is valid:

$$\left[{\boldsymbol{P}}^{L+1} \;|\; {\boldsymbol{Q}}^{L+1} \right]' = \left[{\boldsymbol{P}}^{L+1} \;|\; {\boldsymbol{Q}}^{L+1} \right]{\boldsymbol T}. $$
(18)

It is evident that the determinant of the tridiagonal matrix (16) is equal to the product of its diagonal elements. Thus, \(\det\left[{\boldsymbol{P}}^{L+1} \;|\; {\boldsymbol{Q}}^{L+1} \right]' = 1,\) and, hence, there exists a matrix \(\left[{\boldsymbol{P}}^{L+1} \;|\; {\boldsymbol{Q}}^{L+1} \right]'^{-1}\) that is the inverse of (16); it also turns out to be tridiagonal and has the following form:

$$\left[{\boldsymbol{P}}^{L+1} \;|\; {\boldsymbol{Q}}^{L+1} \right]'^{-1}= \left[\!\!\! \begin{array}{cccccccccc} 1&0& & & & & & & & \\ -p^{L+1}_{-1,2}&1&-p^{L+1}_{0,0}& & & & & & & \\ & 0&1&0& & & & & &\\ & &-p^{L+1}_{0,2}&1&-p^{L+1}_{1,0}&& & & & \\ & & &0&1&\!\!\!\!\ddots\!\!\!\!\!\!\!\!\!&& & & \\ & & & &-p^{L+1}_{1,2}&\!\!\!\!\!\!\!\!\ddots\!\!\!\!\!&& & & \\ & & & & &\!\!\!\!\ddots&&\!\!\!\!\!\!\!\!\!\! -p^{L+1}_{2^Lm-2,0}& & \\ & & & & & &&\ 1&0& \\ & & & & & &&\!\!\!\!\!\!\!\!\!\! -p^{L+1}_{2^Lm-2,2}&1& -p^{L+1}_{2^Lm-1,0} \\ & & & & & && &0&1 \\ \end{array}\!\!\ \!\! \right]. $$
(19)

From representation (18) we find

$$\left[{\boldsymbol{P}}^{L+1} \;|\; {\boldsymbol{Q}}^{L+1} \right]^{-1} = {\boldsymbol T}\left[{\boldsymbol{P}}^{L+1} \;|\; {\boldsymbol{Q}}^{L+1} \right]'^{-1}.$$

It follows from the above equality that, to find the matrix \(\left[{\boldsymbol{P}}^{L+1} \;|\; {\boldsymbol{Q}}^{L+1} \right]^{-1},\) the rows of the matrix (19) must be subjected to the permutation inverse of (17), that is, the permutation \(\sigma^{-1}\) in which images and preimages are interchanged. This yields the required representation (15). \(\Box\)

Consider the space \(\mathbb{L}_N,\) \(N \in \mathbb{Z},\) \(N \geq 0,\) of all numerical sequences represented by the column vectors \(\boldsymbol{l} = (l_0,l_1,\ldots,l_{N})^T.\) Consider two such spaces, denoted by \({\mathcal C}^L\) and \({\mathcal D}^L\): the elements of the space \({\mathcal C}^L\) are the vectors \(\boldsymbol{c}^{L}\) \((N = 2^Lm),\) and the elements of the space \({\mathcal D}^L\) are the vectors \(\boldsymbol{d}^{L}\) \((N = 2^Lm-1).\) Let \({\mathcal C}^L \times {\mathcal D}^L\) denote the direct product of the spaces \({\mathcal C}^L\) and \({\mathcal D}^L,\) that is,

$${\mathcal C}^L \times {\mathcal D}^L {~\stackrel{\mathrm{def}}{=}~} \left\{ \left[ \frac{{\boldsymbol{c}}^{L}}{{\boldsymbol{d}}^{L}} \right] \;\Big| \; {\boldsymbol{c}}^{L} \in {\mathcal C}^L, {\boldsymbol{d}}^{L} \in {\mathcal D}^L \right\}.$$

Consider an operator \(\ {\mathfrak R}: \; {\mathcal C}^L \times {\mathcal D}^L \rightarrow {\mathcal C}^{L+1}, \quad {\mathfrak R} {~\stackrel{\mathrm{def}}{=}~} \left[{\boldsymbol{P}}^{L+1} \;|\; {\boldsymbol{Q}}^{L+1} \right],\) for which

$${\boldsymbol{c}}^{L+1}= {\mathfrak R}\left[ \frac{{\boldsymbol{c}}^{L}}{{\boldsymbol{d}}^{L}} \right] = \left[{\boldsymbol{P}}^{L+1} \;|\; {\boldsymbol{Q}}^{L+1} \right] \left[ \frac{{\boldsymbol{c}}^{L}}{{\boldsymbol{d}}^{L}} \right].$$

The operator \({\mathfrak R}\) is called a reconstruction (or synthesis) operator, formulas (12) and (13) are called reconstruction formulas, and the matrices \({\boldsymbol{P}}^{L+1}\) and \({\boldsymbol{Q}}^{L+1}\) are called reconstruction filters.

Consider the reverse process of splitting the known coefficients \({\boldsymbol{c}}^{L+1}\) into a coarse version \({\boldsymbol{c}}^{L}\) and refining coefficients \({\boldsymbol{d}}^{L},\) which is determined by the following matrix equations:

$${\boldsymbol{c}}^{L} = {\boldsymbol{A}}^{L+1} {\boldsymbol{c}}^{L+1}, $$
(20)
$${\boldsymbol{d}}^{L} = {\boldsymbol{B}}^{L+1} {\boldsymbol{c}}^{L+1}, $$
(21)

where \({\boldsymbol{A}}^{L+1}\) is a matrix of dimension \((2^{L}m+1) \times (2^{L+1}m+1)\) and \({\boldsymbol{B}}^{L+1}\) is a matrix of dimension \(2^{L}m \times (2^{L+1}m+1);\) they are determined from relation (11) as follows:

$$\left[{\boldsymbol{\phi}}^{L} \;|\; {\boldsymbol{\psi}}^{L} \right] \left[\frac{{\boldsymbol{A}}^{L+1}} {{\boldsymbol{B}}^{L+1}} \right] = {\boldsymbol{\phi}}^{L+1}. $$
(22)

Consider an operator \(\ {\mathfrak D}: \; {\mathcal C}^{L+1} \rightarrow {\mathcal C}^{L} \times {\mathcal D}^L, \quad {\mathfrak D} {~\stackrel{\mathrm{def}}{=}~} \left[\dfrac{{\boldsymbol{A}}^{L+1}} {{\boldsymbol{B}}^{L+1}} \right]\), for which

$$\left[ \frac{{\boldsymbol{c}}^{L}}{{\boldsymbol{d}}^{L}} \right] = {\mathfrak D}\,{\boldsymbol{c}}^{L+1} = \left[\frac{{\boldsymbol{A}}^{L+1}} {{\boldsymbol{B}}^{L+1}} \right] {\boldsymbol{c}}^{L+1}.$$

The operator \({\mathfrak D}\) is called a decomposition (or analysis) operator, formulas (20) and (21) are called decomposition formulas, and the matrices \({\boldsymbol{A}}^{L+1}\) and \({\boldsymbol{B}}^{L+1}\) are called decomposition filters.

Theorem 2. The operators \({\mathfrak D}\) and \({\mathfrak R}\) are mutually inverse. They realize a linear isomorphism of the spaces \({\mathcal C}^{L+1}\) and \({\mathcal C}^{L} \times {\mathcal D}^L.\)

Proof. From relations (11) and (22), since the matrix \(\left[{\boldsymbol{P}}^{L+1} \;|\; {\boldsymbol{Q}}^{L+1} \right]^{-1}\) exists, we have

$$\left[\frac{{\boldsymbol{A}}^{L+1}} {{\boldsymbol{B}}^{L+1}} \right] = \left[{\boldsymbol{P}}^{L+1} \;|\; {\boldsymbol{Q}}^{L+1} \right]^{-1}. $$
(23)

With representation (23), from the definitions of the operators \({\mathfrak R}\) and \({\mathfrak D}\) we have

$${\mathfrak R}{\mathfrak D}= \left[{\boldsymbol{P}}^{L+1} \;|\; {\boldsymbol{Q}}^{L+1} \right] \left[\frac{{\boldsymbol{A}}^{L+1}} {{\boldsymbol{B}}^{L+1}} \right] = {\boldsymbol{P}}^{L+1}{\boldsymbol{A}}^{L+1}+{\boldsymbol{Q}}^{L+1}{\boldsymbol{B}}^{L+1} = I, $$
(24)

where \(I\) is the unit matrix of the corresponding dimension.

On the other hand, the following representation takes place:

$${\mathfrak D}{\mathfrak R}= \left[\frac{{\boldsymbol{A}}^{L+1}} {{\boldsymbol{B}}^{L+1}} \right] \left[{\boldsymbol{P}}^{L+1} \;|\; {\boldsymbol{Q}}^{L+1} \right] = \left[\frac{{\boldsymbol{A}}^{L+1}{\boldsymbol{P}}^{L+1} \;|\; {\boldsymbol{A}}^{L+1}{\boldsymbol{Q}}^{L+1}} {{\boldsymbol{B}}^{L+1}{\boldsymbol{P}}^{L+1} \;|\; {\boldsymbol{B}}^{L+1}{\boldsymbol{Q}}^{L+1}}\right] = I, $$
(25)

where \(I\) is the unit matrix of the corresponding dimension.\(\Box\)

Corollary. From the equalities \((24)\) and \((25)\) we have the following relations between the matrices \({\boldsymbol{A}}^{L+1},\) \({\boldsymbol{B}}^{L+1},\) \({\boldsymbol{P}}^{L+1}\), and \({\boldsymbol{Q}}^{L+1}\):

$${\boldsymbol{P}}^{L+1}{\boldsymbol{A}}^{L+1}+{\boldsymbol{Q}}^{L+1}{\boldsymbol{B}}^{L+1} = I, $$
(26)
$${\boldsymbol{A}}^{L+1}{\boldsymbol{P}}^{L+1} = I, \quad {\boldsymbol{B}}^{L+1}{\boldsymbol{Q}}^{L+1} = I, $$
(27)
$${\boldsymbol{A}}^{L+1}{\boldsymbol{Q}}^{L+1} = O, \quad {\boldsymbol{B}}^{L+1}{\boldsymbol{P}}^{L+1} = O, $$
(28)

where \(I\)s denote the unit matrices and \(O\)s denote the zero matrices of the corresponding dimensions.

Remark 1. By virtue of relations (27) and (28), the matrices constructed above can be used as generator and check matrices in constructing a noise-resistant coding scheme with the exact reconstruction condition (26) (see, for instance, [15] for more details).

It follows from representation (14) that the matrices \(\left[{\boldsymbol{P}}^{L+1} \;|\; {\boldsymbol{Q}}^{L+1} \right]\) are sparse. In the general case, however, their inverses are no longer sparse. The decomposition can then be performed without constructing the decomposition filters in explicit form: one has to solve the sparse system of linear equations (13), whose solvability is guaranteed by the linear independence of the basis functions. To solve it for the coefficients \({\boldsymbol{c}}^{L}\) and \({\boldsymbol{d}}^{L},\) it has been proposed to reduce the system matrix to banded form by reordering the unknowns and then apply a special solution method [2]. Another method is to construct an extension matrix whose elements are the values of an extended system of biorthogonal functionals on the initial basis [4].

In our case the matrices \(\left[\dfrac{{\boldsymbol{A}}^{L+1}} {{\boldsymbol{B}}^{L+1}} \right]\), which are the inverses of the matrices \(\left[{\boldsymbol{P}}^{L+1} \;|\; {\boldsymbol{Q}}^{L+1} \right]\), are sparse. From equalities (15) and (23) we find an explicit representation of the decomposition filters:

$${\boldsymbol{A}}^{L+1} = \left[ \begin{array}{ccccccccc} 1&0&0&0&0&\ldots &0&0&0 \\ 0&0&1&0&0&\ldots &0&0&0 \\ \vdots &\vdots &\vdots &\vdots&\vdots&\vdots&\vdots &\vdots &\vdots \\ 0&0&0&0&0&\ldots &1&0&0 \\ 0&0&0&0&0&\ldots &0&0&1 \\ \end{array} \right],$$
$${\boldsymbol{B}}^{L+1} = \left[ \begin{array}{ccccccccc} -p^{L+1}_{-1,2}&1&-p^{L+1}_{0,0}&0 &0 &\ldots &0 &0 &0 \\[1ex] 0 &0 &-p^{L+1}_{0,2}&1&-p^{L+1}_{1,0}&\ldots &0 &0 &0 \\ \vdots &\vdots &\vdots &\vdots&\vdots&\vdots&\vdots &\vdots &\vdots \\[1ex] 0&0 &0 &0 &0 &\ldots&-p^{L+1}_{2^Lm-2,2}&1&-p^{L+1}_{2^Lm-1,0} \\[1ex] \end{array} \right].$$
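As a numerical check (a sketch with our own assembly code and random admissible coefficients), the filters assembled from (14) and (15) satisfy relations (26)–(28); in the arrays below, `p2[j]` stores the shifted coefficient \(p^{L+1}_{j-1,2}\):

```python
import numpy as np

rng = np.random.default_rng(0)
m2 = 4                                   # 2^L m; any size >= 2 works
p0 = rng.uniform(0.1, 0.9, m2)           # p0[j] stores p_{j,0},   j = 0..m2-1
p2 = rng.uniform(0.1, 0.9, m2)           # p2[j] stores p_{j-1,2}, j = 0..m2-1

# reconstruction filters (14)
P = np.zeros((2*m2 + 1, m2 + 1))
Q = np.zeros((2*m2 + 1, m2))
P[0, 0], P[1, 0] = 1.0, p2[0]
for j in range(1, m2):
    P[2*j - 1, j], P[2*j, j], P[2*j + 1, j] = p0[j - 1], 1.0, p2[j]
P[2*m2 - 1, m2], P[2*m2, m2] = p0[m2 - 1], 1.0
Q[2*np.arange(m2) + 1, np.arange(m2)] = 1.0

# decomposition filters (15): A keeps the even coefficients, B forms the details
A = np.zeros((m2 + 1, 2*m2 + 1))
B = np.zeros((m2, 2*m2 + 1))
A[np.arange(m2 + 1), 2*np.arange(m2 + 1)] = 1.0
for i in range(m2):
    B[i, 2*i], B[i, 2*i + 1], B[i, 2*i + 2] = -p2[i], 1.0, -p0[i]
```

Multiplying the blocks confirms \({\boldsymbol{P}}{\boldsymbol{A}}+{\boldsymbol{Q}}{\boldsymbol{B}}=I\), \({\boldsymbol{A}}{\boldsymbol{P}}=I\), \({\boldsymbol{B}}{\boldsymbol{Q}}=I\), \({\boldsymbol{A}}{\boldsymbol{Q}}=O\), and \({\boldsymbol{B}}{\boldsymbol{P}}=O\) for any admissible coefficients.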

Remark 2. The matrix \({\boldsymbol{A}}^{L+1}\) is 2-circulant, but the matrix \({\boldsymbol{B}}^{L+1}\), in general, is not circulant.

Remark 3. Note that after “lazy” wavelets and corresponding filters are constructed, their improvement provides a simple way of constructing biorthogonal wavelets and lifting schemes (for more details, see [2, 16, 17]).

FUNDING

This work was supported by the Grants Council (under RF President), grant no. MD-2242.2019.9.