
1 Introduction: Harten’s Multiresolution

Multiscale decompositions are efficient tools for analyzing the information contained in a signal, with applications such as signal compression and denoising. If \(f^{L}\) represents a sampling of a signal, f(x), at the finest resolution level L, a multiresolution scheme rearranges this information into the decomposition \(\{f^{0},e^{1},e^{2},\ldots,e^{L}\}\), where \(f^{0}\) corresponds to the sampling at the coarsest resolution level and each sequence \(e^{k}\) contains the information needed to recover \(f^{k}\) from \(f^{k-1}\). If f(x) is smooth, the details \(e^{k}\) have small magnitude and can be removed without a great loss of information, which provides excellent compression capabilities. Harten introduced his notion of multiresolution in [5] and later generalized it in [6, 7]. We begin by reviewing the basic aspects of this formulation.

Let us consider \(V^{k+1}\) to be a finite-dimensional linear space. The discretization operator, \(\mathrm{D}_{k+1}:\mathrm{ F} \rightarrow V ^{k+1}\), yields the discrete values of a function, f ∈ F, while the reconstruction operator, \(\mathrm{R}_{k+1}: V ^{k+1} \rightarrow \mathrm{ F}\), performs the reverse operation and must satisfy the following property:

$$\displaystyle{ \mathrm{D}_{k+1}\mathrm{R}_{k+1} = I_{V ^{k+1}}. }$$
(1)

The reconstruction operator can be nonlinear. This allows the introduction of techniques that improve the approximation in the presence of discontinuities (see [1]), this being a fundamental difference from linear multiscale decompositions, such as the wavelet transform (see [3]).

The connection between two levels of resolution (larger k, higher resolution) is given by two operators: decimation, \(D_{k+1}^{k}: V ^{k+1} \rightarrow V ^{k}\), and prediction, \(P_{k}^{k+1}: V ^{k} \rightarrow V ^{k+1}\), which must satisfy the consistency requirement \(D_{k+1}^{k}P_{k}^{k+1} = I_{V ^{k}}\). For the reverse composition, however, \(P_{k}^{k+1}D_{k+1}^{k}\neq I_{V ^{k+1}}\), so we define the prediction error \(e^{k+1} = (I_{V ^{k+1}} - P_{k}^{k+1}D_{k+1}^{k})v^{k+1}\), where \(v^{k+1} \in V ^{k+1}\). If \(\{\mu _{i}^{k+1}\}\) is a basis of the null space of \(D_{k+1}^{k}\), we can express \(e^{k+1} =\sum _{i}d_{i}^{k+1}\mu _{i}^{k+1}\) (see [1]). We call the \(d_{i}^{k+1}\) the scale coefficients at level k + 1.

Finally, decimation operators can be constructed from a sequence of discretization operators, provided they are nested (see [1]):

$$\displaystyle{ \mathrm{D}_{k+1}f = 0 \Rightarrow \mathrm{ D}_{k}f = 0,\quad \forall k \in \mathbb{N},\quad \forall f \in \mathrm{ F}. }$$
(2)

The most commonly used discretizations are point values and cell averages [1]. Since our goal is noise removal, and noise is naturally reduced by cell-average decimation, this discretization will be used in what follows.

1.1 Cell Average Discretization in [0,1]

Consider a set of nested dyadic grids defined in [0,1]:

$$\displaystyle{ X^{k} =\{ x_{ i}^{k}\}_{ i=0}^{N_{k} },\quad N_{k} = 2^{k}N_{ 0},\quad x_{i}^{k} = \mathit{ih}_{ k},\quad h_{k} = \tfrac{1} {N_{k}},\quad k = 0,\ldots,L, }$$
(3)

where \(N_{0} \in \mathbb{N}\). If F = L 1([0, 1]), the cell average discretization operator \(\mathrm{D}_{k+1}:\mathrm{ F} \rightarrow V ^{k+1}\), is defined in [7] as:

$$\displaystyle{ \bar{f}_{i}^{k+1}:= (\mathrm{D}_{ k+1}f)_{i} = \frac{1} {h_{k+1}}\int _{x_{i-1}^{k+1}}^{x_{i}^{k+1} }f(x)\mathit{dx},\quad 1 \leq i \leq N_{k+1}. }$$
(4)

By the additivity of the integral it is easy to see that \(\bar{f}_{i}^{k} = \tfrac{1} {2}(\bar{f}_{2i}^{k+1} +\bar{ f}_{ 2i-1}^{k+1})\), which defines the decimation operator and satisfies (2).
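In code, the decimation step is a one-liner. The following is a minimal NumPy sketch (0-based arrays; the function name is ours):

```python
import numpy as np

def decimate(f_fine):
    """Cell-average decimation: each coarse average is the mean of the
    two fine averages it covers."""
    f_fine = np.asarray(f_fine, dtype=float)
    assert f_fine.size % 2 == 0, "need an even number of fine cells"
    return 0.5 * (f_fine[0::2] + f_fine[1::2])
```

For instance, the cell averages of f(x) = x on a uniform grid are the cell midpoints, and decimation maps the midpoints of the fine grid to the midpoints of the coarse one.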

2 Interpolatory Reconstruction for Cell Averages

We define the r-th order interpolatory reconstruction as:

$$\displaystyle{ \overline{\mathrm{IC}}_{\mathit{nl},\mathit{nr}}^{r-1}(x,\bar{f}^{k}) = g_{ i}(x),\quad x \in [x_{i-1}^{k},x_{ i}^{k}],\quad i = 1,\ldots,N_{ k}, }$$
(5)

where g i (x) is the polynomial of degree \(r - 1 = \mathit{nl} + \mathit{nr}\) such that:

$$\displaystyle{ \frac{1} {h_{k}}\int _{x_{i+s-1}^{k}}^{x_{i+s}^{k} }g_{i}(x)\mathit{dx} =\bar{ f}_{i+s}^{k},\qquad s = -\mathit{nl},\ldots,\mathit{nr},\quad \mathit{nl},\mathit{nr} \in \mathbb{N}. }$$
(6)

The prediction operator is calculated as follows:

$$\displaystyle{(P_{k}^{k+1}\bar{f}^{k})_{ 2i-j} = \left (\mathrm{D}_{k+1}(\overline{\mathrm{IC}}_{\mathit{nl},\mathit{nr}}^{r-1}(x;\bar{f}^{k}))\right )_{ 2i-j} = \frac{1} {h_{k+1}}\int _{x_{2i-j-1}^{k+1}}^{x_{2i-j}^{k+1} }\overline{\mathrm{IC}}_{\mathit{nl},\mathit{nr}}^{r-1}(x;\bar{f}^{k})\mathit{dx},}$$

where j = 0, 1. Therefore \(\tfrac{1} {2}((P_{k}^{k+1}\bar{f}^{k})_{ 2i-1} + (P_{k}^{k+1}\bar{f}^{k})_{ 2i}) =\bar{ f}_{i}^{k}\), satisfying (1). Moreover, \(e_{2i-1}^{k+1} + e_{2i}^{k+1} = 0\), so we can define \(d_{i}^{k+1} = e_{2i-1}^{k+1}\) (see [1]).
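For the centered stencil \(\mathit{nl} = \mathit{nr} = 1\) (so r = 3), working out (5)–(6) yields the classical weights \((P_{k}^{k+1}\bar{f}^{k})_{2i-1} = \bar{f}_{i}^{k} + \tfrac{1}{8}(\bar{f}_{i-1}^{k} - \bar{f}_{i+1}^{k})\) and \((P_{k}^{k+1}\bar{f}^{k})_{2i} = \bar{f}_{i}^{k} - \tfrac{1}{8}(\bar{f}_{i-1}^{k} - \bar{f}_{i+1}^{k})\). A NumPy sketch (edge replication at the boundaries is our assumption; the paper does not fix a boundary rule):

```python
import numpy as np

def predict(f_coarse):
    """Centered cell-average prediction (nl = nr = 1, degree-2 polynomial).
    Boundary cells use edge replication (an assumption, not from the paper)."""
    f = np.asarray(f_coarse, dtype=float)
    fp = np.pad(f, 1, mode='edge')       # hypothetical boundary treatment
    d = (fp[:-2] - fp[2:]) / 8.0         # (f_{i-1} - f_{i+1}) / 8
    out = np.empty(2 * f.size)
    out[0::2] = f + d                    # fine cell 2i-1 (left half)
    out[1::2] = f - d                    # fine cell 2i   (right half)
    return out
```

By construction the two predicted halves average back to \(\bar{f}_{i}^{k}\), and away from the boundaries the prediction is exact for cell averages of polynomials of degree up to 2.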

Then, the multiresolution scheme is:

Codification, \(\bar{f}^{L} \rightarrow \{\bar{f}^{0},d^{1},d^{2},\ldots,d^{L}\}\) (Direct Transformation):

$$\displaystyle{ \left \{\begin{array}{ll} \mathbf{For}\ k = L - 1,\ldots,0 & \\ \ \quad \bar{f}_{i}^{k} = \tfrac{1} {2}(\bar{f}_{2i-1}^{k+1} +\bar{ f}_{ 2i}^{k+1}), &i = 1,\ldots,N_{ k}, \\ \ \quad d_{i}^{k+1} =\bar{ f}_{2i-1}^{k+1} - (P_{k}^{k+1}\bar{f}^{k})_{2i-1},&i = 1,\ldots,N_{k}. \end{array} \right. }$$
(7)

Decodification, \(\{\bar{f}^{0},d^{1},d^{2},\ldots,d^{L}\} \rightarrow \bar{ f}^{L}\) (Inverse Transformation):

$$\displaystyle{ \left \{\begin{array}{ll} \mathbf{For}\ k = 0,\ldots,L - 1 & \\ \ \quad \bar{f}_{2i-1}^{k+1} = (P_{k}^{k+1}\bar{f}^{k})_{2i-1} + d_{i}^{k+1}, &i = 1,\ldots,N_{k}, \\ \ \quad \bar{f}_{2i}^{k+1} = 2\bar{f}_{i}^{k} -\bar{ f}_{2i-1}^{k+1} \equiv (P_{k}^{k+1}\bar{f}^{k})_{2i} - d_{i}^{k+1},&i = 1,\ldots,N_{k}. \end{array} \right. }$$
(8)
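The pair (7)–(8) is easy to verify in code. The sketch below uses the simplest consistent prediction (both fine cells inherit the coarse average) in place of \(\overline{\mathrm{IC}}\), purely for brevity; any prediction satisfying the consistency requirement gives the same perfect-reconstruction property:

```python
import numpy as np

def decimate(f):
    return 0.5 * (f[0::2] + f[1::2])

def predict(f):
    # simplest consistent prediction: both fine cells inherit the coarse average
    return np.repeat(f, 2)

def forward(fL, L):
    """Direct transform (7): f^L -> {f^0, d^1, ..., d^L}."""
    f, details = np.asarray(fL, dtype=float), []
    for _ in range(L):
        c = decimate(f)
        d = f[0::2] - predict(c)[0::2]   # d_i^{k+1} = fbar_{2i-1}^{k+1} - (P fbar^k)_{2i-1}
        details.append(d)
        f = c
    return f, details[::-1]              # details ordered d^1, ..., d^L

def inverse(f0, details):
    """Inverse transform (8)."""
    f = np.asarray(f0, dtype=float)
    for d in details:
        odd = predict(f)[0::2] + d       # fbar_{2i-1}^{k+1}
        even = 2.0 * f - odd             # fbar_{2i}^{k+1} = 2 fbar_i^k - fbar_{2i-1}^{k+1}
        out = np.empty(2 * f.size)
        out[0::2], out[1::2] = odd, even
        f = out
    return f
```

The inverse transform recovers \(\bar{f}^{L}\) exactly when all details are kept; denoising results from setting (some of) them to zero.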

2.1 Nonlinear Techniques

Nonlinear techniques help to improve the reconstructions in the presence of discontinuities. Here we apply the ENO and SR techniques.

2.1.1 ENO Technique

The ENO (Essentially Non-Oscillatory, [8]) interpolation technique consists in choosing, for each interval, a stencil that does not cross a discontinuity. This is possible whenever the working interval itself contains no discontinuities. Typically, the stencil is selected according to the magnitude of the divided differences. However, if the function is contaminated by noise, the information provided by divided differences is unreliable and we must seek an alternative. Inspired by [9], we use a selection criterion that is not affected by the presence of noise. If \(m = \mathit{nl} + \mathit{nr} + 1\), we define the measure:

$$\displaystyle{ \bar{E}_{2}(x_{i},m,l) =\sum _{ j=1}^{m}\left (\bar{q}_{ i}^{\overline{\mathrm{LS}}_{m-l,l-1} }(x_{i-(m-l)+j-1}^{k};\bar{f}^{k},s) -\bar{ f}_{ i-(m-l)+j-1}^{k}\right )^{2}, }$$
(9)

where \(\bar{q}_{i}^{\overline{\mathrm{LS}}_{\mathit{nl},\mathit{nr}}}(x;\bar{f}^{k},s)\) is the least squares polynomial of degree \(s - 1 < \mathit{nl} + \mathit{nr}\) for the cell averages on the stencil \(\{x_{i-\mathit{nl}}^{k},\ldots,x_{i+\mathit{nr}}^{k}\}\).

Now, we take:

$$\displaystyle{ \bar{E}_{2}(x_{i},m,l^{{\ast}}) = \mathit{min}\left \{\bar{E}_{ 2}(x_{i},m,1),\bar{E}_{2}(x_{i},m,2),\ldots,\bar{E}_{2}(x_{i},m,m)\right \}, }$$
(10)

and the ENO stencil for \(I_{i}^{k} = (x_{i-1}^{k},x_{i}^{k})\) is \(\left \{x_{i-\mathit{nl}_{i}}^{k},\ldots,x_{i+\mathit{nr}_{i}}^{k}\right \}\), with:

$$\displaystyle{ \mathit{nl}_{i}:= m - l^{{\ast}},\qquad \mathit{nr}_{ i}:= l^{{\ast}}- 1. }$$
(11)
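A simplified sketch of this stencil selection: we approximate the cell-average least squares fit of (9) by a polynomial fit at the cell centers (our simplification, for illustration only), then minimize the residual over the m possible shifts:

```python
import numpy as np

def eno_shift(fbar, i, m, s=2):
    """Pick l* in {1,...,m} minimizing a least-squares residual like (9).
    Sketch: degree-(s-1) fit to the cell averages placed at cell centers
    (a simplification of the genuine cell-average least squares fit).
    i is the 0-based cell index; shifts leaving the domain are skipped."""
    best_l, best_err = None, np.inf
    for l in range(1, m + 1):
        idx = np.arange(i - (m - l), i + l)      # stencil cells i-(m-l), ..., i+l-1
        if idx[0] < 0 or idx[-1] >= fbar.size:
            continue
        x = idx + 0.5                            # cell centers (unit spacing)
        coef = np.polyfit(x, fbar[idx], s - 1)
        err = np.sum((np.polyval(coef, x) - fbar[idx]) ** 2)
        if err < best_err:
            best_l, best_err = l, err
    return best_l                                # then nl_i = m - l*, nr_i = l* - 1
```

Near a jump the residual is large for any stencil that crosses it, so the selected stencil stays on one side of the discontinuity even when the data are noisy.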

2.1.2 SR Technique

With the SR technique (Subcell Resolution, [2, 4]) we can improve the approximation even in the interval containing the discontinuity. The idea is to suitably extend the adjacent interpolating polynomials up to the point of discontinuity.

First, we define some useful concepts. If f(x) has a jump in \([x_{i-1}^{k},x_{i}^{k}]\), the primitive function of f, \(F(x) =\int _{ 0}^{x}f(y)\mathit{dy} \in \mathrm{ C}([0,1])\), has a corner (a discontinuity in the derivative) there. Note that the sets \(\{\bar{f}_{i}^{k}\}_{i=1}^{N_{k}}\) and \(F^{k} =\{ F_{i}^{k}\}_{i=0}^{N_{k}}\) are equivalent due to the relations \(F_{i}^{k} = F(x_{i}^{k}) =\int _{ 0}^{x_{i}^{k} }f(y)\mathit{dy} = h_{k}\sum _{j=1}^{i}\bar{f}_{j}^{k}\) and \(\bar{f}_{j}^{k} = \frac{1} {h_{k}}(F_{j}^{k} - F_{j-1}^{k})\).
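This equivalence is immediate to implement (a sketch; the function names are ours):

```python
import numpy as np

def primitive(fbar, h):
    """F_i^k = h * sum_{j<=i} fbar_j^k, with F_0^k = 0."""
    return np.concatenate(([0.0], h * np.cumsum(fbar)))

def averages_from_primitive(F, h):
    """fbar_j^k = (F_j^k - F_{j-1}^k) / h."""
    return np.diff(F) / h
```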

If \(m = \mathit{nl} + \mathit{nr} + 1\), the SR technique is summarized as follows:

  1.

    Taking stencils with m nodes, we calculate the ENO stencils by (11).

  2.

    If \(\mathit{nl}_{i-1} = m - 1\) and \(\mathit{nl}_{i+1} = 0\), the stencils for the cells \(I_{i-1}^{k}\) and \(I_{i+1}^{k}\) are disjoint. We label the cell \(I_{i}^{k}\) as suspected of containing a discontinuity.

  3.

    For each suspicious cell we define the function

    $$\displaystyle{ G_{i}^{\overline{\mathrm{IC}}}(x) = q_{ i+1,0,m-1}^{\overline{\mathrm{IP}}}(x;F^{k},r) - q_{ i-1,m-1,0}^{\overline{\mathrm{IP}}}(x;F^{k},r), }$$
    (12)

    where \(q_{j,\mathit{nl},\mathit{nr}}^{\overline{\mathrm{IP}}}(x;F^{k},s)\) is the polynomial of degree s that interpolates the point values \((x_{l}^{k},F_{l}^{k})\), \(j -\mathit{nl} - 1 \leq l \leq j + \mathit{nr}\).

    If \(G_{i}^{\overline{\mathrm{IC}}}(x_{i-1}^{k}) \cdot G_{i}^{\overline{\mathrm{IC}}}(x_{i}^{k}) < 0\) we label the cell \(I_{i}^{k}\) as singular.

  4.

    If \(G_{i}^{\overline{\mathrm{IC}}}(x_{i-1}^{k}) \cdot G_{i}^{\overline{\mathrm{IC}}}(x_{2i-1}^{k+1}) < 0\), the node \(x_{2i-1}^{k+1}\) lies to the right of the discontinuity. Then the predicted values are obtained as follows:

    $$\displaystyle\begin{array}{rcl} (P_{k}^{k+1}\bar{f}^{k})_{ 2i}& =& \frac{q_{i+1,0,m-1}^{\overline{\mathrm{IP}}}(x_{2i}^{k+1};F^{k},r) - q_{i+1,0,m-1}^{\overline{\mathrm{IP}}}(x_{2i-1}^{k+1};F^{k},r)} {h_{k+1}},\qquad {}\end{array}$$
    (13)
    $$\displaystyle\begin{array}{rcl} (P_{k}^{k+1}\bar{f}^{k})_{ 2i-1}& =& 2\bar{f}_{i}^{k} - (P_{ k}^{k+1}\bar{f}^{k})_{ 2i}. {}\end{array}$$
    (14)

    Otherwise, \(x_{2i-1}^{k+1}\) lies to the left of the discontinuity and:

    $$\displaystyle\begin{array}{rcl} (P_{k}^{k+1}\bar{f}^{k})_{ 2i-1}& =& \frac{q_{i-1,m-1,0}^{\overline{\mathrm{IP}}}(x_{2i-1}^{k+1};F^{k},r) - q_{i-1,m-1,0}^{\overline{\mathrm{IP}}}(x_{2i-2}^{k+1};F^{k},r)} {h_{k+1}},\qquad {}\end{array}$$
    (15)
    $$\displaystyle\begin{array}{rcl} (P_{k}^{k+1}\bar{f}^{k})_{ 2i}& =& 2\bar{f}_{i}^{k} - (P_{ k}^{k+1}\bar{f}^{k})_{ 2i-1}. {}\end{array}$$
    (16)

3 Least Squares Reconstruction for Cell Averages

The schemes in this case are similar to those discussed in Sect. 2, but now we use least squares fitting instead of interpolation. We define the r-th order least squares reconstruction for cell averages as:

$$\displaystyle{ \overline{\mathrm{LSC}}_{\mathit{nl},\mathit{nr}}^{r-1}(x,\bar{f}^{k}) = g_{ i}(x),\quad x \in [x_{i-1}^{k},x_{ i}^{k}],\quad i = 1,\ldots,N_{ k}, }$$
(17)

where \(g_{i}(x)\) is the polynomial of degree \(r - 1 < \mathit{nl} + \mathit{nr}\) that satisfies, in the least squares sense:

$$\displaystyle{ \frac{1} {h_{k}}\int _{x_{i+s-1}^{k}}^{x_{i+s}^{k} }g_{i}(x)\mathit{dx} =\bar{ f}_{i+s}^{k},\qquad s = -\mathit{nl},\ldots,\mathit{nr},\quad \mathit{nl},\mathit{nr} \in \mathbb{N}. }$$
(18)

The prediction operator is calculated as follows:

$$\displaystyle{(P_{k}^{k+1}\bar{f}^{k})_{ 2i-j}=\left (\mathrm{D}_{k+1}(\overline{\mathrm{LSC}}_{\mathit{nl},\mathit{nr}}^{r-1}(x;\bar{f}^{k}))\right )_{ 2i-j}= \frac{1} {h_{k+1}}\int _{x_{2i-j-1}^{k+1}}^{x_{2i-j}^{k+1} }\overline{\mathrm{LSC}}_{\mathit{nl},\mathit{nr}}^{r-1}(x;\bar{f}^{k})\mathit{dx},}$$

with j = 0, 1. Note that we no longer have any interpolation condition, and (1) is not fulfilled, since \(\tfrac{1} {2}((P_{k}^{k+1}\bar{f}^{k})_{ 2i-1} + (P_{k}^{k+1}\bar{f}^{k})_{ 2i})\neq \bar{f}_{i}^{k}\). At this point we suggest two options for (8):

  • Forcing consistency: \(\bar{f}_{2i}^{k+1} = 2\bar{f}_{i}^{k} -\bar{ f}_{2i-1}^{k+1}\). We denote it as \(\overline{\mathrm{LSC} - C}\).

  • Losing consistency: \(\bar{f}_{2i}^{k+1} = (P_{k}^{k+1}\bar{f}^{k})_{2i} - d_{i}^{k+1}\). We denote it as \(\overline{\mathrm{LSC} -\mathit{NC}}\).

3.1 Nonlinear Techniques

We can apply the nonlinear techniques similarly to Sect. 2.1, with the following adaptations:

  • In (9), s ≤ r is allowed.

  • The G function, (12), in this case is defined as follows:

    $$\displaystyle{ G_{i}^{\overline{\mathrm{LSC}}}(x) = q_{ i+1,0,m-1}^{\overline{\mathrm{LSP}}}(x;F^{k},r) - q_{ i-1,m-1,0}^{\overline{\mathrm{LSP}}}(x;F^{k},r), }$$
    (19)

    where \(q_{j,\mathit{nl},\mathit{nr}}^{\overline{\mathrm{LSP}}}(x;F^{k},s)\) is the s-th degree polynomial that approximates, in the least squares sense, the point values \((x_{l}^{k},F_{l}^{k})\) for \(j -\mathit{nl} - 1 \leq l \leq j + \mathit{nr}\). Since there are no interpolation conditions, we cannot express \(G_{i}^{\overline{\mathrm{LSC}}}\) in terms of \(\{\bar{f}_{i}^{k}\}\). To apply the SR technique for this reconstruction, we therefore use the function \(G_{i}^{\overline{\mathrm{IC}}}(x)\) to decide whether we are facing a singular cell; naturally, the polynomials in \(G_{i}^{\overline{\mathrm{IC}}}(x)\) must use stencils of the same length as those of \(\overline{\mathrm{LSC}}\).

4 Numerical Experiments

In this section we present some numerical denoising experiments applying the reconstructions studied in this paper.

We define the function:

$$\displaystyle{ g(x) = \left \{\begin{array}{cl} -\frac{4x-3} {5} \mathit{sin}(\frac{3} {2}\pi (\frac{4x-3} {5} )^{2})&\ \mathrm{if}\ 0 \leq x < \frac{3\pi } {29}, \\ \vert \mathit{sin}2\pi (\frac{4x-3} {5} ) + \frac{\pi } {1000}\vert &\ \mathrm{if}\ \frac{3\pi } {29} \leq x \leq 1.\\ \end{array} \right. }$$
(20)

The work function is of the following type: \(f(x) = g(x) + n(x),\) where g(x) is defined in (20) and n(x) is white Gaussian noise. To measure the noise of the signal, we consider the Signal-to-Noise Ratio, expressed in dB: \(\mathit{SNR}(g,f):= 10\mathit{log}_{10}(\sum _{i=1}^{N}g_{i}^{2}/\sum _{i=1}^{N}(g_{i} - f_{i})^{2})\), where N is the signal length.

The experiment is as follows: we fix SNR  = 25 dB and consider a discretization with \(N_{L} = 2^{6+L}\) nodes, obtaining \(\{\bar{f}_{i}^{L}\}_{i=1}^{N_{L}}\). First we decimate L levels of cell averages to get \(\{\bar{f}_{i}^{0}\}_{i=1}^{64}\). Then we apply L levels of the inverse transform (with \(d_{i}^{k} = 0\ \forall i,k\)) using different reconstructions, obtaining \(\{\hat{f}_{i}^{L}\}_{i=1}^{N_{L}}\). To evaluate the denoising quality we use the Root Mean Squared Error:

$$\displaystyle{ \mathit{RMSE}(\hat{f}^{L},\bar{g}^{L}) = \sqrt{ \frac{1} {N_{L}}\sum _{i=1}^{N_{L}}(\hat{f}_{i}^{L} -\bar{ g}_{i}^{L})^{2}}. }$$
(21)
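The whole experiment fits in a short script. The sketch below replaces (20) with a simpler discontinuous toy signal and uses the piecewise-constant prediction instead of the reconstructions under study (both substitutions are ours); the structure — add noise at a prescribed SNR, decimate L levels, invert with zero details, measure the RMSE — is the one described above:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(g, snr_db):
    """Scale white Gaussian noise so that SNR(g, g + n) equals snr_db."""
    n = rng.standard_normal(g.size)
    n *= np.sqrt(np.sum(g**2) / (10**(snr_db / 10) * np.sum(n**2)))
    return g + n

def rmse(a, b):
    return np.sqrt(np.mean((a - b)**2))

L = 3
N = 64 * 2**L
x = (np.arange(N) + 0.5) / N                   # cell centers
g = np.where(x < 0.5, np.sin(2*np.pi*x), 1 + np.sin(2*np.pi*x))  # toy signal
f = add_noise(g, 25.0)

fk = f
for _ in range(L):                             # decimate L levels
    fk = 0.5 * (fk[0::2] + fk[1::2])
fhat = fk
for _ in range(L):                             # inverse with zero details,
    fhat = np.repeat(fhat, 2)                  # piecewise-constant prediction
print(f"RMSE = {rmse(fhat, g):.4f}")
```

Averaging over L levels reduces the noise variance, at the price of a bias from the coarse representation of g; the reconstructions studied above aim at a better trade-off than this piecewise-constant baseline.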
Fig. 1 Denoising with \(\overline{\mathrm{IC}}\). Errors: (a) RMSE = 0.02196; (b) RMSE = 0.03206; (c) RMSE = 0.03526; (d) RMSE = 0.07537

In Fig. 1 we show the results obtained with the \(\overline{\mathrm{IC}} + SR\) reconstruction. In (a) (L = 3) the noise removal is poor because we use few levels. The Gibbs phenomenon does not appear, so the nonlinear techniques achieve their objective. In (b) we increase the number of levels to L = 5, obtaining efficient noise removal. In Fig. 1c, d we show what happens if we raise the degree of \(\overline{\mathrm{IC}}\): the results are worse because we are fitting high-degree polynomials to noisy data, which yields less smooth values, and the oscillations are amplified as the number of levels grows. We conclude that the degree of \(\overline{\mathrm{IC}}\) must be kept low.

Fig. 2 Denoising with \(\overline{\mathrm{LS}}\). Errors: (a) RMSE = 0.07252; (b) RMSE = 0.03560; (c) RMSE = 0.06847; (d) RMSE = 0.02271

In Fig. 2 we use the least squares reconstruction for cell averages with the SR technique. In (a) we use \(\overline{\mathrm{LSC} - C}\) and, as expected, we do not get good results: forcing consistency creates oscillations which are transmitted and amplified at higher levels. However, if we use \(\overline{\mathrm{LSC} -\mathit{NC}}\) with the same parameters, we obtain smoother results (Fig. 2b). Non-consistency has a drawback: we lose the connection between two consecutive levels and may lose the correct location of the discontinuities, as we can see in Fig. 2c. Nevertheless, there is an advantage with respect to the \(\overline{\mathrm{IC}}\) reconstruction: as (d) shows, with only three levels we achieve efficient noise removal, a fact confirmed by the RMSE (shown in the figure legend). For that we need longer stencils, which also improve the reliability of the discontinuity detection. Recall that we cannot do this for \(\overline{\mathrm{IC}}\), because lengthening the stencil raises the degree and thus worsens the reconstructions. Moreover, using few levels reduces both the computational cost and the effects of non-consistency.

Conclusions

We have studied the applicability of Harten's multiresolution with nonlinear techniques (ENO and SR) to signal denoising, adapting the standard schemes (using \(\bar{E}_{2}\) instead of divided differences to locate discontinuities). We have used two types of reconstruction: interpolatory and least squares, the latter with two variants (consistent and non-consistent) to improve the denoising.

Based on our numerical experiments, we conclude that the \(\overline{\mathrm{IC}}\) reconstruction can remove noise efficiently, provided we use low degrees and many levels. If we use the \(\overline{\mathrm{LSC}}\) reconstruction, we must use the non-consistent version, at the price of losing the exact discontinuity position. However, in some cases this may be advantageous over the interpolatory reconstruction: for example, when there are not enough initial data to apply a large number of levels, we can still eliminate a significant amount of noise using few levels of the \(\overline{\mathrm{LSC}}\) reconstruction.

As future work, we plan to design consistent reconstructions combining interpolation and least squares, in order to take advantage of both reconstructions.