1 Introduction

Pulsed signals are convenient tools that allow physicists to model highly localized events occurring randomly in time or across the state space. This study focuses on signals made of a few nonzero impulses, or peaks, with random amplitudes, possibly at very close range. Such signals are encountered in several fields, including geology [6], mechanical engineering [10], and biomedical engineering [11].

In this study, the modeling framework corresponds to the situation where a known system \({\mathcal {H}} (t)\) is excited by a signal x(t) with random impulses. The output signal is expressed as \( y(t) = \sum \nolimits _{k=1}^{d} x_{k}\, {\mathcal {H}} (t-\tau _k) + n(t)\), where n(t) is the random noise, d is the number of impulses, and \(x_{k}\) and \(\tau _k\) are their amplitudes and delays, respectively. The output measurement y(t) is an image of the original signal x(t) distorted by passage through a known linear and time-invariant system \({\mathcal {H}} (t)\) in the presence of noise. In matrix notation, this relationship can be written as \( {\varvec{y}} = {\mathbf {H}} {\varvec{x}} + {\varvec{n}} \). The objective is to retrieve the original sparse signal x(t) given \({\mathcal {H}} (t)\) and y(t), which corresponds to a deconvolution problem. Deconvolution is an inverse problem and is well known to be ill-posed, since the Impulse Response Function (IRF) acts as a low-pass filter and the convolved signal is always affected by noise. Fortunately, regularization methods give satisfying solutions by accounting for a priori information on the original object [12].

In recent years, the sparse approximation of signals has drawn significant interest in many areas. The fundamental idea is that a signal can be approximated with only a few elementary signals (from now on referred to as atoms) taken from a redundant family (often referred to as a dictionary), whereas its projection onto a basis of elementary signals may result in a much larger number of nonzero coefficients. This basic idea is the source of recent theoretical developments and many practical applications in denoising, blind source separation, and compression [2, 7, 9]. Unlike orthogonal transforms, a redundant dictionary leads to non-unique representations of a given signal. Put differently, minimizing the number of nonzero coefficients in a linear combination approximating the data requires an exhaustive search, which is an NP-hard problem. Many techniques and algorithms, along with sufficient conditions, have been proposed to address this problem. These algorithms can be roughly classified into two strategies: greedy pursuit algorithms and convex relaxation algorithms. Greedy pursuit algorithms are computationally more tractable than \(\ell _0\)-norm minimization methods. They iteratively refine the approximation by choosing additional elementary signals at each iteration. Many algorithms based on this scheme have been developed in the literature, such as matching pursuit (MP) [17], orthogonal matching pursuit (OMP) [21], orthogonal least squares (OLS) [3], compressive sampling MP (CoSaMP) [19], forward-backward pursuit (FBP) [13] and its extension iterative forward-backward pursuit (IFBP) [26], and multipath matching pursuit (MMP) [15]. However, the majority of these methods share either the OMP selection criterion or the OLS selection criterion. The principle of convex relaxation techniques is to replace the minimization of the number of elements by the minimization of a different function, one that can be minimized more efficiently while encouraging solutions with many zero coefficients. The \(\ell _1\)-norm is widely used to this end [4, 22, 24].

Recently, the recovery of sparse spike signals using sparse approximation has become an active research topic. Maud and Bell proposed a mismatched greedy pursuit algorithm that overcomes the limitation of recovering sparse signals with a coherent dictionary [18]. In recent papers, sparse approximation methods are often combined with other techniques, such as Toeplitz sparse matrix factorization [27], Shearlet-Cauchy constrained inversion [16], or the Bregman algorithm [20], for higher accuracy and lower computational cost. Other sparse spike deconvolution algorithms incorporate interesting features that produce attractive results in harsh conditions, such as the normalization of the input data combined with the search for a suitable regularization parameter [8], and the use of autoregressive models to recover state dynamics from noisy and under-sampled measurements [14].

This study focuses on sparse spike deconvolution in order to reconstruct pulsed signals in which some of the peaks are very close to each other. To this end, an algorithm called orthogonal least absolute value (OLAV) is proposed. The particularity of OLAV lies in its selection criterion, which is based on minimizing the absolute value of the error between the signal and its approximation. To show the advantage of the proposed algorithm, OLAV is compared to other greedy algorithms in different situations and noisy environments.

The paper is organized as follows. Sect. 2 recalls the theory of the sparse approximation problem and describes two greedy algorithms, OMP and OLS. The principal contribution of this paper is presented in Sect. 3 with an analysis of the greedy algorithms' selection criteria. A performance evaluation of OMP, OLS, FBP, MMP, and OLAV is carried out in Sect. 4 to show the advantage of the proposed algorithm. In Sect. 5, the three greedy algorithms analyzed in the theoretical study, OMP, OLS, and OLAV, are applied to phonocardiogram (PCG) signals for a more detailed investigation and comparison. Finally, conclusions are drawn in Sect. 6.

2 Sparse Approximation Theory

2.1 Recall of Sparse Approximation Problem

The sparse approximation problem consists in approximating a signal as a linear combination of a limited number of elementary signals chosen from a redundant collection (dictionary). It can be formulated as: find sparse coefficients \({\varvec{x}} \) such that \(\Phi {\varvec{x}} \approx {\varvec{y}},\) where \({\varvec{y}} \) is the measured data and \(\Phi \) is a known matrix with atoms \(\{\phi _k\}_{k=1\ldots Q}\). A compromise between a satisfying approximation and the number of included elementary signals is inherent to the problem. Mathematically, this compromise results from minimizing the following criterion:

$$\begin{aligned} {\mathcal {J}} ({\varvec{x}}) = ||{\varvec{y}}-\Phi {\varvec{x}} ||_2^2 + \beta ||{\varvec{x}} ||_0 \end{aligned}$$
(1)

The scalar \(\beta \) is an essential parameter that adjusts the trade-off between the sparsity of the solution and the quality of the approximation [12]. Of course, minimizing such a criterion is a combinatorial optimization problem, which is generally known to be NP-hard. However, two strategies are usually employed to avoid sweeping every combination: (1) greedy algorithms, which iteratively improve the approximation by successively identifying additional elementary signals that improve the approximation quality [17, 25]; (2) convex relaxation algorithms, which relax the criterion (1) by replacing the combinatorial problem with a simpler optimization problem, often chosen convex [4]. In the latter, the \(\ell _0\)-norm is usually relaxed with an \(\ell _p\)-norm. For \(p=1\), this problem corresponds to the least absolute shrinkage and selection operator (LASSO) regression [24], or Basis Pursuit Denoising (BPDN) in signal processing [4].
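To make the convex-relaxation route concrete, the sketch below solves the \(\ell _1\)-relaxed criterion with iterative shrinkage-thresholding (ISTA). This is a minimal illustration, not the implementation of [4] or [24]; the function name, step size, and penalty value are assumptions made for the example.

```python
import numpy as np

def ista(Phi, y, beta=0.1, n_iter=200):
    """Minimize ||y - Phi x||_2^2 + beta * ||x||_1 by iterative
    shrinkage-thresholding (a basic solver for LASSO/BPDN)."""
    L = np.linalg.norm(Phi, 2) ** 2   # squared spectral norm: half the gradient's Lipschitz constant
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)  # half the gradient of the quadratic term
        z = x - grad / L              # gradient step of size 1/(2L)
        x = np.sign(z) * np.maximum(np.abs(z) - beta / (2 * L), 0.0)  # soft threshold
    return x
```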

2.2 Structure of the Dictionary \({\mathbf {H}} \)

In the case of deconvolution, the dictionary must be generated from the IRF and cannot be chosen freely, contrary to generic sparse approximation, where the dictionary is typically selected as a combination of bases or a wavelet dictionary.

Let us specify the boundary condition considered in the convolution operator, since this hypothesis affects the size and structure of the dictionary \({\mathbf {H}} \) generated from the IRF. It is assumed that the convolution \({\mathbf {H}} {\varvec{x}} \) is performed with zero-padded edges. Under this option, the resulting signal has length \(L_y=L_x+L_h-1\), where \(L_x\) and \(L_h\) denote respectively the length of the sparse signal and the length of the IRF. Note that \({\mathbf {H}} \) is a sparse matrix of dimension \(L_y\times L_x\) with \(L_h\times L_x\) nonzero elements (the length of the IRF is usually much smaller than the length of the signal). Moreover, as the \(L_x\) atoms of the dictionary correspond to delayed versions of the IRF, the matrix \({\mathbf {H}} \) is built entirely from the \(L_h\) elements of the IRF. Furthermore, the matrix \({\mathbf {H}} \) has a Toeplitz structure (a diagonal-constant matrix) as it models a convolution operator.
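As an illustration, a minimal numpy sketch of this construction (the function name is ours) stacks delayed copies of the IRF into the zero-padded dictionary:

```python
import numpy as np

def convolution_dictionary(h, L_x):
    """Build the L_y x L_x Toeplitz dictionary whose L_x columns are
    delayed copies of the IRF h, assuming zero-padded edges."""
    L_h = len(h)
    L_y = L_x + L_h - 1                  # zero-padded convolution length
    H = np.zeros((L_y, L_x))
    for i in range(L_x):
        H[i:i + L_h, i] = h              # column i = IRF delayed by i samples
    return H
```

With this matrix, the forward model of Sect. 1 reduces to a matrix-vector product, \({\varvec{y}} = {\mathbf {H}} {\varvec{x}} + {\varvec{n}} \), and `H @ x` coincides with `np.convolve(x, h)`.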

2.3 Greedy Algorithms Description

Let \({\mathbf {H}} _\Lambda \) be the sub-matrix constructed from the columns of \({\mathbf {H}} \) whose indexes are in \(\Lambda \), with \({\varvec{h}} _i={\mathbf {H}} _{\{i\}},\) and let \(\Lambda ^{(k)}\) be the set of indexes chosen at iteration k. The vectors are defined as \({\varvec{x}} =[x_1,\dots , x_{L_x}]^\mathtt {T}\), \({\varvec{y}} =[y_1,\dots , y_{L_y}]^\mathtt {T}\), \({\varvec{n}} =[n_1,\ldots , n_{L_y}]^\mathtt {T}\), and \({\varvec{r}} =[r_1,\dots , r_{L_y}]^\mathtt {T}\), the latter standing for the residual. Finally, \(L_x\), \(L_y\), and \(L_h\) represent respectively the lengths of \({\varvec{x}} \), \({\varvec{y}} \), and \({\mathcal {H}} \).

In this study, the interest is focused on the restoration of a pulsed signal with some peaks very close to each other by using greedy algorithms together with a dictionary. The dictionary is given by the Toeplitz matrix \({\mathbf {H}} \) generated from the IRF \({\mathcal {H}} \). Greedy algorithms are iterative algorithms consisting of two major steps at each iteration: (1) the selection of an additional atom from the dictionary; (2) the update of the solution and of the corresponding approximation. The stopping rule at the end of the loop limits the number of iterations.

Let \({\varvec{x}} ^{(k)}\) be the solution at the kth iteration, \({\varvec{x}} _{\Lambda }^{(k)}\) its coefficients at indexes \(\Lambda ^{(k)}\), and \({\varvec{r}} ^{(k)} = {\varvec{y}}- {\mathbf {H}} {\varvec{x}} ^{(k)}\) the corresponding residual. The usual structure of a greedy algorithm has three major steps: first, the initialization of the main parameters; second, the search for the best atom \({\varvec{h}} _{i}\) improving the approximation; third, the update of the solution and of the residual. The difference between the various algorithms lies in the selection or updating steps. The best-known greedy algorithms are MP [17] and its orthogonal version OMP [21].

Algorithm 1 The OMP and OLS algorithms

The OMP algorithm has a simple structure that makes it easy to implement. The additional atom is the one that maximizes the dot product with the residual. The update is then an orthogonal projection of the data onto the span of the selected atoms. As a result, re-selecting previously selected atoms is avoided, but the computational cost increases because the amplitudes associated with all selected atoms are updated. OLS differs from OMP in the atom selection step, as the selected atom minimizes the energy of the approximation error. However, the computational cost of OLS is higher than that of OMP. Algorithm 1 summarizes the OMP and OLS algorithms. More details about the stopping rule can be found in [23].
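For concreteness, the following sketch implements the shared greedy loop with both selection rules; it is a didactic, unoptimized version with a fixed iteration count as stopping rule (an assumption), not the authors' reference implementation.

```python
import numpy as np

def greedy_deconvolution(H, y, n_iter, rule="omp"):
    """Greedy loop shared by OMP and OLS; only the selection step differs."""
    support = []
    r = y.copy()
    for _ in range(n_iter):
        if rule == "omp":
            # OMP selection: the atom most correlated with the residual
            i = int(np.argmax(np.abs(H.T @ r)))
        else:
            # OLS selection: the atom whose inclusion minimizes the l2 error
            cands = [j for j in range(H.shape[1]) if j not in support]
            errs = []
            for j in cands:
                cols = H[:, support + [j]]
                x_try, *_ = np.linalg.lstsq(cols, y, rcond=None)
                errs.append(np.sum((y - cols @ x_try) ** 2))
            i = cands[int(np.argmin(errs))]
        support.append(i)
        # Orthogonal update: project y onto the span of the selected atoms
        x_s, *_ = np.linalg.lstsq(H[:, support], y, rcond=None)
        r = y - H[:, support] @ x_s
    x = np.zeros(H.shape[1])
    x[support] = x_s
    return x
```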

3 Theoretical Study

3.1 The Proposed Algorithm

The OLAV is derived from the OLS algorithm and has roughly the same computational cost. OLAV differs in the selection step, as the selected atom minimizes the absolute value of the approximation error. This idea emerged after noticing that OLS could not correctly restore very close peaks in some experimental data. Replacing the \(\ell _2\)-norm of the approximation error by the \(\ell _1\)-norm allowed the correct restoration of these peaks, hence the OLAV method. Algorithm 2 gives the OLAV steps.

Algorithm 2 The OLAV algorithm
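As the pseudocode figure is not reproduced here, a sketch of the OLAV selection step is given below: it is the OLS loop above with the candidate error measured in the \(\ell _1\)-norm, as in Eq. (4). Computing the candidate amplitudes by least squares, as in Eq. (5), is our assumption.

```python
import numpy as np

def olav_select(H, y, support):
    """OLAV selection step: pick the atom whose inclusion minimizes the
    l1 norm (absolute value) of the approximation error."""
    cands = [j for j in range(H.shape[1]) if j not in support]
    errs = []
    for j in cands:
        cols = H[:, support + [j]]
        x_try, *_ = np.linalg.lstsq(cols, y, rcond=None)   # least-squares amplitudes
        errs.append(np.linalg.norm(y - cols @ x_try, 1))   # l1 error instead of l2
    return cands[int(np.argmin(errs))]
```

The rest of the loop (orthogonal update, residual, stopping rule) is unchanged with respect to Algorithm 1.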

3.2 Analysis of Selection Criteria

OLAV and OLS are the most closely related greedy algorithms, as they both aim to minimize the approximation error. However, minimizing the energy of the error is different from minimizing its absolute value. To explain this point, let us expand the OLS selection criterion of Eq. (3):

$$\begin{aligned} ||{\varvec{y}}-{\mathbf {H}} _{\Lambda }{\varvec{x}} _{\Lambda } ||_2^2 = ||{\varvec{y}} ||_2^2 + ||{\mathbf {H}} _{\Lambda } {\varvec{x}} _{\Lambda }||_2^2 - 2 {\varvec{y}} ^\mathtt {T} {\mathbf {H}} _{\Lambda } {\varvec{x}} _{\Lambda } \text { where } {\varvec{x}} _{\Lambda } = ({\mathbf {H}} _{\Lambda }^\mathtt {T}{\mathbf {H}} _{\Lambda })^{-1}{\mathbf {H}} _{\Lambda }^\mathtt {T}{\varvec{y}} \end{aligned}$$
(5)

The term \({\varvec{y}} ^\mathtt {T} {\mathbf {H}} _{\Lambda } {\varvec{x}} _{\Lambda }\) represents the correlation between the measurement \({\varvec{y}} \) and the contribution \({\mathbf {H}} _{\Lambda } {\varvec{x}} _{\Lambda }\) of the signal to the measurement. After several iterations, the variance term \(||{\mathbf {H}} _{\Lambda } {\varvec{x}} _{\Lambda }||_2^2\) is slightly less than \({\varvec{y}} ^\mathtt {T} {\mathbf {H}} _{\Lambda } {\varvec{x}} _{\Lambda }\), since the impulses are sparse, close to one another, and independent of the noise. This is due to the fact that \({\varvec{y}} = {\mathbf {H}} _{\Lambda } {\varvec{x}} _{\Lambda } + {\mathbf {H}} _{\bar{\Lambda }} {\varvec{x}} _{\bar{\Lambda }} + {\varvec{n}} \), where \(\bar{\Lambda }\) denotes the set of non-selected atom indexes at the kth iteration. Both sets are updated in every selection criterion trial as \(\Lambda = \Lambda ^{(k-1)}\cup {\{i^{(k)}\}}\) and \(\bar{\Lambda } =\bar{\Lambda }^{(k-1)}\backslash {\{ i^{(k)}\}}\). When the spacing between impulses is greater than the length of the significant coefficients of the IRF, the term \(({\mathbf {H}} _{\bar{\Lambda }} {\varvec{x}} _{\bar{\Lambda }})^\mathtt {T} {\mathbf {H}} _{\Lambda } {\varvec{x}} _{\Lambda }\) is zero. Therefore, the terms \(||{\mathbf {H}} _{\Lambda } {\varvec{x}} _{\Lambda }||_2^2\) and \({\varvec{y}} ^\mathtt {T} {\mathbf {H}} _{\Lambda } {\varvec{x}} _{\Lambda }\) are exactly equal, since the sparse signal is independent of the noise. Moreover, the quantity \(||{\varvec{y}}-{\mathbf {H}} _{\Lambda }{\varvec{x}} _{\Lambda } ||_2^2\) becomes smaller as \({\varvec{y}} ^\mathtt {T} {\mathbf {H}} _{\Lambda } {\varvec{x}} _{\Lambda }\) grows larger. Unfortunately, when the impulses are very close, this term tends towards high values for indices situated between the locations of the close impulses.

The same problem is observed for the OMP selection criterion in Eq. (2), because the terms are similar. To demonstrate this, the OMP selection criterion at the kth iteration is expanded as follows:

$$\begin{aligned} | {{\varvec{r}} ^{(k-1)}}^\mathtt {T} {\mathbf {H}} | &= |({\varvec{y}}- {\mathbf {H}} _{\Lambda ^{(k-1)}}{\varvec{x}} _{\Lambda ^{(k-1)}} )^\mathtt {T} {\mathbf {H}} | \\ &= |{\varvec{x}} _{\bar{\Lambda }^{(k-1)}}^\mathtt {T} {\mathbf {H}} _{\bar{\Lambda }^{(k-1)}}^\mathtt {T} {\mathbf {H}} +{\varvec{n}} ^\mathtt {T} {\mathbf {H}} | \end{aligned}$$
(6)

Generally, \(| {{\varvec{r}} ^{(k-1)}}^\mathtt {T} {\mathbf {H}} |\) is maximal when \({\varvec{x}} _{\bar{\Lambda }^{(k-1)}}^\mathtt {T} {\mathbf {H}} _{\bar{\Lambda }^{(k-1)}}^\mathtt {T} {\mathbf {H}} \) is maximal too. The latter suffers from the same problem as the OLS selection criterion, since \({\mathbf {H}} _{\bar{\Lambda }^{(k-1)}}^\mathtt {T}{\mathbf {H}} \) presents larger values for indices i situated between close peaks. Furthermore, Eq. (6) shows that, in contrast to OLS, the OMP selection criterion is sensitive to noise because of the term \({\varvec{n}} ^\mathtt {T} {\mathbf {H}} \).

The OLAV selection criterion in Eq. (4) is equivalent to:

$$\begin{aligned} ||{\varvec{y}}-{\mathbf {H}} _{\Lambda } {\varvec{x}} _{\Lambda } ||_1 = || {\mathbf {H}} _{\bar{\Lambda }}{\varvec{x}} _{\bar{\Lambda }} + {\varvec{n}} ||_1 \end{aligned}$$
(7)

Unlike for OLS and OMP, this problem is not encountered with OLAV, since the selection criterion of Eq. (4) aims only to minimize the absolute value of the error. This leads to a considerable performance enhancement, particularly for pulsed signals with peaks close to each other. However, Eq. (7) clearly shows that the OLAV selection criterion is sensitive to noise, as is that of OMP.
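This contrast can be reproduced numerically. The sketch below evaluates the first-iteration OLS and OLAV criteria for the two close impulses of the example used in this section (indexes 24 and 30, see below); the IRF parameters and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
L_x, L_h = 64, 21
t = np.arange(L_h) - L_h // 2
h = np.exp(-(t / 4.0) ** 2) * np.cos(2 * np.pi * 0.2 * t)   # illustrative IRF
H = np.zeros((L_x + L_h - 1, L_x))
for i in range(L_x):
    H[i:i + L_h, i] = h                                      # Toeplitz dictionary

x_true = np.zeros(L_x)
x_true[24], x_true[30] = 1.0, 0.8                            # two very close impulses
y = H @ x_true + 0.01 * rng.standard_normal(H.shape[0])

def first_iter_errors(p):
    """Approximation error of every single-atom fit, in the l_p norm."""
    errs = []
    for j in range(L_x):
        a, *_ = np.linalg.lstsq(H[:, [j]], y, rcond=None)
        errs.append(np.linalg.norm(y - H[:, [j]] @ a, p))
    return np.array(errs)

print(np.argmin(first_iter_errors(2)))   # OLS pick: may fall between indexes 24 and 30
print(np.argmin(first_iter_errors(1)))   # OLAV pick: on or near a true impulse
```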

Fig. 1 a The measured signal (SNR = 20 dB). b The IRF used for the convolution

To visualize how the three algorithms perform, a numerical simulation based on the example in Sect. 4.1 is considered, with a pulsed input signal consisting of only \(d=2\) random impulses at the very close indexes 24 and 30. The time representations of the IRF and of the resulting signal, with a Signal-to-Noise Ratio (SNR) of 20 dB, are reported in Fig. 1.

Fig. 2 The reconstructed signal versus the original one for the three algorithms. a OMP. b OLS. c OLAV

Figure 2 reports the true sparse signal (blue line) and the estimated signal (red line) for each method. The results show that only the OLAV algorithm reconstructs both peaks without any false detection, unlike OMP and OLS. To understand what happens with the OMP and OLS algorithms, the three selection criteria are tracked in Fig. 3.

Fig. 3 The tracking of the selection criteria of the three algorithms at every iteration

For the OLS, when the impulses are very close, the terms \({\varvec{y}} ^\mathtt {T} {\mathbf {H}} _{\Lambda } {\varvec{x}} _{\Lambda }\) and \(||{\mathbf {H}} _{\Lambda } {\varvec{x}} _{\Lambda }||_2^2\) tend towards higher values for indices situated between the impulses of the sparse signal, as shown in Fig. 4.

An almost similar behavior is observed for the OMP selection criterion. On the other hand, the behavior of OLAV is completely different, as its selection criterion aims to minimize the absolute value of the error, which leads to the exact recovery of both impulses. At the second iteration, OMP and OLS continue to detect false impulses as errors propagate through the iterations, whereas OLAV precisely restores the remaining impulse. The aim here is not only to make valid detections with accuracy and precision in the case of very close peaks, but also to limit the propagation of errors through subsequent iterations. Accordingly, the performances of OMP and OLS can degrade if they are not interrupted after a certain number of iterations.

Fig. 4 The tracking of the terms \({\varvec{y}} ^\mathtt {T} {\mathbf {H}} _{\Lambda } {\varvec{x}} _{\Lambda }\) and \(||{\mathbf {H}} _{\Lambda } {\varvec{x}} _{\Lambda }||_2^2\). a \(1^{st}\) iteration. b \(2^{nd}\) iteration

4 Simulation Tests

4.1 Description

To assess the performance of the proposed algorithm, a comparative study between OMP, OLS, and OLAV is conducted. This study involves a difficult deconvolution situation with low amplitudes, small spacing between impulses, and noise.

Fig. 5 a The measured signal (SNR = 26 dB). b The IRF used for all simulations

In this simulation, a pulsed signal sampled at 128 Hz is considered. The input signal consists of \(d=4\) random impulses at indexes 14, 20, 35, and 40, i.e., two close peaks followed by another two close peaks. The signal is then filtered by the IRF given by \( {\mathcal {H}} (t) = \mathrm{e}^{-(\frac{t}{\sigma _{\mathcal {H}}})^2} \cos (2 \pi f_0 t)\), where \(\sigma _{\mathcal {H}} \) and \(f_0\) represent respectively the damping and the frequency of the waveform. Moreover, i.i.d. Gaussian noise is added to the convolved signal such that the SNR is 26 dB. The number of iterations is fixed at four to prevent performance degradation of the algorithms. The resulting signal and the IRF are reported in Fig. 5.
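A sketch of this setup follows. The time grid of the IRF and the exact values of \(\sigma _{\mathcal {H}}\) and \(f_0\) are assumptions, since the text specifies only the functional form; the impulse amplitudes are drawn from the uniform law used later in Sect. 4.2.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 128.0                                    # sampling frequency (Hz)
t = np.arange(-32, 33) / fs                   # symmetric time grid for the IRF (assumption)
sigma_h, f0 = 0.05, 20.0                      # illustrative damping and frequency
h = np.exp(-(t / sigma_h) ** 2) * np.cos(2 * np.pi * f0 * t)   # IRF of Sect. 4.1

L_x = 64
x = np.zeros(L_x)
x[[14, 20, 35, 40]] = rng.uniform(0.1, 1.1, 4)   # two pairs of close impulses

y_clean = np.convolve(x, h)                   # zero-padded convolution, length L_x + L_h - 1
snr_db = 26.0
sigma_n = np.sqrt(np.mean(y_clean ** 2) / 10 ** (snr_db / 10))
y = y_clean + sigma_n * rng.standard_normal(len(y_clean))   # i.i.d. Gaussian noise at 26 dB SNR
```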

Fig. 6 The reconstructed signal versus the actual one for each algorithm. a OMP. b OLS. c OLAV

Figure 6 reports the deconvolution results, with the blue line representing the true sparse signal and the red line the recovered signal for each method. The results show that OLAV, unlike OMP and OLS, restores the four peaks with the exact amplitudes and provides the best recovery for this simulation. To investigate the behaviour of the three algorithms more thoroughly, two other simulations were performed.

4.2 Comparative Study

The objective of this simulation is to assess the performance of the studied greedy algorithms in different i.i.d. noisy environments. The simulation is performed with the same parameters as in Sect. 4.1, except for the SNR and the impulse amplitudes: the SNR varies from \(-10\) to 30 dB and the impulse amplitudes follow a uniform distribution between 0.1 and 1.1. Besides OMP and OLS, two other recent sparse recovery algorithms sharing the OMP selection criterion, FBP and MMP, are added to the comparative study. More details about the FBP and MMP algorithms can be found in [13, 15]. The performance comparison between these methods relies on the average mean squared error (MSE) and the averaged histogram over 1000 Monte Carlo (MC) runs.

4.2.1 Mean-Squared Error-Based Evaluation

The first test concerns the evaluation of the recovered signal quality through the MSE. The MSE of the estimate \(\hat{{\varvec{x}}}\) with respect to \({\varvec{x}} \) is defined as \({\text {MSE}}(\hat{{\varvec{x}}})=\mathbb {E}\big [(\hat{{\varvec{x}}}-{\varvec{x}})^2\big ]\). For a reliable study, the MSE results are averaged over the MC runs for each SNR value.
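A sketch of the averaging procedure is given below; `recover` stands for any of the five algorithms and its signature is a placeholder, and the amplitudes are redrawn at each run as in the description above.

```python
import numpy as np

def average_mse(recover, H, impulse_idx, snr_db, n_runs=1000, seed=0):
    """Average MSE of a recovery routine over MC runs, with fresh noise
    and amplitudes redrawn uniformly in [0.1, 1.1] at each run."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_runs):
        x = np.zeros(H.shape[1])
        x[impulse_idx] = rng.uniform(0.1, 1.1, len(impulse_idx))
        y_clean = H @ x
        sigma = np.sqrt(np.mean(y_clean ** 2) / 10 ** (snr_db / 10))
        y = y_clean + sigma * rng.standard_normal(len(y_clean))
        x_hat = recover(H, y)                  # placeholder recovery routine
        total += np.mean((x_hat - x) ** 2)
    return total / n_runs
```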

Figure 7 describes the MSE variation of each algorithm over the SNR range. In general, the MSE of the five algorithms decreases as the SNR increases, owing to the lower noise effect on the observed data \({\varvec{y}} \). In this simulation, OLAV performs efficiently, especially for SNR values above 12 dB. For an SNR between 0 and 12 dB, MMP performs slightly better thanks to its strategy of finding the best sparse support through a combinatorial tree search. For lower SNR values, the OLAV algorithm struggles to recover the sparse signal and performs similarly to the other algorithms. It should be noted that the difficulties encountered by OMP, OLS, FBP, and MMP are principally linked to the small distances between impulses rather than to the noise level. Overall, OLAV is better suited to the recovery of very close peaks.

4.2.2 Sparse Signal Distribution-Based Evaluation

The second test analyses the performances of the five algorithms more thoroughly by displaying the number of true impulses as well as the false/missing detections occurring around the actual impulses. The distribution of the data values is obtained by averaging the histograms of the reconstructed signals. Hence, this test is a suitable tool to analyse the number of erroneous detections for each algorithm.
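The accumulation can be sketched as follows (the exact averaging behind Fig. 8 may differ; here the reconstructed sparse vectors are simply accumulated sample-wise, so that bar heights reflect detection frequency):

```python
import numpy as np

def averaged_histogram(recover, H, impulse_idx, snr_db, n_runs=1000, seed=1):
    """Accumulate the reconstructed sparse signals over MC runs; the bar
    at each index reflects how often (and how strongly) it is detected."""
    rng = np.random.default_rng(seed)
    hist = np.zeros(H.shape[1])
    for _ in range(n_runs):
        x = np.zeros(H.shape[1])
        x[impulse_idx] = rng.uniform(0.1, 1.1, len(impulse_idx))
        y_clean = H @ x
        sigma = np.sqrt(np.mean(y_clean ** 2) / 10 ** (snr_db / 10))
        y = y_clean + sigma * rng.standard_normal(len(y_clean))
        hist += np.abs(recover(H, y))          # placeholder recovery routine
    return hist / n_runs
```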

Fig. 7 The effect of varying the SNR from \(-10\) to 30 dB over the MC runs on the MSE

Figure 8 presents the averaged reconstructed signal distribution for each SNR value and each algorithm. On the whole, the histogram is nearly the same for the OMP, OLS, and FBP algorithms, with more missing/false detections at higher SNR. In contrast, OLAV and MMP have fewer missing/false detections, which occur at lower SNR. The false detections are not very far from the true impulse locations, which is acceptable for some applications. The heights of the bars in the averaged histograms are related to the number of times the impulses are detected. Hence, the results show that the distribution of missing detections is wider for the first pair of close impulses. This is related more to the opposite sign of the last close peaks and to the waveform of the IRF than to the algorithms' selection criteria. In summary, OLAV yields fewer missing/false detections and slightly surpasses MMP, especially for the first close impulses.

Fig. 8 The effect of varying the SNR from 1 to 30 dB on the sparse signal distribution

5 Application to Experimental Signals

Motivated by the good performance of OLAV in various i.i.d. noisy environments, it is natural to investigate its application to experimental data. Thus, OLAV is applied to real-life PCG signals, in a comparative study with OMP and OLS, to restore almost periodic random sparse impulses occurring at very close delays.

5.1 Problem Formulation

For a healthy subject, two heart sounds, known as S1 and S2, can be found in PCG signals. The first heart sound S1 results from the closure of the mitral valve (M1) followed closely by the closure of the tricuspid valve (T1) at each cardiac cycle. The same process is repeated for the second heart sound S2 with the aortic valve (A2) and the pulmonary valve (P2). Since the heart valves close asynchronously, the time split between them is critical for diagnosing some pathologies (< 30 ms for normal cases). Hence, the objective of this experiment is the accurate detection of the time split between the heart valve closure instants. Modeling the PCG as the convolution between the valve closure impacts and low-frequency IRFs [5], the PCG mathematical model is expressed as follows:

$$\begin{aligned} y(t) &= \sum _{i,n} a_{i,n}\,\delta (t-\mu _{i,n}-nT) * \mathrm{e}^{-\frac{t^2}{2\sigma _i}} \cos ( 2\pi f_i t-\varphi _{i} ) \\ &= \sum _{i,n} a_{i,n}\, \mathrm{e}^{-\frac{(t-\mu _{i,n}-nT)^2}{2\sigma _i}} \cos ( 2\pi f_i (t-\mu _{i,n}-nT)-\varphi _{i} ) \end{aligned}$$
(8)

where i and n denote respectively the index of the impact produced by each valve closure (M1, T1, A2, and P2) and the cardiac cycle index, \(\delta \) stands for the Dirac distribution, \(a_{i,n}\) is the normally distributed random amplitude, and \(\mu _{i,n}\) and T correspond respectively to the instants of the heart valve closures in each cycle and to the cardiac cycle duration. Furthermore, the Gaussian kernel shape of each IRF is controlled by \(f_i\), \(\sigma _i\), and \(\varphi _{i}\).
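To make the model concrete, a sketch synthesizing one cardiac cycle according to Eq. (8) is given below; the parameter values are illustrative and are not those of Table 1.

```python
import numpy as np

fs = 4000.0                                   # stethoscope sampling frequency (Hz)
T = 0.8                                       # cardiac cycle duration (s), assumption
t = np.arange(0, T, 1 / fs)                   # one cardiac cycle (n = 0)

# Per valve: (amplitude a_i, closure instant mu_i (s), sigma_i, f_i (Hz), phi_i)
valves = {"M1": (1.0, 0.010, 1e-4, 50.0, 0.0),   # illustrative values
          "T1": (0.8, 0.035, 1e-4, 45.0, 0.0),
          "A2": (0.9, 0.310, 8e-5, 70.0, 0.0),
          "P2": (0.7, 0.330, 8e-5, 65.0, 0.0)}

y = np.zeros_like(t)
for a, mu, sigma, f, phi in valves.values():
    tau = t - mu                              # impact delayed to its closure instant
    y += a * np.exp(-tau ** 2 / (2 * sigma)) * np.cos(2 * np.pi * f * tau - phi)
```

The M1-T1 and A2-P2 splits chosen above (25 ms and 20 ms) stay below the 30 ms normal threshold mentioned earlier.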

5.2 Experimental Deconvolution Results

For further investigation, the proposed algorithm OLAV, together with OMP and OLS, is applied to a PCG database collected from a clinical trial in hospitals to detect valve closure instants. This database includes two datasets published in the Classifying Heart Sounds Pascal Challenge [1]. The real-life PCG signals chosen for this experiment were recorded using the Littmann Model 3100 digital stethoscope at a sampling frequency of 4000 Hz. No information regarding the gender, age, or condition of the subjects is available.

To apply the three algorithms and successfully restore the heart valve closure impacts, suitable dictionaries must be generated from the low-frequency IRFs. In this study, the three main parameters controlling the Gaussian kernel are manually estimated to match the shape of each IRF. The parameter values for each IRF and each signal are listed in Table 1. It should be noted that a valid estimation of the IRF is critical to provide a suitable dictionary and to reach better performance.

Table 1 The estimated Gaussian kernel parameter values for each IRF and each PCG signal

Overall, the deconvolution results of the three algorithms, presented in Fig. 9, are interesting, with a slight advantage for the proposed algorithm. The resulting sparse signals reveal the time split between impacts with precision and accuracy, despite some minor erroneous impact detections. These erroneous detections can be explained by the fact that the IRFs slightly change shape from one cycle to another, and perfect results would require perfect IRF estimation. It is very difficult to draw conclusions from experimental signals without knowing the exact impact positions before convolution. However, the OLAV algorithm seems to produce more convincing results, since the time differences between the restored impacts are closer to those estimated visually.

Fig. 9 The restored heart valve closure impacts with each algorithm. a First PCG signal. b Second PCG signal

6 Conclusion

This paper presents the OLAV algorithm for sparse signal recovery. The particularity of this algorithm lies in its selection criterion, which minimizes the absolute value of the error between the signal and its approximation. This criterion allows the algorithm to perform efficiently in close-peak deconvolution, as confirmed by the tracking of the different selection criteria. Unlike OLAV, the OMP and OLS selection criteria do not choose the right first atoms in the studied case, which generates erroneous detections that spread through subsequent iterations. Further computer simulations in the close-peak deconvolution framework demonstrated the effectiveness of the proposed algorithm. To push the comparative study further, the three greedy algorithms were applied to real-life PCG signals to recover heart valve closure impacts occurring at very close delays. Future work will focus on enhancing the performance of OLAV in noisy environments and more complex situations.