1 Introduction

Electrocardiograms (ECG) are signals that originate from the electrical activity of the human heart. The ECG is the graphical representation of the potential difference between two points on the body surface versus time. It uses information from the generation and propagation of electrical signals in the heart to derive clinically relevant parameters such as time intervals between waves, the duration of each wave or composite waveform, and peak amplitudes.

1.1 Arrhythmia

Arrhythmia (ah-RITH-me-ah) is a problem with the rate or rhythm of the heartbeat. During arrhythmia, the heart can beat too fast, too slow, or with an irregular rhythm. Most arrhythmias are harmless, but some can be serious or even fatal. During an arrhythmic attack, the heart may not be able to pump enough blood to the body. Lack of blood flow can damage the brain, heart, and other organs.

In this paper, compression results for ECG signals are presented using a time-frequency transformation, the Discrete Wavelet Transform (DWT). Signal samples are transformed into groups of transform coefficients. Coefficients whose magnitude falls below a determined threshold are rounded to zero, and the inverse transform then produces a signal similar to the original. Using a run-length coder, each run of consecutive zero-valued coefficients can be replaced by a single value that records the run length. In this way only a small number of coefficients is stored and compression is achieved. Depending on the transform used, different numbers of coefficients at different positions are rounded to zero; hence the reconstructed signal turns out to be more or less similar to the original.
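
A minimal sketch of this threshold-plus-run-length scheme, assuming the PyWavelets library; the relative threshold `thr_frac` and the `(value, run_length)` pair encoding are illustrative choices, not the paper's exact coder:

```python
import numpy as np
import pywt  # PyWavelets

def compress(x, wavelet="db2", level=4, thr_frac=0.05):
    """Zero out small DWT coefficients, then run-length encode zero runs."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    lengths = [len(c) for c in coeffs]
    flat = np.concatenate(coeffs)
    thr = thr_frac * np.abs(flat).max()     # threshold relative to the peak coefficient
    flat[np.abs(flat) < thr] = 0.0          # round small coefficients to zero

    encoded, i = [], 0
    while i < len(flat):
        if flat[i] == 0.0:
            j = i
            while j < len(flat) and flat[j] == 0.0:
                j += 1
            encoded.append((0.0, j - i))    # one value records the whole zero run
            i = j
        else:
            encoded.append((flat[i], 1))
            i += 1
    return encoded, lengths

def decompress(encoded, lengths, wavelet="db2"):
    flat = np.concatenate([np.full(n, v) for v, n in encoded])
    coeffs = np.split(flat, np.cumsum(lengths)[:-1])
    return pywt.waverec(coeffs, wavelet)
```

Raising `thr_frac` trades reconstruction fidelity for longer zero runs and hence a higher compression ratio.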

1.2 MIT-BIH Arrhythmia Database Directory

The source of the ECG recordings included in the MIT-BIH Arrhythmia Database is a set of over 4000 long-term Holter recordings obtained by the Beth Israel Hospital Arrhythmia Laboratory between 1975 and 1979. Approximately 60% of these recordings were obtained from inpatients.

2 Objectives and Scope

The development of the electrocardiogram was the culmination of scientific effort aimed at perfecting a device conceived for the elucidation of a physiological phenomenon. The morphology of the ECG signal is used to recognize variability in heart activity, so it is very important to obtain the parameters of the ECG signal free of noise. In order to support clinical decision making, the ECG signal presented to a reasoning tool must be clearly represented and filtered to remove all noise and artifacts [2]. The ECG is a biosignal that is considered non-stationary, and denoising it requires considerable effort.

An efficient technique for processing such non-stationary signals is the wavelet transform, which decomposes a signal in the time-frequency (scale) plane. Several methods to enhance the ECG signal have been developed. The most widely used is the least mean square (LMS) adaptive algorithm. However, this algorithm is not able to track rapidly varying non-stationary signals such as the ECG within each heartbeat, which causes excessive low-pass bias of main features such as the QRS complex. We also use a biorthogonal wavelet transform for ECG signal compression and estimation of the main ECG parameters.
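
For reference, a minimal sketch of the classic LMS noise canceller (assuming a separate noise-reference input; the filter length and step size are illustrative). The fixed step size `mu` is precisely what limits how fast the filter can track changes within a single heartbeat:

```python
import numpy as np

def lms_filter(d, x, n_taps=8, mu=0.01):
    """LMS adaptive noise canceller: d is the primary input (signal + noise),
    x is a reference correlated with the noise; the error e is the enhanced
    signal. The first n_taps output samples are left at zero."""
    w = np.zeros(n_taps)
    e = np.zeros(len(d))
    for n in range(n_taps, len(d)):
        x_vec = x[n - n_taps:n][::-1]   # most recent reference samples first
        y = w @ x_vec                   # current noise estimate
        e[n] = d[n] - y                 # error = cleaned sample
        w += 2 * mu * e[n] * x_vec      # stochastic gradient weight update
    return e
```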

2.1 ECG Recordings

Based on conventional criteria for the amplitude and duration of the P-wave, statements are derived regarding left atrial overload, right atrial overload, and prolonged atrial conduction (Fig. 1).

Fig. 1 PQRS time interval

Each resting ECG record consists of a 1024-byte header and 10 s of raw ECG data. The header is used for acquisition and storage of demographic data, including data on the acquisition device, which may be needed for constructing a Standard Communications Protocol file (SCP file). The raw data should be digitized at 500 samples per second with an amplitude quantization of 1.0 to 5 µV per LSB.
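
A hypothetical reader for such a record, shown only to make the layout concrete; the 16-bit sample width, the 2.5 µV/LSB value, and the single-lead layout are assumptions, since the text only specifies the header size, sampling rate, and quantization range:

```python
import numpy as np

HEADER_BYTES = 1024   # demographic/device header (internal format not detailed here)
FS = 500              # samples per second
LSB_UV = 2.5          # assumed quantization step in µV per LSB (spec: 1.0 to 5 µV)

def read_resting_ecg(path):
    """Skip the 1024-byte header and scale raw integer samples to millivolts.
    A 10 s single-lead record should yield FS * 10 = 5000 samples."""
    raw = np.fromfile(path, dtype=np.int16, offset=HEADER_BYTES)
    return raw.astype(np.float64) * LSB_UV / 1000.0   # µV -> mV
```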

2.2 QRS-T-Evaluation

An adaptive threshold, related to the maximum and mean values of the signal, is used to find the points above this value. The R peaks are then selected and stored in a parameter data vector. The R-R intervals (R-R distances) follow directly. The Q and S points are the local minima immediately before and after an R peak. The area of the QRS complex can be calculated from the Q-S duration and the value of the R peak. Our test signals contained rhythms from normal, healthy hearts as well as signals with abnormalities, in order to find the main parameters. The databases used were the MIT-BIH Atrial Fibrillation Database (AFDB), the MIT-BIH Supraventricular Arrhythmia Database (SVDB), the MIT-BIH Arrhythmia Database (MITDB), and the MIT-BIH Normal Sinus Rhythm Database (NSRDB). Over 100 signals were used. The evaluation of this procedure was carried out using the following indicators:

$$\begin{aligned} \text{Sensitivity (S)}&= \mathrm{TP/(TP+FN)}\\ \text{Positive Predictability (P)}&= \mathrm{TP/(TP+FP)} \end{aligned}$$

where TP is the number of true positive detections, FN is the number of false negative misdetections, and FP is the number of false positive misdetections. The results, obtained by manual comparison, are summarized in Table 1 (around 60 beats/signal in the self-created database); the number of misdetections was noted and the indicators calculated.

Table 1 PRT detection
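
A sketch of the adaptive-threshold detection and the two indicators, assuming NumPy; the 0.5 threshold weight, the 200 ms refractory window, and the 100 ms search window are illustrative choices, not the paper's tuned values:

```python
import numpy as np

def detect_r_peaks(ecg, fs=500):
    """Threshold tied to the maximum and mean of the signal, as described
    above; candidate points above it are reduced to one R peak per beat."""
    thr = np.mean(ecg) + 0.5 * (np.max(ecg) - np.mean(ecg))
    above = np.where(ecg > thr)[0]
    peaks, last = [], -fs
    for i in above:
        if i - last > 0.2 * fs:                      # refractory period ~200 ms
            seg_end = min(i + int(0.1 * fs), len(ecg))
            peaks.append(i + int(np.argmax(ecg[i:seg_end])))  # local maximum
            last = peaks[-1]
    return np.array(peaks)

def s_and_p(tp, fn, fp):
    """Sensitivity and positive predictability as defined in the text."""
    return tp / (tp + fn), tp / (tp + fp)
```

The Q and S points can then be located as the local minima in small windows immediately before and after each detected R peak.
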
Fig. 2 Wavelet function

3 Discrete Wavelet Transformation

A wavelet is simply a small wave whose energy is concentrated in time, giving a tool for the analysis of transient, non-stationary, or time-varying phenomena such as the wave shown in Fig. 2.

A signal f(t), such as the one shown in Fig. 2, can often be better analyzed and expressed as a linear decomposition: a sum of products of coefficients and basis functions. The set of coefficients is called the Discrete Wavelet Transform (DWT) of f(t). In the wavelet transform, the original signal (1-D, 2-D, or 3-D) is transformed using predefined wavelets [3]. The wavelets can be orthogonal, orthonormal, or biorthogonal, and scalar or multiwavelets.
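
For concreteness, this decomposition can be written in the standard multiresolution form (the notation below is ours, not taken from the original):

$$f(t) = \sum _{k} c_{J,k}\,\varphi _{J,k}(t) + \sum _{j=1}^{J}\sum _{k} d_{j,k}\,\psi _{j,k}(t)$$

where \(\varphi \) is the scaling function, \(\psi \) the mother wavelet, J the number of decomposition levels, and the approximation and detail coefficients \(c_{J,k}\) and \(d_{j,k}\) together constitute the DWT of f(t).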

3.1 1-D DWT of ECG

The most popular scheme to implement the DWT is the convolution-based FIR structure introduced by Mallat. For a one-dimensional signal, each convolution-based 1-D DWT stage can be realized using a pair of filters. In the decomposition stage, the original signal x of length L is passed through a low-pass filter (LPF) l and a high-pass filter (HPF) h, called the analysis (decomposition) filter pair [l, h]. The two filtered outputs are then downsampled by 2 to obtain the two subbands \(L_{1}\)[x] and \(H_{1}\)[x]. In the reconstruction phase, \(L_{1}\)[x] and \(H_{1}\)[x] are upsampled by 2 and passed through another pair of filters, the synthesis (reconstruction) filter pair \([\tilde{l}, \tilde{h}]\), to reconstruct the signal x.
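
A minimal sketch of this analysis/synthesis pair, assuming the PyWavelets library; the biorthogonal wavelet `bior2.2` and the frame length are illustrative:

```python
import numpy as np
import pywt  # PyWavelets

x = np.random.randn(512)               # stand-in for one ECG frame

# Analysis: low-pass/high-pass filtering followed by downsampling by 2.
cA, cD = pywt.dwt(x, "bior2.2")        # the subbands L1[x] and H1[x]

# The four filters of the analysis pair [l, h] and synthesis pair [l~, h~].
dec_lo, dec_hi, rec_lo, rec_hi = pywt.Wavelet("bior2.2").filter_bank

# Synthesis: upsampling by 2 followed by the reconstruction filter pair.
x_rec = pywt.idwt(cA, cD, "bior2.2")

print(np.allclose(x, x_rec[:len(x)]))  # perfect reconstruction -> True
```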

4 Analyses

We carried out a retrospective analysis of ECG signals recorded during out-of-hospital treatment of adult patients. Four parameters, centroid frequency (FC), peak power frequency (FP), average segment amplitude (SA), and average wave amplitude (WA), were extracted from the recorded ECG signal immediately before each countershock and compared with the countershock outcome.
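
The paper does not give its exact estimators, so as an illustration, a sketch of the two frequency-domain parameters under their common definitions (FC as the power-weighted mean frequency, FP as the location of the spectral peak), using Welch's method from SciPy:

```python
import numpy as np
from scipy.signal import welch

def spectral_params(ecg, fs=500):
    """Estimate the power spectrum, then derive FC and FP from it."""
    f, pxx = welch(ecg, fs=fs, nperseg=min(len(ecg), 1024))
    fc = np.sum(f * pxx) / np.sum(pxx)   # centroid frequency (FC)
    fp = f[np.argmax(pxx)]               # peak power frequency (FP)
    return fc, fp
```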

4.1 Clustering

An additional first stage of pre-clustering has been used to remove redundant information: a heartbeat is considered redundant if its dissimilarity measure with respect to the other beats falls below a threshold, i.e., if it is already well represented by another beat.

4.2 Orientation Tree

A tree structure, called a “temporal orientation tree”, defines the temporal relationship in the wavelet domain. Every point in layer i corresponds to two points in the next layer (i+1), the two sharing a parent-offspring relationship. This definition is analogous to that of spatial orientation trees: each node has either no offspring or two offspring. In a typical 1-D signal, most of the energy is concentrated in the low-frequency bands, so the coefficients are expected to be better magnitude-ordered as we move down the temporal orientation tree toward the leaves (terminal nodes).
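
A sketch of one common index convention for this parent-offspring relationship (1-D SPIHT-style indexing within the detail bands; the paper's exact convention is not specified):

```python
def offspring(i):
    """Children of coefficient i in layer j are coefficients 2i and 2i+1
    in layer j+1 (assumed SPIHT-style indexing)."""
    return 2 * i, 2 * i + 1

def descendants(i, depth):
    """All descendants of node i down to `depth` layers below it."""
    layer, out = [i], []
    for _ in range(depth):
        layer = [c for n in layer for c in offspring(n)]
        out.extend(layer)
    return out

print(offspring(3))       # (6, 7)
print(descendants(3, 2))  # [6, 7, 12, 13, 14, 15]
```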

In digital signal processing, the fast forward and inverse wavelet transforms are implemented as tree-structured, perfect-reconstruction filter banks [4]. The input signal is divided into contiguous, non-overlapping blocks of samples called frames and is transformed frame by frame in the forward transform. Within each frame, the input signal is filtered by the analysis filter pair to generate low-pass and high-pass signals, which are then downsampled by a factor of two. The analysis filter pair is then applied recursively to the downsampled low-pass signal to generate the layered wavelet coefficients at successively coarser resolutions.

The number of layers determines the coarsest frequency resolution of the transform and should be at least four for adequate compression [5].
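
This recursion maps directly onto a multilevel DWT; a sketch with the four layers suggested above, again assuming PyWavelets (frame length and wavelet are illustrative):

```python
import numpy as np
import pywt

frame = np.random.randn(1024)                 # one non-overlapping frame

# Recursive analysis: only the low-pass branch is split again at each layer.
coeffs = pywt.wavedec(frame, "db2", level=4)  # [cA4, cD4, cD3, cD2, cD1]
for name, c in zip(["cA4", "cD4", "cD3", "cD2", "cD1"], coeffs):
    print(name, len(c))                       # lengths roughly halve per layer
```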

5 Results and Discussions

The frame size of the ECG signal, M, has to be at least \(2^{n+1}\) samples, where n is the number of levels in the wavelet transform.

For each patient, 12 leads (channels) with 6 s of ECG signal per lead were recorded in a database. Each signal is then decomposed using the parameters mentioned above, and threshold analysis is performed. For the selected parameters, after reconstruction of the signal in each lead, the compression ratio, maximal absolute difference, and PRD are calculated, as shown in Table 2 for the DWT with decomposition at the 4th level using the Daubechies 2 (db2) wavelet function.
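
Here PRD denotes the percentage root-mean-square difference between the original signal x and its reconstruction \(\tilde{x}\); in its common un-normalized form (the paper may use a mean-subtracted variant):

$$\mathrm{PRD} = 100 \times \sqrt{\frac{\sum _{n}\left(x(n)-\tilde{x}(n)\right)^{2}}{\sum _{n}x(n)^{2}}}$$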

Table 2 PRD maximal absolute difference (Max D) and number of zeroes (NZ) for a patient with AIM

The last row of Table 2 shows the maximal values of PRD and Max D for that patient, because that is the value of compression obtained if the compressed coefficients of all leads are stored in the same file. These values will be called the statistical values of compression for the selected parameters [6].

The reconstruction results are entered in a new table with four additional columns: heart disease id, patient id, wavelet function, and level of decomposition. For each combination of patient, wavelet function, and level of decomposition, a new row is created. There were 10 (patients) × 3 (levels of decomposition) × 6 (wavelet functions) = 180 rows in the table, which represents compression results for 12 (leads) × 180 = 2160 ECG signals.

Table 3 Compression results of ECG signals obtained using DWT and wavelet packet (WP), by wavelet function and by level of decomposition
Table 4 Mean compression results and mean PRD of ECG signals obtained using an M-channel cosine-modulated filter bank

Two tables of the described format are created: one for the DWT and another for the WP. From these large tables, regardless of patient, the average compression ratios per combination of wavelet function and level of decomposition are calculated and shown in Table 3, with the corresponding maximal PRD in Table 4. It is very important to note that the original and reconstructed signals for each lead (ECG signal) had to be visually checked for distortion.

6 Conclusions and Enhancements

The presented method is based on a new experimental threshold for the wavelet transform coefficients. This threshold value is determined experimentally by iteratively minimizing the error between the denoised wavelet subsignals and the original noise-free subsignals. The algorithm performs quite well in terms of both numerical and visual distortion. The results confirm that the biorthogonal wavelets tested performed better than the orthogonal wavelets. The DWT wavelet coder appears to be superior to the other wavelet-based ECG coders that have appeared in the literature, with the advantage that no domain-specific knowledge or techniques were used in the algorithm. In general, WP achieves better compression than the DWT.