Introduction

Electroencephalogram (EEG) analysis has become a standard diagnostic tool for several neurological disorders and for the assessment of cognitive function. Ocular artifacts are the most important form of interference in EEG signals. When the eyeball moves, an electrical field is induced by changes in the orientation of the cornea and retina. This electrical field propagates across the scalp, producing artifactual activity in the EEG recording. An electrooculogram (EOG) directly measures the cornea–retina potential with electrodes placed around the eyes. Two electrodes are placed to the right and left of the outer canthi to detect horizontal eye movements, and another pair is placed above and below one of the eyes to detect vertical movements.

Linear regression analysis is commonly used to reduce ocular activity in EEG signals. In this method, the relationships between the EOG signal and the EEG channels are expressed by transfer coefficients derived in the time domain9,14 or frequency domain.14,22 A corrupted segment of the EEG is corrected by subtracting the corresponding transferred component of the EOG. In 2004, He et al.10 employed adaptive filtering to cancel ocular artifacts in the EEG. In their approach, the horizontal and vertical EOGs are used as reference signals to construct individual finite impulse response filters, and an uncorrupted EEG is recovered by subtracting the filtered outputs. The mean squared value of the recovered EEG is used to adjust the filter coefficients, ensuring that the reference signals provide the best possible fit to the ocular activity in the EEG signal. He et al.’s adaptive filtering method is more accurate than the time-domain regression method.11 However, both methods share a limitation: the need for an additional EOG recording.

An alternative approach known as blind source separation (BSS) has proven useful in reducing ocular artifacts.12,13,16 BSS assumes that the EEG signals are a linear mixture of neural and ocular activities. Through independent component analysis (ICA), multiple EEG channels are decomposed into an equal number of source components (SCs). Ocular-related SCs (hereafter “ocular SCs”) can generally be identified by visual inspection of the SCs and of their projected strengths on the scalp,12,13 or by automatic identification.1,7,19 After discarding the ocular SCs, the uncorrupted EEG signal can be restored by remixing the remaining SCs. Compared with time-domain regression and adaptive filtering, the BSS method yields lower errors in the spectral content of the restored EEG.20

Collecting EOG data during a long-term EEG recording is neither convenient nor comfortable for the subject, so the BSS method enjoys an important practical advantage. This article describes a novel method of removing ocular artifacts from EEGs based on both blind source separation and adaptive filtering, which avoids the need for parallel EOG recordings. First, ocular SCs are identified from an ICA decomposition of the multichannel EEG signals, as in the BSS method. Next, the ocular SCs are processed to remove high-frequency components (by the wavelet shrinkage technique) and low-amplitude excursions (by imposing an adaptive threshold). The resulting SCs consist almost entirely of ocular-artifact variations. They are then used as reference signals for an adaptive filter whose parameters are optimized by the recursive least squares method. Finally, the adaptive filter is used to remove ocular artifacts from the EEG signal. The proposed method was tested on simulated EEG signals derived from eight subjects, and its performance was compared with the standard BSS method and with adaptive filtering using the EOG.

Methods

Data Collection

Recordings of 17-channel EEG and 2-channel EOG were obtained from eight young subjects during repetitive unimanual finger movements. The participants looked at the center of a screen and pressed the specified keys on a keyboard with their right-hand index, middle, and ring fingers when the digits “2”, “3”, or “4” were presented, with an inter-stimulus interval of 2000 ms.17 The EEG and EOG data were acquired at a sampling rate of 1000 Hz from scalp electrodes placed according to the international 10/20 system: Fz, FCz, Cz, Pz, Oz, F3, FC3, C3, T7, P3, O1, F4, FC4, C4, T8, P4, O2, Veog, and Heog. All channels were referenced to the averaged mastoids (BrainAmp, Gilching, Germany). The EEG and EOG signals were filtered by a noncausal, fourth-order Butterworth filter with a 0.5–30 Hz passband. All data processing tasks, including independent component analysis, wavelet shrinkage, adaptive filtering, and blind source separation, were performed using MATLAB 7.5 (The MathWorks, Natick, MA, USA). The independent component analysis and the blind source separation were performed using functions included in EEGLAB (http://www.sccn.ucsd.edu/eeglab/).6
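For readers without access to the original MATLAB pipeline, the zero-phase bandpass step can be sketched in Python with SciPy as follows. The function name and the use of filtfilt are our assumptions; filtfilt applies the fourth-order Butterworth filter forward and backward, which cancels phase distortion but doubles the effective magnitude-response order.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_eeg(data, fs=1000.0, low=0.5, high=30.0, order=4):
    """Noncausal (zero-phase) Butterworth bandpass, 0.5-30 Hz at fs = 1000 Hz.

    data: array of shape (n_channels, n_samples).
    """
    b, a = butter(order, [low, high], btype='bandpass', fs=fs)
    # filtfilt runs the filter forward and backward, removing phase distortion
    return filtfilt(b, a, data, axis=-1)
```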

Independent Component Analysis

The method of independent component analysis5 (ICA) is used to decompose multichannel EEG signals \( x_{1}(n),\ldots,x_{M}(n) \) into multiple sources \( s_{1}(n),\ldots,s_{M}(n) \). These sources include neural activity, disturbances from eye movements, muscle activations, heartbeats, and so on.12,13 This application assumes that the EEG signals are a linear mixture of the sources:

$$ {\mathbf{x}} = {\mathbf{As}} $$
(1)

where \( {\mathbf{x}} = [x_{1}(n) \ldots x_{M}(n)]^{\text{T}} \), \( {\mathbf{s}} = [s_{1}(n) \ldots s_{M}(n)]^{\text{T}} \), and A is an M-by-M mixing matrix. ICA finds an unmixing matrix W that decomposes the EEG signals into M source components \( u_{1}(n),\ldots,u_{M}(n) \):

$$ {\mathbf{u}} = {\mathbf{Wx}} $$
(2)

where \( {\mathbf{u}} = [u_{1}(n) \ldots u_{M}(n)]^{\text{T}} \). Note that the actual sources s are theoretical; they are not the same as the source components u inferred by ICA.

In this study, ICA was performed in two ways: by second-order blind identification (SOBI)3 and by higher-order statistics with information maximization (INFOMAX).2 SOBI is based on the joint diagonalization of time-lagged covariance matrices of the whitened data. The analyzed data are first pre-whitened by singular value decomposition of the zero-lag covariance matrix of x:

$$ {\mathbf{z}} = {\mathbf{D}}^{ - 1/2} {\mathbf{E}}^{\text{T}} {\mathbf{x}} $$
(3)

where \( {\mathbf{z}} = [z_{1}(n) \ldots z_{M}(n)]^{\text{T}} \) is the whitened data, D is the diagonal matrix of the M eigenvalues \( \lambda_{1},\ldots,\lambda_{M} \), and E contains the corresponding eigenvectors. The time-lagged covariance matrices of the whitened data, \( {\mathbf{R}}(\tau_{1}),\ldots,{\mathbf{R}}(\tau_{K}) \), are then computed for a fixed set of time lags \( \tau_{1},\ldots,\tau_{K} \). A separating matrix S is obtained as a joint diagonalizer of the set \( \{ {\mathbf{R}}(\tau_{j}) \,|\, j = 1,\ldots,K \} \). The M SCs are thus estimated by

$$ {\mathbf{u}} = {\mathbf{Sz}} = {\mathbf{SD}}^{ - 1/2} {\mathbf{E}}^{\text{T}} {\mathbf{x}} $$
(4)

where the unmixing matrix W is equal to \( {\mathbf{SD}}^{-1/2}{\mathbf{E}}^{\text{T}} \).
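The pre-whitening of Eq. (3) and the lagged covariance matrices used by SOBI can be sketched in Python/NumPy as follows; the function names are illustrative, and the eigen-decomposition used here is equivalent to the SVD of the symmetric covariance matrix. The joint diagonalization itself, typically carried out by Jacobi-like rotations as in the EEGLAB implementation, is omitted.

```python
import numpy as np

def prewhiten(x):
    """Eq. (3): whiten zero-mean EEG data x (M x N) using the eigen-decomposition
    of the zero-lag covariance matrix C0 = E D E^T."""
    x = x - x.mean(axis=1, keepdims=True)
    C0 = (x @ x.T) / x.shape[1]
    eigvals, E = np.linalg.eigh(C0)
    Q = np.diag(eigvals ** -0.5) @ E.T          # D^{-1/2} E^T
    return Q @ x, Q                             # whitened data z and whitening matrix

def lagged_covariances(z, lags):
    """Time-lagged covariance matrices R(tau_1), ..., R(tau_K) of the whitened data."""
    N = z.shape[1]
    return [(z[:, :N - tau] @ z[:, tau:].T) / (N - tau) for tau in lags]
```

The separating matrix S is then obtained as an (approximate) joint diagonalizer of these matrices, and the unmixing matrix follows as W = SQ, as in Eq. (4).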

INFOMAX is based on maximum likelihood estimation. As with SOBI, the analyzed data are first pre-whitened and the SCs are obtained by u = Sz, where the separating matrix S is initialized to the identity matrix and updated at each step according to the currently computed SCs:

$$ {\mathbf{S}} = {\mathbf{S}} + \eta \left[ {{\mathbf{I}} + {\mathbf{g}}({\mathbf{u}}){\mathbf{u}}^{\text{T}} } \right]{\mathbf{S}} $$
(5)

The learning rate η is initially set to 0.00029 and annealed by a factor of 0.98 at each step. The elements of the nonlinearity vector \( {\mathbf{g}}({\mathbf{u}}) = [g(u_{1}) \ldots g(u_{M})]^{\text{T}} \) are set to \( \tanh(u) - u \) if the corresponding SCs are subgaussian (negative kurtosis, where \( \text{kurt}(u) = E\{u^{4}(n)\} - 3[E\{u^{2}(n)\}]^{2} \) and E denotes expectation), and to \( -2\tanh(u) \) for supergaussian SCs (positive kurtosis).
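The update rule of Eq. (5) can be transcribed directly into Python/NumPy as below. This is only an illustrative sketch: averaging the outer product over samples and the fixed iteration count are our assumptions, and the actual analysis used the EEGLAB implementation.

```python
import numpy as np

def infomax_separating_matrix(z, n_iter=100, eta=0.00029, anneal=0.98):
    """Estimate the separating matrix S from pre-whitened data z (M x N)
    by iterating the update rule of Eq. (5)."""
    M, N = z.shape
    S = np.eye(M)
    for _ in range(n_iter):
        u = S @ z                                        # current source components
        kurt = np.mean(u ** 4, axis=1) - 3 * np.mean(u ** 2, axis=1) ** 2
        g = np.where(kurt[:, None] < 0,                  # subgaussian components
                     np.tanh(u) - u,
                     -2.0 * np.tanh(u))                  # supergaussian components
        S = S + eta * (np.eye(M) + (g @ u.T) / N) @ S    # Eq. (5), outer product averaged over samples
        eta *= anneal                                    # anneal the learning rate
    return S
```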

Adaptive Filter Using the Reconstructed Ocular Source Components

Figure 1 provides the block diagram of the proposed method. In the training stage, ICA is conducted on the training EEG signals using the SOBI approach. The unmixing matrix derived in this step transforms the multichannel EEG into an equal number of SCs. Since the selected training epoch contains distinct ocular activity, some of the SCs display significant, ocular-like waveforms. These ocular SCs are denoted \( \tilde{u}_{1} \left( n \right), \ldots ,\tilde{u}_{P} \left( n \right) \), and contribute to the formation of ocular artifacts in the EEG.

Figure 1

Schematic of the proposed method for removing ocular artifacts with an adaptive filter based on ocular source components (SCs). In the training stage, independent component analysis is applied to an EEG epoch with distinct ocular artifacts, to determine the unmixing matrix and the SC channels exhibiting significant ocular activity. In the processing stage, the EEG data of subsequent segments are decomposed based on the same unmixing matrix. The ocular SCs are then filtered by the wavelet shrinkage technique, and used as references to construct an adaptive filter capable of canceling ocular artifacts in the EEG signals

In the processing stage, a complete sequence of EEG signals is decomposed using the same unmixing matrix. The ocular SCs serve as reference signals for an adaptive filter that removes the ocular interference. First, however, their high-frequency components and low-amplitude excursions are reduced by the wavelet shrinkage technique and by an adaptive threshold, respectively. The treated ocular SCs are then used as reference signals to construct the adaptive filter.

Ocular-Source Component Identification

Classical ocular-SC identification is performed by visual inspection of the SCs, aided by displays of their projected strengths on the scalp. Automatic ocular-SC identification methods based on kurtosis,7 entropy,1 relative delta power,19 and projected strength gradient19 have recently been used to avoid this subjective judgment. In this study, each SC is quantified by the following features (a sketch of the feature computation follows the list):

  1. signed square-root of kurtosis, defined as \( \text{sign}\{ \text{kurt}(u)\} \times |\text{kurt}(u)|^{1/2} \);

  2. entropy, given by \( -\sum\nolimits_{k} {p_{k} \log (p_{k} )} \), where \( p_{k} \) is the probability of the kth bin of the SC histogram;

  3. relative delta power, defined as the spectral power below 3 Hz divided by the power below 30 Hz;

  4. average of the sequential differences of the projected strengths over the frontal, central, parietal, and occipital areas.
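The first three features are computed from the SC itself; the fourth uses the SC’s projected strengths on the frontal, central, parietal, and occipital electrode groups. The following Python/NumPy sketch illustrates the computations; the histogram bin count and the use of Welch’s method for the spectral estimate are our own choices, not specified in the original work.

```python
import numpy as np
from scipy.signal import welch

def sc_features(u, proj, fs=1000.0, n_bins=50):
    """Feature vector of one source component u (1-D array).

    proj: projected strengths ordered frontal, central, parietal, occipital,
          e.g. mean absolute mixing weights over each electrode group (assumed).
    """
    # (1) signed square-root of kurtosis
    kurt = np.mean(u ** 4) - 3 * np.mean(u ** 2) ** 2
    f1 = np.sign(kurt) * np.sqrt(np.abs(kurt))
    # (2) entropy of the amplitude histogram
    counts, _ = np.histogram(u, bins=n_bins)
    p = counts[counts > 0] / counts.sum()
    f2 = -np.sum(p * np.log(p))
    # (3) relative delta power: power below 3 Hz over power below 30 Hz
    f, pxx = welch(u, fs=fs, nperseg=int(fs))
    f3 = pxx[f <= 3].sum() / pxx[f <= 30].sum()
    # (4) average sequential difference of projected strengths, front to back
    f4 = np.mean(np.diff(proj))
    return np.array([f1, f2, f3, f4])
```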

Eight 10-s EEG epochs with distinct ocular activity were collected from the eight subjects to build a global classifier for identifying ocular SCs. These epochs differ from the corruption epochs used to construct the simulated signals (described in a later section). ICA was applied to these epochs, and the derived SCs were labeled as ocular (\( \omega_{\text{o}} \)) or nonocular (\( \omega_{\text{n}} \)) by visual inspection. The class-conditional probability density of a feature vector x in class \( \omega_{i} \) is given by

$$ p({\mathbf{x}}|\omega_{i} ) = {\frac{1}{{\sqrt {2\pi |{\varvec{\Upsigma}}_{i} |} }}}\exp \left( { - \frac{1}{2}({\mathbf{x}} - {\mathbf{u}}_{i} )^{\text{T}} {\varvec{\Upsigma}}_{i}^{ - 1} ({\mathbf{x}} - {\mathbf{u}}_{i} )} \right) $$
(6)

where \( {\mathbf{u}}_{i} \) and \( {\varvec{\Upsigma}}_{i} \) are the mean vector and covariance matrix, computed directly from all feature vectors assigned to class \( \omega_{i} \). The discriminant function21 is thus obtained as

$$ g_{i} ({\mathbf{x}}) = \ln (p({\mathbf{x}}|\omega_{i} )p(\omega_{i} )) = - \frac{1}{2}({\mathbf{x}} - {\mathbf{u}}_{i} )^{\text{T}} {\varvec{\Upsigma}}_{i}^{ - 1} ({\mathbf{x}} - {\mathbf{u}}_{i} ) + c_{i} + \ln \,p(\omega_{i} ) $$
(7)

where \( c_{i} = -\tfrac{1}{2}(\ln 2\pi + \ln |{\varvec{\Upsigma}}_{i}|) \). The prior probability \( p(\omega_{i}) \) enters a two-class decision rule: x is assigned to \( \omega_{\text{o}} \) if \( g_{\text{o}}({\mathbf{x}}) > g_{\text{n}}({\mathbf{x}}) \), and to \( \omega_{\text{n}} \) otherwise. In addition, the confidence of an ocular-SC judgment can be expressed by the posterior probability

$$ p(\omega_{\text{o}} |{\mathbf{x}}) = {\frac{{p({\mathbf{x}}|\omega_{\text{o}} )p(\omega_{\text{o}} )}}{{p({\mathbf{x}})}}}\quad {\text{where}}\;p({\mathbf{x}}) = \sum\limits_{{i = {\text{o,n}}}} {p({\mathbf{x}}|\omega_{i} )p(\omega_{i} )} $$
(8)
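Equations (6)–(8) amount to a quadratic (Gaussian) discriminant classifier. A minimal Python/NumPy sketch is given below; the prior value shown is only a placeholder assumption (roughly the proportion of ocular SCs among the training SCs), and the function names are illustrative.

```python
import numpy as np

def class_params(X):
    """Mean vector and covariance matrix of the feature vectors X (n_samples x n_features)."""
    return X.mean(axis=0), np.cov(X, rowvar=False)

def discriminant(x, mean, cov, prior):
    """Quadratic discriminant g_i(x) of Eq. (7)."""
    d = x - mean
    return (-0.5 * d @ np.linalg.inv(cov) @ d
            - 0.5 * (np.log(2.0 * np.pi) + np.log(np.linalg.det(cov)))
            + np.log(prior))

def is_ocular(x, params_o, params_n, prior_o=0.07):
    """Classify x as ocular if g_o(x) > g_n(x); prior_o is a placeholder value."""
    g_o = discriminant(x, *params_o, prior_o)
    g_n = discriminant(x, *params_n, 1.0 - prior_o)
    return g_o > g_n
```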

Wavelet Shrinkage

Wavelet shrinkage, also known as wavelet denoising, is an effective technique for isolating noise components from electrocardiograms18 and EEGs15 while preserving the original signal, and it works well even when the noise spectrum overlaps with the signal spectrum. Wavelets with the biorthogonal property are usually used because they reduce the redundancy between coefficients in the wavelet domain. The ocular SCs described above contain not only characteristic waves related to eye movements but also waves that may originate from neural activity. To avoid the undesired cancellation of neural activity, the ocular SCs are therefore filtered by wavelet shrinkage before being used to construct the adaptive filter.

Figure 2 shows the decomposition and reconstruction of an ocular SC based on an eight-level discrete wavelet transform (DWT) with the Daubechies-9 wavelet. The ocular SC is decomposed into the coefficients \( a_{8}, d_{8}, d_{7}, d_{6}, d_{5}, d_{4}, d_{3}, d_{2}, \) and \( d_{1} \), corresponding to the bands ≤1.95, 1.95–3.91, 3.91–7.81, 7.81–15.63, 15.63–31.25, 31.25–62.5, 62.5–125, 125–250, and 250–500 Hz, respectively. The ocular SC can be completely reconstructed by summing the wavelet components \( A_{8}(n), D_{8}(n) \), etc., obtained from these coefficients. The approximate component \( A_{8}(n) \) is a smooth curve with the typical ocular structure. The \( D_{6}(n) \) and lower-level detail components are not closely related to the ocular artifact and so are excluded (this is the wavelet shrinkage step). The \( D_{8} \) and \( D_{7} \) components are retained but filtered by adaptive soft thresholding, where the thresholds are estimated by the max–min spread (MMS) sorting method.4

Figure 2

A single ocular source component (SC) is decomposed into several wavelet components. The approximate component \( A_{8} \) is a smooth curve displaying the main structure of the ocular artifact. The \( D_{6} \) and lower-level detail components are not closely related to ocular artifacts and can safely be excluded. The \( D_{8} \) and \( D_{7} \) components contain a mixture of ocular and neural activity; the low-amplitude excursions related to neural activity are reduced by imposing adaptive thresholds (dashed lines). The ocular SC reconstructed from \( A_{8} \), \( D_{8} \), and \( D_{7} \) is similar to the original ocular SC, except that high-frequency components and insignificant fluctuations have largely been eliminated. The ocular SC reconstructed by 7-level wavelet shrinkage contains not only the ocular structure but also a low-frequency component (below 3.91 Hz) during inter-ocular segments. Imposing 9-level wavelet shrinkage helps suppress this component, but the basic ocular structure cannot be covered by the approximate component \( A_{9} \) because of its limited bandwidth (≤0.98 Hz), and the remnant structures in the detail components are affected by soft thresholding

MMS sorting was originally used for neuronal spike detection.4 The detail component is divided into consecutive 400-ms bins with 50% overlap. The “signal intensity,” or MMS, in each bin is defined as the difference between the maximum and minimum values. All the MMSs of a given epoch are then sorted in ascending order. Those in the first 25% are expected to be less related to ocular artifacts, so the threshold is set to their average value. As shown in Fig. 2 (see the plots of \( D_{7} \) and \( D_{8} \), dashed lines), positive and negative excursions below this threshold are replaced with zero, and the threshold is subtracted from the others (a form of soft thresholding). In this way, the sharp characteristics of ocular artifacts are preserved. The ocular SC is reconstructed by adding \( A_{8} \) to the filtered \( D_{7} \) and \( D_{8} \) components. It is similar to the original ocular SC, but the high-frequency components and insignificant low-frequency components have been greatly reduced. This reconstructed ocular SC is used as a reference signal for adaptive filtering. When a 7-level DWT is used instead, the approximate component \( A_{7}(n) \) contains not only the ocular structure but also components below 3.91 Hz, producing a larger low-frequency component within corrupt-less segments of the reconstructed signal. Although 9-level wavelet shrinkage suppresses this component well, the basic ocular structure cannot be covered by the approximate component \( A_{9}(n) \) because of its limited bandwidth (≤0.98 Hz), and the remnant structures in the detail components are affected by soft thresholding.
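A Python sketch of this shrinkage step, using PyWavelets, is shown below. It reconstructs each wavelet component in the time domain, keeps A8 untouched, soft-thresholds D8 and D7 with the MMS-based threshold, and drops D6 and below; the function names and the substitution of PyWavelets for the original MATLAB wavelet routines are our assumptions.

```python
import numpy as np
import pywt

def mms_threshold(detail, fs=1000.0, bin_ms=400, keep_frac=0.25):
    """Adaptive threshold: average max-min spread (MMS) of the smallest 25% of
    400-ms bins with 50% overlap."""
    width = int(bin_ms / 1000.0 * fs)
    step = width // 2
    mms = np.sort([detail[i:i + width].max() - detail[i:i + width].min()
                   for i in range(0, len(detail) - width + 1, step)])
    return np.mean(mms[:max(1, int(keep_frac * len(mms)))])

def wavelet_component(coeffs, idx, wavelet, n):
    """Time-domain reconstruction of a single wavelet component (idx 0 -> A8, 1 -> D8, ...)."""
    sel = [c if j == idx else np.zeros_like(c) for j, c in enumerate(coeffs)]
    return pywt.waverec(sel, wavelet)[:n]

def shrink_ocular_sc(u, fs=1000.0, wavelet='db9'):
    """Keep A8, soft-threshold D8 and D7 with the MMS threshold, drop D6..D1."""
    coeffs = pywt.wavedec(u, wavelet, level=8)       # [cA8, cD8, cD7, ..., cD1]
    n = len(u)
    out = wavelet_component(coeffs, 0, wavelet, n)   # A8, kept untouched
    for idx in (1, 2):                               # D8 and D7
        d = wavelet_component(coeffs, idx, wavelet, n)
        out += pywt.threshold(d, mms_threshold(d, fs), mode='soft')
    return out                                       # D6 and lower levels are discarded
```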

Adaptive Filtering

Each EEG signal y(n) is assumed to contain both neural activity x(n) and ocular activity v(n). The reconstructed ocular SCs described above serve as reference signals \( r_{1}(n),\ldots,r_{P}(n) \) for an adaptive filter written as

$$ v_{j} (n) = \sum\limits_{k = 1}^{L} {h_{j} (k)r_{j} (n - k + 1)} $$
(9)

where \( h_{j}(k) \) is the kth coefficient of a finite impulse response filter of length L. The corrupted EEG y(n) is corrected by subtracting the sum of the filtered reference signals from the primary signal, yielding the corrected signal e(n):

$$ e(n) = y(n) - \sum\limits_{j = 1}^{P} {v_{j} (n)} $$
(10)

In order to obtain the best cancellation of ocular activity, the filter coefficients \( h_{j}(k) \) are adjusted by a recursive least squares method that minimizes the sum ε(n) of the weighted squared errors:

$$ \varepsilon (n) = e^{2} (n) + \lambda e^{2} (n - 1) + \cdots + \lambda^{n - L} e^{2} (L) $$
(11)

where 0 < λ ≤ 1 is a “forgetting factor” that gradually reduces the influence of previous errors.10
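A compact recursive least squares (RLS) sketch of Eqs. (9)–(11) is given below in Python/NumPy. The taps of all P reference signals are stacked into one regressor; the initialization constant of the inverse correlation matrix (delta) and the function name are assumptions on our part.

```python
import numpy as np

def rls_adaptive_filter(y, refs, L=2, lam=0.9995, delta=100.0):
    """Cancel ocular activity in one EEG channel y (length N) using the
    reconstructed ocular SCs in refs (P x N) as references."""
    P, N = refs.shape
    padded = np.hstack([np.zeros((P, L - 1)), refs])   # so r_j(n - k + 1) exists for small n
    M = P * L
    w = np.zeros(M)                                    # stacked coefficients h_j(k)
    Pinv = delta * np.eye(M)                           # inverse weighted correlation matrix
    e = np.empty(N)
    for n in range(N):
        # regressor: [r_j(n), r_j(n-1), ..., r_j(n-L+1)] stacked over the P references
        phi = padded[:, n:n + L][:, ::-1].reshape(-1)
        k = Pinv @ phi / (lam + phi @ Pinv @ phi)      # RLS gain vector
        e[n] = y[n] - w @ phi                          # corrected sample, Eq. (10)
        w = w + k * e[n]                               # coefficient update
        Pinv = (Pinv - np.outer(k, phi @ Pinv)) / lam  # update the inverse correlation matrix
    return e
```

With the settings adopted later (L = 2, λ = 0.9995), the routine returns the corrected EEG e(n) of Eq. (10) for a single channel.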

Adaptive Filter Using Electrooculogram

The vertical and horizontal EOG signals detect vertical and horizontal movements of the eyeballs. They are closely related to the ocular activity in the EEG signals and provide the most direct references for canceling that activity in the linear regression and adaptive filtering methods. Owing to the bidirectional propagation, however, the EOG signals are also contaminated by cerebral activity, which can lead to an undesired reduction of cerebral activity in the EEG signals. In the linear regression method, low-pass filtering of the EOG signals has been used to limit the cancellation of high-frequency cerebral components from the EEG.8 Because the EOG spectrum overlaps with that of some EEG signals, adaptive filtering is better suited to overcoming this difficulty.

In this study, the EOG signals are filtered by wavelet shrinkage with the same settings used for the ocular SCs, so that the sharp characteristics of the ocular component are preserved while the high-frequency components and insignificant low-frequency excursions are reduced. The reconstructed vertical and horizontal EOGs are used as reference signals \( r_{\text{v}}(n) \) and \( r_{\text{h}}(n) \) for an adaptive filter

$$ v(n) = \sum\limits_{k = 1}^{L} {h_{\text{v}} (k)r_{\text{v}} (n - k + 1)} + \sum\limits_{k = 1}^{L} {h_{\text{h}} (k)r_{\text{h}} (n - k + 1)} $$
(12)

The corrupted EEG is corrected by subtracting the output of the adaptive filter v(n) from the EEG signal y(n):

$$ e(n) = y(n) - v(n) $$
(13)

The filter coefficients are again optimized to cancel ocular activity in the EEG signal by the recursive least squares method.10

Blind Source Separation

As in the proposed ocular-reduction method, SOBI- or INFOMAX-based ICA decomposes the multichannel EEG signals of the training epoch into an equal number of SCs, and the ocular-SC channels are identified by the proposed ocular-SC identification. In the processing stage, the EEG signals are transformed into multiple SCs using the unmixing matrix obtained in the training stage. The previously identified ocular SCs are discarded, and the EEG signal is reconstructed from the remaining SCs using the mixing matrix (the inverse of the unmixing matrix).
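In code, this amounts to zeroing the identified ocular SCs and remixing with the inverse of the unmixing matrix; a minimal sketch (illustrative function name):

```python
import numpy as np

def bss_clean(x, W, ocular_idx):
    """Remove ocular SCs by BSS: x is channels x samples, W the unmixing matrix,
    ocular_idx the indices of the SCs judged ocular in the training stage."""
    u = W @ x                     # source components, Eq. (2)
    u[ocular_idx, :] = 0.0        # discard the ocular SCs
    A = np.linalg.inv(W)          # mixing matrix
    return A @ u                  # EEG reconstructed from the remaining SCs
```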

Generation of Simulated Signals

For each subject, three 10-s EEG epochs were selected based on the appearance of distinct ocular activity (referred to as corruption epochs), and three other 10-s EEG epochs that did not contain distinct ocular artifacts were chosen (corrupt-less epochs). Twenty-four simulated sequences were generated from these data. The eight subjects were divided into four groups of two. Figure 3 illustrates the procedure for generating the simulated EEG signals. In each group, the first corruption epoch of one subject (the leftmost column of Fig. 4) was decomposed by the SOBI-based ICA. The ocular SCs were then filtered by wavelet shrinkage as described in the previous sections; here, however, the adaptive threshold was set to twice the average value of the sorted MMSs in the first 25%, to further reduce the contribution of neural activity to the SCs. The filtered SCs (the last two rows in the middle column) were used as the sole sources to construct ocular interferences (the other rows in the middle column) using the same mixing matrix. A simulated EEG epoch (the rightmost column in Fig. 4) was obtained by adding these ocular interferences to the EEG data of the first corrupt-less epoch from the other subject (after SOBI-based blind source separation). The resulting simulated data were used as a training epoch on which ICA was applied to determine the unmixing matrix and the ocular-SC channels. Two further testing epochs were generated by the same method, adding the interfering signals derived from the second (third) corruption epoch of the first subject to the second (third) corrupt-less epoch of the second subject. The roles of the two subjects were then switched to generate an independent training–testing pair.
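Assuming the wavelet-shrinkage and BSS routines sketched earlier are available (shrink_ocular_sc with its adaptive threshold doubled, and bss_clean), the construction of one simulated epoch can be sketched as follows; the function name is illustrative.

```python
import numpy as np

def make_simulated_epoch(x_corrupt, W, ocular_idx, x_clean_bss, shrink):
    """x_corrupt: corruption epoch of subject 1 (channels x samples);
    W, ocular_idx: its SOBI unmixing matrix and ocular-SC indices;
    x_clean_bss: BSS-cleaned corrupt-less epoch of subject 2;
    shrink: wavelet-shrinkage routine with the doubled adaptive threshold."""
    u = W @ x_corrupt                                # SCs of the corruption epoch
    A = np.linalg.inv(W)                             # mixing matrix
    # project only the filtered ocular SCs back onto the channels
    interference = sum(np.outer(A[:, j], shrink(u[j])) for j in ocular_idx)
    return x_clean_bss + interference                # simulated corrupted epoch
```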

Figure 3

Generation of simulated EEG signals. The ocular SCs derived from a corruption epoch of subject 1 by independent component analysis are filtered by wavelet shrinkage and used as the sole sources to construct ocular-interfering signals. A simulated EEG epoch is obtained by adding these interfering signals to the EEG data of an uncorrupted epoch (after blind source separation) from subject 2

Figure 4

Multichannel EEG signals (excluding Veog and Heog) of a corruption epoch from one subject (the leftmost column) are decomposed into multiple source components (SCs) through independent component analysis. Two significant ocular SCs are filtered by wavelet shrinkage (the last two rows in the middle column) and used as the sole sources to construct ocular artifacts (the other rows in the middle column). A simulated EEG epoch (the rightmost column) is obtained by adding the reconstructed ocular artifacts to the EEG data of a corrupt-less epoch (after blind source separation) from the other subject

Performance Evaluation

The performance of the various ocular-reduction methods was evaluated on the simulated dataset. Two statistics were applied to the testing epochs. The first is Pearson’s correlation between the corrected EEG and its uncorrupted origin. The second is the spectral content change (SCC), defined as the relative change in power observed in a specified band (from \( f_{1} \) to \( f_{2} \)), namely delta (≤3 Hz), theta (4–7 Hz), alpha (8–12 Hz), or beta (13–30 Hz):

$$ {\text{SCC}} = {\frac{{\sum\nolimits_{{f_{1} }}^{{f_{2} }} {\left| {P_{\text{corrected}} (f) - P_{\text{original}} (f)} \right|} }}{{\sum\nolimits_{{f_{1} }}^{{f_{2} }} {P_{\text{original}} (f)} }}} $$
(14)

\( P_{\text{original}}(f) \) represents the power spectral density of the uncorrupted origin, and \( P_{\text{corrected}}(f) \) represents the power spectral density of the corrected signal after ocular-artifact removal. The power spectral density was obtained by dividing the analyzed data into consecutive 1-s segments with 50% overlap, applying the discrete Fourier transform with a Hanning window to each segment, and averaging the resulting spectra. The SCCs were averaged over all EEG channels.
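Both evaluation statistics are straightforward to compute; a Python/SciPy sketch is given below, where Welch’s method with 1-s Hann segments and 50% overlap stands in for the segment-averaged DFT described above (the lower band edge of 0.5 Hz for delta follows the settings reported in the Results).

```python
import numpy as np
from scipy.signal import welch

def scc(corrected, original, fs=1000.0, band=(0.5, 3.0)):
    """Spectral content change of Eq. (14) within one frequency band."""
    kwargs = dict(fs=fs, window='hann', nperseg=int(fs), noverlap=int(fs) // 2)
    f, p_corr = welch(corrected, **kwargs)
    _, p_orig = welch(original, **kwargs)
    sel = (f >= band[0]) & (f <= band[1])
    return np.abs(p_corr[sel] - p_orig[sel]).sum() / p_orig[sel].sum()

def correlation(corrected, original):
    """Pearson correlation between the corrected EEG and its uncorrupted origin."""
    return np.corrcoef(corrected, original)[0, 1]
```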

Results

Figure 5 demonstrates the removal of ocular artifacts by the various methods. One distinct ocular artifact is present in the original EEG. A residual artifact is visible in the EEG corrected by adaptive filtering using the EOG signal as a reference. In this epoch, two significant ocular SCs were found. Discarding only the more significant ocular SC in the BSS methods also left a residual artifact in the corrected EEG, regardless of whether INFOMAX or SOBI was used to decompose the EEG signal. The ocular interference disappeared when both significant ocular SCs were discarded in the BSS methods. Similar observations held for adaptive filtering using the ocular SCs as references.

Figure 5

The removal of ocular artifacts from corrupted EEG signals. Residual artifacts are visible in the EEG corrected by an adaptive filter based on the electrooculogram (EOG). When only the more significant ocular source component (OSC) is used to construct the adaptive filter or discarded in the blind source separation (BSS) based on second-order blind identification (SOBI) or information maximization (INFOMAX), residual features are still present. The ocular artifacts almost disappear, however, when both significant OSCs are considered

When the proposed ocular-SC identification was applied to the 136 SCs derived from the training epochs of the simulated EEG dataset, all nine ocular SCs derived by the SOBI-based ICA (nine by the INFOMAX-based ICA) were correctly identified, but 5 of the 127 (3 of 127) nonocular SCs were misclassified.

Figure 6 shows the average correlations and SCCs obtained by applying the adaptive filters using the EOG and using the ocular SCs to the testing epochs of the simulated dataset. Higher correlation and smaller SCC indicate better ocular-artifact reduction. The forgetting factor (λ) was set to 0.9999, 0.9995, or 0.999, and the filter length (L) varied from 1 to 8, with the Daubechies-9 wavelet adopted (the leftmost two columns). Using λ = 0.9999 or 0.9995 produced higher correlations, but λ = 0.9999 in the adaptive filter using the EOG yielded larger SCCs at 0.5–3 Hz, and L = 2 gave smaller SCCs at 0.5–3 Hz in that filter. The filter length and forgetting factor were therefore set to 2 and 0.9995, values similar to the settings used in previous research.10,11,20 The Coiflets-3, Biorthogonal Spline-6.8, and Symlets-9 wavelets, which also possess the biorthogonal property and the same filter length as Daubechies-9, were also tested (the rightmost two columns). The Daubechies-9 wavelet gave the highest correlation and the smallest SCCs, and so it was used in the remaining analyses.

Figure 6

Correlations and spectral content changes (SCCs) obtained on applying adaptive filters using electrooculogram (AF-EOG) and ocular source component (AF-OSC) to the testing epochs in the simulated dataset. The forgetting factors (λ) were set to 0.9999, 0.9995, and 0.999, and the filter length varied from 1 to 8 where the Daubechies-9 wavelet was adopted (the leftmost two columns). The Daubechies-9, Coiflets-3, Biorthogonal Spline-6.8, and Symlets-9 wavelets were, respectively, used where the filter length and forgetting factor were set to 2 and 0.9995 (the rightmost two columns)

Table 1 reports the overall average correlation and SCC statistics between the corrected EEGs and the corrupt-less origins of the simulated EEGs. This comparison demonstrates each method’s ability to reduce ocular artifacts. The adaptive filter based on the EOG produced the lowest correlation, whereas the adaptive filter based on the ocular SCs had the smallest SCCs, particularly in the delta and alpha bands.

Table 1 Pearson correlations and spectral content changes observed in corrected samples of simulated electroencephalograms by various ocular-artifact removal methods, compared to their corrupt-less origins

Discussion

The proposed ocular-artifact removal makes use of adaptive filtering and blind source separation, combining the advantages of both methods. As in the BSS method, ICA decomposes the multichannel EEG into multiple SCs, some of which exhibit distinct ocular activity and can be discarded to remove the main ocular artifacts. However, rather than discarding ocular and neural information simultaneously by simply ignoring the significant ocular SCs, we use wavelet shrinkage and an adaptive threshold to remove or reduce the non-ocular information in the ocular SCs. The reconstructed ocular SCs retain the shape of the ocular artifact and are then used to construct effective adaptive filters. The ocular reference signals are therefore obtained without the need for inconvenient and uncomfortable EOG recordings. In our experiments, the proposed method yielded higher correlations and lower spectral errors on simulated EEG signals than the adaptive filter based on EOG signals and the BSS methods, demonstrating better ocular-artifact reduction.

The BSS methods usually face a dilemma: discarding only the most significant ocular SC may leave some residual ocular artifacts in the EEG, whereas discarding all ocular SCs is likely to remove some neural components of the original signal. This dilemma persists in the proposed method, and automatic ocular-SC identification helps relieve it. With the proposed ocular-SC identification, the significant ocular SCs in each training epoch were correctly recognized; a few nonocular SCs were misclassified as ocular, leading to over-discarding in the BSS methods. However, imposing wavelet shrinkage on the detected ocular SCs in the proposed method significantly reduces their nonocular components, thereby avoiding the superfluous cancellation of cerebral activity in the EEG signals. The slight reduction of the ocular structure caused by wavelet shrinkage is recovered by the adaptive filter, which is optimized to reduce ocular artifacts. In addition, the derived posterior probability might be useful when no ocular SC is detected in a training epoch selected for its ocular artifacts, although this situation did not arise in our experiments; the SC channel with the largest posterior probability could then be regarded as ocular related.

In this study, simulated EEG signals were generated by adding ocular interference obtained from another subject’s ocular SCs to uncorrupted EEG signals. That is, the filtered ocular SCs of one person are projected onto the EEG of another person through the BSS mixing matrix. This approach is similar to the use of transfer functions11 and the linear autoregressive model,20 which are also estimated from real data.