In the past several years, considerable progress has been made in the precision of movements controlled through implanted microelectrode arrays (Wolpaw et al. 2000; Taylor et al. 2002; Wessberg and Nicolelis 2004). Improvements in hardware and data processing, in conjunction with closed-loop feedback training, have shown good promise for providing functional movement to the neurologically impaired. At the same time, substantial improvements have been achieved in signal processing, training techniques and interpretation of data in noninvasive brain-machine interfaces (BMI) using electroencephalography (EEG) (McFarland et al. 1997; Wolpaw et al. 2000; Wessberg and Nicolelis 2004; Scherer et al. 2004; Hinterberger et al. 2004). For example, four human subjects (two of whom were paralyzed but had normal arm function), working with 64-lead EEG and adaptive algorithms, demonstrated two-dimensional (2-D) control over a computer cursor after many weeks of training sessions (Wolpaw and McFarland 2004).

However promising these advances may be, both approaches have drawbacks. Information transfer rates for both invasive and noninvasive techniques are often low (Wolpaw et al. 2000). Cortical electrode implantation carries inherent risks and may not be an option for some patients. Furthermore, studies of implant longevity in rodents, while encouraging because of the limited fibrous encapsulation, have shown viability on the order of months rather than years (Vetter et al. 2004). EEG-based BMI, on the other hand, has focused on keyboard interfaces controlled by effortful changes in cortical rhythms, thus requiring substantial training time and providing only a limited number of variables over which a subject may gain control. In this study, we explored the feasibility of using real-time magnetoencephalography (MEG) to predict 2-D movement trajectories in a drawing task.
For this purpose, we applied analysis methods developed previously in the context of neurophysiological recordings (Georgopoulos et al. 1988).

Ten right-handed subjects (five women and five men) participated in these experiments as paid volunteers (age range, 23–41 years; mean ± SD, 30±6 years). The study protocol was approved by the appropriate institutional review boards, and informed consent was obtained from all subjects prior to the study in accordance with the Declaration of Helsinki. Stimuli were generated by a computer and presented using an LCD projector through a periscopic mirror system, which placed the image on a screen approximately 62 cm in front of the subject’s eyes. Subjects were presented with a red fixation point surrounded by a pentagon subtending approximately 10° of visual angle. When the fixation point turned green, subjects drew the shape continuously for 45 s by moving an X-Y joystick with their right hand. The joystick was located at arm’s length and outside the visual field. Subjects were instructed to fixate on the central point throughout the task and to copy the shape counter-clockwise at their own speed. No visual feedback was provided. MEG data were collected using a 248-sensor whole-head axial gradiometer system (Magnes 3600WH, 4D-Neuroimaging, San Diego, CA). MEG data, electrooculogram data, and joystick output (0.1–400 Hz) were acquired simultaneously at 1017.25 Hz. The cardiac artifact was removed using event-synchronous subtraction (Leuthold 2003). X-Y joystick coordinates were obtained by converting the joystick voltage output (mV) to end-of-joystick excursion.
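The details of the event-synchronous subtraction are not given here; the general idea is to average signal epochs time-locked to the cardiac R-peaks and subtract the resulting artifact template at every beat. A minimal single-channel sketch of that idea follows, assuming R-peak sample indices are already available (the function name and window parameter are illustrative, not the authors' implementation):

```python
import numpy as np

def event_synchronous_subtraction(signal, r_peaks, half_win):
    """Remove a stereotyped cardiac artifact from one channel.

    Epochs time-locked to the ECG R-peaks (sample indices in r_peaks)
    are averaged into a template, which is then subtracted at every
    beat.  This sketches the general idea of event-synchronous
    subtraction; the published method may differ in detail.
    """
    cleaned = np.asarray(signal, dtype=float).copy()
    # keep only beats whose full window fits inside the recording
    beats = [p for p in r_peaks
             if p - half_win >= 0 and p + half_win <= len(cleaned)]
    # average artifact waveform across beats
    template = np.mean(
        [cleaned[p - half_win:p + half_win] for p in beats], axis=0)
    for p in beats:
        cleaned[p - half_win:p + half_win] -= template
    return cleaned
```

Because the template is the across-beat average, a perfectly stereotyped artifact is removed exactly, while beat-to-beat variability leaves a small residual.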

The first step in data analysis was a multivariate linear regression in which the time courses of the 248 sensors were the independent variables and the corresponding time courses of the X and Y coordinates of the joystick were the dependent variables. This analysis was implemented using the double-precision fast Givens transformation of the IMSL statistical and mathematical library, called from FORTRAN programs (Compaq Visual FORTRAN Professional edition version 6.6B). This analysis yielded X- and Y-coefficients for each sensor. Next, predicted X- and Y-trajectories were computed by a linear summation of the weighted time-varying contributions from the 248 sensors, as follows:

$$X_{t} = a_{x} + \sum\limits_{i=1}^{248} {b_{ix} S_{i} (t)} $$
(1)
$$Y_{t} = a_{y} + \sum\limits_{i=1}^{248} {b_{iy} S_{i} (t)} $$
(2)

where $X_t$, $Y_t$ are the predicted X- and Y-trajectories at time $t$; $a_x$, $a_y$ are constants; $b_{ix}$, $b_{iy}$ are the X- and Y-regression coefficients for sensor $i$; and $S_i(t)$ is the signal from sensor $i$ at time $t$. In subsequent analyses the independent variables consisted of the above plus $k=19$ additional sample points of the 248 MEG signals preceding the currently predicted trajectory point at time $t$, as follows:

$$X^{\prime}_{t} = a^{\prime}_{x} + \sum\limits_{i=1}^{248}{\sum\limits_{k=0}^{19}{b_{ikx} S_{i}(t - k)}} $$
(3)
$$Y^{\prime}_{t} = a^{\prime}_{y} + \sum\limits_{i=1}^{248}{\sum\limits_{k=0}^{19}{b_{iky} S_{i}(t - k)}} $$
(4)

This is a reasonable procedure, since changes in activity in most motor-related cortical areas precede movement onset. The quality of the prediction was quantified by the Pearson correlation coefficient between the ms-by-ms actual and predicted data. Summary statistics (median, range) were obtained from pooled X- and Y-data across subjects. Finally, we carried out cross-validation analyses in which only the first half of the data (22,500 time points, i.e. 22.5 s; “training” set) was used for the calculation of the weighting coefficients. Trajectory predictions were then computed (using Eqs. 3, 4) both for the training set and for the remaining data points (“test” set). This cross-validation is important as a test of the feasibility of using this approach to control a prosthetic device in real time.
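The lagged regression of Eqs. 3–4 and the cross-validation scheme can be sketched as follows. The authors fitted the model with the IMSL fast Givens routine from FORTRAN; here NumPy's ordinary least squares stands in, and the array shapes and function names are illustrative assumptions:

```python
import numpy as np

def build_lagged_design(S, k_lags):
    """Stack each sensor's current sample and its k_lags preceding
    samples column-wise; the first k_lags rows are dropped so every
    design row has a full history.  S has shape (T, n_sensors)."""
    T = S.shape[0]
    cols = [S[k_lags - k:T - k] for k in range(k_lags + 1)]  # lags 0..k_lags
    return np.hstack(cols)

def fit_decoder(S, xy, k_lags=19):
    """Ordinary least squares mapping lagged sensor signals to the
    X-Y joystick coordinates (cf. Eqs. 3-4).  The first column of the
    design is the intercept (a'_x, a'_y)."""
    D = build_lagged_design(S, k_lags)
    D = np.hstack([np.ones((D.shape[0], 1)), D])
    B, *_ = np.linalg.lstsq(D, xy[k_lags:], rcond=None)
    return B

def predict_trajectory(S, B, k_lags=19):
    """Apply fitted coefficients to (possibly held-out) sensor data."""
    D = build_lagged_design(S, k_lags)
    D = np.hstack([np.ones((D.shape[0], 1)), D])
    return D @ B

def pearson_r(a, b):
    """Correlation between actual and predicted trajectory samples."""
    return float(np.corrcoef(a, b)[0, 1])
```

Once `B` has been estimated from the training half, `predict_trajectory` involves only multiplications and summations, which is what makes the real-time prosthetic application plausible.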

Figure 1a shows an example from one subject of actual (blue lines) and predicted (fuchsia lines) X- and Y-trajectories, without smoothing, using Eqs. 3 and 4; similarly good predictions were obtained in the remaining subjects. The modulated neural prediction disappeared when the neural data were shuffled in time (data not shown), indicating that the trajectory information resides in the temporal sequence of the MEG signal. The prediction was further improved by smoothing (Fig. 1b), yielding 2-D trajectories practically indistinguishable from the actual movements (Fig. 1c). All predictions were of the same high quality for both X- and Y-data.
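The study smoothed the predicted trajectories with cubic splines; any low-pass smoother illustrates the effect. The sketch below uses a simple centered moving average as a stand-in (the window length is an arbitrary assumption):

```python
import numpy as np

def smooth(traj, win=51):
    """Low-pass smooth a 1-D trajectory with a centered moving
    average.  (The study used cubic-spline smoothing; a moving
    average is a simple stand-in with a similar low-pass effect.)"""
    kernel = np.ones(win) / win
    # mode="same" keeps the trajectory length; edge samples are
    # attenuated because the kernel extends past the data there
    return np.convolve(traj, kernel, mode="same")
```

Applied to a noisy prediction, the smoothed trace tracks the slowly varying true trajectory while the sample-to-sample prediction noise is averaged out, which is why the correlation with the actual movement improves after smoothing.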

Fig. 1

Actual (blue lines) and predicted (fuchsia lines) X- and Y-trajectories using 20 time-points per sensor (see text, Eqs. 3, 4), based on the whole sample. a Unsmoothed predictions, b cubic-spline smoothed predictions, c X-Y plots of data in b

Next, we evaluated the robustness of these results by cross-validating the predictions between the first and second half of the data. As can be seen in Fig. 2, the predictions for the test set were very good, although with higher variance. Overall, these analyses documented the robustness of the results and the validity of the approach. The correlation coefficients between the actual and predicted trajectories in the first half were high (unsmoothed data: median r=0.91, range 0.83–0.94; smoothed data: median r=0.97, range 0.95–0.99), and remained high in the second, cross-validated half (unsmoothed data: median r=0.76, range 0.59–0.86; smoothed data: median r=0.85, range 0.68–0.92).

Fig. 2

Actual (blue lines) and predicted (fuchsia lines) X- and Y-trajectories using 20 time-points per sensor for training and test data. Coefficients were calculated from the training set and were applied to predict the test set

These results show, for the first time, that there is adequate and robust information in the non-invasive MEG signal for real-time prediction of drawing movement trajectories. Moreover, this is the first time that such high-quality information has been extracted from single, unaveraged trials. The approach should be easy to implement in real time, since the time-consuming part of the procedure, calculating the X-Y coefficients for each sensor, can be done offline on an extensive training set. In contrast, the calculation of the prediction for prosthetic control is almost instantaneous, since it involves only multiplications and summations. However, the use of MEG for ambulatory prosthetic control is obviously impractical. Preliminary results from our laboratory (F. J. P. Langheim, A. C. Leuthold, J. J. Stanwyck, S. M. Lewis, S. Sponheim, A. P. Georgopoulos, unpublished observations; work in progress) indicate that good motor predictions can also be obtained using EEG signals, which are easy to record in an ambulatory setting.