1 Brief Introduction to MEG

Electric currents produce magnetic fields around them. Therefore, the electric currents within active neurons are accompanied not only by electric but also by magnetic fields; see Fig. 17.1. Since neural currents are very weak, even the concerted action of tens of thousands of neurons produces only very faint magnetic fields outside the head. Measuring these fields is exceedingly challenging because of their weakness, yet ultrasensitive instrumentation makes such noninvasive recordings possible.

Fig. 17.1

MEG and EEG noninvasively record electromagnetic brain activity. (a) Electric currents in neurons (red arrow) produce a potential distribution on the scalp (red and blue regions) as well as a magnetic field (green lines) outside of the head. (b) The sensor array of modern MEG systems covers the whole scalp; the measured signals are visualized as a magnetic field map (red and blue contours for field out of and into the sensor helmet, respectively). (c) Postsynaptic currents (red arrows) in the apical dendrites of cortical pyramidal neurons are the main source of MEG and EEG. The parallel orientation of these dendrites enables spatial summation of the fields. (d) Action potentials are associated with antiparallel intracellular currents whereas postsynaptic currents are unidirectional. Parts of the figure adapted from Parkkonen [1]

This section sheds light on the neural origin of these signals, illustrates the instrumentation required to measure them, and outlines the typical computational approaches needed to infer neural activations from the neuromagnetic measurements. Throughout this introduction, attention is paid to aspects that are relevant for real-time MEG.

For further and more detailed information about MEG, the reader is referred to reviews [2–5]. Textbooks on MEG have also become available; see, e.g., [6–8].

1.1 Origin of MEG Signals

Neurotransmitters released from vesicles at the presynaptic axon terminal diffuse across the synaptic cleft and open specific ion channels on the membrane of the postsynaptic neuron. Ionic currents through these channels lead to a change in the membrane potential; excitatory synapses depolarize the cell (increase the membrane potential) while inhibitory synapses hyperpolarize the cell (decrease its membrane potential) or shunt the membrane potential to its resting value of about −70 mV (measured as the potential of the intracellular space with respect to the extracellular space). These postsynaptic potentials (PSPs) give rise to intracellular currents; see Fig. 17.1d. It is important to note that most inhibitory synapses are close to or even on the soma of the neuron, where they can strongly influence the likelihood that the postsynaptic neuron fires an action potential. In contrast, excitatory synapses are most abundant further away from the soma, on the branches of the dendritic tree. Therefore, both excitatory and inhibitory postsynaptic currents are typically directed towards the soma of the neuron. If the net potential change at the soma, taking into account the number and type of simultaneously active synapses and their distances from the soma, exceeds a threshold value of about −35 mV, the neuron fires an action potential (AP) that actively propagates along the axon towards presynaptic terminals of other synapses. Since the AP involves transient opening of local ion channels and thus a local change in the membrane potential, the intracellular current associated with an AP flows both forward and backward with respect to the propagation of the AP.

Although both postsynaptic currents and currents due to action potentials generate magnetic fields, the antiparallel arrangement of intracellular AP current components constitutes a quadrupole whose field diminishes rapidly with distance. When measuring the fields at a distance much larger than the length of significant AP current flow, the fields due to an AP are substantially weaker than those due to the largely unidirectional postsynaptic currents, which can be modeled as current dipoles.

The detectability of neural currents is also affected by their time course; the slower postsynaptic currents (tens of ms in duration) are more likely to co-occur than the faster APs (1–2 ms). Since the fields of single neural events are far too weak to be measured by MEG or EEG, concerted action of tens or hundreds of thousands of neurons is needed for extracranial detection [2, 9]. Moreover, these simultaneously active neurons need to have their dendrites oriented roughly in parallel for the fields to sum up constructively. Fortunately, apical dendrites of pyramidal neurons are arranged in such a manner, and indeed the postsynaptic currents in these neurons produce the bulk of MEG and EEG signals.

In rare cases, axonal activity, i.e., APs, can also be recorded with MEG. Bends in axons destroy the symmetry of the forward and backward currents and give rise to a dipolar component. However, the short duration of the AP still hampers its detection. Yet, sometimes, external stimulation can phase-lock the APs so well that massive averaging can make APs detectable; for example, responses from the human auditory brain stem have been recorded with MEG [10]. These responses partly reflect axonal activity.

1.2 Instrumentation

Neuromagnetic fields within centimeters from the scalp are on the order of 100 femtoteslas (fT). For comparison, the Earth's magnetic field (50–90 μT) is about 9 orders of magnitude (a billion times) stronger. Similarly, moving vehicles and many electric devices produce magnetic fields that are many orders of magnitude stronger than any neuromagnetic signal. Therefore, measuring these weak fields is extremely challenging, and a combination of ultrasensitive detectors and magnetic shielding is needed.

Currently all MEG systems in routine use are based on superconducting quantum interference devices (SQUIDs) which are magnetic field sensors that provide femtotesla-range sensitivity. As the name implies, these sensors exploit superconductivity and thus require cryogenic temperatures to operate. SQUIDs are typically made of low-critical-temperature superconductors that need to be cooled down to temperatures of liquid helium (boiling point 4.2 K or −269 °C). Despite the sufficient sensitivity of SQUIDs for MEG, alternative sensors are actively being sought since liquid helium is expensive and its supply is limited. For example, high-critical-temperature SQUIDs that operate at liquid-nitrogen (T = 77 K) temperatures [11] and atomic (optical) magnetometers [12] have been demonstrated to be capable of recording MEG, but further development is needed for these sensors to outperform helium-cooled SQUIDs in practical MEG devices.

To capture the neuromagnetic fields above the whole scalp and to enable localizing the neural sources generating the measured fields, large helmet-shaped arrays of SQUID sensors are used. State-of-the-art systems comprise over 300 sensors distributed across the scalp. Due to the low temperatures required, the sensor helmet is placed inside a cryogenic vessel (a Dewar), which is designed to minimize the transfer of heat to the liquid-helium bath inside the vessel while allowing the neuromagnetic fields to pass undistorted to the sensors.

As mentioned earlier, magnetic shielding is typically needed to protect the neuromagnetic measurement from interfering ambient magnetic fields such as those from traffic, power lines, elevators, and laboratory equipment. The most common shielding method is a magnetically shielded room, which is made of mu-metal (an alloy with very high permeability) and aluminum. Such a passive shield can be augmented with active systems that measure the interfering field and apply currents to coils that produce fields that counteract the interference. The magnetically shielded room requires space and is a substantial expense. Attempts to measure MEG without such a room have been made, but unfortunately robust operation in urban environments is yet to be demonstrated.

As is evident from the above, compared to EEG, MEG is considerably more expensive and not portable. However, MEG offers superior spatial resolution and to some extent complementary sensitivity with respect to EEG. These features outweigh the higher cost of instrumentation when the goal is to localize brain activity with high accuracy. MEG may also facilitate the development of brain–computer interfaces even if the eventual application does not use MEG.

2 MEG Data Analysis

While the sensor signals are already indicative of brain activity, the full potential of MEG can only be exploited by subjecting the measured multichannel data to mathematical operations that (a) suppress external interference, (b) improve the signal-to-noise ratio (SNR) of the brain signals of interest, and (c) estimate the locations of the neural sources underlying the measured signals. These steps are discussed in the following. Special attention is paid to their real-time application.

First, let us define the concept of signal space as it becomes handy when discussing the following methods. A measurement with N channels can be described as a signal vector in an N-dimensional virtual space where each measurement channel spans one axis. The spatial distribution, or pattern on the sensors, corresponds to the direction of that vector whereas the strength of the signal is reflected as the length of this vector. Thus, a neural source with a fixed location and orientation corresponds to a signal vector to a fixed direction; variation of the strength of that source only affects the length of the vector. Conversely, a source whose location or orientation varies over time corresponds to a signal vector whose direction changes over time.
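The geometric picture above can be made concrete with a toy computation. The following sketch (numpy, with a made-up five-channel sensor pattern) shows that scaling a source's strength changes only the length of its signal vector, not its direction:

```python
import numpy as np

# Toy illustration of the signal-space picture: a fixed source produces a
# fixed spatial pattern across N = 5 channels (the pattern is made up).
pattern = np.array([0.2, -0.5, 1.0, -0.5, 0.2])

weak = 1.0 * pattern      # source at unit strength
strong = 3.0 * pattern    # same source, three times stronger

def direction(v):
    """Unit vector: the 'direction' of a signal vector in signal space."""
    return v / np.linalg.norm(v)

# Strength changes only the length of the signal vector, not its direction.
same_direction = np.allclose(direction(weak), direction(strong))
length_ratio = np.linalg.norm(strong) / np.linalg.norm(weak)
```

A source that moves or rotates would instead change `direction(v)` over time, which is exactly what distinguishes it from a fixed source in this representation.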

Often the signal of interest is confined to a specific part of the signal space, i.e., only a limited set of signal patterns is generated by the neural processes underlying the measured signal. This set is referred to as a signal subspace. For example, when measuring with a whole-scalp MEG system or a high-density EEG system, the sequence of responses evoked by a visual stimulus falls within a certain subspace which is likely to be separate from the subspace spanned by, e.g., auditory responses. Likewise, external magnetic interference is confined to a specific subspace which is separate from that of brain signals, as will be discussed in the following.

For a more detailed and mathematical treatment of signal space in the context of MEG and EEG, see [13].

2.1 Interference Suppression

Despite magnetic shielding, the measured data may still be contaminated by interference; in particular, interference originating in the subject cannot be suppressed by shielding at all. These biological interference sources include active muscles, particularly the heart, and eyes (blinks and saccades). For suppressing interference, the multichannel nature of the data allows the use of statistical methods such as principal component analysis (PCA) [13, 14] and independent component analysis (ICA) [15, 16] but also physics-inspired methods that comprise a spatial signal model based on Maxwell's equations; signal-space separation (SSS) is such a method [17]. When using PCA or ICA, interference is suppressed simply by omitting those components that correspond to interfering signals. For example, eye blinks and cardiac activity are typically represented in a low-dimensional (1–3 dimensions) signal subspace, which can be projected out of the data. In contrast, physics-based methods, such as SSS, do not require selecting specific components since the brain-signal subspace is determined by the model in a data-independent manner. However, SSS may not be able to suppress interference that spatially resembles brain activity, while PCA or ICA could do it.
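Projecting an interference subspace out of the data can be sketched in a few lines. The example below (numpy, with synthetic data and made-up spatial patterns) removes a one-dimensional "blink" subspace with the orthogonal projector P = I − uuᵀ, in the spirit of signal-space projection:

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples = 6, 1000
t = np.arange(n_samples) / 1000.0

# Hypothetical spatial patterns (signal-space directions) of a brain source
# and an eye-blink artifact; real patterns would come from the data or a model.
brain_pattern = rng.standard_normal(n_channels)
blink_pattern = rng.standard_normal(n_channels)

brain_ts = np.sin(2 * np.pi * 10 * t)      # 10-Hz brain signal
blink_ts = 5.0 * (t % 0.5 < 0.05)          # large blink-like transients
data = np.outer(brain_pattern, brain_ts) + np.outer(blink_pattern, blink_ts)

# Remove the 1-D interference subspace spanned by the blink pattern with the
# projector P = I - u u^T, where u is the unit-norm blink direction.
u = blink_pattern / np.linalg.norm(blink_pattern)
P = np.eye(n_channels) - np.outer(u, u)
clean = P @ data
```

Note that the projector also removes whatever component of the brain pattern happens to lie along the blink direction; this is the price of a purely spatial suppression method.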

In real-time applications, the interference suppression system should require no or very little operator intervention such as determining which principal or independent components correspond to interference. Therefore, SSS lends itself well to real-time use, and a robust version of the algorithm has been devised particularly for applications where the data have to be processed in a single pass [18].

2.2 Filtering and Averaging in Time

Among all measurable brain activity, one often wants to target specific neural activations that reflect stimulus- or task-related processing. As raw MEG signals usually do not provide sufficient SNR to accurately pinpoint or separate the contributions of the underlying neural sources, responses to similar stimuli are averaged to improve the SNR. If we assume the noise—whether from the sensors, environment, or background brain activity—to be temporally uncorrelated with the signal of interest, and that the responses themselves do not change with repetition, the SNR improves as the square root of the number of averaged responses.
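The square-root law is easy to verify with a simulation. The sketch below (numpy; the waveform, noise level, and trial counts are arbitrary) averages synthetic trials and compares the SNR after 1 and 100 trials:

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials, n_times = 400, 1000

response = np.sin(np.linspace(0.0, np.pi, n_times))     # fixed evoked waveform
noise_sd = 5.0                                          # noise >> signal in a single trial
trials = response + noise_sd * rng.standard_normal((n_trials, n_times))

def snr(n):
    """Amplitude SNR of the average of the first n trials."""
    avg = trials[:n].mean(axis=0)
    return response.max() / (avg - response).std()

ratio = snr(100) / snr(1)   # expected to be about sqrt(100) = 10
```

The assumptions matter: if the response adapts across repetitions or the noise is correlated between trials, the improvement falls short of the square-root prediction.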

However, in real-time analysis, response averaging has only limited applicability since accumulating multiple responses naturally increases the latency of the feedback to the subject. Yet, with a rapidly changing stimulus, computing a moving average of the responses can be a powerful way of boosting the SNR. This is particularly true for steady-state responses (see Sect. 17.2.4.2), which have been employed extensively in EEG-based BCI applications; for example, steady-state visual evoked potentials (SSVEPs) can be collected with the stimulus changing even at several tens of hertz, and thus an averaging window of just one second can provide a good SNR.

The response of interest typically has a limited spectral width, i.e., the response is confined to a certain frequency band. Since the spectra of the measured signals are nonzero at all frequencies due to external interference, intrinsic sensor noise, and background brain activity, filtering the measurement to the band of the response improves the SNR.

In addition to conventional spectral filtering, time-domain templates (template filters) can be used if the response or series of responses has a stereotypical waveform. The measured data are correlated with the template (i.e., convolved with its time-reversed version), and the resulting time series is thresholded to detect moments when the responses of interest are likely present in the data.
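Template filtering amounts to a matched filter followed by a threshold. A minimal sketch (numpy; the Hanning-window "response" and all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 1000                                   # sampling rate (Hz)
template = np.hanning(50)                   # stereotypical 50-ms response shape
signal = 0.3 * rng.standard_normal(2000)    # background noise
true_onsets = [300, 1200]
for onset in true_onsets:
    signal[onset:onset + 50] += template    # embed two responses

# Correlate with the template (= convolve with its time-reversed version)
# and threshold the output to flag likely response onsets.
score = np.convolve(signal, template[::-1], mode="valid")
detections = np.flatnonzero(score > 0.6 * score.max())
```

In a real-time setting the same convolution can be computed incrementally on each incoming data packet, so the detection latency is dominated by the template length.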

2.3 Filtering and Averaging in Space

The SNR of multichannel measurements can be improved also by spatial filtering, i.e., by linearly combining the measurement channels such that the result shows the maximal SNR for the signal of interest. Several approaches exist for deriving these linear combinations, that is, weighted sums of the measurement channels:

  1. Combination of a set of neighboring channels with uniform weights. This method is used mostly to reduce the effect of intrinsic sensor noise, and it is often referred to as “spatial averaging.” No detailed knowledge of the source of interest is utilized. To remove the dependence on the source orientation and guarantee constructive spatial averaging, absolute values (or norms of planar-gradiometer pairs) are often taken. However, since both operations are nonlinear, the frequency content and phase of the signal are not preserved.

  2. Applying machine-learning algorithms to determine projections such that the resulting signal is optimized for providing the feedback; for example, it best discriminates between attended and unattended stimuli. Both unsupervised methods, such as ICA, and supervised methods, such as linear discriminant analysis (LDA), can be used. In either case, no explicit source modeling is done.

  3. Projecting the data to the signal-space direction corresponding to a known source, yielding an estimate of the source time course. The direction must be known in advance or determined adaptively from the data. Possible additional simultaneous sources are not explicitly taken into account and they can thus interfere with the estimate. To allow for any source orientation at a specific location, the source signal vectors of orthogonal dipolar sources can be computed and the norm of the projection onto this subspace taken as an estimate of the source strength.

  4. Using beamforming techniques to explicitly suppress signals generated outside of the source area of interest. In addition to the source signal vector, an estimate of the data covariance must be available.
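The beamforming approach of item 4 can be sketched with a minimum-variance (LCMV-style) filter, w = C⁻¹a / (aᵀC⁻¹a), which has unit gain for the source pattern a and minimizes all other output power. The example below uses synthetic data with made-up spatial patterns:

```python
import numpy as np

rng = np.random.default_rng(7)
n_ch, n_t = 8, 5000
t = np.arange(n_t) / 1000.0

# Hypothetical signal vectors (spatial patterns) of the source of interest
# and of an interfering source elsewhere.
a = rng.standard_normal(n_ch)
b = rng.standard_normal(n_ch)

s = np.sin(2 * np.pi * 11 * t)                 # source of interest
interf = np.sign(np.sin(2 * np.pi * 3 * t))    # strong interferer
data = (np.outer(a, s) + 3.0 * np.outer(b, interf)
        + 0.1 * rng.standard_normal((n_ch, n_t)))

# Minimum-variance beamformer: w = C^{-1} a / (a^T C^{-1} a),
# i.e., unit gain for pattern 'a', minimal total output variance.
C = np.cov(data)
Ci_a = np.linalg.solve(C, a)
w = Ci_a / (a @ Ci_a)

source_est = w @ data     # estimated source time course
```

Because the weights depend on the data covariance C, the filter adapts to whatever interference is actually present, which is exactly why an up-to-date covariance estimate is needed in real-time use.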

2.3.1 Challenges in Real-Time Implementations

In real-time analysis, utilizing a priori information can be difficult as that information often depends on the position of the head with respect to the MEG sensor array. This is particularly true for source signal vectors and data covariance. There is no corresponding problem in EEG since the electrodes are attached to the scalp according to, e.g., the international 10–20 system and thus their location with respect to the brain does not vary considerably even between measurement sessions. In MEG, this problem can be addressed in two ways that we discuss below.

A short “source localizer” measurement can be performed just prior to switching to the actual experiment. If the head position can be assumed not to vary between the localizer measurement and the experiment, the localizer provides the source signal vector and the data covariance matrix in the space that is readily applicable also during the experiment; the head position does not need to be determined and no data transformations are needed. However, the assumption of an immobile head may not be valid, particularly if the localizer measurement is time-consuming. Additionally, the a priori information does not generalize well across subjects.

Alternatively, the signal vectors corresponding to the sources of interest can be determined in a separate measurement, and either these signal vectors or the measured data can be transformed to a common head position during the actual real-time experiment. Such a transformation can be accomplished via a source model (e.g., based on equivalent current dipoles or a minimum-norm estimate [19]) or via a multipolar expansion bound to the head position [17, 20]. Both approaches require that the head position with respect to the sensors can be determined accurately. All modern MEG systems include such functionality for off-line source modeling and integration of the source estimates with structural information; however, the availability of head localization for online use is still limited.

Machine-learning methods require training data which need to be obtained during the measurement unless the learned model generalizes across subjects with the aid of the transformations described earlier for differing head positions.

2.4 Signals for Neurofeedback

The previous section described how to transform the multichannel MEG measurement to a small set of time series that optimally reflects the brain activity of interest. Similarly to EEG, several features of MEG signals can be used for neurofeedback. Here we outline the most common signal characteristics that can be extracted from the raw time series and potentially used to drive or modulate stimulation.

2.4.1 Evoked Responses

Sensory input, particularly its abrupt changes, elicits electromagnetic responses at multiple levels of the nervous system, from peripheral nerves to high-order cortical areas. The characteristics of these responses, including their susceptibility to modulation by the current “brain state,” have been studied extensively and they have shed light on many aspects of cortical sensory processing.

For neurofeedback, those responses that can be modified by endogenous mechanisms such as attention are usually most appropriate. In EEG, the stimulus-nonspecific P300 response that is considered to reflect conscious evaluation of the stimulus and decision making is probably the most widely used endogenous response for brain–computer interfaces; the “P300 speller” (see Chap. 2 in this book) is a good example of such use [21, 22]. However, while the P300 response is detectable also by MEG, more specific responses may yield better BCI performance, likely owing to the better spatial resolution of MEG compared to EEG [23].

2.4.2 Steady-State Responses and Frequency Tagging

Fast, repetitive stimulation or amplitude modulation of the stimulus at a specific frequency often produces a train of evoked responses at that same frequency; these responses are often referred to as steady-state responses. If the frequency is sufficiently high, the responses are approximately sinusoidal and thus produce a clear spectral peak at the fundamental frequency. The amplitude of this peak is influenced not only by the qualities of the stimulus but in many cases also by intrinsic factors such as attention, which makes steady-state responses of particular interest in closed-loop experiments.

The cortical processing of multiple, concurrent stimuli can be monitored by using stimulus-specific frequencies, i.e., by tagging or labeling each stimulus with a distinct frequency. Any intrinsic modulations in the processing of these stimuli could then be reflected in the amplitudes of the tag signals. This approach has been employed, e.g., in studying conscious visual perception [24, 25].

There are several methods available for the quantification of the tag amplitudes. Simple averaging time-locked to a fixed phase of the repeating stimuli yields an average steady-state evoked response whose amplitude can be readily measured. Similar information could be obtained by examining the corresponding peaks in the amplitude spectrum of the measured signal (see, e.g., [26]). Sufficiently high spectral resolution must be used to avoid losing SNR by diluting the narrow spectral peak into a much wider frequency bin. In real-time applications, the temporal window from which the spectra are computed must be kept short to allow short-latency feedback, which limits frequency resolution and, in the case of multiple tags, may lead to unwanted mixing due to the non-orthogonality of the sinusoids within the finite temporal window. Steady-state average responses may suffer from similar leakage unless the tag frequencies are carefully chosen such that the corresponding sinusoids are orthogonal within the averaging window.

To overcome these problems, a model-based approach can be taken: since the exact frequency of each tag is known, only the amplitude and phase need to be determined, and the estimation of the full spectrum can be replaced by fitting to the data a model where each tag response is represented by a quadrature (sine and cosine) signal at that frequency. A quadrature is needed as the phase of the response is unknown. By solving this multiple regression problem, the leakage between the tag amplitude estimates can easily be taken into account and compensated for. In addition, any interfering oscillatory signals can be included in the model as “nuisance regressors,” thereby eliminating their contaminating effect. Such an approach was adopted in [25] for off-line analysis. Later, we applied a similar scheme in a real-time setting involving four simultaneous tags, one for each quadrant of the visual field; see Fig. 17.2. The temporal windows were just 1.5 s in duration but provided amplitude estimates robust enough for real-time, brain-based control of a cursor on the screen.
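The quadrature-regression idea can be sketched as follows (numpy; single synthetic channel, arbitrary amplitudes and phases, tag frequencies and the 1.5-s window taken from the description above). Note that with an 1.5-s window these frequencies complete non-integer numbers of cycles, so the regressors are not exactly orthogonal; least squares handles this, which is the point of the model-based approach:

```python
import numpy as np

fs, win = 1000.0, 1.5                       # sampling rate (Hz), window length (s)
t = np.arange(int(fs * win)) / fs
tags = [11.0, 13.0, 15.0, 17.0]             # tag frequencies, one per quadrant

# Simulated single-channel data: four tag responses with unknown
# amplitudes/phases plus broadband noise (all values made up).
rng = np.random.default_rng(3)
true_amp = np.array([2.0, 0.5, 1.0, 1.5])
true_phase = np.array([0.3, 1.0, -0.7, 2.0])
y = sum(A * np.cos(2 * np.pi * f * t + p)
        for A, f, p in zip(true_amp, tags, true_phase))
y = y + 0.5 * rng.standard_normal(t.size)

# Quadrature (cosine + sine) regressors for each known tag frequency;
# nuisance regressors could be appended as extra columns.
X = np.column_stack([col for f in tags
                     for col in (np.cos(2 * np.pi * f * t),
                                 np.sin(2 * np.pi * f * t))])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
amps = np.hypot(coef[0::2], coef[1::2])     # amplitude estimate per tag
```

The phase of each tag response is available as well, from `arctan2` of the same coefficient pairs.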

Fig. 17.2

Application of MEG and frequency tagging for providing the subject with real-time feedback on the target of visual attention. (a) Dynamic visual stimulus with noise patterns at each quadrant refreshed such that the mean luminance varied at 11, 13, 15, and 17 Hz depending on the quadrant. A cursor (small black disc) was re-positioned on the screen every 1.5 s based on the balance of the tag responses. (b) The amplitude spectra of all MEG sensors while the subject was viewing the stimulus. Each triplet comprises two planar gradiometers (left) and one magnetometer (right). (c) The spectrum of an occipital planar gradiometer channel enlarged. Frequency tags produce distinct spectral peaks at their frequency while also giving rise to lower-amplitude harmonics. (d) System architecture

2.4.3 Changes in Oscillatory Activity

It has been known since the early days of EEG that noninvasively detected spontaneous oscillatory brain activity, such as occipital alpha at 10 Hz or Rolandic mu rhythm at 10 and 20 Hz, can be modulated by various factors such as sensory input, motor output, visual or motor imagery, attention, vigilance, etc. This holds true also for MEG, which can record oscillatory brain signals from near-DC to several hundreds of hertz. Compared to EEG, MEG generally allows localizing the sources of these oscillations better and is therefore particularly well suited to exploiting rhythmic brain activity in simple neurofeedback and other closed-loop experiments.

Even though oscillatory brain activity is typically modulatory rather than highly specific to the stimulus or task, it can be used to convey more information than just binary choices; see, e.g., [27, 28].

Oscillatory activity at frequencies below the gamma band (<30 Hz) is most prominent in EEG and also in MEG, but these low frequencies pose a problem for closed-loop experiments: reliable estimation of the amplitude requires accumulating several cycles of oscillatory activity, which at these frequencies means temporal windows and feedback latencies approaching and often exceeding one second. Therefore, gamma-range signals would be better suited for such experiments, but unfortunately their SNR is often prohibitively low for a sub-second integration time to be sufficient for reliable feedback.
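The window-length trade-off can be illustrated numerically. The sketch below (numpy; synthetic 10-Hz oscillation in broadband noise, quadrature demodulation as a simple amplitude estimator) compares the spread of amplitude estimates from one-cycle versus ten-cycle windows:

```python
import numpy as np

fs = 1000.0
rng = np.random.default_rng(5)

def amp_estimate_sd(win_s, n_rep=200):
    """Spread of 10-Hz amplitude estimates from windows of length win_s."""
    n = int(fs * win_s)
    t = np.arange(n) / fs
    c = np.cos(2 * np.pi * 10.0 * t)
    s = np.sin(2 * np.pi * 10.0 * t)
    estimates = []
    for _ in range(n_rep):
        # 10-Hz oscillation (amplitude 1, random phase) in broadband noise
        x = np.cos(2 * np.pi * 10.0 * t + rng.uniform(0, 2 * np.pi))
        x = x + rng.standard_normal(n)
        estimates.append(2.0 * np.hypot(x @ c, x @ s) / n)  # quadrature demodulation
    return np.std(estimates)

short_sd = amp_estimate_sd(0.1)   # one cycle of alpha
long_sd = amp_estimate_sd(1.0)    # ten cycles
```

The ten-cycle window yields markedly more stable estimates, but at the cost of a correspondingly longer feedback latency, which is exactly the dilemma described above.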

2.4.4 Functional Connectivity

We have so far assumed that the signals from the brain sources represent a characteristic of the stimulus, possibly modulated by some endogenous brain mechanisms. However, this view ignores the possibility that one brain region passing information about the stimulus to the next one could be reflected as increased synchronization between the signals from those regions even if we do not know how these signals are associated with the stimulus or task. This interregional synchronization is often referred to as functional connectivity. Several metrics exist for estimating and quantifying functional connectivity; coherence and phase-locking value (PLV) are probably the most widely used in MEG. Coherence and PLV attempt to estimate whether the phases of two signals, at a specific frequency, exhibit a constant difference.

Due to the finite spatial resolution of MEG and EEG, the activity of one source is present at multiple sensors. Similarly in source space, focal activity leaks to nearby source locations. This leakage, or field spread, presents a significant problem in functional connectivity estimation by giving rise to spurious connectivity. One way to address this problem is to omit all in-phase coherence and accept only 90-degree-shifted signals as true connectivity, since field spread can only produce zero- or 180-degree phase shifts. This method is called imaginary coherency [29]. We have applied this method with beamformer-based extraction of source time series in a closed-loop experiment [30].
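The distinction can be demonstrated with synthetic signals (numpy; a minimal Welch-style segment-averaged coherency without overlap or tapering, all parameters illustrative). A zero-lag copy, mimicking field spread, produces purely real coherency, whereas a 90-degree-lagged interaction shows up in the imaginary part:

```python
import numpy as np

rng = np.random.default_rng(9)
fs, n, seg = 1000, 4000, 500                 # 8 segments, 2-Hz resolution
t = np.arange(n) / fs
phase = 2 * np.pi * 10.0 * t

a = np.cos(phase) + 0.3 * rng.standard_normal(n)        # "seed" signal
b_leak = np.cos(phase) + 0.3 * rng.standard_normal(n)   # zero-lag copy (field spread)
b_true = np.sin(phase) + 0.3 * rng.standard_normal(n)   # 90-degree-lagged interaction

def coherency(x, y):
    """Segment-averaged complex coherency."""
    X = np.fft.rfft(x.reshape(-1, seg), axis=1)
    Y = np.fft.rfft(y.reshape(-1, seg), axis=1)
    Sxy = (X * Y.conj()).mean(axis=0)
    return Sxy / np.sqrt((np.abs(X) ** 2).mean(axis=0)
                         * (np.abs(Y) ** 2).mean(axis=0))

k10 = np.argmin(np.abs(np.fft.rfftfreq(seg, 1.0 / fs) - 10.0))
leak_imag = abs(coherency(a, b_leak)[k10].imag)   # near zero: rejected as spurious
true_imag = abs(coherency(a, b_true)[k10].imag)   # large: genuine interaction
```

The cost of the method is also visible here: a genuine interaction that happens to be exactly in phase would be discarded along with the field-spread artifacts.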

3 Implementing Neurofeedback with MEG

While real-time analysis and neurofeedback frameworks have existed for some time (e.g., “BCI2000” described in [31]), they have typically been adopted for EEG-based applications. More recently, real-time interfaces have appeared also for MEG or for combined EEG and MEG use (see, e.g., [32, 33]), and complete analysis software packages such as “FieldTrip” [34] and “MNE Software” [35] now support real-time data analysis. Interoperability between the real-time interfaces of these software packages also exists. The following text addresses certain relevant performance characteristics.

3.1 Latency

In MEG-based neurofeedback, the delay from a change in the cerebral magnetic field to the corresponding change in the feedback provided to the subject is often referred to as the latency of the setup. Latency comprises the intrinsic delay in the sensing and data acquisition system of the MEG device, possible additional delay caused by relaying the data from the MEG system to the real-time algorithm, the time of accumulating data for extracting the signal features relevant for the feedback, processing time, and finally the delay in delivering the feedback to the subject. Many of these steps along the data path involve handling the data in packets of a fixed number of samples. Since these “temporal segmentations” are asynchronous with respect to each other and to the measured neural events, the delay in each element should be regarded as a random variable described by a statistical distribution rather than a constant. For example, if an MEG acquisition system samples at 1 kHz, delivers the data in packets of 50 samples, and involves a minimum transfer delay of 30 ms, this part incurs a uniform latency distribution of 30–80 ms. Similarly, if the feedback is delivered via a video projector running at 60 frames per second and with an intrinsic delay of one frame, an additional uniformly distributed delay of 17–34 ms adds to the total feedback latency.
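The latency bounds quoted above follow from a simple rule: each asynchronous stage contributes its fixed minimum delay plus a uniform jitter of one packet (or frame) period, and the bounds of consecutive stages add. A small sketch of the arithmetic (the exact video figures come out as roughly 17–33 ms; rounding outward gives the 17–34 ms quoted above):

```python
# Each asynchronous stage adds a fixed minimum delay plus a uniform jitter
# of one packet (or frame) period; the bounds of consecutive stages add.
def stage_bounds(min_delay_ms, period_ms):
    return (min_delay_ms, min_delay_ms + period_ms)

# 50-sample packets at 1 kHz (50-ms period) plus a 30-ms minimum transfer delay
acq = stage_bounds(30.0, 50 * 1.0)
# one-frame intrinsic delay at 60 fps plus up to one frame of wait
video = stage_bounds(1000.0 / 60, 1000.0 / 60)

total = (acq[0] + video[0], acq[1] + video[1])   # combined best/worst case
```

The same bookkeeping extends to any further stage (processing, shared-memory hand-off, etc.) by adding one more `stage_bounds` term.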

However, compared to the above numbers, the data accumulation delay is often the most significant because the SNR in the raw data is typically low and multiple single events need to be collected and averaged. Moreover, if the signal feature of interest is the amplitude of a low-frequency oscillation, data amounting to a few cycles have to be collected. In the case of alpha oscillations (~10 Hz), this accumulation corresponds to hundreds of milliseconds, rendering the technical delays described earlier almost negligible (see also Sect. 17.2.4.3). Thus, further optimization of the technical sources of delay should be considered only if their contribution approaches the algorithmic delay, including data accumulation.

3.2 Signal Quality

As described earlier (see Sect. 17.2.1), raw MEG signals are often hampered by residual interference. Magnetometer (as opposed to gradiometer) signals are particularly prone to such interference. Traditionally, off-line algorithms are employed to suppress the remaining interference and artifacts, but in a neurofeedback setting such suppression naturally has to happen online. Fortunately, most of the relevant algorithms can be implemented as a multiplication of the multichannel data with a fixed matrix; e.g., higher-order gradiometrization [36], signal-space projection [13, 14], and signal-space separation [17] can be expressed in such a way. A multiplication even by a large matrix, e.g., 400 by 400 corresponding to 400 measurement channels, adds only a few milliseconds to the total delay.

3.3 Synchronization of Measurement Channels

In modern MEG systems, the sampling of all channels is typically controlled by a shared sampling clock and thus phase differences across the signals are negligible. However, when combining data from multiple modalities, e.g., MEG, EEG with a separate acquisition system, and eye tracking, synchronization may become an issue unless a common clock signal is used. Resampling the data in real time for regaining synchrony is undesirable because of the added complexity and delay.

3.4 An Example Setup

We devised a setup where the subject is provided with real-time feedback on the target of his/her visual attention; see Fig. 17.2. We employed visual steady-state responses, i.e., frequency tagging, by using a dynamic visual stimulus with noise patterns at each quadrant refreshed such that the mean luminance varied at 11, 13, 15, and 17 Hz depending on the quadrant (Fig. 17.2a). The amplitude of the MEG signal at each tag frequency was determined with a regression model (see Sect. 17.2.4.2) that was fitted to every 1.5 s of MEG data. The resulting amplitude values were then used to compute horizontal and vertical indices to position a cursor on the screen.

The setup was implemented as follows (Fig. 17.2d). The rtMEG software [32] was run on the MEG acquisition computer to provide a network-transparent real-time interface (via the FieldTrip real-time buffer [34]) to the MEG data. Another computer ran two instances of MATLAB: one for reading the data over a TCP/IP network connection with FieldTrip functions and for estimating the tag response amplitudes, and the other for generating the dynamic stimulus and feedback with the help of PsychToolBox [37]. The two MATLAB instances communicated through a shared memory segment.

The arrival of a new data buffer from the MEG system and the refreshing of the stimulus image are asynchronous events. Thus, using two concurrent processes or threads, each handling only one of the tasks, and sharing data across them appropriately decouples the timings and allows strictly regular updates of the stimulus and therefore maintenance of the tag frequencies.

In this setup, the rtMEG interface incurred a delay of 30–60 ms [32], the video projection system 38–55 ms, computations on the order of 10 ms, and data accumulation 1,500 ms, which makes all other delays irrelevant.

4 Conclusions

Neurofeedback holds promise for new kinds of neuroscientific experiments and rehabilitation strategies. Compared to fMRI- and EEG-based neurofeedback, MEG may offer clear advantages due to its unique combination of excellent temporal and good spatial resolution [38, 39].

The increasing availability of software tools that support implementing neurofeedback with MEG makes such experiments tractable for a larger community of neuroscientists.