Definition

General anesthesia is a reversible, drug-induced state of unconsciousness characterized by lack of awareness of surroundings, lack of responsiveness to painful stimuli (nociception), and inability to form memories (amnesia). The change in brain state from wakeful to unconscious produces alterations in cortical electrical activity that can be monitored with electrodes placed on the scalp (the electroencephalogram, EEG) or on the surface of the cortex (the electrocorticogram, ECoG). The goal of neural modelers is to develop equations that describe the gross behavior of spatially averaged populations of neurons during both induction of and recovery from general anesthesia.

Detailed Description

Classes of General Anesthesia

There are two broad classes of anesthetic drugs: hypnotic agents (such as propofol, etomidate, and isoflurane) that produce a slowed, sleep-like EEG, and dissociative agents (e.g., ketamine, nitrous oxide) that induce a dissociated state with an activated EEG similar to that of REM sleep.

Most commonly used intravenous and volatile agents – such as propofol or sevoflurane – boost inhibition by increasing the influx of chloride ions at gamma-aminobutyric acid (GABA) receptors on postsynaptic membranes (Weir 2006), causing the postsynaptic neuron to become hyperpolarized. In contrast, dissociative drugs are believed to disrupt excitatory synaptic transmission. In both cases, the excitatory-inhibitory balance required for normal brain function is shifted in favor of inhibition.

The Induction–Recovery Trajectory

At low concentrations, most GABAergic agents (e.g., propofol, sevoflurane, etomidate) cause a paradoxical boost in cortical activity (the “biphasic effect”) across most EEG frequency bands (Kuizenga et al. 2001), with the biphasic peak appearing first in the high beta frequencies (24–28 Hz) and then sliding smoothly towards lower frequencies over time (e.g., see Fig. 3 of Koskinen et al. (2005)). With further increase in concentration, the EEG slows as large-amplitude delta-band oscillations (1–4 Hz) become dominant, then changes to an intermittent burst–suppression pattern (bursting activity alternating with relative silence), and finally collapses into a flat-line trace at the deepest levels of comatose anesthesia.
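These band-resolved power changes are what anesthesia EEG monitoring quantifies. As a purely generic illustration (the surrogate signal, sampling rate, and band edges below are arbitrary choices, not data from the cited studies), the following Python sketch computes delta- and beta-band power from a Welch spectral estimate:

```python
# Generic illustration only: band-limited EEG power from a surrogate signal
# (no real anesthesia data). Band edges follow the conventional delta and
# high-beta definitions used in the text.
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, f_lo, f_hi):
    """Integrated power in [f_lo, f_hi] Hz from a Welch spectral estimate."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return np.sum(psd[band]) * (freqs[1] - freqs[0])

fs = 250.0                                   # Hz, illustrative sampling rate
t = np.arange(0, 30, 1.0 / fs)               # 30 s surrogate recording
rng = np.random.default_rng(0)
# Surrogate "light anesthesia" trace: boosted high-beta rhythm on broadband noise.
eeg = 2.0 * np.sin(2 * np.pi * 26.0 * t) + rng.standard_normal(t.size)

print("delta power (1-4 Hz):  ", round(band_power(eeg, fs, 1.0, 4.0), 3))
print("beta power (24-28 Hz): ", round(band_power(eeg, fs, 24.0, 28.0), 3))
```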

This sequence is reversed as the anesthetic drug is eliminated naturally from the body, allowing the patient to return to consciousness. However, the recovery of responsiveness generally occurs at a lower drug concentration (as measured in the blood) than that required to induce unresponsiveness, implying a hysteresis separating the induction and recovery trajectories. Part of this hysteresis can be explained by the time required for the drug to diffuse across the blood–brain barrier (Voss et al. 2007) and so can be compensated for using pharmacokinetic models (Roberts 2007), but such compensations are typically only partially successful (Ludbrook et al. 1999; Coppens et al. 2010). The remaining hysteresis may be a consequence of a recently proposed “neural inertia” that resists transitions between conscious and unconscious states (Friedman et al. 2010); such distinct induction/recovery paths arise naturally if the brain has access to multiple steady states, as suggested by the modeling of Steyn-Ross et al. (1999, 2004).
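The pharmacokinetic contribution to this hysteresis can be sketched with a minimal first-order effect-site model, dCe/dt = ke0 (Cp - Ce), in which the effect-site concentration Ce lags the blood concentration Cp. The Python sketch below uses purely illustrative values for ke0, the concentration ramp, and the response threshold, and captures only the diffusion-lag component, not the residual “neural inertia” effect:

```python
# Minimal sketch of the pharmacokinetic (diffusion-lag) contribution to
# induction/recovery hysteresis. ke0, the blood-concentration ramp, and the
# response threshold are all illustrative, not agent-specific.
import numpy as np

ke0 = 0.25                     # 1/min, illustrative blood-brain equilibration rate
dt = 0.01                      # min, integration step
t = np.arange(0.0, 40.0, dt)   # 40 min: 20 min ramp up, 20 min ramp down

# Triangular blood-concentration profile (arbitrary units).
Cp = 4.0 * np.where(t <= 20.0, t / 20.0, (40.0 - t) / 20.0)

# Effect-site compartment dCe/dt = ke0*(Cp - Ce), integrated with forward Euler.
Ce = np.zeros_like(Cp)
for i in range(1, len(t)):
    Ce[i] = Ce[i - 1] + dt * ke0 * (Cp[i - 1] - Ce[i - 1])

threshold = 2.0                               # illustrative effect-site level for unresponsiveness
loss = np.argmax(Ce > threshold)              # first sample above threshold (induction)
recovery = len(Ce) - 1 - np.argmax(Ce[::-1] > threshold)   # last sample above threshold

print(f"blood level at loss of response:   {Cp[loss]:.2f}")
print(f"blood level at return of response: {Cp[recovery]:.2f}")
# The second value is lower than the first: the equilibration lag alone
# reproduces part of the observed hysteresis.
```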

Cellular Effects of General Anesthetic Drugs

Studies of propofol, halothane, and isoflurane have shown that, at drug concentrations rendering human subjects unresponsive, cerebral blood flow and metabolism are reduced by about 50% (Antkowiak 2002) as a result of global reductions in cortical activity. This is consistent both with in vivo investigations in rat cortex – where sedative-level concentrations were found to suppress neural firing rates by 50–70% (Gaese and Ostwald 2001) – and with cultured brain-slice studies in which low concentrations of general anesthetics (GABAergic agonists propofol, halothane, isoflurane, enflurane, sevoflurane, etomidate, ethanol, and pentobarbital and the non-GABAergic agent ketamine) significantly decreased mean firing rates (Antkowiak 2002).

All anesthetic drugs influence cellular function in a number of different ways, but the major mechanism for GABAergic suppression of firing rates is believed to be prolonged opening of chloride channels on the postsynaptic neuron, which substantially increases the negative charge transferred during the inhibitory postsynaptic current (IPSC) pulse, reaching 2–4 times the control value at clinically relevant concentrations (Kitamura et al. 2003; Banks and Pearce 1999).
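This charge-transfer effect can be illustrated with a toy calculation in which the IPSC is modeled as a unit-peak biexponential conductance waveform and the anesthetic is assumed to act solely by prolonging the decay time constant; the time constants and driving force below are illustrative, not fitted to the cited recordings:

```python
# Toy calculation: prolonging the IPSC decay increases total inhibitory charge
# transfer. The biexponential waveform parameters are illustrative, and the
# anesthetic is assumed to act only on the decay time constant.
import numpy as np

def ipsc_charge(tau_decay, tau_rise=0.5, driving_force=-10.0, dt=0.01, t_max=300.0):
    """Integrate a unit-peak biexponential IPSC (times in ms) to get charge (a.u.)."""
    t = np.arange(0.0, t_max, dt)
    g = np.exp(-t / tau_decay) - np.exp(-t / tau_rise)
    g /= g.max()                               # normalize to unit peak conductance
    current = g * driving_force                # inhibitory (negative) current
    return np.sum(current) * dt                # total charge transferred

q_control = ipsc_charge(tau_decay=7.0)         # baseline decay ~7 ms
q_drug = ipsc_charge(tau_decay=3.0 * 7.0)      # decay prolonged threefold by the drug
print(f"charge ratio (drug / control) = {q_drug / q_control:.1f}")
# With peak conductance held fixed, the transferred charge grows with the decay
# constant (here ~2.7x for a threefold prolongation), consistent with the
# quoted 2-4x increase at clinical concentrations.
```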

Dissociative drugs reduce excitatory transmission by blocking N-methyl-D-aspartate (NMDA) glutamate channels, a blockade that probably plays a significant role in producing the characteristic dissociated anesthetic state (Petrenko et al. 2013); however, these drugs also have other effects, such as inhibition of hyperpolarization-activated cyclic nucleotide-gated (HCN1) channels (Chen et al. 2009) and increased potassium channel opening (Gruss et al. 2004).

Modeling Anesthetic Effects

The challenge for anesthesia modelers is to bridge the scales from microscopic cellular drug effects to the consequent macroscopic population behaviors detected with scalp or cortical electrodes. By considering spatially averaged (“mean-field”) properties of cortical tissue, we avoid the need (and computational expense) of explicitly representing myriad individual neurons, as is done in neural-network simulations. There is steadily growing interest in applying mean-field methods to the challenge of understanding anesthesia; see Foster et al. (2008) and Steyn-Ross et al. (2011) for reviews.

The notion of neural fields dates from foundational work by Wilson and Cowan (1972) that modeled the brain as homogeneous populations of excitatory and inhibitory neurons. The first attempt at modeling propofol anesthesia, by Steyn-Ross et al. (1999), incorporated prolongation of the inhibitory postsynaptic response into the mean-field neural model of Liley et al. (1999); it predicted the possibility of multiple steady states, with distinct first-order phase transitions between activated (“conscious”) and inactivated (“unconscious”) states, and provided a possible explanation for the hysteretically separated biphasic power surges observed at loss and recovery of consciousness (Kuizenga et al. 2001).
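A minimal caricature of this multiple-steady-state mechanism is a single-population, Wilson-Cowan-style rate equation in which a scaling factor on the net inhibitory drive stands in for anesthetic prolongation of the IPSP. The parameters in the Python sketch below are illustrative (they are not those of the Liley or Steyn-Ross models), but they show how increasing inhibition can carry the population from a single active state, through a bistable region, to a single quiescent state:

```python
# Qualitative sketch only: a one-population rate model in which the anesthetic
# effect is caricatured as a scaling factor `lam` on the net inhibitory drive.
# All parameters are illustrative.
import numpy as np

def sigmoid(x, gain=10.0, theta=0.5):
    """Population firing-rate (activation) function."""
    return 1.0 / (1.0 + np.exp(-gain * (x - theta)))

def count_steady_states(lam, w_ee=1.0, g_inh=0.35, drive=0.66):
    """Count zero crossings of dE/dt = -E + S(w_ee*E - lam*g_inh + drive)."""
    E = np.linspace(0.0, 1.0, 2001)
    f = sigmoid(w_ee * E - lam * g_inh + drive) - E
    return int(np.sum(f[:-1] * f[1:] < 0))

for lam in (1.0, 1.9, 2.8):     # light -> intermediate -> deep "anesthesia"
    n = count_steady_states(lam)
    label = " (bistable: 2 stable + 1 unstable)" if n == 3 else ""
    print(f"inhibitory scaling {lam:.1f}: {n} steady state(s){label}")
```

In the bistable regime a slow drift in drug concentration produces a jump (first-order) transition, and the jump occurs at different concentrations on the induction and recovery sweeps, which is the qualitative origin of the hysteresis discussed above.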

Subsequent work by Bojak and Liley (2005) on isoflurane anesthesia showed that, for suitable choices of cortical parameters, a smooth descent into unconsciousness can also generate a biphasic drug response. Using an alternative mean-field model, Hutt and colleagues (Hutt and Schimansky-Geier 2008; Hutt and Longtin 2010) predicted that biphasic power surges can be expected for both the bistable (jump transition) and monostable (smooth) inductions of anesthesia.

General anesthetic agents are widely used to treat seizures, but paradoxically, some anesthetics (e.g., enflurane) can also provoke cortical seizures when the patient is deeply anesthetized. Liley and Bojak (2005) and Wilson et al. (2006) used mean-field modeling to show that subtle changes in the shape and duration of the drug-induced inhibitory postsynaptic response can explain why enflurane, but not isoflurane, is seizurogenic.

An important part of general anesthesia is the suppression of responses to noxious stimuli. A practical index of antinociception has been developed from a mean-field model (Liley et al. 2010) that informed construction of a noise-driven autoregressive moving-average (ARMA) filter whose output approximates the scalp-recorded EEG. The mean filter frequency tracks the level of propofol-induced hypnosis (“cortical state”), while the decrease in required noise intensity (“cortical input”) tracks the concentration of a coadministered analgesic agent (remifentanil). This computed “cortical input” signal is presumed to measure the stimulus, both noxious and normal, entering the cortex from the thalamus, and potentially allows hypnotic and analgesic drug effects to be distinguished.
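As a rough qualitative stand-in for such a noise-driven filter (this is not the fitted ARMA model of Liley et al.), the sketch below passes white noise through a second-order autoregressive resonator; the pole frequency plays the role of “cortical state” and the driving-noise amplitude the role of “cortical input”:

```python
# Rough stand-in only: white noise driving an AR(2) resonator as a minimal
# noise-driven filter model of the EEG. Pole frequency ~ "cortical state",
# noise amplitude ~ "cortical input"; all values are illustrative.
import numpy as np
from scipy.signal import lfilter, welch

FS = 250.0                                    # Hz, illustrative sampling rate

def pseudo_eeg(peak_hz, noise_amp, n_samples=25000, pole_radius=0.99, seed=0):
    """Drive an AR(2) resonator (spectral peak near peak_hz) with white noise."""
    rng = np.random.default_rng(seed)
    theta = 2.0 * np.pi * peak_hz / FS
    a = [1.0, -2.0 * pole_radius * np.cos(theta), pole_radius ** 2]   # AR(2) poles
    return lfilter([1.0], a, noise_amp * rng.standard_normal(n_samples))

for label, f_peak, amp in [("awake-like", 10.0, 1.0), ("deeply hypnotic", 2.0, 0.3)]:
    x = pseudo_eeg(peak_hz=f_peak, noise_amp=amp)
    freqs, psd = welch(x, fs=FS, nperseg=2048)
    print(f"{label}: spectral peak ~ {freqs[np.argmax(psd)]:.1f} Hz, rms = {x.std():.2f}")
```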

The unconscious states of anesthesia and of deepest natural sleep are both characterized by large-amplitude, slow (0.5–4 Hz) delta waves of EEG activity. The origin of these slow waves is unknown, but they are generally supposed to arise from gradual alternations between depolarizing and hyperpolarizing ionic currents. By introducing a slow ionic gating variable into a mean-field model for desflurane anesthesia, Molaee-Ardekani et al. (2007) demonstrated the emergence of realistic slow waves. A quite different slow-wave mechanism has been proposed by Steyn-Ross et al. (2013): if inhibitory gap junctions are included in the two-dimensional cortical sheet, then a Turing (pattern-forming) instability can interact with a weakly damped low-frequency Hopf instability to produce turbulent slow-wave activity across the cortex. Anesthetic-induced closure of inhibitory gap junctions (Wentlandt et al. 2006) is predicted to weaken the Turing instability in favor of the Hopf oscillation.
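The first of these mechanisms can be caricatured by coupling a fast mean firing-rate equation to a slow adaptation variable: the slow variable alternately pushes a bistable population between its high (“up”) and low (“down”) branches, yielding slow-wave cycling at roughly 1 Hz for the illustrative parameters below (this toy model is not the Molaee-Ardekani formulation):

```python
# Toy sketch: a fast population rate E coupled to a slow adaptation variable A,
# producing relaxation-oscillator up/down alternations. Parameters are illustrative.
import numpy as np

def sigmoid(x, gain=10.0, theta=0.5):
    return 1.0 / (1.0 + np.exp(-gain * (x - theta)))

tau_e, tau_a = 0.01, 0.25          # s: fast population rate, slow adaptation
w_ee, g_adapt, drive = 1.0, 0.5, 0.25
dt, t_end = 1e-4, 10.0
steps = int(t_end / dt)

E, A = 0.9, 0.1                    # start in an "up" state with low adaptation
E_trace = np.empty(steps)
for i in range(steps):
    dE = (-E + sigmoid(w_ee * E - g_adapt * A + drive)) / tau_e
    dA = (E - A) / tau_a
    E, A = E + dt * dE, A + dt * dA
    E_trace[i] = E

# Estimate the slow-wave frequency from up-going threshold crossings of E.
up = (E_trace[:-1] < 0.5) & (E_trace[1:] >= 0.5)
print(f"slow-wave frequency ~ {up.sum() / t_end:.1f} Hz")
```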

There has been interest in modeling specific details of the EEG spectral changes caused by various general anesthetic drugs, for example, the shift in alpha-band peak frequency induced by ketamine (Bojak et al. 2013) or propofol (Hindriks and van Putten 2012; Hutt 2013) and the burst–suppression pattern of deep anesthesia (Liley and Walsh 2013).

Increasingly there has been a realization that general anesthesia may disrupt neuronal networks in an anatomically specific fashion (Kuhlmann et al. 2013; Lee et al. 2013) and that current homogeneous, isotropic neuronal population models might need to include aspects of network topology. This has led to attempts to link EEG patterns probabilistically with the underlying anesthetic effects on inhibitory and excitatory neuronal groups, which should provide a quantitative basis for the estimation of model parameters. At an abstract level, dynamic causal-modeling methods have been employed (Moran et al. 2011; Boly et al. 2012), but a more direct Bayesian approach, already used for natural sleep (Dadok et al. 2013), could be applied to the anesthesia EEG.

Cross-References