
1 Introduction

The discovery of anesthesia has been recognized as one of the most significant advances in the history of mankind. Massachusetts General Hospital (MGH) surgeon Henry Bigelow, the assistant surgeon at the first demonstration, claimed that anesthesia was “medicine’s greatest single gift to suffering humanity” (Viets 1949). The success of inhalation anesthesia received widespread attention, interest, and rapid adoption in the practice of surgery because it was the only successful method of pain relief available for surgical patients. Other routes of administration for general anesthesia, as well as local anesthesia, would later also have a significant impact on pain relief. This chapter summarizes the history of general and local anesthesia and neuromuscular blockade and concludes with the concept of balanced anesthesia.

2 Inhalation Anesthetic Agents

Inhalation anesthesia is frequently what comes to mind when the origins of anesthesia are discussed. Although the pioneering presentation of ether by William Morton in 1846 is well documented, it is also important to consider the preceding years and scientific discoveries that set the stage for such a monumental achievement.

The earliest attempts at inhalation anesthesia date back to the Middle Ages, when mixtures of mandrake, henbane (i.e., a source of scopolamine), and other hallucinogens were consumed (Carter 1999). Although some believe that these substances were most likely taken by mouth, references to inhaled vapors suggest that this represented primitive inhalation anesthesia. Despite the discovery of these hallucinogenic alkaloids, it would be centuries before true anesthesia, in which patients remain immobile after noxious stimulation, was actually achieved.

In 1275 CE, the well-known alchemist Raymond Lully created a solution of ethyl ether, named the “sweet oil of vitriol,” by mixing sulfuric acid and wine (Davison 1949). This discovery was recapitulated a few hundred years later when the collective writings of German physician Valerius Cordus, describing similar solutions, were published by Conrad Gesner in 1561 (Leake 1925a). These well-documented accounts of “sweet oil of vitriol” have led some historians to credit Cordus as the discoverer of ether. Around the same time, Paracelsus, a Swiss alchemist and physician, may have also discovered ethyl ether’s properties, or he may have learned of them from Cordus during his visits to Nuremberg and Leipzig. Paracelsus went on to describe the incapacitating and transient effects of ether on chickens but failed to realize its potential surgical implications before dying in 1541 (Leake 1925a). A few years later, the Italian philosopher and polymath Giambattista della Porta, also known as John Baptista Porta, contributed a collection of writings known as “Natural Magick,” published in 1558, in which he described the ability to induce a “profound sleep” after inhaling certain vapors (Porta 1658). These historic references to inhalation anesthesia set the stage for what was to follow in the eighteenth and nineteenth centuries.

The next breakthrough came from Joseph Priestley, an English minister, who in 1772 heated iron filings with nitric acid, producing nitrous oxide. He called this gas “dephlogisticated nitrous air.” In the ensuing years, Priestley discovered other gases or “airs,” including carbon dioxide (1772), named “combined fixed air”; ammonia (1773), named “alkaline air”; and finally sulfur dioxide (1774), named “vitriolic air” (West 2014a). Samuel Mitchell, an American physician and politician, described the anesthetic effects of nitrous oxide in 1795 (Bergman 1985). He also suggested that nitrous oxide might represent the contagion responsible for all infectious disease (Riegels and Richards 2012). Mitchell’s writing caught the attention of Humphry Davy, an English chemist and inventor, whose skepticism of Mitchell’s contagion theory led him to investigate nitrous oxide himself (Bergman 1985). In 1798, at the invitation of Thomas Beddoes of England, Davy began working at Beddoes’ Pneumatic Institute, devoted to treating pulmonary tuberculosis (Leake 1925b). Davy became the first to demonstrate that gases could be liquefied by pressurizing them (Zimmer 2014). Davy also discovered the potential for nitrous oxide to eliminate pain during surgery in 1800, and he subsequently described his personal use of nitrous oxide for wisdom tooth pain (Riegels and Richards 2012). Davy also experimented with inhaling carbon dioxide, demonstrating that a 30% concentration could produce narcosis associated with “a degree of giddiness and an inclination to sleep” (Riegels and Richards 2012). These changes in sensation supported the hypothesis that gases had the ability to alter human consciousness.

In 1805, a Japanese surgeon named Seishu Hanaoka gave a patient an oral mixture of herbs that induced general anesthesia, allowing him to perform a successful mastectomy (Ball and Westhorpe 2011). Although not a gas, the oral elixir produced an anesthetic effect that facilitated a painless operation nearly 40 years before Morton’s demonstration of ether. Unfortunately, this discovery was not shared with the world, probably because Japan’s policy of national isolation under the Tokugawa shogunate prevented its release.

In 1818, Humphry Davy’s successor, the English scientist Michael Faraday, published an article describing similarities between the effects of ether and nitrous oxide (Bergman 1992). Faraday suggested that ether had recreational effects similar to those of nitrous oxide. In the 1830s, recreational ether use, referred to as “ether frolics,” became a popular form of social entertainment (Short 2014). Despite the sensory-altering effects of these gases, it was not yet appreciated that they could be used to eliminate pain during surgery. With ether and nitrous oxide both under study by scientists and in recreational use, another inhaled agent, chloroform, was discovered by Frenchman Eugene Soubeiran in 1831; German scientist Justus von Liebig independently discovered chloroform in 1832 (Kyle and Shampo 1997). However, the idea of using chloroform for anesthesia would not be realized until after the demonstration of ether in 1846. In January of 1842, American medical student William Clark recorded the first use of ether for a dental extraction, and in March of the same year, American surgeon Crawford Williamson Long used ether to painlessly remove a tumor from the neck of his patient, James Venable (Desai et al. 2007).

On December 10, 1844, Gardner Colton, who had dropped out of medical school to put on nitrous oxide demonstrations, displayed the effects of nitrous oxide on several volunteers in Hartford, Connecticut. A practicing dentist of Hartford named Horace Wells was present for the demonstration. After observing a volunteer injure himself with little evidence of pain, Wells saw the potential for analgesia and anesthesia (Desai et al. 2007). Dental surgeons, at that time, were desperate for remedies to make tooth extractions less painful. Horace Wells then arranged for one of his own molars to be extracted the next day by John Riggs with Gardner Colton administering nitrous oxide (Desai et al. 2007; Smith and Hirsch 1991; Haridas 2013). After a few weeks of practicing with nitrous oxide on 12–15 patients, Wells traveled from Hartford, CT, to Boston, MA, in January of 1845 in hopes of demonstrating his discovery to physicians there (Haridas 2013).

As an aside, Wells initially trained as an apprentice under Nathan Cooley Keep, a revered dental surgeon in Boston, before returning to Hartford to start his practice around 1836 (Beecher and Ford 1848); there were no formal dental schools at that time. Morton, who was 4 years younger than Wells, first met Wells in Massachusetts while Wells was on a business trip, before Morton had become interested in dentistry (Beecher and Ford 1848). In 1841, when Morton was 21 years old, he sought to study dentistry and went to Hartford, CT, where he learned from Wells, as noted by the payment Wells documented in his day book (Archer 1944). Wells and Morton eventually partnered in Hartford, and when Wells invented a noncorrosive dental solder in 1843, they went to Boston to promote it and open another office (Beecher and Ford 1848). Their partnership dissolved in 1844, but they remained friends, and following in Wells’ footsteps, Morton would also seek out Nathan Cooley Keep for special apprentice-style training.

Morton would eventually help connect Wells to John Collins Warren, the surgeon-in-chief at Massachusetts General Hospital (MGH), who agreed to introduce Wells to his class of Harvard medical students after one of his lectures so that Wells could demonstrate nitrous oxide (Smith and Hirsch 1991; Haridas 2013; Guralnick and Kaban 2011). Unfortunately, the nitrous oxide was withdrawn too soon from the student volunteer, who admitted to feeling pain as Wells extracted one of his teeth (Urman and Desai 2012). According to Morton, the audience “laughed and hissed,” leaving Wells utterly embarrassed (Haridas 2013). This failure would eventually cause Wells to spiral into depression before taking his own life in 1848 (Guralnick and Kaban 2011).

Despite Wells’ failed demonstration, Morton probably realized the potential of what Wells had attempted (Beecher and Ford 1848; Guralnick and Kaban 2011). Morton’s mentor Nathan Cooley Keep, the most respected and skilled dental surgeon of the time in Boston, likely encouraged him to continue using ether, and Keep likely called upon his friend and Harvard Professor of Chemistry, Charles Thomas Jackson, for help (Guralnick and Kaban 2011; Urman and Desai 2012; Kaban and Perrott 2020). Jackson is believed to have suggested to Morton that he should use sulfuric ether instead of commercial ether (a common cleaning fluid) to produce anesthesia (Guralnick and Kaban 2011; Lopez-Valverde et al. 2011). Morton experimented with ether, trying it in animals and eventually on a patient, Ebenezer H. Frost, from whom he successfully removed a tooth on September 30, 1846 (Guralnick and Kaban 2011; Urman and Desai 2012). Morton wanted to keep the identity of the gas a secret in hopes of collecting royalties later, but MGH surgeon-in-chief Dr. John Collins Warren resisted this by delaying the initial demonstration until Morton revealed the name of the drug to him (Guralnick and Kaban 2011). Finally, on October 14, 1846, Warren invited Morton to MGH to demonstrate his anesthetic technique (Guralnick and Kaban 2011; LeVasseur and Desai 2012). Morton, who only had 2 days to prepare, scrambled until the last minute to finalize his conical glass inhaler with the help of local instrument maker Joseph Wightman (Viets 1949). Ultimately, he arrived late for the 10 AM demonstration but successfully anesthetized the patient, Edward Gilbert Abbott, for the ligation of the feeding vessels of a congenital vascular malformation performed by Warren (Vandam and Abbott 1984) (Fig. 1). Warren famously turned to the audience and uttered, “Gentlemen, this is no humbug” (Leake 1925b; Guralnick and Kaban 2011). However, some physicians remained skeptical of Morton’s technique given the superficial nature of the operation, so Morton provided ether again for the removal of a fatty tumor, but doubters remained. Morton again provided anesthesia on November 7, 1846, this time for a knee amputation performed by surgeon George Hayward, and his success finally convinced the skeptics and validated Morton’s work at last (Guralnick and Kaban 2011).

Fig. 1 Robert Cutler Hinckley’s 1893 oil-on-canvas painting The First Operation with Ether (Reprinted with permission from the Harvard Medical Library in the Francis A. Countway Library of Medicine, Boston, Massachusetts)

Morton and Jackson submitted their joint patent for “surgical insensibility by means of sulphuric ether” on November 12, 1846 (Yang et al. 2018). Eventually, Morton, Jackson, and Wells would all oppose each other in search of recognition for the roles they each played in the discovery of ether anesthesia (Leake 1925b). A few years later, in 1849, Morton appealed to Congress for a $100,000 grant for his contributions to ether anesthesia, but opposition from Keep, as well as Jackson, would ultimately foil Morton’s chances of financial remuneration (Leake 1925b; Guralnick and Kaban 2011).

On November 11, 1846, less than a month after Morton’s initial demonstration, ether was being used in Scotland for amputations (Viets 1949). Soon after that, Robert Liston, the preeminent surgeon in London, used ether for surgery with success and much surprise (Pieters et al. 2015). On January 19, 1847, Scottish obstetrician James Simpson became the first person to use ether for labor (Dunn 1997). On April 11, 1847, dentist Nathan Cooley Keep became the first to use ether for obstetric anesthesia in the United States. He administered ether to the wife of American poet Henry Wadsworth Longfellow for the delivery of their daughter Fanny in their home on Brattle Street in Cambridge, MA (Guralnick and Kaban 2011; Dunn 1997). Later that year, on November 4, 1847, Simpson and colleagues experimented with various vapors in search of something less pungent for pregnant patients and came upon chloroform (Kyle and Shampo 1997). Simpson soon popularized chloroform, making it the British anesthetic of choice, while ether remained the preferred anesthetic in the United States (Kyle and Shampo 1997). John Snow, an English physician known as the father of epidemiology for his work identifying the water pump that was the source of the cholera epidemic in London, learned about these demonstrations of ether and chloroform and began studying anesthesia himself (Leake 1925b). In 1847, he described the five stages of anesthesia (Thornton 1950), and in 1853, he administered chloroform to Queen Victoria during childbirth, helping to end moral opposition to the relief of pain and generating greater acceptance of anesthesia use (Kyung et al. 2018).

Inhaled anesthetics spread globally within a year of the first demonstration, carried by ships’ captains and doctors, driving innovation and scientific investigation (Ellis 1976). Second-generation anesthetic gases eventually followed, including ethyl chloride (1894), ethylene gas (1923), and cyclopropane (1933) (Whalen et al. 2005). In the 1940s, owing to the covert Manhattan Project, attention turned to fluorine chemistry, leading to the production of fluorinated anesthetics including halothane (1951), methoxyflurane (1960), and enflurane (1963). Enflurane was followed by isoflurane in the 1980s, which was eventually supplanted by sevoflurane and desflurane in the 1990s (Wang et al. 2020). In the ensuing years, a growing understanding of the mechanisms of action and metabolism of the inhaled anesthetics, patient factors, and the effects of the type of surgery being performed has guided the indications for the various anesthetic gases.

3 Intubation and Inhaled Anesthesia Technology

In addition to the pharmacologic discovery of inhaled anesthetics, it is also important to consider the technological advances that kept pace with, and occasionally drove, innovation in inhaled anesthesia.

The development of techniques for surgical airways dates back as early as 3600 BCE, as depicted in Egyptian hieroglyphs showing healing tracheostomy wounds (Rajesh and Meher 2012). Although the Greek physician Galen described in the second century CE the necessity of breathing to keep the heart beating, it was not until 1543, when Andreas Vesalius described opening the trachea of an animal to provide ventilation, that interest in the subject increased (Slutsky 2015). In 1546, the first successful surgical airway on record was performed by Antonio Musa Brassavola for a tonsillar obstruction; however, the term “tracheotomy” was not coined until Thomas Fienus first used it in 1649 (Rajesh and Meher 2012). A few decades later, in 1667, English scientist, philosopher, and polymath Robert Hooke demonstrated that blowing fresh air into the lungs of dogs that were not breathing was life-sustaining (West 2014b). Thus, the movement of the lungs was not itself essential to life, nor was it what drove the movement of blood through the lungs or body. This realization suggested that life could be sustained by air exchange in the lungs alone, whether achieved by natural or artificial means.

In 1754, an English obstetrician named Benjamin Pugh described one of the first airway devices: an air-pipe made of tightly coiled wire for resuscitating neonates (Szmuk et al. 2008; Baskett 2000). In 1760, Buchan used an opening in the windpipe to aid in human resuscitation (Szmuk et al. 2008). Later, in 1788, Englishman Charles Kite introduced a curved metal cannula into the trachea of drowning victims to help resuscitate them (Szmuk et al. 2008).

In 1829, English physician Benjamin Guy Babington published on his “glottoscope,” which consisted of a tongue depressor speculum to retract supraglottic tissues out of the way and a series of mirrors used to visualize the larynx (Pieters et al. 2015). The term “laryngoscope” was adopted later by his colleague, the physician Thomas Hodgkin, who is best known for his work on Hodgkin’s disease (Pieters et al. 2015). Ultimately, the direct laryngoscope would be developed in 1910 by American physician Chevalier Jackson (Pieters et al. 2015).

In 1874, Jacob M. Heiberg, a surgeon from Norway, described the jaw thrust maneuver for opening the airway (Matioc 2016). In 1876, Alfred Woillez developed a manual ventilator, which was later superseded by the iron lung (Slutsky 2015). In 1885, the SS White Company patented the first anesthesia machine, which used high-pressure cylinders of oxygen and nitrous oxide (Bause 2009). In 1893, Austrian physician Victor Eisenmenger described using an inflatable cuff around an endotracheal tube, paving the way for the endotracheal tube designs used today (Gillespie 1946). In 1967, English physician Ian Calder performed the first fiber-optic bronchoscopy (Pieters et al. 2015), and in 2001, Canadian surgeon John Pacey invented the first commercially available video laryngoscope, known as the GlideScope (Pieters et al. 2015).

In addition to these technical advances, in 1895 Harvey Cushing and Amory Codman, both Harvard medical students, first proposed keeping an anesthesia record that included information such as the pulse, respiratory rate, depth of anesthesia, and amount of anesthetic (i.e., ether) given (Fisher et al. 1994), a practice that remains standard today.

4 Parenteral Anesthesia

Some of the most important and most frequently used anesthetics today are those administered intravenously. While the historical timelines of inhalation, local, and parenteral anesthesia can be thought of as parallel threads, at times unfolding simultaneously, the discovery of parenteral anesthesia arguably began earlier than the rest.

It seems most appropriate to start the story of parenteral anesthesia with the establishment of intravenous access. The earliest record of intravenous medication administration appears to date back to 1656, when the English architect Christopher Wren performed a cutdown to access the leg vein of a dog and delivered ale using a goose quill as the needle and an animal bladder as the syringe (Dorrington and Poole 2013; Dagnino 2009); the dog was left transiently senseless before regaining full consciousness and surviving. Johann D. Major, a German graduate of Padua University, tried this technique on humans in 1662 (Barsoum and Kleeman 2002; Foster 2005), but the resulting mortality paralyzed the scientific advancement of the technique for more than a century and a half, until Adam Neuner developed a syringe in 1827 while studying cataract surgery (Blake 1960). From 1827 onward, the concentrated efforts of many individuals to improve syringe and needle design were paramount to the future of both local and parenteral anesthesia.

4.1 Opioids

General anesthesia has been defined as a state in which a patient is rendered amnestic, unconscious, immobile, and free of pain (Dodds 1999). Although inhalation anesthetics produce rapid unconsciousness with rapid recovery, opioids in high enough doses can also produce similar anesthetic effects, though often with a longer recovery period. Opioids are effective because of their strong analgesic effects, which blunt the pain inflicted by surgical incision, dissection, and manipulation, thereby reducing the reflex stress response to pain that results in withdrawal from the stimulus, tachycardia, and hypertension. As a result, opioids are powerful anesthetics or anesthetic adjuncts, depending on their dose and the circumstances of their use.

Opium is a substance derived from the poppy plant and has been known since 3000 BCE in Mesopotamia (Brownstein 1993; The History of Opiates | Michael’s House Treatment Center 2020). After thousands of years, morphine was extracted from opium by Friedrich Sertürner in 1806 (Schmitz 1985). A few years later, in 1832, codeine was identified as an impurity associated with morphine and isolated for use as an analgesic drug (Eddy et al. 1968). In 1898, heroin (diacetylmorphine) was commercialized by the Bayer company (Leverkusen, Germany), which was also involved in the discovery of aspirin around the same time (Sneader 1998). Heroin was initially marketed as a cough suppressant with a presumed lower risk of addiction compared to morphine. However, a decade later, concerns about addiction and drug dependence began to challenge the drug’s acceptance, and heroin was banned in the United States in 1924 (Sneader 1998).

In 1921, hydromorphone was discovered in Germany and found its way into clinical medicine by 1926 (Murray and Hagen 2005). In 1939, meperidine was synthesized (Batterman and Himmelsbach 1943), at about the same time that the long-acting opioid methadone was synthesized (Fishman et al. 2002). In 1960, fentanyl was synthesized for the first time, and the synthesis of other fentanyl-like medications followed: sufentanil, alfentanil, and remifentanil (Stanley 2014). Each of these medications was integrated into the practice of anesthesia, providing the analgesia component of the balanced anesthesia strategy.

5 Sedative Hypnotics and Other Intravenous Anesthetics

One of the first medications designed for intravenous use was the hypnotic chloral hydrate. It was synthesized by German scientist Justus von Liebig in 1832 but was not introduced into medicine until another German scientist, Oscar Liebreich, did so in 1869 (López-Muñoz et al. 2005).

In 1864, German chemist Adolf von Baeyer synthesized malonylurea, the first of a new class of medications known as barbiturates (Cozanitis 2004). This eventually led to the synthesis of diethyl-barbituric acid in 1881 and its introduction into clinical medicine in 1904 as the first clinically used barbiturate hypnotic (López-Muñoz et al. 2005). With diethyl-barbituric acid as the parent molecule, many other iterations followed, including phenobarbital, synthesized in 1911 by German scientist Heinrich Horlein (López-Muñoz et al. 2005). Many other barbiturate variants were developed, including butobarbital (1922), amobarbital (1923), secobarbital (1929), pentobarbital (1930), and hexobarbital (1932) (López-Muñoz et al. 2005). Thiopental was synthesized from pentobarbital by substituting a sulfur atom for the oxygen at position 2, introducing a new class of medications known as the thiobarbiturates; this class of drugs was first used clinically by Ralph Waters in 1934 (López-Muñoz et al. 2005). The addition of sulfur reduced the excitatory muscle movements seen when the sulfur-free oxybarbiturate hexobarbital was administered.

After the Second World War, the search for shorter-acting barbiturates resulted in the discovery of methohexital (López-Muñoz et al. 2005). A distinctive property of methohexital is the excitability it produces on electroencephalograms, in contrast to the depressing effects of other barbiturates. This characteristic made it a useful agent for anesthesia during electroconvulsive therapy (Kadiyala and Kadiyala 2017).

Propofol was discovered in 1973 by Scottish veterinarian John Baird Glen (Glen 2018). Propofol has the advantages of fast onset and decreased postoperative nausea and vomiting. As a result, it has become one of the most widely used anesthetic medications, often administered without inhalation anesthetics as part of total intravenous anesthesia (TIVA) (White 2008). The introduction of propofol in 1989 and its subsequent popularity have significantly diminished barbiturate use in anesthesia.

Other medications important to parenteral anesthesia include the benzodiazepines. The first benzodiazepine, chlordiazepoxide (also known as Librium®), was discovered by Leo Sternbach, a Polish-American chemist working at the Hoffmann-La Roche pharmaceutical company, and introduced in 1960 (López-Muñoz et al. 2011). Additional studies aimed at simplifying the side chains of the chlordiazepoxide molecule yielded further benzodiazepines, including diazepam (1959), oxazepam, alprazolam, triazolam, and midazolam. These medications have been utilized in anesthesia for their amnestic, anxiolytic, and hypnotic properties (López-Muñoz et al. 2011).

Another parenteral anesthetic currently in use is ketamine. It was synthesized by Calvin Stevens in 1962 in an effort to reduce the side effects of phencyclidine (PCP) and was found to lack the cardiac and respiratory depression seen with barbiturates. However, emergence delirium, excitability, and the drug’s addictive potential have restricted its use. Situations that require brief sedation, e.g., injured pediatric patients in the emergency room, remain prime opportunities to use ketamine effectively (Gao et al. 2016). Additionally, etomidate, a rapid-acting anesthetic, was discovered by Janssen Pharmaceuticals in 1972 (Forman 2011). It has the benefit of rapid onset, but unfortunately it has been associated with adrenal suppression, largely relegating it to use as an induction agent for rapid sequence induction (RSI).

More recently, anesthetic drug discovery has yielded dexmedetomidine, which was approved by the FDA in 1999 for the sedation of patients in intensive care units (Gertler et al. 2001). Its use was broadened to include surgical patients in 2008, as it provides sedation and decreases sympathetic output by stimulating central alpha-2 receptors (Kaur and Singh 2011).

6 Neuromuscular Blockers

In addition to unconsciousness and analgesia, general anesthesia also requires patient immobility and muscle paralysis (Dodds 1999). Although inhalation anesthetics are useful for producing unconsciousness and opioids are best at reducing pain, neuromuscular blockers are superior at rendering patients immobile and paralyzing contractile tissues to facilitate surgical manipulation. This permits better conditions for performing sophisticated operations and reduces overall operating time. Neuromuscular blockers therefore add a crucial component to the general anesthesia regimen.

Some of the earliest published accounts related to parenteral anesthesia concerned neuromuscular blocking medications, also known as paralytics, and date back to around 1500 CE. In 1516, Peter Martyr d’Anghera, a historian from Spain, relayed stories from those who had visited the New World describing the puzzling “flying death,” a reference to the poison known as curare that was used by natives (Raghavendra 2002). Wars in Europe stalled further exploration of curare’s potential until 1735, when Charles de la Condamine, a French explorer, observed Ecuadorian natives hunting animals with curare-dipped darts shot from blowpipes (Fernie 1964). The acquisition of curare was the first step toward discovering the potential of neuromuscular blockers.

Curare was then tried in animals, including rabbits, cats (Raghavendra 2002), and donkeys (Birmingham 1999), which survived because artificial ventilation was provided by bellows inserted into their airways. In 1857, curare’s action as a blocker of the neuromuscular junction was discovered (Bowman 2006), and in 1912, German surgeon Arthur Lawen became the first to use paralytics in surgery (Czarnowski and Holmes 2007). Lawen reported that the paralytic curarine (an extract of gourd curare), in combination with ether or chloroform, produced a degree of abdominal wall muscle relaxation unachievable with other medications (Foldes 1995). In the 1930s, curare was purified and branded under the name Intocostrin, also known as d-tubocurarine (Ball and Westhorpe 2005), and in 1942, Intocostrin was used on a patient for the first time, officially inaugurating neuromuscular blockade into clinical practice (Sykes 1992).

In 1946, English researcher Frederick Prescott described his frightening experience as the first human to voluntarily receive tubocurarine alone, without any other anesthetic agents, after which he reported being paralyzed but still able to feel pain (Prescott et al. 1946). Prescott’s research also found that d-tubocurarine reduced the shock-like state that often occurred with spinal anesthesia, produced muscle relaxation like that of ether without prolonged postanesthetic recovery and vomiting, and saved time compared with time-consuming nerve blocks (Prescott et al. 1946).

As the pharmacology of neuromuscular blockers became more robust, so did the infrastructure that would ultimately help ventilate the paralyzed patient during surgery. In 1838, Scottish physician John Dalziel developed the first negative pressure respirator, known as the tank respirator (Kacmarek 2011). In 1911, Johann Heinrich Draeger introduced the first positive pressure ventilator, known as the pulmotor (Kacmarek 2011). Paralytics and ventilators coevolved, as each was necessary for the other’s success.

In the mid-twentieth century, combinations of drugs to produce anesthesia became more popular given the growing number of medications from which to choose. In 1946, Thomas Cecil Gray, an English anesthetist, presented this idea, known as “balanced anesthesia,” to the Royal Society based on experience with 1500 patients (Shafer 2011). He described inducing anesthesia with an intravenous agent, giving curare to provide relaxation and to reduce the barbiturate requirement, and using an inhaled agent for anesthesia maintenance. His idea was to combine several drugs to create a more advantageous overall effect and outcome, and from these descriptions the modern multimodal anesthetic approach as we know it was born.

With balanced anesthesia techniques now realized, scientists and clinicians turned to newer anesthetic agents and neuromuscular blockers. A depolarizing paralytic called suxamethonium, also known as succinylcholine, was introduced into clinical medicine in 1951 (Raghavendra 2002). In 1964, the non-depolarizing paralytic pancuronium was discovered (Raghavendra 2002) and essentially replaced curare for producing neuromuscular blockade. A number of other paralytics followed, notably vecuronium (1973) (McKenzie 2000), atracurium (1981) (Raghavendra 2002), mivacurium (1984) (Savarese et al. 2004), and rocuronium (1994) (Succinylcholine vs. Rocuronium: Battle of the RSI Paralytics - JEMS 2020). In addition, the Train-of-Four monitor, invented in 1972, allowed clinicians to measure the degree of neuromuscular blockade at any given time (McGrath and Hunter 2006; Ali’s “train of Four” | Wood Library-Museum 2020), providing even greater control of paralysis during surgery and anesthesia. Despite the availability of neostigmine (co-administered with glycopyrrolate) for reversal of muscle relaxation, a new reversal agent known as sugammadex was discovered in 2001 (Welliver et al. 2008), approved in Europe in 2008 (The Development and Regulatory History of Sugammadex in the United States - Anesthesia Patient Safety Foundation 2020), and finally approved in the United States in 2015 (Drug Trial Snapshot: BRIDION | FDA 2020).

As the practice of anesthesia evolves and medications become more targeted, the concept of balanced anesthesia is more relevant than ever. Thomas Cecil Gray’s concept of balanced anesthesia remains the guiding principle of modern anesthesia and serves as the basis on which we now seek to optimize drug combinations and minimize drug side effects (Shafer 2011).

7 Parenteral and Local Anesthesia Technology

As previously mentioned, the 1820s were transformative years for local anesthesia with the arrival of early precursors of the hypodermic syringe (Blake 1960). In 1827, Adam Neuner developed a syringe-like apparatus through which he was able to inject fluid into the eyes of cadavers to study and practice cataract surgery (Blake 1960). However, this design included a central stylet that had to be removed before fluid could be injected, requiring additional steps to operate (Blake 1960). Not long after, in the 1830s, French physicians were treating neuralgia in humans by pushing morphine paste down grooved trocars, which functioned as rudimentary syringes (Lawrence 2002). In 1836, vascular nevi were treated by injecting an irritating chemical beneath the skin, first by lancing the skin and then pushing the chemical beneath it with the blunt tip of a syringe (Blake 1960), representing yet another attempt at hypodermic injection. Eventually, in 1844, Francis Rynd of Dublin developed a hollow needle in the form of a cannula containing a slender retractable trocar used to breach the skin, marking the first hypodermic needle (Lawrence 2002). With this design, narcotic liquid flowed by gravity and was administered under the skin as the cannula was withdrawn, a functional but not ideal arrangement.

In 1853, Daniel Ferguson, a surgical instrument and truss maker in London, devised a new syringe design consisting of a glass tube containing an internal plunger and piston (Blake 1960). The syringe ended in a narrow conical platinum tube with an oblique opening just proximal to the trocar-like tip. Inside that narrow platinum tube was a second, slightly shorter tube, also with an oblique opening, which could be aligned with the outer opening when the outer tube was rotated to the correct position (Blake 1960; Duce and Hernandez 1999). This design did away with the need for a removable trocar to puncture the skin before fluid could be administered. Ferguson’s design was modified by Cooper Forster, a surgeon in London, who added indicators to signal when the aperture was open or closed (Blake 1960). Later in 1853, Edinburgh physician Alexander Wood further modified Ferguson’s design by calibrating the barrel and creating a threaded tip on the end of the syringe for attaching a hollow needle with a beveled point (Duce and Hernandez 1999). Wood’s needle, which could pierce the skin without prior lancing or a trocar, together with his syringe design published in 1855, earned him credit for developing the hypodermic technique. Notably, French surgeon Charles Gabriel Pravaz, who was simultaneously developing a hollow metal needle in 1853, narrowly trailed Wood for the honor of pioneering the original hypodermic syringe (Lawrence 2002). Interestingly, the term “hypodermic” was not coined until 1865, when it was proposed by Charles Hunter, who also garnered fame for realizing that injecting morphine locally caused systemic pain relief, in contrast to Wood, who thought the effects were only local (Howard-jones 1947).

In 1867, as a component of his ongoing work on antisepsis for the prevention of wound and surgical site infection, Joseph Lister described the successful use of carbolic acid for surgical wounds, which improved both mortality and morbidity (Schlich 2012; Pitt and Aubin 2012). Lister’s use of carbolic acid is believed to have extended to surgical instruments as a means of cleaning them (Craig 2018; Lister 1870). Later, the idea of using pressurized steam to sterilize instruments led to the first autoclave, introduced in 1879 by Charles Chamberland, an associate of Louis Pasteur (Harvey 2011). Following this invention, in the 1880s and 1890s, Lister’s assistant Ernst Tavel and Swiss physician Theodor Kocher advocated the use of pressurized steam to sterilize instruments, and eventually hypodermic needles (Schlich 2012; Maclachlan 1942).

However, even 50 years later, in the early 1900s, only about 1.8% of the 1039 commonly used drugs in the United States were injectable, a small market for syringe use. After the discovery of insulin in 1921 and its subsequent clinical use, the number of parenterally administered drugs increased, making the need for a reliable delivery system critically important (Lawrence 2002). Needles were reused at the end of the nineteenth century and through the first half of the twentieth century, and despite attempts at steam sterilization, they were difficult to clean, leading to complications such as cellulitis with reuse (Craig 2018). Attempts to clean these needles included inserting a small wire to debride the lumen, followed by passing the needle through an alcohol flame before use, soaking it in carbolic solution and then cleaning it with alcohol, or boiling it in water for a few minutes (Hampton 1893). In 1946, the first disposable syringe, made of glass with interchangeable parts, was developed by brothers Robert Lucas and William Chance (Kantengwa 2020). In 1949, Arthur E. Smith patented the first disposable glass hypodermic syringe in the United States, eliminating the need to sterilize and reuse syringes (Levy 2020). In 1955, Roehr Products (Waterbury, CT) introduced the first plastic disposable hypodermic syringe (Levy 2020), which was in common use by the 1960s (Kravetz 2005). The 1950s also brought the introduction of many single-use items in medicine, including needles (Greene 1986). Disposable syringes and needles were also mass-produced for the polio vaccination program led by Dr. Jonas Salk, solidifying their utility in medicine (Levy 2020). Since the 1950s, disposable hypodermic syringes and needles have become the standard of care for administering drugs parenterally, preventing entry site and hematogenous infection.

8 Local Anesthesia

8.1 Local Anesthesia Drugs

Local anesthesia has become one of the most commonly used methods for alleviating the pain of surgical procedures and injuries. The injection of local anesthetic agents is ubiquitous in medicine, dentistry, and other areas of health care, from operating rooms to outpatient clinics and private offices, as well as in the prehospital management of injured patients.

Some of the earliest attempts at pain relief were described by Greek surgeons applying anodyne and astringent pastes to wounds during the siege of Troy around 1250 BCE (Zorab 2003a). Around 50 CE, a Greek physician named Pedanius Dioscorides, who eventually wrote a five-volume book on medicine called De Materia Medica, described mixing Memphitic stone and henbane seeds to smear onto a surgical site prior to the operation (Belfiglio 2018). This anesthetic paste is thought to have released carbonic acid, producing a cold or “freezing” effect that anesthetized the operative site; this was the first topical anesthetic (Zorab 2003b; Bhimana and Bhimana 2018).

In 1539, the potential use of the coca leaf as an anesthetic agent was first described by Friar Vicente de Valverde, the bishop of Cuzco (Calatayud and González 2003a). Peruvian literature suggests that coca leaves were chewed and spit into the wounds of patients to alleviate pain (Chivukula et al. 2014). The mechanism of cocaine’s local anesthetic action was not understood, but its effects were clearly recognized in the way it was used for pain relief. In 1653, the potential anesthetic properties of coca were revealed by Spanish Jesuit Bernabe Cobo in a paper describing the alleviation of a toothache by chewing coca leaves (Calatayud and González 2003b).

In 1807, Dominique Larrey, Napoleon’s surgeon during the bloody and cold Battle of Eylau (in present-day western Russia), described the numbing effect of cold snow in producing local anesthesia and reducing the pain of amputations (Zimmer 2014). Although Larrey’s tactic was rudimentary, there were few alternatives readily available, as intravenous access was not yet in use. In the 1820s, with the advent of early precursors of the hypodermic syringe, the technology finally caught up to allow localized delivery of medication beneath the skin for analgesic effect (Blake 1960).

In the mid-1860s, before the local anesthetic effects of cocaine had been appreciated for clinical use, British physician Benjamin Ward Richardson used an ether spray to numb the skin (Leake 1925b). Later, Richardson’s ether spray was replaced by ethyl chloride, which evaporated more rapidly and produced a faster onset of anesthesia. Ultimately, this spray technique would propel topical anesthesia forward, setting the stage for subcutaneous local anesthesia to follow.

In 1859, about 200 years after Cobo’s paper on the numbing effects of coca leaves was published, German chemist Albert Niemann became the first to isolate pure cocaine, which he keenly noted caused numbness when placed on his own tongue (Redman 2011). Vassily von Anrep, a Russian physician also studying cocaine, described the effects of injecting cocaine into animals and commented afterward that it should be tested as a local anesthetic. Sadly, his astute recommendation was not followed, and his brilliant work went largely unnoticed (Yentis and Vlassakov 1999). Finally, in September of 1884, Carl Koller, an ophthalmology resident and roommate of Sigmund Freud, who was also a resident at the Vienna General Hospital, recognized the significance of cocaine’s local anesthetic potential (Goerig et al. 2012). After witnessing a colleague painlessly cut his tongue while licking cocaine off a knife, Koller appreciated the significance of the event and realized the potential of the drug. He soon tested a cocaine solution on frog corneas and demonstrated a decrease in sensation (Goerig et al. 2012). This work was presented to the scientific community shortly thereafter and was well received. From then on, cocaine use for local anesthesia grew rapidly, which also led to the development of many regional anesthetic techniques.

On December 6, 1884, Richard John Hall and William Stewart Halsted published a report describing the first nerve block, in which they used a 4% cocaine solution to anesthetize the inferior alveolar nerve of the mandible (Grzybowski 2008). Hall and Halsted went on to describe techniques of regional blockade in many other parts of the body, including the facial nerve, brachial plexus, and pudendal and posterior tibial nerves. In 1885, James Leonard Corning published the first report of spinal anesthesia using cocaine (Wulf 1998). The same year, Corning proposed using a tourniquet to slow the systemic absorption of local anesthetic (Giovannitti et al. 2013). It was not until 1903 that Heinrich Braun would modernize this concept by recommending the use of epinephrine as a chemical tourniquet instead (Giovannitti et al. 2013), a practice that is standard today.

However, despite cocaine’s growing popularity and use, by 1891 there had been 13 deaths and 200 cases of systemic intoxication, raising concern about the safety of locally injected cocaine (Murray 1979). Carl Ludwig Schleich developed standardized local anesthesia infiltration techniques by diluting the topical cocaine dose for hypodermic injection. This technique was safe and decreased cocaine-related mortality (Wawersik 1991). His results were presented at the 1892 Congress of the German Society for Surgery in Berlin, but his comments suggesting that infiltration anesthesia was potentially less dangerous than general anesthesia offended the surgeons in the audience. Eventually, however, the merit of his work was recognized, and it was adopted in Germany.

The increasing mortality from the toxic effects of cocaine, together with its addictive nature, led to a movement to identify alternative substances that could be used as local anesthetics. In 1890, Eduard Ritsert, a German chemist, synthesized benzocaine (Brock and Bell 2012). Unfortunately, its poor water solubility relegated it mainly to use as a topical anesthetic. In 1903, amylocaine (Stovaine) was introduced but was soon found to irritate nerves and was promptly replaced. Procaine, better known by its brand name Novocaine, was synthesized by Alfred Einhorn in 1904. Procaine remained the main anesthetic in dentistry and medicine until tetracaine was synthesized in 1928. However, tetracaine and procaine were both esters with allergic side effects and toxicities, in contrast to the better-tolerated amide compound lidocaine, discovered by Nils Lofgren and his assistant, Bengt Lundqvist, in 1943 (Giovannitti et al. 2013). Lidocaine underwent years of clinical testing before being introduced into practice in 1948 following FDA approval. Lidocaine’s tolerability would later make it one of the most commonly used local anesthetics, even today. In 1957, Bo af Ekenstam introduced two more local anesthetics, mepivacaine and bupivacaine (Calatayud and González 2003a; af Ekenstam et al. 1957). In 1960, Nils Löfgren and Claes Tegnér synthesized prilocaine (Löfgren and Tegnér 1960), and a few years later, in 1972, articaine was first reported in the literature by J. E. Winther (Winther and Nathalang 1972). A more recent development is the ultra-long-acting anesthetic liposomal bupivacaine (Exparel®), approved by the FDA in 2011 (Drug Approval Package: Brand Name (Generic Name) NDA 2020). Despite a decrease in the rate of local anesthetic discovery and innovation over the last several decades, the drive to improve the effects and success of local anesthesia continues to evolve.

9 Summary

In conclusion, the story of anesthesia is complex, with simultaneously evolving themes including inhalation, local, and parenteral agents, asepsis, technology, and neuromuscular blockers. All of these combine today to produce balanced anesthesia safely and appropriately for each patient. Undoubtedly, serendipity at times played a strong role in the discovery process, but history was also made by seizing the opportunities provided, as demonstrated by William Morton, and by thinking critically about the science at hand, as John Snow did. These factors have driven innovation in anesthesia over the last several hundred years and solved one of humanity’s greatest problems: pain during surgery.