
In the first two chapters, I discussed some of medicine’s heroes and their exploits and several of the diseases that have affected humans and our world since the beginning of history. There is another important collection of tales; these are about the drugs and other remedies that physicians and scientists have developed to combat the illnesses we all suffer. Some drugs, such as quinine, originally came from trees and plants; ergot and penicillin were discovered in fungi; conjugated equine estrogens (Premarin) were extracted from horse urine; and one of the compounds we use today—ammonia—was first derived from camel dung. Who discovered streptomycin was debated for half a century. Drawing laboratory notes from a hat decided the patent rights of isoniazid. In this third chapter, I discuss six historically noteworthy compounds—ranging from the opium derivatives to penicillin—and I will share some short stories about other familiar medications such as nitrogen mustard, warfarin, and nystatin.

Opium and Its Derivatives

Over the millennia, opium and its derivatives have been both a blessing and a curse for humankind. These drugs have eased the pain of traumatic wounds, surgery, and heart attacks and can also help control dysentery and relieve the dyspnea of heart failure. But on the dark side, the addictive properties of the opium poppy have ruined many lives (see Fig. 3.1).

Fig. 3.1

The opium poppy. Author: Zyance (https://commons.wikimedia.org/wiki/File:Mohn_z10.jpg)

We can trace the use of opium to Egypt in the second millennium BCE. Medieval Persian physician and scientist Avicenna (980–1037) called it the “most powerful of the stupefacients” (Porter, p. 194). Paracelsus (1493–1541) was an enthusiastic proponent of opium therapy. In the next century, Thomas Sydenham (1624–1689) wrote “Among the remedies which it has pleased Almighty God to give to man to relieve his sufferings, none is so universal and so efficacious as opium” (Fortuine, p. 312).

In the eighteenth century, Scottish physician George Cheyne (1671–1743) continued to praise opium, by then widely used to relieve pain, reduce fever, and combat diarrhea. Cheyne wrote “Providence has been kind and gracious to us beyond all Expression in furnishing us with a certain Relief, if not a Remedy, even to our most intense Pains and extreme Miseries” (Porter, p. 269).

In 1806 in Germany, F.W.A. Sertürner isolated morphine, which Fortuine describes as “one of the earliest chemically pure medications derived from plants to become available to physicians” (p. 315). In 1832, Pierre Jean Robiquet in Paris discovered another opium derivative—codeine. Later in the nineteenth century, heroin was introduced commercially in Germany. Heroin is the most effective analgesic of all the opiate derivatives and has also caused the most human suffering through misuse.

It was during the mid-nineteenth century, when opiate derivatives were being discovered, that the hypodermic syringe was developed by French surgeon Charles Gabriel Pravaz and Scottish physician Alexander Wood. This technical innovation resulted in increased use of morphine and later heroin—with an attendant rise in drug dependency. In fact, Wood became addicted to morphine and, according to some accounts, his wife was the first victim of an injected drug overdose [1].

The language of medicine often helps us follow the trail of medical thought and practice. Here are the etymologic origins of opium and its derivatives: The word opium is from the Greek opion, which refers to the juice of the poppy. Camphorated tincture of opium is called paregoric, from a Greek term denoting “consoling and soothing,” certainly an appropriate appellation for a medication that can staunch dysentery (Haubrich, p. 161). Laudanum comes from a Latin word describing a Cyprian shrub (Partridge, p. 340), although Weekley (p. 827) suggests that Paracelsus may have used the word to describe his opium-laced elixir because the word laud means “praise” in Latin.

From the Greek god of dreams, Morpheus, comes the word morphine, referring to its hypnotic properties. Next came codeine, from the Greek word kodeia, meaning “head,” referring to the “head” of the opium poppy. I find the most ironic etymology of all to be that of the word heroin, which comes from Heroin, a trademark registered in 1898 by Friedrich Bayer & Co. The origin is the Greek word heros, meaning “hero,” connoting the “high” that can make the user feel superhuman.

In my professional life, I have encountered all of the opiate variants described above. Paregoric, camphorated tincture of opium, is now a seldom-used remedy for recalcitrant diarrhea; in the early years of my practice, I prescribed this drug from time to time and was impressed with its effectiveness. I carried a morphine vial in my black house call bag and found it to be literally lifesaving a few times when faced with a pulmonary edema patient at home and far from the hospital. Codeine is today one of our most effective antitussives, especially when fortified with alcohol. In World War II, elixir terpin hydrate and codeine was used extensively by US soldiers (presumably because of some mysterious endemic idiopathic cough), who referred to it as “GI gin.” More recently, working in a clinic caring for many homeless persons, I have seen the devastation caused by heroin in its various forms of administration.

To return to our story, in the United States, the 1914 Harrison Act criminalized drug addiction. Opiates were available only by prescription, thus greatly increasing the monetary value of illicit drugs. Supplying a narcotic drug to an addicted person became illegal, resulting in the arraignment of approximately 25,000 American physicians and subsequent imprisonment of 3000 of them (Porter, p. 665).

Since 1914, America has engaged in its “war on drugs” with astonishingly little success. From time to time, a physician is arrested for “illegal prescribing of narcotics,” but such instances are uncommon, and, generally, prescription narcotic use is for “laudable” purposes such as pain relief as part of end-of-life care.

Ergot

Many family physicians have special areas of interest, such as sports medicine, adolescent health care, or geriatrics. One of mine has been headaches; at one time headache patients constituted almost a third of my practice. Hence, in the days before the “triptans,” I prescribed ergot preparations for hundreds of patients, and I still consider them to be useful—and underused—medications. In addition to their clinical utility, the ergot derivatives have a very colorful history, as well as being part of a confusing etymologic controversy.

Ergot arises naturally from a fungus, Claviceps purpurea, which grows on grasses and cereal grains, notably rye. A 600 BCE Assyrian tablet describes the fungus as a “noxious pustule in the ear of grain.” The agent is a vasoconstrictor with oxytocic properties that have been recognized since very early times. Ergot is an Old French word meaning “cock’s spur,” alluding to the shape of the fungus as it grows on the grain.

Ergotism is caused by a chronic overdose of ergot, and the manifestations chiefly reflect peripheral vasoconstriction. An early symptom is a burning sensation in the hands and feet, after which the most peripheral tissues, such as the fingertips, become dry and black because of impaired circulation. In extreme cases, fingers and toes become gangrenous (see Fig. 3.2). In addition, pregnant women can suffer spontaneous abortions.

Fig. 3.2

Advanced ergotism with gangrene. Author: G. Barger (https://commons.wikimedia.org/wiki/File:Barger.TIF)

Ackerknecht records that Galen was familiar with the disease but that ergotism was generally not found among the inhabitants of ancient Greece or Rome because rye was not a major part of their diet (p. 139). In the Middle Ages, however, ergotism was a significant problem, notably in northern Europe. Ergotism tends to appear during damp periods (a common climatic condition during medieval times), when the fungus grows luxuriantly on grain. The disease is also prevalent in times of food shortage, when grain becomes the chief food consumed. The combination of these factors—damp weather and famine—contributed to the epidemics of ergotism in the Middle Ages. In that era, the burning sensation of ergotism came to be called St. Anthony’s fire. Why?

St. Anthony of Padua (1195–1231) was a Portuguese Franciscan who, in a vision, received the infant Jesus in his arms. He was canonized in the year after his death and today is especially invoked for the recovery of things lost. He is also patron saint of amputees, animals, boatmen, domestic animals, expectant mothers, fishermen, harvests, horses, mariners, oppressed people, paupers, sailors, scholars, swineherds, travel hostesses, and travelers. St. Anthony’s relics are preserved at the basilica in Padua, Italy.

Somehow, ergot sufferers discovered that their burning and other symptoms abated if they made a pilgrimage to St. Anthony’s Basilica in Italy. The reason? As they made the sojourn, they reduced their consumption of contaminated rye products and ate the greater variety of foods available in sunny Italy (Goodman and Gilman, p. 872). As their daily consumption of tainted grains ceased, the manifestations of ergotism miraculously improved. Perhaps this is true, but read on.

The Middle Ages were probably not the last time the world saw ergotism. Matossian makes a plausible case that the hysteria of the 1692 Salem witch trials might have been the result of ergotism. The “witch hunt” began when ten young girls reported being bewitched by Tituba, a West Indian slave owned by the Reverend Samuel Parris. The author notes that the symptoms of the “witches” seem to match some of the manifestations of ergotism: feelings of burning or being pinched, visual hallucinations, and out-of-body sensations. All of these could have been caused by arterial vasospasm. One report at the time describes red sacramental bread; bread with a high ergot content tends to be cherry red in color [2]. We will never know with certainty whether ergot caused the 1692 witchcraft phenomena, but reflection on the event is a stimulating exercise in retrospective diagnosis, which I will discuss further in Chap. 9.

Matossian further postulates that the beginning of the French Revolution in 1789 might also be attributed to ergot poisoning. Trouble began as panicked peasants fled to the forests and armed themselves with pitchforks and clubs. Three weeks of riots marked the beginning of the revolt that brought down the monarchy. Matossian has discovered that the weather in the spring and early summer of 1789 was wet and cold—the very conditions most favorable to the growth of rye fungus. Furthermore, shortly before the uprising, the peasants had begun to eat flour ground from the summer’s rye crop. To make matters worse, there were rumors that the aristocrats were sending bandits to confiscate the rye crop, which would have caused starvation among the peasants. Matossian concludes, “These people weren’t rebelling. They were terrified” [3].

The origin and use of the phrase “St. Anthony’s fire” are still disputed. First of all, there is more than one St. Anthony. In Egypt, St. Anthony of the Desert, aka St. Anthony the Great (251–356), predated St. Anthony of Padua by nine centuries. St. Anthony the Great gave all his worldly possessions to the poor and spent the rest of his life in monastic prayer in the desert. Then, in 1090, the father of a young man with ergotism vowed on the tomb of St. Anthony of the Desert that he would henceforth devote his resources to the aid of ergotism victims if only his son would be cured. St. Anthony of the Desert then became the patron saint of ergotism sufferers. Also, during the Middle Ages, there were a number of epidemics called ignis sacer (holy fire) or St. Anthony’s fire—names that are not illogical given the hyper-religiosity of the times. Some, but not all, of these epidemics were ergotism; other causes included erysipelas, scurvy, and anthrax (Ackerknecht, p. 139).

Today, you may find articles or dictionaries that define St. Anthony’s fire as ergotism, erysipelas, or both. For the record, I looked up St. Anthony’s fire in my Stedman’s Electronic Medical Dictionary. The entry states, without equivocation, that St. Anthony’s fire is a synonym for ergotism, and the historical origin listed is the Egyptian monk St. Anthony the Great (see Fig. 3.3).

Fig. 3.3

Saint Anthony the Great (https://commons.wikimedia.org/wiki/Category:Icons_of_Saint_Anthony_of_Egypt#/media/File:Saint_Anthony_The_Great.jpg)

And what about ergotism today? The chief use of ergot derivatives is in headache therapy. The disease ergotism, rare in the twenty-first century, is most likely to be seen in patients who overuse ergotamine-containing medications for migraine headache, as reported by Zavaleta et al. [4].

Quinine

In Chap. 2, I discussed malaria—a disease that afflicted much of the ancient world, notably in areas where mosquito vectors were prevalent. Early practitioners often applied the popular remedies for fever—bleeding and purging—measures that were less than helpful for persons with malaria-induced anemia (Cartwright, p. 143). Then, sometime in the early seventeenth century, the Spanish conquerors in Peru learned that the Andean natives had recognized the therapeutic properties of the bark of the “fever tree,” now called Cinchona calisaya (see Fig. 3.4).

Fig. 3.4

Cinchona calisaya tree (https://commons.wikimedia.org/wiki/File:Cinchona-calisaya01.jpg)

In the native language, the word for the bark of the tree was quina or quina-quina, meaning “bark of bark.” Cartwright (p. 143) describes what may be an etymologic misadventure. Some thought that the name of the bark, cinchona, came from the title of the wife of the Spanish Governor of Peru, the Countess of Chinchon. Legend holds that in 1742, she used the native remedy to overcome a febrile illness and subsequently shipped a supply of the bark to Spain. We now know that the name of the tree and the medicine came from the indigenous language and that by 1742 the bark had been used in Europe for at least 100 years.

By 1820, the active principle, now named quinine, had been extracted from the “Peruvian bark.” In 1833, quinidine, an alkaloid of cinchona, was discovered and was so named because, as an isomer, it is a sort of “quinoidine” (Haubrich, p. 187). By 1850, quinine was widely used for malaria prophylaxis. Quinidine would prove useful in treating irregularities of the heart rhythm.

Quinine became especially significant in World War II, when many Allied troops were sent to malaria-infested areas, such as Sicily and Southeast Asia. Under the pressure of the war effort, Edwin H. Land (do you recall the Polaroid Land Camera?) led a research team that discovered a way to make quinine without the use of Peruvian bark.

What is quinine? Bateman and Dyson describe quinine as a “general protoplasmic poison” that is toxic to many microorganisms, including the malarial plasmodia [5]. The drug can cause headache, vasodilation, sweating, gastrointestinal disturbances, tinnitus, visual and auditory symptoms, and thrombocytopenia. But today, since few of us will prescribe quinine for malaria prophylaxis or therapy, what is the significance of the drug’s toxic properties?

First of all, the drug is rumored to be an abortifacient; self-poisoning can follow the consumption of large quantities taken in an effort to terminate a pregnancy. Also, quinine is occasionally used to “cut” heroin for illicit use, resulting in inadvertent toxicity in users. What’s more, although no longer recommended, quinine has been prescribed to treat nocturnal leg cramps, a benefit attributed to the drug’s property of prolonging the refractory period of skeletal muscle. However, such use is not without risk. Brasic reports a case of quinine-induced thrombocytopenia precipitated by “voluminous consumption” of tonic water to relieve nocturnal leg cramps [6].

A quainter use is the addition of quinine to seltzer water to produce tonic water, a custom that probably began under the British Raj in India, where quinine tonic water was mixed with gin for its healthful effects. The practice continues today in tonic water and bitter lemon beverages.

Barbiturates

In 1863, two chemists at the Friedrich Bayer Company in Germany (which would later give the world aspirin) isolated a chemical that turned out to have hypnotic, sedative, and anticonvulsant actions. The name may come from the German Barbitursäure, from St. Barbara’s Day, when the compound was discovered. St. Barbara is the patron saint of artillery officers, and the tavern where the Bayer team toasted their new discovery was a favorite of artillery officers (see Fig. 3.5). Fortuine (p. 258) adds an intriguing footnote to the tale: “A company representative has stated that the name derived from a Munich café waitress named Barbara, who had often donated urine specimens for the investigation.” Another legend is that “Barbara” was the girlfriend of one of the chemists.

Fig. 3.5

An image of Saint Barbara, patron saint of artillery officers. Source: Riccardo Spotto (https://commons.wikimedia.org/wiki/File:Santa_Barbara_(Paternò).jp)

For those who relish etymologic controversy, Pepper (p. 100) has another theory. He holds that the “barba” part of barbiturate comes from Latin barba, meaning “beard,” and -urate refers to uric acid. Pepper does not mention girlfriends or saints, and I personally prefer the more colorful explanations.

The first barbiturate to reach the market was barbital, introduced in 1903 as Veronal—the trade name referring to Verona, Italy, where Juliet swallowed a sleep potion. (Romeo, believing her dead, swallowed true poison and failed to awaken; the grief-stricken Juliet then stabbed herself through the heart.)

Then came phenobarbital, marketed as Luminal, from the Latin word lumen, meaning “light.” I have never been too sure how a sedative hypnotic drug brings one to light; it seems that the opposite might occur. However, I do recall that in the early 1960s, we treated hypertension with 15-mg phenobarbital tablets taken three times daily, reflecting our desperate paucity of antihypertensive medications at the time.

Then came pentobarbital (Nembutal), an excellent hypnotic widely used a generation or two ago. This was followed by amobarbital (Amytal), and eventually some 2500 barbiturate compounds were synthesized, of which some 50 were brought to market.

Today, in the state of Oregon, assisted suicide is legal. The patient who chooses this option will (after satisfying the many strict legal requirements) eventually be given a prescription for barbiturates.

Aspirin

Inexpensive, potent, and possessing a reasonable spectrum of side effects, aspirin is one of the world’s most cost-effective medications. We use the drug to relieve many types of pain, to combat the inflammation of arthritis, and to treat acute heart attacks. It is a seemingly “magic” treatment for osteoid osteoma and in low doses can help prevent stroke recurrence. Aspirin represents a classic story of a folk remedy that has become a widely used modern drug.

Aspirin’s precursors have been used for more than two millennia. Before salicylic compounds were discovered in the bark and leaves of plants, the ancient Egyptians recognized the analgesic properties of myrtle leaves, Hippocrates (460–377 BCE) treated fever and labor pain with willow extract, and Native Americans chewed willow bark as a folk remedy for various ailments. In eighteenth-century England, it was believed that willow helped reduce fever because both fever and willow are found in damp areas.

In the 1820s and 1830s, salicylic acid was derived by adding an acid to an extract of the Salicaceae family of willows and poplars. Later, a naturally occurring salicylic acid (no added acid needed) was discovered in a species of the Spiraea genus, meadowsweet. In 1853, acetylsalicylic acid was formed by adding an acetyl group to salicylic acid. Then, in 1897, Felix Hoffmann, an industrial chemist employed by the Friedrich Bayer Company, found a practical way of making a stable, solid form of acetylsalicylic acid that could be produced as tablets (see Fig. 3.6). Hoffmann’s father, who suffered from severe arthritis, was treated with the new drug, reporting superior pain relief and fewer side effects.

Fig. 3.6

Chemical structure of acetylsalicylic acid (aspirin) (https://commons.wikimedia.org/wiki/File:Acetylsalicylsäure2.svg)

In 1899, Bayer set out to promote their new discovery, which would become the world’s first medicine to be sold in tablet form and eventually the world’s largest-selling over-the-counter medication (Gershen, p. 119). First, a name was needed. Bayer chose the word aspirin: a- to denote the addition of the acetyl group, -spir- to indicate the Spiraea genus, and -in as a commonly used suffix for medications.

Because acetylsalicylic acid is a chemical that can be readily manufactured by any pharmaceutical company, the name of the drug was quite important. Bayer received a trademark—different from a patent—on the word aspirin in 1899. Following World War I, and probably somewhat as a result of the ill feelings engendered by Germany’s role in the conflict, Bayer lost trademark protection in the United States, Great Britain, and France. As a result of a 1921 US federal court ruling, the word aspirin became a generic term in the United States, a fate that today certainly concerns the makers of Kleenex tissues, Xerox copiers, and Scotch tape. Today, Aspirin is a registered trademark of Bayer AG in Germany and more than 80 other countries, but not in the United States. Even without trademark protection in the United States, aspirin brought Bayer $1.7 billion in sales during 2013 [7].

Penicillin

Previously, I discussed the discovery of penicillin as a milestone in medical history, and a grand event it was. Penicillin was the first of the bactericidal antibiotics and, in a sense, is the gold standard against which we measure all that have been discovered since. But penicillin languished for more than a decade between its discovery as a laboratory curiosity and its use as a lifesaving antimicrobial. This is the story.

In Chap. 1, I told of the time, in 1928, when British bacteriologist Alexander Fleming, described by Weiss (p. 76) as “not the neatest of laboratory workers,” returned from vacation and began to clean up a pile of plates that he had cultured before his departure. He noted the historic plate, by then showing a mold contaminant that had caused lysis in nearby bacteria. This set in motion a chain of events for which Fleming, along with others described below, received a Nobel Prize, making him one of the few Nobel laureates in medicine whose name is known outside the walls of academia.

It seems that Dr. Fleming may have had the ability to recognize the significance of a chance observation, but he lacked what was needed to exploit his finding. He published the results of his studies in the British Journal of Experimental Pathology and presented his findings at scientific meetings, but he was unsuccessful in translating his discovery into clinical practice. Lax tells of Fleming’s unconvincing presentation skills, inadequate experimental design, inconclusive experimental results, and “miserly” literary style [8].

In 1939, Howard Florey, professor of pathology at Oxford University, and his colleague Ernst Chain resumed work on penicillin. In 1940, a classically designed study looked at what happened to mice infected with large doses of streptococci. Half were injected with penicillin and lived; the other half received no therapy and died.

At this time, World War II was in its early stages and the development of penicillin took on great urgency. Penicillin was grown in broth in bedpans and subsequently in specially designed ceramic culture trays. The few units of penicillin developed were precious and were sometimes retrieved from the urine of patients who had been given the drug. One such patient was a local policeman, and Goodman and Gilman report that “It is said that an Oxford professor referred to penicillin as a remarkable substance, grown in bedpans and purified by passage through the Oxford Police Force” (p. 1130).

As France fell to the German invasion in 1940, Florey and his colleagues feared that Britain might be invaded. If so, they vowed to destroy their work rather than have it fall into Nazi hands. As a precaution, they smeared their jackets with Penicillium spores so that their discovery could travel with them if they were forced to flee [8].

In the summer of 1941, months before the United States entered the war, Florey and colleague Norman Heatley flew to the United States to meet with American scientists, who soon took up work on penicillin as a high priority. The epicenter of the work was the Northern Regional Research Laboratories of the Department of Agriculture in Peoria, Illinois. One interesting sidelight is the story of “Moldy Mary” Hunt, an assistant assigned to find mold-containing fruit to test for strains of Penicillium. Hunt found the most luxuriantly productive strain of mold growing on a cantaloupe in a Peoria fruit market.

In World War II, penicillin saved the lives of countless Allied troops (see Fig. 3.7). Production capabilities increased rapidly, and from an early meager output, the US production of penicillin reached 222 trillion units (148 tons) in 1950.

Fig. 3.7

Nurse administering a penicillin injection to a patient, assisted by an orderly, 15th Canadian General Hospital, World War II, 1944 (https://commons.wikimedia.org/wiki/File:Nurse_Giving_an_Injection_of_Penicilin_to_a_Wounded_Man,_15th_Canadian_General_Hospital_Art.IWMARTLD3905.jpg)

Fleming, Florey, and Chain shared the 1945 Nobel Prize for Physiology or Medicine. At Fleming’s death, flower vendors in Barcelona decorated the tablet erected when Fleming had visited their city, and on Greek islands, flags flew at half-staff. These were all quite generous tributes to a man who delayed and almost “disinfected” one of the epic discoveries of the twentieth century.

Short Tales About Selected Remedies

Vitamins

Vitamins are naturally occurring substances and are not really remedies, unless one happens to have a vitamin deficiency. I have included vitamins here because I find their story interesting. In the early years of the twentieth century, observers became aware that persons eating a diet composed chiefly of white (polished) rice were much more likely to develop beriberi than those who ate “rough” unpolished rice. Something seemed to be missing in the white rice. In a 1905 experiment that would hardly be approved today, William Fletcher (1874–1938) studied two groups of inmates in a Kuala Lumpur prison, feeding one group a diet high in polished rice while the other group ate “rough” rice. The prisoners eating polished rice had a much higher incidence of beriberi than those eating unpolished rice (Porter, p. 554).

The results from Malaysia and separate observations of beriberi among Norwegian sailors on long voyages were followed by animal experiments, eventually showing that quite small quantities of “accessory food factors” could prevent beriberi, as well as rickets, scurvy, and pellagra.

The first vitamin was isolated in 1912 by a Polish biochemist with the euphonious name Casimir Funk (1884–1967). Funk’s discovery was an amine of nicotinic acid, which could prevent beriberi. He coined the word “vitamine,” indicating that he considered the amine to be vital to life. Also in 1912, Funk postulated that deficiency states might be responsible for a number of diseases, including scurvy and pellagra [9]. Thus Funk’s work anticipated the later findings of Joseph Goldberger in 1916 (see Chap. 1).

In 1913, a substance called vitamin A was discovered. Credit goes to Elmer McCollum (1879–1967) and his team at the University of Wisconsin (Sebastian, p. 751). As subsequent vitamins were discovered, they were assigned the letters B (eight of which are numbered), C, D, and so forth. The alphabetical aberration was vitamin K, named in 1935 by Danish biochemist Henrik Dam (1895–1976). Vitamin K helps prevent a hemorrhagic disorder and hence is the koagulation vitamin (Haubrich, p. 245). There are also wannabes—such as “vitamins” U (from cabbage juice) and O (“supplemental oxygen in liquid form”)—sold to enhance health in various ways, but not recognized as vitamins.

Along the way, specifically in 1920, the terminal e was dropped from vitamine when it was found that not all vitamins were amines. Today, we in the developed world consume huge quantities of vitamins, while controversy rages as to whether this one or that is really useful in preventing heart disease, cancer, or various manifestations of aging. Strangely, it all began with camel dung, as I describe next.

Amines

The story of the amine group (NH2), the basis of the gas ammonia (NH3), begins in North Africa about the fourth century BCE, the time of Alexander the Great. At the shrine of Jupiter Ammon, near the Libyan city of Ammonia (this is the hint), visitors inside the temple warmed themselves around fires made with the customary fuel—dried camel dung (Shipley, p. 1945). And why not? Firewood was unavailable in the desert. American pioneers in the 1800s burned dried cattle and buffalo dung (“chips”), and today in many desert villages, animal dung is dried for use as fuel.

Back to the story. Inside the temple, years of accumulated smoke caused a powder to form on the ceiling. This was called sal ammoniac, or the salt of Ammon (Gershen, p. 2001). Fast forward to 1774, when Joseph Priestley, the Unitarian minister and chemist who also discovered oxygen, found that heating sal ammoniac with lime yielded a pungent gas. With this discovery, science was on its way to understanding one of the important building blocks of all proteins—the amines—and to Funk’s discovery of his “vitamine.”

Nitrogen Mustard

On April 22, 1915, chemical warfare using toxic gas was formally introduced into the world. The place was the city of Ypres, a small Flemish market town just over the border from France. The setting was the Western Front of World War I, and the event was the release of chlorine gas by German troops. As the gas wafted over the Allied lines, there were more than 5000 casualties (Weiss, p. 125).

Later came new generations of poison gas—phosgene and eventually mustard gas—used in various battles (see Fig. 3.8). Mustard gas, however, did not merely poison the lungs of those who inhaled it. Some exposed soldiers developed bone marrow aplasia. The poison gas seemed to attack the cells that produce blood cells.

Fig. 3.8

Aerial photograph of a gas attack launched by the Germans against the Russians circa 1916. Source: Popular Mechanics magazine (https://commons.wikimedia.org/wiki/File:Poison_Gas_Attack_Germany_and_Russia_1916.JPG)

If nitrogen mustard, a chemical relative of mustard gas, can attack blood-forming cells in healthy persons, might it also do so—beneficially—in persons with too many such cells, such as patients with lymphoma or leukemia? In 1942, nitrogen mustard was first used to treat a patient with lymphoma. We now know that nitrogen mustard is an alkylating agent that modifies cellular DNA. Since the 1940s, we have discovered better and safer alkylating agents—chlorambucil, melphalan, and busulfan—as well as other classes of chemotherapeutic drugs. But the first suggestion that a drug could reduce harmful cell production can be traced to a Flemish battlefield enveloped by a cloud of toxic gas.

Streptomycin

The first antimicrobial effective against tuberculosis, streptomycin, was isolated in 1943 by graduate student Albert Schatz, working in the basement laboratory of Selman A. Waksman in the Department of Soil Microbiology at Rutgers University. As World War II wound down over the next few years, studies were conducted to confirm the efficacy of the new antibiotic in treating life-threatening infections. One of these was conducted at the Percy Jones Army Hospital in Battle Creek, Michigan. The early results were mixed, at best: patient number one died of his disease and patient number two became blind, one of the possible side effects of streptomycin. The third patient to receive the drug recovered from his infection. Patient number three was Robert J. (Bob) Dole, who would go on to become US Senate majority leader and a presidential nominee [10] (see Fig. 3.9).

Fig. 3.9

Robert J. (Bob) Dole (https://commons.wikimedia.org/wiki/File:Bob_Dole.jpg)

But, wait, there’s more. Waksman was recognized as the discoverer of streptomycin and received the 1952 Nobel Prize for Physiology or Medicine, not to mention credit for coining the word “antibiotic.” But Schatz objected, asserting his claim as at least the codiscoverer. He filed a lawsuit that was eventually settled in 1950. In that settlement, defendant Waksman acknowledged that, “as alleged in the complaint and agreed in the answer,” the plaintiff Albert Schatz “is entitled to credit legally and scientifically as co-discoverer, with Dr. Selman A. Waksman, of streptomycin.” In 1993, on the 50th anniversary of the discovery of streptomycin, Schatz published “The True Story of the Discovery of Streptomycin” in the journal Actinomycetes [11]. At least this is the Schatz version of the story. Selman Waksman died in 1973 at age 85 and could not respond to the 1993 “true story.”

Isoniazid

Isoniazid has been a mainstay in the treatment of tuberculosis. It is still quite a useful drug, having three of my favorite characteristics: It is old, cheap, and (relatively) safe. To make a long story short, in 1952, two companies—Squibb and Hoffmann-La Roche—independently and coincidentally announced the discovery of a new antituberculosis drug, isonicotinic acid hydrazide.

In a moment of wisdom that avoided years of agony and millions of dollars in litigation, the two companies agreed upon a novel approach to resolve the conflict. They decided to drop the dated notes of their investigators into a hat. They agreed that the patent rights would go to the company with the earlier date and that the other company would have a royalty-free right to manufacture and sell isoniazid. In the drawing, Hoffmann-La Roche was the winner—its investigators’ notes were dated a few days earlier.

But wisdom is not always rewarded. When the patent application was filed, there was a third claim that predated even those of the two major companies. Only a “use” patent was granted, and neither company received major profits from the venture.

There was one bright note, however. In 1955, the Albert Lasker Medical Research Award was presented to both Squibb and Hoffmann-La Roche, the first pharmaceutical companies ever to be so honored (Bordley, p. 461).

Sildenafil (Viagra)

If antibiotics were the breakthrough drugs of the mid-1900s, the game changer at the end of the century began as a remedy for angina pectoris. Working at Pfizer’s laboratory in Sandwich, England, in 1986, scientists Andrew Bell, David Brown, and Nicholas Terrett developed a novel compound, sildenafil citrate. The new drug was fundamentally a vasodilator; it relaxed smooth muscles to improve arterial blood flow. Pfizer began studies involving treatment of patients with cardiovascular disease and hypertension, but results in these patients were disappointing, at least as far as cardiovascular disease was concerned. However, male subjects noted one curious side effect. Some who had been “impotent” found that they could resume sexual activity.

Studies began, testing sildenafil as a vasodilator of vessels in the penis. At about this time, with growing excitement and in the vanguard of political correctness, the term “impotence” came to be considered pejorative and somehow confusing. The National Institutes of Health (NIH) convened a distinguished panel of psychiatrists, urologists, gerontologists, and others. These luminaries, following long deliberation, recommended that the name of the disorder be changed from “impotence” to “erectile dysfunction” [12].

As Pasteur might have said, in this instance, chance favored the prepared minds at Pfizer. Following reports of successful studies, the US Food and Drug Administration (FDA) approved sildenafil (trade name Viagra) for the treatment of erectile dysfunction in 1998.

Subsequently sildenafil has been used to treat several other diseases. In 2005 the FDA approved use of the drug for pulmonary hypertension, and it has been used to treat high-altitude pulmonary edema in mountain climbers. And as a highlight of the fanfare surrounding sildenafil, the 2007 Ig Nobel Prize for Improbable Research (Aviation) went to three Argentinian researchers who found that Viagra aids jetlag recovery in hamsters [13].

Despite the introduction of some competing products by other pharmaceutical companies, and hamster jetlag notwithstanding, Viagra continues to be a “cash cow” for Pfizer, earning $1.6 billion in revenue in 2014.

A Potpourri of Drug Names

As we close this chapter, here is a short list of intriguing drug names:

Mannitol takes its name from manna, an edible substance that, according to Exodus 16:13–36, helped sustain the Israelites during their 40-year desert sojourn (Dirckx, p. 75).

In 1942, pharmaceutical manufacturer Ayerst needed a brand name for its new product containing estrogenic hormones. The source of the compound was the urine of pregnant mares. Hence, a logical name for the new medication was Premarin, from PREgnant MARes’ urINe (Fortuine, p. 317).

In 1943, a young girl sustained an open fracture of the leg, which became infected. Cultures of the wound yielded an organism that produced a substance with, ironically, antimicrobial properties. The cultured organism was named Bacillus subtilis. The young girl was named Margaret Tracy. The antibiotic produced from the culture material was named bacitracin, combining Bacillus and Tracy, in her honor.

Also in the 1940s, chemists at the University of Wisconsin were working on a problem. It seemed that cattle eating moldy silage made from sweet clover suffered severe bleeding problems and sometimes died of internal hemorrhage. Studies showed that the active agent was a coumarin derivative. The first coumarin product, dicoumarol, was patented in 1941. But subsequent efforts yielded a superior drug, released as warfarin (Coumadin) in 1948. Why was the drug named warfarin? WARF stands for Wisconsin Alumni Research Foundation, and -arin denotes its relationship to coumarin.

In 1950, nystatin was developed to treat Candida infections. The drug is derived from a bacterium found in soil from the garden of a friend of one of the researchers; the friend’s name was Nourse, and so the organism was termed Streptomyces noursei. The drug itself, nystatin, was named to commemorate its place of origin: the New York State Department of Health.

Just for fun, here are some drug naming stories I found on a Web site, Reddit.com [14]. These may be apocryphal or perhaps even true tales:

  • Ambien comes from AM (morning) and bien, the Spanish word for good or well.

  • Lasix connotes that the drug lasts 6 h.

  • Maalox combines letters from MAgnesium and ALuminum OXide.

  • GoLYTELY, a laxative, was named by someone with a sense of humor.

  • Ursodiol (brand name, Urso) is found in bear (ursa) bile.

  • Was Soma named after the drug found in the novel Brave New World?

  • Ansaid was probably not so named because it was “just another NSAID.”