The Collapse of Engineered Structures: Dust Thou Art, and unto Dust Shalt Thou Return

Fig. 3.1

(Photo by the author, 2019)

A balcony photographed in 2019 near Florence, Italy. The badly corroded iron beams are visible through the cracks in the concrete structure. The balcony is in a dangerous condition and might collapse under stress but, apparently, the owners of the building cannot afford to have it repaired, a common condition for reinforced concrete structures all over the world. The pigeon, of course, does not care!

In the late morning of August 14, 2018, I was busy writing this book when I happened to open my browser. There, I saw the images of the collapse of the Morandi bridge, in Genoa, almost in real time. It was a major disaster: the bridge used to carry more than 25 million vehicles per year and was a vital commercial link between Italy and southern France. When it collapsed, it not only took with it the lives of 43 people who were crossing it, but it was also a heavy blow to the Italian highway system, forcing traffic to and from France to take a long detour. It will take years before a new bridge can be built and the economic damage has been incalculable (Fig. 3.2).

Fig. 3.2

(Image by Michele Ferraris, Creative Commons)

The remains of the Morandi Bridge (or Polcevera Viaduct) in Genoa, Italy, after a whole section collapsed on August 14, 2018.

How could it be that the engineers who took care of the maintenance of the highway could not predict and prevent the collapse of such an important structure? Much was said in the debate that followed about incompetence or corruption. Perhaps the fact that the maintenance of the highway had been handed over to a profit-making company was a recipe for disaster: profit maximization may well have led to cutting corners on maintenance tasks. But, on the whole, we have no proof that the company that managed the bridge was guilty of criminal negligence. Rather, the collapse of the Morandi bridge may be seen as another example of how the behavior of complex systems tends to take people by surprise.

Even in engineering, with all its emphasis on quantification, measurements, models, and knowledge, the phenomenon we call “collapse” or “fracture” remains something not completely mastered. If engineers knew exactly how to deal with fractures, nothing would ever break—but, unfortunately, a lot of things do, as we all know. We saw in a previous section how critical phenomena in a network can be initiated by small defects in the structure; in real-world structures, this is the effect of cracks, according to the theory developed by Alan Griffith [1]. The Morandi Bridge was a structure under tensile stress, sensitive to the deadly mechanism of the Griffith failure.
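For reference, the standard textbook form of the Griffith criterion (my summary, not a quotation from Griffith's paper) gives the stress at which a crack of length $2a$ in a brittle plate becomes unstable:

$$\sigma_c = \sqrt{\frac{2 E \gamma_s}{\pi a}}$$

where $E$ is the elastic modulus of the material and $\gamma_s$ its surface energy. The longer the crack, the lower the stress needed to make it grow, which is why even a small flaw in a cable or a beam can be so dangerous.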

The bridge went down during a heavy thunderstorm and that may have been the trigger that started the cascade of failures that doomed it: one more case of the “Dynamic Crunch” phenomenon that leads to the Seneca Cliff. Somewhere, in one of the cables holding the deck, there had to be a weak point, a crack. Then, perhaps as an effect of a thunderbolt, or maybe of the wind, the cable snapped. At that point, the other cables were suddenly under enhanced stress, and that generated a cascade of cable failures which eventually caused a whole section of the bridge to crash down. You have heard of the straw that broke the camel’s back; in this case, we could speak of the lightning bolt that broke the bridge’s span. Complex systems not only often surprise you. Sometimes, they kill you.

But why was the Morandi bridge so weakened? Just like many other bridges in Italy and Europe, it had been built using “prestressed concrete.” This is a material European engineers seem to like much more than their American colleagues, who, on the contrary, tend to use bare steel cables and beams for their bridges. Prestressed concrete had more success in Europe because it was widely believed that the concrete would protect the internal steel from corrosion and avoid the laborious maintenance work of painting and repainting required, instead, for steel bridges. But, over the years, it was discovered that steel corrodes even inside concrete, and that turns out to be a gigantic problem, not just for bridges.

In the case of the Morandi bridge in Genoa, the problem was known. The bridge had been opened in 1967 and, after more than 50 years of service, it needed plenty of attention and maintenance. Years before the collapse, engineers had noted that corrosion and the vibration stress caused by heavy traffic had weakened the steel beams of the specific section that was to go down in 2018. A series of measurements carried out one year before the collapse had indicated that the steel in that section had lost 10% to 20% of its structural integrity. That was not considered dangerous enough to require closing the bridge to traffic, especially at the height of the busy summer season. After all, most structures are built with a hefty safety margin with respect to their breakdown limit, typically at least 100%. But there was a plan to close the bridge for maintenance work in October 2018. Too late.

We see once more how the best-laid plans of mice and men often go astray. The engineers who were working on the bridge may have made a typical mistake of linear thinking: they assumed that there is a certain proportionality between weakening and danger. In this case, they believed that a 20% weakening of the beams was not enough to cause the bridge to collapse. But that was an average, and complex systems may not care about averages: do you know the story of the statistician who drowned in a river with an average depth of 1.5 meters?

Bridges are just one example of the many engineered structures subject to collapse under stress. The Griffith mechanism of crack propagation is typical of the fracture of structures under tensile stress, such as the cables of a suspension bridge, the beams of a roof, moving objects such as planes and ships, everyday objects such as bookshelves, and even the bones of living beings. These structures tend to go down rapidly, suddenly, and sometimes explosively: typical examples of Seneca Collapses. There also exists another category of engineered structures, those which must withstand only compressive stresses: this is the case of pillars, walls, arches, domes, and the legs of the chair you are sitting on. These structures can collapse, too, but they are normally much safer than those under tension because compression tends to close cracks instead of enlarging them, as tension does.

In ancient times, when reinforced concrete did not exist, buildings used to be made in such a way as to avoid tensile stresses as much as possible. That was because the main construction material available in ancient times was stone, and stone simply cannot take tensile stresses. So, stones can be used to build walls and buttresses, and also bridges and roofs, provided that you arrange them carefully to form arches and domes, making sure that all the elements are always under compression, never under tension.

But even compression structures have their limits. Ancient builders were perfectly aware that stone can crumble, even explode, when subjected to excessive stress. That sets a limit to the height of a stone building: above a certain height, the stones at the base would burst and bring the whole structure down. One of the skills that ancient builders needed was the ability to test stones for their resistance to compression, and they had developed sophisticated measurement techniques to determine this property. Maybe we are biased in our perception because what we see around us are only those ancient buildings that survived and have come down to our times, but it is true that many ancient buildings have withstood the test of time beautifully and are still around us after several centuries, even millennia.

Many Roman bridges are still standing and are still used today. Another remarkable example of a building that survived from Roman times is the Pantheon, in Rome. It was built nearly 2,000 years ago and it is still being used as a temple today, now a Catholic church. Gothic cathedrals built during the Middle Ages were also sturdy and resilient: there are only a few examples of structural collapses caused by poor design. For instance, the Beauvais Cathedral, in France, built mainly during the 13th century, suffered plenty of problems and some structural collapses, but it is still standing today. Another example is the tower of Pisa, in Italy, completed during the 14th century. For centuries, it survived the tilting caused by ground movements. During the 20th century, the tilt had reached an angle of about 5.5°, putting the tower at risk of collapse. Today, the tilt has been reduced to less than 4° by acting on the foundations, and the tower may well keep standing for centuries to come. Modern stone buildings are sometimes even more ambitious. The Washington Monument in Washington DC is an example of a building tall enough (169 m) to be close to the limits of the structural resistance of the stones at its base. It was completed in 1884 and seems to be still in good shape despite some cracks that it developed after an earthquake hit it in 2011.

As a last note on this classification, I could mention the “Euler Collapse,” a mode that mixes something of the tensile and something of the compressive elements of the fracture mechanism. It occurs when a thin structure is subjected to compression and, as a consequence, it buckles sideways. An example is what may happen to women when they walk on high heels. The stresses at the heel may break it at the juncture with the sole or, in the worst case, fracture the wearer’s ankle. Wearing high heels is dangerous, but many ladies seem to like the idea. I may tell you that once I was in a Russian town in winter and I saw a young lady on high heels running to catch a bus over the icy sidewalk, jumping aboard gracefully and apparently at ease. How she could do that without slipping on the ice and killing herself, or being run over by the bus, remains a mystery to me to this day. Maybe you have to be Russian to be able to do certain things. But humans are complex systems and complex systems always take you by surprise.
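For reference, the standard Euler buckling formula (my addition here, in its usual textbook form) gives the compressive load at which a slender column of length $L$ gives way sideways:

$$P_{cr} = \frac{\pi^2 E I}{(K L)^2}$$

where $E$ is the elastic modulus, $I$ the second moment of area of the cross section, and $K$ a factor that depends on how the ends are constrained. The longer and thinner the column (a stiletto heel, for instance), the smaller the load it can carry before buckling.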

But let us go back to the case of the Morandi bridge for a discussion of risk evaluation for engineered structures. I crossed that bridge by car several times in my life without ever even vaguely thinking that it was risky to do so. Probably, at least a billion vehicles safely crossed that bridge over its more than half a century of life, so the chance of seeing it collapse just when you were crossing it was abysmally low. Yet, it happened in 2018, and when a major bridge collapses, someone is bound to be crossing it. Obviously, it would have made no sense to avoid crossing the Morandi bridge, or any other concrete bridge, for fear that it could collapse. Yet, it makes perfect sense to consider the risk of collapse for a building that you use much more often than a bridge: your home or the place where you work. Unfortunately, you normally have no idea of how well and carefully your home was built and maintained. Maybe all the standards were respected, maybe not; in the latter case, your life is at risk: the collapse waiting for you could be rapid and deadly.

There are many cases in which it was discovered, typically after the collapse of a structure, that the builders had saved money by reducing the amount of steel reinforcement in the concrete. Or maybe they had used poor-quality sand; a typical trick to save money is to use sand taken from some beach. This sand is contaminated with sea salt, which favors the corrosion of the steel beams inside the concrete. In some cases, it is reported that instead of standard steel beams, builders used wire mesh of the kind used for chicken coops. Then, you have to consider that a building rarely remains untouched after it has been built. People open doors and windows in the walls, add more floors, remove walls or add them. They may also intervene in other dangerous ways: for instance, everyone loves rooftop swimming pools, but they are heavy and may destabilize the whole structure of a building. These mongrel buildings may be very dangerous: one of the worst disasters in the history of architecture happened to a building that was modified and expanded without much respect for rules or for common sense. It is the case of the Rana Plaza collapse of April 24, 2013 in Savar, Bangladesh, when more than one thousand people died and more than 2,500 were injured. The owners had added four floors to the building without a permit (!!) and had also placed the heavy machinery of a garment factory on these extra floors. Not only was the machinery heavy, but it also generated strong vibrations that further weakened the building. More than half of the victims were women workers of the factory, along with a number of their children who were in nursery facilities within the building. A good example of criminal negligence.

Building collapses are rare, and the risk is so small that it is not normally listed in the various “Odds of Dying” tables that you can find on the Web [2]. Yet, it is one of those risks for which you can take precautions, and there is no reason not to do so. If you live in a building made of reinforced concrete that is older than a couple of decades, you should check for the details that may indicate danger. In some cases, you can directly see the corrosion of the steel beams where the surrounding concrete has been eroded. Cracks in the walls are an evident symptom of trouble, and it has been reported that the noise of a steel cable snapping inside a concrete beam may be perceived as the sound of gunshots. In Europe, if you hear that kind of noise, you may reasonably think that there is something wrong with the structural integrity of the building you live in, but, of course, gunshots may be much more likely if you live in the US. By the way, the collapse of the Morandi bridge produced noises that could be interpreted as explosions and—guess what!—that led some people to interpret the disaster as the result of a “controlled demolition” carried out by the evil “Zionist Illuminati,” in analogy with the demolition theories proposed for the 2001 attack on the World Trade Center in New York [3]. Human fantasy seems to have no limits in terms of crackpot theories.

Not seeing or hearing anything suspicious in a building does not necessarily mean it is safe. If it is older than 50 years, it would not be a bad idea to seek professional help to have its structural integrity checked. It is expensive, though, and not routinely done for private buildings. Stone buildings are normally safer and more durable than concrete ones; you have to be careful, though, because these buildings can crumble under the effect of the lateral vibrations generated by earthquakes. Wooden houses are often said to be more resilient and safer than both concrete and stone buildings, and that is probably true, within some limits. But take into account that wooden beams are susceptible to degradation, too: they may be attacked by termites, whose presence may be difficult to detect because they eat away the interior of the wood before breaking through to the surface. In terms of structural safety, an Indian tepee or a Mongolian yurt would be the best choice for a place to live. Otherwise, you just have to accept that there are some risks in life.

In the end, the problem of concrete degradation is not limited to single buildings: it is a global problem that affects all the infrastructure built over the past century or so (Fig. 3.3).

Fig. 3.3

Global cement production. Data from USGS

You can see in the figure how cement production went through a burst of exponential growth from the 1920s all the way to a few years ago. Only in 2015 did global production start to show signs of stabilizing and, probably, it will go down in the coming years. It means that our highways and our cities were built in a period of economic expansion and on the assumption that their maintenance needs would be minimal, just as they had been for the previous generation of stone buildings. It turned out to be a wrong estimate.

In the future, we seriously risk an epidemic of infrastructure collapses if we do not allocate sufficient resources to the maintenance of the concrete elements of that infrastructure. Otherwise, the result could be that a considerable fraction of the world’s buildings and roads will have to be sealed off and left to crumble. Worse, crossing a bridge or living in a skyscraper could come to be considered risky. That is already the situation in some poor countries. In Cuba, after the revolution of 1959, the government expropriated most of the buildings that had been owned by rich Cubans and foreigners and distributed them among the poor. The problem is that these buildings had been erected using Portland cement made with beach sand contaminated with sea salt. Sea salt favors the corrosion of the steel beams—a very serious problem. It can be remedied, but the remedy is expensive and requires sophisticated technologies [4] that Cubans cannot afford today. The problems of old concrete buildings in poor countries do not seem to be related to a specific political ideology or system of government. Puerto Rico is under the control of the American government, but the problem of crumbling buildings seems to be the same as in Cuba [5], worsened in recent times by Hurricane Maria, which struck the island in 2017 [6]. Other areas with warm climates and close to the sea seem to be affected in the same way.

We lack worldwide statistical data on this kind of problem, but there seems to exist a “crumbling belt” of decaying buildings throughout the tropical regions, especially near the sea, where higher temperatures and sea salt spread by the wind cause the steel beams of concrete buildings to corrode faster than in other regions of the world—incidentally, the Morandi Bridge stood near the Mediterranean coast and it may well be that, in that case too, sea salt had a role in the collapse. Add to that the fact that in many of these regions people are poor and unable to afford the costs involved in the remediation of these old buildings, and you have a big global problem: another Seneca Cliff awaiting.

In the end, the problem has to do with an old Biblical maxim: “dust thou art, and unto dust shalt thou return.” Applied to a concrete structure, it would sound more like, “sand thou art, and unto sand shalt thou return.” Concrete is nothing but compacted sand, not unlike the sandcastles that children build on the beach. The substance that binds the sand in sandcastles is water, and when it evaporates the castle crumbles. In concrete, the binder is cement, typically based on lime and calcium silicates. Of course, this kind of solid binder does not evaporate and concrete lasts much longer than sandcastles, but not forever. So, what we are seeing today in Cuba and other poor tropical countries may be just an image of what our world will be in a not-so-remote future.

The risk of collapse affects all kinds of engineered structures, not just buildings. Among the countless objects that humans build, many are especially dangerous because they move—sometimes very fast. According to the available statistics [2], pedestrians are the most likely victims of street accidents, while the most dangerous kind of vehicle is the motorcycle. The odds of being killed in a car accident in the US are about 1 in 10,000 every year, a value that we do not consider worth worrying about because most of us normally use cars and walk along streets where the risk of being hit by a car exists. Planes are significantly less dangerous than cars. According to The Economist [7], a typical value for the probability of being killed in a plane crash is one in 5 million per flight. Even if you were to take a flight every day for a year, the chance of being killed in a crash would be less than one in 10,000, not really worth worrying about.
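To make the arithmetic explicit, here is a minimal sketch (my own check, not a calculation from the cited sources) of how the per-flight odds compound over a year of daily flights:

```python
# Probability of at least one fatal crash over n independent flights,
# assuming a per-flight risk of 1 in 5 million (the figure cited above).
p_per_flight = 1 / 5_000_000
n_flights = 365

p_year = 1 - (1 - p_per_flight) ** n_flights
print(f"Risk over a year of daily flights: {p_year:.2e}")
print(f"That is roughly 1 in {round(1 / p_year):,}")  # about 1 in 13,700
```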

Although these odds are small, they are not negligible, and most of us have relatives or friends who have suffered a major road accident. The question is how to reduce the chances of being involved in one. In the case of road transportation, there are many well-known rules and recommendations about the things to do and not to do when getting behind the wheel. But when you ride in a vehicle driven by someone else, for instance when you take a bus, you have no idea of the competence of the person at the wheel: the driver may be incompetent, drunk, under the effect of heavy drugs, or, worse, harboring suicidal thoughts. On this point, it may be worth remembering the recommendations made by Jared Diamond in his book The World Until Yesterday (2013), where he tells us how he nearly drowned when a small boat carrying him and a few others was sunk by a reckless New Guinean pilot. Diamond notes that he would have noticed that there were problems with that boat before boarding it if he had practiced the art that he observed in his New Guinean friends and that he calls “constructive paranoia.” It is a set of habits involving extreme attention to the details of potentially dangerous people and objects, developed by people who live in more challenging environments than those we Westerners typically experience. Overall, though, you cannot use paranoia as the way to manage your life. You have to accept that perfect safety is something that you can have only inside your grave.

Nevertheless, you may improve your chances of surviving by exercising a certain critical attitude when choosing your transportation system. There is much discussion on whether some airlines are safer than others, but a comparison is often difficult because there are many factors involved (the route, the kind of planes, the number of flights) but, more than that, because the number of disasters in the airline industry is so small (fortunately) that a statistically significant comparison is nearly impossible. It is also true that not all planes are the same and you might think you could choose a flight on the basis of which model of plane will be used. But that is rarely specified on the ticket and can be changed anyway according to the needs of the airline. When you buy an airline ticket, you automatically agree to the contract called “conditions of carriage,” which is normally a ponderous document that nobody ever reads. In the US, every airline has a different contract, but they tend to be very similar. For instance, the conditions of carriage of Delta Air Lines in 2017 specified that [8] “Delta may substitute alternate carriers or aircraft, delay or cancel flights, change seat assignments and alter or omit stopping places shown on the ticket at any time. Schedules are subject to change without notice.” And notice that they do not even say that they will take you there by plane—they only mention “alternate carriers,” which might be a camel caravan. Fortunately, that does not happen very often.

So, you have no way to know what kind of plane the company you chose will use, nor whether it will be a new plane or an old one, nor whether it has had maintenance problems in the past. For instance, the people who boarded Aloha Airlines flight 243 from Hilo to Honolulu in 1988 had no way of knowing that the plane—an old version of the Boeing 737—had a serious problem. Having been employed for several years on that short route, it had undergone a much larger number of pressurization and depressurization cycles than similar planes employed on longer hauls. These numerous cycles had weakened the hull and, as a result, the plane lost part of its fuselage in mid-flight. It was another case of critical failure generated by the mechanism of the expansion of a Griffith crack. In that case, fortunately, the pilots managed to land the damaged plane in Honolulu, still in one piece, although minus a big chunk of the fuselage. The pictures taken after the landing show the passengers still sitting in their seats in the open, as if the plane had been turned into a convertible car. We can only vaguely imagine how these people must have felt finding themselves, literally, sitting in the middle of the sky when the plane opened up like a tin can. Sadly, one of the flight attendants died when she was sucked out of the plane, but the survival of the other passengers and crew was nothing less than a small miracle.

Occasionally, though, you do have a choice of which plane to board. You surely heard of the case of the crashes of two Boeing 737 “Max” planes in 2019, caused, probably, by a faulty design of the control software [9]. National regulatory agencies all over the world grounded all the 737 Max planes immediately, but in the US the plane continued to fly for some days. In that case, you had a choice on whether or not to fly with an airline that still used the Boeing 737 Max. You might have been paranoid enough to choose a European or a Chinese airline instead of an American one.

In the end, when we travel we tend to lock ourselves up inside metal boxes running on roads or flying in the sky at speeds such that crashes will often be fatal. Statistically, someone will have to be hit by this specific kind of Seneca cliff. This, too, is part of the rules of the universe.

Financial Collapses: Blockbuster Goes Bust

Fig. 3.4

Illustration from Act 4 of Shakespeare’s The Merchant of Venice. The evil moneylender Shylock demands a pound of flesh from Antonio, who is unable to repay his debt: an illustration of how harsh the penalties for insolvency were in ancient times

Imagine that the year is 2008 and that you are the CEO of Blockbuster: a large, international company specializing in movie rentals. At its peak, in 2004, Blockbuster employed more than 80 thousand people in almost 10,000 retail stores worldwide, with a global yearly revenue of some 6 billion dollars. But, in the following years, the company stopped growing. As CEO, you realize that there are problems, but it is also true that Blockbuster remains the top dog in the market. True, you have a competitor: a newcomer called “Netflix.” They are aggressive and they are growing. They have even proposed a merger, but why should you agree to merge with a smaller company? There is no reason for Blockbuster to make big changes: the current slowdown is just a temporary downturn; surely it can be remedied by trimming expenses and improving efficiency. Then, in 2009, Blockbuster suddenly loses more than 20% of its revenues. One year later, the company is bust and you are out of a job (Figs. 3.4 and 3.5).

Fig. 3.5

Data from [10]

The collapse of Blockbuster and the rise of Netflix.

Why did Blockbuster go down so fast? Mainly because the company was caught with a marketing strategy that had become obsolete. Greg Satell reports in “Forbes” [10]:

Blockbuster’s model had a weakness that wasn’t clear at the time. It earned an enormous amount of money by charging its customers late fees, which had become an important part of Blockbuster’s revenue model. The ugly truth—and the company’s Achilles heel—was that the company’s profits were highly dependent on penalizing its patrons.

Netflix had a different approach: ordering was done online, monthly subscriptions were flat-rate, and there were no late fees. Then, Netflix pioneered online streaming services; Blockbuster followed, but too late and with a less effective plan.

When things started going bad for Blockbuster, we can imagine the alarm bells ringing in the company headquarters. Surely there were meetings of managers desperately trying to “do something” to stave off the disaster. And, just as surely, a lot of “solutions” were devised and some put into practice. But it was too late: the management of Blockbuster had been taken by surprise and the deadly mechanism of reinforcing feedbacks had kicked in: the more money Blockbuster lost, the more debt it accumulated, and the accumulated debt made it ever more difficult for Blockbuster to offer good deals to its customers. And with customers leaving Blockbuster, more money was lost and more debt accumulated. Until the bitter end, in 2010.

Hemingway probably never heard of the concept of Seneca Collapse, but he described it perfectly well in his 1926 book, The Sun Also Rises:

“How did you go bankrupt?” Bill asked.

“Two ways,” Mike said. “Gradually and then suddenly.”

Debt, in itself, need not be a bad thing: you may argue that without debt society as a whole could not function. David Graeber provided a general history of debt in his book Debt: The First 5000 Years [11], showing how debt is the very essence of money, something that had first been argued by Mitchell Innes in 1914 [12]. But the problem with debt is that it accumulates, often beyond any practical possibility of repaying it. Then, debt becomes insolvency, and that’s bad. In all human societies, not being able to keep one’s promises is a serious breach of trust, something that may destroy the very fabric of a family, a company, or an entire state. Monetary insolvency is just a quantified version of breaking a promise.

In ancient times, people unable to repay their debts faced harsh laws and customs that we could term “draconian.” The early Roman laws were based on the concept of manus iniectio (literally, the “laying on of a hand”), which meant that the insolvent debtor could be physically punished, perhaps killed or reduced to slavery. A remnant of these ancient laws can be found in Shakespeare’s The Merchant of Venice, when the moneylender Shylock insists on taking a pound of flesh from the protagonist, Antonio, when the latter cannot pay back his debt. Surely a bad case of a Seneca Collapse for him.

So harsh were the penalties for insolvency that most ancient law codes also included provisions for leniency. Already in Sumerian times, more than four millennia ago, there existed a custom called Ama-gi (or Amar-gi) [13], a term translated as “freedom” but, literally, “return to the mother,” that involved the periodic wiping out of all debts. The Jews had similar traditions with the Shemitah (the Sabbatical year) and the Jubilee (every seven sabbatical years), when various obligations were canceled, including debts. These periodic cleanups had the function of avoiding the excessive accumulation of debt.

The Jubilee was a good idea, but it carried a big problem: when the year of the cleanup was approaching, nobody would lend anything to anyone, knowing that the credit would soon be canceled. That was probably the reason why Rabbi Hillel the Elder introduced the prozbul rule during the 1st century BCE. It allowed contracts to include an explicit clause that made the debt immune from the periodic wiping out of the Shemitah.

Other kinds of legislation did not involve the erasure of all debts but reduced the penalties on the insolvent debtor. Already during Roman Imperial times, the punishment for insolvency was considerably softened in comparison with earlier times. During the Middle Ages in Europe, the penalty for insolvency could be limited to a public humiliation carried out on a special stone called lapis scandali [14], the “stone of scandal.” The debtor was forced to sit, naked, on the stone while declaring loudly that he forfeited all his goods to his creditors. That was humiliating, surely, but not as bad as having to give a pound of one’s own flesh to the creditor. The idea of softening the punishment for insolvency cuts across many societies and cultures and, in the Koran, in the Sura al-Baqarah (Sura of the Cow), we can read at verse 280: “And if someone is in hardship, then let there be postponement until a time of ease. But if you give from your right as charity, then it is better for you, if you only knew.” In modern times, bankruptcy laws vary and depend on the specific legal systems of different countries. The general idea, anyway, is always the same: to soften the impact of insolvency on both the debtor and the creditor.

Current bankruptcy laws are surely not perfect, but they are badly needed, since financial collapse is a very common event. According to Eric Wagner, writing in Forbes [15], 80% of startup companies fail within the first 18 months. Bankruptcy is normally imposed by court order. The court appoints a bankruptcy trustee who then liquidates the assets of the insolvent company or person and distributes the proceeds to the debtor’s creditors. Then, the bankrupt person or company can start anew, at least in theory. In practice, it is not always possible to wipe out one’s debt so easily. For instance, in the United States, you may have obtained a federal student loan on the basis of the idea that the rates are low, but you may still find that you cannot pay your debt back. In this case, you may discover that the option of personal bankruptcy is not available to you, except in special cases. You are indebted for life, in a condition that starts looking like a form of slavery [16]. Much worse than having to sit naked on a stone for a while.

Even when it works the way it is designed to, bankruptcy may have bad consequences for everyone involved. Small-scale bankruptcies may lead to the foreclosure of one’s house, a serious trauma that may affect people for the rest of their lives. Debt and bankruptcy can result in symptoms of PTSD (post-traumatic stress disorder), leading to depression and even suicide [17]. Large-scale bankruptcies involve the loss of jobs for thousands of people and, at very large scales, the result may be political instability, civil wars, and more disasters. Clearly, financial collapses are a bad kind of Seneca collapse and, as a first approach, the problem boils down to avoiding getting caught in one. There is little that you can do if you live in a country that goes down in a major financial collapse: all you can do is try to survive as best you can. But, on a smaller scale, it is possible to take precautions.

There are many recipes you can find in books and on internet sites on how to invest your savings in such a way as to multiply them by a large factor and make you rich. An incredible number of these recipes are evident Ponzi schemes designed to siphon off your money. Just as an example among many, I can cite the scheme called “Quantum Code” [18]. At the moment I am writing this chapter, in early 2019, the web site of the Quantum Code company still exists and you can find it by googling various combinations of “quantum code,” “financial,” and “investing.” The clip they use to peddle their scheme is very well done and, over and over, you are shown all the perks of being rich: a personal jet plane, big cars, jewels, expensive trinkets, and more. Art, after all, is mostly based on some kind of make-believe process, and when we watch a play by Shakespeare we do not worry about whether Hamlet is a historical character or not. In this case, the actors playing the alleged financial tycoon “Michael Crawford” and his personal assistant, “Tasha,” do a superb acting job.

It is clear that the scheme is a scam from the first sentence you hear in the clip: “my name is Michael Crawford, yes, that guy you might have read about on Forbes and other financial magazines.” It takes less than one minute to verify that no one with that name is mentioned in Forbes or any other magazine as a financial tycoon or anything like that. Maybe a lot of people out there are unable to use search engines to debunk this kind of story. Still, anyone should be wary when hearing “Michael Crawford” telling them that he wants to make them millionaires in exchange for nothing, out of pure philanthropy. Don’t they have a grandmother who told them that “there ain’t no such thing as a free lunch”? So, how could anyone believe such a transparent scam?

But for everything that exists, there has to be a reason for it to exist. The fact that the Quantum Code clip is so easily debunked cannot be a bug; it must be a feature. The scam is so transparent that we can only imagine that the script of the clip was conceived from the beginning as sucker bait. Evidently, they want suckers and they make sure that those who fall for the trap are suckers. Indeed, it can be shown that the trick of pre-selecting suckers is the best strategy to optimize the effort of the scammer. We can read a discussion of this point in a paper by Herley [19] (here, “attack” refers to the decision of the scammer to engage with the target):

The endgame of many attacks require per-target effort. Thus when cost is non-zero each potential target represents an investment decision to an attacker. He invests effort in the hopes of payoff, but this decision is never flawless.

The general idea is that scammers tend to pre-select targets gullible enough to fall for a transparent form of scam and who are, therefore, nearly sure victims. After all, it is no different from the strategy that lions and leopards use when they choose their prey among the weak and the isolated.

Of course, different people need to be cheated differently. A cursory examination of the World Wide Web shows that there exists a whole market of financial scams “graded” for different customers. An extreme case of obvious scams is the “Nigerian Scam,” sometimes known as the “Nigerian 419” scam, since the first wave of these scams came from Nigeria and “419” is the section of Nigeria’s Criminal Code which (in theory) outlaws them. It works like this: the victim receives a message telling an elaborate story about large amounts of money trapped in a bank during events such as civil wars or coups, or maybe because of an inheritance blocked by government restrictions or taxes. The scammer then offers the victim a large share of that money in exchange for help in transferring it out of the country.

I don’t think anyone among the readers of this book needs to be warned not to fall for a scam as obvious as the Nigerian 419 but, as I said, there is a whole zoo of scams at various levels. The Quantum Code is an easily detectable one, although it is much more sophisticated than the Nigerian 419. Climbing up the ladder, we find theoretically serious schemes such as the various kinds of “hedge funds.” The idea of these funds is to use sophisticated risk management techniques in order to diversify investments and reduce the risks for investors—today, hedge funds manage several trillion dollars worldwide. However, it is debatable whether these funds can really protect investors from systemic risks such as a global market collapse, as happened in 2008. In addition, according to Nassim Taleb [20], hedge funds are vulnerable to the “black swan” collapse; in this sense, they have something in common with the “martingale strategy” at roulette: doubling the bet after each loss. Just as with the martingale strategy played at the roulette table, hedge funds may only trade risks: they may reduce the frequency of small losses in exchange for a rare, but not impossible, large loss.
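To illustrate the point, here is a minimal sketch (my own toy simulation, not an analysis of any actual hedge fund) of the martingale strategy at a European roulette table: most sessions end with a small gain, while a few end with the loss of most of the bankroll.

```python
import random

def martingale_session(bankroll=1000, base_bet=1, p_win=18/37, rounds=200):
    """Bet on red, doubling the stake after each loss; return the final bankroll."""
    bet = base_bet
    for _ in range(rounds):
        if bet > bankroll:      # cannot cover the doubled bet: the session is ruined
            break
        if random.random() < p_win:
            bankroll += bet     # a win recovers all previous losses plus the base bet
            bet = base_bet
        else:
            bankroll -= bet     # a loss: double the stake and try again
            bet *= 2
    return bankroll

random.seed(1)
results = [martingale_session() for _ in range(10_000)]
ahead = sum(r > 1000 for r in results) / len(results)
ruined = sum(r < 500 for r in results) / len(results)
print(f"Sessions ending ahead: {ahead:.1%}")
print(f"Sessions losing at least half the bankroll: {ruined:.1%}")
```

The many small wins do not compensate, on average, for the rare catastrophic losses: the expected value of the game remains negative.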

A good example, here, is the case of Amaranth Advisors. We can read in “Investopedia” [21] that

After attracting $9 billion worth of assets under management, the hedge fund’s energy trading strategy failed as it lost over $6 billion on natural gas futures in 2006. Faced with faulty risk models and weak natural gas prices due to mild winter conditions and a meek hurricane season, gas prices did not rebound to the required level to generate profits for the firm, and $5 billion dollars were lost within a single week. Following an intensive investigation by the Commodity Futures Trading Commission, Amaranth was charged with the attempted manipulation of natural gas futures prices.

In this field, we all have to be very careful because none of us is immune to the Dunning-Kruger trap [22]. It is a syndrome that makes people think they are smarter and more knowledgeable than they really are. And no matter how smart you are, there is probably a scam exactly tailored for you, somewhere. I could tell you stories about my own experience, even though, fortunately, they never involved major financial losses. But every time I hated myself for having been so naive as to fall for such obvious tricks—and yet I did. But so is life; they say that a sucker is born every minute, and every one of us can be a sucker in some circumstances.

There is one more kind of financial Ponzi scheme worth a note here: “technological scams,” a field in which I can claim a certain degree of experience because of my job as a scientific researcher. This kind of scam is based on the widespread idea that technology can produce miracles. That, in turn, is based on the fact that during the 20th century we saw the development of new technologies that could be described as nearly miraculous: think of antibiotics, nuclear power, electronic devices, and much more. But that does not mean technological miracles can be obtained at will. The basic rule remains valid: progress is based on “1% inspiration and 99% perspiration.” Professionals in innovation know this, but ordinary people often do not, and their naive faith in advanced technology may make them easy victims of technological scams.

There are plenty of people and companies claiming to possess wonderful technologies able to solve this or that world problem. Some of these ideas are serious ones, proposed by serious people, that deserve attention for future development. But many are over-hyped and, in not a few cases, they are outright scams. The fauna in this area includes a variety of types, from the solitary mad scientist to well-intentioned but misguided efforts destined to fail because of the realities of physics or of the market.

In the category of the “mad scientists,” a mention should surely go to the Italian inventor Andrea Rossi, known for his “energy catalyzer” or “E-Cat,” a device supposed to produce energy by the nuclear fusion of hydrogen (or perhaps of some other element, or perhaps none at all) [23]. Surely, Rossi has a certain knack for promoting himself and his ideas. He succeeded in peddling his E-Cat to the Department of Physics of the University of Bologna [24], causing the members of this ancient and respected institution a considerable loss of prestige. Using his association with the university as a certificate of seriousness, Rossi promoted his invention, which streaked through the Web like a bright meteor and for a while even reached the mainstream media. Today (2019), interest in the E-Cat seems to have faded away, even though Mr. Rossi is still actively promoting it.

Rossi’s scheme is a typical example of many similar ones I have seen in my career. It goes like this: someone shows up at the door of a university department or of a research institution. The person offers the researchers a hefty grant to test and improve the wonderful process he or his company is developing. If the university or the institute accepts the grant, the money involved may or may not be paid, but the inventor(s) will use the grant to claim that the idea has been validated by the university or the research institute. Rossi had promised one million euros to the University of Bologna, which he never paid. It is reported that he tried to play the same trick with NASA [25].

Something similar happened to me. Years ago, someone asked the University of Florence to test a new method to produce ultra-pure silicon that his company had been developing. It looked like a serious proposal and the physics involved was sound. So, we accepted the grant and two researchers of my group worked on the subject for about a year. We found that the process worked, at least at the laboratory scale, and that it surely deserved more effort to be developed at the industrial level. But we soon discovered that the proposers had no intention of exploiting the new process: they only wanted cash from the government and from some big investors. And they did not even want to pay us. Fortunately, the legal office of my university was able to force them to shell out most of the money they had promised in the contract. Afterward, we never heard from them again.

These stories illustrate how difficult it is to invest in technological schemes: some are scams, many are just bad ideas, and even a good idea can turn into a scam if the people promoting it are financial sharks. As someone said, there are three ways to ruin oneself: the most pleasant one is with women, gambling is the fastest, high technology is the surest.

Returning to the problem of bankruptcy: clearly, insolvency is one of those elements of our society that we would like to ignore but that may badly affect us at any time during our lives. But what is insolvency, actually? And why does it exist? In standard economics, bankruptcy is dealt with in the two main branches of the field. “Microeconomics” studies the behavior of individuals and firms in making decisions about allocating resources, structuring production, and similar matters. “Macroeconomics” takes a larger view, studying the economy as a whole, in particular in terms of the effects of government policy decisions.

Microeconomics uses a variety of models aimed at finding the optimal values of the parameters of a firm or of a process. It may also take a qualitative turn when it examines what decisions managers make to steer their companies through the perilous waters of that entity we call “the market.” It is the same challenge faced by individuals and families trying to navigate a difficult world: paying the mortgage on the house, feeding the children, repairing the car, all that. Bankruptcy may ensue because someone makes a wrong decision—as with Blockbuster’s failure to follow the evolution of the market. Or it may happen because something changes all of a sudden: say, one loses one’s job and cannot find a new one. Overall, microeconomics gives us many examples from which to learn, but no general theory of why economic entities collapse.

Macroeconomics, instead, aims at understanding how the economic system works, and that includes financial collapses, obviously part of the system. Here, Hyman Minsky developed the “Financial Instability Hypothesis” [26], starting in the 1970s. I think Minsky’s idea can be summarized as “success breeds excess.” That is, during periods of economic growth people tend to become excessively optimistic: they borrow heavily from banks and find themselves in a spiral of debt that soon goes out of control. Then, investors want to be paid back, and that generates a cascade of reinforcing feedbacks that brings the whole company crumbling down in a classic Seneca Cliff. It looks very much like the story of Amelia the amoeba that we saw in an earlier chapter: a biological population that grows exponentially until it crashes down when the food runs out. In the case of a company, money plays the role of “food,” and uncontrolled growth makes the company run out of the food it needs.
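As a minimal illustration of this “success breeds excess” dynamic, here is a toy model of my own (an assumption made purely for illustration, not Minsky's actual formulation nor a model from this book): a firm keeps borrowing a fixed fraction of its income to expand, until servicing the accumulated debt eats up the income itself.

```python
# Toy "success breeds excess" model: income grows thanks to borrowed money,
# but the accumulated debt must be serviced; past a certain point the
# interest burden outruns the gains from expansion.
def simulate(years=40, income=100.0, debt=0.0, optimism=0.3, interest=0.08):
    for year in range(years):
        borrowing = optimism * income        # good times: borrow to expand
        debt += borrowing
        income += 0.5 * borrowing            # expansion pays off, in part
        income -= interest * debt            # but the debt must be serviced
        if income <= 0:                      # insolvency: the Seneca cliff
            return year, debt
    return None, debt

year, debt = simulate()
if year is not None:
    print(f"Insolvent after {year} years, with an accumulated debt of {debt:.0f}")
```

With these (arbitrary) parameters, the income grows for a few years, peaks, and then falls faster than it rose: gradually, and then suddenly.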

Eventually, the whole problem of financial collapses is the result of the existence of money. But what is money, exactly? Without going into the various theories of money that economists are still discussing, we can say that, once, money was something that everybody agreed on: a weight of precious metal. After all, the British currency is still named after a unit of weight, even though one pound (in monetary terms) does not weigh a pound (in physical terms). Still, until not too long ago, money was simply a token representing a physical entity: a certain weight of gold or silver. But things changed a lot with time and, during the 20th century, the convertibility of the dollar into precious metals became more theoretical than real. In 1971, President Nixon formally canceled it. From then on, money has been a purely virtual entity, created by central banks out of thin air. That people accept being paid for their work with something that does not physically exist is a little strange, if you think about it. But that does not change the fact that money is the backbone of society: it is exchanged, lent, borrowed, distributed, spent, and more. And, with money, there comes debt. With debt, there comes insolvency and, with it, bankruptcy and all the associated disasters.

Could we think of going a step beyond bankruptcy laws and imagine a financial system in which people cannot go bankrupt? This is an idea that floats nowadays in the world’s global consciousness. Perhaps the first proposal in this sense was made by Cory Doctorow in 2003 (during the pre-Facebook age) in his novel Down and Out in the Magic Kingdom [27], where he proposed a kind of “merit money,” called “Whuffie,” that people could accumulate through the good deeds they performed. This money was a form of credit, but it could not be spent—it just produced perks and advantages for its owner. It was something that prefigured the “credit score” that Facebook and other social media would later develop. Maybe Doctorow was inspired by Mark Twain’s story The Million Pound Bank Note (1893), where the protagonist finds that the mere possession of a banknote of enormous value entitles him to honors and goods without the need to spend it. But Doctorow may also have been thinking of the concept of personal honor, fashionable in less monetized times than ours. As an honorable man you were entitled to privileges, but enjoying them did not mean that your honor would be reduced as a consequence.

Later on, the idea of using the credit score of social media as a form of money was proposed, perhaps for the first time, by Solitaire Townsend in 2013 [28]. The Chinese government seems to have taken the idea seriously with its plan of implementing a nationwide system of social credit (shèhuì xìnyòng tǐxì) [29] that would “grade” all Chinese citizens on a merit score. You get positive points for being a good citizen: helping an old lady cross the street will bring you points from the lady and from the people who witnessed the deed. You get negative points when you do something bad, like getting a traffic ticket or just a bad report from someone who felt hurt by something you did. The Chinese social credit system can be seen as a form of money in the sense that it is based on the yin-yang opposition of debt and credit. For a Chinese citizen, having a sufficiently high social credit score is a prerequisite for being able to purchase certain things that, in the West, only money can buy: plane tickets, for instance. Something similar had been developed earlier in the Soviet Union, where members of the Soviet Communist Party were considered as having a higher credit score than others. They enjoyed non-monetary perks and services as part of the nomenklatura system, not so different from what we call the “establishment” in the West.

A “reputation currency” could work, at least to some extent. An advantage of such a system is that it may be arranged in such a way as to create no negative credit (no debt). Could we eliminate the bad consequences of insolvency in this way? And, in a single sweep, we would eliminate such things as theft, robbery, corruption, swindles, and all the crimes related to money. Nobody could ever steal your credit rating at gunpoint! But, obviously, there are problems with the idea. Doctorow says of his creation, the “Whuffie” money [30]:

Whuffie has all the problems of money, and then a bunch more that are unique to it. In Down and Out in the Magic Kingdom, we see how Whuffie – despite its claims to being ‘‘meritocratic’’ – ends up pooling up around sociopathic jerks who know how to flatter, cajole, or terrorize their way to the top. Once you have a lot of Whuffie – once a lot of people hold you to be reputable – other people bend over backwards to give you opportunities to do things that make you even more reputable, putting you in a position where you can speechify, lead, drive the golden spike, and generally take credit for everything that goes well, while blaming all the screw-ups on lesser mortals.

Reputation may be a terrible form of currency for those who find themselves at the wrong end of the scale. Have you ever been bullied as a teen? If so, you know how hard it can be to be the one at the lowest rung of the ladder. Bullying is known to be a cause of suicide among teenagers in Western countries [31]. The only way to escape is to behave in the most abject way toward the leaders of the group: flattering them and obeying their orders.

There exists at least one more case of a non-monetary currency system: scientific research. Scientists grade themselves on various scoring factors based on how popular their work is with other scientists, measured in various arcane ways, the most popular at present being the so-called “h-index.” If you are a young scientist, your career depends on your credit score, and that pushes you toward conformism. You cannot afford to criticize your senior colleagues, nor to propose ideas or theories that lie outside the commonly accepted wisdom in your field. That is a privilege you will earn only after getting tenure, and even then you will have to be careful not to displease the powerful dons who control the funding of research.
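For readers unfamiliar with it, here is a minimal sketch (my own summary of the usual definition, not a formula from this book) of how the score is computed: a researcher has an h-index of h if h of their papers have each been cited at least h times.

```python
# Compute the h-index from a list of citation counts, one entry per paper.
def h_index(citations):
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:   # the rank-th most cited paper has at least rank citations
            h = rank
        else:
            break
    return h

print(h_index([25, 8, 5, 4, 3, 1]))  # 4: four papers with at least 4 citations each
```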

Scientific ratings never go negative and, no matter how low the credit score of a senior scientist may be, it is rare for them to be hit by the equivalent of a bankruptcy sentence. This is probably the reason why it is often said that “science progresses one funeral at a time” (a quote attributed to the German physicist Max Planck). It means that old scientists tend to block scientific progress until the natural phenomenon of biological collapse removes them from the system. It would be an interesting reform to introduce “negative points” in science and fire the scientists who publish one or more truly bad papers. But, before that happens, the “Whuffie trap” that Doctorow described would kick in, pushing scientists toward the most abject conformism. That would surely destroy the spark of creativity that, despite all odds, science has still managed to maintain up to now.

At this point, you can see that bankruptcy is not a bug but a feature of the system. It is one of the checks that the system has to maintain the link between the virtual entity that is money and the physical entities that are the goods you can purchase. Like inflation, bankruptcy is an evolutionary tool that prevents the system from getting stuck in a no-win situation by removing the inefficient and obsolete entities that populate it. Were it not for bankruptcy, we would probably still have Blockbuster renting us DVDs and charging us a fee for returning them late. In the end, money may be a virtual entity, and you may also define it as the devil’s dung. But we are addicted to it and we keep playing the money game. Money is so deeply intertwined with the way our society works that we cannot even imagine how it could work without it. What could happen to us if a large financial collapse were to destroy the value of our mighty dollar? We cannot say for sure, but the whole Globalized Empire might crumble like a house of cards in a single, huge Seneca collapse.

Natural Disasters: Florence’s Great Flood

Fig. 3.6

(Photo by the author)

One of my books that survived the Florence flood of 1966: The Gold of Troy by Robert Payne. The illustration shows Sophia Schliemann, wife of the archaeologist Heinrich Schliemann, wearing the jewels found by her husband in the ruins of the city of Troy. You can see the dark spots of mud left by the water. The book still faintly smells of something undefinable: the typical smell of the days of the flood.

Not long ago, I was accompanying my daughter, who was looking for an apartment in Florence. Since her family includes three cats, she needed a little garden, so we were mainly visiting apartments on the ground floor. These places are always at risk of flooding, and I was using a GPS app on my cell phone to measure the height of the floor above sea level. The employees of the real estate agency accompanying us were often surprised and would ask me what I was doing. When I explained, they were bemused: flooding? In Florence? That can’t happen! (Fig. 3.6).

These young men and women, typically in their thirties, had no personal memory of the great flood that hit Florence in 1966. They knew that it had happened, yes, but they classed it as part of ancient history: barbarian invasions, the Black Death, the Crusades, and the like—events that took place in the remote past and that would not happen again. A flood of half a century before had no relevance in their daily planning.

It is a characteristic of natural disasters that they strike at intervals long enough for people to forget that they can and do strike. Flooding is one of those events, and the 1966 flood of Florence is probably already beyond the forgetting line. But it was a major event: not the only case of a major flood affecting a modern city, but one that threatened to destroy the art treasures kept in Florence since the Renaissance. The flood affected many ancient buildings and damaged precious works of art, generating great concern all over the world. Fortunately, the number of casualties was relatively small.

I witnessed the 1966 flood as a 14-year-old boy. It was one of those experiences that mark one’s life, even though my home was on relatively high ground and was not touched by the waters. But my father’s office was downtown, on the ground floor, and it was invaded by the murky waters that filled it nearly all the way to the ceiling. Fortunately, there was nobody there when the flood arrived, but it was the place where I kept most of my books. Most of them were turned into heaps of mushy paper and I still perfectly remember the smell of gasoline or kerosene that these remains of books gave off. Some could be restored and I still keep a few of them on my bookshelves.

The flood left a town in complete disarray. Nothing worked anymore: the shops had been flooded, the banks were closed, the sewage system was clogged with debris, there was no water in the buildings, no public transportation available, people’s cars were soaked in mud and would not start, many homes were without electric power. And the Italian government, taken by surprise, was slow in bringing help.

In the days that followed, the Florentines rolled up their sleeves and started working. For those who experienced it, it was an incredible surge of community spirit and reciprocal help. The bad-smelling mud was shoveled away and the slow work of cleaning up and restarting began. That also involved taking care of the flooded museums and ancient buildings with their art treasures. Soon, the cleaning effort ceased to be just a job for the citizens of Florence: people came from all over the world to help. They were called the “angels of the mud” and some of them were so taken by the vibrant atmosphere of the reconstruction effort that they never left. They married Florentines and many of them are still there, getting old in Florence and taking care of their children and grandchildren, by now Florentines, too.

The story of the Florence flood has a happy ending: the damage was limited and the city could be returned to its original condition. It is not always like this: natural disasters come in many kinds and can cause much worse damage as well as horrific loss of human life. Floods, hurricanes, forest fires, earthquakes, and other manifestations of Nature’s force are rare events, but also common enough that each one of us is likely to experience one or more of them during our lives. Often, although not always, the distribution of natural disasters tends to follow the Pareto law, as we discussed in an earlier chapter. That is, they tend to behave according to a mathematical formula in which the frequency of a disaster is proportional to its size raised to a negative exponent (a power law). Disasters become less probable the bigger they are, but there is a non-zero probability that even extremely large events will occur.
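In formula form, the statement is simply that the number of events at least as large as a given size falls off as a power of that size (a schematic relation; the value of the exponent depends on the kind of disaster):

```latex
% Schematic Pareto (power-law) relation for disaster sizes
N(\ge S) \;\propto\; S^{-\alpha}, \qquad \alpha > 0
```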

In practice, on the basis of historical data, you may be able to say that a certain disaster has a specific probability of striking your region, but that does not tell you when and where exactly it will take place. Imagine that the probability of, say, an earthquake of a certain size is 1% every year where you live. That means there is a 63% probability that the earthquake will strike within a century. But it might strike tomorrow morning, or after 99 years, or never over the next 100 years. It is nearly certain (a more than 99.9% chance) that it will strike within the next 1000 years, but that helps you little in planning for this possibility. So, you have to plan taking into account the worst-case hypothesis, which may be a good idea in general.
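The arithmetic behind these numbers is easy to check. A minimal sketch, under the simplifying assumption that each year is independent and carries the same 1% probability:

```python
def prob_within(years, annual_prob=0.01):
    """Probability of at least one event within `years`,
    assuming independent years with a constant annual probability."""
    return 1 - (1 - annual_prob) ** years

print(f"{prob_within(100):.0%}")   # about 63% within a century
print(f"{prob_within(1000):.3%}")  # about 99.996% within a millennium
```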

There exists a whole taxonomy of rare natural phenomena that can do great damage to people. We may start with earthquakes. The kind that destroys buildings is rare but, when large earthquakes occur, the consequences are usually disastrous. The strongest ever recorded is the “Great Chilean Earthquake,” which occurred on May 22, 1960, near Valdivia, in southern Chile. Its magnitude was measured as 9.5. It is an enormous value: the magnitude scale is logarithmic, so that, in terms of energy, each whole number increase corresponds to an increase of about 31.6 times the amount of energy released. The Valdivia earthquake did a lot of damage but, fortunately, caused relatively few victims, because it was preceded by a powerful foreshock that led many people to leave their homes before the main shock arrived.
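The factor of 31.6 comes from the standard relation between magnitude and radiated seismic energy, in which each unit of magnitude corresponds to a factor of 10 raised to the power 1.5 in energy:

```latex
% Energy–magnitude relation (Gutenberg–Richter): radiated energy E as a function of magnitude M
\log_{10} E = 1.5\,M + \mathrm{const.}
\quad\Longrightarrow\quad
\frac{E_{M+1}}{E_{M}} = 10^{1.5} \approx 31.6
```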

There are many more examples of destructive earthquakes and everyone knows about the San Andreas fault, which marks the boundary between the two plates that form California and which slowly slide against each other in irregular jolts. The disastrous San Francisco earthquake of 1906 is a reminder of how dangerous living in California can be, but there have been many more quakes in the area. Sometimes, it is said that California is waiting for “The Big One,” an earthquake so powerful that everything west of the San Andreas fault would slide into the Pacific Ocean (or, alternatively, that everything east of the fault would slide into the Atlantic Ocean). This is mostly folklore and media hype: it is true that earthquakes do occur in California and will keep occurring as the two plates keep moving, but there is no evidence that a humongous event, way larger than anything seen before, is brewing and will someday reduce San Francisco or Los Angeles to the status of park attractions to be visited by tourists wearing scuba gear.

California is just one sector of the great “Ring of Fire,” a geologically active region that circles the Pacific Ocean. Japan, across the ocean from California, is part of the ring and another earthquake-prone, highly populated region. The great ring is just one of the many geologically active areas of the planet: those at risk also include the Mediterranean region, the Middle East, Central Asia and the Himalayan region, and more. An especially active region is the “Great Rift Valley” that goes from the Middle East to Central Africa. In the Afar region of Ethiopia, a geological process is at work splitting the African Plate into two new separate plates, the Nubian and the Somali plates. What is now a low-lying valley will become a new sea, perhaps part of an ocean, but that will take millions of years.

There is no place on the whole planet that can be said to be completely free from earthquakes, but some places are surely quieter than others. In general, you are unlikely to experience major earthquakes if you live in the stable interiors of North America, Australia, or northern Eurasia, but note that even these regions are not immune: a medium-sized earthquake centered in southern Illinois shook Chicago in 1968. In general, the danger of earthquakes is nowhere so large that you should relocate far away from seismic areas, but you surely cannot ignore it if you live in one. In all cases, it is good practice to take precautions by living in a solid house (if you can) and by keeping the emergency equipment needed to cope with the disruption of supplies and services after an earthquake: food, electricity, clean water, and more.

A phenomenon directly related to earthquakes is the tsunami, which takes place when an earthquake shakes the seafloor. That can perturb a large mass of water, which then moves across the ocean. When this water arrives near a coast, it takes the form of a wave, sometimes a very large one, that crashes on the shore and may destroy everything for miles inland. The most tsunami-prone regions in the world are probably those on the coasts of the Pacific Ocean, along the great Ring of Fire; the most recent major tsunami in this region was associated with the Tohoku earthquake that struck Japan in 2011. The Indian Ocean is also a tsunami-prone region and you may remember the 2004 tsunami that struck Indonesia and the surrounding coasts, killing some 230,000 people. The Mediterranean region is geologically active and is also subject to tsunamis: a relatively recent one struck the Italian cities of Messina and Reggio in 1908, causing a large number of victims. Much earlier, some 3600 years ago, a large volcanic eruption took place on the island of Thera (today called Santorini) in the southern Aegean Sea. The related tsunami may have destroyed the Minoan civilization and generated the legend of the sinking of Atlantis.

The Atlantic Ocean is less active than other oceans in terms of moving tectonic plates, but it is nevertheless subject to tsunamis caused by coastal landslides. A source of a possible future Atlantic tsunami could be the collapse of a large section of the island of La Palma, one of the Canary Islands in the Eastern Atlantic. It could happen as the result of an eruption of the Cumbre Vieja volcano. According to some estimates, if this landslide were to occur, the result would be a wall of water up to 300 feet high moving across the Atlantic and reaching the East Coast of the United States in about nine hours [32]. The resulting damage to the coastal cities would be unimaginable, a true super Seneca Collapse. But we have no idea of whether and when such a disaster might take place.

If you live in a geologically active zone, you should also worry about volcanoes, probably the most destructive phenomenon generated by purely geological forces. A well-known example is the destruction of the cities of Pompeii and Herculaneum in Southern Italy in Roman times, in 79 CE. These cities were buried under a thick layer of ash; excavations are still ongoing today and archaeologists keep finding traces of the bodies of the people who suffocated or were killed by the heat, sometimes still in the position they had assumed when they died.

An even more spectacular case of a volcanic disaster is that of Toba, a “supervolcano” that erupted about 75,000 years ago at the site of present-day Lake Toba in Sumatra, Indonesia. It was perhaps the largest eruption known to have taken place in an age when our human ancestors had a chance to experience its effects. Some evidence indicates that the enormous mass of dust pushed into the atmosphere generated a “volcanic winter” that may have led to the disappearance of a large fraction of the human population of the time. That led to the “genetic bottleneck theory,” proposed by a number of scientists [33], which would explain why humans today show relatively little genetic differentiation. It may be because we are all descendants of the small group of people who survived the Toba eruption and who then spread all over the planet in an interesting case of “Seneca rebound.” But, no matter how fascinating the bottleneck theory may be, at present it seems that the data do not support it [34]. Whatever the case, if a Toba-size eruption were to take place nowadays, it would likely destroy our whole civilization, maybe even causing the extinction of the human species. We can only hope that the “fat tail” of the Pareto distribution for volcanic eruptions is not so fat as to give a significant probability to such an event.

The Toba supervolcano eruption was related to plate tectonic activity [35], just like most of the volcanoes active today. But there is also another kind of volcano, related to the “hot spots” in the Earth’s crust [36]. These volcanoes are generated by plumes of hot magma that start in the asthenosphere, the region of the Earth’s mantle that lies just below the crust. These plumes look a little like the whirls and bubbles of a “lava lamp,” although they move at an enormously slower speed. The flood of lava generated by a hot spot is often gentler and slower-moving than that of the plate-generated, “normal” volcanoes, because the magma is basaltic (or “mafic,” in the geologists’ jargon) and contains smaller amounts of dissolved gases than the “felsic” (again, the geologists’ jargon) magma of the other kind. So, the outflow of basaltic lava is less subject to explosive outbursts.

A well-known hot-spot volcano is the one that generated the Hawai’i archipelago over the past 5–6 million years, one island at a time, as the ocean floor moved over the hot spot (which may also have been moving). Today, you can see the hot spot in action at Kilauea, on the southeastern side of the island of Hawai’i. The latest burst of eruptions took place in 2018; it was not so gentle, since it destroyed several homes and caused an earthquake, but no victims were reported. The Lōihi volcano, off the southeastern coast of Hawai’i, is the latest incarnation of the underground hot spot. It is presently undersea, but gradually growing and rising, and it is expected to begin emerging above sea level about 10,000–100,000 years from now. It will surely be something spectacular to watch for those of our descendants who will have a chance to be there.

The Yellowstone hotspot also deserves a mention [37]. Right now, it is quiet; there is not even a volcanic cone to be seen. But, over the past 18 million years or so, the hotspot has generated a succession of violent eruptions and floods of basaltic lava; at least a dozen of them were so massive that they are classified as “super-eruptions.” The hotspot could become active again and generate a new supervolcano that could rival the ancient Toba in terms of global destruction, or even be much worse. It is another entry in the list of events that could destroy human civilization and even cause the extinction of the human species. But we cannot predict when (or whether) that will take place.

This review of giant natural disasters cannot neglect the possibility of impacts from meteorites, usually called asteroids when they are large. Asteroids that fall on the Earth can cause enormous damage, so much so that they are a popular subject of disaster movies. It is true, indeed, that the geological record shows several cases of large meteorites falling on the Earth’s surface. An especially spectacular one was the Chicxulub meteorite, which hit the Yucatán peninsula, in Mexico, some 66 million years ago. The impact is commonly said to have caused the mass extinction that included the disappearance of the non-avian dinosaurs at the boundary between the Cretaceous and the Paleogene periods (the K–Pg boundary). This idea had become almost universally accepted up to a decade ago, but it is now debated and often rejected: it seems clear that the dinosaurs were destroyed by a different phenomenon, a giant basaltic eruption that took place in the region we now call the Deccan, in India [38]. In any case, the risk associated with falling meteorites is extremely low and there are no reliable reports of anyone having been killed by one in modern times.

Geology-related disasters are often classed as “acts of God,” meaning that they are completely unrelated to human actions, but this is not always the case. The human influence on the Earth system is by now so large that it affects even geological phenomena. For instance, the slow melting of glaciers caused by global warming, largely the result of human activities, is generating a phenomenon called “isostatic rebound” in the regions covered by ice caps. It works like this: the tectonic plate below the glacier “floats” over the fluid asthenosphere, below the crust, just like all tectonic plates. The weight of the ice sheet pushes the plate down but, as the sheet thins, the plate moves up. It is a very slow phenomenon, but it destabilizes the whole area and may generate earthquakes, volcanic eruptions, and sometimes tsunamis.

Many more natural phenomena are only partially natural: they may be triggered by human activity and the damage they generate may be increased by unwise human practices. Among these, we can list forest fires and hurricanes, often enhanced by global warming. A hotter atmosphere may make hurricanes more destructive, and it can also make forest fires more frequent and more deadly, both because of the higher temperatures and because of droughts. In recent times, California has been struck by several major fires: these are natural phenomena, but human activities can enhance their frequency and intensity in various ways. One is the change in weather patterns caused by climate change; another is poor forest management. The “Oakland Firestorm” of 1991 is an example: the fire was enhanced by the introduction into California of non-native, easily flammable eucalyptus trees [39].

Landslides, too, can be triggered by human activities such as deforestation or poor soil management. A good example is the landslide that struck the town of Sarno, in Italy, in 1998, causing the death of 160 people, engulfed by a giant mudslide coming down from the surrounding mountains. It was enhanced by the deforestation of the hills around the town. In some cases, landslides are wholly human-made: for instance, in 1966, the collapse of a pile of coal mining debris at Aberfan, in Wales, killed 116 children and 28 adults, most of them in a school that stood in the path of the slide.

Do we see any trends in the number and lethality of natural disasters? The data reported by Our World in Data [40] show that the sum of all reported disasters (of all kinds) reveals an increasing trend up to around the year 2000, after which it starts going down. If we examine the data for different types of disasters, we see that phenomena as diverse as earthquakes, wildfires, and floods show this trend: their frequency goes up until the turn of the century and then declines or stabilizes. The trend for the number of fatalities is less clear-cut: in some cases, such as deaths caused by extreme temperatures, we see an increase after the turn of the century, while in others, such as deaths caused by droughts, we see a clear decline starting in the 1920s and still ongoing. Finally, if you are worried about being struck by lightning, you may be happy to know that the data show that the number of fatalities in the US declined by a factor of almost 100 between 1900 and 2015.

These data are not easy to interpret: what made the frequency of many natural disasters go first up and then down? Did the Earth’s weather patterns change? Or was it just a question of different reporting criteria? It is hard to say, mainly because the damage caused by natural disasters depends on several factors: not just the intensity of the forces of nature, but also how well people are prepared to cope with the event. So, we cannot know whether there will be larger changes in the coming decades: the ongoing global warming may make weather-related phenomena more destructive and more frequent, but that cannot be said with absolute certainty. What we can say is that, overall, natural disasters in the world have caused some 70,000 victims per year during the decade of the 2010s, so far. If the trend does not change, it means that, on average, your probability of dying in any of the several possible “acts of God,” from floods to volcanic eruptions, is of the order of one in a hundred thousand per year (some 70,000 victims out of a world population of more than seven billion).

So, should you be worried? Yes, absolutely. First of all because, although the probability of dying is low, the probability of suffering heavy damage is much higher. Here, we may again remember the story of the statistician who drowned in a river with an average depth of 1.5 meters! If you were living in Florence in 1966, your probability of dying because of the flood was about 0.003% (17 victims out of a population of about 500,000 people), but almost everyone in Florence was negatively affected to some degree. Then, your probability of dying or suffering heavy damage in a major natural disaster depends very much on where you live. If you live in a mountainous area of a continental region, you should not be worried about tsunamis, unless you think of the movie 2012, in which the tsunami waves were taller than the Himalaya mountains! But if you live on an island in the Pacific or Indian Oceans then, yes, a major tsunami has a significant chance of striking during your lifetime.

So, it makes sense to plan ahead for the possibility of major disasters. As usual with critical phenomena, it is not possible to predict exactly where a natural disaster will strike, nor how large it will be. That does not mean you should not apply the wise strategy proposed by Captain Kirk of the Federation’s starship Enterprise: “I never put myself in a no-win situation.” It is a restatement of the best strategy for winning at Russian roulette: just don’t play! It is the strategy that I applied when my daughter was looking for an apartment in Florence for her family, using a GPS app to make sure that the apartment was high enough above sea level that it did not risk being flooded in case of an event such as the 1966 flood. Maybe it will not happen during my daughter’s lifetime, but why take chances? If you live in California, you should at least avoid buying a house that stands right across the San Andreas fault or in the midst of a eucalyptus forest.

If you can’t avoid living in a dangerous area, then your best bet for survival is to be ready. If you happen to face a forest fire racing toward your home, your best hope is to have your car ready and to have planned in advance the route that will take you away from the risk zone. It happened to a friend of mine who was living in Oakland, California, at the time of the firestorm of 1991. She was at home when she saw a giant wall of fire surging from the woods. She did not even have time to put her shoes on: she ran for her life in her slippers, managed to start her car, and outran the firestorm. Then, if you live in “Tornado Alley” in the central US, you should not simply content yourself with the fact that the probability of being killed by a tornado is low even in that area, probably less than one in a million per year [41]. Most people living in that region equip their home with a “storm cellar,” an underground refuge. You may never have to use it, but not having it is a risk not worth taking.

Overall, natural disasters are highly destructive but, mercifully, they are reasonably quick to go away, at least in their most intense form. After the earthquake has struck, the flood waters have retreated, the twister has faded into the clouds, there comes the moment to look around, assess the damage, and plan for rebuilding. Here, an important factor is scale. Small-scale disasters, such as tornadoes and forest fires, are spectacular but localized. In most cases, they may destroy a few homes, but the overall damage is limited. Then, if the people who have been hit have good insurance, they can rebuild their homes. This is what happened with the Oakland firestorm of 1991, in California: the fire burned to cinders some very expensive homes on the hills of the area but, when the time came to rebuild them, there was no need for the kind of communal solidarity that the Florentines had shown in rebuilding their city, something probably alien to the cultural orientation of the residents of the Oakland hills anyway [39]. Instead, most owners had insurance policies that allowed them to rebuild even larger homes, sometimes extravagant ones. That was the case of my friend whose home had been destroyed in that fire: she and her husband were able to build themselves a better and bigger house. What they were most sorry about was having lost all the records of their previous life: the pictures of their wedding, of their children, of their families. Today, you probably keep all those pictures in the cloud, so even a wildfire destroying your home would not erase the memories of a lifetime. Nowadays, you are probably more likely to lose them because your cloud provider loses your records: it happened with MySpace, which lost the records of some 15 million users in 2019 [42]. It is another kind of Seneca collapse, this one, fortunately, just virtual.

A different story is when the disaster is so large that the resources needed to rebuild are insufficient. An example of a disaster large enough to put a whole society to the test is that of Hurricane María, which struck Puerto Rico in 2017. It was not an exceptionally strong hurricane, and not even an unexpected one. It was just rain, rain, rain. Initially, it seemed that the effects had been limited, but the true size of the disaster became apparent in the months that followed. One reason was the poor response of the authorities but, really, the problem was that Puerto Rico was, and remains, poor. Not only had poverty weakened the island’s infrastructure, but the poor lacked the extra resources that are needed when people have to recover from a catastrophe. On this subject, let me report an excerpt from Ariel Lugo’s book Social-Ecological-Technological Effects of Hurricane María on Puerto Rico [6] (p. 49):

Before María the consensus was to make government like a private enterprise, without realizing that the government tends to provide more benefits because it has a service mandate not a profit motive. Privatization makes money for entrepreneurs, lifts the economic status of the politicians that selected them, but often dramatically fails in public services to the citizens, particularly when faced with extreme events. A government operated according to the profit motive of the private sector will use cost-effectiveness as the criterion for action as opposed to public service and public good. The profit of the privatized agency or government sector is secured while portions of the public, which help to underwrite that profit, are left to fend for themselves.

Most people do not realize that, when examined objectively, government, not private entities, tends to deliver services most efficiently, that is, at less cost per unit benefit. And it gets a lot worse following an extreme event.

Here, Lugo hits a fundamental point: we are becoming more vulnerable to catastrophes because of our emphasis on privatizing everything in the name of efficiency. In this way, we have no resources left to cope with extreme events, nor to help the people who are hit and cannot pay for what they need. We are making the social network tighter and more efficient, but at the expense of resilience. More than that, the very fabric of society is being destroyed by the emphasis on efficiency: cooperation and trust among citizens disappear just when collaboration, rather than competition, becomes the fundamental virtue needed to attain the resilience we need to survive and rebound after a catastrophe. Lugo also notes that (p. 48):

When faced with the overwhelming effects of an extreme event, the human spirit and will rise to the occasion. Many Puertoricans did not wait for external aid and choose instead to rise to help themselves and their neighbors.

Could this kind of resilience be planned for even before a disaster strikes? Some people appear to be engaged in exactly this kind of planning. In the US, there is the movement of the “preppers,” or “survivalists,” people preparing for whatever major disaster may strike them, including the end of the world as we know it (TEOTWAWKI). In many cases, preppers emphasize individual or single-family preparation rather than community resilience: they may stockpile food, supplies, and weapons in their cellars in expectation of the worst to come. A different approach is more common in Europe with the “Transition Towns” movement [43], which emphasizes collective action to preserve the local social network. These are experiments in building community-level resilience by means of collaboration, local resources, local agriculture, and sometimes local currencies. It does not seem that survivalists or transition-town people have been put to the test yet by a true emergency, so we do not know how well these ideas will withstand contact with reality. In principle, both may be good in some circumstances but, as usual, we move into the future without being sure that we are taking the right direction.

Mineral Collapses: The Coming Oil Crisis?

Fig. 3.7
figure 7

The author, Ugo Bardi, discussing the future of the oil industry at a meeting of the Club of Rome in Vienna, 2017. You can see on the whiteboard that his prediction was not very optimistic: it is the Seneca curve

In 2003, I attended my first conference on oil depletion in Paris. There, I met the larger-than-life figures of the experts who had revamped global interest in oil depletion and founded the Association for the Study of Peak Oil (ASPO): Colin Campbell, Jean Laherrère, Ali Morteza Samsam Bakhtiari, Matthew Simmons, and many others. In Paris, everything looked new, remarkable, exciting: we were riding a wave of interest in oil depletion that had started in 1998 with an article by Campbell and Laherrère in Scientific American [44] titled “The End of Cheap Oil.” The resonance of that article had been enormous: amid harsh criticism and enthusiastic acceptance, the term that Colin Campbell had coined, “peak oil,” had rapidly gained worldwide popularity.

For me, the Paris conference was the start of my interest in collapse. True, the peak oil concept did not imply that the decline would be faster than the growth. But, already in 2005, I published my first paper on oil depletion [45], finding the conditions that lead to what I called a “sawtooth-shaped” collapsing curve. The idea of calling it the “Seneca curve” came much later. I was not the only one who found the concept of peak oil fascinating. The importance of oil as the main support of civilization was well known, but the idea that oil was becoming scarce provided a new interpretation of past events, from the great oil crisis of the 1970s to the 2001 attacks against the World Trade Center in New York. Peak oil had a certain ring of apocalypse to it, especially because many people understood the peak to be the same thing as running out of oil. Not everybody misunderstood the concept so badly but, it was said, peak oil meant the end of the world as we know it, and we had better be aware of the punishment that the dark divinities of the black liquid found underground were preparing for us.

The popularity of the concept of peak oil rose to high levels in the early 2000s, but it was short-lived. It may have peaked around 2006; then, well before it could be said that any of the peak oil forecasts had been proved right or wrong, it started declining [46]. Not even the great oil price spike of 2008 generated more than a transient blip of interest. In time, a new wave of optimism came and the concept of peak oil became politically unmentionable, sometimes a source of scorn for those who still dared to bring it up.

The peak oil parable is just an example of how human worries tend to go in cycles. James Schlesinger, the first US Secretary of Energy, said that people have only two modes of operation: complacency and panic. It may also be that these two modes tend to go in cycles, periodically replacing each other. So, the wave of interest in oil depletion that started in 1998 was not the first: the idea had ebbed and flowed all along the great cycle of exploitation of crude oil. Already in the 1950s, the American geologist Marion King Hubbert had proposed his “bell-shaped curve,” generating an early cycle of interest that faded in the 1980s and was then buried by the wave of enthusiasm for the Internet and the dot-com economy of the 1990s. It may very well be that the current complacency phase will give rise to a new phase of panic in the near future. And, in the case of crude oil, the term “panic” is justified. Without liquid fuels, everything in the world would stop. Recently, Alice Friedemann published a study on this subject, When Trucks Stop Running [47], and the title alone tells the whole story. No fuels, no trucks, no food, no civilization. Could it really happen?

It could. Something similar already happened with the great “oil crisis” of the 1970s, which for a period seemed to destroy the very foundations of Western civilization. If you experienced that crisis, you cannot forget what happened: gas prices suddenly skyrocketing, long lines at the gas stations, and governments enacting all sorts of measures, from lower speed limits on highways to “odd-even” rationing schemes, support for the production of small cars, and more. The shock to the financial system was even worse: recession and double-digit inflation. It was a disaster for a world that had experienced, up to then, more than two decades of uninterrupted economic growth. The data show how world oil production declined faster than it had been growing before the peak. It was a clear case of a Seneca curve (Fig. 3.8).

Fig. 3.8
figure 8

Oil production at the time of the great oil crisis of the 1970s. Data from IEA

Eventually, the crisis that had started in the 1970s abated. With the development of new oil fields, such as those of the North Sea, production started to grow again. By the mid-1980s, pre-crisis production levels had been reached again and then surpassed. In the decades that followed, the world oil market turned out to be remarkably resilient: we saw wars, collapses, international crises, and all sorts of changes and disasters. But crude oil and natural gas kept flowing everywhere in the world.

Today, the events of the 1970s are part of the “memory fog” of humankind, a fog that turns into ancient history everything older than a few years (or even less than that). So, the story of the oil crisis was turned into something that looks like an ancient myth of good versus evil. The way it is often told, it involves a group of power-hungry Arab sheikhs (or maybe ayatollahs) who attempted to take over the world using oil as a weapon. But their efforts were eventually thwarted by the good people of the West, who found new sources of oil. From then on, everything has been well in the best of all possible worlds.

There are some elements of truth in this simplified version of the story. If we look at the global production data over the past century or so, we can see how the increase has been nearly continuous. True, the chart is optimistic because it reports volumes produced and not energy—which is what we are interested in. But, overall, the growth of oil production is a real phenomenon (Fig. 3.9).

Fig. 3.9
figure 9

World oil (all liquids) production. Data from “The Shift Project” https://theshiftproject.org

But there remains in our collective consciousness a deep unease that derives from the realization of how fragile our prosperity is. Not for nothing was the so-called “Carter Doctrine” formulated during the oil crisis years. It stated that the Persian Gulf region is of vital interest to the United States and that any attempt by an outside force to gain control of it would be regarded as an assault on those interests, to be repelled by any means necessary. There is a logic in this attitude: a large fraction of the world’s oil reserves is located in this region. If something goes wrong with the oil production of one of the major Middle East producers (Iraq, Iran, or Saudi Arabia), it will affect not just the United States but the whole world. It seems that the world’s energy security hangs on political factors that may suddenly create unexpected problems: this is what happened with the great oil crisis of the 1970s. And the question is: could it happen again?

To answer this question, we can start from a favorite sentence of Colin Campbell, one of the first proponents of the “peak oil” concept: “the availability of crude oil today depends on events that took place during the Jurassic period and that cannot be influenced by politics.” In other words, the supply of oil is finite, despite some politicians claiming the opposite [48] and despite the efforts of a group of vocal contrarians who try to push the idea that oil resources are really infinite, being continuously recreated by mysterious “abiotic processes” operating in the depths of the Earth [49]. It does not work that way: if you are an adult, you should know that after you have eaten your cake, you don’t have it anymore.

The oil industry seems to be perfectly aware of the limits of the available resources and spends considerable effort estimating the size of the available oil “cake.” Obviously, these efforts are stimulated by the fact that resources are a factor in attracting investments. As you can imagine, it is not an easy task to evaluate the amount of something that lies miles underground. But there exist sophisticated measurement technologies that, coupled with even more sophisticated statistical treatment of the data, allow the industry to produce reasonably accurate estimates of the resources expected to lie hidden underground.

The problem with resource estimates is not so much technical as political: the search for a top position in the pecking order of the oil world may lead some governments or company boards to “adjust” the results of their analyses. In a 1998 paper, Colin Campbell and Jean Laherrère noted how the estimates of the oil reserves of six Middle Eastern OPEC members showed an abrupt bump upward in the mid-1980s, adding a total of some 300 billion barrels of oil without any major discoveries of new fields having been reported. One can at least suspect that the estimates had been tweaked, and not a little, for political purposes. Western governments are not immune to exaggerated claims, either. As an example, much was made in the media in 2016 of the “discovery of new oil reserves” in Texas: about 20 billion barrels, calculated to be worth some $900 billion [50]. One problem was that it was not a “discovery” but simply a new estimate of the quantity of technically recoverable oil in known deposits. But there was a much bigger problem, which Arthur Berman noted [51]:

Where did the $900 billion value come from? Multiply 20 billion barrels times $45 per barrel and you get $900 billion. In other words, if the oil magically leaped out of the ground without the cost of drilling and completing wells; if there were no operating costs to produce it; if there were no taxes and no royalties.

According to the USGS’ input data, it would take 196,253 wells to produce the 20 billion barrels if it exists. At $7 million per well, that would cost almost $1.4 trillion in drilling and completion costs alone.

It would cost more than $1.4 trillion to generate $900 billion in revenue resulting in a net loss of $500 billion at $45 oil prices excluding all operating expenses, taxes and royalties–and no discounting.

That’s a discovery that no one can afford to make.

But there is an even bigger problem with reserve estimates. Assuming that they are correct, they tell you something about the volumes that you may be able to extract, but nothing about the cost of extraction. As you may imagine, that is more than a small problem: it is like evaluating the military power of a country simply by counting the number of soldiers it can field, neglecting their firepower and their willingness to fight. That is a mistake Saddam Hussein made when he tried to hold on to Kuwait in 1991, just one example among many. The fact that some poor guys with rifles stand in a trench, somewhere, does not mean that they will be able to fight effectively and, in the same way, the fact that some “extractable oil” exists, somewhere, does not mean it will be extracted, unless somebody is willing to pay the costs involved.

Despite the technical sophistication they deploy in the task, oil companies (just like all mining companies) seem to have little or no interest in using models that take into account the cost of extraction when estimating future production. The most sophisticated tool they normally use to peek into an uncertain future is the “reserve/production ratio” (R/P). It works by dividing the current estimated amount of reserves by the yearly production rate. The result is a number that can be interpreted as the number of years that production could go on at the current rate before the resource runs out completely.

The reason why companies (and politicians, too) love the R/P ratio is that it normally provides a comfortably large number of years before we run out of anything. For oil, for instance, the R/P ratio stands today at some 50 years; that of coal at a few centuries or, under some assumptions, at more than a thousand years. Most people take from these data that there is nothing to worry about regarding oil for at least 50 years and that, by then, it will be someone else’s problem. And, if we really have a thousand years of coal, then what is the fuss about? Add to this the fact that the R/P ratio has been increasing over the years and you understand the reasons for a rather well-known statement by Peter Odell, who in 2001 said that we are “running into oil” rather than “running out” of it [52]. In this vision, extracting a mineral resource is a little like eating a cake. As long as you have some cake left, there is nothing to be worried about. Actually, the peculiar cake that is crude oil has the characteristic that it becomes bigger as you eat it.
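The R/P arithmetic could not be simpler. A minimal sketch, using illustrative round numbers of roughly the right magnitude (about 1.7 trillion barrels of stated reserves and about 35 billion barrels produced per year) rather than any official figures:

```python
def reserve_production_ratio(reserves, annual_production):
    """Years of production left at the current rate: the naive R/P measure.
    It says nothing about extraction costs or declining flow rates."""
    return reserves / annual_production

# Illustrative round numbers, not official estimates
oil_reserves = 1.7e12   # barrels
oil_production = 35e9   # barrels per year (roughly 95 million barrels per day)
print(round(reserve_production_ratio(oil_reserves, oil_production)), "years")  # about 49 years
```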

But then, if there is still plenty of cake to eat today, surely there was even more of it at the time of the great oil crisis of the 1970s. So, maybe it is true that the crisis was all the fault of those evil Arab sheikhs, wasn’t it? Again, adults should recognize that blaming one’s problems on some evil characters is not the best way to solve them.

As is often the case with complex systems, the oil crisis of the 1970s was a complex phenomenon generated by a chain of feedback effects. Depletion was the trigger that started the chain, but it was not in itself the cause of the disaster. And the same is true for political factors. They were not the “cause” of the disaster, any more than the last straw was the cause of the camel’s broken back, as in the proverb. The oil crisis of the 1970s was a problem of the size of the faucet, not of the tank. No matter how much oil there was, somewhere underground, the capability of the industry to extract it was insufficient to satisfy a growing demand. It is a problem that Arthur Berman framed perfectly when he said that considering only the underground oil resources is as “if the oil magically leaped out of the ground without the cost of drilling and completing wells; if there were no operating costs to produce it; if there were no taxes and no royalties” [51].

More precisely, the problem in the 1970s was that the industry was unable to keep enlarging the faucet at an exponentially increasing rate, as had been the rule from about 1940 to 1970. During that period, production was doubling every 10 years and, indeed, in some 30 years it had increased by a factor of nearly 10. If it had continued doubling worldwide at the same rate up to now, it would have doubled about five more times and today the oil industry would produce about 30 times more oil than it did in 1970. Starting from about 50 million barrels per day, production would have reached the fantastic value of about one and a half billion barrels per day, while in the real world it is less than 100 million barrels per day.
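Spelling out the arithmetic of that counterfactual (with 1970 production rounded to 50 million barrels per day and five further doublings over the following five decades):

```latex
% Exponential growth with a 10-year doubling time, continued for 50 more years
P(t) = P_{1970}\, 2^{\,(t-1970)/10}
\quad\Longrightarrow\quad
P_{2020} \approx 50\ \text{Mb/d} \times 2^{5} = 1600\ \text{Mb/d} \approx 1.6\ \text{billion barrels per day}
```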

Of course, that could not happen and it did not happen. Not only was it physically impossible to keep production growing for such a long time, but we would all have been cooked well done by global warming in the meantime. So, the oil crisis was not a bug but a feature: it was a needed adjustment to slow down the system and make it compatible with the real world. This interpretation is confirmed by the fact that the crisis had been predicted in advance on the basis of exactly these considerations. In the 1970s, Pierre Wack and his group at Shell were applying a technique called “scenario analysis” to oil production, and they had noted that the evolution of the oil market was leading to a situation completely different from the one that had been standard during the previous decades. He wrote in 1985 [53] that:

  • The oil market—long characterized by oversupply—was due to switch to a sellers’ market.

  • Soon there would be virtually no spare crude oil supply capacity.

  • Inevitably, the Middle East and, in particular, the Arabian Gulf would be the balancing source of oil supply.

  • The great demand on Middle East production would bring a sharp reduction in the Middle East reserve-production ratio, if met.

  • The sharp peak in Middle East production would not be allowed to occur. Intervening factors would include a desire by Arab countries to extend the lifetime of their one valuable resource and a cornering of the world energy market by Gulf producers for perhaps 10 to 15 years by limiting production.

  • Only something approaching a sustained worldwide depression could reduce the growth of demand for Middle East oil to levels where the anticipated sellers’ market would be too weak to command substantially higher oil prices.

Wack wrote about what the consequences would be:

More than 20 centuries ago, Cicero noted, “It was ordained at the beginning of the world that certain signs should prefigure certain events.” As we prepared the 1973 scenarios, all economic signs pointed to a major disruption in oil supply. New analyses foretold a tight supply-demand relationship in the coming years.

Now we saw the discontinuity as predetermined. No matter what happened in particular, prices would rise rapidly in the 1970s, and oil production would be constrained—not because of a real shortage of oil but for political reasons, with producers taking advantage of the very tight supply-demand relationship. Our next step was to make the disruption into our surprise-free scenario. We did not know how soon it would occur, how high the price increase would be, and how the various players would react. But we knew it would happen.

In the 1960s, oil reserves were considered abundant and no supply problem was foreseen for the short- or medium-term future. The problem was how to find the financial resources needed to keep production growing as it had been growing during the previous decade. Today, the situation is similar but worse: the problem is how to find the financial resources needed to keep production at least stable (in energy terms), as it has been during the past decade. In both cases, the task was not and is not impossible, but it is surely difficult. In 1973, the relatively minor geopolitical shock of the Arab-Israeli war sent the system tumbling down the Seneca Slope. Today, another geopolitical shock could have the same effect.

There are factors, today, that could create a new oil crisis, possibly much worse than the one that started in the 1970s. On the demand side of the market, the fossil fuel industry is threatened by several factors. Renewables such as solar and wind already produce energy at lower costs than fossil fuels, and that may be pushing coal toward extinction. Changes in the transportation market are also changing the rules of the game. Liquid fuels are mainly used in transportation: typically, a good 50% of the oil industry’s production is gasoline. To this, you may add about 20% for diesel fuel, with the result that some 70% of the output of the industry goes to internal combustion engines used for transportation. This market is threatened by two factors: one is the diffusion of electric vehicles, the other the diffusion of the concept of “Transportation as a Service” (TAAS) [54]. Electric cars can be powered using electricity produced by any source, and the preference will reasonably go to renewable energy, since it is clean and inexpensive. TAAS, then, may make individual cars as obsolete as wearing coats made of home-tanned bear skins. The concept of TAAS is not necessarily based on electric vehicles, but it may surely reduce the number of cars on the road, promote more efficient vehicles, and encourage a more efficient way of using them. The final result is likely to be a reduction in the demand for oil products.

These factors may badly dent the market of the oil industry as the result of a “collapse of demand,” a term that seems to be more acceptable in the mainstream debate than the tainted “depletion.” But depletion, too, is gnawing at the profits of the fossil fuel industry. No matter how enthusiastic one may be about the shale oil “miracle,” it is a fact that miracles do not exist and that depletion is going to make the extraction of crude oil, gas, and coal more and more expensive as time goes by. Several producing regions have already gone through their Hubbert peak, often plunging into wars and social unrest as a result. Many of today’s wars and crises are located in regions that saw the decline of their national oil production: Yemen, Syria, and Venezuela are just some examples.

In the end, it does not matter so much whether the problem is related to supply or to demand: these are two sides of the same coin. Nothing is produced for long at a price too high for customers to pay, and customers will never buy something they cannot afford. So, the destiny of the oil industry may well be to be brought down in a spectacular collapse by the candle burning at both ends: depletion on one side, demand decline on the other. It is another typical example of the “dynamic crunch” that generates the Seneca cliff. We do not need a large reduction in the demand for transportation fuels to generate a spiral of decline for the oil industry. Less demand means less production, less production means the loss of economies of scale, and the loss of economies of scale means higher costs that translate into higher prices, which further depress demand. And so it goes, until it hits bottom. As Lucius Annaeus Seneca said, long ago, “ruin is rapid.”
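To make the feedback loop concrete, here is a deliberately crude toy simulation of the mechanism. It is only a cartoon, not a model of the real oil market, and every parameter in it is invented for illustration: a small initial demand shock raises unit costs through lost economies of scale, higher costs raise prices, and higher prices cut demand further, until the decline becomes a rout.

```python
# Toy cartoon of the demand-production spiral; all parameters are invented.
demand = 95.0           # demand index after an initial 5% shock (100 = today)
BASE_COST = 50.0        # unit cost at full scale (arbitrary units)
SCALE_EXPONENT = 0.6    # how fast unit costs rise as production shrinks
ELASTICITY = 1.0        # how strongly demand reacts to price
REFERENCE_PRICE = 60.0  # the price the market was used to paying

for year in range(1, 11):
    production = demand                                      # supply follows demand
    unit_cost = BASE_COST * (100.0 / production) ** SCALE_EXPONENT
    price = 1.2 * unit_cost                                  # fixed 20% margin
    demand *= (REFERENCE_PRICE / price) ** ELASTICITY        # higher price, less demand
    print(f"year {year:2d}: production index {production:5.1f}, price {price:6.1f}")
```

Run with these made-up numbers, the production index drifts down slowly for a few years and then crashes within the decade: a Seneca-shaped decline produced by nothing more than the loop described above.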

So, there is a significant probability of seeing an oil shock scenario playing out in the near future (Fig. 3.7). It may be triggered by a decline of shale oil production in the US, maybe coupled with a political shock reducing the export capabilities of other producers, such as Saudi Arabia or Iran. The results would be similar to those seen in the 1970s: oil prices would skyrocket, the economies of industrial countries would go into recession, and importing states would need to implement measures to reduce oil consumption. Although today we are not as dependent on oil for electricity production as we were in the 1970s, we are still highly dependent on liquid fuels for transportation, with about 85% of oil production being used for fuels. Actually, we may well be more dependent on oil for transportation today because, everywhere, the tendency toward urban sprawl has generated suburban agglomerates of homes and shopping centers that can hardly be serviced by public transportation. So, a new oil shock would again generate long lines at service stations, and fuel rationing might become necessary. The new oil shock might well be much more destructive than the earlier ones, also considering that today we lack the equivalent of the brand-new oil fields of the North Sea coming to the rescue, as was the case in the 1970s.

But a new oil crisis may not be an entirely bad thing, either. Since we have been consistently unable to curb the consumption of fossil fuels in order to reduce carbon emissions, it may be that peak oil or, more exactly, the Seneca Collapse of the oil industry, would solve the climate problem by making emissions crash down before the fateful “2 degrees” threshold is reached and surpassed. And if that is not enough? That is, what happens if the threshold has already been passed and we are facing the dreaded climate tipping point leading toward runaway climate change? In that case, both of the current sects of catastrophists will be satisfied: we would die in fire and in ice, and for sure that would suffice!

The Seneca Cliff and Human Violence: Fatal Quarrels

Years ago, I was somewhere in central Tokyo, in a place where I could have a good view of a large expanse of roads, squares, and areas where new skyscrapers were being built, a rare sight in a city that doesn’t seem to value its own skyline as a sightseeing attraction. Sitting on a bench nearby, there was an old Japanese man. Maybe because I was an obvious Gaijin, a foreigner, he endeavored to tell me, in a mix of Japanese and English, what the place around us had been like when he was a young man, just after the end of the Second World War. At some point, he made an arching gesture with his arm, as if to encompass the whole city, and said something like, “all destroyed, nothing, nothing, all the same, mina onaji…” I knew what he was referring to: I had seen pictures of Tokyo after the firebombing of 1945 and it was exactly what this old man was describing. The Allies had used incendiary bombs against the mainly wooden houses of the city: the fires not only flattened everything but left no chance to the inhabitants, who found themselves trapped with no way to escape. In Tokyo, firebombs killed some 100,000 civilians in a single bombing raid on the night of 9/10 March 1945 (Fig. 3.10).

Fig. 3.10
figure 10

Photo taken by Ishikawa Kōyō (1904–1989) https://en.wikipedia.org/wiki/Bombing_of_Tokyo_(10_March_1945)#/media/File:Tokyo_kushu_1945-4.jpg

Tokyo after the bombardment of 9/10 March 1945.

Over the years, a brand new Tokyo has been built over the ruins of the destroyed one, but everywhere the city still gives you a certain sensation of impermanence, something like living in the world of Basho’s poems. Every time a small earthquake shook the building of the University of Tokyo where I was working at that time, it was a little like hearing good old Godzilla stomping its giant feet just around the corner. And it is difficult to walk around Tokyo without noticing that the large avenues crisscrossing the city blocks have a purpose: they were designed to act as barriers against the spread of fires in case of a new wave of incendiary bombing.

The cities of the Western world have been free from aerial bombardments for more than half a century by now, with only a few exceptions, such as Belgrade in 1999. But some people in Europe are old enough to remember the daily raids, the rushes to the bomb shelters, the flashes, the smoke, and the terrible noise of the falling bombs. When they are gone, no living memory will remain of those moments, but some physical memory will, for instance in the form of faded “bomb shelter” signs on some old buildings. The garden of the house where I live still holds an ogival concrete shelter used during WW2 by the inhabitants to protect themselves from fires and shrapnel. It is so heavy that nobody knows how to get rid of it and so it is still there. Maybe it will turn out to be useful again in the future, who can say? (Fig. 3.11).

Fig. 3.11
figure 11

The author in front of the WW2 air raid shelter still standing in the garden of his home, in Florence. It is a reminder of times past, but it cannot be excluded that it could become useful again in the future

Today, we are trying hard to forget what war is, but it remains with us, a ghost that we seem unable to exorcise. There is a quote attributed to Leon Trotsky that goes, “You may not be interested in war, but war is interested in you.” Trotsky probably never said that, but it is a good description of the fact that, when a war starts, you have little or no chance of avoiding being affected by it.

Lev Tolstoy was among the first to speculate about the reasons for war when he wrote in his novel War and Peace (1867):

To us it is incomprehensible that millions of Christian men killed and tortured each other either because Napoleon was ambitious or Alexander was firm, or because England’s policy was astute or the Duke of Oldenburg wronged. We cannot grasp what connection such circumstances have with the actual fact of slaughter and violence: why because the Duke was wronged, thousands of men from the other side of Europe killed and ruined the people of Smolénsk and Moscow and were killed by them.

To us, their descendants, who are not historians and are not carried away by the process of research and can therefore regard the event with unclouded common sense, an incalculable number of causes present themselves. The deeper we delve in search of these causes the more of them we find; and each separate cause or whole series of causes appears to us equally valid in itself and equally false by its insignificance compared to the magnitude of the events, and by its impotence—apart from the cooperation of all the other coincident causes—to occasion the event. To us, the wish or objection of this or that French corporal to serve a second term appears as much a cause as Napoleon’s refusal to withdraw his troops beyond the Vistula and to restore the duchy of Oldenburg; for had he not wished to serve, and had a second, a third, and a thousandth corporal and private also refused, there would have been so many less men in Napoleon’s army and the war could not have occurred.

[…] Without each of these causes nothing could have happened. So all these causes—myriads of causes—coincided to bring it about. And so there was no one cause for that occurrence, but it had to occur because it had to. Millions of men, renouncing their human feelings and reason, had to go from west to east to slay their fellows, just as some centuries previously hordes of men had come from the east to the west, slaying their fellows.

[…] it was necessary that millions of men in whose hands lay the real power—the soldiers who fired, or transported provisions and guns—should consent to carry out the will of these weak individuals, and should have been induced to do so by an infinite number of diverse and complex causes.

[…] When an apple has ripened and falls, why does it fall? Because of its attraction to the earth, because its stalk withers, because it is dried by the sun, because it grows heavier, because the wind shakes it, or because the boy standing below wants to eat it?

[…] And so there was no single cause for war, but it happened simply because it had to happen.

Tolstoy was no scientist, but these words could have been written by a modern scientist versed in system science. It is a characteristic of complex systems that their behavior can hardly be described in terms of “causes” and “effects”; rather, they change, move, and evolve as the result of the interplay of forcings and feedbacks. This was the intuition of Tolstoy, who had not seen the 1812 Patriotic War in person but had been with the Russian army during the Crimean War (1853–1856) and the siege of Sevastopol in 1855. That war is today mostly forgotten, but it provides another example, if one were needed, of a totally useless conflict. Why was it fought? Apart from some silly pretexts about the freedom of worship of some religious sects, nobody seemed to know for sure at the time, and you would hardly find anyone, today, who could explain it either. Nevertheless, the Crimean war prefigured the much larger and more cataclysmic wars of the 20th century, so much so that we could rightly call it “World War Zero” [55].

Many studies and assessments of war as a social phenomenon have been published since Tolstoy and, today, we seem to regard war with a certain degree of optimism, perhaps because the world has not seen another war as large as WWII for some 75 years now. Among the most optimistic assessments is the one by Steven Pinker in his well-known book, The Better Angels of our Nature (2011) [56]. Pinker’s thesis is that the modern world has become less violent over the past decades, and that this trend will be maintained in the future. Other historical analyses of war are also optimistic. According to Rudolph Rummel (1932–2014), democracies are much less likely than dictatorships to engage in wars [57]. In this interpretation, promoting democracy could be a good way to avoid wars, and the trend toward more democracy in the world could be a reason why we may be living in less troubled times than in the past. It may be because of Rummel that the idea of “exporting democracy” has become so popular nowadays, although in ways that leave many of us a little perplexed.

In any case, both Pinker and Rummel base their conclusions on historical data and may well be right for the time range they consider: the past few decades or, at most, the 20th century. It is true that the “big one,” the Second World War, was probably the most destructive war in history and that afterward there were no more wars of comparable size. But is that a true long-term trend or just a statistical fluctuation [58]? The current world political situation does not seem to provide grounds for optimism, with reciprocal threats of nuclear annihilation being exchanged again nowadays, as it was fashionable to do in the 1950s.

To understand what we are facing, we need data that go beyond the past few decades and, as much as possible, beyond the past century. The task of analyzing wars from a long-term statistical viewpoint was first attempted by the British physicist Lewis Fry Richardson (1881–1953). Richardson was in many ways well ahead of his age, and his contributions in fields such as meteorology and fractal analysis were so advanced that it took time for them to become part of mainstream knowledge. He was also a pacifist who tried to understand what generates wars and how we could, perhaps, avoid them. So, he performed a series of analyses of the frequency and the size of human wars and, more generally, of what he called “deadly quarrels”: human interactions ending with the death of someone.

Richardson proposed that wars and homicides tend to follow a “Poisson distribution” [59]. In time, it was found that wars are another kind of critical phenomenon [60,61,62]. Just like earthquakes and wildfires, wars tend to follow power laws. The initial intuitions of Richardson were confirmed by later studies. Let me show you some data from the database prepared by Brecke [63], covering some 600 years of human history. Together with my coworkers Martelloni and Di Patti, I analyzed these data in a recent paper [62] (Fig. 3.12).

Fig. 3.12
figure 12

Data from [62]

Total War Fatalities in the world, normalized to the world population.

You see how the history of wars is dominated by a few very large conflicts—shown normalized to the world population in the figure. The scene seemed to be relatively quiet up to the mid-17th century, then a series of war “spikes” started. Some are especially large and recognizable: the Thirty Years’ War, the Napoleonic campaigns, the Crimean war, the First and the Second World Wars. It is hard to see, here, any continuous trend: what we can say is that, over 600 years of wars, the absolute number of war victims has increased, but it decreases if we normalize it to the growing world population. Then, if we look at the frequency of wars as a function of their size, we find the typical “power law” distribution of critical phenomena. That is, large wars are less frequent than small ones, but there exists a “fat tail” in the distribution that makes large events not as unlikely as they would be if they were purely casual—or random—events.
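
For readers who want to see what a “fat tail” looks like in practice, here is a minimal sketch in Python of the standard check: rank the conflicts by size and fit the fraction of wars at least that large against the size, on logarithmic scales. A straight line on log-log axes is the signature of a power law. The fatality figures in the example are purely hypothetical placeholders, not values from Brecke’s database or from our paper [62].

    import numpy as np

    # Hypothetical war sizes (battle deaths), NOT data from Brecke's database:
    fatalities = np.array([2e3, 5e3, 1e4, 3e4, 8e4, 2e5, 6e5,
                           2e6, 8e6, 2e7, 7e7])

    sizes = np.sort(fatalities)[::-1]                 # largest conflict first
    ccdf = np.arange(1, len(sizes) + 1) / len(sizes)  # fraction of wars at least this large

    # On log-log axes a power law is a straight line; its slope is -(alpha - 1),
    # where alpha is the exponent of the frequency-size distribution.
    slope, intercept = np.polyfit(np.log(sizes), np.log(ccdf), 1)
    print("estimated power-law exponent alpha:", round(1 - slope, 2))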

As argued, among others, by Clauset [61], the so-called “long peace” of the period after the Second World War is not statistically significant as a change in past trends. Clauset arrived at the conclusion that a war of the same size as the Second World War has a more than 40% probability of occurring within the next 100 years, while a war with one billion battle deaths (and, presumably, the extermination of most of humankind) has a median waiting time of little more than 1000 years. That is, over the long run it is nearly certain, considering that human beings have been waging war against each other for longer than that.
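
To get an intuitive feeling for these numbers, here is a small illustrative calculation (a simplification of mine, not necessarily the statistical model Clauset actually used): if the largest wars arrived as a constant-rate random (Poisson) process, the chance of seeing at least one within a given horizon would be 1 - exp(-horizon / mean recurrence time). With an assumed mean recurrence time of about 200 years, the probability of a WWII-sized war within a century comes out close to the 40% figure quoted above.

    import math

    def prob_at_least_one(horizon_years, mean_recurrence_years):
        """Probability of at least one event within the horizon, for a Poisson process."""
        return 1.0 - math.exp(-horizon_years / mean_recurrence_years)

    # The recurrence times below are assumptions made only for illustration.
    print(prob_at_least_one(100, 200))    # ~0.39: a WWII-sized war within a century
    print(prob_at_least_one(1000, 1443))  # ~0.50: a median wait of ~1000 years (1443 = 1000 / ln 2)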

It follows that wars, it seems, are emergent phenomena in the complex social system formed by human groups. In other words, it is not the will of mad rulers that generates wars but some kind of collective force that emerges out of a social network as the result of reinforcing feedbacks. War appears to be an unavoidable consequence of the behavior of human beings, perhaps a result of our primate ancestry [62, 64]. It is remarkable how this quantitative analysis validates the intuitions that Lev Tolstoy proposed a century and a half ago.

Of course, I said more than once in this book that predicting the future by extrapolating from past trends is dangerous and unreliable. Yet, the results we found over 600 years of history are sobering: if nothing changes in the behavior of humankind or in the structure of society, the probability of major wars occurring in the near future is high. And we are not extrapolating anything if we just look at the current trends: wars are going on right now and the behavior of the “great powers” seems to be increasingly aggressive and reckless, in a situation that reminds one more and more of what preceded the First World War. At that time, it is likely that nobody among the leaders had any idea of what the consequences of their decisions would be. It was said that WWI was to be the war that would end all wars but, judged in retrospect, that looks a little optimistic. And yet, the same concept, the war that was to end all wars, was repeated as recently as the 2003 invasion of Iraq.

Is there some way to stop wars in the future? There is no lack of ideas on the matter and it may be interesting to quote here the book by David Wilkinson, Deadly Quarrels (1980).

The most common way of contributing to the debate over war causation and peace strategy has been to assert some definite theory, to show how it fits current circumstances, and to deduce immediate practical conclusions. If we follow this public debate, we may expect to be told that war is a consequence, for instance, of wickedness, lawlessness, alienation, aggressive regimes, imperialism, poverty, militarism, anarchy, or weakness. Seldom will any evidence be offered. Instead, the writer is likely to present a peace strategy that matches his theory of war causation. We shall therefore learn that we can have:

  • Peace through morality. Peace (local and global) can be brought about by a moral appeal, through world public opinion, to leaders and peoples not to condone or practice violence, aggression, or war, but to shun and to denounce them.

  • Peace through law. Peace can be made by signing international treaties and creating international laws that will regulate conduct and by resorting to international courts to solve disputes.

  • Peace through negotiation. Peace can be maintained by frank discussion of differences, by open diplomacy, by international conferences and assemblies that will air grievances and, through candor and goodwill, arrive at a harmonious consensus.

  • Peace through political reform. Peace can be established by setting up regimes of a nonaggressive type throughout the world: republics rather than monarchies; democratic rather than oligarchic republics; constitutionally limited rather than arbitrary, autocratic regimes.

  • Peace through national liberation. Peace can be instituted only through the worldwide triumph of nationalism. Multinational empires must be dissolved into nation-states; every nation must have its own sovereign, independent government and all its own national territory, but no more.

  • Peace through prosperity. Peace requires the worldwide triumph of an economic order that will produce universal prosperity and thereby remove the incentive to fight. Some consider this order to be one of universal capitalism, or at least of worldwide free trade; others hold it to be some species of socialism, reformist or revolutionary, elitist or democratic.

  • Peace through disarmament. Peace can be established by reducing and eventually eliminating weapons, bases, and armies, by removing the means to make war.

  • Peace through international organization. Peace can be established by creating a world political organization, perhaps even a constitutional world government resembling national governments, to enforce order and promote progress throughout the world.

  • Peace through power. Peace can be maintained by the peaceable accumulation of forces, perhaps overwhelming, perhaps preponderant or balancing or adequate-sufficient to deter, defeat, or punish aggression.

It is clear that we are not going anywhere if we are dealing with nine different and incompatible theories on how to establish peace. Does that mean we have to live with war? It may well be that every one of us has to adapt to the idea that, in a not-so-remote future, our town may be vaporized in a nuclear explosion, or that you or your son will be asked to charge a machine gun nest armed only with a bayonet, and all that, again, in the name of the war that will put an end to all wars. If war is a collective phenomenon that happens at the level of states and governments, then there is nothing you can do to avoid it, individually or as a group. It is a meager consolation to know that this is the way the universe works.

Perhaps the best we can do, at this point, is to report the advice of a stoic philosopher, contemporary to Seneca, Epictetus, who in his Enchiridion (“The Manual”) wrote that

“Some things are in our control and others not. Things in our control are opinion, pursuit, desire, aversion, and, in a word, whatever are our own actions. Things not in our control are body, property, reputation, command, and, in one word, whatever are not our actions. The things in our control are by nature free, unrestrained, unhindered; but those not in our control are weak, slavish, restrained, belonging to others. Remember, then, that if you suppose that things which are slavish by nature are also free, and that what belongs to others is your own, then you will be hindered. You will lament, you will be disturbed, and you will find fault both with gods and men. But if you suppose that only to be your own which is your own, and what belongs to others such as it really is, then no one will ever compel you or restrain you. Further, you will find fault with no one or accuse no one. You will do nothing against your will. No one will hurt you, you will have no enemies, and you will not be harmed.”

Famines, Epidemics, and Depopulation: The Zombie Apocalypse

Fig. 3.13
figure 13

Bridget O’Donnell with her children, victims of the great famine that struck Ireland in 1845 (from Illustrated London News, December 22, 1849)

In 1968, George Romero directed a low-cost, black and white movie titled The Night of the Living Dead. It was a great success that soon became a cult classic. Evidently, the film struck something deep in the human psyche with its theme of the dead rising from their tombs to devour the living. The movie critic Roger Ebert wrote about it that “I felt real terror in that neighborhood theater last Saturday afternoon” [65], and I personally remember seeing people vomiting in the lobby of the theater after watching the movie (Fig. 3.13).

The term “zombie” wasn’t used in Romero’s movie, but it was the start of the genre that we call today the “zombie apocalypse”: plots involving large numbers of ‘undead’ people haunting towns and suburbs in search of living humans to kill and eat. But why this fascination with zombies in our times? How is it that we created a genre that never existed before in the history of human literature? Can you imagine Homer telling us that the city of Troy was besieged by zombies? Did Dante Alighieri find zombies in his visit to Hell? How about Shakespeare telling us of Henry V fighting zombies at Agincourt?

If something exists, there has to be a reason for it to exist, and I think there is a reason why the zombie theme is so popular in our times. Literature always reflects the fears and the hopes of the culture that created it, sometimes very indirectly and in symbolic ways. And, here, it may well be that zombies reflect an unspoken fear present mainly in our subconscious: starvation.

Let’s start with a typical feature of zombies: the black circles around the eyes. Zombies are supposed to be cadavers that somehow maintain a semblance of life. But do cadavers have eyes like that? Maybe, but the facial edema that creates the dark eye socket effect is also typical of malnourished people. If you look at how artists drew the starving Irish people during the Great Famine that started in 1845, they clearly perceived this detail. In the figure at the beginning of this section, you can see a rather well-known image of Bridget O’Donnell, one of the victims of the Great Irish Famine—note the darkened sockets of her eyes. Her children, too, have the same dark circles around their eyes. Of course, comparing the starving Irish to zombies does not imply a lack of respect for the Irish men and women who perished in one of the greatest tragedies of modern times, but it tells us something about how starving people are perceived in our collective imagination. Zombies seem to be the perfect image of the effect of famine, not just in terms of their emaciated aspect but also in terms of their behavior.

Now, imagine that something happens that stops the supply of food to the aisles of your local supermarket. Imagine that it happens to all supermarkets in your region: maybe a shortage of fuel, maybe a war, maybe something else; it is, anyway, something that could happen [47]. People living in suburban areas would be first surprised, then angry, then desperate, and, finally, starving when their home stocks of food run out. Even before that, they would have run out of gas for their cars, the only means of transportation available to them.

Unless the government could (and would want to) intervene, the inhabitants of the suburbs would soon become emaciated, blundering, hungry people haunting the neighborhood and the shopping malls in the desperate search for something to eat. When they run out of canned food, some may turn to cannibalism, as zombies do in movies. Some may be able to put their hands on a good supply of guns and ammunition and then they could play king of the hill for a while, stealing most of the remaining food from those who hoarded it and shooting dead the poor wretches who still lumber in the streets, one more trope of zombie movies. The old Latin adage “mors tua, vita mea” becomes the rule. As Seneca Collapses go, this case is among the worst possible ones!

Of course, this is not a prediction and we can hope that nothing like that will ever happen, but it cannot be ruled out as impossible. I am not the only one to have noted this point: Terrence Rafferty wrote in 2011, in a literary review in The New York Times [66], that

… it’s a little disturbing to think that these nonhuman creatures, with their slack, gaping maws, might be serving as metaphors for actual people — undocumented immigrants, say, or the entire populations of developing nations — whose only offense, in most cases, is that their mouths and bellies demand to be filled.

Fictionalized catastrophes (“it is only a movie!”) are surely less threatening than those that are described as likely to happen for real. It is a curious trait of the human mind but it may be that the only way for our mind to cope with possible catastrophes to come is to see them as fairy tales. But what are the chances of a real major famine striking the world of our times?

The general opinion on this point seems to be that famines are a thing of the past. You probably know the story of the wrong predictions made by Paul Ehrlich [67] with his 1968 book The Population Bomb, where he wrote that “In the 1970s hundreds of millions of people will starve to death.” It was another example of how the secret of making wrong predictions consists in extrapolating current trends. Indeed, the 1950s and 1960s had seen several large famines, including the Great Chinese Famine of 1959–1961, which caused at least 15 million deaths. So, the idea that famines were common and that they would continue in the future was a common perception in the 1960s. It may not be a coincidence that Ehrlich’s book and the zombie movie by George Romero appeared in the same year.

On the other hand, if Ehrlich made a wrong prediction in terms of timing, that doesn’t mean he was wrong in terms of substance. If he had framed his views in terms of a scenario rather than a prediction, then it would not be so easy to sneer at him, something which seems to have become a popular pastime. So, always remembering that the future is never like the past, what can we say about the possibility that major famines could cause local—or even global—collapse of the human population?

We know that there are more than 7.5 billion people alive on Earth today. Evidently, if they are alive, it means enough food is produced to keep them alive, but that, of course, does not mean abundant food for everyone. Many people in poor countries are undernourished, while in rich countries many suffer from the opposite problem: obesity. That may, actually, be another form of undernourishment: it is known that poor people eat more “junk food” than the rich, and that they are also more overweight on average [68]. A common interpretation is that the diet of poor people in rich countries lacks vegetables, fish, and fruit, and so it cannot provide the vitamins and micronutrients needed for good health. They try to compensate by eating too much, in particular in terms of carbohydrates. Even though the direct link between sugar and obesity is controversial [69], this interpretation can explain many features of the current obesity epidemic in the West, a multi-scale, systemic problem [70]. It is surely not something that can be explained by simply assuming Westerners are too rich.

But it is also true that nowhere in the world today do we see the kind of famines that occurred decades ago, with starving people stumbling around and looking like zombies before falling dead on the sidewalks. The lull in famines appears clear in the historical data for the past century or so [71]: there was a maximum of famine-related deaths in the 1940s, with more than 18 million deaths during the decade. In comparison, the decade of the 1980s had slightly more than 1.3 million deaths. The 21st century saw a certain increase with more than 2.8 million deaths during the 2000s, still much lower than the historically recorded maxima. These are not negligible numbers but they do indicate an improvement. Evidently, the world’s food production system has been able to cope with the increasing world population, so far at least. By all means, it was a remarkable achievement (Fig. 3.14).

Fig. 3.14
figure 14

Famine mortality in the world. Data from the World Peace Foundation (2015)

The decline in famines is normally attributed to technological factors. Fertilizers, pesticides, and mechanization greatly increased yields per unit area, creating what we call today the “Green Revolution.” The term gives the impression of some sudden technological improvement, but that was not the case: yields gradually improved as the result of progressive innovation in cultivation techniques. But more than that, the disappearance of famine was due to container ships and low-cost trucks that made it possible to transport food everywhere in the world. In turn, these ships would not have transported food had they not been coupled with political and market-based measures. After World War II, providing food for the population of poor countries was seen as a way to avoid the diffusion of Communism and, also, as a simple way to subsidize the overproduction of Western agriculture [72]. That was one of the factors generating the economic and political system we call “Globalization.” With the world having become one single giant market, anyone can use dollars to purchase food from anywhere and have it delivered to where they live. Since food is so cheap and since its purchase is often subsidized, the result has been a capillary distribution of food everywhere. Paul Ehrlich had not understood the importance of these factors when he predicted that hundreds of millions of people would starve to death. They haven’t. Not yet, at least.

The problem is that, if there is enough food for 7.5 billion people today, that does not mean there will be enough in the future. It is another case of the main rule of prediction: the future is never like the past. So, you would be making the same kind of mistake Ehrlich made if you were to extrapolate the current situation and conclude from it that there will be no more famines in the world. The destruction of fertile soil, the depletion of aquifers, the increased reliance on depletable mineral fertilizers, to say nothing of climate change, are all factors that may make the future food supply much more precarious for humankind than it is nowadays. The problems will be exacerbated if the population continues to grow.

Note also that the world’s food supply system is a complex one that links technological, economic, and political factors. As we saw in this book, these systems are subject to the kind of sharp crash that we call the “Seneca Cliff.” The slow growth of the system lulls you into a false sense of security until you find yourself falling down the cliff. Moreover, famines are often accompanied by epidemics and wars. An undernourished population is easy prey for microbes in various forms, and in ancient times famines and plagues went together or followed each other. Then, the stress caused by famines may create political tensions which, in turn, generate violence. Conversely, wars may generate famines, sometimes intentionally provoked by one side to weaken the other. It is always the same mechanism that I dubbed the “Seneca Crunch”: all the negative factors gang up together to bring the system down.

Here are some examples of famine-related population collapses that took place in the past. First of all, here are the data for the Chinese Famine of 1959–1961 [71] (Fig. 3.15).

Fig. 3.15
figure 15

Demographic data for China. Data from “Our World in Data” [71]

In terms of sheer numbers, with 15 million deaths directly or indirectly attributable to lack of food, it was one of the largest tragedies generated by famines in the historical record. Yet, note how these 15 million victims caused only a barely detectable dent in the Chinese population, about 2% of the total, which at that time was close to 700 million people. The number of births rebounded just a few years after the famine phase and, in practice, the trajectory of Chinese population growth was not significantly affected by the event.

Here is, instead, a graph showing the effect of the Irish famine of 1845–1849. The rapid population drop was not caused just by starvation and the associated sicknesses, but also by emigration, though even that was a consequence of the lack of food. Losing some 2 million people in a few years, about one quarter of the total population, was not just a human tragedy but a social and cultural disaster that led Ireland, among other things, to lose its national language, Gaelic, which was replaced by English (Fig. 3.16).

Fig. 3.16
figure 16

The Population of Ireland. Data from the Maddison Database www.ggdc.net/maddison/Historical_Statistics/horizontal-file_02-2010.xls

Finally, a third example where we see both phenomena at play in the same country, a transient loss of population and a long-lasting one: Ukraine (Fig. 3.17).

Fig. 3.17
figure 17

The Population of Ukraine, including the effect of the Great Famine of the 1930s. Data from Wikipedia, https://en.wikipedia.org/wiki/Demographics_of_Ukraine

The data are incomplete, but they clearly show two phases of population decline in Ukraine. The first corresponds to the Great Famine of 1932–1933, which affected not just Ukraine but large areas of the Soviet Union. It was a tragic famine, with some 2 million deaths in Ukraine alone, perhaps more. But, tragic as it was, it appears as only a transient dip in the population growth curve. The Ukrainian population may have suffered another decline phase during WW2, but the data are missing. In any case, in the 1950s, the population had rebounded and the growth phase that followed lasted until Ukraine reached its population peak at about 53 million, around 1990. Then, with the fall of the Soviet Union, in 1991, a decline started that lasts to this day. This decline was not caused by famines, at least not the kind that leads people to die of starvation. But the quality of nutrition is likely to have declined together with the quality of health care, and that has been increasing the death rate, especially among the elderly. At the same time, Ukraine saw a reduction in birth rates, as did most former Soviet countries. We cannot say if the currently ongoing decline is irreversible, but it may well turn out to be.

These are just examples of modern famines, representing a phenomenon that has been common in history. Famines happen: sometimes they are transient phenomena generated by some natural disaster such as an extended drought, worsened by the mismanagement of corrupt or incompetent governments, or both. Sometimes they are systemic trends caused by the population having exceeded the limits of what the local agriculture can sustain. This limit is not a fixed entity: it may be raised by better agricultural technologies as well as by social and economic factors that favor a better distribution of food. The limit may also decline as the result of the depletion of the key resource for agriculture: fertile soil, destroyed by overexploitation.

Whatever the case, in some historical examples it is clear that some limit was breached: countries such as Ireland in 1845 and Ukraine in 1991 were simply unable to sustain the population level they had reached. The return to sustainable limits took the shape of an apocalyptic disaster in Ireland, where the underdeveloped transportation and financial infrastructure of the country made it impossible to compensate for the collapse of the agricultural production in the South-Eastern regions. It was less dramatic in Ukraine, but it was still a major event. The case of Ukraine, as well as of several former Soviet countries, shows that there is no need to see people dropping dead in the streets for the population to decline. Apparently, young people tend to think that their children will have few opportunities in an economically declining system and refrain from having more than a few. The elderly, then, must cope with poor nutrition and lack of health care: that may not kill them right away, but it surely lowers their life expectancy. A similar effect is taking place in most Western European countries in terms of lower birth rates, but life expectancy remains high and the decline of the native population is compensated by immigration.

An often discussed interpretation of famines is that some of them are “man-made,” that is, the result of deliberate actions carried out by governments and designed to starve and kill people. The best-known case is that of the Irish famine of the mid-19th century, said to be a crime perpetrated by the evil British government against its Irish subjects, but similar accusations are heard for other modern famines. The Soviet government is blamed for the 1932 famine in Ukraine and the Chinese government for the Chinese famine of 1959. Now, it is true that governments are not benevolent associations; rather, they tend to be among the most deadly organizations ever created by humankind. According to Rudolph Rummel [73], over the 20th century, some 256 million people were exterminated, directly or indirectly, by government actions in what Rummel calls “democides,” a term that includes not only the victims of regular wars, but also other kinds of actions designed, for instance, to starve people to death.

Overall, though, it seems that governments are rarely interested in killing their own citizens: they need them as taxpayers or cannon fodder. On the contrary, they often try to multiply them: encouraging natality is a traditional policy of dictatorships. But governments do engage in the extermination of minorities, people who are identifiable and who can be labeled as enemies because of their race, language, religion, and ideology. For this purpose, they normally use conventional weapons: the problem with famines as weapons for ethnic cleansing is that one cannot easily distinguish friend from foe unless the population to be exterminated is localized in a specific geographical region.

As far as I can tell on this matter, I see no evidence that the British government willingly acted to create or worsen the Irish famine of 1845. They had no interest in killing a population that was providing revenue for them. But it is true that they were slow and inefficient, and sometimes their actions worsened the situation. This is not surprising: another well-known characteristic of governments is that they are poor at managing complex systems. Other cases are less clear-cut but, personally, I tend to think that incompetence is normally a better explanation than evil intention for the great famines of history.

What about the future? Will we see new major famines in the world? A commonly heard question on this point is “how many people can the Earth support?” It is an ill-posed question for several reasons. It should be, rather, “how many people can the Earth support indefinitely?” It is a truism that the Earth can now support nearly 8 billion people: it is doing just that. But that is done in large part by “mining” a non-renewable resource: fertile soil. So, the large human population living today on the Earth may be just a transient phenomenon, well above the carrying capacity of the planet.

We often hear, today, about the “number of Earths” we would need in order to keep providing, over the long run, the amount of resources we are consuming today. This is a concept related to that of the “ecological footprint” proposed by Wackernagel [74]. Using the concept of footprint, we can calculate that, today, we are using almost 2 Earths, and that if everyone were to consume natural resources at the level of the United States, then we would need something like five Earth-like planets. That may force us to “return” well below the sustainability limits, and that may turn out to be somewhat uncomfortable for most of us.
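
The arithmetic behind the “number of Earths” is simple enough to show in a few lines. Here is a minimal sketch in Python; the per-capita figures (in “global hectares”) are round, commonly quoted values that I use only to illustrate the calculation, not Wackernagel’s exact estimates.

    # Back-of-the-envelope "number of Earths" calculation.
    # All per-capita figures below are rough assumptions, for illustration only.
    biocapacity_per_person = 1.6      # global hectares (gha) available per person
    world_footprint_per_person = 2.8  # gha actually used by the average person
    us_footprint_per_person = 8.0     # gha used per person at a US level of consumption

    print(world_footprint_per_person / biocapacity_per_person)  # ~1.8 Earths used today
    print(us_footprint_per_person / biocapacity_per_person)     # ~5 Earths if everyone lived like the US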

But there is a deeper reason why the question of the population limit is ill-posed. It is because famines and the related epidemics have, in history, always been localized in specific regions of the world. When disaster strikes, it is hard for a starving and sick population to move far away in search of food. In Ireland, for instance, people had no transportation other than their feet, and most of the victims of starvation died close to their villages. In modern times, it is much easier to transport food where it is needed rather than transporting people to where there is food available. As long as the economic system we call “globalization” remains active, this capability provides a remarkable resilience to the food production and distribution system. But things may change: the fashionable trend of building walls along state borders further limits the mobility of the poor and provides a barrier against the possibility of masses of hungry people swamping richer regions. That may result in large regions of the world experiencing disastrous famines, while others manage to maintain a sufficient food supply. It would be nothing different from the situation before globalization, when famines were a normal feature of life, everywhere.

Overall, famines may be one of the most clearly perceived threats nowadays, although it is a perception rarely expressed in the open. As individuals, we may want to prepare for a major famine by stocking supplies in the basements of our homes, or by stockpiling guns and ammo in order to steal the supplies of our neighbors. It is doubtful (to say the least) that these strategies will be effective. If a major famine strikes, survival is possible only by acting together as a whole society. Whether this will be possible in the world we call the “West,” which puts so much emphasis on individual self-reliance, remains to be seen.

The Big One: Societal Collapse

Fig. 3.18
figure 18

(Image by Limitchick https://en.wikipedia.org/wiki/Worker_and_Kolkhoz_Woman#/media/File:The_Worker_and_Kolkhoz_Woman.jpg)

The giant stainless steel monument to the Soviet Worker and the Kolkhoz Woman in Moscow. It was created by Vera Mukhina in 1937 to symbolize the march forward of the then recently created Soviet Union (1922–1991). The Union was the last (so far) of the long series of empires that have ebbed and flowed throughout human history

In 1992, I received an email from Russia. Written in very good English, it contained wishes for my birthday and a proposal of research collaboration. It arrived from a research institute in Moscow where some Russian physicists had been working in the field in which I was active at that time, surface science. With the Soviet Union having disappeared just one year before, they were looking for international contacts and collaborations. Without their salaries, and without funding for their research, the researchers of former Soviet countries were being forced to find jobs as janitors, clerks, or translators, while many of them had to leave Eastern Europe to continue their careers in the West. That was the start of my involvement with former Soviet researchers and research institutions, especially in Russia and Ukraine (Fig. 3.18).

Witnessing the effects of the Soviet collapse from inside was a sobering experience and it made me wonder about the reasons that had brought down the Soviet Union. At the time, I tended to agree with the generally accepted explanation that Francis Fukuyama had termed the “End of History” [75]. In this view, the crash had been due to the inefficiency of the Soviet State and it had demonstrated the superiority of the Western Political system.

But the more I understood Russia, the more I became dubious about this optimistic interpretation. With all its defects, its quirks, its ideological bent, its overblown bureaucracy, and its many more problems, the Soviet Union was still a state that encompassed a large part of Eurasia and nearly 300 million people. Its scientific achievements had been remarkable and had included the first artificial satellite, the first man in space, and a serious challenge to the West in the race to the Moon of the 1960s. To say nothing of having defeated the German invasion during WW2 at a cost of more than 20 million lives. If you ever took a train in the Moscow subway and saw the elaborately decorated stations there, you could not miss the fact that the Soviet Union had been much more than just a dictatorship kept together by its secret police. And, although the research work of the Soviet scientists was mostly unknown to their Western counterparts, it was often at the same level, if not better.

Mostly, it was the resilience of the Russian people that impressed me. I still remember a scene that I witnessed in the 1990s, probably at the darkest moment of the economic crisis in Russia. At that time, the local currency, the ruble, had become nearly worthless and most transactions were made in dollars, even for ordinary items such as food in the supermarkets. So, at the exit of a train station in Moscow, I saw maybe a dozen Russians, men and women, lined up along the sidewalk, each one with something in their hands: a shirt, a pair of shoes, a hat, or other everyday items. They were selling what they had for a few rubles. At first, I thought that they were doing that out of desperation. But then I reconsidered: these people were not desperate, they were making a statement. They were sharing what they had with the others and, in doing so, they were saying that rubles were still money and that Russia was still an independent country with its national currency. Eventually, they were vindicated: over the years, Russia returned to using rubles and the economy rebounded to a degree of reasonable prosperity. Many scientific institutions in Russia and in the former Soviet Union have returned to their previous level of excellence, and I am glad to have been able to give a hand in the task although, of course, the merit goes entirely to the obstinacy, the persistence, and the hard work of the Russian researchers.

My experience with the Russian collapse went in parallel with that of Dmitry Orlov, an American of Russian origin who also personally experienced the effects of the collapse of the Soviet Union. Orlov reported his experience and his ideas in a series of books, the first one (2011) with the title Reinventing Collapse. The Soviet Example and American Prospects [76]. This title, I think, explains what the book is about. Orlov is a native Russian speaker and his knowledge of Russian society is obviously much better than mine, but his experience agrees very much with my own. The collapse of the Soviet Union was not a simple question of a wrong ideology put to rest: it was due to deep reasons that had weakened Soviet society from inside, causing it to follow a trajectory that inevitably led it to decline and disappear. According to Orlov, the same factors are at work to bring Western society to an inevitable future collapse.

I think that if Tolstoy had witnessed the collapse of the Soviet Union, he would have interpreted it in the same way as he had interpreted the invasion of Russia by Napoleon’s armies. “It happened because it had to happen.” It did not and could not happen just because some puffed-up leader had decided something. In other words, it had little or nothing to do with the often-heard story that Ronald Reagan, Margaret Thatcher, and King Fahd of Saudi Arabia together had managed to bring down oil prices in order to lower the revenues that the Soviet state obtained from oil exports and make it collapse. The collapse of the Soviet Union had a lot to do with crude oil, but certainly not in terms of a conspiracy theory. Nor could the fall have been caused by the Soviet leader of the time, “Mad Misha” Gorbachev, alone, supposedly so naive as to be easily cheated by the promises of the evil Western leaders.

The Soviet Union went down and disappeared from history as just one more example of how states, empires, and entire civilizations collapse. So far, no human civilization has escaped this destiny; none has lasted more than a few thousand years without undergoing at least some kind of collapse, maybe re-emerging stronger afterward, but also deeply changed. Some civilizations were smashed by external events: the Minoan one, on the shores of the Mediterranean sea, was probably destroyed by the mega-eruption of the Thera volcano during the mid-second millennium BCE. Some civilizations were destroyed by the military power of technologically more advanced ones, such as the Aztec and the Inca Empires, destroyed by the Spanish armies during the 16th century CE. But, in the great majority of known cases in history, civilizations and empires fell by themselves or, if defeated by foreign powers, because they had been greatly weakened for internal reasons. Between 1934 and 1961, British historian Arnold Toynbee (1889–1975) wrote A Study of History, describing the rise and fall of the 23 civilizations he had studied. His conclusion was that “civilizations die from suicide, not by murder.” He had identified a typical feature of complex systems, which tend to collapse because of the sometimes deadly mechanism of reinforcing feedbacks. That was certainly the case of the Soviet Union, neither militarily defeated nor hit by an asteroid: it collapsed mainly for internal reasons.

The collapse of civilizations is one of the most controversial subjects of historical study. There are, literally, hundreds of different explanations for some of the most spectacular falls, such as that of the Roman Empire. It seems that these explanations appear and disappear in step with the current worries of our own civilization. For instance, historian Kyle Harper recently transferred to the ancient Roman Empire one of our major worries: climate change, arguing that it was at least one of the major causes of the fall [77]. That involves stretching the data a little, to say the least, since the data show no evidence of significant climatic changes in Europe until well after the Roman Empire was already in its death throes [78].

In reality, the historical cycles of empires and civilizations indicate that there have to be generally valid mechanisms that bring about their fall. In recent times, a certain agreement seems to be emerging on this point, and a pioneer in this field has been Joseph Tainter with his idea of the “diminishing returns of complexity” [79]. According to Tainter, civilizations tend to expand and, as they do, they develop internal structures that are used to cope with external and internal threats and challenges: the army, the legal system, the police, the bureaucracy, and many others. Tainter’s idea is that the efficiency of these structures diminishes as they grow larger; that is, they become less and less effective at performing the tasks they were built for. According to Tainter, this phenomenon eventually leads to diminishing returns. This is the mechanism that brings down the stupendous structures we call “civilizations” or “empires.”

Tainter’s ideas are steeped in the science of complex systems, but they are qualitative. Tainter does support his interpretation with archaeological and historical data, but only indirectly, and one question that remains unresolved is how exactly diminishing returns bring collapse rather than just slowing down growth.

Recently, together with my coworkers Ilaria Perissi and Sara Falsini, I tried to reproduce Tainter’s ideas with a model built using the tools of system dynamics [80]. We found that the basic concept proposed by Tainter, diminishing returns, can be reproduced by the model. But we also found that it is not just the increase in size that reduces the efficiency of the structures of society: it is the combined effect of the higher cost of natural resources and that of having to fight pollution. When these effects are taken into account, the model produces a curve for the diminishing returns of complexity that looks qualitatively similar to the one proposed by Tainter (Fig. 3.19).

Fig. 3.19
figure 19

The main results of the Study on Civilization Collapse performed by Bardi, Falsini, and Perissi in 2019 compared with Tainter’s curve. In the study, we assumed that the level of complexity of a civilization is proportional to the size of its economy [80]
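
To give an idea of what a “mind-sized” system dynamics model of this kind looks like, here is a minimal sketch in Python. It is not the model of ref. [80]: the stocks, the feedbacks, and all the parameter values are simplified assumptions, chosen only to show the qualitative behavior in which an economy grows on a non-renewable resource while depletion and pollution erode its returns, until it peaks and declines.

    # A minimal, mind-sized sketch (NOT the model of ref. [80]) of an economy
    # exploiting a non-renewable resource. All parameters are arbitrary.
    def run(steps=400, dt=0.1):
        resource, capital, pollution = 1.0, 0.05, 0.0
        history = []
        for _ in range(steps):
            production = 0.5 * capital * resource                    # output needs both capital and resource
            losses = 0.1 * capital + 0.3 * pollution * capital       # depreciation plus the cost of pollution
            resource -= production * dt                              # the resource stock is depleted
            capital += (production - losses) * dt                    # the economy grows, then shrinks
            pollution += (0.5 * production - 0.2 * pollution) * dt   # pollution builds up, slowly absorbed
            history.append((resource, capital, pollution))
        return history

    trajectory = run()
    peak = max(capital for _, capital, _ in trajectory)
    print("peak size of the economy:", round(peak, 3))  # the economy rises, peaks, then collapses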

Most civilizations in history seem to arise from the availability of some abundant and cheap natural resource. The Roman Empire grew on the production of precious metals from gold and silver mines, in particular those of Northern Spain. Our current world empire has grown on the availability of abundant and cheap fossil fuels, first coal, then—currently—crude oil. But we have seen how natural resources tend to be overexploited and also how this phenomenon leads to their rapid depletion and, often, to a rapid crash of the system: it is the basic mechanism of the Seneca Collapse.

The Soviet Union was an empire mostly based on its vast mineral resources, and it was unable to escape the fate of other mineral-based empires: collapse caused by overexploitation. The fall of the Soviet Union was predictable well before it happened and it was, indeed, predicted by Soviet researchers themselves. On this point, Dennis Meadows, one of the main authors of the 1972 study The Limits to Growth, gave a talk in Moscow in 2012, telling how Soviet researchers had applied the same models to study the economy of the Soviet Union, finding that the system would soon collapse. They published their results in 1980 in a book (in Russian) titled The Soviet Union and Russia in the Global System. According to Meadows, in the 1980s, Viktor Gelovani, first author of the Russian book

went to the leadership of the country and he said, ‘my forecast shows that you don’t have any possibility. You have to change your policies.’ And the leader said, ‘no, we have another possibility: you can change your forecast’.

The 2012 talk by Meadows has disappeared from the Web, but its main points are summarized in an article of mine on the blog Cassandra’s Legacy [81]. Meadows’ statements are confirmed by the work of Eglė Rindzevičiūtė, who wrote an excellent article that tells the whole story [82]. It is clear that several Soviet scientists knew very well the “Limits to Growth” story, its methods, and its results, even though the study was officially rejected by the Soviet Government as a product of decadent Western science. These Russian scientists understood that the same factors that the study had considered for the whole world would apply to the Soviet Union. They seem to have made a considerable effort to warn the Union’s leadership that the system was going to collapse. The reaction of the Soviet leadership was the same as it was in the West: both Soviet and Western leaders were completely tied to the concept of “growth at all costs” and refractory to change. So, the warning was ignored and, as usual, ruin followed. It may well be that the straw that broke the back of the Soviet camel was the increasing cost of oil production, as argued by Douglas Reynolds in his book Cold War Energy (2016) [83].

So, what can we expect from the future? Are we going to see Western civilization follow the same path as the old Soviet Union? It is perfectly possible that many of the readers of this book will experience this kind of future. So, it may be worthwhile to listen to the forecast of someone who experienced the Soviet collapse: Dmitry Orlov. In his book The Five Stages of Collapse [84], he summarizes how the collapse of a complex society takes place:

  • Stage 1: Financial collapse

  • Stage 2: Commercial collapse

  • Stage 3: Political collapse

  • Stage 4: Social collapse

  • Stage 5: Cultural collapse

It may well be that we are already experiencing the early stages of the process, mainly in the form of financial troubles. The financial shock of 2008 was somehow remedied by what was called “quantitative easing” (QE), which consisted mainly in pumping large amounts of currency into the system. It seems to have worked, for a while at least, but many economies in the world have not completely recovered, and maybe never will.

The problem with using financial tools to solve the crisis is that you can have all the virtual money you want, but people cannot eat virtual food, nor power their cars and homes with virtual energy. This is a problem that the old Soviet Union already had with the ruble, which gradually became a worthless currency and gave rise to the well-known joke that said, “they pretend to pay us and we pretend to work for them.” In our world, money in the form of dollars is valuable, even if it is fully virtual, as long as you can exchange it for oil and all the products made from oil, from clothes to food, to fuel for your car. If (when) oil ceases to be available on the world market, then all the dollars in the world will become worthless.

Indeed, the 2008 financial collapse was directly related to the spike in oil prices which had reached the record value of $150 per barrel that year—in turn related to the high costs of extraction caused by depletion. The tumultuous arrival of shale oil on the market gave us at least a decade of pause, with oil prices remaining high on the average, but never again reaching their 2008 values. From where we stand now, everything is possible: we can see more instabilities, the collapse of the shale oil industry, and more perturbations of the fragile oil production and supply system which might well bring down the whole financial market, this time in a way that no new quantitative easing trick will fix. In that case, we would see nothing less than a stroke for the whole system, a true Seneca cliff of the worst kind.

Following the financial collapse, we might see the three other Seneca Horsemen of the Apocalypse: commercial, political, and social collapse. We are not there yet, but consider what happens if the whole system loses the fundamental communication ingredient that keeps it together: money. People will still have things to sell and there will still be people wanting to buy them but, without money, not only can buyers not pay sellers, the goods cannot even be delivered. It means that the shops run out of everything. Will you run out of food and starve? Maybe. Already in 2008, a consequence of the financial collapse was that the ships carrying merchandise all over the world stopped moving. It did not last long enough to cause the death by starvation of billions of people, fortunately, but that could be the consequence of a longer-lasting financial shock.

Some evident symptoms of commercial collapse are already visible all over the West. If you live in a poor area of your country, you may have noticed that your options in terms of shops and merchandise available have been drastically reduced. Then, of course, you may buy whatever you want on Amazon.com, but only if the financial system still lets you do that and if there is a still functioning delivery system taking it to your door. On a larger scale, the numerous economic sanctions enacted by the US government and its allies against countries perceived as enemies prefigure the breakdown of globalization as a worldwide commercial system.

Political collapse goes together with commercial collapse. Without money, people cannot buy anything and risk starving or freezing (or both). At this point, the only possibility to keep the social fabric together is for the government to intervene and provide emergency supplies, as they normally try to do in the case of large natural disasters. But the historical record on governments managing catastrophes is not good. Will they really want to help people? Or will they rather save themselves and their cronies?

Social collapse also comes together with commercial and political collapse. People will do what they can to help each other, but if things really go out of control the result may be true mayhem. We are already seeing evident symptoms of the breakdown of the social fabric in the West in the increased political polarization. In a two-party system, people try to elect representatives holding ideas similar to theirs but, normally, those voting for the other party are not supposed to be monsters to be hated, as seems to be the rule in our times. The kind of ideological hate that pervades our society nowadays is a true fracture of the social fabric. Racism, hatred of foreigners, defensive walls, every man for himself, bomb them back to the Stone Age, guns and ammunition in everyone’s basement, and more. So far, a veneer of civilization seems to be still holding, but never forget that someone said that the only thing that separates civilization from barbarism is two hot meals. Is it a prophecy? No. It is a scenario. And scenarios sometimes come true.

There remains the final stage, cultural collapse, the phase in which people cease to recognize themselves in the culture that supported the state that collapsed. The Romans stopped being pagan and the Russians stopped being communists. Neither was necessarily a bad thing; it was part of the unavoidable force that moves complex systems: change. Passing the tipping point that is collapse, the system needs to re-adapt. It did so for past collapses, and it will do so for the future one, at least if we do not manage to mitigate it.

Cultural collapse is a major change; it is actually gigantic. Think of what happened to the Roman Empire: it reverted to the political organization that had existed before, city-states and chiefdoms. But it was not just a return to the past: it was a radical change in many things. The Western Roman Empire left as an inheritance its imperial language, Latin, which became the sacred language of the Catholic Church. Latin was the governance tool in what became a social experiment never attempted before in the world: the Middle Ages mimicked the old imperial order but, instead of money, they used the spiritual benefits that the Church dispensed to the believers.

I do not mean that the fall of the modern Western Empire will bring back the Catholic Church, even though you never know what to expect from an organization resilient enough to have been able to survive for at least 1500 years. What I mean is that the cultural change that awaits the West will be enormous and radical. It may bring humankind to a new stage of social organization by going in parallel with the evolution of the human brain that led us to the axial age in just a couple of millennia. If we manage to maintain some of the technological capabilities that our tumultuous times have developed, we may one day emerge into a new civilization that might be benevolent and merciful to itself and also toward all the creatures of this planet.

Apocalypse: The Collapse of the Earth’s Ecosystem

Fig. 3.20
figure 20

An interpretation of the four horsemen of the apocalypse by Arnold Böcklin (1827–1901) in an 1896 painting titled Der Krieg (The War). Apocalypse means “revelation” in Greek, but it is commonly understood as referring to the end of the world

Imagine you are living in Jerusalem in the year 70 CE. And imagine that you have a chance to climb one of the ramparts of the walls and take a look at what is happening outside. Out there, you see the encampments of four Roman legions surrounding the city in full war posture, equipped with giant siege machines. At that point, you might be justified in feeling a certain sensation that the city was doomed.

Indeed, some of your fellow citizens of Jerusalem seem to have become a little catastrophist in their feelings. One is Jesus son of Ananias (Yeshua ben Hananiah), whose last deeds are reported as follows by Josephus in his “The Jewish War,” written some years after the conflict.

… he every day uttered these lamentable words, as if it were his premeditated vow: “Woe, woe to Jerusalem.” Nor did he give ill words to any of those that beat him every day, nor good words to those that gave him food: but this was his reply to all men; and indeed no other than a melancholy presage of what was to come… Until the very time that he saw his presage in earnest fulfilled in our siege; when it ceased. For as he was going round upon the wall, he cried out with his utmost force, “Woe, woe to the city again, and to the people, and to the holy house.” And just as he added at the last, “Woe, woe to myself also,” there came a stone out of one of the engines, and smote him, and killed him immediately. And as he was uttering the very same presages he gave up the ghost.

Prophets of doom seem to be common in history whenever the situation starts looking hopeless for one reason or another. The list is long, and Yeshua ben Hananiah is just one of them. They are not usually viewed with sympathy and their litanies are scoffed at; they may be compared to Chicken Little, who thought the sky was falling because a nut fell on his head. But there seems to be a basic phenomenon in human social groups that makes prophets of doom appear whenever there is a chance that some major disaster may occur.

As you surely noted, in our times doom-mongering has become a small cottage industry. A good example is the story of the planet Nibiru (or maybe Planet X, or maybe Herculobus, or whatever), said to have been aiming toward the Earth and scheduled to hit it in 2012, a prediction based—it seems—on an ancient Mayan calendar. Maybe the Mayans had ended their calendar with the year corresponding to our 2012 just because they had reached the end of the stone wall where they were engraving dates. In any case, the story became popular even though, of course, nothing larger than ordinary meteorites hit the Earth in 2012. The most recent version predicted that the planet Nibiru would hit the Earth in 2017. It was wrong, too, and it is possible that the arrival of Nibiru will be postponed to some future date.

Nibiru is part of a wave of imagined threats periodically sweeping the Internet in various forms and with various degrees of silliness. Some seem to be the domain of complete nuts: one is the “chemtrails” story that sees the innocuous condensation trails left by aircraft as harmful chemicals spread by the powers that be in order to poison us. Other legends have a certain scientific basis, although the threat may be wildly exaggerated, such as the fear of some people that burning fossil fuels will consume the oxygen we breathe. It is true that we can measure a slight reduction of the oxygen concentration in the atmosphere, but it is minuscule, and even burning all the known fossil fuel reserves would not lead to a decline large enough to affect human health.
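
To see why this particular threat is minor, here is a rough back-of-the-envelope sketch of how much atmospheric oxygen the burning of all fossil carbon could consume. The input numbers (the mass of the atmosphere, a generous 5000 billion tons of fossil carbon, and the approximation that burning one mole of carbon consumes one mole of oxygen) are round-figure assumptions used only for illustration:

    # Rough estimate of the oxygen consumed by burning all fossil carbon.
    # All input numbers are approximate assumptions, not precise measurements.
    ATM_MASS_KG = 5.15e18        # total mass of the atmosphere, ~5.15e18 kg
    MEAN_MOLAR_MASS = 0.02897    # mean molar mass of air, kg/mol
    O2_FRACTION = 0.2095         # molar fraction of O2 in dry air
    FOSSIL_CARBON_KG = 5e15      # ~5000 Gt of carbon, a generous figure
    CARBON_MOLAR_MASS = 0.012    # kg/mol

    air_moles = ATM_MASS_KG / MEAN_MOLAR_MASS
    o2_moles = air_moles * O2_FRACTION

    # C + O2 -> CO2: one mole of carbon burned removes roughly one mole of O2
    # (hydrogen in oil and gas consumes somewhat more, ignored here).
    o2_consumed = FOSSIL_CARBON_KG / CARBON_MOLAR_MASS

    # CO2 replaces O2 roughly mole for mole, so the total amount of air barely changes.
    new_fraction = (o2_moles - o2_consumed) / air_moles
    print(f"O2 consumed: about {o2_consumed / o2_moles:.1%} of the oxygen in the air")
    print(f"O2 concentration drops from {O2_FRACTION:.2%} to about {new_fraction:.2%}")

Even under these generous assumptions, the oxygen concentration only falls from about 21% to a little under 20.8%, far above any level that matters for breathing.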

Overall, existential threats seem to have a certain sales power. For instance, Listerine was marketed in the 1920s as a remedy against halitosis, or simply bad breath, which the creators of the advertising campaign aggressively described as a serious threat to people’s social success [85]. Peddling Listerine as a way for girls to get a husband was surely a little aggressive, although not as bad as trying to scare people with the threat of a whole planet falling onto us. But the problem is that some prophets of doom turn out to have been right when the catastrophe arrives. After all, poor Yeshua ben Hananiah correctly predicted the fall of Jerusalem in 70 CE. So, not all prophets of doom can simply be discounted as rambling madmen.

In our times, we surely face a number of threats large enough to be a source of worry not just for madmen and prophets, but for every one of us. For instance, every few years a group of thousands of the world’s best scientists in climate and ecosystem matters gets together to prepare a new report for the organization called the IPCC (Intergovernmental Panel on Climate Change). And, every few years, they tell us that if we do not stop burning fossil fuels, and fast, humankind is in dire trouble. What we are facing is the possibility of a disaster beyond anything ever experienced by humankind. The world’s ecosystem is on track toward a temperature increase of about 3–4 °C over the next several decades, unless truly draconian measures are taken to reduce the emissions of carbon dioxide. And there is no guarantee that the warming would be limited to that: nonlinear feedback effects could increase it by 6–8 °C, perhaps even more. This level of warming would have an enormous impact on the ecosphere, threatening to destroy civilization as we know it, if not to cause the extinction of the human species. Now, if that is not apocalyptic, I don’t know what is. And we are not told this by a screaming madman, but by a community of the best scientists in the world.
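
To give an idea of where numbers of this kind come from, here is a minimal sketch of the standard back-of-the-envelope calculation relating CO2 concentration to warming: the approximate logarithmic forcing formula combined with an assumed climate sensitivity of about 0.8 °C per W/m² (roughly 3 °C per doubling of CO2). It illustrates the logic only; the concentration values and the sensitivity are assumptions taken from commonly quoted ranges, not the IPCC’s actual model output:

    import math

    # Simplified forcing approximation for CO2: delta_F = 5.35 * ln(C / C0)  [W/m^2]
    # Equilibrium warming: delta_T = sensitivity * delta_F
    C0 = 280.0          # pre-industrial CO2 concentration, ppm
    SENSITIVITY = 0.8   # °C per W/m^2 (assumed, corresponds to ~3 °C per CO2 doubling)

    def equilibrium_warming(c_ppm):
        forcing = 5.35 * math.log(c_ppm / C0)   # radiative forcing, W/m^2
        return SENSITIVITY * forcing            # equilibrium warming, °C

    for c in (415, 560, 700, 900):  # today's level, one doubling, high-emission scenarios
        print(f"{c:4d} ppm CO2 -> about {equilibrium_warming(c):.1f} °C of equilibrium warming")

With these assumptions, concentrations in the 700–900 ppm range, the kind reached in high-emission scenarios, give an equilibrium warming of roughly 4–5 °C, in the same ballpark as the figures quoted above, before considering the nonlinear feedbacks.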

Faced with the magnitude of the climate threat, the response of the human community has been weak, to say the least. You are told that you can fight climate change by such things as separating your waste, using low-consumption light bulbs, buying local groceries, cycling, and other actions that seem to be conceived mainly to assuage one’s guilty feelings, but little more than that. Most people tend to ignore the climate threat, while a small minority vocally maintains that it is all a hoax invented by a group of evil scientists who thought they could get more research grants and more graduate students by hyping a non-existent threat. Opinion polls show that the public remains stuck at roughly a 50/50 split on climate change: about half of the people think it exists and is a serious threat, while the other half think it does not exist or is not a problem. Recently, a survey carried out by Yale University [86] showed a certain movement toward a larger fraction of the public identifying climate change as something to be worried about. Maybe they are by now a majority, but it remains to be seen how many of them will be willing to pay money or make sacrifices in order to combat climate change. On this point, it is worth remembering that the “Yellow Vests” movement in France started in 2018 mainly as a protest against rising fuel prices.

But for how long can people remain indifferent to the threat at the door of their cities? As the intensity of the threat mounts, it becomes more and more difficult to ignore. The change from indifference to terror may take the shape of a true tipping point, in line with James Schlesinger’s assessment that “people have only two modes of operation: complacency and panic.” The switch to panic may start small, and there is evidence that it is, indeed, starting.

The accumulating knowledge about the phenomenon called “climate change” is indeed giving rise to at least one group of prophets of doom who claim that the end of the world is coming (or, as flea prophets would say, “the end of the dog is coming,” as we can read in a Far Side comic by Gary Larson). They tend to use the term “Near-Term Human Extinction” (NTE or NTHE), and one rather well-known member of the group is Guy McPherson, who keeps a blog titled “Nature Bats Last” [87]. NTE is not a monolithic concept, especially regarding the meaning of “near-term,” but, according to McPherson, humankind could already be mostly or wholly gone by 2030, which he gives as the last year for humankind on Earth. In a recent interview [87], McPherson stated that (emphasis in the original):

Specifically, I predict that there will be no humans on Earth by 2026, based on projections of near-term planetary temperature rise and the demise of myriad species that support our own existence.

A rather bold prediction, to say the least. For the human population to go from nearly 8 billion to zero in seven years would be some kind of a Seneca cliff! Indeed, the NTE idea is normally discounted as the product of deranged minds. It must be said, in addition, that the members of the “NTE movement” do little to endear themselves to non-believers: they are often aggressive in the debate and tend to take a rigid attitude, namely that NTE will happen because it has to. This is rather typical of groups embracing extreme, non-mainstream views; being a tiny minority surely requires developing some defensive communication techniques. But the real problem with these prophecies of doom is that they encourage passivity. If we must die, why bother doing anything that could perhaps avoid it? One might as well take a vacation to Hawai’i as long as it is still possible. It would be worse if the NTE meme were to infect the minds of opinion leaders and policymakers. In that case, if panic sets in, the response of the powers that be could be reckless, to say the least. If they were to come to the conclusion that climate change is caused by too many human beings, they could well decide that getting rid of most of them is a good idea. It is a disturbing idea, but we know how often and how easily in history entire societies go into “extermination mode.” It happened in the past, and it can happen again.
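
Just to put the prediction into numbers, here is a trivial sketch of the decline rate it would imply, assuming a starting population of about 8 billion and a constant yearly decline over seven years (both figures are illustrative assumptions):

    # How fast would the human population have to shrink for "extinction by 2026"?
    # Purely illustrative: assumes a constant annual decline factor.
    START_POPULATION = 8e9   # "nearly 8 billion," as in the text
    YEARS = 7                # e.g. 2019 -> 2026
    FINAL_POPULATION = 1     # essentially extinct

    # Constant yearly survival fraction s such that start * s**YEARS = final
    survival_per_year = (FINAL_POPULATION / START_POPULATION) ** (1 / YEARS)
    mortality_per_year = 1 - survival_per_year

    print(f"Required survival fraction per year: {survival_per_year:.3f}")
    print(f"That is, about {mortality_per_year:.0%} of the remaining population dying every year")

That would require roughly 96% of the surviving population to die every single year for seven years, a collapse rate far beyond anything ever recorded in history.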

In the end, is there a chance that the NTE believers might be right? Here, unfortunately, it is not possible to demonstrate that they are wrong. Yes, we can say it is unlikely; we can say that the models do not predict anything like that, and that some extreme catastrophes such as the Venus effect seem to be ruled out by the physics of the Earth system [88]. But it is also true that climate-related catastrophes did take place in the Earth’s past, and we know that the results were mass extinctions, in some cases involving the extermination of most vertebrates. These were the results of massive volcanic eruptions known as “Large Igneous Provinces” (LIPs). The effects of the largest LIPs on the biosphere were devastating [89], and it is now believed that the extinction of the non-avian dinosaurs was not—at least not directly—the result of the impact of a large asteroid but of a LIP that appeared in the region today called the Deccan, in India. The End-Permian extinction was caused by another massive LIP, appearing in the region today called Siberia, which wiped out about 95 percent of all vertebrate species on the planet [90].

The destructive effects of large igneous provinces are not caused directly by the heat generated but by the emission of large amounts of carbon dioxide (CO2) into the atmosphere. As is typical of complex systems, this forcing generates a cascade of enhancing feedback effects, including the release of methane stored in the permafrost and perhaps the combustion of coal deposits reached by the hot magma. The result is that the Earth is pushed over a tipping point into the condition described as “hothouse Earth” [91], as opposed to the condition in which humans are accustomed to living, an “interglacial Earth.” A “hothouse Earth” is one where temperatures are so high that large areas of the planet are uninhabitable by humans and possibly by most vertebrates, while mass extinctions occur as a result of factors such as the reduction in oxygen concentration (anoxia), the release of poisonous hydrogen sulfide by bacteria, and other effects harmful to life.

Now you can see what we are discussing: a major kind of Seneca collapse, not just for humans but for the whole biosphere. Of course, there is no active LIP on Earth today, but our habit of burning what we call “fossil fuels” is having a similar effect: we are pumping large amounts of greenhouse gases into the atmosphere. The result is a forcing that could generate a cascade of feedbacks of the same kind as those generated by the ancient LIPs that destroyed most of the ecosystems of their time. As a further damning factor, solar irradiation is stronger today than it was in the past: it increases by about 10% every billion years, so it is significantly higher now than it was during the largest mass extinction episodes. That means a smaller forcing is sufficient to generate another major hothouse episode. No wonder we seem to have entered the era of the “sixth mass extinction” [92]: the first five were caused by LIPs, while the current one is human-made.
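
How much does the brighter sun matter? A rough comparison can be made between the extra solar forcing the Earth receives today, relative to the times of the great LIP-driven extinctions, and the forcing of a doubling of CO2. The sketch below uses round numbers (today’s solar constant, a fixed planetary albedo of 0.3, and the 10%-per-billion-years brightening figure quoted above) and is meant only as an order-of-magnitude illustration, not a paleoclimate calculation:

    # How much stronger is the solar forcing today than at the time of the
    # great LIP-driven extinctions? Round-number sketch only.
    SOLAR_CONSTANT = 1361.0            # W/m^2, present-day value
    ALBEDO = 0.3                       # planetary albedo (assumed constant, a simplification)
    BRIGHTENING_PER_MYR = 0.10 / 1000  # ~10% luminosity increase per billion years

    absorbed_today = SOLAR_CONSTANT / 4 * (1 - ALBEDO)  # ~238 W/m^2, global average

    events = {"End-Permian (Siberian Traps)": 252, "End-Cretaceous (Deccan Traps)": 66}
    for name, myr_ago in events.items():
        dimming = myr_ago * BRIGHTENING_PER_MYR          # fractional dimming back then
        extra_forcing_today = absorbed_today * dimming   # W/m^2 received now but not then
        print(f"{name}: the Sun was ~{dimming:.1%} dimmer; "
              f"today's extra solar forcing is ~{extra_forcing_today:.1f} W/m^2")

    print("For comparison, doubling CO2 adds about 3.7 W/m^2 of forcing")

With these assumptions, the Earth today receives roughly 6 W/m² more solar forcing than it did at the time of the End-Permian extinction, more than the forcing of a full CO2 doubling, which illustrates why a smaller push could in principle be enough to trigger a comparable hothouse episode.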

So, what are we facing, exactly? The climate models we use cannot provide an exact assessment of the effects of the reinforcing feedback loops that might lead to a climate tipping point, but there is general agreement among scientists that some kind of “climate tipping point” exists [93], although nobody can determine its parameters exactly. The emphasis in the Paris treaty on the need to stay below a maximum of 1.5 or 2 °C of warming comes from the fear that going above these temperatures would mean passing the tipping point. But, again, these values have not been determined by quantitative calculations—they are a best guess, and possibly an optimistic one.

Overall, we cannot exclude that we are doomed, but it is also true that this is far from certain and, for all we know, there is still plenty of room for maneuvering and, possibly, for avoiding the worst. One thing that is reasonably certain is that the damage will be huge well before a hothouse Earth wipes humankind out—if it ever does. Climate-related droughts may destroy a large enough fraction of agricultural production to cause widespread famines. Or the opposite phenomenon, floods, may do the same by washing away the fertile soil. Sea level rise may have a similar effect: making ports inoperable would interrupt the vital flow of food carried by container ships. It is not clear whether major weather phenomena, hurricanes or tornadoes, could have disastrous effects of the same magnitude, but that cannot be discounted. Facing these increasingly grave threats, humans could react in different ways: the basic rule of politics is to find a way to blame someone else, so a possible result would be to double down and increase the effort to ignore the threats. Or, conversely, a tipping point in perception could push the elites into desperate attempts to redress the situation by means of geoengineering, with all the unknowns involved. Who knows? It might even work. Or the elites could decide to dump the poor and save themselves by occupying regions in the high north or in the mountains.

Overall, for those of us who are not part of the elite, the future does not seem bright in terms of what climate change is bringing, and even for those who are part of the elite, the future looks hard as well. But the beauty of the future is that it cannot be predicted. So, we march into the future always equipped with an indispensable tool: hope.