Technological Progress Against Collapse: The Cold Fusion Miracle that Wasn’t

Fig. 4.1 (Image from Wikimedia, https://en.wikipedia.org/wiki/Nuclear_fusion#/media/File:Deuterium-tritium_fusion.svg)

The fusion of a nucleus of deuterium and a nucleus of tritium is believed to be usable as an energy source, but it occurs at significant rates only at very high temperatures. In 1989, Martin Fleischmann and his coworker Stanley Pons claimed to have attained the fusion of two deuterium nuclei inside a test tube at near room temperature. It was the dream of “Cold Fusion” that turned out to be just that: a dream.

In March 1989, Martin Fleischmann and Stanley Pons, researchers in electrochemistry at the University of Utah in Salt Lake City, published their claim of having attained the room temperature fusion of deuterium nuclei by means of an electrochemical process [1]. It was a new field of nuclear science that they dubbed “cold fusion.” If it was true, it was not just the discovery of the century, it was the discovery of the millennium: with their test tubes, Fleischmann and Pons had succeeded, it seemed, in tapping the same energy that makes stars burn. It was a discovery that could put to rest all fears of running out of oil at a time when the memory of the great oil crisis of the 1970s was still fresh (Fig. 4.1).

In the months that followed the announcement, almost every scientist in the world with some background in solid state physics or electrochemistry stopped doing whatever they were doing to examine the new discovery. I was part of that crowd: that year, in July, I traveled to California to spend the summer working at the Lawrence Berkeley Laboratory. There, they had one of the best surface science and electrochemistry labs in the world and, if anyone was able to confirm the claims of cold fusion, it was them.

When I arrived in Berkeley, I expected to find my colleagues excited by the new discovery and maybe working on it. But I found that they had already passed that stage and were now disappointed. They had tried to replicate the cold fusion experiments without getting any results, and they had concluded that the whole story was a mistake or, worse, a scam. So, I spent that summer in Berkeley working on subjects not related to cold fusion, but I had not given up: the fascination of the idea of being able to replicate a star in a test tube was too strong. Back in Italy, in September, I thought I could try some experiments myself using a different setup from the one my colleagues in Berkeley had used. Maybe, in that way, I could see something that they had missed.

I will not bother you with the details of what I did; you can find a little more in a blog post of mine [2]. Let me just tell you that I spent a few months working alone in my lab, feeling a little like Dr. Zarkov, the character of the Flash Gordon comics who builds a spaceship in his basement.

But, in my case, no spaceship emerged out of the lab. I soon discovered that if there was such a thing as “cold fusion” it was a very weak effect, if it was there at all. For sure it was nothing like the strong effect that Fleischmann and Pons had claimed when they spoke of the “ignition” of the deuterium they were using in their experiments. No matter what I tried to do, I could not see anything like that with my setup.

I did not give up immediately: there was a certain “Elvis sighting” atmosphere about cold fusion at that time, not unlike the many claims of having seen Elvis Presley alive in the 1980s, after his death in 1977. Claims of experimental evidence of cold fusion were popping up everywhere, and that made me think that maybe I was a bad experimenter, that I was making some mistake. The Elvis sighting effect can be strong: you tend to see what other people claim to have seen. Several times I thought I had seen a signal showing that, yes, a nuclear reaction was taking place in the steel vessel I was using for the test. It seemed that, really, the energy that powers stars had appeared in my lab. But when I redid the experiment, the signal was gone. I was chasing a ghost and, by Christmas of 1989, I gave up.

Thinking back to that old story, I believe I was lucky to have lost just a few months of work. Others would spend years, stake their reputation on uncertain results, and retire decades later still claiming that the elusive room temperature fusion was just one more experiment away. One of the characteristics of “pathological science,” indeed, is that the signal is always weak, at the edge of the sensitivity of the instrumentation. Only pathologically optimistic scientists could see that signal and, gradually, cold fusion slipped away from science to settle into something performed by colorful figures of pseudo-scientists or mad solitary geniuses touting weird machines and claiming that they are going to revolutionize the world. But that is always for next year, or for as soon as the new machine or the new test is ready. Changing the name of a discredited field did not help: turning “cold fusion” into the more high-sounding “LENR” (low energy nuclear reactions) did not change the fact that nuclear fusion is not and cannot be “low energy.” Call it what you like, cold fusion or LENR, it turned out to be full of sound and fury, signifying nothing.

Gradually, interest in the idea faded but, even today, people are still fascinated with the idea of reproducing a star in a test tube. So, 30 years after the first claims by Fleischmann and Pons, the tech giant Google engaged some researchers in a program aimed at trying again to find signs of nuclear fusion at near room temperature [3]. Unsurprisingly, they found nothing: they just repeated experiments that had already been done, confirming that there is no such thing as “cold fusion” (or LENR). They might as well have sent their researchers to search for the lost Ark of the Covenant.

This enthusiasm for something that does not exist was never fueled so much by the novelty of the physical phenomenon: nuclear fusion had been known for at least half a century. Cold fusion was always presented as something that would fulfill the prophecy of the 1950s that nuclear technologies would bring us energy “too cheap to meter.” It was a prophecy born out of the incredible achievements of the 1940s and 1950s, when it really seemed that nuclear energy was a cornucopia that would bring us perpetual abundance. No one who watched Walt Disney’s Our Friend the Atom (1957) as a teenager can forget the atmosphere of expectation of great things to come of those years.

But reality was, as usual, around the corner, and the promise of nuclear fission turned out to be much less exciting than it had seemed at the beginning. Apart from accidents, the problem of proliferation, and the difficulties of controlling the technology, it was soon discovered that the mineral reserves of uranium were far from sufficient for the kind of limitless prosperity that had been imagined at the beginning. If we wanted enough fuel for the kind of abundance envisioned in the 1950s, we would have had to engage in the dirty and dangerous business of “breeding” nuclear fuels in the form of plutonium to make up for the scant uranium resources. But the idea was soon abandoned: too complex, expensive, and risky in political terms. Nobody wanted plutonium to become commonplace all over the world when it could be used to make nuclear warheads or, more simply, turned into a deadly poison. That left nuclear fusion as the workhorse of nuclear hopes: the energy that powers stars. It seemed obvious that, if we could have it here on Earth, all problems with energy would fade away forever.

Alas, controlled nuclear fusion turned out to be an elusive dream. It is not impossible to attain it on our planet: it can be done inside nuclear warheads, but that is not the kind of technology you can use to power the electric grid. What people were dreaming about was “controlled” nuclear fusion, the same kind of taming of the enormous nuclear energies that had been obtained with nuclear fission. In the 1950s, it seemed to be just the next step in an unstoppable progression of better technologies, but things turned out to be more difficult than imagined. Decades of work and untold billions of dollars were spent to build larger and larger “Tokamak” machines, supposed to be able to reach temperatures so high that “hot” nuclear fusion would take place at a rate fast enough for useful energy to be produced. So far, the only result obtained was to show that a bigger machine was needed. The latest incarnation of this “big is beautiful” approach is the ITER machine, being built in Southern France. It is so big that 35 nations had to pool their resources to make the project possible. Construction started in 2007 and the machine is scheduled to start working as a fusion reactor by 2035 [4]. That does not mean that ITER will produce useful energy—a new and even bigger machine will be needed for that—if it ever works. It is even more uncertain whether it will make economic sense to use it. At this rate, our civilization may go through a couple of Seneca Cliffs before we find a way to make this kind of machine useful for something.

Other approaches to fusion, not based on tokamaks, turned out to lead to dead ends, too. It is often possible to create devices that can produce nuclear fusion; the problem is to turn them into useful energy sources. There may be a fundamental problem here: despite all the hype, it may be that nuclear fusion is just not such a great idea for what we need. The power density of the Sun is ridiculously low: even in its core, it is less than 300 watts per cubic meter [5]. The engine of a small car may have a power density thousands of times larger! Nature, it seems, does not like to keep very high power densities for long times, and stars are spectacular machines but not very efficient ones. So, the dream of cheap and abundant energy from nuclear reactions may always remain a dream, at least on our planet.
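Just to fix the orders of magnitude, here is a little back-of-the-envelope sketch in Python. The figure for the Sun is the upper bound cited above; the power and volume of the car engine are rough assumptions of mine, meant only to illustrate the comparison.

```python
# Rough comparison of power densities. The solar figure is the upper bound
# cited in the text; the engine numbers are order-of-magnitude assumptions.

SUN_CORE_POWER_DENSITY = 300.0   # W/m^3, upper bound for the Sun's core (from the text)
ENGINE_POWER_W = 50_000.0        # ~50 kW, a small car engine (assumption)
ENGINE_VOLUME_M3 = 0.05          # ~50 liters of engine block (assumption)

engine_power_density = ENGINE_POWER_W / ENGINE_VOLUME_M3
ratio = engine_power_density / SUN_CORE_POWER_DENSITY

print(f"Car engine: {engine_power_density:,.0f} W/m^3")
print(f"Sun's core: {SUN_CORE_POWER_DENSITY:,.0f} W/m^3 (upper bound)")
print(f"The engine is roughly {ratio:,.0f} times denser in power")
```

With these assumptions the engine wins by a factor of a few thousand, which is the whole point: a star is an enormous but very dilute energy source.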

But let us crank up the dreaming machine and start speculating a little. What if we could really develop a miraculous technology that would give us nearly free, non-polluting, and abundant energy? Would that help us avoid the impending Seneca Cliff of our civilization?

First of all, with cheap and abundant energy, the depletion of mineral resources would not be a problem. We would no longer need to mine depleting ores; we could just mine the crust for whatever element we need. It is the concept of the “universal mining machine” [6], a mechanism that eats rocks and spits out their contents nicely arranged in boxes of pure elements. A machine like that is physically possible but, today, it would make no sense because of the horrendous amount of energy it would need. But what if we could increase the global energy supply by a factor of, say, one hundred or one thousand? Then we could really mine the Earth’s crust to obtain all the chemical elements we need. Of course, these machines would also produce a gigantic amount of pollution, but they could be sent to the Moon or to the asteroids: the pollution would remain there while the precious materials mined could be shipped to Earth. Or, with abundant energy, we could ship pollution to space.
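To see why the universal mining machine is so energy-hungry, here is a crude sketch, taking copper as an example. To a first approximation, the energy cost of extraction scales with the mass of rock that must be dug up and ground per tonne of metal, that is, with the inverse of the concentration. All the numbers are rough assumptions of mine, not figures from reference [6].

```python
# Crude scaling argument: energy ~ rock processed ~ 1 / concentration.
# All values are illustrative assumptions (copper taken as an example).

ORE_GRADE = 0.005                  # 0.5% copper in a typical ore today (assumption)
CRUST_GRADE = 0.00005              # ~50 parts per million in average crustal rock (rounded)
ENERGY_PER_TONNE_ROCK_GJ = 0.1     # ~0.1 GJ to mine and grind one tonne of rock (assumption)

for name, grade in [("typical ore", ORE_GRADE), ("average crust", CRUST_GRADE)]:
    rock_per_tonne_metal = 1.0 / grade                      # tonnes of rock per tonne of copper
    energy_GJ = rock_per_tonne_metal * ENERGY_PER_TONNE_ROCK_GJ
    print(f"from {name:13s}: {rock_per_tonne_metal:8,.0f} t of rock, ~{energy_GJ:8,.0f} GJ per tonne of copper")
```

The two orders of magnitude separating the two figures are the reason why something like a hundredfold increase in the energy supply would be needed before mining the undifferentiated crust made any sense.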

Then, how about the problem of human overpopulation? Cheap and abundant energy could solve that problem, too. We could use artificial light to power photosynthesis on a truly gigantic scale. There is a wonderful science fiction novel by Robert Anson Heinlein, The Moon is a Harsh Mistress (1965), describing a future in which the Moon has become a granary for an ever-expanding Earth population, with the grain shipped to Earth by means of an “electromagnetic catapult.” If something like that were possible, we could turn the Earth into a planet similar to Trantor, the galactic capital described in Isaac Asimov’s Foundation series: a completely urbanized planet formed of a single, giant city covering the whole landmass. Then we could have hundreds of billions of people on Earth and, probably, no other species with a body mass larger than a few kilograms except, perhaps, for cows. Maybe cows could be raised on the Moon, too.

If we had really large amounts of cheap energy, we could ship people to space and have them live inside giant artificial habitats orbiting around the Earth, a daring scheme proposed in 1974 by Gerard O’Neill [7], in part as a response to the scenarios of collapse proposed in the first edition of The Limits to Growth, in 1972. O’Neill’s concept was based on immense pressurized habitats placed at the L4 and L5 Lagrange points, where the interplay of the gravitational fields of the Earth and the Moon generates points of stable equilibrium. At these points, an object can remain in a stable position, in principle, forever. Some dreams of space colonization turned out to be even grander. In 1960, Freeman Dyson [8] proposed that an immense sphere surrounding the Sun could be built using matter obtained from dismantling the planets. If such a feat were possible, it would increase the human habitat by an enormous factor in comparison to occupying the surface of just one planet. Some other studies even considered the possibility of colonizing the whole galaxy. Although the speed of light is an absolute limit that, as far as we know, cannot be overcome, even at relatively slow speeds an intelligent species could colonize the galaxy in times of the order of a million years [9].

The concept of unlimited available energy can be modeled, and this was done for the first time in the 1972 study The Limits to Growth [10]. The model used did not consider energy as a disaggregated parameter, but unlimited energy could be modeled indirectly by removing the limits to the flux of natural resources into the economy. A simulation along these lines was performed already in the first Limits study, in 1972, and it was confirmed in the later versions: unlimited available energy postpones collapse but generates it anyway, as the result of a combination of overpopulation, depletion of agricultural soil, and pollution. But if these limits are removed, too, assuming an expansion into space, then we have a scenario that the authors of the study termed IFI-IFO (infinite in, infinite out). And, as you would expect, the result is that the economy and the human population keep growing forever or, at least, for as long as you care to run the model into the future. Yes, but Santa Claus, too, could solve a lot of problems if he existed.
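The logic of that first simulation can be illustrated with a toy model. What follows is not the World3 model used by the Limits to Growth team, just a minimal sketch of mine, with made-up parameters, showing how an economy that faces no resource limits can still overshoot and collapse because of the pollution it generates.

```python
# Toy illustration: unlimited resources, but pollution feeds back on the economy.
# Not the World3 model; parameters are arbitrary and chosen only for illustration.

def run(years=300, dt=1.0):
    economy, pollution = 1.0, 0.0
    history = []
    for step in range(int(years / dt)):
        growth = 0.03 * economy                 # unconstrained growth: resources are "infinite"
        damage = 0.02 * pollution * economy     # pollution erodes productive capacity
        emission = 0.05 * economy               # pollution generated in proportion to activity
        absorption = 0.01 * pollution           # slow natural absorption
        economy = max(economy + (growth - damage) * dt, 0.0)
        pollution = max(pollution + (emission - absorption) * dt, 0.0)
        history.append((step * dt, economy, pollution))
    return history

for year, econ, poll in run()[::25]:
    print(f"year {year:5.0f}  economy {econ:6.2f}  pollution {poll:6.2f}")
```

The toy economy grows for a couple of decades, peaks, and then slides down its Seneca Cliff even though nothing ever runs out.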

So, let’s go back to the real world and examine what we could reasonably do in terms of technological progress to avoid the Seneca Cliff for our civilization or, at least, mitigate its damage. Of course, we must first ask ourselves what we mean by progress. Spaceships? Smartphones? Laser beams? Boner pills? All this and more, but what is it that links all those things together? How can we define progress? And how can we measure it when we are not sure how to define it? One thing we can say about it is that it is a relatively new idea: the ancient Romans or the people of the Middle Ages would have seen no difference between their way of living and that of their parents or grandparents, or even that of people living centuries before. They would have been baffled by the concept that, somehow, tinkering with mechanical things would change their lives and make the world better. It was only during the 18th century that Edward Gibbon noted the trend of technological progress, perhaps for the first time in history, in his The Decline and Fall of the Roman Empire (1788), when he wrote that “The ancients were destitute of many of the conveniences of life which have been invented or improved by the progress of industry.” In time, the concept of progress became commonplace, and the enthusiasm for it probably spiked to its highest level during the mid-20th century, when the “Atomic Age” was in full swing and people expected friendly home robots, flying cars, and weekends on the Moon for the whole family. The mid-20th century was also the time when the first attempts at quantifying progress were made.

The merit of having been the first to try to quantify progress goes perhaps to Robert Anson Heinlein (1907–1988), mainly known as a science fiction writer. In his 1952 article titled Pandora’s Box (originally published with the title Where To? [11]), he proposed that technological progress had been growing exponentially up to then and would continue to grow exponentially in the future, bringing unimaginable wonders to humankind. It was a bold attempt to understand a difficult concept, but also a flawed one in many ways. Heinlein did not even attempt to define or quantify his concept of “technological progress”; he just drew a growing curve by hand on a Cartesian graph. Then, his detailed predictions turned out to be nearly all wrong. He spoke of anti-gravity, space flight for the masses, life extension beyond 100 years, and many other wonders that never materialized. At the same time, he failed to imagine such things as the Internet, cell phones, personal computers, and most of what we consider today as the tangible manifestations of progress.

But the idea that technology grows exponentially seemed to be mature in the 1950s, and it appeared in a different form when, in 1956, the economist Robert Solow published the results of a study that is often considered the basis of the understanding of technological progress in economics [12]. Solow could fit his data by assuming the presence of a factor, which he called “A(t),” that grew exponentially with time. This entity came to be known as the “Solow residual” or “Total Factor Productivity” (TFP), and it is commonly understood as a quantitative measure of technological progress. According to Solow, it grows exponentially with time at a rate of 1–2% per year. If this factor could keep growing forever, it would easily compensate for such factors as the declining availability of natural resources, as argued, for instance, by William Nordhaus in 1992 [13]. Just 1–2% per year? That does not seem to be so difficult. If we could keep that rate of growth of progress, the A(t) factor would get rid of all cliffs and keep the economy growing forever or, at least, for a very, very long time. That is surely a comforting idea, and it is by now rather well entrenched in economics and with policymakers. So much so that, when a problem appears, the knee-jerk reaction of many politicians is “we must finance more research.”
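To make the idea of the residual concrete, here is a minimal growth-accounting sketch. The Cobb-Douglas production function is the standard textbook choice, and all the growth rates below are assumptions of mine, not Solow’s data.

```python
# Growth accounting with a Cobb-Douglas production function Y = A * K**alpha * L**(1 - alpha).
# The "Solow residual" is whatever output growth is left unexplained by capital and labor:
#     gA = gY - alpha * gK - (1 - alpha) * gL
# All numbers below are illustrative assumptions.

alpha = 0.3    # capital share of income (textbook assumption)
gY = 0.030     # growth rate of output, 3% per year (assumed)
gK = 0.040     # growth rate of the capital stock, 4% per year (assumed)
gL = 0.010     # growth rate of labor input, 1% per year (assumed)

gA = gY - alpha * gK - (1 - alpha) * gL
print(f"Implied TFP growth (Solow residual): {gA:.2%} per year")
```

Note that A(t) is never observed directly: it is whatever is left over once the production function has been chosen, which is exactly the point raised below.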

But is it true that progress grows exponentially with time? And what exactly is this “Solow residual”? How can we be sure that it will keep growing exponentially, assuming that is what it has been doing up to now? And can we put our trust in a parameter that cannot be measured, but only inferred on the basis of a highly simplified model? The residual identified by Solow may actually exist, but it may be related to factors other than technological progress. It may simply be proportional to the supply of energy to the system, as proposed, among others, by Robert Ayres [14]. So, the incorporeal TFP factor may really be something much more concrete than what it was thought to be. Indeed, the conventional understanding of the TFP was criticized by Herman Daly in his Steady-State Economics (1977) [15], where we can read in chapter 5 that:

The idea that technology accounts for half or more of the observed increase in output in recent times is a finding about which econometricians themselves disagree. For example, D. W. Jorgenson and Z. Grilliches found that “if real product and real factor input are accurately accounted for, the observed growth in total factor productivity is negligible” (1967). In other words, the increment in real output from 1945 to 1965 is almost totally explained (96.7 percent) by increments in real inputs, with very little residual (3.3 percent) left to impute to technical change. Such findings cast doubt on the notion that technology, unaided by increased resource flows, can give us enormous increases in output. In fact, the law of conservation of matter and energy by itself should make us skeptical of the claim that real output can increase continuously with no increase in real inputs.

A further perplexity about the role of the TFP residual derives from the fact that it may be the only entity in economics that is supposed to keep growing forever. That is curious, to say the least, considering the established concept of “diminishing returns” in the economic sciences. Why should technological progress be exempt from this very general law? This point was examined already in the 1970s by Giarini and Laubergé [16] and, more recently, by Tainter [17]. From these studies, it seems clear that the growth rate of technological progress is slowing down in our times. It is not growing exponentially anymore, assuming that it ever did in the past.

There are plenty of technological areas progressing very slowly, if they are progressing at all. Just think of how average human life expectancy is no longer significantly increasing after the spectacular rise observed up to a few decades ago. Even highly touted cases, such as “Moore’s law” in electronics, are showing signs of fatigue. Moore’s law indicated that the number of elements placed on a computing chip should double approximately every two years. But it has been clearly slowing down—perhaps just disappearing—during the past few years [18]. The mysterious technological force that is said to push the economy onward may be made of such stuff as cold fusion is made of: dreams and bad measurements.

That does not mean that technological progress does not exist, but it means that we need to look at it as something real, something that works, something other than uncertain parameters of uncertain models. What kind of technology do we need to avoid the Seneca Cliff we are facing?

Nowadays, much research is about solutions that would worsen the problem. Think of biofuels: they are another knee-jerk solution to depletion problems. “Are we running out of oil?” So, what’s the problem? We’ll use biofuels! But that makes no sense if you think of it quantitatively. Photosynthesis, the process plants use to create organic molecules out of sunlight and atmospheric carbon dioxide, is not very efficient, around 1% on average, and probably less than that for crops. So, it is easy to calculate that if we were to use agriculture to produce the fuel needed for today’s gigantic fleet of fossil fuel-powered vehicles, we would use most of the available agricultural land [19]. And, surely, the idea of starving people in order to feed cars does not seem to be very smart. So far, the effort on biofuel cultivation has resulted mainly in the wholesale destruction of many primeval forests to cultivate palm oil and, as a consequence, in the near extinction of orangutans. All that just for the production of little more than 2% of the total diesel fuel produced in the world [20]. Maybe you do not care about the Seneca collapse of orangutans, but for sure it will not save us from our own collapse. So, is it worth it?
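Here is a back-of-the-envelope version of that calculation. All the input numbers are rounded assumptions of mine, chosen only to show the order of magnitude; they are not the figures used in reference [19].

```python
# How much cropland would it take to fuel today's road vehicles with biofuels?
# All values are rough assumptions, for an order-of-magnitude estimate only.

WORLD_ROAD_FUEL_J = 1.0e20      # ~100 EJ/year burned by road vehicles (assumption)
SOLAR_FLUX_W_PER_M2 = 200.0     # average insolation on cropland (assumption)
SUN_TO_TANK_EFF = 0.001         # ~0.1% net sunlight-to-fuel efficiency, once the ~1%
                                # photosynthetic limit, crop losses, and processing
                                # energy are accounted for (assumption)
SECONDS_PER_YEAR = 3.15e7
WORLD_CROPLAND_M2 = 1.5e13      # ~15 million km^2 of cropland worldwide (rounded)

fuel_per_m2 = SOLAR_FLUX_W_PER_M2 * SUN_TO_TANK_EFF * SECONDS_PER_YEAR   # J per m^2 per year
area_needed_m2 = WORLD_ROAD_FUEL_J / fuel_per_m2

print(f"Fuel yield per square meter: {fuel_per_m2 / 1e6:.1f} MJ/year")
print(f"Cropland needed: about {area_needed_m2 / 1e12:.0f} million km^2")
print(f"Fraction of existing cropland: {area_needed_m2 / WORLD_CROPLAND_M2:.0%}")
```

With these assumptions, fueling the vehicle fleet would take roughly as much land as all the cropland we have, which is the point of the paragraph above.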

Similar considerations can be made for the many efforts to develop technologies making us more energy efficient. That is surely a worthy task in many respects. It is a good thing to insulate our homes, use more efficient cars, LED lights, public transportation, organic food, and things like that. But would it save us from depletion and climate collapse? Unfortunately, in many cases all these efficiency-related ideas amount to little more than greenwashing. Not that they are bad ideas, but their economic return is slow: it takes several years to recover the investment in, say, insulating one’s house. And we are running out of time with mineral depletion and climate change.

Then, there is a perverse effect associated with technologies that improve efficiency. You have probably heard of the “Jevons Paradox,” described for the first time in Jevons’ 1865 book The Coal Question [21]. The gist of Jevons’ idea was that improvements in efficiency do not lead to a reduction in the amount of energy used, something that he could demonstrate by means of data on the use of coal-powered steam engines in England during the 19th century. It is not obvious that the “paradox” holds exactly in its original form in modern times, but studies tend to support the idea [22] under such names as “rebound,” “backfire,” and the “Khazzoom-Brookes Postulate.” Indeed, the idea makes a lot of sense: it is not a paradox at all. Imagine that you insulate your home: you save money on heating costs, and what will you do with that money? Maybe you’ll make a donation to the WWF to save the tortoises of the island of Pago-Pago but, more likely, you will take a vacation to Hawai’i, using at least the same amount of fossil resources and creating the same amount of pollution that your heating system would have created before you insulated your home.

This discussion may sound pessimistic, but we do not have to be discouraged; we only need to be more creative. If technology cannot produce miracles, it is also true that maybe we do not need them. We saw that complex systems are entropy-producing machines that feed on energy potentials. So, if we want the complex system we call “civilization” to keep going in some form or another, we need to provide food for it: an amount of energy comparable to the one produced today mainly by means of fossil fuels. It is not impossible. The paper that Sgouris Sgouridis, Denes Csala, and I published in 2016 with the title The Sower’s Way [23] shows that the renewable technologies we have today, mainly wind and photovoltaics, are good enough to replace the energy flow we obtain from the dwindling fossil fuel resources, without causing greenhouse gas emissions. We also found that it would be possible to use the remaining fossil fuels to jump-start a renewable-based infrastructure that, subsequently, would not need fossil fuels anymore. In other words, we would use fossil fuels in the same way as our farmer ancestors used corn saved from the previous harvest for the new one. A nice idea with one glitch: it will be very expensive, although not impossible. The data also show that, if we want this transition, we have to start paying for it right now. We need to increase the amount of energy invested in creating the new energy infrastructure by about a factor of 50. That is unlikely to happen, considering that in the present debate the opinion leaders have not yet realized the true potential of renewable energy. Apparently, we are not as wise as our ancestors and we believe that the good thing to do is to eat our seed corn. As long as we keep this attitude, no technological progress will save us from the coming Seneca Cliff.
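The mechanism of the sower’s strategy can be illustrated with a toy calculation: divert a share of the energy the economy produces today into building renewable plants, which then keep producing for decades. The parameters below are illustrative assumptions of mine, not the values used in the paper [23].

```python
# Toy sketch of the "sower's strategy". All parameters are illustrative assumptions.

FOSSIL_0 = 500.0         # EJ/year supplied by fossil fuels today (rounded)
FOSSIL_DECLINE = 0.03    # assumed depletion-driven decline of fossil output, 3% per year
INVEST_SHARE = 0.06      # share of total energy output reinvested in new plants (assumption)
PAYBACK_YEARS = 1.5      # energy payback time of a new renewable plant (assumption)
LIFETIME_YEARS = 25.0    # operating lifetime of a plant (assumption)

fossil, renewable = FOSSIL_0, 10.0
for year in range(51):
    total = fossil + renewable
    if year % 10 == 0:
        print(f"year {year:2d}: fossil {fossil:5.0f}  renewable {renewable:5.0f}  total {total:5.0f} EJ/yr")
    invested = INVEST_SHARE * total          # the "seed corn" taken out of current consumption
    added = invested / PAYBACK_YEARS         # yearly output of the plants that investment builds
    retired = renewable / LIFETIME_YEARS     # old plants wearing out
    renewable += added - retired
    fossil *= 1.0 - FOSSIL_DECLINE
```

With these made-up numbers the total energy supply never collapses, but only because a few percent of it is set aside every single year, starting now: that is the seed corn we are currently eating.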

To conclude this chapter, let me note that there exists another view of technological progress, grander and more ambitious than the one that derives from the smooth curves of economic models. As an example of this view, we can cite Kevin Kelly’s book Out of Control [Kelly 1994], where we find a description of progress that was produced as a direct criticism of the Limits to Growth study. We read on p. 575 that:

Direct feedback models such as Limits to Growth can achieve stabilization, one attribute of living systems, but they cannot learn, grow or diversify—three essential complexities for a model of changing culture or life. Without these abilities, a world model will fall far behind the moving reality. A learning-less model can be used to anticipate the near future where co-evolutionary change is minimal; but to predict an evolutionary system—if it can ever be predicted in pockets—will require the exquisite complexity of a simulated artificial evolutionary model.

And:

The Limits of Growth cannot mimic the emergence of the industrial revolution from the agrarian age. “Nor,” admits Meadows, “can it take the world from the industrial revolution to whatever follows next beyond that.”

In this view, progress is something that moves in leaps and bounds, actually in “quantum leaps,” and, as it grows, it spikes up, changing everything radically and forever. From a human viewpoint, at some moment progress will appear to, literally, shoot out to infinity. In some interpretations, this phenomenon will lead humankind to transcend into a nearly godlike, “transhuman” status, an idea that may have been expressed for the first time in its modern form in Robert Ettinger’s book Man into Superman, originally published in 1972 [24]. The most recent proposer of the concept of a technological singularity is probably Ray Kurzweil, who has published several books on the subject, among them The Singularity Is Near [25]. These concepts are fascinating but, at present, they remain in the realm of future possibilities. If humankind goes through a technological singularity, then we cannot know where it will go, nor even whether it will continue to exist afterward.

Even without these extreme possibilities, it is clear that technology, in the form of Artificial Intelligence (AI), is taking us somewhere, and that somewhere may not be exactly where we want to go. The Web is more and more invading our minds, changing us rather than changing our environment. Instead of finding the magic trick that gives us abundant energy, it may lead us not to need it. But will it? Let me cite from a recent article by George Dyson on Edge [26]:

Most of us, most of the time, are following instructions delivered to us by computers rather than the other way around. The digital revolution has come full circle and the next revolution, an analog revolution, has begun. None dare speak its name.

The genius — sometimes deliberate, sometimes accidental— of the enterprises now on such a steep ascent is that they have found their way through the looking-glass and emerged as something else. Their models are no longer models. The search engine is no longer a model of human knowledge, it is human knowledge. What began as a mapping of human meaning now defines human meaning, and has begun to control, rather than simply catalog or index, human thought. No one is at the controls. If enough drivers subscribe to a real-time map, traffic is controlled, with no central model except the traffic itself. The successful social network is no longer a model of the social graph, it is the social graph.

We imagine that individuals, or individual algorithms, are still behind the curtain somewhere, in control. We are fooling ourselves. The new gatekeepers, by controlling the flow of information, rule a growing sector of the world.

What’s going to happen to us? Will it alter the way our brains are built, with their ingrained desire to have more? Will it lead us to learn to live with the limits we have? Whatever happens, the future is never like the past: whether the next Seneca Cliff will be in real space or in virtual space, we cannot say.

The Evil Side of Collapse: The Iago Strategy

Fig. 4.2 (Picture by Ivan Bea, https://en.wikipedia.org/wiki/Joker_(character)#/media/File:Joker_expo.jpg)

The character of “The Joker” at the 2015 art exhibition of the Barcelona International Comics Convention, complete with the Satanic laughter typical of truly evil characters.

With Iago, in Othello, William Shakespeare created perhaps the best evil character in the history of literature. The drama is all based on the subtle plotting of Iago to get revenge on his master, Othello, by having him suspect his wife, Desdemona, of betraying him. In the story, neither Othello nor Desdemona are described as especially dumb people, but they are overwhelmed by the superior cunning of Iago, who exploits every detail, every chance, every event to fan Othello’s suspicions until, eventually, Othello is led to kill his wife and then to kill himself.

In modern times, it seems that the subtle and sophisticated evil characters of past literature, such as Iago, have been replaced by ugly monsters endowed with little more than a Satanic smile and the kind of laughter that goes “Bwa-ha-ha-ha” in comics. But if evil characters have existed in fiction since the time of the Sumerian priestess Enheduanna, it is because they are the mirror of something real. In your everyday life, you will rarely see the equivalent of “The Joker,” the arch-villain of the Batman universe, but you do see equivalents of Iago in the people managing the twists and the traps of what we call “office politics.” Some people seem to show an uncanny skill in maneuvering things in such a way as to damage other people. They can destroy themselves as well! I don’t know about your experience, but I saw that happening more than once in my career. And, of course, evil is a common occurrence in politics, where people in positions of power can do a lot of damage to all of us.

Iago is truly the embodiment of the concept of evil in the sense attributed to Satan himself, described as “The Master of Lies.” How does he attain this proficiency as an arch-villain? I would say that Iago masters the science of complex systems. His actions follow the basic tenets of Griffith’s theory of fracture: he is engaged in creating small cracks in the network of social relations among the characters surrounding him, making the fissures grow by exploiting the internal strains of the connections. The cracks grow until they coalesce into a single one in the relation between Othello and Desdemona. The crack grows longer than the Griffith length, and it makes the system go critical and pass through a tipping point: tragedy ensues, as we know. We could call this technique of destroying a complex system “The Iago Strategy.”

The idea of using collapse to get rid of your competitors and enemies goes beyond individual actions and may become a business or a political strategy. Especially in politics, calumny is a well-known and well-honed strategy, sometimes going under the name of “muckraking” when it is done by journalists. In some cases, calumny is part of an election strategy: an example is how Lyndon Johnson damaged his opponent, Barry Goldwater, in the presidential election of 1964 by accusing him of planning a nuclear war. On a larger scale, the method is part of the concept of “Yellow Journalism,” a technique that combines exaggerations, wild claims, and unsupported accusations aimed at specific persons. It became popular in the US starting in the late 19th century, and it is still very popular today. We just need to remember the case of Dominique Strauss-Kahn, French manager and politician, who was accused in 2011 of having sexually assaulted a hotel maid in New York. The story was, and continues to be, highly controversial, but it surely thwarted his ambitions to compete for the presidency of France.

The idea of causing an opponent to collapse does not refer just to political struggles. As von Clausewitz said, war is nothing more than the continuation of politics by other means, and the capability of causing the collapse of the enemy has obvious military implications. Warfare is, after all, a struggle that involves complex systems: armies fight and maneuver against each other, entire countries support them, and the battle goes on until one of the two sides collapses as the result of accumulated strain.

The most brutal and expensive way to get rid of an enemy is simply to destroy it. But, already in ancient times, Sun Tzu noted how “all warfare is based on deception.” That seems to imply that the best way to win a war would be to exploit the internal strains of the enemy’s networked structure, and this needs to be done in a covert manner. Then the enemy will defeat itself and, citing again from Sun Tzu, “the supreme art of war is to subdue the enemy without fighting.” It must be said that, in modern times, these ideas do not seem to be very popular with the military or with politicians. Maybe a wave of barbarism is pervading the world: the Second World War was the last major war to be formally declared by the governments engaged in it. Afterward, only a few local wars were actually declared, despite many having been fought. Nowadays, wars go on until the losing side is utterly destroyed and its leaders captured and often executed.

Wars have also become more cruel and ruthless than they used to be in another respect: the involvement of civilians. Of course, exterminating civilians is an ancient tradition but, in our times, it is supposed to be illegal, and those who directly target civilians risk being hanged when the war is over (of course, only if they are on the losing side). In practice, the idea of civilians as a legitimate war target is deeply entrenched in current military thought. It seems that it was explicitly proposed for the first time in modern times by Giulio Douhet, an Italian officer and the author of The Command of the Air (Il dominio dell’aria, 1921). Douhet’s ideas seem to be taken from an evil character of a comic book, a sort of early “Joker,” even though we have no record that Douhet would intersperse bouts of Satanic laughter within his utterances on strategy. But the concept he proposed was truly evil: abandon all conventional warfare intended as a struggle of armed forces and concentrate instead on aerial bombing to kill civilians. They would have to surrender, or else they would be exterminated (Fig. 4.3).

Fig. 4.3 (Image from the National Archives, https://www.archives.gov/files/research/military/ww2/photos/images/ww2-73.jpg)

An American B-17 bomber in action over Germany in 1943.

The idea of killing everyone on the other side is at the basis of the deployment of the various mass murder weapons that were accumulated, and sometimes used, during the 20th century. Still today, the USA and Russia have considerable overkill capabilities against each other, and against the whole of humankind, in terms of the number of nuclear weapons they stockpile. Other countries may not be able to exterminate humankind by using the nuclear weapons they possess, but they seem to be doing their best in that direction.

In addition to nuclear weapons, there are interesting (in a certain sense) possibilities in terms of mass extermination by means of chemical and bacteriological weapons, although neither seems to have been tried on a truly large scale, so far. The same is true for the latest generation of hi-tech weapons: aerial drones, which might also be used for purposes of extermination. At present, they seem to be used only for “targeted killings” directed against a relatively small number of targets. The latest available data speak of some 10,000 victims of drone strikes carried out by US forces from 2004 to date [27]. We have no idea how reliable this estimate is. If it is, this is a relatively small number of casualties, but surely drone warfare could be stepped up and these weapons turned into proper mass murdering tools. The concept of killer microdrones has been described in the 2017 “Slaughterbots” movie by the Future of Life Institute and Stuart Russell [28]. It is based on the idea of small drones carrying a small explosive charge, sufficient to kill a person, and equipped with facial recognition technologies able to identify specific persons, or generic people who wear a certain uniform or have certain ethnic facial traits. If that is not evil, I do not know what is. Maybe the makers of this weapon could improve it by adding the capability for the drone to emit a Satanic laughter that goes Bwa-ha-ha-ha just before it kills its target by exploding near his or her forehead. Fortunately, it seems that this technology is not available yet, but there is no reason why it could not be developed in the future.

Mass extermination is surely a way to push an enemy down a steep Seneca Cliff, but it seems to be a little drastic as a method. Besides, it has a big problem that, curiously, Douhet and his followers completely forgot to take into account. If you have an inexpensive and effective technology to exterminate your enemies, chances are that they will have it, too, to be used against you. And that makes things a little problematic, with the risk of symmetric, reciprocal extermination, as nearly happened in Europe during WW2 with the aerial bombing campaigns in which both the Allies and the Axis engaged. It is strange that this point does not appear clear either to the public or to policymakers. For instance, a recent survey carried out by the Bulletin of the Atomic Scientists [29] finds strong support among the American public for a preventive nuclear attack against North Korea that would kill one million people there. Apparently, many people love the idea of pushing others down what could be the steepest Seneca cliff of all, nuclear extermination, without thinking too much about what the targeted nation could do in terms of retaliation. But killing people on both sides until nobody is left alive looks a little dumb as a military strategy, to say the least. Can’t we think of something smarter?

If war is a struggle involving the stability of complex systems, a smart strategy would consist in exploiting the networked structure of the enemy society to cause it to collapse: this is the systems science view. An army, or any fighting organization, is a network, and in all networks nodes must communicate with each other. So, every army is susceptible to collapse caused by a loss of communication and, in particular, to the feedback effect that takes place when the nodes communicate the wrong information to each other. For instance, if a soldier starts running away from the battlefield, the soldiers nearby receive the message that things are not going well and they may start running away, too. Reinforcing feedbacks take over and the whole army melts away: it is the nightmare of all generals, ancient and modern.

Avoiding this occurrence is the reason why modern armies are pyramidal networks where each node communicates almost exclusively with the layer above and the layer below. Soldiers do not give orders to each other; they receive them from their officers, who in turn receive orders from higher-level officers, and the whole army depends on a central command. This kind of structure avoids the melting catastrophe but makes the army vulnerable to a “decapitation strike”: if all communication must pass through a single node of the network, then removing this node is a way to generate a Seneca Collapse.
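The fragility of the pyramidal arrangement is easy to show with a small sketch: build a toy command tree and a flat mesh of the same size, remove the same node, and see how much of the network is still connected. This is purely illustrative and, of course, not a model of any real army.

```python
# Compare a hierarchical "command tree" with a flat mesh after a decapitation strike.

from collections import deque

def largest_connected_fraction(nodes, edges, removed):
    """Fraction of the surviving nodes that remain in the largest connected piece."""
    alive = set(nodes) - {removed}
    adj = {n: set() for n in alive}
    for a, b in edges:
        if a in alive and b in alive:
            adj[a].add(b)
            adj[b].add(a)
    best, seen = 0, set()
    for start in alive:
        if start in seen:
            continue
        queue, component = deque([start]), {start}
        seen.add(start)
        while queue:
            for nxt in adj[queue.popleft()]:
                if nxt not in component:
                    component.add(nxt)
                    seen.add(nxt)
                    queue.append(nxt)
        best = max(best, len(component))
    return best / len(alive)

# 13 nodes: one commander (0), three officers (1-3), nine soldiers (4-12).
nodes = list(range(13))
tree = [(0, 1), (0, 2), (0, 3)] + [(o, s) for o in (1, 2, 3) for s in range(4 + (o - 1) * 3, 4 + o * 3)]
# The same 13 nodes arranged as a ring with a few extra shortcuts.
mesh = [(i, (i + 1) % 13) for i in range(13)] + [(i, (i + 4) % 13) for i in range(13)]

print(f"command tree, commander removed: {largest_connected_fraction(nodes, tree, 0):.0%} still connected")
print(f"flat mesh, same node removed:    {largest_connected_fraction(nodes, mesh, 0):.0%} still connected")
```

Removing the commander shatters the tree into separate fragments, while the mesh barely notices; this is also why the flexible structures described below are so much harder to decapitate.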

The problem with the idea of destroying a military structure by decapitation is two-fold. The first part is that this vulnerability is well known, and strategies are normally implemented to ensure that leaders are difficult to kill. For instance, in the United States, the president has a bunker under the White House that is supposed to be used as a secure shelter and communications center in case of an emergency. In case of a major war and of threats against the US territory, the president is expected to be flying in a “doomsday plane,” an aircraft with the sole purpose of keeping the president in the air, where he is presumably difficult to locate and hence safe.

The second part is that an army can counter the risk of a decapitation strike by abandoning the typically rigid structure of armies and adopting a flexible one, with small units able to continue fighting even if they lose contact with their command center. It is a way of fighting that was pioneered by Erwin Rommel on the Italian front during the First World War. A recent example of this kind of resilience in an armed conflict is the 2006 confrontation between Israel and Hezbollah in Lebanon, where Hezbollah successfully applied this strategy.

The concept of inducing a collapse in the enemy army is a way to improve the effectiveness of warfare while at the same time reducing the cost and violence of a conflict, but it remains embedded in the conventional views of wars fought by armies. Nowadays, the very idea of conventional armies may be obsolete. War is becoming more and more embedded in the structure of society, taking different shapes under the general concept of “hybrid war.” Modern armies are part of a network that includes the economic, social, political, and religious structure of a whole country. Attacking or weakening this larger network may lead it to collapse and, even though the army may maintain its fighting capabilities, it becomes useless without a country to support it.

It is an idea that runs along the lines of the extermination proposal put forward by Douhet, but it is more sophisticated: a hybrid war is not about exterminating civilians, at least not directly. It is about weakening the economic and social structure of an enemy country, if possible causing its collapse, so that it cannot support a war effort anymore. A good example is the fall of the Soviet Union in 1991. The Red Army was not defeated, not even attacked, and at the moment of the fall it maintained most of its fighting capability. But there was no longer a government able to pay the salaries of soldiers and officers. So, the army went through a Seneca collapse and dissolved.

Economic warfare is a common component of hybrid warfare. It may take different shapes: in its most brutal form, it simply consists in starving the enemy population, to death if necessary. There are many examples of this strategy being applied in ancient times. We have a poignant example in the description of the siege of Jerusalem in 70 CE by Flavius Josephus, where he tells us of such graphic details as mothers eating their children. In modern times, we may remember how, in 2018, the US Secretary of State, Rex Tillerson, declared that the economic sanctions imposed on North Korea since 2006 were effective, citing the evidence of deaths caused by starvation in the country [30].

A specific variant of economic warfare is “energy warfare,” which consists in starving an enemy country not of food but of energy. It may have been tried for the first time by the Allies with their 1943 attack on German dams, “Operation Chastise,” carried out using a purpose-built “bouncing bomb” designed to skim over the surface of the German hydroelectric basins before detonating against the dam wall. The attack was successful in the sense that it caused considerable damage to German dams, but it had little long-term effect and it cost the Allies 40% of the attacking aircraft.

Another case was the Israeli air strike carried out on 7 June 1981, which destroyed an Iraqi nuclear reactor southeast of Baghdad—the plant was still under construction and held no nuclear material. Later on, the Iraqis targeted an Iranian nuclear reactor in Bushehr in 1987. Neither strike had a significant military effect. Then, the 1999 NATO bombing campaign against Yugoslavia saw attacks specifically directed against power plants. During the early phases of the campaign, NATO planes used a special “soft bomb,” or “graphite bomb,” specifically created to emit a cloud of graphite filaments that short-circuit the connections of power plants [31]. The Western press reported that these bombs disabled about 70% of the Serbian electric grid. The Serbians admitted that they experienced blackouts, but also claimed that they were able to restore power in a short time and that the effect of the attacks was negligible. We do not seem to have a reliable assessment of the actual results of the attacks and, in any case, after that first attack, NATO did not use graphite bombs anymore, preferring conventional weapons directed against power plants and transformer stations. None of these attacks succeeded in forcing Serbia to surrender and, so far, the idea of targeting the energy network of a whole country has never been very effective. But, if it were to succeed on a large scale, the consequences of leaving a whole country without power for a long time would be so devastating as to be nearly inconceivable, a Seneca Collapse that nobody would ever want to see.

Overall, the simplest way to cause economic damage to an enemy population is by means of economic sanctions. They may be a very powerful weapon that can starve whole countries although, in modern times, it seems that sanctions are rarely carried to their extreme consequences. For instance, the economic embargo against Iraq after the first Gulf War in 1991 was relaxed to allow Iraq to export oil in order to import food and avoid mass starvation of its population.

In general, the idea at the basis of all hybrid war methods is that the targeted civilian population should not be exterminated, but rather become discouraged and cease to support the war effort. In history, that turned out to be difficult and often counterproductive. Starved or bombed people will normally direct their hate toward those who are starving or bombing them, not necessarily against their own government, no matter how oppressive and dictatorial it is. If you want an example of how economic sanctions may misfire, consider the case of the international sanctions against Italy imposed by the League of Nations in 1935–1936 [32], after Italy had invaded Ethiopia. The sanctions generated strong nationalistic feelings in the country and reinforced the grip of the Fascist Party on the government. Later on, when Britain enforced a coal embargo against Italy, the result was that Germany became the main supplier of coal to Italy, and that led Italy to join Germany during WW2 [33]. Embargoes seem to normally achieve exactly the opposite effect of what they are said to be enacted for. Or, possibly, this is exactly what they are enacted for: to force a country to go to war even in unfavorable conditions.

So, it seems that if we want to cause the collapse of an enemy without the need of conventional warfare, we need something subtler and more effective than bombs or economic sanctions: we need to convince the population of the target country that their enemy is their own government. This is the basis of the subset of hybrid warfare known as “psyops” (psychological operations). It is a way of waging war that mainly relies on propaganda, but with a few extra twists. Normally, propaganda takes a reactive approach, trying to influence people’s perception of reality by means of three cardinal techniques: obfuscation (denying or hiding information), saturation (distracting the targets by means of irrelevant information) and spin (presenting information in a form favorable to a certain interpretation) [34]. Acting along these lines, propaganda is a consensus-building technology used mainly as a tool for reinforcing national cohesion. That is often obtained by developing hate against some political, ethnic, or religious enemy.

Psyops use some of the typical techniques of propaganda, but they are more aggressive and tend to be proactive in stimulating some kind of action. They are probably best described by a quote attributed to an “aide of the Bush administration” at the time of the 2003 invasion of Iraq, in a 2004 article by Ron Suskind in The New York Times [35]. The quote is often attributed to Karl Rove, although Rove himself denied being the author. It is worth quoting in full:

The aide said that guys like me were “in what we call the reality-based community,” which he defined as people who “believe that solutions emerge from your judicious study of discernible reality.” I nodded and murmured something about enlightenment principles and empiricism. He cut me off. “That’s not the way the world really works anymore,” he continued. “We’re an empire now, and when we act, we create our own reality. And while you’re studying that reality – judiciously, as you will – we’ll act again, creating other new realities, which you can study too, and that’s how things will sort out. We’re history’s actors . . . and you, all of you, will be left to just study what we do.”

You see here the basic aggressive tenets of psyops: the idea is not just to distort reality, as propaganda does. It is to transform reality into something that is one’s own creation. The masterpiece of psyops in recent times has been the creation of the alleged “Weapons of Mass Destruction” that the government of Iraq was said to stockpile somewhere within the country. It was to those non-existent weapons that the author of the quote, whoever he was, was referring when he spoke about “creating reality.”

Psyops may also go trans-national and directly target the social and political system of a foreign country. This is a very innovative concept: until recently, propaganda had been linked to shared cultural memes in the country where it originated. For instance, during WW2, it was not difficult to convince Americans to hate the “Japs,” variously described as evil and monkey-like, but the same techniques would hardly have worked in Japan. Perhaps the first example of a transnational psyop was the case of Mata Hari, the Dutch dancer who was accused of espionage and shot by the French in 1917. Not all the details of this story are known, but it seems clear that Mata Hari was not a spy: the case may have been created by the German secret service to compensate for the blunder they had made in 1915, when they had shot a British nurse, Edith Cavell, under a similar accusation. The Allies had amply exploited the Cavell case to paint the Germans as evil barbarians, and the Germans may have just tried to reciprocate [36]. It did not work very well: Mata Hari was amply vilified as an evil femme fatale by the French press, and her execution did not generate the international indignation that Edith Cavell’s had. At that time, psyops were not yet as sophisticated as they are today.

In more recent times, it has been said that the fall of the pro-Russian Ukrainian government in 2014 was the result of a psyop created by the Western powers in order to bring Ukraine within the Western sphere of influence. That operation, “Euromaidan,” followed the earlier “Orange Revolution” of 2004 and was just one of the several “color revolutions” taking place in various locations in the world during the past two decades or so, in particular in former Soviet countries; Wikipedia has a list of 23 of them. Some were successful, such as those in Ukraine; others have been complete failures, such as the “Violet Revolution” of 2009, aimed at bringing down the prime minister of Italy, Silvio Berlusconi. There is no proof that they were all psyops controlled by foreign powers, but it is possible that at least some were.

Overall, color revolutions seem to be out of fashion today, replaced by more sophisticated Web-based operations. The alleged collusion of Donald Trump’s campaign with the Russian secret services to influence the 2016 US presidential election is an example of a possible Web-based psyop. In 2019, the Special Counsel investigation (also referred to as the Mueller probe or the Mueller investigation) did not establish that such collusion had taken place, but it is a safe inference that governments all over the world are involved in trying to affect the policies of other countries. Those who control the Web control the whole world and, at present, the Web seems to be a battlefield where all players in the international arena are engaged in a gigantic struggle.

Psyops do not involve just people wearing colored T-shirts or trolling the internet under false identities. They may include targeted assassinations of enemy leaders, false flag operations, terrorism, and more dark and dire things directed against the enemy’s government. There is little doubt that psyops have a bright future. The results of the ongoing struggle are uncertain but, at least so far, its Web-based side does not involve human casualties. It is a true “battle of memes,” which appear, grow, and then collapse in cyberspace. Where this line of conflict will take us is impossible to say: maybe virtual battles will reduce real violence, or maybe the havoc they wreak will make it worse. As usual, the future cannot be predicted: we need to wait until it becomes the present.

In military matters, there may also exist an “anti-Seneca” strategy. It consists in disregarding Sun Tzu’s principle of minimum effort in warfare and aiming instead at continuing the war all the way to the complete military defeat, or even the annihilation, of the enemy. Such a plan could be based on ideological, political, or religious considerations that lead one or both sides to believe that the very existence of the other is a deadly threat that must be removed by force. In past times, religious hatred led to the extermination of entire populations, and there is a rather well-known statement that may have been pronounced at the fall of the city of Béziers, in Southern France, in 1209. It is said that the Papal legate who was with the attacking Catholic troops was asked what to do with the citizens of Béziers, among whom there surely were Catholics as well as Albigensian heretics. The answer was “Kill them all, God will know His own.” That war, just like most modern wars, was an “identity war,” in which the enemy is seen not just as an adversary, but as an evil entity to be destroyed. These wars tend to be brutal and carried on all the way to the total extermination of the losing side. In some cases, wars may also be prolonged because they are good business for some people and companies on both sides.

A possible recent case of this kind of “anti-Seneca” strategy may be found in the campaign started in the US in 1914 to provide food for Belgium during the First World War. The campaign is normally described as a great humanitarian success but, in their recent book Prolonging the Agony (2018) [37], Docherty and Macgregor suggest that the relief effort was just the facade for the real task of the operation: supplying food to Germany so that the German army could continue fighting until it was completely destroyed. This is mainly speculation; nevertheless, Belgium was occupied by the German army at the time, so it could be expected that at least part of the food sent there would end up in German hands. It is also true that, at the time of the campaign, the US was not at war with Germany, so the operation can be described simply as a lucrative business for American farmers who found a way to sell food to Germany in this rather indirect way.

Something more ominous took place during the Second World War. By September 1943, after the surrender of Italy, it must have been clear to everybody on both sides that the Allies had won the war; it was only a question of time for them to finish the job. So, what could have prevented the German government from following the example of Italy and surrendering, perhaps ousting Hitler as the Italian government had done with Mussolini? We do not know whether some members of the German leadership considered this strategy, but it seems clear that the Allies did not encourage them. One month after Italy surrendered, in October 1943, Roosevelt, Churchill, and Stalin signed a document known as the “Moscow Declaration” [38]. Among other things, it stated that:

At the time of granting of any armistice to any government which may be set up in Germany, those German officers and men and members of the Nazi party who have been responsible for or have taken a consenting part in the above atrocities, massacres and executions will be sent back to the countries in which their abominable deeds were done … and judged on the spot by the peoples whom they have outraged.

… most assuredly the three Allied powers will pursue them to the uttermost ends of the earth and will deliver them to their accusers in order that justice may be done. … [otherwise] they will be punished by joint decision of the government of the Allies.

What was the purpose of broadcasting this document, which threatened the extermination of the German leadership, knowing that it would be read by the Germans, too? The Allies seemed to want to make sure that the German leaders understood that there was no space for them to negotiate an armistice. The only way out left to the German military was to take the situation into their own hands and get rid of the leaders that the Allies had vowed to punish. That was probably the reason for the assassination attempt carried out against Adolf Hitler on July 20, 1944. It failed, and we will never know whether it would have shortened the war.

Perhaps as a reaction to the attempted assassination of Hitler, on September 21, 1944, the Allies made public a plan for post-war Germany that had been approved by the British and American governments [39]. The plan, known as the “Morgenthau Plan,” was proposed by Henry Morgenthau Jr., Secretary of the Treasury of the United States. Among other things, it called for the complete destruction of Germany’s industrial infrastructure and the transformation of Germany into a purely agricultural society at a nearly medieval technological level. If carried out as stated, the plan would have killed millions of Germans, since German agriculture alone would have been unable to sustain the German population.

Unlike the Moscow Declaration, which aimed at punishing the German leaders, the Morgenthau Plan called for the punishment of the whole German population. Again, its proponents must have been aware that the plan was visible to the Germans and that the German government would use it as a propaganda tool. President Roosevelt’s son-in-law, Lt. Colonel John Boettiger, stated that the Morgenthau Plan was “worth thirty divisions to the Germans” [39]. The general upheaval against the plan among the US leadership led President Roosevelt to disavow it, but it may have been one of the reasons that led the Germans to fight to the bitter end.

So, what was the idea behind the Morgenthau Plan? As you may imagine, the story generated a number of conspiracy theories. One of these theories proposes that the plan was not conceived by Morgenthau himself, but by his assistant secretary, Harry Dexter White [40]. After the war, White was accused of being a Soviet spy on the basis of the Venona project, a US counterintelligence effort started during WW2 [41] that was the prelude to the well-known “witch hunts” carried out by Senator Joseph McCarthy in the 1950s. According to a later interpretation [40], White had acted under instructions from Stalin himself, who wanted the Germans to suffer under the Allied occupation so much that they would welcome a Soviet intervention. It goes without saying that this is just speculation but, since this chapter deals with the evil side of collapse, the story fits very well in it.

In the end, there is no evidence that the Morgenthau Plan was conceived by evil people gathering in secret in a smoke-filled room. Rather, it has a certain logic if examined from the point of view of the people engaged in the war effort against Germany in the 1940s. They had seen Germany rebuild its army and restart its effort to conquer Europe just 20 years after it had been defeated, in 1918, in a way that had seemed final. It is not surprising that they wanted to make sure that it could not happen again. But, according to their experience, defeating Germany was not sufficient to obtain that result: no peace treaty, no matter how harsh on the losers, could obtain it. The only way to put the German ambitions of conquest to rest forever was the complete destruction of the German armed forces and the occupation of all of Germany. For this, the German forces had to fight like cornered rats and be exterminated. And it seems reasonable that if you want a rat to fight in that way, you have to corner it first. The Morgenthau Plan left the Germans no hope except in a desperate fight to the last man.

We do not know whether the people who conceived the plan saw it in these terms. The documents we have seem to indicate that there was a strong feeling within the American government during the war about the need to punish Germany and the Germans, as described, for instance, in Beschloss’s book The Conquerors [39]. Whatever the case, fortunately, the Morgenthau Plan was never officially adopted and, starting in 1947, the US changed its focus from destroying Germany to rebuilding it by means of the Marshall Plan.

There have been other cases of wars in which there was no attempt to apply the wise strategy proposed by Sun Tzu, who suggests always leaving the enemy a way to escape. Nowadays, wars seem to be becoming more and more polarized, just like the political debate, and that makes them more destructive: once a war has started, the only way to conclude it seems to be the complete collapse of the enemy and the extermination of its leaders. The laughter of Hillary Clinton, then US Secretary of State, at the news of the death of the leader of Libya, Muammar Gaddafi, in 2011 is a case in point of how brutal and cruel these confrontations have become. It is hard to see how the trend could be reversed until the international system of interaction among states that created it collapses. At least, it should be clear that the anti-Seneca strategy is an especially inefficient way to win wars.

To conclude this section on the evil aspects of the Seneca Cliff, we may examine the subject of deception and betrayal as tools to avoid ruin. Lying is surely a very ancient art: can it be used to trigger the collapse of an enemy or of a competitor? On this point, there exists a paradigmatic story: that of the two unarmed men who find themselves facing a hungry lion, somewhere in Africa. While one of the two calmly starts putting on his running shoes, the other asks him, “Why are you doing that? Don’t you know that the lion can outrun you even if you wear those shoes?” The first man answers, “I don’t need to run faster than the lion, I just need to run faster than you.”

This story is one of the many narrative versions of the concept that, in some conditions, one person’s gain may be optimized by ensuring another person’s loss, and that may involve deception and betrayal. In studies of human behavior, collaboration is often the focus [42], but there also exists a scientific literature about betrayal. Much of this work has been done on the basis of case studies, see for instance the book Betrayal and Betrayers by Malin Akerstrom [43]. Another well-known method is that of operational games, where betrayal is studied in the framework of optimizing the payoff for players in different situations. In this field, you find the “Dictator Game,” the “Ultimatum Game,” and the “Trust Game,” all part of the field known as game theory, originally developed by such figures as John Nash and John von Neumann (see, for instance, the book by Myerson, Game Theory [44]). Then, of course, betrayal plays a fundamental role in many competitive boardgames, with perhaps the oldest example being Diplomacy, a strategy game created by Allan B. Calhamer in the 1950s. In Diplomacy, just as in many strategy boardgames, players take the role of leaders engaged in a struggle for local or world dominance.

The field of game theory, and of boardgames as well, is vast, but we can limit it to those decisions that affect the possibility of a collapse. In other words, when is it convenient to betray someone in order to minimize or avoid one’s own collapse? A good example is the well-known “prisoner’s dilemma” [45]. This is the way it was described by Poundstone in 1992 [46]:

Two members of a criminal gang are arrested and imprisoned. Each prisoner is in solitary confinement with no means of communicating with the other. The prosecutors lack sufficient evidence to convict the pair on the principal charge, but they have enough to convict both on a lesser charge. Simultaneously, the prosecutors offer each prisoner a bargain. Each prisoner is given the opportunity either to betray the other by testifying that the other committed the crime, or to cooperate with the other by remaining silent.

In the game, betrayal brings a benefit to one of the players only if the other player decides to cooperate. If both defect, they both suffer heavy penalties. And if both cooperate by not betraying the other, they suffer only minor penalties. In principle, the best overall outcome is obtained when the players cooperate with each other, but neither can know what the other will do, and each may be tempted to defect, hoping that the other will be naive enough to cooperate.

The iterated prisoner’s dilemma has no single optimal strategy. Empirical studies show that the simple strategy called “tit for tat” is the one that performs best when the game is played several times with the same players: each player cooperates or defects according to what the other player did in the previous round. In this version, the behavior of a player is based on what he perceives to be the reputation of the other. But there is no guarantee that this strategy will always bring a benefit to those who adopt it. Besides, what should one do when playing against someone whose reputation is not known? So, the game reflects the complexity and unpredictability of the real world.
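To make the mechanics concrete, here is a minimal sketch of the iterated game in Python. The payoff values are the classic illustrative ones used in the game-theory literature (temptation 5, reward 3, punishment 1, sucker’s payoff 0); they are assumptions for the example, not figures taken from [45] or [46].

```python
# Minimal sketch of the iterated prisoner's dilemma with classic illustrative payoffs.
# "C" = cooperate (stay silent), "D" = defect (betray the other prisoner).

PAYOFF = {  # (my move, other's move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(my_history, other_history):
    # Cooperate first, then copy whatever the opponent did in the previous round.
    return "C" if not other_history else other_history[-1]

def always_defect(my_history, other_history):
    return "D"

def always_cooperate(my_history, other_history):
    return "C"

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

for opponent in (always_cooperate, always_defect, tit_for_tat):
    print("tit_for_tat vs", opponent.__name__, "->", play(tit_for_tat, opponent))
```

Against an unconditional defector, tit for tat loses only the first round and then matches defection; against a cooperator, it cooperates throughout. That robustness is what made it the surprise winner of Axelrod’s famous tournaments.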

The prisoner’s dilemma involves betrayal, but no deception: the players do not lie to each other. Something similar takes place in the story of the lion and the two men: it involves no deception either. On the basis of the known data, each player calculates the odds of two possible strategies: fighting the lion together or running away. There is no real “game” here, since there exists an obvious optimal strategy: the man who believes he is faster runs away alone, leaving the slower man to face his personal Seneca Cliff in the form of a hungry lion. But, in real life, deception is often a fundamental element of the interaction among human beings.

We may inject deception into the rules of these games. In the story of the lion and the two men, what if only one of the two knows that the lion is coming? This is a version of the game that I called “the camper’s dilemma” in 2017 [47]. I described it in terms of a bear threatening two unarmed campers, but the story is the same when it involves a lion or any other dangerous creature. The gist of the game is to decide what the best strategy for survival is when one of the players discovers that a hungry lion, or bear, is near. Is it better to try to survive alone or to cooperate with the other camper? It depends on the situation. Imagine that you saw the bear while you were searching for berries and the other camper was near the tent. What you do depends on how serious the threat is, or is perceived to be. Maybe the bear you saw was far away, or maybe it was a small bear, not likely to attack two human beings who fight together. Then, the best strategy is collaboration.

But what if the bear is near and it is a grizzly, so big that you have no hope of surviving a fight, not even if you join forces with your fellow camper? In this case, your best chance of survival is deception. You tell your friend that you will take a walk to collect strawberries and, as soon as you are out of sight, you start running. Your friend will do the same when the grizzly appears, but you have a good advantage and you may be able to survive this mini-stampede.
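As a toy illustration of this reasoning, here is a small numerical sketch. The survival probabilities are invented for the purpose of the example and are not taken from [47]; the point is only that the informed camper’s best move flips from cooperation to deception as the threat grows.

```python
# "Camper's dilemma" sketch with invented survival probabilities (illustrative only).
# The informed camper can either warn the other and fight together, or quietly run first.

scenarios = {
    "small bear, far away": {"fight_together": 0.95, "run_first": 0.70},
    "big grizzly, close":   {"fight_together": 0.10, "run_first": 0.80},
}

for name, p in scenarios.items():
    best = "cooperate (warn and fight)" if p["fight_together"] >= p["run_first"] \
        else "deceive (run first)"
    print(f"{name}: fight together = {p['fight_together']:.2f}, "
          f"run first = {p['run_first']:.2f} -> best choice for the informed camper: {best}")
```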

The “camper’s dilemma” game shows that there are situations in which asymmetric knowledge makes betrayal convenient when facing a potential catastrophe. It is a condition that may well apply to real-world situations. Let me give you an example: in 2017, a piece appeared in The Guardian [48] titled “‘We need development’: Maldives switches focus from climate threat to mass tourism.”

This week the Maldives, under new president Abdulla Yameen, apparently changed environmental tack, saying that mass tourism and mega-developments rather than solar power and carbon neutrality would enable it to adapt itself to climate change and give its young population hope for the future.

Fears of immediate sea level rise, which scientists said in the latest IPCC report was accelerating and could mean 75% of the Maldives being under water by 2100, were unfounded, Adam said. “It is not going to happen next year. We have immediate needs. Development must go on, jobs are needed, we have the same aspirations as people in the US or Europe.”

At first impression, these declarations sound like pure madness. The Maldives are islands built on top of coral reefs, rising no more than a couple of meters above the sea on average. So far, they have been able to survive a sea level rise on the order of centimeters and there is no evidence that they are at immediate risk of sinking [49]. But the rate of sea level rise is accelerating [50], and for how long will the coral islands be able to cope? Nobody can say for sure, but they may well succumb in a not-so-remote future since, as far as we know, the islands have never experienced the kind of rapid sea level change that global warming is going to generate [51]. And there is no need for the islands to be completely submerged for their inhabitants to suffer great damage. Coral islands are a very bad place to experience floods: there is no high ground to take refuge on.

So, there are good reasons for the people living on these islands to be worried, but the Maldivian government does not seem to care because it plans to build a “Riviera-style super-resort with sea sports, six star hotels, high-end housing and several new airports,” and “Plans to increase tourism from 1.3 million people a year to more than seven million within 10 years.” Is this a case described by the proverb “Whom the Gods would destroy they first make mad”?

The Maldives are not the only archipelago where the local leaders have decided that the threat of global warming should be ignored. Something similar is going on in Kiribati, another archipelago of coral islands in the Pacific Ocean. According to an article that appeared on CBS News [52] in November 2017, the Kiribati government,

… proclaims the goal of promoting tourism by attracting foreign investors to develop “5-star eco-friendly resorts that would promote world-class diving, fishing and surfing experiences” on currently uninhabited islands. It says the nation’s 20-year plan “has an ambitious aim to transform Kiribati into the Dubai or Singapore of the Pacific.”

I am sure that the events taking place in the Maldives and in Kiribati remind you of the similar political reversal on climate policy that occurred in the United States in 2016, even though the US is under no threat of being swamped by the waves. More recently, a similar evolution took place in Brazil with Jair Bolsonaro, who took office as president in 2019. Among other things, the new president threatened to have Brazil quit the Paris Agreement, just as the US did under President Trump.

Why do people start denying the threat as it comes closer? There may be deep psychological reasons for that, but I might propose a different interpretation. It has to do with the fact that, while at the individual level you can only deceive yourself when facing the Seneca Cliff, at the collective and political level you have the possibility of deceiving someone else and, if you are a member of the elite, you may decide to deceive the commoners in order to save yourself.

Here is a recent historical example of the elites deceiving the commoners. In 1943, during the Second World War, the Italian high command had been negotiating the surrender of Italy to the Allies for months in complete secrecy. Up to the last moment, the official truth was that there would be no surrender and that the superior fighting spirit of the Italian people would triumph, no matter how superior the Allies were in terms of materiel and manpower. Then, when the surrender was made public, on September 8, 1943, the King of Italy and the top generals saved themselves by taking refuge with the Allies while the army was left to be “eaten by the lion,” in this case the German army.

Now, let us go back to the cases of the Maldivian and Kiribati archipelagos. Imagine that you are part of the elite of the islands and that you are smart enough to understand what is going on with the Earth’s climate. You know that it is unlikely, to say the least, that the people of the rich world would give up their shiny SUVs for the sake of a bunch of wretches living on some remote islands. So, what is the rational thing for you to do? Of course, it is to sell what you have and then say goodbye to those who remain. That implies, of course, that you should not tell anyone that you fear that the islands will sink. On the contrary, you must prepare grand plans of development as if you were sure that the islands will stay afloat forever. Then, when things start going bad, you have a chance to leave and join your bank accounts on the mainland. The poor will be stuck where they are: for them, the Seneca Cliff ends underwater.

The cases of the small islands are not isolated, only more evident than others. Look at what Donald Trump is doing: he downplays climate change in favor of economic development, just as the Kiribati and Maldives governments are doing. If the US elites have decided that there is no hope of saving everyone, the logical thing for them to do is to move into “cheating mode” and let most people die, not just from sea level rise, but from the starvation, sickness, and other consequences of climate change. That gives them the time to prepare, accumulating resources for the coming emergency. Unfortunately, this particular strategy for dealing with complex systems under stress has a perverse logic and, if this interpretation is correct, the elites of most of the developed world will soon follow suit in denying climate change. We just have to wait and see.

Avoiding Overexploitation. Drill, Baby, Drill!

Fig. 4.4
figure 4

(https://www.publicdomainpictures.net/en/viewimage.php?image=177469&picture=oil-pump-jack)

A pumping jack in an oil field

In 2008, Sarah Palin, then the Republican candidate for the vice-presidency, engaged in a TV debate with her Democratic opponent, Joe Biden. The debate touched on the question of climate change and energy resources, with Biden stating [53]:

Now, let’s look at the facts. We have 3 percent of the world’s oil reserves. We consume 25 percent of the oil in the world. John McCain has voted 20 times in the last decade-and-a-half against funding alternative energy sources, clean energy sources, wind, solar, biofuels.

Politicians like to state that they care about facts, except that what they call facts are often more their interpretation of reality than actual reality. But, in this case, Biden was reporting reasonably correct data for 2008 when the “shale boom” of oil production in the US had barely started.

And here is how Sarah Palin answered:

The chant is ‘drill, baby, drill.’ And that’s what we hear all across this country in our rallies because people are so hungry for those domestic sources of energy to be tapped into. They know that even in my own energy-producing state we have billions of barrels of oil and hundreds of trillions of cubic feet of clean, green natural gas.

Sarah Palin provided no facts; rather, she spoke about a “chant,” “drill, baby, drill,” a magic spell, an enchantment, an exorcism. In terms of facts, she provided only vague estimates couched in resounding words: “billions of barrels” and “trillions of cubic feet.”

This is the way politics works: using magic rather than facts to convince people. It is all part of an ongoing trend: over the past decades the political discourse has become more emotional and less fact-based, pivoting on the capability of the big man at the top (rarely the big woman) to sound convinced and reassuring. It is a trend described in a recent paper by Jordan et al. [54] as follows:

Across multiple corpora from the American presidents, non-US leaders, and legislative bodies spanning decades, there has been a general decline in analytic thinking and a rise in confidence in most political contexts, with the largest and most consistent changes found in the American presidency.

The McCain/Palin ticket was defeated by the Obama/Biden ticket in 2008, but that did little to change the fact that Palin’s proposal to drill more overcame Biden’s idea of moving to renewables. In politics, one of the main rules for success is “all the changes you propose must have the purpose of avoiding change.” Biden was proposing to move to clean energy: that meant real change, and that is a no-no in politics. Palin was proposing no change at all, except maybe chanting some mantra all together. That was a winning strategy in political terms. Fortunately for the Obama/Biden team, climate change and energy remained marginal themes in the debate.

The idea of drilling more was already in motion before the 2008 election and it progressively gained ground. The financial world provided the resources for the industry to engage in a major effort to extract more oil, and that could be done by exploiting shale deposits. Shale oil is contained in pores of the rock matrix that are not interconnected with each other, so that the gas or the liquid cannot spontaneously flow to the surface once the rock is drilled. To get the oil, it is necessary to create a path for it to flow by fracturing the rock (“fracking,” as it became fashionable to say in recent times). In the old times of the petroleum industry, it is said that this could be done by throwing a lighted dynamite stick into the borehole; nowadays, it is done by injecting high-pressure fluids into the rock. That does not change the basic idea so much, although the dynamite stick was probably more spectacular.

Despite the complexity and the high cost of fracking, in a few years the US oil industry managed to reverse the declining production trend that had been ongoing since the 1970s. In the 2010s, drilling increasingly became the accepted wisdom, while renewable energy gradually went out of fashion or was relegated to the margins of the debate and most politicians engaged in new magic slogans such as “clean coal” and “green growth.” The “drill, baby, drill” chant triumphed and the oil depletion problem seemed to have been pushed into a future so remote that nobody would have to worry about it anymore. In time, Palin’s 2008 chant of “drill, baby, drill” was transmogrified into what is today called “energy dominance,” another magic slogan, used for the first time by Donald Trump in 2017. An interesting concept: it is as if you could dominate your neighbors by burning your house faster than they are burning theirs. But never mind the logic of that: aren’t we dealing with magic?

Extracting shale oil may be described as “magic” by politicians, but surely it is a complex and expensive technology. To give you some idea of the difficulties involved, note how a recent article from China by Stephen Chen [55] reports that nuclear weapon technologies could be used to mobilize hydrocarbons trapped in shale deposits. Not that the plan is to detonate nuclear warheads for that purpose: the device described in the article is an “energy rod” able to create shock waves that fracture the underground rock. Apart from sounding a little like the staff of Gandalf the White in Tolkien’s trilogy, it seems to be an especially expensive and complicated variant of the old idea of dropping dynamite sticks into the borehole. Given the costs and the difficulties involved, we cannot say for how long the shale boom will last. What we can say is that, so far, the shale industry has not provided much of a profit for investors [56]. So, for how long can the industry keep going like that? The Seneca Cliff for the shale oil industry may not be far away in the future. In politics, magic always wins against reality—but only for a while.

The Palin versus Biden debate is a good starting point to discuss a very general question: how should we manage the Earth’s natural resources? Can we really keep growing forever, as most politicians seem to imply? Or do we face the Seneca Cliff for our whole civilization when we start truly running out of the resources that created it?

All natural resources are scarce by definition: if they were not, they would come for free. This is why you do not pay for the oxygen you breathe nor for the sunlight coming through the window (so far, at least). But oil, gas, gold, whales, grain, and caribou are all examples of limited resources, a well-known concept in economics. Economists normally agree on a concept called “general equilibrium theory,” which implies that if demand exceeds production, prices will rise, reducing demand and/or generating new investments that will increase production. In both cases, equilibrium will be restored. The opposite will take place if production exceeds demand.

These concepts are considered proven within the assumptions at the basis of modern economics, but are they true in the real world? Kate Raworth notes in her book Doughnut Economics (2017) how the early economists banked on Newton’s prestige to make economic “laws” look like physical laws, similar to the laws governing the motion of planets. Raworth remarks (p. 135):

One thing that’s clearly coming to an end is the credibility of general equilibrium economics. Its metaphors and models were devised to mimic Newtonian mechanics, but the pendulum of prices, the market mechanisms, and the reliable return to rest are simply not suited to understanding the economy’s behavior. Why not? It is just the wrong kind of science.

Raworth means that Newtonian mechanics is perfectly suitable for describing the motion of bodies in a gravitational field, an approach that naturally leads to a condition of equilibrium. But the economic system is not in equilibrium. It may be in homeostasis, a condition that may look like equilibrium but is a completely different concept. The market is well known to go through cycles of growth and decline, and prices normally oscillate, sometimes wildly: something that equilibrium physics cannot describe. Physics and economics stand to each other a little like chess and paintball: both are games simulating real battles, but with very different rules.
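A standard textbook illustration of why a market need not settle quietly into equilibrium is the “cobweb” model, in which producers decide today’s supply on the basis of yesterday’s price. The sketch below uses made-up linear supply and demand curves (the parameters are mine, chosen only for illustration): depending on the relative slopes, the price converges, oscillates, or runs away.

```python
# Cobweb model sketch (illustrative parameters): supply reacts to last period's price.
# Demand:  q_d = a - b * p        (buyers respond to the current price)
# Supply:  q_s = c + d * p_prev   (producers respond to last period's price)
# Market clearing each period gives:  p = (a - c - d * p_prev) / b

def cobweb(a=10.0, b=1.0, c=0.0, d=0.8, p0=4.0, steps=10):
    prices = [p0]
    for _ in range(steps):
        prices.append((a - c - d * prices[-1]) / b)
    return [round(p, 2) for p in prices]

# The ratio d/b decides the fate of the "pendulum of prices":
print(cobweb(d=0.8))   # d/b < 1: damped oscillation toward equilibrium
print(cobweb(d=1.2))   # d/b > 1: the oscillation grows instead of settling down
```

The point is not that real markets follow this toy model, but that even a small delay in the feedback loop is enough to turn the “reliable return to rest” into a persistent oscillation.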

The problem is most evident when we discuss non-renewable resources. When Sarah Palin was promoting her “drill, baby, drill” chant, she meant that every oil company should strive to maximize both production and profits. But if oil is a non-renewable resource, then drilling more will only make it run out faster, although operators may be able to enjoy the short-lived abundance. The reason why depletion was neglected in the debate is due in large part to the human tendency to discount the future, in other words to think that an egg today is better than a chicken tomorrow. This is a big problem and it seems that, for most people, events expected to occur more than about five years in the future are simply not considered important.
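A quick back-of-the-envelope illustration of what discounting does (the 10% rate is an arbitrary choice, not a figure from the literature): at that rate, a benefit arriving in twenty years is worth less than a sixth of the same benefit received today.

```python
# Exponential discounting: present value of one unit received t years from now,
# at an arbitrary, purely illustrative discount rate r.
r = 0.10
for t in (1, 5, 10, 20, 50):
    print(f"{t:2d} years ahead -> worth {1 / (1 + r) ** t:.3f} today")
# At 10% per year, anything a few decades away is worth almost nothing today,
# which is one way to read why depletion disappears from the political debate.
```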

Nevertheless, economists do not just tell people, “eat your egg as long as you have it.” On the contrary, already about a century ago economists started thinking about the problem of depletion. The basic idea, which seems to be still current in this field, is that the efficiency of the market in allocating scarce resources should also take care of optimizing the exploitation of non-renewable ones. So, as producers deplete a stock, the all-knowing market will perceive the increasing scarcity and react by increasing the price of the product. That allows producers to maintain their production despite the higher costs while they seek new resources, which could be of the same kind, but more expensive to produce, or completely different, possibly renewable, ones. According to a model developed for the first time by Harold Hotelling in 1931 [57], the result will be a smooth substitution of the depleted resource with a new one, called the “backstop resource.”
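Hotelling’s logic can be sketched in a few lines: the scarcity rent of the finite resource grows at the rate of interest, and when the price reaches the cost of the backstop, the economy switches over. The numbers below are illustrative assumptions of mine, not values from Hotelling’s paper [57].

```python
# Hotelling-style sketch (illustrative numbers): the scarcity rent of a finite
# resource grows at the interest rate; once its price reaches the cost of the
# backstop resource, the economy switches to the backstop.

r = 0.05               # interest rate
price = 20.0           # initial price of the finite resource
backstop_cost = 80.0   # assumed cost of the backstop (say, a renewable substitute)

year = 0
while price < backstop_cost:
    price *= 1 + r     # the resource price rises at the rate of interest
    year += 1
print(f"Switch to the backstop after about {year} years, at a price of {price:.1f}")
```

In this idealized world the transition is smooth by construction; whether real markets anticipate depletion in this way is another matter.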

You may object that it is an act of faith to assume that there will always be something available to replace a resource that has become too expensive to use. Indeed, in many cases the belief in the availability of replacements is built on a rather naive faith in technological progress. But it is also true that many scarce resources can be replaced with less scarce ones. Over the past few centuries, coal replaced wood, oil replaced coal, and natural gas may be replacing oil. And we can replace copper with aluminum, zinc with titanium, plastic with bio-plastic, and so on.

This line of reasoning has led to some overoptimistic assessments in the past, such as the “principle of infinite substitutability” proposed in 1978 by Goeller and Weinberg [58], mainly based on what appeared to be the promise of cheap and abundant energy obtainable from nuclear power. We tend to be less optimistic nowadays, but it is also true that physical scarcity, in itself, is not an unsolvable problem: replacement, recycling, efficiency, and restructuring are all strategies that can be used to fight the depletion of mineral resources. After all, humans can hardly mine themselves out: everything we extracted in the past has not disappeared, it is somewhere and will remain with us forever. So, nothing prevents us from using the same strategy that plants have used to “mine” the crust for hundreds of millions of years without ever running out of anything. How did they manage that? On the basis of three fundamental principles: (1) use only what is abundant, (2) use as little as possible, (3) recycle ferociously.

It worked for plants and it is still working for the whole biosphere, but could we do the same with our industrial system? Not easily, of course, but there are no physical reasons why it could not be done. Some people have a wrong understanding of the second law of thermodynamics and assume that, because entropy is supposed to always increase, it will never be possible to completely recycle minerals. But the second law holds only for isolated systems, and our planet is not one—that is why plants could manage to recycle everything for so long. The problem with recycling is not thermodynamics but cost, and it is hard to think that the deity called “free market” will perform the miracle for us with no pain involved. Moving to 100% recycling involves forsaking the “energy subsidy” that millions of years of sunlight and other forces have accumulated in mineral ores: we will have to pay the price for this energy ourselves, and that implies a complete rethinking of the way we extract, use, and recycle minerals. Such a change of attitude looks very unlikely considering that the government of the US seems to have fully embraced the idea that the way to deal with oil depletion is to extract what is left at the fastest possible speed and in the largest possible amount, without thinking—even vaguely—of the necessity of investing in a replacement for the future. We have a lot to learn in this field.

But what about renewable resources? In principle, we can keep producing biological resources—wood, grain, food, fiber, and more—as long as there is sunlight to power the photosynthesis process, can’t we? Unfortunately, we do have a problem of depletion also with renewable resources, a problem that can be even worse than that with the non-renewable ones. Human beings are so good at exploiting resources that they tend to destroy them, creating a scarcity that, in itself, would not need to exist.

It is a story that goes back to very ancient times. Think of how American Indians used to kill bison by pushing them down a cliff, making sure that not a single one survived, as told by Lewis and Clark in the report of their 1804–1806 expedition [59]. The idea that the best way to get a bison steak for dinner is to exterminate a whole herd does not seem to be the most efficient one, but this attitude may have been typical of our remote ancestors. Indeed, humans are often accused of having been the cause of the pulse of extinctions of “megafauna” (creatures weighing more than about 100 lbs, or 44 kg) observed around 10,000 years ago [60]. This is a controversial point and there are other possible causes for the ancient extinctions, but it is also true that we have direct historical evidence of how modern wasteful hunting practices led to the near—or total—extinction of large animals. If you read Melville’s Moby Dick, you surely noticed how 19th century whalers would kill whales for the spermaceti oil contained in their large heads, throwing away much of the rest except for a few chunks, such as when we read of the second mate, Stubb, eating a whale steak on the deck of the Pequod. From the age of whaling, things have not changed so much and we have not really learned how to manage the exploitation of marine creatures. Having nearly run out of several species of whales [61], we now risk running out of much smaller creatures, such as squid [62].

Why do people keep destroying the resources that make them live? Gandhi is reported to have said that “the Earth provides enough to satisfy every man’s need, but not every man’s greed.” This statement can be understood not as meaning that humans can expand their numbers forever but that an economic system based on greed will always create needs that the Earth will not be able to satisfy. Unfortunately, the idea that greed is good is enshrined in current economic thought and economists seem to have been slow in detecting the gaping hole at the basis of their views.

That is exactly where the problem lies: it is called “overshoot” and we saw its description in an earlier chapter of this book. The further you go into overshoot, the harder the “return” to a flow rate well below the carrying capacity of the system. Unfortunately, in a system where each operator simply maximizes the dissipation of the resources it uses (which is equivalent to maximizing the operators’ utility functions), nobody is in control except the abstract entity we may call “Greed.” It is like following Sarah Palin’s suggestion in the form “exploit, baby, exploit.” In all fields, everyone tries to maximize production and the result is a rollercoaster economy. And, at times, the rollercoaster may well crash into the ground, when a resource is exploited down to a level below its capability to rebuild itself. In biological systems, extinction is forever.

These problems are generally recognized nowadays, even though not always expressed in a form that takes into account the dynamic factors of overshoot and collapse. The way to solve them has normally been to emphasize individual commitment and goodwill. A good citizen, it is said, participates in the fight against climate change by consuming less and polluting less than what is imposed on him or her by law. It is a very common idea: there are few discussions on climate change and pollution that do not end with a brief list of recommendations, such as to use bikes, turn off the lights when one is not at home, buy groceries from local producers, use natural fibers, and the like. It is not even a new idea. The Stoics at the time of Seneca did the same: faced with a terrible dictatorial government they had no power to control, they emphasized personal virtue and, yes, “stoicism” against the unavoidable adversities.

But can individual goodwill avoid the overexploitation of natural resources? Despite all the efforts made up to now, it is hard to think that drinking your coke without using a plastic straw will do anything significant to solve our environmental problems. The problem is simple: a person’s restraint is another person’s opportunity. In other words, a person who is a good ecologist and decides to go to work by bike may simply free fuel resources that a less conscientious person may use to go to work in an SUV. It is similar to, but slightly different from, Jevons’ paradox. It is what I called the “hummingbird effect” [63]. The idea comes from the old story of a hummingbird trying to extinguish a giant forest fire while carrying just a drop of water in its beak. It is, of course, useless against the fire, but the hummingbird is very proud of what it is doing and, in the story, the little bird is praised for its willingness to do its duty against all odds. Humans, it seems, have a similar attitude: they tend to be very proud of some minor contribution against global warming, say not using plastic straws for their drinks, while using tons of fossil fuels for their summer vacations. Jean-Baptiste Comby described the problem in his 2015 book La question climatique (“The Climate Question”) [64]. He did not use the hummingbird analogy, but he argued that the climate question has been thoroughly depoliticized and consigned wholly to the realm of individual decisions: a way to make people feel good, but with little or no impact on the system.

It seems to be increasingly recognized, today, that individual actions are insufficient to solve the problems we are facing and to avoid the impending climate and depletion cliff. That is the reason for the appearance of political movements such as “Extinction Rebellion,” which emphasize collective action. A popular leader in this field has been the young Swedish activist Greta Thunberg. Her action is clearly framed in collective terms: her message rarely includes recommendations on individual actions such as “don’t take a plane if you can get there by train” (although she does that, too). She speaks to leaders, asking them to do something to ensure that the people of her generation will have a future. It is clear in her message that this action will carry a cost that most of us will have to pay. Will this message be heard, or will the environmental movement continue to toy with double-pane windows?

Controlling Complex Systems: The Story of the Last Roman Empress

Fig. 4.5
figure 5

(Image by Clio20—https://en.wikipedia.org/wiki/Galla_Placidia#/media/File:Honorius_et_Galla_Placidia.JPG)

This is perhaps the only realistic portrait we have of Galla Placidia (388–450 CE), the last (and the only) Western Roman Empress. The inscription says “Domina Nostra, Galla Placidia, Pia, Felix, Augusta,” that is, “Our Lady, Galla Placidia, Pious, Blessed, and Venerable.” A contemporary of such figures as Saint Augustine, Saint Patrick, Attila the Hun, and—perhaps—King Arthur, Placidia had the rare chance of being able to do something that past Roman Emperors never could: take the Empire to its next stage, which was to be, unavoidably, its demise

The story of Galla Placidia reads like an adventure novel [65]. Born in the late 4th century CE, she lived most of her life during the last century of the Western Roman Empire. In 410 CE, she was a young Roman princess when she was kidnapped by the Goths during the sack of Rome. Undeterred, she married their king and became their queen. There followed more dramatic events: her husband, the king of the Goths, was killed in a conspiracy and Placidia went back to Roman lands, battling her half-brother, Honorius, for the Imperial throne in the city of Ravenna, at that time the capital of the Western Empire. Defeated, Placidia had to flee, but Honorius died and she came back at the head of an army to retake Ravenna, in the meantime occupied by a usurper. Placidia defeated the usurper, captured him, had his hand cut off, paraded him in town riding a donkey, and finally had him beheaded. In 425 CE, the victorious Placidia took for herself the title of Augusta (venerable), the feminine form of the title that had belonged to the first Roman Emperor, Augustus, more than four centuries before her (Fig. 4.5).

As I said, Placidia’s story reads like an adventure novel and it is strange that nobody ever thought of turning it into a movie. After all, Placidia was a contemporary of such well-known figures as Attila the Hun and (perhaps) King Arthur of Britain, both much more popular than her in fiction. But the interest in Placidia’s life and deeds is not limited to her juvenile adventures. As Empress, she was never just a doll in expensive clothes. Rather, she was possibly the last person who actually ruled the Western Empire: she faced enormous problems but managed to keep the Empire together. After her death in 450 CE, no one was left who could do the same and the Empire faded away forever.

I can imagine that, at times, many of us have dreamed of being what Placidia managed to become: the absolute ruler of the world. I am sure we all have in our minds the perfect recipe for solving the world’s problems: hunger, wars, pollution, global warming, and more. It would surely work, if only we had the power to impose our ideas as benevolent and merciful rulers. That is just a dream, of course, but it is true that the Roman Emperors were powerful, semi-divine rulers. They were said to be people “born in the purple,” indicating that from childhood they would wear clothes dyed with the purple made in Tyre, so expensive to produce that it was reserved for kings and emperors. But suppose you were one of those purple-wearing emperors: what would you do to save a collapsing empire?

In general, the record of the performance of the Roman Emperors is terribly poor. We all know of the Emperor Nero, who was accused of having set Rome on fire to find inspiration for one of his songs, and of Caligula, who is said to have planned to make his horse a consul and who engaged in all sorts of debaucheries. Much in these accusations is probably legend and propaganda, but it is true that absolute rulers are often psychologically unstable individuals: they may be murderers, sexual predators, sadists, and worse. Even when they succeed in maintaining a certain level of mental sanity, the task of managing a whole state is beyond the capabilities of a single person. To be effective, rulers need competent staff to inform them and guide their decisions, but they tend to surround themselves with yes-men who amplify their biases and misconceptions. Absolute rulers do not solve problems: they are problems.

Curiously, there seems to be an exception to this rule: Galla Placidia. She may have been a rare case of a ruler who understood what was wrong in the system and acted accordingly. At the time of Galla Placidia, the last century of the Western Roman Empire, the problem for the Roman state was mainly financial: with the gold mines of Spain exhausted, the Empire had run out of money. In other words, the Empire was in full financial overshoot: it was spending more than it could earn. The previous Roman Emperors had tried to refill the imperial coffers by increasing taxes—but that meant straining the system, making it more fragile. The more they raised taxes to be spent on more troops, the poorer the Empire became, less and less able to face the Barbarian invasions.

Instead, Placidia did exactly the opposite. For sure, she did not think that wars were a good way to solve the Empire’s problems. Cassiodorus (c. 485–c. 585) described her ruling years as involving “too much peace,” even though he intended it as a criticism. Stewart Oost, who wrote Placidia’s biography in 1969 [66], reports that she enacted two especially interesting laws. One forbade the coloni, the peasants bound to the land, from enlisting in the army. That deprived the army of one of its sources of manpower and we may imagine that it greatly weakened it. The other law allowed the great landowners to tax their subjects themselves. This deprived the Imperial Court of its main source of revenue and surely forced it to reduce its expenses. These two laws were the push needed to gently nudge the Empire toward its next stage: its demise.

Did Placidia understand what she was doing? Of course, we have no way to know the inner thoughts of a person who lived a millennium and a half before us and who left us nothing written in her own hand. But she must have been steeped in the ways of seeing the world that were typical of late antiquity in Europe, including a strong influence from Stoic philosophy. In addition, she had lived with the Goths, she could probably speak their language, and she never renounced the title of queen that she had gained with them. That experience may have opened her mind and made her think in ways different from the narrow views that we can imagine are typical of a cloistered emperor or empress. So, she applied a strategy consisting in not opposing the unavoidable. Placidia did not try to push the system in a direction it could not go, and she played a fundamental role in opening the way for the coming of the Middle Ages.

This excursus into Roman history is an introduction to the concept of the control of complex systems. In general, human societies, living creatures, human-made devices, and other kinds of complex systems tend to reach a specific state—sometimes called “homeostasis”—and to maintain it. In some cases, this is the result of the interaction among the internal feedback mechanisms of the system, which tend to balance each other. A good example is a flock of birds. The flock is kept together by feedback-dominated interactions among the single birds. It has no structure that we could identify as a control system: no “Emperor bird” at the top gives orders to the other birds!

Instead, some complex systems have structures specifically dedicated to control. The nervous system and the brain of vertebrates are an obvious example. Another is a 19th-century invention that made it possible to run steam engines in a reliable manner: the “steam governor,” an automatic valve that regulates the flow of steam into the engine (Fig. 4.6). The steam governor was the precursor of the modern concept of control systems for our machines and devices: many are simply set-point systems, just like the thermostat that regulates the temperature of a room. Others can actively chase a moving set point, like an automatic anti-aircraft gun. And some can be very complex and adaptive: think of the control mechanism that keeps a flying drone stable despite the various maneuvers it performs. The latest example of how sophisticated these systems can become is the currently very fashionable self-driving car, expected to revolutionize road transportation.

Fig. 4.6
figure 6

(https://en.wikipedia.org/wiki/Centrifugal_governor#/media/File:Centrifugal_governor.png)

The Centrifugal Steam Governor: an early automatic control device to regulate the flow of steam into the engine. It was the precursor of all modern control devices. Image from “Discoveries & Inventions of the Nineteenth Century” by R. Routledge, 13th edition, 1900.

The steam governor greatly impressed the scientists of the 19th century with capabilities that, up to then, had been thought to be characteristic of living beings only. By means of its internal feedback-based control system, you could see the governor as endowed with a certain degree of “intelligence,” reacting to changes in its environment and adapting to new conditions. Similar capabilities exist in living beings: your body, for instance, is a tangle of feedback-based control systems. The level of sugar in the blood is controlled by the synthesis of the hormone insulin. Body temperature is controlled by neural feedback mechanisms operated by the hypothalamus, which also contains temperature sensors. And blood pressure is controlled by the renin–angiotensin–aldosterone system (RAAS). All these systems may malfunction; that is why you may have to take blood pressure pills. Or the set point may be varied depending on the circumstances, such as when your body temperature increases in response to an infection: it is called “fever.” The most basic control system of your body is the one that prevents your cells from growing and reproducing at the fastest possible speed. If that system ceases to work, the result is called cancer.
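Returning to machines, the essence of a set-point controller fits in a few lines of code. The sketch below is a toy proportional feedback loop of the thermostat kind; all the numbers are arbitrary assumptions of mine. The controller measures the gap between the set point and the current state and pushes back in proportion to it.

```python
# Toy set-point feedback loop (all numbers arbitrary): a room loses heat to the
# outside, and a proportional controller adds heating power based on the error.

set_point = 20.0   # desired temperature, deg C
outside = 5.0      # outside temperature, deg C
temp = 10.0        # initial room temperature
loss_rate = 0.1    # fraction of the indoor-outdoor gap lost per time step
gain = 0.5         # controller gain: heating per degree of error

for step in range(21):
    error = set_point - temp
    heating = max(0.0, gain * error)             # the controller's corrective action
    temp += heating - loss_rate * (temp - outside)
    if step % 5 == 0:
        print(f"step {step:2d}: temperature = {temp:5.2f} C")
# The temperature settles near, but not exactly at, the set point: a purely
# proportional controller always leaves a residual error, which is why practical
# controllers often add integral and derivative terms (PID).
```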

But not all complex systems have control mechanisms that can keep them in homeostasis. For instance, there is no set point for populations in ecosystems: amoebas in a Petri dish reproduce to increase their numbers as fast as possible and the total is kept in check only by the limited availability of food. It is no different for vertebrate populations: there are no set limits except those generated by the availability of food. There is a logic in all this: individual creatures have internal set points and control mechanisms because that makes them better at competing for survival. But there is little or no reason why such mechanisms should have evolved at the group or species level, so, in general, there are none. Only “eusocial” species, ants for instance, actively control their population.
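This “grow until the food runs out” behavior is what the classic logistic growth equation describes: the population expands as fast as it can and is stopped only by the external limit of the carrying capacity. A minimal sketch, with arbitrary parameters of my own choosing:

```python
# Logistic growth sketch: a population limited only by the carrying capacity K
# (arbitrary, illustrative parameters).
r, K = 0.5, 1000.0     # growth rate per step, carrying capacity
n = 10.0               # initial population
for step in range(25):
    n += r * n * (1 - n / K)   # growth slows only as the food limit is approached
print(f"after 25 steps: about {n:.0f} individuals (close to K = {K:.0f})")
```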

For human societies, it does not seem that there exist biological control mechanisms limiting, for instance, population or resource exploitation. But it is also true that we are a partly eusocial species and that we have developed cultural mechanisms supposed to reduce individual independence for the benefit of the community. They take the form of laws, religions, social rules, and more. Many human social structures rely on some kind of “central processing unit” that may go under various names: boss, chief, commander, king, emperor, or, more simply, the “government.”

Governments have many purposes, but the overall impression is that they exist mainly to harass their citizens with more and more taxes in order to maintain themselves. Apart from that, all through history governments have tended to justify their existence in terms of defending their citizens from (sometimes real) threats: crime, terrorism, foreign invasions, and the like. Only in relatively recent times has it become commonplace to believe that the government has to intervene in the economy in ways other than simply issuing currency. An extreme view in this field is that all the means of production should be owned by the state and controlled by the government in order to avoid the waste generated by competition among different producers. This view is typical of socialism, but it has been largely abandoned today. Yet, it is still believed that when the economy does not work as it should, the government should do something.

But what exactly should a government do? Financial matters are the most debated area of government action, and interventions there can be seen as attempts to control the system by acting, for instance, on interest rates. The problem is that, here as in other sectors, the government is not normally trying to control the economy in the sense of stabilizing it. Rather, it tries its best to make it grow at the fastest possible speed. For most people, this is supposed to be the obvious thing to do, but it may not be such a smart idea. It is as if the governor of a steam engine were operated to keep the valve open as much as possible, all the time. That could lead the machine to rev up beyond its limits and maybe even explode.

We saw in a previous section how the attempt to keep the flow of natural resources growing, the “drill, baby, drill” approach, has similar consequences. It sends the system into overshoot and then causes it to crash down, generating what we call here the “Seneca Cliff.” Individual operators or single firms are perfectly capable of generating a collapse by resource overexploitation, but the effect is especially destructive when several operators or firms compete for the same resource. In that case, the operator who shows restraint and tries to avoid going into overshoot simply leaves more of the resource to another, less scrupulous, operator.

It was a biologist, Garrett Hardin (1915–2003), who first noted how the economy is subject to this problem when he published a famous paper in Science in 1968, titled “The Tragedy of the Commons” [67]. Hardin’s model is the same as the one by Lotka and Volterra that we saw earlier in this book, except that it was expressed in words rather than in differential equations. Hardin proposed a model based on a hypothetical pasture managed as a “commons,” that is, free for everyone, where a number of shepherds could bring their sheep. The shepherds will tend to increase the size of their flocks to increase their profits, and that will result in overgrazing. That is, grass will be eaten by the sheep faster than it can grow back. The sheep will starve and the shepherds will see their flocks collapse. And there comes the Seneca Cliff.
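Hardin’s verbal model translates directly into a few lines of code. In the sketch below (all parameters are arbitrary, chosen only for illustration), the grass regrows logistically while the shepherds keep adding sheep as long as their animals are well fed; nobody shows restraint, the pasture is grazed down to nothing, and the flocks crash with it.

```python
# Tragedy-of-the-commons sketch (arbitrary parameters): grass regrows logistically,
# shepherds keep enlarging their flocks as long as the sheep are well fed, and the
# pasture ends up grazed down to nothing, taking the flocks with it.

grass, grass_capacity, regrowth = 1000.0, 1000.0, 0.3
sheep, eat_per_sheep = 10.0, 2.0

for year in range(41):
    grass += regrowth * grass * (1 - grass / grass_capacity)  # logistic regrowth
    demand = sheep * eat_per_sheep
    grazed = min(grass, demand)                               # the sheep eat what they find
    grass -= grazed
    fed_fraction = grazed / demand if demand > 0 else 0.0
    if fed_fraction > 0.9:
        sheep *= 1.2                  # good year: every shepherd adds more sheep
    else:
        sheep *= 0.8 * fed_fraction   # underfed flocks shrink
    if year % 5 == 0:
        print(f"year {year:2d}: grass = {grass:7.1f}, sheep = {sheep:7.1f}")
# Once the grass is gone it cannot regrow in this toy model: the collapse is final.
```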

There is little evidence that Hardin’s tragedy of the commons actually takes place in pastures [68]. But it was found later on that Hardin’s model does describe some economic systems, such as fisheries [69], just as Volterra’s studies had demonstrated earlier [70]. Hardin had identified what we call today the problem of overshoot and collapse, although he did not use these terms in his papers. His ideas were revolutionary in the sense that they showed that, in some conditions, economic systems do not tend to reach the stable situation that general equilibrium theory assumes they should reach when left alone in conditions of “perfect” free markets. Hardin’s model was much discussed and often rejected, but it has lingered in the debate on how to manage the economy.

In parallel with Hardin’s considerations, the question of overshoot and collapse was being examined within the new approaches to complex systems. Jay Forrester, the founder of system dynamics, was probably the first to use this terminology, noting how economic and biological systems tend to behave like electronic circuits when they “overshoot” the signal and then “return” in a series of damped oscillations [71]. This led Forrester to the first dynamic study of the world’s economic system, published in 1971 [72], and his coworkers to the other milestone study, The Limits to Growth, of 1972 [10]. These studies went beyond the hypothetical pasture that Hardin had used as a metaphor and used real-world data to study the world’s economy. The result was that the global economy was, or would soon be, in overshoot and that it would have to return below the carrying capacity of the world system. This return would be painful, to say the least. Neither Forrester nor the authors of The Limits to Growth used the term “Seneca Collapse,” but that was what they had identified for the first time in the history of dynamic modeling.

Forrester and the authors of The Limits to Growth did not just recognize the problem, they proposed solutions for it. If you want to avoid the overexploitation of a natural resource, then you have to regulate its flow so that the throughput of the exploitation does not exceed the carrying capacity of the system [73]. Both studies showed how the phenomenon of overshoot and collapse could be avoided by putting brakes on some of the main elements of the economic system: the exploitation of natural resources should be slowed down, human population growth should be stopped, and increasing amounts of resources should be dedicated to fighting pollution. The result of implementing these policies was that the world’s economy would not go into overshoot and then collapse but would reach a steady-state condition that could be maintained throughout the 21st century, at least (Fig. 4.7).

Fig. 4.7
figure 7

(Reproduced courtesy of the copyright owner, Mr. Dennis Meadows)

One of the “stabilizing” scenarios proposed in the 1972 The Limits to Growth study. It assumes that the growth of some sectors of the economy is curbed starting in 1975

These results were obtained considering the world’s economy as a whole, but they are also valid for smaller economies, down to the level of single states. The authors never specified exactly what kind of entity should implement the proposed stabilization policies, but it seems obvious that it could only have been some form of government. Basically, avoiding disastrous phenomena of overshoot and collapse required the government to operate in a way not so different from that of the governor of a steam engine (and, indeed, their names are almost the same!). A governor regulates the speed of rotation of the engine to a predefined set-point, preventing it from running so fast that it could damage itself. A government should do the same, regulating the flow of natural resources into the economy and managing the output in such a way that the “engine”—or the whole society—runs smoothly, avoiding the overexploitation trap.
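
To see what a difference the set-point makes, here is a variation of the toy model of the previous pages, in which the yearly harvest of a renewable resource is either left free to grow or capped by a “governor” kept below the maximum regrowth rate of the stock. Again, the sketch and the numbers are mine, chosen only for illustration, not taken from Forrester’s models:

```python
# A "governor" applied to a renewable resource (toy model, arbitrary units):
# an unregulated economy lets its harvest grow every year, a regulated one
# caps the harvest at a set-point below the maximum regrowth of the stock.

def run(set_point=None, years=100):
    stock, harvest = 1000.0, 20.0        # resource stock and yearly harvest
    r, K = 0.5, 1000.0                   # regrowth rate and carrying capacity
    taken = 0.0
    for _ in range(years):
        regrowth = r * stock * (1 - stock / K)
        harvest *= 1.05                              # the economy wants 5% more every year
        if set_point is not None:
            harvest = min(harvest, set_point)        # the governor caps the throughput
        taken = min(stock, harvest)                  # cannot take more than there is
        stock = max(stock + regrowth - taken, 0.0)
    return round(stock), round(taken)

print("valve wide open:    stock, harvest =", run())
print("governor at 100/yr: stock, harvest =", run(set_point=100.0))
```

With the valve left wide open, the stock is driven to zero and the harvest goes with it; with the cap in place, the system settles into a steady state of the kind described by the stabilized scenarios of The Limits to Growth, although the real World3 model is, of course, enormously more detailed.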

But we have a problem here. Whereas centrifugal governors have an excellent record of being able to control steam engines, governments don’t enjoy the same good reputation. If you have ever tried to push your government to do something sane that would benefit everybody, you understand what seems to be a general rule. A government is nothing like a thermostat or the governor of a steam engine. It is, rather, the embodiment of the tragedy of the commons described by Hardin, with all the actors (lobbies) pushing to grab what they can, when they can, for themselves.

Today, in the West we tend to believe that liberal democracy is the best system of government and, for sure, it has several good points. But it is clearly unable to avoid the overexploitation of the commons. It seems to be a built-in feature: in a democracy, a politician who implements laws that require citizens to make sacrifices to reduce their consumption is not re-elected. The result is that there is no Western leader, at present, who can afford to declare that economic growth may not be the one and only way to take us toward the nirvana of everlasting prosperity: the best of all possible worlds.

Maybe democracy is not such a great idea, surely not so good as to be worth exporting by means of aerial bombing of the unfortunates who do not have it. Among others, the concept that we need different political systems has been expressed by Jorgen Randers [74], one of the authors of the first The Limits to Growth report [10]. Randers does not advocate dictatorship, but he thinks we should learn from China how a government should act forcefully when necessary, even against the opposition of the population. The “one-child” policy enacted by the Chinese government starting in 1979 is a rare example of a successful quota imposed by a government.

The growing opinion that democracy is unable to face the challenges ahead may be a factor in the trend of more authoritarian governments appearing in the West, often with a focus on a single, powerful figure at the head. Yet, it does not seem that the new big men at the top are doing any better than the old parliament-based democracies in terms of protecting the ecosystem. The cases of Jair Bolsonaro, president of Brazil, and of Donald Trump, president of the United States, are clear evidence of this trend: both are heavily focused on promoting economic growth and engaged in dismantling the rules conceived by previous governments to protect the ecosystem. Some leaders, such as Emmanuel Macron in France, claim to be in favor of environmental policies, but that seems to be mainly a veneer of “green” painted over a traditional approach. In practice, the world’s governments continue to engage in their traditional power games, competing in terms of spheres of influence and occasionally waging wars on each other. Nobody in charge seems to understand that the problem, nowadays, is not that of expanding their country’s borders but that of ensuring the physical survival of their citizens in the face of potentially disastrous events related to climate change and the collapse of the ecosystem.

So bad is the record of many governments nowadays that some people have arrived at the conclusion that the only good government is no government at all (just as, in the views of some 19th-century Americans, the only good Indian was no Indian). One result is the extreme Libertarianism of some sectors of the political right in the US, from which comes the idea that the economic system should be left absolutely and completely free to regulate itself. But if that is the solution, how do we avoid the tragedy of the commons? The Libertarian answer to the question is privatization. If every economic actor owns a slice of the resource being exploited, then they won’t have any interest in overexploiting it. It has been suggested that the wave of privatizations that swept the world during the past decades was a direct result of Hardin’s ideas or, at least, of how they were understood in some political sectors [75] (note that Hardin himself never advocated privatization).

At first sight, privatizing the commons seems to be a good idea. Surely, greed is a powerful force in determining people’s behavior, so why not exploit it to avoid overshoot? But things are not so simple. One problem is that people may well overexploit resources that they completely control, as appears from a series of studies carried out by Erling Moxnes [76] showing how people easily misjudge the amount of resources available and the capability of the system to recover after having been perturbed. Jay Forrester also examined this problem with the model he called the “Beer Game,” where he showed how managers can completely lose control of a system even when they have the right data and the full capability of acting on it [77]. That may not be a critical problem: people do make mistakes, but they can also learn from them. The real problem with the idea of privatizing the commons is that it does not remove the need for a government. For middle-class Westerners, private property may appear an obvious feature of their world: they expect their governments to guarantee their property rights. But this is not true in many areas of the world, where ordinary people are subject to being evicted, dispossessed, or worse. There is a long series of cases in history of entire peoples being chased away from lands they thought they owned, the classic case being that of the American Indians in the 19th century. And, everywhere in old times, property rights were not guaranteed by anyone except by the capability of the owners to defend them by force of arms. But that is hardly a good way to organize the exploitation of natural resources. If nothing else, it invites the most powerful players in the economic game to behave as pirates, using force to dispossess the weaker ones. Besides, in many cases privatization is simply impossible: for instance, you cannot fence the ocean to prevent fishermen from destroying entire fisheries. Even more difficult would be to use this strategy to manage climate change by privatizing the atmosphere.

So, it seems that we do need some kind of a government but, if the current forms of democracy are unable to carry out the task of stabilizing the economy, could we think of different kinds of political systems? Many ambitious utopias have been proposed in the past, starting from Plato’s Republic, written around 380 BC. Plato’s ideas were never put into practice but during the past few centuries the trend of experimenting with new political theories seems to have become frantic. We had Socialism, Communism, Fascism, Nazism, and more ideologies that were supposed to be at the basis of governments that could take forms such as monarchy, aristocracy, plutocracy, oligarchy, democracy, theocracy, tyranny, and more.

The results have been variable, in most cases very bad. It seems that many revolutionary movements start with noble and lofty ideas on how to reform the government and turn it into something that would work in the name of “we, the people,” as in the US constitution. In practice, all political systems tend to degenerate: they may become ineffective kleptocracies, hideous dictatorships, or other forms that just create misery and disasters for everybody. And if you think that Capitalism is the big bad wolf of the story you just have to think of how the government of the Soviet Union destroyed the ecosystem of the Aral Sea to understand that Communism, theoretically the bugaboo of Capitalism, is not a solution for the overexploitation problem (at least in the Soviet version).

Does that mean we are condemned to an eternal series of cycles of growth and collapse, like the ones that predators and prey experience in the simplified Lotka–Volterra model? Or can we, as in the Buddhist view, escape the cycle of death and reincarnation and attain the Nirvana of sustainability? These are difficult questions but, as Thomas Browne said, even the song that the sirens sang is not beyond all conjecture.

One thing that is sure is that speculating about political systems may be dangerous. Over history, there have been several cases of people trying to put someone else’s political speculations into practice: the result has often been major disasters, as we all know. Instead, we may do better if we look for historical examples of governments that did succeed in managing the commons without having to oppress their citizens (not too much at least). At least one such example exists: Japan during the Edo period, from 1603 to 1868.

The Edo period in Japan is also known as the “Tokugawa period” and it started when the warlord Tokugawa Ieyasu managed to end the age of civil wars (the Sengoku jidai) and unify Japan under a military government called the bakufu, headed by a commander in chief called the “Shōgun.” It is a period that, in the West, we know mainly because of the many Samurai movies that use it as a setting. But having been a battleground for swordmasters is not the main reason why the Edo period is interesting; rather, we can examine it as a relatively recent example of a true “zero-growth” society.

We have no data about Edo Japan that we could directly compare to our modern concept of “Gross Domestic Product,” at the basis of our idea of economic growth, but we know that the Japanese economy was lively and growing in terms of wealth per capita [78]. Remarkably, this economic growth did not result in an increasing population. After an initial period of expansion, from ca. 1700 onward the Japanese population stabilized at a level of around 26–27 million people [79], a number that remained unchanged until 1854, when Commodore Perry used his “Black Ships” as shock-and-awe tools to force Japan out of its economic isolation and restart a period of expansion. We also know that the extent of cultivated land in Japan did not vary over about a century and a half, from 1720 to 1874 [80]. We have some records of famines during this period, but they seem to have been rare and related to special climatic events, such as volcanic eruptions. Overall, we can say that for some two centuries Japan was as close to a “zero-growth” society as we can imagine.

How did Japan manage to attain this condition? Probably, the simplest answer is that the Japanese had no other choice. They had tried military expansion under the leadership of the warlord Toyotomi Hideyoshi who launched two offensives against Korea in 1592 and in 1597, but the effort was not successful and that forced the Japanese to face the necessity of living within the limits of their islands.

But how was zero growth obtained? First of all, it does not seem that the government had a plan to ensure the sustainability of the Japanese economy. Like most governments in history, the bakufu was mostly interested in its own survival. For this purpose, it implemented strict control over all sectors of Japanese society by means of the system called “danka,” which obliged every Japanese family to register with the local Buddhist temple [81]. The popular story of the “Forty-seven rōnin,” which took place in 1702, tells us how the government handled every attempt to act outside the law with a heavy hand: just note how all the “heroes” of the story were forced to commit ritual suicide.

Today, we would define the bakufu as a harsh dictatorship: it was ruthless against everything it perceived as a threat. Among other things, it forbade Christianity, believed to be a tool of foreigners to gain a foothold in Japan and, eventually, dominate it. But, mainly, the bakufu was engaged in playing the game that the Japanese describe with the saying, “The nail that sticks out gets hammered down.” It intervened to make sure that no competing force, whether warlords, foreigners, or commercial companies, would become strong enough to threaten the central power.

A dictatorship, sure, but it must be said that the bakufu ensured an environment where commerce and craftsmanship could develop and flourish. Agriculture could provide food for the whole population and the Japanese developed a lively economy based on commerce along the “five routes” (Gokaidō) that linked the capital, Edo, with the main cities of the islands. And Japan was not just a land of warriors and peasants; there were people we would recognize as a “middle class”: merchants, artists, craftsmen, and literati. They lived in a simple world, dressed in simple cotton kimonos, their only drink was sake, and wherever they wanted to go, they had to walk there on their own feet. But they seemed to be able to live a fulfilling life. They enjoyed nature, poetry, literature, music, and each other’s company; just think of the poetry of Matsuo Bashō (1644–1694), still known all over the world. A good visual impression of that period is the delicate and beautiful movie Miss Hokusai (2016).

In terms of managing the ecosystem, Japan was forced to develop a self-contained economy that produced what the system needed with minimal or no imports from abroad, what we call today a “circular economy” [82]. It was obtained mainly through a bottom-up approach in which the government does not seem to have intervened directly. Gerald Marten describes how the Japanese rose to the challenge of deforestation during the Edo period [83]:

Japan responded to this environmental challenge with a “positive tip” from unsustainable to sustainable forest use that began around 1670…. The central role of catalytic actions and mutually reinforcing positive feedback loops, local community, outside stimulation and facilitation, letting nature and natural social processes do the work, demonstration effects, social/ecological coadaptation, and using social/ecological diversity and memory as resources. It is difficult to single out the initial tipping point with certainty, but it seems to have derived from the centuries-old tradition of cooperation among villagers for protection against bandits, allotting rice fields and irrigation water, and storing rice.

These traditions of collaboration and agreement affected all sectors of the Japanese economy. It is fascinating to read about the details of how everything was reused and recycled: candles, clothing, cooking pots, tools, brooms, umbrellas, and much more [84]. Note also that since the government had renounced the temptations of military adventures abroad, it had no need of cannon fodder and no reason to push the population to grow. No active top-down birth control policies seem ever to have been enacted, but the Japanese appear to have relied mainly on natality control to keep the population stable, although in some cases they resorted to abortion or infanticide [78].

You can see here a clear example of how a complex system reacts to external perturbations by using its internal feedbacks. The system could attain sustainability just because it was complex and it had the resources and the mechanisms to adapt. Probably, it would not have worked as well—perhaps not at all—if it had been imposed by the government from above. And note that the Japanese peasants were doing exactly what their European counterparts had been doing to manage their commons: a tangle of rules, customs, cultural practices, and collective goodwill generated a situation in which nobody could overexploit the commons in the sense that Hardin had described. It was not because of legal punishments, nor because of fences: it was because nobody could afford to place him or herself alone against the whole community.

All this is not meant to provide a blueprint for what we should do in the future. The Edo culture was characteristic of a specific period and of a specific area and, obviously, we would never be able to recreate Edo Japan in the modern world, even if we were convinced that it was worth doing. Discussing that age is, mainly, a demonstration of feasibility. The Edo experience shows that it was possible to create a society that thrived for two centuries or more in conditions of zero growth and sustainability. It was, in several respects, a brutal dictatorship, but it was also a sophisticated and refined culture that attained, among other things, levels of literacy superior to those of European society at the time. Note how the system was finely structured and optimized: it was not purely bottom-up nor purely top-down. The government ensured stability by top-down management, the people ensured flexibility by bottom-up management. There was no need for a Big Brother to micromanage the commons, nor was it a free-for-all libertarian paradise. It was a machine that had attained the condition of “self-organized criticality” that we discussed in an earlier chapter of this book.

If Japan could attain economic stability, it means that it is possible to do the same in other conditions, in different cultures, maybe even at the worldwide level. What the story of Edo Japan tells us is in line with what we know about complex systems: they tend toward stability. In other words, our current fixation on growth may be just a quirk of history, destined to fade away in the future as we find ourselves forced to live within the limits of the Earth’s ecosystem. But there is one condition that we badly need for that, and it is peace, as the Edo experience tells us.

Surely, reaching such a condition will take time and effort and, at present, we have little or no idea of what kind of political system could manage the planetary commons for the good of all humankind. Most likely, we’ll have to go through some kind of “Seneca bottleneck” before we learn how to do that, but it is not impossible to attain sustainability, especially because it is unavoidable.

Returning After Collapse: The Seneca Rebound

Fig. 4.8
figure 8

This is the way we tend to see Europe during the “Dark Ages”—a depopulated land of isolated castles. The image shows the Hermitage Castle in Liddesdale, Scotland, in a print made in 1814 (https://www.flickr.com/photos/126409951@N04/14772362853). It is the prototypical sinister castle, probably haunted by the appropriate ghosts (in this case, said to be Mary, Queen of Scots)

Imagine Europe at the start of the period we call the “Dark Ages,” more correctly “Late Antiquity.” In 650 AD, the European population had shrunk to some 18 million people [85], less than half of what it had been during the high times of the Roman Empire and enormously smaller than it is today, some 700 million people. The Europe of that age was a forested region, nearly empty of people, where nothing especially interesting happened except for the squabbles of local warlords fighting each other. No one at that time could have imagined that, in less than a millennium, the descendants of the inhabitants of that backward peninsula of the Eurasian continent would start the bold attempt of conquering the world and, eventually, succeed at it. By the end of the 19th century, practically all the world was under the direct or indirect control of European countries or of their American offspring, the United States. In some respects, the situation has not changed much today.

The conventional explanation for the European success at conquering the world has to do with the “white man’s burden”, a term invented by Rudyard Kipling in 1899. According to this idea, the European domination was a sort of “manifest destiny” generated by the superior genetic or cultural qualities of the European people who turned out to be smarter, more laborious, better organized, and generally more efficient than the populations of the rest of the world, supposed to be lazy, disorganized, uncultured, and in the grip of superstitions.

It is surely flattering for Europeans to think that they are smarter than everybody else, but it is also an interpretation that is not supported by data: Jared Diamond actually argues for the opposite in his book Guns, Germs, and Steel (1997). Indeed, when non-Europeans were given a chance to confront the Europeans using the same weapons, the European superiority was far from being assured. Some historical cases include the battle of Adwa in 1896, when Ethiopian forces destroyed an invading Italian contingent, and the battle of Tsushima, in 1905, when a Japanese fleet defeated a Russian fleet during the Russo-Japanese war of 1904–1905. In more recent times, we have the example of Vietnam, where the mighty United States had to admit defeat to the Vietnamese forces in 1975.

But these were exceptions to the general rule that sees Europeans dominate almost everywhere in the world and the list of the battles and of the wars won by European or American forces against non-European ones would probably require several pages. So, what led Europeans to have so much success? Without pretending to have the definitive explanation, I think I can propose that it is not a question of genetic or cultural factors but rather that it was caused by a phenomenon that I call the “Seneca Rebound”—the fact that a society, a state, or an organization can restart growing after collapse at a faster speed than before the collapse. In this case, Europe may have obtained a decisive advantage in a specific historical period because of a combination of geographical and historical factors that caused its population to “rebound” at the right moment. It happened when the technologies needed to expand all over the world had been developed and could be used for that purpose.

A rebound is something that comes after a collapse, and there is no doubt that Europe has known economic and population collapses over its long history. There is evidence of an early European collapse that took place during the Neolithic, in the 5th millennium BCE [86]. Then, of course, there was the collapse of the Western Roman Empire that started around the 3rd century CE. Moving onward in time, we have the terrible collapse of the mid-14th century, when famines, wars, and the plague epidemic known as the “Black Death” wiped out an estimated 30% to 50% of the European population of the time [85]. There was another collapse during the mid-17th century, in correspondence with the “Little Ice Age,” although less pronounced and less destructive than the others.

So, we have a total of four major collapses over European history and each collapse, so far, was followed by an economic rebound and by rapid population growth. There are no quantitative data for the first two rebounds, but a visual impression of the events that took place during the past millennium can be seen in a paper by William Langer, published in 1964 [87] (Fig. 4.9).

Fig. 4.9
figure 9

Graph from William E. Langer, 1964 [87]. Note how growth is faster after the collapse. This is what I call the “Seneca Rebound”

These are the data: how do we explain them? The first question usually asked is what caused the collapses, but it may be an ill-posed one. It is typical of complex systems to behave in a complex manner, and that may generate a series of feedback effects that may mistakenly be taken as the “cause” of the collapse. For instance, the Neolithic collapse of Europe was accompanied by an invasion of nomads (the “Yamnaya”) [88] and we all know how the Roman Empire saw its territory swept by wave after wave of barbarian populations during the last phases of its existence. In both cases, the invasions have been proposed as the cause of the collapse, but note that no such invasions took place in correspondence with the two later European crashes, so we are justified in thinking that the earlier invasions were opportunistic reactions to an already weakened society.

Then, consider climate change: it is a typical cause reported for civilization collapses, but its effects have been ruled out for the Neolithic collapse [86] and no significant temperature changes are reported in correspondence with the decline and collapse of the Western Roman Empire [89]. Instead, in the case of the two more recent collapses in Europe, there is evidence of cold spells that damaged agriculture, possibly generated by volcanic eruptions. So, maybe climate change caused these collapses? It is possible but, as usual, complex systems defy simple interpretations in terms of cause and effect. Maybe the population decline was generated by atmospheric cooling, but it may also be that the population drop cooled the climate as the result of reforestation—another case of reinforcing feedback in a complex system. Indeed, the data show a small decline in atmospheric CO2 concentration in the centuries after the Black Death in Europe [89]: it may have contributed to the cooling. The effect is stronger and clearer for the great crash in the populations of the New World [90], which occurred in a later period. Overall, it seems that the European collapses are mostly the result of internally generated feedbacks in societies that were growing so fast that they outstripped the capability of the resources they were exploiting to regenerate.

In any case, the point is not so much what caused the collapses but the remarkably rapid recovery that followed them: what I call here the “Seneca Rebound.” The reasons for the rebound are reasonably clear: depopulation frees resources that can be exploited for a new phase of rapid growth. Before the fossil fuel age, societies had two main natural resources to exploit: fertile soil and forests. Both tend to be overexploited: forests are cut faster than trees can regrow and the fertile soil is eroded and washed to the sea faster than it can reform. That generates a decline of agriculture, and the result is not just an end to population growth but a ruinous collapse driven by famines and epidemics. The loss of revenues from forests weakens the state and the result is internecine wars, which also hasten the collapse. But the disappearance of a large fraction of the population frees cultivated land for forests to regrow, and that regenerates the soil. Then, when the population starts regrowing, people find in the new forests a near-pristine source of wood and, once the trees are cut, of fertile soil. Trees provide the wood for ships and the charcoal made from wood provides the material needed to make steel for weapons. The cycle restarts and it may go faster than the earlier one because society still remembers the social structures and the technologies of the previous cycle.
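
The last point, the memory effect, is easy to illustrate with numbers. Here is a deliberately simple sketch of mine (all values arbitrary) in which a population regrows from the same low level after a crash, but with the know-how accumulated during the previous cycle making growth faster:

```python
# Toy illustration of the "Seneca Rebound": regrowth after a crash can be faster
# than the first growth phase because knowledge is not lost with the population.
# Purely illustrative numbers, not a calibrated historical model.

def years_to_grow(start, target, knowledge):
    rate = 0.01 * (1 + knowledge)    # yearly growth rate, boosted by accumulated know-how
    population, years = start, 0
    while population < target:
        population *= 1 + rate
        years += 1
    return years

first_rise = years_to_grow(10, 40, knowledge=0.0)   # first cycle: little know-how
rebound = years_to_grow(10, 40, knowledge=1.0)      # after the crash: the know-how survives
print(f"first growth phase: {first_rise} years, rebound: {rebound} years")
```

The population falls back to where it started, but the knowledge does not, and the second climb takes roughly half the time of the first.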

The cycles of deforestation and reforestation are evident in Europe: both the Roman Empire and medieval society had badly overexploited their forests, and the reforestation that followed the collapses freed resources that allowed the population to grow and expand beyond its earlier borders. The phenomenon was not unique to Europe but, as always, success is a question of timing, opportunities, and a little luck. The Europeans found themselves rebounding at a moment when they had the right technologies to expand worldwide and when the other, potentially competing, civilizations were unable to stop them.

On the opposite side of the Mediterranean, the Arab civilization was socially and technologically as sophisticated as the European one, but its climate did not allow forests to grow fast enough to generate the same rebound seen in Europe. The American civilizations we call “pre-Columbian” had forests, but they hadn’t yet developed the technologies of steel and of oceanic ships—they also lacked horses for transportation and as a military weapon. The Chinese, instead, had the technologies and also the forests, and they could have wrestled with Europe for the control of the world. During the 12th-13th centuries, an outbreak of the same plague that affected Europe caused a decline in the Chinese population that was followed by the Mongol invasion. Then, the Chinese economy experienced a rebound: the population restarted growing and the age of “treasure voyages” started in the early 1400s, during the Ming dynasty, with fleets of ships exploring the lands around China. But the Chinese exploration phase soon stopped when the central government forbade all oceanic voyages. We can only wonder what would have happened if the Chinese government had continued to support overseas exploration. Maybe Columbus would have found Chinese-speaking people when he landed in the New World. But that is the way history works.

During the Middle Ages, Europe didn’t have a central government, as China did, so there were no brakes applied to the military expansion of the European states, which competed with each other to conquer new lands. The first phase of European expansion came with the Crusades—the first one took place in 1095. But the real push forward came with the rebound after the Black Death of the mid-1300s: it was the “age of explorations,” and we know how the Europeans managed to expand over most of the Americas and into Africa. After the latest collapse, the one that took place in the mid-17th century [85], there was another burst of economic growth which ushered in the age of coal and, with it, the period that Kenneth Pomeranz called the “Great Divergence” in the book he published in 2000 [91], when Europe truly became the dominant world power. Right now, Europe is declining again; maybe there will be a new phase of collapse and rebound in the future.

These considerations are qualitative, but it is possible to see the Seneca Rebound as an engine that propels civilizations forward in bursts. If this is the case, can we expect a rebound if the world’s civilization goes through a new Seneca Collapse in the coming decades? If previous history can serve as a guide, it might happen. Of course, it is possible that the upcoming collapse will be so bad that humankind will never return to the complexity of the civilization it managed to create during the 20th and 21st centuries. For all we know, the effects of the destruction we are wreaking on the ecosystem could cause humans to go extinct, the ultimate Seneca Collapse. But a much more interesting case, and I would also say a more probable one, is that the coming collapse will be just one more in the series of previous collapses that affected human civilizations: it might lead to a new rebound. Would that really be possible in a world badly depleted in terms of mineral resources and subjected to extensive ecosystem damage?

As we saw in earlier chapters of this book, a complex system is an entity that lives on an energy flow. A civilization needs energy to survive and, the more energy it can get, the more complex and structured it can be. The problem we are examining here is whether a sufficient flow of energy can be maintained for civilization to keep at least some of the characteristics it has today, for instance the electronic processing and storage of information, a worldwide Internet, automation, scientific research, and more.

Today, our civilization is maintained by a flow of some 18 TW of primary energy, mainly (ca. 85%) produced by the combustion of fossil fuels [92]. The rest is provided in part by nuclear fission (ca. 6%) and by a mix of renewable technologies such as hydroelectric, photovoltaic, wind, and others. A civilization of complexity comparable to ours cannot exist without access to a comparable flow of energy. The resources that powered ancient civilizations, wood and animal power, created remarkably sophisticated societies, but none endowed with the technological level we have reached. So, the first question is what would happen to the current energy sources in case of a collapse of the world’s economic system.

We can be reasonably certain that fossil fuels won’t survive the Seneca bottleneck. The deposits of these fuels have been badly depleted over a couple of centuries of exploitation and, today, it is possible to maintain production only by means of extremely sophisticated technologies and large inputs of financial and human capital. An extensive economic and social crisis, coupled with wars and civil unrest, could easily send the fossil fuel industry into a death spiral from which it might never re-emerge. It would be the end of the “fossil age,” at least until the Earth manages to re-create these fuels, but that would take millions of years.

The situation is even more difficult for nuclear energy. First, nuclear energy is affected by depletion in the same way as fossil energy. The high-concentration mineral resources of uranium have been largely consumed by the exploitation of the 20th century, and a future civilization attempting to restart with fission reactors would have to reckon with the lack of inexpensive uranium resources. Perhaps they could use our abandoned nuclear warheads, but it is an iffy proposition, to say the least. They might try to jump directly to the much more expensive and complex technology of “fast” reactors, able to breed fissile material from non-fissile isotopes, but this is, again, a difficult proposition, especially when starting from scratch. A further, and perhaps worse, problem for nuclear energy is that an abandoned nuclear plant is at serious risk of going into meltdown if it loses active cooling. Typically, the fuel melts because of its residual radioactivity, and the plant may then build up enough pressure to cause explosions that spread radioactive material all over. This is what happened at Fukushima, where reactors hit by the tsunami of 2011 lost their cooling systems. In the case of an extended breakdown of the societal structure, the current reactors—there are about 500 of them—are all at risk of meltdown, a collective disaster with nearly unimaginable consequences. Even if that can be avoided, nuclear reactors remain vulnerable to military action, terrorism, or sabotage [93, 94]. In case of a major economic collapse, with the associated social and strategic unrest, nuclear reactors could become a major burden rather than an asset, and those destroyed by meltdown would remain radioactive traps for centuries—hardly something that would encourage our descendants to restart with the technology.

Things look much better if we examine the third leg of the current energy supply: renewables. On all counts, renewables are more resilient than both nuclear and fossil technologies. Renewables are not subject to fuel depletion, even though, of course, the plants wear out and need to be periodically replaced. But most of the materials used in a renewable plant can be recycled and these technologies need few or no rare minerals. Photovoltaic (PV) panels use only silicon and aluminum, both very abundant in the Earth’s crust, plus traces of other common minerals—that includes some silver, but it is not essential to their functioning [95]. Wind plants use rare earths for their magnets but, also in this case, alternatives are available and it is also possible to recycle the materials of an old plant to build a new one. Renewable plants are also long lasting. One of the first PV plants in Italy was installed in 1984 and, more than 30 years later, in 2016, it was still working, having lost just about 10% of its initial efficiency [96]. Of course, the electronic parts of a PV plant need to be replaced at shorter intervals, but even without an inverter the panels can still provide DC power, which is what is needed, for instance, to recharge batteries.
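
Just to put that 10% figure in perspective, a quick back-of-the-envelope calculation helps. I am assuming, for simplicity, a steady exponential degradation, which is my own simplification, not a result reported in the cited study:

```python
# The plant installed in 1984 had lost about 10% of its output by 2016.
# Assuming a steady exponential decay of performance (a simplification):
years = 2016 - 1984                        # 32 years of operation
annual_loss = 1 - 0.90 ** (1 / years)      # implied yearly degradation rate
print(f"average degradation: {annual_loss:.2%} per year")
print(f"output left after 50 years: {(1 - annual_loss) ** 50:.0%}")
```

That works out to roughly a third of a percent per year, which would still leave about 85% of the original output after half a century.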

In general, PV plants can take a lot of damage and continue functioning. I personally witnessed how a plant in Italy was hit by a twister that turned it into something that looked like a Mad Max movie scene of broken panels scattered all over. But when the sun shone again, the remaining panels, although damaged, still produced more than 50% of the power that the plant had been producing before the disaster. The plant could be rapidly repaired and now it works at full power. The situation may be more difficult for the modern generation of wind plants: tall wind towers can fall in conditions of exceptionally strong winds and, in that case, there is little that can be done except rebuild the plant from scratch. Instead, hydroelectric plants can last a long time and are very resilient to damage.

Overall, it is possible that the renewable infrastructure of a country may survive a crisis that could include major military operations, civil disturbances, and ecosystem collapses. Our descendants could re-emerge on the other side of the Seneca bottleneck relying on these plants to produce electric power. This power could be used to build new plants to replace the old ones as they wear out. The widespread legend that renewable energy needs fossil energy in order to keep going is just that: a legend [97]. Over the course of their life, renewable plants produce much more energy than is needed to create their replacements. So, it would be possible for our descendants to have a good supply of electric power using the renewable technologies that our society has developed.

That leaves open the question of mineral resources: a future civilization would not have the cheap ores that ours has depleted. Yet, our descendants would have large amounts of minerals already extracted that they could salvage from the ruins of our civilization. It is nothing new: during the Middle Ages, people would scavenge Roman ruins for stone and metals. From our waste, our descendants could have plenty of metals of all kinds, and their probably smaller population wouldn’t need as much of them as we do nowadays. That could be sufficient to jump-start a new civilization.

Of course, our ruins could not last forever as sources of minerals: just as we are not mining Roman ruins anymore, our descendants would need to find new sources. Since they won’t have the same high-grade ores we had, they would be constrained in terms of the mineral resources they could use, but they would still have good strategies to keep going. As I discuss in my book Extracted [98], the Earth’s crust contains abundant silicon for electronic devices and for photovoltaic panels, plenty of metals such as iron, titanium, aluminum, and magnesium for structural applications and, of course, plenty of silicon oxide for glass and the like. As a conductor, copper, too rare, would have to be replaced with aluminum. Other technologies would have to be redesigned to use little or none of the rare metals we use nowadays, from gallium for semiconductors to rare earths for magnetic materials. It would be a long-term challenge that, nevertheless, could be met, at least in principle. There is no need for humankind to return to subsistence agriculture or to hunting and gathering although, of course, it might be argued that it will happen and even that it would be a good idea.

There is another possibility worth discussing: could humankind mine space bodies to replace the dwindling ores on our planet? This would be enormously expensive and, in many cases, it would be useless even if we could afford to pay the cost. The concentrations of elements we call “ores” are a characteristic of geologically active bodies, and we know of only one such body: our Earth. There are no ores on the Moon or on the asteroids—maybe on Mars, but we have no evidence of that, so far. So, mining space bodies to bring minerals to Earth makes little sense. Nevertheless, there may be a logic in the idea if we change the target market from the Earth to space. Asteroids are rich in elements such as iron, nickel, aluminum, titanium, silicon, and even carbon and water in the form of ice. These minerals are not there in the form of ores, but they form a sufficiently large fraction of some asteroids that extracting and purifying them could be possible. Take also into account that space is rich in solar energy that can be transformed into electric power by PV panels, and that in space there is little to worry about in terms of pollution. Of course, putting together a mining industry in space is a task that has never been attempted so far and the unknowns are enormous. One thing is clear: it is not a task for humans. Humans cannot live in space unless they bring with them expensive and complex equipment, and it is extremely difficult to shield them from dangerous high-energy radiation [99]. Instead, space is a good place for robots, which can do the same things humans can do, better and more cheaply. And these robots could be made, at least in part, from materials obtained from asteroids. Our robot-children have a chance to inherit the solar system and they could build a completely new, silicon-based civilization [92].

The future is beautiful because it is always full of possibilities and what we do now will echo in eternity. As Seneca said in one of his letters,

“Every new beginning comes from some other beginning’s end.”