In the foreword to this book I mentioned that I “rode the crest of the wave” that was mechanics. This is a fitting analogy, for mechanics transmits information via the propagation of waves. As we know from the application of Newton’s second law to deformable bodies, the mechanical behavior of bodies can be modeled using continuum mechanics (see Chap. 9), and this approach results in hyperbolic partial differential equations in spatial coordinates. In other words, they are wave equations, meaning that when mechanical loads are applied to bodies, the effects of these loads are transmitted via traveling waves, and these mechanical waves are propagated in both fluids and solids.

The square of the speed of these mechanical waves is proportional to the material stiffness of the object in question, and it is inversely proportional to the density. The resulting wave speed is what we term “the speed of sound” because these waves can be heard if they are within the frequency range that is audible to humans. This speed varies somewhat from one material to another, but it is around 343 m per second in air at sea level, and 1484 m per second in water. In addition, it takes a not insignificant amount of energy to transport information via mechanical waves.
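
As a rough check on these numbers, here is a minimal sketch that estimates the wave speed from the square root of stiffness over density, using typical handbook property values (the specific figures are textbook assumptions, not taken from this chapter):

```python
import math

# Speed of sound estimates: c = sqrt(stiffness / density).
# For a gas the relevant stiffness is the adiabatic bulk modulus (gamma * p),
# for a liquid the bulk modulus K, and for a slender solid bar Young's modulus E.
# Property values below are rough handbook numbers (illustrative assumptions).

gamma_air, p_air, rho_air = 1.4, 101325.0, 1.225   # sea-level air
K_water, rho_water = 2.2e9, 998.0                   # water
E_steel, rho_steel = 200e9, 7850.0                  # steel bar

c_air = math.sqrt(gamma_air * p_air / rho_air)      # about 340 m/s
c_water = math.sqrt(K_water / rho_water)            # about 1480 m/s
c_steel = math.sqrt(E_steel / rho_steel)            # about 5000 m/s

print(f"air:   {c_air:7.0f} m/s")
print(f"water: {c_water:7.0f} m/s")
print(f"steel: {c_steel:7.0f} m/s")
```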

Fig. 13.1

Photograph of James Clerk Maxwell

As an example, when an earthquake occurs in the ocean off the coast of South America, it takes several hours for the resulting waves to reach the Hawaiian Islands, so that warnings can be sent out far enough in advance to permit the locals to evacuate to high ground before the tsunami arrives. Thus, while mechanical waves may appear to move rapidly, they move at a velocity that is slow enough to be perceptible to humans.

Time-Space

In 1861 James Clerk Maxwell (1831–1879) published the first of three papers formulating a model for electromagnetic phenomena [120] (Fig. 13.1). In that theory, he hypothesized that electromagnetic energy is transmitted via waves. As it turns out, visible light is one form of electromagnetic energy. Light travels at about 300,000,000 m/s, and this speed is far less sensitive to the medium the light passes through than mechanical wave speeds are to theirs. Furthermore, all electromagnetic energy travels via waves at the speed of light, and it takes very little energy to propagate these waves when compared to the energy necessary to propagate mechanical waves.

So think about the earthquake off the coast of South America. If the resulting mechanical waves propagated at the speed of light, there would be insufficient time for a tsunami warning to be useful, because the waves would strike the shore in Hawaii within a few hundredths of a second after the earthquake occurred several thousand miles away. Whew! We are lucky that mechanical waves travel slowly on Earth.

But there is a downside to this analogy. Because mechanical waves travel at a speed that is perceptible to humans (sound is also a mechanical wave), we tend not to relate very well to electromagnetic waves. I would venture to say that most humans perceive light as traveling, for all practical purposes, at infinite speed.

Light travels almost a million times faster than sound. So let’s suppose you have two significant others. One lives on the surface of the Moon, and the other lives about 400 m (roughly 1,300 ft) from you. You communicate with the one on the Moon by direct cell phone, and you communicate with the local one by speaking directly to him/her via two tin cans connected by a string. Since the Moon is on average about 375,000 km (235,000 mi) from the Earth, it will take about the same amount of time for your verbal correspondence to reach each recipient. So you will have to make the choice as to whether you want your significant other to be close by or whether you want a long distance relationship, because they will both receive your communications essentially simultaneously.
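
Using the round numbers quoted above, the two travel times are indeed nearly equal:

$$ t_{\text{Moon}} \approx \frac{3.75 \times 10^{8}\ \text{m}}{3 \times 10^{8}\ \text{m/s}} \approx 1.3\ \text{s}, \qquad t_{\text{sound}} \approx \frac{400\ \text{m}}{343\ \text{m/s}} \approx 1.2\ \text{s} $$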

Professor Maxwell knew exactly what he had stumbled upon. He understood full well that electromagnetic information could be transmitted about a million times faster than mechanical energy, and he also understood that it took a lot less energy to do so. Suppose you are a businessman. Suppose someone tells you that they can make a product that functions both cheaper and faster. And now here is the difference—they can make it a million times faster. Are you going to invest in that? This is a no-brainer. Of course you are going to invest in that!

And that, reader, is exactly what happened to humankind in the twentieth century. In the span of 100 years, our species transformed from a primarily mechanical world to an ever increasing electromagnetic one. It was a no-brainer! Cheaper and faster will (almost) always trump any other opponent. The lone exception occurs when it turns out later that it is toxic. We have thus far found no evidence that electromagnetism is harmful to human health, but heaven help us if we ever do, because we will be in a world of trouble!

Let’s look at time-space from a pragmatic viewpoint. A light year, despite its deceptive name, is not a unit of time. It is a unit of length defined to be the distance traveled by light in a year. Since light travels at about 299.79 × 10^6 m/s, when this speed is multiplied by the number of seconds in a year (365.24 days, or about 3.16 × 10^7 s), we get a distance of about 9.46 × 10^15 m.

So just how far is that? The Earth is 40,000,000 m in circumference (see Chap. 8). Thus, dividing the former by the latter, we determine that a beam of light could circle the Earth 236,500,000 times in a single year. Wow! So light travels an extremely long way in a single year. In fact, it travels so far that scientists don’t even think in terms of meters when they are determining distances in the universe. For this they use a light year—the distance light travels in a year.
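
For readers who want to verify the arithmetic, a short sketch using the figures quoted above is:

```python
# Reproduce the light-year figures quoted above (values from the text).
c = 299.79e6                                  # speed of light, m/s
seconds_per_year = 365.24 * 24 * 3600
light_year_m = c * seconds_per_year           # about 9.46e15 m

earth_circumference_m = 40_000_000.0          # about 40,000 km
laps_per_year = light_year_m / earth_circumference_m   # about 2.37e8

print(f"one light year = {light_year_m:.3e} m")
print(f"laps of Earth  = {laps_per_year:.3e}")
```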

Below is a table of approximate sizes of various things found in our universe, and for your edification, I have written everything in meters (a meter is about 1.1 yards, or 3.3 ft) rather than in light years (Fig. 13.2). I have a reason for this. I think that using light years, somewhat like using logarithms, obfuscates reality for the average person. To see how this confusion occurs, let’s take the U.S. national debt as an example. Our debt at the time of this writing is approximately 17 trillion U.S. dollars. That sounds like a lot of money, but if we view it in powers of ten, it doesn’t look so bad—$17 × 10^12. As a means of comparison, a million dollars is written in powers of ten as $1 × 10^6. Comparing the powers of the above two numbers, it is easy to get confused (sometimes I think our Congress does this) and think that a million is about half of 17 trillion, but the reality is that a million is only about 1/(17 × 10^6) of our national debt. To put it another way, in order to pay off the U.S. national debt, we would need 17,000,000 people to put up $1,000,000 apiece! This same reasoning applies to light years. So I don’t like to think in light years—it’s really misleading to the average person.

Fig. 13.2

Average length spans of various entities within the Cosmos

There are some very interesting revelations embedded within this table. For example, the universe is about 10^25 times as large as a human. A human is about 10^23 times as large as an electron. What that means is—if you were an electron, a human would look (within a couple of orders of magnitude) about the same size to you as the universe does to a human. Thus, if you were an electron within the head of a human, the toe of that human would be on the far side of your universe.

The point of this discussion is to introduce the concept of length scales. There are perhaps an infinite number of length scales in existence in our multiverse. We really aren’t sure, because we can’t see anything larger than our universe, or smaller than an electron (heck, we can’t even “see” an electron!). There may be objects larger than our universe, but we cannot see anything further than 13.7 billion light years from Earth for the simple reason that our universe has only existed that long, so that light from farther away could not exist (or so we think). On the other extreme, there is conclusive evidence that there are objects smaller than an electron. And perhaps most interestingly of all, both limits are growing (or shrinking) with every passing discovery.

This enormous variation in the size of things is so large as to be almost incomprehensible to humans. We can relate to things within about five or six orders of magnitude of our own size, but above or below that, we have a hard time comprehending the immensity or minuscule nature of the object in question.

What all this means is that with our simple view of the way things are from the viewpoint of our experience on Earth, we humans tend to get a distorted impression of the universe. That is probably the biggest reason that nobody thought of relativity before Albert Einstein (1879–1955). His views were just counterintuitive to most people [121].

To see how confusing things can be, consider the Sombrero Galaxy, shown below (Fig. 13.3). This galaxy is 28 million light years from Earth. More importantly, it is 50,000 light years in diameter! What this means is, the light coming to us from the near side of the Sombrero Galaxy is 50,000 years newer than the light coming to us from the far side. Hold on a minute! This doesn’t sound like a single image. It’s as if you took photos of yourself over a 50-year span of time and assembled them by increasing age in strips one inch wide from top to bottom. The result would not look like you at all! So what we are seeing from Hubble may look nothing like the Sombrero Galaxy actually is at any instant in time. More importantly, for objects that span such large distances, there is no way to ever view them as they really are at any instant in time. Thus, we have the unfortunate reality that time and space are inseparable when we talk about large distances, ergo time-space, united as one.

Fig. 13.3

The Sombrero Galaxy, 28 million light years from Earth. The dimensions of the galaxy, officially called M104, are as spectacular as its appearance. It has the mass of 800 billion Suns and is 50,000 light years across

To see how the speed of light distorts things visually, we really need to think of some thought experiments that the average person can relate to. Perhaps the best way is to imagine an everyday experience that most of us have encountered. Suppose that you have a very slow computer, one that is so slow that images tend to come up on the screen in slow motion. Almost everyone has encountered this sort of thing at one time or another. So parts of the image that show up first are like the beams of light that start out closest to Earth. The parts that show up later are like the beams of light that started out later. And here is a really bizarre twist on the whole image before you. Because of the time lapse, some parts of the image may actually look quite different if we could assemble them at the same instant in time, or they might in fact no longer exist at all, having been blown into the cosmos a long time ago. So time-space is not something that most humans will ever be able to grasp.

Electromagnetism has revolutionized the world we live in, from the telegraph, to the telephone, to the radio, to the television, to the computer, to wireless technologies. But there are some things that electromagnetism simply cannot do. For those things, we still need mechanics. So let’s get back down to Earth! This chapter is about the developments in mechanics in the twentieth century.

The primary subject of this book is classical mechanics. As such, the subject of quantum mechanics falls outside the scope of this treatise. As Richard Feynman once said, “Nobody understands quantum theory”. I therefore reserve the right to put off this subject for a future offering on nonclassical mechanics.

Computational Mechanics

Over the most recent half century, the rise of computers worldwide has had a profound effect on the field of mechanics. Much of the formal mathematical structure of our modern models in the field of mechanics was in place by the middle of the twentieth century. Unfortunately, these models were so mathematically complicated that their structure precluded accurate solutions by analytic means for all but the simplest (and often impractical) of circumstances. Perhaps this is best illustrated by an example.

In the field of elasticity, the equations describing the mechanical response of a linear elastic body at rest to externally applied loads were formulated in the early nineteenth century through the collective efforts of Navier, Cauchy, and Lamé, among others [122], as described in Chap. 9. The model is composed of fifteen coupled equations in fifteen unknowns (nine are differential equations, and six are algebraic). These unknowns are the three components of the displacement vector, the six components of the (symmetrized) stress tensor, and the six components of the strain tensor. Using the model, it is possible, at least in principle, to predict all fifteen of these output variables at every point in a linear elastic solid of arbitrary shape subjected to external loads. The problem comes in when the object is not simple in shape. And all one has to do to understand the importance of the shape of the structural object is to look under the hood of any automobile, wherein virtually no part is simple in shape. The shapes of parts in air- and spacecraft (wherein minimization of mass is of heightened importance) can be even more complex.
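
For reference, the fifteen equations alluded to here can be written compactly in index notation for the isotropic case (a standard textbook statement, included only as an illustration):

$$ \sigma_{ij,j} + f_i = 0 \;\; (3\ \text{equilibrium}), \qquad \varepsilon_{ij} = \tfrac{1}{2}\left(u_{i,j} + u_{j,i}\right) \;\; (6\ \text{strain–displacement}), \qquad \sigma_{ij} = \lambda\,\varepsilon_{kk}\,\delta_{ij} + 2\mu\,\varepsilon_{ij} \;\; (6\ \text{constitutive}) $$

where the u_i are the displacement components, the f_i are body forces, and λ and μ are the Lamé constants; the first nine equations are differential and the last six are algebraic, as noted above.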

Thus, for the better part of a century and a half, applied mathematicians such as Barré de St. Venant (1797–1886) went about the task of attempting to solve this complex problem for solid objects of varying shape, each difference in shape requiring a completely new solution. This was (and still is!) quite labor-intensive [123]. Much of the theory necessary to perform such modeling was transported to the United States by Stephen Timoshenko (1878–1972) after his immigration to the U.S. in 1922 [124] (Fig. 13.4).

Fig. 13.4

Photograph of Stephen Timoshenko

So-called “closed form” solutions (meaning mathematically exact) to these problems were pursued in great detail right up to the 1970s, but a change was in the wind. In the 1930s and 1940s, engineers were attempting to develop methods for obtaining approximate solutions to these problems for purposes of designing aircraft. Coincidentally, mathematicians such as Richard Courant (1888–1972) were developing approximate mathematical approaches for solving generic sets of coupled differential equations. These two initially separate approaches began to come together at precisely the same time that the high speed computer was coming into vogue in the U.S.—during the late 1950s.

Richard Feynman (1918–1988) was a Nobel prize-winning physicist who worked on the Manhattan Project during World War II. He is credited with inventing quantum computing. It seems that during the period when scientists at Los Alamos were attempting to determine exactly how much mass was needed to produce unstable nuclear reactions, thereby leading to an atomic explosion, Feynman was given this assignment. He responded by “drafting” a wave of brilliant math students from eastern seaboard universities who would otherwise have been shipped off to war [125].

The determination of how much mass is needed to create unstable nuclear reactions is a challenge in quantum mechanics (I know, I said I would not talk about this subject, but bear with me). Feynman set his army of mathematicians to calculating the statistical nature of this process by hand! He gave each math whiz a piece of paper telling him/her exactly what calculation to do, and then he handed the first one in the line a table with a number on it, requiring him/her to perform the calculation using that number, record the result on the table, and pass the table to his/her neighboring math whiz. This process went on for months and months, with each math whiz doing the same calculation over and over, day after day, month after month, and it eventually resulted in the determination of the amount of mass required in Little Boy and Fat Man, the two atomic bombs dropped on Japan in 1945.

Feynman later claimed that he had produced the first mainframe computer in history. He argued that his army of math whizzes worked essentially just like any other computer, and he was indeed correct. The only difference between Feynman’s “computer” and later computers was the speed with which the information was passed from one operator to the next. As we now know from our discussion of Maxwell’s model, this processing of information can be done at essentially the speed of light, and it is precisely this fact that allows us to perform incredibly complex mathematical operations so quickly today. Computers today transfer information electromagnetically, at speeds approaching the speed of light, and we have managed to keep increasing the speed at which they produce results by continuously decreasing the distance that the information must travel within them (thus giving rise to the field of nanotechnology, also attributed to Feynman). This is a bit of a stretch, but this continuous improvement in computer speed is at least due in part to mechanics, as our ability to make computer chips smaller and smaller is a direct result of our creation of mechanical devices for fabricating tiny chips.

Fortunately for humankind, a computational method was developed in the twentieth century, and this method is today termed ‘the finite element method’. The term ‘finite element’ was coined by Ray Clough (1920–), a professor at UC-Berkeley in 1960. This terminology stuck, as did the methodology developed by the rapidly growing group of scientists and engineers researching within this exciting field of mechanics. The finite element method was first applied to the elasticity problem described above, but it rapidly expanded to other problems in applied mathematics and physics. Wherever there were sets of partial differential equations to be solved, the finite element method found a home. This included elasticity, elasto-plasticity, elastodynamics, structural vibrations, fluid dynamics, viscoelasticity, heat transfer, and electrodynamics, to name just a few, and almost all of which fall under the umbrella of mechanics.

The finite element method works by assuming the form of the solution in spatial coordinates over a small subdomain (volume) of the problem of interest called a finite element, as shown in Fig. 13.5. This element is then joined with other elements to approximate the shape of the object of interest, and although this analysis is fundamentally approximate in nature, it can be performed as accurately as desired by simply using smaller and smaller elements (called refining the finite element mesh).
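
To make the idea concrete, here is a deliberately tiny one-dimensional sketch of the method (an illustration of the general procedure, not any particular commercial code): an elastic bar fixed at one end and carrying a uniform axial load is divided into two-node linear elements, the element stiffness matrices and load vectors are assembled into a global system, and the resulting linear equations are solved for the nodal displacements. All of the numerical values are illustrative assumptions.

```python
import numpy as np

# Minimal 1-D finite element sketch (illustrative only): an elastic bar of
# length L and axial stiffness EA, fixed at x = 0, carrying a uniform
# distributed axial load q.  Linear two-node elements are used; increasing
# n_elem refines the mesh.

L, EA, q = 1.0, 1.0, 1.0
n_elem = 8
n_node = n_elem + 1
h = L / n_elem

K = np.zeros((n_node, n_node))   # global stiffness matrix
f = np.zeros(n_node)             # global load vector

for e in range(n_elem):
    ke = (EA / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element stiffness
    fe = q * h / 2.0 * np.array([1.0, 1.0])               # consistent element load
    dofs = [e, e + 1]
    K[np.ix_(dofs, dofs)] += ke
    f[dofs] += fe

# Apply the essential boundary condition u(0) = 0 and solve the reduced system.
u = np.zeros(n_node)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])

# Exact tip displacement for this problem is q*L**2 / (2*EA).
print("FE tip displacement   :", u[-1])
print("exact tip displacement:", q * L**2 / (2 * EA))
```

Refining the mesh (increasing n_elem here) is exactly the “refining the finite element mesh” referred to above; for this simple one-dimensional problem the nodal values already agree with the exact solution.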

Fig. 13.5

Finite element analysis of a Volvo substructure assembly showing predicted vertical displacement component. Note experiment at upper left

When this technique was first introduced in the 1960s, computers contained insufficient random access memory (RAM) to be capable of solving really complex problems. But with the advance of computer power (see the discussion on Moore’s Law in Chap. 10), it quickly became possible to obtain solutions to more and more complicated problems computationally by using the finite element method. Thus arose a field of mechanics termed computational mechanics.

Today it is possible to model just about any problem governed by a set of differential equations in spatial coordinates, whether linear or nonlinear, by using the finite element method. The added complexity associated with accounting for time can be handled by utilizing well-understood time stepping algorithms. And furthermore, software has been developed that makes the solution process extremely user friendly. For example, it is now possible to use a hand-held global positioning system (GPS) device to survey the surface of virtually any three-dimensional object, and a software package will create an image of the shape of the object. Another software package will then be utilized to construct a finite element mesh, and all of this occurs in the span of no more than a few seconds. Predictions that would have taken an army of mathematicians several years to solve 50 years ago can now be solved by a single person using a laptop computer in a matter of minutes.
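
The phrase “time stepping” simply means marching the spatially discretized equations forward in small increments of time. A minimal sketch for a single degree of freedom, using the classical central-difference scheme (all numerical values are illustrative assumptions), looks like this:

```python
import math

# Minimal sketch of explicit central-difference time stepping for the single
# degree of freedom equation m*u'' + k*u = 0 (illustrative values only).
# Finite element semi-discretizations are advanced in time with schemes of
# this general kind.

m, k = 1.0, (2.0 * math.pi) ** 2      # natural period T = 1 s
dt, n_steps = 0.01, 100               # march across one full period

u_prev = 1.0                          # displacement at t = 0
u = u_prev - 0.5 * dt**2 * (k / m) * u_prev   # special first step (zero initial velocity)

for _ in range(n_steps - 1):
    u_next = 2.0 * u - u_prev - dt**2 * (k / m) * u
    u_prev, u = u, u_next

print("displacement after one period:", u)    # should be close to 1.0
```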

I like to cite the example of my youth. When I was a child growing up in the 1950s, our (even new!) cars used to break down all the time. The reason for this was that we did not have mathematical tools for designing the mechanical parts precisely. Thus, every mechanical part in a car was designed experimentally, by putting it out there and seeing how it performed. Lots of parts failed, and they had to be constantly replaced. Far worse, sometimes the failure of these parts led to the loss of human lives.

With the rise of the finite element method, most structural parts are now designed with finite element computer codes that ensure the parts will not break. Today, the average automobile will go one hundred thousand miles without a single major structural failure anywhere on the vehicle. This is just a single example of how computational mechanics has shaped the modern world, and this technology has reached maturity only in the past few years.

Mechanics of Materials

Over the past century there have been amazing developments in materials technologies. Perhaps the most far-reaching developments are due to the invention of plastics, but there are numerous other materials that have burst onto the scene, including advanced concrete, asphalt concrete, high strength metals, polymer composites, and metal-matrix composites. A number of adaptive materials, also termed ‘smart’ or ‘active’ materials, have also been developed. The deformable body mechanics of these materials can be quite complex.

In my career I have been involved in research dealing with polymers, plastics, polymer composites, metals, metal-matrix composites, geologic salt, sea ice, mud, geologic soils, rubber, human tissue, wood, and rocks. Because I find it to be the most interesting material of all, I will discuss just one example herein—asphalt concrete (Fig. 13.6).

Fig. 13.6

Photograph of a core sample of asphalt concrete

I have been working in the field of asphalt technology for many years. Asphalt (also called bitumen) is the gooey stuff that comes out of the wellhead when an oil well strikes crude oil. During the refining process much of the heavy stuff that settles out is asphalt. It’s a good thing we have roadways, because otherwise most asphalt would be useless since it can’t be utilized as a fuel. So asphalt is the cheapest binder that we have naturally available in large quantities on this planet. And in most cases, it is nothing more than dead creatures that have been compressed over a very long period of time.

We call the composite material made by mixing asphalt with geologic aggregate asphalt concrete, and this material is used to surface roadways throughout the world. In fact, asphalt concrete is one of the most commonly used structural materials on Earth. The reason for this is quite obvious: asphalt concrete is cheap!

Asphalt concrete is a typical example of what we term a “composite material”. This is a catch-all phrase for a material that is made by mechanically (as opposed to chemically) combining two or more separate constituents. The objective is to take one material that displays poor performance and embed it with another constituent that will improve the poor performance characteristic of the former material. When the two constituents undergo a chemical process as a result of their combination, the resulting material is called a compound rather than a composite. But when the two or more constituents undergo little or no chemical change, but instead combine only mechanically, the new material is called a composite.

Asphalt is technically a liquid, albeit one with a very high viscosity. It is also quite compliant, being unable to withstand significant loading. When I was a child, there used to be a chemical plant near our house. Asphalt was put in large oil drums and stored for transportation to dumping sites. We would take a penny and place it on top of the surface of the asphalt filled drum. Although the surface appeared to be solid, if you came back the next day, the penny could be seen to be sinking into the surface very slowly. After a week, the penny would have disappeared completely from view. Thus, driving a vehicle over something this compliant and viscous is a lost cause.

But asphalt is so cheap! Thus, engineers have utilized its most admirable property to mitigate its least meritorious ones. Asphalt is really sticky, thus it makes a great binder with whatever is embedded within it. And stone aggregate is (literally) dirt cheap, making it the perfect material to embed in asphalt, thereby creating asphalt concrete. Unfortunately, asphalt insists on behaving badly much of the time. This bad behavior can lead to premature failure of the roadway, and in some cases it can even put drivers in mortal danger.

I never cease to be amazed at the complexity of asphalt concrete. The performance of roadways made of asphalt and aggregates depends on just about anything and everything that can be imagined, making it not only one of the cheapest materials, but also one of the most complicated materials known to humankind (perhaps second only to living tissue, especially that of humans). Asphalt roadways crack, rut, separate, buckle, degrade (called aging), discolor, and spall, due to such things as long-term cyclic tire loadings, rain, snow, ice, temperature variations, other environmental effects such as chemical spills, and even impacts from foreign objects such as IEDs (improvised explosive devices). All of these are problems associated with mechanics (Fig. 13.7).

Fig. 13.7

Photograph of asphalt pavement that is both rutted and cracked

The design of a roadway is an open-ended design problem, meaning that there are numerous designs that may satisfy all of the design constraints, but of course, we are seeking the cheapest and safest solution that will work. And this is not easy to predict at all.

Part of the reason for the difficulty is that the loads applied to the roadway are not always well controlled. For example, tires that are either underinflated or overinflated will cause the evolution of roadway damage to increase dramatically, to the point that a single large truck with underinflated tires can cause substantial cracking and loss of roadway life. Furthermore, increasing the loads on the roadway just a small amount can increase the rate of roadway degradation exponentially, meaning that it only takes a few trucks that are overweight to completely destroy a roadway. For the same reason, smaller vehicles such as automobiles and motorcycles normally do almost no damage at all to roadways.

I remember I used to live on an asphalt roadway in the country. It worked fine for ten years, and then one day they struck oil. The big oil tankers that drove up and down that country road destroyed it in less than a year.

There is perhaps one hundred billion U.S. dollars worth of asphalt concrete poured on our planet each year. That is one with eleven zeros!!! That is a LOT of money. Suppose that we could decrease this cost by just 50 %. We could save fifty billion dollars a year. This could conceivably be done, but it has not because robust models for predicting pavement performance have not yet been developed. So let’s suppose that the governments of the world got together and decided to invest in the development of a model that could improve pavement models to the point that the amount of asphalt concrete poured per year could in fact be decreased by 50 %. I estimate that this problem could be solved with an investment of no more than 500 million dollars (and most likely a LOT less even than that). That means that in the first year that this new model is in use, the world will save 49.5 billion dollars! Why don’t we solve this problem? We have the scientific know-how. We have the resources. We have the technology, and the solution to the problem of asphalt concrete is primarily a problem in mechanics, but it also appears that the lack of a solution is related to politics.

The most important material to be developed in my lifetime is undoubtedly plastics. The effect of plastics on our world is nothing short of miraculous. The next time you think about it, go outside and bang about on your automobile. You will find an amazing number of parts made of plastic. Just 50 years ago, there was virtually no plastic at all in automobiles.

Materials development is one of the primary drivers of technology in humankind. From the Stone Age, to the Bronze Age, to the Iron Age, to the Modern Age, each new material has wrought fundamental changes in the way that humans live. Today more than ever before new materials affect our lives. Mechanics has played an enormous role in the development of these new materials.

Mechanics has been utilized both in the development and deployment of new materials across our planet. Utilizing mechanics models we are today able to design structural components so that they will not fail due to excessive deformations or fracture. Thus, mechanics of materials has contributed literally to the shapes in our modern world today.

Massive Construction Projects

The twentieth century produced massive construction projects not seen since the great pyramids were built nearly five millennia ago. We can even say that in some cases we have actually outdone the ancient Egyptians.

The Suez Canal

Although the Suez Canal was not actually built in the twentieth century, it seems like an appropriate project to begin with. At the time that it was proposed, nothing quite so audacious had been attempted since ancient times (there actually was a canal connecting the Mediterranean to the Red Sea in antiquity). The canal as originally constructed was approximately 162 km long, connecting the Mediterranean to the Red Sea. Completed in the year 1869 after 10 years of construction, the project linked Europe to the East by water, reducing waterborne travel time by months (Figs. 13.8 and 13.9).

Fig. 13.8

The Suez Canal viewed from space

Fig. 13.9

Depiction of one of the first Suez Canal crossings

The canal was built under the guidance of Ferdinand de Lesseps (1805–1894) of the Suez Canal Company (Fig. 13.10). It is estimated that more than 1.5 million workers participated in the project, and that literally thousands died before the project was completed, making it one of the most costly construction projects in history in terms of human lives.

Today the canal is 193 km long (it has been lengthened and widened several times since its original construction) and approximately 205 m wide. It is built entirely at sea level, meaning that no locks are necessary. Thus, water can flow freely between the Red Sea and the Mediterranean. This was actually an issue during construction, as some people believed that sea level might be different in the Mediterranean and the Red Sea, so that when the canal was completed, one might empty into the other, like a bathtub emptying out. Of course, mean sea level is essentially the same at both ends, so no such problem developed.

The canal took 10 years to complete and relied heavily on forced (corvée) laborers, mostly from Egypt. Although steam engines were available for both digging and transporting dirt, much of the construction was done by hand by an average workforce of about 30,000 laborers. This was truly the first massive project using mechanics in modern times.

Fig. 13.10

Portrait of Ferdinand de Lesseps

The Corinth Canal

The Corinth Canal, completed in 1893, connects the Gulf of Corinth with the Aegean Sea in Greece, a distance of some 6.4 km. Although several attempts were made to construct a Corinth Canal in antiquity, they all failed due to the massive amount of stone that had to be removed (the rock walls reach a peak height of about 90 m above the water). The modern project, although not as massive as the Suez Canal project, was nevertheless enormous in scope because the canal was quarried (mostly by hand) from sedimentary stone. The canal walls, inclined at an angle of about eighty degrees to the horizontal, are an impressive sight. Unfortunately, the base width (21.3 m) is too narrow for modern tankers, so the canal is used mostly by tourist ships today (Fig. 13.11).

Fig. 13.11

Aerial photograph of the Corinth Canal

The Panama Canal

After completing the Suez Canal, Ferdinand de Lesseps attempted to dig a canal at Panama beginning in 1881. Unfortunately, this nearly 10-year effort failed with the loss of approximately 22,000 lives, mostly due to yellow fever and malaria. The tropical jungle, together with frequent torrential rains that destabilized the soil, doomed the project to failure.

Fig. 13.12

The Panama Canal viewed from space

A second attempt was undertaken by the United States beginning in 1904, and this resulted in completion of the canal in 1914 (Fig. 13.12). By that time more advanced construction equipment was available than when the Suez Canal was built. This included the Panama Railway, a heavy duty railroad designed and constructed for the purpose of hauling the heavy equipment in, and the quarried material out. In addition, modern steam shovels and dredges were utilized for much of the canal construction. Finally, an enormous infrastructure had to be built in order to accommodate the needs of the thousands of workers who participated. The scale of this construction project was indeed larger than anything seen on Earth since the building of the Great Pyramids (Fig. 13.13).

Fig. 13.13

Photograph taken in 1913 showing the Panama railway, the steam shovels, and the locks in the Panama Canal project

The canal is 77.1 km in length, but more importantly, it traverses a hilly region, connecting near its center to Lake Gatun at 26 m above sea level. Thus, it is necessary to employ locks within the canal, a challenge that made the project considerably more difficult than the Suez Canal. Completion of the canal cut average travel time from the Atlantic nations to the Pacific ones in half (Fig. 13.14). To date nearly 900,000 ships have transited the canal, making it the most successful canal in history. Because of excessive demand, the canal is at the time of this writing undergoing a much-needed expansion.

The Panama Canal is to this day perhaps the most ambitious construction project employing mechanics ever undertaken on Earth. Indeed, the American Society of Civil Engineers has named the Panama Canal one of the seven modern wonders of the world.

Fig. 13.14

Photograph of the SS Kroonland transiting the Panama Canal in 1915

The Hoover Dam

The twentieth century saw the construction of many enormous hydroelectric dams. Perhaps the most famous of these is the Hoover Dam, built on the Colorado River in Southern Nevada. This dam was constructed in a 5-year span from 1931 to 1936, and at 221.5 m was the tallest dam in the world at the time, but this massive project opened the floodgates, as it were. Today there are more than forty dams over 200 m in height, and the Hoover Dam has slipped to number 23 in total height (Figs. 13.15 and 13.16).

Fig. 13.15

Photograph of Lake Mead slowly filling on the upstream side of the Hoover dam in 1935

Fig. 13.16

Downstream view of the Hoover dam

Even in ancient times the Romans understood that structures spanning rivers should be curved toward the upstream direction, much like an arch laid on its side, in order to keep the structure in compression, as evidenced in the construction of the Pont du Gard (see Chap. 3). And so it was with the Hoover Dam. It was designed as a curved structure, with the apex on the upstream side. Furthermore, in order to support the massive weight of the dam while at the same time withstanding the water pressure (caused by Lake Mead, the reservoir created by the dam) that increases linearly with depth, it was necessary to make the dam thicker with depth. Thus, the completed design was to be 14 m thick at the top, gently widening to 200 m thick at the base (and 221.5 m in height, as mentioned above). The sheer ingenuity of the conceptual design was reminiscent of the roof structure of the Pantheon, built by the Romans nearly two millennia earlier. And although the Pantheon dome was also made of concrete, no concrete structure of this size had heretofore been constructed on Earth.

Prior to construction it was necessary to lay a railroad from Las Vegas to the site. Upon completion of this project a workers’ city was built on the site (now called Boulder City), and the massive infrastructure necessary to complete the project was transported to the site.

As for the actual dam project itself, it was first necessary to divert the water from the Colorado River so that the dam could be constructed. This was accomplished by digging four massive tunnels through the surrounding rock faces. Two were on the eastern (Arizona) side, and two were on the western (Nevada) side of the river. Each of these tunnels was 17 m in diameter, and they spanned a total combined distance of 5 km. Once the tunnels were completed, a temporary cofferdam was constructed that diverted the water from the river into the tunnels. In order to protect the project against possible flooding of the river, two additional cofferdams were constructed. The upper cofferdam, made from rock, was 29 m high and 230 m thick at its base (Fig. 13.17).

Fig. 13.17

Photograph of a jumbo rig used to dig tunnels during the construction of Hoover dam

Next it was necessary to remove loose rocks from the canyon walls so that the massive dam would have a firm foundation. A group of workers called “high scalers” carried out this dangerous task using jackhammers and dynamite. The walls of the canyon were subsequently filled and reinforced, so that the project was now prepared for the actual construction of the dam.

The pouring of the concrete for the dam commenced in June of 1933. Careful modeling had indicated that pouring the dam in a single casting would mean that the concrete would require more than a century to cure properly. This is another problem in mechanics. When concrete is poured it is in a liquid state, having been mixed with water. The water must then slowly diffuse to the surface and evaporate, and this process can be modeled with an application of conservation of mass called Fick’s law. The diffusion of water outwards causes shrinkage to occur, and if the structure is not designed properly the shrinkage will induce stresses that are sufficiently large to cause the structure to undergo multiple fractures.
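
Fick’s second law in one spatial dimension, with c the moisture concentration and D its diffusivity, together with the curing-time scaling it implies, can be written as follows (an illustrative scaling argument, not the actual Hoover Dam analysis):

$$ \frac{\partial c}{\partial t} = D\,\frac{\partial^{2} c}{\partial x^{2}}, \qquad t_{\text{cure}} \sim \frac{L^{2}}{D} $$

where L is the distance the water must travel to reach a free surface. Because the curing time grows with the square of L, halving the pour thickness cuts the curing time roughly by a factor of four, which is why the dam was poured in many small blocks rather than as a single monolith, as described below.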

An example of a naturally formed structure that fractured as it cooled and cured is the Devil’s Tower, in northeastern Wyoming. This monolithic structure, rising 386 m above the surrounding terrain, was formed about 40 million years ago, when the region was volcanically active. Although scientists are not quite sure how the tower was formed, there is one thing they do agree on—as the molten lava that formed the tower cooled, it fractured into a pattern that is hexagonal in horizontal cross section, with the columns extending vertically, thus creating the curious geometric pattern that we observe today. Similar naturally formed geologic patterns are found at the Devil’s Postpile in California and the Giant’s Causeway in Northern Ireland (or simply check out a nearby mudflat that has dried out). Had the Hoover Dam not been designed properly against diffusion-induced fracture, it would most likely have looked something like the Devil’s Tower, thus obviating its use as a dam (Fig. 13.18).

Fig. 13.18

Photograph of the Devil’s tower in northeastern Wyoming

As one might expect, the greater the distance the water must diffuse to reach the surface, the longer it will take for curing to reach completion. Thus, it was determined that in order to reduce the curing time and allow free shrinkage so that no cracking occurred, it would be necessary to pour the concrete in a series of blocks, as shown in Fig. 13.19. These blocks were typically poured in lifts of about 1.5 m (5 ft) in height, and were as much as roughly 15 m (50 ft) square in cross section. Each block contained steel pipes that were used to run cool water through the blocks, so that curing progressed at a rate that resulted in proper curing and contraction without fracturing the concrete. Once the blocks were cured, the pipes were filled, as were the spaces between the blocks. This laborious process was carried out over a nearly 2-year period, with a total of 2.5 million cubic meters of concrete being poured from massive buckets suspended from cranes. This is enough concrete to build a two-lane highway across the entire United States!

Fig. 13.19

Photograph showing the formworks for the massive concrete columns in Hoover dam

The completed dam was dedicated by President Franklin D. Roosevelt on September 30, 1935 (former President Herbert Hoover, for whom the dam is named, was not in attendance). The project was completed in just over 5 years, with an average workforce of about 4,000 laborers, and a total of 112 deaths during construction.

There are also several other dams that, although not overly impressive in height, are distinguished by their enormous lengths. Perhaps the first of these was the Aswan Low Dam, built at the first cataract of the Nile in 1899–1902. At 1950 m in length, the dam was the longest in the world at the time. Unfortunately, this dam had to be raised on several occasions because the Nile overflowed it. The Aswan High Dam was subsequently built 6 km upstream in 1970, thus creating Lake Nasser south of Aswan, and doing away with the annual flooding of the Nile for the foreseeable future (Figs. 13.20 and 13.21).

Fig. 13.20

Photograph of the Aswan high dam from space

Fig. 13.21

Satellite photo of Lake Nasser, the world’s second largest artificial lake

The Relocation of Abu Simbel

One of my favorite construction projects involving mechanics in the twentieth century was the reconstruction of the ancient Egyptian Temples at Abu Simbel. These two temples were built in the thirteenth century BCE by Ramses II to show his power to the Nubians way up the Nile beyond Aswan. Unfortunately, the temples were built adjacent to the Nile River at a level near that of the river.

The construction of the Aswan High Dam between 1960 and 1970 created Lake Nasser. Since the water level of the lake was to be much higher than the level of the Nile, the lake would have inundated these invaluable temples. Therefore, after considering various options, the Egyptian government decided to move the temples to higher ground!

Working under the aegis of UNESCO, a team of international engineers devised a plan to transport the temples to the top of the cliff above the river gorge. Construction began in 1964. The new site was directly above the old one, which had been carved into the cliffs along the Nile. Since the new location lacked a natural cliff face, two artificial hills were first built to receive the temples. Next, the temples were carefully cut with large power saws into blocks averaging 18,000 kg in weight. These were then lifted with cranes 65 m to the top of the cliff and fitted together in exactly the same configuration in which they had originally been constructed more than three thousand years earlier.

It was important to retain precisely the same orientation during this process, as the temples had been designed to align with the Sun at certain times of the year. In addition, the interiors of the temples were rather ornate and large, so that the project involved much more than reassembling the surface features of the two temples. This truly amazing feat of modern civil engineering involving mechanics was completed in 1968, thus allowing thousands of tourists to visit the site every day. One of the greatest treasures from Egyptian antiquity was thus preserved for us to see today (Figs. 13.22 and 13.23).

Fig. 13.22

Photograph of Ramses II’s temple during reconstruction

Fig. 13.23

Photograph of the reconstructed Ramses II’s temple at Abu Simbel

The Venice MOSE Project

Another rather unique dam project that is underway today is the Venice MOdulo Sperimentale Elettromeccanico (MOSE) Project. When completed, this massive construction project in mechanics is expected to mitigate flooding in Venice by utilizing ingenious pop-up dams that will surface only during periods of high tides (Figs. 13.24 and 13.25).

Fig. 13.24

Satellite photo of Venice showing main entrance to the Lagoon

Fig. 13.25

Schematic drawing of dams utilized in Venice MOSE project

The Chunnel

Another massive construction project was the Chunnel, the undersea rail tunnel connecting England to France. Completed in 1994 at a cost of £4.65B (in 1985 prices), this 50.5 km tunnel is a marvel of modern mechanics. While the train fare may be slightly more expensive than the airfare from London to Paris, once the total cost and time of airport transfers are included, it is quite a bargain (Fig. 13.26).

Fig. 13.26

Photo of full scale model of section of Chunnel at National Railway Museum in York, England

The Egyptian pyramids astounded the world for nearly five millennia. But with the rise of modern mechanics, applications of this science have produced an ever increasing plethora of truly massive construction projects on Earth. It is no stretch to say that in our time we humans believe that virtually anything can be built on Earth. Such is the impact of mechanics on our planet.

Modern Failure Mechanics

Sometimes human-made structures fail to perform as intended. There are numerous possible modes of physical failure in solids. Broadly speaking, failure can be induced mechanically, chemically, thermally, or even electromagnetically. A simple example of a thermally induced failure would be melting of a solid. For example, the failure of the Space Shuttle Columbia in 2003 seems to have been induced at least in part by overheating and subsequent melting of the structure during reentry to the Earth’s atmosphere. Of course, in this text we are concerned only with the first of these—mechanically induced failure.

Mechanical failure normally occurs in one of several different ways, including permanent deformation, fracture, excessive deformation, or structural instability. Consider for example failure due to excessive deformations. A case in point is that of the C-5 military transport aircraft, built in the 1960s. This aircraft was the largest ever built in the U.S. Unfortunately, when the aircraft was fully loaded to its design configuration, the wingtips were capable of touching the ground on landing. This is an example of failure by excessive deformations. Thankfully, this type of failure does not occur too often, because it is actually one of the easiest failure modes to predict. That is due to the fact that our modern continuum mechanics models predict deformations, usually quite accurately.

Tacoma Narrows Bridge Collapse

A very famous failure toward the middle of the twentieth century was the Tacoma Narrows Bridge collapse of 1940. The bridge failed dynamically due to wind-induced self-excited oscillations (often loosely described as resonance) brought about by a steady wind through the narrows. This is a mechanically induced phenomenon within the discipline called aeroelasticity, in which the aerodynamic forces due to the wind interact with the structure in such a way that the forces applied by the wind change with time due to the deformations of the bridge. In the case of the Tacoma Narrows Bridge, this interaction caused the steady wind to induce cyclic loading whose displacement amplitude grew over time until the bridge collapsed. It was subsequently determined that the bridge decking was too flexible for the wind loads applied to the bridge (Fig. 13.27).
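
A cartoon of this mechanism is a single degree of freedom oscillator in which the wind supplies a force component proportional to the velocity of the deck (an illustrative model only, not the actual analysis of the bridge):

$$ m\ddot{x} + c\dot{x} + kx = c_{a}\dot{x} \quad \Rightarrow \quad m\ddot{x} + \left(c - c_{a}\right)\dot{x} + kx = 0 $$

When the aerodynamic coefficient c_a exceeds the structural damping c, the effective damping becomes negative and the amplitude of oscillation grows with every cycle until the structure fails.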

Fig. 13.27

Photograph of the Tacoma narrows bridge collapse

One mode of failure that was studied extensively over most of the twentieth century is the subject of fracture mechanics. Solids are distinguished from fluids by the fact that they can undergo fracture, and this process can often (though not always) lead to failure of a structure to perform its intended task. Cracks can occur on very large length scales, such as Half Dome at Yosemite National Park (see Chap. 11).

The pivotal question with a crack in a solid is: when will it grow, and where will it go? I suppose that is really two questions. And as it turns out, neither of them has an easy answer. On the surface of it, one would have thought that the issue would have been resolved within the field of materials science, but that has not turned out to be the case. Instead, the problem has been at least partially solved using the science of mechanics.

Early twentieth century structural failures such as the sinking of the Titanic (see below) led researchers to study the underlying cause of crack propagation in solids. The first significant paper on this subject was published by Charles Inglis (1875–1952), a British civil engineer, in 1913. In his seminal paper, Inglis noticed that rivet holes in the hulls of ships tended to be elliptical in shape, leading him to study stress concentrations at the edges of elliptical defects.

Drawing on Inglis’ ground-breaking work, Alan A. Griffith (1893–1963) proposed in 1921 that a crack of length a can be predicted to propagate when the available energy for crack growth, G, exceeds the required energy for crack growth, G_c, a material property. This concept can be said to have been the birth of modern fracture mechanics. Stated mathematically, it reads as follows [126]:

$$ G > G_{c} \Rightarrow \dot{a} > 0 $$

where the dot over a denotes the time derivative and the symbol \( \Rightarrow \) means ‘implies that’. This model has turned out to be quite accurate for many materials.

While the above concept is simple, its consequences are profound. Furthermore, its implementation and deployment are exceedingly complicated. First, one must have a solid object with a crack in it. Then, one has to know the material property G_c, the intrinsic ability of the material to resist crack extension. This property is hard to measure experimentally, but most of the time it can in principle be done, and in many materials it is in fact a material constant. Armed with this property and the ability to predict stresses in structural components using the finite element method (see the section above on computational mechanics), it is in principle possible to predict when a crack will grow and where it will go. Further seminal works on the mechanics of fracture were reported by D. S. Dugdale, Grigory Barenblatt (1927–), George R. Irwin (1907–1998), and James R. Rice (1940–), to name a few. The above explanation is an oversimplification of this still developing field of mechanics, but it serves to elucidate the complexity associated with the subject of fracture mechanics.
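
For the textbook case of a through-crack of length 2a in a large plate under remote tension σ (a standard special case offered here only for illustration), the available energy release rate and the resulting critical stress are, for plane stress,

$$ G = \frac{\pi \sigma^{2} a}{E} \quad \Rightarrow \quad \sigma_{c} = \sqrt{\frac{E\,G_{c}}{\pi a}} $$

where E is Young’s modulus. Longer cracks therefore fail at lower stresses, which is why small flaws in large structures matter so much.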

The above fracture model is just one example of a failure model. The twentieth century produced a plethora of new design methodologies for structures that are based on the mechanics models introduced in this century. The important point to be noted herein is that the continuum mechanics models developed in the nineteenth century do not predict failure in and of themselves. They must be adjoined with additional physically inspired mathematical constraints such as the Griffith criterion for crack growth mentioned above.

The interested observer need only look as far as automobiles, bridges, buildings, aircraft, spacecraft, and modern windmills to see the worldwide impact of these failure models. The seminal concept is this: if one can predict failure of an object theoretically a priori, then one can utilize this information to design the object so that failure is completely obviated. This is a powerful outcome of the ability to predict the future. Using this concept, the design of essentially all modern load carrying structures emanates from the mechanics concepts developed in the nineteenth century. Unfortunately, our understanding is still imperfect, so that failures still occur, as we will see below.

Sinking of the Titanic

A successful structural design requires that the object satisfy all of the design constraints. If the design fails to satisfy even one of the design constraints it has failed. One of the most famous structural failures in modern times is the case of the ill-fated RMS Titanic. As the reader may well know, at the time the ship was built (1912) it was the largest passenger liner in the world. It went down in the North Atlantic on its maiden voyage when it struck an iceberg. There were 1,517 persons killed in the disaster (Fig. 13.28).

Fig. 13.28

Photo by F.G.O. Stuart of RMS Titanic departing Southampton on April 10, 1912

Investigations determined that the ship went down due to a complex series of related design flaws. Large ships are designed with bulkheads spaced along their length so that if the hull is pierced, several of the bulkhead-separated compartments can flood without causing the ship to sink. In the case of the Titanic, the ship was designed to remain afloat if four compartments were flooded. Unfortunately, when the ship struck the iceberg, it scraped along the starboard (right) side for nearly the entire length of the ship. The ship had exposed rivets along its length, and these rivets were made of somewhat brittle steel that may have been further embrittled by the cold waters of the North Atlantic. A more recent investigation of the debris field adjacent to the ship on the floor of the Atlantic has disclosed the presence of large stones that have been traced to the coast of Greenland. These stones may have been embedded in the iceberg, thereby providing sharp edges that further enhanced the cutting ability of the iceberg as it slid along the hull of the ship.

At any rate, the iceberg both sheared off the heads of these rivets and produced cracks in the hull, allowing water to begin flooding the first five compartments. Furthermore, as the ship canted from the flooding in these compartments, water poured over the tops of some of the bulkheads (another design flaw), causing the aft compartments to flood more rapidly. The Titanic disaster caused a worldwide outcry that led to significant changes in the design of modern ships.

The Failure of the Space Shuttle Challenger

Although the design constraints are often known precisely, this is not always the case. An example of a case wherein the design constraints were not known a priori sufficiently well to avert structural failure is the 1986 failure of the Space Shuttle Challenger, which broke apart during launch on January 28, 1986 (Fig. 13.29).

The Rogers Commission was appointed by President Reagan after the disaster for the purpose of determining the cause of the failure. Over the succeeding months, this commission gathered information, finally determining that the major contributory factor to the failure of the Challenger was the unusually low temperature at launch (about 2 °C, or 36 °F, colder than at any previous launch). As pointed out on television by Caltech Professor Richard Feynman (see the section above on computational mechanics), a member of the commission, the O-rings in the shuttle’s solid rocket booster casings were embrittled at this low temperature, causing them to fail during launch (you can check out the film clip of Dr. Feynman on YouTube). All seven members of the crew were killed.

The Rogers Commission found another more serious cause for the disaster. They determined that engineers and scientists had warned upper management that launching at such low temperatures could cause failure of the shuttle. However, senior management had overruled the technical staff and allowed the launch to proceed. As a result of these findings, there was a major overhaul of the procedures used at both NASA and other scientific agencies in the United States. Management is no longer allowed to overrule technical staff in such matters. This is an example of a mechanical failure that was caused by poor oversight by managers.

Fig. 13.29

Photos of the Challenger disaster: explosion on the left; Challenger underwater on the right

Other modes of failure can be much more difficult to predict. Perhaps the most difficult mode of failure to predict is failure of structural components due to long-term cyclic loading. Somewhat incongruously, the prediction of failure by this mode is termed “life prediction”. Perhaps “death prediction” would be more appropriate, but it may be a bit too ghoulish a term.
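The text does not go into the machinery of life prediction, but one standard ingredient in such analyses is a crack-growth relation such as the Paris law, which relates the growth of a crack per load cycle to the range of the stress intensity factor. The following minimal sketch, with entirely invented parameter values, integrates that relation cycle by cycle to estimate how many load cycles it takes a small crack to grow to an assumed critical size; it is an illustration of the general idea only, not a method endorsed by this chapter.

```python
import math

# Minimal illustration of fatigue "life prediction" using the Paris crack-growth
# law, da/dN = C * (dK)^m, with dK = Y * dsigma * sqrt(pi * a).
# All parameter values below are invented for illustration; real life predictions
# use measured material constants, load spectra, and inspection data.

C = 1.0e-11          # Paris-law coefficient (m/cycle per (MPa*sqrt(m))^m), hypothetical
m = 3.0              # Paris-law exponent, hypothetical
Y = 1.12             # crack geometry factor, hypothetical
dsigma = 100.0       # stress range per load cycle (MPa), hypothetical
a = 0.001            # initial crack length (m), hypothetical
a_crit = 0.02        # crack length at which fast fracture is assumed (m), hypothetical

cycles = 0
while a < a_crit:
    dK = Y * dsigma * math.sqrt(math.pi * a)   # stress-intensity-factor range this cycle
    a += C * dK ** m                           # crack growth this cycle
    cycles += 1

print(f"estimated life: {cycles:,} cycles")
```

Even this toy version shows why the problem is hard: the predicted life is extremely sensitive to the assumed initial crack size and material constants, which is precisely the information that is most uncertain in service.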

1988 Aloha Airlines Disaster

An example of failure due to cyclic loading is the 1988 Aloha Airlines disaster, in which corrosion due to the salty environment of the Hawaiian Islands contributed to the development of fatigue cracks in the aircraft fuselage. As shown below, a large portion of the fuselage tore away from the aircraft in flight. A flight attendant was swept out to her death, but miraculously, the aircraft landed safely and no one else was seriously injured. As a result of this world-famous accident, safety inspections have been implemented that have thus far prevented further commercial aircraft disasters of this type (Fig. 13.30).

Fig. 13.30

Photo of the Aloha Airlines disaster

I-35 Minneapolis Bridge Collapse

Another recent major failure caused by chemically induced fracture and fatigue was the collapse of the I-35W Mississippi River bridge in Minneapolis on August 1, 2007. A post-mortem inspection of the bridge revealed that the gusset plates connecting the beams had corroded over time, thus reducing the load-carrying capacity of the plates. This corrosion, together with long-term overloading of the bridge caused by adding two lanes of traffic to the initial design, contributed to unstable crack propagation and the collapse of several sections of the bridge. Thirteen people were killed and 145 were injured. Although inspections have been stepped up subsequent to this tragedy, our nation's aging infrastructure is likely due for more structural failures unless further safety measures are implemented (Fig. 13.31).

Fig. 13.31

Photo on the left of the Minneapolis bridge collapse; photo on the right showing a fracture in a gusset plate

The UA Flight 232 Crash

Small cracks subjected to cyclic loadings can also lead to long-term catastrophic failure, as in the case of the crash of UA Flight 232 (a DC-10) at Sioux City on July 19, 1989. It was found that cracks in the stage 1 fan disk of the number 2 (tail-mounted) engine propagated in an unstable manner, thereby causing portions of the engine to break off and damage the aircraft's tail and flight controls [127]. The aircraft subsequently broke up during an emergency landing at Sioux Gateway Airport. Although 185 of those on board survived, 111 were killed. The fan disk was found in a cornfield 3 months after the disaster, and it was reconstructed (Fig. 13.32).

Fig. 13.32

Photo of the reconstruction of the stage 1 fan disk from the Sioux City crash of flight UA 232 on July 19, 1989. Note the large crack in the disk

The ability to predict when a crack will grow and where it will go depends on the material utilized in the solid under consideration. Brittle solids are generally the class of materials for which fracture can be most easily predicted. However, when the brittle solid of interest is not isotropic, such as the laminated continuous carbon fiber composites currently deployed in the Boeing 787 Dreamliner, the Airbus A350, and the Airbus A380, the prediction of fracture is an advanced topic in mechanics (Fig. 13.33).

Fig. 13.33

Boeing 787 Dreamliner on the left; Airbus A380 on the right

The Chernobyl Reactor Meltdown

One of the most significant failures of the twentieth century was the meltdown of the nuclear reactor at Chernobyl on April 26, 1986. This disaster was initiated by the mechanics of nuclear fission at the atomic scale, and it progressed to an environmental disaster that is still being assessed. Various agencies estimate that the death toll due to cancers caused by this accident will range between 25,000 and 200,000 (Fig. 13.34).

Fig. 13.34

Photograph taken in 2006 of the Chernobyl Sarcophagus

The Tōhoku Earthquake and Tsunami

The 2011 Tōhoku earthquake and subsequent tsunami in Japan remind us all of the power of mechanics. The earthquake, a mechanical disruption at the boundary between two tectonic plates, as described in Chap. 11, resulted in the propagation of a mechanical wave across the ocean. The two events combined to create a disaster of incomprehensible proportions. The World Bank estimated the total loss at $235 billion USD, making it the costliest natural disaster in recorded history. Perhaps one day we will possess the mechanical technology to avoid such disasters (see Chap. 11) (Fig. 13.35).

Fig. 13.35

Photos of buildings collapsed by the Tōhoku earthquake

The Leaning Tower of Pisa

On a brilliant Sunday morning in the summer of 1971, I stepped down from the train in Pisa, stored my pack in a luggage locker, and headed for the center of town. I arrived in the Piazza dei Miracoli (Square of Miracles), and to my surprise, I was entirely alone in the square. I walked gingerly over to the Leaning Tower and began my pilgrimage to the top (Fig. 13.36).

Fig. 13.36

Photograph of the Leaning Tower of Pisa

At that age I was cursed with a case of acrophobia. Thus, when I arrived alone and breathless at the precipice moments later, I found it nearly impossible to stand. Indeed, I groped for the iron railing on hands and knees, unable to summon the bravado to stand. Finally, calling upon my innermost strength, I rose, white-knuckled hands gripping the railing, and took in the magnificent view, all the while fearing that I would totter from the summit.

I visited the summit once again 9 years later, but my next trip to the top would not occur for nearly a quarter of a century, in 2004. In the interim period the tower was closed due to fear that it would topple. By the time the tower was reopened, the Piazza dei Miracoli had changed from the quiet spot remembered from my youth to something resembling a circus. The age of frenzied and ubiquitous air travel had arrived on our planet within that span of time, and Italy was the destination of choice for the multitudes.

The Leaning Tower of Pisa is perhaps the most famous example on Earth of a failure that somehow succeeded. Indeed, when the tower came dangerously close to collapsing in the 1990s, the International Commission wisely realized that their goal was not to right the tower to vertical. Instead, they sought to decrease the tilt only enough to ensure the safety of the tower, while retaining sufficient tilt for the tower to continue to attract tourists.

I was fortunate enough to be living in Italy for two summers, in 1996 and 1997, and during that period one of the members of the International Commission gave a speech at our study center. The Leaning Tower is a fascinating challenge in mechanics. When construction of the tower began in 1173, the foundation was poorly designed. Workers did not comprehend that the soil beneath the foundation was partially saturated by water from the nearby Arno River. Thus, by the time the first level had been completed, the bell tower was already listing away from the river. Workers attempted to correct this by building the second level leaning slightly toward the river, but this overcorrection caused the tower to begin listing toward the river instead. Construction was therefore halted, and the tower was left incomplete for nearly a century [128].

Construction recommenced in 1272. By then it was assumed that the underlying soil had stabilized, so workers attempted to correct the lean in subsequent levels, thereby creating the slightly curved structure that is plainly visible to the observer today. The tower was finally completed in 1372, an enormous span of time for such a seemingly straightforward project.

Unfortunately, the tower continued to list, the angle growing ever so slowly over the centuries as the foundation crept under the bearing load of the structure on the viscoelastic soil beneath. By the twentieth century the angle of tilt had increased to about 4 degrees from the vertical. I say "about" because the angle of tilt of the tower is continuously changing. A plumb bob was at one time suspended from the center of the interior of the tower, and it traced a continuously changing path on the floor of the tower due to the actions of the Sun and wind on the structure. Thus, in order to determine whether the tilt is actually increasing, a three-month average of the plumb bob's path was used to measure the change in tilt over time. This is a fabulous problem in mechanics!
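To make the idea of averaging concrete, here is a minimal sketch in Python with entirely invented numbers (the tower's actual monitoring record is of course far richer). It synthesizes a tilt signal made of a slow secular creep plus a Sun-driven daily swing, then applies a three-month running mean: the raw readings wander back and forth, while the averaged record exposes the underlying drift.

```python
import math
from itertools import accumulate

# Synthetic hourly "plumb bob" tilt readings (arcseconds), purely illustrative.
hours = 3 * 365 * 24                  # three years of hourly samples
creep_per_hour = 0.0005               # hypothetical secular creep rate
daily_amp = 2.0                       # hypothetical Sun-driven daily swing

tilt = [creep_per_hour * h + daily_amp * math.sin(2 * math.pi * h / 24)
        for h in range(hours)]

# Three-month (90-day) running mean computed from cumulative sums.
window = 90 * 24
csum = [0.0] + list(accumulate(tilt))
smoothed = [(csum[i] - csum[i - window]) / window for i in range(window, hours + 1)]

# The raw readings swing by +/- daily_amp; the smoothed record reveals the creep.
est_rate = (smoothed[-1] - smoothed[0]) / (hours - window)
print(f"true creep rate:      {creep_per_hour:.6f} arcsec/hour")
print(f"estimated from mean:  {est_rate:.6f} arcsec/hour")
```

The design choice is simply that the averaging window spans many whole periods of the disturbance, so the oscillation cancels while the slow drift survives, which is exactly the logic behind the three-month average described above.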

In 1989 the Civic Tower of Pavia collapsed, killing four people. This disaster caused officials to close the Leaning Tower, and it remained closed while the International Commission deliberated on a corrective course of action. It was well known that previous attempts had come to naught. For example, Mussolini had concrete injected into the base in the 1930s, with the result that the tower lurched still further.

Still later, liquid nitrogen was injected into the ground in an attempt to slow the progression of the tilt, but this too caused the tower to lurch even further, perhaps in part because of Mussolini's earlier error in judgment.

The International Commission finally settled on applying lead weights to the high side of the foundation. This did in fact slow the progression of the tilt, but the foundation was not wide enough to provide a sufficient moment arm for the weights to reverse the tilt. Thus, a second approach was taken. A girdle was built about the third level of the tower, cables were attached to the girdle, and these cables were then tensioned by two winches.

I visited the tower on several occasions during this period of weighting and tugging, and I don’t mind telling you, each successive time I saw the Leaning Tower, I presumed that it would be my last because to my eye it appeared that this approach might in fact induce the tower to collapse.

Fortunately, a third attempt at a solution was formulated by the commission, and this final approach was successful. Engineers poked large hypodermic needles into the ground on the high side of the apron and sucked 38 m³ of soil from beneath the tower. The tower straightened by 45 cm, thereby returning to the angle of tilt it had in 1838 (Fig. 13.37).

The Leaning Tower of Pisa was reopened to the public on December 15, 2001. A second removal of 70 metric tons of soil in 2008 has further stabilized the structure. Unfortunately, all of this has only increased the volume of tourist traffic, to the point that it is necessary to wait in line, buy tickets and then wait still further before climbing the tower. Thus, my advice to those who wish to go to the top of the tower is—plan ahead! After all, the Leaning Tower is really worth visiting, and many of you may only get one chance to go to the top.

Fig. 13.37

Photograph of the lead weights placed on the foundation of the Leaning Tower. Note that the row of hypodermic needles can also be seen at the left edge of the photo, as can the tension cables attached to the third level

Every time we lose a commercial aircraft somewhere in the world, the general public is horrified, as well they should be. Indeed, I know people who refuse to fly in an aircraft because they are afraid that such a catastrophe will befall them. But let's look at the statistics of commercial aircraft crashes. We have about 35,000 commercial flights per day in the U.S. That translates to about 12.8 million flights per year. Lately, there has been about one commercial aircraft crash per year in the U.S. So your odds of being on a flight that crashes are about one in 12.8 million. Those are pretty low odds! They are in fact not too dissimilar to the odds of winning the lottery. So if you think you are going to win the lottery, you should also avoid flying on airplanes. In fact, your odds of being killed in an automobile are significantly higher than this. Furthermore, your odds of being killed while crossing the street on foot are higher than your odds of being killed in a commercial aircraft. Commercial aviation is in fact the safest form of transportation in the history of our planet, and it is virtually entirely an outgrowth of mechanics.
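For readers who want to see the arithmetic spelled out, here is a minimal sketch using the round figures quoted above; note that the one-crash-per-year rate is the rough working assumption of this paragraph, not an official accident statistic.

```python
# Back-of-the-envelope check of the flight-risk arithmetic above.
flights_per_day = 35_000
flights_per_year = flights_per_day * 365        # about 12.8 million
crashes_per_year = 1                            # rough assumption from the text

odds_of_crash = crashes_per_year / flights_per_year
print(f"flights per year: {flights_per_year:,}")             # 12,775,000
print(f"odds per flight:  about 1 in {flights_per_year:,}")  # ~1 in 12.8 million
print(f"probability:      {odds_of_crash:.2e}")              # ~7.8e-08
```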

We can now say that while classical mechanics does not have all of the answers, it has nonetheless served us well. Despite the fact that we continue to experience highly visible catastrophes due to our imperfect understanding of mechanics, we are a far more advanced and successful species than has ever before inhabited this planet, or any other, for all we know.