The Road to Abstraction

Set Theory and Axiomatic Systems

The revolutions in mathematics in the nineteenth century paved the way for rapid development and unprecedented expansion in mathematics in the twentieth. Modern mathematics no longer comprises only geometry, algebra, and analysis. Rather, mathematics today is a vast web of interconnected and evolving disciplines and concepts, characterized not only by rigorous logic but also by high abstraction and wide applicability. This breadth is reflected in the basic division of modern mathematical research into pure mathematics and applied mathematics. The latter has expanded in recent decades to include computer science, the importance of which in the modern world goes without saying: from the perspective of employment opportunities alone, it has already exceeded every other branch of mathematics (Fig. 8.1).

Fig. 8.1
A photo of Georg Cantor with an aleph symbol to the left.

Georg Cantor, founder of set theory

The modernization of pure mathematics was driven primarily by two innovations: the invention of set theory and the introduction of axiomatic methods. Set theory was created in the nineteenth century by Georg Cantor. Its invention was initially ill received by the mathematics community, most notably by Kronecker, but it eventually achieved widespread acceptance. Sets were originally conceived as collections of numbers or points, but the definition of a set quickly expanded to include collections of arbitrary elements: sets of functions, sets of shapes satisfying a given property, and so forth. Today, set theory is the universal language of mathematics, in which its basic concepts, such as integrals, functions, and spaces of various kinds, are all expressed. The introduction of set theory has also had a profound influence on the machinery of mathematical logic and motivated the debate between mathematical intuitionism and formalism, which is the subject of the present chapter.

Georg Cantor was born in 1845 in Saint Petersburg into a family of second-generation German emigrants. His father was a businessman with connections in Hamburg, London, and even New York. When Cantor was 11 years old, his father became ill, and the family returned to Germany. He completed his secondary education there and attended universities in both Zurich, Switzerland, and Berlin. He had a talent for painting, which was a source of considerable pride for his family, but he settled eventually upon a career in mathematics.

As Cantor saw it, a set consists of any abstract collection of well-distinguished objects. He introduced the notion of the cardinality of a set in order to compare the sizes of different sets, whether finite or infinite. His definition relies on the notion of a one-to-one correspondence between sets, which is illustrated by a surprising and beautiful demonstration: Cantor discovered and proved that it is possible to set up a one-to-one correspondence between the rational numbers and the natural numbers. The proof is encapsulated by the following diagram.

A diagram presents the correspondence between the rational and natural numbers. The first row contains 1/1, 1/2, 1/3, 1/4, and so on; the second row contains 2/1, 2/2, 2/3, and so on. The rows and columns continue without end, and arrows traverse the array diagonally in a zig-zag pattern.
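
To make the zig-zag concrete, here is a minimal Python sketch (an illustration, not Cantor's own notation): it walks the grid of fractions p/q along the diagonals p + q = 2, 3, 4, …, skipping duplicates such as 2/2, and thereby pairs each positive rational with a natural number.

```python
from fractions import Fraction

def rationals():
    """Enumerate the positive rationals by walking the grid p/q
    along the diagonals p + q = 2, 3, 4, ..., skipping values
    already produced (e.g., 2/2 duplicates 1/1)."""
    seen = set()
    s = 2  # constant value of p + q along the current diagonal
    while True:
        for p in range(1, s):
            r = Fraction(p, s - p)
            if r not in seen:
                seen.add(r)
                yield r
        s += 1

# Pair natural numbers with rationals: 1 -> 1/1, 2 -> 1/2, 3 -> 2/1, ...
for n, r in zip(range(1, 11), rationals()):
    print(n, r)
```

The particular direction of travel along each diagonal does not matter; any systematic traversal that eventually reaches every cell of the grid yields the desired one-to-one correspondence.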

Such infinite sets that can be put into a one-to-one correspondence in this way with the natural numbers are called countable. Infinite sets that cannot be put into any one-to-one correspondence with the natural numbers are called uncountable. Cantor proved that the set of real numbers is uncountable.
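
Cantor's proof of uncountability is his celebrated diagonal argument: given any purported list of all infinite binary sequences (which encode real numbers), one constructs a sequence that differs from the nth entry in its nth digit and so appears nowhere in the list. A toy Python illustration of the construction, under the simplifying assumption that we inspect only the first n rows:

```python
def diagonal_escape(rows):
    """Given the first n rows (each a string of at least n binary digits)
    of a purported enumeration of 0/1 sequences, build a prefix that
    differs from row k in digit k, so it can match no listed row."""
    return ''.join('1' if row[k] == '0' else '0'
                   for k, row in enumerate(rows))

listing = ["0000", "1111", "0101", "1100"]
print(diagonal_escape(listing))  # "1011" differs from row k at digit k
```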

Moreover, Cantor was able to use set-theoretical arguments to provide a simple nonconstructive proof of the existence of transcendental numbers: it is not difficult to see that the set of algebraic numbers, which includes the set of rational numbers as a subset, is countable. Since every real number is either algebraic or transcendental, and the set of real numbers is uncountable, it follows that the majority of real numbers must be transcendental. The study of transcendental numbers became a deep and active area of research in twentieth-century mathematics.

The philosophical assumptions and implications at the heart of Cantor’s research were not uncontroversial. In particular, the successful and influential mathematician Leopold Kronecker opposed the introduction of actual infinities into mathematics. Kronecker was head of mathematics at the University of Berlin and a successful businessman, and his vigorous public opposition may have prevented Cantor from ever obtaining a post there; Cantor spent the entirety of his career at the less prestigious University of Halle.

Cantor borrowed from Hebrew the notation ℵ₀ (aleph null) to stand for the cardinality of the natural numbers and showed that it is possible to construct an increasing sequence ℵ₀ < ℵ₁ < ℵ₂ < ⋯ of transfinite cardinalities. Since the cardinality of the real numbers is strictly larger than the cardinality ℵ₀ of the natural numbers, Cantor proposed a natural conjecture, referred to today as the continuum hypothesis: there exists no cardinal number lying strictly between the two. When David Hilbert presented his famous list of open problems at the International Congress of Mathematicians in Paris in 1900, the continuum hypothesis was first among them (a problem related to transcendental numbers was seventh).

Cantor corrected a serious defect in the foundations of mathematics that had persisted since the time of Zeno in Ancient Greece. The philosopher Bertrand Russell discusses the historical significance of his work in his Mathematics and the Metaphysicians, published in 1901:

Zeno was concerned, as a matter of fact, with three problems, each presented by motion, but each more abstract than motion, and capable of a purely arithmetical treatment. These are the problems of the infinitesimal, the infinite, and continuity… From him to our own day, the finest intellects of each generation in turn attacked the problems, but achieved, broadly speaking, nothing. In our own time, however, three men—Weierstrass, Dedekind, and Cantor—have not merely advanced the three problems, but have completely solved them. The solutions, for those acquainted with mathematics, are so clear as to leave no longer the slightest doubt or difficulty. This achievement is probably the greatest of which our age has to boast… Of the three problems, that of the infinitesimal was solved by Weierstrass; the solution of the other two was begun by Dedekind, and definitively accomplished by Cantor.

Unfortunately, Cantor’s Promethean efforts, together with many personal insecurities and misfortunes, led to a mental breakdown at the age of 40, and he spent much of his later life in and out of sanatoriums, in one of which he died many years later (Fig. 8.2).

Fig. 8.2
A stamp exhibits a fingerprint expression with several markings from a to e along with 2 diamonds arranged vertically with markings C, A, B, top to bottom respectively and a point D to its right that is connected to the points C, A, and B via lines.

A commemorative stamp issued by the Democratic Republic of the Congo featuring David Hilbert

The story of axiomatization in mathematics also begins in Ancient Greece, with Euclid and his Elements of Geometry. In it, he introduced the five axioms discussed at length in the previous chapter. His system however was incomplete and imperfect. The mathematician David Hilbert introduced a new system of axioms for geometry in order to clear up its ambiguities. He is reported to have described the objective of his axiomatic system with the words: “One must be able to say at all times—instead of points, straight lines, and planes—tables, chairs, and beer mugs.”

In Euclid, points, lines, and planes have descriptive definitions in terms of their spatial properties. Hilbert endeavored to replace these descriptive definitions with purely formal definitions. Points, lines, and planes become purely abstract objects with no specific content, and the axioms define formal relations between them. Hilbert established three legitimacy requirements for an axiomatic system: consistency, independence, and completeness. Of course, axiomatization at this stage was only a methodological question and does not possess as rich a content as set theory. Nevertheless, Hilbert provided with his method a rigorous foundation for geometry, and since then, the method of axiomatization has gradually seeped into other branches of mathematics and become a powerful tool for refining mathematics and a specific topic of mathematical research in its own right.

David Hilbert was born in 1862 in the outskirts of Königsberg, a Prussian city that today is part of Russia and known as Kaliningrad. Probably the most famous resident in the history of Königsberg was Kant, who spent his entire life there. The city is also associated with a famous problem in mathematics. There are seven bridges across the river Pregel running through it, some of them connecting the mainland to one or the other of two large islands at its center, one of them joining the two islands to one another (Fig. 8.3).

Fig. 8.3
A schematic diagram presents 7 bridges which connect mainland and islands. 3 pathways in different directions connect to a rectangular pathway at the center through 7 bridges. A left sided triangle with 2 curved sides and a straight line at the center connects 2 ovals on the third side.

Abstract illustration of the Seven Bridges of Königsberg problem

The problem was to find a walk through the city that would cross each of the bridges once and only once, and it was resolved by Euler in the eighteenth century, who proved that no such walk exists. This seemingly simple mathematical problem eventually gave rise to the modern theory of topology. Another mathematically famous resident of Königsberg was Christian Goldbach (1690–1764), responsible for a famous eponymous open conjecture in mathematics: every even integer larger than 2 admits a presentation as a sum of two primes. Perhaps the greatest progress toward the resolution of this problem was provided by the Chinese mathematician Chen Jingrun, who proved in 1966 that every sufficiently large even number can be written either as a sum of two primes or as the sum of a prime and the product of two primes. In 2013, Zhang Yitang (1955–), another mathematician born and raised in China, made a breakthrough in the study of the twin prime conjecture, which states that there exist infinitely many pairs of prime numbers with a difference of two, such as, for example, 5 and 7 or 11 and 13. His result was subsequently improved by a new method created by the British mathematician James Maynard (1987–), who was awarded the Fields Medal in 2022.
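
Returning to Goldbach’s conjecture: it is easy to test by brute force for small cases, which is part of its charm; it has been verified by computer for all even numbers up to the order of 10¹⁸, though a proof remains out of reach. A small Python sketch of such a check (trial division is enough at this scale):

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_pair(n):
    """Return one way of writing an even n > 2 as a sum of two primes,
    or None if no decomposition exists (none has ever been found)."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

for n in range(4, 31, 2):
    print(n, goldbach_pair(n))  # e.g., 4 (2, 2), 6 (3, 3), 8 (3, 5), ...
```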

During Hilbert’s lifetime, the Königsberg mathematician who played the largest role in his mathematical career was his colleague Hermann Minkowski (1864–1909), who was born 2 years after Hilbert in the Russian town of Aleksotas, now part of Kaunas in Lithuania, and moved with his family when he was 8 years old to Königsberg, where they lived across the river from Hilbert. This talented mathematician earned the prestigious Mathematics Prize of the French Academy of Sciences when he was 18 years old for a manuscript on the theory of quadratic forms. His brother Oskar Minkowski (1858–1931) was a successful medical researcher, who discovered the relationship between the pancreas and diabetes, a discovery that led in turn to the introduction of insulin as a treatment for the disease.

Hilbert’s talent was in no way outshone by the remarkable talent of Minkowski; rather, he was impelled to hone and accumulate his skills and quietly endeavored to build for himself an even more solid foundation. The two of them developed a remarkable friendship that spanned more than a quarter century, until Minkowski’s sudden death from appendicitis in 1909. Hilbert lived into his eighties and became one of the most accomplished and respected elder statesmen of mathematics in his time. The famous list of open questions and research projects that he introduced at the turn of the century remains to this day an influential guidepost for the entire discipline.

We say a bit here about Hilbert’s ninth problem, which was partially resolved by the work of the Austrian mathematician Emil Artin (1898–1962) and the Japanese mathematician Teiji Takagi (1875–1960) with the creation of class field theory. Takagi pursued his doctorate at the University of Göttingen under the supervision of Hilbert and later returned to his country where he trained a generation of outstanding Japanese mathematicians: indeed, following the end of World War II, Japan produced three Fields Medalists, the first of them being Kunihiko Kodaira (1915–1997).

The Abstraction of Mathematics

Set theory and the axiomatic method became the paradigms for mathematical abstraction in the twentieth century, even more so after they were integrated into a singular foundational approach to all of modern mathematics. Eventually, four central disciplines emerged: real analysis, functional analysis, topology, and modern (or abstract) algebra. It is interesting to note that all the mathematicians mentioned in the previous section in connection with this development hailed from Germany, a country which has always nurtured a talent for the abstract, whether in art, music, or the humanities and social sciences.

The introduction of set theory brought about a revolution in integral calculus, which led to the development of the modern theory of functions of a real variable. The rigorous treatment of analysis in the nineteenth century had forced into the light a variety of pathological functions, such as the Weierstrass function discussed in the previous chapter. Another example is the Dirichlet function, named after another of Gauss’s students, who discovered it:

$$\displaystyle \begin{aligned} f(x) = \begin{cases} 1 \text{ if } x \text{ is a rational number} \\ 0 \text{ otherwise} \end{cases}. \end{aligned}$$

This function has the interesting property of being discontinuous at every real number. Such examples forced mathematicians to study a more general class of functions than that which had typically been admitted into calculus (Fig. 8.4).

Fig. 8.4
A photo of Henri Lebesgue.

Henri Lebesgue, father of modern analysis

The first significant success in this direction was achieved by the French mathematician Henri Lebesgue (1875–1941). He adopted a set-theoretical approach to invent a new mathematical discipline called measure theory. In measure theory, certain familiar geometrical concepts, including length and area, are generalized and made abstract by the introduction of a measure on a given space. Similarly, Lebesgue extended the integral of classical calculus by defining the Lebesgue integral. On these foundations, it is possible to recover the fundamental theorem of calculus relating the differential and integral operations, as well as the other familiar theorems in calculus due to Leibniz and Newton. The contributions of Lebesgue became the building blocks of modern real analysis. However, his work received a hostile reception from classical analysts, and he struggled to find consistent work for a period of time after its publication. Its importance is recognized today by the division in analysis between classical analysis and modern analysis, the latter of which refers to any topic in analysis that makes use of his innovations.

Another deep development in analysis in the twentieth century was the development of modern functional analysis. The word functional was coined by Jacques Hadamard (1865–1963) to describe a function whose argument is another function. We have had occasion already to discuss examples of such functions in our treatment of the calculus of variations. The list of mathematicians who contributed important results in functional analysis is a long one. Hilbert, for example, studied the space of square-summable sequences: sequences (a₁, a₂, …, aₙ, …) of real numbers subject to the requirement that the series a₁² + a₂² + ⋯ + aₙ² + ⋯ converges. He defined the notion of an inner product on such a space, as well as its various operations, and provided in this way the first example of an infinite-dimensional vector space. This space is referred to today as a Hilbert space.
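
The inner product Hilbert attached to this space is ⟨a, b⟩ = a₁b₁ + a₂b₂ + ⋯, which converges whenever both sequences are square-summable. A small Python sketch (with an illustrative choice of sequence and an arbitrary truncation length) shows the partial sums settling down to a familiar value:

```python
import math

def inner(a, b, terms=100_000):
    """Truncated l^2 inner product <a, b>: sum of a(n) * b(n), where the
    sequences are given as functions of n = 1, 2, 3, ..."""
    return sum(a(n) * b(n) for n in range(1, terms + 1))

a = lambda n: 1.0 / n       # square-summable: the sum of 1/n^2 converges
print(inner(a, a))          # squared norm of a: approaches pi^2/6
print(math.pi ** 2 / 6)     # ~ 1.6449, the exact value (due to Euler)
```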

Ten years later, the Polish mathematician Stefan Banach (1892–1945) presented a more general class of vector spaces, the so-called Banach spaces. He replaced the inner product of Hilbert with a real-valued function called a norm, by means of which it is possible to give general definitions of the length of a vector, the convergence of a sequence of vectors, and so on. The study of general Banach spaces marked a considerable expansion and abstraction in the scope of functional analysis as a discipline. Around the same time, considerable progress was made toward a more abstract and general concept of a function. We present here only one example of this work: the so-called Dirac delta function δ(x), invented by the British physicist Paul Dirac (1902–1984) and defined by the properties

$$\displaystyle \begin{aligned} \delta(x) = 0\text{ for all }x \neq 0,\text{ and }\int\limits_{-\infty}^{+\infty}\delta(x)dx=1. \end{aligned}$$

Of course, there exists no function in the classical sense satisfying these properties, but the Dirac delta function proved extremely useful for physics, and eventually, a mathematical formalism was discovered to handle such cases. Today, functional analysis is among the areas of mathematics that has proved most useful to physics and the other sciences, in particular engineering technology (Fig. 8.5).
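
One standard way to make sense of δ(x) is as a limit of ordinary functions: narrow spikes of unit area, such as Gaussians of shrinking width ε. The following Python sketch (midpoint-rule integration, illustrative parameters) shows that integrating such a spike against a test function f approaches f(0), which is the defining behavior of the delta function:

```python
import math

def delta_eps(x, eps):
    """Gaussian of width eps: total integral 1, concentrating at 0."""
    return math.exp(-x * x / (2 * eps * eps)) / (eps * math.sqrt(2 * math.pi))

def integrate(f, lo, hi, steps=200_000):
    h = (hi - lo) / steps
    return h * sum(f(lo + (i + 0.5) * h) for i in range(steps))

f = math.cos  # test function with f(0) = 1
for eps in (1.0, 0.1, 0.01):
    print(eps, integrate(lambda x: delta_eps(x, eps) * f(x), -10, 10))
# the printed values tend to f(0) = 1 as eps shrinks
```

This limiting picture was later formalized in Laurent Schwartz’s theory of distributions, the mathematical formalism alluded to above.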

Fig. 8.5
A photo of Emmy Noether who wears a skirt with a belt, shirt with long sleeves and a bow.

Emmy Noether, a founding figure in abstract algebra

At the same time that set-theoretical methods were facilitating revolutions in real analysis and functional analysis, the axiomatic method was also extending its reach into every area of mathematics. The most significant developments were in abstract algebra. Ever since Galois had first introduced the group concept into mathematics, mathematicians had expanded the class of groups to include finite groups, discrete groups, infinite groups, and continuous groups. A host of other algebraic systems also appeared, including rings, fields, lattices, ideals, etc. The focus of algebraic research began to shift toward abstract algebraic structures; such a structure consists of a set equipped with some number of finitary operations subject to a list of prescribed axioms (Fig. 8.6).

Fig. 8.6
A graphical representation of Agnesi curve. A circle and a bell-shaped curve above it are symmetrical about the positive y-axis. A straight diagonal line from the origin meets the circle at point B and the curve in the first quadrant.

The witch of Agnesi curve

It is generally believed that the first mathematician to formally set down the idea of modern abstract algebra was the German mathematician Emmy Noether (1882–1935) in her 1921 paper Idealtheorie in Ringbereichen (Theory of Ideals in Ring Domains). She was one of the finest mathematicians of her age or any age and contributed to the axiomatic treatment of the general theory of ideals and noncommutative algebra. At the time of her death, she was memorialized as the greatest woman mathematician of all time, having surpassed in accomplishment Hypatia (c. 350–415) of ancient Greece, Maria Gaetana Agnesi (1718–1799) of Italy, Sophie Germain (1776–1831) of France, and Kovalevskaya of Russia. Sex discrimination prevented her for many years from obtaining a regular post at Göttingen despite the fervent recommendations of David Hilbert, and she often worked for no pay. After the rise of Hitler and the Nazi party, she was removed from her position and eventually moved to America, where she spent her final years lecturing at Bryn Mawr College.

In addition to abstract algebra, probability theory also benefited from axiomatization. The main work in this area was carried out by the Soviet mathematician Andrey Kolmogorov (1903–1987). Kolmogorov graduated from Moscow State University in 1925 and immediately began to carry out research at the same institution. Four years later, he published his General Theory of Measure and Probability Theory, in which he proposed six axioms as a foundation for probability. He also contributed to the practical development of probability theory through his work on continuous-time Markov processes. Leaving probability aside, Kolmogorov also carried out important work in functional analysis, topology, the theory of turbulence, information theory, dynamical systems, and classical mechanics.

In 1980, Kolmogorov shared the Wolf Prize in Mathematics with the French mathematician Henri Cartan (1904–2008). Two years earlier, his student Israel Gelfand (1913–2009) had received the first ever Wolf Prize in Mathematics for his work on functional analysis, group theory, and representation theory; Gelfand shared this award with the German mathematician Carl Ludwig Siegel (1896–1981). Israel Gelfand was born into a poor Jewish family in the Odessa Oblast (province) of Ukraine, where he was expelled from high school, according to his own account for political reasons related to his father’s status as a mill owner. At the age of 17, he and his father made their way to Moscow to live with some distant relatives. Two years later, without having received a high school diploma or university degree, Gelfand began postgraduate studies at Moscow State University under the supervision of Kolmogorov. His doctoral dissertation introduced the theory of normed rings; he also proved an important theorem concerning the space of maximal ideals in rings of continuous functions and established the general spectral theory of C*-algebras.

We turn finally to topology. The great German-born American mathematician Hermann Weyl (1885–1955) famously said, “In these days the angel of topology and the devil of abstract algebra fight for the soul of every individual discipline of mathematics.” This indicates something of the great importance of these two disciplines. The premodern origins of topology however appear much earlier than those of abstract algebra, and its motivating examples are more immediately accessible. These include the problem of the bridges of Königsberg (1736), the four-color problem for maps (1852), and the famous Möbius strip (1858). The basic objects of interest in topology are abstractions of geometric shapes subject to continuous processes – two topological structures are considered to be equivalent to one another if one can be obtained from the other by an invertible continuous transformation (intuitively, transformations that can be achieved by stretching or distorting, but without introducing any cuts or joins). The word topology seems to have first been coined by a student of Gauss in 1847. In Greek, it means the study of position.

Modern topology is subdivided into point-set topology, also called general topology, and algebraic topology. In point-set topology, the basic structure is that of a set equipped with a collection of distinguished subsets referred to as open sets or neighborhoods. The entire ensemble is known as a topological space. In this way, it is possible to give abstract definitions for various properties of interest to mathematicians, including continuity, connectedness, and dimension, and also some more specialized concepts such as compactness and separability. The theory has some interesting and surprising applications. For example, it follows from the famous fixed point theorem of topology that at any given time there is always some point on the surface of the earth at which there is no wind (like the eye of a hurricane) and that there is some point on the surface of the earth from which every direction points southward, specifically the North Pole. The fixed point theorem states: every continuous map from an n-dimensional object (satisfying certain conditions) to itself has a fixed point.
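
The fixed point theorem referred to here is Brouwer’s. In one dimension it reduces to the intermediate value theorem: a continuous map f of [0, 1] into itself satisfies f(0) ≥ 0 and f(1) ≤ 1, so f(x) − x changes sign and must vanish somewhere. A minimal Python sketch locates such a point by bisection (the choice of f here is merely illustrative):

```python
import math

def fixed_point(f, lo=0.0, hi=1.0, tol=1e-12):
    """Find x with f(x) = x for continuous f mapping [lo, hi] into itself,
    by bisection on g(x) = f(x) - x, which changes sign on the interval."""
    g = lambda x: f(x) - x
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(mid) >= 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x = fixed_point(math.cos)   # cos maps [0, 1] into itself
print(x, math.cos(x))       # both ~ 0.739085..., a fixed point of cosine
```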

Algebraic topology was founded by the French mathematician Henri Poincaré (1854–1912). Just as a wall is made up of bricks, Poincaré began by partitioning geometric spaces into finitely many little regions. He defined in terms of these regions the topological concepts of higher-dimensional manifolds, homeomorphisms, and homology. Subsequent mathematicians also developed such related concepts as homotopy and cohomology. This procures a translation of topological problems into the domain of abstract algebra. One of the earliest results in what is now referred to as algebraic topology was first discovered by Descartes in 1635 and independently rediscovered by Euler in 1752. This is the famous Descartes-Euler polyhedral formula, which says that for any simply connected convex polyhedron, the sum of the number of vertices and the number of faces minus the number of edges is always equal to 2. Another famous result in algebraic topology is the Poincaré conjecture, which states that every simply connected closed 3-manifold is homeomorphic to the 3-sphere. Poincaré first proposed his conjecture in 1904, and it was proved by the Russian mathematician Grigori Perelman (b. 1966) in 2006 (Fig. 8.7).
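
In symbols, the Descartes-Euler formula reads V − E + F = 2. It is easy to confirm for the five Platonic solids, as in this short Python check (the vertex, edge, and face counts are the standard ones):

```python
# (V, E, F) for the five Platonic solids
solids = {
    "tetrahedron":  (4, 6, 4),
    "cube":         (8, 12, 6),
    "octahedron":   (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron":  (12, 30, 20),
}
for name, (v, e, f) in solids.items():
    print(f"{name:>12}: V - E + F = {v - e + f}")  # always 2
```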

Fig. 8.7
A photo of Henri Poincare.

French mathematician Henri Poincaré

Henri Poincaré was born in Nancy, Meurthe-et-Moselle, in 1854, the same year in which Riemann developed his theory of non-Euclidean geometry. He exhibited a prodigious intelligence from an early age, although he became seriously ill with diphtheria when he was 5 and sometimes had trouble expressing his thoughts fluently for a period afterward. Nevertheless, he enjoyed all manner of games and dancing as a child and developed a reputation as a remarkably quick and attentive reader. In school, he excelled in all his subjects and especially in written composition. His interest in mathematics flowered somewhat late, probably when he was about 15, but his talent quickly revealed itself. He enrolled at the École Polytechnique when he was 19 (Fig. 8.8).

Fig. 8.8
A photo of Grigori Perelman.

Grigori Perelman, who proved Poincaré’s conjecture

Poincaré never stayed too long in one area of research – one of his contemporaries described him as more a conqueror than a colonizer. To some extent, he planted his flag in every discipline in mathematics, and several disciplines outside it, but his most important contributions were certainly in topology. Research into the Poincaré conjecture and its generalizations and eventual proof produced three separate Fields Medalists at intervals separated by 20 years: first in 1966 and then again in 1986 and 2006 (Fig. 8.9).

Fig. 8.9
A painting of Les Demoiselles D Avignon. It presents 5 partially nude women with angular and disjointed body shapes. The left woman covers with a long gown, and the right 2 women wear masks.

Les Demoiselles d’Avignon, Picasso (1907)

Poincaré was also an exceptional popularizer of mathematics. His popular works were translated into many languages and read with interest by people from all walks of life with an influence not unlike that of A Brief History of Time by Stephen Hawking (1942–2018) in the present day. Finally, Poincaré sustained an active interest in philosophy throughout his life and published three influential works on the philosophy of science: Science and Hypothesis, The Value of Science, and Science and Method. He famously argued for the position of conventionalism in physics, which holds that the laws that govern physics and physical space are subject to competing equivalent formulations and that the choice of one or another particular system of formulations is a question of convention and convenience. At the same time, he was opposed to the use of infinite sets in mathematics and believed instead that the most basic concept in mathematics is the concept of the natural numbers. In this respect, he was one of the earliest proponents of intuitionism. In connection with this belief, Poincaré always emphasized the role of creativity in mathematics and its relation to the arts. He wrote in The Value of Science that “it is only through science and art that civilization is of value.”

At a time when people were still actively debating the legitimacy of non-Euclidean geometry, Poincaré presented powerful intuitive guides to the geometry of space in four dimensions. In Science and Hypothesis, he argues: “consider a purely visual impression, due to an image formed on the back of the retina. A cursory analysis shows us this image as continuous, but as possessing only two dimensions. However, sight enables us to appreciate distance, and therefore to perceive a third dimension.” Just as information in three spatial dimensions can be translated onto the two dimensions of the retina, it is possible to imagine that the three dimensions of physical space are projections onto a surface in four-dimensional space, not unlike the artistic choice of perspective on a canvas. This argument had a profound influence on Pablo Picasso, who was inspired by it to begin his experiments in cubism with the painting Les Demoiselles d’Avignon in 1907.

Science and Hypothesis also had a profound effect on another member of Picasso’s circle, the Paris actuary Maurice Princet (1875–1973), who is generally credited with introducing its ideas to the cubists who lived and met at the Bateau-Lavoir building in the Montmartre district. The writer and critic Guillaume Apollinaire (1880–1918), who moved in the same circles and helped establish the term cubism, observes in his book The Cubist Painters (1913) that “geometry, the science of space, its dimensions and relations, has always determined the norms and rules of painting.” He likened the idea of a fourth spatial dimension to the “immensity of space eternalizing itself in all directions at any given moment,” a great metaphor containing the seeds of an entirely new art. He further pointed out that “geometric figure is as essential to painting, and geometry is as important to the plastic arts, as grammar is to writing.” We can perhaps regard cubism as a second great encounter between painting and geometry after the Renaissance (Fig. 8.10).

Fig. 8.10
A portrait painting of Cezanne.

Self Portrait 1875, Cézanne

Abstraction in Art

The word “abstract” as a noun occurs frequently at the beginning of mathematical and other scientific papers, just beneath the title, author, and institution, where it has the meaning of “summary.” In this section, we discuss its more usual descriptive meaning in the context of art and mathematics (Fig. 8.11).

Fig. 8.11
A portrait of card players. 2 men who sat on either side of a desk hold cards in their hands. The man on the left holds a smoking pipe in his mouth.

The Card Players, Cézanne (1893)

Just as the introduction of set theory and the tendency toward abstraction in mathematics in the early part of the twentieth century was not met without a certain amount of resistance and controversy, the abstract movement in art has also been cause for significant dispute. Ever since Aristotle, the ultimate aim of painting and sculpture had always been the imitation of nature.

It was only in the mid-nineteenth century that artists began to view their project differently and regard painting as an end in itself without reference to verisimilitude. Over time, a new style emerged: specific forms increasingly were exaggerated and deformed and transformed for expressive effect. The pioneer of this new style was Paul Cézanne (1839–1906). Cézanne took inspiration from his own idiosyncratic optical theories according to which the eyes perceive a scene continuously in time and from a variety of perspectives. His innovative ideas concerning nature, people, and painting are all on display in his paintings of mountains, rivers, and still life compositions in his native Provence. For Cézanne, abstraction was a tool for restoring to painting its natural beauty and independence (Fig. 8.12).

Fig. 8.12
A landscape painting of Starry Night. It presents a big bush in the foreground to the left, houses, and trees in the central part, and hills in the background. The sky and all the above are painted with wavy lines.

La Nuit étoilée (Starry Night), van Gogh (1889)

Cézanne is known as the father of modern art, and his guidance initiated a great wave of modernism in art in the late nineteenth and early twentieth century. His immediate heirs were the Fauvists, represented by Henri Matisse (1869–1964), and the Cubists, represented by Picasso. All of these artists however retained in their work some connection to the representation of natural forms. Their work cannot yet be called abstract art, but rather only abstracted art, or perhaps half-abstract. The word abstract here is merely descriptive and does not have the status of a proper noun, as in “abstract art” and the mathematical term “abstract algebra.” Rather the phrase abstract art in its fullest sense refers to works with no identifiable subject matter (Fig. 8.13).

Fig. 8.13
A landscape painting at Murnau. It presents a road from the left with greenery on either side. A few big trees are in the background to the left and an elevated surface with greenery and rocks is to the right.

Landscape at Murnau, Kandinsky (1908)

The first truly abstract artist was probably the Russian painter Wassily Kandinsky (1866–1944). Since the eighteenth century, Russia under Peter the Great and Catherine II had engaged in large-scale patronage in the arts and sciences. Beneficiaries of this patronage in mathematics alone included Euler and the Bernoulli brothers. Russians at that time travelled often to France, Italy, Germany, and other countries, and by the nineteenth century, Russian literature, music, drama, and ballet had all developed to an extraordinarily high degree of refinement.

It was in this context that Kandinsky was born in Moscow in the same year that Riemann died in Germany and only a few months before Baudelaire died in Paris. His father was a tea merchant from Siberia, and his grandmother a princess of Chinese Mongolian descent. His mother was a Moscow local. When he was still young, Kandinsky travelled with his parents to Italy. After his parents divorced, he lived with an aunt in Odessa on the shores of the Black Sea in modern Ukraine and completed his education there. He took up piano and cello and began to teach himself painting (Fig. 8.14).

Fig. 8.14
An abstract painting presents a partitioned semicircle to the right and a semicircle to the left. A few shapes are below, and the symbol t is at the top right. The background is shaded.

Abstract painting by Kandinsky

When he was 20, Kandinsky enrolled at the University of Moscow to study law and economics and eventually obtained a degree equivalent to a modern doctorate. He maintained a strong interest in painting however and was especially influenced by the colorful folk art he experienced as part of an ethnographic research expedition to the Vologda region north of Moscow. In 1896, when he was already 30 years old, Kandinsky decided once and for all to become a painter. He abandoned a promising teaching career and took the train for Germany, where he studied privately at first and later enrolled as an art student at Munich Academy. Among his classmates was a young Swiss artist named Paul Klee (1879–1940) who later became one of the great painters of the early twentieth century alongside Kandinsky (Fig. 8.15).

Fig. 8.15
A published text. The title Kandinsky and text in foreign language at the bottom have an art in between.

On the Spiritual in Art, German edition (1912)

It was during his time in Munich that Kandinsky began to develop his mature ideas about nonobjective and nonrepresentational art. After a period of exploration, he struck upon his purpose in art: the creation of decisive spiritual and emotional reactions by way of line and color, space and movement, without reference to the representation of natural objects. In his tract Concerning the Spiritual in Art, Kandinsky discusses his first encounter with the impressionist paintings of Claude Monet (1840–1926) and the attraction he felt toward an art in which the material reality of its objects was deemphasized. Revolutionary advances in the natural sciences in his lifetime further corroded his commitment to the world of direct sense perception (Fig. 8.16).

Fig. 8.16
3 paintings from top to bottom. The top landscape painting presents a tree with multiple stretched-out branches and flowers. The middle painting presents the tree with branches. The bottom constructive painting comprises multiple small rectangles and squares.

Progression from representation to abstraction: Flowering Trees by Mondrian

Kandinsky endeavored in his art to give spiritual expression to mystic inner experience independent from external reality on the one hand and technical refinement on the other. He believed that the harmony of color and form must always take as its primary objective the task of reaching the human soul. In middle age, Kandinsky wrote an autobiography in which he described his experience of colors:

The colors which made the greatest impression on me were bright green, white, magenta, black, yellow. Even now I have memories of them from when I was three years old. I noticed them again and again in a variety of shapes and objects, and over time the objects became less clear in my eyes than the colors themselves.

In his later years, Kandinsky began to develop a more geometric style of abstraction built from circles and triangles. His ideas are reflected in the titles of some of his works: Concentric Circles; A Center; Yellow, Red and Blue; and Sounds. In another important treatise, Point and Line to Plane, Kandinsky analyzed the specific emotional effect of formal elements in painting, claiming, for example, that a horizontal line has a coldness to it, while a vertical line is hot. In any case, his works are characterized by an immediately recognizable feeling for color and form that suggests the new horizons of expression opened up in art by the turn toward the abstract, in much the same way that non-Euclidean geometry had conjured up a broader space of possibilities in mathematics (Fig. 8.17).

Fig. 8.17
A painting presents the backside of a man with long attire and thread tied to his waist. 3 different colored horizontal regions are at the bottom background with a hut above.

Painting by Kazimir Malevich

After Kandinsky, the prominent representatives of abstraction in art have included the Russian painter Kazimir Malevich (1879–1935), the Dutch painter Piet Mondrian (1872–1944), and the American painter Jackson Pollock (1912–1956). Malevich brought geometric abstraction to its ultimate and simplest form of expression, for example, in works such as Black Square. Both Malevich and his contemporary Mondrian were also deeply influenced by the Cubist movement (Fig. 8.18).

Fig. 8.18
An action painting includes splashed, smeared, and dripped paints on the canvas.

Action painting by Jackson Pollock

Pollock, inspired by the Surrealists, worked in a very different style, sometimes called action painting, which involved subconscious and bodily techniques such as the dripping and pouring of paint onto the surface of the canvas or even the hood of a car. The success that he and his fellow traveller Willem de Kooning (1904–1997) enjoyed (de Kooning was born in Holland and came to America as a stowaway) suggests the shift in the center of gravity of the art world from Europe to America in the second half of the twentieth century.

Applications of Mathematics

Theoretical Physics

At the beginning of this chapter, we mentioned that research in modern mathematics split into two major directions, pure mathematics and applied mathematics. The previous section introduced briefly the four main branches of modern abstract mathematics; the interactions between these branches also contributed to the birth of further branches, such as algebraic geometry, differential topology, and so on. Given the limitations of space and scope of this book, we will not discuss these in any further detail. Instead, we turn now to the penetration of mathematics into the other intellectual crystallizations of human civilization, that is, the sciences, starting with physics. The eighteenth century had been the golden age for the synthesis of mathematics with classical mechanics. In the nineteenth century, the greatest mathematical applications to physics occurred in the theory of electricity and magnetism, whose best representative was James Clerk Maxwell (1831–1879), associated with the mathematical physics school at Cambridge University. Maxwell established a complete system of electromagnetic theory consisting of four concise partial differential equations. He seems to have first developed a more complicated formulation, but started over on the basis of his belief that the mathematics representing the physical world should be beautiful (Fig. 8.19).

Fig. 8.19
A photo of Maxwell.

Maxwell at Cambridge

Maxwell joined a long line of Scottish thinkers and inventors; indeed, relative to its population, this small country has contributed perhaps more notable inventors than any other in the world. Prior to Maxwell, there was James Watt (1736–1819), who contributed one of the early practical steam engines, and afterward, there appeared also Alexander Graham Bell (1847–1922), inventor of the telephone; John Macleod (1876–1935), a coauthor in the discovery and isolation of insulin; Alexander Fleming (1881–1955), who discovered penicillin; and John Logie Baird (1888–1946), who contributed to the invention of television and demonstrated the first true working television in London in 1927. Scotland was also home to Adam Smith (1723–1790), who presented the first complete and systematic theory of economics. The central concept of his masterpiece The Wealth of Nations is that the apparent chaos of the free market consists in fact of the workings of a self-regulating mechanism that tends as if automatically to the production of those products that are most desired and needed by society (Fig. 8.20).

Fig. 8.20
A photo of Hermann Minkowski.

Einstein’s mathematics teacher, Hermann Minkowski

Since the beginning of the twentieth century, mathematics has occupied the center of such disciplines in theoretical physics as relativity, quantum mechanics, and elementary particle theory. In 1908, the German mathematician Hermann Minkowski (1864–1909) proposed his four-dimensional spacetime model ℝ³,¹ equipped with the metric relation

$$\displaystyle \begin{aligned} ds^2=c^2dt^2-dx^2-dy^2-dz^2 \end{aligned}$$

where c is the speed of light. This provided the most suitable mathematical model for the special theory of relativity introduced only a few years earlier, in 1905, by Albert Einstein (1879–1955); this model is now referred to as Minkowski space. Incidentally, Minkowski had been among the teachers of Einstein, although he was unimpressed by the mathematical ability of his early student.
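
The physical content of this metric is that the quantity ds², not time or distance separately, is what all inertial observers agree upon. A small Python sketch (with c = 1 and an illustrative boost velocity) checks this invariance numerically under a Lorentz boost along the x-axis:

```python
import math

def interval(dt, dx, dy, dz, c=1.0):
    """Minkowski interval ds^2 = c^2 dt^2 - dx^2 - dy^2 - dz^2."""
    return c**2 * dt**2 - dx**2 - dy**2 - dz**2

def boost_x(dt, dx, v, c=1.0):
    """Lorentz boost with velocity v along the x axis."""
    g = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return g * (dt - v * dx / c**2), g * (dx - v * dt)

dt, dx, dy, dz = 2.0, 1.0, 0.5, 0.25
dt2, dx2 = boost_x(dt, dx, v=0.6)
print(interval(dt, dx, dy, dz))    # 2.6875
print(interval(dt2, dx2, dy, dz))  # 2.6875 again: the interval is invariant
```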

Afterward, Einstein sought to expand his theory to account for the gravitational field; he achieved a basic outline of his new theory by the summer of 1912, but he lacked sufficiently sophisticated mathematical tools to develop it completely. During this time, however, he became reacquainted with an old classmate from Zurich who had since become a professor of mathematics, and who introduced him to Riemannian geometry and more generally to differential geometry, which Einstein referred to as tensor calculus. After more than 3 years of hard work, in a paper completed on November 25, 1915, Einstein derived the gravitational field equations

$$\displaystyle \begin{aligned} R_{\mu\nu}=kT_{\mu\nu}+\frac{1}{2}Rg_{\mu\nu} \end{aligned}$$

where Rμν is the Ricci tensor, Tμν is the stress-energy tensor, R is the scalar curvature, gμν is the metric tensor, and k is a constant related to the gravitational constant and the speed of light. With these equations in hand, Einstein remarked that the logical construction of general relativity was now complete.

Although Einstein had completed his derivation of the general theory of relativity in 1915, his work was published only the next year. It is fascinating that at almost the exact same time, the German mathematician David Hilbert obtained the same gravitational field equations along a different line of thought. Hilbert took an axiomatic approach based on the theory of invariants for continuous groups developed by Emmy Noether. He submitted his paper to the Göttingen Academy of Sciences on November 20, 1915, 5 days before Einstein completed his own.

On the basis of his theory of general relativity, Einstein predicted the existence of gravitational waves and black holes, which were confirmed experimentally in 2017 and 2019, respectively; more precisely, in 2017, scientists directly detected gravitational waves produced by a collision of binary neutron stars, and in 2019, the first photograph of a black hole was produced. These remarkable achievements were the result of a collaboration between many scientists from many different countries. Another consequence of general relativity is that spacetime taken as a whole is not uniform; it is uniform only across tiny regions. Mathematically, this nonuniformity can be expressed via the Riemannian metric

$$\displaystyle \begin{aligned} ds^2 = \sum_{\mu,~\nu=1}^{2}g_{\mu\nu}dx_{\mu}dx_{\nu}. \end{aligned}$$

The mathematical description of general relativity revealed for the first time the practical significance of non-Euclidean geometry and stands as one of the greatest achievements of applied mathematics in history. This perhaps does not quite place its realization on a level with the establishment by Newton of the law of universal gravitation, since Newton unlike Einstein also developed the entire mathematical basis for his new mechanics (Fig. 8.21).

Fig. 8.21
A photo of Einstein's home. A photo of Einstein is placed on the wall to the left. A curved staircase is to the right. A fire extinguisher is between the photo and staircase.

Einstein’s home; photograph by the author, Bern

In contrast with the theory of relativity, the development of quantum mechanics is not associated with the name of any single physicist but rather with an ensemble of scientists working around the same time. The pioneers were Max Planck (1858–1947), Einstein, and Niels Bohr (1885–1962) and subsequently Erwin Schrödinger (1887–1961), Werner Heisenberg (1901–1976), and Paul Dirac (1902–1984); the latter three established formulations of quantum mechanics in terms of wave mechanics, matrix mechanics, and operator theory, respectively. The integration of these various theories into a unified system called for new mathematical theories. Hilbert introduced analytical tools such as integral equations for this purpose, and John von Neumann (1903–1957) further extended what is known as the theory of Hilbert spaces to solve the eigenvalue problem in quantum mechanics. He also extended the spectral theory introduced by Hilbert to address the unbounded operators that frequently arise in quantum mechanics. This laid the rigorous mathematical foundations for the discipline.

In the second half of the twentieth century, there were further developments in theoretical physics that required applications from the abstract branches of pure mathematics; two well-known examples are gauge theory and superstring theory. In 1954, the Chinese physicist Yang Chen-Ning (1922-), who shared a Nobel Prize in 1957 with another Chinese physicist Tsung-Dao Lee (1926-), and the American physicist Robert Mills (1927–1999) introduced Yang-Mills theory, which proposes gauge invariance as the common feature of the four fundamental forces of nature (electromagnetic force, gravitational force, and the strong and weak forces), bringing back into the spotlight the theory of gauge fields which by that time had already been long established. They attempted to achieve through this theory a unification of the interactions between known forces. Mathematicians quickly observed that the necessary mathematical tools were already available in the form of the fiber bundles of differential geometry. The Yang-Mills equations were recognized as a set of partial differential equations, and research into these equations has promoted the further development of mathematics. Another bridge between pure mathematics and theoretical physics by way of Yang-Mills theory came from the Atiyah-Singer index theorem, proved in 1963 and subsequently determined to have important applications in Yang-Mills theory. The research areas involved in this topic include analysis, topology, algebraic geometry, partial differential equations, functions of several complex variables, and other core disciplines in pure mathematics, a remarkable instance of the unity of modern mathematics.

Superstring theory, and string theory more generally, emerged in the 1980s. This theory views the elementary particles as a kind of stretched, one-dimensional, stringlike massless form, about 10⁻³³ centimeters in length (i.e., on the order of the Planck length), in place of the dimensionless points in spacetime that feature in other theories. This theory takes aim at a unified mathematical description of gravitation, quantum mechanics, and elementary particle interactions and has become one of the most active areas of collaboration between mathematicians and physicists. In particular, the mathematics involved includes differential topology, algebraic geometry, differential geometry, group theory, infinite-dimensional algebra, complex analysis, the moduli spaces of Riemann surfaces, and so on; countless physicists and mathematicians have now associated themselves with this research.

Biology and Economics

Outside of physics, mathematics has also played an important role in other disciplines in the natural sciences and social sciences. For reasons of space, we limit our discussion here to mathematics in biology and mathematical economics as representative examples. Modern biology is a younger discipline than physics, one that took off in earnest only after the invention of the microscope in the seventeenth century; alongside physics, it is one of the two most important disciplines within natural science. The introduction of mathematical methods to research in biology was also relatively slow to get off the ground. The story begins at the start of the twentieth century, when the versatile British mathematician Karl Pearson (1857–1936) began to apply statistics to the study of problems in genetics and the theory of evolution. In 1901, he founded the journal Biometrika, the first journal in the discipline of biomathematics.

In 1926, Italian mathematician Vito Volterra (1860–1940) proposed the system of differential equations

$$\displaystyle \begin{aligned} \begin{cases} \frac{dx}{dt} = ax-bxy \\ \frac{dy}{dt} = cxy - dy \end{cases} \end{aligned}$$

as a successful model of the dynamics of fish populations in the Mediterranean Sea. Here, x represents the number of small prey fish and y the number of large carnivorous fish that eat them. These equations, known also as the Lotka-Volterra equations, set a precedent for the use of differential equations in biological modelling (Fig. 8.22).
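
The qualitative behavior of the model, with predator and prey populations chasing each other in perpetual cycles, is easy to see numerically. A short Python sketch using a fixed-step fourth-order Runge-Kutta integrator (the parameter values here are illustrative, not Volterra’s historical fits):

```python
def lotka_volterra(a, b, c, d, x0, y0, dt=0.01, steps=5000):
    """Integrate dx/dt = ax - bxy, dy/dt = cxy - dy by 4th-order
    Runge-Kutta with a fixed step; returns the (x, y) trajectory."""
    def f(x, y):
        return a * x - b * x * y, c * x * y - d * y
    x, y = x0, y0
    traj = [(x, y)]
    for _ in range(steps):
        k1 = f(x, y)
        k2 = f(x + dt * k1[0] / 2, y + dt * k1[1] / 2)
        k3 = f(x + dt * k2[0] / 2, y + dt * k2[1] / 2)
        k4 = f(x + dt * k3[0], y + dt * k3[1])
        x += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
        y += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
        traj.append((x, y))
    return traj

path = lotka_volterra(a=1.0, b=0.5, c=0.2, d=0.8, x0=4.0, y0=2.0)
print(path[0], path[-1])  # the populations oscillate rather than settle
```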

Fig. 8.22
A photo of Sir Andrew Huxley.

Biologist Sir Andrew Huxley, grandson of the physiologist Thomas Henry Huxley and brother to novelist Aldous Huxley

In 1953, 2 years after Hartline and Ratliff introduced their model, the American biochemist James Watson (1928-) and the British biophysicist Francis Crick (1916–2004) discovered the double helix structure of DNA (deoxyribonucleic acid); this not only marked the birth of molecular biology as a discipline but also introduced abstract topology as a tool in biology. Since the double helix strands exhibit winding and kinking under the gaze of the electron microscope, a sub-branch of algebraic topology known as knot theory came into play, fulfilling a prediction made by Gauss more than a century earlier. In 1984, the New Zealand mathematician Vaughan Jones (1952–2020) established the Jones polynomial as an invariant of an oriented knot, which has proved useful to biologists for the classification of knots observed in the structure of DNA. Jones himself received the Fields Medal in 1990 for his work (Fig. 8.23).

Fig. 8.23
A photo of Watson and Crick who look up at the structure of D N A. Crick to the right points at the structure and Watson sits to the left.

Watson and Crick display their DNA model

Watson and Crick were awarded the Nobel Prize in Physiology or Medicine in 1962, and since the significance of their discovery has still not been fully unraveled, I would like to say here a bit more about it. We contrast the scope of various disciplines: physics and classical mechanics take as their object primarily the macroscopic world, and the importance of the internal structure of atoms is seen also at the level of the large via the tremendous energy of nuclear fusion and fission; the objects of biology, such as cells and genes, on the other hand are mainly microscopic. Darwin’s theory of evolution can be compared to Galileo’s law of free fall insofar as they express the external life, motion, and development of things. On the other hand, Newton’s law of universal gravitation introduced the internal laws and causes governing the motions of objects, even the universe. The corresponding achievement in biology is precisely the discovery of the double helix structure of DNA, which reveals the internal mysteries of life. Watson and Crick announced this monumental result at the Eagle Pub in Cambridge, where they were frequent patrons alongside their various colleagues.

We discuss finally another pair of recipients of the Nobel Prize in Physiology or Medicine; in 1979, it was awarded to the South African-born American physicist Allan M. Cormack (1924–1998) and the British electrical engineer Sir Godfrey N. Hounsfield (1919–2004), both of them nonspecialists in biology. While he was working part-time in the radiology department at a hospital in Cape Town alongside his regular job as a physics lecturer, Cormack became interested in X-ray imaging of human soft tissue and tissue layers of different densities. After he began teaching in the United States, he established the mathematical basis for computerized scanning, specifically a formula for determining the amount of X-ray absorption in different human tissues. This formula was rooted in integral geometry and laid the theoretical foundations for digital tomography, which prompted Hounsfield to invent the first computerized tomography scanner (CT scanner), which achieved profound success in clinical trials (Fig. 8.24).

Fig. 8.24
A photo of John Nash.

John Nash, the mathematician portrayed in the film A Beautiful Mind

Leaving biology aside, we turn next to mathematical economics. This discipline was introduced by the Hungarian mathematician John von Neumann, who coauthored a book entitled Theory of Games and Economic Behavior in 1944, in which he proposed a mathematical model of competition and its application to problems in economics. A full half-century later, the American mathematician John Nash (1928–2015) and the German economist Reinhard Selten (1930–2016) shared the Nobel Prize in Economics for achievements in game theory. Nash was the subject of the successful film A Beautiful Mind, and he developed the concept of Nash equilibrium as an attempt to explain the dynamics of conflict and action between competitors. In the last year of his life, Nash was awarded the highest honor in mathematics, the Abel Prize, for his contributions to the theory of nonlinear partial differential equations.

Two further, relatively elementary contributions came from the Soviet mathematician and economist Leonid Kantorovich (1912–1986), who created the discipline of linear programming, and the Dutch-American mathematician Tjalling Koopmans (1910–1985), who studied in particular the relationship between inputs and outputs in production. They shared the Nobel Prize in Economics in 1975 for their contributions to the theory of optimal resource allocation. More profound mathematics began to appear in economic applications as well: the French-born American economist Gerard Debreu (1921–2004) and the American economist Kenneth Arrow (1921–2017) introduced tools from topology into economics, in particular the theories of convex sets and fixed points. Following upon their research on equilibrium price theory, others added additional abstract mathematical concepts to the toolkit, including differential topology, algebraic topology, the theory of dynamical systems, and global analysis. These two also received Nobel Prizes in Economics, but many years apart from one another: Arrow in 1972 and Debreu in 1983.

Since the 1970s, stochastic analysis has emerged as a fundamental tool in economics. In particular, the American economist Fischer Black (1938–1995) and the Canadian-American economist Myron Scholes (1941-) developed the Black-Scholes model, which reduces the pricing of stock options to the solution of a stochastic differential equation and yields the Black-Scholes formula, an option pricing formula consistent with real market behavior. Previously, investors had struggled to determine the value of future options precisely, but with the introduction of this formula and the inclusion of the risk premium in the price of the stock, the complexity and risk of investing in options diminished significantly. Following upon their work, the American economist Robert C. Merton (1944-) removed many of the restrictions on this model, expanding the scope of its application to other areas of financial activity, such as residential mortgages. The Nobel Prize in Economics was jointly awarded to Merton and Scholes in 1997 for this work.
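To make the formula concrete, here is a minimal sketch in Python of the Black-Scholes price of a European call option, using only the standard library; the parameter values in the final line are purely illustrative and not drawn from the text.

```python
# A minimal sketch of the Black-Scholes call-option formula (standard library only).
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Cumulative distribution function of the standard normal, via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    # S: spot price, K: strike, T: time to maturity in years,
    # r: risk-free rate, sigma: volatility of the underlying stock.
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Illustrative values: a one-year call, spot 100, strike 105, 5% rate, 20% volatility.
print(round(black_scholes_call(100, 105, 1.0, 0.05, 0.2), 4))
```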

However, the development of the world economy in the twenty-first century has been significantly affected by the subprime mortgage crisis in the United States and the global financial crisis it precipitated in 2008. Under poor credit conditions, many people could not obtain bank loans as they would under normal circumstances. As a result, many leading lending institutions began to issue loans under looser credit requirements but at higher interest. Such subprime loans involve a greater risk of default, compounded by the derivative products based on them: the relevant departments were generally reluctant to retain this risk themselves and instead sold packaged deals to investment banks and even insurance or hedging institutions. The derivative products became invisible and intangible, and their prices and packaging schemes were inaccessible to estimation by ordinary human judgment; all of this required and encouraged the development of a new branch of mathematics, which became known as financial mathematics or quantitative finance.

The pricing process of derivatives involves two especially important parameters, the discount rate and the default probability. The former is typically modeled by a stochastic differential equation, and the latter by way of a Poisson probability distribution. The global financial crisis made it clear that these and other methods related to pricing and estimation were in need of refinement. In the 1990s, the Chinese mathematician Peng Shige (1947-) and the French mathematician Étienne Pardoux (1947-), who were born in the same year, collaborated to develop the theory of backward stochastic differential equations, which has become an important tool for the study of the pricing of financial products. Around the turn of the eighteenth century, Jacob Bernoulli had remarked that anyone who carries out research in physics without understanding mathematics is actually investigating without sense. In the twenty-first century, this has also proven to hold for the financial and banking industries. Citibank, based in New York City, has claimed that some 70% of its business depends on mathematics, emphasizing that it could not survive without this dependence.
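As a toy illustration of the second parameter (and nothing more than that), suppose defaults arrive according to a Poisson process with intensity lam, so that the probability of surviving to time t is exp(-lam*t); a defaultable zero-coupon bond can then be priced by discounting its expected payoff. For simplicity, the sketch below holds the discount rate constant rather than modeling it by a stochastic differential equation, and all the numbers are invented.

```python
# A toy sketch of default-risk pricing under a Poisson default process.
from math import exp

def survival_probability(lam, t):
    # P(no default in [0, t]) when defaults arrive with Poisson intensity lam.
    return exp(-lam * t)

def risky_bond_price(face, r, lam, t, recovery=0.0):
    # Discounted expected payoff of a zero-coupon bond paying `face` at time t
    # if no default occurs, and `recovery * face` otherwise; r held constant here.
    p_survive = survival_probability(lam, t)
    expected_payoff = face * (p_survive + recovery * (1 - p_survive))
    return exp(-r * t) * expected_payoff

print(risky_bond_price(face=100, r=0.03, lam=0.02, t=5))
```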

Finally, we return to the linear programming theory of Kantorovich to remark that it was one of the earliest mature research branches of operations research, the study of analytic methods grounded in mathematics and logic for decision-making and organizational management in order to obtain optimal results. Operations research was born as a scientific discipline in the flames of World War II, alongside the applied mathematical disciplines of cybernetics and information theory, founded by the American mathematicians Norbert Wiener (1894–1964) and Claude Shannon (1916–2001), respectively. Both Wiener and Shannon were professors at MIT until their retirements and served as influential public figures. Wiener had received his doctorate from Harvard at the age of 18 and later published two autobiographies, Ex-Prodigy: My Childhood and Youth and I Am a Mathematician. Shannon is widely considered the preeminent founding figure of the age of digital communication.

As formulated by Wiener, cybernetics takes as the object of its study the laws of control and communication that govern both machines and living things, and the maintenance in such a dynamic system of stability or equilibrium under changing environmental conditions. He coined the name cybernetics for his new research program, borrowing from the Greek word κυβερνητική, meaning governance and derived in turn from the word for navigation or steering. Plato used this word often in his writings to describe the art of managing and governing human affairs. Information theory refers to the use of mathematical statistics to study the measurement, transmission, and transformation of information. It is important to point out, however, that information in this context has a specialized meaning: it refers to a specific order or degree of non-randomness that can be measured and quantified as precisely as mass, energy, and other such physical quantities.
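This quantitative sense of information is exactly Shannon entropy, measured in bits; the short sketch below computes it for a few simple probability distributions.

```python
# Shannon entropy: the degree of unpredictability of a source, in bits.
from math import log2

def entropy(probabilities):
    # H = -sum(p * log2(p)), skipping zero-probability outcomes.
    return -sum(p * log2(p) for p in probabilities if p > 0)

print(entropy([0.5, 0.5]))   # 1.0 bit: a fair coin
print(entropy([0.9, 0.1]))   # ~0.469 bits: a biased, more predictable coin
print(entropy([1.0]))        # 0.0 bits: no uncertainty at all
```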

Computers and Chaos Theory

As a definition, the word computer refers to any automated electronic device capable of storing and processing data according to programmatic instructions and returning the results of its operations as output. Throughout the history of computing, the most important figures contributing to its innovations have almost all been mathematicians. In China, computer science majors were for the most part enrolled in mathematics departments through to the end of the 1970s, just as in the past, say in the time of Kant, mathematics was considered part of the philosophy department. Today, most universities have one or two schools dedicated to computer science. It has long been a human desire to replace manual computation with automated machines; perhaps the best early example is the abacus, which may not have been invented by the Chinese people but enjoyed its widest use over the longest period of time in China. A book published in 1371, during the Ming dynasty, contains illustrations of a ten-column abacus; in fact, its invention was much earlier. Later, the mathematician Cheng Dawei (1533–1606) published his Suanfa Tongzong (算法統宗, General Source of Computation Methods), in which he detailed the design and methods of use of the abacus, marking its technological maturity. This book spread to Korea and Japan, where the abacus also gained widespread popularity.

The first to propose a mechanical calculating machine was the German scientist Wilhelm Schickard (1592–1635), who described his idea in a letter to Johannes Kepler. The first working mechanical calculator, capable of addition and subtraction, was invented by Pascal in 1642, and 30 years later, Leibniz created a calculator further capable of multiplication, division, and root extraction. A key step in the transition toward modern computing was taken by the English mathematician Charles Babbage (1791–1871), who had the bold insight to make the arithmetic operations of his device programmable. (In number theory, there is also a congruence relation for binomial coefficients named after Babbage.) The Analytical Engine that Babbage proposed in 1837 as a successor to his earlier Difference Engine was divided into a storage component and a processing component, together with a special mechanism for controlling the operation of its program. He envisaged for it the possibility of carrying out various arithmetical operations according to instructions given as zeros and ones on punched cards; this was the prototype for the modern electronic computer (Fig. 8.25).

Fig. 8.25
A stamp displays the side view of Charles Babbage. The text below reads as Babbage Computer, and a symbol of a woman's face with the number 22 is at the top left. Numbers 0 to 9 are marked on Babbage's head.

Charles Babbage on a British postage stamp

In a tragic turn, Babbage devoted the remainder of his life and most of his property to the promulgation of his ideas and inventions, to the extent that he was eventually compelled to tender his resignation as Lucasian Professor at Cambridge, but few people could understand his thinking. He seems to have had only three true supporters: his son, Major General Henry Prevost Babbage (1824–1918), who continued the struggle to promote the Difference Engine and Analytical Engine even after the death of his father; Luigi Menabrea (1809–1896), a professor of mechanics and construction at the University of Turin who later became Prime Minister of Italy; and Ada Lovelace (1815–1852), daughter of the poet Lord Byron. Ada was the only daughter of Byron and his wife, who separated a month after her birth. She compiled calculation programs for various functions and can therefore be regarded as the first modern programmer. Due however to the limitations of the times, there were huge technical obstacles to the implementation of the Analytical Engine, and the ingenious and forward-looking idea that Babbage dreamed up, the control of digital computers by general-purpose programs, would not be realized for more than a century.

From the beginning of the twentieth century, the rapid development of science and technology brought with it a mountain of new problems for data analysis. In particular, the computing needs of the military during World War II lent urgency to the requirement for improved computing speed. The first steps were the replacement of mechanical gears with electrical components. In 1944, the American physicist and mathematician Howard Aiken (1900–1973), working at Harvard University, designed and manufactured the first practical general-purpose programmable computer, which occupied a space of 170 square meters. This first machine made only partial use of electronic components, and he quickly followed it with another computer built from electromechanical relays. Meanwhile, at the University of Pennsylvania, computers were produced using vacuum tubes in place of relays. The first programmable, electronic, general-purpose, digital computer was the ENIAC (Electronic Numerical Integrator and Computer), produced the following year in 1945, a thousand times faster than the computer made by Aiken (Fig. 8.26).

Fig. 8.26
A photo presents John von Neumann who stands beside his big computer.

John von Neumann with his computer

In 1947, von Neumann arrived at the idea of replacing the external programs used by the ENIAC with internally stored programs. Computers made after this model operate according to stored instructions, and their programs can be modified by making changes to these instructions. A year earlier, von Neumann had coauthored a paper proposing a comprehensive structure for parallel computation and stored-program computers, ideas that had a profound impact on the design of later digital computers. John von Neumann was born in Budapest, Hungary, and became an extraordinarily prolific and versatile thinker; he made remarkable contributions to mathematics, physics, economics, meteorology, explosion theory, and computing. He is said to have met one of the designers of ENIAC while they were both waiting at a station for a train; the latter caught his attention and asked him to explain some technical problems related to computing (Fig. 8.27).

Fig. 8.27
A photo presents a statue of Alan Turing who sits on a bench. A small bouquet of flowers is near his left hand and a stone is placed in his right hand. A name plate with his date of birth and death, and few more details is placed between his feet on the floor.

Bronze statue of Alan Turing; photograph by the author, in Manchester

Another outstanding contribution to the concept and development of computer design came from the British mathematician Alan Turing (1912–1954). In order to solve theoretical problems in mathematical logic, in particular consistency and the problem of the mechanical determination of solvability or computability in mathematics, Turing introduced the concept of an abstract automatic machine (now referred to as a universal machine), an idealized model from which computers have not fundamentally departed to this day; a small simulation follows the list below. This model comprises:

  • Input and output (infinite memory tape divided into cells and a machine head capable of reading and writing)

  • Memory (a table)

  • Central processing unit (or control mechanism)
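Here is the promised simulation, a minimal sketch of Turing's model in Python; the instruction table, which merely appends a 1 to a string of 1s in unary, is a made-up illustration rather than anything taken from Turing's paper.

```python
# A minimal Turing machine: a tape, a read/write head, and an instruction table.
def run_turing_machine(table, tape, state="start", steps=1000):
    tape = dict(enumerate(tape))   # sparse tape: cell index -> symbol
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")              # "_" stands for a blank cell
        write, move, state = table[(state, symbol)]
        tape[head] = write
        head += {"R": 1, "L": -1}[move]
    return "".join(tape[i] for i in sorted(tape))

# (state, symbol) -> (symbol to write, head move, next state)
table = {
    ("start", "1"): ("1", "R", "start"),  # skip over the existing 1s
    ("start", "_"): ("1", "R", "halt"),   # write one more 1, then halt
}
print(run_turing_machine(table, "111"))   # -> "1111"
```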

Turing also investigated the question of an artificial thinking machine, making him an early pioneer in the field of artificial intelligence. He proposed a famous standard for machine intelligence known as the Turing test, which requires that at least 30% of a team of human interrogators be unable to correctly identify the test subject as human or machine. His life had a tragic end, unfortunately: he was persecuted and eventually prosecuted for his sexuality and died of poisoning after having eaten an apple laced with cyanide shortly before his 42nd birthday. In 1966, the Association for Computing Machinery established the Turing Award, to this day the highest distinction in computer science. His death seems also to have inspired the logo of Apple Inc., founded in 1976 and famous around the world today for its computers and iPhones; the bitten apple suggests that only imperfection can drive progress and the pursuit of perfection. Since 2021, Turing has appeared on the £50 banknote.

An interesting influence on Turing during his time at Cambridge was the mathematician G.H. Hardy (1877–1947), a natural leader in the mathematics department who is credited with establishing the Cambridge school of number theory. Hardy was obsessed with the Riemann hypothesis and proved that there are infinitely many zeros along the critical line (the line in the complex plane with real part equal to 1/2). Turing wrote the last research paper of his life on the Riemann hypothesis, in which he proposed a numerical method for verifying it and implemented the method on an early computer. He seems to have believed that the Riemann hypothesis is false and hoped to find a nontrivial zero off the critical line through his method. Of course, he did not succeed; perhaps if he had, it would have furnished him with some encouragement and prevented him from succumbing to despair at the end of his life.
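A reader can repeat a tiny portion of such a verification today in a few lines, using the third-party mpmath library (an assumption of this sketch; it is not part of the Python standard library): the first few nontrivial zeros all turn out to have real part exactly 1/2, as the Riemann hypothesis predicts.

```python
# Checking numerically, in the spirit of Turing's computation, that the first
# nontrivial zeros of the Riemann zeta function lie on the critical line.
# Requires the third-party mpmath library (pip install mpmath).
from mpmath import zetazero

for n in range(1, 6):
    rho = zetazero(n)             # the n-th zero in the upper half-plane
    print(n, rho.real, rho.imag)  # the real part prints as 0.5 each time
```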

Through four successive generations of digital computers, from tubes and transistors to integrated circuits and eventually very large-scale integrated circuits, binary switches have remained a constant, and this will not change even if electronic computers are someday replaced, for example, by quantum computers (a recently developing kind of physical device that uses the laws of quantum mechanics to perform mathematical and logical operations at high speed and to store and process quantum information; the corresponding discipline is called quantum computing). The binary switch is a natural extension of the system of symbolic logic developed by the British mathematician George Boole (1815–1864) in the nineteenth century. Boole completed work dreamed of by Leibniz two centuries earlier: the creation of standard ideographic symbols for simple or atomic concepts and their combination into complex ideas. He was born into a poor family, the son of a cobbler, and his knowledge of mathematics came mainly through self-study, which eventually enabled him to earn a post as professor of mathematics at Queen’s College, Cork, in Ireland and election as a Fellow of the Royal Society. His life was cut short at the age of 49 by pneumonia brought on by a walk in heavy rain. Earlier in the same year his youngest daughter, Ethel Lilian Voynich (née Boole, 1864–1960), who went on to write the novel The Gadfly, was born.
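Boole's two-valued calculus survives unchanged inside every programming language; the following trivial sketch prints the truth tables of the three basic operations from which all binary switching circuits are built.

```python
# The atoms of Boolean algebra: conjunction, disjunction, and negation.
from itertools import product

for p, q in product([False, True], repeat=2):
    print(p, q, "| and:", p and q, "| or:", p or q, "| not p:", not p)
```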

As a shining example of the applicability of abstract mathematics, the computer has also become a powerful tool for mathematical research and even a source of new problems for mathematical inquiry, leading to the birth of a new branch of mathematics: computational mathematics. This branch is concerned with the design and improvement of various numerical methods, as well as with the problems of error analysis, convergence, and stability related to these calculations. Von Neumann appears again here as an important early founder of this research area. He introduced a new method for numerical calculation known as the Monte Carlo method and led a team of researchers to use the ENIAC to accomplish numerical weather prediction for the first time, the centerpiece of this effort being the solution of the relevant hydrodynamical equations. In the 1960s, the Chinese mathematician Feng Kang (1920–1993) created another method of numerical analysis known as the finite element method, independently of simultaneous research efforts in the Western world. The finite element method has found applications in the calculations involved in aviation, the study of electromagnetic fields, and the design of bridges (Fig. 8.28).
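The idea of the Monte Carlo method can be conveyed in a few lines: estimate a quantity by random sampling. The classic toy example below (an illustration only, not von Neumann's weather computation) approximates π from the fraction of random points that land inside a quarter circle.

```python
# Monte Carlo estimation of pi: sample points in the unit square and count
# how many fall inside the quarter disk x^2 + y^2 <= 1.
import random

def monte_carlo_pi(n=1_000_000):
    inside = sum(1 for _ in range(n)
                 if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4.0 * inside / n   # area ratio (pi/4), times 4

print(monte_carlo_pi())       # close to 3.14159, improving as n grows
```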

Fig. 8.28
A 2-D illustration presents a different colored framework. It comprises a vertical rectangle at the center which is enclosed by parallel half rectangular frames at the top and bottom. Each rectangular frame is shaded with different colors.

Illustration of the four-color theorem of maps

In the fall of 1976, two mathematicians at the University of Illinois, Kenneth Appel (1932–2013) and Wolfgang Haken (1928-), proved with the aid of computers a result known as the four-color theorem for maps, a problem with a history stretching back more than a century and perhaps the most inspiring example of the use of computers to solve a big problem in mathematics. The four-color theorem was proposed as a conjecture in 1852 by the British mathematician Francis Guthrie (1831–1899), who had just earned a double bachelor’s degree at University College London. As part of his research, he undertook to color a map of the counties of England and noticed that four colors were sufficient to complete the task in such a way that no two neighboring counties shared the same color. But neither he nor his younger brother, at that time still a student, could prove that four colors always suffice, and his well-established teachers, De Morgan and Hamilton, were also defeated by the problem. Arthur Cayley heard of the problem and presented it in a report to the London Mathematical Society, and it became a famous problem in mathematics.
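Phrased in modern terms, the problem asks for a proper coloring of a planar graph whose vertices are regions and whose edges join neighbors. The sketch below is only a greedy heuristic applied to a made-up four-county map; it is emphatically not the technique of the Appel-Haken proof, and in general a greedy strategy may need more than four colors.

```python
# Greedy graph coloring: give each region the smallest color its neighbors lack.
def greedy_coloring(neighbors):
    colors = {}
    for region in neighbors:
        used = {colors[r] for r in neighbors[region] if r in colors}
        colors[region] = next(c for c in range(len(neighbors)) if c not in used)
    return colors

# A hypothetical map of four mutually bordering-or-not counties.
counties = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["B", "C"],
}
print(greedy_coloring(counties))   # e.g. {'A': 0, 'B': 1, 'C': 2, 'D': 0}
```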

Since that time, computers have become a powerful tool for the study of pure mathematics. Perhaps the most outstanding examples of this are the discoveries of solitons and of chaos theory, the two core problems of nonlinear dynamics, which can be described as the two beautiful flowers of mathematical physics. The history of solitons predates the formulation of the four-color theorem. In 1834, the British engineer John Scott Russell (1808–1882) followed on horseback the water waves caused by the sudden stop of a ship in a canal and observed that they largely maintain their shape and speed in the course of their propagation. He reproduced this effect in a water tank and named them waves of translation; today, they are referred to as solitons or solitary waves. More than a century later, mathematicians discovered that two solitons remain solitons upon collision, behaving like particles, which explains the etymology of their name. Such waves appear in large numbers in optical fiber communication, the activity of the Great Red Spot on Jupiter, nerve impulse conduction, and other fields. Chaos theory is another powerful tool for the description of irregular phenomena in nature and is considered one of the major revolutions in modern physics, following relativity and quantum mechanics.

The rapid development of computer science has not only been inseparable from mathematical logic but has also promoted the transformation and even the creation of related branches of mathematics. A characteristic example of the former is combinatorics, while a field of the latter type is fuzzy mathematics. The origins of combinatorics can be traced back to the ancient Chinese legend of the Luo Shu. The term combinatorics was first proposed by Leibniz in his Dissertation on the Art of Combinations (Dissertatio de arte combinatoria). Over time, mathematicians resolved some substantial problems in this field, such as the Seven Bridges of Königsberg problem (which gave birth to graph theory, the main branch of combinatorial mathematics), the 36 officers problem, Kirkman’s schoolgirl problem, and the problem of Hamiltonian cycles. But since the second half of the twentieth century, problems of computer system design and of information storage and retrieval have injected the study of combinatorics with a new and powerful impetus.

In contrast with the long history of combinatorics, fuzzy mathematics is a truly young discipline: it was introduced only in 1965. Fuzzy mathematics was established as an alternative to classical set theory, in which every set is defined by its elements, and membership in a set is a clear and binary proposition, given for example by the characteristic function

$$\displaystyle \begin{aligned} \mu_{\mathcal{A}}(x)=\begin{cases} 1 \text{ if } x \in \mathcal{A} \\ 0 \text{ if } x \notin \mathcal{A} \end{cases}. \end{aligned}$$

In fuzzy logic, the characteristic function is replaced by a membership function satisfying $0 \leq \mu_{\mathcal{A}}(x) \leq 1$. In this case, $\mathcal{A}$ is called a fuzzy set, and $\mu_{\mathcal{A}}(x)$ the degree of membership of $x$ in $\mathcal{A}$. The values $\mu_{\mathcal{A}}(x) = 1$ and $\mu_{\mathcal{A}}(x) = 0$ of the classical theory correspond to 100% and 0% membership in $\mathcal{A}$, but such situations as $\mu_{\mathcal{A}}(x) = 0.2$, corresponding to 20% membership in $\mathcal{A}$, or $\mu_{\mathcal{A}}(x) = 0.8$, corresponding to 80% membership in $\mathcal{A}$, have no place in classical set theory.
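A minimal sketch makes the contrast vivid: membership in the fuzzy set of "tall people" below is a matter of degree, with the thresholds 160 cm and 190 cm chosen purely for illustration.

```python
# A fuzzy membership function: degree of belonging to the set of tall people.
def mu_tall(height_cm):
    if height_cm <= 160:
        return 0.0                      # definitely not tall
    if height_cm >= 190:
        return 1.0                      # definitely tall
    return (height_cm - 160) / 30       # linear ramp between the thresholds

for h in (150, 166, 184, 195):
    print(h, mu_tall(h))                # 0.0, 0.2, 0.8, 1.0
```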

Fuzzy mathematics was created in a paper by the mathematician Lotfi A. Zadeh (1921–2017), born in Azerbaijan but later based in Iran and eventually the United States. Since human thought encompasses both precise and fuzzy aspects, fuzzy mathematics has played an important role in the simulation of thought processes in artificial intelligence and in related aspects of modern computer design. As a branch of mathematics, however, fuzzy mathematics is not yet fully mature (Fig. 8.29).

Fig. 8.29
A photo of Lee Sedol playing against Alpha Go software at a tournament.

Lee Sedol does battle against AlphaGo in 2016

We now discuss artificial intelligence in more detail. The name and concept of artificial intelligence were first formally proposed at a research seminar held at Dartmouth College in 1956. Its main practical goal is to enable machines to carry out complex tasks that would ordinarily require human intelligence, including language and image recognition and processing, robotics, and so forth, drawing on tools from machine learning, computer vision, and other recent fields. The mathematical foundations of machine learning include statistics, information theory, and cybernetics, and the mathematical tools involved in computer vision include projective geometry, matrix and tensor algebra, and model estimation. Beginning especially in the 1970s, artificial intelligence was considered alongside space technology and energy technology as one of the three most cutting-edge technological areas of the twentieth century, and developments in artificial intelligence over the past half-century have been rapid and plentiful, as have its applications in various fields, with outstanding results. In the twenty-first century, artificial intelligence remains at the forefront, but the other two most cutting-edge technologies of our times are probably genetic engineering and nanoscience.

Artificial intelligence does not exhibit the same contours as human intelligence, but machines can carry out tasks that resemble human thought and may eventually surpass general human intelligence. One striking example of this was the 1997 defeat of the Azerbaijan-born Russian chess master Garry Kasparov (1963-) by the Deep Blue chess supercomputer developed by IBM. In 2016 and 2017, AlphaGo, developed by DeepMind Technologies, a subsidiary of Google, also defeated two world champions of Go, Lee Sedol (1983-) of South Korea and Ke Jie (1997-) of China. Advances in this area have benefited from the development of cloud computing, big data, neural network technology, and the progression of Moore’s law. At present, artificial intelligence has already surpassed human thought in terms of mechanical or logical reasoning, but its achievements in emotional cognition and decision-making remain very limited. Experts believe that artificial intelligence remains for the time being a mathematical problem and has not yet reached a stage of development so advanced as to require ethical discussion, as is the case, for example, for cloning technology.

We consider next cloud computing and big data. The cloud is a metaphor for the internet, and cloud computing refers to shared computing across a large number of servers distributed through the cloud. The user sends instructions to the service provider through his or her personal computer, and the service provider returns the result via a calculation that can be compared to a nuclear explosion of computing activity. Since the onset of the era of cloud computing, big data has received more and more attention as a mode of thought. The explosion of data and its analysis have replaced the traditional cognitive tools of experience and intuition, with an influence on decision-making in business, economics, and beyond. In 2013, the Austrian researcher Viktor Mayer-Schönberger (1966-) and Kenneth Cukier (1968-), an editor at The Economist, published a book entitled Big Data: A Revolution That Will Transform How We Live, Work and Think, which has proved a pioneering work in the development of big data. The authors pointed out, as their title suggests, that big data and the storm of information associated with it are transforming every aspect of our lives, thought, and work. Mayer-Schönberger believes that the core feature of big data is its predictive power, which suggests three subversive conceptual shifts: first, all the data rather than random samples; second, general direction rather than precise guidance; and third, correlation rather than causality. The last of these is equivalent to replacing the question why? with the question what?, which recalls also the traditional mode of thought of the Chinese people (Fig. 8.30).

Fig. 8.30
An illustration presents the Mandelbrot set. It resembles a circle with a curved surface to the right. A small circle on its left end and a few more patterns around its circumference are shaded. The above diagram is enclosed by shaded regions.

The Mandelbrot set

As we have seen, every leap forward in computer technology has been inseparable from the work of mathematicians, and at the same time, advances in computing have promoted new directions in mathematical research. We introduce here a final example of a wonderful interaction between computer science and geometry. In the twentieth century, there occurred two great developments in geometry: in the first half of the century, the study of finite-dimensional spaces was extended to infinite-dimensional spaces, and in the second half, integer-dimensional spaces were expanded to fractional-dimensional spaces. The latter refers to fractal geometry, which provides mathematical foundations for the emerging scientific discipline of chaos theory. The geometry of fractals was established through a study of self-similarity carried out by Benoit Mandelbrot (1924–2010), a Polish-born mathematician of Lithuanian Jewish descent with dual French and American nationality. The new features uncovered by this geometry include spotted, pitted, broken, twisted, winding, and kinked spaces, whose dimensionality is not necessarily measured in integers.

In 1967, Mandelbrot published How Long Is the Coast of Britain?. He had consulted the encyclopedias of Spain and Portugal, and of Belgium and the Netherlands, and found that the estimates given by these neighboring countries of their shared borders differed by up to 20%. It turns out that the length of a coastline or national border depends on the length of the scale used to measure it; for example, an observer attempting to estimate the length of a coastline from aboard a satellite will arrive at a smaller number than a surveyor working directly on its bays and beaches, who will in turn arrive at a smaller number than, say, an erudite snail crawling across its every pebble.

Common sense suggests that while each of these successive estimates is larger than the last, they should converge toward a certain value that represents the true length of the coastline. But Mandelbrot proved that this is not so, and in fact every coastline is in a certain sense infinite, as its bays and peninsulas give way to smaller and smaller sub-bays and sub-peninsulas. This is a kind of self-similarity, a special type of symmetry with respect to scale that is associated with recursion and patterns within patterns. It is not a new concept, and in fact, it has ancient roots in Western culture. As early as the seventeenth century, Leibniz had imagined that a single drop of water includes within itself an entire variegated universe. Later, the English poet and painter William Blake (1757–1827) wrote in his Auguries of Innocence:

To see a World in a Grain of Sand

And a Heaven in a Wild Flower

Hold Infinity in the palm of your hand

And Eternity in an hour.

Mandelbrot considered the simple function $f(z) = z^2 + c$, where $z$ is a complex variable and $c$ an arbitrary complex parameter. Starting from the initial point $x_0 = 0$ and iterating this function generates a sequence of points $x_1, x_2, x_3, \dots$, where $x_{n+1} = f(x_n)$. In 1980, Mandelbrot noticed that for some values of the parameter $c$, the values $x_n$ fall into a cyclical repetition or at least remain bounded, while for other values of $c$, the values $x_n$ explode without bound. Parameters of the former kind are called attractors, and those of the latter kind chaotic; the set of all attractors in the complex plane is now known as the Mandelbrot set (Fig. 8.31).
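The membership test just described takes only a few lines; in the sketch below, the escape radius 2 and the cap on iterations are the standard practical choices.

```python
# Does the orbit of 0 under f(z) = z*z + c stay bounded? Once |z| exceeds 2,
# the orbit is guaranteed to diverge, so c lies outside the Mandelbrot set.
def in_mandelbrot_set(c, max_iter=100):
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True          # bounded for max_iter steps: apparently in the set

for c in (0, -1, 1j, 1):
    print(c, in_mandelbrot_set(c))   # True, True, True, False
```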

Fig. 8.31
An illustration presents the Lorenz attractor. A dotted line forms two spiral circular loops on either side in slanting positions.

The Lorenz attractor and chaos butterfly

Since the complex iterative process requires a huge number of calculations even for relatively simple equations (or dynamical systems), research into fractal geometry and chaos theory can only be carried out with the aid of high-speed computers. The visuals associated with this subject have proved popular as book illustrations and even wall calendars, but the practical applications are many: fractal geometry and chaos theory have been used to describe and explore many irregularities in nature, such as the shape of coastlines, atmospheric movements, ocean turbulence, wildlife, and even the fluctuations of stocks and funds.
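As a small taste of such a computation, the sketch below integrates the Lorenz system behind the attractor of Fig. 8.31, using a crude Euler step and the classic parameter values σ = 10, ρ = 28, β = 8/3; a serious study would use a higher-order method, but even this is enough to watch the butterfly emerge.

```python
# Crude Euler integration of the Lorenz system (classic parameter values).
def lorenz_trajectory(x=1.0, y=1.0, z=1.0, dt=0.01, steps=10_000,
                      sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    points = []
    for _ in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        points.append((x, y, z))
    return points

trajectory = lorenz_trajectory()
print(trajectory[-1])   # the orbit wanders the two lobes without ever settling
```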

In its aesthetics, this new geometry also brings the hard sciences in line with the particularities of modern taste, in particular the return to wild, uncivilized, and natural forms that became popular among postmodern artists beginning in the 1970s. Mandelbrot expressed the view that satisfying art should not be fixed to any specific scale, or rather that it should contain attractive elements at every scale. As an antithesis to the boxy skyscraper, he pointed to the Palais des Beaux-Arts in Paris, with its sculptures and gargoyles, horns and jambs, and swirls of arches and cornices with gutter dents, all of which present some pleasing detail to an observer situated at any distance. As one approaches, the building itself seems to change, revealing ever new structural elements.

Mathematics and Logic

Russell’s Paradox

Since the twentieth century, the turn toward abstraction in mathematics has not only brought it into closer alignment with science and art but has also facilitated a resurgence in the dialogue between mathematics and philosophy, the third such period after their sympathetic harmony in Ancient Greece and again in seventeenth-century Europe. It is perhaps no coincidence that mathematics has also struggled through three periods of crisis corresponding to these moments in history. The first was the discovery of irrational numbers, or incommensurable quantities, in Ancient Greece, in contradiction with the doctrine that all numbers can be represented by integers or ratios of integers. The second occurred in the seventeenth century, when calculus ran up against serious theoretical obstacles, in particular the question whether an infinitesimal or vanishing quantity is identical with zero or in fact has some nonzero value. The problem is apparent: if it is zero, how can it appear as a divisor? But if it is not zero, how is it permissible to eliminate terms involving infinitesimal quantities?

Recall that it was the Pythagoreans who first discovered that the diagonal of a square with unit sides is neither an integer nor expressible as a ratio of integers. This discovery triggered the first crisis, and one legend has it that the response was so severe that a disciple of Pythagoras named Hippasus, credited with revealing the existence of irrational numbers, was thrown overboard into the Mediterranean Sea to drown for his offense. In a strange coincidence, Metapontum, the birthplace of Hippasus, was also the site of the murder of Pythagoras. In any case, the crisis was resolved some two centuries later by Eudoxus, who introduced a geometrical formulation of incommensurable quantities. According to Eudoxus, two line segments are said to be commensurable if there is some third segment that can simultaneously measure each of them, and otherwise incommensurable. For the side and diagonal of a square, there is no such third line segment, and they are therefore incommensurable with one another. As long as the existence of incommensurable quantities is admitted in geometry, the crisis is dissolved.

More than two millennia later, the birth of calculus introduced the second crisis of basic theoretical contradictions, sowing chaos within the foundations of mathematics. This crisis involved the definition of infinitesimal quantities, among the most basic concepts of calculus. In the course of a typical derivation, Newton would introduce an infinitesimal as a denominator by which to divide a quantity or expression; afterward, he would treat the infinitesimal as though it were zero and eliminate any terms still containing infinitesimal factors once the division was carried through. Although their application to mechanics and geometry left no doubt that the formulas obtained by this process were correct, the process itself is logically self-contradictory, and this problem was not clarified until the first half of the nineteenth century, when Cauchy developed his theory of limits. Cauchy treated the infinitesimal not as a fixed quantity but as a variable that becomes arbitrarily small, that is to say, a variable tending to zero.

After the advances in analytical rigor at the end of the nineteenth century, and in particular their crowning achievement in the birth of set theory, mathematicians believed that it should be possible to eliminate all crises, and even the possibility of crisis, from the foundations of mathematics once and for all. In 1900, Henri Poincaré even declared to the International Congress of Mathematicians in Paris that complete rigor had at last been achieved. But a new paradox in set theory, seemingly the simplest and clearest of theories, provoked a new debate concerning the foundations of mathematics and triggered its third crisis. In order to resolve this crisis, mathematicians turned to a deeper consideration of the basis of mathematics and undertook the development of mathematical logic, another important trend in pure mathematics in the twentieth century (Fig. 8.32).

Fig. 8.32
A photo of Bertrand Russell who holds a smoking pipe in his right hand.

The versatile Bertrand Russell

A key figure in this story is Bertrand Russell (1872–1970), who was born into an aristocratic family in England; his grandfather had twice served as Prime Minister of the United Kingdom. Russell lost both his parents by the age of 3, and the strict puritanical bent of his subsequent education made him suspicious of religion as early as the age of 11. He came to consider the world always through a skeptical eye, inclined to ask how much we know and do not know, and with what degrees of certainty and uncertainty. Around the onset of puberty, loneliness and despair began to take hold of his thoughts, and Russell struggled with suicidal impulses. In the end, it was an obsession with mathematics that enabled him to break free of his darker moods, and at the age of 18, he was admitted to Cambridge University after having spent the entirety of his previous schooling at home. He continued to search for perfect and definite goals for his mathematical ambitions, but during his final year, he became attracted to the writings of Hegel and turned to philosophy (Fig. 8.33).

Fig. 8.33
A photo of Alfred North Whitehead.

Russell’s teacher, Alfred North Whitehead

The most natural area of research for Russell was mathematical logic and the philosophy of mathematics, which had been established not long before as a unified discipline by the German philosopher and mathematician Gottlob Frege (1848–1925). Fortunately, Cambridge University offered both fertile ground and admirable colleagues for this pursuit. These included Alfred North Whitehead (1861–1947), a teacher and a friend; George Edward Moore (1873–1958), 1 year Russell’s junior; and later his brilliant student Ludwig Wittgenstein (1889–1951). Russell was proficient early on in mathematics and a passionate believer in the basic correctness of the scientific worldview, and on this basis, he identified for himself three major goals as a philosopher: first, to reduce the vanity and pretense to which human cognition is by nature subject to an absolute minimum and to express himself as simply as possible; second, to establish a link between logic and mathematics; and third, to find a path of inference from language to the world it describes. Each of these goals was eventually achieved with more or less success by Russell and his colleagues, setting the stage for analytic philosophy.

A significant factor in the wide reach of Russell’s influence was his natural ability as a popularizer. His philosophical prose is clear and beautiful, and many philosophers have been drawn first to the subject by way of his popular works, A History of Western Philosophy, Wisdom of the West: A Historical Survey of Western Philosophy, and even the somewhat more specialist work Human Knowledge: Its Scope and Limits. Russell was also prone to venture beyond the ivory tower in his writings and touch upon social, political, and moral issues, never shrinking from sensitive topics, which he addressed with passion. He was imprisoned twice, fined, and at one point dismissed from his position at Trinity College, Cambridge, for his controversial views and his activities as a conscientious objector. Nevertheless, he was awarded the Nobel Prize in Literature in 1950. Later recipients of this award have also included writers with a background in mathematics: the Russian novelist Aleksandr Solzhenitsyn (1918–2008), who won the Nobel Prize in Literature in 1970, and the South African-Australian writer J.M. Coetzee (1940-), who won it in 2003, both studied mathematics as undergraduates.

The paradox in set theory known as Russell’s paradox goes like this: consider the menagerie of sets as divided into two categories. The first kind consists of sets that do not contain themselves as elements; most ordinary sets are like this. The second kind consists of sets A satisfying A∈A. An example of a set of this kind would be the set of all sets, if such a thing exists. It is obvious that every set A belongs to one of these kinds. Let ℳ be the set of all sets of the first kind, that is, the set containing every set that does not contain itself. Then the natural question is, does ℳ belong to the first kind or the second kind? Suppose it belongs to the first kind; then ℳ does not contain itself, and it follows then from the definition of ℳ that ℳ∈ℳ, a contradiction. But suppose instead that it belongs to the second kind. Then ℳ∈ℳ, from which it follows again by the definition of ℳ that ℳ is not an element of ℳ, another contradiction (Fig. 8.34).

Fig. 8.34
A photo inside a Salon. A barber stands beside a person who is covered with a cloth and sits on the chair.

The village barber challenges the mathematicians

In 1919, Russell presented a colloquial version of this paradox known as the barber paradox:

Consider a village barber who shaves all those and only those who do not shave themselves. Does this barber shave himself?

In both the formal and informal case, it is apparent that the construction leads to an unresolvable contradiction, and this pointed to a flaw in the very foundations of set theory as it had been formulated up to that point. Recall that the second crisis in mathematics, the crisis of calculus, had been resolved through the development of the theory of limits. But the theory of limits was in turn based on set theory. Therefore, the appearance of Russell’s paradox in set theory formed an even deeper crisis for the foundations of mathematics.
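The contradiction can even be checked mechanically, though of course no computer is needed to see it; the toy sketch below simply confirms that neither answer to the barber question is consistent with the village rule.

```python
# The barber paradox as a consistency check: the rule demands that the barber
# shaves himself exactly when he does not shave himself.
def consistent(barber_shaves_himself):
    # The village rule, applied to the barber himself:
    # shaves_himself must equal (not shaves_himself).
    return barber_shaves_himself == (not barber_shaves_himself)

print(consistent(True), consistent(False))   # False False: no consistent answer
```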

In order to remove this paradox, mathematical logicians began to work toward an axiomatization of set theory. The first attempt was made by the German mathematician Ernst Zermelo (1873–1953), who put forward seven axioms supporting a set theory free from the known paradoxes. This system was further refined by the German-born Israeli mathematician Abraham Fraenkel (1891–1965), resulting in ZF set theory, which remains the most widely used axiomatic foundation for mathematics today (commonly with the somewhat controversial axiom of choice appended to it, forming ZFC set theory). This eased the severity of the crisis, although nobody can prove that the system itself is consistent; indeed, it follows from Gödel’s second incompleteness theorem that ZFC cannot prove its own consistency. Few mathematicians suspect that hidden inconsistencies lurk within ZFC, but there remain mysteries still to be unraveled in the foundations of mathematics. One particularly noteworthy example: the American mathematician Paul Cohen (1934–2007) proved in 1963 that the continuum hypothesis cannot be proved within the Zermelo-Fraenkel system, which, taken together with an earlier result of Kurt Gödel, shows that it is in fact independent of the Zermelo-Fraenkel axioms. This was a resolution of sorts to Hilbert’s first problem, and perhaps the most complete resolution of it that can be expected. Cohen received a Fields Medal in 1966 (Fig. 8.35).

Fig. 8.35
A photo of L E J Brouwer who holds his face with his left hand.

L.E.J. Brouwer, one of the founders of topology, who introduced and proved the fixed-point theorem

Further efforts to find a logical solution to the paradoxes of set theory led to the formation of three major philosophies of mathematics. The first is logicism, represented by Frege and Russell. The second, intuitionism, was introduced by the Dutch mathematician L.E.J. Brouwer (1881–1966), and the third, formalism, was represented by Hilbert. The formation and activity of these competing schools of thought elevated the question of the foundations of mathematics to an unprecedented height. Although these efforts failed to achieve a completely satisfactory resolution, they contributed substantially to the formation and development of the program of mathematical logic first initiated by Leibniz. Due to space limitations, we present only a few of the arguments associated with each school below.

The first position is logicism, as promoted by Russell and his school. According to logicism, mathematics is simply an extension of logic, and there is no need for any special axioms to demarcate the two. Rather, all of mathematics can be written in the language of logic: mathematical concepts are simply a certain family of logical concepts, and mathematical theorems can be derived entirely from the axioms of logic together with its rules of deduction. The development of logic itself proceeds entirely axiomatically. For the reconstruction of mathematics, the logicists first defined the theory of propositional functions and classes; proceeded to the construction of cardinal and ordinal numbers, in particular the natural numbers; and on this basis established the real and complex number systems, functions, and analysis; the contents of geometry can also be fully reproduced atop these foundations. In this way, mathematics becomes the mathematics of philosophers, with no special content of its own, only a special form of logical thought.

Intuitionism stands in direct contrast with logicism and holds that mathematics exists independently of logic in the mental activity of humans. The essence of intuitionism is its insistence on purely constructive approaches to mathematical objects. Brouwer in particular held that a proof that some mathematical object exists is valid only if it is accompanied by a construction, or a proof of constructibility, that can be carried out in finitely many steps. In set theory, for example, the intuitionists admit only the existence of finite constructible sets, in this way easily avoiding the paradoxes associated with such infinite sets as the set of all sets. One striking consequence of this perspective is that it necessitates the denial of the so-called law of the excluded middle, which states that every proposition either is true or has a true negation. It is also necessary to throw out the general theory of irrational numbers and even the well-ordering principle of the natural numbers, which states that every nonempty subset of the natural numbers, including of course every infinite subset, has a smallest element.

Hilbert replied: “Taking the principle of excluded middle from the mathematician would be the same, say, as proscribing the telescope to the astronomer or to the boxer the use of his fists.” As part of his criticism of intuitionism, Hilbert brought out his long-incubating program for the foundations of mathematics, referred to later as formalism. Its main idea is that the basic objects of mathematical thinking are the mathematical symbols themselves, rather than any meaning attached to them, say as geometrical or physical objects, and therefore that all of mathematics can and should be reduced to the laws governing the use of symbols in formulas, without any reference to their interpretation. Formalism absorbed some ideas from intuitionism; it retains the law of the excluded middle but permits a fundamental transfinite axiom that goes a certain way, with some restrictions, toward proving the consistency of the theory of natural numbers. Any hope for a more complete realization of this program, however, was dashed by the work of a young logician named Kurt Gödel, as we will discuss in more detail below.

Wittgenstein

But before we discuss Gödel’s incompleteness theorems, we turn to one of Bertrand Russell’s most brilliant students and collaborators, Ludwig Wittgenstein (1889–1951), who elevated the abstract discipline of logic in his works to the heights of pure philosophy. Wittgenstein was born in Vienna in 1889 into a wealthy Jewish entrepreneurial family, the youngest of eight children. He was educated at home until the age of 14 and only afterward underwent formal schooling, with some hardship. After studying engineering in Berlin, Wittgenstein enrolled at the Victoria University of Manchester in 1908 to pursue a doctorate. He focused on aeronautical projects and in 1911 patented a propeller design driven by small jet engines at the blade tips. All this fostered in him an interest in applied mathematics. His preference soon turned toward pure mathematics, and he became eager to understand more deeply the foundations of mathematics and eventually mathematical philosophy (Fig. 8.36).

Fig. 8.36
A photo of Ludwig Wittgenstein who stands against a wall with crisscross lines.

Philosopher Ludwig Wittgenstein

In 1912, the 23-year-old engineering student made his way to Cambridge, where he spent five semesters at Trinity College and quickly caught the attention of the philosophers Russell and Moore, both of whom regarded him as an intellectual equal. The outbreak of World War I, however, led Wittgenstein to volunteer for the Austrian army, serving as an artilleryman first on the eastern front and later on the Italian front, where he was captured by Italian soldiers in the winter of 1918. He lost contact with his connections at Cambridge, and Russell wrote in his Introduction to Mathematical Philosophy, published the following year, that it was not clear whether his former student was even still alive.

But in the same year, Wittgenstein wrote a letter to Russell from the prisoner-of-war camp where he was being held. He had read his former teacher’s book while in prison and believed that he had successfully answered several of the questions raised within it. Both teacher and student hoped to meet for a discussion of philosophy as soon as possible after his release. By this time, however, Wittgenstein was destitute, having been persuaded by the writings of the great Russian author Leo Tolstoy to renounce his wealth; he had divided his considerable inheritance among his siblings, on the condition that none of it be held in trust for him. Russell resorted to the sale of some furniture Wittgenstein had left behind in Cambridge in order to cover the travel expenses, and the two were able to meet at last in The Hague.

Wittgenstein is rare even among philosophers of genius for having developed two brilliant and highly original systems of thought at two completely different periods in his life, the two of them also very different from one another. The first of these is represented by his classic, the Tractatus Logico-Philosophicus, published in 1921, and the second by his Philosophical Investigations, published posthumously in 1953. Both of these works exhibit a refined and bold style of writing and thinking and exerted a profound influence on the course of subsequent philosophy. Apart from a short essay entitled Some Remarks on Logical Form, the Tractatus Logico-Philosophicus was the only work published by its author during his lifetime (Fig. 8.37).

Fig. 8.37
A cover page of a book titled Wittgenstein, Tractatus Logico-Philosophicus.

The Tractatus Logico-Philosophicus

This short book is an undisputed philosophical masterpiece, constructed out of its central premise that philosophy in the final analysis is nothing other than the study of language. The central question of the book is: how is it that language can be language? It was prompted by a fact so familiar that it rarely surprises anyone, but which surprised and amazed Wittgenstein: a person can understand a sentence that he or she has never heard before. He explains this fact as follows: a sentence or proposition that describes something creates a picture of the world being a certain way. A proposition has a certain meaning, and the world has a certain state, and these are phenomena of the same kind. Wittgenstein argued that all propositional schemes and all possible states of the world share one and the same logical form, which is simultaneously the form of representation and the form of reality.

The nature of this logical form itself however cannot be discussed; rather, it is meaningless in a very literal sense of this word. Wittgenstein makes this claim by way of a very famous analogy:

My propositions serve as elucidations in the following way: anyone who understands me eventually recognizes them as nonsensical, when he has used them— as steps— to climb beyond them. (He must, so to speak, throw away the ladder after he has climbed up it.)

There are certain things that simply cannot be spoken in language: the necessary existence of the simple elements of reality, the existence of the self of thought and will, and the existence of absolute values. These inexplicable things cannot even be imagined, because the limits of language are identical with the limits of thought. The last sentence of the book is associated with its author as a kind of motto: whereof we cannot speak, thereof we must be silent.

Language had become the central topic in philosophy starting with the work of Gottlob Frege, mentioned above as the founder of the modern philosophy of mathematics, who introduced the important distinction in language between sense and reference. Wittgenstein admired Frege deeply and visited him at the University of Jena in 1911 to show him some work on the philosophy of mathematics and logic. In fact, he hoped to study under Frege, who recommended instead that he attend the University of Cambridge to learn from Russell. Wittgenstein later credited these two figures, Frege and Russell, as the sources of his best ideas in philosophy. Frege was also an important influence upon the work of Russell and of Edmund Husserl; the former once communicated his deep admiration to Frege in a letter. Frege himself famously remarked, “Every good mathematician is at least half a philosopher, and every good philosopher is at least half a mathematician.”

Wittgenstein believed sincerely that philosophy is not merely a theory or body of doctrine, but rather an activity whose goal is to clarify the propositions of natural science and expose the emptiness of metaphysics. Since he believed that his work in this direction was completed in the Tractatus Logico-Philosophicus, he disappeared from philosophy after its publication and spent the next several years working as a primary school teacher in mountain villages in southern Austria, having also previously built for himself an isolated log cabin in the remote Norwegian countryside. Eventually, however, he returned to England and submitted the Tractatus Logico-Philosophicus to Cambridge University as his doctoral dissertation. Naturally, he earned his degree, and shortly afterward he was elected a Fellow of Trinity College (Fig. 8.38).

Fig. 8.38
A close-up photo of Wittgenstein's tomb. A few pieces of paper, coins, and leaves around are scattered on the tomb.

Wittgenstein’s tomb; photograph by the author, Cambridge

Wittgenstein remained there as a lecturer for a further 6 years, during which time he became increasingly dissatisfied with the contents of the Tractatus Logico-Philosophicus. He began to dictate to two of his students some new and original developments in his thought. He paid a visit to the Soviet Union and considered settling there before spending a year in his cabin in Norway. He then made his way back to Cambridge and succeeded to the chair in philosophy vacated by Moore. After World War II broke out, he became disgusted with professional philosophy and worked instead as a volunteer at Guy’s Hospital in London and then as a laboratory assistant at the Royal Victoria Infirmary in Newcastle upon Tyne. It was during this time that he began writing the Philosophical Investigations. After the conclusion of the war, he returned to Cambridge as a professor for a further 2 years before resigning at last and making his way to Ireland, where he spent 2 years finishing the book.

As for the Philosophical Investigations, although it is not so thoroughly devoted to logic as the Tractatus Logico-Philosophicus, it retains all the same a connection with mathematics. In this masterpiece, Wittgenstein abandoned the idea of a unified nature underlying the endless varieties of language. He compares language to games, observing that there is no property common to all games, only a certain family resemblance. When we consider all the various activities that we call games, there emerges a complex web of overlapping and crisscrossing similarities, sometimes broad, sometimes in specific details.

In the course of his elucidation of this argument, Wittgenstein introduces as examples several integer sequences, since in his view numbers also constitute such a family of resemblances. His question is: what does it mean to grasp a mathematical pattern? One example is as follows. Suppose one person sees another write down the numbers

$$\displaystyle \begin{aligned} 1, 5, 11, 19, 29 , \dots, \end{aligned}$$

concluding with the notorious phrase, “and so on.” Of course, there are various ways to continue the sequence, and the observer endeavors to write down various formulas to describe it, for example, $a_n = n^2 + n - 1$. Or even without identifying this formula, he or she recognizes that the first number is $1^2 + 0$, the second number $2^2 + 1$, and the third $3^2 + 2$ and therefore obtains the next number as $6^2 + 5 = 41$ or notices instead that the differences between successive pairs of numbers make up the arithmetic sequence

$$\displaystyle \begin{aligned} 4,6,8,10,\dots \end{aligned}$$

and on this basis concludes that the next number should be 29 + 12 = 41. In any case, it requires little effort to continue.
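
Either route is easy to mechanize. The following minimal Python sketch (our illustration, not Wittgenstein's; the function names are invented for the occasion) continues the sequence both ways, by the explicit formula and by extending the successive differences:

def by_formula(k):
    # Continue via the explicit formula a_n = n^2 + n - 1.
    return [n * n + n - 1 for n in range(1, k + 1)]

def by_differences(seed, k):
    # Continue via the differences 4, 6, 8, ..., which grow by 2,
    # without ever writing down a closed formula.
    seq = list(seed)
    step = seq[-1] - seq[-2]  # most recent difference
    while len(seq) < k:
        step += 2
        seq.append(seq[-1] + step)
    return seq

print(by_formula(7))              # [1, 5, 11, 19, 29, 41, 55]
print(by_differences([1, 5], 7))  # [1, 5, 11, 19, 29, 41, 55]

Both procedures agree, which is precisely the situation Wittgenstein describes: grasping the pattern does not depend on any one privileged description of it.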

His point is that it is not necessary to derive an explicit formula to have successfully grasped the pattern governing the sequence. On the other hand, it is imaginable that the viewer equipped with the formula may experience a comprehension of the sequence that extends no further than the contents of the formula, unaccompanied by any intuitive epiphany or other special experience. The lesson of it all is that a pattern is not the same thing as a straitjacket; at all times, we are free to accept or reject the dictates of the pattern. He also insisted that the outcome of the mathematical process is not predetermined: although we follow a procedure that seems clear to us, we cannot predict exactly where it will lead.

Gödel’s Theorems

At the end of the last century, the American magazine Time published its list of the hundred most influential people of the previous hundred years, one-fifth of which consisted of leading scientists, technologists, and academic figures. Among these 20 were exactly one philosopher and one mathematician. The philosopher was Wittgenstein, and the mathematician was Kurt Gödel, to whom we turn now. In fact, these two have much in common: both occupied an intellectual position at the intersection of mathematics and philosophy, and both were Austrian but wrote in English as a second language. But one made his way to England and Cambridge University to pass the latter part of his life, and the other to the United States and Princeton University. And of course, neither of them remained an Austrian citizen by the time of his death (Fig. 8.39).

Fig. 8.39
A photo of Kurt Gödel.

Kurt Gödel

In 1906, Gödel was born in Brünn in Austria-Hungary, known now as Brno in the Czech Republic. It was in a monastery in this city that the nineteenth-century Austrian geneticist Gregor Mendel (1822–1884) discovered the principles of genetics, and it was also home to the Czech composer Leoš Janáček (1854–1928). As for the broader Moravia region, both the father of psychoanalysis, Sigmund Freud (1856–1939), and the father of phenomenology, Edmund Husserl (1859–1938), were born there. Husserl had a background in mathematics and earned his doctorate from the University of Vienna for a thesis entitled Contributions to the Calculus of Variations. Gödel also ended up at the University of Vienna, after spending his youth entirely in his hometown; he studied theoretical physics there before developing a keen interest in mathematics and philosophy and teaching himself mathematics to a more advanced level (Fig. 8.40).

Fig. 8.40
A photo presents Gödel on the left and Einstein on the right.

Gödel and Einstein

By his third year at university, Gödel was entirely preoccupied with mathematics, and his library card for this period shows that he read in particular a number of works devoted to number theory. He also began to participate in some of the proceedings of the famous Vienna Circle, to which he had been introduced by his mathematics teacher. The Vienna Circle comprised an assortment of philosophers, mathematicians, and scientists who met to discuss primarily the linguistic nature and methodology of science; this group came to occupy an important position in the history of twentieth-century philosophy. At the age of 23, Gödel was the youngest of the 14 members to attach his name to the manifesto of the Vienna Circle, Wissenschaftliche Weltauffassung: Der Wiener Kreis or The Scientific Conception of the World: The Vienna Circle. The following year, he completed his doctorate on the basis of a remarkable thesis, On the Completeness of the Logical Calculus. Not long afterward, he obtained his world-shattering first and second incompleteness theorems.

In January of 1931, when he had not yet reached the age of 25, Gödel published his Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I or On Formally Undecidable Propositions of Principia Mathematica and Related Systems I in the Monthly Journal of Mathematics and Physics of Vienna. Within a few years, it was already considered among the most monumental milestones in the history of mathematics. The results of this paper are by their nature first and foremost negative, overturning the belief among mathematicians of every stripe that mathematics as a whole could be subjected to axiomatization and eradicating any hope of proving the internal consistency of mathematics as envisioned by Hilbert. But at the same time, these negative results eventually led to an epochal change in basic mathematics research, introducing a fundamental distinction between the concepts true and provable and introducing analytic logic into the basic toolkit of mathematical thought.

Gödel’s first incompleteness theorem states:

Any consistent formal system F that is strong enough to carry out the basic arithmetic of numbers contains statements S such that neither S nor its negation is provable within F.

In brief, any consistent axiomatization of the natural number system is incomplete. It follows immediately that no formal system completely describes all of mathematical theory. A few years later, the American mathematician Alonzo Church (1903–1995) proved an even stronger result along the same lines: (Church’s theorem) given any consistent formal system strong enough to contain the natural number system, there is no algorithmic decision procedure to determine whether an arbitrary given proposition is or is not a theorem of the system.
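
The mechanism of Gödel's proof can at least be indicated, in the standard modern schematic form rather than in Gödel's own notation. One encodes formulas and proofs as numbers, so that provability in F becomes an arithmetical predicate $\mathrm{Prov}_F$, and then constructs a self-referential sentence G satisfying

$$\displaystyle \begin{aligned} F \vdash G \longleftrightarrow \neg\,\mathrm{Prov}_F\left(\ulcorner G \urcorner\right), \end{aligned}$$

that is, G asserts its own unprovability. If F is consistent, then G cannot be provable, since F would then prove a false statement about its own proofs; and under the slightly stronger consistency hypothesis Gödel originally imposed, the negation of G cannot be provable either, a hypothesis later reduced to simple consistency by Rosser.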

On the basis of his first incompleteness theorem, Gödel also proved the second incompleteness theorem:

If F is a consistent formal system strong enough to contain the natural number system, then the consistency of F cannot be proven within F.

In other words, among the propositions of the system that are true but unprovable within it occurs the proposition that the system itself is consistent. This put a full stop to the hopes of Hilbert and his program. It appeared now that the internal consistency of classical mathematics cannot be obtained except by way of sophisticated principles of reasoning that are subject to questions of consistency no less open to suspicion than the question of the consistency of classical mathematics itself.
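
In the same encoding, and again as a schematic gloss rather than a quotation from the paper, the consistency of F can itself be written as an arithmetical sentence, for instance

$$\displaystyle \begin{aligned} \mathrm{Con}(F) \equiv \neg\,\mathrm{Prov}_F\left(\ulcorner 0 = 1 \urcorner\right), \end{aligned}$$

and the second incompleteness theorem asserts that if F is consistent, then $\mathrm{Con}(F)$ is not provable in F: the formalized statement that F proves no contradiction is itself one of its true but unprovable propositions.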

Taken together, the two incompleteness theorems show that basic mathematics as a whole is out of the reach of axiomatization and furthermore that it is impossible to guarantee that mathematics harbors no hidden inconsistencies. These are strict limitations of the axiomatic approach and suggest that the procedure of mathematical proof cannot and does not conform to the procedure of formal axiomatization. Taken in a positive light, they suggest also that the role of human intuition and insight in mathematics cannot be fully formalized. In a formal system, it is possible to mechanically enumerate the provable content, but this is guaranteed not to exhaust the full spectrum of true statements within the system. Or in other words, all provable statements are true in the system, but not all true statements are provable within it (Fig. 8.41).

Fig. 8.41
A photo of Gödel's tomb. The tomb bears the engraving of a book on which are written the name Gödel and his years.

Gödel’s tomb; photograph by the author, Princeton

Gödel’s two incompleteness theorems are indisputably among the most important theorems in the history of mathematics; we do not prove them here, since the proofs are more technical than the general tenor of this book allows. It is worth mentioning, however, that the concept of a recursive function that appears in the proof was proposed to Gödel in a letter from a friend, who died suddenly and unexpectedly 3 months after writing it. After the appearance of the incompleteness theorems, recursive functions became widely known and used and eventually formed the basic starting point for the theory of algorithms. It was also this idea that led Turing to develop his idea of Turing machines and universal Turing machines, another foundational moment in the history of the electronic computer. Since that time, the controversy surrounding paradoxes and mathematical foundations has settled a bit, and concerns about such questions do not much intrude upon the daily work of ordinary mathematics; they did, however, contribute to a resurgence of interest and energy in mathematical logic, leading to a flurry of development within this discipline.
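
To give the flavor of the idea, here is a minimal Python sketch of the simplest case, primitive recursion (our illustration, not drawn from the letter or the paper): each function is specified by a base value and a rule for passing from one argument to the next, so that its values can be computed purely mechanically.

def add(m, n):
    # add(m, 0) = m; add(m, n + 1) = add(m, n) + 1
    return m if n == 0 else add(m, n - 1) + 1

def mul(m, n):
    # mul(m, 0) = 0; mul(m, n + 1) = add(mul(m, n), m)
    return 0 if n == 0 else add(mul(m, n - 1), m)

print(add(3, 4), mul(3, 4))  # 7 12

It is exactly this mechanical character that Turing recast in terms of machines and that the theory of algorithms takes as its starting point.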

Conclusion

In modern times, the natural progression toward increased division of labor has led to an extension of the period of time dedicated to studies among aspiring scholars in various fields, and the content of their studies has become more complex and abstract. This is the case not only in mathematics but in every area of human civilization. In poetry, it is no longer possible to compose clear and simple poems such as Climbing Stork Tower by Wang Zhihuan (688–742); in mathematics, such easily derived low-hanging fruit as Fermat’s little theorem seems to have been exhausted. Simultaneously, in mathematics, in the natural sciences, and in the arts and humanities, there have been great changes in aesthetic preferences and conceptions, and complexity, abstraction, and depth have become completely standard measures of judgment (Fig. 8.42).

Fig. 8.42
A low-viewpoint photo of Notre Dame du Haut. It resembles a hat-like concrete roof on thick curved walls.

Notre-Dame du Haut Chapel by Le Corbusier (1953), in Ronchamp, France

This is not to say that abstraction has relegated pure mathematics to the back shelf; if anything, its application is wider today than ever before, further confirming that the process of abstraction in mathematics is altogether in line with developments and changes in social trends more broadly. With the birth of calculus, mathematics emerged as a powerful tool in the course of the scientific and technological revolutions of the seventeenth and eighteenth centuries, with mechanical motion as the main protagonist. After 1860, the new stars of the technological revolution appeared: generators, motors, and electronic communications. Finally, since the 1940s, electronic computers, atomic energy technology, space technology, the automation of production, and communications technology have all been inseparably linked to mathematics. The branches of mathematics called upon by newer fields of science, such as relativity, quantum mechanics, superstring theory, molecular biology, mathematical economics, and chaos theory, are particularly esoteric, abstract, and modern (Fig. 8.43).

Fig. 8.43
A photo of the Guggenheim Museum. It resembles a stacked cylinder swirling towards the sky.

The Guggenheim Museum in New York City, by Frank Lloyd Wright (1959). Photograph by the author

With the progression of science and technology and the increasingly complex developmental needs of human society, new mathematical theories and disciplines are constantly appearing. Here, we present two examples: catastrophe theory and wavelet analysis. Catastrophe theory was introduced in 1972 by the French topologist and Fields Medalist René Thom (1923–2002) in his book Structural Stability and Morphogenesis; its object of study is the classification of systems whose behavior undergoes sudden massive shifts as their control variables change smoothly. As a mathematical discipline, it is a branch of geometry, and the behavior and trajectories of its variables occur as curves or surfaces. An example of its application is the arch bridge, which deforms at first more or less uniformly under pressure until the load reaches a certain critical point, after which the shape of the bridge undergoes an instantaneous change and it collapses. Concepts from catastrophe theory were later used by sociologists to study such phenomena as gang warfare.
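
The best known of Thom's elementary catastrophes, the so-called cusp, already exhibits this behavior, and a few lines of Python suffice to watch it happen (a minimal sketch of the standard textbook model, not an example taken from Thom's book). The state x of the system settles at a minimum of the potential $V(x) = x^4/4 + ax^2/2 + bx$, and as the control parameters (a, b) cross the curve $4a^3 + 27b^2 = 0$, the number of equilibria jumps:

import numpy as np

def equilibria(a, b):
    # Equilibria of V(x) = x^4/4 + a*x^2/2 + b*x solve
    # V'(x) = x^3 + a*x + b = 0; keep only the real roots.
    roots = np.roots([1.0, 0.0, a, b])
    return sorted(r.real for r in roots if abs(r.imag) < 1e-9)

# Sweep the load-like parameter b with a held fixed at -1: the
# system passes from one equilibrium to three and back to one,
# and the occupied state must jump discontinuously.
for b in (-0.5, -0.2, 0.0, 0.2, 0.5):
    print(b, equilibria(-1.0, b))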

Turning next to wavelet analysis, it has sometimes been referred to as the microscope of mathematics, and it represents a milestone in the development of harmonic analysis. Around the year 1975, the French geophysicist Jean Morlet (1931–2007) invented the word wavelet to describe functions he was using to study signal processing problems in oil prospecting. Wavelet analysis, or the wavelet transform, refers to the use of wavelike oscillations with finite length and fast decay to represent signals. As with the Fourier transform, a signal is written as a superposition of these basic oscillations, but wavelets are local with respect to both time and frequency, whereas the Fourier transform in general is local only with respect to frequency. The computational complexity of the wavelet transform is also small: it is O(N), in comparison with the time O(N log N) required for the fast Fourier transform. In addition to signal analysis, wavelet analysis has been used for military intelligence, computer classification and recognition problems, music and language synthesis, mechanical fault diagnosis, data processing for seismology, and so on. In medical imaging in particular, the wavelet transform allows for fast imaging times and improved resolution in B-scan ultrasonography, CT, and MRI.
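
A concrete sense of this locality in time can be had from the Haar wavelet, the simplest example (our illustration; Morlet's own wavelets were smooth oscillating pulses, and the helper name below is invented). One level of the discrete wavelet transform splits a signal into local averages and local differences, each half the original length, in O(N) operations:

import numpy as np

def haar_step(signal):
    # One level of the Haar discrete wavelet transform: pairwise
    # averages (coarse approximation) and pairwise differences
    # (local detail), each scaled by 1/sqrt(2).
    x = np.asarray(signal, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

# A flat signal with one abrupt jump: the detail coefficients
# vanish except next to the jump, locating it in time, which a
# global Fourier decomposition cannot do directly.
approx, detail = haar_step([1, 1, 1, 5, 5, 5, 1, 1])
print(approx)  # coarse shape of the signal
print(detail)  # nonzero only at the jump: [0., -2.83, 0., 0.]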

The mainstream of mathematics in the twentieth century can be described as structural mathematics, promoted and developed by a major school of French mathematicians writing pseudonymously as Nicolas Bourbaki. The research objects of structural mathematics are not the classical objects of numbers and shapes in any traditional sense, and mathematics is no longer split up into the clean disciplines of algebra, geometry, and analysis, but rather organized according to the occurrence within it of equivalent structures. For example, linear algebra and elementary geometry are isomorphic to one another in the sense that it is possible to carry out a complete translation of statements between the two, and in this sense, they can be considered simultaneously. The mathematician and historian of mathematics André Weil (1906–1998), who was a major figure in the Bourbaki school and a recipient of the Wolf Prize in Mathematics, was close with the cultural anthropologist Claude Lévi-Strauss (1908–2009), who borrowed structuralist ideas to study the mythologies of various cultures. Lévi-Strauss identified various isomorphic correspondences between them, a striking example of the influence of the new mathematics on linguistics and anthropology. This inaugurated a new trend in French philosophy in the 1960s known as structuralism. Its most famous adherents were Jacques Lacan (1901–1981), Roland Barthes (1915–1980), Louis Althusser (1918–1990), and Michel Foucault (1926–1984), who used structuralist ideas to investigate psychoanalysis, literature, Marxism, and socio-historical topics, respectively. Jacques Derrida (1930–2004) later introduced his influential theory of deconstruction as a critique of linguistic structuralism.

Looking now to the future, the major question facing mathematics is whether or not it can achieve some kind of unification. This has long been a preoccupation among mathematicians: as early as 1872, in the second year after German unification, the young German mathematician Felix Klein (1849–1925) published his famous Erlangen program, an attempt to unify modern geometry and mathematics from the perspective of group theory. The Erlangen program developed from collaborations with the Norwegian mathematician Sophus Lie (1842–1899), inventor of Lie groups and Lie algebras, and took its name from the university at which Klein was employed at the time, now known as the University of Erlangen-Nürnberg, in Bavaria. Lie groups also played a deep role for the Bourbaki school, who regarded them as a synthesis of group theory and topology. The group-theoretical perspective has since become commonplace in every area of mathematics, but the full achievement of the goals set forth by the Erlangen program has remained out of reach.
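
In Klein's scheme, to put it schematically, each geometry is the study of those properties left invariant by a particular group of transformations, so that the classical geometries arrange themselves by their groups:

$$\displaystyle \begin{aligned} \text{Euclidean geometry} &\longleftrightarrow \text{the group of rigid motions},\\ \text{affine geometry} &\longleftrightarrow \text{the group of affine transformations},\\ \text{projective geometry} &\longleftrightarrow \text{the group of projective transformations}. \end{aligned}$$

Enlarging the group coarsens the geometry: lengths and angles, which are invariant under rigid motions, lose their meaning in affine geometry, and in projective geometry even parallelism is lost.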

Nearly a century later, the Canadian mathematician Robert Langlands (1936–) set up the banner of his Langlands program. In a 1967 letter to Weil, and then in 1970, Langlands proposed a series of conjectures entailing a web of relationships intertwining the Galois groups of number theory, automorphic forms in analysis, and representation theory in algebra. Langlands was awarded the Abel Prize in 2018. Meanwhile, André Weil, whose sister was the famous philosopher Simone Weil (1909–1943), proposed in 1948 an analogue of the Riemann hypothesis in algebraic geometry, which was later proved by the Belgian mathematician Pierre Deligne (1944–), using methods pioneered by his uniquely brilliant mentor and doctoral advisor, the stateless mathematician Alexander Grothendieck (1928–2014). Both Grothendieck and Deligne received Fields Medals, in 1966 and 1978, respectively (Fig. 8.44).

Fig. 8.44
A photo of the Beijing CCTV headquarters. It is a tall building formed by a podium structure joining 2 high leaning towers linked at the top by a cantilevered overhang structure.

The Beijing CCTV Headquarters, by Rem Koolhaas and Ole Scheeren (2007)

On the other hand, although there has emerged since the nineteenth century a trend toward the interpenetration and integration of disparate subjects in mathematics, which has led to the formation of new disciplines, mathematics as a whole remains at present a highly differentiated domain, characterized in modern times by abstraction and generalization but also by intense specialization. A very considerable portion of new mathematics is necessarily divorced from the natural world and scientific applications, perhaps a troubling phenomenon. It is reasonable to ask, then, whether abstraction or structuralism can provide a framework for mathematical unification. Certainly it is possible, but it seems likely also that mathematics cannot become unified in a context of isolation within itself.

There is an analogy to be drawn with art, where collage has gradually become a central technique and in some cases even the predominant conception of art. Modern philosophers have also embraced collage as a kind of ideal myth. In the past, collage was considered primarily as an artistic technique involving the random combination of unrelated pictures, words, sounds, and so on in order to produce a special effect. Today, it seems that the range of this word should be expanded to include the combination of disparate ideas. In this sense, collage has played a role in modern mathematics, and even in the nature of modern civilizations. For example, many of the new interdisciplinary topics in mathematics could be considered instances of collage. To some extent, collage and abstraction are the same phenomenon, except that the use of one word is more common in the art world and the other in mathematics (Fig. 8.45).

Fig. 8.45
A model of the Beijing National Stadium. It is a saddle-shaped elliptic structure that resembles a bird's nest.

The Beijing National Stadium, or Bird’s Nest, by Jacques Herzog and Pierre de Meuron (2008)

For reasons of space, we have considered only the medium of painting, but abstraction has also occurred in other forms of art. Architecture, for example, has undergone tremendous changes with respect to content, form, and decoration. In his classic De architectura, the Roman architect Vitruvius held up the three words strength, utility, and beauty as the cornerstones of architecture, and these three words became the basic criteria for the quality of buildings and architectural plans. In the Renaissance, Alberti subdivided the category of beauty into the beautiful and the decorative, where the beautiful is defined by harmonious proportion and the decorative consists of mere auxiliary splendor. Since the twentieth century, architects have rejected the dismissal of ornament as auxiliary splendor and treated it rather as an indispensable and ubiquitous aesthetic component, not unlike collage in painting. Geometric figures, both classical and modern, have played a particularly important role here.

Like music, painting, architecture, and the other arts, mathematics is without borders and suffers little from the limitations of language barriers. It has been an essential part of human civilization, and it seems not unreasonable to speculate that if there exists any alien civilization, mathematics has played just as important a role there as it has here. Indeed, if extraterrestrial intelligences exist, it seems very possible that they can understand mathematics and may even be proficient in it, and many have suggested that mathematics is the most suitable arena for the first attempts at communication. As early as 1820, Gauss proposed to use a graphical proof of the Pythagorean theorem cut into the vast Siberian forest as a signal to space indicating the presence of human civilization. Some 20 years later, the Austrian astronomer Joseph Johann von Littrow (1781–1840) proposed instead to fill a large circular canal dug out in the Sahara desert with burning kerosene for the same purpose.

In any case, they both agreed that signals containing such prominently mathematical imagery should attract the attention of any intelligent alien life, although neither of these ideas was ever put into practice. Carl Devito, a mathematician at the University of Arizona, has argued that accurate communication with a civilization from another planet must start from an exchange of scientific information, with the first step being the establishment of units of measurement. In recent years, he has collaborated with a linguist in an attempt to construct a language derived from universal scientific concepts. For example, differences in the chemical composition of the atmosphere or the energy output of a planet may facilitate communication. The basic idea is that both civilizations should have arrived at mathematical methods and computations, discovered chemical elements and the periodic table, and carried out quantitative studies of the states of matter.

But of course there remain many difficulties and obstacles in the way of communication with an alien civilization even in the case of contact. Perhaps they have derived their laws of motion along very different mathematical lines and arrived at formulations very different from the ones with which we are familiar. The mathematical basis for our study of motion is calculus; indeed, calculus is the basis for many fields of science. Should this also be true of an alien civilization? Or as another example, will the natural starting point in geometry for a distant civilization be Euclidean as it was for ours or some non-Euclidean geometry? Their physics may be so different from ours that they would not recognize the theory of our solar system introduced by Copernicus or our picture of the universe. And afterward, there is the equally challenging question: how to present other aspects of human civilization in terms of mathematics. It is exactly this question, which still stands in need of much intercultural research and further discussion, that this book has endeavored to explore.