1 Intuitive Thinking and Axiomatic Reasoning

Usually we think in a logically disorganized fashion, without distinguishing assumption from deduction. Proceeding in this intuitive or heuristic manner lets us advance quickly, but we may inadvertently introduce controversial or even false assumptions that imperil the whole construction.

Axiomatics is intended to help avert such catastrophes. Indeed, to axiomatize consists in subjecting a theory built in an intuitive or heuristic fashion to the following operations:

  1. to exactify intuitive constructs, that is, to replace them with precise ideas, as when substituting “set” for “collection,” “function” for “dependence,” and “derivative with respect to time” for “rate of change”;

  2. to ground and, in particular, to justify the postulates and bring hidden assumptions to light—assumptions that, though seemingly self-evident, may prove to be problematic;

  3. to order deductively a heap of known statements about a given subject.

These three tasks are interdependent: when exactifying an idea one may discover that it implies others or is implied by others; and when ordering a handful of propositions one may discover a missing link or an unjustified premise or consequence. For instance, since empiricism equates existence with possible experience, it presupposes that humans have always existed.

The main interest of axiomatics for philosophers is that some of the tacit assumptions of the intuitive formulations of important theories may be philosophical rather than either logical or factual. For example, set theory has been axiomatized with as well as without the axiom of choice, which constructivists reject because they demand a precise constructive choice function instead of the promise of one, as implied by the phrase “there exists a function such that…”

Second example: In 1925 Heisenberg published his matrix quantum theory, holding that it included only measurable magnitudes; but in 1941 he admitted that this was not true, and proposed his S-matrix theory, which was indeed closer to experiment but, alas, incapable of solving any problems without the help of standard quantum mechanics; as a consequence it was soon forgotten—even by its author, who does not mention it in his 1969 memoirs. A realist philosopher might have spared him those disappointments, but the only philosopher in his Leipzig seminar was a Kantian who supported the Copenhagen “spirit,” according to which there are only appearances.

Ordinarily one axiomatizes with one of these goals: to unify previously disconnected findings (Euclid’s case), to deepen the foundation of a research field (Hilbert’s case), or to eliminate paradoxes. For example, in 1908 Ernst Zermelo axiomatized set theory to avoid the paradoxes inherent in the naïve theory built by Bolzano and Cantor, paradoxes that had kept Frege and Russell awake while confirming Poincaré’s skepticism. The mathematician Constantin Carathéodory wished to gather, cleanse and order logically the thermodynamic findings of Carnot, Clausius, and Kelvin. By contrast, my motivation for axiomatizing the two relativities and quantum mechanics (Bunge 1967a) was philosophical: to rid them of the subjectivistic elements that had been smuggled into them by the logical positivists.

Regrettably, the price paid in the first two cases was excessive. Indeed, Zermelo’s axiomatics deals with sets of sets, so that it gives preference to the notion of species over that of individual. This Platonic bias renders it useless in the factual sciences, which went on using, if at all, naïve set theory (e.g., Halmos 1960).

As for Carathéodory’s axiomatics, it was restricted to reversible and adiabatic processes, which are hard to find in either nature or industry, where irreversible processes, such as those of dilution, diffusion, explosion, implosion, and heat transmission prevail. Thus Carathéodory achieved mathematical rigor by taking dynamics out of thermodynamics.

That is why the subsequent contributions to the field, such as those of Lars Onsager, Ilya Prigogine, and Clifford Truesdell, owed nothing to Carathéodory’s thermostatics. This theory remained a tool for taming engineering students and reassuring pessimists that the universe is going downwards and will end up in “thermal death”—despite the mounting number of discoveries of self-organization processes such as those occurring in the “star nurseries.”

Many of the physics students of my generation learned classical thermodynamics in Fermi’s textbook, and statistical mechanics in Landau and Lifshitz’s. They taught us Boltzmann’s lasting lesson: that, far from being a fundamental and isolated discipline, thermodynamics was the culmination of statistical physics, which explained heat as a macrophysical effect of the random motion of lower-level entities. In addition, they reminded us that the most interesting macrophysical processes, those of self-organization, occur in open systems, where neither of the two famous laws is satisfied.

An unforeseen consequence of Carathéodory’s axiomatics was that some pedagogues misinterpreted it as a theory of states in themselves rather than of states of a thermodynamic system, such as a heat-transfer device (e.g., Falk and Jung 1959; Moulines 1975). This absurdity suggested another: that natural science is not about material things (Moulines 1977). But of course even a mathematician such as Carathéodory, when writing Zustand (state), presupposes that this is the state of the concrete system in question, since it makes no sense to speak of the state of abstract objects. Moreover, unlike Moulines (1975), he would call a sequence of states of a given system, such as its cooling down, a process, not a system.

And all model theorists, such as Tarski, know that their models are examples or interpretations of abstract theories (or formal systems) such as those of sets, graphs, lattices, and groups—hence totally unrelated to the theoretical models devised by scientists and technologists, which are special theories, such as that of the simple pendulum. Thus the entire model-theoretic (or structuralist) approach to theoretical physics, adopted by Joseph Sneed (1971) and his followers, such as Moulines and Stegmüller, is the fruit of an equivocation, just as it would be to regard ring theory as dealing with fried onion rings and the like.

However, let us go back to our central theme. Perhaps the greatest virtue of axiomatics is not that it enhances formal rigor but that it uncovers tacit assumptions in the intuitive formulations, such as that the laws of thermodynamics hold regardless of the number of components—which Boltzmann had doubted when he allowed for violations of the second law in the case of small numbers. Another, philosophically more interesting, case is this: the claim that things acquire their properties only by virtue of measurements on them inadvertently assumes that the universe was not born until the first modern laboratory was set up.

A better-known example is this. The operationalist demand that all the physical concepts be defined in terms of laboratory operations implies distinguishing two different masses of a body: the inertial, which occurs in Newton’s law of motion, and the gravitational, inherent in his law of gravitation. But any correct axiomatization of classical mechanics will contain a single mass concept, to allow for the cancellation of m in equations such as that for a falling body: “mg = GmM/r².”
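
To spell out the cancellation just mentioned (a one-step sketch, with g standing for the local gravitational acceleration and M for the attracting mass):

$$ mg = \frac{GmM}{r^{2}} \quad\Longrightarrow\quad g = \frac{GM}{r^{2}}, $$

so the acceleration of the falling body is independent of its mass, which is licit only if the m on both sides is one and the same concept.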

This does not preclude adding the remark that mass has three aspects: as a measure of inertia, of gravitational pull, and of quantity of particulate matter. Likewise the electrodynamic potential A has two faces: it generates the field and it accelerates charged matter. And when writing de Broglie’s formula λ = h/p one evokes both the corpuscular and the wavelike metaphors, but one does not claim that there are two kinds of linear momentum.

Even Einstein, otherwise an outspoken realist, fell into the operationist trap when he admitted Eötvös’s claim that he had measured both the inertial and the gravitational masses of a body, finding that they are the same—while actually he had measured a single property, mass, with two different methods. Likewise, one may measure a time lapse with a clepsydra, a pendulum, a spring clock, or other means—which does not prove that there are multiple times. The cases of distance, temperature, energy, and other magnitudes are parallel.

The ultimate reason for the one-to-many correspondence between magnitudes and measurements thereof is that the properties of real things come in bundles, not in isolation—a metaphysical principle. For the same reason, and also because every measurement apparatus calls for its own special theory, it would be foolish to tether any general theory to a particular measurement procedure.

Something similar holds for the social theories and techniques. For example, even a partial axiomatization of standard economics suffices to discover its most egregious assumptions (see Bunge 2009). Another case in point is this: if social matters are handled intuitively, one runs the risk of treating the key features of a social group one by one rather than in combination with its other properties. For instance, those who claim that liberty trumps all the other social values ignore that there can be no liberty where power is monopolized by a tiny minority of privileged individuals, be they tyrants, tycoons, or priests.

The systemic or integral approach to any fragment of reality suggests favoring theories that emphasize the key variables and the connections among them. For instance, liberty will be linked with equality and solidarity, the way the French revolutionaries did in 1789—and it may be added that this famous triad rests on another, namely occupation, health, and education. It is up to political theorists to imagine ways of constructing such a hexagon, and to the more rigorous among them to construct axiomatic theories clarifying and inter-relating the six variables in question.

In sum, intuitive or heuristic thinking can be creative and fast, but it may be marred by muddled concepts or false assumptions or presuppositions, which in turn are bound to entail wrong conceptual or practical consequences. In cleaning and ordering reasonings, axiomatization may save us from such mistakes and the corresponding barren controversies.

2 Axiomatic Versus Heuristic Formulations of Theories

In ordinary language the terms ‘hypothesis’ and ‘theory’ are synonymous. Not so in the exact sciences, where a theory proper is a hypothetico-deductive system closed under deduction, whose components support one another, so that the outcome of the empirical test of any of them affects the standing of the others. Ideally, all the knowledge about any domain is contained in one or more theories plus a set of empirical data.

For the most part, scientific theories are formulated in a heuristic manner, so that anyone feels entitled to add any opinions or even to interpret them as they wish. For example, some authors say that quantum physics is about microphysical entities whereas others claim that it admits no levels; and, whereas some restrict its domain of validity to objects under observation, or even to object-apparatus-observer wholes, others admit that it also holds outside labs—for instance, in stars. Again, some authors present the Heisenberg principle as a postulate on one page, only to prove it on another. And most authors freely exchange ‘indetermination’ and ‘uncertainty’, so that the reader is unsure whether Heisenberg’s inequalities constitute a law of nature or just an opinion about the limitation of human understanding. The critical student may be left with the impression that the author did not know what he wrote about.

Only a suitable axiomatic formulation of quantum mechanics will prove that Heisenberg’s inequalities constitute a theorem, not a principle. And only a realist axiomatics can claim that, if true, they constitute a law of nature. It will accomplish all this by stating from the start that the theory is about real existents, not about observations by means of the mythical Heisenberg microscope, or the no less mythical clock-in-a-box that Bohr imagined to “derive” his time-energy inequality.

In proceeding thus the axiomatizer will be helped by the logical finding that no set of empirical data, however bulky, can prove a general statement—if only because the theory contains predicates that do not occur in the empirical data. Much the same holds for the mostly groundless opinions on quantics scattered in the literature. It took me two decades to realize that only reasonings from principles could justify any assertion of the kind. This is why I undertook to axiomatize several contemporary physical theories and free them from unscientific philosophical assumptions (Bunge 1967a).

Others have updated or expanded that effort (see Pérez-Bergliaffa et al. 1993, 1995; Covarrubias 1993; Puccini et al. 2008). Physical axiomatics was thus a fruit of the union of philosophical realism with the drive to replace quotation with argument, and stray statement with axiomatic system.

3 Dual Axiomatics: Formal and Semantic

Euclid is likely to have been the earliest axiomatizer: he collected and ordered all the bits of geometric knowledge accumulated in the course of the two preceding centuries. Another giant, Bento Spinoza, revived axiomatics in the mid-seventeenth century for philosophical purposes. And around 1900 David Hilbert, Giuseppe Peano and Alessandro Padoa updated and analyzed the Euclidean format. The latter may be summarized as follows:

  • Primitive or undefined concepts.

  • Postulates or axioms.

  • Lemmas, or statements borrowed from other fields.

  • Theorems.

  • Corollaries.

In some cases the definitions are given right after listing the primitives, whereas in others they are introduced further down, as new concepts occur in theorems. Occasionally the foundation of a theory is condensed into a single axiomatic definition, as will be seen in Sect. 6.

All of the above is fairly standard fare and of no great mathematical interest, since anyone well acquainted with a theory can axiomatize it without tears, as long as s/he does not question the philosophical motivations underlying the preference for one choice of primitives over another. Being purely structural, the mathematical formalism of an axiomatic system calls for no extramathematical elucidation.

Consider for instance the mathematical formalism of Pauli’s theory of spin one-half particles, such as electrons. The core of this formalism is the spin vector σ = u1σ1 + u2σ2 + u3σ3, where the ui, for i = 1, 2, 3, are the components of an arbitrary unit vector, while the corresponding σi are the 2 × 2 Pauli matrices, implicitly defined by equations such as σ1σ2 − σ2σ1 = 2iσ3. So far, this is only a piece of undergraduate algebra. But trouble is bound to start if someone asks how to interpret σ in physical terms, that is, if s/he asks what the property called ‘spin’ is.
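
As a quick check of the algebra just cited, here is a minimal Python/NumPy sketch (illustrative only; the matrix conventions are the standard ones) that verifies the commutation relation and exhibits the eigenvalues of the spin vector along an arbitrary unit vector:

```python
import numpy as np

# The three 2x2 Pauli matrices (standard convention, assumed here)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# Check the defining commutation relation sigma1*sigma2 - sigma2*sigma1 = 2i*sigma3
assert np.allclose(s1 @ s2 - s2 @ s1, 2j * s3)

# Spin vector along an arbitrary unit vector u = (u1, u2, u3)
u = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
sigma_u = u[0] * s1 + u[1] * s2 + u[2] * s3

# Whatever the direction of u, the eigenvalues are +1 and -1
print(np.round(np.linalg.eigvalsh(sigma_u), 10))   # [-1.  1.]
```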

A first answer is that it is “the intrinsic angular momentum,” all the more so since the equations defining it are similar to those obeyed by the quantum counterparts of the orbital angular momentum L = r × p. But a moment’s reflection suffices to realize that this answer is an oxymoron, for most of the said particles are also assumed to be pointlike, and points cannot revolve around themselves. In short, electrons and their ilk do not spin any more than they weave.

A more cautious if elusive answer is that spin is “responsible” for the Zeeman multiplication of the spectral lines of an atom when embedded in a magnetic field, as well as for the splitting of an electron beam when entering the magnetic field in a Stern-Gerlach apparatus. But these answers do not tell us anything about the spin’s mechanism or modus operandi.

I suggest that σ is a mathematical auxiliary with no physical meaning: what does have such meaning is the magnetic moment µ = µ_B σ, where µ_B = eh/4πmc is the Bohr magneton. In metaphorical terms, µ_B is the physical flesh attached to the mathematical bone σ. Moreover, this flesh is magnetic and unrelated to spinning, which is a dynamical process. Thus, ‘spin’ is a misnomer: electrons and their ilk are not like spinning tops but like magnets.

This account coheres with Heisenberg’s explanation of the difference between ferromagnetism and paramagnetism in terms of the alignment, total and partial respectively, of the magnetic moments of the valence electrons of the atoms constituting the material in question.

Likewise, what accounts for the splitting of an electron beam entering the magnetic field of intensity H in a Stern-Gerlach apparatus is not σ but µ: the electrons with a magnetic moment parallel to the field (“spin up”) acquire the additional energy µ_B H, whereas the antiparallel ones (“spin down”) lose the same amount of energy.
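
For orientation, a minimal numerical sketch (SI units; the 1-tesla field is an arbitrary illustrative choice) of the Bohr magneton and of the splitting energy 2µ_B H just mentioned:

```python
import math

# CODATA constants (SI, rounded); the 1-tesla field below is an arbitrary illustrative value
e = 1.602176634e-19      # elementary charge, C
h = 6.62607015e-34       # Planck constant, J*s
m_e = 9.1093837e-31      # electron mass, kg

hbar = h / (2.0 * math.pi)
mu_B = e * hbar / (2.0 * m_e)    # Bohr magneton (SI form of eh/4*pi*m*c), J/T
print(f"mu_B = {mu_B:.4e} J/T")  # about 9.274e-24 J/T

B = 1.0                          # magnetic field, tesla (called H in the text)
splitting = 2.0 * mu_B * B       # energy gap between "spin up" and "spin down"
print(f"splitting = {splitting:.3e} J = {splitting / e * 1e3:.3f} meV")  # about 0.116 meV
```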

In short, the physical property or magnitude in question is not the spin, a non-dimensional mathematical bone, but the elementary magnetic moment, which is affected by an external magnetic field and explains, among other things, the multiplication of the spectral lines of an atom immersed in a magnetic field (Zeeman effect). Thus this effect is explained in terms of the perturbation that H causes on the electrons’ intrinsic magnetism, not on their imaginary spinning.

Let us now return to the original subject, namely the pairing of every mathematical axiom of a factual theory with a semantic assumption assigning it a factual meaning (that is, reference and sense). Such an assumption is necessary to learn what is being assumed about what. Ordinarily the context will suffice to perform this task. But some cases, such as those of the terms ‘mass’, ‘entropy’, ‘potential’, ‘spin’, ‘state function’, and ‘information’, are far from trivial, and have originated controversies lasting decades. The reason is of course that pure mathematics is not about real things, even though some mathematical concepts, such as those of derivative and integral, were born with geometrical and dynamical contents. Only the addition of a semantic assumption may disambiguate or “flesh out” a mathematical formula occurring in a factual discourse.

The formalist school started by the McKinsey et al. (1953) paper on the axiomatization of classical particle mechanics overlooks semantics. In identifying a body with its skeleton, the formalists fail to explain why the same mathematical concepts occur in many different fields, mostly with different meanings. Consequently they are unable to participate constructively in the controversies provoked by the two relativities, quantum mechanics, genetics, psychology, or standard economics. Notably, few physicists have objected to the mathematical formalism of quantum mechanics: the spirited debates about it over nearly a century have concerned its interpretations. This question is so important that Max Born earned his Nobel Prize basically for proposing the so-called statistical interpretation of the famous ψ.

However, we must be fair: the foundational paper of the Suppes-Sneed-Stegmüller structuralist school was just ill-timed. Had it appeared two and a half centuries earlier, it would have shed some light on the lively discussions among Newtonians, Cartesians and Leibnizians that Newton’s Principia provoked in 1687. And had it appeared as late as 1893, it would have saved college physics teachers from Mach’s wrong definition of “mass,” since the said article uses Padoa’s method to prove the independence of this concept from the remaining primitives of Newtonian particle mechanics.

We shall call dual axiomatics the kind that accompanies every key mathematical concept with a semantic hypothesis specifying its reference and sketching its sense. We call it a hypothesis, not a convention or rule, because it can be overthrown by observation or experiment. For example, Hideki Yukawa’s pioneering meson theory was long assumed to concern mu mesons, until it was found to describe pi mesons.

When the semantic component is overlooked, one runs the risk of mistakes like the one surrounding the so-called Aharonov–Bohm effect: believing that the electrodynamic potential A, related to the magnetic field intensity by H = ∇ × A, is just a mathematical auxiliary, because it may happen that A ≠ 0 while H = ∇ × A = 0. An operationist will hold that such an A has no physical meaning because it does not affect a magnetized needle, but a realist may remind her that the presence of A will decelerate an electron but accelerate a proton, by altering the particle momentum by the amount −(e/c)A. For this reason a realist is likely to recommend axiomatizing classical electrodynamics in terms of the four current densities and the corresponding potentials rather than the field intensities, even though they represent aspects of one and the same thing—the electromagnetic field (Bunge 2015).
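
The momentum shift invoked above can be made explicit. For a particle of charge q in a region with vector potential A, the canonical momentum and the kinetic (mechanical) momentum are related by the standard textbook formula (Gaussian units):

$$ \mathbf{p}_{\mathrm{can}} = m\mathbf{v} + \frac{q}{c}\mathbf{A}, \qquad m\mathbf{v} = \mathbf{p}_{\mathrm{can}} - \frac{q}{c}\mathbf{A}, $$

so the same A enters with opposite signs for an electron (q = −e) and a proton (q = +e), which is the sense in which it “decelerates” the one while “accelerating” the other.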

In sum, we reiterate the axiomatization strategy proposed in earlier publications (Bunge 1967a, b, d), which differs from the structuralist one defended by Suppes, Sneed, Stegmüller, Moulines, and other philosophers. This formalist stance, which ignores the semantic side of scientific theories, and skirts the controversies this side generates, has been the target of a couple of criticisms (Bunge 1976; Truesdell 1984) that I regard as decisive. But these criticisms have been ignored by nearly all philosophers of science.

4 Quanta

Since its inception in 1900, quantum physics has been the object of many spirited controversies. Its mathematical formalism has seldom been questioned: what has been questioned is its physical interpretation. But, because the theory was axiomatized only in 1967, most of those controversies were about just a few components of it, and they were basically clashes of opinions among a few leaders of the physics community, mainly Bohr and his many followers on one side and a handful of dissenters led by Einstein on the other.

Worse, most of the discussants confused philosophical realism—the thesis of the reality of the external world—with what I call classicism (Bunge 1979). This is the opinion that the quantum theory is seriously flawed because it does not calculate the precise positions and trajectories of quantum objects.

Contrary to scientific realism, the “official” opinion, held by Bohr’s Copenhagen school, holds that electrons and the like come into existence only upon being observed or measured. The most extreme among the orthodox hold that “[t]he universe is entirely mental” (Henry 2005) or even sacred (Omnès 1999). Such strange ex cathedra revelations did not result from an examination of the principles of quantum mechanics. When these are identified and examined, a sober picture emerges: quanta are certainly quaint from a classical-physics viewpoint, which is why they deserve a name of their own, quantons (Bunge 1967c). But they do not imperil the project of the Greek atomists, of uncovering the real and law-abiding if often messy universe beneath appearances and beyond myth.

We shall confine our discussion to the main issues that confronted physicists and philosophers before Schrödinger’s cat stole the show, that is, during the formative decade between Heisenberg’s foundational paper of 1925, and the Bohr-Einstein debate in 1935. It is pointless to rush to later issues before settling the earlier ones, which are still festering and, more to the point, are still being discussed more theologico rather than more geometrico.

Many of the controversies about quanta revolve around three key concepts of the theory: the state (or wave) function ψ; the eigenvalues a_k of the operator A_op representing a dynamical variable (“observable”) A, which occur in equations of the form A_op u_k = a_k u_k; and the symbol Δ in inequalities of the form Δp Δq ≥ h/2π. The subjectivist (Copenhagen) and objectivist (realist) interpretations in question are summarized in the following table.

Symbol | Copenhagen | Realist
ω | Measured object | Object in itself (quanton)
x | ω’s position coordinate | Arbitrary point of space
H(x,p,t) | Hamiltonian = ω’s energy |
|ψ(ω,x,t)|²Δv | Probability of finding ω inside Δv when measuring ω’s position at time t | Probability of ω’s presence inside Δv at time t
a_k | Measured value of A with probability |u_k|² | Objective value of A with probability |u_k|²
ΔM | Uncertainty about M’s value | Variance or mean square deviation of the really possible values of M

Every one of the above statements is a semantic assumption, and moreover a controversial one. For example, the orthodox party disputes the very existence of the referent ω, or at least its existence separate from both observer and apparatus; realists adopt the view that probabilities represent possibilities of future events rather than either frequencies of past events or degrees of belief; and if the innocent-looking x is interpreted as the referent’s position, it follows that its time derivative is the particle velocity, which in Dirac’s and Kemmer’s theories turns out to be a matrix whose eigenvalues are +c and −c, which is absurd and consequently calls for a different coordinate (Bunge 2003).

The realist interpretation does not state that the object ω of study is being observed all the time. Only an examination of its hamiltonian can tell us whether it is free or subject to external actions, in particular those exerted by an experiment, in which case the hamiltonian will include a term H_int depending on both ω and the dynamical variables characterizing the apparatus. If the latter do not occur in H, as is the case when calculating the energy levels of free atoms or molecules, talk of experimental perturbations is shameless smuggling motivated by philosophical convictions.

Note also that the orthodox view fails to specify the measuring instrument and the indicator the experimenter uses, as if universal instruments and indicators could exist. It is ironic that an approach claimed to follow laboratory procedures closely actually involves a magician’s top hat or, to put it more politely, “a ‘black-box’ process that has little if any relation to the workings of actual physical measurements” (Schlosshauer 2007: 334).

Any responsible talk of observables and actual measurements will involve a detailed description of specific laboratory procedures. Such a description, being tied to a specific set-up and a specific indicator, has no place in a general theory such as quantum mechanics, just as statesmen have no right to stipulate the value of π—as the Indiana legislature once attempted to do. Therefore it is wrong to claim, as Dirac (1958) did with respect to the eigenvalues of “observables,” that they are the values that any experimental procedure must yield. If this were true, governments could junk all the measurement instruments and sack all the experimentalists.

The only rational way to approach foundational dilemmas like the subjectivism/realism one is to examine the foundations of the theory in question, which in turn requires axiomatizing it. The rest is leaning on quotations, handwaving, or preaching.

For example, the Heisenberg-like inequalities for an arbitrary pair of conjugate canonical variables are not justified by telling stories about thought experiments, but by rigorous deduction from the relevant axioms and definitions—a job that takes only two theorems, one definition, and a lemma borrowed from mathematics (Bunge 1967a, pp. 252–256).
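
For the record, here is a compressed sketch of that deduction along the standard (Robertson) route, with the intermediate lemmas omitted. Defining ΔA as the standard deviation of the dynamical variable A in the state ψ, the Schwarz inequality yields

$$ \Delta A\,\Delta B \;\ge\; \tfrac{1}{2}\left|\langle\psi\,|\,[A,B]\,\psi\rangle\right|, $$

and specializing to the canonically conjugate pair q, p, with [q, p] = iħ, gives Δq Δp ≥ ħ/2: an inequality of Heisenberg’s type, obtained without any mention of observers or measurements.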

In conclusion, if we wish to avoid apriorism, anthropocentrism, and dogmatism, we must adopt scientific realism and we must reason from principles, not opportunistically. In turn, principled reasoning in factual science requires replacing the obiter dicta of famous people with axiomatized or at least axiomatizable theories that take care of content as well as form. And sketching the content of a theory starts by indicating its referents. Indeed, the least we are expected to know is what we are talking about: whether it is a thing out there, a mental process, a social fact, or a fictive object.

5 The Mental: Brain Process, Information Transfer, or Illusion?

At present, the two most popular opinions on the nature of the mind among philosophers of mind are informationism and materialism. The former holds that the mind is a stuff-free information-processing device—more precisely, a computer that runs on programs. By contrast, materialism holds that everything mental is cerebral. Another difference between the two views is that, whereas informationism is only a promissory note, materialism is behind cognitive neuroscience, the most productive branch of psychological research as well as the scientific basis of biological psychiatry (Bunge 1987). Let us see briefly how the axiomatic approach may help evaluate both schools.

Let us start with informationism, the only precise formulation of which is the thesis that minds, like computers, are basically Turing machines. Let us quickly review the main traits of any axiomatic theory of these artifacts. The basic or primitive concepts are M (the set of Turing machines), S (the set of possible states of an arbitrary member of M), E (the set of admissible inputs to any member of M), and T: S × E → S, the function that takes every <state s, stimulus e> pair into another state t of the same machine: T(s,e) = t.
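
A minimal Python sketch of the structure just defined (the particular states and stimuli are invented for illustration; only the form T: S × E → S matters):

```python
# Toy instance of the primitives M, S, E, T (states and stimuli invented for illustration)
S = {"off", "idle", "busy"}            # set of possible states, fixed once and for all
E = {"power", "job", "done"}           # set of admissible inputs (stimuli)

# Transition function T: S x E -> S, given as a finite, fixed table
T = {
    ("off", "power"): "idle",
    ("idle", "job"): "busy",
    ("busy", "done"): "idle",
}

def step(state, stimulus):
    """Return T(state, stimulus); pairs not in the table leave the state unchanged."""
    return T.get((state, stimulus), state)

# The machine moves only when fed an admissible stimulus: no spontaneity, no novelty
state = "off"
for stimulus in ("power", "job", "done", "job"):
    state = step(state, stimulus)
    print(stimulus, "->", state)
```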

Note, first, that S and E are sets, that is, closed collections of items given once and for all, hence quite different from the variable collections found in living beings. Second, a member of M will jump from off to on only if it receives a suitable stimulus: it lacks the self-starting or spontaneous ability of human brains. Third, the machine goes from one state to another only if it admits one of the stimuli in E: it does not recognize novel stimuli and it does not come up with new states—it is not creative. Fourth, the machine neither develops nor belongs to an evolutionary lineage: whatever novelty it encounters is a product of the engineer in charge of it. In my pig Latin, Nihil est in machina quod prius non fuerit in machinator. Fifth, the Turing machine is universal: it does not depend on the kind of stuff or material, whereas human minds exist only in highly developed brains.

In contrast to machines, human brains can start by themselves, are inventive or creative, distinguish concepts from their symbols (unless they are nominalist), enjoy some freedom or self-programming, and can invent problems whose solutions are not necessary for survival.

In sum, Turing machines behave as prescribed by the behaviorist psychology that ruled the American psychological community between 1920 and 1960: like well-trained rats and preverbal babies. On top of ignoring everything that distinguishes us from rats, that psychology rejects all attempts to investigate the neural mechanisms that explain behavior and mind, as a consequence of which it cannot help design treatments of mental disturbances more complex than phobias, such as depression and schizophrenia.

Contrary to information-processing psychology, cognitive and affective neuroscience makes precise and consequently testable hypotheses, such as “The cerebral amygdala is an organ of emotion,” “Moral evaluations and decisions are made by the fronto-parietal lobe,” and “The hippocampus is the organ of spatial orientation”—John O’Keefe’s corroboration of which was rewarded with the 2014 Nobel Prize.

These and other precise hypotheses can be cobbled together into psychoneural theories, such as the author’s quasiaxiomatic one (Bunge 1980). This theory consists of 27 postulates, 16 theorems and corollaries, and 44 definitions, all of which refer to neural systems and their specific functions, that is, the processes peculiar to them.

For example, Definition 7.9 (iv) in that book reads thus: an animal a is creative = “a invents a behavior type or a construct, or discovers an event, before any other member of its species.” Immediately thereafter follows Postulate 7.5, inspired by Donald Hebb’s seminal 1949 work: “Every creative act is the activity, or an effect of the activity, of a newly formed neural system.”

6 Axiomatic Theory of Solidarity

It is well known that solidarity, together with shared interests and values, is a mechanism of survival and social cohesion of human groups, starting with the family, the gang, and the village. In particular, marginalized people survive because they practice mutual help, as Larissa Adler Lomnitz (1975) showed in her pioneering study of Mexican shantytowns.

It is not for nothing that solidarity is a member of the most famous political slogan in history: Liberty, equality, fraternity. However, there are few scientific studies of solidarity; worse, this same-level process is often confused with charity, which is top-down. In the following we present a mathematical model of solidarity, built just to illustrate the dual-axiomatics format. But first we shall present the model in a heuristic fashion.

We shall say that two individuals or social groups are solidary with one another if they share some of their material resources—that is, if each of them hands over to the other a part of his/her concrete goods or bears some of his/her burdens.

One way of formalizing the solidarity concept is to assume that the rate of change dR_1/dt of the resources of unit g_1 is proportional to the sum of its own resources R_1 plus the excess of g_2’s resources over g_1’s. In obvious symbols,

$$ dR_{1}/dt = k\left[ R_{1} + (R_{2} - R_{1}) \right] = kR_{2}, $$
(1a)
$$ dR_{2}/dt = k\left[ R_{2} + (R_{1} - R_{2}) \right] = kR_{1}, $$
(1b)

where k is a constant with dimension T⁻¹. Dividing the first equation by the second and integrating, we get

$$ R_{1}^{2} - R_{2}^{2} = c, $$
(2)

where c is another dimensional constant. The graph of the preceding equation is a hyperbola on the ⟨R_1, R_2⟩ plane. Anticipating the meaning to be assigned to the two axes, it is clear that only the first quadrant is to be kept.

What follows is one of the possible dual axiomatizations of the preceding model of solidarity.

Presuppositions

Classical logic and elementary infinitesimal calculus.

Primitives

G, R_i (where i = 1, 2), T, V.

Definition 1

Rate of change of resource R_i = dR_i/dt.

Axiom 1m

G is a countable set.

Axiom 1s

Each element g_i of G represents a social group.

Axiom 2m

Every R_i is a function from G × T to the set V of positive real numbers, differentiable with respect to t, where t is an arbitrary element of T.

Axiom 2s

R_i denotes the material resources of group g_i.

Axiom 3m

k and c are positive real numbers; the dimension of k is T⁻¹, and that of c is the same as that of V².

Definition 2

The social groups g_1 and g_2 are solidary with one another if and only if they satisfy conditions (1a) and (1b).

Corollary 1

Only the first quadrant of the ⟨R_1, R_2⟩ plane has a sociological sense.

This is an immediate consequence of Axiom 2s since, by definition, all resources are positive. Physical parallels: the advanced waves in classical electrodynamics and the negative probabilities in a now forgotten theory of Dirac’s.

Theorem 1

The system of differential Eqs. (1a) and (1b) implies the algebraic equation

$$ R_{1}^{2} - R_{2}^{2} = c, $$
(2)

where c is another dimensional constant. The graph of this equation is a vertical hyperbola on the ⟨R_1, R_2⟩ plane.

Proof

Dividing (1a) by (1b) results in

$$ dR_{1}/dR_{2} = R_{2}/R_{1} $$

Separating variables gives R_1 dR_1 = R_2 dR_2; integrating both sides, one obtains (2).

Theorem 2

Under solidarity, the resources of the two groups approach equality as they increase.

Proof

The asymptote of the right branch of the hyperbola (2) is the straight line R_1 = R_2, which represents equality of resources.

Remark

A political moral of Theorem 2 is that, since solidarity breeds equality regardless of the initial endowments, it is not necessary to impose equality by force. However, solidarity does not emerge by itself: it is but one of the sides of the square Liberty, equality, solidarity, competence. Take note, Pol Pot’s ghost.
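
Finally, a minimal numerical sketch of Theorem 2 (illustrative only; the value of k and the initial endowments are arbitrary choices): a simple Euler integration of Eqs. (1a) and (1b) shows the difference R_1 − R_2 shrinking toward zero while R_1² − R_2² stays, up to discretization error, constant, as Eq. (2) requires.

```python
# Euler integration of Eqs. (1a)-(1b): dR1/dt = k*R2, dR2/dt = k*R1
k = 0.5                   # rate constant, dimension 1/time (arbitrary value)
R1, R2 = 10.0, 2.0        # unequal initial endowments (arbitrary values)
dt, steps = 0.01, 1001

for n in range(steps):
    if n % 250 == 0:
        print(f"t={n*dt:5.2f}  R1={R1:9.2f}  R2={R2:9.2f}  "
              f"R1-R2={R1-R2:7.4f}  R1^2-R2^2={R1*R1 - R2*R2:7.2f}")
    R1, R2 = R1 + k*R2*dt, R2 + k*R1*dt
```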

7 Virtues of Dual Axiomatics

Axiomatization is required when the precision of some key concepts is questioned, or when the basic assumptions must be identified to ensure their covariance (or frame invariance). However, when either the denotation or the connotation of a theory is not obvious, we need to state them explicitly, in which case we must engage in dual axiomatization. This boils down to enriching the formalism of a theory with a set of semantic assumptions, such as “General relativity is a theory of gravitation”—rather than, say, a theory of spacetime, or a generalization of special relativity.

The main virtues of dual axiomatics are the following:

  1. it preserves the form/content, a priori/a posteriori, and rational/factual dualities;

  2. it unveils the tacit assumptions, in particular the imprecise or false ones—that is, the place where the dog was buried, as a German would put it;

  3. it reminds one from start to finish which are the referents of the theory, which prevents philosophical and ideological contraband; for example, it shows that the presentations of the relativistic and quantum theories in terms of measurements are false, since the concepts of observer, apparatus and indicator do not occur in the principles;

  4. it helps disqualify extravagances such as the many-worlds and branching-universes fantasies, as well as the thesis that “its” (material things) are nothing but bundles of “bits” (information units), by showing that they violate standard conservation principles;

  5. it exhibits the legitimate components of the theory as well as their deductive organization, and with it the logical status of each constituent (universal/particular, primitive/defined);

  6. it facilitates the empirical test of theories, by showing the absence of indicators or markers in them, hence the need to add at least one indicator for every type of measurement;

  7. it eases the understanding and memorization of theories, by highlighting the most important constructs and thus reducing the number of formulas required to reconstruct theorems.

8 Conclusion

We conclude that axiomatics, in particular that of the dual kind, is anything but a dispensable luxury. In fact, axiomatics facilitates understanding and improvement, for it helps to detect false presuppositions, gaps, weak links, and pseudotheorems; it is indispensable for doing serious, deep and useful philosophy of advanced science; and it suggests that deep science presupposes philosophy.