1 Philosophical lore about the Perrin episode

There is a bit of conventional wisdom often recounted by Scientific Realists concerning the history of science:

LORE: until the early 20th century there was insufficient evidence to establish the reality of atoms and molecules, but then Perrin’s experimental results on Brownian motion convinced the scientific community to believe that they are real.

On the rationale for the nineteenth-century disputes over, and opposition to, the atomic theory there are two views. Stephen Brush and John Nyhof, for example, argued that the opponents held positivist philosophical presumptions against admitting the unobservable.Footnote 1 Penelope Maddy holds on the contrary that the dispute was purely scientific. Her diagnosis is that all sides agreed already at that time that

the claim ‘there are atoms’ … was considered empirically adequate before Einstein and Perrin; afterwards it graduated to another status. (Maddy 2001, p. 59)

But with respect to this ‘graduation’ she is in accord with LORE:

Indeed, reputable scientists once rejected atomic theory on the grounds that it could not be confirmed, in other words, because there were no evidential rules that could settle the question. The young Albert Einstein set himself the problem of devising a theoretical test: ‘My major aim … was to find facts which would guarantee as much as possible the existence of atoms of definite finite size’ … Even when he had succeeded, Einstein doubted that actual experiments of scientific accuracy could be designed and carried out. Thus the meticulous and decisive work of Jean Perrin on Brownian motion came as a welcome surprise.Footnote 2

…scientists were not content with the empirically adequate atomic theory, that they wanted to know whether or not it was more than that—and presumably if the atomic hypothesis had failed Perrin’s and subsequent tests, they would still have wanted to know why it was empirically adequate, what it was about the world that made the atomic theory so successful. (Maddy 2007, p. 309)Footnote 3

I will not pause to address these differences, or whether philosophical scruples played a role for individual scientists. What interests me here is what these two accounts have in common as a view of this sort of scientific research and evidence gathering.

On this view, the main question is one of legitimation. Once the philosophers’ lore is accepted, the question becomes only how we can understand Perrin’s work as epistemically legitimating the conclusion drawn from it, that is, the reality of atoms and molecules.

This question of legitimation (with its presupposition intact) is addressed by Wesley Salmon, Clark Glymour, and Peter Achinstein, with different answers.Footnote 4 Of course, this question arises on the assumption that the story presents at the same time not just historical events that happened but what was scientifically at stake, and therefore the real significance of this scientific advance. The presumption involved is that success of a theory means that it comes to be believed to be true, and that the work done to that end was significant precisely in the way and to the extent that it produced evidence and arguments to justify that belief.

This presumption is supported by a plethora of quotes from eminent scientists of the time, including erstwhile opponents of the atomic theory who changed their minds, to show that the advance consisted in demonstrating, beyond all reasonable doubt, that the atomic hypothesis was true. But do scientists, in practice, make the distinctions so familiar to philosophers, between what is true and what is good for the future of their enterprise? Between, on the one hand, counsel to doubt that there are atoms and, on the other, counsel to doubt that the atomic hypothesis points in the right direction for the advance of physics? When scientists describe the acceptance of a scientific theory, do they think in terms of such distinctions as those between truth and empirical adequacy? Even if particular scientists do so, should we take their pronouncements as judgments free of interpretation? Or unconditioned by social or cultural or educational factors?

Whether on the lips of scientists or of philosophers, it remains that LORE is an interpretation, though unacknowledged as interpretation. It can be challenged—or perhaps I should say, exposed as such—by presenting an alternative interpretation, and scrutinizing the credentials these rival interpretations may have. Only if alternative interpretations are considered can we see whether there are ambiguities in the story, whether there are interpretative leaps.

2 Difficulties besetting this philosophical lore

When the story is told in terms current in philosophy of science we must be especially critical. Thus Maddy says simply:

in a case like the post-Einstein/Perrin atomic theorist, it seems incorrect to interpret the claim ‘there are atoms’ to mean that the assertion of the existence of atoms is empirically adequate: it was considered empirically adequate before Einstein and Perrin; afterwards it graduated to another status. (Maddy 2001, p. 59)

(i) But “empirically adequate” is a philosophical term of art; the scientists did not have that term. If they had had it, they certainly could not have thought that the evidence established that the atomic theory was empirically adequate, for that claim would extend far beyond the evidence.Footnote 5

The history is anyway badly portrayed! If the reference is instead to empirical success conceived in a broader sense (taking “empirically adequate” in a less philosophically technical sense), then Maddy does follow Perrin’s presentation, but that portrayal looks Pollyannish given the severe problems of the atomic theory in the two decades preceding Perrin’s work.Footnote 6

(ii) Nor was the theory Perrin addressed believed to be empirically adequate in fact! Perrin worked throughout with the ‘billiard ball’ version of the kinetic theory. In his models, molecules are perfectly hard, perfectly elastic spheres, taken as a relevant approximation. (As had been pointed out, “perfectly hard” plus “perfectly elastic” was a self-contradiction, but such problems were assumed to wash out in the approximations.) Research well established at the time had already shown that this theory could not be empirically adequate—e.g., Rutherford’s research on nuclear structure, starting in 1895, for which he received the Nobel prize in chemistry in the year of Perrin’s main results.

If (totally unaccountably) scientists had nevertheless believed the kinetic theory Perrin addressed to be empirically adequate, and then come to believe that it was also true, as in Maddy’s story, what would have happened to the empirical research that led to the quantum theory? Conversely, what happens to the story if it is modified so as to include the extra step that leads from success (of an approximation to a clearly inadequate classical model) to belief (in atoms as conceived in the then current atomic research)?

(iii) Thirdly, the historical debate concerning the atomic theory in the nineteenth and early twentieth century was not a rivalry over postulating versus disallowing unobservable entities. For although atoms are unobservable, so are the forces and energies postulated in the rival programs of dynamism and energetism. Thus energetics began with the point that many steps in thermodynamic arguments remain “in the dark from the point of view of physical intuition”, and proposed to add a physical interpretation to close this gap.Footnote 7 To do this, energy was reified, and new features of this energy were postulated: to begin with, that it can be factored into two physical quantities, capacity (or volume) energy and intensity. But this did not suffice, and in specific cases there was a need to introduce new kinds of energy in an ad hoc fashion.

Resistance to the kinetic theory was thus certainly not a simple instance of resistance to the tactic of postulating features inaccessible to direct measurement or observation. In fact both sides of the debate were empirically challenged in essentially the same way, namely, challenged to provide a concrete and empirically investigable link between such theoretical quantities and the phenomena. It was the atomic theory that finally met those empirical challenges.

As always, the bottom line in the empirical sciences is to meet the criteria of success that relate directly to test and experiment. So let us leave this philosophical lore and the realism/empiricism debate behind and look into some actual empirical constraints on theories and models.

3 The empirical grounding criterion

The understanding in practice of what is required for a good empirical theory, though in evidence throughout the development of modern science, was naturally not explicitly formulated till late. I will begin with an early example to guide us, and then present a contemporary formulation, before entering on its application to the development of the atomic theory.

3.1 Newtonians and the Cartesian critique

The Cartesians’ critique of Newton was that, with his introduction of non-kinematic parameters such as mass and force, he had brought back ‘occult qualities’. The Newtonian response was, in effect, that admittedly what is measured directly are lengths and durations, but that they could show nevertheless how to measure mass and force.

The rationale of this response was thoroughly re-investigated in the nineteenth and early twentieth century, by Mach, Duhem, and Poincaré.Footnote 8 As they showed, the measurement of those dynamic parameters on a body is an operation that counts as such a measurement relative to Newtonian theory. To say that the operation measures mass, for example, is to presuppose the applicability of Newton’s second and/or third law. So for example the Atwood machine, or measurements by contracting springs, presuppose that the set-up as a whole is a Newtonian system. The values of the masses are indeed calculated from the observations of kinematic quantities, but via Newton’s laws.

[Figure: Atwood’s machine, named for Rev. George Atwood (1746–1807). The accelerations are equal but opposite in direction, and proportional to (M − m)/(M + m), which determines M/m.]
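To make this concrete, here is a minimal sketch in Python, with illustrative numbers assumed for the purpose: the only inputs are kinematic (an acceleration obtained from lengths and durations), and the step from acceleration to mass ratio is licensed by Newton’s second law applied to an ideal Atwood machine.

```python
# Minimal sketch: measuring a mass ratio on an ideal Atwood machine.
# The inputs are purely kinematic; calling the output a "measurement
# of M/m" presupposes that the set-up is a Newtonian system.

G = 9.81  # m/s^2, local gravitational acceleration (itself found kinematically)

def mass_ratio(a: float, g: float = G) -> float:
    """Invert a = g(M - m)/(M + m) to obtain M/m via Newton's laws."""
    if not -g < a < g:
        raise ValueError("no Newtonian solution: |a| must be less than g")
    return (g + a) / (g - a)

# An observed acceleration of g/10 yields M/m = 1.1/0.9, i.e. 11/9.
print(mass_ratio(0.981))  # ~1.222
```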

The Newtonian response was precisely to the point, and it reveals quite clearly the norms concerning empirical constraint accepted in modern scientific practice. All the parameters in the theoretical models must admit of such empirical grounding.Footnote 9 If not, they are empirically superfluous, and provide an obstacle to the acceptability of the theory.

The baseline criteria for science are empirical. That explains why hidden variable theories do not get any attention among the scientists themselves, as opposed to philosophers, until and unless there is some suggestion of a possibility of empirical testing. It is not relevant to object that all the evidence is as much in accord with the hidden variable variant as with the original. Parameters introduced into modeling must not be empirically superfluous—there must be, in some way, even if at some distance, a coordination with empirically differentiating phenomena.

Sometimes the parameters that appear to be empirically superfluous can be simply removed without imperiling either the coherence of the theory or its empirical strength and credentials. The ‘grounding’ requirement turns into a salient problem only when elimination is not possible, while there are no theoretically specifiable conditions in which their values can be determined, relative to the theory, on the basis of measurement results.

The appropriate, and typical, response in that case is to start enriching the theory so that it becomes more informative, informative enough to allow the design of experiments in which this empirical determination of the values does become possible.Footnote 10

But meanwhile, can we imagine the Cartesians’ feelings? Those measurements of mass or force make sense only in the context of the assumption that the set-up or target is itself a Newtonian system—something that the Newtonian postulates. So how, in what sense, is this evidence that bears out Newton’s theory? How can the evidence, taken in a way that is neutral between Cartesian and Newtonian, legitimate the conclusion to the truth of the Newtonian theory? We can imagine the Cartesian asking these questions, and we can imagine the dissatisfaction on the Cartesian side, especially since Cartesian general epistemology is paradigmatically foundational. But in this—uncharitable? anachronistic?—imagined response, the Cartesian is barking up the wrong tree.

3.2 Weyl and Glymour: the empirical constraints on science

The relevant methodological insight was, as I said, formulated much later; some of the current philosophical ‘conventional wisdom’ seems never to have assimilated it. As initial formulation, here is Hermann Weyl’s in his Philosophy of Mathematics and Natural Science:Footnote 11

1. Concordance. The definite value which a quantity occurring in the theory assumes in a certain individual case will be determined from the empirical data on the basis of the theoretically posited connections. Every such determination has to yield the same result … Not infrequently a (relatively) direct observation of the quantity in question … is compared with a computation on the basis of other observations …

2. It must in principle always be possible to determine on the basis of observational data the definite value which a quantity occurring in the theory will have in a given individual case.

It is easier to read these points in reverse order. Given that one is called Concordance let us call the other Determinability (“Determinance” has unfortunately been grabbed for a sword-fighting computer mouse game). This deserves detailed discussion, but for now I just want to put the following empirical grounding requirement on center stage:

  • Determinability: any theoretically significant parameter must be such that there are conditions under which its value can be determined on the basis of measurement.

  • Concordance, which has two aspects:

    • Theory-Relativity: this determination can, may, and generally must be made on the basis of the theoretically posited connections

    • Uniqueness: the quantities must be ‘uniquely coordinated’; there needs to be concordance in the values thus determined by different means.

This third point emphasizes what Schlick and Reichenbach insisted on in the phrase “unique coordination”.

There is, at first sight, as we noted above, a glaring possible objection to the completeness of this formula, if viewed as putatively sufficient. If the theory’s being thus borne out by experimental and measurement results is on the basis of the theoretically posited connections, why does that not trivialize the putative evidence?

This concern was addressed explicitly by Clark Glymour in his account of relevant evidence and testing. Glymour was explicitly following Weyl here, but saw the need for the additional constraint to prevent self-fulfilling prophecy in science.

I will adapt the following to our present purpose, from Glymour’s Theory and Evidence—adapt and amend, since his presentation of the ‘bootstrapping method’ was confusingly conflated with what was then called ‘confirmation theory’.Footnote 12 For simplicity let’s take theory T to be presented as a set of equations, involving certain parameters both directly measurable and theoretical, and take relevant evidence to consist similarly in a set of equations that simply assign values to some of the measurable parameters.

Then Glymour imposes the constraint that there must be an alternative possible outcome for the same measurements that would have refuted the hypothesis on the basis of the same theoretically posited connections. His conception may be presented initially as follows:

E provides relevant evidence for H relative to theory T exactly if E has some alternative E’ and T some subtheory T’ such that:

(1) T \( \cup \) E \( \cup \) {H} has a solution.

(2) T’ \( \cup \) E’ has a solution.

(3) All solutions of T’ \( \cup \) E are solutions of H.

(4) No solutions of T’ \( \cup \) E’ are solutions of H.

For example, if T consists simply of the equation P(t)V(t) = RT(t), with R a theoretical constant, then we can take H to be just T itself, and E could be

$$ \text{E} = \{\, \text{P}(1) = 2,\; \text{V}(1) = 3,\; \text{T}(1) = 30;\; \text{P}(2) = 3,\; \text{V}(2) = 1,\; \text{T}(2) = 15 \,\} $$

which satisfies T while determining the value of R to be 1/5. It has the requisite possible alternative that could have been found instead; for example:

$$ \text{E}^{\prime} = \{\, \text{P}(1) = 2,\; \text{V}(1) = 3,\; \text{T}(1) = 30;\; \text{P}(2) = 3,\; \text{V}(2) = 1,\; \text{T}(2) = 11 \,\} $$

which does not satisfy T for any possible value of R. (Here the subtheory T’ is trivial, empty or tautologous, which is what makes the example very simple.)
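The logical structure of the example can be put in a few lines of code. This is my own illustration, not Glymour’s formalism: relative to the toy theory P(t)V(t) = RT(t), the function below checks whether a body of evidence determines a single concordant value of R, and the refuting alternative is the E’ displayed above.

```python
# Sketch of bootstrap-style relevance for the toy theory P(t)V(t) = R*T(t).
# Evidence assigns values to the measurable parameters at sample times;
# R is the theoretical parameter to be determined relative to the theory.

def determine_R(evidence, tol=1e-9):
    """Return the single value of R fixed by all samples, or None if
    the determinations disagree (failure of concordance)."""
    ratios = [p * v / temp for (p, v, temp) in evidence]
    r = ratios[0]
    return r if all(abs(x - r) < tol for x in ratios[1:]) else None

E       = [(2, 3, 30), (3, 1, 15)]  # satisfies T: both samples give R = 1/5
E_prime = [(2, 3, 30), (3, 1, 11)]  # the alternative: no single R fits

print(determine_R(E))        # 0.2  -> the evidence grounds R
print(determine_R(E_prime))  # None -> the same measurements could have refuted T
```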

The threat of trivializing circularity or vacuity may not be entirely eliminated, in logical principle, by Glymour’s additional requirement. It would be surprising if we could find complete sufficient conditions for having an empirically solid theory so quickly. But satisfaction of the above requirements characterizes well and clearly what can be offered on behalf of the significance of a particular empirical grounding of the theoretical parameters in any specific case.

4 The problem of empirical grounding in the nineteenth century

Now we have come to the besetting problem of the atomic theory that Dalton introduced early in the nineteenth century, and that was extended into the kinetic theory of heat, finally into the statistical mechanics which rivaled phenomenological thermodynamics. I’ll use the term ‘kinetic theory’ to refer to all of that, for short.

This methodological demand for empirical grounding, that we see so clearly operative throughout the modern history of science, applies to the kinetic theory as well. The attitude toward the atomic and molecular structure postulated in the nineteenth century was precisely that the models provided by the atomic theory must be thoroughly coordinated with measurement procedures. Let’s make the demand explicit in general terms:

(I) If two such models of a given phenomenon differ only in the values of certain parameters, there must be in-principle measurement results that will differentiate between them.

(II) Similarly, for any distinct states in the theory’s state-space, in which the model locates the systems’ trajectories, there must be in-principle measurable quantities that differentiate them.

The term “in-principle” refers here not just to the idealization that measurements have unlimited precision, but also to Weyl’s observation that the differentiation is not crudely theory-neutral, but on the contrary, relative to the theory (and perhaps additions from background theory) itself. If these demands are satisfied, let us call those parameters, or the theory, empirically well-grounded.

In a kinetic model of a gas, there are many parameters that pertain to the individual molecules. The empirical success of such models is related to the measurement of ‘gross’ quantities such as mean kinetic energy. If two such models of a gas agreed on those quantities that were meaningfully measurable also in phenomenological thermodynamics, but differed in the values of such parameters as individual masses, sizes, momenta, or number of molecules, could there be measurements to differentiate those, in principle?

Philosophers’ history of the scientific research directed to this question has largely seen it as displaying philosophical rather than scientific motivations. But if we look at the texts with new eyes we see that the objections and challenges concerned precisely the question of whether the parameters in the atomic theory could be empirically well-grounded.

4.1 Grounding Dalton’s theory empirically

To begin, Dalton’s atomic theory (1808) implied what he took (though in fact somewhat controversially) as already established and accepted chemical principles of the time:

  • The law of definite proportions of elements in compounds

  • The law of equivalent proportions: the ratio of the weights of A and B which react with a given amount of C does not depend on C.

  • The law of multiple proportions: if A and B can combine to form different compounds C and C’, then the masses of B that combine with a fixed amount of A stand in a ratio of whole numbers.

The appearance of whole numbers immediately suggests counting: hence that the models should be discrete, particulate. That different compounds can be formed is then accounted for in his theory by allowing different structures for the molecules of the compound.

The three main parameters of the theory (combining proportions, molecular structures, and relative atomic weights) are such that from any two the third can be deduced. But only the first was measurable, raising the problem that the other parameters were not empirically well-grounded. Thus Wollaston complains:

“it is impossible in several instances, where only two combinations of the same ingredients are known, to discover which of the compounds is to be regarded as consisting of a pair of single atoms, and … the decision of these questions is purely theoretical” (Wollaston 1814, p. 7; cited Gardner 1979, p. 16)

How did this situation change? By enrichment of the theory through additional theoretical hypotheses. Specifically, Avogadro (1811) added his ‘equal numbers in equal volumes’ hypothesis (see more precisely below) while Dulong and Petit added, in 1819:

Atoms of every simple body have exactly the same heat capacity.

Measurement of the specific heats of various elements can then be related to the number of atoms in a unit quantity so as to yield the atomic mass. Their hypothesis implied that the specific heat of a substance (which is measurable) is inversely proportional to its atomic weight. If indeed two of the parameters are empirically determinable, the value of the third admits calculation as well. So that was an improvement in precisely the respect I am emphasizing. However, the addition does not go all the way; because only equalities are postulated and not specific numbers, this cannot yield values beyond mass ratios.
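A minimal sketch of that inference, with rough specific-heat values assumed purely for illustration: since only the equality of atomic heat capacities is postulated, the unknown common constant cancels, and what the measurements ground are weight ratios only.

```python
# Dulong-Petit sketch: if every simple body has the same (unknown) atomic
# heat capacity C, then a measured specific heat c per gram gives an
# atomic weight M = C / c -- determined only up to the constant C.

specific_heats = {   # cal/(g.K); rough illustrative values
    "copper": 0.093,
    "lead":   0.031,
    "sulfur": 0.170,
}

def weight_ratio(a: str, b: str) -> float:
    """Atomic weight ratio M_a/M_b = c_b/c_a; the constant C cancels out."""
    return specific_heats[b] / specific_heats[a]

print(weight_ratio("lead", "copper"))  # ~3.0: lead atoms about 3x heavier
```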

Greater difficulty with this reasoning was actually to come, for this improvement rested on a postulate concerning one of the most problem-beset areas for the theory: specific heat ratios.Footnote 13 The kinetic theory had as a fundamental principle the equipartition of energy: the total energy is distributed equally among all the n mechanical degrees of freedom. This had as a consequence that the ratio of the specific heats of a gas at constant pressure and at constant volume, respectively, will take the form (2 + n)/n. When the gas particles are assumed to be perfectly smooth and rigid spheres having only the three degrees of translational freedom, with rotation and vibration negligible, that ratio is 5/3, approximately 1.67, whereas experiments available by mid-century showed only 1.4. Introduction of rotational and vibratory motions did not lead to better accord with the measurement results, and in 1860 Maxwell concluded that the discrepancy “overturns the whole hypothesis, however satisfactory the other results may be”.Footnote 14 Fifteen years later Maxwell (1875) tells the Chemical Society “And here we are brought face to face with the greatest difficulty which the molecular theory has yet encountered”. The Dulong–Petit addition therefore did not nearly suffice to make the entire theory empirically well-grounded.
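The arithmetic of the difficulty is easy to display. A small sketch, assuming only the equipartition formula just stated: the billiard-ball model predicts about 1.67, and attributing rotational freedom to a sphere moves the prediction away from, not toward, the measured 1.4.

```python
# Equipartition prediction for the ratio of specific heats:
# gamma = C_p / C_v = (2 + n) / n for n mechanical degrees of freedom.

def gamma(n: int) -> float:
    return (2 + n) / n

print(round(gamma(3), 2))  # 1.67: smooth rigid spheres, translation only
print(round(gamma(6), 2))  # 1.33: adding three rotational degrees of freedom
                           # moves further from the measured ~1.4
```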

There continued thus to be a good deal of skepticism within the physical sciences with respect to the atomic theory. The question for us, philosophers, is: just what is this skepticism? And that is a question of interpretation. Philosophers, seeing what is familiar to them in the words of scientists, may take it to be a skepticism about the existence of unobservable objects, taking that to be the concern of the scientists, precisely because they interpret the scientists’ attitudes toward their theories in terms of belief and disbelief in the truth of those theories. But if we look carefully at how such ‘skepticisms’ are expressed, we can arrive at a different understanding.

Probably the best known addition to Dalton’s theory, to enrich it in the direction of testability, was Avogadro’s hypothesis: “The first hypothesis to present itself in this connection, and apparently even the only admissible one, is the supposition that the number of integral molecules in any gases is always the same for equal volumes, or always proportional to the volumes” (1811, p. 58). Within the atomic theory, applied for example to model water and drawing on the results of electrolysis, it is derived that the ratio of the molecular masses of water and hydrogen is 18:1. But again, knowledge of such ratios does not determine the masses themselves. How to go from these mass ratios to the masses? Obviously the number N of particles in a standard gas sample, if determined, would together with the molecular mass ratios yield the molecular masses. But in effect, for the time being at least, this just added one more so far empirically ungrounded parameter.
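The underdetermination is easily displayed in a sketch, with candidate values of N assumed purely for illustration: the 18:1 ratio fixes the proportions of the masses, while every choice of N fits the same evidence equally well.

```python
# Sketch: mass ratios without N leave the masses themselves ungrounded.
ratio = 18.0  # molecular mass of water : hydrogen unit, from the theory

for N in (1.0e23, 6.0e23, 1.0e24):  # candidate values of the ungrounded N
    m_hydrogen = 1.0 / N            # grams per hydrogen unit, if a
                                    # gram-molecule contains N of them
    print(f"N = {N:.0e}: m_H = {m_hydrogen:.1e} g, m_water = {ratio/N:.1e} g")
```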

4.2 When concordance appears to fail

A telling episode, about mid-way, is found in the set-back the atomic theory received in the 1820s when Dumas obtained data on gas densities that seemed irreconcilable with Avogadro’s hypothesis. The episode is telling about the quest for empirical grounding in the sciences, and also, as it happens, about how misleadingly easy it is to read it through later philosophical eyes.

Dumas’ research on the vapor densities of mercury and sulphur could have had results that would have yielded consistent estimates for the atomic weights, thus providing at least greater empirical grounding for that all-important theoretical parameter in Dalton’s models. Because of the inconsistencies he found, Dumas’ opinion of the theory by 1836 is entirely negative (and this is an often cited remark in the philosophical literature):

on this subject too many hypotheses have already been made … instead of investigating these hypotheses more thoroughly, it would be far better to seek some more reliable foundations on which to base more substantial theories … If I had my way, I should erase the word ‘atom’ from science, in the firm belief that it goes beyond the realm of experiment; and never in chemistry must we go beyond the realm of experiment.Footnote 15

“Never go beyond the realm of experiment”! Are we listening to a positivist here, who wants to ban all unobservables from physical theory? Should we read this as expressing an instrumentalist or empiricist or even Machian philosophical prejudice against the atomic theory?

Not at all: the quotation scandalously omits the sentence just before that “If I had my way”, which is “It is my conviction that the equivalents of the chemists—those of Wenzel and of Mitscherlich, which we call atoms—are nothing but molecular groups”.Footnote 16

Dumas’ work, whose outcome led him to this negative verdict, had been designed to improve the atomic theory. So was its failure an inspiration to him to say that in chemistry, unobservable things are not to be included in theoretical postulation? Again, not at all. The impression that this is what he means comes from reading the phrase “beyond the realm of experiment” anachronistically, as if it meant that chemical theory is to be written in a naïve-empiricist ‘observational’ vocabulary. That is in fact not just anachronistic; it is to ignore the distance between our philosophy and Dumas’ scientific concern. What he means must be gathered from precisely how the atomic theory failed for him, namely that it seemed, in view of his results, impossible to achieve empirical grounding for its theoretical parameters.

Precisely the same sort of problem plagued the kinetic theory in the last decade of the nineteenth century, after a long period of apparently successful improvement—a decade of intense critique of the theory even by such avowedly ‘realist’ thinkers as Max Planck. The problem of accounting for the specific heats of gases was first of all that the different possible models of the kinetic theory of gases all gave values clearly different from measured values—except for Boltzmann’s model, which could not be reconciled with mechanics, and so was derided as involving an ad hoc, inexplicable departure from basic principles. The problem in a nutshell was that vibratory motion had to be attributed to the molecules (as well as translational and rotational), but that the degrees of freedom of vibration could not be allowed to play a role in the deductions concerning specific heat.

To put it differently: some measurement results led, via the theory, to one value for this quantity, and other measurement results were incompatible with that value—concordance failed. The determinations of its value, by measurement, relative to the theory, were inconsistent with each other. The same was true for the transport equations, involving the determination of a value for the mean free path of a molecule on the way to calculating the measurable quantities pertaining to viscosity and diffusion. The different calculations relative to the theory were not consistent with each other (cf. Clark 1976, 82–88). There could be no clearer demonstration of the failures of the kinetic theory with respect to precisely the requirement of empirical grounding than what Planck and others complained of at the end of the nineteenth century.

5 Perrin begins

To report on Perrin’s work here I will rely on the general presentation he gave of his work in 1909, just a year after the publication of his epoch-making experimental results.Footnote 17

5.1 How and where empirical grounding is needed

Early on in his account he lists the parameters which have resisted satisfactory or sufficient empirical grounding to date. As mentioned above, Avogadro’s hypothesis allowed deduction of molecular mass ratios. In Perrin’s formulation, the hypothesis is that any two gram-molecules contain the same number of molecules, Avogadro’s number N.Footnote 18

There is a similar theoretical relation between N and the mean kinetic energy of the molecules, via the ideal gas law; and this can in the same way be used to yield an equation connecting N and the mean square molecular speed. The perfect gas law is the well known equation pV = RT, where R is the perfect gas constant; and the mean kinetic energy was proved to be proportional to the temperature T, by the factor 3R/2N (1910, pp. 11–12). So we have as resultant the equation pV = (1/3)Nm⟨s²⟩, where

  • N = the number of molecules

  • m = the mass of each molecule

  • ⟨s²⟩ = the mean square speed of the molecules

Pressure p and volume V can be measured directly; but on the right we see three theoretical parameters.
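One observation worth adding here (mine, not a claim from the text): for a gram-molecule the product Nm is the measurable molar mass, so this one equation already grounds the mean square speed. A minimal numerical check, with round modern values assumed for hydrogen at standard conditions:

```python
# pV = (1/3) N m <s^2>: N, m, <s^2> are theoretical, but N*m is the
# measurable mass of a gram-molecule, so <s^2> = 3pV / (N*m) is fixed.

p = 101325.0   # Pa, measured pressure
V = 0.0224     # m^3, measured volume of one gram-molecule at 0 deg C
M = 0.002      # kg, measured mass of a gram-molecule of hydrogen (= N*m)

mean_square_speed = 3 * p * V / M
print(mean_square_speed ** 0.5)  # ~1845 m/s root mean square speed
```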

The number of unknowns can be reduced if we bring in more such equations. For example, Perrin points out, when the theory of electricity is also brought into the fold, a relation can be deduced between the minimal electric charge e and this number N. On the kinetic theory electrolysis is explained by postulating that in electrolysis the molecules are dissociated into ions carrying a fixed electric charge. A Faraday is the quantity F of electricity that passes in the decomposition of 1 g molecule of hydrochloric acid; it is at the same time equated to the charge carried by 1 g molecule of hydrogen ions. It is known empirically that the decomposition by current of a gram-molecule of an electrolyte always displays the passing of the same quantity of electricity, and it is always a whole number of Faradays. This number must be the product of the number of ions and the number of minimal electric charges e that they carry. Putting these equations together, and noting that by hypothesis 1 g molecule of hydrogen consists of N hydrogen atoms, we have

$$ Ne = F $$

where F is an empirically known quantity, and we have two theoretical parameters. Of course the two above equations can be combined, so as to place an equivalent constraint on just three of the theoretical parameters, with the fourth defined in terms of the other three.
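To display how the constraints chain together, here is a sketch with modern values assumed purely for illustration (the minimal charge e was, of course, measured only later): once F and e are both empirically fixed, N follows, and with it the molecular masses.

```python
# Sketch: combining the electrolysis link Ne = F with the mass-ratio
# results. Modern measured values are assumed purely for illustration.

F = 96485.0    # C per gram-molecule: one Faraday, measured in electrolysis
e = 1.602e-19  # C: the minimal electric charge, measured independently

N = F / e          # Avogadro's number, grounded relative to the theory
m_H = 1.008 / N    # grams per hydrogen atom, from the gram-molecule

print(f"N   = {N:.2e}")      # ~6.02e23
print(f"m_H = {m_H:.2e} g")  # ~1.67e-24 g
```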

It is easily seen that these theoretical developments consist only partly in calculations, and partly in the introduction of further hypotheses to elaborate on the basic kinetic model. At this point, measuring an electric charge, a mass, and a volume places on the numerical relations between parameters pertaining to the molecules and their motion some quite definite constraints relative to the theory as developed so far.

In his exposition prior to the statement of his results, Perrin continues these points by adding in a similar hypothesis due to Maxwell, on the statistical independence of the spatial components of molecular speeds. To be specific, Maxwell derived a law of the distribution of molecular velocities in a gas, but on the special assumption that the distribution along spatial direction x is statistically independent of the distribution along the orthogonal y and z axes. Adding then a special hypothesis relating the internal friction between two parallel layers of a gas that are moving at different speeds to exchange of molecules between these layers, Maxwell found a further linkage to measurement, namely that:

the coefficient ζ of internal friction, or viscosity, which is experimentally measurable, should be very nearly equal to one-third of the product of the following three quantities: the absolute density δ of the gas …, the mean molecular speed Ω …, and the mean free path L which a molecule traverses in a straight line between the two successive impacts. (1910, p. 14)

This gives information about the mean molecular speed, which Maxwell designates here as Ω. With that hypothesis, which provides a kinetic model for the internal friction between two parallel layers of gas moving at different speeds, Maxwell arrived at the equation

$$ \zeta = 0.31\,\delta\,\Omega\,L $$

where:

  • ζ is the coefficient of internal friction (viscosity)—an experimentally measurable parameter

  • δ is the absolute density of the gas (also measurable)

  • Ω is the mean molecular speed (mentioned above)

  • L is the mean free path of the molecules

Given the hypotheses and measurement results so far then, the mean free path is calculable.
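A sketch of that calculation, with modern values for air assumed for illustration: viscosity and density are measured directly, the mean speed is grounded by the earlier relations, and the mean free path is then fixed relative to the theory.

```python
# Invert Maxwell's relation zeta = 0.31 * delta * Omega * L for L.

zeta  = 1.8e-5  # Pa.s, measured viscosity of air
delta = 1.2     # kg/m^3, measured density
Omega = 450.0   # m/s, mean molecular speed from the kinetic-theory relations

L = zeta / (0.31 * delta * Omega)
print(f"mean free path ~ {L:.1e} m")  # on the order of 1e-7 m
```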

Then with the still further addition that the molecules are spheres—one of very few shapes the kinetic models had admitted—Clausius and Maxwell derived an equation that fixes the molecular diameter approximately as a function of the mean free path and the number n of molecules per cubic centimeter. The latter will bring us to Avogadro’s number N, but what is still needed to solve the equation in question is a second constraint on the relation between the molecular diameter and that number n.
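The remaining gap can be displayed in a line or two of arithmetic. Assuming one standard modern form of the hard-sphere relation, L = 1/(√2 π n d²) (the text reports only that such an equation exists, not its exact form), a known mean free path fixes the combination n·d², leaving a one-parameter family of (n, d) pairs, which is precisely why a second constraint is needed:

```python
import math

# Hard-sphere relation (assumed modern form): L = 1 / (sqrt(2) * pi * n * d^2).
# A grounded mean free path L fixes only the combination n * d^2.

L = 1.0e-7  # m, mean free path from the viscosity calculation above
n_d2 = 1.0 / (math.sqrt(2) * math.pi * L)  # = n * d^2, units 1/m

print(f"n * d^2 = {n_d2:.2e} per metre")
d = 3.0e-10  # m: one candidate molecular diameter among many
print(f"if d = {d} m then n = {n_d2 / d**2:.2e} per m^3")  # ~2.5e25
```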

5.2 Let’s take stock for a moment

The theoretical development went still several important steps beyond this before Perrin could begin, but it was his achievement to tie the research that was needed, to complete these efforts at the empirical grounding of the theory, to the study of Brownian motion.

What does not change in the story, however, is the point that I have been emphasizing: the development of the kinetic theory consisted in the addition of specific hypotheses, pertaining to the models of liquids and gases to which it was applied, which implied stricter and stricter connections between the measurable parameters and the parameters pertaining directly to the molecules and their motion. The result of these additions is that relative to the theory the empirical measurements take on a special significance: their outcomes place constraints on what the values of the molecular parameters can be. And when the process is completed, the constraint must be so strict as to determine those values uniquely, at least in principle.

That is what empirical grounding of a theory is.

So now we have seen what the first part of the 1910 monograph was devoted to spelling out: that empirical research and theoretical development in tandem had progressed to the point where you could see that relative to the theory (taken sufficiently broadly) only one more parameter needed empirical grounding to finish the job.

At this stage in history Perrin can point out that relative to the theory, the measurement of any of these so far undetermined, or only partially determined, parameters would fix the others as well (1910, p. 12). Thus we see in principle a number of approaches to the empirical grounding of the so far remaining indispensable parameters in the models provided by the kinetic theory. By this we must mean of course: operations that will count as measurement, relative to the theory, that is, utilizing the above theoretically derived equations.

5.3 Perrin continues

Perrin’s research on Brownian motion is directed precisely to this end; and to begin with, it was quite independent of the new theoretical results due to Einstein. After his initial successes in determining values for those parameters, he continued by guiding further experiments in ways linked to Einstein’s work, and found good agreement with the previous results.Footnote 19

To be realistic we should note that the theoretical derivations which Perrin assumes are largely dependent also on assumptions added to the kinetic theory, in the construction of specific models. Most of the work proceeds with models in which the molecules are perfect spheres, for example, though Perrin (1910, p. 14) notes that other hypotheses are needed in other contexts. As long as the simple models work, to allow a transition from the empirically obtained results to values for the theoretical parameters, and as long as these values obtained in a number of different ways agree with each other and with what is theoretically allowed—to within appropriate margins of error—this counts as success.

The addition Perrin made to this already almost century-old story follows the same pattern. As Achinstein emphasizes, Perrin also introduces an addition to the theory, a “crucial assumption, viz. that visible particles comprising a dilute emulsion will behave like molecules in a gas with respect to their vertical distribution” (Achinstein 2001, p. 246). Note that this is no blithe addition: Perrin argues for its plausibility, but in terms that clearly appreciate the postulational status of this step in his reasoning. After a discussion of the range of sizes of molecules (according to results derived from measurements via such extensions of the theory as we have just been inspecting) he writes

Let us now consider a particle a little larger still, itself formed of several molecules, in a word a dust. Will it proceed to react towards the impact of the molecules encompassing it according to a new law? Will it not comport itself simply as a very large molecule, in the sense that its mean energy has still the same value as that of an isolated molecule? This cannot be averred without hesitation, but the hypothesis at least is sufficiently plausible to make it worth while to discuss its consequences. (1910, p. 20)

On this basis, the results of measurements made on collections of particles in Brownian motion give direct information about the molecular motions in the fluid, always of course within the kinetic theory model of this situation.

But that is just what was needed for empirical grounding of those remaining theoretical parameters.
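To see how the granule hypothesis delivers that grounding, here is a sketch in textbook form, not Perrin’s own notation or data: if the granules share the molecular mean energy, their number density falls off exponentially with height, and counting them at two heights inverts to a value for N (all numbers below are assumed for illustration).

```python
import math

# Vertical-distribution sketch: n(h) = n(0) * exp(-m_eff * g * h * N / (R*T)),
# for granules of buoyancy-corrected mass m_eff sharing the molecular energy.

R, T, g = 8.314, 293.0, 9.81  # gas constant, temperature (K), gravity
m_eff = 1.0e-17               # kg, effective granule mass (illustrative)

def N_from_counts(n0: float, n_h: float, h: float) -> float:
    """Invert the exponential law: counts at heights 0 and h determine N."""
    return math.log(n0 / n_h) * R * T / (m_eff * g * h)

# E.g. 100 granules per field at the bottom, 47 at a height of 30 microns:
print(f"{N_from_counts(100, 47, 30e-6):.1e}")  # ~6e23
```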

This was not the end of the story for Perrin. What Weyl calls the requirement of concordance and unique coordination was apparently very much on his mind. Perrin (1910) begins Part III with the remark that his experiments have allowed “the various molecular magnitudes to be determined” but then adds

But another experimental advance was possible, and has been suggested by Einstein at the conclusion of the very beautiful theoretical investigation of which I must now speak. (1910, p. 51)

Perrin notes that Einstein obtained his results in part “by the aid of hypotheses which are not necessarily implied by the irregularity of the Brownian movement” and details two of them. These include his own main hypothesis, namely that “the mean energy of a granule is equal to the molecular energy” (1910, p. 53). After discussing a rather large amount of experimental work bearing on Einstein’s results, and its nevertheless inconclusive outcome, Perrin himself set about (to use his own words) an experimental confirmation of Einstein’s theory. In this he was very successful as well.

Not only that: in his experimental work related to Einstein’s theory he arrived at the same values for the theoretical quantities as he had found in his own previous research. Logically speaking, the outcomes could have been at odds with each other, since no matter how tightly the theory is constructed, the actual results of measurement are after all ‘up to nature’. So we can read this part of the story as not simply a further inquiry but a demonstration that Weyl’s ‘concordance’ requirement is taken into account and the credentials of the theory with respect to this empirical constraint are demonstrably provided.

5.4 How Perrin ends his 1910 monograph

Finally, although Perrin’s text is such a boon to scientific realist writing, I think we should attend to his own emphasis on how thoroughly empirical his work was. His explanation is precisely in line with what I have here displayed as the project of empirical grounding. This comes in his final section, headed “43. Molecular reality”, and begins with the telling sentence

Lastly, although with the existence of molecules or atoms the various realities of number, mass, or charge, of which we have been able to fix the magnitude, obtrude themselves forcibly, it is manifest that we ought always to be in a position to express all the visible realities without making any appeal to elements still invisible. But it is very easy to show how this may be done for all the phenomena referred to in the course of this Memoir. (1910, p. 91)

and he then explains how to isolate the empirical content (still, although he does not say so, relative to the theory!) of the theoretical results. For example, he suggests comparing two laws in which Avogadro’s constant enters:

The one expresses this constant in terms of certain variables, \( a, a^{\prime}, a^{\prime\prime}, \ldots \):

$$ N = f\left[ a, a^{\prime}, a^{\prime\prime}, \ldots \right]; $$

the other expresses it in terms of other variables, \( b, b^{\prime}, b^{\prime\prime}, \ldots \):

$$ N = g\left[ b, b^{\prime}, b^{\prime\prime}, \ldots \right]. $$

Equating these two expressions we have a relation

$$ f\left[ a, a^{\prime}, a^{\prime\prime}, \ldots \right] = g\left[ b, b^{\prime}, b^{\prime\prime}, \ldots \right] $$

where only evident realities enter, and which expresses a profound connection between two phenomena at first sight completely independent, such as the transmutation of radium and the Brownian movement. (ibid. 91–92)

Once again, it seems to me mistaken to read this in a philosophical vein. I do not offer this as a case of an apparent scientific realist contributing grist for the empiricist’s mill! Rather, this passage is important because of how it illustrates the factors of Determinability and Concordance in empirical grounding.

6 Conclusion

In sum then, I propose we see the century-long story of research to establish the credentials of the kinetic theory as a truly empirical enterprise. This way we do not view it as a century-long search for independent evidence for the truth of a well-defined hypothesis about what nature is like, but in a quite different light. The enterprise of those scientists, from Dalton to and including Perrin, aimed to develop the theory itself, and to enrich it so as to allow construction of models for special cases in its domain—all so as to make empirical grounding possible for its theoretical quantities. That enterprise essentially involves the concurrent development of measurement procedures to implement the grounding thus made possible. It is neither purely theoretical nor purely empirical; the theoretical and the empirical are indissolubly entangled; but what is achieved is an empirical success.

At various points in that story it seemed that nature balked, and the actual measurements that were meant to achieve this grounding yielded results either directly inconsistent with the theory or failing to satisfy the ‘follow up’ requirement that Weyl called “concordance”. One famous such episode is often cited quite perversely to demonstrate philosophical prejudices against admitting the unobservable into theoretical postulates: the work of Dumas. But (as I recounted above) it ends up demonstrating the anachronism of such a reading.

One greatly gratifying aspect of Perrin’s work was that when he followed up his own research on Brownian motion with an experimental inquiry into Einstein’s new theoretical development, he found a satisfactory concordance in the results obtained.

It is still possible, of course, to also read these results as providing evidence for the reality of molecules. But it is in retrospect rather a strange reading, however much encouraged by Perrin’s own prose and by the commentaries on his work in the scientific and philosophical community.Footnote 20 For Perrin’s research was entirely in the framework of the classical kinetic theory in which atoms and molecules were mainly represented as hard but elastic spheres of definite diameter, position, and velocity. Moreover, it begins with the conviction on Perrin’s part that there is no need at this late date to give evidence for the general belief in the particulate character of gases and fluids. On the contrary (as Achinstein saw) Perrin begins his theoretical work in a context where the postulate of atomic structure is taken for granted.

What can we make of all those triumphant remarks that suggest otherwise? I submit: that his work laid to rest the idea that it might be good for physics to opt for a different way of modeling nature, one that rivaled atomic theories of matter. That result was, in retrospect, well vindicated—an outcome as welcome to empiricists as to scientific realists, I would say.

But for the methodology and epistemology of science the most salient conclusion to draw is, it seems to me, that evidence can be had only relative to the theories themselves (the ‘bootstrapping’ moral) and that this is so because a theory needs to be informative enough to make testing possible at all. Thus the extent to which we can have evidence that bears out a theory is a function of two factors: first, how logically strong and informative the theory is, sufficiently informative to allow the design of experiments that can test its different parts relative to the assumption that the theory applies to the experimental set-up; and second, how well the measurement results in situations of this sort are in concord with each other. Thirdly, however, the testing involved cannot be adequately or properly portrayed as just checking on implied consequences (along such lines as suggested by the ‘hypothetico-deductive method’ or Hempel’s ‘confirmation theory’). To properly credential the theory, the procedures that count as tests and measurements in the eyes of the theory must provide an empirical grounding for all its significant parameters. The completion of this task, which made the kinetic theory into, at least in principle, a truly empirical theory, was Perrin’s real achievement.