In the Wimsattian definition of robustness as ‘invariance under multiple independent derivations’ (Wimsatt 1981, reprinted in this book, Chapter 2), the robustness of the invariant result R presupposes that the multiple convergent derivations leading to R are themselves sufficiently solid (see Chapter 1, Section 1.3). In the present chapter, I address the question of the solidity of the derivations.

This will be done through a fragmentary analysis of an experimental derivation involved in a historical episode often described as ‘the discovery of weak neutral currents’. This is a well-documented episode, which has notably been studied in detail by Andrew Pickering in his book Constructing Quarks,Footnote 1 and I will largely rely on the historical material as well as on some philosophical insights offered by this book.

I will proceed as follows.

To begin, I will introduce the concept of an argumentative line (Section 10.1). The aim is to provide both a general framework for characterizing the derivations involved in a Wimsattian robustness scheme and the tools needed to specify different kinds of derivations and to distinguish them in the analysis of particular historical cases. Then (Section 10.2) the discovery of the weak neutral current in the 1970s will be reconstructed as involving a robustness scheme composed of three experimental argumentative lines converging on the same conclusion (namely, that weak neutral currents indeed are a physical reality). Some reflections will then be offered (Section 10.3) on the convincing power of such a scheme and on its widespread realist interpretation.

Second, I will focus on one of the three experimental argumentative lines involved in the robustness scheme, and I will analyze its internal structure as a ‘four-floor modular architecture’ (Sections 10.4, 10.5, 10.6, 10.7, 10.8, 10.9, and 10.10). After a methodological interlude about the relations between the reconstructed architecture and the level of ongoing scientific practices (Section 10.11), one particular zone of the global architecture, namely the ‘Muon noise module’, will be examined more closely. I will re-describe it as a prototypical instantiation of the robustness scheme, and will exhibit along the way what I take to be some prototypical features of a convincing robustness scheme (Section 10.12). This will lead to a reflection on the origin of the invariance of the ‘something’ that is supposed to remain invariant under multiple determinations in a robust configuration. It will be stressed that the ‘invariant something’ involved in the ‘invariance under multiple determinations’ formula of robustness, far from being given ‘from the beginning’, is the result of an act of synthesis characterizable as a more or less creative calibrating re-description (Section 10.13).

Third, the way the elementary scheme of robustness (N arrows converging on one and the same result R) intervenes within the whole architecture of the experimental argumentative line will be analyzed (Section 10.14), and answers will be provided to the initial question of what constitutes the solidity of an argumentative line taken as a whole (Section 10.15). Finally, in the last section, conclusions will be drawn regarding the kind of work the robustness scheme is able to accomplish for the analysis of science, and some potential implications of the chapter with respect to philosophically important issues (such as scientific realism and the contingency of scientific results) will be sketched.

Before starting, one last preliminary remark: The analysis I will propose is not, properly speaking, an analysis of laboratory practices. It applies, rather, to a level of scientific practices that is emergent with respect to laboratory practices themselves, but that is nevertheless highly important for real scientific developments and highly relevant to the problem of robustness (see Chapter 1, Section 1.5).

10.1 The Concept of an Argumentative Line

When we ask what makes a given established result R solid, we are inclined to appeal to a certain number of supportive elements, which I will call, at the most general level, ‘argumentative lines’. This expression remains deliberately vague, since it is intended to encompass any type of derivation, of whatever nature and force, provided that this derivation is believed to support R.Footnote 2 The term ‘argument’ seems apt to play this role, since an argument can be either weak or strong, and since the word does not presuppose anything about the kinds of procedures involved.

Argumentative lines may differ from one another from (at least) three standpoints:

  • From the standpoint of their epistemic sphere

    Examples. Experimental lines (i.e., supportive arguments in favor of R on the basis of performed experiments)/theoretical lines (i.e., supportive arguments in favor of R on the basis of the content of high-level theories)/hybrid lines (for instance, argumentative lines based on simulations).

  • From the standpoint of their kind of argument (kind of factors involved, form of the argument…)

    Examples. Analogical versus deductive lines. Esthetic lines (i.e., supportive arguments in favor of R on the basis of valued esthetic properties such as simplicity, symmetries…) contrasted with (what can be comparatively called) cognitive lines (i.e., inferences from taken-as-true primitive propositions)…

  • From the standpoint of their force

    At this level we have a rich gradation, reflected in the lexicon familiar to philosophers of science: proof, verification, confirmation, corroboration, and so on.

Obviously, the three characteristics ‘epistemic sphere’, ‘kind of argument’ and ‘force’ are not independent. In a given historical context, some combinations are especially prototypical and frequently instantiated. For example, today, experimentation is commonly viewed as the method par excellence for establishing results in the empirical disciplines. In this context, an argumentative line issued from the experimental sphere has a good chance of being perceived as a very strong argument, that is, as a genuine proof – or at least, and more prudently since experimental lines vary greatly in strength, as a more compelling argument than a purely theoretical line or an argumentative line based on a computer simulation.

Let us now consider from this point of view the particular case of the discovery of weak neutral currents, that is, the case in which the result R = existence of weak neutral currents.

10.2 A Panoramic Analysis of Robustness: The Experimental Argumentative Lines in Favor of the Existence of Weak Neutral Currents

Fortunately, one does not need to know much about the physics of weak neutral currents in order to understand the global logic of what I want to develop in this chapter. I will just give some basic elements.

Weak neutral-current processes can be defined as weak interactions (scattering or decay) in which no change of charge occurs between the initial and final particles. By contrast, a change of charge takes place in weak charged-current processes. At the time of the discovery of weak neutral currents, the situation was commonly represented by diagrams of the kind shown in Fig. 10.1. Figure 10.1 illustrates the particular case of a neutrino (ν)–nucleon (n) scattering. In the neutral-current case, the weak force is mediated by a neutral particle Z⁰ and the particles undergo no change of charge, whereas in the charged-current case, the weak interaction is mediated by a positively charged particle W⁺, and the particles undergo a change of charge: the incoming neutrino is transformed into a negative muon μ⁻ at the upper vertex, and the neutron is changed into a proton at the lower one.
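To fix ideas, and restricting attention to the neutrino–neutron case depicted in Fig. 10.1, the two processes can be written schematically as follows (the explicit charge bookkeeping in parentheses is my own addition):

$$\textrm{NC}\;(Z^{0}\ \textrm{exchange}):\quad \nu + n \to \nu + n \qquad (0 + 0 \to 0 + 0)$$

$$\textrm{CC}\;(W^{+}\ \textrm{exchange}):\quad \nu + n \to \mu^{-} + p \qquad (0 + 0 \to -1 + 1)$$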

Fig. 10.1

Representation of a neutrino (ν) – nucleon (n) scattering in a Feynman graph

The period commonly associated with the discovery of weak neutral currents runs, roughly, from 1972 to 1975.

Which argumentative lines will be marshaled, from today’s standpoint, if one asks what gives robustness to the proposition, currently viewed as well-established, that weak neutral currents exist?

If we simplify the situation, that is, if we restrict ourselves to the experimental argumentative lines, and among them to those which played in favor of weak neutral currents during the short period 1972–1974, we will invoke (at least) three favorable lines:

  • The experimental line I will call ‘Gargamelle’ (L 1)

    This line investigates the weak neutral current reaction ‘ν + nucleon → ν + hadrons’ (where ν is a neutrinoFootnote 3) with a ‘visual’ detector (in that case a giant bubble chamber named ‘Gargamelle’) (Hasert 1973b, 1974).

  • The experimental line I will call ‘NAL’ (L 2)

    This line investigates the same weak neutral current reaction ‘ν + nucleon → ν + hadrons’ with an ‘electronic’ detector (at the National Accelerator Laboratory) (Benvenutti 1974).

  • The experimental line I will call ‘Aachen’ (L 3)

    This line involves another weak neutral current reaction ‘ν + electron → ν + electron’ and a bubble chamber (Hasert 1973a).

These three experiments show undeniable differences from one another. Indeed, they differ at least pairwise: on the one hand with regard to the experimental design (visual detectors for Gargamelle and Aachen, an electronic detector for NAL); and on the other hand with regard to the type of weak neutral current reaction.

Here, we seem to get a perfect exemplification of a robustness scheme à la Wimsatt, in the case in which the multiple derivations correspond to experimental argumentative lines. One seems thus entitled to represent the situation by means of the diagram of Fig. 10.2.

Fig. 10.2

The retrospective panoramic scheme of robustness in the case of the weak neutral current discovery

10.3 Revealing the Tacit ‘Argument’ That Gives the Robustness Scheme Its Convincing Power and Places It in a Position to Play the Role of a Springboard Toward Realist Attributions

What is it that provides this retrospective panoramic scheme with its convincing power? What is it that so strongly pushes us (philosophers of science as well as actual practitioners of science) to feel that in such a configuration, the result R is indeed robust in an intuitive, ‘pre-Wimsattian’ sense? What is it that works as a ‘justification’ of the robustness of the result R in such a situation?

The same kind of reasons, I think, lies behind both the current intuitive attributions of robustness by scientists and philosophers of science, and Wimsatt’s decision to define, for epistemological purposes, a precise, explicit sense of robustness, namely the invariance of R under multiple independent derivations. It seems to me that these reasons are implicitly related to an argument of the ‘no-miracle argument’ kind.

In its most naïve and most convincing form, this argument conceives the derivations as independent, both logically-semantically with respect to their content, and historically with respect to their empirical implementation.Footnote 4 On one side, people develop L 1… And find R. In parallel, people develop L 2, very different in content from L 1… And find R. In parallel, people develop L 3, very different in content from L 1 and L 2… And find R. Then, the results are confronted: three times R! (see Fig. 10.3). Three experiments wildly different in content, each leading to one and the same result, three different and independent argumentative lines converging on one and the same result R… (it is at this point that Fig. 10.3 is converted into Fig. 10.2). This would be an extraordinary coincidence – a ‘miracle’ – if it happened by chance… It is much more plausible to conclude that R is indeed, ‘in itself’ or ‘intrinsically’, robust…

Fig. 10.3

A common reading of the retrospective panoramic scheme of robustness

At this point, if we ask what, exactly, ‘intrinsically’ means in this context, we are quasi-inevitably led to realist intuitions. To say that obtaining R three times by independent lines cannot be a matter of chance is to say that something outside us must have participated, must have conspired to precipitate R… This intuition that we have bumped into something outside us can be expressed through different, more or less strong formulations: R has been obtained because R is indeed a genuine characteristic of the object under study (in contrast to an experimental artifact, a mistake of the subject of knowledge…)… Or, stronger: R has been obtained because R has been imposed by the object under scrutiny… R is objective, if not (at least approximately) true…

Of course, from a philosophical point of view, the leap from the claim that we have several, indeed sufficiently independent derivations leading to one and the same R, to the objectivity if not the truth of R, would need to be argued (and is indeed highly questionableFootnote 5). But here, I just want to point to the kind of intuition that lurks behind the robustness scheme and is, I think, the common source of its force and convincing power.

10.4 From the Panoramic Scheme of Robustness to the Internal Structure of One of Its Derivational Ingredients

One could, relying on the example of the discovery of weak neutral currents, discuss the validity of the robustness scheme at the panoramic scale. But despite the great interest of this example in this respect (see my remarks in Chapter 1, Section 1.8.3), that is not what I want to do in this chapter. What I want to do is, from the panoramic scheme, to zoom in, and to focus on one of the argumentative lines, namely, the Gargamelle line.

The panoramic scheme offers a view of the situation at a given scale. At this scale, it adopts a simplified representation, one that consists in identifying each of the experimental lines with a monolithic unit: one experimental argument, one experimental proof (represented on the scheme by a single arrow).

But of course, looking more closely, zooming in on a supportive arrow, we discover a complex internal argumentative structure.

My aim, in this chapter, is to open the procedural black box represented by the Gargamelle arrow and to analyze its internal structure, with the intention of drawing some general conclusions about the solidity of procedures and the solidity of results.

The following reflections will mainly rely, in terms of primary sources, on an article published in 1974, which is, along with a few others, commonly considered to announce the discovery of weak neutral currents.Footnote 6

10.5 Some Preliminary Technical Elements in Order to Understand the Gargamelle Line: The Charged Currents as an Ally in Converting the Machinic Outputs into Theoretically Defined Neutral-Current Interactions

First, let me introduce a few preliminary elements required in order to understand the Gargamelle argumentative line.

In the Gargamelle experiments, the (hypothetical) weak neutral reaction under study is:

$$\nu + {\textrm{nucleon}} \to \nu + {\textrm{hadrons}}\;\left({\textrm{NC}}\right)$$

A neutrino-nucleon scattering produces, as secondaries, a neutrino and a shower of hadrons. I will call this particular kind of weak neutral current ‘NC’.Footnote 7

How can the NC reaction be experimentally identified?

In the visual experiments of the Gargamelle kind, the data resulting from the experiments are, at the rawest, least interpreted level, photographic images on a film. I will call these pictures the ‘instrumental outputs’ or ‘machinic ends’ (playing on the double sense of ‘end’ as the output and the aim), and I will equate them with the ‘zero degree’ of experimental data (what is sometimes called “raw data” or “marks”; see for example Hacking 1992).

Now, the correlation between visible tracks on the film on the one hand, and physical, theoretically defined events on the other hand, is not always an easy matter. Neutral particles, in particular, leave no visible tracks on the film. Hence the experimenters cannot but infer the presence of this or that neutral particle from the visible tracks of charged particles with which these neutral particles have interacted.

Since the hypothetical neutral currents NC involve two neutral particles, the incident neutrino and the outgoing neutrino, the experimental identification of an NC is a delicate task. A major concern of experimenters in the 1970s was the possibility of mistaking a pseudo-NC event produced by a high-energy neutron for an authentic NC event produced by neutrinos. This problem was known as the neutron-background problem.

In this context, a crucial point that I want to stress is that the experimental identification of the NC-events involves, in a decisive way, events other than the NCs – indeed, an entire set of other events, which I will call ‘the space of other relevant events’.

These other events can play the role either of allies (in the sense that they are helpful with respect to the identification of the NC events) or of parasites (in the sense that they could be confused with a NC event, that they work as background noise). In this chapter I will only be able to introduce a very small number of the events pertaining to the space of other relevant events. But there is one of them, absolutely constitutive of the Gargamelle line, which, because of its role of primary ally, has to be mentioned.

This is the charged current reaction symmetrical to the neutral reaction NC under study:

$$\nu + {\textrm{nucleon}} \to \mu^{-} + {\textrm{hadrons}}\;\left({\textrm{CC}}\right)$$

where \(\mu^{-}\) is a negative muon. I will call this reaction ‘CC’.

Compared with a NC process, the same incoming particles are involved; but here we find, in the outgoing particles, in addition to the hadrons, a negative lepton instead of a neutrino.

How does the CC-interaction play its role of ally with respect to the NC interaction? At the time, the CC reactions, contrary to the NC reactions, were assumed to exist, and the CC-interactions were much better known, both theoretically and experimentally, than the NC interactions. These circumstances led the experimenters to transform the initial problem, ‘Experimental detection of the NC reaction’, into another one: ‘Experimental evaluation of the ratio NC/CC’. This methodological move – resorting to a ratio, one of whose terms is better known – exemplifies a paradigmatic strategy and gives the CC interaction the status of a primary ally.

10.6 The Global Architecture of the Gargamelle Argumentative Module

Let us now turn to the analysis of the internal structure of the Gargamelle experimental line.

10.6.1 From the Line to the Module

When we analyze the constitution of any unitary argumentative line, we are intuitively inclined to replace the image of the arrow, which was natural in a panoramic overview, with the image of the box or module.

I will thus consider the Gargamelle line as a module, and will represent its internal architecture by a series of sub-modules included one within another, like Russian dolls (but with more complex combinations of inclusions).

Before applying this representation, some brief remarks must be made about the methodological principles that govern the individuation of a module as a modular unit at a given scale. A module will be individuated and defined as a unit on the basis of its aim: on the basis of the question it is intended to answer, of the problem it tries to solve.

In that vein, the argumentative line ‘Gargamelle’ can be instituted as a modular unit of the same name (i.e. the Gargamelle module) defined by the aim-question: ‘is the NC-interaction experimentally detected with the Gargamelle bubble chamber?’

Similarly, what I have called, in Section 10.2, the “panoramic retrospective scheme of the robustness of R”, with R = existence of weak neutral currents, can be viewed, by zooming back from the Gargamelle module and by considering the situation at a broader (“panoramic”) scale, as a unitary module individuated by the aim-question: ‘Are weak neutral currents experimentally detected?’

As any situation can always be analyzed in different manners in terms of the structure of the aims, it must be stressed from the outset that the modular architecture proposed in what follows is not univocally and inevitably imposed by the objective text of the 1974 paper. Sometimes – when turning to certain parts of the Gargamelle argument – the conviction forces itself upon the analyst, intuitively, that it has to be this unique decomposition, that it cannot be anything else… But when turning to some other parts, several options appear possible, and hesitations arise concerning the most adequate or relevant one. In any case, the analyst always has a certain degree of freedom, and the structural configuration finally adopted always depends on some of his decisions.

10.6.2 The Gargamelle Module as a Four-Floor Building

Here is a sketch of the global architecture of the Gargamelle line as I have decided to analyze it.Footnote 8

At a first level of analysis, the Gargamelle argumentative module can be split into four big boxes or sub-modules. These sub-modules can be seen as four logical moments or logical steps,Footnote 9 defined by their specific intermediary aims. As a first approximation, these four steps will be considered as logically successive, and will be ordered sequentially from bottom to top along a vertical axis.

This bottom-up visual representation is intended to suggest graphically that what is logically prior (lower in the diagram) largely conditions, not to say is irreducibly constitutive of, what is logically posterior (higher in the diagram).

I will now present, in an unavoidably concise way, each of the four floors of the whole architecture.

The architecture will involve the four following floors (see Fig. 10.4):

  • As the first floor: a box ‘Selection of the basic data for the analysis’ (or for short, ‘Selections’)

  • As the second floor: a box ‘Coarse estimations of the number of events within the selected data’ (for short, ‘Coarse estimations’)

  • As the third floor: a box ‘Refined estimations after correction of the coarse first estimations obtained’ (for short, ‘Refined estimations’ or ‘Noise’)

  • Finally, as the fourth floor: a box ‘Confrontation between the experimental results obtained and the predictions of the high-level theories’ (for short, ‘Confrontation with high-level theories’).

Fig. 10.4

The Gargamelle module as a four-floor building

10.7 The First Floor: The Module ‘Selections’

10.7.1 The Internal Constitution of the Ground Floor: Four Parallel Selections

At the end of the Gargamelle experiments, we have 290,000 photographic images recorded with the giant bubble chamber Gargamelle. But not all the tracks visible on the photographic film are retained. Only a sub-part of the totality of the machinic ends is taken into account.

Four operations of selection or filtering are performed. I will represent them by four parallel sub-modules (see Fig. 10.5), without having the space to describe their content. Just to give an example: in the module ‘Energetic cut at 1 GeV’, experimenters discard from the start all the events whose total energy is below the 1 GeV threshold.

Fig. 10.5

The first floor: The module ‘Selections’

The aim of such selections is almost always to exclude at once, from the entire set of the machinic outputs, a sub-set of tracks that are judged too ambiguous, either because they are not clearly readable in terms of individual geometric properties, or because their population is deemed contaminated by a huge number of pseudo-events. In other words, the aim is to extract a set of tracks whose interpretation is globally more reliable, so that it becomes less likely that mistakes will be made in the counting of the potential NC tracks.

But if this is the aim, experimentalists are never sure that it is indeed achieved. The filtering operations aim at eliminating some confusions, but they can themselves be sources of mistakes. For example, if the energy cut at 1 GeV is too severe, real NC-events might be artificially eliminated, and the risk is to conclude mistakenly that weak neutral currents do not exist. But if the cut is too permissive, too many pseudos might be taken for authentic NC-events, and the risk is, this time, to conclude mistakenly that weak neutral currents do exist.
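As a purely illustrative aside, the trade-off just described can be sketched in a few lines of code. Everything in the snippet below – the event list, its fields, and the fiction that one could know which events are ‘genuine’ – is a hypothetical construction of mine, not part of the 1974 analysis; the point is only to make the severe/permissive dilemma concrete.

```python
# A minimal, hypothetical sketch of the 1 GeV energy cut and its trade-off.
from dataclasses import dataclass

@dataclass
class Event:
    total_energy_gev: float  # total visible energy of the event (GeV)
    is_genuine_nc: bool      # unknowable in practice; stipulated here for illustration only

def apply_energy_cut(events, threshold_gev=1.0):
    """Keep only the events whose total energy reaches the threshold."""
    return [e for e in events if e.total_energy_gev >= threshold_gev]

# Hypothetical sample: some genuine NC events, some low-energy pseudo-events.
sample = [
    Event(0.6, True),   # a genuine NC event, lost if the cut is set at 1 GeV
    Event(0.8, False),  # a pseudo-event, correctly removed by the 1 GeV cut
    Event(1.4, True),   # a genuine NC event that survives the cut
    Event(2.1, False),  # a pseudo-event that survives the cut anyway
]

for threshold in (0.5, 1.0, 2.0):
    kept = apply_energy_cut(sample, threshold)
    genuine_lost = sum(e.is_genuine_nc for e in sample) - sum(e.is_genuine_nc for e in kept)
    pseudo_kept = sum(not e.is_genuine_nc for e in kept)
    print(f"cut at {threshold} GeV: {len(kept)} events kept, "
          f"{genuine_lost} genuine NC lost, {pseudo_kept} pseudo-events kept")
```

A severe cut minimizes the pseudo-events kept at the price of losing genuine events, and a permissive cut does the opposite; neither choice can be checked directly against the (inaccessible) ground truth.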

The four preliminary selections performed appear more or less problematic in the experimenters’ eyes. But, whether problematic or obvious, the operations involved in each of the four sub-modules of the first floor have essential repercussions on the conclusions that will be drawn at the upper floors. They are completely constitutive of the final answer that will be given to the question of the detectability of the NC reaction.

10.7.2 The Resultant Assessment of the Four Cutoff Operations: At the Exit of the ‘Selections’ Module

As the input of the ‘Selections’ module, we have the totality of the machinic outputs, namely all the visible tracks that appear on the 290,000 photographic images (see Fig. 10.5). At the output of the ‘Selections’ module, after concatenation of the four specific selections, we have a restricted sub-set of all the visible tracks.

I will describe the result of these four constitutive operations of global filtering as the institution of a new layer, which I will call ‘level 1 of the experimental data’ (of course the number one acquires its sense only relative to level zero). Here, level 1 can be specified as the ‘level of the basic data for the analysis’, for it is at this level that the experimenters are going to evaluate the number of track patterns that could be manifestations of NC events. (See Fig. 10.5 for a schematic overview of the first floor).

The basic data for the analysis, that is, the output of the big ‘Selections’ module viewed as the first floor of the construction, will constitute the input of the module situated just above it in my ascending vertical representation.

10.8 The Second Floor: The Module ‘Coarse Estimations’

I identify this module, viewed as the second floor of the edifice, as a modular unit by the aim: to build, from the pre-selected sample of visible tracks associated with level one, a first coarse estimation of the relevant events, namely, primarily the NC and CC events.

The second floor will be portrayed as a duplex (see Fig. 10.6).

Fig. 10.6

The second floor as a duplex

10.8.1 The Lower Part of the Duplex: The Module ‘Individual Treatment’

The lower part of the duplex, that I will call ‘Individual treatment’, will be defined as a unitary sub-module by the aim: To count, on the photographs, the individual track-events of each type.

In order to achieve this goal, the experimenters specify, for each type of relevant theoretical event (NC, CC…), the observable characteristics a pattern of tracks must necessarily satisfy to be classified, at least provisionally as a first plausible hypothesis, as a NC-event, or a CC-event, etc.

Next the experimenters count, within the pre-selected sample of the machinic ends, the number of track-patterns that satisfy the criteria of experimental identification defining what I will call a NC-candidate and a CC-candidate.

As a result, they find:

  • NC-Candidates = 102

  • CC-Candidates = 428

At this stage, as the experimenters stress, “The number of NC events is large”.

10.8.2 The Upper Part of the Duplex: The Module ‘Collective Treatment’

The upper part of the duplex ‘Coarse estimation’ will be called ‘Collective treatment’, and will be defined as a unitary sub-module by the aim: To check, through the examination of collective properties of the populations of events, that no major mistake has been made in the previous step corresponding to the individual experimental identification of the NC-candidates.

In order to achieve this aim, the general strategy of the experimenters is to institute two privileged points of comparison with which the 102 NC-candidates are confronted: a positive point of reference, namely the CC-candidates; and a negative point of reference, namely the neutral hadrons (first and foremost the neutrons).

The logic of the argument can be reconstructed as follows: If most of the 102 NC-candidates identified on the film are authentic NC-events produced by neutrinos, it is expected that their collective characteristics will present, statistically, some essential similarities with the collective characteristics of the 428 CC-candidates, whereas sharp differences (of a partly determined type) are expected if a majority of the 102 NC-candidates actually are pseudo-NCs induced by neutral hadrons and not by neutrinos.

In this module, one can see precisely how the CC interaction plays, in concreto, its role of ally as an experimental standard. The collective properties of the population of 428 CC-candidates identified on the film are turned into an experimental norm. They show what collective properties a population of authentic NCs should have. They work as benchmarks.

The aim of the ‘Collective treatment’ sub-module is achieved by applying this general strategy to four different collective characteristics: the spatial distribution, the energetic distribution, the angular distribution, and the mean free path of interaction. The four corresponding investigations will be represented as four parallel sub-sub-modules (see Fig. 10.7).
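To make this comparative logic concrete, here is a small, purely hypothetical sketch of one such collective test. The data, the choice of a spatial coordinate, the binning and the similarity measure are all my own illustrative assumptions; the actual statistical treatment of the 1974 paper is not reproduced here.

```python
# Hypothetical sketch: compare one collective characteristic (a vertex coordinate along
# the chamber axis) of the NC-candidates with that of the CC-candidates used as benchmark.
import random

random.seed(0)

# Hypothetical vertex positions along the chamber axis (arbitrary units, 0 = entrance).
cc_candidates = [random.uniform(0.0, 4.0) for _ in range(428)]   # neutrino-like: spread out
nc_candidates = [random.uniform(0.0, 4.0) for _ in range(102)]   # neutrino-like: spread out
pseudo_like   = [random.expovariate(1.5) for _ in range(102)]    # neutron-like: clustered near the entrance

def normalized_histogram(values, n_bins=8, lo=0.0, hi=4.0):
    counts = [0] * n_bins
    for v in values:
        i = min(int((v - lo) / (hi - lo) * n_bins), n_bins - 1)
        counts[i] += 1
    return [c / len(values) for c in counts]

def total_variation_distance(p, q):
    """Crude dissimilarity between two normalized histograms (0 = identical shapes)."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

benchmark = normalized_histogram(cc_candidates)
print("NC-candidates vs CC benchmark :",
      round(total_variation_distance(normalized_histogram(nc_candidates), benchmark), 2))
print("neutron-like sample vs CC     :",
      round(total_variation_distance(normalized_histogram(pseudo_like), benchmark), 2))
```

In this toy setting the NC-candidate distribution stays close to the CC benchmark, while a neutron-like population does not, which is exactly the kind of contrast the four collective tests are meant to detect.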

Fig. 10.7

The second floor: The ‘Coarse estimations’ module

10.8.3 The Resultant Assessment of the Upper Part of the Duplex: At the Exit of the ‘Coarse Estimations’ Module

Each of the four statistical tests corresponding to the four parallel sub-sub-modules shows a global resemblance between the collective behavior of the NC-candidates and the collective behavior of the CC-candidates with respect to the physical variable involved. Thus, all four sub-modules of the ‘Collective treatment’ module are favorable to the hypothesis that most of the 102 NC-candidates indeed are authentic NC-events rather than pseudo-NC.

Thus, at the exit of the module ‘Coarse estimations’ (see Fig. 10.7), the intermediary conclusion is the following first raw evaluation of the NC/CC ratio:

$${\textrm{NC-candidates}}/{\textrm{CC-candidates}} = 102/428$$

I will describe the level of this output as ‘level 2 of the experimental data’. In this example it corresponds to the level of the coarse estimations. The typical epistemic status of the conclusions reached at level 2 is that of ‘candidates’, a term which aims to indicate the still approximate and provisional character of the ratio value retained. (See Fig. 10.7 for a schematic overview of the second floor).

The ratio ‘102/428’ of NC- to CC-candidates that holds as the output of the second floor, ‘Coarse estimations’ – in itself “large”, as stressed by the experimenters – will constitute the input of the floor above it, the third floor.

10.9 The Third Floor: The Module ‘Refined Estimations’ (Or ‘Noise’)

10.9.1 Identifying the Background Noise

The module corresponding to this third floor is defined by the aim: To obtain a refined, realistic estimation of the NC/CC ratio.

In order to achieve this goal, the coarse estimations obtained at level 2 must be corrected, taking into account the influence of new elements of the space of the relevant events. This time, these elements play the role of parasites, and in the 1974 paper, they are altogether categorized as “background noise”.

Four different sources of noise are mentioned and respectively treated. The module ‘Refined estimations’ or ‘Noise’ can thus be decomposed into four parallel sub-modules that I will only have the space to mention: ‘Noise of low momentum muons’; ‘Noise of cosmic rays’; ‘Noise of neutral hadrons from the primary beam’; and ‘Noise of neutral hadrons from the secondary beam’ (see Fig. 10.8).

Fig. 10.8

The third floor: The ‘Refined estimations’ module

As the output of each sub-module ‘Noise’, a numeric estimation of the number of pseudo-events of the relevant type is obtained.

10.9.2 The Resultant Assessment of the Four Sub-modules ‘Noise’: At the Exit of the ‘Refined Estimations’ Module

The global result of the four parallel ‘Noise’ sub-modules is obtained by conjunction: simplifying, one subtracts from the 102 NC-candidates obtained as the output of the second floor the different numbers obtained at the third floor for each type of pseudo-event.

At the exit of the module ‘Refined estimations’, the conclusion is, finally (see Fig. 10.8):

$$ \textrm{NC/CC} = 22\% .$$
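Schematically, and only to fix the logic of this correction step, the arithmetic can be sketched as follows. The per-source numbers below are placeholders of mine (the chapter does not reproduce them); only the counts 102 and 428 and the muon-noise correction of 0 ± 5 come from the text, and the actual 1974 corrections also affected the CC sample, which this toy version ignores.

```python
# Schematic sketch of the correction step: subtract the estimated pseudo-NC events
# contributed by each noise source from the coarse NC count, then recompute NC/CC.
nc_candidates = 102
cc_candidates = 428

# Hypothetical per-source corrections (numbers of pseudo-NC events); placeholders only.
noise_corrections = {
    "low momentum muons": 0,               # central value of the 0 ± 5 correction
    "cosmic rays": 4,                      # placeholder
    "neutral hadrons, primary beam": 6,    # placeholder
    "neutral hadrons, secondary beam": 4,  # placeholder
}

corrected_nc = nc_candidates - sum(noise_corrections.values())
print(f"coarse ratio   : {nc_candidates / cc_candidates:.2f}")  # ~0.24
print(f"corrected ratio: {corrected_nc / cc_candidates:.2f}")   # value depends on the placeholders
```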

I will describe this whole stage of identification and processing of the different sources of noise as the constitution of a new level, the level 3 of the experimental data, that I will call the level of the corrected basic data.

At the level of the corrected data, the conclusions are viewed as the ‘best estimations that can be achieved in the current state of the research’. The ‘something’ that is quantitatively evaluated at level 3 has an epistemic status different from that of ‘candidate’, which was typical of level 2. It comes with a stronger realistic pretension: no longer ‘simply’ a candidate… But a ‘real’ something… At the level of the corrected experimental data, experimenters claim to have identified an authentic phenomenon (as opposed to an artifact), even if what is at stake remains in many respects hypothetical and only partially characterized from the theoretical point of view.

In order to grasp the status of what is at stake, I will use the category of the ‘signal’. (See Fig. 10.8 for a schematic overview of the third floor).

As I conceive it, the category of the signal is intended to name the highest level of the experimental data (‘data’ that are, as one can see, already highly elaborated and interpreted). Whatever its level number, the signal corresponds to the surface of the stratified experimental analysis, to the final point of the argument as an experimental argument. That is why, in the 1974 paper, the conclusions associated with my category of signal are presented in a section entitled “results”.

Beyond the stratum of the signal, we leave ‘what experiment says’, to enter into the sphere of the theoretical interpretation of what has been experimentally extracted. True, to talk about a NC-signal is already to project a given theoretical interpretation of the ‘something’ that has been experimentally extracted. But at the level of the signal, this interpretation remains presumptive. So, on the whole, what the expression ‘S-signal’ exactly picks out, is: ‘the something that has been extracted from the instrumental outputs and partially characterized, and that can be potentially interpreted as an S’.

10.10 The Fourth Floor: The Module ‘Confrontation of the Experimental Signal with High-Level Theories’

I don’t have the space to describe the fourth floor. The aim of the corresponding module is to confront the NC/CC signal of 22% with what high-level theories say – and especially one of them, the Weinberg-Salam theory, which assumes the existence of the NCs.

At this floor the experimenters begin to stress that “The neutral current hypothesis is not the only interpretation of the observed events”. They then list and very partially investigate some possible interpretations (these interpretations being re-describable as four parallel sub-modules; see the upper floor of Fig. 10.9). Finally they arrive at this conclusion, striking for its cautiousness: “Interpreting these events as induced by neutral currents”, they appear “compatible” with the Weinberg-Salam theory of weak interactions.

Fig. 10.9

Decomposition of the Gargamelle module in sub-modules

This very cautious formulation, which is clearly far less than an outright declaration of existence, fits well with the Wimsattian scheme of robustness. Indeed, the conclusions of the 1974 paper are supported by only one type of experiment, whereas the Wimsattian scheme of robustness requires several different types of experimental lines (or, in the terminology of Catherine Allamel-Raffin, requires inter-instrumentality (Allamel-Raffin 2005)). It is then perfectly congruent with the scheme, and even required according to it, that the conclusion of the paper remains fragile, only plausible, associated with a weak degree of robustness.

10.11 Methodological Interlude: Some Remarks About the Relation Between the Gargamelle Architecture and Scientific Practices

Before continuing the analysis of the Gargamelle line, I would like to say a few words about the gap between the level of the previous analyses and the level of science in action.

The previous analyses, and the architectural representations that go with them, are based on the text of a published article. They are, thus, verbal re-descriptions and visual re-presentations of a reality that is akin to the level of emergent stabilizations. As I argued in Chapter 1, Section 1.5, the study of what holds at this emergent level is relevant to the problem of robustness.

But it is important to sharply separate the two levels, and to refrain from equating the logics associated with each of them. Indeed, the succession of the four big modules along the vertical axis cannot be equated with four successive steps that would have been taken as such, chronologically and consecutively, by practitioners. It cannot even be equated with four logical moments that would have been thought and performed as such, in this order, by real practitioners. The four-floor architecture is an emergent, highly simplified and largely reordered one with respect to the multiple constructions built all along the actual path. And the simple, sequential process exemplified by the different floors is not a good model of practices. In real practices, what is involved is a reticular logic with feedback loops and multiple restructurings along the path.

Having sketched a (largely simplified) overview of the internal architecture of the Gargamelle emergent argumentative module (see Fig. 10.9 for a graphical overview), I will now zoom in again, and have a closer look at the internal structure of some of the sub-modules constitutive of the edifice.

10.12 The Internal Structure of the ‘Muon Noise’ Module: A Prototypical Example of the Robustness Scheme

I will begin with the sub-module ‘Noise of muons with low momentum’, since intuitively, one is inclined to see it as a perfect illustration of the robustness scheme.

The noise here identified, that is, the feared risk of mistake, is the following. A track of a negative muon of low momentum (<100 MeV) could be confused with the track of a short stopping proton (that is to say, a hadron). Now, the presence or absence of a muon among the outgoing particles is precisely what distinguishes the CC and NC interactions. If a low momentum muon is taken for a hadron, one will count as a NC-candidate what is actually a CC (a pseudo of the type: ‘CC with a low momentum muon’). We see here how an ally can turn into an enemy (here, a parasite).

On the basis of this analysis, the problem to solve, which defines the ‘Muon noise’ module as a modular unit, can be formulated as follows: Estimation of the number of pseudos of the type ‘CC with a low momentum muon’ within the experimental sample of the 102 NC-candidates.

“The magnitude of the effect may be estimated (…)”, write the experimenters in the 1974 paper. How? The estimation involves three distinct parallel sub-modules (see Fig. 10.10).

  1. (A)

    In the first, which I will label ‘Experimental extrapolation’, experimenters extrapolate “the observed muon spectrum to zero energy”. This spectrum is constituted by muon-candidates with energy above 100 MeV, that is, by tracks that are not subject to the muon-proton ambiguity.

    Upshot and output of the sub-module ‘Experimental extrapolation’: “This procedure predicts a misclassification of 9 events”.

    Conclusion: R 1 = 9 (see Fig. 10.10).

  2. (B)

    In the second parallel sub-module, which I will label ‘Theoretical calculus’, the number of low-momentum muons is determined by a theoretical calculus. The calculation is not detailed in the paper, but some theoretical hypotheses on which it is based are explicitly mentioned (“a theoretical calculation assuming scaling and correcting for non-zero muon mass”).

    Upshot and output of the sub-module ‘theoretical calculus’: 11 events.

    Conclusion: R 2 = 11 (see Fig. 10.10).

  3. (C)

    In the third parallel sub-module, experimenters examine, on the film, how many events already classified as CC-candidates have, among their secondaries, a muon-candidate with a momentum below 100 MeV.

    Upshot and output of the argumentative sub-module ‘CC-candidates with low momentum on the film’: 11 events.

    Conclusion: R 3 = 11 (see Fig. 10.10).

  4. (D)

    The resultant assessment of the three sub-modules: at the exit of the module ‘Muon noise’

    “The correction to be applied is 0 ± 5 events”, write the experimenters. They found 9, 11 and 11. They retained the value ‘10’.Footnote 10 That is, they conclude that the mistake in question concerns 10 events. Since they see no reason that would favor an erroneous overestimation of the NC-candidates at the expense of the CC-candidates, or the opposite mistake, they distribute the amplitude of ‘10’ symmetrically between plus and minus.

    Upshot and output of the module ‘Muon noise’: the experimenters could have over- or under-estimated by 5 events the initial count of the 102 NC-candidates.

    R = 0 ± 5 (see Fig. 10.10).

    The procedure corresponding to the ‘Muon noise’ module exemplifies characteristic traits of the Wimsattian scheme of robustness, at least if we accept the following re-description of its content:

    • First, we find several parallel derivations (namely three) for the estimation of one and the same magnitude (the low momentum muon background).

    • Second, each of the three parallel approaches seems to be taken as ‘in itself sufficiently reliable’ (this is of course implicit in the paper, but can be assumed on the basis of the absence of any discussion devoted to the matter, together with the extreme brevity with which the whole issue is settled).

    • Third, the three derivations involve notable differences. I cannot go into the details, but the three sub-modules show at the same time:

      • Differences with respect to the epistemic spheres, since two of them are based on the experimental data constituted at the level 2, whereas the third is a theoretical calculus.

      • And differences in content, since, for example, the two experimental sub-modules use two disjoint sets of machinic outputs.

      Let us assume that these differences are sufficient to see the three modules as sufficiently independent.Footnote 11

    • Fourth, the convergence of the three derivations appears to be of excellent quality: 9, 11 and 11 events are three results that nobody will hesitate to judge as very close to one another (two of them are even numerically identical). This uniformity of judgment is favored by the circumstance that the three outputs involved here are three numbers: results given in a quantified form generally appear to be more easily and less problematically comparable than conclusions stated in a more qualitative form.

Fig. 10.10

The ‘Muon noise’ module as a prototypical exemplification of the robustness scheme

These characteristics of the ‘Muon noise’ module give its final result a strong degree of robustness, and justify the decision to grant this module (as analyzed just above) the status of a prototypical example of the general scheme (an exemplar in the Kuhnian sense). I think it is this type of example that commonly lurks behind the abstract scheme and feeds the intuitions about it.

10.13 A Revised Version of the Robustness Scheme

Now, a reflection on this example taken as an exemplar leads us to refine the first version of the robustness diagram that has been proposed at the beginning of this chapter (Fig. 10.2).

10.13.1 Recognizing the Gap Between the Multiple Sub-modular Conclusions and the Unique Totalizing Modular Conclusion

The first version suggested an identity of the conclusions at the output of each sub-module (represented by one and the same unique symbol ‘R’), whereas the exemplar of the ‘Muon noise’ module clearly shows that, strictly speaking, we have three distinct results (R 1 = 9, R 2 = 11 and R 3 = 11).

This remark might seem merely anecdotal, especially when it is illustrated by such an example, in which two numerical values are identical and the third one is so close to the other two. But I think it is not anecdotal, and that in any case it calls at least for an examination of the way we go from the individual outputs of the multiple sub-modules to the unique output of the encompassing module.

Having obtained 9, 11 and 11, the experimenters retained the value ‘10’. This ‘10’ is a unique totalizing estimation, built from the three sub-modular evaluations. How is it built? Actually, in the present case, we don’t know. After giving the three results 9, 11 and 11, the experimenters immediately write, without further explanation: “The correction to be applied is 0 ± 5 events”. This being said, despite the absence of any explicit development of the matter, we can stress a number of contextual elements that act as constraints in the configuration under scrutiny.

First, the quantity to be estimated belongs to the category of a number of events, so it has to be an integer. Second, the estimated number is to be used to correct a definite number of NC- and CC-candidates. With respect to this aim, several different strategies are conceivable. For example: retain the highest number obtained by the different estimations, and examine whether even the most pessimistic estimation (the maximal error) leads to a final corrected number of NC-candidates that is still sufficiently high to be interpreted in terms of the experimental detection of the NCs. Clearly, this is not the strategy retained here. Another possibility is to take the average of the three numerical values obtained and to round it off to the nearest integer: this would indeed lead to 10. Such a procedure would implicitly assume that each of the three derivations involved is equally reliable (and hence must be equally weighted). We can conjecture that this is what the experimenters did.
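Under this equal-weights conjecture, the arithmetic would simply be

$$\bar{R} = \frac{R_1 + R_2 + R_3}{3} = \frac{9 + 11 + 11}{3} = \frac{31}{3} \approx 10.3 \;\longrightarrow\; 10,$$

the retained amplitude of 10 then being split symmetrically into the quoted correction of 0 ± 5 events.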

Anyway, the path the experimenters really followed in the present case is not important with respect to the general point I want to stress: namely, the non-straightforward character of the passage from the three sub-modular values R 1, R 2 and R 3, on the one hand, to the totalizing value R, on the other – a non-equivalence which leads us to raise the question whether another path could have been followed.

In the passage from the three sub-modular values to the totalizing modular value, there is a jump. The passage involves a decision about the unique value R which will stand for the multiplicity of the three different values R 1, R 2 and R 3 obtained by the three different independent derivations L 1, L 2 and L 3. Very often – and this seems to be the case here – R is taken as the ‘true value’ (or the most-adequate-approximation-in-a-given-stage-of-knowledge, which in practice amounts to the same). In such a perspective, once the decision about the true value R has been taken, it feeds back on the epistemological status of the three intermediary conclusions: they become more or less close approximations of the true value: the ‘9’ must be corrected to ‘10’, and so must the two ‘11’s.Footnote 12

10.13.2 History of Science, Individually Variable Reliability Judgments of Practitioners and the Contingency Issue

Such decisions depend both on (a) the history of science and (b) the pragmatic, intuitive evaluations, by the individual scientists involved, of the reliability of L 1, L 2 and L 3.

  1. (a)

    Given the path of our history of science, it is today a quasi-automatic routine to use certain kinds of mathematical tools (especially statistical techniques) in order to interpret experimental machinic outputs (in order to evaluate the precision of instrumental devices; in order to build a unique measure from a series of measurements associated with different, more or less dispersed outcomes…). As a result of this historical path, the construction of one unique R through the operation of averaging a multiplicity of Ri’s can hardly be seen today as a creative jump. Actually, it is even hard to be aware that there is any jump. However, there is one. To feel it, we have to go backward along the time axis and realize how problematic and controversial it has been, historically, to legitimize and impose these mathematical techniques as the best ones.Footnote 13 The point can be generalized to any mathematical algorithm routinely and ‘quasi-mindlessly’ used in the empirical sciences today.

  2. (b)

    Given the historical path and its crucial bifurcations, in a particular scientific context, the decisions relative to the construction of a unique R on the basis of a multiplicity of Ri’s depend on pragmatic evaluations made by the practitioners involved in the research. In our case study, for example, had the first derivation (R 1 = 9) been perceived as less reliable than the other two \(\left(R_2 = R_3 = 11\right)\), experimenters could have retained the value R = 11. Now, it is well known that judgments about what is reliable and what is not, or about the scale of what is more or less reliable, are a pragmatic (largely tacit) matter often subject to individual variations.Footnote 14

I do not claim at all to have shown, by these brief considerations, that the actual historical bifurcations or the actual options manifested in published scientific papers could indeed have been different in an epistemologically significant way. This is indeed a very hard philosophical issue, to which we can refer, following Ian Hacking, as the antagonism between “contingentism” and “inevitabilism” (see Hacking 1999, 2000). Even just a meaningful formulation of the issue would require too long a development to be provided here. I just want to suggest that contingentism should be taken seriously rather than being dismissed from the very beginning as too implausible, without any true examination.Footnote 15

10.13.3 An Act of Synthetic Calibrating Re-description

This being said, I hope the preceding reflections are sufficiently convincing to show that the final conclusion built as the output of the totalizing module (the value of R) must be considered, in an important sense, as a different and new conclusion with respect to each of the intermediary conclusions built as the outputs of the multiple modular components (the value of R 1, the value of R 2, etc.).

In the jump from the multiple Ri’s to the unique R, one can say that there is a certain kind of mutual adjustment of the different intermediary results obtained as the outputs of the sub-modules. Had the experimenters opted for R = 11, the mutual adjustment would have been of another kind. In cases where R is viewed as ‘the true value’ and the Ri’s as ‘more or less close approximations of this true value’, the decision to retain R = 11 rather than R = 10 leads to different feedback judgments with respect to the proximity of each of the Ri’s to R, and hence to different feedback judgments concerning the degree to which a given Ri is a good approximation. This can in turn have implications for the evaluation of the degree of precision of some instrumental devices or derivations. Suppose for example that the value ‘11’ is taken as the true value instead of 10: a subsequently introduced instrument or argumentative line that leads to values centered on 10 will be considered, all other things being equal, as less precise or accurate than an instrument that leads to values centered on 11.Footnote 16

What is the nature of the constitutive act involved in the passage from the Ri’s to R? By what kind of operation are the multiple sub-modular results converted into a single totalizing modular result? The move involves an operation that I will characterize as a (more or less creative) calibrating (or standardizing) re-description. Indeed, what do we do in going from the sub-modular outputs Ri’s to the totalizing modular output R? By an act of synthesis (which, depending on the situation, requires more or less ingenuity and creativity), we build, from the Ri’s (here numbers, but they may just as well be sentences expressed in more or less specialized words, mathematical equations, graphs, maps, pictures…), a new unique formula R (which can also take different forms) that is instituted as a pole of reference and substituted for all of the Ri’s. In the next stages, R will stand for the Ri’s. The Ri’s will be ‘forgotten’, and it is R that will be used as an unquestioned datum in subsequent derivations (at the upper levels of the Gargamelle architecture). The act of synthesis involved operates a reduction of the manifold obtained at a certain level, and an identification of this manifold with one and the same thing, picked out by a new description, at another level. At the same time, it institutes the identity of this ‘same’ (assuming by doing so some operations of translation) and gives it the status of a pole of reference. In the frequent cases in which this pole of reference R is conceived as a ‘true value’ (as-we-know-it-in-the-present-stage-of-knowledge, of course) rather than, for instance, a pessimistic threshold, R will work as a standard with respect to the precision of each Ri: the farther an Ri is from R, the less precise this Ri, and the argumentative line from which it has been derived, will appear.
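In the simplest, numerical case, the operation just described can be caricatured in a few lines of code. The snippet below is an abstract illustration of mine, not a reconstruction of the experimenters’ procedure: the synthesis rule (equal-weight averaging followed by rounding) is only one conceivable choice among those discussed above.

```python
# Abstract illustration of a calibrating re-description: several sub-modular results R_i
# are synthesized into one reference value R, which alone travels to the upper floors and
# then feeds back on the R_i as the standard against which their precision is judged.

def calibrate(sub_results, weights=None):
    """Synthesize the sub-modular results R_i into a single reference value R."""
    if weights is None:
        weights = [1.0] * len(sub_results)      # equal reliability assumed by default
    weighted_mean = sum(w * r for w, r in zip(weights, sub_results)) / sum(weights)
    return round(weighted_mean)                 # the quantity at stake must be an integer

sub_results = [9, 11, 11]        # R_1, R_2, R_3 from the 'Muon noise' sub-modules
R = calibrate(sub_results)       # -> 10; only R is used in subsequent derivations

# Once R is instituted as the pole of reference, each R_i is re-judged as a better or
# worse approximation of it.
deviations = {f"R_{i + 1}": abs(r - R) for i, r in enumerate(sub_results)}
print(R, deviations)             # 10 {'R_1': 1, 'R_2': 1, 'R_3': 1}

# A different synthesis decision (e.g. trusting R_2 and R_3 more than R_1) yields a
# different R, and hence different feedback judgments:
print(calibrate(sub_results, weights=[0.2, 1.0, 1.0]))   # -> 11
```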

Admitting the preceding reflections, we are led to introduce some modifications to the first version (Fig. 10.2) of the robustness diagram. No longer do we have three arrows all ending at one and the same point, the result R, but (see Fig. 10.11) three arrows ending at three different results, which are then synthesized into a unique result at an upper, emergent level. Or alternatively, relying on the modular representation (see Fig. 10.12): three sub-modules, with three outputs R 1, R 2 and R 3; one inclusive module with an output R; and an intermediary space between the horizontal line of the Ri’s and that of R, whose depth represents the importance of the creative jump.Footnote 17 I will characterize the structural fragment represented by this figure as the elementary scheme of robustness. The Muon noise module is a prototypical exemplification of the elementary scheme.

Fig. 10.11

A revised version of the robustness scheme: The elementary scheme of robustness in the arrow-node representation

Fig. 10.12

A revised version of the robustness scheme: The elementary scheme of robustness in the modular representation

In order to understand in what sense this scheme can be said to be ‘elementary’, let us come back to the global architecture of the Gargamelle experimental argumentative line.

10.14 The Elementary Scheme of Robustness and the Global Architecture of a Derivation

10.14.1 Identifying the Elementary Schemes of Robustness Inside the Gargamelle Modular Architecture

Inside an architecture of the Gargamelle type we find, locally, some modular units that satisfy a scheme akin to what I just called the elementary prototypical scheme of robustness. I don’t have the space to justify this claim. To justify it, we would have to:

  • First, describe the content of the modules that can claim to be akin to the prototypical elementary fragment.

  • Second, analyze the differences with respect to the prototype exemplified by the ‘Muon noise’ module. For example:

    • Quantitative versus qualitative conclusions;

    • The more or less creative and more or less problematic character of the act involved in the passage from the sub-modular to the modular conclusions;

    • The involvement of parallel procedures in which the similarities largely dominate the differencesFootnote 18

  • Third, discuss the way in which these differences with respect to the prototype can influence the feeling of ‘miracle’ (see above Section 10.3) and thus the robustness associated with the totalizing result.

  • Fourth and finally, argue for the decision that, despite the differences involved, we are entitled to assimilate the modules in question to variants of the prototypical scheme of robustness.

I will content myself with making visually apparent, within the global architecture, some of the modules that are, in my opinion, good candidates for the title of variants of the elementary prototypical scheme of robustness (see Figs. 10.13, 10.14, and 10.15).

Fig. 10.13

Localisation of the elementary schemes of robustness inside the Gargamelle architecture (1)

Fig. 10.14

Localisation of the elementary schemes of robustness inside of the Gargamelle architecture (2)

Fig. 10.15

Localisation of the elementary schemes of robustness inside of the Gargamelle architecture (3)

10.14.2 Fractal Articulations of Robust Elementary Units and Other Kinds of Articulations

How are these minimal fragments of robustness, these locally robust blocks, involved in the overall construction?

First, it is remarkable that, inside a floor, one often finds a sequence of elementary schemes of robustness nested one inside the other, which unfold in a kind of spiral across several successive contiguous levels of emergence. Indeed, let us start from the deepest elementary fragment of robustness, say the fragment of level N (see, in Fig. 10.13, the colored module of the second floor). Once constituted, the totalizing output of this module acquires autonomy with respect to the derivations involved in the multiple parallel sub-modules of level N that have made it robust, and it is then used as the input of one of the parallel sub-modules that constitute a new elementary pattern of robustness at the immediately superior level of emergence N+1 (see, in Fig. 10.14, the colored area on the second floor)… And so on from level to level (see Fig. 10.15, second floor). Something like a fractal is involved, the elementary scheme of robustness being the minimal pattern that is repeated at each level.
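Very schematically, and under the simplifying assumption that the totalizing outputs can be treated as numbers, this fractal articulation can be sketched as a recursive structure in which the output of a level-N module becomes one unquestioned input of a sub-module at level N+1. The class and rule names below are hypothetical illustrations, not a reconstruction of the actual Gargamelle analysis chain.

```python
from dataclasses import dataclass
from typing import Callable, List, Union

@dataclass
class Module:
    """A modular unit: parallel inputs totalized into a single output."""
    name: str
    # Each input is either a raw datum or the (already 'black-boxed') output
    # of a lower-level module.
    inputs: List[Union[float, "Module"]]
    # The totalization rule (a calibrating synthesis, an addition, ...).
    totalize: Callable[[List[float]], float]

    def output(self) -> float:
        values = [x.output() if isinstance(x, Module) else x for x in self.inputs]
        return self.totalize(values)

mean = lambda values: sum(values) / len(values)

# Level N: an elementary scheme of robustness fed with raw sub-modular results.
level_n = Module("level N", [0.9, 1.1, 1.0], mean)

# Level N+1: the totalizing output of level N is re-used, as one unquestioned
# input among others, inside a new elementary scheme at the next level up.
level_n_plus_1 = Module("level N+1", [level_n, 1.2, 0.8], mean)

print(level_n_plus_1.output())  # 1.0 with these illustrative numbers
```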

But the architecture is not entirely constituted in this way, following such a fractal algorithm.

First, the building blocks are not all structures that resemble the elementary scheme of robustness. Consider for example each of the ‘Selections’ sub-modules. Although it is often possible to give several parallel arguments in favor of the output of one of these modules, this output does not seem to be retained because it appears as the stable point of convergence at which the different parallel arguments arrive.

Second, even when the multiple sub-modules of a parallel set satisfy the pattern of the elementary scheme of robustness, their totalization in terms of an encompassing module does not always follow the same scheme. The operation by which one goes from the sub-modular outputs to the totalizing modular output is not always a synthesis of the calibrating re-description type. Sometimes, for example, the sub-modular parallel outputs simply add up. This is the case for the module ‘Noises’ (second floor). Or sometimes, the sub-modular outputs are in a relation of mutual exclusion. This is the case for the module ‘Theoretical interpretations’ (third floor).
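These contrasting modes of totalization can be sketched in the same schematic idiom. The three rules below are illustrative placeholders only: a calibrating synthesis of the kind discussed in Section 10.13, a simple addition of the kind at work in the module ‘Noises’, and a mutually exclusive choice of the kind at work in the module ‘Theoretical interpretations’; the numbers and ‘plausibility weights’ are hypothetical.

```python
def calibrating_synthesis(values):
    """Institute a single reference value standing for all the values."""
    return sum(values) / len(values)

def additive_totalization(values):
    """Sub-modular outputs that simply add up (as for independent noise estimates)."""
    return sum(values)

def mutually_exclusive_choice(scored_candidates):
    """Retain one candidate and discard the others (as for rival interpretations)."""
    return max(scored_candidates, key=scored_candidates.get)

print(calibrating_synthesis([62.0, 58.0, 65.0]))     # a single calibrating value

noise_estimates = [4.0, 2.5, 1.5]                    # hypothetical background contributions
print(additive_totalization(noise_estimates))        # total background: 8.0

# Hypothetical plausibility weights; in practice the 'scoring' is the whole
# argumentative work of the third floor, not a number.
interpretations = {"weak neutral currents": 0.9, "neutron background alone": 0.1}
print(mutually_exclusive_choice(interpretations))    # 'weak neutral currents'
```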

On the whole, in the internal architecture of an argumentative line of the Gargamelle type, one finds local fragments of robustness and fractal articulations of such fragments, but not only that. The modules that satisfy the robustness scheme are embedded in a complex network and articulated through different combinations with other types of modules.

10.15 The Solidity of a Derivation Considered as One Derivational Unit: The ‘Internal’ and ‘External’ Contributions to Solidity Attributions

All this being admitted, what, in the end, accounts for the solidity of a derivation considered as a whole (for example the Gargamelle experimental line)?

  • One side of the answer relates the solidity of the derivation as a whole to internal characteristics: to what the derivation is intrinsically made of.

    On that side, solidity analysis will be based on a characterization similar to the one I have just proposed for the Gargamelle line. And admitting the characterization I have proposed, the solidity of the line as a whole must be referred, not simply to the elementary scheme of convergence under multiple determinations, but to a much more complex scheme inside of which the elementary scheme is involved as an ingredient. Under such conditions, the solidity of the whole (of the Gargamelle arrow-derivation) owes a lot, not only to the degree of robustness of those of its parts that satisfy the robustness scheme, but also to the repertory of all the sub-modules involved, to the structure and content of each, as well as to the manner in which they are combined with one another. The solidity of the whole is related to a ‘global fit’ between the multiple ingredients involved in the architecture (see Soler 201X).

  • The second side of the answer relates the solidity of the argumentative line as a whole to external or extrinsic circumstances: to the existence of other different convergent derivations.

    On that side, one will link the robustness of the Gargamelle line to the fact that a second, different type of experimental argument exists – for example an electronic one – and leads to the same conclusion as the visual Gargamelle argument. Here, the solidity is asserted on the basis of the elementary scheme of robustness itself (and not on the basis of a more complex scheme).

Taking the two sides of the answer together, the solidity of an experimental argument of the Gargamelle type can be investigated:

  • Either by looking inside the procedural black box (inside the Gargamelle argumentative module) – and in this case the solidity will be related to its internal constitution;

  • Or by looking outside the procedural black box (outside the Gargamelle argumentative module), i.e. by examining the role played in more extended networks by the argumentative line treated as one unitary black box – and in this case, the solidity will be related to the situation of this line with respect to different argumentative lines (and often to the fact that the derivation under scrutiny is involved as an arrow in an elementary scheme of robustness).

From an analytic point of view, it seems desirable to distinguish these two parts of the answer. This being said, in a given historical situation, the solidity of a derivational modular unit considered at a given scale (for example the Gargamelle derivation as we defined it) can be due either mainly to what is inside of it, or mainly to what is outside of it, or to both what is inside and what is outside. The respective contributions of the inside and outside configurations to the solidity of the argumentative line, and the directions of the ‘solidity fluxes’ (from the inside to the outside or from the outside to the inside), will depend on the historical path. Or more exactly, they will depend on the ‘solidity values’ that are initially attributed by practitioners to the multiple ingredients involved in the historical situation (which will in turn depend on the past history of science: on what is, given this history, taken as already firmly established or still discussable, reliable or not so reliable, and so on; see Chapter 1, Section 1.7). For example, in case an electronic experimental argument leads to the same result as a previously obtained visual argument that was first viewed as fragile when taken in isolation, this will reinforce the visual experimental argument considered as a whole as well as its multiple ingredients. Here, the solidity flux will be mostly directed from the outside to the inside. But in case the internal ingredients of the visual argument are already taken from the beginning as especially solid, this can be enough to judge that the visual argumentative module taken as a whole is sufficiently solid ‘in itself’ (independently of any other ‘external’ convergent derivation).Footnote 19

10.16 Conclusions

10.16.1 What the Robustness Scheme Provides and Cannot Provide with Respect to the Analysis of Scientific Practices

The scheme of robustness ‘convergent results under multiple independent derivations’ is useful, and even indispensable, in order to describe science. I think that, usually, when a philosopher of science asks how a taken-to-be-robust scientific achievement historically acquired this status in the eyes of scientists, he will be able to give an explanation involving an elementary scheme of robustness.

But at the same time, it is important to stress that the scheme is only part of the explanation, in the sense that the structure of its skeleton is insufficient to account for the positions and decisions of living scientists (and a fortiori of no help for anticipating practitioners’ future options from a given scientific stage characterized by a scientific debate). Indeed, this structure in itself tells us nothing, in a given situation:

  • Neither about the number of independent convergent derivations required in order to conclude that the robustness is sufficient;

  • Nor about the required degree of independence of the parallel lines;

  • Nor about how to estimate the force of each of the parallel derivations that weigh in favor of a result;

  • Nor, finally, about how to weigh the different parallel argumentative lines in the (historically frequent) cases in which only some of them converge while others disagree.Footnote 20

Of course, the (more or less explicit) positions that practitioners will endorse with respect to these intertwined points will depend, not on the scheme of robustness as a form or a skeleton, but on the scheme as fed with a certain specific content. It seems, thus, that the scheme as a structure is not what counts primarily. In any case, the scheme as a structure is not a sufficient condition to impose, in a compelling and uniform way, the robustness (or a determinate ‘degree of robustness’) of the node R involved in it. It is a form that the philosopher indeed finds when he analyses science, but once such a form has been exhibited in a particular case, it is not enough to ‘justify’ or ‘explain’ the practitioner’s judgments that R is robust (or sufficiently robust). To account for such judgments (as far as this can be done), the philosopher will have to take into account the particular content with which the robustness skeleton is associated in each case.

This is indeed not a surprising conclusion if we take into account the lessons of the Science Studies devoted, in recent decades, to the chapter on ‘scientific method’. As Thomas Kuhn showed already in the 1960s, and as the concept of scientific paradigm was intended to stress, practitioners’ judgments about what is reliable or unreliable, trustworthy or not, scientific or metaphysical, etc., have an irreducible pragmatic dimension (see Kuhn (1970), and Kuhn (1973) on values in scientific judgments). This means, among other things, that they are not reducible to the inescapable output of an algorithmic calculus that the philosopher of science could make entirely explicit, and in which the different ingredients of the historical scientific configuration under scrutiny would be uniformly and univocally weighted. Actually, the substance of such judgments remains opaque, and the explicit positions of different practitioners about what is reliable and what is not often turn out to diverge. There is no reason why robustness attributions should be an exception.

Taking all of this into account, we should not ask too much of the robustness scheme. Once this is recognized, the robustness scheme remains a very useful analytic tool for analyzing actual scientific practices. Indeed, the general scheme exhibits and characterizes a central and pervasive pattern that underlies practitioners’ judgments about the quality of scientific achievements, and this helps us to recognize instantiations of the general structure in particular historical cases and to clarify its specific substance in each case.

From the perspective of this kind of ‘weak program’ about scientific method, the contribution of the present chapter has been to provide a reflection on what constitutes the solidity of the arrow-derivations involved in a Wimsattian robustness scheme. A given derivation can borrow its solidity both from ‘external’ factors (namely from its position as an arrow in a robustness configuration in which the other arrows and nodes are already taken-as-sufficiently-solid) and from its ‘internal’ features. On this second side, the chapter has shown that the solidity of the derivation as a whole can be analyzed as a global good fit of a much more complex structure than that of the elementary scheme of robustness (a complex structure involving multiple elementary schemes of robustness as ingredients). More work would be required to characterize the nature of this kind of complex fit and the kind of glue(s) involved in it. But on the basis of the present reflection, we can suggest that the robustness scheme is one particular, indeed especially prominent, kind of holistic fit among other possible ones, through which something might acquire the status of a solid achievement in the course of the history of science.

10.16.2 Epistemological Open Issues and Lines of Future Investigations

The analyses of this chapter are not devoid of consequences as regards issues of epistemological significance, such as the traditional one of scientific realism, and the less traditional one introduced above (Section 10.13) as the antagonism between contingentism and inevitabilism. In my opinion, the above analyses raise doubts about the plausibility of scientific realism and inevitabilism.Footnote 21 To close this chapter, I would like to indicate why I think the preceding analyses weaken correspondence realism and realist-inspired inevitabilism, by sketching the kinds of arguments they suggest (for more details, see Soler (201X)).

When we scrutinize what is behind robustness judgments, and in particular when we analyze what each of the multiple derivations is made of, the upshot encourages us to be extremely cautious about the passage from the robustness scheme to inevitabilism and scientific realism.

First, we should keep in mind two possible ways of speaking about a robustness scheme, which at first sight can both appear equally legitimate, if not quasi-equivalent, but which actually reflect and generate strongly different intuitions, and thus surreptitiously act as supportive elements for or against epistemological stances akin to realism and inevitabilism.

In order to describe the passage from the multiple sub-modular outputs to the unique totalizing modular output, we can say – and we commonly say – that the parallel derivations lead to an invariant something, or converge on one and the same result. This is certainly not false. But this formulation suggests an invariance already given as such at the level of each sub-modular output, a pre-determined, ineluctable identity that scientists have bumped into, much as we bump against a wall. This image strongly pushes us toward a reading of the convergence in terms of the ‘no-miracle argument’ (see Section 10.3 above), and hence fuels the realist-inspired inevitabilist conviction.

Whereas if, in order to stress the creative act of synthesis involved in what I called the calibrating re-description, we depict the passage from the multiple sub-modular outputs to the unique higher-level modular output through a formulation of the kind ‘the different parallel derivations have led to, strictly speaking, different conclusions, which practitioners succeeded in reconciling by substituting for all of them a unique totalizing calibrating conclusion’, the flavor is rather different… In particular, the feeling that the convergence would have to be seen as a ‘miracle’ – i.e. that it would remain completely unexplained without an invocation of the pressure of ‘reality’ – is strongly attenuated. No doubt, it is certainly very hard, and sometimes not convincingly possible, to make a multiplicity of new results cohere with one another in such a way that all of them, moreover, nicely fit with the extended stock of other already taken-for-granted scientific achievements. But if there is a ‘miracle’ here, it seems to be of a different kind than the one involved in the realist argument (on this point, see also Chapter 1, Section 1.8).

The relevance of each of these two possible re-descriptions of the robustness scheme has to be estimated case by case. But the very possibility that the first, usual formulation could actually hide a situation of the second kind, at least encourages us to be extremely cautious with respect to the quasi-irresistible tendency, undeniably active for practitioners and for each of us in ordinary situations, to assimilate what is robust to what is true, to what reveals a bit of reality and hence was inevitable (given, of course, some – admittedly partially contingent – ‘initial’ historical conditions, such as the questions scientists asked, the instrumental means at their disposal and so on).

Second, another, complementary line of reflection also raises doubts about the intuitive realist reading of the robustness scheme.

The way the abstract scheme ‘multiple arrows converging on one result R’ is instantiated in a given historical situation is strongly dependent on a conceptual and theoretical shaping: ways of analyzing problems, of elaborating questions, of deciding about the relevant variables and strategies (and, as a particular case, of building the calibrating re-description)… Each module, each modular decomposition and global architecture, comes into existence on the basis of such a shaping. Now it is difficult to argue that such a shaping is, as such, written in nature, or even uniquely imposed by what is already taken as the ‘scientific facts’ at a given stage of knowledge. Yet being in a position to argue that something like this holds, at one stage of the investigation or another, is precisely what would be needed in order to support correspondence realism and inevitabilism. Otherwise, if several ontologically disparate solid fits are at all stages convincingly possible, if there is no point at which one of them is uniquely imposed, we are led to the idea of an alternative science that could be, at the same time, both solid (in the same intuitive sense in which we say our science is solid) and ontologically very different from our science. In other words, we are led to a contingentist position.

Let me say a little bit more about this point, starting from the shaping that lurks behind the modular architectures on the basis of which an item acquires its status as an established result.

First of all, it has to be stressed that, depending on the context, the constitutive act of shaping involved in a modular decomposition appears more or less creative and problematic.

Sometimes its very existence can remain invisible to practitioners. This is the case when a given modular decomposition appears, at a given stage of scientific practices, almost automatic, obvious, devoid of any alternative and hence strongly compelling. In our historical episode, this is the case for the module ‘Collective treatment’ of the second floor: the division of the initial problem into three parallel investigations focused respectively on the spatial, the energetic and the angular distributions was, at the time, a usual and almost inescapable step in the interpretative practices applied to the photographs obtained with visual detectors in high-energy physics.

In some other contexts, the modular decomposition appears optional, creative and potentially problematic to the actors themselves. For instance, in the 1974 paper, the decision to investigate the NC/CC ratio (rather than to study the NC in isolation) appears as an optional strategy (although it is completely constitutive of the final conclusions since, as we have seen, the CC works as an experimental standard for the identification of the NC and for their differentiation from the neutron backgroundFootnote 22). Correlatively, the repertory and the treatment of the different kinds of relevant background events is quite problematic (the multiple practitioners involved in the research on weak neutral currents at the time were not worried about the same risks of confusion; they did not trust the same kinds of methods for the evaluation of the noises; they were deeply aware that some still unidentified pseudo could have been missed…). The treatment of the most problematic background event, the so-called “neutron background”, is itself a highly complex architecture inside the Gargamelle construction, and its modular constitution involves some rather creative steps (through the use of a Monte Carlo simulation, itself based on a multiplicity of uncertain hypotheses about the properties of neutrons and neutron cascades).

Now, whether perceived as optional or inescapable, creative or imposed, problematic or obvious, it is very difficult to see something like a modular structure as inescapable or ‘uniquely imposed’. Or more exactly, when practitioners have the feeling of inescapability, it is on the basis of an anterior historical trajectory which is itself made of similar modular structures. And so on indefinitely: we never find anything else than modular structures, past or present (see Chapter 1, Sections 1.6.2 and 1.7.2). Now a modular structure – and, as a particular case, a robustness scheme – works as a holistic equilibrium, and a great deal of contingency is involved in the constitution and emergent conclusions of a holistic equilibrium.

The acts that determine the number and content of the parallel modular units, as well as the modalities of their articulation in more complex architectures involving different levels of emergence, are, uncontroversially, dependent on a partially contingent history. For example, if the CCs can be instituted as experimental standards with respect to the identification of the NCs, this is the result of the – in themselves uncontroversially contingent – programmatic choices made in the previous years (i.e. the choice to conduct experimental studies of the CCs rather than of other sub-atomic phenomena). At first sight this obvious remark seems anecdotal, but its potential harmfulness appears when we stress that what has been done and what has been established (until-further-notice-of-course) in the past is not at all indifferent to, and indeed strongly conditions, what is done and what is taken as established in the future. So a genuine path-dependency could be at stake.

In each ‘synchronic’ stage of the history of science, what is taken to be plausible or established acquires its status from its situation inside a global equilibrium structurally similar to (although much more complex than) the Gargamelle architecture of our example. At each floor of this architecture, a module works as a holistic equilibrium. Imagine a change at one point or another: it is plausible that the emergent conclusions would have been different. With different or additional derivations at the level of a given module, the global assessment (the totalizing output) could have been different.Footnote 23 Now what is available and what is not, in terms of derivations, depends on the past history. And what is built as the output of a given floor is not without consequences for what is built at the immediately superior floor. Indeed, what is obtained at a lower level as the result of a local equilibrium (this or that totalizing output) is subsequently used as an unquestioned, given datum (it plays the role of input) for the constitution of new equilibriums at higher levels of emergence. So we can speak of a sort of amplification or cascade process when we move from one level of the Gargamelle architecture to the next. And arguably, something structurally similar holds when we consider the diachronic relations between successive synchronic ‘slices’ of scientific developments.

I do not claim to have demonstrated contingentism against realism and inevitabilism. As already indicated, my aim has only been to sketch the directions in which a genuine argument would have to be sought. The core of the argument would lie in the way human knowledge is built, that is, as a succession of holistic “symbioses” (in Pickering’s terminology) resting one upon the other along a diachronic line. Reflecting on such structural characteristics, it becomes difficult to assert that at one point or another (be it at the ‘ideal end of research’), the contingencies related to the way the problems have been framed at each level, the contingencies related to the way the solutions have been constructed as the converging points of multiple available derivations (or through more complex global good fits) – in brief, the contingencies of the whole process of deployment of successive modular arborescences and equilibriums in the course of history – can be erased or eliminated so as to impose a unique story, let alone a true unique story that could pretend to mirror (or at least to map in an isomorphic manner) a unique physical world which is what it is once and for all. It is in that sense that a reflection on the working of a modular architecture of the Gargamelle type weakens the intuitive obviousness of the inference from a robustness scheme to correspondence realism and inevitabilism.Footnote 24