1 Introduction

This paper deals with the problem of how non-experts can assess technical claims made by experts. The problem arises from the asymmetry of technical understanding between non-experts and experts, and it has been recognised by philosophers and scholars in other disciplines (see Krimsky 1984; Walton 1997; Goldman 2001; Shwed and Bearman 2010). Several contributions to this special issue also address it (e.g. Gelfert, Goodwin, Tindale, Zenker).

The approach developed here differs from existing studies in that it grows out of the programme known as “Studies in Expertise and Experience” (SEE). From the perspective of SEE (Collins and Evans 2007), to become an expert in a technical domain means acquiring the tacit knowledge pertaining to the domain. As far as is known, there is only one way to acquire tacit knowledge and that is through some form of “socialisation”; tacit knowledge cannot be transferred via written or other symbolic forms, so some form of sustained social contact with the group that has the tacit knowledge is necessary (e.g. Collins 2010). This social contact is very hard to establish and maintain in the case of narrow technical domains. The question is whether those who do not have such contact can find other ways of making judgments concerning technical matters.

According to SEE, one reasonable strategy for non-technical experts is to judge the social position or social performance of experts; that is, to use social expertise as a resource for judging expert claims. In Rethinking Expertise (Collins and Evans 2007), such judgements are referred to as “transmuted expertise”: social expertise is transmuted into technical judgments. What we will try to do here is to differentiate between different types of transmuted expertise and to explore their use in assessing expertise or expert claims in different contexts. This draws on the classification of expertises set out in the “Periodic Table of Expertises” (Collins and Evans 2007: 14), reproduced in slightly modified form as Table 1.

Table 1 The periodic table of expertises

2 Ubiquitous and Specialist Social Expertise

In the original Periodic Table of Expertises, Collins and Evans (2007) distinguished between five types of meta-expertise (row 4), where a meta-expertise is defined as an expertise used to judge expert claims or experts. A distinction is made between transmuted and non-transmuted meta-expertises. Non-transmuted meta-expertises such as Technical Connoisseurship, Downward Discrimination and Referred Expertise have in common that they turn on the use of technical understanding to judge expert claims. In contrast, the two types of transmuted meta-expertise, Ubiquitous and Local Discrimination, use non-technical, social understanding to make technical judgements.

Ubiquitous Discrimination is based on the social expertise that every competent member of society has to have if they are to avoid being duped by salespersons, politicians and so forth. It is, for example, the expertise needed to keep your savings secure by putting them in a bank rather than giving them to some stranger in the street to hold for you (and the expertise needed to know when it is time to participate in a “run on the bank” and take your savings out). One can make such judgements at a better than “toss of a coin” level without being a financial expert. Local Discrimination is the same thing but enhanced by a greater degree of proximity to the persons and places belonging to the expert domain. Thus, Wynne (e.g. 1989, 1992) claims that citizens who had lived near the Windscale/Sellafield nuclear reprocessing plant had come, quite reasonably, to distrust the claims made by official spokespersons regarding the safety of the plant. Living locally, they had paid special attention to claim and outcome over many years and had acquired a special expertise in making these (cynical) judgments not shared by others, living further away, who had less concentrated experience of the matter.

Note that the distinction between ubiquitous discrimination and local discrimination points to a feature of the SEE approach not shared by analyses such as Walton’s (1997). The SEE approach takes into account the social location of different groups. Tacit knowledge deficits are less marked in social and physical locations that allow for greater access to the experts’ social world. A deficit of tacit knowledge cannot be remedied by individuals when they simply consult the literature from a distance because understanding the literature is only possible if reading is supplemented by expert understanding of how different items and outlets are to be weighted. The same applies to the weighting of track record and qualifications, especially in the kind of scientific controversy where all sides are equally qualified and experienced.

Since an important feature of SEE is to examine the social location of those making judgments, SEE begins with a distrust of ubiquitous transmuted expertise. SEE agrees that there is such a thing as ubiquitous transmuted expertise but, generally, it lies at the least reliable end of transmuted expertises; the argumentation approach appears to value it more highly than the SEE approach does. Thus we are, as will be seen, distrustful of most of the examples listed in our own Table 2 (below). The cases where such ubiquitously available judgements are reliable are usually rare ones, such as the judgement that the vast conspiracy needed to fake the Moon landings would have been impossible to sustain for social reasons, or the estimate of the total imbalance of expertise in the case of the MMR controversy. We now explain what we mean by stepping through a number of such ubiquitous transmuted expertises as SEE views them.

Table 2 Uses of discrimination

2.1 Assessment of Trustworthiness

The popular phrase that sums up the problem is “would you buy a used car from this man (or woman)?” President Nixon, with his darkly shadowed lower face and tendency to perspire, was notorious for doing badly when it came to this kind of judgment. So, it seems, was Sir Walter Marshall, Chairman of the British Central Electricity Generating Board, with his baggy blue-grey suit and “told you so” manner, when it came to announcing that the way the Board transported nuclear fuel around the country had been proved to be safe by the demonstrated integrity of a nuclear fuel flask after it had been hit by a train (Collins 1988). One can see immediately that, extreme cases aside, this way of judging is not entirely reliable. Nixon may have been responsible for Watergate but, without changing his appearance, he also did some great things, such as visiting China. Sir Walter may have been glossing over certain difficulties to do with the nature of the demonstration when he was disbelieved by the public, but doubtless much of what he said on other occasions was a model of integrity, though he was wearing the same baggy suit and expressing himself in the same manner.

2.2 Assessing Consistency

Changing position is often seen as a weakness in politicians and is exploited by journalists. Indeed, the suspicion of inconstancy has become symbolised (in British politics, anyway) by the accusation levelled at politicians: “that’s a U-turn!”; Margaret Thatcher’s notorious remark “the Lady’s not for turning” points to the same thing. In more recent times a short-lived scandal was caused when the current British Prime Minister, David Cameron, wanted to stress his green credentials by riding his bike into the office. The effect was spoiled by the filming of his chauffeur-driven Lexus following at a distance. More seriously, the ‘sheep farmers’ studied by Brian Wynne (1989, 1992) could point to many inconsistencies in the advice given to them by experts after the deposition of radioactive fall-out on the Cumbrian fells in the wake of the Chernobyl disaster.

2.3 Assessing Sociological Credibility

Assessing sociological credibility can sometimes be a rare but powerful way for the public to choose a technical position. The case discussed in Rethinking Expertise is the 1969 American Moon landings. It is an interesting experiment to ask any group of scientists why they believe in the Moon landings rather than the counter-story, that they were faked in some desert area of the USA. The reply, at least from physical scientists, is likely to be in terms of various technical correlations such as the telecommunications black-out which attended the space-capsule’s passage behind the Moon, the way the astronauts were shown bouncing high into the air as they jumped in the Moon’s low-gravity environment, the way dust took a long time to settle, and so forth. But all of this could easily have been faked if an organisation like NASA had decided to fake it, so this kind of evidence is not decisive. At least, it is not decisive unless you were part of the team controlling the space-craft; it is not decisive at any kind of social distance from the scene of the activity. This shows, inter alia, how speedily technical expertise degrades as one moves away from a technical specialism in sociometric space. The point is illustrated by the fact that American astronauts seriously considered that the first Russian “spacewalk”, in 1965, might have been faked even though it had been seen on television. In Rethinking Expertise, astronaut David Scott is quoted from his 2004 autobiography (Scott and Leonov 2004): “What proof do we have that this guy really went outside?” Bear in mind that the Americans in the story were space experts, but they were still short of social proximity to their Soviet counterparts.

In the case of the Moon landings, however, the public, and any scientist not present at the heart of the activity, are on quite strong ground when they turn to sociological credibility. We all know that it is almost impossible to organise a conspiracy of that size without “leaks”, and we all know that, given the circumstances of the Cold War, if the Russians had had any inkling that it was a fake, they would have said so. The Russian silence is enormously strong evidence, much stronger than the technical evidence, that the Moon landings were genuine.

2.4 Assessment of Scientificness

Ubiquitous social expertise can also be used to tell whether a group of self-proclaimed experts belongs to the scientific community. Thus, many citizens of a typical western society will understand that astrologers are not connected to the scientific community. Social understanding can often be used to discriminate in this way, though the very success of astrologers shows that this ubiquitous expertise is often misapplied. Here we are not making claims about how people should act; we are merely claiming that if they wanted to choose, say, orthodox science-based medicine over alternative medicine they would have the discrimination to know how to do it.

2.5 Assessment of Scientific Currency

Here we repeat the discussion of “cold fusion” found in Collins and Evans (2007, pp. 47–48). There it was argued that, as the cold fusion saga drew to a public conclusion around the last years of the twentieth century, most reasonably literate members of Western society, who knew only what they had seen on the news or read in the newspapers, were in a position to understand that cold fusion had been tried and found wanting. There was a time when cold fusion was continuous with science, but there were now enough clues in the mass media to indicate that its cognitive and social networks had drawn apart from ordinary scientific society. Crucially, to make this judgment it was essential to ignore scientific credentials or track records. Thus Martin Fleischmann, the co-founder of the cold fusion field, had an enviable track record of success in the sciences, was immensely well qualified and honoured as a Fellow of the Royal Society, yet still believed in the effect, contrary to the scientific consensus. To expect the citizen to be sufficiently educated in science to be able to make a technical judgment that went against Fleischmann was, of course, ridiculous, but to rely on qualifications or track record was just as bad. The crucial judgment, however, concerned whether the mainstream community of scientists had reached a level of social consensus that, for all practical purposes, could not be gainsaid in spite of the determined opposition of a group of experienced scientists who knew far more about the science than the person making the judgment. Note that this is not the sort of judgment that we would expect even an immaculately qualified scientist from “another planet” to be able to make. A scientist from another planet, reading published papers for and against cold fusion, would have difficulty working out who was right; the scientifically ignorant citizens of this planet, in contrast, had a relatively easy decision to make.

Here, then, the citizen is in a better position to make a judgment for practical purposes than certain of the scientists. This is the case because science reserves the greatest honour for those who cleave doggedly to a widely scorned idea only to be proved right in the long term or even posthumously. A scientist, then, can be behaving in a proper scientific manner while behaving quite unreasonably from a policy point of view.

2.6 Assessment of Evidence

Perhaps most surprisingly, there are even occasions when citizens could, in principle, make judgments of scientific evidence through the use of transmuted expertise. The case of the 1998 revolt of the citizenry of the UK against the Measles, Mumps and Rubella (MMR) vaccine is an example (Boyce 2007; Goldacre 2008; see also Tindale in this issue). Quite simply, there was never any evidence for the cause of the panic, the idea that MMR vaccine caused autism. There was some minimal evidence, discovered by the doctor Andrew Wakefield, that autism was associated with measles virus in the gut; Wakefield started the scare by proposing the MMR link during a news conference. But Wakefield continued to recommend single-shot measles vaccinations and had no specific evidence about MMR. That this was the case was not hard to find out, at least by those with some access to the goings-on, such as journalists. Unfortunately, the journalists chose to exploit and amplify the scare, at best providing a spurious ‘balance’ to their stories which in no way reflected the balance of scientific evidence. This was a sorry episode for the public, the journalists, and even some of the social scientists who wrote about the issue. These social scientists, seemingly driven by a desire to defend “the powerless” against the medical establishment, concentrated on explaining the parents’ viewpoint rather than stressing the practical point: that the original claim for a link between MMR and autism was without technical foundation and that the revolt was powered by the “free rider” problem which haunts any vaccination campaign. Social scientists’ own expertise in respect of the theory of moral panics, the free rider problem and even a minimal understanding of statistics was readily applicable had they wanted to use it. The point is, however, that the public could have made the correct judgment, and probably would have done, had the journalists made the right choice of how to handle the story. If faced with a claim for which there is simply no evidence at all, and a rival claim for which there is some evidence (the absence of any link in all the epidemiological studies), it is reasonable to think that the proper use of transmuted expertise would favour the latter.
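To make the free-rider arithmetic concrete, the following is a minimal illustration using the standard herd-immunity threshold from textbook epidemiology; it is our addition, not drawn from the sources cited above, and the figures are round textbook values for measles.

```latex
% The herd-immunity threshold p_c is the fraction of a population that
% must be immune for an infection with basic reproduction number R_0 to
% stop spreading:
\[
  p_c = 1 - \frac{1}{R_0}.
\]
% Measles is highly infectious, with R_0 commonly estimated in the range
% 12--18. Taking R_0 = 15 as a round figure:
\[
  p_c = 1 - \frac{1}{15} \approx 0.93.
\]
% So roughly 93% coverage is required. When coverage sits near this level,
% any one family's child is already protected by everyone else's
% vaccination, so declining the vaccine looks nearly costless to each
% family taken alone; but if many families reason this way, coverage
% falls below p_c and outbreaks return. This is the free-rider structure
% of the problem referred to in the text.
```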

3 Domain Specific Discrimination

We now turn to an example of transmuted expertise that is at the other end of the scale of proximity to ubiquitous transmuted expertise since, by definition, those who use it are themselves an integral part of the technical domain. Domain Specific Discrimination (DSD) has recently been analysed with a view to it being added to the Periodic Table classification (Weinel 2010). It is the “non-technical” expertise used by technical experts to judge their fellow experts. The reason it has not been identified before as a type of transmuted expertise is precisely because technical judgment could not proceed without it: it is so integral to what technical experts do as part of their technical lives that it had not been noticed that it was itself a transmuted expertise.

Name and categorisation aside, DSD was discovered as the sociology of scientific knowledge was developed. For example, Collins (1992[1985]), by closely observing a scientific controversy in gravitational wave physics, showed that scientific disputes cannot be brought to an end by purely “scientific” or technical means. Scientists can invent any number of sub-hypotheses to rescue their favoured empirical findings from any number of conflicting experimental results. The most straightforward method is to question experimental competence. Thus, should the experiment of a rival fail to find the claimed phenomenon, the first experimenter can insist that the rival’s experiment was not done with sufficient care or skill. Given that scientific experiments are difficult, and that there is no direct indicator of how well they were carried out other than their outcome, in a case where the correct outcome is disputed it is impossible to prove that an experiment has been properly executed. This gives rise to the “experimenter’s regress”. Breaking out of the regress nearly always requires reference to some non-scientifically-measurable social variable such as skill, reputation, background, institutional setting, prevailing opinion, and so on. This is an exercise in the application of transmuted expertise since it is these social judgments that resolve the regress one way or the other and therefore precipitate a new fact about the technical world addressed by the experiment. It happens that this kind of transmuted expertise is used by those who are also technical experts.

4 Sociological Discrimination

So far we have discussed three kinds of transmuted expertise: Ubiquitous Discrimination is open to all citizens; Local Discrimination is open to citizens who have some special local access to the persons or situations being judged; and Domain Specific Discrimination involves still closer access to relevant experts because the discriminator is located at the heart of the relevant scientific community. Local Discrimination is better than Ubiquitous Discrimination because it entails more detailed understanding of the local social situation in which the scientific or technological dispute is being played out. Domain Specific Discrimination is better still because the social situation being judged is right within the social community at the heart of the science. Ubiquitous and Local Discrimination are named categories in the Periodic Table of Expertises. Domain Specific Discrimination is not yet a named category but is latent in both Interactional and Contributory Expertise.

Now we turn to another new kind of transmuted expertise, made salient by Weinel (2010), which does not appear in any form in the Periodic Table; it would appear, however, were we ever to construct a revised version. The new category is “Sociological Discrimination”. It involves the application of the specialist skills of the expert social analyst to discriminate among technical choices. The skills to be applied pertain to understanding societies, particularly the social organisation of science. Sociologists of scientific knowledge are among those who might be expected to have such skills.

For example, one of the problems for policy makers in technical areas is what to do in cases of expert disagreement, a situation classified as the “novice/2-expert problem” by Goldman (2001). Policy decisions often have to be taken immediately and cannot wait for the complete closure of a scientific controversy, which might well take a generation. Thus, when a science is characterised by controversy, policy-makers are faced with the problem of making decisions that turn on science without scientific consensus to fall back on. In some cases we believe we can show, using the transmuted expertise of Sociological Discrimination, whether a controversy should be taken seriously for policy purposes or whether it is reasonable for the policy-maker to work with the majority. An example is the decision about whether to administer antiretroviral drugs to pregnant women in South Africa.

4.1 Mbeki and Anti-Retrovirals

In late 1999, the South African government argued that it could not positively respond to public demands to provide the antiretroviral drug AZT to pregnant women living with HIV/AIDS. AZT had been widely used since 1994 to reduce the risk of mother-to-child transmission (MTCT). In October 1999, President Mbeki, who had no recognisable expertise in pharmacology, toxicology or any other related natural science, justified the decision not to provide AZT in the public health sector by pointing to an unspecified “huge volume of scientific literature” which apparently argued that AZT was “a danger to health” (Mbeki 1999).

It is incontrovertible that some literature highly critical of AZT could be found on a variety of websites. In earlier years there had been a number of publications relating to this dispute in high-ranking scientific journals though, by the time of the Mbeki dispute, such papers had become rare. Because of this, it appears that any real controversy within the scientific community was long over. Such a judgement follows from a certain level of understanding of the social nature of science and scientific controversies. There are enough publication outlets in world science for a dispute to continue for generations even though, in the heartlands of the science, these contributions are ignored. The application of this kind of understanding of how science works to weight various contributions to a scientific or technological dispute is “Sociological Discrimination”.

Mbeki, seemingly, was not able to base his judgement about AZT on this kind of understanding. For him, the existence of critical texts and the fact that they were written by people with scientific credentials, some of them published in scientific journals, was enough to convince him that AZT was the subject of a live scientific controversy. To repeat, sociological studies have shown that there are large differences in the quality of scientific texts and journals, differences which specialists draw on as part of their Domain Specific Discrimination. These differences can, to a certain extent, be discerned by appropriately trained outsiders.

An example of the different treatment accorded to different publications by insiders has been much discussed under the Periodic Table’s heading of Primary Source Knowledge. In 1996, Joseph Weber, the pioneer of gravitational wave detection (Collins 2004), published a paper which could have led to a Nobel Prize had its results been widely accepted; he claimed to have found correlations between the gravitational waves he said he had detected in the early 1970s and Gamma Ray Bursts, another cosmic mystery. Weber’s early claims to have discovered gravitational waves had been discredited among the majority of the relevant scientific community by around 1975. In 1996, being well immersed in the gravitational wave physics community, Collins was able to ask a large sample of its members what they made of the new paper. To his surprise, he found that the only person who had actually read the paper was Collins himself. It was simply that, by this stage in the history of gravitational wave detection, the credibility of discovery claims made by Weber had become so low that scientists took no notice of his papers. Thus, visitors from other planets, if there are any, who wish to apprise themselves of the content of terrestrial science would be ill-advised to rely on published sources alone since, before these sources can reflect our science, they have to be divided into a set which must be taken seriously and a set which should be ignored. What applies to extra-terrestrials also applies to those who come to a science from an alien social location, but Mbeki did not understand this.

Weinel (2008, 2009, 2010) has used other aspects of Sociological Discrimination to show that what Mbeki claimed to be a live scientific controversy ought to have been judged by policy-makers to be a false scientific controversy, like that over the MMR vaccine. For example, a search of medical databases such as PubMed reveals that only a single peer-reviewed paper published in the 1990s argues that the risks of using AZT outweigh its benefits. Moreover, this paper was ignored by all literature reviews of the safety of AZT conducted in 2000. In contrast, there are hundreds of scientific papers that, while pointing out the risks of AZT, state explicitly that the benefits of using AZT to reduce the risk of MTCT outweigh these risks.
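A date-bounded literature search of this kind can in principle be reproduced by anyone. The sketch below, which assumes Biopython’s interface to the NCBI Entrez service, shows the general shape; the query string is our illustrative guess rather than Weinel’s actual search, and present-day results will of course differ from what the databases contained in 1999.

```python
# A minimal sketch of a date-bounded PubMed search via Biopython's Entrez
# module. The query term is an illustrative assumption, not the search
# string used in Weinel's studies.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # NCBI asks callers to identify themselves

# Search PubMed for 1990s papers on the risks of AZT (zidovudine).
handle = Entrez.esearch(
    db="pubmed",
    term="zidovudine AND (risk OR toxicity)",
    datetype="pdat",   # filter on publication date
    mindate="1990",
    maxdate="1999",
    retmax=200,
)
record = Entrez.read(handle)
handle.close()

print(record["Count"], "matching records")
for pmid in record["IdList"][:10]:
    print("PMID:", pmid)
```

Counting hits is only the first step; as the text goes on to argue, the records still have to be weighted, which is where Sociological Discrimination does its work.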

A closer qualitative look at the critical paper, based on an understanding of “how sciences work”, i.e. Sociological Discrimination, indicates further problems. First, it was published in a journal called Current Medical Research and Opinion which has a very low impact factor and a history of publishing fringe research (Nattrass 2007). Second, despite being a review article published in 1999, its central argument is based on claims made in scientific papers published in the 1980s and early 1990s; literature from the mid and late 1990s is scarcely cited in the paper (Cherry 2009). Third, the main authors can easily be identified as supporters of the claim that the existence of HIV has never been proven, despite the fact that none of them has been involved in any substantial research on the virus (Kalichman 2009).

Of course, it might still turn out that the claims made in the critical papers were correct, while the scientific mainstream was wrong. The point is, however, that in late 1999, when a policy decision on providing AZT to pregnant women was made, the safety of AZT was virtually undisputed within the scientific community. If there had ever been a real scientific controversy over the matter, it had long “passed its sell-by date”. Remember, the problem of policy (decision-making in the short term) is not the same as the problem of science (truth in the long term). Sociological Discrimination, or “social intelligence regarding science” (Kutrovátz 2010), can, in this case, provide a basis for non-experts to distinguish between inauthentic and genuine policy-relevant scientific controversies.

5 Conclusion

We have argued, just as others, both those discussed above and those writing in this special issue, have argued, that it is sometimes possible for non-technical experts to make what amount to technical decisions using what we call “transmuted expertise”. We have explained some of the ways in which the SEE approach differs from philosophical examinations of what it is possible to argue reasonably: it prefers to look at the social locations of, and the access to tacit knowledge enjoyed by, groups of non-technical experts and even, in the case of DSD, technical experts, and it prefers to argue, where possible, from real-world examples.

We have argued that ubiquitous transmuted expertise is not very reliable and can be poorly applied. Thus, even in the case of our most reliable example, the Moon landings, for there to be a dispute there had to be a group of citizens who believed they were a fake; likewise, a substantial proportion of UK citizens, supported by journalists who were certainly in a position to do better, failed to make the best judgment in the case of the MMR vaccine; and so on. Generally, slightly improved assessments can be made by those who are closer to the matter in question, hence the value of Local Discrimination. Domain Specific Discrimination is an example of a transmuted expertise that operates within the heartland of science and should be the most reliable of the types of transmuted expertise. Finally, we have suggested that specialists, such as sociologists, have expertise in respect of social behaviour in the sciences that can be transmuted into technical judgments; we have called this kind of transmuted expertise “Sociological Discrimination”. Such specialists are experts: they should ‘know what they are talking about’ in respect of the social nature of science.

There is, then, an escape from uncritical reliance on expert authority: non-technical experts can make expert technical judgments, but it is not always easy. It should certainly not be taken to lead toward what we have elsewhere called “technological populism”, the idea that ordinary people’s views on technological matters are to be given equal weight to those of experts (see Collins et al. 2010). What has to be accepted, nevertheless, is that judgments made using transmuted expertise can never be more than the “best possible” judgments in the circumstances. Only science can afford the luxury of being right in the long term, and that is because science is directed toward truth rather than policy. The policy maker has, in general, to make decisions long before the scientific truth is established. The crucial point is that because a decision is only the best possible, it does not mean it is not much better than a decision based on no kind of expertise at all.