1 Introduction

Public loss of trust in science is a topic of much discussion, especially when scientific misconduct becomes evident. For example, in 2011, Diederik Stapel was suspended from Tilburg University after it became known that he had manipulated data and faked the results of various experiments. Investigations brought to light that more than 30 publications were based on fraudulent data, misconduct he has since admitted (Stapel 2012). His legal case received considerable attention, leading to a discussion about the effects that fraud in science might have on public trust in science. When confronted with news about plagiarism or the manipulation of data, it becomes obvious to the public that the everyday practice of scientific work and science communication is based on trust (Vetenskap and Allmanhet 2015).

But trust is not only an issue when researchers abandon the rules of scientific conduct (for example, by deliberately faking scientific results). Such cases are actually exceptional. In reality, trust is inevitable and essential for scientists doing science (trust within science) as well as for the general public dealing with science-related topics in their everyday lives. In other words, trust is critical for ‘insiders’ as well as ‘outsiders’ (Feinstein 2011).

In the following chapter, we will first elaborate on why trust is essential in the context of science and why it is therefore an important topic for empirical research about science and its public understanding. Second, we will provide an overview of the state of public trust in science as it has been depicted in surveys on public attitudes toward science. Referring to these findings, we will then discuss the specificities of ‘trust’ in the context of public understanding of science. In doing so, we will start from the same general understanding of trust as it applies to trust research in other domains and, hence, to most of the other chapters in this volume. We will then specify and extend this research toward epistemic trust as a conceptual framework and point to the special importance of epistemic trust in a digitized knowledge society.

2 Why Trust is Essential in the Context of Science

‘Trust’ is most typically described as a kind of assumption about others. Whenever people are dependent on agents (persons, organizations) and willing to accept the risks that come along with this dependency, they put trust in these agents (the trustees) (see Blöbaum 2016). This general notion of trust implies that there are degrees of freedom for the trustor (for example, he/she does not have to purchase products offered by the trustee) and that there is some risk that he/she cannot control (for example, the trustor cannot be sure whether the product works as promised, and if the product fails, this would be detrimental to the goals of the trustor). This has been described as the willingness to be vulnerable to another person (e.g., Mayer et al. 1995). However, this core idea of ‘trust’ has to be refined and specified when it comes to knowledge. In this case, the good that the trustee provides to the trustor is ‘knowledge’, and the risk to the trustor is his/her vulnerability to a lack of truth or validity of that knowledge.

The Paradox of Trust in Science

Trust in Science involves a paradox, as Science has evolved as a means to question and readdress established ‘facts’. As such, the very idea of modern Science is to know the truth instead of just trusting what you are told. This is based on the Enlightenment idea that everybody should be able to overcome the vulnerability of not being told ‘the truth’ by empowering their own capabilities to think and to know. The emergence of modern Science in the sixteenth century was based on the idea that the truth of knowledge could be established by epistemic acts (by seeing, hearing, or forming correct impressions about nature) and by rational conclusions based on such impressions. Thus, Science is no longer based on faith in assertions that have been put forward by authorities. As Sperber et al. put it, “Historically, this individualistic stance could be seen as a reaction against the pervasive role in Scholasticism of arguments from authority” (2010, p. 361). Understanding and doing Science is a way of controlling the risk of not getting ‘the truth’.

Referring back to the core notion of trust described above, this might at first glance imply that trust conflicts with Science. The legacy of modern Science, as a way of knowing and learning about the world that does not rely on trust, might be the reason why many seminal accounts in the philosophy of Science do not even mention the concept of trust. Only a small group of philosophers and historians of Science have emphasized the role of trust in Science, mostly with regard to doing Science and to communication among scientists (and to a lesser degree to Science communication with the public). For example, in his seminal paper, Hardwig (1991) points to proofs in modern mathematics that cannot be checked personally by most mathematicians. Thus, they must trust the claims of those who have produced the proofs. Obviously, interdisciplinary work is deeply reliant on cooperation and trust between experts of different knowledge areas (Origgi 2010; Whyte and Crease 2010; Wilholt 2013). Even within the same research team, trust in the knowledge of others is essential for everyday scientific practice. “Whenever the relevant evidence becomes too extensive or too complex for any one person to gather it all” (Hardwig 1991, p. 698), it is more advantageous to rely on the testimony of others than on one’s own empirical evidence (Chinn et al. 2011).

The Division of Cognitive Labor

The perpetual construction of new knowledge and the discovery of new scientific phenomena not only lead to individuals gaining ever more specialized expertise, but also transform scientific disciplines as new sub-disciplines evolve. The need for trust within as well as in Science is an immediate result of this specialization of knowledge (areas) (Barber 1987; Rolin 2002). In this respect, the constitutive role of trust for doing, using and understanding Science follows immediately from the division of cognitive labor (Bromme et al. 2010; Bromme and Thomm 2015; Keil 2010; Keil et al. 2008). Theoretical approaches to trust in Science point out that trust is a constituent of our contemporary society, in which specialization and complexity are ubiquitous (Barber 1987; Rolin 2002). However, the body of scientific knowledge is not the only thing that is continuously growing: so is the public’s need for information about scientific topics. For example, in the realm of citizenship, the public is continuously challenged to form opinions about Science-related issues (e.g., “do I oppose nuclear energy?”) or even to act upon those opinions (e.g., “should I then invest in solar power for my own home?”).

The Complexity of Scientific Knowledge

As mentioned above, trust is not only essential for scientists (trust within Science), but also, and even more so, for the general public, the ‘outsiders’ (trust in Science). The emergence of Science in modern history has also been a history of separating a scientific understanding from an everyday understanding of the world (Wolpert 1992). Scientific knowledge has become more and more abstract, tied to cognitive artifacts (like mathematical models) and technological tools (like microscopes or MRI devices) for the production of data (Daston and Galison 2007). The division of cognitive labor between scientists and the general public raises barriers to the Public Understanding of Science (Bromme and Goldman 2014). Most scientific phenomena defy firsthand experience, as observation is impossible (e.g., oxygen, electrons, genes) without means that are only accessible to scientists. Furthermore, fully comprehending information about such phenomena requires specialized knowledge possessed by only a few experts (e.g., understanding why oxygen is essential in some concentrations but toxic in others). Also, many topics of everyday interest to the public (‘socio-scientific’ topics) cannot be fully understood without deep scientific knowledge. Consider discussions about nuclear energy, climate change or stem cells, to name just a few. Most people possess only a bounded understanding of the underlying Science of such topics (while political or social consequences may be more easily accessible to them). Thus, we can conclude from the division of cognitive labor, going hand in hand with the complexity of our knowledge society, that a full Public Understanding of Science is unfeasible. Instead, public trust in Science is essential.

Easy Access to Science-Related Information in Digitized Societies

The need for trust in Science has increased with digitalization (especially the Internet). Nowadays, people in many countries (except those in which economic, educational or political conditions prevent it) have easy access to scientific information via the Internet. Young people especially search online to find out about scientific issues and research information (Anderson et al. 2010). In the U.S. in 2012 (Besley 2014), around 42 % of survey respondents mentioned the Internet as their primary source of information about Science and technology, replacing TV, which was named by only 32 % of respondents. Of the respondents who named the Internet as their number one source of science-based information, 63 % said that they actually read online newspapers, while less than 10 % mentioned blogs. When searching for information about specific scientific issues, 63 % of Americans make use of online sources (Besley 2014). The Wellcome Trust Monitor found that in the United Kingdom, 63 % of adults and 67 % of young people choose to search the Internet when actively looking for scientific information (Wellcome Trust 2013). Similar results have been found across Europe (European Commission 2013). This concurs with a disintermediation through digital technologies (Eysenbach 2008): compared to traditional news media, there are far fewer gatekeepers online, such as journalists or editors. This is due to the very low costs of publishing and the lack of quality control when compared to traditional publications, which employ peer review (Eysenbach 2008). Hence, recipients must take it upon themselves to gather and evaluate information about a (scientific) topic. One can imagine how difficult it is to find reliable information among thousands of potentially relevant websites published by an equally great number of sources of varying trustworthiness. Consequently, for laypeople in a digitized world who are confronted with an overwhelming amount of information, it is essential to be able to judge whom to believe. That is, judgments about who is a trustworthy source of information and who may provide relevant information about an issue are crucial (Bromme et al. 2010; Bromme et al. 2015; Hendriks et al. 2015a).

3 How Much Does the Public Trust Science?

As argued above, a full understanding even of those segments of scientific knowledge that are relevant for our lives is unfeasible. In consequence, it is crucial that the public trusts Science. In fact, various representative research attempts that claim to study the general Public's Understanding of Science focus instead on attitudes, behaviors and activities of the general public—issues that are more closely related to people’s trust in Science than to their understanding of it.

Therefore, in the following section we will review such recent representative survey studies, as they give various hints about how much the public trusts Science. We have taken the liberty of subsuming all items (printed in italics) that point to public trust in Science, and have subdivided them into items that focus on the public’s general appreciation of Science, general trust in Science, and trust in Science in the context of specific topics. We chose surveys from the U.S. and from several European countries (see appendix for detailed survey information). Data on the U.S. public’s views on Science are presented in the National Science Board’s report on attitudes about Science and technology in its Science and Engineering Indicators (Besley 2014) and in the Pew Research Center’s survey on Public and Scientists’ Views on Science and Society (Pew Research Center 2015). Data from representative samples of all 27 European Union member states are provided by a Special Eurobarometer: Responsible Research and Innovation (RRI), Science and Technology (European Commission 2013). Furthermore, we include three European national surveys: the Ipsos MORI Public Attitudes to Science survey from the United Kingdom (Castell et al. 2014), the German Wissenschaftsbarometer (Wissenschaft im Dialog [WiD] 2014) and the Swedish VA Barometer (Vetenskap and Allmanhet 2015).

The General Appreciation of Science

In all surveys, when asked about the outcomes of Science, the public holds a rather positive and optimistic view of Science in general. Respondents of all surveys mostly agree with the statement science makes life easier: namely, 79 % of American (Pew Research Center 2015), 81 % of British (Castell et al. 2014), 66 % of European (asked whether science makes life easier, more comfortable and healthier) (European Commission 2013) and 74 % of Swedish (asked whether scientific developments in the last 10–20 years have made life easier for ordinary people) respondents agree (Vetenskap and Allmanhet 2015). Moreover, most Americans (90 %) have a great deal or some confidence in the leaders of the scientific community; only the military is trusted more (Besley 2014). In addition, 70 % of Americans (Besley 2014) and 55 % of British (Castell et al. 2014) respondents agree that the benefits of science outweigh the harmful effects. The German Wissenschaftsbarometer (Wissenschaft im Dialog [WiD] 2014) asked whether science is more harmful than beneficial, and 68 % of respondents reject the statement. Furthermore, 77 % of Europeans agree that science has a positive influence on society, and among them 17 % regard the influence as very positive. In contrast, only 10 % of respondents to this question think that science has a negative impact on society (European Commission 2013). In the United Kingdom, 90 % of respondents agree (among them, 46 % strongly agree) that scientists make a valuable contribution to society (Castell et al. 2014).

Also, regarding scientists and their actions within the realm of Science, it seems that the public mainly perceives scientists to have good intentions. In the U.S., 86 % of respondents think that scientists work for the good of humanity, and the same proportion of respondents think that scientists work on things that will make life better for the average person (Besley 2014). In Europe, 82 % of respondents think that scientists working at a university behave responsibly toward society by paying attention to the impact of their science- or technology-related activities. Only 66 % of respondents agreed with this statement when it concerned scientists working in private company laboratories (European Commission 2013). In the United Kingdom, scientists’ intentions are judged with a bit more reservation. Of respondents, 83 % agree (among them, 27 % strongly agree) that scientists want to make life better for the average person. However, 27 % of respondents agree with the statement that science benefits the rich more than the poor, while 48 % of respondents disagree with this statement (Castell et al. 2014).

General Trust in Science

In current surveys, only a few items focus directly on the trust people put in scientific knowledge claims. For example, 52 % of British respondents agree that the information they hear about science is generally true, elaborating that they had no reason to doubt it (40 %) or that they believed other scientists had checked it (15 %). Regarding scientists’ assumed competence, in the Eurobarometer survey, 66 % of respondents agree that university scientists are qualified to give explanations about the impact of scientific and technological developments on society, more than for any other group. In contrast, scientists working in private company laboratories are regarded as qualified in this respect by only 35 % of respondents (European Commission 2013). Also, there are some data suggesting that respondents not only regard scientists as able to inform and advise society, but also as having the integrity to make truthful claims. Regarding the attitudes of the public about the honesty, ethicality and integrity of scientists, only data from Europe are available. In the Eurobarometer, 54 % of respondents are concerned that the application of science and technology can threaten human rights, and 61 % think that researchers should not be allowed to violate fundamental rights and moral principles to make new discoveries; conversely, 29 % believe that researchers should be allowed to do this in some special cases. In addition, 84 % of respondents believe that all researchers should receive mandatory training on scientific research ethics (like privacy, animal welfare, etc.), and 81 % agree that scientists should be obliged to declare possible conflicts of interest, such as sources of funding, when advising public authorities (European Commission 2013). In the United Kingdom, respondents are mostly convinced that they can trust university scientists and researchers to follow the rules and regulations of their profession (90 % agreement); again, agreement was higher for university scientists and researchers than for all other groups. For example, only 60 % of respondents agree that scientists working for private companies follow the rules and regulations of their professions. Furthermore, scientists are regarded as honest by 71 % of British respondents. However, when asked if scientists adjust their findings to get the answers they want, 35 % of British respondents agree, while 34 % disagree (Castell et al. 2014).

By and large, the public seems to be very positive about the benefits that Science has to offer society, and, in line with these expectations, the public mostly trusts scientists to produce reliable knowledge of good quality, unbiased and adhering to scientific principles. But this is not blind trust, as answers also reflect a kind of suspicion about vested interests when research is funded by private companies [these findings are in line with earlier work by Critchley (2008)]. Furthermore, substantial proportions of the public consider the possibility that scientists might not adhere to the standards of objectivity.

Trust in Science in the Context of Specific Topics

In contrast to its generally fairly trustful view of Science, when specific topics are considered, the public varies widely in its amount of trust in Science. The National Science Board’s Science and Engineering Indicators show that 25 % of respondents consider genetically modified foods to be very or extremely dangerous, and 57 % are in favor or strongly in favor of nuclear energy. In the U.S., nanotechnology is not very controversial, as only 11 % of respondents think the harms outweigh the benefits, while 43 % hold no opinion (Besley 2014). The Pew Research Center’s survey also shows that U.S. adults are skeptical toward some scientific topics. For example, only 37 % of respondents agree it is safe to eat genetically modified foods, and only 28 % think it is safe to eat food grown with pesticides. Also in this survey, 50 % of U.S. adults agree that climate change is due to human activity, 68 % favor the use of bioengineered fuel and 45 % favor nuclear power plants. At the same time, 79 % of U.S. adults agree that science has a positive impact on the quality of health care, and 62 % believe in science’s positive effect on food (Pew Research Center 2015). While the Eurobarometer holds no data on specific scientific topics, other data from Europe can provide some insights. The United Kingdom’s Ipsos MORI survey again reports that for specific scientific topics, the public’s trust is inconsistent. Asked whether the benefits outweigh the risks, 84 % of respondents regarded the benefits as dominant for vaccination, and 66 % said the same for renewable energy. Also, 57 % of the British adults questioned agree that benefits outweigh the harms regarding stem cell research, while only 38 % agree for nanotechnology. For only a few topics do more than 10 % agree that the harms actually outweigh the benefits. For example, regarding nuclear power, 28 % agree that the harms outweigh the benefits, while 48 % agree that the benefits outweigh the harms. For genetically modified crops, 28 % see the harms as dominant, while 36 % believe the benefits outweigh the harms (Castell et al. 2014). In the German Wissenschaftsbarometer, respondents were asked how much they trust scientists’ statements regarding specific scientific topics. For statements regarding renewable energies, 44 % of participants have trustful attitudes, and for statements regarding the genesis of the universe, 40 % have trustful attitudes. Statements regarding climate change are trusted by 37 % of Germans, but for genetically modified crops, only 16 % of participants regard such scientific statements as trustworthy (Wissenschaft im Dialog [WiD] 2014).

Mingling Trust in Science and Personal Stances About Specific Topics

The above-mentioned survey questions exemplify that the public seems to trust Science less when considering specific topics (nuclear energy, genetically modified food) than when asked about Science in general. This might be due to the following reasons:

Firstly, many of the survey questions confound the personal stance about a certain Science-based issue or development with the issue of trust in Science. For example, an item like “Do you think it is generally safe or unsafe to eat genetically modified foods?” (Pew Research Center 2015, p. 92) is primarily an item on personal beliefs and positions about this kind of food. It does not distinctly measure trust in the underlying Science, albeit this could also influence a participant’s response. To tease apart both aspects, it is necessary to reflect on the difference between a personal position about a topic and personal trust in the Science that produces knowledge about that topic. For example, the recent German Wissenschaftsbarometer asked: “How much do you trust statements of scientists regarding the topic renewable energies [author’s translation]” (Wissenschaft im Dialog [WiD] 2014, p. 14). Participants’ responses could only be viewed as statements about their trust or distrust in the scientific knowledge on this topic if Science had provided a clear answer about the topic.

Secondly, surveys typically focus only on science- or technology-related topics that are of public interest and, as a result, are controversially discussed in the mass media. Participants’ responses might then reflect their degree of awareness of the very fact that a topic is controversial, and this might be confounded with their personal stance on the topic and the underlying Science. We doubt that this kind of confounding can be prevented by using the following type of statement: “From what you know or have heard about renewable energy, which of these statements, if any, most closely reflects your own opinion?—saying benefits outweigh the risks/saying risks outweigh the benefits” (Castell et al. 2014, p. 35).

Both of the above reasons why the public has a high general level of ‘default’ trust in Science yet displays much more varied trust when considering specific topics (including clear distrust by some subsamples) not only point to methodological challenges of survey research, but also imply that trust in Science is inherently ‘confounded’ with people’s perspectives on the topic of interest: When a science-related topic is of interest for segments of the public, these sub-populations develop personal stances related to the topic, and these stances modify their ‘default’ trust in Science. In other words, trust in Science develops and changes in light of the public’s views about specific scientific topics.

4 From Trust to Epistemic Trust

Science (and science-based technology) is essential for life in modern societies, and trust is an essential component of how the public copes with Science; consequently, many recent representative surveys have tackled this topic. However, we have also described that there is some tension between trust and the core idea that Science is a means for freeing people from relying solely on authorities to understand the world. At the beginning of this chapter, we emphasized that, when it comes to Science, the good that is provided by the trustee to the trustor is ‘knowledge’, and the risk to the trustor is that he/she is vulnerable to a lack of truth or validity of that knowledge. In the following section, we will aim for a theoretical elaboration of what we will call ‘epistemic trust’, starting with the influential Integrative Model of Organizational Trust (Mayer et al. 1995), which is also discussed in most of the other chapters of this volume. From this, we will develop a definition of epistemic trust, which draws on the work of Origgi (2004, 2014) and Sperber et al. (2010).

Trust: A Rough Approximation

Trust is defined by a dependence of a trusting actor on the trusted person or entity (Tseng and Fogg 1999) combined with a vulnerability to risk (Mayer et al. 1995). In consequence, the question arises of what makes a person (the trustee) trustworthy to an interlocutor (the trustor). Aristotle defined three major character properties a person should possess to be persuasive: “(1) practical intelligence […], (2) a virtuous character, and (3) good will” (Rapp 2010). Later, Mayer et al. (1995) summarized the literature on the constituents of interpersonal trust and the extensive work on the credibility of sources (e.g., Hovland et al. 1953) and also arrived at three components that are believed to make up the trustworthiness of a trustee: A trustee should possess (1) ability, the domain-specific skills and competencies that enable the trustee to have influence within that domain, (2) benevolence, meaning that she acts independently of an egocentric profit motive and in the interest of the trustor, and (3) integrity, i.e., she should act according to a set of rules or principles acceptable to the trustor. According to this seminal model (Mayer et al. 1995), the three dimensions are related but separable. Furthermore, the trustee is not the only one who must possess certain characteristics; in order to give trust, the trustor must hold an attitude of general willingness to trust others, a propensity to trust.
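To make the structure of this model concrete, the following minimal sketch represents the three trustworthiness components and the trustor's propensity as a simple data structure with a toy aggregation rule. It is purely illustrative: Mayer et al. (1995) propose a verbal model, and the 0–1 scale, the averaging of the dimensions, and the weight given to propensity below are our own assumptions, not part of that model.

```python
from dataclasses import dataclass

@dataclass
class PerceivedTrustworthiness:
    """A trustor's perception of a trustee on the three dimensions of
    Mayer et al. (1995); the 0-1 ratings are illustrative, not an
    empirical measurement scale."""
    ability: float      # domain-specific skills and competencies
    benevolence: float  # acting in the trustor's interest, not out of egocentric motives
    integrity: float    # adherence to principles the trustor finds acceptable

def willingness_to_trust(perception: PerceivedTrustworthiness,
                         propensity: float) -> float:
    """Toy aggregation: average the three related but separable dimensions,
    then blend in the trustor's general propensity to trust. The averaging
    and the 80/20 weighting are assumptions made purely for illustration."""
    perceived = (perception.ability + perception.benevolence
                 + perception.integrity) / 3
    return 0.8 * perceived + 0.2 * propensity

# Example: a highly competent trustee whose benevolence is in doubt
judgment = willingness_to_trust(
    PerceivedTrustworthiness(ability=0.9, benevolence=0.4, integrity=0.7),
    propensity=0.5,
)
print(f"Willingness to trust: {judgment:.2f}")  # -> 0.63
```

Note how the sketch treats the dimensions as separable inputs: a deficit on one dimension (here, benevolence) lowers the overall judgment even when ability is high.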

Epistemic Trust

In the following section, we will use the term ‘epistemic trust’ to describe trust in knowledge that has been produced or provided by scientists. Such epistemic trust is unavoidably needed to gain knowledge (Resnik 2011; Sperber et al. 2010). If someone does not have the chance to sense or learn something first-hand, she must defer to the testimony of first-hand sources (Hardwig 1991; Harris 2012; Schwab 2008).

Along these lines, researchers in developmental psychology have proposed that children are very good at identifying whom to trust for gathering knowledge (Harris 2012; Keil et al. 2008; Mills 2013). The best evidence exists for knowledgeability (or expertise) as a main constituent of young children’s trust in sources of knowledge. For example, when learning new object names, 3-year-olds prefer informants who have previously displayed accuracy in naming objects (Koenig and Harris 2008). By the age of four, children remember previously accurate informants and prefer to trust them over inaccurate, but familiar, informants (Corriveau and Harris 2009). Children are also sensitive to a source’s benevolence: At the age of four, children can infer the intent of an informant either from the informant’s behavior or from being told about their moral character (Mascaro and Sperber 2009), and they place selective trust in the most benevolent sources. Thus, young children use informant expertise as well as helpfulness to make trustworthiness judgments (Shafto et al. 2012), and sometimes the benevolence of an informant even supersedes her expertise (Landrum et al. 2013). Furthermore, when deciding whom to trust, kindergarteners take into consideration an informant’s self-interest (Mills and Keil 2005). In addition, recent studies show that children as young as four take informant honesty (referring to a source’s integrity) into consideration when deciding to trust an information source (Lane et al. 2013; Vanderbilt et al. 2012).

In the same vein, when it comes to adults and their trust in Science, placing epistemic trust in someone means trusting her as a provider of information (Wilholt 2013). To minimize the risk of receiving wrong information, epistemic trust relies on evidence that an interlocutor is trustworthy (Resnik 2011) and on some vigilance to avoid the risk of being misinformed or cheated (Sperber et al. 2010). Hence, the notion of epistemic trust is not built on receivers who uncritically accept the authority of experts. In her work on the relations between epistemology and trust, Origgi (2004, 2012) argued that epistemic trust entails (1) a default trust, meaning that people are generally trustful toward others as a predisposition for communication and cooperation, laying the groundwork for people to defer to the knowledge of others, and (2) a vigilant trust, which includes cognitive mechanisms that allow people to make rather fine-grained ascriptions of trustworthiness before accepting what others say. From Mayer et al. (1995) as well as from the research on children’s trust in sources of knowledge, we can identify which features of a trustee (the source of science-related information) might be processed within such cognitive mechanisms, leading to the ascription of trustworthiness. But the Mayer et al. (1995) model only roughly specifies these features. Thus, for a conceptual and an empirical analysis of the emergence of trust in Science and its communication, we must reconsider and specify them. As has been argued before, laypeople might take into account an expert’s expertise, benevolence and integrity when deciding whether to believe his/her statements on a science-related issue.

Thus, these components describe the features of experts that determine whether recipients will depend on and defer to them when the recipients’ own resources are limited: First, a layperson should trust someone who is an expert because she is knowledgeable (Lane et al. 2014); she possesses expertise. Expertise refers to someone’s amount of knowledge and skill, but more than sheer quantity is important: the knowledge and skill must also be relevant to the issue at hand. In other words, the dimension of expertise also encompasses the aspect of pertinence (Bromme and Thomm 2015). Second, an expert should be trusted when a layperson believes her to have a reliable belief-forming process (Schwab 2008; Wilholt 2013) and to follow the rules of her profession (Barber 1987; Cummings 2014). These factors make up her perceived integrity. Third, an expert is considered trustworthy if she offers advice or positive applications for the trustor or (more generally) for the good of society (Resnik 2011; Whyte and Crease 2010); that is, she must act with benevolence. Furthermore, when a layperson considers trusting an expert, the layperson’s general propensity to trust can be assumed to play a role as well. One may assume that people who display a high trust in Science may be more prone to rely on experts when finding out about a science-related issue (Anderson et al. 2011).

It is quite important for recipients not only to be able to identify speakers who actually possess relevant expertise, but also to critically judge the intentions of such speakers. In other words, recipients must be able to vigilantly identify sources whose intentions might lead to a loss of benevolence or integrity. For example, due to vested interests, scientific evidence might be distorted by pseudo-evidence produced by industry or policy stakeholders (e.g., evidence about smoking and climate change; Lewandowsky et al. 2012). From this, we can conclude that trust in scientists is based not only on features that are indicative of the epistemic quality of their work (in a narrow sense, with regard to the use of reliable processes of knowledge acquisition), but also on their moral integrity (Barber 1987; Hardwig 1991) as well as the usefulness of their work for the benefit of society (Resnik 2011).

Some empirical evidence supports that these three dimensions—expertise, integrity, and benevolence—also come into play in adults’ trust in knowledge that has been produced or provided by scientists. In three studies, we have shown with factor analyses (exploratory and confirmatory) that when laypeople judge the epistemic trustworthiness of scientists who provide science-based information, they indeed assess the scientists’ expertise, integrity, and benevolence (Hendriks et al. 2015a). Furthermore, in an experimental study in which we varied (fictitious) scientists’ characteristics relating to those three dimensions, we showed that when making these epistemic trustworthiness judgments, laypeople consider all three of these dimensions in a differentiated way, again indicating that the dimensions, albeit interrelated, are clearly distinct from each other (Hendriks et al. 2015a).
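For readers unfamiliar with this kind of analysis, the following sketch illustrates the logic of an exploratory factor analysis on rating data. Everything in it is simulated and hypothetical: the nine items, the sample size, and the loading structure are invented for the example and do not reproduce the instrument or data of Hendriks et al. (2015a).

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 300  # hypothetical number of respondents

# Simulate three correlated latent dimensions
# (expertise, integrity, benevolence)
latent = rng.multivariate_normal(
    mean=[0.0, 0.0, 0.0],
    cov=[[1.0, 0.4, 0.4],
         [0.4, 1.0, 0.4],
         [0.4, 0.4, 1.0]],
    size=n,
)

# Each latent dimension drives three observed rating items plus noise,
# mimicking an invented nine-item trustworthiness questionnaire
loadings = np.zeros((9, 3))
loadings[0:3, 0] = 0.8  # items 1-3 tap expertise
loadings[3:6, 1] = 0.8  # items 4-6 tap integrity
loadings[6:9, 2] = 0.8  # items 7-9 tap benevolence
items = latent @ loadings.T + 0.4 * rng.standard_normal((n, 9))

# Exploratory factor analysis with three factors; with real data one
# would follow up with a confirmatory model to test the structure
fa = FactorAnalysis(n_components=3, rotation="varimax")
fa.fit(items)
print(np.round(fa.components_.T, 2))  # 9 x 3 matrix of item loadings
```

On such simulated data, the rotated loading matrix shows each block of three items loading highly on one factor and near zero on the others, which is the pattern of ‘related but separable’ dimensions that the factor-analytic argument rests on.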

Also, qualitative data show that when laypeople are asked to make trust evaluations about a scientific expert, they spontaneously refer to the scientist’s expertise, objectivity or work ethic, and potential interests that conflict with those of the public (Cummings 2014). Furthermore, Peters et al. (1997) showed that when risks are communicated to the public, laypeople again consider an organization’s expertise, integrity and benevolence. They also found that laypeople’s trustworthiness judgments about industry (which is believed to care only about profits, not about public welfare) improve the most when industry conveys an impression of concern and care about society; the same is true for citizen organizations that convey an impression of competence and knowledgeability. Thus, laypeople seem to be especially vigilant when the trustee defies the trustor’s expectations (in this special case, negative stereotypes). This study shows that giving epistemic trust is based not only on the characteristics of the trustee (the source of the science-related information), but that these features are also weighed against the more general expectations a trustor has about a specific trustee. This is only one example of further conditions that constrain how source characteristics (expertise, integrity, and benevolence) affect judgments of epistemic trust. Expectations about a trustee’s intentions are also highly relevant in Science communication.

There is some evidence that the way in which scientific results are communicated may matter for laypeople’s assessment of a scientist’s communicative intentions. In an experimental study, we investigated whether trustworthiness perceptions were affected by (a) whether a flaw (in this case, an overestimation of a study’s generalizability) was disclosed, and (b) who disclosed it (Hendriks et al. 2015b). We found that, on the one hand, participants discounted a scientist’s expertise if the flaw was mentioned in a comment by an unaffiliated scientist (in contrast to no mention of the flaw). On the other hand, the scientist’s integrity and benevolence were rated higher when the scientist himself disclosed the flaw (in contrast to when it was disclosed by the unaffiliated scientist). With these results, we showed that a scientist’s trustworthiness is judged in close relation to the evidence available regarding the characteristics of expertise, integrity and benevolence. By actively putting out such evidence (e.g., disclosing possible flaws themselves), scientists can improve the public’s judgments of their trustworthiness. Related results have been found when scientists themselves (in contrast to scientists not affiliated with the research) seem to be responsible for the communication of caveats or uncertainties in their results (Jensen 2008).

Because the above-mentioned characteristics apply to judgments about individual scientists as well as whole scientific organizations (for example, research institutes, universities or companies that do research; Peters et al. 1997), it may well be assumed that the perception of expertise, integrity and benevolence influences trust in Science in general.

Interestingly, Science is a social as well as a cognitive entity. Science as a social entity refers to the people who produce scientific insights (i.e., who do Science) and to the organizations they work for, while Science as a cognitive entity refers to the continuously developing body of knowledge that evolves from doing Science. As an immediate consequence of this dual nature, it is inevitable that the assumed trustworthiness of Science also depends on the public’s appreciation of the knowledge claims that are produced by Science. In other words, the assumed trustworthiness of scientists depends on the assumptions people already hold about what is true knowledge, as well as on the new knowledge scientists provide. While discussing the survey result that the public seems to be rather skeptical when it comes to trust in Science regarding specific topics, we have already pointed to the inherent confounding between people’s personal stances on the topics being researched and their trust in the Science that provides new insights into these topics.

This close entanglement between what is said (the content of Science) and who has said it (the producers of Science) requires us to suspend the categorical distinction between judgments about believability (of the knowledge claims provided by scientists) and trustworthiness (of the scientists providing them). Of course, when researching public trust in Science it is possible to scrutinize what laypeople think about scientists, as well as what they think about the content of science-based assertions. However, it is very likely that the answers provided will mostly mingle both aspects, being determined by both. Given that modern sociology of Science also conceives the truth of scientific knowledge as being dependent on its underlying evidence as well as on the regulated discourse about this evidence (Longino 2002), the public might be on the right track in considering what is said together with who said it when placing trust in Science.