1 Introduction

In this paper I argue that the objectivity of persons is best understood in terms of intellectual virtue, the telos of which is an enduring commitment to salient and accurate information about reality. On this view, an objective reasoner is one we can trust to manage her perspectives, beliefs, emotions, biases, and responses to evidence in an intellectually virtuous manner. We can be confident that she will exercise intellectual carefulness, openmindedness, fairmindedness, curiosity, perseverance, and other intellectual virtues in her reasoning.

I argue further that the cultivation and exercise of such virtues is a social phenomenon, and challenge highly individualistic notions of intellectual character and epistemic autonomy. Community intellectual virtue is necessary for the development of personal objectivity. An advantage of conceptualizing objectivity in terms of community intellectual virtue is that it better equips us to address failures of objectivity in scientific research, science policy, and public debates about science. Normally, the blame for such failures is placed on the politicization of science, communication problems, scientific illiteracy, or industry propaganda. But responses within these frameworks often amount “to telling citizens why they should love science” (Brown 2009, 17). The difficulty is that no matter how much we love science or how well we communicate, confusion about what it is to be an objective, epistemically trustworthy person and what it is like to reason in objective, epistemically trustworthy communities will continue to undermine reasoned debate. Sorting out the virtue epistemological issues associated with objectivity therefore stands to help us better cultivate it in research, policy and public debate.

In the next section, I provide a working definition of objectivity in terms of intellectual virtue and then consider two problems for conceptualizing objectivity in virtue epistemic terms. The first is that accounts of objectivity tend to shift implicitly between the objectivity of persons and the objectivity of methods. The second is that accounts of biased reasoning in science and science policy often assume, but fail to explicate, virtue epistemological perspectives of objectivity. Addressing these problems will help emphasize important connections between objectivity and virtue. In the third section, I argue that to consider intellectual virtue adequately with regard to the objectivity of persons, we need to address its social epistemic dimensions. To support this view, I show that the objectivity of persons is deeply tied to epistemic trustworthiness, a social intellectual virtue. Finally, in the fourth section, I raise and respond to three objections to my argument.

2 Objectivity as Intellectual Virtue

Virtues are usually defined in philosophy as excellences of character. Linda Zagzebski, for example, defines a virtue as “a deep and enduring acquired excellence of a person, involving a characteristic motivation to produce a desired end and reliable success in bringing about that end” (1996, 137). The excellences virtue epistemologists consider fall into two general categories. Reliabilist virtue epistemologists stress perceptual and functional virtues such as good eyesight and good memory, whereas responsibilist virtue epistemologists focus on aspects of intellectual virtue analogous to moral virtue. Responsibilists are interested in virtues that we are more likely to hold others accountable for, such as openmindedness, intellectual courage, and intellectual honesty. While I focus on responsibilist virtues, reliabilist virtues clearly also contribute to our motivation for desirable epistemic ends and reliable success in achieving those ends.

Adapting Zagzebski’s definition of a virtue to the case of objectivity, we can define an objective person as one whose objectivity is a deep, enduring, and acquired excellence, involving a characteristic motivation to produce a desired end and reliable success in bringing about that end. In this case, there are both ultimate and proximate desired ends. The ultimate end of objectivity is the epistemic value of salient and accurate information about reality. This end helps explain why objectivity is often conflated with impersonal knowledge, truth, or even rationality itself. The proximate ends of objectivity concern subsidiary epistemic values. For example, Jason Baehr argues that objectivity promotes the proximate end of “consistency in evaluation” (2011, 21). Tara Smith claims that “fidelity to reality” is the “heart of objectivity,” which suggests that faithfulness to reality is another proximate end of objectivity (2004, 147). I consider the proximate end of objectivity along similar lines: it is to maintain an enduring commitment to salient and accurate information about reality. The objective person is one who has, among other things, an enduring commitment to salient and accurate information about reality.

But the idea that objectivity involves an enduring commitment says little about what the objectivity of persons actually is. Is objectivity a specific intellectual virtue, or is it related to intellectual virtue in some other way? Baehr treats objectivity as a specific intellectual virtue. Another possibility is that “objectivity” is a vague term that refers to the integrated exercise of various intellectual virtues. I am partial to the latter view; however, because little conceptual and normative work has been done on individual intellectual virtues (Riggs 2010), it would be premature to decide at present which of these two possibilities is best. Fortunately, either option is consistent with my argument that we should pay greater conceptual and normative attention to the objectivity of persons using the resources of virtue and social epistemologies. Moreover, my intention is not to create an ahistorical definition of the objectivity of persons in terms of intellectual virtue. As we come to learn more about our psychology, sociality, and rationality, our understanding of the relations between objectivity and virtue will likely improve.

One challenge for understanding objectivity in virtue epistemic terms, however, is that accounts of objectivity tend to shift between the objectivity of persons and the objectivity of claims, theories, and methodologies (Daston 1992; Daston and Galison 2007; Douglas 2009). Smith, for example, says that

objectivity is a deliberate commitment to keeping one’s beliefs grounded in reality by thinking logically. Objectivity is a method for human beings—who are fallible—to employ to discipline our thinking to help us gain an accurate understanding of the world (2004, 153).

In the first instance, objectivity is a personal virtue; in the second, a method. Similarly, while Heather Douglas focuses “on the objectivity of knowledge claims, and the processes that produce these claims, rather than the objectivity of persons” (2009, 116), some of her categories of objectivity clearly involve intellectual character. Detached objectivity “keeps one from wanting a particular outcome of inquiry too much, or from fearing another outcome to such an extent that one cannot see it” (122). And interactive objectivity requires that “[i]nstead of immediately assenting to an observation account, the participants are required to argue with each other, to ferret out the sources of their disagreements” (127). This requirement is reminiscent of the Stoic intellectual virtue of “non-precipitancy,” which is “a disposition not to assent in advance of cognition” (Sherman and White 2003, 48). It also characterizes objectivity in terms of persons who are capable of critical engagement and intellectual perseverance.

Daston and Galison are very mindful of the issue concerning conceptual shifts between personal objectivity and the objectivity of claims and methods. They say:

Understanding the history of scientific objectivity as part and parcel of the history of the scientific self has an unexpected payoff: what had originally struck us as an oddly moralizing tone in the scientific atlas makers’ accounts of how they had met the challenge of producing the most faithful images now made sense. If knowledge were independent of the knower, then it would indeed be puzzling to encounter admonitions, reproaches, and confessions pertaining to the character of the investigator strewn among descriptions of the character of an investigation. Why does an epistemology need an ethics? But if objectivity and other epistemic virtues were intertwined with the historically conditioned person of the inquirer, shaped by scientific practices that blurred into techniques of the self, moralized epistemology was just what one would expect (2007, 39).

Daston and Galison argue that, prior to the emergence of the concept of objectivity in the mid-nineteenth century, Enlightenment scientists involved in image creation held an epistemic virtue of “truth-to-nature.” To achieve truth-to-nature, careful observation, patience, good memory, and a “talent to extract the typical from the storehouse of natural particulars” were required of the investigator (58). Truth-to-nature presumed that variability existed in the objects of inquiry and that it was up to the investigator to make sense of it. By the mid-nineteenth century, however, variation in natural objects “shifted inward, to the multiple subjective viewpoints that shattered a single object into a kaleidoscope of images” and scientific image-making was now thought to require “a set of procedures that would, as it were, move nature to the page through a strict protocol, if not automatically” (121). Mechanical objectivity came to the fore and was intended to exclude the self from inquiry.

As mechanical objectivity was discovered to be insufficient for the creation of images, trained judgment assumed a greater role. Student training consisted of “internalized and calibrated standards for seeing, judging, evaluating, and arguing” and “the scientist of the twentieth century entered as an expert, with a trained eye that could perceive patterns where the novice saw confusion” (328). Trained judgment emphasizes honed perception, skill in recognition, and interpretation in image creation in order to exceed the objectivity made possible by mechanical procedures. Unlike the ethically virtuous “sage” who served as the ideal in nineteenth-century concepts of truth-to-nature, the scientist is now a trained expert in whom the emotional, ethical, and social dimensions of intellectual virtue are suppressed in favour of a quantitative and perceptual-interpretive understanding of judgment. The self returns to objectivity, but in suppressed form. We are left somewhere between the objectivity of persons and the objectivity of method.

A second challenge for understanding objectivity in virtue epistemic terms is that virtue epistemic concerns are generally left implicit in contemporary philosophical and historical discussions of science and science policy. For example, concerns about intellectual virtue underlie Don Howard’s (2009) analysis of weaknesses in public discourse about science and the failure of philosophers of science to participate in that discourse. One reason Howard thinks philosophy of science has been irrelevant to public discourse about science is that it erroneously treats the relationship between the mind of an individual scientist and the world as the most important epistemic relationship. This view, he argues, ignores social dimensions of scientific inquiry and emphasizes narrow technical problems. Instead of this, he calls for a philosophy of science that takes seriously motives, values, and sociality in science. He argues that the consideration of motives invites “healthy scepticism”; that reasonableness “is not modeled by inductive logic alone”; that philosophers of science need to engage questions of morality and justice within their epistemologies; and that philosophers of science should help us to “distinguish the crackpot from the scientist who is ‘thinking outside the box’” (205–207). Implicit in each of these claims are matters of intellectual virtue. Intellectual virtue is needed for good motives, healthy sceptical attitudes, and crackpot identification.

Questions of intellectual virtue and objectivity are also implicit in Naomi Oreskes and Erik Conway’s (2010) Merchants of Doubt, a fascinating history of a select group of American scientists, industries and think tanks who worked together to mislead the public about the harms of tobacco, secondhand smoke, acid rain, ozone depletion, global warming, and pesticides. One can read Oreskes and Conway’s history as a thoroughly documented account of intellectual vice. To mislead the public, the scientists and organizations involved made use of many fallacies of reasoning, including red herrings, straw arguments, poisoning the well, persuasive definitions, phony facts, and false histories. Particularly interesting is that those involved hijacked the language of objectivity and intellectual integrity in order to undermine objectivity and intellectual integrity. Significant issues of intellectual virtue are therefore implicit in Oreskes and Conway’s account of biased science and policymaking about science.

The implicit nature of intellectual virtue in such accounts indicates that it will take work to excavate and understand its roles. But the fact that virtue epistemological concerns about objectivity are present, and that there is conceptual movement between the objectivity of persons and methods, suggests that the objectivity of persons is a significant issue. Why, then, is there little explicit conceptual, normative, and empirical work on the relations between virtue and the objectivity of persons? There are several possible reasons. The first concerns the relation of subjectivity and detachment to objectivity. Subjectivity is typically regarded as objectivity’s opposite, and thus one way to increase objectivity is to “detach” oneself from the process and object of inquiry. As Daston and Galison point out, the view that objectivity involves “the suppression of some aspect of self” is commonly held (2007, 36). Because intellectual virtues are frequently assumed to be personal, that is, as part of the minds of individual scientists, they may be implicitly associated with subjectivity. And given this association, intellectual virtues might not be considered sufficiently robust to act as objectivity-enhancing standards of scientific reasoning.

Subjectivity, however, is not the vicious opposite of virtuous objectivity. And “suppression” of some aspect of the self poorly describes the subjective interrelationships of belief, desire, emotion, reasoning, and character involved in the production of objective research. On a virtue epistemological account, objective persons have the right kinds of subjective experience: they are self-aware, sufficiently self-critical, intellectually honest, able to manage their emotions well, and so on. The difficulty with tying subjective suppression or detachment to objectivity is that objective people are good at regulating inquiry and maintaining their commitment to salient and accurate information about reality in complex value-laden circumstances. Subjective detachment might actually signal weak value-management and self-regulation skills; the self detached from emotional, social and normative concerns will not be able to reason virtuously.

This interpretation of virtuous subjectivity is supported by Douglas’s view that social and ethical values are needed to set burdens of proof in scientific evaluation of evidence. She points to Richard Rudner’s argument that

[i]n accepting a hypothesis the scientist must make the decision that the evidence is sufficiently strong or that the probability is sufficiently high to warrant the acceptance of the hypothesis. Obviously, our decision regarding the evidence and respecting how strong is “strong enough,” is going to be a function of the importance, in the typically ethical sense, of making a mistake in accepting or rejecting the hypothesis (Rudner 1953, 2).

Rudner argues that the concept of scientific objectivity thus needs revision. He says that “[o]bjectivity for science lies at least in becoming precise about what value judgments are being and might have been made in a given inquiry” (6). Intellectually virtuous reasoners understand how to adjudicate between relevant and irrelevant values and when and where to use those values. Carefully regulating the role of values in science and being precise about our value judgments requires fairmindedness, intellectual humility, intellectual courage (especially for transformative criticism), open-mindedness, and resiliency. Intellectual virtue is essential to the process of assessing value judgments in relation to evidence. Thus, we need to move from thinking of epistemic subjectivity in terms of bias to thinking of it in terms of intellectual virtue. Doing so suggests a solution to the “double bind” identified by Oreskes and Conway wherein objectivity requires scientists to “keep aloof from contested issues” but this aloofness prevents them from helping others learn “what an objective view of the matter looks like” (2010, 264).

A second reason why there is little explicit work on intellectual virtues and objectivity concerns vagueness about the distinction between justified true belief and understanding. One criticism virtue epistemologists make about traditional epistemology is that it focuses on individual beliefs and tends to ignore contextual factors, such as virtues and values, that are required for understanding. Zagzebski says:

making the single belief state of a single person the locus of evaluation is too narrow. For one thing, it has led to the neglect of two epistemic values that have been very important in the history of philosophy: understanding and wisdom (1996, 2–3).

Without understanding, there is no way to distinguish between trivial and salient true beliefs. Salience is determined contextually through complex evaluative processes. Virtue epistemology, which is organized around the concept of understanding rather than knowledge as justified true belief, thus enables us to make distinctions between trivial and salient beliefs. And the ability to determine salience accurately is important for objectivity (Howes 2012).

The emphasis on understanding is part of the reason why virtue epistemologists shift the focus of epistemic analysis to agents. As Heather Battaly explains, virtue theories

take the epistemic virtues and vices—types of agent-evaluation—to be more fundamental than any type of belief-evaluation. Accordingly, virtue theories in epistemology define belief-evaluations—justification and knowledge—in terms of the epistemic virtues, rather than the other way around (2010, 2).

From this standpoint, the scientists who sought truth-to-nature and those who exercised trained judgment are quite alike. Each conceptualizes understanding through an emphasis on grasping order, relevance, and patterns that exceed the mere collection of true beliefs. In truth-to-nature, the natural philosopher must extract the typical from the various; in trained judgment, the scientist sees patterns emerging from the confusion. Each aligns with the virtue epistemological emphasis on understanding as opposed to traditional epistemology’s emphasis on justified true belief. Accounts of objectivity that focus on the objectivity of individual claims may thus be inherently inadequate. Instead of such “static evaluations,” Christopher Hookway argues, epistemology is better served by focusing on how character, habits, and reflection give us confidence in our inquiries and deliberations than by dwelling on puzzles about justified true belief or scepticism. He suggests we therefore address the evaluation and regulation of “activities of inquiry and deliberation” (2003, 193). Expanding the focus of epistemology may thus make issues of objectivity and intellectual virtue more visible.

A third reason that matters of intellectual character remain implicit in analyses of objectivity is simply that we lack tools for thinking about them. We lack these tools for a variety of reasons. The first is that we generally develop and exercise intellectual virtues tacitly. Nancy Daukas points out that

our epistemic activities and attitudes are ‘framed’ by a usually unarticulated, continually evolving ‘sense’ of the status of our (also continually evolving) epistemic character …and inflected by our continual, often tacit, ‘sense’ of the epistemic status of others… (2006, 112).

We are not always “self-consciously assessing whether we are accurately representing our epistemic competencies, or explicitly deliberating about whether or not to extend the epistemic principle of charity to others” (112). Moreover, we talk about intellectual virtue with an “impoverished and confused vocabulary” and we do not have “as firm or precise a prephilosophical grip … on the specifically intellectual virtues as we do on the ethical virtues” (Riggs 2010, 173, 174). As Roger Crisp observes,

there is no analogue to common-sense morality in epistemology. We do not, it might be suggested, set out explicitly to teach our children epistemological principles in the way that we teach them moral principles (2010, 33).

We might also avoid making detailed epistemic evaluations of others because we are unclear about the social and moral rules involved in doing so. As Nancy Sherman and Heath White explain:

we openly talk about people’s raw smartness or cleverness, their diligence or laziness, their conscientiousness or sloppiness. But do we freely talk about their zeal or cautiousness, their impetuousness or diffidence, their passion or lack of engagement? We think less so, and it probably has to do with the fact that we think that we are overstepping etiquette boundaries when we engage in these kinds of assessments (2003, 44).

Virtue-based evaluations might also make us feel socially or morally uncomfortable because of their potentially “self-centred” nature. As David Solomon points out, one of the principal objections to moral virtue is that “[i]nstead of my needing to be good in order to benefit others, I am required to be the sort of person who benefits others in order to be fulfilled myself. Virtue seems to be itself compromised by a kind of vanity or prissiness” (2003, 74). Some may worry that intellectual virtue is similarly compromised by vanity or prissiness. Intellectual virtues may also uncomfortably call to mind an elitism linked to the early modern idea that only “gentlemen of science” were epistemically trustworthy (Shapin and Schaffer 1989). We might also feel uncomfortable with the apparent self-centredness of intellectual virtue because it seems antithetical to objectivity, given objectivity’s historical conceptualization as the suppression of self.

Despite such concerns, however, there are good reasons to develop the social, moral and epistemic tools needed to examine openly the relations between intellectual virtue and objectivity. Our ability to cultivate and promote objectivity could increase substantially and there may be other epistemological benefits as well. For example, to help people succeed intellectually it appears to be better to draw attention to the effort they put in when they get something right (Moser et al. 2010) than to praise them for being “smart.” It turns out that identifying people as “smart” might actually make them less smart (Mueller and Dweck 1998). This suggests that evaluating the virtues and strategies people use to reach an answer is more important to epistemic success than focusing on whether the people involved are “smart” or not.

I have now argued that matters of intellectual virtue are close at hand in accounts of scientific objectivity and public controversies about science. Such accounts shift between the objectivity of persons, claims, and procedures, and often implicitly rely upon virtue epistemological concepts. I have also outlined several reasons why matters of intellectual virtue are generally submerged. While these problems are challenging, they also show that intellectual virtues hold promise for expanding our understanding of objectivity. With this in mind, I turn to another aspect of this virtue epistemological account of objectivity: its inherently social nature.

3 Community Intellectual Virtue: Objectivity and Epistemic Trustworthiness

When intellectual character is considered explicitly, it is usually in the context of individual achievement. We praise or blame individuals for intellectual honesty or dishonesty. We assume that the “acquisition of knowledge, while by no means a strictly solitary enterprise, is generally more solitary (or capable of being so) than the acquisition of moral goods” (Baehr 2010, 211). And intellectual virtues can themselves be thought of in individualistic terms: they are “constitutively good-making, counting as such towards our being good reasoners in this or that capacity” (Garcia 2003, 107). Virtue and social epistemologists argue, however, that we often overlook the social dimensions of epistemic well-being, that is, those aspects of well-being related to the knowledge that we have, our ability to discover and learn truths, and the reliability of our minds. In this section, then, I challenge the view that good intellectual character principally reflects individual achievement and argue for the sociality of intellectual virtue. Bringing these arguments together with research about the virtue of epistemic trustworthiness and the development of epistemic trust in children, I claim that our current educational, research and public institutions could do more to provide the ingredients needed for the development and promotion of objectivity.

There are two general reasons why virtue epistemologists think that “a social context is intrinsic to the nature of a virtue as traditionally understood” (Zagzebski 1996, 44). First, virtuous activity is determined contextually and the social roles of agents are an essential part of this context. Sarah Wright says:

virtue is a mean between extremes, and because the location of the mean depends on the social roles of the person who must act, there is no way to be a courageous person simpliciter; there is no mean in a vacuum. Similarly, there is no way to exhibit the epistemic virtues except within a social role. One is only courageous in the role of a bystander, or epistemically careful in the role of the doctor (2010, 106).

Social roles are also important because intellectual virtues are acquired in part by following the examples set by role models. Sherman and White argue that

[i]ntellectual virtue will itself involve the example following and habituation of moral virtue: inspiration by role models will be important as will be learning through critical practice the habits of careful reasoning, methodological argument, and assessment of data. We study modes of reasoning and research, but we also practise them and model them (2003, 39).

A second and related reason why virtue epistemologists emphasize sociality is that communities shape our cognitive character through epistemically virtuous or vicious relations. Christine McKinnon (2003) argues that we need to attend carefully to the fact that we form our cognitive selves in response to the ways others engage with our explanations, justifications, and arguments. The development of cognitive character also requires that we successfully manage our emotions and desires, and this too depends on characteristics of our epistemic communities. Amy Coplan argues that to achieve good cognitive character we must train

our emotions so that they can be conditioned, through practice and experience, to track appropriate things. Our education, including the stories we are told and the music we listen to, our environment, and the company we keep all matter greatly … since they can prevent or encourage virtue … We must take more seriously the facts of our embodiedness and sociality, and come to terms with the ways in which our sensory experiences and social interactions influence our emotions, and thus our behavior, our thoughts, and our values (2010, 148).

The sociality of intellectual virtue should not, however, be taken to mean that intellectual virtues are relative to one’s local epistemic community. As McKinnon points out,

the most commendable kinds of epistemic acts [are] those performed by agents who exercise in a cognitively responsible manner those belief-acquiring dispositions or faculties deemed to be reliable and who are motivated to do so by a desire to know how things are in the world (2003, 245).

A virtuous epistemic agent exercises skills deemed reliable by others in their epistemic community and is motivated by that community to commit to salient and accurate information about reality. Moreover, reliability is evaluated in relation to all epistemic communities, not just those closest to the inquirer.

The social nature of intellectual virtue is thus quite important for understanding how objectivity is developed and exercised in our epistemic communities. The virtue of epistemic trustworthiness provides a particularly illustrative example. Daukas argues that epistemic trustworthiness is “a social epistemic virtue … insofar as it depends on appropriate attitudes towards others, as well as toward oneself, as epistemic agents” (2006, 113). She also points out that “the character traits and skills required for epistemic trustworthiness are developed over time, through interaction with others in the context of normative social practices” (2006, 113–14). This underscores McKinnon’s view that we “learn whom and what to trust. And, we do so, in part, by learning about our own and other’s cognitive selves” (2003, 249). Epistemic and moral trust are here intertwined, and each is clearly social in nature.

Epistemic trustworthiness is also tied to personal objectivity, and this gives us reason to think of objectivity in terms of social epistemic virtue. Naomi Scheman says that “[c]entral to what we do when we call an argument, conclusion, or decision ‘objective’ is to recommend it to others, and, importantly, to suggest that they ought to accept it, that they would be doxastically irresponsible to reject it without giving reasons that made similar claims to universal acceptability” (2001, 24). These recommendations and epistemic responsibilities related to objectivity depend on epistemic trustworthiness. As Douglas says, all types of objectivity involve a “sense of strong trust and persuasive endorsement, this claim of ‘I trust this, and you should too’” (2009, 116). Epistemic trust in turn seems to depend on our confidence that others value objectivity.

This contrasts with how sociality and epistemic trust in others are generally regarded in traditional epistemology. Scheman argues that in such accounts, if epistemic dependency

is acknowledged at all, it is to mark it as something that has, intellectually, to be superseded: The ground that we in fact traversed in our parents’ arms has to be retraversed under our own power, in order to prove that the place where we have ended up is one we could and would have reached had we done the entire journey under the direction of our own adult intelligence (2001, 41–2).

Dependency is denied, and with it epistemic trust of others. But this is a mistake. In the first place, we are never truly epistemically autonomous. And even if we were, highly individualistic understandings of epistemic autonomy would not guarantee epistemic trustworthiness any more than sociality would. As Zagzebski argues, we do not actually have evidence that we ourselves are generally “more trustworthy than other people” (2007, 253–54). Moreover, we have evidence that we are sometimes less epistemically trustworthy than others. Given this, Zagzebski does not think we should prefer individualistic idealizations of epistemic autonomy over idealizations of epistemic dependence on trustworthy others. That we do prefer idealizations of autonomy suggests that we have an underlying problem with social epistemic trust. This problem is serious, for, as Oreskes and Conway note, trust underlies all relationships: we “trust other people to do things for us that we can’t or don’t want to do ourselves” (2010, 272).

A further link between epistemic trust, social intellectual virtue, and objectivity concerns evidence that very young children evaluate character holistically when forming epistemic trust (Koenig and Harris 2008). They do not form epistemic trust solely by evaluating the reliability of claims people make. Epistemic development thus appears to align more closely with the agent-evaluation model of virtue epistemology than the belief-evaluation model of traditional epistemology. And if it does, detaching objectivity from intellectual character and social context may interfere profoundly with the maintenance of objective, epistemically trustworthy individuals and communities.

The failure to relate intellectual character explicitly to objectivity also helps to explain why epistemic dependence on trustworthy others is unrecognized as an ideal in traditional epistemology. Because in current epistemic communities we either lack explicit information about intellectual character or are poorly skilled in its evaluation, one of the main ingredients required for epistemic trust is missing. As a result, we become less trusting of others, and the less we trust others, the more attractive individualistic ideals of epistemic autonomy become. In an effort to save objectivity, we place responsibility for objectivity squarely onto the autonomous inquirer and try to bolster it by suppressing self-characteristics and encouraging detachment. This does not work, so we then locate objectivity entirely in methodologies we think we can trust because they are not people. This does not work very well either (though it often helps).

Given the social nature of intellectual virtues and the requirements for epistemic trust, it is likely wrong to think that practices forbidding scientists from including anything of themselves in their published research promote objectivity. Such practices send matters of intellectual virtue underground and hide information that we might otherwise use to evaluate objectivity and epistemic trustworthiness. Even if someone accepts that a certain scientific procedure is objective, they may reject results based upon it because they do not have what they need to epistemically trust those using the procedure. And in mistrustful situations, people are more vulnerable to intellectual vices that might lead to the premature rejection of scientific research employing objective methodologies. This phenomenon may be especially damaging to epistemic trust in the public domain, wherein most individuals, in addition to being excluded from scientific epistemic communities, are unfamiliar with academic standards of inquiry and lack the specialization needed to understand publicly available academic research.

Thus, Oreskes and Conway’s sensible observation that, if scientists do not get involved in public controversies about science, no one will know what objectivity looks like provides support for conceptualizing objectivity in terms of community intellectual virtue. Though the exclusion of the public from scientific communities is a difficult problem to address, this need not stop us from better integrating academic work, public inquiry, and policymaking with social support for the formation of objective intellectual character. Explicit instruction in intellectual virtue and an explicitly positive valuation of intellectual virtues in the public realm would provide such support. Such considerations also suggest that we need to study community intellectual virtue and its relation to the objectivity of persons more thoroughly. We need to find out what kinds of individual and community-level approaches best support the objectivity of persons. We need to understand better how intellectually trustworthy, objective people develop.

4 Objections

The first objection I consider concerns implicit bias. Contemporary psychological research shows that implicit biases against disadvantaged groups can be activated by virtuous efforts to counteract them (Fine 2006). It seems that when people explicitly claim that they are unprejudiced, or are asked to assess their own objectivity, their objectivity decreases (Lee and Schunn 2011). These counterintuitive results suggest that the social promotion of intellectual virtue might encourage intellectual vice instead.

To counteract the effects of implicit biases, Carole Lee and Christian Schunn (2011) recommend that we introduce a specific criterion for diversity in epistemic communities wherein individuals would “hold different implicit assumptions about the cognitive authority of individuals associated with different stereotypes” (364). They suggest further that

individuals with different background beliefs might be in a better position to render visible and to critique the negativity of others’ evaluative styles. This would require that procedurally objective communities make efforts to foster forms of diversity bearing not on the content of theories but on the cultural beliefs and norms of the community itself (365).

This is a reasonable remedy and one to which the structured promotion of community intellectual virtue could contribute.

We have, for example, a number of intellectual virtues in our epistemic toolbox, such as intellectual honesty, fairmindedness and intellectual courage, that require and enable us to challenge implicit biases. If implicit biases fool even the egalitarian-minded, then intellectual virtue demands that the egalitarian-minded find ways to outsmart them. Cordelia Fine points out that “implementation intentions” such as telling yourself not to stereotype help to reduce stereotyping. She says that “people who form egalitarian implementation intentions of this sort are happily impervious to the usual unconscious effects of stereotype priming” (2006, 200). Seeing the potential of implementation intentions, epistemic communities valuing fairmindedness could adopt them strategically.

A second objection to my view is that intellectual virtue is too weak to promote objectivity given certain truths about human irrationality. As Fine observes,

psychology texts like to make a few half-hearted suggestions as to how we can combat the mulish tendencies of our minds. “Entertain alternative hypotheses,” we are urged. “Consider the counterevidence.” The problem, of course, is that we are convinced that we are already doing this; it’s simply that the other guy’s view is absurd … It is a sad fact that the research fully bears out the observation by the newspaper columnist Richard Cohen that, “The ability to kill or capture a man is a relatively simple task compared with changing his mind.” (127)

In addition, scholars like Oreskes and Conway show just how easy it is for the epistemically vicious to manipulate intellectual virtue. If the appearance of intellectual integrity can be recruited to undermine intellectual integrity, perhaps virtues are simply too flimsy to counter such attacks.

But we could also argue that the reason people are so easily tripped up by the appearance of integrity is that they poorly understand intellectual integrity in the first place. It is easy to imagine people in communities that explicitly care about intellectual character resisting vicious epistemic manipulation. Moreover, the objection that virtues are too weak to defend us against intellectual manipulation relies on a rather individualistic understanding of virtue. The vices of politically powerful scientists might not be visible when they are considered acontextually, but the vice-laden goals of the epistemic community in which they reside might be easy to spot. Robert Roberts and Jay Wood point out that “intellectual carelessness tends to spread through a community” (2007, 232), and when it does it becomes more visible. We can take advantage of the visibility of community-level vice to create explicit standards of virtuous inquiry in industry and politics. The community virtue perspective could also be used to quell unjustified attacks on the credibility of scientists: it is harder to poison the well against individual scientists when the virtues of the epistemic community in which they work are explicitly upheld.

The third and final objection I consider concerns the fact that social practices of science can secure the production of accurate and salient information even if the scientists participating in those practices are epistemic jerks. Perhaps, then, intellectually virtuous people are epistemically unnecessary. But consider that at some point in the creation of these social practices at least someone virtuously cared about the epistemic good of securing accurate and salient information. Even jerks might care about this epistemic good and thus attain some minimal level of virtue. It helps to realize that intellectual virtue admits of degree. None of us is perfect and few, if any, of us are complete jerks.

We also need to distinguish between moral and intellectual character when considering knowledge production in vice-laden communities. While there is overlap, these two conceptions of character are not identical. You might be unflinchingly honest in your research but deceptive in your romantic life, or arrogant with your colleagues but impressively courageous about ideas. So while some vice-laden communities successfully create accurate and salient information, their vices may be generally moral, and their virtues generally intellectual. While moral and intellectual virtues are usually found together, communities of intellectually upstanding jerks are clearly possible.

It is also worth considering that certain intellectual vices may be useful in dysfunctional epistemic environments. Roberts and Wood (2007), for example, consider the possibility that intellectual vanity and arrogance might lead in some cases to more epistemic goods than intellectual humility. Such situations could be explained

by reference to some other fault in the individual or some corruption in the epistemic environment. Perhaps individuals need vanity as a motivation, because their upbringing does not instill in them an enthusiasm for knowledge as such. Or we might locate the pathology socially—say, in the fact that the whole intellectual community is warped by vanity and arrogance, hyper-autonomy and unhealthy competitiveness, so that in that fallen community some vices actually become more functional than their counterpart virtues. (252)

Because we learn to reason in very imperfect epistemic communities, intellectual vices may well lead to accurate and salient information and perhaps knowledge. This is analogous to cases wherein moral vices help people to function and survive in uncaring or abusive environments.

Finally, while intellectually virtuous and vicious communities might both produce accurate and salient knowledge of the world, intellectually virtuous communities will generally produce more epistemic goods and fewer epistemic evils. Intellectually virtuous communities are likely to have more time, energy, and support in the pursuit of understanding. Flawed reasoning and counterproductive social relationships will slow intellectually vicious communities. Roberts and Wood say that “in the long run, just about everybody will be epistemically better off for having, and having associates who have, epistemic humility” (2007, 251). This is surely true for other intellectual virtues as well.

5 Conclusion

In controversies about scientific research and policy, scientists and other academics are increasingly accused of biased “subjective” reasoning in cases where objective methods and strong empirical support undoubtedly exist. In such situations, various intellectual vices and inadequate ideas about objectivity are usually at play. To defend against unwarranted accusations and persevere intellectually against those who intentionally distort reality, it is therefore important that we explicate clearly the relations between objectivity, intellectual virtue, and community. Fortunately, because philosophers of science, social epistemologists, and virtue epistemologists are already concerned with the process of inquiry, they are in an excellent position to develop accounts of objectivity that are openly informed by intellectual virtue and social epistemic relationships. Their combined resources stand to contribute significantly to our understanding of objectivity and its promotion in diverse epistemic communities.