Human beings are epistemically interdependent. Separately, each of us is in a position to know very little. Collectively, however, we evidently can know a lot. Each person’s epistemic corpus brims with information others have conveyed to her. We survive, even thrive, because of an intricate division of cognitive labor. The division of cognitive labor is effective; otherwise, Homo sapiens would be extinct. Evidently, we get by because we are good enough at identifying dependable informants.Footnote 1 Although each individual’s epistemic access is restricted, different people have access to different facts, and different orientations toward the same facts. We depend on others for vital information that we do not, and in many cases cannot, get for ourselves. We have no choice. But the division of cognitive labor carries risks. Humans are a garrulous lot. People relentlessly bombard one another with information. Not everything we are told is creditable. So we need a filter—a way to screen out the dross. The problem, though, is that we are ignorant. We cannot screen each bit of information directly. We can, however, screen informants.

Craig (1990) maintains that the concept of knowledge is molded to provide our filter. In prototypical cases, someone who knows whether p is apt to be a good informant as to whether p. This is no surprise, for the concept of knowledge was, Craig maintains, expressly designed to pick out good informants. Humans decide what qualifications they want a good informant to possess, and declare that those who have them qualify as knowers. Craig’s construction yields a form of reliabilism. S qualifies as a good informant as to p, if p is true, S believes that p, and S is suitably reliable with respect to issues like whether p. So far, this is standard reliabilism.Footnote 2 Craig adds a further requirement: S must display a public mark that enables others to identify him as a good source of information as to whether p.

This additional requirement, I suggest, leads to a problem. If p is true, Q believes that p is true, and Q is suitably reliable with respect to issues like whether p—that is, she typically believes p-like propositions if they are true and does not believe them if they are false—but Q does not display the public mark, then although she satisfies the standard reliabilist requirements on knowledge, according to Craig, she does not know that p. Although I am far from convinced that bearing a public mark is integral to the concept of knowledge, to keep the issues straight, I will restrict my use of “knowledge” to cases that satisfy Craig’s requirements. To take up the slack, let us say that reliable true believers that p, whether or not they display the public mark, are aware that p, are cognizant of the fact that p, or are privy to the information that p. This terminology enables us to characterize an important epistemic achievement, whether or not it amounts to knowledge.Footnote 3

Hannon (2019) builds on Craig.Footnote 4 Although Craig presents his position as a genealogy, Hannon maintains that the core of Craig’s position is functional, not genealogical. Neither the actual origin of the concept of knowledge, nor an origin it might have had in a mythical state of nature matters much. What is important is the function the concept of knowledge currently performs. Hannon labels this approach “function-first epistemology.” He says it “allows us to engage in the normative project of evaluating how well or poorly our epistemic practices actually satisfy our needs and goals” (Hannon 2019, p. 32). Craig and Hannon hope to craft a functionally viable concept of knowledge. My ambition is more limited. I am not primarily concerned with whether the features they impute to the concept are conditions on knowledge per se; I am concerned with whether, and at what cost, the concept they delimit performs the function they assign it—that of marking out the class of those qualified to provide good information. I will argue that their function-first approach typically equips us to identify only some of those cognizant of any particular fact. This need not seriously impede epistemic access to useful information. If enough available people are privy to the information we seek, being able to identify a suitable subset of them may suffice. But, I suggest, their criteria exclude some cognizers, thereby doing them an epistemic injustice.

Craig’s fundamental insight is important. We need not agree that identifying good informants is the function of knowledge or even the primary function of knowledge. It is enough to recognize that “[t]he concept of knowledge is used to flag approved sources of information” (Craig 1990, p. 11). Its contours suit it, he believes, to perform that function.

Craig’s genealogy shows how the concept of knowledge might emerge in a state of nature in which human beings who lack the concept nevertheless need to depend on others for information. His genealogy is not an exercise in a priori anthropology.Footnote 5 We can think of it as a rational reconstruction—a fiction designed to show why it is useful for creatures like us to have a concept of knowledge with familiar features. In the initial illustration, the good informant’s advantage is purely positional. Being up a tree, Fred can see further than his compatriots on the ground. Assuming he is dependable, they stand to benefit from what he reports. They can, for example, take cover if he says that a predator is approaching. As civilization advances, other advantages emerge. Differences in background experience, sensitivity, and expertise underwrite more textured divisions of cognitive labor. Craig hopes to craft a viable concept of knowledge by articulating and justifying the criteria for an epistemically acceptable informant.

This is a lovely idea. Rather than simply asking what our current concept commits us to, he asks why it pays to have such a concept, and what features enable it to provide the benefit it supplies. He notes that every language has a word for knowledge, taking this as evidence that the function it performs is grounded, not in parochial social or cultural circumstances, but in something fundamental to the human condition. He takes the function to be pragmatic. To solve practical problems, we need accurate information, much of which we cannot get for ourselves. A suitable concept of knowledge, he maintains, enables us to tag dependable informants.

The considerations that figure in Craig’s state of nature story figure in contemporary, mundane cases of information gathering. “Can you tell me whether the bus stops at the corner?” “Can you tell me whether muons are leptons?” “Can you tell me whether Millard Fillmore ran for a second term?” In asking such questions, we hope to gain access to truths that others have but we do not. The crucial question is neither “How did we come by our concept of knowledge?” nor “How might we have come by our concept of knowledge?” It is “What singles out dependable informants?”

This is important, for the mere ubiquity of the concept is not enough to vindicate it. I suspect that every language has a word for a deity. Its ostensible function is to put adherents in touch with a transcendent realm, thereby underwriting religious beliefs, practices, and institutions. This is no argument for the existence of god. The function of the concept, along with the beliefs, practices, and institutions it figures in, is, anthropologists maintain, to stabilize society and entrench traditional values (see Geertz 1993; Durkheim 2001). Anthropology thus explains why every culture has a concept of a deity in a way that does not require that the concept denote anything. If knowledge is different, it is because the ostensible need—to identify dependable informants—is a real need. Knowledge itself plays the requisite role. As Craig insists, “The circumstances that favor the formation of the concept of knowledge still exist” (2007, p. 191).

Craig emphasizes that what we want from informants is not just information, it is accurate information. And we do not just want information that is in fact accurate, we want accurate information that we have a good reason to believe is accurate. So we need a way to identify a good source—someone who can supply accurate information. And in certifying sources, we are inquirers, not examiners (Williams 1973, p. 146). That is, we are not in the business of “checking someone’s credentials for something about which we already know” (Hannon 2019, p. 30). Rather, we seek someone who can tell us whether p because we do not ourselves know whether p. We need to be able to identify someone as a good source without already knowing that his information vis-à-vis p is accurate.

The need for accurate information, Craig maintains, underwrites the truth requirement on knowledge. A good informant about p is one who, when asked, generally tells the truth as to whether p. Nor is his doing so just a matter of luck, a whim, or a desire to curry favor. Typically, he has got to believe what he says. Even this is not enough, for belief is private. We cannot peer into someone’s head and see what he believes. Inasmuch as the role of knowledge under investigation is to certify someone as a source of information, we need someone who displays “a detectable property—which means detectable to persons to whom it is not yet detectable whether p—which correlates well with being right about p” (Craig 1990, p. 18). This is what Miranda Fricker calls an indicator property (2009, p. 115). Because it is open to an informant to confess ignorance, we can, in general, at least assume that if he says that p, he believes that p (Craig 1990, p. 12). But merely saying that p because he believes it is not enough either. The informant must convey the information in a way that induces his listeners to credit him. The mark of a good informant is a publicly discernible feature that they correctly believe correlates with his believing the truth. Nothing stronger than correlation is required. There is no requirement that the mark be or be thought to be a manifestation of knowledge. Thus, if all members of the Drones Club know the password, and only members wear the club tie, and we are aware of these facts, then sporting the tie would mark someone as a good informant about the password.

The mark of a good informant, Craig maintains, is a publicly discernible characteristic that both correlates with (1990, p. 13) and is believed to correlate with being a reliable source of information (1990, p. 55) as to whether p. He calls the mark X, conceding that perhaps “the property X has no very precise identity. What it suggests is simply this: X is any detectable property which has been found to correlate closely with holding a true belief that p” (1990, p. 25). He suggests that being a taxi driver, recognized as such, is a mark of a good informant about how to get to a particular address, since it is the case, and we have reason to believe it is the case, that taxi drivers know how to find pretty much all the addresses in the areas they serve (Craig 1990, p. 26). Or anyway, this used to be the case. Now that taxis have GPS devices, cab drivers may no longer be likely to know how to find a particular address. Confidence is often a good general mark (Craig 1990, p. 13). If an informant speaks with confidence when he says that p, his demeanor provides evidence that he is convinced that p. That, ceteris paribus, gives us reason to believe him. A display of confident belief that p is a good mark if it is reliably correlated with being right about p. I will not worry here about the advisability of resting one’s epistemology on a property that seems to have no fixed identity, or even about whether it is possible for a property to have no fixed identity. The peril I want to focus on lies in what we do know about it—the requirement that the mark be publicly acknowledged.Footnote 6

Before raising my concern, let me emphasize that I do not take this peril to directly discredit function-first epistemology, or even the precise formulations of the position that Craig and Hannon develop. Knowledge may perform exactly the function that they ascribe to it. And it may be capable of doing so only because its possession involves the sort of public mark that they identify. I will urge, however, that there is a downside that needs to be acknowledged.

We need to be able to identify good informants and we need a discernible characteristic that will enable us to do so. Good informants, let us agree, are people who are aware of the facts we want to know.Footnote 7 The threshold for knowledge, function-first epistemologists maintain, should be high enough to secure a basis for action. So, it is pragmatic. This might seem to favor a highly variable threshold. In circumstances where stakes are low, an informant might responsibly state that p with considerably weaker evidence than in circumstances where stakes are high. But often, an informant is unaware of why an inquirer wants to know whether p. If the threshold on responsibly informing was variable, she might have no way to tell whether she was in a position to responsibly make her claim. Moreover, we are apt to store away information for future use. If a good informant conveys the information that p, we are likely to believe that p—not that p, with a subscript indicating the threshold in play when we got the information. Thus, they maintain that the information an informant imparts should generally satisfy a threshold suitable for the community at large (Hannon 2019, p. 34). This requires objectivization—a pulling away from a particular context in order to develop an increasingly objective standard of epistemic acceptability. “A concept is objectivized if it becomes progressively less tied to the particular concerns of the user” (Dancy 1992, p. 395). As we distance ourselves from the specific circumstances in which information is sought, the concept becomes more objectivized. If we go too far, and ask what standards an assertion would have to satisfy in order to be appropriate regardless of the circumstances and regardless of the stakes, we arrive at skepticism. At the limit, the informant should speak only if her information is good enough to defeat the malevolent demon. Craig and Hannon maintain, however, that by focusing on the reasonable concerns of the community at large, we obtain a sufficiently objective concept of knowledge that enables us sometimes to know. They concede that as the concept objectivizes, the detectability requirement dilutes. Dancy (1992, p. 392) and Kappel (2010, p. 86) maintain that it disappears entirely; Hannon, however, insists that it leaves a trace (2019, p. 35). If the detectability requirement disappears under objectivization, then my concerns do not apply to the concept of objective knowledge that emerges. If it leaves a trace, they do. Either way, however, they apply to the function-first account of how we identify good informants.

Not everyone who is privy to p is a good informant. Some may be unwilling or unable to impart the information we seek. Priests under the seal of the confessional, employees who signed non-disclosure agreements, and 8-year-olds who have made pinky promises may know but refuse to say. No matter. The claim is not that being a good informant is equivalent to knowing. Knowing is but a necessary condition on being a non-accidentally good informant. So as long as there are enough knowledgeable people around who are willing to talk, the division of cognitive labor will function smoothly. If enough people are privy to the fact that Sylvia stole the spoons, information transfer will hardly be impeded by her priest’s being barred from telling that she confessed to the theft.

Many non-taxi drivers are apt to be aware of whether Elm Street intersects Mass Ave. Many non-members of the Drones Club may be cognizant of the club password. The public mark is likely to identify only a proper subset of those privy to a given fact. This opens the door to testimonial injustice. If the public mark characterizes only a proper subset of those with the information we seek, the problem is not that some who are cognizant refuse to say, it is that they are not asked. Nor are their contributions credited should they volunteer the information. Unless the dependable informant requirement disappears under objectivization, they are not considered to know.

A social mark is apt to reflect the pathologies of the society that certifies it. Craig says that one public mark of a knowledgeable informant involves displaying confidence (1990, p. 13). Let us focus on that. Confidence is frequently associated with competence (Chamorro-Premuzic 2019). But both confidence and its manifestations are products of socialization. Race, gender, and social class are apt to affect both confidence and its display. The unending barrage of reports that girls are bad at math is apt to diminish a girl’s confidence in her mathematical ability and her confidence that she knows the solution to a particular math problem. If asked whether √457 is rational, she may reply with trepidation, even if she regularly performs such calculations correctly. An informant whose confidence has been squelched is likely to be more diffident than one whose confidence has been relentlessly reinforced. Since social arrangements have evolved in such a way as to reinforce the self-assessments of members of one group while undermining those of another, it is no surprise that members of the first group are apt to be more confident than members of the second, even when the basis for the assessments is the same (see Ehrlinger and Dunning 2003).

What counts as a public display of confidence is typically gendered and racialized as well (see Coffman 2014). Speaking up early and often, preferably in a deep resonant voice, gesturing expansively, asserting emphatically, and acting decisively (while failing to acknowledge caveats), are indications of confidence. Whether such behavior is an asset or a liability is variable. It is apt to be admired as assertive or authoritative in a white man, disparaged as aggressive or pushy in a woman, and as uppity in a person of color. If confidence or its manifestation figures in who will be recognized as a good informant, many of us are excluded, even when we have justified, reliable, epistemically virtuous true beliefs.

This might seem to show only that confidence is not a good mark. Things are not so simple. When the Royal Society was founded, it admitted only gentlemen (see Hunter 1982; Shapin 1994). The rationale was that a gentleman’s word is his bond. Having enough money, power, and prestige, he would not be swayed by pecuniary or political considerations. “It was the disinterestedness of the English gentleman’s situation that was most importantly identified as the basis of his truth-telling. The specific circumstances of his economic, political, and social free action . . . were mobilized as explanations of the integrity of a gentleman’s narration” (Shapin 1994, p. 83). This rationale suggests that the Society was more worried about lying than about honest mistakes. It assumes that the only reason to lie, mislead, or falsify scientific results is financial, social, or political gain—not, for example, to increase one’s intellectual standing, feed one’s ego, best one’s rivals, or promote a favored cause. Women—even gentlewomen—as well as merchants, craftsmen, Jews, and foreigners were not to be trusted. Because they were dependent on others, they were believed to have reason to lie (Shapin 1994, p. 83). One result of the selection procedure was that the Royal Society ignored technicians, instrument makers, navigators, and apothecaries who were apt to have considerable expertise in the matters under investigation. Another was that, having stipulated that gentlemen were to be trusted, members of the Royal Society evidently took each other’s word about how they experimented and what they had found. They were reluctant to challenge other members (whose word was their bond) and thus had no incentive to replicate experiments (see Shapin 1994).

Social status in seventeenth-century England was a clear public mark. People knew who qualified as a gentleman. But as a mark of a good informant, it was epistemically costly in at least two respects: (1) It deprived the scientific community of the insights and expertise of those who lacked the relevant social status; and (2) it placed too much trust in those who had it.Footnote 8 Competence is not assured by social status, nor is it the case that friction-free social interactions are the best way to promote epistemic ends. So imposing the values of the gentleman’s club on an institution that aspires to advance understanding is an unpromising strategy.

The reliance on a purely social or purely behavioral mark for a good informant is an invitation to epistemic elitism. Perhaps manifesting confidence positively correlates with dependability in an informant. That may justify appealing to manifestly confident white middle- or upper-class informants. But some who would be equally dependable yet lack the requisite social marks are ignored, and, by Craig’s lights, justifiably so. Our epistemic condition threatens to be ineluctably unjust. Nevertheless, we need a public mark to determine whom to trust.

A practice that excludes from the class of knowers those cognizers who lack the social mark need not be epistemically impoverishing. If, for example, pretty much everyone is aware of how to get to Park Street, then pretty much anyone can tell you. If you restrict your informants to white men, you will still easily get the information you seek. Such an example fails to do justice to Craig’s point, though, because if pretty much everyone is aware of something, then the division of cognitive labor is all but idle. But the same challenge occurs in cases where we seek expertise. If pretty much everyone in the lab is aware of whether muons are leptons, then asking anyone in the lab is likely to get you the information you seek. Restricting yourself to white men in lab coats is unlikely to undermine your quest for information. Craig’s criterion for a social mark does not purport to identify all dependable informants. If the mark is a good one, then by deploying it, you are apt to target someone who has the information you seek. But you may overlook others who also have that information. Whether this results in a deficit in accurate information depends on what proportion and distribution of those with accurate information display the mark.

Even if it does not, it is apt to result in epistemic injustice. According to Fricker, testimonial injustice occurs when, owing to prejudice, a hearer gives an undue level of credibility to a speaker’s word (2009, p. 1). Because her interest is primarily ethical, Fricker focuses on credibility deflation—in particular on cases of “identity prejudice,” where the grounds for the deflation are keyed to factors like race, religion, gender, and sexual orientation, which people take to be the core of their identities. But the breadth of the epistemic problem is vastly greater than this suggests.

Testimonial injustice arises from unjustified credibility imbalances. It is not exclusively and perhaps not even predominantly due to identity prejudice. There is an injustice in credibility inflation as well as in deflation. If, because of a person’s prestige, his word is given more than its due, he incurs an unwarranted epistemic burden. A real-life example bears this out. Leonard Nimoy was famous for his portrayal of Spock, a half-Vulcan who was far more knowledgeable about scientific matters and far more rational than the fully human characters in Star Trek. He reported, “I was invited to Cal Tech and was introduced to a number of very brilliant young people who were working on interesting projects. And then they’d say to me ‘What do you think?’, expecting me to have some very sound advice” (Itzkoff 2009). He was an actor, not a scientist. He was completely unqualified to assess their ideas. Yet his credibility was inflated because of the character he played. “Do not take my word for it” may be the mantra of someone whose credibility is inflated, but folks might disregard it. Then again, unlike Nimoy, such a person’s ego may be so inflated that he purports to know, and perhaps even thinks he knows, things he does not. If so, he is a poor informant.

Moreover, much credibility deflation is due to prejudices that are not Fricker’s identity prejudices. It is a product of social hierarchies. Physicians often deflate the credibility of nurses; psychiatrists, the credibility of clinical psychologists; scientists, the credibility of humanists; sophomores, the credibility of freshmen; and so forth. We can, if we like, construe these as cases of class bias. But if we do, we have to recognize that the classes are not always economic classes, and they intersect in unanticipated ways. They may be social, professional, or educational classes, or pretty much any grouping of people who think of themselves as an “us” as opposed to, and superior to, a “them.”

As Fricker describes it, epistemic injustice occurs only if one’s credibility is not given its due because of one’s membership in some group. She denies that it would be an injustice if Beth deflated Bill’s credibility because she wrongly decided that he is not all that bright (2009, pp. 22–23). This might be so if Beth had made a non-culpable error about Bill’s expertise or intelligence. But not all one-off assessments are non-culpable errors. Perhaps Beth jumped to the conclusion that Bill is not bright, having carelessly misread something he wrote or having cavalierly misconstrued his dry wit. Perhaps Beth regularly and irresponsibly jumps to such conclusions. If so, Bill seems to suffer an epistemic injustice. Here, I disagree with Fricker. Non-culpable errors aside, we do someone an epistemic injustice when we fail to give his words their due or when we silence him completely, regardless of whether the failure stems from considering him a member of a putatively epistemically discreditable group.

The problem Craig begins with is that we have neither the time nor the epistemic resources to vet every informant. So, as Fricker notes, we resort to stereotypes (Fricker 2009, p. 31). She emphasizes that to be epistemically acceptable, the stereotypes have to be reliable. If pretty much everyone who instantiates the stereotype is knowledgeable, you will not get bad information by restricting yourself to them. But if the stereotype omits or downplays the expertise of others, it will still be morally and epistemically problematic. Then, even if they are reliable, the stereotypes will often be pernicious.

In the film Good Will Hunting, Will, a young janitor at MIT, surreptitiously solves a series of problems that advanced graduate students in mathematics cannot solve. Later in the film, he outperforms their professors as well. Arguably, toward the end of the film—after he has demonstrated his ability—he is recognized as a good informant. But initially, he is dismissed, for he plainly does not display any stereotypical mark of a good math informant. Indeed, he displays multiple marks of a poor one. He has little formal education; he comes from the working class; he has a menial job; he is on parole. He lives in South Boston, which is far from a hotbed of mathematical prowess. He is, in short, not the sort of informant Craig would advise us to depend on. Since he is obviously an outlier, his talent need not prompt us to revise our stereotypes about math informants. As a general rule, it remains preferable to ask a math professor rather than a janitor if you want information about the Krylov-Safonov Harnack inequality. Still, to dismiss Will’s claims about math is to do him an injustice. He knows his stuff.Footnote 9

Fricker says that the prejudices that are keyed to “identity” factors—race, gender, sexual orientation, ethnicity—are the most serious (2009, p. 15). They may be the ones with broadest scope. And maybe, they are the most ethically pernicious. But many people take their vocations or their relationships or their individual accomplishments to be central to their identity. So, to have one’s credibility deflated because one is merely a nurse, merely a psychologist, merely a step-parent, or merely an accomplished mathematician who works as a janitor is a serious moral affront. It also warps information-seekers’ epistemic access. It prevents them from appreciating the range and distribution of expertise in their epistemic milieu. So, the fact that we need to take shortcuts creates a problem.

Our predicament is this. We need to select an informant who we think is likely to have the information we seek. To make our selection we have to depend on some public mark—some observable characteristic which correlates with being a good informant about such matters. The criterion of a good mark is that we can be sufficiently confident that anyone who displays it has the requisite information. But it does not follow that only those who display the mark have the information in question. Others may have it as well. By restricting our choice of informants to those who display the mark, we do an epistemic injustice to those who have the information we seek but lack the mark. In effect, we place them under a cone of silence.

Perhaps the predicament is irremediable. Given our need to rely on one another, perhaps we should simply concede that such testimonial injustice is inevitable. Maybe it is social epistemology’s original sin. Arguably, however, we can at least improve our lot. The marks we have identified are social and behavioral; they are not normative. That, I suggest, is a mistake.

Craig and Hannon present the information seeker’s challenge as identifying an informant who is reliable, where the criterion of reliability is being sufficiently likely to speak the truth. This makes informants sound like barometers. A barometer is a source of information about the weather because the changes in air pressure that it measures reliably correlate with changes in the weather. But, they insist, informants differ from mere sources of information. Informants are people; they are capable of what we might call epistemic empathy. They can grasp the inquirer’s interests and abilities, and tailor their responses accordingly. They are articulate; they can frame their responses in a way that the inquirer can understand. They can appreciate why she wants to know, and what epistemic resources she brings to the table. A good informant’s responses to requests for information, then, are not merely accurate; they are relevant and useful to the information seeker. I suggest that a good informant is not merely reliable, she is trustworthy. As Fricker says, inquirers “do not merely rely on their good informants, relating to them as more or less complex epistemic instruments; rather they trust them” (2012, p. 255).Footnote 10 The difference is this: reliability is statistical; trustworthiness is normative.

A trustworthy informant is more than someone who speaks the truth. She is epistemically responsible. She can typically tell whether she has the information the inquirer seeks. This involves self-awareness as well as cognizance of the topic. She can typically tell whether her opinion is backed by adequate evidence. She does not overrate or underrate her competence. She is sincere. She ventures information only if she thinks it is accurate. She is, moreover, socially sensitive. She conveys information in a way that is attuned to her inquirer’s needs, interests, and abilities. She will not, for example, give a highly technical, jargon-ridden, virtually unintelligible answer to a layman’s question. She will not adopt an orientation on the problem that is irrelevant to his needs. She will not be unduly precise or unduly imprecise. And so on. A merely reliable response to a request for information could easily be unintelligible, irrelevant, wrongly calibrated, or otherwise unsuited to the inquirer’s needs. A robot could, like a barometer, provide an accurate answer to the question whether charged leptons undergo strong interactions. But if it were not sensitive to the interests, background beliefs, and abilities of its audience, its answer might be useless. A good informant must be properly attuned to the topic and to her audience. These requirements are normative. Ceteris paribus, an informant is blameworthy if she lacks them. This is not to say that she is blameworthy for lacking them. Rather, ceteris paribus, she is blameworthy if she informs although she lacks them. The norms are, I suggest, at once ethical and epistemic, for the information is presumptively trustworthy if the source is.Footnote 11

The move from reliability to trustworthiness might seem no more than a tweak on Craig’s picture. I suggest otherwise. The components I just sketched are general character traits of an epistemic agent. They are not indexed to his utterance of a particular proposition. So even though we cannot judge an informant who says that p the way an examiner who already knows whether p would, we may have plenty of evidence that he is or that he is not trustworthy. Is he a sincere person? That is, in general, does he say that p only if he thinks he is justified in doing so? Does he introduce caveats when he thinks they are needed? Sincerity can be topic-neutral. When it is, we need not know anything about the informant’s expertise to judge him. Regardless of the topic, we may know that he is not the type to purport to know something unless he thinks he knows it. On the other hand, an epistemic agent might be sincere with respect to some topics but not others. Perhaps he routinely exaggerates his romantic successes, but never lies about his work. Then whether to take him to be sincere depends on the topic in question. Is he conscientious? Is he scrupulous about gathering and weighing evidence? Is he self-aware about the range of his competence? Or is he an egotistical, cavalier blowhard? Is he the sort who frames his information in a way that is apt to be useful to his audience? These are general character traits that we can assess without having any idea about whether he knows whether p. They are also the sorts of traits that are easily identifiable.

The question of competence is more focused. People have areas of expertise. So it is useless to expect someone to be competent in general. But we can ask whether he has shown himself to be trustworthy within a given epistemic neighborhood. Is he, for example, a generally dependable informant about amino acids or about baseball scores or about French wines? Is he generally able to frame his answers to questions about the topic in a way that his inquirer can understand and use? If we seek specific information about one of these topics, recognizing that he is sincere and conscientious and that he has proven himself to be a good informant about similar questions gives us reason to trust him now.

Ordinarily, we have a good deal of relevant information to draw on in deciding whose word to take. The information is more than correlational, for it locates a potential informant within a network of moral and epistemic norms. By attending to the epistemic environment, we improve our ability to identify those who are worthy of trust. Sometimes, this might just be a matter of looking at local or global track records. But attention to social dynamics may pay dividends as well. If the social costs of challenging a superior are high, we might reasonably suspect that a nurse challenging a physician, a research assistant challenging a professor, and an enlisted man challenging an officer at least takes himself to have strong backing for his claim. Then even if his utterance comes across as tentative and unconfident, we might suspect that he is right.

Over time, if the challenges withstand epistemic scrutiny, we revise our views about who is a good informant, about what makes for a good informant, and about how to identify one. This is what the Royal Society did. It came to recognize that common-born investigators were nonetheless good scientists, and that taking a member of the Society at his word was liable to entrench errors and impede scientific progress. So the Society revised its membership requirements and its criteria for scientific acceptability. As a result, it morphed from a rather odd sort of gentlemen’s club into a scientifically estimable institution.

Craig framed the problem as a quest for a particular bit of information. If you want to know whether p, should you ask A or B? I have argued that epistemic injustice can occur if B is as epistemically qualified to answer the question as A is but lacks the socially accepted mark of a good informant. By recognizing B’s epistemic standing, we prevent that injustice. Doing so pays additional dividends when we move away from the quest for a single fact. Because B is differently situated, she is likely to be privy to other relevant information that A and his ilk lack. Because nurses spend more time with patients, they are apt to have a range of information about a patient’s condition that an attending physician lacks. They may also be better able to convey the information in a way that is useful to the recipients. By lifting the cone of silence that epistemic injustice imposes, we gain access to that information.

The move from reliability to trustworthiness is no panacea. It does not promise to eliminate epistemic injustice. But it does afford a way to ameliorate it. It sensitizes us to the fact that we have more information about potential informants than Craig and Hannon appreciate. Information transfer takes place in rich, textured epistemic environments. As we learn about a topic, we learn how to learn about that topic. We extend, refine, and correct our methods of inquiry. As we come to appreciate the inadequacies of the marks we currently use to identify good informants, we not only eliminate potential injustices, we also expand our access to the facts.

Epistemic injustice puts the community at a disadvantage. Perhaps we cannot eliminate it completely, but we have resources for ameliorating it. We can learn whose word is worthy of trust, using epistemic marks rather than merely social or behavioral ones. We can learn to revise epistemically inadequate stereotypes and to recognize the contributions of those who do not fit the stereotypes. The benefit, I suggest, is not merely moral; it is also epistemic. We put ourselves in a position to recognize and draw on the epistemic accomplishments of others.