1 Introduction

It is widely acknowledged in epistemology that there are internal defeaters. An internal defeater is understood as a belief or mental state that renders a belief unjustified or unwarranted, a belief that would have been justified or warranted in the absence of the defeater. While the existence of internal defeaters is generally accepted among internalists as well as externalists, the existence of their counterparts (I will call them “confirmers”) is not.

I will start with Alvin Plantinga’s proper function model (Sect. 2) and give special attention to the concepts of doxastic evidence (Sect. 3) and of defeaters (Sect. 4). Just as there are two kinds of defeaters—undercutting and rebutting defeaters—so there are two kinds of confirmers: requirement confirmers and consistency confirmers. In Sect. 5 I will argue that a proper understanding of the concept of undercutting defeaters shows that there are also “requirement confirmers” as their counterpart. I will try to demonstrate this by unfolding the implications of the defeater system in Alvin Plantinga’s model, but I suppose that the results of this investigation are valid for all theories that accept internal defeaters. In Sect. 6 I will investigate the relation of requirement confirmers to warrant, while Sects. 7 and 8 will investigate the role of consistency confirmers.

2 Plantinga’s proper function theory

Plantinga’s proper function theory is well known, but it nevertheless seems necessary to me to unfold some details in order to lay the groundwork for my argument. For Plantinga a belief is knowledge if it is true and warranted. So the essential question is: “What are the conditions for warrant?” He gives three requirements for a warranted belief:

R (i) the belief has to be produced by cognitive faculties that are successfully aimed at truth;

R (ii) the relevant faculties have to function properly;

R (iii) the cognitive environment has to be appropriate for the relevant cognitive faculties.Footnote 1

According to Plantinga (1993, pp. 17–18) and Reid (1983/1785, pp. 275ff.), as human beings we ordinarily take it for granted that, if our cognitive faculties are working properly and the cognitive environment is not misleading, our beliefs are warranted. We presuppose that normally R (i) is fulfilled—except e.g. in cases of wishful thinking.

To presuppose the reliability of our cognitive equipment is, of course, not to claim infallibility. It is part of the “conditio humana” that we are not free from error. “Errare humanum est”—to err is human. Causes for errors may be the malfunction of cognitive faculties—R (ii) is not fulfilled—or a cognitive environment that is not appropriate—R (iii) is not fulfilled. The fallibility of our cognitive faculties implies that our beliefs are defeasible. For any belief \(B\) of a person \(S\) there may be (unbeknownst to \(S\)) a true proposition \(D_\mathrm{ex}\) that includes a state of affairs as a result of which at least one of the requirements R (i) to R (iii) is not fulfilled. We may call \(D_\mathrm{ex}\) an external or propositional defeater.

But there are internal defeaters as well, and they will be the subject of this paper. Before I unfold some implications of the internal defeater system, I want to consider a feature of Plantinga’s proper function model that is not as widely used as the concept of defeaters but is, I think, helpful for understanding how defeaters (and “confirmers”) work: the so-called doxastic evidence.

3 Doxastic evidence

I think Plantinga is right when he observes that our beliefs are accompanied by an experience he calls “doxastic evidence” (Plantinga 2000, pp. 110–111; pp. 203–204; p. 264, etc.). “The belief feels right, acceptable and natural; it feels different from what you think is a false belief.” (Plantinga 2000, pp. 110–111) There is “this inclination to believe, this perceived attractiveness, or inevitability, or fittingness of the proposition in question in the situation in question.” (Plantinga 1993, p. 192)Footnote 2 Memory beliefs may serve as an example to describe the experience: I meet a friend of mine whom I haven’t seen for a long time. I try to remember his name. Is it “Ronald”? Or “Roderick”? The names just do not “feel” quite right. “Robert”! That’s the right name! How do I know that “Robert” is the right name? It just “feels” right.

The doxastic evidence is an internal marker that—if it is functioning properly—indicates whether a belief is right or wrong. According to Plantinga (2000, p. 264) we have this doxastic evidence “in any case of belief.” Some beliefs are accompanied by more than this doxastic experience alone. In cases of perceptual belief, e.g., we also have some kind of sensuous imagery. But in the case of memory beliefs and a priori beliefs we often have nothing other than doxastic evidence, at least nothing else that gives these beliefs warrant.Footnote 3

For Plantinga my doxastic evidence for a proposition \(p\) is equivalent to my inclination to believe \(p\) (Plantinga 1993, pp. 190–193; 2000, pp. 203–204; p. 264; p. 492). And if everything goes right with regard to internal rationality, I will form my beliefs in accordance with this internal marker (Plantinga 2000, p. 111). My doxastic evidence for \(p\)—and therefore my inclination to believe \(p\)—may vary “from the merest shadow of an inclination to believe all the way to complete certainty.” (Plantinga 1993, p. 43) Accordingly, there may be any degree of firmness with which I believe \(p\). And “in the typical case, the degree to which I believe a given proposition will be proportional to the degree it has of warrant.” (Plantinga 1993, p. 9) This is true, of course, only if my internal and external cognitive faculties are functioning properly. If my memory works well, my cognitive equipment will give me a strong internal marker for a belief that has a high degree of warrant, and I will form a belief with a firmness that is proportional to the doxastic evidence. If, on the other hand, my memory is malfunctioning and I am not aware of this fact, my cognitive faculties will give me a strong internal indication that \(p\) is true, and I will form a belief in accordance with the doxastic evidence, but my belief will have no warrant. But, again, if everything is functioning properly, the degree of doxastic evidence will be proportional to the degree of warrant a belief has for me.

If my eye-sight is good and I see a house 100 m in front of me in clear daylight, the belief “there is a house in front of me” will have a high degree of warrant for me and my inclination to believe, my doxastic evidence, will be accordingly strong. If, on the other hand, my eye-sight is not good—I have forgotten my glasses—and I see a house 100 m in front of me on a foggy day, my inclination to believe “there is a house in front of me” will be weaker than in the first case and that will correspond to the fact that this belief has less warrant for me under these circumstances. I will come back to this observation when I argue that there are not only internal defeaters, but also “confirmers”.

4 Internal defeaters

I already mentioned that there are internal defeaters as well as external ones. To avoid misunderstandings, I will try to make clear how I use the term “internal defeater”. When Plantinga speaks of defeaters, generally he speaks of internal defeatersFootnote 4 and defines them (2000, p. 363) as follows:

\(D\) is a purely epistemic defeater of \(B\) for \(S\) at \(t\) if and only if

(1) \(S\)’s noetic structure \(N\) at \(t\) includes \(B\) and \(S\) comes to believe \(D\) at \(t\), and

(2) any person \(S^*\)

  (a) whose cognitive faculties are functioning properly in the relevant respects,

  (b) who is such that the bit of the design plan governing the sustaining of \(B\) in her noetic structure is successfully aimed at truth (i.e., at the maximization of true belief and minimization of false belief) and nothing more,

  (c) whose noetic structure is \(N\) and includes \(B\), and

  (d) who comes to believe \(D\) but nothing else independent of or stronger than \(D\),

would withhold \(B\) (or believe it less strongly).

To put it briefly: \(D\) is a defeater of \(B\) for \(S\) if \(S\) believes \(D\) and any internally rational person \(S^*\) who believes \(D\) would withhold \(B\) (or believe it less strongly). So, clearly, \(D\) is a believed proposition.

Kvanvig (2007, p. 111) thinks that this definition “does not represent Plantinga’s full thinking on the matter” and that defeaters for Plantinga can be—contrary to his “official account”—not only beliefs but also experiences. An adjustment of the definition is necessary in his opinion because of Plantinga’s statement that argument is one way to give me a defeater but not the only way: “I claim that there are no prickly pear cacti in the upper peninsula of Michigan; you take me into the woods up there and show me a particularly luxuriant specimen; rationality requires that I drop my now discredited belief.” (Plantinga 2000, p. 367) But contrary to Kvanvig I don’t think that we have to adjust the definition so that not only beliefs but also experiences can be internal defeaters. The distinction Plantinga makes here is not between a belief as defeater and an experience as defeater but between coming to a believed defeater by inferenceFootnote 5 and coming to a believed defeater by an experience. In Plantinga’s example the natural thing is to form the belief “there is a prickly pear cactus” almost simultaneously with my experience of seeing the cactus, and if my cognitive faculties are working properly this is exactly what will happen. I therefore see no reason to think that the definition Plantinga gives us is not exactly what he really thinks a defeater is.

Bergmann (2005, p. 422; 2006, p. 155) goes one step further than Kvanvig and defines “mental defeaters” so as to include not only experiences but also propositional attitudes. And propositional attitudes can be not only believing \(p\) but also disbelieving \(p\) or “an attitude of significant uncertainty about the proposition” (2005, p. 427). I will follow Plantinga’s understanding of defeaters as beliefs. For even if Bergmann is right that the class of mental state defeaters includes more than beliefs, beliefs as defeaters are at least a subclass of mental state defeaters. I want to focus on this kind of defeater.

To limit my investigation to beliefs as defeaters has, I think, two advantages: (i) The concept of an internal defeater-system (with beliefs as defeaters) is widely accepted, and it is a useful one. It helps us to identify a module of our cognitive equipment that is aimed at a special feature of internal rationality. This internal defeater-system can be distinguished from other cognitive modules and from states of affairs that can be called propositional or external defeaters; to avoid misunderstandings it should be clearly distinguished from them. (ii) Bergmann (1997, pp. 405–407) has shown that internalists, as well as many externalists, accept a “no-defeater condition” (NDC) in the sense of a “no-believed-defeater condition” as necessary for warrant. Internal defeaters, understood in accordance with Plantinga’s definition as beliefs, are therefore a relevant part of probably most internalist and externalist theories.

There can be two kinds of defeaters: (i) if you hold a belief A but then you acquire another belief B that is inconsistent with A, you have a rebutting defeater for A. “I see (at a hundred yards) what I take to be a sheep in a field and form the belief that there is a sheep in the field; I know that you are the owner of the field; the next day you tell me that there are no sheep in that field, although you own a dog who looks like a sheep at a hundred yards and who frequents the field. Then (in the absence of special circumstances) I have a defeater for the belief that there was a sheep in that field and will, if rational, no longer hold that belief.” (Plantinga 2000, p. 359)

While a rebutting defeater produces an inconsistency in your belief system that causes you to give up the defeated belief, an undercutting defeater undercuts the evidence for your belief: “You enter a factory and see an assembly line on which there are a number of widgets, all of which look red. You form the belief that indeed they are red. Then along comes the shop superintendent, who informs you that the widgets are being irradiated by red and infrared light, a process that makes it possible to detect otherwise undetectable hairline cracks. You then have a defeater for your belief that the widget you are looking at is red. In this case, what you learn is not something incompatible with the defeated belief (you aren’t told that this widget isn’t red); what you learn, rather, is something that undercuts your grounds or reasons for thinking it red.” (Plantinga 2000, p. 359) To put it another way: What you learn shows you that the cognitive environment is not appropriate for your visual faculties. You realize that R (iii) is not fulfilled. If your cognitive faculties are functioning properly you will give up your belief. Even if the shop superintendent has fooled you and R (iii) indeed is fulfilled because there is no red light in the factory, it would be irrational for you to hold the belief “the widgets are red” as long as you believe that R (iii) is not fulfilled. If you held this belief in spite of a defeater, the belief would have no warrant. So when we unfold the implications of the defeater system we can see that R (iii) is not only an external requirement but also an internal one. A further investigation of the defeater system will show that defeaters must have “confirmers” as a counterpart.

5 The implications of the internal defeater-system: where there are defeaters, there are also “confirmers”

Most theories in epistemology accept that there is an internal defeater-system in the sense described. In the following section I will argue that the unfolding of the implications of this system shows that there are not only defeaters but also “confirmers”. I want to start with a case of perceptual beliefs.

5.1 What kind of beliefs do I need to know that there is a desk in front of me?

Plantinga (2000, p. 178) points out that most of our beliefs have warrant in a basic way. We form these beliefs spontaneously without inferring them from some other beliefs. My belief “There is a desk in front of me” is not the product of an argument like:

(A) If

  (i) my eye-sight is good,

  (ii) the lighting conditions are acceptable, and

  (iii) the cognitive environment is not misleading,

then my perceptual beliefs about medium-sized objects at a distance of less than 5 m are reliable.

(B) My eye-sight is good.

(C) The lighting conditions are acceptable.

(D) The cognitive environment is not misleading.

(E) I have the visual impression that just in front of me there is a desk.

(F) I don’t have any other beliefs that are incompatible with the belief that there is a desk in front of me.

Therefore:

(G) There is a desk 2 m in front of me.

At first blush it seems that only the basic belief (G) matters. Beliefs (A)–(F) seem to be irrelevant, and it is indeed doubtful whether we even hold these beliefs. I think Plantinga is right that we usually don’t hold these beliefs at a conscious level and that we don’t infer (G) by an argument from propositions (A)–(F). But that is not the whole story. I will argue that we indeed do hold beliefs like (A)–(F) and that we have to hold these beliefs if we believe (G). I want to show this by examining the beliefs (A)–(F) successively.

As a first step I want to classify the beliefs in three groups: Belief (A) is an application of Plantinga’s warrant-requirements R (i) to R (iii) to the faculty of sense perception. Let us call it the “requirement belief”. (B)–(D) are beliefs about the actual fulfillment of the requirements. Let us call them the “requirement fulfillment beliefs”. (E) is a proposition about a state of my mind. (F) is a belief about the consistency of the belief system. I will call it the “consistency belief”.

5.2 The requirement belief

The requirement belief (A) implies (a) the basic presupposition that our visual perception is reliable—R (i)—and (b) some experience-based descriptions of the circumstances under which beliefs produced by this faculty fulfill R (ii) and R (iii). (A) therefore includes the belief that the cognitive faculty of visual perception is successfully aimed at truth. If (a) is true for visual perception and for sense perception in general, our past experience of sense perception will lead us to beliefs like (A) (i)–(iii) that spell out in some detail what it means that my cognitive faculties are functioning properly and that the belief (G) is produced in the right cognitive environment.

5.3 The “requirement fulfillment beliefs”: an asymmetry in Plantinga’s model

As I have already stated, according to Plantinga the “requirement fulfillment beliefs” (B)–(D) don’t play any role in producing belief (G). But there is an asymmetry in Plantinga’s model—and the same is true, it seems to me, for other externalist theories that accept the no-believed-defeater condition. While the “requirement fulfillment beliefs” (B)–(D) are considered non-existent or at least irrelevant for warrant, some of their negative counterparts are seen as relevant for warrant:

  • –(B) My eye-sight is not good.

  • –(C) The lighting conditions are bad.

  • –(D) The cognitive environment is misleading.

These beliefs count as defeaters for (G). The beliefs –(B), –(C) and –(D) are undercutting defeaters: they show that R (ii) and R (iii) are not fulfilled. If my eye-sight is not good, then my cognitive faculties are not functioning properly. And if the lighting conditions are bad and the cognitive circumstances are misleading, then the cognitive environment is not appropriate for the relevant cognitive faculties. But a belief is only warranted if the requirements are fulfilled. Therefore –(B), –(C) and –(D) force me to abandon (G).

So there is obviously an asymmetry here: In Plantinga’s theory the defeaters play an important role in the design plan, but their positive counterparts (B)–(D) seem to play no role at all. This asymmetry raises the question whether these beliefs indeed play no role in our cognitive design plan. It seems more plausible to me that they do have a function, a function that can easily be overlooked. I will argue that we do hold these beliefs and, if we hold them, they are necessary in order to be warranted in believing (G).

5.4 We do hold “requirement fulfillment beliefs”

According to Plantinga—and other externalists—our cognitive practice shows a complete lack of beliefs like (B), (C) and (D). We form beliefs like (G) spontaneously without inferring them from (B), (C) and (D). I think it is true that we form beliefs like (G) spontaneously, but nevertheless I think that we have beliefs like (B), (C) and (D).

Let me explain this. All of us have a huge number of beliefs like “Paris is the capital of France”, “I had toast for breakfast this morning”, “2 + 7 = 9”, etc. But as human beings we have limited cognitive capacities and are only able to be fully aware of comparatively few beliefs at a given point in time. Just at this moment I have in mind some ideas about epistemology, but my knowledge of Latin grammar or my experiences during my years in high school are not in my mind just now. Nevertheless I have a lot of beliefs about these subjects—warranted beliefs in some cases, I hope—that I can recall if I want to. So “to believe” is not the same as “to have in mind”.

The limited space of our “active store” makes it necessary that the vast majority of our beliefs remain outside our present attention. It seems to be part of our cognitive makeup that our mind keeps as many beliefs as possible outside our actual attention if they are not necessary at the moment. This seems to be the reason why under normal circumstances we don’t have beliefs like (B), (C) and (D) in mind. But that does not mean that we don’t have these beliefs. Under normal circumstances I do believe that my eye-sight is good and that the cognitive environment is not misleading, even if I don’t have these beliefs in mind. Since these are the normal circumstances, it is not necessary to give any attention to the relevant beliefs.

Under normal circumstances I am not aware of my heartbeat, my breathing, and the weight of my shoes at a conscious level, although at some level I am aware of my heartbeat, my breathing and my shoes. The picture changes if my heart starts beating irregularly: then my belief about my heartbeat will rise to a conscious level. Similarly, if there is some problem with my eye-sight or with the circumstances, in most cases I will become aware of that. But even if this awareness does not rise to the level of conscious reflection, my cognitive equipment will adjust the doxastic evidence to the epistemic situation. Remember the example in Sect. 3: If my eye-sight is not good—I have forgotten my glasses—and I see a house 100 m in front of me on a foggy day, my inclination to believe “there is a house in front of me” will be weaker than in more favorable circumstances. I take this as evidence that we do hold requirement fulfillment beliefs. The reason for the absence of (B)–(D) at a conscious level is simply that they are not relevant and can therefore remain unnoticed. Nevertheless I have these beliefs: If I report my seeing the desk to my friends the next day and someone asks me whether my eye-sight was good or whether the lighting conditions were acceptable, I will be able to recollect my memories and respond: “My eye-sight was good, I could find no malfunction yesterday. And the lighting conditions were all right.” I may add “as far as I remember”, but I may add this caveat also if someone asks me what I had for breakfast yesterday. It does not show that I don’t have any beliefs about my eye-sight; it only shows that a lot of our memory beliefs are not as certain as e.g. beliefs from present sense perceptions. So it seems to me that indeed we do hold requirement fulfillment beliefs like (B)–(D), although under normal circumstances they remain unnoticed like many other beliefs.

There are, however, unusual circumstances in which I am aware of requirement fulfillment beliefs. For instance, if I get new prescription glasses that correct an old problem with my depth perception, then I will go around, at least for a while, with the belief that my eye-sight is now functioning properly. In cases like this I would even have a confirmer consciously.Footnote 6

6 “Requirement fulfillment beliefs” (“confirmers”) and warrant

Some externalists may not be very interested in the question whether we hold requirement fulfillment beliefs, since they think that these beliefs are irrelevant for warrant. According to Plantinga’s proper function model, for (G) to be warranted (B), (C) and (D) have to be true. But we don’t have to believe (B), (C) and (D) in order for (G) to have warrant. It seems to be enough if our cognitive faculties are functioning properly in a cognitive environment that is appropriate for them.

But if I am right that we do hold beliefs like (B)–(D), these beliefs are necessary for warrant. An objection to this claim may be the contention that what counts for warrant is the fact that (B), (C) and (D) are true, not our beliefs about (B)–(D). An investigation of their counterparts—the defeaters –(B) to –(D)—shows that the objection is not successful. If the only relevant question were whether the requirements are fulfilled, and not what we believe about their fulfillment, this should be the case not only when the requirements are fulfilled but also when they are not.

Imagine that I see the desk in an exhibition and form belief (G). But then I remember having heard that this exhibition sometimes shows extraordinarily clever holograms instead of real objects. I form the false belief: “This exhibition shows a lot of holograms.” So I have a defeater for my belief (G): I believe that the cognitive environment is misleading (that R (iii) is not fulfilled). If I held (G) any longer, it would have no warrant, because my internal rationality would not be functioning properly. If only the fact that (D) is true were relevant for warrant, my belief (G) would have warrant. But that is not the case. Since I believe –(D), belief (G) is not warranted. So not only the facts about the fulfillment of the requirements are relevant for warrant, but also my beliefs about the fulfillment of the requirements. I will call beliefs that the requirements are fulfilled “confirmers”. While defeaters invalidate a belief, confirmers validate a belief.

One may wonder why an unwarranted and false belief –(D) can deprive (G) of its warrant in spite of the fact that the true proposition (D) confers warrant on (G).Footnote 7 The reason is that for a belief to have warrant it is necessary that the external processes are reliable and the internal processes are functioning properly. It is not enough that only one of these requirements is fulfilled. If I believe –(D) and (G), my internal processes are not functioning properly and (G) has no warrant. And, of course, the same is true if my internal processes are completely rational but, unbeknownst to me, –(D) is true and therefore the external processes are not reliable. This double requirement leads to a kind of asymmetry: believing –(D) is enough to destroy the warrant for (G), but believing (D) is not enough for (G) to have warrant: (D) has to be a true proposition. And it is enough that –(D) is true to destroy the warrant for (G), but it is not enough for (G) to have warrant that (D) is true—I have to believe (D), or at least not believe –(D). This kind of asymmetry arises naturally when more than one condition has to be fulfilled in order to achieve some goal.Footnote 8
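The asymmetry just described can be stated schematically (the notation is mine, not Plantinga’s): writing \(B_S(p)\) for “\(S\) believes \(p\)” and \(W(G)\) for “(G) is warranted for \(S\)”, warrant requires the truth of (D) together with the absence of the believed defeater:

\[ W(G) \rightarrow (D) \wedge \neg B_S(\neg(D)). \]

Each conjunct is individually necessary, so either the falsity of (D) or the mere belief in –(D) suffices to destroy the warrant for (G), while neither the truth of (D) alone nor the belief in (D) alone suffices to confer it.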

So what counts for warrant is not only the fact that (B), (C) and (D) are true but also our beliefs about (B)–(D). But isn’t it enough that I do not believe –(B) to –(D), i.e. that I don’t have a believed defeater? Is it really necessary that I believe the confirmers (B)–(D)? To satisfy the widely acknowledged no-believed-defeater condition it is enough not to believe –(B) to –(D). But if it is true that we indeed have these requirement fulfillment beliefs, as I have argued in Sect. 5.4, it seems plausible that they play a crucial role for warrant. My claim is that we always have beliefs regarding the fulfillment of the requirements for warrant. And having a confirmer and having a defeater are, of course, mutually exclusive. If I have a defeater, I don’t have a confirmer and my belief has no warrant. If I have no defeater, I have a confirmer and therefore my belief is warranted.

But there seem to be more options than to believe (B)–(D), or –(B) to –(D). Let us call the conjunction of the true propositions (B)–(D) the “requirement fulfillment” (RF) and the conjunction of the beliefs that (B)–(D) are fulfilled the “requirement fulfillment belief” (RFB). In the ideal case (RF) is completely fulfilled and we believe (maybe not with conscious awareness) that (RF) is true. (G) therefore has maximal warrant; let us say (G) has a warrant of 1. In a less ideal case the lighting conditions are rather bad but still good enough to perceive a desk with some certainty. Let us assign (RF) a value of 0.8. If I assess my cognitive situation correctly, the content of (RFB) will be: “(RF) is fulfilled to a degree of 0.8.” And if my internal cognitive processes are functioning properly, I will still be inclined to believe (G), but the inclination will not be as strong as in the ideal case. The doxastic evidence for (G) will be 0.8 and I will believe (G) with a firmness of 0.8. Therefore, to say that we always have (RFB) is not to say that we always have either an undercutting defeater that destroys the warrant for (G) completely or a confirmer that gives (G) a doxastic evidence of 1. Here not only black and white are possible but all shades of gray. In the cases we just imagined, the degree of warrant is equivalent to the degree of (RF), so it may seem that (RFB) has no influence on the degree of warrant. It is obvious, though, that not only an (even if erroneously) believed defeater can destroy warrant; a belief (RFB) that erroneously ascribes to (RF) a lower degree can also lower the degree of warrant. If I have some slight doubts whether the desk is a real desk instead of a hologram, I may still be inclined to believe (G) with a degree of 0.8. Although (RF) is 1, the warrant of (G) will be 0.8. If it is right that we always have (RFB), (RFB) will determine the degree of warrant a belief has—regardless of whether (RFB) is a defeater, a confirmer or something in between.
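If degrees of fulfillment and warrant are measured on a scale from 0 to 1, the cases just discussed suggest a schematic summary (this is my reconstruction, not a formula Plantinga offers): the degree of warrant of (G) cannot exceed either the degree to which the requirements are in fact fulfilled or the degree to which (RFB) takes them to be fulfilled,

\[ w(G) = \min\bigl(\deg(RF),\, \deg(RFB)\bigr). \]

This covers both cases: \(\deg(RF) = 0.8\) with an accurate \(\deg(RFB) = 0.8\) yields \(w(G) = 0.8\), and \(\deg(RF) = 1\) with a doubting \(\deg(RFB) = 0.8\) likewise yields \(w(G) = 0.8\).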

7 “Consistency beliefs” are also “confirmers”

What I have said about requirement fulfillment beliefs is also true for consistency beliefs. Their counterparts—beliefs like –(F) “I have other beliefs that are incompatible with the belief that there is a desk in front of me”—are rebutting defeaters. If I have a successful rebutting defeater, my belief (G) has no warrant. So clearly –(F) plays an important role for the warrant of (G) and it seems plausible that its positive counterpart (F) has a function as well.

Under normal circumstances I don’t hold belief (F) at a conscious level. As long as (F) is true there is no necessity to pay attention to the fact that (G) does not contradict any other beliefs that I have. I already pointed out that this is the way our cognitive equipment works: beliefs that are not relevant just now normally remain outside our conscious awareness. If everything is “all right”, there is no reason to pay attention to this fact. But if I report my seeing the desk the next day to my friends and someone asks me: “Did you believe that your seeing the desk was consistent with other beliefs you held?” in most cases I will answer spontaneously: “Of course I did!” I probably will add: “Where do you think there is an inconsistency?” Two observations seem to me worth mentioning: First, I am able to report without hesitation that I did think that there was no inconsistency in my belief system. This indicates that I did believe (F). Second, the very fact that a friend asks a consistency question is unusual and makes sense only if there is the suspicion that there is an inconsistency in my belief system. Under normal circumstances beliefs like (F) remain outside our conscious awareness. This indicates that we take it for granted that our cognitive equipment would give us a signal if there were an inconsistency. If there is no signal, we suppose everything is all right: the new belief fits well into what we already believe.

But here a caveat is necessary: We have a great number of beliefs and it may very well be that we overlook something, especially if the relation of a belief to other beliefs is more complex than that of “There is a desk 2 m in front of me.” In such cases we have something like an epistemic duty to consider more carefully whether the new belief is consistent with what we already believe, and we must not take it for granted that our belief system is coherent if no contradiction pops up spontaneously.Footnote 9

So I conclude that we do have consistency beliefs, but these beliefs are normally at an unconscious level. Like requirement fulfillment beliefs, consistency beliefs are confirmers: they validate beliefs, while their counterparts, rebutting defeaters, invalidate them. But we have seen in the case of “requirement confirmers” that there is not only black and white, but all shades of gray. It may be that something similar is true of consistency confirmers. Therefore I want to examine the function of “consistency confirmers” and rebutting defeaters a bit further. To get a clearer picture of the role of confirmers and defeaters, I will investigate what happens, or what should happen, in a rational cognitive process if a new belief \(p\) contradicts \(q\), a proposition we believed until now.

8 What to do with conflicting beliefs: the principle of doxastic prevalence

If a new belief \(p\) contradicts an "old" belief \(q\), the reasonable thing to do may be to give up \(q\). On the other hand, it may not. Under what circumstances is it rational to give up \(q\), and when is it rational to maintain it? In what follows I want to unfold what I take to be an uncontroversial principle governing rational processes in cases of conflicting beliefs.

I will start with a simple example. I am walking across the countryside and see, at a distance of 500 m, what I take to be a tower. I form the belief \(q\) "There is a tower on the mountain top." But then I look through my spyglass and see that what I had taken to be a tower is a large tree. My new belief \(p\) "There is a tree on the mountain top" is a defeater for \(q\), and under normal circumstances I will immediately give up \(q\). But then I remember that I was there some days before and that there was no tree at all on the mountain top. I find myself with the memory belief \(r\) "Some days ago there was no tree on the mountain top." Given that it seems biologically impossible for a tree to grow to a height of 10 m in a few days, and that it is highly improbable that someone planted such a large tree, my belief \(r\) is a potential defeater for \(p\). I take another look through my spyglass and still see a tree very clearly. Although I may be a bit perplexed, I suppose that my belief \(r\) will not cast even the shadow of a doubt on \(p\). I suppose everyone will agree that it would be irrational to give up \(p\) because of \(r\), or to maintain \(q\) and give up \(p\).

So why is it rational to give up \(q\) because of \(p\), but not vice versa? My eyes give me a somewhat unclear picture of a tower in the distance; therefore \(q\) has only a moderate degree of doxastic evidence for me. A look through my spyglass gives me a clear picture of a tree; therefore \(p\) has a high degree of doxastic evidence for me. In case of conflict it seems rational to give up the belief with the lesser degree of doxastic evidence. And this is how our cognitive processes normally work. I want to call this "the principle of doxastic prevalence." Someone may find a better term, but I suppose that no one will deny the principle itself.
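The principle can be put schematically. Let \(e(x)\) stand for the degree of doxastic evidence a belief \(x\) has for a subject; this is a notational shorthand introduced here only for illustration, not a claim that doxastic evidence admits of precise measurement:

```latex
% Principle of doxastic prevalence (schematic formulation):
% when two beliefs conflict, the belief with the lesser degree
% of doxastic evidence is the one to give up.
\[
\bigl(p \rightarrow \neg q\bigr) \;\wedge\; e(p) > e(q)
\;\Longrightarrow\;
\text{rationality requires giving up } q \text{, not } p.
\]
```

In the tower example, \(e(p) > e(q)\) because the view through the spyglass is much clearer than the naked-eye view, so it is \(q\) that must be given up.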

If the degree of doxastic evidence is high enough, a potential defeater will not reduce this evidence. I have the clear memory belief \(p\) "Yesterday I spent the whole day in Paris." But then my friend Mike tells me \(q\): "I saw you in Tokyo yesterday." Mike is normally a reliable person, and his testimony \(q\) implies \(\neg p\); in other circumstances it would be evidence for \(\neg p\). In this case, however, the potential defeater \(q\) will have no force at all in the face of \(p\), because \(p\) has much more doxastic evidence for me. Belief \(p\) is a defeater for the potential defeater \(q\), or, to use a term of Alvin Plantinga's: \(p\) is an intrinsic defeater-defeater for \(q\).

In other cases a potential defeater \(q\) may not lead me to give up my belief \(p\), but it may diminish the degree of doxastic evidence for \(p\). This morning I saw Susan, a friend of mine from high school whom I hadn't seen for years, in the supermarket. I didn't talk to her, but I have the firm belief \(p\): "Susan was in the supermarket this morning." Now my friend Mike tells me that he has heard from another friend that Susan lives in China. My new belief \(q\) "Susan lives in China" is a potential defeater for \(p\); on the other hand, it may be that Susan is here just now to visit her family. So it may be that I still hold \(p\), but with a lesser degree of certainty. Let us say that my doxastic evidence for \(p\) was 0.9 after my visit to the supermarket, but now it is 0.7. If Mike tells me that he is still in contact with Susan and that she wrote in an email some hours ago that she had just visited the Forbidden City in Beijing, the doxastic evidence for \(p\) may decrease to 0.2. I will probably give up the belief, even though I still take it to be possible that it was indeed Susan whom I saw in the supermarket.

We can, of course, imagine still more beliefs that may have an impact on \(p\), and we can imagine all degrees of doxastic evidence from 0.001 to 1. But I suppose that the principle I want to point out is clear enough. The conclusion I want to draw from these observations is this: the lack of a confirming consistency belief may or may not diminish the doxastic evidence of a belief. We do have, sometimes unnoticed, beliefs about the consistency of our belief system. If we have a consistency belief that a new belief \(p\) does not contradict our other beliefs, we have a confirmer for \(p\). If we lack such a confirmer, i.e. if there is a contradiction between \(p\) and our other beliefs, this will in many cases reduce the doxastic evidence of \(p\).

9 Conclusion

It is widely accepted that there are beliefs that function as undercutting or rebutting defeaters of a belief \(B\). I have argued that in the absence of an undercutting belief \(D\) for \(B\) we hold a confirming belief \(C\), at least at an unconscious level. If this is true, \(C\) determines the warrant of \(B\) just as \(D\) would. There is, however, not only complete confirmation or complete defeat; there are all kinds of intermediate stages. Similarly, in the absence of a rebutting belief we hold a confirming "consistency belief". If there is a belief that is inconsistent with the rest of our noetic structure, we have a potential rebutting defeater \(D\). Whether \(D\) destroys the warrant of \(B\), and if so, to what degree, is determined by the principle of doxastic prevalence.