Schoenfield (2014) begins “Permission to Believe” with the following case:

I was once talking to a very religious friend… about whether or not her particular religious beliefs were justified. As these conversations tend to go, we each proposed arguments that challenged the other’s beliefs, responded to them, deemed the other’s responses unsatisfactory, and neither of us budged (Schoenfield 2014, p. 193).

Schoenfield describes a case of persistent disagreement between two epistemic peers: even after disclosing their arguments, evidence and objections, they stand their ground. Assuming that these agents are unable to settle their dispute by presenting their evidence and reasoning (which they have disclosed to each other), we have a case of deep disagreement between epistemic peers.

There is a simple explanation of why two agents (call them Kate and Brad) who disclose their evidence and reasoning sometimes persistently disagree about some claim: one of them could be irrational. Schoenfield, however, offers a different explanation. She thinks that epistemically rational agents can find themselves in a permissive situation, in the sense that, relative to the same body of evidence, it is rational for one agent to believe P and rational for the other to disbelieve P. Schoenfield goes a step further and argues that Kate can be rationally required to believe P while Brad is rationally required to disbelieve P. So, even after disclosing their evidence, reasoning and objections, they ought to disagree. Hence, if Schoenfield is right, there are cases of fundamental and persistent rational disagreement between epistemic peers.

In order to make sense of Schoenfield’s view, we need to make a distinction between intrapersonal and interpersonal versions of permissiveness, as in the following:

  • Intrapersonal Permissiveness: Relative to a body of evidence, one epistemically rational agent is permitted to take distinct incompatible doxastic attitudes towards P. For example, an agent could be rationally permitted to believe P and rationally permitted to believe ~ P (relative to the same evidence).

  • Interpersonal Permissiveness: Two epistemically rational agents who share the same evidence E can hold incompatible attitudes towards P. For example, agent 1 could be rationally permitted to believe P and agent 2 could be rationally permitted to believe ~ P (relative to the same evidence).

Relative to the above distinction, there are three possible stances concerning epistemic permissiveness. Some authors have argued that permissiveness is false at both the intrapersonal and the interpersonal level,Footnote 1 while others hold that it is true at both levels.Footnote 2 Still others (like Schoenfield) endorse both intrapersonal uniqueness and interpersonal permissiveness.Footnote 3 If such a compromise solution is correct, there are situations in which, relative to a body of evidence shared by two agents, there is no uniquely rational answer at the interpersonal level, but one agent is required to believe P and the other is required to disbelieve P.

This paper can be understood as a response to Schoenfield’s argument. I will remain neutral on the first two positions (permissiveness across the board and uniqueness across the board). I am here concerned with the third option, namely, the compromise between intrapersonal uniqueness and interpersonal permissiveness. I will argue that the compromise solution cannot hold under full disclosure (i.e., in cases where agents are fully aware of each other’s arguments and evidence). In other words, I will argue for a necessary connection (or a “bridge”) between interpersonal and intrapersonal epistemically permissive situations in cases where agents are fully aware of each other’s arguments and evidence, as in the following:

  • Restricted Bridge: Assume that two epistemically rational agents share the same evidence E,Footnote 4 take conflicting attitudes towards P, and are fully aware of each other’s evidence and reasoning. In such a context, if agent 1 is permitted to take doxastic attitude D towards P, then agent 2 is also permitted to take doxastic attitude D towards P.

Thus, I am making a conditional claim: provided that some cases of disagreement under full disclosure are permissive at the interpersonal level, such situations are also permissive at the intrapersonal level. Since I will merely be arguing for a conditional claim, I will assume throughout this paper that interpersonal permissiveness under full disclosure is true—I assume the antecedent of the conditional in order to derive its consequent.

In Sect. 1, I will clarify some notions such as evidence, peerage, and full disclosure. In Sect. 2, I will criticize a popular line of reasoning against the Restricted Bridge, which relies on diachronic requirements of rationality. In Sect. 3, I will argue that a plausible principle of correct argumentation (the Weak Burden of Proof Principle) supports the Restricted Bridge. This will lead me to conclude that the Restricted Bridge is correct.

The Restricted Bridge has important implications for the debate surrounding rational peer disagreement.Footnote 5 Counterexamples to the Restricted Bridge would tell against conciliationism, which roughly states that epistemic peers are rationally required to revise their doxastic attitudes in the face of disagreement: such counterexamples are cases in which epistemic peers can be required to disagree with each other, which is incompatible with conciliationism. Since I will argue that the Restricted Bridge is plausible, this line of reasoning against conciliationism is unavailable.

1 Preliminary Remarks

Kelly (2014) argues that there are situations where an agent is required to take a unique doxastic attitude towards P at the intrapersonal level, while being in an interpersonal permissive situation relative to P. For example, two agents could share exactly the same relevant evidence, but agent 1 could be required to believe P while agent 2 could be required to believe ~ P. In such a case, the agents are not in an intrapersonal permissive situation (since agent 1 is permitted only to believe P and agent 2 is permitted only to disbelieve P), but, between them, they take distinct incompatible doxastic attitudes towards P. If this is correct, there is no “bridge” between interpersonal permissiveness and its intrapersonal counterpart. In view of the foregoing, here is how we can define the bridge between interpersonal permissiveness and its intrapersonal counterpart:

  • Bridge: Assume that two epistemically rational agents who share the same evidence E take conflicting attitudes towards P. In such a context, if agent 1 is permitted to take doxastic attitude D towards P, then agent 2 is also permitted to take doxastic attitude D towards P.

It should be noted that agents can take conflicting attitudes concerning epistemic standards; in such a case, the belief that P would be a belief concerning epistemic standards. An agent’s epistemic standards are the rules, models or assumptions he or she relies on to evaluate the evidence. They act as functions mapping an agent’s evidence onto doxastic attitudes towards P, and so can include background beliefs, standards of reasoning, prior probability distributions and the like. One motivation for interpersonal permissiveness is the thought that different sets of epistemic standards are permissible. By formulating interpersonal permissiveness in terms of beliefs, we thereby take into account the possibility of interpersonal permissiveness with respect to epistemic standards.

As I indicated in the introduction, I will here focus on a restricted version of the Bridge. Before I discuss such a view, I want to clarify how I understand the notions of evidence and peerage (or the fact of sharing evidence). First, the notion of evidence is problematic (Kopec and Titelbaum 2016). In this paper, I will take the concept of evidence to refer to a very broad range of things, including seemings, intuitions, direct perception, memory, arguments, a priori reasoning and so forth.

With this understanding of evidence, we end up with a very demanding and unrealistic notion of shared evidence. It is not at all plausible that real-life agents share exactly the same relevant evidence, including perceptual experiences and intuitions. King (2012, Sect. 1.2), for example, stresses that philosophers or scientists who disagree rarely rely upon the same arguments. Provided that arguments are part of an agent’s evidence, this means that their bodies of evidence are not identical, and so the peerage condition is not satisfied. Furthermore, King notes that some pieces of evidence cannot be shared through discussion or argumentation. Specifically, discussing or arguing with each other does not result in the transmission of “perceptual experiences, rational insights, seemings, or intuitions” (King 2012, p. 256). Hence, agents may never have the same evidence, since not all phenomenal experiences can be transmitted to others through testimony.

In the debate surrounding uniqueness, it is now commonly acknowledged that epistemic peerage is an idealized notion (Christensen 2014; Kelly 2010). In the examples in this paper, I will simply assume that agents are idealized epistemic peers who disagree, and I will not be concerned with the plausibility of such an assumption. But then, why should we care about these idealized cases, since they probably never occur in real life? Perhaps there are no “perfect” epistemic peers, but agents can come close to being perfect epistemic peers (especially in cases where their evidence and reasoning are fully disclosed). In such contexts, idealizations can be relevant to real life. For instance, if idealized rational epistemic peers can never rationally disagree on whether P, it is natural to think that, in most cases, rational agents who come extremely close to being epistemic peers should not disagree on whether P.

A fairly good approximation of peerage occurs when agents fully disclose their evidence and reasoning to each other. This is why I will here focus on the following version of the Bridge:

  • Restricted Bridge: Assume that two epistemically rational agents share the same evidence E, take conflicting attitudes towards P and are fully aware of each other’s evidence and reasoning. In such a context, if agent 1 is permitted to take doxastic attitude D towards P, then agent 2 is also permitted to take doxastic attitude D towards P.Footnote 6

Full disclosure is not uncommon in argumentative contexts. Typically, when epistemic peers argue with each other, they disclose their evidence as well as their reasoning. In this paper, I am interested in peer disagreement and argumentative principles, which is why I will focus on the Restricted Bridge.

2 Diachronic Rationality and the Restricted Bridge

2.1 Diachronic Rationality and Diachronic Prohibition

I will start by explaining why an influential line of reasoning against the Restricted Bridge is problematic. The most popular explanation of why there are counterexamples to the Restricted Bridge is that some requirements of epistemic rationality limit an agent’s permissions over time, as in the following:Footnote 7

  • Diachronic Prohibition: If agent A has a specific set of rational doxastic attitudes at time t0 and does not acquire new evidence between t0 and t1, A should refrain from changing his or her doxastic attitudes at time t1.

Mainstream interpretations of Bayesian conditionalization imply Diachronic Prohibition. While most subjective Bayesians admit that there may be more than one rational assignment of prior credences, they argue that epistemically rational agents are prohibited from changing their priors over time. The reason why agents cannot arbitrarily change their priors is that doing so would violate the diachronic norm of conditionalization, as Meacham explains:

Arbitrarily changing one’s beliefs without getting new evidence violates conditionalization. Since Bayesians accept conditionalization, they will reject [that there is nothing wrong with arbitrarily changing one’s credence in P from x to y]. (Meacham 2014, p. 1206)
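For reference, the norm at issue can be stated as follows (a standard textbook formulation; the notation is mine, not Meacham’s). Where Cr_t is the agent’s credence function at time t and E is the total evidence learned between t0 and t1, conditionalization requires:

\[
Cr_{t_1}(P) \;=\; Cr_{t_0}(P \mid E) \;=\; \frac{Cr_{t_0}(P \wedge E)}{Cr_{t_0}(E)} \qquad \text{(assuming } Cr_{t_0}(E) > 0\text{)}.
\]

In the limiting case where no new evidence is acquired between t0 and t1, the norm requires Cr_{t_1}(P) = Cr_{t_0}(P); this is the sense in which conditionalization underwrites Diachronic Prohibition.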

2.2 Diachronic Prohibition is Implausible

Diachronic Prohibition could provide an explanation of why, over time, agents end up under distinct incompatible rational obligations. However, I find such a requirement implausible. To show why, I will start by offering two thought experiments (Robot Acquisition and Transplant). These thought experiments have the same relevant normative features. I will then argue that, if Diachronic Prohibition is true, we need to make a normative distinction between these cases, which is problematic.Footnote 8 This will lead me to reject Diachronic Prohibition.

To begin with, consider the following two cases:

  • Robot Acquisition: Kate has designed Alpha and Beta, two robots based on replicas of her brain system. She has implanted circuits and chips in her brain so that all the evidence and sensory experiences she gathers are directly transmitted to her robots. While Kate has a risk-neutral prior function, her robots have, respectively, a risk-averse prior function and a risk-seeking prior function. When a big company asks her which robot it should buy, Kate replies that she follows the principles of interpersonal permissiveness and that there is no uniquely optimal risk-based prior function. While they function differently, both robots satisfy the requirements of epistemic rationality.

  • Transplant: Kate discovers a credence transplant procedure. Specifically, she identifies a method by which she can replace her credence function with another one. Alpha and Beta, her robots, are perfect matches for a credence transplant, since they are based on exact replicas of her brain system and have updated their credences on an identical body of evidence. So, Kate intends to get a credence transplant, and she could get it from Alpha or from Beta.

The decisions involved in the above cases are the following: in Robot Acquisition, a big company could buy Alpha or buy Beta, and, in Transplant, Kate could get a credence transplant from Alpha or from Beta. I here assume that we cannot find significant normative differences between cases or decisions if there are no relevant factual differences between them. If the decisions involved in Robot Acquisition and Transplant rest on the same relevant considerations, it cannot be the case that a decision is rational in Robot Acquisition while the corresponding decision is irrational in Transplant. Otherwise, we would be committed to a form of bootstrapping, a process by which reasons or obligations appear out of nowhere. I reject bootstrapping (at least, the putative reasons or obligations one gets from bootstrapping are not epistemically rational).

Robot Acquisition and Transplant have the same relevant features. Of course, the scenarios are a little different: a big company could buy one of the robots, while Kate could exchange her credence function for one of the robots’ credence functions. However, if there are pros and cons related to choosing one option over the other in Robot Acquisition, the same pros and cons obtain in Transplant. For example, are the robots consistent? Are they reliable or accurate? Do they reason well? Do they lose information over time? If these factors are relevant in Robot Acquisition, they are also relevant in Transplant. In short, from an epistemically normative point of view, the kind of decision the big company has to make is no different from the kind of decision Kate could make. In view of the foregoing, we should not make significant normative distinctions between these cases.

Transplant is a good case for determining whether there are diachronic norms prohibiting an agent from changing his or her credence function over time. If there are such norms, Kate is rationally prohibited from going for the credence-transplant procedure, since she would be prohibited from changing her prior function. However, as we can see in Robot Acquisition, there is no uniquely optimal risk-based prior function. Assuming that interpersonal permissiveness is true, a big company would not make a suboptimal decision in buying Alpha rather than Beta, or vice versa. Since Alpha and Beta are based on rational systems, it is hard to see why Kate is prohibited from abandoning her credence function and going for one of theirs. After all, if it is just a matter of risk profile and Kate feels like going for a risky epistemic life, she should be permitted to adopt Beta’s credence function.Footnote 9 Furthermore, since Alpha and Beta have updated their credences on Kate’s body of evidence, Kate has no reason to think that changing her credence function would result in her losing information. Thus, assuming that it is equally optimal for a big company to buy Alpha or Beta, Kate is permitted to change her credence function.

In summary, in cases like Transplant, we lack an explanation of why Kate would violate a requirement of rationality if she adopted a different credence function. It seems implausible that there would be diachronic norms of epistemic rationality prohibiting an agent from changing his or her attitudes over time. So, at least in interpersonal permissive situations, Diachronic Prohibition is implausible.

2.3 Objections and Replies

Here is an objection to my argument. In Robot Acquisition, Kate thinks that distinct incompatible epistemic standards are equally optimal. One could reply that agents like Kate ought to believe that their own standards are more accurate or truth-conducive than the alternatives. According to Schoenfield, when an agent like Kate adopts or entertains a set of epistemic standards, she will refuse to adopt other incompatible standards. Indeed, compared with other epistemic standards, hers will now appear to be more truth-conducive, to maximize accuracy or to minimize inaccuracy. Also, a change in epistemic standards over time will strike an agent as irrational because “although she knows that, later, she will not be violating her own standards (since she will have new standards), she does not now think that her later standards will be as likely to lead her to a true belief as her current ones” (Schoenfield 2014, p. 201). So, Kate should not think that Alpha and Beta entertain optimal credence functions: she should believe that her credence function is optimal, and that Alpha’s and Beta’s credence functions are suboptimal.

If this is correct, we have an explanation of why Diachronic Prohibition obtains: entertaining epistemic standards changes our perception of other standards. Specifically, entertaining epistemic standards leads epistemically rational agents to believe that such standards are more truth-conducive than others. Schoenfield’s argument echoes the Strict Immodesty condition: a strictly immodest agent estimates that his or her beliefs and epistemic standards are the most accurate ones (relative to a body of evidence).Footnote 10

One could then push the following objection: if permissiveness is true, there are distinct incompatible but equally reliable epistemic standards.Footnote 11 Accordingly, epistemically rational agents should not believe that their standards are more truth-conducive than others, because incompatible rational epistemic standards are equally reliable. So, Strict Immodesty does not support permissiveness. However, even if it is a fact that there are distinct incompatible but equally reliable epistemic standards, an agent could rationally (but falsely) believe that his or her standards are epistemically superior.Footnote 12 Insofar as there are rational false beliefs, Strict Immodesty can explain why rational agents falsely believe that their own standards are more truth-conducive. So, this objection is unsatisfactory.

Be that as it may, Schoenfield’s argument is problematic for two reasons. First, with respect to acquired epistemic standards, a change in how an agent perceives rational standards leads to puzzling situations. Here is why.

It is plausible that agents do not start their epistemic lives with all the rational epistemic standards they can have. Consider the case of standards that are relevant for religious beliefs. One needs to acquire the concept of religious authority before being able to entertain standards such as “trust the religious authorities.” Since agents do not necessarily start their epistemic lives with such concepts, the standard “trust the religious authorities” can be acquired later in an agent’s epistemic life (e.g., after the agent acquires the relevant concepts).Footnote 13

Now, with respect to acquired epistemic standards, consider the following cases:

  • Kate and Brad at t0: At time t0, Kate thinks that she has no reason to prefer the standard “trust the religious authorities” over the standard “do not trust the religious authorities.” Even if she thinks that she has no reason to prefer one standard over the other, she decides to adopt the standard “trust the religious authorities.” Brad decides to adopt the standard “do not trust the religious authorities”.

  • Kate and Brad at t1: After Kate adopts the standard “trust the religious authorities”, something happens to her. She suddenly thinks that trusting the religious authorities is more likely to lead her to accurate beliefs, even though she has not acquired new evidence between t0 and t1. She suddenly thinks that, from an accuracy perspective, Brad’s standard is suboptimal.

A change in intuitions between t0 and t1 can explain why Kate no longer believes that she has no reason to prefer the standard “trust the religious authorities” over the standard “do not trust the religious authorities.” However, either (i) such a change in intuitions affects Kate’s evidence or (ii) Kate ought to change some of her attitudes without receiving new evidence. Either way, we face a problem. Here is why.

Provided that acquiring epistemic standards changes our intuitions concerning other standards, we can ask whether such a change in intuitions affects an agent’s evidence. Suppose first that it does. Then agents with different epistemic standards do not share all relevant evidence, which violates the assumption that the agents are epistemic peers. Recall that this paper is concerned with cases where two agents who share all relevant evidence disagree.

In view of the foregoing, Schoenfield probably means that such a change in intuitions does not affect an agent’s evidence. But even on such an assumption, we face a serious difficulty. Recall that one motivation in favour of Diachronic Prohibition is that agents should not change their doxastic attitudes without getting new evidence. Yet, on the assumption that we can appraise epistemic standards differently without acquiring new evidence, Kate ends up changing some of her doxastic attitudes without getting new evidence. At time t0, Kate thinks that she has no reason to prefer one standard over the other. At t1, however, she is required to believe that her standard is uniquely optimal, and it would be inconsistent for her to go on believing that she has no reason to prefer one standard over the other. So she has to abandon that initial belief at t1. But Kate did not acquire new evidence between t0 and t1, and her change in perception does not affect her evidence. This means that Kate ends up dropping her belief that she has no reason to prefer one standard over the other without having acquired new evidence. Consequently, assuming that acquiring epistemic standards does not affect an agent’s evidence, we are sometimes required to change our doxastic attitudes without getting new evidence. Either way, Schoenfield’s argument raises concerns when it comes to acquired epistemic standards.

The second problem concerns the assumption that epistemically rational agents always ought to believe that their standards are more truth-conducive than any other set of standards. Beliefs concerning one’s accuracy can be treated like any other beliefs. One can have (or lack) good evidence for or against the truth-conduciveness of one’s epistemic standards. Accordingly, if agents have clear evidence that other agents with other standards are equally reliable (or if they lack good evidence that their own standards are more reliable), they should refrain from believing that their epistemic standards are more truth-conducive.

Here is why. To begin, consider the following case described by Titelbaum and Kopec:

  • Reasoning Room: “You are standing in a room with nine other people. Over time the group will be given a sequence of hypotheses to evaluate. Each person in the room currently possesses the same total evidence relevant to those hypotheses. But each person has a different method of reasoning about that evidence. When you are given a hypothesis, you will apply your methods to reason about it in light of your evidence, and your reasoning will suggest either that the evidence supports belief in the hypothesis, or that the evidence supports belief in its negation… For each hypothesis, 9 people reach the same conclusion about which belief the evidence supports, while the remaining person concludes the opposite… Despite this precise coordination, it’s unpredictable who will be the odd person out for any given hypothesis” (Titelbaum and Kopec forthcoming, p. 14).

In the Reasoning Room, the members of the group are equally reliable: each of them reaches the right answer 90% of the time. Yet, for each hypothesis presented to the participants, there is never unanimity about which answer is right, which is explained by the fact that agents in the Reasoning Room employ distinct incompatible epistemic standards.
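To see how these numbers fit together (a quick derivation from the figures just given, not part of Titelbaum and Kopec’s text), let f be the fraction of hypotheses on which the nine-person majority is right. On each hypothesis, either nine verdicts are correct (if the majority is right) or one is (if the lone dissenter is right), so the average number of correct verdicts per hypothesis is 9f + (1 − f). Since each of the ten participants is right 90% of the time, that average must also equal 10 × 0.9 = 9:

\[
9f + (1 - f) = 9 \quad\Longrightarrow\quad 8f = 8 \quad\Longrightarrow\quad f = 1.
\]

Hence the majority verdict is correct on every hypothesis, and each participant is the odd person out on exactly 10% of the hypotheses, which is why all ten are equally reliable despite their incompatible standards.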

Suppose that Kate and Brad are in the Reasoning Room. If Schoenfield is right, no matter what information Kate and Brad are given, they will never believe that they find themselves in such a situation, since agents in the Reasoning Room are equally reliable. If Kate and Brad believe that they find themselves in the Reasoning Room, they believe that distinct incompatible epistemic standards are equally optimal. But this contradicts the claim that agents should believe that their own standards are more truth-conducive. So, in accordance with Strict Immodesty, Brad and Kate will deny that they can find themselves in the Reasoning Room (even if, in fact, they could find themselves in such a situation).

Now, consider the following revised version of the Reasoning Room:

  • Daily Reasoning Room: Every day, Kate and Brad stand in a room with eight other people and are given 100 hypotheses to evaluate. Each person in the room possesses the same total evidence relevant to those hypotheses, but each person has distinct incompatible rational epistemic standards. After the participants have evaluated the hypotheses, a great number of independent and extremely reliable brain scanners reveal the following: every participant has formed 90 true beliefs and 10 false beliefs. This result is revealed to the participants day after day.Footnote 14

Here, it is patently clear that, day after day, the agents have consistent evidence that their standards are no more truth-conducive than the others’. But if Schoenfield is right, the kind of evidence provided by the reliable brain scanners is not relevant. Following Strict Immodesty, epistemically rational agents should take their standards to be the most truth-conducive ones. So, in the above case, agents should stand their ground and keep believing that their standards are more truth-conducive than others. I find this result implausible: in order to discard the information provided by a great number of independent brain scanners, Kate has to be overconfident that her standards are epistemically superior. Being strictly immodest would be irrational given her evidence.
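To make the evidential point vivid, here is a toy Bayesian sketch (the likelihoods are invented purely for illustration). Let H be the hypothesis that Kate’s standards are more truth-conducive than the other participants’, and let E be a single day’s scanner report that everyone scored 90 out of 100. Suppose Kate regards such a report as less likely if H is true than if it is false, say Pr(E | H) = 0.3 and Pr(E | ~H) = 0.6. Then each report lowers her credence in H by Bayes’ theorem:

\[
Cr(H \mid E) \;=\; \frac{Cr(H)\Pr(E \mid H)}{Cr(H)\Pr(E \mid H) + Cr(\neg H)\Pr(E \mid \neg H)},
\]

and after n independent days her odds in favour of H are multiplied by (0.3/0.6)^n = (1/2)^n, which tends to zero. Whatever the exact numbers, so long as equal scores are less expected under H than under its negation, the accumulating reports should steadily erode Kate’s confidence that her standards are superior rather than leave it untouched.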

Of course, in the Daily Reasoning Room, the agents do not have independent evidence for the conclusion that their standards are as reliable as the others’. Indeed, the scanners provide evidence that agents in the Daily Reasoning Room are equally reliable only insofar as the agents entertain an epistemic standard such as “trust the brain scanners.”Footnote 15 However, the issue is not whether the scanners provide independent evidence for the conclusion that agents are equally reliable. The issue is whether an agent’s rational epistemic standards will recommend not trusting the information provided by the scanners. In the Daily Reasoning Room, not trusting the information provided by the scanners amounts to being overconfident. Accordingly, agents with rational epistemic standards will trust the scanners.

This leads me to conclude that Kate’s beliefs concerning the truth-conduciveness of her standards can be confirmed or disconfirmed by her evidence. If she lacks sufficient evidence to believe that her standards are more truth-conducive, she should not believe it. So, it is false that epistemically rational agents ought to believe that their standards are more truth-conducive than others. An epistemically rational agent ought to believe what his or her evidence supports, and the evidence might not support the belief that his or her standards are epistemically superior.

3 The Restricted Bridge and Argumentation Principles

3.1 Correct Argumentation and Epistemic Rationality

In this section, I will argue that plausible principles of correct argumentation such as the Weak Burden of Proof Principle support the Restricted Bridge. This will lead me to conclude that the Restricted Bridge is plausible. Before I present my argument, I wish to explain why my strategy faces certain limits.

Theories of argumentation are interested in the properties of a good argument, understood as an object or a product. While a good argument is explicit, cogent and relies on inference principles such as deduction, induction or abduction,Footnote 16 context can also play a role in determining what counts as a good argument. For example, the properties of good arguments in a legal conflict may not be identical to the properties of good arguments in a philosophical dialogue.Footnote 17 Theories of argumentation are also interested in good argumentation, understood as an activity between arguers (Godden 2016, pp. 345–346). Evaluating arguers and the dialectical activity they engage in goes beyond the formal aspects of correct arguments. It has been suggested that an ideal arguer possesses virtues—for instance, integrity, open-mindedness, humility or intellectual perseverance.Footnote 18

While argumentation theory is concerned with good arguments and competent arguers, the Restricted Bridge is concerned with epistemically rational believers. It is possible, however, that the standards of good argumentation differ from the standards of rational belief. For instance, in a reliabilist theory of epistemic rationality, epistemically rational agents ought to rely on belief-forming processes that lead them to a good ratio of true to false beliefs.Footnote 19 Such belief-forming processes might have nothing to do with good argumentation. Accordingly, even if principles of correct argumentation such as the Weak Burden of Proof Principle support the Restricted Bridge, such a result has clear limits with respect to theories of epistemic rationality.

Nevertheless, if principles of correct argumentation support the Restricted Bridge, this puts pressure on those who deny it. After all, in normal circumstances, epistemically rational agents satisfy principles of correct argumentation. So, those who deny the Restricted Bridge must explain why it can be epistemically rational to violate plausible principles of correct argumentation such as the Weak Burden of Proof Principle.

3.2 The Weak Burden of Proof Principle

Here is a plausible argumentation principle:

  • Standard Burden of Proof Principle: Assume that A claims that P, that A has not offered a reason to believe that P and that B challenges the claim that P. Assume, furthermore, that we are not in a special context such as a legal dispute. In such a context, provided that A maintains the claim that P, A ought to provide support for that claim.Footnote 20

At first sight, the above principle seems correct. First, the burden of proof is a central aspect of well-coordinated discussion between arguers. The burden of proof fixes the conditions under which agents will make plausible (or reasonable) claims to each other. Without the notion of burden of proof, it might be impossible for arguers to arrive at a definite reasonable response concerning P. As Walton notes:

One of the most trenchant and fundamental criticisms of reasoned dialogue as a method of arriving at a conclusion is that argument on a controversial issue can go on and on, back and forth, without a decisive conclusion ever being determined by the argument. The only defense against this criticism lies in the use of the concept of the burden of proof within reasoned dialogue… Only by this device can we forestall an argument from going on indefinitely, and thereby arrive at a definite conclusion for or against the thesis at issue. (Walton 1988, p. 251)

More importantly, the Standard Burden of Proof Principle is motivated by the fact that arguments worth their salt should rely on sufficiently warranted premises and assumptions. So, in cases where an assumption or a premise is challenged by an agent during the course of a reasoned dialogue, a competent arguer should provide support in favour of his or her premises and assumptions. Otherwise, the argument will be unconvincing to anyone who does not accept such premises or assumptions.Footnote 21

Of course, there are special cases where some assumptions enjoy a default status even if they are not supported by arguments or evidence. For instance, in legal disputes, the presumption of innocence frequently enjoys a preferential status, even if there is no evidence that the defendant is innocent. However, exceptions such as the presumption of innocence in legal disputes are acceptable for practical reasons, not epistemic ones: it would be unfair to presume that a defendant is guilty.Footnote 22 So, except in such special contexts, a competent arguer should provide support in favour of his or her premises and assumptions, especially when such premises or assumptions are challenged.Footnote 23

However, if some versions of permissiveness are correct, the Standard Burden of Proof Principle is problematic. It could be argued that there are rational epistemic standards for which we cannot give independent support. According to Schoenfield, we cannot provide independent justification in favour of our rational epistemic standards, since those standards are precisely the considerations in virtue of which we evaluate our doxastic states. Asking for such a justification would leave us with the undesirable conclusion that no set of epistemic standards is epistemically justified. She argues:

We can never give reasons for why we weigh the evidence in one way rather than another that are independent of everything else. This is just a fact about epistemic life that we have to live with: the methods that we use to evaluate evidence are not the sorts of things we can give independent justification for… the demand for such justification would result in widespread skepticism. (Schoenfield 2014, p. 202)

If Schoenfield is right, competent arguers should not be required to provide independent justification for their own rational standards when such standards are challenged by other competent arguers. So, if her objection is correct, the Standard Burden of Proof Principle is false.

In response to such an objection, it could be argued that the lack of independent justification in favour of rational epistemic standards is compatible with (i) the Standard Burden of Proof Principle and (ii) the denial of skepticism. For instance, some epistemic standards could be subject to an “overlapping consensus”—that is, the various rational epistemic systems could concur that some specific epistemic standards are correct, even if the grounds in favour of such standards may differ from one system to another. Alternatively, it could be argued that there are self-justified standards. For example, it is plausible that a standard roughly stating “trust your direct perceptions” is self-justified, even if there is no independent evidence in favour of such a standard. Even if there is no independent justification in favour of consensus-based or self-justified standards, they do not provide grounds for counterexamples to the Standard Burden of Proof Principle, because epistemically rational agents will not challenge them in the course of an argument.

Be that as it may, in order to accommodate Schoenfield’s objection, I will leave aside the Standard Burden of Proof Principle. Instead, I will here endorse a weaker version of the principle:

  • Weak Burden of Proof Principle: Assume that A claims that P, that A has not offered a reason to believe that P and that B challenges the claim that P. Assume, furthermore, that we are not in a special context such as a legal dispute. In such a case, either (i) A ought to provide support for the claim that P or (ii) A is permitted to claim that P and B is permitted to claim that ~ P.

As with the Standard Burden of Proof Principle, the Weak Burden of Proof Principle makes sense of the fact that, in most cases, competent arguers should provide support in favour of their premises and assumptions. At the same time, the Weak Burden of Proof Principle accommodates the kind of case described by Schoenfield—namely, that there are incompatible rational epistemic standards for which agents cannot provide independent justification. This means that, in an argumentative context, endorsing a set of epistemic standards can be rational for agent 1 and denying such a set of epistemic standards can be rational for agent 2. Otherwise, there would be an arbitrary distinction between the justification required for rationally believing P and the justification required for rationally believing ~ P. Hence, if it is correct for agent 1 to claim that P without having any independent reason in favour of the conclusion that P, it is also correct for agent 2 to claim that ~ P without having any independent reason for the conclusion that ~ P.

Here is another way to put it. Suppose that there are incompatible but equally rational standards, and that one cannot provide independent support for one’s standards. Now, imagine that, while Kate cannot provide independent support in favour of her own standards, she expects Brad to provide independent support in favour of his standards. In other words, Kate expects Brad to provide the kind of support in favour of his standards that she cannot provide for her own. In such a context, Kate is making an arbitrary distinction between her own standards and Brad’s. If Kate is permitted to entertain standards while being unable to provide independent justification in their favour, Brad has the same permission. The Weak Burden of Proof Principle reflects such a possibility: the burden of proof to provide independent support for one’s epistemic standards falls neither on Kate nor on Brad.

3.3 The Restricted Bridge and the Weak Burden of Proof Principle

Let us now assume that Brad and Kate take conflicting attitudes towards P. Even after they fully disclose their respective evidence, Kate still believes that P and Brad still believes that ~ P. Obviously, no undefeated total evidence has been presented against the conclusion that P (otherwise, Kate would be irrational to maintain her belief that P). Similarly, no undefeated total evidence has been presented against the conclusion that ~ P (otherwise, Brad would be irrational to maintain his belief that ~ P). Finally, withholding judgment is not the only rational response to such a disagreement (otherwise, Brad and Kate would be irrational). So, we are left with two explanations of why they still disagree, as in the following:

  (1) Agents still disagree because they maintain that their own conclusions are epistemically preferable. Kate believes that her own conclusion (P) is epistemically preferable and Brad believes that his own conclusion (~ P) is epistemically preferable.

  (2) Agents still disagree, but they do not maintain that their own conclusions are epistemically preferable. So, it would be correct for Kate to adopt Brad’s conclusion that ~ P and it would be correct for Brad to adopt Kate’s conclusion that P.

If (2) is correct, the Restricted Bridge is correct. This means that denying the Restricted Bridge amounts to endorsing (1). Now, the problem with (1) is that the agents’ evidence for or against believing P (including the evidence for their premises, reasoning and epistemic standards) is fully disclosed. So, relative to the evidence agents have, if P were more plausible than ~ P, there would be undefeated total evidence for the conclusion that P (and vice versa). However, this is impossible: if there were undefeated total evidence for or against the conclusion that P, one of the agents would be irrational.

If agents have no undefeated total evidence in favour of their own premises, reasoning and epistemic standards, why do they believe that their own conclusions are preferable? One remaining explanation is that they think that, while there is no undefeated total evidence in favour of their claim, the burden of proof is not on their side. Perhaps there is no undefeated total evidence for believing P, but as long as the burden of proof is on those who defend the claim that ~ P, it is correct to maintain that P. That is, perhaps Kate has no undefeated total evidence for believing P, but as long as she claims that the burden of proof is on Brad to argue that ~ P, it might be rational for her to claim that P.

However, such a possibility would violate the Weak Burden of Proof Principle. Indeed, if Kate thinks that it is correct for her to believe P without having any undefeated total evidence for believing P, she should also believe that Brad is correct to maintain that ~ P. After all, Brad is in a similar epistemic position, since there is no undefeated total evidence for the belief that ~ P. So, if Kate claims that the burden is on Brad to argue that ~ P, she is making an arbitrary distinction between her own conclusion and Brad’s conclusion. As I explained in Sect. 3.2, this violates the Weak Burden of Proof Principle. That is, Kate should not expect Brad to provide support in favour of his belief that ~ P if she cannot provide support for her belief that P. Consequently, following the Weak Burden of Proof Principle, Kate should not claim that Brad bears the burden of proof regarding the falsehood of P.

Where does that leave us? Suppose that Brad and Kate are epistemically rational, have the same evidence, maintain incompatible conclusions and do not believe that the burden of proof is on their opponent. In such a context, even if Kate currently believes that P, she has no reason to think that believing ~ P is irrational. After all, Brad is an epistemically rational agent who has competently argued for the conclusion that ~ P, who faces no undefeated objection, and on whom the burden of proof does not fall. All these facts are salient to Kate. In view of the foregoing, it is rational for Kate to think that Brad’s conclusion that ~ P satisfies the requirements of epistemic rationality. So, it is epistemically rational for Kate to adopt Brad’s conclusion that ~ P—that is, since Kate observes that Brad is epistemically rational in believing ~ P, she can see that it would also be rational for her to believe ~ P. This confirms the Restricted Bridge: in a situation where two epistemic peers like Kate and Brad fully disclose their evidence, if agent 1 is permitted to take doxastic attitude D towards P, then agent 2 is also permitted to take doxastic attitude D towards P.

4 Conclusion

In this paper, I offered two reasons to think that the Restricted Bridge is correct. First, I argued that Diachronic Prohibition, a popular explanation of why there are counterexamples to the Restricted Bridge, is implausible. Second, I argued that a plausible principle of correct argumentation, the Weak Burden of Proof Principle, supports the Restricted Bridge. This leads me to conclude that the Restricted Bridge is plausible.