1 Introduction

Supererogation involves going beyond the call of (moral) duty in a morally good way. Such acts are morally optional, i.e., permissible but not required. Yet not just any morally optional act is supererogatory. It has to be in some sense morally better than some other permissible option. For example:

Five Lives: Liv realizes that an enemy grenade will kill five of her fellow soldiers if she remains in safety. She can save the five soldiers by jumping on the grenade, thereby sacrificing her own life. Or she can remain in safety and allow the fellow soldiers to die.

Liv’s jumping on the grenade would constitute a paradigmatic instance of supererogation. It is morally optional for Liv to jump on the grenade, and it is morally better if she does.

Otherist accounts of morality hold that there is an asymmetry between an agent’s interests and the interests of the other, i.e., the members of the moral community who aren’t the agent: while altruistic reasons are moral reasons, self-interested reasons are not.Footnote 1 This picture explains how Liv’s jumping on the grenade is morally better, for Liv’s self-interested reason to remain in safety is a non-moral reason. This is a genuine achievement. Otherism’s most prominent rival is impartialism, the view that the interests of all members of the moral community—including the agent—are equal moral reasons. Intuitively, even if Liv had to choose between her life and that of a single fellow soldier, it would be morally better for her to sacrifice her life. Yet if Liv’s self-interest is a moral reason in exactly the same way as that of a fellow soldier, then how could it be morally better to sacrifice? Impartialists must make some concession to otherism if they wish to capture the idea that, other things being equal, altruism is morally better than self-interested action.

On the other hand, otherists must make concessions of their own. If Liv’s self-interest isn’t a moral reason, then how can it make it morally permissible to remain in safety? How can the otherist make sense of the sacrifice’s being a moral option rather than a moral requirement?

The standard otherist account of optionality holds that Liv’s self-interested reason is a non-moral reason which is nonetheless morally relevant. It is relevant insofar as it has moral justifying weight that outweighs the pro tanto requirement to save the five lives. If the self-interested reason did have moral requiring weight, then the standard otherist view is that it would be a moral reason after all (e.g., Portmore, 2011: 128). Hence, the account further contends that the self-interested reason cannot have moral requiring weight. I challenge this further contention.

While we can go beyond the call of duty in a morally good way, we can also go too far beyond the call of duty. Consider:

Mild Burn: Bernie realizes that an enemy grenade will mildly burn one of his fellow soldiers if he remains in safety. He can spare the soldier the mild burn by jumping on the grenade, thereby sacrificing his own life. Or he can remain in safety and allow the fellow soldier to be mildly burned.

My view is that Bernie is morally required to remain in safety. To account for the moral requirement against pursuing this mild altruistic benefit, I hold that the (allegedly non-moral) self-interested reason has moral requiring weight.

The otherist may protest. “If Bernie jumps on the grenade, he makes a mistake from the perspective of rationality but not morality. He is, in fact, morally permitted to jump on the grenade.”Footnote 2 As an impartialist sympathizer, I find this protest hard to swallow. Bernie is one person among many; his wellbeing matters to morality just like that of any other person; his humanity deserves just as much respect as everyone else’s; and so forth. Yet these tropes won’t convince the otherist. Indeed, the otherist can deny that Bernie is morally required to remain in safety…but only at a cost.

Moral rationalism about requirement is the thesis that moral requirements are rational requirements. Many otherists endorse such moral rationalism and use it to motivate their standard account of optionality.Footnote 3 My goal is to force these otherists to choose between (a) giving Bernie’s wellbeing moral requiring weight and thereby granting that Bernie is morally required to remain in safety, and (b) rejecting the moral rationalism that many otherists use to motivate their account of optionality.

In Sect. 2, I briefly review Gert’s justifying/requiring distinction. In Sect. 3, I present the standard otherist account of supererogation, as it is developed by Portmore (2008, 2011: ch 5).Footnote 4 Portmore’s version is influential,Footnote 5 and it is the clearest about how to weigh the altruistic moral reasons against the self-interested non-moral reasons. In Sect. 4, I present the parallel reasoning that commits Portmore to holding that Bernie is morally required to remain in safety. This parallel reasoning depends on two moves. The first is extending moral rationalism about requirement to moral rationalism about permission, i.e., the thesis that moral permissions are rational permissions. I defend this extension in Sect. 5. The second move concerns how to weigh reasons to determine an overall deontic verdict. I defend the second move in Sect. 6. In Sect. 7, I close off Portmore’s last chance to break the parallel between his argument and the extension.

This paper brings together three neglected topics. First, there is little sustained reflection on how to understand going too far beyond the call of duty or how it relates to existing normative theories. Second, while there is a rich debate concerning the merits of moral rationalism about requirement, there is little discussion of moral rationalism about permission or whether one can coherently defend one kind of rationalism and reject the other.

The third neglected topic is how to weigh the reasons needed to account for supererogation. A natural model of weighing reasons is a single scale: the reasons for φ go in one pan, and the reasons for ~φ go in the other. This model assumes that a reason’s justifying weight (its pushing φ down toward permissibility) always comes with matching requiring weight (its pushing ~φ up toward impermissibility). That assumption is incompatible with the standard otherist account of supererogation, which holds that self-interested reasons are merely justifying (they have justifying but not requiring weight). Merely justifying reasons push φ toward permissibility without pushing ~φ toward impermissibility. What sort of model do we use to weigh merely justifying reasons?
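The contrast can be put schematically. The weight functions JW and RW below are my labels for the justifying and requiring weight a reason contributes, not the paper’s own notation:

```latex
% Notation mine (not the paper's): JW_phi(r) / RW_phi(r) are the justifying
% and requiring weight that reason r contributes with respect to phi.
\begin{align*}
  \text{Single-scale assumption:} &\quad \mathrm{JW}_{\varphi}(r) = \mathrm{RW}_{\varphi}(r)
    \quad \text{for every reason } r \\
  \text{Merely justifying reason:} &\quad \mathrm{JW}_{\varphi}(r) > 0
    \ \text{while}\ \mathrm{RW}_{\varphi}(r) = 0
\end{align*}
```

On the single-scale picture, any weight a reason adds to one pan automatically counts against the alternative; a merely justifying reason breaks that link, which is why it calls for a different weighing model.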

2 Justifying versus requiring, requirements, and moral options

A consideration has justifying weight for φ (JWφ) iff the consideration makes φ permissible in the absence of sufficiently weighty countervailing considerations. I can accept or decline some surgery for my child. That the surgery will cause my child pain for weeks has justifying weight for declining: in the absence of sufficiently weighty countervailing considerations, it is permissible to decline the surgery.

A consideration has requiring weight for φ (RWφ) iff it makes the alternative, ~φ, impermissible in the absence of sufficiently weighty countervailing considerations. That the surgery will cause my child pain for weeks also has requiring weight for declining: in the absence of sufficiently weighty countervailing considerations, it is impermissible to accept the surgery.

A requirement to φ is a compound deontic verdict: φ is a requirement iff φ is permissible and ~φ is impermissible.Footnote 6 Justifying and requiring weight work together to make one required to φ. When the pain’s justifying weight makes it permissible to decline the surgery and its requiring weight makes it impermissible to accept it (so there are no sufficiently weighty countervailing considerations), the pain requires me to decline the surgery. On the other hand, if the benefits of the surgery outweigh the pain’s justifying and requiring weight, then I’m not (all-in) required to decline the surgery. I am only pro tanto required to decline the surgery, i.e., required in the absence of sufficiently weighty countervailing considerations.
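The pieces above can be summarized schematically. The notation is mine, not the paper’s, with Perm standing for moral permissibility:

```latex
% Notation mine (not the paper's). Perm = moral permissibility.
\begin{align*}
  \mathrm{Required}(\varphi) &\iff \mathrm{Perm}(\varphi) \wedge \neg\mathrm{Perm}({\sim}\varphi) \\
  \text{pro tanto required to } \varphi &\iff \mathrm{Required}(\varphi)\
    \text{absent sufficiently weighty countervailing considerations} \\
  \text{all-in required to } \varphi &\iff \mathrm{Required}(\varphi)\
    \text{once all considerations are weighed}
\end{align*}
```

On this rendering, the surgery example comes out as follows: the pain’s justifying weight secures Perm(decline), its requiring weight secures ¬Perm(accept), and together these yield Required(decline) unless the surgery’s benefits supply sufficient countervailing weight.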

When reasons have more justifying than requiring weight, they have some tendency to make actions permissible without making them required. Such reasons are useful when, like Portmore, you are trying to explain moral options, cases in which both φ and ~φ are permissible.

3 Making sense of going beyond the call of duty: optionality

In this section, I present Portmore’s otherist account of how it is morally optional for Liv to jump on the grenade. The upshot is that Liv’s self-interested non-moral reason must have moral justifying weight.

In a nutshell, the account is this. The lives of the five other people pro tanto—but not all-in—require Liv to jump on the grenade. Consequently, there must be some reason to remain in safety with at least as much justifying weight for remaining in safety as there is requiring weight to jump. The only candidate for having justifying weight to remain in safety is Liv’s self-interested reason, so self-interested reasons have moral justifying weight. More formally:

1. The prevention of five deaths is a reason to jump on the grenade that has moral requiring (and justifying) weight, and thus Liv is morally required to jump on the grenade in the absence of countervailing reasons.Footnote 7

2. Liv is not (all-in) morally required to jump on the grenade and is permitted to remain in safety.

3. If an agent has some moral requiring reason, MRR, for φ but is permitted to ~φ (and so is not all-in required to φ), then there is a reason with at least as much justifying weight for ~φ as MRR has moral requiring weight for φ.Footnote 8

4. In Five Lives, Liv’s self-interested reason is the only candidate for having justifying weight for remaining in safety.

5. Therefore, the self-interested reason has moral justifying weight for not jumping on the grenade that is at least as weighty as the moral requiring weight of the five lives.

1 is very plausible. There is general agreement that we are pro tanto morally required to prevent the death of others and, per Sect. 2, both justifying and requiring weight work together to explain (pro tanto) moral requirements.

2 is debatable. For example, Dorsey (2016: chs 3, 4) argues that Liv is (all-in) morally required to save the five lives. Dorsey and Portmore do agree, however, with the near undeniable claim that (2′) Liv is not (all-in) rationally required to jump on the grenade, where rationality is the unique practical perspective that has final authority over what to do.Footnote 9

Portmore and Dorsey part ways precisely because they disagree about whether morality’s requirements are “rationally authoritative” (Portmore, 2011: 4). In other words, they disagree over moral rationalism about requirements (MRREQ), i.e., the claim that φ is morally required only if it is rationally required. Dorsey denies MRREQ and Portmore endorses it. Since Liv is not rationally required to jump on the grenade, given MRREQ, it follows that she isn’t morally required either (2008: 377, nt 15; 2011: 130, nt 15).

3 is very plausible, and the model of weighing reasons in Sect. 6 will confirm it.

Portmore assumes that 4 is true. As Five Lives was described, Liv has at least two reasons, namely the altruistic reason (the lives of the five soldiers) and Liv’s self-interested reason (her life). Portmore tells us to “Assume that these are the only morally relevant facts” (2011: 126). We’ll revisit the assumption in Sect. 7, but for now let’s play along and screen off any other morally relevant considerations. Once we do, Liv’s self-interested reason is the only candidate for having the requisite justifying weight to prevent Liv’s pro tanto requirement from being an all-in requirement.

The picture that emerges from Portmore’s 1–5 provides an explanation of how jumping on the grenade can be morally optional. Our self-interested reasons have enough justifying weight to prevent jumping on the grenade from being morally required, despite its being morally better to do so.Footnote 10 If Liv jumps on the grenade, she really does go beyond the call of duty.

4 Making sense of going too far beyond the call of duty

This section argues that, if Portmore’s strategy shows that self-interested reasons have moral justifying weight, then it can be extended to show that self-interested reasons have moral requiring weight. One can’t endorse Portmore’s account and then deny that Bernie is morally required to remain in safety.

In a nutshell, the account is this. We are pro tanto justified in taking the necessary means of preventing others from suffering mild harms. Hence, the mild burn pro tanto justifies Bernie in jumping on the grenade. But it doesn’t all-in justify it. Perhaps one can permissibly sacrifice their life to secure some altruistic benefits that fall short of saving a life. For example, perhaps Bernie could sacrifice his life to prevent someone’s paralysis. Yet one can’t permissibly sacrifice their worthwhile life to prevent a mild burn.Footnote 11 Consequently, there must be some reason to remain in safety with at least as much requiring weight for remaining in safety as there is justifying weight to jump. The only candidate is Bernie’s self-interested reason, so self-interested reasons have moral requiring weight. More formally:

1*. The prevention of the mild burn is a reason to jump on the grenade that has moral justifying weight, and thus Bernie is morally permitted to jump on the grenade in the absence of countervailing reasons.

2*. Bernie is not (all-in) morally permitted to jump on the grenade.Footnote 12

3*. If an agent has a moral justifying reason, MJR, for φ but is not all-in morally permitted to φ, then there is a reason with more moral requiring weight for ~φ than MJR has justifying weight for φ.Footnote 13

4*. In Mild Burn, Bernie’s self-interested reason is the only candidate for having requiring weight for remaining in safety.

5*. Therefore, the self-interested reason has moral requiring weight that is weightier than the moral justifying weight of the mild burn.

1* is just as plausible as 1. Just as we are pro tanto required to prevent the deaths of five people, we are pro tanto justified in preventing small harms to others.

2* will be a no-brainer for those who, like Dorsey (2016: ch 3), endorse impartialism about morality. For impartialism holds that the agent’s wellbeing has the same moral weight as everyone else’s.

Portmore endorses the otherist conception of morality, so he cannot accept the impartialist rationale for 2*. Yet consider (2*′): Bernie is not (all-in) rationally permitted to jump on the grenade. This claim seems near undeniable. Jumping on the grenade in such circumstances would display “recklessness” (Massoud, 2016: 706) or “foolishness” (Stangl, 2016: 355). Bernie would have “decisive reason” not to jump on the grenade (Portmore, 2011: 4).Footnote 14

2*′ gives us 2* as long as we accept moral rationalism about permission (MRPERM), i.e., φ is morally permitted only if it is rationally permitted. MRREQ justified the inference from 2′ (Liv is not rationally required to sacrifice) to 2 (Liv is not morally required to sacrifice). Likewise, MRPERM justifies the inference from 2*′ (Bernie is not rationally permitted to sacrifice) to 2* (Bernie is not morally permitted to sacrifice). In the next section, I argue that Portmore’s defense of moral rationalism about requirement can be extended to an equally plausible defense of moral rationalism about permission. So 2 and 2* stand or fall together.
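Both inferences run by contraposition on the respective rationalist thesis. The predicate abbreviations below are mine:

```latex
% Abbreviations mine: MReq/MPerm = morally required/permitted,
% RReq/RPerm = rationally required/permitted.
\begin{align*}
  \text{MRREQ:}  &\quad \mathrm{MReq}(\varphi) \to \mathrm{RReq}(\varphi)
    &&\text{so}\quad \neg\mathrm{RReq}(\varphi) \to \neg\mathrm{MReq}(\varphi)
    \quad (2' \Rightarrow 2) \\
  \text{MRPERM:} &\quad \mathrm{MPerm}(\varphi) \to \mathrm{RPerm}(\varphi)
    &&\text{so}\quad \neg\mathrm{RPerm}(\varphi) \to \neg\mathrm{MPerm}(\varphi)
    \quad (2^{*\prime} \Rightarrow 2^{*})
\end{align*}
```

The two moves are thus formally identical; they differ only in which deontic status, requirement or permission, the rationalist thesis transmits from morality to rationality.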

In Sect. 6, I defend the move from 3 to 3*, as well as defend a general model of weighing reasons.

For now, we are just assuming that 4* is true. As Mild Burn was described, Bernie has at least two reasons, namely the altruistic reason (preventing the mild burn) and Bernie’s self-interested reason (his life). Portmore tells us to “Assume that these are the only morally relevant facts” (2011: 126). We’ll revisit this assumption in Sect. 7, but for now screen off any other morally relevant considerations. Once we do, Bernie’s self-interested reason is the only candidate for having requiring weight for remaining in safety.

The picture that emerges from 1* to 5* explains how one can go too far beyond the call of moral duty. Bernie’s self-interested reasons have enough moral requiring weight to prevent jumping on the grenade from being morally permitted, despite the altruistic benefit. If Bernie jumps on the grenade, he makes a moral mistake, not just a rational one. He goes too far beyond the call of duty.

Premise 1* of my extension seems straightforward. The rest of the paper defends 2*, 3*, and 4*. The ultimate payoff is that the otherist must either (a) grant that Bernie’s self-interest has moral requiring weight, and thus that Bernie is morally required to remain in safety, or (b) reject the moral rationalism that motivates the standard otherist account of supererogation’s optionality.

5 Defending the extension to 2*: moral blameworthiness and rationality

Portmore’s second premise, 2, is that Liv is morally permitted to remain in safety. To defend 2, he appeals to moral rationalism about requirement (MRREQ). The extension’s second premise, 2*, is that Bernie is not morally permitted to jump on the grenade (and so is morally required to remain in safety). To defend 2*, the extension appeals to moral rationalism about permission (MRPERM). I argue that the best argument for moral rationalism about requirement can be extended into an equally good (or, as I prefer to think of it, equally bad) argument for moral rationalism about permission. Thus, Portmore’s defense of Liv’s moral permission commits him to Bernie’s moral requirement.

5.1 Portmore’s argument for moral rationalism about requirement

Portmore (2011; cf. Harman, 2016) endorses moral rationalism about requirement but denies moral rationalism about permission. This position is unstable, at least when one defends the former moral rationalism by appealing to a connection with blameworthiness. This is what Portmore and many others do.Footnote 15 Here is the main argument:

The Blameworthiness and Requirement Argument

A. If S is morally required to ~φ, then S would be morally blameworthy for freely and knowledgeably φ-ing.Footnote 16

B. S would be morally blameworthy for freely and knowledgeably φ-ing only if S is rationally required to ~φ.Footnote 17

C. Therefore, if S is morally required to ~φ, then S is rationally required to ~φ.Footnote 18

A links moral requirements with moral blameworthiness. B links moral blameworthiness with rational requirements. Together A and B give us C, which is just moral rationalism about requirement, the idea that moral requirements are also rational requirements. Before I present the parallel argument for moral rationalism about permission, it will be useful to first explain why this argument is so unsuccessful.

Moral rationalists, including Portmore, recognize that there is a conceptual distinction between morality and rationality, where only the latter is defined as having final authority over what to do. Of course, distinct concepts need not entail distinct referents. ‘Biden’ is conceptually distinct from ‘the President of the US’ even though Biden is (identical to) the President of the US. Likewise, perhaps some or all of morality’s deontic verdicts (e.g., its verdicts of required and permissible) have final authority even though they do not have such authority by definition. The conceptual distinction is nonetheless important. It shows us that we need an argument to insist that certain verdicts of morality have final authority (e.g., moral requirements are also rational requirements).

Yet once we make the conceptual distinction between morality and rationality, we must distinguish between moral blameworthiness and rational blameworthiness. The former is blameworthiness from the moral perspective and the latter is blameworthiness from the rational perspective. Again, distinct concepts need not entail distinct referents. It may be that there is just one kind of blameworthiness and that moral and rational blameworthiness are identical. The conceptual distinction is nonetheless important. It shows us that we need an argument to insist that blameworthiness from the moral perspective is or entails blameworthiness from the rational perspective.

The literature on moral rationalism tends to assume that there is just one kind of blameworthiness and that it is moral blameworthiness.Footnote 19 You’ll notice that the argument for moral rationalism about requirement is stated in terms of moral blameworthiness.Footnote 20 This is why A is arguably a conceptual truth, as Portmore (2011: 44) and Darwall (2016: 269) contend. You might think that a coherent practical perspective is going to blame you just in case you freely and knowledgeably make a mistake from that perspective. If you don’t do what you are morally required to do, then you are making a moral mistake. When you freely and knowingly make such a mistake, it is your fault that you made it, and thus morality blames you for it. So, A is plausibly (conceptually) true.

Yet B is not a conceptual truth. B holds that blameworthiness from the moral perspective entails a rational mistake (violating a rational requirement). We need some argument to justify linking moral blameworthiness to rationality in this way. To avoid begging the question, this argument should not itself assume that moral blameworthiness just is rational blameworthiness.

Portmore does provide an argument for B, but the argument itself assumes that moral blameworthiness is rational blameworthiness. It begins by claiming that:

B1. S is morally blameworthy for some action only if S has the capacity to respond to both moral and non-moral reasons.Footnote 21

B1 says that responsiveness to both moral and non-moral reasons “opens the door” to assessments of moral blameworthiness (2011: 48). Such responsiveness is, in other words, what makes an agent’s actions eligible to be assessed for moral blameworthiness (and moral praiseworthiness).

Portmore remarks, “Surely, it cannot be that the very capacity that opens the door to an agent’s being blameworthy is the one that leads her to perform blameworthy acts” (48). In other words:

B2. If B1, then S can’t be morally blameworthy for flawlessly responding to both moral and non-moral reasons. [cf. Portmore’s 2.19, 2011: 48]

The conjunction of B1 and B2 entails:

B3. Therefore, S can’t be morally blameworthy for flawlessly responding to both moral and non-moral reasons.Footnote 22

If B3 is true, then B arguably follows.Footnote 23 Nonetheless, this argument for B is a complete failure.

An initial problem is the ambiguity that runs through B1–B3. Start with B1. Does it claim that moral blameworthiness requires the capacity to respond to some or all non-moral reasons? It is plausible that moral blameworthiness requires the capacity to respond to some non-moral reasons, such as epistemic reasons. If you aren’t capable of responding to epistemic reasons about what your moral reasons are, it isn’t clear that you can be capable of responding to your moral reasons at all. Yet moral blameworthiness doesn’t require the capacity to respond to every non-moral reason. Why would moral blameworthiness about any arbitrary φ require the capacity to respond to, say, aesthetic reasons?

The mere fact that moral blameworthiness requires the capacity to respond to some non-moral reason or another is useless in this context. Portmore needs to block the claim that self-interested reasons can make it rationally permissible to act against your moral requirements. Hence, Portmore needs B1 to entail that moral blameworthiness requires, more specifically, the capacity to respond to self-interested reasons. We should understand all of B1–B3 in like manner. For example, B3 should be interpreted to entail that S can’t be morally blameworthy for flawlessly responding to both moral and self-interested reasons.

Now that we’ve clarified the ambiguity, we can see that the premises aren’t plausible unless we assume that moral blameworthiness is rational blameworthiness. Regarding B1, we have no reason to suppose that moral blameworthiness requires the capacity to respond specifically to self-interested reasons. Suppose that there were moral weirdos, creatures capable of responding to moral reasons but incapable of responding to self-interested reasons.Footnote 24 I submit that such a creature could be morally blameworthy if it freely and knowingly acted against its decisive moral reasons. For example, the creature might be morally blameworthy for murdering its rich neighbor to benefit its relatively poor neighbor. If we are really concerned with moral blameworthiness and are not assuming that moral blameworthiness just is rational blameworthiness, it is hard to see why the capacity to respond to self-interested reasons is required for being morally blameworthy. And so B1 is in doubt.

B2 suffers from a similar problem. Suppose that morality cares only about some—not all—reasons that rationality cares about. That is, suppose that morality’s deontic verdicts are a function solely of moral reasons and that it ignores non-moral (or self-interested) ones. Then it would be no surprise that morality would blame you for something that you were rationally permitted to do. That is, it would be no surprise that you could be morally blameworthy for flawlessly—from the point of view of rationality—responding to moral and non-moral (or self-interested) reasons. Once we take the distinction between moral and rational blameworthiness seriously, it isn’t clear why we should endorse B1 or B2.

There is a grain of truth (grain of soundness?) in Portmore’s argument for B. If we switch to rational blameworthiness, something in the neighborhood of B1–B3 will be sound. Perhaps a certain psychopath has no capacity to respond to moral reasons. He nonetheless can be rationally blameworthy for failing to respond appropriately to his self-interested reasons. For example, he might freely and knowledgeably choose a trivial immediate benefit and thereby sacrifice his long-run wellbeing. But he can’t be rationally blameworthy for failing to respond to moral reasons, for he lacks the capacity to do so (cf. Portmore, 2011: 48, nt 54). So:

B1′. S is rationally blameworthy for failing to respond correctly to moral reasons only if S has the capacity to respond correctly to moral reasons.

“Surely, it cannot be that the very capacity that opens the door to an agent’s being blameworthy [for failing to respond correctly to moral reasons] is the one that leads her to perform blameworthy [failures to correctly respond to moral reasons]” (48).

B2′. If B1′, then S can’t be rationally blameworthy for flawlessly (from the rational perspective) responding to moral reasons.

Put the two premises together and you get:

B3′. Therefore, S can’t be rationally blameworthy for flawlessly (from the rational perspective) responding to moral reasons.

(I do not claim that B1′–B3′ is an equally plausible extension of Portmore’s B1–B3. I claim that B1–B3 fails, whereas B1′–B3′ is plausibly sound and seems to capture the grain of truth in B1–B3.)

Yet B1′–B3′ is of no use to the moral rationalist. Morality and rationality presumably agree that one can flawlessly respond to a moral reason even if one acts against it. They might disagree, however, about the conditions under which acting against a reason is flawless. Morality might demand an opposing moral reason of at least equal weight. Rationality might make a less onerous demand, namely that there be some moral or non-moral reason of at least equal weight. Since morality and rationality might disagree about what counts as a flawless response to moral reasons, it shouldn’t be a surprise if one can be morally blameworthy without being rationally blameworthy.

Of course, Portmore’s account of morality contends that morality takes into account non-moral reasons by giving them merely justifying weight. He will, therefore, reject the idea that morality blames you even when rationality doesn’t. Yet this contention can’t help him here without vicious circularity. His argument for that contention relies on moral rationalism as a (sub-)premise (Sect. 3). Hence, he can’t use his contention that non-moral reasons have merely justifying moral weight as a (sub-)premise in his argument for moral rationalism.

The second premise in Portmore’s argument for moral rationalism is B. His argument for B fails. Just as arguments for moral rationalism must be sensitive to the conceptual distinction between morality and rationality, such arguments must also be sensitive to the distinction between moral and rational blameworthiness. Portmore’s argument for B fails to have the latter sensitivity, so it can’t be used to show that B is plausible.

5.2 An equally plausible argument for moral rationalism about permission

The best argument (at least from Portmore’s perspective) for moral rationalism about requirement is the Blameworthiness and Requirement Argument. Now that we’ve seen why Portmore’s argument for its controversial second premise fails, it is easy to see that there is an equally plausible argument for moral rationalism about permission.

To extend the Blameworthiness and Requirement Argument, let’s use ‘morally blameless’ as equivalent to ‘not morally blameworthy’. That gives us:

The Blameworthiness and Permission Argument

A*. If S is morally permitted to φ, then S would be morally blameless for freely and knowledgeably φ-ing.

B*. S would be morally blameless for freely and knowledgeably φ-ing only if S is rationally permitted to φ.

C*. Therefore, if S is morally permitted to φ, then S is rationally permitted to φ.

A* links moral permissions with moral blamelessness. B* links moral blamelessness with rational permissions. Together A* and B* give us C*, which is just moral rationalism about permission, the idea that moral permissions are also rational permissions.

If A is a conceptual truth, then A* is too. If you do what you are morally permitted to do, then you’ve had a moral success. And morality is not going to blame you for succeeding (cf. Skorupski 1999: 146, 150). Consider an analogy. Dad says to Kid, “I prefer that you wash my car, but it is permissible to clean up your room instead.” Kid cleans his room instead. Dad can’t sensibly blame Kid for freely and knowledgeably doing what he was expressly permitted to do.

Furthermore, Portmore’s B seems committed to A*. A* says that moral permissions guarantee moral blamelessness. Portmore’s B entails that rational permissions guarantee moral blamelessness.Footnote 25 If moral permissions don’t guarantee moral blamelessness, it is hard to see why rational permissions would do so. In other words, if Portmore rejects A*, it makes it even harder for him to make B plausible.Footnote 26

I admit that the second premise of the argument for MRPERM, B*, is not particularly plausible; however, this admission doesn’t break the parity. The best argument for B was a complete failure. That’s why I tend to think that the Blameworthiness Arguments for moral rationalism about requirement and moral rationalism about permission are equally bad arguments. With that said, there is a case to be made that they really are good arguments.

Darwall’s actual arguments for moral rationalism fail for the same reason that Portmore’s do.Footnote 27 Yet he has the resources to accept both Blameworthiness arguments and both kinds of moral rationalism, and to allow that Bernie is morally required to remain in safety. On his view, morality is equal accountability to all members of the moral community, including oneself (2006: 100–104, especially 102). Since an agent is accountable to himself just as much as to others, it is no mystery why Bernie’s self-interested reasons would have moral requiring weight and would make him morally required to remain in safety. Of course, that’s no good for Portmore’s purposes. He needs a picture that vindicates moral rationalism about requirement without vindicating the claim that self-interested reasons have (moral) requiring weight.

I have provided an initial case, then, for the claim that the best argument for moral rationalism about requirement can be extended to an equally good (bad) argument for moral rationalism about permission. Further reflection confirms this initial case.

Two obvious worries about the argument for MRPERM are also worries about the argument for MRREQ. First, one might worry that A* is false because someone, Arthur, might perform a morally permitted act for morally illicit motives. Portmore’s argument for B faces the same worry, and he responds that “although it is clear that Arthur is blameworthy, it is far from clear that Arthur is blameworthy for [performing the morally permitted act]” (2011: 45).Footnote 28

Second, there are plausible counterexamples to MRPERM (Harman, 2016). But there are also plausible counterexamples to MRREQ. Many philosophers hold that MRREQ should be rejected because the correct moral theory makes unreasonable demands, i.e., some moral requirements are not rational requirements (e.g., Wolf, 1982; Sobel, 2007a, 2007b; Dorsey, 2016: ch 3). Given the conceptual nature of Portmore’s argument for MRREQ, he uses MRREQ as a constraint on moral theorizing.Footnote 29 Putative counterexamples to MRREQ must be interpreted instead as counterexamples to the putatively correct moral theory. Yet the parallel argument for MRPERM is equally conceptual. Thus, if MRREQ should be used as a constraint on moral theorizing, then MRPERM should too.

At the end of the day, Portmore will likely note that MRREQ seems more plausible to him than MRPERM. So, here’s a challenge for him to think about as he drifts off to sleep. The conjunction of A* and B* entails MRPERM. If the latter is false, then so is the former. I predict that however he explains why that conjunction is false, he will find an equally plausible explanation for why the conjunction of A and B (the premises of Portmore’s argument for MRREQ) is false. If I’m right, his denial of MRPERM would commit him to rejecting his own strategy for defending MRREQ.Footnote 30 Since I reject both versions of moral rationalism and both blameworthiness arguments, I would consider that a win.Footnote 31,Footnote 32

The goal of this section was to defend the transition from Portmore’s 2 (Liv is morally permitted to remain in safety) to my 2* (Bernie is morally required to remain in safety). The only vulnerable part of Portmore’s defense of 2 is moral rationalism about requirement. The only vulnerable part of my extension to 2* is moral rationalism about permission. I argued that the best argument for the former moral rationalism can be extended into an equally good (or equally bad) argument for the latter moral rationalism. In short, I have shown that Portmore’s 2 and my 2* stand or fall together.

6 Defending the extension to 3*: weighing reasons

6.1 Dual scale

In this section, I argue that 3 entails 3*. 3 and 3* are principles concerning how to weigh reasons against one another in order to determine a deontic verdict. It is tempting to think that reasons must be weighed on a “single scale” (cf. Curtis, 1981: 31). The reasons for φ, Rφ, go in one pan and the reasons against φ, R~φ, go in the other. φ is permissible iff the reasons against φ are not weightier than the reasons for φ. Such a view is simple and popular, but it cannot be Portmore’s.

figure a

Portmore’s account of supererogation depends on the assumption that different kinds of reasons (altruistic vs. self-interested) have different proportions of justifying and requiring weight. Such an assumption is incompatible with the single scale model of weighing reasons (Gert, 2004: ch 5; 2007; Tucker: Sects. 4–6). The problem isn’t with the image of the scale, but with the image of a single scale.

Like the single scale model, Dual Scale holds that the deontic status of an action is determined by the relative weights of the reasons for and against it. But it holds that, to fully capture the two different weights (justifying vs requiring), we must appeal to two scales. Permission Scale determines whether an act is permissible: φ is permissible iff the justifying weight for φ (JWφ) is at least as weighty as the requiring weight against φ (RW~φ).

We should introduce one more term, so that we can understand my name for the second scale. If your only goal in life is to eat every rock that you find, I might hold up a rock and remark that your aim commits you to eating this rock. In this sense of commitment, φ is a commitment iff ~φ is impermissible. If you can be committed to eating this rock without it being permissible to do so, then you are in what is often called a prohibition dilemma (both φ and ~φ are impermissible). But set such things aside. More relevant to this paper is that φ is required iff φ is both permissible and a commitment (i.e., ~φ is impermissible).

Commitment Scale determines whether the act is a commitment (whether the alternative is impermissible): φ is a commitment iff the requiring weight for φ (RWφ) is weightier than the justifying weight against it (JW~φ). The two scales work together to determine whether φ is required. An act is required just when Permission Scale says that φ is permissible and Commitment Scale says that ~φ is impermissible.

figure b

Dual Scale is a model for how reasons are to be weighed. It does not itself come with a view about the weights of various kinds of reasons. (The diagrams for Single and Dual Scale are borrowed from my forthcoming.)

Elsewhere I develop and defend this model at length (2022, forthcoming, manuscript). Here I work with a simplified version. In the rest of this sub-section, I use the Liv and Bernie cases to illustrate how the model works. In the next sub-section, I show that, given three assumptions, Portmore’s 3 commits Portmore to both 3* of the extension and Dual Scale.

Liv must choose whether to jump on the grenade to save five soldiers (Sacrifice) or remain in safety (Safety). The standard otherist account supposes that the lives of the five soldiers have both justifying and requiring weight (say, 500 JWSacrifice and 500 RWSacrifice). The account treats self-interested reasons, such as the value of Liv’s life, as weighty merely justifying reasons (say, 1000 JWSafety and 0 RWSafety). Permission Scale entails that Sacrifice is permissible, because the justifying weight of Liv’s altruistic reason is weightier than the non-existent requiring weight of her self-interested reason (500 JWSacrifice > 0 RWSafety). Commitment Scale entails that the Sacrifice isn’t a commitment, because the requiring weight of the altruistic reason is outweighed by the justifying weight of her self-interested reason (500 RWSacrifice < 1000 JWSafety).

figure c

Since the altruistic act is permissible but not a commitment (and so not required), one goes beyond the call of duty in performing it. At least, one goes beyond the call on the plausible assumption that it is morally better to save the five soldiers than to save one’s own life.

Bernie must choose whether to jump on the grenade to spare a soldier from a mild burn (Sacrifice) or remain in safety (Safety). Let’s suppose that the prevention of a mild burn has 1 unit of justifying and requiring weight (1 JWSacrifice and 1 RWSacrifice). A minor change to the standard otherist account will allow us to make sense of Bernie’s being morally required to remain in safety. Let’s say that self-interested reasons have 100 times less requiring weight than justifying weight. In the Liv case, we assumed that the agent’s life had 1000 justifying weight for Safety. So Bernie’s life has a measly 10 requiring weight for Safety; however, that measly amount is enough to make Bernie morally required to remain in safety. Permission Scale holds that Sacrifice is impermissible (1 JWSacrifice < 10 RWSafety). Commitment Scale says that Sacrifice is not a commitment (1 RWSacrifice < 1000 JWSafety). Thus, Sacrifice is neither permissible nor a commitment. Bernie is, in other words, morally required to remain in safety.Footnote 33

figure d
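The verdicts in the Liv and Bernie cases can be rendered as a short computation. The following is only an illustrative sketch: the function names are mine, and the weights are the stipulated numbers from the two cases above.

```python
# Illustrative sketch of the Dual Scale model (function names and
# structure are mine; the weights are the numbers stipulated in the text).

def permissible(jw_for, rw_against):
    # Permission Scale: φ is permissible iff JWφ ≥ RW~φ
    return jw_for >= rw_against

def commitment(rw_for, jw_against):
    # Commitment Scale: φ is a commitment iff RWφ > JW~φ
    return rw_for > jw_against

def required(jw_for, rw_for, jw_against, rw_against):
    # φ is required iff φ is permissible and ~φ is impermissible
    return permissible(jw_for, rw_against) and commitment(rw_for, jw_against)

# Liv: altruistic reason (500 JW, 500 RW) vs. self-interest (1000 JW, 0 RW)
liv_sacrifice_permissible = permissible(500, 0)    # True: 500 ≥ 0
liv_sacrifice_commitment = commitment(500, 1000)   # False: 500 ≤ 1000
# Sacrifice is permissible but not required: it is morally optional.

# Bernie: altruistic reason (1 JW, 1 RW) vs. self-interest (1000 JW, 10 RW)
bernie_sacrifice_permissible = permissible(1, 10)  # False: 1 < 10
bernie_safety_required = required(1000, 10, 1, 1)  # True: Safety is required
```

The sketch makes vivid how a merely justifying reason (Liv’s self-interest, with 0 requiring weight) can block a requirement without blocking permissibility, whereas even a small amount of requiring weight (Bernie’s 10) can generate a requirement.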

6.2 From 3 to both 3* and dual scale

Now that we understand Dual Scale, I will rely on three assumptions and Portmore’s 3 to establish both 3* and Dual Scale. The details are tedious. If you are willing to assume that Dual Scale is true, then defending the transition from 3 to 3* is easy: the Permission Scale vindicates Portmore’s 3 (Portmore’s 3 is essentially the “only if” part of the Permission Scale) and the Commitment Scale vindicates the extension’s 3* (3* is essentially the “only if” part of Commitment Scale). If you are satisfied, then feel free to skip to Sect. 7. If you don’t want to assume that Dual Scale is true, then you’ll have to wade through the tedium (sorry!).

Recall:

3. If an agent has some moral requiring reason, MRR, for φ but is permitted to ~φ (and so is not all-in required to φ), then there is a reason with at least as much justifying weight for ~φ as MRR has moral requiring weight for φ.

3 entails that ~φ is permissible only if JW~φ ≥ RWφ. My first assumption is one that Portmore seems to make, namely (Sufficiency) ~φ is permissible if JW~φ ≥ RWφ.Footnote 34 The conjunction of Portmore’s 3 (the “only if” part) and Sufficiency (the “if” part) gives us the full:

Permission Scale: φ is permissible iff JWφ ≥ RW~φ.

Permission Scale entails Commitment Scale. The picture of Dual Scale from Sect. 6.1 illustrates this. Notice that Commitment Scale is just the mirror image of Permission Scale after swapping φ and ~φ. For example, after you swap φ and ~φ, the left side of Permission Scale is the right side of Commitment Scale. This shouldn’t be surprising. Permission Scale models whether φ is permissible. Commitment Scale models whether the alternative is impermissible (aka, whether φ is a commitment). Put the two models together, and you model whether φ is required. To verify that Permission Scale entails Commitment Scale, we just need to clarify the relationship between assignments of permissibility of φ and impermissibility of ~φ.

My second assumption is:

No Overlap: no act is both (morally) permissible and impermissible.Footnote 35

Given (the “if” part of) Permission Scale and No Overlap, it follows that φ is impermissible only if JWφ < RW. My third and final assumption is:

No Gaps: every act is (morally) permissible or impermissible.

Given (the “only if” part of) Permission Scale and No Gaps, it follows that that φ is impermissible if JWφ < RW. Put these new “if” and “only if” parts together, and we get the intermediate Impermissibility Rule: φ is impermissible iff JWφ < RW. To get Commitment Scale, just apply the Impermissibility Rule to ~φ: ~φ is impermissible (aka φ is a commitment) iff JW < RWφ (equivalently: RWφ > JW). And then we have:

Commitment Scale: φ is a commitment (the alternative is impermissible) iff RWφ > JW~φ.
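As a sanity check on this derivation, one can verify by brute force that Permission Scale together with No Overlap and No Gaps (i.e., impermissibility is simply the absence of permissibility) yields both the Impermissibility Rule and Commitment Scale. The code below is my own illustrative sketch; the sampled weight values are arbitrary.

```python
# Sanity check (my sketch, not the paper's): derive the Impermissibility
# Rule and Commitment Scale from Permission Scale + No Overlap + No Gaps
# by checking every combination of a small sample of weights.

def permissible(jw_act, rw_alt):
    # Permission Scale: φ is permissible iff JWφ ≥ RW~φ
    return jw_act >= rw_alt

weights = range(5)  # arbitrary sample of weight values
for jw_phi in weights:
    for rw_phi in weights:
        for jw_not_phi in weights:
            for rw_not_phi in weights:
                # No Overlap + No Gaps: φ is impermissible iff not permissible
                impermissible_phi = not permissible(jw_phi, rw_not_phi)
                # Impermissibility Rule: φ is impermissible iff JWφ < RW~φ
                assert impermissible_phi == (jw_phi < rw_not_phi)
                # Commitment Scale: φ is a commitment (~φ impermissible)
                # iff RWφ > JW~φ
                commitment_phi = not permissible(jw_not_phi, rw_phi)
                assert commitment_phi == (rw_phi > jw_not_phi)
```

Every combination passes, mirroring the logical point in the text: Commitment Scale is just Permission Scale applied to ~φ and negated.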

So far, I’ve shown that once Portmore endorses 3 (and my three assumptions), he is stuck with Dual Scale whether he likes it or not. And once he’s stuck with Dual Scale, he is stuck with 3* too. 3* is essentially the “only if” part of the Commitment Scale. Recall:

3*. If an agent has a moral justifying reason, MJR, for φ but is not all-in morally permitted to φ, then there is a reason with more moral requiring weight for ~φ than MJR has justifying weight for φ.

3* says that φ is impermissible (~φ is a commitment) only if RW~φ > JWφ. When we are evaluating the deontic status of ~φ, Commitment Scale tells us the same thing, that φ is impermissible (aka: ~φ is a commitment) only if RW~φ > JWφ. In short, I have argued that endorsing 3 commits one to endorsing 3*, as well as the Dual Scale model of weighing reasons.

7 Portmore’s last chance

Portmore assumed 4 (Liv’s self-interested reason is the only candidate to explain Liv’s moral permission to remain in safety). My extension assumed 4* (Bernie’s self-interested reason is the only candidate to explain his moral requirement to remain in safety). Portmore’s last chance to break the parallel is for his assumption to be more plausible than mine. But it isn’t.

7.1 An objection to both 4 and 4*

Standard otherists deny that self-interest has moral requiring weight, but they don’t necessarily deny that agents have moral duties to themselves. Thus, they might concede that Bernie is morally required to remain in safety but then find some reason besides self-interest to explain this moral requirement. The following reasons are distinct from Bernie’s self-interest and yet are candidates to explain his moral requirement to remain in safety:

Autonomy: Bernie has moral requiring weight to prevent the destruction of his rational autonomy, which would result from his death (Schofield, 2019: 228).

Talents: Bernie has moral requiring weight to develop his worthwhile talents (Portmore 129). Chances are that his death would prevent at least one worthwhile talent from being sufficiently developed.

Promises: Promises to oneself have moral requiring weight, at least until we let ourselves off the hook (Rosati, 2011). Given that certain goals or career aspirations may involve making promises to oneself, it is plausible that Bernie’s death would break some promise to himself.

Hence, 4* is false. There are other candidates besides Bernie’s self-interested reasons to explain why he is morally required to remain in safety.

Nonetheless, the parallel between Portmore’s argument and the extension remains intact. 4 is also false. There are other candidates to explain Liv’s permission to remain in safety, such as altruistic but partial reasons. Liv’s death will harm those who love her, especially any dependents, and it is widely held that we have very weighty reasons to promote the interests of our beloved.

The Liv case is mine, but 4 is also false for Portmore’s Fiona case (125). Fiona can use her savings self-interestedly (as the down payment that finalizes her home purchase) or altruistically (as a donation to Oxfam). Fiona is morally permitted to forgo the vastly more beneficial Oxfam donation, but her self-interest is not the only candidate to explain her moral permission. If Fiona doesn’t complete the home purchase, she presumably breaks significant promises and harms people, such as the owners of the house she promised to buy and the realtors who don’t get paid. These harms to others are especially weighty because she would be causing rather than merely allowing them.

The above objections to 4 and 4* win the battle but lose the war. They win the battle, because they show that, in the particular Liv (Fiona) and Bernie cases at hand, there is another candidate explanation. They lose the war, because 4 and 4* are probably true for some Liv and Bernie case or another. In the rest of the section, I defend a general strategy for defending 4* and explain why you can’t reject that general strategy without rejecting Portmore’s 4.

7.2 Why the extension wins the war

Consider what happens if we gradually decrease the cost of jumping on the grenade. We notice a corresponding decrease in how much moral requiring weight there is to remain in safety. There is less moral requiring weight when Bernie is just paralyzed from the neck down and less still when he just loses his non-dominant hand. Eventually, we reach a point at which he is no longer required to remain in safety, holding fixed that the only benefit to jumping on the grenade is preventing the mild burn.Footnote 36 In other words:

Requiring Weight Correlation: the weight of Bernie’s self-interested reasons is systematically correlated with the moral requiring weight that he has to remain in safety.

The extension’s conclusion—that self-interested reasons have moral requiring weight—if true, provides a simple, straightforward explanation of Requiring Weight Correlation. Unless a potential alternative explains the Requiring Weight Correlation or at least covaries with Bernie’s self-interested reasons, it will fail to explain Bernie’s moral requirement in some version of the Bernie case.

For example, consider reasons to protect one’s autonomy, reasons to develop one’s talents, and reasons to keep promises to oneself. They neither explain Requiring Weight Correlation nor covary with self-interested reasons. There is, therefore, some version of the Bernie case in which these alternatives are not candidate explanations. Simply revise the Bernie case as follows: jumping on the grenade will not kill Bernie, but will paralyze him from the waist down; Bernie has a unique but undeveloped talent for musical composition, one worth developing at the expense of any other talent he might have (cf. Portmore 129); and he has made no promises to himself about the future, except perhaps that he will try to be more altruistic.

In this revised case, Bernie is still rationally required to remain in safety. It is foolish to endure lifelong paralysis for the sake of preventing someone else’s mild burn. Thanks to moral rationalism about permission and Dual Scale (or 3*), it follows that Bernie is morally required to remain in safety and that these self-interested reasons suffice for (some reason that has) moral requiring weight. Now, however, the only candidates to explain his moral requirement to remain in safety are those very self-interested reasons. In the revised case, jumping on the grenade is no obstacle to preserving his rational autonomy, developing his talents, or keeping promises to himself.

Perhaps I’ve missed some other candidate explanation. But don’t get tunnel vision. Don’t forget that an alternative needs to do more than explain Bernie’s moral requirement in the above two versions of the Bernie case. My revisions to the Bernie case were an instance of:

General Strategy*: modify the Bernie case so that (1) the alleged alternative is not a candidate to explain Bernie’s moral requirement to remain in safety, and (2) the greater self-interested benefits make it rationally impermissible to secure the smaller altruistic benefits.

The resulting Bernie case, when combined with moral rationalism about permission and Dual Scale (or 3*), underwrites the extension. To block the extension’s conclusion (while endorsing 1*–3*), you’ll need an alternative that blocks General Strategy*. To block General Strategy*, you’ll need an alternative that explains Requiring Weight Correlation or covaries with self-interested reasons. (Portmore’s 4 is false in the original Liv case, but he can defend 4 with a parallel strategy.Footnote 37)

If General Strategy* fails, then Portmore’s 4 is false. Bernie’s moral requiring weight to remain in safety systematically covaries with the weight of his self-interested reasons. So, to block General Strategy*, you need an alternative that both has moral requiring weight and systematically covaries with the weight of his self-interested reasons. Suppose there is one. Call it Alt. Since Alt covaries with self-interest, it will be present not only in all Bernie cases, but also in all Liv cases. Portmore does not allow requiring weight to outstrip justifying weight (2011: 137–143). Hence, Alt would have justifying weight too, and therefore would be an alternative to self-interest’s having justifying weight. In short, any alternative that makes the extension’s 4* false for every Bernie case will make Portmore’s 4 false for every Liv case. 4 and 4* stand or fall together.

8 Conclusion

In this paper, I brought together three neglected topics: going too far beyond the call of duty, moral rationalism about permission, and how to weigh reasons when some reasons have a different proportion of justifying and requiring weight than others. I brought them together to challenge the standard otherist account of supererogation, as championed by Portmore.

If self-interest isn’t a moral reason, as the otherist contends, it is initially unclear how the otherist can make sense of supererogation’s optionality. By relying on moral rationalism about requirement, Portmore argues that self-interested reasons have moral justifying weight, even though they are non-moral reasons. He assumes that if self-interested reasons had moral requiring weight, they would be moral reasons, which would be tantamount to giving up on otherism. And thus Portmore will balk at the idea that Bernie’s self-interested reasons have moral requiring weight.

I have argued, however, that Portmore’s position is unstable. Portmore’s argument that self-interested reasons have moral justifying weight can be extended to show that they also have moral requiring weight. Consequently, Portmore and like-minded otherists must give up the moral rationalism that motivates their account of supererogation’s optionality or they must concede both that Bernie’s self-interested reason has moral requiring weight and that Bernie is morally required to remain in safety. In short, I have shown that Portmore must choose between two things near and dear to his heart: his moral rationalism and his otherism.Footnote 38