1 Introduction

Recently, there has been much discussion of “consequentializing” moral theories.Footnote 1 The idea is that if we allow value to be agent- and time-relative, we can take an apparently non-consequentialist theory and reformulate it in consequentialist terms, so that it directs all agents to maximize the good.Footnote 2 If a deontologist proposes an absolute constraint against lying, for example, the consequentializer can reformulate that as (roughly) asserting that each agent should assign infinite disvalue to her own lies. Some of the recent literature has revolved around the mechanics of consequentializing: can we consequentialize all non-consequentialist theories, and, if so, how?Footnote 3 In this paper, I will set aside that concern, granting what Dreier calls the Extensional Equivalence Thesis, which says that “each plausible moral view has a consequentialist extensional equivalent: the two views agree on the deontic status of every act” (Dreier 2011, p. 98).

The more important question, I think, is: assuming we can consequentialize some apparently non-consequentialist theory, should we? The debate on this point has been hampered by a failure—by both advocates and critics of consequentializing—to realize that what may appear to be a unified front is not. Most of the literature treats consequentializing as a single movement, and no one clearly distinguishes the three distinct justifications that have been offered for consequentializing non-consequentialist theories. This is especially important since, as I will show, two of the arguments rely on incompatible premises. The first aim of this paper is accordingly to distinguish and clarify those three arguments. I will then evaluate each argument in enough depth to show that, at least as developed thus far, none of the arguments constitutes a serious threat to non-consequentialism. Consequentializers have not yet given the committed non-consequentialist a reason to revise or reformulate her theory. Along the way, I will show how the literature has suffered due to a failure to distinguish the different arguments. Finally, I will briefly suggest that the failure of the consequentializers’ arguments gives us reason to be wary of certain work in deontic logic, linguistics, decision theory, and economics, which uses models with a consequentialist structure to represent non-consequentialist moral theories.

2 The intuitive argument

The first argument for consequentializing is probably the most familiar.Footnote 4 It has roots in a number of places, but it is stated most clearly by Douglas Portmore:

[M]any non-consequentialists acknowledged that there is something deeply compelling about consequentialism… [T]he motivation for consequentializing is to keep what’s compelling about act-utilitarianism (i.e. its consequentialism) while avoiding what’s problematic about it (i.e. its abundant counter-intuitive implications). The consequentializing project is, then, to come up with a theory that achieves these two aims by combining consequentialism’s criterion of rightness with a more sophisticated account of how outcomes are to be ranked. (Portmore 2007, p. 41)Footnote 5

Portmore’s idea is this: there is something attractive about the structure of act-utilitarianism and, more generally, act-consequentialism. According to most consequentializers it is, roughly, the idea that it is always permissible to bring about the best outcome.Footnote 6 Following the existing literature, let’s call this the Compelling Idea. This attractive structure, though, is paired with deeply counter-intuitive verdicts on individual cases. Many non-consequentialist theories, on the other hand, deliver intuitive verdicts on cases, but have a structure that is less attractive than consequentialism’s. By consequentializing a non-consequentialist theory, we can produce a theory that retains the intuitive verdicts of non-consequentialism while also gaining the attractive structure of consequentialism—in other words, it can accommodate the Compelling Idea. Thus, we get a theory that is more intuitively compelling than either standard versions of consequentialism or standard versions of non-consequentialism.

That is what I will call the intuitive argument for consequentializing. It depends on three key claims: (1) the Compelling Idea truly is compelling, (2) consequentialized versions of non-consequentialist theories can capture the Compelling Idea, and (3) nothing equally compelling is lost in consequentializing a theory. The first two of these claims have been discussed at some length in the literature. Although some philosophers have rejected the first, most—including a number of non-consequentialists—have agreed that there is something attractive about the idea that agents should always be permitted to maximize the good.Footnote 7 I will accordingly grant that claim. Mark Schroeder (2007) has forcefully argued that the second claim is false. In consequentializing a non-consequentialist theory, the consequentializer permits each agent to bring about the outcome that is best-relative-to-her. Goodness-relative-to, Schroeder argues, is not the same thing as (or even closely related to) goodness simpliciter. Thus, Schroeder argues, a consequentialized theory that appeals to agent-relative value does not capture the Compelling Idea. Portmore and Dreier have responded, arguing that there is a relevant connection between goodness and goodness-relative-to, such that consequentialized theories can be said to capture the Compelling Idea.Footnote 8 I won’t here wade into that debate, because even if Portmore and Dreier are correct, the intuitive argument for consequentializing still requires the third claim. And I don’t believe that most committed non-consequentialists have reason to accept that third claim.

Unlike the first two claims, the third claim has not been extensively discussed. Portmore and a few others do recognize its potential importance, though:

[M]oral theories do much more than just yield moral verdicts. Importantly, they provide different competing rationales for the deontic verdicts that they yield. Thus…an act-utilitarian, a Kantian, and a contractualist can all agree that the extension of permissible acts is just those that maximize utility, but even so they will provide different explanations for why this is so, for they necessarily accept different views about what the fundamental right-making and wrong-making features of acts are. (Portmore 2007, p. 60)Footnote 9

So Portmore seems to acknowledge that a contractualist, say, could accept that the consequentialized version of her theory captured the Compelling Idea and could accept that that Idea was legitimately compelling, while nevertheless rejecting consequentialization. She could do that by claiming that whatever intuitive plausibility the Compelling Idea has, it is outweighed by the intuitive plausibility of grounding rightness on interpersonal agreement. This type of response will be widely available. Nearly all non-consequentialists—even those who place great weight on intuitions about cases—argue that there is something attractive about the fundamental right-making properties they identify. If those properties would not survive consequentialization, then something compelling would be lost in consequentializing.Footnote 10

Second, in consequentializing a non-consequentialist theory, the account of value attributed to the consequentialized theory may seem intuitively implausible. If a non-consequentialist theory, for example, includes a constraint against lying even to prevent many future lies, then the consequentialized theory would need to assign (agent-relative) disvalue to my lying now that is many times greater than the disvalue it assigns to others’ lies, or even to my own future lies. At the limit, a theory that recognizes an absolute constraint against lying will make this difference in disvalue infinite. A non-consequentialist may reasonably find this axiology counter-intuitive:

[T]he theory of value must distinguish acts according to when and by whom they are performed, and assign greatly different values to such acts even when they are, in other respects, morally very similar. Such a theory would make claims about the value of actions that, taken by themselves, would be very hard to believe. (Woodard 2013, p. 262)Footnote 11

This seems to constitute a second respect in which consequentializing a theory could come at intuitive cost, and therefore a second way the non-consequentialist could reject the third claim.
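To make Woodard’s complaint concrete, here is one way the consequentialized axiology might be written out. The notation is mine, and this is only a rough sketch; no consequentializer is committed to this exact form:

$$V_a(o) \;=\; G(o) \;-\; K \cdot \mathrm{Lie}^{\mathrm{now}}_a(o) \;-\; \sum_{j \neq a} \mathrm{Lie}_j(o) \;-\; \mathrm{Lie}^{\mathrm{fut}}_a(o)$$

Here $V_a(o)$ is the value of outcome $o$ relative to agent $a$, $G(o)$ captures whatever else matters about $o$, the $\mathrm{Lie}$ terms register the relevant lies, and $K$ is some very large weight (with $K = \infty$ for an absolute constraint). Written out this way, the oddity is easy to see: acts of lying that are in every other respect alike receive wildly different values, depending only on when and by whom they are performed.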

If, then, there are multiple ways to reject the third claim, why hasn’t it been well-explored in the literature? I suspect it is because it is hard to know how to productively engage with it. If I found both the Compelling Idea and contractualism’s account of the fundamental right-making property to be intuitively compelling, I might be able to decide which of the two I found more compelling. But I’m not sure how, or whether, I could argue with someone who reached the opposite conclusion. In general, it is difficult to make philosophical arguments about the relative intuitive plausibility of different considerations. This means that the intuitive argument for consequentialization is likely to retain the character of a helpful suggestion. Portmore and his allies can tell the non-consequentialist that they believe her theory would be more intuitively plausible if it were consequentialized. But if the non-consequentialist disagrees, there won’t be much more to say. Even if the first two claims of the intuitive argument are correct, then, most non-consequentialists still have a reasonable route to resisting consequentialization.

3 The assimilation argument

In the last section, I outlined the first argument for consequentializing non-consequentialist moral theories, and I argued that, even if we grant two controversial premises, the argument relies on an intuitive weighing. So there seems to be no reason to think it need be convincing to a committed non-consequentialist. Nevertheless, several prominent consequentializers have claimed that non-consequentialists are compelled to consequentialize their theories:

The simple answer we may now give is that every moral view is consequentialist, that we common sense moralists as much as anyone are out to maximize the good. Of course, our understanding of the good may be an agent centered one, whereas the typical challenger has an agent neutral understanding, but this contrast will have to be engaged by some other argument. We don’t have to be embarrassed by the charge that we are ignoring the good, because the charge is just false. (Dreier 1993, pp. 24–5)

I will argue that the equivalence [between a theory and its consequentialized counterpart] is a strong one: equivalent theories, in this sense, are really just notational variants of one another. A moralist who subscribed to some non-consequentialist theory would not really be disagreeing with another moralist who subscribed to an equivalent consequentialist counterpart. (Dreier 2011, p. 97)

Since we are now all under the consequentialist umbrella, the question now becomes not whether we should be consequentialists or not, but whether we should be value-neutralists or value-relativists. (Louise 2004, p. 536)

[A]ll moral theories—including duty ethics, rights-based theories, and virtue ethics—can be represented in some utility function as claims about consequences. If correct, this shows that the theoretical divide between, say, duty ethics and utilitarianism is no greater than the divide between hedonistic utilitarianism and preferentialism. People advocating rival moral theories just make slightly different claims about how to evaluate consequences. (Peterson 2010, p. 155)Footnote 12

Since conclusions this strong obviously can’t be drawn from an argument like the intuitive one presented in the last section, these consequentializers must have a different argument in mind.

In a recent article, Dreier presents the most sophisticated version of what I will call the assimilation argument. According to what Dreier calls the Extensionality Thesis, “nothing but [deontic] extension matters in a moral view” (2011, p. 98).Footnote 13 It is clear enough how this Thesis, combined with the claim that all plausible non-consequentialist theories can be consequentialized, would yield Dreier’s conclusion that “every [plausible] moral view is consequentialist”: any plausible non-consequentialist theory could be put in consequentialist form, and then, by the Extensionality Thesis, we could conclude that the consequentialized version of the theory is not significantly different from its non-consequentialist counterpart. Thus, every plausible moral theory is a version of consequentialism. (Similar considerations suggest that every plausible moral theory is also a version of deontology—a point Dreier accepts and to which I will return later.Footnote 14)

So, the assimilation argument straightforwardly depends on the Extensionality Thesis. But isn’t that Thesis just very implausible? As Portmore noted, ethical theories purport to do more than simply identify what acts are right and wrong; they also seek to explain why those acts are right and wrong. The Extensionality Thesis therefore seems to be false, because ethical theories that declare the same actions right may nevertheless differ in their explanations of why those actions are right. Whereas a consequentialist may say an action is right because it maximizes the good, a Kantian may say the same action is right because it respects people as ends-in-themselves.

What, then, is Dreier’s argument for what appears to be such an implausible claim? He begins by noting that we shouldn’t simply look to what a theory labels “good”:

[R]ecall Rawls’ insistence that equality in an outcome does not make the outcome good, but does contribute to making right the act that produces it. Suppose you held Rawls’ view and I held the consequentialized counterpart. If we first thought that we were disagreeing, in that you claimed equality is not a good itself but only a contributor to right-making, and I claimed that equality is a good feature of outcomes and therefore a contributor to right-making, we could quickly realize that we had no disagreement at all, but only a notational difference. (Dreier 2011, p. 113)

Dreier’s claim here seems plausible. What difference does it really make that one theory calls equality good-making while the other eschews that label, when equality plays exactly the same functional role in each theory? Accordingly, Dreier concludes that when deciding whether a theory is consequentialist, we shouldn’t look to what the theory calls good or what relationship it claims holds between the good and the right. Instead, we should look at the functional role different elements play within a theory and what relationship they have to deontic verdicts. If two theories have the same deontic extension (i.e. declare the same actions to be right and wrong) and have the same internal structure, then we should treat those theories as equivalent—regardless of how they apply normative labels like “good”.

Still, though, even if we agree with Dreier that what matters is the functional role played by different properties, why think that all theories will have a structure that can appropriately be described as consequentialist? Dreier says,

I will set out my general reason for suspecting that the Extensionality Thesis is true. It is a reason that I owe to Foot, though she might not accept it in this form… Foot suggests that the notion of a good state of affairs is not a pre-theoretic one, but rather one that is induced by certain kinds of moral theories… I mean to join Foot by suggesting that any notion that we do have is moored securely to the role that it plays in proper choice. Insofar as we have some pre-theoretic idea, its specific content is too weak and thin for it to come apart from the notion of what we are to choose. (Dreier 2011, pp. 114–15)Footnote 15

Dreier’s idea (following Foot) is that we don’t have a concept of goodness (applied to outcomes) that is independent of thoughts about what we ought to do. In calling one outcome better than another, all we are really saying is that it is a more appropriate object of choice. That is, goodness just is choice-worthiness. If this is right, then the “internal structure” of any moral theory can reasonably be described as consequentialist: as one in which agents are directed to maximize the good.Footnote 16

4 The functional role of goodness

The key element of Dreier’s assimilation argument is the Foot-ian idea that goodness is, at least essentially, the same as choice-worthiness. This, supplemented by Dreier’s argument that we shouldn’t be overly concerned with the way different theorists use terms like ‘good’, in short order leads to the conclusion that any theory is a mere “notational variant” of some version of consequentialism. To reject the assimilation argument, then, the non-consequentialist must reject the identification of goodness with choice-worthiness. Notice that there are two different ways to do this. First, the non-consequentialist could identify a functional role for goodness other than choice-worthiness, or, second, she could argue that although goodness does contribute to choice-worthiness, it is not the only contributor to choice-worthiness and hence is not identical with it. I think there is some promise in a response of the first sort. If goodness, in addition to or instead of its role in choice-worthiness, plays a role in the normative assessment of attitudes—perhaps desire, hope, or regret—that are themselves not directly subject to deontic appraisal, that could constitute a distinct functional role for goodness, distinguishing it from choice-worthiness. Here, though, I want to pursue the second type of response to Dreier, because it is one that many non-consequentialists already endorse.

Many non-consequentialists explicitly identify goodness as one of several contributors to choice-worthiness. According to many deontologists, for example, we have a general obligation to promote the good. There are also, however, certain constraints on our actions which limit the ways in which we are permitted to do so. Thus, on many deontological theories a worse outcome could be more choice-worthy than a better one, if achieving the better one is ruled out by a constraint. On such theories, goodness is clearly not identical to choice-worthiness, and Dreier’s assimilation argument for consequentialization therefore fails.

This response seems quite straightforward, and it has the benefit of relying on claims explicitly made by many non-consequentialists. Dreier, though, would caution us not to put too much emphasis on the terms used by the deontologist. (Recall his example of the Rawlsian who described equality as right-making but not good-making.) To resist Dreier’s argument, the deontologist therefore needs to provide a reason for insisting that her terminology is apt, better reflecting the normative landscape than the terminology used by her consequentialist counterpart. (The Rawlsian arguably had no good reason to refuse to describe equality as good-making.)

I think the deontologist can provide such a reason. Robert Nozick, for example, considers a perspective that is in certain ways a cartoonish version of many deontological views. He imagines that someone might be moved by Kantian ideals of respect and the separateness of persons while also placing some value on the welfare of sentient beings. He therefore proposes a theory which directs agents to:

(1) [M]aximize the total happiness of all living beings; (2) [while placing] stringent side constraints on what one may do to human beings. Human beings may not be used or sacrificed for the benefit of others; animals may be used or sacrificed for the benefit of other people or animals only if those benefits are greater than the loss inflicted. (Nozick 1974, p. 39)Footnote 17

On this theory, the property grounding the second clause (rationality and the separateness of persons) is very different from the property grounding the first clause (happiness). This gives the theorist a prima facie reason to insist that, according to her theory, there exist two distinct moral properties. Further, the two properties have different internal structures and accordingly rank actions in very different ways. The first—happiness—is scalar. It produces a fine-grained ordering and (according to the theory) can be traded off. (The gain of happiness to one living being can be weighed against the loss of happiness to another.) The second, on the other hand, is binary, lumping actions into two buckets: permitted and forbidden. (It says never to perform actions which violate a constraint, but makes no distinctions amongst actions which violate constraints, or amongst those which don’t.)Footnote 18 Given these different structures and ranking functions, it does not seem at all arbitrary for this theorist to declare that the first property corresponds to goodness, while the second does not. When this deontologist insists that, according to her theory, there are two contributors to choice-worthiness, only one of which is goodness, she is therefore not using words idiosyncratically or arbitrarily; she is responding to the heterogeneous normative landscape her theory suggests.
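The contrast between the two clauses can be displayed compactly. In my own (deliberately crude) notation, a minimal sketch of the theory’s structure: let $H(x)$ be the total happiness resulting from act $x$, and let $C(x) = 1$ if $x$ violates a side constraint and $C(x) = 0$ otherwise. Then, roughly:

$$\mathrm{Perm}(a) \iff C(a) = 0 \ \text{ and } \ H(a) \geq H(x) \ \text{ for every available } x \text{ with } C(x) = 0$$

$H$ induces a fine-grained ordering and admits trade-offs; $C$ merely sorts acts into two classes and does nothing else. The deontologist’s claim is that only a property with the former structure deserves the label “goodness”.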

Dreier says that the content of our idea of goodness “is too weak and thin…to come apart from the notion of what we are to choose.” He doesn’t provide additional support for this claim, and indeed even characterizes it as a “suspicion” and a “hunch” (2011, pp. 111, 114). Accordingly, I think he is most naturally understood as offering a challenge to the non-consequentialist, to explain what goodness could be, distinct from choice-worthiness. If this is the right interpretation of Dreier, then many deontologists have an answer to it. They recognize, on the one hand, a general moral reason to promote wellbeing or to help others; and, on the other, certain considerations—constraints, rights, special obligations—that limit the ways in which we may pursue that goal. At least in cases where the grounds of these two components are different, it seems reasonable for the deontologist to say that, on her theory, goodness is best understood as only one contributor to choice-worthiness and hence isn’t identical to it.Footnote 19

Is there any other way of understanding Dreier that might rescue the assimilation argument? The obvious possibility would be for Dreier to offer a more robust argument for the claim that goodness should be conceptually identified with choice-worthiness. But he doesn’t offer such an argument, and it is hard for me to see how it would go, especially given the existence of deontologists who apparently provide a competing analysis of goodness.Footnote 20

5 The inconsistency of the intuitive and assimilating arguments

The third argument for consequentializing is rather different from the first two, and so before moving to it, it is worth pausing here to see how distinguishing the intuitive and assimilation arguments exposes quite a bit of confusion in the existing literature. The intuitive argument depends on the attractiveness of permitting agents to maximize the good—that is, on what we earlier called the Compelling Idea. But according to the assimilation argument, our concept of goodness doesn’t have significant content beyond choice-worthiness. If “good” simply means “worthy of choice,” though, then the claim that agents ought to be permitted to maximize the good—the Compelling Idea—is no longer intuitive; it is analytic! Put another way, the assimilation argument tries to show that a non-consequentialist theory and its consequentialized counterpart are really just the same theory. (As we saw earlier, Dreier says that a non-consequentialist “would not really be disagreeing” with someone who held the consequentialized counterpart theory.) The intuitive argument relies on the claim that a consequentialized theory is more intuitive than its non-consequentialist counterpart. But for one theory to be more intuitive than another, it must be distinct from it. The two arguments therefore appear to be incompatible: if consequentialism is the only game in town, it can’t be more intuitive than the (non-existent) alternatives.

This is an important result, because it shows that proponents of the intuitive and assimilating arguments aren’t on the same side. Though they both agree that (apparently) non-consequentialist theories should be consequentialized, they do so not merely for different reasons, but for incompatible reasons. The success of Portmore’s argument depends on showing that Dreier’s is wrong. (Portmore does argue against assimilation-type arguments, though his presentation leaves it unclear whether he realizes how crucial their rejection is to the success of his project.)Footnote 21 And the success of Dreier’s argument depends on showing that Portmore’s is wrong. (Dreier does not show any clear awareness of this.) A failure to appreciate this has, I think, caused much confusion. In the remainder of this section, I’ll illustrate this by discussing two recent and important articles on consequentializing.

First, take Paul Hurley’s recent essay on consequentializing and deontologizing. He pitches the article as an attack on consequentializing in general, and then begins by describing the consequentializer as asserting, “Philosophers who take themselves to be opposing consequentialism with alternatives are…in the grips of a deep confusion. Properly understood, they are merely opposing one form of consequentialism with another” (2013, pp. 123–4). This gloss accurately characterizes assimilating consequentializers. When I claim to be a deontologist, according to Dreier, I am not really disagreeing with someone who asserts the consequentialized counterpart of my theory. But Hurley’s gloss does not accurately characterize intuitive consequentializers. Portmore acknowledges that I can consistently reject consequentialism in favor of deontology. Of course, Portmore thinks that in doing so I assert a theory that is less plausible than its consequentialized counterpart. But that doesn’t mean I am confused—just mistaken. Hurley, therefore, despite claiming to target consequentializers in general, initially seems to take only assimilating consequentializers as his target.

As the article unfolds, Hurley’s key claim is that any consequentialist theory can be “deontologized”, and that the resulting deontological theory can capture a version of the Compelling Idea that is at least as compelling as the version captured by consequentialism. Thus, whatever advantage consequentialism was supposed to enjoy can be claimed by deontology as well. The Compelling Idea, though, was a key element only of the intuitive argument. In undermining the intuitive advantage consequentialism was supposed to have over deontology, Hurley therefore presents an objection to the intuitive argument, but leaves the assimilation argument untouched. Indeed, it seems to me that assimilators like Dreier should welcome the conclusion of Hurley’s argument. If, as Dreier says, the Extensionality Thesis is true and all that matters in a moral theory is its deontic extension, then we should expect that there will be a number of alternative frameworks which we can use to represent moral theories. The choice between frameworks will need to be made on other grounds. (What grounds? I’ll return to this in the next section.) Overall, then, Hurley’s argument is pitched as an attack on consequentializing in general, begins by characterizing consequentializing in assimilating terms, but then offers an argument that challenges only intuitive consequentializers.Footnote 22

Mark Schroeder’s (2007) well-known critique of consequentializing and the responses to it exhibit a similar confusion. As we saw above, Schroeder’s main point is that the good-relative-to relation doesn’t have any clear connection to goodness, and accordingly that the consequentializer can’t claim to capture the Compelling Idea. Though Schroeder doesn’t explicitly distinguish different arguments for consequentializing, his main target appears to be the intuitive argument.Footnote 23 As we saw earlier, the intuitive argument does crucially depend on the claim that consequentialized theories can capture the Compelling Idea, and so, appropriately, intuitive consequentializers have responded to Schroeder.Footnote 24 Dreier, however, has also attempted to respond to Schroeder, to show that a consequentialized theory can still capture the Compelling Idea (2011, p. 101). This is somewhat mysterious. As we saw above, Dreier’s argument depends on two claims: the Extensional Equivalence Thesis, which says that all plausible moral theories have a consequentialist extensional equivalent; and the Extensionality Thesis, which says that nothing but deontic extension matters in a moral theory. Those two claims suffice to establish Dreier’s primary conclusion, that all moral theories are versions of consequentialism. The Compelling Idea doesn’t appear in that argument.

So what is going on? Why does Dreier bother to defend a claim, the Compelling Idea, which doesn’t figure in his argument—and indeed (as we’ve now seen) is inconsistent with it? Dreier worries that Schroeder’s argument undermines the Extensional Equivalence Thesis (2011, p. 98). I think he is right about this—though not, as he seems to think, because Schroeder undermines the ability of consequentialized theories to capture the Compelling Idea. The Extensional Equivalence Thesis requires that there exist a consequentialist extensional equivalent for each moral theory. It does not require that the resulting theory capture the Compelling Idea. (That version of the Extensional Equivalence Thesis would say that each plausible moral theory has a consequentialist extensional equivalent which retains whatever is compelling about consequentialism. This addition is not necessary to make Dreier’s argument valid.) The problem for Dreier is that, when Schroeder argues that the good-relative-to relation has no connection to goodness, in addition to undermining consequentialism’s ability to capture the Compelling Idea he plausibly also undermines the claim that there even exists a consequentialist equivalent to many theories. For, if consequentialism by definition requires that agents maximize the good, and goodness-relative-to is not a species of goodness, then theories which direct agents to maximize goodness-relative-to are not consequentialist.

Does this mean, then, that after a long detour we should conclude that Schroeder’s argument does after all apply to both intuitive and assimilating consequentializers, and that confusing the arguments has done no harm? No. The force of Schroeder’s argument is different in each case, leaving Dreier and other proponents of assimilation with a much lower bar to clear. While a proponent of the intuitive argument must respond to Schroeder by showing how we can consequentialize any given theory while retaining the Compelling Idea, Dreier need only show that we can consequentialize the theory. That means that all Dreier must establish is that goodness-relative-to can fairly be described as a type of goodness. Even a revisionary analysis—one that does not retain the intuitively compelling character required for the intuitive argument—would be sufficient to complete the assimilation argument. I do not here have the space to discuss the substance of the various replies to Schroeder. I will simply note that, due to the different burdens of proof, it is quite possible that even if intuitionists like Portmore do not have a good response to Schroeder, assimilationists like Dreier may.Footnote 25

6 The pragmatic argument

The preceding has shown, I hope, that intuitive consequentializers, like Portmore, are offering a very different argument from that of assimilating consequentializers, like Dreier. And (as we saw by looking at Hurley and Schroeder), the debate over consequentializing looks quite different when we keep this distinction in mind. In the remainder of this article I would like to discuss a third argument for consequentializing which, in certain respects, is of wider importance than the first two.

The pragmatic argument for consequentializing claims that we should consequentialize non-consequentialist theories because doing so will enable us to make progress in theorizing about ethics. Consequentializing is therefore instrumentally useful. Before moving on to discuss what specific benefits consequentializing is supposed to provide, we should note a few things about the argument. First, unlike the intuitive and assimilation arguments, which conflict with each other, the pragmatic argument is compatible with both of the other arguments for consequentializing. This is not surprising. Whereas the earlier two arguments both make claims about the nature of morality, and therefore potentially come into conflict with one another, the pragmatic argument makes claims only about the effects of representing moral theories in different ways. It therefore can be deployed as a supplement to the other views—as, for example, Dreier does.Footnote 26 Second, because the pragmatic argument makes no direct claims about the nature of morality itself, it is not an objection to non-consequentialism. If the pragmatic argument is correct, we have reason to represent theories in a consequentialist way, but that doesn’t mean that consequentialism is true. Third, because the pragmatic argument tells us to represent theories in a way that may diverge from their actual structure, there will be limitations on what we can accomplish with consequentialized theories. Colyvan, Cox, and Steele are clear about this:

Virtue theory and, in particular, deontology had to be shoehorned into the consequentialist framework of decision theory. As we’ve argued, we are not claiming to have provided explanatory models of these two [theories]… Nor have we claimed to faithfully represent the justifications available to such agents. Indeed, our models either misrepresent or make opaque such justifications (2010, p. 523).

The consequentialized version of a non-consequentialist theory can tell us what agents ought to do according to that theory, but it cannot in any straightforward way tell us why they ought to do so.Footnote 27

With those points—and in particular that significant limitation of pragmatic consequentialization—in mind, let us now turn to the benefits that pragmatic consequentializers think we can secure through consequentialization.

[B]y consequentializing a theory we can keep clearer about what the important structural differences are among competing moral theories… [C]onsequentializing all [moral theories] will help shine the light on distinctions that are important, like [agent-]centeredness and perhaps causal versus constitutive connections between act and consequence. (Dreier 2011, p. 115)

In fact, we hold that the issue of mixed acts [acts which have a non-zero probability of yielding different outcomes] is not at all clear-cut when it comes to deontological ethics. Ethical discussions are rarely conducted in probabilistic terms, and so it follows that matters such as the status of mixed acts tend to be overlooked. The question of whether duties should be both agent and time relative is another issue that is not typically addressed by deontologists… Let us just say that a valuable aspect of the [consequentialist] modeling process is that it focuses attention on such questions. (Colyvan et al. 2010, p. 516)

I would like to remind the reader of three conceptual tools…which are currently available to the consequentialist but will remain unavailable to traditional nonconsequentialists until their theories have been consequentialized. The three conceptual tools are: (1) the distinction between act and rule based versions [sic] moral theories, (2) the distinction between the actual consequence and expected consequence of an act, and (3) the distinction between sequential and non-sequential decision making. For example, if we consequentialize duty ethics we will be able to distinguish between versions of duty ethics according to which one ought to act such that as many duties as possible are actually fulfilled, and versions according to which it is the expected duty-fulfillment that matters…

An additional advantage of consequentializing ethical theories is that this helps us to achieve a form of conceptual unity… [W]e reduce the set of primitive concepts needed for stating different ethical views. This is an important achievement, since it makes it easier for proponents of different ethical views to communicate with each other in a fruitful way. (Peterson 2010, p. 168)

These purported benefits can be grouped into two broad categories, which I will consider in turn.

First, consequentializing can enable us to bring specific tools or distinctions to bear on non-consequentialist theories. The list provided above is quite long (agent-centeredness; causal vs. constitutive consequences; consideration of mixed acts; agent-/time-relativity; act- vs. rule-based theories; actual vs. expected consequences; sequential vs. non-sequential decision-making), and so I don’t have the space to comment on each individually. Briefly, though, it seems to me that each of the items is either of dubious value to non-consequentialists (e.g. the distinction between act- and rule-based theories) or else doesn’t clearly require consequentialization. (The distinction between agent-centered and agent-neutral theories has been noted and discussed in the context of many non-consequentialist, non-consequentialized theories.)Footnote 28 At the very least, pragmatic consequentializers owe us a more careful discussion of these topics, to show why these distinctions are important and why they couldn’t (or wouldn’t) reasonably be drawn without consequentialization.

Further, in some cases, drawing our attention to these distinctions can be positively misleading. Colyvan, Cox, and Steele, for example—who as I noted above are well aware of the limits of pragmatic consequentializing—say that it is valuable to call attention to mixed acts (acts which may result in different outcomes with non-zero probabilities). They put the following question to the deontologist: “How does the deontologist want to rank a mixed act that could yield (with, say, equal probability) either the satisfaction of an obligation or a morally neutral action?” (2010, p. 514). Notice that this way of framing the question involves several presuppositions, which many deontologists reject. Most importantly, it assumes that the ultimate object of moral evaluation is an outcome or state-of-affairs, and accordingly that the morally relevant description of an action is a probability distribution over possible outcomes. Many deontologists reject this (Hurley 2013, 2014). To a Kantian, for example, the proper object of moral evaluation may be an agent’s maxim. If actions are distinguished by their maxims, and our obligations are to act on certain maxims (rather than to produce outcomes), then Colyvan, Cox, and Steele’s case can’t arise. There can’t be a case where a given action has a probability of satisfying an obligation or of being morally neutral. (In virtue of its maxim, the action either satisfies an obligation or it doesn’t; no uncertainty is possible.) If, therefore, deontologists have failed to discuss “mixed acts”, it may be for good reason: acts, characterized in such a way, may not constitute a morally relevant category according to deontology. If consequentializing draws our attention to such cases, then at best it distracts us, and at worst it will lead to confusion, as deontologists struggle to answer questions that don’t really make sense according to their theories.
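The framing at issue can be spelled out explicitly. In the consequentialized model, the mixed act $m$ just is a lottery over outcomes, and its place in the ranking is fixed by its expected value (the notation is mine, a sketch of the decision-theoretic framing rather than anything Colyvan, Cox, and Steele write down):

$$V(m) \;=\; 0.5 \cdot v(\text{obligation satisfied}) \;+\; 0.5 \cdot v(\text{morally neutral outcome})$$

The presupposition is built into the left-hand side: there is assumed to be a single item $m$ whose morally relevant description is exhausted by the probability distribution on the right. A Kantian who individuates acts by their maxims denies that there is any such item to be evaluated.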

This, then, leads us to the second category of benefit that is supposed to come from consequentialization. Both Dreier and Peterson claim that consequentializing can enable us to more easily compare different moral theories and can enable “proponents of different ethical views to communicate with each other in a fruitful way.” There are, of course, advantages to a lingua franca. Foremost among them is that we can precisely describe differences between theories and avoid certain misunderstandings. But there are also costs. In choosing one language in which to converse, we predictably skew discussion in ways determined by the expressive features of that language. The structure of consequentialism, for example, makes it seem natural to treat actions with uncertain consequences as “mixed acts” and to calculate expected values, and it also makes it seem natural to sum different values together into an aggregate. But it is not clear that morality must work that way. Similarly, our perceptions of what seems simple and what seems complex are heavily dependent on the representational framework we work within. If representing certain common non-consequentialist concepts, like moral dilemmas or defeasible obligations, requires complicated mathematics, then those concepts may seem to be complex, to be avoided in our theorizing unless we have very strong reason to accept them.Footnote 29 But given a different common language, those same concepts would be representable in very simple terms, appearing to be very natural, prima facie plausible elements of an ethical theory.

I doubt there is any such thing as a theoretical structure that is neutral between competing moral theories. But if there is one, consequentialism isn’t it. That’s not to say that the pragmatic argument for consequentializing moral theories fails. It may be that, despite the costs I’ve described, the benefits to moral theorizing of consequentializing competing theories make it worth doing. But pragmatic consequentializers haven’t yet made that case. Making such a case would require moving beyond the often-vague discussions quoted above, and showing some concrete advantages gained through consequentialization that likely would not have been gained without consequentializing. And it would also require more seriously investigating the costs of consequentializing, which, for the reasons I’ve given above, seem likely to be significant. Until that has been done, I think non-consequentialists can remain justifiably skeptical that pragmatic considerations justify consequentializing their theories.Footnote 30

7 The importance of the (failure of the) pragmatic argument

Pragmatic consequentializing has, speaking generously, played a very minor role in moral philosophy. But it is central to other disciplines. In economics, linguistics, decision theory, and deontic logic, it is common to use a model or structure that looks consequentialist to represent a range of ethical approaches. The lessons from our discussion of pragmatic consequentializing in the previous section can shed light on those uses, and in particular can help us to be alert for cases in which the imposition of a consequentialist structure on a non-consequentialist theory can have subtle but important effects. In this section, I will briefly illustrate this with two examples: the use of cost-effectiveness analyses in economics, and the system of deontic logic developed by Paul McNamara.

Many philosophers and economists have claimed that cost-benefit and cost-effectiveness analyses, when used as decision-making devices (rather than simply as inputs to decision-making), are fundamentally consequentialist.Footnote 31 Others, however, have argued that this is not the case, and that such analyses can embody at least some non-consequentialist moral views.Footnote 32 What explains this disagreement? In part, it is the result of differing definitions of “consequentialism” and “cost-effectiveness analysis.” (The same applies to “cost-benefit analysis”, but for ease of expression, I will not mention it in what follows.) But a large part of it, I think, is due to the fact that many advocates of cost-effectiveness analysis (CEA) are best understood as pragmatic consequentializers. They accept the truth of a non-consequentialist ethical view, but at the same time believe that the technical machinery of CEA has many virtues. These virtues, they believe, compensate for any difficulty or cost involved in imposing a consequentialist structure on a non-consequentialist theory. If this is right, then those who argue that CEAs need not be consequentialist are correct, since a pragmatic consequentializer can recognize that the theory she is working with is in fact a non-consequentialist one, even if it is put in a consequentialized form. But those on the other side of the debate have a point, too. Although CEA need not presuppose consequentialism, it does require that non-consequentialist theories be consequentialized. For the reasons we saw in the last section, this may in practice tend to skew CEAs in consequentialist directions.

I think this has in fact been the case. One of the features most commonly taken to be characteristic of non-consequentialism is a concern for the distribution of goods, rather than simply their sum total. Consequentialism, on the other hand, in its simplest and most common form does not incorporate any independent concern for distribution. Because CEA requires a consequentialized framework, its simplest form likewise does not incorporate any independent concern for distribution. Now, as economists have long recognized, there are many ways to introduce a concern for distribution into CEA. (Indeed, this is just the consequentializer’s point.) But it remains true that economists rarely move beyond the distribution-insensitive default when conducting actual CEAs, even when they—and many of the decision-makers who rely on their data—agree that distribution matters.
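A schematic example, with entirely hypothetical numbers, shows how the default does its work. A standard CEA ranks interventions by a cost-effectiveness ratio such as

$$\mathrm{ICER} \;=\; \frac{\Delta\,\mathrm{Cost}}{\Delta\,\mathrm{DALYs\ averted}}$$

while a distribution-sensitive variant would replace the denominator with equity-weighted DALYs (the DALYs averted for each group, multiplied by a weight reflecting, say, how badly off that group is). Suppose intervention A averts 100 DALYs among the well-off and intervention B averts 80 DALYs among the worst-off, each at a cost of $10,000. The unweighted default ranks A first ($100 per DALY versus $125); a modest equity weight of 1.5 on the worst-off reverses the ranking (80 × 1.5 = 120 weighted DALYs, or roughly $83 per weighted DALY). Nothing in the machinery forbids the weights. The point is that the unweighted ratio is what one gets if no one makes a deliberate adjustment.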

For example, in a major article discussing the construction and use of disability-adjusted life years (DALYs), a summary measure of population health used in CEAs, Christopher Murray proposes the following principle for determining whether the base DALY measure should be “adjusted” to account for ethical values:

For the construction of DALYs, I propose a principle of “filtered consensus”… If many individuals after deliberation hold a preference or value then this value should be considered seriously. We should investigate…the likely reasons why many individuals hold such a view. If these reasons appear to be persuasive and do not contravene important “ideal-regarding principles,” these preferences should be incorporated into the construction of DALYs. (1996, p. 5)

Given that Murray proposes modification of the DALY only in the case of a consensus, it is not surprising that he ultimately chooses not to modify DALYs to account for distribution. After a long discussion of the potential importance of distributive concerns, in which Murray cites much evidence that most people do take distribution to be important in some way, he concludes:

At this juncture, given the conflicting nature of the evidence on distributional concerns and the contentious basis for these concerns, it would seem more reasonable not to explicitly incorporate distributional preferences into cost-effectiveness calculations. (1996, p. 63)

Under an approach like Murray’s, which is standard among economists, the simplest version of a CEA is treated as a default, to be modified only when there is a strong, unified push to do so. The adoption of a consequentialized framework therefore gives a distribution-insensitive approach the weight of inertia.Footnote 33 Why does this matter? Because we have reason to think—and, perhaps more importantly, the same economists themselves admit—that decision-makers frequently make decisions on the basis of such analyses, without fully appreciating what factors the analyses do and don’t incorporate.Footnote 34 The consequentialized nature of CEA, therefore, has likely led decision-makers to make decisions that are relatively insensitive to distribution. Given that the dominant view among bioethicists, health policy-makers, health economists, and the public has been that distribution does matter when it comes to health, this is troubling.

Philosophical logic provides another example of a pragmatic consequentializer. In a series of articles over the past 20 years, Paul McNamara has developed an impressive system of deontic logic. One of his goals in doing so has been to develop a logic for moral discourse that goes beyond the standard categories of obligation, permission, and prohibition, and instead is able to incorporate the much richer set of concepts that characterize our moral life. In recent articles, for example, he has attempted to work out the inferential relationships concerning supererogation, moral offence, moral indifference, praiseworthiness, and blameworthiness.Footnote 35 The key to McNamara’s system is its underlying semantics. In addition to defining a relation of moral acceptability between possible worlds (as in standard systems of deontic logic), McNamara adds an ordering: worlds can be related not only as morally acceptable or unacceptable, but also as more or less morally acceptable. This allows him to define an action as involving “more than the minimum” or being “beyond the call of duty” (concepts he argues are more basic than supererogation) if it is part of some acceptable world, and there is some acceptable world such that it is not part of that world or of any inferior world (1996a, p. 182). Since McNamara’s semantics requires a single ordering of worlds, it is not surprising that it essentially requires consequentializing non-consequentialist theories. Given a simple theory that combines a general ranking in terms of happiness with a set of absolute deontic constraints, for example, McNamara’s ranking will place all actions that don’t violate constraints above all actions that do violate constraints. And then, within each category, it will rank actions in terms of happiness.
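McNamara’s definition can be given a rough formal gloss; the notation here is mine, not his, and is offered only for orientation. Let $\mathrm{Acc}$ be the set of acceptable worlds, let $\preceq$ be the ordering (with $u \preceq v$ meaning that $u$ is ranked no higher than $v$), and identify an action $\varphi$ with the set of worlds at which it is performed. Then:

$$\mathrm{BeyondTheMinimum}(\varphi) \iff \exists w \in \mathrm{Acc}\,(w \models \varphi) \ \wedge\ \exists v \in \mathrm{Acc}\ \forall u\,(u \preceq v \rightarrow u \not\models \varphi)$$

That is, the action is performed in some acceptable world, and there is an acceptable world that omits it, as does every world ranked at or below that one.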

As McNamara’s many articles show, this system has proven to be very fruitful. Consequentializing here has produced important benefits, allowing us to get a better understanding of a variety of important moral concepts. But I believe it may also obscure certain non-consequentialist moral concepts. Take, for example, what I will call “better-but-wrong” actions. Define these as actions which are morally forbidden, despite having an outcome that is better, in the morally relevant sense, than the outcomes of the permissible alternatives.Footnote 36 Notice that better-but-wrong actions can’t exist on a consequentialist theory, where moral permissibility is defined in terms of moral goodness, but they will be a part of almost any theory that includes standard deontological constraints. If constraints are understood as limitations on our ability to promote the good, then meaningful constraints will create situations where actions with better outcomes are nevertheless forbidden. Better-but-wrong actions, therefore, are distinctly non-consequentialist.
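The category admits of a rough formal gloss as well, again in my notation rather than McNamara’s. Where $\mathrm{Perm}(a)$ says that act $a$ is permissible and $g(a)$ measures the morally relevant goodness of $a$’s outcome:

$$\mathrm{BBW}(a) \iff \neg\mathrm{Perm}(a) \ \wedge\ \forall b\,\big(\mathrm{Perm}(b) \rightarrow g(a) > g(b)\big)$$

(A weaker variant replaces the universal quantifier with an existential one; nothing here turns on the choice.) On a maximizing consequentialism, $\mathrm{Perm}(b)$ holds just in case $g(b) \geq g(x)$ for every alternative $x$, so the right-hand side is unsatisfiable. That is just the point made above: better-but-wrong actions are distinctively non-consequentialist.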

What might better-but-wrong actions look like? Take any of the stock examples used to show the controversial nature of deontic constraints: a doctor covertly vaccinating children against the wishes of their parents, a lawyer tricking a dying billionaire into willing her fortune to a good cause, or Robin Hood robbing from the rich to give to the poor. In cases like these, many deontologists will say that the action in question is wrong, but nevertheless has a better result than the permissible alternatives. (Let us stipulate that, in each case, the goodness resulting from the action falls just short of overriding the constraint.) It seems to me, and to many other deontologists,Footnote 37 that actions such as these form a distinctive moral category, worth distinguishing from more typical “worse-and-wrong” actions. They may give rise to a distinctive moral phenomenology, call for different responses from affected parties, have interesting connections to aretaic notions, and may be worth treating separately in theories of punishment, forgiveness, and restitution. I can’t argue for all of this here—that would require a separate article—but I hope the possibility at least seems worth exploring. Note, though, that this category of action will not be salient in a system like McNamara’s. Better-but-wrong actions, in a consequentialized system, will be ranked (at best) just below the least-good permissible action. Thus, they won’t be distinguished from actions, like not giving quite enough to charity, which also fall just short of the line of permissibility, but which seem to be intuitively unlike the examples I described above.

In relying on a consequentialized model, therefore, McNamara has made it harder to identify a potentially significant moral category, and it is no coincidence that it is a distinctively non-consequentialist one. Of course, to say this moral category is hard to identify is not to say it can’t be identified. Indeed, McNamara does very briefly discuss the possibility that when a theory has two orderings that must be combined into one to fit his semantics, it may sometimes be fruitful to look back to the original, uncombined orderings, to define certain moral concepts (1996b, pp. 439–442). Better-but-wrong actions, for example, will be ones which rank high on the “goodness” ordering and low on the “constraint” ordering. But even if it is true that this category can be identified, its lack of salience makes it unlikely to be identified. (Despite mentioning the possibility of referring to the uncombined orderings, McNamara has never done so in his published work, so far as I can tell.) Even if, therefore, in theory this need not be a problem, in practice it has been a problem. And, when we are speaking of a pragmatic argument, “in practice” is what matters.

In closing, let me be clear that I don’t take these examples to show fatal problems for cost-effectiveness analysis or for McNamara’s system of deontic logic. There are many virtues of those systems, and it is quite possible or even likely that their benefits outweigh their costs. (In this respect, I take McNamara and the economists to be in a better position than pragmatic consequentializers in moral philosophy, in that they have clearly shown certain advantages that come from consequentializing in their domains.) The important thing to recognize is that the consequentialization required for both cost-effectiveness analysis and for McNamara’s system has costs. We need to be aware of these costs, taking steps to counteract them whenever possible. In economics, this may mean making a special effort to build distributive considerations into a cost-effectiveness analysis, discounting any initial reluctance we may have to make CEAs mathematically more complex or any impulse we may have to wait for a consensus before moving forward. For a deontic logician working with McNamara’s system, it may mean making a concerted effort to seek out distinctively non-consequentialist moral concepts.

8 Conclusions

For the past few years “consequentializing” has been a hot topic. I’ve shown, however, that the arguments offered for consequentializing differ to the point where it isn’t helpful to treat consequentialization as a single movement. Although intuitive, assimilating, and pragmatic consequentializers all agree that non-consequentialists ought to consequentialize their theories, the ‘ought’ in each case has a very different tenor and accordingly has different implications for the truth of non-consequentialism. The intuitive argument claims that non-consequentialist theories would be improved, were they consequentialized. It is, accordingly, an argument against non-consequentialism. The assimilating argument claims that non-consequentialists are compelled to consequentialize their theories, or that they are already consequentialists. At least on Dreier’s interpretation, this blurs or erases the distinction between consequentialism and non-consequentialism. And the pragmatic argument claims that consequentializing will lead to more fruitful work in moral theory. This is in no way a challenge to the truth of non-consequentialism.

In addition to distinguishing those views, I’ve argued that the intuitive argument, though live, need not convince a committed non-consequentialist, since it relies on an intuitive weighing of the plausibility of different factors. I’ve argued that the assimilation argument—the most ambitious of the three—doesn’t work as presented, and its prospects aren’t promising. And I’ve suggested that proponents of the pragmatic argument have neither convincingly shown the benefits that flow from consequentializing, nor fully acknowledged the potentially significant costs of doing so. This last point is especially important, since there are many pragmatic consequentializers outside of ethical theory. In fields like economics, deontic logic, decision theory, and linguistics, where it is common to use a consequentialist structure to represent ethical considerations, we need to take care to ensure that results are not skewed in ways friendly to traditional versions of consequentialism and hostile to traditional versions of deontology.