Reasoning with Comparative Moral Judgements: An Argument for Moral Bayesianism

Applications of Formal Philosophy

Part of the book series: Logic, Argumentation & Reasoning (LARI, volume 14)

Abstract

The paper discusses the notion of reasoning with comparative moral judgements (i.e. judgements of the form “act a is morally superior to act b”) from the point of view of several meta-ethical positions. Using a simple formal result, it is argued that only a version of moral cognitivism that is committed to the claim that moral beliefs come in degrees can give a normatively plausible account of such reasoning. Some implications of accepting such a version of moral cognitivism are discussed.



Notes

  1.

    The term ‘moral judgements’ is used in our everyday discourse, in philosophical discourse and in discourses in other disciplines (such as psychology) in two different ways. Sometimes it is used in order to refer to an act (typically a verbal act) of judging. E.g. when I say ‘it is wrong to lie’ I am performing a verbal act and this act is a moral judgement. Other times, however, it is used in order to refer to the mental attitude that is expressed by such an act. E.g. the reason for my saying ‘it is wrong to lie’ is my mental judgement that it is wrong to lie. In this paper I am going to use the term in the second way.

  2.

    The formal result presented in Sect. 6.3 applies, of course, not only to CMJs, but also to comparative judgements regarding any other type of linear ordering (such as rational preferences, for example). In fact, a version of it can be applied to any set of propositions governed by a semantics that rules out as inconsistent at least one possible distribution of truth values over any three propositions in the set. However, my justification for the claim that the characterization of reasoning with comparative judgements that I present should be accepted by all non-Bayesians (with regard to a given type of comparative judgement) does not apply to reasoning with comparative judgements about all types of linear orderings. Specifically, I do not try to suggest (and I think it is not the case) that it holds with regard to a rational agent’s personal preference ordering.

  3.

    In any case, it takes moral judgements to have truth values. For a discussion of different types of moral cognitivist positions see [27]. The discussion here is independent of the issues discussed there.

  4.

    Two examples of discussions in which at least some of the participants explicitly accept MB are the literature discussing moral uncertainty (for example see [10, 16, 22, 25]) and the literature discussing the desire as belief thesis (for example see [1, 2, 19,20,21]). I use the term ‘moral Bayesianism’ to highlight the relevance—rarely discussed—of this thesis to moral reasoning.

  5.

    Van Roojen [27] offers a good overview of different non-cognitivist views.

  6.

    I do not know of any published work that explicitly argues against MB from a cognitivist point of view. However, in private conversations and other kinds of informal communication I have encountered the non-Bayesian cognitivist position quite often.

  7.

    Introduced in much more detail by [25].

  8.

    An anonymous referee commented that, while in the trolley problem we can intuitively accept any conjunction of two out of the three propositions involved, the example presented in Table 6.1 assigns to each one of these conjunctions a low probability (of 1/3). I am not sure that I share the referee’s intuition (when I think about a conjunction of two out of the three propositions I immediately become aware of the fact that they entail the negation of the third), but in any case it is easy to construct an example in which each one of the three conjunctions (i.e. SF, SN and FN) gets a high probability while the conjunction of all three of them gets a low probability (though not 0; see footnote 10). For example, this is the case when the agent assigns a credence value of 9/24 to SFN, and a credence value of 5/24 to each of SF-N, S-FN and -SFN.

  9.

    For example, consider Peter Singer’s argument from “Famine, Affluence, and Morality” [23], which has a structure almost identical to the one discussed here, as it is based on the apparent inconsistency between the following three claims: 1. It is obligatory to save a drowning child when the child is in a pond next to you, even if by doing this you will ruin your new pair of shoes. 2. It is permissible not to save dying children in far away countries, even if it is possible to do so for the cost of a new pair of shoes. 3. There are no morally significant differences between the case of the child drowning in a pond next to you and the case of dying children in far away countries.

  10.

    However, see [5] for an argument against the demand that a rational agent must hold a deductively consistent set of full beliefs at all times (Kyburg rejected this demand too). As noted, I do not take the discussion in this section to be a knock-down argument for MB. The aim of the discussion is merely to motivate MB; the real argument for MB is presented in the following sections. Still, it is worth mentioning that in [4] an extension of the position presented in [5] is used to suggest a new subjective interpretation of probability. Thus, it seems (though I cannot say this with complete certainty) that, to the extent that the non-Bayesian can give a satisfactory account of the phenomenon described in the main text in terms of full beliefs, it is possible to view this account as a version of MB.

  11.

    And as mentioned in footnote 8, I am not sure about the case of the conjunction of only two out of the three propositions.

  12.

    There are actually three possible readings of the requirement. Let ‘i > j’ stand for ‘i is morally superior to j’, with appropriate prefixes that turn it into a readable that-clause. The three readings are then the following. 1. If one accepts A > B and B > C then one ought to accept A > C. 2. If one accepts A > B then one ought to accept ‘if B > C then A > C’. 3. One ought to accept ‘if A > B and B > C then A > C’. I think one should accept the requirement under all three readings. In any case, nothing in my argument depends on this.

  13.

    I am not trying to perform here a conceptual analysis of the concept of reasoning. Maybe an arbitrary change in one’s CMJs does constitute an instance of reasoning with CMJs. Broome’s [3] discussion of reasoning with preferences seems to assume that this might be the case. Broome, then, limits his discussion to what he calls ‘correct reasoning’. I do not mind, of course, accepting such terminology. The important point is that there is a sense in which we demand of a rational moral agent who is involved in a process of changing some of the CMJs he holds that he do so in a non-arbitrary way. Whether this demand is a necessary condition for the process to count as reasoning, or only a necessary condition for it to count as correct reasoning, does not concern me. For convenience, then, I will assume that the former condition holds.

  14.

    You can read the last two paragraphs while interpreting the phrases ‘takes to be a reason’ and ‘takes to be relevant’ any way you like. I think of them as referring to beliefs: ‘takes X to be a reason’ means ‘believes that X is a reason’; but if you prefer to understand the agent’s attitudes toward the status of the information he gets differently, this is fine too. In any case, if you do want to talk about reasoning in the way I have characterized it here, you must make a distinction between information that the agent takes to be relevant and information that he does not.

  15.

    See [8, 9] (for example) for a similar characterization of reasoning with preferences and of preference change more generally.

  16.

    There is an implicit semantics in the background that, for the sake of simplicity, I choose not to make explicit. It is easy to see what it should look like in our simple case, though. Hansson [8, 9] provides a much more general framework.

  17.

    Stability is a special case of Hansson’s vacuity, and respectfulness is equivalent to Hansson’s success.

  18.

    I was not able to locate an earlier proof of this (very simple) result. However, there are obvious formal connections between this result and the literature on judgement aggregation on the one hand (for a good introduction see [15]) and the literature on preference change on the other (such as [8, 9]). The literature that explores the relation between Bayesian updating and the AGM approach to belief revision is also of obvious relevance here; see for example [14].

  19.

    It is easy to see that the result can be generalized to any system of propositions that includes at least three pairwise inconsistent propositions. The inconsistency need not always take the form of intransitivity. I thank Christian List for pointing this out to me. However, as indicated in footnote 2, I do not try to suggest that the result is of any philosophical interest when considered under different interpretations. Maybe it is under some of them, but I could not think of any interpretation other than the CMJ interpretation presented here.

  20.

    For simplicity I will deal here only with the simple case in which the agent has only raised the probability he assigns to one proposition. Nothing in the discussion depends on this.

  21.

    The proofs are straightforward, but they still require some space. As the questioner’s purpose is only to highlight how severe a threat JC’s non-commutativity poses to MB (which is the position I argue for), my argument in this paper does not depend on the truth of the answers I gave in the main text, and so, for the sake of simplicity, I chose to omit the proofs. The answers are correct, though.

  22.

    I thank Richard Bradley for explaining this point to me.

  23.

    Now we can see that the case of two inconsistent inputs is just a limiting case of two probabilistically dependent inputs.
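The credence assignment in footnote 8 can be checked directly. Here is a minimal sketch in Python (the proposition labels S, F, N and the credence values 9/24 and 5/24 come from the footnote; the world representation and the `prob` helper are illustrative, not from the paper):

```python
from fractions import Fraction
from itertools import product

# Credence assignment over the eight truth-value combinations of S, F, N,
# as in footnote 8: 9/24 to SFN, 5/24 each to SF-N, S-FN and -SFN,
# and 0 to the remaining four combinations.
credence = {
    (True, True, True): Fraction(9, 24),   # SFN
    (True, True, False): Fraction(5, 24),  # SF-N
    (True, False, True): Fraction(5, 24),  # S-FN
    (False, True, True): Fraction(5, 24),  # -SFN
}
for world in product([True, False], repeat=3):
    credence.setdefault(world, Fraction(0))

def prob(event):
    """Credence of an event, given as a predicate on worlds (s, f, n)."""
    return sum(credence[w] for w in credence if event(*w))

p_sf = prob(lambda s, f, n: s and f)            # conjunction SF
p_sn = prob(lambda s, f, n: s and n)            # conjunction SN
p_fn = prob(lambda s, f, n: f and n)            # conjunction FN
p_sfn = prob(lambda s, f, n: s and f and n)     # conjunction SFN

print(p_sf, p_sn, p_fn, p_sfn)  # 7/12 7/12 7/12 3/8
```

Each pairwise conjunction indeed receives a credence of 14/24 = 7/12 (above 1/2), while the conjunction of all three receives only 9/24 = 3/8, which is low but not 0, exactly as the footnote claims.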
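Reading 1 of the transitivity requirement in footnote 12 can likewise be illustrated as a closure condition on a set of accepted CMJs. The sketch below is hypothetical (the act labels and the helper function are not from the paper): it lists the judgements that an agent who accepts A > B and B > C is, on reading 1, obliged to accept but has not yet accepted.

```python
def transitive_closure_violations(accepted):
    """Return the pairs (a, c) such that a > b and b > c are accepted
    for some b, but a > c is not accepted -- i.e. the judgements that
    reading 1 of the requirement obliges the agent to accept."""
    missing = set()
    for (a, b1) in accepted:
        for (b2, c) in accepted:
            if b1 == b2 and a != c and (a, c) not in accepted:
                missing.add((a, c))
    return missing

# Accepting A > B and B > C without accepting A > C violates reading 1:
judgements = {("A", "B"), ("B", "C")}
print(transitive_closure_violations(judgements))  # {('A', 'C')}
```

Adding ("A", "C") to the set makes the function return the empty set, i.e. the set of accepted CMJs then satisfies the requirement under reading 1.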

References

  1. Bradley, R., & List, C. (2009). Desire-as-belief revisited. Analysis, 69(1), 31–37.

  2. Broome, J. (1991). Desire, belief and expectation. Mind, 100(2), 265–267.

  3. Broome, J. (2006). Reasoning with preferences? Royal Institute of Philosophy Supplements, 59, 183–208.

  4. Easwaran, K. (forthcoming). Dr. Truthlove, or: How I learned to stop worrying and love Bayesian probabilities.

  5. Easwaran, K., & Fitelson, B. (2015). Accuracy, coherence, and evidence. Oxford Studies in Epistemology, 5, 61.

  6. Field, H. (1978). A note on Jeffrey conditionalization. Philosophy of Science, 45(3), 361–367.

  7. Greene, J. D. (2007). The secret joke of Kant’s soul. In Moral psychology: Historical and contemporary readings (pp. 359–372).

  8. Hansson, S. O. (1995). Changes in preference. Theory and Decision, 38(1), 1–28.

  9. Hansson, S. O. (2001). The structure of values and norms. Cambridge: Cambridge University Press.

  10. Jackson, F., & Smith, M. (2006). Absolutist moral theories and uncertainty. The Journal of Philosophy, 267–283.

  11. Jeffrey, R. (1992). Probability and the art of judgment. Cambridge: Cambridge University Press.

  12. Joyce, J. M. (1998). A nonpragmatic vindication of probabilism. Philosophy of Science, 65(4), 575–603.

  13. Kyburg, H. E. (1961). Probability and the logic of rational belief. Middletown: Wesleyan University Press.

  14. Lepage, F., & Morgan, C. (2011). Revision with conditional probability functions: Two impossibility results. In Dynamic formal epistemology (pp. 161–172). Berlin: Springer.

  15. List, C. (2012). The theory of judgment aggregation: An introductory review. Synthese, 187(1), 179–207.

  16. Lockhart, T. (2000). Moral uncertainty and its consequences. Oxford: Oxford University Press.

  17. Mikhail, J. (2007). Universal moral grammar: Theory, evidence and the future. Trends in Cognitive Sciences, 11(4), 143–152.

  18. Nissan-Rozen, I. (2012). Doing the best one can: A new justification for the use of lotteries. Erasmus Journal for Philosophy and Economics, 5(1), 45–72.

  19. Oddie, G. (1994). Harmony, purity, truth. Mind, 103(412), 451–472.

  20. Piller, C. (2000). Doing what is best. The Philosophical Quarterly, 50(199), 208–226.

  21. Price, H. (1989). Defending desire-as-belief. Mind, 98(389), 119–127.

  22. Sepielli, A. (2009). What to do when you don’t know what to do. In Oxford studies in metaethics (Vol. 4). Oxford: Oxford University Press.

  23. Singer, P. (1972). Famine, affluence, and morality. Philosophy and Public Affairs, 1(3), 229–243.

  24. Singer, P. (2005). Intuitions, heuristics, and utilitarianism. Behavioral and Brain Sciences, 28, 560–561.

  25. Smith, M. (2002). Evaluation, uncertainty and motivation. Ethical Theory and Moral Practice, 5(3), 305–320.

  26. Sunstein, C. R. (2005). Moral heuristics. Behavioral and Brain Sciences, 28(4), 531–541.

  27. Van Roojen, M. (2009). Moral cognitivism vs. non-cognitivism. Stanford Encyclopedia of Philosophy.

Acknowledgements

This research has been supported by the Israeli Science Foundation (grant number 1042/13). I thank Richard Bradley, Christian List and two anonymous referees for their useful suggestions.

Correspondence to Ittay Nissan-Rozen.


Copyright information

© 2017 Springer International Publishing AG

Cite this chapter

Nissan-Rozen, I. (2017). Reasoning with Comparative Moral Judgements: An Argument for Moral Bayesianism. In: Urbaniak, R., Payette, G. (eds) Applications of Formal Philosophy. Logic, Argumentation & Reasoning, vol 14. Springer, Cham. https://doi.org/10.1007/978-3-319-58507-9_6
