Introduction

Beauchamp and Childress’ seminal text Principles of Biomedical Ethics has had profound effects on the field of bioethics. Part of the popularity this text enjoys is surely due to the rigor and precision with which Beauchamp and Childress put forth their ideas. Yet methodological diligence can hardly explain the immense popularity of Principles and its unrivaled dominance in ethics education for clinicians. There must be something else contributing to this success. Beauchamp and Childress’ ethical principles, grounded in a concept of common morality, are intuitively appealing to modern, Western bioethicists and medical practitioners. That we should value a moral system that justifies autonomy, beneficence, nonmaleficence, and justice simply makes sense. Surely anyone who is committed to what is right and good will believe these things as well—how could they not?

Beauchamp and Childress seem to be giving voice to beliefs already latent in the minds of their audience—just as they claim to be doing. The linchpin of their work, the common morality, amounts to the universal moral beliefs already held by “all persons committed to morality” (Beauchamp and Childress 2019, p. 3). Universally accepted moral claims must constitute a short list to start with, and scholars have convincingly challenged the universality of certain first-order claims included in Beauchamp and Childress’ vision of the common morality, as I will discuss below. The response of Beauchamp and Childress has been to admit a narrower content to the common morality (Beauchamp 2003, p. 260) and, recently, to constrain the scope of the common morality’s application (Beauchamp and Childress 2019, p. 447). While one may worry that the content of the common morality will become vanishingly small, Beauchamp and Childress argue that these revisions are not sufficient to render the common morality nonexistent. Furthermore, they argue that, if undertaken, proper investigations would provide empirical evidence for the common morality. In short, they ask for faith that the common morality abides. And that faith is easy to come by because, again, there is that general sense of attraction to the principles derived from the common morality that makes it seem that Beauchamp and Childress have touched on something compelling and real.

This essay differs from previous critiques of common morality approaches in that my goal is not to cast suspicion on the common morality but rather to interrogate the source of its considerable persuasiveness despite, as others have noted, a notable lack of evidence that any such thing truly exists. I will offer one possible explanation for how something so content-thin, difficult to describe, and susceptible to challenge as the common morality has been accepted as the foundation of Beauchamp and Childress’ principlism, perhaps the most widely embraced moral theory of modern Western medicine and bioethics. After briefly describing Beauchamp and Childress’ theory of the common morality and some of the myriad critiques, I turn to the philosophy of Charles Taylor to show that features necessary to Beauchamp and Childress’ metaethics reveal a very particular perspective rooted in the historical development of a specifically Western modernity. To my knowledge, no other author has put Taylor in conversation with Beauchamp and Childress’ common morality. By using Taylor’s work to interrogate the “background understandings” of the common morality for the first time, I argue that the idea of a common morality and, by extension, the principles have an appeal that outstrips the evidence because they are rooted in a shared cultural experience among modern Westerners. My aim is to offer this suggestion: the common morality that is nearly void of prescriptive ethical content, and yet so oddly compelling, may not be universal at all but rather the trace of a transformation in worldview and self-concept shared by a subset of modern Westerners. It has been more than 40 years since Beauchamp and Childress first released this masterpiece text, but its importance, widespread use, and few but notable weaknesses make it still “worth troubling over” (DeGrazia 2003, p. 220).

The common morality of Beauchamp and Childress

The common morality is the workhorse of Beauchamp and Childress’ principlism. Their basic premise is that there is a set of moral principles, ideals, and virtues that are commonly held across all cultures and, in the first six editions of their book, throughout time. The common morality is thus “applicable to all persons in all places, and we appropriately judge all human conduct by its standards” (Beauchamp and Childress 2019, p. 3). Beauchamp and Childress make some suggestions of what might be included in the common morality, such as “do not kill”, “keep your promises”, and “gratitude is a virtue”, but they do not attempt to inventory all of the content of the common morality (Beauchamp and Childress 2019, p. 3). They do assert that a norm included in the common morality must be universally held by all morally committed people and suited to promoting the objectives of morality, which Beauchamp and Childress describe as “promoting human flourishing by counteracting conditions that cause the quality of people’s lives to worsen” (Beauchamp 2003).

Particular moralities, such as that grounding their bioethical principles, are justified through considered judgments drawn from the common morality (Beauchamp and Childress 2019, p. 440). Particular moralities are context-specific, concrete, content-rich norms that elaborate on the “conspicuously abstract, universal, and content-thin” general moral standards of the common morality (Beauchamp 2003). The common morality thus acts as an anchor against relativism by making every particular morality obligated to coherence with a set of universal core beliefs. The importance of the universality or “commonness” of the common morality to Beauchamp and Childress’ larger project cannot be overstated. The common morality guards against relativism and allows people to judge the moral worthiness of others’ particular moralities based on conformity to the common morality. Beauchamp and Childress claim that the common morality is not a priori but rather learned and transmitted through generations and across cultures (2019, p. 4). The common morality is not an object independent of human life and is possibly subject to change (ibid., p. 446). They sidestep apparent exceptions to the common morality by explaining them as evidence that whatever norm is in question must never have been in the common morality and instead belongs to a particular morality. They are comfortable with this because they have accepted the common morality to be content-thin (but not content-less). Importantly, they delimit the common morality to the set of universal norms shared by “all persons committed to morality” (ibid., p. 3, emphasis added). People who do not act in accordance with the common morality do not threaten its universality because such people can be regarded as not morally committed. Someone like the Taliban fighter committed to the destruction of America even unto death is zealously committed to “a supremely valued point of view” but not “morally committed”, because he does not adhere to their concept of the common morality or to the promotion of human flourishing (Beauchamp 2003).

Critiques of Beauchamp and Childress’ theory of the common morality

There have been many critiques of common morality theory as proposed by Beauchamp, Childress, and others (e.g., Gert). A frequent criticism is that the common morality is too content-thin and vague to be practically useful (Kukla 2014). Some have critiqued common morality theorists for mistaking descriptive ethics for prescriptive ethics, for elevating what people already believe into prescriptive norms (e.g., Brand-Ballard 2003; Trotter 2020). Some have praised Beauchamp and Childress for the same thing, lauding common morality theories for having us start from a lived, embodied, universal ethic (Kukla 2014). Others have critiqued seeming inconsistencies within the common morality (e.g., Kagan 1989; Brand-Ballard 2003). Some argue that particular moralities contain original moral content that cannot be derived from a common morality (Rhodes 2020). But most importantly for our purposes, many critiques follow two themes: (1) there is insufficient empirical evidence for a common morality, and (2) there is reason to believe the so-called common morality is not common at all but excludes morally serious voices. I focus on these two themes of criticism because, if true, they constitute a devastating attack on the idea of a common morality, and yet they are often brushed aside by common morality theorists, including Beauchamp and Childress. I hope to impress upon the reader that the more reasonable position is a prima facie rejection of the idea of a common morality. Nonetheless, the idea of a common morality that can justify midlevel principles is both pervasive and oddly persuasive.

A resounding theme of critique is that empirical evidence does not support a common morality. There are multiple ways to look for evidence of a common morality. One could employ historical analysis within one or multiple cultures or people groups. One could employ sociological or anthropological “point-in-time” studies of existing groups. Or one could employ a cross-sectional empirical study that surveys “morally committed” people for common moral beliefs, such as Beauchamp and Childress propose (2019, pp. 450–452). Beauchamp and Childress argue that this last method of investigation, if undertaken, would show that there is such a thing as a common morality. It is worth noting that, after decades of so arguing, no such study has been carried out. But the first two methods of investigation have been undertaken, and the results suggest such ethical diversity that a common morality seems far less plausible. As Turner (2003) notes, some communities prize peace while others prize war, undercutting Beauchamp and Childress’ claim that nonmaleficence is an uncontroversial, universal norm (2019, p. 451). Anthropology and history are rife with examples of the way in which culture and locale fundamentally diversify human experience and even concepts of what it means to be a human self and moral subject (Kleinman and Fitz-Henry 2007; Rogers 2009). Some communities do not even have a word for something like morality, conscience, or the individual moral agent (Jacobson-Widding 1997). Anthropological evidence of profoundly diverse moralities does not prove that there is no common morality; as Beauchamp and Childress argue, these may just be particular moralities derived from the common morality. However, it is notable that evidence for a common morality is lacking while existing evidence suggests moral diversity over commonality. Beauchamp and Childress dismiss anthropological and ethnographic data for methodological reasons (2019, p. 450). I suspect that they intuit that empirical “scientific” studies are superior because such studies presume a universal standard of truth, thereby leaning decidedly towards finding something “common” (Kleinman 1995, p. 85). Beauchamp and Childress ask us to believe that the hypothetical study of their design would prove them right and would be more trustworthy than existing evidence to the contrary. They ask, in short, for an act of faith.

A second theme of critique is that the presumption of a common morality and its prima facie outlining (see Beauchamp and Childress 2019, p. 3) excludes important moral voices. Common morality theorists face the problem that their claim to universality can be disproven with a single counterexample. They sidestep this by claiming there are temporary exceptions to the common morality or that the people who represent counterexamples are not morally committed people. Of course, the people considered not morally serious take umbrage. Turner asks, “to what extent is the common morality they describe an Anglo-American liberal democratic morality that has little relationship to values in other settings around the world?” (2003, p. 202). Trotter argues that the common morality really represents a “shared bias” among “liberal humanists who are the majority in higher education generally, and scholarly bioethics in particular” (2020, pp. 432, 434). Karlsen and Solbakk astutely ask: “Must not the universal claims attributed to the common morality in the end be ascribed to the dominion of a particular type of civilisation, and with it a form of rationality, which has only been able to expand by contracting alternative forms of social organisation, life and thought?” (2011, p. 590). Beauchamp and Childress, to their credit, have responded to such critiques by arguing that the common morality may not have previously included people like women or enslaved people. However, this compromise is flawed, as I explore below.

Taylor’s narrative of the development of secular modernity

The lack of evidence for the common morality, and the voices of people telling us they are excluded by a supposedly universal entity, should make us skeptical of the common morality. Yet it remains the bedrock of principlism and thus the dominant theory of bioethics. Why? I now take up the task of a meta-analysis of the odd durability of the idea of a common morality. Critics suggest, I believe rightly, that common morality theorists miss the crucial importance and diversity of “interpretive horizons” (Turner 2003) or pre-reflective background knowledge (Kukla 2014). In other words, common morality proponents fail to appreciate the particularity of the unsaid assumptions in their theory. So I turn now to a thinker who prioritizes examination of background understandings, the unsaid assumptions that allow us to think the way that we do. I turn to Charles Taylor with the hope that his introspection on Western, modern habits of thought will illuminate why bioethics is stuck in the rut of believing in a common morality.

Charles Taylor is often considered a part of the communitarian school, which is in part a reaction to the Rawlsian ideas upon which Beauchamp and Childress rely heavily. Much of Taylor’s work, including his major text A Secular Age, focuses on building a narrative description of Western society’s transition from its premodern inception—where magic, spirits, demons, and gods ruled an “enchanted world” and the human self was often not under its own control—to the modern secular age marked by what Taylor calls “exclusive humanism.” Taylor refuses simplistic ideas of a homogenous premodern society contrasted with a homogenous modern society or of a single continuous process of transformation. Rather he tracks historical trends in the Western world over the last millennium to give a “zig-zag account” (Taylor 2007, p. 95) of the rise of our contemporary lived understanding, that is, “the way we naïvely take things to be” (ibid., p. 30). It is Taylor’s contention that Western modernity arose through particular theological and political events. The pre-philosophical assumptions that ground our modern world—e.g., individuals are moral agents, science is lauded, and human life is primary—were not inevitable products of the march of progress but rather part of a particular achievement played out in the specific history of Western society (Taylor 1999, 2007, p. 255). The secular age in which we now live has an original moral vision with a new framework of “background understandings” and a new sense of self that is difficult for the modern thinker to escape (Taylor 1999). This new age is not necessarily an age of unbelief, but an age of believing differently, an era with “a new context in which all searching and questioning about the moral and spiritual must proceed” (Taylor 2007, p. 20). What Taylor fights against is the assumption permeating Western society that the way we perceive the world and ourselves today is the only possible conclusion of our development over time. Instead, the modern perspective by which we are bound was a choice among multiple possibilities (Taylor 1999).

Essential features of the common morality

I will next identify three essential claims of Beauchamp and Childress’ common morality theory, explain why these claims appear, in my view, preposterous on their face, and, using Taylor’s work, show why we tend to accept these claims despite lacking evidence. These claims are: the common morality is universal; the common morality is epistemically accessible; and the common morality prioritizes human autonomy.

The universality of the common morality

The first essential feature of the common morality is that its authority rests on claims to universality. Beauchamp and Childress explicitly state that “This [common] morality is not merely a morality, in contrast to other moralities. It is applicable to all persons in all places, and we appropriately judge all human conduct by its standards” (2019, p. 3). The common morality, to have claim on all morally committed people, must be transculturally stable. In the first six editions of Principles, Beauchamp and Childress claimed that the common morality is also temporally stable, but this claim was dropped from the current edition, likely due to critiques that societies have changed their ethics over time. However, accepting that the common morality is not trans-temporally universal creates secondary problems. In their latest conception, the common morality can change over time but not across cultures. But what happens when cultures fail to change in synchrony?

Suppose that society A changes over time such that, in its new iteration, claim X of the common morality is no longer applicable; the prima facie nature of the claims of the common morality is exposed as no longer true (as Beauchamp and Childress concede can happen (2019, p. 447)). What can this mean for society B, which necessarily shares in the common morality, but for which circumstances have not changed and for which claim X of what was the common morality is still ethically sound? This must mean that claim X has been downgraded from being a universal, abstract norm to a norm peculiar to society B. There is something that inspires unease in the claim that I must follow a common morality that can be deflated by the changing circumstances of someone else’s life on the other side of the globe. Do I have the same moral obligation to a norm once it has been rendered particular by a different society? It is not hard to imagine that only a handful of historical changes occurring dyssynchronously across cultures could reduce the already content-thin common morality to a ghost. Indeed, all that we have exposed with this analysis is the process by which pluralism develops and calls into question assumptions of universal beliefs.

Beauchamp and Childress try to side-step this difficulty by claiming that while it is possible that conditions of life can change so that the common morality must also change, such change is unlikely. In previous editions of Principles, Beauchamp and Childress found it “difficult to construct a historical example of a central moral norm that has been or might be valid for a limited duration and was abandoned because some good moral reason was found for its displacement,” although if one were found, “the very possibility of such change seems to weaken the claim that there is a common morality with essential conditions and normative authority for all moral agents” (2013, p. 413). They are right that admitting moral change weakens the claim to an authoritative common morality. In the eighth edition of the Principles, they admit the common morality has expanded in the scope of its application, now admitting formerly enslaved people and women as full moral agents. It is certainly odd to argue that universal norms can change in “the scope of their application” yet always remain universal. Furthermore, as they predicted in 2013, the authority of the common morality is immediately undermined by such change; why should I, as a woman, submit to the moral claims of the common morality if it could make such a grievous mistake as only recently admitting my moral status and thus humanity? I have little incentive to adhere to this “common” morality even if I want to be morally serious. No, a stronger formulation of the common morality is present in earlier editions of the Principles, before the admission of changes in scope. For the most part, Beauchamp and Childress believe that if life’s circumstances change drastically—say, a war breaks out—norms in the common morality may be further specified or exceptions may be allowed for a time—for instance, a soldier may justly kill. But such exemptions are likely to be few and temporary and, despite shifting a little, the norms of the common morality remain largely stable.

How is it that Beauchamp and Childress have such faith in the stability of the common morality? With just a brief analysis, the universality of the common morality seems dubious; yet I admit sharing their incredulity that “do not kill” or “trustworthiness is a virtue” could ever not be universally accepted. Are we right to believe that no circumstances could exist where these maxims were not true or, more radically, their opposites were true? Or are we simply lacking in imagination? Charles Taylor’s essay “Two theories of modernity” gives a compelling account of how we modern Westerners tend towards the mistake of universalizing our particular beliefs and failing to see the true depth of pluralistic possibility (1999).

According to Taylor, the rise of Western modernity can be narrated in two distinct ways: via a cultural story or an acultural story. In the cultural narrative, the rise of the modern West is seen as the rise of a new, unique culture with a distinct moral vision. There is no mistaking this culture for a universal way of life. In the acultural narrative, however, the historical changes that led us to modernity are seen as “culture-neutral operations.” That is, any culture could be the input for what are considered inevitable processes of human development. By this view, every progressing society is destined for this same transition and will arrive at an experience of modernity that matches ours. Taylor identifies three “massive errors” in this acultural view of modernity. The first two errors explain why modern Westerners so often fail to see that other moral visions are possible; the third error explains why we sense that there is something nearly transcendent, like a common morality, guiding our sentiments.

First, the acultural narrative leads us to neglect that Western modernity is an original moral vision. We mistake changes that are particular to Western culture as inescapable, the unavoidable products of development, technology, industrialization, and rationality. At the same time, we dismiss the particularity of other changes that were pivotal to the development of this modern moral vision, such as the development of an atomistic vision of the self with a “self-evident” right to freedom, the compulsion to separate fact from value, or a view of nature shaped by centuries of scientific methodology. We wind up imposing a “falsely uniform pattern” on non-Westerners and non-moderns and find ourselves unable to imagine that any people have ever seen themselves in ways different from how we see ourselves today (Taylor 1999). The Enlightenment project shines glaringly bright, blinding us to any unfamiliar gradations and making everything beyond an arm’s length look monolithically dark.

The second error of the acultural narrative is viewing modernity as resulting from the casting off of old, unreasonable religious and metaphysical beliefs. This leads us to believe in what Taylor calls “subtraction stories” of human progress: each society is bound to throw off its old “traditional illusions” and get down to the truth of being—as the modern secular West has done. All roads of progress lead to instrumental reason, individualism, and secularity once we can get past the hang-ups of traditional belief. The starting point of a society is thus irrelevant because development and truth-finding will always lead to convergence in this version of modernity. On the other hand, the cultural narrative holds that the starting point of a society is relevant; changes among societies might occur in parallel but “they will not converge, because new differences will emerge from the old” (Taylor 2003). As evidence of the validity of the cultural narrative, Taylor gives the examples of how Indonesian Islam has come to be quite different from the Islam of the Middle East and how Christianity is deeply marked by Greek philosophy because the religion was birthed out of the Roman Empire. The cultural narrative is further obscured by the economic dominance and colonialism of the West, which tend to reinforce the first two errors of the acultural narrative of Western modernity; traditional cultures must usually acquiesce to Western modernity’s market-industrial economy, or else be swallowed up and forced to change anyway. In this sense, the economic power of the modern West may be inescapable, but its moral vision and cultural trajectory are not. As Taylor puts it, “modernity as lived from the inside” can be quite different around the world from a Westerner’s internal experience of modernity (2003).

Beauchamp and Childress are in the grip of the first two errors of the acultural narrative, that is, failing to see that Western modernity has an original moral vision and believing Western modernity has gotten down to the truth that our ancestors failed to see. Consequently, they fail to recognize the depth of moral diversity possible in this world. This explains how Beauchamp and Childress can believe that any deviations from the common morality are bound to be temporary or exceedingly rare; if one believes that we are all headed in the same direction, one can have faith that any deviations from what we hold to be indisputable moral truths will be only temporary. All roads lead here.

The third error of the acultural view that Taylor articulates explains why Beauchamp and Childress hold on to a sense of a nearly inarticulable moral concept grounding all other moral peculiarities. The third error is neglecting that Westerners have developed particular “background understandings” of the world in the transition to modernity. Here Taylor draws on multiple thinkers, including Wittgenstein, to point out that explicit beliefs are formed against and only make sense in relation to a “background of unformulated (and perhaps in part unformulable) understandings” (1999, p. 165). We do not usually actively believe that the world did not start just five minutes ago, but we treat the world and deal with the world as if it has been here from time immemorial. This background understanding enables us to interact with the world as we do. It is a mistake to see these background understandings as universal, handed down by teaching to every generation of every people. Rather, background understandings are built through “habitus”—the inherently context-specific ways we are taught to behave and practice behaving until they become an “unreflecting, second nature”—and through the symbolic meanings available to us in our particular culture (Taylor 1999, p. 166). The importance of the background understandings, the habitus, and the symbolic meanings available to us is that these can serve as pretheoretical commands for behavior and can delimit our repertoire of possible beliefs.

Through the lens of this explanation, we can understand why Beauchamp and Childress think there is some compelling force that often leads people to similar moral conclusions; people who share background understandings will find that their shared cognitive schema tends to lead them to similar conclusions. Beauchamp and Childress’ mistake is taking their social group’s moral inclinations, which were born from culturally specific pretheoretical convictions, as corroboration of a universal, content-full moral vision. There is no evidence that background understandings should be universal because they are built through our ways of living in particular cultures and societies. Westerners have by and large been headed down a divergent path from the rest of the world for millennia, and it is reasonable to assume we have shaped different understandings than others have. In neglecting this peculiarity in background understanding, Beauchamp and Childress fail to see how deeply different people can be. The result is an ethnocentrism that is not necessarily morally wrong but that renders suspect any claim that a common morality with universal moral authority exists. It is possible that what we take as indications of a universal common morality—that certain moral norms are ‘self-evident’ stable truths—could instead bespeak a mistake in narrating our self-history that blinds us to the true divergence of cultures and ethical standards.

The epistemic accessibility of the common morality

A second necessary claim of Beauchamp and Childress’ common morality is that humans have epistemic access to the content of the common morality. Beauchamp and Childress assume that the common morality is both knowable and known. They argue that the common morality can be empirically investigated by carefully selecting those people who are morally committed and systematically determining which moral norms they have in common. While it may be hard for one morally committed person to sort out which of her moral beliefs belong to the common morality and which to her particular moralities, this experiment could possibly delineate which norms are universal (2019, p. 450). In this set-up, people are the medium for finding the common morality, which raises the question of how people come to know the content of the common morality.

Beauchamp and Childress do claim that the common morality is taught in every society, but there must be something stronger than oral tradition at play here or else the common morality would be only a rhetorical entity with neither the moral authority nor the stability they claim. Nor do we get to the pretheoretical content of the common morality through reason (although we can get to the content of particular moralities through reason). Yet Beauchamp and Childress nonetheless believe that people are capable epistemic vehicles. On their view, morally committed people will be able to reliably capture the norms of the common morality in their beliefs, and their collective assessment of the common morality can be measured (2019, p. 450). This is a rather astounding claim.

J. L. Mackie succinctly describes why this seems a suspicious claim. Not only must the common morality be quite different from anything else in the universe in its ability to transcend all the different ways people think about things or come to any other kind of belief, but this understanding of the common morality also demands that we have a special kind of epistemic faculty hitherto unknown:

When we ask the awkward question, how can we be aware of this authoritative prescriptivity… none of our ordinary accounts of sensory perception or introspection or the framing and confirming of explanatory hypotheses or inference or logical construction or conceptual analysis, or any combination of these, will provide a satisfactory answer (Mackie 1977, pp. 38–39).

Beauchamp and Childress’ claim that humans can reliably know a pseudo-objective, universal morality is an odd claim indeed, and requires a much more robust justification than they give to begin to be plausible. A second, perhaps less catastrophic, difficulty is that Beauchamp and Childress seem to be on the verge of mistaking replicability for truth (Porter 1995).

Despite these problems, it seems to make sense that people are able to intuit, for lack of a better word, that some things are right and others wrong. Every person I know would agree that lying is wrong, even if we might struggle to articulate exactly why or how we know so. But why do we believe so strongly that we can just know certain ethical truths when Mackie shows us that, on the surface of things, that claim is unusual to the point of being absurd? Again, Taylor is here to guide us through attempts to understand the basis of these claims.

We must examine the worldview implied by belief in a knowable universal morality. Taylor asserts that a feature of much of modern thought is a belief that the natural world is morally neutral and that moral order is either constructed by or largely acted out within humanity. In other words, fact and value are made distinct and value is localized within humanity to a greater extent than it was in pre-modernity (Taylor 2003). Beauchamp and Childress would agree with this anthropocentric claim about moral value: “the common morality comprises moral beliefs that all morally committed persons believe. It does not consist of timeless, detached standards of truth that exist independently of a history of moral beliefs” (2019, p. 4, emphasis original). That is, things are given meaning when they elicit certain responses—such as beliefs or moral actions—from sentient beings (Taylor 2007, p. 31). If ethical value is seated in humanity, then it follows that humans can know that for which they are the natural vehicles. However, what Taylor points out to those prone to the mistakes of acultural narration is that this worldview is not ubiquitous. It was achieved through particular choices made within a specific historical context.

Taylor’s A Secular Age traces the historical events that led to a particular Western modernity that was not inevitable but the result of multiple choices made along the way. While I can only briefly touch on aspects of Taylor’s line of argument in A Secular Age here, an important thesis is that in the transition from pre-modernity to modernity, Western society underwent a subtle process of “immanentization” by which what is found meaningful in life came to be increasingly enclosed within the material realm (Smith 2014, p. 48). The Good was more often understood as human flourishing rather than some transcendent mystery. This immanentization has even touched religious activity, which often has an increased focus on the moral order of this world compared to its pre-modern counterpart. Take for instance Christianity’s increased emphasis on the “goodness of Creation,” on social justice, on loving Christ through loving one’s earthly neighbor. In other words, the source of value is more often confined to the human frame. Whereas the pre-modern Westerner saw himself as tied up with and porous to an enchanted world permeated by magic and divinity, the processes of immanentization made it possible for some modern Westerners to view meaningfulness and moral significance as internally constituted and experienced. For such a modern Westerner, the experience of meaningfulness occurs within the human mind and the idea of an “introspective self-awareness” is second nature. But this would have been nonsense to our pre-modern ancestors (Smith 2014, p. 31).

How did this anthropocentric view come to be possible? Consider two brief examples from Taylor’s extensive argument. First, in the transition away from Latin Christendom, there was a shift from focus on the Christ of Judgment to the Christ Incarnate, the Man-God. As the human, suffering Christ was emphasized, there was a corresponding growth in conceptualizing religious devotion as serving other humans: “God’s goals for us shrink to the single end of our encompassing this order of mutual benefit he has designed for us” (Taylor 2007, p. 221). Reformed theologies, in their rejection of monastic vocations, emphasized “an enhanced status for (what had formerly been described as) profane life” (Taylor 1989, p. 216). Everyday work and relationships became a way for the individual to live out his or her particular calling. In the words of Puritan minister John Dod:

Whatsoever our callings be, we serve the Lord Christ in them…Though your worke be base, yet it is not a base thing to serve such a master in it. They are the most worthy servants, whatsoever their imploiment bee, that do with most conscionable, and dutifull hearts and minds, serve the Lord, where hee hath placed them, in those works, which hee hath allotted unto him (Taylor 1989, p. 223).

The hallowed was brought down into the previously unhallowed world and thus there was increased focus on this world and on human beings, even for those who continued to practice a religion.

A similar mode of anthropocentric shift occurred in the emerging civil order that marked the last half-millennium. As warlords were replaced with a sedentary elite and urbanization grew, there was an increased focus on the moral requirements of “polite civilization.” No longer was the civil order considered a reflection of the divine order (consider the divine right of kings); it was instead an economic order based on exchanges between humans. The purpose of social intercourse moved from the transcendent to the immanent, namely, mutual human benefit (Taylor 1989, p. 237). Unlike its predecessor, this new moral, social, and political order did not need the transcendent to operate. God did not necessarily die; it simply became unnecessary for the divine to intrude on human life (Taylor 1989, pp. 238–239). The claims of mystery, grace, eternity, and revealed knowledge that previously preoccupied the pre-modern world were eclipsed (Taylor 1989, pp. 238–239). The universe lost its enigmatic divine order, and it became possible to believe that God relates to humans by creating a moral order within the material world that humans are able to comprehend (Taylor 2003, 2007, p. 221). Justice came to require that we control and correct nature and economies—for instance, “mitigating the negative effects of life’s lotteries” (Beauchamp and Childress 2013, p. 263). As the seat of value became internal to humans, there was no longer any reason to believe that anything was inscrutable.

These are but two examples among the many historical events, e.g., fifteenth-century nominalism, Renaissance humanism, and the scientific revolution (Taylor 2007, p. 90), that led Western people to take increasing interest in “nature-for-its-own-sake” and less interest in the divine. The end result is the modern focus on the immanent, the primacy of humanism, and an epistemic Pelagianism to which Beauchamp and Childress are beholden (Smith 2014, p. 52). Humans became capable discerners (and sometimes authors) of value, of the Good. The relevance to our argument here is that, as Taylor carefully lays out, the events that constituted this particular anthropocentric transition were specific to Western Christendom. This realization shows that the worldview many live by today—the worldview that makes possible the epistemic access the common morality needs—is not universal but particular to Western Christendom’s transition to modernity. If this necessary pre-condition of the common morality is not universal, how can the common morality itself be universal? Suspicion is high.

The primacy of autonomy in the common morality

The third and final essential feature of the common morality that I will examine here is the primacy of the autonomous individual. Respect for autonomy happens to be one of the four principles that Beauchamp and Childress describe as constituting the particular morality of bioethics. They explicitly deny that the autonomy principle is the primary principle in bioethical morality. Nonetheless, other writers have been drawn into believing that Beauchamp and Childress prioritize autonomy over their other particular principles. This is not a surprising interpretation of Beauchamp and Childress because the enactment of the other three principles, beneficence, nonmaleficence, and justice, relies on the ability of a moral agent to freely choose to act on them. It is my contention that Beauchamp and Childress are actually committed to the primacy of autonomy by their description of the common morality.

To begin, Beauchamp and Childress claim that the common morality endorses human rights (2019, p. 4). Embedded within the idea of morally prescribed entitlements, such as rights, is a conceptual scaffolding of atomistic individualism. In fact, the very definition of the common morality that they give alludes to the primacy of autonomy: “We call the set of universal norms shared by all persons committed to morality the common morality” (Beauchamp and Childress 2019, p. 3, emphasis added). This definition requires that a moral commitment be made by moral agents; this presumes that (1) there are discrete units of moral agency and (2) moral agents can make choices about their ethical actions. This seems so obvious that it is hard to imagine a concept of ethics that does not require this to be so. Yet, as I will show shortly, this lack of imagination is another symptom of the particular history of the West.

Before moving on to Taylor’s response, let me take a moment to address the possible argument that the term “committed” in Beauchamp and Childress’ definition of the common morality might imply a passive sense of the word rather than a connection to self-governance and choice. The former interpretation would be inconsistent with Beauchamp and Childress’ moral stance. If being a serious moral agent could mean having some external commitment to morality imposed on oneself, then Beauchamp and Childress would need to depend on some extra-human moral force—which their theory seems in no way to indicate—or allow the imposition of moral choice by another person. Because Beauchamp and Childress describe autonomy as self-rule that is characterized by both freedom from controlling influences and a capacity for intentional action (2019, p. 100), a passive sense of being morally committed is opposed to their assertion that “the principle of respect for the autonomous choices of persons runs as deep in common morality as any principle…” (Beauchamp and Childress 2019, p. 99). Therefore, self-rule actualized in moral commitment is essential to their common morality.

Taylor spends the majority of A Secular Age tracing the development of the idea of a discrete or “buffered” self. He argues that we currently have a pre-theoretical belief about the self that is structured by a background understanding of “inside/outside geography,” that is, the self as insulated within the person, in the ‘mind’ in a broad sense of that word (Taylor 2007, p. 32). This is a newer development in the history of humanity and, again, has a particular history in the rise of Western modernity. Looking into the past we can see examples of how people used to see themselves as “porous” or not clearly demarcated from what we would today call the “external” world. For example, inanimate objects used to be routinely viewed as loci of power and agency. Consider saintly relics that had the power to cause benefit or suffering not just through the personality of the saint but also through the things themselves (ibid., pp. 32, 37). The “line between personal agency and impersonal force was not at all clearly drawn” in the pre-modern enchanted world. In general, our pre-modern ancestors were comfortable living without the boundaries between self and non-self that seem essential to us today. This is not to say that Western moderns are never affected by things, respond to things, or change the way they see the world because of what they observe. But for them, meaning-making is generated within the agent. For some modern people there is a continued belief in a non-human agent such as God, but no longer do we generally think of material things as “full of god-power.” Again, the theological shift to focusing on the human aspect of Christ was vital in leading to a focus on “ordinary people in their individuality.” As the church began to teach personal judgment, personal responsibility, and personal salvation in the Middle Ages, the individual became possible and primary (ibid., pp. 65–66).

For an illustration of the transition from the pre-modern porous self to the modern buffered self, consider how we understand mental illness today. A modern person who is depressed is told, “it’s just your body chemistry, you’re hungry, or there is a hormone malfunction” (Taylor 2007, p. 37). He feels relief because he can now distance himself from the feeling, “which is ipso facto declared not justified” (ibid.). This is the disengagement and buffering of the self that is distinctive of some modern Westerners. But compare this to the pre-modern, for whom black bile is the embodiment of melancholy. It is a thing with evil meaning; if he is depressed, he feels no relief from learning it is due to black bile because there is no distancing in this explanation. He is in the grip of the evil, of “the real thing” (ibid., p. 37). To the modern, constraints to freedom and self-governance are something that happens to him, an injustice or misfortune. To the pre-modern, there is no life that is not always open to and vulnerable to the impulses of the cosmos.

The primacy of the individual and the possibility of autonomy or self-governance are newer developments, ones that marked the specific transition to modernity in Western society. The idea that the good is primarily manifested in individual choice of conduct—the heart of modern Western ethics and the common morality—was not part of how our ancestors interacted with the world. Again we see that an essential feature of the common morality was not always supported by the Western way of life, raising our suspicion that there are contemporary cultures that might not embrace such metaethics either (Jacobson-Widding 1997). Any ethic that relies on a pretheoretical understanding of the buffered self and makes claims of universality must show that all societies underwent the same or a similar transition as the West. This is a tall order that seems unlikely to be met.

The authority of the common morality

Claims to universality, epistemic access, and the primacy of autonomy are three features necessary to the common morality as constructed in Principles. In examining these features, I have described how the work of Charles Taylor supports my suggestion that the common morality—vanishingly thin, difficult to pin down, and yet compelling to an entire field of Western bioethics—is conceivable because of a very particular historical trajectory. That the rise of these metaethical features is particular to Western history suggests that they may not be as ubiquitous as we assume. There was a time when these second-order claims were unacceptable (or unimaginable!), and they became acceptable through historical developments peculiar to Western history. There is no reason to believe that other cultures, different from the pre-modern West to begin with and undergoing different historical events, would come to these same conclusions about the nature of the world, of the self, and of the good that make the existence of a common morality plausible. This heightens the suspicion that the common morality is not common at all and therefore suffers from “a credibility problem” (DeGrazia 2003, p. 222).

What is left for the work of Beauchamp and Childress, then, if the common morality cannot claim universal moral authority? The common morality, if built on shared experiences of a particular community, is much more likely a consensus than a (more or less) objective moral entity. Acknowledging that the common morality is more akin to consensus actually makes Beauchamp and Childress’ theory more compelling because it frees their work from the contradiction of an ethnocentrically constructed ‘universal’ morality. Additionally, content can be restored to their starting premise because—to borrow a phrase from Engelhardt—moral friends can agree on a larger cache of norms. Circumscribing the common morality to a specific community (Kuczewski 2009) directly undercuts Beauchamp and Childress’ claim to a universal bioethics, an outcome I believe is for the best. For too long, bioethicists have underestimated the true diversity in moral worldviews and commitments among patients (Kleinman 1995). Globalization may bring us closer to something like a common morality (Sullivan 2007), but we need to ask whether this is a genuine convergence of values (one that might be morally prescriptive) or better understood as moral colonialism.

If we understand principlism as a product of consensus, we can also address one pesky hazard of Beauchamp and Childress’ apologetics for the common morality. In the final chapter of Principles, they seem to suggest that there is some good more authoritative than the common morality when they try to explain why it is that the common morality did not prohibit slavery: “Where we are in the common morality is not necessarily where we should be” (2019, p. 448). If the common morality is the set of moral ideals held by all morally committed people, then a claim that the common morality should be something else is an attempt to use a particular morality (derived from the common morality) to correct the common morality. This is a circular claim that obscures the source of moral authority. If instead the common morality is actually a consensus among a particular group, then it is more reasonable for Beauchamp and Childress to argue that it fails to encompass some ideals that they hold.

There are those who fear that giving up on the common morality means a descent into the chaos of relativism. Perhaps. But trying to impose a false order out of discomfort with the alternative is an error. Deep pluralism already exists, and the world is still standing. Ironically, my analysis shows that the origin of the concept of a common morality might itself be a convincing example of just how deep ethical pluralism goes. Our ethics are profoundly shaped by our histories and our cultures; if these are divergent, then our ethics are likely divergent as well.

Summary

In conclusion, I have shown that Beauchamp and Childress’ concept of the common morality relies on three essential assumptions: that the common morality is universal, that it is epistemically accessible to humans, and that human autonomy is primary. The work of Charles Taylor gives an account of how the specific history of Western modernity made these features of the common morality philosophically conceivable and suggests that other cultures might not have experienced the same shifts that make these claims thinkable. Therefore, the metaethics of Beauchamp and Childress’ work are not universal but particular. This fact, compounded with the already content-thin and tenuous first-order claims of the common morality, must arouse our suspicion. It seems more plausible that the common morality is less a moral system than the mark of an experienced transformation particular to the West. We therefore ought to ground the authority claims of any bioethical principles in a concept of consensus rather than a common morality theory.