Developments in technology have prompted ethical concerns for as long as history has been recorded. Writing itself is denounced by Plato in the Phaedrus, and other technological developments since then that have attracted moral censure include the mechanical clock, the crossbow, printing, the steam engine, vaccinations, and nuclear power, to name only the most notorious examples. It is as if the extent of man’s curiosity and genius for invention were equalled only by his apparent discomfort with these faculties. This discomfort is encoded in many ancient myths, from the Hebrew story of the expulsion from the Garden of Eden to the Greek tale of Prometheus and the Mayan legend of the rebellion of the tools.

There is nothing new, then, about the aversion that many people today feel towards new technology, even though contemporary developments such as genetic engineering, nanotechnology, and the use of stem cells in medical research are themselves new. Indeed, it is all depressingly familiar.

What is new is a certain willingness by some scholars to endow this aversion with normative weight. Traditionally, philosophers (in the Western tradition at least) have regarded emotional reactions as inimical to rational appraisal. In his famous metaphor of the chariot, Plato portrayed the passions as horses and reason as the charioteer. The message is clear: the passions may provide motive power, but it is up to the charioteer to steer them in the right direction. Immanuel Kant too argued that moral decisions should be a matter for pure reason, excluding all “pathological” emotional considerations. In standard accounts of the history of Western philosophy, Kant’s views are usually contrasted with those of David Hume, who argued that approbation or blame “cannot be the work of the judgement, but of the heart; and is not a speculative proposition or affirmation, but an active feeling or sentiment” (Hume 1777, Appendix 1). However, it is worth noting that Kant and Hume agree on a fundamental idea: namely, that moral judgements are irrational to the extent that they are determined by emotional considerations. Kant believes that moral judgements can and should be made without emotional involvement, and are rendered irrational to the extent that they are contaminated by emotion. Hume believes that moral judgements are always determined by emotional considerations, and concludes that they are therefore irrational (or at least arational). Neither Kant nor Hume attributes any normative weight to our emotional reactions.

Recently, however, some thinkers have proposed an alternative view, according to which emotions can be a normative guide in making moral judgements. Perhaps the best-known proponent of this view is Leon Kass, who argues that feelings of disgust may be the manifestation of a kind of moral “wisdom” (Kass 1997). Kass is certainly not alone, however, nor is disgust the only emotion on which such attempts to endow emotions with normative weight have focused. Roeser, for example, agrees with Kass that emotions can be a normative guide in making moral judgements, but her focus is on sympathy, empathy, fear and indignation (Roeser 2006b).

Critics of Kass have rightly pointed out that human history is littered with examples of things that were once considered disgusting but which we now recognise were inappropriate objects of revulsion. Homosexuality, working women, and other races were all considered disgusting by very large numbers of people, and sometimes whole societies. Yet few would say today that those feelings were appropriate. As John Harris points out, “we ought to have a rational caution about following the yuk factor because we know it has led us not only in the wrong direction but in a thoroughly corrupt direction” (cited in Ahuja 2007). History teaches us that we cannot rely on the emotion of disgust to provide our moral compass. Like other emotions, disgust can be educated, but it can also have dubious causes.

The same arguments could, of course, be made against claims for the moral significance of emotions other than disgust, such as Roeser’s claims for the moral significance of sympathy. More important than any claim about the moral significance of a particular emotion such as disgust or fear, though, is the logically prior claim that emotions of whatever kind can carry normative weight. To my mind, this is Kass’s most fundamental error; the argument about disgust is important only as a special case of the more general claim.

Prima facie, it would seem that Kass and the other thinkers who share his views on the normative weight of emotions are simply making an elementary philosophical blunder by failing to observe the is-ought distinction. If one starts with the premise that research involving embryonic stem cells is disgusting, and concludes (after any number of intermediate steps) that one ought not to engage in such research, then it is clear that at some point in the argument one has made an invalid inference unless one of those intermediate steps is a premise to the effect that one ought not to do disgusting things. It is then clear that the moral weight of the argument depends on this crucial moral claim, and not on the empirical facts.

However, the proponents of the moral emotion view (as I shall call it here) would presumably reject such a criticism on the grounds that it is too simplistic. Roeser, for example, claims to base her views on “recent developments in neurobiology, psychology and the philosophy of emotions”, which, she thinks, show that “emotions and rationality are not mutually exclusive, but rather, in order to be practically rational, we need to have emotions” (Roeser 2007). Roeser takes these empirical findings in psychology to provide some support for her specific version of the philosophical position known as ethical intuitionism – the thesis that we sometimes have intuitive awareness of value, or intuitive knowledge of evaluative facts, which forms the foundation of our ethical knowledge. According to Roeser, ethical intuitions are paradigmatically cognitive moral emotions with which we perceive objective moral truths (Roeser 2006a).

Moral realism is the critical premise on which all of Roeser’s claims about the normative status of emotions depend. To refute these claims decisively, then, it would be necessary to show that moral realism is false. Limitations of space make it impossible, however, to rehearse the well-known and well-established arguments against this notion here. For the purpose of this article, I will limit myself to dealing only with what Roeser herself calls “the main argument for moral realism” (Roeser 2006b, p. 692). This argument begins by assuming that if there were no moral truths, there would be no objective standard against which to evaluate a situation. It then appeals to our moral intuitions, which tell us clearly that certain moral practices are wrong, and by modus tollens infers that there must be moral truths. The argument is valid, but it is sound only if our moral intuitions are reliable guides to the truth. Yet this is exactly what the argument purports to show, so the reasoning is circular. “This might sound like wishful thinking or circular reasoning,” admits Roeser, but then adds: “it is rather to be understood as ‘inference’ to the best explanation” (Roeser 2006b, p. 692). This is disingenuous; no amount of denial will obscure the blatant circularity.

Nor does the occasional reference to “cognitive” theories of emotion provide any support for any species of moral realism. I suppose Roeser is right to claim that “cognitive theories of emotions allow for the idea that emotions are basic perceptions of moral reality” (Roeser 2006b, p. 692), but the mere fact that cognitive theories of emotion might be logically consistent with the thesis of moral realism does not provide any grounds for thinking that moral realism is true. Some cognitive theories of emotion hold that emotions are judgements of value (Nussbaum 2001), but the sense in which the term “value” is used here is not a moral or ethical one. Rather, what “value” means in this context is the relation that some event or fact has to an organism’s desires or intentions. Something has value in this sense if and only if it is either a potential aid or a potential obstacle to the achievement of one’s desires or intentions, irrespective of any moral or ethical matters. It is simply a category mistake, therefore, to think that cognitive theories of emotion provide any support for any species of moral realism.

It is likewise a mistake to think that certain contemporary views on the role that emotions play in practical reasoning have any bearing on the question of whether or not emotions have normative weight. The view that humans need emotions in order to be practically rational has become increasingly popular in the past decade (e.g. de Sousa 1987; Evans 2002). But, like the cognitive theories of emotion with which this view is often closely associated, it is entirely a matter of empirical psychology, and has no necessary link with any species of moral realism.

In what follows, then, I simply assume that values, norms and ethics are all subjective phenomena, in the sense that we may have opinions about them, but there are no facts of the matter. This does not imply, of course, that no practice can be morally better or worse than another. It simply means that statements about the relative moral value of different practices must always be relativised to a given person or community. Democracy may be morally better than dictatorship for this person, or for that community, but never per se.

According to Roeser, her philosophical framework “is meant as a ‘third way’ between Kant and Hume” (personal communication). But like many “third ways”, this is really no more than incoherence masquerading as complexity. The truth does not always lie half-way between two opposing views. Sometimes, the dichotomy exhausts the space of logical possibilities. In such cases, to reject the dichotomy as being “too simplistic” is intellectually dishonest. It muddies the water and prevents clear debate.

Until Roeser and the other proponents of the moral emotion view provide a clearly articulated explication of their much-vaunted “third way”, then, we must treat their claims as mere hand-waving. Neither they nor anyone else has yet provided sufficient reasons to question the widely held view that emotions provide no evidence at all for or against any moral or ethical claim.

Does this mean that emotions convey no ethical or moral information? Certainly not. Emotional reactions often (but not always) convey information about the ethical and moral beliefs of the person exhibiting the reaction. If a person reacts with anger when she reads about a businessman who retires with a fat pension after almost bankrupting his company, I can reasonably infer that among her moral beliefs is one that places a high value on accountability and fairness.

Although this idea is hardly new, it is still underdeveloped. When combined with recent developments in moral psychology, however, such as Jonathan Haidt’s “moral foundations theory”, it gives rise to some interesting consequences.

Haidt argues that there are five psychological systems that provide the foundations for the world’s many moralities. Each system is specialised for detecting and reacting emotionally to distinct issues: harm/care, fairness/reciprocity, ingroup/loyalty, authority/respect, and purity/sanctity. When the harm/care system is triggered, the emotions of fear and compassion may be activated. The fairness/reciprocity system evokes primarily the emotions of anger, gratitude and guilt. The ingroup/loyalty system involves strong social emotions related to recognizing, trusting, and cooperating with members of one’s co-residing ingroup, while being wary and distrustful of members of other groups. Emotions of pride, shame, awe and admiration are manifestations of the authority/respect system. Finally, activation of the purity/sanctity system is associated most strongly with the emotion of disgust:

Disgust appears to function as a guardian of the body in all cultures, responding to elicitors that are biologically or culturally linked to disease transmission (feces, vomit, rotting corpses, and animals whose habits associate them with such vectors). However, in most human societies disgust has become a social emotion as well, attached at a minimum to those whose appearance (deformity, obesity, or diseased state), or occupation (the lowest castes in caste-based societies are usually involved in disposing of excrement or corpses) makes people feel queasy. In many cultures, disgust goes beyond such contaminant-related issues and supports a set of virtues and vices linked to bodily activities in general, and religious activities in particular. Those who seem ruled by carnal passions (lust, gluttony, greed, and anger) are seen as debased, impure, and less than human, while those who live so that the soul is in charge of the body (chaste, spiritually minded, pious) are seen as elevated and sanctified. (Haidt and Graham 2007, p. 116)

Haidt’s theory allows us to make much more systematic inferences about the information that emotional reactions often convey about the ethical and moral beliefs of the person exhibiting the reaction. For example, if someone appeals to the emotion of fear when expounding on their moral opposition to GM crops, we can infer that the risks they associate with this technology are largely to do with the possible harm that this technology could do (by, for example, damaging the digestive system of those who consume them). Alternatively, if the emotion of anger plays a larger role in someone’s opposition to GM crops, we might infer that the risks they associate with this technology have more to do with possible injustice (such as increasing the profits of large corporations at the expense of small farmers). Or, again, if it is the emotion of disgust that seems to motivate the opponent of GM crops, it may be that the risks that weigh most heavily on their mind are spiritual or theological ones (such as “tampering with God’s creation”).

Haidt has also argued that political liberals tend to base their moral intuitions primarily upon just two systems (the harm/care and fairness/reciprocity systems), while political conservatives generally rely upon all five systems. Liberals therefore often misunderstand the moral motivations of conservatives, explaining them as a product of various non-moral processes such as system justification or social dominance orientation. The fact that bioconservatives like Kass see wisdom in the emotion of disgust is clearly in line with Haidt’s claim that the values of purity and sanctity tend to play an especially important role in the moral beliefs of political conservatives. Similarly, the fact that liberals like Harris disparage the appeal to this emotion is also in line with Haidt’s view that purity and sanctity do not even figure as concepts in liberal moral systems.

Haidt’s thesis is not necessarily disproven by the recent appropriation of disgust by liberal thinkers. Dan Kahan, for example, has argued that even a liberal society needs to build law on the basis of disgust and attempts “to redeem disgust in the eyes of those who value equality, solidarity, and other progressive values” (Kahan 2000). Liberals should not, he argues, cede the “powerful rhetorical capital of that sentiment to political reactionaries” just because prominent defenders of disgust have often used it to defend conservative ideas. While this may seem an interesting tactical manoeuvre, if Haidt is right about the deeper psychological foundations of moral discourse, it is not likely to win much support among liberals. Time will tell.

Haidt’s analysis is valuable here, not just because of his theses about specific emotions such as disgust, but also for the more general light that it throws on the debate about the role of emotions in moral reasoning. Perhaps the debate between Harris and Kass is not about the importance of emotion per se in moral reasoning, but about the relative value of particular emotions in moral reasoning. If this is the case, then it might be more perspicuous to view the debate between Harris and Kass, not as simply a rerun of the Kant/Hume debate, with Harris playing the role of Kant and Kass the role of Hume, but rather as a debate between different species of Humean ethics. If this is true, we would expect Harris and Kass to agree on the importance and relevance of emotions like compassion and pity to moral debate, since both liberals and conservatives base their moral intuitions on the harm/care system with which such emotions are associated. A true Kantian, of course, would take these emotions to be just as irrelevant to moral reasoning as the emotion of disgust.

Even a Kantian can, however, find something of value in this analysis. The fact that a person’s emotional reaction can be used to infer their implicit moral values does not, of course, imply that emotions carry any normative weight. The Kantian is nevertheless perfectly entitled to avail himself of such emotional evidence to help tease out the moral values which are at stake in the argument. Once emotions have been used in this way, the argument can proceed in an entirely unemotional way.

In the case of arguments about risky technologies, the Kantian can use the evidence provided by emotional reactions to help clarify what exactly the risks are that a person associates with a given technological development. When this has been established, however, the likelihood of those risks will be assessed by rational means alone – that is, by statistical evidence, without reference to emotion-laden perceptions. For example, suppose that my reaction to some new development in biotechnology is fear – fear that the acceptance of this vital new technology may be hampered by misleading propaganda put about by environmentalists. That would suggest that the risks that matter most to me are risks of possible harm – in this case, the harm done to humanity by depriving people of a means for improving quality of life – rather than the risk of injustice or some imaginary “theological risk”. That, in my view, is where the “moral” issues end. What remains is for me to gather empirical data about the likelihood of the risks I care about. This is a purely statistical matter.

The findings that have accumulated over four decades of research in the heuristics and biases programme must remain the key reference point here. These findings show that emotions almost always reduce the rationality of decisions regarding the moral acceptability of technological risks by causing us to pay more attention to potential harms or potential benefits than is warranted by the evidence. Sometimes, enthusiasm can lead proponents to pay too much attention to the benefits of a technology and not enough attention to the risks. More often, however, it is the other way round, with the risks getting too much attention and the benefits being downplayed. The prevalence of this “luddite bias” may have some evolutionary basis; many emotional subsystems in the brain seem to be biased in the direction of perceiving threats at the expense of missing benefits, and overall there are many more negative emotions than positive ones. Thus people tend to be better at imagining the potential harms of new technologies than at imagining the benefits.

As Cass Sunstein has pointed out in Laws of Fear, a truly rational analysis will always balance the risks of developing a given technology against the risks of not developing that technology (Sunstein 2005). The luddite bias is therefore an obstacle to rational analysis. One may attempt to overcome this obstacle by systematic debiasing methods, such as forcing oneself to list as many potential benefits as potential harms when considering a new technology. Given the powerful emotional nature of the luddite bias, however, intellectual corrective procedures may not be enough to counteract it, and it may therefore be necessary to employ emotional debiasing techniques too. For example, one might attempt to elicit the corresponding positive emotion for each negative emotion. When considering the possibility that GM foods might be toxic, for example, we should also consider the possibility that they might help avert starvation in developing countries, and we should try to elicit the emotion of compassion for the millions of people who might be helped in this way. Alternatively, if we are carried away by enthusiasm for a particular technological development, we might try to elicit a reasonable degree of fear for the potential risks.

This process is not, of course, a substitute for the rational assessment of the likelihood of the potential harms and benefits, but merely attempts to make sure that the emotional input into the decision-making process is fair and balanced and so less likely to distort the unbiased gathering of relevant information.

1 Conclusion

I have outlined a way in which emotions may play a role in assessing the moral acceptability of the risks associated with new technologies without impairing the rationality of such assessments. Even a Kantian could allow that emotions may enhance the rationality of such assessments. This underlines the importance of spelling out precisely the nature of claims about the “rationality of emotion”, a phrase which can cover a multitude of sins. All a Kantian would mean by such a phrase is that emotions can help to clarify what exactly the risks are that a person associates with a given technological development. Their role is purely to provide empirical evidence concerning the implicit values of a given person.