This handbook is about a branch of engineering. The characteristic question in engineering is: Can it be done? Increasingly, though, engineers have had to reckon with another question: Should it be done? Like it or not, that second kind of question looms large in emotion-oriented computing. This part of the handbook aims to encourage people to address it and to provide resources that allow them to address it effectively.

Some of the arguments around the ‘should’ question are thoroughly familiar. How do the costs and benefits balance financially? Will there be a disproportionate impact on certain social groups? How will the quality of the environment be affected? These are not new issues, but they are still difficult.

The research described in this handbook is about machines whose functions are much less familiar. Generally, they are machines designed to do things that we associate with humans. Specifically, they are machines designed to do things that we associate with the emotional side of human life. It should be no surprise that extending into these new areas raises new questions in the domain of ‘should’. The difficulty is that the arguments on all sides seem to be shrouded in mist.

Concerns often begin when popular writing throws up images of machines emerging from the laboratory fully equipped to carry out a function that we had thought of as uniquely human – very possibly with more than human accuracy. Closer inspection almost always shows that the machines reproduce at most a small part of the corresponding human ability – using the same word to refer to both is a matter of convenience rather than accuracy. Nevertheless, the mist remains: Who knows how much further the next generation of machines might go? Questions extend from there: If the machines acquire these new abilities, how will humans be affected? Of course there will be positive effects; otherwise, the machines would not be taken up. But they may be a Trojan horse, and far-reaching effects on human society may be waiting to emerge. Compounding that, it is not clear what standards should be used to measure the good or the harm that a possible new system might do. In the extreme, might it not be better to replace the current version of humanity with a mechanised successor? The mist stretches on into the far distance.

Unfortunately, the mistiness of the arguments does not mean that there is nothing worrying beneath them. There are close links between emotion and morality. For example, to say that someone is jealous or sulking is to pass a negative moral judgement. It is disturbing to think that we might give machines licence to pass moral judgements under the cover of describing emotions. On the other side, there is an obvious case for saying that a machine showing signs of emotion is engaged in a kind of deception. There are very strong moral reasons for objecting to technologies that are designed specifically to achieve deception – particularly if the deception is likely to be targeted on the most vulnerable in society.

Even more concretely, contemporary technologies depend on databases of material that shows humans expressing relevant emotions. That raises unmistakable ethical questions. There may be good reasons to develop systems that can recognise people in fear of their lives, but it is absolutely not ethical to obtain the necessary recordings by inducing that kind of fear in randomly selected members of the public. Less obviously, there are ethical issues surrounding the dissemination of audio and visual recordings that are recognisable. It can be thoroughly disturbing for the subject of a recording to find him- or herself appearing in an undignified pose in conferences, journals or, worse still, press coverage. Those who are not personally moved by that line of thought should remember that the law is.

The chapters in this part of the handbook aim to give people working in the area the resources that they need to steer between unreasonable fear of misty possibilities and cavalier indifference to serious concerns. They range from pure principle to solid practicality.

1 The Chapters

The chapter by McGuinness deals head-on with a problem which, as many people in the area know from painful experience, can all too easily make it impossible to reach any kind of conclusion in an ethical debate. When people try to justify ethical conclusions in a particular case, they naturally try to ground their arguments in general ethical principles. The problem is that there are several quite different kinds of premise that can be used to ground ethical arguments. Most of them can be traced back in one form or another for millennia, and millennia of argument have produced no way of deciding which is best. As a result, attempts to argue through specific ethical issues tend, with awful regularity, to slide back into arguments about the alternative types of ethical principles that might justify one choice or another, and those are arguments that we know there is no prospect of resolving.

McGuinness points emotion-oriented computing towards a solution that has gained widespread acceptance in other areas which have real ethical concerns – above all, medical practice. It is called principlism. Its strength is that it refuses to begin the doomed search for moral axioms. It argues that particular moral judgements are essentially perceptual. We see that some things are acceptable and some are not. By and large, those that are acceptable are acceptable on any reasonable set of moral premises – naturally enough, because perceptual judgements provide the tests by which we measure the acceptability of a set of moral premises. What principlism adds to pure intuition is a set of guidelines that remind us of the broad areas that should be considered in a moral judgement. Also, and not to be underestimated, it provides good and sophisticated authority for insisting that it is naive to think that ethical judgements should be derived from ethical first principles.

The two following chapters move from the general basis of ethical judgement to particular ethical issues that affect emotion-oriented computing. Goldie, Döring and Cowie offer a broad view of the reasons why the area gives rise to particular moral and ethical concerns, while Baumann and Döring focus on one key concern, which is the possibility that emotion-oriented systems might infringe human autonomy.

Goldie et al. describe their brief as dealing with ‘the obstinate, vaguely defined concerns that seem troubling to the general public’. The argument is not that the concerns are objectively valid but that people working in the area need to understand them – otherwise, they will be permanently at cross purposes with the jury in the court of public opinion. Four problem areas are identified. The first is that people in general lack a framework that allows them to think clearly about even natural emotion, much less artificial analogues. Lacking a framework, they have no way to judge what is possible and, therefore, no way to think through possible risks and benefits. The second problem area is what seems to be a deep-rooted human mistrust of things that behave or look like humans but that are actually not – in a word, impostors. Creatures that lure unwary people into trusting them, and then suddenly reveal their inhumanity, are part and parcel of the world’s folklore. As the authors put it, that kind of feeling ‘is an obstinately disturbing echo that disrupts attempts to hold cool, rational discussion about EOT’. Thirdly, the authors discuss the threat that ethics itself may be undermined by bringing people into contact with new kinds of agent. If we decide that we should treat machines as moral entities, we risk finding that humans slide alarmingly down the scale of moral value – for we would hardly build machines that reproduced the human race’s uglier tendencies. If we deny them moral status, then we risk promoting systematic indifference to the fate of agents that are complex, intelligent and at least apparently emotional, and there is no guarantee that the indifference will stop with machines. Lastly, there are hugely complicated issues in ensuring proper legal control over a new type of device – particularly when it is unlikely that the lawmakers will have expert understanding of its potential. It will be no surprise to those who know Goldie’s (2010) work that the chapter is intensely thought-provoking.

In contrast, Baumann and Döring focus on one type of concern, which is at least philosophically relatively well defined. One of the areas highlighted by principlism is autonomy, implying ‘a duty of non-interference, for example, respect for the decision-making capacity of an individual even if the consequences of these decisions are not in their best interests’. Emphasis on autonomy is not unique to principlism; it is a major theme in contemporary philosophy. Baumann and Döring show that many of the issues that worry the public, including those that are raised in the last part of the previous chapter, can be linked to that theme, and that leads them to a clear prescription: ‘developers have a prima facie duty to build machines that cannot (under normal conditions) infringe the autonomy of persons’. The chapter aims to equip people to fulfil that duty. The first part summarises contemporary thinking about autonomy. It then shows why emotion-oriented technology carries a risk of infringing personal autonomy, and goes on to consider principles that system developers might use to minimise the risk. It is worth noting, as the authors do themselves, that the chapter expands one part of the agenda that was identified in the previous two chapters. Autonomy is one of four themes in principlism, and present risks of violating clear ethical principles form one of Goldie et al.’s four sources of public concern. It is an expansion that may be particularly useful in practice, though.

The final chapter in the section focuses completely on practicalities. The established method of keeping research on track is monitoring by an ethical committee. The first author of the chapter is Ian Sneddon, who has chaired ethical committees both in his own institution and for HUMAINE. The chapter is grounded in the experience of ensuring that committees are adequately informed and can reach decisions that are defensible and timely. The issues are set in the historical framework of the HUMAINE ethics committee, which was the first committee to be concerned specifically with emotion-oriented computing. The chapter sets many of the issues discussed in earlier chapters, such as the philosophical basis of ethical decision making, in the context of a working committee. However, it also works through the very varied pragmatic issues, such as the codes that can be used, obtaining informed consent and storing emotion data, which are the bread and butter of an ethics committee’s work.

2 The Good We Hope to Do, the Harm We Should Avoid

Several of the chapters make the point that it would be wrong to focus exclusively on the problems that emotion-oriented technology might create if we were not vigilant. It is worth ending on that note.

It is a feature of the area that the research is idealistic. Our society requires people to interact with automatic systems. At best, the interactions currently depend on the humans adopting modes of interaction that suit the computers. That excludes people who find that kind of adjustment difficult or impossible, and it creates dangers in situations where, for instance, an emergency disrupts the human’s ordinary ability to make the adjustment. It means that huge amounts of information that are held electronically are in practice highly unlikely to be absorbed by humans who could benefit from it (handbooks and manuals are a prime example). It leads to interactions that may be designed with the best will in the world but that most humans find aversive (automatic answering systems are a prime example). It would be easy to extend the list.

The point is not at all new, but it is worth repeating that in all these areas, pressures to develop systems with at least some emotional competence are not created by a community that has chosen to pursue its own research agenda. Rather:

They arise willy-nilly when artefacts are inserted, into situations that are emotionally complex, to perform a task to which emotional significance is normally attached. There is a moral obligation to attend to them, so that if we intervene in such situations, we do it in ways that make things better, not worse (Cowie, 2010, p. 172).

Fulfilling moral obligations tends not to be easy, but ignoring them should not be an option.