Introduction

When lecturing on the ethics of technology, I like to invite the audience to engage in a thought experiment. What would our lives look like without technology? People look around, and then the air conditioner comes to a grinding halt. The next moment walls come tumbling down, and the roof disappears into thin air. We flinch as we find ourselves on the ground, with the chairs gone. We have a moment to notice how gray our neighbor actually is, and how pale her face, before we realize we have to cover our own naked bodies too. Fortunately, contact lenses and glasses have disappeared. And as our tongue starts exploring the sudden cavities in our mouth, a man collapses as his pacemaker gives out. It is hard to call for help without a phone.

The point is, of course, that modern lives have become utterly entangled with the technological artifacts and systems we created. Indeed, we live in a human-made world: a techno-tope rather than a biotope. In a sense, this has always been the case. We date the origins of the human species as far back as – well – the earliest artifacts we manage to find. However, technological artifacts have become much more ubiquitous since the so-called Scientific Revolution of the seventeenth and eighteenth centuries and more particularly since the Industrial Revolution of the nineteenth and twentieth centuries. How we relate to the natural world around us, to our fellow human beings, and even to ourselves – all these relations are co-shaped by the dynamics of a rapidly evolving technology.

Some people are happier than others about this, but regardless of one’s optimistic or pessimistic stance, no one can avoid the question: under what conditions is which technology conducive to human (and nonhuman) flourishing? How to design, develop, use, and distribute technology? This is the vast and rapidly expanding domain of technology ethics (Franssen 2009). Increasingly, these ethical questions are posed and explored in public debates. Topics include nuclear energy, fracking, climate change, genetic engineering, nanoparticles, production of life-saving drugs for poor countries, the impact of web surfing, privacy, mobility and migration, robots, and so on and so forth. Actually, in technological cultures, it is much harder to think of controversies in which technology does not play a major role than of controversies in which it does.

The aim of this chapter is not to tell the reader what is good and bad technology or how it should (not) be used. Instead, the aim is to introduce the reader to recurring argumentative patterns in this debate. Although technologies change and diverge, debates about new and emerging science and technologies (abbreviated as NEST) show remarkable structural similarities. Indeed, NEST-ethical debates more or less follow a shared grammar. It is this grammar that the following chapter seeks to identify and elucidate (this chapter builds upon Swierstra and Rip (2007)). If citizens, technology developers, and policy makers learn to understand and apply this grammar, it will help them to think and talk about the ethical aspects of new and emerging technologies.

Section “General Definitions” briefly explains some key concepts and offers a reflection on the relation between morality, ethics, and technology. Section “Visions of Technological Development” discusses two perspectives on technological development that play an important role in the preliminary debate about whether ethical assessment of technology makes any sense in the first place. Section “Meta-Ethics” introduces the reader to some meta-ethical arguments regarding the dynamic relation between ethics and technology. Section “Normative Ethics” provides an overview of the normative ethical arguments in favor of a certain proposed technology and juxtaposes them with the arguments used by more skeptical opponents. I am confident that the reader will recognize most if not all of these general arguments. In the concluding section “To Conclude,” I point out some ways in which real-life deliberations and discussions about NEST benefit from knowing these arguments and argumentative patterns.

General Definitions

Morality and Ethics

The concepts “morality” and “ethics” are often used interchangeably. For our purpose, however, it is useful to distinguish the two. Here, morality refers to a special category of values and norms that guide us in our ordinary lives. “Values” indicate what we think is important; “norms” prescribe the conduct that will help realize these values in practice. Values are typically formulated as nouns; norms as short sentences that start with “You should” or “You should not.” For example, trust is a value, whereas “you should keep your promises” is the associated norm. Norms articulate the “how” of values; values articulate the “why” of norms. Not all norms are moral norms, nor are all values moral values. We reserve the adjective “moral” for values and norms that carry a lot of weight, either because they formulate fundamental rules of social intercourse or because they articulate what it means to flourish as a human being (Sayer 2011). These fundamental norms typically take the form of obligations and prohibitions. Their special status is manifest in the fact that moral rules are accompanied by praise and blame, rewards and punishments, to motivate people to live according to these norms and values. Take note: this is a formal (empty) definition of “moral.” The definition does not tell us which norms and values are moral. People can and do disagree about whether a particular rule is moral or not. For instance, (female) chastity is held to be a central moral value in some cultures, whereas in others it is considered a private lifestyle choice.

An interesting observation about morality is that it largely exists in the form of routines that are so obvious that in the normal course of our lives, we are barely aware of their impact on our thinking, feeling, and acting (Dewey 1994; Keulartz et al. 2004). It would be quite disturbing if you first considered killing an annoying colleague, only to decide after ample reflection that (unfortunately) this would be immoral. The idea simply should not have entered your conscious thinking in the first place (Williams 1985). The moral taboo on murder comes so naturally to us that we obey it automatically. As a result, the most obvious and powerful moral rules and values tend to be the least visible and articulate in daily practice.

Ethics, by contrast, refers to explicitly reflecting on and discussing morality. A precondition for ethics is that moral norms or values have become explicit. The main reason they do so is that they have for some reason become problematic. At such a moment, morality stops being self-evident, commonsensical, and invisible and gets articulated into explicit topics of reflection and conversation. In such a situation, “cold” morality turns into “hot” ethics: invisible, solid moral routines become fluid in ethics, so they – if necessary – can be readjusted to the new situation.

Under what circumstances do morals stop being self-evident? First, a moral norm becomes visible when it is violated or disobeyed. Breaking a rule is the surest way to make one aware that the rule existed in the first place. Second, and related to the first reason, our moral routines get shaken up into visibility when others contest them. If we meet people with different (sub)cultural backgrounds, we not only realize that they have different norms and values, but we also become more consciously aware of our own – at least to the extent that they differ from the ones held high by the other person. A third way moral norms or values lose their self-evidence is when they conflict. Most people value honesty and trustworthiness. But what if a friend confides in you that he is cheating on his wife, and his wife later calls to inquire whether you know something? In a moral dilemma, we are forced to choose which norm to obey or which value to put first, and that forces us to articulate the values we want to, but now cannot, pursue.

Technology, Morality, and Ethics

Technological societies embody a fourth way in which morality is turned into ethics. Moral routines thrive in a stable environment because there they receive constant confirmation. But as modern, technological societies are highly dynamic, they do not provide such a stable environment. Technological change “heats up” morality into ethics. When asked around 1995 whether they would like to have a mobile phone, many people responded negatively because they did not like the idea of being reachable anywhere and at any time. Twenty years later, emotions flare and accusations fly if someone does not respond quickly enough. Twenty years ago, phoning someone was considered unproblematic. Nowadays many prefer to text, because phoning is considered intrusive (Turkle 2010).

A more futuristic example: presently everyone opposes doping in sports because it gives unfair advantages. The only factor that is allowed to determine who wins and loses is how good one is. But no one finds it problematic that, apart from training and willpower, all kinds of “natural” biological differences between athletes impact their performance. What if advances in biotechnology allowed us to neutralize these “natural” differences? Would we then still deem doping unfair? Or would we see it as a moral obligation to help the “biologically disadvantaged,” because only by doping them can the competition really be considered “fair”?

In short, new technologies have the potential to destabilize parts of our tacit, implicit morality and thus turn them into topics for explicit ethical reflection, debate, and struggle.

Rule Ethics and Life Ethics: Consequences, Principles, Justice, and the Good Life

Ethical discussions are about good and bad. We engage in such discussions not for fun, but because it is unclear what to do, for example, because we have to choose between two evils or because applying existing norms results in counterintuitive consequences. In such situations people broadly use two kinds of ethical considerations: rule-ethical arguments and good life arguments – sometimes abbreviated as the right or the good (Rawls 1988).

The rule-ethical approach is almost automatically chosen when interests conflict. To prevent the strongest from simply winning, ethics aims for solutions based on impartial rules, acceptable to all involved parties. The idea is that reasonable people voluntarily conform to conclusions drawn on such grounds. The question then is, of course: how do you find or design such rules? There exist – in general terms – three different answers to that rule-ethical question.

The first answer comes in the form of a meta-rule that states: Choose the practical alternative with the best consequences for the largest group. This type of rule ethics is called consequentialist. To determine what the “best” consequences are, consequentialist ethics usually refers to “pain” and “happiness,” as no one likes pain and everyone aims for happiness. Ethics then is about maximizing the happiness (or minimizing the pain) of as many stakeholders as possible. Consequentialism thus predominantly prioritizes the interests of the collective, the common interest, over the interests of the individual.

The second answer comes in the form of a meta-rule that states: Choose the practical alternative that meets a fundamental moral principle. Such principles are supposed to capture the fundamental rules of, and preconditions for, successful social interaction. Often such rules come in the form of prohibitions, duties, and rights. There are things you simply do not do because they are intrinsically wrong, regardless of the consequences. Even if one had the chance to ultimately save thousands, we do not test new drugs on prisoners. As this example illustrates, moral principles tend to defend the rights of the individual against the interests of the collective. Therefore moral principles under normal circumstances outweigh consequences (although that is a rule of thumb that allows for exceptions; see section “Normative Ethics”).

A third answer to the ethical question “what moral rules should we obey to solve our conflicts in a reasonable way” is geared toward a special, but crucial, subcategory of ethical issues: how to fairly or equitably distribute a scarce good, that is, a good for which demand exceeds supply. This is the problem of distributive justice. Which distribution is just depends on the criterion we think is adequate: equality, merit, need, and chance are the most common candidates for such criteria. I will return to these in section “Normative Ethics.”

In brief, rule ethics includes discussions about consequences, principles, and (distributive) justice. In modern, pluralist and liberal, societies, public debate is conducted mainly on the basis of such rule-ethical arguments. The reason for this is that rule ethics is considered sufficiently objective and impartial to allow for a reasonable consensus in many cases. Initially, however, ethics was much broader. For the ancient Greeks, for example, ethics in the first place revolved around the questions: how to live a good (admirable) life? How to be a good person? Religions and ancient wisdom teachings too mostly revolve around this question: how to be a good Christian, Jew, Muslim, Hindu, Buddhist, Taoist, or Confucian? Since the religious wars of the sixteenth century that marked the beginning of “modernity” in the West, religion, together with this “good life ethics,” has been progressively banished from the public sphere. In modern, liberal and pluralist, societies, public ethics tends to focus on a thin ethics of “traffic rules” that allow peaceful coexistence of parties that would otherwise be fighting. What constitutes a good life is delegated to the individual’s personal discretion and thus to the private sphere (Swierstra 2002). In a democracy, each citizen should be free to pursue her or his conception of the good life – of course on the condition that she/he does not harm others in doing so (Mill 1989). However, this public-private split is less sharp than often suggested. For instance, discussions about politicians, euthanasia, or – less dramatic – the influence of computer games on the character of the players usually contain more or less explicit references to ideas about what constitutes a good life or a good person (Sandel 2010).

Normative Ethics, Meta-ethics, and Descriptive Ethics

Finally, one last conceptual distinction is relevant for the ethics of new and emerging science and technology: meta-ethics versus normative ethics. Normative ethics analyzes ethical problems and attempts to provide a reasoned solution. Cloning, is that allowed? Is human enhancement a good idea? Should genetic tests be promoted, prohibited, or released? Should violent computer games be outlawed?

Meta-ethics refers to the philosophical reflection on the methods and foundations of normative ethics. It asks, for instance, whether ethical debates can be rational, and if so, in what sense of “rational.” Or it examines whether all ethical arguments can ultimately be traced back to consequences, or rather to principles. Meta-ethics is primarily, but not exclusively, fodder for philosophers. Anyone who in a discussion groans that “values are ultimately merely subjective, aren’t they, so let’s talk about something else” engages in meta-ethics. We will see in the section “Meta-Ethics” that some meta-ethical arguments play an important role in NEST-ethical discussions.

Descriptive ethics, finally, is geared toward describing existing moralities or – as in the following sections – existing ways to discuss ethical questions. Descriptive ethics is thus a form of sociology or ethnography. Whereas the findings of normative ethics are to be evaluated in terms of “right” and “wrong,” the empirical claims of descriptive ethics ask for an evaluation in terms of “true” or “false.” So, the ideas and argumentative patterns explicated in the following sections are a form of descriptive ethics. This means that you as a reader are invited to check my inventory of recurring arguments and argumentative patterns not in terms of whether you agree with them but in terms of whether the inventory is realistic and complete.

Visions of Technological Development

In the following sections, I introduce the reader to recurring arguments and argumentative patterns in a NEST-ethical debate. I reconstruct this debate as one between technology optimists and pessimists. But I want to stress that nowadays undiluted technology optimism and pessimism have become equally rare. Even the staunchest optimist no longer denies that technological development has created major problems (even if she/he holds that the solution to these problems is more and better technology). And even the blackest pessimist no longer denies that we inhabit an irreversibly technological world (even if she/he holds that we need more social and less technological solutions). Neither blindly embracing nor blindly rejecting technology (with a capital T) is a realistic option nowadays.

The first issue usually addressed in NEST-ethical discussions is not (meta- or normative) ethical but factual: can technology development be influenced? (Bijker et al. 1987 (2012); Smith and Marx 1994) If we lack sufficient grip on scientific and technological developments, an ethical assessment makes no sense. Some (even partial) control over technology is a precondition for any NEST-ethics.

To the question whether technology can be steered, roughly two negative answers exist. The first answer is descriptive: you cannot steer, even if you try. The second negative answer is normative: you should not steer, because the costs of doing so outweigh the possible benefits. The affirmative response similarly has a descriptive and a normative form. On the one hand, historical and sociological evidence suggests that social actors do guide the course of technology development (descriptive). On the other hand, one could argue that this societal influence is not sufficiently subjected to adequate (usually understood as: democratic) control (normative).

Determinism: Descriptive

The position that technological development cannot be guided is known as “technological determinism.” This determinism is justified in at least three different ways. First, technology development is presented as autonomous, necessary, and solely determined by the laws of nature. Technological development is like a train that has no option but to follow the rails (even if the direction of those rails can only be established retrospectively). Steering is not an option; all we can do is ride along. Although this variant of technological determinism has long been dominant, it seems to have lost much of its force. This may have to do with the fact that technology development is becoming increasingly expensive. As a result it becomes more difficult to ignore that technological developments depend on societal factors like corporate and/or political funding.

Over the last decades a second justification for technological determinism has gained ground. (International, global) competition has taken over the role of the unstoppable force driving the development of technology. If we do not develop this technology, then our competitors will. In an open market all we can do is play along and try to compete successfully.

So-called “technological path dependencies” provide a third argument why influencing the course of technological development is futile. A technology is not like an intangible idea, which can be simply refuted and replaced. On the contrary, technology possesses a material robustness and is embedded in a techno-social network. As a consequence, a technology cannot be easily revised or overturned. Even if we now think electric cars are a better idea than gasoline cars, we have erected a physical infrastructure around the petrol car consisting of engines, petrol stations, refineries, and providers. Furthermore, we have become attached to the sound of the combustion engine and to the way it behaves. These factors make it hard to switch tracks. The technological choices of the past limit our technological options for the future. We are forced to work in the old mold.

Determinism: Normative

Whereas the preceding arguments were predominantly descriptive, the next ones are mainly normative: even if we could control technological development, we should not attempt to because such control can only be achieved at the expense of other fundamental values.

The general version of this normative justification of technological determinism is that scientific and technical development are inseparable from social progress. Whoever attempts to interrupt or steer technological development automatically jeopardizes social progress.

A more specific, economic version of this argument holds that in a free market economy, technology cannot be regulated. Manufacturers and consumers should be free to supply and purchase what they choose to – at least as long as they do not harm others. State interference can only result in market inefficiencies and other woes.

In another version of this argument, academic freedom plays the crucial role. Only the academic community can and must decide which scientific and technological research is worthwhile. Society has given universities a mandate to investigate freely, regardless of political, religious, economic, and ideological interference. Paradoxically, it is precisely because of this unfettered freedom that universities can serve society. Society should not steer scientific and technological progress; it only decides whether scientific and technological findings will be applied and how. The reason society is allowed this latter type of decision is that such decisions are not based on facts (what is the case) but on values (what should be the case), and here scientific and technological experts have no say.

Voluntarism: Descriptive and Normative

Opposing this (descriptive and normative) determinism is what can be referred to as “voluntarism.” Drawing on historical and sociological research, voluntarists claim that social factors constantly influence technological development. Scientific and technological research is done by people, and so by definition people exercise influence (Collins and Pinch 1998; Sismondo 2004). The question is not one of determinism but of politics: who is pulling the strings, and who should do so ideally? (Winner 1980; Morozov 2014)

In public discussions on NEST, a certain distrust is often detectable. Behind the joyful expectations around a NEST, some suspect the “spin” of interest groups that try to “sell” their technologies. How can we be sure that it is not large industries, such as the pharmaceutical industry, the food industry, or the oil companies, that decide what is researched or produced, hiding their influence behind the veil of technological determinism?

Even many who believe that business and government are not doing so badly agree that there should be greater democratic control over the course of technological development. This plea usually translates first into the requirement of transparency: it should at least be clear who decides what and on what grounds. Why do we invest in this and not rather in that? A minimal democratic standard is that it must be possible to hold agents accountable for their choices. This is a powerful incentive for technology developers to make sure that their products and production are safe, healthy, and sustainable. A more radical democratic requirement is that citizens should be in a position to think and talk in advance about the technology that will eventually help shape their lives. This requirement would shift the discussion from “how to avoid harmful technologies” to the more positive, aspirational, question “among all the possible technologies that we can spend money and energy on, which ones do we hold to be the most worthwhile?” (Von Schomberg 2013; Stilgoe et al. 2013)

The question whether a particular NEST is desirable or not usually only gets posed after the technology has been introduced (in)to society. First, we find ourselves invited to marvel at the cloned sheep Dolly, smart energy meters, and Google Glass, and only afterwards is there room for the ethical questions. Of course, we do not want to waste time worrying about technological fantasies that may never materialize. But retrospective assessment has a major drawback: often so much has been invested in the NEST, so many stakeholders have rallied around it, and it has become so intertwined with other technologies that the genie cannot be put back into the bottle. “Retrospective ethics” quickly degenerates into commenting on a fait accompli. For “prospective ethics” the problem is the reverse. “Upstream” much is still unclear. How feasible are the technological expectations? And when thinking about the social consequences of a NEST, how can we be sure that society will not have changed drastically by then anyway? (Martin 2010) Prospective ethics is unavoidably speculative, and the chance to reach agreement under such conditions seems nil.

This problem is known as Collingridge’s knowledge-control dilemma, formulated in 1980 (Collingridge 1980). When there is still something to steer, we lack the necessary knowledge to do so; by the time we have that knowledge, the technology has already “solidified” and become socially embedded. Advocates of a NEST can mobilize this dilemma to avoid ethical debate: “It is now too early for such a debate. First wait until further research has distinguished fact from fiction.” Only to point out later, when the technology has materialized and we are finally able to distinguish fact from fiction, that “now unfortunately it is no longer realistic to try to turn things back.” This rhetoric works because it refers to a real dilemma.

Skeptics, however, have ways to respond. Firstly, they argue that ethics is not “added” to scientific and technological research, as if these were value free. Most research is motivated by the desire, hope, and expectation that it will help achieve wonderful things: less disease, less hunger, more wealth, and more justice. (Such positive expectations also serve to mobilize financial and/or political support for the research.) In this sense, research is guided by ethical considerations from the outset. And if early expectations regarding NEST are speculative, this applies as much to hopes as it does to fears.

Similarly, skeptics object to the argument that when a technology has been developed, it is too late for ethics. In the first place, one can never pinpoint a precise moment when a technology is indeed “finished.” Artifacts are intermediate stages in a continuous development spanning several generations. For example, it is impossible to say at what stage your phone is “ready.” Secondly, even at a late stage, an artifact can be improved on the basis of ethical concerns. Not only are, for example, modern cars more sustainable than previous generations, but moral concerns were also a major incentive behind research geared at producing stem cells without having to harvest them from embryos.

Meta-ethics

In NEST-ethical debates two meta-ethical issues play a central role. The first concerns the likelihood that the NEST will eventually confront us with problems that require ethical deliberation. Optimists assess that chance as negligible; their opponents are more pessimistic. The second meta-ethical issue is whether morality must adapt to technology or vice versa. Here we find fundamentalists and relativists opposing one another.

Trust in Technology

NEST-ethical discussions are superfluous if scientific and technological progress always results in social progress. This optimistic stance is based on the so-called linear model of technology development. According to this model there runs a straight line from basic research through applied research, product development and dissemination in society, to social progress.

Technology optimists present themselves self-consciously as prophets of a new age and downplay social and ecological problems: these problems will be solved by technological progress itself. Pessimists by contrast point out that technology always causes unintended and unforeseen problems and that we should not proceed as long as we do not yet know how big those problems will be and whether we indeed have solutions for them. The so-called precautionary principle dictates that the burden of proof lies with the optimists who think it is safe to proceed with a certain technological development.

Optimism and pessimism are based on conflicting images of technology. Optimists tend to see technology as a neutral instrument. Technology simply provides us with new possibilities; it is up to us to make good and wise use of them. If someone kills another person using a hammer, one does not blame the hammer or its designer, only its user. Or, as the motto of the National Rifle Association has it, “guns don’t kill people; people kill people.” Pessimists by contrast tend to stress that technology is neither passive nor neutral (Ihde 1993). It is not neutral because it incorporates specific values, such as the desire to maximize efficiency or the desire to control (natural and social) reality. Technology is not passive either. In Europe we do not hand out guns, as we think that guns in a certain sense make killers. For the pessimist, technology is therefore never to be trusted.

It is important, however, to stress that the mere fact that technologies are active and value laden is in itself not sufficient reason to become a pessimist: we can also try to build moral values into the technology – e.g., sustainability or compassion – or use the technology to “nudge” people to do the right thing, for example, using speed bumps to motivate drivers to slow down when passing a school (Akrich 1992; Latour 1992; Thaler and Sunstein 2008). Similarly, instrumentalism does not necessarily imply optimism. If one has a bleak view of humankind, one can be very pessimistic about how technologies will be used.

The Interaction Between Technology and Morality

When a new technology is introduced, commonly its revolutionary character is stressed. In part this reflects the pride and enthusiasm of the scientists and technology developers involved, but the hype also serves to generate attention and to mobilize financial, political, and policy support (Borup et al. 2006). When in 2000 the human genome was presented, British Prime Minister Blair spoke of “a revolution in medical science, which will prove to be much more important than the discovery of antibiotics in the last century.” His US colleague President Clinton drew a comparison with the discoveries of Galileo and the splitting of the atom: “Today we are learning the language in which God created life.”

But interestingly enough, although revolution is stressed with regard to science and technology, the public is simultaneously assured that with regard to our morality, it will be business as usual. The revolutionary technology, so it is said, will only help to realize our existing, unproblematic goals. All that changes is that we can do the things we want to do more effectively. Modernity is exceptional in its embrace of scientific and technological change, whereas in more traditional cultures new knowledge or new technology is often seen as threatening. But with these traditional cultures, most moderns still share a deep-rooted conservatism regarding morality. Only a few people are willing to face the idea that not only our facts and our artifacts but also our values are “provisional” and subject to change.

The question is whether moral and technological change can really be separated that easily (Jasanoff 2004). Is it not rather logical to assume, skeptics will not fail to emphasize, that revolutionary technologies will destabilize our moral routines too? (Swierstra et al. 2010) Four arguments characterize this part of the NEST-ethical discussion.

The Argument of Precedence

Technology advocates can try to ease the fear that the NEST is at odds with the accepted morality by downplaying its “novelty.” They can point to non-controversial precursors of the controversial technique in history or in nature, thus demonstrating that morally speaking there is nothing new under the sun. Is genetic modification something new and scary? Of course not; we have been doing that since the beginning of agriculture and animal husbandry, only slower and less efficiently. Is cloning a revolution? No, nature does it herself; we call those clones “twins.” Is nanotechnology a miracle? No, beer brewing is also based on manipulation of nature at the nanolevel. Does human enhancement constitute a revolution? Of course not, it is essentially the same as sending your child to school.

Obviously not everybody will be convinced by this “argument of precedence.” Opponents will identify morally relevant differences between the claimed precedent and the NEST under discussion. To them, there are substantial differences between twins and clones (twins are the same age, clones are not) or between animal breeding and genetic manipulation (one relies on chance and proceeds slowly; the other is goal-oriented and fast), differences that mean the NEST is not automatically as acceptable as its claimed “precedent.”

The Slippery Slope Argument

A special way to debunk the argument of precedence is by warning that the new technology will put us on a “slippery slope” (Van der Burg 1991). In this argument, the conflict between the proposed technology and the existing morality gets relocated to the future. Expect this argument to be used in cases where the new technology at first sight seems rather innocuous or beneficial. For example, is it not great that with a simple genetic test we can prevent the birth of a severely disabled child who would live in constant pain? But, so opponents will object, where lies the limit? Does this technology ultimately not lead to a Brave New World with no place for “differently abled” fellow citizens?

Or suppose we develop a gene therapy to make criminals less aggressive. That seems noble, but will we not degenerate into a society where those in power will manipulate our minds?

Of course, technology optimists are not convinced that such a slippery slope even exists. For them, the slippery slope argument is a form of determinism that disregards that at every step of the way, we can always decide to stop and even retrace our steps.

The Habituation Argument Versus the Argument of Moral Decline

Creating a reassuring continuity with the past or with nature is not always a promising argumentation strategy. And some technology optimists refuse to bow to what they see as moral conservatism. They will openly admit that the NEST is at odds with current morality. To then assertively add: all the better! Since the invention of fire and the wheel, technological innovations have always met with initial resistance and moral panics. But this resistance always fades after a while. When the first train huffed and puffed between Amsterdam and Haarlem, academics warned that such inhuman speed would cause women to miscarry and would spoil the milk in the udders of the shocked cows. And the first test tube baby was considered either a monster or a miracle, but it is now hard to find a school class without an IVF child. In brief, people get used to the new technology; given some time, morality slavishly adapts to the new technical reality (Haldane 1924).

Opponents of this view cannot of course deny that morality has in the past often coevolved with technology and that most people indeed tend to acquiesce in this. But this fact, of course, does not justify the normative conclusion that those moral adaptations should be welcomed. Maybe one day we will produce people on an assembly line – as described in Aldous Huxley’s Brave New World (1932) – and maybe we will even come to accept this as normal. But that does not mean that such a technology would therefore also be good, any more than the fact that the majority of Germans in Nazi Germany were anti-Semitic would make anti-Semitism morally okay! It just means that large groups of people can go morally astray. Just as there is moral progress, there is also moral decline. These voices in the debate oppose the moral relativism they perceive behind the habituation argument.

Normative Ethics

The arguments explored in the previous section circumvent the direct normative question whether a certain NEST is morally desirable or not. In the following section I describe patterns in normative ethical discussions about NEST that deal with this question. I distinguish the arguments depending on whether they relate to consequences, principles, justice, or the good life (see section “General Definitions”). Obviously, arguments of technology proponents call forth matching counterarguments by skeptics and vice versa.

Consequences

Technology is not always designed with a specific purpose in mind, such as to provide for a specific social need (demand pull). It is also common for researchers to give their curiosity free rein, to be especially interested in a proof of principle, or to stumble almost by accident on something for which the marketing department subsequently finds a purpose – if necessary by creating new needs (technology push). Even if there was an intended purpose from the outset, that is often not the (sole) purpose that is finally realized. The first steam engine was designed to pump water out of mines, and it took a while before someone had the bright idea to put wheels under the device. As soon as a technology exists, engineers and users start to invent new applications that often overshadow the original intentions behind the technology.

That being said, it is still true that a NEST is usually presented to the outside world as an instrument to realize specific (desirable) consequences. As mentioned earlier, hardcore technology optimism is no longer as strong as it once was. Not only is it now clear to everyone that technology also has drawbacks – such as for the environment – but scientific research and technological development have also become increasingly expensive and large scale. Technological research now easily spans several nations, and costs quickly run into the millions if not billions. As a result, society – or a corporation – is no longer willing to write a blank check for scientists and technologists. They have to mobilize financial, political, and public support for their projects by justifying these projects in terms of their utility. To do this they typically apply consequentialist ethical arguments, in the form of expectations, hopes, and promises: if you invest now, tomorrow you will reap the benefits (a cure for cancer, a solution to hunger, peace through better communication, etc.). These promises and expectations are rarely recognized as ethical arguments, but of course they are.

Opponents of a NEST also apply such consequentialist arguments, only now these take the form of doubts or fears rather than hopes. Whether the arguments are in favor of a NEST or against it, they can be challenged in four ways:

(a) Consequentialist arguments take the form of (positive or negative) expectations about the future. Their speculative character means that their plausibility can always be questioned. And this becomes easier the further the promised future lies ahead and the more factors play a role. If the intended effects of a particular NEST are assessed to be improbable, the argument for investing in it weakens accordingly. (Or vice versa: if the projected risks are found to be small, there is less reason for precaution.)

(b) Even if expectations are deemed plausible, this does not end the discussion. After all, maybe we possess a superior alternative to the proposed NEST. For example, in the debate about nuclear energy, some argue for wind and solar energy, even if they accept the technical feasibility of nuclear energy. Or some argue that we do not need genetically modified rice to alleviate hunger in the Third World, if only we would distribute wealth more fairly or help to establish democratic forms of governance.

(c) Consequentialist arguments are also contested by pointing out unintended and undesirable side effects (Tenner 1996). Through trial and error we have learned that technologies always do more than what they were developed for. Technology Assessment (TA) was created in the seventies to explore such side effects of NEST in advance. Adverse environmental impacts are the best-known example of such unintended and undesirable consequences, but one can also think of the students who suddenly discovered Ritalin as a means to increase their concentration during examinations. If the intended effects are outweighed by the unintended and undesirable side effects, this can tip the scales against the NEST in question.

(d) Finally, a consequentialist argument can be criticized on the grounds that the hoped-for result is actually not as desirable as it is presented to be. For example, so-called transhumanists are enthusiastic advocates of various techniques that promise to physically and mentally “enhance” people. But what exactly do we mean by “enhanced”? Is it really progress if we grow taller, if we do not age, or if pills help us to concentrate longer so that we can work longer days? (Sharon 2014)

I now turn to argumentation patterns that revolve around (the interpretation and application of) ethical principles.

Rights, Duties, and Responsibilities

The principle-ethical (or “deontological”) part of morality exists in the form of (moral, not necessarily legal!) prohibitions, rights, obligations, and responsibilities. Rights and prohibitions/duties/responsibilities correspond with each other: if X has a certain right, then Y has a corresponding prohibition/duty/responsibility to ensure that that right is respected. It is useful to distinguish positive rights (claim rights) from negative rights (freedoms). In the first case the other party has to do something; in the second case they must abstain from doing something.

An important principle-ethical argument in favor of a NEST is that it will help to fulfill a duty that is based on important positive rights, such as the right to good health care or to information or to a life without hunger or terrorism. In this case we have the moral obligation, or responsibility, to develop this (medical) technology. In NEST-ethical discussions, negative rights are also frequently mobilized: if I want to develop a particular technology and I do not harm anyone in doing so, others may not prevent this. This is the much-mobilized moral principle of free choice, for instance, “I am not forcing you to use preimplantation genetic diagnosis, so you should not deny me my right to use it.”

Moral principles carry much weight, and they cannot – as consequentialist arguments can – be undercut by questioning their plausibility. This is because principles apply regardless of the consequences. This does not mean they cannot be disputed. The weakness of general principles is that there often exists a large gap between the principle and the concrete problems to which it is applied. Principles always need to be interpreted and applied wisely. This gap leaves room for doubt, which typically takes four forms:

(a) The principle mobilized by advocates of a NEST can be outweighed by another, conflicting moral principle. For example, electronic patient records may indeed lead to better care and thus help reduce human suffering (which is a moral duty), but does that gain indeed outweigh the increased risk of privacy violations (privacy is a moral right)? Or does the right to “own” one’s body materials (like body tissue containing genetic information) carry as much weight as the right of patients who can be helped with this material?

(b) Opponents can also argue that the principle invoked by the proponents is indeed important but does not apply in the case of this particular technology. For example, “infertile couples indeed have a right to enhance their children, e.g., by sending them to school, but this right does not extend to germ line intervention.”

(c) An opponent can object that although proponents indeed appeal to a crucial moral principle and although that principle does indeed apply to the NEST in question, it still does not justify it. For instance, advocates of human enhancement justify this technology by appealing to individual freedom: if people want to make use of these techniques, they have the moral right to do so. Opponents, however, object that human enhancement is actually incompatible with individual freedom, because in a competitive society everyone will eventually be forced to enhance herself/himself.

(d) A last way to cast doubt on a deontological justification (or refutation) of a NEST is by appealing to consequentialist ethics. There are (rare) cases when the damage to the collective is so huge that it can be justified to restrict the moral rights of the individual. In open societies this is not done lightly, but even these have laws that suspend civil liberties in times of national danger. Discussions about NEST rarely touch on national danger, but there are recurring references to the so-called tragedy of the commons. In specific situations respecting individual rights causes collective disaster. Many people think, for example, that it is unwise to grant prospective parents the moral right to determine the sex of their children, as that would (does) lead to a huge imbalance in the sex ratio, with devastating consequences for society as a whole. What is smart for one can be dumb for all. In such a situation consequentialism can indeed sometimes trump deontology.

Justice

A third part of NEST-ethical discussions concerns distributive justice. This issue presupposes some form of scarcity, as only then does the ethical problem of distribution manifest itself. In the case of NEST, scarcity occurs at two stages. In the development stage it has to be decided on which scientific research and technological development to spend scarce resources like money, time, and energy. After a technology has been introduced to society, new questions arise regarding the distribution of its benefits and costs. The answer to the first question depends partly on a satisfactory answer to the second question. Few think it is ethically okay to spend scarce resources if the technology will eventually only benefit a privileged group while allocating the costs to the poor.

In the case of a new technology, benefits and costs should be shared justly. But what is “just”? In practice, four criteria vie with each other. Scarce goods are distributed on the basis of equality, merit, need, or chance. In some situations we prefer one criterion, in other situations another. For example, education is mostly distributed on the basis of equality; piece rates are paid based on performance; health care is given on the basis of need; and if someone wins a lottery, we simply congratulate her or him.

In NEST-ethical discussions we see mostly the first three criteria at work. In the general rhetoric, a technology is usually supposed to benefit everyone more or less equally. For example, for many it would be unfair if the rich could enhance themselves and the poor could not. A recurring motif in NEST-ethical discussions is therefore how to avoid a gap between technological haves and have-nots. The merit criterion is particularly prominent in discussions about intellectual property rights: it is only fair that those who invested in, and took risks for, developing a NEST also reap the (first, financial) benefits. The need criterion is evident in arguments like: it is perverse to use biotechnology to help obese Westerners rather than the millions of poor who suffer from malaria.

Even if discussants agree that technology should benefit everyone more or less equally, there is room for (political) disagreement about how to achieve this desired outcome. In the discussion regarding technological haves and have-nots, we encounter two opposing views. The first view defends the “trickle-down effect”: it is inevitable that at first only the wealthy benefit from a new technology, but thanks to this technological avant-garde, eventually prices will drop so that ultimately everyone profits. The wealthy elite paves the way for the poor masses. This optimism is challenged by skeptics who argue that in many cases the gap between the elite and the masses does not get smaller at all. Even if the poor do profit, the rich profit much more. The position of the poor may improve in absolute terms, but in relative terms they are now worse off. The only solution, in the eyes of these critics, is for the government to intervene on behalf of the poor and powerless, for example, by forcing the pharmaceutical industry to give priority to drugs aimed at diseases common in poor countries.

The Good Life

Rule-ethical considerations are widely accepted as legitimate contributions to public opinion (which does not mean that everyone will agree with all such considerations). Good life considerations are more controversial. The discussion rules of a liberal, pluralist society dictate that there is little public patience with such considerations. Received opinion holds that everyone should be free to live their lives as they see fit, as long as they do not get in other people’s way. The question of the good life, including the sometimes religiously motivated answers, usually gets dismissed as a private concern. The assumption is that the question of what it means for a human being to live a good life does not allow for rational, objective, and impartial debate. This impression is strengthened by the fact that good life ethics typically takes the form of stories: myths, stories from a Holy Book, fairy tales, fables, novels, films, urban legends, etc. This kind of “narrative argument” is often considered to be less compelling than arguments that point at consequences, principles, or justice.

But the proclaimed privatization of good life ethics is more theory than practice. In NEST-ethical discussions we constantly encounter visions of the good life, based on deep-rooted beliefs about what it means to be human and about our place in the cosmos. Robots are already helping to ease the burden of caring for our loved ones, but to what extent do we really want to be “freed” from that burden? Is care not an integral element of the good life? Does playing violent computer games make us more prone to violence and less empathic – less good persons? Does the pervasive availability of Internet porn change our experience of sex and intimacy for the better or for the worse? In relation to NEST two more abstract good life ethical issues are particularly relevant. The first issue is how we should deal with boundaries. The second issue is to what extent people should use science and technology to exert control over reality.

Boundaries

Boundaries basically allow for two types of reaction: either you respect them or you try to transgress them.

Advocates of a NEST typically defend the second option. Their patron saint is Prometheus, the Titan who ignored the express prohibition by Zeus and gave fire (technology) to humankind. Here, boundaries get dismissed as frontiers: temporary limitations that somehow demand to be overstepped. Nothing in the world is as it has to be; in principle, everything is a candidate for change and improvement. Technology changes reality into an object of our choice. It is up to us moderns to decide about the shape of the world. History is filled with examples of heroes who pushed the boundaries of human knowledge and skill because they were untroubled by taboos or apparent impossibilities. In the words of the American philosopher Ronald Dworkin (2000): “Playing God is indeed playing with fire. But that is what we mortals have done since Prometheus, the patron saint of dangerous discoveries. We play with fire and take the consequences, because the alternative is cowardice in the face of the unknown.” As is evident from this quote, this is a fairly masculine discourse, in which doubters get easily dismissed as “sissies.”

But boundaries and limits also have defenders:

(a) Some stress the importance of religiously sanctioned boundaries: “It is not good for us to play God.” We all know what happened when Adam and Eve ate the apple and when their descendants built the Tower of Babel. For instance, in the debate on genetic modification some appeal to the idea of a God-willed creation possessing an intrinsic order that demands our respect. Although this religious argument is familiar, in its pure form it is actually rare in Western democracies. One reason for this is that the argument carries little weight in a secular society. In addition, it is essentially an argument from authority, and such arguments have little traction in a democratic society. Finally, it is not so clear what God actually wants from us. Some even argue that God, as He created us in His image, wants us to be creators like Him. For these reasons, the argument is usually accompanied by an auxiliary explanation of why God has good reasons to set these boundaries, such as the inviolability of all life (opposing genetic selection) or the idea that death is exactly what gives meaning to our lives (opposing research on immortality).

(b) Closely related to the previous argument is the appeal to nature. The claim is that nature has its own intrinsic value and order, which commands our respect. Many people want a “natural” life, want to get back to “nature,” and so on. It is not a coincidence that in these discussions there are constant references to Frankenstein’s monster, an “unnatural” mixture of living and dead material, an impossible combination of creature and artifact. Like the appeal to God’s will, the appeal to nature is controversial. As early as the eighteenth century, the Scottish philosopher David Hume protested against deriving an “ought” from an “is” – the mistake later dubbed the “naturalistic fallacy.” That something is the case (factual) can never serve as an argument that it should be the case (normative). Nature is often terrible, and few are willing to stand naked and refrain from unnatural products like clothes, penicillin, and computers. Some go so far as to argue that humans are “naturally” unnatural. Even the idea that there is such a thing as a natural order is increasingly under attack. According to modern biology, nature is subject to permanent change and chance and is the result of mindless tinkering rather than of a master plan. Nature herself appears to be the first to transgress natural boundaries.

Interestingly, the debate is not settled with these objections. It proves to be surprisingly hard to genuinely bid farewell to the ideal of “naturalness.” Think about how food is packaged and marketed, about how patients may refuse to have “chemicals in their body,” but also about such a futuristic technology as tissue engineering, which is “sold” in terms of “using the natural capacities of the body to heal itself.” It is as yet unclear what basic (moral) intuitions may resonate in this persistent appeal to nature. It is possible that the desire to respect natural limits serves as an important counterweight to the tendency to approach nature in purely instrumental and objectifying terms. I will return to this motif below.

(c) Less controversial is the argument that points to our cognitive limitations. There are boundaries to what we can know. Not only do we not know everything we would like to know – for example, how big certain risks exactly are – but we do not even know what we do not know: technologies always surprise us by suddenly failing us, as Icarus found out when he flew too close to the sun, or by having unexpected effects. Although science and technology promise mastery and control, they can also unleash forces we are unable to control. This fear underlies the ancient myth of Pandora’s box – opened out of curiosity, releasing all the plagues of humanity – and the more recent fable of the sorcerer’s apprentice, who called forth forces he then did not know how to control.

This admission of our cognitive limits is reflected in policy in the form of the previously mentioned precautionary principle, formulated in the 1970s by the German philosopher Hans Jonas (1973). This principle states that we should refrain from developing new technologies with unknown hazards for humans and the environment. Or more precisely, in such cases it is up to the advocates of the technology to make it plausible that these dangers do not exist or are manageable.

    Again, many disagree. NEST-advocates emphasize that concerns about possible hazards are largely based on emotion and speculation. Instead, they insist on the need for “sound science”: worries about new technology deserve attention only when they are based on irrefutable scientific evidence. The debate on global climate change offers ample examples of both positions.

    Fifty years ago, the German-American philosopher Günther Anders (1956) added a new dimension to this awareness of our cognitive limitations: not only are we not in a position to imagine what our technologies will do, but it is equally uncertain whether our moral imagination can keep up with what we can do. In earlier times, people could commonly imagine more than they could actually do. According to Anders, in modern times the reverse is true: we are now able to do – or destroy – more than we are able to imagine. We cannot really imagine the suffering inflicted by the A-bombs dropped on Hiroshima and Nagasaki. One or two deaths can be conceived, but not a hundred thousand. And exactly because our moral imagination fails, it becomes easier to drop the bomb. Similar concerns can be found in contemporary debates about, for example, research on aging: does our moral imagination suffice to realistically imagine a world without death?

  4. (d)

    Finally, there are boundaries placed on us by tradition. Precisely because of its dynamic nature, technology is at odds with inherited ways of doing, seeing, and appreciating. The past is littered with traditional communities that have disintegrated as a result of the advent of science, technology, and modernity. According to the founding father of conservatism, the eighteenth-century politician and philosopher Edmund Burke, tradition forms a reservoir of practical wisdom accumulated by humankind through much trial and error. The “disenchanted” modern world of science and technology, however, has no patience for such traditional life- and worldviews. Therefore, so say the skeptics, science and technology pave the way for a cold and inhospitable materialism, selfishness, and nihilism.

Control

It was the mission of the Enlightenment to subjugate nature. As moderns, we are all to a greater or lesser degree shaped by that ambition. Technology promises to emancipate us from our dependence on the mercy of the world around us by subjecting and controlling it. That is technology’s essential promise. And who would opt for a return to a pre-technological world? In that world our lives would indeed be miserable, cold, fragile, and short.

But our appreciation of “technology” in general does not, of course, imply that we should be happy with every technology or that we are not allowed to think that certain forms of technological mastery go too far. In fact, our attitude toward technological mastery is ambivalent. As we saw above, some people want to subject technological progress to religious, natural, cognitive, or moral boundaries. But in the NEST-ethical debate, there are also positions that question the desire to control itself. This happens in three ways: the desire to control is criticized because it is self-refuting and produces perverse effects, because it is futile, and because it jeopardizes other important values. (I take these three arguments from Hirschman (1991).)

A first motif in NEST-ethical discussions about technological control and mastery is that it is self-refuting. In a perverse turnaround, the technology that promises to free us ends up enslaving us. Just as the master could never trust his slave, so moderns cannot trust their technologies. Do we control the machine, or does the machine control us? How many hours a day do we spend obeying the dictates of technology – slaving at the conveyor belt, answering our email, and so on? Technology promised us control over the world, so the critics say, but has now apparently turned into something uncontrollable itself. Even technology enthusiasts agree with this (an opinion we discussed above under the heading of technological determinism) (Ellul and Merton 1964).

A second motif is that technological control will ultimately prove to be futile. Technology promises to satisfy our needs and so to take away our discomfort, our frustration, our feeling of lack, and our suffering. But, such is the gist of this criticism, this endeavor underestimates the adaptive nature of human needs. Thanks to technology we live more comfortably than kings did in the Middle Ages. But, so the critics ask rhetorically, are we really happier today, with all this technology, than people were before? Do our smartphones really make us feel more fulfilled than our parents were? Have all these time-saving devices really resulted in more leisure time for all? Are our needs today better satisfied than before, or have they coevolved with technological development, so that on balance nothing has been won and technological progress has been futile?

Thirdly, the desire to control can jeopardize other important values. According to the critics, technology has a tendency to define all problems in technical terms, which then of course require a technical solution. If you go around with a hammer in your hand, inevitably you will end up looking for nails. But some problems are not technical, but, for example, social or cultural. Can we really solve environmental problems by applying smarter technologies alone, or do we not – at least also – need a change in mentality and behavior? This narrowing of the focus is referred to as a “technical fix.” (To which technology enthusiasts of course counter that their opponents suffer from a social fix, expecting too much from frail human consciences.)

Technological control marginalizes other, social, ways to approach problems and thus jeopardizes important values incorporated in those alternative solutions, e.g., solidarity, cooperation, democracy, etc. But there may be a deeper sense in which technological control jeopardizes core values. When we control nature, we simply take what we want. From this perspective, the technological drive to control is little more than a form of violation. The pollution and depletion of Mother Earth are but the foreseeable outcome of this attitude.

This control attitude does injustice not only to nature, the victim, but also to ourselves, the violators. The American philosopher Albert Borgmann (1984) points to the impoverishment of our existence as a result of technology, which offers us everything on a silver platter. Modern consumers are barely aware of the fact that their house is warm, that there is food on their table, and that they can move from A to B. Because they delegate all the work to technology, these feats have become devoid of existential meaning. Technology reduces us to passive consumers of existence, rather than people who live life to the fullest – including the imperfections, the suffering, and all the hard work inherent in real life.

Whoever subjects his environment, or other beings, no longer has a relationship with that environment, or with those other beings, in the true sense of the word. For a genuine relationship requires reciprocity: the other person must possess some form of robustness, thanks to which she or he can appear as a real “other” rather than as a passive extension of ourselves. Much as we sometimes wish that our partner would meet our desires more perfectly, very few people would opt for a custom-made love-bot. Too much control leads to boredom and a feeling of loneliness. Most people want to “receive” children, not “order” them. They want to love the child with all his or her “given” quirks, instead of designing an ideal child (Sandel 2009). We may have a deep-seated desire to be in control, but we have an equally deep-seated need for reality to resist our touch. This existential need underlies our ill-articulated fear of a resistance-free world, which, though no longer carrying the risk of suffering, loss, humiliation, and defeat, is also unable to provide warmth, gratitude, satisfaction, depth, meaning, and surprise.

The most famous example of this abhorrence of perfect technological control over humans and the world is Brave New World (1932). In this novel, Aldous Huxley evokes a world where everyone is happy, cheerful, well fed, and healthy. And yet, or rather because of this, it is a deeply uncanny world. The novel expresses something that cannot easily be expressed in the prevailing rule-ethical language of pain, happiness, rights, and justice. Its narrative confronts us with the metaphysical loneliness of humans who ultimately meet only their own mirror image, as the world obediently bends to their wishes and desires and thus becomes ephemeral. Although we avoid suffering and misfortune as much as possible, we simultaneously recognize that both are inextricably linked to a meaningful and truly human existence. Technology’s promise of control thus both enables a truly human life and jeopardizes it.

To Conclude

In the previous sections, the reader was introduced to the arguments and counterarguments that dominate discussions on new and emerging science and technology. Why is this useful? To be clear, the goal is not to suggest that these discussions are superfluous because it is essentially always the same discussion. Not only do we have to judge over and over again how plausible hopes and fears are in particular cases, we also have to weigh rights against each other and evaluate which moral principles apply to concrete cases and how to apply them wisely. The same holds for the arguments pertaining to distributive justice and the good life: which criterion to apply in which situation, and to what degree to trust trickle-down effects, are deeply political questions that need to be debated over and over. And although there are marked differences in how people assess boundaries and technological control, most people are sufficiently ambivalent on these issues to keep reflecting on them.

So, the aim of this overview is not to write a computer program that will do the deliberation for us by ticking off boxes. The aim is rather the opposite. NEST-ethical debates can be enriched if participants know what considerations have proven relevant in previous discussions. NEST-ethics may not provide any answers, but it can help us ask the right questions.