It has been said that most people overestimate how much technological progress there will be in the short term and underestimate how much there will be in the long term. (Bostrom 2006, p. 47)

In 2008, Maastricht University in the Netherlands hosted an international conference on the theme of ‘Human–Robot Relationships’. The conference was convened to assess the possibility of humans and robots forming intimate, loving relationships in the future. This topic proved a source of considerable disagreement among the 30 participants. Not only were they divided on the question of how far technological advancements could take us, but many were also undecided as to whether there was any need for such personalised relationships with robots in the first place. ‘If intimate relationships with robots is the answer,’ one participant remarked in the Dutch newspaper NRC Handelsblad (issue of June 21, 2008), ‘then what on earth is it an answer to?’

Not yet a reality, such relationships are currently only found in novels and films. Consider, for example, the 2001 film A.I. (Artificial Intelligence) by Stanley Kubrick and Steven Spielberg. In this film a fictional world is depicted in which scientists have managed to create a boy robot called David, who is not only capable of learning and operating autonomously, but who can also experience love. An older example of this genre is the well-known series of robot narratives written by American author Isaac Asimov. A biochemist by training, Asimov wrote nine futuristic stories between 1950 and 1960 in which he envisaged ever-evolving robots that could come to serve as perfect substitutes for actual human beings.

As yet, David and the Asimov robots are confined to the realm of fiction, existing only as imaginary figures on screen or on paper. Scientists, however, do hope to turn fiction into reality, by continuing to develop sophisticated robotic machines that resemble humans in appearance and behaviour and by finding ways of making them interact with their surroundings in an intuitive manner. Such machines are designed to replicate human movement and to mimic and respond to human behaviour just as David and Asimov’s robots would. Still, they are not generally considered human: although they share many characteristics with humans, they remain machines at heart. Whoever attempts to cut open David’s body will not draw blood from his vessels, but will instead encounter computer chips and an interior riddled with cables and wires.

In our contribution to this volume, we will point to a number of developments emerging with regard to the construction of robots that look and behave more and more like human beings. At the same time, another development can be discerned which could be called the mirror image of this increase in the construction of humanlike androids: namely, our own increasing resemblance to machines through the application of computer technology. As a consequence, Homo sapiens is slowly becoming a cyborg, a physical merger of man and machine. Popular culture already features many examples of machine-enhanced beings, such as Star Trek’s ‘Borg’, the race of enhanced humanoids made famous by the television series.

The most important thesis of this chapter can be stated as follows: as a result of the ever-growing resemblance between humans and machines, the boundary between what constitutes a human and what constitutes a machine is becoming less distinct. This will have implications for how we conceive of ourselves as human beings and for what it means to live in a ‘human-oriented’ society.

We first turn our attention to humanoid cyborgs, by discussing a number of cases where computer networks and chip implants have enabled humans to communicate both with one another and with objects such as electronic doors and wheelchairs. Subsequently, we touch on the construction of android robots that are capable of mimicking gestures and movements specific to humans. Lastly, we review several examples of robots which possess a strong visual likeness to their human creators. Included in the reference list at the end of the chapter is a list of websites containing more information on each of these topics. Central to our discussion is a concern with the similarities and differences between humans and machines, leading to the question of whether machines can operate with a degree of autonomy from their human inventors.

Part Human, Part Machine

British scientist Kevin Warwick is renowned as an extraordinarily creative experimenter. He has achieved great successes not only with laboratory computer experiments, but also by conducting experiments on himself. In 1998, he had a small chip inserted into his arm which enabled him to maintain a wireless connection to a computer. With this experiment, Warwick aimed to show how computers and humans might communicate without relying on a mouse or a keyboard. Warwick is of the opinion that in the future we will all be hooked up to computer networks with which we share a continuous, wireless connection.

By means of radio waves, the computer was able to pinpoint Warwick’s location. Furthermore, it was programmed to respond to several of his activities:

At the main entrance, a voice box operated by the computer said ‘Hello’ when I entered; the computer detected my progress through the building, opening the door to my lab for me as I approached it and switching on the lights. For the nine days the implant was in place, I performed seemingly magical acts simply by walking in a particular direction. (K. Warwick, 'Cyborg 1.0', www.wired.com/wired/archive/8.02/warwick.html, accessed September 2012)
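
The automation described here amounts to a simple sense-and-respond loop: radio receivers track the chip’s position and the computer fires device actions as the wearer enters each zone. The sketch below is a minimal illustration of such a controller, not Warwick’s actual system; all zone and device names are invented.

```python
# Hypothetical sketch of a location-triggered building controller, loosely
# modelled on Warwick's description; all zone and device names are invented.

ZONE_ACTIONS = {
    "main_entrance": ["voice_box_say_hello"],
    "lab_corridor": ["open_lab_door", "switch_on_lights"],
}

def dispatch(action: str) -> None:
    """Stand-in for driving the real actuators (voice box, door, lights)."""
    print(f"executing: {action}")

def on_zone_enter(zone: str) -> None:
    """Called whenever the radio receivers locate the chip in a new zone."""
    for action in ZONE_ACTIONS.get(zone, []):
        dispatch(action)

# Simulated walk through the building.
for zone in ["main_entrance", "lab_corridor"]:
    on_zone_enter(zone)
```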

In 2002, another experiment was carried out in which the chip implanted into Warwick’s arm was not only linked to the external computer network, but was also connected internally to his central nervous system. In this experiment, the computer not only tracked and responded to movement, but also enabled Warwick to steer a wheelchair and even guide an artificial hand. Thus, he had found a way both to receive signals from and to send signals to a computer at a distance.

A possible practical application of this latter type of experiment could be to aid patients who are physically impaired as a result of the loss of a limb or who are otherwise restricted in their ability to move around freely. An interesting question raised by this experiment is whether such biomedical instruments should be considered merely as expedients for helping human beings with disabilities, or whether they are part of a development towards the creation of modified or enhanced humans.

Warwick himself is outspoken in his answer to this question. He believes the chip will not only replace existing body functions, but will also give rise to a set of wholly novel applications. Theoretically, those who carry a chip in their arm will be capable of more than immediate, localised operations. They will be able to execute commands from anywhere in the world by virtue of being connected to a worldwide network of intelligent systems.

As an extension to this experiment and in order to demonstrate its effectiveness, Warwick equipped his wife with a simpler version of the chip. By connecting the computers that received the chips’ signals, each person was made aware of the other’s actions—even when they were at separate locations. Mrs Warwick could sense the actions performed by her husband, such as lifting a finger, and vice versa. The experience of undergoing such an experiment was described by Warwick as getting a sense of what it would be like to be an actual cyborg:

I was born human. But this was an accident of fate—a condition merely of time and place. I believe it’s something we have the power to change. (http://www.wired.com/wired/archive/8.02/warwick.html, accessed September 2012)

BrainGain: A Built-In Computer

The research project set up by the Dutch BrainGain consortium also revolves around the possibility of enabling direct communication between a computer system and the brain. Its primary focus is on improving connections between brain and computer by means of ‘Brain Computer Interfacing’ (BCI). Users of BCI are trained to concentrate on specific thoughts in order to mentally control a computer or to receive electronic stimuli to the brain. BrainGain is a project involving co-operation from several universities, university medical centres, the Dutch research institute TNO, patient advocacy groups and electronics companies such as Royal Philips and Siemens. These partners share the goal of investigating to what extent individuals can learn to control their brainwave activity by means of neurofeedback training, a process where users consciously activate certain brain regions in order to operate computers by thought alone.

One of the findings of modern neuroscience is that when a person passively observes certain instruments or tools, activity is also triggered in those brain regions that are involved in the actual use of the instruments. Furthermore, it appears that the precise location of the brain activity is determined by what type of tool is being observed. BrainGain investigates whether it can build on these insights. If, for example, the act of just thinking about a tool similarly results in discernible patterns of brain activity, it might be possible for a person to instruct a computer to operate the tool in question—provided the brainwave patterns are sufficiently pronounced.
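
In computational terms, what BrainGain describes is a pattern-matching problem: compare a measured pattern of brain activity against templates recorded during training, and issue a command only when the match is sufficiently pronounced. The toy sketch below is merely an illustration of that idea, not BrainGain’s method; the ‘channels’, numbers and command names are all invented.

```python
import numpy as np

# Activity templates recorded during training, one per command. In a real
# BCI these would be feature vectors extracted from EEG recordings; here
# they are invented four-channel patterns.
TEMPLATES = {
    "open_door": np.array([0.9, 0.1, 0.2, 0.7]),
    "lights_on": np.array([0.1, 0.8, 0.9, 0.2]),
}

THRESHOLD = 0.9  # act only on sufficiently pronounced matches

def classify(activity: np.ndarray):
    """Return the command whose template best matches the measured
    activity, or None if no match is pronounced enough."""
    best_cmd, best_score = None, 0.0
    for cmd, template in TEMPLATES.items():
        # Cosine similarity between measured activity and stored template.
        score = activity @ template / (
            np.linalg.norm(activity) * np.linalg.norm(template)
        )
        if score > best_score:
            best_cmd, best_score = cmd, score
    return best_cmd if best_score >= THRESHOLD else None

print(classify(np.array([0.85, 0.15, 0.25, 0.65])))  # -> open_door
```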

Ideally, this kind of research will bring solutions to the problems of persons who suffer from poor mobility or from a communication deficit. Patients with conditions that impair movement and muscle control, such as ALS, epilepsy or paraplegia, are also likely to reap benefits from it. With an increased ability to interact actively with their surroundings, these patients will require less care and assistance in performing daily tasks. Training them to focus on a single sound, object or movement might produce the kind of neural activity required to operate a computer, which can be programmed to perform useful functions such as opening doors or switching on the lights.

BrainGain resembles the second Warwick experiment in its ambition to provide disabled patients with technology that will improve their quality of life. A possible advantage comes from the fact that the computer receives signals directly from the brain rather than from a chip embedded in the arm. Once this technology has reached the appropriate stages of development, it will also offer solutions to patients with a damaged connection between brain and body parts such as the arm, for instance as a result of injury or amputation.

A Closer Look at the Work of Warwick and BrainGain

A central objective of both the BrainGain project and Kevin Warwick’s research programme is to find ways of compensating for the loss of basic human skills in individuals. A case in point is that of persons whose freedom of movement or communicative abilities have been curtailed as a result of muscle loss or speech impairment. Researchers devote much effort towards developing medical instruments and accessories that might help these patients compensate for their disabilities. Thus, one major impetus behind such research is that it offers possible solutions to persons with a work-limiting disability.

Besides compensating for disabilities, which is mainly of benefit to elderly people and patients with a chronic illness or physical disability, this technology can also be used to facilitate the improvement or enhancement of unimpaired human capabilities. The use of such intelligent technology might thus also open the doors to novel applications. In Warwick’s case, having a chip in his arm meant he was capable of more than operating doors and switching on lights. The chip also linked him to his wife through a computerised network that allowed him to communicate with her from afar in a new way.

One of the questions raised by such experiments is how far Warwick and his fellow scientists will go in their quest for human improvement. This issue is linked to matters of controlling the new technology. Will man or computer be in charge of the controls for operating this technology and for switching it off? Usually we are led to think that computers are under the direct control of humans. After all, are we not the ones who program them and who decide when and where they should be used and what actions they may perform? In this view, computers are regarded as so-called ‘standalone’ machines that are under the direct influence of their inventors. It seems logical to think that the maker of a machine, or any other individual for that matter, should be able to pull the plug at any time.

Yet two reasons can be given as to why this view can no longer pass unchallenged. The first is that computers, like the chips developed by BrainGain and Warwick, are not isolated entities. They are in constant connection to large-scale networks which cannot easily be switched on and off—rather like the Internet. The latter is sustained by many different connections, commands and operations that are carried out simultaneously from many different locations, and it thus possesses a high degree of autonomy. ‘Pulling the plug’ in order to turn off the Internet is not a feasible option when these computerised systems are connected to huge networks that can no longer be traced back to a single user or operator.

A second reason for challenging traditional ideas about humans being in control of computers is the latter’s increasing processing power. Anyone who regularly replaces their personal computer is aware of the fact that computer capacity increases at a fast rate. This increase is mostly to be found in processing power and memory size—that is to say, in computers’ capacity to quickly store and access large amounts of data.

Nevertheless, the degree to which computers are able to understand and interpret spoken language as well as visual and auditory signals is still rather limited. The chess computer that beat Garry Kasparov, for instance, did not possess the grandmaster’s knowledge or intelligence. In fact, all it did was calculate its best next move by processing huge numbers of possible move sequences. This is not to detract from the largely untapped potential of computers to increase their ability to learn. By adopting evolutionary techniques that operate at a much faster rate than natural evolution, computers may in time become worthy of being called intelligent creatures.
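
The kind of brute-force calculation described here can be made concrete with the classic minimax procedure: search every sequence of moves and pick the move whose worst-case outcome is best. The sketch below demonstrates the principle on a toy take-away game rather than chess, but the approach is the same one: exhaustive look-ahead with no ‘understanding’ of the game.

```python
# Minimax in miniature: exhaustively search move sequences and pick the
# move with the best worst-case outcome. Chess is replaced by a toy
# take-away game: players alternately remove 1-3 stones from a pile, and
# whoever takes the last stone wins.

def minimax(pile: int, maximising: bool) -> int:
    """Score a position by searching all continuations to the end:
    +1 if the maximising player can force a win, -1 otherwise."""
    if pile == 0:
        # The player to move cannot move; the previous player took the
        # last stone and has already won.
        return -1 if maximising else 1
    scores = [minimax(pile - take, not maximising)
              for take in (1, 2, 3) if take <= pile]
    return max(scores) if maximising else min(scores)

def best_move(pile: int) -> int:
    """Pick the take whose resulting position scores best. This is pure
    calculation, with no 'understanding' of the game."""
    return max((take for take in (1, 2, 3) if take <= pile),
               key=lambda take: minimax(pile - take, maximising=False))

print(best_move(10))  # -> 2 (leaving a multiple of 4 forces a win)
```

Deep Blue’s search was of course vastly more sophisticated, with pruning and hand-tuned evaluation functions, but the underlying logic of scoring move sequences is the same.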

In a 2001 interview with the German magazine Focus, the renowned British physicist Stephen Hawking, who is almost fully disabled as a result of a crippling muscle disease, warned that artificial intelligence can pose serious threats to humankind. According to Hawking, the computational capacity of computers doubles every 18 months, making it difficult for humans to keep up. There is a serious risk of computer intelligence increasing to the point where machines are able to function with full autonomy. For this very reason, Hawking admonished, humans should try to match the development of computers: it is of the utmost importance for humans to maintain a biological superiority to electronic systems.
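
Hawking’s figure is easy to make concrete: if computational capacity doubles every 18 months, then after t years it has grown by a factor of 2^(t/1.5). A worked example under that assumption, in LaTeX notation:

```latex
% Capacity growth under an 18-month doubling time (Hawking's figure):
C(t) = C_0 \cdot 2^{\,t/1.5}
% Worked example over 15 years:
\frac{C(15)}{C_0} = 2^{15/1.5} = 2^{10} = 1024
```

On that curve, computers gain roughly a thousandfold in capacity every 15 years; unaided human biology, in Hawking’s view, gains nothing comparable over the same period.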

This point is also stressed by Kevin Warwick, who asks ‘what is wrong with adding something that gives you extra capabilities?’ As far as he is concerned, we can either let artificial devices determine our destiny for us or try and augment our own human intelligence. The creation of interfaces which link the human brain to computers will ensure that human cognition is supported by artificial intelligence, which should guarantee that humans remain masters of their artificial creations. However, as any hacker will tell you, once a network is in place it is difficult to stay in control of all the digital traffic. Therefore, Warwick’s vision of the future is susceptible to concerns about the degree of control that can be exercised over networked brains. Warwick’s answer to this concern is that, in a world of cyborgs, the notion of ‘self’ is likely to have a fundamentally different sense, thus rendering ideas of ‘self-control’ equally open to new conceptualisation.

Constructing Intelligent Computers

The development of intelligent machines first gained scientific interest in the mid-1950s with the advent of artificial intelligence (AI). Inspired by the development of computer technology, scientists commenced research on the construction of intelligent computers that were patterned after the human brain. Since little was known about the workings of the brain, it was hoped that the creation of intelligent computer systems would also provide us with greater insight into human intelligence. In the early years of artificial intelligence research, many believed that it would take 10–20 years before computers could be built that would be able to perform any task a regular human was capable of doing. This prediction did not come true, and to this day a truly intelligent computer system has not been created—although a lot has been accomplished in the meantime.

One of the early focal points of AI research was the development of chess computers. It was believed that if computers could beat humans in chess, this would be an important step towards the development of systems that were truly worthy of the honorific ‘intelligent’. The victory of IBM’s computer Deep Blue over chess grandmaster Garry Kasparov in 1997 finally ended years of speculation about whether such an intelligent chess computer could really be produced. This accomplishment was considered a significant achievement in the history of AI, although it did not signify that computers now had the capacity to function independently and autonomously from humans.

Apart from the development of chess machines, research also focused on the construction of robots, particularly in Japan. Robots are first mentioned in the science fiction literature of the early 1920s as mechanical beings which resemble humans and may be used as replacements for human labour (in fact, the word ‘robot’ is derived from the Czech word for forced labour). Even as far back as the eighteenth century, various machines were constructed which mimicked human or animal behaviours—a famous example being an eighteenth-century Japanese doll that could offer tea to its guests. Japan, as will be seen in this chapter, is one of the countries at the forefront of the development of humanlike robots.

The first generation of robots consisted mainly of so-called ‘service robots’, which relieved humans of certain tasks that these robots could perform non-stop. A prime example is the robot used on factory assembly lines or conveyor-belt systems, such as those deployed in the car industry. Service robots thus represent the first successful application of robot technologies. They can be deployed on tasks that humans experience as highly monotonous, that require high levels of precision or that could prove hazardous for living beings (such as the neutralisation of explosives).

A number of ‘consumer robots’ have also been created, functioning, for example, as robotic companions or playmates. Examples of this type of creation include Sony’s robot dog Aibo (which is programmed to obey its master); Honda’s Asimo (one of the first robots capable of walking upright, reminiscent of an astronaut in its looks, see Fig. 7.1); Paro (a therapeutic robot seal); the Tamagotchi (a digital companion which became popular in the mid-1990s and has recently become available for smartphones); and finally the robotic housemate RoboSapien (2004).

Fig. 7.1 The Asimo robot by Honda: a walking android. Image: GNSIN

These service and consumer robots exist to serve humankind; they are literally controlled and programmed by their human masters. In addition, an important part of scientific research is geared towards the construction of self-improving robots that can function with some degree of autonomy. A central feature of this type of research is the interaction between humans and machines. Unlike service robots, these machines are programmed to acquire data by means of environmental interactions and to use these data to increase their range of capabilities. In this way a number of robots have been developed which possess eerily human characteristics. As we shall demonstrate, such robots evoke a sense of the eerie precisely because they further blur the distinction between robot and man.

In what follows, we will first introduce several examples of robots that imitate human facial expressions in order to communicate with their environment. Second, we discuss a number of robots that have physical attributes similar to humans, which they use to walk, jump, dance and so forth. Lastly, we cover a set of robots which are so similar to humans in movement and appearance that they can almost pass for actual humans.

The Human Interface

The Kismet robot built by MIT in 1999 is best described as a kind of social robot with human features. Its main human quality is its capacity for facial expression (see Fig. 7.2). Kismet can interact with its environment through visual, auditory and proprioceptive sensing systems. This enables it to produce a wide range of facial expressions. For instance, Kismet can choose to convey looks of cheerfulness, calmness, sadness, disgust or anger. The way the robot behaves is modelled on very basic human reactions to external events.

Fig. 7.2 MIT’s Kismet robot can assume various facial expressions. It interacts with its environment through visual, auditory and proprioceptive sensors. Image: Nadya Peek

Scientists constructed Kismet to find out whether a robot can be trained to adopt new behaviours through its experiences with the world. The behaviour adopted by Kismet (looking away when an object gets too close, conveying looks of glee, surprise or puzzlement) has not been predetermined, but is a consequence of its interactions with surrounding elements.

When no visual stimuli are presented to Kismet, its facial expression changes. It conveys looks of sadness and displays behaviour which we associate with loneliness, and consequently starts a quest for human companionship. By implementing various learning mechanisms, it is hoped that Kismet will continue to develop into a robot that possesses a degree of social intelligence and is capable of engaging humans in social interactions.
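
Behaviour of this sort can be pictured as a small internal ‘drive’ that is fed by interaction and decays without it, with the face reflecting the drive’s current state. The following toy model is our own illustrative sketch, loosely inspired by this description of Kismet rather than by MIT’s implementation; all rates, thresholds and labels are invented.

```python
# Toy model of a stimulus-driven expressive robot, loosely inspired by
# Kismet's drive-based design; rates, thresholds and labels are invented.

class SocialRobot:
    def __init__(self) -> None:
        self.social_drive = 0.5  # 0 = starved of interaction, 1 = satiated

    def step(self, stimulus_present: bool) -> str:
        # Interaction feeds the drive; absence of stimuli lets it decay.
        self.social_drive += 0.2 if stimulus_present else -0.15
        self.social_drive = min(1.0, max(0.0, self.social_drive))
        if self.social_drive < 0.2:
            return "sad expression, scanning for a human companion"
        if self.social_drive > 0.8:
            return "calm, contented expression"
        return "alert, cheerful expression"

robot = SocialRobot()
# Two interactions, then a stretch with no stimuli: the robot slowly
# drifts from cheerful engagement into a lonely, attention-seeking state.
for present in [True, True, False, False, False, False, False]:
    print(robot.step(present))
```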

A comparable initiative is the iCat project by Philips, which aims to study man–machine interactions. The iCat robot possesses an internal camera which it uses to scan objects and facial expressions. Through a series of built-in microphones, iCat is able to detect sounds and pinpoint where they originate, and thus to use those sounds as a platform for speech recognition tasks. It further contains sensors that register tactile input. Another key aspect of the robot is its ability to manipulate its facial expressions to make them strongly correspond to those of humans (see Fig. 7.3).

Fig. 7.3 iCat by Philips. Source: safeliving.wordpress.com/2007/11/20/telecare-session/icat-by-philips-expressions

Whereas Kismet was a unique, one-off project, Philips aims to produce the iCat in larger numbers in order to facilitate future research. For this reason, copies of the iCat robot are being employed in several research locations. Surrounding the iCat is also an online community of users who exchange tips and experiences with one another. This creates an environment in which a collaborative development and improvement of its design is made possible.

Both the Kismet and the iCat project opt for a facial configuration which is clearly different from that of humans, yet which also maintains a correspondence of features. Both robots display emotions that are very recognisable to humans. Research has shown that the closer a robot resembles a human being in looks, the more critical humans become of its expression and behaviour. This is why both Kismet and iCat make use of the techniques of cartoon drawing. These robots, while clearly having non-human appearances, at the same time possess features which are easily recognisable as belonging to humans. Like cartoon characters, Kismet and the iCat robot come across with more conviction by displaying highly exaggerated expressions and emotions. With our next example, we encounter a completely different type of robot: one which imitates humans not in expression but in movement.

Dancing and Jumping Robots

Along with communication robots that are geared towards interaction with humans, there exists a wide range of robots that move and dance, of which the Beatbot is the funniest to behold. It looks rather like a yellow snowman or rubber duck, bobbing along to the beat of different kinds of music. Children in particular get very excited by the sight of this robot, which does not just respond to music, but also interacts with its surroundings by responding to the movements of those in its vicinity.

A dancing robot is quite something to behold in itself. But to encounter robots that dance with each other and even flawlessly execute a complex Japanese choreography is truly a high point of modern robotics. Sony’s QRIO robot is capable of such performances, as is documented in many videos that can be found on YouTube. Balancing itself on two legs, it can move both arms and legs in a graceful manner. Its appearance matches the prototypical look of a robot: it is no more than a steel construction with angular lines. Its movements, on the other hand, do reflect those of humans.

Dexter, a robot produced by the American company Anybots, is a closer approximation to the human musculoskeletal system. It was presented as the first robot able to walk on two feet by balancing dynamically, as humans do, as well as the first to be equipped with a flexible spine. Its walking motion is not exactly elegant; in fact, it is more of a shuffling trot than an actual walk. Dexter looks more impressive, however, when jumping, which it is capable of doing in a very realistic fashion. Because of its flexible spine, it is even able to get back on its feet without human assistance—a feat which few robots are capable of performing. However, for all its advantages, Dexter is also still very much a robotic being, as little to no effort was put into giving it a human appearance.

Robots as Human Substitutes

Japanese researcher Hiroshi Ishiguro raised the bar significantly when he produced an actual copy of himself. In 2006, after a successful trial with an android that looked like a well-known Japanese TV host, Professor Ishiguro developed a robot which was a realistic duplicate of himself (see Fig. 7.4). Viewing the moving images of Ishiguro’s mechanical doppelgänger, one quickly understands all the enthusiasm surrounding this robot: it almost perfectly resembles its creator in both looks and behaviour. Sitting in its chair, breathing, gazing around while slightly tapping its toes—the robot does everything we would expect from a ‘real’ human being. Every effort was made to create a robot that looked exactly like the original, and the result is a robot that comes across as highly convincing (as evidenced by its ability to put on a very impressive poker face).

Fig. 7.4 Hiroshi Ishiguro and his doppelgänger, the Geminoid HI-1. Image: ATR Intelligent Robotics and Communication Laboratories. The Geminoid was developed by the Hiroshi Ishiguro Laboratory, Advanced Telecommunications Research Institute International (ATR). Geminoid is a registered trademark of ATR.

The robot is operated remotely through Ishiguro’s voice, body posture and facial movements, which are scanned and then reproduced by the robot with extreme precision. Professor Ishiguro can be miles away, and still the robot is capable of receiving signals and giving a perfect imitation of Ishiguro’s actions. Although this might come in handy when, say, one is an hour’s drive away from the location where one’s presence is required for teaching, it does not mean, of course, that the robot can give lectures independently. Then again, that is not the point of Ishiguro’s research, which sets out to study man–machine interactions. A more interesting question is to what extent people feel themselves in the presence of Ishiguro when dealing with his android, or, conversely, to what extent they engage the android in communication in the manner that is natural when dealing with other humans. What makes us recognise other people as fellow human beings? Is it a question of looks or behaviour? In the end, these are some of the deciding factors for determining whether or not humans can be engineered.

Through his experiments, Ishiguro has discovered that people who get only a brief glimpse of a stationary robot have no problem identifying it as such. However, when the robot also makes slight movements, the majority believe they are dealing with a fellow human. Involuntary movements in particular, such as tilting the head, rotating the neck while gazing around or making small hand movements, contribute to the persuasiveness of an android robot. Although the ability to speak is also significant, it is mostly small body movements which give the robot a lifelike impression. The so-called ‘body movement factor’ thus seems to be of greater importance than speech for achieving realistic communication between androids and humans.
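
This finding translates into a concrete design rule: a convincing android should never be entirely still, but should continuously produce small, irregular movements. A hypothetical idle-motion generator along those lines, with joint names and amplitudes invented for illustration:

```python
import random

# Hypothetical idle-motion generator: a stream of slight, irregular
# 'involuntary' movements of the kind Ishiguro found make an android
# read as alive. Joint names and amplitudes are invented.
IDLE_MOVES = [
    ("tilt_head", 2.0),     # maximum amplitude in degrees
    ("rotate_neck", 5.0),
    ("shift_gaze", 3.0),
    ("flex_fingers", 1.5),
]

def idle_motion_stream(steps: int, seed: int = 7):
    """Yield (joint, amplitude) pairs for small randomised movements."""
    rng = random.Random(seed)
    for _ in range(steps):
        joint, max_amp = rng.choice(IDLE_MOVES)
        # Keep every motion small; large moves read as deliberate.
        yield joint, round(rng.uniform(0.2, 1.0) * max_amp, 2)

for joint, amplitude in idle_motion_stream(5):
    print(f"{joint}: {amplitude} deg")
```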

Overall, the clips of Ishiguro’s doppelgänger come across as highly convincing. In addition to Professor Ishiguro’s clone, scientists have created another android designed to look like Einstein, and there are several more specimens that walk, talk and move like humans. All the same, these androids are also reminders that a lot of work is yet to be done. Ishiguro’s android, for example, cannot walk or move on its own but is fixed to a chair. It is also fully dependent on Ishiguro’s presence to perform some of the more complex tasks of which it is capable. This goes to show that android robots are still far from being regarded as fellow human beings.

Enhanced Humans

On the one hand we have provided a sketch of the rise of cyborgs that have the ability to communicate via fixed computer networks, and on the other hand we noted the increase in android robots that show such a great resemblance to humans that we might even become convinced of their humanity. What does this mean for the way in which humans and machines will coexist in the future? Two other questions are relevant to this issue. (1) To what extent is it technologically feasible for humans to become cyborgs engaged in large-scale communication with computer systems, and for android entities to function on an equal basis next to humans? (2) To what extent is it socially desirable to construct such android robots and cyborgs? There remain wide differences of opinion on both these questions.

A large number of experts are convinced that within the next 50 years computers will advance to the point of existing on an equal footing with humans. They argue that computers will one day display the same behaviour as humans, communicate in the same way as we do and be capable of showing human emotion. According to these scientists, it is irrelevant whether these emotions are of the same nature as human emotion or represent a form of trained behaviour. The Ishiguro robot shows that technology has progressed significantly, but it also demonstrates that we still have a long way to go. Computers are not yet capable of thinking independently or of understanding casual utterances of language, nor do they have the ability to participate in social activities. Hence, a real breakthrough in these areas is not yet on the horizon.

Scientists like Warwick and Hawking are nonetheless convinced that someday advanced computer programmes will not only equal the abilities of humans, but will even be capable of doing much more than we are. They warn that if we do not watch our step, it is a real possibility that robots will eventually take over.

This potential loss of human control over machines is the central theme of the 1999 movie The Matrix. Kevin Warwick views its narrative as being not just fictional, but as representing a possible vision of our future. The film is set in the year 2199, at which time machines have defeated mankind, yet humans are not themselves aware of this situation. They are trapped in a virtual reality, a computer-generated dream world that resembles the society of the late twentieth century. Human bodies are carefully preserved in large incubators in order to supply the system with energy. According to Warwick, the thought that humans will serve as an energy source for computers is evidently undesirable, but not entirely unthinkable.

Not everyone shares his belief in the technological possibilities for realising the wonderful yet often apocalyptic visions of science fiction writers. A large group of scientists are still quite sceptical in their opinion on the question of whether machines can take the place of humans. According to this view, which is put forward by, among others, cognitive scientists and philosophers, the chances of us ever attending lectures given by Ishiguro’s replica are very slim.

One of the most influential philosophers in the field is the American philosopher Hubert Dreyfus. Dreyfus does not find it surprising that no computers exist which can match the human brain. Intelligent systems are built around the notion that the human brain functions rationally, and that it structurally processes and reproduces information in a manner comparable to the algorithms utilised by chess computers. Dreyfus believes this to be an incorrect assumption. Humans do not apply rules when deciding how to act in a given situation; most people probably do not even do so when playing chess. The brain operates on intuition and experience. This bears little connection to the mathematical processing of pieces of information, according to Dreyfus.

Support for his theory comes from Ab Dijksterhuis, professor of social psychology at the Radboud University Nijmegen in the Netherlands. Dijksterhuis created a furore in 2007 with his contention that humans ‘think with feeling’. According to him, conscious reflection is much overrated. We think that our consciousness is called upon when making important decisions and that it defines our intelligence. Yet most choices—even wise ones—are made subconsciously. The way we think and act is partially determined by the circumstances—it is situated, as Philip Brey calls it. Our situatedness is context- as well as location-dependent. It cannot be preprogrammed and can thus never be simulated by a computer.

The second question, of whether or not it is socially desirable to improve ourselves, touches on an issue which is discussed in almost every chapter of this book: namely, to what extent do we want to enhance humans? The answer to this question is closely linked to what meaning we give to the word enhancement. A common distinction is made between healing and enhancing. We normally do not object to the idea of healing, nor do we object to the development of therapies or expedients for helping patients with an illness or ailment of some sort. But as soon as the debate turns to questions of enhancement, it becomes more contentious. A fine line exists between enhancement and healing. Glasses, for instance, are viewed as a means to compensate for deficiencies of eyesight, yet telescopes are regarded as instruments that belong to the category of enhancement (see Rose 2006). But are not both examples of human adaptation and even modification?

It is often stressed in debates on this topic that humankind has always been involved in attempts at self-improvement. According to the American philosopher and ethicist Arthur Caplan, those who favour the idea that human nature is invariable and that new technologies will turn it into something unnatural are guilty of committing a basic fallacy. Such reasoning is based on the idea of the existence of a natural state in which humans can, and by necessity must, exist. According to German philosopher Peter Sloterdijk, the question of whether humans ought to use technology for self-enhancement is no longer a valid one, as humans have already been strongly shaped by technology. Human evolution is entangled with the evolution of technology, as the dissertation of Pieter Lemmens, a Dutch philosopher, tells us. Humans are deeply reliant on technology, whether in the form of ‘hard’ appliances such as computers and machines or ‘soft’ techniques like education and other forms of pedagogy and socialisation.

If we are already striving to improve our intelligence, skills and health via a system of education—including health education—then why should we not also implement new technologies for this purpose? This is the question advanced by British philosopher John Harris. If parents who are well-off have the opportunity to send their children to private schools, then someday they might also have the opportunity to augment their children’s brain capacity. Is there a difference?

As we noted earlier, Kevin Warwick also sees the potential upside to the technological enhancement of humans. He joins the philosophers we mentioned earlier in seeing little difference between this kind of enhancement and other forms of human improvement or healing.

People with pacemakers and cochlear implants [an electronic implant inserted into the ear; used to restore a sense of sound for patients whose hearing is lost or damaged] are getting a benefit from technology. What is wrong with adding something that gives you extra capabilities? (The Guardian October 4, 2001)

John Harris dares to go a step further. He not only thinks that human enhancement should be allowed, but even regards it as our moral duty. Thus, natural selection, an arbitrary process to which humankind has been subject for centuries, could be replaced by a more rational and deliberate selection process:

This new process of evolutionary change will replace natural selection with deliberate selection, Darwinian Evolution with Enhanced Evolution. (Harris 2007, p. 21)

Human Machines and Mechanical Humans?

What does this increasing interest in the development of cyborgs and android robots mean for future societies? What sort of novel relationships and new codes of conduct can or should we expect? In an attempt to provide answers to these questions, the American author Asimov articulated a series of rules or laws in his novels which he felt all robots should obey. He called them the three laws of robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

A fourth or ‘zeroth’ law was added later, which takes precedence over the other three:

  4. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

These laws are clearly designed to stress the control that should be exercised over robots. Robots exist to benefit mankind, not the other way around. For Asimov, humans and machines represent two classes of beings that are clearly distinct from one another.
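
Viewed computationally, the hierarchy in Asimov’s laws amounts to a priority-ordered veto: a proposed action is checked against the laws from the highest precedence (the zeroth law) downwards and rejected at the first violation. A toy encoding of that ordering, with an action’s predicted effects represented as hypothetical boolean flags:

```python
# Toy encoding of Asimov's laws as a priority-ordered veto. An action is
# described by hypothetical boolean flags for its predicted effects.

def permitted(action: dict) -> bool:
    """Check a proposed action against the laws in order of precedence."""
    if action.get("harms_humanity"):
        return False  # zeroth law: overrides everything
    if action.get("harms_human"):
        return False  # first law
    if action.get("disobeys_order") and not action.get("order_harms_human"):
        return False  # second law: obey, unless the order violates law one
    if action.get("endangers_self") and not (
        action.get("required_by_order") or action.get("required_to_save_human")
    ):
        return False  # third law: self-preservation, unless laws 1-2 demand otherwise
    return True

# Disobeying an order is permitted when the order would harm a human:
print(permitted({"disobeys_order": True, "order_harms_human": True}))  # True
print(permitted({"harms_human": True}))                                # False
```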

In the film The Matrix this distinction between man and machine is less clear-cut. Neo, the main character in the movie, is part of a group of humans who set out to destroy the Matrix’s powerful machines by running their own competing computer programmes. Neo and his friends navigate between the virtual world of the Matrix and the ‘real world’ outside the system. Although at first glance The Matrix appears to feature a battle between humans and machines, Neo and his crew frequently hook up their own brains to the Matrix computer system in order to gain access to the virtual reality in which other humans dwell. This reveals the existence of a close interrelationship between humans and machines. The concept of individuality in the Matrix thus also has quite a different sense.

As we have highlighted throughout this chapter, it is common practice for today’s scientific community to speak not of man or machine, but rather of man and machine. The line between the two categories will in all likelihood become increasingly blurry. Androids are becoming more and more lifelike, whereas humans, wired to large computer networks, show ever more traits that are typical of cyborgs. Scientists such as Warwick and Hawking not only see this development as defining our future, but they advocate an active role for humans as custodians of that future. The human brain ought to be enhanced by technology in order to retain control over new forms of artificial intelligence and artificial life.

The most important question regarding the future is not whether androids and cyborgs will be welcomed by society, but how society must try to redefine the relationship between the two categories. If the distinguishing characteristics of humans and machines continue to blur, Asimov’s laws of robotics will no longer hold. The answer to the question of how android robots and human cyborgs may coexist—as illustrated by Koops’s contribution on human rights (Chap. 12, this volume)—presupposes new social, legal and economic orders. The question is already relevant today. It is only a matter of time before it becomes urgent.