1 Introduction

Research on brain–computer interfaces (BCIs) is motivated by the hope of bringing this technology into the everyday lives of severely impaired people (Birbaumer 1999, 2006; Dornhege et al. 2007; Daly and Wolpaw 2008; Mak and Wolpaw 2009; Millán et al. 2010; McCullagh et al. 2010; Zickler 2009).

In addition to scientific BCI research, there is a growing interest in reflections on the anthropological, social, and ethical implications of BCI technology (Fenton and Alpert 2008; Haselager et al. 2009; Tamburrini 2009; Hildt 2010; Nijboer et al. 2011). From an anthropological point of view, questions relating to human–computer interaction in BCIs are of particular interest. Is it adequate to describe this interaction between man and technical device as mere tool use, comparable to the use of a power drill or a mobile phone? Or is it more adequate to assume that man and technical device form some sort of functional unit or hybrid? Are there reconfigurations taking place (Suchman 2007)? Can the technical device be integrated into the body’s representation?

Furthermore, human–computer interaction in BCIs points to a future scenario of cyborgs discussed in transhumanist positions: the vision of human biology being more and more substituted by technology, which may ultimately culminate in the vision of technology taking over the whole human body. Taking into account that the European tradition of anthropological thinking has regarded the human being as a rather impaired and incomplete creature, constantly in need of culture and technology, it is an interesting question whether the substitution of parts of the body by technological devices would conflict with or rather serve the human essence.

In what follows, this chapter first reflects on the role of neurotechnology and human–computer interaction in transhumanist thinking. Then, some phenomenological positions are introduced that might serve as a basis for reflecting upon human–computer interaction in BCIs; here, the concept of “transparency” is of particular interest. Afterwards, some empirical results of a pilot study that investigated people’s experiences of human–computer interaction in BCI use are presented and discussed against this theoretical background.

2 Transhumanism

Imagine the perfect BCI device: All technical trouble is overcome, the devices are reliable, smart, and cheap, the number of potential applications is unlimited, and the users learn to operate them easily. Is this not the opportunity to overcome all the disadvantages of our fleshly existence and to replace the body with technical devices driven by thoughts only? Could not a brain alone, connected via several interfaces to worldwide sensors and tools, live a rich and interesting life? And could not even the brain itself, the last remaining part of the body, eventually be replaced by more durable hardware? Could not mankind, as the next step in evolution, transform itself into a web-like superorganism?

This thinking is characteristic of the movement of transhumanism (Hayles 1999; Pepperell 2003; Graham 2004; Miah 2008; Birnbacher 2008; Krüger 2004; Herbrechter 2009). Transhumanism is the idea that the current constraints of being human can be overcome by technological means and that humans could be transformed into other, ‘better’ beings. There are different branches and strategies of transhumanism (biochemical ones, electronic ones), but at their core they commonly aim at enhancing the mind, increasing pleasure, and overcoming death. Many of them envisage leaving the body as we know it behind and moving the mind to a more durable basis. To mention only one example: could we not upload the human mind to a supercomputer and thereby, although leaving all the flesh behind, nevertheless save the actual, the ‘real’, human being (Moravec 1988; Kurzweil 1999; Tipler 1994; Bostrom 2008, 2009)? Many well-known scientists and engineers are representatives of the transhumanist movement. For an example relevant to the BCI context, one might think of Artificial Intelligence (A.I.) researchers like Marvin Minsky or Hans Moravec. In a programmatic article, Minsky (1994) claims: “To lengthen our lives, and improve our minds, in the future we will need to change our bodies and brains. […] We must then invent strategies to augment our brains and gain greater wisdom. Eventually we will entirely replace our brains […]” Minsky expects that researchers “will also conceive of entirely new abilities that biology has never provided. As these inventions accumulate, we’ll try to connect them to our brains […] In the end, we will find ways to replace every part of the body and brain – and thus repair all the defects and flaws that make our lives so brief.”

Positions like this raise several anthropological questions. Are these transhumanist visions overambitious or impossible? Is there an aspect of ‘human essence’ that cannot or should not be touched by technical manipulation? Can humans, however much they may invent and engineer, ever escape their general situation by technical means? Or will those technical means always remain makeshifts of an assistive, compensatory character? Can humans achieve any design by technical means, even the complete reshaping of themselves?

Roughly speaking, we can expect two types of answers, stressing either particular essential features of the human being or the non-fixed character of human existence. Defenders of the view that there are at least some essential features that might be considered ‘eternal’ would say that man of course has some options and some degrees of freedom, but cannot escape his essential corset. The other way of thinking accepts only openness as an appropriate description and implicitly suggests that, in principle, man always has the option to change himself more or less completely.

No doubt, both paradigms are vulnerable to criticism. It would be hard for the eternal-features paradigm to name even one feature that has not been changed, or even lost, in at least some people in history. The option paradigm, on the other hand, has to concede that human freedom is in fact limited by many undeniable conditions. These conditions might be altered at some point, but here and now they are present and do limit human activities.

It is relevant that the transhumanist visions express a current ideal or future scenario, and that research in telepresence, cyberspace, BCI, and whole-brain emulation can be considered to be in line with, if not already quite close to, these visions. From a transhumanist perspective, BCI technology can in some respects be seen as a step towards the aim of overcoming the bodily existence of the human being. As an interface between brain and computer (‘wetware’ connected to hardware), BCI technology in some sense illustrates the transhumanist ambition of replacing biological structures by technological enhancements (Grübler 2012). So the question arises: Can BCI applications really substitute parts of the body?

In the following, this core question will be further elaborated on the basis of some positions in philosophical anthropology.

3 Human–Computer Interaction

In recent years, increasingly intensive discussions about the way people interact with machines have taken place (cf. Suchman 2007). Typical instances used to illustrate arguments and interpretations are media use and human–computer interaction. For the theoretical backing of these discussions, however, much older models and theories have been drawn upon. The most prominent ones come from Edmund Husserl (1976) and Maurice Merleau-Ponty (1962). The latter argued that (even primitive) tools and devices people use are integrated into their bodily experience and modify the way they construe ‘their’ world. Contradicting the ‘objectivist’ point of view – the readymade body uses a readymade tool to manipulate the readymade world – this phenomenological point of view stresses the undivided unit the human being practically forms with the world he or she is actively engaged in. On the basis of this kind of thinking, the extension of the body and the incorporation of tools have been discussed in several fields, such as prosthetics and cyberspace applications (Clausen 2006; Hildt 2010). It has, for instance, been argued that prostheses become part of their users (e.g. McDonnell et al. 1989; Hochberg and Taylor 2007). This seems to support the idea that human beings may be altered by technological means and that they might drift away from their current state of being. In addition, it has been argued that there is an important difference between the inclusion of technical devices into the body and the extension of the body via technical devices (De Preester and Tsakiris 2009). According to this thesis, the body seems to have an internal, inborn model allowing for the integration of artificial limbs insofar as they emulate the complete biological body. Only artificial parts of that kind can be experienced as parts of the body.

In the context of human–machine interaction, Martin Heidegger’s reflections on tool use are also of interest. Heidegger (1962, §16) described a mechanism of status change that sometimes happens to the things we deal with in the world. These things, like tools for instance, are invisible in use, transparent. Their ontological status is ‘ready-to-hand.’ But sometimes one is unable to do the planned work because the tool is broken or out of order. In such cases, the tool loses its character of being ‘ready-to-hand’ and is now merely ‘present-at-hand.’ The practical context of transparent use is destroyed; the tool is now a mere object, a piece of matter. Of course, the body is not a broken tool, but there are structural similarities: we usually do not focus on the body or on the tool we use, but just do whatever we want to do. We are engaged in the world. The body, like the tool, is invisible in use, transparent. That means that we concentrate fully on the ‘work’ we are trying to do. What we have in mind is not controlling the body or the tool but doing the ‘job’. A trivial example is cycling: we simply turn left or head for a certain destination without thinking about the details.

All technology that enables transparent practice has at least the potential for success. If BCI technology allows for activities that ‘feel’ like this, we would say that the substitution of the body’s motor activities is successful. If, on the other hand, it turns out that in BCI use the technology always remains an issue standing between man and his ‘work’, this practice would be very different from our regular experiences in everyday life.

Based on phenomenological observations like these, the notion of transparency has become relevant especially within several theories of media use and in the fields of cyberspace and telepresence. Here, transparency has already been the subject of several empirical investigations (cf. Murray and Sixsmith 1999; Dolezal 2009). Tentatively, one might summarize their results as follows: the more consistently a technology couples visual feedback with motor activity, the greater its potential for becoming transparent.

For BCIs, this consistent coupling of motor activity and visual feedback is by definition not given. The question, therefore, is whether they nevertheless have the potential for transparent practice, or whether this is excluded for reasons of principle. At least there are several theories claiming that personal life, self-consciousness, and the sense of one’s own actions are primarily mediated by movement (so-called motor theories). For instance, on the basis of a survey of modern studies, Tsakiris and Haggard (2005) conclude: “Overall, the ‘agentic self’ seems to be constituted by voluntary movement […]” and “[…] in both phylogenetic and ontogenetic terms, perception and cognition begin with movement” (404). The core question here is, then, whether it is really the movements as such, or rather the realization of intentions – technologically mediated and without any self-movement – that maintains all these self-constituting phenomena.

Given these theories, it is an interesting question how BCI users experience BCI use: do BCIs have the potential to become transparent in use? To approach an answer to this question, semi-structured interviews were conducted in a pilot study. Some of its results are presented in the next section.

4 Some Empirical Results

To find out more about these questions concerning human–computer interaction, participants in non-invasive EEG-based BCI studies were asked about their experiences with BCI use (cf. also Grübler et al. 2014): 20 persons (15 male, 5 female, aged 29–71) in three countries took part in semi-structured interviews. Seven of them were stroke patients who had gone through BCI-based experimental rehabilitation. Thirteen were motor-impaired people who did BCI training sessions and, if they achieved control over the interface, tested prototypes for communication, entertainment, or telepresence. Six of these thirteen users trained successfully and went on to test prototypes, while seven dropped out after several training sessions without achieving command over the interface. Most of the participants used the strategy of motor imagery, i.e. they imagined movements to trigger the functions of the devices; a minimal sketch of how such signals are typically decoded is given below. Only one participant used the P300 approach, which capitalizes on rather passive (reactive) potentials to select functionalities on the screen.
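
To make the motor-imagery strategy more concrete, the following is a minimal, illustrative sketch of how imagined left- versus right-hand movement is commonly decoded from non-invasive EEG: band power in the mu band (roughly 8–12 Hz) over the sensorimotor channels C3 and C4 is extracted and fed into a linear classifier. This is not the pipeline used in the studies our participants took part in; the sampling rate, band limits, and the randomly generated placeholder data are assumptions made only for illustration.

```python
# Minimal, illustrative sketch of motor-imagery decoding (not the pipeline used
# in the studies reported here). Assumes EEG epochs over channels C3 and C4;
# sampling rate, band limits, and the placeholder data are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 250  # assumed sampling rate in Hz

def mu_band_power(epochs, low=8.0, high=12.0, fs=FS):
    """Log band power in the mu band per epoch and channel.

    epochs: array of shape (n_trials, n_channels, n_samples)
    returns: features of shape (n_trials, n_channels)
    """
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, epochs, axis=-1)
    return np.log(np.mean(filtered ** 2, axis=-1))

# Placeholder data (random noise standing in for real recordings). In real EEG,
# imagined left- vs right-hand movement attenuates mu power over the
# contralateral sensorimotor cortex, which is what the classifier picks up.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((40, 2, 2 * FS))  # 40 trials, channels C3 & C4, 2 s
labels = np.repeat([0, 1], 20)                 # 0 = imagined left, 1 = imagined right

clf = LinearDiscriminantAnalysis().fit(mu_band_power(epochs), labels)

# The decoded class can then drive the feedback bar or a device function.
new_trial = rng.standard_normal((1, 2, 2 * FS))
decoded = clf.predict(mu_band_power(new_trial))[0]
print("decoded command:", "left" if decoded == 0 else "right")
```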

The interviews consisted of 23 questions, only two of which are of interest here. One of them is: “Did you have the impression that you and the BCI-based device together form some kind of functional unit? Or, in other words, did you experience the BCI device, the moment you used it, in any sense as part of you?” The other question of interest is: “While using the BCI, could you directly concentrate yourself on the work you tried to do? I mean: Could you forget about the technology and the learned strategies of using it and just do what you wanted to do?” Both questions, in different wording, ask about issues related to transparency.

Concerning the first question, four participants answered that they had the impression of forming a unit with the technology; all the others rejected this. A 71-year-old stroke patient who had trained to move a virtual hand on the screen said: “Yes, when I saw the fake hand opening and closing I felt it like a natural movement of my real hand.”

A participant who did BCI training said: “I felt we were one entity when I could visually see what was happening on the computer, advancing, retreating, and trying to put the arrows where needed.” Another participant who successfully tested a BCI prototype reported: “I think it was a part of me, yes [pause] because my brain was involved in it […].”

In contrast, a person who did not manage to control the BCI said: “Not at all, because [the device] did a bit what it wanted, my brain just as it wanted while I was focusing. And then with all the failures there all the time […]. So I cannot say we were really hooked atoms, him and me.”

Another interviewee said after BCI training: “I considered it to be more like a tool, like a computer keyboard, like an aid.”

While only four users said they had the impression of forming some kind of functional unit with the BCI device, 15 participants said that they were able to concentrate on their ‘work’ while using the BCI and that they could forget about the technology and the learned strategies. As an example, we cite a 43-year-old tetraplegic participant who used the BCI to control a telepresence robot by motor imagery (in his case, imagined hand movements). After receiving an affirmative answer, the interviewer added a follow-up question: “If you now want to move the bar [i.e. the bar on the computer screen that is used for training and feedback] left or right or turn the robot left or right, do you think about ‘robot turning left or right’ or do you think about ‘right hand movement and left hand movement’?” The user’s answer was: “No, I’m thinking about the hardware… not about me.”
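
The scenario this participant describes – imagined hand movements moving the feedback bar or turning the telepresence robot – involves a simple mapping layer between the decoded imagery class and the device command. The sketch below, with entirely hypothetical names, only illustrates that mediating layer; it is not the software used in the study. Arguably, it is precisely this layer that drops out of the user’s attention when the interaction becomes transparent.

```python
# Illustrative mapping from a decoded motor-imagery class to a device command.
# All names are hypothetical; this is not the software used in the reported study.
from enum import Enum

class DecodedClass(Enum):
    LEFT_HAND = 0
    RIGHT_HAND = 1

# One decoded class drives both the training feedback bar and the robot.
FEEDBACK = {DecodedClass.LEFT_HAND: "bar_left", DecodedClass.RIGHT_HAND: "bar_right"}
ROBOT = {DecodedClass.LEFT_HAND: "turn_left", DecodedClass.RIGHT_HAND: "turn_right"}

def to_command(decoded: DecodedClass, mode: str = "robot") -> str:
    """Translate the decoded imagery class into a feedback or robot command."""
    table = ROBOT if mode == "robot" else FEEDBACK
    return table[decoded]

print(to_command(DecodedClass.RIGHT_HAND))             # -> "turn_right"
print(to_command(DecodedClass.LEFT_HAND, "feedback"))  # -> "bar_left"
```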

Some participants stressed that they were able to forget about the technology after having had several training sessions and before fatigue set in. For example, one interviewee said after prototype testing: “Generally yes, it was fine, well then I think I was a little tired at the end of the sessions, so it was a bit more difficult but in general I think I managed more or less well to focus on the work and exercise more than on the technology.”

Another participant, when asked whether he was able to forget about the technology and the learned strategies of using it, and just to do what he wanted to do, said: “It was difficult but that’s what I was trying to do. But it is true that the more time passed, the less I was bothered by the technology, the less I thought about it.”

In sum, the preliminary results obtained point to some ambivalence concerning the question of transparency in BCI use. Whereas some participants clearly reported the impression of having formed a functional unit with the technical device, or of having been able to forget about the technology, others rejected these descriptions. One reason for this may be that both questions were rather abstract, so that possibly not every participant clearly understood what we were asking about. In addition, it seems plausible to assume that the impression of transparency in BCI use is tightly linked to the experience of having control over the BCI (which was not the case for all of the participants). Nevertheless, according to the answers given by some of the participants in this pilot study, transparency does seem to be a phenomenon that can be observed in BCI use.

Furthermore, if one compares the answers the individual participants gave to each of the two questions, a considerable inconsistency becomes apparent: whereas a majority of the participants said that they were able to concentrate on their ‘work’ and to forget about the technology, most hesitated to agree when asked whether they formed a functional unit with the technology. Since both questions relate to transparency in BCI use, this inconsistency is rather surprising. It can be seen more clearly when the answers coming from one and the same person are placed side by side (see Table 15.1).

Table 15.1 Some examples of pairs of answers given by individual participants

Participants on the left side of the table firmly reject something that they seem to embrace on the right side. They appear reluctant towards ideas like “forming some kind of functional unit with a technical device”, which may imply “melting with technology” or “becoming part of a hybrid”. Instead, they seem to prefer to describe their interaction with the technology as ‘controlling a tool’. One may speculate that phrases like ‘forming a functional unit with a technical device’ raise negative feelings or fears linked to ideas such as technicalization or cyborgization, which might imply losing one’s essence or identity.

5 Conclusion

Concerning the empirical pilot study reported here, our impression is that the results point towards the potential of non-invasive EEG-based BCI technology to become ‘transparent’ in use. Some of the users were able to control their environment – even without motor activities – and to focus on their aims and intentions. This emulates the way we normally act in the world. The substitution of parts of the body therefore seems possible to a certain extent. Human practice and human essence might be flexible enough to realize a ‘full’ life on the basis of other, unusual interaction strategies.

An interesting by-product of the interview study is the participants’ tendency to distance themselves from the technology and to avoid ideas of forming a functional unit with it or becoming part of some man–machine hybrid.