1 Introduction

Major research challenges in service and personal robotics concern the development of robotic systems interacting with humans in homes, hospitals, offices, workshops, and other typically human habitats (Siciliano and Khatib 2008). In order to cope with dynamic and partially structured human habitats, robots must be endowed with flexible goal-reaching strategies. In humanoid robots, these functional similarities to human beings, in the way of plastic and goal-directed behaviours, may be accompanied by remarkable bodily similarities.

Psychological reactions towards service and personal robots, ranging from wonder and plain acceptance to cautious circumspection and outright hostility, are documented in popular science reports and analysed in Human-robot Interaction (HRI) studies. Psychological attitudes towards service and personal robots are selectively examined here from the vantage point of psychoanalysis. Significant case studies include the well-known uncanny valley effect, brain-actuated robots evoking magic mental powers, parental attitudes towards robotic children, idealisations of robotic soldiers, persecutory fantasies involving robotic components and systems. Freudian theories of narcissism, animism, ideal ego and ego ideal, infantile sexuality and complexes are brought to bear on these various items.

With his reflective work on art and literature, religion, war, anthropology, and mass psychology, Sigmund Freud paved the way to applying the bulk of psychoanalytic theorizing beyond strictly therapeutic contexts—albeit in isolation from transference and other patient-analyst dynamic relationships. Attention has been drawn to human-computer interaction (HCI) as a fruitful technological domain for psychoanalytic discourse (Turkle 2004; Scalzone and Zontini 2008). The ensuing case studies vividly demonstrate that the present horizons of HRI afford fertile grounds for psychoanalytic interpretive and explanatory efforts.

2 Robotic systems and the uncanny

A phenomenological regularity hypothesized by Masahiro Mori predicts that robots become monotonically more agreeable and familiar to people as their shapes, expressions, and movements take on increasingly anthropomorphic features (Mori 1970). According to the same hypothesis, however, a sudden plunge of robot familiarity and acceptance occurs when resemblance to humans comes close to identity. This abrupt deflection in the graph of Mori’s correlation (see Fig. 1) is aptly called the “uncanny valley,” insofar as lack of familiarity is accompanied there by subjective experiences of uncanny feelings.

Fig. 1 Familiarity of robots plotted as a function of their human likeness. Adapted from Mori (1970)
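Mori's hypothesized non-monotonic profile can be made concrete with a toy function. The functional form below is invented purely for illustration (Mori proposed no equation), and the valley is arbitrarily placed near 85% human likeness.

```python
# Toy, qualitative sketch of the uncanny valley: familiarity rises with
# human likeness, plunges sharply just before full likeness, then recovers.
# All numbers are illustrative assumptions, not empirical values.

def familiarity(likeness):
    """likeness in [0, 1]; returns a qualitative familiarity score."""
    valley_centre, valley_width, valley_depth = 0.85, 0.05, 3.0
    rising = 1.5 * likeness  # overall trend: more human-like, more familiar
    # A triangular dip around the valley centre models the sudden plunge.
    dip = valley_depth * max(0.0, 1 - abs(likeness - valley_centre) / valley_width)
    return rising - dip

for x in (0.0, 0.5, 0.85, 1.0):
    print(f"likeness {x:.2f} -> familiarity {familiarity(x):+.2f}")
```

The point of the sketch is only the ordering: a moderately human-like robot scores higher than one in the valley, while near-perfect likeness scores highest.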

Mori conjectured the regular occurrence of the uncanny valley effect from his observations of human reactions to mannequins and early humanoid robots. More systematic investigations of human attitudes towards robot photographs (Hanson 2005) and computer-generated faces (MacDorman et al. 2009) failed to detect similarly robust correlations. Nevertheless, these various studies converge on predicting that imperfections in human-like features, such as faulty facial proportions, are a likely source of the uncanny.

2.1 Freudian approach to the uncanny

MacDorman et al. (2009) sort out proposed explanations of the uncanny according to whether their explanans involves automatic responses to or else more properly cognitive processing of perceptual stimuli. Freud’s own account (Freud 1919) builds on a critical analysis of an early cognitive processing explanation (Jentsch 1906), according to which the uncanny stems from a state of perceptual categorization uncertainty, especially in connection with human or non-human, animate or inanimate, alive or dead perceptual judgments. Let us examine Freud’s critique.

Freud regarded states of cognitive uncertainty as neither necessary nor sufficient to produce uncanny reactions to wax statues, dolls, and human-like automata. In this connection, he pointed out that an uncanny effect may occur even though the perceived object is conclusively verified to be an inanimate wax statue; and that children experience uncertainties as to whether the doll they are playing with is animate without ipso facto developing uncanny feelings. Freud’s alternative account postulates two necessary conditions for uncanny experiences to occur in the presence of evoking perceptual conditions: (a) the repression of some mental contents; (b) the circumstance that the repressed mental contents are “something familiar which ought to have been kept concealed but nevertheless comes to light” (Freud 1919, p. 241).

Freud mentions the castration complex and animistic conceptions of the world as key elements underpinning uncanny experiences in real life. The primitive beliefs characterizing animistic conceptions of the world are endorsed by children in a normal stage of their development and persist unconsciously in adult life as repressed mental contents. Animistic beliefs notably concern magic wish-fulfilling, the omnipotence of thoughts, and the distribution of magic powers onto various animate and inanimate entities. Thus, the sight of wax statues or moving automata resembling zombies or corpses apparently corroborates the unconscious belief that these entities are endowed with magic powers. Moreover, the emotional ambivalence towards one’s own dead, when resonating with these animistic beliefs, may internally evoke the idea that the dead are not inevitably going to make benevolent uses of their magic powers.

The Freudian account of animistic beliefs unconsciously affecting the mental life of adults presupposes an intra-systemic splitting of the ego into two different components. The defensive mechanism of reality rejection by disavowal brings about this division. Thus, along with a conscious part of the ego—abiding by the reality principle and acknowledging shared reality—one finds an unconscious part of the ego—working according to the animistic mode of thought and enacting magic wish-fulfilling. From a genetic viewpoint, the defensive mechanism of reality rejection leads to a splitting of the ego and the ensuing establishment of fetishes as an idealized substitute for the maternal penis/phallus. Reality rejection by disavowal, splitting of the ego, and fetish formation come to constitute a ternary psychological structure defending the individual from primary anxieties. These interacting processes, as we shall shortly see in Sect. 5, play a crucial role in various pathological conditions and case studies involving fantasies about automata.

2.2 Theory-of-mind sources of the uncanny

Compared to wax statues and mechanical automata circulating at the time of Freud’s reflections, contemporary humanoid robots are likely to afford richer perceptual opportunities for uncanny experiences to occur—especially in view of their incomparably wider repertoire of human-like movements and goal-oriented behaviours, natural language competence, and learning abilities. Interestingly, in the very early days of machine learning, Norbert Wiener identified a potential source of the uncanny in the newly demonstrated learning abilities of computer programs:

In playing against such a machine, which absorbs part of its playing personality from its opponent, this playing personality will not be absolutely rigid. The opponent may find that stratagems which have worked in the past, will fail to work in the future. The machine may develop an uncanny canniness. (Wiener 1964, p. 21, our italics.)

Wiener’s description of the behavioural plasticity of learning programs makes a distinctive use of psychological terms. Likewise, the so-called intentional stance (Dennett 1987) allows one to ascribe beliefs, desires, and intentions to complex computational and robotic systems in order to predict their behaviours. And similar intentional attributions are made in the framework of belief-desire-intention models (BDI models for short) in order to provide intelligible and concise descriptions of the perception and action planning functionalities of artificial agents. In particular, BDI models represent software or robotic agents as capable of acquiring beliefs about the world, identifying desires that are compatible with their beliefs, and selecting intentions to act that are appropriate to attain desired goals (Bratman 1987; Wooldridge 2000). In these various contexts, robotic and computational agents are construed as purposeful systems, whose goals and actions are accounted for in terms of the ascription of intentional attitudes.
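The belief-desire-intention cycle just described can be sketched in a few lines of code. All names below (the class, the toy consistency check, the example predicates) are hypothetical illustrations, not the API of any actual BDI framework.

```python
# Minimal BDI-style deliberation loop: revise beliefs from percepts,
# filter desires against beliefs, and commit to the survivors as intentions.

class BDIAgent:
    def __init__(self):
        self.beliefs = set()     # facts the agent currently holds true
        self.desires = set()     # states of affairs the agent would like
        self.intentions = []     # desires the agent has committed to pursue

    def perceive(self, percepts):
        # Belief revision: incorporate new observations.
        self.beliefs |= set(percepts)

    def _consistent(self, desire):
        # Toy consistency check: a desire is ruled out if its negation
        # (modelled as "not:<desire>") is currently believed.
        return f"not:{desire}" not in self.beliefs

    def deliberate(self):
        # Commit to the beliefs-compatible desires as intentions.
        self.intentions = sorted(d for d in self.desires if self._consistent(d))

agent = BDIAgent()
agent.desires = {"reach_charger", "tidy_room"}
agent.perceive(["battery_low", "not:tidy_room"])  # tidying is infeasible now
agent.deliberate()
print(agent.intentions)  # -> ['reach_charger']
```

Here the agent commits only to the desire that survives the consistency filter, mirroring the BDI picture of intentions as beliefs-compatible desires selected for action.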

Intentional talk about computational and robotic systems admits either conventionalist or realist interpretations. Each of these ontological options affords, in its turn, novel opportunities for the uncanny to emerge. Consider first the conventionalist, “as-if” interpretations of intentional talk about machines, according to which successful prediction of goal-driven behaviours or effective design outcomes are sufficient motives to use intentional talk about robots, and yet insufficient to ascribe them genuine intentional states. This conventionalist construal of the intentional stance carries with it the problem of justifying the application of different ontological standards to robots and humans, respectively. Are there distinctive motives that one can adduce to single out human beings as genuine intentional agents from the class of entities, including some robots and computer programs, to which the intentional stance is systematically and successfully applied? Lingering doubts about a positive answer to this question may evoke the uncanny feeling that intentional talk about human beings is nothing but a convenient predictive tool. After all, we may turn out to be—just like present-day robots—complicated goal-seeking mechanisms possessing no genuine desires and intentions. Realist interpretations of the intentional stance are not immune from similar quandaries, insofar as one attributes genuine intentionality to a different species of intentional agents. In particular, robotic intentions may appear to be strange, unfamiliar, and difficult to predict on the basis of theory-of-mind models that are tailored to human emotional responses and empathy for the shared human condition.

In concluding this section, it is worth noting that the psychoanalytic significance of both conventionalist and realist interpretations of the intentional stance towards robotic agents goes well beyond the problem of identifying novel sources of the uncanny. Indeed, both interpretations are involved, as we shall see in Sect. 5, in psychoanalytic case studies concerning autistic children and delusions of persecution.

3 Psychoanalysis and media reports on robotic systems

In his Thoughts for the times on war and death, Freud remarked:

It is an inevitable result of all this that we should seek in the world of fiction, in literature and in the theatre compensation for what has been lost in life… For it is really too sad that in life it should be as it is in chess, where one false move may force us to resign the game, but with the difference that we can start no second game, no return-match. In the realm of fiction we find the plurality of lives which we need. (SE, vol. 14, p. 291).

Psychologically compensating technological scenarios, once explored only in literary works, are now within the purview of robotics research programmes. These scenarios impinge on the general public through their dissemination in the media and popular science reports. To illustrate, consider first brain-computer interfaces (BCIs) and brain-actuated robots in the light of Freudian accounts of narcissism and magic thinking.

3.1 Brain-actuated robots

A BCI system processes brain activity online and enables one to control both information and communication technology (ICT) devices and robotic systems on this basis. More specifically, computational classification processes enable one to identify neural “signatures” of designated mental states from neural activity recordings (such as the non-invasive and high temporal resolution recordings that an electroencephalogram (EEG) makes available). And each one of these recognizable mental states can be translated into some specific control command for robotic wheelchairs, virtual keyboards for word-processing, robotic arms for grasping and manipulating, and a variety of other ICT devices (del Millán et al. 2010). On this account, BCI technologies promise to afford special technological support for the benefit of severely paralyzed patients who are unable to use their neuromuscular pathways to communicate and act (Tamburrini 2009a).

The openings of several popular science and media reports convey the idea that BCI technologies enable one to affect the environment by the force of thought only. These openings are usually supplemented with more naturalistically oriented accounts of BCI functioning in terms of neural activity recording and classification. Freud’s theory of narcissism makes an explanatory basis available for understanding the compensating psychological role of BCI presentations, which strike the chord of the “force of thought.” Indeed, adults give up in their conscious life both primary narcissism and animism, coming to terms with the reality principle and acknowledging death as inevitable. However, one unconsciously seeks compensations for these psychological losses and blows to self-love. Reports on BCI technologies provide some such compensation—an opportunity for what Freud called a fictitious “return-match” or “second game”—taking the form of illusory enhanced control of the external world by magic wish-fulfilling.

Similar psychological compensations are available to users of BCI systems. These users experience a regular association between their thinking activity and ensuing modifications of the environment, which entrenched Humean habits may induce one to construe as “direct” causal links—requiring no intervening muscular movements—between one’s own mental efforts and machine action. For any tension to arise within the phenomenologically accurate, “force-of-thought” account of BCI functioning, one has to step out from the first-person, subjective perspective of a BCI user. To illustrate, consider the BCI speller communication protocol involving the consecutive flashing of alphabet letters on a screen, and requiring the user to concentrate on the letter she wants to select and write. After a few showings of the intended letter on the screen, the user finds out that the BCI system fulfils her writing intent. One has to switch to third-person accounts of this process in order to override the subjective impression of an unmediated correlation between one’s own mental efforts and machine behaviours. Indeed, a third-person account tells one that intent fulfilment is caused by the identification of a revealing P300 signal from the EEG; and that a P300 signal is produced by the user’s brain whenever an infrequent perceptual item of interest (the letter she wants to write) appears among more frequent and non-salient perceptual items (the other letters of the alphabet).
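The third-person account of the P300 speller can be made concrete with a toy simulation: the attended, infrequent letter is modelled as evoking a larger response than the non-targets, and averaging over repeated flashing rounds lets the system pick it out. All numerical values are invented for illustration; a real BCI classifies recorded EEG epochs instead.

```python
# Toy sketch of the third-person account of a P300 speller. The target
# letter is the rare, attended stimulus, so its flashes elicit a larger
# "P300-like" response than non-target flashes (amplitudes are invented).

import random

random.seed(0)
ALPHABET = "ABCDE"   # reduced alphabet for the sketch
TARGET = "C"         # letter the user concentrates on

def simulated_response(letter):
    # Attended (rare, salient) stimulus -> larger evoked amplitude, plus noise.
    base = 1.0 if letter == TARGET else 0.2
    return base + random.gauss(0, 0.1)

def spell_one_letter(n_repetitions=10):
    # Average responses over several flashing rounds, as a speller does,
    # then select the letter with the largest summed amplitude.
    totals = {letter: 0.0 for letter in ALPHABET}
    for _ in range(n_repetitions):
        for letter in ALPHABET:  # each letter flashes once per round
            totals[letter] += simulated_response(letter)
    return max(totals, key=totals.get)

print(spell_one_letter())  # -> C (the attended letter wins)
```

Seen from this third-person perspective, nothing unmediated links mental effort to machine action: the “force of thought” reduces to averaging stimulus-locked responses and picking the maximum.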

3.2 Robots as children

Freud adduced the animistic conception of the world as evidence for revising his early theory of libido. In particular, Freud came to postulate an original direction of the libido (cathexis) towards the ego in the early stages of human life (see Freud 1914, p. 75). He surmised that various kinds of object-cathexes occurring later on in life take their origin in these primary narcissistic orientations. Thus, an adult may come to love what he himself was, and notably the child that he once was.

If we look at the attitude of affectionate parents towards their children, we have to recognize that it is a revival and reproduction of their own narcissism, which they have long since abandoned. […] Illness, death, renunciation of enjoyment, restrictions on his own will, shall not touch him; the laws of nature and of society shall be abrogated in his favour; he shall once more really be the centre and core of creation—‘His Majesty the Baby’, as we once fancied ourselves. […] At the touchiest point in the narcissistic system, the immortality of the ego, which is so hard pressed by reality, security is achieved by taking refuge in the child. (Freud 1914, pp. 90–91).

Consider, in the light of this Freudian account of parental love, an article on robotic learning and development published in the section News of the journal Nature:

Giulio Sandini cannot help smiling as his child reaches out a hand and tries to grasp the red ball that Sandini keeps waving before his eyes. “He is getting really good at it,” he says, with the proud tone of any father. True, most fathers would expect more from their three-year-old than the ability to grasp a ball. But Sandini is indulgent: although the object of his affection has the wide eyes and rounded cheeks of a little boy, he is, in fact, a robot. His name is iCub or, as the team calls him, iCub Number 1. (Nosengo 2009, p. 1076).

A positive analogy is drawn here between robotic and real children on the one hand and engineers and parents on the other hand. Psychoanalytic theorizing supports this positive analogy by reference to fantasies and unconscious projections concerning child birth and nurture. By the same light, however, one can easily find significant points where the positive analogy breaks down, and a negative analogy emerges instead. Here is a rough sketch of how the positive and negative analogies interleave.

Shapes, behaviours, and actions of a robotic child may resemble those of a real child. The article on the robotic child iCub emphasizes these similarities, contextually attributing affectionate parental attitudes to a robotic engineer. In the context of the positive analogy, psychoanalytic models suggest an interpretation of these attributions of parental attitudes as a revival, reproduction, and projection on the robotic child of the primary narcissism that human beings go through in the early stages of their life. In the context of the negative analogy, however, one ought to note that an engineer concomitantly plays maternal and paternal roles with respect to her robotic child. And the latter was designed by the engineer rather than being generated by two individuals. Real children are protagonists of their own life-course, and typically retain an important share of personal responsibility in determining their own worth. Robotic children do not possess similar responsibilities and roles, so that robotic engineers must take full credit or blame for the overall quality, accomplishments, and failures of their robotic “children.”

Another major interpretive key that psychoanalysis makes available to account for psychological attitudes towards robotic “children” is worth mentioning here, even though its detailed treatment goes beyond the scope of this paper. This interpretive key is afforded by Freud’s well-known theory of the castration complex, which posits a symbolic similarity relation between penis and child (Freud 1917). A real child may become in fantasy a symbolic substitute for the presumed loss of the penis by castration, thereby compensating the mother for this fantasized loss. Similarly, in the light of the castration dynamics taking place during the Oedipal process, a robotic infant may come to represent idealized genitals that symbolically replace the missing object.

3.3 Robotic soldiers

Here is an excerpt from an article appearing in the New York Times on February 16, 2005:

The American military is working on a new generation of soldiers, far different from the army it has. “They don’t get hungry,” said Gordon Johnson of the Joint Forces Command at the Pentagon. “They’re not afraid. They don’t forget their orders. They don’t care if the guy next to them has just been shot. Will they do a better job than humans? Yes.” The robot soldier is coming.

The robotic soldier is unaffected by critical “weaknesses” of human soldiers—fear, paralysing empathy for companions or enemies, temporary inability to remember or comply with orders, limited energies, and basic human needs for food and rest.

Robotic soldiers are prized for their moral qualities too. The International Herald Tribune (Nov. 26, 2008) attributes to robotic engineer Ronald Arkin the claim that robots that are autonomous in their firing decisions will eventually behave more ethically than human soldiers in the battlefield. Critical discussion of similar technological outlooks and their ideological uses (see, for example, Capurro and Nagenborg 2009; Asaro 2009; Tamburrini 2009b; Weber 2009) is presently supplemented by an analysis of their psychological underpinnings in the light of psychoanalytic theorizing.

To begin with, consider ideal combatant qualities of robots, that is, strengths of robot soldiers that are only imperfectly approximated by human soldiers. These idealized descriptions of robotic soldiers express narcissistic projections on the ideal ego: “This ideal ego is now the target of the self-love which was enjoyed in childhood by the actual ego. The subject’s narcissism makes its appearance displaced on to this new ideal ego, which, like the infantile ego, finds itself possessed of every perfection that is of value” (Freud 1914, p. 94). As the robotic soldier is a potentially formidable weapon, its ideal combatant qualities more specifically flow from projections of the destructive component of omnipotent infantile narcissism. This destructive component is to be clearly distinguished from the libidinal component of infantile narcissism: the former takes its origin, unlike the latter, in frustrations occurring in the initial months of life and related recovery attempts that the child enacts by releasing aggressiveness.

Let us now turn to robotic soldiers potentially surpassing human soldiers in their moral behaviour. In this imagined scenario, one is entitled to delegate morally significant decisions to robots, possibly shifting the entire burden of moral requests and responsibilities from humans to robots. In psychoanalytic models, similar scenarios presuppose an intra-systemic splitting between the ego and the super-ego. And the ensuing projections on robots primarily involve paternal qualities and moral consciousness as expressions of the super-ego. Media amplifications of such guesses about robotic morality are extrapolated from the context of discussions among specialists, and passed on to audiences mostly lacking the technological competence that is needed to assess their plausibility. De-contextualized technological guesses prepare fertile grounds for protective delusions (megalomania) (Freud 1896, p. 227), which involve an altered ego projecting a morally good and idealized object on the robot.

4 Idealization and devaluation of robotic systems

Superseded technologies, like hopelessly aged science-fiction, are discarded as targets of unconscious narcissistic projections. Consider the phototropic torpedo dubbed dog of war, which was envisaged during WWI as a dual-use military upgrade of a light-seeking device called the electric dog. Based on a negative feedback mechanism, this torpedo would be able to identify light sources aboard target vessels, and eventually hit those moving targets in virtue of its light-seeking behaviour. The dog of war was heralded with attributes that do not differ much from those bestowed upon robotic soldiers almost ninety years later:

The electric dog, which now is but an uncanny scientific curiosity, may within the very near future become in truth a real ‘dog of war,’ without fear, without heart, without the human element so often susceptible to trickery, with but one purpose: to overtake and slay whatever comes within range of its senses at the will of its master. (Miessner 1916, p. 199).

Let us note in passing the “uncanny” quality that is attributed to the electric dog and concentrate, for our present purposes, on the evident commonalities between presentations of robotic soldiers and dogs of war. These commonalities suggest that the same narcissistic contents come to be projected time after time onto different target objects that are picked out from the more innovative devices that gradually become available. But what are the psychological processes initially driving and then leading one to retract these narcissistic projections? This question is profitably examined in the light of Freud’s observations about the transition from worshipping to depreciating attitudes towards genital organs:

People will not reach a proper understanding of the activities of children’s sexuality and will probably take refuge in declaring that what has been said here is incredible, so long as they cling to the attitude taken up by our civilization of depreciating the genitals and the sexual functions. To understand the mental life of children we require analogies from primitive times. […] the genitals were the pride and hope of living beings; they were worshipped as gods and transmitted the divine nature of their functions to all newly learned human activities […] In the course of cultural development so much of the divine and sacred was ultimately extracted from sexuality that the exhausted remnant fell into contempt. But in view of the indelibility that is characteristic of all mental traces, it is surely not surprising that even the most primitive forms of genital-worship can be shown to have existed in very recent times and that the language, customs and superstitions of mankind to-day contain survivals from every phase of this process of development. (Freud 1910, pp. 96–97)

Evolving mental attitudes towards robots—which exert on us a special seductive power on account of their adaptive and intelligent action capabilities—can be accommodated within this interpretive framework. Novel technological developments are accompanied by fantasizing activities that enable individuals to project onto technological devices their genitals and their ego ideal, in addition to various magic and worshipped functions of their psyche. Human genitals, as Freud emphasized, “transmitted the divine nature of their functions to all newly learned human activities,” including the ability to control and act through the intermediary of machines. Thus, the more specific idealizations concerning perceptual, reasoning, planning, and acting abilities of robots stir up the illusion of magically attaining unlimited action and mental processing powers by means of robotic systems.

Omnipotence and omniscience projections on robotic systems presuppose an internal splitting process and a projective identification mechanism investing the object: one splits the object into a bad and a good object; solely negative attributes are projected onto the bad object; positive attributes attaining their maximum values are projected onto the good object; and one preferably tends to identify oneself with the good object. One’s own mourning for the loss of divine properties—such as omniscience and omnipotence—is defused and modulated by means of this projective identification mechanism investing the robot as a good object.

Freud regarded omnipotence of thoughts and wish-fulfilling—more in general magic thinking—as forerunners of technology, insofar as primitive minds assign to magic thinking the controlling and protective functions that one is usually inclined to assign to modern technologies:

In their struggle against the powers of the world around them their first weapon was magic, the earliest fore-runner of the technology of to-day. Their reliance on magic was, as we suppose, derived from their overvaluation of their own intellectual operations, from their belief in the ‘omnipotence of thoughts’, which, incidentally, we come upon again in our obsessional neurotic patients. (Freud 1932, p. 165).

This instrumental role of magic thinking takes on more subtle connotations in contemporary society. Scanty technological resources force primitive people to rely on a magical use of magical thinking to protect themselves from external threats. In technologically affluent societies, one may additionally direct the animistic mode of thought towards a variety of advanced technologies, thereby installing a magical use of technological thinking. This use of technological thinking is potentially more deceptive and dangerous than the magical use of magical thinking: it is more easily mistaken for genuine rational thinking, even though its rational outer shell is mere camouflage and vehicle for rudimentary animistic thoughts that one releases to defend oneself from primary anxieties.

Magic control of technologies may engender the illusion of enhanced control over one’s own emotional life. In particular, a narcissistic relationship with a robot may shield one from recognizing the other and one’s own dependence on the other. Thus, the idea that we are cold mechatronic apparatuses like robots has—along with its frightening side—a protective function, insofar as it shields one from separation and abandonment anxieties rooted in the protracted condition of infantile helplessness (Hilflosigkeit) and extended need for parental care.

In establishing similar narcissistic relationships with a machine, one actively confounds oneself. In some cases, it is as if one discontinues functioning according to the normal mode and pace of human organisms: like an autistic child, one switches to the automatic functioning of the robotic systems that autistic minds fantasize about or assemble from various materials (see Sect. 5 below). In some other cases, technological advances may act as powerful triggers of illusions that are conducive to the manic and eagerly desired unification between ego and ego ideal. This technological path to manic unification may replace the unification shortcuts afforded by ideology and mysticism, which symbolize the fusion between the individual as subject and his primary maternal object. The alternative solution that religion points to requires one to undertake a unification path that is both longer and strewn with sacrifices (see Chasseguet-Smirgel 1975, p. 236 f.).

In the long run, projective idealizations on robots are likely to wane, as one realizes that a real robot cannot meet unreasonable omnipotence expectations. Accordingly, the human subject takes back on herself the divine component formerly projected onto the robot, thus re-establishing original narcissism as far as the involved bodily and mental features are concerned. The “exhausted remnant,” to use Freud’s own words, is the machine-genital, which falls into contempt and cultural devaluation. This psychological turn adds a special psychological motivation to technical reasons for developing ever new and more powerful technological devices. Indeed, newly introduced models supply fresh materials for omnipotence dreams that one cannot project any longer on their exhausted predecessors. More occasionally, as we shall presently see, this psychological turn sets the stage for a delusion of persecution.

5 Influencing machines, life-supporting machines, and robots

Individuals occasionally remain subjugated to the machine insofar as the split object of projections—now discarded and utterly despised—becomes a thoroughly “bad” and vindictive object. In paranoid delusion, one desperately attempts to set up defences against such destructive intrusions into the psyche. A delusion of persecution, concerning former targets of omnipotence and aggressiveness projections, may ultimately reflect the need for controlling and holding off the primary maternal object on account of its overwhelming and frightening omnipotence. Likewise, machines may come to be identified with fetishes—symbolic substitutes for the omnipotent maternal phallus that help one to negate castration anxieties. A robot may even become an autistic object protecting one from utterly unrepresentable and ultimately unthinkable subjective fragmentation anxieties. Mental phenomena of these various sorts may occur in both normal and mental illness conditions. Their manifestations in mental illness are selectively recalled here by reference to some well-known psychoanalytic case studies.

Tausk (1919) describes influencing delusions, especially observed in schizophrenic patients, which take the form of imaginary “influencing machines.” These machines are endowed with properties that are borrowed from coeval technologies. Tausk’s influencing machines may affect one by means of levers, wires, cogs, luminous rays, air currents, and so on—inducing visual hallucinations, uncanny bodily stimulations and sensations, in addition to inserting or draining off thoughts and feelings. These effects of “mysterious forces which the patient’s knowledge of physics is inadequate to explain” additionally include “motor phenomena in the body, erections and seminal emissions, that are intended to deprive the patient of his male potency and weaken him,” skin eruptions, abscesses, and a variety of other bodily symptoms. (Tausk 1919, pp. 521–522).

According to Tausk, the influencing machine is a symbolic projection of genital organs:

The evolution by distortion of the human apparatus into a machine is a projection that corresponds to the development of the pathological process which converts the ego into a diffuse sexual being, or—expressed in the language of the genital period—into a genital, a machine independent of the aims of the ego and subordinated to a foreign will. It is no longer subordinated to the will of the ego, but dominates it. Here, too, we are reminded of the astonishment of boys when they become aware for the first time of erection. And the fact that the erection is shortly conceived as an exceptional and mysterious feat, supports the assumption that erection is felt to be a thing independent of the ego, a part of the outer world not completely mastered. (Tausk 1919, p. 556)

In both normal and pathological conditions, one may perceive an erection and other stimuli from the genital organs as something alien, insofar as those stimuli escape the ego’s direct control, as tics and epileptic seizures do. The arousal associated with those alien stimuli may be felt as threatening and, on this very account, projected externally onto the influencing machine. More generally, the influencing machine may become a projection of the whole mental and bodily apparatus, which is experienced as uncontrollable by the ego. In those circumstances, one locates the origin of one’s own bodily sensations and thoughts in the pervasive action of the influencing machine.

Autistic children are often reported to imagine machines, which they occasionally develop into material models. Bettelheim (1967) describes the case of Joey, a child moving like a remote-controlled mechanical man. Joey assembled wires and vacuum tubes into his “machines.” These contraptions kept him alive insofar as he connected to them and extracted from them life-supporting energy. Eventually, however, he felt that he had lost control of and destroyed his machines.

Tustin (1972) describes the behaviour of David, an autistic child, in the course of his psychoanalytic treatment. To protect his psyche, David built a cardboard model of the headpiece and hand of a suit of armour. The cardboard headpiece encasing his head represented the mental armour formed by his autistic defence mechanisms. The cardboard armour as a whole represented a body he could get into—a body that he identified with his father’s body. According to Tustin, David felt that getting wrapped up in some sort of robotic exoskeleton was expedient to survival. This manoeuvre, however, was an impediment to psychological development too. Indeed, his attempt to possess a body resulted in his being locked within his protective shell, that is, in an imprisoned mind and body. Let us note, in passing, that David’s psychological “solution” appears to be meaningfully related to the fantasy of returning into the maternal uterus. One may read off this fantasy—as Freud pointed out in the public discussion of Tausk’s work (Tausk 1919, p. 545, n. 9)—from the Egyptian practice of putting mummies into cases resembling human bodies.

To sum up, robots may become idealized objects in fantasy, that is, omnipotent fetishes helping one to deny castration and death anxieties. However, in both normal and pathological conditions an initial robotphilia may turn into different varieties of robotphobia. The above case studies suggest that adaptive and intelligent robots lend themselves to pathological projections of various kinds, such as those leading one to identify machines with persecutory objects or autistic objects. The latter are typically experienced as a total “me” fending off a threatening “not-me” (Tustin 1972), so as to protect one from subjective fragmentation anxieties.

6 Conclusions

Robots are endowed with wide repertoires of goal-oriented behaviours and human-like movements, natural language use, and perceptual, reasoning, and learning capabilities. These capabilities make robots at once alien and uniquely similar to us. Projections of omnipotence and omniscience fantasies may take the form of idealized robotic systems. In some cases, a robot may even take on the role of a narcissistic double reassuringly representing a reproducible—and thereby undying—self. The other side of the same coin reveals itself in the uncanny feelings elicited by robotic bodily and mental features. More crucially, projective identification mechanisms provide a basis for turning narcissistically idealized objects into persecutory objects. A robot may come to be viewed as a potentially threatening object, to the point of becoming a powerful source of anxieties in the form of a rebellious Golem. However, the threats that humans may see as coming from alien features of robots more likely descend from the overly human inclinations that one projects onto robotic systems. These overly human attributions set in motion one’s own imaginative activities about robots and amplify them, by positive feedback, in a variety of directions. Accordingly, conflicts between human beings and intelligent machinery are first and foremost conflicts between human beings and their own inner world, rather than conflicts between human beings and robots.

The psychological suggestion that robots may improve the life of human beings and alleviate the limitations that reality imposes on them is substantively based on genuine and impressive advances in robotics. However, imaginative activities about robots may influence practical decision-making about the funding of research programmes in robotics and the installation of service robots in a variety of human habitats. Indeed, the special seductive power of robotic systems depends in part on their idealization. In particular, it was noted above that an extreme idealization of robotic perceptual, reasoning, memory, and action capabilities fuels the illusion that one can magically make the dream of omniscience and omnipotence come true by controlling robotic behaviours. Thus, psychoanalysis distinctively contributes to unveiling magical uses of scientific and technological thinking that may unduly influence public debate on the use of robots in our societies and funding schemes for research programmes in robotics. Clearly, psychoanalytic interpretive and explanatory exercises play a similar role in connection with presumed robotic threats arising from malevolent idealizations of robotic technologies and systems.

Robotics evokes myths of animate beings created from inanimate matter. The ambivalence of creation myths extends to robotic scientists too. Adam revolted against his divine creator; a robotic creature may likewise revolt against its human creator. But this simile breaks down at crucial junctures. To begin with, human creators of robotic systems are not omnipotent, unable as they are to exert full control over either robots or themselves. Moreover, human beings are afraid that robots will drive them away from their privileged—but admittedly less than divine—position in the world. This displacement fear involves both sides of the projective identification coin. On the one side, humans are afraid of losing control of robotic behaviours, just as the ego fails to control the unconscious. On the other side, idealized robotic abilities point to a fourth major blow against the naïve self-love of human beings, which the future may bring to them in the wake of the previous scientific blows delivered by Copernicus, Darwin, and Freud, respectively.

In the course of centuries the naïve self-love of men has had to submit to two major blows at the hands of science. The first was when they learnt that our earth was not the centre of the universe but only a tiny fragment of a cosmic system of scarcely imaginable vastness. This is associated in our minds with the name of Copernicus, though something similar had already been asserted by Alexandrian science. The second blow fell when biological research destroyed man’s supposedly privileged place in creation and proved his descent from the animal kingdom and his ineradicable animal nature. This revaluation has been accomplished in our own days by Darwin, Wallace and their predecessors, though not without the most violent contemporary opposition. But human megalomania will have suffered its third and most wounding blow from the psychological research of the present time which seeks to prove to the ego that it is not even master in its own house, but must content itself with scanty information of what is going on unconsciously in its mind. (Freud 1916–1917, pp. 284–285)

In the foreseeable future, a fourth blow is unlikely to come from a replication of human consciousness and emotions in robotic systems: this objective appears to be a remote technological dream in the light of present limitations of scientific knowledge about the mechanisms underlying both conscious experience and the feeling of emotions. More pressing threats to the naïve self-love of human beings are likely to come from the development of robotic intelligence possessing neither consciousness nor emotions. In their relatively short histories, both robotics and AI have provided plentiful demonstration that intelligent perception, reasoning, and action without consciousness and emotions are technologically possible. One may even suspect that robots, unfettered as they are by human emotional impediments, may soon achieve superior practical reasoning and decision-making capabilities. Accordingly, a fourth major blow against human self-love might come at the hands of intelligent robots lacking consciousness and emotions. These robots might become indispensable to us in virtue of their better-than-human perceptual, reasoning, and action skills, thus revealing a new and more unsettling kind of human weakness and dependence—that is, the dependence of human intelligence on machine intelligence.

Awareness of unconscious projections onto robotic systems threatens neither the scientific and technological significance of robotics nor the valued work of robotic engineers. To begin with, it is worth noting that the intellectual and social value of engineering—along with other forms of technological, scientific, and artistic creativity—is duly emphasized in the framework of Freudian psychoanalysis. Indeed, the automatic and unconscious process of sublimation enables one to mould and harness one’s own libidinal and aggressive drives so as to induce complex behaviours that are both gratifying for individuals and useful for society as a whole. In addition, psychoanalytic interpretations may even provide new hints and feedback for human-centred design in robotics. We have seen that Freudian accounts of magic thinking make available both an explanatory model for the uncanny valley effect and a rationale for Mori’s practical precepts about the design of humanoid robot shapes and movements (Mori 1970). More generally, depth psychology provides technology designers with unique insights into the unconscious reactions to, and projections onto, robotic systems that influence HRI. On the whole, the awareness of unconscious mental life fostered by psychoanalysis makes available a unique perspective from which to understand the place of robotics in human culture, to identify unconscious sources of biases and resistances towards robotic technologies in contemporary society, to evaluate the potential impact of these inclinations on robotic research funding policies, and ultimately to achieve a deeper understanding of what it is to be human in a world of ubiquitous, increasingly adaptive, and intelligent technologies.