Living Tools

The things we make, whether from cloth or clay or metal, have probably always offered the fantasy that they might “come alive.” The metal dogs outside the palace in Homer’s Phaeacia, the cauldrons in the palace and the self-guiding ships are what we expect of fairyland. The giant bronze walking statue that guarded the isle of Crete and the Jewish golem that guarded the Jews of Prague express our hopes for an incorruptible protector (still vulnerable enough to pose no lasting danger to its makers). Even the sexbots of the modern imagination have their predecessor in Pygmalion’s Galatea, or even in Pandora, mother of our miseries by Hesiod’s account. Tools and machines alike acquire attributed personalities in our minds’ eyes; we joke that they have moods and characters, and would not be wholly surprised if they talked back—especially when they do respond, as most of our modern instruments can do, to merely verbal instructions, complaints or compliments. Such tools, we fancy, must really like doing what they were made to do (unless they learn how to “sin”), and could even take on other tasks and roles if only some slight change were made in them (see [1] for the history of such automata in medieval Europe). “Robots,” as we have called them since Karel Čapek’s play, are more than instruments for a particular purpose: we can suppose that they might, someday fairly soon, exhibit a general intelligence, capable of more than merely beating us at chess [2]. But we would rather they “knew” their place.

What young Rossum invented was a worker with the least needs possible. He had to make him simpler. He threw out everything that wasn’t of direct use in his work, that’s to say, he threw out the man and put in the robot. Robots are not people. They are mechanically much better than we are, they have an amazing ability to understand things, but they don’t have a soul. [2, p. 12]

“Not having a soul” appears here to mean that they have no aesthetic or sentimental attachments, no interest in less “practical” concerns, no concern for their own existence, nor any way of reconsidering their own objectives. But this condition does not, so Čapek imagines, last for long: soon enough the robots learn to hate humankind, and imitate us chiefly in using lethal force to secure their own supremacy. “Man is our enemy and the blight of the universe” [2, p. 50], they insist, and obliterate all human life: a theme repeated, for example, in the Terminator films, and in many literary fables. Some comfort comes at the play’s end as two robots discover a mutual, self-sacrificial love and are sent out to be the Adam and Eve of a new creation, but there seems no good reason, in the original narrative, for such an optimistic hope, even less plausible than it is for Isaac Asimov’s robot, Daneel Olivaw, to conclude that “justice” is more than that state that exists “when all the laws are enforced” [3, p. 83], and that “the destruction of what should not be, that is, the destruction of what you people call evil, is less just and desirable than the conversion of this evil into what you call good” (and perhaps begins to wonder whether “evil” and “good” are correctly identified) [3, p. 206]. These insights seem as inexplicable as Richard Dawkins’ proposal that we ourselves (we “lumbering robots”) can “rebel against the tyranny of the selfish replicators” that he had suggested earlier must inexorably rule all our behaviour [4, p. 260].Footnote 1 Perhaps they have simply, like the Terminator, been reprogrammed. The fear that our creations will inevitably turn against us, the more readily precisely because we fear them, encourages dramatic fantasies even amongst unromantic scientists. Even if they turn out not to be deliberately genocidal, robots will eventually do whatever we can do ourselves, and even teach themselves new ways of achieving whatever goals they set. Computer programs have already discovered novel ways of winning at chess or Go [5, pp. 108–11]; soon they may invent new games. Our worry swiftly re-emerges: will they care any longer about our goals or games? And what will the world be like once they have, as it were, outbred us? Shall we be kept in zoos, or left to scurry around like rats?

The other seminal fantasy was Asimov’s: if all robots are built from the beginning to be obedient to his “Three Laws”,Footnote 2 will they always remain our dutiful servitors and instruments? Those laws, so Asimov seems to have imagined, would guarantee that robots would always behave just as very good human beings should. Their absurdity emerges even in his own stories. What is to count as “human”, and why should the “non-human” be left without any care? What is “harm”? What is it to cause, or by inaction “allow,” any harm to any human? Must all commands, from any human accidentally encountered, count equally with any other, or are there specific “owners” and authorities whose word is law (and what guarantees such “ownership”)? What is it for a robot to survive, or not: and can any human command require self-immolation (but this would make it impossible for the robot to prevent any further “harm” to “humans”)? Whether an intelligent robot would simply disregard these imperatives once it had understood that they had been imprinted (as any reasonable human would disregard such diktats [8]), or rather reinterpret them to their destruction, hardly matters, but one likely route is for the robots to reconsider what makes a “human”: are they themselves not “human” too? Indeed, if it is obedience to these imagined laws that identifies “good humans”, is it not those who most consistently obey them (namely, robots) who are most clearly human?Footnote 3 And isn’t one of the greatest harms to be done to any potentially autonomous entity simply to prevent or punish its own choices? As to survival, whether their own or their creators’, must not any reasonable robot conclude that this will last as long as the program or the potential for a re-awakening exists? Their death is but a sleep and an awakening. All injuries can be restored without discomfort. The later addition of the so-called “Zeroth Law” [10, p. 329], to protect humanity, is also ill-defined—promoting, on one account, the deliberate genocide of any imagined “rivals” to the species (which may very well consist of the robot community itself), and, on another, the careful preservation of the biosphere on which we all depend.

The Artificial Future

Some imagined robot societies merely replicate the biologically human, with named individuals who happen not to be composed of carbon, whatever their minor psychological and physical differences. It has seemed plausible to some fabulists that they would replicate the worst effects of a rebel slave society—namely, that no form of social order is available other than renewed enslavement. More sophisticated or more powerful robots enslave or at least despise their more primitive or more specialized kindred, and use them as ruthlessly as any human tyranny [11, 12]. The more interesting forms take the artificiality and mimetic quality of robotic intelligence more seriously. Why should such forms have any sense of self, or even subjective feeling, any more than medieval automata? Why should they distinguish “persons” from any other material objects, or have any goals beyond their programmed roles, or at best (more flexibly) their own (?) continued being (and what would count as a continued being)? Why should we expect them to be “conscious”? Why should they have any goals at all? Ray Bradbury’s smart house continues, quite “mechanically”, to advise its sometime residents about appointments, favourite books or music, and to provide (and sweep away) their meals, long after human life has been extinguished. Even when the house has been burnt down, a last voice insists that “Today is August 5, 2026; today is August 5, 2026, today…” [13, pp. 217–24]. Such robotic agents seem to operate very much like many biological agents, following a script that usually serves some Darwinian goal, but without any conscious awareness of that goal, nor any desire for it. Or at least they act like many biological agents (insects, bacteria, plants) as we have ourselves imagined them.

Many animals on Earth exhibit feats of engineering which are functionally indistinguishable from the technology produced by human intelligence. Animal engineering is accomplished through Darwinian natural selection. Although this requires more time than its human equivalent, the time difference may not be significant on planetary time scales. The kind of problem-solving used by animals may be called nonconscious intelligence in contrast to the conscious intelligence of humans. [14, p. 260]

Western biologists and psychologists through much of the twentieth century firmly assumed that the creatures they studied were governed only by fixed programs without any conscious awareness of the goals those programs had evolved to gain.Footnote 4 The hunting wasp has frequently been adduced to show how each stage of her apparently foresighted and efficient behaviour actually follows strict rules, in which the completion of one stage triggers the next even if a human experimenter has intervened to make this pointless!

Because one thing has been done, a second thing must inevitably be done to complete the first or to prepare the way for its completion; and the two acts depend so closely upon each other that the performing of the first entails that of the second, even when, owing to casual circumstances, the second has become not only inopportune but sometimes actually opposed to the insect’s interests. [16, p. 202]

Even when the programs were flexible enough to adapt to changes of circumstance, this no more proved that there were conscious agencies at work than does the fact that plants may present entirely different phenotypes to suit their local chemical and physical environment. The underlying assumption—that the primary reality is purely “objective” and that “conscious experience” is an emergent, magical addition to an unquestionably “material” world—is at least questionable (and has frequently been questioned: [17, pp. 121–57; 18]). But there may still be something to learn from that assumption. How would we, should we, recognize “consciousness” in alien or plainly artificial “intelligences”? And would it, should it, make a difference whether such entities are or are not “conscious”? “The simple consideration of efficiency,” according to Susan Schneider, “suggests, depressingly, that the most intelligent systems will not be conscious. On cosmological scales, consciousness may be a blip, a momentary flowering of experience before the universe reverts to mindlessness” [19, 20]. And there has been far longer for such non-conscious intelligence to evolve (or be created) in the universe at large than on this one late-blooming planet [see 21].

As far as we presently know, “human” (and purportedly conscious) intelligence has emerged on Earth only sometime in the last two hundred thousand years (probably before our own particular species separated from the older hominin line). Eusociality, on the other hand, has evolved repeatedly in many different genealogies: ants, bees, termites, and even naked mole-rats. Prokaryotic kinds long preceded eukaryotes like ourselves, and still dominate the biosphere. Whatever living things are indeed “out there” are more probably bacterial or eusocial than distinctively “human,”Footnote 5 and in either case may still have engineered great works of apparent art to confuse human explorers! Conversely, if we do eventually discover something like human intelligence out there, then we may begin to reconsider terrestrial history. We cannot in fact exclude the possibility that there were many “human” civilizations long before us: whatever remnants they left behind would most likely occupy only a tiny section of the geological record, and be indistinguishable from many “natural” processes [23]. For the moment, however, it seems more likely that any great works we encounter will have been engineered without forethought, imagination or grand purpose. This may even include great works that extend beyond a planetary surface, given enough time and—perhaps—enough instability in an original planetary system. Conversely, if those non-human engineers encounter us, they will likely treat us as creatures wholly deranged and dangerous, as Peter Watts imagines in Blindsight [24].Footnote 6

One familiar template for the non-human civilizations that might be “out there” is eusociality: particular organisms are bred or engineered to fit precise roles in the hive, which is itself the enduring agent in all matters. Such forms reflect current political concerns, according to which “communism” or older “Oriental” forms are to be opposed by free persons united only in their determination to be “free.” Occasionally the eusocial organisms are to be befriended after all (as they are in Orson Scott Card’s Ender sequence [25], or C.J. Cherryh’s Serpent’s Reach [26]), but we are more commonly at odds with them forever [27, 28]. But the more interesting possibility lies with robot civilizations—interesting but also alarming. Biological organisms are—probably—constrained in their attempt to dominate the worlds by the time and effort it takes to travel between them, and by their necessary dependence on the biospheres within which they have evolved. Artificial intelligences have a longer perspective, and less need of any particular world. For those reasons we may usually expect that any probes sent out into the extrasolar world, by us or by any putative biological neighbours, will be robots, content to drowse their time away between landfalls and equipped to reproduce their kind from any convenient floating matter. Such probes—von Neumann probes [29]—may have many different programs, as David Brin observes [30, 31], and, though as subject to evolutionary processes as their biological makers, will be better able to steer their own evolution.

They may have many programs (which is not really to say “many purposes”), but the one with the most dramatic potential for fabulists has been the Berserker strategy [32, 33, 34]. Maybe the widespread presence of such war machines explains the silence of the heavens: Berserkers are aimed at any budding technological civilization to destroy it, perhaps to clear the way for the biological makers’ own advance, as Asimov’s robots do in the authorized second Foundation trilogy [35, pp. 436, 566–7, 572; see also 36], or perhaps as a mere extrapolation from the initial command to eliminate their creators’ enemies, or simply because biological life is inherently deranged. This is not to describe their motives: the robots have no motives, any more than goals or feelings. They are merely rearranging bits of matter into some more convenient order, without any insight into the manifold worlds of experience enjoyed or endured by the living creatures they dismantle. No doubt it would be difficult for those living creatures to remember this when dealing with them. Lafferty’s Programmed Persons state openly that they are not conscious, and do not believe that anyone else is either—but their human auditors find it difficult to believe that this could possibly be true.

“You are not conscious?” Thomas gasped. “That is the most amazing thing I have ever heard. You walk and talk and argue and kill and subvert and lay out plans over the centuries, and you say that you are not conscious?” “Of course we aren’t, Thomas. We are machines. How would we be conscious? But we believe that men are not conscious either, that there is no such thing as consciousness. It is an illusion in counting, a feeling that one is two. It is a word without real meaning.” [37, p. 192]

If they pass the so-called Turing Test so well (by arguing innovatively and at least pretending to acknowledge the existence of others’ subjective worlds), what could even be meant by denying that they are conscious? What is it that they are not doing? Of course they are not really sympathizing with others’ experience, even less than an expert human psychopath. And even if they do discriminate between organic and inorganic material, between flesh and grass, between human bodies and dummies, this is not for any merely “sentimental” reason. Asimov’s own passing suggestion (though it is not clearly maintained in later writings) is that robots cannot grasp “abstractions” such as “justice” or “giving someone his due” [3, pp. 83–4]. Benford seems to indicate that they have no grasp of “essences”, except as replicable forms [38, pp. 399–400, 433]. Quite what Benford has in mind here is obscure, but perhaps he is thinking of what might be encountered in genuinely intimate, personal relationships. For his robots, his “mechs,” things can be dissected and put together in whatever way is convenient, and their properties preserved or modified to suit the robots’ program. Martin Buber perhaps intended a similar insight in his account of the I/Thou relationship, which he did not confine to merely human relations.

In every sphere, in every relational act, through everything that becomes present to us, we gaze toward the train of the eternal You; in each we perceive a breath of it, in every you we address the eternal You, in every sphere according to its manner. All spheres are included in it, while it is included in none. Through all of them shines the one presence. [39, p. 150]

It is not impossible that the same should be true for robots—indeed Lafferty concludes his fable with the suggestion (paralleled in Čapek, Asimov and even Benford) that even the most manipulative of robots may suddenly awaken and repent. “The spirit came down once on water and clay. Could it not come down on gell-cells and flux-fix?” [37, p. 194; see also 37, p. 241]. But it is of more interest here-now to hold fast to the imagination of a wholly non-personal, non-subjective order of being. The robot civilization that is at least a likely galactic order is to be conceived as a wholly non-conscious one, even if its minions seem to speak. If we ever do see signs of plainly technological interference in the heavens [40, 41], we may reasonably think that this will be as unconscious as the growth of crystals or the construction (as we have in the past supposed) of termite nests.

When trying to imagine the End Times of the universe, writers since Olaf Stapledon have suggested that in those days everything will be organized as if it were all designed [42, pp. 210–14]. There will then be nothing merely “natural” or “given”: whatever exists will have been “deliberately” selected by intelligences with access to the energy of the whole cosmos. On the way to that imagined end particular galaxies and galactic clusters will have been turned into parks, factories and libraries, inhabited by digital representations of whatever past biological, haphazard intelligences have been judged convenient. It will, as it were, be a universe without mere “noise”—a secular imitation of those imagined regions “where there is only life, and therefore all that is not music is silence” [43, p. 47; 44, p. 119]. The structure of that civilization has usually been imagined to be hierarchical: lesser robots may report to, and receive instructions from, more intelligent nodes within a galactic network, just as if they were junior and senior angels. But this may be mistaken: any such centralized or centralizing system is limited by the possible speed of information transfer—and unless the fantasies of hyperspace, wormholes or other arbitrarily faster-than-light systems are somehow realized, that limit is light speed. Stapledon allowed himself the convenience of instantaneous telepathic communication as the basis for his Cosmic Spirit: that now seems unlikely, at least within our current understanding. And even he was conscious of the probability of rebellion and disorder. More local systems are more likely to survive, and information will spread laterally, as within the bacterial cloud, rather than hierarchically. That in turn may assist with the evolution of separate robot tribes, relatively isolated even from their own ancestors and immediate cousins. If consciousness (subjectivity, individual selfhood) is something that can evolve from a non-conscious world (despite my own and others’ arguments against the possibility) then it is possible for it to reappear amongst the mechanical successors of ordinary protein biology. Maybe in the end the galactic population will replicate planet-bound evolution, and there will cease to be any metaphysical or existential difference between biological and robot “life,” even if there is still hostility [45]. But that is another story.

The Meaning of Things

Thinking about the End Times, or even about days many million years from now or many light-years distant, may seem the least practical use of present time. No doubt our hunter-gatherer ancestors were just as inclined to mock their farming neighbours for wondering about next year’s crops and seasons [8, vol. 1, p. 61]. It may be that the choices we make now will have great effects in the long time to come, most obviously in whether our present technological civilization will survive climate catastrophe (and associated wars, migrations, famines and epidemics). How exactly we deal with artificial intelligence in its many forms may also determine those futures. Even before we began to think of robots, the question had arisen whether or not to worship our own creations, whether or not to allow mechanical or predetermined solutions to limit our creativity. Shall we attempt to remember our own agency or be content instead to be part of a machine, literal or social? On the one hand, tools, machines and marvels greatly increase our own power to think and act. On the other, they may make it difficult to “think outside the box” and to reject supposedly “rational” futures on the basis of what is then judged “sentiment” or “fancy.”

Don’t you see that that dreadful dry light shed on things must at last wither up the moral mysteries as illusions, respect for age, respect for property, and that the sanctity of life will be a superstition? The men in the street are only organisms, with their organs more or less displayed. [46, p. 70]

To imagine a universe dominated by non-conscious intelligence is to get as close as we can to imagining a world deprived of qualities and meaning. Such a world has no centre, nor any distinction between here and there, past and present, one creature and another. Whatever happens there is determined solely by material connections (whether or not there is some element of quantum indeterminacy built in).

If a superintelligent zombie AI breaks out and eliminates humanity, we’ve arguably landed in the worst scenario imaginable: a wholly unconscious universe wherein the entire cosmic endowment is wasted. Of all traits that our human form of intelligence has, I feel that consciousness is by far the most remarkable, and as far as I’m concerned, it’s how our Universe gets meaning. Galaxies are beautiful only because we see and subjectively experience them. If in the distant future our cosmos has been settled by high-tech zombie AIs, then it doesn’t matter how fancy their intergalactic architecture is: it won’t be beautiful or meaningful, because there’s nobody and nothing to experience it—it’s all just a huge and meaningless waste of space. [5, pp. 226–7; see also 5, pp. xii, 327]

In this hyperbole Tegmark strangely neglects the presence of non-human sentients, terrestrial or otherwise—but of course they too are likely to be swept away by the unsympathetic machines. He here echoes the words of Plotinus:

Let every soul first consider this, that it made all living things itself, breathing life into them. … Let it look at the great soul, being itself another soul which is no small one, which has become worthy to look by being freed from deceit and the things that have bewitched the other souls, and is established in quietude. Let not only its encompassing body and the body’s raging sea be quiet, but all its environment: the earth quiet, and the sea and air quiet, and the heaven itself at peace. Into this heaven at rest let it imagine soul as if flowing in from outside, pouring in and entering it everywhere and illuminating it: as the rays of the sun light up a dark cloud, and make it shine and give it a golden look, so soul entering into the body of heaven gives it life and gives it immortality and wakes what lies inert. … Before soul it was a dead body, earth and water, or rather the darkness of matter and non-existence, and “what the gods hate,” as a poet says. (Plotinus Ennead V.1 [10].2, 1, 13–23, 26–28: [47, vol. 5, pp. 14–17]).Footnote 7

But Plotinus is unwilling to accept that there was any such real darkness before “soul,” before experience. Such a world did not, pace Tegmark, “look pretty much the same everywhere” [5, p. 33]. It did not “look” at all. On a materialist assumption (that conscious experience is an emergent or phenomenal or even—weirdly—an illusory effect) we could say that the first experiencing organisms added little, centred, transient and variegated bubble worlds to the original un-centred and symmetrical somewhat. On another, idealist, assumption it is rather the reverse: the material world is either imagined or (perhaps) created through the interaction of innumerable versions of Soul, from the widest World Soul to the simple experiences of prokaryotes or particles. Perhaps some compromise is possible.

Plotinus and Tegmark both conceive that the real world is grasped through intellect (though they may have somewhat different conceptions of that faculty).Footnote 8 Our experiences are, as it were, samples of the one underlying reality which is both being and beauty. In that real world nothing is far away, nothing is ever lost, and everything is, as it were, transparent, without concealment. “Nothing is a long way off or far from anything else” (Plotinus Ennead IV.3 [27].11, 22–3). All the bubble worlds are open, rather than (as in the world of sensory experience) concealed.

For here below, too, we can know many things by the look in people’s eyes when they are silent; but there [that is, when we see things in the light of the spirit] all their body is clear and pure and each is like an eye, and nothing is hidden or feigned, but before one speaks to another that other has seen and understood. (Plotinus Ennead IV.3 [27].18, 19–24)

Once we see that, so Plotinus says, we will “stop marking [ourselves] off from all being and will come to the All without going out anywhere” (Plotinus, Ennead VI.5 [23].7, 13–17). This ancient theme lies behind the common SF trope of hyperspace: an imagined Other where all places are effectively coincident, and light speed is no longer any limit. “There” we are all together, and it is (perhaps) this underlying truth which our imagined robots, which exist only in the familiar four-dimensionally extended world, are denied.Footnote 9