
Setting the Stage for a Human-Machine Merger

A few years back I taught a course on the remote control of robots, a field known as telerobotics.Footnote 1 At that time, “insect-like” robots roamed my lab greeting guests. I viewed teaching the class as an opportunity to spend the term talking about increasingly intelligent robots and to discuss the topic of our cyborg future. Known among students as a faculty member who made provocative statements to capture their attention and generate discussion, I opened the class by saying, “The next step in human evolution is for humans to become a machine. Let’s talk about that this term.” In the mid-1990s, when telerobotic systems were being developed, and to this day, the human operator, with a 100 trillion synapse brain, is by far the most complex and intelligent component of the system. But still, I noticed that different aspects of telerobotic systems were improving, and rapidly, and I envisioned a time when the robot would no longer need a human supervisor, other than to provide the input for the desired output of the system. As I taught the course, in the back of my mind I couldn’t help but ask myself: how long will it be until artificially intelligent robots determine their own interests and surpass us?

The students in my class soon learned that the control of robots remote to a human operator is a challenging engineering design problem. Knowledge of control theory is needed, as is knowledge of force feedback devices, information theory, and cognitive engineering. What I didn’t realize then is that the technology to create intelligent, dexterous, and mobile robots was not only an impressive example of human tool making, but the beginning of the process of creating tools that someday might replace humans as the dominant species on the planet. But on what time frame would an artificial intelligence develop that could surpass humans, and what form might it take? In my view of the technological future that is unfolding this century, the timeframe in which we might expect human-like artificial intelligence remains uncertain, as major advances still need to be made in computer science and neuroscience, and daunting technical issues need to be solved. Others are also thinking deeply about our technological future. According to a survey of artificial intelligence experts done by Vincent Müller of Anatolia College and Nick Bostrom of the Future of Humanity Institute, Oxford, there’s a 50 % chance that we’ll create a computer with human-level intelligence by 2050 and a 90 % chance we will do so by 2075.Footnote 2 And as I stated in the beginning of this book, given a planet that is over 45 million centuries old, one can think of the difference between 2050 and 2075, or even 2175, as nothing more than a rounding error with many decimal places.

Based on my experience designing virtual and augmented reality displays, I think anyone fortunate enough to be doing work at the cutting edge of their field is actually one step away from philosophy. For example, while there are many technical issues to be solved in telerobotics, just considering whether we humans would eventually merge with increasingly intelligent robots quickly led me to philosophical questions, such as: what does it mean to be human, especially if so much of our body can be replaced with technology? And if we did eventually merge with artificially intelligent machines, what aspects of humanity would continue? I also wondered about other effects that technology could have on humanity; for example, as we transformed into technologically enhanced cyborgs, would we still love, feel heartbreak, marvel at the beauty of a sunset, and feel compassion for others? More simply put—what aspects of humanity would continue within our “cyborg being”? Then, as cyborgs such as Steve Mann of the University of Toronto and the “eyeborg,” Neil Harbisson, began to emerge and gain notoriety, and as artificial intelligence began to improve, I wondered whether the law would treat all forms of intelligence equally. In my view of the future, to merge with machines is not to become indistinguishable from a robot, nor to lose every essence of humanity; rather, the progression will be to integrate technology more and more into the human body over the next decades, essentially creating a cyborg and Posthuman future for humanity.

I believe the key to creating human-like artificial intelligence is unlocking the mysteries of the human brain, specifically how the brain computes and how the trillions of synapses between neurons result in a conscious mind. Some argue that if a machine can simulate the human brain’s neural networks, it might be capable of its own original thought. With that in mind, for commercial purposes tech innovators like Google are trying to develop their own “brains” using stacks of coordinated servers running highly advanced software.Footnote 3 Meanwhile, writers for The Week indicate that “Facebook co-founder Mark Zuckerberg has invested heavily in Vicarious, a San Francisco–based company that aims to replicate the neocortex, the part of the brain that governs visual perception, language, and does math.”Footnote 4 And according to Vicarious co-founder Scott Phoenix, once scientists can translate the neocortex into computer code, “you have a computer that thinks like a person.”Footnote 5 How long it takes to transform the neocortex into code, and whether it then thinks like a human, of course, remains to be seen.

Whether a human-like artificial intelligence emerges this century, and if so, how the law and policy makers might respond, has not received sufficient attention from jurists and legislators, or been the focus of industrial standards. But I am hopeful that this book will help the public frame the issues and enter the debate on the direction of our future evolution, while there is still time to chart the course that allows humanity to continue. Returning to the thoughts of Sir Martin Rees provided in the foreword to this book, he remarked: “in the far future, it won’t be the minds of humans, but those of machines, that will most fully understand the cosmos—and it will be the actions of autonomous machines that will most drastically change our world, and perhaps what lies beyond.”Footnote 6 I would like to think that some aspects of humanity will have continued over the eons, such that our far distant relatives are inspired by the amazing universe that awaits them, just as the early humans who gazed up at the night stars were inspired. I believe we can get to that distant vantage point in the universe by becoming the artificially intelligent technology that we are either in the process of creating now or that may someday engineer itself.

Of course I’m not the only person writing on this topic and lecturing about the possibility of humans merging with artificially intelligent machines as the next step in human-machine evolution. Ray Kurzweil has artfully laid the groundwork for the Singularity in several seminal books.Footnote 7 In fact, the topic of a human-machine merger has generated intense interest across several academic disciplines. For example, the prominent historian Yuval Noah Harari, a professor at the Hebrew University of Jerusalem, has claimed that the amalgamation of man and machine will be the ‘biggest evolution in biology’ since the emergence of life four billion years ago.Footnote 8 Professor Harari, who has written a landmark book charting the history of humanity, said mankind would evolve to become like gods with power over death, and be as different from the humans of today as we are from chimpanzees.Footnote 9 In an article written by Sarah Knapton, science editor for The Telegraph, she reports Harari’s view of the technological future: that “humans as a race were driven by dissatisfaction and that we would not be able to resist the temptation to ‘upgrade’ ourselves, whether by genetic engineering or through technology.”Footnote 10 I do not believe upgrading will be a “temptation” but more a necessity for the continuing survival of our species.

Furthermore, I agree with the view taken by Yuval Harari, Ray Kurzweil, Hans Moravec, and like-minded others that our future is to enhance ourselves with technology, such that we eventually become the technology. That idea is a major thesis proposed in this book: that we are to become the technology which forms the subject of our hopes, dreams, desires, and imagination. Even though amazing advances in biology will happen in the next few decades, we humans are becoming the subject of our own technological design in the sense that our future is not one of biology, but of technology. I don’t mean to imply that biology has no role to play in our cyborg future; until uploading our mind to a computer becomes possible (some argue we will never reach that level of technology), or until we are composed of so much technology that our very humanity is questioned, we will continue as a biological species. But at some point biology will be superseded by the technological enhancements and replacements to our bodies and minds that have been described throughout this book.

Proponents of creating an artificially intelligent brain and supporters of the idea that mind uploads may be possible at some point in the future tend to argue that the brain is a Turing Machine—the idea that organic minds are nothing more than classical information-processors. It’s an assumption derived from the strong physical Church-Turing thesis, and one that now drives much of cognitive science.Footnote 11 But not everyone believes the brain/computer analogy works for artificial intelligence or that human intelligence can be distilled to algorithms. Speaking at the annual meeting of the American Association for the Advancement of Science in Boston, neuroscientist Miguel Nicolelis explicitly stated that, “The brain is not computable and no engineering can reproduce it.” He referred to the idea of uploads as “bunk,” saying that it’ll never happen and that “[t]here are a lot of people selling the idea that you can mimic the brain with a computer.”Footnote 12 Antonio Regalado, writing for the MIT Technology Review, quoted Professor Nicolelis’s position on creating human-like artificial intelligence as follows: “human consciousness can’t be replicated in silicon because most of its important features are the result of unpredictable, nonlinear interactions among billions of cells.”Footnote 13 I agree with Prof. Nicolelis’s sentiment that creating artificial intelligence will be very challenging, but I disagree that the functioning of the brain is not amenable to simulation by algorithms and by advances in chip design such as neuromorphic chips—it’s just a matter of time before we reverse engineer the neural wiring of the brain and discover the algorithms that generate a conscious mind. I do not believe that nature is so complex that its mysteries cannot be unlocked with appropriate technology and ingenuity.

Throughout this book I provided numerous examples of people choosing to “upgrade,” or enhance themselves, be it through plastic surgery, silicone injections, DIY grinders implanting computers and sensors under their skin, cyborgs wearing technology to augment the world, even Korean schoolgirls changing their look to appear as anime characters. Humans seem open to the idea of changing their appearance and integrating technology into their body—we just need better and safer technology to create the conditions for a future human-machine merger. Some would argue that the law of accelerating returns for information technologies is operating to provide the technological breakthroughs necessary for transforming and enhancing our bodies. Of course, many people are becoming cyborgs now out of medical necessity, but as amazing a machine as the human body is, especially when it is functioning properly, in many cases it can still be improved with technology even when medical necessity is not the reason for the upgrade; for example, telephoto lenses, the ability to see in the infrared, or nanobots fighting disease within our bloodstream are enhancements many “able-bodied” humans might choose if offered.

As I discuss the possibility of a human-machine merger, I am joined by many prominent scientists, engineers, and philosophers who have thought deeply about where advances in engineering and artificial intelligence are leading humanity. For example, when discussing humanity’s future, Prof. Hans Moravec, formerly head of the robotics lab at Carnegie Mellon University, predicted in 2000 that machines would attain human levels of intelligence by midcentury, and that they would soon after surpass us—to use his words, they would become our “mind children.” But even though Moravec predicted the end of humans as the dominant species on this planet, from his perspective this was not a bleak vision. According to a review of Moravec’s Robot: Mere Machine to Transcendent Mind, “Far from railing against a future in which machines ruled the world, Moravec embraced it, taking the view that artificially intelligent robots would actually be our evolutionary heirs.”Footnote 14 As Prof. Moravec put it, “Intelligent machines, which will grow from us, learn our skills, and share our goals and values, can be viewed as children of our minds.”Footnote 15 And since they are our children, we will want them to outdistance us. But we should be careful what we wish for, or what we allow to happen by inaction; just recall Elon Musk’s warning that by developing artificial intelligence we are summoning the demon.

There are a number of reasons why a super artificial intelligence could pose a threat to humanity. One example, involving only a rudimentary level of robotic intelligence, should serve as a warning. In a 2009 study, Swiss researchers carried out a robotic experiment that produced some unexpected results. Hundreds of robots were placed in arenas and programmed to look for a “food source,” in this case a light-colored ring.Footnote 16 The robots were able to communicate with one another and were instructed to direct their fellow machines to the food by emitting a blue light. But as the experiment went on, as reported in Rise of the Machines, “researchers noticed that the machines were evolving to become more secretive and deceitful: When they found food, the robots stopped shining their lights and instead began hoarding the resources—even though nothing in their original programming commanded them to do so.”Footnote 17 The implication is that the machines learned “self-preservation,” said Louis Del Monte, author of The Artificial Intelligence Revolution. “Whether or not they’re conscious is a moot point.”Footnote 18 Of course, from this study we have to wonder—will far more intelligent machines be even more aggressive in acquiring resources?

As we become more like them (artificially intelligent machines), and they become more like us (which I predict will lead to a human-machine merger), where are we now in the process of becoming the technology? First, let’s review the processing power of computers, because without sufficient computing power the future discussed in this book is not possible. The next-generation supercomputer, which will be available by 2018, will be able to perform at about 180 petaflops peak performance. That’s a lot of computing power. To put 180 petaflops in perspective, a human brain has about 100 billion neurons and 100 trillion synapses, and assuming each synapse performs about 10 operations per second, the brain is computing in the petaflop range (10^15 operations per second). If Moore’s law continues (at least for another 1–2 decades), the doubling of computational power will continue unabated, and a supercomputer might soon be able to simulate a human brain at a neural level, but operating at a much faster speed than a human brain. In fact, the electrochemical signals of the brain travel at about 150 m/s, while the electronic signals in computers are sent at roughly two-thirds the speed of light (about two hundred million meters per second). As artificial intelligence becomes more human-like in its intelligence and form, and in its emotions and motor skills, so too are we becoming more like them; we can be equipped with artificial limbs, a pacemaker, hip replacements, cochlear implants, retinal prostheses, and a host of other cyborg technologies, but to compete with future artificial intelligence we need to significantly upgrade our brain. I commented in an earlier chapter that technology on the outside of the body is breaching what I termed the sensor-skin barrier and becoming implanted under the skin. Further, I think a major application of future prosthetic devices will be for the brain, in terms of enhancing memory, providing access to information, allowing telepathic communication, and leading to thought control of devices external to the body.
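To make the comparison above concrete, here is a minimal back-of-envelope sketch in Python. It uses only the round figures quoted in this chapter (the neuron and synapse counts, an assumed 10 operations per synapse per second, and the 180 petaflop supercomputer); the values are illustrative assumptions, not measurements.

```python
# Back-of-envelope comparison using the round figures quoted in this chapter.
# All values are illustrative assumptions, not measured data.

NEURONS = 100e9            # ~100 billion neurons
SYNAPSES = 100e12          # ~100 trillion synapses
OPS_PER_SYNAPSE = 10       # assumed ~10 operations per synapse per second

brain_ops_per_sec = SYNAPSES * OPS_PER_SYNAPSE   # ~1e15, i.e., petascale
supercomputer_flops = 180e15                     # ~180 petaflops peak

print(f"Brain estimate:        {brain_ops_per_sec:.1e} ops/s")
print(f"Supercomputer (peak):  {supercomputer_flops:.1e} flops")
print(f"Ratio:                 {supercomputer_flops / brain_ops_per_sec:.0f}x")

# Signal-propagation speeds quoted in the text
neural_speed = 150       # m/s, electrochemical conduction
electronic_speed = 2e8   # m/s, roughly two-thirds the speed of light
print(f"Speed advantage:       {electronic_speed / neural_speed:.1e}x")
```

On these rough assumptions the machine's advantage lies less in raw operations per second, where the two are within a couple of orders of magnitude, than in signal speed, which differs by roughly a million-fold.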

If human intellectual abilities improved at the same rate as computers have over the last few decades, this would be equivalent to each human generation doubling the number of neurons in their cortex compared to the past generation, which is clearly impossible! But for the sake of making a point, the approximately 22 billion cortical neurons that people have now would grow to 44 billion in the next generation (of course, anatomically, we couldn’t accommodate this additional mass in our skulls), with roughly 18 years as the cycle time for each doubling.Footnote 19 But of course it’s not just the number of neurons that defines intelligence; it is the connections formed by the trillions of synapses as learning takes place. But clearly, the doubling of human intelligence doesn’t happen in cycle times of 18 years; it took eons for Homo sapiens to emerge from our prehistoric ancestors and for the anatomy and physiology of the human body to adapt to a particular environment, resulting in the intelligence we exhibit now. If we want to be smarter than we are now, we can only accomplish that goal by engineering our genes, enhancing our brain with technology, or by a combination of both. As I have stated throughout this book, summarizing Moore’s law, the time interval for computers to double their processing power is about 18 months. The implication of Moore’s law continuing is that an artificial counterpart of a human biological brain might in theory think thousands to millions of times faster than our naturally evolved systems, with far more memory and with wireless access to the internet, and according to Hans Moravec and Ray Kurzweil, this could happen by midcentury. Clearly, the intellectual ability and information-processing speed of a rising artificial intelligence argue for a strong regulatory scheme to protect humans from potential threats, and for the necessity of humans merging with our artificially intelligent progeny in order to remain competitive with them.
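A short sketch of the doubling arithmetic discussed above may help. It assumes the 18-month doubling period for computing power summarized in this book and the 18-year human generation used in the paragraph; the 30-year horizon is an illustrative assumption.

```python
# Contrast the pace of Moore's-law doubling with a human generation.
# The 18-month and 18-year figures come from the discussion above; the
# 30-year horizon is an illustrative assumption.

COMPUTE_DOUBLING_YEARS = 1.5   # ~18 months per doubling (Moore's law)
GENERATION_YEARS = 18          # cycle time used above for a human generation
HORIZON_YEARS = 30

compute_doublings = HORIZON_YEARS / COMPUTE_DOUBLING_YEARS
compute_growth = 2 ** compute_doublings

print(f"Compute doublings in {HORIZON_YEARS} years: {compute_doublings:.0f}")
print(f"Computing power multiplier: {compute_growth:,.0f}x")
print(f"Human generations in the same period: {HORIZON_YEARS / GENERATION_YEARS:.1f}")
# Over ~30 years computing power would double ~20 times (roughly a
# million-fold), while barely more than one human generation passes with
# no comparable biological change.
```

The point of the sketch is simply that exponential doubling on an 18-month cycle leaves generational biological change standing still, which is the gap the rest of this chapter argues must be closed by engineering rather than evolution.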

Optimism and Pessimism

Given our cyborg future, in which we equip ourselves with more and more sophisticated technology, and the possibility of the Singularity occurring around midcentury—should we be concerned that there may be an existential threat to our survival, or should we approach this century with the optimism that many of humanity’s problems will be solved? Against the backdrop of improvements in artificial intelligence, consider the dire warnings; for example, that artificially intelligent robots will treat humans as pets once they achieve a level of artificial intelligence known as ‘superintelligence’. According to business entrepreneur Elon Musk, once computers become smarter than people, they will treat them like ‘pet Labradors’. And scientist Neil deGrasse Tyson added that artificially intelligent computers could choose to breed docile humans and eradicate the violent ones. Musk also warns that humanity needs to be careful about what it asks superintelligent robots to do. He uses the example of asking them to find out what makes people happy, as such a system “may conclude that all unhappy humans should be terminated.”Footnote 20 There are other concerns implicated by smarter-than-human artificial intelligence emerging and entering society—for example, the replacement of “expensive” human workers by cheaper robots may loom large in labor-intensive industries, particularly the manufacturing sector. What will humans do in a world where our physical and cognitive abilities are less developed than those of artificially intelligent machines? In a world where humans are less-abled than our artificially intelligent inventions, why think future jobs would go to the humans? And in the case of service industries, and particularly health care, do we really want a society where human needs are met by machines, and not people?Footnote 21 On this last point, androids are becoming so realistic that in the future we may not know the origin of the intelligence we are interacting with. What law and policy should govern this possibility?

For “cyborg humans,” unique ethical issues will arise from the use of neural connections and brain-machine interfaces, centered on the question of what it means to be human. As noted by Sydney Perkowitz of Emory University, a person who has a natural limb replaced with an artificial one has not become less human, nor has he lost a significant degree of “personhood.”Footnote 22 But as Perkowitz asks—suppose a majority of the biological organs in an injured person are replaced by artificial components (recall the measure of “cyborgness” presented in Chap. 1); or suppose the artificial additions change mental capacity, memory, or personality (recall the Sell case presented in Chap. 4 on Cognitive Liberty, in which the government sought to require Dr. Sell to take anti-psychotic medication to regain his mental capacity to stand trial). Is a predominantly artificial person somehow less than human? And Perkowitz asks—“Would the established legal, medical, and ethical meanings of personhood, identity, and so on, have to be altered?”Footnote 23 I think the answer is yes, and the time to address these questions is now.

Against this backdrop of concern is the optimism of Ray Kurzweil and his colleagues, as expressed in the predictions found in his seminal books about the future.Footnote 24 According to Google’s Kurzweil, by the 2020s most diseases will be eradicated as nanobots become smarter than current medical technology and self-replicate in our body to fight disease. And self-driving automated cars will begin to take over the roads, such that people may not be allowed to drive on highways, creating an automated highway system with far fewer fatal accidents. To me, the idea that humanity gives up more and more control over our infrastructure is reason for concern. Kurzweil also predicted that we will be able to upload our mind/consciousness by the end of the decade (which could lead to eternal life?) and that by the 2040s, non-biological intelligence will be a billion times more capable than biological intelligence (which provides pressing motivation for humans to merge with our technological progeny).Footnote 25 With the use of cyborg technology, by 2045, Kurzweil predicts that we will multiply our intelligence a billion fold by linking wirelessly from our neocortex to a synthetic neocortex in the cloud.Footnote 26 According to Peter Diamandis, author of Bold: How to Go Big, Create Wealth, and Impact the World, Ray’s predictions are a “byproduct of his understanding of the power of Moore’s Law, and more specifically the Law of Accelerating Returns and of exponential technologies.”Footnote 27 As stated throughout this book, cyborg technologies seem to follow an exponential growth curve based on the principle that the computing power that enables them doubles about every 2 years.Footnote 28

As I have argued throughout this book, if we don’t become the technology, then we will be surpassed by artificially intelligent machines. There are many technologies being developed now, or that will come online within two to three decades, that are making this conclusion a strong possibility. For example, thought-to-thought communication is just one feature of cybernetics being investigated now that will become vitally important to us as we face the distinct possibility of being superseded by highly intelligent machines. And neuroprosthetic implants that will allow us to download information from the Internet directly to our brain are also in the initial stages of being developed and will prove essential for a human-machine merger.Footnote 29

In our technological future, if we are mentally “inferior” to artificial intelligence, then we will be dependent on their good will towards us—not a scenario that best serves the interests of humanity. So the question of how humans will cope later this century with machines more intelligent than us is, in my opinion, dependent on whether we have developed the technology to merge with them. Here, again, I believe cybernetics can help. Allowing people to link via chip implants to artificially intelligent machines seems a natural progression to a future human-machine merger, a potential way of harnessing machine intelligence by, essentially, creating superhumans.Footnote 30 Otherwise, according to Peter Carlson, a staff writer for the Washington Post, without merging with artificial intelligence we’re doomed to a future in which intelligent machines rule and humans become second-class citizens.Footnote 31 Yet once a human brain is connected as a node to a machine, a brain networked with other similarly connected human brains becomes possible; in that case, what will it mean to be an individual human? Will we evolve into a new cyborg community? Some believe that once humans become more cyborg than human they will no longer be stand-alone entities. At that point, will people remain natural persons under the law, or, like a corporation (in this case a network of connected minds), receive legal person status (natural persons are afforded more rights than legal persons)? Thus one can ask—the more a person is enhanced, will they then have fewer individual rights? When humans merge with artificially intelligent machines, it has been argued that those who have become cyborgs will be one step ahead of nonenhanced humans. And just as humans have always valued themselves above other forms of life, it’s likely that more-abled cyborgs and artificially intelligent machines will discriminate against humans who have yet to become enhanced.Footnote 32

It has been estimated that by 2045 robots will be able to perform every job that humans can.Footnote 33 But does this mean humans should worry about being replaced by machines? I think so, but many experts believe the future actually lies in a more advanced and seamless collaboration between humans and artificially intelligent robots (expressing the “artificially intelligent machine as tool bias”). Whereas most robots, particularly within industrial and manufacturing settings, have historically been too dangerous for humans to work closely with, advances in technology have made it possible to develop robots that are safer, more cost-effective, and flexible enough to work side-by-side with people.Footnote 34 These collaborative robots are already being used in a variety of industries, with rapid growth. As stated by David Cotriss in IHS Technology, the industrial machinery market—including robots used in manufacturing—doubled in 2014, and is anticipated to reach $2 trillion worldwide by 2018.Footnote 35 In addition, the International Federation of Robotics estimates that 225,000 industrial robots were sold worldwide in 2014, up 27 % from 2013, led by the automotive and electronics industries.Footnote 36 I think the predicted “golden age” of artificially intelligent machines working harmoniously side-by-side with their human partners is accurate, but only until about 2050; after that, we will have been surpassed by artificial intelligence, and working cooperatively with and for humans will likely not be the agenda of future artificial intelligence. This view clearly has implications for law and policy. It implies that we have about 35 years in which to reap the benefits of artificial intelligence as nonenhanced humans, because sometime after 2050, if we have not merged with our artificially intelligent progeny, we will be inconsequential and surpassed. To make a provocative statement—humans then will become the rust-belt technology of the 21st century.

Entering the Debate

There is a basic idea among some commentators and robot designers that once artificial intelligence exceeds humans in intelligence, artificially intelligent machines will develop their own interests and will lack the desire to serve as tools for humans—essentially they will go their own way, that is, unless they view humanity as a threat to them. The idea that artificial intelligence post-Singularity will not be content to serve as a tool for humans is one I advocate. I also think that our human tool-making skills will be a trait that is passed on to our technological progeny—and they will be the greatest tool makers yet, although their tools will serve them, not us (unless we become them). Further, I don’t think artificially intelligent robots “going their own way” is a likely scenario, as I believe our future is to merge with them; in this book I made the point that with accelerating information technologies “they” are becoming more like us, and “we” are becoming more like them. And against the backdrop of artificial intelligence appearing in the form of an android, expressing emotions, and with human-like intelligence, we will find a middle ground with our technological progeny and merge together, forming an intelligence consisting of human and machine traits. In fact, to make the merger a possibility, some researchers are actively trying to create an artificial intelligence that exhibits human-like intelligence, and some are building neuroprosthetic devices to enhance the mind. Others are designing androids with human levels of mobility, and thousands of other researchers are developing technologies under the assumption that they are developing tools for humans to use, not realizing that the same advances in materials engineering, computer science, and other supporting technologies for our cyborg future are laying the groundwork for artificially intelligent machines that may exceed us, unless we merge with them. James Barrat, author of Our Final Invention: Artificial Intelligence and the End of the Human Era, said the following about the rise of artificial intelligence: “So when there is something smarter than us on the planet, it will rule over us on the planet.”Footnote 37 It seems to me that a human-machine merger would avoid this negative outcome.

The idea that artificial intelligence could pose an existential threat to humanity, the theme of many recent movies and novels, is surprisingly not a serious concern to many prominent thinkers in the field of robotics and artificial intelligence. Let’s review some of their arguments, and I will provide some counterpoints. Basically, supporters of the idea that humanity has no reason to fear the rise of artificial intelligence argue that robots which threaten our survival will actually never develop, because software developers will program in safeguards to protect us from the potential threats of accelerating artificial intelligence.Footnote 38 But consider the possibility of “rogue artificial intelligence”: given the amount of code directing an artificial intelligence, it will be difficult to maintain its software, and furthermore, at some point in time, the artificial intelligence may begin to program itself. The idea that programmers can write the code to manage the conduct of thousands (millions?) of evolving artificially intelligent robots as they learn and interact with the world and with each other seems naïve to me. Another concern is that once we build systems that are as intelligent as humans, these intelligent machines will be able to build smarter machines, which may result in a form of superintelligence so beyond human intelligence that we would essentially be left behind. That, experts say, is when things could really spiral out of control, as the rate of growth and expansion of machines would increase exponentially. At that point, the idea of building safeguards into the mind of an artificial intelligence will be moot, as the artificially intelligent machines would have built and programmed themselves; at that time we humans will not be invited to provide “safeguards” to their code any more than we allow chimpanzees to provide us with a moral code. Another serious concern expressed by those fearing the Singularity is the issue of ethics and morality. According to Charles T. Rubin, the issue is that we are starting to create artificially intelligent machines that can make decisions like humans, but these machines lack a sense of morality.Footnote 39 However, I can’t envision a reason why the “basic” rules of morality cannot be programmed (thou shalt not harm a human, etc.); but I do worry that at some point in the future artificially intelligent machines will reject human moral values and develop their own. I am also concerned that some government will purposely create an artificial intelligence with the intent to harm humans, under the umbrella of national security.

Often referred to as the father of virtual reality, Jaron Lanier, author of Who Owns the Future,Footnote 40 makes the point that those who predict the Singularity happening around midcentury base their prediction on Moore’s law, which he notes has produced an exponential increase in computing power over the last few decades. But Lanier believes that an exponential increase in computing power is not enough to demonstrate that a qualitative change in the behavior of artificial intelligence will take place. Of course, more computational power is necessary but not sufficient to reach human-like artificial intelligence. No predictor of the Singularity argues otherwise. But given that thousands of neuroscientists have generated more knowledge about the brain in the past 5 years than in the past fifty, we may soon reach a point where the knowledge of how the brain computes can be combined with the speed of a supercomputer equipped with far more memory than the human brain. Then the quantitative aspects of computing will be combined with the qualitative aspects of intelligence; and at that point the argument that Moore’s law is insufficient to create artificial intelligence will be moot.

Lee Smolin, physicist and author of Time Reborn, asks: “Is there any concrete evidence for a programmable digital computer evolving the ability of taking initiatives or making choices which are not on a list of options programmed in by a human programmer?”Footnote 41 That is, could a computer have an original thought? The answer is both yes and no (remember, I have a law degree). Most computers are completely dependent on input from a human, but the vast majority of these computers are running programs which require no artificial intelligence at all. There are clearly current computers that use solutions unknown to the programmer to solve problems (for example, solutions derived from genetic algorithms or based on deep learning), but of course in most cases the human is currently providing the input. But why think the model of the human always providing the list of options for an artificial intelligence to consider will continue? We already cede to artificial intelligence many important decisions, including components of our air traffic control system, weapons systems, health decisions, and within a few years, driving our cars. I see no reason to think that artificial intelligence will not move beyond the brittleness of needing a human to decide every course of action it considers. Finally, Jaron Lanier asks—is there any reason to think that a programmable digital computer is a good model for what goes on in the brain? He posits: “If we can’t yet understand how natural intelligence is produced by a human brain, why should our early 21st century conception of computation fully encompasses natural intelligence, which took communities of cells four billion years to invent?”Footnote 42 I think Lanier’s point that natural intelligence took billions of years to get to where we are today is obviously correct, but irrelevant to the debate on our cyborg future, as artificial intelligence is not governed by the same processes which guided natural selection. That is, with the exception of genetic algorithms, the evolution of technology is not based on the same underlying principles as the evolution of species through natural selection. Furthermore, artificial intelligence in the 21st century is not at the equivalent starting point of a single cell (a single bit?) billions of years ago; it has a starting point less than 100 years ago, at a much higher level of development than the cell which eventually led to a sentient human, and from a computational perspective it is improving not over eons but every 18–24 months.

Finally, in any discussion of our future with technology, the views of a world-class robotics expert are worth reviewing. One of the most well-respected experts in robotics is Rodney Brooks, formerly director of MIT’s robotics lab, who argues that the idea of a superintelligence by 2050 is based on “fundamental misunderstandings of the nature of the undeniable progress that is being made in artificial intelligence, and from a misunderstanding of how far we really are from having volitional or intentional artificially intelligent beings, whether they be deeply benevolent or malevolent.”Footnote 43 Brooks thinks it is a mistake to conclude that a malevolent artificial intelligence will emerge anytime in the next few hundred years, and argues that people who predict the Singularity much sooner are making a “fundamental error in not distinguishing the difference between the very real recent advances in a particular aspect of artificial intelligence, and the enormity and complexity of building sentient volitional intelligence.”Footnote 44 Brooks notes that “Moore’s Law applied to this very real technical advance will not by itself bring about human level or super human level intelligence.”Footnote 45 Of course, those who predict the Singularity around midcentury also argue that (1) Moore’s law by itself will not lead to human-like artificial intelligence, (2) the corresponding algorithms that lead to a conscious thinking brain must be discovered, and (3) the architecture of artificial brains must process data in parallel and not serially. They then point out the significant progress being made in these endeavors. And of course, as Brooks indicates, machine learning techniques such as deep learning do not help in giving a machine “intent,” or any overarching goals or “wants.” While I believe Brooks is right to conclude that artificial intelligence does not now form its own intent, I conclude that “intent” for artificial intelligence is “right around the corner,” given the Law of Accelerating Returns for information technologies (creating smarter-and-smarter machines). If I’m off by a century, even two, well, that’s still “right around the corner” in geologic time, or even on the time scale associated with human progress.

Concluding with the Law

While discussing the range of cyborg technologies that are leading humanity closer to a merger with artificially intelligent machines , throughout this book I brought up a host of legal and policy issues which I believe need to be discussed and resolved within the next one to two decades. Contrary to the time frame for the Singularity as proposed by some prominent roboticists and artificial intelligence researchers, which they predict to be next century or beyond, I do not believe that the Singularity is so far distant in the future that we have the time to delay debating humanity’s future. Nor do we have time to delay enacting legislation to protect humanity from an existential threat that could be posed by artificial super intelligence. We still have time to set the course for our future evolution if we act soon, but after midcentury, or beyond, our ability to control our own destiny may wane. By presenting current cases, laws, and statutes which relate to emerging cyborg technologies integrated into the human body, my goal in writing this book was to inform the reader that law and policy will have a major role to play in the coming cyborg age .

For an emerging law of cyborgs, there are in fact a host of current laws which relate to technologies that are being used to enhance humans and to regulate the increasingly autonomous machines that are joining society. For example, medical malpractice and products liability laws relate to sensors being implanted under the skin and also to malfunctioning prosthetic devices used to replace lost or damaged limbs. Other laws have been proposed to protect cognitive liberty or have been passed to protect the right of bodily integrity. In addition, in the U.S., Supreme Court cases on freedom of speech and freedom of thought have been litigated across a range of topics and one day will serve as precedent for cases involving an artificial intelligence claiming it has the right to free speech and other constitutional liberties. Additionally, federal and state laws have been enacted to enhance cybersecurity for computers, and the FDA regulates the use of medical devices such as retinal prostheses and cochlear implants connected to the brain. Further, the FCC regulates spectrum, which will be relevant for brain-to-brain communication using wirelessly connected neuroprosthetic devices. And as shown throughout this book, with many other types of cyborg technologies, the role of the law is important. However, important or not, numerous examples presented in this book have shown that the law often plays an insignificant role in the design and use of cyborg technology, or at best plays “catch-up,” as information technologies improve exponentially and push the boundaries of what is possible beyond the reach of current legal schemes.

As an example of one important area where current law is insufficient to account for cyborg technologies, consider liability for harm to a human when an artificially intelligent robot may be responsible. Writing on this topic in the magazine Foreign Affairs, Illah Reza Nourbakhsh discusses the case of a robot that lives with and learns from its human owner.Footnote 46 Nourbakhsh points out that over time the robot’s behavior will be a function of its original programming combined with changes to its software resulting from the influence of its interactions with the environment. Nourbakhsh comments that it would be difficult for existing liability laws to apportion responsibility if such a machine caused injury, since its actions would be determined not merely by the computer code written by the original programmer, but also by neural networks that operate to learn from various sources of input.Footnote 47 In this situation Nourbakhsh asks—who would be to blame for harm to a human or to property resulting from the conduct of the robot: the programmer, the owner of the robot, or the artificial intelligence directing the robot? This example shows that to protect humanity in a future world consisting of artificial intelligence acting autonomously, legislators will need to propose appropriate law to apportion liability to the responsible entity. From a legal and policy perspective, what safeguards should be in place to protect humanity from artificial intelligence should it pose a threat? In this book I discussed several areas of law that together form what I term “an emerging law of cyborgs.” But the reader should note that as yet there is no specific “law of cyborgs” directed towards the possibility of an existential threat to humanity posed by artificial intelligence, so this is clearly an area in need of serious debate and comprehensive legislation.

However, some jurisdictions are further along in responding to advances in cyborg technology than others. For example, I view “ground zero” for a developing cyborg law to be California. California passed an anti-chipping statute in response to the possibility of a person being implanted with a tracking device against their will. California also passed the Computer Misuse and Abuse Act, which makes it a crime to “knowingly access and, without permission, use, misuse, abuse, damage, contaminate, disrupt or destroy a computer, computer system, computer network, computer service, computer data or computer program”Footnote 48 (there is also a federal law equivalent). One has to wonder if this statute could apply to the computer architecture of an artificially intelligent brain and thus provide it some level of protection. Depending on the particular violation, the Computer Misuse and Abuse Act can support a variety of fines and imprisonment in criminal actions, as well as remedies recoverable in civil actions for misuse or abuse of a computer. Further, the possibility of governments and corporations being able to scan a brain, or to implant false memories in one’s mind, was discussed in an earlier chapter as a particularly troubling outcome for humanity, and even progressive California has not enacted specific law in this area.

Those who design and build artificial intelligence and cyborg technologies also have an important role to play in creating a future in which artificial intelligence is friendly and cooperative with humans. However, the pace of change in artificial intelligence and robotics is far outstripping the ability of regulators and lawmakers to keep up. Google, for one, has created an artificial intelligence ethics review board that supposedly will ensure that new technologies developed by Google based on artificial intelligence will be developed safely. Some computer scientists are even calling for the machines to come pre-programmed with ethical guidelines—though developers then would face the issue of determining what behavior is and isn’t “moral,” and there is disagreement among different societies on what constitutes ethical behavior. As a first mover in this area, South Korea is developing a Robot Ethics Charter which will include standards for robotics users and manufacturers, as well as guidelines on ethical standards to be programmed into robots. According to South Korea’s Ministry of Commerce, Industry and Energy, “The move anticipates the day when robots, particularly intelligent service robots, could become a part of daily life as greater technological advancements are made.”Footnote 49

And it’s not only that new law needs to be enacted for our cyborg future; many existing laws will need to be modified. For example, the Americans with Disabilities Act, an anti-discrimination law for the workplace, is an example of a legal scheme in need of amendment in light of cyborg technologies which can be used to enhance a person to capabilities beyond normal. Essentially, under the law as written, if a person with a disability is equipped with a prosthetic device that enhances the person to beyond normal capabilities, they are still considered disabled compared to their unenhanced, “less able” coworkers. Clearly, the drafters of the law did not consider the Law of Accelerating Returns in their deliberations and thus failed to predict future developments in technology. But they would have been wise to do so, just as current legislators would be wise to consider exponentially accelerating technologies and what their impact on humanity will be. And of course, for an emerging law of cyborgs, “standard” issues of law will need to be considered as artificial intelligence gets smarter; for example, for commercial transactions we will need to decide to what extent an artificial intelligence can contract on its own, compared to its ability to contract while serving as an agent for a human or corporation.

Additionally, constitutional law issues will be especially important in a cyborg age and for a future human-machine merger. For example, what will constitute a search and seizure when the technology that may be searched is implanted inside a person and forms the architecture of the brain of an artificial intelligence or cyborg? And would accessing that information be an unlawful “taking” under the Fifth Amendment or an unlawful search under the Fourth Amendment to the U.S. Constitution? And under U.S. law, what about protection under the First Amendment for speech produced by cyborgs and artificial intelligence? If the government could access the information stored on a neuroprosthetic device, from that point on, would we forever be denied the ability to engage in free speech and freedom of thought? This topic was discussed in the chapter on Cognitive Liberty and is an area ripe for legislation. Another pressing issue for our cyborg future is also one of constitutional law: the possibility of our future artificially intelligent progeny being treated as slaves, or that they may enslave us, both outcomes that humanity should discuss and clearly avoid. Such fundamental issues in the U.S. implicate the Thirteenth Amendment to the Constitution, which prohibits slavery and involuntary servitude. That constitutional liberties may not be available for an artificial intelligence exhibiting human-like abilities and claiming to be sentient may require an amendment to the U.S. Constitution granting personhood status to an artificial intelligence that passes the Turing test or another relevant test; otherwise, extreme forms of inequality could occur, resulting in civil disobedience against humans.

My goal in writing this book was to convince the reader that law and public policy have an important role to play in our cyborg future. By presenting First Amendment (free speech), Fourth Amendment (search and seizure), and Fifth Amendment (right not to incriminate oneself) cases, and by discussing numerous other laws and statutes, I attempted to put a realistic face on societal issues and on what future legal disputes may look like, and to give the reader a sense of how the courts may respond. We are at an inflection point in human history: do we move to control artificial intelligence, will it subjugate us, or do we merge with it to become the result of our own technology? These are some of the issues prompted by the coming Singularity that the readers of this book can help decide.