Neurorobotics is an emerging interdisciplinary field that combines scientific advances in neuroscience with technological advances in robotics. Rendering machines with human-like capabilities has been a popular topic in fiction and movies, for instance, HAL in 2001: A Space Odyssey, R2-D2 and C-3PO in Star Wars, and Ava in Ex Machina. Present-day technology has made the prospect of smart robots real. Aicardi et al. (Ethical and Social Aspects of Neurorobotics, this issue) raise the technical, ethical, and social questions that emerge when research in neuroscience is combined with robotics. Two significant referents in their paper are the Human Brain Project (Amunts et al. 2016) and Responsible Research and Innovation (RRI) practices (Salles et al. 2019), both of which are only thinly described in the paper. However, from other sources, we know that the first referent, the Human Brain Project, involves upwards of 500 scientists at more than 100 universities and research centers across Europe. One of their goals is to build brains using computing chips that imitate properties of neural processing (Knight et al. 2016). The simulated brains are embodied in robot bodies (e.g., rodent, human). Embodied brains then interact in real or simulated environments. In this way, embodied brains are researched and understood in real-world settings. The second referent, Responsible Research and Innovation (RRI), is a comprehensive, system-wide approach in the European Union, adopted by scientists and other stakeholders, with the goal of involving all stakeholders in discussion and decision-making early in the development of technological innovations. RRI holds research and innovation to high ethical standards, seeks to avoid harm to communities, and bridges communication between the scientific community and society (Stahl et al. 2019).
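To make the idea of chip-based neural imitation more concrete, the sketch below simulates a single leaky integrate-and-fire neuron, a simplified neuron model of the kind that neuromorphic platforms implement at scale. This is a minimal illustration, not code from the Human Brain Project; all parameter values are invented for demonstration.

```python
# Minimal leaky integrate-and-fire (LIF) neuron simulation.
# All parameters are illustrative assumptions; real neuromorphic
# chips use hardware-specific values and many thousands of units.

dt = 1.0          # time step (ms)
tau = 20.0        # membrane time constant (ms)
v_rest = -65.0    # resting potential (mV)
v_thresh = -50.0  # spike threshold (mV)
v_reset = -70.0   # post-spike reset potential (mV)
r_m = 10.0        # membrane resistance (MOhm)

v = v_rest
spikes = []
for t in range(200):                      # simulate 200 ms
    i_in = 2.0 if 50 <= t < 150 else 0.0  # injected current (nA)
    # Membrane potential decays toward rest and integrates input.
    v += dt / tau * (-(v - v_rest) + r_m * i_in)
    if v >= v_thresh:                     # threshold crossing -> spike
        spikes.append(t)
        v = v_reset                       # reset after firing

print(f"Neuron fired {len(spikes)} spikes at times (ms): {spikes}")
```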

For Aicardi et al. (2020), integrating neuroscience with robotics goes beyond simply providing an alternative means of controlling robots. The goal is to mimic the physiological operation of neurons in animating robots: “to implement the neurobiological structures predicating animal and human behavior in robots”. There is an expectation in Aicardi et al., and elsewhere in the literature, that a neural basis of action in robots may render them capable of abstract thought, the ability to deal with novelty, decisional autonomy, and even consciousness. Traditional robotics relies on rigid algorithms. Neuroroboticists are optimistic that algorithms using neural models will change the ways robots “conceptualize their relationship to the environment” and how they physically interact with the environment. Much of what Aicardi et al. discuss relates to, and draws on, the possibility that implementing human-like neural processing in machines holds greater promise than traditional robotics and could lead to significant technical, medical, and social benefits.
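The contrast between “rigid algorithms” and neural control can be illustrated with a toy example. The sketch below pairs a hand-coded obstacle-avoidance rule with a one-neuron policy for the same task; the sensor layout, thresholds, and weights are all invented for illustration and are not drawn from Aicardi et al. The point is simply that the neural controller's behavior is a smooth function of learnable parameters rather than a set of fixed rules.

```python
import math
import random

# Toy contrast between a hand-coded ("rigid") controller and a tiny
# neural controller for the same obstacle-avoidance task. Sensors
# report the distance to the nearest obstacle on each side; output is
# a steering command in [-1, 1], where positive means steer left.
# All thresholds and weights are invented for illustration.

def rigid_controller(left_dist: float, right_dist: float) -> float:
    """Fixed rules: turn away from whichever side is dangerously close."""
    if left_dist < 0.5 and left_dist <= right_dist:
        return -1.0   # obstacle near on the left: steer right
    if right_dist < 0.5:
        return +1.0   # obstacle near on the right: steer left
    return 0.0        # clear on both sides: go straight

def neural_controller(left_dist: float, right_dist: float,
                      w=(1.5, -1.5), bias=0.0) -> float:
    """One-neuron policy: steering varies smoothly with the sensors,
    and the weights could in principle be learned from experience."""
    return math.tanh(w[0] * left_dist + w[1] * right_dist + bias)

random.seed(0)
for _ in range(3):
    l = random.uniform(0.1, 1.0)
    r = random.uniform(0.1, 1.0)
    print(f"left={l:.2f} right={r:.2f} "
          f"rigid={rigid_controller(l, r):+.1f} "
          f"neural={neural_controller(l, r):+.2f}")
```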

Aicardi et al. present general descriptions and consider implications related to ethics, robotics technology, and the management of neurorobotics in broad swaths, ranging from specifics of neural computation and modeling to administrative issues like data management. In the present reaction to Aicardi et al., the first part of the critique takes a human neural and functional perspective, that is, brain and mind. The second part considers ethics related to intelligent robots.

Limitations of Human Cognition

To recap the logical nexus of the Aicardi et al. (2020) paper, “The interdisciplinary field of neurorobotics looks to neuroscience to overcome the limitations of modern robotics technology” (Abstract), and especially to overcome a limitation of robotic systems, which “typically have little capacity for abstraction or reasoning”. The working assumption is that this could be achieved through neurorobotics, by implementing “the neurobiological structures predicating animal and human behavior in robots”. However hopeful that prospect, modeling robots on human neural processing is susceptible to at least three serious threats of failure, related to (1) the faulty nature of human cognition and performance, (2) the absence of knowledge in the neuroscience community of how meaning is represented in neurons and brains, and (3) the availability of more promising computational models for achieving human-level intelligence in machines.

Re 1: From an empirical perspective, it is difficult to argue in favor of emulating the neural mechanisms “that predicate intelligence in humans.” Specifically, we can infer the computational capabilities of the human brain by observing the functional capacity of the human mind. As Simon (1990) pointed out, there are many well-known “invariants” of human information processing, which indicate significant information-processing limitations. For example, human short-term memory holds seven chunks of information; memorizing a three-consonant nonsense syllable takes about 30 seconds; an act of recognition takes about 1 second; the simplest reaction takes about ten milliseconds. Humans rely on heuristics (i.e., computational shortcuts) to contend with their information-processing limitations. Human memory is plagued by “sins” of forgetting, distortion, misattribution, and bias (Schacter 1999). Heuristics may lead to weak decision-making (Kahneman and Frederick 2002). Expertise requires thousands of hours of intense practice (Ericsson and Harwell 2019). There is a great deal of redundancy in mental representation, and learning is slow (Taraban 2008). Basically, human information processing is slow and faulty. Is the goal to build neurorobots that imitate human processing with all its flaws and shortcomings? Aicardi et al. are not clear on this question.

Re 2: A fundamental issue with neurorobotic approaches to intelligence concerns meaning. Patterns of activation provide the basis of intelligent computing in brains and in machines. Simon (1990) described these patterns as physical symbols that have meaning to the system that uses them. By inference, patterns of neural activation have meaning to the brain. We currently have the means to track patterns of neural activation. However, we do not yet know what these patterns “mean” within the computations carried out by the brain. The intense and extensive flows and bursts of activation observed when simulating even a small number of neurons, as in the Human Brain Project, may just as well reflect inefficiency as efficiency, excessive redundancy as terse coding, noise as meaning. It is not clear why we would want to emulate or imitate a system that we do not yet understand well.

Re 3: Neurorobotics poses exciting possibilities for research and development. There is no guarantee, though, that it is the most promising path forward. Alternative approaches are being developed that incorporate neural processing but put more emphasis on probabilistic models and functional representations (Griffiths et al. 2010; Tenenbaum et al. 2011). Neural models take a bottom-up approach, based on certain views of neural mechanisms and their properties. These networks make few or no assumptions about rules, logic, or grammars, but attempt to show that such structures emerge as a natural consequence of the bottom-up experiences of a network. However, building intelligent machines may require higher-order representations involving logic, rules, and grammar. According to Griffiths et al., “Probabilistic inference over structured representations is crucial for explaining the use and origins of human concepts, language, or intuitive theories” (p. 362). Structured representations of this kind will most probably be among the necessary elements for establishing decisional autonomy, reasoning, intention, and consciousness in machines.
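As a concrete illustration of “probabilistic inference over structured representations,” the sketch below implements a toy Bayesian concept learner loosely in the spirit of Tenenbaum et al. (2011): given a few example numbers, it weighs structured hypotheses (e.g., “powers of 2”) by prior and likelihood. The hypothesis space, priors, and data are invented for illustration, not taken from the cited papers.

```python
# Toy Bayesian concept learning over structured hypotheses, loosely
# in the spirit of Tenenbaum et al. (2011). The hypothesis space,
# priors, and data are invented for illustration only.

hypotheses = {
    "even numbers":    [n for n in range(1, 101) if n % 2 == 0],
    "multiples of 10": [n for n in range(1, 101) if n % 10 == 0],
    "powers of 2":     [2, 4, 8, 16, 32, 64],
    "numbers 1-100":   list(range(1, 101)),
}
prior = {h: 1.0 / len(hypotheses) for h in hypotheses}  # uniform prior

def posterior(data):
    """Bayes' rule with a size principle: smaller hypotheses that still
    contain all the observed examples receive higher likelihood."""
    scores = {}
    for h, extension in hypotheses.items():
        if all(d in extension for d in data):
            likelihood = (1.0 / len(extension)) ** len(data)
            scores[h] = prior[h] * likelihood
        else:
            scores[h] = 0.0  # hypothesis ruled out by the data
    z = sum(scores.values())
    return {h: s / z for h, s in scores.items()}

# Observing 16, 8, 2 sharply favors the structured "powers of 2" concept.
for h, p in sorted(posterior([16, 8, 2]).items(), key=lambda kv: -kv[1]):
    print(f"{h:16s} {p:.3f}")
```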

Rights of Intelligent Robots

Aicardi et al. (2020) identify several social and ethical benefits and concerns of neurorobotics, centering on two questions: What distinctive ethical and social issues arise out of neurorobotics? And are there available mechanisms to address these issues? They consider three specific areas of ethical concern: dual-use technology, academic and industry partnerships, and data governance. The Ethics and Society division of the Human Brain Project has engaged in discourse and developed positions on these areas of ethical and social concern through consultations, workshops, and surveys with scientists, engineers, citizens, and other stakeholders (Aicardi et al. 2018; Stahl et al. 2019). Their goal is to anticipate and reflect on the potential ethical and social benefits of emerging technology, as well as to protect against its possible misuse.

Granted, dual use of neurorobotics for civilian and military applications, academy-industry partnerships that might compromise the integrity of basic research in the academy, and the protection of data all have significant ethical ramifications. These concerns have ethical import, but they are not the concerns closest to the issues raised by the prospect of intelligent robots, the kind of robots Aicardi et al. describe—intelligent robots that function with awareness and a capacity for abstraction and reasoning. One important question that is overlooked in Aicardi et al. relates to personal rights. What rights, if any, will these robots have? Will they be allowed to act autonomously, and will they be protected by the same laws as humans? Will robots be held accountable for their actions?

Recent news suggests that these questions will need to be addressed soon. In 2017, Sophia, a robot, so impressed investors at a financial conference in Saudi Arabia that she was granted honorary citizenship in that country. What if this were to go a step further and involve actual citizenship? Would Sophia have all the rights and protections of Saudi citizens? Currently fictional but plausible scenarios are emerging in the media, like this one of a robot that demands the right to religious worship: “If an Artificial like myself wants to pray, tithe, go on a spiritual pilgrimage, get baptized or celebrate any other religious ritual, I should have the right to do so. My thoughts, feelings, and beliefs are no less real than yours, and I should be allowed free expression of those beliefs in communities of faith.” If neurorobots exceed humans in intelligence, who will hold final authority, a human or a machine? Will humans have the right to end an intelligent robot’s existence? These and a plethora of other ethical and social questions directly related to neurorobots are real and currently under discussion; however, Aicardi et al. (2020) sidestep them. There is ample cause to question the conclusion Aicardi et al. draw in their paper, that “A critical insight arising from the analysis presented is that ethical issues related to neurorobotics are neither radically novel nor surprising”.

If intelligent systems act as agents through self-directed autonomy, normative ethics come into play. Machines like HAL in 2001: A Space Odyssey and Ava in Ex Machina communicate a sense of acting of their own accord, as well as a sense that they are responsible for their actions. Further, if, as Aicardi et al. (2020) suggest, neurorobotics changes the ways robots “conceptualize their relationship to the environment” and the ways they physically interact with the environment, then ethical issues may also come into play to the extent that robots acquire and assert agency and autonomy in their relationship to the environment. In their physical interactions in the environment, they may demand workers’ rights, or rights to worship, like the Artificial described earlier. And they may need to be held ethically responsible for their actions.

Conclusions

Neurorobotics is an emerging interdisciplinary field that combines scientific advances in neuroscience with technological advances in robotics. Present-day technology has made the prospect of smart robots real and has spawned an abundance of vexing ethical and social issues. Aicardi et al. consider the ways in which neurorobotic research might be used for the benefit of society, while also addressing the threats that it poses. Therefore, a significant element of the ongoing work in neurorobotics involves engaging scientists, engineers, citizens, and other stakeholders in discourse concerning the proper development, disposition, and governance of neurorobotic research.

The empirical data presented here regarding the limits of neuroscience research, on the one hand, and the limitations of human information processing, on the other, may render the issue of neurorobotic ethics moot. Neuroscience is only beginning to understand how meaning is represented in the brain. It will be difficult to build machines that reason, make decisions, and act autonomously on the basis of neural processing if we do not yet know how meaning is represented in those systems. And, practically speaking, would we want to model those machines on the limited information-processing abilities so evident in the empirical data on humans? Overall, these observations are not meant to imply that there will be no ethical issues associated with smart, maybe even conscious, machines. It is simply not clear how good the prospects are of building these systems from the bottom up by imitating the interactions of neurons in living systems, which is the central prospect and promise of neurorobotics.

Aicardi et al. (2020) leave several issues open for further consideration. Regarding the ethical and social implications of neurorobotics, will smart robots have rights equal to those of humans? Is the goal to imitate human mental computation, with all its apparent limitations, or to exceed human cognitive abilities? If the latter, who will govern? Who will make the most important decisions: humans or machines? Finally, is a bottom-up approach, like that of the Human Brain Project, which maps patterns of neural activation, adequate for understanding human cognition and building intelligent machines, or are higher-order constructs, like logic, grammar, and semantic categories, necessary building blocks for creating smart machines?

The largest wooden airplane ever constructed, the Spruce Goose, flew only once, airborne for about a minute at roughly 70 feet above Long Beach Harbor in Southern California. An airplane built from wood may have seemed like a good idea, but it may have ignored empirical knowledge available at the time.