Cognitive technology

Cognitive technology (CT), also referred to as cognition-related technology, is an umbrella term used to designate the realm of technologies that assist, enhance or simulate cognitive processes or that can be used by humans “for the achievement of cognitive aims” (Dascal and Dror 2005).

The term CT was originally coined in the context of educational psychology to describe strategies and tools that could facilitate cognitive processes such as learning and problem solving (Sweller 1989). With advances in personal computing, the notion of CT has been increasingly used to refer to “virtual environments, new computer devices and software tools” (Beynon et al. 2003) or other “informational artifacts” (Gorayska and Mey 1996) that can support or expand human cognition. An important step towards the establishment of CT as an area of scientific investigation was the creation in the late 1990s of a Cognitive Technology Society (Walker and Herrmann 2004) and the subsequent organization, during the early 2000s, of several CT-focused international conferences where experts from various fields of the cognitive sciences gathered to discuss “the impacts these technologies will have on human cognitive and social capacities” (Beynon et al. 2003).Footnote 1

In the last decade, in parallel with advances in Artificial Intelligence (AI), the label of CT has gained momentum in computer science and in the ICT industry to describe information technologies capable of performing cognitive tasks traditionally performed by humans (Manuti and de Palma 2018; Schatsky et al. 2015), in particular when they are used to “assist and influence humans’ mental activities” (Kiger 2017). Among other companies, IBM has put CT at the center of its business transformation in what it calls the “cognitive era”.Footnote 2

CT is a macro-domain encompassing, at least, two major sub-domains:

  a. Neurotechnologies: Systems or devices that interface with the human nervous system to assist, enhance or monitor natural cognitive processes.

  b. Artificial Intelligent Systems: Artificial systems that simulate (aspects of) intelligence and exhibit it across a wide range of processes including reasoning, planning, learning, natural language processing, perception and the ability to move and manipulate objects in physical space.

Neurotechnologies include brain-computer interfaces (BCIs), electrical and magnetic brain stimulation, neurosensor-based vehicle operator systems, real-time neuromonitoring, neural prosthetics and others. These technologies are capable of establishing either invasive or non-invasive connection pathways between (human) nervous systems and computing devices for a variety of purposes. For example, medical applications of BCI technology have shown clinical effectiveness in monitoring, repairing, assisting or augmenting cognitive or sensory-motor functions in patients experiencing cognitive or sensory-motor impairments including spinal cord injury (Ikegami et al. 2011), stroke (Buch et al. 2008), motor neuron diseases such as amyotrophic lateral sclerosis (ALS) and muscular dystrophy (Kübler et al. 2005; McCane et al. 2015), and, more recently, age-related cognitive decline (Lee et al. 2013) and dementia (Liberati et al. 2012). In parallel, direct-to-consumer applications of electroencephalography-based neuromonitoring are gaining increasing commercial interest as tools for self-monitoring and self-quantification, as well as for physical and mental training.

Artificial intelligent systems include virtual personal assistants, question answering computer systems (such as IBM Watson), intelligent robots, self-repairing hardware and others. These systems mimic (components of) functions that humans usually associate with cognitive agents such as flexibility, automatic self-improvement through experience (as in the case of machine learning algorithms), perception of the external environment (e.g. speech recognition, facial recognition, object recognition etc.), motion and manipulation (e.g. mapping, motion planning, path planning and localization) and knowledge representation.

Both neurotechnologies and artificial intelligent systems fall into the category of CT when they are utilized with the purpose of influencing, assisting, or augmenting human cognitive capacities. However, the two subdomains tend to differ with regard to how such an influence on cognition is realized. Neurotechnologies mostly affect cognition by intervening on “internal information processing systems”, i.e. by mapping or electrically modifying its underlying neurobiology. In contrast, artificial intelligent systems mostly intervene at the level of “external processing systems” (Bostrom and Sandberg 2009), that is, they emulate (aspects of) human intelligence and provide external cognitive resources to support human cognition without any direct interface with the nervous system, a phenomenon known as environmental enrichment (Halperin and Healey 2011). Since the external processes enabled by artificial intelligent systems are, under some circumstances, functionally similar—according to some researchers, even equivalent—to internal processing, authors have argued that these technologies might be considered, under such circumstances, extensions of the human mind (Clark 2001; Clark and Chalmers 1998; Fitz and Reiner 2016). For example, Clark (p. 4) has argued that CTs “do far more than merely allow for the external storage and transmission of ideas” and rather “constitute […] a cascade of mindware upgrades: cognitive upheavals in which the effective architecture of the human mind is altered and transformed” (Clark 2003).

In recent years, these two domains have experienced a strong convergence. In fact, AI features have been increasingly embedded in most advanced neurotechnologies. For example, most current BCIs use components of artificial intelligence, especially classifiers based on machine learning (ML) algorithms, to extract, classify and decode brain signals (Müller et al. 2008). At the same time, several artificial intelligent systems can be controlled via direct brain-machine interfaces or are designed to mimic the functioning of the human brain. These include smartphones and wearables (Powell et al. 2013), semi-autonomous cars (Göhring et al. 2013), unmanned aerial vehicles (Kosmyna et al. 2015), and assistive robots (Tonin et al. 2011). This convergence is also occurring at the market level, with major players in artificial intelligence becoming increasingly involved in the neurotechnology sector. For example, IBM, a major producer of artificial intelligent systems and developer of the famous intelligent digital assistant Watson, has entered the neurotechnology market and is among the top-15 patent holders in pervasive neurotechnology (Fernandez and Nikhil 2015). This market integration has even led to the creation of entirely new research and business ventures precisely designed with the mission of accelerating the convergence of neurotechnology and artificial intelligent systems. An example of this trend is a newly launched venture called Neuralink. During the 2016 Code Conference, entrepreneur Elon Musk announced a plan to accelerate the convergence between neurotechnology and artificial intelligence systems, an announcement that received extensive media coverage. Although Musk himself remained cryptic about this project, he initially dubbed it “neural-lace” to emphasize the element of entwining brains and artificial systems together. In March 2017, Musk unveiled his project and launched Neuralink, a company whose stated mission is to “merge the human brain with AI” (Statt 2017).
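To make concrete what it means for a BCI to rely on ML classifiers to extract, classify and decode brain signals, the sketch below implements a generic decoding pipeline on synthetic EEG epochs: band-power features are computed per channel and fed to a linear discriminant classifier. The simulated data, the channel count, the 8–12 Hz band and the choice of classifier are illustrative assumptions, not a description of any system cited above.

```python
"""Minimal sketch of a BCI decoding pipeline: band-power features + LDA.

The EEG epochs below are synthetic (random noise with a weak class-dependent
alpha-band component); channel count, band limits and classifier are
illustrative assumptions, not parameters of any system cited in the text.
"""
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
FS = 250                                   # sampling rate (Hz), assumed
N_EPOCHS, N_CH, N_SAMP = 200, 8, 2 * FS    # 2-second epochs, 8 channels

# Simulate two classes: class 1 carries slightly more 10 Hz (alpha) power.
t = np.arange(N_SAMP) / FS
labels = rng.integers(0, 2, N_EPOCHS)
epochs = rng.standard_normal((N_EPOCHS, N_CH, N_SAMP))
epochs += 0.4 * labels[:, None, None] * np.sin(2 * np.pi * 10 * t)

def band_power(epoch, fs, lo=8.0, hi=12.0):
    """Average spectral power per channel in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(epoch.shape[-1], d=1.0 / fs)
    psd = np.abs(np.fft.rfft(epoch, axis=-1)) ** 2
    band = (freqs >= lo) & (freqs <= hi)
    return psd[..., band].mean(axis=-1)          # one feature per channel

X = np.array([band_power(e, FS) for e in epochs])    # shape: (epochs, channels)
clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, labels, cv=5)
print(f"Cross-validated decoding accuracy: {scores.mean():.2f}")
```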

It is worth pointing out that CT is a functional characterization: it is based not on the type of hardware or software but on the type of function that a given technology executes, namely assisting, supporting or expanding human cognitive capacities. Therefore, there is an increasing need to address the ethical and social implications of CTs not on the basis of their hardware or software realization but on the basis of how these technologies influence human cognition.

This consideration has generated more interaction and dialogue between two main research communities: the Neuroethics community—primarily concerned with the ethics of neurotechnology (Goering and Yuste 2016; Illes and Bird 2006; Jotterand and Ienca 2017)—and the Computer Ethics community—primarily concerned with the ethics of computer systems and AI (Floridi 2010). The more technology becomes capable of interfacing with, assisting and, possibly, expanding human cognition, the higher the need for comprehensive conceptual and normative approaches that study (the ethics of) cognition across the entire bio-digital continuum. Some early signs of convergence at the level of ethical and social assessment are already observable. For example, the 2016 Annual Meeting of the International Neuroethics Society in San Diego featured a public event on future and emerging technologies where a panel of experts discussed the ethical and social implications of both neurotechnologies and artificial intelligent systems such as care robots and intelligent digital assistants (Ienca 2016). Similarly, the 2017 IEEE TechEthics Conference in Washington D.C. (https://techethics.ieee.org/events/dc-2017) featured one keynote talk and one panel on neurotechnology.

In light of the increasing convergence between these two main sub-domains, this paper will address the ethics and governance of cognitive technologies in a unitary manner.

Ethics, security and the dual-use dilemma

Some implications of cognitive technology have sparked ethical controversy. These include issues of cognitive enhancement and augmentation (Farah et al. 2004; Sententia 2004; Yuste et al. 2017), superhuman intelligence (Russell et al. 2015), agency and identity (Gilbert 2017; Yuste et al. 2017), human–machine hybridization (Ienca 2018), algorithmic bias (Kirkpatrick 2016; Yuste et al. 2017) and others. More recently, the application of CTs for purposes such as military dominance, surveillance, and cybercriminality has also associated CT with the ethical problem of dual use (Ienca et al. 2018; Taddeo and Floridi 2018).

Dual-use technologies are artefacts that can be coopted “for making things quite unrelated to their primary purposes” (Forge 2010), in particular when these secondary purposes involve activities that are ethically questionable or potentially detrimental to individuals and groups, such as military operations, terrorism or general criminality. In ethical terms, the dual-use potential inherent in technological artefacts is often presented as a conflict between opposing ethical duties (Selgelid 2009); for example, between promoting good through free technological development and preventing the collateral harm that may result from the cooptation of such technological potential for new purposes. A common example is the conflict between promoting health through effective clinical applications of a civil technology X and providing, through military operations involving X, resources for the harming of innocents.

Information technologies have instantiated a dual-use potential since their very first applications (Floridi 2014b). During the Second World War, Alan Turing’s early work on computability was coopted for military purposes, especially for the cryptanalysis of Morse-coded radio communications of the Axis powers enciphered using Enigma machines (Hodges 2012). The first contracts for packet network systems, including the development of the ARPANET, were awarded by the US Department of Defense as early as the 1960s, and the first rogue program to spread through a network was created as early as 1971.Footnote 3 Today, several subcomponents of the digital revolution—sometimes referred to as the “4th revolution” (Floridi 2014a)—including networks, mobile communication technologies and robotics, demonstrably raise dual-use concerns.

Reports show that cyber-attacks have been growing in frequency and size in recent years. According to Europol’s 2016 Internet Organised Crime Threat Assessment (IOCTA), cybercrime offences “remain on an upward trend and have reached very high levels” (Europol 2016). In October 2016, a massive cyber-attack targeted one of the central nodes of Internet traffic in the US, striking Twitter, Paypal, Spotify and the sites of an infrastructure company in New Hampshire. This increase in the volume, scope and material cost of cybercrime has dramatically affected public perceptions of information security. Survey data from the World Economic Forum’s Global Risks Report 2016 show that cyber-attacks are perceived as being among the top five risks globally (WEF 2016). Increasingly, cyber-attacks have become a critical problem not only for private businesses, but also for public entities such as democratic governments (Mitterlehner 2014), healthcare institutions (Ehrenfeld 2017), and national security organizations (Nissenbaum 2005). Cyberterrorist acts have increased in number, magnitude and variety, causing destruction and harm to personal computers, networks and the public Internet—including large-scale disruption of government systems, hospital records, and national security programs—for personal or ideological objectives (Matusitz 2005). When occurring between State actors, cyberoffences have shown the potential to influence geopolitical scenarios and strategic equilibria (Deibert 2015; Lacy and Prince 2018). A widely media-covered example is the role of cyber-attacks during the 2016 US presidential election, which culminated in the unprecedented hacking of a presidential candidate’s email server and the ensuing diplomatic crisis between the US and Russia (Stewart III, 2017). Concurrently, cyberwarfare concerns have emerged as a consequence of using CTs such as artificial neural networks, gun data computers, secure cryptoprocessors, and robotics for military purposes (Gershgorn 2016; Sapaty 2015). The large-scale deployment of AI has been associated by experts with an increased risk of triggering a cyber arms race, which could ultimately escalate into conventional warfare (Taddeo and Floridi 2018). As Taddeo has observed, these emerging trends in cybercrime, cyberterrorism, and cyberwarfare “remark on the extent to which our societies depend on ICTs” and show how information technology has changed “the very infrastructure on which our societies rely” (Taddeo 2017). Following a socio-technological trend known as the Internet of Things (IoT), a large number of physical devices are becoming increasingly embedded with computing technology for a variety of purposes. Internetworked technologies embedded with electronics, software, sensors, actuators, and network connectivity are being tested or preliminarily deployed by armed forces and governmental agencies (Callam 2015).

One common feature of these diverse cybercrime and cyberwarfare trends is that they often involve the use of computing systems with the deliberate purpose or unintended consequence of eroding basic democratic principles like individual freedoms, civil liberties, rule of law and democratic elections. This has raised the question of whether democratic principles and values will survive the digital era (Helbing et al. 2017).

This technology-mediated erosion of democratic principles is not exclusively caused by cyberterrorism and cyberwarfare. Global surveillance programs reportedly run by national security agencies and other governmental actors are also fueling controversies over the violation of civil liberties and other democratic principles. Government agencies in various countries have proven able to deploy technology infrastructures for mass surveillance, enabling the collection of digital detritus (e-mails, calls, text messages, cellphone location data and catalogs of computer viruses) from individual citizens and groups. The government of China, for example, has reportedly installed over 20 million surveillance cameras across the country over the last few years and merged state surveillance with big data analytics to curb social unrest (Langfitt 2013). In 2014, the Chinese Ministry of Industry and Information Technology ordered a major mobile telephone company to put a real-name registration scheme into effect and to “regulate the dissemination of objectionable information over the network” (Limited 2014). In Russia, the Federal Security Service (FSB) is legally allowed to use a system for Internet-based search and surveillance called the System for Operative Investigative Activities (SORM). Since 2000, the FSB has no longer been required to provide telecommunications and Internet companies with documentation on targets of interest prior to accessing information, and in 2014 the use of SORM was extended to the monitoring of social networks, chats and online forums (Paganini 2014). In response to these attempts at invasive governmental control, unauthorized disclosures of national security documents—as in the famous case of Edward J. Snowden vs. the United States’ National Security Agency (NSA)—have been advocated by some authors as a proportionate response to preserve personal privacy and set the limits of invasive State-based surveillance (Lyon 2014).

This paper will argue that cognitive technologies can further “jeopardize democracy” (Vincent 2018) if they are not adequately aligned with fundamental democratic values and principles. I will proceed as follows. First, I will review dual-use issues associated with CT. Second, I will argue that the preferable approach to the governance of CT in light of dual-use risk is neither strict regulation nor laissez-faire but rather proactive democratization. In particular, I will argue that the potential held by CT for influencing human cognition urges the development of inclusive strategies that can direct cognitive technology towards the benefit of people and the whole democratic society, not just restricted groups. Based on these considerations, I will outline six possible steps towards the proactive democratization of cognitive technology in the upcoming decade.

Dual-use cognitive technology

Cognitive technologies hold promising potential for improving the lives of human beings through a wide spectrum of non-hostile civil applications. For example, intelligent cognitive assistants are opening new possibilities for supporting people with cognitive deficits, such as older people and people with dementia (Ienca et al. 2017; Jamieson et al. 2014). Similarly, BCIs are becoming increasingly effective in enabling novel opportunities for communication in patients suffering from stroke, spinal cord injury or amyotrophic lateral sclerosis (ALS) (Buch et al. 2008; Ikegami et al. 2011; Kübler et al. 2005).

At the same time, however, these technologies have recently shown susceptibility to dual use, especially in the context of military applications. In recent years, several global players including the USA, the EU, Russia, Iran, India, China and Japan have been actively working on military applications of neurotechnology, especially BCI (Moore 2013). Tennison and Moreno (2012) have comprehensively reviewed the spectrum of neurotechnologies with applications in military and national security contexts, with a special focus on projects funded by the United States’ Defense Advanced Research Projects Agency (DARPA). Their review identified three main categories of dual-use neurotechnology: brain-computer interfaces (BCIs), neurotechnologies for warfighter enhancement, and neurotechnological systems for deception detection and interrogation (Tennison and Moreno 2012). In a similar fashion, Miranda et al. have assessed DARPA-funded BCI applications for military purposes. Their review identifies two major avenues of ongoing research: (1) restoring neural and/or behavioral function in warfighters, and (2) enhancing training and performance in warfighters and intelligence agents (Miranda et al. 2015). For example, the Neurotechnology for Intelligence Analysts (NIA) program was designed to develop BCI systems utilizing non-invasively recorded EEG signals to significantly increase the efficiency and throughput of imagery analysis (Miranda et al. 2015). Using the same technological paradigm, national security uses of BCI include the acquisition of neural information gathered from warfighters’ brains to modify their equipment accordingly and the development of a Cognitive Technology Threat Warning System (CT2WS) that converts subconscious, neurological responses to danger into consciously available information (Kirkpatrick 2007).

Current military applications of artificial intelligent systems mostly focus on non-cognitive applications such as unmanned aerial vehicles (UAVs, commonly known as drones), unmanned ground vehicles (UGVs) such as the MIDARS, a four-wheeled robot that automatically performs random or preprogrammed patrols, and other autonomous or semi-autonomous robots such as Atlas, a bipedal humanoid robot designed for search and rescue tasks. In the near future, however, artificial intelligent systems will likely be used to augment the physical and cognitive capacities of combatants. For example, the US Department of Defense has announced the launch, by the end of 2017, of the Tactical Assault Light Operator Suit (Talos), a piece of military hardware that encloses soldiers within a computerized exoskeleton (White 2014). In parallel, augmented reality (AR) systems are being tested with the purpose of enhancing attention, learning (Mao et al. 2017) and situational awareness (Gans et al. 2015). Particular ethical concern has been raised by a special type of robotic application, the so-called lethal autonomous weapons (LAWs). Unlike vehicles that are remote-controlled by a pilot or designed for non-combat tasks such as reconnaissance, surveillance, and sniper detection, LAWs are designed to replace an important component of human cognition, namely decision-making.

Besides State-funded military applications, cognitive technologies have also proven to hold dual-use potential in relation to non-State cyberterrorism and general cybercrime. Pycroft et al. (2016) have illustrated the possibility of targeted attacks against users of invasive neuromodulation technologies—especially deep brain stimulation (DBS)—where attackers may take control of the user’s motor functions or emotional states, or simply disrupt the device’s functionality (Pycroft et al. 2016). In experimental settings, Martinovic et al. (2012) have demonstrated the actual feasibility of performing side-channel attacks against users of currently marketed BCIs to reveal private and sensitive information about the users, such as their PIN codes, bank membership, month of birth, debit card numbers, home location and faces of known persons (Martinovic et al. 2012). Hacking attacks have also proven feasible against artificial intelligent systems, especially autonomous cars. The findings presented to the 2011 National Academies Committee on Electronic Vehicle Controls and Unintended Acceleration demonstrated the possibility of taking control of a car’s computer system without direct physical access by exploiting the car’s Bluetooth connection (National Academies of Sciences 2012).

Finally, several cognitive technologies can, due to their dual-use character, be used as powerful surveillance tools for national security, judicial and military purposes. While no deception detection technology is currently being used in official security operations, several devices in development are either directly commissioned by DARPA (Langleben et al. 2005, 2016) or, as in the case of the No Lie MRI device, market their services to national security agencies including the Department of Homeland Security (Hughes 2010). This evidence shows that CTs can potentially be coopted for a number of purposes that involve the possible diminishment or even violation of democratic principles and values.

In this scenario, it is important for the future of democratic societies to anticipate possible challenges associated with the governance of cognitive technology and to prevent these systems from being coopted by malevolent governmental or non-governmental actors for anti-democratic aims, including the triggering of a cyber arms race, the limitation of individual liberties, disproportionate mass surveillance, the exacerbation of intra- and intergroup differences in social dominance, or direct harm to individuals and groups. This risk is believed to be particularly pressing in light of the ongoing “shrinking” of Western democracy as a consequence of the recent rise of nationalism and authoritarian populism (Chacko and Jayasuriya 2017; Inglehart and Norris 2016). In such a rapidly changing global scenario, it is vital for democratic societies to prevent cognitive technologies from being used to accelerate the crisis of democracy or to empower actors pursuing anti-democratic goals. Rather, coordinated and proactive approaches are required to ensure that future developments of CT will be compatible with the principles of liberal democracy or will even expand those principles through the human-centered permeation of such technologies in human societies. This paper proposes a preliminary characterization of the basic principles and safeguards needed to democratize cognitive technology in the upcoming decades.

Democratizing cognitive technology

Given their high dual-use potential, cognitive technologies have raised ethical concerns and elicited several proposals for policy response. Back in 2006, delegates of a workshop organized at Arizona State University addressed the issue of sociocultural risk in relation to cognitive technology.Footnote 4 Their analysis identified in cognition-related technology a “capacity for sociocultural change” due to its potential to change human intelligence and performance capabilities, and anticipated that such potential could have destabilizing effects on individuals and groups (Sarewitz and Karas 2007). In the resulting white paper, experts delineated an entire spectrum of possible approaches to the governance and regulation of cognitive technologies that could prevent misuse and unintended risks. The two extremes of this spectrum were represented by the following options:

  a. Laissez-faire approaches, which emphasize the individual freedom of technology producers and end-users as well as the alleged capacity of financial markets to filter out potentially detrimental applications;

  b. Strict regulatory approaches, which emphasize the need for State-led regulatory interventions (often based on essentialist views of human cognition according to which natural cognitive boundaries should not be trespassed through technology).

Laissez-faire approaches are often advocated by producers of commercial neurotechnologies with the purpose of reducing FDA oversight of novel commercial products, especially by limiting the applicability of FDA regulations on mobile medical applications to neurodevices for mental wellbeing.Footnote 5 In contrast, particularly restrictive approaches have recently been advocated by critics of dual-use artificial cognitive systems. The most restrictive of these approaches is the call for a collective ban or moratorium. While a collective ban is usually considered “much too extreme a response” in the context of dual-use neurotechnology (Giordano 2014), it has been advocated by a large number of experts in relation to LAWs. Through the group Campaign to Stop Killer Robots (https://www.stopkillerrobots.org/), over 1000 experts in artificial intelligence signed an open letter calling for a global ban on LAWs, arguing that their development could trigger an arms race in military artificial intelligence and robotics.

This paper attempts to find a third way between these extreme approaches and argues that the best response to dual-use cognitive technology in a free society is a calibrated combination of technological freedom and risk-management strategies based on the principles of open development, responsible innovation and liberal democracy. I call this approach the democratization of cognitive technology. In the following, I will describe this approach by delineating its core ethical principles, and I will make a case for its implementation as a proactive strategy for the governance of CT and its accelerating impacts on human capabilities.

By democratization of a technological domain, I mean, very generally, a process of group decision-making about a certain technology characterized by the possibility of fair access to the technology by all participants and a principle of equality among the participants across various stages of the collective decision-making process.Footnote 6 Consequently, democratizing cognitive technology implies a process of decision making about CT that will guarantee a possibility of fair access to CT for all users and a principle of equality among users during various stages of decision-making (including design, development and application).

In its general definition, this democratizing approach has elements of analogy with both strict-regulatory and laissez-faire approaches. With strict-regulatory approaches it shares the observation (i) that cognitive technology requires urgent ethical assessment and policy interventions to minimize the risks associated with its dual-use potential, and (ii) that markets alone may not be conceptually and practically equipped to provide such assessment and intervention. This observation is based on three factors.

First, novelty: cognitive technology is a relatively recent field of technological development. Consequently, it is still characterized by conceptual muddles and policy vacuums (Moor 2005) that prevent the maximization of benefits of these technologies while minimizing the risks. Many of these muddles and vacuums facilitate new opportunities for malicious exploitation generated by rapid changes in the technological or social environment, unprepared technological infrastructures, defective legal coverage, and the increase in quantity, variety and velocity of data flows (Dupont 2013).

Second, magnitude: CTs hold the potential to influence human cognitive capabilities, hence exerting a non-negligible effect on human cultural evolution and global equilibria. As observed by Moor (2005), neurotechnologies “could be the most revolutionary of all of the technologies” (Moor 2005) given their capacity to reconstruct, manipulate or augment cognitive processes, and to impact human societies in ways that are currently difficult to predict. In the military context, B.E. Moore, lieutenant colonel of the United States Air Force, has predicted that BCI technology “has the potential to revolutionize military dominance much the same way nuclear weapons have done” (Moore 2013). Similar predictions have also been made in relation to artificial intelligent systems (Bostrom 2014). Due to its novelty, this alleged revolutionary potential of CT is still largely unexpressed. To date, for example, artificial intelligent systems are still distinguishable (from the perspective of the Turing test) from human intelligence across many cognitive tasks, while current neurotechnologies enable only a small degree of access to and modification of human neural processing. In the long term, however, the dual-use potential of CTs could enable unprecedented levels of intrusion into personal privacy or modification of personal autonomy (Ienca and Haselager 2016), concentration of economic power, and possibilities for committing offences against individuals and groups (Dupont 2013; Yudkowsky 2008). As such, CTs could affect the fundamental mediators of human social interaction in the information era. Special oversight may be required to guarantee that these potentially revolutionary changes occur in accordance with the mechanisms and values of democratic societies.

The third factor is timing: given their historical novelty, cognitive technologies are still at an initial stage of market maturity and societal adoption. During this introduction phase, a technological trend shows a higher degree of malleability (Moor 2005); therefore, control or change is less difficult to achieve than once the technology has become entrenched. Assuming that CT will be a critical component of our future, human societies are now at a historic juncture at which they can make proactive decisions about the type of co-existence they want to establish with these technologies. Privileging laissez-faire approaches at this stage of development would defer risk-management interventions to a time when cognitive technology is extensively developed and widely used, and hence refractory to modification.

At the same time, the democratizing approach shares with laissez-faire approaches the observation that over-regulation can (a) obliterate the benefits of cognitive technology for society at large, and, if managed by non-democratic or flawed democratic governments, (b) produce an undesirable concentration of power and control. In fact, if adequately implemented, CTs open the prospect of unparalleled improvements in the quality of life of human societies across a wide range of domains: medical, economic, infrastructural, communicational etc. For example, Russell et al. project that, thanks to AI, “the eradication of disease and poverty is not unfathomable” (Russell et al. 2015). Therefore, over-regulatory strategies that limit technological freedom and open development could constrain technological progress and the resulting benefits for individuals and society at large. Second, top-down approaches to regulation could concentrate the power generated by CT among restricted political or economic groups, hence exacerbating existing political and economic inequalities. This risk has accompanied many breakthroughs in the history of technology. For example, during the introduction stage of the computer revolution, US authorities debated “whether a central government database for all United States citizens should be created” (Moor 2005). The creation of such a government database would have produced a very different type of World Wide Web than the current one, with services distributed top-down, more concentration of power and control, and increased intrusion into individual privacy. The resulting decision not to create the database contributed to the current informational landscape.

In the next section, I will describe a proactive democratizing approach to cognitive technology by delineating its core ethical principles. In addition, I will list, as an ostensive description, examples of currently ongoing projects and cooperative efforts that go in the direction of democratizing cognitive technology. It is worth noting that this description should not be seen as an exhaustive characterization of the ethics of CT or as a complete solution to the problems posed by dual-use dilemmas in cognitive technology. Of course, the answer to specific ethical dilemmas arising within this technological domain (e.g. trolley dilemmas for artificial intelligent agents, the personal autonomy of BCI users or the moral desirability of artificial superintelligence) may not necessarily depend on the level of democratic openness of the domain itself. Rather, this description is aimed at providing a preliminary conceptual and normative clarification of the democratizing approach and at opening a public debate on its realization.

Paths to democratization: the six principles

This proposal for democratizing cognitive technology consists of the combination of six normative principles:

  I. Avoidance of centralized control

  II. Openness

  III. Transparency

  IV. Inclusiveness

  V. User-centeredness

  VI. Convergence

These six principles condense and accentuate recurrent normative stances in the literature on the link between computing technology and democracy (Gil de Zúñiga et al. 2010; Helbing et al. 2017; Helbing and Pournaras 2015), and set out a way forward towards responsible and democratic development in cognitive technology. These principles can be used to guide the discussion on responsible innovation in CT at various levels of technology governance, including individual researchers, funding agencies, and national and international regulatory bodies.

Avoidance of centralized control is the principle according to which it is morally preferable to avoid centralized control over CT, so as to prevent the risks associated with the unrestricted accumulation of capital, power, and control over the technology among organized groups such as large corporations or governments. This preventive measure is designed to mitigate two critical types of technological risk. Type one risk: a reduction in the number of actors within the technological domain. To appreciate this type of risk, consider by analogy the transformation of the Internet over time, especially the transition from Web 1.0 to 2.0. While Web 1.0 was characterized by the coexistence of many service generators, the increase in data volumes and users typical of Web 2.0 has been accompanied by a contraction in the number of actors, with most online traffic being driven by a limited number of powerful actors such as Google, Facebook, or YouTube. In the context of CT, the centralization of technological power among certain State or non-State actors could result in monopolistic operations or even destabilize existing patterns of economic, geopolitical and military dominance. In parallel, at the intrastate level, it could centralize power among restricted groups or elites, potentially enabling disproportionate control over the rest of the population and their civil liberties. I call this second scenario type two risk; it can be conceptualized as an asymmetry between the level of governmental surveillance of individual citizens and the level of surveillance of governments by individual citizens.

Normative interventions aimed at limiting this risk of centralization may be conceptualized as cyberethical counterparts of anti-trust laws. Just as anti-trust laws are required to prevent monopolies and eliminate anti-competitive practices, proactive regulatory interventions may be required to prevent practices that restrain access to or the development of CT, or that cause the accumulation of power and control among restricted entities (Posner 2009). Such safeguards should apply to all societal actors and levels (including design, coding, and physical manufacturing), and are intended to allow smaller actors such as small groups or single individuals to enter the domain of cognitive technology and take advantage of its benefits. According to the principle of avoiding centralized control, decentralized development models should be privileged over centralized models. Successful examples of decentralized development are open and participatory platforms such as the free encyclopedia Wikipedia, the open-source operating system Linux (Helbing and Pournaras 2015) and the use of distributed ledger technology in trading and governance (Collomb and Sok 2016; Ølnes et al. 2017). An interesting attempt to implement the principle of decentralized control in the context of CT is Nervousnet, a large-scale distributed platform using sensor networks “to measure the world around us and to build a collective data commons”, which is often presented as a “digital nervous system” (Helbing and Pournaras 2015).

Openness is the principle of promoting universal access to (components of) the design or blueprint of cognitive technologies, and the universal redistribution of that design or blueprint, through an open and collaborative process of peer production. This principle also entails that the outputs of research in cognitive technology should be free of restrictions on access and use. Openness and the avoidance of control are critical requirements for making the capabilities that will be recorded through or infused in cognitive technology—the cognitive capabilities—available to everyone. A good example in this direction is Microsoft’s effort to take the capabilities infused in its intelligent apps and make them available as a set of application programming interfaces (APIs) to every developer.Footnote 7 This attempt is a form of democratization because it enables everyone to use the same building blocks that Microsoft uses to build intelligent devices or to make existing applications more intelligent. Another important step towards the democratization of cognitive technology through openness is the Microsoft-sponsored research company Open AI. Open AI is a nonprofit company dedicated to precluding malicious AI, producing benevolent and safe AI, and ensuring that “AI’s benefits are as widely and evenly distributed as possible” (“Open AI” 2016). Examples of successful application of the openness principle have also emerged within the domain of neurotechnology, especially brain-computer interfacing. A positive example is OpenBCI, an open source brain-computer interface platform created by Joel Murphy and Conor Russomanno in 2013. OpenBCI’s mission is to “provide anyone with a computer, the tools necessary to sample the electrical activity of their brains” and to “harness the power of the open source movement to accelerate ethical innovation of human–computer interface technologies.”Footnote 8 Today, OpenBCI already offers an assortment of open source, versatile and affordable bio-sensing systems to sample electrical brain activity (EEG), some of which can be 3D printed. Development is open and new discoveries are made and shared through “an open forum of shared knowledge and concerted effort, by people from a variety of backgrounds.” Openness of cognitive technology has been seen as a critical strategy for harnessing collective intelligence (Helbing et al. 2017). In fact, a pervasive distribution of CT across all socioeconomic strata of society could empower people and enable more informed and participative deliberation.
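To give a sense of what “the tools necessary to sample the electrical activity of their brains” look like in practice, the sketch below streams a few seconds of EEG-like data through BrainFlow, the open-source SDK commonly used with OpenBCI boards. The use of BrainFlow and of its built-in synthetic test board (so the example runs without hardware) is an assumption made for illustration; the sources cited above do not prescribe any particular SDK.

```python
"""Sketch: sampling brain activity with an open BCI toolchain (assumed: BrainFlow).

BrainFlow's synthetic board is used so the example runs without hardware; with a
physical OpenBCI board one would pass the corresponding BoardIds entry and
connection parameters instead.
"""
import time

import numpy as np
from brainflow.board_shim import BoardIds, BoardShim, BrainFlowInputParams

board_id = BoardIds.SYNTHETIC_BOARD.value
board = BoardShim(board_id, BrainFlowInputParams())

board.prepare_session()
board.start_stream()
time.sleep(3)                      # record roughly three seconds of data
data = board.get_board_data()      # 2D array: rows are channels, columns are samples
board.stop_stream()
board.release_session()

eeg_rows = BoardShim.get_eeg_channels(board_id)
fs = BoardShim.get_sampling_rate(board_id)
eeg = data[eeg_rows, :]

print(f"{eeg.shape[0]} EEG channels, {eeg.shape[1]} samples at {fs} Hz")
print("Per-channel RMS amplitude:", np.round(np.sqrt((eeg ** 2).mean(axis=1)), 2))
```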

In a more abstract sense, openness in CT involves the principle of infusing every application that we interact with, on any device, at any point in time, with (components of) cognitive technology. This process is currently ongoing. For example, an increasing number of routinely used applications incorporate (components of) artificial intelligence. These include search engines, social media, e-commerce services, video-games, medical devices and many others. At the same time, an increasing number of applications are designed to interface human cognition through neurotechnology. For example, several mobile communication companies including Samsung and Apple are testing brain-controlled handheld devices (Powell et al. 2013). In this more general sense, openness is strictly linked to the avoidance of centralized control. In fact, the more cognitive capabilities are pervasively embedded and disseminated across the entire digital ecosystem, the harder it is for actors to centralize power and exert control over those systems. In the words of engineer and entrepreneur Elon Musk: “if everyone has AI powers, then there’s not any one person or a small set of individuals who can have AI superpower” (Mascarenhas 2016). Therefore, the principle of openness incentivizes the infusion of cognitive capabilities into an increasing number and variety of technologies in order to prevent their uneven accumulation among restricted applications or tools.

It is worth considering, however, that while “openness may reduce the probability of AI benefits being monopolized by a small group” (Bostrom 2017), it could also cause unintended detrimental consequences. For example, Bostrom (2017) has argued that a high degree of openness could exacerbate a racing dynamic in which competitors trying to be the first to develop advanced AI may accept higher levels of existential risk in order to accelerate progress (Bostrom 2017). Further research is required to assess which degree of openness would ensure the optimal balance between the sharing of benefits and individual, national and international security.

Transparency is the principle of enabling a general public understanding of the internal processes of cognitive technologies. This is particularly challenging for approaches such as artificial neural networks, which learn or evolve to carry out a task in the absence of clear mappings to chains of inference that are easy for humans to understand. This path to democratization through transparency is critical for artificial cognitive systems. For example, the principle of transparency is at the core of IBM’s “Guiding Ethics Principles for the Cognitive Era”, a recently released ethics framework characterizing IBM’s digital transformation. According to this framework, “for cognitive systems to fulfil their world-changing potential”, it is vital to ensure the trust of end-users in these systems through transparency-enhancing strategies (IBM-THINK 2017). In particular, there is a need for transparency in relation to (a) when and for what purposes AI is being applied in cognitive solutions; (b) the major sources of data and expertise “that inform the insights of cognitive solutions, as well as the methods used to train those systems and solutions”; and (c) data protection and ownership. It is worth noting that the transparency principle also has educational relevance, since it makes the informational tools needed to learn and use cognitive technologies available to everyone, including students, workers and citizens in general. Ideally, with advancing CT, this educational function will be institutionalized by the school system with the purpose of helping future citizens acquire the skills, knowledge and norms to engage successfully and securely with cognitive systems and to use those skills and knowledge for achieving their life objectives.

An example of practical realization of algorithmic transparency is Automatic Statistician, an intelligent software capable of spotting trends and anomalies in data sets and presenting its conclusion, including a detailed explanation of its reasoning (Ghahramani 2015). According to the researcher who created this software, such transparency is “absolutely critical” not only for applications in science but also for many commercial applications (ibid). At the policy level, authors have linked the principle of transparency to public trust and proposed that “in order to create sufficient transparency and trust, leading scientific institutions should act as trustees of the data and algorithms that currently evade democratic control” (Helbing et al. 2017). This proposal would be particularly relevant in the context of data and algorithms related to reasoning and decision-making as these could have a profound impact on individual deliberation and social cohesion.
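As a toy illustration of what such explanation can look like (a generic example, not the Automatic Statistician itself), the sketch below trains a shallow decision tree and prints the human-readable rules behind its predictions. The dataset (scikit-learn’s Iris sample data) and the choice of model are assumptions made purely for illustration.

```python
"""Sketch: algorithmic transparency via a model whose reasoning can be inspected.

A shallow decision tree is trained on the Iris dataset and its learned decision
rules are printed as plain text, so a human can follow the chain of inference
behind each prediction. Dataset and model are illustrative choices only.
"""
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(iris.data, iris.target)

# The full chain of inference learned by the model, in readable form.
print(export_text(tree, feature_names=list(iris.feature_names)))

# A single prediction, reported together with the class name it maps to.
sample = iris.data[:1]
print("Predicted class:", iris.target_names[tree.predict(sample)[0]])
```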

Inclusiveness is the principle of ensuring that no group of individuals or minority is marginalized or left behind during the process of permeation of cognitive technology in our society. For example, a 2012 study co-authored by a senior FBI technologist found that the face recognition algorithms of commercial vendors consistently performed 5–10% worse on African Americans than on Caucasians (Klare et al. 2012). As the use of face recognition technology is expected to progress significantly in the coming years, it is fundamental to ensure that no ethnic group will benefit from this technology less than other groups. The inclusiveness principle is at the core of the Algorithmic Justice League (AJL), launched by Joy Buolamwini in November 2016. AJL provides a free platform to detect algorithmic bias that “can result in exclusionary experiences and discriminatory practices” and to create “inclusive training sets.”Footnote 9

The principle of inclusiveness does not apply exclusively to facial or physiognomic traits but to any other ethically relevant social bias that may intentionally or unintentionally emerge during CT development. These include cultural, political and language biases, among others. An example of the minimization of cultural and language bias is the internationalization strategy outlined in Open AI’s Software Requirements Specification. As the specification states, modules should be internationalized, in the sense that they “need to conform to the local language, locales, currencies etc., according to the settings specified in the configuration file or the environment in which they are running in.”Footnote 10 The principle of inclusiveness is strictly related to the transparency principle. In fact, building algorithms that explain their reasoning and decision-making is the best way to guarantee that hidden biases will be understood and promptly eliminated. In addition, it is important to create larger, more inclusive and diverse data sets with which to train the algorithms. Pluralism and diversity are critical notions for implementing the principle of inclusiveness in cognitive technology.
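The kind of audit that the inclusiveness principle calls for can be made concrete with a short sketch: given a model’s predictions and a protected attribute, compute accuracy separately per group and report the gap between the best- and worst-served groups. The synthetic data and the injected disparity below are assumptions for illustration, loosely echoing the face recognition disparities mentioned above.

```python
"""Sketch: auditing a classifier for per-group performance disparities.

Synthetic predictions are generated so that group "B" suffers a higher error
rate than group "A"; the audit then reports per-group accuracy and the gap.
All numbers here are illustrative assumptions.
"""
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.choice(["A", "B"], size=n, p=[0.7, 0.3])   # protected attribute
y_true = rng.integers(0, 2, size=n)                     # ground-truth labels

# Simulate a model that errs more often on group B.
error_rate = np.where(group == "A", 0.05, 0.12)
flip = rng.random(n) < error_rate
y_pred = np.where(flip, 1 - y_true, y_true)

accuracy = {g: float((y_pred[group == g] == y_true[group == g]).mean())
            for g in np.unique(group)}
gap = max(accuracy.values()) - min(accuracy.values())

for g, acc in accuracy.items():
    print(f"Group {g}: accuracy = {acc:.3f}")
print(f"Accuracy gap between groups: {gap:.3f}")
```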

The principle of user-centeredness advocates that emerging cognitive technologies should be designed, developed and implemented according to users’ needs and personal choices. User-centered approaches to the development of cognitive technology are necessary to guarantee that end-users (characterized as widely as possible, in accordance with the principles of openness and inclusiveness) are involved in the design, development and implementation of cognitive technologies on an equal footing. This principle has both methodological and social relevance. Methodologically, user-centered approaches have been observed to increase the capacity of cognitive technology to fulfill the needs and wishes of end-users, reduce friction in human–machine interaction, and facilitate usability, hence increasing overall user satisfaction (Ienca et al. 2017; Kübler et al. 2014). User-centered approaches have also been observed to increase technology uptake and social adoption among end-users (De Vito Dabbs et al. 2009). Furthermore, such approaches ensure that technology is truly designed for the benefit of users instead of making users passive buyers of novel commercial products. For example, Kübler et al. have shown that user-centered design is a viable and effective approach to evaluating the usability of BCI-controlled applications, including among vulnerable end-users with severe impairments (Kübler et al. 2014). Similar approaches have been pursued with BCIs based on event-related potentials for brain spelling (Kaufmann et al. 2012) and painting (Zickler et al. 2013). User-centeredness is particularly important in relation to cognitive technologies developed for assisting patients with cognitive disorders. In fact, these people (e.g. older adults with dementia) are often frail and vulnerable individuals, hence entitled to the utmost respect of their needs and wishes (Ienca et al. 2016). At the level of technology implementation, the principle of user-centeredness would prescribe increased individual control over one’s own cognitive processing and the adaptation of cognitive technologies to the needs, wishes and capabilities of individual users.

Finally, the principle of convergence can be described in both a narrow and a broad sense. In the narrow sense, convergence is the principle of interoperability, intercommunication and ease of integration among all components of cognitive technology (i.e. the cognitive tools or modules): in order to reach the common goal of measuring, enhancing or emulating cognition, all cognitive tools must, at some important level, speak the same language and behave in a mutually consistent manner. It is worth noting, however, that excessive interoperability might result in increased data insecurity, and hence must be carefully balanced against other ethical principles and technical safeguards. In a broader and more abstract sense, it is also the principle of converging different types of cognitive technology, especially neurotechnology, on the one hand, and artificial intelligent systems, on the other. As described in the first section of this paper, such convergence is already occurring. For example, BCIs have been combined with artificial intelligent systems (environment-sensing, obstacle-avoidance and pathfinding capabilities) to achieve shared control and context-based filtering of user commands, hence enhancing the overall performance of the brain–machine combination (Millán et al. 2010; Tonin et al. 2010). In addition, proposals to make this link closer and more reliable via brain-computer interaction are being pursued by various companies including Facebook, Neuralink, Kernel and Emotiv (Gent 2017). Similar convergence-aimed solutions have also been pioneered at the microscopic level. A promising example is the minimally invasive three-dimensional interpenetration of electronics within artificial structures or biological brains (Liu et al. 2015). Such mesh-brain implants have already demonstrated the capacity to successfully integrate into a mouse brain and enable neuronal recordings (Fu et al. 2016). While convergence in the narrow sense is necessary to guarantee the successful functioning of cognitive technology, broad-sense convergence might, in the medium to long term, empower individuals and provide ultimate control and protection against malevolent applications of cognitive technology.
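To illustrate what shared control and context-based filtering of user commands can amount to (a generic sketch under assumed thresholds, not the systems described by Millán et al. or Tonin et al.), the function below executes a BCI-decoded steering command only if the decoder is sufficiently confident and the robot’s own range sensors report the commanded direction as clear; otherwise the command is held back.

```python
"""Sketch: shared control between a BCI-decoded command and onboard sensing.

The decoded user intention (a direction plus a classifier confidence) is
checked against a simple obstacle map built from range-sensor readings.
Confidence and distance thresholds are illustrative assumptions.
"""
from dataclasses import dataclass

@dataclass
class DecodedCommand:
    direction: str        # "left", "right" or "forward"
    confidence: float     # decoder confidence in [0, 1]

def shared_control(cmd: DecodedCommand, ranges: dict,
                   min_conf: float = 0.6, safe_dist: float = 0.5) -> str:
    """Return the command actually forwarded to the robot."""
    if cmd.confidence < min_conf:
        return "hold"                                  # ignore uncertain decodings
    if ranges.get(cmd.direction, 0.0) < safe_dist:
        return "hold"                                  # veto commands toward obstacles
    return cmd.direction

# Example: the decoder reads "left", but the left range sensor reports a wall.
ranges = {"left": 0.3, "right": 2.0, "forward": 1.5}   # distances in metres (assumed)
print(shared_control(DecodedCommand("left", 0.8), ranges))      # -> hold
print(shared_control(DecodedCommand("forward", 0.8), ranges))   # -> forward
```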

Conclusions

Cognitive technologies have the potential to accelerate technological innovation and provide significant benefit for individuals and societies. At the same time, due to their dual-use potential, they can be potentially coopted by State and non-State actors for non-benign purposes including cybercrime, cyberterrorism, cyberwarfare and mass surveillance. In light of the recent global crisis of democracy, increased militarization of the digital infosphere, and concurrent potentiation of cognitive technologies, it is important to proactively design strategies that can mitigate emerging risks and align the future of CT with the basic principles of liberal democracy in free and open societies.

In this paper, I described a proactive approach to the democratization of CT based on six normative ethical principles: avoidance of centralized control, openness, transparency, inclusiveness, user-centeredness and convergence. This approach is designed to universalize and evenly distribute the potential benefits of CT and to mitigate the risk that this emerging technological trend could be coopted by State or non-State actors in ways that are inconsistent with the principles of liberal democracy or detrimental to individuals and groups. While this paper has offered a preliminary and general characterization of how to democratize cognitive technology, future research is required to expand this proposal into a comprehensive ethical, legal and political framework.