Abstract
This chapter analyses the philosophical and legal aspects of AI-driven cognitive human enhancement technologies, complementing human rights norms in the context of the right to mental integrity. The philosophical analysis draws on theories of extended cognition in the philosophy of mind, which set out criteria for assessing whether an external artefact can be cognitively integrated with human cognitive processes. The chapter shows that two AI-driven cognitive human enhancement technologies—brain-computer interfaces and intelligent personal assistants—do not fully satisfy the criteria of extended cognition owing to their distinctive capabilities. The legal analysis then turns to the debate on the right to mental integrity to examine whether the human mind is safeguarded against such concerns at the international and European levels. Although the right to mental integrity has been recognized in international and European human rights law, its meaning and scope remain unclear. To fill this gap, the chapter engages with the question of what constitutes an adequate form of cognitive integration and assumes that, if external artefacts such as AI-driven cognitive human enhancement technologies are not completely or sufficiently integrated with human cognitive processes, they may not serve the mental integrity of individuals. In light of this analysis, the chapter concludes that absolute protection of mental integrity, in conjunction with mental privacy, is necessary to protect the individual from any intrusion into mental states.
Notes
- 1.
See Kommers et al. 1992.
- 2.
The debate is available at Neuralink Progress Update, Summer 2020. https://www.youtube.com/watch?v=DVvmgjBL74w Accessed on 22 February 2021.
- 3.
Knapp (2019) Elon Musk Sees His Neuralink Merging Your Brain with A.I. https://www.forbes.com/sites/alexknapp/2019/07/17/elon-musk-sees-his-neuralink-merging-your-brain-with-ai/?sh=37883c754b07. Accessed on 22 February 2021.
- 4.
Ienca and Andorno 2017.
- 5.
- 6.
- 7.
SIENNA 2018 D.3.1., p. 19.
- 8.
Ibid.
- 9.
Wolpaw and Winter Wolpaw 2012.
- 10.
Umair et al. 2017. See a more detailed definition in Wolpaw and Winter Wolpaw 2012, p. 3: “A BCI is a system that measures central nervous system (CNS) activity and converts it into artificial output that replaces, restores, enhances, supplements, or improves natural CNS output and thereby changes the ongoing interactions between the CNS and its external or internal environment.”
- 11.
Binnendijk et al. 2020.
- 12.
Burke et al. 2015.
- 13.
Zhang et al. 2020.
- 14.
SIENNA 2018 D 3.1, p. 34.
- 15.
- 16.
See Amazon (2018) Alexa Skills Kit. https://developer.amazon.com/en-US/docs/alexa/ask-overviews/what-is-the-alexa-skills-kit.html. Accessed 21 February 2021.
- 17.
The two criteria were individuated by Maedche et al. 2016.
- 18.
For a review of application domains of such systems, see also Knote et al. 2018.
- 19.
These are related to intelligence, clarity and creativity, which are the three targets for interventions of cognitive enhancement individuated by the Sienna Report, see Sect. 25.2 of this chapter.
- 20.
Big Data and AI can lead to ‘hypernudge’, a more insidious, dynamic, and efficacious form of nudge; see Yeung 2017 on this point.
- 21.
It is important to note that there are dense theoretical debates about different approaches to extended cognition; however, engaging with these theories in detail is beyond the aim of this chapter. Therefore, we consider only the first wave of extended cognition.
- 22.
Clark and Chalmers 1998. For a recent perspective on this position, see Clark 2008. However, to be precise, extended cognition is distinguished from the extended mind thesis, according to which cognitive states such as beliefs can have extra-organismic elements as their supervenience base; see on this point Carter et al. 2018.
- 23.
- 24.
Hernandez-Orallo and Vold 2019.
- 25.
Pellegrino and Garasic 2020.
- 26.
Clark and Chalmers 1998, p. 8.
- 27.
- 28.
- 29.
See on this point Nalepa et al. 2018.
- 30.
See Bernard and Arnold 2019.
- 31.
Palermos 2014, p. 33.
- 32.
Morana et al. 2019.
- 33.
I.e., by changing the brain directly. This is the criterion adopted by Vincent et al. 2020 to distinguish between “core” and “penumbral” neurointerventions.
- 34.
Recently, another author drawing on extended cognition theory has been Carter 2020. He discusses the two terms ‘cognitive integration’ and ‘cognitive enhancement’ and advances the thesis that the notion of enhancement as such is “theoretically unimportant for accounting for why certain kinds of high-tech epistemic dependence genuinely threaten to undermine intellectual autonomy and others such kinds of dependence don’t” (ibid., conclusions). Another recent attempt to connect enhancement and extended cognition can be found in Carter and Pritchard 2019, who propose a “cognitive achievement account of cognitive enhancement”. However, a distinction between enhancement and ‘extension’ as used in this chapter should be drawn: an ‘extension’ involves a move beyond intracranialism, i.e., the conception that locates cognitive processes inside the brain, and does not merely influence cognition but rather constitutes it. Adopting the terms ‘extension’ and ‘cognitive integration’ could arguably sharpen the conceptual understanding of some forms of HETs.
- 35.
This is the reason why some extended cognition theorists adopted distinct views; see Palermos 2014 and the second wave of extended cognition, for example Heersmink 2015. Heersmink argues that cognitive integration comes in degrees and involves several dimensions: information flow, reliability, durability, procedural and informational transparency, individualization, and transformation (how cognitive processes change with the use of the artefact). An artefact need not be equally integrated along all dimensions. On issues related to the parity principle, see also Heinrichs 2017.
- 36.
Chung et al. 2017. See Mecacci and Haselager 2017 for an interesting ethical framework regarding mental privacy and brain reading. In the predominant typology of influence types used in the bioethics literature, manipulation is an influence that subverts agents’ rational capacity and bypasses mental processes such as understanding; it has been understood as a term “in between” rational persuasion and coercion by force, see Faden and Beauchamp 1986; Blumenthal-Barby 2012.
- 37.
Ienca and Haselager 2016.
- 38.
- 39.
Jiang 2019.
- 40.
Ienca et al. 2018.
- 41.
Gilbert et al. 2017.
- 42.
Ibid.
- 43.
Chambers and Beaney 2020.
- 44.
- 45.
Indeed, it has been suggested that, when it comes to robotics and HETs in Europe, an anchor point for the regulatory dilemmas posed by such devices “can be found in the common heritage of human rights and values”. See the debate in Leenes et al. 2017. See also an example of using human rights as “normative anchor points of governance” in the context of human enhancement technologies in Ruggiu 2018.
- 46.
Convention for the Protection of Human Rights and Fundamental Freedoms, Rome, 4 November 1950, in force 3 September 1953, 213 UNTS 221. Article 8 of the Convention: “Everyone has the right to respect for his private and family life, his home and his correspondence.” It is important to note that there are discussions assessing the relationship between emerging technologies and freedom of thought which claim that Article 9 of the ECHR should be understood as an absolute right protecting the integrity of the forum internum. However, considering the current jurisprudence of the ECtHR on Article 9, it is hard to regard neural activities or the assessment of cognitive processes as “thought”. Therefore, this section focuses only on the case law interpreting the concept of mental integrity. For a recent discussion of freedom of thought and brain-computer interfaces, see O’Callaghan and Shiner 2021.
- 47.
ECtHR, Tysiac v. Poland, No. 5410/03, 20 March 2007, para 107.
- 48.
Marshall 2009, p. 3.
- 49.
ECtHR, Bensaid v. United Kingdom, No. 44599/98, 6 February 2001, para 47.
- 50.
ECtHR, Taliadorou and Stylianou v. Cyprus, Nos. 39627/05 and 39631/05, 16 October 2008, paras 57–58. See also ECtHR, Kyriakides v. Cyprus, No. 39058/05, 16 October 2008, and Bati and Others v. Turkey, Nos. 33097/96 and 57834/00, 3 June 2004, para 114, where the ECtHR emphasized that various forms of ill-treatment can harm mental integrity even without leaving physical marks. It is important to note that these judgments should be considered in the light of Article 5 of the Artificial Intelligence Act proposed by the European Commission, which prohibits AI practices that manipulate individuals’ behaviour through “subliminal techniques” that can cause “physical or psychological harms”. See a recent comment on the proposed AI Act in Biber 2021.
- 51.
The Convention on the Rights of Persons with Disabilities and its Optional Protocol, adopted on 13 December 2006 and opened for signature on 30 March 2007.
- 52.
Charter of Fundamental Rights of the European Union, Nice, 7 December 2000, in force 1 December 2009, OJ 2012 No. C-326/2. Article 3(1) of the Charter: “Everyone has the right to respect for his or her physical and mental integrity.”
- 53.
Michalowski 2014.
- 54.
Convention for the Protection of Human Rights and Dignity of the Human Being with regard to the Application of Biology and Medicine, entered into force on 1 December 1999.
- 55.
Council of Europe, Explanatory Report to the Convention for the Protection of Human Rights and Dignity of the Human Being with Regard to the Application of Biology and Medicine: Convention on Human Rights and Biomedicine, 4 April 1997, para 22.
- 56.
CJEU, Case C-377/98, Netherlands v. European Parliament and Council (2001) ECR-I 7079, para 70.
- 57.
Bublitz 2020.
- 58.
Bublitz 2020, pp. 387–408. See also the report of the Parliamentary Assembly of the Council of Europe 2020. In the US, the “NeuroRights Initiative”, established at Columbia University, works to incorporate five specific neuro-rights into international human rights law, namely the right to personal identity, the right to free will, the right to mental privacy, the right to equal access to mental augmentation, and the right to protection from algorithmic bias, available at https://neurorightsfoundation.org/mission. Worldwide, Chile took the first step on neuro-rights to prevent the misuse of artificial intelligence and neurotechnology; see the debate in Muñoz, “Chile: right to free will needs definition”, 29 October 2019, available at https://www.nature.com/articles/d41586-019-03295-9.
- 59.
Bublitz 2020, p. 387 (arguing that although legal systems engage with bodily integrity in detail, even in exceptional situations such as pregnancy or organ transplantation, they largely disregard interventions on the human mind).
- 60.
See Lavazza 2018.
- 61.
See Bublitz 2013.
- 62.
See a discussion of the challenges of informed consent in implantable BCI research in Klein 2016, who identifies six core risk domains as central to informed consent, namely safety, cognitive and communicative impairment, inappropriate expectations, involuntariness, affective impairment, and privacy and security. In terms of informational privacy, the author notes that BCI systems can generate a trove of potentially intimate personal information about activities such as toileting, sex, counselling children, or comforting a loved one, as well as unexpressed thoughts, personality characteristics, and emotions.
- 63.
- 64.
For some dystopian examples of state surveillance already happening in China, see Anderson (2020) The Panopticon Is Already Here. https://www.theatlantic.com/magazine/archive/2020/09/china-ai-surveillance/614197/. Accessed on 21 February 2021.
- 65.
Schmerling (2019). The Ethical and Legal Implications of Brain-Computer Interfaces https://www.calcalistech.com/ctech/articles/0,7340,L-3762798,00.html. Accessed on 22 February 2021.
- 66.
- 67.
References
Bernard D, Arnold A (2019) Cognitive interaction with virtual assistants: From philosophical foundations to illustrative examples in aeronautics. Computers in Industry, 107: 33–49, ISSN 0166-3615, https://doi.org/10.1016/j.compind.2019.01.010
Beyleveld D, Brownsword R (2001) Human Dignity in Bioethics and Biolaw. Oxford University Press, Oxford
Biber SE (2021) Machines Learning the Rule of Law: EU Proposes the World’s First AI Act. Verfassungsblog, https://verfassungsblog.de/ai-rol/, DOI: https://doi.org/10.17176/20210714-015912-0
Binnendijk A, Marler T, Bartel EM (2020) Brain Computer Interfaces: U.S. Military Applications and Implications, An Initial Assessment. Rand Corporation.
Blumenthal-Barby JS (2012) Between reason and coercion: Ethically permissible influence in health care and health policy contexts. Kennedy Institute of Ethics Journal, 22(4): 345–366
Bublitz JC (2013) My mind is mine!? Cognitive liberty as a legal concept. In: Hildt E, Franke AG (eds) Cognitive Enhancement. An Interdisciplinary Perspective. Springer, Dordrecht, pp 233–264
Bublitz JC (2020) The Nascent Right to Psychological Integrity and Mental Self-Determination. In: von Arnauld A, von der Decken K, Susi M (eds) The Cambridge Handbook of New Human Rights Recognition, Novelty, Rhetoric. Cambridge University Press, Cambridge, 387–408
Burke JF, Merkow MB, Jacobs J, Kahana MJ, Zaghloul KA (2015) Brain Computer Interface to Enhance Episodic Memory in Human Participants. Frontiers in Human Neuroscience, 8: 1055
Carter JA (2020) Intellectual autonomy, epistemic dependence and cognitive enhancement. Synthese 197: 2937–2961. https://doi.org/10.1007/s11229-017-1549-y
Carter JA, Clark A, Kallestrup J, Palermos SO, Pritchard D (2018) Extended Epistemology: an introduction. In: Carter JA, Clark A, Kallestrup J, Palermos SO, Pritchard D (eds) Extended Epistemology. Oxford University Press, Oxford, pp 1–16
Carter JA, Palermos SO (2016) The ethics of extended cognition: Is having your computer compromised a personal assault? Journal of the American Philosophical Association, 2(4): 542–560
Carter JA, Pritchard D (2019) The Epistemology of Cognitive Enhancement. J Med Philos; 44(2):220–242. doi: https://doi.org/10.1093/jmp/jhy040
Carter S, Nielsen M (2017) Using artificial intelligence to augment human intelligence. Distill. https://distill.pub/2017/aia
Chambers R, Beaney P (2020) The potential of placing a digital assistant in patients’ homes. Br J Gen Practice 70(690): 8–9
Chung H, Iorga M, Voas J, Lee S (2017) Alexa, Can I Trust You? Computer 50 (9): 100–104. https://doi.org/10.1109/MC.2017.3571053. ISSN 0018-9162
Clark A (1997) Being There: Putting Brain, Body, and World Together Again. MIT Press, Cambridge, MA
Clark A (2008) Supersizing the Mind: Embodiment, Action, and Cognitive Extension. Oxford University Press, New York
Clark A (2010) Memento’s revenge: The Extended Mind, extended. In: Menary R (ed) The Extended Mind. MIT Press, Cambridge, MA, pp 43–66
Clark A (2011) Supersizing the Mind. Oxford University Press, Oxford
Clark A, Chalmers D (1998) The extended mind. Analysis 58(1): 7–19
European Parliament STOA (2009) Human Enhancement Study. https://www.europarl.europa.eu/RegData/etudes/etudes/join/2009/417483/IPOL-JOIN_ET(2009)417483_EN.pdf Accessed on 21 February 2021
Faden R, Beauchamp T (1986) A History and Theory of Informed Consent. Oxford University Press, Oxford
Floridi L (2016) On Human Dignity as a Foundation for the Right to Privacy. Philos. Technol. 29: 307–312 https://doi.org/10.1007/s13347-016-0220-8
Gilbert F, Goddard E, Viaña JNM, Carter A, Horne M (2017) I miss being me: Phenomenological effects of deep brain stimulation. AJOB Neuroscience,8(2): 96–109. https://doi.org/10.1080/21507740.2017.1320319
Hallinan D, Schütz P, Friedewald M, de Hert P (2014) Neurodata and Neuroprivacy: Data Protection Outdated? Surveillance and Society. 12: 55–72. https://doi.org/10.24908/ss.v12i1.4500
Heersmink R (2015) Dimensions of integration in embedded and extended cognitive systems. Phenomenology and the Cognitive Sciences, 14(3): 577–598
Heinrichs JH (2017) Against Strong Ethical Parity: Situated Cognition Theses and Transcranial Brain Stimulation. Front. Hum. Neurosci. 11:171. doi: https://doi.org/10.3389/fnhum.2017.00171
Hernandez-Orallo J, Vold K (2019) AI Extenders: The Ethical and Societal Implications of Humans Cognitively Extended by AI. In: Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (AIES ’19), pp 507–513. https://doi.org/10.1145/3306618.3314238
Ienca M, Andorno R (2017) Towards New Human Rights in the Age of Neuroscience and Neurotechnology. Life Sciences, Society and Policy, 13, https://doi.org/10.1186/s40504-017-0050-1
Ienca M, Haselager P (2016) Hacking the brain: brain–computer interfacing technology and the ethics of neurosecurity. Ethics and Information Technology, 18(2): 117–129
Ienca M, Haselager P, Emanuel EJ (2018) Brain leaks and consumer neurotechnology. Nature biotechnology, 36(9): 805–810
Jiang R (2019) Introducing New Alexa Healthcare Skills. https://developer.amazon.com/en-US/blogs/alexa/alexa-skills-kit/2019/04/introducing-new-alexa-healthcare-skills Accessed on 22 February 2021
Klein E (2016) Informed consent in implantable BCI research: identifying risks and exploring meaning. Science and Engineering Ethics, 22(5): 1299–1317
Knote R, Janson A, Eigenbrod L, Söllner M (2018) The What and How of Smart Personal Assistants: Principles and Application Domains for IS Research. In: Multikonferenz Wirtschaftsinformatik, Lüneburg, Germany
Kommers P, Jonassen DH, Mayes T (eds) (1992) Cognitive tools for learning. Springer-Verlag, Heidelberg FRG
Kudina O (2021) Alexa, who am I?: Voice Assistants and Hermeneutic Lemniscate as the Technologically Mediated Sense-Making. Hum Stud. https://doi.org/10.1007/s10746-021-09572-9
Lavazza A (2018) Freedom of thought and mental integrity: The moral requirements for any neural prosthesis. Frontiers in Neuroscience. https://doi.org/10.3389/fnins.2018.00082
Le N-T, Wartschinski L (2018) A cognitive assistant for improving human reasoning skills. International Journal of Human-Computer Studies, 117. https://doi.org/10.1016/j.ijhcs.2018.02.005
Leenes R, Palmerini E, Koops B-J, Bertolini A, Salvini P, Lucivero F (2017) Regulatory challenges of robotics: Some guidelines for addressing legal and ethical issues. Law, Innovation and Technology. 9: 1–44. https://doi.org/10.1080/17579961.2017.1304921
Ligthart S, Douglas T, Bublitz C, Kooijmans T, Meynen G (2020) Forensic Brain-Reading and Mental Privacy in European Human Rights Law: Foundations and Challenges. Neuroethics, Springer, https://doi.org/10.1007/s12152-020-09438
Ludwig D (2015) Extended cognition and the explosion of knowledge. Philosophical Psychology, 28(3): 355–368
Maedche A, Morana S, Schacht S, Werth D, Krumeich J (2016) Advanced User Assistance Systems. Business & Information Systems Engineering, 58, 5: 367–370
Maier T, Menold J, McComb C (2019) Towards an Ontology of Cognitive Assistants. Proceedings of the 22nd International Conference on Engineering Design (ICED19), Delft, The Netherlands, 5–8 August 2019. DOI:https://doi.org/10.1017/dsi.2019.270
Marshall J (2009) Personal Freedom through Human Rights Law? Autonomy, Identity and Integrity under the European Convention on Human Rights. Martinus Nijhoff Publishers, Leiden/Boston
Mecacci G, Haselager P (2017) Identifying criteria for the evaluation of the implications of brain reading for mental privacy. Science and Engineering Ethics. https://doi.org/10.1007/s11948-017-0003-3
Michalowski S (2014) Right to Integrity of the Person. In: Peers S, Hervey T, Kenner J, Ward A (eds) The EU Charter of Fundamental Rights: A Commentary. Hart Publishing, London, pp 39–60
Morana S, Pfeiffer J, Adam MTP (2019) User Assistance for Intelligent Systems. Business & Information Systems Engineering, 62 (3):189–192. https://aisel.aisnet.org/bise/vol62/iss3/1
Nalepa GJ, Costa A, Novais P, Julian V (2018) Cognitive assistants. International Journal of Human-Computer Studies, 117: 1–68
O’Callaghan P, Shiner B (2021) The Right to Freedom of Thought in the European Convention on Human Rights. European Journal of Comparative Law and Governance
Palermos SO (2014) Loops, Constitution, and Cognitive Extension. Cognitive Systems Research, 27: 25–41
Pellegrino G, Garasic M D (2020) Artificial Intelligence and extended minds. Why not? Rivista Internazionale di Filosofia e Psicologia, 11, 2: 150–168
Ruggiu D (2013) A Right-Based Model of Governance: The Case of Human Enhancement and the Role of Ethics in Europe. In: Konrad K, Coenen C, Dijkstra A, Milburn C, van Lente H (eds) Shaping Emerging Technologies: Governance, Innovation, Discourse. Ios Press / Aka, Berlin, pp 103–115
Ruggiu D (2018) Implementing a Responsible, Research and Innovation Framework for Human Enhancement According to Human Rights: The Right to Bodily Integrity and the Rise of ‘Enhanced Societies’. Law, Innovation and Technology: 1–40
Sharon T (2016) The Googlization of health research: from disruptive innovation to disruptive ethics. Personalized Medicine. 13(6): 563–574
Sharon T (2020) Blind-sided by privacy? Digital contact tracing, the Apple/Google API and big tech’s newfound role as global health policy makers. Ethics and Information Technology. https://doi.org/10.1007/s10676-020-09547-x
Smart P (2012) The Web-Extended Mind. Metaphilosophy, 43, 4: 426–445
Smart P (2018) Emerging Digital Technologies: Implications for Extended Conceptions of Cognition and Knowledge. In: Carter AJ, Clark A, Kallestrup J, Palermos OS, Pritchard D (eds) Extended Epistemology. Oxford University Press, Oxford, pp 266–304
Steinert S, Friedrich O (2020) Wired Emotions: Ethical Issues of Affective Brain–Computer Interfaces. Sci Eng Ethics 26: 351–367 https://doi.org/10.1007/s11948-019-00087-2
The Parliamentary Assembly of the Council of Europe (2020) The Brain-Computer Interface: New Rights or New Threats to Fundamental Freedoms, 24 September 2020, available at https://pace.coe.int/en/files/28722 Accessed 21 February 2021
The SIENNA Project (2018) D.3.1. State-of-the-Art Review, Human Enhancement https://www.sienna-project.eu/digitalAssets/788/c_788666-l_1-k_d3.1sotahet.pdf Accessed 21 February 2021
The SIENNA Project (2019) D.3.2. Analysis of the Legal and Human Rights Requirements for Human Enhancement Technologies in and outside the EU https://www.sienna-project.eu/news/news-item/?tarContentId=883290 Accessed 21 February 2021
Umair A, Ashfaq U, Khan MG (2017) Recent Trends, Applications, and Challenges of Brain-Computer Interfacing (BCI). International Journal of Intelligent Systems and Applications (IJISA), 9, 2: 58–65
Vincent NA, Nadelhoffer T, McCoy A (2020) Law Viewed Through the Lens of Neurointerventions. In: Vincent N A, Nadelhoffer T, McCoy A (eds) Neurointerventions and the Law: Regulating Human Mental Capacity. Oxford University Press, New York
Wolpaw JR, Winter Wolpaw E (2012) Brain-Computer Interfaces: Something New Under the Sun. In: Wolpaw J R, Winter Wolpaw E (eds) Brain-Computer Interfaces: Principles and Practice. Oxford University Press, Oxford, pp 3–14
Yeung K (2017) ‘Hypernudge’: Big Data as a mode of regulation by design. Information, Communication & Society, 20,1: 118–136
Zhang X, Yao L, Wang X, Monaghan JJM, Mcalpine D, Zhang Y (2020) A survey on deep learning-based non-invasive brain signals: recent advances and new frontiers. J Neural Eng. doi: https://doi.org/10.1088/1741-2552/abc902 Epub ahead of print. PMID: 33171452
Copyright information
© 2022 T.M.C. Asser Press and the authors
About this chapter
Cite this chapter
Biber, S.E., Capasso, M. (2022). The Right to Mental Integrity in the Age of Artificial Intelligence: Cognitive Human Enhancement Technologies. In: Custers, B., Fosch-Villaronga, E. (eds) Law and Artificial Intelligence. Information Technology and Law Series, vol 35. T.M.C. Asser Press, The Hague. https://doi.org/10.1007/978-94-6265-523-2_25
Publisher Name: T.M.C. Asser Press, The Hague
Print ISBN: 978-94-6265-522-5
Online ISBN: 978-94-6265-523-2
eBook Packages: Law and Criminology, Law and Criminology (R0)