
Artificial Intelligence in Extended Minds: Intrapersonal Diffusion of Responsibility and Legal Multiple Personality

Chapter in: Technology, Anthropology, and Dimensions of Responsibility

Abstract

Can an artificially intelligent tool be part of a human’s extended mind? There are two opposing streams of thought on this question. One can be identified as the externalist perspective in the philosophy of mind, which tries to explain complex states and processes of an individual as co-constituted by elements of the individual’s material and social environment. The other is normative and explanatory atomism, which insists that what is to be explained and evaluated is the behaviour of individuals. In the present contribution, it is argued that counterintuitive results turn up once atomism tries to appropriate insights from psychological externalism and holism. These results are made visible by technological innovations, especially artificially intelligent systems, but they do not result from these innovations alone. Rather, they are implicit in situated cognition approaches which join both theoretical strands. This has repercussions for explanatory as well as ethical theorising based on situated cognition approaches. It is a fairly rare constellation in which a new technological option, namely artificial intelligence, raises doubts concerning a philosophical theory, namely extended mind theory.


Notes

  1.

    Extended mind theory has served as the basis for a number of successful explanatory projects, as documented in articles and edited volumes at least from (Clark 1999) onwards.

  2.

    Extensions of cognitive systems can be either functionally isomorphic or functionally complementary to the system core. These two types of extension have been the focus of what Sutton (2010) calls the first and second wave of extended mind theory.

  3.

    Whether an external object co-realises the mind depends on how closely it is integrated with the agent’s organism and mental functioning. A number of dimensions have been proposed to measure how closely a given object is integrated (Heersmink 2015). The core criteria used in the following are: 1) reliability, 2) durability, 3) trust, 4) procedural transparency, 5) individualization, and 6) transformation. In a nutshell, reliability refers to the device’s ability to perform its function under varying circumstances, and durability to the duration of individual uses and the continuity of repeated use of a device. Trust refers to the relation the user has to the device’s contribution to the cognitive process in question, and procedural transparency is shorthand for ease of use and for the phenomenon that the device, as a device, fades into the background of the action performed with it. Individualization refers to the possibility of tailoring the device to the needs and habits of its user, and transformation to the change which the device brings about in the cognitive habits and performances of its user.

  4.

    Nearly all versions of the glue and trust criteria and their successors refer to a tool being a constant in an individual’s life, or to the relation being durable. See for example (Clark and Chalmers 1998; Heersmink 2015).

  5.

    A closely related version (1*) would consider being extended a stable property of the mind across time. What changes with time is merely the scope of the extension of the person’s mind at t, depending on the realisers of its current cognitive processes. Again, it would not make sense to distinguish between ‘mind’ and ‘extended mind’.

  6.

    This reductio argument seems to imply that there is no such thing as a core mind, because all beliefs are considered extensions. Thus, only a blank slate could be considered a core mind, and the blank slate theory has more than enough problems of its own. If one were, however, to consider only propositionally structured beliefs as extensions, the core mind would be that of very young children, whose belief-like states are not yet propositionally structured.

  7.

    This point has been made repeatedly, often as criticism of the example or in order to draw additional distinctions, such as that between functionally isomorphic and functionally complementary mind extensions (Sutton 2010).

  8.

    Artificial intelligence research got its name in 1955, when John McCarthy used the term in a research proposal (A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence. August 1955, p. 1).

  9.

    The original idea was to describe them in terms of symbol manipulation. Human cognition was modelled at the level of representational content and the computational manipulation of such content (Fodor 1975; Newell and Simon 1972; Simon and Newell 1971). This approach was dubbed GOFAI (good old-fashioned artificial intelligence) only shortly afterwards (Haugeland 1985). It has been supplemented by an understanding of human cognition at the level of biological activity, to be precise the activity of neurons. Neural network theory had already been invented in the 1950s (Rosenblatt 1958), but for fifty years it was hampered by the computational resources available to researchers. Only with the advent of specialised processors in the form of graphics cards did neural network simulation really take off. (A minimal code sketch of Rosenblatt’s learning rule is given after these notes.)

  10.

    It should be mentioned that there is already a real-world case of a robot holding citizenship: Saudi Arabia granted citizenship to a robot called Sophia, built by Hanson Robotics (thereby endowing it with more rights than the country’s female human inhabitants).

  11.

    To be more precise: there are cases of liability in which lack of control is an excusing factor. In many cases, however, lack of control does not limit liability; most prominently, parents are liable for the actions of their children.
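
The following is a minimal sketch of the kind of perceptron learning rule introduced by Rosenblatt (1958), the second modelling level mentioned in note 9. It is an illustration written for this overview, not code from the chapter; the function name and the AND-gate example are purely hypothetical.

    # Minimal sketch of Rosenblatt-style perceptron learning (illustration only).
    # A single unit computes a weighted sum of its inputs and 'fires' (outputs 1)
    # if that sum crosses a threshold; learning nudges the weights towards
    # correct classifications.

    def perceptron_train(samples, labels, epochs=20, lr=0.1):
        """samples: list of numeric tuples; labels: 0 or 1 for each sample."""
        weights = [0.0] * len(samples[0])
        bias = 0.0
        for _ in range(epochs):
            for x, target in zip(samples, labels):
                # Step activation: fire if the weighted sum exceeds zero.
                fired = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
                error = target - fired
                # Rosenblatt's rule: shift each weight in proportion to the
                # error and the corresponding input.
                weights = [w + lr * error * xi for w, xi in zip(weights, x)]
                bias += lr * error
        return weights, bias

    # Usage: learn the logical AND function from its four input/output pairs.
    weights, bias = perceptron_train([(0, 0), (0, 1), (1, 0), (1, 1)], [0, 0, 0, 1])

A single such unit can only learn linearly separable functions; the GPU-enabled networks mentioned in note 9 stack many such units, which is precisely what specialised graphics hardware made computationally feasible.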

References

  • Beck, S. (2016). The problem of ascribing legal responsibility in the case of robotics. AI & Society, 31(4), 473–481. https://doi.org/10.1007/s00146-015-0624-5.


  • Clark, A. (1997). Being there. Putting brain, body, and world together again. Cambridge: MIT Press.


  • Clark, A. (1999). An embodied cognitive science? Trends in Cognitive Sciences, 3(9), 345–351. https://doi.org/10.1016/S1364-6613(99)01361-3.


  • Clark, A., & Chalmers, D. (1998). The extended mind (Active externalism). Analysis, 58(1), 7–19. https://doi.org/10.1111/1467-8284.00096.


  • Copp, D. (2006). On the agency of certain collective entities: An argument from “Normative Autonomy”. Midwest Studies in Philosophy, 30(1), 194–221. https://doi.org/10.1111/j.1475-4975.2006.00135.x.


  • Dennett, D. C. (1971). Intentional systems. Journal of Philosophy, 68(February), 87–106.


  • Fodor, J. A. (1975). The language of thought. Cambridge: Harvard University Press.


  • Harris, C. (2010). Collaborative remembering: When can remembering with others be beneficial? Paper presented at the 9th conference of the Australasian Society for Cognitive Science (ASCS09), Sydney: Macquarie Centre for Cognitive Science.


  • Haugeland, J. (1985). Artificial intelligence: The very idea (Vol. 38). Cambridge: MIT Press.


  • Heersmink, R. (2015). Dimensions of integration in embedded and extended cognitive systems. Phenomenology and the Cognitive Sciences, 14(3), 577–598. https://doi.org/10.1007/s11097-014-9355-1.


  • Hutchins, E. (1995). Cognition in the wild. Cambridge: MIT Press.


  • Hutchins, E. (2014). The cultural ecosystem of human cognition. Philosophical Psychology, 27(1), 34–49. https://doi.org/10.1080/09515089.2013.830548.


  • Newell, A., & Simon, H. A. (1972). Human problem solving. Oxford: Prentice-Hall.


  • Pettit, P. (2007). Responsibility incorporated. Ethics, 117(2), 171–201. https://doi.org/10.1086/510695.


  • Rosenblatt, F. (1958). The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65(6), 386–408.


  • Simon, H. A., & Newell, A. (1971). Human problem solving: The state of the theory in 1970. American Psychologist, 26(2), 145–159.


  • Smart, P., Heersmink, R., & Clowes, R. W. (2017). The cognitive ecology of the internet. In S. J. Cowley & F. Vallée-Tourangeau (Eds.), Cognition beyond the brain: Computation, interactivity and human artifice (pp. 251–282). Cham: Springer International Publishing.


  • Sparrow, R. (2007). Killer robots. Journal of Applied Philosophy, 24(1), 62–77. https://doi.org/10.1111/j.1468-5930.2007.00346.x.


  • Sutton, J. (2010). Exograms and interdisciplinarity: History, the extended mind, and the civilizing process. In R. Menary (Ed.), The extended mind (pp. 189–226). Cambridge: MIT Press.


  • Sutton, R. S., & Barto, A. G. (1998). Reinforcement learning: An introduction. Cambridge: MIT Press.


  • van de Poel, I. (2011). The relation between forward-looking and backward-looking responsibility. In N. A. Vincent, I. van de Poel, & J. van den Hoven (Eds.), Moral responsibility: Beyond free will and determinism (pp. 37–52). Dordrecht: Springer.


  • Vincent, N. A. (2011). A structured taxonomy of responsibility concepts. In N. A. Vincent, I. van de Poel, & J. van den Hoven (Eds.), Moral responsibility: Beyond free will and determinism (pp. 15–35). Dordrecht: Springer.


  • Wilson, R. A., & Clark, A. (2009). How to situate cognition: Letting nature take its course. In M. Aydede & P. Robbins (Eds.), The Cambridge handbook of situated cognition (pp. 55–77). Cambridge: Cambridge University Press.



Author information

Correspondence to Jan-Hendrik Heinrichs.


Copyright information

© 2020 Springer-Verlag GmbH Germany, part of Springer Nature

About this chapter


Cite this chapter

Heinrichs, JH. (2020). Artificial Intelligence in Extended Minds: Intrapersonal Diffusion of Responsibility and Legal Multiple Personality. In: Beck, B., Kühler, M. (eds) Technology, Anthropology, and Dimensions of Responsibility. Techno:Phil – Aktuelle Herausforderungen der Technikphilosophie, vol 1. J.B. Metzler, Stuttgart. https://doi.org/10.1007/978-3-476-04896-7_12


  • DOI: https://doi.org/10.1007/978-3-476-04896-7_12


  • Publisher Name: J.B. Metzler, Stuttgart

  • Print ISBN: 978-3-476-04895-0

  • Online ISBN: 978-3-476-04896-7

  • eBook Packages: J.B. Metzler Humanities (German Language)
