The Mind Technology Problem and the Deep History of Mind Design

The Mind-Technology Problem

Part of the book series: Studies in Brain and Mind ((SIBM,volume 18))


  • The original version of this chapter was revised: the incorrect references to, and citations of, Turing (1950a, b) have now been corrected. The correction to this chapter is available at https://doi.org/10.1007/978-3-030-72644-7_14

Abstract

We are living through a new phase of human development in which much of everyday life – at least in the most technologically developed parts of the world – has come to depend upon our interaction with “smart” artefacts. Alongside this increasing adoption of, and ever-deepening reliance on, intelligent machines, important changes have been taking place, often in the background, in how we think of ourselves and how we conceptualize our relationship with technology. As we design, create and learn to live with a new order of artefacts whose behavior, were it carried out by human beings, would be seen as intelligent, the ways in which we conceptualize intelligence, minds, reasoning and related notions such as self and agency are undergoing profound shifts. We argue that the basic background assumptions informing our concepts of mind, and the underlying conceptual scheme structuring our reasoning about minds, have recently been transformed in the process. This shift has changed the nature and quality of our folk understanding of mind, of our scientific psychology, and of the philosophical problems that the interaction of these realms produces. Many of the traditional problems in the philosophy of mind have been reconfigured in the process. This introduction sets the scene for our book, which treats this reconfiguration of our concepts of mind and of technology, and the new casting of philosophical problems it engenders.


Change history

  • 12 April 2022

    The book was inadvertently published with errors in Chapter 1 and Chapter 13.

Notes

  1.

    In May 1643 Princess Elisabeth of Bohemia, who had been following Descartes’s work closely, wrote to him and posed the problem of interaction in an especially pointed way. She asked “how the mind of a human being, being only a thinking substance, can determine the bodily spirits in producing bodily actions.” Princess Elisabeth, pushing upon the central problem arising from substance dualism, remained unsatisfied with Descartes’s attempts to resolve the question. In exasperation she finally wrote, “it would be easier for me to concede matter and extension to the mind than it would be for me to concede the capacity to move a body and be moved by one to an immaterial thing.” Cited in Jaegwon Kim’s Philosophy of Mind (Kim 2006, pp. 41–42). Kim notes that this is a (very) early example of the causal argument for materialism, which holds that mental causation implies materialism, for it is hard to see how any putative immaterial substance might interact with the rest of the causal order. This is also a compelling instance of how it is sometimes possible to think against the grain of even a highly dominant conceptual scheme.

  2.

    Indeed, it is only since the 1960s that there have been – at least in the Anglo-Saxon world – university courses explicitly devoted to the philosophy of mind. Many such courses are organized around the mind-body problem.

  3.

    The sense that the mind and body are distinct has arguably been part of folk psychology and religious views of the world for centuries, as well as of metaphysical views from Plato to Descartes. Yet the claim that the mind, or the soul, is separate from matter seems to have been introduced only at the time when mechanist views of the rest of nature were being clearly articulated for the first time.

  4.

    Amy Kind, for example, argues that dualism was much the preferred view of the early modern period and that materialist – and what we would now call physicalist – positions were much out of favour (Kind 2018). La Mettrie’s Man a Machine ([1747]) ran very much against the tide of ideas of its time, although it anticipated major themes of twentieth-century philosophy.

  5.

    Chalmers’s informational dualism is a notable exception here (Chalmers 2002). Arguably, a form of dualism remains widespread in popular culture and in folk psychology.

  6.

    An important exception to this generalization is Richard Gregory’s monumental (1981) Mind in Science: A History of Explanations in Psychology. Important work on how our conceptual schemes are more generally constrained by technology and the history of invention can be found in Postman (1993). One field where the background metaphors for mind are considered is cognitive linguistics (Fauconnier and Turner 2002; Lakoff and Johnson 2003 [1980]), and perhaps especially Lakoff and Johnson (1999). However, even cognitive linguists tend chiefly to pay attention to the way that concepts are shaped by the nature of human embodiment. The idea that our use of technology may similarly shape our abstract reasoning about the nature of mind is less explored.

  7.

    Freud humbly pointed to his own place in the history of ideas when he argued that his idea of the unconscious should be seen as a third discontinuity following the ideas of Darwin and Copernicus.

  8.

    This is cited in Mazlish (1993).

  9.

    The first three discontinuities were first described by Freud (1920). Luciano Floridi has recently developed a related thesis in his (2014) book The Fourth Revolution: How the Infosphere is Reshaping Human Reality, where he uses the nomenclature of “the information revolution” to label the fourth discontinuity. Floridi does not mention Mazlish’s (1993) formulation, although there are great similarities in the thinking, as well as interesting differences between the two, which we will discuss in the next section.

  10.

    An idea resisted by many, including Gibson (1979), Tallis (2004), and Varela et al. (1991).

  11.

    The early modern period does give us a few well-known examples of technologies as metaphors for parts of the human body, from the hydraulic metaphor used in the time of Descartes to illustrate the ways that bodies were supposed to move, to the mills of Leibniz’s thought experiments. Later, in the nineteenth century, telegraph connections were sometimes used as a model of the interconnection of brains.

  12.

    Luciano Floridi (2014, p. 91) gives an interesting reconstruction of how Hobbes’s ideas in his 1651 treatise Leviathan may have been triggered by the creation and publicity of the Pascalina, which was influential throughout Europe.

  13.

    Boden observes in Mind as Machine: A History of Cognitive Science that the idea of “‘Machine as Man’ is an ancient idea, and a technical practice. Ingenious android machines whose movements resembled human behaviour, albeit in highly limited ways, were already built 2500 years [ago]. ‘Man as Machine’ is much more recent.” (Boden 2006, p. 51).

  14.

    Hans Moravec (1988) calls these creations Mind Children and, however we choose to treat them, we cannot now doubt their centrality to our lives. It is because our further co-habitation with our creations is a central problematic of our age that Mary Shelley’s Frankenstein; or, The Modern Prometheus (Shelley 2018 [1818]) remains the prescient touchstone text of our epoch.

  15.

    The best and most authoritative account of the intellectual ferment that gave rise to Artificial Intelligence can be found in Margaret Boden’s 2006 two volume set Mind As Machine: A History of Cognitive Science. Boden discusses “three inspirational interdisciplinary meetings”, including the 1956 Dartmouth meeting, but also the IEEE 3-day symposium convened at MIT in mid-September and a later meeting held in 1958 in London as times and places where the central ideas of AI really germinated (see Boden 2006, Chapter 6.iii).

  16.

    For further details and excellent discussion of early work in artificial intelligence including its many successes and its limitations see Boden 2006, Chapter 10: When GOFAI was NEWFAI.

  17.

    Questions about whether predictive processing constitutes the real mechanisms of mind go far beyond the scope of this introduction. The interested reader is referred to Clark (2015b), Hohwy (2013) and the Chapter by Paul Smart (this volume).

  18.

    This weak AI can now be seen very much in the tradition of Pascal’s calculating machine which was supposedly designed in order to ease the burden of the laborious calculations that Pascal’s father needed to perform as part of his work as a supervisor of taxes.

  19.

    Today the term Artificial General Intelligence (AGI) is sometimes used to describe artificial systems which have human-level intelligence (e.g., Goertzel and Pennachin 2007). It is important to note, however, that even an AGI that could replicate all the factors of human intelligence would not necessarily be a strong AI. It is conceptually possible to build an AGI that could match or even outperform human beings in any or every particular domain but still not be subjective in Searle’s sense. Whether this is because, as Searle argues, symbol processing systems are just not the right sort of mechanistic systems to be subjective is an open question. Recent work in machine consciousness (e.g., Clowes et al. 2007; Holland 2003) seeks, among other things, to understand whether non-organic mechanistic systems – predominantly computational systems – could ever be subjective or, put another way, conscious.

  20.

    Armstrong’s much-anthologized and influential essay “The Causal Theory of Mind” (Armstrong 1980) formulates a causal analysis of mind in functionalist terms without mentioning computers, although he does imply that perceptual states in the brain are informational states.

  21.

    The idea that the mind is literally software for the brain remains controversial and has recently come under sustained attack (Piccinini 2010).

  22.

    See the Schneider and Corabi paper in this volume, but also Schneider’s book Artificial You: AI and the Future of Your Mind (Schneider 2019) for an extended and highly illuminating discussion. The burden of her recent book – and of several essays in this one – is that the software metaphor of mind creates multiple problems, not least when we consider the notion of personal identity (see the papers by Schneider and Corabi, and Piccinini, this volume).

  23.

    For a recent exploration of this theme see Ihde and Malafouris (2019). The idea however has a long history (Vygotsky and Luria 1994).

  24.

    Further discussion of EMT can be found throughout the book, especially in Chapter XYZ.

  25.

    Its full title is The Fourth Revolution: How the Infosphere is Reshaping Human Reality.

  26.

    In this, the foundations of Floridi’s view clearly echo John Searle’s (1980) position that no amount of computational syntactic processing can ever add up to the inherent meaningfulness and subjectivity of real minds.

  27.

    In some respects, the notion of human reality is a little vague and seems to be used in a variety of different ways by Floridi, but a central meaning concerns how we are now coming to think of the difference between “real life” and the virtual and online world. It is Floridi’s claim that this distinction is increasingly breaking down and that we are already starting to live in a blurred “onlife” reality.

  28.

    Though see the discussion at the beginning of Sect. 1.3 on some of the cross-cultural diversity in how humans have variously thought of AI.

  29.

    Ur is the original Mesopotamian city, in modern-day Iraq; once a coastal city near the mouth of the Euphrates, it is today part of a desert landscape. It appears that the building of cities, the invention of writing and the development of complex bureaucracies are roughly coeval developments of the human species.

  30.

    See Chapter 6, “Intelligence Inscribing the World”, of Floridi (2014) for a treatment of how he sees human civilization adapting to the reality of cohabiting with light or weak AI. Light AI does not appear to be fully defined, but it functions in the discussion to pick out the sort of intelligence we find in smart systems to which no true intelligence should be attributed (whatever real intelligence is).

  31.

    Floridi writes: “The two souls of AI have been variously and not always consistently named. Sometimes the distinctions weak vs. strong AI, or good old-fashioned vs. new or nouvelle AI, have been used to capture the difference. I prefer to use the less loaded distinction between light vs. strong AI.” (Floridi 2014, p. 141) While Floridi is right about the inconsistent naming, this explanation risks confusing matters further. Strong and weak AI were originally used to distinguish two approaches to what AI was supposed to be doing: whether building systems that might really be subjective, or minds, or merely doing a form of non-subjective processing. GOFAI and nouvelle AI were different approaches to how these different goals could be achieved (see, e.g., Brooks 1990).

  32.

    See for instance the discussion in Fodor (2009).

  33.

    This is one reason the original extended mind paper is the most cited philosophy paper of the last 20 years.

  34.

    E-memory might also be defined as a heterogeneous set of digital or electronic systems that provide functions similar to, or substituting for, those that would otherwise be provided by human biological memory; see Clowes (2013) for further discussion and for the slightly problematic nature of these definitions.

  35.

    Some of the questions about the nature of the epistemic framework we can use to accommodate an increasingly diverse world of cognitive agents and their relations with, on the one hand, technologies and, on the other, social practices are treated in Gloria Andrada’s paper in this volume.

  36.

    It is only in the era of cloud technology that we could begin to seriously worry that our minds were leaking out to machines in the way Nicholas Carr articulated (Carr 2008, 2010). It is important to see that this worry reflects how we think about our minds in relation to the current form (cloud tech) and deep tendencies (e.g. Moore’s Law, pervasive computing) of computer technology. These problems are unlikely to subside anytime soon, and many of the intellectual tools we need to grasp them have yet to be invented. The hope is that by explicitly laying out some of the distinctive difficulties of our conceptual epoch in this volume we can move forward.

  37.

    For a predecessor view to Zawidzki see McGeer (2001). Also of relevance is Gallagher (2001).

  38.

    For a nice descriptive analysis of how these algorithms were developed see Somers (2019).

  39.

    For a critique and discussion of Turkle’s Alone Together see Clowes (2011). For a detailed discussion of what it would take for an artificial being to count as a person see Clowes (2020a).

  40.

    In fact, GPT-3 is yet another application of the deep learning model. Its scope, at least at the time of going to press, has so far been defined by its major successes in unexpected areas. Quite what its limits are is so far unknown.

  41.

    See Steve Fuller’s discussion of these themes and how they intersect with the questions of post-humanism and transhumanism (Fuller 2011) and the concluding chapter from Georg Theiner for further reflection on this topic.

  42.

    See Frankish’s paper in this volume for a more extensive bibliography on the two-systems view. For a deeper analysis of its background and how it has intersected with the history of philosophy see Frankish (2010).

  43.

    See Chapter 7 of Dennett’s (1991) Consciousness Explained for a detailed discussion of autostimulation and why this cultural invention may hold the key to turning the latent powers of the human brain to new purposes.

  44.

    Many of these themes have been developed in deep and exciting new ways in Schneider’s book Artificial You: AI and the Future of Your Mind (Schneider 2019).

  45.

    See also Clark’s discussion of the principle of ecological assembly (see Clark 2008).

  46.

    See further discussion in Clowes (2015).

References

  • Armstrong, D. M. (1980). The causal theory of the mind.
  • Armstrong, D. M. (1983). The nature of mind and other essays.
  • Baggini, J. (2018). How the world thinks: A global history of philosophy. Granta Books.
  • Barkow, J. H., Cosmides, L., & Tooby, J. (1992). The adapted mind: Evolutionary psychology and the generation of culture. New York: Oxford University Press.
  • Bengio, Y. (2009). Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1), 1–127.
  • Benzon, W. L. (2020). GPT-3: Waterloo or Rubicon? Here be dragons (August 5, 2020).
  • Bickhard, M. H. (2009). The interactivist model. Synthese, 166(3), 547–591.
  • Boden, M. A. (1977). Artificial intelligence and natural man. Hassocks: Harvester Press.
  • Boden, M. A. (1990). The creative mind: Myths and mechanisms. London: Sphere Books Ltd.
  • Boden, M. A. (2006). Mind as machine: A history of cognitive science (two-volume set). Oxford: Oxford University Press.
  • Bostrom, N. (2014). Superintelligence. Dunod.
  • Bostrom, N., & Sandberg, A. (2008). Whole brain emulation: A roadmap. Lancaster University. Accessed 21 Jan 2015.
  • Bratman, M. (2007). Structures of agency: Essays. Oxford: Oxford University Press.
  • Brooks, R. (1990). Elephants don’t play chess. Robotics and Autonomous Systems, 6, 3–15.
  • Brooks, R. (1991a). Intelligence without reason. Paper presented at the International Joint Conference on Artificial Intelligence.
  • Brooks, R. (1991b). Intelligence without representation. Artificial Intelligence, 47, 139–160.
  • Brooks, R. (2002). Robot: The future of flesh and machines. Cambridge, MA: Allen Lane/The Penguin Press.
  • Brooks, R., Breazeal, C., Marjanovic, M., Scassellati, B., & Williamson, M. W. (1999). The Cog project: Building a humanoid robot.
  • Bruner, J. S. (1956). Freud and the image of man. American Psychologist, 11(9), 463.
  • Callaway, E. (2020). ‘It will change everything’: DeepMind’s AI makes gigantic leap in solving protein structures.
  • Carr, N. (2008). Is Google making us stupid? Yearbook of the National Society for the Study of Education, 107(2), 89–94.
  • Carr, N. (2010). The shallows: How the internet is changing the way we think, read and remember. London: Atlantic Books.
  • Carter, J. A., Clark, A., Kallestrup, J., Palermos, S. O., & Pritchard, D. (2018). Extended epistemology. Oxford: Oxford University Press.
  • Castells, M. (1996). The information age: Economy, society and culture (3 volumes). Oxford: Blackwell.
  • Castro-Caldas, A., Petersson, K. M., Reis, A., Stone-Elander, S., & Ingvar, M. (1998). The illiterate brain. Learning to read and write during childhood influences the functional organization of the adult brain. Brain, 121(6), 1053–1063.
  • Chalmers, D. (1995). Facing up to the problem of consciousness. Journal of Consciousness Studies, 2(3), 200–219.
  • Chalmers, D. (2002). Consciousness and its place in nature. In S. Stich & T. Warfield (Eds.), Blackwell guide to the philosophy of mind. (Available online at http://consc.net/papers/nature.html).
  • Chalmers, D. (2015). Panpsychism and panprotopsychism. In T. Alter & Y. Nagasawa (Eds.), Consciousness in the physical world: Perspectives on Russellian monism. Oxford: Oxford University Press.
  • Chappelle, W. L., Goodman, T., Reardon, L., & Thompson, W. (2014a). An analysis of post-traumatic stress symptoms in United States Air Force drone operators. Journal of Anxiety Disorders, 28(5), 480–487.
  • Chappelle, W. L., McDonald, K. D., Prince, L., Goodman, T., Ray-Sannerud, B. N., & Thompson, W. (2014b). Symptoms of psychological distress and post-traumatic stress disorder in United States Air Force “drone” operators. Military Medicine, 179(Suppl_8), 63–70.
  • Clark, A. (2003). Natural born cyborgs: Minds, technologies and the future of human intelligence. New York: Oxford University Press.
  • Clark, A. (2006). Soft selves and ecological control. In D. Spurrett, D. Ross, H. Kincaid, & L. Stephens (Eds.), Distributed cognition and the will. Cambridge, MA: MIT Press.
  • Clark, A. (2008). Supersizing the mind. New York: Oxford University Press.
  • Clark, A. (2012). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36(3), 181–204.
  • Clark, A. (2015a). Predicting peace: The end of the representation wars. In Open MIND. Frankfurt am Main: MIND Group.
  • Clark, A. (2015b). Surfing uncertainty: Prediction, action, and the embodied mind. Oxford: Oxford University Press.
  • Clark, A., & Chalmers, D. (1998). The extended mind. Analysis, 58, 10–23.
  • Clowes, R. W. (2011, October 31). Electric selves? Review of Alone together: Why we expect more from technology and less from each other, by Sherry Turkle. Culture Wars.
  • Clowes, R. W. (2013). The cognitive integration of E-memory. Review of Philosophy and Psychology, 4, 107–133.
  • Clowes, R. W. (2015). Thinking in the cloud: The cognitive incorporation of cloud-based technology. Philosophy and Technology, 28(2), 261–296.
  • Clowes, R. W. (2017). Extended memory. In S. Bernecker & K. Michaelian (Eds.), Routledge handbook on the philosophy of memory (pp. 243–255). Abingdon/Oxford: Routledge.
  • Clowes, R. W. (2019). Immaterial engagement: Human agency and the cognitive ecology of the internet. Phenomenology and the Cognitive Sciences, 18(1), 259–279. https://doi.org/10.1007/s11097-018-9560-4.
  • Clowes, R. W. (2020a). Breaking the code: Strong agency and becoming a person. In T. Shanahan & P. R. Smart (Eds.), Blade runner 2049: A philosophical exploration (pp. 108–126). Abingdon/Oxon: Routledge.
  • Clowes, R. W. (2020b). The internet extended person: Exoself or doppelganger? Límite: Revista Interdisciplinaria de Filosofía y Psicología, 15(22).
  • Clowes, R. W., Torrance, S., & Chrisley, R. (2007). Machine consciousness: Embodiment and imagination (editorial introduction). Journal of Consciousness Studies, 14(7), 7–14.
  • Coeckelbergh, M. (2020). AI ethics. MIT Press.
  • Copenhaver, R., & Shields, C. (2019a). General introduction to history of the philosophy of mind, six volumes. In R. Copenhaver & C. Shields (Eds.), History of the philosophy of mind, six volumes. Routledge.
  • Copenhaver, R., & Shields, C. (2019b). History of the philosophy of mind, six volumes.
  • Corabi, J., & Schneider, S. (2012). Metaphysics of uploading. Journal of Consciousness Studies, 19(7–8), 26–44.
  • Corabi, J., & Schneider, S. (2014). If you upload, will you survive? In Intelligence unbound: The future of uploaded and machine minds (pp. 131–145).
  • Dehaene, S. (2009). Reading in the brain: The science and evolution of a human invention. Viking.
  • Dennett, D. C. (1978). Artificial intelligence as philosophy and psychology. In Brainstorms. Montgomery: Bradford Books.
  • Dennett, D. C. (1984). Cognitive wheels: The frame problem of AI. In Minds, machines and evolution (pp. 129–151).
  • Dennett, D. C. (1991). Consciousness explained. Harmondsworth: Penguin Books.
  • Dennett, D. C. (1996a). Facing backwards on the problems of consciousness. Journal of Consciousness Studies, 3(1), 4–6.
  • Dennett, D. C. (1996b). Kinds of minds: Towards an understanding of consciousness. New York: Phoenix Books.
  • Ding, J. (2018). Deciphering China’s AI dream. Future of Humanity Institute Technical Report.
  • Dreyfus, H. L. (1972). What computers can’t do: A critique of artificial reason. New York: Harper.
  • Evans, J. S. B. (2010). Thinking twice: Two minds in one brain. New York: Oxford University Press.
  • Fauconnier, G., & Turner, M. (2002). The way we think: Conceptual blending and the mind’s hidden complexities. New York: Basic Books.
  • Floridi, L. (2014). The fourth revolution: How the infosphere is reshaping human reality. Oxford: Oxford University Press.
  • Floridi, L. (2015). The onlife manifesto: Being human in a hyperconnected era. Cham: Springer.
  • Floridi, L. (2020). AI and its new winter: From myths to realities. Philosophy & Technology, 1–3.
  • Floridi, L., & Chiriatti, M. (2020a). GPT-3: Its nature, scope, limits, and consequences. Minds and Machines, 30(4), 681–694.
  • Floridi, L., & Chiriatti, M. (2020b). GPT-3: Its nature, scope, limits, and consequences. Minds and Machines, 1–14.
  • Fodor, J. (1975a). The language of thought. Cambridge, MA: MIT Press.
  • Fodor, J. A. (1975b). The language of thought. New York: Harvard University Press.
  • Fodor, J. (2009). Where is my mind? London Review of Books, 31(3), 13–15.
  • Frankfurt, H. G. (1971). Freedom of the will and the concept of a person. The Journal of Philosophy, 68(1), 5–20.
  • Frankish, K. (2010). Dual-process and dual-system theories of reasoning. Philosophy Compass, 5(10), 914–926.
  • Freud, S. (1920). A general introduction to psychoanalysis. Createspace Independent Publishing Platform.
  • Friston, K. (2008). Hierarchical models in the brain. PLoS Computational Biology, 4(11), e1000211.
  • Fuller, S. (2011). Humanity 2.0: Foundations for 21st century social thought. London: Palgrave Macmillan.
  • Gallagher, S. (2001). The practice of mind. Journal of Consciousness Studies, 8(5–7), 83–108.
  • Gardner, H. (1985). The mind’s new science. New York: Basic Books.
  • Gerken, M. (2014). Outsourced cognition. Philosophical Issues, 24(1), 127–158.
  • Gibson, J. J. (1979). The ecological approach to visual perception. Houghton Mifflin.
  • Goertzel, B., & Pennachin, C. (2007). Artificial general intelligence (Vol. 2). Springer.
  • Greenfield, S. (2015). Mind change: How digital technologies are leaving their mark on our brains. Random House.
  • Gregory, R. L. (1981). Mind in science: A history of explanations in psychology. Cambridge: Cambridge University Press.
  • Heersmink, R. (2016). Distributed selves: Personal identity and extended memory systems. Synthese, 1–17.
  • Heersmink, R. (2020). Varieties of the extended self. Consciousness and Cognition, 85, 103001.
  • Hendler, J. (2008). Avoiding another AI winter. IEEE Intelligent Systems, (2), 2–4.
  • Hohwy, J. (2013). The predictive mind. Oxford University Press.
  • Holland, O. (2003). Editorial introduction. Journal of Consciousness Studies, 10(4), 1–6.
  • Hutto, D. D. (2008). Folk psychological narratives: The sociocultural basis of understanding reasons. The MIT Press.
  • Ihde, D. (1990). Technology and the lifeworld: From garden to Earth. Indiana University Press.
  • Ihde, D., & Malafouris, L. (2019). Homo faber revisited: Postphenomenology and material engagement theory. Philosophy & Technology, 32(2), 195–214. https://doi.org/10.1007/s13347-018-0321-7.
  • Ito, J. (2018). Why westerners fear robots and the Japanese do not. Wired.
  • Jackson, F. (1982). Epiphenomenal qualia. The Philosophical Quarterly, 32, 127–136.
  • Kahneman, D. (2011). Thinking, fast and slow. Macmillan.
  • Kharpal, A. (2017). Japan has no fear of AI — It could boost growth despite population decline, Abe says. cnbc.com. Retrieved from https://www.cnbc.com/2017/03/19/japan-has-no-fear-of-ai%2D%2Dit-could-boost-growth-despite-population-decline-abe-says.html
  • Kim, J. (2006). Philosophy of mind (2nd ed.). Cambridge, MA: Westview.
  • Kind, A. (2018). The mind-body problem in 20th-century philosophy. In Philosophy of mind in the twentieth and twenty-first centuries (The history of the philosophy of mind, Vol. 6).
  • Knappett, C., & Malafouris, L. (2008). Material and nonhuman agency: An introduction. In Material agency: Towards a non-anthropocentric approach (pp. ix–xix).
  • Kohs, G. (Writer). (2017). AlphaGo. G. Krieg, J. Rosen, & K. Proudfoot (Producers). RO*CO Films.
  • Korsgaard, C. M. (2009). Self-constitution: Agency, identity, and integrity. Oxford: Oxford University Press.
  • Lakoff, G., & Johnson, M. (1999). Philosophy in the flesh. New York: Basic Books.
  • Lakoff, G., & Johnson, M. (2003 [1980]). Metaphors we live by. Chicago: University of Chicago Press.
  • Laland, K. N., Odling-Smee, J., & Feldman, M. W. (2000). Niche construction, biological evolution, and cultural change. Behavioral and Brain Sciences, 23, 131–175.
  • Lanier, J. (2010). You are not a gadget: A manifesto. London: Allen Lane.
  • Loh, K. K., & Kanai, R. (2015). How has the internet reshaped human cognition? The Neuroscientist, 1073858415595005.
  • Luria, A. R. (1976). Cognitive development: Its cultural and social foundations.
  • Malafouris, L. (2010a). Grasping the concept of number: How did the sapient mind move beyond approximation. In I. Morley & C. Renfrew (Eds.), The archaeology of measurement: Comprehending heaven, earth and time in ancient societies (pp. 35–42). Cambridge: Cambridge University Press.
  • Malafouris, L. (2010b). Metaplasticity and the human becoming: Principles of neuroarchaeology. Journal of Anthropological Sciences, 88(4), 49–72.
  • Malafouris, L. (2013). How things shape the mind: A theory of material engagement. Cambridge, MA: MIT Press.
  • Malafouris, L. (2016). On human becoming and incompleteness: A material engagement approach to the study of embodiment in evolution and culture. In Embodiment in evolution and culture (pp. 289–305).
  • Mazlish, B. (1993). The fourth discontinuity: The co-evolution of humans and machines. Yale University Press.
  • McGeer, V. (2001). Psycho-practice, psycho-theory and the contrastive case of autism. How practices of mind become second-nature. Journal of Consciousness Studies, 8(5–7), 109–132.
  • Menary, R. (2010). Cognitive integration and the extended mind. In R. Menary (Ed.), The extended mind (pp. 227–244). London: Bradford Book, MIT Press.
  • Menary, R. (2014). Neural plasticity, neuronal recycling and niche construction. Mind & Language, 29(3), 286–303.
  • Milkowski, M. (2013). Explaining the computational mind. MIT Press.
  • Minsky, M., & Papert, S. (1969). Perceptrons: An introduction to computational geometry. MIT Press.
  • Mithen, S. (1996). The prehistory of the mind. London: Thames & Hudson.
  • Moor, J. (2006). The Dartmouth College artificial intelligence conference: The next fifty years. AI Magazine, 27(4), 87–87.
  • Moravec, H. (1988). Mind children: The future of robot and human intelligence. Cambridge, MA: Harvard University Press.
  • Newell, A., & Simon, H. A. (1972). Human problem solving. Englewood Cliffs: Prentice-Hall.
  • Ney, A., & Albert, D. Z. (2013). The wave function: Essays on the metaphysics of quantum mechanics. Oxford University Press.
  • Norman, D. A. (1993). Things that make us smart: Defending human attributes in the age of the machine. Addison-Wesley.
  • Ong, W. J. (1982). Orality and literacy: The technologizing of the word. London: Methuen.
  • Piaget, J. (1954). The construction of reality in the child. New York: Basic.
  • Piccinini, G. (2010). The mind as neural software? Understanding functionalism, computationalism, and computational functionalism. Philosophy and Phenomenological Research, 81(2), 269–311.
  • Piccinini, G. (this volume). The myth of mind uploading.
  • Postman, N. (1993). Technopoly: The surrender of culture to technology. New York: Vintage.
  • Putnam, H. (1967). Psychological predicates. Art, Mind, and Religion, 1, 37–48.
  • Putnam, H. (1980). The nature of mental states. Readings in Philosophy of Psychology, 1, 223–231.
  • Rumelhart, D. E., & McClelland, J. L. (1986a). Parallel distributed processing: Explorations in the microstructure of cognition (Vol. 1). Cambridge, MA: MIT Press.
  • Rumelhart, D. E., & McClelland, J. L. (1986b). Parallel distributed processing: Explorations in the microstructure of cognition (Vol. 2). Cambridge, MA: MIT Press.
  • Russell, B. (1927). The analysis of matter. London: Kegan Paul, Trench, Trubner & Co.
  • Russell, S. J. (2019). Human compatible: Artificial intelligence and the problem of control. Penguin Audio.
  • Ryle, G. (1949). The concept of mind. London: Hutchinson.
  • Samuel, A. L. (1959). Some studies in machine learning using the game of checkers. IBM Journal of Research and Development, 3(3), 210–229.
  • Sapolsky, R. M. (1997). Junk food monkeys and other essays on the biology of the human predicament. London: Headline.
  • Schneider, S. (2009). Mindscan: Transcending and enhancing the human brain. In S. Schneider (Ed.), Science fiction and philosophy: From time travel to superintelligence (pp. 260–276). Hoboken: Wiley-Blackwell.
  • Schneider, S. (2011). The language of thought: A new philosophical direction. MIT Press.
  • Schneider, S. (2019). Artificial you: AI and the future of your mind. Princeton: Princeton University Press.
  • Schwitzgebel, E. (2019). Introspection. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Winter 2019 ed.).
  • Searle, J. R. (1980). Minds, brains and programs. Behavioral and Brain Sciences, 3(3), 417–457.
  • Shelley, M. W. (2018). Frankenstein: The 1818 text. Penguin.
  • Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., et al. (2017). Mastering the game of Go without human knowledge. Nature, 550(7676), 354–359.
  • Smart, P. (2018). Emerging digital technologies: Implications for extended conceptions of cognition and knowledge. In J. A. Carter, A. Clark, J. Kallestrup, S. O. Palermos, & D. Pritchard (Eds.), Extended epistemology (pp. 266–304). Oxford: Oxford University Press.
  • Smart, P. R., Heersmink, R., & Clowes, R. W. (2017). The cognitive ecology of the internet. In S. J. Cowley & F. Vallée-Tourangeau (Eds.), Cognition beyond the brain (2nd ed., pp. 251–282). Springer.
  • Smart, P. R., Madaan, A., & Hall, W. (2018). Where the smart things are: Social machines and the internet of things. Phenomenology and the Cognitive Sciences, 1–25.
  • Smart, P., Chu, M.-C. M., O'Hara, K., Carr, L., & Hall, W. (2019). Geopolitical drivers of personal data: The four horsemen of the datapocalypse.
  • Somers, J. (2019). How the artificial-intelligence program AlphaZero mastered its games.
  • Sparrow, B., Liu, J., & Wegner, D. M. (2011). Google effects on memory: Cognitive consequences of having information at our fingertips. Science, 333(6043), 776–778.
  • Sterelny, K. (2011). From hominins to humans: How sapiens became behaviourally modern. Philosophical Transactions of the Royal Society B: Biological Sciences, 366(1566), 809–822.
  • Sutton, J. (2010). Exograms and interdisciplinarity: History, the extended mind, and the civilizing process. In R. Menary (Ed.), The extended mind (pp. 189–225). London: Bradford Book, MIT Press.
  • Taddeo, M., & Floridi, L. (2018). How AI can be a force for good. Science, 361(6404), 751–752.
  • Tallis, R. (2004). Why the mind is not a computer: A pocket lexicon of neuromythology (Vol. 13). Imprint Academic.
  • Toffler, A. (1980). The third wave (Vol. 484). New York: Bantam Books.
  • Turing, A. M. (1937). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, s2-42(1), 230–265.
  • Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460.
  • Turkle, S. (2011). Alone together: Why we expect more from technology and less from each other. New York: Basic Books.
  • Varela, F. J., Thompson, E., & Rosch, E. (1991). The embodied mind. Cambridge, MA: MIT Press.
  • Velleman, J. D. (2009). The possibility of practical reason. Michigan Publishing, University of Michigan Library.
  • Vision, G. (2018). The provenance of consciousness. In E. Vitaliadis & C. Mekos (Eds.), Brute facts (pp. 155–176). Oxford: Oxford University Press.
  • Vygotsky, L. S. (1962). Thought and language (E. Hanfmann & G. Vakar, Trans.). Cambridge, MA: MIT Press.
  • Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.
  • Vygotsky, L. S., & Luria, A. R. (1994). Tool and symbol in child development. In R. Van Der Veer & J. Valsiner (Eds.), The Vygotsky reader. Cambridge, MA: Basil Blackwell.
  • Wegner, D. M., & Ward, A. F. (2013, December 1). The internet has become the external hard drive for our memories. Scientific American.
  • Weizenbaum, J. (1976). Computer power and human reason: From judgment to calculation.
  • Wilkes, K. V. (1984). Pragmatics in science and theory in common sense. Inquiry, 27, 339–361.
  • Wootton, D. (2015). The invention of science: A new history of the scientific revolution. Penguin.
  • Zawidzki, T. W. (2013). Mindshaping: A new framework for understanding human social cognition. MIT Press.


Acknowledgements

Robert W. Clowes’s work is supported by FCT, ‘Fundação para a Ciência e a Tecnologia, I.P.’, through the Stimulus of Scientific Employment grant (DL 57/2016/CP1453/CT0021) and a personal grant (SFRH/BPD/70440/2010).

Klaus Gärtner’s work is supported by FCT, ‘Fundação para a Ciência e a Tecnologia, I.P.’, under the Stimulus of Scientific Employment (DL 57/2016/CP1479/CT0081) and by the Centro de Filosofia das Ciências da Universidade de Lisboa (UIDB/00678/2020).

This work is also supported by the FCT project “Emergence in the Natural Sciences: Towards a New Paradigm” (PTDC/FER-HFC/30665/2017).

Author information


Corresponding author

Correspondence to Robert W. Clowes.



Copyright information

© 2021 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Clowes, R.W., Gärtner, K., Hipólito, I. (2021). The Mind Technology Problem and the Deep History of Mind Design. In: Clowes, R.W., Gärtner, K., Hipólito, I. (eds) The Mind-Technology Problem. Studies in Brain and Mind, vol 18. Springer, Cham. https://doi.org/10.1007/978-3-030-72644-7_1

