Abstract
We are living through a new phase in human development in which much of everyday life – at least in the most technologically developed parts of the world – has come to depend upon our interaction with “smart” artefacts. Alongside this increasing adoption of and ever-deepening reliance on intelligent machines, important changes have been taking place, often in the background, in how we think of ourselves and how we conceptualize our relationship with technology. As we design, create and learn to live with a new order of artefacts that exhibit behavior which, were it carried out by human beings, would be seen as intelligent, the ways in which we conceptualize intelligence, minds and reasoning, and related notions such as self and agency, are undergoing profound shifts. We argue that the basic background assumptions informing our concepts of mind, and the underlying conceptual scheme structuring our reasoning about minds, have recently been transformed in the process. This shift has changed the nature and quality of our folk understanding of mind, our scientific psychology, and the philosophical problems that the interaction of these realms produces. Many of the traditional problems in the philosophy of mind have been reconfigured in the process. This introduction sets the scene for our book, which treats this reconfiguration of our concepts of mind and of technology, and the new casting of philosophical problems that it engenders.
Change history
12 April 2022
The book was inadvertently published with errors in Chapter 1 and Chapter 13.
Notes
- 1.
In May 1643 Princess Elisabeth of Bohemia wrote to Descartes, whose work she had been following closely, and posed the problem of interaction in an especially pointed way. She asked “how the mind of a human being, being only a thinking substance, can determine the bodily spirits in producing bodily actions.” Princess Elisabeth, pressing on the central problem arising from substance dualism, remained unsatisfied with Descartes’s attempts to resolve the question. In exasperation she finally wrote, “it would be easier for me to concede matter and extension to the mind than it would be for me to concede the capacity to move a body and be moved by one to an immaterial thing.” Cited in Jaegwon Kim’s Philosophy of Mind (Kim 2006, pp. 41–42). Kim notes that this is a (very) early example of the causal argument for materialism, which holds that mental causation implies materialism, for it is hard to see how any putative immaterial substance might interact with the rest of the causal order. This is also a compelling instance of how it is sometimes possible to think against the grain of even a highly dominant conceptual scheme.
- 2.
Indeed, it is only since the 1960s that there have been – at least in the Anglo-Saxon world – university courses explicitly targeted at the philosophy of mind. Many such courses are organized around the Mind-Body problem.
- 3.
The sense that mind and body are distinct has arguably been part of folk psychology and religious views of the world for centuries, as well as of metaphysical views from Plato to Descartes. Yet the idea that the mind, or the soul, is strictly separate from matter seems to have been introduced only when mechanist views of the rest of nature were being clearly articulated for the first time.
- 4.
Amy Kind, for example, argues that dualism was much the preferred view of the early modern period, and that materialist and what we would now call physicalist positions were much out of favour (Kind 2018). La Mettrie’s Man a Machine ([1747]) was very much against the tide of ideas of its time, although it anticipated a major theme of twentieth-century philosophy.
- 5.
Chalmers’s informational dualism is a notable exception here (Chalmers 2002). Arguably, a form of dualism remains widespread in popular culture and in folk psychology.
- 6.
An important exception to this generalization is Richard Gregory’s monumental (1981) Mind in Science: A History of Explanations in Psychology. Important work on how our conceptual schemes are more generally constrained by technology and the history of invention can be found in Postman (1993). One field where the background metaphors for mind are considered is cognitive linguistics (Fauconnier and Turner 2002; Lakoff and Johnson 2003 [1980]), and perhaps especially Lakoff and Johnson (1999). However, even cognitive linguists tend chiefly to pay attention to the way that concepts are shaped by the nature of human embodiment. The idea that our use of technology may similarly shape our abstract reasoning about the nature of mind is less explored.
- 7.
Freud humbly pointed to his own place in the history of ideas when he argued that his idea of the unconscious should be seen as a third discontinuity following the ideas of Darwin and Copernicus.
- 8.
This is cited in Mazlish (1993).
- 9.
The first three discontinuities were described by Freud (1920). Luciano Floridi has recently developed a related thesis in his (2014) book The Fourth Revolution: How the Infosphere is Reshaping Human Reality, where he uses the nomenclature of “the information revolution” to label the fourth discontinuity. Floridi does not mention Mazlish’s (1993) formulation, although there are great similarities in the thinking, as well as interesting differences between the two, which we will discuss in the next section.
- 10.
- 11.
The early modern period does give us a few well-known examples of technologies as metaphors for parts of the human body, from the hydraulic metaphor used in the time of Descartes to illustrate the ways that bodies were supposed to move, to the mills of Leibniz’s thought experiments. Later, in the nineteenth century, telegraph connections were sometimes used as a model of the interconnection of brains.
- 12.
Luciano Floridi (2014, p. 91) gives an interesting reconstruction of how Hobbes’s ideas from his 1651 treatise Leviathan may have been triggered by the creation and publicity of the Pascaline, which was influential throughout Europe.
- 13.
Boden observes in Mind as Machine: A History of Cognitive Science that the idea of “‘Machine as Man’ is an ancient idea, and a technical practice. Ingenious android machines whose movements resembled human behaviour, albeit in highly limited ways, were already built 2,500 years ago. ‘Man as Machine’ is much more recent.” (Boden 2006, p. 51).
- 14.
Hans Moravec (1988) calls these creations Mind Children, and however we choose to treat them we cannot now doubt their centrality to our lives. It is because our further cohabitation with our creations is a central problematic of our age that Mary Shelley’s Frankenstein; or, The Modern Prometheus (Shelley 2018 [1818]) remains the prescient touchstone text of our epoch.
- 15.
The best and most authoritative account of the intellectual ferment that gave rise to Artificial Intelligence can be found in Margaret Boden’s 2006 two-volume set Mind As Machine: A History of Cognitive Science. Boden discusses “three inspirational interdisciplinary meetings” – the 1956 Dartmouth meeting, the IEEE three-day symposium convened at MIT in mid-September of the same year, and a later meeting held in London in 1958 – as the times and places where the central ideas of AI really germinated (see Boden 2006, Chapter 6.iii).
- 16.
For further details and excellent discussion of early work in artificial intelligence including its many successes and its limitations see Boden 2006, Chapter 10: When GOFAI was NEWFAI.
- 17.
- 18.
This weak AI can now be seen very much in the tradition of Pascal’s calculating machine which was supposedly designed in order to ease the burden of the laborious calculations that Pascal’s father needed to perform as part of his work as a supervisor of taxes.
- 19.
Today the term Artificial General Intelligence (AGI) is sometimes used to describe artificial systems which have human-level intelligence (e.g., Goertzel and Pennachin 2007). It is important to note, however, that even an AGI that could replicate all the factors of human intelligence would not necessarily be a strong AI. It is conceptually possible to build an AGI that could match or even outperform human beings in any or every particular domain but still not be subjective in Searle’s sense. Whether this is because, as Searle argues, symbol-processing systems are just not the right sort of mechanistic systems to be subjective is an open question. Recent work in machine consciousness (e.g., Clowes et al. 2007; Holland 2003) seeks, among other things, to understand whether non-organic mechanistic systems – predominantly computational systems – could ever be subjective or, put another way, conscious.
- 20.
Armstrong’s much-anthologized and influential essay “The Causal Theory of Mind” (Armstrong 1980) formulates a causal analysis of mind in functionalist terms without mentioning computers, although he does imply that perceptual states in the brain are informational states.
- 21.
The idea that the mind is literally software for the brain remains controversial and has recently come under sustained attack (Piccinini 2010).
- 22.
See the Schneider and Corabi paper in this volume, but also Schneider’s book Artificial You: AI and the Future of Your Mind (Schneider 2019) for an extended and highly illuminating discussion. The burden of her recent book – and of several essays in this one – is that the software metaphor of mind creates multiple problems, not least when we consider the notion of personal identity (see the papers by Schneider and Corabi, and Piccinini, this volume).
- 23.
- 24.
Further discussion of EMT can be found throughout the book, especially in Chapter XYZ.
- 25.
Its full title is The Fourth Revolution: How the Infosphere is Reshaping Human Reality.
- 26.
In this, the foundations of Floridi’s view clearly echo John Searle’s (1980) position that no amount of computational syntactic processing can ever add up to the inherent meaningfulness and subjectivity of real minds.
- 27.
In some respects, the notion of human reality is a little vague and seems to be used in a variety of different ways by Floridi, but a central meaning concerns how we are now coming to think of the difference between “real life” and the virtual and online world. It is Floridi’s claim that this distinction is increasingly breaking down and that we are already starting to live in a blurred “onlife” reality.
- 28.
But see the discussion at the beginning of Sect. 1.3 on some of the cross-cultural diversity in how humans have variously thought of AI.
- 29.
Ur is the original Mesopotamian city in modern-day Iraq, once a coastal city near the mouth of the Euphrates and today part of a desert landscape. It appears that the building of cities, the invention of writing and complex bureaucracies are roughly coeval developments of the human species.
- 30.
See Chapter 6, “Intelligence Inscribing the World”, of Floridi (2014) for a treatment of how he sees human civilization adapting to the reality of cohabiting with light or weak AI. Light AI does not appear to be fully defined, but functions in the discussion to pick out the sort of intelligence we find in smart systems to which no true intelligence should be attributed (whatever real intelligence is).
- 31.
Floridi writes, “The two souls of AI have been variously and not always consistently named. Sometimes the distinctions weak vs. strong AI, or good old-fashioned vs. new or nouvelle AI, have been used to capture the difference. I prefer to use the less loaded distinction between light vs. strong AI.” (Floridi 2014, p. 141) While Floridi is right about the inconsistent naming, this explanation risks confusing matters further. Strong and weak AI were originally used to distinguish two views of what AI was supposed to be doing: building systems that might really be subjective, or minds, versus merely doing a form of non-subjective processing. GOFAI and nouvelle AI were different approaches to how these different goals could be attained (see, e.g., Brooks 1990).
- 32.
See for instance the discussion in Fodor (2009).
- 33.
This is one reason the original extended mind paper is the most cited philosophy paper of the last 20 years.
- 34.
E-Memory might also be defined as a heterogeneous set of digital or electronic systems that provide functions similar to, or substituting for, those that would otherwise be provided by human biological memory; see Clowes (2013) for further discussion, including the slightly problematic nature of these definitions.
- 35.
Some of the questions about the nature of the epistemic framework we can use to accommodate an increasingly diverse world of cognitive agents and their relations with, on the one hand, technologies and, on the other, social practices are treated in Gloria Andrada’s paper in this volume.
- 36.
It is only in the time of cloud technology that we could begin to seriously worry that our minds were leaking out to machines in the way Nicholas Carr articulated (Carr 2008, 2010). It is important to see that this worry is a product of how we think about our minds in relation to the current form (cloud tech) and deep tendencies (e.g. Moore’s Law, pervasive computing) of computer technology. These problems are unlikely to subside anytime soon, and many of the intellectual tools we need to grasp them have yet to be invented. The hope is that by explicitly laying out some of the distinctive difficulties of our conceptual epoch in this volume we can move forward.
- 37.
- 38.
For a nice descriptive analysis of how these algorithms were developed, see Somers (2019).
- 39.
- 40.
In fact, GPT-3 is yet another application of the deep learning model. Its scope, at least at the time of going to press, has so far been defined by its major successes in unexpected areas. Quite what its limits are is so far unknown.
- 41.
See Steve Fuller’s discussion of these themes and how they intersect with the questions of post-humanism and transhumanism (Fuller 2011) and the concluding chapter from Georg Theiner for further reflection on this topic.
- 42.
See Frankish’s paper in this volume for a more extensive bibliography on the two-systems view. For a deeper analysis of its background and how it has intersected with the history of philosophy see Frankish (2010).
- 43.
See Chapter 7 of Dennett’s (1991) Consciousness Explained for a detailed discussion of autostimulation and why this cultural invention may hold the key to unlocking the latent powers of the human brain to new purposes.
- 44.
Many of these themes have been developed in deep and exciting new ways in Schneider’s book Artificial You: AI and the Future of Your Mind (Schneider 2019).
- 45.
See also Clark’s discussion of the principle of ecological assembly (see Clark 2008).
- 46.
See further discussion in Clowes (2015).
References
Armstrong, D. M. (1980). The causal theory of the mind. In The nature of mind and other essays.
Armstrong, D. M. (1983). The nature of mind and other essays.
Baggini, J. (2018). How the world thinks: A global history of philosophy. Granta Books.
Barkow, J. H., Cosmides, L., & Tooby, J. (1992). The adapted mind: Evolutionary psychology and the generation of culture. New York: Oxford University Press.
Bengio, Y. (2009). Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1), 1–127.
Benzon, W. L. (2020). GPT-3: Waterloo or Rubicon? Here be dragons (August 5, 2020).
Bickhard, M. H. (2009). The interactivist model. Synthese, 166(3), 547–591.
Boden, M. A. (1977). Artificial intelligence and natural man. Hassocks: Harvester Press.
Boden, M. A. (1990). The creative mind: Myths and mechanisms. London: Sphere Books Ltd.
Boden, M. A. (2006). Mind as machine: A history of cognitive science two-volume set. Oxford: Oxford University Press.
Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford: Oxford University Press.
Bostrom, N., & Sandberg, A. (2008). Whole brain emulation: A roadmap. Technical report, Future of Humanity Institute, Oxford University. Accessed 21 Jan 2015.
Bratman, M. (2007). Structures of Agency: Essays. Oxford: Oxford University Press.
Brooks, R. (1990). Elephants don’t play chess. Robotics and Autonomous Systems, 6, 3–15.
Brooks, R. (1991a). Intelligence without reason. Paper presented at the International Joint Conference on Artificial Intelligence.
Brooks, R. (1991b). Intelligence without representation. Artificial Intelligence, 47, 139–160.
Brooks, R. (2002). Robot: The future of flesh and machines. London: Allen Lane/The Penguin Press.
Brooks, R., Breazeal, C., Marjanovic, M., Scassellati, B., & Williamson, M. W. (1999). The cog project: Building a humanoid robot.
Bruner, J. S. (1956). Freud and the image of man. American Psychologist, 11(9), 463.
Callaway, E. (2020). ‘It will change everything’: DeepMind’s AI makes gigantic leap in solving protein structures.
Carr, N. (2008). Is Google making us stupid? Yearbook of the National Society for the Study of Education, 107(2), 89–94.
Carr, N. (2010). The shallows: How the internet is changing the way we think, read and remember. London: Atlantic Books.
Carter, J. A., Clark, A., Kallestrup, J., Palermos, S. O., & Pritchard, D. (2018). Extended epistemology. Oxford: Oxford University Press.
Castells, M. (1996–1998). The information age: Economy, society and culture (3 volumes). Oxford: Blackwell.
Castro-Caldas, A., Petersson, K. M., Reis, A., Stone-Elander, S., & Ingvar, M. (1998). The illiterate brain. Learning to read and write during childhood influences the functional organization of the adult brain. Brain, 121(6), 1053–1063.
Chalmers, D. (1995). Facing up to the problem of consciousness. Journal of Consciousness Studies, 2(3), 200–219.
Chalmers, D. (2002). Consciousness and its place in nature. In S. Stich & T. Warfield (Eds.), Blackwell guide to the philosophy of mind. (Reprinted from: online at Chalmers website. http://consc.net/papers/nature.html).
Chalmers, D. (2015). Panpsychism and panprotopsychism. In T. Alter & Y. Nagasawa (Eds.), Consciousness in the physical world: Perspectives on Russellian Monism. Oxford: Oxford University Press.
Chappelle, W. L., Goodman, T., Reardon, L., & Thompson, W. (2014a). An analysis of post-traumatic stress symptoms in United States Air Force drone operators. Journal of Anxiety Disorders, 28(5), 480–487.
Chappelle, W. L., McDonald, K. D., Prince, L., Goodman, T., Ray-Sannerud, B. N., & Thompson, W. (2014b). Symptoms of psychological distress and post-traumatic stress disorder in United States Air Force “drone” operators. Military Medicine, 179(Suppl_8), 63–70.
Clark, A. (2003). Natural born cyborgs: Minds, technologies and the future of human intelligence. New York: Oxford University Press.
Clark, A. (2006). Soft selves and ecological control. In D. Spurrett, D. Ross, H. Kincaid, & L. Stephens (Eds.), Distributed cognition and the will. Cambridge, MA: MIT Press.
Clark, A. (2008). Supersizing the mind. New York: Oxford University Press.
Clark, A. (2012). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36(3), 181–204.
Clark, A. (2015a). Predicting peace: The end of the representation wars. In T. Metzinger & J. M. Windt (Eds.), Open MIND. Frankfurt am Main: MIND Group.
Clark, A. (2015b). Surfing uncertainty: Prediction, action, and the embodied mind. Oxford: Oxford University Press.
Clark, A., & Chalmers, D. (1998). The extended mind. Analysis, 58, 10–23.
Clowes, R. W. (2011, Monday 31st October). Electric selves? Review of alone together: Why we expect more from technology and less from each other, by Sherry Turkle. Culture Wars.
Clowes, R. W. (2013). The cognitive integration of E-memory. Review of Philosophy and Psychology, 4(1), 107–133.
Clowes, R. W. (2015). Thinking in the cloud: The cognitive incorporation of cloud-based technology. Philosophy and Technology, 28(2), 261–296.
Clowes, R. W. (2017). Extended memory. In S. Bernecker & K. Michaelian (Eds.), Routledge handbook on the philosophy of memory (pp. 243–255). Abingdon/Oxford: Routledge.
Clowes, R. W. (2019). Immaterial engagement: Human agency and the cognitive ecology of the internet. Phenomenology and the Cognitive Sciences, 18(1), 259–279. https://doi.org/10.1007/s11097-018-9560-4.
Clowes, R. W. (2020a). Breaking the code: Strong agency and becoming a person. In T. Shanahan & P. R. Smart (Eds.), Blade runner 2049: A philosophical exploration (pp. 108–126). Abingdon/Oxon: Routledge.
Clowes, R. W. (2020b). The internet extended person: Exoself or Doppelganger? Límite: Revista Interdisciplinaria de Filosofía y Psicología, 15(22).
Clowes, R. W., Torrance, S., & Chrisley, R. (2007). Machine Consciousness: Embodiment and Imagination (editorial introduction). Journal of Consciousness Studies, 14(7), 7–14.
Coeckelbergh, M. (2020). AI ethics. MIT Press.
Copenhaver, R., & Shields, C. (2019a). General introduction to history of the philosophy of mind, six volumes. In R. Copenhaver & C. Shields (Eds.), History of the philosophy of mind, Six Volumes. Routledge.
Copenhaver, R., & Shields, C. (2019b). History of the philosophy of mind, Six Volumes.
Corabi, J., & Schneider, S. (2012). Metaphysics of uploading. Journal of Consciousness Studies, 19(7–8), 26–44.
Corabi, J., & Schneider, S. (2014). If you upload, will you survive? In Intelligence unbound: The future of uploaded and machine minds (pp. 131–145).
Dehaene, S. (2009). Reading in the brain: The science and evolution of a human invention. Viking Pr.
Dennett, D. C. (1978). Artificial intelligence as philosophy and psychology. In Brainstorms. Montgomery, VT: Bradford Books.
Dennett, D. C. (1984). Cognitive wheels: The frame problem of AI. In C. Hookway (Ed.), Minds, machines and evolution (pp. 129–151). Cambridge: Cambridge University Press.
Dennett, D. C. (1991). Consciousness explained. Harmondsworth: Penguin Books.
Dennett, D. C. (1996a). Facing backwards on the problems of consciousness. Journal of Consciousness Studies, 3(1), 4–6.
Dennett, D. C. (1996b). Kinds of minds: Towards an understanding of consciousness. New York: Phoenix Books.
Ding, J. (2018). Deciphering China’s AI dream. Future of Humanity Institute Technical Report.
Dreyfus, H. L. (1972). What computers can’t do: A critique of artificial reason. New York: Harper.
Evans, J. S. B. (2010). Thinking twice: Two minds in one brain. New York: Oxford University Press.
Fauconnier, G., & Turner, M. (2002). The way we think: Conceptual blending and the mind’s hidden complexities. New York: Basic Books.
Floridi, L. (2014). The fourth revolution: How the infosphere is reshaping human reality. Oxford: Oxford University Press.
Floridi, L. (2015). The Onlife Manifesto: Being Human in a Hyperconnected Era. Cham/Heidelberg/New York/Dordrecht/London: Springer.
Floridi, L. (2020). AI and its new winter: From myths to realities. Philosophy & Technology, 1–3.
Floridi, L., & Chiriatti, M. (2020). GPT-3: Its nature, scope, limits, and consequences. Minds and Machines, 30(4), 681–694.
Fodor, J. A. (1975). The language of thought. Cambridge, MA: Harvard University Press.
Fodor, J. (2009). Where is my mind. London Review of Books, 31(3), 13–15.
Frankfurt, H. G. (1971). Freedom of the will and the concept of a person. The Journal of Philosophy, 68(1), 5–20.
Frankish, K. (2010). Dual-process and dual-system theories of reasoning. Philosophy Compass, 5(10), 914–926.
Freud, S. (1920). A general introduction to psychoanalysis. Createspace Independent Publishing Platform.
Friston, K. (2008). Hierarchical models in the brain. PLoS Computational Biology, 4(11), e1000211.
Fuller, S. (2011). Humanity 2.0: Foundations for 21st century social thought. London: Palgrave Macmillan.
Gallagher, S. (2001). The practice of mind. Journal of Consciousness Studies, 8(5–7), 83–108.
Gardner, H. (1985). The mind’s new science. New York: Basic Books.
Gerken, M. (2014). Outsourced cognition. Philosophical Issues, 24(1), 127–158.
Gibson, J. J. (1979). The ecological approach to visual perception. Houghton Mifflin.
Goertzel, B., & Pennachin, C. (2007). Artificial general intelligence (Vol. 2). Springer.
Greenfield, S. (2015). Mind change: How digital technologies are leaving their mark on our brains. Random House.
Gregory, R. L. (1981). Mind in science: A history of explanations in psychology. Cambridge: Cambridge University Press.
Heersmink, R. (2016). Distributed selves: Personal identity and extended memory systems. Synthese, 1–17.
Heersmink, R. (2020). Varieties of the extended self. Consciousness and Cognition, 85, 103001.
Hendler, J. (2008). Avoiding another AI winter. IEEE Intelligent Systems, (2), 2–4.
Hohwy, J. (2013). The predictive mind. Oxford University Press.
Holland, O. (2003). Editorial Introduction. Journal of Consciousness Studies, 10(4), 1–6.
Hutto, D. D. (2008). Folk psychological narratives: The sociocultural basis of understanding reasons. The MIT Press.
Ihde, D. (1990). Technology and the lifeworld: From garden to Earth. Indiana University Press.
Ihde, D., & Malafouris, L. (2019). Homo faber revisited: Postphenomenology and material engagement theory. Philosophy & Technology, 32(2), 195–214. https://doi.org/10.1007/s13347-018-0321-7.
Ito, J. (2018). Why westerners fear robots and the Japanese do not. Wired.
Jackson, F. (1982). Epiphenomenal Qualia. The Philosophical Quarterly, 32, 127–136.
Kahneman, D. (2011). Thinking, fast and slow. Macmillan.
Kharpal, A. (2017). Japan has no fear of AI — It could boost growth despite population decline, Abe says. cnbc.com. Retrieved from https://www.cnbc.com/2017/03/19/japan-has-no-fear-of-ai%2D%2Dit-could-boost-growth-despite-population-decline-abe-says.html
Kim, J. (2006). Philosophy of mind (2nd ed.). Cambridge, MA: Westview.
Kind, A. (2018). The mind-body problem in 20th-century philosophy. In A. Kind (Ed.), Philosophy of mind in the twentieth and twenty-first centuries (The history of the philosophy of mind, Vol. 6). London: Routledge.
Knappett, C., & Malafouris, L. (2008). Material and nonhuman agency: An introduction. In C. Knappett & L. Malafouris (Eds.), Material agency: Towards a non-anthropocentric approach (pp. ix–xix). New York: Springer.
Kohs, G. (Director). (2017). AlphaGo [Documentary film]. RO*CO Films.
Korsgaard, C. M. (2009). Self-constitution: Agency, identity, and integrity. Oxford: Oxford University Press.
Lakoff, G., & Johnson, M. (1999). Philosophy in the flesh. New York: Basic Books.
Lakoff, G., & Johnson, M. (2003 [1980]). Metaphors we live by. Chicago: University of Chicago Press.
Laland, K. N., Odling-Smee, J., & Feldman, M. W. (2000). Niche construction, biological evolution, and cultural change. Behavioral and Brain Sciences, 23, 131–175.
Lanier, J. (2010). You are not a gadget: A manifesto. London: Allen Lane.
Loh, K. K., & Kanai, R. (2015). How has the internet reshaped human cognition? The Neuroscientist, 1073858415595005.
Luria, A. R. (1976). Cognitive development: Its cultural and social foundations.
Malafouris, L. (2010a). Grasping the concept of number: How did the sapient mind move beyond approximation. In I. Morley & C. Renfrew (Eds.), The archaeology of measurement: Comprehending heaven, earth and time in ancient societies (pp. 35–42). Cambridge: Cambridge University Press.
Malafouris, L. (2010b). Metaplasticity and the human becoming: Principles of neuroarchaeology. Journal of Anthropological Sciences, 88(4), 49–72.
Malafouris, L. (2013). How things shape the mind: A theory of material engagement. Cambridge, MA: MIT Press.
Malafouris, L. (2016). On human becoming and incompleteness: A material engagement approach to the study of embodiment in evolution and culture. In Embodiment in evolution and culture (pp. 289–305).
Mazlish, B. (1993). The fourth discontinuity: The co-evolution of humans and machines. Yale University Press.
McGeer, V. (2001). Psycho-practice, psycho-theory and the contrastive case of autism. How practices of mind become second-nature. Journal of Consciousness Studies, 8(5–7), 109–132.
Menary, R. (2010). Cognitive integration and the extended mind. In R. Menary (Ed.), The extended mind (pp. 227–244). London: Bradford Book, MIT Press.
Menary, R. (2014). Neural plasticity, neuronal recycling and niche construction. Mind & Language, 29(3), 286–303.
Milkowski, M. (2013). Explaining the computational mind. MIT Press.
Minsky, M., & Papert, S. (1969). Perceptrons: An introduction to computational geometry. MIT Press.
Mithen, S. (1996). The prehistory of the mind. London: Thames & Hudson.
Moor, J. (2006). The Dartmouth College artificial intelligence conference: The next fifty years. AI Magazine, 27(4), 87–91.
Moravec, H. (1988). Mind children: The future of robot and human intelligence. Cambridge, MA: Harvard University Press.
Newell, A., & Simon, H. A. (1972). Human problem solving. Englewood Cliffs: Prentice-Hall.
Ney, A., & Albert, D. Z. (2013). The wave function: Essays on the metaphysics of quantum mechanics. Oxford University Press.
Norman, D. A. (1993). Things that make us smart (Defending human attributes in the age of the machine). Addison-Wesley.
Ong, W. J. (1982). Orality and literacy: The technologizing of the word. London: Methuen.
Piaget, J. (1954). The construction of reality in the child. New York: Basic.
Piccinini, G. (2010). The mind as neural software? Understanding functionalism, computationalism, and computational functionalism. Philosophy and Phenomenological Research, 81(2), 269–311.
Piccinini, G. (this volume). The myth of mind uploading.
Postman, N. (1993). Technopoly: The surrender of culture to technology. New York: Vintage.
Putnam, H. (1967). Psychological predicates. Art, mind, and religion, 1, 37–48.
Putnam, H. (1980). The nature of mental states. Readings in Philosophy of Psychology, 1, 223–231.
Rumelhart, D. E., & McClelland, J. L. (1986a). Parallel distributed processing: Exploring the microstructure of cognition (Vol. 1). Cambridge MA: MIT Press.
Rumelhart, D. E., & McClelland, J. L. (1986b). Parallel distributed processing: Exploring the microstructure of cognition (Vol. 2). Cambridge, MA: MIT Press.
Russell, B. (1927). The analysis of matter. London: Kegan Paul, Trench, Trubner & Co.
Russell, S. J. (2019). Human compatible: Artificial intelligence and the problem of control. New York: Viking.
Ryle, G. (1949). The concept of mind. London: Hutchinson.
Samuel, A. L. (1959). Some studies in machine learning using the game of checkers. IBM Journal of Research and Development, 3(3), 210–229.
Sapolsky, R. M. (1997). Junk food monkeys and other essays on the biology of the human predicament. London: Headline.
Schneider, S. (2009). Mindscan: Transcending and enhancing the human brain. In S. Schneider (Ed.), Science fiction and philosophy: From time travel to superintelligence (pp. 260–276). Hoboken: Wiley- Blackwell.
Schneider, S. (2011). The language of thought: A New philosophical direction. MIT Press.
Schneider, S. (2019). Artificial you: AI and the future of your mind. Princeton: Princeton University Press.
Schwitzgebel, E. (2019). Introspection. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Winter 2019 ed.).
Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417–457.
Shelley, M. W. (2018). Frankenstein: The 1818 text. Penguin.
Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., et al. (2017). Mastering the game of go without human knowledge. Nature, 550(7676), 354–359.
Smart, P. (2018). Emerging digital technologies: Implications for extended conceptions of cognition and knowledge. In J. A. Carter, A. Clark, J. Kallestrup, S. O. Palermos, & D. Pritchard (Eds.), Extended epistemology (pp. 266–304). Oxford: Oxford University Press.
Smart, P. R., Heersmink, R., & Clowes, R. W. (2017). The cognitive ecology of the internet. In S. J. Cowley & F. Vallée-Tourangeau (Eds.), Cognition beyond the brain (2nd ed., pp. 251–282). Springer.
Smart, P. R., Madaan, A., & Hall, W. (2018). Where the smart things are: Social machines and the internet of things. Phenomenology and the Cognitive Sciences, 1–25.
Smart, P., Chu, M.-C. M., O'Hara, K., Carr, L., & Hall, W. (2019). Geopolitical drivers of personal data: The four horsemen of the datapocalypse.
Somers, J. (2019). How the artificial-intelligence program AlphaZero mastered its games.
Sparrow, B., Liu, J., & Wegner, D. M. (2011). Google effects on memory: Cognitive consequences of having information at our fingertips. Science, 333(6043), 776–778.
Sterelny, K. (2011). From hominins to humans: How sapiens became behaviourally modern. Philosophical Transactions of the Royal Society B: Biological Sciences, 366(1566), 809–822.
Sutton, J. (2010). Exograms and interdisciplinarity: History, the extended mind, and the civilizing process. In R. Menary (Ed.), The extended mind (pp. 189–225). London: Bradford Book, MIT Press.
Taddeo, M., & Floridi, L. (2018). How AI can be a force for good. Science, 361(6404), 751–752.
Tallis, R. (2004). Why the mind is not a computer: A pocket lexicon of neuromythology (Vol. 13). Exeter: Imprint Academic.
Toffler, A. (1980). The third wave (Vol. 484). New York: Bantam Books.
Turing, A. M. (1937). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, s2-42(1), 230–265.
Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460.
Turkle, S. (2011). Alone together: Why we expect more from technology and less from each other. New York: Basic Books.
Varela, F. J., Thompson, E., & Rosch, E. (1991). The embodied mind. Cambridge, MA: MIT Press.
Velleman, J. D. (2009). The possibility of practical reason. Michigan Publishing, University of Michigan Library.
Vision, G. (2018). The provenance of consciousness. In E. Vitaliadis & C. Mekos (Eds.), Brute facts (pp. 155–176). Oxford: Oxford University Press.
Vygotsky, L. S. (1962). Thought and language (E. Hanfmann & G. Vakar, Trans.). Cambridge, MA: MIT Press.
Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.
Vygotsky, L. S., & Luria, A. R. (1994). Tool and symbol in child development. In R. Van Der Veer & J. Valsiner (Eds.), The Vygotsky reader. Cambridge, MA: Basil Blackwell.
Wegner, D. M., & Ward, A. F. (2013, December 1). The internet has become the external hard drive for our memories. Scientific American.
Weizenbaum, J. (1976). Computer power and human reason: From judgment to calculation. San Francisco: W. H. Freeman.
Wilkes, K. V. (1984). Pragmatics in science and theory in common sense. Inquiry, 27, 339–361.
Wootton, D. (2015). The invention of science: A new history of the scientific revolution. London: Penguin.
Zawidzki, T. W. (2013). Mindshaping: A new framework for understanding human social cognition. Cambridge, MA: MIT Press.
Acknowledgements
Robert W. Clowes’s work is supported by FCT, ‘Fundação para a Ciência e a Tecnologia, I.P.’ by the Stimulus of Scientific Employment grant (DL 57/2016/CP1453/CT0021) and personal grant (SFRH/BPD/70440/2010).
Klaus Gärtner’s work is supported by FCT, ‘Fundação para a Ciência e a Tecnologia, I.P.’ under the Stimulus of Scientific Employment (DL 57/2016/CP1479/CT0081) and by the Centro de Filosofia das Ciências da Universidade de Lisboa (UIDB/00678/2020).
This work is also supported by the FCT project “Emergence in the Natural Sciences: Towards a New Paradigm” (PTDC/FER-HFC/30665/2017).
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this chapter
Clowes, R.W., Gärtner, K., Hipólito, I. (2021). The Mind Technology Problem and the Deep History of Mind Design. In: Clowes, R.W., Gärtner, K., Hipólito, I. (eds) The Mind-Technology Problem. Studies in Brain and Mind, vol 18. Springer, Cham. https://doi.org/10.1007/978-3-030-72644-7_1
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-72643-0
Online ISBN: 978-3-030-72644-7
eBook Packages: Religion and Philosophy; Philosophy and Religion (R0)