
Your Brain Is Like a Computer: Function, Analogy, Simplification

Chapter in Neural Mechanisms

Part of the book series: Studies in Brain and Mind (SIBM, volume 17)

Abstract

The relationship between brain and computer is a perennial theme in theoretical neuroscience, but it has received relatively little attention in the philosophy of neuroscience. This paper argues that much of the popularity of brain-computer comparisons (e.g. circuit models of neurons and brain areas since McCulloch and Pitts, Bull Math Biophys 5: 115–33, 1943) can be explained by their utility as ways of simplifying the brain. More specifically, by justifying a sharp distinction between the aspects of neural anatomy and physiology that serve information-processing and those that are ‘mere metabolic support,’ the computational framework provides a means of abstracting away from the complexities of cellular neurobiology, as those details come to be classified as irrelevant to the (computational) functions of the system. I argue that the relation between brain and computer should be understood as one of analogy, and consider the implications of this interpretation for notions of multiple realisation. I suggest some limitations of our understanding of the brain and cognition that may stem from the radical abstraction imposed by the computational framework.


Notes

  1.

    See Morar (2015) on Leibniz’s invention of a mechanical calculator for the four arithmetical functions, and the history of reception of Leibniz’s contributions in this area.

  2.

    See Kline (2015) and Pickering (2010) for overviews of the cybernetic movement in the USA and UK, respectively.

  3.

    “up to that time [of results of Lettvin et al. (1959)], Walter had the belief that if you could master logic, and really master it, the world in fact would become more and more transparent. In some sense or another logic was literally the key to understanding the world. It was apparent to him after we had done the frog’s eye that even if logic played a part, it didn’t play the important or central part that one would have expected.” Lettvin, interviewed in Anderson and Rosenfeld (1998: 10)

  4.

    I do not mean to suggest that there is a uniform opinion amongst neuroscientists about the nature of neural information processing. Views on this have certainly diversified since McCulloch and Pitts.

  5.

    In this I am following the definition of analogies in the philosophy of science literature on analogical reasoning. As Dardashti, Thébault, and Winsberg (2017) put it, instances where an isomorphism obtains are a subset of all the cases of analogies in science, and they support stronger inferences than the other cases. See Knuuttila and Loettgers (2014: 87) for further discussion of why analogical reasoning in science goes beyond the isolation of structures that map from model to target.

  6.

    Note that this should not be confused with the issue of whether the dominant mode of explanation in neuroscience is mechanistic or computational. Those on the mechanist side of this debate, such as Kaplan (2011), acknowledge the importance of computationalism in theoretical neuroscience, and argue furthermore that computational models provide mechanistic explanations. Another point is that those promoting dynamical systems theory as a better theoretical framework than computationalism for some neural systems (e.g. Shenoy et al. 2013) do not dispute the dominance of computationalism in neuroscience as it stands.

  7.

    Haueis (2018) also discusses the distinction between cognitive and non-cognitive functions of the nervous system.

  8.

    Cao (2014) recommends going beyond the neuron doctrine to consider synapses and glia also as functional units of the nervous system. This raises the question of the technical feasibility of gathering synapse-resolution data on neural responses, and of attempting to model the brain in such a fine-grained way, noting that each cortical neuron receives, on average, tens of thousands of inputs (see the back-of-envelope sketch below). If the neuron doctrine provides a “good enough” framework for modelling the brain, especially useful for the activation patterns associated with observable behaviours (perception, learning, decision making) which involve large populations of neurons, then there is little reason to attempt the impossible and replace neurons with synapses as the fundamental signalling systems, even if one acknowledges that much information processing in the brain does occur within synapses. Below I take up the importance of these details that are relegated to the background in the classic neuro-computational picture.
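
    The back-of-envelope arithmetic, as a minimal sketch (the neuron count is an assumed order of magnitude for illustration, not a figure from the text):

```python
# Rough comparison of model sizes. The counts are illustrative
# assumptions: on the order of 1e10 cortical neurons, each receiving
# "tens of thousands" of synaptic inputs, as the note says. Moving from
# neuron-resolution to synapse-resolution modelling multiplies the
# number of state variables by roughly four orders of magnitude.
cortical_neurons = 1.6e10        # assumed order of magnitude
synapses_per_neuron = 1e4        # "tens of thousands of inputs"

neuron_level_units = cortical_neurons
synapse_level_units = cortical_neurons * synapses_per_neuron

print(f"neuron-level units:  {neuron_level_units:.1e}")   # ~1.6e+10
print(f"synapse-level units: {synapse_level_units:.1e}")  # ~1.6e+14
```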

  9.

    A point made vivid by Jonas and Kording (2017).

  10.

    Lettvin often uses this word in his characterisation of the ‘engineering-stance’ in neuroscience. It should not be confused with the notion of ‘process models’ in psychology, or other kinds of mechanistic models.

  11.

    Pickering (2010: 6) takes this methodology to be the standard practice of cybernetics in neuroscience, though many of the artificial devices were not computer programmes (a minimal feedback loop of this kind is sketched in code after the quotation):

    Just how did the cyberneticians attack the adaptive brain? The answer is, in the first instance, by building electromechanical devices that were themselves adaptive and which could thus be understood as perspicuous and suggestive models for understanding the brain itself. The simplest such model was the servomechanism—an engineering device that reacts to fluctuations in its environment in such a way as to cancel them out. A domestic thermostat is a servomechanism; so was the nineteenth-century steam-engine ‘governor’ which led Wiener to the word ‘cybernetics.’
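
    A minimal sketch of such a servomechanism, assuming nothing beyond the thermostat example in the quotation (the setpoint, gain, and noise values are arbitrary choices):

```python
# Toy thermostat: a negative-feedback loop in which the correction
# opposes the error, cancelling out random environmental fluctuations.
import random

def run_thermostat(setpoint=20.0, gain=0.5, steps=100, seed=0):
    random.seed(seed)
    temperature = 15.0
    for _ in range(steps):
        disturbance = random.uniform(-1.0, 1.0)  # environmental fluctuation
        error = setpoint - temperature           # deviation from target
        temperature += gain * error + disturbance
    return temperature

print(run_thermostat())  # hovers near the 20.0 setpoint despite the noise
```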

  12.

    I am alluding here to multiple realisation – a topic to be discussed directly in Sect. 11.3. But the point can still be made without supposing there are cases in which one would want to say that an artificial and a neural system are two different realisers of the same function. Consider just the comparison between a fairly abstract and a highly detailed model of a neural circuit, e.g. a model in which neurons are represented simply as time series of spike rates, and a ‘compartment model’ which represents some of the anatomical structure of the neuron (the two grains of model are sketched below). If the former is an equally good working model of the function of interest, then it is a reasonable working assumption that the behaviour of the neural system can be understood without reference to sub-cellular structure.
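
    A toy sketch of the two grains of model (both are schematic illustrations of my own, not taken from any particular study):

```python
import numpy as np

def rate_model(inputs):
    # Abstract model: the response is just a rectified sum of inputs.
    return np.maximum(0.0, inputs.sum(axis=-1))

def two_compartment_model(inputs, dt=1.0, tau_d=5.0, tau_s=10.0, coupling=0.3):
    # Crude compartment model: input charges a "dendrite" variable,
    # which leaks into a "soma" variable that sets the output rate.
    # Time constants and coupling are arbitrary illustrative values.
    dend, soma, rates = 0.0, 0.0, []
    for drive in inputs.sum(axis=-1):
        dend += dt * (-dend / tau_d + drive)
        soma += dt * (-soma / tau_s + coupling * dend)
        rates.append(max(0.0, soma))
    return np.array(rates)

inputs = np.ones((20, 5))                 # 20 time steps, 5 input channels
print(rate_model(inputs)[:3])             # identical response at every step
print(two_compartment_model(inputs)[:3])  # response ramps with internal dynamics
```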

  13.

    “A factor is constitutively relevant when (ideal) interventions on putative component parts can be used to change the explanandum phenomenon as a whole and, conversely, interventions on the explanandum phenomenon as a whole can produce changes in the component parts” (Craver and Kaplan 2018: 20).

  14.

    Craver and Kaplan (2018: p. 19 fn 16) appeal to the purely causal notion of “screening off” in order to address the question of why complete (ontic) explanations do not end in quarks. The idea is that “low-level differences” will be ignored if they “make no relevant difference once the higher-level behaviour is fixed.” I would like to point out that for the kind of abstractions I mention here, screening off should not be expected to occur – i.e. these excluded details do causally affect neuronal behaviour in ways that are not fully summarised by the “higher level” variables of net excitation and inhibition, because of non-linearities in the behaviour of the cell (the toy sketch below illustrates this). This suggests that a search for “relevant details” that proceeded only by the method of searching for “higher level” causal variables to replace “lower level” ones would not arrive at the abstractions found to be most useful in computational neuroscience.
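
    A toy example of the point (divisive or “shunting” inhibition is used here as one stand-in for the cell’s non-linearities; the numbers are arbitrary):

```python
def linear_response(exc, inh):
    # The higher-level summary: only net drive (excitation minus
    # inhibition) matters to the response.
    return max(0.0, exc - inh)

def shunting_response(exc, inh, k=1.0):
    # Divisive ("shunting") inhibition: inhibition scales the response
    # down rather than subtracting from it.
    return exc / (k + inh)

a = (10.0, 2.0)   # (excitation, inhibition): net drive = 8
b = (18.0, 10.0)  # different composition, same net drive = 8

# The higher-level variable does not fix the behaviour: same net drive,
# different responses under the non-linearity.
print(linear_response(*a), linear_response(*b))      # 8.0 8.0 (identical)
print(shunting_response(*a), shunting_response(*b))  # ~3.33 vs ~1.64 (differ)
```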

  15.

    There is latitude here in the abstracting assumptions. I have described a case where total inhibition is subtracted from total excitation, whereas McCulloch and Pitts (1943: 118) posit that inhibitory input at any one synapse will cancel out the effects of excitation. Both schemes are sketched below.
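
    The two abstracting assumptions, as a minimal sketch (the example values are arbitrary):

```python
def fires_subtractive(exc_inputs, inh_inputs, threshold):
    # Scheme described in the text: total inhibition is subtracted
    # from total excitation before thresholding.
    return sum(exc_inputs) - sum(inh_inputs) >= threshold

def fires_mcculloch_pitts(exc_inputs, inh_inputs, threshold):
    # McCulloch and Pitts (1943): activity at any single inhibitory
    # synapse absolutely vetoes the cell's firing.
    if any(inh_inputs):
        return False
    return sum(exc_inputs) >= threshold

exc, inh = [1, 1, 1], [1]   # three active excitatory synapses, one inhibitory
print(fires_subtractive(exc, inh, threshold=2))      # True: 3 - 1 >= 2
print(fires_mcculloch_pitts(exc, inh, threshold=2))  # False: vetoed
```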

  16.

    See also Knuuttila and Loettgers (2014: 79) on the contrast between physics- and engineering-based approaches within synthetic biology research. One might also be reminded of the so-called “design stance” (Dennett 1987).

  17.

    And I certainly am not claiming that the computational perspective should float free from experimentally derived facts regarding neural mechanisms. Theorising unconstrained by experimental results risks producing elegant models that do not apply to the actual brain.

  18.

    See Canguilhem (1965/2008) for many remarkable thoughts on the relationship between the mechanistic and finalistic perspectives on nature. The problematic idea that there is an exclusive rather than complementary relationship between mechanism and teleology is evident in the description by Craver and Tabery (2017) of mechanism as a self-contained “scientific worldview”:

    Some have held that natural phenomena should be understood teleologically. Others have been convinced that understanding the natural world is nothing more than being able to predict its behavior. Commitment to mechanism as a framework concept is commitment to something distinct from and, for many, exclusive of, these alternative conceptions. If this appears trivial, rather than a central achievement in the history of science, it is because the mechanistic perspective now so thoroughly dominates our scientific worldview.

  19.

    It is worth quoting Rosenblueth, Wiener and Bigelow (1943: 23) at length:

    Teleology has been interpreted in the past to imply purpose and the vague concept of a “final cause” has been often added. This concept of final causes has led to the opposition of teleology to determinism. A discussion of causality, determinism and final causes is beyond the scope of this essay. It may be pointed out, however, that purposefulness, as defined here, is quite independent of causality, initial or final. Teleology has been discredited chiefly because it was defined to imply a cause subsequent in time to a given effect. When this aspect of teleology was dismissed, however, the associated recognition of the importance of purpose was also unfortunately discarded. Since we consider purposefulness a concept necessary for the understanding of certain modes of behavior we suggest that a teleological study is useful if it avoids problems of causality and concerns itself merely with an investigation of purpose.

    Note also that Francis Walshe, quoted above on the complementary relationship between the physicist’s and engineer’s stances in neuroscience, was quite critical of Rosenblueth et al.’s paper, highlighting the mismatch between the operation of feedback in the cerebellum and in the artificial system, which, he argues, means the literal interpretation of the cybernetic model is not warranted (Walshe 1951). See also Mayr (1988: 46) for the argument that control via negative feedback is not sufficient to capture the range of behaviours described as teleological, pace Rosenblueth et al.

  20.

    As Canguilhem (1963: 510) describes, “texts, taken from Quesnay, Vaucanson and Le Cat, do not indeed leave any doubt that their common plan was to use the resources of automatism as a dodge, or as a trick with theoretical intent, in order to elucidate the mechanism of physiological functions by the reduction of the unknown to the known, and by complete reproduction of analogous effects in an experimentally intelligible manner.”

  21.

    A potential misinterpretation of Sect. 11.2 may push one towards the literal interpretation. If one thinks that the brain – like a digital computer designed to be indifferent to e.g. variation in magnetic grains in a hard drive – is a device that “ignores its own complexity”, then an abstract computational description of the system can be literally true of the brain, just as it is of the machine. However, the point of Sect. 11.2 is to explain how and why neuroscientists ignore the complexity of the brain, leaving it a live possibility that those details do matter to cognition in animals (see Sect. 11.3.3).

  22.

    This strong view is best exemplified in the work of researchers at the interface between neuroscience and the deep learning style of AI, such as Hassabis et al. (2017) and Yamins and DiCarlo (2016). It subscribes to the computational theory of mind much discussed in the philosophy of mind, psychology, and cognitive science. In this paper I do not say anything directly about the interpretation of computational models in branches of cognitive science other than neuroscience. However, there are certainly implications to the extent that my account causes trouble for the computational theory of mind.

  23.

    Another tenet of functionalism is the classic account of multiple-realisation which gives the abstract computational “level” of neuro-modelling a robust ontological interpretation. Elsewhere I call this approach MR 1.0 and argue that it be replaced with an ontologically modest view, MR 2.0, which treats the computational as a level of explanation rather than a level of being (Chirimuuta 2018b). MR 2.0 is consistent with the analogical interpretation of computational models offered below (Sect. 11.3.2); indeed, the analogical interpretation is intended to be an elaboration of some of the ideas presented in my earlier paper.

  24.

    We might also consider here the bivalence of the word “function”, which has both a mathematical and a biological sense (Longuenesse 2005: 93). Interestingly, the two meanings coincide in formal realism, where the function is at once the mathematical operation computed by the neurons, and the biological purpose of this activity. Note that because the relevant forms in computational neuroscience are mathematical ones, formal realism here has a Platonic as well as an Aristotelian feel: the underlying order of the brain is a mathematical one. Elsewhere I say more about the Platonic dimension (Chirimuuta 2020).

  25.

    Examples of formal realism in philosophy are Egan (2017), Shagrir (2010) and Shagrir (2018).

  26.

    The analogical interpretation should be understood in the specific sense described here, not to be confused with the “analog-model” account of the brain (Shagrir 2010), which I classify as a formal realism. The reader here may be reminded of the philosophical discussion, responding to Putnam (1988), over whether the computational mappings of state transitions are arbitrarily up to human observers, or constrained by the causal structure of the implementing system. The important difference with my discussion is that it is centred on scientific practice. While no geologist has claimed that their lumps of rock implement finite-state automata, many neuroscientists claim to have discovered functions implemented in the brain. Thus I am starting with the claim of formal realism as it has been put forward from the science, and my alternative to it is shaped by considerations of modelling practice within the science.

  27.

    Kant (1929: B519, note a) gives “formal idealism” as a gloss for “transcendental idealism”. The former term draws attention to the point that the idealism in Kant’s philosophy is restricted to the way that our knowledge of nature is formed or structured by our cognitive capacities rather than a structure pre-given in things-in-themselves.

  28.

    See also Chirimuuta (2020) for an argument against formal realism, based on the existence of empirically adequate but incompatible mathematical models of certain brain areas.

  29.

    For a more lengthy discussion of this research and the explanations it affords see Chirimuuta (2018a).

  30.

    Marr and Ullman (1981); Marr (1982: 54–65).

  31.

    Marr (1982: 64) makes the stronger (though hedged) claim that these neurons are computing the function:

    “it is not too unreasonable to propose that the ∇2G function is what is carried by the X cells of the retina and lateral geniculate body, positive values being carried by the on-center X cells, and negative values by the off-center X cells.”

    This amounts to a formal realism, so I do not claim that my weaker interpretation of the case is one Marr himself proposed – see Egan (2017) and Shagrir (2010) for discussions of this example which instead endorse the literal interpretation (the ∇2G operator itself is sketched in code below). That said, I do think Marr can be read as making the abstractive inference. A short biographical note: I first heard of this example during an undergraduate lecture by the late and much missed Tom Troscianko. Intrigued by the idea that the retina does calculus, I decided to do my final year research project with him, and then went on to do graduate research with one of his collaborators. I am still wondering….
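
    A minimal sketch of the ∇2G (“Mexican hat”) operator applied to a step edge (kernel size and σ are arbitrary choices; the on-/off-centre gloss of the signs follows the quotation above):

```python
import numpy as np
from scipy.signal import convolve2d

def laplacian_of_gaussian(size=9, sigma=1.4):
    # Closed form for the Laplacian of a 2-D Gaussian, up to a
    # normalisation constant: negative centre, positive surround.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    return (r2 - 2 * sigma ** 2) / sigma ** 4 * np.exp(-r2 / (2 * sigma ** 2))

image = np.zeros((32, 32))
image[:, 16:] = 1.0                      # a vertical step edge

response = convolve2d(image, laplacian_of_gaussian(), mode="same")

# The filtered edge shows a sign change (a zero-crossing) at the
# boundary; on Marr's proposal, the positive and negative values would
# be carried by on-centre and off-centre X cells respectively.
print(response[16, 13:20].round(3))
```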

  32.

    NB – the inference is not that the differences in implementation are irrelevant tout court, but that they can reasonably be ignored for this kind of investigation of this particular capacity.

  33.

    One might be reminded of Kripke’s plus/quus argument that any finite series of observations of a natural system can in principle be modelled by quite different mathematical functions. (I thank Brian McLaughlin for this observation.) However, my argument should really be taken as one grounded in the concerns of scientific practice, where Kripke’s in-principle alternative models would be ruled out on pragmatic grounds, since they add mathematical complexity without improving the fit to the dataset. My point is, in essence, that as a consequence of the complexity of neural events, and thus of the datasets gleaned from them, the determination of signal versus noise is not unambiguous, and for that reason the datasets afford numerous plausible mathematical descriptions (see the toy demonstration below). Marr treated the asymmetry in the responses as noise and left it out of his model; another scientist would have been equally justified in treating it as signal, a feature to be included in the model.
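
    The toy demonstration (an illustration of my own construction: a small “wiggle” in the data can be classified either as noise or as signal, and the resulting models diverge under extrapolation):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
wiggle = np.array([0.0, 0.3, -0.3, 0.3, -0.3])
y = 2.0 * x + 1.0 + wiggle          # observations with a small "wiggle"

linear = np.polyfit(x, y, deg=1)    # wiggle classified as noise
cubic = np.polyfit(x, y, deg=3)     # wiggle classified as signal

# Both describe the observed data tolerably well...
print(np.abs(np.polyval(linear, x) - y).max())
print(np.abs(np.polyval(cubic, x) - y).max())
# ...but they disagree once we extrapolate beyond it.
print(np.polyval(linear, 6.0), np.polyval(cubic, 6.0))
```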

  34.

    “Despite their great degree of mathematical complexity, it does not appear that cybernetic models are always safe from this accident. The magical aspect of simulation is strongly resistant to the exorcism of science.” Canguilhem (1963: 515); Cf. Dreyfus (1972: 79–80).

  35.

    Quoted approvingly by Canguilhem (1963: 516).

  36.

    Of course other disanalogies are most likely relevant here, such as the “noisiness” of neural components in comparison with electronic ones. There is also the embodiment of organic intelligence, whereas most expert systems are disembodied and incapable of acting in the physical world. But note that embodied AI systems (e.g. autonomous cars) have also proved to be limited in their operation outside of controlled conditions, suggesting that embodiment by itself does not overcome the obstacles to creating a general AI.

  37.

    As Smith (2011: 100) relates, “the animal body is not a ‘mere’ machine but a special kind of machine, a ‘more exquisite’ or ‘more divine’ machine, … This is the machine of nature, or the organic body, whose exquisiteness resides in the fact that it remains a machine in its least parts, which is to say that there is no stage in its decomposition at which one arrives at nonmachinic components.”

References

  • Adrian, E. D. (1954). Address of the President Dr E. D. Adrian, O.M., at the anniversary meeting, 30 November 1953. Proceedings of the Royal Society of London B, 142, 1–9.
  • Aizawa, K. (2018). Multiple realization and multiple “ways” of realization: A progress report. Studies in History and Philosophy of Science, 68, 3–9.
  • Allen, C., Bekoff, M., & Lauder, G. (Eds.). (1998). Nature’s purposes: Analyses of function and design in biology. Cambridge, MA: MIT Press.
  • Anderson, J. A., & Rosenfeld, E. (Eds.). (1998). Talking nets: An oral history of neural networks. Cambridge, MA: MIT Press.
  • Arbib, M. A. (2016). Afterword: Warren McCulloch’s search for the logic of the nervous system. In W. S. McCulloch (Ed.), Embodiments of mind. Cambridge, MA: MIT Press.
  • Bartha, P. (2016). Analogy and analogical reasoning. In The Stanford Encyclopedia of Philosophy.
  • Bullock, T. H., Bennett, M. V. L., Johnston, D., Josephson, R., Marder, E., & Field, R. D. (2005). The neuron doctrine, redux. Science, 310, 791–793.
  • Burnyeat, M. F. (1992). Is an Aristotelian philosophy of mind still credible? (A draft). In M. C. Nussbaum & A. O. Rorty (Eds.), Essays on Aristotle’s De anima. Oxford: Oxford University Press.
  • Canguilhem, G. (1963). The role of analogies and models in biological discovery. In A. C. Crombie (Ed.), Scientific change. New York: Basic Books.
  • Canguilhem, G. (1965/2008). Machine and organism. In P. Marrati & T. Meyers (Eds.), Knowledge of life. New York: Fordham University Press.
  • Cao, R. (2014). Signaling in the brain: In search of functional units. Philosophy of Science, 81, 891–901.
  • Cassirer, E. (1950). The problem of knowledge: Philosophy, science, and history since Hegel. New Haven: Yale University Press.
  • Chirimuuta, M. (2017). Crash testing an engineering framework in neuroscience: Does the idea of robustness break down? Philosophy of Science, 84, 1140–1151.
  • Chirimuuta, M. (2018a). Explanation in computational neuroscience: Causal and non-causal. British Journal for the Philosophy of Science, 69, 849–880.
  • Chirimuuta, M. (2018b). Marr, Mayr, and MR: What functionalism should now be about. Philosophical Psychology, 31, 403–418.
  • Chirimuuta, M. (2020). Charting the Heraclitean brain: Perspectivism and simplification in models of the motor cortex. In M. Massimi & C. McCoy (Eds.), Understanding perspectivism: Scientific challenges and methodological prospects. New York: Routledge.
  • Craver, C. F., & Darden, L. (2013). In search of mechanisms. Chicago: University of Chicago Press.
  • Craver, C. F., & Kaplan, D. M. (2018). Are more details better? On the norms of completeness for mechanistic explanations. British Journal for the Philosophy of Science, 71, 287–319.
  • Craver, C. F., & Tabery, J. (2017). Mechanisms in science. In The Stanford Encyclopedia of Philosophy.
  • Dardashti, R., Thébault, K. P. Y., & Winsberg, E. (2017). Confirmation via analogue simulation: What dumb holes could tell us about gravity. British Journal for the Philosophy of Science, 68, 55–89.
  • Daugman, J. G. (2001). Brain metaphor and brain theory. In W. Bechtel, P. Mandik, J. Mundale, & R. S. Stufflebeam (Eds.), Philosophy and the neurosciences: A reader. Oxford: Blackwell.
  • Davis, M. (2000). The universal computer: The road from Leibniz to Turing. New York: W. W. Norton & Company.
  • Dennett, D. C. (1987). The intentional stance. Cambridge, MA: MIT Press.
  • Dreher, B., & Sanderson, K. J. (1973). Receptive field analysis: Responses to moving visual contours by single lateral geniculate neurones in the cat. The Journal of Physiology, 234, 95–118.
  • Dreyfus, H. L. (1972). What computers can’t do: A critique of artificial reason. New York: Harper & Row.
  • Egan, F. (2017). Function-theoretic explanation and the search for neural mechanisms. In D. M. Kaplan (Ed.), Explanation and integration in mind and brain science. Oxford: Oxford University Press.
  • Fairhall, A. (2014). The receptive field is dead. Long live the receptive field? Current Opinion in Neurobiology, 25, ix–xii.
  • Frégnac, Y. (2017). Big data and the industrialization of neuroscience: A safe roadmap for understanding the brain? Science, 358, 470–477.
  • Godfrey-Smith, P. (2016). Mind, matter, and metabolism. Journal of Philosophy, 113, 481–506.
  • Goldstein, K. (1934/1939). The organism: A holistic approach to biology derived from pathological data in man. New York: American Book Company.
  • Grant, S. G. N. (2018). Synapse molecular complexity and the plasticity behaviour problem. Brain and Neuroscience Advances, 2, 1–7.
  • Hassabis, D., Kumaran, D., Summerfield, C., & Botvinick, M. (2017). Neuroscience-inspired artificial intelligence. Neuron, 95, 245–258.
  • Haueis, P. (2018). Beyond cognitive myopia: A patchwork approach to the concept of neural function. Synthese, 195, 5373–5402.
  • Hesse, M. B. (1966). Models and analogies in science. Notre Dame, IN: University of Notre Dame Press.
  • Jonas, E., & Kording, K. (2017). Could a neuroscientist understand a microprocessor? PLoS Computational Biology, 13, e1005268.
  • Kant, I. (1929). The critique of pure reason. Basingstoke: Palgrave.
  • Kaplan, D. M. (2011). Explanation and description in computational neuroscience. Synthese, 183, 339–373.
  • Kline, R. R. (2015). The cybernetics moment: Or why we call our age the information age. Baltimore, MD: Johns Hopkins University Press.
  • Knuuttila, T., & Loettgers, A. (2014). Varieties of noise: Analogical reasoning in synthetic biology. Studies in History and Philosophy of Science, 48, 76–88.
  • Lake, B. M., Ullman, T. D., Tenenbaum, J. B., & Gershman, S. J. (2017). Building machines that learn and think like people. Behavioral and Brain Sciences, 40, 1–72.
  • Lettvin, J. (2016). Foreword to the 1988 reissue. In W. S. McCulloch (Ed.), Embodiments of mind. Cambridge, MA: MIT Press.
  • Lettvin, J. Y., Maturana, H. R., McCulloch, W. S., & Pitts, W. H. (1959). What the frog’s eye tells the frog’s brain. Proceedings of the IRE, 47, 1940–1959.
  • Longuenesse, B. (2005). Kant on the human standpoint. Cambridge: Cambridge University Press.
  • Mante, V., Sussillo, D., Shenoy, K. V., & Newsome, W. T. (2013). Context-dependent computation by recurrent dynamics in prefrontal cortex. Nature, 503, 78–84.
  • Marcus, G. (2015). The computational brain. In G. Marcus & J. Freeman (Eds.), The future of the brain. Princeton: Princeton University Press.
  • Marr, D. (1982). Vision: A computational investigation into the human representation and processing of visual information. San Francisco: W. H. Freeman.
  • Marr, D., & Ullman, S. (1981). Directional selectivity and its use in early visual processing. Proceedings of the Royal Society of London B, 211, 151–180.
  • Mayr, E. (1988). The multiple meanings of teleological. In E. Mayr (Ed.), Toward a new philosophy of biology. Cambridge, MA: Belknap Press of Harvard University Press.
  • McCulloch, W. S., & Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5, 115–133.
  • Miłkowski, M. (2018). From computer metaphor to computational modeling: The evolution of computationalism. Minds and Machines, 28, 515–541.
  • Morar, F.-S. (2015). Reinventing machines: The transmission history of the Leibniz calculator. The British Journal for the History of Science, 48, 123–146.
  • Morrison, M. (2011). One phenomenon, many models: Inconsistency and complementarity. Studies in History and Philosophy of Science, 42, 342–351.
  • Nussbaum, M. C., & Putnam, H. (1992). Changing Aristotle’s mind. In M. C. Nussbaum & A. O. Rorty (Eds.), Essays on Aristotle’s De anima. Oxford: Oxford University Press.
  • Papert, S. (2016). Introduction. In W. S. McCulloch (Ed.), Embodiments of mind. Cambridge, MA: MIT Press.
  • Piccinini, G. (2004). The first computational theory of mind and brain: A close look at McCulloch and Pitts’s “Logical calculus of ideas immanent in nervous activity”. Synthese, 141, 175–215.
  • Pickering, A. (2010). The cybernetic brain: Sketches of another future. Chicago: University of Chicago Press.
  • Polger, T. W., & Shapiro, L. A. (2016). The multiple realization book. Oxford: Oxford University Press.
  • Putnam, H. (1988). Representation and reality. Cambridge, MA: MIT Press.
  • Ritchie, J. B., & Piccinini, G. (2018). Computational implementation. In M. Sprevak & M. Colombo (Eds.), The Routledge handbook of the computational mind. London: Routledge.
  • Rodieck, R. W., & Stone, J. (1965). Response of cat retinal ganglion cells to moving visual patterns. Journal of Neurophysiology, 28, 819–832.
  • Rosenblueth, A., Wiener, N., & Bigelow, J. (1943). Behavior, purpose and teleology. Philosophy of Science, 10, 18–24.
  • Seidengart, J. (2012). Cassirer, reader, publisher, and interpreter of Leibniz’s philosophy. In R. Krömer & Y. Chin-Drian (Eds.), New essays on Leibniz reception: In science and philosophy of science 1800–2000. Basel: Springer.
  • Shagrir, O. (2010). Brains as analog-model computers. Studies in History and Philosophy of Science, 41, 271–279.
  • Shagrir, O. (2018). The brain as an input–output model of the world. Minds and Machines, 28, 53–75.
  • Shenoy, K. V., Sahani, M., & Churchland, M. M. (2013). Cortical control of arm movements: A dynamical systems perspective. Annual Review of Neuroscience, 36, 337–359.
  • Smith, J. E. H. (2011). Divine machines: Leibniz and the sciences of life. Princeton: Princeton University Press.
  • Sprevak, M. (2018). Triviality arguments about computational implementation. In M. Sprevak & M. Colombo (Eds.), The Routledge handbook of the computational mind. London: Routledge.
  • Sterling, P., & Laughlin, S. (2015). Principles of neural design. Cambridge, MA: MIT Press.
  • Walshe, F. M. R. (1951). The hypothesis of cybernetics. British Journal for the Philosophy of Science, 2, 161–163.
  • Walshe, F. M. R. (1961). Contributions of John Hughlings Jackson to neurology. Archives of Neurology, 5, 119–131.
  • Yamins, D. L. K., & DiCarlo, J. J. (2016). Using goal-driven deep learning models to understand sensory cortex. Nature Neuroscience, 19, 356–365.


Acknowledgments

I am most grateful to audiences at the Ludwig Maximilian University (Workshop on Analogical Reasoning in Science), Rutgers University (Center for Cognitive Science Colloquium), University of Edinburgh, and the 2019 Workshop on the Philosophy of Mind and Cognitive Science (Valparaíso, Chile) for very thoughtful discussions of this paper. Furthermore, I owe much to comments from Cameron Buckner, Philipp Haueis, Brendan Ritchie, Bill Wimsatt and an anonymous referee.

Author information

Correspondence to Mazviita Chirimuuta.


Copyright information

© 2021 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Chirimuuta, M. (2021). Your Brain Is Like a Computer: Function, Analogy, Simplification. In: Calzavarini, F., Viola, M. (eds) Neural Mechanisms. Studies in Brain and Mind, vol 17. Springer, Cham. https://doi.org/10.1007/978-3-030-54092-0_11
