2.1 Origin of Space and Neurogeometry

2.1.1 Geometric, Physical, and Sensorimotor Conceptions of Space

The origin and status of spatial representations is a long-standing question that has been much discussed in the history and philosophy of science. It has been approached from several different angles up to now, including those of mathematics, physics, physiology, and psychology.

  1. From the mathematical point of view, starting out with the basic reference provided by Euclidean geometry, the concept of space has been gradually generalized: non-Euclidean geometries and geometries specified by their transformation groups, Riemannian geometry, differential geometry with Cartan connections, and others.

  2. From the physical point of view, starting with the idea that space and time form an a priori background structure for physical phenomena, a physical genesis of space has been gradually built up: from general relativity, based on Riemannian geometry, and non-AbelianFootnote 1 gauge theories in quantum field theory, which are based on Cartan geometry, to the admirable synthesis of quantum physics and geometry developed by Alain Connes under the name of ‘non-commutative geometry’.

  3. As regards perceptual space or ‘perceived space’, from the profound insights of physiologists like Helmholtz, geometers like Poincaré, phenomenologists like Husserl, and psychologists like the proponents of Gestalt theory (Stumpf, Klüver, Kanizsa, etc.), up to contemporary studies on the physiology of perception and action, we have deepened our understanding of the way the classic Euclidean space derives from our sensorimotor relationship with the environment, where solid objects play a fundamental role.

All these developments have been either directly mathematical or else based on considerable progress in mathematics. This is obvious for level (1) above, since these are simply the most impressive developments of geometry, which followed on so quickly from Gauss to Riemann and Poincaré, then Weyl and Cartan, and today Alain Connes.Footnote 2

The link with mathematics is no less obvious for level (2), which concerns physical space. Here, it is worth emphasizing the way the astonishing progress in the formalisms of fundamental physics can be considered as a ‘geometrization of physics’. This is not the place to go further into this vast subject. Let us just say that the geometrization process consists in identifying more and more geometrical structures and symmetry groups of physical theories in such a way as to understand the whole complexity and diversity of observed physical phenomena in an ever more synthetic way. This is indeed the main process of mathematization since, on the one hand, it ‘reduces’ more and more physical phenomena to a priori geometrical statements, while on the other hand, it ‘unfolds’ these a priori statements in a profusion of different models, exploiting to the full the characteristic ‘generativity’ of mathematics. In the words of Jean-Marie Souriau, one of the founders of geometric quantization, who made this point so clearly [7]:

Philosophically, [geometrization] means reducing physics to geometric symmetries in order to do a priori [i.e., ‘rational’] physics.

In other words, as Souriau puts it:

There is nothing more in physical theory than symmetry groups, except the mathematical construction which allows us to show that there is nothing more.

Regarding this point, the interested reader may consult [8] and [6] and the references therein.

Regarding level (3), the question of perceptual space, the connection with fundamental mathematical structures is less obvious, and it is precisely here that the present book aims to bring new insights. In fact, as we know, perception and motricity are tightly linked from a functional point of view, and one of the main ideas developed from Helmholtz to Poincaré was to relate the geometry of external space to our sensorimotor relationship with solid objects in our environment.

Consider for example the way Hermann von Helmholtz responded to Bernhard Riemann’s famous Habilitationsvortrag in 1854, viz. Über die Hypothesen welche der Geometrie zu Grunde liegen [9], by his no less famous reply [10] Über die Tatsachen, die der Geometrie zu Grunde liegen.Footnote 3 Helmholtz suggested reducing the problem of perceptual space to a system of axioms specifying, not Riemann’s infinitesimal metric elements, but rather the transformations of space which are observed experimentally to be the congruences (of free motions) between rigid bodies. An elegant examination of this so-called Riemann–Helmholtz problem, concerning the origin of the geometry of external space, can be found in Joël Merker’s Le problème de Riemann–Helmholtz–Lie [11].

There are four axioms, and the first three are rather natural:

  1. The points of the space E can be represented by the values of three coordinates, in such a way that transformations correspond to (smooth) variations of these coordinates. (For Euclidean \(\mathbb {R}^{3}\), this gives the six-dimensional group of transformations comprising the three translational degrees of freedom and the three rotational degrees of freedom.)

  2. There exists a function \(f(a,b)\) defined on \(E\times E\) which is invariant under all transformations.

  3. Any point in E can be carried to any other point of E by a transformation (transitivity).

The fourth axiom, called the monodromy axiom, is much less obvious:

  4. If we choose two points a and b in a rigid body, then there is one remaining degree of freedom (rotations with axis ab in the case of Euclidean \(\mathbb {R}^{3}\)), and such a ‘rotation’ must move all the points and bring the body point by point back onto itself after one complete turn. (A minimal numerical illustration of the Euclidean case of these axioms is sketched below.)
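To make the Euclidean case of these axioms concrete, here is a minimal numerical sketch (our own illustration, assuming only NumPy; it is not part of the Helmholtz–Lie treatment): the transformations are the rigid motions of \(\mathbb {R}^{3}\), the invariant function \(f(a,b)\) of axiom 2 is the Euclidean distance, and transitivity (axiom 3) is witnessed by a translation carrying a to b.

```python
# Minimal sketch of the Euclidean case of the Helmholtz axioms (assuming NumPy).
# The six-parameter group of rigid motions acts on R^3; f(a, b) = |a - b| is
# invariant (axiom 2) and the action is transitive (axiom 3).
import numpy as np

def random_rotation(rng):
    """A random proper rotation matrix, via QR decomposition of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    q = q * np.sign(np.diag(r))        # make the factorization unique
    if np.linalg.det(q) < 0:           # flip one axis if we landed on a reflection
        q[:, 0] = -q[:, 0]
    return q

rng = np.random.default_rng(0)
R, t = random_rotation(rng), rng.standard_normal(3)     # a rigid motion g = (R, t)
g = lambda x: R @ x + t

a, b = rng.standard_normal(3), rng.standard_normal(3)
f = lambda x, y: np.linalg.norm(x - y)

# Axiom 2: f is invariant under the transformation g.
assert np.isclose(f(a, b), f(g(a), g(b)))

# Axiom 3 (transitivity): the translation x -> x + (b - a) carries a to b.
assert np.allclose(a + (b - a), b)
print("invariance and transitivity hold for this sample")
```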

In volume III of their famous treatise Theorie der Transformationsgruppen [12], Sophus Lie and his disciple Friedrich Engel spelt out the above axioms, rectifying certain errors made by Helmholtz and classifying all the solutions, noting that Euclidean geometry was only one solution among others (see [13]). To do this, they used the theory of the groups and algebras now known by the name of Lie groups and Lie algebras, something we shall make constant use of throughout this book.

Regarding Henri Poincaré, some of his basic ideas about physical space and perceived space are discussed in Chaps. 4 and 5 of Science and Hypothesis [14], entitled Space and Geometry and Experience and Geometry, respectively. The principle of geometric conventionalism asserts that the geometry we apply in physics is conventional, i.e. neither true nor false, that its axioms are neither experimental (criticism of empiricism), nor synthetic a priori (criticism of the narrow idealist interpretation of Kantian apriorism), and that the same factual physical contents can be described within alternative geometrical frameworks. As a convention, a geometry provides a language for description and does not possess any experimental or empirical truth in itself. By introducing the thesis that the group concept is an a priori feature of our understanding that thus ‘pre-exists in our minds, at least potentially’, Poincaré was putting forward a version of the transcendental ideality of space which is compatible with the existence of several different geometries and hence with the progress made in theoretical physics. Let us recall here the conclusion of Space and Geometry:

The object of geometry is the study of a particular ‘group’; but the general concept of group pre-exists in our minds, at least potentially. It is imposed on us not as a form of our sensibility, but as a form of our understanding; only, from among all possible groups, we must choose one that will be the standard, so to speak, to which we shall refer natural phenomena. Experience guides us in this choice, which it does not impose on us. It tells us not what is the truest, but what is the most convenient geometry.Footnote 4

Poincaré expands on this idea in Experience and Geometry, where he explains that the principles of geometry are not experimental facts. A given physical fact can always be expressed by changing the convention represented by the geometrical framework and changing the laws of physics; e.g., one can keep Euclidean geometry but reject the principle that light rays follow geodesics. This point of view was already anticipated by Clifford: there is an equivalence between (1) physical causes of changes in a space thought of a priori as flat and (2) a non-trivial (curved) space geometry. Physical experiments are always carried out on bodies, never on space. Therefore, they cannot help us to decide upon the geometry.

Regarding perceived space, Poincaré considered that its geometry must come essentially from our fundamental sensorimotor experience of the motions of solid bodies (see, in particular, Science and Method [15]). This constitutes our notion of space and, by distinguishing between proprioceptive internal changes and external changes that may balance them, leads to the aprioricity of the group concept and to the idea that geometry is conventional.Footnote 5

2.1.2 The Neurogeometric Approach

With this in mind, it should be said that there is (at least) a fourth way to inquire as to the origin of spatial representations. Until recently, it had only been the subject of a few bold and generally incorrect speculations, due to the lack of experimental evidence. This fourth way concerns the highly complex neurophysiological processes through which the geometrical structures of the external space are constituted as a result of the internal activities of our brains.

This can be tackled by considering at least two main lines of approach:

  • Sensorimotor and locomotor positioning and navigation of an organism moving through space. For example, the now classic book The Hippocampus as a Cognitive Map (1978) by John O’KeefeFootnote 6 and Lynn Nadel [17] has a long first chapter with a historical slant relating this new work on navigation to the philosophies of space propounded by philosophers and mathematicians such as Leibniz, Euler, Kant, Helmholtz, and Poincaré. Similar discussions can be found in the many works of neurophysiologists of perception and action like Alain Berthoz.

  • The geometrical structuring of visual images. This will be our main subject in the present book.

We coined the term ‘neurogeometry’ to refer to this neural origin of perceived space. The aim in the present work is to take a first step in this direction, something made possible by the huge amount of new and fascinating experimental results now available thanks to new imaging techniques. As long as the brain remained, from an experimental point of view, a ‘black box’, there was no way of developing such an approach. What made this possible was thus that the brain became, at least to some extent, a ‘transparent box’.

Brain imaging techniques are here the equivalent of the new observational methods that are always found to underlie any scientific revolution. We shall show that their results can be modelled using sophisticated mathematics that corresponds in the deepest possible ways to mathematics already invented by certain outstanding geometers like those already mentioned, and in particular Lie and Cartan, when they set out to understand mathematically how the geometry of the external world (Euclidean or otherwise) could come about. We would thus like to insert a new page in the age-old story of the foundations of geometry. There will be two main aims:

  • To provide models for a whole new set of neurophysiological data.

  • To fit these models into modern developments in the foundations of geometry.

The analogy with the history of the theories of physics could be illuminating here. Just as the modern theories of fundamental physics (general relativity, gauge theories, Higgs field, etc.) have led to ever further geometrization of empirical physical phenomena which, in its turn, provides a better understanding of the physical genesis of space, so neurogeometry consists in a geometrization of empirical neural phenomena which, in its turn, provides a better understanding of the neural genesis of space. Our whole investigation will be based on this ‘dialectic’ between the geometrization of internal neural dynamics and the neural foundations of external geometry.

2.2 Perceptual Geometry, Neurogeometry, and Gestalt Geometry

Let us begin by giving a few points of reference and some clarifications:

  1. Following on from the great geometers, phenomenologists, and psychologists who have turned their attention to our perception of forms, as discussed above, a certain number of eminent scholars have recently made considerable contributions to the geometry of visual perception. We may mention René Thom, who developed the first general dynamical theory of shapes, Jan Koenderink, who applied Thom’s theory of singularities to visual neurophysiology, the heirs to the Gestalt psychologists, and in particular, Gaetano Kanizsa, to whom we shall return at length, David Marr, who, at the end of the 1970s, brought a host of new insights into the problem of vision, and David Mumford (Fields medallist like René Thom), who completely revolutionized the area. When we talk about neurogeometry here, what we shall aim for is the neural implementation of the algorithms of this geometry, the problem being to understand how perceptual ‘macrostructures’ and their morphodynamics can emerge from the underlying neural ‘microlevel’.

  2. The aspect of neurophysiology that is relevant in this research is functional neuroanatomy. It is not concerned with the biochemical details of the individual neurons (ion channels, membrane potentials, etc.), but treats them rather as functional units, e.g. threshold automata in neural network models, connecting to form neuroanatomically specifiable populations. We shall say a few words about the ‘micro’ cellular level relevant to molecular biology, but most of what follows will concern a ‘meso’ functional level.

  3. One characteristic of perception is that perceptual ‘phenomenal consciousness’ results from integration, in the neurophysiological sense, of the partial processing carried out by a great many different brain modules connected together in an extremely complicated way with a high level of feedback. Processing is highly modular (whence the very specific nature of pathologies), but consciousness is highly integrated. This means that models for specific areas are necessarily incomplete. Here, we shall be dealing mainly with the first area, known as V1 (or area 17 in cats), of the primary visual cortex. This does of course limit the discussion, but we shall see that much can already be said and that this provides a good example of what is meant by neurogeometry. Furthermore, despite being so restrictive, this case can also be considered as fundamental if we adopt David Mumford and Tai Sing Lee’s ‘high-resolution buffer hypothesis’. According to this, V1 takes part in any higher level processing which requires high resolutions (see Mumford [18] and Lee et al. [19]).

  4. We stress that neurogeometry is about the internal geometry (already referred to here as ‘immanent’) of low-level vision, and not therefore the conventional ‘transcendent’ geometry of the perceived external 3D Euclidean space. It concerns a much more fundamental level, and to use the nice expression adopted by Misha Gromov to speak about sub-Riemannian geometry, it tries to understand perceived space from within.

  5. In neurogeometry, anything that is not implemented neurally does not exist. This means that all the mathematical concepts used operationally in the models must have some material counterpart. There is a similar situation in computer science, where the software only works if it is compiled and realized materially in the physics of the hardware. It is not easy to implement this equivalence between geometric idealities and neural materialism. Indeed, on the one hand, trivial mathematical structures such as alignments, gluing of local charts, or direct products are implemented neurally in a very subtle way that is hard to study experimentally, and on the other hand, certain properties of the modelling structures will not be implemented and so will have no empirical meaning. The reader should bear this crucial point in mind: when a set of empirical phenomena is modelled by mathematical structures of a certain kind, only certain aspects of these structures will be open to empirical interpretation.Footnote 7

  6. Furthermore, implementations can differ significantly depending on the species, and the same abstract functional structure can be achieved materially in different ways in the various layers of V1 (see Sect. 4.9.4 in Chap. 4). We need therefore to carry out very careful interspecific comparative studies on rats, ferrets, tree shrews (tupaias), cats, macaques, humans, etc.

  7. The neurons in V1 have small receptive fields and thus process information from the photoreceptors in a very local manner, i.e. localized in the visual field. The main problem is to know how these local data are organized into global structures such as lines, edges, surfaces, and shapes. This is a problem of ‘integration’ in the mathematical sense, and here, the concept of functional architecture—referring to the design of the connectivity of neurons within an area—proves to be crucial. The enigmatic phenomena studied by Gestalt theory relate to the fact that perception ‘integrates’ local data and ‘fills in the gaps’, if there are any. In this sense, neurogeometry could be qualified as Gestalt geometry.

2.3 Geometry’s ‘Twofold Way’

Let us stress once more that, in neurogeometry, there is a twofold relationship between the geometry and neurophysiology of vision. As we shall explain in detail, it is the functional architecture of the visual areas, the precise organization of their neural connections, which generates the geometric properties of perceptual space, i.e. the perceived 3D space in which the objects of the external world are situated. We may thus envisage a ‘neural \(\rightarrow \) spatial genesis’ of the kind ‘functional architecture \(\rightarrow \) geometric properties of external space’. But as we shall see later, there exist geometric models of the functional architectures themselves; that is, the latter implement well-defined sui generis geometrical structures. It is important to distinguish carefully between the two levels at which geometry enters the discussion. The whole purpose of this book would become incomprehensible if they were confused. As we have seen, to formulate the distinction, we may return to the classical philosophical opposition between immanence and transcendence. The geometry of functional architectures is immanent in perception, internal and local, and its global structure is obtained by integration and coherent association of local data. In contrast, the geometry of perceived space is transcendent in the sense that it concerns the outside world and is given to us immediately as global.

But it turns out that neurally implemented immanent geometry can itself be modelled using deep geometric structures already introduced by the geometers mentioned earlier, such as Elie Cartan, Hermann Weyl, René Thom, Alain Connes, and Misha Gromov, to understand the genesis of transcendent geometry. This implies that, once modelled in this way, the neural genesis of space can be internalized in the mathematics and thereby identified with a mathematical genesis of a macro and global geometry from a micro and local one, globalized by integration and coherent matching. This should come as no surprise, because the genesis of physical space occurs in exactly the same way: once physics has been mathematized, it is identified with the genesis of classical geometry from Riemannian geometry (in general relativity) or from the non-commutative geometry called ‘quantum’ or ‘spectral’ geometry in quantum field theory. The diagram in Fig. 2.1 explains this interaction between the different philosophical levels of understanding geometry.

Fig. 2.1  Connections between ‘immanent’ geometry and ‘transcendent’ geometry

2.4 Idealities and Material Processes

To clarify this key point, let us make an analogy. Although it differs with regard to content, the new direction provided by neurogeometry is methodologically speaking of the same kind as the one taken during the last century with the advent of the Turing machine, \(\lambda \)-calculus, and computers. This computational revolution took the symbols that underlie logical idealities and turned them into material operations. It explained how the dominant logical idealism and analytic apriorism expounded from Bolzano to Frege could be naturalized and even physicalized. In other words, it explained how logical ‘software’ could be implemented in physical ‘hardware’.

We are doing just the same here. The aim of the ‘neurogeometric’ approach is to obtain an explicit understanding of the material operations that underlie the geometric idealities of the synthetic a priori and to explain how some kind of geometric ‘software’ could be implemented in our neural ‘hardware’, hence the following analogy:

|          | Idealities         | Type of a priori | Implementation          |
|----------|--------------------|------------------|-------------------------|
| Logic    | Logical idealities | Analytic         | \({\lambda }\)-calculus |
| Geometry | Spatial idealities | Synthetic        | Neurogeometry           |

Let us say a little more about this analogy. In logic, we have what is known as the Curry–Howard correspondence which relates low-level machine language with the high-level language of logic. Low-level calculations are described, for example, by the \(\lambda \)-terms of a \(\lambda \)-calculus which describes the programs. In the simplest \(\lambda \)-calculus, the \(\lambda \)-terms (the programs) are constructed inductively by iterating two basic operations:

  • the application MN of one \(\lambda \)-term M to another \(\lambda \)-term N,

  • the abstraction operation \(\lambda x.M\) transforming the free occurrences of the variable x in M into places for other \(\lambda \)-terms.

The basic rule of \(\lambda \)-calculus (which corresponds to executing the programme described by the \(\lambda \)-term) is known as \(\beta \)-reduction. It consists in applying a \(\lambda \)-term \(\lambda x.M\) to another \(\lambda \)-term N by substituting N in all the free occurrences of x in M, which can be written \((\lambda x.M)N\rightarrow _{\beta }M[x:=N]\). The normalization of a \(\lambda \)-term is a sequence of \(\beta \)-reductions which stops at a \(\beta \)-irreducible \(\lambda \)-term. The normalizable \(\lambda \)-terms thus describe effective computations which stop and deliver a result. The fundamental link with logic comes from the typing of the \(\lambda \)-terms M into types \(\mu \) (notation \(M:\mu \)). Intuitively, if \(M:\mu \) is a \(\lambda \)-term of type \(\mu \), and if \(x:\sigma \) is a variable of type \(\sigma \), then the abstraction \(\lambda x.M\) has the type \(\sigma \rightarrow \mu \) of functions of source \(\sigma \) and target \(\mu \). Likewise, if \(M:\sigma \rightarrow \tau \) is a \(\lambda \)-term with functional type \(\sigma \rightarrow \tau \) and if \(N:\sigma \) is of type \(\sigma \), then MN is of type \(\tau \). In fact, these are the types which correspond to the formulas of a logic system: intuitionistic propositional logic. The Curry–Howard correspondence between programs and proofs is summarized here:

| \({\varvec{\lambda }}\)-calculus, programs | Logic, proofs |
|--------------------------------------------|---------------|
| Low level                                  | High level    |
| Code                                       | Expression    |
| Compilation                                | Decompilation |
| Execution of the program                   | Theorem       |
| Encoding                                   | Typing        |
| Instruction                                | Logic rule    |
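To make the abstraction, application, and \(\beta \)-reduction operations described above concrete, here is a minimal sketch of an untyped \(\lambda \)-calculus evaluator (our own illustrative code; the term representation and the normal-order strategy are choices made for brevity, not something taken from the Curry–Howard literature cited here):

```python
# A minimal untyped lambda-calculus evaluator (an illustrative sketch).
# Terms: ('var', x) | ('lam', x, body) | ('app', M, N)
import itertools

_fresh = (f"x{i}" for i in itertools.count())   # supply of fresh variable names

def free_vars(t):
    tag = t[0]
    if tag == 'var':
        return {t[1]}
    if tag == 'lam':
        return free_vars(t[2]) - {t[1]}
    return free_vars(t[1]) | free_vars(t[2])

def subst(t, x, n):
    """Capture-avoiding substitution t[x := n]."""
    tag = t[0]
    if tag == 'var':
        return n if t[1] == x else t
    if tag == 'app':
        return ('app', subst(t[1], x, n), subst(t[2], x, n))
    y, body = t[1], t[2]
    if y == x:
        return t                                  # x is bound here, nothing to do
    if y in free_vars(n):                          # rename the binder to avoid capture
        z = next(_fresh)
        body, y = subst(body, y, ('var', z)), z
    return ('lam', y, subst(body, x, n))

def normalize(t):
    """Repeatedly apply beta-reduction (normal order) until no redex remains."""
    tag = t[0]
    if tag == 'app':
        m = normalize(t[1])
        if m[0] == 'lam':                          # beta-reduction: (\x.M) N -> M[x := N]
            return normalize(subst(m[2], m[1], t[2]))
        return ('app', m, normalize(t[2]))
    if tag == 'lam':
        return ('lam', t[1], normalize(t[2]))
    return t

# Example: Church numerals; normalizing 'succ one' yields 'two'.
one  = ('lam', 'f', ('lam', 'x', ('app', ('var', 'f'), ('var', 'x'))))
succ = ('lam', 'n', ('lam', 'f', ('lam', 'x',
         ('app', ('var', 'f'),
                 ('app', ('app', ('var', 'n'), ('var', 'f')), ('var', 'x'))))))
print(normalize(('app', succ, one)))
# -> ('lam', 'f', ('lam', 'x', ('app', ('var', 'f'), ('app', ('var', 'f'), ('var', 'x')))))
```

Normalizing the application of the successor to the Church numeral one halts on a \(\beta \)-irreducible term, the Church numeral two, which is exactly the sense in which a normalizable \(\lambda \)-term ‘stops and delivers a result’.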

It is this kind of correspondence that we shall describe in this book, but with three fundamental differences:

  1. The low-level calculations will be neural calculations and not programs written in a machine language.

  2. The high-level structures will not be expressions of a logic system, but geometrical structures.

  3. In contrast to computers (universal Turing machines), the neural hardware is dedicated to certain tasks, and its concrete physical activity is thus equivalent to the abstract ideal ‘calculation’ it carries out.

These points can be displayed as follows:

| Neural ‘calculation’ | Geometry               |
|----------------------|------------------------|
| Low level            | High level             |
| Neural code          | Geometric structures   |
| Compilation          | Decompilation          |
| Neural activity      | Geometric construction |
| Encoding             | Typing                 |
| Instruction          | Construction rule      |

2.5 Mathematical Prerequisites and the Nature of Models

By its very nature, the following will raise certain issues relating to didactic presentation, issues that might prove off-putting to some readers. Indeed, we shall use many mathematical concepts generally considered to be rather ‘advanced’: differential forms, connections, Lie groups, contact structures and symplectic structures, sub-Riemannian geometry, variational models, non-commutative harmonic analysis, and so on. We shall define these as we go along, assuming a basic understanding of differential and integral calculus, linear algebra, and elementary group theory. These are basic concepts that will be familiar to any science student and which are in any case easy to find in a good encyclopaedia.

Having said that, the reader may wonder quite rightly why such mathematics is relevant here. Our long experience as teacher and researcher in cognitive science has shown us that biologists and psychologists are often intrigued, even shocked, by the idea that non-trivial mathematical models (going beyond simple methods of data analysis) should be needed in their field of study.

A first source of suspicion comes from the idea that mathematics should only be applicable to intrinsically rational phenomena and that, insofar as evolution results from a ‘tinkering’ process, biological structures could not be intrinsically rational and so could not as a matter of principle be expressible in terms of mathematics. There are several possible answers to this. To begin with, there is no metaphysical reason why physical phenomena themselves should be intrinsically rational. It is rather because our efforts to express them mathematically have been so successful that they now appear a posteriori to be so rational. Secondly, what characterizes physical rationality expressed in this way is the existence of simple laws, from Kepler and Newton to superstring Lagrangians.Footnote 8 But modelling goes well beyond what is governed by laws. For example, many differential equations can be applied to a whole range of different fields: Turing-type reaction–diffusion equations for morphogenetic processes, the Hodgkin–Huxley equation [20] for the propagation of action potentials, the spin glass equations of statistical physics for neural networks, the Lotka–Volterra equations for ecology, and so on. There is thus no deep reason why there should be any natural limit to the use of mathematical models.

Another argument often put forward is that if we make the hypothesis that algorithms are implemented neurally, this would mean that neurons ‘calculate’, which is impossible. But this argument is also mistaken. In mechanics, the planets do not ‘calculate’ their trajectories. The only thing we can say is that theories based on laws involving global interactions (as is the case with Newton’s universal law of gravitation) are problematic and that the interactions must be localized (something achieved by general relativity). However, in neuroscience, we can be sure of the locality of the interactions, because these interactions occur through material connections between neurons. What passes for a neural ‘calculation’ is essentially the propagation of activity along connections, and this is a ‘calculation’ because the connections are organized into highly specific functional architectures. In other words, it is the structure of the functional architectures—in a sense, the ‘design’ of the neural ‘hardware’—which amounts to a calculation.

A third argument is that even if we are convinced of the relevance of mathematical models in neurophysiology, we should at least seek out the simplest possible models and that we should in principle be suspicious of any complexity in this context. Once again, this is simply a prejudice and indeed constitutes another fallacy. To see this, we only need to return to the beginnings of differential and integral calculus and mathematical physics. To solve what seemed to be very simple problems, such as calculating the length of the arc of an ellipse, new functions had to be invented, viz. the elliptic functions, much more complicated than the trigonometric functions. Likewise in mechanics, to solve apparently very simple problems, such as the problem of a hanging chain, i.e. the shape of the curve adopted by a chain of uniform linear density when suspended by its two ends and subject to the force of gravity alone, the pendulum, or the shape of a uniform elastic rod when curved (the elastica problem), mathematicians had to solve specific differential equations or variational problems, which turned out to involve astonishing internal complexity. Newton’s law of gravitation is expressed by an extremely simple second-order differential equation, but in most cases, when the relevant forces are fed in, it becomes a specific differential equation whose solutions have nothing simple about them at all. The complexity of the solutions often makes them quite inaccessible, as illustrated by the n-body problem.
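To take the first of these examples, the arc length of an ellipse already forces us out of the elementary functions. Here is a minimal numerical sketch (our own illustration, assuming SciPy is available) using the complete elliptic integral of the second kind:

```python
# The perimeter of an ellipse has no elementary closed form, but it is given by a
# complete elliptic integral of the second kind: P = 4 a E(e^2), e^2 = 1 - b^2/a^2.
from scipy.special import ellipe

def ellipse_perimeter(a, b):
    """Perimeter of the ellipse with semi-axes a and b, via 4*a*E(e^2)."""
    a, b = max(a, b), min(a, b)
    e2 = 1.0 - (b / a) ** 2          # squared eccentricity
    return 4.0 * a * ellipe(e2)      # ellipe takes the parameter m = e^2

print(ellipse_perimeter(1.0, 1.0))   # circle: 2*pi ~ 6.2832
print(ellipse_perimeter(1.0, 0.0))   # degenerate case: 4 (the segment traversed twice)
print(ellipse_perimeter(2.0, 1.0))   # ~ 9.6884, already a non-elementary value
```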

The emergence of complexity is in fact perfectly commonplace, and we shall return to this in the second volume. It is often due to the fact that the integration of a differential equation involves iterating the infinitesimal generator of the equation. But the iteration of operations generally leads to a great deal of complexity, even if these operations are very simple. Fractals provide us with many examples.

2.6 Mathematical Structures and Biophysical Data

For our investigation of neurophysiology, we should like to return to the spirit of the pioneers of the seventeenth and eighteenth centuries, such as Euler and Lagrange, in their investigation of mechanics. Indeed, there is really no reason why ‘calculation’ of perceptual geometry by the visual cortex should be simpler and less subtle mathematically than the calculation of the arc of an ellipse, the hanging chain, the pendulum, or the elastic rod. Empirical phenomena have to be taken as they are. The important thing is to model them correctly, and it is perfectly understandable that, in order to do that, we must appeal to somewhat elaborate mathematics.

However, we are fully aware that we may convince neither the neurocognitivists nor the mathematicians, because we know from experience that the transition from ‘neither, nor’ to ‘both, and’ can be a difficult one. As soon as we leave the field of physics, whose practitioners have been making mathematical models of empirical reality for centuries, we find a ‘gap’, often even a ‘gulf’, and not only theoretical, but institutionalized, between mathematical structures and empirical observations (here neurophysiological). Experimenters tend to want to preserve the full complexity of the data they have acquired using highly sophisticated equipment and thus tend to prefer computer simulations rather than formal models which always simplify the data in order to extract structural properties. The computational programs ‘Blue Brain’ and ‘Human Brain’, to be discussed in Sect. 4.3.1 of Chap. 4, are good examples. And this mistrust on the part of experimental neuroscience will find little to counterbalance it from the mathematicians because, as one might imagine, many of these will only see in these models elementary special cases of structures they have long been perfectly familiar with, even though they may be considered insurmountably difficult to grasp by their neurocognitivist colleagues.

But we shall nevertheless take this risk, making the optimistic hypothesis that some readers will feel that, as far as the neuroscience of vision and neural genesis of perceived space are concerned, the gap between mathematics and experimentation is actually less difficult to negotiate than one might think.

In fact, we consider neurogeometry to be intrinsically cross-disciplinary, that is, intrinsically involving many different disciplines, something forced upon us by the very nature of the phenomena it seeks to theorize, but with the long-term aim of becoming a discipline in its own right. Until now, the basis of neuromathematical projects has consisted above all of (ordinary or partial) differential equations for neural activity. Our purpose will be to introduce more abstract methods of differential geometry.

Let us stress, therefore, that we shall concentrate on geometric models. On the other hand, this will not prevent us from giving a glimpse of other methods when the opportunity arises. In this way, the reader will get a better idea of the wealth and diversity of neuroscience models.

2.7 Levels of Investigation: Micro, Meso, and Macro

Another potential problem is the sheer breadth of the topics treated here. Of course, we shall focus on modelling the functional architecture of the primary visual areas and in particular V1. But despite the apparently rather limited nature of the subject, we shall nevertheless only discuss a very small part of it. It is easy to understand why. To begin with, we shall only be dealing with the so-called functional, integrative, and computational neurosciences, and apart from the discussion in Sect. 5.12 of Chap. 5 which we shall explain when the time comes, we shall not be concerned with any aspect of molecular biology or genetics. This said, there are still three levels of investigation for the purpose at hand: those of microneurophysiology, mesogeometry, and macrodynamics. These will receive differing amounts of attention.

For instance, one of our basic experimental inputs (see Sect. 4.3 of Chap. 4) will be the fact that the single neurons in V1 detect a retinal position \(a=(x,y)\) and a preferred orientation p at a, although naturally at a certain scale. The data (a, p) is called a contact element in differential geometry, and we shall thus consider the single neurons of V1 as filters extracting contact elements from the optical signal. But just this simple claim is the subject of a huge experimental effort. For example, one needs to compare the situations for different species and take into account the fact that, in these results, neurons are treated as linear filters acting on stimuli reduced to single bars (simulating the edge of an object) or systems of parallel bars in motion (drifting gratings), while it is clear that there are significant nonlinearities and also that natural stimuli may have very different structures.Footnote 9 One must also take into account the fact that the imaging techniques used do not have sufficient spatial resolution to distinguish individual neurons,Footnote 10 whence one is in fact dealing with local averages over small groups of neurons, and a piece of geometric data like a contact element (a, p) reflects an average of the underlying activity. The geometric quantity we refer to as a ‘contact element’ thus represents a mesoscopic entity when compared with the microscopic level of individual neurons.
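As a rough computational illustration of this mesoscopic reading (a sketch of our own, assuming NumPy; the filter bank and the stimulus are deliberately simplistic), a small patch of V1 can be caricatured as a bank of oriented linear filters whose strongest response at a retinal position a gives the preferred orientation p of the contact element (a, p):

```python
# Illustrative sketch (assuming NumPy): estimate a contact element (a, p) by taking,
# at a fixed position a, the orientation of the oriented filter with the largest
# response over the surrounding patch (the 'mesoscopic' reading of V1 activity).
import numpy as np

def gabor(theta, size=15, sigma=3.0, freq=0.25):
    """A simple oriented Gabor kernel whose stripes are aligned with theta."""
    r = np.arange(size) - size // 2
    X, Y = np.meshgrid(r, r)
    v = -X * np.sin(theta) + Y * np.cos(theta)       # coordinate orthogonal to theta
    env = np.exp(-(X**2 + Y**2) / (2 * sigma**2))    # isotropic Gaussian envelope
    return env * np.cos(2 * np.pi * freq * v)

def preferred_orientation(patch, n_orient=16):
    """Return the orientation p in [0, pi) maximizing the filter response on the patch."""
    thetas = np.linspace(0, np.pi, n_orient, endpoint=False)
    responses = [abs(np.sum(patch * gabor(t, size=patch.shape[0]))) for t in thetas]
    return thetas[int(np.argmax(responses))]

# Synthetic stimulus: a grating (edge-like pattern) oriented at 30 degrees.
size = 15
r = np.arange(size) - size // 2
X, Y = np.meshgrid(r, r)
theta_true = np.deg2rad(30)
stimulus = np.cos(2 * np.pi * 0.25 * (-X * np.sin(theta_true) + Y * np.cos(theta_true)))

p = preferred_orientation(stimulus)
print(np.rad2deg(p))   # close to 30: the contact element at the patch centre is (a, ~30 deg)
```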

One consequence of this choice of a mesoscopic level for neurogeometry is that what we shall call a ‘neuron’ will actually be a small patch of neurons, and we shall thus say little about true elementary neural circuits. There is an extensive literature on this subject and some sophisticated engineering, but we shall only refer to it from time to time.

It should also be noted that even a very high resolution would not remove the problem of levels. Indeed, the neural code is a population coding, where each elementary operation activates a large number of neurons. A ‘high-resolution’ neurogeometry that was truly microscopic would therefore have to be based on the tools of stochastic differential geometry, something pointed out by specialists such as David Mumford, Jack Cowan, and Daniel Bennequin. So let us stress once again that the neurogeometry developed here will idealize things by sticking to a mesoscopic level. The global structures, processes, and dynamics that we shall study will thus be based on gluing together mesoscopic geometric elements.

All the various aspects of the microlevel are currently the subject of ever more highly specialized studies. What this means is that, while our neurogeometric mesomodels are mathematically rather sophisticated, they concern only a very limited part of what contemporary neuroscience can teach us, and in a highly simplified way, so they are only a first step into this new field. What we would like to advocate in neuroscience is mainly the geometric framework, which seems relevant and natural for the mathematical modelling of functional architectures.

2.8 The Context of Cognitive Science

As we have rather briefly specified above, this book is about the problem of modelling in cognitive science, that is in the natural science of cognitive faculties and mental activities. Let us therefore say a few words about this context.Footnote 11 The cognitive sciences bring together all the various disciplines that tackle the question of human, animal, and artificial intelligence, starting with the underlying neurobiological substrate, its embodiment, that is the relationship between mental activity and the body apart from just the neural aspects, and its relationship also with the emotions, but going on to include its formal and mathematical structure (there are many different types of model in cognitive science), its computer simulations, and its linguistic, psychological, and social realizations.

Research in the different areas of the cognitive sciences, specifically perception, action, reasoning, and language, is carried out with an endogenous, intrinsic, and unified ‘polyscientific’ approach, whose cross-disciplinary nature is imposed by the subjects of study and combines statistical physics, differential geometry, cognitive, computational, and integrative neuroscience, cognitive psychology, artificial intelligence, logic, linguistics, philosophy, and the social sciences. Biological evolution has produced an amazing biochemical machine, the brain, with intellectual, mental, and symbolic capacities. In a few tenths, or even hundredths, of a second, this machine can recognize a complex visual shape, calculate the sequence of instructions required by the muscles to catch a ball in flight, or decode an acoustic message by identifying the words and their meaning. It includes a whole range of processing levels, from low-level peripheral sensory processing, such as retinal processing of an optical signal, to high-level central abstract symbolic processing, such as judgement and inference, or aesthetic assessment.

The aim of cognitive science is thus to explain mental phenomena—be they states, entities, structures, events, or processes—in a strictly naturalistic and causal way. These are the problems which, by definition, have long been studied by physiology and psychology. They have also been the subject of extensive and rigorous conceptual analysis by philosophers from Aristotle to Descartes , Hume , Locke , Leibniz , Kant , and many others, who have reflected upon the nature of ‘ideas’, ‘human understanding’, and ‘mental faculties’. As a science of the ‘mind’, the cognitive sciences are thus by definition natural sciences, bringing with them a vast philosophical legacy. The novel aspect of the current scientific situation is, on the one hand, the remarkable harvest of results obtained over the past few decades in neuroscience and, in particular in brain imaging, and, on the other hand, the integration of theoretical work on cognition, not just in the natural sciences, statistical physics, and biochemistry, but also in the formal sciences of geometry, logic, and theoretical computing. What is more, insofar as the cognitive sciences also concern artificial cognition, they are now inseparable from information processing systems and methods for analyzing and synthesizing image and sound, not to mention artificial intelligence (AI) and robotics.

We therefore stress once more that the cross-disciplinary nature of cognitive science is intrinsic and endogenous: it is imposed by the very nature of the entities, structures, and mental processes it investigates. An ability such as the perception of objects in three-dimensional space on the basis of ‘pixellated’ two-dimensional retinal data can be studied on a formal level (to identify the mathematical and formal features of the problem of constituting objects bounded by edges and filled with perceived qualities), on a behavioural level (studying the computational procedures, i.e. processes of integration, recognition, inference, and interpretation), and on the level of the biological substrate (investigation of neurophysiological mechanisms). This ability thus involves several levels of integration in both space and time.

The cognitive sciences treat all these mental phenomena a priori as a broad class of natural phenomena. They do for the mental what biology has been doing for the living since the nineteenth century. Consequently, their status depends on the way we extend the concept of ‘nature’. If we understand ‘nature’ in the narrow (strictly physicalist) sense, this leads to a reductionist or ‘eliminativist’ understanding of the mental. But if ‘nature’ is taken in a broader sense, we arrive at an ‘emergentist’ understanding of the mental, e.g. emergence of macrostructures from microinteractions in complex systems, as in thermodynamics and sociology. But whichever option is chosen, the approach will be naturalistic and monistic, rejecting any Cartesian form of dualism between mind and body (two substances).

The term ‘natural sciences’ also includes mathematical modelling, computer simulation, and an experimental approach. Cognitive science has become a new frontier in the contemporary hard technosciences, with considerable technological spin-offs (neural networks, robotics, hybrid natural–artificial systems, and so on). The effect has been to completely break down the conventional boundaries between the physical and mathematical sciences, the biological sciences, and the social sciences. Thanks to what are now called the convergent technologies, the physical, the biological, and the mental come together into a unified understanding of complexity in nature.

This naturalization of all that is mental—and at the end of the day, that means also consciousness, intentionality, and meaning—brings with it formidable epistemological challenges, and it will thus be impossible to develop the cognitive sciences without facing up to a whole set of problems relating to the theory of knowledge.

2.9 Complex Systems and the Physics of the Mental

As ‘hard’ technosciences, the cognitive sciences are inextricably related to the study of complexity and derive from the intellectual environment that came into being in the 1940s and 1950s, so admirably exemplified by exceptional scholars such as John von Neumann, Norbert Wiener, Warren McCulloch, and Walter Pitts. They belong to the movement that saw the joint emergence of the theories, techniques, and methods of computers, neural networks, cellular automata, information processing, and self-organizing, self-regulating complex systems.Footnote 12 After several decades of progress in constant interaction with neuroscience, cognitive psychology, linguistics, and certain approaches to economics, these activities are now mature enough to justify referring to them as a ‘science’.

This is part of a deep trend. There has been a gradual development of mathematical physics to treat the organizational complexity of material systems and the emergence of patterns and shapes, but also cognitive activities as ‘unphysical’ as conceptual categorization and learning. We began by understanding how shapes could ‘emerge’ and ‘self-organize’ in a stable manner on the macroscopic scale as causal consequences of complex interactions on the microscopic scale. Collective microphysical phenomena, both cooperative and competitive, provide the causal origin of joint behaviour on a macroscopic level which can break the homogeneity of a substrate. The classic physical example is provided by critical phenomena like phase transitions. It was then realized that neural networks are the same kind of system, but in which emergent shapes and structures can be interpreted as cognitive processes.

If rather similar models crop up in rather disparate fields of empirical investigation, this is because complex systems possess certain relatively universal properties.Footnote 13 By definition, these are large systems of interacting elementary units with emergent global macroscopic properties arising from cooperative or competitive collective interactions between these units. These systems contrast with classical deterministic mechanical systems in the following ways:

  • They are singular and individuated, largely contingent, not concretely deterministic, even when they are ideally so: they are sensitive to tiny variations in their control parameters, a sensitivity that can induce divergence effects.

  • They are historical products, resulting from processes of evolution and adaptation.

  • They are out of equilibrium and have an internal regulation that keeps them within their range of viability.

They have little to do with classical mechanistic determinism. They are analyzed using new physical and mathematical theories and a computational approach making heavy use of computer simulation. The role of nonlinear dynamical systems (attractors, structural stability properties, and bifurcations), chaos theory, fractals, statistical physics (renormalization group), self-organized criticality, algorithmic complexity, genetic algorithms, and cellular automata has become key to understanding their statistical and computational properties. In short, through the engineering of self-organized, non-hierarchical, distributed, and acentered artificial systems, we are beginning to be able to model and simulate reasonably well biological systems (immunological systems, neural networks, evolutionary processes), ecological systems, cognitive systems, social systems, and economic systems.

2.10 The Philosophical Problem of Cognitive Science

Cognitive science can be approached in a purely operational and instrumental way, but its development nevertheless raises many issues on the philosophical level because, as we have just seen, it questions the traditional dividing line between the science of nature and the science of mind. To be more specific here, let us return for a moment to certain epistemological basics.

In the formalization of the so-called exact sciences, there is a lot more than, on the one hand, the processing of empirical data using universally applicable methods such as statistics, factor analysis, principal component analysis, and data mining and, on the other hand, the axiomatization of theoretical concepts. These two types of formalization also exist in the social sciences and involve general methods that are independent of the source of the data and the kinds of things to which they are applied.

But in the physical sciences, there is also modelling in a stronger sense which is of a quite different kind. For this modelling in the strong sense, methods are specific to the theoretical conceptualization of a particular kind of object and can be used to reconstruct the phenomena in some real field from its constitutive theoretical concepts. Mathematical physics is able to reconstruct the whole diversity of physical phenomena from its theoretical concepts. This completely changes the status and function of concepts. We no longer subsume empirical diversity by abstraction under the unity of theoretical categories and concepts. Rather, concepts are transformed into algorithms for reconstructing the diversity of phenomena. Put another way, conceptual analysis is converted into a computational synthesis.

At the present time, the ideal of a computational synthesis of phenomena has only really been achieved in physics, which is restricted to a very narrow and highly constrained region of empirical reality. Huge regions of phenomena have been left outside the reconstruction zone, even though a fair number of these regions have been studied in detail by many empirical and descriptive disciplines. Here, we may cite:

  • The whole macroscopic organizational and morphological complexity of material systems.

  • All cognitive operations, including categorization, inference, induction, and learning.

  • The whole semiotic and linguistic dimension of meaning.

  • And in fact anything having to do with phenomenality itself as a process of phenomenalization of an underlying physical objectivity.

In other words, it is only by restricting phenomenal reality to its most elementary form (essentially, the trajectories of material bodies, fluids, particles, and fields) that we have been able to carry through the programme of reconstruction and computational synthesis. For the other classes of phenomena, this project has long come up against unsurmountable epistemological obstacles.

At this point, it was taken as self-evident that there was an unavoidable scission between phenomenology (being as it appears to us in the perceived world and the cognitive faculties that process it) and physics (the objective being of the material world). However, we may say that it is not so much self-evident as a straightforward prejudice. In any case, this disjunction transformed the perceived world into a world of subjective-relative appearances—mental projections—with no objective content and belonging to psychology. Beyond psychology, the most that could be attributed to these appearances in the way of objectivity was a logical form of objectivity to be found in the theories of meaning and mental contents, from Bolzano and Frege, Husserl and Russell, to contemporary analytical philosophy.

We may say that the current work aims to go beyond this scission by developing a mathematical neurophysics of the phenomenology of the perceived world and common sense. The neurogeometry of vision presented here will be one aspect of this.

2.11 Some Examples

To end this introduction, let us mention some of the most striking examples of perfectly intuitive but theoretically problematic perceptual features that we shall attempt to understand.

2.11.1 The Gestalt Concept of Good Continuation

Figure 2.2 shows small aligned segments against a background of random distractors.Footnote 14 The alignment seems to jump out at us, and indeed, this is typical of what is known as a ‘pop out’ phenomenon. It results from ‘binding’ and integration of local information into a global structure. Psychophysical experiments have shown that it is indeed the global alignment that causes this effect. But what is the meaning of a global alignment on the neural level? For each of us conscious sentient beings, it is trivially and immediately obvious from the perceptual point of view. But each neuron only filters a tiny part of the visual field. There is no homunculus in the brain. There is no ‘ghost in the machine’, and the perceptual consciousness of a given individual is precisely the great mystery that we would like to explain. On the neural level, the Gestalt principle of ‘good continuation’, which asserts that alignments are perceptually prominent, thus poses a formidable problem.

Fig. 2.2  Example of ‘good continuation’. From Hess et al. [24]
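To give a concrete sense of this kind of display (a sketch of our own, assuming NumPy and Matplotlib; the actual stimuli of Hess et al. [24] use oriented Gabor patches and curved paths rather than bare segments), one can embed a short chain of nearly collinear elements in a field of randomly oriented distractors:

```python
# Illustrative sketch (assuming NumPy and Matplotlib): a field of randomly oriented
# segments with a nearly collinear horizontal chain embedded in it.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
n, half = 20, 0.35                        # grid size and segment half-length

# Jittered grid of positions and random orientations for the distractors.
xs, ys = np.meshgrid(np.arange(n, dtype=float), np.arange(n, dtype=float))
xs += rng.uniform(-0.3, 0.3, xs.shape)
ys += rng.uniform(-0.3, 0.3, ys.shape)
angles = rng.uniform(0, np.pi, xs.shape)

# 'Good continuation': elements along one row share (roughly) the same orientation.
row = n // 2
angles[row, 4:16] = rng.normal(0, 0.05, 12)    # nearly horizontal, i.e. aligned

fig, ax = plt.subplots(figsize=(5, 5))
dx, dy = half * np.cos(angles), half * np.sin(angles)
for x, y, u, v in zip(xs.ravel(), ys.ravel(), dx.ravel(), dy.ravel()):
    ax.plot([x - u, x + u], [y - v, y + v], 'k', lw=1.5)
ax.set_aspect('equal'); ax.axis('off')
plt.show()
```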

2.11.2 Kanizsa’s Illusory Contours

Figure 2.3 shows an example of a still more spectacular phenomenon. The red sectors of the concentric grey rings specify the boundary conditions generating the illusory (or subjective) contours which constitute one of the most enigmatic manifestations of the Gestalt properties of completion of missing sensory data. Furthermore, a pink-tinted square emerges from this configuration (neon or watercolour effect), showing that not only does the visual system construct long-range contours that do not exist in the sensory stimulus, but these hallucinated contours can serve as the edges for a colour-spreading process that is just as much a hallucination.

Fig. 2.3  Example of a Kanizsa style illusory contour with a neon effect

The transition from local to global works over a very long range here on the neural length scale, and this is why these phenomena have always been considered so particularly enigmatic.

2.11.3 Entoptic Phenomena

Our third example is the even more surprising case of visual hallucinations in which there is absolutely no stimulus, while the percept is richly structured from the geometric standpoint. Some of these purely geometric hallucinations relating to what has been called ‘entoptic vision’ were already classified long ago by Heinrich Klüver, who first brought Gestalt theory to the USA. Figure 2.4 shows some examples of these visual patterns perceived under the influence of mescal. It also shows some neurogeometric models with a remarkable empirical fit which are due in particular to Paul Bressloff, Jack Cowan, and Martin Golubitsky (see Bressloff et al. [25]) and will be discussed further in the second volume.

Fig. 2.4  Left I, II, III, IV: Visual hallucinations observed by Klüver. Right a, b, c, d: Neurogeometric models for the Klüver data. See [25] and the second volume

2.11.4 The Cut Locus

Our last example concerns the cut locus of a figure, also called the generalized symmetry axis or ‘skeleton’. Following the psychologist Blum [26], Thom [27] always stressed its fundamental role in perception (see Fig. 2.5).

Fig. 2.5  Example of a cut locus. From Kimia [28]
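For readers who would like to see this construction computationally, here is a minimal sketch (our own, assuming NumPy and scikit-image; it is an algorithmic medial axis of a binary shape, not a model of the cortical dynamics discussed below):

```python
# Illustrative sketch (assuming NumPy and scikit-image): the cut locus ('skeleton',
# generalized symmetry axis) of a simple filled shape, computed as its medial axis.
import numpy as np
from skimage.morphology import medial_axis

# A filled rectangle as a binary image.
shape = np.zeros((80, 200), dtype=bool)
shape[20:60, 30:170] = True

skeleton, distance = medial_axis(shape, return_distance=True)

# The skeleton of a rectangle is the classic 'fishbone': a central segment with
# four diagonal branches towards the corners.
rows, _ = np.nonzero(skeleton)
print(f"{skeleton.sum()} skeleton pixels, centred around row {int(rows.mean())} (the midline)")
```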

Once again, imaging can show us the neural reality of the construction of this inner skeleton, for which there is no trace whatever in the sensory input, the latter consisting merely of an outer contour. Figures 2.6 and 2.7, produced by David Mumford’s disciple Tai Sing Lee, illustrate the response of a population of simple V1 neurons, whose preferred orientation is vertical, to textures with edges specified by opposing orientations. Up to around 80–100 ms, the early response involves only the local orientation of the stimulus. Between 100 and 300 ms, the response concerns the overall perceptual structure and the cut locus appears. These experiments are rather delicate to carry out, and they are much debated, but the detection of cut loci seems to be well demonstrated experimentally.

Fig. 2.6  Response to a stimulus whose form is specified by opposing textural orientations. From Lee [29]

Fig. 2.7  Recording of the construction of the cut locus. From Lee [29]

All these examples share the fact that the geometry of the percept is constructed—Husserl would say ‘constituted’—from sensory data which do not contain it, whence it must originate somewhere else. Put another way, they all involve subjective Gestalts. This is indeed why we chose them, because, as claimed by Jancke et al. [30], these subjective global structures ‘reveal fundamental principles of cortical processing’, the kind of principles that interest us here.

The origins of visual perceptual geometry can be found in the functional architecture which implements an immanent geometry, and it is the latter that provides the focus of neurogeometry. So the time has come to get down to business, by presenting some neurophysiological data for the receptive profiles and receptive fields of the visual neurons.