Abstract
Animals navigate to specific destinations for survival and reproduction. Notable examples include birds, fishes, and insects that are driven by their inherited motivation and acquired memory to migrate thousands of kilometers. The navigational abilities of these animals depend on their small and imprecise sensory organs and brains. Thus, understanding the mechanisms underlying animal navigation may lead to the development of novel tools and algorithms that can be used for more effective human-computer interactions in self-driving cars, autonomous robots and/or human navigation. How are such navigational abilities implemented in the animal brain? Neurons (i.e., nerve cells) that respond to external signals related to the animal’s direction and/or travel distance have been found in insects, and neurons that encode the animal’s place, direction, or speed have been identified in rats and mice. Although the research findings accumulated to date are not sufficient for a complete understanding of the neural mechanisms underlying navigation in the animal brain, they do provide key insights. In this review, we discuss the importance of neurobiological studies of navigation for engineering and computer science researchers and briefly summarize the current knowledge of the neural bases of navigation in model animals, including insects, rodents, and worms. In addition, we describe how modern engineering and computer technologies, such as virtual reality and machine learning, can help advance navigation research in animals.
1 Neurobiological Research of Animal Navigation: Why It Matters
Most, if not all, animals navigate to their destinations over various distances to forage for food, escape from their enemies, and/or find mating partners by using their inherited motivation and acquired memory. To accomplish this, animals utilize multiple types of information and a variety of strategies. When they navigate short distances, the spatial position of their goal can be accurately located if they are able to precisely recognize it using binocular vision or binaural hearing. However, when animals navigate using olfaction, the precise localization of the goal becomes more difficult, even for short distances, because odor does not produce a well-shaped gradient in space but rather diffuses nonuniformly as plumes. The difficulties in olfactory navigation can be easily understood if you close your eyes and try to reach for an odor source. Nevertheless, certain animals efficiently reach odor sources by using specialized strategies, such as zig-zag turns [1]. For long-range navigation, some animals, such as birds, fishes, and insects, can navigate to distant destinations, even if they are hundreds or thousands of kilometers away. Although the navigation goals cannot be directly recognized in these cases, these animals utilize global information, such as sun positioning and/or the geomagnetic field, when they navigate [2, 3]. It should be noted that, because global cues change according to time and season, animals need to correctly recognize their current temporal situation and compensate for errors in the relationship between the global information and their own positions.
These amazing navigational abilities of animals rival the requirements of very precise modern technologies. To achieve a localization precision of 10 meters in a global positioning system (GPS), latitude and longitude must be measured with an accuracy of four digits after the decimal point. However, neurobiological investigations of animal navigation have revealed that neurons in sensory organs and the central nervous system code positional information not necessarily accurately but rather in a variable and stochastic manner, unlike electronic devices. This suggests that the principles of circuit operation and/or the information processing algorithms underlying animal navigation must differ from those of current technology. Thus, an understanding of the neurobiological mechanisms underlying animal navigation will contribute to the development of compact, efficient, and inexpensive mobile systems that can monitor and assist the navigation of autonomous agents, such as robots, and of people. A neurobiological understanding of animal navigation might also help address problems in social engineering and ergonomics, such as architectural design and the safe and efficient guidance of people toward or away from public facilities without them getting lost, because navigation behaviors in humans and animals have many similarities.
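The arithmetic behind that GPS figure is easy to check. The sketch below treats one degree of latitude as roughly 111.32 km, a common approximation that ignores the Earth's slight ellipticity and the latitude dependence of longitude:

```python
# Rough check: how much ground distance does one unit of the 4th decimal
# place of a latitude value (in decimal degrees) correspond to?
METERS_PER_DEG_LAT = 111_320.0  # approximate; varies slightly with latitude

def latitude_resolution_m(decimal_places: int) -> float:
    """Ground distance spanned by one unit of the given decimal place."""
    return 10.0 ** (-decimal_places) * METERS_PER_DEG_LAT

res_4 = latitude_resolution_m(4)  # ~11 m: consistent with ~10 m precision
res_3 = latitude_resolution_m(3)  # ~111 m: one digit fewer is too coarse
print(f"4 decimals: {res_4:.1f} m, 3 decimals: {res_3:.1f} m")
```

So four decimal places of a degree is indeed the coarsest representation compatible with roughly 10-meter localization.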
2 Information Used for Navigation
Animals adopt multiple navigational strategies. In particular, they are thought to integrate the sensory information available in the external environment and the internal information that is computed and stored within the animal’s nervous system. Examples of these information types are summarized in Table 1, and further details are described in Sects. 3.1 and 3.2.
3 Exploring the Neural Bases of Navigation in Model Animals
Neurobiological studies of navigation, particularly those investigating how external and internal information is represented and integrated in the form of neural activity, have been extensively performed using representative model species, such as insects (e.g., ants, bees, and locusts), rodents (mostly rats and mice), and worms. Some insect species are elegant examples of how various kinds of environmental information are transformed into the neural activity underlying their amazing navigational capabilities [12]. Another advantage of studying insects is their relatively simple and compact brains. Although tens of thousands of neurons form elaborate networks in the insect brain, it allows for easier experimental manipulation than the brains of higher animal species. Rodents are small mammals whose brains share many similarities with the human brain, and they can be trained to perform a variety of cognitive tasks relevant to human behavior. Studies conducted in highly controllable laboratory environments have revealed subsets of neurons that specifically respond to different aspects of navigation, such as location, direction, and geometric borders [11]. Finally, the nematode Caenorhabditis elegans, a tiny worm, has been studied in a number of laboratories because it has only 302 neurons, which form networks whose connections have been fully elucidated [13]. This neural simplicity, together with additional favorable characteristics, such as the ease of genetic manipulation and of monitoring behavior-associated neural activity, has made these worms an extensively studied model of how neural circuits constructed from genetic blueprints generate animal behavior. In the next section, we discuss the current understanding of the neural mechanisms of navigation revealed through studies of insects, rodents, and worms.
3.1 Insects
Many insect species exhibit inherited navigational behaviors, such as orientation and migration. For example, the monarch butterfly Danaus plexippus and the desert locust Schistocerca gregaria have well-known migratory habits; they travel back and forth every year between distant areas that are often over 1,000 km apart [2]. Like many migratory birds, these insects move to areas suitable for breeding. In the next season, the newly born generation successfully travels back to the place that their parental generation left, even though they have never experienced that place. These observations indicate that their migratory behavior was genetically programmed during evolution. In addition, many insects exhibit learning-dependent navigation. In particular, species that have their own nests, including many social insects, explore the area around the nest in search of food and memorize food locations so that they can visit the same places again [14]. Even more surprisingly, honey bees share information on food locations with colony members using the waggle dance [14]. Despite this sophisticated navigational behavior, the structure of the insect brain is relatively simple compared to that of vertebrates, such as rodents. Thus, elucidating the brain mechanisms underlying navigation in insects may help reveal the essential components of sophisticated animal navigation.
Neural Mechanisms for Detecting External Information
Types of Information.
Even though the spatial resolution of an insect's compound eye is much lower than that of humans (normally below 1/20), many insects navigate mainly based on visual information. Referring to the polarized skylight (Fig. 1Aa) is a well-known method by which insects deduce their orientation [15]. The positions of the sun, moon, and/or stars are another type of global information that helps navigating insects determine their heading direction [12]. In addition, especially in some social insects, local visual information, such as landmarks or panoramic views, is used for memorizing familiar places, such as the nest or a frequently visited feeding site [16]. Among the types of visual information used for navigation described above, polarization vision is the most intensively studied, and perhaps the only well-characterized, example at the neural level. We now briefly review the latest findings on the neural mechanisms underlying insect polarization vision.
Processing the Information Related to a Polarized Skylight.
Polarization vision in insects is mediated by a specialized region in the compound eye called the dorsal rim area (DRA, Fig. 1C). The ommatidia, which are the optical units of the compound eye, in this area are extremely polarization-sensitive because of their structural and physiological properties [17]. Information on the electric field vector (e-vector) of the light waves of the polarized skylight that is detected in the DRA is then delivered to the central complex (CC, Fig. 1C), which is one of the integrative centers in the insect brain. Many types of CC neurons that respond to polarized light stimuli have been identified in various insect families, including Orthoptera (locusts and crickets) and Hymenoptera (bees and ants) [12, 18]. In locusts, e-vector information on polarized light is topographically represented in the protocerebral bridge (PB, Fig. 1C), which is part of the CC, thus suggesting that this area is the highest center of polarization vision in insects [19].
The CC Acts as an Internal Compass.
To utilize the polarized skylight as a global cue for navigation, time compensation is necessary because the polarization pattern in the sky changes with solar elevation. Although the mechanisms underlying this compensation are still unclear, a group of neurons that sends time-compensated polarized light information to the CC by integrating polarized light and chromatic gradient information has been found in locusts [20]. These findings suggest that the CC is not only the highest brain center of polarization vision but also a potential internal compass that monitors the insect's orientation during navigation. Consistently, some CC neurons in Drosophila fruit flies and cockroaches show responses similar to those of the head-direction cells in mammals (see below) [7, 8].
Neural Mechanisms Underlying Self-motion Monitoring
Behaviorally, ants and bees estimate their travel distances by step counting and optic flow (image motion caused by self-motion; Fig. 1Ab), respectively [21, 22]. Many types of neurons in the optic lobe (OL, Fig. 1B and C), the primary visual center of the insect brain, respond to optic flow stimuli, but the downstream processing pathways remained undescribed for a long time. Recently, a group of neurons in the noduli (No, Fig. 1C), one of the input sites to the CC, was found to encode the direction and speed of optic flow stimuli, indicating that they might convey travel distance information to the CC [10]. Because the CC is considered an internal compass, as mentioned above, it likely integrates direction and distance information to compute path integration. To date, however, little is known about how the CC controls navigational behavior. The lateral accessory lobe (LAL), the main output region of the CC, might send steering commands to the thoracic ganglia, the motor centers for the legs and wings (Fig. 1C). Further studies are needed to reveal how the CC stores navigational memories and controls behavior.
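Path integration itself reduces to vector summation of successive heading-and-distance steps. A minimal sketch of that computation follows; it is purely illustrative and is not a model of actual CC circuitry:

```python
import math

def integrate_path(steps):
    """Accumulate (heading_deg, distance) steps into a net displacement.

    Returns (x, y). The 'home vector' that would bring the animal
    straight back to its starting point is simply (-x, -y).
    """
    x = y = 0.0
    for heading_deg, distance in steps:
        x += distance * math.cos(math.radians(heading_deg))
        y += distance * math.sin(math.radians(heading_deg))
    return x, y

# Illustration: an ant walks 3 m east (heading 0 deg), then 4 m north (90 deg).
x, y = integrate_path([(0, 3.0), (90, 4.0)])
home_heading = math.degrees(math.atan2(-y, -x)) % 360  # ~233 deg (southwest)
home_distance = math.hypot(x, y)                       # 5 m straight home
```

The biological question, of course, is not the arithmetic but how direction (from the compass system) and distance (from step counts or optic flow) are represented and combined by neurons.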
3.2 Rodents
Spatial Cell Types and the Information They Encode.
Navigation-related neural activity in rodents is usually recorded in freely moving animals that are performing random foraging or goal-directed navigational tasks in a recording enclosure or maze placed in a laboratory room (Fig. 2A). Decades of studies have discovered several distinct cell types in the hippocampal-entorhinal network that are relevant to navigation (Fig. 2B; [11]). Namely, place cells in the hippocampus are activated (or “fire”) specifically when an animal is in a particular location in the environment [23]. Grid cells are found in the medial entorhinal cortex (MEC), which provides the main cortical input to the hippocampus, and these cells exhibit grid-like, periodic, and hexagonal firing patterns across the environment [24]. Head direction (HD) cells in the presubiculum and MEC fire when an animal faces a particular direction [5, 6]. Border cells in the MEC and boundary cells in the subiculum fire along geometric borders of the local environment, such as walls [25, 26]. Speed cells in the MEC are a very recent addition to these cell types, and these cells change their firing rates linearly with an animal’s running speed [27].
Neural Circuits of Spatial Representation.
The location-specific firing of place cells is shaped by visual and other types of sensory information from environmental cues in addition to self-motion cues, such as motor and vestibular information. When external cues are rotated, many place cells rotate their firing fields accordingly [28]. Moreover, once formed, place cells maintain their firing fields when the lights are off, and they can be formed even in an environment in darkness [29]. The activity of place cells is also influenced by many factors, such as context (e.g., the shape and pattern of the environment), events and objects in the environment, and the internal states of the animal, such as motivation and working memory. A current view hypothesizes that angular and linear self-motion integration (i.e., path integration) plays an important role in determining place cell firing and that errors accumulated from the integration are corrected by the association of the place fields with external cues. How then is the place-specific activity generated? Place cells greatly change their firing in different environments, while grid cells and HD cells maintain their coherent ensemble activity, thus suggesting that these cell types provide an intrinsic metric of space as part of the path-integration system. Since the discovery of grid cells, the place fields of the hippocampal neurons have been assumed to be created by the linear summation of inputs from upstream grid cells with different spatial scales. However, experimental evidence to date does not support this simplistic view [30, 31], which implies that place cells are created through multiple mechanisms, at least one of which is likely to be grid cell-independent. 
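The linear-summation hypothesis can be made concrete with a standard idealization from the modeling literature (not the authors' own model): a grid cell's firing map is approximated by the sum of three cosine gratings oriented 60° apart, and a candidate place input by summing such grids over several spatial scales. The scale values below are arbitrary illustrative choices:

```python
import math

def grid_rate(x, y, scale, phase=(0.0, 0.0)):
    """Idealized grid-cell firing map: three cosine gratings 60 deg apart."""
    k = 4 * math.pi / (math.sqrt(3) * scale)  # wave number for grid spacing 'scale'
    total = 0.0
    for i in range(3):
        theta = math.radians(60 * i - 30)
        kx, ky = k * math.cos(theta), k * math.sin(theta)
        total += math.cos(kx * (x - phase[0]) + ky * (y - phase[1]))
    return total  # peaks of this function form a hexagonal lattice

def summed_input(x, y, scales=(0.3, 0.42, 0.59)):
    """Linear sum over grids of different scales, peaking at a shared phase."""
    return sum(grid_rate(x, y, s) for s in scales)
```

In this toy picture, summing aligned grids of incommensurate scales produces a single dominant peak, i.e., a place-field-like input; the experimental evidence cited above indicates that real place fields are not built this simply.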
Moreover, in young rats, HD cells are already present by postnatal day 14, generally before the eyes open and before significant spatial exploration begins, whereas place cells begin to develop later, around postnatal day 16, and grid cells appear only around day 20 [32, 33]. These findings suggest that the directional system, which develops first in life, might provide inputs that shape the other spatial cell types.
A Possible Role of Place Cells in Navigation.
Place cells in the hippocampus exhibit specific activity when the animal passes through particular locations, thus implying that they represent real-time information regarding the current position of the animal when the animal is moving. However, this raises a question about the mechanism by which these place cells contribute to the animal’s navigation towards their goal. One possible answer may exist in the phenomenon called “awake replay”, in which temporally-compressed reactivations of sequences of place cells that reflect past trajectories occur when the animal is awake but not moving [34]. A recent study has demonstrated that place cell sequence events like those in awake replay not only encode spatial trajectories from the animal’s current location to remembered goals but also predict immediate future behavioral paths [35], thus suggesting a role of place cells in trajectory-finding in goal-directed navigation.
3.3 Worms
A species of nematode, Caenorhabditis elegans (hereafter simply called worms), which is ~1 mm long and possesses only ~1,000 cells, has been used worldwide to study the mechanisms of simple brain functions as well as other biological phenomena. Some of these studies have resulted in three Nobel prizes (2002 and 2006 for Physiology or Medicine and 2008 for Chemistry). These simple worms are widely used as a main subject of neurobiology for the following reasons. (1) Their nervous system, which consists of only 302 neurons, exhibits simple forms of brain function, such as sensory perception and learning and memory [36]. (2) The molecular mechanisms regulating neuronal activity and neurotransmission, which is mediated by small chemical compounds such as glutamate, GABA, dopamine, and serotonin, depend on gene products that are functionally very similar to those of higher animals. (3) Because worms crawl relatively slowly (~0.1 mm/s) on the surface of an agar layer, their behavior can be easily monitored with high precision and little noise. (4) The relationships between behavior and genes, the blueprints of all life activities, can be easily analyzed with a large repertoire of sophisticated genetic techniques. (5) Because their bodies are transparent and exogenous genes can be easily introduced, optical monitoring (“imaging”) and manipulation of neural activity are feasible using genetically engineered gene products, such as calcium indicators and light-driven ion channels and pumps [37, 38].
In addition, a comprehensive platform for quantitative analyses of behavior and neurophysiology during the worm's olfactory navigation has been developed by one of the authors' groups [39]. This platform allows for accurate estimates of the dynamic odor concentration changes that each worm experiences during olfactory navigation and reproduces those changes on a robotic microscope system that automatically tracks and monitors the neuronal activity of freely behaving worms (Fig. 3A). Because high-quality time-series data can be obtained on sensory stimuli, behavior, and the neural activity linking them, it has become possible to accurately describe the neural activity with a mathematical model (see next paragraph). Moreover, these high-quality data have recently proven useful for machine learning analyses of feature extraction (see Sect. 4).
Tanimoto et al. [39] used the robotic microscope system to reveal the following unexpected computational abilities of worm sensory neurons. (1) Increases and decreases in odor concentration are sensed by different sensory neurons. (2) Concentration increases of an unpreferred odor are transformed into neural activity that reflects the time-differential of the odor concentration, which causes an immediate behavioral response of randomly changing the migratory direction (Fig. 3B). (3) In contrast, concentration decreases of the unpreferred odor are transformed into neural activity that reflects the time-integral of the odor concentration changes, which causes a delayed behavioral response: switching from random searching to straight migration in that direction. Interestingly, the temporal integration of sensory information that causes delayed behavioral changes is one of the critical features of decision-making in monkeys and humans [59]. Thus, our results indicate that worms can make decisions based on neural mechanisms similar to those in humans. We have identified the genes responsible for this decision-making in worms (Fig. 3C) [39]; their human counterparts might also be involved in our decision-making.
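The two computations contrasted above can be sketched abstractly. The code below illustrates time-differentiation versus leaky time-integration of a concentration trace; it is a generic illustration with arbitrary parameters, not a fit to the neurons reported in [39]:

```python
def differentiator(trace, dt):
    """Response proportional to the time-derivative of odor concentration."""
    return [(c1 - c0) / dt for c0, c1 in zip(trace, trace[1:])]

def leaky_integrator(trace, dt, tau=2.0):
    """Response that accumulates concentration changes, decaying with time
    constant tau: a crude stand-in for temporal integration."""
    out, state = [], 0.0
    for c0, c1 in zip(trace, trace[1:]):
        state += dt * ((c1 - c0) / dt - state / tau)
        out.append(state)
    return out

# A step decrease in odor concentration:
dt = 0.1
trace = [1.0] * 20 + [0.5] * 40
d = differentiator(trace, dt)    # brief transient only at the step
g = leaky_integrator(trace, dt)  # persistent, slowly decaying signal
```

The differentiator supports an immediate, transient response, whereas the integrator's output outlasts the stimulus change, the kind of signal that could drive a delayed switch in behavioral state.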
4 Engineering and Computational Methods Used in Animal Navigation Research
Novel engineering and computational approaches have been adopted in neurobiological research of animal navigation to fulfill the requirement of highly accurate manipulations and measurements of sensory input, behavioral output, and neural activity. Here, we introduce two example technologies: navigation tasks using virtual reality (VR) and machine learning analyses of navigation behavior.
4.1 Navigation Tasks Using VR
VR refers to a computer-simulated and immersive environment that can present a variety of sensory stimuli in an interactive manner and thereby provide animal and human subjects with simulated experiences of navigation while their brain activities are recorded with various methods, such as functional magnetic resonance imaging (fMRI) and electrophysiology. Virtual navigation tasks have been used successfully to record place cell and grid cell activity in humans [40, 41], which demonstrates the usefulness of VR in bridging the gap between findings obtained from animals and those in humans.
In recent years, VR has been increasingly used in navigation research in insects and rodents since its first successful applications in these animals [42, 43]. A recent behavioral study has demonstrated that goal-directed navigation in VR in mice requires activity of the hippocampus [44], which is also required in real-world situations. VR enables measurements of navigation-related neural activity in these animals with high-resolution electrophysiological and optical recording techniques, such as whole-cell patch-clamp recording or two-photon calcium imaging, which require extremely stable fixation of the subject’s head under an electrode or microscope [45]. Another advantage of the use of VR for navigation tasks is the experimental flexibility and controllability of stimulus presentations, which was exemplified in a study in which visual information and the subject’s movement were put into conflict during the recording of hippocampal place cells in mice [46]. These advantages of VR make it an attractive behavioral paradigm for use in animal navigation research, and it will help explore yet undiscovered cellular and circuit mechanisms and neural representation schemes underlying navigation in the future.
4.2 Discovering Features of Navigation Behavior Using Machine Learning
Behavior is the final output of massive amounts of neural activity in the brain. However, descriptions of this behavior have been seriously limited compared to those of neural activities. Currently, the dynamic activities of thousands of neurons can be simultaneously measured by using the optical monitoring methods described above. Thus, neural activity can be expressed as the time-series vector data of thousands of dimensions. In contrast, behavior can still only be described with simple measures, such as speed, direction, and the probability of reaching the goal. Even worse, most video records of animal behavior are simply stored in laboratories without detailed analyses. Such a large asymmetry in the richness of neural and behavioral data is being recognized as one of the recent major problems in neuroscience [47,48,49].
The poor descriptions of behavior are due to the difficulty of analyzing measured behavioral data. Recent developments in GPS and small cameras allow us to easily record the positions and postures of animals with high precision over extended periods of time. Most, if not all, behavioral features of recorded animals, such as velocity, acceleration, heading direction, body rotation, and posture, must be dynamically affected by environmental information. But on which behavioral changes should analyses focus? Should behavioral changes be analyzed in a temporal window of milliseconds, seconds, or minutes? Should we consider temporal delays between sensory stimuli and behavioral responses, and if so, how long a delay? Moreover, which sensory stimuli should we pay attention to? These questions show that there are too many factors to consider, and it has therefore been extremely difficult to work out the relationships between sensory input and behavioral output. The same is true for finding relationships between neural activity and behavior.
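For the delay question in particular, one standard starting point is the cross-correlation between a stimulus trace and a behavioral trace. The sketch below uses synthetic data in which an 8-sample lag is deliberately built in (the lag, traces, and names are illustrative, not taken from any dataset):

```python
import random

def cross_correlation(stimulus, behavior, max_lag):
    """Covariance of behavior against stimulus shifted by each lag (samples)."""
    n = len(stimulus)
    def cov_at(lag):
        pairs = [(stimulus[t], behavior[t + lag]) for t in range(n - lag)]
        m = len(pairs)
        mx = sum(s for s, _ in pairs) / m
        my = sum(b for _, b in pairs) / m
        return sum((s - mx) * (b - my) for s, b in pairs) / m
    return {lag: cov_at(lag) for lag in range(max_lag + 1)}

# Synthetic data: behavior copies the stimulus with a known 8-sample delay.
rng = random.Random(0)
stim = [rng.random() for _ in range(200)]
behav = [0.0] * 8 + stim[:-8]
cc = cross_correlation(stim, behav, max_lag=20)
best_lag = max(cc, key=cc.get)  # should recover the built-in 8-sample delay
```

In real data the coupling is noisier and state-dependent, which is one reason purely linear tools like this are only a first pass before the machine learning approaches discussed next.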
One way to solve this problem is to use machine learning. The first step in analyzing raw animal behavior data is to classify the behaviors into several distinct behavioral states. It is easy to imagine that animal behavior can be classified into states such as sleeping, feeding, chasing prey, and fighting. However, these classifications have traditionally been performed manually by researchers watching videos for long periods, in which ambiguous boundaries between states have frequently been problematic. Machine learning techniques have recently been used successfully for behavioral classification based on combinations of characteristic patterns of basic behavioral features, such as velocity, acceleration, and heading direction [50,51,52]. Moreover, behavioral patterns triggered by the artificial activation of limited numbers of neurons have been classified by unsupervised learning to estimate functional connectivity in the brain in an unbiased way [53, 54].
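As a toy illustration of such unsupervised classification, the sketch below clusters a synthetic speed trace into two behavioral states with a tiny hand-rolled k-means; the single feature, synthetic data, and method are stand-ins, not those used in the studies cited above:

```python
import random

def kmeans_1d(points, k, iters=50, seed=1):
    """Plain k-means on 1-D feature values (e.g., instantaneous speed)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Move each center to its cluster mean (keep it if the cluster emptied).
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Synthetic speeds: a slow "dwelling" state and a fast "running" state.
rng = random.Random(42)
speeds = [rng.gauss(0.5, 0.1) for _ in range(100)] + \
         [rng.gauss(5.0, 0.5) for _ in range(100)]
slow, fast = kmeans_1d(speeds, k=2)
```

Real pipelines cluster many features jointly (velocity, acceleration, posture, etc.) and often over short time windows, but the principle of letting the data define state boundaries, rather than a researcher watching video, is the same.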
However, machine learning techniques have not been used to understand the dynamic brain function that links sensory information and behavioral responses, because it has been difficult to accurately measure the sensory information that animals receive during behavior. Even if it can be measured, no methods have been established to determine which features of the sensory information affect the behavior.
Yamazaki et al. [55] have revealed experience-dependent changes in the olfactory navigation of worms by analyzing the relationships between changes in odor concentrations and behavior. That group previously developed a method for measuring odor concentrations at specific spatiotemporal points in the small arena used for monitoring worm olfactory navigation, and these results led to the construction of a dynamic model of odor gradient involving evaporation and diffusion in the arena [39]. The model allowed for estimations of the changes in the odor concentrations that each worm experienced during navigation with an accuracy of nM/s. The analyses indicated that odor learning resulted in the worms ignoring small changes in odor concentrations for efficient olfactory navigation. Interestingly, changes in neural activity that were consistent with the behavioral changes were revealed by the robotic microscope system [55].
5 Future Prospects
Understanding how the brain works is one of the big challenges in modern science, as demonstrated by the national brain science projects being conducted in the US, EU, and several countries in Asia [56, 57]. Navigation is one of the prototypical brain functions that is used for understanding the dynamic nature of information processing in the brain because sensory inputs and behavioral outputs can be quantitatively measured and the goal of the behavior is rather straightforward. As we described above, the use of model animals has resulted in significant contributions to neurobiological studies of navigation. However, the findings that have been accumulated from these animals to date are still limited and insufficient to allow for an understanding of dynamic information processing in navigation. To conclude this paper, we suggest the following key issues to be examined in future research.
(1) The characteristics of sensory information that play the dominant role in navigation need to be elucidated at a finer resolution. In the real world, navigation is a multisensory process that involves vision, audition, olfaction, and self-motion, and the relative importance of each modality may change over a short time span. For example, in olfactory navigation, animals continuously and simultaneously sense various odorants, each of which changes its concentration on a different time scale. The situation is much more complex for visual and auditory stimuli because of the richness of the information they contain. Thus, a method needs to be developed to detect the changes in sensory information that are significantly correlated with changes in behavior, taking into account the highly dynamic nature of some environmental stimuli.
(2) Determining which aspect of behavior is changed by the sensory input is important. As described above, deciding which behavioral feature to focus on in the analysis is difficult. One possible way to solve this problem is to extract the important behavioral features with machine learning and then try to find causal relationships between the sensory inputs and behavioral outputs. However, sensory information and behavioral responses are often in a closed-loop relationship, in which changes in sensory information cause behavioral changes, which in turn update the sensory information that the animal receives. To reveal the causal relationships between them, it is important to establish a working hypothesis from the observed stimulus-behavior relationships and test it in an open-loop configuration. VR setups with compensated feedback may also be useful in this direction of research.
(3) Similar problems exist for large-scale neural activity data (i.e., hundreds or thousands of time series of neural activity). Researchers often have trouble interpreting large high-dimensional datasets without specific hypotheses. However, if forming a hypothesis about the relationship between sensory information and behavior becomes easier with the aid of analytical tools such as machine learning, the identification of neural activities that represent higher-order information, such as abstract sensory information, motor planning, Bayesian-like inference, and decision-making, will become much easier [58, 59].
(4) Close collaborations between neurobiologists and data scientists are necessary. Navigation research is by nature a multidisciplinary science. In order to discover previously unknown principles by using biologically relevant data analyses, researchers in these two fields need to communicate intensively by speaking a common language. Although it is difficult, and not necessary, for a researcher to be an expert in both branches of science, a mutual understanding between the two fields is key for fruitful collaborations.
The rapid advance of state-of-the-art technologies, as reviewed partly in this article, will help us tackle these problems and unquestionably lead to a better understanding of the neural mechanisms underlying navigation in the future.
References
Kanzaki, R., Sugi, N., Shibuya, T.: Self-generated zigzag turning of Bombyx mori males during pheromone-mediated upwind walking. Zool. Sci. 9, 515–527 (1992)
Brower, L.P.: Monarch butterfly orientation: missing pieces of a magnificent puzzle. J. Exp. Biol. 199, 93–103 (1996)
Wiltschko, W., Wiltschko, R.: Magnetic orientation and magnetoreception in birds and other animals. J. Comp. Physiol. A 191, 675–693 (2005)
Goto, Y., Yoda, K., Sato, K.: Asymmetry hidden in birds’ tracks reveals wind, heading, and orientation ability over the ocean. Sci. Adv. 3, e1700097 (2017)
Taube, J.S., Muller, R.U., Ranck, J.B.: Head-direction cells recorded from the postsubiculum in freely moving rats. I. Description and quantitative analysis. J. Neurosci. 10, 420–435 (1990)
Taube, J.S., Muller, R.U., Ranck, J.B.: Head-direction cells recorded from the postsubiculum in freely moving rats. II. Effects of environmental manipulations. J. Neurosci. 10, 436–447 (1990)
Seelig, J.D., Jayaraman, V.: Neural dynamics for landmark orientation and angular path integration. Nature 521, 186–191 (2015)
Varga, A.G., Ritzmann, R.E.: Cellular basis of head direction and contextual cues in the insect brain. Curr. Biol. 26, 1816–1828 (2016)
Wehner, R., Wehner, S.: Insect navigation: the use of maps or Ariadne’s thread? Ethol. Ecol. Evol. 2, 27–48 (1990)
Stone, T., Webb, B., Adden, A., Weddig, N.B., Honkanen, A., Templin, R., Wcislo, W., Scimeca, L., Warrant, E., Heinze, S.: An anatomically constrained model for path integration in the bee brain. Curr. Biol. 27, 3069–3085 (2017)
Moser, E.I., Moser, M.-B., McNaughton, B.L.: Spatial representation in the hippocampal formation: a history. Nat. Neurosci. 20, 1448–1464 (2017)
Heinze, S.: Unraveling the neural basis of insect navigation. Curr. Opin. Insect Sci. 24, 58–67 (2017)
White, J.G., Southgate, E., Thomson, J.N., Brenner, S.: The structure of the nervous system of the nematode Caenorhabditis elegans. Phil. Trans. R. Soc. B 314, 1–340 (1986)
von Frisch, K.: The Dance Language and Orientation of Bees. Harvard University Press, Cambridge (1993)
Wehner, R., Labhart, T.: Polarisation vision. In: Warrant, E., Nilsson, D.-E. (eds.) Invertebrate Vision, pp. 291–348. Cambridge University Press, Cambridge (2006)
Collett, T.S., Collett, M.: Memory use in insect visual navigation. Nat. Rev. Neurosci. 3, 542–552 (2002)
Labhart, T., Meyer, E.P.: Detectors for polarized skylight in insects: a survey of ommatidial specializations in the dorsal rim area of the compound eye. Microsc. Res. Tech. 47, 368–379 (1999)
Sakura, M., Lambrinos, D., Labhart, T.: Polarized skylight navigation in insects: model and electrophysiology of e-vector coding by neurons in the central complex. J. Neurophysiol. 99, 667–682 (2008)
Heinze, S., Homberg, U.: Maplike representation of celestial e-vector orientations in the brain of an insect. Science 315, 995–997 (2007)
Pfeiffer, K., Homberg, U.: Coding of azimuthal directions via time-compensated combination of celestial compass cues. Curr. Biol. 17, 960–965 (2007)
Srinivasan, M.V., Zhang, S., Altwein, M., Tautz, J.: Honeybee navigation: nature and calibration of the “Odometer”. Science 287, 851–853 (2000)
Wittlinger, M., Wehner, R., Wolf, H.: The ant odometer: stepping on stilts and stumps. Science 312, 1965–1967 (2006)
O’Keefe, J., Dostrovsky, J.: The hippocampus as a spatial map. Preliminary evidence from unit activity in the freely-moving rat. Brain Res. 34, 171–175 (1971)
Hafting, T., Fyhn, M., Molden, S., Moser, M.-B., Moser, E.I.: Microstructure of a spatial map in the entorhinal cortex. Nature 436, 801–806 (2005)
Lever, C., Burton, S., Jeewajee, A., O’Keefe, J., Burgess, N.: Boundary vector cells in the subiculum of the hippocampal formation. J. Neurosci. 29, 9771–9777 (2009)
Solstad, T., Boccara, C.N., Kropff, E., Moser, M.-B., Moser, E.I.: Representation of geometric borders in the entorhinal cortex. Science 322, 1865–1868 (2008)
Kropff, E., Carmichael, J.E., Moser, M.-B., Moser, E.I.: Speed cells in the medial entorhinal cortex. Nature 523, 419–424 (2015)
O’Keefe, J., Conway, D.H.: Hippocampal place units in the freely moving rat: why they fire where they fire. Exp. Brain Res. 31, 573–590 (1978)
Quirk, G.J., Muller, R.U., Kubie, J.L.: The firing of hippocampal place cells in the dark depends on the rat’s recent experience. J. Neurosci. 10, 2008–2017 (1990)
Brandon, M.P., Koenig, J., Leutgeb, J.K., Leutgeb, S.: New and distinct hippocampal place codes are generated in a new environment during septal inactivation. Neuron 82, 789–796 (2014)
Koenig, J., Linder, A.N., Leutgeb, J.K., Leutgeb, S.: The spatial periodicity of grid cells is not sustained during reduced theta oscillations. Science 332, 592–595 (2011)
Langston, R.F., Ainge, J.A., Couey, J.J., Canto, C.B., Bjerknes, T.L., Witter, M.P., Moser, E.I., Moser, M.-B.: Development of the spatial representation system in the rat. Science 328, 1576–1580 (2010)
Wills, T.J., Cacucci, F., Burgess, N., O’Keefe, J.: Development of the hippocampal cognitive map in preweanling rats. Science 328, 1573–1576 (2010)
Foster, D.J.: Replay comes of age. Annu. Rev. Neurosci. 40, 581–602 (2017)
Pfeiffer, B.E., Foster, D.J.: Hippocampal place-cell sequences depict future paths to remembered goals. Nature 497, 74–79 (2013)
De Bono, M., Maricq, A.V.: Neuronal substrates of complex behaviors in C. elegans. Annu. Rev. Neurosci. 28, 451–501 (2005)
Tian, L., Akerboom, J., Schreiter, E.R., Looger, L.L.: Neural activity imaging with genetically encoded calcium indicators. Prog. Brain Res. 196, 79–94 (2012)
Tye, K.M., Deisseroth, K.: Optogenetic investigation of neural circuits underlying brain disease in animal models. Nat. Rev. Neurosci. 13, 251–266 (2012)
Tanimoto, Y., Yamazoe-Umemoto, A., Fujita, K., Kawazoe, Y., Miyanishi, Y., Yamazaki, S.J., Fei, X., Busch, K.E., Gengyo-Ando, K., Nakai, J., Iino, Y., Iwasaki, Y., Hashimoto, K., Kimura, K.D.: Calcium dynamics regulating the timing of decision-making in C. elegans. eLife 6, 13819 (2017)
Doeller, C.F., Barry, C., Burgess, N.: Evidence for grid cells in a human memory network. Nature 463, 657–661 (2010)
Ekstrom, A.D., Kahana, M.J., Caplan, J.B., Fields, T.A., Isham, E.A., Newman, E.L., Fried, I.: Cellular networks underlying human spatial navigation. Nature 425, 184–188 (2003)
Fry, S.N., Rohrseitz, N., Straw, A.D., Dickinson, M.H.: TrackFly: virtual reality for a behavioral system analysis in free-flying fruit flies. J. Neurosci. Methods 171, 110–117 (2008)
Hölscher, C., Schnee, A., Dahmen, H., Setia, L., Mallot, H.A.: Rats are able to navigate in virtual environments. J. Exp. Biol. 208, 561–569 (2005)
Sato, M., Kawano, M., Mizuta, K., Islam, T., Lee, M.G., Hayashi, Y.: Hippocampus-dependent goal localization by head-fixed mice in virtual reality. eNeuro 4, ENEURO.0369-16.2017 (2017)
Dombeck, D.A., Reiser, M.B.: Real neuroscience in virtual worlds. Curr. Opin. Neurobiol. 22, 3–10 (2012)
Chen, G., King, J.A., Burgess, N., O’Keefe, J.: How vision and movement combine in the hippocampal place code. PNAS 110, 378–383 (2013)
Anderson, D.J., Perona, P.: Toward a science of computational ethology. Neuron 84, 18–31 (2014)
Gomez-Marin, A., Paton, J.J., Kampff, A.R., Costa, R.M., Mainen, Z.F.: Big behavioral data: psychology, ethology and the foundations of neuroscience. Nat. Neurosci. 17, 1455–1462 (2014)
Krakauer, J.W., Ghazanfar, A.A., Gomez-Marin, A., MacIver, M.A., Poeppel, D.: Neuroscience needs behavior: correcting a reductionist bias. Neuron 93, 480–490 (2017)
Baek, J.-H., Cosman, P., Feng, Z., Silver, J., Schafer, W.R.: Using machine vision to analyze and classify Caenorhabditis elegans behavioral phenotypes quantitatively. J. Neurosci. Methods 118, 9–21 (2002)
Branson, K., Robie, A.A., Bender, J., Perona, P., Dickinson, M.H.: High-throughput ethomics in large groups of Drosophila. Nat. Methods 6, 451–457 (2009)
Dankert, H., Wang, L., Hoopfer, E.D., Anderson, D.J., Perona, P.: Automated monitoring and analysis of social behavior in Drosophila. Nat. Methods 6, 297–303 (2009)
Robie, A.A., Hirokawa, J., Edwards, A.W., Umayam, L.A., Lee, A., Phillips, M.L., Card, G.M., Korff, W., Rubin, G.M., Simpson, J.H., Reiser, M.B., Branson, K.: Mapping the neural substrates of behavior. Cell 170, 393–406 (2017)
Vogelstein, J.T., Park, Y., Ohyama, T., Kerr, R.A., Truman, J.W., Priebe, C.E., Zlatic, M.: Discovery of brainwide neural-behavioral maps via multiscale unsupervised structure learning. Science 344, 386–392 (2014)
Yamazaki, S.J., Ikejiri, Y., Hiramatsu, F., Fujita, K., Tanimoto, Y., Yamazoe-Umemoto, A., Yamada, Y., Hashimoto, K., Hiryu, S., Maekawa, T., Kimura, K.D.: Experience-dependent modulation of behavioral features in sensory navigation of nematodes and bats revealed by machine learning. bioRxiv, 198879 (2017)
Brose, K.: Global neuroscience. Neuron 92, 557–558 (2016)
Yuste, R., Bargmann, C.: Toward a global BRAIN initiative. Cell 168, 956–959 (2017)
Funamizu, A., Kuhn, B., Doya, K.: Neural substrate of dynamic Bayesian inference in the cerebral cortex. Nat. Neurosci. 19, 1682–1689 (2016)
Gold, J.I., Shadlen, M.N.: The neural basis of decision making. Annu. Rev. Neurosci. 30, 535–574 (2007)
Acknowledgments
This work was supported by KAKENHI JP 16H06545 (K.D.K), 17H05985 (M. Sato), and 17H05975 (M. Sakura).
© 2018 Springer International Publishing AG, part of Springer Nature
Kimura, K.D., Sato, M., Sakura, M. (2018). Neural Mechanisms of Animal Navigation. In: Streitz, N., Konomi, S. (eds) Distributed, Ambient and Pervasive Interactions: Technologies and Contexts. DAPI 2018. Lecture Notes in Computer Science(), vol 10922. Springer, Cham. https://doi.org/10.1007/978-3-319-91131-1_5
Print ISBN: 978-3-319-91130-4
Online ISBN: 978-3-319-91131-1