Abstract
Panoramic views offer information on heading direction and on location to visually navigating animals. This review covers the properties of panoramic views and the information they provide to navigating animals, irrespective of image representation. Heading direction can be retrieved by alignment matching between memorized and currently experienced views, and a gradient descent in image differences can lead back to the location at which a view was memorized (positional image matching). Central place foraging insects, such as ants, bees and wasps, conduct distinctly choreographed learning walks and learning flights upon first leaving their nest that are likely to be designed to systematically collect scene memories tagged with information provided by path integration on the direction of and the distance to the nest. Equally, traveling along routes, ants have been shown to engage in scanning movements, in particular when routes are unfamiliar, again suggesting a systematic process of acquiring and comparing views. The review discusses what we know and do not know about how view memories are represented in the brain of insects, how they are acquired and how they are subsequently used for traveling along routes and for pinpointing places.
Introduction
The ability of animals to find their way around the world underpins many fundamental biological processes, such as central place foraging, pollination and parental care. Navigational competence involves knowing routes and places or, in the case of path integration, the direction and distance of a goal location (Fig. 1a). Vision plays a dominant role at least in local, if not also in global, navigation, because the non-uniform light distribution in the celestial hemisphere provides a compass reference and thus cues to heading direction, while the terrestrial hemisphere provides cues to both heading direction and location. Celestial compass cues can be used over large distances of travel, because they are at relative infinity, but need to be time-compensated due to the rotation of the earth. On a local scale, infinitely distant celestial cues cannot be used to pinpoint locations. In comparison, the compass cues provided by the terrestrial landmark panorama are geostationary but degrade with distance traveled, depending on the distance distribution of visual features in the scene. However, places in the natural world are uniquely defined by the view of the landmark panorama at these locations.
Celestial compass cues support path integration and the control of heading direction in migration (Heinze and Reppert 2011) and in straight-line navigation (el Jundi et al. 2016; Dacke et al. 2019), but in certain situations are supplemented or calibrated by reference to the magnetic field (Fleischmann et al. 2018; Dreyer et al. 2018). Equally, many insects pinpointing goals such as their nests, either through path integration or through visual guidance, are in addition guided by olfaction (Buehlmann et al. 2020) or by any other cue that uniquely defines the location of goals (Buehlmann et al. 2012a).
Here I review the navigational guidance provided by terrestrial landmark panoramas, how navigation-relevant visual information is acquired by insects and what constraints there may be on how this information is processed, stored and used by the insect brain. Throughout, I will be discussing global image difference functions as the most assumption-free way of describing and quantifying the navigational information provided by views of natural scenes.
The navigational information content of views
A panoramic snapshot taken at a place in the natural environment offers two pieces of navigation-relevant information: it provides directional or compass information and it provides location information by uniquely identifying the place. The snapshot provides compass information because a misalignment with the facing direction of the animal when the snapshot was memorized leads to an increase in the image difference between the original snapshot and the current view. Most importantly, image differences increase smoothly with the size of the misalignment, so that an agent sensitive to image differences can find the original snapshot orientation by gradient descent in image differences (or alignment matching, Collett et al. 2013a). The function of the image difference between a reference image and rotated views over the angle of rotation is called the rotational image difference function (rotIDF, Fig. 1b, Zeil et al. 2003).
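To make alignment matching concrete, here is a minimal sketch that assumes a toy one-dimensional panoramic intensity strip (all names and values are illustrative, not data from the studies cited): the rotIDF is computed by cyclically shifting the current view against the memorized snapshot, and heading is recovered at its minimum.

```python
# Sketch of a rotational image difference function (rotIDF) on a toy
# 1-D panoramic intensity strip. One shift step stands in for one unit
# of rotation; the 'skyline' values are illustrative only.

def rot_idf(snapshot, current):
    """Root-mean-square pixel difference between the memorized snapshot
    and every cyclic rotation of the currently experienced view."""
    n = len(snapshot)
    diffs = []
    for shift in range(n):
        rotated = current[-shift:] + current[:-shift]
        mse = sum((a - b) ** 2 for a, b in zip(snapshot, rotated)) / n
        diffs.append(mse ** 0.5)
    return diffs

def best_alignment(snapshot, current):
    """Heading recovery by alignment matching: the shift minimizing the rotIDF."""
    diffs = rot_idf(snapshot, current)
    return min(range(len(diffs)), key=diffs.__getitem__)

# Toy panorama: a skyline with two bumps of different height.
panorama = [0, 0, 3, 5, 3, 0, 0, 0, 1, 2, 1, 0]
# The agent now faces 4 steps away from the memorized direction.
misaligned = panorama[4:] + panorama[:4]

print(best_alignment(panorama, misaligned))  # → 4, the misalignment
```

A real agent need not evaluate the full function: because the rotIDF increases smoothly with misalignment, sampling a few rotations and descending the gradient finds the same minimum.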
In any given scene, the depth of the rotIDF degrades with time and with distance from the reference location. Temporal degradation is due to the movement of clouds, the movement of wind-driven vegetation and the movement of the sun, which together lead to changes in illumination and to the movement of shadows. An example of a scene on a windy day is shown in Fig. 2, demonstrating that in the dorsal visual field, 45° above the horizon, the depth of the rotIDF decreases within minutes as clouds move across the sky and subsequently remains flat without a prominent minimum in a consistent compass direction. However, the more terrestrial features contribute to the view, by increasing the vertical visual field from the horizon to the zenith (0°–90°) or from below the horizon to the zenith (− 45° to 90°), the more robust the rotIDF becomes against temporal degradation (Fig. 2). At this particular location, with dense vegetation and on a windy and cloudy day, the rotIDF between the scene at time 0 and the scene 20 min later shows a clear minimum in the direction in which the reference image was recorded. The movement of vegetation and that of shadows does not appear to contribute much to the development of image differences over time (Fig. 2). The effects of such scene dynamics are further reduced by the way in which photoreceptor signals are processed by early stages of the visual system, such as the local contrast normalization provided, for instance, by lateral inhibition (Stürzl and Zeil 2007).
While scene dynamics pose potential problems for visual navigation by decreasing the reliability of information such as heading direction over time, the decrease of the depth of the rotIDF with distance from the reference location (Fig. 1b) actually constitutes a gain in information: the minima of the rotIDFs between the reference image and images taken at different distances from the reference location increase smoothly with distance, reflecting changes in views that are exclusively due to translation (Figs. 1b, 3). The fact that in natural environments this translational image difference function (transIDF) is smooth has important consequences for visual navigation (Zeil et al. 2003; Philippides et al. 2011). It means that a place in the natural world is uniquely defined by the view experienced and memorized at this place, so that an agent sensitive to image differences can return to that place by gradient descent on the transIDF (Zeil et al. 2003).
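Positional image matching can be sketched in the same spirit as a gradient descent on the transIDF, here in a toy two-dimensional world. The view renderer, the landmark layout, the number of azimuth bins and the step size are all illustrative assumptions, not part of any published model.

```python
import math

# Minimal sketch of positional image matching: greedy gradient descent on
# the translational image difference function (transIDF). The 'panoramic
# view' is a smooth, toy azimuthal profile of landmark proximity.

LANDMARKS = [(2.0, 8.0), (9.0, 3.0), (-4.0, -5.0), (7.0, -6.0)]
N_BINS = 36

def view(x, y):
    """Render a coarse panoramic 'image' at (x, y): per azimuth bin, each
    landmark contributes its proximity (1/distance) weighted by a smooth
    angular receptive field centered on its bearing."""
    image = []
    for i in range(N_BINS):
        phi = 2 * math.pi * i / N_BINS
        v = 0.0
        for lx, ly in LANDMARKS:
            d = math.hypot(lx - x, ly - y)
            bearing = math.atan2(ly - y, lx - x)
            v += math.exp(4.0 * (math.cos(phi - bearing) - 1.0)) / d
        image.append(v)
    return image

def img_diff(v1, v2):
    """Root-mean-square difference between two panoramic images."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(v1, v2)) / len(v1))

def home(start, goal_view, step=0.25, max_steps=400):
    """Descend the transIDF: repeatedly move to whichever of the eight
    neighboring positions looks most like the memorized goal view."""
    x, y = start
    for _ in range(max_steps):
        best = (img_diff(view(x, y), goal_view), x, y)
        for dx in (-step, 0.0, step):
            for dy in (-step, 0.0, step):
                d = img_diff(view(x + dx, y + dy), goal_view)
                if d < best[0]:
                    best = (d, x + dx, y + dy)
        if (best[1], best[2]) == (x, y):  # local minimum reached
            break
        x, y = best[1], best[2]
    return x, y

nest_view = view(0.0, 0.0)         # snapshot memorized at the nest
end = home((3.0, 2.0), nest_view)  # released a few units away
print(end)                         # ends close to the nest at (0, 0)
```

Because the toy views change smoothly with translation, the image difference declines monotonically toward the reference location, which is exactly the property of natural scenes that makes view-based homing possible.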
Rotational and translational image difference functions can thus be used to travel along routes (Baddeley et al. 2011, 2012) and to pinpoint places (Zeil et al. 2003; Narendra et al. 2013). The move-and-compare or gradient descent process that leads to the recovery of heading direction (rotIDF) and to the return to a goal location (transIDF) is essentially a generalized form of the snapshot matching model put forward by Cartwright and Collett (1983, 1987). The difference is that the original snapshot model required the identification of individual discrete landmark objects in both the memorized and the currently experienced view to minimize mismatch, while there is no need for such image segmentation in complex natural scenes. This is because the same task can be performed by minimizing global image differences, via ‘alignment matching’ (rotIDF, Zeil et al. 2003; Collett et al. 2013a) and ‘positional image matching’ (transIDF, Zeil et al. 2003; Collett et al. 2013a).
The properties of image difference functions in natural scenes
A number of properties of image difference functions are worth noting: the depth and width of both rotIDFs and transIDFs depend on image contrast (Stürzl and Zeil 2007; Zahedi and Zeil 2018), on the spatial frequency spectrum of scenes (Zeil et al. 2003; Stürzl and Zeil 2007) and on contour orientation (Zahedi and Zeil 2018). As a corollary, image differences as experienced by the brain are also affected by the way in which photoreceptor signals are processed and encoded, although the basic information they provide remains unaffected. rotIDF and transIDF information can be recovered not only from pixel-based image comparisons, but also from Fourier-transformed (Stürzl and Mallot 2006) or wavelet-transformed images (Meyer et al. 2020) or by tracking image features (e.g., Fleer and Möller 2017). Image difference functions share many properties with optic flow (Koenderink and van Doorn 1987): all visual features, independent of their distance, contribute to rotIDFs (as in rotational optic flow), while transIDFs depend on the distance distribution of objects (as in translational optic flow; Zeil et al. 2003; Stürzl and Zeil 2007). The range over which they offer navigational guidance (their catchments) depends on the distance distribution of visual objects in the environment and on their apparent size and relative contribution to the panoramic scene. For instance, the more distant features contribute to the scene along a route, the larger the distance over which the rotIDFs have a detectable minimum, and therefore the transIDF a detectable gradient (Fig. 3 left). Conversely, if the scene is dominated by a dense clutter of nearby objects, rotIDFs will degrade quickly with distance from the reference location, meaning that the catchment or depth of the transIDF will be small (Fig. 3 right). It would be important to consider these spatial constraints when simulating view-based navigation in synthetic environments (e.g., Baddeley et al. 2012; Ardin et al. 2016; see Wystrach et al. 2016).
The catchments of panoramic images are in fact volumes that define the three-dimensional space in which a gradient descent leads into the reference location (Fig. 4a, b; Zeil et al. 2003; Murray and Zeil 2017). These catchment volumes become larger the greater the height above ground at which a reference image is taken, mainly due to the increased distance of visual features as height above ground increases (Murray and Zeil 2017). Once heights well above vegetation are reached, these catchment volumes will thus likely become very large (e.g., Gaffin et al. 2015), a property that should receive more attention when considering the navigational information available to insects (and birds) flying over distances of hundreds of meters and beyond.
Global image differences can thus be used to map the visual information that is in principle available in natural navigation environments (Fig. 4c), either by capturing panoramic images directly (Zeil et al. 2003; Narendra et al. 2013; Müller et al. 2018) or by rendering panoramic images in 3D models of natural environments (Stürzl et al. 2015; Murray and Zeil 2017). Drones could be used more systematically in future for mapping navigational information in three dimensions and for reconstructing what navigating insects see (Müller et al. 2018; Polster et al. 2019; Paffhausen et al. 2021), in particular when they are learning the location of their nests (e.g., Stürzl et al. 2015, 2016) and when they explore new environments for the first time (e.g., Degen et al. 2015; Osborne et al. 2013; Woodgate et al. 2016).
Image representation and the information content of views
The properties of image difference functions have a number of interesting consequences for how views may be processed and stored in (insect) brains. First, resolution is not an issue, and representing views in low resolution may actually be advantageous, because the width of IDFs becomes larger, meaning the catchment area becomes wider, when scenes are low-pass filtered (Fig. 5a, b; Stürzl et al. 2015; Wystrach et al. 2016). At least the directional information provided by panoramic views (i.e., the rotIDF) can be recovered even from a 1° wide and low-pass filtered strip of the scene (Fig. 5c) acting like a barcode, as long as the full panorama is covered (Wystrach et al. 2016). Because of this, very coarse and sparsely distributed filters, including sparse motion signal distributions (Zanker and Zeil 2005), can be used to store views and subsequently determine whether a currently experienced view is familiar, based on the minimum of the rotIDF (Fig. 5d, e; Baddeley et al. 2011, 2012). Such coarse and sparsely distributed filters have been found in the Drosophila central complex (Seelig and Jayaraman 2013) and have been shown to be involved in place learning (Ofstad et al. 2011) and in determining heading direction relative to the landmark panorama (Seelig and Jayaraman 2015), because the activity of these filters represents both the rotIDF and the transIDF (Fig. 5f; Dewar et al. 2015).
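The widening effect of low-pass filtering on the rotIDF basin can be illustrated with a toy skyline. The example below uses integer-valued images so that the plateau of the difference function compares exactly; the skyline, the filter width and the basin measure are illustrative assumptions.

```python
# Toy demonstration that low-pass filtering widens the rotIDF basin,
# i.e., the range of misalignments from which alignment matching can
# still find the minimum. Values are illustrative only.

def rot_ssd(image):
    """Sum-of-squared-differences between the image and each of its
    cyclic shifts (integer arithmetic, so plateau values compare exactly)."""
    n = len(image)
    return [sum((image[i] - image[(i - s) % n]) ** 2 for i in range(n))
            for s in range(n)]

def box_filter(image, w=3):
    """Simple cyclic moving-sum low-pass filter (unnormalized, to stay integer)."""
    n = len(image)
    return [sum(image[(i + k) % n] for k in range(-(w // 2), w // 2 + 1))
            for i in range(n)]

def basin_width(diffs):
    """Number of shift angles at which the difference lies below the plateau."""
    return sum(d < max(diffs) for d in diffs)

skyline = [0] * 24
skyline[10:15] = [1, 2, 3, 2, 1]   # one sharp terrestrial feature

sharp = rot_ssd(skyline)
smooth = rot_ssd(box_filter(skyline))

print(basin_width(sharp), basin_width(smooth))  # → 9 13: wider after low-pass
```

The minimum itself is unchanged (the image matches itself at zero shift in both cases); only the catchment of that minimum grows.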
Besides the coarse filters discovered in Drosophila, it is presently not known how views are represented in the brain of insects, in particular of central place foragers, such as ants, bees and wasps. As pointed out earlier, views may be represented in the local or global spatial frequency domain (Stürzl and Mallot 2006; Meyer et al. 2020; Stone et al. 2018; Sun et al. 2020) or by feature extraction (e.g., Fleer and Möller 2017). Activities in any of the diverse filter banks present in the medulla and lobula complex of insects—orientation, wavelet, color or motion filters—can potentially represent the navigational information provided by the visual panorama. Indeed, some evidence demonstrates that ground-nesting wasps (Zeil 1993b), ground-nesting bees (Brünnert et al. 1994), honeybees (Lehrer and Collett 1994) and bumblebees (Dittmar et al. 2010) can make use of motion parallax cues to determine the distance of landmarks close to the nest, suggesting that they generate, memorize and use motion signal distributions (Zeil 1993b; Dittmar 2011). In addition, pre-processing of views is likely to involve spectral or spectral contrast processing, in particular UV-green contrast, which renders views invariant to illumination changes and provides high contrast between the landmark panorama and the sky (Möller 2002; Kollmeier et al. 2007). UV contrast is crucial for ants determining heading direction from the rotIDF (Schultheiss et al. 2016) and makes place recognition in outdoor robotics experiments robust against changes in illumination (Stone et al. 2016).
Acquiring views
At the beginning of their foraging careers, individuals of central place foraging ants, bees and wasps engage in a series of excursions around the nest and in the wider environment that have been called learning flights, learning walks and orientation or exploration flights (reviewed in Zeil et al. 1996; Collett and Zeil 2018; Zeil and Fleischmann 2019). These learning routines are also performed by experienced foragers after the visual appearance of the nest environment has been modified or after a returning insect has been forced to search for the nest by a local disturbance, such as small objects displaced by wind or covers placed over the nest entrance by researchers (e.g., Zeil 1993a).
In the vicinity of the nest, the insects pivot around the nest entrance, systematically experiencing views across the nest location from different compass directions. The goal anchor of these pivoting movements is provided by path integration in the case of ants (Müller and Wehner 2010) and by visual tracking and potentially also path integration in bees and wasps (Zeil 1993b; Samet et al. 2014; Schulte et al. 2019). While bees and wasps keep the nest entrance in the left or the right visual field on alternate loops in opposite directions around the nest, ants walk in one direction at a time and alternate between looking in the direction of the nest and looking in the opposite direction (Fig. 6a). Both learning walks and learning flights have a distinct and very regular spatio-temporal organization, with segments of pure translation approximately perpendicular to the home vector direction alternating with saccadic gaze changes, which in the case of the learning flights of ground-nesting wasps keep the retinal position of the goal at about 45° in the lateral visual field (Zeil 1993a; Zeil et al. 1996; Stürzl et al. 2016), or in the frontal visual field in the case of social wasps and bumblebees (Fig. 6b; Collett 1995; Collett et al. 2013b). In the case of learning walks, segments of translation alternate with head and body rotations, first toward the nest and then through 180° in the opposite direction (Fig. 6a; Jayatilaka et al. 2018; Zeil and Fleischmann 2019). It is interesting to note that learning flights are 5D events, taking place in x, y, z and t and including gaze direction, while learning walks are 4D events, taking place in x, y and t and including gaze direction. Considering that ants operating in 4D have evolved from wasps operating in 5D, it is a challenge to understand how they coped with the transformation from 5D to 4D.
The behaviors of insects performing learning walks and learning flights give the distinct impression of a systematic sampling of the visual scene around the nest, so that early naturalists described them as ‘locality studies’ (Peckham and Peckham 1905) that allow insects to gather information about the nest location relative to the visual scene. Many observations and experiments have shown that these learning routines are about place information: for instance, learning flights and walks are also made after foragers discovered a new food location (e.g., ants: Nicholson et al. 1999, Müller and Wehner 2010; social wasps: Collett and Lehrer 1993, Collett 1995; honeybees: Lehrer and Collett 1994; bumblebees: Robert et al. 2017, 2018); when honeybee hives are transported to new locations, apiarists have to make sure that foragers perform learning flights, because otherwise the bees would end up in the old location (Wolf 1926); the search for the nest of homing ground-nesting wasps can be shifted by shifting the pivoting point of their preceding learning flights with a moveable and visually high-contrast collar around the nest entrance (Zeil 1993a); and homing with the aid of visual landmarks gradually improves with the number and range covered by successive learning walks (Fleischmann et al. 2016; Deeti and Cheng 2021).
What do insects learn during learning flights and learning walks? For naïve foragers, which are exposed to the environmental light field for the first time, learning walks and learning flights serve to calibrate their celestial compass systems (Grob et al. 2019). But in addition, there are clear opportunities to associate the visual appearance of the nest (in the case of flying insects) and the current state of the home vector as computed by the path integration system with the panoramic scene from different compass directions (Müller and Wehner 2010; Graham et al. 2010). During learning walks, ants repeatedly turn toward and away from the nest direction and thus would be able to associate the landmark panorama with the current length and direction of the home vector both when gaze direction and home vector are aligned and when they are not. The choreography of these learning routines generates additional information: the linear motion parallax generated by the translational movements of the insects causes close objects to stand out against a stationary distant background (Boeddeker et al. 2010, 2015; Braun et al. 2012; Lobecke et al. 2018), while the pivoting parallax created either by smooth counterturning or by integrating view changes across path segments will emphasize objects close to the nest at the pivoting center against background features that move at the pivoting speed across the visual field (Zeil 1993a, b; Zeil et al. 1996; Voss and Zeil 1998; Riabinina et al. 2014; Doussot et al. 2021). If views and view changes are combined, learning flights, and probably also learning walks, would potentially allow the insects to build 3D models of their nest environment. This can be shown by feeding reconstructed learning flight views of ground-nesting wasps into camera-based modeling software (Pix4DMapper by Pix4D, Lausanne, Switzerland; Stürzl et al. 2015), which results in a local 3D model of the immediate nest environment (Fig. 6c; see also Baddeley et al. 2009). The distinct vertical height oscillations during learning flights (Zeil 1993a; Lobecke et al. 2018) may play a role in distance scaling not only the odometer of the insects, but also such allocentric representations (Bergantin et al. 2021; Doussot et al. 2021).
It is worth noting that the action-dependent sensory activity patterns generated during learning flights and learning walks automatically select the dominant navigational cues and the mechanism that can be subsequently used for homing: In a featureless environment, such as the salt-flats inhabited by some Cataglyphis ants, any rotation of the insect will lead to changes in the activity patterns of receptors sensitive to the magnetic field and of photoreceptors in the dorsal visual field (including the ocelli), in particular those sensitive to the plane of polarization of light. The pattern of celestial light does not change when the insect translates, so that the rotIDFs of particular views do not deteriorate with distance from their reference locations. As an ant moves further and further away from the nest, looking back in the nest direction from different distances but with similar bearings, she will experience the same depth of the rotIDF as she turns, independent of distance. In such a landscape, there is no transIDF gradient, because the scene does not change with translation. If, however, there is visual structure in the landscape with dense visual features provided by nearby objects, then views will change as the insect translates while moving further and further away from the nest. This goes some way to explain why ants inhabiting landmark-rich environments rely less and less on path integration, depending on the navigational information provided by their habitat (Narendra 2007; Buehlmann et al. 2011; Cheng et al. 2012; Cheung et al. 2012; Narendra et al. 2013).
In landmark-rich habitats, ants travel and memorize idiosyncratic routes and can recapitulate them with stunning accuracy (Wehner et al. 1996; Kohler and Wehner 2005; Mangan and Webb 2012). This ability can be modeled by assuming that the insects learn views along the route and can retrace their steps by monitoring the familiarity (based on the rotIDF) of currently experienced views (e.g., Baddeley et al. 2011, 2012; Ardin et al. 2016). However, the rules governing the acquisition of route views are not known. It would make sense if acquisition were driven by view changes rather than by temporal or spatial sampling, as has been assumed in some simulation studies (e.g., Baddeley et al. 2012; Differt and Stürzl 2021): there is no point in learning if the scene does not change. Modeling indeed suggests that this may be the automatic result of feeding views through an associative network in which only new features lead to changes in the network (Antoine Wystrach, personal communication).
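A familiarity-based route-following scheme of the kind used in these models can be sketched as follows, assuming toy one-dimensional views; the scene values and the two-view ‘route memory’ are illustrative, not data from the cited studies.

```python
# Sketch of familiarity-based route following: views memorized along a
# route are compared against rotations of the current view, and the
# heading whose view is most familiar is chosen. Toy 1-D panoramas.

def rms(a, b):
    """Root-mean-square difference between two views of equal length."""
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

def familiarity(view, memory_bank):
    """Lower is more familiar: best match against all stored route views."""
    return min(rms(view, m) for m in memory_bank)

def scan_for_heading(current, memory_bank):
    """Rotational scanning: return the cyclic shift (heading) at which the
    current view is most familiar."""
    n = len(current)
    shifts = [current[-s:] + current[:-s] for s in range(n)]
    return min(range(n), key=lambda s: familiarity(shifts[s], memory_bank))

# Views stored while walking the route, all facing the direction of travel.
route_views = [
    [0, 0, 3, 5, 3, 0, 0, 0, 1, 2, 1, 0],
    [0, 0, 2, 6, 2, 0, 0, 0, 1, 3, 1, 0],
]
# Later, the ant faces 3 steps off the route direction at the first spot.
off_route = route_views[0][3:] + route_views[0][:3]

print(scan_for_heading(off_route, route_views))  # → 3, back on course
```

Note that the memory bank never stores where or when a view was acquired; direction along the route is recovered purely from which rotation of the current scene looks familiar, which is what makes physical scanning (or a rotation-invariant encoding) necessary.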
Using views for navigation
If insects memorize views during learning flights, learning walks, exploration flights or when traveling along new routes, they would be expected to use them on their return along routes or when pinpointing goals by some kind of move-and-compare, or gradient descent, strategy, as was originally suggested by Cartwright and Collett (1983, 1987) and shown to work in principle with panoramic images in complex natural scenes by Zeil et al. (2003). These alignment and position matching strategies (Collett et al. 2013a) will differ depending on the degrees of freedom of movement available to animals (Dale and Collett 2001): pedestrian ants, for instance, move more or less in a 2D world; they can rotate, but with a constant orientation can only move forward. Flying insects, in contrast, have more translational degrees of freedom, being able to move sideways and up and down without having to change their orientation. Matching strategies will also depend on the format in which insects store images: some current models of route following in ants implement alignment matching through rotational scanning (to detect the rotIDF minima; Baddeley et al. 2011, 2012), while others assume image representations that are rotation invariant and thus do not require physical rotation for familiarity detection (Kodzhabashev and Mangan 2015; Stone et al. 2018; Differt and Stürzl 2021; Sun et al. 2020). Whether or not insects are to some degree immune to rotational misalignments between current and remembered views will also determine whether there is a need for tight control of head roll and pitch during visual navigation (Boeddeker and Hemmi 2010; Ardin et al. 2015; Raderschall et al. 2016; Doussot et al. 2021).
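As an illustration of such a rotation-invariant representation, the amplitude spectrum of a panoramic row is unchanged by a cyclic rotation of the panorama, a property exploited by frequency-domain encodings of views. The sketch below uses toy data and a pure-Python DFT; it is illustrative only.

```python
import cmath

# A rotation of a panorama is a cyclic shift of its pixel row; a shift only
# changes the phases of its discrete Fourier transform, so the amplitude
# spectrum is a rotation-invariant descriptor of the view.

def amplitude_spectrum(row):
    """Magnitudes of the discrete Fourier transform of a panoramic row."""
    n = len(row)
    return [abs(sum(row[i] * cmath.exp(-2j * cmath.pi * k * i / n)
                    for i in range(n)))
            for k in range(n)]

panorama = [0, 1, 4, 2, 0, 0, 3, 5, 3, 1, 0, 0]
rotated = panorama[5:] + panorama[:5]   # same place, different heading

a = amplitude_spectrum(panorama)
b = amplitude_spectrum(rotated)
print(all(abs(x - y) < 1e-9 for x, y in zip(a, b)))  # → True
```

An agent storing views in this form could assess the familiarity of a place without physically rotating, at the cost of discarding the heading information that the phases carry.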
So what do we know about how insects behave when traveling along routes and when pinpointing places? Students of ant navigation have repeatedly noted the scanning movements the insects perform (e.g., Wystrach et al. 2014, 2019, 2020), but their dynamics and scene dependence have rarely been analyzed closely. Wood ants, Formica rufa L., perform large scanning movements when approaching a feeder in an experimental arena with high-contrast patterns associated with the feeder and correct the mismatch between memorized views and the current scene with large saccades (Lent et al. 2010, 2013). Both M. croslandi (Murray et al. 2020; Clement et al. 2022) and the meat ant Iridomyrmex purpureus (Clement et al. 2022) show regular gaze oscillations when running on a trackball outdoors. The amplitudes of their scanning movements change depending on whether they see a familiar or unfamiliar scene, or whether they are placed over the nest (Murray et al. 2020; Clement et al. 2022). It has been suggested that this modulation reflects the interaction between attractive nest-directed view memories and repellent memories of views directed away from the nest, which together determine a ‘directional drive’ (Murray et al. 2020; Le Moël and Wystrach 2020). The scanning behavior of ants during their learning walks suggests that ants may indeed learn such attractive and repellent views (Jayatilaka et al. 2018; Zeil and Fleischmann 2019).
The scanning behaviors are particularly noticeable in the first foraging trips of naïve ants and when a familiar panorama or route is altered (Wystrach et al. 2014; Islam et al. 2021). Scanning is also observed when experienced foragers are released at a site they have never visited, but which is within the catchment areas of views they know: they look around briefly and then in most cases head off in the direction of their nest (Narendra et al. 2013; Zeil et al. 2014). The ability of both Jack jumper ants (M. croslandi) and desert ants (Cataglyphis velox) to home backwards when loaded with heavy prey items critically depends on their frequently turning round to face in the home direction (Schwarz et al. 2017, 2020). Lastly, and interestingly, night-active bull ants descending from trees engage in yaw, roll and pitch scanning when getting their nest bearing from the local panorama (Freas et al. 2018).
All these observations indicate that ants need to scan the scene to obtain navigational guidance from a comparison between memorized views and what they currently see. What happens during scanning does not seem to be straightforward ‘alignment matching’ (Collett et al. 2013a) as has been observed in experimental arenas, where ants correct their heading direction when feeder-associated high-contrast patterns are moved (Lent et al. 2010) or where ants on trackballs realign themselves quickly after large instantaneous rotations of a natural panorama (Fig. 7a, Kócsi et al. 2020). Under natural conditions, the scanning directions in both M. croslandi and M. bagoti are not clearly related to the rotIDF as a measure of familiarity (Zeil et al. 2014; Wystrach et al. 2014). However, scanning amplitudes are modulated by uncertainty: when ants are confronted with an unfamiliar scene, their scanning amplitudes increase, presumably because the scene does not match any memorized view (Murray et al. 2020; Clement et al. 2022). But scanning amplitudes also increase as the ants come close to the nest (Fig. 7b, c) or are tethered on a trackball above the nest location (Murray et al. 2020), presumably because ants close to the nest experience good matches in all heading directions.
The degree to which vision also guides the final approach to the nest in ants remains unclear. Simulations using natural scenes and neurally inspired acquisition, storage and recall models (e.g., Le Moël and Wystrach 2020) get agents to within 1 m of a goal location, but pinpointing it requires centimeter accuracy. To solve this task, at least some species of ants are guided by tactile and olfactory cues (Seidl and Wehner 2006; Buehlmann et al. 2012b, 2020).
As far as flying insects are concerned, much work has gone into understanding the relationship between learning flights and return flights at nests and at feeding sites of bees (Hempel de Ibarra et al. 2009; Dittmar et al. 2010; Dittmar 2011; Braun et al. 2012; Philippides et al. 2013; Robert et al. 2017, 2018) and wasps (Zeil 1993a, b; Collett 1995; Stürzl et al. 2016). During their return to a goal, honeybees (Boeddeker et al. 2010; Dittmar et al. 2010), bumblebees (Collett et al. 2013a, b; Hempel de Ibarra et al. 2009; Robert et al. 2018) and wasps (Zeil 1993b; Collett 1995; Stürzl et al. 2016) tend to face in directions similar to those adopted during their learning flights. Movement patterns are similar between learning and return flights (Zeil 1993b; Philippides et al. 2013) and, when looked at in detail, return flights have a saccadic structure: rapid gaze changes, which may indicate alignment matching, alternate with sideways movements (Zeil 1993b; Boeddeker et al. 2010; Braun et al. 2012). Such sideways movements generate motion parallax information (Mertes et al. 2014) and possibly help to match image motion patterns experienced during learning flights (Zeil 1993b; Dittmar et al. 2010; Dittmar 2011), but can also be seen as a way to probe translational image difference functions (Zeil et al. 2003; Doussot et al. 2020). Landmarks close to the nest are kept in retinal positions similar to those experienced during learning flights (Zeil 1993b; Collett et al. 2013b) and in some cases appear to serve as beacons: social wasps, for instance, head toward a feeder-defining landmark and subsequently turn so that the retinal position of the landmark is the same as that experienced during their learning flights (Collett 1995). Ground-nesting wasps, as they pivot around the nest entrance during learning flights, keep the nest entrance in the left or right visual field, depending on pivoting direction.
Upon returning to the nest, they move left or right when they encounter learning flight views associated with the nest-right or nest-left condition (Fig. 7d, Stürzl et al. 2016).
The homing task differs between pedestrian ants and flying insects, not only due to different movement constraints, notably the ability of flying insects to translate in directions other than the gaze direction (Dale and Collett 2001), but also in the amount of visual clutter. Ants, with their visual systems close to the ground, have to deal with a lot of visual clutter from objects and vegetation that are easily shifted by wind, rain and big-hoofed animals and that obscure the more distant panorama, and thus offer unreliable features for visual navigation. Flying insects, such as ground-nesting wasps and bees, may be better able to deal with this ground-plane clutter, because they have a ‘bird’s-eye’ view of the scene around their nests and a less obstructed view of the wider landmark panorama. These differences would need to be considered when trying to understand the functional significance of the regular and conspicuous rotational scanning movements made by navigating ants along routes and when pinpointing the nest, and of the rotational and translational scanning observed in flying insects approaching a goal (e.g., Zeil 1993b; Boeddeker et al. 2010; Boeddeker and Hemmi 2010; Collett et al. 2013b).
Outlook
We have grown used to studying panoramic images, and the information they provide, in un-warped rectangular form, because this is computationally convenient. To get an impression of what insect memory centers are actually confronted with, we would instead need to consider scenes projected onto the unit sphere, sampled at the resolution and layout of the eyes of the animals we are concerned with, and filtered through their spectral and polarization sensitivities and through their orientation and motion filters. While we now have new ways of mapping the sampling array of compound eyes (Rigosi et al. 2021; Bagheri et al. 2020) and methods that can render scenes in the way they would be represented by a compound eye (Stürzl et al. 2010, 2015; Millward et al. 2022), the problem remains that we do not know crucial compound eye parameters for most of the animals we study. Further, we continue to be ignorant about many aspects of insect navigation behavior, in particular beyond the range of our cameras. It would be important to know, for instance, how learning flights turn into exploration flights, and to be able to record flight height, fine-grained flight behavior and gaze directions on these flights. Navigation is an experience-dependent process, and it therefore seems crucial to monitor in detail the foraging careers of individual insects, with a view to identifying the opportunities insects have to gather navigation-relevant information and how this shapes the way in which they subsequently use that information. Lastly, we continue to have little insight into the neural dynamics of freely behaving insects under the complex natural conditions in which they operate. In short, while the recent progress in understanding the behavioral, computational and neural basis of insect navigation is exhilarating, we should not underestimate the extent of our ignorance, but also cherish the opportunities it offers.
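One ingredient of such an insect-eye rendering pipeline, the drastic loss of spatial resolution, can be caricatured by block-averaging a panoramic image down to a coarse grid. This is a minimal sketch only: real compound eyes have non-uniform, roughly hexagonal ommatidial arrays and Gaussian acceptance functions (cf. Stürzl et al. 2010; Millward et al. 2022), which are ignored here, and the function name is illustrative.

```python
import numpy as np

def coarse_sample(panorama, n_rows, n_cols):
    """Block-average a high-resolution panorama (elevation x azimuth)
    down to an n_rows x n_cols grid, as a crude stand-in for the coarse
    sampling of a compound eye. Each output pixel is the mean of one
    rectangular block of input pixels."""
    h, w = panorama.shape
    # trim so the image tiles exactly into n_rows x n_cols blocks
    h2, w2 = h - h % n_rows, w - w % n_cols
    blocks = panorama[:h2, :w2].reshape(
        n_rows, h2 // n_rows, n_cols, w2 // n_cols)
    return blocks.mean(axis=(1, 3))
```

Comparing image difference functions computed on such coarse views with those computed at full resolution gives a first indication of how much navigational information survives low-resolution sampling (cf. Wystrach et al. 2016).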
References
Ardin P, Mangan M, Wystrach A, Webb B (2015) How variation in head pitch could affect image matching algorithms for ant navigation. J Comp Physiol A 201:585–597. https://doi.org/10.1007/s00359-015-1005-8
Ardin P, Peng F, Mangan M, Lagogiannis K, Webb B (2016) Using an insect mushroom body circuit to encode route memory in complex natural environments. PLoS Comput Biol 12:e1004683. https://doi.org/10.1371/journal.pcbi.1004683
Baddeley B, Philippides A, Graham P, Hempel de Ibarra N, Collett TS, Husbands P (2009) What can be learnt from analysing insect orientation flights using probabilistic SLAM? Biol Cybernet 101:169–182. https://doi.org/10.1007/s00422-009-0327-4
Baddeley B, Graham P, Philippides A, Husbands P (2011) Holistic visual encoding of ant-like routes: navigation without waypoints. Adapt Behav 19:3–15. https://doi.org/10.1177/1059712310395410
Baddeley B, Graham P, Husbands P, Philippides A (2012) A model of ant route navigation driven by scene familiarity. PLoS Comput Biol 8:e1002336. https://doi.org/10.1371/journal.pcbi.1002336
Bagheri ZM, Jessop A-L, Kato S, Partridge JC, Shaw J, Ogawa Y, Hemmi JM (2020) A new method for mapping spatial resolution in compound eyes suggests two visual streaks in fiddler crabs. J Exp Biol 223:jeb210195. https://doi.org/10.1242/jeb.210195
Bergantin L, Harbaoui N, Raharijaona T, Ruffier F (2021) Oscillations make a self-scaled model for honeybees’ visual odometer reliable regardless of flight trajectory. J Roy Soc Interface 18:20210567. https://doi.org/10.1098/rsif.2021.0567
Boeddeker N, Hemmi JM (2010) Visual gaze control during peering flight manoeuvres in honeybees. Proc R Soc B 277:1209–1217. https://doi.org/10.1098/rspb.2009.1928
Boeddeker N, Dittmar L, Stürzl W, Egelhaaf M (2010) The fine structure of honeybee head and body yaw movements in a homing task. Proc R Soc B 277:1899–1906. https://doi.org/10.1098/rspb.2009.2326
Boeddeker N, Mertes M, Dittmar L, Egelhaaf M (2015) Bumblebee homing: the fine structure of head turning movements. PLoS ONE 10(9):e0135020. https://doi.org/10.1371/journal.pone.0135020
Braun E, Dittmar L, Boeddeker N, Egelhaaf M (2012) Prototypical components of honeybee homing flight behaviour depend on the visual appearance of objects surrounding the goal. Front Behav Neurosci 6:1. https://doi.org/10.3389/fnbeh.2012.00001
Brünnert U, Kelber A, Zeil J (1994) Ground-nesting bees determine the location of their nest relative to a landmark by other than angular size cues. J Comp Physiol A 175:363–369. https://doi.org/10.1007/BF00192995
Buehlmann C, Cheng K, Wehner R (2011) Vector-based and landmark-guided navigation in desert ants inhabiting landmark-free and landmark-rich environments. J Exp Biol 214:2845–2853. https://doi.org/10.1242/jeb.054601
Buehlmann C, Hansson BS, Knaden M (2012a) Desert ants learn vibration and magnetic landmarks. PLoS ONE 7:e33117. https://doi.org/10.1371/journal.pone.0033117
Buehlmann C, Hansson BS, Knaden M (2012b) Path integration controls nest-plume following in desert ants. Curr Biol 22:645–649. https://doi.org/10.1016/j.cub.2012.02.029
Buehlmann C, Mangan M, Graham P (2020) Multimodal interactions in insect navigation. Anim Cogn 23:1129–1141. https://doi.org/10.1007/s10071-020-01383-2
Cartwright BA, Collett TS (1983) Landmark learning in bees—experiments and models. J Comp Physiol 151:521–543. https://doi.org/10.1007/BF00605469
Cartwright BA, Collett TS (1987) Landmark maps for honeybees. Biol Cybernet 57:85–93. https://doi.org/10.1007/BF00318718
Cheng K, Middleton EJT, Wehner R (2012) Vector-based and landmark-guided navigation in desert ants of the same species inhabiting landmark-free and landmark-rich environments. J Exp Biol 215:3169–3174. https://doi.org/10.1242/jeb.070417
Cheung A, Hiby L, Narendra A (2012) Ant navigation: fractional use of the home vector. PLoS ONE 7(11):e50451. https://doi.org/10.1371/journal.pone.0050451
Clement L, Schwarz S, Wystrach A (2022) An intrinsic oscillator underlies visual navigation in ants. SSRN J. https://doi.org/10.1101/2022.04.22.489150
Collett TS (1995) Making learning easy: the acquisition of visual information during the orientation flights of social wasps. J Comp Physiol A 177:737–747. https://doi.org/10.1007/BF00187632
Collett TS, Lehrer M (1993) Looking and learning: a spatial pattern in the orientation flight of the wasp Vespula vulgaris. Proc R Soc B 252:129–134. https://doi.org/10.1098/rspb.1993.0056
Collett TS, Zeil J (2018) Insect learning flights and walks. Curr Biol 28:R984–R988. https://doi.org/10.1016/j.cub.2018.04.050
Collett M, Chittka L, Collett TS (2013a) Spatial memory in insect navigation. Curr Biol 23:R789–R800. https://doi.org/10.1016/j.cub.2013.07.020
Collett TS, de Ibarra NH, Riabinina O, Philippides A (2013b) Coordinating compass-based and nest-based flight directions during bumblebee learning and return flights. J Exp Biol 216:1105–1113. https://doi.org/10.1242/jeb.081463
Dacke M, Bell ATA, Foster JJ, Baird EJ, Strube-Bloss MF, Byrne MJ, el Jundi B (2019) Multimodal cue integration in the dung beetle compass. Proc Natl Acad Sci USA 116:14248–14253. https://doi.org/10.1073/pnas.1904308116
Dale K, Collett TS (2001) Using artificial evolution and selection to model insect navigation. Curr Biol 11:1305–1316. https://doi.org/10.1016/S0960-9822(01)00418-3
Deeti S, Cheng K (2021) Learning walks in an Australian desert ant, Melophorus bagoti. J Exp Biol 224:jeb242177. https://doi.org/10.1242/jeb.242177
Degen J, Kirbach A, Reiter L, Lehmann K, Norton P, Storms M, Koblofsky M, Winter S, Georgieva PB, Nguyen H, Chamkhi H, Greggers U, Menzel R (2015) Exploratory behaviour of honeybees during orientation flights. Anim Behav 102:45–57. https://doi.org/10.1016/j.anbehav.2014.12.030
Dewar ADM, Wystrach A, Graham P, Philippides A (2015) Navigation-specific neural coding in the visual system of Drosophila. BioSystems 136:120–127. https://doi.org/10.1016/j.biosystems.2015.07.008
Differt D, Stürzl W (2021) A generalized multi-snapshot model of 3D homing and route following. Adapt Behav 29:531–548. https://doi.org/10.1177/1059712320911271
Dittmar L (2011) Static and dynamic snapshots for goal localization in insects? Comm Int Biol 4:17–20. https://doi.org/10.4161/psb.4.1.13763
Dittmar L, Stürzl W, Baird E, Boeddeker N, Egelhaaf M (2010) Goal seeking in honeybees: matching of optic flow snapshots? J Exp Biol 213:2913–2923. https://doi.org/10.1242/jeb.043737
Doussot C, Bertrand OJN, Egelhaaf M (2020) Visually guided homing of bumblebees in ambiguous situations: a behavioural and modelling study. PLoS Comput Biol 16(10):e1008272. https://doi.org/10.1371/journal.pcbi.1008272
Doussot C, Bertrand OJN, Egelhaaf M (2021) The critical role of head movements for spatial representation during bumblebees learning flight. Front Behav Neurosci 14:606590. https://doi.org/10.3389/fnbeh.2020.606590
Dreyer D, Frost B, Mouritsen H, Gunther A, Green K, Whitehouse M, Johnsen S, Heinze S, Warrant E (2018) The earth’s magnetic field and visual landmarks steer migratory flight behavior in the nocturnal Australian bogong moth. Curr Biol 28:2160–2166. https://doi.org/10.1016/j.cub.2018.05.030
el Jundi B, Foster JJ, Khaldy L, Byrne MJ, Dacke M, Baird E (2016) A snapshot-based mechanism for celestial orientation. Curr Biol 26:1456–1462. https://doi.org/10.1016/j.cub.2016.03.030
Fleer D, Möller R (2017) Comparing holistic and feature-based visual methods for estimating the relative pose of mobile robots. Robot Autonom Syst 89:51–74. https://doi.org/10.1016/j.robot.2016.12.001
Fleischmann PN, Christian M, Müller VL, Rössler W, Wehner R (2016) Ontogeny of learning walks and the acquisition of landmark information in desert ants, Cataglyphis fortis. J Exp Biol 219:3137–3145. https://doi.org/10.1242/jeb.140459
Fleischmann PN, Grob R, Müller VL, Wehner R, Rössler W (2018) The geomagnetic field is a compass cue in Cataglyphis ant navigation. Curr Biol 28:1440–1444. https://doi.org/10.1016/j.cub.2018.03.043
Freas CA, Wystrach A, Narendra A, Cheng K (2018) The view from the trees: nocturnal bull ants, Myrmecia midas, use the surrounding panorama while descending from trees. Front Psychol 9:16. https://doi.org/10.3389/fpsyg.2018.00016
Gaffin DD, Dewar A, Graham P, Philippides A (2015) Insect-inspired navigation algorithm for an aerial agent using satellite imagery. PLoS ONE 10(4):e0122077. https://doi.org/10.1371/journal.pone.0122077
Graham P, Philippides A, Baddeley B (2010) Animal cognition: multi-modal interactions in ant learning. Curr Biol 20:R639–R640. https://doi.org/10.1016/j.cub.2010.06.018
Grob R, Fleischmann PN, Rössler W (2019) Learning to navigate—how desert ants calibrate their compass systems. Neuroforum 25(2):109–120. https://doi.org/10.1515/nf-2018-0011
Heinze S, Reppert SM (2011) Sun compass integration of skylight cues in migratory monarch butterflies. Neuron 69:345–358. https://doi.org/10.1016/j.neuron.2010.12.025
Hempel de Ibarra N, Philippides A, Riabinina O, Collett TS (2009) Preferred viewing directions of bumblebees (Bombus terrestris L.) when learning and approaching their nest site. J Exp Biol 212:3193–3204. https://doi.org/10.1242/jeb.029751
Islam M, Deeti S, Kamhi JF, Cheng K (2021) Minding the gap: learning and visual scanning behaviour in nocturnal bull ants. J Exp Biol 224:jeb242245. https://doi.org/10.1242/jeb.242245
Jayatilaka P, Murray T, Narendra A, Zeil J (2018) The choreography of learning walks in the Australian jack jumper ant Myrmecia croslandi. J Exp Biol 221:jeb185306. https://doi.org/10.1242/jeb.185306
Kócsi Z, Murray T, Dahmen HJ, Narendra A, Zeil J (2020) The Antarium: a reconstructed visual reality device for ant navigation research. Front Behav Neurosci 14:599374. https://doi.org/10.3389/fnbeh.2020.599374
Kodzhabashev A, Mangan M (2015) Route following without scanning. In: Wilson SP, Verschure PFMJ, Mura A, Prescott TJ (eds) Biomimetic and biohybrid systems. Springer, Berlin, pp 199–210. https://doi.org/10.1007/978-3-319-22979-9_20
Koenderink JJ, van Doorn AJ (1987) Facts on optic flow. Biol Cybernet 56:247–254. https://doi.org/10.1007/BF00365219
Kohler M, Wehner R (2005) Idiosyncratic route-based memories in desert ants, Melophorus bagoti: how do they interact with path-integration vectors? Neurobiol Learn Mem 83:1–12. https://doi.org/10.1016/j.nlm.2004.05.011
Kollmeier T, Röben F, Schenck W, Möller R (2007) Spectral contrasts for landmark navigation. J Opt Soc Am A 24:1–10. https://doi.org/10.1364/josaa.24.000001
Le Moël F, Wystrach A (2020) Opponent processes in visual memories: a model of attraction and repulsion in navigating insects’ mushroom bodies. PLoS Comput Biol 16(2):e1007631. https://doi.org/10.1371/journal.pcbi.1007631
Lehrer M, Collett TS (1994) Approaching and departing bees learn different cues to the distance of a landmark. J Comp Physiol A 175:171–177. https://doi.org/10.1007/BF00215113
Lent DD, Graham P, Collett TS (2010) Image-matching during ant navigation occurs through saccade-like body turns controlled by learned visual features. Proc Natl Acad Sci USA 107(37):16348–16353. https://doi.org/10.1073/pnas.1006021107
Lent DD, Graham P, Collett TS (2013) Phase-dependent visual control of the zigzag paths of navigating wood ants. Curr Biol 23:2393–2399. https://doi.org/10.1016/j.cub.2013.10.014
Lobecke A, Kern R, Egelhaaf M (2018) Taking a goal-centred dynamic snapshot as a possibility for local homing in initially naïve bumblebees. J Exp Biol 221:jeb168674. https://doi.org/10.1242/jeb.168674
Mangan M, Webb B (2012) Spontaneous formation of multiple routes in individual desert ants (Cataglyphis velox). Behav Ecol 23:944–954. https://doi.org/10.1093/beheco/ars051
Mertes M, Dittmar L, Egelhaaf M, Boeddeker N (2014) Visual motion sensitive neurons in the bumblebee brain convey information about landmarks during a navigational task. Front Behav Neurosci 8:335. https://doi.org/10.3389/fnbeh.2014.00335
Meyer S, Nowotny T, Graham P, Dewar A, Philippides A (2020) Snapshot navigation in the wavelet domain. In: Vouloutsi V, Mura A, Tauber F, Speck T, Prescott TJ, Verschure PFMJ (eds) Lecture notes in artificial intelligence, vol 12413. Springer Nature Switzerland, Cham, pp 245–256. https://doi.org/10.1007/978-3-030-64313-3
Millward B, Maddock S, Mangan M (2022) CompoundRay, an open-source tool for high-speed and high-fidelity rendering of compound eyes. Elife 11:e73893. https://doi.org/10.7554/eLife.73893
Möller R (2002) Insects could exploit UV-green contrast for landmark navigation. J Theor Biol 214:619–631. https://doi.org/10.1006/jtbi.2001.2484
Müller M, Wehner R (2010) Path integration provides a scaffold for landmark learning in desert ants. Curr Biol 20:1368–1371. https://doi.org/10.1016/j.cub.2010.06.035
Müller J, Nawrot M, Menzel R, Landgraf T (2018) A neural network model for familiarity and context learning during honeybee foraging flights. Biol Cybernet 112:113–126. https://doi.org/10.1007/s00422-017-0732-z
Murray T, Zeil J (2017) Quantifying navigational information: the catchment volumes of panoramic snapshots in outdoor scenes. PLoS ONE 12(10):e0187226. https://doi.org/10.1371/journal.pone.0187226
Murray T, Kócsi Z, Dahmen HJ, Narendra A, Le Möel F, Wystrach A, Zeil J (2020) The role of attractive and repellent scene memories in ant homing (Myrmecia croslandi). J Exp Biol 223:jeb210021. https://doi.org/10.1242/jeb.210021
Narendra A (2007) Homing strategies of the Australian desert ant Melophorus bagoti I. Proportional path-integration takes the ant half-way home. J Exp Biol 210:1798–1803. https://doi.org/10.1242/jeb.02768
Narendra A, Gourmaud S, Zeil J (2013) Mapping the navigational knowledge of individually foraging ants Myrmecia croslandi. Proc R Soc B 280:20130683. https://doi.org/10.1098/rspb.2013.0683
Nicholson DJ, Judd SPD, Cartwright BA, Collett TS (1999) Learning walks and landmark guidance in wood ants (Formica rufa). J Exp Biol 202:1831–1838. https://doi.org/10.1242/jeb.202.13.1831
Ofstad TA, Zuker CS, Reiser MB (2011) Visual place learning in Drosophila melanogaster. Nature 474(7350):204–207. https://doi.org/10.1038/nature10131
Osborne JL, Smith A, Clark SJ, Reynolds DR, Barron MC, Lim KS, Reynolds AM (2013) The ontogeny of bumblebee flight trajectories: from naïve explorers to experienced foragers. PLoS ONE 8:e78681. https://doi.org/10.1371/journal.pone.0078681
Paffhausen BH, Petrasch J, Wild B, Meurers T, Schülke T, Polster J, Fuchs I, Drexler H, Kuriatnyk O, Menzel R, Landgraf T (2021) A flying platform to investigate neuronal correlates of navigation in the honey bee (Apis mellifera). Front Behav Neurosci 15:690571. https://doi.org/10.3389/fnbeh.2021.690571
Peckham GW, Peckham EG (1905) Wasps social and solitary. Archibald Constable & Co, Westminster, 311 pp
Philippides A, Baddeley B, Cheng K, Graham P (2011) How might ants use panoramic views for route navigation? J Exp Biol 214:445–451. https://doi.org/10.1242/jeb.046755
Philippides A, Hempel de Ibarra N, Riabinina O, Collett TS (2013) Bumblebee calligraphy: the design and control of flight motifs in the learning and return flights of Bombus terrestris. J Exp Biol 216:1093–1104. https://doi.org/10.1242/jeb.081455
Polster J, Petrasch J, Menzel R, Landgraf T (2019) Reconstructing the visual perception of honey bees in complex 3-D worlds. ArXiv. https://doi.org/10.48550/arXiv.1811.07560
Raderschall CA, Narendra A, Zeil J (2016) Head roll stabilisation in the nocturnal bull ant Myrmecia pyriformis: implications for visual navigation. J Exp Biol 219:1449–1457. https://doi.org/10.1242/jeb.134049
Riabinina O, Hempel de Ibarra N, Philippides A, Collett TS (2014) Head movements and the optic flow generated during the learning flights of bumblebees. J Exp Biol 217:2633–2642. https://doi.org/10.1242/jeb.102897
Rigosi E, Warrant EJ, O’Carroll DC (2021) A new, fluorescence-based method for visualizing the pseudopupil and assessing optical acuity in the dark compound eyes of honeybees and other insects. Sci Rep 11:21267. https://doi.org/10.1038/s41598-021-00407-2
Robert T, Frasnelli E, Collett TS, Hempel de Ibarra N (2017) Male bumblebees perform learning flights on leaving a flower but not when leaving their nest. J Exp Biol 220:930–937. https://doi.org/10.1242/jeb.151126
Robert T, Frasnelli E, Hempel de Ibarra N, Collett TS (2018) Variations on a theme: bumblebee learning flights from the nest and from flowers. J Exp Biol 221:jeb172601. https://doi.org/10.1242/jeb.172601
Samet N, Zeil J, Mair E, Boeddeker N, Stürzl W (2014) Ground-nesting insects could use visual tracking for monitoring nest position during learning flights. In: del Pobil AP, Chinellato E, Martínez-Martín E, Hallam J, Cervera E, Morales A (eds) From animals to animats 13, vol 8575. Springer, Berlin, pp 108–120. https://doi.org/10.1007/978-3-319-08864-8_11
Schulte P, Zeil J, Stürzl W (2019) An insect-inspired model for acquiring views for homing. Biol Cybernet 113:439–451. https://doi.org/10.1007/s00422-019-00800-1
Schultheiss P, Wystrach A, Schwarz S, Tack A, Delor J, Nooten SS, Bibost A-L, Freas CA, Cheng K (2016) Crucial role of ultraviolet light for desert ants in determining direction from the terrestrial panorama. Anim Behav 115:19–28. https://doi.org/10.1016/j.anbehav.2016.02.027
Schwarz S, Mangan M, Zeil J, Webb B, Wystrach A (2017) How ants use vision when homing backward. Curr Biol 27:401–407. https://doi.org/10.1016/j.cub.2016.12.019
Schwarz S, Clement L, Gkanias E, Wystrach A (2020) How do backward-walking ants (Cataglyphis velox) cope with navigational uncertainty? Anim Behav 164:133–142. https://doi.org/10.1016/j.anbehav.2020.04.006
Seelig JD, Jayaraman V (2013) Feature detection and orientation tuning in the Drosophila central complex. Nature 503:262–266. https://doi.org/10.1038/nature12601
Seelig JD, Jayaraman V (2015) Neural dynamics for landmark orientation and angular path integration. Nature 521:186–191. https://doi.org/10.1038/nature14446
Seidl T, Wehner R (2006) Visual and tactile learning of ground structures in desert ants. J Exp Biol 209:3336–3344. https://doi.org/10.1242/jeb.02364
Stone T, Mangan M, Wystrach A, Webb B (2018) Rotation invariant visual processing for spatial memory in insects. Interface Focus 8:20180010. https://doi.org/10.1098/rsfs.2018.0010
Stone T, Differt D, Milford M, Webb B (2016) Skyline-based localisation for aggressively manoeuvring robots using UV sensors and spherical harmonics. In Proc 2016 IEEE Int Conf on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016, pp. 5615–5622. New York, NY: IEEE. https://doi.org/10.1109/ICRA.2016.7487780
Stürzl W, Mallot HA (2006) Efficient visual homing based on Fourier transformed panoramic images. Robot Autonom Syst 54:300–313. https://doi.org/10.1016/j.robot.2005.12.001
Stürzl W, Zeil J (2007) Depth, contrast and view-based homing in outdoor scenes. Biol Cybernet 96:519–531. https://doi.org/10.1007/s00422-007-0147-3
Stürzl W, Boeddeker N, Dittmar L, Egelhaaf M (2010) Mimicking honeybee eyes with a 280° field of view catadioptric imaging system. Bioinspir Biomimet 5:036002. https://doi.org/10.1088/1748-3182/5/3/036002
Stürzl W, Grixa I, Mair E, Narendra A, Zeil J (2015) Three-dimensional models of natural environments and the mapping of navigational information. J Comp Physiol A 201:563–584. https://doi.org/10.1007/s00359-015-1002-y
Stürzl W, Zeil J, Boeddeker N, Hemmi JM (2016) How wasps acquire and use views for homing. Curr Biol 26:470–482. https://doi.org/10.1016/j.cub.2015.12.052
Sun X, Yue S, Mangan M (2020) A decentralised neural model explaining optimal integration of navigational strategies in insects. Elife 9:e54026. https://doi.org/10.7554/eLife.54026
Voss R, Zeil J (1998) Active vision in insects: an analysis of object-directed zig-zag flights in wasps (Odynerus spinipes, Eumenidae). J Comp Physiol A 182:377–387. https://doi.org/10.1007/s003590050187
Wehner R, Michel B, Antonsen P (1996) Visual navigation in insects: coupling of egocentric and geocentric information. J Exp Biol 199:129–140. https://doi.org/10.1242/jeb.199.1.129
Wolf E (1926) Über das Heimkehrvermögen der Bienen I. Z Vergl Physiol 3:615–691. https://doi.org/10.1007/BF00354117
Woodgate JL, Makinson JC, Lim KS, Reynolds AM, Chittka L (2016) Life-long radar tracking of bumblebees. PLoS ONE 11:e0160333. https://doi.org/10.1371/journal.pone.0160333
Wystrach A, Philippides A, Aurejac A, Cheng K, Graham P (2014) Visual scanning behaviours and their role in the navigation of the Australian desert ant Melophorus bagoti. J Comp Physiol A 200:615–626. https://doi.org/10.1007/s00359-014-0900-8
Wystrach A, Dewar A, Philippides A, Graham P (2016) How do field of view and resolution affect the information content of panoramic scenes for visual navigation? A computational investigation. J Comp Physiol A 202:87–95. https://doi.org/10.1007/s00359-015-1052-1
Wystrach A, Schwarz S, Graham P, Cheng K (2019) Running paths to nowhere: repetition of routes shows how navigating ants modulate online the weights accorded to cues. Anim Cogn 22:213–222. https://doi.org/10.1007/s10071-019-01236-7
Wystrach A, Buehlmann C, Schwarz S, Cheng K, Graham P (2020) Rapid aversive and memory trace learning during route navigation in desert ants. Curr Biol 30:1927–1933. https://doi.org/10.1016/j.cub.2020.02.082
Zahedi MS, Zeil J (2018) Fractal dimension and the navigational information provided by natural scenes. PLoS ONE 13(5):e0196227. https://doi.org/10.1371/journal.pone.0196227
Zanker JM, Zeil J (2005) Movement-induced motion signal distributions in outdoor scenes. Netw Comput Neural Syst 16:357–376. https://doi.org/10.1080/09548980500497758
Zeil J (1993a) Orientation flights of solitary wasps (Cerceris; Sphecidae; Hymenoptera): I. Description of flight. J Comp Physiol A 172:189–205. https://doi.org/10.1007/BF00189396
Zeil J (1993b) Orientation flights of solitary wasps (Cerceris; Sphecidae; Hymenoptera): II. Similarities between orientation and return flights and the use of motion parallax. J Comp Physiol A 172:207–222. https://doi.org/10.1007/BF00189397
Zeil J (2012) Visual homing: an insect perspective. Curr Opin Neurobiol 22:285–293. https://doi.org/10.1016/j.conb.2011.12.008
Zeil J, Fleischmann P (2019) The learning walks of ants (Formicidae, Hymenoptera). Myrmecol News 29:93–110. https://doi.org/10.25849/myrmecol.news_029:093
Zeil J, Kelber A, Voss R (1996) Structure and function of learning flights in bees and wasps. J Exp Biol 199:245–252. https://doi.org/10.1242/jeb.199.1.245
Zeil J, Hofmann MI, Chahl JS (2003) The catchment areas of panoramic snapshots in outdoor scenes. J Opt Soc Am A 20:450–469. https://doi.org/10.1364/josaa.20.000450
Zeil J, Narendra A, Stürzl W (2014) Looking and homing: how displaced ants decide where to go. Phil Trans Roy Soc B 369:20130034. https://doi.org/10.1098/rstb.2013.0034
Acknowledgements
I thank Tom Collett and Andy Philippides for their constructive comments on this review.
Author information
Contributions
JZ wrote and reviewed the manuscript.
Ethics declarations
Conflict of interest
The authors declare no conflict of interest.
Handling editor: Uwe Homberg.
For Tom Collett and Rüdiger Wehner.
Cite this article
Zeil, J. Visual navigation: properties, acquisition and use of views. J Comp Physiol A 209, 499–514 (2023). https://doi.org/10.1007/s00359-022-01599-2