Main

To characterize the influence of spatial position on the responses of area V1, we took mice expressing the calcium indicator GCaMP6 in excitatory cells and placed them in a corridor in virtual reality (Fig. 1a). The corridor had a pair of landmarks (a grating and a plaid) that repeated twice, thus creating two visually matching segments 40 cm apart (Fig. 1a, b; Extended Data Fig. 1). We identified V1 using the retinotopic map measured using wide-field imaging (Fig. 1c). We then used a two-photon microscope to view medial V1, focusing our analysis on neurons with receptive field centres more lateral than 40° azimuth (Fig. 1c), which were driven as the mouse passed the landmarks. As expected, given the repetition of visual scenes in the two segments of the corridor, some V1 neurons had a response profile with two equal peaks 40 cm apart (Fig. 1d). Other V1 neurons, however, responded differently to the same visual stimuli in the two segments (Fig. 1d). These results indicate that visual activity in V1 can be strongly modulated by an animal’s position in an environment.

Fig. 1: Responses in V1 are modulated by spatial position.

a, Mice ran on a cylindrical treadmill to navigate a virtual corridor. The corridor had two landmarks that repeated after 40 cm, creating visually matching segments (red and blue bars). b, Screenshots showing the right half of the corridor at pairs of positions 40 cm apart. c, Example retinotopic map of the cortical surface. Grey curve shows the border of V1. Squares denote the field of view in two-photon imaging sessions targeted to medial V1 (inset shows the field outlined by the green frame). We analysed responses from neurons with receptive field centres greater than 40° azimuth (curve). d, Normalized response as a function of position in the corridor for six example V1 neurons. Dotted lines show predictions, assuming identical responses in matching segments of the corridor. e, Normalized response as a function of position, obtained from odd trials, for 4,958 V1 neurons. Neurons are ordered by the position of their maximum response. f, As in e for even trials. Curves indicate preferred position (yellow) and preferred position ± 40 cm (blue and red). g, Cumulative distribution of the spatial modulation ratio in even trials: response at the non-preferred position (40 cm from the peak response) divided by response at the preferred position, for cells with responses within the visually matching segments (median ± m.a.d., 0.61 ± 0.31; significantly less than 1, P < 10^−104, n = 2,422, Wilcoxon two-sided signed-rank test). h, As in g, after stratifying the data by running speed, and for a model without spatial selectivity (the non-spatial model). The curves corresponding to low (cyan) and high (purple) speeds overlap and appear as a single dashed curve (P = 0.21, Wilcoxon two-sided signed-rank test). Grey curve, spatial modulation ratios from a non-spatial model considering visual and behavioural factors (Extended Data Fig. 7).

This modulation of visual responses by spatial position occurred in the majority of V1 neurons (Fig. 1e–g). We imaged 8,610 V1 neurons across 18 sessions in 4 mice and selected 4,958 neurons with receptive field centres beyond 40° azimuth and reliable firing along the corridor (see Methods). We divided the trials in half, and used the odd-numbered trials to find the position at which each neuron fired maximally. The resulting representation reveals a striking preference of V1 neurons for spatial position (Fig. 1e), with most neurons giving stronger responses at one position (the preferred position) than at the visually matching position 40 cm away (the non-preferred position). To avoid circularity, we quantified this preference on the other half of the data (the even-numbered trials) and found that the preference for position was robust (Fig. 1f). Indeed, among the neurons that responded when the mouse traversed the visually matching segments (n = 2,422), the responses at the non-preferred position were markedly smaller than at the preferred position (Fig. 1g; Extended Data Fig. 2). We defined a spatial modulation ratio for each cell as the ratio of responses at the two visually matching positions (non-preferred/preferred, in the even trials). The median spatial modulation ratio was 0.61 ± 0.31 (± median absolute deviation, m.a.d.), significantly less than 1 (P < 10^−104, Wilcoxon two-sided signed-rank test). Neurons preferred the first or second segment in similar proportions (49% versus 51%), making it unlikely that a global factor such as visual adaptation could explain their preference.
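
In practice, this quantity is simple to compute from binned activity. The following is a minimal sketch in Python (not the original analysis code), assuming `responses` holds one neuron's activity binned by position on each trial; smoothing and edge cases are simplified.

```python
import numpy as np

def spatial_modulation_ratio(responses, bin_size_cm=1.0, offset_cm=40.0):
    """Ratio of responses at two visually matching positions 40 cm apart.

    responses : (n_trials, n_bins) array of one neuron's activity along
        the corridor, binned by position.
    The preferred position is found on odd trials; the ratio is then
    measured on even trials, to avoid circularity.
    """
    odd, even = responses[0::2], responses[1::2]
    offset = int(round(offset_cm / bin_size_cm))

    # Preferred position: peak of the mean response on odd trials.
    pref = int(np.argmax(odd.mean(axis=0)))

    # The visually matching (non-preferred) position lies 40 cm away;
    # take whichever side falls inside the corridor.
    n_bins = responses.shape[1]
    non_pref = pref + offset if pref + offset < n_bins else pref - offset

    even_mean = even.mean(axis=0)
    return even_mean[non_pref] / even_mean[pref]
```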

The modulation of V1 responses by spatial position could not be explained by visual factors. To confirm that the receptive fields of most neurons received similar stimulation in the two visually matching locations, we ran a model of receptive field responses (a simulation of V1 complex cells) on the sequences of images. As expected, this model generated spatial modulation ratios close to 1 (0.97 ± 0.17, Extended Data Fig. 3). We next asked whether the different responses seen in the two locations could be due to differences in images far outside the receptive field, particularly the end (grey) wall of the corridor. To test this, we placed two additional mice in a modified virtual reality environment, in which the two sections of the corridor were identical pixel by pixel (Extended Data Fig. 4). The spatial modulation ratio was again overwhelmingly less than 1 (0.62 ± 0.26; P < 10^−81; n = 1,044 neurons), confirming that spatial modulation of V1 responses could not be explained by distant visual cues.
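
A receptive field model of this kind can be illustrated with the standard 'energy model' of a complex cell: a quadrature pair of Gabor filters whose squared outputs are summed, yielding a phase-invariant response. The sketch below is a generic implementation with assumed filter parameters, not the published simulation.

```python
import numpy as np

def gabor(size, sf, theta, phase, sigma):
    """2D Gabor filter; sf in cycles/pixel, theta in radians."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * sf * xr + phase)

def complex_cell_response(frames, sf=0.05, theta=0.0, sigma=8.0, size=33):
    """Phase-invariant (energy-model) response to a sequence of image
    patches cropped around the receptive field centre, each the same
    size as the filter: sum of squared outputs of a quadrature pair."""
    g_even = gabor(size, sf, theta, 0.0, sigma)
    g_odd = gabor(size, sf, theta, np.pi / 2, sigma)
    responses = []
    for frame in frames:
        e = np.sum(frame * g_even)
        o = np.sum(frame * g_odd)
        responses.append(e**2 + o**2)
    return np.asarray(responses)
```

Run on image sequences that repeat every 40 cm, a model of this form necessarily produces near-identical responses in the two segments, which is why its spatial modulation ratios cluster around 1.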

Spatial modulation of V1 responses could also not be explained by running speed, deviations in pupil position and diameter, or reward. Given that V1 neurons are influenced by running speed and visual speed8,9, their different responses in visually matching segments of the corridor could reflect speed differences. To control for this, we stratified the data according to three running speed ranges (low, medium, or high; Extended Data Fig. 5). Even within a single speed group (medium), the spatial modulation ratio was substantially below 1 (0.47 ± 0.22; P < 10^−33). Moreover, the spatial modulation ratio was statistically indistinguishable at low and high speeds (Fig. 1h). We could also exclude a role of reward or deviations in pupil position and size, as the spatial modulation ratio was markedly below 1 even in sessions during which the animals ran without a reward (0.57 ± 0.37; P < 10^−14), or when there were no changes in pupil size (0.63 ± 0.33, P < 10^−45) or pupil position (0.63 ± 0.33, P < 10^−27; Extended Data Fig. 6). To assess the joint contribution of visual, task-related, and position variables, we developed three prediction models (Extended Data Fig. 7). The first depended only on the visual scenes, which repeat twice, and on trial onset and offset, which introduce transients (the visual model). The second additionally depended on running speed, reward times, pupil size, and eye position (the non-spatial model). The third, in addition, allowed responses to differ in amplitude between the matching segments (the spatial model). Only the last model could fit the activity of cells with unequal peaks, thus matching the spatial modulation ratios seen in the data (Extended Data Fig. 7c, d). By contrast, the first two models predicted spatial modulation ratios closer to 1 (Fig. 1h; Extended Data Fig. 7c, d).
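
The logic of the three nested models can be sketched as linear encoding models with increasingly rich design matrices. This is a schematic reconstruction under simplifying assumptions (one-hot position and speed regressors, ordinary least squares, matching segments taken to span 10–50 and 50–90 cm); the actual models are specified in Extended Data Fig. 7.

```python
import numpy as np

def one_hot(values, n_bins, vmax):
    """Binned one-hot encoding of a continuous variable."""
    idx = np.clip((values / vmax * n_bins).astype(int), 0, n_bins - 1)
    X = np.zeros((len(values), n_bins))
    X[np.arange(len(values)), idx] = 1.0
    return X

def fit(X, y):
    """Ordinary least squares; returns the predicted trace."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ beta

def three_models(pos, speed, y, n_bins=20):
    """pos: position (cm); speed: running speed; y: one neuron's activity."""
    # Visual model: responses depend only on position within the 40-cm cycle.
    X_vis = one_hot(pos % 40, n_bins, 40)
    # Non-spatial model: add behavioural regressors (only speed here, for
    # brevity; the full model also used reward times, pupil size and eye
    # position).
    X_nonspat = np.hstack([X_vis, one_hot(speed, n_bins, speed.max())])
    # Spatial model: let response amplitude differ between the two
    # visually matching segments.
    second = ((pos >= 50) & (pos < 90)).astype(float)
    X_spat = np.hstack([X_nonspat, X_vis * second[:, None]])
    return {name: fit(X, y) for name, X in
            [('visual', X_vis), ('non-spatial', X_nonspat),
             ('spatial', X_spat)]}
```

Only the third design matrix contains regressors that distinguish the two segments, so only it can reproduce unequal peaks at visually matching positions.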

Having established that V1 responses are modulated by spatial position, we next investigated whether the underlying modulatory signals reflect the spatial position encoded in the brain’s navigational systems (Fig. 2). We recorded simultaneously from V1 and hippocampal area CA1 using two 32-channel electrodes (Fig. 2a). To gauge a mouse’s estimate of position, we trained the mice to lick a spout for water reward upon reaching a specific region of the corridor (Fig. 2b; Supplementary Video 1; Extended Data Fig. 8). All four mice (wild type) learned to perform this task with more than 80% accuracy and relied strongly on vision: performance persisted when we changed the gain relating wheel rotation to progression in the corridor3,10 and performance decreased when we lowered visual contrast (Extended Data Fig. 8).

Fig. 2: V1 and CA1 neural populations represent spatial positions in the virtual corridor and make correlated errors.

a, Example of reconstructed electrode tracks (red: DiI); green shows cells labelled with DAPI. Panel shows tracks from one array (four shanks) in CA1, and a second electrode (one shank) in V1. b, In the task, water was delivered when mice licked in a reward zone (green area). c, Normalized activity as a function of position in the corridor, for 226 V1 neurons (8 sessions). Neurons are ordered by the position of their maximum response. Curves indicate preferred position (yellow) and preferred position ± 40 cm (blue and red). d, Similar plot for CA1 place cells (334 neurons; 8 sessions). e, Density map showing the distribution of position decoded from the activity of simultaneously recorded V1 neurons (y-axis) as a function of the animal’s position (x-axis), averaged across recording sessions (n = 8), and considering only correct trials. The red diagonal stripe indicates accurate estimation of position. f, Similar plot for CA1 neurons. g, Density map showing the joint distribution of position decoding errors from V1 and CA1 in one example session at one position (74 cm; left), together with a similar analysis on data shuffled while preserving the correlation due to running speed and position (right). h, Pearson’s correlation coefficient of decoding errors in V1 and CA1 for each recording session (n = 3,800–21,000 time points), against similar analysis of shuffled data. Correlations are above shuffling control (P = 0.0115, two-sided t-test, n = 8 sessions). i, Difference between joint distribution of V1 and CA1 decoded position and shuffled control, for the example in g. j, Difference between joint density map of V1 and CA1 decoded position, and shuffled control, averaged across positions (n = 50) and sessions (n = 8).

Many neurons in both visual cortex and hippocampus had place-specific response profiles, thus encoding the mouse’s spatial position (Fig. 2c–f). Consistent with our observations from two-photon imaging, V1 neurons responded more strongly in one of the two visually matching segments of the corridor (Fig. 2c, Extended Data Fig. 9c). In turn, hippocampal CA1 neurons exhibited place fields3,10,11, responding in a single corridor location (Fig. 2d, Extended Data Fig. 9a–c). Therefore, responses in both V1 and CA1 encoded the position of the mouse in the environment, with no ambiguity between the two visually matching segments. Indeed, an independent Bayes decoder was able to read out the mouse’s position from the activity of neurons recorded from V1 (33 ± 17 neurons per session, n = 8 sessions; Fig. 2e) or from CA1 (42 ± 20 neurons per session, n = 8 sessions; Fig. 2f).
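
Such an independent Bayes decoder treats neurons as conditionally independent given position, with Poisson spike counts at rates estimated from training trials. A minimal sketch (the time-bin width and flat-prior default are our assumptions, not published parameters):

```python
import numpy as np

def decode_position(spike_counts, tuning, dt=0.05, prior=None):
    """Posterior over positions from one time bin of population activity.

    spike_counts : (n_neurons,) spike counts in the bin.
    tuning : (n_neurons, n_positions) mean firing rates (Hz) at each
        position, estimated from training trials.
    Assumes independent Poisson spiking; dt is the bin width in seconds.
    """
    rates = np.clip(tuning * dt, 1e-9, None)     # expected counts per bin
    log_post = spike_counts @ np.log(rates) - rates.sum(axis=0)
    if prior is not None:
        log_post += np.log(prior)
    log_post -= log_post.max()                   # numerical stability
    post = np.exp(log_post)
    return post / post.sum()
```

The decoded position for a time bin is the posterior's peak (or mean), and a decoding error is the difference between this estimate and the animal's actual position.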

Furthermore, when the visual cortex and hippocampus made errors in estimating the mouse’s position, these errors were correlated with each other (Fig. 2g, h). The distributions of errors in position decoded from V1 and CA1 peaked at zero (Fig. 2g) but were significantly correlated (Fig. 2h; ρ = 0.125, P = 0.0129, two-sided t-test, n = 8). In principle, this correlation could arise from a common modulation of both regions by behavioural factors such as running speed, which affects responses of both visual cortex8,9 and hippocampus12,13,14. To isolate the effect of speed, we shuffled the data between time points while preserving the relationship between speed and position (see Supplementary Methods). After shuffling, the correlation between decoding errors in V1 and CA1 decreased substantially from 0.125 to 0.022 (P = 0.0115; Fig. 2g, h). Moreover, when we subtracted the shuffled distribution from the original joint distributions, the residual decoding errors were distributed along the diagonal (Fig. 2i, j), indicating that representations in V1 and CA1 are more correlated than expected from common speed modulation. This correlation could also not be explained by common encoding of behavioural factors such as licking (Extended Data Fig. 9d–f). Indeed, a prediction of V1-encoded position from all external variables (true position, running speed, licks and rewards) could still be improved by the position decoded from CA1 activity (Extended Data Fig. 10).
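
The shuffle can be pictured as permuting time points only within joint (position, speed) bins: this destroys moment-to-moment covariation between the two regions while preserving any correlation inherited from common position and speed coding. The bin counts below are illustrative; the exact procedure is given in the Supplementary Methods.

```python
import numpy as np

def shuffle_within_bins(err_v1, err_ca1, pos, speed,
                        n_pos=50, n_spd=3, seed=0):
    """Correlation of V1/CA1 decoding errors against a speed- and
    position-preserving shuffle.

    err_v1, err_ca1 : per-time-point decoding errors (decoded minus
        actual position) from the two regions.
    """
    rng = np.random.default_rng(seed)
    pos_bin = np.digitize(pos, np.quantile(pos, np.linspace(0, 1, n_pos + 1)[1:-1]))
    spd_bin = np.digitize(speed, np.quantile(speed, np.linspace(0, 1, n_spd + 1)[1:-1]))

    # Permute CA1 errors only among time points sharing a (position,
    # speed) bin, so correlations due to those variables survive.
    shuffled = err_ca1.copy()
    for p in np.unique(pos_bin):
        for s in np.unique(spd_bin):
            idx = np.where((pos_bin == p) & (spd_bin == s))[0]
            shuffled[idx] = shuffled[rng.permutation(idx)]

    r_data = np.corrcoef(err_v1, err_ca1)[0, 1]
    r_shuffle = np.corrcoef(err_v1, shuffled)[0, 1]
    return r_data, r_shuffle
```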

We next tested whether the spatial position encoded by V1 and CA1 relates to the mouse’s subjective estimate of position (Fig. 3a–f). CA1 activity is influenced by the performance of navigation tasks15,16,17,18, and may reflect the animal’s subjective position more than its actual position15,17,19. We assessed a mouse’s subjective estimate of position from the location of its licks. We divided trials into three groups: early trials, in which too many licks (usually 4–6) occurred before the reward zone, causing the trial to be aborted; correct trials, during which one or more licks occurred in the reward zone; and late trials, in which the mouse missed the reward zone and licked afterwards. To understand how spatial representations in V1 and CA1 related to this behaviour, we trained the Bayesian decoder on the activity measured in correct trials, and analysed the likelihood of decoding different positions in the three types of trial. Decoding performance in early and late trials showed systematic deviations: in early trials, V1 and CA1 overestimated the animal’s progress along the corridor (deviation above the diagonal, Fig. 3a, d), whereas in late trials they underestimated it (deviation below the diagonal, Fig. 3b, e). Accordingly, the probability of being in the reward zone, predicted from both CA1 and V1, peaked before the reward zone in early trials and after it in late trials (Fig. 3c, f). These consistent deviations suggest that the representations of position in V1 and CA1 correlate with the animal’s decisions to lick and thus reflect its subjective estimate of position.
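
Computed separately on early, correct, and late trials, the decoded probability of being in the reward zone summarizes these deviations. A sketch that assumes posteriors from a decoder such as the one above; the reward-zone limits follow the Methods (8 cm wide, centred at 70 cm), while the position binning is ours.

```python
import numpy as np

def reward_zone_probability(posteriors, positions, zone=(66.0, 74.0),
                            corridor_cm=100.0, n_bins=50):
    """Average decoded probability of being in the reward zone as a
    function of the animal's actual position (cf. Fig. 3c, f).

    posteriors : (n_timepoints, n_positions) posterior over positions at
        each time point, from a decoder trained on correct trials.
    positions : (n_timepoints,) actual positions in cm.
    """
    pos_axis = np.linspace(0, corridor_cm, posteriors.shape[1])
    in_zone = (pos_axis >= zone[0]) & (pos_axis <= zone[1])
    p_zone = posteriors[:, in_zone].sum(axis=1)    # posterior mass in zone

    edges = np.linspace(0, corridor_cm, n_bins + 1)
    which = np.digitize(positions, edges) - 1
    return np.array([p_zone[which == b].mean() if np.any(which == b)
                     else np.nan for b in range(n_bins)])
```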

Fig. 3: Positions encoded by visual cortex and hippocampus correlate with animal’s spatial decisions.

a, Distribution of positions decoded from the V1 population, as a function of the animal’s actual position, on trials in which mice licked early. The decoder was trained on separate trials during which mice licked in the correct position. b, Same plot for trials during which mice licked late. c, The average decoded probability that the mouse is in the reward zone, as a function of distance from the reward. The curve for early trials (red) peaks before the reward zone, whereas the curve for late trials (blue) peaks after it, consistent with V1 activity reflecting subjective position rather than actual position. Probabilities were normalized relative to the probability of being in the reward zone in the correct trials (green). Red dots, positions at which the decoded probability of being in the reward zone differed significantly between early and correct trials (P < 0.05, two-sample two-sided t-test). Blue dots, same for correct versus late trials. Shaded regions indicate mean ± s.e.m., n = 68 early trials (red), 334 correct trials (green), and 30 late trials (blue). d–f, Same as a–c, for decoding using the population of CA1 neurons. g, Position decoded from V1 activity as a function of mouse position, in an example session. Crosses show positions when the animal licked during early (red) or late trials (blue). Late trials can include some early licks. These distributions (mean ± s.d.) are summarized as shaded ovals for early trials (red, n = 20 licks) and late trials (blue, n = 12 licks). Green regions mark the reward zone. h, Summary distributions for all sessions (n = 8). i, Fraction of licks as a function of distance from the reward location, in positions decoded from V1 activity. j–l, Same as in g–i, for CA1 neurons.

The licks provide an opportunity to gauge when the mouse’s subjective estimate of position lies in the reward zone. If activity in V1 and CA1 reflects subjective position, it should place the animal in the reward zone whether the animal correctly licked in that zone or incorrectly licked earlier or later. To test this prediction, we decoded activity in V1 and CA1 at the time of licks. By definition, the distributions of licks in early, correct, and late trials were spatially distinct (Fig. 3g, h, j, k). However, when plotted as a function of decoded position, these distributions came into register over the reward zone, whether the decoding was done from V1 (Fig. 3g–i) or from CA1 (Fig. 3j–l). Thus, regardless of the animal’s position, when a mouse licked for a reward, the activity of both V1 and CA1 indicated a position in the reward zone.

Together, these results indicate that visual responses in V1 are modulated by the same spatial signals as those represented in the hippocampus, and that these signals reflect the animal’s subjective estimate of position. This modulation may become stronger as environments become familiar6,7, perhaps contributing to the changes observed in V1 as animals learn behavioural tasks20,21,22. The correlation between representations in V1 and CA1 may be due to feed-forward signals from vision or feedback signals from navigational systems. Although V1 and CA1 are not directly connected, they could share spatial signals through indirect connections23,24; these could involve the retrosplenial, parietal, entorhinal, or prefrontal cortices, which are known to carry spatial information25,26. Further insights into the nature of these signals could be obtained by modulating the relationship between actual position and distance run3,10 or time27, and by investigating more natural 2D environments28,29,30. In such environments, however, it would be difficult to control and repeat visual stimulation, which proved essential in our study. Our results show that signals related to an animal’s own estimate of position appear as early as in primary sensory cortex. This observation suggests that the mouse cortex does not keep a firm distinction between navigational and sensory systems; rather, spatial signals may permeate cortical processing.

Methods

All experiments were conducted according to the UK Animals (Scientific Procedures) Act, 1986 under personal and project licenses issued by the Home Office following ethical review.

For simultaneous recordings in V1 and CA1, we used four C57BL/6 mice (all male, implanted at 4–8 weeks of age). For calcium imaging experiments, we used double or triple transgenic mice expressing GCaMP6 in excitatory neurons (5 females, 1 male, implanted at 4–10 weeks of age). The triple transgenic mice expressed GCaMP6 fast31 (Emx1-Cre;Camk2a-tTA;Ai93, 3 mice). The double transgenic mice expressed GCaMP6 slow32 (Camk2a-tTA;tetO-G6s, 3 mice). Because Ai93 mice may exhibit aberrant cortical activity33, we used the GCaMP6 slow mice to validate the results obtained from the GCaMP6 fast mice. Additional tests33 confirmed that none of these mice displayed the aberrant activity that is sometimes seen in Ai93 mice. No randomization or blinding was performed in this study. No statistical methods were used to predetermine sample size.

Virtual environment and task

The virtual reality environment was a corridor adorned with a white noise background and four landmarks: two grating stimuli oriented orthogonal to the corridor and two plaid stimuli (Fig. 1a). The corridor dimensions were 100 × 8 × 8 cm, and the landmarks (8 cm wide) were centred 20, 40, 60 and 80 cm from the start of the corridor. The mice navigated the environment by walking on a custom-made polystyrene wheel (15 cm wide, 18 cm diameter). Movements of the wheel were captured by a rotary encoder (2,400 pulses per rotation, Kübler, Germany), and used to control the virtual reality environment presented on three monitors surrounding the animal, as previously described9. When the mouse reached the end of the corridor, it was placed back at the start of the corridor after a 3–5-s presentation of a grey screen. Trials longer than 120 s were timed out and were excluded from further analysis.
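
For concreteness, converting encoder pulses into corridor position follows directly from the wheel geometry given above. A minimal sketch that assumes pulse counts sampled once per display frame; the gain term anticipates the gain-change control trials described in the main text.

```python
import numpy as np

WHEEL_DIAMETER_CM = 18.0
PULSES_PER_ROTATION = 2400
CM_PER_PULSE = np.pi * WHEEL_DIAMETER_CM / PULSES_PER_ROTATION

def pulses_to_position(pulse_counts, corridor_length_cm=100.0, gain=1.0):
    """Position (cm) along the corridor from per-frame encoder pulse counts.

    The gain scales wheel rotation to virtual progression; reaching the
    corridor end triggers a teleport back to the start, which is ignored
    in this single-trial sketch.
    """
    distance = np.cumsum(pulse_counts) * CM_PER_PULSE * gain
    return np.minimum(distance, corridor_length_cm)
```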

Mice used for simultaneous V1 and CA1 recordings (n = 4 animals, 8 sessions) were trained to lick in a specific region of the corridor, the reward zone. This zone was centred at 70 cm and was 8 cm wide. Trials in which the animals were not engaged in the task, that is, when they ran through the environment without licking, were excluded from further analysis. The animal was rewarded for correct licks with ~2 μl water using a solenoid valve (161T010; Neptune Research, USA), and licks were monitored using a custom device that detected breaks in an infrared beam.

Mice used for calcium imaging (n = 6 animals, 25 recording sessions) ran the two versions of the virtual corridor, with no specific task.

In the standard version of the corridor, two of the mice (10 sessions) were motivated to run with water rewards: one mouse received rewards at random positions along the corridor and the other at the end of the corridor. To control for the effect of the reward on V1 responses, no reward was delivered to the two other mice (8 sessions).

To ensure that the spatial modulation of V1 responses could not be explained by the end wall of the corridor being more visible in the second half than in the first, two additional mice used for calcium imaging were trained in a modified version of the corridor, in which visual scenes were strictly identical 40 cm apart (7 sessions). In this environment, mice ran the same distance as before (100 cm) and were likewise placed back at the start of the corridor after a 3–5-s presentation of a grey screen. The same four landmarks were centred at the same positions as before. However, the corridor was extended to a length of 200 cm, repeating the same sequence of landmarks (Extended Data Fig. 4). The virtual reality software was modified to render only up to 70 cm ahead of the animal, ensuring that the visual scenes were strictly identical in the sections between 10 and 50 cm and between 50 and 90 cm; the white noise background also repeated with the same 40-cm periodicity. Before recording in the 200-cm corridor, mice were first given 5 sessions in the 100-cm corridor, then placed in the 200-cm corridor and allowed to habituate to the new environment for another two or three sessions before the start of recordings.

Surgery and training

The surgical methods are similar to those described previously9,34. In brief, a custom head-plate with a circular chamber (3–4 mm diameter for electrophysiology; 8 mm for imaging) was implanted on 4–10-week-old mice under isoflurane anaesthesia. For imaging, we performed a 4-mm craniotomy over the left visual cortex by repeatedly rotating a biopsy punch. The craniotomy was shielded with a double coverslip (4 mm inner diameter; 5 mm outer diameter). After 4 days of recovery, some mice were water restricted (>40 ml/kg/day) and were trained for 30–60 min, 5–7 days/week.

Mice used for simultaneous V1 and CA1 recordings were trained to lick selectively in the reward zone using a progressive training procedure. Initially, the animals were rewarded for running past the reward location on all trials. After this, we introduced trials in which the mouse was rewarded only when it licked in the rewarded region of the corridor. The width of the reward region was progressively narrowed from 30 cm to 8 cm across successive days of training. To prevent the animals from licking all across the corridor, trials were terminated early if the animal licked more than a certain number of times before the rewarded region. We reduced this number as the animals performed more accurately, typically reaching a level of 4–6 licks by the time recordings were made. Once a sufficient level of performance was reached, we verified on some (randomly chosen) trials that the animal performed the task visually, by measuring performance when we decreased the visual contrast or changed the distance to the reward zone (Extended Data Fig. 8). Training was carried out for 3–5 weeks. Animals were kept under light-shifted conditions (9 a.m. light off, 9 p.m. light on) and experiments were performed during the day.

Widefield calcium imaging

For widefield imaging we used a standard epi-illumination imaging system35,36 together with an sCMOS camera (pco.edge, PCO AG). A Leica 1.6× Plan APO objective was placed above the imaging window and a custom black cone surrounding the objective was fixed on top of the headplate to prevent contamination from the monitors’ light. The excitation light beam emitted by a high-power LED (465 nm LEX2-B, Brain Vision) was directed onto the imaging window by a dichroic mirror designed to reflect blue light. Emitted fluorescence passed through the same dichroic mirror and was then selectively transmitted by an emission filter (FF01-543/50-25, Semrock) before being focused by another objective (Leica 1.0 Plan APO objective) and finally detected by the camera. Images of 200 × 180 pixels, corresponding to an area of 6.0 × 5.4 mm, were acquired at 50 Hz.

To measure retinotopy we presented a 14° wide vertical window containing a vertical grating (spatial frequency 0.15 cycles per degree), and swept37,38 the horizontal position of the window over 135° of azimuth angle, at a frequency of 2 Hz. Stimuli lasted 4 s and were repeated 20 times (10 in each direction). We obtained maps for preferred azimuth by combining responses to the two stimuli moving in opposite directions, as previously described37.
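
The phase-based (Fourier) retinotopy computation of the cited method can be sketched as follows. The frame rate and sign conventions here are assumptions, and phase unwrapping is ignored for brevity.

```python
import numpy as np

def preferred_azimuth(movie_fwd, movie_bwd, sweep_hz=2.0, frame_hz=50.0,
                      azimuth_span=135.0):
    """Pixelwise preferred azimuth from periodic stimuli swept in
    opposite directions.

    movie_fwd, movie_bwd : (n_frames, ny, nx) widefield movies for the
        two sweep directions.
    The response phase at the sweep frequency encodes preferred azimuth
    plus a common haemodynamic/indicator delay; subtracting the phases
    for the two directions cancels the delay.
    """
    t = np.arange(movie_fwd.shape[0]) / frame_hz
    carrier = np.exp(-2j * np.pi * sweep_hz * t)
    phase_fwd = np.angle(np.tensordot(carrier, movie_fwd, axes=1))
    phase_bwd = np.angle(np.tensordot(carrier, movie_bwd, axes=1))
    delta = np.angle(np.exp(1j * (phase_fwd - phase_bwd)))  # wrap to (-pi, pi]
    # Half the phase difference is the position phase; map it to degrees
    # relative to the sweep midpoint (ambiguous by span/2 without unwrapping).
    return (delta / 2) / (2 * np.pi) * azimuth_span
```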

Two-photon imaging

Two-photon imaging was performed with a standard multiphoton imaging system (Bergamo II; Thorlabs) controlled by ScanImage439. A 970 nm laser beam, emitted by a Ti:sapphire laser (Chameleon Vision, Coherent), was targeted onto L2/3 neurons through a 16× water-immersion objective (0.8 NA, Nikon). The fluorescence signal was transmitted by a dichroic beamsplitter and amplified by photomultiplier tubes (GaAsP, Hamamatsu). The emission light path between the focal plane and the objective was shielded with a custom-made plastic cone, to prevent contamination from the monitors’ light. In each experiment, we imaged four planes set apart by 40 μm. Multiple-plane imaging was enabled by a piezo focusing device (P-725.4CA PIFOC, Physik Instrumente), and an electro-optical modulator (M350-80LA, Conoptics Inc.), which allowed us to adjust the laser power with depth. Images of 512 × 512 pixels, corresponding to a field of view of 500 × 500 μm, were acquired at a frame rate of 30 Hz (7.5 Hz per plane).

Pre-processing of raw imaging movies was done using the Suite2p pipeline40 and involved: 1) image registration to correct for brain movement; 2) ROI extraction (that is, cell detection); and 3) correction for neuropil contamination. For neuropil correction, we used an established method41,42. We used Suite2p to determine a mask surrounding each cell’s soma, the ‘neuropil mask’. The inner diameter of the mask was 3 µm and the outer diameter was <45 µm. For each cell we obtained a correction factor, α, by regressing the fifth percentile of the raw cell signal, computed within bins of the neuropil signal (20 bins in total), against the binned neuropil signal. For a given session, we obtained the average correction factor across cells. This average factor was used to obtain the corrected individual cell traces from the raw cell traces and the neuropil signal, assuming a linear relationship. All correction factors fell between 0.7 and 0.9.
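
Our reading of that procedure can be sketched as follows; the binning and fitting details are assumptions based on the description above.

```python
import numpy as np

def neuropil_correction_factor(cell_trace, neuropil_trace, n_bins=20):
    """Estimate the neuropil correction factor (alpha) for one cell.

    Bin time points by the neuropil signal, take the 5th percentile of
    the raw cell trace within each bin, and regress it against the mean
    neuropil signal per bin; the slope estimates the contamination.
    """
    edges = np.quantile(neuropil_trace, np.linspace(0, 1, n_bins + 1))
    which = np.digitize(neuropil_trace, edges[1:-1])
    x, y = [], []
    for b in range(n_bins):
        mask = which == b
        if mask.sum() > 10:                      # skip near-empty bins
            x.append(neuropil_trace[mask].mean())
            y.append(np.percentile(cell_trace[mask], 5))
    return np.polyfit(x, y, 1)[0]                # slope of the regression

def correct_trace(cell_trace, neuropil_trace, alpha):
    """Linear neuropil correction of the raw cell trace."""
    return cell_trace - alpha * neuropil_trace
```

As stated above, the factor would be averaged across cells within a session before correcting individual traces.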

To manually curate the output of Suite2p, we used two criteria: one anatomical and one activity-dependent. One of the anatomical criteria in Suite2p is ‘area’, that is, the mean distance of pixels from the ROI centre, normalized to the same measure for a perfect disk. We used this criterion (area <1.04) to exclude ROIs that were likely to correspond to dendrites rather than somata. The activity-dependent criterion is the standard deviation of the cell trace, normalized to the standard deviation of the neuropil trace. We used this criterion to exclude ROIs whose activity was too small relative to the corresponding neuropil signal (typically with std(neuropil-corrected trace)/std(neuropil signal) < 2). Finally, we excluded cells that fired extremely rarely (once or twice within a 20-min session).
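
These criteria amount to a simple per-ROI filter. A hypothetical sketch; the event-count threshold below stands in for the 'fires more than once or twice per session' rule and is not a published parameter.

```python
import numpy as np

def keep_roi(area, corrected_trace, neuropil_trace, min_events=3):
    """Apply the curation criteria described above to one ROI.

    area : Suite2p compactness measure (mean pixel distance from the
        ROI centre, normalized to a perfect disk).
    """
    is_somatic = area < 1.04                          # exclude likely dendrites
    is_active = (np.std(corrected_trace) /
                 np.std(neuropil_trace)) >= 2         # activity above neuropil
    # Crude event count: samples exceeding 3 s.d. (illustrative threshold).
    n_events = int(np.sum(corrected_trace > 3 * np.std(corrected_trace)))
    return is_somatic and is_active and n_events >= min_events
```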

Pupil tracking

We tracked the eye of the animal using an infrared camera (DMK 21BU04.H, Imaging Source) and a zoom lens (MVL7000, Navitar) at 25 Hz. Pupil position and size were calculated by fitting an ellipse to the pupil in each frame using custom software. The X and Y positions of the pupil were taken as the centre of the fitted ellipse.
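
The original analysis used custom software; one way to implement the same idea is with OpenCV's ellipse fitting (OpenCV ≥ 4 assumed, and the intensity threshold would need tuning per recording).

```python
import cv2
import numpy as np

def pupil_from_frame(frame, threshold=40):
    """Fit an ellipse to the (dark) pupil in one grayscale eye-camera frame.

    Returns the (x, y) centre and an area proxy, or None if no fit is
    possible. The fixed threshold is illustrative only.
    """
    _, mask = cv2.threshold(frame, threshold, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    pupil = max(contours, key=cv2.contourArea)   # largest dark region
    if len(pupil) < 5:                           # fitEllipse needs >= 5 points
        return None
    (x, y), (major, minor), _angle = cv2.fitEllipse(pupil)
    return (x, y), np.pi * (major / 2) * (minor / 2)   # centre, area
```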

Electrophysiological recordings

On the day before the first recording session, we made two 1-mm craniotomies, one over CA1 (1.0 mm lateral, 2.0 mm anterior from lambda), and a second one over V1 (2.5 mm lateral, 0.5 mm anterior from lambda). We covered the chamber using KwikCast (World Precision Instruments) and the mice were allowed to recover overnight. The CA1 probe was lowered until all shanks were in the pyramidal layer, which was identified by the increase in theta power (5–8 Hz) of the local field potential and an increase in the number of detected units. The V1 probe was lowered to a depth of ~800 µm. We waited ~30 min for the tissue to settle before starting the recordings. In two mice, we dipped the probes in red-fluorescent DiI (Fig. 2a). In these mice, we had only one recording session. The other two mice underwent two and four recording sessions, respectively.

Offline spike sorting was carried out using the KlustaSuite43 package, with automated spike sorting using KlustaKwik44, followed by manual refinement using KlustaViewa43. Hippocampal interneurons were identified by their spike time autocorrelation and excluded from further analysis. Only time points with running speeds greater than 5 cm/s were included in further analyses.

Data analysis and modelling methods

See Supplementary Methods for details of analysis and models.

Reporting summary

Further information on experimental design is available in the Nature Research Reporting Summary linked to this paper.

Code availability

The custom code from this study is available from the corresponding author upon reasonable request.