Main

Spatial firing is often considered to be one specific example of a ‘cognitive map’—a general representation of relationships between cognitive entities7,8,11. These entities can correspond to different locations, but can also be distinct stimuli or even abstract concepts. Consistent with this idea, the firing of hippocampal cells is modulated not only by location, but also by non-spatial variables, including sensory, behavioural and internal parameters12,13. For example, hippocampal neurons respond to discrete stimuli such as sounds14, odors15, faces and objects16. Hippocampal and entorhinal neurons also respond to locations in visual space17 and can fire at different time points during temporal delay tasks (‘time cells’18,19,20,21). Finally, recent functional MRI studies have suggested that cognitive spaces defined by continuous dimensions are represented by the human hippocampal–entorhinal system9,10.

One interpretation of these findings is that any continuous variable relevant to an animal can be represented by hippocampal–entorhinal activity using a common circuit mechanism. To test this idea, we designed a ‘sound modulation task’ (SMT), in which rats changed the frequency of a sound in their environment (Fig. 1a). Animals deflected a joystick to activate a pure tone produced by a speaker; continued deflection increased the frequency along a perceptually uniform logarithmic axis22. Rewards were obtained by releasing the joystick within a fixed target frequency range. To uncouple frequency from elapsed time, we randomly varied the ‘speed’ of frequency traversal across trials. The resulting trial durations varied by up to a factor of 2 (on average, 5–10 s).

Figure 1: Sound modulation task.
a, Schematic of the SMT. The rat deflects a joystick to increase sound frequency and must release it in a target zone. J, joystick; L, lick tube; N, nosepoke; S, speaker. b, For a single session, frequencies at which the joystick was released on individual trials (bottom) and the distribution of these frequencies across trials (top). Most releases occurred early in the target zone (green). c, Same data as in b, but plotted as a function of time. The larger coefficient of variation (COV) indicates a broader distribution than in b. d, COV values of frequencies and times at the joystick release across all 189 sessions from 9 rats (blue). Red circles, median values across sessions for each rat.

Rats typically released the joystick at frequencies that were narrowly distributed early in the target zone (Fig. 1b). Across animals, the release was within the target zone on 70.8 ± 2.6% of the trials (n = 9 rats, mean ± s.e.m.). Rats did not follow a simple timing strategy, but released the joystick later during slower trials (Fig. 1c), indicating that sound frequency influenced their behaviour. In fact, joystick releases could be largely predicted by sound frequency alone with almost no added influence of elapsed time (Extended Data Fig. 1). Consequently, trial durations were more broadly distributed than sound frequencies at the release (coefficient of variation (COV) 0.146 ± 0.002 and 0.046 ± 0.005 for trial duration and sound frequency, respectively; n = 9 rats; P < 0.001, t-test; Fig. 1d). Thus, rats successfully performed the SMT and appeared to use a sound frequency-guided strategy.

We recorded from 2,208 units in the dorsal CA1 hippocampal region of 5 rats and 1,164 units in the dorsal MEC of 9 rats (Extended Data Fig. 2). We observed that 40.0% and 51.3% of cells in these regions, respectively, had firing rates that were significantly modulated during the SMT (P < 0.01, shuffle test). The activity of these cells tended to be stable across trials (Extended Data Fig. 3) and was largely confined to discrete firing fields (Fig. 2a), akin to those observed during spatial navigation. Across the population, fields clustered at trial boundaries (that is, near joystick presses and releases; Fig. 2b). However, they spanned the entire task, occurring both during and outside the sound presentation period. Neural activity exhibited other properties similar to those observed during spatial navigation, including theta modulation and precession (Extended Data Fig. 4), as well as a larger number of fields per cell in MEC than in CA1 (Extended Data Fig. 5).

Figure 2: CA1 and MEC activity in the SMT.
a, Cells that were active during the joystick press (P, cell 1) and release (R, cell 2) and during sound presentation (cell 3). Top, peri-stimulus time histograms (PSTHs). Bottom, spike raster plots, aligned to the joystick press (cells 1 and 3) or to the joystick release (cell 2) and sorted by trial duration. For cell 3, the same spiking data are also plotted as a function of frequency, with trials sorted by the frequency at the joystick release. FR, firing rate. b, Firing rates of all SMT-modulated cells across rats (882 cells of 2,208 total for CA1 and 596 cells of 1,164 total for MEC). Each row corresponds to a field; cells with multiple fields are included more than once. Time is linearly warped in order to average trials of different durations. Each row is normalized to the maximum firing rate of the field to which it is aligned, and rows are sorted by field time. Colour scale is from 0 to 1.5, accommodating fields other than the one used for alignment. Individual examples from a are marked on the right. c, PSTHs of simultaneously recorded neurons, averaged separately across trials of different durations. The sequence of activity expands and contracts with trial duration.

Some cells fired around consistent sound frequency values (for example, cell 3 in Fig. 2; Extended Data Fig. 6) independently of trial duration, forming ‘frequency fields’ analogous to place fields in spatial navigation. As a result, population activity could be viewed as a sequence of firing fields that expanded and contracted in time for trials of different durations (Fig. 2c). To quantify this frequency locking, we implemented a model that tested the strength of alignment of the neural activity to various task events (Extended Data Fig. 7). Of the fields that occurred during sound presentation, more than half in both CA1 and MEC aligned best to particular sound frequencies, whereas the rest were more strongly time-locked either to the press or to the release of the joystick (Extended Data Fig. 8).

To investigate whether SMT-modulated cells responded to particular features of the auditory stimulus or to the progression of the behavioural task itself, we presented sound frequency sweeps of the same frequencies and durations when rats were not performing the SMT (passive playback). Almost none of the CA1 cells, including the SMT-modulated ones, responded to the passive playback (1.7% compared to 31.1% during the SMT; n = 295 cells in 3 rats; P < 0.001, χ2 test; Fig. 3a, b). In another experiment, we presented additional rats with frequency sweeps, but increased the salience of these stimuli by delivering a reward at the end of each sweep (passive playback + reward). In this case, we observed task-modulated activity, but in fewer cells than during the SMT (20.2% of 248 cells in 2 rats; P < 0.001, χ2 test; Fig. 3c, d; Extended Data Fig. 9). Firing fields in this task were also wider than during the SMT (2.49 ± 0.29 s and 1.16 ± 0.05 s, median ± s.e.m.; n = 44 and 1,252 fields, respectively; P < 0.001, Wilcoxon rank-sum test; Fig. 3e). Thus, behavioural context affected the fraction of neurons activated and their temporal precision. However, coupling between actions and sounds (that is, agency) was not strictly required to engage hippocampal activity.

Figure 3: Activity depends on behavioural context.
a, Activity of the same CA1 neuron during the SMT and during passive playback (PP) of acoustic stimuli that matched those in the SMT. Top, PSTHs. Bottom, raster plots, with time linearly warped between the onset and the offset of the sound. On, sound onset; off, sound offset. b, Firing rate modulation of all 295 CA1 neurons recorded during the SMT and passive playback. ‘Normalized info’ is the mutual information between spikes and the phase of the task, divided by the average value from samples with shuffled spike timing. Points are coloured according to whether the cell was modulated by SMT and/or passive playback. c, Activity of a neuron during passive playback of acoustic stimuli that were followed by rewards (PPR). d, Cumulative histograms of the normalized information in the three tasks (295 cells for SMT and passive playback and 248 cells for PPR). Task modulation of activity is stronger during PPR than during passive playback and even stronger during the SMT. e, Cumulative histograms of the field durations during SMT and PPR. Activity shows more temporally precise task modulation during the SMT.

To test whether SMT-modulated neurons were also spatially selective, we recorded the same cells during a random foraging task, in which rats searched a spatial environment for pellets of food. We found a mixed representation of the two tasks in CA1; whereas some cells participated in only one of the two tasks, some produced firing fields in both. Of the 295 place cells, 25.1% were SMT-modulated (Fig. 4a). MEC grid cells also participated in the SMT (34.3% of 105 grid cells; Fig. 4b). Both place cells and grid cells were less likely to be SMT-modulated than other cells in their respective brain regions (34.7% of the 623 CA1 non-place cells and 46.5% of the 776 MEC non-grid cells; P < 0.001 and P < 0.02, χ2 test, respectively). However, the amount of SMT modulation across neurons was only very weakly correlated with measures of both ‘placeness’ and ‘gridness’23 (r2 = 0.0054 and r2 = 0.0053, respectively; Fig. 4c, d), suggesting that the activity patterns in the two tasks were nearly independent. The SMT firing fields of both place cells and grid cells were similar to those of other cells and spanned the entire task, including periods when the rat was immobile (Extended Data Fig. 10). Other spatial cell types, including border cells and head direction cells, were also SMT-modulated (for example, cell 9 in Fig. 4b; Extended Data Fig. 10). Thus, SMT-modulated and spatially selective neurons were not distinct subclasses within the circuit; rather, the same neural population was shared between the two tasks.

Figure 4: SMT-modulated and spatially modulated cells overlap.
a, Left, activity of CA1 cells during the SMT, plotted as in Fig. 3. Right, spatial firing rate maps for random foraging; the maximum firing rate is indicated. Cells 2 and 4 were silent during the SMT; cells 3 and 4 were silent during foraging. All firing rate scales are from 0 Hz to the nearest integer number of hertz above the maximum firing rate. b, Activity of MEC grid cells during the two tasks. Cells 5 and 6 are from module 1 in the same rat and are plotted on the same firing rate scale. Of these cells, only cell 5 was active during the SMT. Cells 7 and 8 are from modules 2 and 3, respectively. Cell 9 is a border cell. c, Normalized information for all 918 CA1 cells during the SMT, as in Fig. 3, plotted against normalized spatial information (the mutual information between spikes and the location, divided by the average value from samples with shuffled spike timing). Points are coloured and shaded according to whether the cell was a place cell and whether it was SMT-modulated. Information values in the two tasks are not expected to be similar owing to the different task structures. d, Normalized information for all 881 MEC cells during the SMT plotted against the cells’ grid scores. Points are coloured and shaded according to whether the cell was a grid cell and whether it was SMT-modulated. e, Cumulative histograms of the average SMT field width for all 48 grid cells in module 1 and all 51 grid cells in modules 2/3. Groups were separated at a grid spacing of 42 cm. Inset, distribution of grid spacings across cells and a mixture of three Gaussians fit to the distribution. Peaks corresponding to modules 1–3 are numbered.

Grid cells occur in discrete ‘modules’ with distinct spacings and widths of the firing fields24. SMT-modulated cells included grid cells from all modules detectable in our data (Fig. 4b). However, in modules with larger spacing, we observed a higher incidence of particularly wide SMT firing fields (for example, cell 8 in Fig. 4b). The distributions of field widths were different between modules 1 and 2/3 (0.78 ± 0.02 s and 1.36 ± 0.05 s, respectively; n = 48 and 51 cells, median ± s.e.m.; P < 0.01, Wilcoxon rank-sum test; Fig. 4e). Thus, grid cells with wider fields in the spatial environment tended also to have wider fields in the SMT, suggesting shared neural mechanisms. One possibility is that the SMT firing patterns corresponded to 1-dimensional slices through a hexagonal lattice25; however, the small number of fields produced by grid cells in the SMT (typically 0–3) precluded an analysis of this correspondence.

Because the SMT evolves along a continuous axis (sound frequency), it is analogous to spatial navigation on a linear track. Our results show that the SMT shares some key features of neural representation with this spatial task. Just like location, the non-spatial dimension is represented in the hippocampal–entorhinal system by discrete firing fields that continuously tile the entire behavioural task. Several other properties are shared between the two tasks, including a tendency of MEC cells to produce multiple fields2, a clustering and tightening of fields at salient features of the task26,27, and a dependence of firing on behavioural context28. Critically, spatial and non-spatial representations are produced by the same neuronal population, suggesting a common circuit mechanism for encoding fundamentally different kinds of information across tasks. Our results therefore suggest that the well-known spatial patterns in the hippocampal–entorhinal circuit may be a consequence of the continuous nature of the relevant task variables (for example, location), rather than a primacy of physical space for this network9,10,18,19,20,21.

What is the purpose of these continuous representations? In the SMT, rats did not need to represent the structure of the entire acoustic space. They could, in fact, respond to a particular sound frequency—a strategy that is also sufficient in operant tasks known to be hippocampus-independent6. However, our observations lead to an intriguing conjecture that in more complex tasks (for example, those containing memory-guided decision points), the hippocampal–entorhinal system might similarly represent arbitrary behavioural states. In this framework, task performance activates a sequence of neural activity, in which firing fields are elicited parametrically with progress through behaviour. Neighbouring and partially overlapping fields therefore represent the order and adjacency of behavioural states. This could be useful for linking events in episodic memory and for planning future actions18,29 (for example, via simulated continuous neural sequences30). Spatially localized place and grid codes might therefore be a manifestation of a general circuit mechanism for encoding sequential relationships between behaviourally relevant events. This view suggests a role for these cell types in supporting not only spatial navigation, but cognitive processes in general.

Methods

No statistical methods were used to predetermine sample size. The experiments were not randomized and the investigators were not blinded to allocation during experiments and outcome assessment.

Subjects

All animal procedures were approved by the Princeton University Institutional Animal Care and Use Committee and carried out in accordance with the National Institutes of Health standards. Subjects were adult male Long-Evans rats (Taconic). Training started at an age of ~10 weeks. Animals were placed on a water schedule in which supplementary water was provided after behavioural sessions, such that the total daily water intake was 5% of body weight.

Data were collected from 11 rats. In chronological order of their use, the first two were used for CA1 and MEC recordings in the SMT and random foraging, the next three were used for CA1 and MEC recordings in the SMT, random foraging, and passive playback experiments, the next four were used for MEC recordings only in the SMT and random foraging, and the final two were used for CA1 recordings in the passive playback + reward experiment. All data from all rats were included for analysis.

Apparatus for the sound modulation task

The apparatus was a modified rat operant conditioning chamber (30.5 cm L × 24.1 cm W × 29.2 cm H, Med Associates ENV-007CT) placed inside a custom-built sound isolation chamber. Two versions (1 and 2) of the apparatus were used: version 1 was used for three rats in the SMT experiment, while version 2 was used for the remaining six rats in the SMT and two additional rats in the passive playback experiments.

In both versions of the apparatus, rats operated a joystick (Mouser HTL4-112131AA12). In version 1, the lever arm of the joystick was extended to a total of 15 cm by attaching an aluminium rod (0.3 cm OD × 12 cm) coaxially to its existing handle. The joystick was mounted horizontally outside the chamber, with 11 cm of the handle protruding into the chamber through a cutout in the centre of the shorter wall. At rest, the handle was perpendicular to the wall of the chamber and was 4 cm above the floor. The cutout in the wall was a vertical groove (1.2 cm × 3.8 cm) that allowed the joystick arm to be deflected downward by up to 16°. Deflection of the arm required application of a force of 0.019 N per degree to the tip of the handle. A lick-tube for reward delivery was located at the centre of the opposite wall 6 cm above the floor and protruded 6 cm into the chamber. Rewards were delivered using a solenoid valve, and a blue LED was placed 2 cm above the lick-tube. A sound speaker (Med Associates ENV-224DM) was mounted on the wall directly above the joystick, and an infrared camera was used to monitor the behaviour.

In version 1 of the apparatus, rats occasionally moved their heads while deflecting the joystick. To eliminate this possible spatial confound, the following modifications were made in version 2. A custom-made nosepoke was attached at the centre of the wall that contained the joystick handle (that is, the centre of the nosepoke was horizontally aligned with the joystick handle). The nosepoke was 2 cm wide and could be triggered by breaking an infrared beam (7 cm above the floor, 3 mm from the wall). The lever arm of the joystick was shortened to 13.5 cm, with 1.5 cm protruding into the chamber; thus, deflection of the arm required a force of 0.022 N per degree at the tip. The lick-tube and the LED were positioned closer to the joystick handle on the same wall as the joystick, 8 cm to the left. The lick-tube was also shortened to 2.5 cm.

Sound modulation task

All behavioural paradigms were implemented using our software package, ViRMEn (Virtual Reality MATLAB Engine31, http://virmen.princeton.edu). Custom routines for ViRMEn were written to implement navigation in acoustic spaces and to synchronize the acquisition of behavioural and electrophysiological data. Software monitored the rat’s behaviour and defined four types of event: 1) A ‘press’ was defined as a downward deflection of the joystick exceeding 2° from the horizontal. 2) A ‘release’ was defined as a decrease in the amount of deflection to less than 1.5° from the horizontal, lasting longer than 250 ms. 3) A ‘poke’ was defined as a breaking of the infrared beam in the nosepoke. 4) An ‘un-poke’ was defined as restoration of the infrared beam for longer than 1 s.
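
For illustration, the press and release criteria can be sketched as follows (a minimal MATLAB example on a synthetic deflection trace; the sampling rate and variable names are assumptions, and the pairing of presses with releases and the nosepoke logic are omitted):

% Illustrative sketch of the press/release criteria (not the original task code).
% 'deflection' is a hypothetical joystick angle trace (degrees), sampled at 'fs' Hz.
fs = 1000;                                         % assumed sampling rate (Hz)
deflection = max(0, cumsum(randn(20*fs, 1)) * 0.01);   % synthetic example trace

isDown  = deflection > 2;                          % 'press' criterion: deflection exceeds 2 degrees
isUp    = deflection < 1.5;                        % candidate 'release': deflection below 1.5 degrees
minHold = round(0.25 * fs);                        % a release must persist for at least 250 ms

pressTimes = find(diff([0; isDown]) == 1) / fs;    % onsets of press epochs (s)

% A release is an up-crossing of the 1.5-degree threshold that stays up for 250 ms.
upOnsets     = find(diff([0; isUp]) == 1);
releaseTimes = [];
for k = 1:numel(upOnsets)
    idx = upOnsets(k) : min(upOnsets(k) + minHold - 1, numel(isUp));
    if numel(idx) == minHold && all(isUp(idx))
        releaseTimes(end+1) = upOnsets(k) / fs;    %#ok<AGROW>
    end
end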

Rats initiated trials by pressing the joystick and poking. The poke had to either precede the press (without an un-poke in between) or follow the press by less than 250 ms (without a release in between). If a press was not followed by a poke within 250 ms, the trial was not initiated, and a new trial could be started only after the joystick was released. Trials were terminated by releasing the joystick and un-poking. The un-poke could either follow the release (without a press in between) or precede the release by less than 250 ms. An un-poke that was not followed by a release within 250 ms was considered a premature termination of the trial; in this case, no reward was delivered, and a new trial could also be started only after the joystick was released. Animals using version 1 of the apparatus, which lacked a nosepoke, initiated and terminated trials by pressing and releasing the joystick, respectively.

A sound was played continuously by the speaker during the trial. At the beginning of the trial, the sound was a 2-kHz pure tone, ~80 dB SPL. Whenever the joystick was deflected by more than 2° from the horizontal, the frequency of the tone was increased using the following formula:

fn+1 = fn × 2^(α·θ·Δt)    (1)

where fn is the sound frequency at time step n, θ is the amount of joystick deflection in degrees, Δt is the duration of the time step, and α is the traversal speed, chosen randomly from a uniform distribution at the beginning of each trial. The uniform distribution was chosen for each animal such that the range of trial durations was typically 6–12 s for version 1 of the apparatus and 4–8 s for version 2 of the apparatus. At each time step n + 1, the speaker produced a logarithmic sweep of tones from fn to fn + 1. Whenever the joystick deflection was less than 2°, sound frequency was unchanged. The range 15–22 kHz was defined as the ‘target zone’. When the frequency exceeded this range, white noise (80 dB SPL) was played instead of the pure tone to indicate overshooting of the target zone.

If the trial was terminated within the target zone, the LED above the lick-tube was turned on and a reward (25 μl water) was delivered. The LED persisted for 2 s. If the trial was terminated outside the target zone, no additional stimuli were delivered. In either case, a new trial could be initiated at any following time. If the animal obtained a reward, the new trial used a new randomly selected value of the traversal speed α; otherwise the same traversal speed was repeated.
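
For illustration, equation (1) and the approach to the target zone can be simulated as follows (a minimal MATLAB sketch; the update interval and the example traversal speed are assumed values, not parameters taken from the task code):

% Illustrative simulation of the sound frequency during one trial (equation (1)).
% The update interval and the example traversal speed are assumptions of this sketch.
f      = 2000;                     % starting frequency (Hz)
target = [15000 22000];            % target zone (Hz)
dt     = 0.05;                     % assumed update interval (s)
alpha  = 0.02;                     % example traversal speed (octaves per degree per second)
theta  = 16;                       % joystick deflection (degrees), held at maximum here
t      = 0;

while f < target(1)                          % run until the target zone is reached
    if theta > 2                             % frequency changes only when deflection exceeds 2 degrees
        f = f * 2^(alpha * theta * dt);      % octave-based update, as in equation (1)
    end
    t = t + dt;
end
fprintf('Entered target zone after %.1f s (f = %.1f kHz).\n', t, f/1000);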

Passive playback experiments

Two passive playback experiments were performed—with and without a reward. For animals that did not receive a reward, passive playback was presented for 15 min immediately following the last SMT session of the day. During this time, the nosepoke, the joystick handle, and the lick-tube were covered with a plastic cover to prevent access. Sweeps of pure-tone sounds were then played with 3-s pauses between the sweeps. Each sweep was from 2 to 22 kHz, as in the SMT. The speed of traversal of the frequency range for each sweep was chosen from a uniform distribution to roughly match trial durations from preceding SMT sessions.

For the passive playback + reward experiment, we used separate rats that were never trained to operate the joystick and thus never learned an association between actions and changes to auditory stimuli. For these rats, the LED and the lick-tube were uncovered, but an insert outside the chamber blocked the movement of the joystick handle. Passive playback was the same as above, but the sound sweep was immediately followed by a reward (25 μl water) and an LED signal lasting 2 s.

Behavioural training

Behavioural shaping for the SMT required 5–6 weeks and consisted of eight distinct stages. In stage 1, rats were trained to associate the LED with a reward. The LED was turned on and a reward was simultaneously delivered at random time intervals (exponentially distributed, τ = 10 s). In stage 2 (version 2 of the apparatus only), rats were trained to poke in order to trigger the LED and reward delivery. In stage 3, a capacitive touch sensor (SparkFun MPR121) was attached to the joystick handle. Rats were additionally trained to touch the joystick handle. In stage 4, rats were trained to deflect the joystick by progressively larger amounts, until the final threshold used in the SMT was achieved.

In stage 5, sound was introduced. Initially, the traversal speed of the frequency space was constant and very high, such that the joystick needed to be deflected for <500 ms in order to reach the target zone at 15 kHz. The target zone did not have a high bound, so the animal was not penalized for overshooting; however, if frequencies exceeding 22 kHz were reached, the sound speaker produced a 22-kHz tone instead. During this stage, the traversal speed was gradually decreased, by 0.5% after each reward, until trials were ~8 s long. For some animals, the traversal speed did not change during training, but instead the starting frequency of the sound sweeps was gradually decreased from 14.9 kHz to 2 kHz. This was the longest stage of training, requiring about 3 weeks.

In stage 6, the high bound of the reward zone was introduced at 22 kHz. Initially, rats were allowed to overshoot the high bound by ~5 s without activating white noise and failing the trial, but this value was gradually decreased to 0 s. In stage 7, a second value of the traversal speed was introduced and gradually increased, such that trials using the first speed value were ~8 s long, whereas trials using the second speed value were ~4 s long. Trials using the two speed values were randomly intermingled. Stage 8 was the full version of the task, in which the entire range of traversal speeds (between the first and the second value from stage 7) was used.

Random foraging task

Random foraging experiments were performed in a square arena that was either 78 cm on the side, 61 cm high (for six rats used in random foraging) or 93 cm on the side, 61 cm high (for the remaining three rats). The walls and the floor of the arena were built using black plastic. A white cue card (28 cm W × 22 cm H) was placed in the centre of one of the walls, with the bottom edge 35 cm above the floor. Rats searched for pieces of yogurt treats (~50 mg, eCOTRICION Yogies) that were thrown into the arena one at a time, roughly every 15 s. The arena was adjacent to the acoustic navigation chamber, allowing the animal to be moved between the two tasks without the recording headstage being unplugged. We often observed cells in CA1 that had extremely low firing rates during one of the two tasks. To ensure that these cells were not lost during recording, we recorded rats on four interleaved sessions per day: two 30-min SMT sessions and two 15-min sessions in the random foraging task. Rats that received only MEC implants were recorded in a single 1-h SMT session and a 20-min random foraging session. The order of the sessions was varied across rats, depending on what appeared to be more motivating to each animal.

Once the animal was moved to the random foraging arena, a red and a green LED were plugged into the lateral edges of the recording headstage. An overhead video camera was used to record the locations of these LEDs. Thresholds were applied separately to the red and green channels of the videos, and the centres of mass of the pixels that passed the threshold were identified. A line segment connecting the red and green centres of mass was defined, and the animal’s location was defined as the midpoint of this line segment. The head direction was defined as the angle of a vector perpendicular to this line segment.

Electrophysiology

Tetrodes were constructed from twisted wires that were either PtIr (18 μm, California Fine Wire) or NiCr (25 μm, Sandvik). Tetrode tips were platinum-plated (for PtIr wire) or gold-plated (for NiCr wire) to reduce impedances to 150–250 kΩ at 1 kHz.

Microdrive assembly devices were custom-made and have been previously described31. Each device contained eight tetrodes that were independently movable using a manual screw/shuttle system adapted from ref. 32. Tetrodes were directed into the brain using a cannula that consisted of nine stainless steel tubes (0.014 inches OD, 0.0065 inches ID) soldered together into a 3 × 3 square grid and placed flush against the brain surface. One of the tubes was used for an immobile reference electrode (PtIr, 0.002 inches bare, 0.004 inches coated in PFA, 1 mm total length in the brain, 300 μm of insulation stripped at the tip). A single device was used for either CA1 or MEC recordings; two separate devices were implanted for dual CA1 and MEC recordings. One of the animals in the passive playback experiment received a dual implant into the left and right CA1.

Recordings were obtained using a previously described custom-built system31 that consisted of small headstages connected by lightweight 9-wire cables to an interface board 1.2 m above the animal. One 32-channel headstage was plugged into each 8-tetrode microdrive assembly. The system filtered (5 Hz–7.5 kHz), amplified (×1,000), and time-division multiplexed (32:1) signals from the electrode wires using an Intan RHA2132 chip and custom-designed circuitry. The multiplexed signals were relayed through a 25-channel slip-ring commutator (Dragonfly) and digitized at 1 MHz (31250 Hz per channel) using a data acquisition board (National Instruments PCI-6133). Custom MATLAB software was used to record the signals and to provide a real-time display of spikes and local field potentials.

Surgery

Rats were anaesthetized with 1–2% isoflurane in oxygen and placed in a stereotaxic apparatus. The cranium was exposed and cleaned, holes were drilled at 6 or 7 locations, and bone anchor screws (#0-80 × 3/32 inch) were screwed into each hole. A ground wire (5 mil Ag) was inserted between the bone and the dura through another hole. An antibiotic solution (enrofloxacin, 3.8 mg/ml in saline) was applied to the surface of the cranium. Craniotomies and duratomies were made above CA1, MEC, or both. A microdrive assembly was lowered to the surface of the brain and anchored to the screws with light-curing acrylic (Flow-It ALC flowable composite). Animals received injections of dexamethasone and buprenorphine after the surgery.

Recording procedures

For CA1, the centre of the electrode-guiding cannula was 3.5 mm posterior to Bregma, 2.5 mm lateral to the midline. For MEC, the cannula was implanted at a 10° tilt with electrode tips pointed in the anterior direction2. The centre of the cannula was 4.5 mm lateral to the midline, and the posterior edge of the cannula was ~0.1 mm anterior to the transverse sinus. On the day of the surgery, tetrodes were advanced to a depth of 1 mm. On the days following recovery from surgery, CA1 tetrodes were advanced until sharp-wave ripples were observed, and their waveforms were indicative of locations ~50 μm dorsal of the pyramidal cell layer33; the tetrodes were then immediately retracted by half of the distance they were advanced. This procedure was repeated every 2 or 3 days until tetrode tips were within an estimated 50–100 μm of the pyramidal cell layer. After this, tetrodes were advanced by 15–30 μm per day until large-amplitude putative pyramidal cells were observed. MEC tetrodes were advanced in steps of 60 μm per day until theta-modulated units were observed; then tetrodes were advanced by no more than 30 μm per day.

Histology

In some animals, small lesions to mark tetrode tip locations were made by passing anodal current (15 μA, 1 s) through one wire of each tetrode. All animals received an overdose of ketamine and xylazine and were perfused transcardially with saline followed by 4% formaldehyde. Brains were extracted, and sagittal sections (80 μm thick) were cut and stained with the NeuroTrace blue fluorescent Nissl stain. Locations of all tetrodes were identified by comparing relative locations of tracks in the brain with the locations of individual tetrode guide tubes within the microdrive assembly.

Data analysis

Behavioural analysis

Each trial was characterized by the duration from the press of the joystick to the release of the joystick and by the sound frequency at the moment of the release. Because joystick deflection increased sound frequency exponentially, we used a logarithmic scale and measured frequency in octaves relative to the starting frequency of 2 kHz. When animals were not engaged in the task, they still occasionally deflected the joystick—for example, by stepping or leaning on it while exploring the chamber. We observed that most of the very brief trials resulted from such behaviour; for analysis, we therefore excluded trials shorter than 3 s.

We implemented a behavioural model to determine whether animals preferentially used sound frequency, the amount of elapsed time, or a combination of the two to perform the SMT. The model consisted of two parameters: f0 (measured in octaves) and Δt (measured in seconds). We simulated each trial by assuming that the rat released the joystick at a time Δt relative to the occurrence of frequency f0. For trials in which the joystick was released before the occurrence of frequency f0, we used linear extrapolation to determine when f0 would occur if the frequency continued increasing with the average speed of the trial. We then measured the squared error between the joystick release time simulated by the model and the actual release time on each trial. Parameters f0 and Δt were optimized to minimize the mean squared error across all trials of a given behavioural session.
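
A minimal MATLAB sketch of this two-parameter fit is shown below, assuming each trial is summarized by its average traversal speed in octaves per second; the synthetic data and the use of fminsearch are illustrative choices, not the original implementation:

% Illustrative fit of the two-parameter behavioural model (f0 in octaves, dt in seconds).
% Each trial is summarized by its average traversal speed (octaves/s) and its release time.
speed       = 2.9 ./ (5 + 5*rand(200, 1));        % octaves/s; synthetic trials lasting ~5-10 s
releaseTime = 2.9 ./ speed + 0.1*randn(200, 1);   % actual release times (s), synthetic

% Predicted release time: the moment frequency f0 is (or would be) reached, plus dt.
predict = @(p) p(1) ./ speed + p(2);              % p = [f0 (octaves), dt (s)]
mse     = @(p) mean((predict(p) - releaseTime).^2);

pFit = fminsearch(mse, [1 0]);                    % optimize f0 and dt
fprintf('Fitted f0 = %.2f octaves, dt = %.2f s\n', pFit(1), pFit(2));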

Spike sorting

We filtered electrode signals using a Parks–McClellan optimal equiripple FIR filter (pass-band above 1 kHz, stop-band below 750 Hz). The sum of the four signals from each tetrode was computed, and thresholds of −3 and +3 s.d. were applied to the summed data. Peaks that exceeded these thresholds and were separated by more than 32 points were identified, and waveforms from 12 points before each peak to 19 points after each peak (1 ms total) were extracted. We computed the first three principal components of the extracted waveforms from each tetrode. Each waveform was then considered in a 7-dimensional space defined by its projection onto the three principal components and its peak-to-peak amplitudes on the four tetrode wires. Clustering was performed manually in two-dimensional projections of this space using custom-written software in MATLAB. If two clusters on the same tetrode in two subsequent recording sessions had a Mahalanobis distance of less than 20, they were considered to belong to the same unit, and data from the two sessions were pooled. Neurons whose average firing rate in any recording session exceeded 5 Hz (in CA1) or 10 Hz (in MEC) were considered putative interneurons and excluded from analysis.
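
For illustration, the waveform extraction and the 7-dimensional feature space can be sketched as follows (synthetic data; thresholding is simplified to the absolute value of the summed signal, and the manual clustering step is not shown):

% Illustrative sketch of waveform extraction and the clustering feature space (not the original code).
% 'x' stands for a hypothetical band-passed tetrode signal (4 wires, ~31.25 kHz).
fs = 31250;
x  = randn(4, 10*fs);                             % synthetic filtered data, 10 s

s   = sum(x, 1);                                  % sum across the four wires
thr = 3 * std(s);
[~, pk] = findpeaks(abs(s), 'MinPeakHeight', thr, 'MinPeakDistance', 32);
pk  = pk(pk > 12 & pk < size(x, 2) - 19);         % keep peaks with room for a full waveform

% Extract 32-sample (~1 ms) waveforms around each peak, concatenated across wires.
wf = zeros(numel(pk), 4*32);
for k = 1:numel(pk)
    w = x(:, pk(k)-12 : pk(k)+19);                % 12 samples before, 19 after (plus the peak)
    wf(k, :) = w(:)';
end

% 7-D feature space: first three principal components plus peak-to-peak amplitude per wire.
[~, score] = pca(wf);
wf3 = reshape(wf, [], 4, 32);                     % waveforms as (spike, wire, sample)
p2p = max(wf3, [], 3) - min(wf3, [], 3);          % peak-to-peak amplitude on each wire
features = [score(:, 1:3), p2p];                  % clustering was done manually in this space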

Firing in acoustic tasks

For the analysis of activity in the SMT, sessions were first broken into individual trials. The starts of the trials were defined as the midpoints between each press of the joystick (starting with the second one of the session) and the previous release. The ends of the trials were defined as the midpoints between each release of the joystick (ending with the one before the last one of the session) and the next press. Each trial therefore consisted of three time intervals: the pre-press interval, the interval between the press and the release, and the post-release interval. These intervals were different in duration across trials. For the following analyses, we therefore linearly time warped each of the three intervals to its median duration across trials. After warping, time in each of the three intervals was divided into an integer number of bins, such that the bins were on average as close as possible to 50 ms across trials. Firing rates and PSTHs were calculated in these bins.
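
For illustration, the warping and binning of a single interval can be sketched as follows (synthetic spike times; the variable names and the choice of rate normalization are assumptions):

% Illustrative sketch of linear time warping for one trial interval (not the original code).
pressT   = 0;    releaseT = 7.3;                  % one trial's press and release times (s)
dMed     = 6.0;                                   % median press-to-release duration across trials (s)
spikeTimes = sort(rand(40, 1) * releaseT);        % synthetic spike times within the trial

warped = (spikeTimes - pressT) / (releaseT - pressT) * dMed;   % stretch/compress to dMed
nBins  = round(dMed / 0.05);                      % bin count chosen so bins average ~50 ms
edges  = linspace(0, dMed, nBins + 1);
rate   = histcounts(warped, edges) / ((releaseT - pressT) / nBins);   % firing rate (Hz), using
                                                  % the trial's actual time spent per warped bin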

In CA1, some of the spikes were produced during sharp-wave ripple (SWR) events. During spatial navigation experiments, such events are typically excluded from firing rate maps by rejecting low-velocity time points. In the SMT, we instead excluded SWRs by directly detecting them in the local field potential33, as follows. Raw voltage signals from the electrodes were downsampled by a factor of 10. Signals were then band-pass filtered in the 140–230 Hz range (stop-bands below 90 and above 280 Hz) using a Parks–McClellan optimal equiripple FIR filter. The power of the band-passed signal was computed, smoothed with a 100-point square window, and the median value was measured across the four wires of each tetrode. SWRs were detected as peaks in the resulting trace that exceeded 3 s.d. and were separated by more than 312 points (100 ms). Spikes that occurred within 100 ms from each SWR were excluded from the analysis. Exclusion of these spikes did not qualitatively change any of our results, but tended to increase the ratio of the in-field firing rates to the background.
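
A simplified MATLAB sketch of this detection step is shown below; it substitutes a Butterworth band-pass filter for the equiripple FIR filter described above, uses a synthetic signal, and interprets the 3 s.d. criterion as 3 s.d. above the mean of the smoothed power trace:

% Illustrative sketch of ripple-band power detection (simplified; not the original code).
fsDown = 3125;                                    % sampling rate after 10x downsampling (Hz)
lfp    = randn(4, 60*fsDown);                     % synthetic LFP from one tetrode (4 wires)

[b, a] = butter(3, [140 230] / (fsDown/2));       % 140-230 Hz band-pass (Butterworth here)
ripple = filtfilt(b, a, lfp')';                   % zero-phase filtering on each wire
power  = movmean(ripple.^2, 100, 2);              % power, smoothed with a 100-point window
trace  = median(power, 1);                        % median across the four wires

thr = mean(trace) + 3*std(trace);                 % one interpretation of the 3 s.d. criterion
[~, swr] = findpeaks(trace, 'MinPeakHeight', thr, 'MinPeakDistance', round(0.1*fsDown));
swrTimes = swr / fsDown;                          % candidate SWR times (s); spikes within
                                                  % 100 ms of these would be excluded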

To measure the strength of the firing rate modulation in the SMT, we computed the mutual information rate between spikes and the phase of the task34 using the following formula:

I = Σi pi λi log2(λi/λ)    (2)

where λi is the mean firing rate in the ith time bin, λ is the overall mean firing rate, and pi is the fraction of time spent in the ith bin (in this case, 1/number of bins). For each cell, we then generated 100 shuffled samples in which the spike times of each trial were shifted forward in time by a random amount. (The spikes that shifted past the end of the trial were wrapped around to the beginning.) The normalized information was defined as the ratio of the information rate in the real data to the average information rate across the 100 shuffled samples.
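
For illustration, equation (2) and the shuffle normalization can be computed as follows (a minimal MATLAB sketch; the spike train is synthetic and the circular shift is applied here to a single pooled train rather than per trial):

% Illustrative computation of the information rate (equation (2)) and its shuffle normalization.
nBins = 120;
p     = ones(1, nBins) / nBins;                   % equal occupancy of the time bins
psth  = @(spikeBins) accumarray(spikeBins(:), 1, [nBins 1])' / 0.05;   % rate per 50-ms bin (Hz)

infoRate = @(lambda) sum(p .* lambda .* log2((lambda + eps) ./ (p * lambda' + eps)));

spikeBins = randi(nBins, 200, 1);                 % synthetic spike bin indices
Ireal     = infoRate(psth(spikeBins));

% Shuffle: circularly shift the spike bins by a random offset, 100 times.
Ishuf = zeros(100, 1);
for s = 1:100
    shifted  = mod(spikeBins + randi(nBins) - 1, nBins) + 1;
    Ishuf(s) = infoRate(psth(shifted));
end
normalizedInfo = Ireal / mean(Ishuf);             % 'normalized information' used in the text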

In many cells, the information rate was high compared to shuffled samples owing to the presence of prominent firing fields. However, in some cells this occurred because of small differences in the background firing rate between periods of time during and outside the sound presentation (for example, cell 6 in Fig. 4b). We specifically wanted to characterize cells that showed strong peaks in the firing rate. Therefore, we computed a P value as the fraction of shuffled samples for which the peak firing rate in the PSTH was higher than in the PSTH of un-shuffled data. Firing rates of cells with P < 0.01 were considered to be significantly modulated by the SMT.

To detect firing fields, we smoothed the PSTH of each cell with a 20-point square window. We then defined a threshold equal to 2 s.d. of the firing rate, clipped to be no lower than 0.2 Hz and no higher than 1 Hz. Any maximum of the smoothed PSTH that exceeded this threshold was considered to be the peak of a firing field. Two neighbouring fields were then merged if they were separated by less than 2 s or if all values of the firing rate between them exceeded 75% of the smaller of the two peak firing rates. To determine the full extent of each field, we subtracted the baseline, defined as the fifth percentile of the firing rate, from the PSTH. The extent of the field was then taken to be the contiguous period containing the peak in which the baseline-subtracted PSTH exceeded 50% of its peak value.
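
For illustration, the thresholding and field-extent steps can be sketched as follows (synthetic PSTH; the field-merging step is omitted):

% Illustrative sketch of firing field detection (simplified; not the original code).
binW = 0.05;                                      % bin width (s)
psth = max(0, 3*exp(-((1:200) - 80).^2 / 50) + 0.2*randn(1, 200));   % synthetic PSTH (Hz)
sm   = conv(psth, ones(1, 20)/20, 'same');        % smooth with a 20-point square window

thr = min(max(2*std(sm), 0.2), 1);                % 2 s.d., clipped to the range 0.2-1 Hz
[pks, locs] = findpeaks(sm, 'MinPeakHeight', thr);   % candidate field peaks

% Field extent: contiguous region around the peak where the baseline-subtracted
% PSTH exceeds 50% of the baseline-subtracted peak.
base  = prctile(sm, 5);
above = (sm - base) > 0.5 * (pks(1) - base);
lo = locs(1); while lo > 1 && above(lo-1), lo = lo - 1; end
hi = locs(1); while hi < numel(sm) && above(hi+1), hi = hi + 1; end
fieldWidth = (hi - lo + 1) * binW;                % field duration (s)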

Analyses of the passive playback experiments were the same as above, but the onsets and offsets of the sound sweeps were used as anchor points instead of the presses and the releases of the joystick.

Analysis of theta oscillations

To analyse theta oscillations, the voltage from each electrode was band-pass filtered with a Parks–McClellan optimal equiripple FIR filter (pass-band: 6–12 Hz, stop-bands below 1 Hz and above 17 Hz) and the median value across all wires of a tetrode was measured. Forward and reverse filtering was implemented (MATLAB command filtfilt) in order to produce no phase shift in the signal. The theta phase was determined by measuring the angle of the Hilbert transform of the filtered signal.

To quantify theta precession, we considered all firing fields that occurred between the press and the release of the joystick. For each field, we considered spike times, linearly warped between 0 and 1 (corresponding to the joystick press and the release, respectively) and the phases of theta at the spike times (measured between 0° and 360°). Values of theta phase were then circularly shifted in 1° increments, with values exceeding 360° wrapping back to 0°. For each shift from 1 to 360°, linear regression was fit to the relationship between theta phase and the warped time. The shift for which this linear regression had the smallest mean squared error was chosen, and the slope of the theta precession was determined from the linear regression at that shift.
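
For illustration, the circular-shift regression can be sketched as follows (synthetic spike phases; not the original analysis code):

% Illustrative sketch of the circular-shift regression for the precession slope.
warpedT = rand(300, 1);                           % spike times warped to [0, 1] within a field
phase   = mod(360 - 200*warpedT + 20*randn(300, 1), 360);   % synthetic precessing phases (deg)

bestErr = inf;
for shift = 1:360
    ph  = mod(phase + shift, 360);                % circularly shift the phases
    b   = polyfit(warpedT, ph, 1);                % linear regression of phase on warped time
    err = mean((polyval(b, warpedT) - ph).^2);
    if err < bestErr
        bestErr = err;
        slope   = b(1);                           % slope at the best shift (deg per unit of
    end                                           % warped time) defines the precession slope
end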

We found that the frequency of the theta oscillations was stable for the first several seconds of a behavioural trial, but tended to increase near the end of the trial in some rats. To quantify theta frequency, we therefore only considered the first 3 s of each trial. Frequency was determined by locating the peak in the multi-taper power spectral density estimate (MATLAB command pmtm with the time-bandwidth product of 4) between 6 and 12 Hz.

Alignment to task events

We implemented a model to measure how well the activity of a given cell aligned to the press of the joystick, to the release of the joystick, and to sound frequency. We considered only the time period between the press and the release of the joystick. First, all trials longer than 3 s were sorted by duration and grouped into five equal-sized ‘groups’, from the fastest to the slowest trials. (If the number of trials was not divisible by 5, some of the fastest groups contained one extra trial). For each group i from 1 to 5, we defined di as the average duration of trials in that group. We then determined the number of bins Ni as the duration of the fastest trial in the ith group divided by 50 ms and rounded down to the nearest integer. Each trial in the ith group was binned into Ni bins, the average firing rate was computed in each bin, and a PSTH was computed by averaging the firing rates across all trials in the group and smoothing with a 20-point square window. Thus, the five PSTHs contained the firing rates Fik, where i is the group number and k is the bin number from 1 to Ni.

Cells could have multiple fields during the sound presentation period, and different fields occasionally appeared to align differently to task events. We therefore performed the analysis separately for each field. We defined the period during which the firing rate was above 20% of the maximum firing rate within a firing field and set firing rates outside this period to 0.

On average, the centre of the kth bin in the ith group was at a certain time relative to the press of the joystick; we defined this time as pik. It could be computed as pik = (k − 0.5)di/Ni. The centre of this bin relative to the release of the joystick could be computed as rik = pik − di. We also defined fik as the average frequency (on a logarithmic scale) of all sounds that were played during the time periods confined by the bin. Finally, we normalized all of these variables to a range from 0 to 1 as follows. For pik, we determined the smallest and largest values across all bins and PSTHs, pmin and pmax. We then computed the normalized values pik′ = (pik − pmin)/(pmax − pmin). The same procedure was repeated to compute rik′ and fik′.

Next, we defined model parameters αpress, αrelease and αfrequency and defined a parametric variable βik for each kth bin in the ith PSTH:

βik = αpress pik′ + αrelease rik′ + αfrequency fik′    (3)

Each ith PSTH could now be described by the β values of all bins and the corresponding firing rates: (βik, Fik) for all k from 1 to Ni. This parametric variable has the feature that the ratios of the three α coefficients determine the extent to which its value scales with the three real variables pik, rik and fik.

We next determined the set of parameters (αpress, αrelease, αfrequency) for which the five PSTHs were maximally correlated with one another. For each pair of PSTHs i and j, we first determined the range of β values on which these two PSTHs overlapped. This range was from max(βi1, βj1) to min(βiNi, βjNj). We defined 50 values of β that were evenly spaced between these two limits of the range. We then used Fourier interpolation (MATLAB interpft) to compute each of the two PSTHs at these 50 values of β and computed the cross-correlation between the two sets of 50 values. This procedure was repeated for each of the 10 pairs of PSTHs, and the values of cross-correlation were averaged across all pairs. We asked at which values of (αpress, αrelease, αfrequency) the average cross-correlation value was maximal. Since the value of cross-correlation depended only on the ratios of the α parameters, not their magnitudes, we constrained the three parameters to the unit sphere. We then used standard MATLAB optimization routines to optimize over points on the unit sphere. Examples of how the parameterized PSTHs varied across different values of (αpress, αrelease, αfrequency) are shown in Extended Data Fig. 7.
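
For illustration, the alignment objective for one candidate parameter set can be sketched as follows (synthetic per-bin variables; linear interpolation with interp1 is used here in place of the Fourier interpolation described above, and the optimization over the unit sphere is not shown):

% Illustrative computation of the alignment objective for one candidate (apress, arelease, afrequency).
nGrp = 5;  a = [0 0 1];                           % candidate alphas (pure frequency alignment)
beta = cell(nGrp, 1);  FR = cell(nGrp, 1);
for i = 1:nGrp
    n       = 60 + 10*i;                          % number of bins in group i (synthetic)
    Tp      = linspace(0, 1, n);                  % normalized time from the press (pik' in the text)
    Tr      = linspace(0, 1, n);                  % normalized time to the release (rik')
    Fq      = linspace(0, 1, n);                  % normalized sound frequency (fik')
    beta{i} = a(1)*Tp + a(2)*Tr + a(3)*Fq;        % parametric variable, as in equation (3)
    FR{i}   = exp(-(Fq - 0.5).^2 / 0.02);         % synthetic frequency-locked firing field
end

% Average correlation across the 10 pairs of PSTHs, each compared on 50 common beta values.
scores = [];
for i = 1:nGrp
    for j = i+1:nGrp
        b  = linspace(max(beta{i}(1), beta{j}(1)), min(beta{i}(end), beta{j}(end)), 50);
        ri = interp1(beta{i}, FR{i}, b);
        rj = interp1(beta{j}, FR{j}, b);
        c  = corrcoef(ri, rj);
        scores(end+1) = c(1, 2);                  %#ok<AGROW>
    end
end
alignment = mean(scores);                         % the quantity maximized over the unit sphere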

Each field was classified as a press-aligned, release-aligned, or frequency-aligned field by determining whether the point (αpress, αrelease, αfrequency) in the 3D space of model parameters was closest to (1, 0, 0), (0, 1, 0) or (0, 0, 1), respectively. For each field, we also estimated the uncertainty of the model parameters by performing bootstrap analysis on the individual trials using 100 bootstrapped samples. Fields for which more than one of the model parameters (αpress, αrelease or αfrequency) was significantly different from 0 according to the bootstrap analysis were considered to show mixed representation of task parameters.

Firing during random foraging

For the analysis of the random foraging task, only time points with instantaneous speed exceeding 5 cm/s were used. Animals’ location values were sorted into a 40 × 40 grid of bins. The number of spikes and the amount of time spent in each bin (occupancy) were calculated, and both values were smoothed with a 7 × 7 point Hamming filter. The firing rate in each bin was then defined as the ratio of the smoothed number of spikes to the smoothed occupancy. For each cell, we also generated 100 shuffled samples, in which the spikes were shifted along the trajectory of the animal by a random amount between 20 s and the duration of the recording session minus 20 s.

To detect place cells, we calculated ‘spatial information’34—the mutual information rate between spikes and location—using the same formula as above (equation 2), but using the 1,600 spatial bins instead of time bins. Place cells were defined as cells for which the information rate exceeded 99% of the values for the shuffled samples.
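
For illustration, the rate-map and spatial-information computations can be sketched as follows (synthetic trajectory and spikes; the speed filtering and the shuffling step are omitted):

% Illustrative sketch of the spatial rate map and spatial information (not the original code).
fsPos = 50;                                       % assumed position sampling rate (Hz)
pos   = cumsum(randn(2, 20*60*fsPos), 2);         % synthetic 20-min trajectory
pos   = 1 + 39 * (pos - min(pos, [], 2)) ./ (max(pos, [], 2) - min(pos, [], 2));  % map into 40x40 bins
binXY = round(pos);
spk   = rand(1, size(pos, 2)) < 0.01;             % synthetic spike indicator per position sample

occ = accumarray(binXY', 1, [40 40]) / fsPos;     % occupancy (s) per spatial bin
nSp = accumarray(binXY(:, spk)', 1, [40 40]);     % spike counts per spatial bin

h    = hamming(7) * hamming(7)';                  % 7x7 Hamming smoothing kernel
occS = conv2(occ, h, 'same');
nSpS = conv2(nSp, h, 'same');
rateMap = nSpS ./ occS;                           % smoothed rate map (Hz); NaN in unvisited bins

valid  = occS(:) > 0;
p      = occS(valid) / sum(occS(valid));          % occupancy probability per visited bin
lam    = nSpS(valid) ./ occS(valid);              % firing rate per visited bin (Hz)
lamBar = p' * lam;
spatialInfo = sum(p .* lam .* log2((lam + eps) ./ (lamBar + eps)));   % bits/s, as in equation (2)
% A cell would be classified as a place cell if spatialInfo exceeded the 99th percentile
% of the same quantity computed on trajectory-shifted (shuffled) spike trains.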

To detect grid cells, we computed the grid score23, using the exact procedure described in ref. 31. The grid score measured the spatial correlation of a cell’s rate map to its own rotations at 60° and 120° and compared it to the correlation at 30°, 90° and 150° rotations. Firing rate maps with symmetry that was specific to 60° had high grid scores. We measured the 95th percentile of the grid scores across all shuffled samples from all the MEC cells we recorded. In our dataset, this value was 0.46. Cells whose grid score exceeded this value were considered grid cells. Grid spacing was determined by computing the firing rate autocorrelation, selecting the 6 peaks in the autocorrelation closest to the peak at (0,0) and measuring their average distance from (0,0). We detected fewer grid cells in the smaller environment that we used than in the larger one. This is consistent with previous studies (for example, ref. 35) and might potentially be due to an insufficient number of fields for reliable grid detection or to boundary influences on the firing of grid cells36,37. We therefore verified all comparisons of grid and non-grid cells on the subset of cells that were recorded in the larger environment.
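
A simplified MATLAB sketch of the grid-score comparison is shown below; it applies the rotations directly to a synthetic rate map and omits refinements of the full procedure in ref. 31:

% Illustrative grid-score comparison (simplified; synthetic rate map, no masking).
rateMap = conv2(rand(40), hamming(7)*hamming(7)', 'same');   % synthetic stand-in map

corrAt = @(deg) corr(rateMap(:), ...
    reshape(imrotate(rateMap, deg, 'bilinear', 'crop'), [], 1), 'rows', 'complete');

% One common way of comparing: minimum of the 60/120-degree correlations minus the
% maximum of the 30/90/150-degree correlations.
gridScore = min(corrAt(60), corrAt(120)) - max([corrAt(30), corrAt(90), corrAt(150)]);
% Cells whose grid score exceeded the shuffle-based criterion (0.46 in this dataset)
% were classified as grid cells.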

To detect border cells, we used the border score, described in ref. 5. This score captured cells whose activity was selectively adjacent to one or more walls of the environment. Border cells were defined as cells whose border score and spatial information were both above the 99th percentile of the corresponding values measured on the shuffled samples.

To detect head direction cells, we used the exact procedure described in ref. 31. In brief, we first computed the directional stability index3,38 by measuring the correlation between head direction tuning curves on the two halves of the recording session. We then measured the directional selectivity38 as the length of the vector average of the tuning curve in polar coordinates. Head direction cells were defined as cells whose directional stability and selectivity were both above the 99th percentile of the corresponding values measured on the shuffled samples.
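
For illustration, the stability and selectivity measures can be sketched as follows (synthetic head-direction data; the 6° bin width is an assumption):

% Illustrative sketch of the directional stability and selectivity measures.
hd  = 360 * rand(1, 60000);                       % synthetic head direction samples (deg)
spk = rand(1, 60000) < 0.02 * (1 + cosd(hd - 90));   % synthetic directionally tuned spikes

edges  = 0:6:360;  ctrs = edges(1:end-1) + 3;     % assumed 6-degree bins
tuning = @(idx) accumarray(discretize(hd(idx), edges)', double(spk(idx))', [60 1]) ./ ...
                accumarray(discretize(hd(idx), edges)', 1, [60 1]);

half1 = 1:30000;  half2 = 30001:60000;
r = corrcoef(tuning(half1), tuning(half2));
stability = r(1, 2);                              % correlation between the two session halves

tc = tuning(1:60000);
selectivity = abs(sum(tc .* exp(1i * deg2rad(ctrs'))) / sum(tc));   % mean vector length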

Data availability statement

The datasets generated and/or analysed during the current study are available from the corresponding author on reasonable request.