
1 Introduction

Most neurons in the brains of animals possess dendrites, elongated and branched protrusions that receive synaptic contacts from other neurons. The shapes of these dendrites vary widely between animals, between brain areas, and between neuronal types within one brain area. Their branching patterns range from highly elaborate trees, many hundreds of micrometers long and with several distinct regions, to short, simple extensions with no more than one order of branching. Both the amount and the exact pattern of dendritic branching differ widely: some neurons' dendrites are confined entirely to a 2-dimensional plane while others sample a volume of brain tissue, and the density with which they sample the territory they pass through also differs markedly (Segev and London 2000; Stuart et al. 2007).

Besides these morphological differences, dendrites are also equipped with vastly different sets of ionic conductances. Ion channels conducting Na+, K+, and Ca2+ ions are distributed heterogeneously in space over the dendrites of neurons (Migliore and Shepherd 2002). The exact sets of conductances, their densities, and their spatial distributions further add to the diversity of dendrites. Different conductance distributions can make dendritic trees essentially passive, dampening, or excitable and allow no, weak, or fully regenerative back-propagation of action potentials into the dendrites (Stuart et al. 1997). How much neurons' dendrites can differ is well illustrated by mammalian cortical layer V pyramidal neurons and cerebellar Purkinje neurons, two of the most thoroughly studied neuronal cell types. To illustrate how diverse dendrites are on several levels of biological organization (ion channels, excitability, morphology, synaptic connections), we use the occasion of this book chapter to briefly review the dendrites of these two neuron types. With this, we hope to convey to the reader the complexity found in the dendrites of neurons.

In layer V pyramidal neurons, the dendrites fall into several subgroups. Close to the soma, in cortical layer V, are the basal dendrites: typically between two and five thin (<2 μm) basal dendrites grow horizontally from the soma, remain in layer V, and branch several times. Also emerging from the soma is the main apical trunk, which grows vertically towards the pia. In the region immediately above the soma, the oblique dendrites branch off from the apical dendrite. From there, the apical dendrite projects further upwards to cortical layer I, where it branches out into the apical tuft. The dendrites of a cortical layer V pyramidal neuron are covered in spines, tiny (several micrometers long) protrusions that are the target of excitatory synapses from a variety of cortical and subcortical sources. Inhibitory synapses target both spines and dendritic shafts. There are several types of spines, and their density is about 0.2/μm. Cortical layer V pyramidal neurons fire fast Na+ action potentials, which are initiated in the soma-proximal part of the axon and back-propagate into the axon and dendrites. Due to a moderate density of Na+ conductance in the dendrites, the action potentials are not completely regenerated during this back-propagation: they lose amplitude and widen by the time they reach the distal parts of the dendrites (Hay et al. 2013). Layer V pyramidal dendrites also show very interesting Ca2+ electrogenesis. In the basal dendrites, NMDA-type glutamate receptors can cause Ca2+ spikes (Major et al. 2008); in the distal dendrites, voltage-gated Ca2+ conductances can do so (Schiller et al. 2000). Back-propagating action potentials can evoke Ca2+ spikes in the distal apical dendrites when they coincide with synaptic depolarizations at these locations. These Ca2+ spikes can then propagate to the soma and axon, where they evoke further action potentials, in a ping-pong-like interaction between the distal dendrites and the soma/axon (Larkum et al. 1999). A diverse set of K+ and mixed cation conductances, several of them increasing in density with path distance from the soma (Stuart et al. 1997), holds this dendritic excitability in check and shapes the integration of synaptic inputs.

The dendrites of cerebellar Purkinje neurons are organized into one large, 2-dimensional fan. Emerging from the soma is a thick primary dendrite, which then branches evenly into a large array of higher-order dendrites that densely cover the plane of the Purkinje neuron. The dendrites of Purkinje neurons are densely covered with spines (>3/μm). The spines are the target of excitatory synapses from the parallel fibers, one of the two main groups of inputs to Purkinje neurons. The other main input comes from the climbing fibers. These synapse on the main dendrite and, via multiple synaptic contacts, cause a massive Ca2+ spike-mediated depolarization of the whole Purkinje neuron, the complex spike. In a complex spike, several Na+ action potentials ride on a Ca2+ spike envelope. In contrast, the simple spikes fired by Purkinje neurons are single Na+ action potentials. These do not back-propagate into the dendritic tree, due to the low Na+ conductance density of the Purkinje neuron dendrites and the large current sink that the multiple dendritic branches constitute. Besides Na+ and Ca2+ conductances, the dendrites of Purkinje neurons contain a diverse set of K+ and mixed cation conductances which regulate spiking and excitability (Martina et al. 2003).

The function of the dendrites of central neurons is not completely clear. While a general understanding of the function of most brain regions exists, the exact nature of the input–output transformations of individual neurons in vivo is not known with certainty. For example, we know that primary visual cortical neurons fire in response to a properly oriented stimulus in their receptive field (DeAngelis et al. 1993; Henry et al. 1974), but we do not know how exactly the individual neurons integrate all their synaptic inputs to arrive at this specific response. This issue is aggravated by the fact that in an area like the cortex the synaptic input to any neuron is many synapses removed from any sensory input or motor output. Furthermore, activity reaching cortical pyramidal neurons has often traveled through recurrent cortical connections: the representational meaning of input spike trains is simply not clear. Due to this lack of understanding of the coding function (what aspect of the external world does neuronal/dendritic activity code for?), discussion has often centered on the underlying computational function (what signal transformation do dendrites perform?). One proposal is that dendrites merely collect synaptic inputs in the regions they traverse; according to this view, it is the function of different branching patterns to sample inputs from different regions (Cuntz et al. 2008; Cuntz 2012). Another proposal is that dendrites purposely modify the incoming synaptic signals. Dendrites have been shown to boost synaptic inputs, normalize their waveforms, and locally integrate subsets of inputs before passing the result on to the soma (Berger et al. 2003; Polsky et al. 2004). Both the passive electrotonic properties of dendritic trees and their ionic conductances contribute to this modification of synaptic potentials. While we agree that dendrites serve both input sampling and input modification, the current study is concerned only with the latter.

So what are the signals dendrites have to process in the functioning brain? One class of signals obviously comprises specific sensory information, retrieved memory contents, and motor programs. But noise and background fluctuations are also ubiquitous in the brain; ion channel noise, spontaneous synaptic release, and background network activity are some examples. Some neurobiological processes which generate membrane potential fluctuations most likely do not represent relevant information, such as the stochastic opening and closing of ion channels caused by thermal fluctuations of protein structure (ion channel flicker, Hille 2001). However, even though ion channel flicker is not thought to represent information about the external environment, this stochastic process is still important in setting the operating regimes of neurons (Schneidman et al. 1998; Cannon et al. 1998).

Other forms of background activity, such as the synaptic background activity prevalent in several brain regions, do likely carry information (Destexhe and Paré 1999; but see Waters and Helmchen 2006 for a differing view). Since this information is probably contextual and not directly correlated with a stimulus presented by an experimenter, its exact nature is difficult to elucidate, and the distinction between noise and contextual fluctuations is difficult to make. Furthermore, the statistical properties of these background fluctuations (amplitude, power spectrum) could be relevant while the precise instance of the signal is irrelevant. In any case, neurons do not exist in a quiescent resting state but are exposed to ongoing noise and background fluctuations (terms used interchangeably here) from diverse sources. Background activity in the cortex is significant and state dependent. During wakefulness, rapid-eye-movement sleep, and ketamine anesthesia, cortical neurons experience a high-conductance state: a constant barrage of partially synchronized excitatory and inhibitory synaptic potentials causes significant (standard deviation > 2 mV) membrane potential fluctuations which can be approximated as self-correlated noise (Destexhe et al. 2001). Background activity in the cerebellum is equally important, with constant (>1 Hz) synaptic input from 10^6 parallel fibers depolarizing the Purkinje neurons (Rapp et al. 1992). Mechanistically, this background activity depends more on the single cells than on networks of neurons, since Purkinje neurons fire tonically even in the absence of synaptic input.

Naturally, signal processing and transmission work best in low-noise environments. The nervous system is no such low-noise environment, and the background noise could interfere with its function. Hence, processes likely evolved which compensate for the noise or even take advantage of it. But little is known about how dendrites cope with the brain's background activity. A number of studies by Destexhe and colleagues (Hô and Destexhe 2000; Destexhe and Paré 1999; Destexhe et al. 2002) have shown how cortical synaptic background activity influences the membrane properties of cortical pyramidal neurons: neurons became depolarized, their passive integration time constants shortened, and action potential back-propagation was enhanced (Hô and Destexhe 2000). The effect of cerebellar background activity is likewise believed to be a depolarization of neurons and a shortening of passive time constants (Jaeger and Bower 1994; Rapp et al. 1992). While these results are intriguing, general principles of how dendrites cope with noisy inputs are not understood.

We have recently developed an inverse method for relating dendritic function to structure (Stiefel and Sejnowski 2007; Torben-Nielsen and Stiefel 2009, 2010a, b). This method is based on applying a genetic algorithm to morphologies generated by a morphogenetic algorithm (Fig. 9.1). The genetic algorithm tests for a user-selected function and selects the top-performing morphologies for inclusion in the next generation. Genetic operations are performed on the parameter strings encoding the neural morphologies.

Fig. 9.1

Inverse approach and the simulation protocol. (a) Schematic of the inverse approach. A pool of parametrized model specifications is translated into model neurons. Then, the performance of each model at the task at hand is assessed. Based on this performance, a genetic algorithm selects and slightly mutates the model specifications. The algorithm iteratively increases the performance of the model neurons at the predefined computation. (b) Simulation protocol used to assess the models' performance. Left: Initial setup defining the soma position and the locations where synapses can be added. In the noiseless case, synapses are inserted when a dendrite is inside the "target location" (green dots). In the case of spatial noise, a stochastic distribution defines whether a synapse is actually added (green dots) or not (gray dots). In the case of temporal noise, a uniform distribution jitters the exact activation time of each synapse. (Panel a reproduced from Torben-Nielsen and Stiefel (2010a, b))

In this study, we apply this method to obtain dendritic morphologies optimized for increasingly stochastic inputs. This yields a series of artificial model neurons with which we can investigate the integration of noisy inputs.

2 Methods

For the details of our optimization method we refer to Torben-Nielsen and Stiefel (2009, 2010a, b). The computational task we optimized for in this study was a stochastic variation of the input-order detection task investigated in Torben-Nielsen and Stiefel (2009). In the original, deterministic task, a neuron receives two distinct groups of inputs at different predefined locations in space, and these synapses are activated at predefined times with a delay (Δt) between the two groups. The goal we define for the neuron is to respond as strongly as possible to one temporal order of activation of the two groups and as weakly as possible to the inverse temporal order.

This differential response (= performance, P) is defined as the ratio of the compound EPSP amplitude in the non-preferred direction to that in the preferred direction:

$$ P=\frac{A_{\mathrm{null}}}{A_{\mathrm{pref}}}, $$

with A being the EPSP amplitude at the soma.
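For concreteness, this measure can be written down in a few lines of code. The following is a minimal sketch, assuming the somatic voltage traces for the two activation orders are available as NumPy arrays; the function name and the resting potential value are ours, chosen only for illustration.

```python
import numpy as np

def input_order_performance(v_null, v_pref, v_rest=-70.0):
    """Ratio of compound EPSP amplitudes, P = A_null / A_pref.

    v_null, v_pref : somatic voltage traces (mV) for the non-preferred
    and preferred activation orders; v_rest is an assumed resting potential.
    Smaller P means better input-order detection.
    """
    a_null = np.max(v_null) - v_rest   # peak EPSP amplitude, non-preferred order
    a_pref = np.max(v_pref) - v_rest   # peak EPSP amplitude, preferred order
    return a_null / a_pref
```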

The soma is located at the center, and two synaptic target zones are defined in opposing directions at 200 μm away from the soma. The synaptic zones are spherical and have a radius of 50 μm.

In this work we introduce both spatial and temporal noise (but never both simultaneously), as sketched in Fig. 9.1b. In the deterministic task, the model neuron receives one synapse per 5 μm of dendrite that grows through the synaptic zones. In the spatial noise case, synapses are added with some level of stochasticity: a synapse is no longer necessarily inserted every 5 μm. Higher spatial noise levels mean that fewer and fewer synapses get inserted into the model neuron. Thus, the candidate locations are not modified, but the probability that a synapse is actually inserted at each location is. Because we want to exclude neurons that are "optimized" for one particular noise level, we assess the performance of each model neuron at five noise levels: in one set of runs at five noise levels between 0 and 25 %, and in another set at five noise levels between 0 and 50 %. In the deterministic case, all synapses belonging to a group are activated at either time t or at time t + Δt, with Δt = 30 ms, because 30 ms proved to elicit the strongest responses in passive model neurons (Torben-Nielsen and Stiefel 2009). Temporal noise assigns to each synapse an activation time drawn uniformly from [T − ε, T + ε], where T stands for the appropriate group time (either T = t or T = t + Δt) and ε is the magnitude of the temporal jitter, here set to 2 or 5 ms (Fig. 9.1b). All optimization parameters and electrical defaults are described in Tables 1 and 2 of Torben-Nielsen and Stiefel (2009).
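The two noise types can be summarized in code. Below is a minimal Python sketch, not the authors' original implementation: the 5 μm spacing, the skip probabilities, and the uniform jitter window follow the description above, while the function names and data layout are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng()

def place_synapses(path_lengths_in_zone, spacing=5.0, p_skip=0.0):
    """Spatial noise: candidate synapse sites every `spacing` (μm) of
    dendrite inside a target zone; each site is actually filled only
    with probability (1 - p_skip). p_skip = 0 reproduces the
    deterministic task; p_skip up to 0.25 or 0.5 spans the two noise
    ranges used in the study."""
    sites = []
    for length in path_lengths_in_zone:            # one entry per dendritic stretch
        candidates = np.arange(spacing, length, spacing)
        keep = rng.random(candidates.size) >= p_skip
        sites.append(candidates[keep])
    return sites

def activation_times(n_synapses, T, epsilon=0.0):
    """Temporal noise: each synapse in a group fires at a time drawn
    uniformly from [T - epsilon, T + epsilon]; epsilon = 0 gives the
    deterministic case, epsilon = 2 or 5 ms the jittered cases."""
    return rng.uniform(T - epsilon, T + epsilon, size=n_synapses)
```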

In contrast to the original study, where the optimization goal was a linear combination of multiple objectives, we here used a multi-objective optimization algorithm. The two objectives are (1) the performance in the task as outlined above and (2) minimization of the total dendritic length. The latter objective is inspired by the parsimony principle generally found in nature. In multi-objective optimization, selection of well-performing solutions is based on ranking solutions as "performs equally," "performs worse," or "performs better." In theory, the algorithm finds a set of solutions which "perform equally"; this set is called the Pareto front. All individuals on the Pareto front are equally good solutions in the sense that increasing the performance on one objective results in a decrease on the other objective. Thus, while there can be large variation among the solutions, they are not worse than one another, as the objectives are often conflicting. In the simulations performed here, we used an initial population size of 250 individuals and ran the optimization for 5,000 generations. To avoid overfitting to a particular instance generated by one random seed, we tested each model with five random seeds and used the average outcome as its performance. We have no way of knowing whether the genetic algorithm converged to an optimum or how close the optimization products are to a true optimum. Nevertheless, we argue that they are reasonably close because (1) in previous work the solution found by the genetic algorithm was essentially at the analytically computed optimum (Torben-Nielsen and Stiefel 2009) and (2) when distinct runs (starting at different locations in the fitness landscape due to different random seeds) converge to a similar output, we can interpret this as the algorithm having found a global optimum.
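The selection step rests on Pareto dominance between the two objectives (input-order ratio P and total dendritic length, both to be minimized). The sketch below shows this ranking in generic form; it is standard multi-objective bookkeeping, not the specific optimization code used in the study, and the example numbers are invented.

```python
def dominates(a, b):
    """True if candidate a is at least as good as b on every objective
    (here: input-order ratio P and total length, both minimized) and
    strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(scores):
    """Return the indices of non-dominated candidates.
    `scores` is a list of (P, total_length) tuples, one per model."""
    return [i for i, a in enumerate(scores)
            if not any(dominates(b, a) for j, b in enumerate(scores) if j != i)]

# Example with three hypothetical models; the first two trade off the
# objectives against each other, the third is dominated by the first.
models = [(0.65, 2000.0), (0.75, 1500.0), (0.80, 2200.0)]
print(pareto_front(models))   # -> [0, 1]
```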

3 Results

The optimization runs yielded model neurons that performed well as input-order detectors. Because we adopted a multi-objective optimization strategy, the optimization yields a set of model neurons. These models perform "not worse" than one another and reside on the Pareto front, the location representing their performance on both optimization objectives; smaller values are better. A relevant part of the set of solutions (input-order detection ratio P < 0.8 and total length < 2,100 μm) is shown in Fig. 9.2 (left). Clearly, there is a trade-off between the two objectives. We consider models aligned with the dotted arrow to be better, as these models score well on both objectives (rather than standing out in one objective while neglecting the other). For a fair model comparison, we compared the different noise-affected models at an input-order detection performance of P = 0.7. One important observation is that the noiseless models perform worse than the models in which spatial or temporal noise was introduced (Fig. 9.2a, black triangles). Additionally, the models subjected to higher noise levels (50 % spatial noise or uniform jitter of 5 ms) perform best. The model neuron performing input-order detection nearest to P = 0.7 is plotted in Fig. 9.2b. In the case of spatial noise, we plotted the synapses at the highest noise level (either 25 or 50 %). A second important observation is that at the same input-order detection performance (say, P = 0.7), the model neurons subjected to noise are smaller than the noiseless model neurons. Consequently, the model neurons subjected to higher noise levels also contain fewer synapses.

Fig. 9.2

Optimization results. (a) Relevant portion of the optimized models displayed in terms of their performance on both objectives. The dotted arrow indicates the axis along which we consider models "better," as they perform well on both objectives simultaneously. The noiseless models perform worse than models subjected to noise. Models achieving the same performance in input-order detection (y-axis) are smaller when subjected to noise during optimization. Higher noise levels (50 % spatial noise or 5 ms jitter) result in smaller neurons with fewer synapses. (b) Representative model morphologies after optimization under different noise levels: spatial noise of 25 % (S-25 %) or 50 % (S-50 %) and temporal noise with 2 ms (T-2 ms) or 5 ms (T-5 ms) jitter. The models are chosen to have similar input-order detection performance, P = 0.7. Red dots indicate the synapses. In the case of spatial noise, the synapses at the highest noise level are shown

To further elucidate why smaller dendritic trees are beneficial for performing input-order detection on noisy inputs, we used a simplified ball-and-stick model neuron. This model neuron is composed of only a soma and two unbranched dendrites, each bearing a group of four or fewer synapses; the electrical properties were the same as in the morphologically elaborate models.

We tested how the ball-and-stick model integrates stochastic and deterministic synaptic inputs. To this end, we assessed input-order detection over a range of time intervals (Δt), in both temporal orders, between the two groups of synapses. We then calculated the ratio between the maximum compound EPSP amplitudes in the preferred and non-preferred directions as a function of Δt (Fig. 9.3). In the deterministic case, the inputs arrived exactly Δt apart, whereas in the stochastic case (as in the optimization runs), they were jittered around that time.
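To convey the logic of this test, the following toy calculation mimics it with linearly summed alpha-function EPSPs instead of the full compartmental model: the two synapse groups are given different amplitudes and time constants to stand in for dendritic filtering, and the activation times are jittered as in the optimizations. All parameter values are illustrative assumptions, not those of the actual ball-and-stick simulations.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 200.0, 0.1)                     # time grid (ms)

def alpha_epsp(onset, amp, tau):
    """Somatic EPSP of one synapse, modelled as an alpha function."""
    s = np.clip(t - onset, 0.0, None)
    return amp * (s / tau) * np.exp(1.0 - s / tau)

def compound_peak(onsets_distal, onsets_proximal):
    """Peak of the linearly summed somatic EPSP; distal inputs are made
    smaller and slower than proximal ones to mimic dendritic filtering."""
    v = sum(alpha_epsp(o, amp=0.5, tau=12.0) for o in onsets_distal)
    v = v + sum(alpha_epsp(o, amp=1.0, tau=4.0) for o in onsets_proximal)
    return v.max()

def performance(n_syn, dt=30.0, eps=0.0):
    """One trial of P = A_null / A_pref with temporal jitter eps (ms).
    Preferred order: distal group first, proximal group dt later."""
    t_first = rng.uniform(10.0 - eps, 10.0 + eps, n_syn)
    t_second = rng.uniform(10.0 + dt - eps, 10.0 + dt + eps, n_syn)
    a_pref = compound_peak(t_first, t_second)       # distal -> proximal
    a_null = compound_peak(t_second, t_first)       # proximal -> distal
    return a_null / a_pref

# The spread of the ratio across trials tends to be larger with fewer
# synapses, while the mean stays similar (cf. Fig. 9.3).
for n in (1, 4):
    ratios = [performance(n, eps=5.0) for _ in range(100)]
    print(n, round(np.mean(ratios), 3), round(np.std(ratios), 3))
```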

Fig. 9.3

Simulations of stochastic input-order detection by a ball-and-stick model. (a) Simulations with one synapse per dendrite. (b) Four synapses per dendrite. Input-order detection performance (ratio between the maximum EPSP amplitudes in the preferred and non-preferred directions; deterministic simulations in bold). The thick lines represent the average, the thin lines the individual simulation runs. Note that the higher performance for a smaller number of synapses, due to the smaller EPSPs, is corrected for in the optimizations, where larger EPSPs were part of the optimization goal

While we took measures in the optimization procedure against unrealistically small compound EPSPs (which lead to trivial solutions), we employed no such measures for EPSP size in the ball-and-stick model simulations. In these neurons, smaller EPSPs performed better due to a more linear summation of small voltage deflections. The relevant comparison here, however, is between models with the same number of unitary synapses at different noise levels.

We found that, on average, all models performed the same with and without temporal input noise. Independently of the number of synapses, the mean input-order detection performance did not increase or decrease with increasing stochasticity. What did change, however, was the variance around the mean performance, which was higher the fewer synapses a neuron contained. Accordingly, the maximum input-order detection performance increased with fewer synapses, relative to the case in which all synapses of a group were activated simultaneously (Fig. 9.3).

Our interpretation of the prevalence of smaller neurons with fewer synapses in the optimizations is hence as follows:

  1. The mean input-order detection performance of neurons (independently of the number of synapses) does not change with increased noise.

  2. In contrast, the variance of the input-order detection performance is higher in neurons with few synapses, and positive outliers (very high performance) are more likely.

  3. When such cases of very high performance occur, the corresponding neurons are always rewarded by the genetic algorithm and selected for inclusion in the next generation.

  4. Over time, this enriches the population in neurons with fewer synapses.

The genetic algorithm acts on a population of neurons, which it enriches in high-performing neurons. This is also the case if the high performances are not consistent but merely more likely, as for neurons with fewer synapses.

The biological interpretation of this finding is that a population of neurons with fewer synapses per neuron can perform better than a population in which all input is relayed to all neurons. This is because some neurons with less input will perform better than the average neuron that receives more, but stochastic, inputs. Such populations are more likely to have at least one neuron reacting very strongly. In neurons with more synapses, the stochasticity averages out, and the responses of the different neurons are more similar to each other; a population of such many-synapsed neurons is more likely to miss positive outliers. We therefore hypothesize that any network which aims to detect surprisingly large events in heterogeneous input streams should be composed of neurons with few synapses.
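This population argument can be illustrated with a toy Monte Carlo calculation: if few-synapse neurons have the same mean response as many-synapse neurons but a larger trial-to-trial variance, the best response found in a population of them is, on average, larger. The distributions and parameter values below are purely illustrative assumptions, not fits to the simulations.

```python
import numpy as np

rng = np.random.default_rng(1)

def best_in_population(n_neurons, sigma, mean=1.0, trials=1000):
    """Each neuron's single-trial response is drawn from a normal
    distribution with the same mean; few-synapse neurons correspond to a
    larger sigma. Returns the average best (largest) response per trial."""
    responses = rng.normal(mean, sigma, size=(trials, n_neurons))
    return responses.max(axis=1).mean()

# Same mean response, but the high-variance ("few synapses") population
# is more likely to contain a strongly responding neuron on any trial.
print(best_in_population(100, sigma=0.05))   # many synapses: ~1.13
print(best_in_population(100, sigma=0.30))   # few synapses:  ~1.75
```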

4 Discussion

Our findings show a surprising decrease in dendritic size when neural morphologies are optimized for responding to increasingly noisy inputs. A standard way to overcome the stochasticity of the individual elements of a system is to use a large number of them; the fluctuations of the individual elements then cancel out. However, this was not the solution found by the genetic algorithm we employed to find dendritic trees optimized for input-order detection on stochastic inputs. Instead, the genetic algorithm found smaller neurons receiving fewer synapses as the stochasticity of the input streams increased.

This finding suggested to us that for input-order detection it is not crucial to cancel out noise. In fact, variation between different instances of stimulus presentation might be beneficial, much as obtaining several independent opinions on one question is. Simulations with a simplified ball-and-stick model demonstrated this mechanism and supported this proposal. Neurons with fewer synapses are more likely to pick up positive outliers when performing input-order detection. This is because the selection algorithm picked neurons from a population which, by chance, picked up a positive outlier. This selection mechanism favored small neurons, which will also respond well to positive outliers in a population of neurons in a brain area or ganglion. Note, however, that the population of neurons in the genetic algorithm is not meant as a model for a population of neurons in such a brain area or ganglion.

The computational task we investigated here, stochastic input-order detection, is an interesting and time-critical task most likely performed by a number of neural systems (Stiefel et al. 2012). However, neurons certainly perform a large number of other computational functions as well, some of which will not be enhanced by noise. Additionally, while we did study the influence of active conductances in the deterministic input-order detection task (Torben-Nielsen and Stiefel 2009), we have not investigated the role of active conductances in the stochastic version, nor did we investigate spike initiation in the soma/axon, which could further influence the optimized neuronal morphologies.

We speculate that, in contrast, neurons mainly concerned with carrying out computational functions that are harmed, or at least not benefited, by noise will be large. The cortical layer V pyramidal and cerebellar Purkinje neurons described in the introduction are unlikely to execute noise-benefited computational functions at the whole-cell level. Smaller neurons, such as cerebellar granule cells or cortical inhibitory interneurons, are more likely to execute computational functions aided by noise. Large neurons could still compute such functions on a smaller scale of neural organization, such as the level of individual spines (Branco et al. 2010; Stiefel et al. 2012).

This study also demonstrates the usefulness of our inverse method for mapping computational functions onto dendritic morphologies. Human intuition often fails at complex issues, such as relating a neural morphology to its response to noise. In essence, our approach finds an approximate inverse solution to the stochastic nonlinear partial differential equation describing the multi-compartmental model of a neuron. This is no trivial task, and surprises about function–structure relationships are inevitable. We hope to employ this approach in the future to probe a variety of computational functions of neurons.