1 Introduction

Understanding the relationship between the external stimulus of a spiking neuron and the action potentials it generates is critical to explaining the underlying mechanism of neural information processing. While information processing and transmission in our nervous system is carried out by trains of spikes, our senses receive analog inputs from sensory cells which generate graded potentials in response to a continuous stimulus. The mapping of this continuous stimulus into a series of spikes by the neuron is considered an encoding mechanism. In studying the encoding mechanism, one tries to find the best model structure for a neuron and determine its internal parameters so that it produces the same output as a real neuron, given the same input. Equally important is the determination of the continuous stimulus of the neuron solely by observing its spike train, namely finding the decoding mechanism. In the decoding mechanism, one assumes an encoding model and then finds its internal parameters and its input signal as those that are most likely to generate the observed spike train. Our aim in this article is to propose an effective method to reconstruct the continuous input signal of a spiking neuron by decoding the observed spike train at its output.

Understanding the neuronal encoding mechanism and finding an appropriate model for a spiking neuron has been one of the major concerns of neuroscientists. There are two general types of spiking neuronal models: compartmental models and phenomenological models (Gerstner and Kistler 2002; Burkitt 2006). Compartmental models are based on the popular Hodgkin–Huxley (H–H) model, which describes the membrane potential of the neuron in terms of the electrophysiological principles of its different ionic channels. Such models are highly accurate in predicting the behavior of a spiking neuron, but they are overly complex for interpreting results or for simulating a large network of neurons. In contrast, phenomenological models disregard the underlying electrophysiological details of the action potential generation mechanism and focus on reproducing the general behavior of a spiking neuron. Such models are generally simple and are often used for analyzing the behavior of neuronal systems. There are also some combined compartmental and integrate-and-fire models (Benedetto and Sacerdote 2013; Bressloff 1995), but these are few compared to the above two main groups.

The standard integrate-and-fire (IF) model is the most popular phenomenological model (Gerstner and Kistler 2002; Burkitt 2006). In the IF model, the state of the neuron is characterized by its membrane potential, which receives excitatory or inhibitory contributions from synaptic inputs arriving from other neurons. These inputs, each weighted by its respective synaptic strength, are often modeled as an injected current (Gerstner and Kistler 2002; Burkitt 2006). An action potential (spike) is generated when the membrane potential reaches a threshold.

The IF model indeed constitutes a reasonable compromise between mathematical complexity and neurophysiological relevance. More sophisticated forms of the IF model have also been developed to reproduce behaviors of real neurons that the standard IF model cannot (Izhikevich 2003; Mihalas and Niebur 2009; Brette and Gerstner 2005; Breen et al. 2003; Jolivet et al. 2004; Naud and Gerstner 2012). However, the main drawback of these modified versions of the standard IF model is the sophistication added to capture the observed neural phenomena. Most of these models are nonlinear systems of differential equations, which makes their analytical investigation difficult. That is why the simple IF model is still widely used by many researchers to analyze various aspects of neuronal modeling, and it is also used here for analyzing the decoding mechanism.

Two general approaches have been taken for analyzing the encoding and decoding mechanisms of the IF model and its variants: a deterministic approach and a stochastic approach. In the deterministic approach, the synaptic inputs and the parameters of the neuron are assumed to be deterministic functions or constants (Bruckstein et al. 1988; Jolivet et al. 2006; Sanderson 1980; Rudolph and Destexhe 2006; Shlizerman and Holmes 2012). The presynaptic input current modulates the spike train at the output of the neuron; therefore, the input can be reconstructed by knowing the modulation process (neuronal model) and finding an appropriate inverse algorithm. In the stochastic approach, the synaptic inputs to the neuron and its internal parameters are considered random variables (Bruckstein et al. 1988; Buonocore et al. 2010; Gestri 1971; Inoue et al. 1995; Paninski et al. 2004; Sacerdote and Giraudo 2013). The decoding mechanism is then formulated in a probabilistic framework, and the input signal and model parameters are estimated by optimizing a stochastic metric.

Generally, the problem of estimating the input from the output spikes of a neuron is ill-posed (Kim and Shinomoto 2012). Traditional approaches focused mainly on the interspike interval (ISI) distribution and the analysis of its histogram. Recent methods, however, rely more on modeling the encoding mechanism of the neuron and finding an appropriate inverse algorithm. Besides estimating the stimulus, some of these model-based methods also pursue the internal parameters of the model. Most often, only the ISI data are assumed to be available; in some other scenarios, additional information such as the subthreshold voltage trace is also given.

By considering the diffusion process in the leaky integrate-and-fire (LIF) model, namely the stochastic Ornstein–Uhlenbeck (OU) neuronal model, the authors in Inoue et al. (1995) used the first- and second-order moments of the ISI to estimate the mean input per unit time and the positive square root of the variance per unit time of the presynaptic current to the neuron. These authors also concluded that for a satisfactory estimation, the output spike train must exhibit a high-frequency firing activity. Their work was further developed by Kim and Shinomoto (2012), who assumed a small-amplitude random input current and proposed a stochastic estimation algorithm to determine the same parameters. These authors constructed a nonlinear formula by inverting the forward neuronal transformation of input signals to the occurrence times of the output spikes. The forward transformation was modeled as a set of radial basis functions and polynomial functions with coefficients determined using the first-passage time of the OU model.

By having the ISI information and measuring the membrane depolarization voltage between spikes, the authors in Lansky et al. (2006, 2010) estimated the input signal parameters of the OU model. They used two different estimation methods: the regression method and the maximum-likelihood (ML) method. Although it has been shown that the maximum-likelihood method is guaranteed to find the globally optimal solution (Paninski et al. 2004), these authors found that the regression method gives more consistent results than the maximum-likelihood method.

The ML method was also employed by Mullowney and Iyengar (2008) to estimate the input parameters of the OU model based on the numerical inversion of the Laplace transform of the first-passage time probability density function and by Dong et al. (2011) for the generalized leaky integrate-and-fire model proposed in Mihalas and Niebur (2009). A similar approach can also be found in Ditlevsen (2007).

It has been shown that if one ignores the fact that the data are produced under the constraint of not crossing a boundary, the resulting ML estimator will be biased (Bibbona et al. 2008, 2010; Giraudo et al. 2011). It is often neglected that the neural membrane data come from a time interval between a resetting and the occurrence of a spike. This implies that a spike has not yet happened on the time interval since the previous resetting, and all the data recorded up to that time must be subthreshold. This imposes an additional condition which changes the probabilistic description from the unconditioned one, and serious errors may arise by ignoring this fact. The problem has been treated by some authors by taking the presence of the boundary into account (Bibbona et al. 2010; Giraudo et al. 2011).

By incorporating noise in the LIF model, each spike provides a noisy measurement of the underlying membrane potential, and by using the Bayesian formalism, this relationship can be inverted to infer the posterior distribution over stimuli. Using this idea, the authors in Gerwinn et al. (2009, 2011) obtained the posterior distribution over stimuli as a function of the observed spike trains. By optimizing this distribution, they could reconstruct time-varying continuous stimuli modeled by a finite set of basis functions with Gaussian-distributed coefficients. In a similar manner, three techniques were proposed in Pillow et al. (2001) for spike decoding: the maximum a posteriori (MAP) estimate, the mutual information between the stimulus and the response, and change-point detection based on marginal likelihood. Unfortunately, such stochastic algorithms are quite involved, and their underlying assumptions are hard to verify.

Instead of taking a stochastic approach, in this paper we formulate the decoding problem in a deterministic framework and take as the objective the minimization of the norm of the difference between the actual ISIs and those generated by the reconstructed input signal. By expanding the input signal into a set of orthogonal basis functions, we show that this minimization is converted to a linear regression problem, and we derive a simple matrix identity to reconstruct the input signal. The system noise is still considered as a random term in the membrane potential, and the input stimulus can itself be a realization of a stochastic process and be estimated by our algorithm over the period of observation. The proposed algorithm can also be considered an extension of our previous work (Seydnejad and Kitney 2001), which was developed in a different context, to LIF neuronal models, together with an analysis of its results in noisy conditions.

The organization of the paper is as follows. In the next section, the encoding mechanism of the LIF model and its governing equations are described. Our proposed algorithm for input signal reconstruction is explained in Sect. 3. The properties of the proposed method are highlighted in Sect. 4. The performance of the new algorithm in reconstructing different stimuli is evaluated by presenting four examples in Sect. 5. Finally, concluding remarks are drawn in Sect. 6.

2 The leaky integrate-and-fire model

The governing equation of a standard LIF model of a spiking neuron is expressed by the following first-order linear differential equation (Gerstner and Kistler 2002; Burkitt 2006),

$$\begin{aligned} \tau \frac{\mathrm{d}u(t)}{\mathrm{d}t}=-u(t)+Ri(t) \end{aligned}$$
(1)

in which u(t) represents the membrane potential, R denotes the leakage resistance, \(\tau =RC\) is the time constant, and C is the capacitance of the neuron's membrane. The input current i(t) represents the sum of the synaptic currents generated by the firings of the neuron's presynaptic cells. A spike is generated when u(t) reaches the threshold voltage \(U_{\mathrm{THR}} \). Generation of a spike resets u(t) to the resting membrane potential \(U_{\mathrm{RES}} \), whereupon another action potential generation cycle starts. The neuron is called leaky since, in the absence of synaptic input, the membrane potential decays with the time constant \(\tau \) to its resting potential.

Based on this description, the block diagram of a LIF neuron can be drawn as shown in Fig. 1a. The differential equation (1) is represented by its Laplace-domain transfer function \(1/({s+1/\tau })\) with the input (presynaptic) current \(i_\mathrm{c} (t)=i(t)/C\). The comparator block generates a spike (action potential) whenever \(u(t)=U_{\mathrm{THR}}\) and simultaneously resets the membrane voltage to \(U_{\mathrm{RES}} \).

Fig. 1

a Block diagram of a leaky integrate-and-fire (LIF) model. b Membrane potential and the generated spikes of the LIF model for a constant input current and typical parameters

Figure 1b displays the membrane potential u(t) and the spike train s(t) for a neuron with typical values \(R=20\,\hbox {M}\Omega \), \(C=1\,\hbox {nF}\), when \(i_\mathrm{c} (t)\) is assumed to be a constant current \(I_\mathrm{cc} =1.6\,\hbox {nA}\). In such a scenario, a set of action potentials with constant interspike intervals \(\Delta T_k =T_k -T_{k-1} =\Delta T\) is produced (suprathreshold regime), where \(T_k, k=1,2,\ldots ,K\), denotes the occurrence time of the kth spike. Decreasing \(I_\mathrm{cc}\) will increase \(\Delta T_k \) until a point is reached where the injected current dissipates through the leak (time constant \(\tau \)) and no action potential is generated anymore (subthreshold regime). With a time-varying \(i_\mathrm{c} (t)\), we would observe a time-varying \(\Delta T_k \). Our goal is to find \(i_\mathrm{c} (t)\) from the values of the spike times \(T_k \). Our assumption here is that \(i_\mathrm{c} (t)\) is the sum of a nonzero constant \(I_\mathrm{cc} \) and a zero-mean time-varying current \(i_\mathrm{vc} (t)\). A time-varying signal superimposed on a constant mean value has also been assumed as the stimulus current in many stochastic-based models (Mullowney and Iyengar 2008; Picchini et al. 2008; Lansky et al. 2006; Iolov et al. 2014). This means that the neuron cannot be quiescent (suprathreshold regime); it fires at an average rate determined by \(I_\mathrm{cc} \), and any variation in \(\Delta T_k \) is therefore attributed to \(i_\mathrm{vc} (t)\). It is further assumed that \(|{i_\mathrm{vc} (t)}|<I_\mathrm{cc}, \forall t\), which ensures that the net presynaptic current is always positive. An excitatory presynaptic activity causes a positive change in \(i_\mathrm{vc} (t)\), whereas an inhibitory presynaptic activity causes a negative change in \(i_\mathrm{vc} (t)\). Although the overall stimulus \(i_\mathrm{c} (t)\) can become negative (i.e., \(i_\mathrm{vc} (t)<-I_\mathrm{cc})\) for some limited periods of time and our method can handle such scenarios, we use the assumption \(|{i_\mathrm{vc} (t)}|<I_\mathrm{cc}, \forall t\) to ensure that the neuron keeps generating action potentials during the period of observation. When \(i_\mathrm{vc} (t)=0\), our model exhibits a tonic activity as a result of the positive \(I_\mathrm{cc}\), which is assumed to be large enough to create the condition \(u(t)=U_{\mathrm{THR}} \).
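Before turning to decoding, a minimal sketch of this encoding side may help fix ideas. It simulates Eq. (1) by forward-Euler integration in the normalized units used later in Sect. 5 (\(U_{\mathrm{THR}} =1\), \(U_{\mathrm{RES}} =0\), \(\tau =2\)); the function name, step size, and duration are illustrative choices of ours, not part of the paper.

```python
import numpy as np

def lif_spike_times(i_c, dt=0.01, tau=2.0, u_thr=1.0, u_res=0.0):
    """Integrate Eq. (1), rewritten as du/dt = -u/tau + i_c(t) since
    i_c = i/C, and return the threshold-crossing times T_k."""
    u, spikes = u_res, []
    for n, i_n in enumerate(i_c):
        u += dt * (i_n - u / tau)         # forward-Euler step
        if u >= u_thr:                    # comparator of Fig. 1a
            spikes.append((n + 1) * dt)   # record the spike time T_k
            u = u_res                     # reset to the resting potential
    return np.array(spikes)

# A constant suprathreshold current gives tonic firing with constant ISIs,
# Delta T = tau * ln(tau * I_cc / (tau * I_cc - U_THR)) = 2 ln 2 here.
T = lif_spike_times(np.full(3000, 1.0))
print(np.diff(T))
```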

3 Estimation of the LIF stimulus by basis functions

Although the governing equation of a LIF neuron expressed as (1) is a linear differential equation, the reset following each threshold crossing introduces a strong nonlinearity into the model. This nonlinearity is nevertheless needed to convert the continuous input signal into the discrete spike events of the model. Due to its presence, one cannot use linear system theory to determine the input signal just from the output and the system transfer function. Consequently, we need to find out how the occurrence times of spikes are related to the input current \(i_\mathrm{c} (t)\). Without loss of generality, we henceforth assume a zero resting membrane potential \(U_{\mathrm{RES}} =0\) for simplicity (Paninski et al. 2004); this means that the membrane potential (including the threshold) is simply shifted upward with no effect on spike timings. Integrating both sides of (1) gives (assuming a spike has occurred at \(t=0\)),

$$\begin{aligned} u(t)+\frac{1}{\tau }\int _0^t {u(\xi )\,\mathrm{d}\xi } =\int _0^t {i_\mathrm{c} (\xi )\,\mathrm{d}\xi } \end{aligned}$$
(2)

For each interspike interval, (2) can be written as,

$$\begin{aligned}&\left[ {0, T_1} \right] \rightarrow u(T_1 )+\frac{1}{\tau }\int _0^{T_1} {u(t)\mathrm{d}t=\int _0^{T_1} {i_\mathrm{c} (t)\mathrm{d}t}}\nonumber \\&\left[ {T_1, T_2} \right] \rightarrow u(T_2 )+\frac{1}{\tau }\int _{T_1}^{T_2} {u(t)\mathrm{d}t=\int _{T_1}^{T_2} {i_\mathrm{c} (t)\mathrm{d}t}}\\&\cdots \cdots \cdots \nonumber \\&\left[ {T_{K-1}, T_K} \right] \rightarrow u(T_K )+\frac{1}{\tau }\int _{T_{K-1}}^{T_K} {u(t)\mathrm{d}t=\int _{T_{K-1}}^{T_K} {i_\mathrm{c} (t)\mathrm{d}t}}\nonumber \end{aligned}$$
(3)

where \(u(T_k)=U_{\mathrm{THR}}, k=1,2,\ldots ,K\), for the kth interval. Let us now assume that \(i_\mathrm{c} (t)\) can be expanded into the set of orthogonal basis functions \(\varphi ^{\prime }_l (t), l=0,\ldots ,L\), where \(\varphi ^{\prime }_l (t)=\mathrm{d}\varphi _l (t)/\mathrm{d}t\) and the \(\varphi _l (t)\)'s are continuous functions with continuous derivatives over \([T_0, T_K]\). We further assume that the \(\varphi ^{\prime }_l (t)\)'s constitute a closed finite-dimensional subspace of the Hilbert space (Luenberger 1997) comprising all square-integrable functions on the interval \([T_0, T_K]\), of which \(i_\mathrm{c} (t)\) is a member. Then \(i_\mathrm{c} (t)\) can be approximated by the linear combination of the \(\varphi ^{\prime }_l (t)\) as,

$$\begin{aligned} i_\mathrm{c} (t)=\sum _{l=0}^L {\alpha _l \varphi ^{\prime }_l (t)} \end{aligned}$$
(4)

in which the \(\alpha _l \)'s are constant parameters which should be identified. The goal of finding \(i_\mathrm{c} (t)\) is thus converted to determining the coefficients \(\alpha _l \). With \(i_\mathrm{c} (t)\) expressed as (4), \(u(t)\) will be the response of the first-order differential equation (1) to two inputs, a constant input and a time-varying excitation. Consequently, \(u(t)\) over one interspike interval can be written as,

$$\begin{aligned} u(t)=fe^{-t/\tau }+\sum _{l=0}^L {g_l \varphi ^{\prime }_l (t)},\quad \forall t\in (T_{k-1}, T_k) \end{aligned}$$
(5)

where f and \(g_l \) are unknown constants which should be determined. Using (5) and recalling \(u(T_k)=U_{\mathrm{THR}} \), we can write (3) as,

$$\begin{aligned} \int _0^{T_1} {\left\{ {\sum _{l=0}^L {\alpha _l \varphi ^{\prime }_l (t)} -\frac{1}{\tau }fe^{-t/\tau }-\frac{1}{\tau }\sum _{l=0}^L {g_l \varphi ^{\prime }_l (t)}} \right\} \mathrm{d}t=U_{\mathrm{THR}}} \end{aligned}$$
(6)

If \(\gamma _l\) is defined as,

$$\begin{aligned} \gamma _l =\alpha _l -g_l /\tau \end{aligned}$$
(7)

then (6) becomes,

$$\begin{aligned} \sum _{l=0}^L {\gamma _l \left\{ {\varphi _l (T_1)-\varphi _l (0)} \right\} +} f\left\{ {e^{-T_1 /\tau }-1} \right\} =U_{\mathrm{THR}} \end{aligned}$$
(8)

Since we assume the constant term \(I_\mathrm{cc}\) in \(i_\mathrm{c} (t)\), the best selection for \(\varphi _0 (t)\) is \(\varphi _0 (t)=t\). This converts (8) to (9),

$$\begin{aligned} \sum _{l=1}^L {\gamma _l \left\{ {\varphi _l (T_1)-\varphi _l (0)} \right\} +\gamma _0 T_1} +f\left\{ {e^{-T_1 /\tau }-1} \right\} =U_{\mathrm{THR}} \end{aligned}$$
(9)

Rearranging the terms in (9) yields,

$$\begin{aligned} \sum _{l=1}^L {\gamma _l \left\{ {\varphi _l (T_1)-\varphi _l (0)} \right\} =U_{\mathrm{THR}} +f\{1-e^{-T_1 /\tau }\}-\gamma _0 T_1} \end{aligned}$$
(10)

Similarly for the intervals \([{T_1, T_2}]\), we obtain,

$$\begin{aligned}&\sum _{l=1}^L {\gamma _l \left\{ {\varphi _l (T_2)-\varphi _l (T_1)} \right\} } =U_{\mathrm{THR}} \nonumber \\&\quad +f\left\{ {e^{-T_1 /\tau }-e^{-T_2 /\tau }} \right\} +\gamma _0 [{T_1 -T_2}] \end{aligned}$$
(11)

In the same manner for any interspike interval \([{T_{k-1}, T_k}]\), we can write,

$$\begin{aligned}&\sum _{l=1}^L {\gamma _l \left\{ {\varphi _l (T_k)-\varphi _l (T_{k-1})} \right\} } =U_{\mathrm{THR}}\nonumber \\&\quad +f\left\{ {e^{-T_{k-1} /\tau }-e^{-T_k /\tau }} \right\} +\gamma _0 [{T_{k-1} -T_k}] \end{aligned}$$
(12)

Now let us define,

$$\begin{aligned} \beta _{k,k-1} =U_{\mathrm{THR}} +f\left\{ {e^{-T_{k-1} /\tau }-e^{-T_k /\tau }} \right\} +\gamma _0 [{T_{k-1} -T_k}] \end{aligned}$$
(13)

Then (12) becomes,

$$\begin{aligned} \sum _{l=1}^L {\gamma _l \left\{ {\varphi _l (T_k)-\varphi _l (T_{k-1})} \right\} } =\beta _{k,k-1} \end{aligned}$$
(14)

If the \(\beta _{k,k-1} \)'s were known, we could construct a linear system of equations from (14) for \(k=1,\ldots, K\), solve it for the \(\gamma _l \)'s, and then obtain the \(\alpha _l \)'s from (7) given the membrane time constant \(\tau \). However, we still need to know \(g_l \) in (7), and f and \(\gamma _0 \) must be known to compute \(\beta _{k,k-1}\). To determine f and \(g_l \), we note that the general solution (5) should satisfy (1) and its initial conditions over any interval \([{T_{k-1}, T_k}]\). Thus, by substituting (5) in (1), we obtain,

$$\begin{aligned}&-\frac{1}{\tau }fe^{-t/\tau }+\sum _{l=0}^L {g_l \varphi ^{\prime \prime }_l (t)} +\frac{1}{\tau }fe^{-t/\tau }\nonumber \\&\qquad +\frac{1}{\tau }\sum _{l=0}^L {g_l \varphi ^{\prime }_l (t)} =\sum _{l=0}^L {\alpha _l \varphi ^{\prime }_l (t)} \end{aligned}$$
(15)

To find f and \(g_l \) from (15), \(\varphi ^{\prime }_l (t)\) and \(\varphi ^{\prime \prime }_l (t)\) should be given. We have already set \(\varphi _0 (t)=t\) due to the presence of the constant current \(I_\mathrm{cc} \). For \(\varphi _l (t), l=1,\ldots ,L\), different selections of basis functions are possible, such as sinusoidal, Legendre, or Laguerre functions, depending on which set best describes \(i_\mathrm{vc} (t)\) in a finite-dimensional subspace. Without loss of generality, we assume here a set of complex sinusoidal basis functions (with L even) as,

$$\begin{aligned} \varphi _l (t)=e^{j\omega _l t},\quad l=1,\ldots ,L,\quad \omega _l =2\pi f_l \end{aligned}$$
(16)

Using (16) in (15) and equating both sides, we obtain,

$$\begin{aligned} \left\{ {{\begin{array}{l@{\quad }l} -g_l \omega _l^2 +\frac{jg_l \omega _l}{\tau }=j\alpha _l \omega _l , &{} l=1,\ldots ,L \\ \frac{g_0}{\tau } = \alpha _0 &{} l=0 \\ \end{array}}} \right. \end{aligned}$$
(17)

Therefore,

$$\begin{aligned} \left\{ {{\begin{array}{l@{\quad }l} g_l =\frac{\alpha _l /\tau -j\alpha _l \omega _l}{\omega _l^2 +1/\tau ^{2}}, &{} l=1,\ldots ,L \\ {g_0 =\alpha _0 \tau } &{} \\ \end{array}}} \right. \end{aligned}$$
(18)

Equation (18) gives the value of \(g_l \) in terms of \(\alpha _l \). Using the second equation of (18) in (13) and noting (7) yields,

$$\begin{aligned} \beta _{k,k-1} =U_{\mathrm{THR}} +f\left\{ {e^{-T_{k-1} /\tau }-e^{-T_k /\tau }} \right\} \end{aligned}$$
(19)

For finding f, the initial condition \(u(T_{k-1})=0\) over the interval \([{T_{k-1}, T_k}]\) is used in (5) to obtain,

$$\begin{aligned} u(T_{k-1})=fe^{-T_{k-1} /\tau }+\sum _{l=1}^L {j\omega _l g_l e^{j\omega _l T_{k-1}}} +g_0 =0 \end{aligned}$$
(20)

Therefore,

$$\begin{aligned} f=-e^{T_{k-1} /\tau }\left( {j\sum _{l=1}^L {\omega _l g_l e^{j\omega _l T_{k-1}}} +g_0} \right) \end{aligned}$$
(21)

Now by having f and \(g_l \), we can write (14) in a more tangible form by defining \(\phi _l (T_{k,k-1})=\varphi _l (T_k)-\varphi _l (T_{k-1})\). Thus,

$$\begin{aligned} \sum _{l=1}^L {\gamma _l \phi _l (T_{k,k-1})} =\beta _{k,k-1} \end{aligned}$$
(22)

For \(k=1,\ldots ,K,\) Eq. (22) produces the following matrix identity.

$$\begin{aligned}&\left[ {{\begin{array}{c@{\quad }c@{\quad }c@{\quad }c} {\phi _1 (T_{1,0})}&{} {\phi _2 (T_{1,0})}&{} {\ldots }&{} {\phi _L (T_{1,0})} \\ {\phi _1 (T_{2,1})}&{} {\phi _2 (T_{2,1})}&{} {\ldots }&{} {\phi _L (T_{2,1})} \\ {\ldots }&{} {\ldots }&{} {\ldots }&{} {\ldots } \\ {\phi _1 (T_{K,K-1})}&{} {\phi _2 (T_{K,K-1})}&{} {\ldots }&{} {\phi _L (T_{K,K-1})} \\ \end{array}}} \right] \nonumber \\&\quad \times \left[ {{\begin{array}{c} {\gamma _1} \\ {\gamma _2} \\ \vdots \\ {\gamma _L} \\ \end{array}}} \right] =\left[ {{\begin{array}{c} {\beta _{1,0}} \\ {\beta _{2,1}} \\ \vdots \\ {\beta _{K,K-1}} \\ \end{array}}} \right] \end{aligned}$$
(23)

Now we manipulate (23) to express it directly in terms of \(\alpha _l\). By defining

$$\begin{aligned} c_l =\tau \left( {1-j\omega _l \tau } \right) /\left( {\omega _l^2 \tau ^{2}+1} \right) \end{aligned}$$
(24)

we will have from (18),

$$\begin{aligned} g_l =c_l \alpha _l \end{aligned}$$
(25)

which can consequently convert (23) to (26).

$$\begin{aligned}&\left[ {{\begin{array}{c@{\quad }c@{\quad }c@{\quad }c} {\left( {1-c_1 /\tau } \right) \phi _1 (T_{1,0})}&{} {\left( {1-c_2 /\tau } \right) \phi _2 (T_{1,0})}&{} {\ldots }&{} {\left( {1-c_L /\tau } \right) \phi _L (T_{1,0})} \\ {\left( {1-c_1 /\tau } \right) \phi _1 (T_{2,1})}&{} {\left( {1-c_2 /\tau } \right) \phi _2 (T_{2,1})}&{} {\ldots }&{} {\left( {1-c_L /\tau } \right) \phi _L (T_{2,1})} \\ {\ldots }&{} {\ldots }&{} {\ldots }&{} {\ldots } \\ {\left( {1-c_1 /\tau } \right) \phi _1 (T_{K,K-1})}&{} {\left( {1-c_2 /\tau } \right) \phi _2 (T_{K,K-1})}&{} {\ldots }&{} {\left( {1-c_L /\tau } \right) \phi _L (T_{K,K-1})} \\ \end{array}}} \right] \nonumber \\&\left[ {{\begin{array}{c} {\alpha _1} \\ {\alpha _2} \\ \vdots \\ {\alpha _L} \\ \end{array}}} \right] =\left[ {{\begin{array}{c} {\beta _{1,0}} \\ {\beta _{2,1}} \\ \vdots \\ {\beta _{K,K-1}} \\ \end{array}}} \right] \end{aligned}$$
(26)

Substituting f in (19) with its value from (21) gives (27) over the interspike interval \([{T_{k-1}, T_k}]\),

$$\begin{aligned} \beta _{k,k-1}= & {} U_{\mathrm{THR}} -e^{T_{k-1} /\tau }\left( {g_0 +j\sum _{l=1}^L {c_l \alpha _l \omega _l e^{j\omega _l T_{k-1}}}} \right) \nonumber \\&\times \left( {e^{-T_{k-1} /\tau }-e^{-T_k /\tau }} \right) \end{aligned}$$
(27)

To further simplify (27), define,

$$\begin{aligned} d_{k,l}= & {} j\left( {e^{-(T_k -T_{k-1})/\tau }-1} \right) c_l \omega _l e^{j\omega _l T_{k-1}} \end{aligned}$$
(28)
$$\begin{aligned} h_k= & {} U_{\mathrm{THR}} +g_0 \left( {e^{-(T_k -T_{k-1})/\tau }-1} \right) \end{aligned}$$
(29)

Now (27) becomes,

$$\begin{aligned} \beta _{k,k-1} =h_k +\sum _{l=1}^L {d_{k,l} \alpha _l} \end{aligned}$$
(30)

Using (30) in (26) gives the following matrix identity,

$$\begin{aligned} \left[ {{\begin{array}{c@{\quad }c@{\quad }c@{\quad }c} {\left( {1-c_1 /\tau } \right) \phi _1 (T_{1,0})-d_{1,1}}&{} {\left( {1-c_2 /\tau } \right) \phi _2 (T_{1,0})-d_{1,2}}&{} {\ldots }&{} {\left( {1-c_L /\tau } \right) \phi _L (T_{1,0})-d_{1,L}} \\ {\left( {1-c_1 /\tau } \right) \phi _1 (T_{2,1})-d_{2,1}}&{} {\left( {1-c_2 /\tau } \right) \phi _2 (T_{2,1})-d_{2,2}}&{} {\ldots }&{} {\left( {1-c_L /\tau } \right) \phi _L (T_{2,1})-d_{2,L}} \\ {\ldots }&{} {\ldots }&{} {\ldots }&{} {\ldots } \\ {\left( {1-c_1 /\tau } \right) \phi _1 (T_{K,K-1})-d_{K,1}}&{} {\left( {1-c_2 /\tau } \right) \phi _2 (T_{K,K-1})-d_{K,2}}&{} {\ldots }&{} {\left( {1-c_L /\tau } \right) \phi _L (T_{K,K-1})-d_{K,L}} \\ \end{array}}} \right] \left[ {{\begin{array}{c} {\alpha {}_1} \\ {\alpha {}_2} \\ \vdots \\ {\alpha {}_L} \\ \end{array}}} \right] =\left[ {{\begin{array}{c} {h_1} \\ {h_2} \\ \vdots \\ {h_K} \\ \end{array}}} \right] \nonumber \\ \end{aligned}$$
(31)

If

$$\begin{aligned} \varvec{\Phi }= & {} \left[ {{\begin{array}{c@{\quad }c@{\quad }c@{\quad }c} {\left( {1-c_1 /\tau } \right) \phi _1 (T_{1,0})-d_{1,1}}&{} {\left( {1-c_2 /\tau } \right) \phi _2 (T_{1,0})-d_{1,2}}&{} {\ldots }&{} {\left( {1-c_L /\tau } \right) \phi _L (T_{1,0})-d_{1,L}} \\ {\left( {1-c_1 /\tau } \right) \phi _1 (T_{2,1})-d_{2,1}}&{} {\left( {1-c_2 /\tau } \right) \phi _2 (T_{2,1})-d_{2,2}}&{} {\ldots }&{} {\left( {1-c_L /\tau } \right) \phi _L (T_{2,1})-d_{2,L}} \\ {\ldots }&{} {\ldots }&{} {\ldots }&{} {\ldots } \\ {\left( {1-c_1 /\tau } \right) \phi _1 (T_{K,K-1})-d_{K,1}}&{} {\left( {1-c_2 /\tau } \right) \phi _2 (T_{K,K-1})-d_{K,2}}&{} {\ldots }&{} {\left( {1-c_L /\tau } \right) \phi _L (T_{K,K-1})-d_{K,L}} \\ \end{array}}} \right] \end{aligned}$$
(32)
$$\begin{aligned} \varvec{\alpha }= & {} \left[ {{\begin{array}{c@{\quad }c@{\quad }c} {\alpha _1}&{} \ldots &{} {\alpha _L} \\ \end{array}}} \right] ^{T} \end{aligned}$$
(33)
$$\begin{aligned} \mathbf{h}= & {} \left[ {{\begin{array}{c@{\quad }c@{\quad }c} {h_1}&{} \ldots &{} {h_K} \\ \end{array}}} \right] ^{T} \end{aligned}$$
(34)

are defined, then (31) can be written as,

$$\begin{aligned} \varvec{\Phi \alpha }=\mathbf{h} \end{aligned}$$
(35)

The unknown coefficients \(\alpha _l \) can now be found as the least squares (LS) solution of (35),

$$\begin{aligned} \varvec{\alpha }=\left( {\varvec{\Phi }^{T}\varvec{\Phi }} \right) ^{-1}\varvec{\Phi }^{T}{} \mathbf{h} \end{aligned}$$
(36)

To solve (35), we still need to know \(g_0 \) (or equivalently \(\alpha _0)\). By assuming \(i_\mathrm{vc} (t)=0\), we obtain from (5) and (21) that,

$$\begin{aligned}&u(t)=fe^{-t/\tau }+g_0 \end{aligned}$$
(37)
$$\begin{aligned}&f=-g_0 e^{T_{k-1} /\tau } \end{aligned}$$
(38)

Substituting (38) in (37) gives,

$$\begin{aligned} u(t)=g_0 \left( {1-e^{\left( {T_{k-1} -t} \right) /\tau }} \right) \end{aligned}$$
(39)

Over any interval \(\left[ {T_{k-1}, T_k} \right] \), a spike is generated when \(u(T_k)=U_{\mathrm{THR}} \), therefore,

$$\begin{aligned} u(T_k)=U_{\mathrm{THR}} =g_0 \left( {1-e^{\left( {T_{k-1} -T_k} \right) /\tau }} \right) \end{aligned}$$
(40)

Replacing \(g_0 =\alpha _0 \tau \) from (18) in (40) and solving for \(\alpha _0 \) gives,

$$\begin{aligned} \alpha _0 =\frac{U_{\mathrm{THR}}}{\tau \left( {1-e^{\left( {T_{k-1} -T_k} \right) /\tau }} \right) }=\frac{U_{\mathrm{THR}}}{\tau \left( {1-e^{-\Delta T_k /\tau }} \right) } \end{aligned}$$
(41)

Since the parameters \(U_{\mathrm{THR}} \) and \(\tau \) of the model are given, from (41) \(g_0 \) and consequently \(h_k \) can be determined. Now (31) can be solved for unknown coefficients \(\alpha _l \).

In the above algorithm, the values of the membrane time constant \(\tau \) and threshold voltage \(U_{\mathrm{THR}} \) are assumed to be given, as in many articles (Kim and Shinomoto 2012; Ditlevsen 2007). These parameters can be measured by electrophysiological methods or deduced from the properties of other similar neurons. Furthermore, although \(i_\mathrm{vc} (t)\) is not zero over a single interspike interval in practice, to obtain \(\alpha _0\) from (41) we can assume that it has zero mean over a sufficiently long interval (Kim and Shinomoto 2012; Lansky et al. 2006; Mullowney and Iyengar 2008; Seydnejad and Kitney 2001). Therefore, we can replace \(\Delta T_k \) in (41) by the average of several interspike intervals. The complete reconstruction procedure is sketched in code below.
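The following sketch assembles Eqs. (24), (28), (29), (32), and (41) and solves the linear regression. It is a minimal illustration under the stated assumptions (known \(\tau \) and \(U_{\mathrm{THR}} \), spike times \(T_0 =0<T_1<\cdots <T_K\)); the function names are our own, and NumPy's pseudoinverse-based solver stands in for the normal-equations formula (36).

```python
import numpy as np

def reconstruct_stimulus(T, tau, u_thr, freqs):
    """Recover alpha_0 and alpha_l from the spike times T[0..K] (T[0] = 0)."""
    w = 2 * np.pi * np.asarray(freqs)                  # omega_l, l = 1..L
    Tk, Tk1 = T[1:], T[:-1]                            # T_k and T_{k-1}
    dT = Tk - Tk1                                      # interspike intervals
    alpha0 = u_thr / (tau * (1 - np.exp(-dT.mean() / tau)))   # Eq. (41)
    g0 = alpha0 * tau                                  # Eq. (18), l = 0
    c = tau * (1 - 1j * w * tau) / (w**2 * tau**2 + 1)        # Eq. (24)
    phi = np.exp(1j * np.outer(Tk, w)) - np.exp(1j * np.outer(Tk1, w))
    d = 1j * (np.exp(-dT / tau) - 1)[:, None] * (c * w) \
        * np.exp(1j * np.outer(Tk1, w))                # Eq. (28)
    h = u_thr + g0 * (np.exp(-dT / tau) - 1)           # Eq. (29)
    Phi = (1 - c / tau) * phi - d                      # Eq. (32)
    alpha, *_ = np.linalg.lstsq(Phi, h.astype(complex), rcond=None)
    return alpha0, alpha, w

def i_c_hat(t, alpha0, alpha, w):
    """Evaluate Eq. (4): i_c(t) = alpha_0 + sum_l alpha_l * j w_l * e^{j w_l t}."""
    return alpha0 + np.real(np.exp(1j * np.outer(t, w)) @ (1j * w * alpha))
```

When the frequencies are supplied in \(\pm \) pairs (see the grid helper in the next section), the estimated coefficients come out conjugate-symmetric up to round-off, so taking the real part in i_c_hat only discards numerical residue.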

4 Implementation remarks

The principle of the above algorithm lies in the minimization of the 2-norm of the distance between the occurrence times of the spikes generated by the reconstructed input current and the actual spikes. The coefficients of the basis functions constructing the input signal are adjusted such that the spikes they generate fall as close as possible to the actual spikes. This minimization manifests itself as the LS solution of Eq. (35). This is similar in spirit to methods based on stochastic modeling of the LIF neuron, where the optimum signal is determined using, for example, the Fisher information measure together with the distribution and variability of the interspike intervals (Lansky et al. 2007; Brunel and Nadal 1998). Our approach uses the same notion but in a deterministic framework.

The decoding problem of a LIF model can be viewed as reconstructing its continuous-time input signal by observing the discrete events generated by the LIF nonlinear transformation of this input signal. Viewed from this perspective, the reconstruction indeed amounts to the recovery of a continuous-time signal from its non-uniform samples. It has been shown in Lazar (2004) and Lazar and Pnevmatikakis (2008) that bandlimited stimuli encoded with an integrate-and-fire neuron can be perfectly recovered from the spike train at its output, provided that the difference between any two consecutive values of the time sequence is bounded by the inverse of the Nyquist rate. This is also in agreement with the analysis of other researchers (Bayly 1968; French and Holden 1971). Consequently, we postulate here that the variations in the presynaptic current to the LIF model are limited to such a rate. This in turn dictates the range of the frequencies in the selected basis functions in (16). The maximum selected frequency \(f_\mathrm{high} \) in the basis functions must therefore be set to half of the inverse of the mean interspike interval of the given spike train. The lowest-frequency component \(f_\mathrm{low} \) can be set to (almost) zero or to an appropriate value if a priori information about the frequency distribution of the stimulus is known.

Once \(f_\mathrm{low} \) and \(f_\mathrm{high} \) are determined, the frequency resolution \(\Delta f=f_{l+1} -f_l \) can be selected. A smaller \(\Delta f\) means a better approximation in reconstructing the input current. However, a very small value of \(\Delta f\) imposes a high computational load and may cause numerical instability in solving the LS equation (36). Besides, a smaller \(\Delta f\) means a larger number of basis functions; hence, the algorithm demands a longer record of output spikes for input signal reconstruction. Finding the best solution is therefore a trade-off between the recorded length of data and the required accuracy. A simple frequency-grid construction following these rules is sketched below.
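As a small illustration of these rules of thumb, the helper below builds the grid of \(\pm \) frequency pairs used by the reconstruction sketch of Sect. 3; the function name and the default \(f_\mathrm{low} =\Delta f\) (our stand-in for "almost zero") are assumptions of this sketch.

```python
import numpy as np

def frequency_grid(spike_times, df, f_low=None):
    """Frequencies for Eq. (16): f_high is half the inverse mean ISI."""
    f_high = 0.5 / np.diff(spike_times).mean()
    f_low = df if f_low is None else f_low        # '(almost) zero' by default
    f = np.arange(f_low, f_high + 1e-12, df)
    return np.concatenate([-f[::-1], f])          # +/- pairs keep i_vc real (L even)
```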

Even though we use a deterministic approach in the signal modeling, the effect of neuronal (system) noise can still be considered in our algorithm by adding a random signal to the membrane potential (Lansky et al. 2006). Although this is the common approach to including system noise, some authors have instead incorporated it into the model as a random perturbation of the threshold level (Dong et al. 2011; Gerwinn et al. 2009). However, it has been shown that small variations in the threshold voltage can be equivalently modeled by similar variations in the membrane potential (Seydnejad 2008). Therefore, we consider the effect of noise in our model as a random perturbation of the membrane voltage. The stimulus can also be the realization of a random process and can still be well reconstructed by the algorithm; in such scenarios, the proposed algorithm provides the best fit to this signal in the subspace spanned by the selected basis functions. Both of these cases are considered in our simulations in the following section.

In our proposed method, the input current is approximated by a set of basis functions. A question which naturally arises here is how much error we incur in such an approximation. The following Lemma elaborates on the closeness of the reconstructed signal to the actual stimulus.

Lemma 1

Let \(\hat{{i}}_\mathrm{vc} (t)\) denote the reconstructed stimulus obtained by Eq. (4), where the basis functions are determined by Eq. (16) with the parameters given by Eq. (36); then one of the following is true.

  1. (i)

    \(\hat{{i}}_\mathrm{vc} (t)=i_\mathrm{vc} (t)\) if \(i_\mathrm{vc} (t)=\sum _{l=1}^L {j\omega _l \alpha _l e^{j\omega _l t}}, l=1,\ldots ,L, \omega _l =2\pi f_l \)

  2. (ii)

    \(\hat{{i}}_\mathrm{vc} (t)=i_\mathrm{vc} (t)\) if \(i_\mathrm{vc} (t)\) is limited to the frequency band \([f_\mathrm{low}, f_\mathrm{high}]\), \((f_\mathrm{low} \ne 0)\) and \(\Delta f\rightarrow 0, L\rightarrow \infty , K\rightarrow \infty ,\) where \(\Delta f=f_l -f_{l-1} \), \(f_l =\omega _l /2\pi \).

  3. (iii)

    For an arbitrary stimulus, the integral of the error signal vanishes over each interspike interval, \(\int _{T_{k-1}}^{T_k} \left( {\hat{{i}}_\mathrm{vc} (t)-i_\mathrm{vc} (t)} \right) \mathrm{d}t= 0, k=1,\ldots ,K\).

Proof

  1. (i)

    This is the trivial part of the Lemma. Conceptually, it means that the actual stimulus is fully recoverable if it is originally a combination of trigonometric functions and the same basis functions are employed for its estimation. The proof follows directly from the uniqueness of the LS solution of (35).

  2. (ii)

    To prove this part, we first express \(i_\mathrm{vc} (t)\) through its inverse Fourier transform,

    $$\begin{aligned} i_\mathrm{vc} (t)=\int _{-\infty }^{+\infty } {I_\mathrm{vc} (f)e^{j2\pi ft}df} \end{aligned}$$
    (42)

    where \(I_\mathrm{vc} (f)\) denotes the Fourier transform of \(i_\mathrm{vc} (t)\). Eq. (42) can be written as:

    $$\begin{aligned} i_\mathrm{vc} (t)=\mathop {\lim }\limits _{\Delta f\rightarrow 0} \sum _{r=-\infty }^{+\infty } {I_\mathrm{vc} (r\Delta f)e^{j2\pi r\Delta ft}\Delta f} \end{aligned}$$
    (43)

    Recalling the bandlimited property of \(i_\mathrm{vc} (t)\) and invoking an appropriate change of variable yield,

    $$\begin{aligned}&i_\mathrm{vc} (t)\nonumber \\&\quad =\mathop {\lim }\limits _{\Delta f\rightarrow 0} \sum _{p=-P_\mathrm{high}}^{+P_\mathrm{high}} {I_\mathrm{vc} (f_\mathrm{low} +p\Delta f)e^{j2\pi (f_\mathrm{low} +p\Delta f)t}\Delta f}\nonumber \\ \end{aligned}$$
    (44)

    in which \(P_\mathrm{high} \) is determined such that \(f_\mathrm{low} +P_\mathrm{high} \Delta f=f_\mathrm{high} \). Therefore, full recovery of the input signal is possible if \(\Delta f\rightarrow 0\) and the number of basis functions approaches infinity, \(L\rightarrow \infty \). For this infinite number of basis functions, we must of course assume that an infinite amount of information is available, meaning that \(K\rightarrow \infty \).

  3. (iii)

    The actual stimulus encountered in practice often does not belong to the basis function space. Thus, we cannot reconstruct the stimulus with no error. However, based on (3), for the actual stimulus we can write,

    $$\begin{aligned} \int _0^{T_k} {i_\mathrm{c} (t)\mathrm{d}t} =\sum _{m=1}^k {u(T_m)} +\frac{1}{\tau }\int _0^{T_k} {u(t)\mathrm{d}t}, \quad k=1,\ldots ,K \end{aligned}$$
    (45)

    Nevertheless, the reconstructed stimulus \(\hat{{i}}_\mathrm{c} (t)=\sum _{l=0}^L {\alpha _l \varphi ^{\prime }_l (t)} \) is determined such that it satisfies Eq. (46).

    $$\begin{aligned} \int _0^{T_k} {\hat{{i}}_\mathrm{c} (t)\mathrm{d}t} =\sum _{m=1}^k {u(T_m)} +\frac{1}{\tau }\int _0^{T_k} {u(t)\mathrm{d}t}, \quad k=1,\ldots ,K \end{aligned}$$
    (46)

    Subtracting (46) from (45) gives,

    $$\begin{aligned} \int _0^{T_k} {\left( {i_\mathrm{c} (t)-\hat{{i}}_\mathrm{c} (t)} \right) \mathrm{d}t} =0,\quad k=1,\ldots ,K \end{aligned}$$
    (47)

    If we define \(e_{i_\mathrm{c}} (t)=i_\mathrm{c} (t)-\hat{{i}}_\mathrm{c} (t)\) as the error signal between the actual signal and the reconstructed signal, then (47) yields,

    $$\begin{aligned} \int _0^{T_k} {e_{i_\mathrm{c}} (t)\mathrm{d}t} =0,\quad k=1,\ldots ,K \end{aligned}$$
    (48)

    In other words, differencing (48) at consecutive values of k gives,

    $$\begin{aligned}&\int _0^{T_1} {e_{i_\mathrm{c}} (t)\mathrm{d}t} = 0\nonumber \\&\int _{T_1}^{T_2} {e_{i_\mathrm{c}} (t)\mathrm{d}t} = 0 \\&\qquad \vdots \nonumber \\&\int _{T_{K-1}}^{T_K} {e_{i_\mathrm{c}} (t)\mathrm{d}t} = 0 \nonumber \end{aligned}$$
    (49)

    The above set of equations demonstrates the behavior of the error signal over each interval. Importantly, it shows that the integral of the error between the estimated stimulus and the actual stimulus vanishes over each interval; in other words, the positive and negative excursions of the error signal over each interval compensate each other. This means that the error does not grow as we move further along the recorded spikes. However, no constraint is imposed to limit the level of these positive and negative excursions. If we do not select the type or number of basis functions wisely, the negative and positive deviations of the reconstructed signal from the actual signal may be very large over any given interval. Nevertheless, by appropriate selection of the basis functions, for example based on some a priori knowledge, one would expect a close fit between the reconstructed input signal and the actual one even for an arbitrary stimulus. Although increasing the number of basis functions would generally reduce the error, it increases the computational load of the algorithm and may cause numerical instability. A good compromise between the amount of error and the computational load can often be found empirically by comparing the simulated spikes (based on the reconstructed stimulus) with the actual spikes for the given experiment. This per-interval property can also be checked numerically, as in the sketch below.
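As an illustration only, the fragment below checks this per-interval property on simulated data. It assumes the hypothetical helpers sketched earlier (lif_spike_times, frequency_grid, reconstruct_stimulus, and i_c_hat) and an arbitrary test stimulus; none of these choices come from the paper.

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 200.0, dt)
i_c = 1.0 + 0.2 * np.cos(2 * np.pi * 0.2 * t)        # test stimulus: I_cc + i_vc
T = np.concatenate([[0.0], lif_spike_times(i_c, dt=dt)])

alpha0, alpha, w = reconstruct_stimulus(
    T, tau=2.0, u_thr=1.0, freqs=frequency_grid(T, df=0.01))
err = i_c - i_c_hat(t, alpha0, alpha, w)

# Riemann-sum version of Eq. (49): the error integrates to (near) zero
# over every interspike interval [T_{k-1}, T_k].
for k in range(1, len(T)):
    mask = (t >= T[k - 1]) & (t < T[k])
    print(k, err[mask].sum() * dt)
```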

It should be noted that the only way to gather information about the characteristics of the input stimulus in our model (as in many stochastic models) is through the occurrence times of the spikes. If the stimulus is very weak (subthreshold) and unable to generate a spike, there is no way to find it. We therefore have to assume that the combined stimulus (the constant current plus its superimposed time-varying current) is strong enough (suprathreshold) to produce spikes; otherwise, no spikes are generated and no information is made available.

There has been growing interest in the role that noise may play in enhancing the computational capability of neurons (Cecchi et al. 2000; Shimokawa et al. 1999). For example, neuronal noise may enhance the detectability of weak signals that would otherwise not reach threshold (stochastic resonance) (Stacey and Durand 2001; Rudolph and Destexhe 2001). In our deterministic approach, however, noise is simply treated as another component of the time-varying signal superimposed on the constant current. When this combination becomes strong enough to generate a spike, we are able to determine the time-varying part (of course, there is no way to separate the noise from the actual subthreshold time-varying stimulus). We have also included the effect of noise in our simulations.

5 Simulation results

To see how the input current can be reconstructed from the interspike intervals, we consider four different examples in this section. For simplicity in analyzing the results, we used normalized values \(I_\mathrm{cc} =1\), \(U_{\mathrm{THR}} =1\), \(U_{\mathrm{RES}} =0\) for all simulations (Paninski et al. 2004). This means that with no time-varying stimulus (\(i_\mathrm{vc} =0\)), an IF model (\(\tau =\infty \)) exhibits tonic activity with an interspike interval of 1 s (a firing rate of 1 Hz). To achieve a high resolution in generating spikes and locating them, all signals were sampled at a frequency of 100 Hz.

A random signal was also added to the membrane potential to include the effect of system noise. To create this noise, a zero-mean white Gaussian signal was passed through a sixth-order Butterworth low-pass filter with a cutoff frequency of 0.4 Hz. The amplitude of the noise was adjusted to give a 10 dB SNR with respect to the presynaptic current.

For estimating the quiescent current \(I_\mathrm{cc} \), the mean of the interspike intervals was obtained by averaging over the duration of the stimulation. A membrane time constant \(\tau =2\) was assumed for all simulations. A sketch of this common setup is given below.
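For concreteness, the following fragment mirrors the setup just described (normalized units, 100 Hz sampling, sixth-order Butterworth noise shaping, 10 dB SNR). The record length, random seed, and the reuse of the hypothetical lif_spike_times helper from Sect. 2 are our own assumptions; adding the filtered noise to the integrator drive is one simple stand-in for perturbing the membrane potential directly.

```python
import numpy as np
from scipy.signal import butter, lfilter

fs, dur = 100.0, 200.0                          # sampling rate (Hz), record length (s)
t = np.arange(0.0, dur, 1.0 / fs)
rng = np.random.default_rng(0)

# System noise: low-pass-filtered white Gaussian noise, scaled to 10 dB SNR
b, a = butter(6, 0.4 / (fs / 2))                # sixth-order low-pass, 0.4 Hz cutoff
noise = lfilter(b, a, rng.standard_normal(t.size))
i_c = 1.0 + 0.3 * np.cos(2 * np.pi * 0.3 * t)   # Example 1 stimulus: I_cc + i_vc
noise *= np.sqrt(np.mean(i_c**2) / (10.0 * np.mean(noise**2)))

T = lif_spike_times(i_c + noise, dt=1.0 / fs, tau=2.0)   # noisy encoding
```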

Example 1

For this example, a pure sinusoidal input signal at the frequency of 0.3 Hz and amplitude 0.3 \(( i_\mathrm{vc} (t)=0.3\cos (2\pi 0.3t) )\) was added to \(I_\mathrm{cc} \). This is the simplest input which could be used as a stimulus to the neuron. Nevertheless, a sinusoidal modulating signal is usually assumed in the analysis of relay neurons which are found in various brain nuclei including the thalamus (Agarwal and Sarma 2012; Smith et al. 2000).

A range from \(f_\mathrm{low} =0.1\,\hbox {Hz}\) to \(f_\mathrm{high} =0.4\,\hbox {Hz}\) with frequency steps \(\Delta f=0.05\,\hbox {Hz}\) was used for the basis functions in (16). This amounts to a total of 14 basis functions (seven frequencies in \(\pm \) pairs) to estimate \(i_\mathrm{vc} (t)\). Figure 2a shows the original and the reconstructed \(i_\mathrm{vc} (t)\) superimposed on \(I_\mathrm{cc} \) without any noise in the membrane potential, and Fig. 2b shows the same signals in the presence of system noise. Without any noise, the two signals completely overlap, and in the presence of noise, the estimated signal follows the original signal quite well. Since one of the basis functions matches the input signal, reconstructing the input signal is not difficult in such a scenario.

Fig. 2

Estimated and original signals for Example 1: a without noise in the membrane potential and b with added noise in the membrane potential (Original \(=\) blue, Estimated \(=\) red)

Example 2

In this example, two sinusoidal signals, one in the lower range and another in the higher range of the frequency band, were used as the stimulating current: \(i_\mathrm{vc} (t)=0.2\cos (2\pi 0.1t)+0.2\cos (2\pi 0.35t+\pi /4)\). For estimating the input signal, two sets of basis functions were used for comparison. The first set used \(f_\mathrm{low} =0.1\,\hbox {Hz}\) and \(f_\mathrm{high} =0.4\,\hbox {Hz}\) with frequency steps \(\Delta f=0.01\,\hbox {Hz}\), and the second set used \(f_\mathrm{low} =0.1\,\hbox {Hz}\) and \(f_\mathrm{high} =0.4\,\hbox {Hz}\) with frequency steps \(\Delta f=0.02\,\hbox {Hz}\). While the first set contains a basis function that matches the stimulating current at 0.35 Hz, the second set has no basis function at this frequency.

Figure 3a shows that the first set can perfectly recover the input current (no noise was assumed in this example). Although the second set does not have any basis function at 0.35 Hz, Fig. 3b shows that the algorithm finds a linear combination of the existing basis functions that reconstructs the input current so as to produce the same spikes. Even the second set reproduces the input signal quite well.

Fig. 3

Estimated and original signals for Example 2: a with fine frequency steps and b with coarse frequency steps (Original \(=\) blue, Estimated \(=\) red)

Example 3

In the previous two examples, we assumed sinusoidal input currents for the LIF model. These signals are of course compatible with the selected type of basis functions. The question which may arise here is whether the proposed algorithm can recover a non-sinusoidal input current with sinusoidal basis functions. To answer this question, in this example a pulse-shaped input followed by a triangular-shaped input is selected as the stimulus, as shown in Fig. 4a. These types of signals are also used in neuronal excitation and allow us to examine the performance of our algorithm for non-sinusoidal signals. Additionally, a pulse input may cause a burst of spikes at the LIF output and can well represent the behavior of a typical cortical neuron (Izhikevich 2003).

Fig. 4

Estimated and original signals for Example 3: a without noise in the membrane potential b with added noise in the membrane potential (Original \(=\) blue, Estimated \(=\) red)

The original signal along with the estimated signal, reconstructed with \(f_\mathrm{low} =0.01\,\hbox {Hz}\) and \(f_\mathrm{high} =0.15\,\hbox {Hz}\) with frequency steps \(\Delta f=0.01\,\hbox {Hz}\) and without any noise in the membrane potential, is shown in Fig. 4a. Although the input signal is not a combination of sinusoidal signals, the proposed algorithm finds its best approximation as a combination of the selected basis functions. The sudden change of the pulse input is obviously harder to model than the smoother triangular current. As expected, there is more error in reconstructing the pulse input than the triangular current, but the estimated input reproduces the actual input quite well.

Figure 4b illustrates the reconstructed signal in the presence of system noise. Even with the added noise, the algorithm still recovers the original input current well.

Example 4

The input current of a real neuron may actually come from hundreds or thousands of presynaptic neurons which fire random action potentials. For example, neurons deep in the cortex generate highly variable spike trains because they receive a large number of unsynchronized synaptic inputs from many other neurons. Therefore, the input current of the neuron could itself be a random signal. In this example, we assume that the time-varying input current \(i_\mathrm{vc} (t)\) is a Gaussian signal and try to find its variations over the period of observation. To produce a random input current, a zero-mean white Gaussian signal (independent of the membrane-potential noise) was passed through a sixth-order Butterworth low-pass filter with a cutoff frequency of 0.3 Hz; its amplitude was then adjusted to an average power of 0.5 (half of the power of the quiescent current).

Figure 5a shows the original signal along with the estimated signal, reconstructed with \(f_\mathrm{low} =0.01\,\hbox {Hz}\) and \(f_\mathrm{high} =0.3\,\hbox {Hz}\) with frequency steps \(\Delta f=0.01\,\hbox {Hz}\), assuming no noise in the membrane potential. Although the input signal is the realization of a random process, the algorithm finds its best match as a combination of the basis functions. Of course, for a different period of observation the variations in the random input will have a different shape, yet they will be approximated by the same basis functions with different coefficients.

Fig. 5

Estimated and original signals for Example 4: a without noise in the membrane potential and b with added noise in the membrane potential (Original \(=\) blue, Estimated \(=\) red)

The estimated signal when noise is added to the membrane potential is displayed in Fig. 5b. Although occasional erroneous peaks can be observed in the estimated signal in the presence of the membrane noise, it still closely resembles the original signal.

It is worth mentioning that adding noise to the membrane potential directly affects the time at which it crosses the threshold level and consequently the locations of the generated spikes. There is no way to distinguish between the effect of the actual stimulus and the noise, since they add linearly in the membrane potential. Of course, if the power of the noise exceeds that of the stimulus, what is estimated by any algorithm is the noise, not the stimulus.

6 Conclusion

Similar to the encoding mechanism, finding the decoding mechanism of a spiking neuron can be considered in a deterministic framework. This consideration creates a rigorous yet simple approach for finding the input stimulus of the neuron by observing the occurrence times of the spikes at its output. By decomposing the input signal of the LIF model into a set of basis functions, the decoding mechanism is converted to solving a linear regression problem. This approach does not impose any limitation on the characteristics of the input signal; even random signals can be identified. Our simulation results show the satisfactory performance of the proposed algorithm in estimating signals with different characteristics. Together with its simple implementation, the new algorithm offers attractive properties for the decoding problem in neural information processing.

In the proposed algorithm, we assume that the locations of all spikes over the period of observation are available to the algorithm at the outset. However, the LS formulation of (35) can be converted to a recursive form to implement an adaptive reconstruction method, through which the input current is progressively updated as each new spike is recorded. Such a formulation can provide an effective real-time tool that is most suitable for reconstructing random nonstationary input signals; a sketch of the recursive update is given below.
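As a hedged sketch of this recursive variant, the fragment below applies textbook recursive least squares to Eq. (35), folding in one new row of \(\varvec{\Phi }\) (built from Eqs. (28), (29), and (32)) per recorded spike; the helper name and the initialization are our own assumptions rather than a method given in the paper.

```python
import numpy as np

def rls_update(alpha, P, row, h_k):
    """One recursive LS step for Phi @ alpha = h: fold in the new row of Phi
    (and entry h_k) generated when spike k arrives."""
    r = row.reshape(1, -1)                                   # 1 x L row of Phi
    K = P @ r.conj().T / (1.0 + (r @ P @ r.conj().T).real)   # L x 1 gain vector
    alpha = alpha + (K * (h_k - r @ alpha)).ravel()          # correct the coefficients
    P = P - K @ (r @ P)                                      # shrink the inverse Gram matrix
    return alpha, P

# Usage sketch: alpha = np.zeros(L, complex); P = 1e3 * np.eye(L)
# for each new spike k: alpha, P = rls_update(alpha, P, Phi_row_k, h_k)
```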