Neuroscience sees itself as a multidisciplinary field and, indeed, it was probably among the first biological disciplines to benefit from an influx of trained scientists from other domains such as physics and engineering (De Schutter 2008). But while neuroscience made good use of this additional expertise to advance technologies for studying the brain, it has stayed firmly with its biological roots as far as its appreciation of theory goes. More than two decades after the declaration of computational neuroscience as a subfield (Sejnowski et al. 1988) we must conclude that its impact on mainstream neuroscience remains limited and, in particular, that most neuroscientists deny theory a strong role in their scientific approach.

This may seem a rather negative view at a time when experimentalists attend a growing number of computational neuroscience summer courses and when a small modeling section is often included in regular papers. We have indeed progressed compared to the early nineties, when one wondered whether experimentalists and modelers could just get along (Bower and Koch 1992). But after a grace period, when it was fairly easy to publish purely theoretical studies in mainstream neuroscience journals (e.g. Szilagyi and De Schutter 2004; Maex and De Schutter 1998, 2003), the balance may have shifted. In fact, all the journals referred to have recently rejected several manuscripts of which I was a co-author because, in summary, the work was purely theoretical and needed experimental verification. Conversely, studies in which I collaborated with experimentalists to produce combined modeling and experimental work were recently accepted in even higher ranked journals (Santamaria et al. 2006; Steuber et al. 2007).

While I applaud the combined modeling-experiment study, as it has promoted a strong interest among experimentalists in applying modeling techniques, I worry that in the last few years this has led to a shift in the perception of the proper ‘place’ of modeling. It may have devalued purely theoretical work, pushing it to specialized journals because it is perceived as of ‘no interest’ to mainstream neuroscientists. It was never the intention of computational and theoretical neuroscience to become just another tool in the experimentalist’s toolbox. Moreover, the combined approach seems to have generated a new class of experimentalists who call themselves modeling specialists, sometimes with little or no expertise to validate this claim. In fact, though second-guessing who your reviewers are can be a dangerous game, I am convinced that, in the cases referred to earlier, pure modeling papers were reviewed exclusively by experimentalists, despite the assurance of one of the editors that both reviewers “have done much modeling in their time”.

We typically receive two kinds of negative comments that reveal the mindset of these reviewers. The first is the insistence on experimental verification, the ultimate example being the recommendation that concluded one of the reviews: “Frankly, the approach taken by this paper is not the proper one to reach this conclusion. The proper one is to measure calcium.” Such statements in effect deny that theoretical prediction is a valuable contribution to the field. This view would be unacceptable in physics and other fields firmly grounded in theory, but in neuroscience it still seems to be acceptable. And while the same editor claimed that “this is not a reflection of any stance by the Journal against modeling or theoretical studies”, I can only report that I and several colleagues have been receiving such comments much more often than in the recent past.

The second type of negative comment that is becoming common consists of a long litany of properties that the model fails to replicate, with specific reference to some detail reported in a paper or, even worse, to some unpublished observation. To cite another reviewer: “there are myriad of experimental observations that must be reproduced by the model”. This is the quest for the absolutely perfect model, which seems a noble goal, but is it achievable? This question goes to the heart of the paradigms of experimental neuroscience and their lack of a theoretical basis. A major weakness of the current paradigm is the almost complete absence of data integration. Neuroscientists produce nice papers, combining many advanced techniques if they want them published in a premium journal, but most often these papers concern a single finding. While such findings are put into context, this is usually done informally in the introduction or discussion. As a consequence, many of the claims linking particular findings to an integrated view of the system being studied are unverified and may very well be false.

An example from electrophysiology may help to make this point clear. It is quite common to study the effect of a specific ion channel type on some property of a neuron, such as bursting. These studies can be very sophisticated and may include an arsenal of pharmacological, molecular and transgenic approaches. Sometimes multiple channel types are studied, but always in isolation from each other. The experimental approach typically leads to statements like “channel A is responsible for firing behavior X, while channel B controls Y”. But in reality neuronal firing is controlled by the dynamic interaction among multiple channels, which is unfortunately difficult to address experimentally but quite accessible to modeling approaches. In many cases modeling has led to surprising results that contradict, in part, the experimental expectations. For example, modeling showed that a spatial separation of calcium channel types is not needed to generate both calcium plateaus and calcium spikes in Purkinje cells (De Schutter and Bower 1994) and that in cerebellar Golgi cells the Ih current does not actively participate in the subthreshold oscillations that drive spontaneous firing (Solinas et al. 2007). At a more general level, combined modeling and experimental work has shown that homeostatic control of channels makes specific links between channels and firing behaviors tenuous and that, in many cases, trying to explain data with a unique model may be a fallacy (Achard and De Schutter 2006; Marder and Goaillard 2006). To summarize, we have learned that modeling sometimes turns experimental observations upside down and that natural variability implies there may not be a unique model. The quest for the perfect model may therefore often be in vain, and the expectation that a model reproduce the complete experimental literature may turn model building into a Sisyphean task.
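To make the channel-interaction point concrete, consider a deliberately minimal sketch (my own toy illustration, not any of the published models cited above): a Morris-Lecar-style neuron with only two voltage-gated conductances, a fast calcium-like conductance and a slower potassium conductance, using standard textbook parameter values. In the simulation below, repetitive firing emerges only from the interplay of the two conductances; ‘blocking’ either one by setting its maximal conductance to zero destroys the firing pattern, even though neither channel is, by itself, ‘the’ firing channel.

```python
# Minimal sketch, assuming a Morris-Lecar-style two-conductance neuron with
# standard textbook parameters; purely illustrative, not any published model.
import numpy as np

def simulate(g_ca=4.4, g_k=8.0, i_ext=100.0, t_max=500.0, dt=0.05):
    """Forward-Euler integration of the two-variable model; returns spike count."""
    c_m, g_l = 20.0, 2.0                  # membrane capacitance, leak conductance
    e_l, e_ca, e_k = -60.0, 120.0, -84.0  # reversal potentials (mV)
    v1, v2, v3, v4, phi = -1.2, 18.0, 2.0, 30.0, 0.04
    v, w = -60.0, 0.0                     # membrane potential, K activation
    spikes, above = 0, False
    for _ in range(int(t_max / dt)):
        m_inf = 0.5 * (1 + np.tanh((v - v1) / v2))   # instantaneous Ca activation
        w_inf = 0.5 * (1 + np.tanh((v - v3) / v4))   # steady-state K activation
        tau_w = 1.0 / np.cosh((v - v3) / (2 * v4))
        i_ion = (g_l * (v - e_l)
                 + g_ca * m_inf * (v - e_ca)
                 + g_k * w * (v - e_k))
        v += dt * (i_ext - i_ion) / c_m
        w += dt * phi * (w_inf - w) / tau_w
        if v > -10.0 and not above:                  # crude upward-crossing spike count
            spikes += 1
        above = v > -10.0
    return spikes

# Firing requires both conductances acting together: removing either one
# ("blocking" it) destroys the repetitive firing pattern.
print("control        :", simulate())               # repetitive firing
print("Ca 'blocked'   :", simulate(g_ca=0.0))       # settles subthreshold, no spikes
print("K  'blocked'   :", simulate(g_k=0.0))        # locks up depolarized, no repetitive firing
```

The point is obviously not this particular toy model, but that the firing behavior is a property of the interaction between conductances, exactly the kind of statement that single-channel experiments cannot make but even a crude model can.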

Instead, it is much more practical to use modeling to investigate dynamic properties that are not very amenable to experimental investigation. And to be useful, models do not have to be perfect. There are many examples of incomplete models that have given very good predictions (Maex and De Schutter 1998; Vos et al. 1999) or, even more spectacularly, of models that are (partially) ‘known to be wrong’ but still accurately predicted experiments. For example, though we know that the original Purkinje cell model (De Schutter and Bower 1994) does not replicate important spiking mechanisms, such as the subsequently discovered resurgent sodium current (Khaliq et al. 2003), it still correctly predicted the change in spiking response to large parallel fiber patterns after the induction of long-term depression (Steuber et al. 2007). I do not want to imply here that we should relax technical standards in modeling and promote sloppy work. But a more realistic attitude among experimentalists would be welcome, including a more sober assessment of the broader validity of the large amount of experimental work that gets published in the absence of any theoretical grounding.

I have reported a worrisome trend in which experimentalists seem to take increasing control over the evaluation of modeling work and do so in an inappropriate manner. What can be done to remedy this problem? Partly it is a question of education. While the summer courses (http://www.neuroinf.org/courses/, http://www.irp.oist.jp/ocnc/index.html) I have co-organized focus heavily on teaching computational methods, perhaps we should place more emphasis on the paradigms of theoretical neuroscience. But clearly something is wrong when a theoretical paper gets reviewed by scientists who lack theoretical training. Just imagine how the community would react if the converse happened and experimental papers were reviewed exclusively by theoreticians… No editor would find it acceptable if an experimental paper were rejected because the data were not shared (Ascoli 2006; Teeters et al. 2008) or because reporting mean values ± s.e.m. assumes a normal distribution for which no supporting evidence was included, etc.

It should be standard policy, accepted by all mainstream journals, to have theoretical papers judged by a mix of theoretical and experimental reviewers and, perhaps, it would not hurt to implement this policy for all categories of papers: that would be the truly multidisciplinary approach. Closer to home, this journal has a policy of always inviting a mix of reviewers. For example, when a neuroinformatics paper describing a software tool is submitted, we make sure that it gets reviewed both by reviewers qualified to judge the IT components and underlying algorithms and by potential users of the software. We hope that by setting this example we may also move editorial policies in the rest of the field forward.