1 Introduction

Neuroscience, like all fields of science, must be based on both experiment and theory. Traditionally, experimental data have dominated; theory has been difficult to develop because of the complexity of neuronal structures and functions. Computational modeling offers a rigorous way to express and test theories at this level of complexity, but the resulting models face a challenge of their own: they are complex and difficult to describe completely and accurately in a publication (as reviewed in McDougal et al. 2016). The ModelDB repository was created to address this issue by acting as a companion resource to traditional publications. Upon acceptance of an article involving computational neuroscience models, the authors can share their accompanying code on ModelDB without restriction on simulator choice, modeling topic, etc. This open-acceptance policy, combined with active model curation and the development of tools that aid model understanding, has made ModelDB a one-stop resource for researchers looking for computational neuroscience models.

Twenty years have passed since the first publication on ModelDB (Peterson et al. 1996). We take the occasion of ModelDB’s 20th anniversary to review its origins, its current state, and plans for its future development. ModelDB continues to grow and now hosts over 1100 published models. Its mission is to store the computer code associated with published computational neuroscience models so that the models can be shared, both to facilitate verification, understanding, and extension of the original work and to allow the models to be reused as templates or building blocks for new projects. To understand the scope of the models in ModelDB, as well as its mission, it helps to review its development in the context of historical developments in computational neuroscience and the challenges it faces as neuroscience enters a new era of high performance computing and simulation.

2 ModelDB’s origins

Computational modeling of the nervous system has two origins. The first was the model of Alan Hodgkin and Andrew Huxley for the action potential in the squid giant axon. Developed during the late 1940s and early 1950s, the model represented the results of physiological experiments as a system of four ordinary differential equations. These equations not only reproduced the measured data but also allowed quantitative predictions of the axon’s response to different stimuli, and they introduced a framework for formalizing the response of an ion channel to changes in membrane potential that remains widely used. The second origin was compartmental modeling, introduced by Wilfrid Rall to study the spread of synaptic potentials in complex dendritic trees, initially those of motor neurons (Rall 1964). These two methods were first combined in a model of a brain neuron, the olfactory bulb mitral cell, and its synaptic interactions with granule cells, which incorporated both synaptic potentials and Hodgkin-Huxley-like action potential dynamics (Rall and Shepherd 1968).
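For reference, in its standard form the Hodgkin-Huxley system couples the membrane potential $V$ to three voltage-dependent gating variables $m$, $h$, and $n$ (the rate functions $\alpha_x$ and $\beta_x$ were fit to the squid axon data):

$$C_m \frac{dV}{dt} = I_{\mathrm{ext}} - \bar{g}_{\mathrm{Na}}\, m^3 h\, (V - E_{\mathrm{Na}}) - \bar{g}_{\mathrm{K}}\, n^4 (V - E_{\mathrm{K}}) - \bar{g}_{\mathrm{L}}\, (V - E_{\mathrm{L}}),$$

$$\frac{dx}{dt} = \alpha_x(V)\,(1 - x) - \beta_x(V)\,x, \qquad x \in \{m, h, n\}.$$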

In each of these cases, the models provided critical tests of experimental data and made predictions that were subsequently confirmed experimentally. This was a significant advance over previous mathematical attempts to represent neuroscience data, which relied on analytical methods that were limited in their ability to represent morphology and function. Numerical methods removed this limitation, enabling arbitrary morphologies and channel dynamics to be simulated, constrained mainly, especially in the early years, by available memory and computing power.

Early computers were room-occupying leviathans: expensive, remote from the laboratory, and difficult to program. These limitations greatly slowed the incorporation of modeling into experimental studies. The olfactory bulb work was followed in the 1970s by models of motor neurons (Dodge and Cooley 1973; Traub and Llinas 1977), Renshaw cells (Traub 1977), cortical pyramidal cells (Traub and Llinas 1979), and a two-neuron dendro-dendritic microcircuit (Shepherd and Brayton 1979). Modeling effort picked up in the 1980s and became more common in the 1990s as computers shrank into desktop equipment within experimental laboratories, with adequate speed and memory to simulate neurons with realistic morphology and properties.

A new impediment arose, however: each model often had to be built from scratch, usually by a graduate student or postdoctoral fellow in the research group, who might spend several years developing it and then move on, so that any attempt to test the model further was often prohibitively difficult. There was no infrastructure that allowed modelers to readily share the full details of their work: space in journals was limited, typesetting a full model was error-prone, and computer networks were in a nascent state. Thus, at that time, computer modeling did not follow the rule that published work must provide enough information about the methods to allow verification of the results. This limitation risked skepticism about modeling results and threatened to limit the scientific basis and use of computational modeling.

There was therefore an urgent need for a database where investigators could identify models already produced in their area of interest, download and run them to test them in a way analogous to testing results in any other area of science, and build on previous work to generate new models for new applications. The establishment of the US Human Brain Project (Martin and Pechura 1991) provided the opportunity to create such a database, since one of its aims was to advance neuroinformatics in a way inspired by the recently established gene and protein databases. It was recognized from the beginning that the neuroscience domain was more challenging. In contrast to the one-dimensional strings of letters used to represent the major components of genomic data, neuroscience data are characterized by a great diversity of data types, from spatial images to temporal spike trains. This diversity, combined with a diversity of data formats, presented challenges for archiving and searching.

SenseLab (http://senselab.med.yale.edu), one of the early participants in the US Human Brain Project, committed itself to addressing many of these challenges. Built on a flexible entity-attribute-value with classes and relationships (EAV/CR) architecture (Nadkarni et al. 1999), SenseLab began by developing NeuronDB (Mirsky et al. 1998), a multidisciplinary resource that combines data on morphology, functional properties, and pharmacology and represents those data in the context of canonical neuronal structures. In addition to facilitating comparisons of experimentally measured properties between different neuron types, NeuronDB serves as a starting point for building Rall-type neuron models by providing a reference for which channels are present in different parts of a neuron.

ModelDB, a freely accessible repository for published neuroscience models in their original source code form, was created in response to the increasing feasibility of neuroscience simulation brought about by advances in personal computer technology. It was built by combining SenseLab’s informatics infrastructure with the neuronal modeling expertise of Michael Hines, creator of the NEURON simulation environment (http://neuron.yale.edu; Hines 1993), who joined the SenseLab group in 1995. The decision to preserve models in their original form meant that the ModelDB group did not need special expertise to reproduce models in a standardized format; instead of converting code, they could focus on infrastructure development and on collecting models for all simulators and neuroscience topics. From the beginning, each model in ModelDB was associated with metadata linking it to the experimental resources in NeuronDB and the rest of SenseLab. A prototypical process of collecting data, building models, and making predictions, along with the role of ModelDB, is summarized in Fig. 1.

Fig. 1

Experimental data can be combined via computer modeling to make predictions about dynamics that are not directly measurable (membrane potential across the entire cell, intracellular chemical concentrations, etc.) or about the response to new experimental protocols. The experimental data used as the basis of a model and for validation may come directly from a research group, from the literature, or from a database such as NeuronDB, NeuroElectro (Tripathy et al. 2014), or NeuroMorpho.Org (Ascoli et al. 2007). ModelDB provides the infrastructure that allows researchers to build on prior published models instead of having to create a new virtual model system de novo. The morphology in this figure is from Barthó et al. (2007) via NeuroMorpho.Org

3 ModelDB at present

Since 2000, ModelDB has grown steadily and now contains over 1100 models (Fig. 2). Many of these models combine traced morphologies, conductance-based ion channel models, and experimentally derived channel distributions in order to make predictions about dynamics that are currently impractical to test experimentally, such as calcium concentrations in the fine oblique dendrites of a pyramidal cell (Fig. 3).

Fig. 2

ModelDB has grown steadily since 2001. The first five years of ModelDB’s existence (1996–2001) focused more on defining the nature of the platform and building the technology, so that period is omitted from this figure. Inset: enlarged view of 2015, when 132 models were added, including 49 (37%) deposited on a single day by the Allen Brain Institute. The solid line indicates the total number of models; the dashed line shows the count without this large contribution

Fig. 3

A typical morphologically detailed single neuron model (modeldb.yale.edu/87284; Morse et al. 2010). a A traced neuron (NeuroMorpho.Org c91662) is discretized into many (here 974) compartments. Each compartment has been assigned a random color. For visualization, the diameters have been expanded by a factor of 3 relative to the measured and simulated morphology. Numbered diamonds indicate locations measured in (d). b Each compartment has some density of a number of ion channels, modeled with Hodgkin-Huxley style dynamics. The compartments are connected to each other via the cable equation. c The conductances need not be uniform; here, the A-type K+ current (IA) conductance grows with distance from the soma, and faster on the oblique dendrites (above the main diagonal line, red) than on the apical trunk (diagonal line). d The model makes a prediction; here, peak calcium concentration increases in the presence of IA block (thick lines), but the locations of the peaks are independent of IA blockade. Adapted from Morse et al. 2010; used by permission
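As a rough illustration of the workflow this figure depicts, the sketch below uses NEURON’s Python interface to build a toy two-section cell, discretize it into compartments, insert NEURON’s built-in ‘hh’ mechanism, and impose a hypothetical distance-dependent K+ conductance gradient as a stand-in for the IA distribution in panel (c). It is not the Morse et al. (2010) model, which has its own morphology, mod files, and parameters.

```python
# Minimal sketch (not the published model): a compartmental cell in NEURON's
# Python interface with a hypothetical distance-dependent K+ conductance.
from neuron import h
h.load_file('stdrun.hoc')                  # standard run system (finitialize/continuerun)

soma = h.Section(name='soma')
dend = h.Section(name='dend')
dend.connect(soma(1))
soma.L = soma.diam = 20                    # µm; placeholder geometry
dend.L, dend.diam, dend.nseg = 400, 2, 25  # discretize the dendrite into 25 compartments

for sec in (soma, dend):
    sec.insert('hh')                       # built-in Hodgkin-Huxley-style channels

# Distance-dependent conductance: a simple linear gradient (illustrative only)
for seg in dend:
    dist = h.distance(soma(0.5), seg)      # path distance from the soma, in µm
    seg.hh.gkbar = 0.036 * (1 + dist / 300.0)

stim = h.IClamp(soma(0.5))                 # current injection at the soma
stim.delay, stim.dur, stim.amp = 1, 1, 0.5 # ms, ms, nA

v = h.Vector().record(soma(0.5)._ref_v)    # record somatic membrane potential
t = h.Vector().record(h._ref_t)
h.finitialize(-65)                         # mV
h.continuerun(40)                          # ms
```

Increasing nseg refines the spatial discretization in the same way that adding compartments does in panel (a).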

With its steady expansion, ModelDB has emerged as a common place to seek out computational neuroscience models, both for locating specific known models and for discovering other modeling work. For its recent renewal application, 30 users provided comments on aspects of ModelDB. We summarize some of those comments below.

Multiple platforms

Hosting models that run on many different platforms is an attraction for many users. Unlike the CellML repository (Lloyd et al. 2008) or Biomodels.net (Le Novere et al. 2006), ModelDB hosts models expressed in any simulator format or programming language; over 80 simulators or programming languages are represented. Approximately half of the models in ModelDB are coded in NEURON, followed by MATLAB, Python, C/C++, and XPP (Table 1). Curation is carried out regardless of simulator.

Table 1 The top five most frequently associated regions, cell types, model concepts, and simulation environments for model entries in ModelDB as of February 28, 2016

Multiple research topics

Over 130 topics (for examples of the most frequent, see Table 1) span many scales, including action potentials, calcium dynamics, the influence of dendritic geometry, invertebrates, learning and memory, pattern recognition, synaptic integration, and synaptic plasticity. Over 150 models focus on pathophysiology of the nervous system. By collecting this diversity in one location, ModelDB promotes model discovery in a way that is not possible when models are scattered across laboratory websites or general-purpose code repositories.

Model search and discovery

ModelDB provides several tools to assist with model discovery. As part of the model entry and curation process (Hines et al. 2004), models are associated with categorized tags indicating the model type (e.g., neuron vs. network), brain region, cell types, channels, receptors, genes, transmitters, simulation environment, and model concepts (spatio-temporal activity patterns, calcium dynamics, schizophrenia, etc.). Links in the left column of every ModelDB page ((2) in Fig. 4) allow browsing by each of these categories. Where appropriate (e.g., cell types, currents, and model concepts), browsing is hierarchical. Clicking on a selection displays a new page listing all models so tagged, along with a brief explanation of the tag. According to one user, ModelDB’s “cross-referencing with keywords and related literature, and a simple yet very effective ontology… can also help to identify relevant related work that is not always easily found by traditional methods such as PubMed searches.” Some users have reported that having their code discovered on ModelDB led to new collaborations.

Fig. 4

ModelDB offers many ways to find and explore models. a The full web page showing the Model Information tab. (1) Search models. (2) Browse models by category. (3) Download the model. (4) Auto-launch a NEURON simulation. (5) Model file browser. (6) ModelView: visualize model structure. (7) Simulation platform. (8) 3D-printable versions of cells from the model. (9) Summary of the model. (10) Paper(s) describing or using the model. (11) Find models and papers cited by this model’s paper or that cite this model. (12) Searchable metadata. (13) Links to NeuronDB for related experimental data. b The Model File tab allows exploring the model files. (14) Download the current file. (15) Directory browser, showing model file names. (16) View pane for the currently selected file. The readme file for model 87284 (Morse et al. 2010) is shown; modeldb.yale.edu/87284. c The Model Views tab displays a graphical representation of the model structure. (17) Interactive tree for exploring the model structure

Search tags

A unified search box at the upper left of each page ((1) in Fig. 4) allows models to be searched by tag, author, full-text content, or accession number. Suggestions and matches are displayed as text is entered into the search box, avoiding the need to type the full query. Full-text search also supports prefix matching (finding words that begin with a given character sequence), case-sensitive searches, and restricting results to files whose names match a pattern or to models from a certain year. The advanced search page allows more complicated queries.

Model viewing

For most NEURON and some NeuroML models, a Web tool called ModelView (McDougal et al. 2015; Fig. 4c) is provided in the Model Views tab; it allows a modeler to examine the run-time morphology, channel types, and parameter values of a model. A browsable tree ((17) in Fig. 4) provides information on both the basic structure of the model (how many cells or compartments, and which mechanisms, such as ion channels or receptors, are present) and the run-time values of parameters (such as conductance densities, reversal potentials, and specific membrane capacitance). This provides a quickly graspable overview of the model that is helpful to both modelers and experimentalists.

Reuse

ModelDB automatically indicates when a file (e.g., one describing a specific ion channel) is reused, regardless of context, thus allowing comparisons across models. One file, a model of an A-type current (kaprox.mod in modeldb.yale.edu/2796), has been reused in at least 26 models, and the corresponding paper has been cited by the papers accompanying 52 other ModelDB entries, as of August 1, 2016. This level of reuse, in papers with a total of at least 50 distinct authors, would likely be impossible if individual modelers had to contact the original modeler and request code.

This highlights a key benefit of ModelDB: it facilitates the reuse of model code. Reuse is possible at many scales: code snippets, model components such as ion channels, and whole models. In each case, reuse saves time and effort. As one user wrote, “By using well established model components in my network model, I have saved myself the better part of a year of work, reduced opportunities for error, and ensured that a greater proportion of my model has already been validated.”
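In practice, reusing a shared channel mechanism in a NEURON-based project can amount to only a few lines. The sketch below is illustrative only: it assumes the downloaded mod file has already been compiled with nrnivmodl in the working directory and that its NMODL SUFFIX is ‘kap’ with a density parameter named ‘gkabar’; the actual names and units should be taken from the file itself.

```python
# Illustrative sketch: inserting a reused ion-channel mechanism into a new model.
# Assumes the downloaded .mod file (e.g., kaprox.mod) was compiled with nrnivmodl
# and that its NMODL SUFFIX is 'kap' with a RANGE variable 'gkabar' (check the file).
from neuron import h

soma = h.Section(name='soma')
soma.L = soma.diam = 20        # µm; placeholder geometry
soma.insert('kap')             # the reused A-type K+ mechanism
for seg in soma:
    seg.kap.gkabar = 0.008     # hypothetical density; set from your own data
```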

ModelDB’s citation browser ((11) in Fig. 4) allows researchers to quickly identify which modeling papers cite, and are cited by, the papers associated with a given model, providing another metric of reuse specific to the computational neuroscience community. Hodgkin and Huxley (1952), which helped launch computational neuroscience, is cited by more ModelDB models (185) than any other model. The next three models most cited by other ModelDB models are from three distinct areas of research: kinetic synaptic models for networks (ModelDB 18500; multiple papers including Destexhe et al. 1994; 74 citing models, 372 downloads), the Izhikevich model for spiking neurons (ModelDB 39948; multiple papers beginning with Izhikevich 2003; 73 citing models, 1219 downloads), and a model investigating the role of morphology (ModelDB 2488; Mainen and Sejnowski 1996; 73 citing models, 1438 downloads). Citation counts are as of August 1, 2016; download counts are as of July 20, 2016 and include only unique non-search-engine IP addresses.

Reproducibility and replicability

In addition to facilitating reuse, sharing code, whether on ModelDB or elsewhere, promotes reproducibility and replicability in computational neuroscience (Crook et al. 2013; McDougal et al. 2016). These are related but distinct aspects of the scientific method: reproducibility is the ability to re-implement a model and obtain the same qualitative result, whereas replicability is the ability to repeat a simulation exactly. Replicability follows mostly from sharing code and from the deterministic nature of digital computers, but it is assisted by the curation process, which seeks to ensure that models run on as many platforms as possible (Linux, Macintosh, Windows, clusters, supercomputers) and contain no bugs that restrict their ability to run on different simulator versions. Reproducibility is assisted because the shared code provides a reference implementation for debugging and a record of parameters that are necessarily the same as those used in the published simulations. Furthermore, ModelDB promotes both reproducibility and replicability by functioning as a stable, long-term home for code, ensuring that it does not get lost over time as individuals enter and leave research groups.

Running models

ModelDB currently offers several ways to make models more accessible for assessment, and thereby more understandable. First, from the beginning, users have been able to download any model entry’s source code and run it on local hardware, which allows testing with different inputs and/or recording and analyzing different outputs than were used in the publication. ModelDB’s help pages provide general notes on how to run models for many of the simulation environments. Second, many model entries (246 as of February 28, 2016) have a link that triggers interactive simulation over the Web on the INCF Japan Node’s Simulation Platform (launched by (7) in Fig. 4); data generated during the interactive session may be downloaded for further analysis. Third, most NEURON model entries in ModelDB have an auto-launch button; a single click on this button downloads, compiles, and runs a simulation, provided that NEURON is installed and the browser is configured correctly. Finally, large network models that are impractical or impossible to run on a personal workstation may be uploaded and run on a cluster using the freely available Neuroscience Gateway resource (Carnevale et al. 2014).
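For NEURON entries, running a downloaded model locally typically follows a common pattern; the sketch below is a generic outline under that assumption (entry points and build steps vary from model to model, so each entry’s readme takes precedence).

```python
# Generic outline for running a downloaded NEURON model locally (check the
# entry's readme; not every model follows this convention).
# Beforehand: unzip the archive and compile any .mod files in the model
# directory with nrnivmodl (or mknrndll on Windows).
from neuron import h, gui    # 'gui' loads NEURON's graphics, used by many models
h.load_file('mosinit.hoc')   # conventional ModelDB entry point for auto-launch
```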

Assisting review

ModelDB assists in the review process. Although public models must have an associated publication, ModelDB allows authors to upload their unpublished models privately and provide their reviewers with a read-only password that allows downloading (and thereby reviewing) the code and examining the model’s output. ModelDB’s auto-launch functionality further assists review by simplifying the launching process for NEURON models, and is in principle extendable to other simulation environments.

Creating a sharing community

ModelDB’s existence and the presence of over 1100 models on the site promote the sharing of model source code, which has been noted to be less common than data sharing in fields such as molecular biology (Ascoli 2006). Many journals now ask authors to state explicitly whether or not they will share model code. The Journal of Computational Neuroscience specifically suggests that authors share their model code via ModelDB.

ModelDB and most submitting authors impose no restrictions on the use of models obtained from the database beyond citing ModelDB and the model’s publication in any resulting work. Most of the remaining authors release their code under the GPL or under licenses that are free for non-commercial use, by including a file with the appropriate license text.

Full model descriptions

As one user summarized it, “I have found ModelDB to be an essential complement to formal publications in computational neuroscience, since most articles cannot provide the level of detail necessary to answer all questions that one may have about a particular model.” Sharing code provides a full description of a model in a way that a paper cannot. Space limitations in publications generally preclude complete descriptions of model equations, parameters, numerical methods, etc. Even without space limitations, typesetting and conversion back into source code risk introducing errors. Sharing source code avoids such errors and allows others to extend their understanding of the model beyond the textual content of the associated publication. It also increases the chance that model errors will be found, their seriousness examined, and the errors corrected.

Teaching

In addition to its role in research, ModelDB facilitates teaching in computational neuroscience courses around the world. Among its entries are many excellent examples of how to write code for various simulators, and how to document code so that others may understand it. In providing model source code linked to research publications, ModelDB makes those publications’ models interactive and thus more easily studied for both research and educational purposes.

4 ModelDB and the future of neuroscience

New trends are emerging that will shape the future of neuroscience, and ModelDB and modeling are poised to play key roles in this development. New initiatives and new technology are leading to data being collected at an increasing rate; these data will need to be bound together with models to form coherent frameworks that give insight and guide future experiments. Advances in computer technology, especially the increasing performance and availability of high performance computers (HPCs) and graphics processing unit (GPU)-based parallelism, will allow larger, more realistic multi-scale models. These models will often be built collaboratively, combining expertise and data from many interdisciplinary researchers. Many of these more realistic models will be used to study the pathophysiology of complex disorders.

More data

Multiple factors are converging to produce a rapid increase in neuroscience data collection and availability. Governments in many countries have prioritized developing a better understanding of the brain, most famously in the United States and Europe by funding the US BRAIN Initiative (Insel et al. 2013) and the EU Human Brain Project (Markram 2012). Simultaneously, new methods such as CLARITY (Chung and Deisseroth 2013) and Expansion Microscopy (Chen et al. 2015) allow imaging the brain in novel ways. Automated large-scale morphology reconstruction (Kasthuri et al. 2015) will extract neuron morphologies from these images and, in addition, will provide insight into cell types and possible connectivity within the local microcircuit. Electrophysiology data sets will likewise become more numerous owing to the increasing use and availability of optogenetic techniques (Deisseroth 2011), multi-electrode arrays (Najafi and Wise 1986), and voltage- and calcium-sensitive dyes (reviewed in Baker et al. 2005). A gradually increasing expectation of data sharing will make more of the data that is gathered available to all.

To address this explosion of data, ModelDB will increase its outgoing links to related data resources (e.g., ModelDB currently links to NeuronDB to allow exploring what is experimentally known about a modeled cell type), standardize the identification of the data that led to each model, and refine the specificity of those links. Such links have historically been hampered by the lack of a widely adopted shared language (ontology) for identifying neuroscience concepts, but NeuroLex (Hamilton et al. 2012) and the Computational Neuroscience Ontology (Le Franc et al. 2012) offer the potential to overcome this challenge.

New models and new modelers

The explosion of data will drive the development of new models as experimentalists seek to infer concepts that can account for their own data and relate them to observations by others. ModelDB already provides a wealth of code examples and full mechanisms (e.g., ion channel models) associated with peer-reviewed publications that can help those new to modeling get started. In the future, it will make discovering such examples easier by adding, both manually and automatically, more searchable metadata that links parts of model code to specific biological context (e.g., rat hippocampal CA1 pyramidal cell, Kv3.1 channel).

ModelDB’s ModelView tool is already creating the expectation that simulators other than NEURON should make their model structure graphically discoverable. This will become increasingly necessary as new simulators are created that address the specific needs of particular user groups. A big step in this direction would be for simulator developers to adopt interoperability with declarative model specification standards such as NeuroML (Gleeson et al. 2010); once exported to NeuroML, models originally written for arbitrary simulators can then be visualized with the existing ModelView tool. This standardization would facilitate comparing models and the data they generate. By providing visualization and analysis tools compatible with emerging model specification standards, ModelDB can further the adoption of such standards, making it easier for more researchers to use tools to combine and extend established models.

As the number of models produced increases, ModelDB will expand to include them. At present, many new models are never shared on ModelDB. Although some models may never be shared, ModelDB will work to increase the percentage of models made publicly available, both through general advocacy and by actively soliciting models. Ascoli (2015) showed that, for neuron morphology data, soliciting data in a transparent way increased the prevalence of sharing. To accommodate an increased volume of model submissions, ModelDB will automate aspects of the curation process (metadata tagging, model testing) to continue to ensure quality and discoverability without overwhelming available resources.

Larger, multi-scale research

Many new models will be larger models spanning multiple spatial and temporal scales. This will be driven by two factors: (1) the increased availability and discoverability of established model components on resources like ModelDB, which reduces the effort involved in building a multi-scale model, and (2) advances in computer technology and its availability, in particular developments in high performance computers (HPCs) and the increased use of GPU acceleration (e.g., Yavuz et al. 2016). Resources like the Neuroscience Gateway (Sivagnanam et al. 2013) make HPC technology freely available to all researchers.

Already neuroscience research, both modeling and experiment, spans many scales, most of which are represented on ModelDB or its companion SenseLab sites. The structure of receptors is predicted using protein folding simulations (e.g., olfactory receptor structure in Man et al. 2004; ORModelDB 150627). MCell (Stiles and Bartol 2001) and Smoldyn (Andrews et al. 2010) allow high-resolution stochastic explorations of molecular dynamics in microdomains (e.g., Keller et al. 2015; ModelDB 182142). Deterministic approximations allow examining reaction-diffusion dynamics in whole dendrites (e.g., Calcium Waves in Neymotin et al. 2015; ModelDB 168874). Single cell models allow investigating the effects of modulators on the electrophysiology of individual neurons (e.g., Morse et al. 2010; ModelDB 87284 explores the early effects of amyloid beta on a CA1 pyramidal neuron). Another class of single cell models focuses on gene expression (e.g., circadian rhythms in the suprachiasmatic nucleus of the brain, Kim and Forger 2012; ModelDB 145801). Network models explore emergent effects from the interaction of multiple neurons (e.g., self-organization in the olfactory bulb in Migliore et al. 2014; ModelDB 151681). Functional aspects of the brain are explored with networks spanning multiple brain regions (e.g., Eliasmith et al. 2012; ModelDB 147103).

To date, these spatial scales have been studied largely independently; e.g., the multiple-brain-region model of Eliasmith et al. 2012 does not directly incorporate protein folding. Some work, however, is beginning to bridge these scales. Neymotin et al. 2016 (ModelDB 185858), for instance, incorporates ER calcium dynamics in a model of persistent activity. That study was performed using a single simulator, NEURON; others (e.g., Brandi et al. 2011) are building multi-scale models by connecting multiple simulators. As this latter approach becomes more common, ModelDB will begin to identify which parts of the code are associated with which simulator. In every case, ModelDB will expand its tools to facilitate navigating between spatial scales while visualizing model structure and results.

Collaborative interdisciplinary research

Multi-scale study will require the collaboration of experimentalists and modelers who specialize in many different subfields. Two strategies for organizing these collaborations are emerging: formal structures like the large research groups of the Allen Brain Institute and the EU Human Brain Project, and ad-hoc collaborations such as the OpenWorm project (Szigeti et al. 2014) or others using the Open Source Brain (Gleeson et al. 2012) infrastructure.

The Allen Brain Institute and the EU Human Brain Project conduct experiments to collect data on morphology, electrophysiology, and connectivity, which they use to build large numbers of single cell models (e.g., on one day in 2015, the Allen Brain Institute released 73 single cell models) and network models. Although these models represent different cells, they are not independent, as they were developed with the same methodology; for example, the Allen Brain Institute models used the same set of ion channels. ModelDB will adapt its tools to group related models together, allowing them to remain individually discoverable without impairing the discoverability of models constructed using different methodologies.

Ad-hoc alliances of researchers developing a shared model are another emerging form of collaboration. In this strategy, promoted by the Open Source Brain and embraced by OpenWorm, models are continually improved as new data become available. General-purpose code repositories like GitHub (github.com) and Bitbucket (bitbucket.org) similarly provide tools to facilitate ongoing development work. ModelDB’s specialist nature will provide complementary support for these collaborations by hosting the version of record of the code as used in a given publication, improving discoverability by adding curated neurobiological metadata in a standardized way, promoting model understandability with tools like ModelView (McDougal et al. 2015) that graphically present model structure and biological underpinnings, and serving as a source of complete models and model components (e.g., ion channel model code). To further this last role, ModelDB will provide tools to assist in extracting the code associated with a biological concept, as this code may be scattered across multiple files.

New modeling domains

Collaborative models will facilitate the entry of modeling into relatively new areas of research, including the study of disease and, eventually, personalized medicine. The nascent field of computational psychiatry seeks to use models to better understand psychiatric conditions and potentially to personalize treatment (Montague et al. 2012; Wang and Krystal 2014). ModelDB is already home to seven schizophrenia-related models. As new fields become computationally tractable, ModelDB will expand its supported metadata, add links, and add domain-specific tools to welcome researchers from these fields into the modeling community.

5 Conclusions

As the amount of neuroscience data continues to grow, there will be an increased need for computational modeling to provide a rigorous basis for integrating the data into unified and predictive theoretical frameworks. The availability of published models within ModelDB, the enhanced ability to discover them, and the tools to understand them help ensure that new models will be built on strong, established, peer-reviewed foundations. ModelDB will continue to become a more comprehensive resource, representing a larger portion of modeling research, and to make its models more accessible. Our goal is to enable theoretical work to advance more quickly, with fewer errors, in a way that will increasingly support the field’s ability to understand the neural basis of behavior.