NEST (NEural Simulation Tool)
Marc-Oliver Gewaltig and Markus Diesmann (2007), Scholarpedia, 2(4):1430. doi:10.4249/scholarpedia.1430, revision #130182
The Neural Simulation Tool NEST is a computer program for simulating large heterogeneous networks of point neurons or neurons with a small number of compartments. NEST is best suited for models that focus on the dynamics, size, and structure of neural systems rather than on the detailed morphological and biophysical properties of individual neurons. Examples are:
- Models of sensory processing e.g. in the visual or auditory cortex of mammals.
- Models of network activity dynamics, e.g. in laminar cortical networks or random balanced networks.
- Models of spike-synchronization in feed-forward networks such as Synfire Chains.
- Learning and plasticity in models of sensory processing.
NEST is developed by the NEST Initiative and is available free of charge under the GNU General Public License.
Overview
A simulation with NEST is like an electrophysiological experiment that takes place inside the computer. The user builds a neural system by creating one or more model neurons and connecting them via synapses to form a network. NEST is optimized for large networks of spiking neurons and can represent spikes in continuous time (Morrison et al. 2007). It is possible to combine different types of model neurons in one network. Different synapse models implement synaptic mechanisms such as spike-timing dependent plasticity (STDP) or short-term plasticity. It is possible to structure networks into layers, areas, and sub-networks. Special nodes, called devices, are responsible for providing input to or obtaining measurements from the network. NEST writes the results of a simulation to files so that they can be analyzed with tools such as Matlab®, Mathematica®, or Python. NEST can take advantage of multiprocessor (multicore) computers and computer clusters to increase the available memory or to speed up the simulation. In parallel simulations, NEST scales better than linearly (Morrison et al. 2005).
Modeling concepts
Nodes and networks
In NEST, a network is a set of nodes and their connections. Nodes can be neurons, devices, and sub-networks. Nodes can send and receive events of different types, for example spikes or currents. NEST provides a number of models for neurons, devices, synapses, and events. Among them are:
- Integrate-and-fire neuron models with current- and conductance-based synapse dynamics.
- Hodgkin-Huxley models.
- Models for static synapses, synapses with spike-timing dependent plasticity, and synapses with short-term facilitation and depression.
- Stimulus devices to generate AC/DC currents, random Poisson spike-trains, and custom spike-trains.
- Devices to record spike-trains and membrane potentials.
NEST has commands to define and structure large networks. Sub-networks help to organize a large network model into manageable pieces. Commands exist to create a large number of network elements in a way that is optimal for memory and processor cache. Other commands automatically create multi-dimensional network structures.
NEST is modular so that users can write their own models for NEST.
Connections
A connection is defined by a sending node, a receiving node, a weight, and a delay. The weight determines how strong a signal is, and the delay how long it takes for the signal to travel from one node to the other. Each connection can carry one type of event. Synapse models define how incoming events affect the post-synaptic neuron. NEST's simulation engine efficiently represents the heterogeneity of synapse types found in biological neural networks. The simplest synapse model delivers the connection weight to the post-synaptic neuron. However, each synapse can define its own dynamics. NEST also has synapse models for spike-timing dependent plasticity (STDP) and synaptic depression and facilitation. The algorithm for STDP is documented in Morrison et al. (2007).
NEST has high-level functions to create various connectivity schemes. These include convergent or divergent connections to/from an individual neuron, random connections, and topological connections between layers of neurons.
Events
Events are pieces of information that travel along a connection from one neuron (or node) to another. Each event carries the time-stamp when it was created and the weight of the connection. The time-stamp is combined with the delay of the connection to determine when the event is delivered at the target node. Each neuron model implements functions to accept and handle incoming events.
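For example, a spike created at time-stamp \(t = 10.0\) ms and sent over a connection with delay \(d = 1.5\) ms is delivered to its target at \(t + d = 11.5\) ms.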
There are different types of events, depending on the information they carry. The most important types are:
- SpikeEvent, to carry spike times.
- CurrentEvent, to carry AC or DC currents.
- PotentialEvent, to carry membrane potential values.
- PreciseSpikeEvent, to carry spike times that are not bound to the time-grid of the simulation.
Parallel network update and communication
NEST can take advantage of multiple processors, cores, or computers in a local network. On computer clusters, NEST distributes the network over the available computers. Each computer creates its own part of the network and only stores the connections to its own neurons. This not only reduces the run-time of the simulation supralinearly, it also increases the total computer memory available to NEST. On single computers, NEST distributes the network over the available processors, each executing one part. To reduce the simulation time at least proportionally to the number of processors, NEST exploits the fact that each processor contributes its own fast cache memory.
NEST uses both the Message Passing Interface (MPI) and POSIX threads (pthreads). MPI controls the distributed NEST processes on many computers. On each computer individual processors can run independent threads. NEST defines the number of so-called virtual processes as the number of pthreads times the number of MPI processes. NEST ensures that for a given number of virtual processes, the results of a simulation are reproducible, independent of how the virtual processes are shared between pthreads and MPI processes.
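For instance, the degree of parallelization can be chosen before the network is built. The following lines are a minimal sketch, assuming that kernel parameters are set with SetStatus on the root node (GID 0) and that the status keys /local_num_threads and /total_num_virtual_procs exist:

0 << /local_num_threads 2 >> SetStatus        % two pthreads per MPI process
% started with 4 MPI processes, NEST then uses 4 x 2 = 8 virtual processes
0 GetStatus /total_num_virtual_procs get =    % print the number of virtual processes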
Conceptually, NEST evaluates the network model on an evenly spaced time-grid \(t_i := i \cdot \Delta\), and at each point the network is in a well-defined state \(S_i\). Starting at an initial state \(S_0\), a global state transfer function \(U\) propagates the system from one state to the next, such that \(S_{t+\Delta} \leftarrow U(S_t, \Delta)\). NEST sets \(\Delta = d_{min}\) (the smallest connection delay in the network). During this period, all nodes are effectively decoupled (Lamport 1978). As part of \(U(S_t)\), nodes create events that are delivered to their targets after a delay that depends on the connection.
NEST simulates a neural system with the following algorithm:
1  t <- 0
2  WHILE t < T_stop
3    PARALLEL on all virtual processes
4      deliver all events due
5      call U(S_t) for all nodes
6    END
7    exchange events between all virtual processes
8    increment time t <- t + Delta
9  END
The virtual processes evaluate steps 4 and 5 in parallel, and in step 6 they synchronize to exchange their events in step 7. There is no global event queue. Nodes that live in the same virtual process directly exchange and queue their events. Between virtual processes, NEST does not transmit individual events, as there are far too many. Instead, for each node that produced an event, NEST transmits the identifier of the sender node and the time at which the event occurred. All other connection parameters, such as the long list of weights, delays and targets, are available at each virtual process. With this information, the virtual processes re-construct the actual events and deliver them to their destinations.
Although NEST uses a discrete event framework, it does not use a globally event-driven update (Fujimoto 2000, Brette et al. 2006). Aspects of NEST's update algorithm have also been used for FPGA hardware implementations of neural networks (Guerrero-Rivera et al. 2006).
Accuracy
A common problem of time-driven simulation is to use the simulation step \(\Delta\) also as the integration step h for dynamical equations. NEST has several mechanisms to avoid this:
- NEST does not have a global equation solver. Since in a heterogeneous network each node may have its own set of state equations, each node can update its state with the most accurate algorithms for its equations.
- For a large class of neuron models, NEST integrates the dynamic equations with machine precision, using a technique termed Exact Integration in the context of pulse-coupled neuronal systems (Rotter and Diesmann, 1999). This technique uses the matrix exponential to integrate systems of linear time-invariant differential equations, as shown in the formula after this list.
- In NEST, the integration step h can be chosen independently of the simulation step \(\Delta\). Models with approximate differential equation solvers such as Runge-Kutta can set h an order of magnitude smaller than \(\Delta\). Within a simulation step \(\Delta\), it is also possible to solve the equations with an adaptive step-size.
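To illustrate the second point: for a neuron model whose subthreshold dynamics are linear and time-invariant, \(\dot{y}(t) = A y(t) + x(t)\), Exact Integration propagates the state over a step \(h\) with the matrix exponential
\[ y_{k+1} = e^{A h}\, y_k + x_{k+1}, \]
where \(x_{k+1}\) collects the weighted input pulses arriving in the step (Rotter and Diesmann, 1999). Because \(e^{Ah}\) is computed once and then reused, the update is exact to machine precision, independent of the step size.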
Further integration errors occur if spikes are constrained to times that are integer multiples of the simulation step \(\Delta\). This is often used as an argument for event-driven simulation (Brette et al. 2006), where spikes can occur at arbitrary times. NEST solves this problem by handling spikes in continuous time within a globally time-driven simulation (Morrison et al. 2007).
User interfaces
NEST can be used with different user interfaces.
- PyNEST is a module for the interpreted programming language Python.
- PyNN is a simulator-independent Python module which supports NEST.
- SLI is NEST's native simulation language interpreter, which is described below.
The simulation language
NEST has a programmable user interface that interprets a high-level scripting language called SLI. The simulation language has functions
- to create and connect networks,
- to create and manipulate arrays and matrices,
- to define, manipulate, and execute functions,
- for mathematics, numerics, and random number generation (via the GNU Scientific Library),
- to handle and propagate exceptions occurring at the different levels of the software.
SLI is a stack-oriented programming language with postfix notation. The syntax is simple: Numbers, arrays, or character strings are put on a stack. Functions take their arguments from the stack and return their results back to it.
The following example is a transcript of an interactive session. The numbers 1, 2, and 3 are entered, and the interpreter shows the size of the stack in brackets after the prompt. The command stack shows the contents of the stack:
SLI ] 1
SLI [1] 2
SLI [2] 3
SLI [3] stack
3
2
1
SLI [3]
SLI has operators to change the order of stack elements and to add or remove stack elements.
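The most common ones are dup, exch, and pop. The following transcript is a small sketch; dup appears in the examples below, while exch and pop are assumed to follow the usual conventions of stack languages:

SLI ] 1
SLI [1] 2
SLI [2] exch     % swap the two topmost elements
SLI [2] pop      % remove the top element, leaving the 2
SLI [1] dup      % duplicate the top element
SLI [2] stack
2
2
SLI [2]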
Arithmetic
SLI has a full set of mathematical functions. The following example computes \(1+2\):
1 2 add
The next example computes \(5 \cdot \exp(-3.0/10.0)\):
-3.0 10.0 div exp 5.0 mul
and the vector product \((1,2,3) \cdot (3,4,5)^T\) is expressed as
[1 2 3] [3 4 5] Dot
Variables
SLI has two operators to assign values to names. The following examples both assign the value \(10.0\) to the name tau. The leading slash indicates that tau is an argument for the assignment and protects the name from being replaced by its value.
/tau 10.0 def
10.0 /tau Set
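When tau is later used, it is replaced by its value:

tau 2.0 mul =   % prints 20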
Dictionaries
Dictionaries are associative arrays. They store lists of name/value pairs, enclosed in the delimiters << and >>. The following example defines a dictionary with three entries and assigns it to the variable /phonebook:
<< /Thomas  (+49 691234567)
   /Michael (+30 2107654321)
   /Sue     (+44 7102324252)
>> /phonebook Set
This example uses character strings to represent the phone numbers. Dictionaries are used to set and retrieve the parameters of models and nodes (see below).
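Individual entries are retrieved with the command get, the same command used further below to read parameters from the spike detector:

phonebook /Thomas get =   % prints +49 691234567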
Procedures
Procedures are sequences of commands, enclosed by the delimiters { and }. The delimiters prevent the commands from being executed immediately. New commands are defined by assigning procedures to names:
The following code defines a command that prints Hello World!.
/HelloWorld { (Hello World!) = } def
Program control
SLI has commands to control the flow of execution. The most important are:
- if to execute a procedure if a given condition is fulfilled.
- ifelse to execute a procedure if the condition is true and another procedure otherwise.
- repeat to repeatedly execute a procedure.
- for to iterate over a range of numbers.
- forall and Map to iterate over arrays and vectors.
The following procedure expects two numbers on the stack and divides them if the divisor is not zero. The command ifelse distinguishes the two cases:
/divide
{
  dup                                % copy the top element
  0 eq                               % compare with zero
  { (Error: division by zero!) = }   % if true, print an error
  { div }                            % else divide
  ifelse
} def
The arguments of ifelse demonstrate that SLI procedures do not need a name. Procedures can be assembled and disposed of on the fly like regular arrays, avoiding unnecessary pollution of the namespace.
The following is a recursive implementation of the factorial \(n!\):
\[ fac(n) = \begin{cases} 1, & \mbox{if } n=1 \\ n \cdot fac(n-1), & \mbox{if } n>1 \end{cases} \]
Here is the implementation in SLI. fac expects its argument on the stack and replaces it with the result. The comparison command gt returns true if the value at stack level 1 is greater than the value at level 0, and false otherwise. The command if takes two arguments, a Boolean and a procedure, and executes the procedure only if the Boolean is true; here it tests whether the argument is greater than 1.
/fac
{
  dup               % duplicate the argument, since we still need it
  1 gt              % if n > 1, we call fac recursively
  {                 % according to fac(n) = n * fac(n-1)
    dup 1 sub fac   % recursively call fac with n-1 as argument
    mul             % multiply the result with the argument
  } if
} def
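Calling the new command with the argument 5 leaves the result on the stack, where = prints it:

5 fac =   % prints 120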
The following example creates an array from a range of numbers and applies a function to each value:
[ 1 1 10 { } for ]    % create an array with the numbers 1 to 10, using list comprehension
/arr1 Set             % assign to arr1
arr1 { 2 mul } Map    % multiply each number by two
Models and nodes
Nodes are created from a set of pre-defined models using the command Create. Models are defined in the dictionary modeldict, which must be opened to use the models:
modeldict using
The command Create expects the name of a neuron or device model and returns a global identifier number (GID) as a handle for the new node. The following example creates one integrate-and-fire neuron:
iaf_neuron Create
Create can also create many nodes. It then returns the handle of the last node.
iaf_neuron 1000 Create
Other nodes are created in a similar way. There are many different models for neurons and synapses. The most important are:
Model name | Description
subnet | Subnetworks are used to structure large models and can be nested to define hierarchical models.
iaf_neuron | Default integrate-and-fire model with alpha-function post-synaptic currents.
iaf_psc_alpha | Like iaf_neuron, but with separate time-constants for excitatory and inhibitory synapses.
iaf_psc_delta | Integrate-and-fire model with delta-function post-synaptic currents.
iaf_psc_exp | Integrate-and-fire model with exponential post-synaptic currents.
iaf_cond_exp | Integrate-and-fire model with exponential synaptic conductances.
iaf_cond_alpha | Integrate-and-fire model with alpha-function synaptic conductances.
aeif_cond_alpha | Adaptive exponential integrate-and-fire model with alpha-function synaptic conductances.
mat2_psc_exp | Integrate-and-fire neuron model with exponential PSCs and adaptive threshold.
hh_psc_alpha | Hodgkin-Huxley model with alpha-function post-synaptic currents.
hh_cond_exp_traub | Hodgkin-Huxley model with exponential synaptic conductances.
poisson_generator | Device to produce Poisson spike trains.
spike_generator | Device to play back a given spike-train.
dc_generator | Device to generate constant currents.
ac_generator | Device to generate sine-wave currents.
spike_detector | Device to record spikes from one or more nodes.
voltmeter | Device to record membrane potentials from one or more nodes.
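The parameters of a node are read with GetStatus, which returns a status dictionary. The following lines are a small sketch, assuming that the integrate-and-fire models expose their membrane potential under the key /V_m:

iaf_neuron Create /neuron Set
neuron GetStatus /V_m get =   % print the membrane potential of the new neuron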
In addition to the neuron models, a variety of synapse models implement spike-timing dependent plasticity (STDP) and short-term plasticity according to Tsodyks and Markram.
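A hedged sketch of how such a synapse model could be configured, assuming the command SetDefaults and the stdp_synapse parameter names /tau_plus and /Wmax (neither is shown elsewhere in this article):

/stdp_synapse << /tau_plus 20.0 /Wmax 100.0 >> SetDefaults   % assumed model and parameter names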
Sub-networks and layers
NEST stores the nodes of a network as a tree with subnetworks as branches and neurons and devices as leaves.
The command LayoutNetwork creates a multi-dimensional network, here a 2 by 3 layer of iaf_neurons. The dimensions are given by an array and the total size is only limited by the computer’s memory:
iaf_neuron [2 3] LayoutNetwork
PrintNetwork shows a part of the network, starting with the node given as the first parameter, down to the depth given as the second parameter. This example shows that layers are represented as nested sub-networks.
[0] 3 PrintNetwork
This produces the following output:
+-[0] root dim=[1003]
   |
   +-[1] subnet dim=[2 3]
      |
      +-[1] subnet dim=[3]
      |  |
      |  +-[1]...[3] iaf_neuron
      |
      +-[2] subnet dim=[3]
         |
         +-[1]...[3] iaf_neuron
Connections
The command Connect connects two nodes. It expects the global identifiers (GIDs) of the pre-synaptic and post-synaptic nodes, and optionally the weight and delay for the connection:
pre_GID post_GID Connect
or
pre_GID post_GID weight delay Connect
The following example creates two neurons and connects them with specified weight and delay:
1.0 pA /weight Set
2.0 ms /delay Set
iaf_neuron Create /pre Set
iaf_neuron Create /post Set
pre post weight delay Connect
Large models have many connections and it would be inefficient to create them one by one with the interpreter. SLI has functions to create many connections at once:
- ConvergentConnect connects a group of source neurons to one target neuron.
- DivergentConnect connects one source neuron to a group of target neurons.
- RandomDivergentConnect and RandomConvergentConnect connect one source or target neuron with a random selection of target or source neurons, respectively.
In distributed simulations connections are created in parallel.
The following example connects all neurons in a 3 by 3 layer to one neuron. The command GetLeaves retrieves the identifiers of all neurons in the layer net.
iaf_neuron [3 3] LayoutNetwork /net Set
iaf_neuron Create /neuron Set
net GetLeaves neuron ConvergentConnect
The following example connects neuron to all neurons in the 3 by 3 layer.
neuron net GetLeaves DivergentConnect
NEST has a topology library to create layers with receptive fields.
A complete example
The following NEST program simulates one neuron that receives spikes from an excitatory and an inhibitory population of randomly firing neurons. The two populations are modelled as Poisson processes whose rates are equal to the number of neurons times the mean neuronal firing rate. The program tries to find the firing rate of the inhibitory population such that the target neuron fires at the same rate as the excitatory population. Examples of more complex network models are described in Brette et al. (2006); the corresponding NEST programs can be downloaded at ModelDB.
The first part of the program defines the simulation parameters:
1000. ms /t_sim Set    % how long we simulate for
16000 /n_ex Set        % size of the excitatory population
4000 /n_in Set         % size of the inhibitory population
5.0 Hz /r_ex Set       % mean rate of the excitatory population
12.5 Hz /r_in Set      % initial rate of the inhibitory population
45.0 pA /epsc Set      % peak amplitude of excitatory synaptic currents
-45.0 pA /ipsc Set     % peak amplitude of inhibitory synaptic currents
1.0 ms /d Set          % synaptic delay
5.0 Hz /lower Set      % lower bound of the search interval
25.0 Hz /upper Set     % upper bound of the search interval
0.001 Hz /prec Set     % how close the rates need to be
The second part creates the nodes and stores their handles:
modeldict using                        % open namespace of neuron models
M_ERROR setverbosity                   % suppress output from Simulate
iaf_neuron Create /neuron Set          % target neuron
poisson_generator Create /ex_pop Set   % excitatory population
poisson_generator Create /in_pop Set   % inhibitory population
spike_detector Create /spikes Set      % the spike detector device
endusing                               % close namespace
The third part defines the parameters of the excitatory and inhibitory populations, using the command SetStatus, which expects the identifier of a node and a dictionary containing the parameters as arguments. A dictionary is a list of name/value pairs, delimited by the symbols << and >>.
ex_pop <<
  /rate r_ex n_ex mul    % multiply rate and number of neurons
>> SetStatus
in_pop <<
  /rate r_in n_in mul    % multiply rate and number of neurons
>> SetStatus
spikes <<
  /withtime false        % suppress output from the spike detector
>> SetStatus
The fourth part connects the excitatory and inhibitory populations to the target neuron with the respective weights and delay as parameters. The command Connect accepts different sets of parameters. To connect the neuron to the spike detector, neither weight nor delay is needed.
ex_pop neuron epsc d Connect
in_pop neuron ipsc d Connect
neuron spikes Connect
To find the optimal rate of the neurons in the inhibitory population, the program simulates the network several times for different values of the inhibitory rate, while measuring the rate of the target neuron. This is repeated until the rate of the target neuron matches the rate of the neurons in the excitatory population. This algorithm is implemented in two steps:
- The function OutputRate measures the firing rate of the target neuron for a given firing rate of the inhibitory neurons. The argument is stored in the variable guess and the inhibitory Poisson generator is configured accordingly. Next, the spike-counter of the spike detector (/events) is set to zero and the network is simulated for t_sim (1000 ms). The command Simulate takes the desired simulation time in milliseconds and advances the network for this amount of time. During the simulation, the spike detector counts the spikes of the target neuron, and after the simulation period the result is extracted. The return value of OutputRate is the mean firing rate of the target neuron.
- The SLI function FindRoot determines the optimal firing rate of the neurons of the inhibitory population, using bisection. FindRoot takes four arguments: First, the function whose zero crossing is to be determined: the firing rate of the target neuron should equal the firing rate of the neurons of the excitatory population. The next two arguments are the lower and upper bounds of the bisection interval. The final argument prec is the desired precision of the zero crossing.
/OutputRate
{
  /guess Set                % store the function parameter
  in_pop <<
    /rate guess n_in mul    % set a new rate for the inhibitory population
  >> SetStatus
  spikes <<
    /events 0               % reset the event counter of the spike detector
  >> SetStatus
  t_sim Simulate
  spikes GetStatus          % retrieve parameters from the spike detector
  /events get t_sim div     % read out the event counter and divide by t_sim
} def

{ OutputRate r_ex 1.0e3 div sub } lower upper prec FindRoot
The example is stored in the file balancedneuron.sli. Running it in NEST produces the following:
SLI ] (balancedneuron) run
1.500000e+01
2.000000e+01
2.250000e+01
2.125000e+01
2.062500e+01
2.093750e+01
2.078125e+01
2.085938e+01
SLI [1] ==
2.085938e+01
SLI ]
While FindRoot searches for a zero-crossing, it prints the candidate values. The first value is by definition (upper+lower)/2. The final value is the position where the function crosses zero within the error bound prec. This result is left as the return value on the stack. The command == prints and removes the top element of the stack.
Performance
The figure above demonstrates the performance of NEST for a network of 12500 integrate-and-fire neurons (80% excitatory and 20% inhibitory), each receiving input from 10% of all neurons. The connections used alpha-shaped current-injecting synapses with a delay of 1 ms. The total number of synapses was \(15.6 \times 10^6\). Neurons were initialized with random membrane potentials and received DC input to sustain asynchronous irregular firing at 12.7 Hz (Brunel 2000).
The figure shows that NEST scales better than linearly. Panel (a) shows how the absolute run-time decreases with the number of processors used. Panel (b) shows the speedup, computed from the same numbers. For four processors, the simulation is more than four times as fast.
Extending NEST
NEST has a modular architecture. It is possible to add modules and to tailor NEST to particular problems. NEST is extensible in different ways:
- At the level of the simulation kernel, it is possible to add new models for neurons and devices.
- At the level of the simulation language, it is possible to add functions and libraries.
- And finally, at the level of C++, it is possible to extend the simulation language.
The NEST Initiative
NEST is developed by the Neural Simulation Technology Initiative (NEST Initiative), founded in 2001 to advance and document methods for the simulation of large neural systems. In 2012, the NEST Initiative reconstituted itself as an association under Swiss federal law, with its seat in Ecublens, Switzerland.
How to get NEST
The NEST Initiative regularly prepares releases of the NEST software, which are available under the GNU General Public License at http://www.nest-initiative.org.
To compile NEST, you need a recent C++ compiler (e.g. GNU gcc) and a POSIX-compatible operating system. To use the Python interfaces PyNEST or PyNN, a Python installation is required.
NEST has been successfully used on Linux, Mac OS X, Sun Solaris, and HP Tru64.
To run NEST under Windows, it is recommended to install Linux in a virtual machine and then install NEST on the virtual machine.
References
- Brette R, Rudolph M, Carnevale T, Hines M, Beeman D, Bower J. M, Diesmann M, Morrison A, Goodman P. H, Harris Jr F C, Zirpe M, Natschlaeger T, Pecevski D, Ermentrout B, Djurfeldt M, Lansner A, Rochel O, Vieville T, Muller E, Davison A P, El Boustani S, and Destexhe A. Simulation of networks of spiking neurons: A review of tools and strategies. submitted (2006).
- Brunel N. Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. Journal of Computational Neuroscience 8:183-208 (2000)
- Diesmann M and Gewaltig M-O. NEST: An Environment for Neural Systems Simulations. Forschung und wissenschaftliches Rechnen, Beiträge zum Heinz-Billing-Preis 2001. Ges. für Wiss. Datenverarbeitung (2002).
- Diesmann M, Gewaltig M-O, and Aertsen A. Stable propagation of synchronous spiking in cortical neural networks. Nature 402:529-533 (1999).
- Fujimoto R M. Parallel and Distributed Simulation Systems. Wiley & Sons, New York (2000).
- Gewaltig M-O, Körner U, and Körner E. A model of surface detection and orientation tuning in primate visual cortex. Neurocomputing 52-54:519-524 (2003)
- Guerrero-Rivera R, Morrison A, Diesmann M, and Pearce T C. Programmable Logic Construction Kits for Hyper real-time Neuronal Modeling. Neural Computation 18:2651-2679 (2006).
- Lamport L. Time, clocks, and the ordering of events in a distributed system. Communications of the ACM 21(7):558-565 (1978).
- Mehring C, Hehl U, Kubo M, Diesmann M, and Aertsen A. Activity Dynamics and Propagation of Synchronous Spiking in Locally Connected Random Networks. Biological Cybernetics 88(5):395--408 (2003).
- Morrison A, Mehring C, Geisel T, Aertsen A, and Diesmann M. Advancing the boundaries of high connectivity network simulation with distributed computing. Neural Computation 17:1776-1801 (2005).
- Morrison A, Straube S, Plesser H E, and Diesmann, M. Exact subthreshold integration with continuous spike times in discrete time neural network simulations. Neural Computation 19:47-79 (2007).
- Morrison A, Aertsen A, Diesmann M. Spike-time dependent plasticity in balanced recurrent networks. Neural Computation 19:1437-1467 (2007).
- Rotter S and Diesmann M. Exact Digital Simulation of Time-Invariant Linear Systems with Applications to Neuronal Modeling. Biological Cybernetics 81:381-402 (1999).
- Zeigler B. P., Praehofer H and Kim T. G. Theory of Modeling and Simulation, second edition, Academic Press Amsterdam (2000)
Internal references
- Frances K. Skinner (2006) Conductance-based models. Scholarpedia, 1(11):1408.
- James Meiss (2007) Dynamical systems. Scholarpedia, 2(2):1629.
- James M. Bower and David Beeman (2007) GENESIS. Scholarpedia, 2(3):1383.
- Mark Aronoff (2007) Language. Scholarpedia, 2(5):3175.
- Rob Schreiber (2007) MATLAB. Scholarpedia, 2(7):2929.
- Ted Carnevale (2007) Neuron simulation environment. Scholarpedia, 2(6):1378.
- Arkady Pikovsky and Michael Rosenblum (2007) Synchronization. Scholarpedia, 2(12):1459.
- Bard Ermentrout (2007) XPPAUT. Scholarpedia, 2(1):1399.
Acknowledgements
Partially funded by DAAD 313-PPP-N4-lk, DIP F1.2, EU Grant 15879 (FACETS), and BMBF Grant 01GQ0420 to the Bernstein Center for Computational Neuroscience Freiburg.
The authors thank Julian Eggert, Jochen Eppler, Abigail Morrison, and Hans Ekkehard Plesser for their constructive comments.
See also
Genesis, NEURON Simulation Environment, XPPAUT, Neural Simulation Language