Silicon Neurons

From Scholarpedia
Giacomo Indiveri et al. (2008), Scholarpedia, 3(3):1887. doi:10.4249/scholarpedia.1887 revision #89078

Curator: Rodney Douglas

Silicon neuron layout example

Silicon neurons emulate the electro-physiological behavior of real neurons. This may be done at many different levels, from simple models (like leaky integrate-and-fire neurons) to models emulating multiple ion channels and detailed morphology. Many different types of hardware implementations of neuron models have been proposed from as early as the 1940s (shortly after the proposal of the McCulloch-Pitts neuron), using whatever technology was available at the time. Hardware implementations were often the only means of modeling neurons and neural computation, until faster digital computers became available. With the advent of cheap computing power and effective neuron modelling tools (such as NEURON and GENESIS), software-based neuron simulations have grown in popularity. Hardware emulation is still actively pursued because of the very high efficiency and very large scale of implementation that can be obtained by exploiting physical analogies with biological neurons in custom-designed silicon chips (Mead, 1989).

Silicon neurons may be implemented in digital, analog, or mixed-signal (digital and analog) technologies. Further, they may be implemented by designing and building an application-specific integrated circuit (ASIC), by using a field-programmable gate array (FPGA), or by some mixture of the two. Each silicon neuron may be implemented on a single chip, or some elements of the neuron may be distributed across chips. Alternatively, many neurons may be implemented on a single chip, with some elements of the neurons on other chips. Different approaches have different advantages and drawbacks: for example, using FPGAs allows for quick updates, but is slower than using digital ASICs; using analog techniques implies real-time and often lower-power operation, whereas using digital techniques implies programmability and offers higher and controllable precision.

Analog implementations of neurons in Very Large Scale Integrated (VLSI) silicon technology are a recent development (initially suggested by Mead, 1989), and dedicated analog VLSI implementations of neural circuits have several merits:

  • A first merit of a silicon neuron is as a neural prosthesis, interfacing with biological neural tissue. Dedicated VLSI implementations (particularly analog VLSI) normally operate in real-time.
  • A second merit is as a modeling tool. Silicon implementations can run very rapidly (and often in real time), enabling them to work directly with external devices generating signals.
  • A third merit of a silicon neuron is as a cell in a larger network for neural computation. Again, real-time operation is possible, enabling direct interfacing (and hence real-time interaction) with devices like cameras or sound acquisition devices.

Real (and silicon) neurons have a number of active parts: one dissection of a neuron is into synapses, dendrites, soma, and axon. Not all silicon neurons actually implement all of these elements: synapses in particular are sometimes placed on additional chips.

Many different versions of neurons have been implemented in silicon (see Smith, 2006 for a review written in 2004). Two main classes of models of spiking neurons that have been implemented in silicon are conductance-based models and integrate-and-fire (I&F) models. Conductance-based models represent bulk ion channels, often in multi-compartment neurons. (Leaky) integrate-and-fire neurons reduce this complexity to a single compartment with a single conductance.
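
The single-compartment reduction can be made concrete with a few lines of simulation code (a minimal forward-Euler sketch of a leaky I&F neuron; all parameter values are illustrative, not taken from any particular chip):

```python
# Minimal leaky integrate-and-fire (LIF) neuron, forward-Euler integration.
# One compartment, one (leak) conductance:  C dV/dt = -g_leak*(V - V_rest) + I_in
# All parameter values are illustrative.

def simulate_lif(i_in, dt=1e-4, c_mem=1e-9, g_leak=50e-9,
                 v_rest=0.0, v_thresh=0.02, v_reset=0.0):
    """Integrate the input current sequence i_in (amperes); return spike times (s)."""
    v = v_rest
    spikes = []
    for step, i in enumerate(i_in):
        v += dt / c_mem * (-g_leak * (v - v_rest) + i)
        if v >= v_thresh:             # spiking threshold crossed:
            spikes.append(step * dt)  # emit a spike ...
            v = v_reset               # ... and reset the membrane
    return spikes

spikes = simulate_lif([2e-9] * 5000)  # 0.5 s of constant 2 nA input
```

With a constant suprathreshold input the neuron fires regularly; sub-threshold inputs simply decay back to the resting potential through the single leak conductance.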


Conductance-based models

These types of silicon neurons contain analog circuits that emulate the physics of real ionic conductances, and are typically based on prototypical ion conductance models that obey Hodgkin-Huxley principles. These conductances can be emulated using subthreshold CMOS circuits. For example, the silicon neurons of Mahowald and Douglas (1991) and Rasche and Douglas (2001) are composed of connected compartments, each of which is populated by modular sub-circuits that emulate particular ionic conductances. The dynamics of these types of circuits are qualitatively similar to those of the Hodgkin-Huxley mechanism, without implementing its specific equations. Others have also developed such circuits: one of the most elegant is Farquhar and Hasler's (2005). Hynna and Boahen have developed a small circuit which also emulates inactivation and the voltage dependence of the time constant (Hynna and Boahen, 2007). An example of this type of silicon neuron circuit is shown here.

Figure 1: A conductance-based silicon neuron circuit

In this circuit the Passive block implements a conductance term that models the passive leak behavior of a neuron: in the absence of stimulation the membrane potential Vmem leaks to Eleak following first-order low-pass filter dynamics. Similar strategies are used to implement the active sodium and potassium conductance circuits. The Sodium block implements the sodium activation and inactivation circuits that reproduce the sodium conductance dynamics observed in real neurons. The Potassium block implements the circuits that reproduce the potassium conductance dynamics. The bias voltages Gleak, VτNa, and VτK determine the neuron’s dynamic properties, while GNaon, GNaoff, GK, and Vthr set the silicon neuron’s action potential characteristics. The transconductance amplifiers implement simple first-order low-pass filters that provide the kinetics. A current mirror is used to subtract the sodium activation and inactivation variables (INaon and INaoff), rather than multiplying them as in the Hodgkin-Huxley formalism. Additional current mirrors half-wave rectify the sodium and potassium conductance signals, so that they are never negative. Besides the sodium and potassium circuits, several other conductance modules have been implemented using these principles, for example: persistent sodium current, various calcium currents, calcium-dependent potassium current, potassium A-current, non-specific leak current, and an exogenous (electrode) current source.
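
The signal flow just described can be paraphrased in software (a behavioural sketch of the block diagram, not a transistor-level model; the gains, thresholds, and time constants below are arbitrary illustrative values):

```python
# Behavioural sketch of the conductance-based silicon neuron blocks above.
# The transconductance amplifiers act as first-order low-pass filters; the
# current mirrors SUBTRACT sodium activation and inactivation (instead of
# multiplying them, as Hodgkin-Huxley would) and half-wave rectify the result.
# All constants are illustrative, not taken from the actual circuit.
c_mem, g_leak, e_leak = 1e-9, 10e-9, 0.0
v_thr = 0.03
gain_on,  tau_on  = 2e-6, 1e-4   # fast sodium activation
gain_off, tau_off = 2e-6, 1e-3   # slower sodium inactivation
gain_k,   tau_k   = 1e-6, 1e-3   # potassium

def low_pass(state, target, dt, tau):
    """One first-order low-pass filter step."""
    return state + dt / tau * (target - state)

def step_neuron(s, i_in, dt=1e-5):
    """Advance the neuron state dict s by one time step."""
    v = s["v_mem"]
    drive = max(v - v_thr, 0.0)                       # above-threshold drive
    s["i_na_on"]  = low_pass(s["i_na_on"],  gain_on  * drive, dt, tau_on)
    s["i_na_off"] = low_pass(s["i_na_off"], gain_off * drive, dt, tau_off)
    i_na = max(s["i_na_on"] - s["i_na_off"], 0.0)     # subtract, then rectify
    s["i_k"] = low_pass(s["i_k"], gain_k * drive, dt, tau_k)
    # Passive block: Vmem leaks to Eleak; sodium depolarises, potassium repolarises:
    s["v_mem"] += dt / c_mem * (g_leak * (e_leak - v) + i_na - s["i_k"] + i_in)
    return s
```

The fast activation minus the delayed inactivation produces a transient net sodium current, giving the upswing and collapse of a spike without any explicit multiplication.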

Integrate-and-Fire (I&F) models

Integrate-and-fire (I&F) models are less realistic than conductance-based ones, but require fewer transistors and less silicon real estate. They allow for the implementation of large, massively parallel networks of neurons in a single VLSI device. I&F neurons integrate pre-synaptic input currents and generate a voltage pulse analogous to an action potential when the integrated voltage reaches a spiking threshold. Networks of I&F neurons have been shown to exhibit a wide range of useful computational properties, including feature binding, segmentation, pattern recognition, onset detection, and input prediction. Many variants of these circuits were built during the 1950s and 1960s using discrete electronic components. The first simple VLSI version was probably the Axon-hillock circuit, proposed by Carver Mead and colleagues in the late 1980s (Mead, 1989).

Figure 2: The Axon-hillock integrate-and-fire circuit

In this circuit, a capacitor that represents the neuron’s membrane capacitance integrates current input to the neuron. When the capacitor potential crosses the spiking threshold, a pulse Vout is generated and the membrane potential Vmem is reset. This circuit captures the basic principle of operation of biological neurons, but cannot faithfully reproduce all of the dynamic behaviors observed in real neurons. In addition, it has the drawback of dissipating non-negligible amounts of power while the membrane potential Vmem crosses the amplifier's switching threshold. Many other leaky I&F neuron circuits have been implemented (Smith, 2006).
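
Because the basic Axon-hillock circuit has no leak, its firing rate is directly proportional to the input current: the capacitor charges linearly from the reset voltage to the switching threshold and is discharged on every spike. A back-of-the-envelope sketch (illustrative parameter values; a real circuit also spends some time in the reset phase):

```python
# Ideal (leak-free) Axon-hillock firing rate: the inter-spike interval is
# T = Cmem * (Vthresh - Vreset) / Iin, so the rate 1/T is linear in Iin.
# Parameter values are illustrative.

def axon_hillock_rate(i_in, c_mem=1e-12, v_thresh=1.0, v_reset=0.0):
    """Firing rate (Hz) of an ideal non-leaky integrate-and-fire neuron."""
    if i_in <= 0.0:
        return 0.0
    return i_in / (c_mem * (v_thresh - v_reset))

r1 = axon_hillock_rate(1e-9)  # 1 nA into 1 pF with a 1 V swing -> about 1000 Hz
r2 = axon_hillock_rate(2e-9)  # doubling the input doubles the rate
```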

Hybrid approaches

Several low-power and compact variants of the Axon-hillock circuit have been designed more recently (Culurciello et al., 2001). Similarly, alternative above-threshold solutions for implementing conductance-based models of spiking neurons have been proposed (Alvado et al., 2004).

A compromise between the elaborate but bulky conductance-based approach and the compact but simple I&F models is provided by extended I&F neuron models with additional neural characteristics, such as spike-frequency adaptation properties and refractory period mechanisms. An example of such a circuit, proposed in Indiveri et al. (2006), is shown here.

Figure 3: A low-power integrate-and-fire circuit with additional neural characteristics.

In addition to implementing the basic behavior of integrating input currents and producing output pulses at a rate that is proportional to the amplitude of its input, this low-power I&F neuron implements a leak mechanism (as in leaky I&F neuron models); an adjustable spiking threshold mechanism for adapting or modulating the neuron’s spiking threshold; a refractory period mechanism for limiting the maximum possible firing rate of the neuron; and a spike-frequency adaptation mechanism, for modeling some of the adaptation mechanisms observed in real neurons. The input current Iin is integrated onto the neuron’s membrane capacitor Cmem until the spiking threshold is reached. At that point the output signal Vspk goes from zero to the power supply rail, signaling the occurrence of a spike, and the membrane capacitor is reset to zero. The leak module implements a current leak on the membrane. The spiking threshold module controls the voltage at which the neuron spikes. The adaptation module subtracts a firing rate dependent current from the input node. The amplitude of this current increases with each output spike and decreases exponentially with time. The refractory period module sets a maximum firing rate for the neuron. The positive feedback module is activated when the neuron begins to spike, and is used to reduce the transition period in which the inverters switch polarity, dramatically reducing power consumption. The circuit’s biases (Vlk , Vadap , Valk , Vsf , and Vrf) are all subthreshold voltages that determine the neuron’s properties.
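
The listed modules translate into a behavioural model as follows (a software sketch of the leak, refractory, and spike-frequency adaptation mechanisms described above, not of the transistor circuit; all parameter values are illustrative):

```python
# Behavioural sketch of the low-power I&F neuron: leak, refractory period,
# and spike-frequency adaptation (a current that is subtracted from the input
# node, grows with each output spike, and decays exponentially with time).
# Parameter values are illustrative.

def simulate_adaptive_if(i_in, dt=1e-4, c_mem=1e-9, g_leak=20e-9, v_thr=0.02,
                         t_refr=2e-3, i_adap_step=0.2e-9, tau_adap=0.1):
    v, i_adap, refr_left = 0.0, 0.0, 0.0
    spikes = []
    for step, i in enumerate(i_in):
        i_adap *= 1.0 - dt / tau_adap        # adaptation current decays
        if refr_left > 0.0:                  # refractory: membrane held at reset
            refr_left -= dt
            continue
        v += dt / c_mem * (i - i_adap - g_leak * v)  # adaptation is subtracted
        if v >= v_thr:
            spikes.append(step * dt)
            v = 0.0                          # reset membrane to zero
            i_adap += i_adap_step            # adaptation grows with each spike
            refr_left = t_refr               # refractory period caps the rate
    return spikes
```

Under constant input the inter-spike intervals lengthen as the adaptation current builds up, while the refractory period sets a hard upper bound on the firing rate.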

Implementing dendrites and synapses

Interconnecting silicon neurons is necessary for building network models. Where there are multiple neurons on a single chip, it may be desirable to place the interconnecting circuitry on the same chip. However, where there are multiple chips, some way of transmitting the spiking outputs between chips is necessary. One standard for this is the address-event representation (AER) (Boahen, 2000).
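
The AER idea can be illustrated with a toy encoder and decoder (a software analogy only: real AER uses asynchronous handshaking on a shared digital bus, and the addresses and mapping table below are arbitrary examples):

```python
# Toy address-event representation (AER): each neuron gets a unique address;
# a spike is transmitted simply as that address, with spike timing implicit
# in when the event appears on the shared bus. A receiver routes each
# address to its local targets through a mapping table.

def aer_encode(spike_trains):
    """Merge per-neuron spike times into one time-ordered (time, address) stream."""
    events = [(t, addr) for addr, times in spike_trains.items() for t in times]
    return sorted(events)

def aer_decode(events, mapping):
    """Deliver each event to the target synapses listed in the mapping table."""
    return [(t, target) for t, addr in events for target in mapping.get(addr, [])]

# Neuron 0 spikes at 1 ms and 3 ms, neuron 1 at 2 ms:
bus = aer_encode({0: [0.001, 0.003], 1: [0.002]})
out = aer_decode(bus, {0: ["syn_a"], 1: ["syn_b", "syn_c"]})
```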

Two questions arise: where on the neuron's dendrites should the synapse be placed (and what should the effect of the dendritic morphology on the synapse be), and how should the synapse be modelled?

Synapse location affects synaptic effect, both in terms of overall effect (Williams and Stuart, 2002) and in terms of interaction between post-synaptic potentials. Northmore and Elias (1996) modeled this, resulting in near-linear summation for more distant synapses, plus the possibility of local saturation for nearby synapses.
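
That qualitative behaviour can be sketched as a saturating sum within each dendritic compartment (a toy illustration of the reported effect, not the Northmore and Elias circuit; the saturation constant is arbitrary):

```python
# Toy dendritic summation: synapses on the SAME compartment saturate locally
# (the driving force shrinks as the compartment depolarises), while distant
# compartments sum approximately linearly at the soma.

def compartment_response(g_syn_list, g_sat=10.0):
    """Saturating response of one compartment to its local synaptic conductances."""
    g = sum(g_syn_list)
    return g / (g + g_sat)      # conductance-divider style saturation

def soma_input(compartments):
    """Contributions from distant compartments sum near-linearly at the soma."""
    return sum(compartment_response(c) for c in compartments)

clustered   = soma_input([[5.0, 5.0]])    # two synapses on one branch
distributed = soma_input([[5.0], [5.0]])  # the same synapses on two branches
```

Distributing the two inputs across branches yields a larger somatic effect than clustering them, reproducing the sublinear local summation.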

How should the effect of a synapse be modelled? Chemical synapses change the depolarisation level post-synaptically because the neurotransmitter released at the presynaptic terminal binds to sites postsynaptically, resulting in the opening of some ion channels (and hence changes in conductance): ions then cross the membrane. Westerman et al. (1997) selected one of an array of conductances. Vogelstein et al. (2007) used an off-chip look-up table whose values set a conductance computed from a number of release sites, a probability of release, and a quantum size: this emulates a more detailed model of a synapse.
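
The quantal model behind the Vogelstein et al. look-up table can be sketched directly from that description (n release sites, release probability p, quantum size q; the independent-site binomial draw is an illustrative reading of the model, and the parameter values are arbitrary):

```python
import random

# Quantal synapse model: n release sites, each releasing independently with
# probability p on a presynaptic spike, each released vesicle contributing a
# conductance quantum q. Parameter values below are illustrative.

def synaptic_conductance(n_sites, p_release, quantum, rng):
    """Conductance produced by one presynaptic spike (a binomial draw)."""
    released = sum(rng.random() < p_release for _ in range(n_sites))
    return released * quantum

def mean_conductance(n_sites, p_release, quantum):
    """Expected conductance per spike: n * p * q."""
    return n_sites * p_release * quantum

rng = random.Random(42)
g = synaptic_conductance(10, 0.5, 1e-9, rng)  # between 0 and 10 quanta
```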

References

  • Alvado, L., Tomas, J., Saïghi, S., Renaud-Le Masson, S., Bal, T., Destexhe, A. and Le Masson, G., Hardware computation of conductance-based neuron models, Neurocomputing, 58–60, pp. 109–115, 2004.
  • Boahen, K., Point-to-point connectivity between neuromorphic chips using address-events, IEEE Transactions on Circuits and Systems, 47(5), pp. 416–434, 2000.
  • Culurciello, E., Etienne-Cummings, R. and Boahen, K., Arbitrated address-event representation digital image sensor, Electronics Letters, 37, pp. 1443–1445, 2001.
  • Farquhar, E. and Hasler, P., A bio-physically inspired silicon neuron, IEEE Transactions on Circuits and Systems, 52(3), pp. 477–488, 2005.
  • Hynna, K.M. and Boahen, K., Thermodynamically equivalent silicon models of voltage-dependent ion channels, Neural Computation, 19, pp. 327–350, 2007.
  • Indiveri, G., Chicca, E. and Douglas, R., A VLSI array of low-power spiking neurons and bistable synapses with spike-timing dependent plasticity, IEEE Transactions on Neural Networks, 17(1), pp. 211–221, 2006.
  • Mahowald, M. and Douglas, R., A silicon neuron, Nature, 354, 1991.
  • Mead, C.A., Analog VLSI and Neural Systems, Addison-Wesley, Reading, MA, 1989.
  • Northmore, D. and Elias, J., Spike train processing by a silicon neuromorph: the role of sublinear summation in dendrites, Neural Computation, 8(6), pp. 1245–1265, 1996.
  • Rasche, C. and Douglas, R., An improved silicon neuron, Analog Integrated Circuits and Signal Processing, 23(3), pp. 227–236, 2001.
  • Smith, L.S., Implementing neural models in silicon, in Handbook of Nature-Inspired and Innovative Computing: Integrating Classical Models with Emerging Technologies, ed. A. Zomaya, Springer US, pp. 433–475, 2006.
  • Vogelstein, R.J., Mallik, U., Vogelstein, J.T. and Cauwenberghs, G., Dynamically reconfigurable silicon array of spiking neurons with conductance-based synapses, IEEE Transactions on Neural Networks, 18(1), 2007.
  • Westerman, W.C., Northmore, D.P. and Elias, J.G., Neuromorphic synapses for artificial dendrites, Analog Integrated Circuits and Signal Processing, 13(1–2), pp. 167–184, 1997.
  • Williams, S.R. and Stuart, G.J., Dependence of EPSP efficacy on synapse location in neocortical pyramidal neurons, Science, 295(5561), pp. 1907–1910, 2002.

Internal references

  • James M. Bower and David Beeman (2007) GENESIS. Scholarpedia, 2(3):1383.


See also

Biologically Inspired Robotics, Neurocomputer, Neuromorphic Engineering, Spiking Neurons, VLSI Implementation of Neural Networks
