Continuous attractor network

A continuous attractor network (or continuous-attractor neural network, CANN) is an attractor network possessing one or more quasicontinuous sets of attractors that, in the limit of an infinite number of neuronal units \(N\ ,\) merge into continuous attractor(s). Thus, a continuous attractor network is a special kind of attractor neural network, which in turn is a special kind of nonlinear dynamical system. The fact that the notion of a CANN only makes precise sense in the infinite-\(N\) limit is consistent with the fact that most rigorous results in artificial neural network theory are known in the same limit (Amit 1989, Hertz et al. 1991, Hoppensteadt & Izhikevich 1997). A good introduction to CANN models and their biological applications is given in the book by Trappenberg (2002, pp. 207-232).

The notion of a continuous attractor

Viewed as a set of points embedded in the phase space, an attractor can be discrete (a discrete set of points) or continuous (a continuous object embedded in the phase space, typically a manifold: see http://en.wikipedia.org/wiki/Manifold). Continuous attractors can be classified based on their dynamics. For instance, a trajectory of the system approaching a continuous attractor may keep moving around it, eventually visiting every neighborhood of every point of the attractor. Examples of such attractors include an attracting periodic orbit and a strange attractor. While these are classic cases of attractors that form continuous sets of points, the term "continuous attractor" is usually not associated with them in the literature. The points on a periodic orbit do form a continuum, but none of them corresponds to an equilibrium: instead, the whole orbit is continuously traversed by the system in the attractor state. Therefore, even though this attractor is described by a continuum of points in the phase space, it corresponds to only one dynamical trajectory of the system, not to a continuum of attractor trajectories or equilibrium points.

In another scenario, each trajectory attracted by a continuous attractor reaches one point of the attractor in a finite time and stays there forever if no perturbation is applied; however, an arbitrarily small perturbation suffices to move the system around on its continuous attractor. In this case, each point of the continuous attractor is a point of neutral equilibrium and not a separate attractor. This is the kind of attractor usually meant by the term "continuous attractor" in the biological and modeling literature. In effect, such a continuous attractor works as a soft dynamical constraint, forcing the system to stay close to, or inside, the attractor manifold (e.g., Wu and Amari, 2005).
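
A minimal worked example of this scenario is the planar linear system
\[ \dot{x} = 0, \qquad \dot{y} = -y , \]
for which the entire \(x\)-axis is a line of such equilibria: every trajectory relaxes vertically onto the axis and remains at the point \((x_0, 0)\) it reaches, while an arbitrarily small horizontal perturbation moves the state to a neighboring equilibrium. No single point of the axis attracts an open set of states, but the axis as a whole does, making it a single continuous (line) attractor.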

Figure 1: A simple system with a continuous attractor.
  • As an example of a system with a continuous attractor, consider one billiard ball in a pool (Figure 1). The ball can move horizontally and stop, due to friction, at virtually any point in the pool. Therefore, any still state of the ball at any point of the pool is a stationary state and belongs to the continuous attractor \(A\ .\) The basin of attraction \(B\) in this case includes the subset of motion states of the ball that result in the ball stopping at some point in the pool. In particular, \(B\) includes states of the ball positioned above the pool: starting from these states, the ball falls and eventually stops somewhere inside the pool. There are also initial states starting from which the ball ends up outside of the pool; these states do not belong to \(B\ .\) Recall that \(A\) and \(B\) are subsets of the phase space (which in this example can be understood as the space of all position-plus-velocity states of the ball). One can check that (i) \(A\) is a closed set, (ii) \(A\) attracts an open set of states containing \(A\) itself, and (iii) given these properties, \(A\) is minimal, in the sense that it does not contain a proper subset (http://en.wikipedia.org/wiki/Proper_subset) that satisfies (i) and (ii). Therefore, \(A\) satisfies the definition of an attractor (Strogatz 1994, p. 324) and for this reason is called "a continuous attractor", not "a continuous attractor set": it is indeed one attractor rather than an infinite set of attractors.

One can imagine, however, a logically possible abstract situation in which there is a continuum of points of stable equilibrium (e.g., the ball in the above example is somehow hard-constrained to move vertically only and cannot move horizontally in the vicinity of the surface of the pool, while it still can move horizontally at a higher altitude). Odd as it is (one could imagine a theoretical physical model of this situation involving an electrically charged ball and an infinitely strong magnetic field), this is not an example of a continuous attractor, or of a single attractor at all, but rather an example of a continuum of attractors. Each point of the pool in this case is a point of stable equilibrium (not neutral equilibrium), and therefore is an attractor on its own. The whole continuum of points taken together is not an attractor, because it does not satisfy the requirement of minimality (see above).

Examples in biology

Biology offers many examples matching the concept of a continuous attractor explained above, ranging from more trivial to less trivial ones, including, e.g., various forms of homeostasis. The most interesting examples, however, are found in the dynamics of the neuronal networks of the brain (see attractor network for examples of all major classes of continuous attractors as they relate to the brain).

One central example of a biologically relevant continuous attractor is closely related to the notion of cognitive mapping in the brain (O'Keefe and Nadel 1978; Luo and Flanagan 2007). Typically, an active cognitive map has a single locus of activity that can be positioned at virtually any of its points, like a billiard ball in a pool. Several mammalian brain systems are implicated in continuous attractor dynamics (although no rigorous experimental proof exists at present). Among these systems are the head direction (HD) system, the place cell system of the hippocampus, and the grid cell system of the entorhinal cortex, among others.

Head direction cells, or HD cells, in the rat (for a review, see Taube 1998) were found in various parts of the subicular complex, in the anterior dorsal nuclei of the thalamus, in the retrosplenial (posterior cingulate) neocortex, in several portions of the extrastriate sensory neocortex, in the lateral mammillary nuclei, and in the dorsal striatum. An HD cell is active when the rat's head is oriented in a specific absolute direction in the environment, regardless of the spatial location, the position of the head with respect to the body, and other behavioral variables.

Figure 2: HD cells allocated on a circle show a "bump" in their collective activity distribution.

When HD cells are symbolically allocated on a circle according to their directional preferences (Figure 2), their collective activity distribution has one prominent feature: a "bump", called an activity packet, which is the population vector code for the direction of the head. In other words, the position of the activity packet on the circle represents the planar direction the rat is facing. Because the rat can face any direction, the activity packet can be stationarily centered at any point on the circle. The stability of the activity packet's shape and amplitude does not depend on the availability of immediate sensory cues of direction, and therefore is likely to have an intrinsic origin. It is therefore assumed in the literature that the core network of HD cells is a one-dimensional CANN (apparently without any relation between the anatomical locations of neurons and their symbolic locations on the abstract circle: Zhang 1996). In this case, the external mechanism updating the HD representation is likely to involve integration of the head's angular velocity (which is also represented in the brain), as well as information derived from immediate directional cues (Skaggs et al. 1995).

Figure 3: Toroidal CANN of grid cells (McNaughton et al. 2006)

Unlike HD cells, classical place cells are known to exist in the hippocampal formation only. A theoretical view that the hippocampus (specifically, CA3) implements a continuous-attractor-based map of the environment was developed by many researchers, based on the original concept of O'Keefe and Nadel (1978) that the rodent hippocampus implements a cognitive map of the environment. Several theoretical and computational CANN models have been developed to support this point of view (Samsonovich and McNaughton 1997; Tsodyks 1999; Doboli et al. 2000; Conklin and Eliasmith 2005), yet the question of whether CA3 can be regarded as an attractor neural network remains a matter of speculation (Samsonovich 2007). On the other hand, local networks of the entorhinal grid cells are more likely CANN candidates (McNaughton et al. 2006; cf. alternative models: Fuhs & Touretzky 2006; Burgess et al. 2007; Hasselmo et al. 2007). Grid cells can be viewed as a variety of place cells characterized by regular patterns of their spatial activity. Assuming that these patterns result from continuous attractor dynamics, the corresponding continuous attractor is a torus (McNaughton et al. 2006; see also Witter and Moser 2006), and the mystery that remains to be solved is the development of the toroidal CANN architecture (Figure 3). At present, there is no widely accepted theoretical explanation of the emergence of the grid cell phenomenon, although several models have recently been proposed (McNaughton et al. 2006; Fuhs & Touretzky 2006; Burgess et al. 2007; Hasselmo et al. 2007).

Historically, early models of continuous attractors in biological neuronal networks were introduced by Shun-Ichi Amari (1972, 1977) and Christoph von der Malsburg (1973). Continuous-attractor dynamics are implicated in the ability of certain brain structures to track continuously changing stimuli smoothly. Examples include orientation columns in the visual cortex described by the ring model (Ben-Yishai et al. 1995) and the related models of von der Malsburg, smoothly changing representations of intention in the motor cortex (Georgopoulos et al. 1993), continuous-attractor models of oculomotor control (Seung 1998), etc.

Charts, activity packets and CANN models

Returning to the abstract computational notion of a CANN and to CANN models studied in the literature, the following can be noticed (although there are CANN models that do not conform to this characterization: see, e.g., attractor networks). A CANN model is typically defined by the following steps, illustrated by a code sketch after the list:

  • (i) selecting a manifold as a base for the CANN: for example, the underlying manifold for a CANN modeling HD cells would be a circle, and for grid cells it would be a torus. Points of the circle represent possible directions in a 2-D environment; a torus can be rolled over a 2-D environment, so that each of its points represents a set of locations arranged in a hexagonal grid, and so on;
  • (ii) allocating neuronal units in this manifold, that is, assigning to each unit a point on the manifold;
  • (iii) establishing weighted connections between neuronal units depending on their distances measured in the manifold (typically, the connection weight is a monotonically decreasing function of the distance that may include random static fluctuations);
  • (iv) specifying dynamic rules for neuronal units, and
  • (v) setting external input (if any) and initial conditions of the system.
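
As a minimal illustration of steps (i)-(v), the following sketch builds a ring CANN (the HD-cell case of the examples above) with simple firing-rate dynamics. The Gaussian connectivity profile, the uniform inhibitory offset, and all parameter values are illustrative assumptions of this sketch rather than prescriptions from the literature:

```python
import numpy as np

# (i)-(ii) Base manifold: a circle; allocate N units at equally spaced angles.
N = 256
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)

# (iii) Connection weights: a monotonically decreasing (here Gaussian) function
# of the circular distance, with a uniform inhibitory offset (both illustrative).
d = np.angle(np.exp(1j * (theta[:, None] - theta[None, :])))  # wrapped distance
W = np.exp(-d**2 / (2 * 0.3**2)) - 0.5

# (iv) Dynamic rule for the units: tau * dr/dt = -r + [W r / N + I]_+
def simulate(W, r, I, steps=500, dt=0.1, tau=1.0):
    for _ in range(steps):
        r = r + (dt / tau) * (-r + np.maximum(W @ r / N + I, 0.0))
    return r

# (v) Initial conditions and external input: weak random noise, then a
# transient localized cue at 90 degrees.
rng = np.random.default_rng(0)
cue_dist = np.angle(np.exp(1j * (theta - np.pi / 2)))
cue = 0.5 * np.exp(-cue_dist**2 / (2 * 0.3**2))
r = simulate(W, 0.1 * rng.random(N), cue)  # an activity packet forms at the cue
r = simulate(W, r, np.zeros(N))            # cue removed: the packet persists
print("packet center (deg):", np.degrees(theta[np.argmax(r)]))
```

With these (assumed) parameters, the localized packet persists after the cue is removed, which is the defining stationary-state behavior described next.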

A stationary state of a typical CANN model, described in terms of the collective neuronal activity distribution over the manifold, is a localized activity packet that can be centered at any point of the manifold. At a finite \(N\ ,\) the system has a discrete set of attractors, each corresponding to an activity packet centered at a particular neuronal unit. In each of the attractor states, only the neuronal units inside the activity packet are highly active; typically, they constitute a small, finite, fixed fraction of all units. For virtually any model neuronal unit there is an attractor state in which the unit belongs to the top of the activity distribution. As the number of units \(N\) goes to infinity, individual attractors merge into one continuous attractor that has the topology of the manifold. From a practical point of view, the merging occurs at a finite \(N\) (the value depends on the model), when perturbation thresholds separating individual attractors become small compared to intrinsic noise.
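
Continuing the sketch above, this finite-\(N\) picture can be probed directly: cue the network at arbitrary angles and record where the packet settles once the cue is removed. The cue angles below are arbitrary choices for illustration:

```python
# Continuation of the sketch above: at finite N, the settled packet centers
# fall on a discrete set of positions (roughly one attractor per unit); as N
# grows, perturbation thresholds between neighboring attractors shrink and
# the discrete set merges into a quasicontinuum.
for angle in (0.3, 1.7, 4.0):                      # arbitrary cue angles (rad)
    dist = np.angle(np.exp(1j * (theta - angle)))
    r0 = simulate(W, 0.1 * rng.random(N), 0.5 * np.exp(-dist**2 / (2 * 0.3**2)))
    r0 = simulate(W, r0, np.zeros(N))              # let it settle with no input
    print(f"cue at {angle:.2f} rad -> packet at {theta[np.argmax(r0)]:.2f} rad")
```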

Figure 4: A toy example of a (quasi-)continuous attractor network constructed based on Wilson-Cowan neuronal dynamics. 3,000 units (small circles) are randomly distributed in a square; associative excitatory connections are created based on the distance between units. Global inhibition is uniform. The blue star represents the center of stimulation. The darkness of the circles represents the unit activity.

Given these observations, the following approximate notion can be defined (it can be given a precise sense in the infinite-\(N\) limit). A chart of a given CANN is an allocation of neuronal units on an abstract manifold, such that the neuronal units cover all parts of the manifold and, for each allocated unit, there is an attractor state corresponding to an activity packet localized on this manifold and centered at this unit or very close to it. The animation (Figure 4) demonstrates an example of a chart and the associated neuronal unit dynamics. In the infinite-\(N\) limit, a chart becomes isomorphic to the corresponding continuous attractor \(A\ ,\) with the isomorphism relating attractor states in \(A\) to centers of the corresponding activity packets on the chart.

While the term "chart" is not widely accepted, it cannot be replaced here with the term "manifold", because it stands for something different: it refers to the embedding of neuronal units in the manifold. "Chart" is also a standard term in the theory of manifolds The notion of a chart in CANN theory is used in many sources, providing a means of analysis of CANN dynamics. There is a number of phenomena that are better understood at a "macroscopic" level rather than at the neuronal level: e.g., an activity packet can be driven along the chart by external input or by temporarily introduced asymmetry in intrinsic connections (which is believed to be the basis for the path integration nechamism in the nervous system), can oscillate horizontally (this phenomenon is called the phase precession), and under certan circumstances can be forced to jump to another location in the chart. The notion of a chart allows for an efficient continuous-limit description of CANN dynamics in these and other examples (Samsonovich & McNaughton 1997; see also Hoppensteadt & Izhikevich 1997).

The notion of a CANN outlined above (and the term itself) emerged approximately 10-15 years ago. Today the literature on CANNs is substantial: a search of ISI Web of Knowledge for "continuous attractor (neural) network" returns 90 publications, most of which are very recent (e.g., Wu et al. 2008, Machens & Brody 2008). This is a rapidly growing field, in which many fundamental questions remain open. One example of an interesting problem with potential implications for neuroscience is described below.

Multiple charts and remapping

In almost all studied CANN models (e.g., in the above examples), there exists one chart that works for all neuronal units and for all continuous attractor states. That is, whenever the network is in a continuous attractor state, its activity is described by a standard activity packet located somewhere (virtually anywhere) on the chart.

There is, however, another logical possibility: that no single chart can work for all continuous attractor states of a CANN, and more than one chart is required to describe all continuous attractors of the network. In this case, the network can be called a multichart CANN (or MCANN). Given this situation, the following notions make sense. An active chart is the chart on which the distribution of neuronal activity is currently localized (described by an activity packet). A spurious state of an MCANN is an attractor state with two or more simultaneously active charts (not all MCANN models allow for spurious states).

The possibility of MCANNs was discussed in theoretical speculations several times (e.g., Muller et al. 1996); later, the existence of MCANNs was established numerically, using an integrate-and-fire neural network model with a specially designed architecture (Samsonovich and McNaughton 1997), and at the same time analytically, using the replica method (Samsonovich 1997). Samsonovich (1997) calculated the critical storage capacity (the maximal number of charts the network can store) of a spherical MCANN model for \(\mathbf{S}^1\) and \(\mathbf{S}^2\) topologies of the base manifold. The answer is that the network can store a number of charts \(P\) proportional to the number of neurons \(N\ ;\) however, the storage capacity is surprisingly low: \(P/N < \alpha_c = 0.0042\) for \(\mathbf{S}^2\ .\) Subsequently, Battaglia and Treves (1998) calculated the storage capacity of an MCANN with the topology of a torus in two limits: fully connected and extremely diluted architectures. Their result for a two-dimensional torus is \(\alpha_c = 0.0008\) for a fully connected network (which is generally greater than the same number for diluted networks). Therefore, in order to observe phenomena associated with multiple charts, a network of substantial size is required: for example, at \(\alpha_c = 0.0008\ ,\) storing even two charts requires \(N > 2/\alpha_c = 2500\) units, i.e., on the order of thousands of neuronal units or more.

Figure 5: Activity packet on a chart observed in the rat hippocampus (from Samsonovich & McNaughton 1997). Units on horizontal axes are centimeters. The animal is located at the center of the square and is moving to the left and toward the viewer. The same activity of the same units distributed on a different chart would show a "white noise" pattern.

A putative example of an MCANN in biology is the autoassociative network of the hippocampal CA3 place cells. This network (regardless of whether it is an attractor network or not) exhibits clear activity packets (like the one shown in Figure 5, which actually represents CA1 activity). Both CA3 and CA1 may exhibit a perfect activity packet on one chart in one environment, while in another environment the same chart would show a random distribution of activity, and a perfectly localized activity packet would emerge on another chart. Typically, there is no significant correlation between relative neuronal positions on different charts. The phenomenon of active chart switching is known as remapping; in the case of a substantial correlation between the two charts, the remapping is called partial. Remapping may occur under various conditions: for example, it may occur reproducibly in the same environment, during the same recording session, at the moment when the behavioral paradigm changes (Markus et al. 1995). While the underlying mechanisms remain to be understood, one possible explanation involves the notion of MCANN.

An important problem with implementing continuous attractors in biological neuronal networks is the effect of inhomogeneity and noise. Network models of continuous attractors require perfect homogeneity: any small inhomogeneity in the network architecture breaks the symmetry of the manifold, and the continuous attractor fragments into a (usually small) set of point attractors. Since inhomogeneity and noise are unavoidable features of biological systems, it is an important challenge to understand how the symmetry could be recovered. This phenomenon was first noted by Tsodyks and Sejnowski (1995), and several studies have since addressed it, e.g., using homeostatic plasticity in spatial working memory models of the prefrontal cortex (Renart et al. 2003) and gain modulation in the representation of object position in the visual cortex (Roudi and Treves 2008).

References

  • Amit, D. J. (1989) Modeling Brain Function: The World of Attractor Neural Networks. New York: Cambridge UP.
  • Battaglia, F.P., and Treves, A. (1998) Attractor neural networks storing multiple space representations: A model for hippocampal place fields. Physical Review E 58 (6): 7738-7753.
  • Ben-Yishai, R., Lev Bar-Or, R., & Sompolinsky, H. (1995). Theory of orientation tuning in visual cortex. Proceedings of the National Academy of Sciences of the United States of America 92: 3844–3848.
  • Burgess, N., Barry, C., and O'Keefe, J. (2007) An oscillatory interference model of grid cell firing. Hippocampus 17(9): 801-812.
  • Conklin, J. and C. Eliasmith (2005). An attractor network model of path integration in the rat. Journal of Computational Neuroscience. 18: 183-203.
  • Doboli, S., Minai, A. A., and Best, P. J. (2000) Latent attractors: A model for context-dependent place representations in the hippocampus. Neural Computation 12: 1009-1043.
  • Fuhs, M. C., and Touretzky, D. S. (2006) A spin glass model of path integration in rat medial entorhinal cortex. Journal of Neuroscience 26 (16): 4266-4276.
  • Georgopoulos, A. P., Taira, M., and Lukashin, A. (1993). Cognitive neurophysiology of the motor cortex. Science 260: 47–52.
  • Hasselmo, M. E., Giocomo, L. M., and Zilli, E. A. (2007) Grid cell firing may arise from interference of theta frequency membrane potential oscillations in single neurons. Hippocampus 17: 1252-1271.
  • Hertz, J., Krogh, A., and Palmer, R. G. (1991) Introduction to the Theory of Neural Computation. Redwood City, CA: Addison-Wesley.
  • Hoppensteadt, F. C., and Izhikevich, E. M. (1997) Weakly Connected Neural Networks. Applied Mathematical Sciences, vol. 126. New York: Springer.
  • Luo, L., and Flanagan, J. G. (2007) Development of continuous and discrete neural maps. Neuron 56 (2): 284-300.
  • Machens, C. K., and Brody, C. D. (2008) Design of continuous attractor networks with monotonic tuning using a symmetry principle. Neural Computation 20 (2): 452-485.
  • von der Malsburg, C. (1973). Self-organization of orientation sensitive cells in the striate cortex. Kybernetik 14: 85–100.
  • Markus, E. J., Qin, Y. L., Leonard, B., Skaggs, W. E., McNaughton, B. L., and Barnes, C. A. (1995) Interactions between location and task affect the spatial and directional firing of hippocampal neurons. Journal of Neuroscience 15 (11): 7079-7094.
  • McNaughton, B. L., Battaglia, F. P., Jensen, O., Moser, E. I., & Moser, M. B. (2006) Path integration and the neural basis of the "cognitive map". Nature Reviews Neuroscience 7 (8): 663-678.
  • Muller, B., Reinhardt, J., and Strickland, M. T. (1995) Neural Networks: An Introduction. Berlin: Springer.
  • Muller, R.U., Stead, M., and Pach, J. (1996) The hippocampus as a cognitive graph. Journal of General Physiology 107: 663-694.
  • O'Keefe, J., and Nadel, L. (1978) The Hippocampus as a Cognitive Map. Oxford: Clarendon Press.
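  • Renart, A., Song, P., and Wang, X.-J. (2003) Robust spatial working memory through homeostatic synaptic scaling in heterogeneous cortical networks. Neuron 38 (3): 473-485.
  • Roudi, Y., and Treves, A. (2008) Representing where along with what information in a model of a cortical patch. PLoS Computational Biology 4 (3): e1000012.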
  • Samsonovich, A. (1997) Attractor-Map Theory of the Hippocampal Representation of Space: Ph. D. Dissertation, Applied Mathematics Program, The University of Arizona: Tucson, AZ. 302 pages. UMI Dissertation Services: Ann Arbor, MI.
  • Samsonovich, A. V. (2007) Bringing consciousness to cognitive neuroscience: A computational perspective. Journal of Integrated Systems, Design & Process Science 11 (3): 15-26.
  • Samsonovich, A., and McNaughton, B. L. (1997) Path integration and cognitive mapping in a continuous attractor neural network model. Journal of Neuroscience 17 (15): 5900-5920.
  • Seung, H. S. (1998) Continuous attractors and oculomotor control. Neural Networks 11: 1253-1258.
  • Skaggs, W. E., Knierim, J. J., Kudrimoti, H. S., and McNaughton, B. L. (1995) A model of the neural basis of the rat's sense of direction. In: Tesauro, G., Touretzky, D., and Leen, T. (Eds.), Advances in Neural Information Processing Systems, pp. 130-180. Cambridge, MA: MIT Press.
  • Strogatz, S. H. (1994) Nonlinear Dynamics and Chaos. Reading, MA: Addison-Wesley.
  • Taube, J. S. (1998) Head direction cells and the neurophysiological basis for a sense of direction. Progress in Neurobiology 55 (3): 225-256.
  • Tsodyks, M. (1999) Attractor neural network models of spatial maps in hippocampus. Hippocampus 9 (4): 481-489.
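  • Tsodyks, M., and Sejnowski, T. (1995) Associative memory and hippocampal place cells. International Journal of Neural Systems 6 (Suppl.): 81-86.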
  • Witter, M. P., and Moser, E. I. (2006) Spatial representation and the architecture of the entorhinal cortex. Trends in Neurosciences 29 (12): 671-678.
  • Wu, S., and Amari, S.-I. (2005). Computing with continuous attractors: Stability and online aspects. Neural Computation 17: 2215–2239.
  • Wu, S. Hamaguchi, K., and Amari, S.-I. (2008) Dynamics and computation of continuous attractors. Neural Computation 20 (4): 994-1025.
  • Zhang K (1996) Representation of spatial orientation by the intrinsic dynamics of the head-direction cell ensemble: A theory. Journal of Neuroscience 16:2112-2126.

Recommended reading

  • Strogatz, S. H. (1994) Nonlinear Dynamics and Chaos. Reading, MA: Addison-Wesley.
  • Trappenberg, T. P. (2002) Fundamentals of Computational Neuroscience. New York: Oxford University Press.

See also

Attractor, Attractor Network, Basin of Attraction, Cognitive Map, Dynamical System, Grid Cells, Head Direction Cells, Neural Networks, Place Cells, Recurrent Neural Networks, Stability
