# Freeman K-set

Walter J. Freeman and Harry Erwin (2008), Scholarpedia, 3(2):3238. doi:10.4249/scholarpedia.3238, revision #91278

**Freeman K-sets** form a nested hierarchy of models of the dynamics
of neuron populations at the mesoscopic (intermediate) level of the
coordinated activity of cell assemblies of ~10^{4} neurons with
~10^{8} synapses that mediate between the microscopic activity
of small neural networks, and the macroscopic activity of the entire
brain. The topology of connections is modeled by networks of excitatory
and inhibitory populations of neurons; the dynamics is approximated by
piecewise linearization of nonlinear ordinary differential equations
(ODE).

## Historical background

K-sets are named to honor the biophysicist Aharon Katzir-Katchalsky
(1913-1972), who was deeply interested in the theory of dynamic
patterns, dissipative structures, and other nonequilibrium phenomena
(Schmitt, 1974). Katzir-Katchalsky was killed in the Lod Airport
massacre on May 30, 1972, while returning from a seminal conference
discussing his ideas on neurodynamics (Curran, 1974). His untimely
death removed an outstanding theoretician and moving force from the
field of neurodynamics at a point where there was potential for early
development of an understanding of the role of cell assemblies in
cognition. Only in recent years has the ground lost in this act of
violence been made up, even in part.

Aharon Katzir-Katchalsky was particularly interested in the application
of linear nonequilibrium thermodynamic reasoning to living systems, in
the steady states that occur in dissipative systems that are far from
equilibrium, and in the state transitions by which they emerge. He
defined a new approach to those systems, termed *network
thermodynamics* (Curran, 1974). That work reflected a specific
interest in how interacting cell assemblies could produce cognition.
Freeman’s use of K-sets to describe the neurodynamics of the mammalian
olfactory system is the most important *worked example* of these ideas.

## Introduction

The difficulty in doing neuroscience at the mesoscopic level is that the theoretical connection between the microscopic-level activity of neurons in small neural networks and the mesoscopic-scale activity of cell assemblies remains poorly understood. Katzir-Katchalsky suggested treating cell assemblies using network thermodynamics, which led to Freeman’s K-sets. These form a hierarchy for cell assemblies with the following elements:

- KO sets represent non-interactive collections of neurons with globally common inputs and outputs: excitatory in KOe sets and inhibitory in KOi sets. The KO set is the module for K-sets.

- KI sets are made of a pair of interacting KO sets, both either excitatory or inhibitory in positive feedback. The interaction of KOe sets gives excitatory bias; that of KOi sets sharpens input signals.

- KII sets are made of a KIe set interacting with a KIi set in negative feedback giving oscillations in the gamma and high beta range (20-80 Hz). Examples include the olfactory bulb and the prepyriform cortex.

- KIII sets are made up of multiple interacting KII sets. Examples include the olfactory system and the hippocampal system. These systems can learn representations and do match-mismatch processing exponentially fast by exploiting chaos (Erwin, 1994).

- KIV sets made up of interacting KIII sets are used to model navigation by the limbic system (Huntsberger, Tunstel and Kozma, 2006).

- KV sets are proposed to model the scale-free dynamics of neocortex operating on and above KIV sets in mammalian cognition.
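The nesting described above can be summarized in a toy containment sketch. The class and attribute names below are illustrative only, not part of Freeman's formalism, and the KIV/KV levels are omitted for brevity:

```python
from dataclasses import dataclass
from typing import List

# Toy sketch of K-set containment; names are illustrative only.

@dataclass
class KO:
    polarity: str                 # "e" excitatory or "i" inhibitory

@dataclass
class KI:
    members: List[KO]             # two KO sets coupled by positive feedback
    def __post_init__(self):
        assert len(self.members) == 2
        # KI sets pair KO sets of the SAME polarity
        assert self.members[0].polarity == self.members[1].polarity

@dataclass
class KII:
    excitatory: KI                # KIe set
    inhibitory: KI                # KIi set; negative feedback -> oscillations

@dataclass
class KIII:
    subsets: List[KII]            # multiple KII sets, multiple feedback loops

# a toy olfactory-bulb-like KII set
bulb = KII(excitatory=KI([KO("e"), KO("e")]),
           inhibitory=KI([KO("i"), KO("i")]))
```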

K-sets are used to model neural population dynamics with ODE to simulate mesoscopic K-fields. These are the local field potentials (LFP) and multi-spike activities that are observed with arrays of depth electrodes, and the ECoG (electrocorticograms) recorded from arrays of electrodes placed directly on the cortical surface. They are intermediate between the microscopic axonal and dendritic potentials of neurons observed with microelectrodes and the macroscopic brain patterns observed with EEG, MEG, and fMRI. Mesoscopic scales are 10 times the duration of the action potential (1 ms) and 100 times the diameter of the synapse (1 micron). The state variables in K-sets represent mesoscopic activity density at points in time and space on a 2-D cortical surface. The point processes defined by microscopic action potentials are represented in a continuum by a pulse density function, \(p(x,y,t)\ .\) The synaptic currents defined by intracellular recording are represented by a wave density function, \(v(x,y,t)\ .\) Fluctuations in LFP and ECoG reflect changes in summed dendritic current flowing across a fixed extracellular resistance, which is a shared current path for the cortical pyramidal cells (Freeman, 1975; 2004).

The aim of this modeling is to simulate the spatiotemporal patterns of ECoG shown to be correlated with behavior by solving ODE for specified initial conditions. K-set models can be initially calibrated to simulate the cortical impulse responses evoked by single electric shocks delivered along afferent axonal pathways to three-layered allocortex (Mountcastle, 1974). To establish an initial linear approximation, the intensity of stimulation is kept within the near-linear range of the impulse response (the averaged evoked potential, AEP, and the post-stimulus time histogram, PSTH); paired-shock testing serves to demonstrate superposition. This initial range is then extended to higher amplitudes by piecewise linear analysis, in which the basis functions for the linear response are fitted to a family of impulse responses by changing the parameters of the fitted equation as stimulus intensity is increased. Experimental analysis has shown that the equation describing the dynamic operations of cortical populations can be divided into three parts: a linear time-dependent transfer function, \(F(t)\ ;\) a nonlinear amplitude-dependent function, \(G(v)\ ;\) and a collection of space-dependent functions, \(H(x)\ .\) The resulting equation is:

\[\tag{1} v(x,t) = F(t)G(v)H(x) + I(t) \]

For AEP the single shock input \(I(t)\) is modeled as an impulse \(\delta(t)\ .\)
For recording at a point in space, \(H(x) = 1\ .\) \(G(v)\) is replaced
by a gain constant, \(k\ ,\) at each shock intensity. The impulse responses
recorded at each point in space are summed to get an AEP, which is
fitted with the wave density function, \(kF(t)\ .\) Evoked action potentials
from single neurons are summed over repeated trials and produce a PSTH
that is fitted with the pulse density function, \(p(x,y,t)\ .\) Each K-set has
its characteristic impulse response in the pulse and wave modes, which
is determined by its topology of connections (
Figure 1).

This method begins with the collection of a family of impulse responses by varying the stimulus intensity (control parameter), which changes the frequencies and decay rates of the AEP and PSTH. The AEP and PSTH are fitted with sums of linear basis functions (sines, cosines and exponentials), and the frequencies, \(j\omega\ ,\) and decay rates, \(\alpha\ ,\) are plotted as points in the complex plane ( Figure 2, A).
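This fitting step can be sketched numerically. In the toy example below the signal is synthetic, and the decay rate \(\alpha\) and frequency \(\omega\) are assumed known; in practice they are themselves searched to minimize the residual. For fixed \(\alpha\) and \(\omega\) the basis amplitudes enter linearly, so least squares recovers them directly:

```python
import numpy as np

# Sketch: fit a simulated impulse response (AEP-like waveform) with
# damped sine/cosine basis functions. alpha and omega are treated as
# known here; only the linear amplitudes are estimated.

t = np.linspace(0, 0.5, 500)            # 0.5 s at ~1 ms resolution
alpha, omega = 12.0, 2 * np.pi * 40.0   # decay 12 /s, 40 Hz carrier

rng = np.random.default_rng(0)
aep = 3.0 * np.exp(-alpha * t) * np.cos(omega * t + 0.4)  # synthetic AEP
aep += 0.01 * rng.standard_normal(t.size)                 # recording noise

# linear basis for fixed (alpha, omega)
basis = np.column_stack([np.exp(-alpha * t) * np.cos(omega * t),
                         np.exp(-alpha * t) * np.sin(omega * t)])
coef, *_ = np.linalg.lstsq(basis, aep, rcond=None)

amp = np.hypot(*coef)                   # recovered amplitude (~3.0)
phase = np.arctan2(-coef[1], coef[0])   # recovered phase (~0.4 rad)
```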

The second step is a description of how the groups of neurons interact by describing their dynamics with a linear ODE, for which the solution is \(F(t)\ .\) Interaction is defined as a forward gain given by the product of a gain constant at synapses (pulse to wave) and a coefficient derived from \(G(v)\ ,\) the nonlinear operator at the trigger zones (wave to pulse). The product is specified by a forward gain coefficient, \(k_{ee}\ ,\) \(k_{ei}\ ,\) \(k_{ie}\ ,\) or \(k_{ii}\ ,\) depending on the sign of transmission and reception. Modeling shows that the observed rate changes result from changes not in the membrane time constants but in the strengths of synapses and the amplitude-dependent nonlinearities when \(v(t)\) is converted to \(p(t)\) by \(G(v)\ .\) The two conversion operations are sequential and multiplicative, making it easy to combine them into forward gains and feedback gains that represent average interaction strengths in populations.

In the third step an ODE is constructed, using the open loop rate constants ( Figure 1, C) and the forward gains, and solved by use of the Laplace transform. The eigenvalues of the ODE are plotted for ranges of values of the gains ( Figure 2, B). The real and complex rate constants are plotted as points in the complex plane, and the changes in the roots are plotted as the root loci that are orthogonal to the gain contours. In this way the rate constants measured in AEP and PSTH define the strengths of interactions by which neural groups transform themselves into neural populations.

## KO: a non-interactive collection of neurons and the module for K-sets

A KO set models a lumped 2-D array of neurons, either excitatory (e) or inhibitory (i), with no functional inter-connections but with general common input and output connections for the neural population. The dendritic dynamics can be modeled by a linear ODE (Freeman, 1975; 2004) with the state variables, v and p, representing the magnitude of neural activity in the wave and pulse density mode. In the simplest case a 2nd order ODE suffices to approximate the essential features of the summed impulse responses of the dendrites contributing to the response to an electric shock given to an axonal tract afferent to an area of cortex ( Figure 2, A).

\[\tag{2} F(t) = \frac{ab}{b-a}\left[\exp (-a t) - \exp (-b t)\right]\]

where \(a\) is the rate of decay and \(b\) is the rate of rise of the
open-loop AEP. A 1st-order ODE fails to capture the stability properties
of interactive neural populations. The corresponding ODE is

\[\tag{3} d^2v(t)/dt^2 + (a + b)\, dv/dt + ab\, v(t) = k_{jk}\, ab\, \delta(t)\]

for which the Laplace transform is

\[\tag{4} F(s) = k_{jk}ab/[(s + a) (s + b)]\]
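A quick numerical cross-check of equation (4): the inverse transform of \(k_{jk}ab/[(s + a)(s + b)]\) is \(k_{jk}\,ab\,[\exp(-at) - \exp(-bt)]/(b-a)\ ,\) and direct integration of the second-order dynamics reproduces it. The rates below are illustrative, not fitted values:

```python
import numpy as np

# Cross-check of the open-loop KO impulse response implied by eq. (4).
a, b, k = 5.0, 40.0, 1.0       # decay rate, rise rate (1/s); gain
dt, T = 1e-5, 0.5
n = int(T / dt)

# integrate v'' + (a+b) v' + ab v = 0 with the impulse folded into
# the initial slope: v(0) = 0, v'(0) = k*ab
v, w = 0.0, k * a * b
trace = np.empty(n)
for i in range(n):             # explicit Euler with a small step
    trace[i] = v
    v, w = v + dt * w, w + dt * (-(a + b) * w - a * b * v)

t = np.arange(n) * dt
analytic = k * a * b * (np.exp(-a * t) - np.exp(-b * t)) / (b - a)
err = float(np.max(np.abs(trace - analytic)))   # integration error
```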

A KO set modeling the open-loop dynamics of a neural population has only
a zero point attractor, to which the activity returns after
perturbation. The response amplitude increases in proportion to
stimulus intensity, and responses to successive inputs are additive, so
that the operation of synapses converting incoming pulse density to
wave density can be represented simply by a forward gain constant. The
KOe and KOi sets modeling these microscopic properties of neural groups
support the emergence of mesoscopic populations at higher levels in the
hierarchy.

Because anesthetics suppress firing of action potentials, G(v) must be evaluated in the closed loop state by calculation of the probability of pulse firing conditional on wave amplitude (Freeman, 1975). This is described in the following section.

## KI: populations with mutual excitation or mutual inhibition

The simplest interesting form of mesoscopic interaction is mutual excitation among excitatory neurons (Freeman, 2000, Ch. 8, pp. 177-208), for example, the periglomerular cells of the input layer of the olfactory bulb. The PSTH shows the impulse response to have a rapid increase above the background, \(p_o\ ,\) followed by an exponential decay without overshoot (Figure 1, B), reflecting the lack of inhibition in the response. This decay rate increases with increased stimulus intensity and response amplitude (Figure 2, A). The interaction can be modeled by a positive feedback loop between two KOe sets. The root locus plot for the KIe set as a function of feedback gain, \(k_{ee}\) (Figure 2, B), shows changes in the configuration of poles and zeroes as the stimulus intensity is varied, which define the root loci given by the solutions to the closed loop equation for positive feedback using equation (4):

\[\tag{5} V(s) = F(s)/[1 - k_{ee} F^2 (s)]\]

Extrapolation of the root loci to threshold (zero response amplitude)
gives zero decay rate, which is shown by the symbol \(\Delta\) at the
origin of the complex plane in Figure 2, B.
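This root locus can be traced numerically. Substituting \(F(s)\) from equation (4) (with unit gain) into equation (5) gives the characteristic polynomial \((s+a)^2(s+b)^2 - k_{ee}(ab)^2 = 0\ ;\) at unity feedback gain one root reaches the origin. The rates below are illustrative:

```python
import numpy as np

# Root locus for the KIe positive-feedback loop of equation (5):
# closed-loop poles solve (s+a)^2 (s+b)^2 - k_ee*(ab)^2 = 0.
a, b = 5.0, 40.0
open_loop = np.polymul([1.0, a + b, a * b], [1.0, a + b, a * b])

for k_ee in (0.25, 0.5, 1.0):
    char = open_loop.copy()
    char[-1] -= k_ee * (a * b) ** 2       # gain enters the constant term
    roots = np.roots(char)
    dominant = roots[np.argmax(roots.real)]  # rightmost (slowest) pole
    # as k_ee rises toward 1, the dominant pole moves toward the origin
```

After the loop (at \(k_{ee}=1\)) the dominant pole sits at \(s = 0\ ,\) the pole at the origin marked by \(\Delta\) in Figure 2, B.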

\(G(v)\ ,\) the non-linear function giving the dependence of \(p(t)\) on \(v(t)\ ,\) can be estimated based on measurements at trigger zones in the closed loop state. It is evaluated by recording a long time series of ECoG and the pulse train of a single neuron simultaneously, then calculating the pulse probability at each time step conditional on ECoG amplitude. The pulse probability function is normalized by dividing \(p(t)\) by \(p_o\ ,\) the mean pulse rate. The resulting sigmoid curve of normalized conditional pulse probability is then fitted with the solution to a second order static nonlinear equation in normalized coordinates derived by generalizing the Hodgkin-Huxley equations that includes an expression for the reduction in pulse probability by the refractory period after each action potential (Freeman, 2000, Ch. 10, pp. 241-264). Since action potentials in the background state are uncorrelated, the refractory periods are distributed uniformly in time, and G(v) becomes a static nonlinearity in contrast to the time-dependent nonlinearities in the Hodgkin-Huxley equations. This permits the simplification of isolating \(F(t)\) from \(G(v)\) and \(H(x)\ .\)

The normalized conditional pulse probability is fitted with the asymmetric sigmoid curve:

\[\tag{6} p(t) = p_0[q(t) + 1]\]

where \(p_0\) is the mean firing rate, \(p_m\) is the
maximal rate, and \(q_m = p_m/p_0-1\ ,\) so that

\[\tag{7} q(t)=q_m(1-e^{-[e^{v(t)} - 1]/q_m})\]

The forward gain is given by the derivative of equation
(7);

\[\tag{8} \frac{dp}{dv} = p_0e^{{v(t)}-[e^{v(t)} - 1]/q_m}\]

and the asymmetry is measured where the second derivative is zero,

\[\tag{9} v_{max}=\ln q_m\]

An example (Figure 3) shows the fitted curve and its derivative, \(dp/dv\ ,\) the nonlinear, amplitude-dependent gain. The linearized forward gain, \(K_e^{0.5}\) or \(K_i^{0.5}\ ,\) in KIe or KIi sets at an operating point is calculated from the slope of the tangent to \(G(v)\) at that point. There are two values that produce unity feedback gain, \(K_e = K_i = 1\ ,\) where a steady state can prevail. For the KIe set (Figure 3, A) the population is in steady state at one point past the maximum slope of \(G(v)\ .\) When wave density increases, the feedback gain, \(K_e\ ,\) decreases, pulling activity back toward the steady state. This implies that the KIe set is locally stabilized by refractory periods without need for inhibition. The resulting pole at the origin of the complex plane, \(\Delta\) (Figure 2, B), represents a non-zero point attractor, which governs the steady-state excitatory output of the KIe set and provides the excitatory bias that is required for oscillations.
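Equations (6)-(9) can be checked numerically: the gain \(dp/dv\) of equation (8) peaks exactly at \(v_{max} = \ln q_m\ ,\) where the second derivative is zero. The values of \(p_0\) and \(q_m\) below are illustrative, not measured:

```python
import numpy as np

# Numeric check of the sigmoid nonlinearity, equations (6)-(9).
p0, q_m = 10.0, 5.0                   # mean rate (Hz); q_m = p_m/p0 - 1
v = np.linspace(-2.0, 4.0, 100001)    # normalized wave density axis

q = q_m * (1.0 - np.exp(-(np.exp(v) - 1.0) / q_m))   # equation (7)
p = p0 * (q + 1.0)                                   # equation (6)
gain = p0 * np.exp(v - (np.exp(v) - 1.0) / q_m)      # equation (8)

v_at_peak = v[np.argmax(gain)]        # should equal ln(q_m), eq. (9)
```

Note the asymmetry: the gain maximum lies above the resting point \(v = 0\ ,\) and \(p\) saturates at \(p_0(q_m + 1)\ .\)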

A KIi set operates by mutual inhibition, which is also represented by a positive feedback loop, with the difference that the feedforward KOi set decays exponentially from a peak to \(p_o\ ,\) while the feedback KOi set rises exponentially to \(p_o\) from a minimum below \(p_o\ .\) In a distributed KIi set this produces lateral inhibition around a central excited focus. The KIe sets model the self-stabilized interaction that sustains spontaneous background cortical activity, for which the power spectral density (PSD) approaches “brown noise” (\(1/f^2\)) in the awake state and “black noise” (\(1/f^3\)) in the slow-wave sleep state. The KIi set is required to provide the lateral inhibition that sustains textured patterns of spatial amplitude modulation (AM) in active states of cognition.

## KII: interactions among excitatory and inhibitory populations

The feedback interaction of densely connected excitatory and inhibitory populations can be modeled by a KII set. The frequencies and decay rates of impulse responses expressed in the root loci calculated with the KII set depend on the route of input to the olfactory bulb. The simplest case for negative feedback is represented by the interaction of a KOe set with a KOi set when all gains are equal, resulting in pole-zero cancellation:

\[\tag{10} V(s) = F(s)/[1 + k_{ei}k_{ie} F^2(s)]\]
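The poles of equation (10) satisfy \((s+a)^2(s+b)^2 + k_{ei}k_{ie}(ab)^2 = 0\ .\) With illustrative open-loop rates and loop gain, the dominant pole pair is complex and lightly damped, i.e. the impulse response is a damped oscillation at a gamma-range frequency:

```python
import numpy as np

# Closed-loop poles of the KII negative-feedback loop, equation (10).
a, b = 220.0, 720.0                   # illustrative rates (1/s)
K = 4.0                               # illustrative loop gain k_ei * k_ie

open_loop = np.polymul([1.0, a + b, a * b], [1.0, a + b, a * b])
char = open_loop.copy()
char[-1] += K * (a * b) ** 2          # negative feedback ADDS to constant

roots = np.roots(char)
dominant = roots[np.argmax(roots.real)]      # least-damped pole
freq_hz = abs(dominant.imag) / (2 * np.pi)   # oscillation frequency
```

For these values the dominant pair is stable (negative real part) with a frequency in the gamma band, consistent with the 20-80 Hz oscillations attributed to KII sets above.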

Orthodromic stimulation of axons in the olfactory nerve from sensory
receptor cells excites periglomerular neurons in the input layer of the
bulb, modeled by a KIe set ( Figure 2, A), producing an
excitatory bias. The root locus ( Figure 4, A, Mode 1e)
is nearly parallel to the real axis, because the frequency is invariant,
and the decay rate increases with increased amplitude, as does the PSTH
of the periglomerular cells ( Figure 2, A). Antidromic
stimulation of output axons from the bulb that monosynaptically excite
bulbar inhibitory neurons but not periglomerular neurons produces an
inhibitory bias. The amplitude of the AEP to single shock stimulation of
the olfactory tract increases with input intensity, but the decay rate
is unchanged, and the frequency decreases ( Figure 4, A,
Mode 1i) because the excitatory bias is lacking.

The effect of increasing intensity of antidromic stimuli to the bulb is to increase the AEP initial amplitude above the background activity. The same effect results by fixing the stimulus intensity and reducing the background activity with anesthetics. The decrease in frequency with invariant decay rate ( Figure 4, A, Mode 1i) gives root loci that cross into the right half of the complex plane, predicting sustained low frequency oscillations. These “spindles” are seen under barbiturate anesthesia and indicate the existence of a limit cycle attractor in the beta range. Likewise the extrapolation to zero decay rate at zero amplitude for Mode 1e indicates the existence of a complex conjugate pair of poles on the imaginary axis, producing a limit cycle attractor governing oscillations in the gamma range.

The root loci in Figure 4, A designated Mode 1e and
Mode 1i are seen only when the impulse intensity drives the firing of
neurons outside the self-stabilized background amplitude range, either
into refractory periods at high rates or below thresholds at low rates.
Within the self-stabilized ECoG range where superposition holds, the
PSTH do not go to zero. The AEPs on repeated trials at fixed stimulus
intensity vary in amplitude and inversely in decay rate, which is the
reverse of Mode 1e. The root loci in Mode 2 cross into the right half of
the complex plane with increased response amplitude. Once the roots are
in the right half, the increasing envelope would imply further amplitude
increase, but the root loci then converge back to a point on the
imaginary axis. These Mode 2 root loci indicate that the limit cycle
oscillation for KII sets results from high values of \(K_n\) (
Figure 4, B), which in turn are provided by the asymmetry in
the sigmoid curve ( Figure 3, B, Mode 2). This asymmetry
is based in the tendency of neurons to increase their probability of
firing exponentially as they are brought closer to threshold (equation
(7)); it provides the mechanism for destabilization of
sensory cortex by excitatory input, leading to the emergence of spatial
patterns of amplitude modulation of the carrier gamma waves that are
classifiable with respect to conditioned stimuli. Changes in AEP and
PSTH with learning and with pharmacological manipulations are likewise
simulated with root locus techniques.

## More complex systems

### KIII: The olfactory system, learning, and pattern classification

The characteristic waveforms of ECoG in the olfactory system are aperiodic (“chaotic”) with power spectral densities that generally conform to brown noise (\(1/f^2\)) or black noise (\(1/f^3\)) depending on the lengths of the refractory periods. Aperiodic activity is simulated by interconnecting the KIe set and three KII sets with incommensurate characteristic frequencies in a system with multiple feedback loops and long delays (Kozma and Freeman, 2001).
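The "brown noise" spectral signature mentioned here can be illustrated with a simple surrogate; this is a statistical illustration only, not a simulation of a KIII set. A random walk (Brownian noise) has a power spectral density falling as \(1/f^2\ ,\) i.e. a slope near \(-2\) on log-log axes:

```python
import numpy as np

# Illustration: the PSD of Brownian ("brown") noise falls as 1/f^2.
rng = np.random.default_rng(1)
n = 2 ** 16
brown = np.cumsum(rng.standard_normal(n))   # random walk

psd = np.abs(np.fft.rfft(brown)) ** 2       # raw periodogram
f = np.fft.rfftfreq(n, d=1.0)

sel = (f > 1e-3) & (f < 1e-1)               # mid-band, away from edges
slope = np.polyfit(np.log(f[sel]), np.log(psd[sel]), 1)[0]  # ~ -2
```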

Studies of the mesoscopic dynamics of learning in cortex must address
the spatial patterns of connectivity in KII sets, as the effects of
learning appear in spatial patterns of amplitude modulation of
oscillations in the beta and gamma ranges (Freeman, 2000, Chapters
11-13, pp. 265-350). Each ECoG pattern and its spatial AM pattern can be
modeled as the output of a distributed KII set embedded in a KIII set
(Kozma and Freeman, 2001). The amplitudes for each time point or frame
form a column vector that specifies a point in a high-dimensional
pattern space. Similar AM patterns form clusters of points;
classification of AM patterns can be done by using the Euclidean
distance from each AM point to the nearest cluster. The three required
types of synaptic changes with learning are selective increase in
\(K_{ee}\) with Hebbian association, decrease in \(K_{ee}\) and
\(K_{ei}\) with habituation, and normalization by
\(\bar{K}_{ij} = K_{ij} / K_{mean},\ i, j = 1, \dots, m\ .\) These
changes can be observed by training subjects with classical conditioning
to respond to the electric stimulus as a conditioned stimulus (CS) and
measuring the AEP as a conditioned response (CR) by optimizing the
least-squares fit of \(v(t)\) to the AEP (Freeman, 1975; 2004).
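The nearest-cluster classification step can be sketched as follows; the channel count, class labels, and noise level are invented for the sketch, and the "AM patterns" are synthetic vectors rather than measured ECoG frames:

```python
import numpy as np

# Sketch: classify AM-pattern vectors by Euclidean distance to the
# nearest cluster centroid. Labels and data are synthetic.
rng = np.random.default_rng(2)
n_ch = 64                                      # hypothetical electrode count
proto = {"odor_A": rng.uniform(0, 1, n_ch),    # hypothetical class prototypes
         "odor_B": rng.uniform(0, 1, n_ch)}

# training frames: prototype plus trial-to-trial variability
train = {k: v + 0.05 * rng.standard_normal((20, n_ch))
         for k, v in proto.items()}
centroids = {k: frames.mean(axis=0) for k, frames in train.items()}

def classify(frame):
    # label = centroid at the smallest Euclidean distance
    return min(centroids, key=lambda k: np.linalg.norm(frame - centroids[k]))

test_frame = proto["odor_B"] + 0.05 * rng.standard_normal(n_ch)
label = classify(test_frame)
```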

The advantages of KIII pattern classifiers over other artificial neural networks are the small number of training examples needed, the convergence to an attractor in a single step rather than by tree search or gradient descent, and geometric rather than linear increase in the number of classes with the number of nodes (e.g., Shimoide and Freeman, 1995; Kozma and Freeman, 2002; Li G et al., 2005; Li X et al., 2006). The disadvantage is the length of computational time required to solve the ODE numerically. Substantial progress has been made with simulations in VLSI (Principe et al., 2001; Tavares et al., 2007) at the KII level. Extension to KIII and KIV may be facilitated by development of random graph theory and neuropercolation to replace ODE (Kozma et al., 2004).

### KIV: The limbic system and navigation

Each cerebral hemisphere of the primitive vertebrate forebrain comprises three areas of allocortex (no neocortex): anterolateral sensory, which is dominated by olfaction but includes all other modalities entering via the thalamus; posterolateral motor, which includes the basal ganglia; and medial associational, which includes the septohippocampal system (Herrick, 1948). These three parts make up the limbic system, for which one of the key functions is navigation through the environment. This can be simulated by interconnecting three KIII sets to simulate the dynamics of the exteroceptive systems, the interoceptive systems, and the hippocampal complex (Kozma, Freeman and Érdi, 2003; Huntsberger, Tunstel and Kozma, 2006).

### KV: The neocortex and cognition

In each hemisphere the six-layered neocortex comprises a continuous sheet of neuropil that overarches and connects with all parts of the limbic system, and that embeds, with interconnections, multiple primary sensory and motor cortices. Experimental analysis and modeling of neocortical ECoG have shown that the dynamics of the visual, auditory and somatic neocortices are compatible with allocortical dynamics in the essential processes of forming and transmitting AM patterns by state transitions. The carrier waves of the AM patterns modeled are broad-spectrum oscillations in the beta and gamma ranges; the frames repeat aperiodically at rates in the theta and alpha ranges. K-sets offer a platform for analyses of the unifying actions of neocortex in the creation and control of intentional and cognitive behaviors (Freeman, 2007).

## References

- Curran P. F. (1974) Aharon Katzir-Katchalsky. Biophysical Journal 15(7): i-iv.
- Erwin H (1994) The application of 'Katchalsky networks' to radar pattern recognition. In: Origins: Brain and Self-Organization, Pribram K (ed.) INNS Press and Lawrence Erlbaum Associates, pp. 546-555.
- Freeman WJ (1975) Mass Action in the Nervous System. New York: Academic Press.
- Freeman WJ (2000) Neurodynamics. An Exploration of Mesoscopic Brain Dynamics. London UK: Springer.
- Freeman WJ (2007) Proposed cortical "shutter" mechanism in cinematographic perception. In: Neurodynamics of Cognition and Consciousness, Perlovsky L, Kozma R (eds.) Heidelberg: Springer Verlag, pp. 11-38.
- Herrick CJ (1948) The Brain of the Tiger Salamander. Chicago IL: Univ Chicago Press.
- Huntsberger T, Tunstel E, Kozma R (2006) Onboard learning strategies for planetary surface rovers, Chapter 20 in: Intelligence for Space Robotics, A. Howard, E. Tunstel (eds). TCI Press, San Antonio, TX, pp. 403-422.
- Kozma R, Freeman WJ (2001) Chaotic Resonance: Methods and applications for robust classification of noisy and variable patterns. Int. J. Bifurc. Chaos, 11(6): 2307-2322.
- Kozma R, Freeman WJ (2002) Classification of EEG patterns using nonlinear dynamics and identifying chaotic phase transitions. Neurocomputing 44: 1107-1112.
- Kozma R, Freeman WJ (2003) Basic principles of the KIV model and its application to the navigation problem. J Integrative Neurosci 2: 125-145.
- Kozma R, Freeman WJ, Érdi P (2003) The KIV model – nonlinear spatio-temporal dynamics of the primordial vertebrate forebrain. Neurocomputing 52: 819-826.
- Kozma R, Puljic M, Balister P, Bollobas B, Freeman WJ (2004) Neuropercolation: A random cellular automata approach to spatio-temporal neurodynamics. In: Lecture Notes in Computer Science. http://repositories.cdlib.org/postprints/1013
- Li G, Lou Z, Wang L, Li X, Freeman WJ (2005) Application of chaotic neural model based on olfactory system on pattern recognition, pp. 378-381 in Wang L, Chen K, Ong YS (eds.) Lecture Notes in Computer Science, Berlin: Springer-Verlag.
- Li X, Li G, Wang L, Freeman WJ (2006) A study on a bionic pattern classifier based on olfactory neural system. Intern J Bifurc Chaos 16(8): 2425-2434.
- Mountcastle VB (ed.) (1974) Medical Physiology, 13th ed. St Louis MO: C V Mosby, p. 232.
- Principe, J. C., Tavares, V. G., Harris, J. G. & Freeman, W. J. (2001) Design and implementation of a biologically realistic olfactory cortex in analog VLSI. Proc. IEEE 89, 1030-1051.
- Shimoide K, Freeman WJ (1995) Dynamic neural network derived from the olfactory system with examples of applications. Institute of Electronics, Information and Communication Engineers (IEICE) Transactions on Fundamentals of Electronics, Communications, and Computer Sciences. E78A: 869-884.
- Schmitt FO (1974) Foreword to: Katchalsky A, Rowland V, Blumenthal R (1974) Dynamic Patterns of Brain Cell Assemblies. MIT Press.
- Tavares V, Tabarce S, Principe J, Oliveira P (2007) Freeman olfactory cortex model: A multiplexed KII network implementation. Analog Integrated Circuits and Signal Processing 50(3): 251-259.

**Internal references**

- John W. Milnor (2006) Attractor. Scholarpedia, 1(11):1815.

- Peter Redgrave (2007) Basal ganglia. Scholarpedia, 2(6):1825.

- Valentino Braitenberg (2007) Brain. Scholarpedia, 2(11):2918.

- Nestor A. Schmajuk (2008) Classical conditioning. Scholarpedia, 3(3):2316.

- Yuri A. Kuznetsov (2007) Conjugate maps. Scholarpedia, 2(12):5420.

- James Meiss (2007) Dynamical systems. Scholarpedia, 2(2):1629.

- Paul L. Nunez and Ramesh Srinivasan (2007) Electroencephalogram. Scholarpedia, 2(2):1348.

- Eugene M. Izhikevich (2007) Equilibrium. Scholarpedia, 2(10):2014.

- Seiji Ogawa and Yul-Wan Sung (2007) Functional magnetic resonance imaging. Scholarpedia, 2(10):3105.

- Peter Jonas and Gyorgy Buzsaki (2007) Neural inhibition. Scholarpedia, 2(9):3286.

- Robert Kozma (2007) Neuropercolation. Scholarpedia, 2(8):1360.

- Jeff Moehlis, Kresimir Josic, Eric T. Shea-Brown (2006) Periodic orbit. Scholarpedia, 1(7):1358.

- Walter J. Freeman (2007) Scale-free neocortical dynamics. Scholarpedia, 2(2):1357.

- Cesar A. Hidalgo R. and Albert-Laszlo Barabasi (2008) Scale-free networks. Scholarpedia, 3(1):1716.

- Philip Holmes and Eric T. Shea-Brown (2006) Stability. Scholarpedia, 1(10):1838.

- S. Murray Sherman (2006) Thalamus. Scholarpedia, 1(9):1583.

## Recommended reading

- Freeman WJ (1975) Mass Action in the Nervous System. New York: Academic Press.
- Freeman WJ (2001) How Brains Make Up Their Minds. Columbia University Press. ISBN 0-231-12008-7
- Freeman WJ (2000) Neurodynamics. An Exploration of Mesoscopic Brain Dynamics. London UK: Springer. ISBN 1852336161
- Katchalsky, A., Rowland, V., and R. Blumenthal. (1974) Dynamic Patterns of Brain Cell Assemblies. MIT Press.