Computational Correlates of Consciousness

The expression “Computational Correlate of Consciousness” was first used by Mathis and Mozer (1995), who, taking a computational approach to the problem of consciousness, asked “What conditions must a mental representation satisfy in order for it to reach consciousness? What are the computational consequences of a representation reaching consciousness? Does a conscious state affect processing differently than an unconscious state? What is the computational utility of consciousness?” (p. 11). In contrast to current efforts to identify the “neural correlates of consciousness”, this perspective on the problem of consciousness is focused more specifically on identifying the computational principles that differentiate between information processing with and without consciousness, rather than on identifying and localizing their neural underpinnings. Of course, any putative computational principle that differentiates between information processing with and without consciousness will necessarily be implemented by specific neural mechanisms occurring in the brain. But the point is that by analyzing the relationships between neural and mental states from a point of view that amounts neither to hardcore neuroscience nor to pure phenomenology, one may hope to identify bridging principles that better characterize how particular neural states are associated with conscious processing. A similar perspective was recently proposed by Seth (2009), who advocates seeking "explanatory correlates" of consciousness, defined as "neural processes that not only correlate with, but also account for fundamental properties of conscious experience".

Such principles can take the form of abstract characterizations of types of information processing in our cognitive system (e.g., holistic vs. analytical processing, or controlled vs. automatic processing), or be expressed in terms of high-level properties of neural processing (e.g., recurrent vs. feedforward processing). Further, such characterizations can concern either processes or the representations that are manipulated by such processes.

This yields an organization of computational theories of consciousness (Figure 1; see A. P. Atkinson, Thomas, & Cleeremans, 2000) along two dimensions: (1) a process vs. vehicle dimension, which contrasts theories that characterize consciousness in terms of specific processes operating over representations with theories that characterize consciousness in terms of intrinsic properties of mental or neural representations, and (2) a specialized vs. non-specialized dimension, which contrasts theories that posit the existence of specific information processing systems dedicated to consciousness with theories for which consciousness can be associated with any information processing system, as long as this system has the relevant properties.

Figure 1: Graphical representation of extant computational theories of consciousness organized along two dimensions. See text for further details.

Specialized vehicle theories thus assume that consciousness depends on the properties of representations occurring within a specialized system in the brain. An example of such an account is Atkinson and Shiffrin’s model of short-term memory (R. C. Atkinson & Shiffrin, 1971), which specifically assumes that representations contained in the short-term memory store (a specialized system) only become conscious if they are sufficiently strong (a property of representations).

Specialized process theories assume that consciousness arises from specific computations that occur in a dedicated mechanism, as in Schacter’s CAS (Conscious Awareness System) model (Schacter, 1989). Schacter’s model indeed assumes that the CAS’s main function is to integrate inputs from various domain-specific modules and to make this information available to executive systems. It is therefore a specialized model in that it assumes that there exist specific regions of the brain whose function is to make their contents available to conscious awareness. It is a process model to the extent that any representation that enters the CAS will be a conscious representation in virtue of the processes that manipulate these representations, and not in virtue of properties of those representations themselves. More recent computational models of consciousness also fall into this category, most notably Dehaene and colleagues’ neural workspace model (Dehaene, Kerszberg, & Changeux, 1998) and Crick and Koch’s (2003) framework, both of which assume, albeit somewhat differently, that the emergence of consciousness depends on the occurrence of specific processes in specialized systems (e.g., specific long-distance cortico-cortical connectivity).

Non-specialized vehicle theories include any model that posits that availability to consciousness only depends on properties of representations, regardless of where in the brain these representations exist. O’Brien and Opie’s “connectionist theory of phenomenal experience” (O'Brien & Opie, 1999) is the prototypical example of this category, to the extent that it specifically assumes that any stable neural representation will both be causally efficacious and form part of the contents of phenomenal experience. Mathis and Mozer (1995) likewise propose to associate consciousness with stable states in neural networks. Zeki and Bartels’ (1998) notion of “micro-consciousness” is also an example of this type of perspective.

Non-specialized process theories, finally, are theories in which it is assumed that representations become conscious whenever they are engaged by certain specific processes, regardless of where these representations exist in the brain. Many recent proposals fall into this category. Examples include Tononi and Edelman’s (1998) “dynamic core” model; Crick and Koch’s (1990) idea that synchronous firing constitutes the primary mechanism through which disparate representations become integrated as part of a unified conscious experience; and Grossberg’s (1999) characterization of consciousness as involving processes of “adaptive resonance” through which representations that simultaneously receive bottom-up and top-down activation become conscious because of their stability and strength.

Cutting across this classification, several computational principles through which to distinguish between conscious and non-conscious processing have now been proposed:

Stability

Mathis and Mozer (1995), as well as philosophers O’Brien and Opie (1999), have proposed that stability of activation is a computational correlate of consciousness. The claim is thus that stable representations, that is, representations that continue to exist beyond some temporal interval, form the contents of consciousness, wherever they occur in the brain. According to this perspective, consciousness does not depend on specialized systems. Rather, any module that produces stable representations can contribute to the contents of consciousness. Representations acquire stability as a result of relaxation processes as they occur in dynamical systems. An interactive network, for instance, will tend to “settle” in one of a limited number of stable, “attractor” states (see *Attractor Networks). In O’Brien and Opie’s theory, such states—stable activation patterns in a connectionist network—constitute explicit representations, the ensemble of which forms the contents of phenomenal consciousness at any point in time. Further, stable representations, because they persist in time, will tend to exert more influence on other modules than shorter-lived representations. As O’Brien and Opie put it, “Stability begets stability” (p. 48): Stable states in one network will promote the emergence of stable states in other networks, hence allowing a “winning coalition” of stable states to form. The notion of stability may be further expanded through dynamical systems theory to "metastability", which refers to the fact that complex systems often find themselves visiting the same set of unstable states, thus achieving a form of stability outside true equilibrium states (see Kelso et al., 1988; Varela et al., 2001). Metastability offers a natural account of the fact that our conscious experiences often seem to exhibit a sequential character, whereby one moves from one fleeting state of awareness to another in a relatively stable manner.
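
The settling process just described is easy to demonstrate concretely. The sketch below (my illustration of the general idea, not a model drawn from any of the cited papers; the patterns and network size are arbitrary) implements a small Hopfield-style attractor network in Python: started from a degraded input, the network relaxes to a stable activation pattern, the kind of state that would count as an explicit representation on O'Brien and Opie's account.

    import numpy as np

    rng = np.random.default_rng(0)

    # Store two orthogonal bipolar (+1/-1) patterns with the Hebbian outer-product rule.
    patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                         [1, 1, -1, -1, 1, 1, -1, -1]])
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)                # no self-connections

    # Start from a degraded version of the first pattern (two bits flipped).
    state = patterns[0].copy()
    state[[1, 4]] *= -1

    # Asynchronous updates: the network relaxes toward a stable attractor state.
    previous = None
    while previous is None or not np.array_equal(state, previous):
        previous = state.copy()
        for i in rng.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1

    print("Settled (stable) state:", state)
    print("Recovered pattern 0:", bool(np.array_equal(state, patterns[0])))

Once settled, the state persists in the absence of new input, which is precisely the temporal stability that the proposal associates with availability to consciousness.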

Strength and "fame in the brain"

Dennett (2001), along with others, has put forward the closely related proposal that consciousness amounts to “fame in the brain”, that is, that the contents of phenomenal experience consist of just those states that have “won the competition”, and hence have achieved not only stability (as in Mathis and Mozer’s proposal), but also strength. Strength, in this context, could refer to the number of neurons involved in the representation relative to the number of neurons involved in competing representations, or to the fact that a self-sustaining coalition of neurons has formed and inhibits other competing coalitions. Dennett’s proposal, like O’Brien and Opie’s, remains silent on the reason why such states are conscious states. Conscious experience, in this perspective, requires no further explanation; it merely consists of the fact that a particular representation has come to dominate processing at some point in time, so making its contents conscious.
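
The competitive dynamics implied by "fame in the brain" can be sketched as follows (a toy of my own devising, not Dennett's; the gain and inhibition values are arbitrary assumptions): a handful of candidate coalitions self-excite and mutually inhibit until a single winner comes to dominate processing.

    import numpy as np

    rng = np.random.default_rng(1)
    activity = rng.uniform(0.4, 0.6, 5)   # five coalitions with near-equal initial support

    SELF_EXCITATION = 1.4                 # assumed gain of within-coalition recurrence
    INHIBITION = 0.3                      # assumed strength of between-coalition inhibition

    for _ in range(50):
        # Each coalition is boosted by its own activity and suppressed by all the others.
        drive = SELF_EXCITATION * activity - INHIBITION * (activity.sum() - activity)
        activity = np.clip(drive, 0.0, 1.0)

    # Typically one coalition ends at ceiling while the rest are silenced.
    print("Final activities:", activity.round(3))
    print("'Famous' coalition:", int(activity.argmax()))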

Reentrant processing and adaptive resonance

Several authors have independently proposed that “reentrant”, or “recurrent”, processing (see *Reentrant Processing) constitutes a computational correlate of consciousness. Neural networks in the brain are massively recurrent, with “downstream” neurons connecting back to the “upstream” neurons from which they receive connections. Recurrent networks have very different computational properties from purely “feedforward” networks. In particular, recurrent networks have internal dynamics and can thus settle onto particular attractor states independently of the input, whereas feedforward networks only become active when their preferred inputs are present. Lamme (2006) has argued that it is only the processing that occurs in such reentrant pathways that is associated with conscious experience. The representations that arise in purely “feedforward” pathways are thus never associated with conscious experience. It is easy to see how this principle is closely related to both stability and strength: Recurrence is necessary to achieve either in a dynamical neural network. It is indeed in virtue of the existence of recurrent connectivity that neural patterns of activity can gain temporal stability, for in a purely feedforward pathway, any pattern of activity will tend to fade away as soon as the input vanishes. Likewise, it is again in virtue of recurrent connectivity that an ensemble of interconnected neurons can “settle” in a state that represents the most likely (the “strongest”) interpretation of the input. Grossberg (1999), in earlier work, had proposed similar ideas in the context of his Adaptive Resonance Theory (ART).
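
The computational contrast between feedforward and recurrent processing can be made concrete with a single unit (a minimal sketch; the tanh nonlinearity and the weight values are assumptions of mine, not taken from Lamme or Grossberg). Without feedback, activity vanishes as soon as the stimulus does; with sufficiently strong feedback, the unit settles into a self-sustaining attractor:

    import numpy as np

    def simulate(feedback, steps=15, input_off_at=5):
        # Drive a unit with a stimulus for a few steps, then remove the stimulus.
        activity, trace = 0.0, []
        for t in range(steps):
            stimulus = 1.0 if t < input_off_at else 0.0
            # The unit responds to the current stimulus plus, optionally, its own output.
            activity = np.tanh(stimulus + feedback * activity)
            trace.append(round(float(activity), 2))
        return trace

    print("Feedforward (no feedback):", simulate(feedback=0.0))   # activity dies with the input
    print("Recurrent (with feedback):", simulate(feedback=1.5))   # activity persists after input offset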

Synchrony and gamma oscillations

Though they later revised their position (Crick and Koch, 2003), Crick and Koch (1990) suggested that synchronized gamma oscillations, that is, fast, synchronized firing in neural assemblies in response to visual input, for instance, constitute a neural correlate of consciousness. Computationally, synchronous firing constitutes one way of addressing the so-called “binding problem”, since it makes it possible to bind together the activity of distributed assemblies into functionally coherent sets through temporal correlation (Engel & Singer, 2001; see also *gamma oscillations). Thus, the different features of a single object presented among many, for instance, can be selected out of multiple possible alternative bindings in virtue of the fact that the neurons that specifically represent the features of that object fire synchronously. Synchronous firing has further computational consequences, among which is the fact that precisely timed firings have a larger impact on post-synaptic neurons. Such consequences, that is, the fact that synchrony enables both selection and amplification, constitute an alternative or complementary mechanism to strength and stability: All enable the emergence of specific global states in the vastly interconnected dynamical system that is the brain.
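
Binding by temporal correlation can be illustrated with Kuramoto-style phase oscillators (a deliberately simplified sketch; the coupling values and group sizes are arbitrary assumptions). Units coding the features of the same object are coupled and fall into synchrony, while the two objects' assemblies remain unlocked with respect to one another:

    import numpy as np

    rng = np.random.default_rng(2)
    n = 6
    phases = rng.uniform(0, 2 * np.pi, n)   # random initial phases
    freqs = rng.normal(1.0, 0.05, n)        # similar intrinsic frequencies

    # Coupling: units 0-2 code one object's features, units 3-5 another's.
    K = np.zeros((n, n))
    K[:3, :3] = 2.0
    K[3:, 3:] = 2.0
    np.fill_diagonal(K, 0.0)

    dt = 0.05
    for _ in range(2000):
        # Kuramoto update: each unit is pulled toward the phases of its partners.
        diffs = np.sin(phases[None, :] - phases[:, None])   # diffs[i, j] = sin(theta_j - theta_i)
        phases += dt * (freqs + (K * diffs).sum(axis=1) / n)

    def coherence(idx):
        return abs(np.exp(1j * phases[idx]).mean())         # 1.0 = perfect synchrony

    print("Coherence within object A:", round(coherence(slice(0, 3)), 3))   # near 1
    print("Coherence within object B:", round(coherence(slice(3, 6)), 3))   # near 1
    # The two assemblies are each internally synchronized but not locked to one
    # another, so each object's features are bound without being confounded.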

Global availability

It is commonly assumed that conscious representations are available globally in a manner that unconscious representations are not. Bernard Baars (1988), and later Stanislas Dehaene, have proposed that conscious processing engages a network of interconnected high-level processors or modules dubbed the “neuronal global workspace”, for access to which unconscious processors continuously compete. The core hypothesis of Global Workspace Theory (see *Neuronal Workspace and *Global Workspace entries) is that one is conscious of those representations that form the contents of the global workspace at some point in time. Activity in the global workspace thus “ignites” a large network of cortical processors interconnected by long-range cortico-cortical connections. Patterns of activity in this large-scale flexible network can in turn temporarily amplify (a process dubbed “dynamic mobilization”) information represented in other cortical areas and subsequently broadcast these contents to yet further processors, thus making the information “globally available” in a way that would be impossible without involvement of the workspace. Global workspace theory thus builds on all the computational principles identified so far: stability, strength, recurrent processing, and synchrony. The theory specifically focuses on the main computational consequence of the momentary existence of stable, strong representations made possible by recurrent connectivity, namely that such representations (and only such representations) can then bias and influence processing globally, thus implementing a form of global constraint satisfaction (Maia & Cleeremans, 2005).
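
The basic workspace logic lends itself to a compact sketch (a toy rendering of the idea, not Dehaene and colleagues' actual simulations; the ignition threshold and module names are arbitrary assumptions): specialized processors compete, the strongest content above threshold "ignites" the workspace, and that content alone is broadcast back to every processor.

    import numpy as np

    rng = np.random.default_rng(3)

    IGNITION_THRESHOLD = 0.6   # assumed: weaker contents never enter the workspace

    class Processor:
        # A specialized module holding one content with some activation strength.
        def __init__(self, name, activation):
            self.name, self.activation, self.bias = name, activation, None

        def receive_broadcast(self, content):
            self.bias = content   # broadcast content biases subsequent local processing

    processors = [Processor(f"module-{i}", rng.uniform(0.0, 1.0)) for i in range(4)]

    # Competition for access: only the strongest content above threshold ignites.
    winner = max(processors, key=lambda p: p.activation)
    if winner.activation >= IGNITION_THRESHOLD:
        for p in processors:   # global availability: every module receives the content
            p.receive_broadcast(winner.name)
        print(f"{winner.name} ignited the workspace; its content is globally available")
    else:
        print("No ignition: all contents remain local (and, on the theory, unconscious)")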

Information integration and differentiation

Tononi (see *Integrated Information Theory) has proposed that conscious states, from a computational point of view, are characterized by the fact that they are both highly integrated and highly differentiated. Integration refers to the fact that conscious states are states in which contents are fundamentally linked to each other and hence unified: One cannot perceive shape independently from color, for instance. Differentiation refers to the fact that each conscious state is one among many possible states; for each conscious state, there is an almost infinite set of alternative possibilities that are ruled out. Thus, only systems capable both of integrating and of representing a wide array of distinct states are capable of consciousness. Based on this hypothesis, one can thus analyze, from a computational point of view, what kinds of systems are capable of such integrated and differentiated representation. In this light, Tononi and colleagues have proposed several measures aimed at indexing a neural network's capacity for simultaneous integration and differentiation of information. One such measure, simply called "neural complexity", takes high values whenever subsets of a system can instantiate many different states in such a way that each influences the rest of the system. A more recent measure, phi, indexes a system's capacity to adopt different, causally efficacious states (see Seth et al., 2008, for a review).
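
A toy calculation conveys the intuition behind such measures (this is far simpler than Tononi's actual phi; here differentiation is approximated by the entropy of the whole system and integration by the mutual information between its two halves, both estimated from samples). Coupling the halves buys integration at the price of some differentiation:

    import numpy as np
    from collections import Counter

    rng = np.random.default_rng(4)

    def entropy(samples):
        # Empirical entropy (in bits) of a sample of discrete states.
        counts = Counter(map(tuple, samples))
        p = np.array(list(counts.values())) / len(samples)
        return -(p * np.log2(p)).sum()

    def analyze(coupled, n_samples=20000):
        # Four binary units; if coupled, units 2-3 copy units 0-1 with 10% errors.
        a = rng.integers(0, 2, (n_samples, 2))
        if coupled:
            noise = rng.random((n_samples, 2)) < 0.1
            b = np.where(noise, 1 - a, a)
        else:
            b = rng.integers(0, 2, (n_samples, 2))
        whole = np.hstack([a, b])
        integration = entropy(a) + entropy(b) - entropy(whole)   # mutual information
        return entropy(whole), integration

    for coupled in (False, True):
        H, I = analyze(coupled)
        print(f"coupled={coupled}: differentiation H = {H:.2f} bits, integration I = {I:.2f} bits")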

Metarepresentation and higher-order thoughts

The different computational correlates described so far have all involved sub-personal properties of information processing systems. In contrast, David Rosenthal has offered a completely different view of the putative mechanisms associated with consciousness, one that is pitched at the personal level. According to Rosenthal's (2006) “Higher-Order Thought” (HOT) theory, at any given time a representation is conscious to the extent that one has a thought that one is conscious of that representation. In other words, a representation is a conscious representation when there is a further, higher-order representation that targets it. Mere “fame in the brain” is therefore neither sufficient nor necessary to make a representation conscious in this perspective; what is needed instead is the occurrence of meta-representations that redescribe lower-level representations in specific ways. Three recent theories of consciousness have espoused this view to different degrees. First, in Dienes and Perner's (1999) framework, dubbed "A theory of implicit and explicit knowledge", conscious knowledge is equated with explicit knowledge, defined in turn as knowledge that represents not only the existence of a particular fact, but also one's own attitude of knowing that fact. Maximally implicit knowledge, on the other hand, is knowledge that merely represents the occurrence of a particular state of affairs, without any further tagging of individuality, predication, temporality, or factuality. Second, Cleeremans (2008) has proposed that the extent to which a representation is available to consciousness depends both on its "quality" — its strength, stability in time, and distinctiveness — and on the fact that it is the target of a meta-representation that characterizes the mental attitude by which the first-order representation is known (e.g., is this something I know, hope, or regret?). Crucially, a central assumption of this account is that such meta-representations are learned unconsciously by the brain as it continuously redescribes its own activity to itself, so enriching its representational repertoire with knowledge about the geography of its own representations. Cleeremans calls this hypothesis "The Radical Plasticity Thesis". Third, a very similar idea was independently proposed by Lau (2008). According to Lau's "higher-order Bayesian theory of consciousness", the brain essentially performs statistics on itself, continuously seeking to identify appropriate decision criteria over noisy internal signal distributions reflecting, for instance, the presence of a stimulus or activity in a particular cerebral region. This account thus borrows both from signal detection theory, in that it assumes that the mechanisms by which the brain learns about itself involve the same sorts of processes involved in making optimal decisions with respect to external stimuli, and from Rosenthal's HOT theory, in that it assumes that this learning results in higher-order representations of the brain's activity.
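
The signal-detection flavor of Lau's proposal lends itself to a simple sketch (the criterion values and signal strength below are arbitrary assumptions of mine, not Lau's): a first-order criterion applied to a noisy internal signal yields the perceptual decision, while a stricter higher-order criterion applied to the same evidence determines whether the percept is reported as conscious.

    import numpy as np

    rng = np.random.default_rng(5)
    n_trials = 10000
    stimulus_present = rng.random(n_trials) < 0.5

    # First-order evidence: a noisy internal signal, stronger when a stimulus occurs.
    evidence = rng.normal(0.0, 1.0, n_trials) + np.where(stimulus_present, 1.5, 0.0)

    FIRST_ORDER_CRITERION = 0.75    # perceptual decision: "was something there?"
    HIGHER_ORDER_CRITERION = 1.75   # metacognitive decision: "am I aware of it?"

    detected = evidence > FIRST_ORDER_CRITERION
    aware = evidence > HIGHER_ORDER_CRITERION

    # Correct detections that fail to cross the higher-order criterion behave
    # like "unconscious perception": accurate but not reported as conscious.
    detected_unaware_hits = detected & ~aware & stimulus_present
    print(f"Detection accuracy: {(detected == stimulus_present).mean():.2%}")
    print(f"Detected-but-unaware hits: {detected_unaware_hits.mean():.2%} of trials")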

Summary

Different "computational correlates of consciousness", — putative computational principles that distinguish between information processing with and without consciousness — have now been proposed. Most converge towards the key idea that conscious states instantiate a form of “global constraint satisfaction” whereby widely distributed neuronal populations continuously compete to form stable coalitions that offer the best intepretation of the current state of affairs. Such competition requires mechanisms that make it possible for stable, strong states to emerge and to bias processing elsewhere by making their contents globally available. Such mechanisms may involve synchronous firing and reentrant processing. The main competing proposal is that consciousness involves the unconscious occurrence of higher-order thoughts in virtue of which the target first-order representations become conscious. This mechanism has not so far received a computational implementation (but see Cleeremans, Timmermans, & Pasquali, 2007 and Lau, 2008, for recent attempts).

In many cases, the proposed principles have been instantiated in the form of specific models of cognitive tasks that differentiate between information processing with and without consciousness (see Maia & Cleeremans, 2005, for a review). Thinking about consciousness in terms of specific computational mechanisms is undoubtedly a useful strategy to further our understanding of its neural underpinnings and of its function. The hard problem, however, remains untouched, since most of the existing proposals have been formulated to address access-consciousness rather than phenomenal experience.

References

  • Atkinson, A. P., Thomas, M. S. C., & Cleeremans, A. (2000). Consciousness: mapping the theoretical landscape. Trends in Cognitive Sciences, 4(10), 372-382.
  • Atkinson, R. C., & Shiffrin, R. M. (1971). The control of short-term memory. Scientific American, 224, 82-90.
  • Baars, B. J. (1988). A Cognitive Theory of Consciousness. Cambridge: Cambridge University Press.
  • Cleeremans, A. (2008). Consciousness: The radical plasticity thesis. In R. Banerjee & B. K. Chakrabarti (Eds.), Progress in Brain Research, 168, 19-33.
  • Cleeremans, A., Timmermans, B., & Pasquali, A. (2007). Consciousness and metarepresentation: A computational sketch. Neural Networks, 20(9), 1032-1039.
  • Crick, F. H. C., & Koch, C. (1990). Towards a neurobiological theory of consciousness. Seminars in the Neurosciences, 2, 263-275.
  • Crick, F. H. C., & Koch, C. (2003). A framework for consciousness. Nature Neuroscience, 6(2), 119-126.
  • Dehaene, S., Kerszberg, M., & Changeux, J.-P. (1998). A neuronal model of a global workspace in effortful cognitive tasks. Proceedings of the National Academy of Sciences of the U.S.A., 95(24), 14529-14534.
  • Dennett, D. C. (2001). Are we explaining consciousness yet? Cognition, 79, 221-237.
  • Dienes, Z., & Perner, J. (1999). A theory of implicit and explicit knowledge. Behavioral and Brain Sciences, 22, 735-808.
  • Engel, A. K., & Singer, W. (2001). Temporal binding and the neural correlates of sensory awareness. Trends in Cognitive Science, 5(1), 16-25.
  • Grossberg, S. (1999). The link between brain learning, attention, and consciousness. Consciousness and Cognition, 8, 1-44.
  • Kelso, J. A. S., et al. (1988). Dynamic pattern generation in behavioral and neural systems. Science, 239(4847), 1513-1520.
  • Lamme, V. A. F. (2006). Toward a true neural stance on consciousness. Trends in Cognitive Sciences, 10(11), 494-501.
  • Lau, H. (2008). A higher-order Bayesian decision theory of consciousness. In R. Banerjee & B. K. Chakrabarti (Eds.), Progress in Brain Research, 168, 35-48.
  • Maia, T. V., & Cleeremans, A. (2005). Consciousness: Converging insights from connectionist modeling and neuroscience. Trends in Cognitive Sciences, 9(8), 397-404.
  • Mathis, W. D., & Mozer, M. C. (1995). On the computational utility of consciousness. In G. Tesauro & D. S. Touretzky (Eds.), Advances in Neural Information Processing Systems (Vol. 7, pp. 10-18). Cambridge, MA: MIT Press.
  • O'Brien, G., & Opie, J. (1999). A connectionist theory of phenomenal experience. Behavioral and Brain Sciences, 22, 175-196.
  • Rosenthal, D. (2006). Consciousness and Mind. Oxford, UK: Oxford University Press.
  • Schacter, D. L. (1989). On the relations between memory and consciousness: Dissociable interactions and conscious experience. In H. L. Roediger III & F. I. M. Craik (Eds.), Varieties of Memory and Consciousness: Essays in Honour of Endel Tulving. Mahwah, NJ: Lawrence Erlbaum Associates.
  • Seth, A. K. (2009). Explanatory correlates of consciousness: Theoretical and computational challenges. Cognitive Computation, 1, 50-63.
  • Seth, A.K., Dienes, Z., Cleeremans, A., Overgaard, M., & Pessoa, L. (2008). Measuring consciousness: Relating behavioural and neurophysiological approaches. Trends in Cognitive Sciences, 12, 314-321.
  • Tononi, G., & Edelman, G. M. (1998). Consciousness and complexity. Science, 282(5395), 1846-1851.
  • Varela, F., Lachaux, J.-P., Rodriguez, E., & Martinerie, J. (2001). The brainweb: Phase synchronization and large-scale integration. Nature Reviews Neuroscience, 2, 229-239.
  • Zeki, S., & Bartels, A. (1998). The asynchrony of consciousness. Proceedings of the Royal Society B, 265, 1583-1585.

Note

This article is adapted from the article "Computational Correlates of Consciousness" (A. Cleeremans), in T. Bayne, A. Cleeremans, and P. Wilken (Eds.) (in press), The Oxford Companion to Consciousness, Oxford University Press.
