
MirrorBot

Guenther Palm (2011), Scholarpedia, 6(10):6633. doi:10.4249/scholarpedia.6633, revision #90679

Curator: Guenther Palm

MirrorBot is the name of a robot and a European project that investigated computational requirements and consequences of the mirror neuron system. The robot was able to recognise verbal commands and visual input, associate them, and perform related motor actions. The project was financed by the EU and was carried out by teams led by S. Wermter (University of Sunderland, coordinator), G. Palm (Ulm University), G. Rizzolatti (University of Parma), F. Pulvermueller (MRC Cognition and Brain Sciences Unit, Cambridge) and F. Alexandre (INRIA Lorraine/LORIA-CNRS).


Motivation

A mirror neuron is a neuron that is active when the agent either executes a specific action, observes others perform this action, or hears sounds related to this action. Different mirror neurons respond to different intentional actions, such as grasping for food, irrespective of the actor. The discovery of mirror neurons posed some hard questions to computational neuroscientists and researchers in intelligent systems:

  1. How can one understand neural responses with these properties?
  2. Can these properties emerge from learning?
  3. What are these neurons good for?
  4. How can we create computational intelligent systems based on these neurons?

For issues 1 and 2, one can argue that a monkey may watch its own hand when grasping and also other monkeys' hands when they grasp. Since the visual appearances will usually be quite similar, it can associate or generalize across them. When learning its own eye-hand coordination, it can associate its motor commands and proprioceptive inputs with the visual appearance of its arm and hand. This can lead to the generalization necessary for mirror neurons. In addition, it can associate the goal it is pursuing, or the reward it is predicting, with those actions or the corresponding sensory inputs.

For issues 3 and 4, computational and robot models can be built to test some of these questions. It is widely held that mirror neurons are an integral part of the human brain mechanisms which support imitation. As a result, the idea of robots learning by imitating humans was initially put forward as building on simulated mirror neurons. Another possibility is that mirror neurons serve as an internal representation of goal-directed, actor-independent actions, which can form an essential ingredient of the emergence of language, namely a semantic representation of verbs. In other words, mirror neurons may contribute to building a semantic representation of verbs and thus to the emergence of action verbs in language. Finally, the goal-directedness of mirror neuron responses could be useful in supporting social interactions, perspective switching, and empathy.

Realization

The MirrorBot project developed and studied emerging embodied representations based on mirror neurons. It focused on five main objectives:

  1. To collect behavioural, imaging, and neural recording data to clarify the perceptual processing
  2. To identify a realistic neural architecture for the MirrorBot to process perceptual data and generate actions
  3. To develop cell assemblies implementing the mirror neuron concept
  4. To integrate the cell assemblies to produce the behaviour in the MirrorBot
  5. To train and evaluate the MirrorBot.

In order to address these objectives, neuroscience experiments on the mirror neuron system were performed. In addition, brain imaging experiments measured brain activity while subjects processed leg-, arm-, and face-related words, to investigate how words are represented in the brain. Such experiments provide information about the nature of the mirror neuron system, since mirror neurons respond not only when an action is performed but also when it is seen or heard. One finding of these experiments was that mirror neurons fire in response not to the actual movements but to the goal of the set of movements. Hence, what turns a set of movements into an action is the goal, together with the belief that performing the movements will achieve that goal.

With regard to the representation of words, there is a finer-grained grounding of language instructions in actions. This creates a division of representation in the cerebral cortex, between leg-, head- and hand-related areas, according to the part of the body that performs the action. The findings of the neuroscience experiments guided the computational models that were developed for the trainable MirrorBot robot assistant.

[Figure: MirrorBot grasping (MirrorBotgrasp.jpg)]

Further outcomes of the MirrorBot project include a language architecture, a visual attention control system based on self-organising maps, an object recognition system that combines the attention control system with feature extraction and classification, and robot docking based on reinforcement learning (Weber et al. 2006, Weber and Wermter 2007, Wermter et al. 2005a, 2005b, Panchev and Wermter 2006). These models were combined into the trainable robot assistant by devising an appropriate scenario and developing a novel mirror neuron-based architecture. The MirrorBot system also demonstrates that a model of command understanding can be built using only the most elementary ingredients of neuroscience: threshold neurons, spikes and synaptic plasticity (Fay et al. 2005, Markert et al. 2007).
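The associative, cell-assembly-based part of such an architecture can be illustrated by a Willshaw/Palm-style binary associative memory: Hebbian learning with binary (clipped) synapses and threshold retrieval. The following is a minimal sketch of that general technique, with illustrative pattern sizes and names; it is not the project's actual implementation.

```python
import numpy as np

class BinaryAssociativeMemory:
    """Willshaw/Palm-style binary associative memory (illustrative sketch):
    Hebbian clipped learning of pattern pairs, threshold retrieval."""

    def __init__(self, n_in, n_out):
        self.W = np.zeros((n_in, n_out), dtype=np.uint8)

    def store(self, x, y):
        # Hebbian learning with binary synapses: a synapse is switched on
        # whenever pre- and postsynaptic units are both active.
        self.W |= np.outer(x, y).astype(np.uint8)

    def recall(self, x):
        # Threshold retrieval: an output unit fires if it receives input
        # from all active units of the cue.
        s = x @ self.W                      # dendritic sums
        return (s >= x.sum()).astype(np.uint8)

# Associate a sparse "word" pattern with a sparse "action" pattern.
word = np.zeros(16, dtype=np.uint8);   word[[0, 3, 7]] = 1
action = np.zeros(16, dtype=np.uint8); action[[2, 5, 11]] = 1
mem = BinaryAssociativeMemory(16, 16)
mem.store(word, action)
assert np.array_equal(mem.recall(word), action)
```

Because retrieval only requires the active cue units to agree with the stored pattern, such a memory also completes partial cues, which is the pattern-completion behaviour cell-assembly models rely on.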

In the overall scenario the robot is placed at a random position in the environment, with fruits lying on a table that is not in sight. An instruction is given, and the robot needs to detect and approach the table before any docking can take place. MIRA, an early version of MirrorBot, took part in the competition for the 2003/4 Machine Intelligence Award of the British Computer Society; MirrorBot was able to successfully fetch objects on demand and was awarded the Machine Intelligence Prize.
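The approach-and-dock behaviour was learned with reinforcement learning (Weber et al. 2006). As a hedged illustration of the general technique only, not the project's actual controller, a tabular Q-learning agent can learn to approach a goal position; states, actions and rewards below are toy stand-ins:

```python
import random

# Toy stand-in for the docking task: the robot occupies one of N cells
# on a line and must reach the table at cell 0.
N = 10
ACTIONS = (-1, +1)                      # move toward / away from the table
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1       # learning rate, discount, exploration

random.seed(0)
for episode in range(500):
    s = random.randrange(1, N)          # random start position
    while s != 0:
        a = random.choice(ACTIONS) if random.random() < eps else \
            max(ACTIONS, key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), N - 1)  # clipped move
        r = 1.0 if s2 == 0 else -0.01   # reward on docking, small step cost
        # Standard Q-learning update.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2

# After training, the greedy policy moves toward the table from every cell.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(1, N)}
assert all(a == -1 for a in policy.values())
```

The real docking task used visual input rather than a known position, but the same trial-and-error value learning underlies both.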

Summary

Findings on mirror neurons suggest that one's own actions, observed actions and language are closely interrelated, since experiments with monkeys and humans show the same mirror neurons firing in each case. Mirror neuron areas correspond to cortical areas related to human language centres (e.g. the Broca region) and could provide a cortical substrate for the integration of vision, language and action. The MirrorBot project developed and studied emerging embodied representations based on mirror neurons. New architectures were developed based on cell assemblies, associative neural networks, and Hebbian-type learning in order to associate vision, language and motor concepts. Finally, biomimetic multimodal learning and language instruction were implemented in a robot to investigate the task of searching for objects.


References

  • Wermter S., Palm G., Elshaw M. (2005a) Biomimetic Neural Learning for Intelligent Robots. Springer, Heidelberg, Germany.
  • Panchev C., Wermter S. (2006) Temporal Sequence Detection with Spiking Neurons: Towards Recognizing Robot Language Instruction. Connection Science, Vol. 18(1), pp. 1-22.
  • Wermter S., Weber C., Elshaw M. (2005b) Associative Neural Models for Biomimetic Multi-modal Learning in a Mirror Neuron-based Robot. In Cangelosi A., Bugmann G., Borisyuk R. (Eds.), Modeling Language, Cognition and Action. Singapore: World Scientific, pp. 31-46.
  • Weber C., Wermter S. (2007) A Self-Organizing Map of Sigma-Pi Units. Neurocomputing, Vol. 70, pp. 2552-2560.
  • Weber C., Wermter S., Elshaw M. (2006) A hybrid generative and predictive model of the motor cortex. Neural Networks, Vol. 19(4), pp. 339-353.
  • Markert H., Knoblauch A., Palm G. (2007) Modelling of syntactical processing in the cortex. Biosystems, 89:300-315.
  • Fay R., Kaufmann U., Knoblauch A., Markert H., Palm G. (2005) Combining Visual Attention, Object Recognition and Associative Information Processing in a Neurobotic System. In: Wermter S., Palm G., Elshaw M. (Eds.), Biomimetic Neural Learning for Intelligent Robots. Springer LNAI 3575.
