Models of tactile perception and development

From Scholarpedia

    Models of tactile perception are mathematical constructs that attempt to explain the process by which the tactile sense accumulates information about objects and agents in the environment. Since touch is an active sense, i.e., the sensory organ is moved during the process of sensation, these models often describe the motion strategies that optimize the perceptual outcome.

    Models of tactile development attempt to explain the emergence of perception and the accompanying motor strategies from more basic principles. These models often involve learning of exploration strategies and are aimed at explaining ontogenetic development of behavior.

    These models are used in two complementary ways. The first is to explain and predict animal and human behavior. For this purpose, the vibrissae system of rodents is often used, as it is a well-studied system in neuroscience. Vibrissal behaviors, i.e., movement strategies of the rodents' facial hairs during different perceptual tasks, are modeled in an attempt to uncover the underlying common principles, as well as the neuronal mechanisms of tactile perception and development. The same models are also used in artificial constructs, e.g., robots, both to validate the emergence of tactile sensorimotor strategies and to optimize tactile perception in novel robotic platforms.

    Introduction

    Tactile perception refers to the information gathered about tactile objects in the environment, such as an object's position, shape, material or surface texture. Models of tactile perception are thus aimed at explaining how this information is accumulated, integrated and used in tactile tasks, such as discrimination and localization.

    Touch is an active sense, i.e., the sensory organ is usually moved in order to perceive the environment. Hence modeling tactile perception involves modeling the sensorimotor strategy that results in the accumulation of tactile information. In other words, these models describe the behavior, or motion, of the sensory organ as it interacts with the tactile object. Models attempt to either describe observed tactile-oriented behaviors in animals and humans or derive optimal perceptual strategies and then compare them to observed behaviors.

    Figure 1: Architecture of active tactile perception models.

    Since touch, as opposed to vision, audition and smell, is a proximal sense, i.e., the sensory organ must be in contact with the object in order to perceive it, locomotion is often part of the description of the tactile strategy. In nocturnal animals, such as many rodents, the vibrissae system, an array of moveable facial hairs, is used to perceive the environment in darkness. Navigation and object recognition are hence performed mainly through the tactile sense. Several models of tactile-guided locomotion have been developed to address this cross-modality integration.

    Like any other sense, tactile perception changes during ontogenetic development, based on the agent's experience and interaction with the environment. Part of this change is the emergence of sensorimotor tactile strategies that explore tactile objects. For example, pups' vibrissae have been shown to move in different ways as they mature to adulthood (Grant et al., 2012). Developmental models try to describe this emergence of exploration behavior using basic principles of sensory-guided motor learning and intrinsically motivated exploration.

    Model types

    Tactile perception modeling is usually composed of two main components, namely, perception and action. The perception component attempts to describe the integration of tactile information into a cohesive percept. The action component attempts to describe the motor strategies used in order to move the sensory organ so that it can acquire this information.

    Tactile perception is usually modeled by either artificial neural networks or Bayesian inference. Artificial neural networks (ANNs) are used to describe the learning process during the perceptual task. They are more closely related to the biological neural system, and many computationally efficient tools exist to implement them. ANNs are usually used in a supervised learning fashion, where the aim is to learn either tactile discrimination via labeled training sets, or continuous-variable forward models that capture the entire sensorimotor agent-environment interaction. Bayesian inference models capture the optimal integration of newly observed information into a single framework of perceptual updates: each new piece of evidence from the possibly noisy environment is used in an optimal way to update the tactile percept in the current task. These models have fewer free parameters to tune and have been shown in recent years to describe many perceptual tasks in humans and animals very well.

    Motor strategies for tactile perception are usually modeled by either optimal control theory or reinforcement learning. Optimal control theory is a mathematical formalism wherein one defines a cost function and then uses known mathematical techniques to find the optimal trajectory or policy that minimizes that cost. In tactile perceptual tasks, the cost function is usually a combination of perceptual errors, e.g., discrimination ambiguity, and the energetic cost of moving the sensory organ. An optimal control solution thus gives the policy, or optimal behavior, that maximizes perception while minimizing energy expenditure. Reinforcement learning is a computational paradigm that attempts to find the policy, or behavior, that maximizes future accumulated rewards. This is a gradual learning process in which repeated interactions with the environment result in convergence to an optimal policy. In tactile perceptual tasks the reward is the completion of the task, and the model converges to a sensorimotor tactile strategy.

    Model application

    Tactile perception and developmental models can be used in several ways. The first is to describe, explain and predict animal and human tactile behavior. In each tactile task, the observed behavior is recorded and analyzed. Models are then constructed to recapture that behavior and to produce predictions of behavior in novel tasks, against which the models are then validated.

    The second application of tactile models is understanding the underlying neuronal mechanisms. For example, the vibrissae system of rodents has been studied for decades and has produced a deep understanding of the neuronal network underlying tactile perception. Linking model components that describe tactile perception to specific brain areas or functions can increase understanding of these areas and may help explain abnormal behavior in both model and neurological terms.

    Another application of tactile models is their implementation in artificial agents such as robots. Robotic platforms with tactile senses are inspired by new understanding gained from biological tactile perception models. Integrating motors into the sensory organ, e.g., artificial whisker robots or robotic fingers covered with tactile sensors, enables new capabilities of object perception. However, controlling these robotic platforms becomes non-trivial, as known motor-oriented control strategies fail in these perception-oriented domains. Implementing biologically inspired sensorimotor models results in better-performing robots.

    Active sensing

    Biological application

    In an attempt to better understand the tactile sensorimotor strategy rodents employ during a well-known perceptual task called pole localization, humans were used as models for rodents (Saig et al., 2012). Subjects were equipped with artificial whiskers attached to their fingertips and were asked to localize a vertical pole, i.e., determine which of two poles was more posterior, using only the information they received through their whiskers. Force and position sensors were placed at the finger-whisker connection, which gave full access to the information entering the "system", i.e., the human subject. It was shown that humans spontaneously employed strategies similar to those of rodents, i.e., they "whisked" with their artificial whiskers by moving their hands synchronously and perceived temporal differences according to pole location. In other words, they determined which pole was more posterior by moving their hands together and detecting which hand touched a pole first.

    In order to model this behavior, a Bayesian inference approach was selected for the tactile perception, whereas an optimal control theory approach was selected for the motor strategy analysis. The task was described as a simple binary discrimination task, i.e., which pole is more posterior, and a Bayes update rule was modeled by integrating the perceived temporal differences between the two hands. A Gaussian noise model was assumed for the perceived temporal difference, introducing a temporal-noise parameter, i.e., how close in time two stimuli can be and still be perceived as distinct. Another important parameter introduced in the Bayesian inference model was the confidence probability above which subjects decided to report their perceived answer. In other words, after repeated contacts with the poles the probability of one pole being more posterior increases; above which threshold does the subject stop the interaction and report the perceived result?
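
    A minimal sketch of such a sequential Bayesian update is given below, assuming a Gaussian likelihood for the perceived inter-hand time difference and a flat prior over the two hypotheses; the default parameter values and the assumed mean time difference are illustrative, not the fitted quantities reported in the study.

```python
import numpy as np
from scipy.stats import norm

def localize_pole(time_diffs, sigma_noise=0.3, confidence=0.84, mean_dt=0.1):
    """Sequential Bayesian inference for the binary pole-localization task.

    time_diffs  : perceived left-minus-right contact-time differences (s),
                  one per synchronous hand movement ("whisk").
    sigma_noise : temporal noise of the perceived time difference (s).
    confidence  : posterior probability above which the subject reports.
    mean_dt     : assumed mean time difference produced by the pole offset;
                  smaller values correspond to a harder task.
    Returns (decision, number of contacts used). All values illustrative.
    """
    p_left = 0.5                                    # flat prior over the two hypotheses
    for n, dt in enumerate(time_diffs, start=1):
        # Gaussian likelihood of the observed time difference under each hypothesis
        like_left = norm.pdf(dt, loc=+mean_dt, scale=sigma_noise)
        like_right = norm.pdf(dt, loc=-mean_dt, scale=sigma_noise)
        # Bayes update of P("left pole is more posterior")
        p_left = like_left * p_left / (like_left * p_left + like_right * (1 - p_left))
        # stop and report once the posterior crosses the confidence threshold
        if max(p_left, 1 - p_left) >= confidence:
            return ("left" if p_left > 0.5 else "right"), n
    return ("left" if p_left > 0.5 else "right"), len(time_diffs)

# e.g.: harder tasks (smaller mean_dt) require more contacts before reporting
print(localize_pole(np.random.default_rng(0).normal(0.1, 0.3, size=50)))
```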

    The selected Bayesian inference model of this tactile perception task resulted in only two parameters, temporal noise and confidence probability, and allowed their estimation by fitting to the experimental results. The number of contacts prior to reporting was shown to increase with task difficulty, measured by decreased distance between the poles, as predicted by the Bayesian model. Fitting the model prediction to the experimental results enabled estimation of the parameters: the temporal noise was estimated to be $312\,ms$ and the confidence probability $84\%$. The temporal noise was somewhat higher than previously reported purely tactile temporal discrimination thresholds, because this was an active-sensing setup that also introduced motor noise. The confidence probability was comparable to that found in many other psychological experiments in which subjects had to report their perceived result after accumulating information. Hence, the Bayesian inference model of tactile perception elegantly described the accumulation and integration of tactile information.

    The motor strategies employed by the subjects were also structured: initial movements were longer and of larger amplitude, followed by progressively shorter and smaller-amplitude ones. To model this behavior, an optimal control theory approach was taken, in which a cost function was defined and optimization techniques then yielded an optimal policy that minimized the cost. The cost function had three components: a perceptual error term representing the task; an energy cost term penalizing laborious actions; and a perceptual cost term, symmetric to the energy term, penalizing the acquisition of too much information. The model captured the behavior exhibited by the subjects and reduced it to a simple governing principle, namely, maintaining a constant information flow. In other words, the optimal control model "distilled" the complex tactile-perception-driven behavior into a single guiding principle.
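
    The sketch below illustrates the structure of such a three-term cost over a sequence of whisk amplitudes; the weights, the assumed growth of perceptual sensitivity across whisks, and the functional forms are all illustrative and are not taken from the original study.

```python
import numpy as np
from scipy.optimize import minimize

def whisk_cost(amplitudes, w_energy=0.2, w_info=0.2, info_rate_target=1.0):
    """Illustrative three-term cost over a sequence of whisk amplitudes.

    Assumption: perceptual sensitivity grows as the estimate sharpens, so
    later whisks extract more information per unit amplitude.
    """
    a = np.asarray(amplitudes)
    sensitivity = 1.0 + 0.5 * np.arange(len(a))        # grows with each whisk (assumed)
    info = sensitivity * a                              # information gained per whisk
    perceptual_error = np.exp(-info.sum())              # error falls with total information
    energy = w_energy * np.sum(a ** 2)                  # penalty on large movements
    flow = w_info * np.sum((info - info_rate_target) ** 2)  # symmetric information-flow cost
    return perceptual_error + energy + flow

T = 8
result = minimize(whisk_cost, x0=np.ones(T), bounds=[(0.0, 3.0)] * T)
# optimal amplitudes shrink from whisk to whisk as sensitivity grows,
# keeping the information flow near its target rate
print(np.round(result.x, 2))
```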

    Robotic application

    Inspired by the rodent vibrissae system, a robotic platform was constructed with fully controlled moving artificial whiskers (Sullivan et al., 2012). The robot was used in tasks similar to those studied in rodents, namely, estimating the distance and texture of surfaces. In other words, the robot moved its whiskers using biologically inspired motor strategies and collected information about the surfaces via sensors located at the base of the whiskers. The robot employed models of both tactile perception and motor strategies designed based on an understanding of the biological vibrissae system.

    Tactile perception was modeled using a naive Bayes approach: during training the robot collected sensory information for each type of surface and each distance from the surface, constructing labeled probability distributions for each. During validation, the robot whisked against a surface, collected information and classified the texture and distance according to the most probable class, based on the trained distributions.
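
    A minimal sketch of this classification stage is shown below, assuming each whisk is summarized by a small feature vector (e.g., deflection statistics at the whisker base) and Gaussian class-conditional distributions; the feature choice and the Gaussian form are illustrative rather than the published implementation.

```python
import numpy as np

class NaiveBayesSurfaces:
    """Gaussian naive Bayes over per-whisk features, one class per
    (texture, distance) label collected during training."""

    def fit(self, X, y):
        # X: (n_whisks, n_features) array of whisker-deflection features
        # y: (n_whisks,) array of class labels
        self.classes_ = np.unique(y)
        self.mean_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        self.var_ = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes_])
        self.logprior_ = np.log(np.array([np.mean(y == c) for c in self.classes_]))
        return self

    def predict(self, X):
        # per-sample log-likelihood under each class, features treated as independent
        ll = -0.5 * (np.log(2 * np.pi * self.var_[None])
                     + (X[:, None, :] - self.mean_[None]) ** 2 / self.var_[None]).sum(-1)
        return self.classes_[np.argmax(ll + self.logprior_[None], axis=1)]
```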

    The motor strategy employed a behavior observed in rodents, namely, rapid cessation of protraction (RCP): rodents whisk with smaller amplitude after an initial contact with an object. This strategy results in a "light touch" on the surface from the second whisk onward. The same behavior was modeled and executed in the robotic rodent, whereby the whisking amplitude was decreased after an initial perceived contact with the surface. The goal of the task and the specific models was to ascertain the potential benefits rodents might gain from employing such a strategy.
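
    Expressed as a control rule, RCP can be sketched as follows; the amplitude values are illustrative normalized units, not those used on the robot.

```python
def whisk_amplitudes(contacts, default_amplitude=1.0, reduced_amplitude=0.4):
    """contacts: booleans, one per whisk, True when the whisker touched the
    surface on that whisk. Returns the amplitude commanded for each whisk:
    full amplitude until the first contact, reduced ("light touch")
    amplitude afterwards."""
    amplitudes, touched = [], False
    for contact in contacts:
        amplitudes.append(reduced_amplitude if touched else default_amplitude)
        touched = touched or contact
    return amplitudes

# e.g. whisk_amplitudes([False, True, False, False]) -> [1.0, 1.0, 0.4, 0.4]
```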

    The results of the study showed that the robot classified both the texture and the distance of the surface much more efficiently and accurately when employing the rapid cessation of protraction (RCP) strategy than with unmodulated whisking. Further analysis showed that using RCP resulted in less noisy sensory information, which in turn resulted in improved classification. This model thus suggests that rodents employ the RCP strategy not only to keep their whiskers intact, but also to improve the signal-to-noise ratio and thereby tactile perception. It also enables the development of more robust and more accurate artificial agents with moving tactile sensors.

    Tactile navigation

    Biological application

    Since touch is a proximal sense, direct contact with objects in the environment is mandatory for tactile perception (Gordon et al., 2014b; Gordon et al., 2014c). In order to understand the exploration behavior of rodents, a model was constructed that attempted to capture the complexity and structure of their exploration patterns. When rodents are allowed to explore a new, round, dark arena on their own, they move around the arena and sense its walls using their whiskers. They exhibit a complex exploration pattern in which they first explore the entrance to the arena, only then walk along the circumference walls, and only then explore the open space in the center of the arena. Their exploration is composed of excursions made up of an outbound exploratory part and a fast retreat in which they return to their home cage.

    This tactile-driven exploration strategy was modeled using a novelty-based approach that combined a tactile-perceptual representation of the arena with a motor strategy that balances exploration motor primitives and retreats. For tactile perception of the arena, a Bayesian inference approach was taken to represent the forward model of locomotion. In other words, the arena was represented as the prediction of the sensory information at a given location and orientation, e.g., a wall is represented as "at location $x$ and orientation $o$, the left whisker is predicted to experience touch". This representation is updated, using Bayes rule, whenever the animal receives a new tactile sensation at any location, assuming sensory noise, i.e., the perceived tactile sensation is not necessarily the correct one.
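
    A minimal sketch of the update of one entry of such a forward model is given below, assuming a single Bernoulli "touch predicted here" variable per (location, orientation) cell and a symmetric sensory-noise probability; the noise value is illustrative.

```python
def update_touch_belief(p_touch, observed_touch, sensor_noise=0.1):
    """Bayesian update of the belief that touch is predicted at one
    (location, orientation) cell of the arena forward model.

    p_touch        : prior probability that the cell predicts touch (a wall)
    observed_touch : whether the whisker actually sensed contact here
    sensor_noise   : probability that the sensation is wrong (illustrative)
    """
    p_correct = 1.0 - sensor_noise
    if observed_touch:
        like_touch, like_free = p_correct, sensor_noise
    else:
        like_touch, like_free = sensor_noise, p_correct
    evidence = like_touch * p_touch + like_free * (1.0 - p_touch)
    return like_touch * p_touch / evidence
```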

    The exploration motor strategy was taken to consist of a balance between exploration motor primitives and retreats, with novelty used as the thresholding factor. Exploration motor primitives are policies that determine the locomotive behavior of the rodent based on its perceived tactile sensation, e.g., the wall-following primitive is the policy "if the left whisker senses a wall, go forward", whereas the wall-avoidance primitive is the policy "if the right whisker senses a wall, turn left". Three motor primitives were modeled, namely, circle-in-place, wall-following and wall-avoidance. An additional "retreat primitive" was modeled as taking the shortest path from the current location to the home cage, given the current estimate of the arena.
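
    As a sketch, such primitives are simply mappings from the current whisker sensations to locomotor actions; the rules below follow the verbal descriptions above, and the fallback actions (e.g., turning back toward a lost wall) are illustrative assumptions.

```python
def wall_following(left_touch, right_touch):
    """Exploration primitive: keep a wall on the left side (fallback assumed)."""
    return "forward" if left_touch else "turn_left"

def wall_avoidance(left_touch, right_touch):
    """Exploration primitive: steer away from sensed walls."""
    if right_touch:
        return "turn_left"
    if left_touch:
        return "turn_right"
    return "forward"

def circle_in_place(left_touch, right_touch):
    """Exploration primitive: rotate without translating (illustrative)."""
    return "turn_left"
```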

    Figure 2: Novelty management model architecture for tactile-driven navigation (Gordon et al., 2014c).

    The balance between these motor primitives was dictated by novelty, measured as the information gain at each time step in which the arena model was updated. In other words, whenever the tactile forward model of the arena was updated, the number of bits that were updated, quantified by the Kullback-Leibler divergence between the prior and posterior distributions, represented the novelty. Whenever novelty was higher than a certain threshold, the retreat primitive was employed. Whenever novelty remained lower than a certain threshold for a certain amount of time, the next exploration motor primitive was employed. This generative model captured many of the behaviors observed in tactile-driven exploring rodents and showed that the basic principle of novelty management can be used to model complex and structured exploration behaviors.
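
    A minimal sketch of the novelty computation and the switching rule is given below, treating each forward-model cell as a Bernoulli variable (as in the update sketched earlier); the thresholds and the dwell-time criterion are illustrative.

```python
import numpy as np

def bernoulli_kl(p_post, p_prior, eps=1e-12):
    """KL divergence (in bits) from prior to posterior for one Bernoulli cell;
    summing this over all updated cells gives the novelty of one step."""
    p = np.clip(p_post, eps, 1 - eps)
    q = np.clip(p_prior, eps, 1 - eps)
    return p * np.log2(p / q) + (1 - p) * np.log2((1 - p) / (1 - q))

def select_primitive(novelty_trace, primitives, current_idx,
                     high_threshold=0.5, low_threshold=0.05, dwell=20):
    """Novelty management: retreat when the latest update was very novel;
    advance to the next exploration primitive after novelty has stayed low
    for `dwell` consecutive steps (all values illustrative)."""
    if novelty_trace[-1] > high_threshold:
        return "retreat"
    if len(novelty_trace) >= dwell and max(novelty_trace[-dwell:]) < low_threshold:
        return primitives[min(current_idx + 1, len(primitives) - 1)]
    return primitives[current_idx]

# e.g. primitives = ["circle-in-place", "wall-following", "wall-avoidance"]
```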

    Robotic application

    A robotic platform with actuated artificial whiskers was used to study a tactile-based simultaneous localization and mapping (tSLAM) model (Pearson et al., 2013). In this setup the perceptual task is dual, i.e., the robot needs both to localize itself in space and to map the objects in the environment. As opposed to many other SLAM models, this model used only odometry and tactile sensations from the whisker array as its input, i.e., it had no vision.

    The tactile-driven exploration of the environment consisted of an occupancy-map, particle-filter-based model of tactile perception and an attention-based "orient" motor strategy. The tactile perception model was composed of an occupancy map in which each cell of the modeled environmental grid had a probability of being occupied by an object. Each whisk of the robot's artificial whiskers updated this occupancy map at the estimated location of the robot, i.e., if a whisker made contact with an object, the probability of occupancy in that cell was increased. To optimize the simultaneous estimation of location and map, a particle filter algorithm was used, in which each particle had its own occupancy map that was updated according to the "flow of information" from the whiskers. For estimation, the particle with the highest posterior probability was taken.
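
    A minimal sketch of one particle-filter step is given below, assuming each particle holds a pose estimate plus its own log-odds occupancy grid; the motion-noise and contact-likelihood parameters are illustrative and this is not the published tSLAM implementation.

```python
import numpy as np

class Particle:
    """One tSLAM hypothesis: a pose estimate plus its own occupancy map."""
    def __init__(self, pose, grid_shape):
        self.pose = np.array(pose, dtype=float)   # (x, y, heading)
        self.log_odds = np.zeros(grid_shape)      # per-cell occupancy log-odds
        self.weight = 1.0

def tslam_step(particles, odometry, contacts,
               motion_noise=0.05, p_contact_occ=0.8, p_contact_free=0.1):
    """One filter update (parameters illustrative).

    contacts: list of (cell, touched) pairs, where `cell` is the grid cell a
    whisker tip falls in (in a full implementation this is derived from each
    particle's own pose and the whisker geometry) and `touched` is a boolean.
    """
    for p in particles:
        # 1. motion update: odometry plus noise
        p.pose += np.asarray(odometry) + np.random.normal(0.0, motion_noise, 3)
        for cell, touched in contacts:
            occ = 1.0 / (1.0 + np.exp(-p.log_odds[cell]))   # prior P(occupied)
            if touched:
                likelihood = p_contact_occ * occ + p_contact_free * (1 - occ)
                p.log_odds[cell] += np.log(p_contact_occ / p_contact_free)
            else:
                likelihood = (1 - p_contact_occ) * occ + (1 - p_contact_free) * (1 - occ)
                p.log_odds[cell] += np.log((1 - p_contact_occ) / (1 - p_contact_free))
            # 2. re-weight the particle by how well its map predicted the contact
            p.weight *= likelihood
    # 3. the highest-weight particle is the current localization/map estimate
    return max(particles, key=lambda p: p.weight)
```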

    The motor strategy governed the motion of the moveable whisker array and was based on an attention model that executed an orienting behavior. In other words, a salience-based attention map was constructed from the perceived whisker information, resulting in an orienting behavior of the entire "head" of the robot towards the salient tactile object. Thus, once contact was made with an object in the environment, the robot explored that object in greater detail, increasing the information collected for the tSLAM algorithm.

    The results of the study showed that the robot, which made several exploratory bouts in an arena containing several geometric shapes, performed simultaneous localization and mapping of its environment in impressive agreement with the ground truth, as measured by an overhead camera. This model shows how well-established models from other senses can be adapted to the unique properties of the tactile domain, inform about possible perceptual characteristics of exploring rodents, and improve the performance of tactile-based robotic platforms.

    Development of tactile perception

    Biological application

    Developmental models attempt to explain the emergence of tactile perception and its accompanying motor strategies from more basic principles (Gordon and Ahissar, 2012a). These models assume repeated interaction between the agent and its environment, through which statistical representations of the underlying mechanisms of sensory perception are accumulated. Furthermore, in these developmental models the optimal sensorimotor strategies that maximize perceptual confidence are learned, not assumed or pre-designed.

    Figure 3: Intrinsic reward reinforcement learning model architecture (Gordon et al., 2014c).

    One framework for developmental models is artificial curiosity, wherein a reinforcement learning paradigm is used to learn the optimal policy, yet the reward function is intrinsic and proportional to the learning progress of sensory perception. In one instantiation of this framework in the tactile domain, an artificial neural network was used to model the tactile forward model, i.e., the network predicted the next sensory state based on the current state and the action performed. More specifically, the model was applied to the vibrissae system, where the sensory states were composed of the whisker angle and binary contact information, and the action was protraction (increasing whisker angle) or retraction (decreasing whisker angle). Thus, the ANN learned to map objects in the whisker field, e.g., given the current whisker angle and no contact, will protraction induce contact (there is an object) or not. By moving the whisker, the tactile perceptual model learned about the environment.
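
    A minimal sketch of such a forward model is shown below, using a small off-the-shelf regressor as the network; the toy whisker dynamics, the object position and the network size are all illustrative assumptions, not the architecture used in the original work.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
object_angle = 0.7                      # hypothetical object position in the whisker field

def step(angle, action, d_angle=0.1):
    """Toy whisker dynamics: the angle changes with the action (+1 protract,
    -1 retract) and contact occurs when the whisker reaches the object angle."""
    new_angle = np.clip(angle + action * d_angle, 0.0, 1.0)
    contact = float(new_angle >= object_angle)
    return new_angle, contact

# collect (state, action) -> next-state pairs by random whisking
X, Y = [], []
angle, contact = 0.0, 0.0
for _ in range(2000):
    action = rng.choice([-1.0, 1.0])
    new_angle, new_contact = step(angle, action)
    X.append([angle, contact, action])
    Y.append([new_angle, new_contact])
    angle, contact = new_angle, new_contact

# forward model: predict the next (angle, contact) from (angle, contact, action)
forward_model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000).fit(X, Y)
print(forward_model.predict([[0.65, 0.0, 1.0]]))  # protracting near the object -> contact
```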

    The question the developmental model tries to answer is: what is the optimal way to move the whisker so as to maximize the efficiency of mapping the environment? For this purpose, intrinsic-reward reinforcement learning was used, where the reward was proportional to the prediction error of the perceptual ANN. Thus, the more prediction errors were made, the higher the reward, exemplifying the concept of "you learn by making mistakes". The policy converged to moving the whisker towards the lesser-known places.
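
    A minimal sketch of this curiosity loop is given below, using tabular Q-learning over discretized whisker angles with the forward model's prediction error as the reward; the discretization, constants and learning scheme are illustrative and differ from the original implementation.

```python
import numpy as np

def curiosity_loop(forward_model_error, n_angles=10, n_steps=5000,
                   alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning where the reward is the forward model's prediction
    error at the visited state (the "learn by making mistakes" signal).

    forward_model_error(state, action) should return the current prediction
    error; it is assumed to shrink for state-action pairs visited often, so
    the policy is pushed towards the lesser-known parts of the whisker field.
    """
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_angles, 2))                 # actions: 0 = retract, 1 = protract
    state = 0
    for _ in range(n_steps):
        action = rng.integers(2) if rng.random() < epsilon else int(np.argmax(Q[state]))
        next_state = min(state + 1, n_angles - 1) if action else max(state - 1, 0)
        reward = forward_model_error(state, action)          # intrinsic reward
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max()
                                     - Q[state, action])
        state = next_state
    return Q
```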

    The results of this developmental model showed the convergence of whisking behaviors, starting from random motion and ending with behaviors observed in adult rodents, e.g., periodic whisking for learning free space and touch-induced pumps (Deutsch et al., 2012) for localizing tactile objects in the whisker field. The model suggests that these behaviors are learned during development and are not innate in the rodent brain. Furthermore, the model suggests development-specific brain connectivity between the perceptual-learning brain areas, e.g., barrel cortex, and the reward system, e.g., basal ganglia, such that the former supplies the reward signal to the latter.

    Robotic application

    Artificial curiosity principles were also studied on a robotic finger platform with tactile sensors (Pape et al., 2012). The goal was to study the emergence of tactile-oriented finger movements that optimize tactile perception of surface textures. The robotic platform was a robotic finger with two tendon-based actuators and a $2 \times 2$ array of 3D micro-electro-mechanical system (MEMS) tactile sensors at its tip. The finger was able to flex in order to touch a surface with changing textures.

    For tactile perception, a clustering algorithm was used to distinguish between the frequency spectra of the MEMS recordings over $0.33\,s$ windows. This unsupervised learning model represented the abstraction of tactile sensory information into discrete tactile percepts. However, the clustering was performed only on recent observations and was thus dependent on the movement of the finger, e.g., free movements without contact resulted in different spectra than tapping on the surface. The question asked in this study was: which skills will be learned by intrinsically motivating the robotic finger to learn about different tactile perceptions?
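
    A minimal sketch of this unsupervised perceptual step is shown below, assuming windows of one sensor channel are turned into magnitude spectra and clustered with k-means; the sampling rate, the single-channel simplification and the number of clusters are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def tactile_percepts(signal, fs=1000, window_s=0.33, n_percepts=4, seed=0):
    """Cluster 0.33 s windows of a tactile sensor signal by their magnitude
    spectra; each cluster index plays the role of a discrete tactile percept.
    fs (Hz) and n_percepts are illustrative choices."""
    win = int(window_s * fs)
    n_windows = len(signal) // win
    windows = np.reshape(signal[:n_windows * win], (n_windows, win))
    spectra = np.abs(np.fft.rfft(windows, axis=1))       # per-window magnitude spectrum
    labels = KMeans(n_clusters=n_percepts, n_init=10,
                    random_state=seed).fit_predict(spectra)
    return labels   # one discrete percept label per window
```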

    For this purpose, a rewarding mechanism was developed such that intrinsic rewards were given for various aspects of exploration: reward was high for unexplored finger positions, encouraging exploration; reward was given for ending up in a tactile perceptual state, thus driving the finger towards a specific tactile perception and embodying active-sensing principles; and reward was given for skills that were still changing, thus focusing learning on the stabilization of skills. This composite reward mechanism ensured the appearance of several intrinsically motivated, stabilized skills aimed at reaching specific tactile perceptions. Each developed skill thus resulted in a unique perception in a repeatable manner.

    The study resulted in the emergence of several specific intrinsically motivated skills:

    1. free movements that avoided the surface, resulting in a free-air tactile perception;
    2. tapping movements that resulted in spectra unique to the surface; and
    3. sliding movements that resulted in texture-specific spectra.

    These well-known and documented tactile strategies of human finger-driven tactile perception emerged from intrinsic motivation and were not pre-designed. Thus, the developmental model resulted in learned tactile skills that were associated with unique tactile perceptions.

    References

    • R. A. Grant, B. Mitchinson, and T. J. Prescott. The development of whisker control in rats in relation to locomotion. Developmental Psychobiology, 54(2):151–168, 2012.
    • A. Saig, G. Gordon, E. Assa, A. Arieli, and E. Ahissar. Motor-sensory confluence in tactile perception. The Journal of Neuroscience, 32(40):14022–14032, 2012.
    • J. C. Sullivan, B. Mitchinson, M. J. Pearson, M. Evans, N. F. Lepora, C. W. Fox, C. Melhuish, and T. J. Prescott. Tactile discrimination using active whisker sensors. IEEE Sensors Journal, 12(2):350–362, 2012.
    • G. Gordon, E. Fonio, and E. Ahissar. Emergent exploration via novelty management. Journal of Neuroscience, 34(38):12646–12661, 2014.
    • G. Gordon, E. Fonio, and E. Ahissar. Learning and control of exploration primitives. Journal of Computational Neuroscience, 37(2):259–280, 2014.
    • M. J. Pearson, C. Fox, J. C. Sullivan, T. J. Prescott, T. Pipe, and B. Mitchinson. Simultaneous localisation and mapping on a multi-degree of freedom biomimetic whiskered robot. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation (ICRA), pages 586–592, 2013.
    • G. Gordon and E. Ahissar. Hierarchical curiosity loops and active sensing. Neural Networks, 32:119–129, 2012.
    • D. Deutsch, M. Pietr, P. M. Knutsen, E. Ahissar, and E. Schneidman. Fast feedback in active sensing: Touch-induced changes to whisker-object interaction. PLoS ONE, 7(9):e44272, 2012.
    • L. Pape, C. M. Oddo, M. Controzzi, C. Cipriani, A. Förster, M. C. Carrozza, and J. Schmidhuber. Learning tactile skills through curious exploration. Frontiers in Neurorobotics, 6, 2012.
