Crossmodal and multisensory interactions between vision and touch
Simon Lacey and Krish Sathian (2015), Scholarpedia, 10(3):7957. doi:10.4249/scholarpedia.7957
Over the past two decades, there has been growing appreciation of the multisensory nature of perception and its neural basis. Consequently, the concept has arisen that the brain is "metamodal", with a task-based rather than strictly modality-based organization (Pascual-Leone & Hamilton, 2001; Lacey et al., 2009a; James et al., 2011). Here we focus on interactions between vision and touch in humans, encompassing both crossmodal interactions, in which tactile inputs evoke activity in neocortical regions traditionally considered visual, and multisensory integrative interactions. It is now established that cortical areas in both the ventral and dorsal pathways, previously identified as specialized for various aspects of visual processing, are also routinely recruited during the corresponding aspects of touch (for reviews see Amedi et al., 2005; Sathian & Lacey, 2007; Lacey & Sathian, 2011, 2014). When such regions lie in classical visual cortex, and thus would traditionally be regarded as unisensory, their engagement by touch is referred to as crossmodal; other such regions lie in classically multisensory sectors of association neocortex. Much of the relevant work concerns haptic perception (active sensing using the hand) of shape; this work is therefore considered in detail. We consider how vision and touch might be integrated in various situations and address the role of mental imagery in visual cortical activity during haptic perception. Finally, we present a model of haptic object recognition and its relationship with mental imagery (Lacey et al., 2014).
Activation of visually responsive cortical regions during touch
The first demonstration that a visual cortical area is active during normal tactile perception came from a positron emission tomographic (PET) study in humans (Sathian et al., 1997). In this study, tactile discrimination of the orientation of gratings applied to the immobilized fingerpad, relative to a control task requiring tactile discrimination of grating groove width, activated a focus in extrastriate visual cortex, close to the parieto-occipital fissure. This focus, located in the region of the human V6 complex of visual areas (Pitzalis et al., 2006), is also active during visual discrimination of grating orientation (Sergent et al., 1992). Similarly, other neocortical regions known to selectively process particular aspects of vision are activated by analogous non-visual tasks: The human MT complex (hMT+), a region well known to be responsive to visual motion, is also active during tactile motion perception (Hagen et al., 2002; Blake et al., 2004; Summers et al., 2009). This region is sensitive to auditory motion as well (Poirier et al., 2005), but not to arbitrary cues for auditory motion (Blake et al., 2004). Together, these findings suggest that hMT+ functions as a modality-independent motion processor. Parts of early visual cortex and a focus in the lingual gyrus are texture-selective in both vision and touch (Stilla & Sathian, 2008; Sathian et al., 2011; Eck et al., 2013), although one group found that haptically and visually texture-selective regions in medial occipitotemporal cortex were adjacent but non-overlapping (Podrebarac et al., 2014). Further, the early visual regions are sensitive to the congruence of texture information across the visual and haptic modalities (Eck et al., 2013), and information about haptic texture flows from somatosensory regions into these early visual cortical areas (Sathian et al., 2011). Both visual and haptic location judgments involve a dorsally directed pathway comprising cortex along the intraparietal sulcus (IPS) and in the frontal eye fields (FEFs) bilaterally; the IPS is classically considered multisensory, and the FEF is now recognized to be so as well. For both texture and location, several of these bisensory areas show correlations of activation magnitude between the visual and haptic tasks, indicating some commonality of cortical processing across modalities (Sathian et al., 2011).
Most research on visuo-haptic processing of object shape has concentrated on the lateral occipital complex (LOC), an object-selective region in the ventral visual pathway (Malach et al., 1995) that is also object- or shape-selective in touch (Amedi et al., 2001, 2002; James et al., 2002; Stilla & Sathian, 2008). The LOC responds to both haptic 3-D (Amedi et al., 2001, 2002; Stilla & Sathian, 2008) and tactile 2-D stimuli (Stoesz et al., 2003; Prather et al., 2004) but does not respond during auditory object recognition cued by object-specific sounds (Amedi et al., 2002). However, this region is activated when participants listen to the impact sounds made by metal or wood objects and categorize these sounds by the shape of the associated object (James et al., 2011). Auditory shape information can be conveyed by a visual-auditory sensory substitution device using a specific algorithm to convert visual information into an auditory stream or 'soundscape' (a generic example of such a conversion is sketched below). Both sighted and blind humans can learn to recognize objects by extracting shape information from such soundscapes, albeit after extensive training; interestingly, the LOC responds to soundscapes after such training, but not when participants simply learn to arbitrarily associate soundscapes with particular objects (Amedi et al., 2007). Thus, the LOC can be regarded as processing geometric shape information independently of the sensory modality used to acquire it, similar to the view of hMT+ as processing modality-independent motion information (see above).
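To convey the flavor of such visual-to-auditory conversion, the sketch below maps an image to a soundscape by scanning it left to right, converting each pixel row to a sine-wave frequency and each pixel's brightness to loudness. This is a generic illustration of this class of algorithm, with illustrative function names and parameters; it is not the specific algorithm of the device used in the studies cited above.

```python
# Simplified visual-to-auditory "soundscape" conversion of the kind used
# in sensory substitution devices: scan the image left to right, map each
# pixel's row to a sine-wave frequency and its brightness to amplitude.
# (Illustrative only; actual devices use their own specific algorithms.)
import numpy as np

def image_to_soundscape(image, duration=1.0, sample_rate=22050,
                        f_min=200.0, f_max=5000.0):
    n_rows, n_cols = image.shape
    samples_per_col = int(duration * sample_rate / n_cols)
    # Top rows -> higher frequencies, spaced exponentially like pitch.
    freqs = f_max * (f_min / f_max) ** (np.arange(n_rows) / (n_rows - 1))
    t = np.arange(samples_per_col) / sample_rate
    tones = np.sin(2 * np.pi * np.outer(freqs, t))   # one sinusoid per row
    # Left-to-right scan: each column becomes a brightness-weighted chord.
    sound = np.concatenate([image[:, col] @ tones for col in range(n_cols)])
    return sound / np.max(np.abs(sound))             # normalize to [-1, 1]

# Demo: a bright diagonal line produces a pitch sweep over the scan.
img = np.eye(32)
waveform = image_to_soundscape(img)
print(waveform.shape)  # (22048,): about one second of audio at 22.05 kHz
```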
Apart from the LOC, visuo-haptic responses have also been observed in several classically multisensory parietal regions, including multiple loci along the IPS (Grefkes et al., 2002; Saito et al., 2003; Stilla & Sathian, 2008). Given that many of these IPS regions are involved in discrimination of both visual and haptic location of object features, their responsiveness during shape perception may be concerned with reconstruction of global shape representations from object parts (Sathian et al., 2011). The postcentral sulcus (PCS), which corresponds to Brodmann's area 2 of primary somatosensory cortex (S1) (Grefkes et al., 2001), also shows visuo-haptic shape-selectivity (Stilla & Sathian, 2008).
It is critical to determine whether haptic or tactile recruitment of visual cortical areas is functionally relevant, i.e. whether these regions are actually necessary for task performance. Although research along these lines remains sparse, there is some evidence in support of this idea. First, case studies indicate that the LOC is necessary for both haptic and visual shape perception: A lesion to the left occipito-temporal cortex, which likely included the LOC, resulted in both tactile and visual agnosia even though somatosensory cortex and basic somatosensory function were intact (Feinberg et al., 1986). Another patient with bilateral LOC lesions was unable to learn new objects either visually or haptically (James et al., 2006; James et al., 2007). Second, transcranial magnetic stimulation (TMS) can be used to temporarily deactivate specific, functionally defined cortical areas, i.e. to create 'virtual lesions' (Sack, 2006). TMS over the parieto-occipital region active during tactile grating orientation discrimination (Sathian et al., 1997) interfered with performance of this task (Zangaladze et al., 1999), indicating that this area is functionally, rather than epiphenomenally, involved in the task. More work is necessary to fully test the dependence of haptic perception on classic visual cortical areas.
Why are visual cortical regions active during touch?
Activation of the LOC and other visual cortical areas during touch might arise from direct, "bottom-up" or "feedforward" somatosensory input. Human electrophysiological studies are consistent with this possibility: activity in somatosensory cortex propagates to the LOC as early as 150 ms after stimulus onset (Lucan et al., 2010; Adhikari et al., 2014) in a beta-band oscillatory network (Adhikari et al., 2014). This might reflect cortical pathways between primary somatosensory and visual cortices demonstrated in the macaque (Négyessy et al., 2006). A case study is also illuminating: a patient with bilateral ventral occipito-temporal lesions, in whom the dorsal part of the LOC that likely includes its multisensory sub-region was spared, showed visual agnosia but intact haptic object recognition associated with activation of the spared dorsal LOC, suggesting that somatosensory input can directly activate this region (Allen & Humphreys, 2009). Thus, neocortical regions classically considered to engage in unisensory visual processing may in actuality integrate multisensory inputs.
Studies of congenitally or early blind individuals are consistent with the notion that many classical visual cortical areas are modality-independent but task-specific. Thus, non-visual stimuli in the early blind, and visual stimuli in the sighted, activate the same extrastriate cortical regions on comparable tasks (reviewed by Sathian, 2014). For instance, an area known as the visual word-form area in the left fusiform gyrus, which responds selectively to visually presented words in the sighted, is also sensitive to Braille words (Reich et al., 2011). Congenitally blind people also engage visual cortical regions that are not active during corresponding tasks in the normally sighted population, most interestingly, visual cortical areas located at the site of primary (and adjacent non-primary) visual cortex of the sighted, i.e. medial occipital cortex (reviewed in Pascual-Leone et al., 2005; Sathian, 2005; Sathian & Lacey, 2007; Pavani & Röder, 2012; Sathian, 2014). As pointed out earlier (Sathian, 2014), it cannot be assumed that primary visual cortex occupies exactly the same anatomical extent in those who are born without vision as in normally sighted people. Across a host of studies, a wide range of non-visual tasks has been reported to recruit medial occipital cortex in the congenitally blind but not the sighted – these tasks include somatosensory, auditory and language tasks (Pavani & Röder, 2012; Sathian, 2014). However, each study has typically focused on just one or a few tasks, so how these different functional domains are organized in visual cortex of the congenitally blind remains essentially unknown. Further, the idea that blind people are superior to their sighted counterparts on non-visual tasks is not a universal truth; rather, the evidence pooled over many studies suggests that their superiority reflects the specifics of their experience in the absence of vision (Sathian & Lacey, 2007; Pavani & Röder, 2012; Sathian, 2014).
An alternative to feedforward activation of visual cortex by tactile inputs is that haptic perception might evoke visual imagery of the felt object resulting in "top-down" activation of the LOC by "feedback" connections from higher-order areas (Sathian & Zangaladze, 2001). In keeping with this hypothesis, many studies demonstrate LOC activity during visual imagery: During auditorily-cued mental imagery of the shape of familiar objects, both blind and sighted participants show left LOC activation, where shape information would arise mainly from haptic experience for the blind and mainly from visual experience for the sighted (De Volder et al., 2001). The left LOC is also active when geometric and material object properties are retrieved from memory (Newman et al., 2005) and haptic shape-selective activation magnitudes in the right LOC were highly correlated with ratings of visual imagery vividness (Zhang et al., 2004). Even early visual cortical areas have been reported to respond to changes in haptic shape (Snow et al., 2014); however, as with other studies of crossmodal recruitment of visual cortex, it is not possible to exclude visual imagery as an explanation. A counter-argument is that visual imagery cannot explain haptically-evoked LOC activity. In support of this, LOC activity was found to be substantially lower during visual imagery compared to haptic shape perception (Amedi et al., 2001); however, this study did not verify that participants actually engaged in imagery throughout the scan. Another argument against the role of visual imagery is based on the observations that early- as well as late-blind individuals show haptic shape-related LOC activation (Pietrini et al., 2004). While the early blind clearly do not experience visual imagery, these findings do not necessarily rule out a visual imagery explanation in the sighted, given the extensive consequences of visual deprivation on neocortical organization (see above).
Recently, multivariate pattern analyses of voxel-wise activity have been used to demonstrate that activity patterns in primary sensory cortices can differentiate stimuli presented in modalities other than the canonical one. Thus, S1 activity could distinguish between objects in video clips that were being haptically explored, although there was only visual and no somatosensory input (Meyer et al., 2011). Along similar lines, stimulus modality (tactile, pain, auditory or visual) could be decoded in primary sensory cortices (S1, primary visual cortex or primary auditory cortex) regardless of their canonical associations (Liang et al., 2013). These studies provide further evidence of widely distributed multisensory processing in the neocortex; however, it remains uncertain whether the observed non-canonical activity arises from feedforward or feedback projections.
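The logic of these multivariate pattern analyses is that a classifier is trained on voxel-wise activity patterns and tested on held-out trials; above-chance decoding of stimulus modality is taken as evidence that the region's activity carries modality information. The sketch below illustrates this with synthetic data and a linear support vector machine; the cited studies used real fMRI data and their own preprocessing and classification pipelines.

```python
# Minimal sketch of multivariate pattern decoding of stimulus modality
# from voxel activity patterns (synthetic data; illustrative parameters).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials_per_modality, n_voxels = 40, 200
modalities = ["tactile", "auditory", "visual"]

# Simulate one weak, distributed pattern per modality plus trial noise:
# each modality biases the voxels of a region of interest differently.
X, y = [], []
for label, modality in enumerate(modalities):
    pattern = rng.normal(0, 0.5, n_voxels)   # modality-specific bias
    trials = pattern + rng.normal(0, 1.0, (n_trials_per_modality, n_voxels))
    X.append(trials)
    y.extend([label] * n_trials_per_modality)
X, y = np.vstack(X), np.array(y)

# Cross-validated classification; accuracy above chance implies the
# region's activity patterns are informative about stimulus modality.
clf = make_pipeline(StandardScaler(), LinearSVC())
scores = cross_val_score(clf, X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = {1/len(modalities):.2f})")
```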
Integration of visual and tactile inputs
Vision and touch share many similarities in the way stimuli are processed. Ahissar and Arieli (2001) proposed that visual and tactile systems use analogous spatiotemporal coding schemes. Consistent with this idea, single neurons in macaques are similarly tuned for curvature direction at intermediate levels of the processing hierarchy in both visual and somatosensory cortex (areas V4 and S2) (Yau et al., 2009). Such similarities in coding lend themselves to multisensory integration. Evidence that visual and tactile inputs are indeed integrated comes from studies showing that the orientation of a tactile grating can disambiguate binocular rivalry (Lunghi et al., 2010), and that tactile motion can bias visually perceived bistable alternations of motion direction (James & Blake, 2004). Ernst and Banks (2002) demonstrated that humans integrate visual and haptic inputs in a manner that turns out to be statistically optimal, with the dominant modality being the one associated with lower variance in its estimates (a scheme formalized after this paragraph). Thus, vision tends to be dominant when assessing object shape but not surface texture (Klatzky et al., 1987). Statistically optimal integration is probably learnt during development, since it is not apparent in the first decade of life (Gori et al., 2008). A dramatic illustration of the importance of multisensory integration in body ownership is offered by the rubber-hand illusion (RHI), in which a viewed rubber hand is aligned with one's own hand, which is screened from view; when both the rubber hand and the real hand are tapped synchronously, the rubber hand is rapidly incorporated into the body image and perceived as one's own (Botvinick & Cohen, 1998). The RHI can be induced in sighted people in the absence of vision, if an experimenter taps the subject on the real hand and synchronously moves the subject's other hand to tap the rubber hand, suggesting that it is multisensory congruence of body-related information (in this case between tactile and proprioceptive inputs) that is critical, rather than visuo-tactile congruence specifically (Ehrsson et al., 2005). However, this version of the RHI is absent in early blind people, pointing to potential differences in multisensory integrative processes as a consequence of early visual deprivation (Petkova et al., 2012). In a variant of the RHI, one can be induced to perceive a third arm (Guterstam et al., 2011), and visuotactile conflicts can disrupt the feeling of ownership of one's own limb (Gentile et al., 2013). The illusion has even been extended to the entire body using head-mounted virtual reality displays to create an out-of-body experience (Ehrsson, 2007; Lenggenhager et al., 2007).
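The statistically optimal integration demonstrated by Ernst and Banks (2002) corresponds to maximum-likelihood cue combination: assuming independent Gaussian noise on the visual and haptic estimates \( \hat{S}_V \) and \( \hat{S}_H \), the integrated estimate is a reliability-weighted average (a standard formulation; the notation here is ours):

\[ \hat{S}_{VH} = w_V\,\hat{S}_V + w_H\,\hat{S}_H, \qquad w_V = \frac{1/\sigma_V^2}{1/\sigma_V^2 + 1/\sigma_H^2}, \qquad w_H = 1 - w_V, \]

with combined variance \( \sigma_{VH}^2 = \sigma_V^2\sigma_H^2/(\sigma_V^2 + \sigma_H^2) \le \min(\sigma_V^2, \sigma_H^2) \). The modality with the lower variance receives the higher weight, capturing the observation that vision tends to dominate for shape while touch can dominate for texture, and the combined estimate is always at least as reliable as the better single modality.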
There has been some study of the neural processes underlying visuo-tactile integration, although a comprehensive account is not yet feasible. Most neurons in the ventral intraparietal area (VIP) of macaques exhibit modulation of their responses by bisensory stimulation, even when their overt responses to unisensory stimuli are limited to either vision or touch (Avillac et al., 2007). Similarly, bisensory stimuli in rats augment the somatosensory evoked response and reset the phase of induced network oscillations (Sieben et al., 2013). In the putative human homolog of VIP, topographic maps of tactile and proximal visual stimuli are aligned (Sereno & Huang, 2006), although at the single-neuron level in macaques, the reference frame for tactile stimulation is head-centered whereas that for visual stimuli varies between head-centered, eye-centered and intermediate (Avillac et al., 2005). In one study, visuo-haptic responses were enhanced in the LOC and IPS by stimulus salience (Kim & James, 2010), although a subsequent study by the same group showed that when spatiotemporal congruence was maximized across modalities, the inverse effectiveness pattern characteristic of classic neurophysiological studies of multisensory integration (quantified below) emerged in the LOC and IPS as well as parietal opercular cortex, all on the left (Kim et al., 2012). Visuo-haptic responses in perirhinal cortex are also sensitive to the congruence of stimuli across modalities (Holdstock et al., 2009). The RHI appears to have a different neural substrate, being associated with activity in ventral premotor cortex (Ehrsson et al., 2004), although IPS and cerebellar activity is also reported (Ehrsson et al., 2004, 2005). Repetitive TMS (rTMS) over the left anterior IPS impairs visual-haptic, but not haptic-visual, shape matching using the right hand (Buelte et al., 2008), while rTMS over occipitotemporal cortex affects the Müller-Lyer illusion regardless of whether it is induced visually, haptically or cross-modally (Mancini et al., 2011); these studies imply that the multisensory convergence reported in the preceding studies is functionally relevant. Further study of visuo-tactile integration and its neural substrates is warranted.
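The inverse effectiveness pattern mentioned above is conventionally quantified by comparing the multisensory response with the larger of the unisensory responses. One common enhancement index from the neurophysiological literature (a general convention, not a formula taken from the studies cited here) is

\[ ME = \frac{R_{VH} - \max(R_V, R_H)}{\max(R_V, R_H)} \times 100\%, \]

where \( R \) denotes response magnitude for visual (\( V \)), haptic (\( H \)) and bisensory (\( VH \)) stimulation; inverse effectiveness holds when \( ME \) increases as stimulus effectiveness decreases. BOLD studies sometimes instead apply an additive criterion, comparing \( R_{VH} \) against \( R_V + R_H \).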
Individual differences
Two kinds of visual imagery have been described: "object imagery" (involving pictorial images that are vivid and detailed, dealing with the literal appearance of objects in terms of shape, color, brightness, etc.) and "spatial imagery" (involving schematic images more concerned with the spatial relations of objects, their component parts, and spatial transformations) (Kozhevnikov et al., 2002, 2005; Blajenkova et al., 2006). An experimentally important difference is that object imagery includes surface property information while spatial imagery does not. To establish whether object and spatial imagery differences occur in touch as well as vision, we required participants to discriminate shape across changes in texture, and texture across changes in shape (Figure 1), in both visual and haptic within-modal conditions. We found that spatial imagers could discriminate shape despite changes in texture but not vice versa, presumably because their images tend not to encode surface properties. By contrast, object imagers could discriminate texture despite changes in shape, but not the reverse (Lacey et al., 2011), indicating that texture, a surface property, is integrated into their shape representations. In addition, greater preference for object imagery was associated with greater impairment by texture changes (Lacey et al., 2011). Thus, the object-spatial imagery continuum characterizes haptics as well as vision, and individual differences in imagery preference along this continuum affect the extent to which surface properties are integrated into object representations (Lacey et al., 2011).
Cross-modal visuo-haptic object recognition, while fairly accurate, comes at a cost compared to within-modal recognition (Bushnell & Baxt, 1999; Casey & Newell, 2007; Lacey et al., 2007). Importantly, while within-modal recognition of a set of previously unfamiliar and highly similar objects is view-dependent in both vision and touch, cross-modal recognition of these objects turns out to be view-independent (Lacey et al., 2007). Moreover, training in either the visual or the haptic modality to induce view-independence in the trained modality automatically confers view-independence in the untrained modality, and cross-modal training yields view-independence in each modality, implying that unisensory, view-dependent representations converge onto a bisensory, view-independent representation, possibly in the LOC (Lacey et al., 2009b). Further, spatial imagery preference correlates with the accuracy of cross-modal object recognition (Lacey et al., 2007). It appears, then, that the multisensory representation has some features that are stable across individuals, like view-independence, and some that vary across individuals, such as the integration of surface property information, which reflects individual differences in imagery preference.
A model of visuo-haptic multisensory object representation
We now describe a model of visuo-haptic multisensory object representation (Lacey et al., 2009a; Lacey et al., 2014) and review the evidence for this model from studies designed to explicitly test the visual imagery hypothesis discussed above (Lacey et al., 2010; Lacey et al., 2014). In this model, object representations in the LOC can be flexibly accessed either bottom-up or top-down, independently of the input modality, and object familiarity plays a modulatory role. Unfamiliar objects have no stored representation, so during haptic recognition an unfamiliar object has to be explored in its entirety in order to compute global shape and to relate component parts to one another. We proposed (Lacey et al., 2009a) that this occurs in a bottom-up pathway from somatosensory cortex to the LOC, with involvement of the IPS in computing part relationships and thence global shape, facilitated by spatial imagery processes. For familiar objects, global shape can be inferred more easily, perhaps from distinctive features or one diagnostic part, and we suggested (Lacey et al., 2009a) that haptic exploration rapidly acquires enough information to trigger a stored visual image and generate a hypothesis about object identity, involving primarily object imagery processes via a top-down pathway from prefrontal cortex to the LOC, as has been proposed for vision (e.g., Bar, 2007).
We tested this model by directly comparing activations and effective connectivity during haptic shape perception and both visual object imagery and spatial imagery (Lacey et al., 2010; Lacey et al., 2014), reasoning that reliance on similar processes across tasks would lead to correlations of activation magnitude across participants, as well as similar patterns of effective connectivity across tasks. In contrast to previous studies, we ensured that participants engaged in the desired kind of imagery throughout each scan by using appropriate tasks and recording responses. Participants also performed haptic shape discrimination using either familiar or unfamiliar objects. We found that object familiarity modulated inter-task correlations: Of eleven regions common to visual object imagery and haptic perception of familiar shape, six (including bilateral LOC) showed inter-task correlations of activation magnitude. By contrast, object imagery and haptic perception of unfamiliar shape shared only four regions, only one of which (an IPS region) showed an inter-task correlation (Lacey et al., 2010). Relatively few regions showed inter-task correlations between spatial imagery and haptic perception of either familiar or unfamiliar shape, with parietal foci featuring in both sets of correlations (Lacey et al., 2014). This suggests that spatial imagery is relevant to haptic shape perception regardless of object familiarity (contrary to the initial model), whereas object imagery is more strongly associated with haptic perception of familiar, than unfamiliar, shape (in agreement with the initial model). However, it remains possible that the parietal foci showing inter-task correlations between spatial imagery and haptic shape perception reflect spatial processing more generally, rather than spatial imagery per se (Jäncke et al., 2001; Lacey et al., 2014), or generic imagery processes, e.g., image generation, common to both object and spatial imagery (Lacey et al., 2014; Mechelli et al., 2004).
We also conducted effective connectivity analyses, based on the inferred neuronal activity derived from deconvolving the hemodynamic response out of the observed BOLD signals (Lacey et al., 2014). These analyses supported the broad architecture of the model, showing that the spatial imagery network shared much more commonality with the network associated with unfamiliar, compared to familiar, shape perception, while the object imagery network shared much more commonality with familiar, than unfamiliar, shape perception (Lacey et al., 2014). More specifically, the model proposes that the component parts of an unfamiliar object are explored in their entirety and assembled into a representation of global shape via spatial imagery processes (Lacey et al., 2009a). Consistent with this, in the parts of the network common to spatial imagery and unfamiliar haptic shape perception, the LOC is driven by parietal foci, with complex cross-talk between posterior parietal and somatosensory foci. These findings fit with the notion of bottom-up pathways from somatosensory cortex and a role for cortex in and around the IPS in spatial imagery (Lacey et al., 2014). The IPS and somatosensory interactions were absent from the sparse network shared by spatial imagery and haptic perception of familiar shape. By contrast, the relationship between object imagery and familiar shape perception is characterized by top-down pathways from prefrontal areas reflecting the involvement of object imagery (Lacey et al., 2009a). Supporting this, the LOC was driven bilaterally by the left inferior frontal gyrus in the network shared by object imagery and haptic perception of familiar shape, while these pathways were absent from the sparse network common to object imagery and unfamiliar haptic shape perception (Lacey et al., 2014).
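The deconvolution step underlying these connectivity analyses can be illustrated with a simple frequency-domain (Wiener) deconvolution of the BOLD signal against a canonical hemodynamic response function. This is only a schematic stand-in, under the assumption of a known, fixed HRF, with function names and parameters of our own choosing; the analyses in Lacey et al. (2014) used their own deconvolution and multivariate connectivity methods.

```python
# Schematic Wiener deconvolution of a BOLD time series against a
# canonical HRF to estimate latent neuronal activity (a simplified
# stand-in for the deconvolution used in the cited work).
import numpy as np
from scipy.stats import gamma

def canonical_hrf(tr=1.0, duration=30.0):
    """Standard double-gamma HRF sampled at the repetition time."""
    t = np.arange(0, duration, tr)
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

def wiener_deconvolve(bold, hrf, noise_level=0.1):
    """Estimate neuronal drive by regularized spectral division."""
    n = len(bold)
    H = np.fft.rfft(hrf, n)
    B = np.fft.rfft(bold, n)
    # Wiener filter: H* / (|H|^2 + noise) tames near-zero frequencies.
    return np.fft.irfft(B * np.conj(H) / (np.abs(H) ** 2 + noise_level), n)

# Demo: convolve a known spike train with the HRF, add noise, recover it.
rng = np.random.default_rng(1)
tr, n_scans = 1.0, 200
spikes = (rng.random(n_scans) < 0.05).astype(float)   # latent neuronal events
hrf = canonical_hrf(tr)
bold = np.convolve(spikes, hrf)[:n_scans] + rng.normal(0, 0.05, n_scans)
est = wiener_deconvolve(bold, hrf)
print("correlation with true neuronal series:",
      np.corrcoef(spikes, est)[0, 1].round(2))
```

The deconvolved series from each region, rather than the raw BOLD signals, then enter the directed (e.g., Granger-type) connectivity analysis, so that inferred influences reflect neuronal rather than hemodynamic timing.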
Figure 2 shows the current version of our model for haptic shape perception in which the LOC is driven bottom-up from primary somatosensory cortex as well as top-down via object imagery processes from prefrontal cortex, with additional input from the IPS involving spatial imagery processes (Lacey et al., 2014). We propose that the bottom-up route is more important for haptic perception of unfamiliar than familiar objects, whereas the converse is true of the top-down route, which is more important for haptic perception of familiar than unfamiliar objects. It will be interesting to explore the impact of individual preferences for object vs. spatial imagery on these processes and paths.
Acknowledgments
Support to KS from the National Eye Institute at the NIH, the National Science Foundation, and the Veterans Administration is gratefully acknowledged.
References
- Adhikari, B M; Sathian, K; Epstein, C M; Lamichhane, B and Dhamala, M (2014). Oscillatory activity in neocortical networks during tactile discrimination near the limit of spatial acuity. NeuroImage 91: 300-310. doi:10.1016/j.neuroimage.2014.01.007.
- Ahissar, E and Arieli, A (2001). Figuring space by time. Neuron 32(2): 185-201. doi:10.1016/s0896-6273(01)00466-4.
- Allen, H A and Humphreys, G W (2009). Direct tactile stimulation of dorsal occipito-temporal cortex in a visual agnosic. Current Biology 19(12): 1044-1049. doi:10.1016/j.cub.2009.04.057.
- Amedi, A; Jacobson, G; Hendler, T; Malach, R and Zohary, E (2002). Convergence of visual and tactile shape processing in the human lateral occipital complex. Cerebral Cortex 12(11): 1202-1212. doi:10.1093/cercor/12.11.1202.
- Amedi, A; Malach, R; Hendler, T; Peled, S and Zohary, E (2001). Visuo-haptic object-related activation in the ventral visual pathway. Nature Neuroscience 4(3): 324-330. doi:10.1038/85201.
- Amedi, A et al. (2007). Shape conveyed by visual-to-auditory sensory substitution activates the lateral occipital complex. Nature Neuroscience 10(6): 687-689. doi:10.1038/nn1912.
- Amedi, A; von Kriegstein, K; van Atteveldt, N M; Beauchamp, M S and Naumer, M J (2005). Functional imaging of human crossmodal identification and object recognition. Experimental Brain Research 166(3-4): 559-571. doi:10.1007/s00221-005-2396-5.
- Avillac, M; Ben Hamed, S and Duhamel, J-R (2007). Multisensory integration in the ventral intraparietal area of the macaque monkey. Journal of Neuroscience 27(8): 1922-1932. doi:10.1523/jneurosci.2646-06.2007.
- Avillac, M; Denève, S; Olivier, E; Pouget, A and Duhamel, J-R (2005). Reference frames for representing visual and tactile locations in parietal cortex. Nature Neuroscience 8(7): 941-949. doi:10.1038/nn1480.
- Bar, M (2007). The proactive brain: using analogies and associations to generate predictions. Trends in Cognitive Sciences 11(7): 280-289. doi:10.1016/j.tics.2007.05.005.
- Blajenkova, O; Kozhevnikov, M and Motes, M A (2006). Object-spatial imagery: A new self-report imagery questionnaire. Applied Cognitive Psychology 20(2): 239-263. doi:10.1002/acp.1182.
- Blake, R; Sobel, K V and James, T W (2004). Neural synergy between kinetic vision and touch. Psychological Science 15(6): 397-402. doi:10.1111/j.0956-7976.2004.00691.x.
- Botvinick, M and Cohen, J (1998). Rubber hands 'feel' touch that eyes see. Nature 391: 756. doi:10.1038/35784.
- Buelte, D; Meister, I G and Staedtgen, M (2008). The role of the anterior intraparietal sulcus in crossmodal processing of object features in humans: An rTMS study. Brain Research 1217: 110-118. doi:10.1016/j.brainres.2008.03.075.
- Bushnell, E W and Baxt, C (1999). Children's haptic and cross-modal recognition with familiar and unfamiliar objects. Journal of Experimental Psychology: Human Perception & Performance 25(6): 1867-1881. doi:10.1037//0096-1523.25.6.1867.
- Casey, S J and Newell, F N (2007). Are representations of unfamiliar faces independent of encoding modality? Neuropsychologia 45(3): 506-513. doi:10.1016/j.neuropsychologia.2006.02.011.
- De Volder, A G et al. (2001). Auditory triggered mental imagery of shape involves visual association areas in early blind humans. NeuroImage 14(1): 129-139. doi:10.1006/nimg.2001.0782.
- Eck, J; Kaas, A L and Goebel, R (2013). Crossmodal interactions of haptic and visual texture information in early sensory cortex. NeuroImage 75: 123-135. doi:10.1016/j.neuroimage.2013.02.075.
- Ehrsson, H H (2007). The experimental induction of out-of-body experiences. Science 317(5841): 1048. doi:10.1126/science.1142175.
- Ehrsson, H H; Holmes, N P and Passingham, R E (2005). Touching a rubber hand: Feeling of body ownership is associated with activity in multisensory brain areas. Journal of Neuroscience 25(45): 10564-10573. doi:10.1523/jneurosci.0800-05.2005.
- Ehrsson, H H; Spence, C and Passingham, R E (2004). That's my hand! Activity in premotor cortex reflects feeling of ownership of a limb. Science 305(5685): 875-877. doi:10.1126/science.1097011.
- Ernst, M O and Banks, M S (2002). Humans integrate visual and haptic information in a statistically optimal fashion. Nature 415: 429-433. doi:10.1038/415429a.
- Feinberg, T E; Rothi, L J and Heilman, K M (1986). Multimodal agnosia after unilateral left hemisphere lesion. Neurology 36(6): 864-867. doi:10.1212/wnl.36.6.864.
- Gentile, G; Guterstam, A; Brozzoli, C and Ehrsson, H H (2013). Disintegration of multisensory signals from the real hand reduces default limb self-attribution: An fMRI study. Journal of Neuroscience 33(33): 13350-13366. doi:10.1523/jneurosci.1363-13.2013.
- Gori, M; Del Viva, M; Sandini, G and Burr, D C (2008). Young children do not integrate visual and haptic form information. Current Biology 18(9): 694-698. doi:10.1016/j.cub.2008.04.036.
- Grefkes, C; Geyer, S; Schormann, T; Roland, P and Zilles, K (2001). Human somatosensory area 2: Observer-independent cytoarchitectonic mapping, interindividual variability, and population map. NeuroImage 14(3): 617-631. doi:10.1006/nimg.2001.0858.
- Grefkes, C; Weiss, P H; Zilles, K and Fink, G R (2002). Crossmodal processing of object features in human anterior intraparietal cortex: An fMRI study implies equivalencies between humans and monkeys. Neuron 35(1): 173-184. doi:10.1016/s0896-6273(02)00741-9.
- Guterstam, A; Petkova, V I and Ehrsson, H H (2011). The illusion of owning a third arm. PLoS ONE 6: e17208. doi:10.1371/journal.pone.0017208.
- Hagen, M C et al. (2002). Tactile motion activates the human middle temporal/V5 (MT/V5) complex. European Journal of Neuroscience 16(5): 957-964. doi:10.1046/j.1460-9568.2002.02139.x.
- Holdstock, J S; Hocking, J; Notley, P; Devlin, J T and Price, C J (2009). Integrating visual and tactile information in the perirhinal cortex. Cerebral Cortex 19(12): 2993-3000. doi:10.1093/cercor/bhp073.
- James, T W et al. (2002). Haptic study of three-dimensional objects activates extrastriate visual areas. Neuropsychologia 40(10): 1706-1714. doi:10.1016/S0028-3932(02)00017-9.
- James, T W and Blake, R (2004). Perceiving object motion using vision and touch. Cognitive, Affective, & Behavioral Neuroscience 4(2): 201-207. doi:10.3758/cabn.4.2.201.
- James, T W; James, K H; Humphrey, G K and Goodale, M A (2006). Do visual and tactile object representations share the same neural substrate? In: M A Heller and S Ballesteros (Eds.), Touch and Blindness, Psychology and Neuroscience (pp. 139-155). Mahwah, New Jersey: Lawrence Erlbaum Associates. ISBN: 9780805847253.
- James, T W; Kim, S and Fisher, J S (2007). The neural basis of haptic object processing. Canadian Journal of Experimental Psychology 61(3): 219-229. doi:10.1037/cjep2007023.
- James, T W; Stevenson, R A; Kim, S; VanDerKlok, R M and James, K H (2011). Shape from sound: Evidence for a shape operator in the lateral occipital cortex. Neuropsychologia 49(7): 1807-1815. doi:10.1016/j.neuropsychologia.2011.03.004.
- Jäncke, L; Kleinschmidt, A; Mirzazade, S; Shah, N J and Freund, H-J (2001). The role of the inferior parietal cortex in linking the tactile perception and manual construction of object shapes. Cerebral Cortex 11(2): 114-121. doi:10.1093/cercor/11.2.114.
- Kim, S and James, T W (2010). Enhanced effectiveness in visuo-haptic object-selective brain regions with increasing stimulus salience. Human Brain Mapping 31: 678-693. doi:10.1002/hbm.20897.
- Kim, S; Stevenson, R A and James, T W (2012). Visuo-haptic neuronal convergence demonstrated by an inversely effective pattern of BOLD activation. Journal of Cognitive Neuroscience 24(4): 830-842. doi:10.1162/jocn_a_00176.
- Klatzky, R L; Lederman, S J and Reed, C (1987). There's more to touch than meets the eye: The salience of object attributes for haptics with and without vision. Journal of Experimental Psychology: General 116(4): 356-369. doi:10.1037/0096-3445.116.4.356.
- Kozhevnikov, M; Hegarty, M and Mayer, R E (2002). Revising the visualiser-verbaliser dimension: Evidence for two types of visualisers. Cognition & Instruction 20(1): 47-77. doi:10.1207/S1532690XCI2001_3.
- Kozhevnikov, M; Kosslyn, S M and Shephard, J (2005). Spatial versus object visualisers: A new characterisation of cognitive style. Memory & Cognition 33: 710-726. doi:10.3758/bf03195337.
- Lacey, S; Flueckiger, P; Stilla, R; Lava, M and Sathian, K (2010). Object familiarity modulates the relationship between visual object imagery and haptic shape perception. NeuroImage 49(3): 1977-1990. doi:10.1016/j.neuroimage.2009.10.081.
- Lacey, S; Lin, J B and Sathian, K (2011). Object and spatial imagery dimensions in visuo-haptic representations. Experimental Brain Research 213(2-3): 267-273. doi:10.1007/s00221-011-2623-1.
- Lacey, S; Pappas, M; Kreps, A; Lee, K and Sathian, K (2009b). Perceptual learning of view-independence in visuo-haptic object representations. Experimental Brain Research 198(2-3): 329-337. doi:10.1007/s00221-009-1856-8.
- Lacey, S; Peters, A and Sathian, K (2007). Cross-modal object representation is viewpoint-independent. PLoS ONE 2(9): e890. doi:10.1371/journal.pone.0000890.
- Lacey, S and Sathian, K (2011). Multisensory object representation: Insights from studies of vision and touch. Progress in Brain Research 191: 165-176. doi:10.1016/b978-0-444-53752-2.00006-0.
- Lacey, S and Sathian, K (2014). Visuo-haptic multisensory object recognition, categorization and representation. Frontiers in Psychology 5: 730. doi:10.3389/fpsyg.2014.00730.
- Lacey, S; Stilla, R; Sreenivasan, K; Deshpande, G and Sathian, K (2014). Spatial imagery in haptic shape perception. Neuropsychologia 60: 144-158. doi:10.1016/j.neuropsychologia.2014.05.008.
- Lacey, S; Tal, N; Amedi, A and Sathian, K (2009a). A putative model of multisensory object representation. Brain Topography 21(3-4): 269-274. doi:10.1007/s10548-009-0087-4.
- Lenggenhager, B; Tadi, T; Metzinger, T and Blanke, O (2007). Video ergo sum: manipulating bodily self-consciousness. Science 317(5841): 1096-1099. doi:10.1126/science.1143439.
- Liang, M; Mouraux, A; Hu, L and Iannetti, G D (2013). Primary sensory cortices contain distinguishable spatial patterns of activity for each sense. Nature Communications 4: 1979. doi:10.1038/ncomms2979.
- Lucan, J N; Foxe, J J; Gomez-Ramirez, M; Sathian, K and Molholm, S (2010). Tactile shape discrimination recruits human lateral occipital complex during early perceptual processing. Human Brain Mapping 31: 1813-1821. doi:10.1002/hbm.20983.
- Lunghi, C; Binda, P and Morrone, M (2010). Touch disambiguates rivalrous perception at early stages of visual analysis. Current Biology 20(4): R143-R144. doi:10.1016/j.cub.2009.12.015.
- Malach, R et al. (1995). Object-related activity revealed by functional magnetic resonance imaging in human occipital cortex. Proceedings of the National Academy of Sciences of the United States of America 92(18): 8135-8139. doi:10.1073/pnas.92.18.8135.
- Mancini, F; Bolognini, N; Bricolo, E and Vallar, G (2011). Cross-modal processing in the occipito-temporal cortex: A TMS study of the Müller-Lyer illusion. Journal of Cognitive Neuroscience 23(8): 1987-1997. doi:10.1162/jocn.2010.21561.
- Mechelli, A; Price, C J; Friston, K J and Ishai, A (2004). Where bottom-up meets top-down: Neuronal interactions during perception and imagery. Cerebral Cortex 14(11): 1256-1265. doi:10.1093/cercor/bhh087.
- Meyer, K; Kaplan, J T; Essex, R; Damasio, H and Damasio, A (2011). Seeing touch is correlated with content-specific activity in primary somatosensory cortex. Cerebral Cortex 21(9): 2113-2121. doi:10.1093/cercor/bhq289.
- Négyessy, L; Nepusz, T; Kocsis, L and Bazsó, F (2006). Prediction of the main cortical areas and connections involved in the tactile function of the visual cortex by network analysis. European Journal of Neuroscience 23(7): 1919-1930. doi:10.1111/j.1460-9568.2006.04678.x.
- Newman, S D; Klatzky, R L; Lederman, S J and Just, M A (2005). Imagining material versus geometric properties of objects: An fMRI study. Cognitive Brain Research 23(2-3): 235-246. doi:10.1016/j.cogbrainres.2004.10.020.
- Pascual-Leone, A; Amedi, A; Fregni, F and Merabet, L B (2005). The plastic human brain cortex. Annual Review of Neuroscience 28(1): 377-401. doi:10.1146/annurev.neuro.27.070203.144216.
- Pascual-Leone, A and Hamilton, R (2001). The metamodal organization of the brain. Progress in Brain Research 134: 427-445. doi:10.1016/s0079-6123(01)34028-1.
- Pavani, F and Röder, B (2012). Crossmodal plasticity as a consequence of sensory loss: Insights from blindness and deafness. In: B E Stein (Ed.), The New Handbook of Multisensory Processes (pp. 737-759). Cambridge, MA: MIT Press.
- Petkova, V I; Zetterberg, H and Ehrsson, H H (2012). Rubber hands feel touch, but not in blind individuals. PLoS ONE 7: e35912. doi:10.1371/journal.pone.0035912.
- Pietrini, P et al. (2004). Beyond sensory images: Object-based representation in the human ventral pathway. Proceedings of the National Academy of Sciences of the United States of America 101(15): 5658-5663. doi:10.1073/pnas.0400707101.
- Pitzalis, S et al. (2006). Wide-field retinotopy defines human cortical visual area V6. Journal of Neuroscience 26(30): 7962-7973. doi:10.1523/jneurosci.0178-06.2006.
- Podrebarac, S K; Goodale, M A and Snow, J C (2014). Are visual texture-selective areas recruited during haptic texture discrimination? NeuroImage 94: 129-137. doi:10.1016/j.neuroimage.2014.03.013.
- Poirier, C et al. (2005). Specific activation of the V5 brain area by auditory motion processing: An fMRI study. Cognitive Brain Research 25(3): 650-658. doi:10.1016/j.cogbrainres.2005.08.015.
- Prather, S C; Votaw, J R and Sathian, K (2004). Task-specific recruitment of dorsal and ventral visual areas during tactile perception. Neuropsychologia 42(8): 1079-1087. doi:10.1016/j.neuropsychologia.2003.12.013.
- Reich, L; Szwed, M; Cohen, L and Amedi, A (2011). A ventral visual stream reading center independent of visual experience. Current Biology 21(5): 363-368. doi:10.1016/j.cub.2011.01.040.
- Sack, A T (2006). Transcranial magnetic stimulation, causal structure-function mapping and networks of functional relevance. Current Opinion in Neurobiology 16(5): 593-599. doi:10.1016/j.conb.2006.06.016.
- Saito, D N; Okada, T; Morita, Y; Yonekura, Y and Sadato, N (2003). Tactile-visual cross-modal shape matching: A functional MRI study. Cognitive Brain Research 17(1): 14-25. doi:10.1016/s0926-6410(03)00076-4.
- Sathian, K (2005). Visual cortical activity during tactile perception in the sighted and visually deprived. Developmental Psychobiology 46(3): 279-286. doi:10.1002/dev.20056.
- Sathian, K (2014). Cross-modal plasticity in the visual system. In: M E Selzer, S Clarke, L Cohen, G Kwakkel and R Miller (Eds.), Textbook of Neural Repair and Rehabilitation, 2nd Ed. (pp. 140-153). Cambridge, UK: Cambridge University Press. ISBN: 9781107010475.
- Sathian, K and Lacey, S (2007). Journeying beyond classical somatosensory cortex. Canadian Journal of Experimental Psychology 61(3): 254-264. doi:10.1037/cjep2007026.
- Sathian, K et al. (2011). Dual pathways for haptic and visual perception of spatial and texture information. NeuroImage 57: 462-475.
- Sathian, K and Zangaladze, A (2001). Feeling with the mind's eye: the role of visual imagery in tactile perception. Optometry & Vision Science 78(5): 276-281. doi:10.1097/00006324-200105000-00010.
- Sathian, K; Zangaladze, A; Hoffman, J M and Grafton, S T (1997). Feeling with the mind's eye. NeuroReport 8(18): 3877-3881.
- Sereno, M I and Huang, R-S (2006). A human parietal face area contains aligned head-centered visual and tactile maps. Nature Neuroscience 9(10): 1337-1343. doi:10.1038/nn1777.
- Sergent, J; Ohta, S and MacDonald, B (1992). Functional neuroanatomy of face and object processing. A positron emission tomography study. Brain 115(1): 15-36. doi:10.1093/brain/115.1.15.
- Sieben, K; Röder, B and Hanganu-Opatz, I L (2013). Oscillatory entrainment of primary somatosensory cortex encodes visual control of tactile processing. Journal of Neuroscience 33(13): 5736-5749. doi:10.1523/jneurosci.4432-12.2013.
- Snow, J C; Strother, L and Humphreys, G W (2014). Haptic shape processing in visual cortex. Journal of Cognitive Neuroscience 26(5): 1154-1167. doi:10.1162/jocn_a_00548.
- Stilla, R and Sathian, K (2008). Selective visuo-haptic processing of shape and texture. Human Brain Mapping 29(10): 1123-1138. doi:10.1002/hbm.20456.
- Stoesz, M et al. (2003). Neural networks active during tactile form perception: Common and differential activity during macrospatial and microspatial tasks. International Journal of Psychophysiology 50(1-2): 41-49. doi:10.1016/s0167-8760(03)00123-5.
- Summers, I R; Francis, S T; Bowtell, R W; McGlone, F P and Clemence, M (2009). A functional magnetic resonance imaging investigation of cortical activation from moving vibrotactile stimuli on the fingertip. The Journal of the Acoustical Society of America 125(2): 1033-1039. doi:10.1121/1.3056399.
- Yau, J M; Pasupathy, A; Fitzgerald, P J; Hsiao, S S and Connor, C E (2009). Analogous intermediate shape coding in vision and touch. Proceedings of the National Academy of Sciences of the United States of America 106(38): 16457-16462. doi:10.1073/pnas.0904186106.
- Zangaladze, A; Epstein, C M; Grafton, S T and Sathian, K (1999). Involvement of visual cortex in tactile discrimination of orientation. Nature 401(6753): 587-590. doi:10.1038/44139.
- Zhang, M; Weisser, V D; Stilla, R; Prather, S C and Sathian, K (2004). Multisensory cortical processing of object shape and its relation to mental imagery. Cognitive, Affective, & Behavioral Neuroscience 4(2): 251-259. doi:10.3758/cabn.4.2.251.