What is an inter-sensory object?

Sergei Gepshtein [1], Johannes Burge [1], Martin S. Banks [1], Marc O. Ernst [2]

Recent work
showed that humans combine visual and haptic information about object
size in a way that approaches statistical optimality: the precision of
combined estimates is higher than with vision or touch alone (Ernst &
Banks, 2002; Gepshtein & Banks, 2003). If the brain combines the visual
and haptic signals optimally when they appear to come from the same object,
the precision of combination should be greater when the signals originate
from the same location in space. We examined this by varying the spatial
offset between the visual and haptic stimuli. In a 2-IFC procedure, each
interval contained visual and haptic stimuli, spatially superimposed or
separated by up to 10 cm. The visual stimuli were random-dot stereograms
of two parallel surfaces; the haptic stimuli were two parallel surfaces
created by force-feedback devices. Observers indicated the interval containing
the greater perceived inter-surface distance. The increase in precision
with two cues as opposed to one cue should be greatest when visual and
haptic weights are equal, so we equated the weights for each observer
by finding the surface slant at which vision and haptics were equally
precise (Gepshtein & Banks, 2003). We found that inter-modality just-noticeable
differences (JNDs) for object size grew as a function of the spatial separation
between the visual and haptic stimuli. With no separation, JNDs were close
to optimal. With large separations, JNDs worsened. We examined whether
this effect of spatial coincidence is affected by scene layout; for example,
when the lack of coincidence is "explained" by occlusion of
the haptic stimulus.
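The optimality benchmark invoked above is the standard maximum-likelihood (inverse-variance weighted) combination rule from Ernst & Banks (2002). A minimal sketch of why equal visual and haptic weights maximize the predicted two-cue benefit is given below; the function name `combined_sigma` is illustrative, not from the original work:

```python
import math

def combined_sigma(sigma_v, sigma_h):
    """Standard deviation of the maximum-likelihood combined estimate:
    variance_vh = (s_v^2 * s_h^2) / (s_v^2 + s_h^2),
    which is always at or below the smaller single-cue variance."""
    return math.sqrt((sigma_v ** 2 * sigma_h ** 2) / (sigma_v ** 2 + sigma_h ** 2))

# Equal single-cue precision (equal weights): the largest relative benefit,
# combined sigma = sigma / sqrt(2).
print(combined_sigma(1.0, 1.0))  # ~0.707

# Unequal precision: the combined estimate barely beats the better cue.
print(combined_sigma(1.0, 3.0))  # ~0.949
```

This is why the experiment first equated visual and haptic weights per observer: with equal weights the predicted JND improvement over the better single cue is a factor of sqrt(2), the easiest regime in which to detect departures from optimal combination.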