

The Sense of Integration

Devika Narain

In our daily lives, when we interact with our surroundings, we depend on our senses to gather information. We hear a kettle boiling, see the steam rising, and feel the heat through our sense of touch. In other words, we integrate information from several sensory modalities to make better sense of events happening around us. At the CrossModal Lab, we explore how human beings integrate sensory information from multiple sources to recognize events, and we investigate the perception of simultaneity involved in such cognitive processes. Simultaneity perception has important implications for communication, particularly for remote interactions in which the presence of the people involved is simulated. This is known as telepresence, and it requires sensory information and responses to be coordinated in a natural manner.

Theoretical Background and Methods
Our research uses psychophysical studies with a virtual-reality setup built around a Phantom haptic device, which allows us to investigate how participants perceive synchronous or asynchronous events delivered through different sensory channels, such as touch (haptics) and vision. By modulating the time at which an event occurs, we can examine a brief period, known as the temporal window, within which the brain integrates these sensory inputs. Signals from different sensory modalities travel to the brain at different speeds and are processed by different areas of the cerebral cortex. For example, a simple visual stimulus may take 70 to 100 milliseconds to travel from the eye to the visual cortex, whereas a touch stimulus to the fingers may take less time to reach the somatosensory cortex. When these signals carry information about the same event, however, they must be combined into a single perceptual representation.

We use psychophysics to determine at what point in time events from different modalities are perceived as synchronous. Participants judge which sensory event occurred first, or whether they perceived the two as simultaneous; this is called a temporal order judgment (TOJ). The point in time at which the two sensory events are perceived as simultaneous is known as the point of subjective simultaneity (PSS). Because participants' responses vary with the difficulty of the detection task, we also calculate a just noticeable difference (JND), the smallest temporal offset between the two events that participants can reliably detect. Together, these two parameters give us an excellent basis for understanding how the human brain integrates multimodal signals in time.
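To illustrate how PSS and JND can be read off TOJ data, the proportion of "vision first" responses at each stimulus onset asynchrony (SOA) is commonly fitted with a logistic psychometric function. The sketch below is illustrative only: the SOA values, response proportions, and grid-search fit are hypothetical and are not the lab's actual stimuli or analysis pipeline.

```python
import math

def logistic(soa, pss, slope):
    """Probability of judging 'vision first' at a given SOA (ms)."""
    return 1.0 / (1.0 + math.exp(-(soa - pss) / slope))

# Hypothetical TOJ data: proportion of "vision first" responses at each
# stimulus onset asynchrony (visual minus haptic onset, in ms).
soas = [-120, -80, -40, 0, 40, 80, 120]
true_pss, true_slope = 30.0, 40.0          # assumed values, for illustration
props = [logistic(s, true_pss, true_slope) for s in soas]

def fit_toj(soas, props):
    """Grid-search fit of the logistic psychometric function
    (least squares over candidate PSS and slope values)."""
    best = None
    for pss in range(-100, 101, 5):
        for slope in range(10, 101, 5):
            sse = sum((logistic(s, pss, slope) - p) ** 2
                      for s, p in zip(soas, props))
            if best is None or sse < best[0]:
                best = (sse, float(pss), float(slope))
    return best[1], best[2]

pss, slope = fit_toj(soas, props)
# The PSS is the 50% point of the curve; the JND is the SOA shift
# that takes the response proportion from 0.50 to 0.75.
jnd = slope * math.log(3.0)
print(f"PSS = {pss:.0f} ms, JND = {jnd:.0f} ms")
```

With noiseless synthetic data, the grid search recovers the generating parameters exactly; with real responses, a maximum-likelihood fit would normally be used instead.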

Figures 1, 2 and 4

Our current study couples visual and haptic stimuli with motor responses. Wearing stereoscopic glasses, participants look onto a semi-transparent mirror on which an object that appears to have depth is displayed (Figures 3, 4). Using finger gloves attached to a Phantom device (Figures 1, 2), they can touch and feel the displayed object, even though in reality their fingers grasp only air. When we grasp a real solid object, it pushes back at us with a certain force and has a texture and friction. By generating a small force feedback, the finger gloves recreate this effect, so that participants feel they are grasping a three-dimensional object projected from the mirror into the space in front of them. The Phantom device can also produce the movement itself, which is known as passive movement. During the experiment, in some trials we provided participants with a visual cue at the same time as the onset of the force feedback.
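The force that makes a virtual object feel solid can be sketched with a standard penalty-based spring-damper model of the kind commonly used in haptic rendering; this is a minimal illustration, and the stiffness and damping constants below are assumed values, not the parameters of the lab's Phantom controller.

```python
def contact_force(depth_mm, velocity_mm_s=0.0, stiffness=0.5, damping=0.05):
    """Penalty-based haptic rendering: when the fingertip penetrates the
    virtual surface, push back with a force proportional to penetration
    depth (spring term) plus a term opposing inward motion (damper term).

    depth_mm      -- penetration into the virtual surface (<= 0 means no contact)
    velocity_mm_s -- fingertip velocity into the surface
    Returns force in newtons (illustrative units and constants).
    """
    if depth_mm <= 0.0:
        return 0.0                        # finger is outside the object
    spring = stiffness * depth_mm         # Hooke's-law restoring force
    damper = damping * velocity_mm_s      # resists motion into the surface
    return max(spring + damper, 0.0)      # device can only push, not pull

# A fingertip 2 mm inside the surface, momentarily at rest:
force = contact_force(2.0)   # 0.5 N/mm * 2 mm = 1.0 N
```

At each update of the haptic loop, a force like this is commanded to the device's motors, which is what gives the empty air between the finger gloves the resistance of a solid object.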

The study demonstrated that when participants had to respond actively through motor movements, they were more sensitive and quicker to detect the change (decreased PSS and JND), giving us a small, highly resolved temporal window for integration. When no visual cue accompanied the feedback, the PSS increased, i.e. visual processing took longer, and the accuracy of the participants' judgments, indicated by the JND, also deteriorated. This study, set in a three-dimensional paradigm, takes us much closer to understanding real-world interactions, multisensory integration, and the interplay between such cognitive processes.

Figure 3

Future outlook
Our current research is motivated by applications in telepresence, such as tele-operation, in which a surgeon at a remote location directs a robot arm to perform a very precise surgical procedure. Here, synchronous feedback of multimodal stimuli, such as audio, video, and haptics, is critical, and knowing the capacity and integration window of human perception is crucial for designing communication protocols in such systems. In essence, any multimodal robotic system that is closely guided by human action will be able to make use of these results modeled on human cognition.

Devika Narain
* 1985

Graduate program

Academic Background
Bachelor of Engineering in Biotechnology, Diploma in Information Sciences and Applied Mathematics
Research Interests
Multimodal Cognition, Attention, Computational Vision, Artificial Intelligence
Awards and Scholarships
PFL Student Grant (2007), Indian National Talent Search Exam Scholar (2001)

  • Dr. Zhuanghua Shi, Experimental Psychology, LMU, Munich
  • Prof. Dr. Hermann J. Müller, Director, Neuro-cognitive Psychology, LMU Munich