Temporal Synchrony Effects of Self-Motion Signals on Spatial Heading Perception

Time: 2021-11-16

  A recent study published in Cell Reports found that macaques discriminated self-motion directions more precisely when visual stimuli appeared about 250-500 ms earlier than vestibular stimuli than when the visual and inner-ear vestibular stimuli were synchronous. This result, although surprising at first glance, suggests that in natural spatial navigation the brain typically integrates multisensory signals with incongruent temporal dynamics, i.e., it integrates visual velocity with vestibular acceleration. This work was performed by researchers in Dr. GU Yong's lab at the Institute of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology of the Chinese Academy of Sciences. 

  Vector-based navigation, or path integration, is a strategy often used by animals and humans in spatial navigation, especially in complex and changeable environments. When landmark information is unclear, animals rely on self-motion information to update their heading and travel distance in real time, thereby achieving spatial localization. Visual optic flow and inner-ear vestibular inputs provide important self-motion information for path integration. Previous studies have found that humans and non-human primates can integrate visual optic flow and vestibular information to improve heading perception, but the neural mechanism underlying this multisensory integration remains unclear. For example, the visual channel generally processes motion velocity information. By contrast, the peripheral vestibular organs detect acceleration, and this signal is partially integrated over time as it propagates to the central nervous system (CNS), producing a broad distribution of temporal dynamics in the CNS, from acceleration-like to velocity-like. Hence, the temporal dynamics of the two sensory signals in the CNS could be either the same (velocity vs. velocity) or different (velocity vs. acceleration). 
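  To make this distinction concrete (a worked sketch under an assumed Gaussian motion profile, not an analysis from the paper): if the stimulus has velocity profile v(t), the vestibular periphery transduces its time derivative, the acceleration a(t):

    v(t) = \exp\!\left(-\frac{(t - t_0)^2}{2\sigma^2}\right), \qquad a(t) = \frac{dv}{dt} = -\frac{t - t_0}{\sigma^2}\, v(t)

  The positive acceleration peak occurs at t_0 - \sigma, ahead of the velocity peak at t_0; for a motion profile lasting one to two seconds, this lead is on the order of several hundred milliseconds, the same scale as the offsets examined in this study.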

  So, does the brain integrate visual velocity signals with vestibular velocity signals, or with vestibular acceleration signals, for multisensory heading perception? Interestingly, evidence supporting either hypothesis has been found in the brain. On one hand, neurons in the dorsal portion of the medial superior temporal area (MSTd) in the dorsal visual pathway mainly encode velocity for both visual and vestibular signals. The matched temporal dynamics of the two modalities' signals in MSTd are thus thought to be well suited for cue integration. On the other hand, a recent study has found the opposite evidence: the lateral intraparietal area (LIP), located in the posterior parietal lobe, predominantly processes vestibular acceleration together with visual velocity, signals whose temporal dynamics differ by as much as 300-400 ms. 

  To test the two models, researchers in Dr. GU's lab trained macaques to discriminate heading directions in a virtual reality motion system, in which vestibular signals were generated by a motion platform and optic flow was presented on a large visual display mounted on the platform. As a control, the researchers first presented a condition in which the visual and vestibular stimuli were perfectly synchronized at the input level, simulating the natural situation. Under this condition, they verified the previously reported phenomenon that the animals integrated the two cues and improved their heading performance; moreover, the amount of improvement was close to the prediction of optimal Bayesian integration theory. Next, the researchers introduced a temporal offset between the vestibular and visual inputs. Surprisingly, in the asynchronous conditions in which the visual cue led the vestibular cue by 250-500 ms, the macaques' heading discrimination improved further, becoming significantly better than in the natural, synchronous condition. No such effect was observed when the visual stimulus led the vestibular stimulus by 750 ms, or when it was delayed relative to the vestibular input, suggesting that the 250-500 ms offset window was specific for the further-improved heading performance. 
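  For reference, the optimal Bayesian (maximum-likelihood) prediction mentioned above takes a standard form: if the visual and vestibular cues alone yield discrimination thresholds \sigma_{vis} and \sigma_{ves}, the predicted combined threshold satisfies

    \sigma_{\mathrm{comb}}^{2} = \frac{\sigma_{\mathrm{vis}}^{2}\,\sigma_{\mathrm{ves}}^{2}}{\sigma_{\mathrm{vis}}^{2} + \sigma_{\mathrm{ves}}^{2}} \le \min\left(\sigma_{\mathrm{vis}}^{2},\, \sigma_{\mathrm{ves}}^{2}\right)

  so the bimodal threshold should be lower than either unimodal threshold, which is what the researchers observed in the synchronous condition.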

  Why would a temporal offset of 250-500 ms further enhance the efficiency of multisensory integration? The researchers speculated that it is related to two brain areas: LIP and the frontal eye field (FEF). They therefore implanted electrodes in these two regions in macaques and performed in vivo single-unit electrophysiological recordings. The results showed that under the naturally synchronous condition, neurons in both regions processed a relatively slow visual velocity signal and a relatively fast vestibular acceleration signal, with a temporal difference of just 250-500 ms between them. When the "lagged" visual cue was advanced relative to the vestibular cue by 250-500 ms, the temporal difference between the two modality signals (velocity peak vs. acceleration peak) in FEF and LIP neurons shrank, the signal peaks overlapped and superimposed, and the amount of encoded heading information increased, which neatly explains the asynchronous-stimulus effect on the animals' behavior. 
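  A minimal simulation (an illustrative sketch under assumed Gaussian profiles and an assumed 150 ms visual processing delay; not the authors' analysis code) shows how advancing the visual cue by a few hundred milliseconds can bring the fast vestibular acceleration peak and the lagged visual velocity peak into register:

    import numpy as np

    # Time axis (s); the stimulus is a ~2 s Gaussian velocity profile.
    t = np.linspace(0.0, 2.0, 2001)
    t0, sigma = 1.0, 0.35            # assumed profile center and width (s)

    def velocity(t, center):
        # Gaussian velocity profile of the motion stimulus.
        return np.exp(-(t - center) ** 2 / (2 * sigma ** 2))

    def acceleration(t, center):
        # Time derivative of the velocity profile.
        return -(t - center) / sigma ** 2 * velocity(t, center)

    # Assumed central responses: the vestibular channel follows acceleration;
    # the visual channel follows velocity with an extra processing delay.
    visual_delay = 0.15              # assumed visual processing delay (s)

    for visual_lead in (0.0, 0.25, 0.50):    # visual cue onset lead (s)
        vest_peak = t[np.argmax(acceleration(t, t0))]
        vis_peak = t[np.argmax(velocity(t, t0 + visual_delay - visual_lead))]
        print(f"visual lead {visual_lead * 1000:3.0f} ms: "
              f"peak mismatch {abs(vis_peak - vest_peak) * 1000:3.0f} ms")

  With these assumed numbers, the peak mismatch shrinks from roughly 500 ms at synchrony to near zero when the visual cue leads by 500 ms, mirroring the peak alignment seen in the FEF and LIP recordings.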

  In summary, by introducing a new paradigm with artificially asynchronous conditions, this study found enhanced heading perception in animals under asynchronous stimulation, together with an increased capacity for heading information in the corresponding frontal and posterior parietal cortices. These results support a model in which, under natural conditions, the brain integrates physical quantities with inconsistent temporal dynamics for multisensory heading perception. As to why the brain would integrate visual velocity with vestibular acceleration under natural conditions, the researchers proposed that acceleration is the faster signal, so using it could help the organism update changes in its heading quickly and respond in a timely way. 

  This paper, entitled "Temporal synchrony effects of optic flow and vestibular inputs on multisensory heading perception", was published online in Cell Reports on November 16, 2021. The research was conducted by graduate students ZHENG Qihao and ZHOU Luxin under the supervision of Dr. GU Yong at the Center for Excellence in Brain Science and Intelligence Technology. The research was funded by the National Natural Science Foundation of China, the Chinese Academy of Sciences, and the Shanghai Municipal Government. 

 

  Figure legend: (A) Top right: Macaques move through a virtual environment on a motion platform and discriminate their self-motion direction. The setup provides two modalities of stimuli: visual optic flow generated by the display, and a vestibular cue generated by the motion platform. Left: During self-motion, the retina receives optic flow from the display and the inner-ear vestibular organs receive motion stimuli from the platform; this information is then transferred to the cerebral cortex for processing and integration. Bottom right: Compared with visual-vestibular synchronous stimulation (0 ms), the macaque's ability to perceive spatial heading is improved, as reflected by a lower threshold for discriminating self-motion direction, under asynchronous stimulation, especially when the visual optic flow leads the vestibular cue by 250 or 500 ms. (B) From top to bottom, the amount of neuronal information in the macaque frontal eye field when the visual cue was advanced relative to the vestibular cue by 500 ms, 250 ms, and 0 ms (synchronous), respectively, consistent with the corresponding behavioral performance. (Image by CEBSIT)

    

   AUTHOR CONTACT: 

    GU Yong 

    Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China. 

    E-mail: guyong@ion.ac.cn 
