The ability to process crossmodal information is a fundamental
feature of the brain that provides a robust perceptual experience for
efficient interaction with the environment. Consequently, the
integration of multisensory information plays a crucial role in
autonomous systems, enabling them to create robust and meaningful
representations of objects and events.
To deal with real-world information, an autonomous, intelligent
system must be capable of processing, integrating, and segregating
different modalities to achieve coherent perception, decision-making,
and cognitive learning.
Recent neurophysiological findings in crossmodal learning have inspired
novel computational models aimed at producing biologically inspired
behavioral responses. A rich set of neural mechanisms supports the
integration and segregation of multimodal stimuli, providing the means
to efficiently resolve conflicts across modalities.
This special issue invites contributions from psychology,
computational neuroscience, artificial intelligence, and cognitive
robotics discussing current research on crossmodal learning mechanisms
from both theoretical and modelling perspectives.
II. Potential Topics
Topics include, but are not limited to:
- New theories and findings on crossmodal processing
- New neuroscientific results on crossmodal learning
- Machine learning and neural networks for learning multisensory
representations
- Computational models of crossmodal attention and perception
- Brain-inspired approaches for multisensory integration