Adaptive Multimodal Human-Robot and Machine Interaction
Compared to a common Graphical User Interface (GUI), a robotic platform can also deploy speech, gestures, and other non-verbal signals to enhance the naturalness of the interaction. This targets a more natural way of dialoguing than approaches based on a single communication channel, as seen in most applications on tablets and mobile phones. In this context, the goal is to study and design adaptive multimodal interaction mechanisms. The focus is not only on relying on different modalities and on applying fusion techniques to them in order to generate the correct interpretation of the user's intention, but also on selecting the proper feature set and optimizing it with respect to both each modal channel and each user. At present, the majority of robotic applications are based on static user models, which prevents such systems from adapting independently and proactively to changes in users' needs and preferences. The aim of the present proposal is to investigate how to merge human-robot and human-machine multimodal interaction research issues with online adaptive learning ones.
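To make the idea concrete, one possible realization of user-adaptive multimodal fusion is a late-fusion scheme in which per-modality intent scores are combined with per-user weights that are updated online from interaction feedback. The sketch below is purely illustrative and not the proposal's prescribed method; the class name, modality labels, and multiplicative-weights update rule are all assumptions chosen for clarity.

```python
class AdaptiveFusion:
    """Illustrative sketch: late fusion of per-modality intent scores
    with per-user weights adapted online from interaction feedback.
    (Hypothetical design, not the method mandated by the proposal.)"""

    def __init__(self, modalities, learning_rate=0.1):
        # Start with uniform trust in every modality for a new user.
        self.weights = {m: 1.0 / len(modalities) for m in modalities}
        self.lr = learning_rate

    def fuse(self, scores):
        """scores: {modality: {intent: confidence}} -> most likely intent."""
        combined = {}
        for m, intent_scores in scores.items():
            for intent, c in intent_scores.items():
                combined[intent] = combined.get(intent, 0.0) + self.weights[m] * c
        return max(combined, key=combined.get)

    def update(self, scores, true_intent):
        """Multiplicative-weights update: boost modalities whose top
        hypothesis matched the confirmed user intention, demote the rest."""
        for m, intent_scores in scores.items():
            correct = max(intent_scores, key=intent_scores.get) == true_intent
            self.weights[m] *= (1 + self.lr) if correct else (1 - self.lr)
        total = sum(self.weights.values())
        for m in self.weights:
            self.weights[m] /= total  # keep weights a distribution


# Example interaction: speech and gesture recognizers disagree.
fusion = AdaptiveFusion(["speech", "gesture"])
obs = {"speech": {"greet": 0.9, "stop": 0.1},
       "gesture": {"greet": 0.2, "stop": 0.8}}
intent = fusion.fuse(obs)          # fused decision over both channels
fusion.update(obs, true_intent="greet")  # feedback shifts trust toward speech
```

After repeated feedback of this kind, the weights drift toward the channels that are most reliable for that particular user, which is one simple way a static user model could be replaced by one that adapts proactively.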
The selected candidate will join the PRISCA Laboratory (Projects of Intelligent Robotics and Advanced Cognitive Systems) in Naples. The PRISCA Lab is a dynamic, international, and multidisciplinary team that offers exciting scientific projects, as well as an excellent and stimulating research environment.