[robotics-worldwide] [journals] CFP: Special Issue "ViTac: Integrating Vision and Touch for Multimodal and Cross-Modal Perception" in Frontiers in Robotics and AI (Extended Submission Deadline: 22nd January 2020)


Dear Colleagues,

Please consider submitting papers to the Frontiers in Robotics and AI’s special issue on “ViTac: Integrating Vision and Touch for Multimodal and Cross-Modal Perception”.

URL: https://www.frontiersin.org/research-topics/10004/vitac-integrating-vision-and-touch-for-multimodal-and-cross-modal-perception

Extended Submission Deadline: 22nd January 2020

Aims and Scope:
This Research Topic builds on outputs from the workshop 'ViTac: Integrating Vision and Touch for Multimodal and Cross-modal Perception' (http://wordpress.csc.liv.ac.uk/smartlab/icra-2019-vitac-workshop/) held at the International Conference on Robotics and Automation (ICRA) 2019. We also welcome submissions not associated with the workshop, provided they fit the scope of the Research Topic.

Animals interact with the world through multimodal sensing, especially vision and touch in the case of humans. In contrast, artificial systems usually rely on a single sensing modality, with distinct hardware and algorithmic approaches developed for each, e.g. computer vision and tactile robotics. Future robots, as embodied agents, should make the best use of all available sensing modalities to interact with the environment. Over the last few years, there have been advances both in fusing information from distinct modalities and in selecting the most appropriate modality for achieving a goal, e.g. grasping or manipulating an object. Furthermore, there has been a recent acceleration in the development of camera-based optical tactile sensors, such as the GelSight and TacTip, bridging the gap between vision and tactile sensing and enabling cross-modal perception.

This Research Topic will encompass recent progress in combining vision and touch sensing, from the perspective of how touch complements vision to achieve better robot perception, exploration, learning and interaction with humans. It aims to foster active collaboration and discussion of methods for fusing vision and touch, of challenges in multimodal and cross-modal sensing, and of the development and applications of optical tactile sensors.

Topics of Interest:
• trends in combining vision and tactile sensing for robot perception
• development of optical tactile sensors (using visual cameras or optical fibres)
• integration of optical tactile sensors into robotic grippers and hands
• roles of vision and touch sensing in different object perception tasks, e.g., object recognition, localization, object exploration, planning, learning and action selection
• interplay between touch sensing and vision
• bio-inspired approaches for fusion of vision and touch sensing
• psychophysics and neuroscience of combining vision and tactile sensing in humans and animals
• computational methods for processing vision and touch data in robot learning
• deep learning for optical tactile sensing and relation/interaction with deep learning for robot vision
• the use of vision and touch for safe human-robot interaction/collaboration

Keywords:
Tactile Sensing,
Vision and Touch,
Sensor Fusion,
Multimodal Perception,
Cross-Modal Perception

Guest Editors:
Dr Shan Luo, University of Liverpool, U.K.
Prof. Nathan F. Lepora, University of Bristol, U.K.
Dr. Uriel Martinez-Hernandez, University of Bath, U.K.
Dr. Joao Bimbo, Italian Institute of Technology, Italy
Dr. Huaping Liu, Tsinghua University, China

Best regards,


Dr. Shan Luo

Lecturer (Assistant Professor) in Robotics

Director of the smARTLab (http://wordpress.csc.liv.ac.uk/smartlab/)
Department of Computer Science
The University of Liverpool
Liverpool, L69 3GJ
United Kingdom

Office: G25, Ashton Building

Email: [hidden email]
Web: https://cgi.csc.liv.ac.uk/~shanluo/