[robotics-worldwide] [meetings] CFP: 1st Workshop on Deep Learning for Visual SLAM at CVPR'18


Clark, Ronald
Dear colleagues,

This is a reminder that the submission deadline for the 1st International Workshop on Deep Learning for Visual SLAM, to be held at CVPR 2018, is coming up on 20 March!

Workshop overview:
Visual SLAM and ego-motion estimation are two of the key challenges and cornerstone requirements of machine perception. To enable the next generation of visual SLAM, we need to pursue better means of integrating prior knowledge and understanding of the world. This workshop will focus on the intersection of deep learning and real-time visual SLAM. It will explore how data-driven models can be harnessed to create visual SLAM algorithms that are less fragile than existing state-of-the-art approaches, and will investigate ways of using deep-learned models alongside traditional methods in a unified and synergistic fashion.

Website: http://www.visualslam.ai

Paper submission: 20 March 2018
Notification of Acceptance: 20 April 2018
Final Schedule: 15 May 2018
Workshop date: 18 June 2018

Paper details:
Papers are limited to 8 pages.
Submissions may be accepted as either oral or poster presentations.
Accepted papers will be published in the CVPR workshop proceedings.
Submission is via CMT: https://cmt3.research.microsoft.com/DLVSLAM2018

We are particularly soliciting papers containing new and innovative ideas (possibly with preliminary experimental evaluation) related, but not limited, to the following topics:

- Dense, Direct and Sparse Visual SLAM methods
- Learning for Real-time Odometry, Tracking and Ego-Motion estimation
- Learning for Single and Multi-View 3D Reconstruction
- Visual Place Recognition and Relocalization
- Semantic SLAM methods (semantic elements being a fundamental component of human perception and scene understanding)
- New methods for 3D Scene Representation and Compression

More info: http://www.visualslam.ai

We hope to see you all there!

Ronald Clark
Dyson Research Fellow
Imperial College London

robotics-worldwide mailing list
[hidden email]