[robotics-worldwide] [meetings] CFP: 1st Workshop on Deep Learning for Visual SLAM at CVPR'18

[robotics-worldwide] [meetings] CFP: 1st Workshop on Deep Learning for Visual SLAM at CVPR'18

Clark, Ronald
Dear colleagues,

This is a reminder that the submission deadline for the 1st International Workshop on Deep Learning for Visual SLAM to be held at CVPR2018 is coming up on 20 March!

Workshop overview:
Visual SLAM and ego-motion estimation are two of the key challenges and cornerstone requirements of machine perception. To enable the next generation of visual SLAM, we need to pursue better means of integrating prior knowledge and understanding of the world. This workshop will focus on the intersection of deep learning and real-time visual SLAM. The workshop will explore ways in which data-driven models can be harnessed to create Visual SLAM algorithms that are less fragile and more robust than existing state-of-the-art approaches. It will also investigate ways in which deep-learned models can be used alongside traditional approaches in a unified and synergistic fashion.

Website: http://www.visualslam.ai

Deadlines:
Paper submission: 20 March 2018
Notification of Acceptance: 20 April 2018
Final Schedule: 15 May 2018
Workshop date: 18 June 2018

Paper details:
Paper length is limited to 8 pages.
Submissions may be accepted as either oral or poster presentations.
Accepted papers will be published in the CVPR workshop proceedings.
Submission is via CMT: https://cmt3.research.microsoft.com/DLVSLAM2018

Topics:
We particularly solicit papers containing new and innovative ideas (possibly with preliminary experimental evaluation) related to, but not limited to, the following topics:

- Dense, Direct and Sparse Visual SLAM methods
- Learning for Real-time Odometry, Tracking and Ego-Motion estimation
- Learning for Single and Multi-View 3D Reconstruction
- Visual Place recognition and Relocalization
- Semantic SLAM methods (semantic elements are a fundamental component of human perception and scene understanding)
- New methods for 3D Scene Representation and Compression

More info: http://www.visualslam.ai

We hope to see you all there!

-----------------------------------------------------------------------------------------------
Ronald Clark
Dyson Research Fellow
Imperial College London
http://www.ronnieclark.co.uk

_______________________________________________
robotics-worldwide mailing list
[hidden email]
http://duerer.usc.edu/mailman/listinfo.cgi/robotics-worldwide
[robotics-worldwide] [meetings] Deadline Extended: 1st Workshop on Deep Learning for Visual SLAM at CVPR'18

Clark, Ronald
Due to numerous requests, we have extended the submission deadline for the 1st Workshop on Deep Learning for Visual SLAM to 10 April 2018!

This is especially to give those who have not yet submitted a chance to do so.

Please see below for the revised dates.

--------------------------------------------------------------------------
Website: http://www.visualslam.ai

Workshop overview:
Visual SLAM and ego-motion estimation are two of the key challenges and cornerstone requirements of machine perception. To enable the next generation of visual SLAM, we need to pursue better means of integrating prior knowledge and understanding of the world. This workshop will focus on the intersection of deep learning and real-time visual SLAM. The workshop will explore ways in which data-driven models can be harnessed to create Visual SLAM algorithms that are less fragile and more robust than existing state-of-the-art approaches. It will also investigate ways in which deep-learned models can be used alongside traditional approaches in a unified and synergistic fashion.

Deadlines:
Paper submission: 10 April 2018
Notification of Acceptance: 15 April 2018
Camera ready: 19 April 2018
Final Schedule: 15 May 2018
Workshop date: 18 June 2018

Paper details:
Paper length is limited to 8 pages.
Submissions may be accepted as either oral or poster presentations.
Accepted papers will be published in the CVPR workshop proceedings.
Submission is via CMT: https://cmt3.research.microsoft.com/DLVSLAM2018

Topics:
We particularly solicit papers containing new and innovative ideas (possibly with preliminary experimental evaluation) related to, but not limited to, the following topics:

- Dense, Direct and Sparse Visual SLAM methods
- Learning for Real-time Odometry, Tracking and Ego-Motion estimation
- Learning for Single and Multi-View 3D Reconstruction
- Visual Place recognition and Relocalization
- Semantic SLAM methods (semantic elements are a fundamental component of human perception and scene understanding)
- New methods for 3D Scene Representation and Compression

More info: http://www.visualslam.ai
-----------------------------------------------------------------------------------------------

We hope to see you all there!

Ronald Clark
Dyson Fellow
Imperial College London
http://www.ronnieclark.co.uk

_______________________________________________
robotics-worldwide mailing list
[hidden email]
http://duerer.usc.edu/mailman/listinfo.cgi/robotics-worldwide