[robotics-worldwide] [meetings] RSS workshop New Benchmarks, Metrics, and Competitions for Robotic Learning: extended deadline June 8

Niko Sünderhauf
--------------------------------------------------------------------------------------------
Call for Contributions and Deadline Extension
---------------------------------------------------------------------------------------------
New Benchmarks, Metrics, and Competitions for Robotic Learning

Extended Submission Deadline: June 8 (Anywhere on Earth)
Workshop at Robotics: Science and Systems (RSS)
June 29, 2018   ---   Pittsburgh, USA

Website: https://sites.google.com/view/rss2018-robotic-learning/home

===========================================
Call for Contributions:

This workshop will discuss and propose new benchmarks, competitions, and
performance metrics that address the specific challenges arising when
deploying (deep) learning in robotics.

Researchers in robotics currently lack widely accepted, meaningful
benchmarks and competitions that inspire the community to work on the
critical research challenges for robotic learning and that allow
repeatable experiments and quantitative evaluation. This is in stark
contrast to computer vision, where datasets such as ImageNet and COCO,
and their associated competitions, have fueled much of the progress in
recent years.

Our workshop places a strong emphasis on developing new benchmarks that
address the challenges of deploying deep learning for robotics in
complex real-world scenarios, on identifying the current gaps in our
collective knowledge in this area, and on the new research directions
needed to close these gaps.


We therefore invite authors to contribute extended abstracts or full
papers that:

- identify the shortcomings of existing benchmarks, datasets, and
evaluation metrics for robotics

- propose improved datasets, evaluation metrics, benchmarks, and
protocols for robotics that foster repeatable evaluation and motivate
research in important areas not well covered by existing benchmarks

- address specific learning-related research challenges in robotics,
such as coping with open-set conditions, uncertainty estimation,
incremental/continuous learning, active learning, active vision, and
transfer learning

Papers on benchmarks and datasets should be guided by the following
questions:

- Where do you see the shortcomings in existing benchmarks and
evaluation metrics?

- What are important research challenges for robotic learning that are
not well covered by existing benchmarks?

- What characteristics should new benchmarks have to allow meaningful,
repeatable evaluation of approaches in robotic vision, while steering
the community toward addressing the open research challenges?


Organizers:
Niko Sünderhauf, Markus Wulfmeier, Anelia Angelova, Feras Dayoub, Ken
Goldberg, et al.


Kind regards,
        Niko

--
Niko Sünderhauf
Chief Investigator - Australian Centre for Robotic Vision
Senior Lecturer - Queensland University of Technology
phone: +61 7 3138 9971
Gardens Point, S Block 1120
2 George Street, Brisbane, QLD 4000
_______________________________________________
robotics-worldwide mailing list
[hidden email]
http://duerer.usc.edu/mailman/listinfo.cgi/robotics-worldwide