[robotics-worldwide] [meetings] ICRA workshop & Demos: "Perception, Inference, and Learning for Joint Semantic, Geometric, and Physical Understanding"

Luca Carlone

Dear colleagues,

We cordially invite you to attend the ICRA’18 workshop on Perception, Inference, and Learning for Joint Semantic, Geometric, and Physical Understanding.
It will be an amazing full day event (May 21st, room: M1 – Mezzanine Level), including a terrific lineup of invited speakers, spotlight presentations, poster sessions, and open discussion.

We also plan to have a demo session – please contact us if you would like to arrange a last-minute demo related to the topics of the workshop.

We will have a $1000 monetary award – sponsored by iSEE.ai (http://isee.ai/) – for the best workshop paper and the best demo!

Looking forward to seeing you in Brisbane,

Luca & Nikolay

Perception, Inference, and Learning for Joint Semantic, Geometric, and Physical Understanding
International Conference on Robotics and Automation
Date: May 21, 2018, Brisbane
Website: http://multimodalrobotperception.mit.edu/
Room: M1 – Mezzanine Level (https://icra2018.org/accepted-workshops-tutorials/)

The goal of this workshop is to bring together researchers from robotics, computer vision, machine learning, and neuroscience to examine the challenges and opportunities emerging from the design of environment representations and perception algorithms that unify semantics, geometry, and physics. This goal is motivated by two fundamental observations. First, the development of advanced perception and world understanding is a key requirement for robot autonomy in complex, unstructured environments, and an enabling technology for robot use in transportation, agriculture, mining, construction, security, surveillance, and environmental monitoring. Second, despite the unprecedented progress over the past two decades, there is still a large gap between robot and human perception (e.g., expressiveness of representations, robustness, latency). The workshop aims to bring forward the latest breakthroughs and cutting-edge research on multimodal representations, as well as novel perception, inference, and learning algorithms that can generate such representations.

  *   The workshop will include keynote presentations from established researchers in robotics, machine learning, computer vision, and human and animal perception.
  *   There will be two panel discussions and two poster sessions highlighting contributed papers throughout the day.
  *   There will be a demo session including exciting live demos (best demo takes home a monetary prize - see below).

The workshop is endorsed by the IEEE RAS Technical Committee for Computer & Robot Vision.

To encourage rigorous innovative submissions, this year we will award a $1000 monetary prize to be split between the best paper and the best demo presented during the workshop. Quality and impact of the submissions will be evaluated by an award committee.

- Workshop Date: May 21, 2018

We plan to have broad participation from the robotics community through the MultimodalRobotPerception Google Community (https://plus.google.com/communities/102832228492942322585). Feel free to post thought-provoking questions and ideas related to joint metric-semantic-physical perception. The posts will be moderated by the organizers and addressed by the invited speakers during the workshop.

Organizers:
- Nikolay Atanasov, University of California, San Diego
- Luca Carlone, Massachusetts Institute of Technology

Invited speakers:
- Dieter Fox, University of Washington
- Jana Kosecka, George Mason University
- Ian Reid, University of Adelaide
- Michael Milford, QUT
- Srini Srinivasan, Queensland Brain Institute
- Torsten Sattler, ETH

Please send any questions to Nikolay Atanasov ([hidden email]) or Luca Carlone ([hidden email]). Please include "ICRA 2018 Workshop Submission" in the subject of the email.

Luca Carlone
Charles Stark Draper Assistant Professor
Laboratory for Information and Decision Systems (LIDS)
Massachusetts Institute of Technology (MIT)
office: 32 Vassar St., Cambridge, MA 02139, Room: 31-243
web: http://www.lucacarlone.com/
