[robotics-worldwide] [meetings] Invitation to attend IROS 2019 Workshop: Semantic Policy and Action Representations for Autonomous Robots (SPAR)

4th Workshop on Semantic Policy and Action Representations for
Autonomous Robots (SPAR)

November 8, 2019 - Room: LG-R10 - IEEE/RSJ International Conference on
Intelligent Robots and Systems - Macau, China


----Workshop URL

https://sites.google.com/view/spar2019/home

Contact email: [hidden email]


----Workshop objectives

It has been a long-standing question whether robots can reach a human
level of intelligence, one that understands the essence of observed
actions and imitates them even under different circumstances.
Contemporary research in robotics and machine learning has approached
this question from two different perspectives: one works bottom-up, for
instance by relying solely on perceived continuous sensory data, while
the other works top-down from the symbolic level. Although both lines of
work have shown encouraging results, understanding and imitation of
actions have yet to be fully solved.

Action semantics stands as a potential glue for bridging the gap between
a symbolic action representation and its corresponding continuous,
signal-level description. A semantic representation provides a tool for
capturing the essence of an action by revealing its inherent
characteristics. Semantic features thus help robots to understand,
learn, and generate policies that imitate actions in various styles and
with different objects. Hence, more descriptive semantics yields robots
with greater capability and autonomy.

This workshop focuses on new technologies that allow robots to learn
generic semantic models for different tasks. It will bring together
researchers from diverse fields, including robotics, computer vision,
and machine learning, to review the most recent scientific achievements,
identify the next breakthrough topics, and propose new directions for
the field.



*** We have a very exciting program for the SPAR workshop

  Time               Program
09:00 – 09:15       Welcome and introduction. Karinne Ramirez-Amaro &
Yezhou Yang

09:20 – 09:55       Kei Okada, The University of Tokyo. "Task
Instantiation from Life-long Episodic Memories of Service Robots"

10:00 – 10:35       Tanja Schultz, University of Bremen. "Biosignal
Processing for Modeling Human Everyday Activities"

10:45 – 11:15       Coffee Break

11:20 – 11:55       Stefanos Nikolaidis, University of Southern
California. "Learning Collaborative Action Plans from YouTube Videos"

12:00 – 12:35       Darius Burschka, Technical University of Munich.
"Understanding the Static and Dynamic Scene Context for Human-Robot   
Collaboration in Households"

12:40 – 13:05       Joseph Lim, University of Southern California. TBD

13:05 – 14:00       Lunch

14:00 – 14:35       Georg von Wichert, Siemens. TBD

14:40 – 15:15       Chris Paxton, Nvidia Robotics Lab. "From Pixels to
Task Planning and Execution"

15:20 – 15:40       Spotlight talks

15:45 – 16:15       Coffee Break and poster presentations

16:20 – 16:40       Poster presentations

16:45 – 17:20       Jesse Thomason, University of Washington. "Action
Learning from Realistic Environments with Directives"

17:25 – 17:40       Final remarks and end of the workshop


----Invited Speakers (all confirmed)

* Kei Okada, The University of Tokyo, Japan.
http://www.jsk.t.u-tokyo.ac.jp/~k-okada/index-e.html

* Tanja Schultz, University of Bremen, Germany.
https://www.uni-bremen.de/en/csl/team/staff/prof-dr-ing-tanja-schultz/

* Georg von Wichert, Siemens, Germany.
https://www.linkedin.com/in/georg-von-wichert-7a74796/

* Stefanos Nikolaidis, University of Southern California, USA.
https://stefanosnikolaidis.net/

* Joseph Lim, University of Southern California, USA.
https://viterbi-web.usc.edu/~limjj/

* Darius Burschka, Technical University of Munich, Germany.
http://robvis01.informatik.tu-muenchen.de/

* Chris Paxton, Nvidia Robotics Lab, USA.
https://research.nvidia.com/person/chris-paxton

* Jesse Thomason, University of Washington, USA.
https://jessethomason.com/


----Organizers

* Karinne Ramirez-Amaro, Chalmers University of Technology, Sweden

* Eren Erdal Aksoy, Halmstad University, Sweden

* Yezhou Yang, Arizona State University, USA

* Shiqi Zhang, SUNY Binghamton, USA



----Advisory Board

* Michael Beetz, University of Bremen, Germany

* Yiannis Aloimonos, University of Maryland, USA

* Tamim Asfour, Karlsruhe Institute of Technology, Germany

* Florentin Wörgötter, University of Göttingen, Germany

--
Asst. Prof. Karinne Ramirez Amaro
Chalmers University of Technology
Department of Electrical Engineering
Systems and Control Division
SE-412 96 Göteborg, Sweden

IEEE Associate Vice President - Conference Operations

E-mail: [hidden email]
Office telephone: +46-31-772-1074
 
www.chalmers.se

_______________________________________________
robotics-worldwide mailing list
[hidden email]
http://duerer.usc.edu/mailman/listinfo.cgi/robotics-worldwide