It has been a long-standing question whether robots can reach a human
level of intelligence that understands the essence of observed actions
and imitates them even under different circumstances. Contemporary
research in robotics and machine learning has approached this question
from two different perspectives: one in a bottom-up manner, for
instance by relying solely on perceived continuous sensory data, and
the other in a top-down fashion, starting from the symbolic level.
Although encouraging results have been shown in both directions,
understanding and imitation of actions have yet to be fully achieved.
Action semantics stands as a potential glue for bridging the gap between
a symbolic action representation and its corresponding continuous,
signal-level description. A semantic representation provides a tool for
capturing the essence of an action by revealing its inherent
characteristics. Semantic features thus help robots to understand,
learn, and generate policies to imitate actions even in various styles
and with different objects. In short, more descriptive semantics yields
robots with greater capability and autonomy.
This workshop focuses on new technologies that allow robots to learn
generic semantic models for different tasks. We will bring together
researchers from diverse fields, including robotics, computer vision,
and machine learning, to review the most recent scientific achievements
and the next breakthrough topics, and to propose new directions in the
field.
*** We have a very exciting program for the SPAR workshop:
09:00 – 09:15 Welcome and introduction. Karinne Ramirez-Amaro &
09:20 – 09:55 Kei Okada, The University of Tokyo. "Task
Instantiation from Life-long Episodic Memories of Service Robots"
10:00 – 10:35 Tanja Schultz, University of Bremen. "Biosignal
Processing for Modeling Human Everyday Activities"
10:45 – 11:15 Coffee Break
11:20 – 11:55 Stefanos Nikolaidis, University of Southern
California. "Learning Collaborative Action Plans from YouTube Videos"
12:00 – 12:35 Darius Burschka, Technical University of Munich.
"Understanding the Static and Dynamic Scene Context for Human-Robot
Collaboration in Households"
12:40 – 13:05 Joseph Lim, University of Southern California. TBD
13:05 – 14:00 Lunch
14:00 – 14:35 Georg von Wichert, Siemens. TBD
14:40 – 15:15 Chris Paxton, Nvidia Robotics lab. "From Pixels to
Task Planning and Execution"
15:20 – 15:40 Spotlight talks
15:45 – 16:15 Coffee Break and poster presentations
16:20 – 16:40 Poster presentations
16:45 – 17:20 Jesse Thomason, University of Washington. "Action
Learning from Realistic Environments with Directives"
17:25 – 17:40 Final remarks and end of the Workshop