**Apologies for cross-posting**
HRI 2018 Workshop
*Explainable Robotic Systems*
Held on March 5, 2018, in conjunction with the HRI 2018 conference, Chicago, IL, USA.
The call for Autonomous Intelligent Systems (AIS) to be transparent has
grown loud and clear and is now a pressing funding and research agenda.
Some forms of transparency, such as traceability and
verification, are particularly important for software and hardware
engineers; other forms, such as explainability or intelligibility, are
particularly important for ordinary people. As artificial agents, and
especially socially interactive robots, enter human society, the demands
for these agents to be transparent and explainable grow rapidly. When a
system can explain, for example, how it made a classification or arrived
at a judgment, users are better able to judge the system's accuracy and
adequacy and to calibrate their trust in it.
More and more AI systems process vast amounts of information and make
classifications or recommendations that humans use for financial,
employment, medical, military, and political decisions. More precariously
yet, autonomous social robots, by definition, make decisions reaching
beyond direct commands and perform actions with social and moral
significance for humans. The demand for these agents to become transparent
and explainable is particularly urgent. However, to make robots
explainable, we need to understand how people interpret the behavior of
such systems and what expectations they have of them.
Aim of the workshop
In this workshop, we will address the topics of transparency and
explainability, for robots in particular, from both the cognitive-science
perspective and the computer-science and robotics perspective. Cognitive
science elucidates how people interpret robot behavior; computer science
and robotics elucidate how the computational mechanisms underlying robot
behavior can be structured and communicated so as to be
human-interpretable. The implementation and use of explainable robotic
systems may prevent the potentially frightening confusion over why a robot
is behaving the way it is. Moreover, explainable robotic systems may allow
people to better calibrate their expectations of the robot’s capabilities
and be less prone to treating robots as almost-humans.
The aim of this full-day workshop is to provide a forum to share and learn
about recent research on requirements for artificial agents’ explanations
as well as the implementation of transparent, predictable and explainable
robotic systems. Extended time for discussions will highlight and document
promising approaches and encourage further work. A large part of this
effort is to bring together a community of researchers, strengthen existing
connections, and build new ones.
The morning session will feature invited talks from Rachid Alami, Joanna
Bryson, Bradley Hayes, and Alessandra Sciutti, and short presentations in
themed discussion sessions around the key topics that are raised by
accepted paper submissions.
The afternoon will be devoted to break-out sessions to discuss the next
steps in research and development of explainable and transparent robotic
systems. Groups will be composed of representatives of different
disciplines in order to work on integrating the multiple necessary
perspectives in this endeavor. To boost the discussions, we will ask
presenters to prepare questions or raise pressing issues that provide
starting points for the discussion groups.
Call for papers
In this workshop, we want to bring together researchers and practitioners
from a wide range of different disciplines who are interested in
explainable robot and AI systems. We welcome multidisciplinary
contributions that intersect with robot systems (e.g. human-human
interaction, HCI, HRI, human factors, engineering, computer science,
cognitive science, interactive design, sociology, anthropology, and
psychology).
We invite prospective participants to submit extended abstracts (max. 2
pages) covering any topic that could potentially contribute to the
discussion of people’s interpretation of robot actions as well as the
implementation of transparent, predictable and explainable behaviors in
robotic systems. All papers should be submitted in PDF format using the HRI
LBR template, and should be sent to [hidden email]. All
submitted papers within the scope of the workshop will be peer-reviewed.
Papers will be selected based on their originality, relevance,
contributions, technical clarity, and presentation. Accepted papers will
require that at least one author registers for and attends the workshop.
After the conference, and with permission of the authors, we will provide
online access to the workshop proceedings on this website. In addition, we
have submitted a special issue proposal to the ACM Transactions on
Human-Robot Interaction (formerly the Journal of HRI), giving accepted
authors the opportunity to disseminate full versions of their work and
other ideas developed during the workshop.
We encourage researchers to attend the workshop even without a paper
submission. Our goal is to maximize community engagement and the uptake of
concepts regarding explainable robotic systems within the field of HRI.
Submission deadline: January 12, 2018
Acceptance notification: January 31, 2018 (or before the early bird
registration deadline)
Camera-ready deadline: February 15, 2018
Organizers
*Maartje De Graaf* - Brown University, USA
*Bertram Malle* - Brown University, USA
*Anca Dragan* - UC Berkeley, USA
*Tom Ziemke* - University of Skövde and Linköping University, Sweden
For more information: https://explainableroboticsystems.wordpress.com