[robotics-worldwide] [meetings] [1st Call for papers] Ro-MAN 2017 Workshop on Social Interaction and Multimodal Expression For Socially Intelligent Robots (WS-SIME)


Christiana Tsiourti
Apologies for cross-posting


*[1st Call for papers] Ro-MAN 2017 Workshop on Social Interaction and
Multimodal Expression For Socially Intelligent Robots (WS-SIME)*

August 28, 2017

Pestana Palace Hotel, Lisbon, Portugal

Workshop organized in conjunction with the RO-MAN 2017
<http://www.ro-man2017.org/site/> conference.



The aim of this full-day workshop is to present rigorous scientific
advances on social interaction and multimodal expression for socially
intelligent robots, to address current challenges in this area, and to
set a research agenda that fosters interdisciplinary collaboration
among researchers in the domain.

Recent advances in the fields of robotics and artificial intelligence
have contributed to the development of "socially interactive robots"
that engage in social interactions with humans and exhibit certain
human-like social characteristics, including the abilities to
communicate through high-level dialogue, to perceive and express
emotions using natural multimodal cues (e.g., facial expression, gaze,
body posture), and to exhibit distinctive personalities and characters.
Applications for socially interactive robots are plentiful: companions
for children and the elderly, household assistants, partners in
industry, guides in public spaces, educational tutors at school, and so
on. Despite this progress, the social interaction and multimodal
expression capabilities of robots still fall far short of the
intuitiveness and naturalness required to allow uninformed users to
interact with robots and to establish and maintain social relationships
with them in everyday life.

The area of social interaction and multimodal expression for socially
intelligent robots remains very much an active research area with
significant challenges in practice due to limitations both in technology
and in our understanding of how different modalities must work together
to convey human-like levels of social intelligence. Designing reliable
and believable social behaviors for robots is an interdisciplinary
challenge that cannot be approached from a purely engineering
perspective. The human, social, and cognitive sciences play a primary
role in the development and enhancement of social interaction skills
for socially intelligent robots.

This workshop will bring together a multidisciplinary audience
interested in the study of multimodal human-human and human-robot
interactions to address challenges in these areas and to elaborate on
novel ways to advance research in the field, based on theories of human
communication and empirical findings validated in human-robot
interaction studies. We welcome contributions on theoretical aspects as
well as practical applications. The analysis of human-human
interactions is of particular importance for understanding how humans
send and receive social signals multimodally, through both parallel and
sequential use of multiple modalities (e.g., eye gaze, touch, vocal,
body, and facial expressions). Results achieved by researchers studying
human-robot interactions offer the opportunity to understand how
uninformed interaction partners (e.g., children, the elderly) perceive
the multimodal communication skills developed for social robots and how
these skills influence the interaction process (e.g., regarding
usability).
*Topics of interest*
Workshop topics include, but are not limited to:

1. Contributions of a fundamental nature

   a. Psychophysical studies and empirical research about multimodality

2. Technical contributions on multimodal interaction

   a. Novel strategies for multimodal human-robot interaction

   b. Dialogue management using multimodal output

   c. Work focusing on novel modalities (e.g., touch)

3. Multimodal interaction evaluation

   a. Evaluation and benchmarking of multimodal human-robot interactions

   b. Empirical HRI studies with (partially) functional systems

   c. Methodologies for the recording, annotation, and analysis of
   multimodal interactions

4. Applications for multimodal interaction with social robots

   a. Novel application domains for multimodal interaction

5. Position papers and reviews of the state of the art and ongoing research


WS-SIME is a full-day workshop, including two invited talks, two rounds
of paper presentations, and an interactive poster session with an
interdisciplinary set of selected participants. After the
presentations, there will be a brainstorming discussion, and the
workshop will close with a final summary and outlook.

We invite two types of submissions:


*1) FULL PAPERS (4 pages):* Accepted submissions will be given an oral
presentation at the workshop (15 minutes + 5 minutes of Q&A).


*2) SHORT PAPERS (2 pages):* Accepted submissions will have a 1-minute
“teaser” presentation and will be presented in an interactive poster
session at the workshop.

All papers will go through a single-blind peer review and will be
selected based on relevance, originality, and theoretical and/or
technical quality. Papers must be formatted according to the guidelines
and templates on the main RO-MAN 2017 conference website
<http://ras.papercept.net/conferences/support/support.php>, anonymized,
and submitted in PDF format through the EasyChair conference system
<https://easychair.org/conferences/?conf=wssime2017>. At least one
author of each accepted paper will be required to register and attend
the workshop. Accepted full paper contributions will be published in
the online CEUR Workshop Proceedings (CEUR-WS.org).

*Important Dates*
- Paper submission deadline: June 9, 2017

- Author notification: June 22, 2017

- Camera-ready submission: July 5, 2017

- Main conference: August 28 - September 1, 2017

- Workshop day: August 28, 2017


*Invited speakers*
Ana Paiva – Instituto Superior Técnico, Technical University of Lisbon

Jorge Dias – Institute of Systems and Robotics, University of Coimbra

*Organizers*
Christiana Tsiourti (Institute of Service Science, University of Geneva)

Jorge Dias (Institute of Systems and Robotics, University of Coimbra)

Astrid Weiss (ACIN Institute of Automation and Control, Vienna
University of Technology, Austria)

Sten Hanke (Center for Health & Bioresources, AIT Austrian Institute of
Technology GmbH, Austria)

Julian Angel-Fernandez (ACIN Institute of Automation and Control,
Vienna University of Technology, Austria)


*Contact*
Christiana Tsiourti: [hidden email]

Sten Hanke: [hidden email]

robotics-worldwide mailing list
[hidden email]