[robotics-worldwide] [meetings] Final Schedule: RO-MAN Workshop, WCIR-2019



K Madhava Krishna

Dear All,

Please find the final schedule for the Workshop on Cognitive and Interactive
Robotics (https://robotics.iiit.ac.in/roman-workshop/), held as part of
RO-MAN 2019.

                    Venue: R5
                    Time: 2 - 5pm
                    Date: 14 October 2019

2:00 pm: Initial Remarks
2:10 - 3:00 pm: Keynote Talk by Prof. Rachid Alami
3:00 - 3:45 pm: Keynote Talk by Prof. Dinesh Manocha
3:45 - 4:30 pm: Keynote Talk by Prof. Mohan Sridharan
4:45 - 5:15 pm: Paper Presentations

Title: Implementing Robot Navigation in Human Environments as a
Human-Robot Cooperative Activity
---------------------------------------------------------------------------------------------------------------------

Speaker: Rachid Alami
              LAAS-CNRS
              https://homepages.laas.fr/rachid
              e-mail: [hidden email]

Abstract:
We claim that navigation in human environments can be viewed as a
cooperative activity, especially in constrained situations. Humans
concurrently aid and comply with each other while moving in a shared space.
Cooperation helps pedestrians reach their own goals efficiently while
respecting conventions such as others' personal space.

To match human-level efficiency, a robot needs to predict human intentions
and trajectories and plan its own trajectory accordingly in the same shared
space. In this work, we present a reactive navigation planner that is able
to plan such cooperative trajectories.

Sometimes it is even necessary to influence other agents, or even compel
them to act in a certain way.

Using robust social constraints, potential resource conflicts, compatibility
of human-robot motion direction, and proxemics, our planner is able to
replicate human-like navigation behavior not only in open spaces but also in
confined areas. Besides adapting the robot trajectory, the planner is also
able to proactively propose co-navigation solutions by jointly computing
human and robot trajectories within the same optimization framework. We
demonstrate the richness and performance of the cooperative planner with
simulated and real-world experiments on multiple interactive navigation
scenarios.

---------------------------------------------------------------------------------------------------------------------

Title: Refinement-based Architecture for Knowledge Representation,
Explainable Reasoning, and Interactive Learning in Robotics

Speaker: Mohan Sridharan
              https://www.cs.bham.ac.uk/~sridharm/

Abstract:

This talk describes an architecture for robots based on the principle of
step-wise refinement, and inspired by theories of human cognition and
control. The architecture computationally encodes theories of intention,
affordance, and explanation, and the principles of persistence,
non-procrastination, and relevance. It is based on tightly-coupled
transition diagrams of the domain at different resolutions, with a
fine-resolution transition diagram defined as a refinement of a
coarse-resolution diagram. For any given goal, non-monotonic logical
reasoning with incomplete commonsense knowledge at the coarse resolution
provides a plan of abstract actions. Each abstract action is implemented as
a sequence of more concrete actions by automatically zooming to and
reasoning with the relevant part of the fine-resolution transition diagram.
Execution of each concrete action is based on probabilistic models of the
uncertainty in sensing and actuation, with the corresponding outcomes being
used for subsequent coarse-resolution reasoning. In addition, the
architecture uses inductive learning, relational reinforcement learning, and
deep learning to acquire previously unknown knowledge of domain dynamics.
Furthermore, the architecture provides explanatory descriptions of the
decisions, the underlying beliefs, and the related experiences, at the
desired level of abstraction. This talk will illustrate the architecture's
capabilities in the context of simulated and physical robots assisting
humans in moving and manipulating objects in indoor domains.

---------------------------------------------------------------------------------------------------------------------

Title: Motion Planning Technologies for Human-Robot Interaction

Speaker: Dinesh Manocha
                Department of Computer Science and Electrical & Computer
Engineering
                University of Maryland at College Park
                http://gamma.cs.unc.edu/


Abstract:

Robots are increasingly being used in manufacturing, assembly, warehouse
automation, and service industries. However, current robots have limited
capabilities in terms of handling new environments or working next to or
with humans. In this talk, we highlight some challenges in developing motion
and task planning capabilities that can enable robots to operate
autonomously in such environments. These include real-time planning
algorithms that can also integrate with current sensing and perception
techniques. We present new techniques for real-time motion planning and show
how they can be integrated with vision-based algorithms for human action
prediction as well as natural language processing. We address many issues
related to human motion prediction and to mapping high-level robot commands
to actions using appropriate planning algorithms. We also present new
collision and proximity algorithms for handling sensor data, along with
real-time optimization algorithms that take various constraints into account
and utilize commodity parallel processors (e.g., GPUs) to compute real-time
solutions. Furthermore, we combine these with dynamics and stability
constraints to generate plausible plans. We also extend these ideas to
simulation and navigation for high-DOF manipulators and demonstrate their
benefits in dense scenarios. The resulting approaches use a combination of
ideas from AI planning, topology, optimization, computer vision, machine
learning, natural language processing, and parallel computing. We also
demonstrate many applications, including autonomous picking (e.g., the
Amazon Picking Challenge), avoiding human obstacles, cloth manipulation,
robot navigation in dense environments, and operating as cobots for
human-robot interaction.

Regards,

Workshop chairs






_______________________________________________
robotics-worldwide mailing list
[hidden email]
http://duerer.usc.edu/mailman/listinfo.cgi/robotics-worldwide