[robotics-worldwide] [jobs] PhD Position on Human-Robots Interaction


Caroline Chanel
**This PhD thesis subject is financially supported by the Dassault
Aviation Chair**

https://urldefense.proofpoint.com/v2/url?u=https-3A__www.isae-2Dsupaero.fr_fr_isae-2Dsupaero_mecenat-2Drelations-2Davec-2Dla-2Dfondation-2Disae-2Dsupaero_chaire-2Ddassault-2Daviation_&d=DwIDaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=wxNyskrvJ9j887CKEEnypoHRyhiTD6_3yJvGSeim3aQ&s=fuBbriXpQmihpsiryzr2SLvI3vQjjrIO2n1eRPOJUgk&e= 

**Title:** Human-Robot Interaction: Integrating the Human Operator's
Physiological State into the Supervisory Control Loop

*Context*

The current ratio between UAV operators (O) and UAV units (N) is
O ≥ N. For example, in the US Army, each UAV is managed by several
operators: one follows the flight parameters, another handles the
payload, and a third is responsible for mission supervision. In the near
future this ratio is expected to be inverted (O < N) (Gangl et al.,
2013). Indeed, UAVs are becoming increasingly automated and able to make
decisions on their own, which reduces the need for so many operators.
The idea is that UAVs could exploit safety automation to achieve fully
autonomous navigation and even fully autonomous mission planning.
However, the human operator is still regarded as a providential agent
(Casper and Murphy, 2003; Schurr et al., 2009) who takes over from the
autonomous or automatic system when a hazardous event occurs. Yet it is
known that, in UAV operations, human factors account for the largest
share of accidents (Williams, 2004).

A promising research direction is for automated planning and execution
to include, in particular, actions that balance the operators' workload
(Gangl et al., 2013; de Souza et al., 2015a). In this view, the human
operator is no longer treated as a providential agent, but rather as an
integral part of the human-robot team, i.e. as an agent who can fail
(de Souza et al., 2015a). Researchers have recently begun to address
this delicate point. For example, Donath et al. (2010) propose a
framework in which a cognitive assistant cooperates with a fighter pilot
to support mission management and balance workload. Gateau et al. (2016)
proposed a decision framework in which UAVs take the human's
non-deterministic behavior and availability into account before issuing
requests during the mission. A reasonable key step would be to infer the
human's cognitive state (de Souza et al., 2015a) from physiological
sensors, in order to enforce safety constraints or even compliance with
operational guidelines, accepting that autonomous systems may take over
from the human operator when necessary.

*State-of-the-art*

Multi-robot and multi-UAV applications are increasingly studied by
different research communities. On the one hand, the robotics community
addresses multi-agent cooperation and coordination (Durfee, 2001;
Saber et al., 2003; Nigam et al., 2012; de Souza et al., 2015b), seeking
robust algorithms to drive joint robot or UAV actions in, for instance,
exploration, search-and-rescue, or surveillance missions. On the other
hand, human factors research studies and evaluates the conditions under
which the human reaches the limits of engagement during an operational
task (Pope et al., 1995; Régis et al., 2014; Dehais et al., 2015), with
particular attention to workload and fatigue (Cummings et al., 2013;
Roy et al., 2016a,b). A recent report of the U.S. Army, Navy, and Air
Force stated that human factors, including human error, are responsible
for 80% of UAV operation accidents (Williams, 2004). Some studies are
now exploring physiological measurements and behavioral data to infer
the human's cognitive state in such multi-UAV or multi-robot operational
contexts (Roy et al., 2016a; Senoussi et al., 2017; Drougard et al.,
2017a), in order to predict performance and adapt the human-robot
interaction.

Integrating such inferred cognitive states, based on physiological
measurements and behavioral data, into the control loop of a human-robot
team is a novel topic for both communities (Drougard et al., 2017a;
Gateau et al., 2016). Recent work in our lab has addressed this issue
(de Souza, 2017; Drougard et al., 2017a; Gateau et al., 2016; Senoussi
et al., 2017; Roy et al., 2016a). In the context of the Dassault
Aviation chair, we have proposed a human-robot cooperative mission
(Drougard et al., 2017b) in which the human operator and a robot must
extinguish fires: the operator has to manage the water tank while
monitoring which mode the robot is operating in (automatic or manual),
so as to take over the robot if necessary. We are currently acquiring
physiological data (ECG-based) from human operators. This problem is
challenging because all transition and observation probabilities must be
learned (Drougard et al., 2017a) in order to define a probabilistic
planning domain, on which a MOMDP (Mixed-Observability Markov Decision
Process) solver (de Souza et al., 2015a) can then compute a policy to
drive the human-robot interaction.
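To give a flavor of the mixed-observability setting, here is a minimal, purely illustrative sketch (not the lab's actual model): the operator's workload is a hidden state tracked with a Bayes filter from a noisy physiological observation, while the robot side is assumed fully observable; a greedy rule then decides whether to delegate a decision to the human. All transition, observation, and reward numbers below are invented for illustration — in the thesis they would be learned from data and the greedy rule replaced by a MOMDP policy.

```python
import numpy as np

# Hidden operator state: 0 = low workload, 1 = high workload.
# T[s, s'] = P(s' | s): assumed mission dynamics (illustrative values).
T = np.array([[0.9, 0.1],
              [0.3, 0.7]])
# O[s, o] = P(o | s): a binned physiological feature (illustrative values).
O = np.array([[0.8, 0.2],     # low workload mostly yields o = 0
              [0.25, 0.75]])  # high workload mostly yields o = 1
# Expected immediate reward of each action per hidden state (illustrative):
# delegating helps when workload is low, hurts when high; autonomy is flat.
R = {"ask_human": np.array([1.0, -1.0]),
     "act_autonomously": np.array([0.2, 0.2])}

def belief_update(b, o):
    """Bayes filter step: predict with T, correct with the likelihood of o."""
    predicted = b @ T
    posterior = predicted * O[:, o]
    return posterior / posterior.sum()

def greedy_action(b):
    """Pick the action maximizing expected immediate reward under belief b."""
    return max(R, key=lambda a: float(b @ R[a]))

b = np.array([0.5, 0.5])   # uninformative initial belief
for o in [1, 1, 1]:        # a run of 'high workload' observations
    b = belief_update(b, o)
print(greedy_action(b))    # prints "act_autonomously"
```

A real MOMDP policy would of course optimize the long-run return rather than the one-step reward, but the belief-tracking machinery is the same.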

The relatively recent field of physiological computing proposes metrics
derived from electroencephalography (EEG) to assess the operator's
mental state (e.g. fatigue, workload, stress, engagement). These
cerebral measures can be processed through a signal processing pipeline
that includes a machine learning step to automatically estimate a given
mental state and close the loop. Such a system is called a passive
brain-computer interface (pBCI) (Roy and Frey, 2016). Additional
peripheral measures, such as electrocardiography (ECG) and eye tracking,
can also provide relevant metrics, and fusing these modalities can yield
a better estimate of the operator's state and therefore improve the
human-robot interaction.
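A toy version of such a pipeline can be sketched as follows. This is purely illustrative: synthetic single-channel "EEG" epochs, alpha-band power as the feature (a common simplified proxy for mental workload), and a nearest-class-mean rule standing in for the machine-learning step; the sampling rate, amplitudes, and class labels are all assumptions, and a real pBCI would use multichannel data, proper filtering, and a trained classifier.

```python
import numpy as np

FS = 128                              # sampling rate in Hz (assumed)
rng = np.random.default_rng(0)

def make_epoch(alpha_amp):
    """Synthetic 1-s epoch: noise plus a 10 Hz (alpha-band) component."""
    t = np.arange(FS) / FS
    return alpha_amp * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, FS)

def band_power(epoch, lo, hi):
    """Mean spectral power in [lo, hi] Hz from the epoch's FFT."""
    freqs = np.fft.rfftfreq(len(epoch), 1 / FS)
    psd = np.abs(np.fft.rfft(epoch)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

# Training data: 'relaxed' epochs have strong alpha, 'loaded' epochs weak
# alpha (a simplified workload proxy, invented for this sketch).
relaxed = [band_power(make_epoch(1.5), 8, 12) for _ in range(50)]
loaded = [band_power(make_epoch(0.2), 8, 12) for _ in range(50)]

# "Learning" step reduced to its simplest form: one prototype per class.
prototypes = {"relaxed": np.mean(relaxed), "loaded": np.mean(loaded)}

def classify(epoch):
    """Assign the epoch to the class with the nearest feature prototype."""
    f = band_power(epoch, 8, 12)
    return min(prototypes, key=lambda c: abs(f - prototypes[c]))

print(classify(make_epoch(1.5)))  # strong alpha, so likely 'relaxed'
```

The point of the sketch is the structure — epoching, feature extraction, a learned decision rule — which is exactly what closing the loop in a pBCI requires, whatever the actual features and classifier are.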

In this sense, this thesis proposes a framework in which an artificial
agent, able to continuously estimate the human's cognitive state and the
UAVs' states, is in charge of driving the mission and chooses which
agent is best suited, at a given decision step, to make high-level
decisions. This work continues studies already conducted in our lab
(de Souza et al., 2015a; Gateau et al., 2016; Drougard et al., 2017a),
but this time it should take physiological measurements into account and
perform an online estimation of the human's cognitive state to adapt the
human-robot interaction in an ecological mission context.

*Thesis proposition*

A first research track will define the mission scenario, in which a
human operator (pilot) has to cooperate with and coordinate UAVs while
piloting his or her own aircraft (Gangl et al., 2013). This operator
would be in charge of difficult decisions, such as recognizing targets
(Gateau et al., 2016) or deciding to rescue possible victims (de Souza
et al., 2016). Previous studies have demonstrated that an artificial
agent able to infer the operator's availability during the mission can
help the operator perform better while decreasing his or her workload
(Donath et al., 2010; Gateau et al., 2016). Another study demonstrated
that such an agent can also predict the operator's performance when a
decision must be taken within a short amount of time (de Souza, 2017).
Choosing how to present information to the operator makes it possible to
maximize the chances that he or she takes a decision aligned with the
operational guidelines. A second research point will define which
physiological measures should be explored. Based on previous studies
(Roy et al., 2016a; Senoussi et al., 2017; Poussot-Vassal et al., 2017),
we believe that combining EEG and ECG data with eye-tracking data could
increase the precision of the human state estimator and better predict
the operator's performance. The last issue is the integration of this
psychophysiological information into the decision framework.

These interesting and challenging aspects are the starting point of this
PhD thesis, and the proposed study plan can be decomposed as follows:
— First year: literature review and positioning with respect to
state-of-the-art approaches. Definition of the mission scenario. First
experiments to collect physiological data, in order to determine which
metrics best reflect the human's cognitive state observed in such
experiments.
— Second year: definition of the decision model, including online
classification of the targeted human cognitive states and overall
mission modeling.
— Third year: final closed-loop experiments to evaluate the proposed
approach. Thesis writing.

Papers would be submitted to the IEEE Systems, Man, and Cybernetics
(SMC) conference and/or journal, and to the IROS, HRI, ICRA, ICAPS, and
AAMAS conferences.

**PhD Candidate's profile**

— Background in Applied Mathematics, Artificial Intelligence, or
Automatic Control;
— Strong programming skills;
— Autonomous, hard-working problem-solver;
— Interested in Human Factors research.

Candidates are invited to apply by e-mail to
*[hidden email]* and *[hidden email]*,
sending a CV, a motivation letter, recommendation letters, and
transcripts.



--
-- Caroline P Carvalho Chanel --
Ingénieur Chercheur au DCAS
ISAE - Institut Supérieur de l'Aéronautique et de l'Espace
10 av. Edouard Belin - BP 54032
31055 TOULOUSE Cedex 4
+33 (0)5 61 33 81 50

_______________________________________________
robotics-worldwide mailing list
[hidden email]
http://duerer.usc.edu/mailman/listinfo.cgi/robotics-worldwide