[robotics-worldwide] [jobs] ANITI - FRANCE : CertifIA -- 2 PhD and 1 post doc positions

Jeremie Guiochet
Dear all,
Within the ANITI (Artificial Intelligence Institute of Toulouse, France) project, the CertifIA chair offers two PhD positions and one postdoc position.

-------------------------------------------------------------------------------------------------------------
Application Procedure
Formal applications should include a detailed CV, a motivation letter, and transcripts of the bachelor's degree.
Samples of published research by the candidate and reference letters will be a plus. Applications should be
sent by email to: [hidden email]
More information about ANITI: https://aniti.univ-toulouse.fr/
------------------------------------------------------------------------------------------------------------


**********************************************************************

PhD position with Airbus: Certified programming framework for machine learning applications

Beginning: September 2020

Net salary: €2096 per month, with some teaching (64 hours per year on average)

Topic: The scope of the PhD is the real-time implementation of neural networks on computing platforms for future aircraft systems. The first objective is to review existing COTS technologies
available on the market for executing neural network applications (e.g. Kalray Coolidge, the NXP QorIQ family). Beyond the hardware itself, it is also important to investigate the associated
software frameworks (e.g. TensorFlow), along with their code generation approaches and compilation procedures. Predictable and safe implementation strategies will then be developed for a candidate
COTS architecture and a family of applications.
**********************************************************************

PhD position with NXP: Methodology for integrating neural network IP in a chip with safety assurance

Beginning: September 2020

Net salary: €2096 per month, with some teaching (64 hours per year on average)

Topic: NXP is a chip maker that has been designing chips for the embedded market for many years. The objective is to prepare the next generation of chips for vision-based computing, automotive and
autonomous driving, and radar/lidar computing. These chips will integrate neural network (NN) hardware IP and must ensure the level of safety required by ISO 26262. To that end, a safety analysis
will be performed to identify the potential random hardware failures associated with the NN component and to characterize their effects. Mitigation means will then be defined to handle these failures.
Validation and fault injection will be carried out on an FPGA-based emulator developed at NXP.

**********************************************************************

Postdoctoral position: Runtime Verification for Critical Machine Learning Applications

Beginning: as soon as possible, for a duration of 12 to 24 months

Net salary: negotiable, with a minimum of €2600 per month, with some teaching (20 hours per year on average)

Topic: Over the last decade, the application of Machine Learning (ML) has attracted increasing interest in various application domains, especially for a wide range of complex tasks
(e.g. image-based pedestrian detection) classically performed by human operators (e.g. the driver). When ML is used in safety-critical domains, designers must demonstrate that
the obtained models are reasonably safe.

In this post-doc, we propose to specify, implement, and verify a new runtime verification approach for ML: an adversarial runtime monitor. This approach is based on adversarial inputs generated at runtime
and used to assess whether the ML model may be fooled into an unsafe state. This allows the monitor to detect whether the ML model is in a potentially unsafe erroneous state, or in an erroneous but safe state. Once such a monitor is designed, we also plan to use formal methods (verification) to prove its correctness. This work will be applied to a case study: ML software for drone collision avoidance studied and deployed in the context of the Delta project. The code of the Delta project is available at https://github.com/delta-onera/delta_tb/tree/master/workspace/isprs
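To make the monitor concept concrete, here is a minimal, purely illustrative sketch in Python. It is not from the Delta project code: the toy linear classifier, the function names, and the epsilon bound are all assumptions. The idea it demonstrates is the one described above: at runtime, generate a worst-case adversarial perturbation within a small L-infinity ball around the current input and flag the state as potentially unsafe if the model's decision flips.

```python
def linear_model(x, w, b):
    """Toy binary classifier: 1 (nominal) or 0 (unsafe). Illustrative only."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else 0

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def adversarial_monitor(x, w, b, epsilon=0.1):
    """Return True if the decision is stable in an epsilon L-inf ball around x.

    For a linear model the worst-case adversary is analytic (an FGSM-style
    step): shift every coordinate by epsilon so as to push the score toward
    the opposite class. For a real neural network this step would be replaced
    by a runtime adversarial-example search.
    """
    base = linear_model(x, w, b)
    direction = [(-sign(wi) if base == 1 else sign(wi)) for wi in w]
    x_adv = [xi + epsilon * di for xi, di in zip(x, direction)]
    return linear_model(x_adv, w, b) == base

w, b = [1.0, -2.0], 0.0
print(adversarial_monitor([3.0, 0.0], w, b))  # far from the boundary: True
print(adversarial_monitor([0.1, 0.0], w, b))  # near the boundary: False
```

A monitor built this way only reports instability at the current input; deciding whether a flipped decision is actually *unsafe* (versus erroneous but safe) requires the safety classification of states discussed above.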
**********************************************************************

Contact me for more information: [hidden email]

_______________________________________________
robotics-worldwide mailing list
[hidden email]
http://duerer.usc.edu/mailman/listinfo.cgi/robotics-worldwide