[robotics-worldwide] [Jobs] Computer Vision and Machine Learning Positions at Honda Research Institute (Silicon Valley)

Behzad Dariush
Honda Research Institute USA (HRI-US) is at the cutting edge of Honda's research and development activities. Inspired by Honda's global slogan - The Power of Dreams - we pursue emerging technologies and bring them into reality to make people happy, even as we engage daily in highly scientific, pioneering work. We realize that dreams don't come from organizations, systems, or money. They come from people, and we seek people with that kind of challenging spirit to work with us.

HRI-US (Silicon Valley) is searching for talented scientists and engineers with expertise in computer vision and machine learning to join our team in Mountain View, California, and support activities in next-generation mobility systems.

As a member of the group, you will be encouraged to contribute new research ideas and to participate in presentations and scientific publications. You will also have the opportunity to build close relationships with our research partners at world-class universities and at the other Honda Research Institutes in Europe and Japan to create cutting-edge solutions to complex, real-world problems.
For more information, please visit:
http://usa.honda-ri.com/Pages/Careers.aspx

---------------------------------------------------------------------------------------------------
Scientist:  3D Computer Vision (Job Number:  P16F04)
This position offers the opportunity to work on a broad and exciting set of
problems related to
processing of 3D point cloud data, including recognition, registration,
segmentation, tracking,
representation, and transmission.

Key Responsibilities:
- Propose, create, and implement state-of-the-art point cloud segmentation and classification algorithms.
- Develop algorithms for spatial and temporal registration of 3D point
cloud data, recorded from
multiple LiDAR sensors.
- Develop and evaluate metrics to verify the reliability of the proposed
algorithms.
- Participate in ideation, creation, and evaluation of various related
technologies, including 3D
SLAM.
- Contribute to a portfolio of patents, academic publications, and
prototypes to demonstrate
research value.
- Participate in software development and implementation on various
experimental platforms.

Qualifications:
- PhD in computer science, electrical engineering, or related field.
- Strong familiarity and research experience in 3D computer vision and
machine learning.
- Hands-on experience in one or more of the following: LiDAR data processing, Simultaneous Localization and Mapping (SLAM), perception, machine learning, sensor fusion.
- Preferred hands-on experience in handling multi-modal sensor data.
- Highly proficient in software engineering using C++ and Python.
- Preferred experience with Point Cloud Library (PCL), Robot Operating
System (ROS), and GPU
programming.
- Preferred experience in open-source Deep Learning frameworks such as
TensorFlow or Caffe.
- Strong written and oral communication skills including development and
delivery of
presentations, proposals, and technical documents.
- Strong publication record in one or more of the following areas: 3D computer vision, machine learning, or SLAM.

---------------------------------------------------------------------------------------------------
Scientist:  Video/Multimodal Data Analytics (Job Number:  P16F03)
This position offers the opportunity to conduct innovative research on a
broad set of problems related
to multi-modal temporal segmentation.

Key Responsibilities:
- Propose, create, and implement supervised and unsupervised data
segmentation/clustering
algorithms from multimodal and multisensory data streams obtained from
traffic scenes.
- Develop and evaluate metrics to verify reliability of the proposed
algorithms.
- Participate in ideation, creation, and evaluation of related technologies
in various domains other
than traffic scenes, including temporal segmentation of human activities.
- Contribute to a portfolio of patents, academic publications, and
prototypes to demonstrate
research value.
- Participate in data collection, sensor calibration, and data processing.
- Participate in software development and implementation on various
experimental platforms.

Qualifications:
- PhD in computer science, electrical engineering, or related field.
- Research experience in computer vision, machine learning, and multi-modal
signal processing.
- Strong familiarity with machine learning techniques pertaining to
sequential data processing.
- Preferred hands-on experience in handling multi-modal sensor data.
- Preferred experience in open-source Deep Learning frameworks such as
TensorFlow or Caffe.
- Highly proficient in software engineering using C++ and Python.
- Strong written and oral communication skills including development and
delivery of
presentations, proposals, and technical documents.
- Strong publication record in one or more of the following areas: computer vision or machine learning.

---------------------------------------------------------------------------------------------------
Research Engineer:  Computer Vision/Multimodal Systems (Job Number:  P16F02)
The position focuses on vehicle sensor calibration and on applying machine learning and computer vision algorithms to data recorded from our experimental platforms.

Key Responsibilities:
- Applying computer vision/multimodal data analysis methods to data collected from street scenes using our advanced test vehicles.
- Developing processes, procedures, and algorithms for calibration of
multiple sensors including
camera, LiDAR, GPS, IMU, CAN-bus, and radar.

Qualifications:
- M.S. or PhD in computer science, electrical engineering, or related field.
- Research experience in computer vision and driver behavioral data analytics.
- Hands-on experience in multi-sensor calibration and setup.
- Highly proficient in software engineering using C++ and Python.
- Experience in Robot Operating System (ROS) preferred.
- Experience in hardware and software synchronization of asynchronous data
streams preferred.
- Strong written and oral communication skills including development and
delivery of
presentations, proposals, and technical documents.

---------------------------------------------------------------------------------------------------
[Contract Software Engineer]:  Computer Vision and Sensor Fusion (Job Number: P16T02)
This position offers the opportunity to work on real-world perception problems such as object detection, tracking, and sensor fusion, and to deploy your solutions to real autonomous driving (AD) vehicles.

Key Responsibilities:
- Integrate and implement perception algorithms for our AD vehicles.
- Improve the runtime robustness of these perception algorithms.
- Perform real-time optimization on automotive platforms equipped with GPUs.
- Perform evaluation of the developed algorithms both in simulation and in the real world.
- Translate perception research output into efficient code.

Minimum Qualifications:
- M.S. in Computer Science, Electrical Engineering, or related field.
- Strong programming skills in Python or C++.

Preferred Qualifications:
- Expertise in Computer Vision and Machine Learning.
- Hands-on experience with libraries such as OpenCV and NumPy.
- Experience with processing different sensor modalities such as LiDAR and radar.
- Experience in deep learning frameworks such as TensorFlow or Caffe.
- Experience with GPU programming.
- Familiarity with Robot Operating System (ROS).

Duration:
- 2 years

---------------------------------------------------------------------------------------------------
[Contract Systems Engineer]:  Sensor Fusion for Automotive Interfaces (Job
Number:  P16T03)
This position offers the opportunity to work on a broad and exciting set of
problems related to research
and prototyping of automotive interfaces for next generation autonomous
driving and advanced driver
assistance systems using multiple modalities working together.

Key Responsibilities:
- Use information from various sensor signal streams (e.g., cameras, LiDAR, GPS, CAN-bus data) and apply data fusion and inference methods to create new interactive experiences for the driver.
- Work with a team of engineers and scientists with diverse backgrounds to design and prototype your ideas, from concept through execution and evaluation.

Minimum Qualifications:
- M.S. in Computer Science, Electrical Engineering, or related field.
- Strong background in sensor signal processing, data fusion, and probabilistic inference.
- Excellent skills in C++/C#, Python, ROS, Linux.
- Versatility with a variety of relevant APIs such as OpenCV and Unity.

Desirable:
- Experience designing, implementing and evaluating interactive systems.
- Experience in implementing multimodal systems using a combination of
sensors.

Duration:
- 2 years

---------------------------------------------------------------------------------------------------
Candidates must possess excellent interpersonal and communication skills, an eagerness to learn and grow, and a flexible approach to solving problems.

Application Instructions:
Please send an e-mail to
[hidden email]

with the following:
- Subject line including the job number you are applying for
- Recent CV
- A cover letter explaining how your background matches the qualifications

Candidates must have the legal right to work in the U.S.A.
_______________________________________________
robotics-worldwide mailing list
[hidden email]
http://duerer.usc.edu/mailman/listinfo.cgi/robotics-worldwide