[robotics-worldwide] [journals] ISPRS Journal of Photogrammetry and Remote Sensing Theme Issue “Point Cloud Processing”


Ruisheng Wang

ISPRS Journal of Photogrammetry and Remote Sensing Theme Issue “Point Cloud Processing”

Guest Editors:

Ruisheng Wang (University of Calgary, Canada)

Bisheng Yang (Wuhan University, China)

Florent Lafarge (INRIA, France)

Suya You (University of Southern California, USA)

 

Submission deadline: Oct. 3, 2017

Planned publication date: Summer 2018

 

Over the past few decades, point clouds from LiDAR (light detection and ranging) and passive imaging technologies have been major data sources for mapping applications in the photogrammetry and remote sensing communities. In recent years, processing large-scale geospatial data, especially point clouds, has also drawn considerable attention from the computer vision, computer graphics and robotics communities. Workshops at recent premier computer vision and graphics conferences, such as the workshop on point cloud processing in computer vision at CVPR (computer vision and pattern recognition) 2012 and the IQmulus workshop on processing large geospatial data at SGP (symposium on geometry processing) 2014, were focused on point cloud processing. Several ISPRS conference events, such as the 2014 Photogrammetric Computer Vision Symposium, have also placed an emphasis on point cloud processing. The purpose of this theme issue is to increase interdisciplinary interaction and collaboration in point cloud processing among photogrammetry, computer vision, computer graphics (geometry processing and geometric modeling), and robotics.

This theme issue covers a range of topics on point clouds generated from LiDAR and various image sources. Georeferenced point clouds of indoor and outdoor scenes collected from different platforms such as aircraft, UAVs (unmanned aerial vehicles), vehicles, terrestrial scanning, hand-held devices, and backpacks are particularly relevant to this theme issue. Point clouds generated from images such as aerial images, satellite imagery, street view panoramas, and camera phone images are also of relevance. The topics range from low-level processing to high-level understanding, including feature extraction, segmentation, recognition, and modeling. The list of suggested topics includes but is not limited to:

·      Deep learning for point cloud processing

·      Point clouds from stereo, panoramas, camera phone images, oblique and satellite imagery

·      Point cloud registration and segmentation

·      High performance computing for large-scale point clouds

·      3D object recognition, classification, and change detection

·      Large-scale urban modeling from aerial and mobile LiDAR

·      2D floorplan generation and 3D modeling of indoor point clouds

·      Fusion of images and point clouds for semantic segmentation

·      Industrial applications with large-scale point clouds

·      Rendering and visualization of large-scale point clouds

The theme issue seeks high-quality research and application submissions in all aspects of 3D point cloud processing. Tests on benchmark datasets are strongly encouraged. Papers must be original contributions, not previously published or submitted to other journals. Submissions based on previously published or submitted conference papers may be considered provided they are considerably improved and extended. Papers must follow the instructions for authors at https://urldefense.proofpoint.com/v2/url?u=http-3A__www.elsevier.com_journals_isprs-2Djournal-2Dof-2Dphotogrammetry-2Dand-2Dremote-2Dsensing_0924-2D2716_guide-2Dfor-2Dauthors&d=DwIFaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=TVB5Ge1sJc2SXBXhH1o-2C-fYY-bIDHYzBnoTP_LYNk&s=VMWzIJd2EeBDVGtwMEk_EAfVNg-0U_BBhgZtR9D-WCc&e=

Submit your manuscript to https://urldefense.proofpoint.com/v2/url?u=https-3A__www.evise.com_profile_-23_PHOTO_login&d=DwIFaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=TVB5Ge1sJc2SXBXhH1o-2C-fYY-bIDHYzBnoTP_LYNk&s=h05FPxOxqSfmdnTrJXEjeSp9u5jrcuZuD8bCBZxlzvc&e= by Oct. 3, 2017.

 

 

Prof. Ruisheng Wang

Department of Geomatics Engineering
University of Calgary, Canada

[hidden email]

 

Prof. Bisheng Yang

State Key Laboratory of Information Engineering in Surveying,Mapping and Remote Sensing

Wuhan University, China

[hidden email]

 

Dr. Florent Lafarge

Titane research group

INRIA, France
[hidden email]



Prof. Suya You

The Computer Graphics and Immersive Technologies (CGIT)laboratory

University of Southern California, USA

[hidden email]

 

 


Ruisheng Wang, Ph.D. P.Eng.

Associate Professor 
Department of Geomatics Engineering
Schulich School of Engineering 
University of Calgary
2500 University Drive NW
Calgary, Alberta T2N 1N4 CANADA
Tel: (403)210-9509  Fax: (403)284-1980

Email: [hidden email]
Web: www.ucalgary.ca/rswang


On Wednesday, August 23, 2017, 1:01:18 PM MDT, <[hidden email]> wrote:

Send robotics-worldwide mailing list submissions to
    [hidden email]

To subscribe or unsubscribe via the World Wide Web, visit
    http://duerer.usc.edu/mailman/listinfo.cgi/robotics-worldwide
or, via email, send a message with subject or body 'help' to
    [hidden email]

You can reach the person managing the list at
    [hidden email]

When replying, please edit your Subject line so it is more specific
than "Re: Contents of robotics-worldwide digest..."


Today's Topics:

  1. [jobs] Multiple PhD positions in Robotics at Arizona State
      University (Panagiotis Artemiadis)
  2. [meetings] CFP: The Seventh Joint IEEE International
      Conference on Development and Learning and on Epigenetic Robotics
      (Yukie Nagai)
  3. [journals] IEEE Transactions on Robotics Volume 33, Number 4
      (IEEE T-RO)
  4. [meetings] Extended Deadline for Abstract Submission - IROS
      Workshop on Shared Autonomy (Malte Schilling)
  5. [meetings] 2 weeks to Barcelona ROS Courses (YUHONG LIN)
  6. [jobs] PhD studentship on "Adaptive Multimodal Human-Robot
      and Machine Interaction" (deadline September 21st, 2017)
      (Silvia Rossi)
  7. [jobs] Robotics Software Engineer (C++) at Intelligent Flying
      Machines (CHICAGO, USA) (Marc Gyongyosi)
  8. [Jobs] Two Fully funded PhD positions in the area of
      Neurorobotics are available at the University of South Florida in
      Tampa starting Spring or Fall 2018. (Weitzenfeld, Alfredo)
  9. [Software] Rutgers PHYSIM Tool: Automatic Generation of
      Physically-realistic Datasets for Object Detection and Pose
      Estimation in Robotics (Abdeslam Boularias)
  10. [meetings] Call for posters: 2017 IROS Workshop on "Physical
      Human-Robot & Human-Telerobot Interaction" (S. Farokh Atashzar)
  11. [meetings] 2nd CFP Papers/posters Workshop on Language
      Learning in children and robots at IEEE ICDL-EPIROB (Michael Spranger)
  12. [jobs] AI/Robotics Engineer, Drone - Kakao Brain (bryan.robot)
  13. [meetings] KCAP2017 || Call for short papers, late breaking
      results, and vision papers (Ilaria.Tiddi)
  14. [jobs] Funded PhD in action modelling with episodic memory at
      CITEC - Bielefeld University (Robert Haschke)
  15. [jobs] Funded PhD in tactile-driven robotics at CITEC -
      Bielefeld University (Robert Haschke)


----------------------------------------------------------------------

Message: 1
Date: Sun, 20 Aug 2017 09:34:52 -0700
From: Panagiotis Artemiadis <[hidden email]>
To: "[hidden email]" <[hidden email]>
Subject: [robotics-worldwide] [jobs] Multiple PhD positions in
    Robotics at Arizona State University
Message-ID:
    <CAAH-X+YUFrngR3Q4p6_U9D-uFmXG6ncjp+3SKC-8XNd07A0=[hidden email]>
Content-Type: text/plain; charset="UTF-8"

Multiple PhD positions in the Human-Oriented Robotics and Control (HORC)
lab at Arizona State University
Deadline for applications: September 30th, 2017.

The Human-Oriented Robotics and Control Lab at Arizona State University (
https://urldefense.proofpoint.com/v2/url?u=http-3A__horc.engineering.asu.edu_&d=DwIBaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=ChwiXKCldySW4Fesh2XHf-viASSNObhw1UOq-Dcn9w0&s=MWIaI4nqQ8xZVcqaeTylASa3oFCOArSnyZDU9yiO0To&e=) is seeking applications for multiple PhD
positions in the field of rehabilitation robotics, myoelectric control,
biped locomotion and prostheses.

Prospective students should hold a master's or bachelor's degree in
mechanical engineering, electrical engineering, computer
science/engineering or other related disciplines. Strong background in
control systems and robotics is required. Working experience with C and
Matlab is desired. Fluency in English speaking/writing is required.
Experience in human-robot physical interaction is a plus.

Interested candidates should e-mail a single PDF file including a cover
letter describing background and motivation, a CV including 3 references, and
transcripts of their bachelor's and master's studies to:
[hidden email]. Please put "HORC Lab PHD" in the subject line.
Deadline for applications: September 30th, 2017.

The research will be conducted within the School for Engineering of Matter,
Transport and Energy (https://urldefense.proofpoint.com/v2/url?u=http-3A__semte.engineering.asu.edu_&d=DwIBaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=ChwiXKCldySW4Fesh2XHf-viASSNObhw1UOq-Dcn9w0&s=pAWvIyWCSlZFUce7N4WMAeJe_8HrSdkhD8sUS7yPR0Q&e=), in the Mechanical
& Aerospace program. ASU (https://urldefense.proofpoint.com/v2/url?u=http-3A__www.asu.edu_&d=DwIBaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=ChwiXKCldySW4Fesh2XHf-viASSNObhw1UOq-Dcn9w0&s=bxYZVi6G7QMG_Oj04qzZyMkPRKWzicasC7tJB8m7oik&e=) is the largest public
research university in the United States, with a 2016 student enrollment of
72,000. ASU is located in Phoenix, which is the 5th largest city in the US.

--
Panagiotis (Panos) Artemiadis, Ph.D.
Associate Professor of Mechanical and Aerospace Engineering
School for Engineering of Matter, Transport and Energy
Ira A. Fulton Schools of Engineering, Arizona State University
Office: ERC 355, Phone: (480) 965-4182, Fax: (480) 727-9321
Personal: https://urldefense.proofpoint.com/v2/url?u=http-3A__www.public.asu.edu_-7Epartemia_&d=DwIBaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=ChwiXKCldySW4Fesh2XHf-viASSNObhw1UOq-Dcn9w0&s=tE-He6hkS5f_UHN8uyUCVEUIq72M16y7Wgesz4vLYRw&e=
Lab: https://urldefense.proofpoint.com/v2/url?u=http-3A__horc.engineering.asu.edu&d=DwIBaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=ChwiXKCldySW4Fesh2XHf-viASSNObhw1UOq-Dcn9w0&s=hXPDXsqH_ZeLT21M91fC5azp-D0RSbPSKEiXaLV1Eys&e=


------------------------------

Message: 2
Date: Mon, 21 Aug 2017 11:33:37 +0900
From: Yukie Nagai <[hidden email]>
To: <[hidden email]>
Subject: [robotics-worldwide] [meetings] CFP: The Seventh Joint IEEE
    International Conference on Development and Learning and on Epigenetic
    Robotics
Message-ID: <[hidden email]>
Content-Type: text/plain; charset="utf-8"

* Apologies for cross-posting *

==============================================================

Final Call for Participation
IEEE ICDL-EPIROB 2017
The Seventh Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics.
Lisbon, Portugal.
September 18th-21st, 2017.
https://urldefense.proofpoint.com/v2/url?u=http-3A__www.icdl-2Depirob.org&d=DwIFaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=JT-QYOuHUHt-_UmAYEGrn7-rJsgFiS3MdpJ7W3EaG2U&s=mXg5AJJ3Tqw2grnKvJGLb_-FS7pkvtcvxlMvdqYM9G4&e=


== Program available at: https://urldefense.proofpoint.com/v2/url?u=https-3A__ras.papercept.net_conferences_conferences_ICDLER17_program_&d=DwIFaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=JT-QYOuHUHt-_UmAYEGrn7-rJsgFiS3MdpJ7W3EaG2U&s=1yPzM0YvmcGz_0Qb8j3aVJ0obYNHsViv1zNtZvDK4Fs&e=

== Registration: https://urldefense.proofpoint.com/v2/url?u=https-3A__www.congressospco.abreu.pt_EPIROB-2D34358.aspx&d=DwIFaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=JT-QYOuHUHt-_UmAYEGrn7-rJsgFiS3MdpJ7W3EaG2U&s=rBj9zP-pWLxxyLY3anXK4gDnoFbQXUWgkaNy0rlnqH0&e=

== Keynote speakers
* Josep Call (https://urldefense.proofpoint.com/v2/url?u=https-3A__risweb.st-2Dandrews.ac.uk_portal_en_persons_josep-2Dcall-28e16c323e-2D1faa-2D4dc8-2D8b9c-2Df6e9432766a2-29.html&d=DwIFaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=JT-QYOuHUHt-_UmAYEGrn7-rJsgFiS3MdpJ7W3EaG2U&s=WDQDpVN5bnSHKqIemu8Nes_ZKdC6IjcypyaGmHZTsmE&e=)
* Rod Grupen (https://urldefense.proofpoint.com/v2/url?u=http-3A__www-2Drobotics.cs.umass.edu_-7Egrupen_&d=DwIFaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=JT-QYOuHUHt-_UmAYEGrn7-rJsgFiS3MdpJ7W3EaG2U&s=IONiWiVDXv2f7G1GamAUsE9W0Pf6hC-bHQG5YwlUVkg&e=)
* Donald Pfaff (https://urldefense.proofpoint.com/v2/url?u=http-3A__www.rockefeller.edu_research_faculty_labheads_DonaldPfaff_-23content&d=DwIFaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=JT-QYOuHUHt-_UmAYEGrn7-rJsgFiS3MdpJ7W3EaG2U&s=_kSNPdwAx68Jh0gqoWxLPy7Tor1fmPJJr457O9QvFyI&e=)
* Tamar Flash (https://urldefense.proofpoint.com/v2/url?u=http-3A__www.weizmann.ac.il_math_tamar&d=DwIFaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=JT-QYOuHUHt-_UmAYEGrn7-rJsgFiS3MdpJ7W3EaG2U&s=W0awRCsqpVkFnFXp5zIbsFKVmIwyeQsF6LuvWz0w3GE&e=)

== Conference description
The past decade has seen the emergence of a new scientific field in which computational techniques are employed to study how intelligent biological and artificial systems develop sensorimotor, cognitive and social abilities through dynamic interactions with their physical and social environments, with a twofold objective: to gain a better understanding of human and animal intelligence, and to endow artificial systems with more adaptive and flexible behaviors.
The two most prominent conference series of this area, the International Conference on Development and Learning (ICDL) and the International Conference on Epigenetic Robotics (EpiRob), are joining forces for the seventh time and invite submissions for a joint meeting in 2017 to explore, extend, and consolidate the interdisciplinary boundaries of this exciting research field. In addition to the usual paper submission-selection process, the BabyBot Challenge will crown computational models that capture core aspects of specific psychology experiments.

== Topics of interest include (but are not limited to):
* general principles of development and learning;
* development of skills in biological systems and robots;
* nature vs. nurture, critical periods and developmental stages;
* architectures for cognitive development and life-long learning;
* emergence of body knowledge and affordance perception;
* models for prediction, planning and problem solving;
* models of human-human and human-robot interaction;
* emergence of verbal and non-verbal communication skills;
* epistemological foundations and philosophical issues;
* models of child development from experimental psychology.
Any of the topics above can be simultaneously studied from the neuroscience, psychology or modeling/robotic point of view.

== Program committee
* General chairs
José Santos-Victor (Instituto Superior Tecnico, Portugal)
Giulio Sandini (Italian Institute of Technology, Italy)
* Program chairs
Lorenzo Jamone (Queen Mary University of London, UK)
Emre Ugur (Bogazici University, Turkey)
*Bridge chairs
Angelo Cangelosi (Plymouth University, UK)
Minoru Asada (Osaka University, Japan)
Jeffrey Lockman (Tulane University, USA)
* Publicity chairs
Alexandre Bernardino (Instituto Superior Tecnico, Portugal)
Yukie Nagai (NICT, Japan)
Maya Cakmak (University of Washington, USA)
Serena Ivaldi (INRIA Nancy, France)
* Publication chair
Jorge Marques (Instituto Superior Tecnico, Portugal)
* Finance chair
José Gaspar (Instituto Superior Tecnico, Portugal)
* Local chairs
Plinio Moreno (Instituto Superior Tecnico, Portugal)
Rodrigo Ventura (Instituto Superior Tecnico, Portugal)


Yukie Nagai, Ph.D.
Senior Researcher, National Institute of Information and Communications Technology
Visiting Professor, Bielefeld University
[hidden email] | https://urldefense.proofpoint.com/v2/url?u=http-3A__developmental-2Drobotics.jp&d=DwIFaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=JT-QYOuHUHt-_UmAYEGrn7-rJsgFiS3MdpJ7W3EaG2U&s=oh2MNewVsoPEs7ctHaUBOzGOts0r3q5CtVV-cEjMTOI&e=
CREST Cognitive Mirroring: https://urldefense.proofpoint.com/v2/url?u=http-3A__cognitive-2Dmirroring.org&d=DwIFaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=JT-QYOuHUHt-_UmAYEGrn7-rJsgFiS3MdpJ7W3EaG2U&s=zYUxY_97XHntQd_qSTLySfEzvBHD7CrcTqYUL8B329U&e=



------------------------------

Message: 3
Date: Mon, 21 Aug 2017 13:29:19 +0900
From: IEEE T-RO <[hidden email]>
To: <[hidden email]>
Subject: [robotics-worldwide] [journals] IEEE Transactions on Robotics
    Volume 33, Number 4
Message-ID:
    <CAJbaOoPjpa5jzrt_YyCLnwZfaSB_=[hidden email]>
Content-Type: text/plain; charset="UTF-8"

IEEE Transactions on Robotics
Volume 33, Number 4, August 2017
Table of Contents

[Regular Papers]

Stiffness Control With Shape Memory Polymer in Underactuated Robotic
Origamis
Amir Firouzeh, Marco Salerno, Jamie Paik
pp 765 - 777
https://urldefense.proofpoint.com/v2/url?u=http-3A__ieeexplore.ieee.org_document_7915733_&d=DwIFaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=dw_C2crl8Z3X_uRZHbnQNo1o1SaEeK-G6O3jSVin_Hk&s=kfXl0NCFF-g2jr8B72JOuftbH1Lc4lXVPSMTYdJHz6A&e=

Dynamic In-Hand Sliding Manipulation
Jian Shi, J. Zachary Woodruff, Paul B. Umbanhowar, Kevin M. Lynch
pp 778 - 795
https://urldefense.proofpoint.com/v2/url?u=http-3A__ieeexplore.ieee.org_document_7913727_&d=DwIFaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=dw_C2crl8Z3X_uRZHbnQNo1o1SaEeK-G6O3jSVin_Hk&s=45oyUEZo_iZQ4igt0GrhUip_5JSUgfMqIaFV17h7nTE&e=

Toward Modular Active-Cell Robots (MACROs): SMA Cell Design and Modeling of
Compliant, Articulated Meshes
Ahsan I. Nawroj, John P. Swensen, Aaron M. Dollar
pp 796 - 806
https://urldefense.proofpoint.com/v2/url?u=http-3A__ieeexplore.ieee.org_document_7918530_&d=DwIFaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=dw_C2crl8Z3X_uRZHbnQNo1o1SaEeK-G6O3jSVin_Hk&s=aGjIQmz1YvgF1RXlCiUSY7yK4iOTRaQybNBkXaiMeVs&e=

Modeling Robotic Manipulators Powered by Variable Stiffness Actuators: A
Graph-Theoretic and Port-Hamiltonian Formalism
Stefan S. Groothuis, Stefano Stramigioli, Raffaella Carloni
pp 807 - 818
https://urldefense.proofpoint.com/v2/url?u=http-3A__ieeexplore.ieee.org_document_7875152_&d=DwIFaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=dw_C2crl8Z3X_uRZHbnQNo1o1SaEeK-G6O3jSVin_Hk&s=eP_1AWuFzWmipYgaigWQjyKoX8O-JP1PxxDXivgLM9A&e=

Regularized Hierarchical Differential Dynamic Programming
Mathieu Geisert, Andrea Del Prete, Nicolas Mansard, Francesco Romano,
Francesco Nori
pp 819 - 833
https://urldefense.proofpoint.com/v2/url?u=http-3A__ieeexplore.ieee.org_document_7885106_&d=DwIFaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=dw_C2crl8Z3X_uRZHbnQNo1o1SaEeK-G6O3jSVin_Hk&s=-EVIFnFFqocnesQMPyyKsELUs8OAJuN-XEowTynEnjs&e=

Dynamics, Control, and Estimation for Aerial Robots Tethered by Cables or
Bars
Marco Tognon, Antonio Franchi
pp 834 - 845
https://urldefense.proofpoint.com/v2/url?u=http-3A__ieeexplore.ieee.org_document_7892030_&d=DwIFaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=dw_C2crl8Z3X_uRZHbnQNo1o1SaEeK-G6O3jSVin_Hk&s=9tzE7_l3gOXp99hP6vp8Eppk3xfMFzjpqQcBxjli0JM&e=

Human Movement Modeling to Detect Biosignal Sensor Failures for Myoelectric
Assistive Robot Control
Jun-ichiro Furukawa, Tomoyuki Noda, Tatsuya Teramae, Jun Morimoto
pp 846 - 857
https://urldefense.proofpoint.com/v2/url?u=http-3A__ieeexplore.ieee.org_document_7906627_&d=DwIFaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=dw_C2crl8Z3X_uRZHbnQNo1o1SaEeK-G6O3jSVin_Hk&s=imXP-KxEXvvFun6QXudJb-iwWnhQ8haCeCikNTEgT0M&e=

Obstacle Avoidance Strategy using Onboard Stereo Vision on a Flapping Wing
MAV
Sjoerd Tijmons, Guido C. H. E. de Croon, Bart D. W. Remes, Christophe De
Wagter, Max Mulder
pp 858 - 874
https://urldefense.proofpoint.com/v2/url?u=http-3A__ieeexplore.ieee.org_document_7919189_&d=DwIFaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=dw_C2crl8Z3X_uRZHbnQNo1o1SaEeK-G6O3jSVin_Hk&s=46xfsnM78sE_-gPKToXQjSM094-FQw34uWVK4fx2SNY&e=

An Opportunistic Linear-Convex Algorithm for Localization in Mobile Robot
Networks
Sam Safavi, Usman A. Khan
pp 875 - 888
https://urldefense.proofpoint.com/v2/url?u=http-3A__ieeexplore.ieee.org_document_7907331_&d=DwIFaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=dw_C2crl8Z3X_uRZHbnQNo1o1SaEeK-G6O3jSVin_Hk&s=yhgt4AQ2gm-EJZ5lc-qjsEqOm_ZLuX5gPNkuZVbToOU&e=

Analyses and Solutions for the Buckling of Thin and Flexible Electrostatic
Inchworm Climbing Robots
Hongqiang Wang, Akio Yamamoto
pp 889 - 900
https://urldefense.proofpoint.com/v2/url?u=http-3A__ieeexplore.ieee.org_document_7903715_&d=DwIFaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=dw_C2crl8Z3X_uRZHbnQNo1o1SaEeK-G6O3jSVin_Hk&s=PY11Xd9cNdCk-ql9HLWCIX2ESvV_Xl6dURO5glPem_A&e=

Unified Tracking and Shape Estimation for Concentric Tube Robots
Alessandro Vandini, Christos Bergeles, Ben Glocker, Petros Giataganas,
Guang-Zhong Yang
pp 901 - 915
https://urldefense.proofpoint.com/v2/url?u=http-3A__ieeexplore.ieee.org_document_7913726_&d=DwIFaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=dw_C2crl8Z3X_uRZHbnQNo1o1SaEeK-G6O3jSVin_Hk&s=L2IO9KSRePB1z-T2CfPaciEHfLEsGAjNsmqMamNHTpw&e=

Bayesian Nonparametric Learning of Cloth Models for Real-Time State
Estimation
Nishanth Koganti, Tomoya Tamei, Kazushi Ikeda, Tomohiro Shibata
pp 916 - 931
https://urldefense.proofpoint.com/v2/url?u=http-3A__ieeexplore.ieee.org_document_7930514_&d=DwIFaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=dw_C2crl8Z3X_uRZHbnQNo1o1SaEeK-G6O3jSVin_Hk&s=m8RRanDAe4PRxMAufvrVLYcCili8I3i_tbd5kuFJTXs&e=

A Distributed Version of the Hungarian Method for Multirobot Assignment
Smriti Chopra, Giuseppe Notarstefano, Matthew Rice, Magnus Egerstedt
pp 932 - 947
https://urldefense.proofpoint.com/v2/url?u=http-3A__ieeexplore.ieee.org_document_7932518_&d=DwIFaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=dw_C2crl8Z3X_uRZHbnQNo1o1SaEeK-G6O3jSVin_Hk&s=zIO7T8uVPeZ5JpnA2GVNZnDkGQujoMq4HZxTaZ3GZTE&e=

The Input-to-State Stable (ISS) Approach for Stabilizing Haptic Interaction
With Virtual Environments
Aghil Jafari, Muhammad Nabeel, Jee-Hwan Ryu
pp 948 - 963
https://urldefense.proofpoint.com/v2/url?u=http-3A__ieeexplore.ieee.org_document_7886315_&d=DwIFaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=dw_C2crl8Z3X_uRZHbnQNo1o1SaEeK-G6O3jSVin_Hk&s=Et1r0vCYIAeFel6uA7Edq8AsMCzL1sSeDIKpOvqqMN0&e=

FreMEn: Frequency Map Enhancement for Long-Term Mobile Robot Autonomy in
Changing Environments
Tomas Krajnik, Jaime P. Fentanes, Joao M. Santos, Tom Duckett
pp 964 - 977
https://urldefense.proofpoint.com/v2/url?u=http-3A__ieeexplore.ieee.org_document_7878680_&d=DwIFaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=dw_C2crl8Z3X_uRZHbnQNo1o1SaEeK-G6O3jSVin_Hk&s=HA3dgGuwwWtoGxn59tLwmtOU4kVVJFomlwqOfEYomxA&e=


[Short Papers]

Dual REPS: A Generalization of Relative Entropy Policy Search Exploiting
Bad Experiences
Adria Colome, Carme Torras
pp 978 - 985
https://urldefense.proofpoint.com/v2/url?u=http-3A__ieeexplore.ieee.org_document_7889017_&d=DwIFaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=dw_C2crl8Z3X_uRZHbnQNo1o1SaEeK-G6O3jSVin_Hk&s=uqiggzVWNOR4YNbMUHElJHd4-3kFaGcspnwZlqUcC-U&e=

New Actuation Mechanism for Actively Cooled SMA Springs in a Neurosurgical
Robot
Shing Shin Cheng, Yeongjin Kim, Jaydev P. Desai
pp 986 - 993
https://urldefense.proofpoint.com/v2/url?u=http-3A__ieeexplore.ieee.org_document_7889001_&d=DwIFaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=dw_C2crl8Z3X_uRZHbnQNo1o1SaEeK-G6O3jSVin_Hk&s=zp_7JuDzV-5DQ5_X70KtGcYqKqZErsE-qHMO6M2pEWo&e=

Generalized Topology Control for Nonholonomic Teams With Discontinuous
Interactions
Ryan K. Williams, Andrea Gasparri, Giovanni Ulivi, Gaurav S. Sukhatme
pp 994 - 1001
https://urldefense.proofpoint.com/v2/url?u=http-3A__ieeexplore.ieee.org_document_7893695_&d=DwIFaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=dw_C2crl8Z3X_uRZHbnQNo1o1SaEeK-G6O3jSVin_Hk&s=eSHo7aXOc72VL7JUuG0BodLyZx0pAC1TKEuquHO2Qzk&e=

Criterion for the Design of Low-Power Variable Stiffness Mechanisms
Vincent Chalvet, David J. Braun
pp 1002 - 1010
https://urldefense.proofpoint.com/v2/url?u=http-3A__ieeexplore.ieee.org_document_7929334_&d=DwIFaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=dw_C2crl8Z3X_uRZHbnQNo1o1SaEeK-G6O3jSVin_Hk&s=gTn38ib2DkcfI4oX1T9KIECeRLJcdjKt9tXPeviC61Y&e=


------------------------------

Message: 4
Date: Mon, 21 Aug 2017 11:52:07 +0200
From: Malte Schilling <[hidden email]>
To: <[hidden email]>
Subject: [robotics-worldwide] [meetings] Extended Deadline for
    Abstract Submission - IROS Workshop on Shared Autonomy
Message-ID:
    <[hidden email]>
Content-Type: text/plain; charset="utf-8"

Dear Colleagues,

Due to multiple requests, we will extend the deadline for extended abstract submissions for our workshop on Shared Autonomy at IROS (Vancouver, September 28) until the end of the month.

We have already received a good number of contributions, but we still welcome further submissions and current results.

New Deadline for Abstract Submission
Aug 31, 2017 - notification will be given by Sept 4, 2017

For further information, see below and the workshop website, which contains more information as well as a continuously updated schedule: https://urldefense.proofpoint.com/v2/url?u=https-3A__aiweb.techfak.uni-2Dbielefeld.de_iros2017-5Fworkshop-5Fshared-5Fautonomy_&d=DwIFaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=A0yBft8mhKD9ENBDKoy1ZZxFd5lAlSRQW8KUs3GZyEs&s=xb9xtWj9lYsOBetlKic_kpvnq2-rtdqselvX47WIv4w&e=

With best regards,

    malte schilling

***************************************************************************
Dr. rer. nat. Malte Schilling
Cognitive Interaction Technology - Center of Excellence
Bielefeld University, Postfach 100131, D-33501 Bielefeld
[hidden email]


==========================================================
2017 IEEE/RSJ International Conference on Intelligent Robots and Systems
Vancouver, CA

Workshop on Shared Autonomy: Joint Learning in Human-Robot Collaboration
Full day workshop on September 28
==========================================================
 
As more and more robots are introduced into our everyday life and workspaces, we find new ways in which they can provide support in many different tasks. In such increasingly hybrid environments, our interactions with robots and other virtual agents will increase. This will require novel forms of cooperation and collaboration. Robots should learn to adapt to our needs over long timescales and should help free us of tedious tasks. This will require autonomous robots that don't need detailed instruction, but will operate freely within boundary conditions that specify high-level goals.

Shared Autonomy focuses on how autonomous systems can successfully interact and shape each other's autonomy spaces. It is about how two or more autonomous agents mediate how they individually and jointly can contribute, on the one hand, to an overarching goal, but also, on the other hand, to their individual goals. The workshop aims, first, at addressing the underlying theoretical issues and potential models. Secondly, its main focus is on the realization of such models in robotic systems and on answering the question of how such systems can successfully realize collaboration between humans and robots.
 

TOPICS OF INTEREST
--------------------------------
    - Shared Control in collaborative human-robot tasks
    - Learning and adaptation of coordination patterns in agents
    - Discussion on the concept and possible extensions of Shared Autonomy
    - Representations for collaboration in HRI (intentions, goals, …)
    - Shared decision making
    - Learning and modelling human-robot interaction, human instructions and collaborative behavior
    - Transfer and incremental learning in cooperative tasks and in multi-agent systems
    - Estimating the state and beliefs of other agents
    - Predicting actions and/or motions of other agents

The workshop is organized by Prof. Wolfram Burgard (University of Freiburg), Przemyslaw "Pem" Lasota (MIT), Prof. Helge Ritter (Bielefeld University), Dr. Malte Schilling (Bielefeld University), Associate Prof. Julie A. Shah (CSAIL, MIT), and Prof. Britta Wrede (Bielefeld University).
 

OVERVIEW PROGRAM
--------------------------------
Overall, the workshop aims at a multidisciplinary perspective on key aspects and challenges of Shared Autonomy. The presentations will therefore reflect the diversity of approaches and topics, and there will be ample time for discussion. A poster session will offer participants a further chance to contribute their perspectives and work.

Confirmed Speakers:
    - Anticipatory Movements of a 3rd hand in HRC   
        Jan Peters and Guilherme Maeda
    - Learning complex motor tasks from humans
        Katharina Muelling
    - Collaborative human-robot lifting and handover tasks
        Elizabeth Croft
    - Learning manipulation and interaction
        Siddhartha Srinivasa
    - Coordinating Interaction on different levels in Shared Autonomy
        Helge Ritter
    - Shared Mental Models for Human-Robot Teaming
        Julie A. Shah
    - Sharing Space: Navigation in Crowds
        Wolfram Burgard


SUBMISSION
-------------------
Participants are invited to submit a contribution (via email: [hidden email]) as an extended abstract (maximum 2 pages in length, following the IEEE conference style in two-columns). Contributions will be reviewed and selected by the organizers.

The workshop contributions will appear as online proceedings on the workshop webpage. In addition, contributors to the workshop will be invited to submit extended versions of the manuscripts to a special issue (more information to follow).

The accepted abstracts will be presented in a poster session. We want to give researchers a chance to present their (ongoing) work. But we also want to provide a forum for relevant work that has recently been published in journals and other conferences.

Abstracts will be reviewed by the organizers.


IMPORTANT DATES
----------------------------
Aug 31, 2017, Deadline for Abstract submission
Sept 4, 2017, Notification of acceptance
Sept 28, 2017, Day of the workshop



------------------------------

Message: 5
Date: Mon, 21 Aug 2017 14:15:48 +0200
From: YUHONG LIN <[hidden email]>
To: <[hidden email]>
Subject: [robotics-worldwide] [meetings] 2 weeks to Barcelona ROS
    Courses
Message-ID:
    <CAEaQTZe_aZtH0N0GVcjKioM4VZX5jj036fDzV=[hidden email]>
Content-Type: text/plain; charset="UTF-8"

*** Apologies if you have received this call for participation more than
once  ***

*Barcelona ROS Courses |* September 2017
Dear Roboticists,

The *ROS Onsite Courses* will start on September 4. These courses are
ideal both for learning ROS from zero and for taking your ROS skills to
the next level.
You can select:

  - *ROS IN 5 DAYS *(Entering ROS)
  - *ROS NAVIGATION + ROS Auto-Vehicles 101 *(Mastering ROS)

You can enroll in the courses here:
http://www.theconstructsim.com/ros-onsite-course-sep2017/

This is entirely practical Robot Operating System training that integrates
theory and practice: you learn by programming different simulated robots
in a fully integrated development environment.

:: ABOUT THE COURSES
===================

Classes are given in Barcelona.
Morning: teaching with exercises from 9:30 to 13:30
Afternoon: working on ROS project from 14:30 to 16:30

  - Limited to ONLY 5 individuals per course
  - Exam and ROS Certification: There will be an exam at the end of the
  course. Those who pass the exam will receive a ROS certification.
  - Duration:
      - ROS IN 5 DAYS: *Sep 4th to 8th, 2017*
      - ROS Navigation+ROS Auto-Vehicles 101: *Sep 11th to 15th, 2017*


  - Location: Gran Via de les Corts Catalanes, 608, 3ºD, 08007 Barcelona,
  SPAIN


:: SCHEDULE
===========

*ROS IN 5 DAYS *(from 4th to 8th, September)
* Monday: ROS BASICS: the basic structure and concepts of ROS programs
* Tuesday: ROS TOPICS: how to create topics and how to access them
* Wednesday: ROS SERVICES: how to create services and how to use them
* Thursday: ROS ACTIONS: how to create action servers and how to use them
* Friday: ROS DEBUG: ROS tools for finding and debugging errors, plus EXAM
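The course materials themselves are not shown here, but as a rough sketch of the publish/subscribe idea behind ROS topics (Tuesday's material), here is a toy version in plain Python; this is not the rospy API, only the underlying pattern:

```python
# Toy publish/subscribe bus illustrating the pattern behind ROS topics.
# NOT the rospy API -- just the underlying idea, with no ROS required.

class TopicBus:
    def __init__(self):
        self._subscribers = {}  # topic name -> list of callbacks

    def subscribe(self, topic, callback):
        # Register a callback to run for every message on this topic.
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # Deliver the message to every callback registered on this topic.
        for callback in self._subscribers.get(topic, []):
            callback(message)

bus = TopicBus()
received = []
bus.subscribe("/chatter", received.append)
bus.publish("/chatter", "hello world")
print(received)  # -> ['hello world']
```

In real ROS the bus is distributed across processes and messages are typed, but the decoupling of publishers from subscribers is the same.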

*ROS NAVIGATION + ROS Autonomous Vehicles 101 *(from 11th to 15th,
September)
* Monday: Navigation Basics concepts
* Tuesday: Mapping
* Wednesday: Localisation
* Thursday: Path Planning + Obstacle Avoidance
* Friday: ROS Autonomous Vehicles 101 + EXAM
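For a hedged taste of the Thursday material (path planning with obstacle avoidance), a minimal breadth-first search over an occupancy grid is sketched below; the actual ROS navigation stack uses costmaps and more sophisticated planners, so this is illustration only:

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid (0 = free, 1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Reconstruct the path by walking parents back to the start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(bfs_path(grid, (0, 0), (2, 0)))  # route around the obstacle row
```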


:: REQUIREMENTS FOR ATTENDANCE
==============================

* Basic knowledge of Linux shell
* Basic knowledge of Python programming
* A laptop (can have any operating system)
* NO PRIOR KNOWLEDGE OF ROS REQUIRED (for ROS IN 5 DAYS)
* Basic ROS knowledge (for ROS Navigation + ROS Auto-Vehicles 101)


:: CONTACT US
============

* You can contact us with any questions here:
[hidden email]

--
*YUHONG LIN*
Business Assistant of R&D
The Construct

[hidden email]
https://urldefense.proofpoint.com/v2/url?u=http-3A__www.theconstructsim.com&d=DwIFaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=A71g5maDzW11l1u7cOahg8VArEJZ83u2WFjDbSj1s68&s=clyUEGAunnHsDlfoWSVsEeYZjvhGIisWNlzheepebEY&e=


------------------------------

Message: 6
Date: Mon, 21 Aug 2017 14:51:44 +0200
From: Silvia Rossi <[hidden email]>
To: [hidden email]
Subject: [robotics-worldwide] [jobs] PhD studentship on "Adaptive
    Multimodal Human-Robot and Machine Interaction" (deadline September
    21st, 2017)
Message-ID: <[hidden email]>
Content-Type: text/plain;    charset=us-ascii

The PhD Program in Information Technology and Electrical Engineering at the University of Naples Federico II (Italy) has an open call for 1 PhD studentship reserved for non-Italian citizens who are not resident in Italy <https://urldefense.proofpoint.com/v2/url?u=http-3A__www.unina.it_documents_11958_14656033_Call-2Bfor-2Bapplications-2Bof-2Bnon-2Ditalian-2Bcitizens-2Bwho-2Bare-2Bnot-2Bresident-2Bin-2BItaly.pdf&d=DwIFAg&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=QaKYeL-c9ivuPoBfLNHELps4zVTe4tu0vj0l9rnyo-8&s=Z_Lu7YCD_bAYYFSD4xSdNEyDJzUkIrfosVJSAQFwEJI&e=> on the following topic:

Adaptive Multimodal Human-Robot and Machine Interaction

Compared to a common Graphic User Interface (GUI), a robotic platform can also deploy speech, gestures and other non-verbal signals to enhance the naturalness of the interaction. This enables a more natural way of dialoguing than interfaces based on a single communication channel, as seen in most tablet and mobile-phone applications. In this context, the goal is to study and design adaptive multimodal interaction mechanisms. The focus will not only be on combining different modalities and applying fusion techniques to them in order to correctly interpret the user's intention, but also on selecting the proper feature set and optimizing it with respect to both each modal channel and each user. Today, the majority of robotic applications are based on static user models, which prevents systems from adapting independently and proactively to changes in the needs and preferences of users. The aim of the present proposal is to investigate how to merge multimodal human-robot and human-machine interaction research issues with online adaptive learning ones.
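Purely as an illustration of the weighted late-fusion idea described in the call (every name here is hypothetical, not part of the project), per-modality intent scores can be combined with per-user weights so the same framework can be re-tuned for each user:

```python
# Hypothetical sketch of weighted late fusion of multimodal intent scores.
# Each modality outputs a confidence per candidate intent; per-user weights
# let the system adapt to how reliable each channel is for that user.

def fuse_intents(modality_scores, user_weights):
    """modality_scores: {modality: {intent: confidence}};
    user_weights: {modality: weight}. Returns the best-scoring intent."""
    fused = {}
    for modality, scores in modality_scores.items():
        weight = user_weights.get(modality, 1.0)
        for intent, confidence in scores.items():
            fused[intent] = fused.get(intent, 0.0) + weight * confidence
    return max(fused, key=fused.get)

scores = {
    "speech":  {"greet": 0.7, "stop": 0.2},
    "gesture": {"greet": 0.3, "stop": 0.9},
}
# A user whose gestures are more reliable than their speech:
weights = {"speech": 0.4, "gesture": 1.0}
print(fuse_intents(scores, weights))  # -> 'stop'
```

An adaptive system of the kind proposed would learn these weights online per user rather than fix them by hand.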

The selected candidate will join the PRISCA Laboratory (Projects of Intelligent Robotics and Advanced Cognitive Systems) in Naples. The PRISCA Lab is a dynamic, international, and multidisciplinary team that offers exciting scientific projects, as well as an excellent and stimulating research environment.

============================================================

How to apply

Closing Date: September 21st, 2017

Detailed instructions and further details on the PhD call are available at:

https://urldefense.proofpoint.com/v2/url?u=http-3A__www.unina.it_didattica_post-2Dlaurea_dottorati-2Ddi-2Dricerca_english-2Dversion&d=DwIFAg&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=QaKYeL-c9ivuPoBfLNHELps4zVTe4tu0vj0l9rnyo-8&s=HzE7OxI7lTHnxTXw1V7OkQ2n-xAQlouFoLdCJGN1Nas&e=
The selection will be based on the CV and a Skype interview.

If you have any questions related to the position, please don't hesitate to contact Dr. Silvia Rossi by e-mail ([hidden email] <mailto:[hidden email]>).

============================================================
About Naples

Naples (Italian: Napoli) is the third largest city in Italy, and is the capital of the Campania region. World-renowned for its rich history, art, culture, architecture, music, and gastronomy, Naples is a lively, exciting and bustling city situated on the southwest coast in a gorgeous gulf, and is surrounded by attractive tourist and archaeological sites such as Capri, Ischia, the Amalfi Coast, Pompei, Ercolano, and Mount Vesuvius. See https://urldefense.proofpoint.com/v2/url?u=http-3A__en.wikipedia.org_wiki_Naples&d=DwIFAg&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=QaKYeL-c9ivuPoBfLNHELps4zVTe4tu0vj0l9rnyo-8&s=c6aJGp5j8z6zFHm-5FEmQCfnQJsTZsQbtuv8iB1G7Fs&e= for further information.

---------------------------------------------------
Silvia Rossi, PhD
Assistant Professor
Department of Electrical Engineering and Information Technology - D.I.E.T.I.

University of Naples Federico II
Via Claudio, 21, 80125 - Naples, Italy
phone:  +39 081 679963
skypeid: silvross
w-page: https://urldefense.proofpoint.com/v2/url?u=http-3A__wpage.unina.it_silrossi&d=DwIFAg&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=QaKYeL-c9ivuPoBfLNHELps4zVTe4tu0vj0l9rnyo-8&s=vYlU5Et9YS8aaS8QjQZPuwlpFhPmIcQyyJsatB0RTkg&e=
e-mail: [hidden email] <mailto:[hidden email]>



------------------------------

Message: 7
Date: Mon, 21 Aug 2017 08:48:02 -0500
From: Marc Gyongyosi <[hidden email]>
To: [hidden email]
Subject: [robotics-worldwide] [jobs] Robotics Software Engineer (C++)
    at Intelligent Flying Machines (CHICAGO, USA)
Message-ID:
    <CAP7-Z0EAk-t3bSW=[hidden email]>
Content-Type: text/plain; charset="UTF-8"

DESCRIPTION

Intelligent Flying Machines, Inc. (IFM) is looking for an experienced
software engineer who has worked on large-scale C++ software projects in
the Robotics and Computer Vision domain to join a small team developing
flying robots.

*About IFM:*

We are IFM, a Chicago based robotics and computer vision company building
intelligent perception systems for the digital future of logistics and
manufacturing. As a company, we are focused on commercializing cutting edge
computer vision and machine learning technologies to bridge the gap between
what is technically feasible and realistically achievable in real-world,
continuous business applications.

Today, we are working with our customers to automate their inventory
counting process using a fleet of proprietary flying robots. By combining
Deep Learning with conventional Computer Vision methods, our robots know
where they are and what they see using no external infrastructure.

*What you will be working on:*

As we are deploying around the world, we are looking for a person with
experience in building and maintaining complex and large C++ software
stacks. You will be working directly with our founder, Marc, and the rest
of the core team to write application layer programs around low-level
vision and machine learning software while overseeing its integration with
customer facing interfaces and enterprise software systems.

*How we work:*

We are a small team with a healthy focus on maintaining a proper software
development workflow, documentation, and testing. Our projects are usually
planned in 2-week sprints with bi-weekly reviews and constant validation on
our dev systems (robots). We plan our sprints based on a set of questions
or hypotheses that we want to answer and aim to be 100% data driven in our
decision-making process.


*How to apply: *

https://urldefense.proofpoint.com/v2/url?u=https-3A__ifmtech.workable.com_j_77E5FD5619&d=DwIFaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=srecxoGzF3I5oo8WIaKigwX9sxTpDIFfto38gLlwS6Q&s=6TBgyJABrN7QqQGdDEIgtamO1SfVGf_7rgqdSVjFu6c&e=


------------------------------

Message: 8
Date: Mon, 21 Aug 2017 14:50:32 +0000
From: "Weitzenfeld, Alfredo" <[hidden email]>
To: "[hidden email]" <[hidden email]>
Subject: [robotics-worldwide] [Jobs] Two Fully funded PhD positions in
    the area of Neurorobotics are available at the University of South
    Florida in Tampa starting Spring or Fall 2018.
Message-ID:
    <[hidden email]>
   
Content-Type: text/plain; charset="Windows-1252"

Two Fully funded PhD positions in the area of Neurorobotics are available at the University of South Florida in Tampa starting Spring or Fall 2018.

Research Topic

"Neurorobotics: Multi-Scale Spatial Cognition"

Research Description:

The BioRobotics Lab in the Computer Science and Engineering Department (CSE) at the University of South Florida (USF) in Tampa is looking for two new PhD students in neurorobotics, the research area where neuroscience and robotics intersect. Neurorobotics provides a testbed for evaluating brain-inspired algorithms while proposing new computational models to control autonomous robotic systems.

The specific neurorobotics area is Spatial Cognition, an increasingly important research area relating memory and navigation in animals. In this particular research project we seek to elucidate the basic principles of multi-scale memory and spatial navigation based on studies of the hippocampus. These studies will serve as a basis for the development of new algorithms for multi-scale spatial navigation that mimic the neural computations of the brain. The simulations will then be tested on physical autonomous mobile robots navigating challenging, complex environments.

Research Position Description:

Required:
- A Bachelor's degree in computer science, computer engineering or a related field;
- Strong background in software programming, particularly with C++ and Java;
- Excellent English proficiency;
- Excellent writing and communication skills;

Preferred:
- Master's degree;
- Experience with computational modeling of the brain;
- Hands-on experience with programming of autonomous robots;
- Demonstrated research experience (e.g., a good publication record).

Please send the following documents to [hidden email]:
- One page cover letter describing your research interests;
- CV or resume;
- Transcripts;
- 3 recommendation contact names with emails.

To apply to the CSE PhD program see the following link:
https://urldefense.proofpoint.com/v2/url?u=http-3A__www.grad.usf.edu_programs_programinfomation.php-3Fpcode-3DCSEEN-2DPh.D&d=DwIF-g&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=0F8XLptVvAGd6nqy3P28V77LIxUASVQjExoK9wnX110&s=7K9bZ1GYsZlFJSBeKPfU9j2MR7A-itOOR8njJ6f7U3I&e=.

Note: The CSE Department at USF encourages minority students and under-represented groups to apply to the PhD program, and it additionally offers several funded positions each year. In addition to Robotics, the CSE Department welcomes and regularly funds research positions in other areas, including Image Processing, Computer Vision, Data Visualization, Hardware, Software and Network Security, Programming Languages, Distributed Systems, Brain-Computer Interfaces, Computer Graphics, Databases, Big Data, and Artificial Intelligence.

About the CSE Dept:
https://urldefense.proofpoint.com/v2/url?u=http-3A__www.cse.usf.edu&d=DwIF-g&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=0F8XLptVvAGd6nqy3P28V77LIxUASVQjExoK9wnX110&s=Ayp8o8xW0aQ85etj9PkBfPrn-_cs6xkJp-4X5I_7wsk&e=

About USF:
https://urldefense.proofpoint.com/v2/url?u=http-3A__www.usf.edu&d=DwIF-g&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=0F8XLptVvAGd6nqy3P28V77LIxUASVQjExoK9wnX110&s=bo0825jxnCKOTgtJPc8qz9xDGm1GAoEyeL0fC3wle24&e=

About Tampa:
https://urldefense.proofpoint.com/v2/url?u=https-3A__www.visittampabay.com&d=DwIF-g&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=0F8XLptVvAGd6nqy3P28V77LIxUASVQjExoK9wnX110&s=NjAoUs_vpZsT2xX8gY0bu10a-jDZAZkBgbC0ALwsG10&e=

https://urldefense.proofpoint.com/v2/url?u=http-3A__www.visitflorida.com_en-2Dus_cities_tampa.html&d=DwIF-g&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=0F8XLptVvAGd6nqy3P28V77LIxUASVQjExoK9wnX110&s=n3twTKVIs5cI8Fd0dVCNiOLW6LzsKTBJ14AGH69OMJo&e=





Alfredo Weitzenfeld, PhD
Professor
Department of Computer Science and Engineering
College of Engineering
University of South Florida

Email: [hidden email]

USF Office: ENB 314
Tel (813) 974-4905
USF BioRobotics Lab: ENG 122B
Tel (813) 974-4463

https://urldefense.proofpoint.com/v2/url?u=http-3A__www.cse.usf.edu_people_faculty_weitzenfeld-5F-5Falfredo&d=DwIF-g&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=0F8XLptVvAGd6nqy3P28V77LIxUASVQjExoK9wnX110&s=26hj0MCxXInLoadw6PtpdjGf06wuIr1gxQSKVKh5r5w&e=
https://urldefense.proofpoint.com/v2/url?u=http-3A__weitzenfeld.robolat.org&d=DwIF-g&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=0F8XLptVvAGd6nqy3P28V77LIxUASVQjExoK9wnX110&s=uhy2ltSSN_5jVFTk5jyy4kK341grqwfRW5fa3FnbZzQ&e=
https://urldefense.proofpoint.com/v2/url?u=http-3A__www.usfrobobulls.org&d=DwIF-g&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=0F8XLptVvAGd6nqy3P28V77LIxUASVQjExoK9wnX110&s=Kz2uQXoV40r8mWZgmg2DQrVSlJb2jeIeNePRPd0tWdg&e=

USF CSE Mail Address:
4202 East Fowler Avenue, ENB-118
Tampa, FL 33620-5399
Phone: (813) 974-3652
Fax: (813) 974-5456



------------------------------

Message: 9
Date: Mon, 21 Aug 2017 17:57:20 -0400
From: Abdeslam Boularias <[hidden email]>
To: [hidden email]
Subject: [robotics-worldwide] [Software] Rutgers PHYSIM Tool:
    Automatic Generation of Physically-realistic Datasets for Object
    Detection and Pose Estimation in Robotics
Message-ID:
    <[hidden email]>
Content-Type: text/plain; charset="UTF-8"

Dear colleagues,

We are releasing a software tool for automatically generating synthetic
training datasets for object detection and pose estimation in robotics. It
uses 3D CAD models of objects and simulates several aspects of the
environment to automatically generate labeled images of objects in
clutter. The objects are placed in physically realistic poses through the
use of an appropriately tuned physics engine.
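As a hedged, illustrative sketch of the pipeline described above (settle objects into plausible poses per scene, then record those poses as labels), the loop might be structured as follows; every function name here is hypothetical and not the tool's actual API:

```python
import random

# Hypothetical sketch of a physics-based dataset-generation loop.
# A real pipeline (like the announced tool) would call a physics engine
# and a renderer; here the "simulation" is stubbed to show the structure.

def settle_in_physics(objects, seed):
    """Stub for the physics step: return a stable 6-DoF pose per object."""
    rng = random.Random(seed)  # seeded, so scenes are reproducible
    return {name: {"xyz": [rng.uniform(-0.2, 0.2) for _ in range(3)],
                   "rpy": [rng.uniform(-3.14, 3.14) for _ in range(3)]}
            for name in objects}

def generate_dataset(objects, num_scenes):
    dataset = []
    for scene_id in range(num_scenes):
        poses = settle_in_physics(objects, seed=scene_id)
        # In a real tool, this is where the scene would be rendered and
        # each object's pose/bounding box written out as a training label.
        dataset.append({"scene": scene_id, "labels": poses})
    return dataset

data = generate_dataset(["mug", "drill"], num_scenes=3)
print(len(data), sorted(data[0]["labels"]))
```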

The tool can be downloaded from the following link:
https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_cmitash_physim-2Ddataset-2Dgenerator&d=DwIFaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=kcxDZQtjP6ClPuV72NZ8sBzi8GnisdTSWFc9EspHrSI&s=STQtsiJZM49065pTe1eOIrQQCKKFpEzw31SFRvg_vwE&e=
<https://urldefense.proofpoint.com/v2/url?u=https-3A__na01.safelinks.protection.outlook.com_-3Furl-3Dhttps-253A-252F-252Fgithub.com-252Fcmitash-252Fphysim-2Ddataset-2Dgenerator-26data-3D02-257C01-257Ccm1074-2540cs.rutgers.edu-257C74fcd3b14a5644611b3e08d4e8be4ca5-257Cb92d2b234d35447093ff69aca6632ffe-257C1-257C0-257C636389351410458522-26sdata-3DrqXE48jhN-252B6iqqQiVvxMY5kZ0QADK9IWcs4aoNVhIVs-253D-26reserved-3D0&d=DwIFaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=kcxDZQtjP6ClPuV72NZ8sBzi8GnisdTSWFc9EspHrSI&s=0C_cmLkJz6CW3pt6pYx4PFThD5GtANkdirrluxsHcKQ&e=>

The implementation is based on the following IROS 2017 paper:

Chaitanya Mitash, Kostas Bekris, Abdeslam Boularias, "A Self-supervised
Learning System for Object Detection using Physics Simulation and
Multi-view Pose Estimation", International Conference on Intelligent Robots
and Systems (IROS), 2017

[pdf] https://urldefense.proofpoint.com/v2/url?u=https-3A__arxiv.org_abs_1703.03347&d=DwIFaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=kcxDZQtjP6ClPuV72NZ8sBzi8GnisdTSWFc9EspHrSI&s=Lj0MnFJYDxsM_hwkwafBDyvsIIkA_q4oQSf1qOo8KNU&e=
<https://urldefense.proofpoint.com/v2/url?u=https-3A__na01.safelinks.protection.outlook.com_-3Furl-3Dhttps-253A-252F-252Farxiv.org-252Fpdf-252F1703.03347.pdf-26data-3D02-257C01-257Ccm1074-2540cs.rutgers.edu-257C74fcd3b14a5644611b3e08d4e8be4ca5-257Cb92d2b234d35447093ff69aca6632ffe-257C1-257C1-257C636389351410458522-26sdata-3DoS39k3P7ATGIoJSOfB7y4RrZgg3SKE2686jbesKQ3fc-253D-26reserved-3D0&d=DwIFaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=kcxDZQtjP6ClPuV72NZ8sBzi8GnisdTSWFc9EspHrSI&s=CA__hoHB-i2zDfji3tdU0rDw_OzcwuARGo_dGgemKGw&e=>
[webpage] https://urldefense.proofpoint.com/v2/url?u=http-3A__www.physimpose.com&d=DwIFaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=kcxDZQtjP6ClPuV72NZ8sBzi8GnisdTSWFc9EspHrSI&s=zxglWBrZlhZ02tssC4LIaJbJDs3RYuttVgjnbibgn20&e=
<https://urldefense.proofpoint.com/v2/url?u=https-3A__na01.safelinks.protection.outlook.com_-3Furl-3Dhttp-253A-252F-252Fwww.physimpose.com-26data-3D02-257C01-257Ccm1074-2540cs.rutgers.edu-257C74fcd3b14a5644611b3e08d4e8be4ca5-257Cb92d2b234d35447093ff69aca6632ffe-257C1-257C0-257C636389351410458522-26sdata-3DuyJSf9yE0lqyem3JeWZaONvrZwu47W3-252FUEe2lkDHWZk-253D-26reserved-3D0&d=DwIFaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=kcxDZQtjP6ClPuV72NZ8sBzi8GnisdTSWFc9EspHrSI&s=Q6rWpjT4D1kp75oteIBPMazVDi_JPv0LaOkgODa9-wA&e=>

Best,
  Chaitanya Mitash, Kostas Bekris and Abdeslam Boularias


------------------------------

Message: 10
Date: Mon, 21 Aug 2017 20:02:25 -0400
From: "S. Farokh Atashzar" <[hidden email]>
To: [hidden email]
Subject: [robotics-worldwide] [meetings] Call for posters: 2017 IROS
    Workshop on "Physical Human-Robot & Human-Telerobot Interaction"
Message-ID:
    <[hidden email]>
Content-Type: text/plain; charset="UTF-8"

We invite participation in a workshop at the 2017 IEEE/RSJ International
Conference on Intelligent Robots and Systems (IROS 2017) (Vancouver,
Canada), entitled: "Physical Human-Robot & Human-Telerobot Interaction:
From Theory to Application for Neuro-rehabilitation" that will be held on
Thursday, September 28, 2017.


>> Click here for IROS 2017 Workshops
<https://urldefense.proofpoint.com/v2/url?u=http-3A__www.iros2017.org_program_workshops-2Dand-2Dtutorials&d=DwIBaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=8QEVyB6YP3yJqcv0UJkzVntRaySIPjoOy-h1CQlBDDU&s=GuMU52IYVv_Or2mRG0O5Mx7HWed5OG5rWjsxUfXQqrk&e=>


>> Click here for this Workshop
<https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.google.com_document_d_1-2D-2DsOHdnXbtwNzXQa5krpLNDqSy1n4DtqIHTmH9P12Uw_pub&d=DwIBaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=8QEVyB6YP3yJqcv0UJkzVntRaySIPjoOy-h1CQlBDDU&s=0afT1STZk_x1pzWPzxVGYo1Bp85l3MnU85-WRZ824Pk&e=>


Topics of Interest:

  *  Safety and stability of physical human-robot and human-telerobot
interaction
  *  Assist-as-needed strategies in robotic/telerobotic rehabilitation
  *  Control of human-centered robotic/telerobotic systems for
rehabilitation and assistance
  *  Machine learning for human-centered robotic rehabilitation,
assistance and assessment
  *  Cooperative and multi-player systems for robotic/telerobotic
rehabilitation and assistance
  *  In-home/remote robotic rehabilitation, assistance and sensorimotor
assessment


Call for Posters: We invite participants to present their recent research
on the topic of the workshop in the form of posters. One-page proposals
for poster presentations, in the standard IEEE IROS conference format,
should be submitted before September 4, 2017. All submissions will be
judged based on relevance to the workshop topic, technical quality, and
novelty. Authors of accepted abstracts are expected to attend the workshop
and to present their work as posters during the interactive session. Please
send your poster to [hidden email]


Speakers:

  *  Neville Hogan, Massachusetts Institute of Technology, USA.
  *  Todd Murphey, Northwestern University, USA.
  *  Panagiotis Artemiadis, Arizona State University, USA.
  *  Dario Farina, Imperial College, UK.
  *  Etienne Burdet, Imperial College London, UK.
  *  Damiano Zanotto, Stevens Institute of Technology, USA.
  *  Dana Kulic, University of Waterloo, Canada.
  *  Carlo Menon, Simon Fraser University, Canada.
  *  Sami Haddadin,  Leibniz University of Hanover, Germany.
  *  James Patton, University of Illinois at Chicago, USA.
  *  Mahdi Tavakoli, University of Alberta, Canada.
  *  Mahya Shahbazi, University of Western Ontario, Canada.
  *  S. Farokh Atashzar, University of Western Ontario, Canada


About the Workshop: This full day workshop will present recent developments
in enhancing human-robot interaction in advanced intelligent
robotic/telerobotic systems for movement rehabilitation and assistance. We
aim to focus particularly on issues related to physical and kinesthetic
interaction between patients and robots. The workshop will discuss a broad
range of related subjects including safety, smart autonomy, mathematical
modeling, instrumentation and intelligent control algorithms. All of these
will be in the context of patient-centered robotic/telerobotic systems for
use in rehabilitation and assistance.


Workshop Organizers and Technical Committee:

 * Dr. S. Farokh Atashzar,  Email: [hidden email]
 * Dr. Mahya Shahbazi, Email: [hidden email]
 * Prof. Mahdi Tavakoli, Email: [hidden email]
 * Prof. Rajni Patel, Email: [hidden email]


*--------------------------------*
*S. Farokh Atashzar, Ph.D.*
*Postdoctoral Research Associate, *
*Rehabilitation Engineering and Assistive Control Technologies (REACT),*
*and Canadian Surgical Technologies & Advanced Robotics (CSTAR), *
*Department of **Electrical and Computer Engineering, University of Western
Ontario*
*London, Ontario, Canada.*
*Contact information:*
*Email address (1): [hidden email] <[hidden email]>*

*Email address (2): [hidden email] <[hidden email]>*


------------------------------

Message: 11
Date: Tue, 22 Aug 2017 12:30:34 +0900
From: Michael Spranger <[hidden email]>
To: [hidden email]
Subject: [robotics-worldwide] [meetings] 2nd CFP Papers/posters
    Workshop on Language Learning in children and robots at IEEE
    ICDL-EPIROB
Message-ID: <[hidden email]>
Content-Type: text/plain; charset="utf-8"

Call for Papers/Posters (apologies for multiple postings)

===========================================================
Call for Papers/Posters: Workshop on Language Learning@ICDL
===========================================================

Monday, September 18th 2017 Lisbon, Portugal

https://urldefense.proofpoint.com/v2/url?u=https-3A__sites.google.com_view_epirob2017language&d=DwIFaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=hBBqvWVHwsRuAJHyBA8-oESx5jrSlrOu1PzLpGYLHWc&s=SHcHllucBfOoXCl_JuD_AawiBsGCShwt1r_89AdC2Tc&e=

This workshop is part of the 2017 IEEE ICDL-EPIROB conference (ICDL-EPIROB 2017)
https://urldefense.proofpoint.com/v2/url?u=http-3A__www.icdl-2Depirob.org&d=DwIFaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=hBBqvWVHwsRuAJHyBA8-oESx5jrSlrOu1PzLpGYLHWc&s=VoDYqWvptBtSEbyzVkTcS6wlk56MPbDJLuo1IvttsCM&e=

===========================================================
Important Dates
===========================================================

Aug 31, 2017, Deadline for Abstract submission
Sep 10, 2017, Notification of acceptance

===========================================================
Workshop Topic
===========================================================

Language Learning

Children acquire language by interacting with their social and physical environment. By the time children start to talk, their sensorimotor intelligence (visual and auditory perception, body movement, navigation, object manipulation, and articulatory control) has already reached a high level of competence. Together with social support, these competences and growing representations provide a basis for the ongoing development of communication. Emerging skills such as basic turn-taking, establishing eye contact and first systematic vocalizations like canonical babbling significantly shape early social interactions.
These interactions are multimodal in nature and vary across contexts. Early communicative exchange, in particular, is characterized by the use of different, not necessarily conventional, means. This fact is intriguing for research on symbol emergence: How do participants choose and agree on particular means? How do the means become conventionalized?

The context within which interaction takes place can vary not only across developmental time and situations within individuals, but also between individuals, socio-economic groups and cultures. Representations become continuously enriched in ongoing interactions and across different contexts.
Importantly, continuously acquiring knowledge in different multimodal contexts, and being able to continuously enrich the underlying representations, provides a potentially powerful mechanism (cross-situational learning) that is already well recognized in children's learning. Nonetheless, we need to know more about how children recognize contexts and how their language learning benefits from language use that varies across contexts.
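The cross-situational learning mechanism mentioned above can be illustrated with a minimal co-occurrence model (a sketch, not any particular published implementation): a word-referent pairing that is ambiguous in any single situation becomes unambiguous across several:

```python
from collections import Counter

# Minimal sketch of cross-situational word learning: each situation pairs
# heard words with visible referents; counting co-occurrences across
# situations resolves the ambiguity present within any single one.

situations = [
    ({"ball", "dog"}, {"BALL", "DOG"}),
    ({"ball", "cup"}, {"BALL", "CUP"}),
    ({"dog", "cup"},  {"DOG", "CUP"}),
]

counts = Counter()
for words, referents in situations:
    for w in words:
        for r in referents:
            counts[(w, r)] += 1

def best_referent(word):
    """Pick the referent that co-occurred with the word most often."""
    return max((r for (w, r) in counts if w == word),
               key=lambda r: counts[(word, r)])

print(best_referent("ball"))  # -> 'BALL'
```

In the first situation alone, "ball" could mean BALL or DOG; only the accumulated counts across situations disambiguate it, which is the essence of the mechanism.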

Even though there are various efforts in developmental robotics to model communication, the emergence of symbolic communication is still an unsolved problem. We are still lacking convincing theories and implementations that show how cooperation and interaction skills could emerge in long-term experiments with populations of robotic agents.


===========================================================
Structure of Workshop
===========================================================

The workshop is a full-day workshop with three sessions of two to three talks each. All posters will be presented in a lightning-talk round during the day. We finish the workshop with a poster session.

Speakers:
* Afra Alishahi (U Tilburg, The Netherlands)
* Angelo Cangelosi (U Plymouth, UK)
* Iris Nomikou (U Portsmouth, UK)
* Junko Kanero (Koç University, Turkey)
* Max Garagnani (Goldsmiths, UK)
* Pierre-Yves Oudeyer (INRIA Bordeaux, France)
* Takayuki Nagai (U Electro-Communications, Tokyo)
* more TBA

===========================================================
Submission and Publication
===========================================================

We invite short abstracts (around 200 words) for the workshop. The accepted abstracts will be presented in a poster session. We want to give researchers a chance to present their (ongoing) work. But we also want to provide a forum for relevant work that has recently been published in journals and other conferences.

Abstracts will be reviewed by the organizers.

Publication: Authors of suitable posters will be invited to submit their work to an upcoming special issue of TCDS

Submission: send your abstracts to [hidden email]

===========================================================
Organizers
===========================================================

Chen Yu (Indiana University, USA)
Katharina J. Rohlfing (Paderborn University, Germany)
Malte Schilling (CITEC Bielefeld, Germany)
Michael Spranger (Sony Computer Science Laboratories Inc, Japan)
Paul Vogt (Tilburg University, the Netherlands)
Tadahiro Taniguchi (Ritsumeikan University, Japan)

For more information contact [hidden email] and see our webpage https://urldefense.proofpoint.com/v2/url?u=https-3A__sites.google.com_view_epirob2017language&d=DwIFaQ&c=clK7kQUTWtAVEOVIgvi0NU5BOUHhpN0H8p7CSfnc_gI&r=0w3solp5fswiyWF2RL6rSs8MCeFamFEPafDTOhgTfYI&m=hBBqvWVHwsRuAJHyBA8-oESx5jrSlrOu1PzLpGYLHWc&s=SHcHllucBfOoXCl_JuD_AawiBsGCShwt1r_89AdC2Tc&e=



------------------------------

Message: 12
Date: Mon, 21 Aug 2017 23:22:17 -0700
From: "bryan.robot" <[hidden email]>
To: <[hidden email]>
Subject: [robotics-worldwide] [jobs] AI/Robotics Engineer, Drone -
    Kakao Brain
Message-ID: <[hidden email]>
Content-Type: text/plain; charset="us-ascii"

Kakao Brain is looking for exceptional engineers and scientists to work on
exciting drone-based projects at our main office just outside of Seoul.
Multiple openings for the DeepDrone team are available for those ready to take
on challenging, vision-based machine perception problems for autonomous
drones.

Kakao Brain was established in early 2017 to develop future breakthrough
technologies, services and businesses by leveraging AI and robotics. Kakao
Brain is a subsidiary of Kakao Corp., a household-name Korean mobile company
behind the ubiquitous KakaoTalk messaging service, used by virtually all of
South Korea and available in over 230 countries in 15 different languages.
Kakao also provides a variety of online/mobile services that bring immense
benefit to the daily lives of all Kakao users.

Kakao Brain is located in the Pangyo Techno Valley special district, modeled
after Silicon Valley. The DeepDrone team (Deep Learning + Drone) at
Kakao Brain is working hard at bringing technological advancements to
consumer selfie drones as well as heavy-duty ones for agricultural and
industrial applications.


*Core qualities* we look for in a successful candidate are:

  1) *Great understanding of relevant concepts & knowledge in the fields of
aerial drone / mobile-robot perception / machine learning.* Advanced
understanding and knowledge, as well as extensive hands-on experience in
computer vision and machine learning, are a must.
  2) *Demonstrable record of software development experience and skills.*
Applicants are more than welcome to showcase their software development
experience by sharing their personal GitHub repo, or by any other
means (if preferred).
  3) *Solid communication skills and willingness to work as a team.* We
believe it is crucial for every engineer to be able to effectively and
efficiently communicate in both verbal and written form to present their
designs, products, modules to others, just as much as being able to write
good code, because in the end it is always about team effort. Also,
documentation!

*Current Job Openings:*

  - Perception/computer vision scientists/engineers
  - Aerial flight control engineers
  - Artificial intelligence scientists

*Perception/Computer Vision Engineers*


*Job Description:*
  - Provide software design and programming support to research projects
  - Design, develop, implement, evaluate, and integrate object detection,
obstacle avoidance, sensor fusion, state estimation, etc. on aerial platforms
  - Use various measurements from different sensing modalities, including
mono- or stereo-camera images, LIDAR, radar, IMU, etc. on aerial platforms
  - Develop inertial-visual pose estimation for drones

*Basic Qualifications:*
  - Master's or Doctorate degree in EE, CS, Robotics or a related area with a
minimum of 1 year of software development experience
  - Bachelor's degree in EE, CS, Robotics or a related area with a minimum of
3 years of software development experience
  - Passionate about drones and autonomous drone flight
  - Strong interest in solving hard and challenging drone perception
problems
  - Demonstrable proficiency in Python, C++
  - Good working knowledge in prototyping using any programming languages
  - Solid understanding of computer vision, including object detection,
tracking, SLAM, and structure from motion
  - Ability to architect and implement software libraries and algorithms

*Preferred Qualifications:*
  - Experience with integrating, testing and deploying any robotic systems
  - Experience with hardware device interfacing and embedded systems
  - Experience with GPGPU, CUDA, CUDNN, OpenCL or SIMD
  - Experience developing device drivers for on-board-sensors
  - Experience with Agile software development is a plus
  - Experience with contributing to open source projects is a plus
  - Familiarity with a typical development toolchain: OpenCV, Git,
ROS
  - Broad understanding of machine learning algorithms
  - Good working knowledge of TensorFlow or similar frameworks
  - Strong oral and written skills in English

*Aerial Flight Control Engineers*


*Job Description: *
  - Design, simulate, and test new control algorithms for a drone
  - Understand the given flight mission and design and test path-planning
algorithms with on-board sensors and MCUs
  - Utilize state-of-the-art SLAM and stochastic control algorithms to fly
a drone
  - Perform detailed analysis of control system performance under a variety
of operating conditions, turbulence models, and aerodynamic models.

*Basic Qualifications:*
  - Master's or Doctorate degree in EE, CS, Robotics or a related area with a
minimum of 1 year of software development experience
  - Bachelor's degree in EE, CS, Robotics or a related area with a minimum of
3 years of software development experience
  - Passionate about drones and autonomous drone flight
  - Strong interest in solving hard and challenging drone perception
problems
  - Demonstrable proficiency in Python, C++
  - Good working knowledge in prototyping using any programming languages

*Preferred Qualifications:*
  - Experience with integrating, testing and deploying any robotic systems
  - Experience with hardware device interfacing and embedded systems
  - Experience in Python and C/C++; experience translating control
algorithms into code
  - Experience with a variety of control techniques, e.g. classical SISO
methods, adaptive control, optimal control, Kalman filtering, and
discrete-time signal processing
  - Experience with Agile software development is a plus
  - Experience with contributing to open source projects is a plus
  - Strong oral and written skills in English

*Artificial Intelligence Scientists*


*Job Description:*
  - Design, implement and evaluate deep learning models
  - Publish and deliver research findings and developments
  - Pursue deep learning research goals in computer vision (e.g., object
detection, depth estimation) or robotics (e.g., SLAM, SfM, aerial flight
control)
  - Design, simulate, and test autonomous aerial flight using artificial
intelligence

*Basic Qualifications:*
  - Master's or Doctorate degree in EE, CS, Machine Learning, Robotics or a
related area
  - Demonstrable proficiency in Python or any programming language
  - Good working knowledge of TensorFlow or similar frameworks
  - Relevant experience in deep learning
  - Relevant publications or open source contributions
  - Familiar with standard artificial intelligence and machine learning
techniques, scientific thinking, and the ability to invent

*Preferred Qualifications:*
  - Publications in top academic conferences such as NIPS, CVPR, AAAI,
ICML, ICLR, ICRA, etc.
  - Winner of data science challenges such as ILSVRC, WMT, Kaggle, etc.
  - Ability to independently define and solve visual machine learning
problems, e.g. CNN-SLAM
  - Experience with Agile software development is a plus
  - Strong oral and written skills in English

For questions or submissions, please contact us at [hidden email],
stating the position of interest and attaching your resume, along with a
self-introduction or any other supplementary material.




--
View this message in context: http://robotics-worldwide.1046236.n5.nabble.com/robotics-worldwide-jobs-AI-Robotics-Engineer-Drone-Kakao-Brain-tp5717312.html
Sent from the robotics-worldwide mailing list archive at Nabble.com.


------------------------------

Message: 13
Date: Wed, 23 Aug 2017 06:31:31 +0000
From: "Ilaria.Tiddi" <[hidden email]>
To: "[hidden email]" <[hidden email]>
Subject: [robotics-worldwide] [meetings] KCAP2017 || Call for short
    papers, late breaking results, and vision papers
Message-ID:
    <[hidden email]>
   
Content-Type: text/plain; charset="iso-8859-1"

K-CAP2017
The 9th International Conference on Knowledge Capture
Austin, Texas, December 4-6, 2017
http://k-cap2017.org/
Twitter : @KCAP2017

***************************************
** CALL FOR SHORT PAPERS **
***************************************

== IMPORTANT DATES ==

All deadlines are at 23:59 Hawaii Time (HAST).

[[ Short papers, vision papers, and late breaking results papers ]]
- Submission: September 22, 2017
- Notification: October 16, 2017
- Camera-ready: October 23, 2017

[[ Conference ]]
Workshops/Tutorials: December 4, 2017
Main conference: December 5-6, 2017

== KCAP CONFERENCE ==

The 9th International Conference on Knowledge Capture aims at attracting researchers from diverse areas of Artificial Intelligence, including knowledge representation, knowledge acquisition, intelligent user interfaces, problem-solving and reasoning, planning, agents, text extraction, and machine learning, information enrichment and visualization, as well as researchers interested in cyber-infrastructures to foster the publication, retrieval, reuse, and integration of data.

Today these data come from an increasingly heterogeneous set of resources that differ with regards to their domain, media format, quality, coverage, viewpoint, bias, and so on.  More than the sheer amount of these data, their heterogeneity allows us to arrive at better models and answer complex questions that cannot be addressed in isolation but require the interaction of different scientific fields or perspectives.

In most cases, knowledge is not captured as an end in itself but as a means to, for instance, enable better user interfaces, improve retrieval beyond simple keyword search, and so forth. For K-CAP 2017, we would like to focus on the creation, enrichment, querying, and maintenance of knowledge graphs (not necessarily limited to the Semantic Web technology stack) out of heterogeneous data sources.

== TOPICS AND SUBMISSIONS ==

The 9th International Conference on Knowledge Capture, K-CAP 2017, will feature a track for short papers, vision papers, and late-breaking results with a deadline in September.

The special theme of the conference this year will be **Knowledge Graphs from Heterogeneous Data**. Therefore, we especially encourage submissions on topics such as:

* Extracting knowledge graphs from unstructured/semi-structured and multi-media data
* Summarization techniques for knowledge graphs
* Visualization of knowledge graphs and visual query interfaces
* Statistical analysis from Web data
* Services and API for knowledge graphs
* Addressing scalability issues for distributed knowledge graphs
* Provenance, trust, and credibility as they relate to knowledge graphs
* Enrichment and cleaning of knowledge graphs and alignment to existing graphs
* Hybrid approaches for knowledge capture combining knowledge engineering and machine learning
* Data dynamicity, heterogeneity, and decay in knowledge intensive systems

Submissions on more general topics, traditionally covered by the K-CAP conference series, are highly encouraged, including but not restricted to:

* Knowledge acquisition
* Knowledge authoring
* Knowledge extraction
* Knowledge management
* Knowledge publication
* Knowledge capture for the Semantic Web and the Web of Linked Data
* Knowledge capture and enrichment in specific domains such as the earth sciences
* Collaborative and social approaches to knowledge management and acquisition
* Crowdsourcing for knowledge capture and refinement
* Knowledge Capture from Social Environments and Contexts
* Mixed-initiative planning and decision-support
* Problem-solving knowledge and methods
* Knowledge-based markup techniques
* Knowledge engineering and modeling methodologies
* Narrative intelligence
* Knowledge capture through storytelling
* Provenance and trust issues in knowledge intensive systems
* Services and applications that enable or utilize data capture techniques
* Semantic search and query enrichment
* Ontology engineering and engineering methodologies
* Ontology design patterns
* Similarity measurement and Analogy-based reasoning

All papers are limited to 4 pages and should be submitted through EasyChair (https://easychair.org/conferences/?conf=kcap2017), formatted according to the ACM format (2-column SIG Conference Proceedings template).

== VENUE AND REGISTRATION ==

K-CAP2017 will be held at the Hilton Garden Inn Austin Downtown/Convention Center, in Austin, Texas.

International Airports near the Convention Center are the Austin-Bergstrom International Airport (12 miles), the San Antonio International Airport (74 miles), the Houston George Bush Intercontinental Airport (167 miles) and the Dallas/Fort Worth International Airport (215 miles).

Special room rates ($169/night for single and double usage) have been arranged for K-CAP2017. Please contact [hidden email] to request further reduced costs for accommodation.

Registration fees range from $425 for students to $600-650 for ACM/non-ACM SIG members, inclusive of coffee break and lunch costs, as well as special events held during the conference. Check http://k-cap2017.org/registration/ for further info.

== ORGANIZING COMMITTEE ==

* Oscar Corcho, Universidad Politecnica de Madrid, Spain (General chair)
* Krzysztof Janowicz, University of California, U.S.A. (Program chair)
* Giuseppe Rizzo, Istituto Superiore Mario Boella, Italy (Workshops/Tutorials chair)
* Ilaria Tiddi, The Open University, United Kingdom (Publicity chair)
* Daniel Garijo, University of Southern California, U.S.A. (Treasurer and sponsor chair)




------------------------------

Message: 14
Date: Wed, 23 Aug 2017 09:18:22 +0200
From: Robert Haschke <[hidden email]>
To: <[hidden email]>
Subject: [robotics-worldwide] [jobs] Funded PhD in action modelling
    with episodic memory at CITEC - Bielefeld University
Message-ID:
    <[hidden email]>
Content-Type: text/plain; charset=utf-8; format=flowed

The NeuroInformatics Group (Prof. Ritter) at the Faculty of Technology,
Bielefeld University, is looking for a PhD candidate in the area of
computational modelling of actions and episodic memory. The project is
part of the Collaborative Research Center (CRC) "Everyday Activity
Science and Engineering (EASE)"
<http://ease.informatik.uni-bremen.de/> in cooperation with Bremen
University.

The ideal candidate should hold an MS degree in Computer Science or
Engineering, with a particular focus on machine learning (particularly
deep learning), robotics, action modelling, and cognitive architectures.
Strong programming skills in C++ and/or Python as well as experience
with ROS are essential, and candidates should be strongly motivated to
quickly get acquainted with large existing code bases. A history of
open-source contributions is desired (if you have one, please mention
e.g. your GitHub account). Proficiency in written and spoken English is
required. Knowledge of German is useful but not required.

Successful candidates will be employed as research assistants with an
annual gross salary of ca. 50,000 EUR according to the TV-L E13 pay scale;
the contract is limited to 4 years. Reduced working hours are possible. The
position is intended for scientific qualification. According to the legal
requirements of the WissZeitVG and the contract on good employment
practice, individual contract terms can arise.

Interested candidates should submit a cover letter, a detailed CV, and
the names of two references (or recommendation letters) via email to
[hidden email].

Interviews will start on September 18th, 2017. Late applications will be
considered depending on the availability of the position.

------

*About the project "Episodic memory for everyday manual activities"
within EASE*

The project is part of the CRC EASE at the University of Bremen. Together
with a colleague from the Institute for Artificial Intelligence (IAI),
led by Prof. Beetz, you will investigate cognitive architectures to
leverage episodic memory of robotic agents to learn sophisticated object
manipulation skills and to reason about them. The core idea is to
combine symbolic and subsymbolic activity descriptions in a hybrid
episodic memory architecture to steer the robot action along manifolds
that enable robust and safe goal achievement.

*------*

*About Neuroinformatics Group / CITEC*

Applicants will work in the context of the Neuroinformatics Group at
CITEC, headed by Prof. Ritter and cooperating with the Institute for
Artificial Intelligence (IAI), headed by Prof. Beetz. We provide an
excellent and stimulating research environment. Our modern robotics lab
hosts two bimanual robot setups equipped with anthropomorphic,
tactile-sensorized Shadow Robot Hands. The project will benefit from its
embedding in the research environment of the Cluster of Excellence
"Cognitive Interaction Technology" (CITEC).

Creating the scientific and technology foundations for systems that
interact with human users seamlessly and at a semantic level is the
focus of the Cluster of Excellence in Cognitive Interaction Technology
(CITEC) at Bielefeld University. CITEC's research topics range from
smart devices through virtual and augmented reality to sophisticated robots
with cognitive or even social capabilities.

CITEC was established as part of the German Excellence Initiative in
2007 and brings together over 30 research groups in computer science,
robotics, biology, linguistics, psychology, and sports science,
co-located in a dedicated building that offers cutting-edge lab facilities.

The Neuroinformatics Group, headed by Prof. Ritter, is particularly
focusing on tactile- and vision-based grasping and manipulation in
unknown environments employing anthropomorphic Shadow Robot Hands.
Methods include machine learning, particularly deep neural networks, but
also classical engineering approaches.


*What CITEC Offers You*

We offer a strongly interdisciplinary and stimulating research
environment with excellent laboratory infrastructure as well as a wide
network of national and international research partners. The positions
are remunerated at the salary level TV-L 13 and are provided with a
dedicated budget for research costs. We are strongly committed to
supporting the career development of our members and can offer facilities
to reconcile professional demands and family life.

*Application Process*

Applications are to be submitted in English and must include the
following (no more than two pages for each item 1. - 4.):

 1. A statement describing the applicant's area(s) of research (topics
    and methods)
 2. A motivation letter
 3. A curriculum vitae
 4. A record of publications, with copies of 3 representative papers
 5. References to software projects the candidate developed / contributed to
 6. Up to two letters of recommendation


Applications should be sent by email ([hidden email])
to Dr. Robert Haschke, in one PDF document. Should you have any
questions, please also contact us at this email address.


Applications from qualified severely handicapped persons and persons who
are on a par with severely handicapped persons are explicitly welcome.


Bielefeld University is recognized as a family-friendly university. We
encourage women to apply for jobs; this is particularly true with regard
to both academic and technical posts as well as positions in information
technology, trades and crafts. Applications are handled according to the
provisions of the state equal opportunity statutes.


*About Bielefeld *

Bielefeld is one of the 20 largest cities in Germany (with close to
330,000 inhabitants). Founded in 1214 CE as a merchant city, Bielefeld is
today a dynamic university city with a total of seven universities and
nearly 35,000 students. It is a lively city with favourable living costs
and plenty of cultural and entertainment opportunities. Bielefeld is at
the heart of the East Westphalia-Lippe region, which is home to a number
of globally active companies, including Miele, Bertelsmann, Dr. Oetker
and Schüco. Fast ICE trains make Berlin, Hamburg and Cologne reachable
within two and a half hours. Located in the heart of the Teutoburg
Forest, it is one of Germany's greenest cities, offering many
opportunities for outdoor and leisure activities.



------------------------------

Message: 15
Date: Wed, 23 Aug 2017 09:19:42 +0200
From: Robert Haschke <[hidden email]>
To: <[hidden email]>
Subject: [robotics-worldwide] [jobs] Funded PhD in tactile-driven
    robotics at CITEC - Bielefeld University
Message-ID:
    <[hidden email]>
Content-Type: text/plain; charset=utf-8; format=flowed

The NeuroInformatics Group (Prof. Ritter) at the Faculty of Technology,
Bielefeld University, is looking for a PhD candidate in the area of
tactile-based control for grasping and manipulation. The ideal candidate
should hold a MS degree in Computer Science or Engineering, with a
particular focus on sensor-driven robotics. S/he should have a
background in machine learning (particularly deep learning), robotics,
and control. Strong programming skills in C++ and/or python as well as
experience with ROS are essential and candidates should be strongly
motivated to quickly get acquainted with large existing code bases. A
history of open-source contributions is desired (if you have one, please
mention e.g. your github account). Proficiency in written and spoken
English is required. Knowledge of German is useful but not required.

The position is available immediately with an initial contract until
March 2019 and the option of prolongation to a full 3-year PhD period.
Reduced working hours are possible. The position is intended for scientific
qualification. According to the legal requirements of the WissZeitVG and
the contract on good employment practice, individual contract terms can
arise with an annual gross salary of ca. 50,000 EUR.

Interested candidates should submit a cover letter, a detailed CV, and
the names of two references (or recommendation letters) via email to
[hidden email].

Interviews will start on September 18th, 2017. Late applications will be
considered depending on the availability of the position.

------

*About the project "TACT-HAND"*, Improving control of prosthetic hands
using tactile sensors and realistic machine learning

In this cooperation with DLR and EPFL we aim for an augmentation of a
prosthetic hand (Touch Bionics) with tactile sensing and appropriate
force-based control algorithms to facilitate more advanced usage of this
5DoF hand. The main task of the candidate will be the realization of
control algorithms for tactile-based grasping, applying deep learning
approaches for incipient slip detection and tactile sensor development.

*------*

*About Neuroinformatics Group / CITEC*

Applicants will work in the context of the Neuroinformatics Group at
CITEC, headed by Prof. Ritter, further developing a bimanual robot setup
equipped with anthropomorphic, tactile-sensorized Shadow Robot Hands.
The project will also benefit from its embedding in the research
environment of the Cluster of Excellence "Cognitive Interaction
Technology" (CITEC).

Creating the scientific and technology foundations for systems that
interact with human users seamlessly and at a semantic level is the
focus of the Cluster of Excellence in Cognitive Interaction Technology
(CITEC) at Bielefeld University. CITEC's research topics range from
smart devices through virtual and augmented reality to sophisticated robots
with cognitive or even social capabilities.

CITEC was established as part of the German Excellence Initiative in
2007 and brings together over 30 research groups in computer science,
robotics, biology, linguistics, psychology, and sports science,
co-located in a dedicated building that offers cutting-edge lab facilities.

The Neuroinformatics Group, headed by Prof. Ritter, is particularly
focusing on tactile- and vision-based grasping and manipulation in
unknown environments employing anthropomorphic Shadow Robot Hands.
Methods include machine learning, particularly deep neural networks, but
also classical engineering approaches.


*What CITEC Offers You*

We offer a strongly interdisciplinary and stimulating research
environment with excellent laboratory infrastructure as well as a wide
network of national and international research partners. The positions
are remunerated at the salary level TV-L 13 and are provided with a
dedicated budget for research costs. We are strongly committed to
supporting the career development of our members and can offer facilities
to reconcile professional demands and family life.

*Application Process*

Applications are to be submitted in English and must include the
following (no more than two pages for each item 1. - 4.):

 1. A statement describing the applicant's area(s) of research (topics
    and methods)
 2. A motivation letter
 3. A curriculum vitae
 4. A record of publications, with copies of 3 representative papers
 5. References to software projects the candidate developed / contributed to
 6. Up to two letters of recommendation


Applications should be sent by email ([hidden email])
to Dr. Robert Haschke, in one PDF document. Should you have any
questions, please also contact us at this email address.


Applications from qualified severely handicapped persons and persons who
are on a par with severely handicapped persons are explicitly welcome.


Bielefeld University is recognized as a family-friendly university. We
encourage women to apply for jobs; this is particularly true with regard
to both academic and technical posts as well as positions in information
technology, trades and crafts. Applications are handled according to the
provisions of the state equal opportunity statutes.


*About Bielefeld *

Bielefeld is one of the 20 largest cities in Germany (with close to
330,000 inhabitants). Founded in 1214 CE as a merchant city, Bielefeld is
today a dynamic university city with a total of seven universities and
nearly 35,000 students. It is a lively city with favourable living costs
and plenty of cultural and entertainment opportunities. Bielefeld is at
the heart of the East Westphalia-Lippe region, which is home to a number
of globally active companies, including Miele, Bertelsmann, Dr. Oetker
and Schüco. Fast ICE trains make Berlin, Hamburg and Cologne reachable
within two and a half hours. Located in the heart of the Teutoburg
Forest, it is one of Germany's greenest cities, offering many
opportunities for outdoor and leisure activities.



------------------------------

Subject: Digest Footer

_______________________________________________
robotics-worldwide mailing list
[hidden email]
http://duerer.usc.edu/mailman/listinfo.cgi/robotics-worldwide


------------------------------

End of robotics-worldwide Digest, Vol 145, Issue 10
***************************************************