Abstract
We present a unified visuomotor neural architecture for identifying, localizing, and grasping a goal object in a cluttered scene. The RetinaNet-based architecture enables end-to-end training of visuomotor abilities in a biologically inspired, developmental approach. We demonstrate the successful development and evaluation of the method on a humanoid robot platform. The proposed architecture outperforms previous work on single-object grasping as well as a modular architecture for object picking. An analysis of grasp errors suggests similarities to infant grasp learning: while the end-to-end architecture successfully learns grasp configurations, object confusions occasionally occur; when multiple objects are presented, a salient object is sometimes picked instead of the intended one.
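The abstract describes a single network that shares features across identification, localization, and grasping. As a minimal illustrative sketch (not the paper's actual RetinaNet-based model), such a shared visuomotor architecture can be thought of as one feature trunk feeding three output heads; all dimensions, the flatten-and-project trunk, and the 6-DoF grasp output below are assumptions for illustration:

```python
import numpy as np

# Minimal sketch of a shared visuomotor network: one feature trunk and
# three output heads (object class, bounding box, grasp configuration).
# Dimensions and layer choices are illustrative assumptions, not the
# authors' RetinaNet architecture.

rng = np.random.default_rng(0)

FEAT_DIM = 64    # shared feature size (assumption)
N_CLASSES = 5    # number of object classes (assumption)
N_JOINTS = 6     # arm joints in the grasp configuration (assumption)

def shared_trunk(image, W):
    """Stand-in for the shared convolutional backbone: flatten + projection."""
    return np.tanh(image.reshape(-1) @ W)

image = rng.random((16, 16, 3))  # toy input image
W_trunk = rng.standard_normal((16 * 16 * 3, FEAT_DIM)) * 0.01
W_cls = rng.standard_normal((FEAT_DIM, N_CLASSES))    # identification head
W_box = rng.standard_normal((FEAT_DIM, 4))            # localization head (x, y, w, h)
W_grasp = rng.standard_normal((FEAT_DIM, N_JOINTS))   # grasp head (joint angles)

features = shared_trunk(image, W_trunk)
class_scores = features @ W_cls    # which object is it?
bbox = features @ W_box            # where is it?
grasp_config = features @ W_grasp  # how should the arm be configured?
```

Because all three heads read the same features, errors can couple across tasks, which is consistent with the reported confusions between salient and intended objects.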
| Field | Value |
|---|---|
| Original language | English |
| Title of host publication | Proceedings of the Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics (ICDL-EpiRob2019) |
| Number of pages | 7 |
| Place of publication | Oslo, Norway |
| Publication date | 1 Aug 2019 |
| Pages | 19-25 |
| Publication status | Published - 1 Aug 2019 |
| Externally published | Yes |
Keywords
- Visuomotor
- Robotic grasping
- RetinaNet
- End-to-end learning
- Cluttered scenes
Title: Neurocognitive Shared Visuomotor Network for End-to-end Learning of Object Identification, Localization and Grasping on a Humanoid