Preprint Article, Version 1 (not peer-reviewed)

Visual Imitation Learning from One-Shot Demonstration for Multi-Step Robot Pick-and-Place Tasks

Version 1: Received: 14 August 2024 / Approved: 15 August 2024 / Online: 15 August 2024 (06:28:53 CEST)

How to cite: Lu, S.; Haerdtlein, C.; Schilp, J. Visual Imitation Learning from One-Shot Demonstration for Multi-Step Robot Pick-and-Place Tasks. Preprints 2024, 2024081123. https://doi.org/10.20944/preprints202408.1123.v1

Abstract

Imitation learning, also known as programming by demonstration, has been shown to be a promising paradigm for intuitive robot programming by non-expert users. However, the classical kinesthetic approach with physical hand guidance suffers from limited generalizability across different robot types and is impractical for demonstrating tasks with long horizons. Visual imitation learning enables the recording of multi-step tasks as a single continuous video, allowing non-experts to demonstrate tasks naturally. Existing approaches typically require a large amount of data to train end-to-end deep learning models that map raw pixels to robot actions. This paper explores the application of visual imitation learning from a one-shot demonstration, significantly reducing the data requirements and simplifying the programming process. To this end, a framework is proposed to map hand trajectories to the robot end-effector, consisting of four essential components: hand detection, object detection, segmentation of the trajectories into elemental skills, and learning of the skills. Methods are developed for each component and evaluated on recorded videos to demonstrate the effectiveness of the proposed framework.
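The abstract outlines a four-stage pipeline (hand detection, object detection, segmentation of the trajectory into elemental skills, and learning of those skills). The following Python sketch illustrates one possible reading of the last two stages, assuming that grasp-state transitions delimit pick/place skills and that a calibration transform maps hand waypoints to end-effector goals. All class and function names here are hypothetical and do not come from the paper.

```python
# Hypothetical sketch of the segmentation and skill-mapping stages.
# Perception backends (hand and object detectors) are assumed to have
# already produced per-frame observations; nothing here reflects the
# authors' actual implementation.

from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

Point3D = Tuple[float, float, float]

@dataclass
class FrameObservation:
    hand_position: Optional[Point3D]   # detected hand keypoint (camera frame)
    gripper_closed: bool               # grasp state inferred from hand pose
    nearest_object: Optional[str]      # label of the closest detected object

@dataclass
class Skill:
    name: str           # "pick" or "place"
    target_object: str
    waypoint: Point3D   # end-effector goal derived from the hand trajectory

def segment_into_skills(trajectory: List[FrameObservation]) -> List[Skill]:
    """Split a continuous hand trajectory into elemental skills by detecting
    grasp-state transitions (open -> closed = pick, closed -> open = place)."""
    skills: List[Skill] = []
    for prev, curr in zip(trajectory, trajectory[1:]):
        if curr.hand_position is None or curr.nearest_object is None:
            continue
        if not prev.gripper_closed and curr.gripper_closed:
            skills.append(Skill("pick", curr.nearest_object, curr.hand_position))
        elif prev.gripper_closed and not curr.gripper_closed:
            skills.append(Skill("place", curr.nearest_object, curr.hand_position))
    return skills

def map_to_robot_program(skills: List[Skill],
                         hand_to_robot: Callable[[Point3D], Point3D]) -> List[str]:
    """Map hand-frame waypoints to end-effector commands via a calibration transform."""
    program = []
    for skill in skills:
        x, y, z = hand_to_robot(skill.waypoint)
        program.append(f"{skill.name.upper()} {skill.target_object} "
                       f"at ({x:.3f}, {y:.3f}, {z:.3f})")
    return program

if __name__ == "__main__":
    # Toy single-demonstration trajectory: approach, grasp, carry, release.
    demo = [
        FrameObservation((0.10, 0.20, 0.30), False, "cube"),
        FrameObservation((0.12, 0.21, 0.15), True,  "cube"),   # grasp -> pick
        FrameObservation((0.40, 0.25, 0.20), True,  "tray"),
        FrameObservation((0.42, 0.26, 0.12), False, "tray"),   # release -> place
    ]
    identity = lambda p: p  # placeholder hand-to-robot calibration
    for command in map_to_robot_program(segment_into_skills(demo), identity):
        print(command)
```

This toy demonstration yields a two-step program (pick the cube, place it on the tray), illustrating how a single continuous video could be compiled into a sequence of elemental skills.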

Keywords

Visual imitation learning; Robot; One-shot demonstration

Subject

Computer Science and Mathematics, Robotics
