Preprint Article, Version 1 (this version is not peer-reviewed)

Path Planning for Robot Combined with Zero-Shot and Hierarchical Reinforcement Learning in Inexperienced Environments

Version 1 : Received: 29 October 2024 / Approved: 30 October 2024 / Online: 30 October 2024 (12:04:46 CET)

How to cite: Mei, L.; Xu, P. Path Planning for Robot Combined with Zero-Shot and Hierarchical Reinforcement Learning in Inexperienced Environments. Preprints 2024, 2024102427. https://doi.org/10.20944/preprints202410.2427.v1

Abstract

Path planning for robots based on reinforcement learning struggles to integrate semantic information about the environment into the training process. In unseen or complex environments, agents often perform sub-optimally and require more training time. To address these challenges, this manuscript proposes a framework that combines zero-shot learning with hierarchical reinforcement learning to enhance agent decision-making in complex environments. Zero-shot learning enables agents to infer correct actions for previously unseen objects or situations from learned semantic associations. The path planning component then applies hierarchical reinforcement learning with an adaptive replay buffer, guided by the insights gained from zero-shot learning, to make decisions effectively. The two parts are trained separately, so the zero-shot component remains usable in different, unseen environments. Simulation experiments show that the proposed structure makes full use of environmental information to generalize across unseen environments and plan collision-free paths.
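To make the two-part structure described above concrete, the following is a minimal, illustrative Python sketch of how a zero-shot semantic module could supply action priors to a hierarchical agent equipped with an adaptive replay buffer. This is not the authors' implementation: the class names, attribute prototypes, priority scheme, sub-goal choices, and placeholder low-level policy are all assumptions introduced purely for illustration.

```python
# Illustrative sketch (not the paper's code) of the two-part framework:
# a zero-shot module scores unseen objects by semantic similarity to seen
# classes, and a hierarchical agent uses that prior for sub-goal selection
# while storing experience in an adaptively weighted replay buffer.
import random
import numpy as np


class ZeroShotSemanticModule:
    """Maps an object's attribute vector to a prior over seen classes
    via cosine similarity with class prototypes (assumed representation)."""

    def __init__(self, seen_prototypes: dict):
        self.prototypes = seen_prototypes  # e.g. {"wall": vec, "door": vec}

    def action_prior(self, attributes: np.ndarray) -> dict:
        sims = {}
        for name, proto in self.prototypes.items():
            denom = np.linalg.norm(attributes) * np.linalg.norm(proto) + 1e-8
            sims[name] = float(attributes @ proto / denom)
        vals = np.array(list(sims.values()))
        probs = np.exp(vals) / np.exp(vals).sum()  # softmax over similarities
        return dict(zip(sims.keys(), probs))


class AdaptiveReplayBuffer:
    """Replay buffer whose sampling probability follows a per-transition
    priority (absolute reward is used here as a simple stand-in)."""

    def __init__(self, capacity: int = 10_000):
        self.capacity = capacity
        self.storage = []
        self.priorities = []

    def add(self, transition, priority: float) -> None:
        if len(self.storage) >= self.capacity:
            self.storage.pop(0)
            self.priorities.pop(0)
        self.storage.append(transition)
        self.priorities.append(priority)

    def sample(self, batch_size: int):
        p = np.array(self.priorities) + 1e-6
        p /= p.sum()
        n = min(batch_size, len(self.storage))
        idx = np.random.choice(len(self.storage), size=n, p=p)
        return [self.storage[i] for i in idx]


class HierarchicalAgent:
    """High-level policy picks a sub-goal biased by the zero-shot prior;
    the low-level policy here is a random placeholder."""

    def __init__(self, zero_shot: ZeroShotSemanticModule):
        self.zero_shot = zero_shot
        self.buffer = AdaptiveReplayBuffer()

    def select_subgoal(self, object_attributes: np.ndarray) -> str:
        prior = self.zero_shot.action_prior(object_attributes)
        # Assumed rule: approach objects that look traversable, avoid obstacles.
        return "approach" if prior.get("door", 0.0) > prior.get("wall", 0.0) else "avoid"

    def act(self, observation: np.ndarray, sub_goal: str) -> int:
        return random.randrange(4)  # placeholder for a trained low-level policy


if __name__ == "__main__":
    prototypes = {"wall": np.array([1.0, 0.0, 0.2]), "door": np.array([0.1, 1.0, 0.8])}
    agent = HierarchicalAgent(ZeroShotSemanticModule(prototypes))
    unseen_object = np.array([0.2, 0.9, 0.7])  # attributes of an object not seen in training
    goal = agent.select_subgoal(unseen_object)
    action = agent.act(np.zeros(3), goal)
    agent.buffer.add((unseen_object, goal, action, 1.0), priority=1.0)
    print(goal, action, len(agent.buffer.sample(1)))
```

Because the semantic module and the planner are separate objects trained independently, the same zero-shot prior can, in principle, be reused in environments the hierarchical agent has never encountered, which mirrors the separation of training emphasized in the abstract.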

Keywords

path planning; zero-shot learning; hierarchical reinforcement learning; adaptive agents

Subject

Computer Science and Mathematics, Robotics
