Article
Version 1
This version is not peer-reviewed
On Renormalization Group Based Deep Q-Network
: Received: 11 July 2024 / Approved: 11 July 2024 / Online: 11 July 2024 (13:03:43 CEST)
How to cite: Garayev, G.; Alili, A. On Renormalization Group Based Deep Q-Network. Preprints 2024, 2024070953. https://doi.org/10.20944/preprints202407.0953.v1
Abstract
In this paper we introduce the integration of Renormalization Group (RG) methods with Deep Q-Networks (DQNs) to improve reinforcement learning in high-dimensional state spaces. RG methods provide multi-scale analysis, enhancing state representation, learning stability, and exploration. The proposed RG-DQN algorithm uses hierarchical Q-value estimation and multi-scale representations, demonstrating superior performance on synthetic genomic data compared to traditional DQNs.
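The multi-scale representation described in the abstract can be illustrated with a minimal sketch. The following is an assumption-laden illustration, not the authors' implementation: it applies an RG-style "block-spin" coarse-graining (averaging adjacent components) repeatedly to a state vector and concatenates all scales, producing the kind of hierarchical input a Q-network could consume. The function names `coarse_grain` and `multiscale_representation` are hypothetical.

```python
import numpy as np

def coarse_grain(state: np.ndarray, block: int = 2) -> np.ndarray:
    """One RG-style coarse-graining step: average non-overlapping blocks
    of size `block` (a block-spin analogue for a 1-D state vector)."""
    n = len(state) - len(state) % block  # drop any remainder
    return state[:n].reshape(-1, block).mean(axis=1)

def multiscale_representation(state: np.ndarray, levels: int = 3) -> np.ndarray:
    """Concatenate the raw state with successively coarser versions of
    itself, exposing features at several scales to a single Q-network."""
    feats, s = [state], state
    for _ in range(levels):
        s = coarse_grain(s)
        feats.append(s)
    return np.concatenate(feats)

# Example: a 16-dimensional state yields a 16 + 8 + 4 + 2 = 30-dim input.
x = np.arange(16, dtype=float)
print(multiscale_representation(x).shape)  # -> (30,)
```

In an RG-DQN-style setup, such a concatenated vector would replace the raw state as the network input; hierarchical Q-value estimation could then weight or gate contributions from each scale.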
Keywords
DQN; renormalization group; AI; loss functions
Subject
Computer Science and Mathematics, Artificial Intelligence and Machine Learning
Copyright: This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.