Preprint Article, Version 1 (this version is not peer-reviewed)

Adaptive Control of Retrieval-Augmented Generation for LLMs Through Reflective Tags

Version 1 : Received: 29 August 2024 / Approved: 29 August 2024 / Online: 30 August 2024 (03:45:59 CEST)

How to cite: Yang, C.; Fujita, S. Adaptive Control of Retrieval-Augmented Generation for LLMs Through Reflective Tags. Preprints 2024, 2024082152. https://doi.org/10.20944/preprints202408.2152.v1

Abstract

While Retrieval-Augmented Generation (RAG) enhances Large Language Models (LLMs), it also presents challenges that can affect model accuracy and performance. Practical applications show that RAG can mask the intrinsic capabilities of LLMs. First, LLMs may become overly dependent on external retrieval, underutilizing their own knowledge and inference abilities, which can reduce responsiveness. Second, RAG techniques might introduce irrelevant or low-quality information, adding noise to the LLM. This can disrupt the normal generation process and lead to inefficient, low-quality content, especially for complex problems. This paper proposes a RAG framework that uses reflective tags to control retrieval. The framework evaluates retrieved documents in parallel and incorporates the Chain of Thought (CoT) technique for step-by-step content generation, after which the model selects the highest-quality, most accurate content for the final output. The main contributions are: 1) reducing hallucination by selectively utilizing high-scoring documents; 2) enhancing real-time performance through timely external database retrieval; and 3) minimizing negative impacts by filtering out irrelevant or unreliable information through parallel content generation and reflective tagging. These advancements aim to optimize the integration of retrieval mechanisms with LLMs, ensuring high-quality and reliable outputs.
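The control loop the abstract describes (tag each retrieved document, discard low-quality ones, generate candidate answers in parallel, and keep the best) can be sketched as follows. This is a minimal illustration under assumed interfaces: the tagging and generation functions here are crude keyword-overlap stand-ins, not the authors' actual models or scoring method.

```python
# Hypothetical sketch of reflective-tag-controlled RAG.
# reflect_on_document and generate_candidate are stand-ins (assumptions),
# not the paper's trained components.

def reflect_on_document(query: str, doc: str) -> str:
    """Assign a reflective tag via naive keyword overlap (stand-in scorer)."""
    overlap = len(set(query.lower().split()) & set(doc.lower().split()))
    if overlap == 0:
        return "irrelevant"
    return "relevant" if overlap >= 2 else "partially-relevant"

def generate_candidate(query: str, doc: str) -> tuple[str, float]:
    """Stand-in for step-by-step (CoT) generation; returns (answer, score)."""
    score = float(len(set(query.lower().split()) & set(doc.lower().split())))
    return f"Answer grounded in: {doc[:40]}", score

def answer(query: str, retrieved_docs: list[str]) -> str:
    # 1) Tag each retrieved document; drop those marked irrelevant.
    kept = [d for d in retrieved_docs
            if reflect_on_document(query, d) != "irrelevant"]
    # 2) If nothing useful was retrieved, fall back to the LLM's own
    #    parametric knowledge instead of forcing retrieval into the prompt.
    if not kept:
        return "Answer from parametric knowledge only"
    # 3) Generate a candidate per surviving document (in parallel in the
    #    paper; sequential here for simplicity) and keep the best-scoring one.
    candidates = [generate_candidate(query, d) for d in kept]
    return max(candidates, key=lambda c: c[1])[0]
```

The key design point mirrored here is that retrieval is advisory rather than mandatory: documents only influence generation if their reflective tag passes the filter, which is what limits noise injection and hallucination from low-quality retrievals.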

Keywords

Retrieval-Augmented Generation; Large Language Models; Chain of Thought; reflective tag

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning
