Retrieval-augmented generation (RAG) with large language models (LLMs) has shown potential for mitigating issues such as hallucinations and inadequately contextualized responses. A pivotal stage in the RAG pipeline is retrieval, in which a retriever selects document chunks based on their semantic similarity to the query. In this study, we advocate for, and provide experimental evidence supporting, the integration and maintenance of question-and-answer (QA) formatted databases to improve retrieved-context representations and response quality. We evaluate our approach on benchmark RAG datasets using standard evaluation metrics and provide comparative analyses against state-of-the-art retrieval methods, demonstrating the potential of our approach.