Version 1
: Received: 18 September 2024 / Approved: 18 September 2024 / Online: 19 September 2024 (10:01:58 CEST)
How to cite:
Thottempudi, S. G.; Borra, S. Leveraging Large Language Models to Enhance an Intelligent Agent with Multifaceted Capabilities. Preprints 2024, 2024091446. https://doi.org/10.20944/preprints202409.1446.v1
APA Style
Thottempudi, S. G., & Borra, S. (2024). Leveraging Large Language Models to Enhance an Intelligent Agent with Multifaceted Capabilities. Preprints. https://doi.org/10.20944/preprints202409.1446.v1
Chicago/Turabian Style
Thottempudi, S. G., and Sagar Borra. 2024. "Leveraging Large Language Models to Enhance an Intelligent Agent with Multifaceted Capabilities." Preprints. https://doi.org/10.20944/preprints202409.1446.v1
Abstract
This project develops an AI-powered virtual assistant to improve Siemens Energy's internal processes. Built on cloud-based technologies, a microservice architecture, and large language models (LLMs), the assistant is designed to be reliable, efficient, user-friendly, and tailored to Siemens Energy's requirements. The key business problem the study identified was the time engineers spend searching for information across large volumes of company documents. The proposed virtual assistant returns precise, context-aware responses to improve productivity. Its microservice architecture ensures scalability, flexibility, and integration across varied use cases, so tasks such as document retrieval, translation, summarization, and comparison can be handled efficiently. The backend is deployed on Amazon Web Services (AWS) for cost-effectiveness and scalability, paired with a frontend designed for natural user interaction. To increase precision and relevance, the system applies state-of-the-art AI techniques, including vector databases and Retrieval Augmented Generation (RAG). The assistant speeds up document management procedures, improves data accessibility, and reduces search time. The results show how it can enhance workflow efficiency for Siemens Energy engineers and how readily it can be adapted to future AI-driven applications.
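The RAG pattern the abstract describes can be sketched in miniature as follows. This is an illustrative sketch, not the authors' implementation: it substitutes a toy bag-of-words similarity for the vector database and learned embeddings a production system would use, and stops at prompt assembly rather than calling an LLM. All document strings and function names here are hypothetical.

```python
from collections import Counter
import math

def embed(text):
    """Toy term-frequency 'embedding'; a real system would use a learned
    embedding model and store vectors in a vector database."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    """Rank documents by similarity to the query and keep the top k."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, docs):
    """Augment the query with retrieved context before sending it to an LLM."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

Retrieval narrows the corpus to passages relevant to the engineer's question, and the assembled prompt grounds the LLM's answer in those passages, which is what lets a RAG assistant respond with the "precision and context awareness" described above.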
Keywords
large language models; retrieval augmented generation
Subject
Computer Science and Mathematics, Artificial Intelligence and Machine Learning
Copyright:
This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.