Version 1: Received: 23 September 2024 / Approved: 24 September 2024 / Online: 24 September 2024 (11:58:11 CEST)
How to cite:
Li, S.; Zhou, X.; Wu, Z.; Long, Y.; Shen, Y. Strategic Deductive Reasoning in Large Language Models: A Dual-Agent Approach. Preprints 2024, 2024091875. https://doi.org/10.20944/preprints202409.1875.v1
APA Style
Li, S., Zhou, X., Wu, Z., Long, Y., & Shen, Y. (2024). Strategic Deductive Reasoning in Large Language Models: A Dual-Agent Approach. Preprints. https://doi.org/10.20944/preprints202409.1875.v1
Chicago/Turabian Style
Li, S., X. Zhou, Z. Wu, Y. Long, and Y. Shen. 2024. "Strategic Deductive Reasoning in Large Language Models: A Dual-Agent Approach." Preprints. https://doi.org/10.20944/preprints202409.1875.v1
Abstract
This study explores the enhancement of deductive reasoning capabilities in Large Language Models (LLMs) through a strategic dual-agent framework. In this framework, one agent acts as a questioner and another as an answerer, with both employing advanced linguistic and logical processing to optimize information exchange. Operating in a structured environment that limits the number of query opportunities, our approach emphasizes the development of LLMs that can efficiently generate and interpret questions to deduce hidden information. The models, which build self-defined agents on top of the pretrained Llama-3-8B model, demonstrate a strong ability to navigate the complexities of logical deduction. Performance evaluations, based on a series of simulated interactions, illustrate the agents' improved precision and strategic acumen in narrowing down possibilities through targeted inquiries. These findings underscore the potential of LLMs in tasks requiring intricate reasoning and collaboration, marking a significant step towards more intelligent and autonomous systems.
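The interaction pattern the abstract describes (a questioner narrowing a hidden-item pool under a fixed query budget, with an answerer replying truthfully) can be sketched in miniature. This is an illustrative toy, not the authors' system: the real agents are LLM-based (Llama-3-8B), whereas here both roles are simple deterministic stand-ins, and all function names are hypothetical.

```python
# Toy sketch of the dual-agent deduction loop: a questioner proposes a
# subset question that roughly halves the candidate pool, an answerer
# replies truthfully, and the loop runs under a fixed query budget.
# Stand-ins for the paper's LLM agents; names are illustrative only.

CANDIDATES = ["apple", "banana", "cherry", "grape",
              "kiwi", "lemon", "mango", "pear"]

def questioner(pool):
    """Choose a question: 'Is the target in this half of the pool?'"""
    return pool[: len(pool) // 2]

def answerer(secret, subset):
    """Answer truthfully whether the hidden item lies in the subset."""
    return secret in subset

def deduce(secret, candidates, max_queries=3):
    """Narrow the pool using at most `max_queries` targeted questions."""
    pool = list(candidates)
    for _ in range(max_queries):
        if len(pool) == 1:
            break
        subset = questioner(pool)
        if answerer(secret, subset):
            pool = subset
        else:
            pool = [c for c in pool if c not in subset]
    return pool

print(deduce("kiwi", CANDIDATES))  # → ['kiwi']
```

With eight candidates, three well-chosen yes/no questions suffice to isolate the hidden item, which is the information-theoretic floor (log2 8 = 3) and illustrates why strategic question selection matters under a limited query budget.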
Keywords
Language Models; Deductive Reasoning; Information Collection; Pre-trained Models; Strategic Reasoning
Subject
Computer Science and Mathematics, Artificial Intelligence and Machine Learning
Copyright:
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.