Preprint Article · Version 1 · This version is not peer-reviewed

Strategic Deductive Reasoning in Large Language Models: A Dual-Agent Approach

Version 1 : Received: 23 September 2024 / Approved: 24 September 2024 / Online: 24 September 2024 (11:58:11 CEST)

How to cite: Li, S.; Zhou, X.; Wu, Z.; Long, Y.; Shen, Y. Strategic Deductive Reasoning in Large Language Models: A Dual-Agent Approach. Preprints 2024, 2024091875. https://doi.org/10.20944/preprints202409.1875.v1

Abstract

This study explores how a strategic dual-agent framework can enhance the deductive reasoning capabilities of Large Language Models (LLMs). In this framework, one agent acts as a questioner and the other as an answerer, and both employ advanced linguistic and logical processing to optimize the exchange of information. Operating in a structured environment that limits the number of query opportunities, our approach emphasizes LLMs that can efficiently generate and interpret questions in order to deduce hidden information. The models, which build self-defined agents on a pre-trained Llama-3-8B backbone with task-specific enhancements, demonstrate a strong ability to navigate the complexities of logical deduction. Performance evaluations based on a series of simulated interactions show the agents' improved precision and strategic acumen in narrowing down possibilities through targeted inquiries. These findings underscore the potential of LLMs in tasks requiring intricate reasoning and collaboration, marking a significant step toward more intelligent and autonomous systems.
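To make the interaction protocol concrete, the sketch below is a minimal, self-contained rendition of the query-budgeted deduction loop the abstract describes. All names here (Questioner, Answerer, play, the toy candidate set, and the greedy question-selection heuristic) are illustrative assumptions standing in for the paper's LLM-driven agents, not the authors' actual implementation.

```python
# Minimal sketch of a dual-agent deduction loop under a fixed query budget.
# The LLM questioner/answerer are replaced by simple rule-based stand-ins
# so the example runs standalone; all identifiers are hypothetical.

import random

ATTRIBUTES = {
    "cat":     {"mammal", "land", "small"},
    "dog":     {"mammal", "land"},
    "sparrow": {"bird", "air", "small"},
    "eagle":   {"bird", "air"},
    "salmon":  {"fish", "water", "small"},
    "shark":   {"fish", "water"},
}
QUESTIONS = ["mammal", "bird", "fish", "land", "air", "water", "small"]


class Answerer:
    """Holds the hidden item and answers yes/no queries truthfully."""

    def __init__(self, hidden: str):
        self.hidden = hidden

    def answer(self, attribute: str) -> bool:
        return attribute in ATTRIBUTES[self.hidden]


class Questioner:
    """Greedily picks the question that best splits the remaining candidates."""

    def __init__(self):
        self.remaining = list(ATTRIBUTES)

    def next_question(self) -> str:
        # Prefer the attribute whose yes/no split is closest to half/half,
        # a simple proxy for maximizing information gained per query.
        def imbalance(attr: str) -> int:
            yes = sum(attr in ATTRIBUTES[c] for c in self.remaining)
            return abs(yes - (len(self.remaining) - yes))
        return min(QUESTIONS, key=imbalance)

    def update(self, attribute: str, reply: bool) -> None:
        # Keep only candidates consistent with the answer just received.
        self.remaining = [
            c for c in self.remaining
            if (attribute in ATTRIBUTES[c]) == reply
        ]


def play(max_queries: int = 4) -> str:
    """Run one episode with a limited number of query opportunities."""
    hidden = random.choice(list(ATTRIBUTES))
    answerer, questioner = Answerer(hidden), Questioner()
    for _ in range(max_queries):
        if len(questioner.remaining) == 1:
            break
        q = questioner.next_question()
        questioner.update(q, answerer.answer(q))
    guess = questioner.remaining[0] if questioner.remaining else "unknown"
    print(f"hidden={hidden}, guess={guess}")
    return guess


if __name__ == "__main__":
    play()
```

The greedy split heuristic stands in for the strategic question generation the abstract attributes to the agents; in the actual framework, both roles would be played by Llama-3-8B-based LLM agents generating and interpreting natural-language questions rather than selecting from a fixed attribute list.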

Keywords

Language Models; Deductive Reasoning; Information Collection; Pre-trained Models; Strategic Reasoning

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning
