Article
Preserved in Portico. This version is not peer-reviewed.
Challenging LLMs Beyond Information Retrieval: Reasoning Degradation with Long Context Windows
Version 1: Received: 20 August 2024 / Approved: 21 August 2024 / Online: 22 August 2024 (05:48:11 CEST)
How to cite: Fraga, N. Challenging LLMs Beyond Information Retrieval: Reasoning Degradation with Long Context Windows. Preprints 2024, 2024081527. https://doi.org/10.20944/preprints202408.1527.v1
Abstract
As Large Language Models (LLMs) increasingly accommodate larger inputs, context windows spanning hundreds of thousands or even millions of tokens are touted as promising for a wide array of applications. However, a potential decay in reasoning ability with larger inputs may compromise their utility. This study introduces a new benchmark, Find the Origin, which progressively tests the efficacy of LLMs on a simple reasoning task as the size of the context window increases. The test, conducted on 14 different LLMs for comparative analysis, demonstrates that reasoning ability degrades as input size grows. Additionally, three independent tests were performed with the GPT-4 Turbo model to demonstrate its reasoning degradation in different contexts as input size increases.
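The abstract does not specify the exact prompt format of Find the Origin. As a rough illustration of how such a context-length sweep might be harnessed, the minimal Python sketch below buries a small chain-tracing question in irrelevant filler text and measures accuracy at several input sizes. The task wording, the warehouse naming, the filler-token heuristic, and the use of the OpenAI chat-completions API are assumptions for illustration only, not the paper's actual implementation.

```python
# Hypothetical sketch of a context-length benchmark harness.
# NOTE: the task construction below is an assumption for illustration;
# it is not the paper's exact "Find the Origin" prompt format.

import random
from openai import OpenAI  # requires `pip install openai` and OPENAI_API_KEY

client = OpenAI()

FILLER_SENTENCE = "The weather report for the region was unremarkable that day. "

def build_prompt(chain_length: int, filler_tokens: int) -> tuple[str, str]:
    """Build a prompt with a chain of transfers buried in filler text.

    Returns the prompt and the expected answer (the origin of the chain).
    """
    # A chain like: package moved from warehouse_0 -> warehouse_1 -> ... -> warehouse_n
    places = [f"warehouse_{i}" for i in range(chain_length + 1)]
    facts = [
        f"The package was moved from {places[i]} to {places[i + 1]}."
        for i in range(chain_length)
    ]
    random.shuffle(facts)  # the order of the facts should not matter

    # Pad the context to roughly the desired size
    # (assuming ~10 tokens per filler sentence; a tokenizer would be more precise).
    filler = FILLER_SENTENCE * max(0, filler_tokens // 10)

    question = (
        f"Based only on the statements above, from which warehouse did the "
        f"package originally depart before reaching {places[-1]}? "
        "Answer with the warehouse name only."
    )
    return filler + "\n".join(facts) + "\n\n" + question, places[0]

def run_trial(model: str, filler_tokens: int, chain_length: int = 4) -> bool:
    """Ask the model one question at a given padding size and score it."""
    prompt, expected = build_prompt(chain_length, filler_tokens)
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return expected in resp.choices[0].message.content.strip()

if __name__ == "__main__":
    # Sweep the amount of irrelevant context and track accuracy per input size.
    for filler_tokens in (0, 1_000, 10_000, 50_000):
        correct = sum(run_trial("gpt-4-turbo", filler_tokens) for _ in range(10))
        print(f"~{filler_tokens:>6} filler tokens: {correct}/10 correct")
```

A harness of this shape keeps the reasoning task fixed while only the amount of distracting context varies, which is the comparison the abstract describes: any drop in accuracy at larger paddings can then be attributed to input size rather than task difficulty.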
Keywords
Large Language Models; Context Window; Reasoning Ability
Subject
Computer Science and Mathematics, Artificial Intelligence and Machine Learning
Copyright: This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.