Preprint Article · Version 2 · Preserved in Portico · This version is not peer-reviewed

Elucidating Common Fallacies and Misconceptions Around LLMs

Version 1 : Received: 23 November 2023 / Approved: 23 November 2023 / Online: 24 November 2023 (05:47:20 CET)
Version 2 : Received: 24 November 2023 / Approved: 24 November 2023 / Online: 25 November 2023 (14:29:21 CET)

How to cite: Khan, M. A.-Z.; Innan, N.; Hammujuddy, J. Elucidating Common Fallacies and Misconceptions Around LLMs. Preprints 2023, 2023111559. https://doi.org/10.20944/preprints202311.1559.v2

Abstract

This paper discusses some of the most common misconceptions about large language models (LLMs), including the beliefs that they are sentient or conscious, that they are always accurate, and that they can replace human creativity. The paper also proposes a strategy for overcoming these misconceptions: educating the public about the capabilities and limitations of LLMs, developing guidelines for the responsible use of LLMs, and conducting further research to understand the potential impact of LLMs on society.

Keywords

Large Language Models, Artificial Intelligence

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning

Comments (1)

Comment 1
Received: 25 November 2023
Commenter: Muhammad Al-Zafar Khan
Commenter's Conflict of Interests: Author
Comment: The incorrect LaTeX/TeX compressed folder was uploaded, resulting in the wrong paper being posted under the wrong title. When compiling, please verify that the PDF generated from the LaTeX/TeX file and the uploaded PDF file are the same.

