Preprint Article, Version 1 (not peer-reviewed)

Towards Evaluating the Diagnostic Ability of LLMs

Version 1 : Received: 6 September 2024 / Approved: 9 September 2024 / Online: 9 September 2024 (12:46:20 CEST)

How to cite: Sarvari, P.; Al-fagih, Z. Towards Evaluating the Diagnostic Ability of LLMs. Preprints 2024, 2024090688. https://doi.org/10.20944/preprints202409.0688.v1

Abstract

On average, one in ten patients dies as a result of a diagnostic error, and medical errors are the third leading cause of death in the world. While LLMs have been proposed to help doctors with diagnosis, no research results have been published comparing the diagnostic ability of many popular LLMs on an openly accessible real-patient cohort. In this study, we compare LLMs from Google, OpenAI, Meta, Mistral, Cohere and Anthropic using our previously published evaluation methodology and explore improving their accuracy with retrieval-augmented generation (RAG).
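To make the evaluation setup concrete, the sketch below illustrates one plausible way to compare the diagnostic accuracy of several LLMs with and without a simple RAG step. It is not the authors' code: the function names (query_model, diagnose, retrieve_context, top_k_accuracy), the TF-IDF retrieval, and the top-k scoring rule are assumptions for illustration only; query_model is a placeholder for whichever vendor SDK is used.

    # Illustrative sketch only (not the paper's implementation): compare LLM
    # diagnostic answers on a patient cohort, with an optional RAG variant.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def retrieve_context(case_text, reference_docs, k=3):
        """Return the k reference passages most similar to the case (TF-IDF retrieval)."""
        vec = TfidfVectorizer().fit(reference_docs + [case_text])
        doc_matrix = vec.transform(reference_docs)
        case_vec = vec.transform([case_text])
        scores = cosine_similarity(case_vec, doc_matrix).ravel()
        top = scores.argsort()[::-1][:k]
        return [reference_docs[i] for i in top]

    def query_model(model_name, prompt):
        """Hypothetical placeholder: send `prompt` to the named LLM and return its answer."""
        raise NotImplementedError("wire up the vendor SDK of your choice here")

    def diagnose(model_name, case_text, reference_docs=None):
        """Ask a model for a ranked differential, optionally prepending retrieved context (RAG)."""
        prompt = (
            f"Patient case:\n{case_text}\n\n"
            "List the most likely diagnoses, most likely first."
        )
        if reference_docs:  # RAG variant: prepend retrieved reference passages
            context = "\n".join(retrieve_context(case_text, reference_docs))
            prompt = f"Reference material:\n{context}\n\n{prompt}"
        return query_model(model_name, prompt)

    def top_k_accuracy(predictions, gold_diagnoses, k=5):
        """Fraction of cases whose reference diagnosis appears in the model's top-k list."""
        hits = sum(
            any(gold.lower() in p.lower() for p in preds[:k])
            for preds, gold in zip(predictions, gold_diagnoses)
        )
        return hits / len(gold_diagnoses)

The same diagnose call would be run per model (e.g. one identifier per vendor) over the cohort, and top_k_accuracy compared between the plain and RAG-augmented prompts.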

Keywords

Generative AI; LLM; GPT-4; RAG; clinical medicine; diagnosis

Subject

Public Health and Healthcare, Primary Health Care
