Version 1
: Received: 9 March 2023 / Approved: 10 March 2023 / Online: 10 March 2023 (09:33:21 CET)
How to cite:
Kaneda, Y.; Tanimoto, T.; Ozaki, A.; Sato, T.; Takahashi, K. Can ChatGPT Pass the 2023 Japanese National Medical Licensing Examination?. Preprints 2023, 2023030191. https://doi.org/10.20944/preprints202303.0191.v1
APA Style
Kaneda, Y., Tanimoto, T., Ozaki, A., Sato, T., & Takahashi, K. (2023). Can ChatGPT Pass the 2023 Japanese National Medical Licensing Examination?. Preprints. https://doi.org/10.20944/preprints202303.0191.v1
Chicago/Turabian Style
Kaneda, Y., T. Tanimoto, A. Ozaki, T. Sato, and K. Takahashi. 2023. "Can ChatGPT Pass the 2023 Japanese National Medical Licensing Examination?" Preprints. https://doi.org/10.20944/preprints202303.0191.v1
Abstract
ChatGPT is gaining widespread acceptance for its ability to generate natural-language responses to a wide range of inputs and is expected to become a supplementary tool for diagnosis and treatment planning in clinical settings. We evaluated ChatGPT's clinical-inference ability and accuracy on the 117th Japanese National Medical Licensing Examination, held in February 2023. The examination questions were entered manually into ChatGPT's input window, and the accuracy of its responses was judged against answers provided by a preparatory school. ChatGPT provided answers for 389 of the 400 questions, with an overall correct-answer rate of 55.0%. The correct-answer rates for five-option questions requiring one, two, and three selections were 57.8%, 42.9%, and 41.2%, respectively. The highest correct-answer rate was on the compulsory examination (67.0%), followed by the specific-knowledge examination (54.1%) and the cross-category examination (47.9%). The rate was 56.2% for non-image questions and 51.5% for image questions. These results suggest that ChatGPT has the potential to support healthcare professionals' clinical decision-making in Japanese clinical settings, but its answers should be interpreted and used with caution, as its performance still leaves room for improvement.
Keywords
ChatGPT; Medical Licensing Examination; Clinical Settings; Japan
Subject
Medicine and Pharmacology, Other
Copyright:
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.