Version 1: Received: 13 August 2024 / Approved: 13 August 2024 / Online: 14 August 2024 (09:43:31 CEST)
How to cite:
Attai, K.; Ekpenyong, M.; Amannah, C.; Asuquo, D.; Ajuga, P.; Obot, O.; Johnson, E.; John, A.; Maduka, O.; Akwaowo, C.; Uzoka, F.-M. Enhancing the Interpretability of Malaria and Typhoid Diagnosis with Explainable AI and Large Language Models. Preprints 2024, 2024080966. https://doi.org/10.20944/preprints202408.0966.v1
APA Style
Attai, K., Ekpenyong, M., Amannah, C., Asuquo, D., Ajuga, P., Obot, O., Johnson, E., John, A., Maduka, O., Akwaowo, C., & Uzoka, F. M. (2024). Enhancing the Interpretability of Malaria and Typhoid Diagnosis with Explainable AI and Large Language Models. Preprints. https://doi.org/10.20944/preprints202408.0966.v1
Chicago/Turabian Style
Attai, K., Christie Akwaowo, and Faith-Michael Uzoka. 2024. "Enhancing the Interpretability of Malaria and Typhoid Diagnosis with Explainable AI and Large Language Models." Preprints. https://doi.org/10.20944/preprints202408.0966.v1
Abstract
Malaria and typhoid fever are prevalent diseases in tropical regions, exacerbated by unclear treatment protocols, drug resistance, and environmental factors. Prompt and accurate diagnosis is crucial to improving care and reducing mortality rates. Traditional diagnostic methods cannot effectively capture the complexities of these diseases because their symptoms are easily confused. Although machine learning (ML) models offer accurate predictions, they operate as "black boxes" with non-interpretable decision-making processes, making it challenging for healthcare providers to understand how conclusions are reached. This study employs explainable AI (XAI) techniques, such as Local Interpretable Model-agnostic Explanations (LIME), together with Large Language Models (LLMs) like ChatGPT, Gemini, and Perplexity, to clarify diagnostic results for healthcare workers, building trust and transparency in medical diagnostics. The models were implemented in Google Colab and Visual Studio Code because of their rich libraries and extensions. Results showed that the Random Forest model outperformed Extreme Gradient Boosting and Support Vector Machines, and the important features for predicting malaria and typhoid were identified with LIME plots. Among the LLMs, ChatGPT 3.5 demonstrated a comparative advantage over Gemini and Perplexity, highlighting the value of a hybrid strategy that combines ML, XAI, and LLMs to improve diagnostic performance and reliability in healthcare applications. The study recommends the integrated ML, LIME, and LLM pipeline because it streamlines the overall development and maintenance workflow, uses resources more effectively, and improves explainability by allowing the LLM to consider the complete context that LIME provides.
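To illustrate the kind of pipeline the abstract describes, the following is a minimal sketch of a LIME-style local surrogate explaining a Random Forest prediction. It does not use the authors' data or code: the symptom features, the synthetic labels, and the hand-rolled surrogate (perturb around the patient, weight perturbations by proximity, fit a weighted linear model) are all illustrative assumptions standing in for the `lime` package and the study's real dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
# Hypothetical binary symptom features (not the study's actual feature set).
features = ["fever", "headache", "abdominal_pain", "chills"]
X = rng.integers(0, 2, size=(500, 4)).astype(float)
# Synthetic label loosely tying "fever" and "chills" to the positive class.
y = ((X[:, 0] + X[:, 3] + rng.normal(0, 0.3, 500)) > 1).astype(int)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def lime_style_explanation(model, x, n_samples=1000, width=0.75):
    """Fit a locally weighted linear surrogate around x (LIME-style)."""
    Z = x + rng.normal(0, 0.5, size=(n_samples, x.size))  # perturb the patient
    pz = model.predict_proba(Z)[:, 1]                     # black-box outputs
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / width ** 2)                    # proximity kernel
    surrogate = Ridge(alpha=1.0).fit(Z, pz, sample_weight=w)
    return surrogate.coef_                                # local feature effects

x = np.array([1.0, 0.0, 0.0, 1.0])  # a patient presenting fever and chills
coefs = lime_style_explanation(rf, x)
for name, c in sorted(zip(features, coefs), key=lambda t: -abs(t[1])):
    print(f"{name:15s} {c:+.3f}")
```

The printed per-feature coefficients are the kind of local explanation that, in the hybrid strategy the study recommends, would then be passed as context to an LLM to be narrated for a healthcare worker.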
Copyright:
This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.