Preprint Review, Version 1. This version is not peer-reviewed.

Enhancing AI Transparency for Human Understanding: A Comprehensive Review

Version 1 : Received: 2 October 2024 / Approved: 3 October 2024 / Online: 3 October 2024 (12:14:11 CEST)

How to cite: Ara, M. Enhancing AI Transparency for Human Understanding: A Comprehensive Review. Preprints 2024, 2024100262. https://doi.org/10.20944/preprints202410.0262.v1

Abstract

Transparency between AI models and the humans who use them is one of the most hotly debated topics in technology. As artificial intelligence (AI) continues to permeate various sectors, the demand for transparency in AI decision-making has become increasingly critical. This paper presents a comprehensive review of Explainable Artificial Intelligence (XAI), examining 57 key studies that focus on various explanation approaches and their impact on end-user trust and accountability. Recognizing the obstacles posed by the black-box nature of AI models, this work emphasizes the need for sound explanation methods that enable people and AI models to work together effectively. The findings highlight the importance of XAI in enhancing trust, particularly in complex environments such as healthcare and finance, and propose directions for future research to further develop reliable and interpretable AI solutions.

Keywords

artificial intelligence; machine learning; black box; explainable artificial intelligence

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning
