Artificial Intelligence (AI) has seen increased application and widespread adoption over the past decade despite, at times, offering only a limited understanding of its inner workings. AI models are largely built on learned weights computed through large matrix multiplications, and such computationally intensive processes are typically difficult to interpret. Explainable Artificial Intelligence (XAI) aims to address this black-box problem through a variety of techniques and tools. In this study, XAI techniques are applied to chronic wound classification. The proposed model classifies chronic wounds using transfer learning and fully connected layers, and the classified chronic wound images then serve as input to the XAI model for explanation. The resulting interpretable outputs can offer clinicians new perspectives during the diagnostic phase. The proposed method successfully provides both chronic wound classification and its associated explanation, and this hybrid approach is shown to aid in interpreting and understanding AI decision-making processes.
Keywords:
Subject: Engineering - Control and Systems Engineering
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits the free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.
Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.