Preprint Article, Version 1 (this version is not peer-reviewed)

ExShall-CNN: An Explainable Shallow Convolutional Neural Network for Medical Image Segmentation

Version 1: Received: 21 October 2024 / Approved: 21 October 2024 / Online: 22 October 2024 (11:57:43 CEST)

How to cite: Khalkhali, V.; Azim, S. M.; Dehzangi, I. ExShall-CNN: An Explainable Shallow Convolutional Neural Network for Medical Image Segmentation. Preprints 2024, 2024101667. https://doi.org/10.20944/preprints202410.1667.v1

Abstract

Explainability is essential for AI models, especially in clinical settings where understanding the model's decisions is crucial. Despite their impressive performance, black-box AI models are unsuitable for clinical use if their operations cannot be explained to clinicians. While deep neural networks (DNNs) represent the forefront of model performance, their explanations are often not easily interpretable by humans. In contrast, traditional machine learning models built on hand-crafted features, each extracted to represent a different aspect of the input data, are generally more understandable. However, they often lack the effectiveness of advanced models because human-designed features are inherently limited. To address this, we propose ExShall-CNN, a novel explainable shallow convolutional neural network for medical image processing. This model enhances hand-crafted features to maintain human interpretability while achieving performance comparable to advanced deep convolutional networks, such as U-Net, for medical image segmentation. ExShall-CNN and its source code are publicly available at: https://github.com/MLBC-lab/ExShall-CNN
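The abstract outlines the core idea: a shallow convolutional network whose learned filters stay as inspectable as a hand-crafted filter bank, applied to per-pixel segmentation. As a rough, hypothetical illustration only (the class name ShallowSegNet and every hyperparameter below are assumptions, not taken from the paper; the authors' actual implementation is in the linked repository), a minimal PyTorch sketch of such a model might look like this:

# Hypothetical sketch of a shallow, interpretable segmentation CNN.
# Not the ExShall-CNN implementation; see the authors' repository for that.
import torch
import torch.nn as nn

class ShallowSegNet(nn.Module):
    """One learned filter bank followed by a 1x1 per-pixel classifier.

    With only a single hidden layer, each learned kernel can be
    visualized directly, much like a hand-crafted filter.
    """
    def __init__(self, in_channels=3, num_filters=32, num_classes=2,
                 kernel_size=7):
        super().__init__()
        # Learned filter bank; its weights can be plotted as small images.
        self.filters = nn.Conv2d(in_channels, num_filters, kernel_size,
                                 padding=kernel_size // 2)
        self.act = nn.ReLU()
        # 1x1 convolution maps filter responses to per-pixel class scores.
        self.classifier = nn.Conv2d(num_filters, num_classes, kernel_size=1)

    def forward(self, x):
        return self.classifier(self.act(self.filters(x)))

model = ShallowSegNet()
image = torch.randn(1, 3, 128, 128)   # dummy RGB input
logits = model(image)                 # shape (1, 2, 128, 128): per-pixel scores
print(logits.shape)
print(model.filters.weight.shape)     # (32, 3, 7, 7): directly inspectable kernels

Because the network has a single hidden layer, model.filters.weight can be plotted directly as a bank of image filters, which is the kind of interpretability the abstract contrasts with deep models such as U-Net, whose intermediate representations admit no such direct reading.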

Keywords

Explainability; image segmentation; shallow convolutional neural network

Subject

Computer Science and Mathematics, Artificial Intelligence and Machine Learning
