Version 1
: Received: 3 July 2024 / Approved: 4 July 2024 / Online: 6 July 2024 (03:16:08 CEST)
How to cite:
Loddo, A.; Usai, M.; Di Ruberto, C. Gastric Cancer Image Classification: a Comparative Analysis and Feature Fusion Strategies. Preprints 2024, 2024070478. https://doi.org/10.20944/preprints202407.0478.v1
APA Style
Loddo, A., Usai, M., & Di Ruberto, C. (2024). Gastric Cancer Image Classification: a Comparative Analysis and Feature Fusion Strategies. Preprints. https://doi.org/10.20944/preprints202407.0478.v1
Chicago/Turabian Style
Loddo, A., Marco Usai, and Cecilia Di Ruberto. 2024. "Gastric Cancer Image Classification: a Comparative Analysis and Feature Fusion Strategies." Preprints. https://doi.org/10.20944/preprints202407.0478.v1
Abstract
Gastric cancer is the fifth most common and fourth deadliest cancer worldwide, with a bleak 5-year survival rate of about 20%. Despite significant research into its pathobiology, prognostic predictability remains insufficient due to pathologists’ heavy workloads and the potential for diagnostic errors. Consequently, there is a pressing need for automated and precise histopathological diagnostic tools. This study leverages Machine Learning and Deep Learning techniques to classify histopathological images into healthy and cancerous categories. By combining handcrafted and deep features with shallow learning classifiers on the GasHisSDB dataset, we conduct a comparative analysis to identify the most effective combinations of features and classifiers for differentiating normal from abnormal histopathological images without employing fine-tuning strategies. Our methodology achieves an accuracy of 95% with the SVM classifier, underscoring the effectiveness of feature fusion strategies. Additionally, cross-magnification experiments produced promising results, with accuracies close to 80% and 90% when the models were evaluated on unseen test images at different resolutions.
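The feature-fusion strategy mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the handcrafted descriptors and deep embeddings are synthetic stand-ins (their dimensions, the RBF kernel choice, and all parameter values are assumptions), and early fusion is modeled as simple concatenation before a shallow SVM classifier.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 200
labels = rng.integers(0, 2, size=n)  # 0 = normal, 1 = abnormal patch

# Stand-ins for per-image feature vectors: e.g. a 32-D handcrafted
# descriptor (texture/color statistics) and a 128-D deep embedding
# taken from a pretrained CNN without fine-tuning.
handcrafted = rng.normal(size=(n, 32)) + labels[:, None] * 0.8
deep = rng.normal(size=(n, 128)) + labels[:, None] * 0.5

# Early fusion: concatenate the two feature sets per image.
fused = np.hstack([handcrafted, deep])

X_tr, X_te, y_tr, y_te = train_test_split(
    fused, labels, test_size=0.3, random_state=0, stratify=labels)

# Shallow classifier on the fused representation.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"fused-feature SVM accuracy: {acc:.2f}")
```

On the GasHisSDB images themselves, the fused vectors would be computed per patch rather than sampled; the pipeline structure (concatenate, standardize, SVM) is the part this sketch is meant to convey.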
Computer Science and Mathematics, Computer Vision and Graphics
Copyright:
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.