VLR Net-An Ensemble Model to Enhance the Accuracy of Classification of Colposcopy Images
How to cite: Chatterjee, P.; Siddiqui, S.; Abdul Kareem, R. S. VLR Net-An Ensemble Model to Enhance the Accuracy of Classification of Colposcopy Images. Preprints 2024, 2024102511. https://doi.org/10.20944/preprints202410.2511.v1
Abstract
Early detection of cervical cancer is critical to reducing mortality from the disease. Various computer-aided diagnosis (CAD) approaches have been proposed to detect cervical cancer at an early stage, each with its own constraints. This study proposes a novel ensemble deep learning model, VLR (variable learning rate), aimed at enhancing the accuracy of cervical cancer classification. The architecture integrates VGG16, Logistic Regression, and ResNet50, combining their strengths in an unconventional ensemble design. VLR learns in two ways: first, through dynamic weights over the three base models, each trained separately; second, through attention mechanisms applied in the dense layers of the base models. Hyperparameter tuning is applied to further reduce loss, fine-tune the model's performance, and maximize classification accuracy. We performed K-fold cross-validation on VLR to evaluate any improvements in metric values resulting from hyperparameter fine-tuning. We also validated the model on images captured in three different solutions and on a secondary dataset. The proposed VLR model outperformed existing methods in cervical cancer classification, achieving a training accuracy of 99.95% and a testing accuracy of 99.89%.
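The core mechanism described above, three separately trained base classifiers combined through learnable (dynamic) weights, with attention gating in the dense heads, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the frozen backbones, the feature-wise sigmoid attention gate, the single-softmax-layer stand-in for the Logistic Regression branch, and all names (cnn_branch, DynamicWeights, NUM_CLASSES) are hypothetical.

```python
# Minimal sketch (not the paper's implementation) of a dynamically weighted
# ensemble over VGG16, ResNet50, and a logistic-regression branch.
# Assumptions: base CNNs are frozen after separate training; the "Logistic
# Regression" component is one softmax layer over flattened pixels; attention
# is a feature-wise sigmoid gate on each dense head.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16, ResNet50

NUM_CLASSES = 2                      # hypothetical class count
INPUT_SHAPE = (224, 224, 3)

class DynamicWeights(layers.Layer):
    """Learns one scalar weight per base model; softmax keeps them normalized."""
    def build(self, input_shape):
        self.w = self.add_weight(name="alpha", shape=(len(input_shape),),
                                 initializer="ones", trainable=True)
    def call(self, preds):
        alpha = tf.nn.softmax(self.w)
        # Weighted sum of the base models' class-probability outputs.
        return tf.add_n([alpha[i] * p for i, p in enumerate(preds)])

def cnn_branch(backbone, name):
    """Frozen CNN backbone with an attention-gated dense head."""
    backbone.trainable = False
    x = layers.GlobalAveragePooling2D()(backbone.output)
    gate = layers.Dense(x.shape[-1], activation="sigmoid",
                        name=f"{name}_attn")(x)
    x = layers.Multiply()([x, gate])           # feature-wise attention
    return layers.Dense(NUM_CLASSES, activation="softmax",
                        name=f"{name}_out")(x)

inputs = layers.Input(shape=INPUT_SHAPE)
vgg_out = cnn_branch(VGG16(include_top=False, weights="imagenet",
                           input_tensor=inputs), "vgg16")
res_out = cnn_branch(ResNet50(include_top=False, weights="imagenet",
                              input_tensor=inputs), "resnet50")
# Logistic regression modeled as a single softmax layer over flattened pixels.
lr_out = layers.Dense(NUM_CLASSES, activation="softmax",
                      name="logreg_out")(layers.Flatten()(inputs))

outputs = DynamicWeights(name="dynamic_weights")([vgg_out, res_out, lr_out])
model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Because the ensemble weights are ordinary trainable variables, they are updated by backpropagation alongside the dense heads, so the combination adapts to whichever base model is most reliable on the training data; the exact weighting scheme used by VLR may differ.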
Keywords
cervical cancer; colposcopy; deep learning; image; clustering
Copyright: This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.