I. Introduction
As the most common cancer worldwide, skin cancer has increased dramatically in incidence over the last few decades. Dermatologists divide skin cancer into two major types: non-melanoma skin cancers (NMSC) and melanoma, the latter recognized for its aggressive behavior and risk of metastasis. While occurring less frequently than NMSC, melanoma is responsible for the majority of skin cancer mortality, underscoring the urgent need for accurate diagnosis [1].
The most common type of skin cancer is NMSC, which mainly includes basal cell carcinoma (BCC) and squamous cell carcinoma (SCC). BCC rarely metastasizes but causes significant morbidity when treatment is delayed or absent. SCC, although more prone to metastasis than BCC, usually responds well to treatment if detected and treated early. The rising incidence of NMSC has been attributed to higher ambient UV exposure related to ozone depletion, as well as to increased public awareness and screening activity [2].
Melanoma, although a relatively rare form of skin cancer, exhibits highly metastatic behavior. Early detection is therefore essential to effective melanoma management and high survival rates. The five-year survival rate for patients with early-stage melanoma exceeds 90%, but this figure decreases markedly with higher stage at diagnosis [3].
Benign skin lesions, including actinic keratosis (AK), benign keratosis-like lesions, and melanocytic nevi, are widespread and typically non-malignant. Despite their benign behavior, distinguishing them clinically from their malignant counterparts is sometimes challenging even for an experienced dermatologist, and this diagnostic difficulty may result in either unnecessary invasive procedures or delayed interventions.
The differential diagnosis of skin lesions is a complex task that draws on cumulative knowledge of their morphology, distribution, and evolution over time. Dermoscopy, a technique that permits improved visualization of both malignant and benign skin lesions, has substantially increased diagnostic accuracy. However, dermoscopic analysis can be subjective and experience-dependent, which can lead to variability in diagnoses and outcomes [4].
Early and accurate diagnosis of skin lesions is of utmost importance for treatment outcomes, especially in skin cancer. According to the World Health Organization, 1.5 million cases of skin cancer were reported in 2022, of which 330,000 were melanoma cases, leading to almost 60,000 deaths [5]. Therefore, the precise and timely diagnosis of skin lesions is essential, as it directly impacts patient management and prognosis. Although established diagnostic methods are effective, they can be subjective when clinical evaluation is the main diagnostic tool. Such heterogeneity highlights the need to move toward diagnostic tools that are less subjective and more repeatable and scalable, which can also enhance early detection and improve clinical outcomes.
Today, the diagnostic landscape in dermatology is being reshaped by emerging technologies such as artificial intelligence (AI) and deep learning (DL). DL, a subset of machine learning (ML), has emerged as a powerful tool that is expected to be a game changer in multiple fields of medicine, including dermatology. Modern computing and improved ML models can now solve difficult diagnostic tasks with high accuracy. AI models, particularly when trained on large-scale databases, can classify skin lesions at a level similar or superior to dermatologists. Moreover, these systems can help standardize diagnostic interpretation, reduce inter-observer variability, and lower the frequency of diagnostic errors [6].
This consistent, repeatable interpretation may revolutionize skin lesion diagnostics as AI becomes integrated into clinical practice. Such integration could also extend dermatologic expertise to underserved regions, democratizing access to expert-level diagnosis and care. To accommodate the vast diversity of skin types and conditions encountered in clinical practice, durable deployment of AI tools in dermatology may require extensive validation and continuous training on multiple datasets [7].
Up to the present, DL models have outperformed standard diagnostic approaches in several dermatology studies. For instance, Esteva et al. (2017) demonstrated that convolutional neural networks (CNNs) can classify skin cancer with a level of competence comparable to dermatologists, using vast datasets of dermatoscopic images [8]. Similarly, Codella et al. (2017) reported that ensemble DL could remarkably improve melanoma detection in dermoscopy images [9]. In a more recent paper, Brinker et al. (2019) found that deep neural networks trained on melanoma images outperformed dermatologists, indicating the potential of these models for clinical decision-making applications [10]. Liu et al. (2020) likewise reported an AI system that diagnosed skin diseases with a differential diagnostic accuracy comparable to dermatologists [11]. These advances are not confined to research findings; they are being translated into practical applications that could save lives in real clinical settings, reinforcing the belief within the medical field that AI, and DL algorithms in particular, can substantially improve both the accuracy and the consistency of diagnostic tests.
Nonetheless, as these advanced AI tools are considered for clinical use, significant deployment challenges remain, driven by the need for extensive training data, generalization across diverse patient populations, and validation processes that demonstrate accuracy and reliability.
In this study, EfficientNetB3, an architecture that achieves state-of-the-art accuracy in image classification, was used. Developed by Tan and Le (2019), it is well suited to medical image analysis because it performs strongly in benchmarks while remaining small and computationally efficient [12]. The work presented in this article focuses on a custom model based on EfficientNetB3 that uses transfer learning to analyze skin lesions on an extended dataset. This study aims to bridge the gap between rapid technological advancement and clinical utility in a scalable and efficient fashion, applying advanced AI methods to improve diagnostic accuracy and speed in dermatology. The model was trained and validated across six classes of skin disease, encompassing benign and malignant conditions: melanoma, basal cell carcinoma, squamous cell carcinoma, actinic keratosis, benign keratosis-like lesions, and melanocytic nevi.
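For illustration, the following is a minimal sketch of how such an EfficientNetB3 transfer-learning classifier can be assembled in Keras; the pooling layer, dropout placement, and dense head are assumptions made for this example rather than the exact architecture described in Section II.

```python
# Minimal sketch of an EfficientNetB3 transfer-learning classifier in Keras.
# The pooling layer, dropout placement, and dense head are illustrative
# assumptions, not the exact configuration of the model described here.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import EfficientNetB3

NUM_CLASSES = 6        # melanoma, BCC, SCC, AK, benign keratosis-like lesions, nevi
IMG_SIZE = (300, 300)  # EfficientNetB3's native input resolution

# ImageNet-pretrained backbone without its original classification head.
backbone = EfficientNetB3(include_top=False, weights="imagenet",
                          input_shape=IMG_SIZE + (3,))
backbone.trainable = False  # frozen during the initial transfer-learning phase

model = models.Sequential([
    backbone,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.45),                        # dropout rate listed in Table 2
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
```

After this frozen phase, some or all backbone layers are typically unfrozen and fine-tuned at a lower learning rate on the dermatological images.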
By integrating the latest AI methods, progress can be achieved in improving diagnosis and reducing clinicians’ workload, allowing more time for patient care and less for routine diagnosis. Combining the technical strengths of AI with the nuanced understanding of skin pathology of expert dermatologists creates a complementary partnership that improves the accuracy and applicability of the diagnostic process.
This work introduces several novel contributions to dermatological research and clinical practice. The proposed model:
- ranks among the top-performing models in the European region, indicating its potential to effectively address regional medical challenges;
- achieves competitive results with a custom architecture based on EfficientNetB3, demonstrating efficient utilization of limited training data;
- enhances the practical feasibility and cost-effectiveness of deployment thanks to its modest computational requirements;
- shows robust performance with fewer images than models that achieve similar or better results with larger datasets.
The remainder of the paper is organized as follows: Section II details the training dataset, data preprocessing steps, the model architecture and hyperparameters, and techniques to combat overfitting, along with the training and validation processes. Section III presents the model’s performance. Section IV interprets the results, highlighting their implications and limitations. Finally, the conclusion summarizes the findings and suggests directions for future research.
IV. Discussion
The integration of AI into dermatology offers a substantial transformation of the field, and models such as EfficientNet have the potential to revolutionize the diagnosis of skin lesions by providing a tool that improves both accuracy and efficiency.
The results of our study show that the custom model based on EfficientNetB3 classifies skin lesions with high accuracy. Performance was highest in pathology categories represented by a large number of images and therefore ample learnable data. For the categories with sufficient examples, an average accuracy of 95.4% was achieved, demonstrating strong recognition and classification capacity. However, the introduction of pathologies with fewer images caused performance to drop to 88.8%. This reduced performance underlines the need for a balanced and representative dataset in each pathology category. These results stress the necessity of enhanced data acquisition, allowing the model to generalize to a wider variety of clinical scenarios and thereby provide reliable diagnostic outcomes.
To provide a comprehensive evaluation of our proposed model, its performance was compared with the results of several studies that utilize the EfficientNet architecture for skin lesion classification.
Table 5 briefly summarizes comparative studies that used EfficientNet models, with a more detailed description of the studies’ work presented below.
Karthik et al. (2022) introduced Eff2Net, a model designed to classify skin diseases with improved accuracy and reduced computational complexity. By integrating the Efficient Channel Attention (ECA) block into the EfficientNetV2 architecture in place of the traditional Squeeze-and-Excitation (SE) block, the authors significantly reduced the number of trainable parameters. Eff2Net was trained on a diverse dataset comprising 4930 images, expanded through data augmentation to 17,329 images across four skin disease categories: acne, AK, melanoma, and psoriasis. The model achieved a testing accuracy of 84.70%, outperforming contemporary models such as InceptionV3, ResNet-50, DenseNet-201, and EfficientNetV2 in overall accuracy with fewer parameters. Despite its strengths in reducing computational complexity and achieving high accuracy, Eff2Net has limitations, particularly in its accuracy for actinic keratosis [33].
Ali et al. (2022) explored the use of EfficientNet models (B0–B7) for classifying seven classes of skin lesions from the HAM10000 dataset. The EfficientNet variants were trained with transfer learning from pre-trained ImageNet weights and fine-tuned on HAM10000. Performance was evaluated using precision, recall, accuracy, F1-score, specificity, ROC AUC, and confusion matrices. The findings revealed that models of intermediate complexity, such as EfficientNet-B4 and B5, performed best, with EfficientNet-B4 achieving an F1-score of 87% and a top-1 accuracy of 87.91% [34]. Notably, the accuracy of the EfficientNetB3 model in that study was reported as 83.9% [34], which is lower than the accuracy achieved by our proposed model.
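Most of these evaluation metrics are available off the shelf. As a brief illustration (using dummy labels and probabilities rather than data from any of the cited studies), the snippet below shows how multi-class ROC AUC and per-class specificity, the two metrics not covered by a standard classification report, can be computed.

```python
# Illustration of multi-class (one-vs-rest) ROC AUC and per-class specificity
# derived from the confusion matrix. Labels and probabilities are dummy data.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([0, 2, 1, 2, 0, 1, 1, 0, 2, 2])   # placeholder ground truth
y_prob = np.random.rand(10, 3)
y_prob /= y_prob.sum(axis=1, keepdims=True)         # rows sum to 1 (probabilities)
y_pred = y_prob.argmax(axis=1)

auc = roc_auc_score(y_true, y_prob, multi_class="ovr")   # macro one-vs-rest AUC

cm = confusion_matrix(y_true, y_pred)
tp = np.diag(cm)
fp = cm.sum(axis=0) - tp                                 # false positives per class
tn = cm.sum() - (cm.sum(axis=0) + cm.sum(axis=1) - tp)   # true negatives per class
specificity = tn / (tn + fp)                             # TN / (TN + FP)
print(f"ROC AUC: {auc:.3f}", "specificity per class:", specificity)
```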
In their study, Rafay et al. (2023) classified a wide range of skin diseases (31 categories) using a novel dataset built by blending two existing datasets, Atlas Dermatology and ISIC, resulting in 4910 images. The study applied transfer learning with three types of convolutional neural network, EfficientNet, ResNet, and VGG, and found that EfficientNet achieved the highest testing accuracy. The EfficientNet-B2 model was identified as the top performer, mainly owing to its compound scaling and depth-wise separable convolutions, which enable efficient training with fewer parameters [35].
The study performed by Venugopal et al. (2023) focused on the binary classification of skin lesions (malignant vs. benign) using EfficientNet models (EfficientNetV2-M and EfficientNet-B4) and a database created by combining the ISIC 2018, ISIC 2019, and ISIC 2020 datasets, totaling 58,032 images. The modified EfficientNetV2-M model achieved high performance, with an accuracy of 95.49% on the ISIC 2019 dataset, while the EfficientNet-B4 model achieved 93.17% [36].
In their study, Harahap et al. (2024) investigated the use of EfficientNet models for classifying BCC, SCC, and melanoma using the ISIC 2019 dataset. All eight EfficientNet variants (B0 to B7) were implemented, with EfficientNet-B4 achieving the highest overall accuracy of 79.69%. The EfficientNet-B3 model achieved a validation accuracy of 74.87% and a testing accuracy of 77.60%, with a precision of 85.98%, recall of 73.44%, and F1-score of 79.21% [37]. Notably, these results are lower than those reported in our study, in which we classified six diseases, including the three from that study, and achieved higher accuracy.
To summarize, these recent studies showcase models such as EfficientNet-B0, EfficientNet-B2, EfficientNetV2-M, EfficientNet-B4, and EfficientNetB3. The datasets used are varied, including DermNet NZ, Derm7Pt, DermatoWeb, Fitzpatrick17k, HAM10000, ISIC 2019, and proprietary collections, covering both public and private sources. The scope of these studies is equally diverse: some address a broad range of skin diseases, such as the 31 classes in EfficientSkinDis [35], while others concentrate on 4 to 7 categories covering both benign and malignant lesions. Several studies also compared EfficientNet models against other CNNs, which they surpassed in performance. The reported accuracies ranged from 84.7% to 95.49% (the higher values only for binary classification), highlighting the variation in model performance depending on the dataset and the classification task (binary or multi-class). Notably, the proposed model achieves 95.4% accuracy when classifying 4 skin diseases and 88.8% for 6 skin diseases, demonstrating competitive performance within this comparative framework.
Moving forward, the proposed model was also compared with other state-of-the-art classification models, all of them using images from the ISIC or HAM10000 datasets (Table 6).
Several published studies focus on binary classification (benign vs. malignant) using the Kaggle/ISIC dataset [38,39,40]. More specifically, Bazgir and colleagues (2024) classified skin cancer using an optimized InceptionNet architecture, distinguishing between melanoma and non-melanoma skin lesions in a dataset of 2637 dermoscopic images split into 1197 malignant and 1440 benign lesions. The InceptionNet model was evaluated using precision, sensitivity, specificity, F1-score, and area under the ROC curve, achieving accuracies of 84.39% and 85.94% with the Adam and Nadam optimizers, respectively [38]. Using the same dataset, Rahman et al. (2024) classified skin cancer with a NASNet architecture optimized for detecting malignant versus benign lesions; evaluated with the same metrics, the optimized NASNet model achieved an accuracy of 86.73% [39]. Anand et al. (2022) focused on improving the VGG16 model through transfer learning for classifying skin cancer into benign and malignant categories. VGG16 was enhanced by adding a flatten layer, two dense layers with the LeakyReLU activation function, and a final dense layer with sigmoid activation. The improved model achieved an overall accuracy of 89.09% with a batch size of 128 using the Adam optimizer over ten epochs [40].
Singh et al. (2022) introduced a two-stage DL pipeline named SkiNet for the diagnosis of skin lesions. The framework integrates lesion segmentation followed by classification, incorporating uncertainty estimation and explainability to enhance model reliability and clinician trust. Using a Bayesian MultiResUNet for segmentation and a Bayesian DenseNet-169 for classification, the SkiNet pipeline achieves a diagnostic accuracy of 73.65%, surpassing the standalone DenseNet-169’s accuracy of 70.01% [41]. Using the same image dataset and the same seven-category classification task, Ahmed et al. (2024) presented SCCNet, a new deep learning model based on the Xception architecture with additional layers to enhance performance: convolutional layers for feature extraction, batch normalization layers for improved convergence, activation layers to introduce non-linearity, and dense layers for better classification. The model achieved an accuracy of 95.20%, with precision, recall, and F1-score all above 95%, outperforming several state-of-the-art models such as ResNet50, InceptionV3, and Xception [42].
Al-Rasheed et al. [43] present a new approach to skin cancer classification using an ensemble of transfer learning models, specifically VGG16, ResNet50, and ResNet101. The study leverages conditional generative adversarial networks to augment the dataset and address class imbalance. The proposed models were trained on both balanced and unbalanced datasets, and their performance was evaluated using accuracy, precision, recall, and F1-score. The ensemble approach achieved a superior accuracy of 93.5%, a significant improvement over the individual models, whose accuracies were around 92% [43].
Naeem et al. have published two studies using the ISIC 2019 dataset, focusing on the classification of 8 types of skin diseases [44,45]. In [44], DVFNet achieved an impressive accuracy of 98.32%, outperforming baseline CNN models such as AlexNet, VGG-16, Inception-V3, and ResNet-50. In [45], the proposed SNC_Net model outperformed baseline models such as EfficientNetB0, MobileNetV2, DenseNet-121, and ResNet-101, achieving an accuracy of 97.81%.
To sum up, the proposed model addresses a more complex, multi-class classification task while still achieving accuracies (95.4% for four classes and 88.8% for six classes) superior to the binary classification accuracies of the Inception Network, VGG16, and NASNet models. The proposed EfficientNetB3 model also outperforms Bayesian DenseNet-169, which achieved an accuracy of 73.65%. While SNC_Net and DVFNet achieve higher accuracies (97.81% and 98.32%, respectively), these models benefit from more specialized architectures and additional data preprocessing. Overall, the proposed EfficientNetB3 model demonstrates strong performance in multi-class classification, particularly given its simpler architecture and lower computational requirements, and its competitive accuracy highlights EfficientNetB3’s capability to handle diverse and challenging dermatological datasets effectively.
Limitations of Current Research
Despite the advancements presented, our study has several limitations that should be acknowledged.
First, the dataset primarily contains images from selected demographics, skin diseases, and skin phototypes, which may not represent the global diversity of the population. This could limit the model’s performance on other skin types and conditions and consequently affect its generalization.
Second, the total number of images is still relatively low, particularly for infrequent lesions such as certain types of melanoma. This may lead to overfitting, where the model performs well on training data but underperforms on new, unseen data.
To mitigate these constraints, future work should widen the dataset in both size and diversity, including a broader range of skin phototypes and less common skin conditions. This would allow the development of a model that is both accurate for common conditions and reliable in the early detection of less common (and often more dangerous) lesions. Compiling such a dataset will require collaboration among more international dermatology centers, leading to a model that is more representative of global diagnostic applications.
Furthermore, one of the biggest needs in dermatological AI research is standardization of image acquisition, for example adopting high-resolution imaging as the benchmark. The efficacy of ML models such as EfficientNet depends heavily on the quality of the input data. Higher-resolution images capture the finer details of dermatological conditions, which is important for accurate characterization and diagnosis. At present, the heterogeneity of imaging, caused by differences in the devices and settings used across collection centers, remains a major obstacle. Acquiring standardized, high-resolution images would make the training data not only more uniform but also more informative. Such standardization would also correct some of the dataset variability observed when models trained on mixed-quality image datasets perform inconsistently or show reduced diagnostic performance across clinical settings.
Future work should also focus on improving the interpretability of the model, offering more detail on the AI’s diagnostic rationale. This would be valuable in educational settings and would increase the model’s acceptability in clinical practice.
Also, incorporating multimodal data such as patient history and demographics could improve the diagnostic accuracy of the model and make it more patient-specific. By laying the groundwork for individualized dermatological assessment, this could be a first step toward aligning with the goals of precision medicine.
Figure 1.
Examples of clinical and dermoscopic images used for training.
Figure 2.
The architecture and the setup of the model.
Figure 3.
The validation accuracy and loss for the BCC, benign keratosis-like lesions, melanocytic nevi, and melanoma classes.
Figure 4.
The validation accuracy and loss for the BCC, benign keratosis-like lesions, melanocytic nevi, melanoma, SCC, and AK classes.
Figure 5.
The ROC curve for the BCC, benign keratosis-like lesions, melanocytic nevi, and melanoma classes.
Figure 6.
The ROC curve for BCC, benign keratosis-like lesions, melanocytic nevi, melanoma, SCC, and AK classes.
Figure 7.
Confusion matrix for the BCC, benign keratosis-like lesions, melanocytic nevi, and melanoma classes.
Figure 8.
Confusion matrix for BCC, benign keratosis-like lesions, melanocytic nevi, melanoma, SCC, and AK classes.
Figure 9.
Errors per class for BCC, benign keratosis-like lesions, melanocytic nevi, and melanoma classes.
Figure 10.
Errors per class for BCC, benign keratosis-like lesions, melanocytic nevi, melanoma, SCC, and AK classes.
Table 1. Distribution of images across the skin conditions.

| Classes | No. of Images | No. of Augmented Images | Total |
|---|---|---|---|
| Melanoma | 1655 | 489 | 2144 |
| BCC | 1811 | 333 | 2144 |
| Benign keratosis-like lesions | 1663 | 481 | 2144 |
| Melanocytic nevi | 1686 | 458 | 2144 |
| SCC | 606 | 1538 | 2144 |
| AK | 801 | 1343 | 2144 |
| Total | 8222 | 4642 | 12,864 |
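Table 1 shows that the under-represented classes (SCC and AK) were expanded mainly through augmentation. As an illustration only, the sketch below shows how the augmentation operations listed in Table 2 (rotate, scale, flip, zoom) could be configured with Keras' ImageDataGenerator; the specific parameter ranges are assumptions, not the settings used to build our dataset.

```python
# Sketch of the augmentation operations listed in Table 2 (rotate, scale, flip,
# zoom) using Keras' ImageDataGenerator. The parameter ranges are illustrative
# assumptions, not the settings used to produce the counts in Table 1.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rotation_range=30,       # random rotation
    zoom_range=0.2,          # random zoom (scale)
    width_shift_range=0.1,   # small translations
    height_shift_range=0.1,
    horizontal_flip=True,    # random flips
    vertical_flip=True,
)

# Example usage: stream augmented batches from a directory of class subfolders.
# train_gen = augmenter.flow_from_directory("train/", target_size=(300, 300),
#                                           batch_size=32, class_mode="categorical")
```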
Table 2. The hyperparameters of the model.

| Hyperparameter | Value |
|---|---|
| Learning rate | 0.001 |
| Batch size | 32 |
| Number of epochs | 19 |
| Optimizer | Adamax |
| Dropout rate | 0.45 |
| Activation functions | ReLU, Softmax |
| Regularization parameters | Kernel regularizer: L2 (strength 0.016); activity regularizer: L1 (strength 0.006); bias regularizer: L1 (strength 0.006) |
| Loss function | Categorical cross-entropy |
| Augmentation techniques | Rotate, scale, flip, zoom |
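To make the hyperparameters in Table 2 concrete, the following sketch wires them into a Keras model: the Adamax optimizer at a learning rate of 0.001, categorical cross-entropy loss, the listed L1/L2 regularizers, and a dropout rate of 0.45. Attaching all three regularizers to a single dense layer, and the stand-in backbone, are assumptions made for illustration.

```python
# Illustrative wiring of the Table 2 hyperparameters into a Keras model.
# The single regularized dense layer and the stand-in backbone are assumptions;
# only the hyperparameter values themselves come from Table 2.
from tensorflow.keras import layers, models, optimizers, regularizers

model = models.Sequential([
    layers.Input(shape=(300, 300, 3)),
    layers.GlobalAveragePooling2D(),   # stand-in for the EfficientNetB3 backbone
    layers.Dense(256, activation="relu",
                 kernel_regularizer=regularizers.l2(0.016),
                 activity_regularizer=regularizers.l1(0.006),
                 bias_regularizer=regularizers.l1(0.006)),
    layers.Dropout(0.45),
    layers.Dense(6, activation="softmax"),
])

model.compile(
    optimizer=optimizers.Adamax(learning_rate=0.001),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)

# Training would then run for 19 epochs (Table 2), with the batch size of 32
# set in the data pipeline:
# model.fit(train_gen, validation_data=val_gen, epochs=19)
```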
Table 3. Testing results for four categories.

| Class | Precision | Recall | F1-score | Support |
|---|---|---|---|---|
| Basal cell carcinoma | 0.94 | 0.98 | 0.96 | 225 |
| Benign keratosis-like lesions | 0.94 | 0.89 | 0.91 | 208 |
| Melanocytic nevi | 0.95 | 0.97 | 0.96 | 210 |
| Melanoma | 1.00 | 0.99 | 1.00 | 207 |
| Accuracy | | | 0.96 | 850 |
| Macro avg | 0.96 | 0.96 | 0.96 | 850 |
| Weighted avg | 0.96 | 0.96 | 0.96 | 850 |
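Tables 3 and 4 follow the standard per-class report layout (precision, recall, F1-score, and support, plus accuracy, macro-average, and weighted-average rows). A report in this format can be produced with scikit-learn's classification_report; the labels and predictions below are placeholders, not our results.

```python
# Sketch producing a per-class report in the layout of Tables 3 and 4.
# The labels and predictions are placeholders, not the results reported here.
from sklearn.metrics import classification_report

class_names = ["Basal cell carcinoma", "Benign keratosis-like lesions",
               "Melanocytic nevi", "Melanoma"]
y_true = [0, 1, 2, 3, 0, 1, 2, 3, 3, 2]
y_pred = [0, 1, 2, 3, 0, 2, 2, 3, 3, 2]

# Prints precision, recall, F1-score, and support per class, followed by the
# accuracy, macro-average, and weighted-average rows shown in Table 3.
print(classification_report(y_true, y_pred, target_names=class_names))
```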
Table 4. Testing results for six categories.

| Class | Precision | Recall | F1-score | Support |
|---|---|---|---|---|
| Actinic keratosis | 0.74 | 0.77 | 0.75 | 100 |
| Basal cell carcinoma | 0.87 | 0.84 | 0.85 | 227 |
| Benign keratosis-like lesions | 0.85 | 0.85 | 0.85 | 208 |
| Melanocytic nevi | 0.94 | 0.97 | 0.96 | 210 |
| Melanoma | 1.00 | 1.00 | 1.00 | 207 |
| Squamous cell carcinoma | 0.69 | 0.54 | 0.61 | 76 |
| Accuracy | | | 0.89 | 1028 |
| Macro avg | 0.85 | 0.84 | 0.85 | 1028 |
| Weighted avg | 0.89 | 0.89 | 0.89 | 1028 |
Table 5. Comparative studies using EfficientNet models.

| Study | Year | Dataset | Model Used | Scope | Accuracy |
|---|---|---|---|---|---|
| Karthik et al. [33] | 2022 | DermNet NZ, Derm7Pt, DermatoWeb, Fitzpatrick17k | EfficientNetV2 with the Efficient Channel Attention block | Classification of 4 skin diseases: acne, AK, melanoma, psoriasis | 84.7% |
| Ali et al. [34] | 2022 | HAM10000 dermatoscopic images | EfficientNet variants (result reported for EfficientNet-B4, the best-performing variant) | Classification of 7 skin diseases | 87.9% |
| Rafay et al. [35] | 2023 | Manually curated from Atlas Dermatology & ISIC | Fine-tuned EfficientNet-B2 | Classification of 31 skin diseases | 87.15% |
| Venugopal et al. [36] | 2023 | ISIC 2019 | EfficientNetV2-M | Binary classification: malignant vs. benign | 95.49% |
| Venugopal et al. [36] | 2023 | ISIC 2019 | EfficientNet-B4 | Binary classification: malignant vs. benign | 93.17% |
| Harahap et al. [37] | 2024 | ISIC 2019 | EfficientNet-B0 to B7 (result reported for EfficientNet-B3) | Classification of 3 skin diseases: BCC, SCC, melanoma | 77.6% |
| Harahap et al. [37] | 2024 | ISIC 2019 | EfficientNet-B0 to B7 (result reported for EfficientNet-B4, the best-performing variant) | Classification of 3 skin diseases: BCC, SCC, melanoma | 79.69% |
| Proposed model | | ISIC 2019 & personal image collection | EfficientNetB3 | Classification of 4 skin diseases (benign & malignant) | 95.4% |
| Proposed model | | ISIC 2019 & personal image collection | EfficientNetB3 | Classification of 6 skin diseases (benign & malignant) | 88.8% |
Table 6. Comparative studies using state-of-the-art CNN models.

| Study | Year | Dataset | Model Used | Scope | Accuracy |
|---|---|---|---|---|---|
| Bazgir et al. [38] | 2024 | Kaggle/ISIC | Inception Network | Binary classification: malignant vs. benign | 85.94% |
| Rahman et al. [39] | 2024 | Kaggle/ISIC | NASNet | Binary classification: malignant vs. benign | 86.73% |
| Anand et al. [40] | 2022 | Kaggle/ISIC | Modified VGG16 architecture | Binary classification: malignant vs. benign | 89.09% |
| Singh et al. [41] | 2022 | ISIC 2018 | Bayesian DenseNet-169 | Classification of 7 skin diseases | 73.65% |
| Ahmed et al. [42] | 2024 | ISIC 2018 | SCCNet, derived from the Xception architecture | Classification of 7 skin diseases | 95.2% |
| Al-Rasheed et al. [43] | 2022 | HAM10000 | Ensemble of VGG16, ResNet50, and ResNet101 | Classification of 7 skin diseases | 93.5% |
| Naeem et al. [44] | 2024 | ISIC 2019 | SNC_Net | Classification of 8 skin diseases | 97.81% |
| Naeem et al. [45] | 2024 | ISIC 2019 | DVFNet | Classification of 8 skin diseases | 98.32% |
| Proposed model | | ISIC 2019 | EfficientNetB3 | Classification of 4 skin diseases | 95.4% |
| Proposed model | | ISIC 2019 | EfficientNetB3 | Classification of 6 skin diseases | 88.8% |