Submitted: 12 October 2024
Posted: 15 October 2024
Abstract
Keywords:
1. Introduction
Evaluating Artificial Intelligence Algorithms
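Across the studies tabulated in the following sections, performance is reported mainly as sensitivity (Sen), specificity (Spe), accuracy (Acc) and the area under the ROC curve (AUC) [14,15], with segmentation work additionally using the Dice coefficient [16] and the Jaccard index [17]. As a purely illustrative sketch (not code from any cited study), these metrics can be computed from a classifier's predicted probabilities as follows:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def binary_metrics(y_true, y_prob, threshold=0.5):
    """Sensitivity, specificity, accuracy and AUC for a binary classifier.

    y_true: ground-truth labels (1 = melanoma, 0 = benign)
    y_prob: predicted probabilities of the positive class
    """
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)

    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))

    return {
        "sensitivity": tp / (tp + fn),         # true-positive rate
        "specificity": tn / (tn + fp),         # true-negative rate
        "accuracy": (tp + tn) / len(y_true),
        "auc": roc_auc_score(y_true, y_prob),  # threshold-independent
    }

# Toy example with made-up labels and predictions
print(binary_metrics([1, 0, 1, 1, 0, 0], [0.9, 0.2, 0.7, 0.4, 0.1, 0.6]))
```

Reporting AUC alongside sensitivity and specificity matters because the latter two depend on the chosen probability threshold, whereas AUC summarizes performance across all thresholds.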
2. Artificial Intelligence in the Diagnosis of Melanoma
Utilization of Clinical Images
3. Dermoscopic Images
3.1. Distinguishing Melanoma from Benign Lesions
| Publication | End-point | Dataset | Algorithm | Performance |
|---|---|---|---|---|
| Masood et al. [35] | Classification (benign/melanoma) | 135 images (clinical + dermoscopic); 107 for training, 14 for validation, 14 for testing | Compared 3 ANN learning algorithms (RP, LM, SCG) | SCG: Acc: 91.9%, Sen: 92.6%, Spe: 91.4%; LM: Acc: 91.1%, Sen: 85.2%, Spe: 95.1%; RP: Acc: 88.1%, Sen: 77.8%, Spe: 95.1% |
| Aswin et al. [49] | Classification (cancerous/non-cancerous) | 30 dermoscopic images for training, 50 dermoscopic images for testing | Hybrid genetic algorithm + ANN | Acc: 88% |
| Xie et al. [50] | Classification (MM/BN) | Dermoscopic images; Xanthous race: 240 images (80 MM, 160 BN); Caucasian race: 360 images (120 MM, 240 BN) | Proposed meta-ensemble model of multiple neural network ensembles; ensemble 1: single-hidden-layer BP nets with the same structure; ensemble 2: single-hidden-layer BP nets and fuzzy nets; ensemble 3: double-hidden-layer BP nets with different structures | Xanthous race: Sen: 95%, Spe: 93.75%, Acc: 94.17%; Caucasian race: Sen: 83.33%, Spe: 95%, Acc: 91.11% |
| Marchetti et al. [36] | Classification (MM/BN) | ISBI 2016 challenge dataset [51]: 248 MM and 1,031 BN images; training set: 900 images; test set: 379 images; reader study: 100 images (50 MM, 50 BN) | Five methods (unlearned and machine learning) used to combine individual automated predictions into "fusion" algorithms | Top fusion algorithm (greedy fusion): Sen: 58%, Spe: 92%, AUC: 86%; dermatologists: Sen: 82%, Spe: 59%, AUC: 71% |
| Marchetti et al. [37] | Classification (MM/BN/SK) and (biopsy/observation) | ISIC Archive [52]: 2,750 dermoscopy images (521 (19%) MM, 1,843 (67%) BN, 386 (14%) SK); training set: 2,000 images; validation set: 150 images; test set: 600 images | ISBI 2017 Challenge top-ranked algorithm | Algorithm: Sen: 76%, Spe: 85%, AUC: 0.87; dermatologists: Sen: 76.0%, Spe: 72.6%, AUC: 0.74 |
| Cueva et al. [53] | Classification (cancerous/non-cancerous) | PH² database [54]; training set: 30 images (10 MM, 10 common moles, 10 non-common moles); test set: 201 images (80 common moles, 80 non-common moles, 41 MM) | ANN with backpropagation algorithm | Reported performance of 97.51% on the 201-image test set |
| Navarro et al. [55] | Segmentation and registration to evaluate lesion change | ISIC archive [52]; training set: 2,000 dermoscopic images; validation set: 150 dermoscopic images; test set: 600 dermoscopic images | Segmentation: LF-SLIC; registration: SP-SIFT | Segmentation Acc: 0.96 |
| Yu C. et al. [38] | Classification (melanoma/non-melanoma) | 724 images (350 AM, 374 BN) split into Group A (175 AM, 187 BN) and Group B (175 AM, 187 BN); two-fold cross-validation: the model tested on Group A was trained on Group B, and vice versa | CNN (VGG-16) | Group A: CNN Sen: 92.57, Spe: 75.39, Acc: 83.51; expert Sen: 94.88, Spe: 68.72, Acc: 81.08; non-expert Sen: 41.71, Spe: 91.28, Acc: 67.84. Group B: CNN Sen: 92.57, Spe: 68.16, Acc: 80.23; expert Sen: 98.29, Spe: 65.36, Acc: 81.64; non-expert Sen: 48.00, Spe: 77.10, Acc: 62.71 |
| Abbas et al. [39] | Classification (benign nevus/acral melanoma) | 724 images from Yonsei University [38] (350 acral melanoma, 374 benign nevi); 4,344 images after data augmentation (2,100 acral melanoma, 2,244 benign nevi) | Compared three CNN algorithms (seven-layered deep CNN, ResNet-18, AlexNet) | ResNet-18: Acc: 0.97, AUC: 0.97; AlexNet: Acc: 0.96, AUC: 0.96; proposed ConvNet: Acc: 0.91, AUC: 0.91 |
| Fink et al. [40] | Classification (benign/malignant) | Training set: >120,000 dermoscopic images and labels; test set: 72 images (36 combined naevi, 36 melanomas) | CNN (Moleanalyzer Pro) based on a GoogleNet Inception_v4 architecture | CNN: Sen: 97.1%, Spe: 78.8%; dermatologists: Sen: 90.6%, Spe: 71.0% |
| Phillips et al. [56] | Classification (MM/dysplastic nevi/other) | Pretrained algorithm; training set (in study): 289 images (36 melanoma lesions, 67 non-melanoma lesions, 186 control lesions); test set: 1,550 images | SkinAnalytics (CNN) | Algorithm: iPhone 6s images AUC: 95.8%, Spe: 78.1%; Galaxy S6 images AUC: 93.8%, Spe: 75.6%; DSLR images AUC: 91.8%, Spe: 45.5%. Specialists: AUC: 77.8%, Spe: 69.9% |
| Martin-Gonzalez et al. [57] | Classification (benign/malignant skin lesion) | Pretrained with 37,688 images from the ISIC Archive [52] (2019 and 2020); training set: 339 images (143 MM, 196 BN); test set: 232 images (55 MM, 177 BN) | QuantusSKIN (CNN) | AUC: 0.813, Sen: 0.691, Spe: 0.802, Acc: 0.776 |
| Brinker et al. [41] | Classification (melanoma/nevi) | Training set: 12,378 dermoscopic images from the ISIC dataset [52]; test set: 100 dermoscopic images (20 MM, 80 nevi) | ResNet-50 (CNN) | Algorithm: Sen: 74.1%, Spe: 86.5%; dermatologists: Sen: 74.1%, Spe: 60% |
| Giulini et al. [42] | Classification (melanoma/nevi) | Over 28,000 dermoscopic images; CNN test set: 2,489 images (344 melanomas, 2,155 nevi); physician test set: 100 images (50 MM, 50 nevi) | Session 1: physicians without CNN; session 2: physicians with CNN | Physicians without CNN: Sen: 56.31%, Spe: 69.28%; physicians with CNN: Sen: 67.88%, Spe: 73.72% |
| Ding et al. [58] | Classification (binary: melanoma/non-melanoma; multiclass: benign nevi, seborrheic keratosis or melanoma) | ISIC dataset [52]; training set: 2,000 images (374 MM, 254 SK, 1,372 BN); validation set: 150 images (30 MM, 42 SK, 78 BN); test set: 600 images (117 MM, 90 SK, 393 BN) | Segmentation: U-Net; classification: five CNNs (Inception-v3, ResNet-50, DenseNet169, Inception-ResNet-v2, Xception) with SE-blocks, plus an ensemble-learning neural network consisting of two locally connected layers and a softmax layer | Binary: Inception-v3 Acc: 0.885, AUC: 0.883; ResNet-50 Acc: 0.88, AUC: 0.882; DenseNet169 Acc: 0.893, AUC: 0.882; Inception-ResNet-v2 Acc: 0.89, AUC: 0.894; Xception Acc: 0.891, AUC: 0.896; ensemble Acc: 0.909, AUC: 0.911. Multiclass: Inception-v3 Acc: 0.792, AUC: 0.883; ResNet-50 Acc: 0.762, AUC: 0.864; DenseNet169 Acc: 0.800, AUC: 0.881; Inception-ResNet-v2 Acc: 0.800, AUC: 0.873; Xception Acc: 0.810, AUC: 0.896; ensemble Acc: 0.851, AUC: 0.913 |
| Yu L. et al. [59] | Segmentation and classification (benign/malignant) | ISIC dataset [52]; training set: 900 images; test set: 350 images | FCRN for skin lesion segmentation and a very deep residual network for classification | Segmentation: Sen: 0.911, Spe: 0.957, Acc: 0.949; classification with segmentation: Sen: 0.547, Spe: 0.931, Acc: 0.855 |
| Bisla et al. [60] | Classification (nevus, SK, MM) | Training set: ISIC dataset [52] (803 MM, 2,107 nevi, 288 SK), PH² dataset [54] (40 MM, 80 nevi), Edinburgh dataset [31] (76 MM, 331 nevi, 257 SK); test set: 600 ISIC images (117 MM, 90 SK, 393 nevi) | Segmentation: modified U-Net (CNN); augmentation: decoupled DCGANs; classification: ResNet-50 | AUC: 0.915, Acc: 81.6% |
| Mahbod et al. [43] | Classification (MM/all, SK/all) | ISIC dataset [52]; training: 2,037 dermoscopic images (411 MM, 254 SK, 1,372 BN) | Feature extraction: pretrained CNNs (AlexNet, ResNet-18 and VGG16); classification: SVM | AUC: 90.69 |
| Bassel et al. [61] | Classification (benign/malignant) | ISIC dataset [52]: 1,800 benign and 1,497 malignant images; training set: 70% of images (1,440 benign, 1,197 malignant); test set: 30% of images (360 benign, 300 malignant) | Feature extraction: ResNet-50 (model 1), VGG-16 (model 2), Xception (model 3); classification: stacked CV model (SVM+NN+RF+KNN) | ResNet model: Acc: 81.6%, AUC: 0.818; VGG-16 model: Acc: 86.5%, AUC: 0.843; Xception model: Acc: 90.9%, AUC: 0.917 |
| Ningrum et al. [44] | Classification (melanoma/benign) | ISIC dataset [52]: 900 images; training set: 720 images; validation set: 180 images; test set: 300 images (93 malignant, 207 non-malignant) | Classification: CNN model for images + ANN model for patient metadata | CNN: Acc: 73.69, AUC: 82.4; CNN+ANN: Acc: 92.34, AUC: 97.1 |
| Nambisan et al. [62] | Segmentation and classification (melanoma/benign) | ISIC dataset [52]; segmentation task: 487 MM images; classification task: 1,000 images (500 MM and 500 benign; 100 images per class from actinic keratosis, melanocytic nevus, benign keratosis, dermatofibroma, and vascular lesion) | Segmentation of irregular networks (classification dataset + segmentation dataset): U-Net/U-Net++/MA-Net/PA-Net; handcrafted feature extraction; classification: level 0 (without segmentation): DL classification model; level 1 (with segmentation and the level-0 model's results): conventional classification model | Conventional ensemble Acc: 0.793; DL ensemble Acc: 0.838; EfficientNet-B0 + conventional ensemble Acc: 0.862 |
| Collenne et al. [63] | Classification (melanoma/nevi) | ISIC dataset [52] (6,371 nevi, 1,301 melanomas); training set: 70% of images; validation set: 10%; test set: 20% | Segmentation: U-Net; classification: ANN (for asymmetry features) + CNN (EfficientNet) | Handcrafted model with asymmetry features (ANN): Acc: 79%, AUC: 0.87, Sen: 90%, Spe: 67%; ANN+CNN: Sen: 0.92, Spe: 0.82, Acc: 0.87, AUC: 0.942 |
| Hekler et al. [45] | Classification (melanoma/nevi) | HAM10000 [64] and BCN20000 [65] datasets: 29,562 images (7,794 melanomas, 21,768 nevi); 80% training, 20% validation; test set: SCP2 dataset, 293 melanomas and 363 melanocytic nevi from 617 patients | ConvNeXt architecture: (1) classification using a single image; (2) classification using multiple real-world images; (3) classification using multiple artificially modified images | Single-image approach: Acc: 0.905, ECE: 0.131; multiview real-world approach: Acc: 0.930, ECE: 0.072; multiview artificial approach: Acc: 0.929, ECE: 0.086 |
| Crawford et al. [46] | Classification (excision/no excision) | Self-referred patients | Moleanalyzer Pro | AI: Sen: 64.7%, Spe: 75.76%, PPV: 40.0%, NPV: 89.6%, Acc: 73.56% |
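Many of the dermoscopic classifiers summarized above (e.g., Brinker et al. [41] and the classification stage of Bisla et al. [60]) rely on fine-tuning an ImageNet-pretrained backbone such as ResNet-50 for a binary melanoma/nevus decision. The sketch below illustrates that general transfer-learning pattern with PyTorch/torchvision; the folder path, batch size and learning rate are illustrative placeholders rather than settings taken from any of the cited studies.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Standard ImageNet preprocessing applied to dermoscopic crops (illustrative choice)
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: dermoscopy/train/melanoma, dermoscopy/train/nevus
train_set = datasets.ImageFolder("dermoscopy/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Replace the ImageNet head with a 2-class head (melanoma vs. nevus)
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:   # one illustrative training epoch
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

In practice the studies above differ mainly in the backbone, the augmentation strategy, and how dermatologist readers are compared against the resulting model.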
3.2. Distinguishing Melanoma from Other Skin Cancers
| Publication | End-point | Dataset | Algorithm | Performance |
|---|---|---|---|---|
| Esteva et al. [66] | Classification. Binary: keratinocyte carcinoma/SK and melanoma/nevi. 3-way: benign/malignant/non-neoplastic. 9-way: cutaneous lymphoma and lymphoid infiltrates; benign dermal tumors, cysts, sinuses; malignant dermal tumors; benign epidermal tumors, hamartomas, milia, and growths; malignant and premalignant epidermal tumors; genodermatoses and supernumerary growths; inflammatory conditions; benign melanocytic lesions; malignant melanoma | ISIC [52], Edinburgh dataset [31] and the Stanford Hospital: 129,450 clinical images, including 3,374 dermoscopic images, of 757 disease classes; training set: 127,463 images; test set: 1,942 images | Google Inception v3 (CNN) | Binary classification (algorithm AUC): carcinoma 0.96; melanoma 0.94; melanoma (dermoscopic images) 0.91. 3-way classification: dermatologist 1 Acc: 65.6%; dermatologist 2 Acc: 66.0%; CNN Acc: 69.4 ± 0.8%; CNN partitioning algorithm Acc: 72.1 ± 0.9%. 9-way classification: dermatologist 1 Acc: 53.3%; dermatologist 2 Acc: 55.0%; CNN Acc: 48.9 ± 1.9%; CNN partitioning algorithm Acc: 55.4 ± 1.7% |
| Rezvantalab et al. [67] | Classification (MM/melanocytic nevi/BCC/AKIEC/benign keratosis/DF/vascular lesion) | HAM10000 dataset [64]: 10,015 dermatoscopic images (1,113 MM, 6,705 nevi, 514 BCC, 327 AK and intraepithelial carcinoma (AKIEC), 1,099 benign keratoses, 115 DF, 142 vascular lesions); PH² dataset [54]: 80 nevi, 40 MM; training set: 70%, validation set: 15%, test set: 15% | Compared CNNs for classification: Inception v3/InceptionResNet v2/ResNet 152/DenseNet 201 | AUC (melanoma): dermatologists 82.26; DenseNet 201: 93.80; ResNet 152: 94.40; Inception v3: 93.40; InceptionResNet v2: 93.20. AUC (BCC): dermatologists 88.82; DenseNet 201: 99.30; ResNet 152: 99.10; Inception v3: 98.60; InceptionResNet v2: 98.60 |
| Maron et al. [68] | Classification. Two-way: benign/malignant. Five-way: AKIEC/BCC/MM/nevi/BKL (benign keratosis, including seborrhoeic keratosis, solar lentigo and lichen planus-like keratosis) | Training set: 11,444 images (ISIC Archive [52] and HAM10000 dataset [64]); test set: 300 images (60 for each of the five disease classes) from the HAM10000 dataset | CNN (ResNet50) | Two-way classification: CNN AUC: 0.928, CNN Spe: 91.3%, dermatologist Spe: 59.8%. Five-way classification: CNN AUC: 0.960, CNN Spe: 89.2%, dermatologist Spe: 98.8% |
| Tschandl et al. [69] | Classification (benign/malignant) | Training set: 7,895 dermoscopic and 5,829 close-up images; test set: 2,072 dermoscopic and close-up images | Combined convolutional neural network (cCNN) (InceptionResNetV2, InceptionV3, Xception, ResNet50) | cCNN: AUC: 0.695, Sen: 80.5%, Spe: 53.5%; human raters: AUC: 0.742, Sen: 77.6%, Spe: 51.3% |
| Tschandl et al. [70] | Classification (7-way: intraepithelial carcinoma including AK and Bowen’s disease; BCC; benign keratinocytic lesions including solar lentigo, SK, and LPLK; dermatofibroma; melanoma; melanocytic nevi; and vascular lesions) | Training set: 10,015 dermoscopic images; test set: 1,195 images | Top 3 algorithms of the ISIC 2018 challenge [73] | Algorithms (mean): Sen: 81.9%, Spe: 96.2%; human readers (mean): Sen: 67.8%, Spe: 94.0% |
| Haenssle et al. [71] | Classification (benign/malignant); management decision (treatment/excision, no action, follow-up examination) | Pretrained CNN; test set: 100 images including pigmented/non-pigmented and melanocytic/non-melanocytic skin lesions | Inception v4 / Moleanalyzer Pro (CNN) | CNN management decision: Sen: 95.0%, Spe: 76.7%, Acc: 84.0%, AUC: 0.918; CNN diagnosis (benign/malignant): Sen: 95.0%, Spe: 76.7%, Acc: 84.0%. Level 1 dermatologists: management decision Sen: 89.0%, Spe: 80.7%, Acc: 84.0%; diagnosis (benign/malignant) Sen: 83.8%, Spe: 77.6%, Acc: 80.1%. Level 2 dermatologists: management decision Sen: 94.1%, Spe: 80.4%, Acc: 85.9%; diagnosis (benign/malignant) Sen: 90.6%, Spe: 82.4%, Acc: 85.7% |
| Hekler et al. [72] | Primary end-point: classification into 5 categories (MM/nevus/BCC/AK, Bowen’s disease or squamous cell carcinoma/seborrhoeic keratosis, lentigo solaris or lichen ruber planus); secondary end-point: binary classification (benign/malignant) | Training set: 12,336 dermoscopic images (585 AK/Bowen’s/SCC, 910 BCC, 3,101 seborrhoeic keratosis/lentigo solaris/lichen ruber planus, 4,219 nevi, 3,521 MM) | CNN (ResNet50) | Multiclass classification: physicians Acc: 42.94%; CNN Acc: 81.59%; physicians+CNN Acc: 82.95%. Binary classification: physicians Sen: 66%, Spe: 62%; CNN Sen: 86.1%, Spe: 89.2%; physicians+CNN Sen: 89%, Spe: 84% |
| Lu et al. [74] | Classification (normal, carcinoma, and melanoma) | HAM10000 dataset [64]; training set: 8,012 images (80%); test set: 2,003 images (20%) | Proposed modified Xception (ReLU activation replaced with the swish activation function), compared with VGG16, InceptionV3, AlexNet and Xception | VGG16: Acc: 48.99, Sen: 53.7; InceptionV3: Acc: 52.99, Sen: 53.99; AlexNet: Acc: 75.99, Sen: 76.99; Xception: Acc: 92.90, Sen: 91.99; proposed Xception: Acc: 100, Sen: 94.05 |
| Mengistu et al. [75] | Classification (BCC, SCC, MM) | 235 images (162 for training, 73 for testing) | Combined SOM and RBFNN, compared with KNN, ANN, and naïve Bayes | Proposed model Acc: 93.15%; KNN Acc: 71.23%; ANN Acc: 63.01%; naïve Bayes Acc: 56.16% |
| Rashid et al. [76] | Classification (MM/melanocytic nevus/BCC/AKIEC/benign keratosis/DF/vascular lesion) | ISIC dataset [52]; training set: 8,000 images; test set: 2,000 images | GAN-based augmentation compared with CNNs (DenseNet and ResNet-50) | GAN: Acc: 0.861; DenseNet: Acc: 0.815; ResNet-50: Acc: 0.792 |
| Alwakid et al. [77] | Classification (MM/BN/BCC/vascular lesion/benign keratosis/actinic carcinoma/DF) | HAM10000 dataset [64]: 10,015 dermatoscopic images; training set: 8,029 images; validation set: 993 images; test set: 993 images | Inception-V3, InceptionResNet-V2 | Inception-V3: Acc: 0.897, Spe: 0.89, Sen: 0.90; InceptionResNet-V2: Acc: 0.913, Spe: 0.90, Sen: 0.91 |
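Several of the studies above improve on any single network by combining models, either by averaging the top-ranked ISIC algorithms (Tschandl et al. [70]) or by learning an ensemble over the softmax outputs of five CNNs (Ding et al. [58]). A minimal sketch of probability-level ensembling is shown below; the three prediction matrices are invented toy values, not results from those papers.

```python
import numpy as np

def ensemble_softmax(prob_list, weights=None):
    """Average class-probability matrices (n_samples x n_classes) from several models."""
    probs = np.stack(prob_list)                  # shape: (n_models, n_samples, n_classes)
    if weights is None:
        weights = np.ones(len(prob_list)) / len(prob_list)
    return np.tensordot(weights, probs, axes=1)  # weighted mean over the model axis

# Hypothetical per-model predictions for 2 lesions and 3 classes (MM, SK, BN)
p_inception = np.array([[0.70, 0.20, 0.10], [0.10, 0.30, 0.60]])
p_resnet    = np.array([[0.60, 0.25, 0.15], [0.20, 0.20, 0.60]])
p_xception  = np.array([[0.80, 0.10, 0.10], [0.05, 0.35, 0.60]])

fused = ensemble_softmax([p_inception, p_resnet, p_xception])
print(fused.argmax(axis=1))   # ensemble class decision per lesion
```

More elaborate schemes, such as the locally connected fusion layers of Ding et al. [58], simply learn these weights from a validation set instead of fixing them uniformly.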
4. In Vivo Skin Imaging Devices
4.1. RCM
4.2. Optical Coherence Tomography (OCT) and OCT-like Devices
5. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- Narayanan, D.L.; Saladi, R.N.; Fox, J.L. Ultraviolet radiation and skin cancer. International journal of dermatology 2010, 49, 978–986. [Google Scholar] [CrossRef] [PubMed]
- American Cancer Society. What Is Melanoma Skin Cancer? 2023. [Google Scholar]
- American Cancer Society. Key Statistics for Melanoma Skin Cancer. 2023. [Google Scholar]
- Rastrelli, M.; Tropea, S.; Rossi, C.R.; Alaibac, M. Melanoma: epidemiology, risk factors, pathogenesis, diagnosis and classification. In Vivo 2014, 28, 1005–1011. [Google Scholar] [PubMed]
- Jones, S.; Henry, V.; Strong, E.; Sheriff, S.A.; Wanat, K.; Kasprzak, J.; Clark, M.; Shukla, M.; Zenga, J.; Stadler, M.; et al. Clinical Impact and Accuracy of Shave Biopsy for Initial Diagnosis of Cutaneous Melanoma. J Surg Res 2023, 286, 35–40. [Google Scholar] [CrossRef]
- Alam, M.; Lee, A.; Ibrahimi, O.A.; Kim, N.; Bordeaux, J.; Chen, K.; Dinehart, S.; Goldberg, D.J.; Hanke, C.W.; Hruza, G.J.; et al. A multistep approach to improving biopsy site identification in dermatology: physician, staff, and patient roles based on a Delphi consensus. JAMA Dermatol 2014, 150, 550–558. [Google Scholar] [CrossRef]
- St John, J.; Walker, J.; Goldberg, D.; Maloney, M.E. Avoiding Medical Errors in Cutaneous Site Identification: A Best Practices Review. Dermatol Surg 2016, 42, 477–484. [Google Scholar] [CrossRef]
- Dubois, A.; Levecq, O.; Azimani, H.; Siret, D.; Barut, A.; Suppa, M.; Del Marmol, V.; Malvehy, J.; Cinotti, E.; Rubegni, P.; et al. Line-field confocal optical coherence tomography for high-resolution noninvasive imaging of skin tumors. J Biomed Opt 2018, 23, 1–9. [Google Scholar] [CrossRef]
- Cinotti, E.; Couzan, C.; Perrot, J.L.; Habougit, C.; Labeille, B.; Cambazard, F.; Moscarella, E.; Kyrgidis, A.; Argenziano, G.; Pellacani, G.; et al. In vivo confocal microscopic substrate of grey colour in melanosis. J Eur Acad Dermatol Venereol 2015, 29, 2458–2462. [Google Scholar] [CrossRef]
- Jones, O.T.; Matin, R.N.; van der Schaar, M.; Prathivadi Bhayankaram, K.; Ranmuthu, C.K.I.; Islam, M.S.; Behiyat, D.; Boscott, R.; Calanzani, N.; Emery, J.; et al. Artificial intelligence and machine learning algorithms for early detection of skin cancer in community and primary care settings: a systematic review. Lancet Digit Health 2022, 4, e466–e476. [Google Scholar] [CrossRef]
- Chu, Y.S.; An, H.G.; Oh, B.H.; Yang, S. Artificial Intelligence in Cutaneous Oncology. Frontiers in Medicine 2020, 7. [Google Scholar] [CrossRef] [PubMed]
- Hogarty, D.T.; Su, J.C.; Phan, K.; Attia, M.; Hossny, M.; Nahavandi, S.; Lenane, P.; Moloney, F.J.; Yazdabadi, A. Artificial Intelligence in Dermatology-Where We Are and the Way to the Future: A Review. Am J Clin Dermatol 2020, 21, 41–47. [Google Scholar] [CrossRef] [PubMed]
- Patel, S.; Wang, J.V.; Motaparthi, K.; Lee, J.B. Artificial intelligence in dermatology for the clinician. Clin Dermatol 2021, 39, 667–672. [Google Scholar] [CrossRef] [PubMed]
- Bradley, A.P. The use of the area under the ROC curve in the evaluation of machine learning algorithms. Pattern Recognition 1997, 30, 1145–1159. [Google Scholar] [CrossRef]
- Nahm, F.S. Receiver operating characteristic curve: overview and practical use for clinicians. Korean J Anesthesiol 2022, 75, 25–36. [Google Scholar] [CrossRef]
- Dice, L.R. Measures of the amount of ecologic association between species. Ecology 1945, 26, 297–302. [Google Scholar] [CrossRef]
- Jaccard, P. The distribution of the flora in the alpine zone. 1. New phytologist 1912, 11, 37–50. [Google Scholar] [CrossRef]
- Duarte, A.F.; Sousa-Pinto, B.; Azevedo, L.F.; Barros, A.M.; Puig, S.; Malvehy, J.; Haneke, E.; Correia, O. Clinical ABCDE rule for early melanoma detection. European Journal of Dermatology 2021, 31, 771–778. [Google Scholar] [CrossRef]
- Nasr-Esfahani, E.; Samavi, S.; Karimi, N.; Soroushmehr, S.M.R.; Jafari, M.H.; Ward, K.; Najarian, K. Melanoma detection by analysis of clinical images using convolutional neural network. In Proceedings of the 2016 38th annual international conference of the IEEE engineering in medicine and biology society (EMBC); 2016; pp. 1373–1376. [Google Scholar]
- Yap, J.; Yolland, W.; Tschandl, P. Multimodal skin lesion classification using deep learning. Experimental dermatology 2018, 27, 1261–1267. [Google Scholar] [CrossRef]
- Esfahani, P.R.; Mazboudi, P.; Reddy, A.J.; Farasat, V.P.; Guirgus, M.E.; Tak, N.; Min, M.; Arakji, G.H.; Patel, R. Leveraging machine learning for accurate detection and diagnosis of melanoma and nevi: an interdisciplinary study in dermatology. Cureus 2023, 15. [Google Scholar] [CrossRef]
- Dorj, U.-O.; Lee, K.-K.; Choi, J.-Y.; Lee, M. The skin cancer classification using deep convolutional neural network. Multimedia Tools and Applications 2018, 77, 9909–9924. [Google Scholar] [CrossRef]
- Soenksen, L.R.; Kassis, T.; Conover, S.T.; Marti-Fuster, B.; Birkenfeld, J.S.; Tucker-Schwartz, J.; Naseem, A.; Stavert, R.R.; Kim, C.C.; Senna, M.M. Using deep learning for dermatologist-level detection of suspicious pigmented skin lesions from wide-field images. Science Translational Medicine 2021, 13, eabb3652. [Google Scholar] [CrossRef] [PubMed]
- Pomponiu, V.; Nejati, H.; Cheung, N.-M. Deepmole: Deep neural networks for skin mole lesion classification. In Proceedings of the 2016 IEEE international conference on image processing (ICIP); 2016; pp. 2623–2627. [Google Scholar]
- Han, S.S.; Kim, M.S.; Lim, W.; Park, G.H.; Park, I.; Chang, S.E. Classification of the clinical images for benign and malignant cutaneous tumors using a deep learning algorithm. Journal of Investigative Dermatology 2018, 138, 1529–1538. [Google Scholar] [CrossRef] [PubMed]
- Liu, Y.; Jain, A.; Eng, C.; Way, D.H.; Lee, K.; Bui, P.; Kanada, K.; de Oliveira Marinho, G.; Gallegos, J.; Gabriele, S. A deep learning system for differential diagnosis of skin diseases. Nature medicine 2020, 26, 900–908. [Google Scholar] [CrossRef] [PubMed]
- Sangers, T.; Reeder, S.; van der Vet, S.; Jhingoer, S.; Mooyaart, A.; Siegel, D.M.; Nijsten, T.; Wakkee, M. Validation of a market-approved artificial intelligence mobile health app for skin cancer screening: a prospective multicenter diagnostic accuracy study. Dermatology 2022, 238, 649–656. [Google Scholar] [CrossRef]
- Potluru, A.; Arora, A.; Arora, A.; Joiya, S.A. Automated Machine Learning (AutoML) for the Diagnosis of Melanoma Skin Lesions From Consumer-Grade Camera Photos. Cureus 2024, 16. [Google Scholar] [CrossRef]
- Asan and Hallym Dataset (Thumbnails). 2017. [CrossRef]
- Giotis, I.; Molders, N.; Land, S.; Biehl, M.; Jonkman, M.F.; Petkov, N. MED-NODE: A computer-assisted melanoma diagnosis system using non-dermoscopic images. Expert systems with applications 2015, 42, 6578–6585. [Google Scholar] [CrossRef]
- Ballerini, L.; Fisher, R.B.; Aldridge, B.; Rees, J. A color and texture based hierarchical K-NN approach to the classification of non-melanoma skin lesions. Color medical image analysis 2013, 63–86. [Google Scholar]
- DermIS.
- Boer, A.; Nischal, K. www.derm101.com: A growing online resource for learning dermatology and dermatopathology. Indian Journal of Dermatology, Venereology and Leprology 2007, 73, 138. [Google Scholar] [CrossRef]
- Kato, J.; Horimoto, K.; Sato, S.; Minowa, T.; Uhara, H. Dermoscopy of melanoma and non-melanoma skin cancers. Frontiers in medicine 2019, 6, 180. [Google Scholar] [CrossRef] [PubMed]
- Masood, A.; Al-Jumaily, A.A.; Adnan, T. Development of automated diagnostic system for skin cancer: Performance analysis of neural network learning algorithms for classification. In Proceedings of the Artificial Neural Networks and Machine Learning–ICANN 2014: 24th International Conference on Artificial Neural Networks, Hamburg, Germany, 15-19 September 2014; Proceedings 24. pp. 837–844. [Google Scholar]
- Marchetti, M.A.; Codella, N.C.; Dusza, S.W.; Gutman, D.A.; Helba, B.; Kalloo, A.; Mishra, N.; Carrera, C.; Celebi, M.E.; DeFazio, J.L. Results of the 2016 international skin imaging collaboration isbi challenge: Comparison of the accuracy of computer algorithms to dermatologists for the diagnosis of melanoma from dermoscopic images. Journal of the American Academy of Dermatology 2018, 78, 270. [Google Scholar] [CrossRef] [PubMed]
- Marchetti, M.A.; Liopyris, K.; Dusza, S.W.; Codella, N.C.; Gutman, D.A.; Helba, B.; Kalloo, A.; Halpern, A.C.; Soyer, H.P.; Curiel-Lewandrowski, C. Computer algorithms show potential for improving dermatologists' accuracy to diagnose cutaneous melanoma: Results of the International Skin Imaging Collaboration 2017. Journal of the American Academy of Dermatology 2020, 82, 622–627. [Google Scholar] [CrossRef]
- Yu, C.; Yang, S.; Kim, W.; Jung, J.; Chung, K.-Y.; Lee, S.W.; Oh, B. Acral melanoma detection using a convolutional neural network for dermoscopy images. PloS one 2018, 13, e0193321. [Google Scholar]
- Abbas, Q.; Ramzan, F.; Ghani, M.U. Acral melanoma detection using dermoscopic images and convolutional neural networks. Vis Comput Ind Biomed Art 2021, 4, 25. [Google Scholar] [CrossRef]
- Fink, C.; Blum, A.; Buhl, T.; Mitteldorf, C.; Hofmann-Wellenhof, R.; Deinlein, T.; Stolz, W.; Trennheuser, L.; Cussigh, C.; Deltgen, D. Diagnostic performance of a deep learning convolutional neural network in the differentiation of combined naevi and melanomas. Journal of the European Academy of Dermatology and Venereology 2020, 34, 1355–1361. [Google Scholar] [CrossRef]
- Brinker, T.J.; Hekler, A.; Enk, A.H.; Klode, J.; Hauschild, A.; Berking, C.; Schilling, B.; Haferkamp, S.; Schadendorf, D.; Holland-Letz, T. Deep learning outperformed 136 of 157 dermatologists in a head-to-head dermoscopic melanoma image classification task. European Journal of Cancer 2019, 113, 47–54. [Google Scholar] [CrossRef]
- Giulini, M.; Goldust, M.; Grabbe, S.; Ludwigs, C.; Seliger, D.; Karagaiah, P.; Schepler, H.; Butsch, F.; Weidenthaler-Barth, B.; Rietz, S. Combining artificial intelligence and human expertise for more accurate dermoscopic melanoma diagnosis: A 2-session retrospective reader study. Journal of the American Academy of Dermatology 2024, 90, 1266–1268. [Google Scholar] [CrossRef]
- Mahbod, A.; Schaefer, G.; Wang, C.; Ecker, R.; Ellinge, I. Skin lesion classification using hybrid deep neural networks. In Proceedings of the ICASSP 2019-2019 IEEE international conference on acoustics, speech and signal processing (ICASSP), 2019; pp. 1229–1233.
- Ningrum, D.N.A.; Yuan, S.-P.; Kung, W.-M.; Wu, C.-C.; Tzeng, I.-S.; Huang, C.-Y.; Li, J.Y.-C.; Wang, Y.-C. Deep learning classifier with patient’s metadata of dermoscopic images in malignant melanoma detection. Journal of multidisciplinary healthcare 2021, 877–885. [Google Scholar] [CrossRef]
- Hekler, A.; Maron, R.C.; Haggenmüller, S.; Schmitt, M.; Wies, C.; Utikal, J.S.; Meier, F.; Hobelsberger, S.; Gellrich, F.F.; Sergon, M. Using multiple real-world dermoscopic photographs of one lesion improves melanoma classification via deep learning. Journal of the American Academy of Dermatology 2024, 90, 1028–1031. [Google Scholar] [CrossRef]
- Crawford, M.E.; Kamali, K.; Dorey, R.A.; MacIntyre, O.C.; Cleminson, K.; MacGillivary, M.L.; Green, P.J.; Langley, R.G.; Purdy, K.S.; DeCoste, R.C. Using artificial intelligence as a melanoma screening tool in self-referred patients. Journal of Cutaneous Medicine and Surgery 2024, 28, 37–43. [Google Scholar] [CrossRef] [PubMed]
- Chanda, T.; Hauser, K.; Hobelsberger, S.; Bucher, T.-C.; Garcia, C.N.; Wies, C.; Kittler, H.; Tschandl, P.; Navarrete-Dechent, C.; Podlipnik, S. Dermatologist-like explainable AI enhances trust and confidence in diagnosing melanoma. Nature Communications 2024, 15, 524. [Google Scholar] [CrossRef] [PubMed]
- Correia, M.; Bissoto, A.; Santiago, C.; Barata, C. XAI for Skin Cancer Detection with Prototypes and Non-Expert Supervision. arXiv preprint 2024, arXiv:2402.01410. [Google Scholar]
- Aswin, R.; Jaleel, J.A.; Salim, S. Hybrid genetic algorithm—Artificial neural network classifier for skin cancer detection. In Proceedings of the 2014 International Conference on Control, Instrumentation, Communication and Computational Technologies (ICCICCT), 2017; pp. 1304–1309. [Google Scholar]
- Xie, F.; Fan, H.; Li, Y.; Jiang, Z.; Meng, R.; Bovik, A. Melanoma Classification on Dermoscopy Images Using a Neural Network Ensemble Model. IEEE Trans Med Imaging 2017, 36, 849–858. [Google Scholar] [CrossRef] [PubMed]
- Gutman, D.; Codella, N.C.; Celebi, E.; Helba, B.; Marchetti, M.; Mishra, N.; Halpern, A. Skin lesion analysis toward melanoma detection: A challenge at the international symposium on biomedical imaging (ISBI) 2016, hosted by the international skin imaging collaboration (ISIC). arXiv preprint 2016, arXiv:1605.01397. [Google Scholar]
- ISIC Archive.
- Cueva, W.F.; Muñoz, F.; Vásquez, G.; Delgado, G. Detection of skin cancer "Melanoma" through computer vision. In Proceedings of the 2017 IEEE XXIV International Conference on Electronics, Electrical Engineering and Computing (INTERCON), 2017; pp. 1–4.
- Mendonca, T.; Ferreira, P.M.; Marques, J.S.; Marcal, A.R.; Rozeira, J. PH² - a dermoscopic image database for research and benchmarking. Annu Int Conf IEEE Eng Med Biol Soc 2013, 2013, 5437–5440. [Google Scholar] [CrossRef] [PubMed]
- Navarro, F.; Escudero-Vinolo, M.; Bescós, J. Accurate segmentation and registration of skin lesion images to evaluate lesion change. IEEE journal of biomedical and health informatics 2018, 23, 501–508. [Google Scholar] [CrossRef]
- Phillips, M.; Marsden, H.; Jaffe, W.; Matin, R.N.; Wali, G.N.; Greenhalgh, J.; McGrath, E.; James, R.; Ladoyanni, E.; Bewley, A. Assessment of accuracy of an artificial intelligence algorithm to detect melanoma in images of skin lesions. JAMA network open 2019, 2, e1913436–e1913436. [Google Scholar] [CrossRef]
- Martin-Gonzalez, M.; Azcarraga, C.; Martin-Gil, A.; Carpena-Torres, C.; Jaen, P. Efficacy of a deep learning convolutional neural network system for melanoma diagnosis in a hospital population. International Journal of Environmental Research and Public Health 2022, 19, 3892. [Google Scholar] [CrossRef]
- Ding, J.; Song, J.; Li, J.; Tang, J.; Guo, F. Two-stage deep neural network via ensemble learning for melanoma classification. Frontiers in Bioengineering and Biotechnology 2022, 9, 758495. [Google Scholar] [CrossRef]
- Yu, L.; Chen, H.; Dou, Q.; Qin, J.; Heng, P.-A. Automated melanoma recognition in dermoscopy images via very deep residual networks. IEEE transactions on medical imaging 2016, 36, 994–1004. [Google Scholar] [CrossRef] [PubMed]
- Bisla, D.; Choromanska, A.; Berman, R.S.; Stein, J.A.; Polsky, D. Towards automated melanoma detection with deep learning: Data purification and augmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops; 2019. [Google Scholar]
- Bassel, A.; Abdulkareem, A.B.; Alyasseri, Z.A.A.; Sani, N.S.; Mohammed, H.J. Automatic malignant and benign skin cancer classification using a hybrid deep learning approach. Diagnostics 2022, 12, 2472. [Google Scholar] [CrossRef]
- Nambisan, A.K.; Maurya, A.; Lama, N.; Phan, T.; Patel, G.; Miller, K.; Lama, B.; Hagerty, J.; Stanley, R.; Stoecker, W.V. Improving Automatic Melanoma Diagnosis Using Deep Learning-Based Segmentation of Irregular Networks. Cancers (Basel) 2023, 15. [Google Scholar] [CrossRef]
- Collenne, J.; Monnier, J.; Iguernaissi, R.; Nawaf, M.; Richard, M.A.; Grob, J.J.; Gaudy-Marqueste, C.; Dubuisson, S.; Merad, D. Fusion between an Algorithm Based on the Characterization of Melanocytic Lesions' Asymmetry with an Ensemble of Convolutional Neural Networks for Melanoma Detection. J Invest Dermatol 2024, 144, 1600–1607. [Google Scholar] [CrossRef] [PubMed]
- Tschandl, P. The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. 2018. [Google Scholar] [CrossRef]
- Hernández-Pérez, C.; Combalia, M.; Podlipnik, S.; Codella, N.C.F.; Rotemberg, V.; Halpern, A.C.; Reiter, O.; Carrera, C.; Barreiro, A.; Helba, B.; et al. BCN20000: Dermoscopic Lesions in the Wild. Scientific Data 2024, 11, 641. [Google Scholar] [CrossRef] [PubMed]
- Esteva, A.; Kuprel, B.; Novoa, R.A.; Ko, J.; Swetter, S.M.; Blau, H.M.; Thrun, S. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017, 542, 115–118. [Google Scholar] [CrossRef]
- Rezvantalab, A.; Safigholi, H.; Karimijeshni, S. Dermatologist level dermoscopy skin cancer classification using different deep learning convolutional neural networks algorithms. arXiv preprint 2018, arXiv:1810.10348. [Google Scholar]
- Maron, R.C.; Weichenthal, M.; Utikal, J.S.; Hekler, A.; Berking, C.; Hauschild, A.; Enk, A.H.; Haferkamp, S.; Klode, J.; Schadendorf, D.; et al. Systematic outperformance of 112 dermatologists in multiclass skin cancer image classification by convolutional neural networks. Eur J Cancer 2019, 119, 57–65. [Google Scholar] [CrossRef]
- Tschandl, P.; Rosendahl, C.; Akay, B.N.; Argenziano, G.; Blum, A.; Braun, R.P.; Cabo, H.; Gourhant, J.Y.; Kreusch, J.; Lallas, A.; et al. Expert-Level Diagnosis of Nonpigmented Skin Cancer by Combined Convolutional Neural Networks. JAMA Dermatol 2019, 155, 58–65. [Google Scholar] [CrossRef]
- Tschandl, P.; Codella, N.; Akay, B.N.; Argenziano, G.; Braun, R.P.; Cabo, H.; Gutman, D.; Halpern, A.; Helba, B.; Hofmann-Wellenhof, R.; et al. Comparison of the accuracy of human readers versus machine-learning algorithms for pigmented skin lesion classification: an open, web-based, international, diagnostic study. Lancet Oncol 2019, 20, 938–947. [Google Scholar] [CrossRef] [PubMed]
- Haenssle, H.A.; Fink, C.; Toberer, F.; Winkler, J.; Stolz, W.; Deinlein, T.; Hofmann-Wellenhof, R.; Lallas, A.; Emmert, S.; Buhl, T.; et al. Man against machine reloaded: performance of a market-approved convolutional neural network in classifying a broad spectrum of skin lesions in comparison with 96 dermatologists working under less artificial conditions. Ann Oncol 2020, 31, 137–143. [Google Scholar] [CrossRef] [PubMed]
- Hekler, A.; Utikal, J.S.; Enk, A.H.; Hauschild, A.; Weichenthal, M.; Maron, R.C.; Berking, C.; Haferkamp, S.; Klode, J.; Schadendorf, D.; et al. Superior skin cancer classification by the combination of human and artificial intelligence. Eur J Cancer 2019, 120, 114–121. [Google Scholar] [CrossRef] [PubMed]
- Codella, N.; Rotemberg, V.; Tschandl, P.; Celebi, M.E.; Dusza, S.; Gutman, D.; Helba, B.; Kalloo, A.; Liopyris, K.; Marchetti, M. Skin lesion analysis toward melanoma detection 2018: A challenge hosted by the international skin imaging collaboration (isic). arXiv preprint 2019, arXiv:1902.03368. [Google Scholar]
- Lu, X.; Firoozeh Abolhasani Zadeh, Y.A. Deep Learning-Based Classification for Melanoma Detection Using XceptionNet. J Healthc Eng 2022, 2022, 2196096. [Google Scholar] [CrossRef]
- Mengistu, A.D.; Alemayehu, D.M. Computer vision for skin cancer diagnosis and recognition using RBF and SOM. International Journal of Image Processing (IJIP) 2015, 9, 311–319. [Google Scholar]
- Rashid, H.; Tanveer, M.A.; Khan, H.A. Skin lesion classification using GAN based data augmentation. In Proceedings of the 2019 41St annual international conference of the IEEE engineering in medicine and biology society (EMBC); 2019; pp. 916–919. [Google Scholar]
- Alwakid, G.; Gouda, W.; Humayun, M.; Jhanjhi, N.Z. Diagnosing Melanomas in Dermoscopy Images Using Deep Learning. Diagnostics 2023, 13. [Google Scholar] [CrossRef]
- Maier, K.; Zaniolo, L.; Marques, O. Image quality issues in teledermatology: A comparative analysis of artificial intelligence solutions. J Am Acad Dermatol 2022, 87, 240–242. [Google Scholar] [CrossRef]
- Winkler, J.K.; Fink, C.; Toberer, F.; Enk, A.; Deinlein, T.; Hofmann-Wellenhof, R.; Thomas, L.; Lallas, A.; Blum, A.; Stolz, W.; et al. Association Between Surgical Skin Markings in Dermoscopic Images and Diagnostic Performance of a Deep Learning Convolutional Neural Network for Melanoma Recognition. JAMA Dermatol 2019, 155, 1135–1141. [Google Scholar] [CrossRef]
- Sies, K.; Winkler, J.K.; Fink, C.; Bardehle, F.; Toberer, F.; Kommoss, F.K.F.; Buhl, T.; Enk, A.; Rosenberger, A.; Haenssle, H.A. Dark corner artefact and diagnostic performance of a market-approved neural network for skin cancer classification. J Dtsch Dermatol Ges 2021, 19, 842–850. [Google Scholar] [CrossRef]
- Sultana, N.N.; Puhan, N.B. Recent deep learning methods for melanoma detection: a review. In Proceedings of the Mathematics and Computing: 4th International Conference, ICMC 2018, Varanasi, India, 9-11 January 2018; Revised Selected Papers. pp. 118–132. [Google Scholar]
- Jafari, M.H.; Karimi, N.; Nasr-Esfahani, E.; Samavi, S.; Soroushmehr, S.M.R.; Ward, K.; Najarian, K. Skin lesion segmentation in clinical images using deep learning. In Proceedings of the 2016 23rd International conference on pattern recognition (ICPR); 2016; pp. 337–342. [Google Scholar]
- Atak, M.F.; Farabi, B.; Navarrete-Dechent, C.; Rubinstein, G.; Rajadhyaksha, M.; Jain, M. Confocal microscopy for diagnosis and management of cutaneous malignancies: clinical impacts and innovation. Diagnostics 2023, 13, 854. [Google Scholar] [CrossRef] [PubMed]
- Kose, K.; Bozkurt, A.; Alessi-Fox, C.; Brooks, D.H.; Dy, J.G.; Rajadhyaksha, M.; Gill, M. Utilizing machine learning for image quality assessment for reflectance confocal microscopy. Journal of Investigative Dermatology 2020, 140, 1214–1222. [Google Scholar] [CrossRef] [PubMed]
- Gerger, A.; Wiltgen, M.; Langsenlehner, U.; Richtig, E.; Horn, M.; Weger, W.; Ahlgrimm-Siess, V.; Hofmann-Wellenhof, R.; Samonigg, H.; Smolle, J. Diagnostic image analysis of malignant melanoma in in vivo confocal laser-scanning microscopy: a preliminary study. Skin Research and Technology 2008, 14, 359–363. [Google Scholar] [CrossRef]
- Koller, S.; Wiltgen, M.; Ahlgrimm-Siess, V.; Weger, W.; Hofmann-Wellenhof, R.; Richtig, E.; Smolle, J.; Gerger, A. In vivo reflectance confocal microscopy: automated diagnostic image analysis of melanocytic skin tumours. Journal of the European Academy of Dermatology and Venereology 2011, 25, 554–558. [Google Scholar] [CrossRef]
- Wodzinski, M.; Skalski, A.; Witkowski, A.; Pellacani, G.; Ludzik, J. Convolutional neural network approach to classify skin lesions using reflectance confocal microscopy. In Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC); 2019; pp. 4754–4757. [Google Scholar]
- Kose, K.; Bozkurt, A.; Alessi-Fox, C.; Gill, M.; Longo, C.; Pellacani, G.; Dy, J.G.; Brooks, D.H.; Rajadhyaksha, M. Segmentation of cellular patterns in confocal images of melanocytic lesions in vivo via a multiscale encoder-decoder network (MED-Net). Medical image analysis 2021, 67, 101841. [Google Scholar] [CrossRef]
- D’Alonzo, M.; Bozkurt, A.; Alessi-Fox, C.; Gill, M.; Brooks, D.H.; Rajadhyaksha, M.; Kose, K.; Dy, J.G. Semantic segmentation of reflectance confocal microscopy mosaics of pigmented lesions using weak labels. Scientific Reports 2021, 11, 3679. [Google Scholar] [CrossRef]
- Herbert, S.; Valon, L.; Mancini, L.; Dray, N.; Caldarelli, P.; Gros, J.; Esposito, E.; Shorte, S.L.; Bally-Cuif, L.; Aulner, N. LocalZProjector and DeProj: a toolbox for local 2D projection and accurate morphometrics of large 3D microscopy images. BMC biology 2021, 19, 1–13. [Google Scholar] [CrossRef] [PubMed]
- Mandal, A.; Priyam, S.; Chan, H.H.; Gouveia, B.M.; Guitera, P.; Song, Y.; Baker, M.A.B.; Vafaee, F. Computer-aided diagnosis of melanoma subtypes using reflectance confocal images. Cancers 2023, 15, 1428. [Google Scholar] [CrossRef]
- Gambichler, T.; Jaedicke, V.; Terras, S. Optical coherence tomography in dermatology: technical and clinical aspects. Archives of dermatological research 2011, 303, 457–473. [Google Scholar] [CrossRef]
- Sattler, E.; Kästle, R.; Welzel, J. Optical coherence tomography in dermatology. Journal of biomedical optics 2013, 18, 061224–061224. [Google Scholar] [CrossRef]
- Chou, H.-Y.; Huang, S.-L.; Tjiu, J.-W.; Chen, H.H. Dermal epidermal junction detection for full-field optical coherence tomography data of human skin by deep learning. Computerized Medical Imaging and Graphics 2021, 87, 101833. [Google Scholar] [CrossRef] [PubMed]
- Silver, F.H.; Mesica, A.; Gonzalez-Mercedes, M.; Deshmukh, T. Identification of Cancerous Skin Lesions Using Vibrational Optical Coherence Tomography (VOCT): Use of VOCT in Conjunction with Machine Learning to Diagnose Skin Cancer Remotely Using Telemedicine. Cancers 2022, 15, 156. [Google Scholar] [CrossRef] [PubMed]
- Lee, J.; Beirami, M.J.; Ebrahimpour, R.; Puyana, C.; Tsoukas, M.; Avanaki, K. Optical coherence tomography confirms non-malignant pigmented lesions in phacomatosis pigmentokeratotica using a support vector machine learning algorithm. Skin Research and Technology 2023, 29, e13377. [Google Scholar] [CrossRef] [PubMed]
- You, C.; Yi, J.-Y.; Hsu, T.-W.; Huang, S.-L. Integration of cellular-resolution optical coherence tomography and Raman spectroscopy for discrimination of skin cancer cells with machine learning. Journal of Biomedical Optics 2023, 28, 096005–096005. [Google Scholar] [CrossRef] [PubMed]
| Publication | End-point | Dataset | Algorithm | Performance |
|---|---|---|---|---|
| Nasr-Esfahani et al. [19] | Classification (benign/melanoma) | 170 clinical images that underwent data augmentation to generate 6120 images (80% training, 20% validation) | CNN with 2 convolutional layers each followed by pooling layers along with a fully connected layer | Acc: 81% Spe: 80% Sen: 81% NPV: 86% PPV: 86% |
| Yap et al. [20] | Classification of melanoma from 5 different types of lesions | 2917 cases with each case containing patient metadata, macroscopic image and dermoscopic images with 5 classes (naevus, melanoma, BCC, SCC, and pigmented benign keratoses) | ResNet-50 with embedding networks | Macroscopic images alone AUC: .791 Macroscopic and dermoscopy AUC: .866 Macroscopic, dermoscopy and metadata AUC: .861 |
| Riazi Esfahani et al. [21] | Classification (malignant melanoma/benign nevi) | 793 images (437 malignant melanoma and 357 benign nevi) | CNN | Acc: 88.6% Spe: 88.6% Sen: 81.8% |
| Dorj et al.[22] | Classification of melanoma from 4 different skin cancers (actinic keratoses, BCC, SCC, melanoma) | 3753 images (2985 training and 758 testing) including 958 melanoma | AlexNet with ECOC-SVM classifier | Acc: .942 Spe: .9074 Sen: .9783 |
| Soenksen et al. [23] | Classification across 6 different classes as well as distinguishing SPLs | 33,980 (including backgrounds, skin edges, bare skin sections, low priority NSPLs, medium priority NSPLs and SPLs) (60% training, 20% validation and 20% as testing) | DCNN with VGG16 Image Net pretrained network as transfer learning | Across all 6 classes AUCmicro: .97 Spemicro: .903 Senmicro:.899 For SPLs AUC:.935 |
| Pomponiu et al. [24] | Classification (melanoma/benign nevi) | 399 images (217 benign, 182 melanoma) from online image libraries | CNN with a KNN classifier | Acc: .83 Spe: .95 Sen: .92 |
| Han et al. [25] | Melanoma detection from 12 different skin diseases | Training: 19,938 images from the Asan dataset [29], MED-NODE dataset [30], and atlas site images; testing: 480 images from the Asan and Edinburgh datasets [31] | ResNet-152 | Asan: AUC: .96, Spe: .904, Sen: .91; Edinburgh: AUC: .88, Spe: .855, Sen: .807 |
| Liu et al. [26] | Primary: classification among 26 different skin conditions; secondary: classification among a full set of 419 different skin conditions | Training: 64,837 images with metadata; validation set A: 14,833 images with metadata; validation set B (used to compare to dermatologists): 3,707 images with metadata | DLS with Inception-v4 modules and a shallow module | Validation set A (26-class): Acc top-1: .71, Acc top-3: .93, Sen top-1: .58, Sen top-3: .83; validation set B (26-class): Acc top-1: .66, Acc top-3: .9, Sen top-1: .56, Sen top-3: .64; dermatologists: Acc top-1: .63, Acc top-3: .75, Sen top-1: .51, Sen top-3: .49 |
| Sangers et al. [27] | Classification (low/high risk) | 785 images (418 suspicious, 367 benign) | RD-174 | Overall app classification Sen: .869 Spe:.704 Classification for melanocytic lesions: Sen: .819 Spe: .733 |
| Potluru et al. [28] | Classification (non-melanoma/melanoma) | 206 images from DermIS [32] and DermQuest [33] (87 non-melanoma and 119 melanoma; 85% used for training and 15% for testing) | AutoML model built with a no-code online service platform | Acc: .844 Sen: .833 Spe: .857 |
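Two of the clinical-image studies above, Yap et al. [20] and Liu et al. [26], combine image features with patient metadata, and Ningrum et al. [44] do the same for dermoscopy. A minimal late-fusion sketch in PyTorch is given below; the metadata fields, embedding sizes and classifier head are illustrative assumptions, not the architectures published in those studies.

```python
import torch
import torch.nn as nn
from torchvision import models

class ImageMetadataFusion(nn.Module):
    """Concatenate CNN image features with a small metadata embedding, then classify."""

    def __init__(self, n_metadata=3, n_classes=2):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])  # drop the FC head
        self.meta = nn.Sequential(nn.Linear(n_metadata, 32), nn.ReLU())
        self.head = nn.Linear(2048 + 32, n_classes)

    def forward(self, image, metadata):
        img_feat = self.cnn(image).flatten(1)   # (batch, 2048) pooled image features
        meta_feat = self.meta(metadata)         # (batch, 32) metadata embedding
        return self.head(torch.cat([img_feat, meta_feat], dim=1))

# Toy forward pass: one 224x224 image plus hypothetical [age, sex, site] metadata
model = ImageMetadataFusion()
logits = model(torch.randn(1, 3, 224, 224), torch.tensor([[55.0, 1.0, 2.0]]))
print(logits.shape)   # torch.Size([1, 2])
```

The reported gains from adding metadata (e.g., the CNN vs. CNN+ANN results of Ningrum et al. [44]) come from exactly this kind of fusion: the image branch and the metadata branch are trained jointly on a shared classification loss.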
| Publication | End-point | Dataset | Algorithm | Performance |
|---|---|---|---|---|
| Kose et al. [84] | Segmentation; detection of artifacts | 117 RCM mosaics | MED-Net, an automated semantic segmentation method | Sen: 82%, Spe: 93% |
| Gerger et al. [85] | Classification; benign nevi vs melanoma | 408 benign nevi and 449 melanoma images | CART (Classification and Regression Trees) | Learning set: 97.31% of images correctly classified; test set: 81.03% of images correctly classified |
| Koller et al. [86] | Classification; benign nevi vs melanoma | 4,669 melanoma and 11,600 benign nevi RCM images | CART (Classification and Regression Trees) | Learning set: 93.60% of the melanoma and 90.40% of the nevi images correctly classified |
| Wodzinski et al. [87] | Classification; benign nevi vs melanoma vs BCC | 429 RCM mosaics | a CNN based on ResNet architecture | F1 score for melanoma in test set: 0.84 ± 0.03 |
| Kose et al. [88] | Segmentation; six distinct patterns (aspecific, non-lesion, artifact, ring, nested, meshwork) | 117 RCM mosaics | An automated semantic segmentation method, MED-Net | Pixel-wise mean sensitivity: 70 ± 11%; pixel-wise mean specificity: 95 ± 2%; Dice coefficient: 0.71 ± 0.09 over six classes |
| D’Alonzo et al. [89] | Segmentation; “benign” and “aspecific (nonspecific)” regions | 157 RCM mosaics | EfficientNet, a deep neural network (DNN) | AUC: 0.969; Dice coefficient: 0.778 |
| Mandal et al. [91] | Classification; Atypical intraepidermal melanocytic proliferation (AIMP) vs Lentigo Maligna (LM) | 517 RCM stacks (389 LM and 148 AIMP) from 110 patients | DenseNet169, a CNN classifier. | Accuracy: 0.80 F1 score for LM: 0.87 |
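The RCM segmentation studies above (Kose et al. [84,88], D’Alonzo et al. [89]) report pixel-wise sensitivity and specificity together with the Dice coefficient [16]. As a small illustrative sketch (toy masks, not RCM data), Dice and the closely related Jaccard index [17] can be computed from binary segmentation masks as follows:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A intersect B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def jaccard_index(pred, target, eps=1e-7):
    """Jaccard (IoU) = |A intersect B| / |A union B|; related to Dice by J = D / (2 - D)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)

# Toy 4x4 masks standing in for a predicted and a ground-truth segmentation
pred = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
gt   = np.array([[0, 1, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
print(round(dice_coefficient(pred, gt), 3), round(jaccard_index(pred, gt), 3))
```

Unlike pixel-wise accuracy, both measures ignore the large true-negative background, which is why they are the standard overlap metrics for lesion and pattern segmentation.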
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
